Using this function in my C# exe, I try to pass a Unicode string to my C++ DLL:
[DllImport("Test.dll", CharSet = CharSet.Unicode, CallingConvention = CallingConvention.StdCall)]
public static extern int xSetTestString(StringBuilder xmlSettings);
This is the function on the C++ DLL side:
__declspec(dllexport) int xSetTestString(char* pSettingsXML);
Before calling the function from C#, I do a MessageBox.Show(string) and it displays all characters properly. On the C++ side I do OutputDebugStringW((wchar_t*)pString);, but the output shows that the non-ASCII characters were replaced by ‘?’.
Just change your export in the native DLL so that it takes a wchar_t* instead of a char*:
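__declspec(dllexport) int xSetTestString(wchar_t* pSettingsXML);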
This will do the trick.
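For example, a minimal sketch of what the native side could then look like (the body and the return value here are only illustrative, not taken from your code):

#include <windows.h>

__declspec(dllexport) int xSetTestString(wchar_t* pSettingsXML)
{
    // The received pointer is already a proper UTF-16 string, so it can be
    // passed to wide-character APIs directly, with no cast or conversion.
    OutputDebugStringW(pSettingsXML);
    return 0;
}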
BTW – you can't simply do
char* str1 = (wchar_t*)pSettingsXML;
because the cast does not convert the string. You would need wcstombs_s to convert from wchar_t* to char*, but in your case you don't have to do it.
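If you ever did need the narrow copy, it would look roughly like this (the helper name and parameters are just for illustration):

#include <cstdlib>   // wcstombs_s, _TRUNCATE

void NarrowCopy(const wchar_t* wide, char* narrow, size_t narrowSizeInBytes)
{
    size_t converted = 0;
    // Converts as many characters as fit into the destination buffer
    // and NUL-terminates the result.
    wcstombs_s(&converted, narrow, narrowSizeInBytes, wide, _TRUNCATE);
}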
Notes: Best practice IMO is to use TCHAR* instead of wchar_t* directly, and set your native DLL project's General option "Character Set" to "Use Unicode Character Set". This defines TCHAR* as wchar_t*. Microsoft natively uses two sets of functions: ANSI, using 1-byte characters, marked as FunctionNameA, and Unicode, using 2-byte characters, marked as FunctionNameW. This Unicode is in fact UTF-16.
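For illustration, assuming a Windows project built with the Unicode character set:

#include <windows.h>
#include <tchar.h>

// With UNICODE/_UNICODE defined, TCHAR is wchar_t, _T("...") produces a wide
// literal, and the generic MessageBox name resolves to MessageBoxW.
void ShowExample()
{
    const TCHAR* text = _T("Hello");
    MessageBox(nullptr, text, _T("Example"), MB_OK);
}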
UTF-8 is a multi-byte encoding that uses 1 byte for ASCII characters and 2 to 4 bytes for other characters. To convert UTF-8 to UTF-16 you can use the MultiByteToWideChar function.
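A rough sketch of such a conversion (the helper name is mine, not part of any API):

#include <windows.h>
#include <string>

// Converts a UTF-8 encoded std::string to a UTF-16 std::wstring.
std::wstring Utf8ToUtf16(const std::string& utf8)
{
    if (utf8.empty())
        return std::wstring();

    // First call: ask how many wide characters the result needs.
    int needed = MultiByteToWideChar(CP_UTF8, 0, utf8.data(), (int)utf8.size(), nullptr, 0);

    std::wstring result(needed, L'\0');
    // Second call: perform the actual conversion into the buffer.
    MultiByteToWideChar(CP_UTF8, 0, utf8.data(), (int)utf8.size(), &result[0], needed);
    return result;
}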