Solving the Unicode, UTF8, UTF16 and Text Files conundrum in VBA

Understanding Unicode variants like UTF8 and UTF16 and how they impact your Office VBA development is not so straightforward. This post will walk you through the experience of reading a text file with VBA and explain some of the pitfalls you may encounter along the way when dealing with different text encodings and file formats. We’ll also shed some light on essential Unicode concepts you may have preferred to leave aside until now, because – let’s face it – who wants to spend hours reading Wikipedia or MSDN just to read a text file, or to understand the many rules and APIs for converting between encodings?

No bulky and verbose .NET or undecipherable C++ code complications here.
Just immediately actionable, simple and humble VBA code, with one function to rule them all, and a 10-to-15-minute read to understand it all.

Let’s start the experiment right now. Try something: open Notepad, type a couple of lines containing accented characters (an “é” or two will do), and save the file with the default settings.

Now we’ll try to read it and display it in Visual Basic, line by line, as usual:
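
Something along these lines should do (a minimal sketch; the file path is just an example, adjust it to wherever you saved your file):

```vba
' Read a text file line by line and print each line to the Immediate window
Public Sub ReadTextFileByLine(ByVal sFilePath As String)
    Dim iFile As Integer
    Dim sLine As String

    iFile = FreeFile
    Open sFilePath For Input As #iFile
    Do While Not EOF(iFile)
        Line Input #iFile, sLine
        Debug.Print sLine
    Loop
    Close #iFile
End Sub

Public Sub Test_ReadTextFileByLine()
    ' Illustrative path: point it to the file you just created in Notepad
    ReadTextFileByLine "C:\Temp\test.txt"
End Sub
```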

Executing the “Test_ReadTextFileByLine” Sub (in the debug window) from this simple code snippet should do it…

…or not (!). The accented characters don’t display correctly: each “é” comes out as the two characters “Ã©”.

Unicode, UTF8 and UTF16

Let’s state some facts before banging our heads on that:

- Unicode assigns a numeric code point to every character; UTF8 and UTF16 are two different ways of encoding those code points as bytes.
- VBA strings are stored internally as UTF16, two bytes per character.
- UTF8 stores plain ASCII characters as a single byte and everything else as sequences of 2 to 4 bytes (“é”, code point U+00E9, becomes the two bytes C3 A9).
- The classic VBA file statements (Open, Line Input…) know nothing about UTF8: they assume the text is in the system ANSI code page.
- On recent Windows 10 versions, Notepad saves new files as UTF8 (without BOM) by default.

Back to reading our file

At this point, we can guess that our old friend Notepad (on my Windows 10 en_US setup) probably stored our text file using a UTF8 encoding, which VBA is not aware of. Let’s take a look at the bytes in the file:
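
The repository has helpers for this kind of inspection; a quick-and-dirty dump routine (my own illustration, not necessarily the one used here) could look like this:

```vba
' Dump a file's raw bytes as hex, 16 bytes per line, offsets in hex
Public Sub DumpFileBytes(ByVal sFilePath As String)
    Dim iFile As Integer
    Dim abBytes() As Byte
    Dim i As Long

    iFile = FreeFile
    Open sFilePath For Binary Access Read As #iFile
    If LOF(iFile) > 0 Then
        ReDim abBytes(0 To LOF(iFile) - 1)
        Get #iFile, 1, abBytes
        For i = 0 To UBound(abBytes)
            If i Mod 16 = 0 Then
                Debug.Print                    ' start a new dump line
                Debug.Print Hex$(i); ": ";     ' offset of the line, in hex
            End If
            Debug.Print Right$("0" & Hex$(abBytes(i)), 2); " ";
        Next i
        Debug.Print
    End If
    Close #iFile
End Sub
```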

We can see, in the dump lines at offsets 0 and 30, that each accented “é” is encoded as the two bytes C3 A9 – the UTF8 encoding of “é” (U+00E9) – so this is a UTF8 file.

So, at some point, we’ll have to convert a UTF8 representation of a string into a VBA-friendly UTF16 one.

Unfortunately, VBA cannot help here, so let’s take a detour to our trustworthy Win32 API.

Converting from UTF8 to UTF16 with the Win32 API

You’ll find all the code in the demo database of my reading_text_files github repository.

The function we’ll need is MultiByteToWideChar(), which we can declare as:
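
Here is the VBA7 version (for older, pre-2010 hosts, drop PtrSafe and replace LongPtr with Long):

```vba
' Code page constant for UTF8 conversions
Private Const CP_UTF8 As Long = 65001

Private Declare PtrSafe Function MultiByteToWideChar Lib "kernel32" ( _
    ByVal CodePage As Long, _
    ByVal dwFlags As Long, _
    ByVal lpMultiByteStr As LongPtr, _
    ByVal cbMultiByte As Long, _
    ByVal lpWideCharStr As LongPtr, _
    ByVal cchWideChar As Long) As Long
```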

We’re going to have two kinds of source variables to convert to UTF16 – byte arrays and strings – so these are the signatures of the VBA functions we’ll use:
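
The function names below are the ones used throughout this post; their bodies are a minimal sketch of how they can be built on top of MultiByteToWideChar() (the repository versions may differ in details and error handling):

```vba
' Decode a UTF8 byte array into a native VBA (UTF16) string
Public Function UTF8DecodeByteArrayToString(abBytes() As Byte) As String
    Dim lBytes As Long
    Dim lChars As Long

    lBytes = UBound(abBytes) - LBound(abBytes) + 1
    If lBytes <= 0 Then Exit Function

    ' First call: how many UTF16 characters will we get?
    lChars = MultiByteToWideChar(CP_UTF8, 0, _
                                 VarPtr(abBytes(LBound(abBytes))), lBytes, 0, 0)
    If lChars = 0 Then Exit Function

    ' Second call: convert into a preallocated VBA string
    UTF8DecodeByteArrayToString = String$(lChars, vbNullChar)
    MultiByteToWideChar CP_UTF8, 0, _
                        VarPtr(abBytes(LBound(abBytes))), lBytes, _
                        StrPtr(UTF8DecodeByteArrayToString), lChars
End Function

' Decode a string whose in-memory bytes hold UTF8 data (for instance a buffer
' filled by an API call) into a proper VBA (UTF16) string
Public Function UTF8DecodeString(ByVal sUTF8 As String) As String
    Dim lChars As Long

    If LenB(sUTF8) = 0 Then Exit Function
    lChars = MultiByteToWideChar(CP_UTF8, 0, StrPtr(sUTF8), LenB(sUTF8), 0, 0)
    If lChars = 0 Then Exit Function
    UTF8DecodeString = String$(lChars, vbNullChar)
    MultiByteToWideChar CP_UTF8, 0, StrPtr(sUTF8), LenB(sUTF8), _
                        StrPtr(UTF8DecodeString), lChars
End Function
```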

Unicode normalization

There’s more than one way to represent a combination of characters in Unicode (MSDN). Extract:

Capital A with dieresis (umlaut) can be represented either as a single Unicode code point “Ä” (U+00C4) or the combination of Capital A and the combining Dieresis character (“A” + “¨”, that is, U+0041 U+0308). Similar considerations apply for many other characters with diacritic marks.

Simply put, a problem arises if we compare two Unicode strings that are conceptually the same but use different code points (as in the example above).

There are two more Win32 API functions that can help with that. One, NormalizeString(), transforms a Unicode string to a standard form, so it can be compared with another, even if the representations are different. The other, IsNormalizedString(), tests if a Unicode string is in the expected form.
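
Both live in Normaliz.dll and can be declared like this for VBA7 (again, pre-2010 hosts would drop PtrSafe and use Long instead of LongPtr):

```vba
Private Declare PtrSafe Function NormalizeString Lib "Normaliz" ( _
    ByVal NormForm As Long, _
    ByVal lpSrcString As LongPtr, _
    ByVal cwSrcLength As Long, _
    ByVal lpDstString As LongPtr, _
    ByVal cwDstLength As Long) As Long

Private Declare PtrSafe Function IsNormalizedString Lib "Normaliz" ( _
    ByVal NormForm As Long, _
    ByVal lpString As LongPtr, _
    ByVal cwLength As Long) As Long
```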

There are a number of standard forms, but the one we mainly care about is the one that “compresses” the code points into a single character (we get the precomposed “Ä” instead of “A” followed by the combining dieresis): it is “NormalizationC”, value 1, from the Win32 NORM_FORM enumeration (a C++ enum declared in winnls.h).
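
A VBA mirror of that C++ enum could look like this (values as documented by Microsoft):

```vba
' Win32 NORM_FORM values (winnls.h), mirrored as a VBA Enum
Public Enum NORM_FORM
    NormalizationOther = 0
    NormalizationC = &H1     ' precomposed characters (NFC)
    NormalizationD = &H2     ' fully decomposed characters (NFD)
    NormalizationKC = &H5    ' compatibility composition (NFKC)
    NormalizationKD = &H6    ' compatibility decomposition (NFKD)
End Enum
```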

Normalization is an optional step, but for security reasons it should be used.
I wrapped the API (following the MSDN guidance) in two VBA functions, plus two others to retrieve error information:
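
The real wrappers and their error helpers are in the repository; here is a minimal sketch of the normalization wrapper, using the usual two-call pattern (ask for the size, then convert). The error helper name below is my own illustration, and MSDN’s full guidance (retrying with a larger buffer when the first estimate turns out too small) is left out of this sketch:

```vba
' Illustrative module-level error info (the repository uses its own helpers)
Private m_lLastWin32Error As Long

Public Function UcGetLastWin32Error() As Long
    UcGetLastWin32Error = m_lLastWin32Error
End Function

' Normalize a UTF16 VBA string to the requested form (NormalizationC by default)
Public Function UcNormalizeString(ByVal sStr As String, _
        Optional ByVal eForm As NORM_FORM = NormalizationC) As String
    Dim lLen As Long
    Dim sBuffer As String

    m_lLastWin32Error = 0
    If Len(sStr) = 0 Then Exit Function

    ' First call: ask for the required size of the destination buffer
    lLen = NormalizeString(eForm, StrPtr(sStr), Len(sStr), 0, 0)
    If lLen <= 0 Then m_lLastWin32Error = Err.LastDllError: Exit Function

    ' Second call: do the actual normalization into the preallocated buffer
    sBuffer = String$(lLen, vbNullChar)
    lLen = NormalizeString(eForm, StrPtr(sStr), Len(sStr), StrPtr(sBuffer), lLen)
    If lLen <= 0 Then m_lLastWin32Error = Err.LastDllError: Exit Function

    UcNormalizeString = Left$(sBuffer, lLen)
End Function
```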

Note:

I’m not following my coding guidelines for keeping error information inside a module, because we can get either an error from the Win32 API calls or a “logical” error when using the VBA wrappers.

So, to check whether something went wrong when calling UcNormalizeString(), we have to test like this:
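
With the illustrative helpers from the sketch above (the repository’s error functions may be named differently), the check could look something like this:

```vba
    Dim sNormalized As String

    ' sSource holds the string we want to normalize
    sNormalized = UcNormalizeString(sSource)
    If UcGetLastWin32Error() <> 0 Then
        Debug.Print "Normalization failed, Win32 error: " & UcGetLastWin32Error()
    End If
```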

You can see a test scenario, loosely translated from the ones in MSDN, in the Test_Normalization() Sub.

Back to reading our file – again

Ok, now we know for sure that our file is in UTF8.
And we know that we have a nice UTF8DecodeString() at our disposal.

Are we not tempted to make this slight adaptation to our ReadTextFileByLine() function? (note the UTF8DecodeString() call in this code):
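
Something like this (same sketch as before, with the decoding call added):

```vba
Public Sub ReadTextFileByLine(ByVal sFilePath As String)
    Dim iFile As Integer
    Dim sLine As String

    iFile = FreeFile
    Open sFilePath For Input As #iFile
    Do While Not EOF(iFile)
        Line Input #iFile, sLine
        ' Decode the (supposedly) UTF8 line before printing it
        Debug.Print UTF8DecodeString(sLine)
    Loop
    Close #iFile
End Sub
```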

The result:

Whaaaat? Let’s debug that using the provided DumpStringBytes() function:
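
The repository’s DumpStringBytes() prints the in-memory bytes of a VBA string; a minimal equivalent (details may differ) is shown below, called on sLine right after the Line Input:

```vba
' Print the raw in-memory bytes of a VBA string as hex
Public Sub DumpStringBytes(ByVal sStr As String)
    Dim abBytes() As Byte
    Dim i As Long

    If LenB(sStr) = 0 Then Exit Sub
    abBytes = sStr                       ' copies the string's UTF16 bytes
    For i = LBound(abBytes) To UBound(abBytes)
        Debug.Print Right$("0" & Hex$(abBytes(i)), 2); " ";
    Next i
    Debug.Print
End Sub
```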

Comparing that dump with the file’s hex dump shows the problem: where the file had the two bytes C3 A9 for each “é”, the sLine variable now holds the four bytes C3 00 A9 00. We do not have a UTF8 string in the variable sLine that is read by VBA from the file.

VBA converts each line it reads from the file into a double-byte (UTF16) string, using the system ANSI code page.

So we cannot use VBA string variables to read a UTF8 encoded text file.

Solution for reading and converting a UTF8 text file

We have to open the file in binary mode and read its contents into a byte array; this way VBA doesn’t do any conversion. We then just decode the UTF8 byte array into a VBA string with UTF8DecodeByteArrayToString().
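
For instance (the function name here is just for the example; the repository has its own variants):

```vba
' Read the whole file as raw bytes (no conversion by VBA), then decode
' the UTF8 bytes into a native VBA string
Public Function ReadUTF8TextFile(ByVal sFilePath As String) As String
    Dim iFile As Integer
    Dim abBytes() As Byte

    iFile = FreeFile
    Open sFilePath For Binary Access Read As #iFile
    If LOF(iFile) > 0 Then
        ReDim abBytes(0 To LOF(iFile) - 1)
        Get #iFile, 1, abBytes
        ReadUTF8TextFile = UTF8DecodeByteArrayToString(abBytes)
    End If
    Close #iFile
End Function
```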

And finally, we get it right:

Other text file encodings and BOMs

Notepad saves files in UTF8, but there are other text encodings and file formats out there.

UTF8 and UTF16 text files may or may not start with a special series of bytes called the BOM (Byte Order Mark). The BOM is a magic number that we can use to infer the file encoding and the byte endianness (order of bytes) of the file contents: EF BB BF for UTF8, FF FE for UTF16 little-endian, FE FF for UTF16 big-endian.

Without a BOM, guessing the file encoding can be tough.
But when there is one, we can use it to make the necessary conversions, like in the GetFileText() function below, which uses the BOM (when present) to pick the right decoding.

The function’s signature, with a bit of documentation, is in the MTextFiles module of the Reading_Text_Files.accdb project.
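
To give an idea of the logic, here is a sketch of the BOM sniffing that drives the decision (the helper name and return values are my own; only the BOM byte values themselves matter):

```vba
' Detect the encoding of a file's contents from its BOM, if any.
' abBytes is assumed to be a 0-based array filled by a binary Get.
Public Function DetectEncodingFromBOM(abBytes() As Byte) As String
    DetectEncodingFromBOM = "no BOM"        ' ANSI or UTF8 without BOM

    If UBound(abBytes) >= 2 Then
        If abBytes(0) = &HEF And abBytes(1) = &HBB And abBytes(2) = &HBF Then
            DetectEncodingFromBOM = "UTF8 with BOM"
            Exit Function
        End If
    End If
    If UBound(abBytes) >= 1 Then
        If abBytes(0) = &HFF And abBytes(1) = &HFE Then
            DetectEncodingFromBOM = "UTF16 LE with BOM"
        ElseIf abBytes(0) = &HFE And abBytes(1) = &HFF Then
            DetectEncodingFromBOM = "UTF16 BE with BOM"
        End If
    End If
End Function
```

Once the encoding is known, the rest is conversion: UTF8 bytes go through UTF8DecodeByteArrayToString(), UTF16 LE bytes can be assigned to a VBA String as-is, and UTF16 BE bytes just need their byte pairs swapped first.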

There’s a “text_files_samples” directory in the github repository, with one file for each supported text file encoding.
Note that there are no BOM-less UTF16 files, as I used Notepad++ to generate the samples and Notepad++ has no option to save UTF16 files without a BOM.

The Test_ReadSampleEncodings() procedure will read and check the contents of each file with this GetFileText() function:
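
A simplified sketch of such a test (the repository’s version checks against the expected contents; the directory path below is a placeholder for your local clone):

```vba
Public Sub Test_ReadSampleEncodings()
    Dim sDir As String
    Dim sFile As String

    sDir = "C:\path\to\text_files_samples\"      ' adjust to your local clone
    sFile = Dir(sDir & "*.txt")
    Do While Len(sFile) > 0
        Debug.Print sFile & " -> " & GetFileText(sDir & sFile)
        sFile = Dir()
    Loop
End Sub
```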

Conclusion

We’ve seen different representations of text and encodings like UTF8 and UTF16, and we’re now able to convert between them. We also know how to read text from files in some of the most common formats we may encounter, all from VBA.

From here, it should be quite easy to also write any of these formats (opening the files in binary mode helps).

Downloads

Head to the Reading_Text_Files github repository to get the source code, the example files and the Access demo database.

(MIT Licence)