Understanding Unicode variants like UTF8 and UTF16, and how they impact your Office VBA development, is not so straightforward. This post will walk you through the experience of reading a text file with VBA and explain some of the pitfalls you may encounter along the way when dealing with different text encodings and file formats. We'll also shed some light on essential Unicode concepts you've preferred to leave aside until now, because – let's face it – who wants to spend hours reading Wikipedia or MSDN just to read a text file, or to understand the many rules and APIs for converting between encodings?
No bulky and verbose .NET or undecipherable C++ code complications here.
Just immediately actionable, simple and humble VBA code, with one function to rule them all, and a 10-to-15-minute read to understand it all.
Let’s start the experience right now. Try something:
- Open Windows notepad and copy/paste (or type) this text:
Fancy a café ? Or a piña colada ? – Oh, that’s so cliché!
I have a strong impression of Déjà vu.
You hide your true motive behind a friendly façade.
(Just my lame attempt at composing words with diacritics in English. Inspiration found here and here.)
- Save the file; let’s say in c:\temp\textfiles\notepad_text.txt
Now we’ll try to read it and display it in Visual Basic, line by line, as usual:
```vba
Private Const TEST_FILE1 As String = "c:\temp\textfiles\notepad_text.txt"

Public Sub ReadTextFileByLine(ByVal psFileName As String)
    Dim hFile As Integer
    Dim sLine As String     'Current line read from the file
    Dim iLineCt As Integer  'Line counter

    Debug.Print "---- ReadTextFileByLine()"
    hFile = FreeFile
    Open psFileName For Input Access Read As #hFile
    Debug.Print "[File: " & psFileName & "]"
    While Not EOF(hFile)
        iLineCt = iLineCt + 1
        Line Input #hFile, sLine
        Debug.Print iLineCt & ":" & sLine
    Wend
    Close hFile
    Debug.Print "[EOF]"
End Sub

Public Sub Test_ReadTextFileByLine()
    ReadTextFileByLine TEST_FILE1
End Sub
```
Executing the “Test_ReadTextFileByLine” Sub (in the debug window) from this simple code snippet should do it…
…or not (!). The accented characters don’t display correctly.
Unicode, UTF8 and UTF16
Let’s state some facts before banging our heads on that:
- Two forms of Unicode will be of interest here: UTF8 and UTF16.
- “Windows is Unicode”, UTF16 Unicode. So is VBA. Unicode is a big character set meant to represent the character glyphs of many different languages.
- Unicode (UTF16) encodes most characters with two bytes (a “wide” character, and by extension “wide” strings).
(Note: UCS2 is history; assume UCS2 (or UCS-2) means UTF16.)
- The representation of a character in Unicode is also called a code point.
- UTF8: not all the characters in the Unicode character set really need two bytes of encoding. UTF8 is sort of a packed representation of a series of Unicode characters, where one to four bytes are used to represent a character.
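To make this concrete, here's a quick way to inspect how VBA (UTF16) sees a character. This is just an illustrative sketch (the Sub name and the choice of “é” are mine, not from the repository):

```vba
Public Sub ShowCodePoint()
    Dim sChar As String
    sChar = ChrW$(&HE9)     'U+00E9, LATIN SMALL LETTER E WITH ACUTE ("é")

    'In VBA (UTF16), "é" is the single 16-bit code unit 00E9:
    Debug.Print "UTF16 code point: U+" & Right$("000" & Hex$(AscW(sChar)), 4)

    'In UTF8, the very same character is encoded as the two bytes C3 A9,
    'which is exactly what we'll find in the hex dump of the Notepad file.
End Sub
```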
Back to reading our file
At this point, we can guess that our old friend Notepad (Windows 10 en_US in my setup) probably stored our text file using a UTF8 encoding, which VBA is not aware of. Let's take a look at the bytes in the file:
We see at lines 0 and 30 that our accented “é” are encoded as the two bytes C3 and A9, so this is a UTF8 file.
Then, at some point, we'll have to convert a UTF8 representation of a string to a UTF16, VBA-friendly, one.
Unfortunately, VBA cannot help here, so let’s take a detour to our trustworthy Win32 API.
Converting from UTF8 to UTF16 with the Win32 API
You’ll find all the code in the demo database of my reading_text_files github repository.
The function we’ll need is MultiByteToWideChar(), which we can declare as:
```vba
Public Const CP_UTF8 As Long = 65001    'UTF-8 Code Page

'No mapping for the Unicode character exists in the target multi-byte code page.
Private Const ERROR_NO_UNICODE_TRANSLATION As Long = 1113&
Private Const MB_ERR_INVALID_CHARS As Long = &H8&

#If Win64 Then
Private Declare PtrSafe Function MultiByteToWideChar Lib "kernel32" ( _
    ByVal CodePage As Long, _
    ByVal dwFlags As Long, _
    ByVal lpMultiByteStr As LongPtr, _
    ByVal cchMultiByte As Long, _
    ByVal lpWideCharStr As LongPtr, _
    ByVal cchWideChar As Long) As Long

Private Declare PtrSafe Function GetLastError Lib "kernel32" () As Long
#Else
'Sys call to convert multiple byte chars to a char
Private Declare Function MultiByteToWideChar Lib "kernel32" ( _
    ByVal lCodePage As Long, _
    ByVal dwFlags As Long, _
    ByVal lpMultiByteStr As Long, _
    ByVal cchMultiByte As Long, _
    ByVal lpWideCharStr As Long, _
    ByVal cchWideChar As Long) As Long

Private Declare Function GetLastError Lib "kernel32" () As Long
#End If
```
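The usual calling pattern for MultiByteToWideChar() is a two-step dance: a first call with a zero output length returns the number of wide characters needed, then a second call does the actual conversion into a pre-sized string. A minimal sketch, assuming the declarations above are in the same module (the function name Utf8BytesToString is mine; the repository's UTF8DecodeByteArrayToString() plays this role):

```vba
Public Function Utf8BytesToString(ByRef pabBytes() As Byte) As String
    Dim lByteCt As Long
    Dim lWideLen As Long

    lByteCt = UBound(pabBytes) - LBound(pabBytes) + 1

    'First call: ask for the required length, in wide characters
    '(pass MB_ERR_INVALID_CHARS instead of 0& to fail on malformed UTF8)
    lWideLen = MultiByteToWideChar(CP_UTF8, 0&, _
                                   VarPtr(pabBytes(LBound(pabBytes))), lByteCt, 0&, 0&)
    If lWideLen = 0 Then Exit Function

    'Second call: convert into a pre-allocated VBA string
    Utf8BytesToString = String$(lWideLen, vbNullChar)
    MultiByteToWideChar CP_UTF8, 0&, _
                        VarPtr(pabBytes(LBound(pabBytes))), lByteCt, _
                        StrPtr(Utf8BytesToString), lWideLen
End Function
```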
We're going to have two kinds of sources to convert to UTF16 – byte arrays and strings – so these are the VBA function signatures we'll use:
```vba
Public Function UTF8DecodeByteArrayToString( _
    ByRef pabBytes() As Byte, _
    Optional ByVal plStart As Long = 0&) As String

End Function

Public Function UTF8DecodeString(ByVal psSource As String) As String

End Function
```
Unicode normalization
There’s more than one way to represent a combination of characters in Unicode (MSDN). Extract:
Capital A with dieresis (umlaut) can be represented either as a single Unicode code point “Ä” (U+00C4) or the combination of Capital A and the combining Dieresis character (“A” + “¨”, that is, U+0041 U+0308). Similar considerations apply for many other characters with diacritic marks.
Simply put, a problem arises if we compare two Unicode strings that are conceptually the same, but use different code points (as in the example above).
There are two more Win32 API functions that can help with that. One, NormalizeString(), transforms a Unicode string to a standard form, so it can be compared with another, even if the representations are different. The other, IsNormalizedString(), tests if a Unicode string is in the expected form.
There are a number of standard forms, but mainly, the one that “compresses” the code points into one character (that is, we get the precomposed “ä” instead of the separate “a” and “¨”) is “NormalizationC”, value 1, from the following (C++) enumeration:
```c
typedef enum _NORM_FORM {
    NormalizationOther = 0,    // Not supported
    NormalizationC     = 0x1,  // Each base + combining characters to canonical precomposed equivalent.
    NormalizationD     = 0x2,  // Each precomposed character to its canonical decomposed equivalent.
    NormalizationKC    = 0x5,  // Each base plus combining characters to the canonical precomposed
                               // equivalents and all compatibility characters to their equivalents.
    NormalizationKD    = 0x6   // Each precomposed character to its canonical decomposed equivalent
                               // and all compatibility characters to their equivalents.
} NORM_FORM;
```
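For reference, NormalizeString() and IsNormalizedString() are exported by Normaliz.dll. A sketch of what the 64-bit declarations could look like (the repository's actual declarations may differ; the 32-bit versions would use plain Declare and Long pointers):

```vba
#If Win64 Then
Private Declare PtrSafe Function NormalizeString Lib "Normaliz.dll" ( _
    ByVal NormForm As Long, _
    ByVal lpSrcString As LongPtr, _
    ByVal cwSrcLength As Long, _
    ByVal lpDstString As LongPtr, _
    ByVal cwDstLength As Long) As Long

Private Declare PtrSafe Function IsNormalizedString Lib "Normaliz.dll" ( _
    ByVal NormForm As Long, _
    ByVal lpString As LongPtr, _
    ByVal cwLength As Long) As Long
#End If
```

Like MultiByteToWideChar(), NormalizeString() follows the ask-for-size-then-convert pattern: a first call with a null destination returns an estimate of the required length (MSDN warns it is only an estimate, so production code should loop while GetLastError() reports an insufficient buffer).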
Normalization is an optional step, but for security reasons it should be used.
I wrapped the API (and followed MSDN guidance) in these two VBA API functions, and two others to get any error information:
```vba
Public Function UcIsNormalizedString(ByVal psText As String) As Boolean
End Function

Public Function UcNormalizeString(ByVal psText As String) As String
End Function

Public Function UcGetLastError() As Long
End Function

Public Function UcGetLastErrorText() As String
End Function
```
Note:
I'm not following my usual coding guidelines for keeping error information inside a module, because an error can come either from calling the Win32 API functions or as a “logical” error when using the VBA API.
So, to test whether something went wrong when calling UcNormalizeString(), we have to test like this:
```vba
If (UcGetLastError() <> 0) Or (Len(UcGetLastErrorText()) > 0) Then
    TryNormalization = "FAILED: " & UcGetLastErrorText()
End If
```
You can see a test scenario, loosely translated from the ones in MSDN, in the Test_Normalization() Sub, which calls:
```vba
Private Function TryNormalization(ByVal psText As String) As String
    If UcIsNormalizedString(psText) Then
        TryNormalization = "Already normalized in this form"
        Exit Function
    End If
    TryNormalization = UcNormalizeString(psText)
    If (UcGetLastError() <> 0) Or (Len(UcGetLastErrorText()) > 0) Then
        TryNormalization = "FAILED: " & UcGetLastErrorText()
    End If
End Function
```
Back to reading our file – again
Ok, now we know for sure that our file is in UTF8.
And we know that we have a nice UTF8DecodeString() at our disposal.
Aren't we tempted to make this slight adaptation to our ReadTextFileByLine() function? (See the UTF8DecodeString() call in this code):
```vba
Public Sub ReadTextFileByLine_BadIdea(ByVal psFileName As String)
    Dim hFile As Integer
    Dim sLine As String     'Current line read from the file
    Dim iLineCt As Integer  'Line counter

    Debug.Print "---- ReadTextFileByLine_BadIdea()"
    hFile = FreeFile
    Open psFileName For Input Access Read As #hFile
    Debug.Print "[File: " & psFileName & "]"
    While Not EOF(hFile)
        iLineCt = iLineCt + 1
        Line Input #hFile, sLine
        Debug.Print iLineCt & ":" & UTF8DecodeString(sLine)
    Wend
    Close hFile
    Debug.Print "[EOF]"
End Sub

Public Sub Test_ReadTextFileByLine_BadIdea()
    ReadTextFileByLine_BadIdea TEST_FILE1
End Sub
```
The result:
Whaaaat ? – Let’s debug that using the provided DumpStringBytes() function:
Which brings us to this output:
As we can see (and compare with the previous hex dump of the file), the sLine variable that VBA reads from the file does not contain a UTF8 string.
VBA converts each line it reads from the file to a double-byte (UTF16) string, interpreting the raw bytes as ANSI characters, so our UTF8 sequences are mangled before we ever get a chance to decode them.
We cannot use VBA to read a UTF8-encoded text file using string variables.
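The repository provides DumpStringBytes() for this kind of debugging; a minimal equivalent is sketched below (the name DumpBytes is mine). Assigning a String to a Byte array copies the string's internal UTF16 bytes verbatim, which is exactly what we want to see. On our file, the “é” bytes C3 A9 come back as the two ANSI characters “Ã” and “©”, i.e. the four bytes C3 00 A9 00:

```vba
Public Sub DumpBytes(ByVal psText As String)
    Dim abBytes() As Byte
    Dim i As Long

    abBytes = psText    'Copies the internal UTF16 (LE) bytes as-is
    For i = LBound(abBytes) To UBound(abBytes)
        Debug.Print Right$("0" & Hex$(abBytes(i)), 2); " ";
    Next i
    Debug.Print
End Sub
```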
Solution for reading and converting an UTF8 text file
We have to open the file in binary mode and read the contents into a byte array. This way, VBA doesn't do any conversion. We then just convert the byte array to a proper string with UTF8DecodeByteArrayToString():
```vba
Public Sub ReadNotepadTextFile(ByVal psFileName As String)
    Dim abString() As Byte
    Dim sDecoded As String
    Dim hFile As Integer

    Debug.Print "---- ReadNotepadTextFile() in a byte array and convert to UTF8:"
    Debug.Print "[File: " & psFileName & "]"
    hFile = FreeFile
    Open psFileName For Binary Access Read As #hFile
    ReDim abString(1 To LOF(hFile)) As Byte
    Get #hFile, 1, abString
    Close hFile
    sDecoded = UTF8DecodeByteArrayToString(abString)
    Debug.Print sDecoded & vbCrLf & "[UTF8] (len=" & Len(sDecoded) & ")"
    Debug.Print "[EOF]"
End Sub
```
And finally, we get it right:
Other text file encodings and BOMs
While Notepad saves files in UTF8 encoding, there are other text encodings and file formats out there.
UTF8 and UTF16 text files may or may not have a special series of bytes at the start of the file called the BOM (Byte Order Mark). The BOM is a magic number that we can use to infer the file encoding and the byte endianness (order of bytes) of the file contents.
Without the BOM, guessing the file encoding can be tough.
But when there is one, we can use it to make the necessary conversions, as in the following GetFileText() function, which can handle these file encodings:
- UTF16 BE / LE (Big Endian / Little Endian) with or without BOM,
- UTF8 with or without BOM
- ANSI (8-bit character text; different character sets or code pages possible)
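BOM detection itself boils down to comparing the first bytes of the file against the well-known markers (EF BB BF for UTF8, FF FE for UTF16 LE, FE FF for UTF16 BE). A minimal sketch, with a hypothetical helper name (GetFileText() presumably does something along these lines internally):

```vba
Public Function GuessBOM(ByRef pabBytes() As Byte) As String
    Dim lLb As Long
    lLb = LBound(pabBytes)

    GuessBOM = "no BOM"
    If UBound(pabBytes) - lLb + 1 < 2 Then Exit Function

    If pabBytes(lLb) = &HFF And pabBytes(lLb + 1) = &HFE Then
        GuessBOM = "UTF16 LE"
    ElseIf pabBytes(lLb) = &HFE And pabBytes(lLb + 1) = &HFF Then
        GuessBOM = "UTF16 BE"
    ElseIf UBound(pabBytes) - lLb + 1 >= 3 Then
        If pabBytes(lLb) = &HEF And pabBytes(lLb + 1) = &HBB _
           And pabBytes(lLb + 2) = &HBF Then
            GuessBOM = "UTF8"
        End If
    End If
End Function
```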
This is the signature of the function (code in the MTextFiles module of the Reading_Text_Files.accdb project), with a bit of documentation:
```vba
'GetFileText()
'
'Synopsis
'--------
'  Reads a text file as a binary file in memory, and converts the bytes into a VB string.
'  Detects the BOM (Byte Order Mark) if there's one and handles BE/LE (Big/Little Endian).
'
'Parameters
'----------
'  psFileName
'    Full or relative path of the file to read.
'
'  psInFileFormat
'    Can be either "", "utf8" or "utf16". Any other value plays
'    as "", and "" is assumed to be UTF16 LE with no BOM (as when we
'    write a text file with VB/A).
'    Rules applied for psInFileFormat:
'    1. "" (empty): UTF16 LE with no BOM is assumed;
'    2. "utf8": if there's a BOM, it's not included in the returned
'       text, and the text is UTF8-decoded to a VB/A string (UTF16);
'    3. "utf16": if there's a BOM, it is used to determine LE/BE. If
'       there's no BOM, LE is assumed.
'
'Returns
'-------
'  The decoded file text.
'
'Notes:
'-----
'1. Can handle files whose size fits into a Long, and in memory.
'2. Use error trapping in your calling code to catch unexpected errors.
'3. You can use Notepad++ to produce test files (see the "Encoding" menu):
'     Notepad++ encoding        | Translates to
'     --------------------------+--------------
'     "Encode in ANSI"          | ""
'     "Encode in UTF-8"         | "utf8"
'     "Encode in UTF-8-BOM"     | "utf8"
'     "Encode in UCS-2 BE BOM"  | "utf16"
'     "Encode in UCS-2 LE BOM"  | "utf16"
'   (Remember UCS-2 = UTF16)
'4. Tools like Typora (https://typora.io/) save files in utf8 with no BOM.
'   Use GetFileText(YourFileName, "utf8") to load them.
'
Public Function GetFileText( _
    ByVal psFileName As String, _
    ByVal psInFileFormat As String) As String

End Function
```
There’s a “text_files_samples” directory, in the github repository, with one file for each possible text file encoding.
Note that there are no UTF16 files without a BOM, as I used Notepad++ to generate the files and Notepad++ has no option to generate UTF16 files without one.
The Test_ReadSampleEncodings() procedure will read and check the contents of each file with this GetFileText() function:
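Typical calls would look like the following (the file names are made up for illustration; use the ones from the "text_files_samples" directory):

```vba
Dim sText As String

'ANSI, or UTF16 with a BOM: the default rules apply
sText = GetFileText("c:\temp\textfiles\ansi_sample.txt", "")

'UTF8, with or without a BOM (e.g. a file saved by Typora)
sText = GetFileText("c:\temp\textfiles\utf8_nobom_sample.txt", "utf8")
```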
Conclusion
We've seen different representations of text and encodings like UTF8 and UTF16, and we're now able to convert between those encodings. We also know how to read text, with VBA, from files in some of the most common formats we may encounter.
From here, it should be quite easy to also write any of these formats (opening the files in binary mode helps).
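For instance, writing a UTF8 file is the mirror image of our reading code: encode the VBA string with WideCharToMultiByte() (the counterpart of MultiByteToWideChar()), then write the bytes in binary mode. A sketch, showing only the 64-bit declaration for brevity (the 32-bit one uses plain Declare and Long pointers; for CP_UTF8, the last two API parameters must be null):

```vba
#If Win64 Then
Private Declare PtrSafe Function WideCharToMultiByte Lib "kernel32" ( _
    ByVal CodePage As Long, ByVal dwFlags As Long, _
    ByVal lpWideCharStr As LongPtr, ByVal cchWideChar As Long, _
    ByVal lpMultiByteStr As LongPtr, ByVal cbMultiByte As Long, _
    ByVal lpDefaultChar As LongPtr, ByVal lpUsedDefaultChar As LongPtr) As Long
#End If

Public Sub WriteUtf8File(ByVal psFileName As String, ByVal psText As String)
    Dim abBytes() As Byte
    Dim lLen As Long
    Dim hFile As Integer

    'First call: ask for the required UTF8 byte count, then convert
    lLen = WideCharToMultiByte(65001, 0&, StrPtr(psText), Len(psText), 0&, 0&, 0&, 0&)
    If lLen = 0 Then Exit Sub
    ReDim abBytes(0 To lLen - 1)
    WideCharToMultiByte 65001, 0&, StrPtr(psText), Len(psText), _
                        VarPtr(abBytes(0)), lLen, 0&, 0&

    'Write the raw bytes; in Binary mode VBA does no conversion
    hFile = FreeFile
    Open psFileName For Binary Access Write As #hFile
    Put #hFile, 1, abBytes
    Close hFile
End Sub
```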
Downloads
Head to the Reading_Text_Files github repository to get the source code, the example files and the Access demo database.
(MIT Licence)