UTF-32 Encoding

This section provides a quick introduction to the UTF-32 (Unicode Transformation Format - 32-bit) encoding for the Unicode character set. UTF-32 uses 32 bits, or 4 bytes, to encode each character.

UTF-32: A character encoding scheme that maps each code point of the Unicode character set to a sequence of 4 bytes (32 bits). UTF-32 stands for Unicode Transformation Format - 32-bit.
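To make the 4-byte mapping concrete, here is a small Java sketch. The class and method names are mine, not from any library; the byte splitting uses plain bit shifts:

```java
public class Utf32 {
    // Return the 4 big-endian bytes of a Unicode code point.
    public static byte[] encodeCodePoint(int codePoint) {
        return new byte[] {
            (byte) (codePoint >>> 24),
            (byte) (codePoint >>> 16),
            (byte) (codePoint >>> 8),
            (byte) codePoint
        };
    }

    public static void main(String[] args) {
        // U+10000 becomes the 4 bytes: 00 01 00 00
        for (byte b : encodeCodePoint(0x10000)) {
            System.out.printf("%02X ", b);
        }
        System.out.println();
    }
}
```

Unlike UTF-8 and UTF-16, every code point takes exactly 4 bytes, so no surrogate or multi-byte logic is needed.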

Here is my understanding of the UTF-32 specification. When UTF-32 encoding is used to encode (serialize) Unicode characters into a byte stream for communication or storage, there are 3 valid optional formats:

1. Big-endian without BOM: each code point is serialized as 4 bytes with the most significant byte first. No Byte Order Mark (BOM) is added to the stream.

2. Big-endian with BOM: same as above, but the stream starts with the BOM, U+FEFF, serialized as 0x00 0x00 0xFE 0xFF.

3. Little-endian with BOM: each code point is serialized as 4 bytes with the least significant byte first, and the stream starts with the BOM serialized as 0xFF 0xFE 0x00 0x00.

For example, all 3 encoding streams listed below are valid UTF-32 encoded streams for 3 Unicode characters, U+004D, U+0061 and U+10000:

   00 00 00 4D 00 00 00 61 00 01 00 00              (big-endian, no BOM)
   00 00 FE FF 00 00 00 4D 00 00 00 61 00 01 00 00  (big-endian, with BOM)
   FF FE 00 00 4D 00 00 00 61 00 00 00 00 00 01 00  (little-endian, with BOM)
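The big-endian and little-endian forms can be produced with the JDK's built-in charsets. A sketch, assuming the runtime provides the "UTF-32BE" and "UTF-32LE" charsets (they are included in Sun/Oracle JDKs; neither emits a BOM, so for the BOM-prefixed forms you would prepend U+FEFF yourself):

```java
import java.nio.charset.Charset;

public class Utf32Streams {
    public static void main(String[] args) {
        // Build the string "Ma" followed by U+10000 (a supplementary character).
        String s = new StringBuilder("Ma").appendCodePoint(0x10000).toString();

        byte[] be = s.getBytes(Charset.forName("UTF-32BE")); // big-endian, no BOM
        byte[] le = s.getBytes(Charset.forName("UTF-32LE")); // little-endian, no BOM

        print("UTF-32BE: ", be); // 00 00 00 4D 00 00 00 61 00 01 00 00
        print("UTF-32LE: ", le); // 4D 00 00 00 61 00 00 00 00 00 01 00
    }

    static void print(String label, byte[] bytes) {
        StringBuilder sb = new StringBuilder(label);
        for (byte b : bytes) sb.append(String.format("%02X ", b));
        System.out.println(sb.toString().trim());
    }
}
```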

When UTF-32 encoding is used to decode (deserialize) a byte stream into Unicode characters, the following logic should be used:

1. If the stream starts with 0x00 0x00 0xFE 0xFF, treat those 4 bytes as the BOM, skip them, and decode the rest of the stream as big-endian.

2. If the stream starts with 0xFF 0xFE 0x00 0x00, treat those 4 bytes as the BOM, skip them, and decode the rest of the stream as little-endian.

3. Otherwise, decode the entire stream as big-endian.
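The decoding rules above can be sketched as a small Java method. This is my own illustration of the BOM-detection logic, not a library API:

```java
import java.nio.ByteOrder;

public class Utf32Decoder {
    // Decode a UTF-32 byte stream into code points, honoring an optional BOM.
    public static int[] decode(byte[] bytes) {
        ByteOrder order = ByteOrder.BIG_ENDIAN; // default when no BOM present
        int start = 0;
        if (bytes.length >= 4) {
            if (bytes[0] == 0x00 && bytes[1] == 0x00
                    && (bytes[2] & 0xFF) == 0xFE && (bytes[3] & 0xFF) == 0xFF) {
                start = 4; // big-endian BOM: skip it
            } else if ((bytes[0] & 0xFF) == 0xFF && (bytes[1] & 0xFF) == 0xFE
                    && bytes[2] == 0x00 && bytes[3] == 0x00) {
                order = ByteOrder.LITTLE_ENDIAN;
                start = 4; // little-endian BOM: skip it
            }
        }
        int[] codePoints = new int[(bytes.length - start) / 4];
        for (int i = 0; i < codePoints.length; i++) {
            int o = start + i * 4;
            codePoints[i] = (order == ByteOrder.BIG_ENDIAN)
                ? ((bytes[o] & 0xFF) << 24) | ((bytes[o + 1] & 0xFF) << 16)
                    | ((bytes[o + 2] & 0xFF) << 8) | (bytes[o + 3] & 0xFF)
                : ((bytes[o + 3] & 0xFF) << 24) | ((bytes[o + 2] & 0xFF) << 16)
                    | ((bytes[o + 1] & 0xFF) << 8) | (bytes[o] & 0xFF);
        }
        return codePoints;
    }

    public static void main(String[] args) {
        // Little-endian stream with BOM, encoding the single character U+004D.
        byte[] stream = { (byte) 0xFF, (byte) 0xFE, 0x00, 0x00,
                          0x4D, 0x00, 0x00, 0x00 };
        System.out.printf("U+%04X%n", decode(stream)[0]); // prints: U+004D
    }
}
```

A production decoder would also reject code points above U+10FFFF and the surrogate range U+D800-U+DFFF; this sketch keeps only the byte-order logic.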

As of today, July 2009, not many applications support UTF-32 encoding. On my Windows system, Firefox 3.0.11 is the only application I have found that supports it.

Last update: 2009.

Table of Contents

 About This Book

 Character Sets and Encodings

 ASCII Character Set and Encoding

 GB2312 Character Set and Encoding

 GB18030 Character Set and Encoding

 JIS X0208 Character Set and Encodings

 Unicode Character Set

 UTF-8 (Unicode Transformation Format - 8-Bit)

 UTF-16, UTF-16BE and UTF-16LE Encodings

UTF-32, UTF-32BE and UTF-32LE Encodings

UTF-32 Encoding

 UTF-32BE Encoding

 UTF-32LE Encoding

 Java Language and Unicode Characters

 Character Encoding in Java

 Character Set Encoding Maps

 Encoding Conversion Programs for Encoded Text Files

 Using Notepad as a Unicode Text Editor

 Using Microsoft Word as a Unicode Text Editor

 Using Microsoft Excel as a Unicode Text Editor

 Unicode Fonts

 Unicode Code Point Blocks - Code Charts

 Outdated Tutorials
