Python, Unicode and UnicodeDecodeError

For historical reasons, the most common encoding (in Western Europe and the US, anyway) is ASCII. This is also Python’s default encoding. 

Let’s think about ASCII for a moment. It’s an encoding that uses 7 bits, which limits it to 128 possible values. That’s enough to represent all the characters that Western Europe and the US use (letters in both cases, the numbers, punctuation, a few characters with diacritics). Therefore, Unicode strings that only include code points that are in these 128 ASCII characters can be encoded as ASCII. Conversely, any ASCII encoded string can be decoded to Unicode.

It’s worth reiterating that terminology, as you come across it a lot: the transformation from Unicode to an encoding like ASCII is called ‘encoding’. The transformation from ASCII back to Unicode is called ‘decoding’.

    Unicode  ---- encode ----> ASCII
    ASCII    ---- decode ----> Unicode

Non-ASCII encodings

Most people don’t live in the US or Western Europe, and therefore need to store more characters than can be represented with ASCII. What those folk need to represent *is* part of the Unicode set (Unicode is massive!) – so a different encoding is required. Common encodings have familiar names: UTF-8 and UTF-16. UTF-8, for example, uses a single byte to encode each of the ASCII values, then sequences of two to four bytes to encode further characters. (The ins and outs of these encodings are beyond the scope of this article – check out their respective Wikipedia entries for the gory details.)

The fact that UTF-8 encodes the first 128 code points exactly as ASCII does is important, since it means that the encoding is backwards-compatible with ASCII. However, it can mask problems in software. We’ll come to this shortly.
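
Written with explicit u'' and b'' prefixes (valid from Python 2.6 onwards, and in Python 3), here’s a quick sketch of that compatibility:

```python
# Pure-ASCII text encodes to identical bytes under either encoding...
assert u'hello'.encode('ascii') == u'hello'.encode('utf-8')

# ...so a pure-ASCII byte string decodes correctly with either codec.
assert b'hello'.decode('ascii') == b'hello'.decode('utf-8') == u'hello'

# Beyond ASCII, UTF-8 switches to multi-byte sequences.
assert u'\u00a3'.encode('utf-8') == b'\xc2\xa3'   # the £ sign: two bytes
```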

Some terminology

Unicode-related terminology can get confusing. Here’s a quick glossary:

  • To encode
    • Encoding (the verb) means to take a Unicode string and produce a byte string
  • To decode
    • Decoding (the verb) means to take a byte string and produce a Unicode string
  • An encoding
    • An encoding (the noun) is a mapping that describes how to represent a Unicode character as a byte or series of bytes. Encodings are named (like ‘ascii’, or ‘utf-8’) and are used both when encoding (verb!) Unicode strings and decoding byte strings.

In other words, when you encode or decode, you need to specify the encoding that you’re using. This will become clearer shortly.

Python, bytes and strings

You’ve probably noticed that there seem to be a couple of ways of writing down strings in Python. One looks like this:

  'this is a string'

Another looks like this:

  u'this is a string'

There’s a good chance that you also know that the second one of those is a Unicode string. But what’s the first one? And what does it actually mean to ‘be a Unicode string’?

The first one is simply a sequence of bytes. This byte sequence is, by convention, an ASCII representation (ie. encoding) of a string. The whole Python standard library, and most third-party modules, happily deal with strings natively in this encoding. As long as you live in the US or Western Europe, that’s probably fine for you.

The second one is a representation of a Unicode string. This can therefore contain any of the Unicode code points. It’s possible that whatever you’re using to edit the Python code (or just view it) might not be able to display the entire Unicode character set – for instance, a terminal usually has an encoding that it assumes data it’s trying to display is in. There’s a special notation, therefore, for representing arbitrary Unicode code points within a Python Unicode string: the \u and \U escapes. These will be followed by four or eight hex digits; there’s some subtlety here (see the Python string reference for further information) but you can simply think of the number after the \u (or \U) as representing the Unicode code point of the character. So, for example, the following Python string:

  u'\u0062'

represents LATIN SMALL LETTER B, or more simply:

  u'b'

To summarise then: the Unicode character set encompasses all characters that we may wish to represent. Individual encodings (ASCII, UTF-8, UTF-16, etc.) are representations of all or some of that full Unicode character set.
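
To make that concrete, here’s one character (the £ sign) rendered under a few different encodings – a sketch using explicit u'' and b'' prefixes:

```python
pound = u'\u00a3'   # £, POUND SIGN

# Different encodings represent the same code point as different bytes.
assert pound.encode('utf-8') == b'\xc2\xa3'
assert pound.encode('latin-1') == b'\xa3'
assert pound.encode('utf-16-le') == b'\xa3\x00'

# ASCII covers a smaller subset of Unicode, so it can't represent £ at all.
try:
    pound.encode('ascii')
except UnicodeEncodeError:
    pass   # expected: £ is outside ASCII's 128 characters
```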

Encoding and Decoding

Byte strings and Unicode strings provide methods to perform the encoding and decoding for you. Remembering that you *encode* from Unicode to an encoding, you might try the following:

>>> u'\u0064'.encode('ascii')
'd'

As you’d expect, the Unicode string has an ‘encode’ method. You tell Python which encoding you want (‘ascii’ in this case; there are lots more supported by Python – check the docs) as the first parameter to the encode() call.

Conversely, byte strings have a decode() method:

>>> 'b'.decode('ascii')
u'b'

Here, we’re telling Python to take the byte string ‘b’, decode it using the ASCII codec and return a Unicode string.

Note that in both these previous cases, we didn’t really need to specify ‘ascii’ manually, since Python uses that as a default.


UnicodeEncodeError

So, we’ve established that there are encodings which can represent Unicode, or more usually, a certain subset of the Unicode character set. We’ve already talked about how ASCII can only represent 128 characters. So, what happens if you have a Unicode string that contains code points that are outside those 128 characters? Let’s try something all too familiar to UK users: the £ sign. The Unicode code point for this character is 0x00A3:

>>> u'\u00A3'.encode('ascii')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeEncodeError: 'ascii' codec can't encode character u'\xa3'
in position 0: ordinal not in range(128)

Boom. This is Python telling you that it encountered a character in the Unicode string which it can’t represent in the requested encoding. There’s a fair amount of information in the error: it gives you the character that it’s having problems with, the position it was at in the string, and (in the case of ASCII) tells you that it was expecting a number in the range 0 – 127.

How do you fix a UnicodeEncodeError? Well, you’ve got a couple of options:

  • Pick an encoding that does have a representation for the problematic character
  • Use one of the error handling arguments to encode()

The first option is obviously ideal, although its practicality depends on what you’re doing with the encoded data. If you’re passing it to another system that (for example) requires its text files in ASCII format, you’re stuck. In that case, you’re left with the second option: pass ‘ignore’, ‘replace’, ‘xmlcharrefreplace’ or ‘backslashreplace’ as the second argument to encode():

>>> u'\u0083'.encode('ascii', 'ignore')
''
>>> u'\u0083'.encode('ascii', 'replace')
'?'
>>> u'\u0083'.encode('ascii', 'xmlcharrefreplace')
'&#131;'
>>> u'\u0083'.encode('ascii', 'backslashreplace')
'\\x83'

If you choose one of those options, you’ll have to let the eventual consumer of your encoded text know how to handle these.


UnicodeDecodeError

This one is probably more familiar to most developers. A UnicodeDecodeError occurs when you ask Python to decode a byte string using a specified encoding, but Python encounters a byte sequence in that string that isn’t valid in the encoding you specified (phew!). This one probably benefits from an example.

Consider once more the ASCII encoding. Being a 7-bit representation, ASCII only has 128 characters, represented by the numbers 0 – 127. So let’s imagine the ASCII-encoded string below:


  'Hi!'

In terms of ASCII numbers, that is:

72 105 33

Or in actual Python:

>>> s = chr(72) + chr(105) + chr(33)
>>> s
'Hi!'
>>> s.decode('ascii')
u'Hi!'

That’s all great. But what happens if we add a byte that’s not in the ASCII range?

>>> s = s + chr(128)
>>> s.decode('ascii')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0x80
in position 3: ordinal not in range(128)

Boom. Python is saying that it encountered byte 0x80 (which is 128 in decimal – the one we added) at position 3 (counting from zero) in the source byte string, and that this byte was not in the range 0 – 127.

This is normally caused by using the incorrect encoding to try to decode a byte string to Unicode. So, for example, if you were given a UTF-8 byte string, and tried to decode it as ASCII, then you might well see a UnicodeDecodeError.

But why only might?

Well, remember what I mentioned before – UTF-8 shares its first 128 characters with ASCII. That means that you can take a UTF-8 byte sequence and decode it with the ASCII codec, and *as long as there are no characters outside the ASCII range* it will work. *Only* when that byte string starts featuring characters which don’t exist within the ASCII encoding do errors start being thrown.
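
Here’s that failure mode in miniature (a sketch; ‘café’ stands in for any real data):

```python
utf8_bytes = u'cafe'.encode('utf-8')       # pure-ASCII content
assert utf8_bytes.decode('ascii') == u'cafe'   # 'wrong' codec, but it works

utf8_bytes = u'caf\u00e9'.encode('utf-8')  # é encodes as two bytes, 0xC3 0xA9
try:
    utf8_bytes.decode('ascii')
except UnicodeDecodeError:
    pass   # 0xC3 is outside ASCII's 0-127 range
```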

ASCII – the default codec

Lots of Python programmers (well, US and Western European ones) can get quite a way into their Python careers converting byte strings to unicode like this:

>>> print unicode('hi!')
hi!

What’s going on here? Well, Python uses the ascii codec by default. So, the above is equivalent to:

>>> 'hi!'.decode('ascii')
u'hi!'

And, because most US/European test data is composed of pure-ASCII byte strings like that one, nobody notices the problem until the Japanese office complains the intranet is broken.

Unicode Coercion

If you try to interpolate a byte string with a Unicode string, or vice-versa, Python will try and convert the byte string to Unicode using the default (ie. ascii) codec. So:

>>> u'Hi' + ' there'
u'Hi there'
>>> u'Hi %s' % 'there'
u'Hi there'
>>> 'Hi %s' % u'there'
u'Hi there'

These all work fine, because all the strings that we’re working with can be represented with ASCII. Look what happens when we try a character which can’t be represented with ASCII though:

>>> u'Hi ' + chr(128)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0x80
in position 0: ordinal not in range(128)

Python sees we’re trying to combine a Unicode string with a byte string, so it tries to decode the byte string to Unicode using the ASCII codec. Since byte 128 (0x80 – the Euro sign in Windows-1252, as it happens) isn’t valid ASCII, Python throws a UnicodeDecodeError.

In my experience, Unicode coercion is often where UnicodeDecodeErrors manifest themselves. The programmer has a Unicode string (probably a template) into which they’re trying to put some data from a database. Relational databases tend to supply byte strings. Usually the encoding is a property on the database connection. Often, however, developers simply assume it’s ASCII (or don’t do anything special at all, which in Python amounts to the same thing). They try to stick the data from the database (perhaps in UTF-8 or ISO-8859-1) into a Unicode string using the %s format specifier, Python tries to decode the byte string using the ascii codec, and the whole thing falls flat on its face.
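
(A footnote from the Python 3 side: the implicit coercion is gone entirely there, which turns this class of latent bug into an immediate, obvious error. A sketch – the ‘database value’ below is made up, as is the assumption that it’s UTF-8:)

```python
# Python 3 refuses the implicit coercion outright:
try:
    u'Hi ' + b'there'
except TypeError:
    pass   # can't concatenate str and bytes in Python 3

# The robust fix, in either Python: decode explicitly, naming the encoding
# the data actually uses (utf-8 here is an assumption about the database).
row = b'there'                       # stand-in for a value from a DB driver
assert u'Hi ' + row.decode('utf-8') == u'Hi there'
```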

Why do Python byte strings have an encode() method?

The sharp-eyed amongst you will have noticed that byte strings have an encode() method as well as a decode() method. What does this do? Quite simply, it does a decode-then-encode: the byte string is decoded to Unicode using the default (ascii) codec, and the result is then encoded using the target encoding specified in the call to encode(). As you’d expect, fun and games ensue if the original byte string isn’t actually encoded in ASCII at all.

Avoiding Unicode Errors

So – this is really what you care about, right? How do you avoid these Unicode problems? Well, there are three simple rules:

  • Within your application, always use Unicode
  • When you’re reading text in to your application, decode it as soon as possible with the correct encoding
  • When you’re outputting text from your application, encode at that point and do it explicitly

What does this mean in practice? Well, it means:

  • Whenever you’re writing string literals in code, always use the u'' prefix.
  • Whenever you read any text in, call .decode('encoding') on the byte string to obtain Unicode
  • Whenever you’re writing text out, pick an appropriate encoding to handle whatever Unicode you’re outputting – remember that ASCII can only represent a very limited subset
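
A minimal sketch of that decode-early, encode-late pattern, using io.open (which behaves the same, encoding-wise, on Python 2.6+ and Python 3); the file name is made up:

```python
import io
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'note.txt')

# Encode on the way out: write Unicode, naming the encoding explicitly.
with io.open(path, 'w', encoding='utf-8') as f:
    f.write(u'\u00a3100')            # £100

# Decode on the way in: io.open hands back Unicode text, already decoded.
with io.open(path, 'r', encoding='utf-8') as f:
    text = f.read()

assert text == u'\u00a3100'          # Unicode everywhere inside the app
```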

There are more places than you probably realise that text can get into your application. Here are some:

  • An incoming request from a web browser
  • Some text read in from a data file on disk
  • A template file read in from disk
  • Some user’s input from a form
  • Some data from a database
  • Data returned from a web services call

Frameworks help a lot here. Many frameworks handle the common encoding and decoding cases (usually the template encoding, and data encoding from a database) for you, and just pass you back Unicode strings. Watch out for web request variables – many of those may be plain byte strings. Also watch out for web service responses; you might need to inspect the response headers to find out the encoding. And even then be careful; I’ve come across situations with in-house apps where the declared encodings were simply wrong, leading to unexpected UnicodeDecodeErrors.

Figuring out which encoding to use

When you’re faced with a byte string, how do you know which encoding to use to decode it? The answer is, unfortunately, simple: you don’t. Some environments (such as the Web) may help you – HTTP requests and responses contain headers which specify the encoding used within them. You can inspect those, and if they’re wrong – well, at least you’ve got someone else to blame.

If you’re lucky, you know the byte string is encoding some XML. XML gets a lot of flak, but one of the things it does right is to specify explicitly a default encoding that’s actually useful (UTF-8) and provide a mechanism to declare a different encoding. So with XML, you can scan the first few bytes of the file, decode using UTF-8, and look for the magic encoding declaration. If there isn’t one, then you can safely decode the rest of the file using UTF-8. If there is one, then switch encoding. Of course, your XML library of choice will do all this for you, and should give you Unicode text back once you’ve read your XML in.
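
For illustration only, here’s a stripped-down version of that scan – a hypothetical helper, not a substitute for your XML library’s own detection:

```python
import re

def sniff_xml_encoding(raw):
    """Guess an XML byte string's encoding from its declaration.

    A simplified, hypothetical helper: real parsers also handle BOMs and
    UTF-16 declarations, so prefer your XML library's own detection.
    """
    head = raw[:100].decode('ascii', 'replace')
    match = re.search(r'encoding=["\']([A-Za-z0-9._\-]+)["\']', head)
    return match.group(1) if match else 'utf-8'   # XML's default

assert sniff_xml_encoding(b'<?xml version="1.0" encoding="iso-8859-1"?><r/>') == 'iso-8859-1'
assert sniff_xml_encoding(b'<?xml version="1.0"?><r/>') == 'utf-8'
```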

If you’re unlucky, then you’ve got two more options. First off, you can talk to the people who run your source (or destination) system – find out what encodings they’re using, or accept, and use those.

The final, last-resort option is to simply have a range of common encodings to try. A list I often use is ASCII, UTF-8, UTF-16 and ISO-8859-1 – keep trying to decode with each of those in turn until one works. (ISO-8859-1 belongs at the end of the list, because every possible byte is valid in it, so it always ‘works’.) Which encodings you pick of course depends on what kind of files you’re expecting to see. You may also run into problems if you have a byte string in encoding X which also happens to be valid when decoded using encoding Y – in this case, you’ll just get garbage data. This is the cause of many of the ‘funny character’ bugs you see in web applications: byte strings being decoded using an encoding which happened to work, but was in fact not the original encoding used to create the byte string.
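
Here’s a sketch of that last-resort loop. One wrinkle worth baking in: every possible byte is valid ISO-8859-1, so it always ‘succeeds’ and should be tried last:

```python
def decode_with_fallbacks(raw, encodings=('ascii', 'utf-8', 'utf-16', 'iso-8859-1')):
    """Return (text, encoding) for the first candidate that decodes raw.

    A last-resort sketch; ISO-8859-1 goes last, since it accepts every
    possible byte and so always 'succeeds' -- rightly or wrongly.
    """
    for encoding in encodings:
        try:
            return raw.decode(encoding), encoding
        except UnicodeDecodeError:
            continue
    raise ValueError('none of the candidate encodings matched')

# UTF-8 bytes for 'café': ASCII fails on 0xC3, then UTF-8 succeeds.
text, used = decode_with_fallbacks(b'caf\xc3\xa9')
assert (text, used) == (u'caf\xe9', 'utf-8')
```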

Python 3

I’m not going to talk too much about Python 3, since I haven’t actually used it yet. 

But – you rarely hear .NET or Java programmers complaining about Unicode errors. This is simply because both .NET and Java define a string to *be* Unicode in the first place. Anything involving the String class (in either runtime) is Unicode anyway; the developer sees encoding problems much less frequently as it’s much less common for unexpected byte data to creep into applications. This doesn’t mean the problems don’t exist, of course: at the end of the day, text is still being encoded to and from byte strings; it’s just done explicitly. (The fact that the native string representation on MS Windows, the OS on which many of these systems run, is UTF-16 helps here too – many more characters can be encoded in UTF-16 than in ASCII.)

My understanding is that Python 3 takes this general approach. Python 2’s byte-oriented ‘str’ type is gone: in Python 3, ‘str’ *is* the Unicode type (equivalent to Java and .NET’s String class), and raw bytes live in the separate ‘bytes’ type. String operations are done on ‘str’ instances.
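
In concrete terms, a few assertions illustrating that split (Python 3 only – under Python 2 the type names differ):

```python
# Python 3: 'str' is the Unicode type, 'bytes' the byte type.
assert isinstance(u'hi', str) and isinstance(b'hi', bytes)
assert u'hi' != b'hi'        # distinct types; no implicit coercion

# Encoding and decoding remain explicit, exactly as in Python 2.
assert u'\u00a3'.encode('utf-8') == b'\xc2\xa3'
assert b'\xc2\xa3'.decode('utf-8') == u'\u00a3'
```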

Coding in a Unicode world

Unicode is here to stay. The days of writing software that only needs to work in American universities, where the only language in use was US English written in Latin script, are long gone. There’s no magic to Unicode and the various encodings, and once you understand what’s going on, there’s no reason to have that sick feeling in the pit of your stomach the next time you see a UnicodeDecodeError. Just remember these rules:

  • Decode on the way in
  • Unicode everywhere in your application
  • Encode on the way out
