I think this page explains it very well:
http://www.reportlab.com/i18n/python_un ... orial.html
"Once you get beyond the ASCII world, there are many different native encodings for different languages and operating systems. Converting between all of these is easiest with a central "common point", and that is Unicode. Unicode is a two-byte encoding which covers all of the world's common writing systems. It is important for many reasons:
Data Storage
If your customer database is all English, or even all Japanese, you can store it any way you like. But if you have to keep English, Japanese, Russian and Thai in the same file or database column, you can't use a native encoding - you really need something like Unicode.

Encoding Conversion
If a new encoding needs to be added to a library, it is only necessary to establish a mapping to and from Unicode, and not to every other encoding in the world.

Operations on wide characters
Asian languages have to use more than one byte per character. Most native encodings use a mix of single bytes for ASCII, and two bytes per Chinese character. Software that needs to slice strings can potentially cut a character in half. It is much, much easier to write string-processing operations in Unicode, where every character is the same width.
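In Python 3 terms, the slicing hazard above looks like this - a small sketch, assuming the bytes are UTF-8 (where a Chinese character takes three bytes rather than two, but the principle is the same):

```python
# Slicing a byte string can cut a multi-byte character in half;
# slicing a Unicode string cannot.
text = "中文abc"               # str: each item is one character
data = text.encode("utf-8")    # bytes: 中 and 文 take 3 bytes each in UTF-8

print(text[:1])    # "中" - a whole character
print(len(text))   # 5 characters
print(len(data))   # 9 bytes

# Cutting the byte string mid-character leaves invalid UTF-8:
try:
    data[:2].decode("utf-8")
except UnicodeDecodeError:
    print("sliced a character in half")
```

Working on the str and only encoding at the edges sidesteps the problem entirely.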
Operating System Compatibility
For the above reasons, operating systems and low-level APIs have been moving to support Unicode, and there are more and more functions around which expect Unicode strings as arguments, or which return them. "
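The "central common point" idea from the quote can be sketched in Python: to move text between two native encodings you decode into Unicode and encode back out, so each codec only ever needs a mapping to and from Unicode. Shift-JIS and EUC-JP are just example encodings here; any supported pair works the same way.

```python
# Convert text between two native encodings using Unicode as the pivot.
sjis_bytes = "日本語".encode("shift_jis")   # pretend these bytes came from a file

text = sjis_bytes.decode("shift_jis")       # native encoding -> Unicode
eucjp_bytes = text.encode("euc_jp")         # Unicode -> another native encoding

print(eucjp_bytes.decode("euc_jp"))         # round-trips to the same text
```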