=head1 NAME

perluniintro - Perl Unicode introduction

=head1 DESCRIPTION
7 This document gives a general idea of Unicode and how to use Unicode
in Perl. See L</Further Resources> for references to more in-depth
treatments of Unicode.

=head2 Unicode
13 Unicode is a character set standard which plans to codify all of the
14 writing systems of the world, plus many other symbols.
16 Unicode and ISO/IEC 10646 are coordinated standards that unify
17 almost all other modern character set standards,
18 covering more than 80 writing systems and hundreds of languages,
19 including all commercially-important modern languages. All characters
20 in the largest Chinese, Japanese, and Korean dictionaries are also
21 encoded. The standards will eventually cover almost all characters in
22 more than 250 writing systems and thousands of languages.
23 Unicode 1.0 was released in October 1991, and 6.0 in October 2010.
25 A Unicode I<character> is an abstract entity. It is not bound to any
26 particular integer width, especially not to the C language C<char>.
27 Unicode is language-neutral and display-neutral: it does not encode the
28 language of the text, and it does not generally define fonts or other graphical
layout details. Unicode operates on characters and on text built from
those characters.
32 Unicode defines characters like C<LATIN CAPITAL LETTER A> or C<GREEK
33 SMALL LETTER ALPHA> and unique numbers for the characters, in this
34 case 0x0041 and 0x03B1, respectively. These unique numbers are called
35 I<code points>. A code point is essentially the position of the
36 character within the set of all possible Unicode characters, and thus in
37 Perl, the term I<ordinal> is often used interchangeably with it.
39 The Unicode standard prefers using hexadecimal notation for the code
40 points. If numbers like C<0x0041> are unfamiliar to you, take a peek
41 at a later section, L</"Hexadecimal Notation">. The Unicode standard
42 uses the notation C<U+0041 LATIN CAPITAL LETTER A>, to give the
43 hexadecimal code point and the normative name of the character.
45 Unicode also defines various I<properties> for the characters, like
46 "uppercase" or "lowercase", "decimal digit", or "punctuation";
47 these properties are independent of the names of the characters.
48 Furthermore, various operations on the characters like uppercasing,
49 lowercasing, and collating (sorting) are defined.
51 A Unicode I<logical> "character" can actually consist of more than one internal
52 I<actual> "character" or code point. For Western languages, this is adequately
53 modelled by a I<base character> (like C<LATIN CAPITAL LETTER A>) followed
54 by one or more I<modifiers> (like C<COMBINING ACUTE ACCENT>). This sequence of
55 base character and modifiers is called a I<combining character
56 sequence>. Some non-western languages require more complicated
57 models, so Unicode created the I<grapheme cluster> concept, which was
58 later further refined into the I<extended grapheme cluster>. For
59 example, a Korean Hangul syllable is considered a single logical
60 character, but most often consists of three actual
61 Unicode characters: a leading consonant followed by an interior vowel followed
62 by a trailing consonant.
64 Whether to call these extended grapheme clusters "characters" depends on your
65 point of view. If you are a programmer, you probably would tend towards seeing
66 each element in the sequences as one unit, or "character". However from
67 the user's point of view, the whole sequence could be seen as one
68 "character" since that's probably what it looks like in the context of the
69 user's language. In this document, we take the programmer's point of
70 view: one "character" is one Unicode code point.
72 For some combinations of base character and modifiers, there are
73 I<precomposed> characters. There is a single character equivalent, for
74 example, to the sequence C<LATIN CAPITAL LETTER A> followed by
75 C<COMBINING ACUTE ACCENT>. It is called C<LATIN CAPITAL LETTER A WITH
76 ACUTE>. These precomposed characters are, however, only available for
77 some combinations, and are mainly meant to support round-trip
78 conversions between Unicode and legacy standards (like ISO 8859). Using
79 sequences, as Unicode does, allows for needing fewer basic building blocks
80 (code points) to express many more potential grapheme clusters. To
81 support conversion between equivalent forms, various I<normalization
82 forms> are also defined. Thus, C<LATIN CAPITAL LETTER A WITH ACUTE> is
83 in I<Normalization Form Composed>, (abbreviated NFC), and the sequence
84 C<LATIN CAPITAL LETTER A> followed by C<COMBINING ACUTE ACCENT>
85 represents the same character in I<Normalization Form Decomposed> (NFD).
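The equivalence of a precomposed character and its decomposed sequence
can be demonstrated with the L<Unicode::Normalize> module, which is
discussed further below; a minimal sketch:

    use Unicode::Normalize qw(NFC NFD);
    use charnames ':full';

    my $nfc = "\N{LATIN CAPITAL LETTER A WITH ACUTE}";    # one code point
    my $nfd = NFD($nfc);   # LATIN CAPITAL LETTER A, COMBINING ACUTE ACCENT
    print length($nfc), " ", length($nfd), "\n";          # 1 2
    print "equivalent\n" if NFC($nfd) eq $nfc;            # round-trips back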
87 Because of backward compatibility with legacy encodings, the "a unique
88 number for every character" idea breaks down a bit: instead, there is
89 "at least one number for every character". The same character could
90 be represented differently in several legacy encodings. The
converse is also not true: some code points do not have an assigned
92 character. Firstly, there are unallocated code points within
93 otherwise used blocks. Secondly, there are special Unicode control
94 characters that do not represent true characters.
96 A common myth about Unicode is that it is "16-bit", that is,
Unicode can only represent C<0x10000> (or 65536) characters, from
C<0x0000> to C<0xFFFF>. B<This is untrue.> Since Unicode 2.0 (July
99 1996), Unicode has been defined all the way up to 21 bits (C<0x10FFFF>),
100 and since Unicode 3.1 (March 2001), characters have been defined
101 beyond C<0xFFFF>. The first C<0x10000> characters are called the
102 I<Plane 0>, or the I<Basic Multilingual Plane> (BMP). With Unicode
103 3.1, 17 (yes, seventeen) planes in all were defined--but they are
104 nowhere near full of defined characters, yet.
106 Another myth is about Unicode blocks--that they have something to
107 do with languages--that each block would define the characters used
108 by a language or a set of languages. B<This is also untrue.>
109 The division into blocks exists, but it is almost completely
110 accidental--an artifact of how the characters have been and
111 still are allocated. Instead, there is a concept called I<scripts>, which is
112 more useful: there is C<Latin> script, C<Greek> script, and so on. Scripts
113 usually span varied parts of several blocks. For more information about
114 scripts, see L<perlunicode/Scripts>.
116 The Unicode code points are just abstract numbers. To input and
117 output these abstract numbers, the numbers must be I<encoded> or
118 I<serialised> somehow. Unicode defines several I<character encoding
119 forms>, of which I<UTF-8> is perhaps the most popular. UTF-8 is a
variable-length encoding that encodes Unicode characters as 1 to 4
bytes. Other encodings
include UTF-16 and UTF-32 and their big- and little-endian variants
(UTF-8 is byte-order independent). ISO/IEC 10646 defines the UCS-2
and UCS-4 encoding forms.
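As an illustration, the L<Encode> module (covered later in this
document) can show how many bytes a single character occupies in a few
of these encodings; a small sketch:

    use Encode qw(encode);

    my $alpha = chr(0x3B1);                          # GREEK SMALL LETTER ALPHA
    print length(encode("UTF-8",    $alpha)), "\n";  # 2 (bytes)
    print length(encode("UTF-16BE", $alpha)), "\n";  # 2
    print length(encode("UTF-32LE", $alpha)), "\n";  # 4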
126 For more information about encodings--for instance, to learn what
127 I<surrogates> and I<byte order marks> (BOMs) are--see L<perlunicode>.
129 =head2 Perl's Unicode Support
131 Starting from Perl 5.6.0, Perl has had the capacity to handle Unicode
132 natively. Perl 5.8.0, however, is the first recommended release for
133 serious Unicode work. The maintenance release 5.6.1 fixed many of the
134 problems of the initial Unicode implementation, but for example
135 regular expressions still do not work with Unicode in 5.6.1.
Perl 5.14.0 is the first release where Unicode support is (almost)
seamlessly integrated, without most of the earlier gotchas (the exception being
some differences in L<quotemeta|perlfunc/quotemeta>). To enable this
seamless support, you should C<use feature 'unicode_strings'> (which is
automatically selected if you C<use 5.012> or higher). See L<feature>.
(5.14 also fixes a number of bugs and departures from the Unicode
standard.)
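For instance, the feature changes the case mapping of code points in
the range 128 to 255; a small sketch, assuming an ASCII platform and no
locale in effect:

    # Without the feature, "\xE0" (LATIN SMALL LETTER A WITH GRAVE) is
    # treated as an uninterpreted byte, so uc() leaves it alone.
    my $byte_semantics = uc("\xE0");              # stays "\xE0"

    use feature 'unicode_strings';                # or: use 5.012;
    my $unicode_semantics = uc("\xE0");           # "\xC0"
    printf "%#x %#x\n", ord($byte_semantics), ord($unicode_semantics);  # 0xe0 0xc0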
Before Perl 5.8.0, C<use utf8> was used to declare
that operations in the current block or file would be Unicode-aware.
This model was found to be wrong, or at least clumsy: the "Unicodeness"
is now carried with the data, instead of being attached to the
operations.
149 Starting with Perl 5.8.0, only one case remains where an explicit C<use
150 utf8> is needed: if your Perl script itself is encoded in UTF-8, you can
151 use UTF-8 in your identifier names, and in string and regular expression
152 literals, by saying C<use utf8>. This is not the default because
153 scripts with legacy 8-bit data in them would break. See L<utf8>.
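A minimal sketch of such a script; for it to work, the file itself must
be saved in UTF-8 (the string literal below contains a non-ASCII
character):

    use utf8;                     # the source code of this script is UTF-8

    my $word = "café";            # a four-character string, not five bytes
    print length($word), "\n";    # 4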
155 =head2 Perl's Unicode Model
157 Perl supports both pre-5.6 strings of eight-bit native bytes, and
158 strings of Unicode characters. The general principle is that Perl tries
159 to keep its data as eight-bit bytes for as long as possible, but as soon
160 as Unicodeness cannot be avoided, the data is transparently upgraded
161 to Unicode. Prior to Perl 5.14, the upgrade was not completely
162 transparent (see L<perlunicode/The "Unicode Bug">), and for backwards
163 compatibility, full transparency is not gained unless C<use feature
'unicode_strings'> (see L<feature>) or C<use 5.012> (or higher) is
selected.
Internally, Perl currently uses either the platform's native eight-bit
character set (for example Latin-1) or UTF-8 to encode Unicode
strings. Specifically, if all code points in
170 the string are C<0xFF> or less, Perl uses the native eight-bit
171 character set. Otherwise, it uses UTF-8.
173 A user of Perl does not normally need to know nor care how Perl
174 happens to encode its internal strings, but it becomes relevant when
175 outputting Unicode strings to a stream without a PerlIO layer (one with
176 the "default" encoding). In such a case, the raw bytes used internally
177 (the native character set or UTF-8, as appropriate for each string)
178 will be used, and a "Wide character" warning will be issued if those
179 strings contain a character beyond 0x00FF.
For example,

    perl -e 'print "\x{DF}\n", "\x{0100}\x{DF}\n"'
produces a fairly useless mixture of native bytes and UTF-8, as well
as a warning:
188 Wide character in print at ...
190 To output UTF-8, use the C<:encoding> or C<:utf8> output layer. Prepending
192 binmode(STDOUT, ":utf8");
194 to this sample program ensures that the output is completely UTF-8,
195 and removes the program's warning.
197 You can enable automatic UTF-8-ification of your standard file
198 handles, default C<open()> layer, and C<@ARGV> by using either
199 the C<-C> command line switch or the C<PERL_UNICODE> environment
variable; see L<perlrun> for the documentation of the C<-C> switch.
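For example, either of the following (they are equivalent) asks for
UTF-8 on the standard streams, the default C<open()> layer, and
C<@ARGV>:

    perl -CSDA script.pl
    PERL_UNICODE=SDA perl script.pl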
Note that this means that Perl expects other software to work the same
way:
204 if Perl has been led to believe that STDIN should be UTF-8, but then
205 STDIN coming in from another command is not UTF-8, Perl will likely
206 complain about the malformed UTF-8.
208 All features that combine Unicode and I/O also require using the new
209 PerlIO feature. Almost all Perl 5.8 platforms do use PerlIO, though:
you can see whether yours does by running "perl -V" and looking for
C<useperlio=define>.
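For example:

    perl -V:useperlio     # prints useperlio='define'; if PerlIO is built in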
213 =head2 Unicode and EBCDIC
215 Perl 5.8.0 also supports Unicode on EBCDIC platforms. There,
216 Unicode support is somewhat more complex to implement since
217 additional conversions are needed at every step.
219 Later Perl releases have added code that will not work on EBCDIC platforms, and
220 no one has complained, so the divergence has continued. If you want to run
Perl on an EBCDIC platform, send email to perlbug@perl.org.
223 On EBCDIC platforms, the internal Unicode encoding form is UTF-EBCDIC
instead of UTF-8. The difference is that UTF-8 is "ASCII-safe" in
that ASCII characters encode to UTF-8 as-is, while UTF-EBCDIC is
"EBCDIC-safe".
228 =head2 Creating Unicode
230 To create Unicode characters in literals for code points above C<0xFF>,
231 use the C<\x{...}> notation in double-quoted strings:
233 my $smiley = "\x{263a}";
235 Similarly, it can be used in regular expression literals
237 $smiley =~ /\x{263a}/;
239 At run-time you can use C<chr()>:
241 my $hebrew_alef = chr(0x05d0);
243 See L</"Further Resources"> for how to find all these numeric codes.
Naturally, C<ord()> will do the reverse: it turns a character into
its code point.
248 Note that C<\x..> (no C<{}> and only two hexadecimal digits), C<\x{...}>,
249 and C<chr(...)> for arguments less than C<0x100> (decimal 256)
250 generate an eight-bit character for backward compatibility with older
251 Perls. For arguments of C<0x100> or more, Unicode characters are
252 always produced. If you want to force the production of Unicode
253 characters regardless of the numeric value, use C<pack("U", ...)>
254 instead of C<\x..>, C<\x{...}>, or C<chr()>.
256 You can also use the C<charnames> pragma to invoke characters
257 by name in double-quoted strings:
259 use charnames ':full';
260 my $arabic_alef = "\N{ARABIC LETTER ALEF}";
262 And, as mentioned above, you can also C<pack()> numbers into Unicode
265 my $georgian_an = pack("U", 0x10a0);
267 Note that both C<\x{...}> and C<\N{...}> are compile-time string
constants: you cannot use variables in them. If you want similar
269 run-time functionality, use C<chr()> and C<charnames::string_vianame()>.
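A small run-time sketch, using C<charnames::string_vianame()> and its
companion C<charnames::vianame()>, which returns the code point instead
of the character:

    use charnames ();

    my $name = "GEORGIAN LETTER AN";
    my $char = charnames::string_vianame($name);  # same as "\N{GEORGIAN LETTER AN}"
    printf "%s is U+%04X\n", $name, charnames::vianame($name);   # U+10A0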
271 If you want to force the result to Unicode characters, use the special
272 C<"U0"> prefix. It consumes no arguments but causes the following bytes
273 to be interpreted as the UTF-8 encoding of Unicode characters:
275 my $chars = pack("U0W*", 0x80, 0x42);
Likewise, you can stop such UTF-8 interpretation by using the special
C<"C0"> prefix.
280 =head2 Handling Unicode
282 Handling Unicode is for the most part transparent: just use the
283 strings as usual. Functions like C<index()>, C<length()>, and
284 C<substr()> will work on the Unicode characters; regular expressions
285 will work on the Unicode characters (see L<perlunicode> and L<perlretut>).
Note that Perl considers grapheme clusters to be separate characters, so for
example
290 use charnames ':full';
291 print length("\N{LATIN CAPITAL LETTER A}\N{COMBINING ACUTE ACCENT}"), "\n";
293 will print 2, not 1. The only exception is that regular expressions
294 have C<\X> for matching an extended grapheme cluster. (Thus C<\X> in a
regular expression would match the entire sequence of both the example
characters.)
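For example, a sketch that counts extended grapheme clusters rather
than code points:

    use charnames ':full';

    my $str = "\N{LATIN CAPITAL LETTER A}\N{COMBINING ACUTE ACCENT}";
    my $clusters = () = $str =~ /\X/g;
    print length($str), " ", $clusters, "\n";   # 2 1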
298 Life is not quite so transparent, however, when working with legacy
299 encodings, I/O, and certain special cases:
301 =head2 Legacy Encodings
303 When you combine legacy data and Unicode, the legacy data needs
304 to be upgraded to Unicode. Normally the legacy data is assumed to be
305 ISO 8859-1 (or EBCDIC, if applicable).
307 The C<Encode> module knows about many encodings and has interfaces
308 for doing conversions between those encodings:
    use Encode 'decode';
    $data = decode("iso-8859-3", $data); # convert from legacy to utf-8
=head2 Unicode I/O

Normally, writing out Unicode data
317 print FH $some_string_with_unicode, "\n";
319 produces raw bytes that Perl happens to use to internally encode the
320 Unicode string. Perl's internal encoding depends on the system as
321 well as what characters happen to be in the string at the time. If
322 any of the characters are at code points C<0x100> or above, you will get
323 a warning. To ensure that the output is explicitly rendered in the
324 encoding you desire--and to avoid the warning--open the stream with
325 the desired encoding. Some examples:
327 open FH, ">:utf8", "file";
329 open FH, ">:encoding(ucs2)", "file";
330 open FH, ">:encoding(UTF-8)", "file";
331 open FH, ">:encoding(shift_jis)", "file";
333 and on already open streams, use C<binmode()>:
335 binmode(STDOUT, ":utf8");
337 binmode(STDOUT, ":encoding(ucs2)");
338 binmode(STDOUT, ":encoding(UTF-8)");
339 binmode(STDOUT, ":encoding(shift_jis)");
341 The matching of encoding names is loose: case does not matter, and
342 many encodings have several aliases. Note that the C<:utf8> layer
343 must always be specified exactly like that; it is I<not> subject to
344 the loose matching of encoding names. Also note that currently C<:utf8> is unsafe for
345 input, because it accepts the data without validating that it is indeed valid
UTF-8; you should instead use C<:encoding(utf-8)> (with or without a
hyphen).
349 See L<PerlIO> for the C<:utf8> layer, L<PerlIO::encoding> and
350 L<Encode::PerlIO> for the C<:encoding()> layer, and
L<Encode::Supported> for many encodings supported by the C<Encode>
module.
354 Reading in a file that you know happens to be encoded in one of the
355 Unicode or legacy encodings does not magically turn the data into
356 Unicode in Perl's eyes. To do that, specify the appropriate
357 layer when opening files
359 open(my $fh,'<:encoding(utf8)', 'anything');
360 my $line_of_unicode = <$fh>;
362 open(my $fh,'<:encoding(Big5)', 'anything');
363 my $line_of_unicode = <$fh>;
365 The I/O layers can also be specified more flexibly with
366 the C<open> pragma. See L<open>, or look at the following example.
    use open ':encoding(utf8)'; # input/output default encoding will be UTF-8
    open X, ">file";
    print X chr(0x100), "\n";
    close X;
    open Y, "<file";
    printf "%#x\n", ord(<Y>); # this should print 0x100
377 With the C<open> pragma you can use the C<:locale> layer
    BEGIN { $ENV{LC_ALL} = $ENV{LANG} = 'ru_RU.KOI8-R' }
    # the :locale will probe the locale environment variables like LC_ALL
    use open OUT => ':locale'; # russki parusski
    open(O, ">koi8");
    print O chr(0x430); # Unicode CYRILLIC SMALL LETTER A = KOI8-R 0xc1
    close O;
    open(I, "<koi8");
    printf "%#x\n", ord(<I>); # this should print 0xc1
    close I;
390 These methods install a transparent filter on the I/O stream that
391 converts data from the specified encoding when it is read in from the
392 stream. The result is always Unicode.
394 The L<open> pragma affects all the C<open()> calls after the pragma by
395 setting default layers. If you want to affect only certain
396 streams, use explicit layers directly in the C<open()> call.
398 You can switch encodings on an already opened stream by using
399 C<binmode()>; see L<perlfunc/binmode>.
401 The C<:locale> does not currently (as of Perl 5.8.0) work with
402 C<open()> and C<binmode()>, only with the C<open> pragma. The
403 C<:utf8> and C<:encoding(...)> methods do work with all of C<open()>,
404 C<binmode()>, and the C<open> pragma.
406 Similarly, you may use these I/O layers on output streams to
407 automatically convert Unicode to the specified encoding when it is
408 written to the stream. For example, the following snippet copies the
409 contents of the file "text.jis" (encoded as ISO-2022-JP, aka JIS) to
410 the file "text.utf8", encoded as UTF-8:
412 open(my $nihongo, '<:encoding(iso-2022-jp)', 'text.jis');
413 open(my $unicode, '>:utf8', 'text.utf8');
414 while (<$nihongo>) { print $unicode $_ }
The naming of encodings, both by the C<open()> and by the C<open>
pragma, allows for flexible names: C<koi8-r> and C<KOI8R> will both be
understood.
Common encodings recognized by ISO, MIME, IANA, and various other
standardisation organisations are supported; for a more detailed
422 list see L<Encode::Supported>.
424 C<read()> reads characters and returns the number of characters.
C<seek()> and C<tell()> operate on byte counts, as do C<sysread()>
and C<sysseek()>.
428 Notice that because of the default behaviour of not doing any
429 conversion upon input if there is no default layer,
430 it is easy to mistakenly write code that keeps on expanding a file
431 by repeatedly encoding the data:
    open F, "file";
    local $/; ## read in the whole file of 8-bit characters
    $t = <F>;
    close F;
    open F, ">:encoding(utf8)", "file";
    print F $t; ## convert to UTF-8 on output
442 If you run this code twice, the contents of the F<file> will be twice
443 UTF-8 encoded. A C<use open ':encoding(utf8)'> would have avoided the
bug, as would explicitly opening F<file> for input as UTF-8.
446 B<NOTE>: the C<:utf8> and C<:encoding> features work only if your
Perl has been built with the new PerlIO feature (which is the default
on most systems).
450 =head2 Displaying Unicode As Text
452 Sometimes you might want to display Perl scalars containing Unicode as
453 simple ASCII (or EBCDIC) text. The following subroutine converts
454 its argument so that Unicode characters with code points greater than
455 255 are displayed as C<\x{...}>, control characters (like C<\n>) are
456 displayed as C<\x..>, and the rest of the characters as themselves:
    sub nice_string {
        join("",
          map { $_ > 255 ?                  # if wide character...
                sprintf("\\x{%04X}", $_) :  # \x{...}
                chr($_) =~ /[[:cntrl:]]/ ?  # else if control character...
                sprintf("\\x%02X", $_) :    # \x..
                quotemeta(chr($_))          # else quoted or as themselves
              } unpack("W*", $_[0]));       # unpack Unicode characters
    }
For example,

    nice_string("foo\x{100}bar\n")

returns the string

    'foo\x{0100}bar\x0A'

which is ready to be printed.
=head2 Special Cases

Bit Complement Operator ~ And vec()
486 The bit complement operator C<~> may produce surprising results if
487 used on strings containing characters with ordinal values above
488 255. In such a case, the results are consistent with the internal
489 encoding of the characters, but not with much else. So don't do
490 that. Similarly for C<vec()>: you will be operating on the
491 internally-encoded bit patterns of the Unicode characters, not on
492 the code point values, which is very probably not what you want.
496 Peeking At Perl's Internal Encoding
498 Normal users of Perl should never care how Perl encodes any particular
499 Unicode string (because the normal ways to get at the contents of a
500 string with Unicode--via input and output--should always be via
501 explicitly-defined I/O layers). But if you must, there are two
502 ways of looking behind the scenes.
504 One way of peeking inside the internal encoding of Unicode characters
is to use C<unpack("C*", ...)> to get the bytes of whatever the string
encoding happens to be, or C<unpack("U0..", ...)> to get the bytes of the
UTF-8 encoding:
509 # this prints c4 80 for the UTF-8 bytes 0xc4 0x80
510 print join(" ", unpack("U0(H2)*", pack("U", 0x100))), "\n";
512 Yet another way would be to use the Devel::Peek module:
514 perl -MDevel::Peek -e 'Dump(chr(0x100))'
516 That shows the C<UTF8> flag in FLAGS and both the UTF-8 bytes
517 and Unicode characters in C<PV>. See also later in this document
518 the discussion about the C<utf8::is_utf8()> function.
522 =head2 Advanced Topics
String Equivalence

The question of string equivalence turns somewhat complicated
531 in Unicode: what do you mean by "equal"?
533 (Is C<LATIN CAPITAL LETTER A WITH ACUTE> equal to
534 C<LATIN CAPITAL LETTER A>?)
536 The short answer is that by default Perl compares equivalence (C<eq>,
537 C<ne>) based only on code points of the characters. In the above
538 case, the answer is no (because 0x00C1 != 0x0041). But sometimes, any
539 CAPITAL LETTER A's should be considered equal, or even A's of any case.
541 The long answer is that you need to consider character normalization
542 and casing issues: see L<Unicode::Normalize>, Unicode Technical Report #15,
543 L<Unicode Normalization Forms|http://www.unicode.org/unicode/reports/tr15> and
544 sections on case mapping in the L<Unicode Standard|http://www.unicode.org>.
546 As of Perl 5.8.0, the "Full" case-folding of I<Case
547 Mappings/SpecialCasing> is implemented, but bugs remain in C<qr//i> with them,
548 mostly fixed by 5.14.
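For example, a sketch comparing the two spellings of the accented
letter mentioned above, before and after normalization:

    use Unicode::Normalize 'NFD';
    use charnames ':full';

    my $composed   = "\N{LATIN CAPITAL LETTER A WITH ACUTE}";
    my $decomposed = "\N{LATIN CAPITAL LETTER A}\N{COMBINING ACUTE ACCENT}";

    print "raw:        ", ($composed eq $decomposed           ? "eq" : "ne"), "\n"; # ne
    print "normalized: ", (NFD($composed) eq NFD($decomposed) ? "eq" : "ne"), "\n"; # eq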
Text Collation

People like to see their strings nicely sorted--or as Unicode
555 parlance goes, collated. But again, what do you mean by collate?
557 (Does C<LATIN CAPITAL LETTER A WITH ACUTE> come before or after
558 C<LATIN CAPITAL LETTER A WITH GRAVE>?)
560 The short answer is that by default, Perl compares strings (C<lt>,
561 C<le>, C<cmp>, C<ge>, C<gt>) based only on the code points of the
562 characters. In the above case, the answer is "after", since
563 C<0x00C1> > C<0x00C0>.
565 The long answer is that "it depends", and a good answer cannot be
566 given without knowing (at the very least) the language context.
567 See L<Unicode::Collate>, and I<Unicode Collation Algorithm>
568 L<http://www.unicode.org/unicode/reports/tr10/>
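A small sketch with L<Unicode::Collate>, using its default,
language-independent collation table:

    use Unicode::Collate;

    my @words = ("caf\x{E9}", "cafe", "caff");  # \x{E9} is LATIN SMALL LETTER E WITH ACUTE

    my @by_code_point = sort @words;            # "cafe", "caff", "caf\x{E9}"
    my @collated = Unicode::Collate->new->sort(@words);
                                                # "cafe", "caf\x{E9}", "caff"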
578 Character Ranges and Classes
580 Character ranges in regular expression bracketed character classes ( e.g.,
581 C</[a-z]/>) and in the C<tr///> (also known as C<y///>) operator are not
582 magically Unicode-aware. What this means is that C<[A-Za-z]> will not
583 magically start to mean "all alphabetic letters" (not that it does mean that
584 even for 8-bit characters; for those, if you are using locales (L<perllocale>),
585 use C</[[:alpha:]]/>; and if not, use the 8-bit-aware property C<\p{alpha}>).
587 All the properties that begin with C<\p> (and its inverse C<\P>) are actually
character classes that are Unicode-aware. There are dozens of them; see
L<perluniprops>.
591 You can use Unicode code points as the end points of character ranges, and the
592 range will include all Unicode code points that lie between those end points.
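Two small illustrations: a Unicode-aware property match, and a
bracketed character class whose end points lie above C<0xFF>:

    # DEVANAGARI LETTER A is alphabetic, though well outside [A-Za-z]
    print "alphabetic\n" if "\x{905}" =~ /\p{Alphabetic}/;

    # a range covering the Greek and Coptic block
    print "greek\n" if "\x{3B1}" =~ /[\x{370}-\x{3FF}]/;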
596 String-To-Number Conversions
598 Unicode does define several other decimal--and numeric--characters
599 besides the familiar 0 to 9, such as the Arabic and Indic digits.
600 Perl does not support string-to-number conversion for digits other
601 than ASCII 0 to 9 (and ASCII a to f for hexadecimal).
602 To get safe conversions from any Unicode string, use
603 L<Unicode::UCD/num()>.
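For example, a sketch using C<num()> on a string of Thai digits:

    use Unicode::UCD 'num';
    use charnames ':full';

    my $thai = "\N{THAI DIGIT FOUR}\N{THAI DIGIT TWO}";
    print 0 + $thai, "\n";    # 0: ordinary numification does not understand it
    print num($thai), "\n";   # 42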
607 =head2 Questions With Answers
613 Will My Old Scripts Break?
615 Very probably not. Unless you are generating Unicode characters
616 somehow, old behaviour should be preserved. About the only behaviour
617 that has changed and which could start generating Unicode is the old
behaviour of C<chr()> where supplying an argument of more than 255
produced a character modulo 255. C<chr(300)>, for example, was equal
to C<chr(45)> or "-" (in ASCII); now it is LATIN CAPITAL LETTER I WITH
BREVE.
625 How Do I Make My Scripts Work With Unicode?
627 Very little work should be needed since nothing changes until you
628 generate Unicode data. The most important thing is getting input as
629 Unicode; for that, see the earlier I/O discussion.
630 To get full seamless Unicode support, add
C<use feature 'unicode_strings'> (or C<use 5.012> or higher) to your
script.
636 How Do I Know Whether My String Is In Unicode?
638 You shouldn't have to care. But you may if your Perl is before 5.14.0
639 or you haven't specified C<use feature 'unicode_strings'> or C<use
640 5.012> (or higher) because otherwise the semantics of the code points
641 in the range 128 to 255 are different depending on
642 whether the string they are contained within is in Unicode or not.
643 (See L<perlunicode/When Unicode Does Not Happen>.)
645 To determine if a string is in Unicode, use:
647 print utf8::is_utf8($string) ? 1 : 0, "\n";
649 But note that this doesn't mean that any of the characters in the
string are necessarily UTF-8 encoded, or that any of the characters have
651 code points greater than 0xFF (255) or even 0x80 (128), or that the
652 string has any characters at all. All the C<is_utf8()> does is to
653 return the value of the internal "utf8ness" flag attached to the
654 C<$string>. If the flag is off, the bytes in the scalar are interpreted
655 as a single byte encoding. If the flag is on, the bytes in the scalar
656 are interpreted as the (variable-length, potentially multi-byte) UTF-8 encoded
657 code points of the characters. Bytes added to a UTF-8 encoded string are
658 automatically upgraded to UTF-8. If mixed non-UTF-8 and UTF-8 scalars
659 are merged (double-quoted interpolation, explicit concatenation, or
660 printf/sprintf parameter substitution), the result will be UTF-8 encoded
as if copies of the byte strings were upgraded to UTF-8: for example,

    $a = "ab\x80c";
    $b = "\x{100}";
    print "$a = $b\n";

the output string will be UTF-8-encoded C<ab\x80c = \x{100}\n>, but
668 C<$a> will stay byte-encoded.
670 Sometimes you might really need to know the byte length of a string
671 instead of the character length. For that use either the
672 C<Encode::encode_utf8()> function or the C<bytes> pragma
673 and the C<length()> function:
    my $unicode = chr(0x100);
    print length($unicode), "\n"; # will print 1
    require Encode;
    print length(Encode::encode_utf8($unicode)), "\n"; # will print 2
    use bytes;
    print length($unicode), "\n"; # will also print 2
                                  # (the 0xC4 0x80 of the UTF-8)
686 How Do I Find Out What Encoding a File Has?
688 You might try L<Encode::Guess>, but it has a number of limitations.
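For example, a sketch of its basic interface, with C<$octets> standing
for the bytes to be examined; you must supply the list of suspect
encodings yourself:

    use Encode::Guess qw(euc-jp shiftjis 7bit-jis);

    my $decoder = Encode::Guess->guess($octets);
    if (ref $decoder) {
        my $text = $decoder->decode($octets);  # we got an encoding object
    } else {
        warn "could not guess: $decoder";      # $decoder holds an error message
    }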
692 How Do I Detect Data That's Not Valid In a Particular Encoding?
694 Use the C<Encode> package to try converting it.
697 use Encode 'decode_utf8';
    if (eval { decode_utf8($string, Encode::FB_CROAK); 1 }) {
        # $string is valid utf8
    } else {
        # $string is not valid utf8
    }
705 Or use C<unpack> to try decoding it:
    use warnings;
    @chars = unpack("C0U*", $string_of_bytes_that_I_think_is_utf8);
710 If invalid, a C<Malformed UTF-8 character> warning is produced. The "C0" means
711 "process the string character per character". Without that, the
712 C<unpack("U*", ...)> would work in C<U0> mode (the default if the format
713 string starts with C<U>) and it would return the bytes making up the UTF-8
714 encoding of the target string, something that will always work.
718 How Do I Convert Binary Data Into a Particular Encoding, Or Vice Versa?
720 This probably isn't as useful as you might think.
721 Normally, you shouldn't need to.
723 In one sense, what you are asking doesn't make much sense: encodings
724 are for characters, and binary data are not "characters", so converting
725 "data" into some encoding isn't meaningful unless you know in what
726 character set and encoding the binary data is in, in which case it's
727 not just binary data, now is it?
729 If you have a raw sequence of bytes that you know should be
730 interpreted via a particular encoding, you can use C<Encode>:
732 use Encode 'from_to';
733 from_to($data, "iso-8859-1", "utf-8"); # from latin-1 to utf-8
735 The call to C<from_to()> changes the bytes in C<$data>, but nothing
736 material about the nature of the string has changed as far as Perl is
737 concerned. Both before and after the call, the string C<$data>
738 contains just a bunch of 8-bit bytes. As far as Perl is concerned,
739 the encoding of the string remains as "system-native 8-bit bytes".
741 You might relate this to a fictional 'Translate' module:
    use Translate;
    my $phrase = "Yes";
    Translate::from_to($phrase, 'english', 'deutsch');
    ## phrase now contains "Ja"
The contents of the string change, but not the nature of the string.
749 Perl doesn't know any more after the call than before that the
750 contents of the string indicates the affirmative.
752 Back to converting data. If you have (or want) data in your system's
753 native 8-bit encoding (e.g. Latin-1, EBCDIC, etc.), you can use
754 pack/unpack to convert to/from Unicode.
756 $native_string = pack("W*", unpack("U*", $Unicode_string));
757 $Unicode_string = pack("U*", unpack("W*", $native_string));
759 If you have a sequence of bytes you B<know> is valid UTF-8,
760 but Perl doesn't know it yet, you can make Perl a believer, too:
762 use Encode 'decode_utf8';
763 $Unicode = decode_utf8($bytes);
or:

    $Unicode = pack("U0a*", $bytes);
769 You can find the bytes that make up a UTF-8 sequence with
771 @bytes = unpack("C*", $Unicode_string)
773 and you can create well-formed Unicode with
775 $Unicode_string = pack("U*", 0xff, ...)
779 How Do I Display Unicode? How Do I Input Unicode?
781 See L<http://www.alanwood.net/unicode/> and
782 L<http://www.cl.cam.ac.uk/~mgk25/unicode.html>
786 How Does Unicode Work With Traditional Locales?
788 Perl tries to keep the two separated. Code points that are above 255
789 are treated as Unicode; those below 256, generally as locale. This
790 works reasonably well except in some case-insensitive regular expression
pattern matches that in Unicode would cross the 255/256 boundary.
793 Also, the C<\p{}> and C<\N{}> constructs silently assume Unicode values
794 even for code points below 256.
795 See also L<perlrun> for the
796 description of the C<-C> switch and its environment counterpart,
C<$ENV{PERL_UNICODE}>, to see how to enable various Unicode features,
798 for example by using locale settings.
802 =head2 Hexadecimal Notation
804 The Unicode standard prefers using hexadecimal notation because
805 that more clearly shows the division of Unicode into blocks of 256 characters.
806 Hexadecimal is also simply shorter than decimal. You can use decimal
807 notation, too, but learning to use hexadecimal just makes life easier
with the Unicode standard. The C<U+HHHH> notation uses hexadecimal,
for example.
The C<0x> prefix means a hexadecimal number; the digits are 0-9 I<and>
812 a-f (or A-F, case doesn't matter). Each hexadecimal digit represents
813 four bits, or half a byte. C<print 0x..., "\n"> will show a
814 hexadecimal number in decimal, and C<printf "%x\n", $decimal> will
815 show a decimal number in hexadecimal. If you have just the
816 "hex digits" of a hexadecimal number, you can use the C<hex()> function.
818 print 0x0009, "\n"; # 9
819 print 0x000a, "\n"; # 10
820 print 0x000f, "\n"; # 15
821 print 0x0010, "\n"; # 16
822 print 0x0011, "\n"; # 17
823 print 0x0100, "\n"; # 256
825 print 0x0041, "\n"; # 65
827 printf "%x\n", 65; # 41
828 printf "%#x\n", 65; # 0x41
830 print hex("41"), "\n"; # 65
832 =head2 Further Resources
Unicode Consortium

L<http://www.unicode.org/>

Unicode FAQ

L<http://www.unicode.org/unicode/faq/>

Unicode Glossary

L<http://www.unicode.org/glossary/>
856 Unicode Recommended Reading List
858 The Unicode Consortium has a list of articles and books, some of which
give a much more in-depth treatment of Unicode:
860 L<http://unicode.org/resources/readinglist.html>
864 Unicode Useful Resources
866 L<http://www.unicode.org/unicode/onlinedat/resources.html>
870 Unicode and Multilingual Support in HTML, Fonts, Web Browsers and Other Applications
872 L<http://www.alanwood.net/unicode/>
876 UTF-8 and Unicode FAQ for Unix/Linux
878 L<http://www.cl.cam.ac.uk/~mgk25/unicode.html>
882 Legacy Character Sets
884 L<http://www.czyborra.com/>
885 L<http://www.eki.ee/letter/>
889 You can explore various information from the Unicode data files using
890 the C<Unicode::UCD> module.
894 =head1 UNICODE IN OLDER PERLS
896 If you cannot upgrade your Perl to 5.8.0 or later, you can still
897 do some Unicode processing by using the modules C<Unicode::String>,
898 C<Unicode::Map8>, and C<Unicode::Map>, available from CPAN.
899 If you have the GNU recode installed, you can also use the
900 Perl front-end C<Convert::Recode> for character conversions.
902 The following are fast conversions from ISO 8859-1 (Latin-1) bytes
to UTF-8 bytes and back; the code works even with older Perl 5 versions.
905 # ISO 8859-1 to UTF-8
906 s/([\x80-\xFF])/chr(0xC0|ord($1)>>6).chr(0x80|ord($1)&0x3F)/eg;
908 # UTF-8 to ISO 8859-1
909 s/([\xC2\xC3])([\x80-\xBF])/chr(ord($1)<<6&0xC0|ord($2)&0x3F)/eg;
=head1 SEE ALSO

L<perlunitut>, L<perlunicode>, L<Encode>, L<open>, L<utf8>, L<bytes>,
L<perlretut>, L<perlrun>, L<Unicode::Collate>, L<Unicode::Normalize>,
L<Unicode::UCD>
917 =head1 ACKNOWLEDGMENTS
919 Thanks to the kind readers of the perl5-porters@perl.org,
920 perl-unicode@perl.org, linux-utf8@nl.linux.org, and unicore@unicode.org
921 mailing lists for their valuable feedback.
923 =head1 AUTHOR, COPYRIGHT, AND LICENSE
925 Copyright 2001-2011 Jarkko Hietaniemi E<lt>jhi@iki.fiE<gt>
927 This document may be distributed under the same terms as Perl itself.