3 perlunicode - Unicode support in Perl
7 =head2 Important Caveats
9 Unicode support is an extensive requirement. While Perl does not
10 implement the Unicode standard or the accompanying technical reports
11 from cover to cover, Perl does support many Unicode features.
People who want to learn to use Unicode in Perl should probably read
14 the L<Perl Unicode tutorial, perlunitut|perlunitut> and
15 L<perluniintro>, before reading
16 this reference document.
18 Also, the use of Unicode may present security issues that aren't obvious.
19 Read L<Unicode Security Considerations|http://www.unicode.org/reports/tr36>.
23 =item Safest if you "use feature 'unicode_strings'"
25 In order to preserve backward compatibility, Perl does not turn
26 on full internal Unicode support unless the pragma
27 C<use feature 'unicode_strings'> is specified. (This is automatically
28 selected if you use C<use 5.012> or higher.) Failure to do this can
trigger surprising results. See L</The "Unicode Bug"> below.
31 This pragma doesn't affect I/O, and there are still several places
32 where Unicode isn't fully supported, such as in filenames.
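For example, here is a minimal sketch (an illustration, not taken from the
pragma's own documentation) of how the feature changes the result of C<uc()>
for a code point in the 128-255 range:

    my $s = "\xe9";                     # LATIN SMALL LETTER E WITH ACUTE
    {
        no feature 'unicode_strings';
        printf "%02X\n", ord uc $s;     # E9: unchanged under byte semantics
    }
    {
        use feature 'unicode_strings';
        printf "%02X\n", ord uc $s;     # C9: LATIN CAPITAL LETTER E WITH ACUTE
    }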
34 =item Input and Output Layers
36 Perl knows when a filehandle uses Perl's internal Unicode encodings
37 (UTF-8, or UTF-EBCDIC if in EBCDIC) if the filehandle is opened with
38 the ":encoding(utf8)" layer. Other encodings can be converted to Perl's
39 encoding on input or from Perl's encoding on output by use of the
40 ":encoding(...)" layer. See L<open>.
42 To indicate that Perl source itself is in UTF-8, use C<use utf8;>.
44 =item C<use utf8> still needed to enable UTF-8/UTF-EBCDIC in scripts
46 As a compatibility measure, the C<use utf8> pragma must be explicitly
47 included to enable recognition of UTF-8 in the Perl scripts themselves
48 (in string or regular expression literals, or in identifier names) on
49 ASCII-based machines or to recognize UTF-EBCDIC on EBCDIC-based
50 machines. B<These are the only times when an explicit C<use utf8>
51 is needed.> See L<utf8>.
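A minimal sketch, assuming the source file itself is saved as UTF-8:

    use utf8;                      # the literals below are UTF-8 in the source

    my $word = "café";             # a literal containing a non-ASCII character
    print length($word), "\n";     # 4 characters, not 5 bytes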
53 =item BOM-marked scripts and UTF-16 scripts autodetected
If a Perl script begins with the Unicode BOM (UTF-16LE, UTF-16BE,
56 or UTF-8), or if the script looks like non-BOM-marked UTF-16 of either
57 endianness, Perl will correctly read in the script as Unicode.
58 (BOMless UTF-8 cannot be effectively recognized or differentiated from
59 ISO 8859-1 or other eight-bit encodings.)
61 =item C<use encoding> needed to upgrade non-Latin-1 byte strings
63 By default, there is a fundamental asymmetry in Perl's Unicode model:
64 implicit upgrading from byte strings to Unicode strings assumes that
65 they were encoded in I<ISO 8859-1 (Latin-1)>, but Unicode strings are
66 downgraded with UTF-8 encoding. This happens because the first 256
code points in Unicode happen to agree with Latin-1.
69 See L</"Byte and Character Semantics"> for more details.
73 =head2 Byte and Character Semantics
75 Beginning with version 5.6, Perl uses logically-wide characters to
76 represent strings internally.
78 Starting in Perl 5.14, Perl-level operations work with
79 characters rather than bytes within the scope of a
80 C<L<use feature 'unicode_strings'|feature>> (or equivalently
81 C<use 5.012> or higher). (This is not true if bytes have been
82 explicitly requested by C<L<use bytes|bytes>>, nor necessarily true
83 for interactions with the platform's operating system.)
85 For earlier Perls, and when C<unicode_strings> is not in effect, Perl
86 provides a fairly safe environment that can handle both types of
87 semantics in programs. For operations where Perl can unambiguously
88 decide that the input data are characters, Perl switches to character
89 semantics. For operations where this determination cannot be made
90 without additional information from the user, Perl decides in favor of
91 compatibility and chooses to use byte semantics.
93 When C<use locale> is in effect (which overrides
C<use feature 'unicode_strings'> in the same scope), Perl uses the semantics
associated with the current locale. Otherwise, Perl uses the platform's native
97 byte semantics for characters whose code points are less than 256, and
98 Unicode semantics for those greater than 255. On EBCDIC platforms, this
99 is almost seamless, as the EBCDIC code pages that Perl handles are
100 equivalent to Unicode's first 256 code points. (The exception is that
101 EBCDIC regular expression case-insensitive matching rules are not as
robust as Unicode's.) But on ASCII platforms, Perl uses US-ASCII
103 (or Basic Latin in Unicode terminology) byte semantics, meaning that characters
104 whose ordinal numbers are in the range 128 - 255 are undefined except for their
105 ordinal numbers. This means that none have case (upper and lower), nor are any
106 a member of character classes, like C<[:alpha:]> or C<\w>. (But all do belong
107 to the C<\W> class or the Perl regular expression extension C<[:^alpha:]>.)
109 This behavior preserves compatibility with earlier versions of Perl,
110 which allowed byte semantics in Perl operations only if
111 none of the program's inputs were marked as being a source of Unicode
112 character data. Such data may come from filehandles, from calls to
113 external programs, from information provided by the system (such as %ENV),
114 or from literals and constants in the source text.
116 The C<utf8> pragma is primarily a compatibility device that enables
117 recognition of UTF-(8|EBCDIC) in literals encountered by the parser.
118 Note that this pragma is only required while Perl defaults to byte
119 semantics; when character semantics become the default, this pragma
120 may become a no-op. See L<utf8>.
122 If strings operating under byte semantics and strings with Unicode
123 character data are concatenated, the new string will have
124 character semantics. This can cause surprises: See L</BUGS>, below.
125 You can choose to be warned when this happens. See L<encoding::warnings>.
127 Under character semantics, many operations that formerly operated on
128 bytes now operate on characters. A character in Perl is
129 logically just a number ranging from 0 to 2**31 or so. Larger
130 characters may encode into longer sequences of bytes internally, but
131 this internal detail is mostly hidden for Perl code.
132 See L<perluniintro> for more.
134 =head2 Effects of Character Semantics
136 Character semantics have the following effects:
142 Strings--including hash keys--and regular expression patterns may
143 contain characters that have an ordinal value larger than 255.
145 If you use a Unicode editor to edit your program, Unicode characters may
146 occur directly within the literal strings in UTF-8 encoding, or UTF-16.
147 (The former requires a BOM or C<use utf8>, the latter requires a BOM.)
149 Unicode characters can also be added to a string by using the C<\N{U+...}>
150 notation. The Unicode code for the desired character, in hexadecimal,
151 should be placed in the braces, after the C<U>. For instance, a smiley face is
154 Alternatively, you can use the C<\x{...}> notation for characters 0x100 and
155 above. For characters below 0x100 you may get byte semantics instead of
156 character semantics; see L</The "Unicode Bug">. On EBCDIC machines there is
157 the additional problem that the value for such characters gives the EBCDIC
158 character rather than the Unicode one, thus it is more portable to use
159 C<\N{U+...}> instead.
Additionally, if you

    use charnames ':full';

you can use the C<\N{...}> notation and put the official Unicode
character name within the braces, such as C<\N{WHITE SMILING FACE}>.
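For example (a minimal sketch):

    use charnames ':full';
    my $smiley = "\N{WHITE SMILING FACE}";   # the same character as "\N{U+263A}"
    printf "U+%04X\n", ord $smiley;          # prints U+263A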
171 If an appropriate L<encoding> is specified, identifiers within the
172 Perl script may contain Unicode alphanumeric characters, including
ideographs. Perl does not currently attempt to canonicalize variable names.
178 Regular expressions match characters instead of bytes. "." matches
179 a character instead of a byte.
183 Bracketed character classes in regular expressions match characters instead of
184 bytes and match against the character properties specified in the
185 Unicode properties database. C<\w> can be used to match a Japanese
186 ideograph, for instance.
190 Named Unicode properties, scripts, and block ranges may be used (like bracketed
191 character classes) by using the C<\p{}> "matches property" construct and
192 the C<\P{}> negation, "doesn't match property".
193 See L</"Unicode Character Properties"> for more details.
195 You can define your own character properties and use them
196 in the regular expression with the C<\p{}> or C<\P{}> construct.
197 See L</"User-Defined Character Properties"> for more details.
201 The special pattern C<\X> matches a logical character, an "extended grapheme
202 cluster" in Standardese. In Unicode what appears to the user to be a single
203 character, for example an accented C<G>, may in fact be composed of a sequence
204 of characters, in this case a C<G> followed by an accent character. C<\X>
205 will match the entire sequence.
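A minimal sketch of the difference between code points and logical characters:

    my $g = "G\x{0301}";            # "G" followed by COMBINING ACUTE ACCENT
    print length($g), "\n";         # 2: two code points
    my @clusters = $g =~ /(\X)/g;
    print scalar @clusters, "\n";   # 1: one extended grapheme cluster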
209 The C<tr///> operator translates characters instead of bytes. Note
210 that the C<tr///CU> functionality has been removed. For similar
211 functionality see pack('U0', ...) and pack('C0', ...).
215 Case translation operators use the Unicode case translation tables
216 when character input is provided. Note that C<uc()>, or C<\U> in
217 interpolated strings, translates to uppercase, while C<ucfirst>,
218 or C<\u> in interpolated strings, translates to titlecase in languages
219 that make the distinction (which is equivalent to uppercase in languages
220 without the distinction).
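For example, a minimal sketch using a character whose titlecase differs from
its uppercase:

    my $dz = "\x{01C6}";               # LATIN SMALL LETTER DZ WITH CARON
    printf "%04X\n", ord uc $dz;       # 01C4: the uppercase (capital DZ) form
    printf "%04X\n", ord ucfirst $dz;  # 01C5: the titlecase (capital D, small z) form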
224 Most operators that deal with positions or lengths in a string will
225 automatically switch to using character positions, including
226 C<chop()>, C<chomp()>, C<substr()>, C<pos()>, C<index()>, C<rindex()>,
227 C<sprintf()>, C<write()>, and C<length()>. An operator that
228 specifically does not switch is C<vec()>. Operators that really don't
229 care include operators that treat strings as a bucket of bits such as
230 C<sort()>, and operators dealing with filenames.
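A minimal sketch:

    my $str = "\x{263A}\x{263B}x";             # three characters
    print length($str), "\n";                  # 3
    print index($str, "x"), "\n";              # 2
    printf "%04X\n", ord substr($str, 1, 1);   # 263B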
234 The C<pack()>/C<unpack()> letter C<C> does I<not> change, since it is often
235 used for byte-oriented formats. Again, think C<char> in the C language.
237 There is a new C<U> specifier that converts between Unicode characters
238 and code points. There is also a C<W> specifier that is the equivalent of
239 C<chr>/C<ord> and properly handles character values even if they are above 255.
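For instance (a minimal sketch):

    my $char = pack("U", 0x263A);          # one character, U+263A
    print $char eq chr(0x263A) ? "same" : "different", "\n";   # same

    my ($code) = unpack("W", "\x{263A}");  # like ord(): works above 255
    printf "%04X\n", $code;                # 263A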
243 The C<chr()> and C<ord()> functions work on characters, similar to
244 C<pack("W")> and C<unpack("W")>, I<not> C<pack("C")> and
245 C<unpack("C")>. C<pack("C")> and C<unpack("C")> are methods for
246 emulating byte-oriented C<chr()> and C<ord()> on Unicode strings.
247 While these methods reveal the internal encoding of Unicode strings,
248 that is not something one normally needs to care about at all.
252 The bit string operators, C<& | ^ ~>, can operate on character data.
However, for backward compatibility reasons (bit string operations on
characters that are all less than 256 in ordinal value are carried out
byte-wide), one should not use C<~> (the bit complement) on operands that mix
characters with ordinal values below 256 and characters with ordinal values
of 256 or above. Most importantly,
257 DeMorgan's laws (C<~($x|$y) eq ~$x&~$y> and C<~($x&$y) eq ~$x|~$y>)
258 will not hold. The reason for this mathematical I<faux pas> is that
259 the complement cannot return B<both> the 8-bit (byte-wide) bit
260 complement B<and> the full character-wide bit complement.
264 There is a CPAN module, L<Unicode::Casing>, which allows you to define
265 your own mappings to be used in C<lc()>, C<lcfirst()>, C<uc()>, and
266 C<ucfirst()> (or their double-quoted string inlined versions such as
267 C<\U>). (Prior to Perl 5.16, this functionality was partially provided
268 in the Perl core, but suffered from a number of insurmountable
269 drawbacks, so the CPAN module was written instead.)
277 And finally, C<scalar reverse()> reverses by character rather than by byte.
281 =head2 Unicode Character Properties
283 (The only time that Perl considers a sequence of individual code
284 points as a single logical character is in the C<\X> construct, already
mentioned above. Therefore "character" in this discussion means a single
Unicode code point.)
288 Very nearly all Unicode character properties are accessible through
289 regular expressions by using the C<\p{}> "matches property" construct
290 and the C<\P{}> "doesn't match property" for its negation.
292 For instance, C<\p{Uppercase}> matches any single character with the Unicode
293 "Uppercase" property, while C<\p{L}> matches any character with a
294 General_Category of "L" (letter) property. Brackets are not
295 required for single letter property names, so C<\p{L}> is equivalent to C<\pL>.
297 More formally, C<\p{Uppercase}> matches any single character whose Unicode
298 Uppercase property value is True, and C<\P{Uppercase}> matches any character
299 whose Uppercase property value is False, and they could have been written as
300 C<\p{Uppercase=True}> and C<\p{Uppercase=False}>, respectively.
302 This formality is needed when properties are not binary; that is, if they can
303 take on more values than just True and False. For example, the Bidi_Class (see
304 L</"Bidirectional Character Types"> below), can take on several different
305 values, such as Left, Right, Whitespace, and others. To match these, one needs
to specify both the property name (Bidi_Class), AND the value being matched
against (Left, Right, etc.). This is done, as in the examples above, by having the
309 two components separated by an equal sign (or interchangeably, a colon), like
310 C<\p{Bidi_Class: Left}>.
312 All Unicode-defined character properties may be written in these compound forms
313 of C<\p{property=value}> or C<\p{property:value}>, but Perl provides some
314 additional properties that are written only in the single form, as well as
315 single-form short-cuts for all binary properties and certain others described
below, in which you may omit the property name and the equals or colon
separator.
319 Most Unicode character properties have at least two synonyms (or aliases if you
320 prefer): a short one that is easier to type and a longer one that is more
321 descriptive and hence easier to understand. Thus the "L" and "Letter" properties
322 above are equivalent and can be used interchangeably. Likewise,
323 "Upper" is a synonym for "Uppercase", and we could have written
324 C<\p{Uppercase}> equivalently as C<\p{Upper}>. Also, there are typically
325 various synonyms for the values the property can be. For binary properties,
326 "True" has 3 synonyms: "T", "Yes", and "Y"; and "False has correspondingly "F",
327 "No", and "N". But be careful. A short form of a value for one property may
328 not mean the same thing as the same short form for another. Thus, for the
329 General_Category property, "L" means "Letter", but for the Bidi_Class property,
330 "L" means "Left". A complete list of properties and synonyms is in
333 Upper/lower case differences in property names and values are irrelevant;
334 thus C<\p{Upper}> means the same thing as C<\p{upper}> or even C<\p{UpPeR}>.
335 Similarly, you can add or subtract underscores anywhere in the middle of a
336 word, so that these are also equivalent to C<\p{U_p_p_e_r}>. And white space
337 is irrelevant adjacent to non-word characters, such as the braces and the equals
338 or colon separators, so C<\p{ Upper }> and C<\p{ Upper_case : Y }> are
339 equivalent to these as well. In fact, white space and even
340 hyphens can usually be added or deleted anywhere. So even C<\p{ Up-per case = Yes}> is
341 equivalent. All this is called "loose-matching" by Unicode. The few places
where stricter matching is used are in the middle of numbers, and in the Perl
343 extension properties that begin or end with an underscore. Stricter matching
344 cares about white space (except adjacent to non-word characters),
345 hyphens, and non-interior underscores.
347 You can also use negation in both C<\p{}> and C<\P{}> by introducing a caret
348 (^) between the first brace and the property name: C<\p{^Tamil}> is
349 equal to C<\P{Tamil}>.
351 Almost all properties are immune to case-insensitive matching. That is,
352 adding a C</i> regular expression modifier does not change what they
353 match. There are two sets that are affected.
The first set is C<Uppercase_Letter>, C<Lowercase_Letter>, and C<Titlecase_Letter>,
358 all of which match C<Cased_Letter> under C</i> matching.
And the second set is C<Uppercase>, C<Lowercase>, and C<Titlecase>,
all of which match C<Cased> under C</i> matching.
364 This set also includes its subsets C<PosixUpper> and C<PosixLower> both
365 of which under C</i> matching match C<PosixAlpha>.
366 (The difference between these sets is that some things, such as Roman
367 numerals, come in both upper and lower case so they are C<Cased>, but aren't considered
368 letters, so they aren't C<Cased_Letter>s.)
370 The result is undefined if you try to match a non-Unicode code point
371 (that is, one above 0x10FFFF) against a Unicode property. Currently, a
372 warning is raised, and the match will fail. In some cases, this is
373 counterintuitive, as both these fail:
chr(0x110000) =~ /\p{ASCII_Hex_Digit=True}/    # Fails.
chr(0x110000) =~ /\p{ASCII_Hex_Digit=False}/   # Fails!
378 =head3 B<General_Category>
380 Every Unicode character is assigned a general category, which is the "most
381 usual categorization of a character" (from
382 L<http://www.unicode.org/reports/tr44>).
384 The compound way of writing these is like C<\p{General_Category=Number}>
385 (short, C<\p{gc:n}>). But Perl furnishes shortcuts in which everything up
through the equal or colon separator is omitted. So you can instead just write
C<\pN> or C<\p{Number}>.
389 Here are the short and long forms of the General Category properties:
394 LC, L& Cased_Letter (that is: [\p{Ll}\p{Lu}\p{Lt}])
407 Nd Decimal_Number (also Digit)
411 P Punctuation (also Punct)
412 Pc Connector_Punctuation
416 Pi Initial_Punctuation
417 (may behave like Ps or Pe depending on usage)
Pf Final_Punctuation
(may behave like Ps or Pe depending on usage)
431 Zp Paragraph_Separator
434 Cc Control (also Cntrl)
440 Single-letter properties match all characters in any of the
441 two-letter sub-properties starting with the same letter.
442 C<LC> and C<L&> are special: both are aliases for the set consisting of everything matched by C<Ll>, C<Lu>, and C<Lt>.
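For example (a minimal sketch):

    "A"        =~ /\p{Lu}/    # Matches: an Uppercase_Letter
    "5"        =~ /\p{Nd}/    # Matches: a Decimal_Number
    "\x{263A}" =~ /\p{So}/    # Matches: WHITE SMILING FACE is Other_Symbol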
444 =head3 B<Bidirectional Character Types>
446 Because scripts differ in their directionality (Hebrew and Arabic are
written right to left, for example), Unicode supplies the Bidi_Class
property, with these values:
453 LRE Left-to-Right Embedding
454 LRO Left-to-Right Override
457 RLE Right-to-Left Embedding
458 RLO Right-to-Left Override
459 PDF Pop Directional Format
461 ES European Separator
462 ET European Terminator
467 B Paragraph Separator
472 This property is always written in the compound form.
473 For example, C<\p{Bidi_Class:R}> matches characters that are normally
474 written right to left.
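For example (a minimal sketch):

    "\N{HEBREW LETTER ALEF}" =~ /\p{Bidi_Class: R}/   # Matches
    "A"                      =~ /\p{Bidi_Class: L}/   # Matches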
=head3 B<Scripts>

The world's languages are written in many different scripts. This sentence
479 (unless you're reading it in translation) is written in Latin, while Russian is
480 written in Cyrillic, and Greek is written in, well, Greek; Japanese mainly in
481 Hiragana or Katakana. There are many more.
483 The Unicode Script and Script_Extensions properties give what script a
given character is in. Either property can be specified with the compound
form, like C<\p{Script=Hebrew}> (short: C<\p{sc=hebr}>), or
487 C<\p{Script_Extensions=Javanese}> (short: C<\p{scx=java}>).
488 In addition, Perl furnishes shortcuts for all
489 C<Script> property names. You can omit everything up through the equals
490 (or colon), and simply write C<\p{Latin}> or C<\P{Cyrillic}>.
491 (This is not true for C<Script_Extensions>, which is required to be
492 written in the compound form.)
494 The difference between these two properties involves characters that are
495 used in multiple scripts. For example the digits '0' through '9' are
496 used in many parts of the world. These are placed in a script named
497 C<Common>. Other characters are used in just a few scripts. For
498 example, the "KATAKANA-HIRAGANA DOUBLE HYPHEN" is used in both Japanese
499 scripts, Katakana and Hiragana, but nowhere else. The C<Script>
500 property places all characters that are used in multiple scripts in the
501 C<Common> script, while the C<Script_Extensions> property places those
502 that are used in only a few scripts into each of those scripts; while
still using C<Common> for those used in many scripts. Thus both of these
match:
506 "0" =~ /\p{sc=Common}/ # Matches
507 "0" =~ /\p{scx=Common}/ # Matches
and only the first of these matches:
511 "\N{KATAKANA-HIRAGANA DOUBLE HYPHEN}" =~ /\p{sc=Common} # Matches
512 "\N{KATAKANA-HIRAGANA DOUBLE HYPHEN}" =~ /\p{scx=Common} # No match
514 And only the last two of these match:
516 "\N{KATAKANA-HIRAGANA DOUBLE HYPHEN}" =~ /\p{sc=Hiragana} # No match
517 "\N{KATAKANA-HIRAGANA DOUBLE HYPHEN}" =~ /\p{sc=Katakana} # No match
518 "\N{KATAKANA-HIRAGANA DOUBLE HYPHEN}" =~ /\p{scx=Hiragana} # Matches
519 "\N{KATAKANA-HIRAGANA DOUBLE HYPHEN}" =~ /\p{scx=Katakana} # Matches
521 C<Script_Extensions> is thus an improved C<Script>, in which there are
522 fewer characters in the C<Common> script, and correspondingly more in
523 other scripts. It is new in Unicode version 6.0, and its data are likely
524 to change significantly in later releases, as things get sorted out.
(Actually, besides C<Common>, the C<Inherited> script contains
527 characters that are used in multiple scripts. These are modifier
528 characters which modify other characters, and inherit the script value
529 of the controlling character. Some of these are used in many scripts,
530 and so go into C<Inherited> in both C<Script> and C<Script_Extensions>.
531 Others are used in just a few scripts, so are in C<Inherited> in
532 C<Script>, but not in C<Script_Extensions>.)
534 It is worth stressing that there are several different sets of digits in
535 Unicode that are equivalent to 0-9 and are matchable by C<\d> in a
536 regular expression. If they are used in a single language only, they
are in that language's C<Script> and C<Script_Extensions>. If they are
538 used in more than one script, they will be in C<sc=Common>, but only
539 if they are used in many scripts should they be in C<scx=Common>.
541 A complete list of scripts and their shortcuts is in L<perluniprops>.
543 =head3 B<Use of "Is" Prefix>
545 For backward compatibility (with Perl 5.6), all properties mentioned
546 so far may have C<Is> or C<Is_> prepended to their name, so C<\P{Is_Lu}>, for
example, is equal to C<\P{Lu}>, and C<\p{IsScript:Arabic}> is equal to
C<\p{Arabic}>.
=head3 B<Blocks>

In addition to B<scripts>, Unicode also defines B<blocks> of
553 characters. The difference between scripts and blocks is that the
554 concept of scripts is closer to natural languages, while the concept
555 of blocks is more of an artificial grouping based on groups of Unicode
556 characters with consecutive ordinal values. For example, the "Basic Latin"
557 block is all characters whose ordinals are between 0 and 127, inclusive; in
558 other words, the ASCII characters. The "Latin" script contains some letters
559 from this as well as several other blocks, like "Latin-1 Supplement",
560 "Latin Extended-A", etc., but it does not contain all the characters from
561 those blocks. It does not, for example, contain the digits 0-9, because
those digits are shared across many scripts, and hence are in the
C<Common> script.
565 For more about scripts versus blocks, see UAX#24 "Unicode Script Property":
566 L<http://www.unicode.org/reports/tr24>
568 The C<Script> or C<Script_Extensions> properties are likely to be the
569 ones you want to use when processing
570 natural language; the Block property may occasionally be useful in working
571 with the nuts and bolts of Unicode.
573 Block names are matched in the compound form, like C<\p{Block: Arrows}> or
574 C<\p{Blk=Hebrew}>. Unlike most other properties, only a few block names have a
575 Unicode-defined short name. But Perl does provide a (slight) shortcut: You
576 can say, for example C<\p{In_Arrows}> or C<\p{In_Hebrew}>. For backwards
577 compatibility, the C<In> prefix may be omitted if there is no naming conflict
578 with a script or any other property, and you can even use an C<Is> prefix
instead in those cases. But it is not a good idea to do this, for a couple
of reasons:
586 It is confusing. There are many naming conflicts, and you may forget some.
587 For example, C<\p{Hebrew}> means the I<script> Hebrew, and NOT the I<block>
588 Hebrew. But would you remember that 6 months from now?
592 It is unstable. A new version of Unicode may pre-empt the current meaning by
593 creating a property with the same name. There was a time in very early Unicode
releases when C<\p{Hebrew}> would have matched the I<block> Hebrew; now it
doesn't.
599 Some people prefer to always use C<\p{Block: foo}> and C<\p{Script: bar}>
600 instead of the shortcuts, whether for clarity, because they can't remember the
601 difference between 'In' and 'Is' anyway, or they aren't confident that those who
602 eventually will read their code will know that difference.
604 A complete list of blocks and their shortcuts is in L<perluniprops>.
606 =head3 B<Other Properties>
608 There are many more properties than the very basic ones described here.
609 A complete list is in L<perluniprops>.
611 Unicode defines all its properties in the compound form, so all single-form
612 properties are Perl extensions. Most of these are just synonyms for the
613 Unicode ones, but some are genuine extensions, including several that are in
614 the compound form. And quite a few of these are actually recommended by Unicode
615 (in L<http://www.unicode.org/reports/tr18>).
617 This section gives some details on all extensions that aren't just
618 synonyms for compound-form Unicode properties
619 (for those properties, you'll have to refer to the
L<Unicode Standard|http://www.unicode.org/reports/tr44>).
This matches any of the 1_114_112 Unicode code points. It is a synonym for
C<\p{Any}>.
629 =item B<C<\p{Alnum}>>
631 This matches any C<\p{Alphabetic}> or C<\p{Decimal_Number}> character.
This matches any of the 1_114_112 Unicode code points. It is a synonym for
C<\p{All}>.
638 =item B<C<\p{ASCII}>>
640 This matches any of the 128 characters in the US-ASCII character set,
641 which is a subset of Unicode.
643 =item B<C<\p{Assigned}>>
645 This matches any assigned code point; that is, any code point whose general
646 category is not Unassigned (or equivalently, not Cn).
648 =item B<C<\p{Blank}>>
650 This is the same as C<\h> and C<\p{HorizSpace}>: A character that changes the
651 spacing horizontally.
653 =item B<C<\p{Decomposition_Type: Non_Canonical}>> (Short: C<\p{Dt=NonCanon}>)
655 Matches a character that has a non-canonical decomposition.
657 To understand the use of this rarely used property=value combination, it is
658 necessary to know some basics about decomposition.
659 Consider a character, say H. It could appear with various marks around it,
660 such as an acute accent, or a circumflex, or various hooks, circles, arrows,
661 I<etc.>, above, below, to one side or the other, etc. There are many
662 possibilities among the world's languages. The number of combinations is
663 astronomical, and if there were a character for each combination, it would
664 soon exhaust Unicode's more than a million possible characters. So Unicode
665 took a different approach: there is a character for the base H, and a
666 character for each of the possible marks, and these can be variously combined
667 to get a final logical character. So a logical character--what appears to be a
single character--can be a sequence of more than one individual character.
669 This is called an "extended grapheme cluster"; Perl furnishes the C<\X>
670 regular expression construct to match such sequences.
672 But Unicode's intent is to unify the existing character set standards and
673 practices, and several pre-existing standards have single characters that
674 mean the same thing as some of these combinations. An example is ISO-8859-1,
675 which has quite a few of these in the Latin-1 range, an example being "LATIN
676 CAPITAL LETTER E WITH ACUTE". Because this character was in this pre-existing
677 standard, Unicode added it to its repertoire. But this character is considered
678 by Unicode to be equivalent to the sequence consisting of the character
679 "LATIN CAPITAL LETTER E" followed by the character "COMBINING ACUTE ACCENT".
681 "LATIN CAPITAL LETTER E WITH ACUTE" is called a "pre-composed" character, and
682 its equivalence with the sequence is called canonical equivalence. All
683 pre-composed characters are said to have a decomposition (into the equivalent
684 sequence), and the decomposition type is also called canonical.
686 However, many more characters have a different type of decomposition, a
687 "compatible" or "non-canonical" decomposition. The sequences that form these
688 decompositions are not considered canonically equivalent to the pre-composed
689 character. An example, again in the Latin-1 range, is the "SUPERSCRIPT ONE".
690 It is somewhat like a regular digit 1, but not exactly; its decomposition
691 into the digit 1 is called a "compatible" decomposition, specifically a
692 "super" decomposition. There are several such compatibility
693 decompositions (see L<http://www.unicode.org/reports/tr44>), including one
694 called "compat", which means some miscellaneous type of decomposition
695 that doesn't fit into the decomposition categories that Unicode has chosen.
697 Note that most Unicode characters don't have a decomposition, so their
698 decomposition type is "None".
700 For your convenience, Perl has added the C<Non_Canonical> decomposition
701 type to mean any of the several compatibility decompositions.
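For example (a minimal sketch):

    "\N{SUPERSCRIPT ONE}" =~ /\p{Dt=NonCanon}/                   # Matches
    "\N{LATIN CAPITAL LETTER E WITH ACUTE}" =~ /\p{Dt=NonCanon}/ # No match: canonical
    "1" =~ /\p{Dt=NonCanon}/                                     # No match: no decomposition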
703 =item B<C<\p{Graph}>>
705 Matches any character that is graphic. Theoretically, this means a character
706 that on a printer would cause ink to be used.
708 =item B<C<\p{HorizSpace}>>
710 This is the same as C<\h> and C<\p{Blank}>: a character that changes the
711 spacing horizontally.
715 This is a synonym for C<\p{Present_In=*}>
717 =item B<C<\p{PerlSpace}>>
719 This is the same as C<\s>, restricted to ASCII, namely C<S<[ \f\n\r\t]>>.
Mnemonic: Perl's (original) space.
723 =item B<C<\p{PerlWord}>>
This is the same as C<\w>, restricted to ASCII, namely C<[A-Za-z0-9_]>.
727 Mnemonic: Perl's (original) word.
729 =item B<C<\p{Posix...}>>
731 There are several of these, which are equivalents using the C<\p>
732 notation for Posix classes and are described in
733 L<perlrecharclass/POSIX Character Classes>.
735 =item B<C<\p{Present_In: *}>> (Short: C<\p{In=*}>)
This property is used when you need to know in what Unicode version(s) a
character is.
740 The "*" above stands for some two digit Unicode version number, such as
741 C<1.1> or C<4.0>; or the "*" can also be C<Unassigned>. This property will
742 match the code points whose final disposition has been settled as of the
743 Unicode release given by the version number; C<\p{Present_In: Unassigned}>
744 will match those code points whose meaning has yet to be assigned.
746 For example, C<U+0041> "LATIN CAPITAL LETTER A" was present in the very first
747 Unicode release available, which is C<1.1>, so this property is true for all
748 valid "*" versions. On the other hand, C<U+1EFF> was not assigned until version
749 5.1 when it became "LATIN SMALL LETTER Y WITH LOOP", so the only "*" that
750 would match it are 5.1, 5.2, and later.
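For example, a minimal sketch based on the characters just mentioned:

    chr(0x1EFF) =~ /\p{Present_In: 5.1}/   # Matches
    chr(0x1EFF) =~ /\p{Present_In: 4.0}/   # No match: not settled until 5.1
    chr(0x0041) =~ /\p{Present_In: 1.1}/   # Matches: "A" has been there from the start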
752 Unicode furnishes the C<Age> property from which this is derived. The problem
753 with Age is that a strict interpretation of it (which Perl takes) has it
754 matching the precise release a code point's meaning is introduced in. Thus
C<U+0041> would match only 1.1; and C<U+1EFF> only 5.1. This is not usually what
you want.
758 Some non-Perl implementations of the Age property may change its meaning to be
759 the same as the Perl Present_In property; just be aware of that.
761 Another confusion with both these properties is that the definition is not
762 that the code point has been I<assigned>, but that the meaning of the code point
763 has been I<determined>. This is because 66 code points will always be
764 unassigned, and so the Age for them is the Unicode version in which the decision
765 to make them so was made. For example, C<U+FDD0> is to be permanently
766 unassigned to a character, and the decision to do that was made in version 3.1,
767 so C<\p{Age=3.1}> matches this character, as also does C<\p{Present_In: 3.1}> and up.
769 =item B<C<\p{Print}>>
771 This matches any character that is graphical or blank, except controls.
773 =item B<C<\p{SpacePerl}>>
775 This is the same as C<\s>, including beyond ASCII.
777 Mnemonic: Space, as modified by Perl. (It doesn't include the vertical tab
778 which both the Posix standard and Unicode consider white space.)
780 =item B<C<\p{Title}>> and B<C<\p{Titlecase}>>
782 Under case-sensitive matching, these both match the same code points as
783 C<\p{General Category=Titlecase_Letter}> (C<\p{gc=lt}>). The difference
784 is that under C</i> caseless matching, these match the same as
C<\p{Cased}>, whereas C<\p{gc=lt}> matches C<\p{Cased_Letter}>.
787 =item B<C<\p{VertSpace}>>
789 This is the same as C<\v>: A character that changes the spacing vertically.
793 This is the same as C<\w>, including over 100_000 characters beyond ASCII.
795 =item B<C<\p{XPosix...}>>
797 There are several of these, which are the standard Posix classes
798 extended to the full Unicode range. They are described in
799 L<perlrecharclass/POSIX Character Classes>.
803 =head2 User-Defined Character Properties
805 You can define your own binary character properties by defining subroutines
806 whose names begin with "In" or "Is". The subroutines can be defined in any
807 package. The user-defined properties can be used in the regular expression
808 C<\p> and C<\P> constructs; if you are using a user-defined property from a
809 package other than the one you are in, you must specify its package in the
810 C<\p> or C<\P> construct.
812 # assuming property Is_Foreign defined in Lang::
813 package main; # property package name required
814 if ($txt =~ /\p{Lang::IsForeign}+/) { ... }
816 package Lang; # property package name not required
817 if ($txt =~ /\p{IsForeign}+/) { ... }
820 Note that the effect is compile-time and immutable once defined.
821 However, the subroutines are passed a single parameter, which is 0 if
822 case-sensitive matching is in effect and non-zero if caseless matching
823 is in effect. The subroutine may return different values depending on
824 the value of the flag, and one set of values will immutably be in effect
for all case-sensitive matches, and the other set for all case-insensitive
matches.
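As a sketch (the property name here is made up for illustration), a
definition can return a different set of code points when the caseless flag
is set:

    # Hypothetical property: ASCII uppercase letters, or any ASCII letter under /i
    sub IsUpperAscii {
        my $caseless = shift;               # non-zero under /i matching
        return $caseless
            ? "0041\t005A\n0061\t007A\n"    # A-Z plus a-z
            : "0041\t005A\n";               # A-Z only
    }

    "x" =~ /\p{IsUpperAscii}/;      # No match
    "x" =~ /\p{IsUpperAscii}/i;     # Matches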
828 Note that if the regular expression is tainted, then Perl will die rather
than calling the subroutine, when the name of the subroutine is
830 determined by the tainted data.
832 The subroutines must return a specially-formatted string, with one
833 or more newline-separated lines. Each line must be one of the following:
839 A single hexadecimal number denoting a Unicode code point to include.
843 Two hexadecimal numbers separated by horizontal whitespace (space or
844 tabular characters) denoting a range of Unicode code points to include.
848 Something to include, prefixed by "+": a built-in character
849 property (prefixed by "utf8::") or a user-defined character property,
850 to represent all the characters in that property; two hexadecimal code
851 points for a range; or a single hexadecimal code point.
855 Something to exclude, prefixed by "-": an existing character
856 property (prefixed by "utf8::") or a user-defined character property,
857 to represent all the characters in that property; two hexadecimal code
858 points for a range; or a single hexadecimal code point.
862 Something to negate, prefixed "!": an existing character
863 property (prefixed by "utf8::") or a user-defined character property,
864 to represent all the characters in that property; two hexadecimal code
865 points for a range; or a single hexadecimal code point.
869 Something to intersect with, prefixed by "&": an existing character
870 property (prefixed by "utf8::") or a user-defined character property,
to represent all the characters in that property; two
872 hexadecimal code points for a range; or a single hexadecimal code point.
876 For example, to define a property that covers both the Japanese
syllabaries (hiragana and katakana), you can define

    sub InKana {
        return <<END;
    3040\t309F
    30A0\t30FF
    END
    }

Imagine that the here-doc end marker is at the beginning of the line.
887 Now you can use C<\p{InKana}> and C<\P{InKana}>.
You could also have used the existing block property names:

    sub InKana {
        return <<'END';
    +utf8::InHiragana
    +utf8::InKatakana
    END
    }
898 Suppose you wanted to match only the allocated characters,
not the raw block ranges: in other words, you want to remove the unassigned
characters:

    sub InKana {
        return <<'END';
    +utf8::InHiragana
    +utf8::InKatakana
    -utf8::IsCn
    END
    }
The negation is useful for defining (surprise!) negated classes.

    sub InNotKana {
        return <<'END';
    !utf8::InHiragana
    !utf8::InKatakana
    +utf8::IsCn
    END
    }
920 This will match all non-Unicode code points, since every one of them is
921 not in Kana. You can use intersection to exclude these, if desired, as
this modified example shows:

    sub InNotKana {
        return <<'END';
    !utf8::InHiragana
    !utf8::InKatakana
    +utf8::IsCn
    &utf8::Any
    END
    }
933 C<&utf8::Any> must be the last line in the definition.
935 Intersection is used generally for getting the common characters matched
936 by two (or more) classes. It's important to remember not to use "&" for
the first set; that would be intersecting with nothing, resulting in an
empty set.
940 (Note that official Unicode properties differ from these in that they
941 automatically exclude non-Unicode code points and a warning is raised if
942 a match is attempted on one of those.)
944 =head2 User-Defined Case Mappings (for serious hackers only)
946 B<This feature has been removed as of Perl 5.16.>
947 The CPAN module L<Unicode::Casing> provides better functionality without
948 the drawbacks that this feature had. If you are using a Perl earlier
than 5.16, this feature was most fully documented in the 5.14 version of
Perl:
951 L<http://perldoc.perl.org/5.14.0/perlunicode.html#User-Defined-Case-Mappings-%28for-serious-hackers-only%29>
=head2 Character Encodings for Input and Output

See L<Encode>.
957 =head2 Unicode Regular Expression Support Level
The following list describes the Unicode regular expression features
currently supported directly by core Perl. The references to "Level N"
961 and the section numbers refer to the Unicode Technical Standard #18,
962 "Unicode Regular Expressions", version 13, from August 2008.
968 Level 1 - Basic Unicode Support
970 RL1.1 Hex Notation - done [1]
971 RL1.2 Properties - done [2][3]
972 RL1.2a Compatibility Properties - done [4]
973 RL1.3 Subtraction and Intersection - MISSING [5]
974 RL1.4 Simple Word Boundaries - done [6]
975 RL1.5 Simple Loose Matches - done [7]
976 RL1.6 Line Boundaries - MISSING [8][9]
977 RL1.7 Supplementary Code Points - done [10]
[3] supports not only the minimal list, but all Unicode character
982 properties (see Unicode Character Properties above)
983 [4] \d \D \s \S \w \W \X [:prop:] [:^prop:]
984 [5] can use regular expression look-ahead [a] or
user-defined character properties [b] to emulate set operations
988 [7] note that Perl does Full case-folding in matching (but with
989 bugs), not Simple: for example U+1F88 is equivalent to
990 U+1F00 U+03B9, instead of just U+1F80. This difference
991 matters mainly for certain Greek capital letters with certain
992 modifiers: the Full case-folding decomposes the letter,
while the Simple case-folding would map it to a single character.
995 [8] should do ^ and $ also on U+000B (\v in C), FF (\f), CR
996 (\r), CRLF (\r\n), NEL (U+0085), LS (U+2028), and PS
997 (U+2029); should also affect <>, $., and script line
998 numbers; should not split lines within CRLF [c] (i.e. there
999 is no empty line between \r and \n)
1000 [9] Linebreaking conformant with UAX#14 "Unicode Line Breaking
1001 Algorithm" is available through the Unicode::LineBreaking
[10] UTF-8/UTF-EBCDIC used in Perl allows not only U+10000 to
1004 U+10FFFF but also beyond U+10FFFF
1006 [a] You can mimic class subtraction using lookahead.
1007 For example, what UTS#18 might write as
1009 [{Greek}-[{UNASSIGNED}]]
1011 in Perl can be written as:
1013 (?!\p{Unassigned})\p{InGreekAndCoptic}
1014 (?=\p{Assigned})\p{InGreekAndCoptic}
But in this particular example, you probably really want

    \p{Greek}
1020 which will match assigned characters known to be part of the Greek script.
Also see the L<Unicode::Regex::Set> module; it implements the full
1023 UTS#18 grouping, intersection, union, and removal (subtraction) syntax.
1025 [b] '+' for union, '-' for removal (set-difference), '&' for intersection
1026 (see L</"User-Defined Character Properties">)
1028 [c] Try the C<:crlf> layer (see L<PerlIO>).
1032 Level 2 - Extended Unicode Support
1034 RL2.1 Canonical Equivalents - MISSING [10][11]
1035 RL2.2 Default Grapheme Clusters - MISSING [12]
1036 RL2.3 Default Word Boundaries - MISSING [14]
1037 RL2.4 Default Loose Matches - MISSING [15]
1038 RL2.5 Name Properties - DONE
1039 RL2.6 Wildcard Properties - MISSING
1041 [10] see UAX#15 "Unicode Normalization Forms"
1042 [11] have Unicode::Normalize but not integrated to regexes
1043 [12] have \X but we don't have a "Grapheme Cluster Mode"
1044 [14] see UAX#29, Word Boundaries
1045 [15] This is covered in Chapter 3.13 (in Unicode 6.0)
1049 Level 3 - Tailored Support
1051 RL3.1 Tailored Punctuation - MISSING
1052 RL3.2 Tailored Grapheme Clusters - MISSING [17][18]
1053 RL3.3 Tailored Word Boundaries - MISSING
1054 RL3.4 Tailored Loose Matches - MISSING
1055 RL3.5 Tailored Ranges - MISSING
1056 RL3.6 Context Matching - MISSING [19]
1057 RL3.7 Incremental Matches - MISSING
1058 ( RL3.8 Unicode Set Sharing )
1059 RL3.9 Possible Match Sets - MISSING
1060 RL3.10 Folded Matching - MISSING [20]
1061 RL3.11 Submatchers - MISSING
1063 [17] see UAX#10 "Unicode Collation Algorithms"
1064 [18] have Unicode::Collate but not integrated to regexes
1065 [19] have (?<=x) and (?=x), but look-aheads or look-behinds
1066 should see outside of the target substring
1067 [20] need insensitive matching for linguistic features other
1068 than case; for example, hiragana to katakana, wide and
1069 narrow, simplified Han to traditional Han (see UTR#30
1070 "Character Foldings")
1074 =head2 Unicode Encodings
1076 Unicode characters are assigned to I<code points>, which are abstract
1077 numbers. To use these numbers, various encodings are needed.
1085 UTF-8 is a variable-length (1 to 4 bytes), byte-order independent
1086 encoding. For ASCII (and we really do mean 7-bit ASCII, not another
1087 8-bit encoding), UTF-8 is transparent.
1089 The following table is from Unicode 3.2.
1091 Code Points 1st Byte 2nd Byte 3rd Byte 4th Byte
1093 U+0000..U+007F 00..7F
1094 U+0080..U+07FF * C2..DF 80..BF
1095 U+0800..U+0FFF E0 * A0..BF 80..BF
1096 U+1000..U+CFFF E1..EC 80..BF 80..BF
1097 U+D000..U+D7FF ED 80..9F 80..BF
1098 U+D800..U+DFFF +++++ utf16 surrogates, not legal utf8 +++++
1099 U+E000..U+FFFF EE..EF 80..BF 80..BF
1100 U+10000..U+3FFFF F0 * 90..BF 80..BF 80..BF
1101 U+40000..U+FFFFF F1..F3 80..BF 80..BF 80..BF
1102 U+100000..U+10FFFF F4 80..8F 80..BF 80..BF
1104 Note the gaps marked by "*" before several of the byte entries above. These are
1105 caused by legal UTF-8 avoiding non-shortest encodings: it is technically
1106 possible to UTF-8-encode a single code point in different ways, but that is
1107 explicitly forbidden, and the shortest possible encoding should always be used
1108 (and that is what Perl does).
1110 Another way to look at it is via bits:
1112 Code Points 1st Byte 2nd Byte 3rd Byte 4th Byte
1115 00000bbbbbaaaaaa 110bbbbb 10aaaaaa
1116 ccccbbbbbbaaaaaa 1110cccc 10bbbbbb 10aaaaaa
1117 00000dddccccccbbbbbbaaaaaa 11110ddd 10cccccc 10bbbbbb 10aaaaaa
1119 As you can see, the continuation bytes all begin with "10", and the
leading bits of the start byte tell how many bytes there are in the
encoded character.
1123 The original UTF-8 specification allowed up to 6 bytes, to allow
1124 encoding of numbers up to 0x7FFF_FFFF. Perl continues to allow those,
1125 and has extended that up to 13 bytes to encode code points up to what
1126 can fit in a 64-bit word. However, Perl will warn if you output any of
these as being non-portable; and under strict UTF-8 input protocols,
they are forbidden.
1130 The Unicode non-character code points are also disallowed in UTF-8 in
1131 "open interchange". See L</Non-character code points>.
1137 Like UTF-8 but EBCDIC-safe, in the way that UTF-8 is ASCII-safe.
1141 UTF-16, UTF-16BE, UTF-16LE, Surrogates, and BOMs (Byte Order Marks)
The following items are mostly for reference and general Unicode
knowledge; Perl doesn't use these constructs internally.
1146 Like UTF-8, UTF-16 is a variable-width encoding, but where
1147 UTF-8 uses 8-bit code units, UTF-16 uses 16-bit code units.
1148 All code points occupy either 2 or 4 bytes in UTF-16: code points
1149 C<U+0000..U+FFFF> are stored in a single 16-bit unit, and code
points C<U+10000..U+10FFFF> in two 16-bit units. The latter case uses
I<surrogates>, the first 16-bit unit being the I<high
1152 surrogate>, and the second being the I<low surrogate>.
1154 Surrogates are code points set aside to encode the C<U+10000..U+10FFFF>
1155 range of Unicode code points in pairs of 16-bit units. The I<high
1156 surrogates> are the range C<U+D800..U+DBFF> and the I<low surrogates>
1157 are the range C<U+DC00..U+DFFF>. The surrogate encoding is
    $hi = int(($uni - 0x10000) / 0x400) + 0xD800;
    $lo =     ($uni - 0x10000) % 0x400  + 0xDC00;

and the decoding is

    $uni = 0x10000 + ($hi - 0xD800) * 0x400 + ($lo - 0xDC00);
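For example (a minimal sketch), U+1D11E MUSICAL SYMBOL G CLEF becomes the
pair C<0xD834>, C<0xDD1E>:

    my $uni = 0x1D11E;                               # MUSICAL SYMBOL G CLEF
    my $hi  = int(($uni - 0x10000) / 0x400) + 0xD800;
    my $lo  =     ($uni - 0x10000) % 0x400  + 0xDC00;
    printf "%04X %04X\n", $hi, $lo;                  # D834 DD1E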
1166 Because of the 16-bitness, UTF-16 is byte-order dependent. UTF-16
1167 itself can be used for in-memory computations, but if storage or
1168 transfer is required either UTF-16BE (big-endian) or UTF-16LE
1169 (little-endian) encodings must be chosen.
1171 This introduces another problem: what if you just know that your data
1172 is UTF-16, but you don't know which endianness? Byte Order Marks, or
1173 BOMs, are a solution to this. A special character has been reserved
1174 in Unicode to function as a byte order marker: the character with the
1175 code point C<U+FEFF> is the BOM.
1177 The trick is that if you read a BOM, you will know the byte order,
1178 since if it was written on a big-endian platform, you will read the
1179 bytes C<0xFE 0xFF>, but if it was written on a little-endian platform,
1180 you will read the bytes C<0xFF 0xFE>. (And if the originating platform
1181 was writing in UTF-8, you will read the bytes C<0xEF 0xBB 0xBF>.)
1183 The way this trick works is that the character with the code point
1184 C<U+FFFE> is not supposed to be in input streams, so the
1185 sequence of bytes C<0xFF 0xFE> is unambiguously "BOM, represented in
little-endian format" and cannot be C<U+FFFE>, represented in big-endian
format.
1189 Surrogates have no meaning in Unicode outside their use in pairs to
1190 represent other code points. However, Perl allows them to be
1191 represented individually internally, for example by saying
C<chr(0xD801)>, so that all code points, not just those valid for open
interchange, are representable. Unicode does define semantics for them, such as their
1195 General Category is "Cs". But because their use is somewhat dangerous,
1196 Perl will warn (using the warning category "surrogate", which is a
1197 sub-category of "utf8") if an attempt is made
1198 to do things like take the lower case of one, or match
case-insensitively, or to output them. (But don't try this on Perls
before 5.14.)
1204 UTF-32, UTF-32BE, UTF-32LE
The UTF-32 family is pretty much like the UTF-16 family, except that
1207 the units are 32-bit, and therefore the surrogate scheme is not
1208 needed. UTF-32 is a fixed-width encoding. The BOM signatures are
1209 C<0x00 0x00 0xFE 0xFF> for BE and C<0xFF 0xFE 0x00 0x00> for LE.
1215 Legacy, fixed-width encodings defined by the ISO 10646 standard. UCS-2 is a 16-bit
1216 encoding. Unlike UTF-16, UCS-2 is not extensible beyond C<U+FFFF>,
1217 because it does not use surrogates. UCS-4 is a 32-bit encoding,
1218 functionally identical to UTF-32 (the difference being that
1219 UCS-4 forbids neither surrogates nor code points larger than 0x10_FFFF).
1225 A seven-bit safe (non-eight-bit) encoding, which is useful if the
1226 transport or storage is not eight-bit safe. Defined by RFC 2152.
1230 =head2 Non-character code points
1232 66 code points are set aside in Unicode as "non-character code points".
1233 These all have the Unassigned (Cn) General Category, and they never will
1234 be assigned. These are never supposed to be in legal Unicode input
1235 streams, so that code can use them as sentinels that can be mixed in
1236 with character data, and they always will be distinguishable from that data.
1237 To keep them out of Perl input streams, strict UTF-8 should be
1238 specified, such as by using the layer C<:encoding('UTF-8')>. The
1239 non-character code points are the 32 between U+FDD0 and U+FDEF, and the
1240 34 code points U+FFFE, U+FFFF, U+1FFFE, U+1FFFF, ... U+10FFFE, U+10FFFF.
1241 Some people are under the mistaken impression that these are "illegal",
1242 but that is not true. An application or cooperating set of applications
1243 can legally use them at will internally; but these code points are
1244 "illegal for open interchange". Therefore, Perl will not accept these
1245 from input streams unless lax rules are being used, and will warn
1246 (using the warning category "nonchar", which is a sub-category of "utf8") if
1247 an attempt is made to output them.
1249 =head2 Beyond Unicode code points
1251 The maximum Unicode code point is U+10FFFF. But Perl accepts code
1252 points up to the maximum permissible unsigned number available on the
1253 platform. However, Perl will not accept these from input streams unless
1254 lax rules are being used, and will warn (using the warning category
1255 "non_unicode", which is a sub-category of "utf8") if an attempt is made to
operate on or output them. For example, C<uc(chr(0x11_0000))> will generate
this warning, returning its argument unchanged, as the upper
1258 case of every non-Unicode code point is the code point itself.
1260 =head2 Security Implications of Unicode
1262 Read L<Unicode Security Considerations|http://www.unicode.org/reports/tr36>.
1263 Also, note the following:
1271 Unfortunately, the original specification of UTF-8 leaves some room for
1272 interpretation of how many bytes of encoded output one should generate
1273 from one input Unicode character. Strictly speaking, the shortest
1274 possible sequence of UTF-8 bytes should be generated,
1275 because otherwise there is potential for an input buffer overflow at
1276 the receiving end of a UTF-8 connection. Perl always generates the
1277 shortest length UTF-8, and with warnings on, Perl will warn about
1278 non-shortest length UTF-8 along with other malformations, such as the
1279 surrogates, which are not Unicode code points valid for interchange.
1283 Regular expression pattern matching may surprise you if you're not
1284 accustomed to Unicode. Starting in Perl 5.14, several pattern
1285 modifiers are available to control this, called the character set
1286 modifiers. Details are given in L<perlre/Character set modifiers>.
1290 As discussed elsewhere, Perl has one foot (two hooves?) planted in
1291 each of two worlds: the old world of bytes and the new world of
1292 characters, upgrading from bytes to characters when necessary.
1293 If your legacy code does not explicitly use Unicode, no automatic
1294 switch-over to characters should happen. Characters shouldn't get
1295 downgraded to bytes, either. It is possible to accidentally mix bytes
1296 and characters, however (see L<perluniintro>), in which case C<\w> in
1297 regular expressions might start behaving differently (unless the C</a>
1298 modifier is in effect). Review your code. Use warnings and the C<strict> pragma.
1300 =head2 Unicode in Perl on EBCDIC
1302 The way Unicode is handled on EBCDIC platforms is still
1303 experimental. On such platforms, references to UTF-8 encoding in this
1304 document and elsewhere should be read as meaning the UTF-EBCDIC
1305 specified in Unicode Technical Report 16, unless ASCII vs. EBCDIC issues
1306 are specifically discussed. There is no C<utfebcdic> pragma or
1307 ":utfebcdic" layer; rather, "utf8" and ":utf8" are reused to mean
1308 the platform's "natural" 8-bit encoding of Unicode. See L<perlebcdic>
1309 for more discussion of the issues.
1313 See L<perllocale/Unicode and UTF-8>
1315 =head2 When Unicode Does Not Happen
1317 While Perl does have extensive ways to input and output in Unicode,
1318 and a few other "entry points" like the @ARGV array (which can sometimes be
1319 interpreted as UTF-8), there are still many places where Unicode
1320 (in some encoding or another) could be given as arguments or received as
1321 results, or both, but it is not.
1323 The following are such interfaces. Also, see L</The "Unicode Bug">.
1324 For all of these interfaces Perl
1325 currently (as of 5.8.3) simply assumes byte strings both as arguments
1326 and results, or UTF-8 strings if the (problematic) C<encoding> pragma has been used.
1328 One reason that Perl does not attempt to resolve the role of Unicode in
1329 these situations is that the answers are highly dependent on the operating
1330 system and the file system(s). For example, whether filenames can be
1331 in Unicode and in exactly what kind of encoding, is not exactly a
1332 portable concept. Similarly for C<qx> and C<system>: how well will the
1333 "command-line interface" (and which of them?) handle Unicode?
1339 chdir, chmod, chown, chroot, exec, link, lstat, mkdir,
1340 rename, rmdir, stat, symlink, truncate, unlink, utime, -X
1352 open, opendir, sysopen
1356 qx (aka the backtick operator), system
1364 =head2 The "Unicode Bug"
The term "Unicode bug" has been applied to an inconsistency
1367 on ASCII platforms with the
1368 Unicode code points in the Latin-1 Supplement block, that
1369 is, between 128 and 255. Without a locale specified, unlike all other
1370 characters or code points, these characters have very different semantics in
1371 byte semantics versus character semantics, unless
1372 C<use feature 'unicode_strings'> is specified.
(The lesson here is to specify C<unicode_strings> to avoid the
headaches.)
1376 In character semantics they are interpreted as Unicode code points, which means
1377 they have the same semantics as Latin-1 (ISO-8859-1).
1379 In byte semantics, they are considered to be unassigned characters, meaning
1380 that the only semantics they have is their ordinal numbers, and that they are
1381 not members of various character classes. None are considered to match C<\w>
1382 for example, but all match C<\W>.
1384 The behavior is known to have effects on these areas:
1390 Changing the case of a scalar, that is, using C<uc()>, C<ucfirst()>, C<lc()>,
and C<lcfirst()>, or C<\L>, C<\U>, C<\u> and C<\l> in regular expression
substitutions.
1396 Using caseless (C</i>) regular expression matching
1400 Matching any of several properties in regular expressions, namely C<\b>,
1401 C<\B>, C<\s>, C<\S>, C<\w>, C<\W>, and all the Posix character classes
1402 I<except> C<[[:ascii:]]>.
In C<quotemeta> or its inline equivalent C<\Q>, no characters with
code points above 127 are quoted in UTF-8 encoded strings, but in
1408 byte encoded strings, code points between 128-255 are always quoted.
1412 This behavior can lead to unexpected results in which a string's semantics
1413 suddenly change if a code point above 255 is appended to or removed from it,
1414 which changes the string's semantics from byte to character or vice versa. As
an example, consider the following program and its output (the exact
characters are illustrative: any non-word byte and any non-word character
above 255 will do):

    no feature 'unicode_strings';

    my $s1 = "\xC2";        # a code point below 256 in a byte string; not \w there
    my $s2 = "\x{2603}";    # U+2603 SNOWMAN: not \w, but forces character semantics
    for ($s1, $s2, $s1.$s2) {
        print /\w/ ? 1 : 0, "\n";
    }

    # prints:
    # 0
    # 0
    # 1
1429 If there's no C<\w> in C<s1> or in C<s2>, why does their concatenation have one?
1431 This anomaly stems from Perl's attempt to not disturb older programs that
1432 didn't use Unicode, and hence had no semantics for characters outside of the
1433 ASCII range (except in a locale), along with Perl's desire to add Unicode
support seamlessly. The result wasn't seamless: these characters were
orphaned.
1437 Starting in Perl 5.14, C<use feature 'unicode_strings'> can be used to
1438 cause Perl to use Unicode semantics on all string operations within the
1439 scope of the feature subpragma. Regular expressions compiled in its
1440 scope retain that behavior even when executed or compiled into larger
1441 regular expressions outside the scope. (The pragma does not, however,
1442 affect the C<quotemeta> behavior. Nor does it affect the deprecated
1443 user-defined case changing operations--these still require a UTF-8
1444 encoded string to operate.)
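For instance, a minimal sketch reusing C<$s1> from the program above:

  use feature 'unicode_strings';

  my $s1 = "\xC2";
  print $s1 =~ /\w/ ? 1 : 0, "\n";  # 1: Unicode rules now apply, even though
                                    # $s1 is still byte-encoded internally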
1446 In Perl 5.12, the subpragma affected casing changes, but not regular
1447 expressions. See L<perlfunc/lc> for details on how this pragma works in
1448 combination with various others for casing.
1450 For earlier Perls, or when a string is passed to a function outside the
1451 subpragma's scope, a workaround is to always call C<utf8::upgrade($string)>,
or to use the standard module L<Encode>. Also, a scalar that has any characters
whose ordinal is 0x100 or above, or which were specified using either of the
C<\N{...}> notations, will automatically have character semantics.
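A minimal sketch of the C<utf8::upgrade()> workaround:

  my $s = "\xC2";
  utf8::upgrade($s);                # switch to the UTF-8 representation,
                                    # and with it to character semantics
  print $s =~ /\w/ ? 1 : 0, "\n";   # 1, even without 'unicode_strings'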
1456 =head2 Forcing Unicode in Perl (Or Unforcing Unicode in Perl)
Sometimes (see L</"When Unicode Does Not Happen"> or L</The "Unicode Bug">)
you simply need to force a byte string into UTF-8, or vice versa. The
low-level calls C<utf8::upgrade($bytestring)> and
C<utf8::downgrade($utf8string[, FAIL_OK])> are the answers.
1464 Note that utf8::downgrade() can fail if the string contains characters
1465 that don't fit into a byte.
Calling either function on a string that already is in the desired state is a
no-op.
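A short sketch of both calls:

  my $string = "\x{263A}";          # WHITE SMILING FACE; doesn't fit in a byte
  utf8::downgrade($string, 1)       # the second argument is FAIL_OK
      or warn "string cannot be represented as bytes";
  utf8::upgrade($string);           # already UTF-8 encoded, so a no-op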
1470 =head2 Using Unicode in XS
1472 If you want to handle Perl Unicode in XS extensions, you may find the
1473 following C APIs useful. See also L<perlguts/"Unicode Support"> for an
explanation about Unicode at the XS level, and L<perlapi> for the API
details.
1481 C<DO_UTF8(sv)> returns true if the C<UTF8> flag is on and the bytes
1482 pragma is not in effect. C<SvUTF8(sv)> returns true if the C<UTF8>
1483 flag is on; the bytes pragma is ignored. The C<UTF8> flag being on
does B<not> mean that there are any characters with code points greater
than 255 (or 127) in the scalar, or that there are even any characters
1486 in the scalar. What the C<UTF8> flag means is that the sequence of
1487 octets in the representation of the scalar is the sequence of UTF-8
1488 encoded code points of the characters of a string. The C<UTF8> flag
1489 being off means that each octet in this representation encodes a
1490 single character with code point 0..255 within the string. Perl's
1491 Unicode model is not to use UTF-8 until it is absolutely necessary.
1495 C<uvchr_to_utf8(buf, chr)> writes a Unicode character code point into
1496 a buffer encoding the code point as UTF-8, and returns a pointer
1497 pointing after the UTF-8 bytes. It works appropriately on EBCDIC machines.
1501 C<utf8_to_uvchr(buf, lenp)> reads UTF-8 encoded bytes from a buffer and
1502 returns the Unicode character code point and, optionally, the length of
1503 the UTF-8 byte sequence. It works appropriately on EBCDIC machines.
C<utf8_length(start, end)> returns the length of the UTF-8 encoded buffer
in characters. C<sv_len_utf8(sv)> returns the length of the UTF-8 encoded
scalar in characters.
1513 C<sv_utf8_upgrade(sv)> converts the string of the scalar to its UTF-8
1514 encoded form. C<sv_utf8_downgrade(sv)> does the opposite, if
1515 possible. C<sv_utf8_encode(sv)> is like sv_utf8_upgrade except that
1516 it does not set the C<UTF8> flag. C<sv_utf8_decode()> does the
1517 opposite of C<sv_utf8_encode()>. Note that none of these are to be
1518 used as general-purpose encoding or decoding interfaces: C<use Encode>
1519 for that. C<sv_utf8_upgrade()> is affected by the encoding pragma
1520 but C<sv_utf8_downgrade()> is not (since the encoding pragma is
1521 designed to be a one-way street).
C<is_utf8_char(s)> returns true if the pointer points to a valid UTF-8
character.
C<is_utf8_string(buf, len)> returns true if C<len> bytes of the buffer
are valid UTF-8.
1535 C<UTF8SKIP(buf)> will return the number of bytes in the UTF-8 encoded
1536 character in the buffer. C<UNISKIP(chr)> will return the number of bytes
1537 required to UTF-8-encode the Unicode character code point. C<UTF8SKIP()>
1538 is useful for example for iterating over the characters of a UTF-8
1539 encoded buffer; C<UNISKIP()> is useful, for example, in computing
1540 the size required for a UTF-8 encoded buffer.
1544 C<utf8_distance(a, b)> will tell the distance in characters between the
1545 two pointers pointing to the same UTF-8 encoded buffer.
1549 C<utf8_hop(s, off)> will return a pointer to a UTF-8 encoded buffer
1550 that is C<off> (positive or negative) Unicode characters displaced
1551 from the UTF-8 buffer C<s>. Be careful not to overstep the buffer:
1552 C<utf8_hop()> will merrily run off the end or the beginning of the
1553 buffer if told to do so.
1557 C<pv_uni_display(dsv, spv, len, pvlim, flags)> and
1558 C<sv_uni_display(dsv, ssv, pvlim, flags)> are useful for debugging the
1559 output of Unicode strings and scalars. By default they are useful
1560 only for debugging--they display B<all> characters as hexadecimal code
1561 points--but with the flags C<UNI_DISPLAY_ISPRINT>,
1562 C<UNI_DISPLAY_BACKSLASH>, and C<UNI_DISPLAY_QQ> you can make the
1563 output more readable.
1567 C<foldEQ_utf8(s1, pe1, l1, u1, s2, pe2, l2, u2)> can be used to
1568 compare two strings case-insensitively in Unicode. For case-sensitive
1569 comparisons you can just use C<memEQ()> and C<memNE()> as usual, except
1570 if one string is in utf8 and the other isn't.
1574 For more information, see L<perlapi>, and F<utf8.c> and F<utf8.h>
1575 in the Perl source code distribution.
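Even without writing XS you can observe the C<UTF8> flag from Perl itself;
a small sketch using C<utf8::is_utf8()> and the core L<Devel::Peek> module:

  use Devel::Peek;

  my $s = "caf\xE9";
  print utf8::is_utf8($s) ? "on\n" : "off\n";  # off: byte-encoded
  utf8::upgrade($s);
  print utf8::is_utf8($s) ? "on\n" : "off\n";  # on: UTF-8-encoded
  Dump($s);    # the FLAGS line now includes UTF8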
1577 =head2 Hacking Perl to work on earlier Unicode versions (for very serious hackers only)
1579 Perl by default comes with the latest supported Unicode version built in, but
1580 you can change to use any earlier one.
Download the files in the desired version of Unicode from the Unicode web
site (L<http://www.unicode.org>). These should replace the existing files in
1584 F<lib/unicore> in the Perl source tree. Follow the instructions in
1585 F<README.perl> in that directory to change some of their names, and then build
1586 perl (see L<INSTALL>).
It is even possible to copy the built files to a different directory and then
change F<utf8_heavy.pl> in the directory C<$Config{privlib}> to point to the
new directory; or to make a copy of that directory before making the change
and use C<@INC> or the C<-I> run-time flag to switch between versions at will
(though, because of caching, not in the middle of a process). All this is
beyond the scope of these instructions.
1597 =head2 Interaction with Locales
See L<perllocale/Unicode and UTF-8>.
1601 =head2 Problems with characters in the Latin-1 Supplement range
See L</The "Unicode Bug">.
1605 =head2 Interaction with Extensions
1607 When Perl exchanges data with an extension, the extension should be
1608 able to understand the UTF8 flag and act accordingly. If the
1609 extension doesn't recognize that flag, it's likely that the extension
1610 will return incorrectly-flagged data.
So if you're working with Unicode data, consult the documentation of
every module you're using to see whether there are any issues with Unicode
data exchange. If the documentation does not talk about Unicode at all,
1615 suspect the worst and probably look at the source to learn how the
1616 module is implemented. Modules written completely in Perl shouldn't
1617 cause problems. Modules that directly or indirectly access code written
1618 in other programming languages are at risk.
1620 For affected functions, the simple strategy to avoid data corruption is
1621 to always make the encoding of the exchanged data explicit. Choose an
1622 encoding that you know the extension can handle. Convert arguments passed
1623 to the extensions to that encoding and convert results back from that
1624 encoding. Write wrapper functions that do the conversions for you, so
1625 you can later change the functions when the extension catches up.
To provide an example, let's say the popular C<Foo::Bar::escape_html>
function doesn't deal with Unicode data yet. The wrapper function
1629 would convert the argument to raw UTF-8 and convert the result back to
1630 Perl's internal representation like so:
  use Encode ();

  sub my_escape_html ($) {
      my($what) = shift;
      return unless defined $what;
      Encode::decode_utf8(Foo::Bar::escape_html(
          Encode::encode_utf8($what)));
  }
1639 Sometimes, when the extension does not convert data but just stores
1640 and retrieves them, you will be able to use the otherwise
1641 dangerous Encode::_utf8_on() function. Let's say the popular
1642 C<Foo::Bar> extension, written in C, provides a C<param> method that
1643 lets you store and retrieve data according to these prototypes:
  $self->param($name, $value);  # set a scalar
  $value = $self->param($name); # retrieve a scalar
1648 If it does not yet provide support for any encoding, one could write a
1649 derived class with such a C<param> method:
  sub param {
      my($self,$name,$value) = @_;
      utf8::upgrade($name);           # make sure it is UTF-8 encoded
      if (defined $value) {
          utf8::upgrade($value);      # make sure it is UTF-8 encoded
          return $self->SUPER::param($name,$value);
      } else {
          my $ret = $self->SUPER::param($name);
          Encode::_utf8_on($ret);     # we know, it is UTF-8 encoded
          return $ret;
      }
  }
Some extensions provide filters on data entry/exit points, such as
C<DB_File::filter_store_key> and family. Look out for such filters in
the documentation of your extensions; they can make the transition to
Unicode data much easier.
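For instance, a rough sketch with L<DB_File> (the file name F<unicode.db>
and the choice to store everything as UTF-8 are assumptions for
illustration, not part of DB_File itself):

  use DB_File;
  use Encode ();

  tie my %h, 'DB_File', 'unicode.db'
      or die "Cannot tie: $!";
  my $db = tied %h;

  # Each filter gets the datum in $_ and modifies it in place.
  $db->filter_store_key  (sub { $_ = Encode::encode_utf8($_) });
  $db->filter_store_value(sub { $_ = Encode::encode_utf8($_) });
  $db->filter_fetch_key  (sub { $_ = Encode::decode_utf8($_) });
  $db->filter_fetch_value(sub { $_ = Encode::decode_utf8($_) });

  $h{"caf\x{e9}"} = "r\x{e9}sum\x{e9}";  # stored in the file as UTF-8 octets
  my $back = $h{"caf\x{e9}"};            # comes back as a character string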
=head2 Speed

Some functions are slower when working on UTF-8 encoded strings than
on byte encoded strings. All functions that need to hop over
characters, such as C<length()>, C<substr()> or C<index()>, or matching
regular expressions, can work B<much> faster when the underlying data are
byte-encoded.
1677 In Perl 5.8.0 the slowness was often quite spectacular; in Perl 5.8.1
1678 a caching scheme was introduced which will hopefully make the slowness
1679 somewhat less spectacular, at least for some operations. In general,
1680 operations with UTF-8 encoded strings are still slower. As an example,
1681 the Unicode properties (character classes) like C<\p{Nd}> are known to
1682 be quite a bit slower (5-20 times) than their simpler counterparts
1683 like C<\d> (then again, there are hundreds of Unicode characters matching C<Nd>
compared with the 10 ASCII characters matching C<\d>).
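If you are curious how much this matters on your own perl and data, a
hedged sketch using the core L<Benchmark> module (the sample string is
arbitrary):

  use Benchmark qw(cmpthese);

  my $bytes = "0123456789abcdef" x 1024;  # byte-encoded string
  my $chars = $bytes;
  utf8::upgrade($chars);                  # same text, UTF-8-encoded internally

  cmpthese(-1, {
      bytes => sub { my $n = () = $bytes =~ /\d/g },
      utf8  => sub { my $n = () = $chars =~ /\d/g },
  });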
1686 =head2 Problems on EBCDIC platforms
1688 There are several known problems with Perl on EBCDIC platforms. If you
1689 want to use Perl there, send email to perlbug@perl.org.
1691 In earlier versions, when byte and character data were concatenated,
1692 the new string was sometimes created by
1693 decoding the byte strings as I<ISO 8859-1 (Latin-1)>, even if the
1694 old Unicode string used EBCDIC.
1696 If you find any of these, please report them as bugs.
1698 =head2 Porting code from perl-5.6.X
1700 Perl 5.8 has a different Unicode model from 5.6. In 5.6 the programmer
1701 was required to use the C<utf8> pragma to declare that a given scope
1702 expected to deal with Unicode data and had to make sure that only
1703 Unicode data were reaching that scope. If you have code that is
1704 working with 5.6, you will need some of the following adjustments to
1705 your code. The examples are written such that the code will continue
1706 to work under 5.6, so you should be safe to try them out.
=over 4

=item *

A filehandle that should read or write UTF-8

  if ($] > 5.007) {
      binmode $fh, ":encoding(utf8)";
  }
=item *

A scalar that is going to be passed to some extension
1722 Be it Compress::Zlib, Apache::Request or any extension that has no
1723 mention of Unicode in the manpage, you need to make sure that the
1724 UTF8 flag is stripped off. Note that at the time of this writing
1725 (October 2002) the mentioned modules are not UTF-8-aware. Please
1726 check the documentation to verify if this is still true.
  if ($] > 5.007) {
      require Encode;
      $val = Encode::encode_utf8($val); # make octets
  }
=item *

A scalar we got back from an extension
1737 If you believe the scalar comes back as UTF-8, you will most likely
1738 want the UTF8 flag restored:
  if ($] > 5.007) {
      require Encode;
      $val = Encode::decode_utf8($val);
  }
=item *

Same thing, if you are really sure it is UTF-8

  if ($] > 5.007) {
      require Encode;
      Encode::_utf8_on($val);
  }
=item *

A wrapper for fetchrow_array and fetchrow_hashref
1758 When the database contains only UTF-8, a wrapper function or method is
1759 a convenient way to replace all your fetchrow_array and
1760 fetchrow_hashref calls. A wrapper function will also make it easier to
1761 adapt to future enhancements in your database driver. Note that at the
1762 time of this writing (October 2002), the DBI has no standardized way
to deal with UTF-8 data. Please check the documentation to verify if
this is still true.
  sub fetchrow {
      # $what is one of fetchrow_{array,hashref}
      my($self, $sth, $what) = @_;
      return $sth->$what if $] < 5.007;   # under 5.6, pass data through as-is
      require Encode;
      if (wantarray) {
          my @arr = $sth->$what;
          defined && /[^\000-\177]/ && Encode::_utf8_on($_) for @arr;
          return @arr;
      }
      my $ret = $sth->$what;
      if (ref $ret) {                     # fetchrow_hashref
          for my $k (keys %$ret) {
              defined && /[^\000-\177]/ && Encode::_utf8_on($_) for $ret->{$k};
          }
      } else {
          defined && /[^\000-\177]/ && Encode::_utf8_on($_) for $ret;
      }
      return $ret;
  }
=item *

A large scalar that you know can only contain ASCII
Scalars that contain only ASCII and are marked as UTF-8 are sometimes
a drag to your program. If you recognize such a situation, just remove
the UTF8 flag:

  utf8::downgrade($val) if $] > 5.007;
=back

=head1 SEE ALSO

L<perlunitut>, L<perluniintro>, L<perluniprops>, L<Encode>, L<open>,
L<utf8>, L<bytes>, L<perlretut>, L<perlvar/"${^UNICODE}">,
L<http://www.unicode.org/reports/tr44>