Lecture 09
 

  • As software systems are usually built in layers, upper-level software can rely on facilities provided at lower layers. The lowest layer of a software system is its operating system. If certain facilities or functions are provided at lower levels, applications built on top can use these functions directly without building their own. Likewise, coding standards can be supported at different levels of the software platform. Unicode, as a newer coding standard, was not supported at the operating-system level in older versions of Microsoft Windows, such as the Windows 9x series and Windows ME (which was developed for notebook hardware platforms). Because of this, an additional system software package was developed on top of the Windows 9x operating systems specifically for Unicode applications, and this package exposes only certain functions through a Unicode Application Programming Interface (API). A Unicode application can run on these systems only if it is compiled with this additional layered Unicode API. The Windows 9x/ME platform supports different multi-byte encodings through a mechanism called a code page: each multi-byte encoding is given a designated code page number, and the system remembers the current code page number at run time so that the fonts and related facilities associated with that code page can be loaded for the required locale. Windows NT/2000/XP changed the operating system to support Unicode as the internal coding standard; all other encodings are converted into Unicode. Q: Can Java, a Unicode-based application, run on the Windows 9x/ME platform, and why?
  • The Linux operating system, with UNIX as its origin, uses the UTF-8 format to support Unicode applications. Any Unicode application built on top of Linux must be linked with glibc 2.2.2 or a later version for C-language applications, and with XFree86 4.0.3 for X Window applications. Documents and similar data are also written in UTF-8, and the locale functions conform to the POSIX API. On Apple machines, Mac OS supports wide-character Unicode from Mac OS 9.1 onward; in fact, Mac OS was one of the first operating systems to support Unicode internally. Q: What is the main difference between UTF-8 and Unicode's 16-bit internal form? In terms of operating-system design, explain the advantages and disadvantages of using UTF-8 versus 16-bit Unicode.
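The difference can be made concrete with a small Java sketch (a minimal illustration using the standard java.nio.charset API; the class name is only for this example) that encodes the same text in UTF-8 and in 16-bit UTF-16:

```java
import java.nio.charset.StandardCharsets;

public class Utf8VsUtf16 {
    public static void main(String[] args) {
        String ascii = "Linux"; // ASCII-only text
        String han = "漢字";     // two Han ideographs

        // UTF-8 is variable-length: ASCII stays 1 byte per character,
        // while each BMP Han character takes 3 bytes.
        System.out.println(ascii.getBytes(StandardCharsets.UTF_8).length); // 5
        System.out.println(han.getBytes(StandardCharsets.UTF_8).length);   // 6

        // UTF-16 (big-endian, no BOM) uses 2 bytes for every BMP character.
        System.out.println(ascii.getBytes(StandardCharsets.UTF_16BE).length); // 10
        System.out.println(han.getBytes(StandardCharsets.UTF_16BE).length);   // 4
    }
}
```

This illustrates the trade-off: UTF-8 is compact for ASCII-heavy data and backward compatible with existing UNIX tools, while the fixed-width 16-bit form simplifies internal string indexing.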
  • The term code page is used in the old Microsoft multi-byte system to identify the particular coding standard in use. Each coding standard is given an assigned code page identifier; for example, Big5 is given code page 950. All documents written using a particular code page also carry the code page information, which we call the codeset announcement. The code page identifiers are designated by Microsoft, and a list of all of them is publicly available. Even in Windows NT/2000/XP, where the internal code is already Unicode, code page identifiers are still needed for code page conversions to and from Unicode, and the conversion tables are still needed for multi-byte applications developed in the past. Windows NT/2000/XP also provides utility functions to make use of these conversion tables.
  • The Java programming language uses Unicode internally. However, Java does not require data from different sources to be coded only in Unicode. Because Unicode is a superset of all character coding standards, any multi-byte encoding can always be converted to and from Unicode. A reference list of the multi-byte character sets supported by Java, and the corresponding APIs, can be found in publicly available specifications on the internet. When writing a Java program, even the source file can be encoded in different coding standards. For example, we can write a Java program on a Windows 98 Big5 platform; in this case, if you hard-code the Chinese messages, they need to be converted to Unicode at compile time. Therefore, Java compilation gives you the option to specify the encoding, as in: javac -encoding <encoding> <source files>. The String type in Java holds Unicode data, but the getBytes method of String, given a multi-byte encoding name, converts a string to multi-byte data. For example, the statement byte[] utf8Bytes = str.getBytes("UTF-8"); converts the data into UTF-8 code. Conversely, the statement String str = new String(utf8Bytes, "UTF-8"); converts UTF-8 data into Unicode data.
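The two conversions described above can be put together in a minimal, self-contained sketch (class and variable names are only for illustration):

```java
import java.io.UnsupportedEncodingException;

public class EncodingDemo {
    public static void main(String[] args) throws UnsupportedEncodingException {
        String str = "河豚";

        // Unicode (internal String form) -> UTF-8 bytes
        byte[] utf8Bytes = str.getBytes("UTF-8");

        // UTF-8 bytes -> Unicode (internal String form)
        String back = new String(utf8Bytes, "UTF-8");

        // Each of the two Han characters takes 3 bytes in UTF-8.
        System.out.println(utf8Bytes.length); // 6
        System.out.println(back.equals(str)); // true
    }
}
```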
  • Generally speaking, codeset conversion can suffer from the 1-to-0 problem or the 1-to-N problem if a 1-to-1 mapping cannot be provided. However, since Unicode is a superset of all existing national standards, it can guarantee round-trip conversion. Round-trip conversion is defined as follows: suppose any file file1 in codeset A is converted to a file file2 in codeset B, and then file2 is converted back to codeset A as a file file3. If file3 equals file1, we say that codeset B guarantees round-trip conversion for A.
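The definition can be checked mechanically. A sketch of a round-trip test from Big5 (codeset A) through Unicode (codeset B) and back, assuming the JDK's Big5 charset is installed (true in a full JDK):

```java
import java.nio.charset.Charset;
import java.util.Arrays;

public class RoundTrip {
    public static void main(String[] args) {
        Charset big5 = Charset.forName("Big5");

        byte[] file1 = {(byte) 0xA4, 0x40};     // "一" encoded in Big5
        String file2 = new String(file1, big5); // converted to Unicode
        byte[] file3 = file2.getBytes(big5);    // converted back to Big5

        // Round trip succeeded: file3 equals file1 byte for byte.
        System.out.println(Arrays.equals(file1, file3)); // true
    }
}
```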
  • The Java code byte[] my_data = { (byte) 0xA4, 0x40 }; puts 2 bytes into the byte array my_data. Then the statement String my_unicode_data = new String(my_data, "Big5"); converts the 2 bytes of data (1 character in Big5), interpreted as Big5 code, into Java's String type, which is in Unicode. The conversion is done automatically. Note that Java uses the codeset name, not the code page number (as in Microsoft). The getBytes method of the String type can convert a Unicode string to any other multi-byte encoding. In the statement byte[] my_b5_data = my_unicode_data.getBytes("Big5"); the variable my_b5_data will contain the value 0xA440, which is the value of the character "一" in Big5, whereas in the statement byte[] my_gb_data = my_unicode_data.getBytes("GBK"); the variable my_gb_data will contain the value 0xD2BB, which is the value of the character "一" in GB2312.
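Put together as a compilable sketch (assuming the Big5 and GBK charsets are available, as in a full JDK):

```java
import java.nio.charset.Charset;

public class CrossConvert {
    public static void main(String[] args) {
        byte[] my_data = {(byte) 0xA4, 0x40}; // "一" in Big5
        String my_unicode_data = new String(my_data, Charset.forName("Big5"));

        byte[] my_b5_data = my_unicode_data.getBytes(Charset.forName("Big5"));
        byte[] my_gb_data = my_unicode_data.getBytes(Charset.forName("GBK"));

        // The same character, three codes:
        System.out.printf("U+%04X%n", (int) my_unicode_data.charAt(0));              // U+4E00
        System.out.printf("%02X%02X%n", my_b5_data[0] & 0xFF, my_b5_data[1] & 0xFF); // A440
        System.out.printf("%02X%02X%n", my_gb_data[0] & 0xFF, my_gb_data[1] & 0xFF); // D2BB
    }
}
```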
  • For file operations, Java also allows data streams to be coded in other multi-byte encodings. The slide above is an example where the input data is interpreted as Big5 and converted automatically into Unicode.
  • Here is an example of output into a multi-byte encoding, in this case Big5. The output is the two Big5 codes 0xAA65 0xB362 for "河豚", even though the actual code we wrote to the output was the two Unicode values \u6CB3\u8C5A.
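The import and export examples referred to above can be combined into one runnable sketch (the file name output.big5 follows the slides; here we write first so the read has something to consume, and we use PrintWriter, which, unlike BufferedWriter, actually provides println):

```java
import java.io.*;

public class Big5FileIO {
    public static void main(String[] args) throws IOException {
        File f = new File("output.big5");

        // Export: Unicode String -> Big5 bytes on disk
        try (PrintWriter out = new PrintWriter(
                new OutputStreamWriter(new FileOutputStream(f), "Big5"))) {
            out.println("\u6CB3\u8C5A"); // "河豚", written as 0xAA65 0xB362
        }

        // Import: Big5 bytes on disk -> Unicode String
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(new FileInputStream(f), "Big5"))) {
            String inputStr = in.readLine();
            System.out.println(inputStr.equals("\u6CB3\u8C5A")); // true
        }
    }
}
```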
  • A multilingual application is different from an application for a single language. In a multilingual application, the data (normally the manipulation data) are multilingual in nature. For example, in software teaching Chinese to English speakers, the display data, such as the software's menus and manual, should be in English, while the data for manipulation should be Chinese with English explanations, which is bilingual in nature. Previously, we learned how to write I18N software. It should be pointed out, however, that the primary purpose of I18N is a single language at a time: it is intended to facilitate the porting of an application from one language/locale to another. Even though many Asian locales support Latin symbols, these symbols are not treated as part of the Asian scripts. It is nevertheless useful to consider designing a multilingual application using the I18N approach. In the analysis of a multilingual application, we can separately consider display data and manipulation data; if this separation is done properly, the display data can then be designed using the I18N approach.
  • These notes introduce a set of symbols defined in ISO 10646 as Ideograph Description Characters (IDCs). The characters are used to describe the structures of ideograph characters. For instance, the character 峰 is obviously composed of two character components in a left-and-right structure; the IDCs are used to indicate such a left-to-right structure. Based on the IDCs, an ideographic composition scheme will then be discussed. The composition scheme provides a method to describe a character in terms of its component characters.
  • The Ideograph Description Characters are structural symbols that indicate the positions of the character components used in forming a character; we sometimes say the components are the smaller ideograph functional units. There are a total of 12 IDCs in ISO 10646, coded in the range 2FF0 to 2FFB. For example, the symbol ⿰ indicates the left-right structure of characters such as 峰. Note that the characters 2FF2 and 2FF3 describe characters through three component characters, whereas all the other 10 symbols require only 2 component characters.
  • It is understood that ideographs are usually formed from smaller components such as radicals, ideographs proper ( 獨立漢字 ), and ideograph components ( 漢字部件 ). These radicals, ideographs proper, and ideograph components are sometimes all called character components ( 部件 ). As an example, the same components may form different characters: the two components 大 and 小 can form two different characters depending on their relative positions. In other words, components are not the only factor determining a character; the relative positions of the components are also part of the character's formation. It should be pointed out that Chinese has a long tradition of describing characters through their components. For example, when someone gives his name as "zhang" (Putonghua Pinyin), he is likely to further explain that it is " 弓" "長" "張" , not " 立" "早" "章" .
  • In all the current encodings of Chinese characters, each character is considered an independent symbol and is thus given a separate codepoint. Such codepoint assignment pays no regard (or very limited regard) to the internal substructures of the characters; in other words, the codepoint assignment is not directly linked to this information. Whenever a new character needs to be supported, an extension to the existing standard must be produced. Even though characters assigned within one block are arranged in Kang Xi ( 康熙 ) radical order, characters are in fact assigned to different blocks, so the radical order cannot be globally maintained. It is in the nature of the Chinese language that new characters are created once in a while. This gives rise to the need to extend the standard indefinitely, which can be very time-consuming. Also, it is not practical to assign a codepoint to every existing (or formerly existing) character, as some characters are so rarely used that the need to exchange them is also rare. If you consider the codespace a resource, it would not be an efficient use of that resource to give every rarely used character a codepoint and maintain it throughout the system.
  • Because of the limitations of the encoding method, and the practical need to use new, or existing but rarely used, characters, ISO 10646's working group started to work on the idea of using character components to describe characters. The intention was to use structural symbols and existing characters (used as components) to describe not-yet-coded characters. The original proposal had 15 structural symbols, but eventually only 12 symbols were accepted and given the code range 2FF0 to 2FFB. The 3 uncoded symbols are not shown here, but their functions are explained. It should be noted that Left_Up_Encompass could be used for characters such as 斗 ; yet, because such characters are coded already, the symbol was not necessarily needed.
  • With the 12 structure symbols, an Ideographic Composition Scheme was also introduced to describe a character using an ideograph description sequence (IDS) formed from components and the structure symbols, where the IDCs are considered operators on the components, following certain rules. IDSs can be described by a context-free grammar written in the well-known Backus-Naur Form (BNF). Like any grammar, an IDS grammar G is described by four components, as listed above. Let G = {Σ, N, P, S}, where Σ is the set of terminal symbols, namely coded radicals, coded ideographs, and the 12 IDCs; N is the set of 5 non-terminal symbols, N = {IDS, IDS1, Binary_Symbol, Ternary_Symbol, Ideograph_Component}; S = {IDS}, the start symbol of the grammar; and P is a set of rewrite rules, which will be shown on the next page.
  • The rewrite rules are listed above. Note that because the two ternary IDC symbols require three components, they are listed separately. Even though the choice between a binary operator and a ternary operator is in some cases arbitrary, once it is chosen there is no ambiguity in processing the IDS. A context-free grammar can easily be processed by programs, as there is no ambiguity; in other words, the character structure described by this grammar is not ambiguous, meaning that it cannot be interpreted differently. This is very important, as it implies that a legal IDS describes only one character. Of course, this statement holds only if the IDCs themselves are not ambiguous, which, as we will see later, is not the case for all IDCs.
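The grammar above is small enough to check with a hand-written recursive-descent parser. A sketch (with the simplifying assumption that any code point that is not one of the 12 IDCs is accepted as an <Ideograph_Component>):

```java
public class IdsValidator {
    // The 12 IDCs occupy U+2FF0..U+2FFB; U+2FF2 and U+2FF3 are the two ternary symbols.
    static boolean isBinary(int cp)  { return cp >= 0x2FF0 && cp <= 0x2FFB && cp != 0x2FF2 && cp != 0x2FF3; }
    static boolean isTernary(int cp) { return cp == 0x2FF2 || cp == 0x2FF3; }

    static int pos; // cursor into the code-point array

    // <IDS1> ::= <IDS> | <Ideograph_Component>
    static boolean ids1(int[] cps) {
        if (pos >= cps.length) return false;
        int cp = cps[pos];
        if (isBinary(cp) || isTernary(cp)) return ids(cps);
        pos++; // any non-IDC code point is treated as a component
        return true;
    }

    // <IDS> ::= <Binary_Symbol><IDS1><IDS1> | <Ternary_Symbol><IDS1><IDS1><IDS1>
    static boolean ids(int[] cps) {
        if (pos >= cps.length) return false;
        int cp = cps[pos++];
        if (isBinary(cp))  return ids1(cps) && ids1(cps);
        if (isTernary(cp)) return ids1(cps) && ids1(cps) && ids1(cps);
        return false;
    }

    public static boolean isLegalIds(String s) {
        int[] cps = s.codePoints().toArray();
        pos = 0;
        return ids(cps) && pos == cps.length; // must consume the whole sequence
    }

    public static void main(String[] args) {
        System.out.println(isLegalIds("\u2FF0\u5F73\u77BF")); // ⿰彳瞿 -> true
        System.out.println(isLegalIds("\u5CF0"));             // a bare character is not an IDS -> false
    }
}
```

Because the grammar is unambiguous, the parse either succeeds in exactly one way or fails; no backtracking is needed.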
  • The above gives some examples of IDSs which are very commonly used, and there does not seem to be much ambiguity to anyone about what they represent.
  • For example, the IDEOGRAPHIC DESCRIPTION CHARACTER OVERLAID ( IDC-OLD, ⿻ ) describes the abstract form of an ideograph with D1 and D2 overlaying each other. But it is not clear how the two components should be overlaid, or whether they should touch each other. For example, to describe the character 巫 using components, everyone understands that there are two components, 从 and 工 ; yet they cannot be combined by any IDC except ⿻ . But from the IDS " ⿻ 从工" alone, you cannot tell that the two 人 in 从 should be split apart and that the vertical bar ( 丨 ) should go between the two 人 without touching them. Nor can you tell that the top horizontal bar ( 一 ) should be above the 人 and the bottom horizontal bar ( 一 ) below them. This indicates that ⿻ has built-in ambiguity. As another example, the IDEOGRAPHIC DESCRIPTION CHARACTER SURROUND FROM UPPER RIGHT ( IDC-SUR, ⿹ ) describes the abstract form of an ideograph with D1 on the top right corner of D2, and D2 encompassed by D1. For instance, ⿹ is used to represent the character . Yet we would question the legitimacy of ⿹ 从工 and ⿹ 工从 , as it is not clear what these sequences represent, although there is no explicit rule against such use.
  • It should be noted that each character can in principle be described by different IDSs, as shown in the examples above. Generally speaking, we can only say which is the "most commonly used" decomposition; we cannot generally claim which one is the "correct" decomposition. The reason is that the decomposition rules themselves can be ambiguous, even to the most knowledgeable scholars. For example, the character 章 is normally decomposed into 立早 because the character takes 立 as its radical (in the Kang Xi dictionary), even though 十 is also a radical, as used in the character 卓 .
  • Each IDS uniquely defines a character, but a character may be described by different IDSs. Using " 章 " as an example again, it can also be described by " 音 " " 十 " . Moreover, the IDCs indicate the relative positions of the components, but give no precise indication of their sizes. Consequently, an IDS cannot be used for rendering purposes; in other words, to render a character correctly based on an IDS, additional information must be provided.
  • The term component has been used in many places throughout this subject. In fact, the basic strokes can be considered components, and every character is built from strokes. In practice, however, we look at characters and their components (or decompositions) in a more top-down manner, that is, we look at the substructures from a more functional view. For example, we would first decompose the character 琦 into the two components 王 and 奇 . As 王 is already a radical (used for classification and indexing), it serves as a functional unit that we would not decompose further. Likewise, 奇 is an ideograph proper ( 正字 ), so we need not decompose it further, even though it could be further decomposed into the components 大 and 可 . Therefore, we use a practical and recursive approach to define components as follows: all radicals are components; all strokes are components (in fact, all strokes have been coded as of spring 2005); all coded ideographs are components.
  • The above gives some examples of components that might be of use for rarely used characters.
  • The IDCs were originally intended for describing un-encoded characters; their introduction gives an alternative mechanism to describe Chinese characters that are not yet coded. However, IDCs are not limited to describing un-encoded characters: IDCs can reveal the substructures of ideographs. Used in combination, IDCs and the IDS provide a linear way of describing a character in terms of its components; the IDS is thus a convenient tool for describing character composition and decomposition. This has additional educational benefits for the study of Chinese characters. When studying Chinese variants, we can use two IDSs to decompose them; the difference in substructure or components pinpoints the specific place where the two characters differ. For example, one variant can be described as ⿰ 氵 ⿱    又 , whereas the other can be described by ⿰ 氵 ⿱ 几又
  • On this page, we are given a few example characters, and we can see how they can be described (or decomposed) using an IDS. Ex 1: 忂 obviously has a left-to-right structure, so its IDS is ⿰ 彳瞿 . For this decomposition to work, both 彳 and 瞿 must be coded ideographs (or components), and their Unicode codepoints are indeed U+5F73 and U+77BF. Ex 2: 䑑 => ⿰ U+81E3 U+83D0. Ex 3: 䔴 has the IDS ⿱ 艹 ⿰ 祟又 , because " 祟又" is not a single character in Unicode; the respective codepoints of the component characters are U+8279, U+795F, and U+53C8. Ex 4: 蠿 is difficult to decompose. Even though it has an obvious top-to-bottom structure, the upper component of 蠿 is not defined in Unicode, so its description is quite troublesome: ⿱ ⿰ ⿱ ⿱ ⿰ 幺幺 一 ⿱ ⿰ 幺幺 一丨 ⿰ 虫虫 . With parentheses added to make it easier to see: ⿱ ( ⿰ ( ⿱ ( ⿱ ( ⿰ 幺幺 ) 一 ) ( ⿱ ( ⿰ 幺幺 ) 一 )) 丨 ) ( ⿰ 虫虫 ). The component codepoints are 幺 (U+5E7A), 一 (U+4E00), 丨 (U+4E28), and 虫 (U+866B).
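Ex 1 can be reproduced programmatically. A minimal sketch assembling the IDS for 忂 from the codepoints given above (the class name is only for illustration):

```java
public class IdsExample {
    public static void main(String[] args) {
        // ⿰ (U+2FF0) followed by 彳 (U+5F73) and 瞿 (U+77BF)
        String ids = new String(Character.toChars(0x2FF0))
                   + new String(Character.toChars(0x5F73))
                   + new String(Character.toChars(0x77BF));

        System.out.println(ids);                           // ⿰彳瞿
        System.out.println(ids.codePoints().count());      // 3
        System.out.printf("U+%04X%n", ids.codePointAt(0)); // U+2FF0
    }
}
```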

Lecture 09 Presentation Transcript

  • Unicode support status in various platforms (Microsoft Windows)
    • Windows 9x / ME
      • Do not support Unicode internally
      • Limited Unicode APIs are supported.
      • Unicode applications compiled with Microsoft Layer for Unicode can be run on Win9x
      • Use code page to support different encodings
    • Windows NT / 2000 / XP
      • Support Unicode
      • Use of wide char (fixed 2 bytes)
      • Use UCS-2
  • Unicode support status in various platforms (Linux & Mac OS)
    • Linux
      • Newer Kernel supports Unicode
      • Requires glibc 2.2.2 and XFree86 4.0.3 or newer
      • Use UTF-8 in most case, e.g. filesystem
      • Set locale to <lang>_<place>.<encoding>, e.g. zh_TW.utf8
      • Enable UTF-8 support in console by executing unicode_start
    • Mac OS
      • Mac OS 9.1, Mac OS X support Unicode
      • 16-bit for Unicode character
  • What is a code page
    • There are a lot of different encodings, e.g. EUC-TW, Big5, Latin-1 etc.
    • A code page (code page identifier) is a number to identify a codeset.
      • e.g. 950 – Traditional Chinese (Big5)
      • e.g. 1252 – Windows Latin-1
      • Other code page identifiers can be found in:
      • http://msdn.microsoft.com/library/en-us/intl/unicode_81rn.asp
    • In Windows NT/2000/XP, code page conversion table provides information to convert between different encodings.
  • Java
    • Java uses Unicode internally. The supported encodings are provided by the Java library packages rt.jar and i18n.jar
    • The supported encoding sets for java.io.* , java.lang.* and java.nio.* API can be found in:
    • http://java.sun.com/j2se/1.4/docs/guide/intl/encoding.doc.html
    • User input/output is automatically converted between Unicode and the system code page
    • Specify the encoding of the source files when compiling.
      • javac -encoding <encoding> <source files>
    • Convert to other supported encoding:
    • e.g. byte[] utf8Bytes = str.getBytes("UTF-8");
    • Convert from other supported encoding:
    • e.g. String str = new String(utf8Bytes, "UTF-8");
  • Code Conversion
    • Generally, codeset conversion cannot provide a one-to-one mapping (unless the two character sets are exactly the same)
    • Unicode is a superset of every existing national standard => guaranteed round-trip conversion
    • Round-trip conversion: suppose a file file1 in codeset A is converted to a file file2 in codeset B and then converted back to codeset A as a file file3.
      • If file3 = file1, we say that codeset B guarantees round-trip conversion for A.
  • Java Code conversion
    • Conversion from multibyte to Unicode
      • byte[] my_data = { (byte) 0xA4, 0x40 };
      • String my_unicode_data = new String(my_data, "Big5");
      • where "Big5" is the name of the multibyte codeset; Java needs this name to do the code conversion
    • Conversion from Unicode to multibyte
      • String my_unicode_data = "\u4E00"; ( 一 )
      • byte[] my_b5_data = my_unicode_data.getBytes("Big5");
        • my_b5_data will have the value 0xA440
      • byte[] my_gb_data = my_unicode_data.getBytes("GBK");
        • my_gb_data will have the value 0xD2BB
    • Text stream import
      • File i = new File("input");
      • FileInputStream tmpin = new FileInputStream(i);
      • BufferedReader in = new BufferedReader(new InputStreamReader(tmpin, "Big5"));
    • Once the BufferedReader in is established, data can be read using the readLine() method.
      • inputStr = in.readLine();
    • Text Stream Export
      • File o = new File("output.big5");
      • FileOutputStream tmpout = new FileOutputStream(o);
      • PrintWriter out = new PrintWriter(new OutputStreamWriter(tmpout, "Big5"));
    • …
    • out.println("\u6CB3\u8C5A"); ( "河豚" )
    • out.close();
    • The bytes written are 0xAA65 0xB362
  • Multilingual applications
    • Software teaching Chinese to English speakers
    • Software teaching English to Chinese speakers
    • Conceptually separate two types of data in a multilingual application:
      • Data related to display of menu/instructions,
      • Data related to the processing in the program
      • Multilingual application vs. I18n applications
      • I18N: data related to display and processing are the same and it is for the same language/convention
      • Multilingual applications: data related to display is for one language (and can be internationalized). Data related to processing can be multilingual and is not necessarily related to the display language.
      • Unicode is the most convenient encoding for multilingual applications, but not absolutely necessary
  • The Ideographic Composition Scheme Used in ISO 10646
    • Introduction to Ideograph Description Characters(IDCs)
    • The ideographic composition scheme
    • Application using IDCs
  • What are Ideograph Description Characters
    • 12 structure symbols used to describe the formation of characters using smaller ideograph functional units such as character components: ⿰ ⿱ ⿲ ⿳ ⿴ ⿵ ⿶ ⿷ ⿸ ⿹ ⿺ ⿻
  • Characteristics of Ideographs
    • Ideograph characters are often formed by smaller ideographic elements such as Radicals, ideographs proper, and other ideographic components which we generally call ideograph components
    • Natural in the formation of characters
    • Example: the same 2 components ( 大 and 小 ) can form different characters depending on their relative positions
    • Chinese has a long tradition of using components to describe characters, especially characters with the same pronunciation
  • Problems with Ideograph Character Encoding
    • Each character is treated as a different symbol, and thus given a codepoint
    • Codepoint assignment within a block does try to follow radical order, but it does not consider the substructures (components), so this information is not revealed.
    • When a new character is created, codepoint allocation is needed in new blocks, so radical order cannot be globally maintained.
    • Also there is a potentially endless standardization process
      • Encoding of rarely used ideograph characters is a waste of resource both in terms of code space and also standardization effort
  • Introduction of IDCs
    • Work started by ISO/IEC SC2/WG2/IRG in 1995
    • Objective of the Original proposal: use coded ideographs and “structure symbols” to describe not yet coded ideographs.
    • The original proposal had 15 "Ideograph Structure Symbols" based on a study of Han characters; three of them did not make it into ISO 10646/Unicode:
      • Ideograph_Proper( 日 ): Every coded character is considered ideograph proper, thus not needed
      • Left_Up_Encompass: no un-encoded example
      • Mirror_Symmetry ( 非 ): the left is mirrored to the right, but this can be described by Left_to_Right
    • The 12 accepted symbols were renamed Ideograph Description Characters
  • Ideographic Composition Scheme
    • IDS describes a character using its components and indicating the relative positions of the components.
    • IDCs are considered operators to the components.
    • IDSs can be expressed by a context-free grammar through the Backus-Naur Form (BNF). The grammar G has four components:
    • Let G = {Σ, N, P, S}, where
        • Σ: the set of terminal symbols (coded radicals, coded ideographs, and the 12 IDCs)
        • N: the set of 5 non-terminal symbols
          • N={IDS, IDS1, Binary_Symbol, Ternary_Symbol, Ideograph_Component}
        • S = {IDS}, which is the start symbol of the grammar
        • P: a set of rewrite rules
    • The following is the set of rewriting rules P:
    • <IDS> ::= <Binary_Symbol><IDS1><IDS1> | <Ternary_Symbol><IDS1><IDS1><IDS1>
    • <IDS1> ::= <IDS> | <Ideograph_Component>
    • <Ideograph_Component>::= coded_ideograph | coded_radical | coded_component
    • <Binary_Symbol> ::= ⿰ | ⿱ | ⿴ | ⿵ | ⿶ | ⿷ | ⿸ | ⿹ | ⿺ | ⿻
    • <Ternary_Symbol> ::= ⿲ | ⿳
    • Note that even though the IDCs are terminal symbols, they are not part of the ideograph components.
  • Examples
        • IDEOGRAPHIC DESCRIPTION CHARACTER OVERLAID ( IDC-OLD , ⿻ ) :
          • The IDS introduced by IDC-OLD describes the abstract form of the ideograph with D1 and D2 overlaying each other.
          • ⿻ 从工 is an example of an IDS which represents the abstract form of 巫
    • IDEOGRAPHIC DESCRIPTION CHARACTER SURROUND FROM UPPER RIGHT ( IDC-SUR, ⿹ ) :
          • The IDS introduced by IDC-SUR describes the abstract form of the ideograph with D 1 on the right top corner of D2, and D 2 is encompassed by D1.
          • ⿹ is an example of an IDS which represents the abstract form of
    • IDS allows a character to be described by different sequences
    • One IDS should describe only one character, yet one character can be described by different IDSs.
    • IDS describes ideographic character composition at the abstract level. It indicates the relative positions of the components, but does not indicate the proportions.
    • Not intended for rendering.
    • Nesting is natural in ideographs and is reflected in the IDS scheme
  • Components
    • Ideographic Components(IRG definition) :
    • units which can be used to represent ideographs. These components consist of ideographs proper coded in ISO 10646 (BMP) and some basic elements used to form ideographs.
    • Radicals(IRG definition): those ideographic components listed in index pages of KX1, DKW, DJW, HYD.
    • ISO extensions:
      • Radicals
      • Components
    • 28 from GBK and more from IRG
    • ISO IRG component sample
  • Extending the Objectives of IDCs
    • Using coded characters to describe not-yet-coded ideographs, both for representation and exchange
    • Limit standardization to only modern characters, and not some rarely used characters
    • Learning of character composition (education)
    • Revealing substructures of ideograph characters
    • Description of ideograph variants
  • Examples
    • Given characters => IDS?
    • 忂 䑑 䔄 䔴 蠿
    • Given an IDS => what are the characters?
    • 莫言
    • 艹旲言
    • Is the following a legal IDS?
    • 莫言 艹旲
  • Conclusion
    • IDCs are introduced in Unicode 3.0
    • The use is going beyond the original objective
    • Applications based on IDCs have already been developed, such as in the Hong Kong Glyph Specification.
    • IDCs should also be useful in ideograph variant specifications
    • Additional search site:
      • http://glyph.iso10646hk.net/ccs/ccs.jsp?lang=zh_TW