Text normalization
Text normalization is the process of transforming text into a single canonical form that it might not have had before. Normalizing text before storing or processing it allows for separation of concerns, since input is guaranteed to be consistent before operations are performed on it. Text normalization requires being aware of what type of text is to be normalized and how it is to be processed afterwards; there is no all-purpose normalization procedure.
Applications
Text normalization is frequently used when converting text to speech. Numbers, dates, acronyms, and abbreviations are non-standard "words" that need to be pronounced differently depending on context. For example:
"$200" would be pronounced as "two hundred dollars" in English, but as "lua selau tālā" in Samoan.
"vi" could be pronounced as "vie," "vee," or "the sixth" depending on the surrounding words.
Text can also be normalized for storing and searching in a database. For instance, if a search for "resume" is to match the word "résumé," then the text would be normalized by removing diacritical marks; and if "john" is to match "John", the text would be converted to a single case. To prepare text for searching, it might also be stemmed (e.g. converting "flew" and "flying" both into "fly"), canonicalized (e.g. consistently using American or British English spelling), or have stop words removed.
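The diacritic-removal and case-folding steps described above can be sketched with Python's standard `unicodedata` module. The `search_key` name is illustrative; stemming and stop-word removal would need an NLP library and are omitted:

```python
import unicodedata

def search_key(text: str) -> str:
    """Normalize text for search: strip diacritics and fold case."""
    # NFD decomposes 'é' into 'e' plus a combining acute accent (U+0301);
    # dropping the combining marks leaves only the base letters.
    decomposed = unicodedata.normalize("NFD", text)
    stripped = "".join(ch for ch in decomposed
                       if not unicodedata.combining(ch))
    return stripped.casefold()

print(search_key("Résumé"))  # resume
print(search_key("John") == search_key("john"))  # True
```

Storing this key alongside the original text lets a database match "resume" against "résumé" without altering what is displayed to the user.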
Techniques
For simple, context-independent normalization, such as removing non-alphanumeric characters or diacritical marks, regular expressions suffice. For example, the sed script sed -E 's/[[:space:]]+/ /g' inputfile would normalize runs of whitespace characters into a single space. More complex normalization requires correspondingly complicated algorithms, including domain knowledge of the language and vocabulary being normalized. Among other approaches, text normalization has been modeled as a problem of tokenizing and tagging streams of text and as a special case of machine translation.
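The same whitespace collapsing as the sed one-liner can be written in Python; this version additionally trims leading and trailing whitespace:

```python
import re

def collapse_whitespace(text: str) -> str:
    """Collapse runs of whitespace to a single space and trim the ends."""
    return re.sub(r"\s+", " ", text).strip()

print(collapse_whitespace("too   many\t\tspaces\n"))  # too many spaces
```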
Textual scholarship
In the field of textual scholarship and the editing of historic texts, the term "normalization" implies a degree of modernization and standardization – for example in the extension of scribal abbreviations and the transliteration of the archaic glyphs typically found in manuscript and early printed sources. A normalized edition is therefore distinguished from a diplomatic edition (or semi-diplomatic edition), in which some attempt is made to preserve these features. The aim is to strike an appropriate balance between, on the one hand, rigorous fidelity to the source text (including, for example, the preservation of enigmatic and ambiguous elements); and, on the other, producing a new text that will be comprehensible and accessible to the modern reader. The extent of normalization is therefore at the discretion of the editor, and will vary. Some editors, for example, choose to modernize archaic spellings and punctuation, but others do not.
See also
Automated paraphrasing – Automatic generation or recognition of paraphrased text
Canonicalization – Process for converting data into a "standard", "normal", or canonical form
Text simplification – Automated process
Unicode equivalence – Aspect of the Unicode standard