Character (computing)
In computing and telecommunications, a character is a unit of information that roughly corresponds to a grapheme, grapheme-like unit, or symbol, such as in an alphabet or syllabary in the written form of a natural language.
Examples of characters include letters, numerical digits, common punctuation marks (such as "." or "-"), and whitespace. The concept also includes control characters, which do not correspond to visible symbols but rather to instructions to format or process the text. Examples of control characters include carriage return and tab as well as other instructions to printers or other devices that display or otherwise process text.
Characters are typically combined into strings.
Historically, the term character was used to denote a specific number of contiguous bits. While a character is most commonly assumed to refer to 8 bits (one byte) today, other sizes were once popular: 6-bit character codes were widely used, the 5-bit Baudot code has been used as well, and the term has even been applied to 4-bit units with only 16 possible values. All modern systems use a varying-size sequence of these fixed-sized pieces; for instance, UTF-8 uses a varying number of 8-bit code units to define a "code point", and Unicode uses a varying number of code points to define a "character".
Encoding
Computers and communication equipment represent characters using a character encoding that assigns each character to something that can be stored or transmitted through a network, typically an integer quantity represented by a sequence of digits. Two common encodings are ASCII and UTF-8, an encoding for Unicode. While most character encodings map characters to numbers and/or bit sequences, Morse code instead represents characters using a series of electrical impulses of varying length.
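As a minimal illustration in C (assuming an ASCII-compatible execution character set), a character literal is simply a small integer under the hood:

```c
#include <stdio.h>

int main(void) {
    /* Each character literal evaluates to its integer code. */
    printf("'A'  -> %d\n", 'A');   /* 65 in ASCII */
    printf("'.'  -> %d\n", '.');   /* 46 in ASCII */
    printf("'\\n' -> %d\n", '\n'); /* 10: a control character (line feed) */
    return 0;
}
```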
Terminology
Historically, the term character has been widely used by industry professionals to refer to an encoded character, often as defined by the programming language or API. Likewise, character set has been widely used to refer to a specific repertoire of characters that have been mapped to specific bit sequences or numerical codes. The term glyph is used to describe a particular visual appearance of a character. Many computer fonts consist of glyphs that are indexed by the numerical code of the corresponding character.
With the advent and widespread acceptance of Unicode and bit-agnostic coded character sets, a character is increasingly being seen as a unit of information, independent of any particular visual manifestation. The ISO/IEC 10646 (Unicode) International Standard defines a character, or abstract character, as "a member of a set of elements used for the organization, control, or representation of data". Unicode's definition supplements this with explanatory notes that encourage the reader to differentiate between characters, graphemes, and glyphs, among other things. Such differentiation is an instance of the wider theme of the separation of presentation and content.
For example, the Hebrew letter aleph ("א") is often used by mathematicians to denote certain kinds of infinity (ℵ), but it is also used in ordinary Hebrew text. In Unicode, these two uses are considered different characters, and have two different Unicode numerical identifiers ("code points"), though they may be rendered identically. Conversely, the Chinese logogram for water ("水") may have a slightly different appearance in Japanese texts than it does in Chinese texts, and local typefaces may reflect this. But nonetheless in Unicode they are considered the same character, and share the same code point.
The Unicode standard also differentiates between these abstract characters and coded characters or encoded characters that have been paired with numeric codes that facilitate their representation in computers.
Combining character
Combining characters are also addressed by Unicode. For instance, Unicode allocates a code point to each of
'i' (U+0069),
the combining diaeresis (U+0308), and
'ï' (U+00EF).
This makes it possible to code the middle character of the word 'naïve' either as the single character 'ï' or as a combination of the character 'i' with the combining diaeresis (U+0069 LATIN SMALL LETTER I + U+0308 COMBINING DIAERESIS); the latter is also rendered as 'ï'.
These are considered canonically equivalent by the Unicode standard.
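The equivalence holds at the level of abstract characters, not bytes: the two codings produce different UTF-8 byte sequences. The following C sketch (using hard-coded UTF-8 bytes, so it is independent of the compiler's execution character set) makes the difference visible:

```c
#include <stdio.h>
#include <string.h>

int main(void) {
    /* U+00EF 'ï' precomposed: one code point, two UTF-8 bytes (C3 AF) */
    const char *precomposed = "\xC3\xAF";
    /* U+0069 'i' + U+0308 combining diaeresis: two code points,
       three UTF-8 bytes (69 CC 88) */
    const char *decomposed = "i\xCC\x88";

    printf("precomposed: %zu bytes\n", strlen(precomposed)); /* 2 */
    printf("decomposed:  %zu bytes\n", strlen(decomposed));  /* 3 */
    /* Canonically equivalent in Unicode, yet not byte-identical;
       a naive byte-wise comparison does not see the equivalence. */
    printf("byte-equal:  %s\n",
           strcmp(precomposed, decomposed) == 0 ? "yes" : "no");
    return 0;
}
```

Detecting the equivalence in practice requires Unicode normalization, which the C standard library does not provide.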
char
A char in the C programming language is a data type with the size of exactly one byte, which in turn is defined to be large enough to contain any member of the "basic execution character set". The exact number of bits can be checked via the CHAR_BIT macro. By far the most common size is 8 bits, and the POSIX standard requires it to be 8 bits. Newer C standards require char to be able to hold UTF-8 code units, which requires a minimum size of 8 bits.
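A short sketch of how a program can query these properties portably:

```c
#include <stdio.h>
#include <limits.h>

int main(void) {
    /* CHAR_BIT from <limits.h> is the number of bits in a char;
       sizeof(char) is 1 by definition, whatever that bit width is. */
    printf("bits per char: %d\n", CHAR_BIT);      /* 8 on POSIX systems */
    printf("sizeof(char):  %zu\n", sizeof(char)); /* always 1 */
    return 0;
}
```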
A Unicode code point may require as many as 21 bits. This will not fit in a char on most systems, so more than one char is used for some code points, as in the variable-length encoding UTF-8, where each code point takes 1 to 4 bytes. Furthermore, a "character" may require more than one code point (for instance with combining characters), depending on what is meant by the word "character".
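The 1-to-4-byte pattern follows directly from the UTF-8 bit layout. The following is a minimal encoder sketch, not a library routine, showing how the byte count grows with the code point value:

```c
#include <stdio.h>

/* Write the UTF-8 code unit sequence for one code point into buf and
   return its length in bytes, or 0 if cp is not a valid scalar value. */
static int utf8_encode(unsigned long cp, unsigned char buf[4]) {
    if (cp <= 0x7F) {                  /* 1 byte: the ASCII range */
        buf[0] = (unsigned char)cp;
        return 1;
    } else if (cp <= 0x7FF) {          /* 2 bytes */
        buf[0] = (unsigned char)(0xC0 | (cp >> 6));
        buf[1] = (unsigned char)(0x80 | (cp & 0x3F));
        return 2;
    } else if (cp <= 0xFFFF) {         /* 3 bytes */
        if (cp >= 0xD800 && cp <= 0xDFFF)
            return 0;                  /* surrogates are not encoded */
        buf[0] = (unsigned char)(0xE0 | (cp >> 12));
        buf[1] = (unsigned char)(0x80 | ((cp >> 6) & 0x3F));
        buf[2] = (unsigned char)(0x80 | (cp & 0x3F));
        return 3;
    } else if (cp <= 0x10FFFF) {       /* 4 bytes */
        buf[0] = (unsigned char)(0xF0 | (cp >> 18));
        buf[1] = (unsigned char)(0x80 | ((cp >> 12) & 0x3F));
        buf[2] = (unsigned char)(0x80 | ((cp >> 6) & 0x3F));
        buf[3] = (unsigned char)(0x80 | (cp & 0x3F));
        return 4;
    }
    return 0;
}

int main(void) {
    /* 'i', 'ï', the water logogram, and an emoji: 1 to 4 bytes each */
    unsigned long samples[] = { 0x69UL, 0xEFUL, 0x6C34UL, 0x1F600UL };
    for (int i = 0; i < 4; i++) {
        unsigned char buf[4];
        printf("U+%04lX -> %d byte(s)\n",
               samples[i], utf8_encode(samples[i], buf));
    }
    return 0;
}
```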
The fact that a character was historically stored in a single byte led to the two terms ("char" and "character") being used interchangeably in most documentation. This often makes the documentation confusing or misleading when multibyte encodings such as UTF-8 are used, and has led to inefficient and incorrect implementations of string manipulation functions (such as computing the "length" of a string as a count of code units rather than characters). Modern POSIX documentation attempts to fix this, defining "character" as a sequence of one or more bytes representing a single graphic symbol or control code, and attempts to use "byte" when referring to char data. However, it still contains errors, such as defining an array of char as a character array rather than a byte array.
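The byte/character confusion is easy to demonstrate. In this sketch the UTF-8 bytes are hard-coded, so it does not depend on source or execution character sets:

```c
#include <stdio.h>
#include <string.h>

int main(void) {
    /* "naïve" with precomposed U+00EF: 5 characters, 6 UTF-8 bytes */
    const char *s = "na\xC3\xAFve";

    /* strlen counts char code units (bytes), not characters. */
    printf("strlen: %zu\n", strlen(s)); /* 6, not 5 */

    /* Counting code points instead: skip UTF-8 continuation bytes,
       which all have the bit pattern 10xxxxxx. */
    size_t points = 0;
    for (const char *p = s; *p; p++)
        if (((unsigned char)*p & 0xC0) != 0x80)
            points++;
    printf("code points: %zu\n", points); /* 5 */
    return 0;
}
```

Even the code-point count is not always the user-perceived character count: the decomposed spelling of the same word has six code points.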
Unicode can also be stored in strings made up of code units that are larger than char. These are called "wide characters". The original C type was called wchar_t. Because some platforms define wchar_t as 16 bits and others define it as 32 bits, recent versions of the language have added char16_t and char32_t. Even then, the objects being stored might not be characters; for instance, variable-length UTF-16 is often stored in arrays of char16_t.
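A short C11 sketch showing one code point that needs two UTF-16 code units (it assumes an environment where u"" literals are UTF-16, i.e., __STDC_UTF_16__ is defined):

```c
#include <stdio.h>
#include <uchar.h> /* char16_t, char32_t (C11) */

int main(void) {
    /* U+1D11E MUSICAL SYMBOL G CLEF lies outside the Basic Multilingual
       Plane, so UTF-16 stores it as a surrogate pair. */
    const char16_t clef16[] = u"\U0001D11E";
    printf("char16_t units: %zu\n",
           sizeof clef16 / sizeof clef16[0] - 1);            /* 2 */
    printf("high surrogate: 0x%04X\n", (unsigned)clef16[0]); /* 0xD834 */
    printf("low surrogate:  0x%04X\n", (unsigned)clef16[1]); /* 0xDD1E */

    /* The same code point fits in a single char32_t (UTF-32). */
    const char32_t clef32[] = U"\U0001D11E";
    printf("char32_t units: %zu\n",
           sizeof clef32 / sizeof clef32[0] - 1);            /* 1 */
    return 0;
}
```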
Other languages also have a char type. Some, such as C++, use at least 8 bits, as C does. Others, such as Java, use 16 bits for char in order to represent UTF-16 code units.
See also
Character literal
Character (symbol)
Fill character
Combining character
Universal Character Set characters
Homoglyph
External links
Characters: A Brief Introduction by The Linux Information Project (LINFO)
ISO/IEC TR 15285:1998 summarizes the ISO/IEC's character model, focusing on terminology definitions and differentiating between characters and glyphs
Character computing
Character computing is a trans-disciplinary field of research at the intersection of computer science and psychology. It is any computing that incorporates the human character within its context. Character is defined as all features or characteristics defining an individual and guiding their behavior in a specific situation. It consists of stable trait markers (e.g., personality, background, history, socio-economic embeddings, culture) and variable state markers (e.g., emotions, health, cognitive state). Character computing aims at providing a holistic, psychologically driven model of human behavior. It models and predicts behavior based on the relationships between a situation and character. Three main research modules fall under the umbrella of character computing: character sensing and profiling, character-aware adaptive systems, and artificial characters.
Overview
Character computing can be viewed as an extension of the well-established field of affective computing. Based on the foundations of the different psychology branches, it advocates defining behavior as a compound attribute that is not driven by either personality, emotions, situation or cognition alone. It rather defines behavior as a function of everything that makes up an individual i.e., their character and the situation they are in. Affective computing aims at allowing machines to understand and translate the non-verbal cues of individuals into affect. Accordingly, character computing aims at understanding the character attributes of an individual and the situation to translate it to predicted behavior, and vice versa.
In practical terms, depending on the application context, character computing is a branch of research that deals with the design of systems and interfaces that can observe, sense, predict, adapt to, affect, understand, or simulate the following: character based on behavior and situation, behavior based on character and situation, or situation based on character and behavior. The Character-Behavior-Situation (CBS) triad is at the core of character computing and defines each of the three edges based on the other two.
Character computing relies on simultaneous development from a computational and psychological perspective and is intended to be used by researchers in both fields. Its main concept is aligning the computational model of character computing with empirical results from in-lab and in-the-wild psychology experiments. The model is to be continuously built and validated through the emergence of new data. Similar to affective and personality computing, the model is to be used as a base for different applications towards improving user experience.
History
The term character computing was first coined at the field's first workshop in 2017. Since then, there have been three international workshops and numerous publications. Despite its young age, the field has already drawn interest in the research community, leading to the publication of the first book under the same title by Springer Nature in early 2020.
Research that can be categorized under the field considerably predates 2017, however. The notion of combining several factors to explain behavior, or traits and states, has long been investigated in both psychology and computer science.
Character
The word character originates from the Greek word meaning "stamping tool", referring to distinctive features and traits. Over the years it has been given many different connotations, such as the moral character in philosophy, the temperament in psychology, a person in literature, or an avatar in various virtual worlds, including video games. According to character computing, character is a unification of all the previous definitions, referring back to the original meaning of the word. Character is defined as the holistic concept representing all interacting trait and state markers that distinguish an individual. Traits are characteristics that mainly remain stable over time; they include personality, affect, socio-demographics, and general health. States are characteristics that vary over short periods of time; they include emotions, well-being, health, and cognitive state. Each characteristic has many representation methods and psychological models. The different models can be combined, or one model can be preset for each characteristic, depending on the use case and the design choices.
Areas
Research into character computing can be divided into three areas, which complement each other but can each be investigated separately. The first area is sensing and predicting character states and traits or ensuing behavior. The second area is adapting applications to certain character states or traits and the behavior they predict. It also deals with trying to change or monitor such behavior. The final area deals with creating artificial agents e.g., chatbots or virtual reality avatars that exhibit certain characteristics.
The three areas are investigated separately and build on existing findings in the literature. The results of each of the three areas can also be used as a stepping stone for the next area. Each of the three areas has already been investigated on its own in different research fields with focus on different subsets of character. For example, affective computing and personality computing both cover different areas with a focus on some character components without the others to account for human behavior.
The Character-Behavior-Situation triad
Character computing is based on a holistic, psychologically driven model of human behavior. Human behavior is modeled and predicted based on the relationships between a situation and a human's character. To define character in a more formal and holistic manner, it is represented in light of the Character-Behavior-Situation triad. This highlights that character determines not only who we are but how we are, i.e., how we behave. The triad investigated in personality psychology is extended through character computing to the Character-Behavior-Situation triad. Any member of the CBS triad is a function of the other two members; for example, given the situation and the character, the behavior can be predicted. Each of the components in the triad can be further decomposed into smaller units and features that may best represent the human's behavior or character in a particular situation. Character is thus behind a person's behavior in any given situation. While this is a causal relation, the correlations between the three components are often easier to use in practice: the component that is most difficult to measure is predicted from those that are measured more easily. There are infinitely many components that could be included in the representation of any of C, B, and S. The challenge is always to choose the smallest subset needed for prediction of a person's behavior in a particular situation.
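One compact, purely schematic way to write the mutual dependence described above, with C, B, and S standing for character, behavior, and situation:

```latex
B = f(C, S), \qquad C = g(B, S), \qquad S = h(B, C)
```

Here f, g, and h are hypothetical placeholder mappings, to be instantiated by concrete psychological models and empirical data rather than any formula given by the field itself.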