Sonification
Sonification is the use of non-speech audio to convey information or perceptualize data. Auditory perception has advantages in temporal, spatial, amplitude, and frequency resolution that open possibilities as an alternative or complement to visualization techniques.
For example, the rate of clicking of a Geiger counter conveys the level of radiation in the immediate vicinity of the device.
Though many experiments with data sonification have been explored in forums such as the International Community for Auditory Display (ICAD), sonification faces many challenges to widespread use for presenting and analyzing data. For example, studies show it is difficult, but essential, to provide adequate context for interpreting sonifications of data. Many sonification attempts are coded from scratch due to the lack of flexible tooling for sonification research and data exploration.
History
The Geiger counter, invented in 1908, is one of the earliest and most successful applications of sonification. A Geiger counter has a tube of low-pressure gas; each particle detected produces a pulse of current when it ionizes the gas, producing an audio click. The original version was only capable of detecting alpha particles. In 1928, Geiger and Walther Müller (a PhD student of Geiger) improved the counter so that it could detect more types of ionizing radiation.
In 1913, Dr. Edmund Fournier d'Albe of University of Birmingham invented the optophone, which used selenium photosensors to detect black print and convert it into an audible output. A blind reader could hold a book up to the device and hold an apparatus to the area she wanted to read. The optophone played a set group of notes: g c' d' e' g' b' c e. Each note corresponded with a position on the optophone's reading area, and that note was silenced if black ink was sensed. Thus, the missing notes indicated the positions where black ink was on the page and could be used to read.
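The silencing scheme described above can be sketched in a few lines of Python. This is an illustration of the mapping only, with the sensor column simplified to eight boolean readings; it is not a model of the actual 1913 hardware.

```python
# Illustrative sketch of the optophone's note-silencing scheme: each of
# eight sensor positions owns one note of the fixed chord, and detecting
# black ink at a position silences that note. The missing notes tell the
# listener where ink lies on the page.

NOTES = ["g", "c'", "d'", "e'", "g'", "b'", "c", "e"]  # as listed above

def sounding_notes(ink_column):
    """Return the notes heard for one sensor column
    (True = black ink detected, which silences that position)."""
    return [note for note, ink in zip(NOTES, ink_column) if not ink]

blank_paper = [False] * 8           # no ink: the full set of notes plays
stroke = [False, True, True, False, False, True, False, False]
heard = sounding_notes(stroke)      # the silenced notes locate the ink
```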
Pollack and Ficks published the first perceptual experiments on the transmission of information via auditory display in 1954. They experimented with combining sound dimensions such as timing, frequency, loudness, duration, and spatialization and found that they could get subjects to register changes in multiple dimensions at once. These experiments did not get into much more detail than that, since each dimension had only two possible values.
In 1970, Nonesuch Records released a new electronic music composition by the American composer Charles Dodge, "The Earth's Magnetic Field." It was produced at the Columbia-Princeton Electronic Music Center. As the title suggests, the composition's electronic sounds were synthesized from data from the earth's magnetic field. As such, it may well be the first sonification of scientific data for artistic, rather than scientific, purposes.
John M. Chambers, Max Mathews, and F.R. Moore at Bell Laboratories did the earliest work on auditory graphing in their "Auditory Data Inspection" technical memorandum in 1974.
They augmented a scatterplot using sounds that varied along frequency, spectral content, and amplitude modulation dimensions to use in classification. They did not do any formal assessment of the effectiveness of these experiments.
In 1976, philosopher of technology, Don Ihde, wrote, "Just as science seems to produce an infinite set of visual images for virtually all of its phenomena--atoms to galaxies are familiar to us from coffee table books to science magazines; so 'musics,' too, could be produced from the same data that produces visualizations." This appears to be one of the earliest references to sonification as a creative practice.
In early 1982, Sara Bly of the University of California, Davis, published two papers, with sound examples, on her work using computer-generated sound to present data. At the time, the field of scientific visualization was gaining momentum. Among other things, her studies compared visual and aural presentation, demonstrating that "sound offers an enhancement and an alternative to graphic tools." Her work provides early experiment-based evidence for matching a data representation to the type and purpose of the data.
Also in the 1980s, pulse oximeters came into widespread use. Pulse oximeters can sonify the oxygen concentration of blood by emitting higher pitches for higher concentrations. In practice, however, this feature may be underused by medical professionals because of the risk of too many competing audio stimuli in medical environments.
In 1992, the International Community for Auditory Display (ICAD) was founded by Gregory Kramer as a forum for research on auditory display which includes data sonification. ICAD has since become a home for researchers from many different disciplines interested in the use of sound to convey information through its conference and peer-reviewed proceedings.
In May 2022, NASA reported the sonification (converting astronomical data associated with pressure waves into sound) of the black hole at the center of the Perseus galaxy cluster.
In 2024, Adhyâropa Records released The Volcano Listening Project by Leif Karlstrom, which merges geophysics research and computer music synthesis with acoustic instrumental and vocal performances by Billy Contreras, Todd Sickafoose, and other acoustic musicians.
Some existing applications and projects
Variometer (rate-of-climb indicator) in a glider (sailplane) beeps with a variable pitch corresponding to the meter reading
Auditory thermometer
Clocks, e.g., with an audible tick every second, and with special chimes every 15 minutes
Cockpit auditory displays
Geiger counter
Gravitational waves at LIGO
Interactive sonification
Medical and surgical auditory displays
Multimodal (combined sense) displays to minimize visual overload and fatigue
Navigation
DNA
Space physics
Pulse oximetry in operating rooms and intensive care
Speed alarm in motor vehicles
Sonar
Storm and weather sonification
Volcanic activity detection
Cluster analysis of high-dimensional data using particle trajectory sonification
Volume and value of the Dow Jones Industrial Average
Image sonification for the visually impaired
CURAT Sonification Game based on psychoacoustic sonification
Tiltification based on psychoacoustic sonification
Sonified, a system that translates visual information from a video camera into sound in real time (2011)
PriceSquawk Audible Market Technology
Representing biodiversity decline
Sonification techniques
Many different components can be altered to change the user's perception of a sound, and in turn, their perception of the underlying information being portrayed. Often, an increase or decrease in some level of this information is indicated by an increase or decrease in pitch, amplitude, or tempo, but it could also be indicated by varying other, less commonly used components. For example, a stock market price could be portrayed by a pitch that rises as the price rises and falls as it falls. To let the user distinguish more than one stock, different timbres or brightnesses might be used for the different stocks, or the stocks might be played from different points in space, for example through different sides of the user's headphones.
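The pitch mapping just described can be sketched as a simple parameter mapping in Python. The two-octave frequency range and the linear scaling are arbitrary illustrative choices, not a standard, and the prices are made-up data.

```python
import math

def map_to_frequency(values, f_min=220.0, f_max=880.0):
    """Linearly rescale data values onto a pitch range in Hz, so that
    a rising value is heard as a rising pitch."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for constant data
    return [f_min + (v - lo) / span * (f_max - f_min) for v in values]

def synthesize(frequencies, duration=0.2, rate=44100):
    """Render each frequency as a short sine tone, returning one
    concatenated list of samples in [-1, 1]."""
    samples = []
    for f in frequencies:
        for n in range(int(duration * rate)):
            samples.append(math.sin(2 * math.pi * f * n / rate))
    return samples

prices = [101.2, 102.8, 101.9, 104.5, 107.1, 106.0]  # made-up quotes
freqs = map_to_frequency(prices)   # lowest price -> 220 Hz, highest -> 880 Hz
audio = synthesize(freqs)          # ready to write to a sound device or file
```

A second stock could be mapped the same way but rendered with a different waveform, or panned to the opposite stereo channel, to keep the two streams distinguishable.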
Many studies have sought the best techniques for presenting various types of information, but no conclusive set of techniques has yet been established. As sonification is still considered a field in its infancy, current studies work toward determining the best set of sound components to vary in different situations.
Several different techniques for auditory rendering of data can be categorized:
Acoustic sonification
Audification
Model-based sonification
Parameter mapping
Stream-based sonification
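To illustrate how audification differs from the parameter mapping above: rather than mapping data to a sound parameter such as pitch, audification plays the data values directly as the waveform. A minimal sketch, with a made-up data series and an arbitrary repeat count chosen only to make the result long enough to hear:

```python
# Minimal audification sketch: the data series itself becomes the
# waveform. Values are normalized into the [-1, 1] sample range and the
# series is tiled (repeated) so the result is long enough to be audible
# at an ordinary sample rate.

def audify(values, repeats=1000):
    """Normalize a data series to [-1, 1] and tile it into a sample buffer."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for constant data
    cycle = [2 * (v - lo) / span - 1 for v in values]
    return cycle * repeats   # concatenated copies form the audio signal

measurements = [0.0, 3.2, 5.1, 2.4, -1.8, -4.0, -2.2, 0.5]  # made-up data
signal = audify(measurements)
```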
An alternative approach to traditional sonification is "sonification by replacement", for example Pulsed Melodic Affective Processing (PMAP). In PMAP, rather than sonifying a data stream, the data being computed on is itself musical, for example MIDI. The data stream represents a non-musical state (in PMAP, an affective state); calculations are done directly on the musical data, and the results can be listened to with a minimum of translation.
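A loose sketch of the "sonification by replacement" idea: the streams below are lists of MIDI note numbers, and the combining operation acts on the notes themselves, so its output can be played back directly. The gate and the note streams are hypothetical illustrations, not the published PMAP operator set.

```python
# Hypothetical sketch of sonification by replacement: the data *is*
# musical data (MIDI note numbers), so computation happens on notes
# directly and the result is listenable without a translation step.

REST = 0  # MIDI note number 0 used here as a stand-in for silence

def note_or(stream_a, stream_b):
    """OR-like musical gate: a note sounds if either input sounds,
    taking the higher pitch when both do."""
    return [max(a, b) for a, b in zip(stream_a, stream_b)]

def mean_pitch(stream):
    """Average pitch of the sounding notes, a simple summary one could
    read off the result (or simply hear)."""
    sounding = [n for n in stream if n != REST]
    return sum(sounding) / len(sounding) if sounding else REST

calm  = [60, REST, 64, REST, 67, REST]   # sparse stream of note numbers
tense = [61, 61, REST, 66, 66, REST]     # denser stream
combined = note_or(calm, tense)          # directly playable as MIDI
```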
See also
Auditory display – Use of sound to communicate information from a computer to the user
Computer music – Application of computing technology in music
Music and artificial intelligence – Usage of artificial intelligence to generate music
External links
International Community for Auditory Display
Sonification Report (1997) provides an introduction to the status of the field and current research agendas.
The Sonification Handbook, an Open Access book that gives a comprehensive introductory presentation of the key research areas in sonification and auditory display.
Using Sound to Extract Meaning from Complex Data, C. Scaletti and A. Craig, 1991.
Auditory Information Design, PhD Thesis by Stephen Barrass 1998, User Centred Approach to Designing Sonifications.
Mozzi : interactive sensor sonification on Arduino microprocessor.
Preliminary report on design rationale, syntax, and semantics of LSL: A specification language for program auralization, D. Boardman and AP Mathur, 1993.
A specification language for program auralization, D. Boardman, V. Khandelwal, and AP Mathur, 1994.
Sonification tutorial
SonEnvir general sonification environment
Sonification.de provides information about sonification and auditory display, with links to related events and projects
Sonification for Exploratory Data Analysis, PhD Thesis by Thomas Hermann 2002, developing Model Based Sonification.
Sonification of Mobile and Wireless Communications
Interactive Sonification a hub to news and upcoming events in the field of interactive sonification
zero-th space-time association
CodeSounding — an open source sonification framework that makes it possible to hear how any existing Java program "sounds", by assigning instruments and pitches to code statements (if, for, etc.) and playing them as they are executed at runtime. The flow of execution is thus played as a flow of music whose rhythm changes with user interaction.
LYCAY, a Java library for sonification of Java source code
WebMelody, a system for sonification of activity of web servers.
Sonification of a Cantor set
Sonification Sandbox v.3.0, a Java program to convert datasets to sounds, GT Sonification Lab, School of Psychology, Georgia Institute of Technology.
Program Sonification using Java, an online chapter (with code) explaining how to implement sonification using speech synthesis, MIDI note generation, and audio clips.
Live Sonification of Ocean Swell