Seismic tomography
Seismic tomography or seismotomography is a technique for imaging the subsurface of the Earth using seismic waves. The properties of seismic waves are modified by the material through which they travel. By comparing the differences in seismic waves recorded at different locations, it is possible to create a model of the subsurface structure. Most commonly, these seismic waves are generated by earthquakes or man-made sources such as explosions. Different types of waves, including P-, S-, Rayleigh, and Love waves, can be used for tomographic imaging, though each has its own benefits and drawbacks and is chosen depending on the geologic setting, seismometer coverage, distance from nearby earthquakes, and required resolution. The model created by tomographic imaging is almost always a seismic velocity model, and features within this model may be interpreted as structural, thermal, or compositional variations. Geoscientists apply seismic tomography to a wide variety of settings in which the subsurface structure is of interest, ranging in scale from whole-Earth structure to the upper few meters below the surface.
Theory
Tomography is solved as an inverse problem. Seismic data are compared to an initial Earth model and the model is modified until the best possible fit between the model predictions and observed data is found. Seismic waves would travel in straight lines if the Earth were of uniform composition, but structural, chemical, and thermal variations affect the properties of seismic waves, most importantly their velocity, leading to the reflection and refraction of these waves. The location and magnitude of variations in the subsurface can be calculated by the inversion process, although solutions to tomographic inversions are non-unique. Most commonly, only the travel time of the seismic waves is considered in the inversion. However, advances in modeling techniques and computing power have allowed different parts, or the entirety, of the measured seismic waveform to be fit during the inversion.
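As a concrete illustration (a standard textbook formulation, not the method of any particular study), travel-time tomography can be written as a linearized inverse problem: the travel time of each ray is a path integral of slowness, and perturbations to a discretized slowness model map linearly onto travel-time residuals:

```latex
% Travel time of ray i as a path integral of slowness s = 1/v:
t_i = \int_{\mathrm{ray}_i} s(\mathbf{x}) \, \mathrm{d}\ell

% Discretizing the model into cells j and linearizing about a reference model,
% slowness perturbations map onto travel-time residuals:
\delta t_i = \sum_j G_{ij} \, \delta s_j , \qquad
G_{ij} = \text{length of ray } i \text{ in cell } j

% The image is then a regularized least-squares solution, e.g.:
\min_{\delta \mathbf{s}} \,
\lVert \mathbf{G} \, \delta\mathbf{s} - \delta\mathbf{t} \rVert_2^2
+ \lambda^2 \lVert \mathbf{L} \, \delta\mathbf{s} \rVert_2^2
```

Here L is a damping or smoothing operator and λ sets the trade-off between fitting the data and keeping the model simple; the need for such choices is one reason tomographic solutions are non-unique.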
Seismic tomography is similar to medical X-ray computed tomography (CT scanning) in that a computer processes receiver data to produce a 3D image, although CT scans use attenuation instead of travel-time differences. Seismic tomography must also deal with curved ray paths that are reflected and refracted within the Earth, and with potential uncertainty in the location of the earthquake hypocenter, whereas CT scans use X-rays that travel in straight lines from a known source.
History
In the early 20th century, seismologists first used travel-time variations in seismic waves from earthquakes to make discoveries such as the existence of the Moho and the depth to the outer core. While these findings shared some underlying principles with seismic tomography, modern tomography itself was not developed until the 1970s with the expansion of global seismic networks. Networks like the World-Wide Standardized Seismograph Network were initially motivated by the monitoring of underground nuclear tests, but quickly showed the benefits of their accessible, standardized datasets for geoscience. These developments occurred concurrently with advancements in modeling techniques and computing power that were required to solve large inverse problems and generate theoretical seismograms, which are needed to test the accuracy of a model. As early as 1972, researchers successfully used some of the underlying principles of modern seismic tomography to search for fast and slow regions in the subsurface.
The first widely cited publication that largely resembles modern seismic tomography was published in 1976 and used local earthquakes to determine the 3D velocity structure beneath Southern California. The following year, P-wave delay times were used to create 2D velocity maps of the whole Earth at several depth ranges, representing an early 3D model. The first model using iterative techniques, which improve upon an initial model in small steps and are required when there are a large number of unknowns, was published in 1984. That model was made possible by iterating upon the first radially anisotropic Earth model, created in 1981. A radially anisotropic Earth model describes changes in material properties, specifically seismic velocity, along a radial path through the Earth, and assumes this profile is valid for every path from the core to the surface. The 1984 study was also the first to apply the term "tomography" to seismology, as the term had originated in the medical field with X-ray tomography.
Seismic tomography has continued to improve in the decades since its initial conception. The development of adjoint inversions, which are able to combine several different types of seismic data into a single inversion, helps offset some of the trade-offs associated with any individual data type. Historically, seismic waves have been modeled as 1D rays, a method referred to as "ray theory" that is relatively simple to compute and can usually fit travel-time data well. However, recorded seismic waveforms contain much more information than travel time alone and are affected by a much wider path than is assumed by ray theory. Methods like the finite-frequency method attempt to account for this within the framework of ray theory. More recently, the development of "full waveform" or "waveform" tomography has abandoned ray theory entirely. This method models seismic wave propagation in its full complexity and can yield more accurate images of the subsurface. Such inversions were originally developed in exploration seismology in the 1980s and 1990s and were too computationally expensive for global- and regional-scale studies, but advances in numerical methods for simulating seismic waves have allowed waveform tomography to become more common.
Process
Seismic tomography uses seismic records to create 2D and 3D models of the subsurface through an inverse problem that minimizes the difference between the model's predictions and the observed seismic data. Various methods are used to resolve anomalies in the crust, lithosphere, mantle, and core based on the availability of data and the types of seismic waves that pass through the region. Longer wavelengths penetrate deeper into the Earth, but seismic waves are not sensitive to features significantly smaller than their wavelength and therefore provide a lower resolution. Different methods also make different assumptions, which can have a large effect on the image created. For example, commonly used tomographic methods work by iteratively improving an initial input model, and thus can produce unrealistic results if the initial model is unreasonable.
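To make the iterative idea concrete, the following is a minimal sketch (the grid size, ray geometry, and slowness values are all hypothetical toy choices, not a production workflow) of straight-ray travel-time tomography with a SIRT-style update, in which a homogeneous starting model is repeatedly corrected by back-projected travel-time residuals:

```python
# Toy straight-ray travel-time tomography with a SIRT-style iterative update.
import numpy as np

n = 16                                    # model is an n x n grid of 1 km cells
rng = np.random.default_rng(0)

def ray_row(p0, p1, n_samples=500):
    """Approximate the length of a straight ray inside each cell by dense
    sampling: each sample point adds one step-length to the cell it falls in."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    step = np.linalg.norm(p1 - p0) / n_samples
    frac = (np.arange(n_samples) + 0.5) / n_samples
    pts = p0 + frac[:, None] * (p1 - p0)
    ij = np.clip(pts.astype(int), 0, n - 1)
    row = np.zeros(n * n)
    np.add.at(row, ij[:, 1] * n + ij[:, 0], step)
    return row

# Crossing ray paths between random points on opposite edges of the grid.
G = np.array(
    [ray_row((0, rng.uniform(0, n)), (n, rng.uniform(0, n))) for _ in range(200)]
    + [ray_row((rng.uniform(0, n), 0), (rng.uniform(0, n), n)) for _ in range(200)]
)

# Hypothetical "true" model: uniform slowness (s/km) with one slow block.
s_true = np.full(n * n, 0.25)
s_true.reshape(n, n)[5:9, 5:9] = 0.30
t_obs = G @ s_true                        # synthetic observed travel times

# SIRT: start from a homogeneous model and repeatedly back-project
# travel-time residuals, normalized by ray length and cell coverage.
s = np.full(n * n, 0.25)
ray_len = G.sum(axis=1)                   # total length of each ray
cell_cov = G.sum(axis=0)                  # total ray length in each cell
cell_cov[cell_cov == 0] = 1.0             # guard against uncovered cells
for _ in range(50):
    resid = t_obs - G @ s                 # travel-time misfit per ray
    s += G.T @ (resid / ray_len) / cell_cov
print("final rms residual (s):", np.sqrt(np.mean((t_obs - G @ s) ** 2)))
```

Real codes differ mainly in the forward solver (curved rays or full waveforms), the regularization, and the sheer scale of the linear algebra, but the residual-driven update loop follows this same basic pattern.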
P-wave data are used in most local models, and in global models in areas with sufficient earthquake and seismograph density. S- and surface-wave data are used in global models where this coverage is not sufficient, such as in ocean basins and away from subduction zones. First-arrival times are the most widely used, but reflected and refracted phases are incorporated into more complex models, such as those imaging the core. Differential travel times between wave phases or types are also used.
= Local tomography =
Local tomographic models are often based on a temporary seismic array targeting specific areas, unless in a seismically active region with extensive permanent network coverage. These arrays allow for imaging of the crust and upper mantle.
Diffraction and wave-equation tomography use the full waveform, rather than just the first arrival times. The inversion of the amplitudes and phases of all arrivals provides more detailed density information than transmission traveltimes alone. Despite their theoretical appeal, these methods are not widely employed because of their computational expense and the difficulty of the inversions.
Reflection tomography originated in exploration geophysics. It uses an artificial source to resolve small-scale features at crustal depths. Wide-angle tomography is similar, but with a wide source-to-receiver offset. This allows the detection of seismic waves refracted from sub-crustal depths and can determine continental architecture and details of plate margins. These two methods are often used together.
Local earthquake tomography is used in seismically active regions with sufficient seismometer coverage. Because sources and receivers are close together, precise earthquake focal locations must be known, which requires the simultaneous inversion of both structure and focal locations in the model calculations.
Teleseismic tomography uses waves from distant earthquakes that arrive steeply from below at a local seismic array. The models can reach depths similar to the array aperture, typically a few hundred kilometers, allowing the crust and lithosphere to be imaged. The waves travel within about 30° of vertical, which vertically distorts compact features in the image.
= Regional or global tomography =
Regional- to global-scale tomographic models are generally based on long wavelengths. These models agree with one another better than local models do because of the large features they image, such as subducted slabs and superplumes. The trade-off of whole-mantle to whole-Earth coverage is coarse resolution (hundreds of kilometers) and difficulty imaging small features (e.g. narrow plumes). Although often used to image different parts of the subsurface, P- and S-wave derived models broadly agree where there is image overlap. These models use data from both permanent seismic stations and supplementary temporary arrays.
First-arrival traveltime P-wave data are used to generate the highest-resolution tomographic images of the mantle. These models are limited to regions with sufficient seismograph coverage and earthquake density, and therefore cannot be used for areas such as inactive plate interiors or ocean basins without seismic networks. Other phases of P-waves are used to image the deeper mantle and core.
In areas with limited seismograph or earthquake coverage, multiple phases of S-waves can be used for tomographic models. These are of lower resolution than P-wave models, due to the distances involved and the smaller amount of bounce-phase data available. S-waves can also be used in conjunction with P-waves for differential arrival-time models.
Surface waves can be used for tomography of the crust and upper mantle where no body-wave (P and S) data are available. Both Rayleigh and Love waves can be used. The low frequencies of these waves result in low-resolution models, so such models have difficulty resolving crustal structure. Free oscillations, or normal-mode seismology, are the long-wavelength, low-frequency movements of the surface of the Earth, which can be thought of as a type of surface wave. The frequencies of these oscillations can be obtained through Fourier transformation of seismic data. Models based on this method are of broad scale, but have the advantage of relatively uniform data coverage compared to data sourced directly from earthquakes.
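As a toy illustration of the normal-mode measurement, the sketch below Fourier-transforms a long record and reads off a spectral peak. The "record" is synthetic, and the mode frequencies and decay constants are placeholders (though real gravest modes such as 0S2 do lie near 0.3 mHz):

```python
# Sketch: picking free-oscillation frequencies from a long record via FFT.
import numpy as np

fs = 1.0                                      # one sample per second
t = np.arange(0, 4 * 86400, 1 / fs)           # four days of data
modes_mhz = [0.31, 0.47, 0.65]                # hypothetical mode frequencies
x = sum(np.exp(-t / 2e5) * np.cos(2 * np.pi * (f * 1e-3) * t) for f in modes_mhz)
x += 0.1 * np.random.default_rng(1).standard_normal(t.size)

# Taper and transform, then read the strongest spectral peak in the band.
spec = np.abs(np.fft.rfft(x * np.hanning(t.size)))
freq_mhz = np.fft.rfftfreq(t.size, d=1 / fs) * 1e3
band = (freq_mhz > 0.2) & (freq_mhz < 0.8)
print("strongest peak (mHz):", freq_mhz[band][np.argmax(spec[band])])
# In practice many such peaks are cataloged and compared with the
# frequencies predicted by a candidate Earth model.
```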
Attenuation tomography attempts to extract the anelastic signal from the elastic-dominated waveform of seismic waves. Generally, it is assumed that seismic waves behave elastically, meaning individual rock particles that are displaced by the seismic wave eventually return to their original position. However, a comparatively small amount of permanent deformation does occur, which adds up to significant energy loss over large distances. This anelastic behavior is called attenuation, and in certain conditions can become just as important as the elastic response. It has been shown that the contribution of anelasticity to seismic velocity is highly sensitive to temperature, so attenuation tomography can help determine if a velocity feature is caused by a thermal or chemical variation, which can be ambiguous when assuming a purely elastic response.
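For reference, attenuation is conventionally quantified by the dimensionless quality factor Q (this standard relation is supplied here for illustration, not drawn from the text above): a wave of frequency f travelling a distance x at velocity v has its amplitude reduced to

```latex
% Amplitude decay due to anelastic attenuation with quality factor Q:
A(x) = A_0 \exp\!\left( -\frac{\pi f x}{Q v} \right)
```

Low Q means strong attenuation; because Q in mantle rocks is highly temperature-sensitive, mapping Q alongside velocity helps separate thermal from compositional anomalies.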
Ambient noise tomography uses random seismic waves generated by oceanic and atmospheric disturbances to recover the velocities of surface waves. Assuming ambient seismic noise is equal in amplitude and frequency content from all directions, cross-correlating the ambient noise recorded at two seismometers for the same time period should produce only seismic energy that travels from one station to the other. This allows one station to be treated as a "virtual source" of surface waves sent to the other station, the "virtual receiver". These surface waves are sensitive to the seismic velocity of the Earth at different depths depending on their period. A major advantage of this method is that it does not require an earthquake or man-made source. A disadvantage of the method is that an individual cross-correlation can be quite noisy due to the complexity of the real ambient noise field. Thus, many individual correlations over a shorter time period, typically one day, need to be created and averaged to improve the signal-to-noise ratio. While this has often required very large amounts of seismic data recorded over multiple years, more recent studies have successfully used much shorter time periods to create tomographic images with ambient noise.
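The following is a minimal sketch of the daily correlate-and-stack workflow; the spectral-whitening choice is one of several used in practice, and the `read_day` I/O helper is a hypothetical stand-in for whatever a real pipeline uses:

```python
import numpy as np

def daily_cross_correlation(u1, u2, max_lag):
    """Cross-correlate two equal-length daily noise records from a station
    pair; stacking many of these approximates the inter-station
    surface-wave Green's function."""
    n = len(u1)
    U1, U2 = np.fft.rfft(u1), np.fft.rfft(u2)
    cross = U1 * np.conj(U2)
    cross /= np.abs(cross) + 1e-10       # whitening: keep phase, flatten amplitude
    cc = np.fft.irfft(cross, n)
    cc = np.roll(cc, n // 2)             # put zero lag at the center
    mid = n // 2
    return cc[mid - max_lag: mid + max_lag + 1]

# Hypothetical usage: one correlation per day, averaged over a year.
# `read_day(station, day)` stands in for the project's actual data I/O.
# stack = np.mean([daily_cross_correlation(read_day("STA1", d),
#                                          read_day("STA2", d), 600)
#                  for d in range(365)], axis=0)
```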
Waveforms are usually modeled as rays because ray theory is significantly less complex to compute than the full seismic wave equations. However, seismic waves are affected by the material properties of a wide region surrounding the ray path, not just the material through which the ray passes directly. The finite-frequency effect is the influence this surrounding medium has on a seismic record. Finite-frequency tomography accounts for this in determining both travel-time and amplitude anomalies, increasing image resolution and making it possible to resolve much stronger variations (on the order of 10–30%) in material properties.
Applications
Seismic tomography can resolve anisotropy, anelasticity, density, and bulk sound velocity. Variations in these parameters may be a result of thermal or chemical differences, which are attributed to processes such as mantle plumes, subducting slabs, and mineral phase changes. Larger scale features that can be imaged with tomography include the high velocities beneath continental shields and low velocities under ocean spreading centers.
= Hotspots =
The mantle plume hypothesis proposes that areas of volcanism not readily explained by plate tectonics, called hotspots, are a result of thermal upwelling within the mantle. Some researchers have proposed an upper-mantle source above the 660 km discontinuity for these plumes, while others propose a much deeper source, possibly at the core-mantle boundary.
While the source of mantle plumes has been highly debated since they were first proposed in the 1970s, most modern studies argue in favor of mantle plumes originating at or near the core-mantle boundary. This is in large part due to tomographic images that reveal both the plumes themselves and large low-velocity zones in the deep mantle that likely contribute to the formation of mantle plumes. These large low-shear-velocity provinces, as well as smaller ultra-low-velocity zones, have been consistently observed across many tomographic models of the deep Earth.
= Subduction Zones =
Subducting plates are colder than the mantle into which they sink. This creates a fast anomaly that is visible in tomographic images. Tomographic images have been made of most subduction zones around the world and have provided insight into the geometries of the crust and upper mantle in these areas. These images reveal that subducting plates vary widely in how steeply they descend into the mantle, and have also captured features such as deeper portions of a subducting plate tearing away from its upper portion.
= Other Applications =
Tomography can be used to image faults to better understand their seismic hazard. This can be done by imaging the fault itself, through differences in seismic velocity across the fault boundary, or by determining near-surface velocity structure, which can have a large impact on the amplitude of ground shaking during an earthquake due to site amplification effects. Near-surface velocity structure from tomographic images can also be useful for other hazards, such as monitoring landslides for changes in near-surface moisture content, which affects both seismic velocity and the potential for future sliding.
Tomographic images of volcanoes have yielded new insights into the properties of the underlying magmatic systems. These images have most commonly been used to estimate the depth and volume of magma stored in the crust, but have also been used to constrain properties such as the geometry, temperature, or chemistry of the magma. Both lab experiments and tomographic imaging studies have shown, however, that recovering these properties from seismic velocity alone can be difficult due to the complexity of seismic wave propagation through focused zones of hot, potentially molten rock.
While far less developed than tomography on Earth, seismic tomography has been proposed for other bodies in the solar system and successfully applied on the Moon. Data collected from four seismometers placed by the Apollo missions have been used many times to create 1D velocity profiles for the Moon, and less commonly 3D tomographic models. Tomography relies on having multiple seismometers, but tomography-adjacent methods for constraining interior structure have also been used on other planets. While on Earth these methods are often used in combination with seismic tomography models to better constrain the locations of subsurface features, they can still provide useful information about the interiors of other planetary bodies when only a single seismometer is available. For example, data gathered by the SEIS (Seismic Experiment for Interior Structure) instrument aboard the InSight lander on Mars have been used to detect the Martian core.
Limitations
Global seismic networks have expanded steadily since the 1960s, but are still concentrated on continents and in seismically active regions. Oceans, particularly in the southern hemisphere, are under-covered. Temporary seismic networks have helped improve tomographic models in regions of particular interest, but typically only collect data for months to a few years. The uneven distribution of earthquakes biases tomographic models towards seismically active regions. Methods that do not rely on earthquakes such as active source surveys or ambient noise tomography have helped image areas with little to no seismicity, though these both have their own limitations as compared to earthquake-based tomography.
The type of seismic wave used in a model limits the resolution it can achieve. Longer wavelengths are able to penetrate deeper into the Earth, but can only be used to resolve large features. Finer resolution can be achieved with surface waves, with the trade-off that they cannot be used in models deeper than the crust and upper mantle. The disparity between wavelength and feature scale causes anomalies to appear smaller and weaker in images than they actually are. P- and S-wave models also respond differently to the types of anomalies present. Models based solely on first arrivals naturally prefer faster pathways and therefore have lower resolution of slow (often hot) features. This can be a significant issue in areas such as volcanoes, where rocks are much hotter than their surroundings and often partially melted. Shallow models must also account for the significant lateral velocity variations in continental crust.
Because seismometers have only been deployed in large numbers since the late 20th century, tomography can only observe changes in velocity structure over a span of decades at most. For example, tectonic plates move at only millimeters per year, so the total change in geologic structure due to plate tectonics since the development of seismic tomography is several orders of magnitude smaller than the finest resolution possible with modern seismic networks. However, seismic tomography has still been used to observe near-surface velocity changes on time scales of years to months.
Tomographic solutions are non-unique. Although statistical methods can be used to analyze the validity of a model, unresolvable uncertainty remains. This contributes to difficulty comparing the validity of different model results.
Computing power limits the amount of seismic data, number of unknowns, mesh size, and iterations in tomographic models. This is of particular importance in ocean basins, which due to limited network coverage and earthquake density require more complex processing of distant data. Shallow oceanic models also require smaller model mesh size due to the thinner crust.
Tomographic images are typically presented with a color ramp representing the strength of the anomalies. This can make equal changes appear to differ in magnitude because of how colors are perceived: the change from orange to red, for example, reads as more subtle than the change from blue to yellow. The degree of color saturation can also visually skew interpretations. These factors should be considered when analyzing images.
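One common mitigation (a general visualization suggestion, not something prescribed by the text) is to plot anomalies with a balanced diverging colormap and symmetric limits, so zero anomaly maps to the neutral color and equal anomalies carry roughly equal visual weight:

```python
import numpy as np
import matplotlib.pyplot as plt

# Stand-in velocity-anomaly map (percent); any real model slice works here.
anom = np.random.default_rng(2).normal(0.0, 1.5, (60, 60))

# Symmetric limits center the colormap's neutral color on zero anomaly.
lim = np.abs(anom).max()
plt.imshow(anom, cmap="RdBu_r", vmin=-lim, vmax=lim)
plt.colorbar(label="velocity anomaly (%)")
plt.show()
```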
See also
Banana doughnut theory
EarthScope
External links
SubMachine is a collection of web-based tools for the interactive visualisation, analysis, and quantitative comparison of global-scale, volumetric (3-D) data sets of the subsurface, with supporting tools for interacting with other, complementary models and data sets.
EarthScope Education and Outreach: Seismic Tomography Background. Incorporated Research Institutions for Seismology (IRIS). Retrieved 17 January 2013.
Tomography Animation. Incorporated Research Institutions for Seismology (IRIS). Retrieved 17 January 2013.