Category Archives: Geophysics

Judging earthquake risk

The early 21st century seems to have been plagued by very powerful earthquakes: 217 greater than Magnitude 7.0, 19 greater than M 8.0 and 2 greater than M 9.0. Although some lesser seismic events kill, those above M 7.0 have far greater potential for fatal consequences. Over 700 thousand people have died from their effects: ~20 000 in the 2001 Gujarat earthquake (M 7.7); ~29 000 in the 2003 Bam earthquake (M 6.6); ~250 000 in the 2004 Indian Ocean tsunami that stemmed from an M 9.1 earthquake off western Sumatra; ~95 000 in the 2005 Kashmir earthquake (M 7.6); ~87 000 in the 2008 Sichuan earthquake (M 7.9); up to 316 000 in the 2010 Haiti earthquake (M 7.0); and ~20 000 in the tsunami that hit NE Japan from the M 9.0 Tohoku earthquake of 2011. The 26 December 2004 Indian Ocean tsunamis spelled out the far-reaching risk to populated coastal areas that face oceans prone to seismicity or large coastal landslips, but also the need for warning systems: tsunamis travel far more slowly than seismic waves and, except for directly adjacent areas, there is a good chance of escape given a timely alert. Yet, historically, deadly risk is most often posed by earthquakes that occur beneath densely populated continental crust. Note that the much-publicised earthquake that hit San Francisco in 1906 (M 7.8), on the world’s best-known fault, the San Andreas, caused between 700 and 3000 fatalities, a sizable proportion of which resulted from the subsequent fire. For continental earthquakes the biggest factor in deadly risk, outside of population density, is building standards.
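The gulf between these magnitudes is easier to grasp through the Gutenberg-Richter energy relation, under which radiated seismic energy grows by a factor of about 31.6 per whole magnitude step. A minimal sketch (my own illustration, not from the text):

```python
# Radiated seismic energy scales as E ∝ 10^(1.5 M) (the Gutenberg-Richter
# energy relation), so each whole magnitude step releases ~31.6 times
# more energy than the last.

def energy_ratio(m1: float, m2: float) -> float:
    """How many times more seismic energy an M m1 event radiates than an M m2 event."""
    return 10 ** (1.5 * (m1 - m2))

# An M 9.0 event such as Tohoku radiates 1000 times the energy of an M 7.0
print(round(energy_ratio(9.0, 7.0)))  # → 1000
```

This is why events above M 7.0 dominate the death toll: the handful of M 8 and M 9 earthquakes release vastly more energy than the hundreds of smaller ones combined.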


A poor neighbourhood in Port au Prince, Haiti, following the M 7.0 earthquake of 2010. (credit: Wikipedia)

It barely needs stating that earthquakes are due to movement on faults, which can leave distinct signs at or near the surface, such as scarps, offsets of linear features like roads, and broad rises or falls in the land surface. However, if they are due to faulting that does not break the surface – so-called ‘blind’ faults – very little record is left for geologists to analyse. But if it is possible to see actual breaks and shifts exposed by shallow excavations through geologically young materials, as in road cuts or trenches, then it is possible to work out an actual history of movements and their dimensions. It has also become increasingly possible to date the movements precisely using radiometric or luminescence means: a key element in establishing seismic risk is the historic frequency of events on active faults. Some of the most dangerous active faults are those at mountain fronts, such as the Himalaya and the American cordilleras, which often take the form of surface-breaking thrusts that are relatively easy to analyse, although little work has been done to date. A notable study is of the West Andean Thrust, which breaks cover east of Chile’s capital Santiago, home to around 6 million people (Vargas, G. et al. 2014. Probing large intraplate earthquakes at the west flank of the Andes. Geology, v. 42, p. 1083-1086). This fault forms a prominent series of scarps in Santiago’s eastern suburbs, but for most of its length along the Andean Front it is ‘blind’. The last highly destructive onshore earthquake in western South America was due to thrust movement that devastated the western Argentinean city of Mendoza in 1861. But the potential for large intraplate earthquakes is high along the entire west flank of the Andes.

Vargas and colleagues from France and the US excavated a 5 m deep trench through alluvium and colluvium over a distance of 25 m across one of the scarps associated with the San Ramón Thrust. They found excellent evidence of metre-sized displacement of some prominent units within the young sediments, sufficient to detect the effects of two distinct, major earthquakes, each producing horizontal shifts of up to 5 m. Individual sediment strata were dateable using radiocarbon and optically stimulated luminescence techniques. The earlier displacement occurred at around 17-19 ka and the second at about 8 ka. Various methods of estimating the earthquake magnitudes likely to have caused the displacements yielded values of about M 7.2 to 7.5 for both. That is quite sufficient to devastate nearby Santiago and, worryingly, another movement may be due in the foreseeable future.

New gravity and bathymetric maps of the oceans

By far the least costly means of surveying the ocean floor on a global scale is the use of data remotely sensed from Earth orbit. That may sound absurd: how can it be possible to peer through thousands of metres of seawater? The answer comes from a practical application of lateral thinking. As well as being influenced by lunar and solar tidal attraction, sea level also depends on the Earth’s gravity field; that is, on the distribution of mass beneath the sea surface – how deep the water is and the varying density of rocks that lie beneath the sea floor. Because water has a much lower density than rock, the deeper the water the lower the overall gravitational attraction, and vice versa. Consequently, seawater is attracted towards shallower areas, standing high over, say, a seamount and low over abyssal plains and trenches. Measuring sea-surface elevation defines the true shape that Earth would take if the entire surface were covered by water – the geoid – and is a key both to variations in gravity over the oceans and to bathymetry.
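The size of the sea-surface bump over a seamount can be estimated with a simple point-mass model: the geoid anomaly is the disturbing gravitational potential divided by g. The numbers below (seamount dimensions, densities, depth) are my own illustrative choices, not values from the article:

```python
import math

# Toy point-mass estimate of the sea-surface (geoid) bump over a seamount:
# N = G * Δm / (g * r), where Δm is the seamount's excess mass.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
g = 9.81               # surface gravity, m s^-2

rho_contrast = 1800.0  # basaltic rock (~2800) minus seawater (~1000), kg m^-3
radius, height = 10e3, 3e3                       # hypothetical conical seamount, m
excess_mass = rho_contrast * (math.pi * radius**2 * height) / 3  # cone volume × Δρ

r = 4000.0             # rough distance from the excess mass to the sea surface, m
N = G * excess_mass / (g * r)                    # geoid anomaly, metres
print(f"sea surface stands ~{N:.1f} m higher over the seamount")
```

A bump of the order of a metre, spread over tens of kilometres, is invisible to the eye but easily resolved by a radar altimeter measuring to a couple of centimetres.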

Radar altimeters can measure the average height of the sea surface to within a couple of centimetres: the roughness and tidal fluctuations are ‘ironed out’ by measurements every couple of weeks as the satellite passes on a regular orbital schedule. There is no way this systematic and highly accurate approach could be achieved by ship-borne bathymetric or gravity measurements, although such surveys help check the results from radar altimetry over widely spaced transects. Even after 40 years of accurate mapping with hundreds of ship-borne echo sounders, 50% of the ocean floor is more than 10 km from such a depth measurement, and 80% lacks depth soundings altogether.

This approach has been used since the first radar altimeter was placed in orbit on Seasat, launched in 1978, which revolutionised bathymetry and the details of plate tectonic features on the ocean floor. Since then, improvements in measurements of sea-surface elevation and in the computer processing needed to extract the information from complex radar data have revealed ever more detail. The latest refinement stems from two satellites, NASA’s Jason-1 (2001) and the European Space Agency’s CryoSat-2 (2010) (Sandwell, D.T. et al. 2014. New global marine gravity model from CryoSat-2 and Jason-1 reveals buried tectonic structure. Science, v. 346, p. 65-67; see also Hwang, C. & Chang, E.T.Y. 2014. Seafloor secrets revealed. Science, v. 346, p. 32-33). The maps throw light on previously unknown tectonic features beneath the China Sea (large faults buried by sediments), the Gulf of Mexico (an extinct spreading centre) and the South Atlantic (a major propagating rift), as well as thousands of seamounts.

Global gravity over the oceans derived from Jason-1 and CryoSat-2 radar altimetry (credit: Scripps Institution of Oceanography)

There are many ways of processing the data, so years of fruitful interpretation lie ahead for oceanographers and tectonicians, with more data likely from other suitably equipped satellites. Sea-surface height studies are also essential in mapping changing surface currents, variations in water density and salinity, sea-ice thickness, eddies, superswells and changes due to processes linked to El Niño.

Signs of lunar tectonics

Large features on the near side of the Moon give us the illusion of the Man-in-the-Moon gazing down benevolently once a month. The lightest parts are the ancient lunar highlands, made of feldspar-rich anorthosite, hence their high albedo. The dark components, originally thought to be seas or maria, are now known to be large areas of flood basalt formed about half a billion years after the Moon’s origin. Some show signs of a circular structure and have been assigned to the magmatic aftermath of truly gigantic impacts during the 4.1-3.8 Ga Late Heavy Bombardment. The largest mare feature, with a diameter of 3200 km, is Oceanus Procellarum, which has a more irregular shape, though it envelops some smaller maria with partially circular outlines.


Full Moon viewed from Earth. Oceanus Procellarum is the large, irregular dark feature at left. (credit: Wikipedia)

A key line of investigation to improve knowledge of the lunar maria is the structure of the Moon’s gravitational field above them. Obviously, this can only be achieved by an orbiting experiment, and in early 2012 NASA launched one to provide detailed gravitational information: the Gravity Recovery and Interior Laboratory (GRAIL), whose early results were summarised by EPN in December 2012. GRAIL used two satellites orbiting in tandem, a configuration similar to the US-German Gravity Recovery and Climate Experiment (GRACE) launched in 2002 to measure variations over time in the Earth’s gravity field. The GRAIL orbiters flew in a low orbit and eventually crashed into the Moon in December 2012, after producing a wealth of data whose processing continues.

The latest findings from GRAIL, concerning the gravity structure of the Procellarum region (Andrews-Hanna, J.C. and 13 others 2014. Structure and evolution of the lunar Procellarum region as revealed by GRAIL gravity data. Nature, v. 514, p. 68-71), have yielded a major surprise. Instead of a system of anomalies combining circular arcs, as might be expected from a product of major impacts, the basaltic basin has a border made up of many linear segments that define an unusually angular structure.

The topography and gravity structure of the Moon. Oceanus Procellarum is roughly at the centre. Note: the images cover both the near- and far side of the Moon. (credit: Andrews-Hanna et al. 2014)

The features only become apparent from the gravity data after they have been converted to the first derivative (gradient) of the Bouguer anomaly. Any interpretation of the features has to explain the angularity, which looks far more like an outcome of tectonics than of bombardment. The features have been explained as rift structures through which basaltic magma oozed to the surface, perhaps feeding the vast outpourings of mare basalts, unusually rich in potassium (K), rare-earth elements (REE) and phosphorus (P), known as KREEP basalts. The Procellarum polygonal structure encompasses those parts of the lunar surface that are richest in the radioactive isotopes of potassium, thorium and uranium (measured from orbit by a gamma-ray spectrometer) – thorium concentration is shown in the figure.
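Why does taking the gradient make linear borders jump out? A sharp density edge produces an abrupt step in the Bouguer anomaly, and differentiation turns that step into a spike while flattening smooth regional variations. A toy sketch with a synthetic grid (the field and numbers are invented for illustration; numpy assumed available):

```python
import numpy as np

# Synthetic Bouguer anomaly: a smooth, broadly circular regional field plus a
# sharp-edged rectangular 'basin' whose straight sides mimic an angular border.
x = np.linspace(-1, 1, 200)
X, Y = np.meshgrid(x, x)
regional = 30 * np.exp(-(X**2 + Y**2))                          # smooth, mGal
basin = np.where((abs(X) < 0.5) & (abs(Y) < 0.5), -20.0, 0.0)   # sharp edges
bouguer = regional + basin

# First horizontal derivative: the gradient magnitude highlights sharp edges
gy, gx = np.gradient(bouguer)
grad = np.hypot(gx, gy)

# Edge pixels of the rectangle carry far larger gradients than the smooth
# regional field, which is why linear structures pop out of the derivative
print(grad.max() > 10 * np.median(grad))
```

In the real GRAIL data the same trick suppresses broad circular anomalies and leaves the angular, rift-like border segments as bright gradient lineaments.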

Tectonics there may be on the Moon, but the authors are not suggesting plate tectonics, rather structures formed as a huge mass of radioactively heated lunar lithosphere cooled at a faster rate than the rest of the outer Moon. Nor are they casting doubt on the Late Heavy Bombardment, for there is no escaping the presence of both topographic and gravity-defined circular features; just that the biggest expanse of basaltic surface on the Moon may have erupted for reasons other than a huge impact.

Tectonics of the early Earth

Tectonics on any rocky planet is an expression of the way heat is transferred from its deep interior to the surface, to be lost by radiation to outer space. Radiative heat loss is vastly more efficient than either conduction or convection, since the power emitted by a body is proportional to the fourth power of its absolute temperature. Unless it is superheated from outside by its star, a planet cannot stay molten at its surface for long, because cooling by radiation releases all of the heat that makes its way to the surface. Any football supporter who has rushed to get a microwaved pie at half time will have learned this quickly: a cool crust can hide a damagingly hot centre.
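That fourth-power sensitivity is the Stefan-Boltzmann law, P = σT⁴ per unit area. A quick comparison of a molten surface with a cooled crust (the temperatures are illustrative only) shows why a magma ocean sheds heat so fast:

```python
# Stefan-Boltzmann: black-body radiant flux P = σ T^4 per square metre,
# hence the fourth-power sensitivity to absolute temperature.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiated_flux(t_kelvin: float) -> float:
    """Black-body radiant flux in W per square metre."""
    return SIGMA * t_kelvin**4

molten = radiated_flux(1500.0)  # magma-ocean surface, ~1500 K (illustrative)
crust = radiated_flux(300.0)    # cooled solid crust, ~300 K (illustrative)
print(f"molten surface radiates {molten / crust:.0f}x more per m^2")  # → 625x
```

A fivefold temperature ratio gives a 5⁴ = 625-fold difference in radiated power, which is why the surface freezes over quickly while the interior stays hot.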

Thermal power is delivered to a planet’s surface by convection deep down and conduction nearer the surface, because rocks, both solid and molten, are almost opaque to radiation. The vigour of the outward flow of heat might seem to be related mainly to the amount of internal heat, but it is also governed by limits imposed by temperature on the form of convection. Of the Inner Planets only Earth shows surface signs of deep convection, in the form of plate tectonics driven mainly by the pull exerted by steep subduction of cool, dense slabs of old oceanic lithosphere. Only Jupiter’s moon Io shows comparable surface signs of inner dynamics, but in the form of immense volcanoes rather than lateral movements of slabs. Io has about 40 times the surface heat flow of Earth, thanks largely to huge tidal forces imposed by Jupiter. So it seems that a different mode of convection is needed to shift the tidal heat production, one similar in many ways to Earth’s relatively puny and isolated hot spots and mantle plumes.


An analogy for the early Earth, Jupiter’s moon Io is speckled with large active volcanoes; signs of vigorous internal heat transport but not of plate tectonics. Its colour is dominated by various forms of sulfur rather than mafic igneous rocks. (credit: Wikipedia)

Shortly after Earth’s accretion it would have contained far more heat than now: the gravitational energy of accretion itself; greater tidal heating from a close Moon; and up to five times more heat from internal radioactive decay. The time at which plate tectonics can first be deduced from evidence in ancient rocks has been disputed since the 1970s, but now an approach inspired by Io’s behaviour tackles the issue from the opposite direction: what might have been the mode of Earth’s heat transport shortly after accretion? (Moore, W.B. & Webb, A.A.G. 2013. Heat-pipe Earth. Nature, v. 501, p. 501-505). The two American geophysicists modelled Rayleigh-Bénard convection – multicelled convection akin to that of the ‘heat pipes’ inside Io – for a range of possible thermal conditions in the Hadean. The modelled planet, dominated by volcanic centres, turned out to have some surprising properties.

The sheer efficiency of heat-pipe heat transfer and radiative heat loss results in the development of a thick, cold lithosphere between the pipes, which advects surface material downwards. Decreasing the heat sources results in a ‘flip’ to convection very like plate tectonics. In itself, this notion of a sudden shift from Rayleigh-Bénard convection to plate tectonics is not new – several Archaean specialists, including me, debated it in the late 1970s – but the convincing modelling is. The authors also assemble a plausible list of evidence for it from the Archaean geological record: the presence in pre-3.2 Ga greenstone belts of abundant ultramafic lavas marking high fractions of mantle melting; the dome-trough structure of granite-greenstone terrains; granitic magmas formed by melting of wet mafic rocks at around 45 km depth; and second-hand evidence from Hadean zircons preserved in much younger rocks. They dwell on the oldest sizeable terranes in West Greenland (the Itsaq gneiss complex), South Africa and Western Australia (Barberton and the Pilbara) as plausible and tangible products of ‘heat-pipe’ tectonics. They suggest that the transition to plate-tectonic dominance came at around 3.2 Ga, yet ‘heat pipes’ remain to the present in the form of plumes, so nicely defined in the preceding item Mantle structures beneath the central Pacific.

Mantle structures beneath the central Pacific

Since it first figured in Earth Pages 13 years ago, seismic tomography has advanced steadily, both in the detail that can be shown and in the level of confidence in its accuracy: in the early days some geoscientists considered the results to verge on the imaginary. There were indeed deficiencies, one being that a mantle plume which everyone believed to be present beneath Hawaii didn’t show up on the first tomographic section through the central Pacific. Plumes are one of the forms likely to be taken by mantle heat convection, and many now believe that some of them emerge from great depths in the mantle, perhaps at its interface with the outer core.

The improvements in imaging deep structure stem mainly from increasingly sophisticated software and faster computers, the data fed in being historic seismograph records from around the globe. The approach seeks out deviations in the speed of seismic waves from the mean at different depths beneath the Earth’s surface. Decreases suggest lower strength and therefore hotter rocks, while abnormally high speeds signify strong, cool parts of the mantle. The hotter mantle rock is, the lower its density and the more likely it is to be rising, and vice versa.
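In practice tomographers map fractional deviations of wave speed from a reference model at each depth. A minimal helper, with a roughly PREM-like reference value and anomaly of my own choosing for illustration:

```python
# Tomography maps percentage deviations of seismic wave speed from a
# reference model at each depth; negative anomalies flag hot, weak mantle.
def shear_anomaly_percent(v_obs_km_s: float, v_ref_km_s: float) -> float:
    """Percent deviation of an observed S-wave speed from the reference model."""
    return 100.0 * (v_obs_km_s - v_ref_km_s) / v_ref_km_s

# Illustrative values: a reference S-wave speed of 4.50 km/s at ~250 km depth
# and an observed 4.41 km/s give a -2% anomaly, i.e. hot, low-strength mantle
dv = shear_anomaly_percent(4.41, 4.50)
print(f"{dv:+.1f}% -> {'hot/weak' if dv < 0 else 'cool/strong'} mantle")
```

Anomalies of only a few percent are typical: the colour extremes on published tomograms usually span little more than ±2-3%.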

Using state-of-the-art tomography to probe beneath the central Pacific is a natural strategy, as the region contains a greater concentration of hot-spot related volcanic island chains than anywhere else; that is the focus of a US-French group of collaborators (French, S. et al. 2013. Waveform tomography reveals channeled flow at the base of the oceanic lithosphere. Science, v. 342, p. 227-230; doi: 10.1126/science.1241514). The authors first note the appearance, on 2-D global maps for a depth of 250 km, of elongate zones of low shear-strength mantle that approximately parallel the known directions of local absolute plate movement. The clearest of these occur beneath the Pacific hemisphere, strongly suggesting some kind of channelling of hot material by convection away from the East Pacific Rise.

Seismic tomographic model of the mantle beneath the central Pacific. Yellow to red colours represent increasingly low shear strength. (credit: Global Seismology Group / Berkeley Seismological Laboratory)

Visually, it is the three-dimensional models of the Pacific hot-spot ‘swarm’ that grab attention. These show the low-velocity zone of the asthenosphere at depths of around 50 to 100 km, as predicted but with odd convolutions. Down to 1000 km is a zone of complexity, with limb-like lobes of warm, low-strength mantle concentrated beneath the main island chains. That beneath the Hawaiian hot spot definitely has a plume-like shape, but one curiously bent at depth, turning to the NW as it emerges from even deeper mantle then taking a knee-like bend to the east. Those beneath the hot spots of the west Pacific are more irregular but almost vertical. Just what kind of process the peculiarities represent in detail is not known, but they almost certainly reflect the complex forms taken by convection in a highly viscous medium.

Probing the Earth’s mantle using noise


Artistic impression of a global seismic tomogram – beneath a Mercator projection – dividing the mantle into ‘warm’ and ‘cool’ regions (credit: Cornell University Geology Department)

It goes without saying that it is difficult to sample the mantle. The only direct samples are inclusions found in igneous rocks that formed by partial melting at depth so that the magma incorporated fragments of mantle rock as it rose, or where tectonics has shoved once very deep blocks to the surface. Even if such samples were not contaminated in some way, they are isolated from any context. For 20 years geophysicists have been analysing seismograms from many stations across the globe for every digitally recordable earthquake to use in a form of depth sounding. This seismic tomography assesses variations in the speed of body (P and S) waves according to the path that they travelled through the Earth.

Unusually high speeds at a particular depth suggest more rigid rock and thus cooler temperatures, whereas hotter materials slow down body waves. The result is images of deep structure in vertical 2-D slices, but the quality of such sections depends, ironically, on plate tectonics. Earthquakes by definition mainly occur at plate boundaries, which are lines at the surface. Such a one-dimensional source for seismic tomograms inevitably leaves the bulk of the mantle as a blur. But there are more ways of killing a cat than drowning it in melted butter. All kinds of processes unconnected with tectonics, such as ocean waves hitting the shore and interfering with one another across the ocean basins, plus changes in atmospheric pressure, especially those associated with storms, also create waves similar in kind to seismic ones that pass through the solid Earth.

Such aseismic energy produces the background noise seen on any seismogram. Even though this noise is way below the energy and amplitude associated with earthquakes, it is continuous and all-pervading: its cumulative energy is vast. Given highly sensitive modern detectors and sophisticated processing, much the same kind of depth sounding is possible using micro-seismic noise, but for the entire planet and at high resolution. Rather than imaging speed variations, this approach can pick up reflections from physical boundaries in the solid Earth. Surface micro-seismic waves, exactly the same as Rayleigh and Love waves from earthquakes, have already been used to analyse the Mohorovičić discontinuity between crust and upper mantle as well as features in the continental crust; indeed the potential of noise was recognised in the 1960s. But the deep mantle and core are the principal targets, being far out of reach of experimental seismic surveys using artificial energy input. It seems they are now accessible using body-wave noise (Poli, P. et al. 2012. Body-wave imaging of Earth’s mantle discontinuities from ambient seismic noise. Science, v. 338, p. 1063-1065).
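The core trick of ambient-noise seismology is cross-correlation: correlating long noise records from two stations recovers the travel time of waves between them, as if one station were a source. A heavily simplified synthetic demonstration (numpy assumed; real processing involves filtering, whitening and months of stacking):

```python
import numpy as np

# Ambient-noise interferometry in miniature: the lag of the cross-correlation
# peak between two stations' noise records estimates the inter-station
# wave travel time.
rng = np.random.default_rng(0)
fs = 10.0            # sampling rate, samples per second
true_lag_s = 12.0    # travel time between the two stations, seconds
n = 10_000

# Toy stand-in for ambient noise: station B records the same random
# wavefield as station A, delayed by the travel time
station_a = rng.standard_normal(n)
station_b = np.roll(station_a, int(true_lag_s * fs))

# Cross-correlate; the lag of the peak recovers the travel time
xcorr = np.correlate(station_b, station_a, mode='full')
best_lag_s = (np.argmax(xcorr) - (n - 1)) / fs
print(f"recovered travel time: {best_lag_s:.1f} s")  # → 12.0 s
```

Because the noise never stops, months of such correlations can be stacked until even weak body-wave reflections from deep discontinuities emerge above the background.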

Poli and colleagues from the University of Grenoble, France, and from Finland used a temporary network of 42 seismometers laid out in Arctic Finland to pick up noise, and sophisticated signal processing to separate surface waves from body waves. Their experiment resolved two major mantle discontinuities at ~410 and 660 km depth that define a transition zone between the upper and lower mantle, where the dominant mineral of the upper mantle – olivine – changes its molecular state to a more closely packed configuration akin to that of the mineral perovskite that is thought to characterise the lower mantle. Moreover, they were able to demonstrate that the two-step shift to perovskite occupies depth changes of about 10-15 km.

Applying the method elsewhere doesn’t need a flurry of new closely spaced seismic networks. Data are already available from arrays aimed at conventional seismic tomography, such as USArray, which deploys 400 portable stations in area-by-area steps across the United States.

It is early days, but micro-seismic noise seems very like the dreams of planetary probing foreseen by several science fiction writers, such as Larry Niven who envisaged ‘deep radar’ being deployed for exploration by his piratical hero Louis Wu. Trouble is, radar of that kind would need a stupendous power source and would probably fry any living beings unwise enough to use it. Noise may be a free lunch to the well-equipped geophysicist of the future.

  • Prieto, G.A. 2012. Imaging the deep Earth. Science (Perspectives), v. 338, p. 1037-1038.

The shuffling poles

The mechanical disconnection of the lithosphere from the Earth’s deep mantle by a more ductile zone in the upper mantle – the asthenosphere – suggests that the lithosphere might move independently. If that were the case then points on the surface would shift relative to the axis of rotation and the magnetic poles, irrespective of plate tectonics. So it makes sense to speak of absolute and relative motions of tectonic plates. The second relates to plates’ motions relative to each other and to the ancient position of the magnetic poles, assumed to be reasonably close to the past pole of rotation and measurable from the direction of palaeomagnetism retained in rocks on this or that tectonic plate. Plotting palaeomagnetic pole positions through time for each tectonic plate gives the impression that the poles have wandered. Such apparent polar wandering has long been a key element in judging ancient plate motions. Absolute plate motion judges the direction and speed of plates relative to supposedly fixed mantle plumes beneath volcanic hot spots, the classic case being Hawaii, over which the Pacific Plate has moved to leave a chain of extinct volcanoes that become progressively older to the west. But it turns out that between about 80 and 50 Ma there are some gross misfits using the hot-spot frame of reference. An example is the 60° bend where the Hawaiian chain becomes the Emperor seamount chain, which some have ascribed to the hot spot itself shifting.


Age of Pacific Ocean floor, showing the Hawaii-Emperor seamount chain in black. (credit: Wikipedia)

Ideas have shifted dramatically since it became clear that hot spots can move, and there has been an attempt to estimate their actual motions (Doubrovine, P.V. et al. 2012. Absolute plate motions in a reference frame defined by moving hot spots in the Pacific, Atlantic, and Indian oceans. Journal of Geophysical Research: Solid Earth, v. 117, B09101, doi: 10.1029/2011JB009072). It is early days for the revised view of absolute motion of the lithosphere, and estimates go back only 120 Ma. However, one outcome has been a realistic examination of whether the positions of the poles have shifted through time, a possibility that is hidden in apparent polar wander paths. Since the mid-Cretaceous it seems that a slow, hesitant but significant polar shuffle has taken place at rates varying between 0.1 and 1.0° Ma⁻¹, moving first in one direction and then retracing its steps to achieve the current proximity of the magnetic poles to the poles of rotation.

Geophysics reveals secrets of the beaver


Beaver lodge and dam (Photo credit: Bemep)

One of the interesting things about the beaver is that its obsession with civil engineering may have a profound effect upon landscape. Before Europeans set foot in North America, it is estimated that up to 400 million of them inhabited the continent. The ponds that they create by building dams, behind which they live securely, encourage sedimentation. It is quite possible that this creates recognisable stratigraphic formations; but no-one really knows, as active and wet beaver habitats hide what lies beneath them. It is clearly urgent to obtain this intelligence: the first 2012 issue of the Geological Society of America’s monthly Geology contained a paper that indeed probes the legacy of large rodents long gone (Kramer, N. et al. 2012. Using ground penetrating radar to ‘unearth’ buried beaver dams. Geology, v. 40, p. 43-46).

The target for surveillance was the eponymous Beaver Meadows in Colorado, USA, and not only did the researchers from Colorado State University deploy ground-penetrating radar, but they used the seismic reflection method as well, to quantify volumes of beaver-induced sedimentation. Fortunately, despite their past presence in some strength, beavers no longer frequent Beaver Meadows and no ethical lines in the sand were crossed. Beaver and elk seemingly have long competed for the meagre resources of Beaver Meadows, the rodent having finally succumbed locally to determined efforts by the elk to consume the beavers’ victuals. As disconcerted and no doubt sulking beavers failed to maintain their dams and lodges, the water table fell, further encouraging the elk. Eventually, at some time after the Beaver Survey of 1947, the last of them moved to new meadows. Their ravages of what would otherwise be dense woodland have, however, made it possible for geophysicists to try out their sophisticated kit on a new and thorny issue: they ran 6 km of GPR and seismic profiles.


A beaver. Image via Wikipedia

In much the same way as larger-scale geophysical data are interpreted for petroleum traps, signs of hydrocarbons, mighty listric faults and zones of tectonic inversion, the beaver-oriented sections potentially yield considerable insight to the trained eye. There are indeed beaverine sedimentary aggradations of Holocene age above the local glacial tills. Beneath Beaver Meadows they amount to as much as 50% of post-glacial sediment. Apparently, the deposits have a linear element that follows the local drainages.

Seafloor mud cores and the seismic record


Japan’s deep-sea drilling vessel Chikyu. Image via Wikipedia

The most important factors in attempting to assess risk from earthquakes are their frequency and the time-dependence of seismic magnitude. Historical records, although they go back more than a millennium, do not offer sufficient statistical rigour, for which tens or hundreds of thousands of years are needed. So the geological record is the only source of information, and for most environments it is incomplete, because of erosion episodes, the ambiguity of possible signs of earthquakes and the difficulty of precise dating; indeed some sequences are extremely difficult to date at all with the resolution and consistency that analysis requires. One set of records that offers precise, continuous timing is that from ocean-floor sediment cores, in which oxygen isotope variations related to the intricacies of climate change can be widely correlated with one another and with the records preserved in polar ice cores. For the past 50 ka they can be dated using radiocarbon methods on foraminifera shells. The main difficulty lies in finding earthquake signatures in quite monotonous muds, but one kind of feature may prove crucial: evidence of sudden fracturing of otherwise gloopy ooze (Sakaguchi, A. et al. 2011. Episodic seafloor mud brecciation due to great subduction zone earthquakes. Geology, v. 39, p. 919-922).

The Japanese-US team scrutinised cores from the Integrated Ocean Drilling Program (IODP) that were drilled 5 years ago through the shallow sea floor above the subduction zone associated with the Nankai Trough to the SE of southern Japan. Young, upper sediments were targeted close to one of the long-lived faults associated with the formation of an accretionary wedge by the scraping action of subduction. Rather than examining the cores visually, the team used X-ray tomography similar to that involved in CT scans, which produces precise 3-D images of internal structure. This showed up repeated examples of sediment disturbance in the form of angular pieces of clay set in a homogeneous mud matrix, separated by undisturbed sections containing laminations. The repetitions are on a scale of centimetres to tens of centimetres and were dated using a combination of 14C and 210Pb dating (210Pb forms as a stage in the decay sequence of 238U and decays with a half-life of about 22 years, so is useful for recent events). The youngest mud breccia gave a 210Pb age of AD 1950±20, and probably formed during the 1944 Tonankai event, a great earthquake of Magnitude 8.2. Two other near-surface breccias gave 14C ages of 3512±34 and 10626±45 years before present. These too probably represent earlier great earthquakes, as it can be shown that mud fracturing and brecciation by ground shaking needs accelerations of around 1G, induced by earthquakes with magnitudes greater than about 7.0. So not all earthquakes in a particular segment of crust would show up in seafloor cores, most inducing turbidity flow of surface sediment, but knowing the frequency of the most damaging events, both by onshore seismicity and tsunamis, could be useful in risk analysis. In its favour, the method requires cores that penetrate only about 10 m, so hundreds could be systematically collected using simple piston coring rigs, where a weighted tube is dropped onto the sea floor from a small craft.
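The 210Pb clock works through simple exponential decay: a layer's age follows from the ratio of its initial to its measured excess activity, t = (t½ / ln 2) × ln(A₀/A). A sketch with illustrative activities, not the paper's data:

```python
import math

# 210Pb decays with a ~22-year half-life, so the remaining excess activity
# of a mud layer gives its age: t = (t_half / ln 2) * ln(A0 / A).
T_HALF = 22.3  # half-life of 210Pb, years

def pb210_age(a0: float, a: float) -> float:
    """Age in years from initial (a0) and measured (a) excess 210Pb activity."""
    return (T_HALF / math.log(2)) * math.log(a0 / a)

# A layer retaining 1/8 of its initial excess 210Pb is three half-lives old
print(round(pb210_age(8.0, 1.0), 1))  # → 66.9
```

After five or six half-lives the excess activity is unmeasurably small, which is why 210Pb only dates the last century or so and older breccias needed 14C instead.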

Core’s comfort blanket and stable magnetic fields


Pangaea and its break-up. Image via Wikipedia

The record of the Earth’s magnetic field for the most part bears more than a passing resemblance to a bar code, by convention black representing normal polarity, i.e. like that at present, and white signifying reversed polarity. The bar-code resemblance stems from long periods when the geomagnetic poles flipped on a regular, short-term basis by geological standards. The black and white divisions subdivide time, as represented by geomagnetic polarity, into chrons of the order of a million years and subchrons that are somewhat shorter intervals. Stemming from changes in the Earth’s core, magnetostratigraphic divisions potentially occur in any sequence of sedimentary or volcanic rocks anywhere on the planet and so can be used as reliable time markers; that is, if they can be defined by measurements of the remanent magnetism preserved in rock, which is not universally achievable. This method of chronometry is extremely useful for most of the Phanerozoic. However, there were periods when the geomagnetic field became unusually stable for tens of millions of years, during which the method fails. These have become known as superchrons, of which three occur during Phanerozoic times: the Cretaceous Normal Superchron, when the field remained as it is nowadays, from 120 to 83 Ma; a 50 Ma-long period of stable reversed polarity (the Kiaman Reverse Superchron) from 312 to 262 Ma in the Late Carboniferous and Early Permian; and the Ordovician Moyero Reverse Superchron from 485 to 463 Ma.

Because the geomagnetic field is almost certainly generated by a self-exciting dynamo in the convecting liquid metallic outer core, polarity flips mark sudden changes in how heat is transferred through the outer core to pass into the lower mantle. It follows that if there are no magnetic reversals then the outer core continued in a stable form of convection; the likely condition during superchrons. But why the shifts from repeated instability to long periods of quiescence? That is one of geoscience’s ‘hard’ questions, since no-one really knows how the core works at any one time, let alone over hundreds of millions of years. There is, however, a crude correlation with events much closer to the surface. The Kiaman superchron spans a time when Alfred Wegener’s supercontinent Pangaea had finished assembling, so that all continental material was in one vast chunk. The Cretaceous superchron was at a time when sea-floor spreading and the break-up of Pangaea reached a maximum. The Ordovician Moyero superchron coincides with the unification of what are now the northern continents into Laurasia and the continued existence of the southern continents lumped together in Gondwana, so that the Earth had two supercontinents. Those empirical observations may be due to chance, but at least they provide a possible clue to linkage between lithosphere and core, despite their separation by 2800 km of convecting mantle that transfers the core’s heat, as well as that produced by the mantle itself, to dissipate at the surface. Enter the modellers.

How part of the Earth transfers heat is, not unexpectedly, very complex, depending not only on what is happening at that point but on heat-transfer processes and heat inputs both above and below it. The surface heat flow is complex in its own right, ranging from less than 20 to as much as 350 mW m⁻², the largest amount being through zones of sea-floor spreading and the least through continental lithosphere. Wherever heat is released in the core and mantle, willy-nilly the bulk of it leaves the solid Earth along what is today a complex series of lines: active oceanic ridge and rift systems such as the Mid-Atlantic Ridge. These lines weave between six drifting continental masses and many more sites of additional heat loss, hot spots and mantle plumes. The many heat-escape routes today complicate the deeper convective processes and there are many possibilities for the core to shed heat, yet they continually change pace and position. When, inevitably, all continental lithosphere unites in a supercontinent, the sites of heat loss almost by definition simplify too, the supercontinent acting like an efficient insulating blanket. In a qualitative sense, this kind of evolving scenario is what modellers try to mimic by putting in reasonable parameters for all the dynamic aspects involved. Two physicists at the University of Colorado in Boulder, USA, Nan Zhang and Shijie Zhong, have formulated 3-D spherical models of mantle convection with plate tectonics as a basis for whole-Earth thermal evolution over the last 350 Ma (Zhang, N. & Zhong, S. 2011. Heat fluxes at the Earth’s surface and core–mantle boundary since Pangea formation. Earth and Planetary Science Letters, v. 306, p. 205-216). The acid test is whether the model can end with a close approximation to modern variations in heat flow and the distribution of ages on the sea floor; it does.
A probable key to stability in the means of transfer of heat from core to lower mantle – itself a key to a constant outer-core dynamo and geomagnetic polarity – is reduced heat flow at equatorial latitudes; a sort of equatorial downflow of convection with upflows in both northern and southern hemispheres. Zhang and Zhong’s model produced minimal core-to-mantle heat flow at the Equator at 270 and 100 Ma, both within geomagnetic-field superchrons. Well, that is a good start. Superchrons seem also to have occurred from time to time during the Precambrian, one being documented at the Mesoproterozoic-Neoproterozoic boundary about 1000 Ma ago. At that time, all continental lithosphere was assembled in a supercontinent dubbed Rodinia (‘homeland’ or ‘birthplace’ in Russian).