Earthquake prediction is a branch of the science of seismology concerned with the specification of the time, location, and magnitude of future earthquakes within stated confidence limits but with sufficient precision that a warning can be issued.[1][2] Of particular importance is the prediction of hazardous earthquakes likely to cause loss of life or damage to infrastructure. Earthquake prediction is sometimes distinguished from earthquake forecasting, which can be defined as the probabilistic assessment of general earthquake hazard, including the frequency and magnitude of damaging earthquakes in a given area over years or decades.[3] It can be further distinguished from earthquake warning systems, which, upon detection of an earthquake, provide a real-time warning to regions that might be affected.
In the 1970s, scientists were optimistic that a practical method for predicting earthquakes would soon be found, but by the 1990s continuing failure led many to question whether it was even possible.[4] Demonstrably successful predictions of large earthquakes have not occurred, and the few claims of success are controversial.[5] Extensive searches have reported many possible earthquake precursors, but, so far, such precursors have not been reliably identified across significant spatial and temporal scales.[6] While some scientists still hold that, given enough resources, prediction might be possible, many others now maintain that earthquake prediction is inherently impossible.[7]
Any prediction of the future can be to some extent successful by chance, so predictions are deemed significant only if they can be shown to succeed beyond random chance.[8] Methods of statistical hypothesis testing are therefore used to determine the probability that an earthquake such as the one predicted would happen anyway (the null hypothesis). The predictions are then evaluated by testing whether they correlate with actual earthquakes better than the null hypothesis.[9]
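As a concrete illustration, the following sketch (with made-up numbers; it is not any particular published test) assumes qualifying earthquakes in the target region occur as a Poisson process at a known background rate, computes the chance that a single prediction window contains such an event, and applies a binomial test to a hypothetical record of predictions:

```python
import math
from scipy.stats import binomtest

# Assumed (illustrative) Poisson null hypothesis for the target region.
background_rate = 0.05   # qualifying events per year, by assumption
window_years = 0.5       # duration of each prediction's time window

# Chance that one prediction "succeeds" even if it carries no
# information: P(at least one event in the window) under Poisson.
p_chance = 1.0 - math.exp(-background_rate * window_years)

# Hypothetical record: 40 predictions issued, 4 fulfilled.
result = binomtest(k=4, n=40, p=p_chance, alternative="greater")
print(f"chance hit probability per prediction: {p_chance:.4f}")
print(f"p-value against the Poisson null: {result.pvalue:.4g}")
```

Only if the resulting p-value is small can the method be said to outperform the null hypothesis; as discussed next, a realistic null must also account for clustering.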
In many instances, however, the statistical nature of earthquake occurrence is not simply homogeneous. Clustering occurs in both space and time.[10] In southern California about 6% of M≥3.0 earthquakes are "followed by an earthquake of larger magnitude within 5 days and 10 km."[11] In central Italy 9.5% of M≥3.0 earthquakes are followed by a larger event within 30 km and 48 hours.[12] While such statistics are not satisfactory for purposes of prediction (giving ten to twenty false alarms for each successful prediction) they will skew the results of any analysis that assumes that earthquakes occur randomly in time, for example, as realized from a Poisson process. It has been shown that a "naive" method based solely on clustering can successfully predict about 5% of earthquakes;[13] slightly better than chance.
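The clustering statistics above can themselves be turned into such a "naive" predictor. The sketch below (the record type, planar coordinates, and thresholds are illustrative simplifications, not a published implementation) issues an alarm after every M ≥ 3.0 event and scores it against the southern California rule of a larger event within 5 days and 10 km:

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class Quake:
    t: float      # occurrence time, days
    x: float      # position, km (simplified planar coordinates)
    y: float
    mag: float

def naive_clustering_alarms(catalog, dt_days=5.0, dist_km=10.0):
    """Score alarms issued after every M >= 3.0 event in a
    time-sorted catalog. Returns (hits, false_alarms)."""
    hits = false_alarms = 0
    for i, q in enumerate(catalog):
        if q.mag < 3.0:
            continue
        followed = any(
            0.0 < p.t - q.t <= dt_days
            and hypot(p.x - q.x, p.y - q.y) <= dist_km
            and p.mag > q.mag
            for p in catalog[i + 1:]
        )
        if followed:
            hits += 1
        else:
            false_alarms += 1
    return hits, false_alarms
```

With the cited 6% hit rate, such a rule yields roughly fifteen false alarms per hit, in line with the ten to twenty quoted above.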
As the purpose of short-term prediction is to enable emergency measures to reduce death and destruction, failure to warn of a major earthquake that does occur, or at least to give an adequate evaluation of the hazard, can result in legal liability,[14] or even political purging.[15] But warning of an earthquake that does not occur also incurs a cost:[16] not only the cost of the emergency measures themselves, but of civil and economic disruption.[17] False alarms, including alarms that are cancelled, also undermine the credibility, and thereby the effectiveness, of future warnings.[18] The acceptable trade-off between missed quakes and false alarms depends on the societal valuation of these outcomes. The rate of occurrence of both must be considered when evaluating any prediction method.[19]
Earthquake prediction may be intrinsically impossible. It has been argued that the Earth is in a state of self-organized criticality, where any small earthquake has some probability of cascading into a large event.[20] It has also been argued on decision-theoretic grounds that prediction of major earthquakes is impossible.[21] However, these theories and their implication that earthquake prediction is intrinsically impossible are still disputed.[22]
Earthquake prediction is an immature science: it has not yet led to a successful prediction of an earthquake from first physical principles. Therefore, some research focuses on empirical analysis, either identifying distinctive precursors to earthquakes, or identifying some kind of geophysical trend or pattern in seismicity that might precede a large earthquake.[23]
An earthquake precursor is an anomalous phenomenon that might give effective warning of an impending earthquake.[24] Reports of these – though generally recognized as such only after the event – number in the thousands,[25] some dating back to antiquity.[26] There have been around 400 reports of possible precursors in scientific literature, of roughly twenty different types,[27] running the gamut from aeronomy to zoology.[28] None have been found to be reliable for the purposes of earthquake prediction.[29]
In the early 1990s, the IASPEI solicited nominations for a Preliminary List of Significant Precursors. Forty nominations were made, of which five were selected as possible significant precursors, with two of those based on a single observation each.[30]
After a critical review of the scientific literature, the International Commission on Earthquake Forecasting for Civil Protection (ICEF) concluded in 2011 that there was "considerable room for methodological improvements in this type of research."[31] In particular, many reports of precursors are contradictory, lack a measure of amplitude, or are generally unsuitable for a rigorous statistical evaluation. Published results are biased towards positive results, and so the rate of false negatives (earthquake but no precursory signal) is unclear.[32]
For centuries there have been anecdotal accounts of anomalous animal behavior preceding and associated with earthquakes. In cases where animals display unusual behavior some tens of seconds prior to a quake, it has been suggested they are responding to the P-wave.[33] These travel through the ground about twice as fast as the S-waves that cause most severe shaking.[34] They predict not the earthquake itself — that has already happened — but only the imminent arrival of the more destructive S-waves.
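The available warning time is simply the difference in travel times of the two wave types. A back-of-the-envelope sketch, assuming representative crustal velocities (the values are illustrative):

```python
# P-waves travel faster than the more destructive S-waves, so the
# interval between their arrivals grows with distance from the source.
vp = 6.0   # km/s, assumed P-wave speed in crustal rock
vs = 3.5   # km/s, assumed S-wave speed

def warning_seconds(distance_km: float) -> float:
    """Seconds between P and S arrivals at a given distance."""
    return distance_km / vs - distance_km / vp

for d in (10, 50, 100, 200):
    print(f"{d:>4} km: {warning_seconds(d):5.1f} s of warning")
```

At typical distances this amounts to between a few seconds and half a minute, consistent with reactions some tens of seconds before the strong shaking arrives.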
It has also been suggested that unusual behavior hours or even days beforehand could be triggered by foreshock activity at magnitudes that most people do not notice.[35] Another confounding factor of accounts of unusual phenomena is skewing due to "flashbulb memories": otherwise unremarkable details become more memorable and more significant when associated with an emotionally powerful event such as an earthquake.[36] A study that attempted to control for these kinds of factors found an increase in unusual animal behavior (possibly triggered by foreshocks) in one case, but not in four other cases of seemingly similar earthquakes.[37]
Vp is the symbol for the velocity of a seismic "P" (primary or pressure) wave passing through rock, while Vs is the symbol for the velocity of the "S" (secondary or shear) wave. Small-scale laboratory experiments have shown that the ratio of these two velocities – represented as Vp/Vs – changes when rock is near the point of fracturing. In the 1970s it was considered a likely breakthrough when Russian seismologists reported observing such changes in the region of a subsequent earthquake.[38] This effect, as well as other possible precursors, has been attributed to dilatancy, where rock stressed to near its breaking point expands (dilates) slightly.[39]
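Vp/Vs can be estimated without knowing the earthquake origin time via the classic Wadati construction: the S-minus-P interval at each station grows linearly with the P arrival time, and the slope of that line is Vp/Vs − 1. A minimal sketch with synthetic arrival times (generated for Vp/Vs ≈ 1.73):

```python
import numpy as np

# Synthetic P and S arrival times (seconds) at five stations.
tp = np.array([3.1, 5.4, 8.2, 11.0, 14.6])
ts = np.array([5.5, 9.5, 14.3, 19.2, 25.4])

# Wadati diagram: (ts - tp) versus tp is linear with slope Vp/Vs - 1,
# so the unknown origin time cancels out of the estimate.
slope, intercept = np.polyfit(tp, ts - tp, 1)
print(f"Vp/Vs ≈ {1.0 + slope:.2f}")
```

The dilatancy hypothesis predicted that repeating this estimate over time would show a dip and recovery in Vp/Vs before a large earthquake.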
Study of this phenomenon near Blue Mountain Lake in New York State led to a successful prediction in 1973.[40] However, additional successes have not followed, and it has been suggested that the prediction was a fluke.[41] A Vp/Vs anomaly was the basis of a 1976 prediction of a M 5.5 to 6.5 earthquake near Los Angeles, which failed to occur.[42] Other studies relying on quarry blasts (more precise, and repeatable) found no such variations;[43] and an alternative explanation has been offered for those variations that were observed.[44] Geller (1997) noted that reports of significant velocity changes have ceased since about 1980.
Most rock contains small amounts of gases that can be isotopically distinguished from the normal atmospheric gases. There are reports of spikes in the concentrations of such gases prior to a major earthquake; this has been attributed to release due to pre-seismic stress or fracturing of the rock. One of these gases is radon, produced by radioactive decay of the trace amounts of uranium present in most rock.[45]
Radon is useful as a potential earthquake predictor because it is radioactive and thus easily detected,[46] and its short half-life (3.8 days) makes radon levels sensitive to short-term fluctuations. A 2009 review[47] found 125 reports of changes in radon emissions prior to 86 earthquakes since 1966. But as the ICEF found in its review, the earthquakes with which these changes are supposedly linked were up to a thousand kilometers away, months later, and at all magnitudes. In some cases the anomalies were observed at a distant site, but not at closer sites. The ICEF found "no significant correlation".[48] Another review concluded that in some cases changes in radon levels preceded an earthquake, but a correlation is not yet firmly established.[49]
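The role of the short half-life can be seen from a simple production-decay balance (a back-of-the-envelope sketch; concentrations are in arbitrary units):

```python
import math

HALF_LIFE_DAYS = 3.8                    # radon-222
lam = math.log(2) / HALF_LIFE_DAYS      # decay constant, per day

def equilibrium(emission_rate: float) -> float:
    """Steady-state concentration when production balances decay:
    N = R / lambda (arbitrary units)."""
    return emission_rate / lam

# After a step change in emission, the concentration relaxes toward the
# new equilibrium with time constant 1/lambda, about 5.5 days, so
# measured radon can track emission changes on a scale of days.
print(f"time constant: {1 / lam:.1f} days")
print(f"doubling emission doubles the plateau: "
      f"{equilibrium(2.0) / equilibrium(1.0):.1f}x")
```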
Various attempts have been made to identify possible pre-seismic indications in electrical, electric-resistive, or magnetic phenomena.[50] The most touted, and most criticized, is the VAN method of professors P. Varotsos, K. Alexopoulos and K. Nomicos – "VAN" – of the National and Capodistrian University of Athens. In a 1981 paper[51] they claimed that by measuring geoelectric voltages – what they called "seismic electric signals" (SES) – they could predict earthquakes of magnitude larger than 2.8 within all of Greece up to 7 hours beforehand. Later the claim changed to being able to predict earthquakes larger than magnitude 5, within 100 km of the epicentral location, within 0.7 units of magnitude, and in a 2-hour to 11-day time window.[52] Subsequent papers claimed a series of successful predictions.[53] However, the VAN group generated intense public criticism in the 1980s by issuing telegram warnings, a large number of which were false alarms.
Objections have been raised that the physics of the VAN method is implausible. None of the earthquakes which VAN claimed were preceded by SES generated SES themselves, as would have been expected. Analysis of the wave propagation properties of SES in the Earth's crust showed that it would have been impossible for signals with the amplitude reported by VAN to have been transmitted over distances of several hundred kilometers from the epicenter to the monitoring station.[54] In addition, VAN's publications do not account for (i.e. identify and eliminate) possible sources of electromagnetic interference (EMI). Taken as a whole, the VAN method has been criticized as lacking consistency in the statistical testing of the validity of its hypotheses.[55] In particular, there has been some contention over which catalog of seismic events to use in vetting predictions. Depending on the catalog chosen, it can be concluded that, for example, of 22 claims of successful prediction by VAN,[56] 74% were false, 9% correlated at random, and for 14% the correlation was uncertain.[57]
In 1996 the journal Geophysical Research Letters presented a debate on the statistical significance of the VAN method;[58] the majority of reviewers found the methods of VAN to be flawed, and the claims of successful predictions statistically insignificant.[59] In 2001, the VAN method was modified to include time series analysis, and Springer published an overview in 2011.[60]
After the 1989 Loma Prieta earthquake occurred, a group led by Antony C. Fraser-Smith of Stanford University reported that the event was preceded by disturbances in background magnetic field noise as measured by a sensor placed in Corralitos, California, about 4.5 miles (7 km) from the epicenter.[61] From 5 October, they reported a substantial increase in noise in the frequency range 0.01–10 Hz. The measurement instrument was a single-axis search-coil magnetometer that was being used for low frequency research. Precursory increases of noise apparently started a few days before the earthquake, with noise in the range .01–.5 Hz rising to exceptionally high levels about three hours before the earthquake. Though this pattern gave scientists new ideas for research into potential precursors to earthquakes, and the Fraser-Smith et al. report remains one of the most frequently cited examples of a specific earthquake precursor, more recent studies have cast doubt on the connection, attributing the Corralitos signals to either unrelated magnetic disturbance[62] or, even more simply, to sensor-system malfunction.[63]
Instead of watching for anomalous phenomena that might be precursory signs of an impending earthquake, other approaches to predicting earthquakes look for trends or patterns that lead up to an earthquake. As these trends may be complex and involve many variables, advanced statistical techniques are often needed to understand them; such approaches are therefore sometimes called statistical methods. These approaches also tend to be more probabilistic and to cover longer time periods, and so merge into earthquake forecasting.
Even the stiffest of rock is not perfectly rigid. Given a large force (such as between two immense tectonic plates moving past each other) the earth's crust will bend or deform. According to the elastic rebound theory of Reid (1910), eventually the deformation (strain) becomes great enough that something breaks, usually at an existing fault. Slippage along the break (an earthquake) allows the rock on each side to rebound to a less deformed state. In the process energy is released in various forms, including seismic waves.[64] The cycle of tectonic force being accumulated in elastic deformation and released in a sudden rebound is then repeated. As the displacement from a single earthquake ranges from less than a meter to around 10 meters (for an M 8 quake),[65] the demonstrated existence of large strike-slip displacements of hundreds of miles shows the existence of a long-running earthquake cycle.[66]
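Under this model, a rough recurrence interval follows from dividing the slip released in one large earthquake by the long-term rate at which plate motion reloads the fault. A back-of-the-envelope sketch with illustrative values:

```python
# Elastic rebound arithmetic: steady loading, sudden release.
slip_per_event_m = 4.0      # metres released per large quake (illustrative)
slip_rate_mm_yr = 35.0      # long-term fault slip rate (illustrative)

recurrence_years = slip_per_event_m * 1000.0 / slip_rate_mm_yr
print(f"expected recurrence interval: ~{recurrence_years:.0f} years")
```

With these assumed numbers the cycle repeats on the order of a century, which is why instrumental records spanning only a few decades sample so few cycles.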
The most studied earthquake faults (such as the Nankai megathrust, the Wasatch fault, and the San Andreas fault) appear to have distinct segments. The characteristic earthquake model postulates that earthquakes are generally constrained within these segments.[67] As the lengths and other properties[68] of the segments are fixed, earthquakes that rupture the entire fault should have similar characteristics. These include the maximum magnitude (which is limited by the length of the rupture), and the amount of accumulated strain needed to rupture the fault segment. Since continuous plate motions cause the strain to accumulate steadily, seismic activity on a given segment should be dominated by earthquakes of similar characteristics that recur at somewhat regular intervals.[69] For a given fault segment, identifying these characteristic earthquakes and measuring their recurrence interval (or, conversely, return period) should therefore inform us about the next rupture; this is the approach generally used in forecasting seismic hazard.[70] Return periods are also used for forecasting other rare events, such as cyclones and floods, and assume that future frequency will be similar to observed frequency to date.
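When a return period is used for forecasting, a common convention converts it into the probability of at least one event within a planning horizon, assuming occurrences are Poisson (an assumption a strictly quasi-periodic model would reject):

```python
import math

def prob_of_event(return_period_yr: float, horizon_yr: float) -> float:
    """P(at least one event within the horizon) under a Poisson model
    with mean rate 1 / return_period_yr."""
    return 1.0 - math.exp(-horizon_yr / return_period_yr)

# Illustrative: a segment with an assumed 150-year return period.
print(f"{prob_of_event(150, 30):.0%} chance within 30 years")    # ~18%
print(f"{prob_of_event(150, 150):.0%} chance within 150 years")  # ~63%
```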
The idea of characteristic earthquakes was the basis of the Parkfield prediction: fairly similar earthquakes in 1857, 1881, 1901, 1922, 1934, and 1966 suggested a pattern of breaks every 21.9 years, with a standard deviation of ±3.1 years.[71] Extrapolation from the 1966 event led to a prediction of an earthquake around 1988, or before 1993 at the latest (with 95% confidence).[72] The appeal of such a method is that the prediction is derived entirely from the trend, which supposedly accounts for the unknown and possibly unknowable earthquake physics and fault parameters. However, in the Parkfield case the predicted earthquake did not occur until 2004, a decade late. This seriously undercuts the claim that earthquakes at Parkfield are quasi-periodic, and suggests the individual events differ sufficiently in other respects to question whether they have distinct characteristics in common.[73]
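The trend part of that extrapolation is easy to reproduce from the event years given above. The sketch below computes the raw interval statistics; note that the published analysis treated the anomalous 1934 event specially, which is why its quoted ±3.1-year scatter is tighter than the raw sample value:

```python
import statistics

years = [1857, 1881, 1901, 1922, 1934, 1966]
intervals = [b - a for a, b in zip(years, years[1:])]  # 24, 20, 21, 12, 32

mean = statistics.mean(intervals)   # 21.8 years
sd = statistics.stdev(intervals)    # ~7.2 years from the raw intervals

print(f"mean recurrence: {mean:.1f} +/- {sd:.1f} years (raw sample)")
print(f"naive extrapolation: next event around {1966 + mean:.0f}")
```

The naive extrapolation lands on 1988, as predicted; the 2004 event then sits roughly two raw standard deviations beyond it.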
Further research into the Parkfield seismic data revealed that several M 4.0 earthquakes had reduced the stresses on the northwest portion of the Parkfield segment, causing it to skip generating the predicted M 6.0 earthquake.[74]
The failure of the Parkfield prediction has raised doubt as to the validity of the characteristic earthquake model itself.[75] Some studies have questioned the various assumptions, including the key one that earthquakes are constrained within segments, and suggested that the "characteristic earthquakes" may be an artifact of selection bias and the shortness of seismological records (relative to earthquake cycles).[76] Other studies have considered whether other factors need to be considered, such as the age of the fault.[77] Whether earthquake ruptures are more generally constrained within a segment (as is often seen), or break past segment boundaries (also seen), has a direct bearing on the degree of earthquake hazard: earthquakes are larger where multiple segments break, but in relieving more strain they will happen less often.[78]
At the contact where two tectonic plates slip past each other every section must eventually slip, as (in the long-term) none get left behind. But they do not all slip at the same time; different sections will be at different stages in the cycle of strain (deformation) accumulation and sudden rebound. In the seismic gap model the "next big quake" should be expected not in the segments where recent seismicity has relieved the strain, but in the intervening gaps where the unrelieved strain is the greatest.[79] This model has an intuitive appeal; it is used in long-term forecasting, and was the basis of a series of circum-Pacific (Pacific Rim) forecasts in 1979 and 1989–1991.[80]
However, some underlying assumptions about seismic gaps are now known to be incorrect. A close examination suggests that "there may be no information in seismic gaps about the time of occurrence or the magnitude of the next large event in the region";[81] statistical tests of the circum-Pacific forecasts show that the seismic gap model "did not forecast large earthquakes well".[82] Another study concluded that a long quiet period did not increase earthquake potential.[83]
Various heuristically derived algorithms have been developed for predicting earthquakes. Probably the most widely known is the M8 family of algorithms (including the RTP method) developed under the leadership of Vladimir Keilis-Borok. M8 issues a "Time of Increased Probability" (TIP) alarm for a large earthquake of a specified magnitude upon observing certain patterns of smaller earthquakes. TIPs generally cover large areas (up to a thousand kilometers across) for up to five years.[84] Such large parameters have made M8 controversial, as it is hard to determine whether any hits that happened were skillfully predicted, or only the result of chance.
M8 gained considerable attention when the 2003 San Simeon and Hokkaido earthquakes occurred within a TIP.[85] But a widely publicized TIP for an M 6.4 quake in Southern California in 2004 was not fulfilled, nor were two other lesser-known TIPs.[86] A detailed study of the RTP method in 2008 found that out of some twenty alarms only two could be considered hits (and one of those had a 60% chance of happening anyway).[87] It concluded that "RTP is not significantly different from a naïve method of guessing based on the historical rates [of] seismicity."[88]
Accelerating moment release (AMR, "moment" being a measurement of seismic energy), also known as time-to-failure analysis, or accelerating seismic moment release (ASMR), is based on observations that foreshock activity prior to a major earthquake not only increased, but increased at an exponential rate.[89] In other words, a plot of the cumulative number of foreshocks gets steeper just before the main shock.
After Bowman et al. (1998) formulated AMR as a testable hypothesis,[90] and following a number of positive reports, the method seemed promising[91] despite several problems. Known issues included not being detected for all locations and events, and the difficulty of projecting an accurate occurrence time when the tail end of the curve gets steep.[92] But rigorous testing has shown that apparent AMR trends likely result from how data fitting is done,[93] and from failing to account for spatiotemporal clustering of earthquakes.[94] The AMR trends are therefore statistically insignificant. Interest in AMR (as judged by the number of peer-reviewed papers) has fallen off since 2004.[95]
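AMR analyses typically fit a time-to-failure curve to cumulative seismicity, with the failure time itself as the predicted quantity. The sketch below fits the commonly quoted power-law form to synthetic data (this is not Bowman et al.'s code, and the parameter values are illustrative); it shows the fitting step whose sensitivity is at issue:

```python
import numpy as np
from scipy.optimize import curve_fit

# Commonly quoted time-to-failure form for cumulative Benioff strain:
#   s(t) = A + B * (tf - t)**m,   with B < 0 and 0 < m < 1,
# where tf is the failure (mainshock) time being estimated.
def amr(t, A, B, tf, m):
    return A + B * np.clip(tf - t, 1e-9, None) ** m

rng = np.random.default_rng(0)
t_obs = np.linspace(0.0, 9.5, 40)                    # years (synthetic)
s_obs = amr(t_obs, 10.0, -5.0, 10.0, 0.3) + rng.normal(0, 0.05, 40)

popt, _ = curve_fit(amr, t_obs, s_obs, p0=(8.0, -4.0, 11.0, 0.5))
print(f"estimated failure time tf ≈ {popt[2]:.2f} years (true: 10.0)")
```

With real catalogs the curvature parameter and tf trade off strongly, so small changes in fitting choices move the estimated failure time substantially, consistent with the testing results cited above.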
The occurrence of foreshocks has long been thought to be the most promising avenue for predicting earthquakes. A foreshock is a smaller earthquake that can strike minutes or days before a larger one. Because the earthquake rupture process is still not completely understood, foreshock occurrence may give clues about the triggering process. In the Non-Critical Precursory Accelerating Seismicity Theory (N-C PAST), foreshocks happen because of the constant buildup of pressure along fault lines.[96] Seismic measurements lend some weight to this theory. This has led some scientists to conclude that foreshocks are a precursor to a larger event, and should be further studied and considered in earthquake prediction.
These are predictions, or claims of predictions, that are notable either scientifically or because of public notoriety, and claim a scientific or quasi-scientific basis. As many predictions are held confidentially, or published in obscure locations, and become notable only when success is claimed, there may be some selection bias in that hits get more attention than misses. The predictions listed here are discussed in Hough's book[97] and Geller's paper.[98]
The M 7.3 Haicheng (China) earthquake of 4 February 1975 is the most widely cited "success" of earthquake prediction.[99] Study of seismic activity in the region led the Chinese authorities to issue a medium-term prediction in June 1974. The political authorities therefore ordered various measures taken, including enforced evacuation of homes, construction of "simple outdoor structures", and showing of movies out-of-doors. The quake, striking at 19:36, was powerful enough to destroy or badly damage about half of the homes. However, the "effective preventative measures taken" were said to have kept the death toll under 300 in an area with population of about 1.6 million, where otherwise tens of thousands of fatalities might have been expected.[100]
However, although a major earthquake occurred, there has been some skepticism about the narrative of measures taken on the basis of a timely prediction. This event occurred during the Cultural Revolution, when "belief in earthquake prediction was made an element of ideological orthodoxy that distinguished the true party liners from right wing deviationists".[101] Recordkeeping was disordered, making it difficult to verify details, including whether there was any ordered evacuation. The method used for either the medium-term or short-term predictions (other than "Chairman Mao's revolutionary line"[102]) has not been specified.[103] The evacuation may have been spontaneous, following the strong (M 4.7) foreshock that occurred the day before.[104]
A 2006 study that had access to an extensive range of records found that the predictions were flawed. "In particular, there was no official short-term prediction, although such a prediction was made by individual scientists."[105] Also: "it was the foreshocks alone that triggered the final decisions of warning and evacuation". They estimated that 2,041 lives were lost. That more did not die was attributed to a number of fortuitous circumstances, including earthquake education in the previous months (prompted by elevated seismic activity), local initiative, timing (occurring when people were neither working nor asleep), and local style of construction. The authors conclude that, while unsatisfactory as a prediction, "it was an attempt to predict a major earthquake that for the first time did not end up with practical failure."
The "Parkfield earthquake prediction experiment" was the most heralded scientific earthquake prediction ever.[106] It was based on an observation that the Parkfield segment of the San Andreas Fault[107] breaks regularly with a moderate earthquake of about M 6 every several decades: 1857, 1881, 1901, 1922, 1934, and 1966.[108] More particularly, Bakun & Lindh (1985) pointed out that, if the 1934 quake is excluded, these occur every 22 years, ±4.3 years. Counting from 1966, they predicted a 95% chance that the next earthquake would hit around 1988, or 1993 at the latest. The National Earthquake Prediction Evaluation Council (NEPEC) evaluated this, and concurred.[109] The U.S. Geological Survey and the State of California therefore established one of the "most sophisticated and densest nets of monitoring instruments in the world",[110] in part to identify any precursors when the quake came. Confidence was high enough that detailed plans were made for alerting emergency authorities if there were signs an earthquake was imminent.[111] In the words of the Economist: "never has an ambush been more carefully laid for such an event."[112]
1993 came, and passed, without fulfillment. Eventually there was an M 6.0 earthquake, on 28 September 2004, but without forewarning or obvious precursors.[113] While the experiment in catching an earthquake is considered by many scientists to have been successful,[114] the prediction was unsuccessful in that the eventual event was a decade late.[115]
Professors P. Varotsos, K. Alexopoulos and K. Nomicos – "VAN" – claimed in a 1981 paper an ability to predict M ≥ 2.6 earthquakes within 80 km of their observatory (in Greece) approximately seven hours beforehand, by measurements of 'seismic electric signals'. In 1996 Varotsos and other colleagues claimed to have predicted impending earthquakes within windows of several weeks, 100–120 km, and ±0.7 of the magnitude.[116]
The VAN predictions have been criticized on various grounds, including being geophysically implausible,[117] "vague and ambiguous",[118] failing to satisfy prediction criteria,[119] and retroactive adjustment of parameters.[120] A critical review of 14 cases where VAN claimed 10 successes showed only one case where an earthquake occurred within the prediction parameters.[121] The VAN predictions not only fail to do better than chance, but show "a much better association with the events which occurred before them", according to Mulargia and Gasperini.[122]
On 17 October 1989, the Mw 6.9 (Ms 7.1[123]) Loma Prieta ("World Series") earthquake (epicenter in the Santa Cruz Mountains northwest of San Juan Bautista, California) caused significant damage in the San Francisco Bay area of California.[124] The U.S. Geological Survey (USGS) reportedly claimed, twelve hours after the event, that it had "forecast" this earthquake in a report the previous year.[125] USGS staff subsequently claimed this quake had been "anticipated";[126] various other claims of prediction have also been made.[127]
Harris (1998) reviewed 18 papers (with 26 forecasts) dating from 1910 "that variously offer or relate to scientific forecasts of the 1989 Loma Prieta earthquake." (In this case no distinction is made between a forecast, which is limited to a probabilistic estimate of an earthquake happening over some time period, and a more specific prediction.[128]) None of these forecasts can be rigorously tested due to lack of specificity,[129] and where a forecast does bracket the correct time and location, the window was so broad (e.g., covering the greater part of California for five years) as to lose any value as a prediction. Predictions that came close (but given a probability of only 30%) had ten- or twenty-year windows.[130]
One debated prediction came from the M8 algorithm used by Keilis-Borok and associates in four forecasts.[131] The first of these forecasts missed both magnitude (M 7.5) and time (a five-year window from 1 January 1984, to 31 December 1988). They did get the location, by including most of California and half of Nevada.[132] A subsequent revision, presented to the NEPEC, extended the time window to 1 July 1992, and reduced the location to only central California; the magnitude remained the same. A figure they presented had two more revisions, for M ≥ 7.0 quakes in central California. The five-year time window for one ended in July 1989, and so missed the Loma Prieta event; the second revision extended to 1990, and so included Loma Prieta.[133]
When discussing success or failure of prediction for the Loma Prieta earthquake, some scientists argue that it did not occur on the San Andreas fault (the focus of most of the forecasts), and involved dip-slip (vertical) movement rather than strike-slip (horizontal) movement, and so was not predicted.[134] Other scientists argue that it did occur in the San Andreas fault zone, and released much of the strain accumulated since the 1906 San Francisco earthquake; therefore several of the forecasts were correct.[135] Hough states that "most seismologists" do not believe this quake was predicted "per se".[136] In a strict sense there were no predictions, only forecasts, which were only partially successful.
Iben Browning claimed to have predicted the Loma Prieta event, but (as will be seen in the next section) this claim has been rejected.
Dr. Iben Browning (a scientist with a Ph.D. degree in zoology and training as a biophysicist, but no experience in geology, geophysics, or seismology) was an "independent business consultant" who forecast long-term climate trends for businesses.[137] He supported the idea (scientifically unproven) that volcanoes and earthquakes are more likely to be triggered when the tidal force of the sun and the moon coincide to exert maximum stress on the earth's crust.[138] Having calculated when these tidal forces maximize, Browning then "projected"[139] what areas were most at risk for a large earthquake. An area he mentioned frequently was the New Madrid Seismic Zone at the southeast corner of the state of Missouri, the site of three very large earthquakes in 1811–12, which he coupled with the date of 3 December 1990.
Browning's reputation and perceived credibility were boosted when he claimed in various promotional flyers and advertisements to have predicted (among various other events[140]) the Loma Prieta earthquake of 17 October 1989.[141] The National Earthquake Prediction Evaluation Council (NEPEC) formed an Ad Hoc Working Group (AHWG) to evaluate Browning's prediction. Its report (issued 18 October 1990) specifically rejected the claim of a successful prediction of the Loma Prieta earthquake.[142] A transcript of his talk in San Francisco on 10 October showed he had said: "there will probably be several earthquakes around the world, Richter 6+, and there may be a volcano or two" – which, on a global scale, is about average for a week – with no mention of any earthquake in California.[143]
Though the AHWG report disproved both Browning's claims of prior success and the basis of his "projection", it had little impact, coming after a year of continued claims of a successful prediction. Browning's prediction had received the support of geophysicist David Stewart,[144] and the tacit endorsement of many public authorities in their preparations for a major disaster, all of which was amplified by massive exposure in the news media.[145] Nothing happened on 3 December,[146] and Browning died of a heart attack seven months later.[147]
The M8 algorithm (developed under the leadership of Dr. Vladimir Keilis-Borok at UCLA) gained respect by the apparently successful predictions of the 2003 San Simeon and Hokkaido earthquakes.[148] Great interest was therefore generated by the prediction in early 2004 of a M ≥ 6.4 earthquake to occur somewhere within an area of southern California of approximately 12,000 sq. miles, on or before 5 September 2004.[85] In evaluating this prediction the California Earthquake Prediction Evaluation Council (CEPEC) noted that this method had not yet made enough predictions for statistical validation, and was sensitive to input assumptions. It therefore concluded that no "special public policy actions" were warranted, though it reminded all Californians "of the significant seismic hazards throughout the state."[149] The predicted earthquake did not occur.
A very similar prediction was made for an earthquake on or before 14 August 2005, in approximately the same area of southern California. The CEPEC's evaluation and recommendation were essentially the same, this time noting that the previous prediction and two others had not been fulfilled.[150] This prediction also failed.
At 03:32 on 6 April 2009, the Abruzzo region of central Italy was rocked by an M 6.3 earthquake.[151] In the city of L'Aquila and surrounding area around 60,000 buildings collapsed or were seriously damaged, resulting in 308 deaths and 67,500 people left homeless.[152] Around the same time, it was reported that Giampaolo Giuliani had predicted the earthquake and had tried to warn the public, but had been muzzled by the Italian government.[153]
Giampaolo Giuliani was a laboratory technician at the Laboratori Nazionali del Gran Sasso. As a hobby he had for some years been monitoring radon using instruments he had designed and built. Prior to the L'Aquila earthquake he was unknown to the scientific community, and had not published any scientific work.[154] He had been interviewed on 24 March by an Italian-language blog, Donne Democratiche, about a swarm of low-level earthquakes in the Abruzzo region that had started the previous December. He said that this swarm was normal and would diminish by the end of March. On 30 March, L'Aquila was struck by a magnitude 4.0 temblor, the largest to date.[155]
On 27 March Giuliani warned the mayor of L'Aquila there could be an earthquake within 24 hours, and an earthquake of about M 2.3 occurred.[156] On 29 March he made a second prediction.[157] He telephoned the mayor of the town of Sulmona, about 55 kilometers southeast of L'Aquila, to expect a "damaging" – or even "catastrophic" – earthquake within 6 to 24 hours. Loudspeaker vans were used to warn the inhabitants of Sulmona to evacuate, with consequent panic. No quake ensued and Giuliani was cited for inciting public alarm and enjoined from making public predictions.[158]
After the L'Aquila event Giuliani claimed that he had found alarming rises in radon levels just hours before.[159] He said he had warned relatives, friends and colleagues on the evening before the earthquake hit.[160] He was subsequently interviewed by the International Commission on Earthquake Forecasting for Civil Protection, which found that there had been no valid prediction of the mainshock before its occurrence.[161]