Dust flux, Vostok ice core
Two-dimensional phase-space reconstruction of dust flux from the Vostok ice core over the period 186-4 ka, using the time-derivative method. Dust flux is on the x-axis; its rate of change is on the y-axis. From Gipp (2001).

Sunday, January 21, 2018

Dendritic fractures in river ice

We've had a pretty cold winter for this part of the world. Nothing like what you are having in Ontario, though. But still cold enough that I wish I hadn't forgone a winter coat this year.

This is the first time I have ever seen snow in Zhengzhou stay on the ground for more than a week. The canals froze, but that happens at least once every winter. They stayed frozen for longer than usual, until the warm spell that started just before the weekend.

Last weekend, crossing the bridge on Nongyedong Lu just east of Zhongzhou Lu, I saw these.




Cracking and refreezing. The dendritic pattern usually indicates a diffusive process, but I am not sure how that translates here.
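The textbook toy model for this sort of branching growth is diffusion-limited aggregation (DLA). Here is a minimal sketch, with an arbitrary grid size and walker count and walkers launched from random interior cells rather than the usual bounding circle--real ice fracture surely involves more physics than random walkers, but the dendrites it produces look familiar:

```python
# Minimal diffusion-limited aggregation (DLA) sketch: random walkers
# stick to a growing cluster, producing dendritic branches.
import numpy as np

rng = np.random.default_rng(0)
N = 201                       # grid side length (arbitrary)
grid = np.zeros((N, N), bool)
grid[N // 2, N // 2] = True   # seed "crystal" at the centre

def neighbours_occupied(r, c):
    # true if any cell in the 3x3 patch around (r, c) is occupied
    return grid[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2].any()

for _ in range(2000):         # number of walkers (arbitrary)
    r, c = rng.integers(1, N - 1, size=2)   # random launch point
    while True:
        r += rng.integers(-1, 2)            # random step, diagonals allowed
        c += rng.integers(-1, 2)
        if not (0 < r < N - 1 and 0 < c < N - 1):
            break             # walked off the grid; abandon this walker
        if neighbours_occupied(r, c):
            grid[r, c] = True # touch the cluster and freeze in place
            break

# crude text rendering of the aggregate
for row in grid[::4, ::4]:
    print("".join("#" if x else "." for x in row))
```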








Thursday, January 26, 2017

"It's like being on the Moon!"

. . . said just about everyone about the terrain around Hamningberg.

Hamningberg is an abandoned fishing village at the northeastern tip of eastern Finnmark--the northernmost part of Norway. The town was largely depopulated in the 1960s, although people still use some of the homes as summer cottages. There is even a small coffee shop (or was, in 1993). What makes the town special is that it is one of the few villages where the buildings predate the war.

When the Germans retreated from Finnmark during the last winter of the war, they were ordered to burn everything. However (so I was told), the commander of the German forces stationed in Hamningberg took pity on the people and disobeyed the order--a decision probably made easier by the knowledge that no other German units would pass through to discover it. So while every other village in Finnmark was razed, Hamningberg remained.




Things to do in town include visiting the abandoned German gun emplacements and, if you have a flashlight, the pillbox and the network of tunnels between the ammunition storage areas, the observation area, and the rails for the gun.


WW2 vintage barbed wire



The picture quality isn't all that great--the slides look okay, but the scanner isn't doing a very good job of scanning them.

What I was most interested in seeing in the area was the landscape. Everyone I knew in Finnmark told me that going there was like going to the moon. Even this site describes it as a "moonscape".



Just for reference, here is a real moonscape.



The local geology around Hamningberg consists of alternating sequences of sandstone and shale, which have been folded so that the bedding is nearly vertical. The shale tends to get eroded out, but the more resistant sandstone beds remain as broken walls across the landscape. Craters are absent. So, the place doesn't look like the moon at all.


But there is something otherworldly about the place. I think this common description--like the surface of the moon--reflects the fact that the landscape looks radically different from any other landscape most people have ever seen.

For one thing, there isn't a lot of vegetation. But (at least here in Canada) there are plenty of shield areas with practically no vegetation, so that alone can't be it. The other reason has to do with the geometry of the area's landforms.


In the early days of computer-generated landscapes, there were experiments in which people would be shown some of the simulations and asked to rate them as realistic or not. Most of these landscapes were generated using simple rules, with a seed shape (usually a triangular pyramid) and a characteristic fractal dimension. It turns out people were remarkably good at picking out the landscapes whose fractal dimensions fell within the typical range of landscapes on earth. Anything outside of this range was "otherworldly".
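A minimal sketch of the idea, using one-dimensional midpoint displacement (a simpler cousin of the pyramid-seeded surfaces those experiments used, not their actual code); the roughness parameter H is what sets the fractal dimension of the profile, D = 2 - H for a curve:

```python
# One-dimensional midpoint-displacement sketch: H controls the fractal
# dimension of the profile (D = 2 - H). Profiles with D far from typical
# terrestrial values look "wrong" to the eye.
import numpy as np

def midpoint_profile(levels, H, rng):
    pts = np.array([0.0, 0.0])            # endpoints of the profile
    scale = 1.0
    for _ in range(levels):
        mids = 0.5 * (pts[:-1] + pts[1:])           # midpoints
        mids += rng.normal(0.0, scale, size=mids.size)  # random offsets
        out = np.empty(pts.size + mids.size)
        out[0::2], out[1::2] = pts, mids            # interleave
        pts = out
        scale *= 2.0 ** (-H)              # shrink displacements each level
    return pts

rng = np.random.default_rng(1)
smooth = midpoint_profile(10, H=0.9, rng=rng)   # D ~ 1.1, earth-like
rough  = midpoint_profile(10, H=0.2, rng=rng)   # D ~ 1.8, "otherworldly"
print(smooth.size, rough.size)
```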

For a computer-generated landscape to resemble Hamningberg, it may have to be seeded with rectangles rather than pyramids. I don't think the fractal dimension is anything unusual, however. But the description of the area as being otherworldly may reflect the preferences that people have for landscapes that conform to their ideas of what constitutes a "natural" landscape.





 

Saturday, November 23, 2013

Interpretation of scaling laws for US income

It has been remarked that if one tells an economist that inequality has increased, the doctrinaire response is "So what?"
                                          - Oxford Handbook of Inequality

h/t Bruce Krasting

Social Security online has published a full report on income distribution in America.

Two years ago we looked at the distribution of wealth in America. Today we are looking at income.


There were a total of about 153 million wage earners in the US in 2012, which is why the graph suddenly terminates there.

As we have discussed before, in self-organizing systems, we expect the observations, when plotted on logarithmic axes, to lie on a straight line. Casual observation of the above graph shows a slight curve, which gives us some room for interpretation.
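For anyone who wants to try the straight-line test themselves, here is a sketch; the income and rank figures below are invented placeholders, not the SSA data:

```python
# Sketch of the straight-line test: regress log(income) on log(rank).
# The figures here are invented for illustration, not the SSA numbers.
import numpy as np

rank   = np.array([1e3, 1e4, 1e5, 1e6, 1e7, 1e8])       # earners ranked by income
income = np.array([2e6, 4e5, 1.5e5, 8e4, 4e4, 1.5e4])   # hypothetical annual income

slope, intercept = np.polyfit(np.log10(rank), np.log10(income), 1)
print(f"log-log slope: {slope:.2f}")   # a straight line => a single power law
# Systematic curvature in the residuals is the "room for interpretation":
residuals = np.log10(income) - (slope * np.log10(rank) + intercept)
print(residuals.round(3))
```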

I have drawn two possible "ideal states"--the yellow line and the green line. Those who feel the yellow line best represents the "correct" income distribution in the US would argue that the discrepancy at lower incomes (below about $100k per year) represents government redistribution of wealth from the pockets of the ultra-rich to those less deserving. Followers of the green line would argue the opposite--that the ultra-wealthy are earning roughly double what they should be, based on the earnings at the lower end.

Which is it? Looking at the graph, you can't tell. But suppose we look at the numbers. Adherents of the yellow line would say that roughly 130 million people are getting more than they should. The largest discrepancy is about 40%, so if we assume that on average these 130 million folks are drawing 20% more than they should (thanks to the enslavement of the ultra-wealthy), we find that the excess drawings total over $1 trillion. Thanks, Pluto!

The trouble with this analysis is that the combined earnings of the ultra-wealthy--the top 100,000--total only about $400 billion. They simply aren't rich enough to have provided the middle class with all that money.

Now let's consider the green line. Here we are suggesting that the ultra-wealthy are earning about twice as much as they should be, and let's hypothesize that this extra income is somehow transferred from the middle and lower classes.

As above, the total income of the ultra-rich is about $400 billion. If half of this has been skimmed from the aforementioned 130 million, each would have had to contribute about $1,500.

I expect a heavier weight has fallen on those at the upper end of the middle-class spectrum; but even so, $1500 per wage earner does seem doable. Of the two interpretations, the green line looks to be at least plausible, and we are forced to conclude that those who believe the ultra-wealthy are drawing a good portion of their salaries from everyone else have a point.

But isn't $1500 per year a small price to pay to create a really wealthy super-class?

Paper on causes of income inequality full of economic axiomatic gibberish here (pdf).

Friday, February 24, 2012

Another view on default cascades--Battiston et al. (2011)

This paper (pdf) was recently published in Switzerland, and provides an interesting look at our recent topic--default cascades. Although papers like this are mathematically dense, they are sometimes worth working through, as they may foreshadow future economic policy.

Block-slider model of earthquakes

Battiston et al. (2011) have presented a model of the financial system which looks like one of Turcotte's slider-block models of earthquakes: numerous blocks of (possibly varying) masses, connected by springs, sliding across a surface with limited (and possibly variable) friction. Motion of one block changes the stress field across the model, possibly triggering slip in one or more other blocks.


The original slider-block model consisted of two blocks connected by a spring, both sitting on a somewhat rough surface (so there is friction between the surface and the blocks). If block A moves some small distance, it adds to the forces on block B. That added force may be enough to overcome the friction which kept block B stable; if both blocks move together, we have a larger earthquake. Even the simple two-block slider model exhibits chaotic behaviour (Turcotte, 1997). I remember attending a conference a few years before that volume was published, at which Turcotte presented a more advanced model that looked something like the one below.
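A quasi-static caricature of the two-block model is easy to code. This is a sketch of the idea rather than Turcotte's formulation; the friction thresholds, coupling fraction, and loading rate below are arbitrary:

```python
# Quasi-static sketch of the two-block slider: slow uniform loading, a
# slip whenever the force on a block exceeds its static friction, and a
# fraction of the released stress handed to the other block.
import numpy as np
from collections import Counter

rng = np.random.default_rng(2)
k_couple = 0.6                    # fraction of stress passed to the neighbour
friction = np.array([5.0, 6.3])   # static friction thresholds (arbitrary)
force = np.zeros(2)               # elastic force stored at each block
sizes = []                        # number of individual slips per event

for _ in range(200000):
    force += 0.001                # slow "tectonic" loading
    slips = 0
    moved = True
    while moved:                  # cascade until both blocks are stable
        moved = False
        for i in (0, 1):
            if force[i] > friction[i]:
                force[1 - i] += k_couple * force[i]  # stress transfer
                force[i] = rng.uniform(0.0, 1.0)     # slip releases stress
                slips += 1
                moved = True
    if slips:
        sizes.append(slips)

print(Counter(sizes))             # events involving one slip, two, three...
```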


We are looking at a plan view of several interconnected blocks. The frictional force varies for each block, and each block has its own driver. Once again, the slippage of a single block may trigger slippages in one or more other blocks--the more blocks that slip, the larger the earthquake. We might expect such models to satisfy the Gutenberg-Richter law, the observed distribution of earthquake sizes through time that is consistent with a system at self-organized criticality (SOC). I have not seen results for this particular model, but published comments on similar models used to study avalanches were consistent with SOC (there are those avalanches again).

Block-slider model of default cascades

According to Battiston et al. (2011), prior to the financial crisis of 2008, existing models suggested that major financial entities had diversified their debts and obligations sufficiently that the likelihood of systemic failure was negligible. The observed financial crisis suggests that this conclusion was unwarranted, to say the least. The authors attempt to study the effects of diversification on systemic risks using a model conceptually similar to the block-slider model above.*

In the financial model, the blocks represent financial institutions. There are a large number of possible interactions between one institution and its neighbours. Furthermore, there is a richness to the interactions that is missing in the earthquake slider-block model--the debts and credits between institutions may each be long- or short-dated, so that there may be a mismatch in maturities between the credits and obligations of any one institution.


In the above figure, which shows only a portion of the potential interactions among entities A, h1, h2, etc., the arrows point in the direction in which credit has been extended. Credit may be long- or short-term. For instance, entity A has extended long-term credit to entity j1 and short-term credit to entity m1; in turn, it has borrowed long-term from entity h1 and short-term from entity n1.

The authors carry out the following experiment: assume an initial allocation of assets and liabilities across the different participants, and derive (logically rather than empirically) a law of "motion" for the financial robustness of each agent affected by one or more initial defaults, as measured by its equity ratio. Models are run, and the size of the default cascade is compared to the initial distribution of robustness and risk diversification.
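Here is a toy version of that experiment--not the authors' actual law of motion, just the flavour of it. The balance-sheet numbers, network, and loss-given-default are all invented:

```python
# Caricature of the experiment: banks hold claims on one another; a
# default forces creditors to write down those claims against equity,
# and any bank whose equity goes negative defaults in turn.
import numpy as np

rng = np.random.default_rng(3)
n, k = 100, 5                 # banks; interbank loans per bank
interbank = 0.2               # interbank share of each bank's assets
exposure = interbank / k      # size of each loan
lgd = 0.5                     # assumed loss given default

def cascade_size(spread):
    # "robustness" = equity ratio, drawn around 8% with the given spread
    equity = rng.normal(0.08, spread, n).clip(0.005, None)
    debtors = [rng.choice(np.delete(np.arange(n), i), k, replace=False)
               for i in range(n)]          # who each bank has lent to
    failed = np.zeros(n, bool)
    failed[rng.integers(n)] = True         # one initial default
    frontier = list(np.flatnonzero(failed))
    while frontier:                        # propagate the write-downs
        nxt = []
        for victim in frontier:
            for i in range(n):
                if not failed[i] and victim in debtors[i]:
                    equity[i] -= lgd * exposure   # write down the loan
                    if equity[i] < 0:
                        failed[i] = True
                        nxt.append(i)
        frontier = nxt
    return failed.sum()

for spread in (0.0, 0.02, 0.05):           # robustness variability
    runs = [cascade_size(spread) for _ in range(100)]
    print(f"robustness spread {spread:.2f}: mean cascade {np.mean(runs):.1f} banks")
```

Even this toy shows the first of the effects described below: with uniform robustness nothing propagates, while a wide spread (some weak banks) lets a single default chain onward.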

The interrelationships between all the balance sheets of the various financial institutions link the dynamics of the individual equity ratios in ways that are not easily predictable.

The authors identify two "externalities" to the triggers for default cascades: 1) variability of financial robustness of all of the interconnected financial entities; and 2) the average financial robustness of the interconnected entities.

If all parties have similar financial robustness (variability is low), then increasing connectivity makes the system more robust--diversification can confer stability even if the individual parties are not very robust. It is only when the initial robustness is highly variable across agents (i.e., some agents are weak and others strong) that increasing interconnectedness tends to stimulate systemic defaults.

The second "externality" is a consequence of incomplete information--and deals with the likelihood that creditors will force a foreclosure on an otherwise solvent entity due to the fear that some of its counterparties might fail. Losses may therefore be amplified along the chain if runs begin on entities which may be technically solvent, but which may then be forced to sell long-dated assets at fire-sale prices to raise cash. Model runs suggest that if the average robustness of agents is high, then increased connectivity is beneficial. For low levels of average robustness, then increased connectivity has no effect. For intermediate values of average financial robustness, increased connectivity tended to stimulate systemic defaults.


The lesson here is that diversification is not always a good idea. If you diversify across financial entities with widely varying risk profiles (i.e., some weak and some strong), you actually increase the likelihood of a financial calamity.

We don't have to confine ourselves to financial institutions. If we consider our agents to be sovereign states, we expect the same problem. Creating a financial superpower out of a group of Germanys would be perfect--even a group of Greeces might be okay. But creating one out of Germanys and Greeces together tends to encourage a financial catastrophe. Who could have predicted that?

The authors suggest that the "fix" for this situation is to concentrate risk rather than diversify it. I wonder--in whose hands will the risk be concentrated? Perhaps if you hold gold, the risk won't find its way into yours.

References

Battiston, S., Delli Gatti, D., Greenwald, B., and Stiglitz, J. E., 2011. Default cascades: When does risk diversification increase stability? ETH Risk Center Working Paper Series.

Turcotte, D. L., 1997. Fractals and chaos in geology and geophysics, 2nd edition. Cambridge University Press.

* One key difference between a default cascade and an earthquake: in an earthquake, the tsunami (if there is one) comes afterwards. The ocean of liquidity in which we find ourselves has preceded the major financial earthquake.

Tuesday, February 21, 2012

Scale invariant behaviour in avalanches, forest fires, and default cascades: lessons for public policy

We show that certain extended dissipative dynamical systems naturally evolve into a critical state, with no characteristic time or length scales. The temporal "fingerprint" of the self-organized critical state is the presence of flicker noise or 1/f noise; its spatial signature is the emergence of scale-invariant (fractal) structure.  - Bak et al., 1988 (one of the greatest abstracts ever written!)

1987 saw the publication of an extraordinary paper--one which led to a dramatic change in our understanding of certain kinds of dynamical systems. Most importantly, it introduced the concept of self-organized criticality, or self-organization to the critical state--a condition neither fully stable nor fully unstable, with a characteristic size-distribution of events (or failures). In the kinds of systems that interest geologists, earthquakes and avalanches were quickly recognized as SOC systems, and SOC was recognized as the most efficient means of transmitting energy through a system.

Avalanches and SOC

An early computational experiment went like this: imagine a pile of sand, onto which single grains of sand are dropped one by one. An avalanche occurs whenever the slope at some local point exceeds a defined value.

If your sandpile is two-dimensional (length and height--imagine a cross-section of a real sandpile), you can represent it as a string of numbers, where each value is the number of grains of sand stacked at that point. In the figure below, we are only looking at half of the pile, from the midpoint to the edge.


In our simple sandpile consisting of four stacks, a grain of sand of thickness dx falls onto the middle stack. If the difference in heights between this stack and its neighbour (x1 in the figure above) exceeds some threshold value n, then one grain of sand drops from the higher stack onto the lower one. You would then check whether the height of the next stack was now more than n higher than its neighbouring stack; if so, another grain of sand would drop down one more stack, and so on to the end of the pile.

What happens in a two-dimensional sandpile is that eventually the height of the sandpile is such that each stack is exactly n higher than its neighbouring stack. As a new grain of sand is dropped onto the pile, it migrates along all of the stacks and drops off the edge of the pile.
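A sketch of this one-dimensional pile, with an arbitrary stack count and threshold:

```python
# The one-dimensional pile described above: heights along a line, grains
# added at the closed end, and a grain toppling whenever a stack is more
# than n higher than its right-hand neighbour (height 0 past the edge).
import numpy as np

n = 2                          # critical height difference (arbitrary)
h = np.zeros(8, dtype=int)     # stacks from the midpoint out to the edge

for grain in range(200):
    h[0] += 1                  # drop a grain onto the innermost stack
    moved = True
    while moved:               # relax until no height difference exceeds n
        moved = False
        for i in range(len(h)):
            right = h[i + 1] if i + 1 < len(h) else 0  # open edge
            if h[i] - right > n:
                h[i] -= 1
                if i + 1 < len(h):
                    h[i + 1] += 1   # grain drops to the next stack
                moved = True        # (or falls off the edge and is lost)

print(h)   # steady state: each stack exactly n higher than its neighbour
```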

The behaviour of the sandpile is very simple; but what happens when you move to a three-dimensional model (I'm counting the height of the pile as a dimension--not all authors describing this problem do so)? You might expect similar behaviour--that the slope of the pile will increase until a single grain of sand causes a rippling cascade through the entire pile. This doesn't happen, for it would imply that the natural behaviour of the system is to evolve towards a point of maximum instability. In the experiment, the behaviour of the sandpile was much more interesting: the pile built up until it reached a form of stability characterized by frequent avalanches with no characteristic size.



Bak et al. (1987) called this condition of minimal stability the "critical state", and pointed out that since it developed independently of modelling assumptions and external parameters, it arose by self-organization--the term "self-organized criticality" (SOC) was introduced to describe the process. The signatures of systems displaying SOC are fractal geometry and flicker noise (also called 1/f noise).

There are many systems in nature--and increasingly in the human environment--which are similar to the avalanche model described above. Real avalanches, and similar mass sliding events (debris flows in the deep sea, for instance) have been recognized as SOC processes; along with earthquakes, volcanic eruptions, and economic events.

Forest fires were quickly recognized to be characterized by SOC--at least in environments without a lot of active management. Curiously, it quickly turned out that the effects of fire management, at least as practiced in the United States, might have had an effect opposite to that which was desired.

Fire suppression in the United States

“Strange to say, that, obvious as the evils of fire are, and beyond all question to any one acquainted with even the elements of vegetable physiology, persons have not been found wanting in India, and some even with a show of scientific argument(!), who have written in favor of fires.  It is needless to remark that such papers are mostly founded on the fact that forests do exist in spite of the fires, and make up the rest by erroneous statements in regard to facts.”   B.H. Baden-Powell


As European settlers spread through what became the United States, they were confronted by an unusual world. Wilderness was something that had to be eliminated so that "civilization" could spread. Forests were to be cut and the land put to the plow. This was more than an economic imperative--it was a moral imperative as well. 


The rapid westward expansion in the 19th century brought railroads, and railroads brought further development and fire. While clearance of the forest was necessary for development, the desire to create a forestry industry based on sustainable harvesting rather than a short-sighted liquidation of old forests was driven by European examples. And thus the American ideas of forestry were transformed by the turn of the 20th century. Forests were resources that had to be tended. And as resources, any fires within them resulted in economic losses.

Fire had been used as a method of maintaining the forest by the native populations--but such a method was far too messy and unpredictable for a modern people--particularly those who looked to the forestry programs of western Europe, where fires were uncommon. The European model worked tolerably well in the eastern forests in North America, where water was plentiful year-round; but this model turned out to be unsuitable for the western forests, the life cycles of which required fire as a controlling element.

Major Powell launched into a long dissertation to show that the claim of the favorable influence of forest cover on water flow or climate was untenable, that the best thing to do for the Rocky Mountains was to burn them down, and he related with great gusto how he himself had started a fire that swept over a thousand square miles. - Bernard Fernow

The forests of the southwestern United States were subjected to a lengthy dry season, quite unlike the forests of the northeast. The northeastern forests were humid enough that decomposition of dead material would replenish the soils; but in the southwest, the climate was too dry in the summer and too cool in the winter for decomposition to be effective. Fire was needed to ensure healthy forests. Apart from replenishing the soils, fire was needed to reduce flammable litter, and the heat or smoke was required to germinate seeds.

In the late 19th century, light burning--setting small surface fires episodically to clear underbrush and keep the forests open--was a common practice in the western United States. So long as the fires remained small they tended to burn out undergrowth while leaving the older growth of the forests unscathed. The settlers who followed this practice recognized its native heritage, just as its opponents called it "Paiute forestry" as an expression of scorn (Pyne, 1982).

Supporters of burning did so for both philosophical and practical reasons--burning being the "Indian way" as well as expanding pasture and reducing fuels for forest fires. The detractors argued that small fires destroyed young trees, depleted soils, made the forest more susceptible to insects and disease, and were economically damaging. But the critical argument put forth by the opponents of burning was that it was inimical to the Progressive Spirit of Conservation. As a modern people, Americans should use the superior, scientific approaches of forest management that were now available to them, and which had not been available to the natives. Worse than being wrong, accepting native forest management methods would be primitive.

Bernhard Fernow, a Prussian-trained forester, thought fires were the ‘bane of American forests’ and dismissed their causes as a case of ‘bad habits and loose morals’. - Pyne (1995).


Through the early 20th century, the idea that fire was bad under all circumstances, and that fire control must be based on suppression of all fires, became the dominant conservation ideology. After WWII the idea became stronger still, partially because of the availability of military equipment, but also due to the Cold War mentality. Just like Communism, the spread of fire simply couldn't be tolerated--and it was the duty of America to contain both "red" menaces (Pyne, 1982).


In the latter part of the 20th century, the ideas behind fire suppression once again began to change. The emphasis on "modern" methodologies began to fade, with a preference appearing for restoration of the "old forest" from pre-settler times. Research into the forest had begun to reveal the importance of fire in the natural setting, and that humans had used fire to manage the forest throughout history. Costs of fire suppression had risen dramatically, and the damage done to the forest by the equipment and the methods of fire suppression often exceeded that done by the fires.


Gradually the idea of fire suppression faded, to be replaced by a determination to allow fire to return to its natural role. Major fires in Yellowstone Park in 1988 brought about something of a policy reversal, but it was recognized that a century of fire suppression had left the western forests in a dangerous state. Even though fire was to return to its natural cycle, the huge growth of underbrush had created a substantial risk of massive, out-of-control fires. This risk is an indicator of just how unhealthy fire suppression has made American forests.


By comparison, forests in Mexico, where there have been no fire-suppression efforts, are far healthier. Fires are more common, but tend to be smaller, due to lack of fuel.

Fire, water, and government know nothing of mercy. - Latin proverb


Default cascades as avalanches


Economic fluctuations have long been recognized as SOC phenomena. One type of fluctuation that has recently been posited is the "cascading cross-default", in which the failure of one entity to repay its debts drives one or more of its creditors into bankruptcy, which in turn drives one or more of their creditors into bankruptcy, and so on.


Clearly these default cascades can be of nearly any size. A default may affect only the defaulting institution--or it may take down all institutions in a global collapse. As a conceptual model, the sandpile automaton of Bak et al. (1987) is a pretty good representation--the key difference being that each individual stack in the economic sandpile is connected to a large number of other stacks, some of which are (geographically) quite distant. For instance, the failure of Deutsche Bank would likely put stress on Citigroup. Would it cause Citigroup to fail? Perhaps. We would model this by assigning a probability of failure for Citigroup in the event of a default by DB--and we would have to do this for all relationships between the different banks.


But we need conditional probabilities--because it may be that DB's failure alone wouldn't topple Citigroup. But suppose it topples ING, and Credit Suisse, and Joe's Bank in Tacoma, and Fred's Bank in Springfield, and Tim's Bank in Akron, . . . and many others, all of whom owe money to Citigroup. Then it might fall. So apart from the tremendous interconnectivity, with each bank connected to many others, there is also a tremendous density of those connections, all of which would appear to make the pile very unsteady.
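A toy version of this thought experiment can still be run if we replace the unknown conditional probabilities with a random "tolerance" for each bank--how many failed counterparties it can absorb before failing itself. That stand-in is invented, as are all the numbers below:

```python
# Threshold-contagion sketch: each bank fails once enough of its
# counterparties have failed. Network size, links, and tolerances
# are arbitrary stand-ins for the unknown conditional probabilities.
import numpy as np
from collections import Counter

rng = np.random.default_rng(4)
n, k = 50, 6                        # banks; counterparties per bank

def run_cascade():
    links = [set(rng.choice(np.delete(np.arange(n), i), k,
                            replace=False).tolist()) for i in range(n)]
    tolerance = rng.integers(1, k + 1, size=n)  # failures each bank can absorb
    failed = {int(rng.integers(n))}             # one initial failure
    changed = True
    while changed:                              # propagate until stable
        changed = False
        for i in range(n):
            if i not in failed and len(links[i] & failed) >= tolerance[i]:
                failed.add(i)
                changed = True
    return len(failed)

sizes = [run_cascade() for _ in range(2000)]
print(Counter(sizes).most_common(10))  # mostly small cascades, a few large
```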


Instead of dropping grains of sand one at a time on the same spot, multitudes of debt bombs are dropped randomly on the pile of financial institutions, provoking episodic failures. What might we expect of their size distribution?

The experiment as I've described it is too difficult to set up on my computer, mainly because I don't know how to establish the probabilities of failure for all of the various default chains that may exist. Furthermore, the political will to prevent financial contagion, although finite, is unmeasurable. Luckily we don't have to run the model, as it is playing out in real life.

Paper now primed to burn


We have lived through a long period of financial management, in which failing financial institutions have been propped up by emergency intervention (applied somewhat selectively). Defaults have not been permitted. The result has been a tremendous build-up of paper ripe for burning. Had the fires of default been allowed to burn freely in the past, we might well have healthier financial institutions today. Instead we find our banks loaded up with all kinds of flammable paper products, their basements stuffed with barrels of black powder. Trails of black powder run from bank to bank, and it's raining matches.

References

Bak, P., Tang, C., and Wiesenfeld, K., 1987. Self-organized criticality: An explanation of 1/f noise. Physical Review Letters, 59: 381-384.


Pyne, S. J., 1982. Fire in America: A cultural history of wildland and rural fire (Cycle of Fire).

Wednesday, January 11, 2012

Scale invariance and the scaling laws of Zipf and Benford

Scaling laws have been empirically observed in the size-distributions of parameters of complex systems, including (but not limited to): 1) incomes; 2) personal wealth; 3) cities (both population and area); 4) earthquakes, both locally and globally; 5) avalanches; 6) forest fires; 7) mineral deposits; and 8) market returns. Several years ago one of my students showed that various measures for the magnitude of terrorist attacks also observed scaling laws.

The general prevalence of scale invariance in geological phenomena is the reason for one of the first rules taught to all geology students--every picture must have a scale. The reason for this is that there is no characteristic scale for many geological phenomena--so one cannot tell without some sort of visual cue whether that photo of folded rocks is a satellite photo or one taken through a microscope--whereas one can make such a distinction about a picture of, say, a moose.

Numerous empirical laws (by which I mean equations) have been developed to describe the size-distribution of scale invariant phenomena. Most of these empirical laws were developed before the idea of scale invariance was well understood. One famous example is the Gutenberg-Richter law describing the size distribution of earthquakes.

Another statistical law, Zipf's Law, describes the relationship between size and rank. For cities, for instance, the largest city in a country will tend to have twice the population of the second-largest city and three times the population of the third. More formally, the relationship is stated as follows:


y = C r^(-1/k)

for a distribution where C is the magnitude of the largest individual in the population, y is the magnitude of the individual with rank r, and k is a constant which characterizes the system--but is commonly about 1.

If we plot rank vs size on a log-log plot, the graph should approximate a straight line with a slope of -1/k.
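A quick sketch of the fit; the city populations below are rough, illustrative round numbers:

```python
# Zipf's law check: on log-log axes, size against rank should fall on a
# line of slope -1/k. Populations are illustrative approximations only.
import numpy as np

pop = np.array([8.4e6, 3.9e6, 2.7e6, 2.3e6, 1.6e6, 1.5e6,
                1.4e6, 1.3e6, 1.1e6, 1.0e6])     # sorted, largest first
rank = np.arange(1, pop.size + 1)

slope, _ = np.polyfit(np.log10(rank), np.log10(pop), 1)
k = -1.0 / slope
print(f"slope = {slope:.2f}, so k = {k:.2f}")    # k near 1 is classic Zipf
# Zipf prediction anchored on the largest city alone: y = C * r**(-1/k)
print((pop[0] * rank ** slope).round(-4))
```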

For instance, a plot of city size vs rank for US cities appears as follows:


Data sourced here.

From the same data source we find a similar relationship when city size is determined from area rather than population:


In the first plot we obtain a value for k very close to 1. The plot where cities are ranked by area is not as clean, but this may be due to the arbitrary nature of city limits. Characterizing either of the above plots by Zipf's law is fairly straightforward--draw the straight line from the top-ranked city that best follows the trend of the observations.

A recent article published in Economic Geology argues that mines in Australia follow Zipf's Law. In summary, not only do the known deposits in Australian greenstone belts follow Zipf's law fairly closely, but early estimates of as-yet-undiscovered gold, projected from early Zipf's-law characterizations, compared favourably with the amount of gold eventually discovered.

The weakness I see with this approach is that it all depends rather strongly on the estimate of the size of the largest deposit. In any given area, the largest known deposit will be well studied, but history has shown us that mines can be "mined out" only to be rejuvenated by a new geological or mining idea.

I am unable to reconcile the size-distribution data from the Nevada mineral properties presented recently with Zipf's Law, although they do seem to follow some sort of power law.


Using the straightforward approach to a Zipf's-law characterization gives us the red line, which suggests that there are far too many gold deposits of >1/2 million ounces relative to the largest mine. To reconcile the known gold discoveries with Zipf's Law (green line), someone would need to find a 100-million-ounce deposit (if that doesn't get explorers interested in Nevada, I don't know what will)!

I, however, prefer the interpretation of the above data developed in our last installment--that there is a power-law relationship between size and rank, but that it breaks down for the largest deposits because there is some sort of limit to the size of gold deposits (at least near the Earth's surface), although I do not know what the limiting factor(s) would be.

Another scaling law is Benford's Law, an empirical observation that the first digits of measurements of many kinds of phenomena are not uniformly distributed. In particular, the first digit is a '1' approximately 30% of the time, a '2' 18% of the time, a '3' 12% of the time, and so on, with the probability falling as the digit increases; the probability of leading digit d is log10(1 + 1/d).

First digit    Probability of occurrence
1              0.30103
2              0.176091
3              0.124939
4              0.09691
5              0.0791812
6              0.0669468
7              0.0579919
8              0.0511525
9              0.0457575


So if you had a table of the lengths of every river in the world, for instance, you would find that approximately 30% of the first digits were '1'--rivers with lengths of 1,904 km, 161 km, or 11 km would all fall into this category.

Furthermore, it doesn't matter what units you use--if you had measured the river lengths in inches, you would observe the same relationship. The reason is that if you double a number which begins with '1', you end up with a number which begins with either '2' or '3'. Hence, the probability that the first digit is either '2' or '3' must be the same as the probability of it being '1'. In the table above, we see this is the case.
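A sketch demonstrating both points--the Benford frequencies and their indifference to units--on a synthetic, scale-spanning sample:

```python
# Benford check and its scale invariance: first-digit frequencies of a
# sample spanning many orders of magnitude match log10(1 + 1/d), and
# multiplying every value by a conversion factor (km -> inches) leaves
# the frequencies essentially unchanged.
import numpy as np

rng = np.random.default_rng(5)
lengths = np.exp(rng.uniform(0, np.log(1e6), 100000))  # log-uniform sample

def first_digit_freqs(x):
    digits = (x / 10.0 ** np.floor(np.log10(x))).astype(int)  # leading digit
    return np.bincount(digits, minlength=10)[1:] / x.size

benford = np.log10(1 + 1 / np.arange(1, 10))
print(np.round(first_digit_freqs(lengths), 3))            # as measured
print(np.round(first_digit_freqs(lengths * 39370.1), 3))  # km -> inches
print(np.round(benford, 3))                               # theoretical
```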

It isn't only natural phenomena that are characterized by Benford's Law. It has also been used as a tool to identify fraud in forensic accounting.

The deposit-size data from Nevada seem to conform to Benford's Law.


And if I convert the deposit size from ounces to metric tonnes . . .


So although Zipf's Law doesn't describe Nevada gold deposits well (at least at present), Benford's Law does.

Saturday, January 7, 2012

Gold, part 2: Is there a maximum size for gold deposits?

In our last installment, I presented a graph showing the size distribution of global gold deposits greater than one million ounces. In it I tried to estimate the slope of the relationship between the size of deposits and their rank in size, in the hope that the slope had some predictive power for the deposits yet to be found.


Two suggested scaling laws for the size-distribution of gold deposits (global).

Once again, the interpretation of these graphs is that the rank in size (less one) of any deposit is the abscissa, and its size is the ordinate. The reason for subtracting one from the rank number is that the largest deposit shown on the graph is actually the second-largest deposit known--there is one deposit larger.

In our last installment, we assumed that the blue line was the better representation of the scaling law for gold deposits. Today we explain why the yellow line may be the correct answer, and that it does not mean we can expect to find multi-billion ounce deposits of gold (at least nowhere near the Earth's surface).

- - - - - - - -

The Earth system consists of myriads of local interacting subsystems. Intuitively, we might expect the overall effects of these to merge into a background of white noise; instead, we find that highly ordered structure arises on a variety of scales, ranging up to that of the globe.

A simple scaling law for the size-distribution of gold is an example of red noise (or pink noise, depending on the slope). The observed power law is characteristic of a system in a state of self-organized criticality (SOC), as is nicely outlined here. In essence, the scaling law we observe in the size-distribution of gold deposits is due to self-organization in the geological processes which control the reservoir size of the crustal fluids which contained the gold, and possibly also in the fracturing process which preceded the emplacement of the gold in the rocks.

Today we look at the size-distribution of gold deposits in Nevada.


The above graph was plotted using the data from the Nevada Bureau of Mines and Geology review of its mineral industry for 2009. There were 191 (unambiguous) significant deposits of precious metals for which I have combined the most recent mineral resources (all categories) plus any pre-existing historical production. I only counted gold ounces--and freely acknowledge that some of the mines in the above chart were probably better described as copper or silver mines--and treated all categories (proven and probable reserves, measured and indicated resources, and inferred resources) equally. If you feel the methodology is flawed you are invited to use your own.

We can compare the current size-distribution of gold deposits to the size-distribution of gold deposits in the Carlin Trend in 1989 (Rendu and Guzman, 1991).


Remarkably, both sets of data appear to be described by a straight line of constant slope, at least for deposits between about 100,000 ounces and 10 million ounces in size.


During Nevada's "maturation" as a gold province, the scaling law describing the size-distribution of gold deposits remained constant over two orders of magnitude in size. The slope of these lines is about 1.5, placing the scaling law exponent between pink noise and red noise.

When we look at the figure at the top of the page, the blue line has a slope of less than 1, whereas the yellow line has a slope of about 1.5. For this reason, I propose that the yellow line is the better representation of the scaling law for the global deposits. The reason I first leaned towards the blue line was an insufficiency of observations.

For comparison, if I only looked at deposits in Nevada greater than 1 million ounces, I would not be as confident describing the size-distribution with the yellow line.

SOC theory would seem to tell us the entire distribution should be characterized by a power law. Why not gold deposits?

In nature, there are limits. Infinity is not an option. Earthquakes are recognized as SOC processes, yet they have a maximum size, as the capacity for earth materials to store and transmit strain is finite. Similarly, we would expect there to be an upper limit for the size of crustal reservoirs of gold-bearing fluids. The result is that the largest gold deposit we find is much less than we would predict on the basis of our observed power law.
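A sketch of the argument: sample deposit sizes from a pure power law and from the same law with an upper cutoff, and compare the largest values in each sample. The exponent, cutoff, and sample size below are all invented for illustration:

```python
# Truncated vs pure power law: an upper size limit pulls the largest
# observed value well below the pure power-law prediction.
import numpy as np

rng = np.random.default_rng(6)
alpha, xmin, n = 1.5, 0.1, 191      # exponent, size floor (Moz), sample count

u = rng.random(n)
pure = xmin * (1 - u) ** (-1 / (alpha - 1))   # inverse-CDF Pareto sampling

xmax = 50.0                          # hypothetical upper size limit, Moz
a = 1 - alpha                        # inverse-CDF sampling, truncated Pareto
truncated = (xmin ** a + rng.random(n) * (xmax ** a - xmin ** a)) ** (1 / a)

print(f"largest, pure power law: {pure.max():8.1f} Moz")
print(f"largest, truncated:      {truncated.max():8.1f} Moz")
```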

This explanation does not explain why there also appears to be a deficit in small deposits. For this the reason is economic. Under the current reporting regime (NI 43-101), gold in the ground cannot be considered a "deposit" unless it is reasonable to expect it to be exploited profitably. The requirement for economic exploitability will exclude many small--well, since they are not deposits, let's call them "collections"--of gold. Additionally, many company geologists will ignore such collections as soon as it becomes clear they are unlikely to become a deposit.


So it's up to these guys! (sorry about the quality--this is a point-and-shoot photo scanned way back in the '90s). He's using a rubber cut-out from an inner tube as a pan. This site is a thrilling walk north of Asanta village, western Ghana, on land almost certainly on a concession held by Endeavour.

References:

Hronsky, J. M. A., 2011. Self-organized critical systems and ore formation: The key to spatial targeting? SEG Newsletter, 84, 3p.

Nevada Bureau of Mines, 2010. The Nevada Mineral Industry 2009. Special Publication MI-2009. http://www.nbmg.unr.edu/dox/mi/09.pdf, accessed today.

Rendu, J. M. and Guzman, J., 1991. Study of the size distribution of the Carlin Trend gold deposits. Mining Engineering, 43: 139-140.

Sunday, December 18, 2011

Self-organization and wealth distribution

The question of wealth inequality has been making headlines, in everything from the Occupy Wall Street movement decrying the wealth of the 1%, to discussion in the Republican presidential-candidate popularity contest currently ongoing in the US.

There have always been voices clamouring for equal wealth for everyone, but the real world doesn't work like that. Wealth inequality doesn't seem particularly unfair given the inequalities in natural abilities and in access to capital or resources. Intuitively, it seems that the distribution of wealth in society should follow a power-law distribution--one in which the observations show a 1/f distribution, as described in this article.

Recent modeling studies suggest a 1/f distribution over most of the population, but wealth distribution becomes exponential near the tails. The model distribution is described as Pareto-like, with a relatively few super-wealthy floating over an ever-changing middle class.

So wealth inequality should be expected in any society, no matter how even the playing field. The skills necessary to navigate through the economy are not evenly distributed. Some individuals play better than others. Therefore, some individuals will be wealthier than others. Let's take a look through some public data and see if we can recognize a power-law distribution.

According to Wolff (2010), the breakdown of wealth among different quintiles (and finer groups) is:

Fraction of population    Fraction of wealth
Lowest 40%                 0.2%
40 - 60%                   4.0%
60 - 80%                  10.9%
80 - 90%                  12.0%
90 - 95%                  11.2%
95 - 99%                  27.1%
99 - 100%                 34.6%

Given that the wealth of Americans in 2007 was reported by the Fed to be $79.482 trillion, and the population of the US at that time was 299,398,400 (roughly), we can plot a logarithmic graph of individual wealth vs population to check for self-organization in wealth distribution.

To do this, I have assumed that the individual in the middle of each group has the average wealth of the group. Based on past experience, this estimate will tend to be biased; however, given the number of orders of magnitude on the resulting graph, the errors are too small to notice.
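For the record, here is a sketch of the construction--the bracket table and totals are as quoted above, and the mid-bracket assumption is the one just described:

```python
# Reconstruction of the graph's inputs from the Wolff (2010) table:
# average wealth per person in each bracket, assigned to the person at
# the middle of the bracket.
total_wealth = 79.482e12
population   = 299_398_400

# (fraction of population, fraction of wealth), poorest first
brackets = [(0.40, 0.002), (0.20, 0.040), (0.20, 0.109),
            (0.10, 0.120), (0.05, 0.112), (0.04, 0.271), (0.01, 0.346)]

cum_pop = 0.0
for pop_frac, wealth_frac in brackets:
    avg = wealth_frac * total_wealth / (pop_frac * population)
    midpoint_rank = (cum_pop + pop_frac / 2) * population
    print(f"person #{midpoint_rank:>12,.0f} (from bottom): ~${avg:,.0f}")
    cum_pop += pop_frac
```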


To interpret this graph, consider the first two points--they suggest that roughly 80 million people have less than about $2,500, and about 130 million people have less than about $75,000. Most of the data appear to lie along a line of fit, but there are a few exceptionally rich individuals, including some on the Forbes 400 list, who plot far above the line. 

Also note that "the 99%" includes people who have about $8 million in assets.

The observed distribution agrees somewhat with the models described above--a few super-wealthy lording it over the rest. However, there is a significant difference between our observed slope and that of the models--the models suggest a slope for the straight line of about 2, whereas on our graph the slope is over 4 (meaning four orders of magnitude in wealth over one order of magnitude of population).

On our graph, roughly 290,000,000 people have less than $1 million, and 29,000,000 have less than $100. Seems a tad steep. With a slope of 2, the 29,000,000 would have less than $10,000.

If the wealth of the entire population were described by a 1/f distribution, then the richest American would have a wealth of only about $1.5 million. We here at the World Complex think it would be difficult to manage that summer home in the Hamptons with such a paltry sum.

The Ebert and Paul (2009) paper linked to above attempts to explain the semi-permanent nature of the super-rich. The super-rich have benefited from leverage in the system, and remain at the top due to the ongoing access to greater leverage than is possible for the average citizen. 

A poor geologist like me can only wonder--what happens when leverage becomes wealth-destroying rather than wealth-enhancing? Unfortunately, the answer we are seeing is that the super-rich get bailed out of their losing positions by everyone else.

And here we come to the question of fairness in the system. A fair system with an even playing field will always result in inequalities--but even extreme inequalities will be tolerated to the extent that the system is perceived as being fair. In the past, during times when the system was fair(er), people tended to respect that someone had earned money and was able to enjoy the fruits of success. Under the present system, there is a widespread and growing suspicion that unusually wealthy individuals have obtained their wealth not through production of wealth but through gaming the system, and even stealing wealth from those lower down the socio-economic ladder.


Lastly, we see the same plot as above, but with the estimated and "ideal" wealth distributions as determined from a series of nationwide interviews of over 5,500 respondents, reported in Norton and Ariely (2010).

Clearly most Americans thought the system was more equitable than was actually the case, and interestingly, they seemed to wish the system were more equitable still. I would like to point out that the "ideal" distribution is actually mathematically impossible (the third and fourth quintiles had equal wealth), which seems fitting.

In an ideal world, according to the survey, only 10 million Americans would have less than $100,000 in assets, and no one would have as much as a million.

Unfortunately the survey neglected to ask respondents what they felt the wealth of Messrs Gates and Snyder (nos. 1 and 400 on the Forbes 400 list) should be in an ideal world, which might have been very interesting.

Saturday, April 9, 2011

A little more on rhomboidal rills

Spent a pleasant day on Axim beach watching the rhomboidal rills form while, at sea, Ivorian refugees streamed eastward.

The bedforms on this particular beach are not as impressive as last time, due to a lack of black/purple heavy minerals. In these forms the sorting is mainly seen as variations in grain size.


No scale. My bad. The largest features are about 20 cm in length.


The more detailed view above shows one of the larger rhomboidal rills left of centre breaking up into many smaller features.

Bandwidth won't allow any of the films to be posted just now, but one thing that has become clear is that even after the wave has receded, there is a very slow flow of coarse light grains down the edges of the rills.

Self-organization in the flow showed up well in the photo below.









Self-organized branching flows during wave recession.

Saturday, February 12, 2011

Scale invariance in geological phenomena: Size frequency distributions and self-organization

Size-frequency distribution of geologic features

The size-frequency distribution of geologic phenomena is closely related to scale invariance. In particular, we recognize that small objects are more common than large objects. What is less intuitively obvious is that there is frequently a very specific relationship between the number of small objects and the number of large objects.

The number of objects above a certain size tends to follow a power-law distribution. If you were to graph the count of, say, faults greater than a certain size on one axis against size on the other, and if this graph were a log-log graph, the plot would be a straight line.

Furthermore, the slope of that line could be used to characterize the population of the phenomena in question, being described sometimes as a measure of the fractal dimension of the system.

It is easy to imagine applying this to a real scenario.

The above is a photo of a channelized debris flow in the Yakataga Formation exposed on Middleton Island, in the Gulf of Alaska. A close look at the makeup of the debris flow would reveal sediments in a continuous gradation from cobble-sized fragments down to clay. If you counted the particles, you would expect to find a great many more clay-sized fragments than cobble-sized fragments.


Idealized 1/f distribution expected for grain sizes.

Flicker noise (also known as 1/f noise) is held to be the dominant scaling law for geologic phenomena and is a marker of scale invariance. It is observed in borehole logs (Bean, 1996), fractures (Walsh et al., 1991), ocean temperature distributions through time (Fraedrich et al., 2004), avalanches, forest fires, earthquakes, extinctions, and many other phenomena (Turcotte, 1999).

Sediment grain-size distributions

For several decades there has been an argument about the nature of sediment grain-size distributions in real deposits. One school of thought (let's call them the statistical school) held that sediment grain sizes should be log-normally distributed, and could therefore be expressed in terms of a mean and a standard deviation against a logarithmic size scale (below).

Log-normal data distribution.

As outlined in Limpert et al. (2001), there are many natural processes which are believed to yield log-normal size distributions. My opinion is that the normal (or log-normal) concept is overdone because of the nature of introductory statistical education. For many who have had a rudimentary introduction to statistics, all distributions are normal (or log normal), possibly with some modification to the tails.

The alternative approach (following Bagnold, 1941) held that there are two competing processes responsible for sediment deposition--transport of material to the area of interest and transport of material away from it. Each of these processes can have its own relationship between probability and grain size, so the size distribution of the particles actually deposited can be asymmetric.

The log-hyperbolic distribution is so named because the distribution has the form of a hyperbola when plotted on a log-log graph. The distribution is described by four parameters. Phi and gamma describe the slopes of the two asymptotes, which reflect the likelihood that any particle will be carried to the area of interest (gamma) and the likelihood that any particle will be removed from the area of interest (phi). The other two parameters (delta and pi) reflect how closely each limb of the hyperbola follows its asymptote.

As an aside, returns on investment portfolios are usually described by normal distributions. However, it is frequently recognized that the tails of the distribution are "fatter" than should be the case (meaning that extremely large annual gains and losses are more common than expected). Possibly these distributions should be hyperbolic instead of normal. Indeed, Nassim Taleb's (2007) black swan events may fall into this category.


The log-normal distribution traces a parabola on a log-log plot. It is described by two parameters: mu (the mean) and sigma (the standard deviation). The weights of the tails of the log-hyperbolic distribution will be greater than those of the log-normal distribution, to a typical investor's regret.

At present it would appear that those who believe grain-size distributions are log-hyperbolic hold the upper hand. Such support as exists for log-normal descriptions stems mainly from the fact that they are more easily handled.
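To see the difference in shape, one can compute the (unnormalized) log-densities directly; the parameter values below are arbitrary, and the hyperbolic form is written from the asymptote description given in the caption above:

```python
# Against t = log(grain size), the log-normal's log-density is a parabola,
# while the log-hyperbolic's is a hyperbola that straightens onto
# asymptotes of slope gamma (left) and -phi (right). Unnormalized.
import numpy as np

t = np.linspace(-6, 6, 13)            # t = log(grain size)
mu, sigma = 0.0, 1.0                  # log-normal parameters (arbitrary)
gamma, phi, delta = 1.0, 1.5, 1.0     # asymptote slopes and curvature

log_normal = -((t - mu) ** 2) / (2 * sigma ** 2)            # parabola
log_hyper = (0.5 * (gamma - phi) * (t - mu)
             - 0.5 * (gamma + phi) * np.sqrt(delta ** 2 + (t - mu) ** 2))

for ti, ln, lh in zip(t, log_normal, log_hyper):
    print(f"t={ti:+5.1f}   log-normal={ln:7.2f}   log-hyperbolic={lh:7.2f}")
# Far from the peak the parabola plunges quadratically, while the
# hyperbola falls only linearly: fatter tails for the log-hyperbolic.
```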

Time distribution and self-organized criticality

Bak et al. (1987) studied a model for the growth of a sandpile by dropping individual grains of sand onto a two-dimensional grid. Whenever the pile at a gridpoint exceeded a prescribed threshold, the addition of a grain of sand would cause an avalanche, whereby one grain would cascade one gridpoint to the west (say) and another one gridpoint to the north (say).

This was an improvement over the typical two-dimensional model in which the grains of sand are dropped along a line. In the 2-D model the result is very simple--the pile of sand grows until the slope everywhere reaches the angle of repose, after which every subsequent grain of sand cascades down the side and off the pile. Not very interesting.

Bak et al. (1987) considered that the 3-D sandpile might behave in an analogous fashion to the 2-D model: the sandpile would grow until the slopes reached a critical threshold, after which a single grain of sand would cause a massive avalanche. The trouble with this notion is that it did not seem logical that a system would spontaneously evolve to a point of maximum instability.

In carrying out the experiment, they discovered that instead of a long period of no avalanches followed by a massive avalanche, there was actually a continuous stream of avalanches of varying sizes. The size distribution of the avalanches had a 1/f distribution in time as well as in space.
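A miniature version of the experiment, using the standard height-model form of the rule (a site holding four grains topples, sending one to each neighbour) rather than the west/north slope bookkeeping described above:

```python
# Bak-Tang-Wiesenfeld sandpile in miniature: drop grains at random sites,
# topple any site with 4 or more grains (one grain to each neighbour;
# grains toppling off the boundary are lost), and record avalanche sizes.
import numpy as np
from collections import Counter

rng = np.random.default_rng(7)
L = 20
z = np.zeros((L, L), dtype=int)   # grains at each gridpoint
sizes = []

for _ in range(20000):
    r, c = rng.integers(L, size=2)
    z[r, c] += 1                  # drop one grain at a random site
    topples = 0
    while True:
        unstable = np.argwhere(z >= 4)
        if unstable.size == 0:
            break                 # pile is stable again
        for i, j in unstable:
            z[i, j] -= 4          # topple: one grain to each neighbour
            topples += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < L and 0 <= nj < L:
                    z[ni, nj] += 1   # off-grid grains fall off the pile
    if topples:
        sizes.append(topples)

counts = Counter(sizes)
for s in sorted(counts)[:12]:
    print(f"avalanche size {s:3d}: {counts[s]} events")  # no typical size
```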


The 1/f distribution of events through time had been noted previously, but not as a general phenomenon. The Gutenberg-Richter law describing the size distribution of earthquakes is an example of 1/f noise, but this law was never treated as anything but an empirical law applicable only to earthquakes.

One significant application for flicker noise in geological phenomena is in the field of risk management. Written history over much of North America is only a few hundred years, which is not nearly enough to establish the pattern for earthquakes with a recurrence interval of, say, a thousand years. And yet knowing the size of the thousand-year earthquake may be significant.

If you are building a nuclear power plant with an expected lifespan of fifty years, then there is roughly a 1 in 20 chance that a thousand-year earthquake will strike during its operational life. It seems prudent to design the reactor to withstand such an earthquake. We establish its size by charting the size-frequency distribution of the earthquakes we do observe, most of which will have been small. We can then extrapolate our line of best fit to the thousand-year recurrence interval on the graph and estimate, with reasonable confidence, the moment magnitude of the thousand-year earthquake--which will be somewhat larger than any of the earthquakes we have observed to date.
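A sketch of that extrapolation on an invented catalogue (the record length, magnitude bins, and counts are all made up):

```python
# Fit the Gutenberg-Richter relation log10(N) = a - b*M to the frequent
# small earthquakes, then read off the magnitude whose recurrence
# interval is 1000 years.
import numpy as np

years = 300.0                               # length of written record
mags = np.arange(3.0, 6.6, 0.5)             # magnitude thresholds observed
counts = np.array([2400, 760, 240, 75, 24, 8, 2, 1])  # events >= M (invented)

rate = counts / years                       # annual exceedance rate
b, a = np.polyfit(mags, np.log10(rate), 1)  # slope is -b of the G-R law
print(f"b-value: {-b:.2f}")

# magnitude whose annual rate is 1/1000 (recurrence interval 1000 yr):
m_1000 = (np.log10(1.0 / 1000.0) - a) / b
print(f"1000-year earthquake: M {m_1000:.1f}")
```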

The concept of self-organized criticality has also been applied to economic systems (notably to stock market crashes, by Sornette, 2003).

References

Bagnold, R. A., 1941. The physics of blown sand. Methuen, London.

Bak, P., Tang, C., and Wiesenfeld, K., 1987. Self-organized criticality: An explanation of 1/f noise. Physical Review Letters, 59: 381-384.

Bean, C. J., 1996. On the cause of 1/f-power spectral scaling in borehole sonic logs. Geophysical Research Letters, 23: 3119-3122.

Limpert, E., Stahel, W. A., and Abbt, M., 2001. Log-normal distributions across the sciences: keys and cues. Bioscience, 5: 341-352.

Sornette, D., 2003. Why stock markets crash: critical events in complex financial systems. Princeton University Press, Princeton.

Taleb, N. N., 2007. The Black Swan: The impact of the highly improbable. Random House, New York.

Turcotte, D. L., 1999. Self-organized criticality. Reports on Progress in Physics, 62: 1377-1429.

Walsh, J., Watterson, J., and Yielding, G., 1991. The importance of small-scale faulting in regional extension. Nature, 351: 391-393.