Tuesday, November 26, 2013

NRH gold deposits follow-up - still lots more gold out there!

Once again the good folks at Natural Resource Holdings (this time teamed up with Visual Capitalist) have updated their report (pdf) listing gold deposits greater than one million ounces in size.

In earlier postings I briefly discussed the expected size-distribution of gold deposits, using an earlier list published by NRH and historical Nevada as examples. My conclusion was that the size-distribution of gold deposits follows a scaling law over at least a couple of orders of magnitude. There is a maximum size for gold deposits, because hydrothermal cells can likely only grow so large before they become unstable and divide into smaller cells, leading to the gold of one natural deposit being scattered across several discrete (economic) deposits. So how do we count them?

There are minimum sizes for deposits as well, primarily for economic reasons. So our scaling law only seems to be valid over a pretty limited range.


The yellow line is a possible scaling law to describe the size-distribution of gold deposits. Interestingly, its slope is 1 (pink noise), a very common scaling law in physical systems. It is quite different from the slope of 1.5 obtained from Nevada deposits. I'm not sure how to explain this, except that the Nevada deposits are almost exclusively of one type, whereas the global deposits represent all known settings.

As before, I don't expect that we will find many more huge (> 35 M oz) deposits; but there is potential to fill in the gap below the line in the 1 M oz range. From the above graph, we would still expect to find at least 400 more deposits, mostly in the 1 - 3 M oz range.
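To make that counting argument concrete, here is a minimal sketch of how one might fit a rank-size scaling law and estimate the shortfall at the small end. The deposit sizes in the array are made-up placeholders, not the NRH list, so the fitted exponent and the "gap" it reports are only illustrative of the method.

```python
import numpy as np

# Illustrative deposit sizes in millions of ounces (made-up placeholders,
# not the actual NRH list).
sizes_moz = np.array([65, 40, 33, 25, 20, 16, 12, 9, 7, 6, 5, 4, 3.5, 3,
                      2.5, 2.2, 2.0, 1.8, 1.5, 1.3, 1.2, 1.1, 1.0])

# Rank-size view: N(s) = number of deposits of size >= s.
sizes_sorted = np.sort(sizes_moz)[::-1]
ranks = np.arange(1, len(sizes_sorted) + 1)

# Fit log10(N) = a - b*log10(s); b is the scaling exponent
# (b near 1 would correspond to the slope discussed above).
slope, a = np.polyfit(np.log10(sizes_sorted), np.log10(ranks), 1)
b = -slope

def expected_count(s_moz):
    """Deposits of size >= s_moz predicted by the fitted line."""
    return 10 ** (a + slope * np.log10(s_moz))

predicted = expected_count(1.0)        # deposits the line says should exist
observed = (sizes_moz >= 1.0).sum()    # deposits actually in the census
print(f"scaling exponent b = {b:.2f}")
print(f">=1 Moz predicted: {predicted:.0f}, observed: {observed}, "
      f"gap: {predicted - observed:.0f}")
```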

In reality, the number will likely be higher, as the census would still not be complete. It would be foolish to assume there are no deposits in Antarctica, for instance, even if climate and politics make their exploitation unlikely. There are also numerous deposits on the seafloor, even if it may be a long time before control systems reach the point where they can correctly distinguish between ore and waste material while more than a km underwater.

All of this suggests that the yellow line needs to be shifted upwards--which opens up the possibility of many more deposits in the >10 M oz category still to be found. No guarantee on costs of all these, sorry.

---
Almost forgot to h/t Otto - although I'm sure I would have noticed this eventually.

Saturday, November 23, 2013

Interpretation of scaling laws for US income

It has been remarked that if one tells an economist that inequality has increased, the doctrinaire response is "So what?"
                                          - Oxford Handbook of Inequality

h/t Bruce Krasting

Social Security online has published a full report on income distribution in America.

Two years ago we looked at the distribution of wealth in America. Today we are looking at income.


There were a total of about 153 million wage earners in the US in 2012, which is why the graph suddenly terminates there.

As we have discussed before, in self-organizing systems, we expect the observations, when plotted on logarithmic axes, to lie on a straight line. Casual observation of the above graph shows a slight curve, which gives us some room for interpretation.

I have drawn two possible "ideal states"--the yellow line and the green line. Those who feel the yellow line best represents the "correct" income distribution in the US would argue that the discrepancy at lower incomes (below about $100k per year) represents government redistribution of wealth from the pockets of the ultra-rich to those less deserving. Followers of the green line would argue the opposite--that the ultra-wealthy are earning roughly double what they should be, based on the earnings at the lower end.

Which is it? Looking at the graph you can't tell. But suppose we look at the numbers. Adherents of the yellow line would say that roughly 130 million people are getting more than they should. The largest excess is about 40%, so if we assume that on average these 130 million folks are drawing 20% more than they should (thanks to the enslavement of the ultra-wealthy), these excess drawings total more than $1 trillion. Thanks, Pluto!

The trouble with this analysis is that the ultra-wealthy--the top 100,000 earners--took in a combined total of only about $400 billion. They simply aren't rich enough to have provided the middle class with all that money.

Now let's consider the green line. Here we are suggesting that the ultra-wealthy are earning about twice as much as they should be, and let's hypothesize that this extra income is somehow transferred from the middle and lower classes.

As above, the total income of the ultra-rich is about $400 billion. If half of this has been skimmed from the aforementioned 130 million, they would each have to contribute about $1500.
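For what it's worth, the arithmetic for both scenarios fits in a few lines. The average wage used below is an assumed round figure, not something taken from the SSA report; everything else is rounded from the text above.

```python
# Back-of-envelope check of the two scenarios discussed above. The average
# wage is an assumed round number, not a figure from the SSA report; the
# rest is rounded from the text.

wage_earners_below_line = 130e6   # people earning below roughly $100k/yr
avg_wage = 40_000                 # assumed average wage, USD
excess_fraction = 0.20            # "drawing 20% more than they should"

# Yellow-line scenario: total excess drawn by the 130 million.
yellow_excess = wage_earners_below_line * avg_wage * excess_fraction
print(f"Yellow line: excess drawings ~ ${yellow_excess / 1e12:.1f} trillion")

# Green-line scenario: the top 100,000 earn ~$400 billion, half of which is
# hypothesized to be skimmed from the 130 million below the line.
ultra_wealthy_income = 400e9
per_earner = 0.5 * ultra_wealthy_income / wage_earners_below_line
print(f"Green line: contribution per wage earner ~ ${per_earner:,.0f}")
```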

I expect a heavier weight has fallen on those at the upper end of the middle-class spectrum; but even so, $1500 per wage earner does seem doable. Of the two interpretations, the green line looks to be at least plausible, and we are forced to conclude that those who believe the ultra-wealthy are drawing a good portion of their salaries from everyone else have a point.

But isn't $1500 per year a small price to pay to create a really wealthy super-class?

Paper on causes of income inequality full of economic axiomatic gibberish here (pdf).

Thursday, November 21, 2013

The Classification Problem

Posting has been light as I have been gobsmacked by something I discovered in a book that I've had for almost twenty years. I've always had trouble understanding it. I'm a geologist, and find this sort of thing (pdf) challenging.

It has to do with these probability density plots I've been making in phase space. I developed the idea intuitively, but the publishing has always been a slog because I had difficulty presenting a theoretical justification of my approach. I had made a leap of faith that each area of high probability density in phase space was centred about an attractor of indeterminate type.

It was a bit of a fluke getting the paper published in Paleoceanography--the reviewers weren't sure they agreed with it but were willing to give it a go. In terms of number of citations it ranks among the least influential publications in the journal's history.

The discovery was an interpretation of Zeeman's classification problem. His idea was this: given a system described by a vector field on a manifold (say, a 2-d plane, which is what I have been using, but any surface is possible), so that the time-evolution of the system traces a trajectory flowing along the vectors; and given that the system is somewhat noisy, so that there is a small random component to the evolution along the trajectory; then the end-state probability density arising from all possible initial states on the manifold is an invariant property of the vector field. What you end up with are diffuse balls of higher probability around each of the attractors on the manifold in phase space.

I read this as a justification of my intuitive approach, and he's a real mathematician.
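For the curious, here is a minimal sketch of Zeeman's picture in code: a noisy flow on a plane with two attractors, integrated from many random starting points. The vector field, noise level, and attractor positions are all invented for illustration; the point is only that the end-state histogram settles into diffuse balls around the attractors regardless of where the trajectories start.

```python
import numpy as np

rng = np.random.default_rng(0)

def flow(p):
    """Toy vector field with point attractors at (-1, 0) and (+1, 0)."""
    x, y = p
    return np.array([x - x**3, -y])

dt, noise = 0.01, 0.05
n_traj, n_steps = 300, 1500

end_states = []
for _ in range(n_traj):
    p = rng.uniform(-2, 2, size=2)     # arbitrary initial state on the plane
    for _ in range(n_steps):
        # deterministic flow plus a small random kick at each step
        p = p + flow(p) * dt + noise * np.sqrt(dt) * rng.standard_normal(2)
    end_states.append(p)
end_states = np.array(end_states)

# The histogram of end-states approximates the invariant probability
# density: diffuse balls of high probability around each attractor.
hist, xedges, yedges = np.histogram2d(end_states[:, 0], end_states[:, 1],
                                      bins=40, range=[[-2, 2], [-2, 2]])
print("ended near (-1, 0):", (end_states[:, 0] < 0).mean())
print("ended near (+1, 0):", (end_states[:, 0] > 0).mean())
```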

Edit: Reference

Zeeman, E. C., 1988. Stability of dynamical systems. Nonlinearity, 1: 115. doi.org/10.1088/0951-7715/1/1/005

Saturday, November 16, 2013

Ode to Janet Yellen

The only problem with this song is that it should have been written in 1913.


Wednesday, November 13, 2013

Complexity, bifurcations, catastrophe

What makes a system complex?

It is a perplexing problem--both its description and its quantification. One might think that the description of a system as complex would suggest it has many subsystems each acting in accordance with its own rules, and interacting with each of the other subsystems in ways that we find difficult to describe. But there are systems involving very few "parts" which exhibit the kind of behaviour we call complex.

Another possible definition may stem from the notion of the compressibility of the system's information. Is it difficult to describe the sequential outputs in a manner that is simpler than merely listing all of our observations? A good random number generator would exhibit such behaviour, but we would not describe its output as complex.
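A quick way to see the limits of that definition is to compare how well a standard compressor squeezes a random stream versus a trivially repetitive one. This is only a rough proxy for compressibility of a system's information, not a formal measure.

```python
import os
import zlib

# Compare compressibility of a random stream and a trivially repetitive one.
n = 100_000
streams = {
    "random":   os.urandom(n),              # output of a good RNG
    "periodic": b"0123456789" * (n // 10),  # output with a short description
}

for name, data in streams.items():
    ratio = len(zlib.compress(data, 9)) / len(data)
    print(f"{name:9s} compresses to {ratio:.0%} of its original size")

# The random stream barely compresses, yet we would not call it complex --
# so incompressibility alone cannot be the whole definition.
```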

John Casti has proposed that the complexity of a system is at least partially dependent on the observer. He uses the example of a rock lying on the ground. To the layperson, there are only a limited number of ways to interact with the rock (kicking it, breaking it, throwing it, etc.). To a trained geologist, there are more (all of the above, plus mass-spectral geochemistry, x-radiography, electron probe, etc.). So the rock seems more complex to the geologist, but that additional complexity actually stems from the observer.

Another example is share prices, where the complexity depends on the timing of your observations. Consider the price of, say, Anadarko Petroleum over the past year, using closing prices only (for disclosure--no position).


Then we can look at one-minute increments on a daily chart (Nov. 6, 2013).


Note that the variability within the two charts doesn't seem all that different despite the change of scale. Lastly, we could look at how trading in Anadarko looks over one second--one particular second, that is, between 3:59:59 and 4:00:00 on May 17 of this year.



You probably wouldn't expect a lot of change over 1 s, but in this case you would be wrong: the price fell from about $90 to $0.01 in less than 50 ms. That's a loss of $1 billion in market capitalization per millisecond--keep losing money at that rate and before long you're talking real money!
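A quick sanity check on that rate of loss: the share count below is an assumption (roughly half a billion shares outstanding at the time), not a figure from the post; the 45 ms comes from the Nanex piece linked below.

```python
# Sanity check on the rate of loss. The share count is an assumption
# (roughly half a billion shares outstanding), not a figure from the post;
# the 45 ms comes from the Nanex piece linked below.
shares_outstanding = 500e6
price_before, price_after = 90.00, 0.01
elapsed_ms = 45

market_cap_lost = shares_outstanding * (price_before - price_after)
print(f"market cap lost: ${market_cap_lost / 1e9:.0f} billion")
print(f"rate of loss:    ${market_cap_lost / elapsed_ms / 1e9:.1f} billion per ms")
```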

This all seems to trigger a philosophical debate--is the complexity present when none of the observers are capable of seeing it? In the case of Anadarko, if you were a pension fund, your losses would have been real (although the trades were all cancelled and reversed after market close).

If the complexity of a system arises from within, then what characteristics do we ascribe to complexity? One characteristic is discontinuous behaviour, particularly when the inputs to the system are continuous. For instance, tectonic processes gradually cause stresses to accumulate in an area in a fairly uniform fashion, until a critical threshold is reached and an earthquake occurs.

The branch of mathematics that investigates the sudden onset of convulsions wrought by a slow change is called catastrophe theory. Catastrophe theory is generally considered to be a branch of bifurcation theory. By bifurcation we normally mean a qualitative change in the behaviour of a complex system as some parameter varies. It could represent a transition from one stable state to another. It could also represent the development of new areas of stability in phase space (or their disappearance), or simply a change in the nature of a chaotic attractor.

In particular, sometimes the sudden appearance of a new mode of stability is brought about by a slowly varying parameter passing a critical threshold. Such behaviour is called a catastrophe, in the mathematical sense.
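The simplest concrete example is the fold catastrophe. The sketch below is the textbook toy system dx/dt = r + x^2, not anything market-specific: as the control parameter r drifts slowly through zero, a stable equilibrium vanishes and the state suddenly runs away (run the parameter the other way and the stable state appears instead). Either way, a slow, continuous input produces an abrupt change.

```python
import numpy as np

# Fold (saddle-node) catastrophe toy model: dx/dt = r + x**2.
# For r < 0 there is a stable equilibrium at x = -sqrt(-r); as r drifts
# slowly past zero that equilibrium vanishes and the state runs away.

def simulate(r_start=-1.0, r_end=0.5, n=5000, dt=0.01):
    r_values = np.linspace(r_start, r_end, n)
    x = -np.sqrt(-r_start)              # start on the stable branch
    xs = []
    for r in r_values:
        x = x + (r + x**2) * dt         # Euler step with a slowly drifting r
        xs.append(x)
        if x > 10:                      # state has escaped: the catastrophe
            break
    return r_values[:len(xs)], np.array(xs)

r, x = simulate()
escape_r = r[np.argmax(x > 1.0)]
print(f"state tracked the equilibrium until r ~ {escape_r:.2f}, then ran away")
```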

In the past few years we have seen a major change in the mode of operations in the markets. In particular, the rapid growth of high-frequency trading has added complexity at timescales where such behaviour did not previously exist. This is another example of a catastrophe.

Tuesday, November 12, 2013

Sorry

Just a quick post to prove I'm not dead. Although if I have been replaced by someone who knows my password, how could anyone be sure?

Last night at dinner, my wife had to send back the desserts. Suddenly she got paranoid. "Remember that time we went to Stratford and I sent back my meal because it was too salty, and we had a long discussion with the chef and he gave me a complimentary meal, but for the rest of the day I was sick with diarrhea? I'm sure he put laxatives in the food. I hope they don't do that here," and I said, "I wouldn't worry about it. Laxatives aren't the oriental way. They're more into heavy metal toxicity. Russians and Israelis are partial to polonium-210." Then my son asked, "What do Canadians do?" I said they apologize a lot.

It isn't well known, but if you apologize enough to someone, they will die.

Sorry.
Sorry.
Sorry.
Sorry.



Saturday, November 9, 2013

Junior gold explorers have a structural problem

Exploration geologists spend a lot of their time looking for gold, but probably not much time thinking about it. Admittedly, my sample size is pretty small. I did ask a couple of colleagues why they thought we spent so much time looking for gold, when it was pretty clear that things like copper, zinc, and nickel are more socially useful.

I also never used to think about it, other than the pithy advice I received from an old employer long ago. "I like gold," he said, "because you can always sell it." There isn't the same market risk in gold that there is in iron--you can find a big iron deposit, but there may not be the demand required to justify bringing it into production.

I accepted that the twin notions of gold as money and of gold being the lowest-risk metal to market (but not the lowest-risk metal to find!) were the reason that exploration for gold typically consumes about 50% of the global budget for non-fuel mineral exploration. The same was true for 2012, according to the Exploration Review published in May of this year (log-in required).

Mining Engineering only carries information back to 1996, but over that time gold exploration has dominated all other (non-fuel) mineral exploration. With so much being spent on gold exploration, it's no surprise that there are so many junior gold exploration companies. The question that concerns me now is whether or not this is the normal state for exploration.

As it has been this way for nearly 20 years, one might think that this must represent the normal state. But there has been a growing instability in our economic system for a longer period than this, and this emphasis on gold exploration may be a function of our distorted economy.

As evidence, I submit the following exhibit.


Source (my annotation).

Looking at this chart, we see that the dominance of gold exploration came about as a consequence of the rising gold price into 1980, which itself was a consequence of Nixon removing the US dollar from the gold standard. But for the previous 30 years, gold exploration constituted less than 20% of total mineral exploration.

Part of this is logical. Gold rising from $35 to $800 would have attracted a lot of hot money into the sector. Part of this may also be a symptom of financialization of the economy that may have begun in the 1970s.

There has been a lot of commentary about how, this cycle, we have spent so much money only to discover so little. Part of the reason for our failure to discover is the change in the junior business model, which reflects the fact that a single hole no longer creates the kind of pop in a stock that can feed management. Consequently, junior companies have had to fund salaries as well as resource definition through immensely dilutive share issuances. Added to this is the difficulty of discovering a gold deposit (especially in a world of elevated political risk)--the risk of exploration itself, combined with the risk of expropriation or suddenly increased royalties, has made the discovery of an economic gold deposit a rare event indeed.

Clearly the emphasis on gold exploration is anomalous. An anomalously large amount of money being invested in one sector--more than can provide an economic return--is the definition of a bubble. It is painful for me to admit this, as I work in the sector. If we allow ourselves to consider this possibility, we may also recognize that the justifying statement--"you can always sell gold"--is equivalent to "house prices always go up" or "just buy the Nifty Fifty".

Just because a bubble has lasted a long time doesn't mean it will last forever. And just because it deflates from 10x normal to only 5x normal doesn't mean that we have experienced all the pain we will ever experience in the sector. We may have to consider that the current pain we are experiencing in the exploration industry is just the beginning of a return to reality. There could be much more pain to come as 80% of junior gold explorecos disappear.

I don't know what juniors are going to do. They can't really go into base metal exploration, at least not under the current financing regime, as they have no credible ability to raise the large amounts of capital to develop their deposits. It may be that the only way for them to survive will be as wholly- or partly-owned subsidiaries of major mining companies.

Eventually sanity will have to return to the economic system--the financialization will be reversed and goods will be manufactured in the developed countries--and the exploration budgets will favour base metals over gold. 

Tuesday, November 5, 2013

Happy anniversary chaos!

Fifty years ago, Edward Lorenz published the first paper (pdf) generally recognized to discuss chaos.

Lorenz didn't call what he had discovered 'chaos'. It's not clear that he really understood the importance of what he had discovered. He knew it was interesting, and when scientists find something interesting they publish it, and worry about the ramifications later.

What Lorenz had discovered is that a deterministic system could exhibit unpredictability. It is difficult to convey how unexpected this discovery was at the time, because the idea of what is now called chaos has since disseminated (although imperfectly) through our common culture. A deterministic system is one in which well-described rules (equations) operate on the current state to produce the outcome. Since the time of Laplace's Dr. Manhattan quote, it had always been assumed that nothing unexpected could arise from such a system once the initial position and the rules of motion were defined to arbitrary precision. Unexpected behaviour should only result from randomness.

So when Lorenz put together a simple model for atmospheric convection in the presence of heating--three simple differential equations, simple boundary conditions, and an arbitrary starting point--there was no reason to suspect that anything unexpected might occur. After all, all the parameters in the equations were known.

In essence, what he discovered was that minute variances in starting conditions led to extremely large variations in outcome. This again was unexpected, because our knowledge was largely built on assumptions of linear behaviour, in which small variations only grow larger slowly. Lorenz's interpretation of what he had discovered was to correctly point out that long-term weather forecasting was impossible, because it was impossible to measure the present state of the system with perfect accuracy--and the range of possible differing outcomes from the measurement accuracy was essentially the range of all possible weather.
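A crude sketch makes the point: integrate the Lorenz equations (forward Euler, the classic parameter values) from two starting points that differ by one part in a million, and watch the separation grow until the two runs are effectively unrelated. The step size and run length here are arbitrary choices for illustration.

```python
import numpy as np

# Lorenz system with the classic parameter values, integrated by forward
# Euler from two starting points differing by one part in a million.
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

run1 = np.array([1.0, 1.0, 1.0])
run2 = run1 + np.array([1e-6, 0.0, 0.0])   # minute variance in starting point

for step in range(4001):
    if step % 1000 == 0:
        sep = np.linalg.norm(run1 - run2)
        print(f"t = {step * 0.01:4.0f}   separation = {sep:.3e}")
    run1, run2 = lorenz_step(run1), lorenz_step(run2)
```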

The discovery and formalization of chaos theory led to entirely new fields of study encompassing different aspects of nonlinear dynamics and complex systems. Among them is one field of endeavour which has been a point of interest on this blog--complexity.

What do we mean by complexity? Actually, I'll write about this in a future posting. For now, let's just note the relative unpredictability of complex systems and get into the whys of it all later.

- - - - - - -

This post is a bit belated, because Lorenz's publication actually appeared in March. But something as momentous as chaos should be celebrated over the course of an entire year.

Sometimes we go all out and celebrate something over a couple of years. International Geophysical Year (1957-58) and International Heliophysical Year (2007-08) come to mind.

There have already been numerous celebratory events so far this year. But first, a word about the enablers of this year's celebrations on the markets.

High-frequency trading spams the exchanges with empty quotes destined to be cancelled--so much so that many legitimate offers apparently do not get filled at optimum pricing, as the system becomes overwhelmed with meaningless numbers.

According to the exchanges, HFT is a good thing. It increases liquidity, or at least that is the axiom that guides their acceptance. Unfortunately, observation tells us that the opposite may be the case--that HFT causes liquidity to vanish precisely at the time it is most needed.

In the last 50 years, we entered the nonlinear world. But our thinking--especially institutional thinking--is still trapped in the linear world.

In the linear world, if something is a benefit, then more of it is a greater benefit. But in the nonlinear world, one may be a benefit and two may be better, yet three could turn out to be horrifying.

So in celebrating 50 years of chaos, the exchanges (with their sponsors, the algos) have brought you the following celebratory events.

Flash crash on the German market. Twitter feed flash crash. (Appropriately enough, both of these were in April.) The Anadarko flash crash. Information travels faster than the speed of light! Closing of the Nasdaq options bourse. Not to mention hundreds of strange trade executions across all the exchanges.


How to lose lots of money in 45 ms by Nanex.

Most of these problems are the (un)predictable result of the interaction of numerous algorithms. Some may have been errors, or the so-called 'fat finger' trades; others may have been other forms of human or algo error.

Algo error. Was that supposed to happen?

The markets are not what they used to be. The overall superstate has changed over the last ten years from one dominated by humans to one dominated by machines. The result has been a series of entirely new phenomena, which we have earlier termed 'innovation'.

The year isn't over yet. I look forward to the next special event. I don't think I have long to wait.

- - - - - - - - - - - - - -

And then there's this. I was going to put in something by King Crimson here, but this seemed more appropriate.