Dust flux, Vostok ice core

Two dimensional phase space reconstruction of dust flux from the Vostok core over the period 186-4 ka using the time derivative method. Dust flux on the x-axis, rate of change is on the y-axis. From Gipp (2001).
Showing posts with label sophisticated investors.

Sunday, February 9, 2014

Vindictus

I've been hearing stories lately from contacts in different junior explorecos about regulators questioning certain types of promotional literature.

In one company's case, the questions are driven by its inclusion of previously reported grades and tonnages of surrounding deposits in the same structural province. It is not clear to me whether the regulators object because they are afraid the casual reader will think the neighbouring deposits belong to the company in question (it is clear they do not--the company has identified the owners/operators of the surrounding mines) or because they feel the material is too promotional. I'm guessing the latter, in which case the fact that the grades and tonnages are already a matter of public record seems to render the problem moot.

How does excluding such information from the presentation help the retail investor? It doesn't. It is information which is potentially useful to the investor. In its absence, the retail investor will have less of an understanding of the area's potential than other, more sophisticated investors, who are capable of looking up such information themselves.

But perhaps that is the point. It is not yet time for the retail investor to enter the picture. Now is the time for institutions to get positioned. By excluding information from the eyes of the retail investor, the institutions can get safely positioned at a lower price. After the big move, it will be time for the retail investor to enter the picture, so that institutions can exit their positions safely. But don't worry--they'll leave the last few percent of the major move for you.

Tuesday, June 19, 2012

More fun with NI 43-101

You wouldn't ordinarily think that more transparency in the reporting of exploration companies would be harmful to the investing public. Yet I think some of the new requirements in the most recent update to NI 43-101 have the potential to produce just such a counter-intuitive result.

According to section 3.3, article 2:
If an issuer discloses in writing sample, analytical or testing results on a property material to the issuer, the issuer must include in the written disclosure, with respect to the results being disclosed,
(a) the location and type of the samples;
(b) the location, azimuth, and dip of the drill holes and the depth of the sample intervals;   
Once again--intuitively speaking, additional disclosure can only benefit the public, right?

I'm not so sure it does in this case. What will the ordinary public do with the location, dip, azimuth, depth, and grades of the samples? This is information which, combined with the appropriate software, can be used to generate resource estimates--particularly if it is carried out by the geological help hired by a large financial institution, thus allowing that institution (and perhaps certain of its well-heeled clients) to front-run the investing public by estimating the resource before the company itself announces it. Does this protect the investing public? It seems to me that this is the exact opposite of protecting the investing public.

Companies need to present results to the market in a timely fashion, and those results need to be accurate. But the totality of information listed above shouldn't be available to the market until the company releases a resource estimate. At the same time, some information about the spacing of the drillholes needs to be released, so that investors can be reasonably sure that good results were not obtained by drilling many holes very close together, but come from holes that are reasonably spaced apart.

A general outline for a method that might work would be to report the distances between each drillhole and its two (or three) nearest neighbours, with all of the distances pooled and plotted as a histogram. If the drillholes are clustered together, that will be immediately apparent; it would be equally apparent if they are spaced evenly over the area. The size of the area of investigation could also be reported, giving all investors the ability to assess the significance of the results without giving any of them the ability to make a resource calculation.
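As an illustrative sketch of this idea (the collar coordinates and the 25 m bin width are invented for this example, not taken from any real dataset), the pooled nearest-neighbour distances could be computed and binned like this:

```python
import math
from collections import Counter

def nearest_neighbour_distances(collars, k=2):
    """For each drillhole collar (x, y), collect the distances to its
    k nearest neighbours, pooled into one flat list for a histogram."""
    pooled = []
    for i, (xi, yi) in enumerate(collars):
        dists = sorted(
            math.hypot(xi - xj, yi - yj)
            for j, (xj, yj) in enumerate(collars) if j != i
        )
        pooled.extend(dists[:k])
    return pooled

# Hypothetical collars (metres): four holes on a rough 100 m grid,
# plus a pair of holes "twinned" only 10 m apart.
collars = [(0, 0), (100, 0), (0, 100), (100, 100), (50, 50), (60, 50)]

pooled = nearest_neighbour_distances(collars, k=2)

# Bin into 25 m classes; a spike in the lowest bin flags clustered holes.
histogram = Counter(25 * int(d // 25) for d in pooled)
for lower in sorted(histogram):
    print(f"{lower:4d}-{lower + 25:4d} m: {'#' * histogram[lower]}")
```

Note that nothing in the output locates any individual hole: the histogram reveals clustering (the twinned pair shows up in the 0-25 m bin) without disclosing coordinates.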

Another approach might be something like this. The advantages of presenting the data this way are obvious--but at the same time there isn't enough there for sophisticated investors to front-run the market. My only objection would be to mandating that companies use a particular program.

When the company reports a resource, all the information about the drill collars can be reported, allowing the resource calculation to be verified.

Saturday, July 2, 2011

Deconstructing algos--reconstructing the system

Our market system is predicated on the assumption that all market participants have equal access to information. In the real world, this is not the case. As companies pursue their interests, discoveries are made, unusually large transactions occur, and participants in the companies in question acquire material information before many other market participants. The market has rules which are intended to prevent those with access to privileged information from being able to profit unfairly. Thus, the rules against trading on insider information.

Now, as has been demonstrated abundantly, it is clear that there are other entities with access to privileged information. This information has not arisen from the normal business activities of market participants--it has been deliberately created and vomited into the market through quote stuffing in order to overwhelm the system's ability to update market prices and to create many, many, many small arbitrage opportunities.

Very few market participants can carry out quote stuffing, or can create and cancel orders hundreds of times per second. The effect is to tip the playing field in favour of these large entities, and the law of large numbers ensures that profits flow towards them. To my mind this is something very different from trading on insider information.

There is a libertarian argument that insider trading is victimless and should not be considered a crime. I admit to some sympathy towards this view, as it seems to fit into a kind of justice--usually someone is doing something socially useful, and this creates an opportunity to make additional profit.

For those who think that quote stuffing represents a form of natural justice (IFS Bank has invested in the technology and therefore deserves its ill-gotten profits at the expense of everyone else), I would like to point out the moral difference. The money made from quote stuffing is not part of a socially useful activity. The proper role for financing has always been to make money by attempting to create something that generates cash flow, whether it be a mine, a factory, or an apartment block. Carrying out thousands of transactions per second in order to scalp fractions of pennies each time does not create wealth--it transfers it towards the HF traders.

How do we regulate the market to return it to a semblance of normality? The current market rules did not envision the kinds of advantages that can arise through quote stuffing--consequently there is no mechanism for bringing its practitioners under control. What should be done?

Quote stuffing has to be ended. One method is to place a tax on each stuffed quote, and to remove the exemptions that certain market participants hold for Exchange cancellation fees. The fact that such exemptions exist guarantees that certain market participants will hold an advantage over other participants--something that is not consistent with a fair market.

As has also been pointed out--how can such changes occur when the regulators have been captured by the perpetrators?

Thursday, June 30, 2011

Deconstructing algos, part 2: Leveraging chaos into high-frequency arbitrage opportunities

The recent elegant explanation by Nanex for the activities of the HFT algos seems likely to be a better one than the analysis that follows, as it answers the all-important question--why? In the following analysis we will look a little at how, but most of our interpretation of the results is coloured by the Nanex explanation. It explains why so many trades happen outside the bid-ask spread, particularly as the bid and ask prices are moving rapidly: the algos are scalping fractions of pennies from some poor fool whose data is more than a few ms old.

As this is the reason, the method of choosing bid and ask prices pales in significance next to the methodology of stuffing the orders. This methodology I know nothing about and will not address. This article will address how to use this stuffing to create endless opportunities for arbitrage.

The principal advantage discussed in the Nanex report is stuffing the market with so many orders that competitors have trouble seeing the present state of the market. Whenever such inefficiencies are created, an arbitraging opportunity may also be created.

One method of creating arbitrage opportunities is through manipulation of time. We are accustomed to thinking of information flow as instantaneous, but it is limited by the speed of light. How might HFT take advantage of this?

Imagine for a moment that transatlantic communications were somehow extremely limited, so that a trader in New York could not see the present price of a stock in Paris, but would only see it after a two-hour delay. Any market participant who could somehow overcome this limitation would find a myriad of arbitrage opportunities.

Now look at the present. Let's suppose International Face Sucker (IFS) starts stuffing 100,000 bids per second into the pipe in New York. Let us say that those bids are x1, x2, x3, . . .

A market participant in California, Hedge Fool LLP (HF), is in the market and starts looking at the stream of bids coming down the pipe.

At 100,000 bids per second, the electronic signal travels only about 3 km between each bid. So by the time HF sees the first bid (x1) and prepares its response (say, y1), IFS is actually sending quote x1500 into the dataverse. Where is the market? What is the current price?
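A quick sanity check on these numbers (the 100,000 quotes/s rate is the post's assumption; the rough 4,500 km New York-to-California path length and the use of the vacuum speed of light are mine--real fibre is about a third slower, which would only raise the quote count):

```python
C = 3.0e8             # signal speed, m/s (vacuum speed of light)
QUOTE_RATE = 100_000  # stuffed quotes per second (the post's assumed rate)
NY_TO_CA_M = 4.5e6    # assumed New York-to-California path length, metres

# Distance the signal covers between consecutive quotes
metres_per_quote = C / QUOTE_RATE        # 3000 m, i.e. about 3 km

# One-way delay, and how many quotes IFS has emitted in that time
delay_s = NY_TO_CA_M / C                 # about 15 ms
quotes_in_flight = delay_s * QUOTE_RATE  # about 1500 quotes

print(f"{metres_per_quote / 1000:.0f} km between quotes; "
      f"{quotes_in_flight:.0f} quotes in flight over {delay_s * 1000:.0f} ms")
```

So by the time x1 reaches the west coast, roughly x1500 is leaving New York--which is where the number in the text comes from.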

Now suppose IFS has a branch in California. They have the same algo as IFS New York and are running it locally, so they observe x1 and HF's response y1--but they already know what x1500 is (or is that "is going to be"?), not to mention all of x2 through x1499. Might there be an arbitrage opportunity? Might there be 100,000 such opportunities every second?

A fraction of a penny 100,000 times a second--it isn't long before you're into real money.

Now IFS has branches in London, Paris, Sydney, Tokyo, Shanghai, Moscow--they are all running these arbitrage trades and who knows--maybe they are stuffing orders into their local bourses, using an algo known by all other branches and are arbitraging them as well.

The role of chaos

Not that it matters much, but what sort of algos are they using? I think they are mostly pretty simple.

The algo behind the bizarre spreads seen here is straightforward, though it is hard to see how it profits.



As I've written elsewhere, the nat gas trading algo looked very similar to a simple chaotic function--the first such function ever identified.

 

Natural gas over a brief interval on June 8, 2011. Graph from Nanex.


Nat gas price from the above graph plotted against linear time.


Plot of the first 5500 values of x from the Lorenz equations with parameters σ = 10, ρ = 24.7, β = 8/3.

You might think that using such a simple, well known function would mean that anyone could tag along for the ride. But you would be mistaken. Chaotic functions have a property called sensitivity to initial conditions, which makes them very useful in this particular application. Even in the unlikely event that some disgruntled former employee steals the software, its use will be extremely limited.
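For reference (the original post displayed them only as an image), the Lorenz system with its three flow parameters σ, ρ, and β is:

```latex
\begin{aligned}
\frac{dx}{dt} &= \sigma\,(y - x) \\
\frac{dy}{dt} &= x\,(\rho - z) - y \\
\frac{dz}{dt} &= x\,y - \beta\,z
\end{aligned}
```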

Note that in the equations above we have three flow parameters, σ, ρ, and β, for which there is an unlimited number of choices resulting in chaotic behaviour of the overall function. In addition, we may choose any starting location, and we can also vary the time step (basically x2 = x1 + time-step * dx/dt). Any arbitrarily small difference in any of the above parameters/variables leads to a dramatically different future evolution of the function. For instance, the two plots below (blue and red) are identical in all respects except that for blue σ = 10.01 and for red σ = 10.00 (where only the red appears, the two curves are essentially identical).

The plot above represents about 16,000 intervals, which could probably be squeezed into fewer than 5,000 quotes, which IFS could blast out in maybe a twentieth of a second. If HF had stolen the program and entered every parameter correctly except for a typo ("10.01" instead of "10.00"), then their estimate of the IFS bids would be accurate for only about 25 ms. After that, HF might as well guess.
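This sensitivity can be sketched with a simple Euler integration of the Lorenz system (the step size, starting point, and divergence threshold here are my choices; I also use the textbook chaotic value ρ = 28 rather than the 24.7 in the figure caption, so that the divergence is unambiguous):

```python
def lorenz_x(sigma, rho=28.0, beta=8/3, dt=0.005, steps=16000,
             start=(1.0, 1.0, 1.0)):
    """Euler-integrate the Lorenz system; return the x trajectory."""
    x, y, z = start
    xs = []
    for _ in range(steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xs.append(x)
    return xs

a = lorenz_x(sigma=10.00)  # the "true" parameter
b = lorenz_x(sigma=10.01)  # a one-typo copy of the stolen program

# First step at which the two trajectories differ appreciably.
diverged = next(i for i, (p, q) in enumerate(zip(a, b)) if abs(p - q) > 1.0)
print(f"trajectories agree for {diverged} of {len(a)} steps, then separate")
```

The two runs track each other closely for a while and then become completely unrelated, which is exactly the property that makes a stolen copy of the algo nearly useless.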

We could imagine IFS deciding on the next day's choice of parameters late in the evening, sending the numbers in an encrypted message to all their offices worldwide, and the next day they are all happily arbitraging away 100,000 times a second. They could change the parameters on an hourly basis--or every minute--it requires only a small amount of information to unspool an unlimited number of bids.

The only practical use for this software, if stolen, would be to use the same quote-stuffing method so your international subs could arbitrage the hell out of the market. But that would be manipulation, if it falls into the wrong (read "your") hands. In the hands of IFS, of course, it is proper and judicious market management.