Saturday, July 31, 2010

Information flow in price selection, part 1

There was an article in the Guardian recently about how humanity should prepare for planetary-scale disasters. The authors offered several suggestions for deciding on the best way to deal with a disaster, all of which involved setting up government-appointed panels of experts. The proposals were: 1) setting up a government Office of Risk and Catastrophe; 2) setting up a panel of scientists and experts who would study the problem and advise governments and/or industry on the best way to solve it; or 3) setting up two panels of scientists, each of which would take a contrary position and argue it out--presumably the argument would allow us to see all sides of the problem and thus arrive at the best solution.

The one problem that afflicts all of these proposals is the role of government in funding the panels and in defining the philosophical approach they will take. It appears to be implicit that all of these bodies would be appointed by and accountable to some government, most likely at the national level. The history of similar panels suggests that politics will play some role in the solutions.

 Just as Noam Chomsky and Edward S. Herman explain in Manufacturing Consent: The Political Economy of the Mass Media, any body dependent on government sources for information (or in this case, for funding) will find itself squeezed if it keeps making recommendations that are at odds with current government policy. Furthermore, any of the above solutions can be used to frame the debate by presenting a limited range of options. Even in the third option, it is likely that only two possible courses of action will be proposed, when in fact it may be useful to debate a wide range of possible actions.

Is there a better way? One method might be a futures market similar to the Pentagon's ill-fated Policy Analysis Market. Although there was popular revulsion to this particular application, the concept of a prediction market is a sound method of allowing information to flow from innovators to the general market.

There are those who would say that a prediction market works on a kind of "hive mind" principle. To me that sounds like superstition; history shows that mobs are not smarter than individuals. I prefer to think that prediction markets work because information is not evenly distributed, and because the profit motive is an efficient means of spreading it around (which may foreshadow what I will say below about the efficient-market hypothesis).

As an aside, I once asked an introductory-level statistics class to consider the following problem: you are one of 500 passengers about to board a plane with 500 seats. Each passenger has a boarding card with an assigned seat. As the first passenger boards the plane, he discovers he has lost his boarding card, so he chooses a seat at random and sits in it. All subsequent passengers attempt to take their seats, but if one finds her seat occupied, she will choose an empty seat at random. If you are the last passenger to board the plane, what is the probability that you will be able to sit in your assigned seat? (Answer below.) None of the students in the class had any idea of how to approach this (probably my fault!), so I conducted an experiment. I had everyone guess, tallied up the answers, and took the average, which turned out to be surprisingly close to the correct answer. So maybe all of us are smarter than just one of us. (Admittedly, the result was helped by the two exceptional know-nothings who each guessed a probability greater than 1.)

What is the basis of my assertion that some participants in, say, the stock market have more information than others? One is the long history of investigations into allegations of insider trading. The other is that an analysis of market dynamics shows that stock price movements have the distinctive fingerprints of this flow of information all over them.

A common problem faced by scientists studying natural systems is that the systems are complex: frequently they are dynamic, driven, and dissipative (meaning that they move, are influenced by energy or matter inputs, and some energy is lost through friction or its equivalent). Such a system may be described by any number of differential equations, and modified by any number of time-varying inputs and boundary conditions. Additionally, the system may have many different outputs, only some of which (commonly only one of which) we actually observe. Naturally we don't know any of the actual equations, nor do we know what the inputs are, nor do we know if the particular observations we have made actually reflect what is happening within the system. Such is the life of a geologist, for instance. Despite these difficulties, we are full of optimism that somehow we can infer the dynamics of the system using our observations, and there are even well-defined mathematical approaches to this general problem.

There are many places to track down the information in the following discussion, but a good place to start is Analysis of Observed Chaotic Data by H.D.I. Abarbanel (referred to hereafter as Abarbanel, 1996). Ergodic theory suggests that dynamic information about the entire system is contained within any time-varying output of the system, so we don't need to worry about whether the particular observations we have chosen to make are important or not--everything we are looking for (simplistically speaking) is in there somewhere. But how do we reveal what may possibly be a multi-dimensional structure when we have a single time series (i.e., one-dimensional data)?

One approach is to construct a phase space in multiple dimensions from our single time series. To give credit where credit is due, this concept was first discussed in a classic paper by Packard et al. (1980). The simplest approach is to reconstruct the phase space by plotting the time series against a lagged copy of itself. I will carry out a simple demonstration below.

I will use the equations for the famous Lorenz "butterfly", and I will perform the work in Excel to show how easily this can be done, although it can be done better in a proper mathematical plotting package (especially one with 3-D rendering).

We will use the following equations:

x_{n+1} = x_n + 0.005 * 10 * (y_n - x_n)
y_{n+1} = y_n + 0.005 * (x_n * (28 - z_n) - y_n)
z_{n+1} = z_n + 0.005 * (x_n * y_n - (8/3) * z_n)
Initial coordinates were (1, 0.5, 0), and I used 0.005 as a "time-step". You may use a different value, but you will then have either a more or a less dense-looking graph than the one depicted below (which shows x vs y over 4000 values).
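
For those who would rather script this than build it in a spreadsheet, here is a minimal Python sketch of the same forward-Euler iteration (the 0.005 time-step and the (1, 0.5, 0) starting point are the ones used above; matplotlib is assumed for the plot):

# Forward-Euler iteration of the Lorenz equations, matching the
# spreadsheet recursions above (sigma = 10, rho = 28, beta = 8/3).
import matplotlib.pyplot as plt

dt, n_steps = 0.005, 4000
x, y, z = 1.0, 0.5, 0.0
xs, ys = [], []
for _ in range(n_steps):
    x, y, z = (x + dt * 10.0 * (y - x),
               y + dt * (x * (28.0 - z) - y),
               z + dt * (x * y - (8.0 / 3.0) * z))
    xs.append(x)
    ys.append(y)

plt.scatter(xs, ys, s=1)   # x vs y, as in the Excel scatterplot below
plt.xlabel("x")
plt.ylabel("y")
plt.show()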

Lorenz "butterfly" curve rendered in Excel (as a scatterplot) based on x vs y over 4000 points using the equations and initial condition stipulated above.

If I were studying the above system, it is possible that the only observations available to me would be the x column, which would look like this:

Sequential plot of the first 4000 x-values from the equations and boundary conditions listed above.

The two graphs don't look much alike. The first is a two-dimensional projection of a three-dimensional object, while the graph above is really one-dimensional, and at first glance it does not seem possible to reconstruct the first graph from the second. But if we plot our x-values against a lagged copy (i.e., plot x_n vs x_{n+12}), we get:

Reconstructed two-dimensional phase space obtained by the time-delay method, rendered in Excel.

The trick above is to take the data in the x-column and copy the values (not the formulae) into the next column, starting in the 13th row. You will then have 3988 points defined in two dimensions, which can be plotted on a scatter plot. You may be wondering why I chose this particular lag (why not x_n vs x_{n+100}?)--for now, consider it to have been an arbitrary decision. There is an information-theoretic prescription for deciding on the optimum lag, just as there is a prescription for choosing the correct embedding dimension (I have chosen two dimensions because of the limitations of Excel, but it would be better to use three).
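
If you would rather script the reconstruction than shuffle spreadsheet columns, here is a minimal Python sketch (regenerating the x-series from the same Euler iteration; the lag of 12 is the same arbitrary choice as above):

# Time-delay reconstruction: plot x_n against x_{n+12}.
import matplotlib.pyplot as plt

def lorenz_x(n_steps=4000, dt=0.005):
    # Forward-Euler x-series of the Lorenz system, as above.
    x, y, z = 1.0, 0.5, 0.0
    xs = []
    for _ in range(n_steps):
        x, y, z = (x + dt * 10.0 * (y - x),
                   y + dt * (x * (28.0 - z) - y),
                   z + dt * (x * y - (8.0 / 3.0) * z))
        xs.append(x)
    return xs

# The lag of 12 is arbitrary here; an average-mutual-information
# criterion gives a principled choice (see Abarbanel, 1996).
lag = 12
xs = lorenz_x()
plt.scatter(xs[:-lag], xs[lag:], s=1)   # 3988 lagged pairs, as in the text
plt.xlabel("x(n)")
plt.ylabel("x(n+%d)" % lag)
plt.show()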

We see that the reconstructed phase space in two dimensions is topologically very similar to the two-dimensional projection of the actual system. Next time we'll start using this tool to analyze stock charting techniques.

---------

Update - June 19 - did I really forget the answer? The probability of your sitting in your assigned seat is 0.5. The easiest way to see this is to realize that, of all the possible random seats that could be selected by the passenger with the missing boarding card, only whether he selects his properly assigned seat or yours matters (as far as our problem is concerned). If he chooses his own seat, you will get to sit in yours; if he chooses yours, then obviously you won't. Any other choice simply defers the critical choice to a later passenger, who will have a smaller selection of seats from which to choose, but will again face only the two meaningful choices.
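
For the skeptical, here is a quick Monte Carlo sketch of the boarding problem in Python (seat i is assigned to passenger i; the 10,000-trial count is an arbitrary choice):

# Simulate the lost-boarding-card puzzle: with 500 passengers, the last
# one finds his assigned seat free with probability very close to 0.5.
import random

def last_passenger_gets_seat(n_seats=500):
    free = set(range(n_seats))                   # seat i belongs to passenger i
    free.discard(random.randrange(n_seats))      # passenger 0 sits at random
    for p in range(1, n_seats - 1):              # everyone but the last
        if p in free:
            free.discard(p)                      # own seat is available
        else:
            free.discard(random.choice(list(free)))  # pick a random free seat
    return (n_seats - 1) in free                 # last passenger's seat free?

trials = 10000
print(sum(last_passenger_gets_seat() for _ in range(trials)) / trials)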

Thursday, July 22, 2010

Mexico, silver, and the world's greatest narco-state

I know I promised to write something else, but I just couldn't wait on this one.

It has long been known that the CIA is at least partially responsible for global drug trafficking. Now there is evidence that they are behind the recent ramp-up in violence along the US-Mexico border. A reasonable question is: to what end is this increase in violence?

Part of the reason may have to do with increasing public outrage at illegal immigration. After all, there are elections to be won and a population to stupefy.

Now here is a story that has been circulating for a few months. Hugo Salinas Price has been talking about the remonetization of silver in Mexico. Given the deteriorating state of the US dollar, the Fed's desperate attempts to rob the principal US creditors, and the general "race to the bottom" style of mercantilism the Fed's policy is igniting, backing a currency with silver would likely make the Mexican peso the strongest currency in the world, would ruin the drive for the Amero, and would put tremendous international pressure on the USA to do the same. Naturally, the United States does not wish to see such a thing happen.

Mexican one ounce Libertad silver coin. Gorgeous!

Monetizing silver will not be easy, particularly for the first country to try it. A previous attempt by the Mexican government to monetize silver in the late '70s failed due to the volatility of silver prices--in particular, the breathtaking drop after the fall of the House of Hunt. Allowing the coins to fall in tandem with the silver price really soured the public's taste for them. How can this problem be avoided in the future? Mr. Salinas Price has suggested the following:

What happens when the price of silver falls?

The answer is surprisingly simple: nothing happens.

The second indispensable condition for successfully carrying out the conversion of the silver ounce into currency which will circulate in parallel with the euro is: the last monetary quote given to the ounce by the issuer must not be reducible.

The reason for this unusual condition is that ever since silver ceased to have monetary value according to weight, the monetary value of all silver coins was always and everywhere a fixed value; it was a fixed value because all these coins bore an engraved or stamped value.

In order for the silver ounce to cease being a commodity and be currency it is indispensable that its nominal monetary value be a fixed value which cannot be reduced – just as is the condition of present euro coins and bank notes – along with which the ounce is to circulate in parallel. If the quote is allowed to fluctuate in value downward, according to the price of silver, then the ounce will not be currency: it will continue existing as a commodity. (pg. 14)

Read more

How can Mexico guarantee that the price of the coin does not fall as the silver price falls? The Mexican Central Bank must commit to purchasing silver at a price related to the value of the coin, even if that price is higher than the current market price of silver.
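
To make the ratchet concrete, here is a toy Python sketch of the rule described above (the 10% seigniorage margin and the peso prices are my own illustrative assumptions, not figures from Salinas Price):

# Ratcheted monetary quote for a silver coin: the quote follows the
# silver price upward (plus a seigniorage margin) but is never reduced.
def new_quote(prev_quote, spot_price, seigniorage=0.10):
    candidate = spot_price * (1.0 + seigniorage)
    return max(prev_quote, candidate)   # the quote can rise, never fall

quote = 0.0
for spot in [200.0, 250.0, 180.0, 220.0]:   # hypothetical peso silver prices
    quote = new_quote(quote, spot)
    print(spot, quote)   # when silver falls, the quote simply stays put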

One can imagine, in today's world of levered commodity contracts, what fun some large arbitrageur could have. However, the fun could be self-limiting, as the arbitrageur would have to deliver actual silver in order to profit; they wouldn't be able to simply show up with a container full of futures contracts.

It has been widely speculated that a good portion of recent US foreign policy has been devoted to maintaining the dollar as the currency of last resort. The Iraq war began soon after Iraq began dumping dollars for euros and began demanding to be paid for its oil in euros. At various times during the past five years, Iran has demanded the same thing.

Is the rise in violence in the Mexican drug trade a new US foreign policy initiative? One intended to destabilize the Mexican government, with the ultimate goal of making it difficult to properly monetize the Libertad?

How government funding corrupts scientific integrity

It is well known that industry money in support of research can corrupt the results. There has been a litany of stories across all areas of scientific study: from the pharmaceutical and food industries, to establishing liability from long-term use of hazardous industrial products, to more recent allegations of tampering with the food pyramid. From the fight over aspartame as a sweetening agent to the latest pharmaceuticals, once significant amounts of money are involved there are frequently doubts as to the validity of the science.

For instance, at the GAC in Calgary in May, the session on Climate Change had a talk by one Mr. Norman Kalmanovitch concerning the impact on global warming of a doubling of atmospheric CO2. According to the paper, the effect would be negligible. The paper did cause something of a sensation, and there was a great deal of angry criticism directed at the speaker; unfortunately only a very limited amount of that criticism was directed at the science presented, and much more was directed at the funding sources of Mr. Kalmanovitch and their inferred ideological bent.

Now it may be true that Friends of Science is an ideologically driven organization, but that should not be the basis of criticism of the paper as presented. Unfortunately, it was nearly impossible to critique the paper as there were far too many slides (I believe he said there were 96, which I thought was a joke until he tried to go through them all). There was legitimate criticism about the length and confusion of the presentation, which cast doubt on the professionalism of the speaker and made it difficult to evaluate the science. I suggested to Mr. Kalmanovitch that he attempt to publish in peer-reviewed journals--at least then the ideas could be evaluated and criticized in an appropriate forum. Unfortunately, Mr. Kalmanovitch was of the opinion that the work would be rejected out of hand, as the climate journals were (in his opinion) ideologically driven, and he also felt that his own lack of academic stature would preclude publication.

To the point--if a young researcher (just starting off in a tenure-track position at a Canadian university) found himself with an NSERC grant to study climate change, and obtained results through either observation or experimentation that falsified the global warming hypothesis, I submit that the announcement of said results would be a career-limiting move. Perhaps even a career-ending one.

There are a couple of issues here. One is that in many cases, the weight of corporate funds is designed to produce a scientific result in order to finesse an objective around government regulation. Without the high degree of government regulation around pharmaceuticals and food additives, it would not be necessary to obtain results favouring the project at any cost. And for those who ask whether we would be better off without government regulation--who is it that allowed Lipitor, aspartame, and other toxins to pollute our bodies? Has government regulation really kept toxins out of the food chain? Who has overseen the regulation of offshore drilling? The stock market and the financial sector? Mortgage markets? Who was responsible for preventing Ponzi schemes like Madoff or Enron? We must ask ourselves--would we fire or would we promote an employee who allowed such mayhem into the various aspects of our lives? Why should we not do the same with the State? By continually increasing the funding for failure, we reward failure. We have rewarded failure to the point that society is now on the brink of destruction.

Yet those who are quick to decry the influence of corporate interests either deny or ignore scientific bias in favour of state goals. If it is true that corporate interests fund science that supports their aims, is it logical to suppose that governments would not do the same? Have not the massive bailouts of the financial industry against the expressed wishes of the general population made it clear that States do not act in the best interests of their populations? What about the murders of tens of millions in the last century?

The ongoing furor over leaked emails from climate research in England (dubbed "Climategate") may be the beginning of this realization. The widespread perception of a possible conflict of interest has poisoned public opinion and is emblematic of a widespread distrust of government-funded science.

Another problem with government grants is more subtle. The existence of grants tends to force research in directions which are more likely to attract grants. This is not necessarily a direction that research should go. One of the original models of academia held that research should be driven by curiosity. Now, however, curiosity isn't enough.

For example, I was once interviewed for a position at a well-known university in the UK. As in all such interviews, the question of future research topics came up. I had industry money arranged for investigating the environmental impact of offshore aggregate mining in the North Sea, but I had ideas for other projects as well. One of those was a continuation of my work on the dynamics of climate as determined from the geologic record. I wanted to pursue this as I saw what might be a short-lived lead in a field of endeavour that had great promise and could be done cheaply. Part of the promise was to deliver a methodology for testing climate models, and given the amount of money being spent on them, it seemed a good idea to evaluate them. Additionally, I knew that huge amounts of data were being collected at great expense, yet the methods of their analysis were primitive--a small amount of money could greatly increase the value of what was being recovered. I was dismayed when the only question I received was how I would justify applying for a million-pound grant with such a project.

It was an aspect of research funding I had never really considered. Acquiring grant money has always been necessary for a young academic, but the amounts of money now being granted have attracted a new and unfortunate dynamic. The demand now is to design research projects which require large sums of money, which necessarily limits the types of proposals that can be formulated. For instance, in the field of paleoclimatology, the only types of projects that can justify grants of millions of dollars involve drilling holes somewhere remote (and crowd-pleasing). The resultant responsibility to ensure that the data obtained in such a project are thoroughly studied is ignored because of the need to obtain the next large grant (which usually involves placing more holes somewhere else). Spending time contemplating the data obtained, and attempting new methods of data processing to ensure that the best use is made of the data, cannot compete with the drive to put new holes in distant places.

If you think that the funding agencies would be interested in granting relatively small amounts of money to improve the use of the data from these expensive boreholes, you would be mistaken, for they also have an interest in ensuring that large research grants are made. If you are overseeing the disbursement of $50 million, it is a lot easier to give out 25 grants of $2 million than a thousand grants of $50 thousand. Your salary depends only on doling out the money, so it makes sense to create as little work for yourself as possible.

Moving up the chain, we come to the politicians, whose interests in these matters are complex and contradictory. It can be a good thing to ensure that science is funded, but it would be bad if word got out that you were funding studies on World of Warcraft, for instance. They would like to know that the money is being used effectively, but they do not have the scientific background to evaluate the science, so they place the responsibility in the hands of the funding agencies above.

I submit that the system works very differently from the way it was intended. I have no doubt that at every step, individuals acted in a way that they thought would lead to the best use of scientific resources. How do we explain how the result has come to be at odds with the intent?

(added July 28)

Gary North has written on the differences between a job and a calling. A job is what you do to make money. A calling is the highest, best use of your time. Your goal in life should be to do less job and more calling. If you are very lucky, your calling will be your job, but this is rare.

For most people in academia, teaching is their job, but their calling is research. Actually, the way they are funded, they probably view the research as both their job and their calling--the teaching is some condition of their obtaining research space, and is to be avoided.

My proposal for the financing of scientific research is as follows--let it fund itself!

Academic positions should essentially be teaching positions. If the academic wishes to do research as well, that becomes a personal decision. University education is failing, at least in part, because the system is geared to reward research; if the academic is particularly good at research, teaching may even be avoided. Make teaching the main job of academics. Universities that already have research equipment may use it to attract researchers who would like to further their research with it. Government should get out of funding research.

Tuesday, July 20, 2010

First steps into complexity part 1

I will try to document some of my thinking as I moved from a standard mechanistic viewpoint of science to one that was more complex.

I have been involved in Quaternary climate studies since I began my MSc in marine geology at Memorial University of Newfoundland. There I worked with Dr. Ali Aksu ostensibly on a typical marine geology study of a sedimentary basin on the continental shelf of Nova Scotia, but I also spent some time pondering Quaternary climate change--in particular, the Milankovitch theory of astronomically driven climate change.

At first the problem was a straightforward technical one--how to tease out the appropriate signals from marine records. In the course of background reading, I encountered a paper relatively unknown (at least by geologists), by E. N. Lorenz in Quaternary Research in 1976 (far more famous were his earlier works on nondeterminism in weather prediction). The QR paper presented alternative ideas concerning the fundamental architecture of the global climate system and challenged the geological community to test them and so determine the nature of climate change on the Quaternary timescale. Most of the literature of the time considered the climate system to be deterministic, while acknowledging that there were nonlinearities which complicated the whole thing--but the hope was clearly that the nonlinearities were local in nature and could be dealt with through a judicious series of fudge factors. Lorenz described three possibilities for climate: 1) a straightforward "transitive" system, in which the system outputs can be linked to the system inputs by a simple set of differential equations; 2) what he called an intransitive system (what we would now term multistability; i.e., a system as above but with different sets of differential equations operating at different times); and 3) what he termed "almost intransitive", called "strange attractors" in other publications, and which we now refer to as simple chaos.

I say that the paper is poorly known as I have never seen any commentary on it. Nor, for many years, did there appear to be a clear attempt to distinguish among these different modes of operation. To be sure, there have been publications advocating any one of these modes (here and here), but most of these were attempts to show observations which supported the proposed mode, rather than using observations to test between the different modes. More recently, various climate models have been proposed in which the modal operation is taken as a given.

I finished my MSc, then shifted to the University of Toronto to carry out Ph.D. research with Dr. Nick Eyles. My principal thesis was again a geological one, concerned with tectonic influence on the development of glaciated continental margins, using Eastern Canada and the Gulf of Alaska as contrasting examples. However, I also devoted a lot of time to Lorenz's proposed problem of Quaternary climate change. My approaches to this problem followed several branches. The first was improved signal processing (mainly through variations on the Fourier transform, including attempts to use maximum entropy and other methods). The second was looking at other data sets. The third involved developing entirely new techniques for processing information.

This last approach very quickly came to absorb most of my spare time.

In 1990, the concept of fractals had been around for a while, but its application in the earth sciences was still very much leading edge (I was actually thinking of the first edition of this book). The push to educate earth science professionals had only just begun. At Scarborough there was a post-doc in geography who was trying to make a name for himself by publishing paper after paper in which he reported the fractal dimension of some geographical feature. He had published something like a dozen papers in a year, each of which, I must assume, was very short.

The concept of nonlinear dynamics was also very cutting edge in earth sciences. I proposed teaching a course on the topic, going so far as to propose that we teach our own mathematics to earth science students, but the idea didn't go anywhere.

I had encountered an interesting idea in a paper by Imbrie and Imbrie, in which they proposed that it was not ice volume directly that responded to solar insolation, but the rate of change of ice volume. At the time this struck me as a brilliant insight, and I immediately constructed a figure showing the connection between insolation in the northern hemisphere and the rate of change of ice volume calculated from first differences of a deep-sea O-18 record.

Plot comparing insolation at 65N and the rate of change of ice volume from a deep sea O-18 isotope record. A panel from the ill-fated Paleoceanography paper described below.

I then had the idea of constructing a figure in which I plotted the inferred global ice volume against its rate of change, once again calculated from first differences. The graph would be a curve, in which each point would represent the "state" of the system at a particular time, and when all the points were plotted in sequence, a trajectory would be traced which should reflect the dynamics of the ice volume system.
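
To show the construction (though not the original data), here is a Python sketch in which a made-up quasi-periodic series stands in for the ice-volume record:

# First-difference phase portrait: plot a series against its rate of
# change. The synthetic curve below (100 ky and 41 ky sinusoids) merely
# stands in for the deep-sea O-18 record used in the original figure.
import math
import matplotlib.pyplot as plt

dt = 1.0                                  # one sample per thousand years
t = [i * dt for i in range(500)]          # 500 ky of synthetic "record"
v = [math.sin(2 * math.pi * ti / 100.0)
     + 0.4 * math.sin(2 * math.pi * ti / 41.0) for ti in t]

dv = [(v[i + 1] - v[i]) / dt for i in range(len(v) - 1)]  # first differences

plt.plot(v[:-1], dv)                      # trajectory through the phase space
plt.xlabel("ice volume (arbitrary units)")
plt.ylabel("rate of change of ice volume")
plt.show()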

Part of my first two-dimensional phase space reconstruction of global ice volume. The small numbers represent time in thousands of years before present (ka).

Points on the graph that lie above the x-axis represent intervals where ice volume is increasing; along the segments of the trajectory below the x-axis, ice volume is decreasing. The further from the x-axis, the more rapid the growth or retreat of global ice. The plot above shows the relatively slow advance of glaciers from about 120 ky ago until about 20 ky ago, followed by rapid deglaciation.

What was immediately noticeable in observing the function over the past 500 thousand years was that there were particular areas on the graph to which the function seemed attracted. It moved very rapidly towards them, and tended to stay in them for long periods of time before rapidly moving to another. All of these regions plotted along the x-axis and corresponded to particular volumes of global ice. The location made sense, because it implied that there were particular volumes of ice which were more stable than others. While ice volume is stable, its rate of change must be low--hence we should not expect to find a region of attraction far from the x-axis.

Now this is a phase space portrait, in two dimensions, using the time-derivative method (Packard et al., 1980). At the time I did this, I didn't know what to call it. I was certain it had been done before, but even in today's world of search engines, without knowing the terminology it is very difficult to find information. I knew that I was on to something, but didn't know what.

In the meantime, I had had another idea for testing climate records for multistability--or at least a test to distinguish multistability from the transitive case using information theory (I didn't understand enough about simple chaos to devise a test for it). My approach was that if climate had one or more stable states, then there should be measurable differences in information content between the climate record (again the deep-ocean O-18 isotopic record) and the driver (presumed to be northern hemisphere insolation). If there were multiple stable modes of climate, then the insolation would be encrypted, as if by a polyalphabetic key, and there would be a change in a particular quantity called the index of coincidence, which is the likelihood that two randomly selected characters in a string of text are identical. There were challenges in applying this, not the least of which was that it required the data to be 'binned', and it was not at all clear how the bin size in the observed data stream should be linked to that of the northern hemisphere insolation. This work was presented at two conferences in 1991 and 1992, and was awarded a top student paper prize in 1991. But when I wrote the paper and submitted it to Paleoceanography, I overlooked one of the cardinal rules of scientific writing.

Always look like you know what you are doing.
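
As an aside, the index of coincidence is easy to compute. Here is a generic Python sketch for a string of symbols (it ignores the binning questions raised above):

# Index of coincidence: the probability that two randomly selected
# positions in a string hold the same symbol.
from collections import Counter

def index_of_coincidence(symbols):
    n = len(symbols)
    counts = Counter(symbols)
    return sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))

# Ordinary English text scores near 0.066; uniformly random letters
# score near 1/26 (about 0.038).
print(index_of_coincidence("DEFENDTHEEASTWALLOFTHECASTLE"))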

I have always been fascinated by the intellectual process of the scientific endeavour. This fascination led me to make a basic mistake in presenting my experiment and results. In the course of my work I had discovered what appeared to be a novel use for the process of autoencryption--by which I mean using the message as its own key in a polyalphabetic substitution cipher. The charming result is a coded stream that cannot be unambiguously decrypted even by an intended recipient who has been furnished with the key. Such a method of encryption, understandably, had no real application, and so the behaviour of the index of coincidence for this style of encryption was not well known. I did not discover this until I was forced to come up with an explanation for a rise in the index of coincidence in the observed signals (compared with the presumed driver).
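
The details of my scheme don't matter here; a toy version makes the ambiguity plain. Shift each uppercase letter of the message by itself, modulo 26: doubling is not invertible mod 26, so the letters m and m+13 encrypt identically, and even a recipient who knows the entire scheme cannot decrypt unambiguously. A Python sketch:

# Toy autoencryption: keystream = plaintext, so c = 2*m mod 26.
# Since gcd(2, 26) = 2, the letters m and m + 13 encrypt identically,
# making unambiguous decryption impossible even with the "key" in hand.
def self_encrypt(msg):
    return "".join(chr((2 * (ord(ch) - 65)) % 26 + 65) for ch in msg)

print(self_encrypt("HELLO"))   # OIWWC
print(self_encrypt("URYYB"))   # OIWWC -- the ROT13 twin collides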

So I wrote the paper this way: testable hypothesis with two possible outcomes; conduct experiment; find unanticipated outcome; explain why the unanticipated outcome was left out of the original hypothesis; modify hypothesis; conclusion. The paper was rejected. It might have been accepted had I submitted the modified hypothesis as the original one, tested it, and reported a result. I had thought that the process of discovery would be interesting to others; in the case of peer-reviewed journals, this view was mistaken. In the course of revisions, I came to realize that the binning issues mentioned above were unresolvable, and I reluctantly abandoned this approach, returning to the reconstructed phase space portrait.

Friday, July 16, 2010

Is complexity post-Newtonian?

There are changes coming to our approach to science. But what is behind them?

In a word, complexity. What is it? It is actually very hard to define, but is frequently used to describe systems which behave unpredictably for one reason or another. By system we usually mean some interactive group of components, which may be living or not. Thus a system may be a single organism, a group of organs within an organism, a colony of related organisms, an entire ecosystem, a planet, or some portion thereof, such as the atmosphere or hydrosphere (or both together).

I will paint this in broad strokes and hopefully fill in details later. I will also link you to much better sources of information than poor me.

Complexity is frequently described as being either organized or disorganized. Disorganized complexity describes systems which have so many disparate components and so many possible interactions that describing and characterizing them all defies our computational abilities: the behaviour of the system might as well be random. In some senses this type of complexity is not of great intellectual interest, as it is possible that, as our computational and organizational skills increase, we will come to understand the origin of the unpredictability of such systems.

Organized complexity is much more interesting. In this case we are looking at a simpler system with only a few interactions, each of which appears to be straightforward, and yet the system surprises us with unpredictable behaviours which are sometimes called "emergent properties". (See here for a seminal paper on complexity in which emergent properties are described.)
Complexity is often described as post-Newtonian, but the issue is far from settled. For instance, an earlier version of the Scholarpedia article on Complexity began with such a statement, which has since been removed.

Apart from their disputes over who had precedence in the development of the calculus, Leibniz and Newton also had different metaphysical ideas about how science should proceed.

The mechanistic approach to science is very closely associated with Newton despite having a much earlier origin. The central logic of the mechanistic view is that knowledge about a complex system can be gained by reducing it to simpler components, each of which can be understood; the reduction can be carried out repeatedly until, hopefully, the components are comprehensible. This approach, known as reductionism, was formulated by Descartes. The mechanistic approach is commonly considered to be the only approach to science. If we recall that the key activity of science is the formulation and testing of hypotheses, then it is clear that the mechanistic worldview may be described as a paradigm: it does not define the scientific method itself, but restricts the types of hypotheses that are formulated and tested.

The mechanistic view would consider an organism to be a divisible collection of parts which, while interrelated, could be studied and understood separately.

Leibniz’s metaphysical view was considerably different. Leibniz’s metaphysics would consider the organism to be the sum or combination of an active and a passive principle: the passive principle representing the physical manifestation of the organism while the active principle was the organizing principle which caused matter and energy in the environment to form the organism. Under this approach then, it would make no sense to study an organism one component at a time, but only somehow in its entirety. Additionally, one could argue that the essential reality of the organism (or system) was the active principle, which was not something that could be perceived directly, but which would have to be inferred on the basis of observations of the passive principle.

In order to better understand the differences between these two systems, let us consider a particular complex system and look at how we would investigate it under these two different approaches.

A nicely defined complex system.

Under the Newtonian mechanistic approach we would study the system by studying all possible parts and making every possible measurement we could think of, and . . . where was I . . . we would hope somehow to gain a complete understanding of the system at the end of this process. Even with these measurements, common experience tells us that there is a little more to this system than meets the eye. We could not determine by direct measurement many of the important parameters of this system, such as her favourite music or indeed how to get her to agree to allow us to make the measurements we alluded to above.

The Leibnizian approach would suggest that the physical form of the system before us is merely a consequence of some inner truth which can't be perceived directly, but which causes the system to organize itself out of the ambient energy and matter of the surrounding environment. The Leibnizian approach would be . . . well, it's not really clear what the Leibnizian approach would be. It seems to be the central disadvantage of Leibniz's metaphysical approach to science. What sort of hypotheses can you formulate? And how do you test them? So while Newton is busily measuring the big toe, for instance, Leibniz can only wonder.

It is very difficult for us to think about this in the same way as Leibniz did, because our view is likely to be coloured by the recent concept of information as a quantifiable property. It is not clear to me whether information was viewed as a measurable thing in Leibniz's day; so while it is tempting for us to say that the active principle must be information, an intangible set of rules for constructing the system of which it is the active principle, I am not sure that Leibniz would have thought about it that way.

No doubt some readers are already thinking "Aha! Genetics!" And genetics could certainly qualify as information making up Leibniz's active principle in the complex system depicted above. But I am reasonably certain that Leibniz did not have secret knowledge of genetics either. So Leibniz would not be able to apply his metaphysical approach towards understanding the complex system standing in front of him.

All of this goes to explain why the mechanistic worldview came to be looked upon as the only approach to science. Under the mechanistic approach, it is generally clear what you do. You measure, codify, observe, and you will learn something, even if it wasn't what you set out to learn. Indeed, probably 99.9% of everything we have learned in science since Newton's time has come from testing hypotheses within a reductionist, mechanistic worldview.

And still . . .

There are some problems which we have not been very successful at solving, and we are beginning to doubt whether the reductionist approach will ever work. These are problems like the workings of ecosystems, and complex systems like climate. There are too many parameters to measure, we often don't know what parameters are important to measure and which can safely be ignored, the accuracy of measurements is limited, and there is a little problem called sensitivity to initial conditions.

It is only in the past thirty years or so that methodologies for codifying the behaviour of complex systems have been developed, and the testing of interesting hypotheses concerning the organizational behaviour of complex systems is even more recent. The notion of self-organized criticality has a particularly "Leibnizian" feel to it. Phase space reconstructions, computational mechanics, self-organized criticality, multifractals--all of these ideas are clearly moving us away from a mechanistic, reductionist worldview, and towards something more embracing of the organization of information at the centre of complex systems. However, this is not a paradigm shift, as the Newtonian approach will not be replaced, but merely enhanced by the new approaches. Nor is it really post-Newtonian, as the basic idea was around in Newton's time; the difference is that we are beginning to learn how to apply it.

Thursday, July 15, 2010

Meanwhile in Calgary

Here is a link to an extended abstract for a paper I presented at the GAC meeting in Calgary in May of this year.

Groovy Ghana


I recall finding a number of interesting erosional features on rocky outcrops along the coast of West Africa, starting back in the 1990s. Their identity was a mystery to me. Part of this was because my early experience in geology was in the Canadian and European north, the High Arctic, and Antarctica. These are places without a lot of anthropogenic structures, and those that did exist tended to be sophisticated representational carvings or paintings. So I never thought at all about an anthropogenic origin for these features, but frustrated myself trying to think of natural events that would cause randomly oriented grooves on all these rocks in coastal West Africa.


Erosional features in Axim, Western Region, Ghana.

Grooves at Akonu Beach, Ghana (about 3 km east of Axim).

The real hotbed for these features was Dixcove, near the boundary between the Western Region and the Central Region of Ghana. The number of such features, as well as their variability (different types and sizes), exceeded that of any site observed so far. Sadly, none of them are in situ, as the rocks have been quarried and formed into a breakwater.

Multiple grooves on a boulder in the breakwater at Dixcove. No real scale, sorry, but the white and blue paper at left is a wrapper for an ice cream bar.

The eye-opener came on a visit to the Primate Preservation Centre at Kakum (central region of Ghana) in November 2007, where these grooves are abundant and interpreted as stone-tool sharpening marks.

Kakum forest tool sharpening marks in gneiss. Site is atop an erosional remnant of bedrock, providing a good view of surrounding area. The 3-D nature of the grooves is apparent in the warp of the quartz vein (the white line in the picture at left).

I have seen interpretations suggesting that early habitation in West Africa was inland and on highlands. Certainly there are caves in the Kwahu area of Ghana supporting this idea. But after spending some years exploring in West Africa, let me say that there is one very important reason to live near the coast.

Salt.

It is hard for modern people to appreciate the importance of, or the difficulty in finding, salt. Years of hiking and working in West Africa have forced me to learn first-hand the importance of maintaining salt levels in the body and the dire consequences of letting them drop. Muscle cramps are only the beginning. The improvement in health I experienced once I began strategically supplementing with salt was incredible: an immediate reduction in cramping and gout, and the elimination of occasional bouts of traveller's diarrhea.

Salt was so precious that it was traded weight for weight for gold. Clearly, then, the coast would have been an important area, with its ready access to fish and salt. It is true that in the Volta Basin there are rocks which formed beneath an inland sea, so salt-licks and salty soils would probably have been available, but the coast was where it was at.

With the amount of sea-level rise since the last glacial maximum, there may well be evidence for human habitation on what is now the West African continental shelf, similar to discoveries here and here.  

Wednesday, July 14, 2010

Beginning

I want to write this here so I don’t have to keep repeating it.

1. You are wealthy at birth. By wealth, I am referring to the net present value of your life’s earnings, discounted at a reasonable rate—or at least what was a reasonable rate before our current inflationary economic system.

2. There are those who believe it is unfair that you should be wealthy by right of birth. These people are frequently (but not always) born wealthy by virtue of wealth within the family. This inherited wealth is necessary because many of them would not otherwise earn any wealth during the course of their lives.

3. The goal of the system in which you are embedded is to strip you of that (in their eyes) "undeserved" wealth. After all, any fool can be born. Why should they be wealthy?

4. The means of stripping you of your natural wealth is inflation, which raises the discount rate so much that the net present value of your future labours becomes zero (you can learn how to perform NPV calculations here).

5. The role of government in all of this is to deliver you into this system. To this end, you are educated to be passive, docile, and ignorant of financial matters. You are blinded from this reality by a succession of atrocities and entertainments.

6. Government must handle your education so that you will cooperate with the system. Among the first things you learn is that government and its agents are your friends. Consequently, in early primary school you were introduced to firemen (your friends), the police (your friends), the principal (spelt ‘pal’ because he is your pal).

7. Any group that does not recognize the primacy of government must be eliminated. Hence, the Branch Davidians, the home schoolers, the FLDS (see here, for instance), and even the Muslims are demonized. The differences between ‘us’ and ‘them’ are emphasized and expressed as a reason for their demonization.

8. We must be educated to be divided, for if we were to unite, there could be unpleasant consequences for the architects of the system into which we have been born. We are divided by age into cohorts, and any social mixing of these cohorts is discouraged. Similarly, we are divided in school by grades (A, B, C, D, and F, although those are not given much anymore), reminiscent of the alphas, betas, gammas, deltas, and epsilons of Huxley's Brave New World.

9. The coming goal is to place controls on all forms of economic activity so that the flow of money can be tracked from your employer or business to your pocket to your local merchants.

10. The plan requires top-down control of all activities.

11. All transactions could be taxed. But what if we bartered? Or what if we used something outside of the system as money? Like the stone wheels of Yap, if we trusted each other, the commodity used might not change hands; merely the ownership would. Consider two individuals. One has money in Canada. One has money in Singapore. They wish to exchange, in order to move the money. But the money doesn't move; only the ownership changes. In a system based on trust, no money crosses the borders, and the powers that be cannot trace the ownership, for it is purely conceptual. But this can only work if we trust one another and have respect for the rights of ownership.

12. Our respect for the right of ownership has been degraded by the politics of democracy. Each of us is a participant in a scheme by which the majority expropriates the property of some minority for some purpose. Actually, the expropriation is ordered by a majority of politicians, each of whom can claim to have been voted in by the population of a particular area (though not always by a majority). We may or may not agree with the expropriation, but we have little ability to change it.

13. The remedy is trust. Trust and faith in ourselves, and in each other. And distrust in authoritarianism.