Sunday, September 25, 2011

Back From Hiatus

I am back to blogging after a hiatus due to work-related obligations, and I'm excited to resume musing about financial risk management and its intersection with computer science.

In fact, the last few months have been extremely fruitful for my own personal research. I don't want to provide too many details now, since they will be forthcoming in a separate blog post dedicated to the subject, but I have identified various mathematical correspondences that allow for a cogent portfolio-level risk analysis of some fairly complex risks (like counterparty risk or geopolitical risk) without the use of statistics. As we all know, statistics have a bizarre tendency to break down when markets get really crazy -- say, during a financial disaster that brings about a counterparty default. But in the context of risk management, these tail scenarios are precisely the most important ones, so avoiding statistics has real value there.

Finally, a note: although I may comment on particular securities or trades in this blog, I will never recommend a particular investment, so please don't interpret my writing that way.

Tuesday, December 14, 2010

Some Weird Math and the Most Interesting Guy of All Time

In this post, we will not only learn about one of the most interesting people ever to have walked the earth -- a person whose life story is full of so much comedy and irony that it sounds made up -- but also explore the history (and present) of highly abstract mathematics. This is a non-technical piece, so the non-technically inclined will enjoy it as well.

In order to explain what abstract algebra is, we have to go back in time a little. For most of human history, mathematics was viewed mainly as a tool for studying science; even for those who focused on the math itself, the two were inextricably joined. The better the tools, the better the scientific problems they could solve. This is, for example, how calculus came to be: Newton invented it to solve the physics problems that were his prime area of concern. And it was this way across the disciplines, from geometry to analysis -- math was done primarily with something tangible in mind.

But that all began to change in the early 19th century, with the birth of an absolutely fascinating individual by the name of Evariste Galois. While in his teens, Galois became interested in math. He applied to Ecole Polytechnique, which was then and still is the MIT of France, and was rejected. He went instead to another French university, Ecole Normale. This is where Galois started to rock the house. When he began the work that would soon make him famous, he submitted two papers for publication. Both were judged by Cauchy, another all-time famous French mathematician, and he turned them down. The story goes that Cauchy's issue was merely clerical, and that he truly believed in Galois' work; nonetheless, it was another kick in the groin for young Evariste.

At this point, Galois' father committed suicide, shortly before young Evariste reapplied to Polytechnique. Naturally, he was rejected -- and keep in mind, by this point Galois was a genuine prodigy. Most attribute the rejection to a personal offense Galois gave the examiner; some blame the examiner, others Galois. Either way, Galois returned to Ecole Normale, defeated, to continue his studies. But don't worry: much more bad luck lay in store for him there.

Galois essentially invented the field of abstract algebra, coined the mathematical term "group" (as in group theory) and solved the extremely long-standing problem of determining the necessary and sufficient conditions for a polynomial to be solvable by radicals. Basically, this means that (among other things) he figured out how to tell whether we could solve a particular polynomial equation with an elegant little formula like the "quadratic formula" from elementary school.

A bit more confident, he resubmitted to the Academy the work Cauchy had rejected some years before. In fact, he submitted it directly to Fourier (of "Fourier transform" fame), but in line with Galois' usual luck, Fourier died soon thereafter and never reviewed it. That year, the prize went to Jacobi and Abel, two highly deserving mathematicians. Nevertheless, Galois did succeed in getting two important papers published that gave birth to Galois theory. Another French mathematical all-star, Poisson, soon asked him to submit his work on Galois theory.

Now that something good had finally happened, Galois was apparently anxious to have more problems. So when the Trois Glorieuses came about, he became enthralled with the tumult of contemporary French politics, quit mathematics and joined a battalion of the National Guard. At a banquet, he gave a toast to the king that was perceived as a threat against the king's life, and he was jailed the next day. By Bastille Day he had been released, only to be promptly rearrested when he showed up to a protest covered in firearms. He was imprisoned, and shortly after arriving he received word from Poisson, who derided his work as "incomprehensible" and said that his "argument is neither sufficiently clear nor sufficiently developed to allow us to judge its rigor" -- about as tough a statement as a mathematician can make. Galois, naturally, was of poor humor during his initial days of incarceration.

After getting out of prison, young Evariste could finally return to his research. Unfortunately, true to his luck, he found himself embroiled in a torrid love affair with one Mlle du Motel, whose fiance challenged him to a duel and killed him. The twist? The night before, Galois, knowing he was a goner, stayed up composing his last mathematical work, which he sent to his friend the mathematician Auguste Chevalier. These documents contain ideas with enormous consequences for diverse areas of mathematics, and may have influenced the subsequent trend towards abstraction and generalization in math and science more than any other single document. Says Hermann Weyl: "This letter, if judged by the novelty and profundity of ideas it contains, is perhaps the most substantial piece of writing in the whole literature of mankind."

Tragically, his last words made a bleak reference to dying at the age of twenty -- a fitting end for a man whose life was one never-ending disaster, which makes it all the more shocking how influential his work has become. And Galois' last word on the mathematics? Bitterly: "ask Jacobi or Gauss publicly to give their opinion, not as to the truth, but as to the importance of these theorems. Later there will be, I hope, some people who will find it to their advantage to decipher all this mess."

What was in this work, and why was it so vastly important? Galois was studying a problem that basically amounts to determining whether a polynomial equation (like 4x^2=0 or 5x-1=0 or x^10-x+1=0) can be solved by radicals -- that is, by a formula built from the coefficients using only arithmetic and root-taking. It turns out the answer is: sometimes. Equations of degree four or less can always be solved this way. But for degree five and up (a polynomial like x^5+x-1=0), how can we tell, for one particular such equation, in some kind of mathematically sound way, whether it's solvable -- and how? The theory for this was laid out by Galois in his short life's work. He did a few amazing things. Like Lagrange before him, he connected permutations and the roots of polynomials (roots meaning "solutions"). What he realized was that he had to look at those permutations of the roots with a remarkable property: any algebraic equation of the roots (with rational coefficients) satisfied before the permutation is also satisfied after it. He then made another incredible step, formalizing the notion of a "group" as we know it.

I've put off telling you what a group is, but it's the first thing in abstract algebra. A group is a set (like {1,2,3} or {a,b,c} or "all the colors in the world") together with an "operation" such that doing the operation on any two elements gives you another element of the set -- plus a few bookkeeping rules: the operation must be associative, there must be an identity element, and every element must have an inverse. Pretty simple, but pretty powerful. Rich structure can be induced from these simple criteria, depending on the rules of the operation and the elements you choose. The shocking part of abstract algebra is that given two seemingly unrelated groups, one can often define homomorphisms (which you can think of as teleports between two different mathematical objects) to go between them. That means reasoning about one space can be carried over to another.
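
To make the definition concrete, here's a tiny sanity check (a toy sketch of my own, not anything from Galois): the integers {0,...,4} under addition modulo 5, verified against the group axioms by brute force.

```python
# Checking the group axioms by brute force: integers {0,...,4} under addition mod 5.
elements = range(5)
op = lambda a, b: (a + b) % 5

closed = all(op(a, b) in elements for a in elements for b in elements)
associative = all(op(op(a, b), c) == op(a, op(b, c))
                  for a in elements for b in elements for c in elements)
identity = next(e for e in elements if all(op(e, a) == a == op(a, e) for a in elements))
has_inverses = all(any(op(a, b) == identity for b in elements) for a in elements)

print(closed, associative, identity, has_inverses)  # True True 0 True
```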

Galois noticed that these permutations of the roots form exactly such a group. In fact, each polynomial has its own "Galois group," and the structure of that group tells you whether or not the polynomial's roots can be expressed by radicals.

What does structure mean?


This is an idea that must resonate. It is such a crucial and powerful concept that I believe it is a large organizing theme in the search for human knowledge. In this context, structure simply means that a group has particular "subgroups" -- subsets of the group that are themselves groups under the same operation. For example, the even integers (under addition) form a subgroup of the integers (under addition). Many interesting things follow from this; essentially, the subgroups are what define the structure of a group. And this is the case in Galois' solution as well.
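
Continuing the toy sketch from above, checking that a subset really is a subgroup is the same brute-force exercise -- here, the even residues inside the integers mod 6:

```python
# The even residues {0, 2, 4} form a subgroup of the integers mod 6 under addition.
subgroup = {0, 2, 4}
op = lambda a, b: (a + b) % 6

closed = all(op(a, b) in subgroup for a in subgroup for b in subgroup)
contains_identity = 0 in subgroup
has_inverses = all(any(op(a, b) == 0 for b in subgroup) for a in subgroup)

print(closed and contains_identity and has_inverses)  # True: a genuine subgroup
```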

By beginning the trend towards abstraction and generalization, Evariste Galois set mathematics on a course to decouple from the physical sciences and become a science in its own right. This raises the question: exactly what "science" do mathematicians study? I believe they study reality. They study some kind of general fabric of our universe so fundamental to existence that it perhaps goes unnoticed -- a concept too vague to give a simple label like "chemistry" or "physics." I suppose that's why we call it mathematics.

Wednesday, November 3, 2010

Kicking Off the Free Finance Project: Free Pairs Trading Software!

In order to kick off the Free Finance project, I've released a tutorial that guides interested parties through the creation of a statistical arbitrage screener. You can use it to analyze stocks and identify pairs that could be viable candidates for statistical arbitrage. The script works by running an Augmented Dickey-Fuller test -- the workhorse of the standard test for cointegration, the measure primarily used to determine whether two stocks tend to move together over time.
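
To make the mechanics concrete, here is a rough sketch of the idea in Python with simulated prices -- an illustration of the standard Engle-Granger procedure using statsmodels, not the Finatica script itself (in practice you'd likely use log prices):

```python
# Sketch of an Engle-Granger style pair test on simulated prices.
# Step 1: regress one series on the other. Step 2: ADF-test the residual spread.
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(1)
common = np.cumsum(rng.standard_normal(1000))        # shared random-walk driver
stock_a = common + rng.standard_normal(1000)         # two series that drift apart
stock_b = 0.8 * common + rng.standard_normal(1000)   # and snap back together

hedge = sm.OLS(stock_a, sm.add_constant(stock_b)).fit()
spread = hedge.resid                                 # the "gap" between the pair

p_value = adfuller(spread)[1]
print(f"ADF p-value on the spread: {p_value:.4f}")
if p_value < 0.05:                                   # stationary spread -> candidate pair
    print(f"looks cointegrated; hedge ratio {hedge.params[1]:.2f}")
```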

In order to run the script, you will need Finatica, which can be downloaded for free. The code for the screener is available at Free Finance.

Tuesday, November 2, 2010

Ernie Chan, Quant Trading Wizard, Backing Up SVMs?

Today Ernie Chan, who has previously been unenthusiastic about using machine learning algorithms in trading strategies, reports on a new study out of UC Berkeley that affirms the ability of support vector machines to predict markets in a statistically significant fashion using technical indicators.

http://epchan.blogspot.com/2010/10/data-mining-and-artificial-intelligence.html

As it happens, I have long been a huge SVM advocate: given the intuitions underlying their design, I believe they are clearly the superior nonlinear regression tool and should find a wide variety of commercial uses. Finatica provides an environment in which you can use SVMs, among a wide variety of other tools, to develop, backtest and deploy trading strategies. If you want to harness the power of SVMs quickly and without a lot of hassle, Finatica is the perfect choice for you. And a free version is available!
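
For readers who want a feel for the approach outside any particular platform, here is a minimal sketch -- my own toy illustration, not the Berkeley study's methodology: a couple of simple technical indicators go in, and an SVM classifies the next day's direction.

```python
# Toy SVM direction classifier on simulated prices, using two moving-average features.
import numpy as np
import pandas as pd
from sklearn.svm import SVC

rng = np.random.default_rng(0)
prices = pd.Series(100 * np.exp(np.cumsum(0.01 * rng.standard_normal(1000))))

# Two toy technical indicators: price relative to its 5- and 20-day moving averages.
feats = pd.DataFrame({
    "mom5": prices / prices.rolling(5).mean() - 1,
    "mom20": prices / prices.rolling(20).mean() - 1,
})
label = np.sign(prices.shift(-1) / prices - 1).rename("y")  # next day's direction

data = pd.concat([feats, label], axis=1).dropna()
split = int(len(data) * 0.7)              # walk-forward split: train on the past only
train, test = data.iloc[:split], data.iloc[split:]

model = SVC(kernel="rbf").fit(train[["mom5", "mom20"]], train["y"])
print("out-of-sample hit rate:", model.score(test[["mom5", "mom20"]], test["y"]))
```

On random-walk data like this the hit rate should hover around 50%, which is exactly the point: the interesting question is whether real indicators on real data beat that.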

Make sure to enter the code "JVDBLOG" for 50% off the registered version of Finatica.

Tuesday, October 19, 2010

GS and JNJ Earnings Are In

In my last post, I mentioned two forecasts from my model on stocks going into earnings, and discussed the implications of a model's performance during this critical time period.

GS and JNJ earnings have been reported, and our model's insight was correct in both cases. In the case of JNJ, analysts' estimates averaged $1.15 per share, with a low of $1.10 and a high of $1.17. In reality, JNJ reported $1.23 per share despite a minor slip in revenue, and raised its year-end EPS forecast. As for GS, America's perennial investment bank, analysts averaged a forecast of $2.29, with a low of $1.81 and a high of $3.00. They came in at $2.98, just shy of the maximum forecast.

It is 9:15 right now and both stocks are down in pre-market trading, so only time will tell how the market reacts. What is interesting, though, is that our model's predictions were in line with the true changes in the companies' fair values. Whether the perceived values will change with them, of course, remains to be seen.

Monday, October 18, 2010

GS and JNJ Earnings Tomorrow

I've been working on an options trading strategy recently that would loosely be classified as "volatility arbitrage." As an experiment, I took some of the trades it implied on two stocks entering earnings announcements: GS and JNJ. Both were long positions, coupled with a short position in PG. The short position is less important to the experiment, but is necessary for reasons that aren't important here.

The reason I find this particular moment so fascinating is because, in my opinion, it is the truest test of a strategy. When earnings are reported, equities experience a rare moment in which the full information set that determines their value is available. There is something of a "reset" to the perceived fair value, in which all market participants have the same information.

The question is, will the earnings announcement disrupt my strategy, or actually bolster it? If a model works by correctly identifying the genuine value of a security, the earnings announcement should force a convergence to fair value, which would be beneficial to the strategy. If, on the other hand, it works by predicting the changes in investors' whims, and in fact does little to determine true value, the earnings announcement should disrupt the strategy.

Naturally, the optimal strategy would fare well in both scenarios. But how could that be? We must construct portfolios that are sensitive to the likely outcome (such as convergence to fair value) but still robust to other possible scenarios in which investors misread the evidence.

Monday, August 30, 2010

The St. Petersburg Paradox and the Low Hanging Fruit

The most recent trading strategy I've been researching is based on so-called volatility arbitrage, a horrible misnomer for the strategy of trading volatility like an asset: taking a view and getting exposure, generally through options. But since the modern options market has all but adopted a notion of equivalence between volatility and price, trading volatility really amounts to determining the true fair value of options written on a particular underlying. It's not so much a forecasting problem as a modeling problem, although elementary forecasting does play a role in a good model.

I shared a bit of this information with a reader of this blog last night, and to my surprise, he commented that vol arb had been "picked clean" -- a term applied to strategies whose popularity has eclipsed their profitability. This term gets thrown around a lot with reference to "statistical arbitrage," a similarly misnamed strategy by which traders exploit gaps between correlated securities and bet that the gap will disappear over time.

Statistical arbitrage (as it is now known) is possible because of two Nobel Prize winners, Robert Engle and Clive Granger. They won the prize for developing a number of related time-series analysis techniques, one of which was the measure of "cointegration," which aptly embodies the "gap-ungap" property of a good stat arb pair.

The study of cointegration was born of Engle and Granger's realization that correlation is a terrible metric in most cases of statistical analysis. The statistical arbitrage opportunity, in turn, was born of the simple realization that one of the market's core beliefs -- that correlation is the correct measure of similarity -- was wrong. A paradigm shift takes place, and an opportunity arises until information diffusion has a chance to destroy it.
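
To see the distinction in miniature, here is an illustrative simulation of my own (not Engle and Granger's): two independent random walks can look correlated over a sample yet fail any cointegration test, while a genuinely cointegrated pair has a gap that mean-reverts.

```python
# Correlation vs. cointegration, in miniature -- an illustrative sketch.
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(2)
n = 1000

# Two independent random walks: frequently correlated in-sample, never cointegrated.
walk_a, walk_b = np.cumsum(rng.standard_normal((2, n)), axis=1)

# A genuinely cointegrated pair: the gap between them is stationary noise.
pair_a = np.cumsum(rng.standard_normal(n))
pair_b = pair_a + rng.standard_normal(n)

for name, (x, y) in [("random walks", (walk_a, walk_b)), ("true pair", (pair_a, pair_b))]:
    print(f"{name}: corr={np.corrcoef(x, y)[0, 1]:.2f}, coint p-value={coint(x, y)[1]:.3f}")
```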

Once enough people knew how to implement a cointegration test, they could implement stat arb. And stat arb got picked clean. It will stay that way until another paradigm shift comes along that lets us discover similar time series with greater precision, and then stat arb will be back in full force.

The point is: opportunity is created by widespread misunderstanding. And to demonstrate just how much work there is left to do in quantitative finance, and how many opportunities still exist to exploit widespread ignorance, let me guide you through some of the theory of options valuation, and present some startling contradictions that you may find quite shocking.

But first, let me bore you with some mathematical history. The St. Petersburg Paradox is a problem posed by Nicolas Bernoulli and famously resolved by his cousin Daniel. It describes a lottery in which one buys a ticket, and the "dealer" of sorts flips a coin repeatedly until tails comes up. If tails comes up on the first flip, the gambler gets $1. If it comes up on the second, he gets $2; on the third, $4; and so on ad infinitum.

The problem: determine the fair price of a ticket. Most people would do this by taking the expectation over the probability distribution defined by the payoffs, but get this -- the expectation is infinite! That implies a rational, expectation-maximizing human being should pay any price for a ticket!

Clearly, a human being wouldn't pay just any price for a ticket. Daniel Bernoulli resolved the problem by expressing the payoffs in terms of a diminishing marginal utility function (the logarithm), which puts the fair value at $2 -- a lot more sensible. The point of the paradox is how nonsensical an answer you can get by naively taking expectations.
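
A few lines of Python make both halves of the argument concrete (my own toy check, using the payoff convention above):

```python
# St. Petersburg in code: the raw expectation diverges, but log utility tames it.
import math

# If the first tails lands on flip k (probability 1/2**k), the payoff is 2**(k-1),
# so each term of the raw expectation is (1/2**k) * 2**(k-1) = 1/2 and the sum diverges.

# Expected log utility converges; its certainty equivalent is the log-utility fair price.
expected_log_utility = sum((0.5 ** k) * (k - 1) * math.log(2) for k in range(1, 200))
print(math.exp(expected_log_utility))  # ~2.0 -- Bernoulli's $2 fair value
```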

Now let's turn our attention to the original work of Black and Scholes, something I intend to discuss plenty on this blog. Black and Scholes begin by dreaming up a portfolio containing a delta-hedged option position, and by no-arbitrage arguments reason that the portfolio must yield the risk-free rate, since it is itself risk-free.
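
Writing that argument out yields the following (in standard textbook notation, which I'm supplying here for reference: V is the option value, S the stock price, σ the volatility and r the risk-free rate):

∂V/∂t + (1/2)σ^2 S^2 ∂^2V/∂S^2 + rS ∂V/∂S - rV = 0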

This is the celebrated Black-Scholes differential equation, whose form is the basis for the great majority of derivative pricing methods. Since the equation contains no risk-dependent parameters, like the stock's drift rate, they reason that the derivative's value depends neither on the level of risk nor on the expected return of the stock.

This is where it starts to get weird. They reason that since the derivative's value is independent of risk, the value of an option in our world is the same as its value in a hypothetical universe called the "risk-neutral world."

The risk-neutral world is a bit like Eden. All investors are 100% tolerant of risk. All securities earn the risk-free rate. Securities are all equal to their discounted expected value under the correct probability measure.

They reason that since the value of the derivative in our world is the same as in theirs, we may as well make life simple and value the derivative in their world. So they do: they combine the differential equation above with the solution of an Ito integral representing the stock price's lognormal diffusion, with drift equal to the risk-free rate. That yields the Black-Scholes solution.
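
That solution, for a European call on a non-dividend-paying stock, fits in a few lines of code -- a textbook sketch, not their original derivation:

```python
# Black-Scholes price of a European call on a non-dividend-paying stock.
import math
from scipy.stats import norm

def black_scholes_call(S, K, T, r, sigma):
    """S: spot, K: strike, T: years to expiry, r: risk-free rate, sigma: volatility."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm.cdf(d1) - K * math.exp(-r * T) * norm.cdf(d2)

print(black_scholes_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2))  # ~10.45
```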

I hope the St. Petersburg Paradox is contrasting nicely with this Nobel Prize-winning reasoning of Black and Scholes. We're going to make the comparison even more explicit by extending the discussion to the pricing of American options, which can be exercised at any point during their lifetime, not just at expiry.

The Black-Scholes-style way to value an American option is with a binomial tree representation, popularized by Cox, Ross and Rubinstein. A tree is constructed to represent the lognormal process described above; each leaf node is annotated with the option's payoff there; each interior node is annotated with the discounted expected value of its children under the tree's induced probability measure -- and, for an American option, with the exercise value instead whenever immediate exercise is worth more.

Now, if the stock doesn't pay a dividend, we have a bizarre situation. Under the induced probability measure of the binomial tree, the discounted price process is (by the Black-Scholes theory) a martingale, and for a call one can show that continuing to hold always beats exercising early: there is no optimal early stopping time, so the early-exercise right is never used. Therefore, the value of an American call on a non-dividend-paying stock is equal to the value of the corresponding European call. (Puts are another story: with positive interest rates, early exercise of a deep in-the-money American put can pay.)
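
The claim is easy to check numerically. Here is a minimal Cox-Ross-Rubinstein tree of my own (an illustrative sketch, not the original paper's construction) pricing the same call and put both ways:

```python
# Minimal Cox-Ross-Rubinstein binomial tree (no dividends) -- an illustrative sketch.
import math

def crr_price(S0, T, r, sigma, steps, payoff, american):
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))       # up factor
    d = 1 / u                                 # down factor
    q = (math.exp(r * dt) - d) / (u - d)      # risk-neutral up probability
    disc = math.exp(-r * dt)
    # option values at expiry
    values = [payoff(S0 * u**j * d**(steps - j)) for j in range(steps + 1)]
    # roll back through the tree
    for i in range(steps - 1, -1, -1):
        for j in range(i + 1):
            cont = disc * (q * values[j + 1] + (1 - q) * values[j])
            exercise = payoff(S0 * u**j * d**(i - j))
            values[j] = max(cont, exercise) if american else cont
    return values[0]

call = lambda S: max(S - 100.0, 0.0)
put = lambda S: max(100.0 - S, 0.0)
args = dict(S0=100.0, T=1.0, r=0.05, sigma=0.2, steps=500)

print(crr_price(payoff=call, american=False, **args))  # European call
print(crr_price(payoff=call, american=True, **args))   # American call: identical
print(crr_price(payoff=put, american=False, **args))   # European put
print(crr_price(payoff=put, american=True, **args))    # American put: strictly higher
```

Run it and the two call values agree exactly, while the American put comes out visibly above its European twin -- the early-exercise premium the theory says a call can never have.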

You might need a minute to digest that. Yes, it's true: under the Black-Scholes assumptions, an American call has the same value as a European one so long as there is no dividend. And these same assumptions that yield such a terrifying paradox also underlie the rest of the models used for pricing and risk management of derivative contracts.

The source of the contradiction is similar to the St. Petersburg Paradox: the reckless taking of expectations, that reduction of a probability distribution to a scalar, is a dangerous approach. And much more importantly, the idealistic assumptions of Black-Scholes don't just manifest themselves in an unrealistic price process with skinny tails, as people frequently point out, but in the very model of human behavior underlying the theory.

See, with European options, there's no behavior to model, since nobody really has much of a choice. The only choice happens at expiry, and you can bet your bottom dollar that the option will be exercised if and only if it is in the money. With American options, on the other hand, the holder faces the choice of whether to exercise at every moment the option is in the money. As in the St. Petersburg Paradox, the assumption that humans are rational, expectation-taking automata is highly inappropriate in finance. In fact, options investors will tell you it's common knowledge never to hold an option to expiry; how ironic, then, that the behavioral modeling of Black and Scholes leads to a scenario in which doing so is not just possible but unavoidable.

The real point of this article isn't about American options or the St. Petersburg Paradox, but the "standard of proof" in our nascent field. Will we accept gross generalizations, and ignore their shortcomings to the point we award the Nobel Prize to the mathematicians who created the above model? Or will we enforce a standard of rigor, look at problems on their face, and imagine creative solutions in real-life conditions?

The question isn't so much about proof as assumptions. It is one of the assumptions of Black and Scholes that leads to the notion of risk-neutral valuation, and hence this odd model of human behavior.

When I ask people what they think the worst assumption in Black-Scholes is, I usually get some swagger about fat tails and normal distributions, or maybe complete markets from those in the know. But here's the bigger one, the much, much bigger one:

Black and Scholes ignore transaction costs.

Is this a fair assumption? In most of economics, it is. Consider stocks and bonds: is it valid to ignore transaction costs when discussing a deal in stocks or bonds? I think so. It's possible that transaction costs for simple assets like these will eventually be driven to zero, as greater access to information makes it easier to find the appropriate counterparty. And in the meantime, to be honest, it really is a fair assumption: transaction costs here are low relative to the cost of carry and the cost of buying the security, and they don't make much of a difference.

But options? Options aren't like stocks, to my mind. Remember, the basis of the Black-Scholes argument was the delta-hedged portfolio yielding the risk-free rate. But in order to actually maintain that kind of portfolio, you need to hire a trader to sit there and hedge and re-hedge by hand. In fact, the Black-Scholes differential equation represents a continuously re-hedged portfolio, implying virtually unbounded transaction costs. And since the derivative didn't exist in the first place, the transaction costs also include writing the option and selling it. For options, transaction costs aren't a negligible factor -- they're everything.

It's clear that quantitative finance professionals have a long hill to climb in order to engender the kind of progress that would legitimize our work to the rest of the scientific community. But in this lack there lies a great amount of opportunity, not just for monetary gain, but for intellectual exploration and the fun of pushing the envelope. Clearly, there's something amiss about the theory underlying the most common of derivative pricing formulas. But it's unclear whether an easy solution exists. It seems there are still many questions to answer, not just mathematical, but behavioral and psychological. There is no doubt that quantitative finance has developed into a scientific discipline, but we may be surprised to find the aftertaste of other fields in papers to come.