The Process


Seeking Stillness by Mark Rothko. Oil on Canvas


How to build a trading strategy that transitions from an abstract representation of the market into something concrete, with genuine, if partial, predictive capability.

First, start with a hypothesis: for instance, a correlation between two instruments, or a macroeconomic factor that drives microeconomic pricing behavior.

Second, write down the equations and heuristics that define the framework of the model. I use a process equation, geometric Brownian motion, that describes how the input variables evolve over time with a random (continuous-time) stochastic component. In other words, I take the stationary, independent increments of economic data reported weekly and measure the càdlàg (French shorthand for "right continuous with left limits," or RCLL) Brownian drift year-over-year, taking into consideration the month-over-month changes that affect the probability of the next report slowing or accelerating.
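
For illustration, here is a minimal sketch of a geometric Brownian motion path simulator in Python; the drift and volatility values are placeholders, not calibrated figures.

```python
import numpy as np

def simulate_gbm(s0, mu, sigma, n_steps, dt, seed=None):
    """Simulate one geometric Brownian motion path:
    dS = mu * S * dt + sigma * S * dW, using the exact log-normal solution."""
    rng = np.random.default_rng(seed)
    shocks = rng.standard_normal(n_steps)  # standard normal shock per step
    # Exact discretization: S_{t+dt} = S_t * exp((mu - 0.5*sigma^2)*dt + sigma*sqrt(dt)*Z)
    increments = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * shocks
    return s0 * np.exp(np.concatenate(([0.0], np.cumsum(increments))))

# Placeholder parameters: 5% annual drift, 20% annual volatility, weekly steps
path = simulate_gbm(s0=100.0, mu=0.05, sigma=0.20, n_steps=52, dt=1/52, seed=42)
print(path[:5])
```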

Third, find a closed-form solution for this model, if one does in fact exist. If no reasonable solution can be found, I resort to approximations.


Now that a model representing the market has been created, I need to test whether it is realistic. I do this by plugging in plausible values for the various parameters and running simulations, observing and noting whether the simulated outputs produce values in line with my hypothesis's expectations. The bottom-line question: do the values reflect, conceptually, the actual dynamics of the market?
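
In practice, this sanity check can be as simple as sweeping a grid of plausible parameter values, reusing the simulate_gbm sketch above, and eyeballing summary statistics of the simulated paths; the ranges below are purely illustrative.

```python
import numpy as np

# Sweep hypothetical drift/volatility combinations and inspect terminal values
for mu in (0.0, 0.05, 0.10):          # candidate annual drifts
    for sigma in (0.10, 0.20, 0.40):  # candidate annual volatilities
        paths = np.array([
            simulate_gbm(100.0, mu, sigma, n_steps=52, dt=1/52, seed=i)
            for i in range(500)
        ])
        terminal = paths[:, -1]
        print(f"mu={mu:.2f} sigma={sigma:.2f} "
              f"mean={terminal.mean():7.2f} std={terminal.std():6.2f}")
```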

Assuming the model passes the sanity check, I move from the ideation phase into the "formal research" phase. This phase constitutes the transition from an abstract, stylized representation of the market to more concrete and unambiguous outputs.

To wit: it is hard to build a truly predictive model without significant, capital-intensive AI technology; it is far easier to fool yourself into believing that what you have built has genuine predictive power. I call this the Long Island Medium dilemma.


When it comes to markets, a backtest will almost never tell you that your idea is wrong, so to speak. More likely, the model you have created so far is over-fitted, has been tested only in sample, or has exogenous knowledge baked into its heuristic parameters. This is the phase where most "systems" fall apart on their maiden voyage into real-world application.

To overcome this hurdle I employ a slow, systematic approach that minimizes the risk that I fool myself. This is what I define as the "formal process," the next level of gatekeeping before capital is committed.


More on the formal research process. To run realistic simulations against the model, it is imperative to guard against data contamination. Historical data is a finite resource; once you have run out of uncontaminated data to test against, you cannot generate any more. It becomes a creative endeavor to preserve meaningful supplies of uncontaminated, out-of-sample data.

I begin by dividing my historical data set into non-overlapping sections, then randomizing the order of those sections to prevent confirmation bias toward a known outcome: for instance, becoming risk-averse because I know a dataset is sampled from 2008 and the GFC, or risk-seeking because I know it is sampled from the recovery year that followed, 2009.
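
A minimal sketch of that blocking-and-shuffling step, assuming the history sits in a pandas DataFrame indexed by date; the synthetic data and block count are placeholders.

```python
import numpy as np
import pandas as pd

def shuffled_blocks(df, n_blocks, seed=0):
    """Split a time-indexed DataFrame into contiguous, non-overlapping blocks
    and return them in random order with anonymous labels."""
    edges = np.linspace(0, len(df), n_blocks + 1, dtype=int)
    blocks = [df.iloc[edges[i]:edges[i + 1]] for i in range(n_blocks)]
    order = np.random.default_rng(seed).permutation(n_blocks)
    # Drop the date index so the analyst cannot infer which era a block covers
    return {f"block_{i}": blocks[j].reset_index(drop=True) for i, j in enumerate(order)}

# Synthetic weekly history standing in for real market data
dates = pd.date_range("2005-01-01", "2015-12-31", freq="W")
history = pd.DataFrame({"ret": np.random.default_rng(1).normal(0, 0.01, len(dates))}, index=dates)
sections = shuffled_blocks(history, n_blocks=10)
print(list(sections)[:3], len(sections["block_0"]))
```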

I designate one data section as my calibration set and fit the model to it using Python's optimization libraries. Because the parameters are constrained and correlated, I use a two-step optimization process, the EM algorithm.
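
The following is only a schematic of that two-step idea: an alternating calibration in the spirit of EM, applied to a placeholder likelihood rather than the real model, with scipy handling each sub-problem.

```python
import numpy as np
from scipy.optimize import minimize

def loss(theta, data):
    """Placeholder calibration loss: Gaussian negative log-likelihood
    (up to constants) standing in for the real model's objective."""
    mu, sigma = theta
    return np.sum((data - mu) ** 2) / (2 * sigma**2) + len(data) * np.log(sigma)

def two_step_calibrate(data, theta0, n_iter=20):
    """Alternate between the two parameter groups, holding the other fixed."""
    mu, sigma = theta0
    for _ in range(n_iter):
        # Step 1: update mu with sigma held fixed
        mu = minimize(lambda m: loss((m[0], sigma), data), [mu]).x[0]
        # Step 2: update sigma with mu held fixed, constrained to stay positive
        sigma = minimize(lambda s: loss((mu, s[0]), data), [sigma],
                         bounds=[(1e-6, None)]).x[0]
    return mu, sigma

data = np.random.default_rng(0).normal(0.2, 1.5, 500)
print(two_step_calibrate(data, theta0=(0.0, 1.0)))
```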

To wit: optimizers can be sensitive to initial conditions, so I use Monte Carlo simulation to choose a number of starting points across the solution space, all of which can be done very efficiently in Python. The result of this calibration is a set of "model parameters": numerical values that can be combined with actual market observations to foreshadow other market prices.
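
A sketch of that multi-start idea, reusing the placeholder loss from the previous sketch; the bounds and number of starts are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize

def multi_start_calibrate(loss, data, bounds, n_starts=50, seed=0):
    """Draw random starting points across the bounded solution space and keep
    the best local optimum, reducing sensitivity to initial conditions."""
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    best = None
    for _ in range(n_starts):
        x0 = rng.uniform(lo, hi)                        # Monte Carlo starting point
        res = minimize(loss, x0, args=(data,), bounds=bounds)
        if res.success and (best is None or res.fun < best.fun):
            best = res
    return best.x  # the calibrated "model parameters"

data = np.random.default_rng(1).normal(0.1, 2.0, 500)
params = multi_start_calibrate(loss, data, bounds=[(-1.0, 1.0), (0.01, 5.0)], n_starts=25)
print(params)
```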

Once I've calibrated the model, I test it out of sample, asking: are the predictions stable, and are the residuals mean-reverting? If not, the model doesn't work. Period. If it passes, I try to break the model through various sources and methods.
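
One concrete way to check the residuals, assuming statsmodels is available, is an augmented Dickey-Fuller test; the AR(1) series below is a synthetic stand-in for real out-of-sample residuals.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

def residuals_mean_revert(residuals, alpha=0.05):
    """Augmented Dickey-Fuller test: reject the unit-root null when the
    p-value is below alpha, i.e. the residuals look mean-reverting."""
    stat, pvalue, *_ = adfuller(residuals)
    return pvalue < alpha, pvalue

# Synthetic residuals: an AR(1) with coefficient < 1 is mean-reverting
rng = np.random.default_rng(2)
resid = np.zeros(500)
for t in range(1, 500):
    resid[t] = 0.7 * resid[t - 1] + rng.normal(0, 1)

ok, p = residuals_mean_revert(resid)
print(f"mean-reverting: {ok} (p = {p:.4f})")
```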


For instance, I calibrate on monthly data but test on daily data. Or I test U.S. parameters on data from Canadian bourses. If the model truly reflects underlying economic reality, it should be fairly robust to these sources-and-methods attacks, assuming, ceteris paribus, that economics does not change when you cross borders.
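
As a rough sketch of such a transfer test, reusing the loss and multi_start_calibrate helpers from the earlier sketches: calibrate on one synthetic stand-in dataset and score the frozen parameters on a completely different one.

```python
import numpy as np

# Stand-ins: "U.S. monthly" calibration data vs. "Canadian daily" test data
us_monthly = np.random.default_rng(3).normal(0.1, 2.0, 240)
ca_daily = np.random.default_rng(4).normal(0.1, 2.1, 5000)

params = multi_start_calibrate(loss, us_monthly, bounds=[(-1.0, 1.0), (0.01, 5.0)])
print("in-sample loss per obs:    ", loss(params, us_monthly) / len(us_monthly))
print("cross-market loss per obs: ", loss(params, ca_daily) / len(ca_daily))
# If the per-observation loss degrades badly across borders or frequencies,
# the model is capturing an artifact of one dataset, not underlying economics.
```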

In sum: I strictly separate in-sample and out-of-sample data, blind myself to date ranges, use Monte Carlo simulation to avoid starting-point biases, and test the model's robustness by calibrating it to the data in different ways.

Moreover, I place a high premium on parsimony, the scientific principle that things usually connect, or behave, in the simplest and most economical way. If my model requires too many parameters or has too many degrees of freedom, it is curve-fitted and thus not a model.
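
One way to make that penalty on extra degrees of freedom concrete is an information criterion such as BIC; the figures below are purely illustrative.

```python
import numpy as np

def bic(n_obs, n_params, neg_log_likelihood):
    """Bayesian information criterion: lower is better, and every extra
    parameter must earn its keep against the log(n) penalty."""
    return 2 * neg_log_likelihood + n_params * np.log(n_obs)

# Hypothetical comparison: a 2-parameter model vs. a 9-parameter variant
# that fits the calibration set slightly better
print(bic(n_obs=500, n_params=2, neg_log_likelihood=710.0))  # ~1432
print(bic(n_obs=500, n_params=9, neg_log_likelihood=705.0))  # ~1466: worse despite the better fit
```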

To prevent over-factorization within the model, I am constantly weeding out factors or streamlining the existing ones. Each round of improvements is then tested against the model's past results to determine whether the changes continue to produce "rich" outputs.

A second proof of robustness is that the model works well no matter what trading strategy you build on top of it. If you can only generate alpha using a complex, non-linear scaling rule with all sorts of edge conditions, that suggests a lack of robustness.

Lastly, there's no substitute for data. I think of every possible out-of-sample dataset that I can plausibly test the model on: different countries, different instruments, different time frames, different data frequencies. The model has to work on all of them; otherwise you have selection bias in the results.

With a calibrated model, the next step is to build a P&L simulation. Mean-reverting residuals might not suffice if the opportunity set is too small to compensate for the bid-ask spread, or if occasional blowups kill all my profits. So I need to test an actual trading strategy built on my model.
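
A toy sketch of such a P&L simulation, with synthetic prices and a placeholder signal; the cost figure is an arbitrary stand-in for half the bid-ask spread.

```python
import numpy as np

def pnl_simulation(signal, prices, cost_per_trade=0.0005):
    """Toy P&L: go long when the signal says 'cheap', short when 'rich',
    and pay a cost every time the position changes."""
    position = np.sign(signal)                       # +1 long, -1 short, 0 flat
    returns = np.diff(prices) / prices[:-1]
    gross = position[:-1] * returns                  # yesterday's position earns today's return
    trades = np.abs(np.diff(position, prepend=0.0))[:-1]
    net = gross - trades * cost_per_trade
    return net.cumsum()

# Synthetic stand-ins for traded prices and a mean-reversion signal
rng = np.random.default_rng(5)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 1000)))
signal = -(prices - np.convolve(prices, np.ones(20) / 20, mode="same"))  # fade deviations from a moving average
curve = pnl_simulation(signal, prices)
print(f"final P&L: {curve[-1]:.4f}, worst drawdown: {(curve - np.maximum.accumulate(curve)).min():.4f}")
```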

Now we arrive at the production phase of the process.

For starters, I now have to worry about the “real world” — nuances like day-count conventions, settlement dates and holidays. When calibrating on historical data, you can get away with approximations for these. But when it comes to individual live trades, you can’t be sloppy; you have to be exact.
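
As a small illustration of why these conventions matter, here is a simplified year-fraction calculation under two common day-count conventions; a production system would also handle holiday calendars, settlement lags, and the full 30/360 end-of-month rules.

```python
from datetime import date

def year_fraction(start, end, convention="ACT/360"):
    """Year fraction between two dates under two common day-count conventions
    (simplified; real systems apply additional end-of-month adjustments)."""
    if convention == "ACT/360":
        return (end - start).days / 360.0
    if convention == "30/360":
        d1, d2 = min(start.day, 30), min(end.day, 30)
        days = 360 * (end.year - start.year) + 30 * (end.month - start.month) + (d2 - d1)
        return days / 360.0
    raise ValueError(f"unknown convention: {convention}")

print(year_fraction(date(2024, 1, 31), date(2024, 7, 31), "ACT/360"))  # ~0.5056
print(year_fraction(date(2024, 1, 31), date(2024, 7, 31), "30/360"))   # 0.5
```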

Another aspect of production is that speed is critical. I can’t fit my model to market data in real time (gradient descent is slow!) so instead, I have to reduce everything to linear approximations of changes. This entails a lot of matrix manipulation.
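
A sketch of that linearization step: a finite-difference Jacobian computed once offline, so that a live update becomes a single matrix-vector product. The model_prices function here is a hypothetical placeholder for a real pricing model.

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """Finite-difference Jacobian of f at x: the sensitivity matrix that
    replaces a full re-fit with a fast first-order update."""
    fx = np.asarray(f(x))
    jac = np.empty((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        jac[:, i] = (np.asarray(f(x + dx)) - fx) / eps
    return jac

# Hypothetical pricing function mapping model parameters to a vector of outputs
def model_prices(theta):
    a, b = theta
    return np.array([a + b, a * b, a - 2 * b])

theta0 = np.array([1.0, 0.5])
J = numerical_jacobian(model_prices, theta0)
d_theta = np.array([0.01, -0.02])
# First-order update: one matrix-vector product instead of a full recalibration
print(model_prices(theta0) + J @ d_theta)
```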

I usually build an execution prototype that does everything "correctly" but inefficiently. I then hand that over to my engineering colleagues, who build a performant version in Python or even C, using the market libraries they've built over the years. That version pipes its output into my trading station, where I actually start executing the strategy.

And then, hopefully, I start making money.

It typically takes months of work to bring a new strategy from drawing-board to production – and that’s for the strategies that actually work. Most don’t. And even the successful strategies have a shelf-life of a couple of years before they get arbitraged away, so the above process repeats itself all the time. I have to reinvent my approach to trading every few years.
