The Process


Seeking Stillness by Mark Rothko. Oil on Canvas

 

How to build a trading strategy that transitions from an abstract representation of the market into something concrete, with genuine (if partial) predictive capability.

First, start with a hypothesis: for instance, a correlation between two instruments, or a macroeconomic factor that drives microeconomic pricing behavior.

Second, write down the equations and heuristics that define the framework of the model. I use a process equation, geometric Brownian motion, that describes how the input variables evolve over time with a random, continuous-time stochastic component. In other words, I take the stationary independent increments of economic data reported weekly and measure the càdlàg (a French acronym for "right continuous with left limits," or RCLL) Brownian drift year-over-year, while taking into consideration the month-over-month change that affects the probability of the next report slowing or accelerating.
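For reference, the standard geometric Brownian motion process equation – with the drift $\mu$ and volatility $\sigma$ standing in for whatever the hypothesis specifies – is:

$$dS_t = \mu S_t\,dt + \sigma S_t\,dW_t$$

where $S_t$ is the level of the tracked series at time $t$ and $W_t$ is a standard Brownian motion (whose paths are càdlàg).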

Third, find a closed-form solution for this model, if one in fact exists. If no reasonable solution is found, I resort to approximations.
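For geometric Brownian motion itself, a closed-form solution does exist; applying Itô's lemma to the process equation above gives

$$S_t = S_0 \exp\!\left(\left(\mu - \tfrac{\sigma^2}{2}\right)t + \sigma W_t\right)$$

so simulated values can be drawn directly from the lognormal distribution rather than by stepping the SDE numerically.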


Now that a model that represents the market exists, I need to test whether it is realistic. I do this by plugging in plausible values for the various parameters and running simulations, observing whether the simulated outputs produce values in line with my hypothesis's expectations. The bottom-line question: do the values reflect, conceptually, the actual dynamics of the market?
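A minimal sketch of that sanity check, assuming the GBM form above and purely illustrative parameter values (nothing here is calibrated to real data):

```python
import numpy as np

def simulate_gbm(s0, mu, sigma, n_steps, dt, n_paths, seed=0):
    """Simulate geometric Brownian motion paths via the exact discretization."""
    rng = np.random.default_rng(seed)
    # Increments of log(S) are normal with mean (mu - sigma^2/2)*dt and std sigma*sqrt(dt)
    shocks = rng.normal((mu - 0.5 * sigma**2) * dt,
                        sigma * np.sqrt(dt),
                        size=(n_paths, n_steps))
    log_paths = np.log(s0) + np.cumsum(shocks, axis=1)
    return np.exp(log_paths)

# Plausible (illustrative) parameters: 3% drift, 20% vol, weekly steps over one year
paths = simulate_gbm(s0=100.0, mu=0.03, sigma=0.20, n_steps=52, dt=1/52, n_paths=10_000)

# Sanity check: do the simulated outputs sit in a conceptually reasonable range?
print("median terminal value:", np.median(paths[:, -1]))
print("5th / 95th percentile:", np.percentile(paths[:, -1], [5, 95]))
```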

Assuming the model passes the sanity check, I move from the ideation phase into the "formal research" phase. This phase constitutes the transition from an abstract, stylized representation of the market to more concrete and unambiguous outputs.

To wit: it's hard to build a truly predictive model without significant, capital-intensive AI technology; however, I've found it is easy to fool yourself into believing that what you've built truly has predictive power. I call this the Long Island Medium dilemma.


Regarding markets, backtesting is riddled with false positives, so to speak: the model you've created so far has likely been over-fitted, tested solely in-sample, or had exogenous knowledge baked into its heuristic parameters. This is the phase where most "systems" fall apart on their maiden voyages, when they are applied to the real world.

To overcome this hurdle, I employ a slow, systematic approach that minimizes the risk of fooling myself. This is what I define as the "formal process," or the next level of gatekeeping to capital.


More on the formal research process. To run realistic simulations against the model, it is imperative to guard against data contamination. Historical data is a finite resource; once you've run out of uncontaminated data to test against, you can't generate any more. Thus, it becomes a creative endeavor to generate meaningful supplies of uncontaminated out-of-sample data.

I begin by dividing my historical data set into non-overlapping sections, then randomizing those sections to prevent confirmation bias toward an outcome (for instance, being risk-averse when you know the data is sampled from 2008 and the GFC, or conversely, risk-seeking when you know it is sampled from the following year, 2009).
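A sketch of that blocking-and-shuffling step, assuming the history lives in a pandas DataFrame indexed by date (the section count and file name are illustrative):

```python
import numpy as np
import pandas as pd

def make_blinded_sections(df: pd.DataFrame, n_sections: int, seed: int = 0):
    """Split a historical DataFrame into non-overlapping sections and return them
    in random order with anonymized labels, so the researcher can't tell which
    regime (e.g. 2008 vs. 2009) a given section came from."""
    sections = np.array_split(df, n_sections)        # non-overlapping, contiguous blocks
    rng = np.random.default_rng(seed)
    order = rng.permutation(n_sections)              # randomize presentation order
    return {f"section_{i}": sections[j].reset_index(drop=True)  # drop dates to blind regimes
            for i, j in enumerate(order)}

# Usage (hypothetical file name):
# history = pd.read_csv("weekly_data.csv", index_col=0, parse_dates=True)
# blinded = make_blinded_sections(history, n_sections=8)
```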

I designate one data section as my calibration set and fit the model to it using Python's built-in optimization libraries. Using a two-step optimization process, the EM algorithm, I estimate parameters that are constrained and correlated.
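As a toy illustration of the two-step EM pattern (not the actual model being calibrated here), below is a minimal expectation-maximization loop fitting a two-component Gaussian mixture: the E-step infers the latent assignments, the M-step re-estimates the parameters given them.

```python
import numpy as np
from scipy.stats import norm

def em_gaussian_mixture(x, n_iter=100):
    """Two-step EM on a 1-D, two-component Gaussian mixture (toy example)."""
    # Crude initialization
    mu = np.percentile(x, [25, 75]).astype(float)
    sigma = np.array([x.std(), x.std()])
    weight = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each observation
        dens = weight * norm.pdf(x[:, None], mu, sigma)      # shape (n, 2)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters given the responsibilities
        nk = resp.sum(axis=0)
        weight = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return weight, mu, sigma

# Usage on synthetic data
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-1, 0.5, 500), rng.normal(2, 1.0, 500)])
print(em_gaussian_mixture(x))
```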

To wit: optimizers can be sensitive to initial conditions, so I use Monte Carlo simulation to choose a number of starting points across the solution space, all done very efficiently in Python. The result of this calibration is a set of "model parameters": numerical values that can be combined with actual market observations to foreshadow other market prices.
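A sketch of that multi-start idea, with scipy as the assumed optimization library and a hypothetical calibration loss and bounds:

```python
import numpy as np
from scipy.optimize import minimize

def multistart_calibrate(loss, bounds, n_starts=50, seed=0):
    """Run a local optimizer from Monte Carlo starting points and keep the best fit,
    since a single start can get stuck depending on initial conditions."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    best = None
    for _ in range(n_starts):
        x0 = rng.uniform(lo, hi)                    # random point in the solution space
        res = minimize(loss, x0, bounds=bounds, method="L-BFGS-B")
        if best is None or res.fun < best.fun:
            best = res
    return best

# Usage with a hypothetical two-parameter calibration error:
# best = multistart_calibrate(lambda p: calibration_error(p, data),
#                             bounds=[(0.0, 1.0), (0.01, 2.0)])
```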

Once I've calibrated the model, I test it out of sample, asking: are the predictions stable, and are the residuals mean-reverting? If not, the model doesn't work. Period. If it passes, I try to break the model through various sources and methods.
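One way to operationalize the mean-reversion check (a swap-in, not necessarily the author's exact test) is an Augmented Dickey-Fuller unit-root test on the out-of-sample residuals, here via statsmodels:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

def residuals_look_mean_reverting(residuals, alpha=0.05):
    """Treat the residuals as mean-reverting only if a unit root can be rejected
    at the chosen significance level."""
    stat, pvalue, *_ = adfuller(np.asarray(residuals, dtype=float), autolag="AIC")
    return pvalue < alpha   # True -> residuals appear stationary / mean-reverting

# Usage (hypothetical names):
# residuals = observed_prices - model_predictions
# if not residuals_look_mean_reverting(residuals): discard the model
```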


For instance, I calibrate on monthly data but test on daily data. Or I test U.S. parameters on data from Canadian bourses. If the model truly reflects underlying economic reality, it should be fairly robust to these sources-and-methods attacks, assuming, ceteris paribus, that economics does not change when you cross borders.

In sum, I strictly separate in-sample and out-of-sample data, blind myself to date ranges, and use Monte Carlo simulation to avoid starting-point biases, testing the model's robustness through different calibrations to the data.

Moreover, I place a high premium on parsimony, the scientific principle that things usually behave in the simplest, most economical way. If my model requires too many parameters or has too many degrees of freedom, it's curve-fitted and thus not a model.

To prevent over-factorization within the model, I am constantly weeding out factors or streamlining the existing ones. Each round of improvements is then tested against the model's past results to determine whether the changes continue producing "rich" outputs.

A second proof of robustness is whether the model works well no matter what trading strategy you build on top of it. If you can only generate alpha using a complex, non-linear scaling rule with all sorts of edge conditions, that suggests a lack of robustness.

Lastly, there's no substitute for data. I think of every possible out-of-sample dataset that I can plausibly test the model on: different countries, different instruments, different time frames, different data frequencies. The model has to work on all of them, or else you have selection bias in the results.

With a calibrated model, the next step is to build a P&L simulation. Mean-reverting residuals might not suffice if the opportunity set is too small to compensate for the bid-ask spread, or if the occasional blowups kill all my profits. So I need to test an actual trading strategy using my model.
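A minimal sketch of such a P&L simulation, assuming a simple threshold rule on the model's residual signal and a fixed half-spread cost per position change (all names and numbers are illustrative):

```python
import numpy as np

def simulate_pnl(prices, signal, entry_z=1.0, half_spread=0.02):
    """Toy P&L simulation: go short when the signal is rich, long when cheap,
    and pay half the bid-ask spread every time the position changes."""
    prices = np.asarray(prices, dtype=float)
    signal = np.asarray(signal, dtype=float)
    z = (signal - signal.mean()) / signal.std()
    position = np.where(z > entry_z, -1, np.where(z < -entry_z, 1, 0))
    price_moves = np.diff(prices)
    gross = position[:-1] * price_moves                               # P&L from holding
    costs = half_spread * np.abs(np.diff(position, prepend=0))[:-1]   # cost of trading
    net = gross - costs
    return net.sum(), net

# The question this answers: after paying the bid-ask drag and eating the
# occasional blowup, does the mean-reversion edge leave anything behind?
```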

Now we arrive at the production phase of the process.

For starters, I now have to worry about the “real world” — nuances like day-count conventions, settlement dates and holidays. When calibrating on historical data, you can get away with approximations for these. But when it comes to individual live trades, you can’t be sloppy; you have to be exact.
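A small illustration of why this matters, using the ACT/360 money-market day count as an example convention (the dates, notional and rate are made up):

```python
from datetime import date

def accrual_act_360(start: date, end: date) -> float:
    """Actual/360 day-count fraction, the kind of detail that must be exact in production."""
    return (end - start).days / 360.0

# In calibration, treating a month as 1/12 of a year is usually close enough;
# on a live trade the difference shows up directly in the cash flows.
start, end = date(2024, 1, 15), date(2024, 2, 15)   # hypothetical accrual period
notional, rate = 10_000_000, 0.05                   # hypothetical trade terms
approx = notional * rate * (1 / 12)
exact = notional * rate * accrual_act_360(start, end)
print(f"approximate interest: {approx:,.2f}")
print(f"ACT/360 interest:     {exact:,.2f}")
```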

Another aspect of production is that speed is critical. I can’t fit my model to market data in real time (gradient descent is slow!) so instead, I have to reduce everything to linear approximations of changes. This entails a lot of matrix manipulation.
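One way to read "reducing everything to linear approximations" (a sketch under that interpretation, not the exact production code): precompute a Jacobian of model outputs with respect to market inputs offline, then apply first-order updates in real time with a single matrix-vector product.

```python
import numpy as np

def numerical_jacobian(f, x0, eps=1e-6):
    """Finite-difference Jacobian of f: R^n -> R^m around x0 (computed offline)."""
    x0 = np.asarray(x0, dtype=float)
    f0 = np.asarray(f(x0), dtype=float)
    jac = np.empty((f0.size, x0.size))
    for i in range(x0.size):
        bump = x0.copy()
        bump[i] += eps
        jac[:, i] = (np.asarray(f(bump), dtype=float) - f0) / eps
    return f0, jac

# Offline: one full (slow) model evaluation plus its sensitivities.
# Online: a cheap first-order update instead of a re-fit (names hypothetical):
# f0, J = numerical_jacobian(full_model, inputs_at_calibration)
# outputs_now = f0 + J @ (inputs_now - inputs_at_calibration)
```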

I usually build an execution prototype that does everything “correctly” but inefficiently. I then hand that over to my engineering colleagues who build a performant version in Python or even C, using the market libraries they’ve built over the years. And that version pipes its output into my trading station, for me to actually start executing on this strategy.

And then, hopefully, I start making money.

It typically takes months of work to bring a new strategy from drawing-board to production – and that’s for the strategies that actually work. Most don’t. And even the successful strategies have a shelf-life of a couple of years before they get arbitraged away, so the above process repeats itself all the time. I have to reinvent my approach to trading every few years.

Of course, in my experience all opportunities eventually go away. And indeed, one of the biggest challenges in the type of modelling I do is knowing when a live model is obsolete. All models lose money on some days and weeks, and it is very difficult to recognize when losses are part of a model that is still working and when they are signaling the death of the model.

Ultimately, my process was developed to reverse-engineer Hedgeye's process (described below).

Quantitative Risk Ranges Model

Our quantitative trading range model was developed by CEO Keith McCullough during his years as a hedge fund manager to augment his team’s qualitative research views. The idea is simple: Create a quantitative risk management tool to help investors actually buy low and sell high.

The model uses three core inputs – price, volume and volatility – to determine the likely daily trading range for any publicly traded asset class. Again, it's simple: you sell at the top end of the range and buy at the low end.

Keith's quantitative model is also multi-duration, meaning it dynamically adjusts to suggest critical thresholds over our TREND and TAIL durations (see below); when an asset breaches these levels, it flips from bullish to bearish or vice versa.

Qualitative Risk Management 


How We Model the U.S. Economy

In addition to our risk ranges, our Macro team has created a predictive tracking algorithm to suggest the future growth rate of the U.S. and other global economies.
Why?
We find two factors to be most consequential for forecasting future financial market returns: economic growth and inflation. We track both on a year-over-year rate-of-change basis (i.e., the 2nd derivative) to better understand the big picture, then ask the fundamental question: are growth and inflation heating up or cooling down?
From there, we get four possible outcomes. Each is assigned a “quadrant” in our Growth, Inflation, Policy (GIP) model along with the typical government response as a result (neutral, hawkish, in-a-box or dovish): Growth accelerating, Inflation slowing (QUAD 1); Growth accelerating, Inflation accelerating (QUAD 2); Growth slowing, Inflation accelerating (QUAD 3); Growth slowing, Inflation slowing (QUAD 4).
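The quadrant assignment follows mechanically from the two rate-of-change signs; a sketch of that mapping (the series names are placeholders, and Hedgeye's actual data pipeline is obviously more involved):

```python
import pandas as pd

def gip_quadrant(growth_yoy: pd.Series, inflation_yoy: pd.Series) -> pd.Series:
    """Map year-over-year growth and inflation into the four GIP quadrants
    based on whether each rate of change is accelerating or slowing."""
    growth_accel = growth_yoy.diff() > 0        # 2nd derivative: is YoY growth accelerating?
    inflation_accel = inflation_yoy.diff() > 0  # (first observation defaults to "slowing")
    quad = pd.Series(index=growth_yoy.index, dtype="object")
    quad[growth_accel & ~inflation_accel] = "QUAD 1"   # growth up, inflation down
    quad[growth_accel & inflation_accel] = "QUAD 2"    # growth up, inflation up
    quad[~growth_accel & inflation_accel] = "QUAD 3"   # growth down, inflation up
    quad[~growth_accel & ~inflation_accel] = "QUAD 4"  # growth down, inflation down
    return quad
```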
MEASURING AND MAPPING THE CYCLE
After building this base of knowledge, we can now select what we like and don’t like based on our historical back-testing of the different asset classes that perform best in each of the four quadrants.
Below is a color-coded chart showing the performance of different asset classes in each of the QUADs (along with another, much simplified rule-of-thumb table breaking down what works in each QUAD), from equity market sub-sectors to fixed income to commodities and foreign exchange.
[Chart: historical asset-class performance in each QUAD]

Of course this isn’t enough.

After looking at our proprietary risk ranges and GIP model, there are three other essential market signals to check before we make a call on individual asset classes: our Asset Allocation model (TACRM), Wall Street consensus positioning via CFTC futures and options data, and the volatility signals embedded in options markets. Below are three essential videos plus some additional insight into why each is so critical.


TACRM (Tactical Asset Class Rotation Model) generates asset allocation signals using a highly quantitative risk management system. At its core, "TACRM measures volatility as a leading indicator of prices," explains Senior Macro analyst Darius Dale in the accompanying video detailing how TACRM works. The model tries to identify shifts in momentum and deteriorating or accelerating breadth to spot asset classes that are breaking out or breaking down.

Just as important as vetting the fundamentals of any investing idea is knowing the investment community’s positioning around that idea. Namely, is this a consensus or contrarian trade? It’s another essential tool in your investing toolkit since, if Wall Street is too bullish or bearish, you may have already missed the move.
At Hedgeye, we measure and map the CFTC’s Commitments of Traders report, across asset classes, to learn precisely that: What does current investor consensus positioning look like and where can we add the most value with a non-consensus market call?
Understanding Video | How To Interpret Volatility
Understanding volatility is another essential tool in your macro toolkit. At Hedgeye, we have a nuanced view about how to incorporate this measure into your portfolio decision-making process.

TRADE = 3 weeks or less

TREND = 3 months or more

TAIL = 3 years or less

As you already know, we are fixated on delivering superior investment ideas. Our research team at Hedgeye is composed of over 40 analysts, including some of the most highly regarded analysts in the industry.
We combine 1) quantitative, 2) fundamental, and 3) macro analysis, with an emphasis on duration.
With our Macro team covering top-down research and our 12 equity market Sector Heads (plus 5 analysts on our Washington Policy team) dissecting their industries bottom-up, the end result is that Hedgeye's research products are the most intelligent, high-octane research around.
That's how our CEO Keith McCullough (who leads our Macro team) is able to use the "Style Factors" our Macro team likes and doesn't like – such as growth over value or large cap over small cap – and select from among our Sector Heads' best ideas the stocks that fit our style preferences.
For instance, let’s say we are bullish on large-cap Tech stocks. McCullough could identify a Tech stock that fits the bill among Technology Sector Head Ami Joseph’s best ideas. You get the point.
Source: Hedgeye

-R.W.N II
