
The AI Cooperative

  • August 7, 2023
  • 5 minute read

With the advent of ChatGPT on November 30, 2022, much of the public got its first hands-on experience with the modern wonder that is artificial intelligence. In less than two months, the free service reached over 100 million users, setting a new record for the fastest-growing consumer application to date. And the more we’ve used it, the more uses we’ve found for it. From meal planning to songwriting, movie recommendations to case law (n.b.: use with caution), the outer bounds of its applications are yet to be found.

Naturally, ChatGPT’s potential in the financial sector is already being probed. One of the UK’s most popular fintech sites tasked it with creating a hypothetical fund, then compared its performance over a four-month period to that of the ten most popular funds in the UK. While the real funds averaged -1.00%, the AI-generated one gained a whopping 5.52%. And a research paper published earlier this year showed compelling results when testing generative AI’s ability to pick stocks based on tens of thousands of news headlines.

Some, however, smell danger—not the least of whom is Securities and Exchange Commission Chair Gary Gensler, who warned in May that AI could be the cause of the next financial crisis. But the use of automated computer algorithms in stock trading is far from new, and it certainly wouldn’t be the first time they played a role in a crash.

Some of us may vividly remember the Black Monday market crash of 1987, when US markets fell over 20% in a single day—twice that of any daily loss during the 1929 crash. There were a number of factors at play that led to Black Monday’s disaster, most of which the market had seen previously in some form and weathered. But program trading—that is, using computer programs to buy and sell securities on a large scale based on predetermined conditions—was still relatively new, and as yet completely untested in extreme market conditions.

Commonly paired with program trading at the time was something called portfolio insurance, intended to insulate investors from a declining market. Computer programs were instructed to automatically sell stock index futures when certain loss targets were hit, offsetting portfolio losses when a market downturn was detected. This freed portfolio managers from tedious manual adjustments and completed the necessary transactions more quickly and efficiently than previously possible.
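
To make the mechanics concrete, here is a minimal sketch of a portfolio-insurance-style trigger rule in Python. It is our own illustration, not a reconstruction of any actual 1987 program, and every threshold and parameter in it is invented.

```python
# Illustrative portfolio-insurance-style rule (invented numbers, not any
# actual 1987 program): once the drawdown from the portfolio's peak breaches
# a loss target, hedge by selling index futures against part of its value.

def futures_to_sell(portfolio_value: float,
                    peak_value: float,
                    loss_trigger: float = 0.03,       # act after a 3% drawdown
                    hedge_ratio: float = 0.5,         # hedge half the portfolio
                    contract_value: float = 250_000.0) -> int:
    """Return the number of index futures contracts to sell, or 0."""
    drawdown = 1.0 - portfolio_value / peak_value
    if drawdown < loss_trigger:
        return 0                                      # loss target not hit
    return int(hedge_ratio * portfolio_value / contract_value)

# A $100M portfolio down ~4.8% from its $105M peak trips the rule:
print(futures_to_sell(100_000_000, 105_000_000))      # -> 200 contracts
```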

This portfolio insurance strategy backfired tremendously when the market turned bearish. As portfolio insurance programs were triggered, holdings were liquidated quickly, pushing prices even lower—which triggered more automated selling, and greater losses to the market. These algorithms were also programmed to switch off all automated buying, deepening the deficit of buy orders. This continued in a stomach-dropping spiral, ultimately marking the biggest single-day percentage loss in US stock market history. Since then, so-called “circuit breakers” have been put in place to short-circuit trading for a period in order to curb panic-selling, with the aim of preventing another “black” day in the markets.
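
The feedback loop, and the way a circuit breaker interrupts it, can be captured in a toy simulation like the one below. To be clear, this is a cartoon of the dynamic just described, not a model of the 1987 market; every number in it is made up.

```python
# Toy simulation of the 1987-style feedback loop: automated selling pushes
# the price down, which arms more automated selling. A crude circuit breaker
# halts trading once the day's loss passes a threshold. All numbers invented.

def simulate_cascade(open_price: float = 100.0,
                     shock: float = 0.03,         # initial bad-news drop
                     trigger_drop: float = 0.02,  # each 2% fall fires the programs
                     sell_impact: float = 0.025,  # each selling wave cuts 2.5% more
                     breaker_level: float = 0.07, # halt at a 7% daily loss
                     max_waves: int = 50) -> float:
    price = open_price * (1.0 - shock)            # bad news knocks the open down
    last_trigger = open_price
    for _ in range(max_waves):
        if 1.0 - price / open_price >= breaker_level:
            print(f"circuit breaker: trading halted at {price:.2f}")
            break
        if price <= last_trigger * (1.0 - trigger_drop):
            # Loss target hit: programs liquidate, pushing the price lower,
            # which arms the next round of automated selling.
            last_trigger = price
            price *= 1.0 - sell_impact
        else:
            break                                 # nothing fires; spiral stops
    return price

print(simulate_cascade())                   # halts just past the breaker level
print(simulate_cascade(breaker_level=1.0))  # no breaker: a far deeper spiral
```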

Then there was the flash crash of 2010, a trillion-dollar crash that took place in a matter of minutes—36, to be exact. Much like Black Monday, a number of factors converged to create a perfect storm, but two stand out: high-frequency trading and Navinder Singh Sarao.

High-frequency trading (HFT) refers to fast-paced algorithmic trading that seeks to profit from the regular, minor turbulence that stock prices experience throughout the day. To be successful, HFT systems must move in and out of positions in a matter of milliseconds. Decisions to buy and sell are made by proprietary algorithms based on complex mathematical models, performing computational work at speeds that would be, practically speaking, impossible for a human.
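
For a deliberately simplified taste of the style of logic involved, here is a toy mean-reversion signal over a rolling window of tick prices. Real HFT systems are vastly more sophisticated (and faster); the window size and trading threshold below are invented for illustration.

```python
from collections import deque

# Toy mean-reversion tick signal: buy when the price dips a small "edge"
# below its recent rolling average, sell when it spikes above. Purely
# illustrative; both parameters are made up.

def make_signal(window: int = 200, edge: float = 0.0005):
    ticks = deque(maxlen=window)          # rolling window of recent prices

    def on_tick(price: float) -> str:
        ticks.append(price)
        if len(ticks) < window:
            return "hold"                 # not enough history yet
        mean = sum(ticks) / window
        if price < mean * (1.0 - edge):
            return "buy"                  # dipped below its recent average
        if price > mean * (1.0 + edge):
            return "sell"                 # spiked above it
        return "hold"

    return on_tick

signal = make_signal(window=5, edge=0.001)
for p in [100.0, 100.1, 99.9, 100.0, 100.05, 99.7]:
    print(signal(p))                      # the final dip prints "buy"
```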

Enter Navinder Singh Sarao, a self-taught day trader out of London with savant-like mathematical acumen. Watching the actions taken by the HFT programs of the time, he saw a pattern emerge: whenever there was an order to buy or sell, these algorithms jumped in and attempted to make their own trades in the split second before the order could be executed. So what if he built a computer program that placed a large number of orders, triggering the HFTs, then cancelled or changed those orders after the HFTs had made their moves? In this way, Sarao subtly tricked these computers into moving market prices for him, and profited from the movement. At the time, it was a novel way to play the markets; today it’s called “spoofing,” and it’s punishable under US law.
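
To see why the tactic worked, consider the order-book “imbalance” that many reactive algorithms watch: the balance of resting buy interest against resting sell interest. The toy calculation below, which is purely illustrative and touches no real market, shows how a single large phantom order can skew that signal until it’s cancelled.

```python
# Toy order-book imbalance calculation (nothing here touches a real market),
# showing why large never-to-be-filled orders fooled reactive algorithms.

def imbalance(bid_sizes, ask_sizes):
    """Signal in [-1, 1]: positive = net buy pressure, negative = net sell."""
    bids, asks = sum(bid_sizes), sum(ask_sizes)
    return (bids - asks) / (bids + asks)

book_bids = [500, 400, 300]            # genuine resting buy orders
book_asks = [450, 350, 300]            # genuine resting sell orders
print(round(imbalance(book_bids, book_asks), 2))   # 0.04: roughly balanced

spoofed = book_asks + [3000]           # one huge order, never meant to fill
print(round(imbalance(book_bids, spoofed), 2))     # -0.55: looks like heavy selling
# Algorithms keying on this signal start selling; the spoofer then cancels,
# and the apparent pressure evaporates as quickly as it appeared.
```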

There’s still some contention over the ultimate cause of the 2010 flash crash, but it’s clear that the interplay between Sarao’s algorithm and the HFTs played a part. Those HFTs were also programmed to pull out when the market got too volatile. It seems the artificial volatility created by Sarao’s induced market moves triggered these trading bots to sell all at once, reminiscent of what happened in 1987—only much, much faster.

So what about this modern iteration of human ingenuity? What would an AI-induced market crash look like, and how do we prevent history from repeating itself? The SEC’s Gensler, mentioned earlier, has pointed to a potential systemic fragility: the emergence of a single AI system upon which nearly all future fintech could be based. Both Black Monday and 2010’s flash crash bear out this danger: when the vast majority of market players employ the same or similar technology, unexpected behaviors can have a massive snowball effect.

And there are other drawbacks to AI as it stands today. Much of the effectiveness of any AI model depends on the data used to train it, and any biases, discrepancies, or blind spots in massive data sets can be hard to identify until they manifest in the resulting AI. And when there are blind spots, AI has shown a tendency to fill in the gaps itself: in what are known as AI hallucinations, models have been known to present false information indistinguishably from fact. Biases could prove a challenge to identify as well; divining the reasoning behind an AI’s decision-making process is essentially impossible with current models. And while cyber threats and data security certainly aren’t challenges unique to AI, it’s another attack surface that will doubtless bring with it new vulnerabilities to be found and exploited.

Like its more primitive ancestors, AI will certainly come with consequences of its own. But the possibilities are far too tantalizing to stop now. Just as program trading and HFTs weren’t discarded after their respective crashes, there’s sure to be plenty to gain from AI, glitches and all. Until we find those pitfalls, though, it may be prudent to watch how things unfold before jumping in headfirst. In researching this article, we asked ChatGPT itself about the use of AI in the financial sector. Its response? “AI should be viewed as a tool that complements human expertise rather than a complete replacement for human decision-making.” Right on, chat thing. No hallucination there.

The views and opinions expressed herein are those of the author(s) noted and may or may not represent the views of Beacon Advisory or Lincoln Investment. The material presented is provided for informational purposes only. When you link to any of these websites provided here, you are leaving this site. We make no representation as to the completeness or accuracy of information provided at these sites. Nor are we liable for any direct or indirect technical or system issues or consequences arising out of your access to or use of these third-party sites. When you access one of these sites, you assume total responsibility for your use of the sites you are visiting.