Robot stock analysts | Financial Times


This article is an on-site version of our Unhedged newsletter. Sign up here to get the newsletter sent straight to your inbox every weekday

Good morning. What’s the problem with a top-heavy stock market? Well, when Apple has a tough week on some discouraging news from China, the whole S&P 500 is dragged lower. If you are in the market, you are overweight Apple unless you’ve taken pains not to be. If you have thoughts on Apple, or anything else, email us: [email protected] and [email protected].

The FT’s editor, Roula Khalaf, is writing the Long Story Short email today. LSS is a punchy digest of the week’s biggest stories and best reads. Sign up here.

Algos suffer from recency bias, too

Are machines better stock analysts than human beings? 

For a significant part of analysts’ jobs, it seems to me that a sophisticated computer algorithm should do better than a person. A lot of what analysts do is build financial models and use them to estimate future corporate earnings. This is detail-driven, data-intensive work where consistency and objectivity are important and human biases are dangerous. If computers aren’t better at this part of the job now, I would guess that they will be soon.

Three researchers — Murray Frank, Jing Gao and Keer Yang — have put this idea to the test, and reported the results in a paper earlier this year (hat tip to Joachim Klement, whose Substack brought the paper to my attention). They loaded up a sophisticated algorithm with a big sample of company financial information, macroeconomic data and equity analysts’ forecasts, and set it to work predicting company earnings. The main statistical technique they used was something called “gradient boosted regression trees”, which I am at least one PhD away from understanding. But the key thing about it, as Frank described to me, is that it is able to pick up non-linear connections within the data. In a non-linear world, this gives GBRT a big edge on the linear regressions we learned about back in finance school.
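The paper's actual data and code aren't reproduced here, but the basic point about non-linearity is easy to illustrate. The sketch below (my own toy construction, using synthetic data and scikit-learn, not the authors' pipeline) fits both a gradient boosted regression tree model and a plain linear regression to a target that depends non-linearly on its inputs; the trees win because the linear model cannot represent the curvature or the interaction term.

```python
# Toy illustration (not the paper's model): GBRT vs. linear regression
# on a target with a non-linear term and an interaction term.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(1000, 2))
# Synthetic "earnings": a non-monotone function plus an interaction,
# which no linear combination of X's columns can capture.
y = np.sin(3 * X[:, 0]) + X[:, 0] * X[:, 1]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

gbrt = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
lin = LinearRegression().fit(X_tr, y_tr)

mse_gbrt = mean_squared_error(y_te, gbrt.predict(X_te))
mse_lin = mean_squared_error(y_te, lin.predict(X_te))
assert mse_gbrt < mse_lin  # the trees pick up the non-linear structure
```

On data like this the boosted trees' out-of-sample error is a fraction of the linear model's, which is the edge Frank describes: in a non-linear world, a method that can learn non-linear connections has room to beat the regressions taught in finance school.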

The researchers found that, while their algo predicted earnings more accurately than Wall Street analysts, it exhibited one of the common biases that trips up human analysts. The algo tended to overreact to new information (we call this recency bias in human beings). Now, in theory, this problem should be easy to solve in an algo — just tweak it to weigh the new information less. But here’s the interesting thing: when the researchers did this, the quality of the algo’s earnings predictions declined. There turned out to be a trade-off between reducing systematic overreaction bias and average forecast accuracy.
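The paper's tweak isn't specified here, but the trade-off has a simple analogue. In the sketch below (my construction, not the authors' model), a forecaster is a plain exponential-smoothing rule in which `alpha` is the weight placed on the newest observation. A high-`alpha` forecaster is more accurate on a trending series, because it adjusts quickly; but after a one-off spike it overreacts, exactly the bias described above. Turning `alpha` down removes the overreaction and costs accuracy on the trend.

```python
# Toy trade-off: weighting recent data heavily improves accuracy on
# trends but produces overreaction to one-off shocks.

def ewma_forecast(xs, alpha):
    """One-step-ahead forecasts: next forecast = alpha*x + (1-alpha)*old."""
    f = xs[0]
    preds = []
    for x in xs[1:]:
        preds.append(f)              # forecast made before seeing x
        f = alpha * x + (1 - alpha) * f
    return preds

def mae(preds, actual):
    return sum(abs(p - a) for p, a in zip(preds, actual)) / len(preds)

# On a steadily trending series, reacting strongly to news is MORE accurate:
trend = list(range(20))
assert mae(ewma_forecast(trend, 0.9), trend[1:]) < \
       mae(ewma_forecast(trend, 0.2), trend[1:])

# ...but after a one-off spike, the high-alpha forecaster overreacts.
# shock[5] is the spike; compare forecasts for the observation after it:
shock = [10.0] * 5 + [30.0] + [10.0] * 5
err_hi = abs(ewma_forecast(shock, 0.9)[5] - shock[6])  # big miss
err_lo = abs(ewma_forecast(shock, 0.2)[5] - shock[6])  # small miss
assert err_hi > err_lo
```

The same parameter controls both behaviours, so you cannot dial out the overreaction without also dulling the forecaster's responsiveness to genuine trends, which is at least a loose mechanical analogue of the trade-off the researchers found.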

This is weird and interesting. The idea that eliminating a systematic bias would lead to less accurate beliefs about the future is just strange. Frank told me that it might have implications for behavioural finance generally:

What was a bit surprising to us is that the nature of the bias of the algo is surprisingly similar to the bias in human beings. The behavioural finance guys say the biases humans demonstrate are about the deep psychology of the human brain. But algos, despite the metaphors we use about them, are not human brains. The fact that you are getting a similar bias suggests what is generating the bias is not the structure of the brain, but something else . . . It may tell us something about the nature of statistics and the way that we process information.

Reading this bit of the paper, what I thought about was momentum. Financial markets and (I would argue) business results do not vary randomly. They follow trends. Expansions and contractions, once begun, tend to continue for a while before everything gets scrambled up again and new trends form (this is what Benoit Mandelbrot meant by markets’ “long memory”). So perhaps recency bias helps us, as forecasters, to latch on to new trends as they form and to follow momentum — even as it gives a tendency to overreact to new information. Hence greater accuracy overall, but some bias, too.

I put this rather woolly hypothesis to Frank. He said there was nothing in his work that ruled it out. But his guess about what is going on is slightly different. He thinks the result might have something to do with the fact that the finance world is, in his words, very wide but quite shallow. “Width” refers to the fact that a lot of different kinds of information can affect financial markets or earnings results: economic, political, cultural, technological, and so on. “Shallowness,” for Frank, means that “most of the unexpected shocks in equity markets have pricing effects that are not terribly complex or hard to figure out. What is hard to figure out is where the next shock might be coming from or when it might happen.” (In “deep” systems, like some of those in the physical sciences, you might be able to see a shock coming, but its effects are complex and hard to model.)

So, no one saw Covid coming, but it’s easy to see that if everyone stays at home, restaurant businesses will take a big initial hit. An algo or analyst who is inclined to adjust quickly to such shallow shocks might render more accurate predictions overall, but at the cost of some overreaction bias.

A second interesting result from Frank and his co-authors is that if you restrict the algo to just objective inputs — financial and macroeconomic data — and deprive it of human analysts’ forecasts, the algo renders much less accurate predictions. “The results suggest that when forecasting firm earnings, the analysts’ private assessment is extremely valuable,” they write. “The information generated by analysts cannot be replaced by incorporating a large set of public financial ratios.”
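That result is an ablation: retrain the model with a feature class removed and compare the errors. As a hypothetical illustration (synthetic data; the "analyst" column here is an invented noisy private signal, not the paper's data), dropping an informative feature visibly degrades a boosted-tree model's forecasts:

```python
# Toy ablation sketch: a GBRT trained with a noisy-but-informative
# "analyst forecast" column vs. the same model on public data only.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 1000
macro = rng.normal(size=(n, 3))                      # stand-in public data
earnings = macro @ np.array([0.5, -0.3, 0.2]) + rng.normal(0, 1.0, n)
# Hypothetical private signal: analysts see earnings with some noise.
analyst = earnings + rng.normal(0, 0.3, n)

X_full = np.column_stack([macro, analyst])
X_public = macro                                     # analyst column removed

def oos_mse(X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
    model = GradientBoostingRegressor(random_state=1).fit(X_tr, y_tr)
    return mean_squared_error(y_te, model.predict(X_te))

mse_full = oos_mse(X_full, earnings)
mse_public = oos_mse(X_public, earnings)
assert mse_full < mse_public  # losing the private signal hurts accuracy
```

In this toy setup the public columns simply cannot explain the part of earnings that the private signal captures, which is the shape of the authors' finding: analysts' forecasts carry information that a large set of public ratios does not replace.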

Score one for the humans. But not necessarily for some sort of intrinsically human genius, for making imaginative leaps or forming gestalt conclusions that a computer cannot reproduce. One big thing human analysts do is call up companies and ask them what’s going on. Important information, not yet reflected in the financial statements, is discovered this way. How long before large language models are making those calls? Or at least listening in on them?

One good read

You will look at Elon Musk’s tweets, whether you’d like to or not.

FT Unhedged podcast

Can’t get enough of Unhedged? Listen to our new podcast, hosted by Ethan Wu and Katie Martin, for a 15-minute dive into the latest markets news and financial headlines, twice a week. Catch up on past editions of the newsletter here.

Swamp Notes — Expert insight on the intersection of money and power in US politics. Sign up here

The Lex Newsletter — Lex is the FT’s incisive daily column on investment. Sign up for our newsletter on local and global trends from expert writers in four great financial centres. Sign up here
