Artificial intelligence may soon find an enduring role in the financial services sector.
A forthcoming study from the University of Chicago’s Booth School of Business suggests that large language models (LLMs), a type of artificial intelligence trained to understand and generate content, can outperform some financial analysts at “predicting the direction of future earnings.” The researchers released their results early as a non-peer-reviewed draft.
Using chain-of-thought prompting, which helps language models perform elaborate reasoning tasks by breaking them down into smaller steps, these models (generative pre-trained transformers, or GPTs, are one type) reportedly reached an accuracy of 60.4%. According to the study, that is 7 percentage points higher than the average analyst forecast.
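To make the idea concrete, here is a minimal sketch of what chain-of-thought prompting for this task could look like in code. The prompt wording, the model name, and the use of the OpenAI Python client are illustrative assumptions, not the researchers’ actual setup.

```python
# Minimal sketch of chain-of-thought prompting for earnings-direction prediction.
# The prompt text, model name, and placeholder statements are assumptions for
# illustration only; they are not the study's actual materials.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

financial_statements = """
(an anonymized, standardized balance sheet and income statement would go here)
"""

# Chain-of-thought style instructions: ask the model to reason step by step,
# mirroring the stages of a human analyst's financial statement analysis.
prompt = f"""You are a financial analyst. Analyze the following financial statements.
Think step by step:
1. Identify notable trends in the line items.
2. Compute key ratios (for example margins, leverage, liquidity).
3. Interpret what these trends and ratios imply about performance.
4. Conclude whether earnings are more likely to INCREASE or DECREASE next year.

Financial statements:
{financial_statements}
"""

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```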
This is noteworthy because the researchers did not give the language model any narrative or context beyond the balance sheet and income statement.
The study found that even with simple, straightforward prompts, the model’s ability to analyze financial statements and predict the direction of future earnings was comparable to first-month analyst consensus forecasts.
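For context on the metric itself: directional accuracy is simply the share of firm-years in which a predicted “up” or “down” call matches the realized change in earnings. The sketch below, using made-up predictions and outcomes, shows how a model’s calls and an analyst consensus could be scored side by side.

```python
# Illustrative scoring of directional accuracy, the metric the study reports.
# All labels and values here are invented for demonstration only.
actual       = ["up", "up", "down", "up", "down"]    # realized earnings direction
model_pred   = ["up", "down", "down", "up", "down"]  # hypothetical LLM predictions
analyst_pred = ["up", "up", "up", "down", "down"]    # hypothetical consensus calls

def directional_accuracy(preds, actuals):
    """Fraction of firm-years where the predicted direction matches the realized one."""
    return sum(p == a for p, a in zip(preds, actuals)) / len(actuals)

print(f"model accuracy:   {directional_accuracy(model_pred, actual):.1%}")
print(f"analyst accuracy: {directional_accuracy(analyst_pred, actual):.1%}")
```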
“Altogether, our results suggest that GPT can outperform human analysts when performing financial statement analysis even without any specific narrative contexts,” the researchers wrote.
They added that the results underscore the importance of “human-like step-by-step analysis” that helps the model follow steps typically taken by analysts.
According to the report, the language model’s predictions added the most value when human biases or inefficiencies, such as misunderstandings, were present.
As with humans, GPT’s predictions were not perfect. They were more likely to be wrong when a company was smaller, more highly leveraged, loss-making, or had volatile earnings, because context becomes more important when forecasting for smaller or more volatile companies.
Although both GPT and human analysts have more difficulty making predictions when companies are smaller or report losses, analysts tend to be better at handling complex financial situations, likely because they draw on soft information and context beyond the financial statements.
“Our findings demonstrate the potential of LLMs to democratize financial information processing and should be of interest to investors and regulators,” the report’s authors concluded, noting that language models can be more than just a tool for investors, playing a more active role in decision-making.
However, the report cautioned that AI performance may be different in the wild. “It is unknown whether artificial intelligence can significantly improve human decision-making in financial markets in practice,” the authors wrote, adding that “GPT analysts and human analysts complement, not replace, each other.”