Figure 3 on p. 40 of the paper seems to show that their LLM-based model does not statistically significantly outperform a three-layer neural network using 59 variables from 1989.
> It's probably more about avoiding blunders than it is having some genius paradigm shifting idea.

I too believe this is key to successful trading. Put another way: even with an exceptionally successful algorithm, you still need a really good system for managing capital. In this line of business, your capital is the raw material; you cannot operate without money. A highly leveraged setup can get completely wiped out during massive swings, triggering margin calls and automatic liquidation of positions at the worst possible price (maximizing your loss). Just ask ex-billionaire investor/trader Bill Hwang [1].

1. https://www.bloomberg.com/news/features/2021-04-08/how-bill-...
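To make the leverage point concrete, here is a minimal Python sketch; the equity, leverage, and maintenance-margin figures are all hypothetical assumptions for illustration, not numbers from the Hwang story:

```python
# Minimal sketch of how leverage turns an ordinary drawdown into a wipeout.
# All figures below are hypothetical assumptions, chosen for illustration.
initial_equity = 1_000_000            # trader's own capital
leverage = 5                          # notional position = 5x equity
position = initial_equity * leverage  # 5,000,000 notional
maintenance_margin = 0.10             # broker demands equity >= 10% of position value

for move in (-0.05, -0.10, -0.15, -0.20):
    equity = initial_equity + position * move          # losses hit equity 5x as hard
    position_value = position * (1 + move)
    margin_ratio = equity / position_value if position_value else 0.0
    status = "margin call -> forced liquidation" if margin_ratio < maintenance_margin else "ok"
    print(f"market move {move:+.0%}: equity ${equity:>9,.0f}, margin {margin_ratio:5.1%}  {status}")
```

Under these assumptions a mere 20% adverse move erases 100% of the equity, and the forced liquidation happens into the same falling market.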
You misunderstand the quote. It's where brains go to die from a societal perspective. It might be stimulating and difficult for the individual, but it's useless to science.
I think I could probably make more money selling a tool or strategy that consistently, reliably makes ~2% more than government bonds than I could make running it myself with my current capital.
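The back-of-the-envelope arithmetic behind that (with a hypothetical capital figure, since the comment gives none):

```python
# Hypothetical illustration of the comment's point; the capital figures
# are assumptions, the ~2% excess return is from the comment.
capital = 100_000                    # assumed personal trading capital
excess_return = 0.02                 # ~2% above government bonds
print(f"Running it yourself: ${capital * excess_return:,.0f}/year")  # $2,000/year
# The same edge applied to a fund deploying $1B is worth ~$20M/year of
# excess return to them -- even a small cut of that beats $2,000.
```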
As far as I know, the more people use a strategy, the worse it performs; the market is not static, it adapts. Other people react to your strategy's buys and sells and try to exploit the new pattern.
> Why not then publish the strategies once outmoded

Because then your competition knows which strategies don't work, and also what types of strategies you work on. Don't leak information.
I agree this isn't earth-shattering, but I think the benefit here is that it's a general solution instead of one trained specifically on financial statements.
That is not a benefit. If you use a tool like this to try to compete with sophisticated actors (e.g., all the major firms in the capital markets space), you will lose every time.
> Even worse: Some people assume an exceptional text-specialist model will effectively meta-impersonate a generalist model impersonating a different kind of specialist!

Eloquently put :-)
> Why is it that we wouldn't trust a generalist over a specialist in any walk of life, but in AI we expect one day to be able to?

The specialist is a result of his general intelligence, though.
“There are hundreds of local municipalities and elected authorities.”

That's the root of your problem: too many governments, and not enough attention available to keep them all accountable.
Let's say LLMs work exactly as advertised in this case: you go to the LLM, say "find corruption in these financial reports", and it comes back with some info about the mayor spending millions on overpriced contracts with a company run by his brother. What then? You can post on Twitter, but unless you already have a following, it's shouting into the void. You can go to your local newspapers; they'll probably ignore you, and if they do pay attention, they'll write an article that gets a few hundred hits. If the mayor acknowledges it at all, they'll slam it as a political hit piece, and that's the end of it. So your best chance is... hope really hard it goes viral, I guess?

This isn't meant to be overly negative, but exposing financial corruption is mostly about information control, and I don't see how LLMs help much there. Even if/when you find slam-dunk evidence that corruption is occurring, it's generally very hard to present that evidence in a way Joe Average can understand, and, assuming you are a normal everyday citizen, it's extremely hard to get people to act. As a prime example, this piece on the SF "alcohol rehab" program [0] went semi-viral earlier this week; there's no way to interpret $5 million/year spent on 55 clients as anything but "incompetence" at best and "grift and corruption" at worst. Yet there's no public outrage and no one protesting on the streets of SF; it's already an afterthought in the minds of anyone who saw it. Is being able to query an LLM for this stuff going to make a difference?

[0] https://www.sfchronicle.com/politics/article/sf-free-alcohol...
Why would you report financial crimes to Twitter? If your LLM uncovers financial crimes, you should contact regulators and prosecutors. They're both incentivized to do something about it.
Here is the view of an accountant trying to use AI on a financial statement:

https://www.youtube.com/watch?v=VxxmzoZTRW4

There are other videos on their YouTube channel about the more analytical aspects of it; I just decided to share the latest one. It's been hit and miss so far: depending on the model used (ChatGPT/Gemini/Claude, etc.), the results can vary somewhat.
Or rather: if LLMs could give those guys an edge, there's no way they'd share their edge-giving LLMs with anyone, least of all their competition and the plebs.
Great. Humans no longer need to cook the books themselves and can claim plausible deniability. The only problem is that hallucination errors could go against you as well as for you.
People are going to lose a LOT of money using this when the LLM says "buy" and old-school humans who read the same statement say "sell".

But up until that day, it will probably be cheaper.
The fact that the paper never mentions the word "hallucinations" in the body text makes me think the authors aren't fully familiar with the state of LLMs as of 2024.