https://www.institutionalinvestor.c...edy-World-of-Hedge-Fund-Private-Investigators Many are reluctant to talk about their process. (“I am not interested in giving the competition a roadmap of how to compete,” said one person in response to Institutional Investor’s inquiry.) What I find interesting is that they are in the business of digging up everyone else's secrets, yet they don't want anyone to know the secrets of how they do it. Oh well, this is why we just trade when the price moves and leave the "how" and "why" of the moves to these guys. Oh, and I didn't even know doctors could be bought. LOL. At least they aren't allowed to pretext to get medical records and bank records. That's comforting to know.
Dirt grows good food, too.
Can you imagine what would happen if everyone shared all the information about how they make money in trading? The trading business would cease to exist and none of us would be here. Maybe except those who managed to keep some secrets. And that's actually where we're going: most hedge funds whose secrets were found out either went out of business or can no longer turn a profit. ET has also lost many traders, while others can't make money, mostly the ones who happily share everything they do.
Well, we are fast becoming obsolete anyway. https://www.institutionalinvestor.c...edge-Funds-Vastly-Outperformed-Research-Shows
There are more articles about struggling AI & quant funds. They actually have a serious problem, the same one as other AI products: they all make mistakes without realizing they're mistakes. It's like the newest super-amazing GPT-3 from OpenAI, which seems to know everything, but only if you discard its bullshit answers and keep the one that truly looks amazing. It can also write bullshit essays that look very legit, just by stringing together intelligent-looking sentences. In trading, those mistakes are simply more costly.
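For what it's worth, the cherry-picking part looks roughly like this in code. A minimal sketch, assuming the GPT-3-era openai Completion API; looks_legit() is a hypothetical stand-in for the human who does the actual filtering:

```python
# Sketch of the "sample a bunch, keep the one that looks amazing" workflow.
# Assumes the GPT-3-era openai-python Completion API; looks_legit() is a
# made-up stand-in for the human doing the filtering -- which is the point:
# the model itself can't tell its good answers from its bullshit ones.
import openai

openai.api_key = "sk-..."  # your key

def looks_legit(text):
    # Placeholder heuristic; in practice this is a human reading the output.
    return len(text.strip()) > 20

def best_of_n(prompt, n=5):
    resp = openai.Completion.create(
        engine="davinci",    # GPT-3 base model name at the time
        prompt=prompt,
        max_tokens=100,
        temperature=0.9,     # high temperature: varied answers, good and bad
        n=n,                 # sample several completions in one call
    )
    keepers = [c.text for c in resp.choices if looks_legit(c.text)]
    return keepers[0] if keepers else None
```

The filtering step is the tell: if the model could write looks_legit() itself, it wouldn't be bullshitting in the first place.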
Well, that's the thing with AI. It's always going to be a double-edged sword. You want it to be intelligent, but never too intelligent. Do you REALLY want an AI that never makes ONE mistake? Imagine how scary that would be: an AI that knows EVERYTHING, makes decisions based on incomplete information, and is perfectly correct every single time!! Remember, trading is not like playing chess or Go; it deals with incomplete information.
Yes, but the issue is with mistakes that are obvious and wouldn't be made by humans. If a machine makes obvious mistakes, you can only imagine how many other mistakes it makes that aren't easy to verify. The problem is that AI bullshits more than a human compulsive liar.
Well, just like we are only human, machines are only machines. And I'd like to keep it that way, tbh, at least for my lifetime.