I think there has to be some user input to tell the AI what the meaning of 'good' is. You can teach an AI to play chess by letting it play a large number of repeated games against a human or a chess computer, and telling it ONLY (A) when it is moving illegally, and (B) when it has won or lost. You can give it more information, such as 'this position looks good' or 'a knight is worth about the same as a bishop'. This will speed up the learning process, but it's not strictly necessary, and by doing so you're putting your own biases and ideas into the process. Similarly, as a minimum I'd expect to have to tell the AI (A) that I want to maximise risk-adjusted return, and (B) how I define risk. That isn't unreasonable, since there is no way the AI could know my utility function (as defined in B); even another human wouldn't know that. But you shouldn't have to put in any more than this. To continue the analogy, you shouldn't have to tell it 'stocks are usually uncorrelated with bonds'. That is both forward information and a bias. So I agree with you that entering a fine-tuned utility function that recovers the user's prejudices, or puts in implicit forward information, is definitely not a good idea.
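The minimal user input described above, "maximise risk-adjusted return, given my definition of risk", could be sketched as follows. This is only an illustration under assumed choices not stated in the thread: a mean-variance form of utility, variance as the definition of risk, and a hypothetical `risk_aversion` parameter chosen by the user.

```python
# Sketch of the minimal user input: a risk-adjusted utility function.
# Assumptions (mine, not the commenter's): utility = mean return minus a
# penalty proportional to variance, with a user-chosen risk_aversion weight.
from statistics import mean, pvariance

def risk_adjusted_utility(returns, risk_aversion=2.0):
    """Mean return minus risk_aversion times variance; the user supplies
    both the functional form and the risk_aversion parameter."""
    return mean(returns) - risk_aversion * pvariance(returns)

# A steady return stream versus a wild one with a similar raw mean:
steady = [0.010, 0.012, 0.011, 0.009]
wild = [0.10, -0.08, 0.12, -0.10]

# The steady series scores higher, because the variance penalty
# dominates for the wild series even though its mean is comparable.
print(risk_adjusted_utility(steady) > risk_adjusted_utility(wild))
```

The point of the sketch is that everything in it is user-supplied preference, not market knowledge: nothing here tells the AI anything like 'stocks are usually uncorrelated with bonds'.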
Context is ever-changing in nano-degrees re: markets. It seems impossible to capture this in order to define and refine rule sets, especially on an ongoing basis.
Yup, we are saying the same thing. I'm a big fan of AI/ML algorithms, but without me they are worthless. Me + AI results in a capability that is greater than the sum of its parts.
Falkenstein's review of Ilmanen's Expected Returns: http://falkenblog.blogspot.com/2011/04/ilmanens-expected-returns.html
"If there is no effect between variables and your confidence level is at .05 (5%), 1 of 20 tests will show that there is an effect even though this is not true, due to random error." https://explorable.com/data-dredging
Virtu's systems reportedly achieve a 95%+ win rate using HFT and computer trading. Could this be the end of the independent trader?