Very true... I should have said that is true in corporate-grade software. Large global corporations can ill afford the risk, in a process worth significant corporate $$$, of a decision made by a human in a bad mood in cubicle 4237 in sub-basement 3 of the Hong Kong data center, and so will opt for objective, automated decisions/pruning, etc. by the software. The issue you cited of the NN trying to play the lottery occurs with people who don't know what they are doing or who are using hobbyist software. These mistakes are avoided by properly designed training, test and validation data sets and by out-of-sample testing.
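To put that last point concretely, here is a rough Python sketch (purely illustrative; the split ratios, array names and dummy data are made up) of the kind of chronological train/validation/test split I mean, where the out-of-sample period is never touched until the very end:

```python
# Illustrative only: a chronological train/validation/test split for a
# financial time series, so the out-of-sample period is genuinely unseen.
# The split fractions and feature/label names here are hypothetical.
import numpy as np

def chronological_split(features, labels, train_frac=0.6, val_frac=0.2):
    """Split arrays by time order rather than at random."""
    n = len(features)
    i_train = int(n * train_frac)
    i_val = int(n * (train_frac + val_frac))
    return ((features[:i_train], labels[:i_train]),            # fit the model here
            (features[i_train:i_val], labels[i_train:i_val]),  # tune/prune here
            (features[i_val:], labels[i_val:]))                # touch once, at the end

# Dummy example: 1000 daily observations with 10 features each.
X = np.random.randn(1000, 10)
y = np.random.randn(1000)
train, val, test = chronological_split(X, y)
```

The point is simply that the test slice sits chronologically after everything the model (or the modeler) ever looked at, so it cannot leak into the design decisions.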
Rohan2008, thank you for your link. I'm currently collecting everything I can find about deep learning in finance. The most interesting thing I've found so far is stacked auto-encoding. It appears to be effective for dimensionality reduction, and the studies that use it don't need a million labeled training samples. That was my original fear: that there simply isn't enough data in finance to support deep learning (only about 252 trading days a year, and stocks are correlated). Anyway, check out the list of papers if you're interested: http://gregharris.info/a-survey-of-deep-learning-techniques-applied-to-trading/
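In case it's useful to anyone, here's a rough Keras sketch of the idea (the layer sizes and data are invented, and the papers typically pre-train the stacked layers one at a time rather than end-to-end like this): an auto-encoder trained to reconstruct its own input, where only the narrow middle layer is kept as the reduced feature set.

```python
# Rough sketch of an auto-encoder used purely for dimensionality reduction.
# Layer sizes, sample counts and the random "market data" are placeholders.
import numpy as np
from keras.models import Model
from keras.layers import Input, Dense

n_features = 100                            # e.g. returns of 100 correlated stocks
X = np.random.randn(5000, n_features)       # stand-in for real market data

inputs = Input(shape=(n_features,))
h = Dense(64, activation='relu')(inputs)    # first encoding layer
h = Dense(32, activation='relu')(h)         # second encoding layer
code = Dense(10, activation='relu')(h)      # low-dimensional bottleneck
h = Dense(32, activation='relu')(code)      # mirror the encoder
h = Dense(64, activation='relu')(h)
outputs = Dense(n_features, activation='linear')(h)

autoencoder = Model(inputs, outputs)
encoder = Model(inputs, code)               # keep this part for feature extraction
autoencoder.compile(optimizer='adam', loss='mse')
autoencoder.fit(X, X, epochs=20, batch_size=64, verbose=0)

X_reduced = encoder.predict(X)              # 5000 x 10 compressed features
```

Because the target is the input itself, none of this needs labels, which is exactly why it sidesteps the "million labeled samples" problem.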
Thanks for the link... very informative; here's a list of interesting deep learning literature (not related to trading though): https://github.com/robertsdionne/neural-network-papers
Well, I asked about the kinds of returns these deep networks are getting... no response. Let's make it interesting. Folks can wax philosophical ad nauseam. I've built a custom machine learning system that successfully trades NUGT/DUST. It's been verified on C2 since Jan. 24. The 'vibe' of this thread didn't/doesn't seem inclusive of the type of person who taught themselves only enough about machine learning to get the job done, rather than learning via seminars and/or academia. So, if anyone is interested, I challenge any deep learning NN, free or $30,000 in costs, to beat my homemade Excel VBA system. It's currently up about 220% (over 3x) since inception on C2. It switches sides at most once per day. It trades chop and trends. Even though others have done it unilaterally in the past, I'll post a link to the system only if a Mod or Admin says it's OK to do so. You see, I've played with NNs for a while, and they're flawed with regard to financial time series, IMO. (I've asked, and answered, some of the same questions being posted here.) However, I have built a basic NN into my model, but the crux of the engine is based on k-Nearest Neighbors (kNN). Good luck. EDIT: I've blurred out any identifiers, so hopefully this is OK. This doesn't include today's gains on the GRAPH part: about 7% more:
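To be clear, the sketch below is not my VBA engine, just the generic kNN idea in Python with made-up features, to show how it differs from a neural net: each prediction is a vote among the most similar historical patterns, with no training in the gradient-descent sense.

```python
# Not the actual system: a generic k-nearest-neighbor classifier on
# hypothetical price features, for illustration only.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical setup: features are the last 5 daily returns,
# the label is whether the following day closed up (1) or down (0).
returns = np.random.randn(1000) * 0.02                      # placeholder returns
X = np.array([returns[i:i + 5] for i in range(len(returns) - 5)])
y = (returns[5:] > 0).astype(int)

model = KNeighborsClassifier(n_neighbors=15)                # k is just a guess here
model.fit(X[:-250], y[:-250])                               # fit on all but the last ~year

# Each new day's signal is a vote among the 15 most similar
# historical 5-day patterns: no hidden layers, no backprop.
signal = model.predict(X[-250:])
```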
I would say you have an excellent system, as the results in terms of PF are in line with the best of what I have seen or participated in creating.
I am no expert here; I'm just sharing some thoughts and issues I came across when I looked into this area. Some of what I write here is advice I got from deep learning experts who happen to be my colleagues at the company I work for. One particular thing I noticed when I scratched this area is that open-source packages such as Theano/Keras, TensorFlow or Torch have good CNN models, but are lacking or not up to date on many of the RNN/RBM models currently being churned out at universities. The RNN family includes LSTM (Long Short-Term Memory) networks, which support time-series models; CNNs don't, for that matter. At one point my colleagues advised me to simply ignore the open-source packages for the latest RNNs, start reading papers, and use CUDA programming directly; Nvidia's latest cuDNN supports RNNs. The other problem with LSTM models is that they are not as parallel as CNNs, so processing these not-so-parallel networks can get time consuming.

There is so much research going on in deep learning that one has to keep reading all the papers coming out of the universities full-time (forget trading). Training RNNs is VERY hard; don't underestimate this; most of today's research focuses on ways to simplify training. As you can see, once we get into the details, deep learning can get complicated and time consuming; unless you absolutely know what you are after, you can quickly get bogged down in a wild goose chase.

To be fair, there are a number of simple mechanical strategies that can be developed by hand; deep learning is always there for the more complicated scenarios. Every approach can be designed either the traditional (mechanical algo) way or with deep learning... it's a matter of how complicated the mechanical algo gets for the required accuracy; I guess that's the deciding factor. Deep learning may be more powerful, but is it worth the effort when a simpler approach is accurate enough? At the end of the day we are after ROI/PnL, so it's just a matter of one's perception of where the alpha is, given the resources, time and priorities at hand. I'm struggling to stop myself from jumping into this field, but I have some low-hanging fruit to pick first.
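For anyone who hasn't seen one, here is about the smallest possible Keras LSTM sketch for a time-series input (the window length, layer size and data are all invented); everything hard I mentioned above only starts once you try to scale something like this up and train it properly:

```python
# Bare-bones LSTM classifier on windowed time-series data, for illustration.
# Window length, units, labels and the random data are placeholders.
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

timesteps, n_features = 30, 4                        # e.g. 30 days of 4 features
X = np.random.randn(2000, timesteps, n_features)     # placeholder sequences
y = np.random.randint(0, 2, size=2000)               # placeholder up/down labels

model = Sequential([
    LSTM(32, input_shape=(timesteps, n_features)),   # recurrent layer carries state across the 30 steps
    Dense(1, activation='sigmoid'),                   # probability of an up move
])
model.compile(optimizer='adam', loss='binary_crossentropy')
model.fit(X, y, epochs=10, batch_size=64, verbose=0)
```

The sequential dependence inside the LSTM is exactly why it doesn't parallelize as nicely as a CNN: each timestep has to wait for the previous one.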
Very nicely done. This gives me further encouragement for an ML project I currently have going. I know only a fraction of what the rest of you seem to know, which is why I have a team working on mine for me. Mine has a very specific focus, but the results you've posted are also in line with what they claim they have produced for me given my desired inputs. I think we can get it better than what they claim it is right now, but time shall tell.
That's useful to know, thanks. CUDA is actually my bottleneck right now. I upgraded my power supply and bought a nice GPU. I wanted a setup I could use for a while, so I installed Ubuntu 16.04 LTS on the very first day it came out. It turns out CUDA 7.5 doesn't work with the GCC compiler that ships with the newest Ubuntu. I keep thinking surely it will be supported soon, but still nothing so far. Now I wonder if they'll just skip it and wait a few months before delivering CUDA 8 for Ubuntu. Then I'll have to wait some more for TensorFlow to support CUDA 8... It seems silly to have to downgrade operating systems just to train a model, but that's what I'll have to do.