I believe it is still crucial for humans to properly define the network, and that includes the data structure that is fed to the input layer. What goes on under the hood, how the network adjusts its weights, and especially the results that auto-differentiation produces during backpropagation, is very hard or even impossible to attach meaning to during the learning process; fortunately, that is not something a human has to rely on. But I feel I am stating the obvious when I claim that the design of the network, the choice of input data, a very precise definition of the desired output structure, the design of the backpropagation, and the other applicable components should all be carefully thought through by the engineer of such a network. I am afraid that some software packages nowadays make it almost too easy for someone to claim that he or she is "experimenting with deep learning networks." The danger is that one does not first acquire the necessary understanding of the underlying mathematics and statistics before embarking on designing a network, and without that knowledge I would say failure is almost guaranteed.
Indeed. I have some experience with model-based trading, but usually using statistical or heuristic rules; this is my first real foray into using a NN. I am using a library for the NN engine, but the design of the inputs, structure, and goals is mine. I am now at the stage where I can "test" these things by running many trials across my considerable data. The reason for starting this thread was to help me avoid some obvious pitfalls, so I am very grateful for the advice you have provided so far.
@conduit Running trials is a reasonable way to work out what structure surrounding the NN works well. However, for selecting inputs (which I would rather not have to do!) it is less efficient, as the range of potential inputs is vast. If you have any tips that might help me narrow that part of my effort, that would be really useful.
It really depends on the software you use. IMO, trying to create your own implementation of a NN from library components makes as much sense as trying to build your own smartphone from parts you bought at Radio Shack. So buy the best software you can afford. If your goal is creating software, then go at it and have fun. If it is making money, then get a professional-grade package: it will have every option and architecture you can think of, so try each combination in turn. To answer your question: to narrow inputs, use a professional package with a self-pruning capability. This tells you which inputs are useless so you can eliminate them from further use. My approach is to create a ton of inputs: 1/3 are things I think are relevant, 1/3 are mathematical expansions of the first group, and 1/3 are things I think are useless. The NN often discovers I was wrong about the useless stuff.
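To make the pruning idea concrete (without endorsing any particular package, whose internals I can't speak to), here is a rough sketch of the same "shuffle an input and see whether the model gets worse" logic using permutation importance in scikit-learn. The dataset, the model size, and the 0.01 threshold are all made up for illustration:

```python
# Sketch of input pruning via permutation importance (scikit-learn).
# Inputs whose shuffling barely hurts the held-out score are
# candidates for elimination from further use.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Fake inputs in the spirit described above: two genuinely relevant
# ones, two mathematical expansions of them, and two pure-noise ones.
x1, x2 = rng.normal(size=n), rng.normal(size=n)
X = np.column_stack([x1, x2, x1 * x2, x1 ** 2,
                     rng.normal(size=n), rng.normal(size=n)])
y = 3 * x1 - 2 * x2 + 0.5 * x1 * x2 + rng.normal(scale=0.1, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                     random_state=0).fit(X_tr, y_tr)

result = permutation_importance(model, X_te, y_te, n_repeats=10,
                                random_state=0)
for i, imp in enumerate(result.importances_mean):
    verdict = "prune?" if imp < 0.01 else "keep"
    print(f"input {i}: importance {imp:.3f} -> {verdict}")
```

On data like this, the pure-noise columns (and the unused x1**2 expansion) typically come out with importances near zero, which is the cue to drop them and retrain.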
Lol...it's really going to depend on a lot of factors, including the package and how it's used. NNs will find "useful" data in historical lottery results, and even "forecast" the next draw!
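Don't believe it? Here is a quick synthetic demonstration of the point (scikit-learn for brevity; all the data is random, so any "pattern" the net finds is pure overfitting):

```python
# A NN happily "learns" pure noise: train accuracy ends up well above
# chance, while test accuracy stays near the 50% coin-flip level.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)
X = rng.integers(1, 50, size=(500, 6)) / 50.0   # 500 fake "draws", scaled to [0, 1]
y = rng.integers(0, 2, size=500)                # random labels: nothing to learn

clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=5000,
                    random_state=0).fit(X[:400], y[:400])
print("train accuracy:", clf.score(X[:400], y[:400]))  # typically high
print("test accuracy: ", clf.score(X[400:], y[400:]))  # typically around 0.5
```

The same thing happens with price history if you let a big enough net memorize it.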
It depends on the sophistication of the software and the skill (or stupidity) of the user. I use a package that retailed for $30,000 and would typically be used for detecting fraud while processing your next credit card transaction. I can't speak to hobbyist software that you can buy for $99.95, or to something you build from a C++ library.
I am afraid I do not understand in the slightest what you are saying, and I am not sure you understand the capabilities of neural networks. A deep learning network has incredible capabilities, and that includes the capability to weed out what is useless. Why does one need a "professional" package? It took me 20 minutes to install Theano or Tensorflow, and another few hours to set it up properly to work with GPU cores.

A properly defined deep learning network can decide on its own what is useless information and what helps to train the network. The whole point of a deep learning network, in fact, is to have it find relationships that we know nothing about beforehand. Example: going back to detecting handwritten digits from images fed into the algorithm. When training on the data set, you can just as well throw in images of animals (completely useless information for the purpose of detecting handwritten digits) and a properly defined network would still train very well. You can even photoshop the images and insert all sorts of distractions, and in the end such a network would still do very well, precisely because it is training to identify the relationships and connections that matter for achieving the stated goal.

By the way, even networks that are far from the most sophisticated achieve accuracy levels of around 95% these days on the stated goal of detecting handwritten digits. And keep in mind I used this example to illustrate the point: the more complex the task at hand, the more complex the requirements and demands on the network. May I politely recommend reading up on some of the basic literature, for example the sources I posted earlier? You seem to be confused about the actual goals of deep learning networks.
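If anyone doubts how low the barrier to entry is, here is a bare-bones digit classifier using TensorFlow's bundled Keras API (assuming a recent TensorFlow install; the layer sizes and epoch count are just illustrative). Even this small fully connected net typically lands well above 95% test accuracy after a few epochs:

```python
# Minimal MNIST handwritten-digit classifier in TensorFlow/Keras.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0   # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # 28x28 image -> 784 vector
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # one output per digit
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, verbose=2)
print("test loss/accuracy:", model.evaluate(x_test, y_test, verbose=0))
```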
Right...factors. How's it working for you, annual-rate-of-return-wise, if you went balls to the wall with it on $10k, for example?
I smell bullshit. Most of the top-of-the-line deep learning frameworks are entirely FREE OF CHARGE. Google Theano, TensorFlow, CNTK, Caffe.