I hope that doesn't mean that you think I was actually trying to steal his ...what is it up to now, 28 gig?... of code and reverse engineer it. That's all I am trying to clear up here. Does anybody really believe Jerry's accusation that I was trying to steal his code? If not, this thread can mercifully end (hope hope) and I can have my redemption.
God knows there's nothing to steal. But your story (and existence here) poses its own problem. Why do you care? We don't know who you are. We don't know what company you are with. We can't believe how naive you are. Aren't you theoretically better off not having interjected at all? I very much doubt anyone (who actually might matter, anyway) will believe Jerry's claims. So why add to it all, unless the fun drama is so entertaining that it's drawing an audience? There's a part of me that wouldn't be surprised if Crisis and Jerry are the same person. No offense meant.
Wow, I've been called some bad things before, but THAT takes the cake! Alright, point taken. Ciao. [/drama] [/entertainment value] [/thread]
So you NEVER actually used a neural network for anything? Your lack of a clear answer likely means that you haven't... right? Come on, a simple straight answer to a simple question can't hurt that much... can it? How about we discuss the genetic modification of RNA-related proteins in the treatment of Parkinson's disease... it's something neither one of us probably knows anything about, thus I'd hold no advantage over you in terms of experience or knowledge.

One logical model can utilize many sub-models. You model for market volatility, for direction, for the 1st derivative, for the probability of breaking support and resistance boundaries in multiple time windows. The number of base or characteristic models can get numerous. These are combined in a voting system, sometimes called consensus modeling or voting models, and a final signal to trade or not is taken based on the vote totals.

Jerry030
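For the lurkers: the consensus / voting idea Jerry describes is easy to sketch. Here's a minimal Python toy of the vote-total approach; the sub-model names, the +1/-1/0 signal convention, and the vote threshold are my own inventions for illustration, not anything from his actual system.

[code]
# Hypothetical sketch of a consensus / voting model combiner.
# Each sub-model emits a signal: +1 (long), -1 (short), or 0 (no opinion).
# The final trade decision is taken only when the vote total clears a threshold.

def consensus_signal(sub_model_signals, min_votes=3):
    """Combine sub-model signals by simple vote totals.

    sub_model_signals: dict mapping a model name to its signal (+1, -1, or 0).
    min_votes: net votes required before a trade signal is issued.
    Returns +1 (trade long), -1 (trade short), or 0 (stand aside).
    """
    total = sum(sub_model_signals.values())
    if total >= min_votes:
        return 1     # enough bullish agreement to trade long
    if total <= -min_votes:
        return -1    # enough bearish agreement to trade short
    return 0         # no consensus: don't trade

# Example: five base models (names are illustrative only).
signals = {
    "volatility":        1,
    "direction":         1,
    "first_derivative":  0,
    "support_break_1d": -1,
    "resistance_5d":     1,
}
print(consensus_signal(signals, min_votes=2))  # -> 1 (net vote total = 2)
[/code]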
Wow! I've never mentioned the number 28 GB of code anywhere. On another thread, 1 GB was mentioned for a different system that trades CBOT futures. Could somebody please send me the 27 GB of new code that Mr. Crisis has found? The preliminary test system done at his request was pretty simple... it only generated 36 MB of C++ code.

Jerry030
I won't annoy you by asking what your experience is in exporting a trained neural network to an executable for use in a distributed application... since it would appear you've never done that. However, your comment that "no computer on earth" can do that is so totally silly that I'll lay it out in baby steps for you.

1) Train NN for condition 1 (say volatility)
2) Export the trained net 1 to C++
3) Train NN for condition 2 (trend line breakout)
4) Export the trained net 2 to C++
(Repeat as needed for the number of conditions you want to predict: 8, 28, whatever you've got the time for. It takes 24 to 48 hours on a high-end machine for each training step, so an advanced system may be a month of CPU time.)

Now in another application:
1) Read in the C++ code for model 1
2) Pass it the transformed price file
3) Get an output value for the prediction... > 0, < 1
4) Save that
5) Repeat 1 to 4 for each trained net
6) Now read all your output values into another application and let them vote.

Any decent PC can do this. How fast it happens depends on machine speed. The reason you think it's impossible is simply because you've never worked with a NN package designed to do this. Also, the statement about logic makes no sense. This isn't hand-written code by the Geek in cubicle J-3451; it's code generated by weights in a network. It's learned by the NN, not designed from a flow chart by the Geek.

Jerry030
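Not to wade into the fight, but for anyone following along, here's a rough Python sketch of that second loop (steps 1 to 6). It assumes each exported net has already been compiled from the generated C++ into its own small executable that reads a price file and prints one prediction; the binary names, the file name, and the 0.5 cutoff are my assumptions, not details of Jerry's setup.

[code]
import subprocess

# Assumed layout: each exported net was compiled into a standalone binary
# (e.g. ./net_volatility) that reads a transformed price file and prints a
# single prediction in (0, 1). All names here are illustrative only.
MODEL_BINARIES = ["./net_volatility", "./net_trend_break", "./net_direction"]
PRICE_FILE = "transformed_prices.csv"

outputs = []
for binary in MODEL_BINARIES:
    # Steps 1-3: invoke the compiled net on the price file, capture its output.
    result = subprocess.run([binary, PRICE_FILE], capture_output=True, text=True)
    outputs.append(float(result.stdout.strip()))   # step 4: save the prediction

# Steps 5-6: let the saved predictions vote. Here each net "votes" long when
# its output clears 0.5, and we trade only if a majority of the nets agree.
votes = sum(1 for p in outputs if p > 0.5)
trade = votes > len(outputs) / 2
print(f"outputs={outputs} votes={votes} trade={trade}")
[/code]

Whether the exported nets live as compiled binaries or get linked in as a library, the structure is the same: run each trained net once over the transformed prices, save the numbers, and vote at the end. Nothing about that needs more than a decent PC.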
Neither source is particularly accurate. But here's a hint for you, if you are interested in PredCo and Neural Nets: read up on Bill Finoff's work on stop-training and the bias vs. variance tradeoff. Your posts show little if any insight in this area. Also Finoff on NFL Theory, another theoretical enclave that seems to have escaped your notice.
Java is open source now, so it'll probably just be improved and D won't ever happen. Why not Python? You probably don't need the performance boost of a compiled (or JIT-compiled) language, and Python is probably the easiest language out there to learn. There's a new version coming sometime within the next year (Python 3, a.k.a. Python 3000); it's going to be a huge step forward. Probably just as important as the language constructs are the libraries, and Python has one of the best ones out there.
Kevin,

Thanks for the links to conceptual stuff you find of value. I don't use NN for stops, but I'm sure there is much being done with it. There are dozens of areas that my posts have never covered, for several reasons:
1) It's an area you could devote an advanced degree to... a lot to cover.
2) I don't write my own NNs but use an application, so I'm limited to its theoretical framework.
3) I'm not a theoretician in this area, just a kind of specialized power user/analyst.
4) I find what I do works well, so exploring the leading edge in theory isn't the top priority it might be if one were still trying to find what works.

Jerry030
Should be Finnoff, with two n's. Too late to edit my original post. Also, Jerry, "stop-training" refers to a NN cross-validation technique whereby fitting is terminated in the training set when squared error bottoms in the test set. This typically happens well before squared error bottoms in the training set itself. The idea is to prevent overfit.
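Since the term keeps getting misread as something to do with trading stops: stop-training is just early stopping against a held-out set. Here's a self-contained Python toy that fits a linear model by gradient descent on the training set and halts once squared error on the test set bottoms out; the data, patience value, and learning rate are all invented for the example.

[code]
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a noisy linear signal, split into training and test sets.
X = rng.normal(size=(200, 10))
w_true = rng.normal(size=10)
y = X @ w_true + rng.normal(scale=0.5, size=200)
X_train, y_train = X[:100], y[:100]
X_test, y_test = X[100:], y[100:]

w = np.zeros(10)                 # model weights, fit by gradient descent
lr = 0.01                        # learning rate
patience, bad_epochs = 10, 0     # how long to wait past the test-error bottom
best_err, best_w = np.inf, w.copy()

for epoch in range(5000):
    # One gradient step on *training*-set squared error.
    grad = X_train.T @ (X_train @ w - y_train) / len(y_train)
    w -= lr * grad

    # Monitor squared error on the held-out *test* set.
    test_err = np.mean((X_test @ w - y_test) ** 2)
    if test_err < best_err:
        best_err, best_w, bad_epochs = test_err, w.copy(), 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:   # test error has bottomed: stop fitting
            print(f"stopped at epoch {epoch}, best test MSE {best_err:.4f}")
            break

w = best_w   # keep the weights from the epoch where test error bottomed
[/code]

The weights you keep are the ones from the epoch where test-set error was lowest, which typically arrives well before training-set error itself stops falling. That gap is exactly the overfit the technique is meant to prevent.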