Neural Networks Revisited

Discussion in 'Automated Trading' started by maninmoon, May 24, 2016.

  1. userque

    Yes, that's me.

    Yes, the algo optimizes its parameters.

    It took a while before I figured out the error metric part. It's a custom metric. Another benefit of coding your own stuff.

    The algo is based on the kNN algorithm. It uses multiple kNN's (also custom coded) along with multiple NN's. Standing alone, I favor kNN's over NN's. My NN's serve only to enhance the kNN's outputs.

    NN's are tricky and have to be properly constrained/validated as they'll find 'patterns' in lotto numbers if allowed to.
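
    For concreteness, the kNN side of a setup like this can be sketched roughly as below (C++, since it comes up later in the thread). The feature vectors, Euclidean distance, k, and the tie-goes-to-0 rule are all illustrative assumptions, not the author's code:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical sketch: a kNN classifier that votes a direction in {-1, 0, +1}.
int knnPredict(const std::vector<std::vector<double>>& train,
               const std::vector<int>& labels,  // each -1, 0, or +1
               const std::vector<double>& query, std::size_t k) {
    // Pair each training point with its squared Euclidean distance to the query.
    std::vector<std::pair<double, int>> dist;
    for (std::size_t i = 0; i < train.size(); ++i) {
        double d = 0.0;
        for (std::size_t j = 0; j < query.size(); ++j) {
            double diff = train[i][j] - query[j];
            d += diff * diff;
        }
        dist.push_back({d, labels[i]});
    }
    // Keep only the k nearest neighbours.
    k = std::min(k, dist.size());
    std::partial_sort(dist.begin(), dist.begin() + k, dist.end());
    // Sum the votes; the sign of the sum is the predicted direction.
    int vote = 0;
    for (std::size_t i = 0; i < k; ++i) vote += dist[i].second;
    return (vote > 0) - (vote < 0);  // -1, 0, or +1
}
```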
     
    #191     Sep 13, 2016
  2. Arti

    So you are making an ensemble of k-nearest-neighbors algos whose outputs are later fed to a NN?
     
    #192     Sep 13, 2016
  3. userque

    Correct. Multiple kNN's are fed into multiple NN's, whose outputs are then processed by a final NN.
     
    #193     Sep 13, 2016
  4. Arti

    I'm just trying to understand the ensemble: do you feed the "probability" prediction from the kNN, along with other features, to the NN? Are you using backpropagation for NN optimization?
     
    #194     Sep 14, 2016
  5. userque

    No probabilities are intentionally involved.

    Each kNN feeds its predictions (1, -1, or 0) to each NN. The outputs of these NN's are fed to a final NN. I use Excel's Evolutionary Algorithm (Solver, plus a third-party EA called NOMAD via OpenSolver) to train the NN's. https://en.wikipedia.org/wiki/Neuroevolution
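
    The linked neuroevolution idea — searching NN weights with an evolutionary method instead of backpropagation — can be sketched minimally as follows. A (1+1)-style mutate-and-keep-the-better search stands in for Solver's population-based EA; the single tanh neuron over three kNN votes, the MSE fitness, and all sizes are toy assumptions, not the author's setup:

```cpp
#include <cmath>
#include <cstddef>
#include <random>
#include <vector>

// A tiny combining "NN": one tanh neuron over the kNN votes.
struct Neuron {
    std::vector<double> w;  // one weight per kNN vote, plus a bias at the end
    double predict(const std::vector<int>& votes) const {
        double s = w.back();
        for (std::size_t i = 0; i < votes.size(); ++i) s += w[i] * votes[i];
        return std::tanh(s);  // soft direction in (-1, +1)
    }
};

// Mean squared error of the neuron against known outcomes in {-1, +1}.
double loss(const Neuron& n, const std::vector<std::vector<int>>& votes,
            const std::vector<int>& outcome) {
    double e = 0.0;
    for (std::size_t i = 0; i < votes.size(); ++i) {
        double d = n.predict(votes[i]) - outcome[i];
        e += d * d;
    }
    return e / votes.size();
}

// (1+1) evolution: mutate the weights, keep the child only if it scores better.
Neuron evolve(const std::vector<std::vector<int>>& votes,
              const std::vector<int>& outcome, int generations) {
    std::mt19937 rng(42);
    std::normal_distribution<double> mut(0.0, 0.3);
    Neuron best{std::vector<double>(votes[0].size() + 1, 0.0)};
    double bestLoss = loss(best, votes, outcome);
    for (int g = 0; g < generations; ++g) {
        Neuron child = best;
        for (double& w : child.w) w += mut(rng);  // Gaussian weight mutation
        double l = loss(child, votes, outcome);
        if (l < bestLoss) { best = child; bestLoss = l; }
    }
    return best;
}
```

    No gradients are computed anywhere, which is what lets the same search drive non-differentiable pieces (like the discrete kNN votes) as well.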
     
    #195     Sep 14, 2016
  6. 931

    Userque, I have been making an algo similar to your description in C++ for about a year. So far it runs a multithreaded strategy tester on 12 threads, and the minimum acceptable timeframe with reasonable speed is 10-30 minutes, as lower timeframes slow the algo down exponentially.
    I'm thinking about learning CUDA to port it to the GPU for faster parallel processing.
    Some questions about CUDA:
    Would it be a good idea?
    For example, if 1000 cores try to access the same memory locations, could that slow things down?
    Would duplicating the data help work around that problem?
    Since GPUs are good for floating-point calculations, would float or double precision lose accuracy over many +/- operations, as in financial calculations?
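
    On the last question: the precision loss is real and easy to demonstrate on the CPU. Repeatedly adding in 32-bit float silently stalls once the running sum is large enough that the addend falls below the sum's precision; double pushes that point far out (and for actual money, integer cents or a decimal type are safer still). A minimal demonstration:

```cpp
// Repeatedly adding 1.0f stalls at 2^24 = 16777216: beyond that point,
// sum + 1 rounds back to sum, so further additions are silently lost.
float floatSum(long n) {
    float s = 0.0f;
    for (long i = 0; i < n; ++i) s += 1.0f;
    return s;  // 16777216.0f for any n >= 16777216
}

// double has a 53-bit significand, so the same sum stays exact here.
double doubleSum(long n) {
    double s = 0.0;
    for (long i = 0; i < n; ++i) s += 1.0;
    return s;  // exactly n for n well below 2^53
}
```

    floatSum(20000000) returns 16777216.0, not 20000000.0 — a 16% error from nothing but repeated addition.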
     
    Last edited: Sep 18, 2016
    #196     Sep 18, 2016
  7. userque

    I'm afraid I don't know much about coding in C++, GPU optimization, etc. But in the past, others have commented that I would benefit from CUDA/GPU utilization.

    Java has special classes (e.g. BigDecimal) for critical financial-transaction math where maximum precision is needed. Perhaps C++ has an equivalent.

    I only do EOD trades now, but I've always suspected that I wouldn't be able to use the algo to trade on less than a 30-minute basis, even with optimal code/hardware, due to the computational workload.

    I ultimately would like to code an advanced, single, multi-parameter kNN...without additional NN's. It would require even more computer resources, however. It would output percentage or absolute values rather than only directions.
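
    A value-outputting kNN like that is essentially kNN regression: average the neighbours' outcomes instead of voting on their signs. A hedged sketch — the features, distance metric, and unweighted average are illustrative assumptions, not the planned design:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical kNN regressor: returns the mean outcome (e.g. a % move)
// of the k nearest training points, rather than just a direction.
double knnRegress(const std::vector<std::vector<double>>& train,
                  const std::vector<double>& outcomes,  // e.g. % moves
                  const std::vector<double>& query, std::size_t k) {
    // Squared Euclidean distance from each training point to the query.
    std::vector<std::pair<double, double>> dist;
    for (std::size_t i = 0; i < train.size(); ++i) {
        double d = 0.0;
        for (std::size_t j = 0; j < query.size(); ++j) {
            double diff = train[i][j] - query[j];
            d += diff * diff;
        }
        dist.push_back({d, outcomes[i]});
    }
    // Average the outcomes of the k nearest neighbours.
    k = std::min(k, dist.size());
    std::partial_sort(dist.begin(), dist.begin() + k, dist.end());
    double sum = 0.0;
    for (std::size_t i = 0; i < k; ++i) sum += dist[i].second;
    return sum / k;
}
```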

    You're the first I've heard of doing something similar to me. Do you know of others doing the same? What's your rough topology? Mine is currently kNNs --> NNs --> NN

    Thanks.
     
    #197     Sep 18, 2016
  8. vicirek

     
    #198     Sep 18, 2016
  9. 931

    I don't know of others, but most likely more people have been working on similar ideas, as it seems a very straightforward approach; in the details, though, I'm sure we have many differences that change the outcome.
    In the past, computers may have been too slow to implement something similar.

    I don't know about topologies, but I can describe how it operates.
    I'm not sure my algo classifies as a neural net, as I don't know much about NNs yet and it is based on parameters, but it should fall under machine learning at least.

    The similarity seems to be in the idea of using patterns from the past, via concepts like those described in your previous posts.

    About my implementation:
    For the timeframe I use 10 or 20 minutes, but 1h and 2h are much faster for backtesting.
    The algo works by constantly scanning for settings.
    If a better set is found, by comparison with the previous output parameters, the tester threads get the new set and the same process is repeated.
    New settings change the way patterns are scanned and results are filtered.
    As new data arrives, the learning period for the settings is shifted forward.

    To fight over-optimization, the algo goes for lower gaps instead of just maximal past profit.
    Are there other ideas to test as well?
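
    The loop described above — draw candidate settings, score them on a sliding in-sample window, keep a candidate only if it beats the current best, shift the window as new data arrives — might be sketched like this. The Settings struct, the toy fitness, and the data are invented stand-ins, not the poster's C++ implementation (and the real version would fan candidates out across tester threads):

```cpp
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <random>
#include <vector>

// A stand-in for the strategy's tunable parameters.
struct Settings { double threshold; };

// Toy fitness: how many window values land within 0.5 of the threshold.
// A real fitness would score backtest results (e.g. profit with a penalty,
// in the spirit of preferring "lower gaps" over raw past profit).
int fitness(const Settings& s, const std::vector<double>& window) {
    int score = 0;
    for (double v : window)
        if (std::fabs(v - s.threshold) <= 0.5) ++score;
    return score;
}

// One walk-forward step: try random settings on data[start, start+len) and
// keep the best; callers shift `start` forward as new bars arrive and repeat.
Settings searchWindow(const std::vector<double>& data, std::size_t start,
                      std::size_t len, int trials, std::uint32_t seed) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> draw(0.0, 10.0);
    std::vector<double> window(data.begin() + start, data.begin() + start + len);
    Settings best{draw(rng)};
    int bestScore = fitness(best, window);
    for (int t = 1; t < trials; ++t) {
        Settings cand{draw(rng)};
        int score = fitness(cand, window);
        // Replace the working set only when the candidate scores better.
        if (score > bestScore) { best = cand; bestScore = score; }
    }
    return best;
}
```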
     
    Last edited: Sep 19, 2016
    #199     Sep 19, 2016