The ACD Method

Discussion in 'Technical Analysis' started by sbrowne126, Jul 16, 2009.

  1. Redbaron et al., thank you for sharing your coding tips. It is not taking this thread off track; it is all about ACD and taking it to another level, so it belongs here rather than in the programming forum. Like the good stuff baggerlord posted, it is motivating and helpful.

    Robert, I've been running something similar but using deciles for categorisation; I prefer numbers to descriptions, but this has given me a fresh idea to test. Life is in the way at the moment; after 8th July I'll have time to tweak a few Excel formulas and see if it's worth a shout.
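    In case anyone wants the same idea outside Excel, decile categorisation is just quantile bucketing. A rough R sketch (the `score` vector is placeholder data, not my actual numbers):

```r
# Decile categorisation: bucket a vector of daily scores (or any metric)
# into 1-10 using its own empirical quantiles.
set.seed(1)
score <- rnorm(250)   # placeholder data, one value per trading day

breaks <- quantile(score, probs = seq(0, 1, by = 0.1))
decile <- cut(score, breaks = breaks, include.lowest = TRUE, labels = 1:10)

table(decile)         # roughly 25 observations per bucket
```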
     
    #13501     Jun 29, 2017
    redbaron1981 likes this.
  2. Do you guys score your lines via a strict interpretation of what's in the book? Or more loosely - i.e. you just need the order of events to occur for the score, not the exact timing that's in the book?
     
    #13502     Jun 29, 2017
  3. SteveM

    I've found value in a variety of different scoring methods. I do believe today's algo-driven markets are prone to more two-sided trading, and therefore the traditional 3 and 4 scores on the daily charts were perhaps more meaningful during the open-outcry days of the 1980s than they are today. In other words, a C up in the old days meant more than it does now... sometimes when I assign a "2" to a hard trend up from the open one day and then assign a "4" the next day to a weak C down, I wonder if I am overvaluing the C trade.
     
    #13503     Jun 29, 2017
    deltastrike likes this.
  4. Originally I was using only QCollector for all data requirements. I purchased a bunch of ETF data (1-min), then kept my CSV files updated with QCollector. This worked great, but the CSV files grew, and even though I was subsetting the data before running my nlScoring code, the import process was a serious bottleneck. So I combined all these CSVs into a database, which sped things up a bunch.
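    For anyone doing the same consolidation, the idea is simple enough with DBI/RSQLite - not my actual code, and the file layout, table and column names here are just illustrative:

```r
# Sketch: fold a directory of 1-min CSV files into one SQLite table, so a
# later run can pull a symbol/date subset with a single query instead of
# re-reading every file. Paths and column names are assumptions.
library(DBI)
library(RSQLite)

con <- dbConnect(SQLite(), "bars_1min.sqlite")

for (f in list.files("csv_1min", pattern = "\\.csv$", full.names = TRUE)) {
  bars <- read.csv(f, stringsAsFactors = FALSE)
  bars$symbol <- sub("\\.csv$", "", basename(f))   # assuming one file per symbol
  dbWriteTable(con, "bars", bars, append = TRUE)
}

# Subset in SQL rather than importing whole files:
spy <- dbGetQuery(con,
  "SELECT * FROM bars WHERE symbol = 'SPY' AND datetime >= '2017-01-01'")

dbDisconnect(con)
```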

    I grew tired of Windows 10 and decided to transition to Linux (Fedora). Even though eSignal semi-worked under Wine, I really wanted to move completely away from Windows, and besides, I use a Mac laptop when away from my desk. This is when I started playing with the IB API and arrived at what I have today, which is basically QCollector written in R, updating a database (1-min resolution) rather than individual CSV files.
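    As a rough illustration of the updater loop (not my actual code - this sketch leans on the IBrokers package, and the contracts, duration and table name are placeholders):

```r
# Sketch of a QCollector-style updater: request recent 1-min bars from the
# IB API and append them to the same SQLite table. Uses the IBrokers
# package; symbols, duration and table name are illustrative only.
library(IBrokers)
library(DBI)
library(RSQLite)

tws <- twsConnect(port = 7497)                 # TWS / IB Gateway must be running
con <- dbConnect(SQLite(), "bars_1min.sqlite")

for (sym in c("SPY", "QQQ")) {
  bars <- reqHistoricalData(tws, twsEquity(sym),
                            barSize  = "1 min",
                            duration = "2 D",
                            useRTH   = "1")
  df <- data.frame(datetime = index(bars), coredata(bars), symbol = sym)
  dbWriteTable(con, "bars", df, append = TRUE)  # de-duplication omitted
}

dbDisconnect(con)
twsDisconnect(tws)
```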
     
    #13504     Jun 29, 2017
  5. As I understand it, QCollector also allows you to save downloaded data to Metastock-format files. Did you investigate that as an option, and if so, did you discard the idea because a Metastock file is not easy to query? What DB do you use for storage?
     
    #13505     Jun 30, 2017
  6. Not to state the obvious, but most of us probably don't need 1-minute or faster data. I use 5-minute and would probably not lose any value if I went to 15.
     
    #13506     Jun 30, 2017
  7. koolaid

    I use 15m and it's fine for what I need... but if you're maintaining a single huge database anyway, why not go for 1m? It gives you way more flexibility in terms of what you can backtest, since you can always roll 1m bars up into coarser bars, never the other way around.
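    For example, rolling 1-min bars up to 15-min is a one-liner with xts (a sketch on made-up data):

```r
# Why 1-min gives flexibility: any coarser bar can be rebuilt from it.
# Sketch with xts: aggregate a 1-min OHLCV series to 15-min bars.
library(xts)

set.seed(1)
idx    <- seq(as.POSIXct("2017-06-30 09:30", tz = "America/New_York"),
              by = "1 min", length.out = 390)
price  <- cumsum(rnorm(390)) + 100
bars1m <- xts(cbind(Open = price, High = price + 0.1, Low = price - 0.1,
                    Close = price,
                    Volume = sample(1e3:1e4, 390, replace = TRUE)),
              order.by = idx)

bars15m <- to.period(bars1m, period = "minutes", k = 15)
head(bars15m)
```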
     
    #13507     Jun 30, 2017
    I did not look into Metastock, and as far as I know, Metastock will not work on Linux.

    Currently I am using SQLite at home. I am also working within a development team where we have migrated everything to Postgres.
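    One nice side effect of going through DBI is that the same query code runs against either backend; only the connection line changes. A sketch (connection details and table name made up):

```r
# Same DBI code against SQLite (home) or Postgres (work); only the
# dbConnect() call differs. Credentials and table name are placeholders.
library(DBI)

con <- dbConnect(RSQLite::SQLite(), "bars_1min.sqlite")
# con <- dbConnect(RPostgres::Postgres(), dbname = "marketdata",
#                  host = "db.internal", user = "acd", password = "...")

dbGetQuery(con, "SELECT COUNT(*) AS n FROM bars WHERE symbol = 'SPY'")
dbDisconnect(con)
```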
     
    #13508     Jul 1, 2017
  9. My reason for choosing 1-minute resolution was that, in the course of my own investigations, I found that certain days have certain characteristics. Using 1-minute data let me go over the details with a finer comb, so to speak.
     
    #13509     Jul 1, 2017
  10. No substitute for granularity.
     
    #13510     Jul 1, 2017