Actually, it shouldn't be that hard to create such software. Each indicator has a percentage of accuracy, and there are dozens of indicators. At a given time the software would scan several (or maybe hundreds of) stocks, estimate the accuracy percentage of each indicator for a given stock, average the percentages across all indicators for each stock, and display the chance and profit for, say, buying each stock.

The strength of a buy or sell signal could be determined by the number of indicators (and their accuracy percentages) giving a buy signal versus those giving a sell signal. Say for a given stock you have (and say these are the only indicators):

Bollinger Bands 60% buy
DMI 45% buy
Fibonacci Retracement 28% buy
Force Index 20% sell
MACD 53% buy

Then the software would give a strong buy signal at that given time for that specific stock: a 92% chance of profit for an increase of 10 cents if bought at that moment, 84% for an increase of 20 cents, 71% for an increase of 30 cents, and so on.

If anyone thinks computers are too slow to go through all that processing in a matter of minutes, then what are all these new dual-core CPUs that Intel is selling? Maybe someone could put up a server that does the processing for hundreds of stocks and displays it for their customers for a monthly fee.

Also wanted to mention that for long-term AI analysis there is http://www.tdmresearch.com
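The averaging scheme described above could be sketched roughly like this. This is only an illustration of the idea, not a real trading system: the indicator names and accuracy figures come from the example in the post, and the weighting rule (accuracy-weighted average of buy/sell votes) is one possible interpretation of "average out the percentage of all indicators".

```python
# Hypothetical sketch of the signal-averaging idea from the post above.
# Accuracy figures are the made-up examples given there, not back-test results.

def composite_signal(indicators):
    """Combine per-indicator (signal, accuracy) pairs into one score.

    signal: +1 for buy, -1 for sell; accuracy: estimated hit rate in [0, 1].
    Returns a value in [-1, 1]; positive means the buy side dominates.
    """
    weight = sum(acc for _, acc in indicators)
    if weight == 0:
        return 0.0
    return sum(sig * acc for sig, acc in indicators) / weight

# The example from the post: four buys, one sell.
readings = [
    (+1, 0.60),  # Bollinger Bands
    (+1, 0.45),  # DMI
    (+1, 0.28),  # Fibonacci Retracement
    (-1, 0.20),  # Force Index
    (+1, 0.53),  # MACD
]
score = composite_signal(readings)  # around 0.81, i.e. a strong net buy
```

Scanning hundreds of stocks would just mean running this over each stock's current indicator readings; the expensive part in practice is computing the indicators and estimating their accuracies, not the averaging itself.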
Bluud, That's EXACTLY the kind of thing I'm interested in. I'm no programmer, but I know a little bit, and I know that such code wouldn't be too difficult for someone a little brighter than myself. Any takers?
Technical analysis, with artificial intelligence........... That sounds EXACTLY like my discretionary method!!!!
There is always good old OmniTrader (www.omnitrader.com): "The only software designed to help you make money no matter what the market is doing". How can you beat that?
VectorVest? Gorilla Trader? Disclaimer: I have never used these products, but I get mailings or see their ads sometimes.
How could they let this happen! There are academics... there are traders... then there is stupidity... so their models were NOT flawed!
Mark,

Re programming as per your requirements: it is actually very simple (in Excel, for example). However, it is also labour-intensive. Bluud's illustration is a good example. The problem is whether the weighting of specific indicators at specific times should be adjusted, e.g. DMI's under certain conditions should be given more (or less) weight. Of course, one could apply equal weight to all indicators at all times. This is the simple solution but, I feel, far from optimal. Further, how many indicators are to be included? How many are there? There's no reason why all shouldn't be included unless one has preferences (perhaps the weaker ones should be excluded). Again, more work.

Personally, and this puts me in a minority of 0.0000001%, I can't accept TA. That said, the methods of their construction can be utilised in price analysis, for example in moving averages. However, I feel the trick in MA's is not to stick rigidly with specific time-frame combinations (5, 15, 60-minute, etc.) but to constantly re-adjust and utilise them to reflect the underlying market volatility. On a flat day, A, B and C MA's may provide the best indicators; on a strong up-day, D, E and F MA's will be more suited. And so on. This is a major flaw in most back-testing: combinations produce X% probability of winning trades but no optimisation seems to be involved. If it was, I'm sure the success rate would increase.

ElectricSavant,

Years ago, I did some basic analysis on the daily returns of the UK FTSE incorporating the '87 crash. Basically, what this showed was that a move of the magnitude of the crash can be expected every X years. Let's assume this was every 7 years. So, after a crash you pile into the market, safe in the knowledge that statistically you have 7 years' grace. The following year the market takes a monumental dive and you are wiped out. Whither statistics?
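The idea of re-adjusting moving-average windows to current volatility could be sketched along these lines. To be clear, this is a hypothetical rule: the window lengths (5/15/60) and the volatility thresholds below are arbitrary placeholders, since the post deliberately leaves the "A, B, C vs D, E, F" combinations unspecified.

```python
# Minimal sketch of volatility-adjusted moving-average selection, as suggested
# in the post above. Thresholds and window lengths are invented for illustration.
from statistics import pstdev

def adaptive_ma_window(prices, fast=5, medium=15, slow=60):
    """Pick a moving-average length from recent return volatility."""
    returns = [b / a - 1 for a, b in zip(prices, prices[1:])]
    vol = pstdev(returns) if len(returns) > 1 else 0.0
    if vol < 0.005:   # flat day: a slow average smooths the noise
        return slow
    if vol < 0.02:    # typical day
        return medium
    return fast       # volatile day: a fast average tracks the move

def moving_average(prices, window):
    """Simple MA over the last `window` prices (or fewer if unavailable)."""
    window = min(window, len(prices))
    return sum(prices[-window:]) / window
```

The same re-estimation step could run intraday, so that the back-tested X% win rate is measured against windows that were themselves allowed to adapt, which is the optimisation the post says most back-testing omits.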
I think I'm correct in saying that while a 20% drop, for example, can occur every 7 years (occurring in the first year) AND it repeats in the second year, statistically speaking the "7 years" expectation is still valid. I think the rationale here (and I'm only half-informed on this) is that if a 7% drop occurs twice in the first two years, statistically it would not (should not) occur for another 14 years. The problem is, where do you place your reference dates? Well, you could incorporate all available historic data, but this will not reflect worse scenarios, which I think was the flaw in LTCM's dependence on Value-at-Risk models.

And picking up on a point from above, there is a moral here for those who back-test strategies or technical indicators: look at the best profit returns, but also look at the worst case, and double it. That is what I would consider a pragmatic and conservative, if not wholly realistic, expectation. But for performance-remunerated fund managers, it's not conducive to income generation or luxury apartments on the Champs-Elysees.

Grant.