Here are the data points from another data source; I didn't fill these in by hand for Oanda. What I've done is simply calculate D1 and D2 from the information in the Excel sheet:

D1 = open - low
D2 = high - open
short distance = the shortest side of the bar
long distance = the longest side of the bar

Now from the 21 points we need to figure out:
- stop
- entrance
- take profit
- direction of the pending trade

What you need to know is what you're willing to risk, the MDR:R (market defined risk:reward ratio), and the percentile of accuracy (the % of past data that would have worked with these numbers, which is what you can expect on the next bar as well).
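To make the D1/D2 bookkeeping concrete, here is a minimal Python sketch of those spreadsheet columns. The function name and the sample bar are placeholders of mine, not the actual Oanda data.

```python
def bar_distances(o, h, l):
    """D1, D2 and the short/long distance sides of one bar, as defined above."""
    d1 = o - l                    # D1 = open - low  (downside travel from the open)
    d2 = h - o                    # D2 = high - open (upside travel from the open)
    short_distance = min(d1, d2)  # shortest side of the bar
    long_distance = max(d1, d2)   # longest side of the bar
    return d1, d2, short_distance, long_distance

# One made-up bar on a 4-decimal pair:
print(bar_distances(1.3050, 1.3070, 1.3041))
# roughly (0.0009, 0.0020, 0.0009, 0.0020) -> 9 pips down side, 20 pips up side
```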
I'm inclined to follow my usual risk model with this kind of trading, i.e. 1% max risk if my stop is hit. With Oanda, it could be 0.1% per trade if I wanted, since they don't have fixed lot sizes.
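For reference, the standard fixed-fractional sizing formula that matches that 1% (or 0.1%) rule looks like this; the pip value per unit and the dollar figures are placeholder assumptions, not numbers from this thread.

```python
def units_for_risk(account_balance, risk_fraction, stop_pips, pip_value_per_unit):
    """Size the position so that hitting the stop loses risk_fraction of the account."""
    risk_amount = account_balance * risk_fraction
    return risk_amount / (stop_pips * pip_value_per_unit)

# e.g. $10,000 account, 1% risk, 10.5 pip stop, $0.0001 per pip per unit (USD-quoted pair)
print(round(units_for_risk(10_000, 0.01, 10.5, 0.0001)))   # ~95,238 units
```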
Update: new sheet. So we sort the data by our rules: a new longest shortest distance of 13 is made, and a new shortest longest distance of 14 is made. This is at 100%. Now we need to create some inefficiency in the machine to create risk, so we determine the probability of success we are comfortable with. I picked 80%, and we see how many trades we can take out to bring us down to 80%, which is 4 trades.
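A quick sketch of the "how many trades can we take out" step, assuming the idea is simply to drop as many bars as possible while the remaining share of the 21-bar sample stays at or above the chosen accuracy:

```python
import math

def trades_removable(n_bars, target_accuracy):
    """Largest k such that (n_bars - k) / n_bars is still >= target_accuracy."""
    return math.floor(n_bars * (1 - target_accuracy))

n = 21
k = trades_removable(n, 0.80)
print(k, round((n - k) / n, 3))   # 4 trades removed, ~0.81 of the bars still covered
```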
Updated Excel. Now we've figured out the regular, unaltered numbers: the highest distance of all the shortest distances, and the lowest distance of all the longest distances. Now we need to create a statistical working range. Note that the system is much like a machine: if you're 100% efficient the machine breaks, and there is never 100% accuracy in trading, since it's a probability of occurring. If the longest shortest distance > shortest longest distance, then move the longest shortest distance to the side of the shortest longest distance, because the trade could have been made in two different directions and been profitable.
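Here is how I read the LSD/SLD bookkeeping and the "move it to the other side" rule in code; treat this as a sketch of my interpretation rather than the exact spreadsheet logic, and the sample distances are made up.

```python
def working_range(short_distances, long_distances):
    """LSD = longest of the shortest distances, SLD = shortest of the longest distances."""
    lsd = max(short_distances)
    sld = min(long_distances)
    # If LSD > SLD the bar could have been traded profitably in either direction,
    # so (as I read the rule) the value is moved over to the SLD side, i.e. swapped.
    if lsd > sld:
        lsd, sld = sld, lsd
    return lsd, sld

print(working_range([5, 8, 13, 4], [22, 14, 18, 30]))   # (13, 14) -> the 100% numbers above
```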
Sorry, those last two posts are out of order; one was supposed to come before the other, I dunno how I did that. Update: new sheet. Now we removed the 2 shorts and the 2 longs, giving us new numbers:

LSD (longest shortest distance) = 10
SLD (shortest longest distance) = 17
STOP = open, at an 80% accuracy rate, and a market defined risk:reward of MDR:R = 10:7 @ 80%

To get a better idea of what all this means profitability-wise, we look at this number: NPOT (net profit over time).

NPOT = profitable trades over 21 bars - losses over 21 bars
NPOT = (17 profit trades * 7 pips profit) - (4 failed trades * 10 pips stop)
NPOT = 119 - 40
NPOT = 79 pips profit over the last 21 bars at these numbers.
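The same NPOT arithmetic, spelled out as code with the numbers above:

```python
def npot(wins, reward_pips, losses, stop_pips):
    """NPOT = pips won by the profitable trades minus pips lost by the stopped-out trades."""
    return wins * reward_pips - losses * stop_pips

# 17 winners at 7 pips each, 4 losers at 10 pips each, over the last 21 bars
print(npot(17, 7, 4, 10))   # 79 pips
```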
Forgot the spreadsheet, here it is. Now we have these numbers (using ±0.5 pip instead of 1):

entrance: 10.5 pips
exit (stop): 10.5 pips
TP: 7 pips
MDR:R = 10:7
NPOT = 79
80% accuracy
direction: price is above the EMA 20, so we go long on the next bar

Now the next bar comes and we can set up the trade lines.
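A sketch of how I read the trade-line construction from those numbers: a pending long entry 10.5 pips above the open, the stop 10.5 pips back (roughly at the open), and the TP 7 pips beyond the entry. The pip size and the EMA check are placeholders of mine, so this is an interpretation of the post, not a confirmed formula.

```python
PIP = 0.0001  # placeholder pip size for a 4-decimal pair

def trade_lines(next_open, price_above_ema20, entrance_pips=10.5, stop_pips=10.5, tp_pips=7):
    """Pending-order levels as I read them: entry beyond the open, stop back near the open, TP past the entry."""
    direction = 1 if price_above_ema20 else -1        # above EMA 20 -> long, below -> short
    entry = next_open + direction * entrance_pips * PIP
    stop = entry - direction * stop_pips * PIP        # lands back at roughly the open
    take_profit = entry + direction * tp_pips * PIP
    return direction, entry, stop, take_profit

print(trade_lines(1.3050, price_above_ema20=True))
# roughly (1, 1.30605, 1.30500, 1.30675) with a made-up open price
```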
Now we set up the chart and plotted the open, entrance, and TP. Notice how it didn't hit 10 down; from the bottom it hit 10 up and went past the TP. I can't view minute data here, so I can't say 100% how it moved. Fire away if you have questions.
Well ntw31, that's a good question... one step further! We need a fixed point (the open) to measure from, a varying point, the PNR (point of no return), and data. But this time, instead of using probabilities of candle formation, we'll take the probabilities of action. We'll phrase the concept this way: if price MOVES XX pips from the open, it will move XX pips before returning to the open with XX% probability. Now we've really opened up an endless stream of mathematical/statistical possibilities, as the fixed point can be anything, like the price at an old high point... etc... anything... So with this we need to measure the PNR high before returning to the open, and what the probabilities will be. We will be using TICK DATA, our brains, a very big Excel sheet, and a very big computer, because... damn... it's gonna be a lot of calculation.
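To pin the measurement down, here's a small sketch of what "how far did price travel before returning to the open" looks like for one candle's ticks; the tick list is made up, and a real run would obviously use actual tick data.

```python
def pnr_high(open_price, ticks):
    """Max distance price travels above the open before it first trades back at/below the open."""
    max_excursion = 0.0
    for price in ticks:
        if price <= open_price and max_excursion > 0:
            break                                       # price has returned to the open
        max_excursion = max(max_excursion, price - open_price)
    return max_excursion

# Made-up ticks: price runs 12 pips up, then comes back down to the open
ticks = [1.3052, 1.3058, 1.3062, 1.3059, 1.3051, 1.3050, 1.3047]
print(round(pnr_high(1.3050, ticks) / 0.0001, 1))       # 12.0 pips
```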
As I don't know if I'm legally allowed to post tick data that I receive from companies, I'm just not going to post it; but note you can get all this data from Oanda with a $2000 account or higher, or from FXCM for free, but that isn't any good. We use the same sorting/stat methods as above, but without needing to sort LSD and SLD, because tick data is more efficient and we are going from the open and returning to the open, so there is no need for those functions. We sort the tick data by distance from the open (the highest distance reached before returning back to the open) on the hourly, for every candle, and from these points derive a statistical graph of distances and a list of probabilities.
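A sketch of that sorting/probability step, assuming the per-candle PNR-high distances (in pips) have already been measured with something like the pnr_high function above; the distances below are placeholder numbers, not real tick statistics.

```python
def pnr_probability_table(pnr_distances_pips):
    """For each distance X seen in the data: fraction of candles whose PNR high reached at least X pips."""
    n = len(pnr_distances_pips)
    return [(x, sum(1 for d in pnr_distances_pips if d >= x) / n)
            for x in sorted(set(pnr_distances_pips))]

def conditional_probability(pnr_distances_pips, moved_pips, target_pips):
    """'If price moves moved_pips from the open, it reaches target_pips before returning' probability."""
    reached = [d for d in pnr_distances_pips if d >= moved_pips]
    return sum(1 for d in reached if d >= target_pips) / len(reached) if reached else 0.0

# Made-up per-candle PNR highs (pips) for a handful of hourly candles
distances = [3, 5, 5, 8, 12, 12, 15, 20, 4, 9]
for dist, prob in pnr_probability_table(distances):
    print(f"reached >= {dist:>2} pips before returning to open: {prob:.0%} of candles")
print(conditional_probability(distances, 5, 12))   # 0.5 -> half of the candles that moved 5 pips reached 12
```

The conditional_probability form is just the "if price moves XX pips it will move XX pips with XX% probability" statement from the previous post, read directly off those counts.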