I hope this is an appropriate forum for this question: I'm working on an algorithm that sets stops based on price "volatility", i.e. near-term variance. The stops are intended for the usual stop-loss purpose. Obviously, if they're too tight there are unfortunate exits, and if they're too loose there can be inordinate losses. The objective here is mostly to decide "how tight?", and that, it seems to me, requires near-term variance information. A stop would then trigger only if the price moved outside some range (2 sigma, perhaps) on the low side, which prevents setting a stop at, say, 1 sigma. This has to be based on the recent history of real-time data. So, where's a good source for that kind of data and, ideally, for the variability measures themselves?
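To make the "2 sigma on the low side" idea concrete, here's a minimal sketch of one way to compute such a stop level. The price series, the choice of close-to-close returns as the volatility estimate, and the function name are all illustrative assumptions, not a recommendation:

```python
# Sketch: place a stop n_sigma below entry, where sigma is estimated
# from recent close-to-close returns. Prices here are made up.
import statistics

def n_sigma_stop(prices, entry_price, n_sigma=2.0):
    """Estimate a stop level n_sigma below entry using recent returns."""
    returns = [(b - a) / a for a, b in zip(prices, prices[1:])]
    sigma = statistics.stdev(returns)           # per-bar return volatility
    return entry_price * (1 - n_sigma * sigma)  # price level of the stop

prices = [100.0, 101.2, 100.5, 102.0, 101.1, 101.8, 100.9, 102.3]
stop = n_sigma_stop(prices, entry_price=102.3)
```

The open question in the post still stands: the quality of `sigma` depends entirely on the recency and granularity of the data feeding it.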
The problem is that looking at recent volatility is looking into the past, and past volatility is, in my experience, not a great predictor for trading a specific instrument. In fact, some would say that counter-trading volatility is as valid as trading it. If you're looking to measure an instrument other than the overall market (for which the VIX already exists), you'll need backtesting software and a data source. True Range (TR), which you can set up to measure recent variation in price, may be helpful.
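For reference, True Range per bar is the largest of three spans: high minus low, high minus the previous close, and the previous close minus the low (the latter two in absolute value). A short sketch, with the simple-average smoothing and the bar tuples being my own assumptions:

```python
def true_range(high, low, prev_close):
    """True Range: the largest of the three spans around a bar."""
    return max(high - low, abs(high - prev_close), abs(low - prev_close))

def atr(bars, n=14):
    """Simple-average True Range over the last n bars.
    bars: list of (high, low, close) tuples, oldest first."""
    trs = [true_range(h, l, bars[i][2])           # bars[i] is the prior bar
           for i, (h, l, _) in enumerate(bars[1:])]
    return sum(trs[-n:]) / min(n, len(trs))
```

Wilder's original ATR uses an exponential-style smoothing rather than the plain average shown here; either captures "recent variation in price" for stop-sizing purposes.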
Yes, TR made a huge difference for me once I started applying it. It made the drawdowns a bit bigger per trade, but it didn't knock me out of the game and it let me stay positioned in the right direction.
My use of TR as a real-time filter is to alert me that sometimes the best decision is just to watch from the sidelines and not get chopped up in the craziness. I like to compare the current TR to a moving average of TR over the past 5 days. If it gets way out of line, I just sit and watch everyone else beat themselves up.
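That "way out of line" test can be sketched as a one-function filter. The 1.5x threshold is my own placeholder; the post doesn't specify what counts as out of line, only the 5-day lookback:

```python
def stand_aside(tr_history, current_tr, lookback=5, threshold=1.5):
    """Return True (sit out) when the current True Range is far above
    its recent average. threshold=1.5 is an illustrative choice, not
    a value from the original post."""
    avg_tr = sum(tr_history[-lookback:]) / min(lookback, len(tr_history))
    return current_tr > threshold * avg_tr
```

So with five days of TR around 1.0, a current TR of 2.0 would flag "sit and watch", while 1.2 would not.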