can someone lay some intro on me for measuring volatility (short time frames, if it matters)? or perhaps get me up to speed on the relevant points. it used to be 2 stdevs of a sample of prices, if I'm not very much mistaken... but that was a long time ago.

You gotta be careful with the word "volatility" because it means different things to different traders.

The "official" definition of volatility comes from option trading, where it means that funny variable you put into an option pricing model (usually Black-Scholes) to determine the premium, or conversely, the output you get from the pricing model given the market's stated premium. If you're putting volatility in to get a guesstimate of premium, then it is historical volatility (e.g., the annualized standard deviation of returns of the underlying security). If you're putting premium in to back out a guesstimate of (future) volatility, the guesstimated volatility is called implied volatility.

For non-option traders, a more common definition of volatility involves variation of the price, not the price return. Here, a common measure of volatility would be the ATR (average true range), with a lookback of 10 days or more. I find ATRs to be a more reasonable measure than standard deviations because they aren't as backward looking, being quasi-intraday variations rather than strictly interday variations. Or maybe it's just the idea that standard deviation typically only looks at one input stream (closing prices) whereas ATR looks at three (highs, lows, closes).

True range is defined as the true high minus the true low. The true high is the greater of today's high and yesterday's close; the true low is the lesser of today's low and yesterday's close. The ATR is usually just a simple MA of the most recent TRs, and the "proper" lookback is anybody's guess (backtest optimization variable?). A common usage of ATRs is Keltner channels, which are similar to Bollinger bands (which use standard deviations).

HTH kut2k2
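The true range / ATR definitions above are easy to sketch in code. A minimal Python version, assuming bars are simple (high, low, close) tuples (the function names and data layout are just illustrative, not from any particular library):

```python
def true_range(high, low, prev_close):
    """True range = true high - true low, as defined above.

    True high = greater of today's high and yesterday's close.
    True low  = lesser of today's low and yesterday's close.
    """
    return max(high, prev_close) - min(low, prev_close)


def atr(bars, lookback=10):
    """Simple moving average of the most recent true ranges.

    bars: list of (high, low, close) tuples, oldest first.
    The lookback is the usual fudge factor / optimization variable.
    """
    trs = [true_range(h, l, bars[i - 1][2])
           for i, (h, l, c) in enumerate(bars) if i > 0]
    recent = trs[-lookback:]
    return sum(recent) / len(recent)


# Three made-up bars: (high, low, close)
bars = [(10.0, 9.0, 9.5), (11.0, 9.8, 10.5), (12.0, 10.0, 11.0)]
print(atr(bars, lookback=10))  # -> 1.75
```

Note that the second bar's true low is yesterday's close (9.5), not today's low (9.8), which is exactly where TR differs from a plain high-minus-low range.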

Here are quick thoughts. "Volatility" in financial markets is the second moment of the asset-return distribution. Since returns are not distributed normally, volatility is local. Higher moments can exist because of conditionality of volatility on return, price or time. Examples of that are skewed equity distributions due to the leverage effect, or fat tails due to volatility clustering. For those reasons, the return sample you use to estimate volatility will affect your results.

Various models exist to describe volatility processes. GARCH(1,1) is by far the most popular. It is heavily used in option pricing and risk management, and it is the variance equivalent of an ARMA(1,1) model. However, it is regarded as a short-memory model because it treats autocorrelation in volatility as geometrically decaying. Models that consider volatility as having a longer memory include the fractionally integrated GARCH (FIGARCH) or the long-memory GARCH (LMGARCH).

In addition to the model and sample used to estimate volatility, frequency might also affect your results. If returns were white noise and prices followed a purely random walk, low- and high-frequency measurements would lead to the same estimates of volatility. Introducing autocorrelation in returns, however, can lead prices to mean-revert around moving averages. This will result in low-frequency return estimates being less volatile than high-frequency return estimates, thus biasing daily measurement downward compared to intraday measurement.

Maybe there are some other important points regarding measurement.... Anybody? cosine

This article, "Putting Volatility to Work", is worth reading: http://www.ivolatility.com/news/Putting_volatility_to_work.pdf

Could volatility be the number of ticks divided by tradable price change? You could make your own ranking standards based on the answer to the above...