I have a question about historical volatility calculations. Suppose we have a stock that pays a dividend of d on some regular basis (quarterly, say). We know that the price of the stock will drop by d on the ex-dividend dates, on top of whatever random motion there might be. The usual way to account for this when pricing an option is to subtract the present value of all dividends paid during the life of the option from the spot price of the stock, before plugging that spot price into the option pricing formula. This has the effect of shifting the probability distribution of the stock price at option maturity down by the future value of the dividends paid, which makes sense, because the stock price will in fact be that much lower due to paying out dividends. There is no reason to expect the standard deviation of that distribution to be any wider just because dividends were paid.

So, this gets to my question: when calculating the return on an ex-dividend day t, instead of using r_t = log(P[t] / P[t-1]), shouldn't we be using r_t = log((P[t] + d) / P[t-1])? After all, -d of whatever change there was in P[t] vs. P[t-1] was due to the dividend payment, a known event that has nothing to do with random Brownian motion and that was already taken into account in the option pricing by adjusting the initial spot price. So we don't want it artificially inflating the volatility.

I realize this is a nitpicking question; with quarterly dividends it only affects about one out of every 63 returns (roughly 252 trading days per year, divided by four). But still, if we want to do things right, should we make the above modification when computing historical volatility?
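To make the proposed adjustment concrete, here is a minimal sketch of both return calculations. The function names and the `dividends` mapping (ex-dividend day index to dividend amount) are my own illustrative choices, not anything standard; the example uses a stock that is flat except for a drop of exactly d on the ex-dividend day, so the adjusted returns are all zero while the unadjusted series picks up spurious volatility:

```python
import math

def log_returns(prices, dividends=None):
    """Daily log returns. `dividends` maps day index t to the dividend
    that went ex on day t; on those days d is added back to P[t] so the
    mechanical ex-dividend drop does not show up as a return."""
    dividends = dividends or {}
    returns = []
    for t in range(1, len(prices)):
        adjusted = prices[t] + dividends.get(t, 0.0)
        returns.append(math.log(adjusted / prices[t - 1]))
    return returns

def annualized_vol(returns, periods_per_year=252):
    """Sample standard deviation of returns, annualized."""
    n = len(returns)
    mean = sum(returns) / n
    var = sum((r - mean) ** 2 for r in returns) / (n - 1)
    return math.sqrt(var) * math.sqrt(periods_per_year)

# Stock sits at 100, pays d = 1 ex-dividend on day 2, then sits at 99.
prices = [100.0, 100.0, 99.0, 99.0]
raw = log_returns(prices)                        # unadjusted
adj = log_returns(prices, dividends={2: 1.0})    # dividend added back

print(annualized_vol(raw))  # positive: the ex-div drop looks like volatility
print(annualized_vol(adj))  # 0.0: the known drop is removed
```

The unadjusted series treats the one-day drop from 100 to 99 as a random move, while the adjusted series correctly reports zero volatility for a price path whose only change was the dividend itself.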