Hi fellows,

Suppose I have a collection of irregularly spaced time series (tick data with time_obs(i+1) - time_obs(i) ranging from 2 ms to 20 ms), and I want to calculate the 1-minute standard deviation of returns. My question: should the number of observations be the same in every 1-minute sample? For example, suppose the first minute contains 50 observations and the second minute contains 100. After calculating the standard deviation for each minute, is it legitimate to claim that the price volatility in the first minute was higher than in the second because std1 > std2?

Thanks in advance.
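For concreteness, here is a minimal sketch of the setup I have in mind, assuming log returns between consecutive ticks and pandas-style resampling into calendar-minute bins (the simulated gap distribution and start time are just placeholders):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Simulate irregularly spaced ticks: gaps uniform between 2 ms and 20 ms
gaps_ms = rng.uniform(2, 20, size=20_000)
times = pd.to_datetime(np.cumsum(gaps_ms), unit="ms",
                       origin="2024-01-02 09:30:00")
prices = 100 * np.exp(np.cumsum(rng.normal(0, 1e-4, size=times.size)))
ticks = pd.Series(prices, index=times)

# Log returns between consecutive observations
returns = np.log(ticks).diff().dropna()

# Per-minute standard deviation and per-minute observation counts
std_per_min = returns.resample("1min").std()
n_per_min = returns.resample("1min").count()

print(pd.DataFrame({"std": std_per_min, "n_obs": n_per_min}))
```

Note that with random arrival times the `n_obs` column naturally varies from minute to minute, which is exactly the situation my question is about.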