The statistics that we normally use, standard deviation, skewness, and kurtosis, are all highly sensitive to outliers. Their sensitivity comes from raising deviations to high powers: a couple of outliers can pull the whole estimate. L-moments, by contrast, are linear combinations of the order statistics, so the data enter only to the first power and the estimates are much less sensitive to outliers. See Wikipedia: http://en.wikipedia.org/wiki/L-moment. Has anyone tried using L-moments with market data?
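To see the difference concretely, here is a minimal sketch in Python/NumPy. The L-moment estimator below uses the standard probability-weighted-moment formulation; the spike values and sample sizes are made up for the demo.

```python
import numpy as np

def l_moments(x):
    """First four sample L-moments via probability-weighted moments.

    Returns (l1, l2, L-skew, L-kurtosis); the ratios t3 = l3/l2 and
    t4 = l4/l2 are the L-moment analogues of skewness and kurtosis.
    """
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)  # ranks 1..n
    b0 = x.mean()
    b1 = np.sum((i - 1) * x) / (n * (n - 1))
    b2 = np.sum((i - 1) * (i - 2) * x) / (n * (n - 1) * (n - 2))
    b3 = np.sum((i - 1) * (i - 2) * (i - 3) * x) / (n * (n - 1) * (n - 2) * (n - 3))
    l1 = b0                          # L-location (the mean)
    l2 = 2 * b1 - b0                 # L-scale
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l1, l2, l3 / l2, l4 / l2

def kurtosis(x):
    """Classical (non-excess) sample kurtosis: mean fourth standardized moment."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return np.mean(z ** 4)

rng = np.random.default_rng(0)
clean = rng.normal(size=1000)
spiked = np.concatenate([clean, np.full(5, 10.0)])  # five artificial outliers

# Classical kurtosis explodes when the spikes are added,
# while L-kurtosis barely moves.
print(kurtosis(clean), kurtosis(spiked))
print(l_moments(clean)[3], l_moments(spiked)[3])
```

Because each observation enters the L-moment sums only to the first power, the five spikes contribute in proportion to their rank weights rather than their fourth power, which is exactly the robustness the question is about.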

I haven't, but it looks interesting! If you are trying to deal with outliers, there are other approaches; random forests, for example, have been shown to be less susceptible to them. If you are considering different distributions, look into fat-tailed distributions. Note that outliers can be clumped together: the VIX, for example, has been riding high for some time now. In other words, for a period of time the outliers become more likely, and therefore shouldn't be reduced or ignored. Maybe L-moments take care of this? The mention of EVT (Extreme Value Theory) suggests that this might be the case.

It depends on the objectives. Let us consider this problem: you have 1000 measurements at around 1.0 and 30 more at around 10.0. Should you consider the latter outliers?
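To make the tradeoff concrete, a quick numerical sketch (Python/NumPy, using the 1000-vs-30 numbers above plus a little made-up noise): classical mean and standard deviation are pulled toward the minority cluster, while robust analogues (median, MAD) are blind to it entirely. Which behaviour you want depends on what those 30 points mean.

```python
import numpy as np

rng = np.random.default_rng(1)
# 1000 measurements near 1.0 and 30 near 10.0, as in the example above
x = np.concatenate([rng.normal(1.0, 0.05, 1000),
                    rng.normal(10.0, 0.05, 30)])

mean, std = x.mean(), x.std()
median = np.median(x)
mad = np.median(np.abs(x - median))  # median absolute deviation

print(f"mean={mean:.3f} std={std:.3f}")      # pulled toward 10.0
print(f"median={median:.3f} mad={mad:.3f}")  # ignores the 30 points
```

If the 30 points are measurement errors, the robust summary is right; if they are a real second regime, the robust summary silently throws away 3% of the signal.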

Thank you for the pointer, but Extreme Value Theory isn't a good fit in this case. I have bimodal data with noise. At the moment I am looking for a measure of how much time is spent in each mode. Sometimes there are spikes and sometimes not. If I could assume that there would be spikes, I could use the fourth-moment kurtosis to make a Bayesian argument to identify the modes and judge their relative strength. Without knowing that, I need a kurtosis-like metric that isn't so sensitive to spikes.
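One way to estimate time-in-mode without fourth moments is to cluster the samples directly. A minimal sketch (Python/NumPy, with made-up bimodal data; the two-center 1-D k-means here is just an illustration, not a claim about the best method for this problem):

```python
import numpy as np

def mode_fractions(x, iters=50):
    """Estimate the fraction of samples in each of two modes via 1-D 2-means."""
    x = np.asarray(x, dtype=float)
    centers = np.array([x.min(), x.max()])  # start the centers far apart
    for _ in range(iters):
        # assign each sample to its nearest center, then re-estimate centers
        labels = np.abs(x[:, None] - centers[None, :]).argmin(axis=1)
        for k in (0, 1):
            if np.any(labels == k):
                centers[k] = x[labels == k].mean()
    frac = np.bincount(labels, minlength=2) / len(x)
    return centers, frac

# Made-up example: 70% of the time near 1.0, 30% near 10.0
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(1.0, 0.2, 700),
                    rng.normal(10.0, 0.5, 300)])

centers, frac = mode_fractions(x)
print(centers, frac)  # recovers the two mode locations and occupancy fractions
```

Note that plain k-means is itself pulled by large spikes, so in practice you would clip or pre-filter them first (or use a robust center such as the per-cluster median); the sketch only shows occupancy estimation once the modes are separable.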