Hmm. My approach has been to normalize any signal I use against its standard deviation (cross-sectional or longitudinal), fit a sigmoid to clip the extremes, and then use a hysteresis band to reduce trading. That removes the need for forecasting, but I'm now actually starting to think that forecasting returns has its merits.
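For concreteness, here's a minimal Python sketch of that pipeline as I understand it. The function names, the rolling window, and the band width are my own illustration, not the exact setup being described:

```python
import numpy as np

def zscore(signal, window=60):
    """Rolling (longitudinal) z-score of a raw signal."""
    s = np.asarray(signal, dtype=float)
    out = np.full_like(s, np.nan)
    for i in range(window, len(s)):
        w = s[i - window:i]
        sd = w.std()
        out[i] = (s[i] - w.mean()) / sd if sd > 0 else 0.0
    return out

def squash(z, lo=-2.0, hi=2.0):
    """Clip at roughly +/-2 stdevs, then squash into (-1, 1) with tanh."""
    return np.tanh(np.clip(z, lo, hi))

def with_hysteresis(target, band=0.2):
    """Only move the held position when the target drifts more than
    `band` away from it -- this is what reduces trading."""
    pos = 0.0
    held = []
    for t in target:
        if not np.isnan(t) and abs(t - pos) > band:
            pos = t
        held.append(pos)
    return np.array(held)
```

The hysteresis step is the only stateful part: the position trails the squashed signal and simply refuses to update for small wiggles.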

Since most sigmoid functions map the interval (-inf, +inf) to (0, 1) (or (-1, 1)), you might be able to skip the normalization-by-standard-deviation step and just adjust the gain or slope parameter of your sigmoid to compensate. That would serve the same function as the "filter out signals near zero" method I mention above. I use the term "forecast" pretty broadly: I more or less consider any numeric signal that gets you in or out of a trade a forecast of something.
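To spell out the equivalence: folding 1/sigma into the sigmoid's gain gives exactly the same output as dividing by sigma first. A toy sketch (the sigma value is made up for illustration):

```python
import math

def sigmoid(x, gain=1.0):
    """Logistic function rescaled to (-1, 1); `gain` sets the slope at zero."""
    return 2.0 / (1.0 + math.exp(-gain * x)) - 1.0

sigma = 4.0
x = 3.0

# Normalizing by sigma and then squashing...
a = sigmoid(x / sigma)
# ...is the same as leaving x raw and using gain = 1/sigma:
b = sigmoid(x, gain=1.0 / sigma)
```

So the z-score step isn't strictly necessary for the squashing itself; its separate value is that the stdev estimate adapts over time while a fixed gain does not.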

Yeah, true. Normalization lets me avoid calibrating/fitting almost completely: I take a sigmoid of a z-score and assume some reasonable window, like +/- 2 stdevs, to clip by. Fitting a sigmoid directly would require running iterative backtests. I feel, possibly incorrectly, that as a process this reduces the possibility of curve fitting, especially for low-quality signals (which describes almost all of my universe, sadly). Hmm, not sure about that; I think your way is superior, especially when you have decent predictive power.

In my method, I apply the same band for going from 0 to, say, 0.5 as for going from 2 to 2.5 (if you think in standard deviations, as I do). In your case (let's call it "hollowing the signal out") you actually ignore the signal where it counts, i.e. when the signal is weak. I've historically thought in terms of "signals", and that has worked for me more or less: "if X, I want to buy Q; the bigger the X, the more I want it". But now I'm thinking that using E(Q) = f(X) as a proper forecast allows for a lot of niftier things, like transaction optimization, for example.
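A toy contrast of the two filters being discussed, with made-up thresholds. The uniform band treats a move near zero the same as a move deep in the tail, while "hollowing out" suppresses only the weak readings:

```python
def uniform_band(prev, target, band=0.5):
    """Same hysteresis band everywhere: moving 0 -> 0.4 is blocked
    exactly like moving 2.0 -> 2.4 would be."""
    return target if abs(target - prev) > band else prev

def hollow_out(z, threshold=0.5):
    """Dead zone near zero: weak readings (likely noise) are zeroed
    out entirely; strong readings pass through untouched."""
    return 0.0 if abs(z) < threshold else z
```

With these, `uniform_band` saves trades uniformly across the signal's range, whereas `hollow_out` concentrates all of its filtering on the region where a low-quality signal is least trustworthy.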