I am a discretionary manager too, just that I am so slow that I have not caught up to it yet. For what it’s worth, I use stop losses whenever I get involved in special-situations type of stuff. The idea is that “my read was incorrect for some reason and I don’t want to be in the trade any more”. In that case, I decide ab initio to use either a two-stage stop loss (cut half at loss X, cut all at Y) or a single-stage one. In all situations I usually leave a tiny (inconsequential in the context of my book) position on so I am forced to follow the trade and see the outcome. For my systematic stuff, I do both: predefined SL/TP combinations when using classification methods (it combines naturally with them), and no stop loss (position sizing as the risk-management tool) when I am using regression methods (again, that is natural). To find optimal stop-loss and take-profit levels, I generate synthetic data that has similar vol/skew/kurtosis to my instrument and simulate my process over it to find optimal parameters. Not as straightforward as it sounds, but it works.
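The two-stage stop described above can be sketched as a simple rule. This is a hypothetical illustration only; the thresholds, the half/full split, and the function name are placeholders, not the poster's actual parameters:

```python
def two_stage_stop(position: float, pnl: float,
                   first_stop: float, final_stop: float) -> float:
    """Return the position size to keep after applying a two-stage stop.

    Cut half the position once P&L falls to `first_stop`, and cut the
    whole thing at `final_stop`. All thresholds are hypothetical.
    """
    if pnl <= final_stop:
        return 0.0           # second stage: flatten the trade
    if pnl <= first_stop:
        return position / 2  # first stage: cut half
    return position          # no stop triggered

# e.g. with stops at -2% and -5% of notional on a 100-unit position:
remaining = two_stage_stop(position=100.0, pnl=-0.03,
                           first_stop=-0.02, final_stop=-0.05)
```

(A "leave a tiny residual position on" variant would just return a small nonzero size instead of 0.0 at the second stage.)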
Out of curiosity, how is this quantifiable in terms of being helpful? I'm not sure where you stand on the whole price-action/TA trading debate, but many here think it's useless. When you try to figure out why a stock dropped in price, couldn't there be a million reasons, and how would finding this out even help with predicting what comes next? You see lots of stocks go up on bad news, or down on good news. Cathie Wood is convinced that Tesla is a solid buy, and it's nothing but bad news there: top execs left, over 10% of the workforce was fired, sales are down. Is this cost cutting and therefore bullish? The Supercharger team was fired. I read it's because the manager was told by Tesla to reduce headcount, and the manager wouldn't, so Elon got rid of the whole team. So how do you evaluate this? Does saving money sound like a positive? Does Elon sound competent for being firm? Or will this disrupt the Supercharger network rollout and hence turn away what future potential buyers they had? I guess my point is that depending on the bias we have about a stock, we can agree with a news piece or disagree, but if we are only guessing, how is this helpful in trying to figure out what to do about a sudden move in a stock's price? We also have to consider which piece of news is important and which is inconsequential. So it almost seems harder to try to understand why.
The juice exists because it is hard. Information that’s relevant to revenue, margins, and cash flow changes the estimates of future earnings (and the long-term growth rate), which drives the price. A fluctuation in a stock price from $155.56 to $155.87 and back to $155.43 is typically just a function of how regular liquidity flows interact with the order book. What I do as a discretionary long/short manager is try to convert a continuous time series into a binary bet. Once you start seeing stock prices as the aggregate of values × outcomes × odds, it completely changes the way you think about risk and how you want to position.
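The "values × outcomes × odds" framing amounts to pricing the stock as an expected value over discrete scenarios. A toy two-outcome version, with all numbers hypothetical:

```python
# Hypothetical binary bet: the thesis either plays out or it doesn't.
deal_price = 58.0   # value of the stock if the catalyst lands
downside = 41.0     # value if it breaks
prob_close = 0.85   # your estimated odds of the good outcome

# Aggregate of values x outcomes x odds:
expected_value = prob_close * deal_price + (1 - prob_close) * downside

current_price = 55.0
edge = expected_value - current_price  # positive edge -> attractive bet
```

With more outcomes you just sum more value-times-probability terms; the point is that risk becomes "how wrong can my odds/values be" rather than "how much does the line wiggle".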
May I ask what value you find in synthetic data? I touched on the subject lightly at work, e.g. creating synthetic data with LLMs, but I always wondered what the exact difference is between this and simplistic extrapolation.
I mean, simulation is an extrapolation method: you're creating paths that you've never seen in real life. It's just that the number of variables is high enough that it's near impossible to do with simple linear interpolation or the like. In my case, I create synthetic paths that have the same mean/volatility/skew/kurtosis as my actual instrument. The idea is that they have no alpha built in, but for testing your trading rules you can model the paths and the terminal distribution. So things that add path dependency, like stop losses and take profits, can be tested on a much higher number of scenarios, which helps you avoid curve fitting.
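One simple way to build such alpha-free synthetic paths (not necessarily the poster's method) is to bootstrap-resample the instrument's own returns: the marginal mean/vol/skew/kurtosis carry over by construction, while any serial structure, i.e. the alpha, is destroyed. A minimal sketch, using a heavy-tailed random sample as a stand-in for real returns:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for the instrument's actual daily returns (heavy-tailed)
real_returns = rng.standard_t(df=4, size=2000) * 0.01

def synthetic_paths(returns, n_paths, horizon, rng):
    """Bootstrap-resample historical returns into synthetic P&L paths.

    Each path draws `horizon` returns i.i.d. from the empirical
    distribution, so the marginal moments are preserved but any
    time-series structure (alpha) is not.
    """
    idx = rng.integers(0, len(returns), size=(n_paths, horizon))
    return np.cumsum(returns[idx], axis=1)

paths = synthetic_paths(real_returns, n_paths=10_000, horizon=20, rng=rng)
terminal_pnl = paths[:, -1]  # terminal distribution over 10k scenarios
```

Path-dependent rules (stops, targets) can then be scored across all 10,000 scenarios instead of the single historical path.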
Argh... I probably touched on it too lightly. Here is the part I can’t wrap my head around, at least intuitively: I understand that you are “multiplying” your data while keeping the underlying statistics and distributional characteristics intact, but in my (imperfect) view it is still “curve fitting”. When you say you need more data to build models, I understand, and that is a good use, but the curve stays the curve, so to speak.
Stop losses are a conservative approach to trading, but that does not necessarily mean they are bad.
Well, your signals aren’t curve fit: you’re still using alpha signals that come from real data, and you’re still applying all the anti-fitting tricks (a small number of variables, online learning in the case of ML, etc.). Now you want to decide on SL and TP parameters (e.g. 1 SD and 2 SD respectively, over the turnover period). However, instead of using the single actual path between entering and exiting, you create a collection of Brownian bridges. So if anything, you reduce the curve fitting that comes from tweaking the SL/TP levels. Does that make sense?
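A minimal sketch of that Brownian-bridge idea, with illustrative parameter values: simulate many paths pinned at the known entry and exit prices, then count how often the stop is touched before the target.

```python
import numpy as np

rng = np.random.default_rng(7)

def brownian_bridges(n_paths, n_steps, start, end, sigma, rng):
    """Brownian paths over [0, 1] pinned at `start` and `end`."""
    dt = 1.0 / n_steps
    dW = rng.normal(0.0, sigma * np.sqrt(dt), size=(n_paths, n_steps))
    W = np.cumsum(dW, axis=1)
    t = np.linspace(dt, 1.0, n_steps)
    # Condition the free Brownian motion on ending at (end - start)
    bridge = W - t * (W[:, -1:] - (end - start))
    return start + np.concatenate([np.zeros((n_paths, 1)), bridge], axis=1)

def first_touch(paths, level, from_below):
    """Step index of first touch of `level`, or +inf if never touched."""
    hit = paths >= level if from_below else paths <= level
    return np.where(hit.any(axis=1), hit.argmax(axis=1), np.inf)

# Entry 100, known exit 101; with sigma = 2 over the holding period,
# a stop at 1 SD below entry (98) and a target at 2 SD above (104).
paths = brownian_bridges(50_000, 100, start=100.0, end=101.0,
                         sigma=2.0, rng=rng)
tp_idx = first_touch(paths, 104.0, from_below=True)
sl_idx = first_touch(paths, 98.0, from_below=False)
p_stopped_first = np.mean(sl_idx < tp_idx)  # stop hit before target
```

Sweeping the SL/TP levels over this ensemble, rather than over the one realized path, is what takes the curve-fitting pressure off those two parameters.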
Been finding lately that the further away I am from modern technology, the better I do. Just bought a couple cows, to fertilize my land. Watch me double my money on craigslist!