I've recently begun using MC simulation to study Max DD and profit levels, and in doing so I've come across an issue I'd be curious to get opinions on. From what I understand, most people who use MC simulation have their program select trades randomly from a list of actual historical returns (in accordance with their relative probabilities) and then observe how reordering and/or excluding certain past returns affects DD and equity. I certainly see the value in this and the guidance it provides for position sizing.

What I'm curious about is whether anyone feels the following procedure is better or worse than the above, and why. My MC program lets me define variables according to a chosen distribution. So given a list of historical returns, I calculate the mean and std dev of the winners and the mean and std dev of the losers, input those into the program, and it generates values according to the distribution I select for each run (for simplicity's sake, assume I choose normal). The program generates a winning and a losing return for each potential trade; all I need to do is have it signal which of the two returns to record. I accomplish this by defining a binary variable for each trade whose probability of success equals the historical Win%. From that point forward, the rest of the metrics are calculated just as in the aforementioned procedure. I hope this is clear; please question me if it is not.

Anyway, I was wondering whether a continuous set of trade results generated according to a fitted distribution might give a slightly more realistic picture. After limited testing, I have noticed that the results from this procedure tend to be more pessimistic than those from the resampling process described in the first paragraph. I would appreciate anyone's thoughts on the matter.

As a side note, if I decided to go with this method, I would likely stick to a normal distribution for the winning trades, but use an alternative that captures fat tails on the losers.
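To make the comparison concrete, here is a rough sketch of both procedures side by side. Everything here is illustrative, not anyone's actual system: the toy return list, the 100-trades-per-run and 2000-runs settings, and the function names are all placeholder choices, and the drawdown metric is simple fractional peak-to-trough decline.

```python
import random
import statistics

def max_drawdown(equity):
    """Largest peak-to-trough decline of an equity curve, as a fraction."""
    peak, worst = equity[0], 0.0
    for v in equity:
        peak = max(peak, v)
        worst = max(worst, (peak - v) / peak)
    return worst

def equity_curve(returns, start=1.0):
    """Compound per-trade fractional returns into an equity curve."""
    curve = [start]
    for r in returns:
        curve.append(curve[-1] * (1.0 + r))
    return curve

def simulate_bootstrap(history, n_trades, rng):
    """Method 1: resample actual historical trade returns with replacement."""
    return [rng.choice(history) for _ in range(n_trades)]

def simulate_parametric(history, n_trades, rng):
    """Method 2: fit normals to winners and losers separately, then let a
    binary variable with p = historical win rate pick which one to draw from."""
    winners = [r for r in history if r > 0]
    losers = [r for r in history if r <= 0]
    win_rate = len(winners) / len(history)
    mu_w, sd_w = statistics.mean(winners), statistics.stdev(winners)
    mu_l, sd_l = statistics.mean(losers), statistics.stdev(losers)
    trades = []
    for _ in range(n_trades):
        if rng.random() < win_rate:          # the Win% "coin flip"
            trades.append(rng.gauss(mu_w, sd_w))
        else:
            trades.append(rng.gauss(mu_l, sd_l))
    return trades

# Toy historical per-trade return series -- replace with an actual trade list.
history = [0.04, -0.02, 0.03, -0.05, 0.06, -0.01, 0.02, -0.03, 0.05, -0.02]

rng = random.Random(42)
boot_dds, param_dds = [], []
for _ in range(2000):                        # 2000 simulated 100-trade runs
    boot_dds.append(max_drawdown(equity_curve(simulate_bootstrap(history, 100, rng))))
    param_dds.append(max_drawdown(equity_curve(simulate_parametric(history, 100, rng))))

print(f"bootstrap  median Max DD: {statistics.median(boot_dds):.1%}")
print(f"parametric median Max DD: {statistics.median(param_dds):.1%}")
```

One plausible reason the parametric version can come out more pessimistic: a fitted normal for the losers extends below the worst loss actually observed, so it occasionally produces losing trades larger than anything in the historical list, while bootstrap resampling can never do so.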
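On the fat-tail side note: Python's standard library has no Student-t sampler, but one can be built from a normal and a chi-square draw; swapping it in for the normal draw on the losing side is one simple way to get heavier loss tails. This is just a sketch of that idea — `df` is a tail-heaviness parameter you would have to pick yourself (smaller df means fatter tails), not anything fitted here.

```python
import math
import random

def student_t(rng, df, mu=0.0, sigma=1.0):
    """Draw one location-scale Student-t variate: t = Z / sqrt(V/df), where
    Z is standard normal and V is chi-square(df). Heavier tails than a
    normal for small df; converges to a normal as df grows."""
    z = rng.gauss(0.0, 1.0)
    v = rng.gammavariate(df / 2.0, 2.0)   # chi-square(df) via the gamma distribution
    return mu + sigma * z / math.sqrt(v / df)

# Example: a losing trade drawn with fat tails instead of rng.gauss(mu_l, sd_l).
rng = random.Random(1)
loss = student_t(rng, df=3, mu=-0.025, sigma=0.015)
```

Note that for small df the t distribution's standard deviation exceeds `sigma` (the variance is sigma^2 * df/(df-2)), so a fit-by-moments approach would need to rescale accordingly.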