I am designing a backtesting system to evaluate a trading algorithm. I'd like a benchmark / control test whose results can be compared against the custom algorithm's results. A limited "set" of stocks is selected to invest in, and the algorithm can choose to buy one or more of them.

To produce the benchmark, the backtest system can:

1. Buy some or all of the stocks in the set on the first day of the test period.
2. Sell all of the positions at the end of the test period.
3. Include transaction fees and slippage costs during the test period.

To test the algorithm, the backtest system can:

1. Allow the algorithm to choose to buy one or more stocks on the first test day.
2. Loop the algorithm through time until the last day of the testing period.
3. Include the same transaction fees and slippage costs during the test period.

The profit percentages of the "set test" and the "algo test" are recorded, and the "set test profit" is subtracted from the "algo test profit" to rank the run.

QUESTION 1: Should the algorithm be optimized to outperform the benchmark test (rather than simply optimizing for the algorithm's own profit)?

QUESTION 2: Which test results should be compared?

A. Buying and selling the entire set VS. ALL the positions the algo bought.
B. Buying and selling the entire set VS. EACH individual position the algo bought.
C. Buying and selling only the stocks the blackbox algo bought VS. the algo results.
D. Buying THE stock the algo bought for a position VS. that algo position's result.

QUESTION 3: How should the profits of the tests be calculated?

A. Buying and selling all positions in both the "set test" and the "algo test".
B. Buying and selling everything in the "set test" VS. allowing the algo to keep unsold stocks.
C. Buying the set and selling only the stocks the algo had sold by the end of its test VS. the algo results.
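To make the comparison concrete, here is a minimal sketch of the benchmark ("set test") leg and the ranking metric. The fee and slippage rates, function names, and sample prices are all assumed placeholders, not part of the question:

```python
# Hypothetical sketch: benchmark buy-and-hold profit with fees/slippage,
# and the "algo profit minus set profit" ranking metric.
# FEE_RATE and SLIPPAGE_RATE are assumed example values.

FEE_RATE = 0.001        # 0.1% commission per trade (assumed)
SLIPPAGE_RATE = 0.0005  # 0.05% slippage per trade (assumed)


def trade_cost(notional: float) -> float:
    """Fees plus slippage charged on one buy or sell of the given notional."""
    return notional * (FEE_RATE + SLIPPAGE_RATE)


def set_test_profit_pct(prices: dict) -> float:
    """Benchmark: buy one share of every stock in the set on day 1,
    sell everything on the last day, net of costs on both legs.

    prices maps ticker -> (first_day_price, last_day_price).
    """
    invested = sum(first for first, _ in prices.values())
    proceeds = sum(last for _, last in prices.values())
    costs = trade_cost(invested) + trade_cost(proceeds)  # buy leg + sell leg
    return (proceeds - invested - costs) / invested * 100


def excess_return(algo_profit_pct: float, set_profit_pct: float) -> float:
    """Ranking metric described above: algo test profit minus set test profit."""
    return algo_profit_pct - set_profit_pct


# Example with made-up prices: set of two stocks held over the test period.
prices = {"AAA": (100.0, 110.0), "BBB": (50.0, 45.0)}
bench = set_test_profit_pct(prices)
score = excess_return(5.0, bench)  # 5.0 stands in for a recorded algo profit %
```

Applying the same `trade_cost` function to both the set test and the algo test keeps the cost model identical across the two runs, which is what makes the subtraction meaningful.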