I am trying to model 2x and 3x leveraged ETFs (SSO, UPRO, etc.) to generate historical data before those funds' inception. I thought it would be simple: take the daily change in ^GSPC, multiply it by the leverage factor (2x for SSO), and compound; before daily expense fees, the result should be close to the actual NAV (or closing price) of SSO.

Instead I am seeing something strange: in the first years, the ideal (no-fees) synthetic SSO outperforms the actual fund, yet in more recent years the actual SSO performs far better than the ideal (no fees/expenses) synthetic fund. How can that be? And how can I model the ETF so it matches the actual fund closely?
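For reference, my model is essentially the following sketch (the function name and the sample return series are placeholders for illustration, not my actual code or data):

```python
import numpy as np

def synthetic_leveraged_nav(index_returns, leverage=2.0, start_nav=1.0):
    """Naive daily-rebalanced leveraged NAV: each day's fund return is
    `leverage` times the index's daily return, compounded from `start_nav`.
    No expense ratio, no financing cost, no dividends."""
    navs = [start_nav]
    for r in index_returns:
        navs.append(navs[-1] * (1.0 + leverage * r))
    return np.array(navs)

# Hypothetical daily ^GSPC percentage changes (illustrative only)
daily_returns = np.array([0.01, -0.02, 0.015])
nav = synthetic_leveraged_nav(daily_returns, leverage=2.0)
print(nav)  # compounded NAV path starting at 1.0
```

In the real run I feed in the full ^GSPC daily close-to-close changes and compare the resulting path against SSO's actual NAV series.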