This Kelly bet size uses only the first 4 moments of the return distribution? How did you derive that? I thought about taking the 4th-order Taylor polynomial inside the integral ∫ ln(1 + x·r) dP(r), where r is the return and x is the bet size, but that can't be accurate: the losses for which the argument of the log approaches 0 are essential to the bet size, and they aren't captured by any finite-order Taylor polynomial. Sorry if it's common knowledge, but I've never seen it before.

I always use the sample distribution, i.e., the uniform distribution over all observed returns:

Code:
```python
import numpy as np
from scipy.optimize import minimize_scalar

def kellyReward(x):
    # negative of the sample-average log growth at bet size x
    reward = 0
    for i in range(len(returns)):
        reward += np.log(1 + returns[i] * x)
    return -reward

lBound = -1
# bet size at which the largest observed loss becomes a 100% drawdown
rBound = -1 / min(returns) - .01
res = minimize_scalar(kellyReward, bounds=(lBound, rBound), method='bounded')
```

That said, although I always calculate Kelly, I never use it, because the true tails are necessarily fatter than the observed ones.
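For concreteness, here is a minimal sketch of the moment-based approximation I had in mind, maximizing the 4th-order Taylor polynomial of E[ln(1 + x·r)] built from the raw sample moments m_k = E[r^k]. The `returns` sample here is hypothetical, just to make the snippet runnable; the bounds are arbitrary:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# hypothetical sample of per-period returns, for illustration only
returns = np.random.default_rng(0).normal(0.01, 0.05, 1000)

# raw sample moments m_k = E[r^k] for k = 1..4
m = [np.mean(returns**k) for k in range(1, 5)]

def taylorGrowth(x):
    # negative of the 4th-order Taylor polynomial of E[ln(1 + x*r)]
    # around x*r = 0: x*m1 - x^2*m2/2 + x^3*m3/3 - x^4*m4/4
    return -(x * m[0] - x**2 * m[1] / 2 + x**3 * m[2] / 3 - x**4 * m[3] / 4)

res = minimize_scalar(taylorGrowth, bounds=(0, 10), method='bounded')
```

The polynomial is finite everywhere, so nothing in it blows up as the largest loss approaches a 100% drawdown, which is exactly the objection above.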