
Discussion in 'Trading' started by qll, Nov 30, 2006.

  1. :eek: I see.
     
    #21     Dec 3, 2006
  2. Now just in the spirit of bringing the thread back.


    Kelly believed in this equation:


    G_max = R

    where
    G_max is the maximum possible growth rate of the money (bankroll), and
    R is the optimal rate of return.

    Now he also had this idea:

    f = edge/odds

    If the edge is 0, the trade should not be made.
    If the edge is 5%, the person should place 5% of the portfolio.
    If the edge is 10%, the person should place 10% of the portfolio. And so on.
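The edge/odds rule above is the Kelly fraction. A minimal numeric sketch (my own illustration; the function name and the reading of "edge" as expected profit per unit staked are assumptions, not from the thread):

```python
# Kelly fraction f = edge/odds for a bet paying b-to-1 on a win.
def kelly_fraction(p, b):
    """p: win probability, b: net odds (profit per unit staked on a win)."""
    edge = b * p - (1 - p)  # expected profit per unit staked
    if edge <= 0:
        return 0.0          # no edge: do not make the trade
    return edge / b

# Even-money bet with a 55% win rate: edge = 0.10, so stake 10%.
print(kelly_fraction(0.55, 1.0))  # approximately 0.10
```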
     
    #22     Dec 5, 2006

  3. HAHAHAHAHAHA!

    I did not know Pump & Dump was considered long term.
     
    #23     Dec 5, 2006
  4. Now how would I define edge?

    There are many definitions... the best one is insider trading.
    The winning percentage can be applied as well.

    Any others that I have omitted?
     
    #24     Dec 5, 2006
  5. everyone agrees that optimal f is too large.

    is there a consensus on which % of optimal f would work best for traders?
     
    #25     Dec 5, 2006
  6. I have not found it yet.

    If there are any works on it by someone,
    I will be more than happy to go over them.
     
    #26     Dec 5, 2006
  7. FOUND SOMETHING......just give me a couple of hours to go through all of that........I was learning logs last year....will have to go through a couple of my notes.

    xxx
    The equation can be generalized for a compound game by adding each level and inserting the correct expected wins and losses for each level, given the various frequencies at which advantage levels occur, and multiplying f by the multiple bet at that level. To get the correct answer, (number of levels - 1) must be subtracted from the final answer, multiplying by B(0). xxx

    I am really not sure what you mean by this verbal description. Also, your formula for the simple game is not precisely correct. You wrote:
    B(n) = ((1+f)^(n*p)) * ((1-f)^(n*(1-p))) * B(0)

    The actual formula is
    B(n) = (1+f)^W * (1-f)^L * B(0), where W is the number of wins, L is the number of losses, and W + L = n. W and L are random variables, and n*p is just the expected (average) value of W.

    The best way is probably to just write the formula for the general case:
    B(n) = PI (1 + X_i*f), where X_i is the outcome of round i (either positive or negative) and PI stands for the product from i=1 to n. Note that the X_i are random variables: you do not know their values in advance, just as you do not know the values of W and L in the simple case.
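The randomness of that product formula is easy to see in a quick Monte-Carlo sketch (my own illustration; the even-money +1/-1 outcomes and the function name are assumptions):

```python
import random

# B(n) = PI (1 + X_i * f): bankroll after n rounds of staking fraction f,
# where each X_i is a random +1/-1 outcome with win probability p.
def simulate_bankroll(f, p, n, b0=1.0, seed=0):
    rng = random.Random(seed)
    b = b0
    for _ in range(n):
        x = 1 if rng.random() < p else -1
        b *= 1 + x * f
    return b

# Two runs with identical f and p generally differ, because W and L are random:
print(simulate_bankroll(0.1, 0.55, 1000, seed=1))
print(simulate_bankroll(0.1, 0.55, 1000, seed=2))
```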

    The point is that if either the simple or the compound game equation is manipulated, by varying the value of f to find where maximum growth exists (where B(n) is largest), it will be largest at the "Kelly" value for a simple game, or at what I have been calling the "Yamashita" value for any defined compound game.

    I think that this is a misleading statement. You are not looking for the "largest" value of B(n) at all. See below.


    Since the bare bankroll, unmodified by a log function or any other utility function that might be talked about, is being maximized by determining the f that maximizes it, what does utility function theory have to do with the calculation? I just do not see it.

    The use of a utility function is the *critical* point. As I said above, you do not generally "maximize" the bankroll B(n). First of all, the bankroll B(n) is a random variable. If you say that you maximize a random variable, it will most likely be understood that you maximize the expected value of this variable. If you were to do so, then the utility function U(B) would be linear, and you would bet everything you have any time you have any edge. See my paper for a proof of this statement.

    The point is that you maximize the expected value of U(B(n)), where U(x) is the utility function of wealth x. If you take U(x) = log(x+1) (logarithmic utility), then your objective becomes:
    maximize E[log(B(n)+1)], or just
    maximize E[log(B(n))], where the operator E stands for the expected value.

    In the simple game, your objective simplifies to:
    maximize E[W*log(1+f) + L*log(1-f)] = E[W]*log(1+f) + E[L]*log(1-f) = n*p*log(1+f) + n*(1-p)*log(1-f)
    (I set B(0) = 1 without loss of generality.)
    The maximizing f is then the well-known
    f = 2p - 1
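A quick numeric sanity check of that maximization (my own sketch; the grid search is an assumption, not from the post): for an even-money simple game, the expected log growth p*log(1+f) + (1-p)*log(1-f) peaks at f = 2p - 1, the edge.

```python
import math

# Expected per-round log growth of the simple even-money game.
def growth(f, p):
    return p * math.log(1 + f) + (1 - p) * math.log(1 - f)

p = 0.55
# Grid-search f over [0, 1) and keep the maximizer.
best_f = max((i / 10000 for i in range(9999)), key=lambda f: growth(f, p))
print(best_f)  # close to 2*p - 1 = 0.10
```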

    The approach is similar for the general game.

    Note, once again, that the use of the utility function was critical. You can also use another utility function. A class of appropriate utility functions is:
    U(x) = ((x+1)^a - 1)/a, where a is a parameter in the interval (-infinity, 1). a = 1 is not appropriate any more, since we get the linear utility function (betting everything), which does not possess some of the properties required of a utility function. If we take the limit a --> 0, we get exactly the logarithmic utility function.

    One of the reasons why the log utility function is used is probably the fact that it does *not* depend on your time horizon. The optimal f is the same regardless of how many games n you want to maximize the utility of B(n) over. (Again, you do not want to maximize just B(n).) If you pick another a, the optimal fraction f(n) will generally be a function of n. It will converge to some value f(infinity) as n --> infinity.
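To make the dependence on the utility choice concrete, here is a one-round sketch (my own simplification: it uses the unshifted power utility U(x) = (x^a - 1)/a rather than the (x+1)-shifted class above, and the grid search and names are assumptions):

```python
import math

# Optimal one-round fraction f for an even-money game with win probability p
# under an arbitrary utility `util`: maximize p*util(1+f) + (1-p)*util(1-f)
# by grid search over f in [0, 1).
def optimal_f(p, util):
    grid = (i / 10000 for i in range(9999))
    return max(grid, key=lambda f: p * util(1 + f) + (1 - p) * util(1 - f))

p = 0.55
f_log = optimal_f(p, math.log)                       # log utility (a -> 0)
f_half = optimal_f(p, lambda x: (x**0.5 - 1) / 0.5)  # power utility, a = 0.5
print(f_log, f_half)  # the less risk-averse a = 0.5 utility bets more
```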

    By the way, this is a very interesting question and I plan to look into it a little bit in another paper. There are other very interesting questions stemming from this: for example, there will probably be some borderline value A in the interval (0, 1) where the utility function is "good" for a lower than A, and "bad" for a greater than or equal to A, in the sense that a = A and greater will give an optimal f for which the player's bankroll converges to zero with probability one. Unfortunately, the calculations become very complicated. If Yamashita has already done it, please let me know. I really would not like to be accused again of stealing his ideas ;(. (Sorry for this small pinch, I just couldn't resist.)

    Regards,

    Karel

    http://www.bjmath.com/bjmath/Betsize/mlmax1.htm
     
    #27     Dec 5, 2006
  8. Now, if anyone knows this, tell me if I am thinking right.


    R = x*p

    where R is revenue,
    x is the number of products, and
    p is the profit per product.

    Multiply it by x again (as the rule follows).

    Can't I just find the derivative of the equation, f'(x),


    and then find the axis of symmetry (the vertex) in order to find the maximum point?

    Right. Now, if I take the simple Econ 101 equation and substitute the numbers, do you think I can find the possible CONSISTENT probability of the maximum % of the account used for maximum returns on the investment?


    Sorry for the run-on sentences.
     
    #28     Dec 5, 2006
  9. Ahahah........my theory might be true. Look at the chart.


    It is the same thing as finding the axis of symmetry (the vertex).

    By the way, to find the vertex you can use x = -b/(2a).

    Note that

    (-b +- sqrt(b^2 - 4ac)) / (2a)

    gives the roots of the quadratic, not the vertex; the vertex lies midway between the two roots.
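A quick check of the two formulas (my own sketch, with an arbitrary example parabola): x = -b/(2a) locates the vertex, while the +- radical expression gives the two roots, and the vertex sits exactly midway between them.

```python
import math

# Example parabola y = -x^2 + 4x - 3 (opens downward, so the vertex is a maximum).
a, b, c = -1.0, 4.0, -3.0
vertex_x = -b / (2 * a)            # axis of symmetry
disc = math.sqrt(b * b - 4 * a * c)
r1 = (-b + disc) / (2 * a)         # first root
r2 = (-b - disc) / (2 * a)         # second root
print(vertex_x, r1, r2)  # 2.0 1.0 3.0: the vertex is midway between the roots
```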
     
    #29     Dec 5, 2006
  10. qll

    i don't quite understand your point, but i guess we both try to prove the same thing:

    even though the optimal risk looks high, and even though it may be 20% or 40%, and even though you may get hit with a losing streak, if you can keep risking the same high %, you will win in the end.

    in my example, if you lose 20%, 20%, 20%, then 40%, you will only have about 30% left, but you will still end up with a 100% gain after 12 tries.
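the first part of qll's arithmetic checks out (a quick sketch, my own illustration):

```python
# Losing 20%, 20%, 20%, then 40% of the bankroll compounds to
# 0.8 * 0.8 * 0.8 * 0.6 = 0.3072, i.e. about 30% left.
remaining = 1.0
for loss in (0.20, 0.20, 0.20, 0.40):
    remaining *= 1 - loss
print(round(remaining, 4))  # 0.3072
```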

    i personally held 150% of my buying power in stocks overnight for 3 years, and my return was great. now i trade futures and i use 90-100% of my margin, but i keep my portfolio diversified.

    i think an average risk of 20%-33% is not too large, even if your win:loss ratio is only 2:1 and your win size to loss size is 1:1.
     
    #30     Dec 5, 2006