Some maths required...

Discussion in 'Options' started by earth_imperator, Aug 14, 2022.

  1. Taking the given Spot as the mean, and simply using Normal Distribution,
    one can easily compute -1SD and +1SD as follows:
    Code:
      Spot_at_minus_1SD = Spot * (1 - 0.341)
    
      Spot_at_plus_1SD  = Spot * (1 + 0.341)
    
    For the 0.341 (i.e. 34.1%), see the above link.
    Using one more decimal place is better: 0.3413 (34.13%).
    One gets this number by computing 0.5 - z_to_p(-1), where z_to_p(z) is the standard normal CDF.
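
    Where the 0.3413 comes from can be checked in a few lines of Python (a sketch using only the standard library; `std_normal_cdf` here plays the role of the `z_to_p` above):

```python
import math

def std_normal_cdf(z: float) -> float:
    """Standard normal CDF (the z_to_p above), via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Probability mass between the mean (z = 0) and z = -1:
p = 0.5 - std_normal_cdf(-1.0)
print(round(p, 4))  # 0.3413
```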

    Question: Is it possible to derive an IV (i.e. a stddev) from the above relation? :)

    Question: Can the above be turned into a Lognormal Distribution?
     
    Last edited: Aug 14, 2022
  2. I started off typing a very lengthy reply, but I think it's overkill here and I'm going to substantially cut it back.

    The bottom line is that you need to be very clear on what you're trying to accomplish; I really doubt you have the right formulas for whatever it is you're trying to do.

    Yes, it is trivial to calculate the SD *based on what you have there*, but I don't think you have what you want. SD = spot_plus_1SD - spot = spot + 0.341*spot - spot = 0.341*spot. However, I doubt that calculation is of much value.

    Typically, you'll want to estimate the standard deviation yourself, either by using historical data, a theoretical model, implied vol or maybe from a prediction based on your own market knowledge.

    You can model data however you would like, lognormal distribution included. That doesn't mean that all models are equally valid or equally useful, however. (Very far from it!) Models are assumptions we make about data to simplify our ability to make predictions and draw conclusions.

    I guess my initial reaction to your question is: what problem are you actually trying to solve? Then it might be possible to help more.

    PS: For what it's worth, many years ago I was trying to answer statistical questions about the financial markets but knew I didn't really know what I was doing. That led me to study stats more earnestly, which led me to audit a few stats courses at university, which led me to do my masters in statistics. It was a very worthwhile journey for me. I learned a lot, enjoyed it and now have many useful tools for better understanding the world around me (finance included).
     
    Ninja, stochastix, ET180 and 3 others like this.
  3. easymon1


    Krikey!
     
  4. Assuming you mean "implied volatility" when you say IV, then the answer is "no". Implied volatility is determined in a market environment: we just look at what people are willing to buy or sell an option for and then work backwards. "Volatility" is our guess at future volatility. An easy and often-used estimate is to derive it from historic data and some sort of probability distribution. You get to choose what you think is the best probability distribution, and you have chosen a Gaussian distribution. That doesn't mean future volatility as you defined it will match the actual implied volatility perfectly. It usually does not.
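
    The "work backwards" step can be sketched in Python (a minimal illustration, not production code: it assumes Black-Scholes pricing and uses a bisection search for the sigma that reproduces a given market price; all numbers are made up):

```python
import math

def norm_cdf(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def bs_call(S, K, t, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return S * norm_cdf(d1) - K * math.exp(-r * t) * norm_cdf(d2)

def implied_vol(price, S, K, t, r, lo=1e-4, hi=5.0, tol=1e-8):
    """Bisection: the call price is increasing in sigma, so bracket and halve."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, t, r, mid) > price:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Round-trip check: price an option at 20% vol, then recover the vol.
market_price = bs_call(100.0, 100.0, 1.0, 0.0, 0.20)
print(round(implied_vol(market_price, 100.0, 100.0, 1.0, 0.0), 4))  # 0.2
```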
     
    TheDawn and stochastix like this.
  5. Real Money


    Better idea is to estimate price/VWAP std. dev, momentum/trending momentum std. dev., actual (tradable) implied vol and indexes of vols, ratios of synthetic indexes to related indexes, cash value of tradable asset price differentials, time based measures of momentum, and so on...

    There are problems with your proposed method involving the underlying math theorems...

    Non-stationary phenomena (they interfere with parameter estimation, including std. dev. estimation).

    The variance itself is varying! Std. dev. is an estimator of root variance.

    Spot is definitely not the mean.

    Something related to your idea is price versus lagged price, e.g. price[0] - price[1], where the square bracket denotes the time lag.

    Things like

    2*(price[0] - price[2])/(price[0] - price[3]) or
    (price[0] - price[3])/(price[0] - price[5]) or
    functions involving squared price lags etc. are perhaps more amenable to variance-estimation techniques. Good luck to you.
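
    One way to read the lag notation above, as a throwaway Python sketch (the price series is invented purely for illustration, and real code would have to guard against zero denominators):

```python
# prices[k] is the price k steps back in time; prices[0] is "now".
# The values are made up for illustration only.
prices = [101.2, 100.8, 101.5, 100.9, 100.4, 101.1]

def lag_diff(p, a, b):
    """price[a] - price[b] in the notation above."""
    return p[a] - p[b]

ratio1 = 2 * lag_diff(prices, 0, 2) / lag_diff(prices, 0, 3)
ratio2 = lag_diff(prices, 0, 3) / lag_diff(prices, 0, 5)
print(round(ratio1, 6), round(ratio2, 6))  # -2.0 3.0
```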
     
  6. The problem in my case is that I don't have a time series of past data,
    just one data point with the current or last Spot,
    and I would like to use a lognormal distribution instead of the above normal distribution,
    while keeping it as simple as in the ND case (or reusing the ND case for the LogND case).
    I.e. I am just seeking a simple generic LogND estimate for -1SD and +1SD, like in the ND case.
     
    Last edited: Aug 15, 2022
  7. Found a workaround solution:

    S=100.0000 z=-1.0 p=0.341345 --> Sx=65.865525
    S=100.0000 z=+1.0 p=0.341345 --> Sx=134.134475
    DTE=365.0000(t=1.0000) IVa=41.755502 IVb=29.367265 --> avgIV=35.561383

    The above assumes the said ND calc is for t = 1 year. Just presume that the Spot values calculated above for -1SD and +1SD are LogND quantiles (which of course they are not), then solve the LogND formula for sigma (i.e. stddev or IV) in both cases, and take the average of the two as the final IV to use. Using that IV then gives these LogND results:

    S=100.0000 IV=35.561383 DTE=365.0000 rPct=0.00 qPct=0.00 z=-1.0 --> Sx=70.074317
    S=100.0000 IV=35.561383 DTE=365.0000 rPct=0.00 qPct=0.00 z=+1.0 --> Sx=142.705636

    and one now can apply this also to different time frames, here for the upcoming 180 days:
    S=100.0000 IV=35.561383 DTE=180.0000 rPct=0.00 qPct=0.00 z=-1.0 --> Sx=77.901210
    S=100.0000 IV=35.561383 DTE=180.0000 rPct=0.00 qPct=0.00 z=+1.0 --> Sx=128.367711

    for the upcoming 90 days:
    S=100.0000 IV=35.561383 DTE=90.0000 rPct=0.00 qPct=0.00 z=-1.0 --> Sx=83.812765
    S=100.0000 IV=35.561383 DTE=90.0000 rPct=0.00 qPct=0.00 z=+1.0 --> Sx=119.313567

    for the upcoming 30 days:
    S=100.0000 IV=35.561383 DTE=30.0000 rPct=0.00 qPct=0.00 z=-1.00 --> Sx=90.307352
    S=100.0000 IV=35.561383 DTE=30.0000 rPct=0.00 qPct=0.00 z=+1.00 --> Sx=110.732956

    This simple approximate solution is sufficient for my needs.
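
    For reproducibility, the numbers above can be recomputed with a short Python sketch. The formula assumed here is Sx = S * exp(z * sigma * sqrt(DTE/365)) with r = q = 0, inferred from the figures in this post:

```python
import math

S = 100.0
# Probability mass between the mean and 1 SD, i.e. 0.5 - z_to_p(-1):
p = 0.5 * math.erf(1.0 / math.sqrt(2.0))  # ~0.341345

# Step 1: treat the ND band edges as if they were LogND quantiles for
# t = 1 year, and solve Sx = S * exp(z * sigma) for sigma in both cases.
Sx_minus = S * (1.0 - p)          # ~65.865525
Sx_plus = S * (1.0 + p)           # ~134.134475
iv_a = -math.log(Sx_minus / S)    # ~0.417555
iv_b = math.log(Sx_plus / S)      # ~0.293673
iv = 0.5 * (iv_a + iv_b)          # ~0.355614

# Step 2: apply the averaged sigma to any horizon.
def band_edge(S, sigma, dte, z):
    """Sx = S * exp(z * sigma * sqrt(dte / 365))."""
    return S * math.exp(z * sigma * math.sqrt(dte / 365.0))

for dte in (365.0, 180.0, 90.0, 30.0):
    lo = band_edge(S, iv, dte, -1.0)
    hi = band_edge(S, iv, dte, +1.0)
    print(f"DTE={dte:.0f}  -1SD={lo:.6f}  +1SD={hi:.6f}")
```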

    Thx to all participants in this discussion.

    Case solved, case closed.
     
    Last edited: Aug 15, 2022
  8. LOL :)

    I shared that "PS" because OP reminds me just a little bit of myself many, many years ago and I thought he (and/or someone else reading this thread) might enjoy following a similar path if they find themselves intrigued by the power of statistics to solve difficult problems (and/or at least quantify uncertainty when problems remain largely unsolved).

     
  9. easymon1

    easymon1

    Yeah, they don't just give those degrees out of a Pez dispenser.
    Nobody ever went broke with Math, Stats or Engineering degrees and some personability, from what I've seen around. A guy can plug those into a lot of nice situations where picking up the essentials of the situation is hella easier than the bar of standards for those degrees. And a Masters to boot. Not too shabby, as they say.
     
    earth_imperator likes this.
  10. I'm glad you feel you've solved your problem well enough for your purposes. You say "case closed" and so I'll respect that and try not to say too much more.

    I would, if I may, just like to say that I am intrigued by your assertion that you only have one data point (or would like a solution that can be applied to a single data point), and I am curious about your use case. It is very often the case that we use as much data as we can reasonably collect/afford/measure/etc. in order to estimate the parameters of the distribution we want to use. In the case of the normal distribution, for example, we need to estimate two quantities from data in order to identify the normal distribution that best fits our data and is therefore most likely to be of use in making inferences or drawing conclusions. (There are infinitely many normal distributions, of course!)
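
    To make the "two quantities" point concrete, here is a tiny Python sketch of fitting a normal distribution to data by estimating its mean and standard deviation (the sample values are made up):

```python
import math

# A made-up sample of daily returns, purely for illustration.
returns = [0.012, -0.004, 0.007, -0.011, 0.003, 0.009, -0.006, 0.002]

n = len(returns)
mean = sum(returns) / n  # first parameter of the fitted normal
# Sample variance with the n - 1 denominator; its square root is the
# second parameter (the standard deviation).
var = sum((r - mean) ** 2 for r in returns) / (n - 1)
sd = math.sqrt(var)
print(round(mean, 6), round(sd, 6))
```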

    If you must rely on a single data point then you are forced to "bake in" a lot of assumptions... but at the end of the day it's all about practical utility and for all I know you have some use case where you just need a quick and dirty, very rough estimate... or perhaps you have some theoretical knowledge or prior research that already validates some of your assumptions; without actually knowing the problem I can't possibly know. :)
     
    Last edited: Aug 15, 2022
    earth_imperator likes this.