approaches to calibrating a Heston model

Discussion in 'Options' started by stochastix, Mar 2, 2023.

  1. There are several methods for calibrating the Heston stochastic volatility model, including:

    1. Maximum Likelihood Estimation (MLE): This is a popular method for estimating the parameters of the Heston model. It involves finding the parameters that maximize the likelihood function given a set of observed data.

    2. Method of Moments (MoM): This method involves equating the theoretical moments of the Heston model with the sample moments of the observed data. The parameters of the model are then estimated by solving a system of equations.

    3. Least Squares (LS): This method involves minimizing the difference between the observed prices of a set of options and the prices predicted by the Heston model. The parameters are then estimated by minimizing the sum of squared differences between the predicted and observed prices.

    4. Bayesian Estimation: This method involves using Bayes' theorem to estimate the parameters of the Heston model. The prior distribution of the parameters is combined with the likelihood function to obtain a posterior distribution of the parameters.

    5. Kalman Filtering: This method involves using a recursive filter to estimate the parameters of the Heston model. The filter updates the parameter estimates based on the observed data as it becomes available.

    6. Particle Filtering: This is a sequential Monte Carlo generalization of Kalman filtering that is useful when the Heston model has non-linear and non-Gaussian dynamics. It involves using a set of weighted particles to represent the posterior distribution of the parameters.

    The choice of calibration method depends on the specific requirements of the problem at hand, such as the availability of data, the desired accuracy of the estimates, and the computational resources available.
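    As a concrete illustration of the least-squares approach (method 3), here is a minimal Python sketch: a European call priced via the Heston characteristic function (the "little trap" formulation) and a sum-of-squared-errors objective over a set of quotes. Function names, the quadrature cutoff, and parameter ordering are my own choices, not a definitive implementation.

```python
import numpy as np
from scipy.integrate import quad

def heston_cf(u, S0, r, T, kappa, theta, sigma, rho, v0):
    """Characteristic function of ln(S_T) under Heston ("little trap" form)."""
    iu = 1j * u
    d = np.sqrt((rho * sigma * iu - kappa) ** 2 + sigma ** 2 * (iu + u ** 2))
    g = (kappa - rho * sigma * iu - d) / (kappa - rho * sigma * iu + d)
    edt = np.exp(-d * T)
    C = (kappa * theta / sigma ** 2) * (
        (kappa - rho * sigma * iu - d) * T - 2.0 * np.log((1 - g * edt) / (1 - g))
    )
    D = ((kappa - rho * sigma * iu - d) / sigma ** 2) * (1 - edt) / (1 - g * edt)
    return np.exp(iu * (np.log(S0) + r * T) + C + D * v0)

def heston_call(S0, K, r, T, params):
    """European call via the two-probability (P1/P2) Fourier representation."""
    lnK = np.log(K)
    def integrand(u, j):
        if j == 1:  # share-measure probability: CF tilted by S_T, normalized by the forward
            phi = heston_cf(u - 1j, S0, r, T, *params) / (S0 * np.exp(r * T))
        else:
            phi = heston_cf(u, S0, r, T, *params)
        return (np.exp(-1j * u * lnK) * phi / (1j * u)).real
    P1 = 0.5 + quad(integrand, 1e-8, 200, args=(1,), limit=200)[0] / np.pi
    P2 = 0.5 + quad(integrand, 1e-8, 200, args=(2,), limit=200)[0] / np.pi
    return S0 * P1 - K * np.exp(-r * T) * P2

def ls_objective(params, quotes, S0, r):
    """Sum of squared pricing errors over observed (K, T, price) quotes."""
    return sum((heston_call(S0, K, r, T, params) - p) ** 2 for K, T, p in quotes)
```

    Feeding `ls_objective` to `scipy.optimize.minimize` with bounds keeping kappa, theta, sigma, v0 positive and rho in (-1, 1) completes the calibration loop.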
     
  2. Define an objective function that combines the negative log-likelihood and the MSE metric with weighting factors


    Let w be a scalar weighting factor between 0 and 1 that represents the importance given to the negative log-likelihood versus the MSE metric. Then, the objective function can be defined as follows:

    f(θ) = w * (-ln(L(θ))) + (1 - w) * (MSE(θ))

    where θ represents the vector of parameters to be estimated, L(θ) represents the likelihood function, and MSE(θ) represents the mean square error metric. The negative log-likelihood function can be defined based on the probability density function of the Heston model, as shown in the previous response. The mean square error metric can be defined as follows:

    MSE(θ) = (1/N) * Σ_i [(P_obs,i - P_pred,i(θ))^2]

    where P_obs,i represents the i-th observed option price, P_pred,i(θ) represents the corresponding price predicted by the Heston model, and N represents the total number of observations.

    The optimization algorithm can then be used to find the values of θ that minimize the objective function f(θ). This approach allows for a balanced consideration of both the likelihood of the model and the accuracy of the predicted option prices. The choice of the weighting factor w depends on the specific problem at hand and the relative importance of the likelihood versus the MSE metric.
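    The weighted objective is straightforward to wire up with SciPy. Here is a minimal sketch using stand-in quadratics for the two terms (the real negative log-likelihood and pricing MSE would be substituted; all names are illustrative only):

```python
import numpy as np
from scipy.optimize import minimize

def make_objective(neg_log_lik, mse, w):
    """f(theta) = w * (-ln L(theta)) + (1 - w) * MSE(theta), with 0 <= w <= 1."""
    return lambda theta: w * neg_log_lik(theta) + (1.0 - w) * mse(theta)

# Stand-ins: each term is minimized at a different theta, so the weight
# visibly trades one criterion off against the other.
neg_log_lik = lambda th: (th[0] - 1.0) ** 2   # pretend MLE optimum at theta = 1
mse = lambda th: (th[0] - 3.0) ** 2           # pretend MSE optimum at theta = 3

f = make_objective(neg_log_lik, mse, w=0.5)
res = minimize(f, x0=np.array([0.0]))
# With equal weights the minimizer sits midway between the two optima (theta = 2).
```

    Sweeping w from 1 to 0 traces the path from the pure-MLE fit to the pure-MSE fit, which is a quick way to see how sensitive the calibrated parameters are to the weighting.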
     
  3. destriero


    The structure is the model.
     
  4. M.W.


    Is there anyone who still uses the Heston model for volatility pricing purposes? o_O

    Back in the day when I priced fixed income derivatives SABR was all the rage and predominantly used in the pricing of caps, floors, and swaptions, as well as many structured notes.
     
  5. This is not a bad question, but the posted answer is idiotic. Is this a chatgpt answer?

    In this case the instruction to chatgpt is nonsensical in itself, and the chatgpt response is worse. In situations where a least-squares fit is appropriate, MSE and ML are the same; otherwise, ML is preferred for a host of good reasons.

    Something tells me that chatgpt has no ability to decipher a skewlock. But that would not generally prevent it from coming up with a [misleading] answer. Every time I ask it anything "difficult," it returns an answer that is worse than simply wrong: it is actively misleading. E.g. any question regarding NK DSGE models.
     
  6. The basic idea is still valid... dual SDEs, modeling a time series as a 2D Ito process... this is in active use in finance.
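    For concreteness, the 2D Ito process in question is the coupled Heston pair dS = rS dt + sqrt(v) S dW1, dv = kappa(theta - v) dt + sigma sqrt(v) dW2, with corr(dW1, dW2) = rho. A minimal full-truncation Euler-Maruyama sketch (parameter values and names are mine, purely for illustration):

```python
import numpy as np

def simulate_heston(S0, v0, r, kappa, theta, sigma, rho, T, n_steps, n_paths, seed=0):
    """Full-truncation Euler scheme for the coupled (S, v) SDEs."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.full(n_paths, S0)
    v = np.full(n_paths, v0)
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        # Correlate the two Brownian increments with coefficient rho
        z2 = rho * z1 + np.sqrt(1.0 - rho ** 2) * rng.standard_normal(n_paths)
        vp = np.maximum(v, 0.0)  # truncate negative variance before taking sqrt
        S = S * np.exp((r - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1)
        v = v + kappa * (theta - vp) * dt + sigma * np.sqrt(vp * dt) * z2
    return S, v
```

    The log-Euler step for S keeps the discounted price an exact martingale per step, which makes Monte Carlo pricing on these paths unbiased in that dimension even at coarse time steps.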
     
    Kevin, yes, maybe I worded it wrong: I want to combine the maximum likelihood of the sample-path realization with the MSE of the latest option prices. I'm aware MLE and MSE are equivalent in the way you mention.

    I was really amazed at its ability to compute a formula for the inverse of a compensator, though.

    I'm going about the process of verifying the formula now, but I'm pretty sure it's correct. Here was the prompt I gave it:
    "please read and summarize the article about Hawkes processes at https://vixra.org/abs/1211.0094"
    then after a bit of back and forth of pasting snippets of the article manually to it, I asked it
    "find a closed-form expression for the inverse compensator involving the LambertW function for P>=2"
    and this is what it generated, no modifications
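    I can't reproduce what it generated here, but for reference the Lambert-W trick is standard in the exponential-kernel case: between events the compensator takes the form Lambda(t) = mu*t + c*(1 - exp(-beta*t)), and inverting it (e.g. for simulation by inverse transform) has a closed form in the principal branch of W. A sketch under that assumption, which may differ from the P >= 2 setting in the article:

```python
import numpy as np
from scipy.special import lambertw

def compensator(t, mu, beta, c):
    """Lambda(t) = mu*t + c*(1 - exp(-beta*t)) between events (exponential kernel)."""
    return mu * t + c * (1.0 - np.exp(-beta * t))

def inverse_compensator(x, mu, beta, c):
    """Closed-form solution of Lambda(t) = x via the principal branch of Lambert W.

    Solving mu*t - c*exp(-beta*t) = x - c gives
    t = (x - c)/mu + W((c*beta/mu) * exp(-beta*(x - c)/mu)) / beta.
    """
    a = (x - c) / mu
    z = (c * beta / mu) * np.exp(-beta * a)
    return a + lambertw(z).real / beta
```

    The argument z is positive whenever c > 0, so the principal branch of `scipy.special.lambertw` is the right one and the `.real` cast only drops a zero imaginary part.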
     
  8. yes I know, I wrote a general-purpose SDE integrator using an arbitrary-precision library. I finally understand the paradigm after first wondering wtf all this crap meant back in 2007
     
  9. M.W.


    Of course the idea is still valid, especially for vanilla options. It just surprises me that no better model has come about in the past 10 years.

     
  10. I don't know how well this method works. I prompted chatgpt like this:

    Please read the document at https://ocw.mit.edu/courses/18-s096...de23e5c59c04512954a8b_MIT18_S096F13_pset3.pdf and answer question 2(b).

    Here is her response:

    Code:
    Sure, I'd be happy to help!
    
    In question 2(b) of the document you provided, we are asked to find the
    expected value and variance of the maximum of two random variables.
    
    [snip] ...
    
    In summary, we have found that the expected value of the maximum of two
    iid random variables X and Y is twice the expected value of X (or Y), and
    the variance of the maximum is twice the variance of X (or Y).
    

    This response of course has absolutely nothing to do with the query posed.
    Question 2(b) involves Sherman-Morrison-Woodbury and LOO cross validation betas, not expected values of iid random variables.
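    For anyone curious what 2(b) actually involves: Sherman-Morrison-Woodbury is what lets you recover every leave-one-out regression residual from a single fit, via e_i / (1 - h_ii), where h_ii is the hat-matrix diagonal. A quick numerical check of that identity on synthetic data (nothing below is from the pset itself):

```python
import numpy as np

rng = np.random.default_rng(42)
n, p = 50, 3
X = rng.standard_normal((n, p))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(n)

# One full fit: residuals and the hat-matrix diagonal
beta = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ beta
H = X @ np.linalg.solve(X.T @ X, X.T)
loo_shortcut = e / (1.0 - np.diag(H))   # SMW-derived LOO residuals, no refitting

# Brute force: refit n times, each time dropping one row
loo_brute = np.empty(n)
for i in range(n):
    mask = np.arange(n) != i
    b = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]
    loo_brute[i] = y[i] - X[i] @ b
```

    The two vectors agree to machine precision, which is exactly the point of the exercise: n-fold cross validation for OLS costs one fit, not n.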

    Note also the tone of confidence. Chatgpt seems never to answer "I don't know" or "I'm not that good with linear algebra so I can't answer your question." In this way she resembles the biggest bullshitters (aka best salesmen) I have come across in my years in this field.

    I've attached the relevant document in case anyone wants to verify this or might
    want to venture a guess on the genesis of such a non-answer.
     
    Last edited: Mar 2, 2023