What is the correct way to analyze...

Discussion in 'Risk Management' started by nitro, Oct 10, 2010.

  1. You need to apply queueing theory with multiple input streams and distributions.
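    As a rough illustration of that suggestion only, here is a minimal sketch of a single-server queue fed by several independent Poisson input streams (NumPy; the arrival rates, service rate and horizon are made-up placeholders, not anything from this thread):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical setup: three independent Poisson input streams with
    # different rates, all feeding one server with exponential service times.
    arrival_rates = [0.5, 0.2, 0.1]      # events per unit time, per stream
    service_rate = 1.0                   # server throughput
    horizon = 1000.0

    # Merge the streams: a superposition of Poisson processes is Poisson.
    arrivals = np.sort(np.concatenate([
        np.cumsum(rng.exponential(1.0 / lam, size=int(2 * lam * horizon)))
        for lam in arrival_rates
    ]))
    arrivals = arrivals[arrivals < horizon]

    # Simple single-server FIFO simulation: track each job's waiting time.
    departure = 0.0
    waits = []
    for t in arrivals:
        start = max(t, departure)              # wait if the server is busy
        departure = start + rng.exponential(1.0 / service_rate)
        waits.append(start - t)

    print("mean wait:", np.mean(waits))
    ```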
     
    #11     Oct 26, 2010
  2. nitro

    Thanks for the suggestions. I have some studying to do...
     
    #12     Oct 27, 2010
  3. He wants to know in what ways the inputs affect his forecast, not just get a forecast. Maybe you should read the question carefully before answering. His question is very advanced and it can possibly be answered by dynamic stochastic estimation. He needs to get a model of the system, not apply regression to get a forecast.
     
    #13     Oct 28, 2010
  4. nLepwa

    Entering zeros for missing values is probably the worst option.

    You can try filling in the arithmetic means of the columns/rows instead as a first solution.
    I'm not sure about your exact problem, but filling the missing values with an SVD-based reconstruction is probably the better method.
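    A minimal sketch of that idea (assuming a NumPy matrix X with NaNs marking the missing entries; the rank and iteration count are made-up choices):

    ```python
    import numpy as np

    def svd_impute(X, rank=2, iters=50):
        """Fill missing entries (NaN) of X: start from column means, then
        repeatedly project onto a rank-`rank` SVD approximation."""
        X = X.astype(float).copy()
        missing = np.isnan(X)
        col_means = np.nanmean(X, axis=0)
        X[missing] = np.take(col_means, np.where(missing)[1])   # mean fill first
        for _ in range(iters):
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
            X[missing] = low_rank[missing]     # refill only the missing cells
        return X
    ```

    Whether the SVD refit actually beats a plain mean fill depends on how much low-rank structure the data has.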

    Ninna
     
    #14     Oct 28, 2010
  5. kut2k2

    Regression is a model, and so is ARIMA(X). I never use regression myself precisely because it assumes a model for the data, and all I want to do is remove the noise from my time series. But I'm not dealing with exogenous inputs like the OP is. At least ARIMAX(p,d,q,b) doesn't assume a model upfront; it finds the best fitting model for the data.
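    For what it's worth, a minimal sketch of fitting an ARIMA model with exogenous inputs in Python (statsmodels' SARIMAX accepts an exog matrix; the series y, the inputs X and the (p, d, q) order here are placeholders, not a recommendation):

    ```python
    import numpy as np
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    rng = np.random.default_rng(1)
    y = rng.normal(size=200).cumsum()          # placeholder target series
    X = rng.normal(size=(200, 3))              # placeholder exogenous inputs

    model = SARIMAX(y, exog=X, order=(1, 1, 1))
    result = model.fit(disp=False)
    print(result.summary())                    # coefficients on each exogenous input

    # A forecast needs the future exogenous values as well.
    X_next = rng.normal(size=(1, 3))
    print(result.forecast(steps=1, exog=X_next))
    ```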
     
    #15     Oct 28, 2010
  6. nitro

    You know, what is interesting is that this is nothing more than a calculus problem.

    All we want is the partial derivatives of each Xn with respect to the others, transformed via the Jacobian into a more suitable coordinate system.
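    Written out, the object in question is just the Jacobian of the inputs with respect to one another (a sketch, treating each Xi as a differentiable function of the others):

    ```latex
    J_{ij} = \frac{\partial X_i}{\partial X_j}, \qquad
    J =
    \begin{pmatrix}
      \frac{\partial X_1}{\partial X_1} & \cdots & \frac{\partial X_1}{\partial X_n} \\
      \vdots & \ddots & \vdots \\
      \frac{\partial X_n}{\partial X_1} & \cdots & \frac{\partial X_n}{\partial X_n}
    \end{pmatrix}
    ```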

    What confuses me is the statistical/probabilistic aspect of this problem.
     
    #16     Oct 28, 2010
  7. kut2k2

    OK, you described the Xi's as discrete events; now you're talking about them as continuous variables. Which is it?

    If they are discrete, I still think ARIMAX is the way to go, but if interaction is a factor, your model should look more like

    ... + c1*Z1 + ... + cb*Zb + g[1,2]*Z1*Z2 + ... + g[b-1,b]*Z[b-1]*Zb
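    A minimal sketch of building those pairwise interaction regressors before handing them to ARIMAX (NumPy; Z is assumed to be the n-by-b matrix of exogenous event series):

    ```python
    import numpy as np
    from itertools import combinations

    def add_pairwise_interactions(Z):
        """Return Z with all pairwise product columns Z[:, i] * Z[:, j] appended,
        i.e. the g[i, j] * Zi * Zj terms in the model above."""
        pairs = [Z[:, i] * Z[:, j] for i, j in combinations(range(Z.shape[1]), 2)]
        return np.column_stack([Z] + pairs)

    # e.g. b = 3 exogenous series over 100 observations -> 3 + 3 columns
    Z = np.random.default_rng(2).normal(size=(100, 3))
    print(add_pairwise_interactions(Z).shape)   # (100, 6)
    ```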
     
    #17     Oct 28, 2010
  8. nitro

    They are discrete. Replace the partial derivatives with finite differences...
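    A rough sketch of what "partial derivatives via finite differences" could look like on discrete data (NumPy; X is assumed to be an observations-by-variables array, and regressing pairwise changes is just one simple way to estimate the sensitivities):

    ```python
    import numpy as np

    def finite_difference_sensitivities(X):
        """Estimate S[i, j] ~ dX_i / dX_j from first differences of an
        (n_obs, n_vars) array X by regressing the changes in X_i on the
        changes in X_j, one pair at a time."""
        dX = np.diff(X, axis=0)        # discrete analogue of the differentials
        n = dX.shape[1]
        S = np.eye(n)
        for i in range(n):
            for j in range(n):
                if i != j:
                    denom = dX[:, j] @ dX[:, j]
                    S[i, j] = (dX[:, i] @ dX[:, j]) / denom if denom > 0 else 0.0
        return S
    ```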
     
    #18     Oct 28, 2010
  9. Embarking on getting an answer to the questions (modelling an operating system with great utility) may not lead to a forecast based upon the newest events. It is very logical to first consider the weighted symmetry about the Present.

    After some study and consideration, it may even be possible to discard forecasting altogether.

    When a person examines the generalizations of behavioral finance, it appears that the better approach is to consider events in terms of their pre and post characteristics.

    Events have few characteristics that are required for linking events. Events are isolated.

    Events, being more or less important, suggest that there is a hierarchy of relationships of relative event importance.

    At some point, this resulting continuity of events is revealed. By knowing the few characteristics and their relative importance in a hierarchy, achieving the goal of always knowing the "effect of current events" on what is anticipated based on continuity becomes an SOP. Obviously the basis becomes finite math in lieu of any probabilistic information theory.

    The model would basically emphasize an order of events where Mandelbrot stuff would allow an integrated weighted building theory to be used.

    It is anything BUT a stochastic problem.

    Previously I suggested a way to get into the ballpark using stats. At least that afforded a stepping stone to get on a path away from probabilistic stuff and toward the behavioral context of market operation.

    Markets have a specified logical granularity; why not drill down to that and discover how few characteristics are involved in conducting any market activity?
     
    #19     Oct 29, 2010