Friday, March 20, 2015


[PDF] On the Origins of Truncated Lévy Flights - UFRGS
www.ufrgs.br/PPGE/pcientifica/2003_03.pdf
by A Figueiredo - Cited by 19 - Related articles
Abstract. We show that truncated Lévy flights appear due to the presence of particular features of ... Yet Black Monday is more than 34 standard deviations [3].
  • Fat-tailed distribution - Wikipedia, the free encyclopedia

    en.wikipedia.org/wiki/Fat-tailed_distribution
    Levy flight from a Cauchy Distribution compared to Brownian Motion (below). ... have an expected return, after one year, that is five times its standard deviation.
  • Asymmetric Lévy flight in financial ratios

    www.ncbi.nlm.nih.gov › ... › PubMed Central (PMC)
    by B Podobnik - 2011 - Cited by 33 - Related articles
    Oct 17, 2011 - Most tests and tools used in statistics assume that any errors in a financial model are Gaussian distributed, and it is a common practice in ...
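The snippets above contrast a Lévy flight driven by Cauchy-distributed steps with ordinary Brownian motion. A minimal simulation (my own sketch, not taken from any of the linked sources) makes the difference concrete: measured in empirical standard deviations, the Cauchy walk's largest single step dwarfs anything a Gaussian walk produces, which is the point behind the "Black Monday is more than 34 standard deviations" remark.

```python
# Compare a Gaussian random walk (Brownian-motion-like) with a walk whose
# steps are Cauchy distributed (a simple Levy flight): the Cauchy walk is
# dominated by rare, enormous jumps.
import numpy as np

rng = np.random.default_rng(0)
n_steps = 10_000

for name, steps in [("Gaussian", rng.normal(size=n_steps)),
                    ("Cauchy", rng.standard_cauchy(size=n_steps))]:
    walk = np.cumsum(steps)
    # Largest single step, expressed in units of the *empirical* step spread.
    extreme = np.abs(steps).max() / steps.std()
    print(f"{name:8s} walk: final position {walk[-1]:12.1f}, "
          f"largest step = {extreme:7.1f} empirical standard deviations")
```

The empirical standard deviation is itself misleading for the Cauchy case, since the true variance is infinite, which is exactly why Gaussian-calibrated risk models understate such events.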


  • Why do we assume that the error is normally distributed?

    I wonder why we use the Gaussian assumption when modelling the error. In Stanford's ML course, Prof. Ng describes it basically in two ways:
    1. It is mathematically convenient (it's related to least-squares fitting and is easy to solve with the pseudoinverse; see the sketch below).
    2. Due to the Central Limit Theorem, we may assume that there are lots of underlying factors affecting the process, and the sum of these individual errors will tend to behave like a zero-mean normal distribution. In practice, it seems to be so.
    I'm actually interested in the second part. The Central Limit Theorem works for iid samples as far as I know, but we cannot guarantee that the underlying samples are iid.
    Do you have any ideas about the Gaussian assumption of the error?
        
    What setting are you talking about? Classification, regression, or something more general? –  tdc Feb 9 '12 at 14:24
        
    I asked the question for the general case. Most of the stories start with the Gaussian error assumption. But, personally, my own interest is in matrix factorizations and linear model solutions (so, say, regression). –  petrichor Feb 9 '12 at 14:30
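Before the answer, a minimal sketch of the two points raised in the question (made-up data, variable names of my choosing): the pseudoinverse gives the closed-form least-squares fit, and an error built as a sum of many small independent effects comes out roughly Gaussian.

```python
# Point 1: least squares has a closed-form solution via the pseudoinverse.
# Point 2: an "error" built as a sum of many small independent (non-Gaussian)
#          effects ends up looking roughly Gaussian -- the CLT intuition.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# --- point 2: sum many tiny uniform shocks to build the noise term ---
small_effects = rng.uniform(-0.1, 0.1, size=(n, 50))
noise = small_effects.sum(axis=1)                  # approximately Gaussian
skew = ((noise - noise.mean()) ** 3).mean() / noise.std() ** 3
print("noise skewness (near 0 for a Gaussian): %.3f" % skew)

# --- point 1: fit y = X w + noise with the pseudoinverse ---
X = np.column_stack([np.ones(n), rng.normal(size=n)])
w_true = np.array([1.0, 2.0])
y = X @ w_true + noise
w_hat = np.linalg.pinv(X) @ y                      # closed-form least squares
print("true weights:", w_true, " estimated:", w_hat.round(3))
```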

    1 Answer

    I think you've basically hit the nail on the head in the question, but I'll see if I can add something anyway. I'm going to answer this in a bit of a roundabout way ...
    The field of Robust Statistics examines the question of what to do when the Gaussian assumption fails (in the sense that there are outliers):
    "It is often assumed that the data errors are normally distributed, at least approximately, or that the central limit theorem can be relied on to produce normally distributed estimates. Unfortunately, when there are outliers in the data, classical methods often have very poor performance."
    These have been applied in ML too; for example, in Mika et al. (2001), A Mathematical Programming Approach to the Kernel Fisher Algorithm, they describe how Huber's robust loss can be used with KFDA (along with other loss functions). Of course this is a classification loss, but KFDA is closely related to the Relevance Vector Machine (see section 4 of the Mika paper).
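As a rough illustration of what a robust loss buys you (an ordinary 1-D regression with a single gross outlier, not the kernel Fisher setting of the Mika et al. paper):

```python
# Toy comparison of the squared loss vs. Huber's loss on 1-D regression data
# that contains a single gross outlier.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=x.size)
y[-1] += 30.0                                  # inject one gross outlier

def huber(r, delta=1.0):
    # Quadratic near zero, linear in the tails.
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r ** 2, delta * (a - 0.5 * delta))

def fit(loss):
    objective = lambda w: np.sum(loss(y - (w[0] * x + w[1])))
    return minimize(objective, x0=[0.0, 0.0]).x

print("squared loss (slope, intercept):", fit(lambda r: 0.5 * r ** 2).round(3))
print("Huber loss   (slope, intercept):", fit(huber).round(3))
```

With the squared loss the outlier drags the fit noticeably; the Huber fit stays close to the true coefficients because large residuals are only penalised linearly.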
    As implied in the question, there is a close connection between loss functions and Bayesian error models (see here for a discussion).
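One way to see that connection: under an i.i.d. noise model, minimising the negative log-likelihood is the same as minimising the corresponding loss, up to additive constants. A quick numerical check (hypothetical residuals, my own choice of scale) matching Gaussian noise to the squared loss and Laplace noise to the absolute loss:

```python
# Gaussian noise model <-> squared loss, Laplace noise model <-> absolute loss,
# up to additive constants that do not depend on the residuals.
import numpy as np
from scipy import stats

r = np.array([-1.2, 0.3, 0.8, -0.1, 2.5])     # hypothetical residuals

nll_gauss   = -stats.norm.logpdf(r, scale=1.0).sum()
nll_laplace = -stats.laplace.logpdf(r, scale=1.0).sum()

# Rebuild the same numbers from the familiar losses plus their constants.
squared  = 0.5 * np.sum(r ** 2) + r.size * 0.5 * np.log(2 * np.pi)
absolute = np.sum(np.abs(r)) + r.size * np.log(2.0)

print(nll_gauss, squared)      # identical
print(nll_laplace, absolute)   # identical
```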
    However it tends to be the case that as soon as you start incorporating "funky" loss functions, optimisation becomes tough (note that this happens in the Bayesian world too). So in many cases people resort to standard loss functions that are easy to optimise, and instead do extra pre-processing to ensure that the data conforms to the model.
    The other point that you mention is that the CLT only applies to samples that are IID. This is true, but then the assumptions (and the accompanying analysis) of most algorithms are the same. When you start looking at non-IID data, things get a lot trickier. One example is temporal dependence, in which case the typical approach is to assume that the dependence only spans a certain window, so that samples can be considered approximately IID outside of this window (see for example this brilliant but tough paper, Chromatic PAC-Bayes Bounds for Non-IID Data: Applications to Ranking and Stationary β-Mixing Processes), after which normal analysis can be applied.
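A toy sanity check of that windowing intuition (my own construction, not the one from the β-mixing paper): an AR(1) series is heavily autocorrelated, but its means over long disjoint blocks are nearly uncorrelated, so treating the blocks as approximately independent is reasonable.

```python
# Block means of a strongly autocorrelated AR(1) series are nearly
# uncorrelated once the blocks are much longer than the correlation length.
import numpy as np

rng = np.random.default_rng(1)
phi, n = 0.9, 200_000
eps = rng.normal(size=n)
x = np.empty(n)
x[0] = eps[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]             # AR(1): heavy serial dependence

block = 1_000                                  # window >> correlation length
means = x.reshape(-1, block).mean(axis=1)

lag1 = lambda z: np.corrcoef(z[:-1], z[1:])[0, 1]
print("raw series  lag-1 autocorrelation: %.3f" % lag1(x))      # about 0.9
print("block means lag-1 autocorrelation: %.3f" % lag1(means))  # near 0
```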
    So, yes, it comes down in part to convenience, and in part because in the real world, most errors do look (roughly) Gaussian. One should of course always be careful when looking at a new problem to make sure that the assumptions aren't violated.
        
    +1 Thank you very much, especially for mentioning robust and non-robust statistics. I do observe that the median and the alpha-trimmed mean usually work better than the mean in practice, but I didn't know the theory behind them. –  petrichor Feb 10 '12 at 12:26
    Another convenience item associated with normally distributed data is that 0 correlation implies independence. –  AdamO Feb 23 '12 at 0:14
    The comment about IID-ness isn't quite right. There are (several) very general Central Limit Theorems that apply when results are independent but not identically distributed; see e.g. the Lindeberg CLT. There are also CLT results that don't even need independence; they can arise from exchangeable observations, for example. –  guest Feb 23 '12 at 6:02
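A quick simulation in the spirit of this comment (a rough check, not a proof): independent terms drawn from three different distributions still produce a standardised sum with roughly Gaussian tail mass.

```python
# Standardised sums of independent but NOT identically distributed terms
# (uniform, centred exponential, centred Bernoulli) still look roughly Gaussian.
import numpy as np

rng = np.random.default_rng(2)
reps, k = 20_000, 100                          # 3*k terms per sum

u = rng.uniform(-1.0, 1.0, size=(reps, k))
e = rng.exponential(1.0, size=(reps, k)) - 1.0
b = rng.integers(0, 2, size=(reps, k)) - 0.5

z = np.hstack([u, e, b]).sum(axis=1)
z = (z - z.mean()) / z.std()

# For a standard normal, P(|Z| > 2) is about 0.0455.
print("simulated P(|Z| > 2): %.4f" % (np.abs(z) > 2).mean())
```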
