Monday, August 18, 2014

Black-Scholes model: "The first condition is that of detailed balance"

  • [PDF] Black-Scholes and the Volatility Surface

    www.columbia.edu/.../BlackScholesCtsTime.pdf
    Columbia University
    We will also discuss the weaknesses of the Black-Scholes model, i.e. ... us naturally to the concept of the volatility surface which we will describe in some detail. ..... Gamma scalping is the process of regularly re-balancing your options portfolio ...

  • Black–Scholes model - Wikipedia, the free encyclopedia

    en.wikipedia.org/wiki/BlackScholes_model
    Wikipedia
    The Black–Scholes /ˌblæk ˈʃoʊlz/ or Black–Scholes–Merton model is a ..... pricing: the Q world" under Mathematical finance; for detail, once again, see Hull.
  • [PDF]

    Do Physical Analogies of Stock Market Crashes Make Sense?

    dl.tufts.edu/file_assets/tufts:UA005.022.025.00001
    Apr 11, 2013 - The popular Black-Scholes model and, in particular, the Nobel prize-winning Black-Scholes ..... The first condition is that of detailed balance.

  • 3.1.4 Introduction to Monte Carlo Methods and the Metropolis Algorithm

    Recall our definition of a macroscopic variable as a weighted average of a particular quantity over the available microstates of the system (equation 3.1), where the weights were shown to be determined by a normalized Boltzmann factor. Unfortunately, we have seen that the sheer number of available microstates makes this average exceedingly difficult to compute. Luckily, as we will now show, we can also think of a macroscopic variable as an average over time. This is the key insight that allows the use of Monte Carlo methods, which will later be used to simulate the Ising model in several dimensions. As we will use the correlation data from these simulations to provide insight into the spatial structure of the stock market, the discussion of Monte Carlo methods is important to this thesis. This discussion follows [35], and that text is highly recommended to anyone interested in a more detailed treatment.

    Consider first the following question: is it possible to choose a finite sequence of states $\mu_1, \ldots, \mu_n$ such that the arithmetic mean of the macroscopic variable $Q$ over these states gives a good estimate of the actual expectation value, defined by equation 3.1? Loosely speaking, to do this we would need some assurance that this finite sequence of states is in some way representative of the full set of available states. On the surface, this might seem like a difficult problem. For example, a good computer will be able to sample about $10^8$ states in a few hours [35], while the Ising model, one of the simplest models in statistical mechanics, on, say, a $10 \times 10 \times 10$ lattice will have $2^{10 \times 10 \times 10} \sim 10^{301}$ possible states. However, we should be encouraged by the following fact: a system with a large number of components spends the vast majority of its time in a very small number of states.

    Consider for a moment a real system consisting of a box of particles that is in thermal contact with a reservoir. Suppose that the system and the reservoir are allowed to exchange energy but not particles. In other words, this system falls into the canonical ensemble. In equilibrium, the system and the reservoir will have the same temperature, but the energy will fluctuate in time. To estimate a macroscopic quantity (like the energy) we may average the value of this quantity over time. In a Monte Carlo simulation we do essentially this, except we map our system onto a computer. In both cases we choose some initial state and watch the system evolve; the longer we wait, the more accurate our estimates become. Of course, on a computer, we measure time in terms of iterations of our Monte Carlo algorithm (this idea will be refined later). The main difference is that on a computer we must completely specify the system by a model, which inevitably introduces some degree of simplification.
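    To make the time-averaging idea concrete, here is a minimal sketch (not from the thesis) of estimating a macroscopic variable as an average over Monte Carlo iterations. The names initial_state, step, and observable are hypothetical placeholders for whatever model is being simulated, such as the Ising model discussed later.

        # Minimal sketch: estimate <Q> as a time average over Monte Carlo iterations.
        # `initial_state`, `step`, and `observable` are hypothetical placeholders.
        def monte_carlo_average(initial_state, step, observable, n_steps, n_burn_in):
            """Average `observable` over `n_steps` iterations, discarding the first
            `n_burn_in` iterations so the chain can relax toward equilibrium."""
            state = initial_state
            for _ in range(n_burn_in):      # let the system approach equilibrium
                state = step(state)
            total = 0.0
            for _ in range(n_steps):        # then average over "time"
                state = step(state)
                total += observable(state)
            return total / n_steps

    The longer the run (the larger n_steps), the more accurate the estimate, exactly as for the physical time average described above.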
    Our task is then to specify the time evolution of this model in such a way that once the model system reaches equilibrium, states will appear with probability according to the Boltzmann distribution. To do this, most Monte Carlo simulations define a stochastic time evolution according to a Markov process, a type of stochastic process in which the probability of a transition between a state $\mu$ and a state $\nu$ depends only on the states $\mu$ and $\nu$, and not on any of the previous states. We will denote this transition probability by $P(\mu \to \nu)$. For a Markov process to generate a sequence of states according to the Boltzmann distribution, two conditions must be satisfied.

    The first condition is that of detailed balance. Using our notation from earlier sections, we will let $w_\mu(t)$ denote the probability of being in state $\mu$ at time $t$. Therefore, $w_\mu(t) P(\mu \to \nu)$ is the rate at which the system transitions out of the state $\mu$ and into the state $\nu$, while $w_\nu(t) P(\nu \to \mu)$ is the rate at which the system transitions into the state $\mu$ from the state $\nu$. The state probabilities $w_\mu(t)$ then evolve according to the master equation

    $$ w_\mu(t+1) - w_\mu(t) = \sum_\nu \left[ w_\nu(t) P(\nu \to \mu) - w_\mu(t) P(\mu \to \nu) \right] \qquad (3.8) $$

    Recall that we defined a system to be in equilibrium when $w_\mu(t)$ no longer depends on time, for all states $\mu$. We wrote these equilibrium state probabilities as $p_\mu$. Thus, in equilibrium, the master equation becomes

    $$ \sum_\nu p_\mu P(\mu \to \nu) = \sum_\nu p_\nu P(\nu \to \mu). \qquad (3.9) $$

    In order to rule out the possibility of limit cycles in our Markov process, we must further assume that these sums agree term-wise. That is, for all $\mu$ and $\nu$,

    $$ p_\mu P(\mu \to \nu) = p_\nu P(\nu \to \mu). \qquad (3.10) $$

    This is the condition of detailed balance. Since we want $p_\mu = \frac{1}{Z} e^{-E_\mu / kT}$, we can rewrite the condition of detailed balance as

    $$ \frac{P(\mu \to \nu)}{P(\nu \to \mu)} = \frac{p_\nu}{p_\mu} = e^{-\beta (E_\nu - E_\mu)}, \qquad (3.11) $$

    where $\beta = 1/kT$. The second condition is that each state must be accessible from each other state. This is the condition of ergodicity. Note that it is not necessary for each state to be accessible in a single transition. Indeed, most Monte Carlo methods set the majority of transition probabilities to zero [35]. Rather, it must only be that there is at least one path of non-zero probability between any pair of states. Assuming a valid probability distribution (that is, one that sums to one), if the condition of ergodicity as well as equation 3.11 are satisfied, then the equilibrium distribution will be the Boltzmann distribution [35].
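    As a concrete check of equations 3.10 and 3.11 (not from the thesis), one can build a small Markov chain whose transition probabilities satisfy detailed balance and verify numerically that it settles into the Boltzmann distribution. The three energy levels, the temperature, and the min(1, e^{-beta dE}) acceptance choice, which anticipates the Metropolis rule discussed below, are all illustrative assumptions.

        import numpy as np

        E = np.array([0.0, 1.0, 2.5])    # energies E_mu (arbitrary choices)
        beta = 1.0                       # 1/kT
        p = np.exp(-beta * E)
        p /= p.sum()                     # Boltzmann probabilities e^{-beta E_mu} / Z

        # Transition matrix: propose each other state with probability 1/2,
        # accept with min(1, e^{-beta (E_nu - E_mu)}) so equation 3.11 holds.
        n = len(E)
        P = np.zeros((n, n))
        for mu in range(n):
            for nu in range(n):
                if mu != nu:
                    P[mu, nu] = 0.5 * min(1.0, np.exp(-beta * (E[nu] - E[mu])))
            P[mu, mu] = 1.0 - P[mu].sum()    # probability of staying put

        # Detailed balance (equation 3.10): p_mu P(mu->nu) == p_nu P(nu->mu).
        flows = p[:, None] * P
        assert np.allclose(flows, flows.T)

        # Iterating the master equation from any starting distribution
        # converges to the Boltzmann distribution.
        w = np.array([1.0, 0.0, 0.0])
        for _ in range(1000):
            w = w @ P
        print(w, p)    # the two distributions agree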
    At this point we have still not specified a way to construct a Markov process satisfying the conditions above. To do this, we first break the transition probability into a product of two other probabilities:

    $$ P(\mu \to \nu) = g(\mu \to \nu) A(\mu \to \nu). \qquad (3.12) $$

    The first term, $g(\mu \to \nu)$, is called the selection probability, and it gives the probability of selecting a target state $\nu$ while the system is in state $\mu$. Once the target state has been selected, the system will actually transition to it with probability $A(\mu \to \nu)$, called the acceptance probability. If the system rejects the target state $\nu$, it will remain in the current state $\mu$. An ideal algorithm is one in which the selection probabilities are equal to the transition probabilities, making the acceptance ratio unity; a good algorithm is one that approximates this ideal as closely as possible. The most widely used solution to this problem is the Metropolis algorithm.
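    The decomposition in equation 3.12 translates directly into the propose/accept structure of a single simulation step. A minimal sketch, where the select and acceptance callables are hypothetical placeholders for a concrete model:

        import random

        def markov_step(state, select, acceptance):
            """One step of a Markov chain built from equation 3.12: select a
            target with probability g(mu->nu), accept it with probability
            A(mu->nu), and otherwise remain in the current state."""
            target = select(state)                    # sample nu ~ g(mu -> .)
            if random.random() < acceptance(state, target):
                return target                         # transition accepted
            return state                              # rejected; stay in mu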
    Recall that setting some of the transition probabilities to zero will not violate the condition of ergodicity, as long as there is at least one path of non-zero probability between each pair of states. In the so-called single-spin-flip dynamics, we set the transition probability to zero for all states except those differing by a single spin. The Metropolis algorithm then sets the probability of selecting one of these states, $g(\mu \to \nu)$, to be the same number for each pair $\mu$ and $\nu$, where $\mu$ and $\nu$ differ by a single spin. For a system of $N$ spins, this assumption, along with the normalization of the probability distribution, gives
    $$ g(\mu \to \nu) = \frac{1}{N}. \qquad (3.13) $$
    Using equations 3.11 and 3.12, we have

    $$ \frac{A(\mu \to \nu)}{A(\nu \to \mu)} = e^{-\beta (E_\nu - E_\mu)}. \qquad (3.14) $$
    Recall that in a more efficient algorithm, the acceptance ratios are near unity. One way to exploit this fact is to set the larger acceptance ratio to unity and adjust the other accordingly; we can do this since it is only the ratio of acceptance probabilities that matters. The Metropolis algorithm does exactly this, giving

    $$ A(\mu \to \nu) = \begin{cases} e^{-\beta (E_\nu - E_\mu)} & \text{if } E_\nu - E_\mu > 0 \\ 1 & \text{otherwise.} \end{cases} \qquad (3.15) $$

    This equation is the distinguishing feature of the Metropolis algorithm. We will discuss our implementation of the Metropolis algorithm in section 5.3.
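    Putting equations 3.13 and 3.15 together, a single sweep of single-spin-flip Metropolis dynamics for the Ising model can be sketched as follows. This is only an illustration, assuming a 2D square lattice with periodic boundaries, nearest-neighbor coupling J = 1, and no external field; the thesis's own implementation is the one described in its section 5.3.

        import numpy as np

        def metropolis_sweep(spins, beta, rng):
            """One sweep (N attempted flips) of single-spin-flip Metropolis
            dynamics for a 2D Ising model with J = 1, periodic boundaries,
            and no external field."""
            L = spins.shape[0]
            for _ in range(L * L):
                # Selection (equation 3.13): pick a site uniformly, g = 1/N.
                i, j = rng.integers(L), rng.integers(L)
                # Energy change from flipping spin (i, j):
                # dE = 2 * s_ij * (sum of the four nearest neighbors).
                neighbors = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j] +
                             spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
                dE = 2.0 * spins[i, j] * neighbors
                # Acceptance (equation 3.15): always accept if dE <= 0,
                # otherwise accept with probability e^{-beta dE}.
                if dE <= 0 or rng.random() < np.exp(-beta * dE):
                    spins[i, j] *= -1
            return spins

        # Usage: a random 20x20 configuration equilibrated at beta = 0.5.
        rng = np.random.default_rng(0)
        spins = rng.choice(np.array([-1, 1]), size=(20, 20))
        for _ in range(100):
            spins = metropolis_sweep(spins, beta=0.5, rng=rng)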

