http://www.caee.utexas.edu/prof/bhat/COURSES/LM_Draft_060131Final-060630.pdf
https://people.rit.edu/pnveme/pigf/Matrices/mat_det_1.html
Statistics on Manifolds with Applications to Modeling Shape ...
www.dam.brown.edu/people/.../ThesisOrenFreifeld.pdf
Brown University
... of “Statistics on Manifolds with Applications to Modeling Shape Deformations” ..... 5.1 Linear translation fails for a covariance expressed in a tangent space . ...... built from summing outer-products of vectors following the subtraction of the ...
Support vector machine for binary classification - MATLAB
www.mathworks.com/help/stats/classificationsvm-class.html
For nonlinear SVM, the algorithm forms a Gram matrix using the predictor matrix columns. The dual formalization replaces the inner product of the predictors with
MathWorks
Understanding Support Vector Machine Regression ...
www.mathworks.com/.../understanding-support-vector-mach...
Each element gi,j is equal to the inner product of the predictors as transformed by Φ. However, we do not need to know Φ, because we can use the kernel ...
MathWorks
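The two MathWorks snippets above describe the kernel trick: for nonlinear SVM, each Gram matrix entry g_ij = K(x_i, x_j) stands in for the inner product of the transformed predictors Φ(x_i)'Φ(x_j), so Φ itself is never needed. A minimal MATLAB sketch, with an assumed Gaussian kernel, illustrative random data X, and an assumed width sigma (none of these values come from the MathWorks pages):

% Sketch: Gram matrix for an assumed Gaussian (RBF) kernel, replacing
% the inner product of transformed predictors phi(xi)'*phi(xj).
X = randn(5, 3);          % 5 observations, 3 predictors (illustrative)
sigma = 1;                % kernel width (assumed)
n = size(X, 1);
G = zeros(n);             % Gram matrix
for i = 1:n
    for j = 1:n
        d = X(i,:) - X(j,:);
        G(i,j) = exp(-(d*d') / (2*sigma^2));   % K(xi, xj)
    end
end
% For a linear kernel, G reduces to X*X', the plain inner products.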
Choosing Between Multinomial Logit and Multinomial Probit ...
https://books.google.com/books?isbn=0549323767
2007
So for each choice j and individual i, U_ij = β_j x_i + ε_ij, where β_j x_i is the inner- ...
[PDF] Choosing Between Multinomial Logit and Multinomial Probit ...
https://cdr.lib.unc.edu/.../uuid:008129bb-c121-47ca-9671-3396eb655b2...
where β_j x_i is the inner-product of the predictors and their coefficients for choice j, and all of the ε_ij are independent and identically distributed by the type 1 ...

Matrices: Determinant - Second Order

A determinant is a special set of mathematical operations associated with a square array. The result of the operation is a scalar value for a numerical array. This array may be part of a matrix. The determinant is usually enclosed between a pair of vertical bars. The simplest determinant is defined for a 2 x 2 array and is called a determinant of second order. There is a difference between a determinant and a matrix: the determinant is associated with a defined set of arithmetic operations.
» A=[1 2; 3 4] % define A
A =
1 2
3 4
» det(A) % calculate determinant of A
ans =
-2
» syms b11 b12 b21 b22 % define symbolic variables
» B=[b11 b12; b21 b22] % define symbolic matrix B
B =
[ b11, b12]
[ b21, b22]
» det(B) % calculate determinant - check definition
ans =
b11*b22-b12*b21
fitcsvm - MIT
https://lost-contact.mit.edu/afs/pdc.kth.se/roots/ilse/v0.../fitcsvm.html
For nonlinear SVM, the algorithm forms a Gram matrix using the predictor matrix columns. The dual formalization replaces the inner product of the predictors with ...

Cosine Similarity, Pearson Correlation, Inner Products ...
https://ardoris.wordpress.com/.../cosine-similarity-pearson-correlation-in...
Aug 14, 2014 - Covariance Inner Product. The covariance between two vectors is defined as Cov ...

Bayesian Theory and Applications - Page 13 - Google Books Result
https://books.google.com/books?isbn=0191647004
Paul Damien, Petros Dellaportas, Nicholas G. Polson - 2013 - Mathematics
... with covariance inner product, and constructs the closure of the inner product space I(U), denoted [I(U)]. For any quantity Y ∈ [I(U)], the adjusted mean and ...

Bayes Linear Statistics, Theory and Methods
https://books.google.com/books?isbn=0470065672
Michael Goldstein, David Wooff - 2007 - Mathematics
... then we denote 〈C〉 with covariance inner product as [C], the (partial) belief structure over the base C. If C is an infinite collection, then we define [C] to be the ...

Handbook of Survey Research - Page 271 - Google Books Result
https://books.google.com/books?isbn=1483276309
Peter H. Rossi, James D Wright, Andy B. Anderson - 2013 - Social Science
... expressed as standardized deviations about means to unit variance Yes No Yes Correlation Cosine matrix matrix No Covariance Inner product matrix matrix ...

Semiparametric Theory and Missing Data
https://books.google.com/books?isbn=0387373454
Anastasios Tsiatis - 2007 - Mathematics
We shall refer to this inner product as the “covariance inner product.” This definition of inner product clearly satisfies the first three conditions of the definition ...

A comment on the choice of inner product
fourier.dur.ac.uk/stats/bd/blm1.online/node31.html
Within this belief structure, the covariance inner product is simply the adjustment by the unit constant, so that the inner product space that we have termed [B] ...

Adaptive Treatment Strategies in Practice: Planning Trials ...
https://books.google.com/books?isbn=1611974186
Michael R. Kosorok, Erica E. M. Moodie - 2015 - Medical
... by Y so that, under the covariance inner product, the residual Y − Ŷ is orthogonal to (functions in) A, X. We then consider the Hilbert space X of square integrable ...

Probabilistic Expert Systems - Page 16 - Google Books Result
https://books.google.com/books?isbn=0898713730
Glenn Shafer - 1996 - Computers
Here the normal belief function looks like an inner product (the dual of the covariance inner product) on a hyperplane, and combination amounts to intersecting ...

Asset Pricing Theory - Page 288 - Google Books Result
https://books.google.com/books?isbn=1400830141
Costis Skiadas - 2009 - Business & Economics
Uncorrelatedness can equivalently be thought of as orthogonality in the space L̂ ...

Handbook of Missing Data Methodology
https://books.google.com/books?isbn=1439854610
Geert Molenberghs, Garrett Fitzmaurice, Michael G. Kenward - 2014 - Mathematics
... space of all q-dimensional, mean-zero functions h(V) with E{h(V)h(V)} < ∞ equipped with the covariance inner product. As we noted at the end of Section 8.1
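Several of the sources above use the same definition: the covariance inner product of two random quantities is ⟨X, Y⟩ = Cov(X, Y) = E[(X − EX)(Y − EY)], so orthogonality under this inner product is exactly uncorrelatedness. A small MATLAB sketch on assumed random samples (the vectors and sample size are illustrative, not taken from any of the cited books):

% Sketch: the covariance inner product <X,Y> = Cov(X,Y) on samples.
% x and y are illustrative random draws.
n = 1000;
x = randn(n, 1);
y = randn(n, 1);
xc = x - mean(x);               % center: subtract the sample mean
yc = y - mean(y);
ip = (xc' * yc) / (n - 1);      % sample covariance as an inner product
% A value near zero means x and y are (nearly) orthogonal under this
% inner product, i.e. uncorrelated.
disp(ip)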
Expected Value and Covariance Matrices
www.math.uah.edu/stat/.../Matrices.h...
The outer product of x and y is xyᵀ, the n × n matrix whose (i, j) entry is x_i y_j. Note that the inner product is the trace (sum of the diagonal entries) of the outer ...
University of Alabama in Huntsville
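The UAH snippet above notes that the inner product is the trace of the outer product; this is easy to check numerically. A quick MATLAB sketch with assumed vectors:

% Sketch: the inner product equals the trace of the outer product.
x = [1; 2; 3];
y = [4; 5; 6];
inner = x' * y;            % scalar inner product: 1*4 + 2*5 + 3*6 = 32
outer = x * y';            % 3 x 3 outer product, (i,j) entry x(i)*y(j)
disp(inner)                % 32
disp(trace(outer))         % also 32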
[PDF]Linear Algebra & Properties of the Covariance Matrix
www.maths.usyd.edu.au/u/alpapani/.../lecture6.pdf
Oct 3, 2012 - or in matrix/vector notation, Ĉ = (1/(T − 1)) ∑_{t=1}^{T} (r_t − r̄)(r_t − r̄)ᵀ, an outer product. Linear Algebra & Properties of the Covariance Matrix ...
University of Sydney
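The Sydney lecture's estimator builds the sample covariance matrix by summing outer products of mean-centered observations, Ĉ = (1/(T − 1)) ∑_t (r_t − r̄)(r_t − r̄)ᵀ. A MATLAB sketch with an assumed random return matrix R, checked against the built-in cov:

% Sketch: the sample covariance matrix as a sum of outer products,
% C_hat = 1/(T-1) * sum_t (r_t - rbar)(r_t - rbar)'.
% R is an assumed T x n data matrix (rows are observations).
T = 200; n = 4;
R = randn(T, n);
rbar = mean(R, 1);                     % 1 x n sample mean
C = zeros(n);
for t = 1:T
    d = (R(t,:) - rbar)';              % centered observation as a column
    C = C + d * d';                    % accumulate outer products
end
C = C / (T - 1);
% Agrees with MATLAB's built-in estimator up to rounding:
disp(max(max(abs(C - cov(R)))))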
Outer product - Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Outer_product
The outer product u ⊗ v is equivalent to a matrix multiplication uvT, provided that .... analysis for computing the covariance and auto-covariance matrices for two ...
Wikipedia
Geometric intuition for why an outer product of two vectors ...
stats.stackexchange.com/.../geometric-intuition-for-why-an-outer-produc...
Sep 14, 2012 - I understand that the outer product of two vectors, say representing two ... series, can represent a cross-correlation (well, covariance) matrix.

Maximum likelihood - Covariance matrix estimation
www.statlect.com › Fundamentals of statistics › Maximum likelihood
How to approximate the covariance matrix of maximum likelihood estimators. ... is called outer product of gradients (OPG) estimate and it is computed as [eq11] ...[PDF]Principal Component Analysis (PCA) • Quadratic Forms ...
https://www.cs.unm.edu/~williams/.../kl3.pdf
Inner and Outer Products ... Diagonalizing the Covariance Matrix ... For x = (1, 2, 3)ᵀ, the inner product of x with itself is 1·1 + 2·2 + 3·3 = 14. The outer product of x with itself, xxᵀ, is a 3 × 3 matrix ...
University of New Mexico
Linear Algebra and Linear Systems — Computational ...
people.duke.edu/~ccc14/sta.../LinearAlgebraReview.htm...
From the definition, the covariance matrix is just the outer product of the normalized data matrix, where every variable has zero mean, divided by the number of ...
Duke University
Expectations and Products
www.asis.com/users/scotfree/cgi/latex2html/lab/kalm/node8.html
Another construction of great utility here is the Outer Product, which is a function of two random variables yielding their covariance matrix. This is defined as ...

[PDF] Matrix Outer-product Decomposition Method For Blind ...
www.egr.msu.edu/.../m...
Michigan State University College of Engineering
by Z Ding - 1997 - Cited by 157 - Related articles
outer-product matrix, which depend critically on the leading coefficients of the .... Based on (2.4), the channel output covariance matrix becomes ...
3.5 Specification of the Additive Error Term

As described in section 3.3, the utility of each alternative is represented by a deterministic component, which is represented in the utility function by observed and measured variables, and an additive error term, ε_i, which represents those components of the utility function which are not included in the model. In the three alternative examples used above, the total utility of each alternative can be represented by:

U_DA = V(S_t) + V(X_DA) + V(S_t, X_DA) + ε_DA    (3.19)
U_SR = V(S_t) + V(X_SR) + V(S_t, X_SR) + ε_SR    (3.20)
U_TR = V(S_t) + V(X_TR) + V(S_t, X_TR) + ε_TR    (3.21)

where V(·) represents the deterministic components of the utility for the alternatives, and ε_i represents the random components of the utility, also called the error term. The error term is included in the utility function to account for the fact that the analyst is not able to completely and correctly measure or specify all attributes that determine travelers' mode utility assessment. By definition, error terms are unobserved and unmeasured.

A wide range of distributions could be used to represent the distribution of error terms over individuals and alternatives. If we assume that the error term for each alternative represents many missing components, each of which has relatively little impact on the value of each alternative, the central limit theorem suggests that the sum of these small errors will be distributed normally. This assumption leads to the formulation of the Multinomial Probit (MNP) probabilistic choice model. However, the mathematical complexity of the MNP model, which makes it difficult to estimate, interpret and predict, has limited its use in practice. An alternative distribution assumption, described in the next chapter, leads to the formulation of the multinomial logit (MNL) model.
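For concreteness, the MNL model mentioned at the end of the section has closed-form choice probabilities P_j = exp(V_j) / ∑_k exp(V_k). A minimal MATLAB sketch with assumed deterministic utilities for the drive alone (DA), shared ride (SR), and transit (TR) alternatives (the values are illustrative, not from the course):

% Sketch: closed-form MNL choice probabilities for the three modes,
% P_j = exp(V_j) / sum_k exp(V_k). The utility values are assumed.
V = [-0.5, -1.2, -0.9];        % deterministic utilities: DA, SR, TR
P = exp(V) / sum(exp(V));      % MNL probabilities
disp(P)                        % nonnegative and sums to 1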
Submitted to Management Science
manuscript MS-MS-12-00426.R2
On Theoretical and Empirical Aspects of Marginal
Distribution Choice Models
(Authors’ names blinded for peer review)
In this paper, we study the properties of a recently proposed class of semiparametric discrete choice models
(referred to as the Marginal Distribution Model), by optimizing over a family of joint error distributions with
prescribed marginal distributions. Surprisingly, the choice probabilities arising from the family of Generalized
Extreme Value models can be obtained from this approach, despite the difference in assumptions on the
underlying probability distributions. We use this connection to develop flexible and general choice models to
incorporate respondent and product/attribute level heterogeneity in both partworths and scale parameters
in the choice model. Furthermore, the extremal distributions obtained from the MDM can be used to
approximate the Fisher information matrix to obtain reliable standard error estimates of the partworth
parameters, without having to bootstrap the method.
We use various simulated and empirical datasets to test the performance of this approach. We evaluate
the performance against the classical Multinomial Logit, Mixed Logit, and a machine learning approach
developed by Evgeniou et al. (13) (for partworth heterogeneity). Our numerical results indicate that MDM
provides a practical semi-parametric alternative to choice modeling.
Key words : Discrete Choice Model, Convex Optimization
1. Introduction
Conjoint analysis is often used in practice to determine how consumers choose among products
and services, based on the utility maximization framework. It allows companies to decompose
customers' preferences into partworths (or utilities) associated with each level of each attribute of
the products and services. Since the early work of McFadden on the logit based choice model (31),
and the subsequent introduction of the Generalized Extreme Value models (32, 33), discrete choice
models have been used extensively in many areas in economics, marketing and transportation
research.
In a typical discrete choice model, the utility of a customer (denoted by i ∈ I = {1, 2, ..., I}) for an alternative (denoted by j ∈ J = {1, 2, ..., J}) can be decomposed into a sum of a deterministic and a random utility given by $\tilde{U}_{ij} = V_{ij} + \tilde{\varepsilon}_{ij}$. The first component $V_{ij}$ is a function of preference weights $\beta_i$, written as $V_{ij}(\beta_i)$, that represents the deterministic utility obtained from the observed attributes of the alternative. In most cases, $\beta_i$ is the vector of the customer's preference weights (or partworths) for the vector of attributes of alternative j offered to customer i, denoted by $x_{ij}$. The second component $\tilde{\varepsilon}_{ij}$ denotes the random and unobservable or idiosyncratic effects on the utilities of customer i for alternative j.

Prediction of customer i's choice for alternative j is done by evaluating the choice probability

$$P_{ij} := \mathbb{P}\Big(\tilde{U}_{ij} \geq \max_{k \in J} \tilde{U}_{ik}\Big) = \mathbb{P}\big(V_{ij} - V_{ik} \geq \tilde{\varepsilon}_{ik} - \tilde{\varepsilon}_{ij}, \ \forall k \in J\big).$$

The computation involves an evaluation of a multidimensional integral, and closed form solutions are available only for certain classes of distributions. This includes the GEV family, which is derived under the assumption that the error term follows a generalized extreme value distribution. The Multinomial Logit (MNL) and Nested Logit (NestL) models are well-known members of this family. The choice probabilities do not have closed-form solutions for most other distributions.
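When no closed form exists, the choice probability can be approximated by simulation: draw the errors, add them to the deterministic utilities, and count how often each alternative has the maximum utility. A MATLAB sketch with assumed utilities and an assumed normal error covariance (multivariate normal errors correspond to the probit case, which has no closed form):

% Sketch: Monte Carlo approximation of choice probabilities when the
% error distribution admits no closed form. V and Sigma are assumed.
V = [-0.5, -1.2, -0.9];                % deterministic utilities
Sigma = [1.0 0.3 0.0;                  % assumed error covariance
         0.3 1.0 0.0;
         0.0 0.0 1.0];
S = 100000;                            % number of simulation draws
L = chol(Sigma, 'lower');
err = (L * randn(3, S))';              % S x 3 correlated error draws
U = err + repmat(V, S, 1);             % simulated total utilities
[~, choice] = max(U, [], 2);           % utility-maximizing alternative
P = zeros(1, 3);
for j = 1:3
    P(j) = mean(choice == j);          % empirical choice probabilities
end
disp(P)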
In practice, an adequate modeling of consumer heterogeneity is important for accurate choice prediction. To account for taste variation among the customers, the Mixed Logit (MixL) model (see for example Train (44), Allenby and Rossi (1)) assumes that the partworth parameter vector $\beta_i$ is sampled from a distribution $\beta_0 + \tilde{\varepsilon}_i^a$. In this way, the utility function $\tilde{U}_{ij} = (\beta_0 + \tilde{\varepsilon}_i^a)' x_{ij} + \tilde{\varepsilon}_{ij}$ captures consumer taste variation across the attributes. By integrating over the density, the choice probabilities under the mixed logit model are derived as $P_{ij} = \int P_{ij}(\varepsilon_i^a)\, g(\varepsilon_i^a)\, d\varepsilon_i^a$, where $P_{ij}(\varepsilon_i^a)$ is the choice probability for a given $\varepsilon_i^a$, and $g(\cdot)$ denotes the probability density of $\tilde{\varepsilon}_i^a$.