Discriminant analysis helps to identify the independent variables that discriminate between the groups of a nominally scaled dependent variable of interest. A linear combination of the independent variables forms the discriminant function, which maximizes the difference between the two group means. In other words, independent variables measured on an interval or ratio scale are used to discriminate among the groups of interest in the study.
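As a minimal sketch of this idea (using simulated data and scikit-learn, neither of which is prescribed above), a linear discriminant function can be fit to two interval-scaled predictors of a two-group nominal outcome:

```python
# Hypothetical illustration: discriminating two groups (e.g. "stayers" vs. "leavers")
# from interval-scaled predictors, using scikit-learn. All data are simulated.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Two predictors measured on an interval scale, with different means in each group
group_0 = rng.normal(loc=[3.0, 2.5], scale=1.0, size=(100, 2))
group_1 = rng.normal(loc=[4.5, 4.0], scale=1.0, size=(100, 2))
X = np.vstack([group_0, group_1])
y = np.array([0] * 100 + [1] * 100)   # nominally scaled dependent variable

lda = LinearDiscriminantAnalysis().fit(X, y)
print("discriminant function weights:", lda.coef_)   # the linear combination of predictors
print("classification accuracy:", lda.score(X, y))
```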
Factor analysis helps to reduce a vast number of variables to a meaningful, interpretable, and manageable set of factors. A principal component analysis transforms all the variables into a set of composite variables that are not correlated with one another. Suppose we have measured in a questionnaire the four concepts of mental health, job satisfaction, life satisfaction, and job involvement, with seven questions tapping each. When we factor analyze these 28 items, we should find four factors, with the right variables loading on each factor, confirming that we have measured the concepts correctly.
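A minimal sketch of that 28-item example, assuming simulated responses and scikit-learn's FactorAnalysis (the text does not prescribe a particular tool):

```python
# Hypothetical illustration of the 28-item / 4-concept example, with simulated data.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n_respondents, n_concepts, items_per_concept = 500, 4, 7

# Each respondent has a latent score on 4 concepts; each item loads on one concept plus noise
latent = rng.normal(size=(n_respondents, n_concepts))
loadings = np.zeros((n_concepts, n_concepts * items_per_concept))
for k in range(n_concepts):
    loadings[k, k * items_per_concept:(k + 1) * items_per_concept] = 0.8
items = latent @ loadings + rng.normal(scale=0.5, size=(n_respondents, n_concepts * items_per_concept))

fa = FactorAnalysis(n_components=4).fit(items)
# Each row of components_ is a factor; each block of 7 items should load mainly on "its" factor
print(np.round(fa.components_, 2))
```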
Cluster analysis is used to classify objects or individuals into mutually exclusive and collectively exhaustive groups, with high homogeneity within clusters and high heterogeneity between clusters. In other words, cluster analysis helps to identify objects that are similar to one another on some specified criterion. For example, cluster analysis can group individuals by their preferences for each of several different brands.
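A minimal sketch of clustering individuals by brand preference, assuming invented 1-7 preference ratings and k-means as the clustering method (the text does not specify one):

```python
# Hypothetical illustration: clustering individuals by their preference ratings
# for several brands (simulated 1-7 ratings), using k-means.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
brands = ["Brand A", "Brand B", "Brand C", "Brand D"]
# Two made-up preference "segments": one favours A/B, the other favours C/D
segment_1 = np.clip(rng.normal([6, 6, 2, 2], 0.8, size=(60, 4)), 1, 7)
segment_2 = np.clip(rng.normal([2, 3, 6, 5], 0.8, size=(60, 4)), 1, 7)
ratings = np.vstack([segment_1, segment_2])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(ratings)
print("cluster sizes:", np.bincount(km.labels_))
print("mean preference profile per cluster:\n", np.round(km.cluster_centers_, 1))
```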
Conjoint analysis
Conjoint analysis is a statistical technique used in market research to determine how people value the different attributes (features, functions, benefits) that make up an individual product or service.
The objective of conjoint analysis is to determine what combination of a limited number of attributes is most influential on respondent choice or decision making. A controlled set of potential products or services is shown to respondents and, by analyzing how they make choices among these products, the implicit valuation of the individual elements making up the product or service can be determined. These implicit valuations (utilities or part-worths) can be used to create market models that estimate market share, revenue and even profitability of new designs.
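In the common additive part-worth formulation (a standard model, not spelled out in this article), the total utility of a product profile is written as the sum of the part-worths of the attribute levels it contains:

```latex
U(\text{profile}) \;=\; \beta_0 + \sum_{j=1}^{J} \sum_{k=1}^{K_j} \beta_{jk}\, x_{jk},
\qquad
x_{jk} =
\begin{cases}
1 & \text{if attribute } j \text{ takes level } k \text{ in the profile} \\
0 & \text{otherwise}
\end{cases}
```

The estimated coefficients are the part-worth utilities; summing them over any hypothetical combination of levels gives the kind of market-model prediction mentioned above.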
Conjoint analysis originated in mathematical psychology and was developed by marketing professor Paul Green at the Wharton School of the University of Pennsylvania and Data Chan. Other prominent conjoint analysis pioneers include professor V. “Seenu” Srinivasan of Stanford University, who developed a linear programming (LINMAP) procedure for rank-ordered data as well as a self-explicated approach; Richard Johnson (founder of Sawtooth Software), who developed the Adaptive Conjoint Analysis technique in the 1980s;[1] and Jordan Louviere (University of Iowa), who invented and developed choice-based approaches to conjoint analysis and related techniques such as Best-Worst Scaling.
Today it is used in many of the social sciences and applied sciences including marketing, product management, and operations research. It is used frequently in testing customer acceptance of new product designs, in assessing the appeal of advertisements and in service design. It has been used in product positioning, but there are some who raise problems with this application of conjoint analysis (see disadvantages).
Conjoint analysis techniques may also be referred to as multiattribute compositional modelling, discrete choice modelling, or stated preference research, and are part of a broader set of trade-off analysis tools used for systematic analysis of decisions. These tools include Brand-Price Trade-Off, Simalto, and mathematical approaches such as AHP,[2] evolutionary algorithms or Rule Developing Experimentation.
Conjoint Design
A product or service area is described in terms of a number of attributes. For example, a television may have attributes of screen size, screen format, brand, price and so on. Each attribute can then be broken down into a number of levels. For instance, levels for screen format may be LED, LCD, or Plasma.
Respondents would be shown a set of products, prototypes, mock-ups, or pictures created from a combination of levels from all or some of the constituent attributes and asked to choose from, rank or rate the products they are shown. Each example is similar enough that consumers will see them as close substitutes, but dissimilar enough that respondents can clearly determine a preference. Each example is composed of a unique combination of product features. The data may consist of individual ratings, rank orders, or preferences among alternative combinations.
As the number of attributes and levels increases, the number of potential profiles increases exponentially. Consequently, fractional factorial design is commonly used to reduce the number of profiles that have to be evaluated, while ensuring enough data are available for statistical analysis, resulting in a carefully controlled set of "profiles" for the respondent to consider.
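As a rough illustration (the attributes, levels, and counts below are invented for this sketch), enumerating the full factorial design shows how quickly the profile count grows, which is why a fractional subset is usually fielded instead:

```python
# Hypothetical TV example: the full factorial design is the Cartesian product
# of all attribute levels, so its size is the product of the level counts.
from itertools import product

attributes = {                      # invented attributes and levels
    "screen size": ["32\"", "42\"", "55\"", "65\""],
    "screen format": ["LED", "LCD", "Plasma"],
    "brand": ["Brand A", "Brand B", "Brand C"],
    "price": ["$400", "$700", "$1000", "$1500"],
}

full_factorial = list(product(*attributes.values()))
print(len(full_factorial))          # 4 * 3 * 3 * 4 = 144 possible profiles
print(full_factorial[0])            # ('32"', 'LED', 'Brand A', '$400')

# A fractional factorial keeps only a carefully chosen, balanced subset of these
# profiles (e.g. an orthogonal array) so far fewer have to be rated by respondents.
```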
Types of conjoint analysis
The earliest forms of conjoint analysis were what are known as Full Profile studies, in which a small set of attributes (typically 4 to 5) is used to create profiles that are shown to respondents, often on individual cards. Respondents then rank or rate these profiles. Using relatively simple dummy-variable regression analysis, the implicit utilities for the levels can be calculated.
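A minimal sketch of that dummy-variable regression, with invented attributes, levels, "true" part-worths, and simulated ratings (none of these are from the article):

```python
# Hypothetical full-profile example: ratings of profiles are regressed on
# dummy-coded attribute levels; the coefficients are the implied part-worths.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
levels = {"format": ["LED", "LCD", "Plasma"], "price": ["$400", "$700", "$1000"]}
true_worth = {"LED": 1.5, "LCD": 0.5, "Plasma": 0.0, "$400": 2.0, "$700": 1.0, "$1000": 0.0}

# Build every profile and simulate a respondent's rating for it
profiles = pd.DataFrame(
    [(f, p) for f in levels["format"] for p in levels["price"]], columns=["format", "price"]
)
ratings = 5 + profiles["format"].map(true_worth) + profiles["price"].map(true_worth)
ratings = ratings + rng.normal(scale=0.3, size=len(ratings))

X = pd.get_dummies(profiles, drop_first=True)          # dummy coding, one reference level each
model = LinearRegression().fit(X, ratings)
print(dict(zip(X.columns, np.round(model.coef_, 2))))   # estimated part-worths vs. reference level
```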
Two drawbacks were seen in these early designs. Firstly, the number of attributes in use was heavily restricted. With large numbers of attributes, the consideration task for respondents becomes too large and even with fractional factorial designs the number of profiles for evaluation can increase rapidly.
In order to use more attributes (up to 30), hybrid conjoint techniques were developed. The main alternative was to do some form of self-explication before the conjoint tasks and some form of adaptive computer-aided choice over the profiles to be shown.
The second drawback was that the task itself was unrealistic and did not link directly to behavioural theory. In real-life situations, the task would be some form of actual choice between alternatives rather than the more artificial ranking and rating originally used. Jordan Louviere pioneered an approach that used only a choice task, which became the basis of choice-based conjoint analysis and discrete choice analysis. This stated preference research is linked to econometric modeling and can be linked to revealed preference, where choice models are calibrated on the basis of real rather than survey data. Originally, choice-based conjoint analysis was unable to provide individual-level utilities as it aggregated choices across a market. This made it unsuitable for market segmentation studies. With newer hierarchical Bayesian analysis techniques, individual-level utilities can be imputed back to provide individual-level data.
Information collection
Data for conjoint analysis are most commonly gathered through a market research survey, although conjoint analysis can also be applied to a carefully designed configurator or to data from an appropriately designed test market experiment. Market research rules of thumb apply with regard to statistical sample size and accuracy when designing conjoint analysis interviews.
The length of the research questionnaire depends on the number of attributes to be assessed and the method of conjoint analysis in use. A typical Adaptive Conjoint questionnaire with 20-25 attributes may take more than 30 minutes to complete. Choice-based conjoint, by using a smaller profile set distributed across the sample as a whole, may be completed in less than 15 minutes. Choice exercises may be displayed as a store-front-type layout or in some other simulated shopping environment.
Analysis
Depending on the type of model, different econometric and statistical methods can be used to estimate utility functions. These utility functions indicate the perceived value of each feature and how sensitive consumer perceptions and preferences are to changes in product features. The actual estimation procedure depends on the design of the task and profiles shown to respondents, on the type of specification, and on the scale of measurement for preferences (ratio, ranking, or choice), which may or may not have a limited range. For rated full-profile tasks, linear regression may be appropriate; for choice-based tasks, maximum likelihood estimation, usually with logistic regression, is typically used. The original methods were monotonic analysis of variance or linear programming techniques, but contemporary marketing research practice has shifted towards choice-based models using multinomial logit, mixed versions of this model, and other refinements. Bayesian estimators, in particular hierarchical Bayesian procedures, are also very popular nowadays.
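A minimal sketch of the choice-based estimation idea, fitting a multinomial (conditional) logit by maximum likelihood to simulated choice sets; the number of attributes, the coding, and the "true" part-worths are all invented for illustration:

```python
# Hypothetical sketch of multinomial (conditional) logit estimation for
# choice-based conjoint: respondents pick one of 3 profiles per choice set,
# and part-worths are recovered by maximum likelihood. All data are simulated.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n_sets, n_alts, n_params = 400, 3, 4          # choice sets, alternatives per set, dummy-coded features
true_beta = np.array([1.2, 0.4, -0.8, 0.6])   # invented "true" part-worths

X = rng.integers(0, 2, size=(n_sets, n_alts, n_params)).astype(float)  # dummy-coded profiles
utility = X @ true_beta + rng.gumbel(size=(n_sets, n_alts))            # random-utility model
choice = utility.argmax(axis=1)                                         # observed choices

def neg_log_likelihood(beta):
    v = X @ beta                                                # systematic utility per alternative
    log_p = v - np.log(np.exp(v).sum(axis=1, keepdims=True))   # log choice probabilities (softmax)
    return -log_p[np.arange(n_sets), choice].sum()

result = minimize(neg_log_likelihood, x0=np.zeros(n_params), method="BFGS")
print("estimated part-worths:", np.round(result.x, 2))   # should be close to true_beta
```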