Monday, April 25, 2016

Conjoint analysis, airline example. From “Note on Conjoint Analysis” (MIT, John R. Hauser): the value of each “util” is about $2.46.
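As a rough illustration of where a dollars-per-util figure like that comes from: one common way to express utils in dollars is to divide the dollar span of the price attribute by the spread of its estimated utilities. The sketch below uses made-up numbers, not the figures from the MIT note.

```python
# Hypothetical part-worth utilities for a price attribute
# (made-up numbers, not the figures from the MIT note).
price_utils = {100: 1.8, 150: 0.4, 200: -1.6}   # dollars -> estimated utility

dollar_span = max(price_utils) - min(price_utils)                   # 200 - 100 = 100
util_span = max(price_utils.values()) - min(price_utils.values())   # 1.8 - (-1.6) = 3.4

print(f"each util is worth about ${dollar_span / util_span:.2f}")   # ~$29.41 here
```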

http://www.surveyanalytics.com/images/bookshelf/conjoint.pdf

Conjoint Analysis: Conducting Pricing Research Effectively (a QuestionPro publication, © 2010 Survey Analytics).

What is Conjoint Analysis?

Conjoint Analysis is one of the most effective models for turning consumer behavior into an empirical, quantitative measurement. It evaluates products and services in a way no other method can. Traditional rating surveys and analyses cannot place an “importance” or “value” on the different attributes a product or service is composed of; Conjoint Analysis guides the respondent into expressing his or her preferences as a quantitative measurement. One of its most important strengths is the ability to develop market simulation models that predict consumer behavior: changes in markets or products can be incorporated into the simulation to predict how consumers would react to them.

Attributes and Levels

Any product or service can be modeled as an entity with a set of attributes. For example, an airline ticket between Seattle and Miami may have the attributes Price, Airline, and Stops. Each attribute can have one or more levels, where a level is any value the attribute can take:

1. Price: $100, $150, $200
2. Airline: Delta, Northwest, AA
3. Stops: None, 1, 2

Choice-Based Conjoint

Choice-based (discrete choice) conjoint is by far the most preferred model for a conjoint questionnaire, primarily because it mirrors real-life consumer behavior: most purchases consumers make today are trade-offs. Will you buy a $150 ticket with 2 stops and no miles, or a $200 ticket with no stops and 4,000 miles?

The Survey Analytics Conjoint Analysis offering includes the following tools:

1. Conjoint Task Creation Wizard: a wizard-based interface that creates conjoint tasks from the features (attributes) and the levels of each feature that you enter.
2. Conjoint Design Parameters: tweak your design by choosing the number of tasks, the number of profiles per task, and the “Not-Applicable” option.
3. Utility Calculation: utilities are calculated automatically.
4. Relative Importance: the relative importance of attributes is calculated automatically from the utilities (see the sketch after this list).
5. Cross-Segmentation and Filtering: filter the data on any criteria and then rerun the relative importance calculations.
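Item 4 above is simple enough to reproduce by hand. Here is a minimal sketch, assuming part-worth utilities have already been estimated; the attribute and level names come from the airline example, and the utility numbers are made up for illustration. The idea is that an attribute matters more when the gap between its best and worst level is larger.

```python
# Relative importance of an attribute = range of its level utilities
# divided by the sum of those ranges across all attributes.

# Hypothetical part-worth utilities (made-up numbers, airline example).
utilities = {
    "Price":   {"$100": 0.9, "$150": 0.2, "$200": -1.1},
    "Airline": {"Delta": 0.3, "Northwest": -0.1, "AA": -0.2},
    "Stops":   {"None": 0.7, "1": 0.0, "2": -0.7},
}

ranges = {attr: max(vals.values()) - min(vals.values())
          for attr, vals in utilities.items()}
total = sum(ranges.values())

for attr, r in sorted(ranges.items(), key=lambda kv: -kv[1]):
    print(f"{attr}: {100 * r / total:.1f}% relative importance")
```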
Please be aware that the opinions and suggestions given here are exactly that: our opinions and suggestions. Conjoint Analysis is a fairly complicated model, and while there is no single solution, this FAQ should be read as a broad opinion that tries to cover as many scenarios as possible. Many of the suggestions and methods here are our opinions and may contradict other research carried out for specific scenarios.

How do I determine how many concepts per screen I should ask?

In general, for internet-based (self-administered) surveys, asking users to choose between more than 3 or 4 concepts is not reasonable: the cognitive stress placed on users far outweighs the gain in the utility calculation. The number of concepts also depends on the number of attributes being measured. If you are measuring more than 3 attributes, users have to understand and compare products composed of 3 or more attributes before they can decide between the concepts. As a rule of thumb, with 3 or fewer attributes you can go with 3 concepts per task; with more than 3 attributes, we suggest no more than 2 concepts per task.

How are the tasks (profiles) created and displayed? Are they pre-defined, or does the system create concepts and present them to the user?

The concepts are randomly created and displayed to the user. You can use the “Prohibited Pairs” feature to make sure certain pairs of levels are never part of the same concept, but the model generally keeps concept creation random, so as to explore as varied a set of possibilities as possible and gain insight into individual utilities. The algorithm for computing a profile is as follows (a sketch of this kind of generation appears after the list):

* The system chooses random levels for each attribute to create the first concept.
* Subsequent concepts for each task are created such that levels are not repeated; where this is not possible, levels are repeated.
* Under no circumstances will the system display two identical profiles.
* The system also takes user-defined “Prohibited Pairs” into account when creating profiles, so prohibited level combinations never appear together.
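The bullet points above describe the general behavior. The sketch below is an illustrative reimplementation of that kind of randomized concept generation, not Survey Analytics’ actual code; the attribute definitions and the prohibited pair are hypothetical.

```python
import random

# Hypothetical attribute/level definitions (airline example).
ATTRIBUTES = {
    "Price":   ["$100", "$150", "$200"],
    "Airline": ["Delta", "Northwest", "AA"],
    "Stops":   ["None", "1", "2"],
}

# User-defined prohibited pairs: these levels never co-occur in one concept.
PROHIBITED = {frozenset({"$100", "None"})}  # e.g. no $100 non-stop tickets

def valid(concept):
    """A concept is valid if none of its levels form a prohibited pair."""
    levels = set(concept.values())
    return not any(pair <= levels for pair in PROHIBITED)

def random_concept(prefer_fresh, used):
    """Pick a random level per attribute, preferring levels not yet used in this task."""
    concept = {}
    for attr, levels in ATTRIBUTES.items():
        pool = [l for l in levels if l not in used[attr]] if prefer_fresh else []
        concept[attr] = random.choice(pool or levels)
    return concept

def build_task(n_concepts, max_tries=1000):
    """Build one task: concepts are random, levels are not repeated across
    concepts where possible, identical concepts are never shown twice, and
    prohibited pairs never appear together."""
    concepts = []
    used = {attr: set() for attr in ATTRIBUTES}
    while len(concepts) < n_concepts:
        for attempt in range(max_tries):
            # Fall back to repeating levels if fresh-only combinations
            # keep violating a prohibition.
            concept = random_concept(attempt < max_tries // 2, used)
            if valid(concept) and concept not in concepts:
                break
        else:
            raise RuntimeError("could not build a valid, non-duplicate concept")
        concepts.append(concept)
        for attr, level in concept.items():
            used[attr].add(level)
    return concepts

print(build_task(3))
```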
Can there be an over-reliance on a particular attribute like “Price” or “Cost”?

Yes. In almost all conjoint studies, price or cost is the primary determining factor. If previous conjoint studies have shown an over-reliance on price/cost, it may make sense to run a test study without price as one of the components. While this approach will not work for price-sensitivity testing, it does eliminate the “price-fixation” attitude and gives you deeper insight into the other attributes. Another mechanism for addressing price fixation (where users rely heavily on price, i.e. the cheapest option always wins) is to use price bands, where the levels of the cost/price attribute are defined as ranges, e.g. $1.99 - $2.99. This can help “de-focus” cost and allow users to pay equal attention to other factors.

How do I determine how many tasks I should have the user complete?

Our experience has shown a precipitous drop-out rate after about 15 tasks. Unless there is a strong personal incentive for end users to complete the survey, we suggest keeping the number of tasks under 15, especially where users are volunteering to take the survey. Keep in mind that conjoint product selection is more involved than simply answering a survey question: users have to comprehend each of the attributes and concepts and then make a choice, which takes far more effort than, say, choosing “Male/Female” on a gender question. On the lower side, we suggest 6-8 tasks as the minimum for a conjoint model with 3 attributes. The more attributes you have, the more tasks users have to fill out; it is a balancing act between the number of tasks, the concepts per task, and the total number of attributes and levels that need to be displayed. Two factors determine the overall utility estimation: the concepts per task and the total number of tasks. The system provides a “Concept Simulator” that shows the total number of times a particular level will be displayed, given the total number of respondents.

What about the “None” option? How does having the “None” option affect data calculation and reliability?

If the “None” option is enabled, the utility calculation takes into account that none of the options was selected. The utility calculation fundamentally relies on the number of times a particular level is displayed to the user compared with the number of times that level is part of a chosen concept. If the “None” option is selected, the utilities for all the levels in the displayed but unchosen concepts go down.

What are the other implications of enabling the “None” option?

From a practical standpoint, in cases where we have seen an over-emphasis on price/cost or on a single level, the “None” option tends to be selected whenever the emphasized level is not present in one of the random concepts. In such cases it may not make sense to enable the “None” option; instead, force users to choose the best option they are presented with.

How many levels can I have within each attribute? How many attributes can I have? Any guidelines?

From a technical standpoint, the system does not impose any limitations: you can have unlimited attributes and unlimited levels within each attribute. From a practical standpoint, however, it is unreasonable to have more than 4-6 attributes and about 3-4 levels per attribute. Our suggestion is to keep the number of attributes under 5 and aim for about 3 levels per attribute.

I have a lot of attributes that I’d like to test (more than 5-6). What can I do?

As mentioned above, the system does not limit the number of attributes, but from a presentation standpoint it does not make sense to show choices with more than 5-6 attributes, because of the cognitive stress involved. If you do have a case where you would like to test 10-20 attributes, we suggest running it as a two-part project (a small reach-calculation sketch follows the list):

1. Create a screening/profiling survey and use a simple “Multiple Choice (Multiple Select)” question to determine the viability of attributes (“Pick 6 of 20”, etc.).
2. Use TURF Analysis to pick the top 5 or 6 attributes with the highest reach.
3. Then, as a secondary wave, run the conjoint study on the highest-reach attributes.
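Step 2 above relies on TURF (Total Unduplicated Reach and Frequency). The brute-force sketch below shows the core idea of picking the attribute subset with the highest unduplicated reach; the respondent data and attribute names are made up, and a real study would use the responses from the screening survey.

```python
from itertools import combinations

# Hypothetical screening data: which attributes each respondent picked in a
# "pick k of N" multiple-select question (made-up answers for illustration).
responses = [
    {"price", "legroom", "wifi"},
    {"price", "miles"},
    {"wifi", "meals", "legroom"},
    {"miles", "meals"},
    {"price", "wifi"},
]
attributes = sorted(set().union(*responses))

def reach(subset):
    """Fraction of respondents who picked at least one attribute in the subset."""
    chosen = set(subset)
    return sum(1 for r in responses if r & chosen) / len(responses)

# Brute-force TURF: the combination of k attributes with the highest reach.
k = 3
best = max(combinations(attributes, k), key=reach)
print(best, f"reach = {reach(best):.0%}")
```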


Related MIT papers (John R. Hauser and colleagues):

* Note on Conjoint Analysis (John R. Hauser, MIT Sloan Courseware): www.mit.edu/~hauser/Papers/Note...
* The Voice of the Customer (Abbie Griffin and John R. Hauser, 1993): www.mit.edu/~hauser/Papers/TheV...
* The Voice of the Customer (MIT Sloan School of Management; draws on an MIT Sloan Courseware note by John R. Hauser): www.mit.edu/~hauser/Papers/Gaski...
* New Research Issues Related to Conjoint Analysis (J. R. Hauser, 2002): www.mit.edu/~hauser/Papers/Gree...
* Managing a Dispersed Product Development Process (E. Dahan, 2001): www.mit.edu/~hauser/Papers/Daha...
* Validating agent-based marketing models through conjoint ... (R. Garcia, 2007): www.mit.edu/~hauser/Papers/Garci...
* Prof. John R. Hauser, published research papers and notes: web.mit.edu/hauser/.../books.html
* Non-Compensatory (and Compensatory) Models of ... (J. R. Hauser, 2009): www.mit.edu/~hauser/Papers/Haus...
* Download Paper (Hauser; note summarizing a recent working paper): www.mit.edu/~hauser/Papers/Selov...
* Metrics: You Are What You Measure (J. R. Hauser, 1998): www.mit.edu/~hauser/Papers/Haus...