The Meaning of “Importance”

“What’s important to our customers?” is a common question facing marketing decision makers at Mercury Marine (and, I would guess, at your company, too). I’ve seen many boat manufacturer customer-satisfaction studies that include instructions such as, “Please indicate how important each factor was in the decision to purchase your new boat.” Usually, a 1-to-5 scale is provided, where higher numbers denote greater importance – a format otherwise known as a direct attribute importance scale.

Unfortunately, measuring “importance” has been a topic of heated debate since at least the late 1960s, when James H. Myers and Mark I. Alpert observed that this term has at least two different meanings – a product attribute that is “desired” vs. an attribute that actually affects a purchase decision (see “Determinant Buying Attitudes: Meaning and Measurement,” Journal of Marketing, October 1968, pp. 13-20).

The marketing research literature points out several problems with the direct attribute importance scale relating to this dual meaning of the term and the fact that respondents often do not rate the “importance” of an attribute independently of other issues, such as price.

Assume that three respondents are identical in their attitudes, beliefs and behaviors toward purchasing a boat. We ask them to indicate how important each factor was in the decision to purchase their new boat, using a 1-to-5 importance scale, where higher numbers denote greater “importance.” One of the attributes is “quality.” Consider the following:

Respondent A’s rating = 5 because quality is a desirable feature of a boat.

Respondent B’s rating = 1 because all the boats he considered were of equal quality, so quality did not play a role in the decision process.

Respondent C’s rating = 4 because she did not want the highest quality boat — it would cost too much.

Each respondent used a different assumption when interpreting the meaning of “important.” Respondent A defined “important” as reflecting the desirability of an attribute. Respondent B interpreted “important” to mean the role the attribute played in the decision process. Respondent C considered how another attribute — price — might affect the role the given attribute — quality — played in the decision process. What Respondent C did violates a key assumption of any kind of direct attribute rating: Attribute ratings need to be independent of each other.

These problems introduce measurement error into the data. The data and any subsequent analysis are less valid because, in effect, different respondents are answering different questions.

Compounding this issue is the fact that marketing managers often use the term “important” ambiguously in business meetings. The problem is that “What’s important to our customers?” is the wrong question to ask. Better questions would be: (1) What attributes and benefits do consumers seek in our product? (2) What level of attribute performance are consumers willing to pay for? and (3) Are consumers willing to trade off somewhat lower performance in attribute X to get higher performance in attribute Y? Simply asking, “What’s important to our customers?” clouds one’s thinking and frustrates managers’ ability to identify truly critical attributes that, as Myers and Alpert said over 40 years ago, “affect both the overall evaluation of an item and the actual purchasing decision.”

There are no simple solutions to the marketing research problem of measuring “importance.” In fact, no alternative is without its weaknesses. But there are better methods than the direct attribute importance scale – methods that can reduce the amount of measurement error in our questionnaires.

For example, methods such as conjoint analysis or choice-based modeling have respondents evaluate different bundles of product attributes and select the bundles they are most likely to purchase: “What do you prefer, boat X with feature Y at price Z, or boat A with feature B at price C?” From these choices, we can derive the relative influence each attribute and attribute level has on consumer choice and examine the trade-offs consumers may make in a purchase decision. Such methods could help a pontoon manufacturer understand, for instance, whether consumers are willing to pay more for higher-quality carpet.
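To make the idea concrete, here is a toy sketch in Python of how attribute influence might be derived from choice data. The profiles, attribute names and choices are all hypothetical, and a plain binary logit stands in for the multinomial logit normally used in choice-based conjoint studies:

```python
# A toy sketch of deriving attribute influence from choice data.
# All profiles, attribute names and choices below are hypothetical,
# and a plain binary logit stands in for the multinomial logit
# normally used in choice-based conjoint studies.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is one boat profile shown in a choice task:
# [high-quality carpet (0/1), extended warranty (0/1), price in $10k]
profiles = np.array([
    [1, 0, 3.5],
    [0, 1, 3.0],
    [1, 1, 4.0],
    [0, 0, 2.5],
    [1, 0, 3.0],
    [0, 1, 3.5],
])
chosen = np.array([1, 0, 0, 1, 1, 0])  # 1 = chosen within its choice set

# Fit the logit; each coefficient estimates an attribute's pull on choice.
model = LogisticRegression().fit(profiles, chosen)
for name, coef in zip(["carpet", "warranty", "price"], model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```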

One problem with these “trade-off” methods is that they cannot easily be adapted to a self-administered survey, such as the customer-satisfaction questionnaires prevalent in our industry. Other methods can be used in self-administered surveys, such as regression analysis, which, like trade-off analysis, derives the relative influence an attribute has on the consumer decision process.
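As a rough illustration, the following Python sketch derives relative importance by regressing a simulated overall-satisfaction rating on standardized attribute ratings; the attribute names and data are hypothetical:

```python
# A rough sketch of regression-derived importance for a self-administered
# survey. All attribute names and ratings are simulated, hypothetical data.
import numpy as np

rng = np.random.default_rng(0)
n = 200  # respondents

# Hypothetical 1-to-5 ratings of three attributes: quality, styling, value
ratings = rng.integers(1, 6, size=(n, 3)).astype(float)
# Simulated overall satisfaction, driven mostly by quality
overall = (0.6 * ratings[:, 0] + 0.3 * ratings[:, 1]
           + 0.1 * ratings[:, 2] + rng.normal(0, 0.5, n))

# Standardize so the coefficients are comparable across attributes
X = (ratings - ratings.mean(axis=0)) / ratings.std(axis=0)
y = (overall - overall.mean()) / overall.std()

# Ordinary least squares; the standardized slopes serve as derived importance
beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), X]), y, rcond=None)
for name, b in zip(["quality", "styling", "value"], beta[1:]):
    print(f"{name}: derived importance = {b:+.2f}")
```
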
Still other methods are less problematic than the direct importance scale with respect to how respondents interpret the question, and they are more amenable to self-administered surveys.

For example, maximum difference scaling (MaxDiff) gives respondents a list of, say, 20 attributes. Respondents select the five that most influenced them to purchase the brand they did and the five that least influenced their decision. The relative frequency with which an attribute is selected is an indicator of its influence on the purchase decision.
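A simple way to score such data is a best-minus-worst count, sketched below in Python with hypothetical attributes and respondent picks:

```python
# A minimal sketch of scoring MaxDiff-style data with a best-minus-worst
# count. The attribute names and respondent picks are hypothetical.
from collections import Counter

responses = [
    {"most":  ["quality", "price", "warranty", "styling", "dealer"],
     "least": ["carpet", "cupholders", "stereo", "color", "logo"]},
    {"most":  ["price", "quality", "dealer", "warranty", "horsepower"],
     "least": ["stereo", "logo", "color", "cupholders", "carpet"]},
]

most, least = Counter(), Counter()
for r in responses:
    most.update(r["most"])
    least.update(r["least"])

# Net (most minus least) picks per respondent indicate relative influence
n = len(responses)
for attr in sorted(set(most) | set(least)):
    print(f"{attr}: {(most[attr] - least[attr]) / n:+.1f}")
```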

I’m fond of saying that marketing research questions have simple, easy-to-understand wrong answers. How to measure “importance” is one of those questions.
