I think I understand why polls get conflicting results.
Yesterday, I had the pleasure of answering a survey about health care. Normally, when I get one of those robo-dialed call-center calls, I wait for the person on the line to say “Hello,” but for whatever reason, I said hello first. And then, when asked if I’d be willing to do a survey, I uncharacteristically said “yes.”

Taking the survey was interesting partly because of what I know about health care and my opinions on it. But what was more interesting was how the questions were phrased. I’d often wondered how one company could do a survey claiming that 87% of Americans love chocolate, and then another company could do a survey the next day claiming that 87% of Americans hate chocolate. After taking yesterday’s survey, I understand how this can happen: the questions (and the supplied answers) can rig the results. For example, several of the questions had this format:
Which of the following best represents you?
1. I like dogs, love chocolate, and my favorite color is blue.
2. I like cats, hate chocolate, and my favorite color is purple.
For me, #1 is probably closest, as I love dogs and chocolate, but I don’t like the color blue. On the other hand, #2 is close too, as I like cats (though not as much as dogs) and my favorite color is purple. Which answer should I choose? Whichever one I pick, one aspect of it will be completely wrong. But since the question asks me to choose whichever is “closest” to my opinions, I’d probably choose #1.

Now imagine that 87% of people chose #1. The marketers reviewing the survey results could claim that 87% of people love chocolate, or have blue as their favorite color, or like dogs (or all three), even though, for me, only two of the three statements are true. Another type of question I found misleading was a hypothetical:
If you read about a company that loved dogs, made world-class chocolate, and was focused on fostering world peace, would your opinion of that company be favorable or unfavorable?
It was clear to me that the marketing copy the surveyor was reading was meant to be about my insurance company, and the message itself was very favorable. The problem I had with the question is not whether I found the marketing message favorable, but whether I found it credible, especially when applied to any insurance company. So even though the message was appealing, I wouldn’t find it favorable coming from a company I didn’t trust; in that scenario, it sounded like a snake oil salesman. Did I answer incorrectly when I said I found the message favorable?

When I worked in Marketing, I used to get really annoyed at how the analysts would interpret survey results. For example, how many times have you seen a survey asking you to rate something from 1 (very bad) to 5 (very good)? If you give the product a 3, is that good or bad? In the survey analysis I saw, anything 3 or higher was listed as “good.” In other words, they would say “85% of customers rate our product good” and include every 3, 4, and 5 in that percentage. It didn’t matter if 84% gave the product a 3 and only two votes total came in at 4 and 5; that’s how they would interpret the results.

My solution, as a result, is to always take surveys with the most extreme answers. In other words, if my honest answer would be a 4, I give a 5 on the survey, and if I’m considering a 2, I give a 1.

Which immediately makes me think of reviews. On Amazon, it seems like the majority of book and product reviews come in at 5 stars or 1 star. When I write reviews, I always give the score I think the product deserves, and that’s almost never an extreme. Most books, software, and other things I review are average, or sorta bad, or sorta good. Sorta like I feel about health care surveys.
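As a postscript, here’s a quick sketch of how that “anything 3 or higher is good” arithmetic plays out. The numbers are hypothetical (and the code is purely my own illustration), echoing the example above: 84 threes and one vote each at 4 and 5, out of 100 responses.

```python
from collections import Counter

# Hypothetical responses on a 1 (very bad) to 5 (very good) scale,
# echoing the example above: mostly lukewarm 3s, a couple of top scores.
responses = [3] * 84 + [2] * 8 + [1] * 6 + [4, 5]

total = len(responses)

# The marketing interpretation: lump every 3, 4, and 5 into "good".
good = sum(1 for r in responses if r >= 3)
print(f"Headline: {good / total:.0%} of customers rate our product good!")
# -> Headline: 86% of customers rate our product good!

# A rating-by-rating breakdown tells a much more lukewarm story.
for rating, count in sorted(Counter(responses).items()):
    print(f"  rated {rating}: {count / total:.0%}")
```

Only 2% of these respondents actually rated the product above the midpoint, yet the headline happily reports 86%.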