September is almost upon us and with it, the sights and smells of autumn: kids in school, college football, leaves on the ground and, for some, most importantly, the return of Starbucks’ Pumpkin Spice Latte (PSL). With its own Twitter account, Facebook page and an army of avid lovers, this seasonal drink is a rock star among lesser beverages. Recently, a friend of mine reposted a dire warning from a popular food blogger about the PSL. The photo showed a close-up view of the drink and listed nine different reasons not to drink it: everything from pesticide residue to potentially hazardous ingredients. Most ominously, Caramel Color Level IV led the list, with the claim that it is considered to be a carcinogen.

Almost two decades of being a market researcher has made me skeptical of any claim of medical benefit or harm. It’s too easy to be misled or to misunderstand data, especially data dressed up in provocatively scientific conclusions. In the case of the PSL, the issue is the formation of 4-methylimidazole (4-MI) during the manufacturing process of the caramel coloring. The claim cited a 2008 toxicity and carcinogenicity study by Chan, Hills, Kissling and Nyska (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2366200/) as scientific proof of the chemical’s harmful effects and of the company’s intentional disregard for the safety of its consumers. A little digging and a few keystrokes led me to a snopes.com article that debunked the claims (http://www.snopes.com/food/ingredient/pumpkinspicelatte.asp). I also found that the US Food and Drug Administration and the European Food Safety Authority have both concluded that consumers should not be concerned about 4-MI.

If the truth about this claim was so easy to find, why do this and other medical myths persist? The answer involves the way humans make decisions when confronted with overwhelming and conflicting information — specifically, the shortcuts we use and the assumptions we make in evaluating the credibility of data. In this case, I looked for data from information sources I trusted (e.g., snopes.com and government sites). In evaluating the credibility of the data, I made three specific assumptions: 1) snopes.com has the ability to correctly evaluate the claim, 2) information published on government websites is accurate, and 3) I have the ability to understand clinical trial information published in a journal article. Implicit in each of these assumptions, of course, are many more assumptions. When combined, they form a heuristic structure that helps me interpret the information I receive and decide what to do next. My friend’s assumptions were fundamentally different from mine, which led her to a very different conclusion. In identifying the assumptions, we quickly understood how two intelligent people could come to vastly different conclusions.

This same pitfall applies to marketers as they wade through a sea of potentially conflicting data about their markets, consumers and competitors. It’s not only the data itself, but also the lens through which that data is evaluated, that can make all the difference between a successful and a failed campaign. It is for this reason that Cadence staffs projects with people from varied backgrounds: brand planning, sales, market research, regulatory, etc. It helps us identify and (if needed) challenge the assumptions we work with as we design, execute and interpret research. For everyone from brand managers to consumers, a little skepticism and a willingness to identify hidden assumptions are the key.

As for me, I can sleep tight, knowing that my favorite Starbucks drink isn’t going to kill me anytime soon.

 —Sugata Biswas