At the closing keynote of the Marketing Research Association’s Corporate Researcher Conference in St. Louis today, Dr. Jennifer Golbeck from the University of Maryland discussed the types of insights that computer science can find, delving into 1) which parts of them are meaningful in other contexts, 2) which outputs to trust, and 3) exactly how these algorithms can be applied for unique and useful research results.

While we worry about spurious correlations, a model that works without our knowing why may be enough. For instance, from the warning sheet for the drug Topiramate: “The precise mechanisms by which topiramate exerts its anticonvulsant and migraine prophylaxis effects are unknown.” Do you want it to work, and not know why? Or not want it to work?

Of the millions of pages that people can like (Facebook “likes” are public and can’t be made private), these are predictive of intelligence, personality, consumer behavior and more. If you like the Budweiser page on Facebook, chances are you are a drinker; no complex explanation needed. The best predictors of high intelligence were liking science, thunderstorms, The Colbert Report, and curly fries. The number one indicator of low intelligence was liking a page about enjoying parenthood. “Why are these relationships there? This is where people yell ‘spurious correlations’ – how can you use them?”

“The heart of computer science is ‘can we compute this?’ That’s all we care about. We don’t care about why it works, unless it helps us compute it better.” Machine learning takes thousands or millions of examples of behavior, but no data is provided about the reasons behind that behavior. “These algorithms are black boxes, and we can’t understand what goes on inside.” Computer scientists spent a decade trying to figure out what went on inside the black box of machine learning and gave up on finding reasons. On the inside, it’s a sparse matrix, with likes across the columns and Facebook accounts down the rows. “If you liked curly fries, it helped us make a better prediction.”
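The study itself trained models over that sparse like matrix; as a minimal sketch of why a single like carries signal (with invented users and traits, not the study’s data), you can compare the rate of a trait among likers of a page with the base rate across everyone:

```python
# Toy illustration: does liking a page shift the odds of a trait?
# Users and traits here are invented for demonstration.

def like_lift(users, page, trait):
    """Return (base_rate, rate_among_likers) for a binary trait.

    users: list of dicts with a "likes" set and a boolean trait key.
    """
    trait_all = [u[trait] for u in users]
    trait_likers = [u[trait] for u in users if page in u["likes"]]
    base = sum(trait_all) / len(trait_all)
    among = sum(trait_likers) / len(trait_likers) if trait_likers else 0.0
    return base, among

users = [
    {"likes": {"curly fries", "science"}, "high_iq": True},
    {"likes": {"curly fries"},            "high_iq": True},
    {"likes": {"parenthood"},             "high_iq": False},
    {"likes": {"science"},                "high_iq": True},
    {"likes": {"parenthood", "tv"},       "high_iq": False},
]

base, among = like_lift(users, "curly fries", "high_iq")
# If the rate among likers beats the base rate, the like is a useful
# feature -- no causal story required.
```

A real model would combine thousands of such columns at once, but the logic is the same: each like either sharpens the prediction or it doesn’t.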

This is how Amazon recommends other things to buy or Netflix recommends what else you should watch. No matter how many movies you’ve rated, there is an order of magnitude more movies that you haven’t rated. Of all those movies, 15% of the error for one Netflix Prize contestant’s recommendation model came from “Napoleon Dynamite”: its ratings were U-shaped instead of following a bell curve. “There are now papers on the ‘Napoleon Dynamite problem.’” This is the converse of curly fries aiding prediction: a single polarizing title that makes the model worse rather than better.
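A quick sketch shows why a U-shaped rating distribution is so hard to predict (the ratings below are invented, not Netflix data): when viewers agree, one predicted rating fits nearly everyone; when they polarize, any single prediction is far from most viewers.

```python
# Compare prediction error for a consensus movie vs. a polarizing one.
import statistics

def rating_spread(ratings):
    """Mean rating and the RMSE of predicting that mean for everyone."""
    mean = statistics.mean(ratings)
    rmse = (sum((r - mean) ** 2 for r in ratings) / len(ratings)) ** 0.5
    return mean, rmse

consensus = [3, 4, 3, 4, 3, 4, 3, 4]  # bell-ish: most viewers roughly agree
polarized = [1, 5, 1, 5, 1, 5, 1, 5]  # U-shaped: love it or hate it

print(rating_spread(consensus))  # low spread: the mean is a good guess
print(rating_spread(polarized))  # high spread: the mean fits nobody
```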

Dr. Golbeck used Facebook accounts’ profile information (about a hundred fields) to predict people’s personalities. No correlations were larger than an absolute value of 0.200: extroverts have more friends with less dense networks; neurotics use more words to talk about anxiety. Then there are things that don’t make sense: the number of characters in your last name is positively correlated with neuroticism, “a hilarious false positive – if you have a long last name you go through a lifetime of having to explain your name and it makes you angry.” Unfortunately, that false positive took on a life of its own. “We didn’t use the correlations in the data, but computed them after the fact on the side because people wanted to know.”
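Computing a correlation “after the fact on the side” is a one-liner in most stats libraries; a hand-rolled Pearson correlation over invented data (the names and scores below are made up, not the study’s) makes the mechanics explicit:

```python
# Pearson correlation between last-name length and a neuroticism score.
# All data here is invented purely to show the computation.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

name_lengths = [4, 11, 6, 9, 5, 12]
neuroticism  = [0.2, 0.7, 0.3, 0.6, 0.3, 0.8]

r = pearson(name_lengths, neuroticism)
# A positive r here would be exactly the kind of after-the-fact
# correlation the talk warns about: present in the data, causally empty.
```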

Target found from analyzing CRM data that the three predictors of pregnancy were purchases of vitamins, large purses that could hold diapers, and colorful rugs. Maybe the women were decorating a nursery?

In another example of machine learning related to pregnancy, Microsoft found a way to predict post-partum depression from Twitter feeds. Machine learning requires the analyst to come up with “features” that describe individuals – for instance, on Twitter, whether people follow back. Post-partum depression sufferers have fewer followers, follow people back less often, and post less often. Psycholinguistics analyzes text to categorize sentiment, often driven by use of “little words”: post-partum depression sufferers use more adverbs, fewer verbs, and more first-person pronouns. Women who develop post-partum depression ask three times as many questions as they did before. “Why? We don’t know, and as computer scientists we don’t care. We just want an algorithm that works. It opens up some space for us to find out the ‘why’. I have some hypotheses, but we haven’t looked at all those.”
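Turning raw tweets into features like these is straightforward; a minimal sketch (invented tweets, a deliberately simplified pronoun list – real psycholinguistic tools such as LIWC use much richer dictionaries) might look like:

```python
# Extract two simple psycholinguistic features from a set of posts:
# first-person pronoun rate and the share of posts that are questions.

FIRST_PERSON = {"i", "me", "my", "mine", "myself"}

def features(tweets):
    words = [w.strip(".,!?").lower() for t in tweets for w in t.split()]
    fp_rate = sum(w in FIRST_PERSON for w in words) / len(words)
    q_rate = sum(t.strip().endswith("?") for t in tweets) / len(tweets)
    return {"first_person_rate": fp_rate, "question_rate": q_rate}

tweets = [
    "I can't sleep, is this normal?",
    "Does anyone else feel this way?",
    "My day was fine.",
]
print(features(tweets))
```

Each user becomes a row of such numbers, and the black box learns which combinations separate sufferers from non-sufferers.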

To identify people’s political leanings, Dr. Golbeck used homophily (you are friends with people who are like you). On Twitter, you follow people similar to you politically. We know the members of Congress’s political leanings and can quantify them as a score from 0 to 1, from liberal to conservative. But who you follow might not reflect your own leanings – you may follow your representative regardless of your personal orientation. Dr. Golbeck added in the media sources that people followed to better predict leanings: together, these predicted the party of the presidential candidate a person followed (0.97 correlation).
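The homophily idea reduces to a simple aggregation: score the accounts whose leanings you already know, then average over the ones a user follows. A sketch with invented handles and scores (the real models weight media sources and legislators more carefully):

```python
# Estimate a user's leaning (0 = liberal, 1 = conservative) as the mean
# score of the known accounts they follow. Handles and scores invented.

KNOWN_SCORES = {
    "@liberal_member": 0.1,
    "@moderate_member": 0.5,
    "@conservative_member": 0.9,
    "@left_media": 0.2,
    "@right_media": 0.8,
}

def estimate_leaning(follows):
    scored = [KNOWN_SCORES[f] for f in follows if f in KNOWN_SCORES]
    return sum(scored) / len(scored) if scored else None

user_follows = ["@left_media", "@liberal_member", "@moderate_member",
                "@some_friend"]  # unknown accounts are simply ignored
print(estimate_leaning(user_follows))
```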

In another example of homophily, emulating “gaydar” (predicting whether a man is gay): if a significant portion of your Facebook friends self-identified as gay, an algorithm could predict with 80% accuracy whether you were gay.

Could Facebook predict your significant other from a list of Facebook friends that were shared between you and your spouse? Social dispersion is a measure of your distinct groups of friends who know one another. The person who is connected to the greatest number of your social groups is your significant other: “My husband is on my hockey team, has met my friends, has met my family, and so he is connected to all those circles, even if he isn’t friends with as many of my friends as other friends. This predicts the significant other 75% of the time. If they guessed wrong, the couple were 50% more likely to have broken up.” These algorithms were built on an intuitive understanding of social dynamics.
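The intuition can be sketched in a few lines (invented circles and names below; the published dispersion measure is more subtle than this simple count): score each friend by how many of your distinct circles they appear in, and pick the one who spans the most.

```python
# Pick the friend connected to the most distinct social circles.
# Circles and names are invented for illustration.

circles = {
    "hockey":  {"ann", "bob", "sam"},
    "college": {"cara", "dev", "sam"},
    "family":  {"ed", "sam"},
    "work":    {"fay", "bob"},
}

def best_candidate(circles):
    friends = set().union(*circles.values())
    # Count how many distinct circles each friend appears in.
    scores = {f: sum(f in members for members in circles.values())
              for f in friends}
    return max(scores, key=scores.get)

print(best_candidate(circles))  # "sam" spans the most circles
```

Note that "sam" wins even though other friends may know more of your friends in total – it is the spread across circles, not the raw count, that signals a significant other.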

Don’t look to black boxes for “spurious, random correlations that are going to change over time.” Liking curly fries on Facebook now probably means that you saw Dr. Golbeck’s TED Talk about curly fries predicting intelligence.

“Embrace the black box. There is good stuff inside the black box.” Even if you can’t say why, there is still value. “There’s a lot you can do with this: predict personality traits, preferences for drugs, recommendation systems, and all kinds of things about you.” Often you have incomplete data about people and need to impute more information; the black box can predict that information. Don’t throw away your intuitive insights; value the black box outputs.

For the example of post-partum depression, what questions are the women asking? The black box invites us to do more investigation of the data and develop a “qualitative, human understanding of what is happening. This is an interesting and fun space to explore!”

Author Notes:

Jeffrey Henning

Jeffrey Henning, IPC is a professionally certified researcher and has personally conducted over 1,400 survey research projects. Jeffrey is a member of the Insights Association and the American Association of Public Opinion Researchers. In 2012, he was the inaugural winner of the MRA’s Impact award, which “recognizes an industry professional, team or organization that has demonstrated tremendous vision, leadership, and innovation, within the past year, that has led to advances in the marketing research profession.” In 2022, the Insights Association named him an IPC Laureate. Before founding Researchscape in 2012, Jeffrey co-founded Perseus Development Corporation in 1993, which introduced the first web-survey software, and Vovici in 2006, which pioneered the enterprise-feedback management category. A 35-year veteran of the research industry, he began his career as an industry analyst for an Inc. 500 research firm.