Don't Count on Twitter for Presidential Predictions

Many of us are looking to social media to tell us who will win the US presidential election, but we can't rely on Twitter to predict the outcome.

Why not? First, Twitter users en masse are not a representative sample of registered or likely voters. Second, voters' views of candidates tend to be cumulative; tweet sentiments, reactive and time sensitive as they are, don't accurately reflect that. Third, voters are getting information about each candidate and his campaign through reactive “filter bubbles” that affect the way they perceive behavior and issues.

I turn to the ideas of Marshall McLuhan, the well-known scholar who coined the phrases “the medium is the message” and “global village,” for insight on how we should view online communications in this election. Were McLuhan alive today, he would likely call Twitter a “hot medium” because it limits communications to 140 characters and takes almost no effort to consume or react to. Other forms of media, like TV or cable programming (where most viewers catch the presidential debates), are much colder, in McLuhan's terminology, because they require more effort to understand and summarize.

In addition, text-oriented and visual-based social media channels (Twitter being the former) engage different processing areas of the human brain, as discussed in this piece. Consider this in the context of research from the Autism Research Centre at Cambridge University, showing that a woman's brain has a larger corpus callosum than a man's, which means women can transfer data between their right and left hemispheres faster than men. This makes women voters able to process visual information more quickly than their male counterparts.

I believe women, as well as minorities, will determine the election -- however, Twitter will not allow a full expression of their sentiments because of its hot and textual nature, as well as its message-size limitation.

As a result, I place more weight on the opinions expressed in blogs and on Facebook than in tweets. They take more effort to understand, are more selective, and are less likely to be the knee-jerk sentiments of the highly reactive and skewed Twitter audience. In fact, many have challenged the accuracy of the sentiments expressed in social media mentions, and that is no less true of tweets on the presidential election.

Understanding the meaning of face-to-face conversations in English is difficult enough, without getting into highly polarized conversations around presidential politics. And what about the difficulty of processing sentiment expressed across multiple languages, lexicons, and slang terms? Accurate interpretation is much harder still in the realm of social media, where individuals often use the same words to say different things and where the visual cues that provide context are missing -- save for an emoticon or two.

And to my earlier point about minorities -- election watchers widely believe that the Hispanic vote will be a large factor in the race's outcome, yet most Twitter analyses examine English-language comments only. To test this notion, I ran a quick analysis of tweets from the time of the first debate, using Marketwire's Sysomos platform, which delivered results drawn almost entirely from tweets in English. Clearly, the large Hispanic vote is not being expressed or amplified in the main Twitter streams around the debates, which gives us yet another reason to discount Twitter as a predictive tool for national elections.
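To make the language-skew point concrete, here is a toy sketch -- entirely hypothetical, and not how Sysomos actually works -- of the kind of language filter a monitoring platform applies before analysis. It classifies tweets with a crude stop-word heuristic; the marker word lists and function names are my own invention.

```python
# Toy language classifier for tweets (hypothetical stand-in for the
# language filtering a real monitoring platform performs).

SPANISH_MARKERS = {"el", "la", "los", "las", "que", "de", "no", "es", "por", "para"}
ENGLISH_MARKERS = {"the", "and", "is", "of", "to", "for", "in", "that", "not", "this"}

def guess_language(tweet: str) -> str:
    """Guess 'es' or 'en' by counting common function words."""
    words = set(tweet.lower().split())
    es_hits = len(words & SPANISH_MARKERS)
    en_hits = len(words & ENGLISH_MARKERS)
    if es_hits > en_hits:
        return "es"
    if en_hits > es_hits:
        return "en"
    return "unknown"

def language_mix(tweets):
    """Tally how many tweets fall into each detected language."""
    counts = {}
    for tweet in tweets:
        lang = guess_language(tweet)
        counts[lang] = counts.get(lang, 0) + 1
    return counts
```

A real platform would use a proper language-identification model rather than word lists, but even the toy version makes the point: whatever the classifier excludes never shows up in the "main stream" being analyzed, so Spanish-language sentiment can vanish before anyone measures it.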

When we add the filter bubble concept to our witches’ brew of communications, we have even more to worry about when it comes to getting an accurate view of the presidential candidates and their positions from Twitter.

As modern communications become increasingly fragmented and pre-processed, the messaging across mainstream media, search engines, and social media has fallen into highly polarized filter bubbles that provide self-sustaining segmentation and validation for whatever voters choose to believe, as this Nieman Journalism Lab post discusses. Because only the fragments of information that match the filter bubble get amplified, we see reality through rose-colored glasses.

It’s almost impossible to reason someone out of his or her own self-validated perceptions, regardless of evidence to the contrary.

Hot topics for this election, including the economy, healthcare reform, Iran, marriage equality, immigration, and Social Security reform, will look much different to voters depending on the filter bubble in which they live. The result is an ungovernable state of affairs that has the two parties demonizing each other and obfuscating the issues.

Meanwhile, moderate and undecided voters, who will likely tilt the election one way or the other, end up hostages of the online opinion war -- especially if they live in a battleground state such as Ohio or Florida. Monitoring Twitter users in such states is complicated because the majority don’t yet share location data, making any attempt to gauge state-level or district-level outcomes inaccurate. Most Twitter analyses of voter mindset look at national sentiment, even though the election will likely be decided at the lower levels.
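As a rough illustration of why state-level analysis is so thin, here is a minimal sketch that measures what fraction of a tweet sample carries any usable location signal. The field names are loosely modeled on the Twitter API of the era ("coordinates", "place", user-profile location), and the sample data is invented.

```python
# Hypothetical sketch: how much of a tweet sample can be placed in a state?
# Each tweet is a dict; location fields are usually absent.

def location_coverage(tweets):
    """Return the fraction of tweets with any location signal attached."""
    if not tweets:
        return 0.0
    located = sum(
        1 for t in tweets
        if t.get("coordinates") or t.get("place") or t.get("user_location")
    )
    return located / len(tweets)
```

With most tweets returning no location at all, any Ohio-only or Florida-only sentiment cut is drawn from a small, self-selected slice of users -- which is exactly why state-level readings from Twitter are so unreliable.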

It’s entirely possible that Twitter filter bubbles, which only report the intensity of messaging within them, rather than the actual views of the voting public, will skew the monitoring tools for gauging electorate viewpoints around the presidential debates. What's more, since we have no established way to design and operate social media monitoring platforms for accurate results, or even to decide on what accurate results look like (a subject I cover in depth in my book), social media monitoring is particularly unreliable when predicting elections.

And so I argue against Twitter as a presidential predictor. While Twitter can augment what we know about the voting public that expresses itself online, we must not make the mistake of using it as an accurate predictor of the electorate's vote, or as a replacement for more traditional (and much colder, according to McLuhan) market research tools like surveys and phone polling.

Do you trust Twitter's presidential predictions? Read Pierre DeBois' Point, take our quick poll, at right, and share your thoughts on the message boards.


Re: Twitter
  • 10/24/2012 7:40:54 AM

@Lyndon_Henry, the question that immediately popped to my mind is whether folks are buying the masks to make fun of the candidate or to support him. I'd kind of think the former, given the nature of Halloween. This is a tricky one!

Re: Twitter
  • 10/23/2012 7:36:31 PM


Beth writes

Tweets may be fun, annoying, inflammatory, etc., but I don't see them as truly indicative of what will happen at the polls.


In terms of political analytics, here's an interesting competitor to Twitter:

Halloween Mask Sales Show Obama Leads Romney 60 Percent To 40 Percent



Re: Twitter
  • 10/18/2012 12:00:12 PM

Darn Beth!  I wish I had joined Twitter early enough to have gotten @marshall as my Twitter handle! Unfortunately, I waited too long and I actually use @webmetricsguru as my main handle, and @smanalyticsbook as the other one around my book.

BTW, in the first recording there are new tools to find "persuadable voters" (note the Analyst Institute in Washington, DC). What does it mean to be a partisan or an independent? (See 16 minutes in.) An independent who leans Democratic is actually more reliably Democratic than someone who identifies as a Democrat, but not strongly.


Listen to the tapes; there are gems in there.


And yes, I found that most candidates, besides the national ones and a few forward-leaning Senate and governor's races, just don't have the resources and staff, or even the understanding yet, of how to work with this data. Frustrating -- but 99% could not run these experiments even if they did mouth the "big data" hype. Ha!

Re: Twitter
  • 10/18/2012 11:49:00 AM

Thanks for updating the links so they work. I think some of us here would really enjoy them; I'm listening to the first one now and enjoying it. Granted, they do take time to listen to, but some of the people here are data geeks wondering just what kind of data science is being used to gauge campaign messaging and targeting.

Well, a lot of progressive data testing is contained in these audios, and the video I tried to embed had a lot of good stuff too.

Re: Twitter
  • 10/18/2012 11:24:46 AM

@Marshall, good point about funding of analytics work/analysts on state and local campaigns. I wonder if we'll see enough evidence coming out of these current elections to make analytics a priority spend for lower-level campaigns going forward. What do you think?

Re: Twitter
  • 10/18/2012 11:20:14 AM

I'm not sure, Beth!  

Actually, what struck me at NetRoots was groups of statisticians and analysts who had applied best practices of multivariate testing and data mashups to candidate messaging and positioning.

That's what those 4 links I put in earlier this morning were about: actual cases and some of the science/analytics behind what sophisticated campaigns on the Left are doing to counter large amounts of SuperPAC money flowing to the Right.

What also struck me was how most state and local district campaigns don't have the right people on staff to run the experiments, do the data cleaning, and assemble the insights, even if they wanted to. I also don't think many campaigns have the money to pay for it (or don't yet understand its value).



Re: Twitter
  • 10/18/2012 10:58:01 AM

Marshall, would you say that the people who are trying to find answers and be objective are academics or other researchers studying the Twitter phenomenon and its usefulness in measuring social mores, interests, trends, etc.? In other words, they'll be spending far more time and putting far more effort into their projects -- presidential predictions, in this case -- than the average political pundit or pollster?

Re: Bias reaffirmed
  • 10/18/2012 3:35:18 AM

For some reason the links I put in won't display - too bad.  I can try pasting them in as plain text, or people here can let me know if they are interested in the material as it's very timely.

Re: Bias reaffirmed
  • 10/18/2012 3:33:39 AM

first, the polls that include mobile phones are better -- but i understand there are issues with certain laws or regulations around them. i learnt about this when i attended netroots nation in providence, ri, last june (netroots is run by the dailykos founder).

for data geeks (which we all are), i attended and recorded all the polling sessions at netroots -- including those that deal with handling of the data. some of the readers here might be interested in this material just now.

i think a lot of readers here would be interested in these 4 data sessions.

so here it is.


#nn12 panels on data and politics (you may want to listen to the entire recordings, as there is no other way to get the information that i'm aware of, and i may have been the only one recording them)



Re: Neuroscience or not ...
  • 10/18/2012 3:14:27 AM

Agreed, and I'm about to write a post on it. Perhaps something for next Monday's debate would be a good idea too.
