Welcome to the second part of our blog series on analyzing Twitter data!


In our previous post, we presented data from @BarackObama’s account in the form of word clouds and network graphs to show the most common words and word groups. While this is useful in profiling a particular person or brand, organizations normally want to go beyond that and understand how people perceive their products and services.


To do this, we will dive deeper into the data available on Twitter and use sentiment analysis, one of the most common yet complex analyses done on social media data. As defined by Prof. Bing Liu, a professor of computer science from the University of Illinois at Chicago who specializes in this field, “Sentiment analysis, also called opinion mining, is the field of study that analyzes people’s opinions, sentiments, evaluations, appraisals, attitudes, and emotions towards entities such as products, services, organizations, individuals, issues, events, topics, and their attributes.”[i]


One of the most obvious applications of this type of analysis is in brand management or customer service. Organizations may be concerned about the type of publicity (positive or negative) certain products, services, or names receive. Customer service providers also need to take note of what people are saying when they make online reviews or comments about a brand. Without sentiment analysis, it would take too much work to go through all the comments and tweets on the internet just to gauge a brand’s reputation.


For this article, we compared tweets mentioning two well-known comic publishers, Marvel and DC Comics, to gain an understanding of how people perceive them. We collected a total of 1782 tweets mentioning the keywords “marvel” and “dccomics” from Twitter, streamed for over nine hours (1:00 AM to 10:00 AM GMT) on August 19, 2016. Although the sample is very limited, our intention with this article is to show what kind of data can be extracted from tweets and what kind of techniques can be used to analyze this data.





Before applying sentiment analysis, it is important to first explore the dataset to look at attributes in the data and see if they can tell us anything about our tweet sample. This section will show some of the attributes that are available in public Twitter data, and we will use them to attempt to understand the users in the sample.


We can start by finding out what the top platforms used to post the tweets are. This helps answer questions such as, “What does our userbase look like? Are more people tweeting from their mobile phone apps or from browsers?”



From the graph, we can observe that the top 3 platforms for the tweets are the iPhone, Android devices, and the standard Twitter web client, accounting for almost 90% of all tweets. iPhone and Android combined have a total of about 72%, which suggests that the majority of the tweets in our sample come from mobile phone users. Also, if we take a look at all Apple sources combined (iPhone, iPad, and iOS), they account for around 42% of all tweets, more than Android’s 32%. Although more data is needed to form sound conclusions from this, it is this kind of information that can help marketers or product development teams know which types of users to focus on.
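
The platform breakdown above comes from each tweet’s `source` field, which the Twitter API returns as an HTML anchor tag. As a minimal sketch of how such a tally could be computed (the sample values below are illustrative, not from our dataset):

```python
import re
from collections import Counter

def client_name(source_html):
    """Extract the display name from a Twitter 'source' field, which
    arrives as an HTML anchor like '<a href="...">Twitter for iPhone</a>'."""
    match = re.search(r">([^<]+)<", source_html)
    return match.group(1) if match else source_html

sources = [
    '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>',
    '<a href="http://twitter.com/download/android" rel="nofollow">Twitter for Android</a>',
    '<a href="http://twitter.com" rel="nofollow">Twitter Web Client</a>',
    '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>',
]

platform_counts = Counter(client_name(s) for s in sources)
print(platform_counts.most_common(3))
```

In a real pipeline, `sources` would be the `source` field pulled from each streamed tweet.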


Another aspect of Twitter data that can be explored is the temporal aspect. When do people tweet about a topic or brand most often? Can we spot spikes in social media activity after an event? For our sample, after tagging each tweet as one of dccomics, marvel, or both (contained both dccomics and marvel keywords), we looked at the hourly inflow of tweets by publisher.
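
The publisher tagging described above can be sketched as a simple keyword check (the function name is our own):

```python
def tag_publisher(text):
    """Tag a tweet as 'marvel', 'dccomics', or 'both' based on which
    tracked keywords appear in its text (case-insensitive)."""
    text = text.lower()
    has_marvel = "marvel" in text
    has_dc = "dccomics" in text
    if has_marvel and has_dc:
        return "both"
    if has_marvel:
        return "marvel"
    if has_dc:
        return "dccomics"
    return None

print(tag_publisher("Loving the new #Marvel trailer"))         # marvel
print(tag_publisher("@DCComics and @Marvel crossover when?"))  # both
```

Grouping the tagged tweets by the hour of their timestamp then yields the hourly inflow chart.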



Here, we can see that the number of tweets streamed decreases per hour. Our hypothesis is that most of the tweets come from American audiences, who would have been asleep in US time zones during the hours we streamed. However, since many tweets do not contain location information, we can neither confirm nor deny this claim.


If data was collected over a much longer period of time, it might be possible to see seasonality in tweet activity (perhaps peaking during American peak hours) or large buzz spikes due to Marvel/DC announcements.


Note: Since the stream started a few minutes before 1:00 AM UTC and ended a few minutes after 10:00 AM UTC, there are a few trailing tweets at the opposite ends of the graph.


Similar to what we did for the word clouds and network graphs in the previous article, we can look at tweet content based on hashtags or the actual tweet text. Here, we identified the top hashtags to know what people were tweeting about.



There seem to be a lot of nostalgia posts (#tbt for Throwback Thursday) and posts about ongoing TV shows (#agentsofshield, #agentcarter). However, why is #bb8 in the top 5 hashtags? What does Star Wars have to do with Marvel or DC? After further investigation, it turns out that BB-8 made a cameo in one of the #agentcarter-related tweets (see below) that gained popularity.
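
Hashtag counts like these can be produced with a simple regular expression over the tweet text (sample tweets below are illustrative):

```python
import re
from collections import Counter

def extract_hashtags(text):
    """Return all hashtags in a tweet, lowercased for consistent counting."""
    return [tag.lower() for tag in re.findall(r"#\w+", text)]

tweets = [
    "Throwback to the classics #tbt #AgentCarter",
    "New episode tonight! #agentsofshield",
    "#tbt best cameo ever #bb8",
]

hashtag_counts = Counter(tag for t in tweets for tag in extract_hashtags(t))
print(hashtag_counts.most_common(5))
```

Note that Twitter’s API also provides parsed hashtags in each tweet’s entities, which avoids edge cases that a hand-rolled regex can miss.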



Note: Image capture was taken weeks after the tweet stream.


It is important to note that the graph above does not show the count of unique tweets per hashtag. Retweets (or tweets that were quoted or re-broadcast by other users) add to the tweet count, which means that heavily retweeted hashtags quickly rise to the top of the chart.
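
If unique-tweet counts are wanted instead, one rough approach is to strip the leading retweet prefix and deduplicate on the remaining text (in full API payloads, a `retweeted_status` field identifies retweets more reliably; the tweets below are illustrative):

```python
import re

def normalize_retweet(text):
    """Strip a leading 'RT @user: ' prefix so retweets collapse
    onto their original tweet text."""
    return re.sub(r"^RT @\w+: ", "", text)

tweets = [
    "RT @agentcarter: BB-8 visits the set! #bb8",
    "RT @someone: BB-8 visits the set! #bb8",
    "BB-8 visits the set! #bb8",
    "A totally different tweet",
]

unique_texts = {normalize_retweet(t) for t in tweets}
print(len(unique_texts))  # 2
```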


To dig deeper into the opinions and tones of the tweets containing these hashtags, we can move towards sentiment analysis. However, most sentiment analysis models only work on English text. To ensure that we can apply sentiment analysis to most of our data, we checked the distribution of tweets according to Twitter’s language tagging (not the user’s declared language).



Just over 80% of the tweets are in English, which is a sizeable majority. We then filtered the data to analyze only the English tweets.
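
Since Twitter attaches a machine-detected `lang` code to each tweet, this filter is a one-liner (the sample records below are illustrative):

```python
tweets = [
    {"text": "Suicide Squad was wild", "lang": "en"},
    {"text": "Me encantó la película", "lang": "es"},
    {"text": "Agents of Shield is back!", "lang": "en"},
]

# Keep only tweets Twitter tagged as English before running sentiment analysis.
english_tweets = [t for t in tweets if t["lang"] == "en"]
share = len(english_tweets) / len(tweets)
print(f"{share:.0%} of tweets are English")  # 67% of tweets are English
```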





There are many models and methods for sentiment analysis, each with its own advantages and disadvantages. Simple models give a score to each word depending on its inherent meaning and aggregate the scores for a phrase or sentence. More complex models attempt to compute sentiment at the document level (long texts) or incorporate syntax and grammar when classifying text[ii]. The model[iii] we used classifies tweets into polarities (positive, neutral, and negative) and computes their valences (scores). These computations are based on the intensity of positive or negative sentiments attributed to words or phrases and other sentiment clues such as capitalization and punctuation.
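
To illustrate the lexicon-scoring idea behind models like the one we used, here is a toy sketch. The word scores below are invented for illustration; real lexicons are curated and far larger, and VADER adds rules for punctuation, capitalization, intensifiers, and negation on top:

```python
# Invented mini-lexicon; real models use curated dictionaries of thousands of words.
LEXICON = {"good": 1.9, "amazing": 2.8, "bad": -2.5, "suicide": -3.0, "love": 3.2}

def valence(text):
    """Average the lexicon scores of known words, squashed to [-1, 1]."""
    scores = [LEXICON[w] for w in text.lower().split() if w in LEXICON]
    if not scores:
        return 0.0
    raw = sum(scores) / len(scores)
    return max(-1.0, min(1.0, raw / 4))  # 4 ~ max lexicon magnitude

def polarity(text, threshold=0.05):
    """Map a valence score to a positive/neutral/negative label."""
    score = valence(text)
    if score >= threshold:
        return "positive"
    if score <= -threshold:
        return "negative"
    return "neutral"

print(polarity("i love this amazing trailer"))          # positive
print(polarity("the suicide squad reviews look bad"))   # negative
```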


To start, we wanted to know whether a particular publisher gets more positive or negative tweets than the other. We presented this data by hour to see whether the time of day matters.



In general, it would seem that most of the tweets were labeled as positive regardless of whether they were about Marvel, DC Comics, or both. Also, there does not seem to be any clear trend for the distribution of tweet polarity across the time of day. However, since the sample was only taken over a short period of time, these results are inconclusive. More data (longer timeframe, more popular hashtags) may be able to show more interesting patterns, if any.


Another question we can ask is: are there stark differences in polarity distribution across different Twitter clients? Businesses that rely heavily on various mobile or web platforms will ask this type of question often to determine whether their products or services are working properly (few negative comments) across all platforms.



In this example, we filtered the data to only show the top 3 sources by number of users. Similar to the previous question, the distribution of tweet polarities seems to be the same across the top 3 platforms.


Since looking at the sentiment results at the tweet level has not produced much insight, it might be more useful to dig deeper and look at the prevailing words (not just hashtags) included in tweets to take a peek at the topics being discussed. This graph shows us the popular words in our dataset, and the average sentiment score (valence) of the tweets including them, as computed by our model.


Note: Scores closer to 1 denote more positive tweets, while scores closer to -1 denote more negative tweets. Neutral tweets have a score of 0.



As expected, since most of the tweets had positive scores, most of the top words have positive scores as well. However, “suicide” and “squad” have very low sentiment scores. Does this mean that tweets regarding the DC movie Suicide Squad were generally negative? Not necessarily. It is possible that the extremely negative scores are due to the inherent negativity of the word “suicide” in the model’s sentiment dictionary. To test this, we removed the phrase “suicide squad” from the tweets containing it and compared their sentiment valences against those of the original tweets.
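
The phrase-removal step itself is simple: strip the brand name (case-insensitively) before re-scoring. A minimal sketch:

```python
import re

def remove_phrase(text, phrase):
    """Remove a brand phrase (case-insensitive) before sentiment scoring,
    collapsing any doubled whitespace left behind."""
    cleaned = re.sub(re.escape(phrase), "", text, flags=re.IGNORECASE)
    return re.sub(r"\s+", " ", cleaned).strip()

tweet = "The Suicide Squad trailer looks great"
print(remove_phrase(tweet, "suicide squad"))  # The trailer looks great
```

Scoring both the original and the cleaned text with the same model then isolates the phrase’s contribution to the valence.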



From the box plots, we can observe that the median score of the tweets with the phrase “Suicide Squad” removed is close to 0.0 (neutral). The median shifts drastically lower if the phrase is not removed, which suggests that the negative scores can be attributed to the presence of the phrase “Suicide Squad” rather than other words in the same tweet.


Since names with loaded meanings like “Suicide Squad” can affect a model’s scores and conclusions, it is important to exercise caution when interpreting initial results to avoid publishing the wrong conclusions. In fact, depending on the model used, it might be better to remove brand names from the tweets in order to gauge the sentiment of the actual comments within the tweet, independent of the brand name.


In our sample, since most tweets have a positive average sentiment, and “Suicide Squad” tweets become neutral if “Suicide Squad” is removed, it is still unclear what words are prevalent in negative tweets. To dig deeper, we split the tweets into positive and negative groups and analyzed the top words separately. Below are the words found in positive and negative tweets for Marvel and DC Comics, along with their average positive and average negative scores.
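
A sketch of this split-and-count step, using illustrative (tweet, valence) pairs in place of real model output:

```python
from collections import Counter

# Illustrative (tweet, valence) pairs; valences would come from the model.
scored = [
    ("nick blood is great in agents of shield", 0.6),
    ("agents of shield is great", 0.5),
    ("suicide squad review was rough", -0.7),
    ("not a fan of the suicide squad movie", -0.4),
]

STOPWORDS = {"is", "in", "of", "the", "a", "was", "not"}

def top_words(tweets, n=3):
    """Count non-stopword tokens across a group of tweets."""
    counts = Counter(
        w for text in tweets for w in text.split() if w not in STOPWORDS
    )
    return counts.most_common(n)

positive = [t for t, v in scored if v > 0]
negative = [t for t, v in scored if v < 0]
print("positive:", top_words(positive))
print("negative:", top_words(negative))
```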



Looking at the text of actual tweets shows that words like “review”, “movie”, “harley”, and “quinn” are mostly included in tweets mentioning Suicide Squad, and were therefore also tagged as negative because of the inherent negative bias of the word “suicide”. The word “blood”, on the other hand, although not usually associated with positive emotion, is frequently mentioned in positive tweets. It turns out that these tweets mostly refer to Nick Blood, an actor in the positively-tweeted TV series “Agents of Shield.”


Oddly enough, Shrek is in the list of positive DC Comics tweets even though he is not related to either publisher. After checking the individual tweets containing the keyword ‘Shrek’, we found that a user photoshopped Jennifer Lawrence to have green skin, stated that she would be amazing as Poison Ivy, and mentioned DC Comics in the tweet. Someone else then retweeted this photo (see below) with the caption “The new Shrek looks so good”. This sarcastic retweet gained popularity and was retweeted many times. Since the model scores the word ‘good’ as positive, Shrek appears in the top 10 positive list for DC Comics.





One of the limitations of our sentiment analysis model is that it is not adept at detecting sarcasm and other undertones. The human element is still needed to cross-check odd data and spot misinterpretations that cause shifts in sentiment scores.





In summary, we found that the majority of the sample tweets we gathered were positive, and that there are no obvious differences in the proportion of positive to negative tweets between Marvel and DC Comics. We also found some interesting tweets by looking closely at the keywords not commonly associated with certain sentiments. Several of the analyses might yield more distinguishable patterns when applied to larger datasets, but nevertheless, we hope that we have provided a good overview of the various analyses applicable to Twitter data.


Here’s a recap of the analyses we made and their possible use cases for businesses:

Source of Tweets
Prioritize the development of particular platforms over others (e.g., Android, iOS). Determine problems in specific platforms through sentiment analysis.

Time of Tweets
Plot the quantity of tweets over a period of time to determine the peak hours. Use live streaming to gauge public sentiment right after an announcement or event launch.

Hashtag/Keyword Tracking
Find words associated with a company/brand/product name. Gauge the sentiment of tweets containing these hashtags/keywords. Split tweets into positive and negative groups and get the top keywords for each to quickly spot which aspects of your brand make customers satisfied or unsatisfied.

Language of Tweets
Establish the languages used by the majority of the company’s reach to determine which customer groups to focus on.

Name or Brand Sentiment
Isolate the effect of the company/brand/product name’s inherent sentiment from the rest of the words to determine the true sentiment of tweets.




Social media analysis provides many different perspectives for analyzing a brand’s reputation and internet presence. As shown by our recap above, the insight social media can provide is diverse. Like in any analytics domain, the analyst’s creativity is the limit. At the end of the day, all these types of analyses boil down to a single question: how do we improve the way we do our business? Although not everyone may be in the business of publishing comics, we hope that these articles have given you ideas to start applying social media analytics in your organization.


If you are interested in deeper discussions on social media analytics, or in how analytics can help your business in general, feel free to contact us, and stay tuned for more articles!


[i] Liu, B. (2012). Sentiment Analysis and Opinion Mining. Morgan & Claypool Publishers. Retrieved September 20, 2016.

[ii] Pang, B. and Lee, L. (2008). Opinion Mining and Sentiment Analysis. Foundations and Trends in Information Retrieval, Vol. 2, No. 1–2. Retrieved September 20, 2016.

[iii] Model used: Valence Aware Dictionary and sEntiment Reasoner (VADER) v0.5.