Using Sentiment Analysis to Understand Public Policy Nicknames:…

Aliah Achilles

In this study, we compared the social media net sentiment of one policy with two names. Specifically, we analyzed Obamacare and the Affordable Care Act (ACA) to understand how social media users engaged with each term from March 2010 to March 2017. Net sentiment was measured using a sample of over 50 million micro-blogs, analyzed through a combination of digital instruments and human validation. We found a significant difference between the engagement and sentiment attached to the two terms: the ACA performed significantly better than Obamacare, despite Obamacare's far higher conversation volume. On average, the ACA attracted 26% less negative sentiment than Obamacare. These findings emphasize the need for care when attaching nicknames to public policy and carry implications for policymakers and politicians.

Introduction

This study explored the case of the Affordable Care Act (ACA), nicknamed Obamacare after the former president of the United States, Barack Obama, as a form of policy branding (Busby & Cronshaw, 2015). Obamacare has garnered significant attention as a notable 21st-century case of nicknaming in public policy. The widespread attention it received makes it an important case for assessing how nicknaming a policy can impact sentiment (Baker, 2012; Busby & Cronshaw, 2015; Oberland, 2012). The idea of nicknaming public policies and places is not new. Today, however, we have better tools for understanding the impact of nicknaming in public policy: public opinion can be evaluated using modern sentiment analysis methods that treat online data as a core source of public sentiment (Tumasjan et al., 2010).
The surge in social media usage over the past decade has meant that public involvement and opinion concerning political matters and public policy have grown exponentially (Williams & Gulati, 2013). Tumasjan et al. (2010) used the German federal elections to illustrate how the micro-blogging platform Twitter can be used as a source of election forecasting, arguing that Twitter sentiment regarding the parties fairly accurately represented overall public sentiment. It has further been noted that branding in public policy is increasing (Marsh & Fawcett, 2011; Ogden et al., 2003). Despite this, research into political marketing is still in its infancy, with little research going beyond party and candidate branding (Eshuis & Klijn, 2012; Marsh & Fawcett, 2011; Scammell, 2014). Branding, in particular, has become a focal point for politicians and political parties in positioning themselves, improving their image and fostering support (Eshuis & Klijn, 2012; Marsh & Fawcett, 2011; Schneider, 2008; Serazio, 2015). Another locus of branding in the public sector is place branding, such as the branding of cities (Ashworth, 2009; Mayes, 2008). Only a handful of cases of public policy branding have been evaluated, and a noticeably missing area of study is the application of personality branding to public policy (Marsh & Fawcett, 2011; Ogden et al., 2003).
The measure of public opinion in governance is primarily conducted using the principles of sentiment analysis. Wilson et al. (2005) defined sentiment analysis as the activity of distinguishing negative and positive evaluations and emotions. Sentiment analysis is critical to marketers as they can use opinion mining to collect and process extensive data on brand perception and political and product reviews to develop better marketing strategies (Cambria et al., 2013). Of late, public policy and marketing researchers have been very interested in social media-based sentiment analysis due to the valuable insights gained from the associated platforms (Ghiassi et al., 2013; Pak & Paroubek, 2010; Tumasjan et al., 2010). Sentiment analysis has contributed to predictions of political elections and illustrates the significant use of Twitter in understanding political topics (Gayo et al., 2011).
This paper proposes the following research question: is there a difference between public policy net sentiment on Twitter for a single policy with two different names, namely the Affordable Care Act and Obamacare? We also sought to understand volume and polarity differences in asking this question. The study aimed to provide insight into the emerging concept of public policy branding by understanding its impact on public sentiment. This study contributes to the emerging field of policy naming and political marketing by showing how nicknames can significantly impact public policy sentiment and success (Ashworth, 2009). In addition, the study illustrates the effects that personality branding can have on public sentiment (Keel & Nataraajan, 2012; Tumasjan et al., 2010). Marketers can analyze this information for use in business practices, such as when deciding on new business policies that will have a public impact or introducing new products. Here, marketers can note the possible effects of personality branding when determining an implementation strategy (Eshuis & Klijn, 2012; Keel & Nataraajan, 2012).

Literature review

Nicknaming in American politics

Nicknaming in modern politics is not a new concept. As early as 1955, Shankle reported that nicknames were widely used in American political society (Shankle, 1955) and in political discourse to address policies, presidents and political brands (Gladkova, 2002). The New Deal (Jeffries, 1990) and Reaganomics (Bartlett, 2009) were names attached to economic policies introduced by former presidents Franklin D. Roosevelt and Ronald Reagan. The nickname Reaganomics was featured on a 1981 Time magazine cover (Bartlett, 2009). Another past American president, George Washington, was nicknamed "The Father of our Country," a nickname he gained because he was seen to have revolutionized the American government, its offices, practices and institutions (Gladkova, 2002).
Nicknames are often associated with the sponsor of a respective policy. Gladkova (2002) found that nicknames were sometimes used as an optional term of reference that captures characteristics of the individual, if relevant. When the given nickname is used instead of the original name of the president, sentiment is often evoked (whether good or bad) based on an individual’s attitude to the nickname holder. Nicknames might be created to influence public opinion (Gladkova, 2002).

Nicknames as political branding

In recent years, the concepts and tools of marketing have been progressively adopted by political actors (Eshuis & Klijn, 2012, p. 2; Scammell, 2014, p. 33). However, the primary focus of political marketing has been on candidates during election periods and its function in influencing public perceptions and garnering votes (Schneider, 2008; Eshuis & Klijn, 2012, p. 2; Marsh & Fawcett, 2011; Serazio, 2015). Whereas research into marketing in the private sector has continued to thrive, fewer studies have been concerned with using marketing tools and concepts in public policy (Eshuis & Klijn, 2012; Marsh & Fawcett, 2011). Further, little examination of specific tools such as branding and personality branding in public policy has been undertaken. Holt (2006) claimed that branding was integral to capitalism; its growing role in politics should therefore not be overlooked. The subject has drawn both negative and positive arguments, but it is collectively accepted that a closer examination of the topic is essential (Marsh & Fawcett, 2011).
The political world sees the value and impact that employing instruments such as branding can have on election outcomes, policy success rates, public perceptions and public sentiment (Eshuis & Klijn, 2012). The recent events of the EU Brexit referendum and the 2016 US presidential election of Donald Trump are clear demonstrations of the impact of branding on political outcomes (Inglehart & Norris, 2016). Trump used an image of the “honest, outspoken” candidate coupled with fiery rhetoric, such as his extreme position on immigration policies, as a significant component of his political brand, further dominating media attention from both sides of the political spectrum (Oates & Moe, 2016). The UK Independence Party used similar tactics of populism and authoritarianism to spark anti-European and immigration debates in the UK, forcing the Conservative Party to call for the Brexit referendum (Inglehart & Norris, 2016). These examples also highlight the impact of branding on media exposure, which correlates with an increase in public participation in governance processes, and the particular value of social media as a tool to examine public opinions (Agarwal et al., 2011; Inglehart & Norris, 2016).
Public policy branding is used to improve the success rates of a governance policy in terms of public sentiment and acceptance of the terms and adoption rates of the policy (Marsh & Fawcett, 2011; Ogden et al., 2003). Branding may alter the perception and value of a product, place or policy without changing its attributes (Ashworth, 2009; Eshuis & Klijn, 2012; Keel & Nataraajan, 2012). However, it must be noted that, while branding may increase awareness and conversation concerning policy, it can also create contention that would not necessarily have occurred otherwise (Marsh & Fawcett, 2011).

The Affordable Care Act or Obamacare

The Patient Protection and Affordable Care Act (PPACA), or simply the Affordable Care Act (ACA), came about in response to the large population of uninsured residents in the US in the early-to-mid 2000s (Oberland, 2012). The act was soon dubbed Obamacare as a term of contempt by actors opposed to the Barack Obama administration. However, the term was later embraced and endorsed by Obama himself in an attempt to appropriate the phrase (Baker, 2012). This led to an upsurge in the use of the term on both sides of the political sphere, rendering the official name (the Patient Protection and Affordable Care Act) less common (Baker, 2012) as the policy was rigorously debated during the US elections (Hall & Lord, 2014).

Theoretical foundation

Evans and Fill (2000) established the multi-step flows of the communication process, which is influential in public relations theory (Jensen, 2011). The initial step of the process is a sender conveying information to an opinion former, an opinion leader or a receiver. Receivers can be intermediary or unintended (McGarty, Lala & Douglas, 2011). The opinion former or leader is a possible conduit relaying the information to the receiver. An opinion leader is an informal source, defined by Evans and Fill (2000) as one who processes and confers information to others in addition to obtaining it; opinion leaders therefore influence others. The process is completed by a feedback loop, as illustrated in Figure 1.
Figure 1. Multi-step flows of communication (Evans & Fill, 2000)
Although other generic communication models exist (Fletcher & Melewar, 2001; Stern, 1994), the multi-step flow of communication is a valuable framework for understanding the Obamacare nickname, as both official and unofficial sources were used in generating the momentum behind the nickname.

Methodology

The methodology to analyze social media sentiment for Obamacare and the ACA used both digital and human instruments. Studies of sentiment have traditionally relied on backward-looking indicators, such as questionnaires or focus groups (Pace et al., 2017; Steiniger, 2016). For this study, however, unsolicited historical social data were examined, allowing for non-coercive responses that captured the lived sentiment of those who had expressed opinions about Obamacare and the ACA (Z. Wang et al., 2018). Twitter has already gained momentum as a source of widespread opinion on governance issues, including public policy sentiment (Pak & Paroubek, 2010; Tumasjan et al., 2010), and was the source for this study. Typical sentiment analysis uses only NLP (natural language processing; Bifet & Frank, 2010; McKenzie & Swails, 2016; Tumasjan et al., 2010). In this study, both NLP and human validation were used to increase the accuracy of the analysis (Lappeman et al., 2020, 2022; McKenzie & Swails, 2016). Micro-sampling was combined with manual validation to exploit the respective strengths of computers (speed and volume) and humans (interpretive accuracy) (Lappeman et al., 2020).
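In outline, the hybrid approach described above can be sketched as follows. This is a minimal illustration only: the lexicons, scoring rule and sampling rate are assumptions, not the study's actual instruments.

```python
import random

# Toy sentiment lexicons -- illustrative only, not the study's NLP model.
POSITIVE = {"great", "love", "affordable", "helped"}
NEGATIVE = {"disaster", "repeal", "failure", "expensive"}

def machine_score(text):
    """Crude lexicon score: positive word hits minus negative word hits."""
    words = set(text.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

def route_for_validation(tweets, sample_rate=0.05, seed=42):
    """Score every tweet by machine (speed and volume), then draw a
    random micro-sample for human raters to validate (accuracy)."""
    scored = [(t, machine_score(t)) for t in tweets]
    k = max(1, int(len(scored) * sample_rate))
    human_sample = random.Random(seed).sample(scored, k)
    return scored, human_sample
```

The division of labor mirrors the paper's rationale: the machine pass covers the full volume, while the human pass corrects for slang, sarcasm and ambiguity in a statistically useful subsample.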
The initial phase of analysis involved accessing a database of Twitter's microblogs (called tweets) using an application programming interface (API; Johnson et al., 2012). For this study, Gnip was used to access Twitter's API. Gnip is a social media API aggregation company that collects social media data from channels such as Twitter and normalizes the data before feeding it back to its subscribers (H. Wang et al., 2012). H. Wang et al. (2012) used Gnip to access Twitter data for a real-time analysis of public sentiment expressed on Twitter toward US presidential candidates in the 2012 election, demonstrating the value of the Gnip platform and its data for gauging the relationship between expressed public sentiment and electoral events. The Gnip subscription allowed us to search for the keywords "Obamacare" and "Affordable Care Act."
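Once tweets are retrieved, each one can be bucketed by which search term it contains. The matching logic below is a sketch; in particular, how the study handled tweets mentioning both terms is not specified, so counting such tweets in both buckets is an assumption.

```python
import re

# Case-insensitive patterns for the study's two search keywords.
OBAMACARE = re.compile(r"obamacare", re.IGNORECASE)
ACA = re.compile(r"affordable care act", re.IGNORECASE)

def bucket(tweet):
    """Return which policy name(s) a tweet mentions. Tweets matching
    both terms land in both buckets (an assumption; the paper does not
    describe its overlap handling)."""
    hits = []
    if OBAMACARE.search(tweet):
        hits.append("Obamacare")
    if ACA.search(tweet):
        hits.append("ACA")
    return hits
```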
Two significant events framed the study's timeframe. The ACA was signed into law in the United States on 23 March 2010, marking the most significant amendment to US health policy since 1960 (Shaw et al., 2014). In March 2017, a vote to repeal and replace the ACA was unsuccessful (Goldstein et al., 2017). A significant volume of online discussion of the policy occurred during this period (Hopper, 2015). While no exact standard exists for the sample size required to analyze tweets accurately, Palguna et al. (2015) suggest increasing the sample to the largest available to increase both representation and accuracy. This study's sample comprised over 50 million tweets posted between March 2010 and March 2017.
Certain demographic filters were applied once the raw data was retrieved from the platform. The platform filtered out any data (tweets) from outside the USA, as well as repeats and bots. In addition, only unique authors were included in the analysis, so repeated sentiments from the same author within a week were counted only once, even if they commented daily. This phase allowed for the measurement of conversation volume. A sample was then sent for human rater verification, using a method closely aligned with the principles of interrater reliability testing (Armstrong et al., 1997; Lappeman et al., 2020; Marques & McCall, 2005). Human rater verification made it possible to interpret words and sentences with double meanings and to identify unclear objects of reference (Ghiassi et al., 2013; Pang et al., 2002; Turney, 2002). Furthermore, humans are superior to machines at gauging the strength of sentiment and at navigating language features such as local slang and sarcasm, which can decrease the accuracy of NLP-only methods (Agarwal et al., 2011; Ghiassi et al., 2013; Pang et al., 2002; Turney, 2002; Wilson et al., 2005).
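The weekly author de-duplication step might look like the sketch below. The field names and the use of ISO-week keys are assumptions about implementation detail the paper does not specify.

```python
from datetime import datetime

def dedupe_weekly(tweets):
    """Keep at most one tweet per author per ISO week, so daily repeats
    from the same account count only once toward weekly volume.
    Expects dicts with 'author' and 'created_at' (ISO date string)."""
    seen = set()
    kept = []
    for t in tweets:
        date = datetime.fromisoformat(t["created_at"])
        year, week, _ = date.isocalendar()
        key = (t["author"], year, week)
        if key not in seen:
            seen.add(key)
            kept.append(t)
    return kept
```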
The verification process involved trained raters from the research company BrandsEye, who were given a random sample of the already filtered tweets. The raters rated the content according to three criteria. First, relevance was verified to ensure that the mention was about the relevant topic (Obamacare); relevance was rated nominally by a "yes" or "no" score. Second, sentiment was verified according to the author's opinion on the topic (positive, negative or neutral toward Obamacare), rated on a seven-point Likert scale with an eighth option for "mixed sentiment." Third, the rater contextualized the topic relevance concerning the main topic of analysis (is the author discussing the cost, the ethics, the government, politicians, and so on, regarding Obamacare); the topic was rated nominally by a "yes" or "no" score.
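The three criteria can be represented as one record per rated tweet. The collapse of the seven-point scale into polarity below (1-3 negative, 4 neutral, 5-7 positive) is an assumed convention; the paper states only that the scale ran from negative through neutral to positive, with an eighth "mixed" option.

```python
def classify_sentiment(likert):
    """Collapse a 1-7 Likert rating (or the string 'mixed') into a
    polarity label. The 1-3 / 4 / 5-7 split is an assumption."""
    if likert == "mixed":
        return "mixed"
    if likert < 4:
        return "negative"
    if likert > 4:
        return "positive"
    return "neutral"

def make_rating(relevant, likert, topic_match):
    """One human rater's verdict on a single tweet."""
    return {
        "relevant": relevant,                     # criterion 1: yes/no
        "sentiment": classify_sentiment(likert),  # criterion 2: polarity
        "topic_match": topic_match,               # criterion 3: yes/no
    }
```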

Results

Volume of conversation

Our examination revealed the extent of online attention devoted to discussing public policies. A total of 50,649,339 tweets mentioning either Obamacare or the ACA were analyzed from 2010 up to and including 2017. Of this, Obamacare was mentioned 44,464,153 more times than the ACA: Obamacare dominated 93.89% of the conversation on Twitter, while the ACA accounted for only 6.11%. The highest conversation volume occurred between 2013 and 2017, as seen in Figure 2.
Figure 2. The volume of conversation between 2010 and 2017. Noteworthy socio-economic events of public interest shaped these years. On the 1st of October 2013, the health insurance exchanges were scheduled to open, and the policy's provisions went into effect. In addition, a campaign titled "#MakeDCListen" was launched by Ted Cruz to defund Obamacare. Finally, 2013 also saw a US government shutdown in October. Another year with a conversation volume peak was 2017, when Donald Trump began his presidential term by attacking parts of the healthcare policy. In addition, Trump commented on replacing the healthcare bill and promised to insure everyone under his proposed replacement bill. In February 2017, Republican Party leaders outlined plans to replace Obamacare, but the vote to repeal it failed.
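The volume shares reported above follow directly from the two published figures: given the combined total and the difference in mentions, the per-term counts and percentages can be recovered.

```python
total = 50_649_339       # tweets mentioning either term, 2010-2017
difference = 44_464_153  # Obamacare mentions minus ACA mentions

# Solve the two-equation system: O + A = total, O - A = difference.
obamacare = (total + difference) // 2
aca = total - obamacare

print(f"Obamacare: {obamacare:,} tweets ({obamacare / total:.2%})")
print(f"ACA:       {aca:,} tweets ({aca / total:.2%})")
# Obamacare: 47,556,746 tweets (93.89%)
# ACA:       3,092,593 tweets (6.11%)
```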
The most significant volume of conversation about Obamacare occurred in 2013. The spike in this year is attributable to the socio-economic events of October that year. In 2013, Obamacare was one of the most tweeted topics (CHCF, 2014), mentioned 13,476,873 times on Twitter in that year alone. Tweets relating to Obamacare totaled over 13.2 million in the month of pre-enrollment, and for nine days in October 2013, #Obamacare trended in the top 10 hashtags on Twitter. In the same year, the ACA was mentioned only 564,735 times. Whether used primarily by the opposition or by those in favor of the policy, it is clear that the nickname Obamacare helped to drive awareness, which was particularly important in the month of pre-enrollment, when the aim was to get the public to sign up for the Obamacare healthcare plan (Jaffe, 2014).
However, the most significant volume of tweets for the ACA occurred in 2017 in the first three months alone. This is proposed to be a result of the start of Trump’s presidential term, in which he began by making bold claims against the ACA and his intention to replace it. It is notable, however, that, even in years of lower conversation volume for Obamacare, the volume was still significantly above that of the highest years of ACA conversation. Many of the events correlating to spikes in the amount of conversation were incited by Republican Party leaders. These socio-economic events contributed extensively to fluctuations in the sentiment attached to each policy name. This may indicate that popularizing a policy using a nickname makes it easier for an opposition member to latch onto the term, using its popularity to their benefit to drive conversations on their issues.

Sentiment of Tweets

It is important to note that, despite the two names referring to the same policy, 35% of Americans do not know that Obamacare is a nickname for the Affordable Care Act (New York Times, 2017). A tweet published by Keith Ellison in March 2017, saying, "Personally, I don't make bets, but if I did, I'd bet that GOP is going to replace Obamacare with the Affordable Care Act," was retweeted 1,267 times and received 11 replies (Twitter.com, 2018). The tweet is shown in Figure 3. Although there is some agreement that the comment may have been made in jest, the opportunity for confusion is evident.
Figure 3. Tweet published in March 2017 (Twitter.com, 2018).
The confusion between the act’s interchangeable names posed the threat of delivering an uninformed and non-uniform message while receiving significant attention and amplifying conversation.
The Republican Party and its supporters contributed most of the negative sentiment, and the Democratic Party and its supporters contributed most of the positive views. On average, citizens expressed 26% more negative sentiment toward Obamacare than toward the ACA. The greater extent of the negative sentiment is mainly due to the Republican Party and its supporters contributing most of the conversation online.
In total, 16,070 tweets were sent for human rater verification, which allowed for a 95% confidence level with an average margin of error of 2%–4% over the entire period sampled. Data was verified only for specific conversation periods to gauge whether public sentiment toward these entities changed year-on-year.
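The reported 2%–4% margin per period is consistent with the standard worst-case formula for a sample proportion, m = z·sqrt(p(1−p)/n) with z = 1.96 and p = 0.5. The per-period sample sizes below are hypothetical, chosen only to show the plausible range; the paper reports only the overall total of 16,070 verified tweets.

```python
import math

def margin_of_error(n, z=1.96, p=0.5):
    """Worst-case margin of error for a sample proportion at 95% confidence."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical per-period sample sizes for illustration.
for n in (600, 2_000, 16_070):
    print(f"n = {n:>6}: ±{margin_of_error(n):.1%}")
# n =    600: ±4.0%
# n =   2000: ±2.2%
# n =  16070: ±0.8%
```

A yearly subsample of roughly 600–2,000 verified tweets would thus yield the 2%–4% margins the study reports.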
The net sentiment (positive minus negative sentiment) for Obamacare never reached a net positive (positive exceeding negative sentiment) in the period. Conversely, ACA had periods of both positive and negative sentiment. On average, the data showed that Obamacare experienced 26% more negativity than ACA over the sample period (Figure 4).
Figure 4. The difference in net sentiment over time.
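The net-sentiment measure defined above is simply the share of positive mentions minus the share of negative mentions. The counts in the usage line are made up for illustration; they are not the study's data.

```python
def net_sentiment(positive, negative, total):
    """Net sentiment: positive share minus negative share of all mentions.
    Negative values mean negativity exceeds positivity."""
    return (positive - negative) / total

# Made-up illustrative counts -- not the study's data.
print(net_sentiment(positive=120, negative=300, total=1_000))  # -0.18
```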
The ACA enjoyed overall positive sentiment in the introductory phase of the healthcare bill, up until its launch in 2013. A great deal of negative sentiment was expressed after the launch, as tens of thousands of people failed to access the healthcare.gov website due to technical issues. Ted Cruz's campaign to defund the act further encouraged this negative sentiment. Once many of these issues were overcome, the Affordable Care Act continued to enjoy positive views until the announcement of Trump's presidential candidacy. The ACA received negative views after the announcement and throughout the 2016 elections, which the Republicans dominated. Despite Trump's victory in the 2016 elections and his promises to repeal and replace the ACA, positive sentiment increased during this period, suspected to be due to Democrats defending the ACA to ensure its survival.
Although the ACA enjoyed a surge of positive net sentiment on Twitter in 2011, Obamacare experienced largely negative sentiment, with a gap of approximately 80% between the two in the same year. It is essential to remember the origin of the name when analyzing the sentiment attached to Obamacare: the nickname originated with critics of the ACA. Furthermore, about 10% of the overall conversation about Obamacare was driven by an opposition group known as the "top conservatives on Twitter" (#tcot), comprising approximately 71,752 unique authors. This make-up of the tweeters may have contributed extensively to the negative sentiment attached to Obamacare. Throughout the period, Obamacare received relatively negative sentiment on Twitter, and negativity increased during socio-economic events, reaching its highest level after Trump announced his candidacy.

Conclusion and future research

This research was motivated by the need to understand the impact of public policy nicknames on public sentiment. The study analyzed the conversation volume and net sentiment of a public policy with two branded identities. The value of this study is directly linked to the growing prominence of marketing in politics (Eshuis & Klijn, 2012; Scammell, 2014). In particular, branding in governance has evolved from the predominant use in political party and politician brands to increasing use in city branding and less commonly in public policy (Marsh & Fawcett, 2011). Furthermore, personality branding in the political field has centered on place branding, but the recent case study of the Affordable Care Act indicated that its use had expanded into the realm of public policy (Ashworth, 2009; Baker, 2012).
From our analysis, although Obamacare held the most significant conversation volume (attention), the term generally attracted more negative sentiment than the ACA. This negativity may be a result of the attachment to Barack Obama, who had many critics and opponents in the Republican Party and among its supporters. It may also stem from the name's origin, as critics of the ACA used it first. Although more research is needed on this subject, our finding implies that public policy nicknames may generate more attention than neutral, traditional policy names but may also disadvantage policymakers. The ACA, a more neutral name, received relatively more positive sentiment (and less negative sentiment) than Obamacare. In addition, the findings show that the use of nicknames may result in public confusion, evidenced by many believing the ACA and Obamacare to be two separate policies. Figure 5 shows the multi-step communication flows by Evans and Fill (2000), adapted for this study. The red lines show the communication of the ACA to opinion leaders and receivers. Not long after the introduction of the ACA as a topic of conversation, opposition opinion leaders began to use the Obamacare nickname for the ACA, and the Obama administration then adopted it. Obamacare became the message, and the ACA lost its clarity in communication to receivers. The feedback loop initially did not create enough clarity to deter adoption of the nickname.
Figure 5. Adapted multi-step flows of communication.
Despite the benefits of branding, policymakers should take these findings as a warning to be careful and purposeful when using nicknames. More specifically, they need to consider whether a given personality nickname will improve or discredit the policy. Using a controversial personality has proven effective in achieving attention, but not necessarily positive sentiment; in particular, personalities tied to specific political persuasions risk negativity from opposing parties. Policymakers and political marketers thus need to take control of any nicknaming from the start and ensure the name is not appropriated or sourced negatively. This study implies that policymakers would benefit from selecting a more neutral name if their goal is positive sentiment. Marketers also benefit from these findings, as they are already familiar with collecting and processing online information (Cambria et al., 2013). The use of sentiment analysis to inform effective marketing strategies and branding is already established; applying it to nicknaming offers a platform for better understanding branding at a broader level. Specifically, attention and sentiment need to be tracked and managed at a brand level to understand brand health fully.
This study was presented with certain limitations and avenues for further research. First, the sample tweets were limited to those specifically containing the specified policy search terms. This means that tweets which concerned the policy but did not mention it by name were not assessed. This boundary includes Twitter threads, which have become a popular microblogging style (Zubiaga et al., 2016). The initial tweet may contain the name of the policy, but replies to that tweet may not continue to mention it by name (Tumasjan et al., 2010; Zubiaga et al., 2016). Future researchers could assess the impacts of such replies to add new depth to the analysis. The second limitation concerns the existence of crucial political influencers who can dominate and provoke a significant portion of the conversation (Conover et al., 2011). These microbloggers are often referred to as the “top conservatives on Twitter” and use a hashtag (#tcot; Conover et al., 2011). Their political views on Obamacare and the ACA present an important crop of tweets for future researchers to examine.
Third, it has not been established whether the distribution of sampled tweets correlates with the representative breakdown of Democrats to Republicans. Therefore, a larger portion of Republican users on Twitter may have subsequently contributed to the more significant negative sentiment toward Obamacare. Future research could determine the role played by the demography of Twitter users on sentiment analysis. It is further notable that the highest volume of conversation occurred in a period when the political landscape of the United States was deviating toward a Republican-driven conversation. This is indicated by the major spikes in conversation occurring when Republican presidential candidates, Ted Cruz and Donald Trump, announced plans to defund or replace the act. This study was conducted during such a political transition. A broader timeframe of analyzing tweets concerning Obamacare may present more positive sentiment toward the policy in periods when Democratic views dominated the conversation. Finally, popularity ranking and algorithms influencing social media feeds impact what political content is consumed. As research on this subject improves, so will our understanding of bias in social media sentiment data (Shmargad & Klar, 2020).

Disclosure statement

No potential conflict of interest was reported by the author(s).

References

Agarwal, A., Xie, B., Vovsha, I., Rambow, O., & Passonneau, R. (2011). Sentiment analysis of twitter data. Proceedings of the workshop on languages in social media. Association for computational linguistics. June, 2011. 30–38. [Google Scholar]
Armstrong, D., Gosling, A., Weinman, J., & Marteau, T. (1997). The place of interrater reliability in qualitative research: An empirical study. Sociology, 31(3), 597–606. https://doi.org/10.1177/0038038597031003015 [Crossref], [Google Scholar]
Ashworth, J. (2009). The instruments of place branding: How is it done? European Spatial Research and Policy, 16(1), 9–22. https://doi.org/10.2478/v10105-009-0001-9 [Crossref], [Google Scholar]
Baker, P. 2012. Democrats embrace once pejorative ‘Obamacare’ Tag. The New York Times (New York). August, 3: A1. [Google Scholar]
Bartlett, B. (2009). The new American economy: The failure of Reaganomics and a new way forward. St. Martin’s Press. [Google Scholar]
Bifet, A., & Frank, E. (2010). International conference on discovery science: Sentiment knowledge discovery in twitter streaming data.1-15. Springer. [Google Scholar]
Busby, R., & Cronshaw, S. (2015). Political branding: The tea party and its use of participation branding. Journal of Political Marketing, 14(1–2), 96–110. https://doi.org/10.1080/15377857.2014.990850 [Taylor & Francis Online], [Web of Science ®], [Google Scholar]
Cambria, E., Schuller, B., Xia, Y., & Havasi, C. (2013). New avenues in opinion mining and sentiment analysis. IEEE Intelligent Systems, 28(2), 15–21. https://doi.org/10.1109/MIS.2013.30 [Crossref], [Web of Science ®], [Google Scholar]
CHCF. (2014). Observations on Twitter and the ACA: Taking the pulse of obamacare. California HealthCare Foundation. https://www.chcf.org/wp-content/uploads/2017/12/PDF-ObservationsTwitterACA.pdf) [Google Scholar]
Conover, M., Ratkiewicz, J., Francisco, M. R., Gonçalves, B., Menczer, F., & Flammini, A. (2011). Political polarisation on twitter. ICWSM, 133, 89–96. https://doi.org/10.1609/icwsm.v5i1.14126 [Google Scholar]
Eshuis, J., & Klijn, E. H. (2012). Branding in governance and public management. Routledge. [Crossref], [Google Scholar]
Evans, M., & Fill, C. (2000). Extending the communication process: The significance of personal influencers in UK motor markets. International Journal of Advertising, 19(3), 377–396. https://doi.org/10.1080/02650487.2000.11104807 [Taylor & Francis Online], [Google Scholar]
Fletcher, R., & Melewar, T. C. (2001). The complexities of communicating to customers in emerging markets. Journal of Communication Management, 6(1), 9–23. https://doi.org/10.1108/13632540210806900 [Crossref], [Google Scholar]
Gayo-Avello, D., Metaxas, P. T., & Mustafaraj, E. (2011). Limits of electoral predictions using Twitter. In Proceedings of the Fifth International AAAI Conference on Weblogs and Social Media (pp. 490–493). Association for the Advancement of Artificial Intelligence.
Ghiassi, M., Skinner, J., & Zimbra, D. (2013). Twitter brand sentiment analysis: A hybrid system using n-gram analysis and dynamic artificial neural network. Expert Systems with Applications, 40(16), 6266–6282. https://doi.org/10.1016/j.eswa.2013.05.057
Gladkova, A. (2002). The semantics of nicknames of the American presidents. In Proceedings of the 2002 Conference of the Australian Linguistic Society (Vol. 2, pp. 1–11).
Goldstein, A., DeBonis, M., & Snell, K. (2017). House Republicans release long-awaited plan to replace Obamacare. The Washington Post. Retrieved March 17, 2018, from https://www.washingtonpost.com/powerpost/new-details-emerge-on-gop-plans-to-repeal-and-replace-obamacare/2017/03/06/04751e3e-028f-11e7-ad5b-d22680e18d10_story.html?utm_term=.caafba36d282
Hall, M. A., & Lord, R. (2014). Obamacare: What the Affordable Care Act means for patients and physicians. BMJ, 349, g5376. https://doi.org/10.1136/bmj.g5376
Holt, D. B. (2006). Toward a sociology of branding. Journal of Consumer Culture, 6(3), 299–302. https://doi.org/10.1177/1469540506068680
Hopper, J. (2015). Obamacare, the news media, and the politics of 21st-century presidential communication. International Journal of Communication, 9, 1275–1299.
Inglehart, R., & Norris, P. (2016). Trump, Brexit, and the rise of populism: Economic have-nots and cultural backlash (Harvard Kennedy School Working Paper No. RWP16-026). Retrieved March 17, 2018, from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2818659
Jaffe, S. (2014). Second round of enrolment begins under Affordable Care Act. The Lancet, 384(9965), 1733–1744. https://doi.org/10.1016/S0140-6736(14)62057-2
Jeffries, J. W. (1990). The “new” New Deal: FDR and American liberalism, 1937–1945. Political Science Quarterly, 105(3), 397–418. https://doi.org/10.2307/2150824
Jensen, I. (2011). Postkulturel kommunikation – fordi kultur ikke altid er vigtigst [Postcultural communication – because culture is not always most important]. In T. S. Drønen, K. Fretheim, & M. Skjortnes (Eds.), Forståelsens gylne øyeblikk: Festskrift til Øyvind Dahl. Tapir Akademisk Forlag. ISBN: 978-82-519-2602-7.
Johnson, R., Wang, Z., Gagnon, C., & Stavrou, A. (2012). Analysis of Android applications’ permissions. In Proceedings of the 6th International Conference on Software Security and Reliability Companion (pp. 45–46). 20–22 June 2012, Gaithersburg.
Keel, A., & Nataraajan, R. (2012). Celebrity endorsements and beyond: New avenues for celebrity branding. Psychology and Marketing, 29(9), 690–703. https://doi.org/10.1002/mar.20555
Lappeman, J., Clark, R., Evans, J., Sierra-Rubia, L., & Gordon, P. (2020). Studying social media sentiment using human validated analysis. MethodsX, 7, 100867. https://doi.org/10.1016/j.mex.2020.100867
Lappeman, J., Franco, M., Warner, V., & Sierra-Rubia, L. (2022). What social media sentiment tells us about why customers churn. Journal of Consumer Marketing, 39(5), 385–403. https://doi.org/10.1108/JCM-12-2019-3540
Marques, J. F., & McCall, C. (2005). The application of interrater reliability as a solidification instrument in a phenomenological study. The Qualitative Report, 10, 439–462. https://doi.org/10.46743/2160-3715/2005.1837
Marsh, D., & Fawcett, P. (2011). Branding and franchising a public policy: The case of the gateway review process 2001–2010. The Australian Journal of Public Administration, 70(3), 246–258. https://doi.org/10.1111/j.1467-8500.2011.00729.x
Mayes, R. (2008). A place in the sun: The politics of place, identity and branding. Place Branding and Public Diplomacy, 4(2), 124–135. https://doi.org/10.1057/pb.2008.1
McGarty, C., Lala, G., & Douglas, K. M. (2011). Opinion-based groups: (Racist) talk and (collective) action on the internet. In Z. Birchmeier, B. Dietz-Uhler, & G. Stasser (Eds.), Strategic uses of social technology: An interactive perspective of social psychology (pp. 145–171). Cambridge University Press. https://doi.org/10.1017/CBO9781139042802.008
McKenzie, D., & Swails, B. (2016). They predicted President Trump and Brexit. CNN. Retrieved August 25, 2019, from https://edition.cnn.com/2016/11/15/africa/south-africa-brandseye-trump-brexit/
Oates, S., & Moe, W. W. (2016). Donald Trump and the “oxygen of publicity”: Branding, social media, and mass media in the 2016 presidential primary elections. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.2830195
Oberlander, J. (2012). The future of Obamacare. The New England Journal of Medicine, 367(23), 2165–2167. https://doi.org/10.1056/NEJMp1213674
Ogden, J., Walt, G., & Lush, L. (2003). The politics of ‘branding’ in policy transfer: The case of DOTS for tuberculosis control. Social Science & Medicine, 57(1), 179–188. https://doi.org/10.1016/S0277-9536(02)00373-8
Pace, S., Balboni, B., & Gistri, G. (2017). The effects of social media on brand attitude and WOM during a brand crisis: Evidences from the Barilla case. Journal of Marketing Communications, 23(2), 135–148. https://doi.org/10.1080/13527266.2014.966478
Pak, A., & Paroubek, P. (2010). Twitter as a corpus for sentiment analysis and opinion mining. In Proceedings of LREC 2010 (pp. 1320–1326). Retrieved July 2020, from https://lexitron.nectec.or.th/public/LREC-2010_Malta/pdf/385_Paper.pdf
Palguna, D. S., Joshi, V., Chakaravarthy, V. T., Kothari, R., & Subramaniam, L. V. (2015). Analysis of sampling algorithms for Twitter. In Proceedings of the 24th International Joint Conference on Artificial Intelligence (pp. 967–973). 25–27 July 2015.
Pang, B., Lee, L., & Vaithyanathan, S. (2002). Thumbs up? Sentiment classification using machine learning techniques. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (pp. 79–86). Association for Computational Linguistics.
Scammell, M. (2014). Consumer democracy: The marketing of politics. Cambridge University Press.
Schneider, H. (2008). Branding in politics—manifestations, relevance and identity-oriented management. Journal of Political Marketing, 3(3), 41–67. https://doi.org/10.1300/J199v03n03_03
Serazio, M. (2015). Branding politics: Emotion, authenticity, and the marketing culture of American political communication. Journal of Consumer Culture, 17(2), 225–241. https://doi.org/10.1177/1469540515586868
Shankle, G. E. (1955). American nicknames: Their origin and significance (2nd ed.). The H. W. Wilson Co.
Shaw, F. E., Asomugha, C. N., Conway, P. H., & Rein, A. S. (2014). The patient protection and affordable care act: Opportunities for prevention and public health. The Lancet, 384(9937), 75–82. https://doi.org/10.1016/S0140-6736(14)60259-2
Shmargad, Y., & Klar, S. (2020). Sorting the news: How ranking by popularity polarises our politics. Political Communication, 37(3), 423–446. https://doi.org/10.1080/10584609.2020.1713267
Steiniger, L. (2016). Hate or forgiveness: How do online firestorms impact brand attitude [Master’s thesis, Faculty of Behavioural, Management and Social Sciences, University of Twente, The Netherlands]. Retrieved March 4, 2019, from https://essay.utwente.nl/71529/1/SteinigerMA_BMS.pdf
Stern, B. B. (1994). A revised communication model for advertising: Multiple dimensions of the source, the message and the recipient. Journal of Advertising, 23(2), 5–15. https://doi.org/10.1080/00913367.1994.10673438
Tumasjan, A., Sprenger, T. O., Sandner, P. G., & Welpe, I. M. (2010). Predicting elections with Twitter: What 140 characters reveal about political sentiment. In Proceedings of the Fourth International AAAI Conference on Weblogs and Social Media (pp. 178–185). https://doi.org/10.1609/icwsm.v4i1.14009
Turney, P. D. (2002). Thumbs up or thumbs down? Semantic orientation applied to unsupervised classification of reviews. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (pp. 417–424). Association for Computational Linguistics.
Wang, H., Can, D., Kazemzadeh, A., Bar, F., & Narayanan, S. (2012). A system for real-time Twitter sentiment analysis of 2012 US presidential election cycle. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (pp. 115–120). 8–14 July 2012, Republic of Korea: Association for Computational Linguistics.
Wang, Z., Jin, Y., Liu, Y., Li, D., & Zhang, B. (2018). Comparing social media data and survey data in assessing the attractiveness of Beijing Olympic Forest Park. Sustainability, 10(2), 382. https://doi.org/10.3390/su10020382
Williams, C. B., & Gulati, G. J. (2013). Social networks in political campaigns: Facebook and the congressional elections of 2006 and 2008. New Media & Society, 15(1), 52–71. https://doi.org/10.1177/1461444812457332
Wilson, T., Wiebe, J., & Hoffmann, P. (2005). Recognising contextual polarity in phrase-level sentiment analysis. In Proceedings of the Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing (pp. 347–354).
Zhuang, L., Jing, F., & Zhu, X. (2006). Movie review mining and summarisation. In Proceedings of the 15th ACM Conference on Information and Knowledge Management (pp. 43–50). New York, NY.
Zubiaga, A., Kochkina, E., Liakata, M., Procter, R., & Lukasik, M. (2016). Stance classification in rumours as a sequential task exploiting the tree structure of social media conversations. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics (pp. 2438–2448). The COLING 2016 Organizing Committee.