I am a Tenure-Track Professor of Data Science at the Faculty of Economics and Business Studies of the University of Giessen. Before joining the University of Giessen, I worked as a postdoctoral researcher in machine learning at the University of Oxford. Prior to that, I headed my own research group at the University of Freiburg, where I also obtained my Ph.D. in Information Systems. My research focuses on data science methods and computational techniques for understanding and predicting human decision-making in the digital age. Current research projects apply machine learning and natural language processing to a broad range of topics, including (1) social networks, (2) recommender systems, and (3) financial markets. Beyond academic research, I am a passionate programmer and have developed multiple widely used R packages (> 150,000 downloads via CRAN) for text mining and machine learning.
Featured Research
Negativity Drives Online News Consumption
Online media play an important role in informing society and shaping opinions, which raises the question of what drives online news consumption. Here, we analyze the effect of negative words on news consumption using a massive online dataset of viral news stories. Specifically, we conducted preliminary analyses using a large-scale series of randomized controlled trials in the field (N = 22,743). Our final dataset will comprise ∼105,000 different variations of news stories from Upworthy.com, one of the fastest-growing websites of all time, that together generated ∼8 million clicks across more than 530 million overall impressions. As such, this dataset offers a unique opportunity to test the causal impact of negative and emotional language on news consumption with millions of news readers. An analysis of the preliminary data reveals that negative words in news increase consumption rates. Our results contribute to a better understanding of why users engage with online media.
Co-authored with Claire E. Robertson (NYU), Kaoru Schwarzenegger (ETH Zurich), Philip Pärnamets (Karolinska Institutet), Jay J. Van Bavel (NYU), and Stefan Feuerriegel (LMU Munich)
Accepted in principle at Nature Human Behaviour (accepted Stage 1 Registered Report available here)
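Because headline variations within an Upworthy test differ only in wording and were randomly assigned, the effect of negative words can be estimated by comparing click-through rates within stories. Below is a minimal sketch of such an analysis in R; the data frame upworthy and its columns (clicks, impressions, neg_share, story_id) are hypothetical placeholders, not the study's actual data or code.

```r
# Hypothetical sketch: within-story effect of negative wording on clicks.
# `upworthy` is an assumed data frame with one row per headline variation.
library(lme4)

fit <- glmer(
  cbind(clicks, impressions - clicks) ~ neg_share + (1 | story_id),
  data   = upworthy,
  family = binomial
)
summary(fit)  # a positive coefficient on `neg_share` would indicate that
              # more negative wording goes along with higher click-through
```

The story-level random intercept restricts the comparison to headline variations of the same story, mirroring the randomized A/B design.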
Hate Speech in the Political Discourse on Social Media: Disparities Across Parties, Gender, and Ethnicity
The political discourse on social media is increasingly characterized by hate speech, which affects not only the reputation of individual politicians but also the functioning of society at large. In this work, we empirically analyze how the amount of hate speech in replies to posts from politicians on Twitter depends on personal characteristics, such as their party affiliation, gender, and ethnicity. We find that tweets are particularly likely to receive hate speech in replies if they are authored by (i) persons of color from the Democratic party, (ii) white Republicans, and (iii) women. Furthermore, our analysis reveals that more negative sentiment (in the source tweet) is associated with more hate speech (in replies). However, the association varies across parties: negative sentiment attracts more hate speech for Democrats (vs. Republicans). Altogether, our empirical findings imply significant differences in how politicians are treated on social media depending on their party affiliation, gender, and ethnicity.
Accepted at The Web Conference (preprint available via arXiv)
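As a rough illustration of the type of regression behind such findings, consider the following R sketch; the data frame tweets and its columns (n_hate, n_replies, sentiment, party, gender, ethnicity) are hypothetical and not the paper's actual specification.

```r
# Hypothetical sketch: share of hate-speech replies per source tweet,
# modeled as a function of the politician's characteristics and the
# sentiment of the source tweet.
fit <- glm(
  cbind(n_hate, n_replies - n_hate) ~ sentiment * party + gender + ethnicity,
  data   = tweets,
  family = binomial
)
summary(fit)  # the sentiment:party interaction tests whether negative
              # sentiment attracts more hate speech for one party
```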
Community-Based Fact-Checking on Twitter’s Birdwatch Platform
Twitter has recently introduced “Birdwatch,” a community-driven approach to addressing misinformation on its platform. In this work, we empirically analyze how users interact with this new feature. Our empirical analysis yields the following main findings: (i) Users more frequently file Birdwatch notes for misleading than for non-misleading tweets. These misleading tweets are primarily reported because they contain factual errors, lack important context, or make unverified claims. (ii) Birdwatch notes are more helpful to other users if they link to trustworthy sources and if they embed a more positive sentiment. (iii) The helpfulness of Birdwatch notes depends on the social influence of the author of the fact-checked tweet. For influential users with many followers, Birdwatch notes yield a lower level of consensus among users, and community-created fact checks are more likely to be perceived as incorrect. Altogether, our findings can help social media platforms formulate guidelines for users on how to write more helpful fact checks. At the same time, our analysis suggests that community-based fact-checking faces challenges regarding biased views and polarization among the user base.
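For illustration only, here is a stylized R sketch of how the helpfulness of Birdwatch notes could be related to note and tweet characteristics; the data frame notes and its columns are hypothetical placeholders, not the paper's actual variables.

```r
# Hypothetical sketch: helpfulness ratio of a Birdwatch note regressed on
# whether the note links a trustworthy source, the note's sentiment, and
# the follower count of the author of the fact-checked tweet.
fit <- lm(
  helpful_ratio ~ has_trusted_source + sentiment + log1p(followers),
  data = notes
)
summary(fit)  # e.g., trustworthy sources would show up as a positive
              # coefficient on helpfulness
```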