Online Social Networks (OSNs) such as Facebook and Twitter have far exceeded the traditional networking service of connecting people together. With millions of users actively using their platforms, OSNs have attracted third parties who exploit them as an effective medium to reach and potentially influence a large and diverse population of web users. For example, during the 2008 U.S. presidential election, social media was heavily employed by Obama's campaign team, which raised about half a billion dollars online, introducing a new digital era in presidential fundraising. Moreover, it has been argued that OSNs, as democracy-enforcing communication platforms, were one of the key enablers of the recent Arab Spring in the Middle East. Such global integration of social media into everyday life is rapidly becoming the norm, and arguably is here to stay.

But what if some of the content in social media, OSNs in particular, is not written by human beings? A new breed of computer programs called socialbots is now online, and they can be used to influence OSN users. A socialbot is automation software that controls an account on a particular OSN and has the ability to perform basic activities such as posting a message and sending a connection request. What makes a socialbot different from self-declared bots (e.g., Twitter bots that post up-to-date weather forecasts) and from spambots is that it is designed to be stealthy, that is, able to pass itself off as a human being. This allows the socialbot to compromise the social graph of a targeted OSN by infiltrating (i.e., connecting to) its users so as to reach an influential position. That position can then be exploited to spread misinformation and propaganda in order to bias public opinion. For example, Ratkiewicz et al. describe the use of Twitter bots to run astroturf and smear campaigns during the 2010 U.S. midterm elections.