Digital technologies have gathered increasing momentum in recent decades and have come to play a fundamental role in every aspect of contemporary society. At the political level, digitisation has generally been encouraged as a means both to strengthen the market and to modernise public administrations. According to the European Commission, digital transformation, which has been defined as “a process that aims to improve an entity by triggering significant changes to its properties through combinations of information, computing, communication, and connectivity technologies” (Vial 2019, 118), also represents a key instrument for the green transition and for reaching the goal of a climate-neutral Europe by 2050.
The COVID-19 pandemic has, nonetheless, represented a turning point for the digital transition. Indeed, the need to ensure social distancing to stop the spread of the virus made digital tools and instruments increasingly necessary in everyday life. Thus, for instance, COVID-19 forced education systems to reinvent themselves to guarantee children’s and young people’s right to study (Iivari, Sharma & Ventä-Olkkonen 2020), pushed for an increase in remote working (Savić 2020; Nagel 2020), and accelerated digital transformation in the delivery of public services (Agostino, Arnaboldi & Diaz Lema 2021), including healthcare (Murray and others 2020).
It should be noted, however, that the enhanced role of digital technologies, and therefore of cyberspace and the digital environment, after COVID-19 largely depended upon the infrastructures and systems built and provided by private entities, most notably online intermediaries and platforms. Indeed, according to Agostino, Arnaboldi and Diaz Lema, “COVID-19 has not been a catalyst for public service institutions to use technologically ground-breaking tools”, since most of them ended up relying on “the most common tool in the public communication and editorial armoury: social media” (Agostino, Arnaboldi & Diaz Lema 2021, 69). The rapid digitisation of society which followed the outbreak of the pandemic has therefore rendered private digital corporations even more central to the everyday life of individuals, as well as to the provision of public services by national administrations. In other words, COVID-19 has accelerated the transition towards what Jack Balkin famously defined as the “algorithmic society”, i.e. a societal landscape “which features large, multinational social media platforms that sit between traditional nation states and ordinary individuals, and the use of algorithms and artificial intelligence agents to govern populations” (Balkin 2018, 1151).
The central role of intermediaries within the process of digitisation in the aftermath of COVID-19 is best represented by two examples: on the one hand, the governance of the phenomenon of online disinformation, especially with respect to the “infodemic” which accompanied the outbreak of the pandemic; on the other hand, the development of contact-tracing apps as digital epidemiology tools aimed at breaking up the chain of infections (Cattuto & Spina 2020).
As for the first aspect, information on the novel disease across the web was severely polluted by the dissemination of fake news, in the form of both disinformation and misinformation. Many scholars and members of the medical community, as well as governmental institutions and international organisations, have highlighted the serious damage caused by the circulation across the web of content containing false information on the origins and seriousness of COVID-19, as well as on the effectiveness and necessity of precautionary measures such as social distancing and masks. In some cases, parts of the population rejected the advice of medical science, and many cases of vitamin-D abuse and mass poisoning from methanol intake took place (Tagliabue, Galassi & Mariani 2020). More recently, misinformation and disinformation have targeted and affected vaccination campaigns worldwide, thus representing a serious threat to overcoming the pandemic.
In recent years, research had already underscored how online media and social networks contribute to the dissemination of “fake news” on a large scale. In general, the algorithmic architecture of these platforms is structured in such a way as to maximise user engagement, with the economic purpose of maximising profit. Thus, on the one hand, people’s interests and opinions are profiled and used as the foundation on which to build an agreeable digital environment around them (the “Daily Me”) (Sunstein 2017), with the inevitable counterpart of locking them up in echo chambers and filter bubbles (Pariser 2011). On the other hand, controversial items and posts are often rewarded with increased visibility (Llansó and others 2020). Recent digital policies, especially in Europe, have pointed towards an increased responsibilisation of Internet intermediaries in this respect, most notably through the implementation of the self-regulatory 2018 Code of Practice on Disinformation.
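The engagement-driven logic described above can be caricatured in a few lines of code. This is a purely hypothetical toy, not any platform’s actual ranking system: it simply shows how an ordering criterion based solely on predicted engagement gives no weight to accuracy.

```python
# Hypothetical sketch of engagement-based feed ranking. All names, fields
# and figures are invented for illustration; no real platform is modelled.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_engagement: float  # e.g. expected clicks, comments and shares
    fact_checked_true: bool      # whether the claim survived fact-checking

def rank_feed(posts: list[Post]) -> list[Post]:
    # Pure engagement maximisation: accuracy plays no role in the ordering.
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

feed = rank_feed([
    Post("Measured report on vaccine trial data", 0.2, True),
    Post("Outrage-inducing false claim", 0.9, False),
])
# The controversial, false item surfaces first because it engages more.
```

Under such an objective function, controversial or false content is not promoted because it is false, but because falsity and outrage often correlate with engagement.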
In the aftermath of the outbreak of COVID-19, social media platforms, pressured both by governments and by public opinion, took a more interventionist approach. Indeed, the pandemic caused content moderation practices operated by online intermediaries to enter a “state of emergency” (Douek 2021, 763). The renewed interventionist approach was generally justified, on the one hand, by the need to counter an emerging and rapidly spreading threat and, on the other hand, by the existence of clear-cut authoritative sources of information, such as the World Health Organization (“WHO”). Among others, Facebook updated its policies, standards and conditions by prohibiting a long list of claims, such as: that vaccines are not effective at preventing the disease they are meant to protect against; that it is safer to get the disease than to get the vaccine; or that vaccines are toxic, dangerous or cause autism (Rosen 2020). Misinformation- and disinformation-driven hate speech, such as that targeting ethnic minorities and especially Asian groups, was also prohibited.
However, in order to implement such measures, online intermediaries need to employ algorithms and AI systems to cope with the scale of the information flow across the Internet. The adoption of such tools, though inescapable given the amount of data and items travelling across the Internet every second, nonetheless raises issues with respect to due process principles, as well as to the fundamental rights and liberties of users. All AI systems, including automated content-moderation software, are to a certain extent subject to error rates, since they are fundamentally based on probabilistic grounds (Sartor & Loreggia 2020). In the case of misinformation and disinformation, for instance, machines may be at a loss when it comes to understanding the irony behind a post. As a result, the implementation of these systems, coupled with the lack of adequate transparency and of redress systems and procedures, is likely to impact negatively on users’ freedom of expression and information whenever legitimate and harmless content is unwarrantedly removed or, more generally, whenever individuals are punished for being wrongly found in violation of the platforms’ terms and standards. Moreover, users are often not guaranteed sufficient and adequate procedural safeguards and remedies. In this sense, concerns arise as regards full compliance with the principles connected to the rule of law (De Gregorio 2020).
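The probabilistic nature of these filters, and the resulting trade-off between over- and under-removal, can be illustrated with a deliberately simplified sketch. All scores and example posts are invented; the point is only that any removal threshold trades false positives against false negatives.

```python
# Toy illustration of probabilistic content moderation: a classifier assigns
# each item a misinformation score, and the platform removes everything
# above a chosen threshold. The scores below are invented for illustration.

def moderate(items, threshold):
    """Split (text, score) pairs into removed and kept lists."""
    removed = [text for text, score in items if score >= threshold]
    kept = [text for text, score in items if score < threshold]
    return removed, kept

items = [
    ("Vaccines cause autism", 0.95),         # actual misinformation
    ("Ironic joke about 5G towers", 0.80),   # irony misread by the model
    ("WHO guidance on mask wearing", 0.10),  # clearly legitimate content
]

strict_removed, _ = moderate(items, threshold=0.75)
lenient_removed, _ = moderate(items, threshold=0.90)
# The strict threshold also removes the ironic post (a false positive);
# the lenient threshold spares it but demands more confidence before acting.
```

No threshold eliminates both error types at once, which is why the prose above links automated filtering to the need for transparency and ex post redress mechanisms.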
In the aftermath of the pandemic, online intermediaries and digital platforms have thus ultimately entered a new stage of content moderation, especially with respect to disinformation and so-called “fake news”. Nor does it seem that this new phase will be abandoned as the COVID-19 emergency is gradually overcome. Scholarship has, in fact, expressed the view that online media will not go back to their original “neutral” approach (Douek 2021): instead, it may be argued that the pandemic has accelerated the tendency of intermediaries to become ever more involved in online content moderation. Such a trend, far from being negative per se, should nonetheless be driven by democratic principles and not simply by profit-based aspirations.
As mentioned above, the second example which clearly shows the increasing reliance on private digital actors for the pursuit of publicly relevant purposes is the development, throughout 2020, of contact-tracing apps. The purpose of these apps was ultimately to keep track of the chain of infections and, consequently, to help interrupt it. In other words, they were to be implemented as systems of digital epidemiology, that is, digital tools to “understand the patterns of disease and health dynamics in populations, as well as the causes of these patterns, and to use this understanding to mitigate and prevent disease and promote health” (Cattuto & Spina 2020, 229). In this sense, contact-tracing apps constitute an emblematic symbol of the process of digitisation and digital transformation of public healthcare.
The proposal to use mobile phone technology (generally Bluetooth) immediately sparked concerns across Europe, both within academia and in the public debate, regarding the potential risks to individuals’ privacy and data protection rights. In particular, concerns emerged about the potential use that could be made against individuals of data concerning their geolocation. The European Commission, in an attempt to guarantee the compliance of national apps with Union legislation on privacy and data protection, drafted a common toolbox for Member States containing important recommendations and suggested standards. Among the main concerns and requests were, inter alia, the guarantee that data be anonymised to protect the interests of citizens, as well as the requirement that apps be downloaded on a voluntary basis.
Member States and national governments, however, had to rely on private digital firms to materially develop the applications. Tech giants Google and Apple even partnered up in “a joint effort to enable the use of Bluetooth technology to help governments and health agencies reduce the spread of the virus, with user privacy and security central to design”. Once again, therefore, the aspiration to pursue a public interest such as the reduction of COVID-19 infections was directly dependent upon the services and tools developed and offered privately. State governments retained the function of designing and shaping the characteristics of national contact-tracing apps, but these apps ultimately depended on the “gatekeeper” function of intermediaries and platforms (Cattuto & Spina 2020).
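The privacy-by-design approach underlying decentralised contact tracing can be sketched in code. The following is a deliberately simplified toy, loosely inspired by the Apple/Google Exposure Notification framework; actual key sizes, rotation schedules and derivation functions differ. The core idea is that phones broadcast short-lived pseudonymous identifiers derived from a secret daily key, so that observed beacons cannot, by themselves, be linked to a person.

```python
# Simplified sketch of rotating pseudonymous identifiers for decentralised
# contact tracing. Illustrative only: real protocols use different key
# lengths, derivation functions and interval schedules.
import hashlib
import hmac
import secrets

def daily_key() -> bytes:
    # Each device generates a fresh random secret every day.
    return secrets.token_bytes(16)

def rolling_id(key: bytes, interval: int) -> bytes:
    # Pseudonymous identifier broadcast during one ~15-minute interval,
    # derived from the daily key so it reveals nothing on its own.
    return hmac.new(key, interval.to_bytes(4, "big"), hashlib.sha256).digest()[:16]

def match_exposure(diagnosed_keys, observed_ids, intervals) -> bool:
    # After a positive diagnosis, the patient's daily keys are published;
    # other phones re-derive the identifiers locally and check for matches
    # on-device, so no central authority learns who met whom.
    derived = {rolling_id(k, i) for k in diagnosed_keys for i in intervals}
    return any(o in derived for o in observed_ids)

key = daily_key()
seen = [rolling_id(key, 3)]  # beacon a bystander's phone recorded nearby
assert match_exposure([key], seen, range(96))        # 96 intervals per day
assert not match_exposure([daily_key()], seen, range(96))
```

The design choice worth noting is that the anonymisation and voluntariness requirements mentioned above are enforced in the code itself: matching happens on the user’s device, which is precisely why the protection of fundamental rights ends up embedded in privately written software.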
Moreover, as the quotation above shows, the protection of individuals’ fundamental rights to privacy and data protection ultimately became a task vested directly in those private actors. It was up to the platforms to embed the appropriate safeguards for the people within their code. This situation is arguably problematic as far as the principles of the rule of law, democracy and transparency are concerned.
Although contact-tracing apps did not succeed in their intent of limiting and constraining the virus, especially because of the general mistrust of the public and, therefore, the limited number of downloads, the story of their development is emblematic of the typical forms that the processes of digitisation and digital transformation have taken in the aftermath of the COVID-19 pandemic. The transition from a purely analogue world to an increasingly digital one has undergone an unprecedented and unexpected acceleration over the last two years. Against this backdrop, private digital platforms and Internet intermediaries represented the only actors capable of offering the services and infrastructures needed to face the emergency.
The pandemic has, therefore, shed light on and given momentum to an already ongoing societal development, in which traditional power dynamics are blurred and in which new emerging actors (Balkin 2018; Klonick 2018), i.e. online intermediaries, have become necessary players which can no longer be ignored. In the new digitised society, digital corporations and firms have gained a central role in the delivery of public and private services, as well as in the pursuit of socially desirable outcomes, and often do so through the implementation of AI tools. Political powers and nation states themselves cannot ignore these emerging forces and have, indeed, begun modifying their own agendas and operational practices to adapt to the new developing world. The digital environment, and therefore the private forces driving its development and evolution, is shaping society and how society is governed and administered. In this sense, the boundaries between public and private power have become fuzzier than ever, and may thus require new constitutional adjustments to allow for a new balancing of the forces at play.
Agostino D, M Arnaboldi, and M Diaz Lema, ‘New development: COVID-19 as an accelerator of digital transformation in public service delivery’ (2021) 41(1) Public Money & Management 69.
Balkin JM, ‘Free Speech in the Algorithmic Society: Big Data, Private Governance, and New School Speech Regulation’ (2018) 51 UC Davis Law Review 1149, at 1151.
Cattuto C and A Spina, ‘The Institutionalisation of Digital Public Health: Lessons Learned from the COVID-19 App’ (2020) 11(2) European Journal of Risk Regulation 228.
De Cock Buning M and others, A multi-dimensional approach to disinformation. Report of the independent High-level Group on fake news and online disinformation (Publications Office of the European Union 2018).
De Gregorio G, ‘Democratising online content moderation: A constitutional framework’ (2020) 36 Computer Law & Security Review <https://doi.org/10.1016/j.clsr.2019.105374>.
Douek E, ‘Governing online speech: From “posts-as-trumps” to proportionality and probability’ (2021) 121(3) Columbia Law Review 759, at 763.
Iivari N, S Sharma, and L Ventä-Olkkonen, ‘Digital transformation of everyday life – How COVID-19 pandemic transformed the basic education of the young generation and why information management research should care?’ (2020) 55 International Journal of Information Management <https://doi.org/10.1016/j.ijinfomgt.2020.102183>.
Klonick K, ‘The New Governors: The People, Rules, and Processes Governing Online Speech’ (2018) 131(6) Harvard Law Review 1598.
Llansó E and others, Artificial Intelligence, Content Moderation, and Freedom of Expression (2020) TWG Papers Series <https://www.ivir.nl/publicaties/download/AI-Llanso-Van-Hoboken-Feb-2020.pdf>.
Murray CJL and others, ‘Digital public health and COVID-19’ (2020) 5(9) Lancet E469.
Nagel L, ‘The influence of the COVID-19 pandemic on the digital transformation of work’ (2020) 40(9/10) International Journal of Sociology and Social Policy 861.
Pariser E, The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think (Penguin Books 2011).
Rosen G, ‘An Update on Our Work to Keep People Informed and Limit Misinformation About COVID-19’ (Meta, 16 April 2020) <https://about.fb.com/news/2020/04/covid-19-misinfo-update/#removing-more-false-claims>.
Sartor G and Loreggia A, ‘The impact of algorithms for online content filtering or moderation. “Upload filters”’ (2020), Study requested by the JURI committee of the European Parliament (PE 657.101) <https://www.europarl.europa.eu/RegData/etudes/STUD/2020/657101/IPOL_STU(2020)657101_EN.pdf>.
Savić D, ‘COVID-19 and Work from Home: Digital Transformation of the Workforce’ (2020) 16(2) The Grey Journal 101.
Sunstein CR, #Republic: Divided Democracy in the Age of Social Media (Princeton University Press 2017).
Tagliabue F, L Galassi, and P Mariani, ‘The “Pandemic” of Disinformation in COVID-19’ (2020) 2 SN Comprehensive Clinical Medicine 1287.
Vial G, ‘Understanding digital transformation: A review and a research agenda’ (2019) 28 Journal of Strategic Information Systems 118.
 European Commission, ‘A Europe fit for the digital age’ <https://ec.europa.eu/info/strategy/priorities-2019-2024/europe-fit-digital-age_en>, accessed 9 November 2021.
The World Health Organization (“WHO”) defines an infodemic as “too much information including false or misleading information in digital and physical environments during a disease outbreak”, which thus “causes confusion and risk-taking behaviours that can harm health”: see World Health Organization, ‘Infodemic’ <https://www.who.int/health-topics/infodemic#tab=tab_1>. The WHO repeatedly warned against the COVID-19 infodemic and took action against it: cf. World Health Organization, Infodemic Management: An overview of infodemic management during COVID-19, January 2020-May 2021.
 On the notions of misinformation and disinformation, see De Cock Buning and others 2018.
 See Facebook Help Center, ‘COVID-19 and Vaccine Policy Updates & Protections’ <www.facebook.com/help/230764881494641/?helpref=uf_share>.
 European Commission – eHealth Network, ‘Mobile applications to support contact tracing in the EU’s fight against COVID-19. Common EU Toolbox for Member States’ (15 April 2020) <https://ec.europa.eu/health/sites/default/files/ehealth/docs/covid-19_apps_en.pdf>.
 ‘Apple and Google partner on COVID-19 contact tracing technology’ (Apple, 10 April 2020) <https://www.apple.com/newsroom/2020/04/apple-and-google-partner-on-covid-19-contact-tracing-technology/>.