ICO fines Ticketmaster £1.25m for failing to protect customers’ personal data

The Information Commissioner’s Office (ICO), the UK’s data protection authority, imposed a £1.25m fine on Ticketmaster UK for failing to keep its customers’ personal data secure, a failure that resulted in a data breach.[1]

Among the many innovations the General Data Protection Regulation (GDPR) introduced into the European data protection framework, one important feature is the set of new and enhanced powers it granted to national data protection authorities to enforce data protection rules.

Since the adoption of the GDPR, data protection authorities have been increasingly active in enforcing its provisions.[2] Imposing administrative fines is one of the strongest and most effective corrective powers at their disposal to deter unlawful practices. Not only do administrative fines inflict concrete financial losses on companies, they also carry a reputational cost.

The case concerns a personal data breach that was not addressed in a timely and adequate manner. It could be seen as one more in the long list of cases of DPAs imposing administrative fines, except for a particular detail: the attack was carried out on Ticketmaster’s AI-powered chatbot.

The chatbot on Ticketmaster’s website was built by a third-party vendor (Inbenta) to interpret users’ questions and automatically surface relevant help articles or information. This automated process was driven by code that analysed the questions, using Symbolic AI to power its Natural Language Processing (NLP) technology and thereby understand human language in all its variations.[3] Although the chatbot was hosted on the third-party vendor’s servers, Ticketmaster embedded it on various pages of its website, including the payment page, where it processed customers’ personal data. An attacker targeted the vendor’s servers and inserted malicious code into the chatbot. That code collected users’ personal data, including financial data such as names, payment card numbers, expiry dates and CVV numbers, and sent it to the attacker.[4] Despite being notified on several occasions by financial institutions of potential data breaches and fraudulent activity, the company failed to implement appropriate measures in time to prevent the cyber-attack. As a result, nearly 9.4m customers were potentially affected.

Following the investigation carried out by the UK’s data protection authority, the company was fined for violating Article 5(1)(f) GDPR, which obliges controllers to ensure secure processing, and Article 32 GDPR, for failing to ensure the confidentiality, integrity and resilience of its processing systems and for not implementing appropriate technical and organisational measures to ensure an adequate level of security.

Why is it important to consider security aspects in AI systems that process personal data?

In a traditional computational architecture, developers write the software that implements an algorithm. Such a system does not present particular hurdles when it comes to securing it against malicious attacks. AI-powered systems, however, are more complex and therefore offer attackers several different pathways for their attacks. Common vulnerabilities associated with AI systems include: data poisoning (introducing false data at the training or production stage of the model), adversarial examples (crafted inputs that confuse the model and lead to misclassified outputs), and model attacks (exploiting flaws detected in the model itself).[5] The image below depicts both architectures and the new threats introduced by AI components:

Source: European Commission’s Joint Research Centre
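The first of these vulnerabilities, data poisoning, can be illustrated with a minimal sketch. The classifier below is a deliberately simple nearest-centroid model chosen for illustration only (it is not Ticketmaster’s or Inbenta’s actual technology): a handful of injected, mislabelled training points is enough to drag a class centroid away from its real cluster and sharply degrade accuracy.

```python
import random
from math import dist

random.seed(0)

def make_cluster(cx, cy, n, spread=1.0):
    """Generate n 2-D points scattered around the centre (cx, cy)."""
    return [(random.gauss(cx, spread), random.gauss(cy, spread)) for _ in range(n)]

def centroids(points, labels):
    """Compute the mean point of each class."""
    out = {}
    for lbl in set(labels):
        pts = [p for p, l in zip(points, labels) if l == lbl]
        out[lbl] = (sum(x for x, _ in pts) / len(pts),
                    sum(y for _, y in pts) / len(pts))
    return out

def classify(point, cents):
    """Assign the label of the nearest centroid."""
    return min(cents, key=lambda lbl: dist(point, cents[lbl]))

def accuracy(test_pts, test_lbls, cents):
    hits = sum(classify(p, cents) == l for p, l in zip(test_pts, test_lbls))
    return hits / len(test_pts)

# Clean training data: two well-separated clusters.
train = make_cluster(0, 0, 100) + make_cluster(5, 5, 100)
labels = [0] * 100 + [1] * 100
test = make_cluster(0, 0, 50) + make_cluster(5, 5, 50)
test_lbls = [0] * 50 + [1] * 50

clean_acc = accuracy(test, test_lbls, centroids(train, labels))

# Poisoning: the attacker injects a few far-away points mislabelled as
# class 0, dragging the class-0 centroid away from its real cluster.
poison = [(100.0, 100.0)] * 12
cents_poisoned = centroids(train + poison, labels + [0] * 12)
poisoned_acc = accuracy(test, test_lbls, cents_poisoned)
```

In this sketch, twelve poisoned points among two hundred genuine ones are enough to pull the class-0 centroid so far from its cluster that the genuine class-0 points are misclassified; the same principle scales to real models trained on data an attacker can influence.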

The key takeaway from this case is that while artificial intelligence solutions are efficient and innovative tools for performing certain tasks, they also open the door to increased risks and vulnerabilities for personal data protection. It is therefore of the utmost importance that, whenever companies implement AI solutions, they properly assess the risks of using these technologies and identify and implement adequate security measures to counter the increased risks posed by AI-enhanced technologies.


[1] ICO, Penalty notice, Case ref: COM0759008 – Ticketmaster UK Limited, 13 November 2020, https://ico.org.uk/media/action-weve-taken/2618609/ticketmaster-uk-limited-mpn.pdf

[2] An overview of the fines imposed by national data protection authorities throughout the EU can be seen here: https://www.enforcementtracker.com/

[3] Official information from Inbenta’s website https://www.inbenta.com/en/products/chatbot/

[4] ICO, Penalty notice – Ticketmaster, para. 3.29

[5] European Commission, Joint Research Centre, https://publications.jrc.ec.europa.eu/repository/bitstream/JRC119336/dpad_report.pdf
