The Internet of Emotions: A New Legal Challenge


The new frontier of artificial intelligence and the Internet of Things is moving towards 3D facial recognition and emotion recognition. With the Internet of Emotions, new means of capturing human feelings through artificial intelligence have been developed. These devices are interconnected, and they use the processed data for a variety of purposes. This will deepen the interaction between humans and technology in a profound and encompassing way.

Computers and robots are being trained to recognise people. In the near future, advertising will be even more personalised and intrusive than ever before. Triggered by scans of our faces in shops and public spaces, companies will be able to detect our emotions in every circumstance, and consequently target us with the creation and fulfilment of new needs.

Face++

This science-fiction scenario is almost a reality in China. The billion-dollar platform “Face++”, developed by the start-up Megvii, is a cognitive service provider that allows apps to add deep-learning-based image analysis and recognition technologies to their functions. Face++’s clients include Alipay, the photo-editing app Camera 360, Lenovo and others. Alipay, in particular, is a payment platform that allows users to reset a forgotten password using facial recognition. Alipay will soon launch the feature “Smile to Pay”: a customer will be able to stand in front of an in-store camera, smile, and have money automatically deducted from their bank account.

Rationalizer

Philips and ABN AMRO Bank, both based in Amsterdam, have developed the “Rationalizer” to monitor the bank’s traders. The Rationalizer is a bracelet worn on the trader’s wrist that measures feelings via electrodermal activity, revealing the user’s emotions through light patterns and colours. The device is designed to help traders improve their performance, while managers can gather the data in order to understand how internal and external environmental factors influence the risks taken by their employees.

Microsoft Corp.

Microsoft Corporation is also exploring the use of wearables in the workplace, and has conducted research on the use of wearable sensors to understand what work activities are associated with changes in emotion, and when the stress level of employees is too high.

These are just a few examples. But the question is: How will people behave in the future, knowing that their feelings and behaviour are constantly monitored?

Nowadays, since the collection of our data mainly occurs when surfing the internet, an effective defence may be the use of a VPN (Virtual Private Network): a technology that enables users to mask their IP address and to keep their online activity anonymous. This instrument is not yet widespread, except among companies that fear breaches of their sensitive data. The reason may be that average consumers are not really aware of how much of their data is being collected by companies. The problem is still relatively new, and addressed mainly by experts in the field, but as technological advancement continues, bringing with it greater public awareness, people may begin to fear the invasion of their privacy and look for tools to protect it. If the keyhole of intrusion is no longer just the internet, but also IoT devices connected to each other in our houses and in public, what can be done about it?
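To make the mechanism concrete, here is a minimal Python sketch of what a remote server sees. It assumes the third-party `requests` library and the public ipify service, which simply returns the caller’s apparent IP address; run it once with and once without a VPN connected, and the address shown should change from your own to the VPN endpoint’s.

```python
import requests

def apparent_public_ip() -> str:
    """Return the IP address that remote servers see for this machine.

    Behind a VPN, this is the VPN exit node's address rather than
    the user's real one, which is what makes tracking harder.
    """
    return requests.get("https://api.ipify.org", timeout=5).text

if __name__ == "__main__":
    # Compare the output with the VPN off and on.
    print("Apparent public IP:", apparent_public_ip())
```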

The problem may also affect the privacy of children, as the “Hello Barbie” case has shown. The iconic doll has been redesigned in the US and is now capable of interacting with children, using voice recognition technology activated by pressing its belt buckle. The recordings of children’s voices are sent to third-party companies for processing, which can potentially reveal a child’s intimate thoughts and details. This information could be of great value to advertisers and may be used to market their products at the expense of children’s privacy. Even if the toy can only be used after the parents have agreed to various privacy policies, to what extent are they really aware of the consequences?

In the environment of the IoT, some harms may not have manifested themselves yet and have not yet been considered under current legal regimes. In the not-so-unlikely case of a data breach at companies that retain users’ personal data concerning their behaviour, emotions or health, the risk of harm may be high: from identity theft and fraud to economic loss (as in the case of Face++, where information is linked to bank accounts).

Privacy Issues

The Internet of Emotions seems irreconcilable with the current privacy regime on the continent.

The recently introduced GDPR contains a range of principles related to the processing of personal data: the data subject must have the rights of access, rectification, erasure and data portability, the right to preclude others from processing and marketing his or her data, and, most importantly, the right to give consent to the gathering and processing of such data where no other legal basis exists. In order to give consent, however, people must know who is gathering their data. This becomes more complicated when we are surrounded by devices such as cameras, sensors and other observational tools. Biosensors and biometric data allow a real-time understanding of people’s movements and emotions in everyday life, whether they are in public or in their private homes. How is it possible to give consent to the processing of data when entering a smart house? The house owner may have consented when purchasing the devices, but what about a person other than the owner, who occasionally enters the house and has no way to consent, or even to know, that their data is being collected?

When we share personal data online, we are given the opportunity to give or withhold our consent to data collection before we start using the service, while in the IoT such notice and opportunity are mostly absent. These devices are built to be unobtrusive, so they do not usually have the means to display privacy notices and to obtain consent in line with the preferences expressed by individuals. If consent is expensive, over-complicated or even impossible to obtain, data controllers are likely to avoid seeking it entirely.

When consent is missing and no other legal basis applies, personal data protection rights are unduly and unjustifiably restricted, the GDPR is violated, and the right to privacy stated in Article 8 of the ECHR is breached. The latter establishes that “Everyone has the right to respect for his private and family life, his home and his correspondence. There shall be no interference by a public authority with the exercise of this right except such as is in accordance with the law and is necessary in a democratic society…”

Plenty of interference is envisaged in this scenario, and the very concept of privacy is undermined. The lack of control over real-time data collection may place psychological stress on consumers. The insecurity and pressure stemming from potential surveillance and manipulation may even result in a change of behaviour: if people are constantly afraid of being observed, they will not be free to express their ideas, right or wrong, as those ideas may be attributed to them perpetually.

In order for a fundamental right to be lawfully restricted, certain rules must be applied.

Echoing the provisions of the GDPR, the European Data Protection Board (the EU’s independent data protection authority) has set out guidance on assessing the proportionality of measures that limit the fundamental rights to privacy and to the protection of personal data.

The first principle is Necessity: since the processing of personal data touches on a series of fundamental rights, restrictions shall occur only when strictly necessary. Necessity must be based on objective evidence, and the categories of data gathered and processed, as well as the duration of their retention, shall be limited to what is necessary to achieve the goal.

The second principle is Proportionality. This principle obliges authorities to strike a balance between the means used and the intended aim when exercising their powers. The disadvantages caused by the limitation must not outweigh the advantages it obtains, so that the limitation of the right is justified. In addition, proportionality requires that the amount and type of data gathered be adequate and relevant for the purpose of the processing.

These guidelines, among others, shall be taken into consideration when assessing regulation of the IoT and the IoE, in order to lawfully restrict the fundamental right to privacy.

What about the Anonymization Tool?

Since consent seems hard to obtain in the IoE world, an alternative path may be, where possible, data anonymization.

As we know, GDPR rules apply only to information concerning an identified or identifiable natural person, meaning that if the data subject is not identifiable, the privacy risks are averted.

So, in theory, if anonymization is feasible, the point at issue may be solved.
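As an illustration of the naive approach, here is a minimal Python sketch that replaces direct identifiers with salted hashes before retention; all field names are hypothetical. Note that under the GDPR this technique is, strictly speaking, pseudonymization rather than anonymization, and the sparsity problem described next explains why it is not enough on its own.

```python
import hashlib
import os

# Hypothetical per-deployment secret salt; without it, common names or
# e-mail addresses could be recovered by brute-forcing their hashes.
SALT = os.urandom(16)

def pseudonymize(record: dict) -> dict:
    """Replace direct identifiers with salted hashes, keeping sensor fields."""
    out = dict(record)
    for field in ("name", "email", "device_id"):  # hypothetical field names
        if field in out:
            digest = hashlib.sha256(SALT + str(out[field]).encode()).hexdigest()
            out[field] = digest[:16]
    return out

reading = {"name": "Alice", "device_id": "A-42", "heart_rate": 88}
print(pseudonymize(reading))  # identifiers replaced, sensor data left intact
```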

However, the problem is that sensor datasets are particularly prone to sparsity: because sensor data captures a comprehensive picture of an individual, especially through the analysis of his or her activities, each individual in the dataset is quite unique.

For example, an MIT study has recently shown how simple it is to identify people merely by analysing anonymized location information obtained from their cell phones. By screening 1.5 million cell-phone users in Europe over fifteen months, it was possible to identify ninety-five percent of the users in the dataset. The method used was to locate a single user within several hundred yards of a cell-phone transmitter, over the course of an hour, on four occasions in one year.
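The intuition behind that result can be reproduced on toy data: even a handful of coarse (place, hour) observations quickly becomes unique. The Python sketch below generates synthetic traces and counts how many simulated users are consistent with just four known spatio-temporal points; it illustrates the sparsity argument and is not a reproduction of the MIT study.

```python
import random

random.seed(0)
N_USERS, N_CELLS, N_HOURS = 10_000, 50, 24

# Synthetic "anonymized" traces: each user is a set of (cell_tower, hour) points.
traces = [
    {(random.randrange(N_CELLS), random.randrange(N_HOURS)) for _ in range(40)}
    for _ in range(N_USERS)
]

def users_matching(points: set) -> int:
    """Count traces containing all of the given spatio-temporal points."""
    return sum(1 for trace in traces if points <= trace)

# Suppose an adversary knows 4 points about one user
# (seen near 4 towers at 4 particular hours).
target = traces[123]
known = set(random.sample(sorted(target), 4))
print("Users consistent with 4 known points:", users_matching(known))
# With realistic sparsity the count is almost always exactly 1:
# the "anonymous" trace has been re-identified.
```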

It is clear, then, how challenging it is to preserve anonymity when IoE sensors are at stake.

Along with the right to privacy, another fundamental right is threatened in this scenario: the right to equality (guaranteed in all European constitutions). Extensive knowledge of consumers’ features, habits, sexual orientation or personal earnings could lead to the risk of unequal treatment. One may argue that in Europe there is no “social rating score system” as is the case in China, but the type of monitoring that the IoT permits may lead us in that direction as well.

Sensor data tends to be so detailed and extensive that the information extracted from it may be extremely valuable in economic and informational contexts.

In a dystopian scenario where the data subject cannot control the gathering or transfer of his or her personal data, such data may be acquired by any stakeholder in the market, such as insurers, employers and banks; many others may take economic advantage, basing their decisions on those data without consumers or regulators having a clue about the ongoing process.

New forms of discrimination may emerge based on protected characteristics such as age, race or gender. Furthermore, hidden forms of economic discrimination based on Internet of Things data may appear.

If, by chance, personal information regarding health issues is obtained by a health insurance company, access to health insurance could be limited, or offered at a higher premium, for some people.

Other examples can arise in the employment field. Although this may seem intrusive, employers always scrutinize relevant data about potential employees in order to understand who will be most efficient, productive, and in general more suitable for the position.

If data science can be used to enhance the selection and placing of employees, the profits can increase tremendously – potentially at the expense of employee and prospective employee privacy.

The greater the number of data sources, the more likely it is that relevant information about an employee can be gathered. The employer could turn to a number of commercial partners to obtain it: from mobile phone carriers to electric utility companies and all the producers of Internet of Things products.

In the internet world everything may potentially reveal information about a person.

Food habits may not predict employability, and sleep or fitness devices may not predict someone’s solvency, but with those devices we cannot know for sure. There is reason to believe that combined, processed data may reveal almost any kind of information.

Data from movement-measuring devices such as accelerometers, combined with heart-rate sensor data, can plausibly reveal a person’s stress levels and emotions. Research has demonstrated that heart-rate variations caused by physical exercise follow a different pattern from those caused by emotional excitement.

In the same way, many daily activities might reveal someone’s mental state: the way a consumer holds a cell phone, how placidly a person types on a computer, or how much a person’s hands shake while holding any sort of IoT device.
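To show what such an inference might look like in practice, here is a toy Python sketch combining a crude accelerometer-based activity measure with heart-rate variability (RMSSD, a standard HRV statistic): a fast heartbeat with low variability and little movement is flagged as possible stress rather than exercise. The thresholds and inputs are purely illustrative assumptions, not a validated model.

```python
import math

def rmssd(rr_intervals_ms: list[float]) -> float:
    """Root mean square of successive differences between heartbeats:
    a standard heart-rate-variability measure; low values alongside a
    fast heart rate often accompany stress."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def movement_level(accel_magnitudes_g: list[float]) -> float:
    """Mean acceleration magnitude as a crude activity proxy."""
    return sum(accel_magnitudes_g) / len(accel_magnitudes_g)

def guess_state(rr_ms: list[float], accel_g: list[float]) -> str:
    """Illustrative heuristic only: all thresholds are arbitrary assumptions."""
    bpm = 60_000 / (sum(rr_ms) / len(rr_ms))  # heart rate from RR intervals
    if bpm > 100 and movement_level(accel_g) > 1.3:
        return "exercise"          # fast heart plus lots of movement
    if bpm > 100 and rmssd(rr_ms) < 20:
        return "possible stress"   # fast heart, low variability, little movement
    return "calm"

# A resting subject (accelerometer near 1 g) with a fast, very regular heartbeat:
print(guess_state([550, 552, 548, 551, 549], [1.0, 1.02, 0.98]))  # possible stress
```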

If the Internet of Things creates several data sources from which unexpected information can be extracted, and those inferences can be used by economic actors to make decisions, it is easy to imagine how biased algorithms could create new forms of illegal discrimination. In the case of a credit applicant, for instance, it may seem impossible to guess his or her race or gender, unless combined data concerning the applicant’s habits, place of residence and fitness information are gathered. All those characteristics can be disclosed by IoT and IoE devices, and this may lead to new forms of discrimination.

Antidiscrimination laws do not prevent economic ranking based on our habits, personalities or stress levels. There are no laws that prohibit employers from declining to hire people with personality traits they do not like. Lenders are free to distinguish borrowers whose traits or characteristics suggest trustworthiness from those with different features. Insurers are free to deny coverage to people deemed too risky to insure, and so on.

To date, antidiscrimination law has not addressed these problems.

Through this brief analysis we can understand how crucial it is for legal regimes to regulate the new frontier of the IoE before these devices become a reality on our continent. They should be built in a “regulatory-friendly” way, implementing the GDPR concept of “privacy by design”, in order to prevent privacy violations rather than addressing data protection concerns as an afterthought.

To that end, here are some standards proposed by experts.

First of all, Transparency: it should be as clear as possible to the consumer who is gathering data and for what purpose. Where possible, manufacturers should embed a user-friendly privacy notice system in each device, using an effective communication method: for instance, devices that announce their presence when users enter a space. (IoT devices are currently built to be unobtrusive, since consumers would otherwise be annoyed by their presence, so it is hard to conceive of a device that is both unobtrusive and effective at giving privacy notice.)

Companies should also allow consumers to know what would happen in case of a data breach.

Devices should provide “do not collect” switches, in order to prevent data collection when it is not desired by the consumer (a sketch of how such a switch could work alongside the standards below follows this list).

Data minimization: data should be collected only for current, necessary use and not for future use, so the data retention period should be restricted.

Once data is collected, it should be easy for consumers to withdraw consent (where consent has been the legal basis for processing) and ask for such data to be deleted.

The encryption/anonymization level should be as high as possible, and the life-span of raw data should be very short.
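As a minimal sketch of how several of these standards might be combined in device firmware, assuming entirely hypothetical names and settings: the collection routine below honours a “do not collect” switch, keeps only the fields needed for the current purpose, and prunes raw data past a short retention window.

```python
import time

# Hypothetical device-side settings implementing the standards above.
DO_NOT_COLLECT = False                 # the user-facing "do not collect" switch
NEEDED_FIELDS = {"heart_rate"}         # data minimization: current use only
RETENTION_SECONDS = 24 * 3600          # short life-span for raw data

_store: list[dict] = []

def collect(reading: dict) -> None:
    """Store a sensor reading only if the user allows collection,
    keeping only the fields needed for the current purpose."""
    if DO_NOT_COLLECT:
        return
    minimized = {k: v for k, v in reading.items() if k in NEEDED_FIELDS}
    minimized["ts"] = time.time()
    _store.append(minimized)
    prune()

def prune() -> None:
    """Delete raw data that has exceeded its retention period."""
    cutoff = time.time() - RETENTION_SECONDS
    _store[:] = [r for r in _store if r["ts"] >= cutoff]

collect({"heart_rate": 72, "location": (52.37, 4.90)})  # location is dropped
print(_store)  # only the minimized, time-stamped reading is retained
```

None of this replaces the legal safeguards discussed above, but it suggests that privacy by design is technically straightforward to prototype.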
