Standing in the Rip Current of the Algorithmic Economy with Closed Eyes

This article is re-posted from DataEthics.eu

How can we question the ethics of a service if we don’t have access to the details of how it is designed to act on data? How can we put a health warning on a product if we don’t know the ingredients?


(Post based on an intervention at IGF 2015, João Pessoa, Brazil)

Algorithms are business. They are what make value out of data. The industry knows this, and all of the biggest tech players are investing heavily in AI and machine-learning algorithms. Facebook, Google, Amazon, IBM – they are opening new research departments, hiring AI and machine-learning staff many times over, and dedicating large budgets to the field.

We are standing with closed eyes in the rip current of an evolving economy based on finding patterns in data, creating profiles, predicting and responding to data, making meaning out of data and transforming it into value. Some have called this the Algorithmic Economy, because it is the algorithms that are the actual value makers. They are the recipes of successful businesses, and so they are of course based on subjective assumptions, perhaps even biases, and interests – commercial, governmental, scientific and so on.

The odd thing here is that this new, speedily evolving type of economy – with a whole set of new businesses, services, products and infrastructures, heavily invested with different interests – is evolving with no ethical oversight and no public scrutiny.

There is a total lack of transparency in the proprietary value-making algorithms. They are deployed invisibly. The people they act on have no access to them. Their basic functioning and source code are secret – as secret as the recipe for Coca-Cola, but powerful and game-changing in our everyday social lives, as Francesco Lapenta describes in this Re:publica talk.

Many of the industry giants offer developer clouds to developers and new start-ups. But the way they offer them is symptomatic of the kind of Black Box Society that Frank Pasquale has described: they offer only the APIs, the tools. That means that not even the developers of new services built on top of them know the recipes for what they are developing. They are provided with tools to create meaning out of their data, but they do not possess the actual value, which is the algorithm that creates the meaning.
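To make that black-box relationship concrete, here is a minimal sketch of what such a developer integration typically looks like. Everything in it is hypothetical – the endpoint, the request fields and the returned scores are invented for illustration – but the shape is familiar: the developer sends data in and gets a judgement back, with no visibility into how it was produced.

```python
import json
import urllib.request

# Hypothetical cloud "insights" endpoint -- a stand-in for any
# proprietary profiling API offered to developers. URL and key are
# invented for illustration.
API_URL = "https://api.example-cloud.com/v1/personality-insights"
API_KEY = "YOUR-API-KEY"

def profile_text(text: str) -> dict:
    """Send raw text (e.g. a social media history) to the service.

    The JSON response is all the developer ever sees: a set of
    scores. The model that produced them -- its features, training
    data and assumptions -- stays on the provider's servers.
    """
    request = urllib.request.Request(
        API_URL,
        data=json.dumps({"text": text}).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

# A response might look like {"openness": 0.72, "agreeableness": 0.41}:
# numbers with no explanation, no source code, no way to audit the recipe.
```

A developer can build an entire product on top of a call like this without ever being able to answer the basic question: what, exactly, are these scores based on?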

An example is the IBM Watson “Ecosystem”, where IBM offers developers and start-ups Watson partnerships to develop their services and business ideas. One of these new services is Unitesus – an online career-matching platform that matches employers and employees, e.g. on cultural fit, based on Watson’s cognitive computing. Basically, this means that an algorithm profiles an individual’s personality and matches it to a potential workplace’s culture: “Custom build your perfect employee based on personality, experience, values, skills and company cultural fit”. Imagine that your future career possibilities are decided by a secret algorithm that finds patterns in, for example, your social media history and profiles you as one type of worker or another – employable or not. Imagine when a career-matching service like this becomes part of the Internet of Things: an ever-connected, pervasive environment that measures your professional capabilities and personality based on the way you move through your physical environment – a centralized and closed proprietary environment like the one Julia Powles describes.
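To see what “matching on cultural fit” might reduce to under the hood, consider a deliberately simplified sketch. The trait names, the numbers and the threshold below are all invented; the point is that a consequential yes/no about a person can come down to a similarity score between two opaque vectors.

```python
import math

# Hypothetical trait vectors: one inferred from a candidate's social
# media history, one from a company's self-description. The trait
# names and every number are invented for illustration.
TRAITS = ["openness", "conscientiousness", "extraversion",
          "agreeableness", "risk_tolerance"]

candidate = {"openness": 0.72, "conscientiousness": 0.55,
             "extraversion": 0.31, "agreeableness": 0.41,
             "risk_tolerance": 0.66}

company = {"openness": 0.80, "conscientiousness": 0.60,
           "extraversion": 0.70, "agreeableness": 0.50,
           "risk_tolerance": 0.75}

def cultural_fit(person: dict, culture: dict) -> float:
    """Cosine similarity between the two trait vectors."""
    dot = sum(person[t] * culture[t] for t in TRAITS)
    norms = (math.sqrt(sum(person[t] ** 2 for t in TRAITS)) *
             math.sqrt(sum(culture[t] ** 2 for t in TRAITS)))
    return dot / norms

# An arbitrary cut-off turns the similarity into a hiring signal.
FIT_THRESHOLD = 0.95
score = cultural_fit(candidate, company)
print(f"fit: {score:.2f} -> {'match' if score >= FIT_THRESHOLD else 'no match'}")
```

Notice where the ethical questions hide: in how the candidate’s vector was inferred in the first place, and in who chose the threshold – none of which the person being scored ever gets to see.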

Imagine the ethical implications. You can’t, because how can you question the ethics of a service, a product, an internet-connected thing if not even the developers themselves know exactly how their product’s algorithm is designed to act on its data? How can we put a health warning on a product if we don’t know the ingredients?

If we don’t have insight into the recipes of these secret algorithms, the very talk about ethics becomes hollow. Ethics is something that needs to be considered in the innovation process itself; it has to be designed into services (as Ind.ie calls for in their Ethical Design Manifesto); it has to be part of the recipe. It’s about balancing interests – human, commercial etc. – from the beginning, and about defining ethical, non-discriminatory criteria for e.g. prediction and profiling. Right now we are only looking at the consequences and outcomes of the algorithmic processing of data. Some of these consequences we are already finding discriminatory – like when a piece of software decides whether one is fit for a job based on one’s social media history, which also reveals one’s social status. But evaluating the consequences is too late. There is an urgent need for public scrutiny of the algorithmic economy, and we need to act on it now.
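What would it mean, concretely, to design a non-discrimination criterion into the recipe rather than auditing outcomes after the fact? Here is a minimal sketch with invented data and one deliberately simple criterion (demographic parity); both the criterion and the tolerance are design choices, which is exactly why they should be open to scrutiny.

```python
from collections import defaultdict

# Invented screening decisions as (group, hired) pairs. "Group" stands
# for any protected attribute; all of the data here is made up.
decisions = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def selection_rates(records):
    """Share of positive decisions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, hired in records:
        totals[group] += 1
        positives[group] += hired
    return {g: positives[g] / totals[g] for g in totals}

def passes_parity(rates, tolerance=0.2):
    """Demographic parity check: selection rates across groups may
    differ by at most `tolerance` before the system is flagged."""
    values = list(rates.values())
    return max(values) - min(values) <= tolerance

rates = selection_rates(decisions)
print(rates)                  # {'A': 0.75, 'B': 0.25}
print(passes_parity(rates))   # False -> flag before deployment
```

The point is not that this particular criterion is the right one; it is that checks like this can only be written into the recipe if the recipe is open to inspection in the first place.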
