My Algorithm Doesn’t Fit: When Technology Fails to Consider the Human Factor, Regulators Must Protect Citizens and Preserve Society’s Values

 

When you stand over two meters tall, you get used to things not fitting. Off-the-peg clothes? Not usually. Desks in the office? Rarely. And airline seats? Almost never.

 

Lately, though, it’s not just the physical things that don’t fit. Apparently, because I don’t borrow money and I pay off my cards every month, my credit score is lower than the average university student’s. My vehicle insurance premiums vary depending on how much time I spend at work and which email address I apply from. And buying two single airfares can be cheaper than buying a return ticket…and the list goes on.

 

While these may seem like annoying but trivial problems that we can learn to overcome by gaming the system, in practice they are evidence that algorithms are everywhere. Massive amounts of data about people and their habits are now collected and analyzed, and as a result algorithms have become part of almost every interaction we humans have with technology. In fact, you are probably reading this article right now because of an algorithm.

 

Algorithms are everywhere


Google owes its dominance of search to its unique algorithm. Facebook uses algorithms to decide what news is fed to your page. Algorithms tell companies who should see their online advertising, let politicians know which voters are undecided, and guide judges when sentencing criminals.

 

Data fuels algorithms, and it is assumed that more data leads to more accurate algorithms. But gathering more data can also mean probing and influencing people’s lives in ways that raise privacy and human rights concerns. In other words, an issue for regulators.

 

Do algorithms reflect the values in society or those of their creator?


Organizations that use algorithms see them simply as a more efficient way to bring products and services closer to their target markets. However, as the trends of people buying vinyl records or paper books show, the way humans interact with algorithms isn’t always simple or predictable.

 

This is a critical consideration often overlooked in conversations about the impact of algorithms, artificial intelligence (AI), and disruptive technologies: how does human behavior disrupt the technology? Consider the studies showing that algorithms used in US criminal cases can be racially biased. Or how algorithms were most likely used to target specific voters with specific “news” in the 2016 US election, with Russian interference now being investigated as the source of that news.

 

How do we fit such considerations into new regulatory models and ensure that there is transparency, fairness, and equality in the way that algorithms, robotics, AI, and machine learning deliver services in a diverse society?

 

Should there be a difference in service if I use my corporate email identity rather than my Hotmail account? And what if I don’t have a corporate email?

 

What this means for the Regulator


Most people assume that data use, like justice, is meant to be blind and objective. But some regulators are already thinking beyond this assumption. Article 22 in the EU’s GDPR states that individuals have “the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.” In other words, if someone doesn’t like what the machine says, they can appeal and get a second opinion, this time from a human.

 

With that in mind, it is fair to ask whether regulators will have to start examining how algorithms are designed. How transparent are they when it comes to things like data breaches? How accurate? How accessible? How biased?
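
To make that last question concrete, here is a minimal, hypothetical sketch of the kind of check an auditor might run: comparing an automated system’s approval rates across two groups and flagging a large gap. The sample data, the group labels, and the 80% threshold are assumptions for illustration only, not a description of any regulator’s actual methodology.

```python
# Hypothetical illustration: a minimal disparate-impact check on an
# automated decision system. Data and thresholds are assumptions for
# demonstration only; real audits are far more involved.

def approval_rate(decisions):
    """Share of positive (approved) outcomes in a list of 0/1 decisions."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(decisions_group_a, decisions_group_b):
    """Ratio of the lower approval rate to the higher approval rate."""
    rate_a = approval_rate(decisions_group_a)
    rate_b = approval_rate(decisions_group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

if __name__ == "__main__":
    # Example outcomes from an automated credit decision (1 = approved).
    group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # 75% approved
    group_b = [1, 0, 0, 1, 0, 1, 0, 0]   # 37.5% approved

    ratio = disparate_impact_ratio(group_a, group_b)
    print(f"Disparate impact ratio: {ratio:.2f}")

    # The "80% rule" from US employment practice is one commonly cited yardstick.
    if ratio < 0.8:
        print("Potential adverse impact: outcomes differ markedly between groups.")
```

A real audit would look at far more than a single ratio, but even a crude measure like this turns “how biased?” into something that can be inspected rather than assumed.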

 

Do they seek to remove inequality? Or do they reinforce it?


As more and more organizations rely on data gathering and algorithms to help them make decisions, more inequality and bias are likely to be exposed, some with serious consequences for people as they interact with financial services, health care, government, employers, or even the justice system. While I may find it amusing when Netflix suggests a film based on my spouse’s likes that I would find painful to sit through, it is often no laughing matter when an algorithm fails to read the nuances of human behavior.

 

To reiterate a key theme from my previous posts, regulators and technology companies must work together to help address these problems. The new GDPR framework is an excellent example of a debate around the use of data that needs to be extended to the use of algorithms. We need to get in front of this issue through honest dialogue between businesses, citizens, and regulators alike. Because, as Alibaba founder and Executive Chairman Jack Ma noted at Davos, “The computer will always be smarter than you are; they never forget, they never get angry. But computers can never be as wise.”

 

By Mike Turley, Government & Public Services Global Leader