A life of algorithms

This post is written by Andrei Dinca, a first-year PhD student in Geography and Marketing. It was first published as an article on LinkedIn.

According to Experian’s research, “The UK adult population can be split into 29 customer segments – 16 female and 13 male – based on their purchasing habits, behaviours and preferences.” Given the complexity and volume of data they hold, I’d take such claims very seriously … and scrutinise them just as seriously.

However, this is not about Experian or any other similar company, for that matter. This is about a phenomenon marketers have been embracing and developing for more than three decades: consumer segmentation and its subsequent commercial targeting. They are seen as the path to the golden egg of marketing: 1-to-1 communication with customers. We’re all looking for the right message, at the right time, for the right individual. This ultimate goal has made data, with its endless promises, our new best friend.

We live in the age of algorithms, where more and more key decisions are made by some sort of presumably objective mathematical model. This may lead to a life of algorithms, where we move almost all our decision-making to the ‘brains’ of machines. Think about your credit score, insurance premium, GPS routes, Facebook newsfeed or search engine results, or even China’s new ‘social credit system’. They all rely on complex and hardly decipherable algorithms that work at speeds and accuracies far greater than any human brain will ever be able to achieve.

So what’s wrong with this? It should be fair for everyone, as we’re all ‘judged’ by the same rules, free of emotions and interpretations. For marketers, in theory, this is a win-win situation: consumers receive the content they want, when they want it, while we contribute to our companies’ goals and growth.

Yet how many of us have ever questioned this process? How many of us have looked at our segmentation practices beyond commercial interests and questioned their correctness, efficiency, effectiveness and power? Or even thought about their social, political, economic or ethical implications?

I’d take a wild guess and say that not many have done so. And that’s understandable, because this is not something one even has time to consider in their job. Plus, the hype around technology has given us a massive boost of opportunity and confidence, leading many to believe that there are endless possibilities to segment and target one’s audience.

Likewise, not many academics have thought about it either. There are some interesting attempts, with Cathy O’Neil’s Weapons of Math Destruction being a great example. They all revolve around the dark sides of algorithms – inequality, social and political injustice – and question their objectivity. After all, algorithms are built by humans with particular values and views of the world, which are often embedded in the thousands of lines of code that create them.

All this is great because it makes us aware of our actions and of how using such tools may affect others. However, none of these attempts comes up with solutions, nor do they ask people what they think about this – which is a bit of a paradox, as it leaves customers outside a debate about practices that directly affect them. Anyway, one thing is certain: while some concerns have been raised, there have been few if any attempts to solve such issues.

The problem is that this creates a void, which has recently been filled with sensationalist stories about misused personal data – see Cambridge Analytica. While these cases are definitely newsworthy, they also paint a rather evil picture of segmentation and targeting, sowing fear amongst customers when it comes to personal data. They overpower the examples where data and algorithms have proved incredibly beneficial and, in some cases, have even helped save lives. However, there is a key distinction to be made: it’s one thing to be cautious about your personal data and a completely different thing to be fearful of it.

So how can we make this better? How can we bridge these gaps, look beyond commercial interests alone, and find a way to address social, economic or ethical concerns without harming our companies’ success? I don’t have the answer – and it bothers me. This is why I’ve chosen to address the issue of commercial targeting in my PhD project. I truly believe in the power of transparency and alternatives – it has been proven over and over again that, in the long term, singular perspectives on such matters are very dangerous.

So why settle for the current model rather than look for more ethical and transparent ways of ‘talking’ to customers? I’d love to hear stories where alternatives were explored, or any views or suggestions on how we can make commercial targeting and segmentation more ethical as well as more efficient.