Algorithms Behaving Badly
Did anyone see the news yesterday? Amazon, the world's third-largest information technology company, came under intense scrutiny after its recommendation algorithms were found to be guiding shoppers towards the components needed to produce chemical explosives.
The Channel 4 News investigation found explosive black powder and thermite grouped together under the "Frequently Bought Together" list during searches for other common chemicals, while steel ball bearings, often used as bomb shrapnel, were recommended in the "Customers Also Bought" tab.
So what lessons can be learned from this faux pas?
Algorithms, or ‘algos’ for short, are scripts that use rules to make decisions and improve performance over time. Marketers have used them for years, and they are fully embedded in most modern marketing strategies. However, as we’ve seen in the case of Amazon, human ingenuity and our ability to draw insights and conclusions should not be overlooked. Algorithms augment human ability, allowing marketers to make better decisions and focus on the customer experience and path to purchase. Marketers will therefore always need people working alongside algorithms, offering human oversight to ensure they continue to perform as intended. This is where Amazon has fallen down: it has become complacent, overlooking what its algorithms are linking together and triggering a huge backlash against the brand.
This increasing reliance on algorithms is changing the way our brains work and fundamentally changing the ways in which marketers make decisions. In the future, the survival and success of every brand in the world will depend on how well it can market to the machines. Our Stories in Motion study found that the way people search for information and shop is changing. Consumers are heavily influenced by retailers’ sites (45% in the UK, 34% in the US and 55% in China), meaning brands like Amazon have a duty to drive accurate and responsible marketing.
A more serious case of algorithms failing (and a sad inkling of what is to come as machines infiltrate every part of our lives) is driverless cars causing fatalities. Tesla admitted that its system sensors failed to distinguish a white van against a bright sky, resulting in the untimely death of a 40-year-old Floridian. The moral of the story is no great surprise: algorithms aren’t a perfect science yet – we must learn how to work with them, rather than accepting algorithm-driven technology as an unalloyed good.
What can brands do when algorithms fail?
Failures will happen – it’s inevitable. However, here are some best practices for brands to ensure their customers aren’t affected by badly behaving algorithms:
- Don’t underestimate the human touch. Human oversight of the algorithmic process is still needed to catch computer blunders
- Always check for racial, gender, age and other common biases in your algorithms
- Explicitly analyse how your software can fail and provide a safety mechanism for each possible failure
- Have a communications plan in place to address the media in case of an embarrassing failure. (Hint: start with an apology, Amazon)
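The bias check in the second point above can be made concrete with a simple audit of a recommender's logs. The sketch below is a minimal, hypothetical illustration (the function names, event data and threshold are all invented for this example): it measures how often an item is shown to each demographic group and flags large gaps in exposure, a basic demographic-parity check.

```python
# Minimal sketch of a demographic-parity check on a recommender's output.
# All names and data here are hypothetical; a real audit would use logged
# impressions from the production system.
from collections import defaultdict

def recommendation_rates(events, item):
    """Share of logged events per demographic group in which `item` was shown."""
    shown = defaultdict(int)
    total = defaultdict(int)
    for group, recommended in events:
        total[group] += 1
        if item in recommended:
            shown[group] += 1
    return {g: shown[g] / total[g] for g in total}

def parity_gap(rates):
    """Largest difference in exposure between any two groups (0 = perfect parity)."""
    values = list(rates.values())
    return max(values) - min(values)

# Hypothetical logged events: (demographic group, items recommended).
events = [
    ("18-24", ["credit_card", "loan"]),
    ("18-24", ["loan"]),
    ("55+",   ["credit_card"]),
    ("55+",   ["savings"]),
]

rates = recommendation_rates(events, "loan")
if parity_gap(rates) > 0.2:  # the acceptable threshold is a policy choice
    print("Warning: uneven exposure across groups:", rates)
```

A check this crude won't catch every bias, but run regularly against real logs it gives the human overseers described above something concrete to review.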
Machines are not humans – and they can’t always be relied upon to understand the nuances of the customer-brand relationship. It’s a hard lesson for Amazon to learn, but one that must be taken seriously.