Sophie Giesen, Head of Strategic Business Consulting Australia and New Zealand, Genesys, talks about AI and the problem with everything.
In the last decade things have changed. A lot.
While it’s amazing to live in the iPhone age, we also live in a time where our worst traits are amplified across the globe in seconds. From recommender systems radicalising citizens in 5 clicks, to social media influencing elections, a robot uprising is the least of our concerns when it comes to advanced technology.
Recently even Apple has found itself on the less forgiving side of new tech when its Apple Card applications delivered gender-biased approvals. What is happening?
At a conference last year, a colleague of mine asked: why is everyone talking about artificial intelligence? It isn't intelligence we are working with; it is designed applications, mostly built on machine learning, natural language processing (NLP) or predictive algorithms.
This very definition might give us an idea of what is going wrong. In 2018 Khoi Vinh, Head Designer, Adobe, spoke about perceived technology issues. “On the surface, the basis for these stories are ‘tech problems’ but as designers, we know better. We know that at least half of the issues discussed stem from design,” said Mr Vinh.
In her TED Talk this year, AI researcher Janelle Shane spoke about the difficulty of giving the right instructions to an AI algorithm. The goal isn’t the problem, she noted, it’s how we ask the algorithm to achieve it.
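Shane’s point can be sketched in a few lines. The scenario below is hypothetical, not from her talk: we ask a model to “maximise accuracy” on a fraud dataset where only 1% of transactions are fraudulent, and a degenerate model that never flags anything technically satisfies the instruction.

```python
# Toy illustration of objective misspecification: the goal
# ("high accuracy") is reasonable, but the way we asked for it
# rewards a model that solves nothing.
labels = [0] * 990 + [1] * 10  # 1% of transactions are fraud (label 1)

def always_legit(_transaction):
    """A degenerate 'model' that never flags fraud."""
    return 0

# The model is right on every legitimate transaction and wrong
# on every fraudulent one, yet scores 99% accuracy.
accuracy = sum(always_legit(None) == y for y in labels) / len(labels)
print(f"accuracy: {accuracy:.0%}")  # prints "accuracy: 99%"
```

The instruction was met to the letter; the intent, catching fraud, was ignored entirely, which is exactly the gap between goal and instruction Shane describes.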
So, if part of the technology-related problem comes from design, and part from how we communicate with the algorithm, then the inevitable question must be: how do we design better?
At minimum, there are two components to address: data set bias, and our instruction approach.
“It’s easy to mistake this process for an objective or neutral pipeline. Data in, results out. But human judgment plays a role throughout. Machine learning isn’t a pipeline, it’s a feedback loop. As technologists, we’re not immune to the effects of the very systems we’ve helped instrument,” Josh Lovejoy, Google.
No matter how much we try to design neutral technology, some level of unconscious bias creeps into our work, because our society itself contains bias. Apple stated bias was impossible as the machine did not know an applicant’s gender, yet we know that other indicators can act as proxies. Rather than sanitise data, maybe we should purposefully include bias for the algorithm to understand. Give it the whole picture and let it find the solution!
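The proxy effect is easy to demonstrate on synthetic data. In this sketch (the feature and numbers are invented for illustration, not drawn from the Apple Card case), a “gender-blind” model never sees gender, only a proxy score that happens to correlate with it, and the approval rates still diverge sharply.

```python
import random

random.seed(0)

# Synthetic applicants: the model is never shown `gender`,
# but a hypothetical `proxy` feature (say, a spending-category
# score) is distributed differently for each group.
applicants = []
for _ in range(10_000):
    gender = random.choice(["F", "M"])
    proxy = random.gauss(0.4 if gender == "F" else 0.6, 0.1)
    applicants.append((gender, proxy))

def approve(proxy_score):
    """A 'gender-blind' rule: approve if the proxy score is high enough."""
    return proxy_score > 0.5

# Measure approval rates per group, even though the rule
# never looked at gender directly.
rate = {"F": 0.0, "M": 0.0}
count = {"F": 0, "M": 0}
for gender, proxy in applicants:
    count[gender] += 1
    rate[gender] += approve(proxy)
for g in rate:
    rate[g] /= count[g]

print(f"approval rate F: {rate['F']:.2f}, M: {rate['M']:.2f}")
```

Removing the sensitive column did not remove the bias; it only hid it from inspection, which is why “the machine did not know the gender” is no defence.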
Instead of assuming the machines think like us, maybe we need to think and communicate more like them.
Learning technologies such as neural networks have incredible potential to elevate the best in us, but learning alongside the machines may elevate the very function of AI.