This Tuesday, Women Love Tech Founder Robyn Foyster emceed a panel of powerhouse tech leaders at SXSW Sydney to discuss how AI perpetuates unconscious bias.
Shark Tank’s Dr Catriona Wallace, a metaverse and AI specialist, spoke about the 80 million jobs that will be replaced by AI, losses that will overwhelmingly impact women. She detailed the ‘ghost work’ done by underpaid workers in Kenya to clean data sets, and warned of the potential for AI to further entrench biases of the past into our future, and the impact this will have on diversity and misinformation.
Journalist and Author Tracey Spicer shared valuable insights from the extensive research behind her book ‘Man-Made’ and explained just how insidious unconscious bias in AI really is.
CEO and finance guru Shivani Gopal talked about the fascinating Dunning–Kruger effect, the cognitive bias in which people with limited competence in a particular domain overestimate their abilities.
Towards the end of the panel, PyxGlobal CEO Zachary Zeus joined the lineup to discuss the creation of a new role, known as a ‘trust architect’ – a term he coined – to help address concerns about unconscious bias being embedded in AI.
Here are a few of the eye-opening insights from the panel:
Tracey Spicer: In the modern workplace, unconscious bias is not helpful because it works against diversity, inclusion and, more importantly, belonging.
The problem with bias and discrimination in AI is that it deepens the stereotypes that already exist in our brains and in society. I started becoming interested in this topic seven years ago when my then 11-year-old son turned to me and said, ‘Mum, I want a robot slave.’…As a life-long feminist and journalist, I had this epiphany that the bias of the past – this idea of women and girls being servile, a stereotype invented back in the 1950s – was being embedded into the technology that is running our futures.
When the focus groups happen around the development of these innovations, people are asked whether they want the voice to be female or male. If it is a servile device, because we have been comfortable in the past with sexism and misogyny, we say ‘yes, we want a female voice’ for that. And yet almost all the chatbot voices for business have male voices.
“From the moment we wake up in the morning to when we go to bed at night, we are surrounded by AI, and it is almost like we are marinating in a sea of bigotry and misogyny.”
Tracey Spicer, author of Man-Made
The Need for Responsible AI Is Greater Than Ever
Robyn Foyster: So when it comes to AI, is it like the wild west?
Dr Catriona Wallace: So the world changed forever for us in November last year. It’s never going back. And that is, humanity released a superpower into the world. And a lot of people in the AI sector are saying, oh, it just kind of happened, like suddenly there wasn’t generative AI and then suddenly there was.
But actually, AI was invented in 1956 at Dartmouth College. It’s nearly 70 years in the making, so these machines are very powerful. It is humanity’s dream to have superpowers. We now have one. The challenge is that we’ve released it and there’s no way of getting it back. And why I refer to the Wild West is that there are no laws or regulations that specifically apply to AI.
Now, there is the eSafety Commission and the Human Rights Commission, which have generic rules and regulations that could be applied to AI. The EU is the only place in the world that really has significant draft legislation underway, but they’ve had to hold off passing it and go back to the drawing board because generative AI has posed a whole new series of risks. So right now, we have a superpower released into the world, full of bias, without even the AI entrepreneurs knowing what it’s learning or how it behaves. So it’s extremely dangerous.
Robyn Foyster: AI is not built for the future; it’s actually built using the patterns of the past. And so what we’re doing is literally dragging the prejudices of the past into the AI we are creating for the future. Shivani, when I Googled ‘CEO’, I found that all the images you see are of middle-aged white blokes. You’ve been championing women through your Remarkable Women and Elladex, and as an entrepreneur, how does that make you feel?
Shivani Gopal: Oh, it’s so disappointing in a world where we have such incredible female leaders and female CEOs. What is happening in this world of technology is that when you Google ‘CEO’, you get a page full of men. If you don’t believe me, when this session finishes, jump onto Google and type in ‘Australian CEOs’ or just ‘CEOs’ and see what comes up. You will get a page full of men, and not just men: white men. Where is the diversity? And that is really scary, because we live in a world where when you Google something, the results you get are presented to you as facts. Seeing is believing.
So you go, okay, if that’s what a CEO looks like, well, that’s not me. And what I worry about is the impact that has on aspiring women who want to move into a CEO role. Seeing is believing. If you don’t have those role models, if you don’t see it through the fact machine of Google, you think, how can I do it? And if we’re not doing enough of it, what is the intergenerational impact of this? What is the impact this is going to have on young girls and boys? Because we also want our young boys looking up and going, a CEO could be a man or a woman; it could be my mummy or it could be my daddy. And the same thing for girls, and that’s not happening.
I know we’ve all made the mistake of taking information on the internet as fact. How many of us have taken something off Wikipedia as fact? Hopefully not now – hopefully we know better – but once upon a time we took information off Wikipedia and popped it into a university assignment, or used it in an article presented as fact, and then got shamed with an ‘oh crap, it was wrong’.
The information on the internet isn’t always correct, it isn’t always fact, but we digest it as fact, and that can be very powerful and/or limiting. And it’s also very scary when we live in a world where research says 70% of us experience imposter syndrome – and I actually think this number is a lot higher; there are just a couple of us in denial, maybe. So if you’re already experiencing imposter syndrome, and the world is telling you, and the role models are telling you, that as a woman you’re not supposed to be a CEO – that’s not what they look like, that’s not what they sound like – well, you’re going to second-guess yourself.
We also know that women will generally only apply for work when they feel they’ve ticked every single box. And so when we live in that world again, it becomes even more limiting. So I’d like to leave you with just a little PSA, and that is that pretty much no one fits all of those boxes. I know because I’m an employer and I’ve hired so many people who haven’t ticked all of the boxes that I’ve put in those roles. And in fact, the people who I have brought in, they created a category of their own. So if you don’t fit the box, think about, what if I’m outside the box? And what if that’s great? What can I do with this?
The second thing I want to leave you with is a really interesting psychological phenomenon called the Dunning–Kruger effect. It often plays out in the way that the less you know, the more confident you are. So the next time you’re sitting next to a middle-aged white guy – no offence to middle-aged white guys, but we’re going to use the term – and you think, gee, if only I had his confidence, just know that if you’re feeling less confident, it’s probably because you have more experience, you have more expertise, and therefore you are more aware. That’s what’s driving the lack of confidence, not anything else. Think about that the next time you Google ‘CEO’ and think, actually, I do know better.
Dr Catriona Wallace and Zachary Zeus also spoke about the dangers of AI at large.
Dr Catriona Wallace: We know that within 10 years, at the rate AI is progressing, we won’t know ourselves. When we talk about artificial intelligence, there is narrow artificial intelligence, and then there is more general artificial intelligence. Narrow artificial intelligence is very capable today; it can do a task like reading your emails. Artificial general intelligence is something more like Siri or Alexa that knows generally about the world, but it’s not very deep. You can ask it about the news and the weather and so on.
Then we have artificial superintelligence, which is where the machines really do operate and think and are more capable than human intelligence. We’re pretty close to that. In fact, there are a number – around eight – really well-regarded super-intelligent computers in the world already. IBM Watson was probably the original one, and it came with a lot of problems. And it worries me deeply that Meta owns a supercomputer; that’s what we should be worried about. We’ve got people like Elon Musk who have invested in companies like Neuralink, a very well-developed brain chip that can actually communicate with a computer and show what people are dreaming or what they’re thinking. This will all be commonplace within 10 years, so know that this is coming super fast.
What Are Hallucinations?
Who knows what hallucinations are? A few. So hallucinations are when the machines are asked questions and then really convincingly make up an answer that’s not true. What the machine is actually doing is trying to take representations from the outside world and bring them in, then look at its data; it doesn’t quite have the answer, so it hallucinates and makes something up, which is often completely false, but we believe it’s fact.
Now, that’s challenging. What is more challenging is that even the engineers and programmers of that AI don’t actually know what it’s doing or how it did that. We can’t really explain why it hallucinates. So not only have we got super-intelligent computers coming, we’ve also got computers that hallucinate, amongst other problems. So this is a great concern for us. And the answer to that, I don’t really know.
Why We Need Trust Architects
Zachary Zeus: The only thing I’d add to that is that it is coming, and I’d encourage everybody here, and everybody who is concerned about this, to take as much agency as you can in your life: in how you design your engagement with technology, with businesses, and with your workplaces. Take command of your life, because that’s what we have, and that’s the opportunity to flip this on its head and make sure that the machines work for us as opposed to the other way around.
Websites:
Responsible Metaverse Alliance: https://responsiblemetaverse.org
Giving What We Can: https://www.givingwhatwecan.org/