Here is the fourth in a series of extracts from award-winning journalist Tracey Spicer’s book, Man-Made: How the bias of the past is being built into the future. In this extract, Tracey explains how we can turn the tables on AI bias.
In an era in which employers use workplace surveillance to monitor our productivity, we often forget that we can turn the tables. Futurists predict workers will soon be monitoring their bosses using the same tools. For example, employees can check and record stress levels on their Garmin, Fitbit or Apple Watch.
Perhaps there’s a place for a website to publish the results of smartwatch data from employees at particular companies. Kinda like a high-tech Glassdoor, the website where employees anonymously review companies. It’d be fascinating to see the results from the likes of Apple, Google and Facebook. Talk about turning the tables on the tech titans. Solidarity forever!
Nowadays, male allies are refusing to speak at conventions where there’s an unequal representation of genders. Advocates are highlighting this on social media using the hashtag #Whereareallthewomen. If you work in tech and are slated to present at a conference, ask yourself: is there someone I can nominate instead? It might be a woman, a person of colour, or a colleague with a disability.
Ethical investors should be putting a gender lens over tech stocks. ‘What’s the composition of the executive team and board?’ ‘How many women are involved in research and development?’ ‘Does this company invest in organisations with a history of gender or racial bias?’ Talk to your bank, superannuation fund or financial advisor about your concerns. By the end of this decade, women will control US$30 trillion in financial assets. Yep – trillion.
Do you work in the university sector? Lecturers should be aware of the pitfalls of using machine learning to evaluate assessments. There needs to be human oversight. Sophisticated algorithms can pinpoint struggling students and organise assistance. However, they can also exacerbate existing bias, further side-lining these students.
It might be time for your uni to create a department dedicated to AI bias. The Human Technology Institute at the University of Technology Sydney is hitting the ground running with a world-leading report recommending reform on facial recognition, based on international human rights law. The report urges the federal attorney-general to lead this ‘pressing’ process.
Maybe you’re a lawyer working in the technology field, or a judge who’ll be presiding over cases of alleged algorithmic bias. It’s your job to decide what ‘fair’ means as it pertains to the use of artificial intelligence and machine learning. So too, insurers must adopt robust fairness criteria. One test being developed centres on something called ‘counterfactual fairness’. Yikes! What the heck is that? Apparently, it asks AI systems this question: if attributes such as race, gender or sexual orientation are changed, do the model’s decisions remain the same?
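For the technically minded, here’s a rough sketch in Python of what that flip test might look like. Everything here is invented for illustration: the model, the column names, the swap values. (Strictly speaking, counterfactual fairness research uses causal models rather than simple attribute swaps, so treat this as the back-of-the-envelope version.)

```python
# A minimal sketch of a counterfactual 'flip test', assuming a trained
# model with a scikit-learn-style predict() method and a pandas
# DataFrame of records. All names here are illustrative.
import pandas as pd

def counterfactual_flip_rate(model, records: pd.DataFrame,
                             attribute: str, swap: dict) -> float:
    """Fraction of records whose prediction changes when a protected
    attribute (e.g. 'gender') is swapped to a counterfactual value."""
    original = model.predict(records)

    # Same people, with only the protected attribute changed.
    counterfactual = records.copy()
    counterfactual[attribute] = counterfactual[attribute].map(swap)
    flipped = model.predict(counterfactual)

    return float((original != flipped).mean())

# Illustrative usage: a flip rate near zero suggests the decisions
# don't hinge on gender; anything higher is a red flag.
# rate = counterfactual_flip_rate(model, applicants, 'gender',
#                                 {'female': 'male', 'male': 'female'})
```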
Where can you look for more details? The Alan Turing Institute’s website is an excellent source for information on fairness, transparency and privacy. Google AI and IBM’s AI Fairness 360 provide open-source toolkits to check for bias in datasets and machine learning models. IBM’s Watson OpenScale performs bias-checking and mitigation in real time. Read the 2021 book co-authored by Dr Catriona Wallace, Checkmate Humanity: The How and Why of Responsible AI.
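If you’d like to see what ‘checking a dataset for bias’ actually looks like, here’s a minimal sketch using IBM’s AI Fairness 360 (the aif360 Python package). The toy hiring data is invented for illustration; only the toolkit and its metrics are real.

```python
# A minimal sketch using IBM's open-source AI Fairness 360 toolkit
# (pip install aif360). The tiny hiring dataset below is invented.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy data: gender encoded as 1 = male (privileged), 0 = female.
df = pd.DataFrame({
    'gender':     [1, 1, 1, 1, 0, 0, 0, 0],
    'experience': [5, 2, 7, 3, 6, 2, 8, 4],
    'hired':      [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df, label_names=['hired'],
    protected_attribute_names=['gender'],
    favorable_label=1, unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{'gender': 1}],
    unprivileged_groups=[{'gender': 0}],
)

# Disparate impact below roughly 0.8 is a common rule-of-thumb red flag.
print('Disparate impact:', metric.disparate_impact())
print('Statistical parity difference:',
      metric.statistical_parity_difference())
```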
One easy suggestion is to include a human in the loop: an actual person double-checking everything (there’s a rough sketch of how this might work below the list). The World Economic Forum wants businesses to be aware of the three main forms of machine bias. (Yes, there are more than three, but these are often evident in the workplace.) These are:
1. Implicit bias: When a system discriminates against a person or group on the grounds of gender, race, disability, sexuality or social class.
2. Sampling bias: This occurs when the data used to train the machine isn’t representative of the population the machine will be serving. For example, we know about the problem of data from white middle-class hospitals failing to represent non-white populations from poorer neighbourhoods. (See the sampling-bias sketch below the list.)
3. Temporal bias: Even if the AI produces fair results today, it may not do so in the future, as circumstances change.
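What does a human in the loop look like in practice? Often it’s just a confidence gate: the machine handles the decisions it’s sure about, and everything else lands in a queue for an actual person. Here’s a minimal sketch, assuming a scikit-learn-style model; the names and the 0.9 threshold are invented.

```python
# A minimal human-in-the-loop gate, assuming a model with a
# scikit-learn-style predict_proba() method. Names are illustrative.
def decide(model, record, review_queue, threshold=0.9):
    """Auto-decide only when the model is confident; route the rest
    to a human reviewer."""
    prob = model.predict_proba([record])[0][1]
    if prob >= threshold:
        return 'auto-approve'
    if prob <= 1 - threshold:
        return 'auto-decline'
    review_queue.append(record)  # an actual person double-checks this one
    return 'sent to human review'
```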
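And here’s the sampling-bias sketch promised above: a simple comparison of who’s in your training data versus who’s in the population your system will serve. The figures and column names are invented for illustration.

```python
# A minimal sampling-bias check: compare group shares in the training
# data against the population the system will serve. Invented figures.
import pandas as pd

def representation_gap(train_column: pd.Series,
                       population: dict) -> pd.DataFrame:
    """For each group, report its share of the training data, its share
    of the population, and the gap between the two."""
    train_share = train_column.value_counts(normalize=True)
    rows = []
    for group, pop_share in population.items():
        sample_share = float(train_share.get(group, 0.0))
        rows.append({'group': group,
                     'training_share': round(sample_share, 3),
                     'population_share': pop_share,
                     'gap': round(sample_share - pop_share, 3)})
    return pd.DataFrame(rows)

# Illustrative usage: a large negative gap means that group is
# under-represented in the training data.
# gaps = representation_gap(train_df['ethnicity'],
#                           {'white': 0.6, 'non-white': 0.4})
```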
Put simply, we should all be aware of the ‘red flags’, as my teenage daughter would say. Although she says it at the slightest provocation. I’ll try to give her a kiss on the cheek before bedtime: ‘Red flag, Mum!’ Gotta love Generation Alpha. These kids really know their boundaries!
If you think speaking out about red flags will make little difference, think again. Research by the Brookings Institution reveals public opinion is hugely influential in relation to legislation on AI fairness, privacy and safety in the US. Here in Australia, we’re at the beginning of the conversation. We can exert enormous influence in the coming years.
Follow the international change-makers and donate to their foundations. Hop onto ajl.org to give money to the Algorithmic Justice League, which is fighting for equitable and accountable artificial intelligence. Maybe you work in voice software. Anyone who identifies as ‘a female, non-binary, genderqueer, genderfluid, gender non conforming, agender, and all minority genders’ can be part of the community at womeninvoice.org.
You can buy Man-Made: How the bias of the past is being built into the future here.
How to get a signed copy of Tracey Spicer’s book
Readers can also buy a signed copy via this link: https://traceyspicer.com.au/signedbook
You can read more extracts from Tracey Spicer’s book here, including ‘How we can take action on AI being predisposed to misogyny, ableism, homophobia and ageism’.