AI experts are describing the call for AI regulation as critical. Here, three academics from RMIT explain why it is so important.
Lisa Given, Enabling Impact Platform Director, Social Change, Research and Innovation Capability, RMIT University
Professor Lisa Given is Professor of Information Sciences and Director of RMIT’s Social Change Enabling Impact Platform. Her research examines people’s use of technology tools for decision-making in business contexts and everyday life.
“Sam Altman’s testimony in front of Congress highlighted concerns around the potential for these technologies to ‘manipulate voters and target disinformation’, yet others – such as IBM’s Christina Montgomery – cautioned that there was no need to ‘slam the brakes on innovation’ while addressing these concerns.
“AI tools have the potential to disrupt workplaces and people’s everyday lives. However, the technology is still evolving, which calls for a measured approach that balances real concerns against a potential future that may never materialise.
“Separating hype from reality, in both the promises and the doomsday scenarios being discussed, is where experts can help advise the public and businesses on how to move ahead with an appropriate level of caution.
“For example, scams and frauds that trick people into wiring money to someone posing as a family member are not a new strategy. Yet AI tools may be able to enhance the texts or emails, or be combined with realistic, fake voices, to make scams more persuasive. Businesses, governments and individuals can take steps to guard against scams while embracing AI innovations that can enhance work and daily life.
“The key advice for consumers and businesses alike is to be cautious in our interactions with information shared online (whether text, audio, or video) and to employ critical thinking and assessment strategies to combat those who would use these technologies with ill intent.
“Regulation is critical in this space to ensure that AI tools can be used appropriately and effectively so they benefit society, rather than working against society’s best interests.
“Open analysis of the datasets underlying these systems, clear principles of how these tools can be deployed to help business and individuals, and transparency around the use of these tools to transform texts and images, are just some of the ways that regulation can help society to move forward safely and appropriately.”
Dr Dana McKay, Senior Lecturer in Innovative Interactive Technologies, School of Computing Technologies, RMIT University
Dr Dana McKay is a senior lecturer in innovative interactive technologies at RMIT University. She studies the intersection of people, technology and information, and her focus is on ensuring advances in information technology benefit society as a whole.
“The sad truth is that disparity and discrimination in data underpins artificial intelligence. Data that reflects the needs and biases of predominantly white, wealthy, English-speaking men populates the planet, and it is also overwhelmingly white, wealthy, English-speaking men who have created AI.
“Women are more at risk from deepfakes, creatives are seeing their work stolen and losing their income streams, low paid jobs are more easily replaceable, biases in existing data (such as racism) are seen in AI-facilitated decision-making, and low-income people have less chance of fighting back against the AI systems working against them – as we’ve seen with Robodebt.
“AI products are created by companies for financial benefit, not for social good. This means they are controlled by a handful of privileged people, and the details of how they do things are hidden behind the defence of ‘computer as neutral expert’.
“We need regulation, and we need it quickly to tackle a rapidly developing technology that could have many foreseen and unforeseen harms.
“While I would love to see the nationalisation of AI, it will never happen.
“We know diversity makes technology innovation safer, more inclusive and more just. Perhaps a more feasible measure might be to require companies to go through a national body of AI ethicists who are a diverse representation of the community. Failing this, internal, diverse ethics teams would go some way to producing better AI products.
“Cigarettes, alcohol and other substances that may produce social harms are taxed at a higher rate to reflect these harms. Could we do the same with revenues earned by AI companies? The public should not have to bear the costs of ensuring AI operates within the law.
“An AI supertax – similar to an excise tax – could also pay for the environmental costs of creating and running AI, which are significant, and be used to support content creators whose works have been fed into the generative AI framework, in much the same way as copyright payments.
“Effective take-out legislation would ensure that algorithms are recalibrated so that anyone who wishes to remove their data from the dataset can do so.
“For transparency, information on decisions and recommendations made by AI, especially around classes of people, could be made accessible through a process like Freedom of Information.
“Finally, it is worth considering whether directors of companies should be criminally liable for harms enacted by these AI products.”
Dr Nataliya Ilyushina, Research Fellow in the College of Business and Law, RMIT
Dr Nataliya Ilyushina is a Research Fellow at the Blockchain Innovation Hub and ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) at RMIT University. Her work investigates decentralised autonomous organisations and automated decision making, and the impact they have on labour markets, skills and long-term staff wellbeing.
“It is premature to impose stringent regulation on AI without considering its enormous benefits in terms of productivity.
“One fear of AI is the potential for widespread job losses, but the effect of AI on the economy is more complex than the automation of some occupations – or tasks within occupations.
“The essence of AI lies in its ability to complement human workers rather than replace them. Existing research demonstrates a substantial increase in productivity, with knowledge workers experiencing an immediate 40% improvement through the adoption of AI.
“It is worth noting that AI has great potential to create jobs; for instance, the role of a ChatGPT prompt engineer is currently being advertised with a salary in excess of $300,000. Implementing excessive regulation could impede this progress and hinder the emergence of new types of work.
“In the specific context of the Australian labour market, where productivity has reached a 60-year low, rapid AI implementation can serve as a beacon of hope in the ongoing struggle against inflationary pressures. It could also help ease the burden of the interest rate rises set by the Reserve Bank of Australia and the mortgage stress households face as a result.
“Excessive regulation of AI could lead businesses to outsource their operations to other countries, resulting in more job losses than the adoption of AI itself.
“Instead of rushing regulation or following Italy’s extreme approach of banning AI, the government’s best course of action is to monitor the impact of AI and carefully evaluate its costs and benefits.
“It is prudent to regulate proven harms, but only after conducting a thorough cost-benefit analysis for the economy and society. Thus far, the proven productivity benefits of this frontier technology outweigh the potential risks that still need to be substantiated and assessed.”