What Are The Dangers Of Artificial Intelligence For Business?

By Robyn Foyster, 28 March 2022

The Gradient Institute, with support from Minderoo Foundation, recently released a report on the growing risk of Artificial Intelligence (AI) to business along with open source software for companies to combat the risk.

Here, Bill Simpson-Young, CEO of Gradient Institute and formerly of CSIRO, talks about the dangers of Artificial Intelligence for business.

What are the dangers of Artificial Intelligence we need to be aware of for business?

AI can offer many benefits to businesses and their customers, such as performing actions for the customer at great speed, customised specifically for that customer. For example, every time you use a map app on your phone, the speed, accuracy and relevance for you are made possible by AI. AI is also used in deciding your news feed, whether you get matched for a job opening and whether you are successful with a loan application.

Unfortunately, as we show in this new report, there is now overwhelming evidence that the use of AI for automated decision-making also has the potential to produce unlawful, immoral or discriminatory outcomes for individuals, through what are usually opaque and unaccountable decision processes.

These harms are arising from unwarranted trust in (or at least reliance on) AI. Humans and machines make decisions differently. While humans have common sense and are able to navigate different contexts with ease, machines have no built-in moral judgement and only perform well in relatively narrow domains.


What do we need to be aware of to safeguard our business?

Companies that operate AI systems which can influence people's lives need to get better at understanding the new risks these systems present for their business. The report includes a taxonomy of these risks, covering “failures of legitimacy” (for example, when the way an AI system works inadvertently treats different types of people differently, contravening anti-discrimination law), “failures of design” (for example, when an AI system has been trained on data that is not suitable for the decisions it will be making) and “failures of execution” (for example, failing to properly monitor the operation of the AI system over time).


How do we combat it?

The report is a pragmatic one: we describe a range of actions companies can take to help them use AI responsibly. These are mostly existing approaches, and we have brought them together in a way that makes them easier for companies to adopt. We have also released software that we hope companies using AI will adopt to provide better control and oversight of their AI systems. We call it the AI Impact Control Panel, and we have made it available as open-source software so companies can adopt and adapt it freely and easily.


What are the key findings from the report?

In the report we provide guidance for organisations to reduce the risks of adopting AI systems for decision-making. We describe how AI systems make decisions differently from people and how those differences create new types of risks for organisations. We provide a taxonomy that will help companies think about AI risk and we suggest actions to address them. We hope that companies find this report and software useful and build more responsible AI systems. That way, businesses and society can have all the many benefits of AI while avoiding many of the harms to individuals and society that are happening now.

More About The Open Source Software

Companies using Artificial Intelligence (AI) can now access open-source software that helps improve their control of the impact caused by the decisions of their AI systems.

Developed by Gradient Institute, with support from Minderoo Foundation’s Frontier Technology initiative, the AI Impact Control Panel elicits the goals and preferences of decision-makers through a graphical user interface and translates them into the mathematical language required by an AI system.

Users of the tool do not need technical knowledge of AI; rather, they set the objectives and constraints for the AI system.

The Control Panel helps ensure the AI system's operation aligns with the values of the organisation and society. It does this by iteratively asking the people accountable for the AI system what the acceptable ranges of different measures of performance are (compared with known baselines), how important the different objectives are relative to each other, and how desirable the different outcomes are relative to each other. The tool adapts the choices presented to users over time, to efficiently discover their preferences without overwhelming them.
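The Control Panel's actual implementation is not shown in this article, but the general technique it describes — translating elicited acceptable ranges and relative importance into a constrained, weighted objective for choosing between systems — can be sketched roughly as follows. All names and numbers here are illustrative assumptions, not Gradient Institute's code:

```python
# Hypothetical sketch: turn stakeholder preferences into a single
# scalarised score with hard constraints. "fairness_gap" is treated as a
# cost, so it carries a negative weight.

def within_constraints(metrics, acceptable_ranges):
    """Check every metric falls inside its stakeholder-approved range."""
    return all(lo <= metrics[name] <= hi
               for name, (lo, hi) in acceptable_ranges.items())

def scalarise(metrics, weights):
    """Combine objectives using elicited relative-importance weights."""
    return sum(weights[name] * metrics[name] for name in weights)

# Values a decision-maker might supply through an elicitation UI:
acceptable_ranges = {"accuracy": (0.90, 1.0), "fairness_gap": (0.0, 0.05)}
weights = {"accuracy": 0.7, "fairness_gap": -0.3}

candidate_models = {
    "model_a": {"accuracy": 0.95, "fairness_gap": 0.04},
    "model_b": {"accuracy": 0.97, "fairness_gap": 0.09},  # out of range
}

# Discard candidates that violate any acceptable range, then pick the
# best remaining candidate by the weighted score.
feasible = {name: m for name, m in candidate_models.items()
            if within_constraints(m, acceptable_ranges)}
best = max(feasible, key=lambda name: scalarise(feasible[name], weights))
print(best)  # model_b is excluded by the fairness constraint
```

In practice the elicitation is interactive and iterative — the ranges and weights above would be refined over repeated rounds of questions rather than fixed up front.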

Minderoo Foundation:

Established by Andrew and Nicola Forrest in 2001, Minderoo Foundation is a modern philanthropic organisation seeking to break down barriers, innovate and drive positive, lasting change. Minderoo Foundation is proudly Australian, with key initiatives spanning from ocean research and ending slavery, to collaboration against cancer, building community projects and improving the digital economy.

Gradient Institute:

Gradient Institute is an independent, nonprofit research institute that works to build ethics, accountability and transparency into AI systems: developing new algorithms, training organisations that operate AI systems, and providing technical guidance for AI policy development.
