Dr Charna Parkey, VP Product & Ops, Innovation, DataStax, writes about the importance of using AI in transparent, accountable, and trusted ways.
AI is shaping up to be the most transformative technology breakthrough of the last 20 years. Just as PCs, the web, and smartphones changed the way we work and live, AI will change society, reshape the jobs we do, and redefine our sense of purpose. But AI must be developed responsibly so potential harms, including bias and discrimination, are minimised.
But let’s look at the positives, first. AI, and its generative AI subset (GenAI), is something most of us are familiar with thanks to OpenAI’s ChatGPT, Google Gemini, and Anthropic’s Claude. Even now, they are helpful for many tasks, from creating a cover letter for a job application to making music and art.
With GenAI, people are finding new ways to express themselves, find information and improve the way they do things every day.
Businesses are also jumping on board the GenAI train. At DataStax, we specialise in helping these organisations create AI apps using tools such as our Langflow application development environment and our Astra DB database, whose hybrid vector search enables smart context for GenAI, a concept we’ll look at shortly.
Companies see GenAI as a way of improving their processes, boosting the bottom line and supercharging customer experience. An online store providing a GenAI chatbot with smart context, for example, would be able to offer deep, dynamic, personalised recommendations about products customers will love based on their previous purchase history and what they’ve clicked on while browsing. Generic chatbots simply don’t have the capacity to do this because they don’t have the real-time, proprietary data required to provide a tailored response.
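To make the idea of smart context concrete, here’s a minimal sketch of how such a chatbot might assemble its prompt. This is an illustrative pattern only; the helper functions (fetch_purchase_history, fetch_recent_clicks, call_llm) are hypothetical placeholders, not any real vendor API:

```python
# Minimal "smart context" chatbot sketch. All function names here are
# hypothetical placeholders, not a real vendor API.

def fetch_purchase_history(customer_id: str) -> list[str]:
    # In production this would query the store's own customer database.
    return ["trail running shoes", "running socks"]

def fetch_recent_clicks(customer_id: str) -> list[str]:
    # In production this would come from real-time clickstream data.
    return ["waterproof jacket", "GPS sports watch"]

def call_llm(prompt: str) -> str:
    # Placeholder for a call to any LLM API (ChatGPT, Gemini, Claude, etc.).
    return "[model response]"

def recommend(customer_id: str, question: str) -> str:
    # Inject the proprietary, real-time context a generic chatbot
    # lacks into the prompt before calling the model.
    history = ", ".join(fetch_purchase_history(customer_id))
    clicks = ", ".join(fetch_recent_clicks(customer_id))
    prompt = (
        "You are a shopping assistant.\n"
        f"Past purchases: {history}\n"
        f"Viewed this session: {clicks}\n"
        f"Customer question: {question}\n"
        "Recommend relevant products."
    )
    return call_llm(prompt)

print(recommend("cust-42", "What should I buy for winter runs?"))
```

The key design point is that the personalisation comes from the retailer’s own data at request time, not from anything baked into the model itself.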
Those same businesses are also looking to use AI to improve the lives and jobs of their staff. They’re doing this by using AI to take over repetitive and time-consuming tasks, allowing people to focus on what humans do best: creativity and strategic thinking, capabilities that even the most advanced AI currently lacks.
GenAI isn’t perfect – yet
To understand the ethical challenges that can arise with GenAI, let’s look at how it works, and recognise that its inner workings are largely opaque. Even researchers don’t fully understand precisely how a GenAI model arrives at its answers.
GenAI is powered by a large language model (LLM), specifically a transformer model, which identifies relationships between words (or other objects, such as music and images) and generates responses based on those relationships. However, the mathematical processes that occur behind the scenes, in which these relationships form through layers of weights and statistical computations, are complex and often inscrutable.
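To make “relationships between words” a little more concrete, here is a toy sketch of scaled dot-product attention, the core computation inside a transformer. It is heavily simplified; real models apply learned weight matrices and stack many such layers:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each token's query is compared against every token's key,
    producing a weight for how strongly the tokens relate; the
    values are then mixed together using those weights."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # pairwise relatedness scores
    # Softmax each row so the weights for one token sum to 1.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V  # weighted mix of value vectors

# Three toy token vectors; in a real model these come from learned
# embeddings transformed by learned weight matrices.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(3, 4))
output = scaled_dot_product_attention(tokens, tokens, tokens)
```

Each row of weights says how strongly one token attends to every other token; the opacity comes from the millions of these learned, layered computations inside a production model.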
What we do know, however, is that GenAI is only as effective as the data it’s trained on. Most GenAI services are trained on data scraped from the web and, as anyone who’s done a Google search will know, the quality of that data varies significantly. Some of it is accurate and factual, but it can also be full of bias, discrimination, and outright falsehoods. Unless we understand the data on which the LLMs we use were trained, the responses they generate may carry those biases and inaccuracies into whatever use cases they are applied to.
GenAI can also produce answers known in the industry as ‘hallucinations’. That is, it simply makes up information, and those hallucinations are hard for the average person to detect because the responses are delivered in a way that seems authoritative and believable.
How we can avoid GenAI bias, discrimination and hallucination
There are two main pathways we can take to minimise the flaws and downsides that come with GenAI. The first is societal: building, at a global scale, the governance and regulatory frameworks needed to guide and enable the development of responsible AI.
Locally, the Australian Human Rights Commission (AHRC) has developed guidelines on adopting AI in Australia. According to its guidelines, “the protection of human rights when developing AI should be a priority.”
The AHRC also weighs in on the need to guard against misinformation while simultaneously ensuring a diversity of opinion and expression, along with the need to protect the environment from the harms of GenAI’s currently large energy and resource footprint.
Additionally, since 2019, Australia has had a set of voluntary AI Ethics Principles developed by the Department of Industry, Science and Resources (DISR). These principles outline key considerations for trustworthy AI development, use, and deployment. In January this year, DISR also published an interim response to a discussion paper on responsible AI, outlining the government’s plan for ensuring safe and responsible AI.
The second way to create responsible, ethical AI is a technical response that works alongside the societal one. This means tuning algorithms to better tackle harms like bias and discrimination, and using ecosystem-integrated platforms such as DataStax’s Langflow to build production GenAI applications that check a model’s response against an external database of known, verifiable information. The result is GenAI that provides more accurate, reliable, and truthful answers, minimising the chances of hallucination or misinformation.
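A minimal sketch of this grounding pattern, widely known as retrieval-augmented generation (RAG), might look like the following. The embed, vector_store.search, and call_llm names are hypothetical placeholders for an embedding model, a vector database, and an LLM API, not the actual Langflow or Astra DB interfaces:

```python
# Simplified retrieval-augmented generation (RAG) flow. The embed,
# vector_store, and call_llm arguments are hypothetical placeholders
# for any embedding model, vector database, and LLM API.

def grounded_answer(question: str, embed, vector_store, call_llm) -> str:
    # 1. Embed the question and retrieve the most similar entries
    #    from a store of known, verifiable information.
    query_vector = embed(question)
    context_docs = vector_store.search(query_vector, top_k=3)

    # 2. Constrain the model to answer only from that retrieved
    #    context, rather than from its own parametric memory.
    context = "\n".join(context_docs)
    prompt = (
        "Answer using only the context below. If the context does not "
        "contain the answer, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)
```

Because the model is steered toward retrieved, verifiable text, its answers can be traced back to known sources, which is what reduces hallucination and misinformation.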
GenAI needs both societal and technical responses to ensure it grows as a technology that enhances people’s lives and doesn’t harm the environment. It’s still early days for GenAI, but I’m confident we have the right tools and are developing the right oversight and regulatory responses to ensure that, as AI develops, it works for the benefit of everyone.
Dr. Charna Parkey, VP Product & Ops, Innovation, DataStax, is an experienced tech executive, product builder, speaker, writer, and mentor, leveraging over 15 years’ experience to successfully bring B2B AI products to market. Charna is passionate about using AI in transparent, accountable, and trusted ways to combat systemic oppression.
Note from editor: the hero image was created with Adobe using AI.