Should You Use Meta’s New AI Assistant? Let’s Ask An RMIT Professor

Meta has released a new AI assistant across its platforms and as a standalone app. The feature draws on data users have already shared, including identity details and preferences, to generate personalised responses.

The new AI function is designed to enhance the user experience with more tailored interactions. It can understand and anticipate user needs based on their activity and history within Meta’s ecosystem. Whether it’s suggesting content, assisting with tasks, or engaging in more meaningful conversations, the AI aims to make interactions smoother and more intuitive.

However, with the introduction of such a personalised tool, there are concerns about privacy and data security. We reached out to RMIT expert Kok-Leong Ong, Professor of Business Analytics, to understand the implications better.

Professor Ong highlights that while the AI’s capability to provide personalised experiences is impressive, users must be aware of the data being used and the potential privacy risks. It’s crucial for users to stay informed about how their data is being processed and to use the privacy settings available to them.


Here’s what RMIT’s Kok-Leong Ong, Professor of Business Analytics, says:

AI agents are becoming increasingly popular because they are easy to use and provide relevant information. Users can submit a conversational request and receive answers that draw on data from the ecosystem the user has subscribed to.

AI agents offer a range of benefits and it’s likely their popularity will continue to increase. However, users, especially young adults and kids, should be aware of the risks.

We have already seen Mark Zuckerberg apologise to families whose children were harmed by using social media. AI agents working in a social context could heighten a user’s exposure to misinformation and inappropriate content. This could lead to mental health issues and fewer in-person social interactions.

Meta already has a huge amount of information about its users, so its new AI app could pose security and privacy issues. Users will need to navigate potentially confusing settings and user agreements, weighing the safeguarding of their data against the experience they get from using the AI agent. Conversely, applying tight security and privacy settings may reduce the effectiveness of Meta’s AI agent.

That’s not to say we shouldn’t use AI agents. But we should all look at mitigating the risks, including by regularly reviewing settings, understanding newly introduced terms and conditions, and being mindful of the sensitive information we share on these types of apps.

Robyn Foyster: A multi award-winning journalist and editor and experienced executive, Robyn Foyster has successfully led multiple companies including her own media and tech businesses. She is the editor and owner of Women Love Tech, The Carousel and Game Changers. A passionate advocate for diversity, with a strong track record of supporting and mentoring young women, Robyn is a 2023 Women Leading Tech Champion of Change finalist, 2024 finalist for the Samsung Lizzies IT Awards and 2024 Small Business Awards finalist. A regular speaker on TV, radio and podcasts, Robyn spoke on two panels for SXSW Sydney in 2023 and Intel's 2024 Sales Conference in Vietnam and AI Summit in Australia. She has been a judge for the Telstra Business Awards for 8 years. Voted one of B&T's 30 Most Powerful Women In Media, Robyn was Publisher and Editor of Australia's three biggest flagship magazines - The Weekly, Woman's Day and New Idea and a Seven Network Executive.
