Trust AI: it is our frontline defence for protecting users in the online world, reports Sacha Lazimi, Co-founder and CEO at Yubo.
The average Aussie will spend up to 17 years of their life looking at their phone. Australian Gen Zs in particular average over 7 hours per day across up to 5 different social apps, so it’s safe to say their online and offline worlds are becoming entwined. With heavier social media use now the norm and all of us living a little more of our lives online, we must look for ways to mitigate risks, protect ourselves and filter out harmful communications. Step forward, Artificial Intelligence (AI) – advanced machine learning technology that plays an important role in modern life and is fundamental to how today’s social networks function.
In the past, the use of AI technology has been somewhat controversial; however, it has become an integral tool across our social app experience. Tools such as chatbots, algorithms and auto-suggestions influence what you see on your screen and how often you see it, creating a customised feed that has completely changed the way we interact on these platforms. By analysing our behaviour, deep learning tools can determine our habits, likes and dislikes and display the material we enjoy. Human intelligence combined with these deep learning systems not only makes our feeds feel more personalised, delivering what we want, but also offers a crucial and effective way to monitor and react quickly to the concerning and threatening behaviour we are exposed to online.
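For readers curious how this kind of personalisation works in principle, the sketch below scores candidate posts against a user’s observed interests and ranks them. It is a minimal illustration only; the interest weights, post tags and scoring rule are hypothetical, not any platform’s actual algorithm.

```python
# Minimal sketch of interest-based feed ranking (illustrative only; the
# interest weights, tags and scoring rule are hypothetical stand-ins for
# a real deep learning recommendation system).

def rank_feed(posts, interest_weights):
    """Score each post by how well its tags match the user's learned interests."""
    def score(post):
        return sum(interest_weights.get(tag, 0.0) for tag in post["tags"])
    return sorted(posts, key=score, reverse=True)

user_interests = {"music": 0.9, "gaming": 0.7, "news": 0.2}  # learned from behaviour
candidate_posts = [
    {"id": 1, "tags": ["news"]},
    {"id": 2, "tags": ["music", "gaming"]},
]
print([p["id"] for p in rank_feed(candidate_posts, user_interests)])  # [2, 1]
```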
Importance of AI in protecting our safety
The lack of supervision on these platforms, and the number of users who are unknown to you, carry a large degree of risk. The reality is that many teens face challenges online every day: they have witnessed or been victims of cyberbullying, along with other serious threats such as radicalisation, child exploitation and the rise of pro-suicide chat rooms, to name a few – and all of this activity goes on unsupervised.
AI exists to improve people’s lives, yet there has always been a fear that these ‘robots’ will begin to replace humans. We must be willing to embrace its capabilities and the possibilities it offers. Cybersecurity is one of the greatest challenges of our time, and by harnessing the power of AI we can begin to fight back against actions that have harmful consequences and mitigate online risk in a timely manner.
Technologically advanced defence
AI has proven to be an effective weapon, fortifying our frontline defence in the fight against harmful online behaviour, including harassment and the spread of dangerous or graphic content. AI can be leveraged to moderate content uploaded to social platforms as well as to monitor interactions between users – a task we could not tackle manually due to the sheer volume involved.
At Yubo we use Yoti Age Scan, which uses a form of AI called neural network learning to accurately estimate a user’s age on accounts where there are suspicions or doubts – our users must be at least 13 years old to sign up, and there are separate adult accounts for over-18s. Flagged accounts are reviewed within seconds, and users must verify their age and identity before they can continue using the platform – just one vital step we are taking to protect young people online.
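To illustrate how an age-estimation check on a flagged account can feed into a verification step, here is a simplified sketch. The estimate_age() function is a stand-in for an age-estimation neural network; it is not the Yoti Age Scan API, and the thresholds simply mirror the 13+ and over-18 rules described above.

```python
# Illustrative sketch of an age-gating flow on a flagged account.
# estimate_age() is a hypothetical stand-in for an age-estimation model;
# this is not the Yoti Age Scan API.

MIN_AGE = 13
ADULT_AGE = 18

def review_flagged_account(profile_image, claimed_adult, estimate_age):
    """Return the action to take on an account flagged for age doubts."""
    estimated = estimate_age(profile_image)              # model estimate in years
    if estimated < MIN_AGE:
        return "block_until_age_and_identity_verified"   # possibly under 13
    if claimed_adult != (estimated >= ADULT_AGE):
        return "block_until_age_and_identity_verified"   # wrong community (under/over 18)
    return "no_action"

# Stand-in model that estimates 16 for the example image:
print(review_flagged_account(b"image-bytes", claimed_adult=True,
                             estimate_age=lambda img: 16))
# -> block_until_age_and_identity_verified
```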
With over 100 million hours of video and 350 million photos uploaded to Facebook alone every day, algorithms are programmed to sift through massive amounts of content and remove both posts and users when the material is harmful or does not comply with the platform’s standards and guidelines. These algorithms are constantly developing and evolving to protect users within a fast-moving virtual environment. They can recognise duplicate posts, understand the context of scenes in videos and even analyse sentiment, recognising tones such as anger or sarcasm. If a post cannot be identified, it is flagged for human review. However, by using AI as our frontline defence to review the majority of online activity, we can shield our human moderators from disturbing content that could otherwise lead to mental health concerns.
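The basic shape of such a pipeline – automated checks first, people for everything the machine cannot confidently classify – can be sketched in a few lines. The hash-based duplicate check and the keyword rule below are deliberately crude stand-ins for production systems such as perceptual hashing and trained classifiers.

```python
# Minimal sketch of an automated moderation pass with a human-review fallback.
# The duplicate check and keyword rule are simplified placeholders, not any
# platform's real implementation.

import hashlib

KNOWN_BAD_HASHES = set()                 # hashes of content already removed
BANNED_TERMS = {"example_banned_term"}   # hypothetical placeholder list

def moderate(content: bytes, text: str) -> str:
    digest = hashlib.sha256(content).hexdigest()
    if digest in KNOWN_BAD_HASHES:
        return "remove_duplicate"                  # re-upload of removed content
    if any(term in text.lower() for term in BANNED_TERMS):
        return "remove_policy_violation"
    # Anything the automated rules cannot confidently classify goes to people.
    return "queue_for_human_review"

KNOWN_BAD_HASHES.add(hashlib.sha256(b"previously removed video").hexdigest())
print(moderate(b"previously removed video", "caption"))   # remove_duplicate
print(moderate(b"new clip", "ambiguous caption"))         # queue_for_human_review
```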
AI also draws on Natural Language Processing (NLP) tools that help us monitor interactions on social networks and identify inappropriate messages sent between users. In practice, most harmful content is generated by a minority of bad actors, so these AI techniques can be used to identify malicious users and prioritise their content for review. AI can also recognise patterns in behaviour and conversations that would otherwise be invisible to humans and flag them in real time. With its advanced analytical capabilities, it can even automate the verification of information and the validation of a post’s authenticity to curb the spread of misinformation and misleading content, which is more important now than ever before.
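As a rough illustration of that prioritisation idea, the sketch below ranks a review queue by combining a per-message score with the sender’s history, so repeat offenders surface first. The word list and weights are hypothetical; a real system would rely on trained NLP classifiers rather than keyword matching.

```python
# Sketch of prioritising a review queue by per-user risk, reflecting the
# premise that most harmful content comes from a small minority of accounts.
# The scoring below is a hypothetical stand-in for a trained NLP model.

from collections import defaultdict

user_flag_counts = defaultdict(int)      # prior flags per user (assumed history)

def toxicity_score(message: str) -> float:
    """Stand-in for an NLP toxicity model returning a score in [0, 1]."""
    flagged_words = {"hate", "threat"}   # hypothetical word list
    hits = sum(word in message.lower() for word in flagged_words)
    return min(1.0, hits / 2)

def priority(user_id: str, message: str) -> float:
    # Combine the message's score with the sender's history so repeat
    # offenders rise to the top of the moderation queue.
    return toxicity_score(message) + 0.1 * user_flag_counts[user_id]

queue = [("user_a", "friendly hello"), ("user_b", "a threat of hate")]
user_flag_counts["user_b"] = 5
for uid, msg in sorted(queue, key=lambda x: priority(*x), reverse=True):
    print(uid, round(priority(uid, msg), 2))
```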
Real-time education
AI can be used to proactively educate users about responsible online etiquette through alerts, notifications and blockers. At Yubo, where our user base is made up of a younger, Gen Z audience, we use a combination of sophisticated AI technology and human review to monitor users’ behaviour. Yubo’s safety features prevent the sharing of personal information or inappropriate messages by intervening in real time. For example, if a user is about to share sensitive information, such as their phone number, address or even an inappropriate image, they’ll receive a prompt from Yubo highlighting the implications that could arise from sharing it. The user then has to confirm whether they want to proceed. Additionally, if users attempt to share revealing images or an inappropriate request, Yubo will block that content from reaching the intended recipient before they can hit send.
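For a sense of how a “think before you send” check can work in principle, here is a minimal sketch. The regular-expression patterns are simplified, hypothetical stand-ins, not Yubo’s actual detection rules, which combine AI models with human review.

```python
# Illustrative sketch of a real-time pre-send check for personal information.
# The patterns are simplified placeholders for illustration only.

import re

PHONE_PATTERN = re.compile(r"\b(?:\+?\d[\d\s-]{7,}\d)\b")
ADDRESS_HINTS = re.compile(r"\b\d+\s+\w+\s+(street|st|road|rd|avenue|ave)\b", re.I)

def check_outgoing_message(text: str) -> str:
    if PHONE_PATTERN.search(text) or ADDRESS_HINTS.search(text):
        # Pause the send and warn the user about sharing personal details;
        # the message only goes out if they explicitly confirm.
        return "warn_and_ask_to_confirm"
    return "send"

print(check_outgoing_message("call me on 0412 345 678"))  # warn_and_ask_to_confirm
print(check_outgoing_message("see you at the game"))      # send
```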
We are actively educating our users, not only about the risks associated with sharing personal information but also by prompting them to rethink their actions before taking part in activities that could have negative consequences for themselves or others. We are committed to providing a safe place for Gen Z to connect and socialise – our users are at an age where, if we can educate them about online dangers and best practices now, we can help shape their behaviour in a positive way for the future.
AI for good
Social media, when used safely, is a powerful tool that enables people to connect, collaborate and innovate, and helps raise awareness of important societal issues. With so much importance placed on our virtual world, it’s imperative that users are both educated and protected so they can navigate these platforms and reap the benefits in the most responsible way. We are already seeing the positive impact AI technology is having on social platforms – it is vital in analysing and monitoring the vast amounts of data, and in protecting the millions of users who are active on these platforms every hour of every day.
At Yubo, it’s our duty to protect our users and to have the right technological tools in place, such as AI, to help mitigate risks and shield our community from harmful interactions and content. AI tools present enormous potential for protecting our online world, and we need to harness the power and capabilities they offer to ensure it remains a safe space for what it was originally intended – learning, connecting and collaborating.