
How AI can protect users in the online world


With more than 74 percent of Gen Z spending their free time online – averaging around 10 hours per day – it’s safe to say their online and offline worlds are becoming entwined. With increased social media usage now the norm and all of us living our lives online a little more, we must look for ways to mitigate risks, protect our safety and filter out communications that cause concern. Step forward, Artificial Intelligence (AI) – advanced machine learning technology that plays an important role in modern life and is fundamental to how today’s social networks function.

With just one click, AI tools such as chatbots, algorithms and auto-suggestions shape what you see on your screen and how often you see it, creating a customised feed that has completely changed the way we interact on these platforms. By analysing our behaviours, deep learning tools can determine habits, likes and dislikes and only display material they anticipate you will enjoy. Human intelligence combined with these deep learning systems not only makes scrolling our feeds feel more personalised but also provides a crucial and effective way to monitor for, and quickly react to, the harmful and threatening behaviours we are exposed to online, which can have damaging consequences in the long term.
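To make that idea concrete, here is a minimal sketch of how a feed might rank posts by predicted interest. The fields, weights and affinity scores below are hypothetical illustrations of the general technique, not how Yubo’s or any other platform’s recommendation system actually works.

```python
# Illustrative sketch only: ranking a feed by predicted interest.
# Field names, weights and affinity scores are hypothetical, not any
# platform's actual recommendation model.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    topic: str
    author_followed: bool  # does the viewer follow the author?

def interest_score(post: Post, topic_affinity: dict[str, float]) -> float:
    """Combine a learned topic affinity with a simple social signal."""
    score = topic_affinity.get(post.topic, 0.0)
    if post.author_followed:
        score += 0.5  # boost content from accounts the user already follows
    return score

# Affinities a model might infer from a user's past likes and watch time
affinity = {"music": 0.9, "sports": 0.2}
feed = [Post("p1", "sports", False), Post("p2", "music", True)]
ranked = sorted(feed, key=lambda p: interest_score(p, affinity), reverse=True)
print([p.post_id for p in ranked])  # ['p2', 'p1'] – highest predicted interest first
```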

The importance of AI in making social platforms safer 

The lack of parental control on most social networks means they can be toxic environments, and the number of users who are unknown to you on these platforms carries a large degree of risk. The reality is that teens today have constant access to the internet, yet most lack parental involvement in their digital lives. Many children face day-to-day challenges online, having seen or experienced cyberbullying along with other serious threats such as radicalisation, child exploitation and the rise of pro-suicide chat rooms, to name a few – and all of this activity goes on unsupervised by parents and guardians.

AI exists to improve people’s lives, yet there has always been a fear that these ‘robots’ will begin to replace humans – that classic ‘battle’ between man and machine. Instead, we must be willing to tap into and embrace its possibilities: cybersecurity is one of the greatest challenges of our time, and by harnessing the power of AI we can begin to fight back against actions that have harmful consequences and reduce online risk.

Advanced safety features

AI has proven to be an effective weapon in the fight against online harassment and the spread of harmful content, and these deep learning tools now play an important role in our society, improving security in both our virtual and real worlds. AI can be leveraged to moderate content that is uploaded to social platforms as well as to monitor interactions between users – something that would not be possible manually due to sheer volume. At Yubo we use a neural network-based tool, Yoti Age Scan, to estimate a user’s age on accounts where there are suspicions or doubts – our users must be 13 to sign up and there are separate adult accounts for over-18s. Flagged accounts are reviewed within seconds and users must verify their age and identity before they can continue using the platform – just one vital step we are taking to protect young people online.

With over 100 million hours of video and 350 million photos uploaded to Facebook alone every day, algorithms are programmed to sift through mind-boggling amounts of content and remove both the posts and the users behind them when content is harmful and does not comply with platform standards. Algorithms are constantly developing and learning: they can recognise duplicate posts, understand the context of scenes in videos and even perform sentiment analysis, recognising tones such as anger or sarcasm. If a post cannot be classified with confidence, it is flagged for human review. Using AI to review the majority of online activity also shields human moderators from disturbing content that could otherwise lead to mental health issues.
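The split between automated action and human review can be pictured as a simple triage rule: a model’s confidence decides whether content is removed automatically, allowed, or escalated to a moderator. The function and thresholds below are a minimal sketch of that pattern and are illustrative assumptions, not the actual logic used by Yubo, Facebook or any other platform.

```python
# A minimal sketch of the moderation triage pattern: confident cases are
# handled automatically, uncertain content goes to a human moderator.
# The thresholds and scores here are hypothetical, not any platform's
# real settings.
from typing import Literal

Decision = Literal["remove", "allow", "human_review"]

def triage(harm_probability: float,
           remove_threshold: float = 0.95,
           allow_threshold: float = 0.05) -> Decision:
    """Route a post based on a model's estimated probability of harm."""
    if harm_probability >= remove_threshold:
        return "remove"        # confident violation: take it down automatically
    if harm_probability <= allow_threshold:
        return "allow"         # confidently benign: publish as normal
    return "human_review"      # uncertain: escalate to a moderator

# Example scores as an upstream classifier might produce them
for score in (0.99, 0.02, 0.60):
    print(score, "->", triage(score))
```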

AI also uses Natural Language Processing (NLP) tools to monitor interactions between users on social networks and identify inappropriate messages sent among underage and vulnerable users. In practice, most harmful content is generated by a minority of users, so AI techniques can be used to identify malicious users and prioritise their content for review. Machine learning enables these systems to find patterns in behaviours and conversations that are invisible to humans, and to suggest new categories for further investigation. With its advanced analytical capabilities, AI can also automate the verification of information and the validation of a post’s authenticity, helping to curb the spread of misinformation and misleading content.
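As a rough illustration of that prioritisation step, a review queue can be ordered by how often each sender’s messages are flagged, so the small minority of malicious users surfaces first. The keyword check below is a toy stand-in for a real NLP classifier, and the terms, names and data are purely hypothetical.

```python
# Toy illustration of prioritising review by sender: because a small
# minority of users generates most harmful content, accounts with the
# most flagged messages are reviewed first. The keyword list stands in
# for a real NLP model; all names and phrases are hypothetical.
from collections import Counter

FLAG_TERMS = {"send pics", "you are worthless"}  # illustrative only

def is_flagged(message: str) -> bool:
    """Crude stand-in for an NLP classifier that scores a message."""
    text = message.lower()
    return any(term in text for term in FLAG_TERMS)

def review_priority(messages: list[tuple[str, str]]) -> list[tuple[str, int]]:
    """Count flags per sender and order senders by how often they are flagged."""
    flags = Counter(sender for sender, text in messages if is_flagged(text))
    return flags.most_common()

history = [
    ("user_a", "hey, how was your day?"),
    ("user_b", "send pics now"),
    ("user_b", "you are worthless"),
]
print(review_priority(history))  # [('user_b', 2)] – user_b is reviewed first
```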

Unleashing the power of AI for education 

Young people need a safe and stimulating environment when they are online. AI can be used to proactively educate users about responsible online behaviour through real-time alerts and blockers. At Yubo, where our user base is made up solely of Gen Zers, we use a combination of sophisticated AI technology and human interaction to monitor users’ behaviour. Our safety features prevent the sharing of personal information or inappropriate messages by intervening in real time – for example, if a user is about to share sensitive information, such as a phone number, address or even an inappropriate image, they’ll receive a pop-up from Yubo highlighting the implications that could arise from sharing it. The user then has to confirm they want to proceed before they are allowed to do so. Additionally, if users attempt to share revealing images or an inappropriate request, Yubo will block that content from being shared with the intended recipient before they can hit send. We are actively educating our users not only about the risks associated with sharing personal information but also prompting them to rethink their actions before participating in activities that could have negative consequences for themselves or others. We are committed to providing a safe place for Gen Z to connect and socialise – our user base is at an age where, if we can educate them about online dangers and best practices now, we can mould their behaviours in a positive way for the future.
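A minimal sketch of that real-time intervention flow might look like the following: an outgoing message is scanned for personal details before it is sent, and the user must confirm before it goes through. The pattern, prompt and function names are illustrative assumptions rather than Yubo’s actual detection logic.

```python
# A sketch of the real-time intervention described above: scan an outgoing
# message for sensitive details and require confirmation before sending.
# The regex and prompt are illustrative assumptions, not Yubo's detection logic.
import re
from typing import Callable

# Rough match for phone-number-like strings (hypothetical pattern)
PHONE_PATTERN = re.compile(r"\+?\d[\d\s-]{7,}\d")

def outgoing_message_check(message: str, confirm: Callable[[str], bool]) -> bool:
    """Return True if the message may be sent, False if it is held back."""
    if PHONE_PATTERN.search(message):
        # Real-time warning: explain the risk and require explicit confirmation
        return confirm("You appear to be sharing a phone number. Send anyway?")
    return True

# Example with a stubbed confirmation dialog in which the user declines
send_allowed = outgoing_message_check(
    "call me on 0123 456 7890",
    confirm=lambda prompt: False,
)
print(send_allowed)  # False – the message is blocked until the user confirms
```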

Applying AI tools for social good

Social media, when used safely, is a powerful tool that enables people to collaborate and build connections, encourages innovation and helps to raise awareness about important societal issues, along with an untold number of other positives. With so much importance placed on these digital worlds, it’s imperative that users are both educated and protected so they can navigate these platforms and reap the benefits in the most responsible way. We are already seeing the positive impact AI technology is having on social networks – these tools are vital for analysing and monitoring the expansive amounts of data and the many users active on these platforms every day.

At Yubo, we know it’s our duty to protect our users and have implemented sophisticated AI technology to help mitigate risks. We will continue to utilise AI to shield our users from harmful interactions and content, as well as to maintain an ongoing dialogue about the consequences of inappropriate behaviour. AI tools present enormous potential for making social spaces safer, and we need to harness that power to increase wellbeing for us all.

(Photo by Prateek Katyal on Unsplash)



