How can AI be used on social media?
In 2024, social media is becoming increasingly saturated with AI-generated content, and it is becoming difficult for the average user to separate the facts from the fake. AI generation tools on social media are often used harmlessly, perhaps to imagine SpongeBob singing an Ariana Grande song, to transform users into anime characters, or to convert text to speech so that content creators don’t have to use their own voice. However, AI can also be used to mislead users.
Could AI have the power to sway the results of the 2024 elections?
There is growing concern at META that AI content will be used to deceive and influence voters ahead of critical upcoming elections. Experts predict that AI will be able to target undecided voters and sway them in a particular direction by spreading misinformation that appears to come from a trusted source. Artificially generated, hyper-realistic photos, speeches, and videos have already started muddling the minds of voters. In early 2023, a video of Joe Biden attacking transgender people went viral. It was soon identified as fake when users realised that the footage had been lifted from a speech the president gave about supporting Ukraine and the text had been taken from a crude Reddit post. While this video may not have done permanent damage, since it was discredited only a few weeks later, plenty of AI-generated content continues to go undetected, spreading misinformation to those who may not even realise their vote has been swayed by it.
How will META attempt to prevent this spread of misinformation via AI generated content?
META intends to tighten regulations on deceptive, hyper-realistic content generated by artificial intelligence. Other tech giants have joined them in their research, aiming to increase transparency and help social media users understand the authenticity of the content they see. META and their industry partners are working to set technical standards for detecting artificially generated content. If META finds evidence that an image has been created or altered by AI, a label will be added to the post to indicate this. This will be implemented over the next few months across Instagram, Facebook, and Threads. Any photorealistic images created using the META AI feature will be visibly marked as AI-generated content, as well as embedded with invisible watermarks and signifying metadata. META is currently working on technology that can detect these invisible markers at scale and identify AI-generated images.
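For readers curious what a metadata-based check might look like in practice, the sketch below is a purely hypothetical Python example (using the Pillow library) that scans an image's EXIF fields for a plain-text AI marker. The field names and keywords are illustrative assumptions; META's actual system relies on invisible watermarks and industry provenance standards that are far more robust than this.

```python
# Hypothetical sketch only -- not META's detection system.
# Assumes Pillow is installed and that a generator left a plain-text note
# in a standard EXIF field such as "Software" or "ImageDescription".
from PIL import Image
from PIL.ExifTags import TAGS

# Illustrative keywords an AI tool might leave behind (assumption).
AI_MARKERS = ("made with ai", "ai generated", "midjourney", "dall-e")

def looks_ai_generated(path: str) -> bool:
    """Return True if the image's EXIF metadata mentions an AI marker."""
    exif = Image.open(path).getexif()
    for tag_id, value in exif.items():
        name = TAGS.get(tag_id, str(tag_id))
        if name in ("Software", "ImageDescription") and isinstance(value, str):
            if any(marker in value.lower() for marker in AI_MARKERS):
                return True
    return False

if __name__ == "__main__":
    # "example.jpg" is a placeholder path.
    print(looks_ai_generated("example.jpg"))
```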
How else can this technology be used?
The aim is to use these tools to identify not only artificial content generated by META AI but also images from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock, as these companies move forward with their plans to include metadata within images generated by their software. Unfortunately, the industry has not yet managed to include this signifying metadata in audio and video content at the same scale, though META has added a feature that allows users to disclose the use of AI in their content so that a label can be added.
Why is AI awareness important in the media industry?
META has acknowledged that the new technologies they are employing to prevent the spread of misinformation via AI-generated content are not infallible. Visual signifiers can be easily removed, and invisible watermarks can be stripped from files. With this, AI is ushering in a new age of media, with new rules and regulations.
Trust us at Seren Global Media not just to navigate the technological landscape but to elevate your business through dynamic PR, marketing, and content strategies.
Written by Caitlin Puddle, Swansea University Intern.