It seems that although the Internet is increasingly drowning in fake images, we can at least give humanity some credit for its ability to smell BS when it counts. A number of recent studies suggest that AI-generated disinformation had no significant impact on this year’s elections around the world, largely because the technology is still not very good.
Over the years, there has been plenty of concern that increasingly realistic but synthetic content could manipulate audiences in harmful ways. Earlier this year, for instance, a political consultant used AI to spoof President Biden’s voice in a robocall that told voters in New Hampshire to stay home during the state’s Democratic primaries.
Tools like ElevenLabs make it possible to upload a short sample of someone’s voice and then clone it to say whatever the user wants. And although many commercial AI tools include guardrails meant to prevent this kind of misuse, open-source models without such restrictions are also available.
Despite these advances, the Financial Times, in a new story looking back at the year, found that very little synthetic political content actually went viral around the world.
It cited a report from the Alan Turing Institute, which found that only 27 pieces of AI-generated content went viral during the summer’s European elections. The report concluded that there was no evidence the elections were influenced by AI disinformation, because “exposure was concentrated among a minority of users with political beliefs already aligned with the ideological narratives embedded in such content.” In other words, among the few who saw the content (before it was presumably flagged and removed) and were inclined to believe it, it reinforced beliefs they already held about a candidate, even if those exposed knew the content itself was AI-generated. One example cited was AI-generated footage showing Kamala Harris speaking at a rally while standing in front of Soviet flags.
In the US, the News Literacy Project found more than 1,000 examples of misinformation about the presidential election, but only 6% were created with AI. Mentions of “deepfakes” tended to spike with the release of new imaging models, not around elections.
Interestingly, it appears that social media users were more likely to misidentify real images as AI-generated than the other way around, but overall, users showed a healthy dose of skepticism.
If the findings are accurate, it would make a lot of sense. AI images are everywhere these days, but AI-generated ones still have an uncanny quality to them, showing telltale signs of being fake: a face that is not properly reflected on a mirrored surface, for instance. There are many small cues that give away a synthetic image.
Proponents of AI shouldn’t necessarily be happy about this news. It means generated imagery still has a ways to go. Anyone who has checked out OpenAI’s Sora model knows the video it produces just isn’t that good; it looks almost like something created by a video game graphics engine (speculation is that it was trained on video games), one that clearly does not understand properties like physics.
With all of this said, there are still concerns. The Alan Turing Institute’s report did ultimately conclude that beliefs can be reinforced by a realistic deepfake containing misinformation, even if the audience knows the media is not real; that confusion over whether a piece of media is real undermines trust in online sources; and that AI imagery has already been used to target female politicians with pornographic deepfakes, which can be psychologically damaging and harmful to their professional standing, as it reinforces sexist beliefs.
The technology will undoubtedly continue to improve, so it’s worth keeping an eye on.