Deepfake technology is beyond our ability to combat it



With at least 64 nations around the world going to the polls, there is a real possibility that deepfakes will be responsible for manipulating democracy on a level we haven’t seen before.

Deepfakes are manipulated images, video or audio created using AI, generally to present someone doing or saying something they did not do.

Until recently, there hasn’t been much cause for concern. In their infancy, deepfakes were easy to spot, and therefore easy to ignore. That is no longer the case. In 2024, deepfakes are more advanced than ever, to the extent that some slip past even the most advanced AI detection tools. When the technical guardrails no longer work, there must be cause for concern.

The credibility of these deepfakes has led to a number of them going viral this year, before being dissected in publications such as The Herald. AI-generated images of Taylor Swift became so common on the social media platform X that the company was forced to block users from searching for the popstar.

Sir Keir Starmer was targeted (Image: free)

Fake audio purporting to be Sir Keir Starmer berating a staff member was seen around 1.5 million times on X within days, but even condemnations from opposition politicians were not enough to convince the platform to take it down. Even today, the clip is easy to find, threatening to rear its head and start a second viral storm in the future. Although this audio has been debunked, it is still shared by people who either don’t want to believe the truth, or don’t care.

Part of the problem is that deepfakes are worryingly easy to make, requiring almost no technical knowledge or skill, so the barrier to entry is low. Plenty of free production tools exist, and even paid software is affordable.

You may have noticed a flood of AI-generated music covers on platforms such as YouTube in recent months, with the voices of celebrities such as Freddie Mercury and Frank Sinatra being used to cover songs released long after their deaths. These exist because they are easy to make. This ease of use has also meant that some of the most damaging political hoaxes have been traced back to individual users rather than rogue nations. Anyone wishing to cause chaos can attempt to do so at the click of a button.

We have seen, in recent years, how damaging misinformation can be. Conspiracies like QAnon in the US have moved from the shadowy corners of the internet to mainstream social media sites and then on to the real world, leading to shocking events like the storming of the Capitol. Imagine how much more dangerous this misinformation will be when accompanied by fake audio, images and video, making the plot seem even more real to those who stumble across it. It is said that the camera never lies, but this phrase has never been less true.



The threat posed by deepfakes urgently needs a solution, yet no easy one exists. The technology used to create deepfakes outstrips the tools for detecting them; it is a constant process of playing catch-up.

One proposed solution is the introduction of stricter vetting processes for images uploaded to social media, with more human involvement in approving potentially controversial images. However, in blocking fake media, it is important that we do not end up blocking genuine media. Platforms like Twitter have been invaluable tools in bringing breaking news before us, and their ability to do so should not be hindered.

In recent UK elections, many seats have been extremely marginal; with a handful of votes being enough to swing some results. Deepfakes could end up being the difference makers.

It is vital that public awareness of deepfakes and general media literacy increases as we approach the next vote, to ensure people know that seeing is not necessarily believing. Given that no suitable technological solution yet exists, deepfakes should be a key topic in public discussion.

Stepping away from politics, it is worth noting the destructive potential of this technology on an individual level. A disturbing trend has emerged of AI-generated images of adults, in which faces are superimposed on other bodies. Research by Home Security Heroes has shown that a staggering 99% of those affected by this type of fake on one website, like Cathy Newman, are women.

Dr Sajjad Bagheri (Image: free)

Deepfakes will inevitably become a dangerous tool in the cyberbullying toolkit. They will also be used to harass and cause serious distress; they are an emerging threat that we must all take seriously.

As we get closer to the election, we will inevitably see more deepfakes. People will find new ways to use this technology destructively, and the public perception of some candidates may be damaged. It is important that we talk about this emerging problem, and do what we can to ensure it is not normalized or ignored. This applies to us as voters, but also to the media, politicians and their parties, who must be responsible for what they report and share on platforms.

In the coming months and years we will all have to approach images, audio and video that do not come from a reputable news source with some cynicism, which can be a challenge when what is being presented to us supports our existing feelings towards a person or subject. As far as can be predicted, deepfakes are going to be part of political campaigns, and they are going to be difficult to control; but dealing with them will have to be a joint effort.

Dr Sajjad Bagheri is a lecturer in Computer Science at the University of the West of Scotland
