It’s a Fake New World: How artificial intelligence is being weaponised for industrial-scale disinformation

John Flint, The West Australian

With the artificial intelligence genie well and truly out of the bottle, society is being led to believe the runaway technology is both the best and worst thing to ever happen.

For every humanity-enhancing potential application of AI, we’ve been warned there are just as many chilling and abominable uses.

The reason we’re hearing more of the latter is the runaway aspect. The pace of development is frighteningly fast, even for the techies who have a strong grasp of the fundamentals.

As someone who has observed and commented on technology for 30 years, Will Berryman, executive director of the Royal Institution of Australia, described the pace as unlike anything he’d seen.

“I think with generative AI and all of the associated tools around it — it’s breathtaking,” he said.

Lately, it’s the dark side of AI that has been making news headlines at home and abroad.

Pop megastar Taylor Swift last month became the latest high-profile casualty of AI-generated porn.

Deepfake pornographic images of the singer were shared across the social media platform X, which upset her legions of fans by being very slow to remove them.

Such images aren’t photoshopped pictures. A deepfake is an artificial image or video generated by a type of machine learning called “deep” learning.

As many women, famous or not, have discovered, anyone can be a victim of deepfake porn, which is now rife on the internet. Children are also being targeted, creating a new headache for police forces around the globe.

Governments are playing catch-up to regulate the generative AI explosion.

Four months ago, 28 countries and the European Union signed the Bletchley Declaration following an AI Safety Summit at the famous Bletchley Park site in England that was the centre of Allied code-breaking during World War II.

The declaration affirmed that “for the good of all, AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible”.

Whether the commitment to international collaboration to that end delivers on its lofty promise remains to be seen, but keeping AI “inside the guard rails” will be every bit as challenging as the code-breaking task faced by Alan Turing and his cryptanalysts at Bletchley Park. After the war, computer genius Turing went on to become a founding father of artificial intelligence.

Writing in The Conversation, Kimberley Hardcastle, an assistant professor at Northumbria University in Newcastle, said there were fears the speed of innovation “could hamper our ability to detect serious problems before they’ve caused damage”.

This could have “profound implications for society, particularly when we can’t anticipate the capabilities of something that may end up having the ability to train itself”, she said.

Rise of the machines

“I think in many ways, it’s a scary time and an exciting time,” Professor Simon Lucey, director of the Australian Institute for Machine Learning, said.

The Adelaide-based AI expert said the technology was used before 2014 to analyse images and perform tasks such as facial recognition.

“Around about 2014, a new generation of tools was created that could generate images. What we’ve seen (since) is this massive acceleration . . . not just in imagery, but also text,” he said.

ChatGPT, a chatbot driven by generative AI technology that engages in human-like conversations and much more, has more than 100 million users.

“What we’re seeing at the moment is the intersection of these two technologies coming together,” Professor Lucey said.

“The ability to simply just write a couple of words, and to be able to easily create images . . . high-definition content. It’s something to behold . . . when I first saw this I was gobsmacked.”

Professor Lucey leads a team of 200 staff at the institute, which is attempting to build a deep-learning machine that can reason.

He has been appointed to the Federal Government’s new Artificial Intelligence Expert Group, tasked with providing advice to the Department of Industry, Science and Resources to help ensure AI systems are safe.

“Australia is trying to position itself as sort of a global player in responsible AI,” he said.

“We can’t really compete directly with the big global superpowers in AI, like China and the US, but there are certain sectors of AI that we are globally excellent in, (like) computer vision. We (also) have amazing capability in robotics. And we’re growing this brand around responsible AI.”

Democracy in peril

The use of AI-generated misinformation as a political weapon in elections is a question of scale and consequence, not possibility.

National elections are looming this year in 64 countries, with some hugely consequential to the rest of the world, including ballots in the US, the UK, India, Taiwan and Mexico.

Dirty tricks to eke out an advantage have long been a feature of politics and elections at all levels, so what’s different?

Generative AI has the potential to manipulate an outcome, via deception, like never before.

Deepfakes can be hyper-realistic images, videos or audio files that put political rivals in a bad light and vice versa. Chances are that many voters won’t realise they’ve been scammed by AI.

In tight election contests, they can make a critical difference, subverting democracy.

In New Hampshire, voters received robocalls seemingly from US President Joe Biden, urging them to stay at home during the primaries.

“It’s important that you save your vote for the November election,” the call said. “Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again.”

As a consequence, the Federal Communications Commission moved quickly to outlaw robocalls that use AI-generated voices.

“There is going to be a tsunami of disinformation in 2024,” Darrell West, a senior fellow at the Center for Technology Innovation at the Brookings Institution, told USA Today.

“We are already seeing it, and it is going to get much worse. People are anticipating that this will be a close election, and anything that shifts 50,000 votes in three or four states could be decisive.”

Only five of the 50 states have laws in place to restrict AI in political communications.

It is unclear to what extent AI was used to analyse data harvested in the Cambridge Analytica scandal, which impacted the 2016 US presidential election and the Brexit referendum in the UK the same year.

The right-wing consulting firm Cambridge Analytica used an app to collect detailed data on 87 million Facebook users, building psychological profiles to target them with tailored political advertising in the lead-up to the polls. The firm also had ties to the Kremlin.

AI has the power to make Cambridge Analytica look like a pea-shooter in the propaganda battle for voters’ hearts and minds.

“AI just supercharges that and it’s not a generative issue,” Mr Berryman said.

“The ability of AI to do complex types of mathematics in order to game parts of the population . . . When you see politics as a game, and you give tools to people to game and manipulate that competition, that’s what we get.”

Rebecca Johnson, an AI and ethics researcher at the University of Sydney, said the use of AI to manipulate political outcomes was a real worry.

“If you think about Cambridge Analytica, we’re talking about a decade-old technology, which (former Cambridge Analytica director-turned-whistleblower) Brittany Kaiser herself described as weapons-grade communication. And now we’re 10 years into the future with that, and we really didn’t spend a lot of time trying to address the underlying methods that were impacting our democracy,” Ms Johnson said.

“This is the biggest concern, I think, that we’re facing with this technology, the absolute biggest.”

Earlier this month, six big tech companies — Adobe, Google, Meta, Microsoft, OpenAI and TikTok — pledged to combat AI trickery in elections, whether it involves videos, images or audio that alter or fake the appearance, voice or actions of political candidates or officials.

But many election watchers are unconvinced the companies will adequately police themselves.

AI technologies are advancing so quickly that lawmakers and regulators are having trouble keeping up. With few rules governing AI-generated content in the US, the European Union has taken the lead. It requires that companies identify and label deepfakes.

“This is a feeble response on the part of the tech companies given the risk we are already seeing in the form of political disinformation and election interference,” Hany Farid, a University of California, Berkeley professor who specialises in deepfakes and disinformation, told USA Today.

Misinformation Age

If anyone didn’t already know it, people are living in the misinformation age, a period in history when it is increasingly difficult for many — especially those who get all their news from social media — to discern fact from fiction, never mind subtle biases and embellishment.

“We live in a plague of misinformation. It’s a scourge, it strikes at the heart of community trust, and generative AI has the potential to be another instrument that can supercharge this wave,” Mr Berryman said.

“I think the challenges that it presents to the communications environment are profound, and it’s really important that as a nation we start to grasp what they are,” he added.

“AI-driven mis- and disinformation has been identified by the World Economic Forum as amongst the biggest risks that humanity faces, second only to climate change, which I think is quite extraordinary,” Professor Monica Attard, co-director of the Centre for Media Transition at the University of Technology Sydney, said.

The Centre for Media Transition has been looking at the many opportunities and dangers generative AI presents to professional media organisations and has recently interviewed editors and product development staff from eight newsrooms around the country.

“There are risks to trust, there are risks to copyright. There’s a risk ultimately to the sustainability of the business models, which have already been battered after 20 years of digital technology,” Professor Attard said.

“They’re all thinking very, very deeply about how generative AI will challenge many of journalism’s fundamentals.

“The technology was changing so fast in this space, they could barely breathe in between announcements, they could barely keep up with the change,” she said.

“Most newsroom leaders that we spoke to said that they didn’t want their journalists using ChatGPT. Their focus was on original journalism.

“All editors mentioned that they didn’t want to be the editor who stepped out first using generative AI to produce journalism because it would be so dangerous and such a leap.

“Information integrity was a big concern. In fact, it was the underlying sentiment in every interview with every editor. For them, quality, they said, was a fundamental part of the value proposition. They didn’t want to mess with quality, they didn’t want to introduce errors or doubt around quality and authenticity and accuracy.

“They all talked about the fact that integrity was important to what they did, and that they were looking at ways to safeguard and retain the trust of the audience.”

Another problem was that efforts to educate the public about deepfakes might unintentionally undermine trust in real content.

Professor Lucey said innovation in the area of secure provenance would be of benefit to news organisations and media consumers, but wouldn’t be a “silver bullet”.

The Coalition for Content Provenance and Authenticity (C2PA) has developed technical standards for certifying the source and history of media content, such as a photograph or video.

The coalition was formed through an alliance between Adobe, Arm, Intel, Microsoft and Truepic.

“The problem is it’s useful for the good actors, the people who want to be responsible,” Professor Lucey said. “There’s always going to be fake content or deepfake content that seeps through.”
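The core idea behind provenance standards of this kind is to cryptographically bind a claim about a piece of content (where it came from and what has been done to it) to the content itself, so that any later tampering can be detected. The sketch below is a highly simplified illustration of that hash-and-sign principle in Python, not the actual C2PA format or workflow; the manifest fields are invented for the example, and a shared-secret HMAC stands in for the public-key certificates a real standard would use.

# Simplified illustration of the hash-and-sign idea behind content
# provenance (NOT the actual C2PA format or workflow).
import hashlib
import hmac
import json

SECRET_KEY = b"publisher-signing-key"  # stand-in for a real private key/certificate


def sign_manifest(image_bytes: bytes, source: str, history: list) -> dict:
    """Bind a provenance claim to an image by hashing its bytes and
    signing the claim. Any later edit changes the hash and breaks the seal."""
    claim = {
        "content_hash": hashlib.sha256(image_bytes).hexdigest(),
        "source": source,      # e.g. the newsroom or camera that produced it
        "history": history,    # e.g. ["captured", "cropped", "colour-corrected"]
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return claim


def verify_manifest(image_bytes: bytes, claim: dict) -> bool:
    """Check that the image still matches the signed claim."""
    unsigned = {k: v for k, v in claim.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, claim["signature"])
            and unsigned["content_hash"] == hashlib.sha256(image_bytes).hexdigest())


if __name__ == "__main__":
    original = b"raw image bytes"
    manifest = sign_manifest(original, "Example Newsroom", ["captured", "cropped"])
    print(verify_manifest(original, manifest))          # True: content untouched
    print(verify_manifest(b"altered bytes", manifest))  # False: content changed

As Professor Lucey’s point suggests, a scheme like this only helps when good actors adopt it; content created outside the system carries no seal at all.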

Partner in crime

Law enforcement agencies are also scrambling to get their heads around AI — how to use it to catch the crooks and how to prevent the technology being an accessory to crime.

On the one hand, AI has uses in fraud detection, for example; on the other, there has been a proliferation of AI-generated scams.

Daniel Prince, professor of cybersecurity at Lancaster University in England, said the technology enhances the efficiency of criminal activity.

“It allows lawbreakers to target a greater number of people and helps them be more plausible,” he wrote in The Conversation.

“This technology could also help criminals sound more believable when contacting potential victims.

“Think about all those spam phishing emails and texts that are badly written and easily detected. Being plausible is key to being able to elicit information from a victim.”

We have already mentioned the explosion of deepfake sexual images.

The use of generative AI to create sexually explicit images of children is a nightmare for law enforcement and parents.

“The horror now before us is that someone can take an image of a child from social media, from a high school page or from a sporting event, and they can engage in what some have called ‘nudification,’” Dr Michael Bourke, the former chief psychologist for the US Marshals Service, told The New York Times last month.

Not knowing whether the victim is a real child or an AI creation makes police investigations more difficult than they already are.

Professor Prince pointed out that deepfake technology has been used to generate revenge pornography.

The European Union has been among the quickest to act, introducing a law that, among other things, bans the indiscriminate scraping of facial images from the internet and CCTV footage.

Its AI Act “is the first-ever comprehensive legal framework on AI worldwide, guaranteeing the health, safety and fundamental rights of people, and providing legal certainty to businesses across the 27 member states”, the EU claims.

A company can be fined up to 7 per cent of its global turnover if caught violating the law.

Meta, the company behind Facebook and Instagram, has been accused of making it harder for police to catch the creators and sharers of illegal AI-generated content, like child abuse material, by encrypting its messaging platform.

Trust and opportunity

A world where people don’t know what information and content to trust anymore presents opportunities for professional journalists and news organisations.

But many would need to lift their game and adopt stricter standards of accuracy and curation to seize the opportunities.

Mr Berryman said organisations needed to throw a “blanket of trust” around their content.

“It is very important that images that are taken . . . have technical stamps. So that provenance of information can be checked . . . that people can check to see whether something’s been modified, (they can) check the source of a piece of information and look at the chain of how that information got to them, to be able to determine themselves whether something is trusted or not.

“I think in this case, journalism becomes really important, again, not as a purveyor of opinions, but as a purveyor of verified facts.”

Mr Berryman, whose institute publishes the respected science magazine and website Cosmos, added: “Everybody can call themselves a journalist these days, everybody can call themselves a trusted source of information, but are they?

“I think there needs to be a bar that is set where we’re independently measured as to whether we’re doing our best to report on opinions or making things up or cutting corners.

“We’ve tended over the past 30 years to surrender to global media sources. The amount of curation that happens in Australia is smaller than it’s ever been.

“AI-based generative tools are only as good as the information that we put in them; they’re only as good as what we feed into them.

“If those inputs are coming from global, disreputable sources, for which we don’t have any means of verification, they are going to end up in our communications environment. Part of this blanket of trust means that Australia needs to start to think about the sovereign authoritative information that it puts into generative AI for the purposes of its community.

“Australia has sovereign generative AI capacity here; we are starting to build technologies that are mirrors and reflections of the trust and values that we as a country want in information.”
