Internet Watch Foundation warns against AI-generated child sexual abuse images that could flood the internet


NEW YORK – The already alarming proliferation of child sexual abuse images on the internet could get worse if something is not done to put controls on artificial intelligence tools that generate fake images, a watchdog warned on Tuesday.

In a written report, the UK-based Internet Watch Foundation urged governments and technology providers to act quickly before a flood of AI-generated images of child sexual abuse overwhelms law enforcement investigators and widens the pool of potential victims.

“We’re not talking about the harm it might do,” said Dan Sexton, the watchdog group’s chief technology officer. “This is happening now and it needs to be addressed now.”

In the first case of its kind in South Korea, a man was sentenced to 2 1/2 years in prison in September for using artificial intelligence to create 360 images of virtual child abuse, according to the Busan District Court in the country’s southeast.

In some cases, children are using these tools on each other.

At a school in south-west Spain, police have investigated claims that teenagers used a phone app to make their fully clothed schoolmates appear nude in photos.

Computer-generated images of child sexual abuse created with artificial intelligence tools such as Stable Diffusion are beginning to proliferate on the internet and are so realistic that they can be indistinguishable from photos of real children, according to a new report. AP

The report reveals the dark side of the race to build generative AI systems that allow users to describe in words what they want to produce — from an email to a new piece of art or video — and have the system spit it out.

If it is not stopped, the flood of fake child sexual abuse images could bog down investigators trying to rescue children who turn out to be virtual characters.


Perpetrators can also use images to groom and coerce new victims.

Sexton said IWF analysts found faces of famous children online as well as a “massive demand for the creation of more images of children who have been abused, possibly years ago.”

“They take existing real content and use it to create new content for this victim,” he said. “That was very surprising.”

Sexton said his charity, which focuses on combating child sexual abuse online, first began receiving reports of abusive AI-generated imagery earlier this year.

This led to an investigation into forums on the so-called dark web, a part of the internet hosted in encrypted networks and accessible only through anonymization tools.

What IWF analysts found were abusers sharing tips and marveling at how easy it was to turn their home computers into factories for producing sexually explicit images of children of all ages. Some are even trading and trying to profit from such images, which look increasingly lifelike.

“What we’re starting to see is this explosion of content,” Sexton said.

While the IWF report aims to flag a growing problem rather than offer a prescription, it urges governments to strengthen laws to make it easier to combat AI-generated abuse.

It particularly targeted the European Union, where there is debate over surveillance measures that could automatically scan messaging apps for images of suspected child sexual abuse even if the images were not previously known to law enforcement.

A big focus of the group’s work is to prevent victims of previous sexual abuse from being abused again through the redistribution of their photos.


The UK-based Internet Watch Foundation is urging governments and technology providers to act quickly before a flood of AI-generated images of child sexual abuse overwhelms law enforcement investigators and widens the pool of potential victims. AFP via Getty Images

The report says technology providers can do more to make the products they build harder to misuse in this way, though this is complicated by the fact that some of the tools are hard to put back in the bottle.

A new batch of AI image generators was introduced last year and wowed people with their ability to create whimsical or photorealistic images on command. But most are not favored by producers of child sexual abuse material because they contain mechanisms to block such content.

Technology providers that keep their AI models closed, with full control over how they are trained and used — for example, OpenAI’s DALL-E image generator — appear to be more successful at curbing abuse, Sexton said.

Instead, the tool favored by producers of child sex abuse imagery is the open-source Stable Diffusion, developed by London-based Stability AI.

When Stable Diffusion appeared in the summer of 2022, a subset of users quickly learned how to use it to generate nudity and pornography.

Although much of the material depicts adults, it is often non-consensual, such as when it is used to create celebrity-inspired nude photos.

Stability then rolled out a new filter that blocked unsafe and inappropriate content, and the license to use Stability’s software also came with a ban on illegal use.

In a statement released Tuesday, the company said it “strictly prohibits any misuse for illegal or immoral purposes” across its platform. “We strongly support law enforcement efforts against those who misuse our products for illegal or nefarious purposes,” the statement said.


Users can still access the old, uncensored version of Stable Diffusion, however, which is “overwhelmingly the software of choice … for people creating explicit content involving children,” said David Thiel, chief technology officer of the Stanford Internet Observatory, another watchdog group that studies the problem.

“You can’t control what people do on their computer, in their bedroom. It’s not possible,” added Sexton. “So how do you get to the point where they can’t use publicly available software to create this kind of malicious content?”

Most AI-generated child sexual abuse images would be illegal under existing laws in the US, UK and elsewhere, but it remains to be seen whether law enforcement has the tools to combat them.

The IWF report is timed ahead of next week’s global AI safety gathering hosted by the British government, which will draw high-profile attendees including US Vice President Kamala Harris and tech leaders.

“Although this report paints a bleak picture, I am optimistic,” IWF Chief Executive Officer Susie Hargreaves said in a prepared written statement. She said it was important to communicate the reality of the problem to “a broad audience because we need to have a discussion about the dark side of this amazing technology.”
