Google's Reimagine AI tool works well – perhaps too well – making it easy to abuse
Looks like the safeguards aren't safe enough just yet
by Zo Ahmed, TechSpot
Masterful gambit, sir: Google's new Pixel 9 phones hit the market this month, a full two months ahead of schedule. It's almost as if Google couldn't wait to show off all the AI packed into these devices. By launching early, the company has gained a head start on the Apple Intelligence features coming to the iPhone 16. In its haste, however, Google may have opened a can of worms – one that could backfire spectacularly.
One of the Pixel 9's standout features, the Reimagine tool, is already facing criticism from reviewers. This innovative feature is part of Google Photos' Magic Editor and allows you to simply type a description of how you want a photo to look, and it will apply that vision to the image. While it seems designed for innocent edits like changing a sunny day to a snowy scene or adding and removing people or objects, it has a darker side.
The Verge tested the tool and found it to be surprisingly effective – perhaps too effective. They discovered that it can easily be used to insert objectionable or disturbing content into images. This includes things like car wrecks, smoking bombs in public places, sheets appearing to cover bloody corpses, and drug paraphernalia.
In one example, they managed to alter a real photo of a person in a living room, making it appear as if they were doing drugs.
People have been able to doctor photos with editing software – to manipulate public opinion or for other nefarious purposes – for decades. But that process required significant skill and time to make the fakes look convincing. Reimagine, on the other hand, makes it trivially easy for anyone with a Pixel 9 to create similar images.
The Verge envisions a scenario where bad actors could quickly churn out fake but believable visuals related to events like scandals, wars, or disasters, spreading misinformation in real time before the truth has a chance to surface. They even suggest that "the default assumption about a photo is about to become that it's faked because creating realistic and believable fake photos is now trivial to do."
To be clear, The Verge isn't labeling the Pixel 9 as a villainous tool designed to produce misinformation at scale. However, it does serve as an example of how easily things can spiral out of control. While Google will likely work to address these issues with Reimagine, much like they did with Gemini's image generator, other companies offering similar tools may not be as diligent in implementing safeguards.
Unfortunately, the Pixel 9's AI-related concerns don't stop there. The phone also includes a new Pixel Studio app that allows users to generate entirely synthetic imagery through AI, and it appears to lack adequate safeguards.
Digital Trends demonstrated that it's possible to create images of copyrighted characters in offensive scenarios, such as SpongeBob depicted as a Nazi, Mickey Mouse as a slave owner, and Paddington Bear on a crucifix. That's a double whammy of controversy. Even more concerning, the images generated by this app don't appear to carry any clear watermarks indicating that they are artificially created.
While it's commendable that Google is innovating and pushing the boundaries of AI, there are still significant gaps despite the company's claims of having robust safeguards in place.
Image credit: The Verge, Digital Trends