Adobe Using AI to Spot Photoshopped Photos

Adobe’s software has been widely used for years as a tool for creating fake photos, but the company is now developing software for the other side: it’s using AI to spot photo manipulations and aid the fight against fakes.

Not everyone uses Photoshop for good, however: “some people use these powerful tools to ‘doctor’ photos for deceptive purposes,” Adobe writes. “[I]n addition to creating new capabilities and features for the creation of digital media, Adobe is exploring the boundaries of what’s possible using new technologies, such as artificial intelligence, to increase trust and authenticity in digital media.”

Adobe researcher Vlad Morariu has been working on the challenge of detecting image manipulation as a part of the government-sponsored DARPA Media Forensics program.

There are existing tools that can help people spot manipulation — metadata, and programs that examine features of a photo (e.g. noise, edges, lighting, pixels) — but artificial intelligence could take this to the next level, making fake photo detection easier, faster, more reliable, and more informative.

In a new paper titled Learning Rich Features for Image Manipulation Detection, Morariu describes how Adobe is using AI for this purpose.

The team focused on three common image manipulation techniques: splicing (parts of two photos combined), copy-move (objects in photo moved/cloned from one place to another), and removal (objects removed from photos and filled in, like with Content-Aware Fill).

“Each of these techniques tend to leave certain artifacts, such as strong contrast edges, deliberately smoothed areas, or different noise patterns,” Morariu says. “Using tens of thousands of examples of known, manipulated images, we successfully trained a deep learning neural network to recognize image manipulation […]”
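The paper’s actual detector is a two-stream deep network, but the artifact intuition Morariu describes can be illustrated with a toy high-pass “noise residual” filter. The sketch below is a hypothetical stand-in for the paper’s learned filters, not Adobe’s code: a Laplacian kernel suppresses smooth image content, so a region whose noise pattern differs (such as a spliced-in patch) stands out in the residual.

```python
import random

random.seed(0)

# Build a synthetic 16x16 grayscale "image": a smooth gradient, with a
# "spliced" patch (rows/cols 8..15) carrying a different noise pattern.
SIZE = 16
img = [[float(x + y) for x in range(SIZE)] for y in range(SIZE)]
for y in range(8, SIZE):
    for x in range(8, SIZE):
        img[y][x] += random.gauss(0, 4.0)  # extra sensor-style noise

# 3x3 Laplacian high-pass kernel: zeroes out smooth content,
# passes high-frequency residue such as noise.
K = [[0, -1, 0], [-1, 4, -1], [0, -1, 0]]

def residual(image, y, x):
    return sum(K[dy][dx] * image[y - 1 + dy][x - 1 + dx]
               for dy in range(3) for dx in range(3))

def mean_abs_residual(y0, y1, x0, x1):
    vals = [abs(residual(img, y, x))
            for y in range(y0, y1) for x in range(x0, x1)]
    return sum(vals) / len(vals)

clean = mean_abs_residual(1, 7, 1, 7)      # untouched region: ~0
spliced = mean_abs_residual(9, 15, 9, 15)  # noisy "spliced" region: large
print(f"clean residual: {clean:.2f}, spliced residual: {spliced:.2f}")
```

The real system learns such filters from tens of thousands of examples rather than hand-picking a kernel, but the signal it exploits is the same: manipulated regions carry noise statistics inconsistent with the rest of the frame.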

AI can help identify how and where a photo was manipulated in just seconds.

But these technologies will complement, rather than replace, traditional means of “trust” in the world of photojournalism.

“The Associated Press and other news organizations publish guidelines for the appropriate digital editing of photographs for news media,” says Adobe Research director Jon Brandt. “I think one of the important roles Adobe can play is to develop technology that helps them monitor and verify authenticity as part of their process.

“It’s important to develop technology responsibly, but ultimately these technologies are created in service to society. Consequently, we all share the responsibility to address potential negative impacts of new technologies through changes to our social institutions and conventions.”

(via Adobe via Engadget)


Source: PetaPixel

This Sensor Can Stop a Drone’s Rotor in 0.077 Seconds to Save Your Finger

The rotors on camera drones can do serious damage to human flesh if the two come in contact. Researchers are working on a new flesh sensor that would stop a rotor so fast that an approaching finger can be spared from harm.

If the concept of detecting flesh and stopping a spinning blade sounds familiar, it’s because similar technologies already exist. In the world of table saws, the company SawStop uses a sensor that puts the brakes on a spinning saw blade in less than 5 milliseconds. Even if you push your finger into the blade quickly, the most damage you’ll receive is a relatively shallow nick.

Now researchers at the University of Queensland in Brisbane, Australia, are working to bring this concept to the world of drones. Their Safety Rotor is a drone safety system that spins a flesh-detecting sensor hoop along with the rotor.

“The measured latency [of the Safety Rotor’s braking response] was 0.0118 seconds from the triggering event to start of rotor deceleration,” the researchers report. “The rotor required a further 0.0474 s to come to a complete stop. Ninety percent of the rotational kinetic energy of the rotor (as computed from angular velocity) was dissipated within 0.0216 s of triggering, and 99 percent of the rotational kinetic energy of the rotor was dissipated within 0.032 s.”
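The quoted figures connect angular velocity to remaining energy through the rotational kinetic energy formula E = ½Iω², so the fraction of energy remaining is simply (ω/ω₀)² and the moment of inertia cancels out. A back-of-envelope check (my own arithmetic, not the researchers’ code) shows why 90% dissipation arrives so quickly: it only requires the rotor to slow to about 32% of its initial speed.

```python
import math

def energy_fraction_remaining(omega, omega0):
    """Rotational KE is E = 0.5 * I * omega^2, so the ratio of current
    to initial energy is (omega/omega0)^2; inertia I cancels."""
    return (omega / omega0) ** 2

def speed_for_dissipation(fraction_dissipated):
    """Angular speed (as a fraction of the initial speed) at which the
    given fraction of the initial kinetic energy has been dissipated."""
    return math.sqrt(1.0 - fraction_dissipated)

# 90% of the energy is gone once the rotor slows to ~31.6% speed;
# 99% is gone once it slows to 10% speed.
print(f"{speed_for_dissipation(0.90):.3f}")  # 0.316
print(f"{speed_for_dissipation(0.99):.3f}")  # 0.100
```

This is why most of the braking benefit lands in the first fraction of the stop: energy falls with the square of speed, so early deceleration removes energy much faster than the final spin-down.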

And like SawStop, the Safety Rotor is being demoed using a hot dog as a finger substitute. In tests, the sensor hoop left only light marks on the hot dog, and the rotor had completely stopped by the time the “finger” reached its plane. When the same test was run on a standard drone rotor, the “finger” was completely destroyed.

Here’s a 2.5-minute video that introduces and explains the Safety Rotor’s technology:

The scientists say that the Safety Rotor system would only add about 22g (0.78oz) of weight and $20 of cost to a drone. No word yet on if or when we’ll see this type of system actually appear in consumer drones.

(via IEEE Spectrum via TechCrunch)

Facebook’s AI Can Open Your Eyes in Blinking Photos

“Take it again, I blinked.” That’s something commonly said after pictures are snapped, but it may soon be a relic of the past if Facebook has its way. The company’s researchers have created an AI that can automatically replace closed eyes with open ones in your pictures.

The scientists trained the AI on photos of people with their eyes open so it could learn what each subject’s eyes normally look like. Having learned a person’s eye shape and features, the AI can then replace closed eyes in blinking photos with artificially generated open ones.
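Facebook’s system is a generative network conditioned on reference photos of the same person (exemplar-based in-painting), which is far too heavy to reproduce here. But the naive version of the idea, closer to copy-based tools than to a GAN, can be sketched: lift the eye-region patch from a reference image and blend it into the target, feathering the border so the seam is less visible. All names and the 8×8 toy images below are hypothetical illustrations, not Facebook’s method.

```python
# Toy exemplar "in-painting": copy an eye-region patch from a reference
# image of the same person into the target image, feathering the patch
# border with a linear alpha ramp to soften the seam.

def feather_weight(y, x, h, w, margin=1):
    """1.0 in the patch interior, ramping toward 0.5 at the border."""
    d = min(y, x, h - 1 - y, w - 1 - x)  # distance to nearest patch edge
    return min(1.0, (d + 1) / (margin + 1))

def paste_patch(target, reference, top, left, h, w):
    """Blend the h-by-w reference patch at (top, left) into target in place."""
    for y in range(h):
        for x in range(w):
            a = feather_weight(y, x, h, w)
            ty, tx = top + y, left + x
            target[ty][tx] = a * reference[ty][tx] + (1 - a) * target[ty][tx]

# 8x8 grayscale toy images: the target has "closed eyes" (a dark band),
# the reference has "open eyes" (a bright band) at the same location.
target = [[50.0] * 8 for _ in range(8)]
reference = [[50.0] * 8 for _ in range(8)]
for x in range(2, 6):
    target[3][x] = 10.0      # closed-eye pixels
    reference[3][x] = 200.0  # open-eye pixels

paste_patch(target, reference, top=2, left=1, h=3, w=6)
print(target[3][3])  # interior pixel now matches the reference: 200.0
```

A GAN improves on this by synthesizing pixels that match the target photo’s lighting, pose, and skin tone instead of copying them verbatim, which is exactly where the copy-based approach tends to fall apart.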

Adobe Photoshop Elements 2018 also contains a feature, called Open Eyes, that opens closed eyes by copying open eyes from other photos of the same subject. But as you can see in the comparison images below, its results leave quite a bit to be desired compared to Facebook’s.

Closed eyes (left), the results of Adobe Photoshop Elements 2018 (center), and Facebook’s eye-opening AI (right).

Here are some more examples of results produced by Facebook’s eye-opening AI:

Reference photos (left), closed-eye photos (center), and AI-generated open-eye versions (right).

Some results are better than others. A few are quite realistic, while others produce cold and creepy stares that you probably wouldn’t want to share with friends and family on Facebook.

Advancements in this type of eye-opening AI will undoubtedly produce better and more realistic results as time goes by. But for now, this is an interesting (and eerie) look at what the future may hold for our casual snapshots.

(via Facebook via The Verge)

NVIDIA’s AI Can Turn Normal Video Into High-Quality Slow Motion

NVIDIA researchers have created a new AI system that can turn standard 30fps video into high-quality slow-motion that looks like it was actually shot at higher frame rates with a high-speed camera. The 1.5-minute video above has demos and comparisons showing what the AI can create.

“There are many memorable moments in your life that you might want to record with a camera in slow-motion because they are hard to see clearly with your eyes: the first time a baby walks, a difficult skateboard trick, a dog catching a ball,” the researchers write in their research paper. “While it is possible to take 240-frame-per-second videos with a cell phone, recording everything at high frame rates is impractical, as it requires large memories and is power-intensive for mobile devices.”

So instead of documenting all of life with slow motion cameras, the idea is that people could enjoy the benefits of viewing memories in slow motion while only shooting ordinary video.

The researchers trained their AI on over 11,000 videos of sports and everyday life, shot at 240 frames per second. The large archive of actual slow-motion footage allowed the AI to learn how to predict extra frames and imagine them out of thin air.

To turn ordinary video into slow-motion video, the AI generates as many “missing frames” between the actual frames as needed to achieve the desired frame rate.
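NVIDIA’s network predicts intermediate frames that stay spatially and temporally coherent. The classical baseline it improves on is per-pixel linear blending between neighboring frames, which produces ghosting wherever objects move. The sketch below shows that baseline (not NVIDIA’s method; frame data and function names are illustrative) and the frame-count arithmetic for 8× slow motion:

```python
def interpolate_frames(frame_a, frame_b, num_inbetween):
    """Per-pixel linear blends between two frames -- the naive baseline
    that learned interpolation improves on (it ghosts on motion)."""
    frames = []
    for i in range(1, num_inbetween + 1):
        t = i / (num_inbetween + 1)  # fractional time of this in-between
        frames.append([[(1 - t) * a + t * b
                        for a, b in zip(row_a, row_b)]
                       for row_a, row_b in zip(frame_a, frame_b)])
    return frames

# To make 30 fps footage play like 240 fps capture shown at 30 fps
# (8x slow motion), insert 7 in-between frames per original pair.
f0 = [[0.0, 0.0], [0.0, 0.0]]    # toy 2x2 grayscale frame at t=0
f1 = [[80.0, 80.0], [80.0, 80.0]]  # toy frame at t=1
inbetween = interpolate_frames(f0, f1, 7)
print(len(inbetween), inbetween[3][0][0])  # 7 frames; midpoint pixel = 40.0
```

The learned approach replaces the straight-line blend with motion-aware synthesis, so a moving object appears at its estimated in-between position rather than as two translucent copies.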

“Our method can generate multiple intermediate frames that are spatially and temporally coherent,” the researchers state. “Our multi-frame approach consistently outperforms state-of-the-art single frame methods.”

“The method can take everyday videos of life’s most precious moments and slow them down to look like your favorite cinematic slow-motion scenes, adding suspense, emphasis, and anticipation,” NVIDIA writes.

