This Website Uses AI to Enhance Low-Res Photos, CSI-Style

Let’s Enhance is a new free website that uses neural networks to upscale your photos in a way Photoshop can’t. It magically boosts your photo’s resolution, like something straight out of CSI.

The service is designed to be minimalist and extremely easy to use. The homepage invites you to drag and drop a photo into the center (once you do, you’ll be asked to create a free account):

Once it receives your photo, the neural network goes to work, upscaling the image by 4x, removing JPEG artifacts, and “hallucinating” missing details and textures into the upscaled photo to make it look natural. You’ll need to wait a couple of minutes for the work to be done, but it’s worth the wait: the results we’ve seen are impressive.

We first tested the system with a press photo we had on hand from the Rylo camera that just launched. Here’s the original:

We then resized the image down to 500 pixels wide.

A small 500 pixel wide version of the test photo.

Upsampling the 500px-wide photo in Photoshop to 2000px wide using the “Preserve Details (enlargement)” resample option produces a photo with horrible textures (look at the fingers):

An upsampled version created using Photoshop.

But upsampling the 500px-wide photo using Let’s Enhance produces a much cleaner version of the image that magically restores realistic textures to the hands:

Here’s a crop comparison to help you more easily see the difference:

Upsampled with Photoshop (left) and Let’s Enhance (right).

We did a number of other similar tests. Here are the results:

Cat

A crop of an original photo by Linnea Sandbakk.
Upscaled with Photoshop
Upscaled with Let’s Enhance.

Face

A crop of an original photo by Brynna Spencer
Upscaled with Photoshop
Upscaled with Let’s Enhance

Buildings

A crop of an original photo by picjumbo.com
Upscaled with Photoshop
Upscaled with Let’s Enhance

Face

The system is currently weak at things like eye realism, but it excels with hair, landscapes, and animals.

A crop of an original photo by Matthew Kane.
Upscaled with Photoshop
Upscaled with Let’s Enhance

Let’s Enhance was founded by Alex Savsunenko and Vladislav Pranskevičius, a chemistry Ph.D. and a former CTO (respectively) who have been building the software over the past 2.5 months.

“We are researchers ourselves,” Savsunenko tells PetaPixel. “We took few state-of-art approaches, hacked around and rolled them into production-ready system. Basically we were inspired by SRGAN and EDSR papers.”

Let’s Enhance is currently in its first version and will continually be improved based on user needs and feedback. The current neural network “was trained on a very broad subset of images that included portraits at about 10% rate,” Savsunenko says. “The idea is to make separate networks for each ‘type’ of image. Detect the type uploaded under the hood and apply some appropriate network. The current version does better with animals and landscapes.”

Every time you upload a photo, three results are produced for you: the Anti-JPEG filter simply removes JPEG artifacts, the Boring filter does the upscaling while preserving existing details and edges, and the Magic filter uses the neural network to draw and hallucinate new details into the photo that weren’t actually there before.
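
Savsunenko’s nod to the SRGAN and EDSR papers hints at what a bare-bones version of the “Boring” vs. “Magic” distinction looks like in code. Here is a rough sketch, and emphatically not Let’s Enhance’s actual pipeline: it runs a pretrained model from the same family (EDSR) through OpenCV’s dnn_superres module, with plain bicubic resampling for contrast. The model filename and image paths are placeholders.

```python
# A minimal sketch of EDSR-style 4x super-resolution using OpenCV's
# dnn_superres module (pip install opencv-contrib-python). This is NOT
# Let's Enhance's pipeline; it just runs the same family of model (EDSR).
# "EDSR_x4.pb" is a pretrained model file you download separately, and
# the image paths are placeholders.
import cv2

sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("EDSR_x4.pb")       # pretrained EDSR weights, 4x scale
sr.setModel("edsr", 4)

low_res = cv2.imread("input_500px.jpg")
upscaled = sr.upsample(low_res)  # 500px wide -> 2000px wide

# For contrast, a "dumb" bicubic resize interpolates existing pixels only:
bicubic = cv2.resize(low_res, None, fx=4, fy=4, interpolation=cv2.INTER_CUBIC)

cv2.imwrite("upscaled_edsr.png", upscaled)
cv2.imwrite("upscaled_bicubic.png", bicubic)
```

A classical resize can only interpolate pixels that already exist; adversarial models in the SRGAN family are what add the “hallucinated” texture.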

If you’d like to use Let’s Enhance on your own photos, head on over to the website and drag-and-drop an image into your browser.


Source: PetaPixel


This is Trump’s New Official Portrait

The White House just published President Trump’s official portrait photo, which means the one that was released in January 2017 was a placeholder until the real official photo could be made.

The newly released portrait was shot on Friday, October 6, 2017, by official White House photographer Shealah Craighead. In case you missed it the first time around, here’s what the older “official” portrait looked like:

As you can see, Trump is looking a lot happier in the new portrait and there isn’t a strange blue color cast in the background.

Here are President Obama’s two official portraits captured by former White House photographer Pete Souza:

Trump’s new portrait was released more than 9 months after he took office, and it was produced by the U.S. Government Publishing Office (GPO).


Source: PetaPixel


Great Photos Don’t Need to Be Technically Perfect

Do photos always need to be technically perfect? In this 10-minute video, landscape photographer Thomas Heaton discusses whether photographers worry too much about the technicalities of a photo, forgetting about what’s actually in the image.

“The best standalone images are those that tell a story, those that make the viewer feel something,” says Heaton.

This image shows water droplets on the lens, but those droplets are the very thing that makes the viewer appreciate the horrible, rainy conditions Heaton faced on the day. Does that make this a bad photo?

“For me, those water droplets actually really, really add to the image,” says Heaton. “I have no interest in removing them. I think they help tell the story.”

But some disagree. A user commenting on his channel said they were a “shame,” and it was that comment that prompted Heaton to make the video in the first place.

Another shot shows a storm rolling in on the coast, but Heaton admits he missed the focus “by a mile.” However, he doesn’t think it matters. The scene itself, when you’re not pixel-peeping, looks great.

“Photography is full of contradictions,” concludes Heaton. “The truth is it’s all about what happens in the moment. Don’t follow the rules, and don’t shoot for anybody other than yourself.”


Source: PetaPixel


Jesse Jo Stark Delivers a Halloween Treat with a Haunting, Live Rendition of “Monster Man”

Emerging L.A. songstress Jesse Jo Stark is a timeless vision from an old Hollywood horror film, and artistically she is able to channel that classic darkness into her music. Gearing up to release her debut full-length record, the singer is on the rise, and with her noir-fused, alternative tracks and an affinity for the spookier side of life, everything about her is about to appropriately haunt your conscience and nightmares.
As Stark prepares for her forthcoming album, she is also set to r…

Keep on reading: Jesse Jo Stark Delivers a Halloween Treat with a Haunting, Live Rendition of “Monster Man”
Source: V Magazine


Tuesday Tip: How to Learn What They Don’t Teach You in Photo School

Before she launched her own career, fashion and beauty photographer Kat Borchart spent five years as post-production supervisor for fashion photographer Dewey Nicks. “It was a huge game changer,” she says. “I got to see everything about what being a photographer is: promotions, treatments, pitches, managing the archive.” She adds, “When I went on set I absorbed everything I could.” She paid close attention to how Nicks worked with clients, handled talent on set, and directed crew. “He shot a lot of celebrities. I saw how he gave them inspiration and direction,” Borchart says. “I also saw the importance of great hair styling and makeup, and [choosing] great locations.”

A year after she started working for Nicks, Borchart started doing her own test shoots on the side, applying the things she was learning from Nicks. That enabled her to build a portfolio that eventually led to her first assignments.

See “How Kat Borchart Built a Career in Fashion and Beauty Photography”

Related:
From Producer to Photographer: How Christin Rose Made the Transition

How Frances F. Denny Made the Jump from Assistant to Fine Art and Ad Photographer

Advice from the Trenches for Graduating Photography Students

9 Tips for Getting Hired (and Re-Hired) as a Photographer’s Assistant



Source: PDN Pulse


Apple Acquires Camera Sensor Startup Behind QuantumFilm: Report

Apple has quietly acquired InVisage, the camera sensor startup company behind QuantumFilm, according to a new report.

Image Sensors World reports that Apple closed a deal to acquire InVisage back in July 2017, and that some of the employees of InVisage have joined Apple while others were let go.

We first covered InVisage back in 2010, when the California-based startup announced QuantumFilm, a new image sensor technology that was touted as being 4 times more sensitive than traditional camera sensors.

The sensor uses a layer of “quantum dot” semiconductor material on top of the traditional silicon sensor to gather light with 90% efficiency, compared to 50% for traditional silicon.
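
Taken at face value, those two figures account for only part of the “4 times more sensitive” claim mentioned above; a quick back-of-envelope calculation (ours, not InVisage’s) makes the gap explicit:

```python
# Back-of-envelope, using only the figures quoted above (our arithmetic,
# not InVisage's): efficiency alone yields less than the claimed 4x.
silicon_efficiency = 0.50       # ~50% for traditional silicon
quantumfilm_efficiency = 0.90   # ~90% claimed for QuantumFilm

print(quantumfilm_efficiency / silicon_efficiency)  # 1.8x from efficiency alone

# Any remaining advantage would have to come from elsewhere, e.g. the
# light-absorbing layer sitting above the pixel wiring so that
# (hypothetically) nearly the full pixel area collects light.
```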

Cameras using these QuantumFilm sensors would boast higher resolution, light sensitivity, and dynamic range, the company said. In its original video introducing QuantumFilm, InVisage did a side-by-side comparison showing the sensor’s advantages over the iPhone 6:

In October 2015, the company released PRIX, the world’s first film to be shot using a QuantumFilm sensor:

“While the deal has never been officially announced, I got unofficial confirmations of it from 3 independent sources,” Vladimir Koifman of Image Sensors World writes. Koifman also points out that two of InVisage’s investors, Nokia Growth Partners and InterWest Partners, now list InVisage as having exited on their websites:

If true, this acquisition could help Apple further improve the camera technologies in the iPhone as the smartphone camera wars continue to escalate.

(via Image Sensors World via Ubergizmo)


Source: PetaPixel


This Music Video is a Weird Photoshop Editing Timelapse

Here’s the new official music video for the song “Do I Have to Talk You Into It” by Spoon. If you’re a photographer who has watched post-processing tutorials online, the concept of this music video will be strangely familiar to you: it’s a Photoshop editing timelapse.

The 4.5-minute video shows the band’s lead singer, Britt Daniel, being edited in Photoshop in all kinds of strange ways, from having his sunglasses edited out and his face warped with the Liquify filter to having his skin and muscles removed to reveal the skeleton within.

And if you’re wondering how any of the edits are done, just use the YouTube video player settings to watch the video at 0.25x speed, and voila! It becomes a silent Photoshop tutorial.

(via Spoon via Laughing Squid)


Source: PetaPixel


Rey Pila’s “Fangs” Is A Synth-Ridden Spook Fest

Rey Pila, the band from Mexico City who sound like they’re from an ’80s synth-ridden future, are hot on the heels of the release of their Wall of Goth EP and are already bringing us another wall of sound to dance to. Hand-picked by Strokes frontman Julian Casablancas, who signed them to his indie imprint Cult Records, they released the hauntingly eerie track “Fangs” last Friday, just in time for the Day of the Dead.
With seductive riffs and dark synths bleeding into each other in a way …

Keep on reading: Rey Pila’s “Fangs” Is A Synth-Ridden Spook Fest
Source: V Magazine


This Neural Network Enhances Phone Photos to ‘DSLR-Quality’

Want to turn your smartphone snapshots into DSLR-quality photos? A group of scientists in Switzerland is trying to help make that possible. They’ve created a neural network that aims to automatically enhance low-quality phone snapshots into “DSLR-quality photos.”

The research group at ETH Zurich detailed their new artificial intelligence photo enhancer in a paper titled “WESPE: Weakly Supervised Photo Enhancer for Digital Cameras.”

“Low-end and compact mobile cameras demonstrate limited photo quality mainly due to space, hardware and budget constraints,” the scientists write. “We propose a deep learning solution that translates photos taken by cameras with limited capabilities into DSLR-quality photos automatically.”

The scientists first trained a deep learning system on what makes a “DSLR-quality photo” by shooting photos of the exact same scenes using both smartphones (the iPhone 3GS, BlackBerry Passport, and Sony Xperia Z) and a DSLR (a Canon 70D). They then also trained the system using a large set of DSLR-quality photos unrelated to the smartphone photos.

The results created using the large set of photos were superior in key ways to those produced with the set shot on location with the DSLR, which “shows that training benefits from a data diverse dataset (different sources) of high-quality images with little noise levels, rather than a set of images from a single high-quality camera,” the scientists say.

So instead of requiring a giant dataset of paired original and enhanced photos, the system needs only a single set of photos from any new smartphone: it is trained against high-quality photos that can be completely unrelated to the source camera’s images.
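
In other words, the supervision is adversarial rather than paired. As a heavily simplified illustration of that idea (toy networks and a bare adversarial loss, not the WESPE architecture), a PyTorch sketch might look like this:

```python
# Toy sketch of weakly supervised enhancement, in the spirit of WESPE
# (but with toy networks and losses, NOT the paper's architecture):
# a generator learns to push phone photos toward the statistics of an
# UNPAIRED set of high-quality photos; a discriminator supplies the signal.
import torch
import torch.nn as nn

generator = nn.Sequential(                      # "enhance" a phone photo
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)
discriminator = nn.Sequential(                  # "does this look DSLR-quality?"
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, stride=2, padding=1),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),      # one logit per image
)
bce = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-4)

for step in range(100):
    phone = torch.rand(8, 3, 64, 64)  # stand-in: batch of phone photos
    dslr = torch.rand(8, 3, 64, 64)   # stand-in: unrelated high-quality photos

    # Discriminator step: real high-quality vs. enhanced phone photos.
    enhanced = generator(phone).detach()
    d_loss = (bce(discriminator(dslr), torch.ones(8, 1)) +
              bce(discriminator(enhanced), torch.zeros(8, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: fool the discriminator. No paired ground truth needed.
    g_loss = bce(discriminator(generator(phone)), torch.ones(8, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

The actual paper layers content, color, and texture losses on top of the adversarial signal, but the key property survives in the toy version: no pixel-aligned ground truth is ever required.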

Here are some before-and-after examples of how this neural network enhances smartphone photos:

Before. Shot on the iPhone
After
Before
After
Before. Shot on the Nexus 5X
After
Before
After
Before. Shot on the iPhone 6.
After
Before. Shot on the Xiaomi Redmi 3X.
After
Before. Shot on the HTC One M9.
After
Before. Shot on the HTC One M9.
After

As you can see from these sample photos, the neural network seems to have a bad habit of blowing out highlights (check out the clouds). These emerging technologies are continually refined over time, though, so perhaps it’s a weakness that will be addressed in the future.

While scientists across the world are tackling the problem of using AI to automatically enhance lower-quality photos, the ETH Zurich team says its system’s advantage is that it can train itself for any new camera with just a set of photos from that camera, instead of the paired photos required by traditional enhancement methods.

Want to try this neural network out yourself? The scientists have set up a webpage that lets you upload your own smartphone photos to see what the AI enhancement produces.

(via ETH Zurich via Engadget)


Source: PetaPixel


What Photographers Need to Know About Computational Photography

What do you think of when you think of computational photography?

Wikipedia defines it as “digital image capture and processing techniques that use digital computation instead of optical processes. Computational photography can improve the capabilities of a camera, or introduce features that were not possible at all with film based photography, or reduce the cost or size of camera elements.”

In a panel at PhotoPlus Expo, several experts explored scenarios for what computational photography will mean for creative professionals. The short answer: more immersive virtual reality; smaller, lighter cameras that nonetheless perform as well as (if not better than) DSLRs; and the embedding of more information inside images to enable augmented reality experiences.

For photographers, computational photography creates some cognitive dissonance, according to Allen Murabayashi, co-founder of PhotoShelter. “You no longer have to get it right in camera” because the camera is increasingly smart enough to get it right for you. But that doesn’t mean that all of photography is on a glide-path toward a utopian future.

The central question photographers face is whether they can “transcend the novelty of these [technologies] to best leverage these features for storytelling,” Murabayashi said.

Jim Malcom, General Manager of Humaneyes North America (makers of the Vuze Camera), was very bullish that VR creators can do just that.

“People don’t know what they want until they experience it,” he said. With VR, creators now have a “fourth screen,” the VR headset, to create content for. There are 15 million headsets in circulation now, he added, and the market for VR content is already valued in the billions of dollars. While computation will enable VR cameras to capture increasingly realistic footage (by, among other things, faster stitching of stereoscopic content), it’s up to artists to experiment with the format, he said.

Don’t Miss: Vuze Virtual Reality Camera Review

Rajiv Laroia, co-founder of Light, made the most sweeping prediction. In the coming years, computational photography “will be the norm” and DSLRs will be like film cameras are today: a small audience will still use them, but most photographers will have moved on.

“It’s like when flat panel TVs came out, there was no longer a reason to buy a CRT,” he said.

The L16, Light’s first product, is the poster child for computational photography. It combines 16 cameras into a single, relatively compact body while still producing huge RAW files (up to 160MB apiece) with light-field capabilities to alter the focus point and depth of field after an image has been captured.
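
Refocusing after capture sounds exotic, but the underlying light-field idea is simple: given many views of a scene from slightly different positions, shifting each view in proportion to its camera offset and averaging selects which depth plane ends up sharp. Here is a toy shift-and-add sketch in that spirit (illustrative only, and nothing like Light’s actual processing):

```python
# Toy "shift-and-add" refocus over multiple views of a scene. Illustrative
# of the light-field principle only; this is nothing like Light's actual
# processing, and the data below is random stand-in imagery.
import numpy as np

def refocus(views, offsets, alpha):
    """Average the views after shifting each by alpha * its camera offset.

    views:   list of HxWx3 arrays from cameras at different positions
    offsets: list of (dx, dy) camera positions relative to the center view
    alpha:   selects the depth plane that ends up in focus
    """
    acc = np.zeros_like(views[0], dtype=np.float64)
    for view, (dx, dy) in zip(views, offsets):
        acc += np.roll(view, (int(alpha * dy), int(alpha * dx)), axis=(0, 1))
    return (acc / len(views)).astype(views[0].dtype)

rng = np.random.default_rng(0)
views = [rng.integers(0, 256, (480, 640, 3), dtype=np.uint8) for _ in range(16)]
offsets = [(x, y) for y in range(-2, 2) for x in range(-2, 2)]  # 4x4 grid

near_focus = refocus(views, offsets, alpha=3.0)  # larger alpha: nearer plane
far_focus = refocus(views, offsets, alpha=0.5)
```

Views of an object whose disparity matches alpha line up and reinforce each other; everything else averages into blur, which is how the focus plane can be chosen after capture.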

Camera companies need to start viewing their products as “computers with sensors” or they’ll be in dire risk of being left behind by the world of computational photography, Murabayashi added.

For Steve Medina, Product Manager at the augmented reality company Avegant, the promise of computational photography lies in the ability to blend real-world information with photographic objects. “Augmented reality doesn’t replace photography, it adds context and information,” he said. As an example he cited a movie poster with characters that would “come alive” when you pointed a phone at them.

Computational photography doesn’t simply mean totally novel experiences, either. It also means adding information to photographic and video metadata that wasn’t available to earlier cameras. In a previous job at GoPro, Medina was working on technology to feed the camera information like a user’s heart rate, acceleration, height and orientation from external Bluetooth sensors. This information could then be used by the camera or by desktop editing software for cutting the video. “Maybe you want to focus only on moments when the filmmaker’s heart rate was high or when they were moving rapidly,” he said.
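
As a concrete, hypothetical illustration of that last idea, picking out the exciting moments reduces to a simple filter over time-stamped sensor metadata:

```python
# Hypothetical illustration only: the data format is invented for this
# example and is not a GoPro metadata API.
samples = [            # (seconds into the video, heart rate in bpm)
    (0, 72), (10, 75), (20, 120), (30, 145), (40, 138), (50, 80),
]

THRESHOLD_BPM = 120
highlights = [t for t, bpm in samples if bpm >= THRESHOLD_BPM]
print(f"Cut to timestamps: {highlights}")  # [20, 30, 40]
```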

For pro photographers looking to navigate these emerging technologies, Murabayashi’s advice was simple: look to differentiate yourself by knowing “which technologies to use to tell which stories,” he said. And don’t think of yourself as an artist, but “as a service provider of visual communications.”

Don’t Miss: How Photography Is Changing in the Era of Machine Learning



Source: PDN Pulse
