Google Gave Its Pixel 8 Cameras a Major Upgrade. Here’s How They Did It

With its Pixel 8 and Pixel 8 Pro smartphones, Google is bringing its big guns to the battle for smartphone photo and video leadership. Among more than a dozen notable improvements coming to the Android phones is a tool called Video Boost that uses AI processing on Google’s server-packed data centers to dramatically increase image quality.

When you first shoot a video, you’ll have just a 1080p preview version. But over the next couple of hours or so, while the video uploads and processes, Google uses new artificial intelligence models too big for a smartphone to improve shadow detail, reduce pesky noise speckles and stabilize the video. That means Google’s Night Sight technology, which in 2018 set a new standard for smartphone photos taken in dim and dark conditions, has now come to video, too. Or at least it will when Video Boost ships later this winter.

“Night Sight means something very big to us,” said Isaac Reynolds, the lead product manager in charge of the Pixel cameras. “It is the best low-light smartphone video in the market, including any phones that might have recently come out,” he said in an unsubtle dig at Apple’s iPhone 15 models. But Video Boost improves daytime videos, too, with better detail and smoother panning.


A side-by-side comparison of video from Apple's iPhone 15 Pro and Google's Pixel 8 Pro processed with Video Boost video technology. In the comparison, Google's video shows more details on a person's shadowed face and a bluer sky.

Google says its Video Boost technology produces better video dynamic range, including shadow detail and highlights that aren’t blown out, when compared to the Apple iPhone 15 Pro. But you’ll have to wait hours to get your Video Boost video back from Google’s data centers.

Google; Screenshot by Stephen Shankland/CNET

The improvements start with the main camera’s new image sensor, whose dual conversion gain technology is able to simultaneously capture details from both low-light and bright areas of a scene pixel by pixel, then blend the best of both. The result: “Whether it’s a high-contrast scene or a low-light scene, you’re going to see dramatically better performance versus the Pixel 7 and Pixel 7 Pro,” Reynolds said. “You don’t have to give up the dynamic range. That means less underexposure, which means less shadow noise.”
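
As a rough illustration, here’s a minimal sketch of that per-pixel blend, assuming two normalized linear readouts and a hypothetical gain ratio. It shows the concept, not Google’s actual algorithm.

```python
import numpy as np

def blend_dual_gain(low_gain, high_gain, gain_ratio=4.0, threshold=0.8):
    """Illustrative blend of two per-pixel readouts from a dual
    conversion gain sensor (a concept sketch, not Google's algorithm).

    low_gain:  readout tuned for bright regions, so it rarely clips
    high_gain: readout tuned for shadows; clean, but clips in highlights
    Both hold linear values normalized to [0, 1].
    """
    # Bring the high-gain readout onto the low-gain exposure scale.
    high_scaled = high_gain / gain_ratio

    # Trust the high-gain pixel in the shadows, fading to the low-gain
    # pixel as the high-gain readout approaches clipping.
    weight = np.clip((threshold - high_gain) / threshold, 0.0, 1.0)
    return weight * high_scaled + (1.0 - weight) * low_gain
```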

Next is the new Tensor G3 processor, the third generation of Google’s Pixel phone processors. The G3 has more built-in Google circuitry for AI and image processing than last year’s G2, and Google uses it to produce two videos. One is the 1080p preview version you can watch or share immediately.

The other is the Video Boost version that’s uploaded to Google for more editing. The G3 preprocesses that video and, for each frame, adds up to 400 metadata elements that characterize the scene, Reynolds said.

The last Video Boost step takes place in Google’s data centers, where servers use newly developed algorithms for noise reduction, stabilization and sharpening, tuned for low-light imagery. The processed video, at up to 4K resolution if that’s what you originally shot, then replaces the preview version on your phone.
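
Put together, the flow looks something like the schematic below. Every function name here is a hypothetical stand-in for illustration, not Google’s software.

```python
# A schematic of the two-track flow described above. Every function
# here is a hypothetical stand-in for illustration, not Google's API.

def encode_preview(frames):
    # Track 1 output: the 1080p version you can watch or share now.
    return {"resolution": "1080p", "frames": frames}

def analyze_scene(frame):
    # Stand-in for the Tensor G3 preprocessing that attaches up to
    # 400 metadata elements characterizing each frame, per Reynolds.
    return {"mean_level": sum(frame) / len(frame)}

def shoot_video_boost(frames, gallery, upload_queue):
    gallery["video"] = encode_preview(frames)           # instant preview
    annotated = [(f, analyze_scene(f)) for f in frames]
    upload_queue.append(annotated)                      # off to Google

def receive_boost_result(processed, gallery):
    # Data center servers run noise reduction, stabilization and
    # sharpening; the result (up to 4K) replaces the preview.
    gallery["video"] = processed
```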

Reynolds defends the video’s data center detour as worthwhile.

“The results are incredible,” he said. Besides, people like to reminisce, revisiting a moment through photos and videos hours later, not just months or years later. “I don’t think there’s any downside at all to waiting a couple of hours,” he said.

It may be worth the wait, but pushing gigabytes of video to Google could also take longer than a couple of hours. If you’re away from home Wi-Fi, you might worry about blowing through your plan’s mobile data cap. And at home, you might be among the millions of people whose broadband doesn’t actually offer fast upload speeds.
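
Some back-of-the-envelope arithmetic, with assumed numbers, shows why. A 10-minute 4K clip recorded at roughly 50 megabits per second works out to about 3.75GB:

```python
# Assumed figures for illustration; actual bitrates and speeds vary.
clip_seconds = 10 * 60                 # a 10-minute clip
bitrate_mbps = 50                      # assumed 4K recording bitrate
size_gb = clip_seconds * bitrate_mbps / 8 / 1000   # about 3.75GB

for name, upload_mbps in [("fast fiber", 100), ("slower cable", 10)]:
    minutes = size_gb * 8 * 1000 / upload_mbps / 60
    print(f"{name}: about {minutes:.0f} minutes")  # ~5 vs. ~50 minutes
```

On the slower connection, that one clip takes nearly an hour to upload before Google’s servers can even start working.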

More megapixels on Pixel 8 cameras

If you’re taking photos, the cameras don’t use the image sensor’s dual conversion gain technology — at least not yet, though Google says it’s excited about the technology’s potential.

But there are other big improvements: Like Samsung and Apple, Google is now advancing beyond the 12-megapixel smartphone photo resolution we’ve had for years.

When Apple introduced its iPhone 14 Pro in 2022, it let photographers shoot 48-megapixel photos with the main camera. Samsung goes even further with a 200-megapixel sensor, though the results aren’t generally impressive beyond 50 megapixels. In comparison, even though the Pixel 7 and 7 Pro had 50-megapixel main cameras, Google offered photos at only 12-megapixel resolution. (Though it did offer 2x and 10x modes that took advantage of the full resolution of its sensors.)

This year, Google is leapfrogging Apple when it comes to pixel count on the Pixel 8 Pro. Not only can you take photos at the main camera’s full 50-megapixel resolution, you can also take 48-megapixel ultrawide (like this year’s OnePlus 11) and 48-megapixel 5x telephoto shots. (The Pixel 8 can take only 12-megapixel ultrawide shots.)

Google Pixel 8 Pro's three-camera camera bar

Photos and videos are a top priority and a competitive strong suit for Google’s phones. This closeup of the “obsidian” colored Google Pixel 8 Pro shows its larger new 48-megapixel ultrawide camera at left, its updated 50-megapixel main camera in the center and its 48-megapixel 5x telephoto camera at right.

Stephen Shankland/CNET

Only the Pro gets an improved ultrawide camera, which can gather 105% more light thanks to a larger sensor and a wider-aperture lens. That’s important for low-light scenes and for supporting the 48-megapixel resolution.
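
The two improvements multiply. Here’s the arithmetic with hypothetical figures chosen only to show how the factors combine; Google hasn’t published this breakdown of its 105% claim:

```python
# Hypothetical figures, chosen only to illustrate how a bigger sensor
# and a wider aperture combine; not Google's published breakdown.
area_gain = 1.6                        # assume 60% more sensor area
f_old, f_new = 2.2, 1.95               # assumed old and new f-numbers
aperture_gain = (f_old / f_new) ** 2   # light scales with 1/f-number^2

total = area_gain * aperture_gain      # about 2.04x
print(f"{(total - 1) * 100:.0f}% more light")  # about 104% more
```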

The Pro’s ultrawide camera also gets autofocus abilities and reduces its close-focus distance from last year’s 3cm to 2cm — about 0.8 inch. That means macro shots will have much better background blur, Reynolds said.

New Google camera app gets “pro controls”

Google is proud of its “computational raw” technology, which combines the multishot blending used to create ordinary JPEGs with the editing flexibility of raw photos. That means more dynamic range than with single-frame raw shots, which is handy for people who edit their photos afterward in software like Adobe Lightroom.
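
The multishot idea at its core can be sketched in a few lines. This is a simplified illustration of frame averaging, not Google’s actual pipeline, which also aligns and weights frames:

```python
import numpy as np

def merge_raw_frames(frames):
    """Average N already-aligned raw frames. Random noise drops by
    roughly sqrt(N), so shadows can be lifted in an editor like
    Lightroom with far less grain than a single-frame raw, while the
    output stays a linear, editable raw-like image."""
    stack = np.stack([f.astype(np.float32) for f in frames])
    return stack.mean(axis=0)

# e.g. merging 9 frames cuts random noise about 3x versus one frame.
```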

With the Pixel 8 phones’ camera app, Google is giving photographers new controls to fine-tune their shots as they’re taken. It’s got new “pro controls” that expose options for shutter speed, exposure length, white balance, ISO sensitivity and focus, Reynolds said.

Most people won’t shoot raw or use the manual controls, but that doesn’t mean those features aren’t important. “It gets you the photo you need when you absolutely have to have it a certain way,” Reynolds said.

Another big change to Google’s camera app has already started arriving. For years, that app has presented buttons with a choice of modes like photo, video, panorama, Night Sight and slow-motion video. Now Google offers a master switch for video and photo, each with its own range of buttons.

Also new to the camera app is an improvement to the DNG files used for raw photos. They now store more metadata so software like Adobe Lightroom will display a version that better matches the colors and tones of the Pixel’s fine-tuned JPEG. Raw photos in general have been retuned for better color and subtle tonal differences, Google said.

The Pixel 8 phones are the first smartphones to use Ultra HDR, an Android photo format that adds extra information to a JPEG photo so compatible software can show a high dynamic range version. For instance, Ultra HDR brightens stars in astrophotography photos, which show more detail at 50-megapixel resolution.
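
Conceptually, compatible software builds the HDR rendition by multiplying the JPEG’s normal image by the stored gain map. Here’s a simplified sketch; the real Ultra HDR format records additional parameters:

```python
import numpy as np

def apply_gain_map(sdr, gain_map, max_boost=4.0):
    """Simplified Ultra HDR recovery step (the actual format stores
    more parameters). On an HDR display, each pixel of the normal SDR
    image is brightened by up to max_boost.

    sdr:      linear SDR image with values in [0, 1]
    gain_map: per-pixel values in [0, 1], where 1 means full boost
    """
    boost = np.exp2(gain_map * np.log2(max_boost))  # 1x up to max_boost
    return sdr * boost
```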

New Pixel 8 shooting tricks

The new Pixels come with other abilities, too, many using artificial intelligence technology. Here are some:

  • For group photos, a new feature called Best Take lets you choose the faces you want from a group of photos. It shows thumbnails of each face in the photos, and when you tap on one, it shows you the various expressions. You can pick everybody’s best smiles or goofy faces for the composite photo the camera creates.
  • The G3’s AI acceleration abilities automatically cut crowd noise and wind from videos. But a new editing tool called Audio Magic Eraser isolates different sounds to let you pick what you want: sound level sliders adjust the mix of speech, wind, music, crowd hubbub and background noise, as the sketch after this list illustrates.
  • A new Magic Editor tool lets you move scene elements like people or change their size. One tap outlines a scene element; then you can drag it to move it or pinch to resize it. Like Video Boost, this tool goes to Google’s cloud computing system to do the heavy lifting of creating any new imagery that’s needed. The tool will be available only as an early access version to start.
  • A tool called Zoom Enhance will use generative AI to create higher-resolution photos out of smaller, pixelated originals.
  • Google has taken some measures to improve lens flare problems common on smartphones. Among other things, when shooting toward bright point sources of light like the sun, the Pixel can remove the distracting green dot such sources often produce.
  • Magic Eraser, which lets you obliterate scene elements like distracting people in the background, gets a big AI boost. It now uses generative AI so the phone can fill in larger areas. It also can remove shadows from selected elements you’re erasing.
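
As promised above, here’s a toy sketch of Audio Magic Eraser’s slider stage. It assumes the hard, ML-driven part, separating the track into stems, has already happened, and the names are illustrative:

```python
import numpy as np

def remix(stems, levels):
    """Weighted remix of separated audio stems, a toy stand-in for the
    slider stage of Audio Magic Eraser (the separation itself is the
    ML-heavy step and isn't shown).

    stems:  dict of name -> 1D array of samples, all the same length
    levels: dict of name -> slider gain, 0.0 (mute) to 1.0 (unchanged)
    """
    return sum(levels.get(name, 1.0) * samples
               for name, samples in stems.items())

# Example: mute the wind, duck the crowd, keep speech untouched.
stems = {"speech": np.ones(4), "wind": np.ones(4), "crowd": np.ones(4)}
print(remix(stems, {"wind": 0.0, "crowd": 0.2}))  # -> [1.2 1.2 1.2 1.2]
```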

How well all these features and technologies work remains to be seen. But it’s clear Google is investing heavily in the Pixel photo and video technology.
