Well… No… But kinda? Maybe?
When I talk about camera sensor improvements, I’m mostly impressed by sensor sizes getting larger. When we increase sensor surface area, we see more dramatic improvements to photo quality. The aesthetics of a photo change with a larger sensor, and we can achieve shallower depth of field and more pronounced bokeh more easily.
That’s not to say that staying at the same sensor size, and adopting new tech at that size, can’t ALSO bring benefits. It’s just, in my time reviewing phones, I haven’t seen where the “tech” upgrade has ever been as noticeable as the “size” upgrade.
Recently, one of the few times that pattern has been challenged was while reviewing the Xiaomi 14. My expectations were REALLY low as it was another phone with a 1/1.3″ type sensor, but this time the sensor was from Omnivision.
In my experience, Omnivision made the “cheap” sensors for companies that wanted to save money on manufacturing.
With my expectations that low, it was shocking to see the Xiaomi 14 outperform the Galaxy Note 24. Had sensor tech (at comparable sizes) finally made a noticeable impact on low light performance? Did this change the whole game?
This post was published early for my Patrons! If you have the means to support directly, I hope you’ll consider checking out the community at Patreon.com/SomeGadgetGuy!
Seeing the Xiaomi and Samsung go head to head, my expectations were MUCH higher for the Omnivision comparison against the Pixel 9 Pro. The Pixel is using a newer variant of an older Samsung GN sensor. If Samsung was going to put their best sensor tech in a phone, one might assume they’d put it in the Galaxy, and Google would use the older (still very good) sensor as a way to save a little on manufacturing costs.
I was anxious to see if my prediction would prove true…
I hiked out at night to shoot some rear camera selfies.
The Omnivision allows for an extended ISO setting controlled by the user, and I wanted to show how that extra stop of ISO could help in DARK conditions.
I set both phones to 1/30th of a shutter. The Pixel lets you set the ISO up to 3200, and the Xiaomi was at 6400.
I was quite surprised to see how close the two photos were.
Did I really get this SO wrong? I thought I was pretty good at following sensor tech and testing this hardware…
As these are RAW files, we’re not much concerned with the color differences here. We’re looking at brightness, detail, and noise.
I think the Xiaomi did a TINY bit better job at resolving fine detail, but certainly nothing dramatic. I don’t think it’s any kind of perk if you can only see sensor improvements by pixel peeping to an extreme degree. These shots side by side would not motivate a purchasing decision.
I got both phones back in the GadgetLab, and moved the photos to my workstation.
It’s annoying because the Google RAW files only show a LOW resolution preview image in the Windows Photos app, so for EVERY comparison image, I had to send each file through Affinity and double check that NO processing was applied.
While looking at images in Affinity, I noticed something strange.
I had set the Pixel to ISO3200, but Affinity saw the metadata on the RAW file at ISO1400. I double checked EVERYTHING. Looking at the Pixel JPG files, the metadata showed ISO3200. The Xiaomi showed ISO6400 in both RAW and JPG.
That’s weird. That’s REALLY weird…
I needed a more controlled setup, and blacked out my office to take some shots of my stuffed dog.
This time, I let the Xiaomi ramp up to its max ISO of 12800. I set the Pixel at ISO3200 again. Looking at the RAW files, there’s a definite “tell” that something is happening with the Pixel.
There’s almost no noise. It’s nearly pitch black in my office, and there’s almost NO noise in a RAW file from the Pixel?
Yet again we see the same metadata split, where the JPG reports ISO3200 and the RAW is ISO1400. No matter what combination of settings I used in the Google Camera app, I kept getting these processing differences.
I needed another comparison to see if this was a camera firmware issue or a camera app issue.
Switching over to Open Camera, I did a similar test, setting the camera to 1/30th of a second shutter and ISO3200.
Affinity confirmed that the Open Camera RAW was taken at ISO3200. I would say the Open Camera RAW is EVER so slightly brighter, and there is better detail preserved. There’s no noise reduction applied. We SHOULD see noise and grain in a RAW photo. That’s how we preserve fine detail.
Also, the Pixel shutter is slow. It’s doing SOME kind of additional processing even though WE set the conditions for the photo. 1/30th of a shutter should be quick. In Open Camera it was quick. In Google Camera it “scanned”.
Until this point, I had no idea that Open Camera could tap into an extended ISO range for these sensors. The ISO3200 limitation is placed on this sensor by Google and Samsung. The sensor is capable of going harder.
It’s a weird number, but Open Camera will let you push the Samsung sensor in the Pixel to ISO11272.
Open Camera RAW files are WAY better!
The noise is not as bad as I thought it would be. There’s more detail in the Open Camera shot. The Google camera is badly smeared by comparison. All this, and it’s SIGNIFICANTLY brighter. We can notice that right away.
This is HUGELY beneficial as we could also use a faster shutter if we needed to. Maybe you want a dark image (because it’s night time), but you want to freeze action better. The Google Camera is going to perform HDR processing that will smear details AND still likely motion blur your subject.
With Open Camera, you’d have a bit more noise to deal with, but you could shoot two stops faster at 1/120th of a second, and have a similar brightness to the Google Camera reporting 1/30th and ISO3200.
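If you want to sanity-check that equivalence, the stop arithmetic works out. Here’s a quick sketch (the `ev_delta` helper and framing are my own illustration, not anything built into either camera app):

```python
import math

def ev_delta(iso_a, shutter_a, iso_b, shutter_b):
    """Exposure difference in stops between two ISO/shutter combos.
    Shutter values are durations in seconds; positive = combo A is brighter."""
    return math.log2(iso_a / iso_b) + math.log2(shutter_a / shutter_b)

# Google Camera's reported settings vs the Open Camera extended-ISO shot
google = (3200, 1 / 30)      # ISO3200 at 1/30s
open_cam = (11272, 1 / 120)  # ISO11272 at 1/120s (two stops faster shutter)

delta = ev_delta(open_cam[0], open_cam[1], google[0], google[1])
print(f"{delta:+.2f} stops")
```

The extended ISO buys back about 1.8 of the 2 stops you spend on the faster shutter, so the two exposures land within a fifth of a stop of each other.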
This compares SO much better against the Omnivision now.
It’s slight, but I still like the image out of the Omnivision better. We’re back to margin of error territory though, and the preferences are likely to be more subjective than scientific.
The MAIN difference, however: Xiaomi lets me get this image directly out of their native camera app. Google won’t let me get anywhere near this close out of the Pixel’s stock camera app.
Pixels are not “the best” phone cameras…
Pixels aren’t the best phone cameras in any objective sense. I still maintain that a Pixel is the best “easy point and shoot” camera on the market. Given the higher prices on Pixels recently, however, I question the value of spending $1100+ on the “easiest point and shoot”, when a less expensive Pixel will deliver close results in the majority of situations consumers are likely to be shooting.
We praise Pixels for the image processing and HDR, but Google pushes aggressively to automate as much of the process as possible. Even when we have a setting that appears to give users some control, Google’s software will still get in our way. In low light situations it seems Google is overriding the user input.
[I’d also imagine we could arrive at similar improvements using another camera app on the Galaxy Note 24. I didn’t think to try that while I had the Samsung in for review.]
These results also water down my enthusiasm for the Omnivision sensor a little. We should still be impressed that the “cheap” sensor brand has SLIGHTLY outpaced the best from Sony and Samsung at this size, but the HARDWARE differences are smaller than I previously assumed. The major differences I witnessed seem driven more by manufacturer software.
Xiaomi and Omnivision still deserve credit for this collaboration and the results of this hardware and software combined, but my hypothesis on hardware performance from that earlier round of testing proved less accurate.
The hardware on its own is not causing a dramatic difference in output.
This points me back to sensor size as the most important factor in improving mobile photo IQ.
I’ve complained about Pixel RAW files in the past. Google has always saved some kind of compressed RAW image. File sizes on Pixel RAW have consistently been smaller than on other brands, even when we have directly comparable sensors.
It would seem that Google has shifted to a “Processed RAW” as the default.
Most premium brands have some kind of “Stacked DNG” mode. You take a bunch of RAW photos, do some noise reduction, boost the dynamic range, and repackage that data as a new DNG. It’s NOT a RAW file, as it’s been HEAVILY processed. It’s a bigger bucket of image data to edit with, and it can shortcut SOME of the editing like noise reduction.
OTHER brands will call this something different though: ProRAW, RAW Plus, Expert RAW, SuperRAW. None of these brands are calling this edited output “RAW”.
I considered this might be an issue with how the Pixel processes JPG files.
The phone takes a series of RAW images quickly, and then “sums” them into one brighter image with lower noise. Maybe, the Pixel is just pulling ONE RAW file from that burst to save as a DNG. If the phone is stacking multiple DNG files to make one JPG, then each DNG would NOT need to be captured at the max ISO.
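As a rough illustration of why that would work, here’s a toy simulation (pure Python, with made-up numbers — NOT Google’s actual pipeline): averaging N frames of the same scene cuts random noise by roughly the square root of N, which is why a stacked result can look far cleaner than any single DNG in the burst.

```python
import random
import statistics

random.seed(0)
N_FRAMES, N_PIXELS = 16, 5000
SIGNAL, READ_NOISE = 100.0, 20.0  # arbitrary units

# Simulate a burst of N noisy RAW frames of the same flat scene
frames = [[SIGNAL + random.gauss(0, READ_NOISE) for _ in range(N_PIXELS)]
          for _ in range(N_FRAMES)]

# "Sum"/average the burst per pixel, as a stacking pipeline would
stacked = [sum(frame[i] for frame in frames) / N_FRAMES
           for i in range(N_PIXELS)]

noise_single = statistics.stdev(frames[0])
noise_stacked = statistics.stdev(stacked)
print(f"single frame noise: {noise_single:.1f}")
print(f"stacked noise:      {noise_stacked:.1f}")  # ~4x lower (sqrt of 16)
```

With 16 frames, the stacked noise comes out around a quarter of the single-frame noise, which is also why no individual DNG in the burst would need to be captured at the max ISO.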
This doesn’t fully explain the results I’m seeing though.
I can’t set the Xiaomi 14 to the exact same ISO as the Pixel “RAW” file, so I opted to set it slightly HIGHER at ISO1600. When comparing, keep in mind that each sensor processes light in a slightly different way based on its construction. ISO1400 is not a universal measure of light capture, but a measure of each sensor’s sensitivity.
The Xiaomi is set to a slightly higher ISO, and has a TINY bit faster aperture than the Pixel, yet delivers a darker overall image with significantly more noise.
Until I see some other evidence of how Google might be processing these images, I have to conclude Google is “pre-editing” our RAW files.
I’m disappointed by this turn…
It might be nifty to take a photo, then go to edit a RAW file, and see very little noise.
“Wow! Pixel Cameras capture AMAZING dynamic range and are SO low noise!”
This is the reality for folks using 1″ Type camera sensors. That hardware represents a significant improvement in low light performance and dynamic range. Unfortunately, no company is selling a phone with a 1″ Type camera in North America.
When we put Google’s style of processing to the test in more challenging conditions, the capture is slower and the output is lower quality than if we just shot a regular RAW image. The “pre-processing” doesn’t help me in low light.
This gets in my way.