

Google's Camera app still ignores the Pixel Visual Core, and that's OK

After much confusion about the dedicated image processing chip in the Google Pixel 2 and Pixel 2 XL smartphones, the purpose of the Pixel Visual Core is finally becoming clear. The February patch included an update that turns on the Visual Core to improve photos taken with third-party apps through the old Camera API. However, Google's own camera app still doesn't make use of the special chip.

The Pixel Visual Core was discovered by chance during a teardown by the smartphone repair site iFixit. It soon became clear that it was a dedicated, but still inactive, chip for photo processing. A November Fonearena interview with Brian Rakowski, VP of Product Management at Google, and Tim Knight, head of the Pixel camera team, could have cleared things up at that point and put a stop to the mounting confusion. Nevertheless, the confusion persisted...

Google's VP of Product Management Brian Rakowski said in the aforementioned interview: "The Visual Core which we will be turning on in the coming apps will primarily be for 3rd party apps." But wouldn't photos taken with the Google Camera app look much better with the Visual Core? Or at least be processed more efficiently with it? On this point, he only said: "Turns out we do pretty sophisticated processing, optimising and tuning in the camera app itself to get the maximum performance possible. [...] So we don’t take advantage of the Pixel Visual Core, we don’t need to take advantage of it."

On the right: photos taken with the Pixel Visual Core activated. The Google Camera results (top row) are unimproved by it. Now we know why. / © AndroidPIT

A glance at the data sheet also shows why Google doesn't need to use the Pixel Visual Core. The Pixel 2 and Pixel 2 XL have a Snapdragon chipset, which in turn includes a Hexagon digital signal processor, optimized for tasks like image processing and expected to deliver comparable speed and efficiency. Relying on it has two advantages: as long as Google optimizes for the Hexagon DSP, the camera app achieves similar results on both first and second generation Pixel devices (the first generation has no Visual Core), and it does so without any changes to the code.

In contrast, Google can program the Visual Core to perform a single task. It doesn't have to be a jack of all trades like the Hexagon unit; instead, it deals only with pictures that an app captures using the Camera.takePicture() method. This method is part of the old camera interface, which still seems to be used by WhatsApp, Instagram and others, but not by Google's own camera app.
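To make that concrete, here is a minimal sketch of the legacy call path, using the deprecated android.hardware.Camera API. The platform class and method names are real; the wrapper class, the dummy preview texture and the omitted permission and error handling are illustrative scaffolding, not code from any of the apps mentioned above.

```java
import android.graphics.SurfaceTexture;
import android.hardware.Camera;
import java.io.IOException;

public class LegacyCaptureExample {

    /** Opens the default camera and takes one picture via the legacy API. */
    public static void captureOnce() throws IOException {
        Camera camera = Camera.open(); // deprecated android.hardware.Camera

        // A preview target is required before capture; a real app would use
        // its viewfinder's SurfaceHolder or TextureView instead of a dummy.
        camera.setPreviewTexture(new SurfaceTexture(0));
        camera.startPreview();

        // On a Pixel 2 with the February update, this is the call the Visual
        // Core hooks into: the JPEG delivered to the callback below already
        // has HDR+ processing applied, with no changes to the app's code.
        camera.takePicture(
                null, // shutter callback, not needed here
                null, // raw data callback, not needed here
                (byte[] jpeg, Camera cam) -> {
                    // 'jpeg' holds the finished image bytes. A real app
                    // would write them to storage, then release the camera.
                    cam.release();
                });
    }
}
```

An app written this way gets the improved processing for free: the capture code above runs unchanged on a first generation Pixel, it just gets the plain JPEG back instead.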

A chip was needed for this?

The fact that Google (together with Intel) developed and integrated a whole chip for this may seem like overkill. Indeed, you wouldn't expect Google to hand so few tasks to a co-processor. But this may simply be a bit of flexing by Google's engineers and a small show of power over the other chip makers, especially Qualcomm.

Google hoards the know-how about the internal workings of its chips and keeps it a closely guarded secret. That's partly because smartphone innovation is largely determined by chip designers. The lifetime of a smartphone is also capped by them, since software updates are only possible in conjunction with matching kernel drivers for the chipset. Google created Project Treble with this in mind.

Whether Google will come to call the Visual Core a success is something we can't yet judge. Until then, the Instagram photos of your Pixel 2-owning friends will look even more #nofilter than before.


