Google’s current mission is to weave generative AI into as many products as it can, getting everyone accustomed to, and maybe even dependent on, working with confabulatory robots. That means it needs to feed the bots a lot of your data, and that’s getting easier with the company’s new Private AI Compute. Google claims its new secure cloud environment will power better AI experiences without sacrificing your privacy.
The pitch sounds a lot like Apple’s Private Cloud Compute. Google’s Private AI Compute runs on “one seamless Google stack” powered by the company’s custom Tensor Processing Units (TPUs). These chips have integrated secure elements, and the new system allows devices to connect directly to the protected space via an encrypted link.
The system relies on an AMD-based Trusted Execution Environment (TEE) that encrypts and isolates memory from the host. Theoretically, that means no one else—not even Google itself—can access your data. Google says independent analysis by NCC Group shows that Private AI Compute meets its strict privacy guidelines.
According to Google, the Private AI Compute service is just as secure as using local processing on your device. However, Google’s cloud has a lot more processing power than your laptop or phone, enabling the use of Google’s largest and most capable Gemini models.
Edge vs. Cloud
As Google has added more AI features to devices like Pixel phones, it has talked up the power of its on-device neural processing units (NPUs). Pixels and a few other phones run Gemini Nano models, allowing the phone to process AI workloads securely on “the edge” without sending any of your data to the Internet. With the release of the Pixel 10, Google upgraded Gemini Nano to handle even more data with the help of researchers from DeepMind.
NPUs can’t do it all, though. While Gemini Nano is getting more capable, it can’t compete with models that run on massive, high-wattage servers. That might be why some AI features, like the temporarily unavailable Daily Brief, don’t do much on the Pixels. Magic Cue, which surfaces personal data based on screen context, is probably in a similar place. Google now says that Magic Cue will get “even more helpful” thanks to the Private AI Compute system.
Google has also released a Pixel feature drop today, but there aren’t many new features of note (unless you’ve been hankering for Wicked themes). As part of the update, Magic Cue will begin using the Private AI Compute system to generate suggestions. The more powerful model might be able to tease out more actionable details from your data. Google also notes the Recorder app will be able to summarize in more languages thanks to the secure cloud.
What Google is really saying here is that more of your data will be offloaded to the cloud so that Magic Cue can generate useful suggestions, which would be a change. Since launch, we've only seen Magic Cue appear a handful of times, and it hasn't offered anything interesting when it has.
There are still reasons to use local AI, even if the cloud system has “the same security and privacy assurances,” as Google claims. An NPU offers superior latency because your data doesn’t have to go anywhere, and it’s more reliable, as AI features will still work without an Internet connection. Google believes this hybrid approach is the way forward for generative AI, which requires significant processing even for seemingly simple tasks. We can expect to see more AI features reaching out to Google’s secure cloud soon.