Google is busy weaving generative AI into its product lineup, aiming to familiarize users with these technologies and, perhaps, make them reliant on them. Doing so means processing vast amounts of user data, a task now handled by the company's latest introduction: Private AI Compute.
According to Google, its new secure cloud environment promises enhanced AI experiences without compromising user privacy. The initiative resembles Apple's Private Cloud Compute, but Google's version runs on a vertically integrated stack built around its custom Tensor Processing Units (TPUs). These units include integrated secure elements, allowing devices to connect directly to the protected space over an encrypted link.
The TPUs operate within an AMD-based Trusted Execution Environment (TEE), designed to encrypt and segregate memory from the host system. This theoretically ensures that data remains inaccessible to unauthorized parties, including Google itself. To substantiate its claims, Google cites an independent analysis by NCC Group affirming that Private AI Compute aligns with the company's stringent privacy standards.
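The flow described above, where a device verifies the remote environment before any data leaves the handset, can be sketched in miniature. Everything here is a hypothetical illustration: the function names, the measurement check, and the placeholder cipher are assumptions for demonstration, not Google's actual protocol or API.

```python
import hashlib
import hmac
import secrets

# Hypothetical "known-good" code measurement the device would expect the
# enclave to report during remote attestation (illustrative value only).
KNOWN_GOOD_MEASUREMENT = hashlib.sha256(b"trusted-enclave-image").hexdigest()

def verify_attestation(reported_measurement: str) -> bool:
    """Accept the remote environment only if its reported code
    measurement matches the expected one (constant-time compare)."""
    return hmac.compare_digest(reported_measurement, KNOWN_GOOD_MEASUREMENT)

def send_to_private_compute(data: bytes, reported_measurement: str) -> bytes:
    """Refuse to transmit unless attestation succeeds; then encrypt.

    The XOR keystream below is a toy stand-in for real authenticated
    encryption over a negotiated session key -- not a production cipher.
    """
    if not verify_attestation(reported_measurement):
        raise ConnectionRefusedError("attestation failed: refusing to send data")
    session_key = secrets.token_bytes(32)  # stand-in for key agreement
    keystream = hashlib.shake_256(session_key).digest(len(data))
    return bytes(a ^ b for a, b in zip(data, keystream))
```

The key design point mirrored here is ordering: the trust check gates the data path, so an environment that cannot prove it is running the expected code never receives user data at all.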
Google asserts that Private AI Compute offers the same level of privacy as local processing on a personal device. Unlike a typical laptop or smartphone, however, the cloud infrastructure has enough computational power to run Google's larger Gemini models.
Edge vs. Cloud
As more AI features have arrived on devices such as Pixel phones, Google has talked up the capabilities of on-device neural processing units (NPUs). These devices, including recent Pixel models, run Gemini Nano models, which handle AI workloads securely on the device, or "the edge," without sending data over the Internet. With the rollout of the Pixel 10, Google enhanced Gemini Nano to handle larger workloads, thanks to work from Google DeepMind researchers.