Google's New Private AI Compute Aims to Enhance AI Integration While Ensuring Privacy

Google is currently focused on incorporating generative AI into a wide array of its products, with the aim of getting users not only accustomed to working with AI but potentially dependent on it. That ambition requires a substantial intake of user data, a process now facilitated by the company's new Private AI Compute, a secure cloud environment that Google says will enhance AI experiences while safeguarding user privacy.

The company's pitch is reminiscent of Apple's Private Cloud Compute. Google's solution operates on a single unified stack powered by its custom Tensor Processing Units (TPUs), which include integrated secure elements. This system enables devices to connect directly to a protected space via encrypted links.

Google's TPUs are complemented by an AMD-based Trusted Execution Environment (TEE), which encrypts memory and isolates it from the host, in principle preventing anyone, including Google, from accessing the data. Independent analysis by NCC Group has reportedly confirmed that Private AI Compute adheres to Google's stringent privacy standards.
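To make that pattern concrete, here is a minimal Kotlin sketch of the kind of client-side flow such a design implies: the device verifies an attestation of the remote workload before releasing any data over the encrypted link. The names, types, and checks below are illustrative assumptions for this article, not Google's actual API.

```kotlin
// Hypothetical sketch only: verify the attested workload, then fail closed
// if the attestation does not chain back to a hardware root of trust.

data class Attestation(val workloadDigest: String, val signedByHardwareRoot: Boolean)

class RemoteAiSession(private val expectedDigest: String) {
    // Accept the session only if the attested workload matches what we expect.
    fun verify(attestation: Attestation): Boolean =
        attestation.signedByHardwareRoot && attestation.workloadDigest == expectedDigest

    // Data is sent only after verification succeeds; otherwise refuse outright.
    fun send(prompt: String, attestation: Attestation): String {
        require(verify(attestation)) { "Attestation failed: refusing to send data" }
        // A real client would send this over an end-to-end encrypted channel
        // terminated inside the TEE; the sketch just echoes to stay runnable.
        return "processed inside attested workload: ${prompt.length} chars"
    }
}

fun main() {
    val session = RemoteAiSession(expectedDigest = "sha256:abc123")
    val att = Attestation(workloadDigest = "sha256:abc123", signedByHardwareRoot = true)
    println(session.send("summarize my notes", att))
}
```

The point of the fail-closed check is that, under this model, no user data leaves the device unless the remote environment can prove it is the expected, hardware-backed workload.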

Google asserts that using the Private AI Compute service is as secure as processing data locally on a device. Unlike local processing, though, the cloud's far greater computing power, compared with what a personal laptop or phone can offer, makes it possible to run Google's most advanced Gemini models.

Edge vs. Cloud

As Google has built more AI features into devices like its Pixel phones, the company has highlighted the capabilities of on-device neural processing units (NPUs). Pixel phones and select other devices run Gemini Nano models, allowing them to process AI tasks securely at "the edge" without sending data online. With the launch of the Pixel 10, these Gemini Nano models were upgraded, with contributions from DeepMind researchers, to handle larger amounts of data.
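The resulting split between edge and cloud can be pictured with a short hedged sketch: lightweight requests stay on the local NPU, while anything beyond the on-device budget is routed to the attested cloud service. The task structure, token limit, and routing names are assumptions for illustration, not Google's actual logic.

```kotlin
// Hypothetical routing sketch: small tasks run on the device's NPU
// (Gemini Nano class models); larger ones go to the private cloud.

enum class ExecutionTarget { ON_DEVICE_NPU, PRIVATE_CLOUD }

data class AiTask(val inputTokens: Int, val needsLargeModel: Boolean)

fun chooseTarget(task: AiTask, onDeviceTokenLimit: Int = 4_096): ExecutionTarget =
    if (!task.needsLargeModel && task.inputTokens <= onDeviceTokenLimit)
        ExecutionTarget.ON_DEVICE_NPU   // data never leaves the device
    else
        ExecutionTarget.PRIVATE_CLOUD   // larger Gemini models in the attested cloud

fun main() {
    println(chooseTarget(AiTask(inputTokens = 512, needsLargeModel = false)))   // ON_DEVICE_NPU
    println(chooseTarget(AiTask(inputTokens = 20_000, needsLargeModel = true))) // PRIVATE_CLOUD
}
```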
