Google’s AI Controversy: Misconceptions and User Concerns

Recently, social media and various publications, including Malwarebytes, have circulated claims that Google changed its policy on the use of Gmail user data. The reports alleged that the company uses email content and attachments to train its AI models, and that the only way to opt out is to disable so-called “smart features” such as spell check. Google spokesperson Jenny Thomson told The Verge that these reports are misleading. “We haven’t changed anyone’s settings, Gmail’s smart features have existed for many years, and we do not use Gmail’s email content to train our Gemini AI model,” she emphasized.

Google’s AI Controversy
Illustration: Sora

Gmail’s “smart features” include not only spell check but also order tracking, automatic addition of flight information to the calendar, and other options that simplify email management. Enabling these features in Google Workspace implies that “you agree to allow Google Workspace to use your content and actions for personalizing your Workspace experience.” According to Google, this does not mean passing on the email content for AI training. In January, Google updated the personalization settings for “smart features,” giving users the ability to disable them for Google Workspace and other Google services (such as Maps and Wallet) independently of one another.

Nonetheless, a staff member at The Verge noted that after the update, some of the “smart features” he had previously disabled were re-enabled. Incidents like this keep attention on how AI technologies such as Gemini operate within Google’s ecosystem; users’ concerns center on the privacy implications and on the transparency of how their data is used.
