Privacy Settings, Energy Efficiency, and ROI Reality Checks

Q&AI with Jen Taylor

Jen Taylor
Sep 11, 2025
3 Min Read

In this edition: privacy in Claude, AI energy consumption, questionable study data, and OpenAI's research on hallucinations.

This is Q&AI, our blog series aimed at keeping you in the know on updates in the rapidly evolving world of AI. Sometimes, these will be quick updates on new developments in the field. Sometimes, they’ll be tips on tactics, features, or functionality. 

If you haven’t met me yet, hi: I’m Jen Taylor, CI’s Director of AI Strategy & Implementation, and your (very human) AI BFF. AI is moving at the speed of light, so I’m here to let you know what matters most now. 

Q: Jen, the news is always so overwhelming. Any updates for us?? 

A: I’ve got four recent updates to flag for you.

#1: Claude recently changed its terms and conditions.

Previously, Anthropic did not train on your chats regardless of your settings. That changed two weeks ago.

If you go to your name in Claude, then click Settings & Privacy, you’ll now see a toggle—similar to ChatGPT—that lets you choose whether to help improve Claude. If you haven’t yet accepted the new terms and turned this off, I highly recommend doing so.


#2: Google released a new study on AI energy use, and the findings are directionally very positive.

Google found that energy consumption is substantially lower than many public estimates. For example, the energy used by a median Gemini text prompt is equivalent to watching TV for less than nine seconds.
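To sanity-check that comparison: Google's report puts the median Gemini text prompt at roughly 0.24 Wh, and the TV wattage below is my own assumption (about 100 W for a typical LED set), so treat this as a back-of-the-envelope sketch rather than the study's math.

```python
# Rough sanity check of the "TV for under nine seconds" comparison.
# Assumptions: ~0.24 Wh per median Gemini text prompt (Google's reported
# figure) and a ~100 W LED TV (my estimate, not from the study).

PROMPT_ENERGY_WH = 0.24    # Wh per median text prompt
TV_POWER_W = 100           # assumed TV power draw, in watts

tv_wh_per_second = TV_POWER_W / 3600             # Wh a TV uses each second
seconds_of_tv = PROMPT_ENERGY_WH / tv_wh_per_second

print(f"One prompt is about {seconds_of_tv:.1f} seconds of TV time")  # ~8.6
```

The numbers line up: about 8.6 seconds, comfortably under nine.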

They also report that AI systems are becoming more efficient thanks to research innovations and improvements in software and hardware. Specifically, over a recent 12-month period, the energy use and total carbon footprint of the median Gemini text prompt fell by factors of 33 and 44, respectively, all while delivering higher-quality responses.

The study also takes a more holistic approach to measuring energy use, looking beyond just machine activity to include factors like water usage.


#3: There’s an MIT study that’s been circulating with the headline: “95% of organizations get zero return from AI tools.”

While that headline is attention-grabbing, the study’s methodology raises some concerns.

First, the sample size was very limited: just 52 interviews and 153 survey responses. That’s not enough to support a sweeping global conclusion (a quick margin-of-error sketch below shows why), and the authors admit the study isn’t representative of all enterprise segments or regions.

Second, their definition of success is extremely narrow: “deployment beyond pilot with measurable ROI at six months.” Impact may take longer to appear, and success can also be measured in other ways—such as time saved or improved workflows—that don’t immediately show up as ROI.

Third, the study acknowledges that ROI results are often complicated by other factors happening simultaneously.
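To put that sample size in perspective, here's a quick margin-of-error calculation. This is a standard worst-case formula for survey proportions, my sketch rather than anything from the MIT study itself:

```python
# 95% margin of error for a proportion estimated from n = 153 survey
# responses, using the normal approximation at the worst case (p = 0.5).
import math

n = 153     # survey responses in the study
p = 0.5     # worst-case proportion (gives the widest interval)
z = 1.96    # z-score for a 95% confidence level

margin = z * math.sqrt(p * (1 - p) / n)
print(f"95% margin of error: +/-{margin * 100:.1f} points")  # ~7.9 points
```

Even treating the 153 responses as a clean random sample, any headline percentage carries roughly eight points of uncertainty in either direction, and by the authors' own admission the sample isn't representative to begin with.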

If I were to reframe their findings more accurately, it would be: Within six months, most organizations could not directly attribute measurable ROI to AI due to overlapping operational and economic factors.


#4: OpenAI published an article digging into why language models hallucinate. 

One big reason: during training, models are rewarded for producing fluent, confident answers, even when they’re wrong. That incentive bakes hallucinations into the system. 
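Here's a toy illustration of that incentive (my own sketch, not from OpenAI's paper): if grading awards a point only for correct answers and nothing for saying "I don't know," a model that always guesses outscores one that honestly abstains, even when it's usually wrong.

```python
# Toy example: accuracy-only grading rewards confident guessing over
# honest abstention. The 20% figure is an arbitrary assumption.

P_CORRECT = 0.2  # assume the model actually knows the answer 20% of the time

# Scoring: 1 point for a correct answer; 0 for wrong answers AND for abstaining
expected_if_guessing = P_CORRECT * 1 + (1 - P_CORRECT) * 0  # always answer
expected_if_abstaining = 0.0                                # always "I don't know"

print(f"Always guess:    {expected_if_guessing:.2f}")   # 0.20
print(f"Always abstain:  {expected_if_abstaining:.2f}") # 0.00
# Guessing strictly dominates, so a model optimized for accuracy alone
# learns to answer confidently even when it doesn't know.
```

Under that scoring, guessing always wins, which is exactly the incentive problem OpenAI describes.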

The encouraging part is that this means hallucinations can be reduced if training is adjusted, though they can’t be eliminated entirely (since perfect accuracy isn’t achievable for AI or humans). I’m curious to see whether OpenAI applies this learning in its next release!

SEE YOU NEXT WEEK,
Jen

Have a question for a future edition? Submit it here!