The White House on “The AI Race”
Q&AI with Jen Taylor
Introducing Q&AI, our new blog series aimed at keeping you in the know about the rapidly evolving world of AI. Sometimes, these will be quick updates on new developments in the field. Sometimes, they’ll be tips on tactics, features, or functionality.
If you haven’t met me yet, hi: I’m Jen Taylor, CI’s Director of AI Strategy & Integration, and your (very human) AI BFF. AI is moving at the speed of light, so I’m here to let you know what matters right now. Have questions for me? Ask in real time during my Intro to AI webinar on August 20!
Q: What are the implications of “America’s AI Action Plan”?
A: Last week, the Trump administration released a 28-page plan outlining how the U.S. intends to strengthen its global leadership in artificial intelligence. The plan is broken into three pillars: Accelerate AI Innovation, Build American AI Infrastructure, and Lead in International AI Diplomacy and Security. The document states that “winning the AI race will usher in a new golden age of human flourishing, economic competitiveness, and national security for the American people.”
For a topic that was never even discussed in the last election, this is a major pivot. And it likely WILL accelerate AI adoption, or at least get AI tools to market faster. However, I really dislike the mandate to “remove red tape and onerous regulation,” which should raise major ethical and safety concerns for the general public. I’m also curious to see how the push to accelerate AI adoption in government plays out, especially the focus on the Department of Defense, which could have downstream effects on public trust and tech development cycles.
Another critical issue: We don’t yet have the infrastructure to actually support this growth. (Think: national compute capacity, grid resilience, and data center expansion.) That’s why the plan includes a big focus on building infrastructure, like developing a power grid that can “match the pace of AI innovation.”
Q: Who, if anyone, is calling for more AI regulation and research?
A: On July 15, leading researchers from OpenAI, DeepMind, and Anthropic published a paper calling for deeper investigation into monitoring the step-by-step reasoning (chains of thought) of AI reasoning models. Their concern is that this transparency could erode as models evolve, which would be a safety risk. This feels like a moment when organizations could come together and develop standardized evaluations for monitoring these models. I support regulation of AI, especially regulation that guarantees transparency moving forward.
On July 22, Mistral AI released a report on the environmental impact of their models. The report doesn’t disprove the negative impacts, but I find the comparisons interesting: generating one page of text consumes about as much water as growing a small pink radish. They acknowledge that the study is a first approximation, since precise calculations are difficult to make, and that really highlights how little we actually know about AI’s environmental impact. I hope it encourages transparency and research on the topic among other leading AI developers (which Mistral is pushing for).
Your friend,
Jen
Have a question for a future edition? Submit it here!