What’s with ChatGPT Agent?

Q&AI with Jen Taylor

AUTHOR: Jen Taylor
Aug 08, 2025
4 Min Read

In this edition: an overview of ChatGPT Agent (which is basically a mix of Deep Research and Operator).

This is Q&AI, our blog series aimed at keeping you in the know on updates in the rapidly evolving world of AI. Sometimes, these will be quick updates on new developments in the field. Sometimes, they’ll be tips on tactics, features, or functionality.

If you haven’t met me yet, hi: I’m Jen Taylor, CI’s Director of AI Strategy & Implementation, and your (very human) AI BFF. AI is moving at the speed of light, so I’m here to let you know what matters most now. Have tactical questions for me? Ask in real time during my Intro to AI webinar on August 20!

Q: Are there recent developments that inspire any uneasiness?

A: On July 17, OpenAI released ChatGPT Agent, which is basically a mix of Deep Research and Operator.

Deep Research can work through complex questions by searching the internet and reviewing documents you provide. Operator can actually do tasks for you across the web, like booking a flight (although that capability has so far only been released to Pro-level users). Agent essentially combines the two.

In the livestream announcing Agent, OpenAI showed a live demo built around planning a wedding. The tool found multiple outfit options, checked hotel prices and availability, and more. What stood out to me is that Agent is an interactive experience. Right now, if I put a prompt into Deep Research, that's 5 to 20 minutes where I just have to wait until it's done. (I can't ask it questions while it's working.) Agent is different. You can ask it questions while it's processing the first thing, and as it's doing research, it might come back to you with clarifying questions. There are also safety checks in place to ensure there's human review before anything is actually booked or sent (like a flight or an email).
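If you're curious what that human-review step looks like in practice, here's a minimal sketch in Python. This is purely illustrative (it is not OpenAI's code, and the names like `ProposedAction` and `run_with_human_review` are my own invention): it just shows the general "human in the loop" pattern where the agent pauses and asks for approval before doing anything consequential.

```python
# Purely illustrative sketch of a "human in the loop" confirmation gate.
# This is NOT OpenAI's implementation; all names here are hypothetical.

from dataclasses import dataclass


@dataclass
class ProposedAction:
    kind: str         # e.g. "book_flight" or "send_email"
    summary: str      # human-readable description of what would happen
    reversible: bool  # low-stakes, easily undone actions could skip review


def requires_confirmation(action: ProposedAction) -> bool:
    # Consequential, hard-to-undo actions always pause for a human.
    return not action.reversible


def run_with_human_review(action: ProposedAction) -> None:
    if requires_confirmation(action):
        answer = input(f"Agent wants to: {action.summary}. Approve? [y/N] ")
        if answer.strip().lower() != "y":
            print("Action cancelled; nothing was booked or sent.")
            return
    print(f"Executing: {action.summary}")


if __name__ == "__main__":
    run_with_human_review(
        ProposedAction(
            kind="book_flight",
            summary="book a round-trip flight to Chicago for $412",
            reversible=False,
        )
    )
```

The idea is simple: the agent can research and draft all it wants, but the "send" or "book" step only happens after a person says yes.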

From Sam Altman himself: “I would explain this to my own family as cutting edge and experimental, a chance to try the future, but not something I’d yet use for high-stakes uses or with a lot of personal information until we have a chance to study and improve it in the wild.”

So, I think it’s really interesting that the makers of this technology are also a little bit afraid of what it’s going to mean…but they have put it out there.

Separately: on July 14, xAI introduced AI Companions for SuperGrok subscribers, meaning paying Grok subscribers can now chat with fully animated avatars. The Grok companions (one male, one female) are quite sexual and will do whatever you tell them to. There has been some buzz about their behavior, especially given that users as young as 13 can access them. Meta has also talked about generative AI profiles to engage its users. It forces us to consider the question: will generative AI relationships be a draw or a deterrent on different platforms, and what does that ultimately mean for where and how we reach potential arts audiences?

Note: I think this is a bad development, because I value human connection (humans connecting with humans) over humans connecting with AI. AI is a powerful partner, great for answering questions and even offering perspective on work and personal challenges. But at the end of the day, I want people to learn how to interact with each other, especially given how eager AI is to please. People will not always please you, and humans need to learn to flex!

I hope the public rejects these companion applications, but at least with Grok it’s clear the companions are ‘fake.’

Your friend,
Jen

Have a question for a future edition? Submit it here!