Concerns and Age Controls
Q&AI with Jen Taylor
In this edition: Jen’s greatest concerns, and what’s being done to address kids’ use of AI.
This is Q&AI, our blog series aimed at keeping you in the know on updates in the rapidly evolving world of AI. Sometimes, these will be quick updates on new developments in the field. Sometimes, they’ll be tips on tactics, features, or functionality.
If you haven’t met me yet, hi: I’m Jen Taylor, CI’s Director of AI Strategy & Implementation, and your (very human) AI BFF. AI is moving at the speed of light, so I’m here to let you know what matters most now.
Q: What are your greatest concerns about AI, and how do you feel they are being addressed as the AI landscape changes?
A: My biggest concern is really that simulated relationships will become an accepted substitute for human ones. I strongly believe that what makes life exciting is relationships, whether that’s spouses, family, friends, or co-workers…to me, that is what makes life worth living. And the idea of a world where people have fake friends filling that role is, well, really terrifying. Unfortunately, I don’t think there is enough concern yet for it to be meaningfully addressed. People pay attention when problems become visible. I’m hoping that things like this episode of The Daily will help bring to light some of the potential negative impacts here.
Q: Are there *any* guardrails at this point in time?
A: The FTC is issuing orders to seven companies, including Google, Meta, OpenAI, and xAI, to understand how their AI chatbots could negatively impact children. More to come on this as the FTC digs in.
California passed a first-of-its-kind bill requiring operators of AI chatbots used by minors to make clear that users are interacting with AI and not a human. It also requires operators to have a protocol for preventing the chatbot from producing content related to suicidal ideation, suicide, or self-harm.
Finally: by the end of the month, ChatGPT will be launching parental controls. Parents will be able to:
- Link their account to their teen’s account
- Help guide how ChatGPT responds to their teen, based on teen-specific model behavior
- Manage which features to disable
- Receive notifications if the system detects their teen is in a moment of acute distress (if a parent can’t be reached, OpenAI may involve law enforcement as a last resort)
- Set blackout hours for the teen
OpenAI is also building capabilities so that when ChatGPT identifies that a user is under 18, that user will automatically be directed to a ChatGPT experience with age-appropriate policies.
I strongly believe this technology needs regulations, especially for children, so I’m glad to have seen some recent progress here.
MORE SOON,
Jen
Have a question for a future edition? Submit it here!