How Claude Improves

Q&AI with Jen Taylor

AUTHOR: Jen Taylor
Aug 14, 2025
3 Min Read

In this edition: Claude’s signal-training, and pushback to AI-generated images.

This is Q&AI, our blog series aimed at keeping you in the know on updates in the rapidly evolving world of AI. Sometimes, these will be quick updates on new developments in the field. Sometimes, they’ll be tips on tactics, features, or functionality.

If you haven’t met me yet, hi: I’m Jen Taylor, CI’s Director of AI Strategy & Implementation, and your (very human) AI BFF. AI is moving at the speed of light, so I’m here to let you know what matters most now. Have tactical questions for me? Ask in real time during my Intro to AI webinar on August 20!

Q: IF CLAUDE DOESN’T TRAIN ON USER CONVERSATIONS, HOW DOES ANTHROPIC IMPROVE ITS MODEL?

A: Anthropic uses a combination of methods to get useful signals. Here’s how that works:

  • Human Feedback Loops (Opt-In): Some users (especially enterprise or research partners) opt in to sharing conversations for improvement purposes. Human reviewers then rate these responses, marking which ones are helpful, accurate, or problematic. This teaches the AI to recognize patterns in what humans prefer, without needing to collect data from everyone’s conversations.
  • Synthetic Data: Anthropic also uses existing models to generate realistic practice conversations and scenarios for training purposes.
  • Reinforcement Learning from AI Feedback (RLAIF): One AI model grades another model’s responses, creating a feedback loop that doesn’t require human data to improve performance.
  • Red Teaming & Internal Testing: They continuously stress-test their models internally with adversarial prompts and edge cases to identify weaknesses.

These methods focus heavily on safety and alignment improvements, teaching the model to be more helpful, harmless, and honest, rather than just boosting general performance metrics.
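To make the RLAIF idea above a bit more concrete, here is a toy sketch of the core loop: an AI "grader" scores candidate responses, and the best and worst candidates become a preference pair used as training signal, with no human labels involved. Everything here (the `grade` rubric, the function names, the scoring rules) is a simplified, hypothetical illustration, not Anthropic's actual implementation.

```python
def grade(response: str) -> float:
    """Stand-in for an AI grader. In real RLAIF, this would be another
    language model scoring the response against a written rubric; here we
    just reward answers that are substantive and reasonably detailed."""
    score = 0.0
    if "I don't know" not in response:
        score += 1.0  # reward attempting an actual answer
    score += min(len(response.split()), 30) / 30  # mild length reward, capped
    return score

def build_preference_pair(prompt: str, candidates: list[str]) -> dict:
    """Rank candidate responses with the grader and keep the best/worst
    pair -- the kind of (chosen, rejected) record preference training uses."""
    ranked = sorted(candidates, key=grade, reverse=True)
    return {"prompt": prompt, "chosen": ranked[0], "rejected": ranked[-1]}

pair = build_preference_pair(
    "What is RLAIF?",
    [
        "I don't know.",
        "RLAIF has one AI model grade another model's responses for training.",
    ],
)
print(pair["chosen"])
```

The point of the sketch is the shape of the loop, not the rubric: because the grader is itself a model, this pipeline can generate preference data at scale without collecting user conversations.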

Q: HOW DO YOU FEEL ABOUT RECENT PUBLIC PUSHBACK TO AI-GENERATED IMAGES?

A: Have you seen the latest discourse around the AI-generated image in a Vogue ad by Guess? The backlash has been swift and well-deserved.

The criticism goes beyond aesthetics. Using AI instead of real models displaces creative professionals and sends a troubling message to consumers, especially young girls already facing impossible beauty standards. Now those standards aren’t just unrealistic; they’re completely artificial. The AI-generated model also reinforces narrow ideals, ignoring the racial and size diversity the industry claims to be evolving toward.

Yes, the ad included a small disclosure that the image was AI-generated, but it was barely visible. Transparency matters. If something is meant to look lifelike, consumers deserve to know when it’s not real, and that should be unmissable, not hidden in fine print.

I’m glad to see pushback here. It’s a reminder that people still value human creativity, real representation, and ethical standards in media. AI has a role to play, but it can’t replace what’s human.

MORE SOON,
Jen

Have a question for a future edition? Submit it here!