Anthropic Addresses Recent Claude Code Quality Issues
Original: An update on recent Claude Code quality reports
Why This Matters
Illustrates the challenges of maintaining AI model quality during product updates
Anthropic traced reports of Claude Code quality degradation to three separate changes: a reduction in default reasoning effort (March 4), a session-memory bug (March 26), and a verbosity-focused system prompt change (April 16). All issues were resolved by April 20.
Anthropic investigated user reports of Claude Code performance degradation over the past month and identified three distinct issues. On March 4, it reduced Claude Code's default reasoning effort from high to medium to address latency, which made responses less intelligent. A bug introduced on March 26 caused Claude to clear session memory on every turn, rather than once after an idle period, making the assistant seem forgetful. On April 16, a system prompt change intended to reduce verbosity inadvertently hurt coding quality. Because the changes affected different user segments at different times, they created the appearance of broad, inconsistent degradation. All issues were resolved by April 20, and Anthropic is resetting usage limits for all subscribers as of April 23. The company emphasized that it never intentionally degrades models and is implementing safeguards to prevent similar issues.
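The session-memory bug is a classic conditional-cleanup mistake: a reset that should fire only after an idle timeout runs unconditionally instead. The sketch below illustrates the general failure mode; the class, method names, and 30-minute timeout are hypothetical and are not Anthropic's actual implementation.

```python
import time

IDLE_TIMEOUT = 30 * 60  # hypothetical 30-minute idle window, in seconds


class Session:
    """Hypothetical chat session holding per-conversation memory."""

    def __init__(self):
        self.memory = []
        self.last_active = time.monotonic()

    def handle_turn_buggy(self, message):
        # Bug pattern: memory is cleared unconditionally on every turn,
        # so the session appears to forget everything each message.
        self.memory = []
        self.memory.append(message)
        self.last_active = time.monotonic()

    def handle_turn_fixed(self, message):
        # Intended behavior: clear memory only after the session has
        # been idle longer than the timeout.
        if time.monotonic() - self.last_active > IDLE_TIMEOUT:
            self.memory = []
        self.memory.append(message)
        self.last_active = time.monotonic()


buggy = Session()
fixed = Session()
for msg in ["set up the repo", "add tests", "now run them"]:
    buggy.handle_turn_buggy(msg)
    fixed.handle_turn_fixed(msg)

print(len(buggy.memory))  # 1 -- only the latest turn survives
print(len(fixed.memory))  # 3 -- full conversation retained
```

A symptom like this is easy to misread as model regression, which is why user reports described the model as "forgetful" rather than pointing at a state-management bug.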