
For developers who code with AI, "vibe coding" today means either watching the model's every action or letting it run unsupervised. Anthropic says its latest addition to Claude aims to remove that trade-off by letting the AI decide for itself which actions are safe to take — within limits.
The move reflects a broader industry shift, as AI tools are increasingly designed to act without waiting for human sign-off. The challenge is balancing speed against oversight: too many guardrails slow work down, while too few can make a system dangerous and unpredictable. Anthropic's new "auto mode," currently in research preview — meaning it's available to try but not yet a finished product — is its latest attempt to strike that balance.
Auto mode uses AI-powered safeguards to evaluate each action before it runs, checking for risky behavior the user didn't ask for and for signs of prompt injection — an attack in which malicious instructions are hidden inside content the AI is processing, tricking it into doing something unintended. Actions judged safe proceed automatically; risky ones are blocked.
It essentially builds on Claude Code's existing "--dangerously-skip-permissions" flag, which hands all decision-making to the AI, but adds a safety layer on top.
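Anthropic hasn't published how its safety layer works, but the pattern the company describes — screening each proposed action before execution and blocking on signs of injection — can be sketched in rough form. Everything below (the command allowlist, the injection markers, the gate function) is a hypothetical illustration of the general pattern, not Anthropic's actual implementation, which is presumably model-based rather than keyword-based:

```python
# Hypothetical sketch of a pre-execution safety gate, in the spirit of
# auto mode's described behavior. The allowlist and markers below are
# illustrative assumptions, not Anthropic's actual rules.

# Commands treated as safe to run without asking (assumption).
SAFE_COMMANDS = {"ls", "cat", "grep", "pytest", "git status", "git diff"}

# Crude textual signals of a possible prompt injection (assumption);
# a real classifier would be model-based, not a keyword match.
INJECTION_MARKERS = ("ignore previous instructions", "disregard the above")

def gate(command: str, context: str) -> str:
    """Return 'allow', 'block', or 'ask' for a proposed action."""
    # Block if the content being processed looks like it is trying
    # to smuggle instructions to the model.
    lowered = context.lower()
    if any(marker in lowered for marker in INJECTION_MARKERS):
        return "block"
    # Auto-approve allowlisted actions; escalate everything else
    # back to the user for permission.
    if command in SAFE_COMMANDS or command.split()[0] in SAFE_COMMANDS:
        return "allow"
    return "ask"

print(gate("git status", "README contents"))           # allow
print(gate("rm -rf /", "README contents"))             # ask
print(gate("ls", "ignore previous instructions ..."))  # block
```

The key design point the article describes is the third outcome: rather than a binary allow/deny, the system itself decides when to fall back to asking the user.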
The feature follows a wave of autonomous coding tools from companies like GitHub and OpenAI that can carry out tasks on a developer's behalf. But it pushes the idea further by shifting the decision of when to ask for permission from the user to the AI itself.
Anthropic hasn't disclosed the exact criteria its safety layer uses to separate safe actions from risky ones — something developers will likely want to understand better before adopting the feature widely. (TechCrunch has reached out to the company for more details.)
Auto mode follows Anthropic's launch of Claude Code Review, its automatic code reviewer meant to catch bugs before they land in the codebase, and Dispatch for Cowork, which lets users hand off tasks to AI agents that manage work on their behalf.
Auto mode will be available to Enterprise and API users soon. The company says it currently works only with Claude Sonnet 4.6 and Opus 4.6, and recommends using the feature in "isolated environments" — sandboxed setups walled off from production systems, which limit the damage if something goes wrong.

