Anthropic Debuts Claude Opus 4.7 as Agentic Workflows Take Center Stage
Key Takeaways:

The model also edged out its primary competition in several key categories. Anthropic reported that Opus 4.7 outperformed OpenAI’s GPT-5.4 and Google’s Gemini 3.1 Pro in tool-use and computer-interaction tests.

One of the most visible changes is a major upgrade to the model’s vision capabilities. Claude Opus 4.7 can now process images up to 2,576 pixels on the long edge, triple the previous resolution limit. This boost allows the AI to better interpret complex charts, user interfaces, and technical diagrams. However, the company noted that higher-resolution images consume more tokens, potentially increasing costs for high-volume users.

Anthropic also introduced a new feature called /ultrareview within its Claude Code environment. The tool lets Pro- and Max-tier users run multi-agent sessions to identify bugs and design flaws in software.

For financial professionals, the model shows a higher degree of rigor in economic modeling. It scored 0.813 on the General Finance module, a meaningful step up from the previous version’s 0.767.

The pricing structure remains unchanged at $5 per million input tokens and $25 per million output tokens. To help manage expenses during long autonomous runs, Anthropic added a task budget feature in public beta.

Early feedback from the developer community suggests the model is more literal in following instructions, a change that may require users to re-tune prompts optimized for older versions of the Claude family. “Claude 4.7 is out, and using it feels like stepping into an F1 car. Far more power, and it does exactly what you tell it at full speed. Your job is to pick the direction and make the turns,” one user wrote on X.

Some testers have observed that the updated tokenizer can use up to 1.35 times more tokens for the same input.
While this can deplete usage limits faster, the company argues that the performance per task justifies the consumption. Safety remains a core focus: the model includes new automated safeguards to block high-risk cybersecurity uses, and Anthropic’s system card highlights improved honesty and a stronger resistance to generating harmful content. The model is now available through the Claude API, Amazon Bedrock, Google Vertex AI, and Microsoft Foundry, and it retains the 1 million token context window introduced earlier this year.
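To make the tokenizer overhead concrete, here is a minimal back-of-envelope sketch combining the published rates ($5/M input, $25/M output) with the 1.35x factor testers have reported. The request sizes are illustrative assumptions, and applying the factor to output tokens as well as input is an assumption on our part, not an Anthropic figure.

```python
# Back-of-envelope cost estimate for Claude Opus 4.7 requests.
# Rates come from the article; everything else is an illustrative assumption.

INPUT_RATE = 5.00 / 1_000_000    # dollars per input token ($5 per million)
OUTPUT_RATE = 25.00 / 1_000_000  # dollars per output token ($25 per million)
TOKENIZER_FACTOR = 1.35          # upper bound on extra tokens, per tester reports

def request_cost(input_tokens: int, output_tokens: int,
                 factor: float = 1.0) -> float:
    """Dollar cost of one request, with token counts scaled by `factor`.

    Scaling both input and output counts is a simplifying assumption;
    testers only measured the overhead "for the same input".
    """
    return (input_tokens * factor * INPUT_RATE
            + output_tokens * factor * OUTPUT_RATE)

# Hypothetical long agentic run: 100k input tokens, 8k output tokens.
baseline = request_cost(100_000, 8_000)                     # old tokenizer
worst_case = request_cost(100_000, 8_000, TOKENIZER_FACTOR) # 1.35x overhead
print(f"baseline:   ${baseline:.3f}")    # $0.700
print(f"worst case: ${worst_case:.3f}")  # $0.945
```

At these assumed sizes the overhead adds roughly 24 cents per request, which is why the new task budget feature matters for long autonomous runs.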
