
Microsoft Prioritizes Anthropic Models in Visual Studio Code

Microsoft added an automatic AI model selector to Visual Studio Code that favors Anthropic’s Claude Sonnet 4 over GPT‑5 for paid Copilot users, while free users may see a mix including GPT‑5 and GPT‑5 mini. The move, plus internal guidance and Microsoft’s own model investments, signals a strategic shift toward multi‑vendor AI sourcing and in‑house model development.

Published September 16, 2025 at 07:13 AM EDT in Artificial Intelligence (AI)

Microsoft has quietly updated Visual Studio Code with an automatic AI model selector that chooses the best model for “optimal performance.” For GitHub Copilot users, the selector can pick among Claude Sonnet 4, GPT‑5, GPT‑5 mini and other options, but paid users will primarily be routed to Claude Sonnet 4.

The change reflects internal guidance: Microsoft has advised developers to prefer Claude Sonnet 4 based on benchmarks. That guidance reportedly predates GPT‑5’s public release and hasn’t materially shifted since, suggesting Microsoft sees persistent advantages in Anthropic’s models for coding tasks.

At the same time, Microsoft is investing in its own AI infrastructure. Executives noted that MAI‑1‑preview was trained on a relatively small cluster of roughly 15,000 Nvidia H100 GPUs, and the company plans further investment in training hardware and model research.

Microsoft is also testing Anthropic models in Microsoft 365 Copilot features after reportedly finding them superior to OpenAI models in certain Excel and PowerPoint tasks. Meanwhile, Microsoft and OpenAI recently updated their deal to allow OpenAI to use other cloud providers — a sign of shifting commercial dynamics as OpenAI eyes an IPO.

Why this matters

Model choice is now an operational decision for engineering teams, not just a vendor preference. The selector in VS Code signals several shifts: performance‑led model selection, active multi‑vendor sourcing, and an increased appetite at Microsoft for running and refining in‑house models alongside third‑party offerings.

For developers and organizations, this affects latency, accuracy of code generation, hallucination rates, cost per call, and contractual terms around data and compliance. It also matters for procurement: you may need to negotiate terms across multiple providers and cloud vendors.

Practical implications and risks

  • Performance variance: models can differ on code correctness, runtime, and resource use.
  • Vendor diversification: relying on one provider increases exposure; multi‑model strategies reduce single‑point risks.
  • Cost and cloud strategy: different models and clouds change total cost of ownership and performance profiles.
  • Data governance: contracts must cover data use, retention, and compliance for code and proprietary assets.

Actionable steps for teams

  • Benchmark models on your actual CI/CD tasks and codebases, not synthetic tests.
  • Adopt a multi‑model strategy with fallbacks for cost, latency, and accuracy; a minimal routing sketch follows this list.
  • Embed guardrails and tests to detect hallucinations or insecure code suggestions before merging.
  • Negotiate data usage and uptime terms across providers and align cloud strategy to latency needs.
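
To make the fallback idea concrete, here is a minimal Python sketch of a router that tries a preferred model first and falls back to alternatives on failure, recording per‑route latency along the way. The `ModelRoute` type, the `complete_with_fallback` function, and the call signature are illustrative assumptions, not any provider's actual SDK; wire each route to whatever client your providers expose.

```python
import time
from dataclasses import dataclass
from typing import Callable, Sequence

# Hypothetical provider call signature: swap in whatever SDK each provider exposes
# (an Anthropic or OpenAI chat-completion call, an in-house endpoint, etc.).
CallFn = Callable[[str, str], str]  # (model_name, prompt) -> completion text


@dataclass
class ModelRoute:
    name: str     # provider-specific model identifier
    call: CallFn  # function that actually invokes the provider


def complete_with_fallback(prompt: str, routes: Sequence[ModelRoute]) -> str:
    """Try each model in priority order; return the first successful completion."""
    errors: list[str] = []
    for route in routes:
        start = time.monotonic()
        try:
            result = route.call(route.name, prompt)
            # Record per-route latency so cost/latency tradeoffs stay measurable.
            print(f"{route.name} answered in {time.monotonic() - start:.2f}s")
            return result
        except Exception as exc:  # provider outage, rate limit, network failure
            errors.append(f"{route.name}: {exc}")
    raise RuntimeError("all model routes failed: " + "; ".join(errors))
```

In practice each route would wrap a real provider client, and the latency and error records would feed the same benchmark harness used in CI, so cost, accuracy, and reliability can be compared per model over time.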

Microsoft’s move to prefer Claude Sonnet 4 in VS Code is a reminder that model choice is a tech stack decision. Teams should treat models like databases or compilers: pick the right tool for each job, measure outcomes, and maintain the ability to switch when requirements change.

QuarkyByte runs enterprise‑grade model benchmarks, crafts migration playbooks and monitoring frameworks, and helps align procurement and engineering around measurable KPIs. If your organization relies on code generation or productivity AI, start by mapping the business‑impact metrics you want to protect and improve.
