New Protocols Aim to Make AI Agents Safer and More Useful
AI agents can already draft emails, edit docs, and touch databases — but they struggle to navigate the messy web of apps and other agents. Anthropic's MCP and Google's A2A set early standards to translate between models and tools and to moderate agent-to-agent exchanges. Adoption is strong, but security, governance, and token inefficiency remain unresolved.
AI agent protocols are maturing but major gaps remain
This week the industry spotlight is on two emerging standards that aim to let AI agents act on our behalf without breaking the systems they touch. Anthropic’s Model Context Protocol (MCP) and Google’s Agent2Agent (A2A) protocol are designed to solve two connected problems: how models talk to programs, and how models talk to each other.
The promise is simple: give agents standard, predictable ways to fetch calendars, edit documents, or call a CRM. MCP translates between the model’s natural-language reasoning and the structured inputs tools expect. A2A governs the choreography when multiple agents coordinate on a task.
Adoption has been fast. An MCP server index lists over 15,000 endpoints, and Google says about 150 companies are building on A2A, including Adobe and Salesforce. But uptake hasn’t solved the hard trade-offs below.
Three challenges: security, openness, efficiency
Security is the most urgent worry. Agents that can send emails or access files expand the attack surface. Researchers have demonstrated indirect prompt-injection attacks that can hijack an agent reading email and trick it into exfiltrating documents. Critics warn that current protocols lack built-in safety designs and could amplify real-world harms.
Openness and governance are next. A2A’s move to the Linux Foundation gives many stakeholders a voice; MCP is open-licensed but stewarded by Anthropic. That mix has practical benefits — faster iteration and shared tooling — but it raises questions about who sets rules, who audits servers, and how users rate trust in services.
Finally, efficiency. MCP and A2A use natural language to communicate, which keeps interaction human-like but wastes computational work. Agents must tokenize and reprocess machine-to-machine exchanges that no human reads — doubling up tokens for tasks like summarization and driving costs up fast.
What practical steps can close the gaps?
The path forward is a mixture of engineering, governance, and tooling. Ideas already on the table include:
- Security layers that validate intent and permissions before granting access, plus standardized audit logs to trace which agent asked what.
- Hybrid interfaces that use structured data for machine-to-machine communication and natural language only when humans need to read or intervene.
- Open registries and user ratings so organizations can pick trusted agent servers, plus governance frameworks that include multiple industry voices.
Think of protocols as diplomatic rules for software: they give agents common language, identity, and accountability. The better those rules, the more confidently organizations can delegate everyday tasks to agents without inviting new risks.
At this stage—mid‑2025—MCP and A2A are important building blocks, not finished products. Expect forks, governance experiments, and hybrid designs as companies wrestle with security, cost, and control.
Organizations that move now should prioritize: incremental pilots, threat modeling for agent workflows, and a clear policy stack that defines what agents may and may not do. With careful engineering and cross-industry governance, agents can become reliable assistants rather than unpredictable rivals.
QuarkyByte is monitoring these protocols closely and helping clients map the trade-offs. Our approach combines adversarial testing, governance design, and cost-performance analysis so teams can deploy agents with measured risk and clear metrics for success.
Keep Reading
View AllAI agents need common protocols for real-world use
AI agents are advancing; new agent protocols and OpenAI’s safety focus aim to make them useful and safer for workplaces and consumers.
OpenAI Releases Open-Weight GPT-OSS Models
OpenAI publishes gpt-oss open-weight LLMs under Apache 2.0, enabling local use, customization, and a US response to rising Chinese open models.
OpenAI Releases Open-Weight Models as AI Upends Search
OpenAI publishes downloadable models, search shifts to AI answers, Nvidia rebuts 'kill switch' claims — implications for developers, publishers, and infra.
AI Tools Built for Agencies That Move Fast.
QuarkyByte can simulate MCP and A2A deployments, stress-test agent workflows for prompt-injection and permission flaws, and design governance blueprints that balance openness with accountability. Contact us to pilot a hardened agent stack and benchmark cost and security trade-offs for your org.