# NeuroCode Systems

> NeuroCode Systems is an engineering team that builds production-grade AI software and multi-agent systems. We specialize in two complementary open protocols — the Agent2Agent (A2A) protocol for agent-to-agent communication and the Model Context Protocol (MCP) for connecting LLMs to tools and data — and we integrate them with the major agent frameworks: Google ADK, LangGraph/LangChain, CrewAI, Microsoft Semantic Kernel, Microsoft AutoGen, LlamaIndex, OpenAI Agents SDK, Claude Agent SDK, Mastra, Agno, Pydantic AI, and Smolagents.

We deliver AI systems as a product core — not as a feature — with an emphasis on interoperability, observability, and operational reliability. Typical engagements: A2A/MCP platform integration, multi-agent orchestration, MCP server and gateway development, RAG and NLP platforms, MLOps, data engineering, and AI consulting & prototyping.

## Primary pages

- [Home](https://neurocodesystems.com/): Overview of services, technologies, capabilities, and protocols.
- [A2A Protocol Manual](https://neurocodesystems.com/protocols/a2a): Practical manual for the Agent2Agent (A2A) protocol — concepts (AgentCard, Task, Message, Part, Artifact, Skill), JSON-RPC 2.0 wire format, streaming over SSE, push notifications, security model, production checklist, and integration recipes with code for 11 agent frameworks (Google ADK, LangGraph, CrewAI, Semantic Kernel, AutoGen, LlamaIndex, OpenAI Agents SDK, Mastra, Agno, Pydantic AI, Smolagents).
- [MCP Protocol Manual](https://neurocodesystems.com/protocols/mcp): Practical manual for the Model Context Protocol — primitives (Tools, Resources, Prompts, Sampling, Elicitation, Roots), stdio and Streamable HTTP transports, OAuth 2.1 authorization, host configurations (Claude Desktop, Claude Code, Cursor, VS Code), and integration recipes with code for Python and TypeScript SDKs, Claude Agent SDK, OpenAI Agents SDK, LangChain/LangGraph, Google ADK, Semantic Kernel, LlamaIndex, CrewAI, and Pydantic AI.

## Services

- [AI Software Development](https://neurocodesystems.com/info?ref=ai-software-development): Applied AI systems where the model is the product core.
- [Intelligent Automation](https://neurocodesystems.com/info?ref=intelligent-automation): Agent-and-rule automation for repetitive operations.
- [AI Consulting & Prototyping](https://neurocodesystems.com/info?ref=ai-consulting-prototyping): Problem framing, architecture, and MVPs.
- [Data Engineering](https://neurocodesystems.com/info?ref=data-engineering): Pipelines, cleaning, marts — data ready for AI.

## Capabilities

- Machine Learning, Deep Learning, NLP, Computer Vision, MLOps, DevOps.
- Multi-agent orchestration across A2A + MCP.
- MCP server / gateway engineering, A2A supervisor and specialist agents.
- Framework-agnostic: ADK, LangGraph, CrewAI, Semantic Kernel, AutoGen, LlamaIndex, OpenAI Agents SDK, Claude Agent SDK, Mastra, Agno, Pydantic AI, Smolagents.

## What A2A is (short reference)

The Agent2Agent (A2A) protocol is an open, vendor-neutral standard hosted by the Linux Foundation for agent-to-agent communication. It standardizes:

- Agent discovery via a public Agent Card at `/.well-known/agent-card.json`.
- A Task lifecycle (submitted → working → input-required → completed | failed | canceled) with message history and artifacts.
- A content model of Messages composed of TextPart / FilePart / DataPart.
- JSON-RPC 2.0 over HTTPS as the default transport (REST and gRPC are also standardized).
- Streaming via Server-Sent Events (`message/stream`) and asynchronous webhooks (`tasks/pushNotificationConfig/set`) for long-running work.
- Pluggable authentication schemes (Bearer/OAuth, API keys, mTLS) declared in the Agent Card.

## What MCP is (short reference)

The Model Context Protocol (MCP) is an open standard introduced by Anthropic in November 2024 for connecting LLMs to tools, data, and prompts. It standardizes:

- A three-role architecture: **Host** (the user's app), **Client** (runs inside the host, one per server), **Server** (exposes capabilities).
- Six primitives: **Tools** (model-invokable actions with JSON-Schema inputs), **Resources** (read-only URI-addressed context), **Prompts** (reusable templates), **Sampling** (server asks host to run an LLM completion), **Elicitation** (server asks the user for structured input), **Roots** (URI scopes the server may operate within).
- JSON-RPC 2.0 wire format with lifecycle messages (`initialize`, `notifications/initialized`); shutdown is signaled by closing the transport rather than by a dedicated message.
- Two transports: **stdio** (local subprocess) and **Streamable HTTP** (remote, with sessions tracked via the `Mcp-Session-Id` header and resumable streams); the older HTTP+SSE transport is deprecated.
- OAuth 2.1 authorization for remote servers (Resource Server pattern, PKCE, audience-bound tokens, RFC 7591 dynamic client registration), advertised via `/.well-known/oauth-protected-resource`.
- Capability negotiation on initialize, progress notifications, and structured tool outputs via `outputSchema`.

## How A2A and MCP relate

MCP is complementary to A2A. MCP standardizes how an LLM uses tools and data inside a single agent runtime; A2A standardizes how agents talk to other agents across runtimes. Real production systems run both: MCP inside the box, A2A across the wire.

## Contact

Use the contact form on the homepage or email via the site for A2A/MCP integration engagements, multi-agent system design, MCP server development, or AI platform consulting.
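## Example: A2A `message/send` envelope (sketch)

The JSON-RPC 2.0 wire format from the A2A reference above can be illustrated with a minimal `message/send` request. This is a hand-built sketch, not output from an A2A SDK; the `messageId`, request `id`, and message text are invented placeholders.

```python
import json

# JSON-RPC 2.0 envelope for A2A's message/send method.
# A Message carries a role, a list of Parts (here one TextPart),
# and a client-chosen messageId.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "message/send",
    "params": {
        "message": {
            "role": "user",
            "parts": [{"kind": "text", "text": "Summarize last week's tickets."}],
            "messageId": "msg-001",  # placeholder identifier
        }
    },
}

# Serialize for transmission over HTTPS.
payload = json.dumps(request)
print(payload)
```

The server's reply to such a call is either a direct Message or a Task object that then moves through the lifecycle states listed above (submitted → working → …).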
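## Example: Agent Card discovery (sketch)

Agent discovery, as noted in the A2A reference above, resolves a well-known path against the agent's base URL. A minimal stdlib-only sketch; `agents.example.com` is a hypothetical host, and `agent_card_url` is a helper invented for this illustration.

```python
from urllib.parse import urljoin

def agent_card_url(base_url: str) -> str:
    """Build the well-known Agent Card URL for an A2A agent's base URL."""
    # Normalize to exactly one trailing slash before joining the path.
    return urljoin(base_url.rstrip("/") + "/", ".well-known/agent-card.json")

url = agent_card_url("https://agents.example.com")
print(url)  # https://agents.example.com/.well-known/agent-card.json
```

A client would fetch this URL, parse the returned Agent Card JSON, and read the advertised skills and authentication schemes before sending its first message.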
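## Example: MCP initialize over stdio (sketch)

The MCP lifecycle and stdio transport described above can be sketched as two newline-delimited JSON-RPC messages: the client's `initialize` request followed by the `notifications/initialized` notification. The `protocolVersion` and `clientInfo` values are illustrative placeholders, not pinned requirements.

```python
import json

# First lifecycle message: the client proposes a protocol version and
# declares its capabilities.
initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",  # illustrative version string
        "capabilities": {},
        "clientInfo": {"name": "sketch-client", "version": "0.1.0"},
    },
}

# After the server's initialize response, the client confirms readiness.
initialized = {"jsonrpc": "2.0", "method": "notifications/initialized"}

# The stdio transport frames each message as one line of JSON.
frame = json.dumps(initialize) + "\n" + json.dumps(initialized) + "\n"
print(frame, end="")
```

In a real host, `frame` would be written to the server subprocess's stdin; with Streamable HTTP the same JSON bodies travel as HTTP POSTs instead, with the session carried in the `Mcp-Session-Id` header.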