
How AI Assistants Connect to Your Tools: MCP Architecture Explained

A visual walkthrough of clients, servers, and the communication layer that makes AI integration work — from request to response.

January 2026 · 10 min read
Architecture Technical Deep Dive MCP Servers

If you want to understand how MCP actually works—how requests flow from an AI system to your business tools and back—this is the deep dive. We'll walk through the architecture from the ground up, explain each component, and show you how data and actions move through the system.

You don't need to be a developer to understand this, but you do need to be comfortable with technical concepts. Think of this as understanding the internal design of something you're about to adopt.

The Three-Part Architecture

MCP has three fundamental layers: the client, the communication layer, and the server. Let's start by understanding what each does.

The MCP Client

The client is the AI system—Claude, ChatGPT, or whatever LLM is powering your interaction. The client needs something, so it formulates a request. But here's the key: the client doesn't build the request itself. Instead, it works with what we call a client runtime.

Think of the runtime as a translator and conductor. The AI knows it wants to "find a customer named Alice," but it doesn't know how to phrase that request in a way the server understands. The runtime bridges that gap. It also manages the conversation itself—maintaining state, handling retries, and ensuring things stay organized.

In practical terms, if you're using Claude through Cursor or another IDE, the IDE has a client runtime built in. If you're building a custom application, you'd implement your own client runtime using an MCP library.
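To make the runtime's job concrete, here is a minimal sketch of what it does: assign an ID, wrap the intent in the JSON-RPC envelope, and remember the ID so the eventual response can be matched back. The class and method names are illustrative, not an actual MCP library API; the method and URI come from the example below.

```python
import itertools
import json

class ClientRuntime:
    """Illustrative sketch of a client runtime's bookkeeping."""

    def __init__(self):
        self._ids = itertools.count(1)
        self._pending = {}  # id -> method, so responses can be matched

    def build_request(self, method, params):
        # Wrap the model's intent in the standard JSON-RPC envelope.
        req_id = next(self._ids)
        self._pending[req_id] = method
        return {
            "jsonrpc": "2.0",
            "id": req_id,
            "method": method,
            "params": params,
        }

    def match_response(self, response):
        # Pair a response with the request it answers, then forget it.
        return self._pending.pop(response["id"])

runtime = ClientRuntime()
request = runtime.build_request(
    "resources/read",
    {"uri": "salesforce://customers/alice"},
)
print(json.dumps(request))
```

The important part is the bookkeeping: because every request carries an ID, the runtime can interleave many requests over one connection and still route each response to the right place.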

The Communication Layer

The client and server need to talk to each other. MCP is transport-agnostic, meaning it doesn't care how the messages travel—just that they arrive correctly. The messages themselves are JSON-RPC 2.0; in practice they travel over stdio (for local integrations) or HTTP (for remote servers).

Request (JSON-RPC):

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "resources/read",
  "params": { "uri": "salesforce://customers/alice" }
}

The message is standardized. It has an ID so responses can be matched to requests. It specifies a method (what to do) and parameters (what to do it with). The server reads this, processes it, and sends back a response in the same format.

This standardization is crucial. The AI vendor doesn't care about your business logic. Your systems don't care about the AI vendor's internals. They just exchange JSON messages. Both sides understand the contract.
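For completeness, the reply travels in the same envelope, matched by ID. The result shape below is illustrative of what a resource read might return; only the JSON-RPC envelope itself is fixed, and the payload content is invented for this example:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "contents": [
      {
        "uri": "salesforce://customers/alice",
        "mimeType": "application/json",
        "text": "{\"name\": \"Alice\", \"tier\": \"enterprise\"}"
      }
    ]
  }
}
```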

The MCP Server

The server is where your business logic lives. This is custom code that understands your Salesforce instance, your database schema, your authentication, your business rules.

An MCP server accepts requests in the standard format, validates them, executes the appropriate action in your systems, and returns a response. If the client asks to read a customer record, the server fetches it from your database. If the client asks to create an order, the server validates the request and updates your order management system.

The server is also where you implement security. It authenticates requests, enforces authorization rules, audits what was accessed, and ensures data isn't leaked to unauthorized parties. You might expose a "read customers" resource to some clients but not others. You might allow "create orders" for certain AI assistants but restrict "delete orders" entirely.

How a Request Flows Through the System

Let's walk through a concrete example. You're using Claude to manage customer interactions. You ask: "What are our top three customers by revenue?"

Step 1: The client decides it needs data. Claude's internal process recognizes that it needs customer revenue information to answer your question. It knows this is something an MCP server can provide.

Step 2: The client runtime builds a request. The runtime translates Claude's intent into an MCP request. It specifies the method (resources/list), the resource type (customers), and any filters or parameters.

Step 3: The message is transmitted. The request travels over the communication channel (likely HTTPS) to your MCP server, which is running in your infrastructure.

Step 4: The server authenticates and validates. Your server receives the request. It checks: Is the client authorized? Is the request well-formed? Does the requested resource exist? Does the requestor have permission to access it?

Step 5: The server executes. If validation passes, your code runs. This might mean querying your database, calling an internal API, checking business rules, or performing some computation. In this case, it queries your CRM for customer revenue data.

Step 6: The server responds. The data is formatted in the MCP response format and sent back to the client runtime. This response includes the data (the top three customers and their revenue), metadata (such as how many total results there were), and any error messages if something went wrong.

Step 7: Claude uses the data. The client runtime unpacks the response and feeds the customer data back to Claude as context. Claude now knows your top three customers and can summarize them for you.

Step 8: Logging and auditing. Throughout this process, the server logs what happened. This gives you an audit trail—you can see that Claude requested customer data, exactly which fields were returned, and when.
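The server-side steps (4 through 8) can be sketched as a single handler. Everything here is illustrative rather than a real MCP SDK: the permission table, the client ID, the CRM stand-in, and the log format are all invented for the example.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("mcp-server")

# Hypothetical policy table: which clients may call which methods.
PERMISSIONS = {
    "claude-assistant": {"resources/list", "resources/read"},
}

def query_crm(params):
    # Stand-in for a real CRM query (step 5).
    return [
        {"name": "Acme Corp", "revenue": 1_200_000},
        {"name": "Globex", "revenue": 950_000},
        {"name": "Initech", "revenue": 610_000},
    ]

def handle_request(client_id, request):
    # Step 4: authenticate and validate.
    if request.get("jsonrpc") != "2.0" or "method" not in request:
        return {"jsonrpc": "2.0", "id": request.get("id"),
                "error": {"code": -32600, "message": "Invalid request"}}
    if request["method"] not in PERMISSIONS.get(client_id, set()):
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32001, "message": "Not authorized"}}

    # Step 5: execute against your systems.
    customers = query_crm(request.get("params", {}))

    # Step 8: write the audit trail — who asked for what, and when.
    log.info("client=%s method=%s at=%s", client_id, request["method"],
             datetime.now(timezone.utc).isoformat())

    # Step 6: respond in the standard envelope.
    return {"jsonrpc": "2.0", "id": request["id"],
            "result": {"customers": customers[:3]}}

response = handle_request("claude-assistant",
                          {"jsonrpc": "2.0", "id": 7,
                           "method": "resources/list",
                           "params": {"type": "customers"}})
print(json.dumps(response, indent=2))
```

Note that authorization and logging live in the handler, not in the client: the server decides what any given caller may do, regardless of which AI system is on the other end.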

Resources, Tools, and Prompts

Now let's talk about what you actually expose through MCP. There are three kinds of things:

Resources

Resources are data. Think of them as nouns in your system: customers, orders, invoices, spreadsheets, documents. Each resource has a unique identifier (URI), a type, and a schema that defines what fields it contains.

When you design your MCP server, you define which resources are available. You might expose salesforce://customers/* (all customer records) but not salesforce://salaries/* (sensitive compensation data). You control the surface area.
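Controlling the surface area can be as simple as an allowlist of URI patterns. A minimal sketch, reusing the article's salesforce:// scheme (the pattern list and function name are invented):

```python
from fnmatch import fnmatch

# Illustrative allowlist of resource URI patterns this server exposes.
EXPOSED_RESOURCES = [
    "salesforce://customers/*",
    "salesforce://orders/*",
]

def is_exposed(uri):
    # A resource is visible only if it matches an allowed pattern;
    # anything else (e.g. salaries) simply doesn't exist to clients.
    return any(fnmatch(uri, pattern) for pattern in EXPOSED_RESOURCES)

print(is_exposed("salesforce://customers/alice"))  # True
print(is_exposed("salesforce://salaries/ceo"))     # False
```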

Tools

Tools are actions. They're the verbs: create, update, delete, send, generate. Each tool has a name, a description, and a schema defining what parameters it accepts and what it returns.

You might define a tool called create_customer that accepts name, email, and phone number. You might define another tool called send_email that takes a recipient and message body. The AI learns about these tools and can call them when appropriate.

Tools are where you need to be most careful about security. You might allow the AI to read customer data but not delete it. You might allow creating orders but require human approval for orders above a certain value. Authorization happens at the tool level.
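A sketch of what tool-level authorization might look like. The tool name comes from the examples above; the registry layout, the threshold, and the three outcomes are assumptions for illustration, not a fixed MCP requirement:

```python
# Hypothetical tool registry: each tool has a description and a schema
# for its parameters, mirroring the name/description/schema idea above.
TOOLS = {
    "create_order": {
        "description": "Create a new order in the order system",
        "input_schema": {
            "type": "object",
            "properties": {
                "customer": {"type": "string"},
                "total": {"type": "number"},
            },
            "required": ["customer", "total"],
        },
    },
}

APPROVAL_THRESHOLD = 10_000  # orders above this need a human sign-off

def authorize_tool_call(name, arguments):
    if name not in TOOLS:
        return "rejected"        # e.g. delete_orders is simply not exposed
    if name == "create_order" and arguments["total"] > APPROVAL_THRESHOLD:
        return "needs_approval"  # queue for a human instead of executing
    return "allowed"

print(authorize_tool_call("create_order", {"customer": "Acme", "total": 500}))
print(authorize_tool_call("create_order", {"customer": "Acme", "total": 50_000}))
print(authorize_tool_call("delete_orders", {}))
```

The three outcomes map directly to the policies described above: unexposed tools are rejected outright, risky calls are escalated to a human, and everything else proceeds.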

Prompts

Prompts are instructions or templates that guide how the AI should use your resources and tools. They're metadata about how your business works and what the AI should care about.

A prompt might say: "When a customer asks about their order status, always check the orders resource first. Never promise delivery dates without checking inventory." Or: "Our company has a policy that customers above $100k annual revenue get priority support. Use this when deciding response urgency."

Prompts give you a way to inject business logic and policy into the AI's decision-making without needing to change the underlying model.
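As a rough illustration, a prompt can be as simple as a named template the server advertises alongside its resources and tools. The wording is taken from the example above; the record layout is an assumption, not the exact MCP prompt schema:

```python
# Illustrative prompt entry: policy text the server hands to clients.
PROMPTS = {
    "order_status_policy": {
        "description": "How to answer order-status questions",
        "template": (
            "When a customer asks about their order status, always check "
            "the orders resource first. Never promise delivery dates "
            "without checking inventory."
        ),
    },
}

def get_prompt(name):
    return PROMPTS[name]["template"]

print(get_prompt("order_status_policy"))
```

Because the policy lives on the server, you can change it without touching the model or the client: the next session simply picks up the new text.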

Security and Auditing

One of the architecture's most useful properties is auditability. Every resource read and tool call passes through your server as a discrete, structured message, so each one can be logged. This means you can see exactly what Claude accessed and what it did.

You can configure logging policies. You might record 100% of sensitive operations (like deletions) and sample only 1% of routine reads (like viewing customer names). You can set up alerts: "If the AI accesses compensation data, notify compliance immediately."

This is critical for compliance and governance. If your industry has audit requirements, you can prove exactly what information the AI accessed and when. If something goes wrong, you have a complete trail.

This turns AI integration from "trust and hope for the best" into "verify and audit everything."
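Such a policy is straightforward to implement in the server. A minimal sketch, in which the operation names, rates, and alert list are all invented:

```python
import random

# Illustrative audit policy: record every sensitive operation, a small
# fraction of routine reads, and flag access to specific resources.
POLICY = {
    "tools/call:delete": 1.0,  # record 100% of deletions
    "resources/read": 0.01,    # sample 1% of routine reads
}
ALERT_URIS = ("salesforce://salaries/",)

def should_log(operation, rng=random.random):
    # rng is injectable so the policy is testable; defaults to random.
    rate = POLICY.get(operation, 0.0)
    return rng() < rate

def should_alert(uri):
    return uri.startswith(ALERT_URIS)

print(should_log("tools/call:delete", rng=lambda: 0.99))  # True: rate is 1.0
print(should_alert("salesforce://salaries/ceo"))          # True
```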

Implementation Patterns

How you actually build an MCP server depends on your existing infrastructure. There are several patterns:

API wrapper: If you already have REST APIs for your internal systems, you can build an MCP server that wraps those APIs. The server translates MCP requests into API calls, handles responses, and formats them back as MCP responses. This is the fastest path if your APIs already exist.
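A sketch of the API-wrapper pattern: the server unpacks a resources/read request, calls the existing REST endpoint, and re-wraps the payload in the MCP envelope. The endpoint URL and the http_get stand-in are hypothetical; in a real server the stand-in would be an actual HTTP client call.

```python
import json

def http_get(url):
    # Stand-in for a real HTTP client call to your internal API.
    return {"status": 200, "body": {"name": "Alice", "tier": "enterprise"}}

def read_resource(request):
    uri = request["params"]["uri"]          # e.g. salesforce://customers/alice
    record_id = uri.rsplit("/", 1)[-1]      # map the URI onto the REST route
    api_response = http_get(f"https://internal.example/api/customers/{record_id}")
    if api_response["status"] != 200:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32000, "message": "Upstream API error"}}
    # Re-wrap the API payload in the MCP response envelope.
    return {"jsonrpc": "2.0", "id": request["id"],
            "result": {"uri": uri, "text": json.dumps(api_response["body"])}}

resp = read_resource({"jsonrpc": "2.0", "id": 3, "method": "resources/read",
                      "params": {"uri": "salesforce://customers/alice"}})
print(resp["result"]["text"])
```

The wrapper owns the mapping between URIs and API routes, which is also a natural place to enforce which routes are reachable at all.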

Database connector: If you want direct MCP access to a database, you can build a server that speaks directly to your database. This is common for exposing a specific dataset that multiple AI systems need to access. The server validates queries, enforces row-level security, and returns filtered results.

Proxy gateway: Larger organizations might build a central MCP gateway that handles authentication, rate limiting, and routing. All MCP requests flow through this gateway, which dispatches them to the appropriate internal service.

Adapter layer: If you use platforms like Salesforce or HubSpot that have APIs but aren't MCP-native, you build an adapter that translates between their API and MCP. This is where specialized service providers add the most value.

Why This Architecture Matters

Understanding MCP's architecture helps you make better decisions about adoption. You can see why vendor independence matters—because once you've built an MCP server, any AI client that speaks MCP can use it. You can understand why security is tractable: every interaction passes through your server, where it can be logged and controlled.

You can also see where complexity lives. The server-side implementation is custom to your business. That's not a weakness—it's actually where the value is. Generic AI systems become powerful when they're connected to your specific data and processes. Building that connection well requires understanding your business, not just following a template.

This is exactly what we do at Crox. We translate your business logic into MCP servers, build the adapters that connect your systems, and implement the security and governance policies that keep your data safe while letting AI systems operate effectively. Once that's in place, you can use any compatible AI system—today and in the future—without rewriting anything.

Ready to implement these concepts in your organization? Our team can guide you through the entire MCP integration process.

Schedule a Consultation