In 2025, MCP was everywhere. The "MCPification" of services became a defining trend. But MCP (Model Context Protocol) can be a hard concept to grasp, especially for non-technical users. This article breaks down what MCP is and what problem it solves.
The problem: LLMs have limitations
Before we understand MCP, we have to understand the problem it solves. That problem is related to LLMs (Large Language Models), the AI behind products like ChatGPT.
The LLM on its own is not capable of doing much. Most of the time, you ask it questions, and it answers based on what it knows. Its knowledge stops at a training cutoff, so it is often not up to date with the latest events. And if you ask it to send an email or help you do some shopping, the LLM on its own cannot do that.
LLMs are intelligent, but they have limitations.
Tools and frameworks: developer-centric solutions
This is where tools come in. Most providers, like OpenAI and Anthropic, have tool use built into their APIs. You define a tool, handle the logic, and execute it. But you manage everything yourself, and each provider implements tools slightly differently. Frameworks like LangChain add a layer of standardization on top: you can import Python classes that represent capabilities like web search, mix and match LLM providers, and write custom tools. But it is still developer-centric. You have to do some coding.
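To make that concrete, here is a minimal sketch of provider-level tool use, assuming the OpenAI Python SDK. The get_weather function, the model name, and the question are illustrative placeholders, not part of any official example.

```python
# pip install openai -- a sketch of provider-level tool use, not a full app.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# You describe the tool to the model; the model never runs it itself.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, for illustration only
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What's the weather in Lagos?"}],
    tools=tools,
)

message = response.choices[0].message
if message.tool_calls:  # the model decided it needs the tool
    call = message.tool_calls[0]
    args = json.loads(call.function.arguments)
    print(f"Model wants {call.function.name} with {args}")
    # You now run the real logic yourself and send the result back
    # to the model in a follow-up request. All of that plumbing is on you.
```

Notice how much of the plumbing is yours to write and maintain. That is what "developer-centric" means here.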
Building tools into your LLM requires work. You need to know how to code, and you need to know how to use LangChain and similar frameworks. That is why we have AI products where tools are already built in. Perplexity has web search and deep research. Every, a company, offers a suite of AI tools: Spiral for writing, Cora for email, and Sparkle for organizing files. These products have capabilities baked in by developers, so you just use them.
But there is an issue with tools: if you are not good with coding, you cannot extend the capability of your AI on your own.
MCP: module-centric, not developer-centric
This is where MCP comes in. MCP is module-centric, not developer-centric.
MCP allows you to extend the capability of your LLM. It is almost like adding an extension to your browser. Browser extensions extend your browser's capabilities. You can do this without coding. There might be some complexity in setup, but compared to tools, you do not need to write code.
Anthropic, the creator of MCP, defined a standard that allows service providers to build their own MCP servers. When service providers build MCP servers using that standard, your LLM can connect to them, and its capabilities get extended through these connections.
Anthropic left the maintenance of MCP servers to the service providers. That is why you have so many MCP servers available: Anthropic created the standard, and service providers built servers in that unified language.
The MCP ecosystem
In the MCP ecosystem, we have four parts: the MCP host, the MCP client, the MCP server, and the data sources or services.
The MCP host is the AI application that wants to use external data or tools. Claude Desktop, Cursor, Windsurf, Cline. This is what you interact with. You talk to the host, and the host coordinates everything behind the scenes.
The MCP client is a component that runs inside the host. It maintains the connection to MCP servers, sends requests, and receives responses. You do not interact with the client directly; the host manages it for you.
The MCP server is a lightweight program that exposes data or capabilities using the MCP standard. Each server typically connects to one data source or service. Think of it as an adapter that knows how to fetch or manipulate a particular kind of data. Service providers build and maintain these servers using Anthropic's protocol.
The data sources and services are the actual places where information or functionality resides. They can be local (files on your computer, a local database) or remote (web APIs, cloud services, Slack, GitHub). The server connects to these sources and exposes them to the AI.
The flow: the AI host talks to a server (via its internal client), and the server talks to some data or tool. The AI might say, "Hey server, give me the file report.pdf" or "Hey server, execute this database query." The server performs that action and returns the result.
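Here is what that flow can look like from the host's side, as a minimal sketch assuming the official MCP Python SDK (the mcp package). The server script notes_server.py and the add tool are hypothetical placeholders; a sketch of such a server appears in the primitives section below.

```python
# pip install mcp -- the official Model Context Protocol Python SDK.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# notes_server.py is a hypothetical server script launched by the host.
server = StdioServerParameters(command="python", args=["notes_server.py"])

async def main() -> None:
    # The host launches the server and keeps this connection open.
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # "Hey server, what can you do?"
            tools = await session.list_tools()
            print("Available tools:", [t.name for t in tools.tools])

            # "Hey server, execute this action for me."
            result = await session.call_tool("add", {"a": 2, "b": 3})
            print("Result:", result.content)

asyncio.run(main())
```

When you use a host like Claude Desktop, it runs this kind of client loop for you behind the scenes; you only see the conversation.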
The building manager analogy
Let me use a scenario. Picture the MCP ecosystem as a large building.
The MCP host is the building itself. When you walk in, you are entering the host. This is what you interact with.
The MCP client is the building manager. The building manager works behind the scenes, coordinating requests between you and the departments. You do not talk to the building manager directly; the building handles that for you.
The LLM is the intelligence manager that sits upstairs. The building manager relays your request to the intelligence manager. The intelligence manager interprets what you want and identifies which department can help.
Each department is an MCP server. These departments provide specialized services. They follow a standard language protocol (the one Anthropic defined). The files, databases, and resources within each department are the data sources and services.
The flow: you enter the building (host) and state what you need. The building manager (client) relays your request to the intelligence manager (LLM). The intelligence manager identifies which department (server) can help and sends a request. The department fetches the information from its files and resources (data sources). The response travels back: department to intelligence manager to building manager to you.
MCP is stateful
One more thing about MCP compared to tools: MCP is stateful. MCP servers maintain persistent connections during a session. The server keeps the connection open and can track context within that session. Traditional tool calls are one-off: call, respond, disconnect. MCP keeps the line open.
MCP primitives: resources, tools, prompts, and sampling
When you build an MCP server, you expose one or more of these four capabilities (a minimal server sketch follows the list):
Resources are passive data. The client asks to read a URI (like a file path or database record). Think of it as a file read: informational only, no action taken.
Tools are executable functions. These let the LLM take action on your behalf: execute a database query, send a Slack message, create a file. Tools do things.
Prompts are reusable templates. A server can define a template (like "Analyze Error Logs") that the host loads to jumpstart a conversation. They give the AI context to work with.
Sampling is when the server asks the LLM for help with reasoning. Picture the kitchen department in our building analogy asking the intelligence manager upstairs for help deciding which dish to prepare. The server needs the LLM's reasoning to complete its task.
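To show three of these primitives side by side, here is a minimal server sketch, assuming the FastMCP helper from the official MCP Python SDK; the server name, URIs, and data are all made up. Sampling is left out because it happens at runtime, when the server asks a supporting host for LLM help, rather than as a one-line declaration.

```python
# pip install mcp -- the official Model Context Protocol Python SDK.
from mcp.server.fastmcp import FastMCP

# The server's name is hypothetical, as is everything it serves.
mcp = FastMCP("demo-notes")

# Resource: passive data behind a URI. Reading it takes no action.
@mcp.resource("notes://{name}")
def read_note(name: str) -> str:
    return f"Contents of note '{name}' (placeholder text)."

# Tool: an executable function the LLM can call to do something.
@mcp.tool()
def add(a: int, b: int) -> int:
    return a + b

# Prompt: a reusable template the host can load to start a conversation.
@mcp.prompt()
def analyze_error_logs(path: str) -> str:
    return f"Analyze the error logs at {path} and summarize recurring failures."

if __name__ == "__main__":
    # Communicate over stdin/stdout so a host can launch and manage it.
    mcp.run(transport="stdio")
```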
Categories of MCP servers
MCP servers come in many categories. Here are some common ones:
Browser automation: Playwright MCP is the most popular, with over 12,000 stars on GitHub. It lets AI agents interact with web pages, perform scraping, and automate browser-based workflows. With accessibility snapshots, it can help you do online shopping or navigate complex web apps.
File system servers: These let your AI access files on your computer. Read, write, search, and manage files and directories.
Database servers: These expose databases to your AI. Query data, run reports, and interact with your data stores.
Code execution: Servers like Code Alchemist let you run code in simulated environments. Your AI can execute Python or other languages safely.
Vision and media: Some servers help AI process images, videos, or other media formats.
Finding MCP servers
Several resources list available MCP servers:
punkpeye/awesome-mcp-servers on GitHub: A curated collection of MCP servers with categories and descriptions.
tolkonepiu/best-of-mcp-servers on GitHub: A ranked list of over 410 MCP servers, updated weekly.
mcpserver.works: A website that catalogs MCP servers and makes them easy to discover.
The ecosystem keeps growing. New servers appear regularly as more service providers adopt the standard.
In conclusion
MCP lets you extend your AI without coding. Anthropic created the standard. Service providers build the servers. You just connect.
