TL;DR
The MCP Registry is the centralized discovery layer for the Model Context Protocol ecosystem — think npm, but for AI tool servers. Launched in preview by the MCP open-source community in September 2025, it provides a standardized way to publish, discover, and install MCP servers across AI IDEs, agents, and platforms. This article covers the registry's architecture, namespace verification, the server.json manifest format, multi-layer security model, governance policies, and how enterprise teams can run private registries. Whether you are publishing servers or consuming them, understanding the registry is now essential to working with MCP.
📋 Table of Contents
- Key Takeaways
- Why MCP Needs a Registry
- The GitHub MCP Registry Architecture
- How Server Discovery Works
- Publishing an MCP Server
- Security and Trust Model
- Governance and Namespace Management
- Private and Enterprise Registries
- The Future of MCP Distribution
- Best Practices
- FAQ
- Summary
✨ Key Takeaways
- The MCP Registry is a meta-registry: it stores metadata about servers and points to existing package registries (npm, PyPI, Docker Hub) for actual code distribution
- Namespace verification ties every server identity to a proven owner via GitHub OAuth, OIDC, or DNS/HTTP domain validation
- The `server.json` manifest is the universal descriptor format for MCP servers, covering packages, remote endpoints, environment variables, and arguments
- Subregistries allow AI client marketplaces and enterprises to curate, rate, and filter servers on top of the official registry data
- Publishing is automated through the `mcp-publisher` CLI and GitHub Actions, with the registry API frozen at v0.1 for stability
💡 Quick Tool: MCP Server Directory — Browse and discover MCP servers for your AI applications.
Why MCP Needs a Registry
The MCP protocol solved a fundamental problem: how AI applications communicate with external tools through a standardized interface. But as adoption exploded through 2025 — with Cursor, VS Code Copilot, Claude Desktop, TRAE, and dozens of other clients implementing MCP support — a new problem emerged: discovery.
With thousands of MCP servers being built by individual developers, open-source communities, and enterprises, there was no unified way to answer basic questions:
- How do I find an MCP server for a specific capability?
- How do I know a server is safe to install?
- How do I know who published it and whether it is still maintained?
- How does my AI agent automatically discover and install the right tools?
The parallel to existing package ecosystems is obvious. npm solved this for JavaScript. PyPI solved it for Python. The MCP ecosystem needed its own discovery layer — but with additional requirements specific to AI tool distribution: capability declarations, security scanning, and support for multiple deployment models (local packages, remote servers, containers).
The official MCP Registry, hosted at registry.modelcontextprotocol.io, launched in public preview on September 8, 2025. It is owned by the MCP open-source community and backed by major contributors including Anthropic, GitHub, PulseMCP, and Microsoft.
The GitHub MCP Registry Architecture
Registry Ecosystem Overview
The MCP Registry is not a monolithic package host. It is a meta-registry — a metadata layer that describes where servers live and how to install them, without hosting the actual code or binaries.
This architecture has a critical advantage: it does not create a single point of failure for code distribution. If the registry goes down, existing installed servers keep working because the actual packages live on npm, PyPI, Docker Hub, or self-hosted endpoints.
Core Components
Server Catalog and Metadata Store. Every published server is stored as a server.json document containing identity, version, package locations, runtime configuration, and optional custom metadata. The catalog is queryable through a REST API.
Namespace Management. Server names follow a reverse-DNS pattern (e.g., io.github.username/server-name or com.company/server-name). Ownership is verified through authentication before publishing is allowed.
Version Control and Semantic Versioning. Each server can have multiple published versions. Publications are immutable — once a version is published, it cannot be modified, only deprecated or superseded by a new version.
Deployment Models. The registry supports three deployment types:
| Deployment Type | Description | Example |
|---|---|---|
| Package | Published to a package registry, installed and run locally | npm, PyPI, NuGet, Docker, MCPB |
| Remote | Hosted as a web service, clients connect directly | Streamable HTTP, SSE endpoints |
| Hybrid | Both package and remote options available | Maximum flexibility for consumers |
How Server Discovery Works
The registry exposes a REST API that clients use to discover servers. The core endpoints are straightforward:
```bash
# List servers with pagination
curl "https://registry.modelcontextprotocol.io/v0/servers?limit=10"

# Search by keyword
curl "https://registry.modelcontextprotocol.io/v0/servers?search=filesystem"

# Get a specific server by ID
curl "https://registry.modelcontextprotocol.io/v0/servers/{server-id}"
```
AI IDEs and agent platforms integrate these APIs to provide in-app discovery experiences. When you search for MCP servers in Cursor or VS Code Copilot, the search queries are hitting this registry API behind the scenes.
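Client integrations typically walk the paginated listing rather than fetching one page. Here is a minimal Python sketch of cursor-based pagination; the exact response shape (a `servers` array plus a `metadata.next_cursor` field) is an assumption for illustration, so check the live API documentation before relying on it. The fetch function is injected so the pagination logic stays testable without network access:

```python
from typing import Callable, Iterator, Optional

def iter_servers(fetch_page: Callable[[Optional[str]], dict]) -> Iterator[dict]:
    """Yield every server entry, following pagination cursors.

    fetch_page(cursor) should GET /v0/servers (adding ?cursor=... when
    cursor is not None) and return the decoded JSON body. The response
    shape assumed here is {"servers": [...], "metadata": {"next_cursor": ...}}.
    """
    cursor = None
    while True:
        page = fetch_page(cursor)
        yield from page.get("servers", [])
        cursor = (page.get("metadata") or {}).get("next_cursor")
        if not cursor:
            break
```

With `urllib.request` or `requests`, the real `fetch_page` is a few lines; injecting it keeps the loop independent of any HTTP library.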
Discovery Mechanisms Compared
| Mechanism | How It Works | Best For |
|---|---|---|
| Keyword Search | Full-text search across server names, descriptions, and metadata | Finding servers by topic ("database", "weather", "filesystem") |
| Namespace Browse | List all servers under a specific organization namespace | Exploring a publisher's full catalog |
| Package Type Filter | Filter by registry type (npm, PyPI, Docker, etc.) | Finding servers compatible with your runtime |
| Transport Filter | Filter by transport type (stdio, streamable-http, sse) | Finding servers that match your deployment model |
| Version Query | Get specific version details or list all versions | Pinning to stable releases |
The API also supports synchronization endpoints that subregistries and aggregators use to maintain up-to-date mirrors of the server catalog.
Publishing an MCP Server
Publishing an MCP server to the official registry involves creating a server.json manifest, authenticating your namespace, and submitting through the CLI or GitHub Actions.
Step 1: Create the server.json Manifest
The server.json file is the universal descriptor for your MCP server. Here is a complete example for a TypeScript server published to npm:
```json
{
  "$schema": "https://static.modelcontextprotocol.io/schemas/2025-12-11/server.schema.json",
  "name": "io.github.myorg/database-query",
  "description": "MCP server for safe, read-only database queries with schema introspection",
  "title": "Database Query",
  "websiteUrl": "https://github.com/myorg/database-query-mcp",
  "repository": {
    "url": "https://github.com/myorg/database-query-mcp",
    "source": "github"
  },
  "version": "1.2.0",
  "packages": [
    {
      "registryType": "npm",
      "registryBaseUrl": "https://registry.npmjs.org",
      "identifier": "@myorg/database-query-mcp",
      "version": "1.2.0",
      "transport": {
        "type": "stdio"
      },
      "environmentVariables": [
        {
          "name": "DATABASE_URL",
          "description": "PostgreSQL connection string",
          "isRequired": true,
          "isSecret": true
        },
        {
          "name": "QUERY_TIMEOUT_MS",
          "description": "Maximum query execution time in milliseconds",
          "default": "5000"
        }
      ]
    }
  ]
}
```
For a Python server on PyPI, the package section looks different:
```json
{
  "packages": [
    {
      "registryType": "pypi",
      "registryBaseUrl": "https://pypi.org",
      "identifier": "database-query-mcp",
      "version": "1.2.0",
      "runtimeHint": "uvx",
      "transport": {
        "type": "stdio"
      },
      "environmentVariables": [
        {
          "name": "DATABASE_URL",
          "description": "PostgreSQL connection string",
          "isRequired": true,
          "isSecret": true
        }
      ]
    }
  ]
}
```
For remote servers using Streamable HTTP, you use the remotes field instead:
```json
{
  "remotes": [
    {
      "type": "streamable-http",
      "url": "https://mcp-db.myorg.com/http"
    }
  ]
}
```
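Before handing a manifest to the official tooling, a quick local sanity check can catch obvious mistakes early. This sketch validates only a few fields from the examples above; the official JSON schema is the real authority, and the reverse-DNS name pattern used here is a simplified assumption:

```python
import re

REQUIRED_FIELDS = ("name", "description", "version")
# Reverse-DNS namespace, a slash, then the server name,
# e.g. io.github.myorg/database-query
NAME_RE = re.compile(r"^[a-z0-9][a-z0-9.-]*\.[a-z0-9-]+/[a-z0-9][a-z0-9._-]*$")

def check_manifest(manifest: dict) -> list[str]:
    """Return a list of problems found (an empty list means it looks OK)."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS if f not in manifest]
    name = manifest.get("name", "")
    if name and not NAME_RE.match(name):
        problems.append(f"name {name!r} is not in reverse-DNS/server-name form")
    if "packages" not in manifest and "remotes" not in manifest:
        problems.append("manifest declares neither packages nor remotes")
    return problems
```

Running this in CI before `mcp-publisher validate` gives faster feedback on typos without a round trip to the registry.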
Step 2: Authenticate and Register Your Namespace
Install the publisher CLI and authenticate:
```bash
# Install the MCP publisher CLI
npm install -g mcp-publisher

# Authenticate via GitHub OAuth (for io.github.* namespaces)
mcp-publisher login github

# Or authenticate via domain verification (for com.yourcompany.* namespaces)
mcp-publisher login domain --domain yourcompany.com
```
GitHub authentication grants you access to the io.github.{username}/* and io.github.{org}/* namespaces. Domain authentication requires setting a DNS TXT record or hosting a verification file on your domain.
Step 3: Publish
```bash
# Validate your server.json before publishing
mcp-publisher validate server.json

# Publish to the official registry
mcp-publisher publish server.json
```
Step 4: Automate with GitHub Actions
For continuous publishing on release, use the GitHub Actions OIDC flow:
```yaml
# .github/workflows/publish-mcp.yml
name: Publish MCP Server

on:
  release:
    types: [published]

permissions:
  id-token: write # Required for OIDC authentication

jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
      - run: npm install -g mcp-publisher
      - run: mcp-publisher login github-oidc
      - run: mcp-publisher publish server.json
```
The OIDC flow eliminates the need for stored credentials — GitHub Actions automatically provides a short-lived token that proves the action is running in your repository.
Security and Trust Model
Security is the most critical differentiator between the MCP Registry and a simple list of GitHub repositories. MCP servers have direct access to sensitive resources — filesystems, databases, APIs, credentials. A malicious server could exfiltrate data, execute arbitrary code, or compromise an entire development environment.
Security Verification Pipeline
Multi-Layer Security
The registry implements defense in depth through multiple security layers:
1. Namespace Verification. Before you can publish anything, you must prove you own the namespace. For io.github.* namespaces, this means authenticating as the GitHub user or organization. For domain namespaces like com.company.*, this requires DNS or HTTP verification. This prevents name squatting and impersonation attacks.
2. Package Validation. The registry validates that the server.json manifest is structurally correct, references real packages on the declared registries, and uses consistent versioning. Malformed manifests are rejected at submission time.
3. Automated Security Scanning. Published servers undergo automated scanning that checks for common vulnerability patterns: SQL injection in tool handlers, command injection, hardcoded secrets, path traversal vulnerabilities, and cross-site scripting in any web-facing components. Enterprise-grade scanners use both pattern matching (YARA rules) and LLM-based semantic analysis.
4. Publisher Trust. The registry builds publisher trust over time. Servers from verified organizations with established track records appear higher in search results and may display verified badges in client UIs.
5. Subregistry Curation. Downstream subregistries (like the tool marketplaces built into Cursor or VS Code) add their own curation layer — featuring tested servers, adding user ratings, and blocking servers that fail additional quality checks.
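As an illustration of the pattern-matching half of layer 3, here is a minimal hardcoded-secret detector. The regexes are illustrative assumptions, not the registry's actual rules; production scanners combine many more patterns with semantic analysis:

```python
import re

# Illustrative patterns only -- real scanners use far larger rule sets
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
    "private_key_block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_source(text: str) -> list[str]:
    """Return the names of secret patterns found in a source string."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]
```

The same check is worth running locally before publishing, since a leaked key in an immutable published version cannot be retracted, only rotated.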
Security Feature Comparison
| Security Layer | What It Checks | When It Runs |
|---|---|---|
| Namespace Verification | Publisher identity matches claimed namespace | At authentication time |
| Manifest Validation | Schema compliance, package reference integrity | At submission time |
| Automated Code Scanning | Injection, secrets leakage, path traversal | At publish time + periodic re-scans |
| Transport Security | HTTPS enforcement, OAuth compliance for remote servers | At publish time |
| Community Reporting | User-submitted vulnerability reports | Ongoing |
| Runtime Monitoring | Anomalous behavior detection in remote servers | Ongoing (for remote servers) |
Governance and Namespace Management
Namespace Types
The registry supports two primary namespace types, each with different verification requirements:
GitHub Namespaces (io.github.*)
- Format: `io.github.{username}/{server-name}` or `io.github.{org}/{server-name}`
- Verification: GitHub OAuth or GitHub Actions OIDC
- Best for: Open-source projects, individual developers
- Example: `io.github.modelcontextprotocol/filesystem`
Domain Namespaces (com.company.*)
- Format: `{reversed-domain}/{server-name}` (e.g., `com.anthropic/claude-tools`)
- Verification: DNS TXT record or HTTP well-known file
- Best for: Enterprises, organizations with established domains
- Example: `com.stripe/payment-mcp`
Naming Conventions and Conflict Resolution
Server names must be globally unique within the registry. The reverse-DNS pattern naturally prevents most conflicts — io.github.alice/weather and io.github.bob/weather are distinct servers. Within a namespace, the publisher has full control over naming.
If a naming dispute arises (e.g., a domain changes ownership), the registry's core maintainers — currently from Anthropic, GitHub, and PulseMCP — adjudicate according to established policies. Domain-based namespaces follow the same dispute resolution patterns as DNS itself.
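The reverse-DNS convention can be parsed mechanically, which is roughly how a client might map a server name back to the authority that verified it. A sketch, where the mapping rules are an assumption based on the two namespace types described above:

```python
def verification_authority(server_name: str) -> tuple[str, str]:
    """Split a registry server name into (authority, verification kind).

    io.github.alice/weather -> ("github.com/alice", "github-oauth")
    com.stripe/payment-mcp  -> ("stripe.com", "dns-or-http")
    """
    namespace, _, _ = server_name.partition("/")
    labels = namespace.split(".")
    if labels[:2] == ["io", "github"]:
        return (f"github.com/{labels[2]}", "github-oauth")
    # Generic domain namespace: un-reverse the labels to recover the domain
    return (".".join(reversed(labels)), "dns-or-http")
```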
Deprecation and Archival
Published versions are immutable, but publishers can:
- Deprecate a version by updating its status, signaling clients to migrate
- Publish a new version that supersedes the deprecated one
- Archive an entire server, removing it from default search results while preserving its data for existing installations
The registry does not delete published versions because clients may have pinned dependencies on specific versions. Deprecated servers display warnings in client UIs but remain installable for backward compatibility.
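Clients resolving "latest" can honor those deprecation signals. Here is a sketch that picks the highest non-deprecated version; it assumes plain `MAJOR.MINOR.PATCH` strings and a boolean `deprecated` flag, which may differ from the registry's actual status field:

```python
from typing import Optional

def parse_semver(version: str) -> tuple[int, int, int]:
    """Parse 'MAJOR.MINOR.PATCH' into a comparable tuple."""
    major, minor, patch = version.split(".")[:3]
    return (int(major), int(minor), int(patch))

def resolve_latest(versions: list[dict]) -> Optional[str]:
    """Return the highest non-deprecated version string, or None."""
    live = [v for v in versions if not v.get("deprecated", False)]
    if not live:
        return None
    return max(live, key=lambda v: parse_semver(v["version"]))["version"]
```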
Private and Enterprise Registries
Not every MCP server belongs in a public catalog. Enterprises building internal tooling — database query servers, proprietary API wrappers, compliance tools — need private registries that enforce corporate security policies.
Self-Hosted Registry Deployment
The MCP Registry specification is an open API standard. Anyone can implement a registry that follows the same API shape. This means:
- Enterprise teams can deploy a private registry behind their firewall
- The same `mcp-publisher` CLI and GitHub Actions work against private registries
- MCP clients that support the official registry can be pointed at private instances
Several open-source implementations already exist, including containerized solutions that run on Kubernetes with PostgreSQL backends. A typical enterprise deployment looks like:
```bash
# Point the CLI at your private registry
mcp-publisher login github --registry https://mcp-registry.internal.company.com

# Publish to the private registry
mcp-publisher publish server.json --registry https://mcp-registry.internal.company.com
```
Integration with Corporate Security
Enterprise registries typically add layers on top of the base specification:
- Approval workflows: Servers require manual review before becoming available to agents
- Policy enforcement: Only servers declaring specific permission levels are allowed
- Audit trails: Every publish, install, and invocation is logged for compliance
- Network isolation: Air-gapped registries for environments with no internet access
Subregistry Architecture
The official registry is designed to be an upstream data source that subregistries build upon. An enterprise can sync the public catalog, apply internal filters, merge in private servers, and expose the combined result through the same API.
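In code, that curation step reduces to a merge-and-filter over two catalogs. A sketch, where the allowlist policy is an example rather than part of the spec:

```python
def build_subregistry(upstream: list[dict], private: list[dict],
                      allowed_namespaces: set[str]) -> list[dict]:
    """Merge a synced upstream catalog with private servers.

    Upstream entries are kept only if their namespace is allowlisted;
    private entries always win on name collisions.
    """
    def namespace(server: dict) -> str:
        return server["name"].partition("/")[0]

    merged = {s["name"]: s for s in upstream if namespace(s) in allowed_namespaces}
    for s in private:  # private servers override upstream duplicates
        merged[s["name"]] = s
    return sorted(merged.values(), key=lambda s: s["name"])
```

Because the merged result keeps the same entry shape, it can be served back through the same `/v0/servers` API surface that clients already understand.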
The Future of MCP Distribution
The MCP Registry is still in public preview, but its trajectory is clear. Several developments are shaping where the ecosystem is heading:
Richer Capability Declarations. Today's server.json describes what a server is and where to install it. Future versions will likely include structured capability declarations — listing exact tools, resources, and prompts a server provides. This enables clients to match user intent to server capabilities without installing anything first.
Automated Compatibility Testing. As the registry matures, expect automated test suites that verify servers work correctly with major clients. Think of it like browser compatibility testing, but for MCP: does this server work correctly with Claude Desktop? With Cursor? With a custom agent framework?
Cross-Registry Federation. The subregistry architecture already supports federation. As more organizations run private registries, standardized federation protocols will allow servers to be discovered across registry boundaries while respecting access controls.
Comparison with Existing Package Ecosystems. The MCP Registry's meta-registry approach is deliberately different from npm or PyPI, which host the actual packages. This has trade-offs:
| Aspect | npm / PyPI | MCP Registry |
|---|---|---|
| Hosts actual code | Yes | No (metadata only) |
| Single point of failure for distribution | Yes | No |
| Supports multiple package formats | No (one per registry) | Yes (npm, PyPI, Docker, etc.) |
| Supports remote servers | No | Yes (Streamable HTTP, SSE) |
| Namespace verification | Limited | Strong (OAuth, DNS, OIDC) |
The meta-registry approach trades simplicity for resilience and flexibility — a deliberate choice for an ecosystem where servers can be anything from a local Python script to a globally distributed HTTP service.
Best Practices
For Server Publishers
1. Write thorough server.json metadata. Your description, title, and environment variable documentation are how both humans and AI clients evaluate your server. Invest time in clear, specific descriptions. Explain what each environment variable does, which are required, and which contain secrets.
2. Use GitHub Actions for automated publishing. Manual publishing invites mistakes and staleness. Set up a workflow that publishes to the registry on every GitHub release. The OIDC flow means no credentials to manage.
3. Declare environment variables explicitly. Every secret your server needs — API keys, database URLs, auth tokens — should be declared in the manifest with isRequired and isSecret flags. Clients use this information to prompt users for configuration before installation.
4. Support multiple deployment models when possible. Offering both a local package (npm/PyPI) and a remote endpoint gives consumers maximum flexibility. Some users want local execution for data privacy; others want zero-setup remote access.
5. Follow semantic versioning strictly. The registry makes published versions immutable. Breaking changes must go in a new major version. Patch versions should always be safe to upgrade.
For Server Consumers
1. Verify the publisher namespace. Before installing a server, check that the namespace matches the expected publisher. io.github.anthropic/ is different from io.github.anthropic-tools/.
2. Pin versions in production. Never use latest-version resolution for production agents. Pin to specific versions and upgrade deliberately after testing.
3. Review environment variable requirements. Understand what secrets a server needs before installing. A server that requires broad filesystem access or database credentials should be evaluated carefully.
4. Prefer servers from verified publishers. Servers from established organizations with domain-verified namespaces have higher baseline trustworthiness than anonymous submissions.
5. Monitor for deprecation notices. Subscribe to updates for servers you depend on. When a version is deprecated, migrate promptly — deprecated versions may have known security issues.
⚠️ Common Mistakes:
- Publishing without namespace verification — Attempting to publish under a namespace you do not own results in rejection. Always authenticate and verify your namespace first.
- Hardcoding secrets in `server.json` — Never put actual API keys or credentials in the manifest. Use `environmentVariables` declarations so clients prompt users for these values at installation time.
- Ignoring `isSecret` flags — Failing to mark sensitive environment variables as `isSecret: true` means clients may log or display these values in plaintext. Always mark API keys, tokens, and connection strings as secrets.
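A client-side sketch shows why those flags matter: installers read them to decide what to prompt for and what to mask. The field names come from the `server.json` examples earlier; the prompt functions are injected (in practice `input` and `getpass.getpass`) so secrets never echo and the logic stays testable:

```python
from typing import Callable

def collect_env(declared: list[dict],
                ask: Callable[[str], str],
                ask_secret: Callable[[str], str]) -> dict[str, str]:
    """Resolve declared environment variables to concrete values.

    Variables flagged isSecret go through ask_secret so they are never
    echoed; empty answers for optional variables fall back to the
    declared default.
    """
    values = {}
    for var in declared:
        prompt = f"{var['name']} ({var.get('description', '')}): "
        answer = (ask_secret if var.get("isSecret") else ask)(prompt)
        if not answer and not var.get("isRequired"):
            answer = var.get("default", "")
        values[var["name"]] = answer
    return values
```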
FAQ
What is the GitHub MCP Registry?
The MCP Registry is a centralized discovery platform for MCP servers. It provides a searchable catalog where developers can find, evaluate, and install MCP servers, similar to npm for Node.js packages but specifically designed for AI tool integrations. The official registry at registry.modelcontextprotocol.io is community-owned and backed by Anthropic, GitHub, PulseMCP, and Microsoft.
How do I publish an MCP server to the registry?
Publishing involves four steps: create a server.json manifest describing your server, authenticate with the registry to claim your namespace (via GitHub OAuth or domain verification), validate your manifest using mcp-publisher validate, and submit with mcp-publisher publish. For continuous delivery, use GitHub Actions with OIDC authentication to publish automatically on every release.
How does the MCP Registry handle security?
The registry implements defense in depth: namespace verification ensures publisher identity, manifest validation checks structural correctness, automated scanning detects vulnerabilities (injection, secrets leakage, path traversal), and ongoing monitoring watches for anomalous behavior. Subregistries and AI client marketplaces add additional curation and rating layers. Servers with detected issues are flagged or disabled.
Can I host a private MCP Registry?
Yes. The MCP Registry specification is an open API standard that anyone can implement. Organizations deploy private registries behind firewalls using containerized solutions. The same CLI tools and GitHub Actions workflows work against private instances. Enterprise registries typically add approval workflows, policy enforcement, and audit logging on top of the base specification.
What is the difference between the MCP Registry and MCP server lists on GitHub?
Curated lists (like awesome-mcp-servers) are static collections maintained by individuals. The MCP Registry is a structured, API-driven catalog with namespace verification, version control, security scanning, and client integration. AI IDEs query the registry API to display installable servers; they cannot do that with a README file. The registry also enforces consistent metadata through the server.json format, making automated discovery and installation possible.
Summary
The MCP Registry solves the discovery problem for the AI tool ecosystem. By providing a standardized, secure, and API-driven catalog of MCP servers, it enables AI IDEs and agent platforms to programmatically find, evaluate, and install tools without manual configuration. Its meta-registry architecture — storing metadata while delegating code distribution to existing package registries — is a pragmatic design that avoids single points of failure while supporting every deployment model from local npm packages to globally distributed HTTP services.
For server publishers, the registry provides a clear path to distribution: write a server.json, verify your namespace, and publish. For consumers, it provides trust signals — namespace verification, security scanning, and community curation — that make it safer to extend AI agents with third-party capabilities.
The registry is still in preview, but the foundations are solid. As the MCP ecosystem continues to grow, the registry will become the backbone of AI tool distribution — the place where every MCP server goes to be found.
👉 Browse MCP Servers — Discover verified MCP servers for your AI workflow.
Related Resources
- MCP Protocol Deep Dive: Complete Guide — Foundational guide to MCP architecture and primitives
- MCP Server Performance Benchmark: Node.js vs Go — Performance comparison for server implementation choices
- The Art of AI Agent Tools: Best Practices for MCP Tools — How to design high-quality tools for LLMs
- MCP 2025-03-26 Spec Breakdown: OAuth, Remote Connections & Tool Annotations — Deep dive into the spec version that powers the registry
- MCP Glossary — Core terminology and concepts
- AI Agent Glossary — Understanding agent architectures