Model Context Protocol (MCP) Server

Our MCP server provides a standardized way to integrate profileAPI with AI applications and language models through the Model Context Protocol. This allows AI assistants to access our API endpoints directly as tools and resources.

What is MCP?

The Model Context Protocol (MCP) is an open standard that enables secure connections between AI applications and external data sources. It provides a unified way for AI assistants to access tools and resources while maintaining security and proper authentication.

Getting Started

Prerequisites

  • A valid profileAPI API key
  • An MCP-compatible client

Server Configuration

Our MCP server is available at the following endpoint and uses the streamable HTTP transport protocol:

POST https://api.profileapi.com/mcp

The server implements the MCP specification over this transport, allowing efficient, bidirectional communication between the client and the server.
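
For example, a client built with the MCP TypeScript SDK can connect over this transport. The sketch below is a minimal example rather than a definitive implementation: it assumes the @modelcontextprotocol/sdk package and its StreamableHTTPClientTransport, so check the SDK documentation for the exact class names and options.

  import { Client } from "@modelcontextprotocol/sdk/client/index.js";
  import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

  // Streamable HTTP transport pointed at the profileAPI MCP endpoint.
  // The Authorization header format is described in the Authentication section below.
  const transport = new StreamableHTTPClientTransport(
    new URL("https://api.profileapi.com/mcp"),
    { requestInit: { headers: { Authorization: "ApiKey YOUR_API_KEY_HERE" } } }
  );

  const client = new Client({ name: "profileapi-example", version: "1.0.0" });
  await client.connect(transport); // performs the MCP initialize handshake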

Authentication

Include your API key in the Authorization header:

Authorization: ApiKey YOUR_API_KEY_HERE

Make sure to store your API key securely and never expose it in client-side code or version control systems.
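
If you connect programmatically (as in the sketch above), one option is to build the header value from an environment variable rather than embedding the key in your code; the variable name below is only a placeholder.

  // Placeholder variable name; use whatever secret-management scheme you already have.
  const authHeader = "ApiKey " + process.env.PROFILEAPI_API_KEY;

  // Supply it through the transport options, e.g.
  // { requestInit: { headers: { Authorization: authHeader } } }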

Available Tools and Resources

Our MCP server exposes all of our API endpoints as MCP tools, allowing AI assistants to access the full range of profileAPI functionality through natural language interactions.
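
For example, continuing from the connected client in the Server Configuration sketch above, you can enumerate the exposed tools with the SDK's listTools call (assumed API; the exact result shape may differ, so consult the SDK documentation).

  // Uses the connected `client` from the Server Configuration sketch above.
  const { tools } = await client.listTools();
  for (const tool of tools) {
    console.log(`${tool.name}: ${tool.description}`);
  }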

Rate Limiting

Tools invoked through the MCP server are subject to the same rate limits as direct HTTPS calls to the corresponding endpoints. For more information, see our rate limiting documentation.

Example Usage

Once configured with a compatible MCP client, you can use natural language to interact with profileAPI:

"Find companies in the technology sector with more than 100 employees"
"Look up the email address for John Smith at Acme Corp"
"Find contact information for the CEO of Tesla"

The AI assistant will automatically use the appropriate MCP tools to fulfill these requests using our API endpoints.
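
Under the hood, an assistant fulfills such a prompt by invoking one of the MCP tools. The sketch below shows what a direct tool call looks like with the MCP TypeScript SDK; the tool name search_companies and its arguments are purely illustrative, so use a listTools call (see above) to discover the actual tool names and input schemas.

  // Illustrative only: the tool name and arguments below are hypothetical.
  // Uses the connected `client` from the Server Configuration sketch above.
  const result = await client.callTool({
    name: "search_companies",
    arguments: { industry: "technology", min_employees: 100 },
  });
  console.log(result.content);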