# TalentSprout MCP Server
Let any AI assistant -- Claude, ChatGPT, Cursor, Cline, and the rest -- create TalentSprout interviews on your behalf, using natural language. Same API key, same rate limits, same data as the REST API.
Server URL: `https://www.talentsprout.ai/api/mcp`
Auth header: `Authorization: Bearer tsk_...`

Built on the open Model Context Protocol with the Streamable HTTP transport. No SDK required on the client side -- every major MCP-compatible assistant ships native support.
## Quick start
**1. Create an API key**
Open Settings → Developers and click Create API key. The same tsk_* token authenticates the MCP server and the REST API.
**2. Add the connector to your AI assistant**
Pick your client below. The server URL is https://www.talentsprout.ai/api/mcp in every case.
**3. Verify the connection**
Ask the assistant to run the ping tool. You should see your organization_id and remaining rate-limit budget come back. From here, natural-language interview creation works.
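If you want to sanity-check the endpoint without an assistant in the loop, you can speak the protocol directly. The sketch below builds the JSON-RPC `tools/call` request for `ping` and POSTs it with the Bearer header from this page. It is a minimal sketch, not a reference client: a full Streamable HTTP session normally begins with an `initialize` handshake, so some servers may reject a bare call like this.

```python
import json
import urllib.request

MCP_URL = "https://www.talentsprout.ai/api/mcp"
API_KEY = "tsk_..."  # paste your real key here

# JSON-RPC 2.0 envelope for an MCP tools/call request.
request_body = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "ping", "arguments": {}},
}

def call_ping() -> dict:
    """POST the request over Streamable HTTP and decode the JSON response."""
    req = urllib.request.Request(
        MCP_URL,
        data=json.dumps(request_body).encode(),
        headers={
            "Content-Type": "application/json",
            # Streamable HTTP clients advertise both response media types.
            "Accept": "application/json, text/event-stream",
            "Authorization": f"Bearer {API_KEY}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

If the call succeeds, the result's structured content should match the `ping` response shown later on this page.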
## Claude Desktop
Edit your claude_desktop_config.json file (Claude Desktop → Settings → Developer → Edit Config) and add the entry below. Restart Claude Desktop afterwards.
```json
{
  "mcpServers": {
    "talentsprout": {
      "type": "http",
      "url": "https://www.talentsprout.ai/api/mcp",
      "headers": {
        "Authorization": "Bearer tsk_..."
      }
    }
  }
}
```

## Claude.ai (web)
1. Open Claude.ai → Settings → Integrations → Add integration
2. Choose "Custom integration"
3. Server URL: `https://www.talentsprout.ai/api/mcp`
4. Auth type: Custom header
   - Header name: `Authorization`
   - Header value: `Bearer tsk_...`
5. Save and approve the connector when prompted

## ChatGPT (custom connector)
1. Open ChatGPT → Settings → Connectors → Add custom connector (available on Pro / Team / Enterprise)
2. Server URL: `https://www.talentsprout.ai/api/mcp`
3. Auth type: Bearer token
   - Token: `tsk_...`
4. Save, then enable the connector in the conversation surface

## Cursor
Add to your project’s .cursor/mcp.json (per-project) or ~/.cursor/mcp.json (global). Restart Cursor.
```json
{
  "mcpServers": {
    "talentsprout": {
      "url": "https://www.talentsprout.ai/api/mcp",
      "headers": {
        "Authorization": "Bearer tsk_..."
      }
    }
  }
}
```

## Cline
Open the Cline panel → MCP Servers → Configure MCP Servers and add:
```json
{
  "mcpServers": {
    "talentsprout": {
      "url": "https://www.talentsprout.ai/api/mcp",
      "headers": {
        "Authorization": "Bearer tsk_..."
      }
    }
  }
}
```

**Working from a different MCP client?**
Any client that speaks Streamable HTTP and supports Bearer auth can connect. Point it at `https://www.talentsprout.ai/api/mcp` with the header `Authorization: Bearer tsk_...`.
## Authentication
Every tool call must include an API key in the Authorization header using the Bearer scheme. Keys are scoped to a single organization -- the same tsk_* tokens used by the REST API.
- Tokens start with `tsk_` followed by 32 random base62 characters (~190 bits of entropy).
- The full token is shown once, at creation time. Store it as a secret -- we keep only a SHA-256 hash and the first 12 characters (the "display prefix").
- Revoking a key in the dashboard is immediate. The next tool call from that client returns `authentication_error.invalid_api_key`.
- Need access to multiple TalentSprout organizations from a single assistant? Create one connector per org with the matching API key (see Operations).
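For illustration, here is how a token of the shape described above could be generated and stored. The helper names are hypothetical; only the token format (`tsk_` plus 32 base62 characters), the SHA-256 hashing, and the 12-character display prefix come from this page.

```python
import hashlib
import secrets

BASE62 = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"

def generate_token() -> str:
    # tsk_ prefix + 32 random base62 characters (~190 bits of entropy).
    return "tsk_" + "".join(secrets.choice(BASE62) for _ in range(32))

def stored_form(token: str) -> dict:
    # Only a hash and a short display prefix are persisted server-side;
    # the full token is shown to the user exactly once.
    return {
        "hash": hashlib.sha256(token.encode()).hexdigest(),
        "display_prefix": token[:12],
    }

token = generate_token()
record = stored_form(token)
```

Because only the hash survives, a leaked database dump does not reveal usable keys; the display prefix exists purely so you can tell keys apart in the dashboard.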
**Security:** Treat the API key like a password. AI assistants that store the connector config locally (Claude Desktop, Cursor, Cline) persist it on your machine; assistants that store it server-side (Claude.ai web, ChatGPT) keep it in their encrypted secret store. We surface leaked-key alerts via the GitHub Secret Scanning Partner Program.
## Tools
v1 ships three tools, deliberately mirroring the v1 REST API. Each tool advertises an `inputSchema` and an `outputSchema` so MCP-2025-11+ clients can chain results.
### ping
Read-only liveness check. Use this once after configuring the connector to verify your API key works. Returns the authenticated organization id, scopes, and remaining rate-limit budget.
**Example prompt**

```text
Use the talentsprout ping tool to check that my API key works.
```

**Example structured response**

```json
{
  "object": "ping",
  "message": "pong",
  "organization_id": "68f7f008f3a886da8c2d69af",
  "scopes": ["interviews:write"],
  "rate_limit_remaining": 119
}
```

### create_interview
Create an interview manually or from a template. Requires a `title` and either a non-empty `questions` array or a `template_id`. All other fields (`voice`, `languages`, `passing_score`, `require_video`, etc.) are optional. Mirrors `POST /v1/interviews`.
**Example prompt**

```text
Create a TalentSprout interview titled "Senior Backend Engineer" with these
questions:

1. Walk me through how you would design a rate-limiting layer for our public API.
2. Tell me about a production incident you led the response to.
3. How do you approach code review on a high-velocity team?

Set passing_score to 75 and require_video to false.
```

**Example structured response**
```json
{
  "id": "f8a4b2c1-9e3d-4f5a-b6c7-d8e9f0a1b2c3",
  "object": "interview",
  "title": "Senior Backend Engineer",
  "share_link": "https://www.talentsprout.ai/interview/f8a4b2c1-9e3d-4f5a-b6c7-d8e9f0a1b2c3",
  "questions": [
    { "question": "Walk me through how you would design a rate-limiting layer for our public API." },
    { "question": "Tell me about a production incident you led the response to." },
    { "question": "How do you approach code review on a high-velocity team?" }
  ],
  "status": "active",
  "voice": "alloy",
  "languages": ["en"],
  "duration": 15,
  "passing_score": 75,
  "require_video": false
}
```

### create_interview_from_job_description
Generate the title and 5-7 interview questions from a job description, then save the interview and return a share link plus an `ai_generation` block (model + timestamp). The job description must be at least 50 characters. Mirrors `POST /v1/interviews/from-job-description`.
**Example prompt**

```text
Read the attached job description, then create a TalentSprout interview from
it using the create_interview_from_job_description tool. Set the passing
score to 75. Send me back the share link when it is ready.
```

**Example structured response**
```json
{
  "id": "5c2e4a1b-7f6d-49b8-a3e1-d2c4f6b8a0d1",
  "object": "interview",
  "title": "Senior Product Designer",
  "share_link": "https://www.talentsprout.ai/interview/5c2e4a1b-7f6d-49b8-a3e1-d2c4f6b8a0d1",
  "questions": [
    { "question": "Walk us through a recent design system you shipped end-to-end." },
    { "question": "How do you balance user research with shipping velocity?" }
  ],
  "ai_generation": {
    "model": "gpt-4o-mini",
    "generated_at": "2026-05-08T10:24:11.421Z"
  }
}
```

## Rate limits
Per-organization sliding-window limits. The MCP server and the REST API share the **same** bucket per organization, so flipping between surfaces does not give you twice the budget.
| Tier | Limit | Tools |
|---|---|---|
| default | 120 / minute / org | `ping`, `create_interview` |
| ai | 20 / minute / org | `create_interview_from_job_description` |
When a tool is rate-limited, the response is an `isError` block with the type `rate_limit_exceeded`. The text content includes the number of seconds until the window resets so the AI assistant can wait or fall back.
## Errors
Tool errors come back as MCP `isError: true` results. The text content carries a human-readable message, the request id (always prefixed `mcp_*`), and a docs URL deep-linked to the matching error code in the REST API reference.
```text
Validation error: 'questions' is required. Provide at least one question or set 'template_id'.
request_id: mcp_2f9a3b1c4d5e6f7a8b9c0d1e
docs: https://www.talentsprout.ai/docs/api-reference#error-invalid_request
```

**Reading the structured envelope**
MCP-2025-11+ clients also receive a `structuredContent` payload that mirrors the REST `{ error: { type, code, ... } }` shape. If you wrote tooling against the REST envelope already, it works as-is here.
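If your tooling already handles the REST envelope, routing MCP errors through the same code path can look like the sketch below. The sample payload follows the `{ error: { type, code } }` shape described above with the error types this page documents; the handler names and return values are hypothetical.

```python
# A structured error payload of the REST-style shape described above
# (the concrete field values are illustrative, not captured output).
structured = {
    "error": {
        "type": "invalid_request_error",
        "code": "invalid_request",
        "message": "Validation error: 'questions' is required.",
        "request_id": "mcp_2f9a3b1c4d5e6f7a8b9c0d1e",
    }
}

def classify(payload: dict) -> str:
    """Map an error envelope to a coarse handling strategy."""
    err = payload.get("error", {})
    if err.get("type") == "rate_limit_exceeded":
        return "retry_later"   # wait for the window to reset
    if err.get("type") == "authentication_error":
        return "rotate_key"    # key revoked or invalid
    return "report"            # surface request_id to support

strategy = classify(structured)
```

Branching on `type` rather than the message text keeps the handler stable across wording changes in the human-readable content.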
## Operations
### Multiple organizations
Each API key is scoped to one organization. If you belong to several TalentSprout organizations and want all of them reachable from the same assistant, add one connector per org. The recommended naming convention is TalentSprout — <Org name> so the LLM picks the right one when there is ambiguity.
### Helping the LLM pick the right tool
Tool descriptions encode “when NOT to use” lines so models can disambiguate create_interview from create_interview_from_job_description. If you find a model picking the wrong one, the most effective intervention is usually to be more explicit in the prompt -- “use the AI generation tool” vs. “use the manual create tool with these questions”.
### Key rotation & revocation
Revoking a key in Settings → Developers takes effect immediately. The connector in your client will start returning invalid_api_key. Update the connector with a fresh key (Bearer header) and reconnect.
### Debugging a tool call
Every error includes a request id like mcp_2f9a3b1c4d5e6f7a8b9c0d1e. If you contact support, paste it -- our logs are tagged with [mcp-v1] and the same id, so we can find the call instantly.
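When you need to pull the request id out of an error message programmatically, a pattern like the one below works for ids of the shape shown above. Note the `{24}` quantifier is inferred from the sample id, not a documented guarantee, so treat it as an assumption.

```python
import re

# Matches ids shaped like the example above: "mcp_" + 24 hex characters.
# The length is inferred from the sample id, not a documented guarantee.
REQUEST_ID = re.compile(r"mcp_[0-9a-f]{24}")

def extract_request_id(error_text: str):
    """Return the first request id found in an error message, or None."""
    match = REQUEST_ID.search(error_text)
    return match.group(0) if match else None

sample = (
    "Validation error: 'questions' is required.\n"
    "request_id: mcp_2f9a3b1c4d5e6f7a8b9c0d1e\n"
    "docs: https://www.talentsprout.ai/docs/api-reference#error-invalid_request"
)
rid = extract_request_id(sample)
```

Logging this id alongside your own correlation ids makes the support round-trip described above a single copy-paste.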
## Webhooks
In v1, interviews created via MCP do not fire `interview.created` webhooks. Completion events (`interview.completed`, scoring, transcripts) fire normally. See the Webhooks docs for the full event catalog.
## Versioning
Tools follow a strict additive-compatibility policy. We may add new tools, new optional input fields, and new output fields without notice. Breaking changes (renaming a tool, removing an input field, changing a return type) ship as a new tool name (e.g. create_interview_v2) -- never as a silent change to an existing tool.
The MCP server URL /api/mcp is stable. We do not version the URL because the protocol version is negotiated at the connection handshake.