SupportLogic MCP Server — Secure Middleware for External AI Agents
MCP Server

Intelligence and grounded context for every AI assistant.

SupportLogic MCP Server is the secure middleware connecting Claude, ChatGPT, Gemini, Cursor, VS Code, and Zed directly to your enterprise support data — with zero-trust authentication, governance, and full audit logging.

Available now to SupportLogic customers.

System architecture
🤖 AI Client (Claude, ChatGPT…) → 🔐 MCP Gateway (Auth + policy) → 🗄️ Support Data Lake (Enriched signals)
Zero-trust auth · Audit logging · Granular scoping · JSON-RPC 2.0
Works with your preferred AI assistants & editors
Claude Desktop · ChatGPT · Gemini Code Assist · Cursor · VS Code · Zed
How it works

A standardized protocol. Real-time operational grounding.

Just as USB-C solved device charging compatibility, MCP replaces brittle, bespoke integration glue code with a single standardized protocol — so every AI client speaks the same language as your support systems.

01. AI client makes a request
Any MCP-compatible AI assistant — Claude, ChatGPT, Cursor, or a custom LangChain agent — sends a structured JSON-RPC 2.0 request to the SupportLogic MCP Server at https://mcp.supportlogic.io/mcp.
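The request envelope can be sketched in a few lines. `tools/call` is the standard MCP method for invoking a tool; the tool name `extract_signals` is borrowed from the tools listed later on this page, and the argument shape is illustrative, not SupportLogic's actual schema.

```python
import json

def build_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Serialize an MCP tools/call request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",          # required protocol version marker
        "id": request_id,           # lets the client match the response
        "method": "tools/call",    # MCP method for invoking a server tool
        "params": {"name": tool, "arguments": arguments},
    })

# Example: ask the server to analyze a snippet of customer text.
payload = build_tool_call(1, "extract_signals",
                          {"text": "This outage is blocking our release!"})
```

The same envelope shape carries every request in the protocol; only `method` and `params` change per call.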
02. MCP Gateway authenticates & enforces policy
Every request — regardless of origin — passes through the centralized MCP Gateway for real-time authentication and authorization. Role-based scoping ensures an agent authorized to read sentiment signals cannot trigger a case reassignment unless explicitly permitted.
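The scoping rule reads like a lookup: a role may invoke a tool only if it holds the scope that tool requires. A minimal sketch, assuming hypothetical scope labels (`signals:read`, `cases:write`) that stand in for SupportLogic's actual permission model:

```python
# Hypothetical role → granted scopes mapping.
GRANTED_SCOPES = {
    "sentiment-reader": {"signals:read"},
}

# Hypothetical tool → required scope mapping.
TOOL_REQUIRED_SCOPE = {
    "extract_signals": "signals:read",
    "reassign_case": "cases:write",
}

def authorize(role: str, tool: str) -> bool:
    """Allow a tool call only if the role holds the scope that tool requires."""
    required = TOOL_REQUIRED_SCOPE.get(tool)
    return required is not None and required in GRANTED_SCOPES.get(role, set())
```

Under this model a sentiment-reading agent passes the `extract_signals` check but is refused `reassign_case`, exactly the separation described above.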
03. Grounded access to enriched support intelligence
Approved requests execute against SupportLogic’s enriched support data lake — surfacing real-time sentiment scores, escalation risk, account health, and case context. LLM outputs are grounded in live operational signals, not static or outdated data.
Core primitives

Beyond APIs — three AI-native building blocks.

Unlike traditional REST APIs that focus on data endpoints, the SupportLogic MCP Server is built around three primitives that map directly to how AI agents reason and act.

Tools
Executable actions
with side effects
Functions that allow the AI to perform actions — such as re-assigning a case owner, triggering an escalation workflow, or updating a ticket field. Tools are the agent’s hands.
Resources
Read-only data objects
background knowledge
Structured data — ticket details, knowledge base articles, account health scores — that the AI references as context without the ability to modify it. Resources are the agent’s memory.
Prompts
Pre-defined templates
guided workflows
Templates that guide the AI through complex, multi-step workflows — such as professional response drafting or executive briefing generation — ensuring consistent, on-brand outcomes.
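The three primitives can be pictured as plain records in a server catalog. A simplified sketch; field names are reduced for illustration (the MCP specification defines the full schemas), and the entry names are hypothetical examples drawn from the descriptions above:

```python
from dataclasses import dataclass, field

@dataclass
class Tool:          # executable action with side effects — the agent's hands
    name: str
    description: str

@dataclass
class Resource:      # read-only context object — the agent's memory
    uri: str
    description: str

@dataclass
class Prompt:        # reusable multi-step template — the agent's playbook
    name: str
    arguments: list = field(default_factory=list)

# A toy catalog mixing all three primitives.
catalog = {
    "tools": [Tool("reassign_case", "Change a case owner")],
    "resources": [Resource("case://12345", "Ticket details")],
    "prompts": [Prompt("executive_briefing", ["case_id"])],
}
```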
Zero-trust security

Enterprise-grade governance for every agent interaction.

The MCP Gateway enforces a zero-trust model across all connected clients — Claude Desktop, custom LangChain agents, and SupportLogic-native workflows alike.

🔑
Flexible authentication
Per-user API keys for developer and programmatic access, plus Enterprise SSO via OAuth for identity-provider-level control. Access policies stay consistent with your existing governance frameworks — no parallel permission systems.
📋
Immutable audit logging
Every tool invocation is logged with full context: which client made the request, which tool was called, what data was accessed, and what was returned. When something needs investigation, the answer is in the audit trail — not buried in client-side logs.
🛡️
Granular permission scoping
Permissions are bound to the specific intent of each tool. An agent authorized to read sentiment signals cannot trigger a case reassignment unless explicitly permitted — eliminating privilege creep across complex multi-agent workflows.
📡
Real-time anomaly monitoring
The MCP Gateway continuously tracks tool call frequency and detects anomalous usage patterns across all connected clients. If a workflow fires an unusual volume of escalation triggers or a client attempts out-of-scope data access, monitoring surfaces it before it becomes a problem.
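One simple form of the frequency check described above is a sliding-window counter. A toy sketch; the threshold and window are made-up numbers, not SupportLogic defaults:

```python
from collections import deque

class CallRateMonitor:
    """Flag a client whose tool-call rate exceeds a per-window threshold."""

    def __init__(self, max_calls: int, window_seconds: float):
        self.max_calls = max_calls
        self.window = window_seconds
        self.timestamps = deque()

    def record(self, now: float) -> bool:
        """Record a tool call; return True if the current rate is anomalous."""
        self.timestamps.append(now)
        # Drop calls that have aged out of the window.
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        return len(self.timestamps) > self.max_calls

monitor = CallRateMonitor(max_calls=3, window_seconds=60.0)
flags = [monitor.record(t) for t in (0, 1, 2, 3, 4)]  # five calls in 5 seconds
```

The fourth call in the window trips the flag, giving the gateway a hook to throttle or alert before the burst does damage.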
Dynamic tool discovery
Instead of hard-coding APIs, AI agents dynamically discover available business capabilities — sentiment analysis, escalation prediction, knowledge search — at runtime. Agents gain access to new tools as they are added, without any code changes on the client side.
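Discovery works through the standard MCP `tools/list` method: the client asks, the server answers, and the client builds its callable set from whatever comes back. A sketch with a canned response standing in for a live server reply:

```python
# The discovery request a client sends at startup (standard MCP method).
discovery_request = {"jsonrpc": "2.0", "id": 7, "method": "tools/list"}

# Canned example of a server reply; tool entries mirror this page's tool list.
canned_response = {
    "jsonrpc": "2.0",
    "id": 7,
    "result": {"tools": [
        {"name": "extract_signals", "description": "Sentiment & urgency detection"},
        {"name": "query_search", "description": "Knowledge base search"},
    ]},
}

# The client's callable set is whatever the server advertised — nothing hard-coded.
available = {tool["name"] for tool in canned_response["result"]["tools"]}
```

If the server ships a new tool tomorrow, the same loop picks it up on the next `tools/list` call.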
🔗
Operational grounding
MCP ensures LLM outputs are grounded in real-time operational signals — sentiment, escalation risk, account health — rather than static, outdated data. No more AI operating in a vacuum, disconnected from what’s actually happening in your support queue.
Transport layers

State-of-the-art connectivity for every environment.

Local / on-device
STDIO
Optimized for local development and secure, on-device assistants like Claude Desktop. Zero network exposure for sensitive workflows.
Remote / cloud
HTTP + SSE
The preferred method for remote connections — streams live updates and analysis from the SupportLogic server to any MCP-compatible AI client in real time.
Protocol
JSON-RPC 2.0
All communication is wrapped in this lightweight protocol, providing structured error handling that allows AI agents to self-correct during multi-step tasks.
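Structured errors are what make self-correction possible: the agent branches on a machine-readable code instead of parsing free-text failures. A sketch using the JSON-RPC 2.0 standard error codes; the retry policy itself is illustrative:

```python
# Standard JSON-RPC 2.0 error codes.
METHOD_NOT_FOUND = -32601
INVALID_PARAMS = -32602

def next_step(response: dict) -> str:
    """Decide what a self-correcting agent does with a server response."""
    if "error" not in response:
        return "proceed"
    code = response["error"]["code"]
    if code == INVALID_PARAMS:
        return "repair-arguments"   # fix the call and retry
    if code == METHOD_NOT_FOUND:
        return "rediscover-tools"   # refresh the tool list, then retry
    return "abort"

step = next_step({"jsonrpc": "2.0", "id": 3,
                  "error": {"code": -32602, "message": "Invalid params"}})
```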
Available tools

Your SupportLogic intelligence, as callable functions.

The MCP Server transforms SupportLogic’s enriched support data lake into AI-native tools that any connected agent can discover and call at runtime.

Signals & Insights
extract_signals
Real-time sentiment & urgency detection
Analyzes text or structured case data to detect customer sentiment, urgency, and emotional tone in real time. Powers backlog triage, escalation prediction, and live coaching.
Signals & Insights
list_of_escalations
Active & predicted escalation queue
Returns recently escalated cases or those flagged as likely to escalate based on early-warning patterns — letting agents act before a situation breaks.
Quality & Case Intelligence
auto_qa
Automated QA & agent coaching
Evaluates case quality, Customer Effort Scores (CES), and agent performance across 100% of interactions — not just the 5–10% sampled manually.
Quality & Case Intelligence
case_details
Full case timeline & analysis
Retrieves computed health scores and detailed analysis for a specific case — including resolution attempts, agent interactions, and sentiment arc across the full lifecycle.
Quality & Case Intelligence
account_details
Account health & sentiment trends
Broad account-level monitoring — health scores, historical sentiment trajectory, prior escalation patterns, and renewal risk signals for a given customer account.
Knowledge Search
corpus_clarification
Contextual disambiguation
Resolves ambiguous questions before executing a knowledge search — improving answer accuracy by ensuring the agent understands what is actually being asked.
Knowledge Search
query_search
Knowledge base search
Performs semantic search across your enterprise knowledge base, surfacing the most relevant articles and resolutions for any given support query.
Knowledge Search
get_answer
Concise contextual answers
Retrieves direct, contextual answers for complex AI workflows — synthesizing knowledge base content into a response the agent can act on immediately.
Use cases

Real agentic workflows. Zero manual intervention.

See how agentic frameworks and MCP-compatible AI clients autonomously orchestrate SupportLogic intelligence to transform support operations.

Scenario 01
Executive escalation briefing — 30 minutes, zero scrambling
case_details · account_details
→ case_details (full timeline)
→ account_details (health + sentiment)
→ Prompt: executive briefing template
→ Push to Slack / Salesforce / Google Doc
A VP is pulled into an urgent escalation call with 30 minutes' notice. A Slack-connected AI agent fires autonomously the moment the calendar event is flagged — calling case_details for the full case timeline and account_details for the account's health score and sentiment trend. The outputs chain into a structured briefing prompt — covering root cause, the customer's emotional arc, resolution attempts, and recommended talking points. The finished brief lands in the exec's Slack thread before they've opened their laptop. MCP's dynamic tool discovery means the same workflow can simultaneously push to Salesforce and Google Docs, without any hardcoded integration work.
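The chaining logic is the interesting part, so this sketch stubs out the two tool calls; a real workflow would issue MCP `tools/call` requests, and the case, account, and field names here are invented:

```python
def case_details(case_id: str) -> dict:
    """Stub standing in for the case_details tool response."""
    return {"case_id": case_id,
            "root_cause": "API rate-limit regression",
            "sentiment_arc": "declining"}

def account_details(account_id: str) -> dict:
    """Stub standing in for the account_details tool response."""
    return {"account_id": account_id, "health": 42, "trend": "negative"}

def build_briefing(case_id: str, account_id: str) -> str:
    """Chain the two tool outputs into one executive-ready summary."""
    case = case_details(case_id)
    account = account_details(account_id)
    return (f"Escalation brief for case {case['case_id']}: "
            f"root cause {case['root_cause']}; "
            f"sentiment {case['sentiment_arc']}; "
            f"account health {account['health']} ({account['trend']}).")

brief = build_briefing("C-1029", "ACME")
```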
Scenario 02
Backlog triage after a product outage — 600 tickets, no manual sorting
extract_signals
→ extract_signals (parallel, full backlog)
→ Cluster by urgency + customer tier
→ Re-prioritize queue, bypass FIFO
→ Generate incident signal report
A platform incident hits at 2am. By morning, the queue has 600 new tickets — a mix of production-blocked enterprise customers, mid-market accounts venting frustration, and long-tail status questions. Without intelligent triage, a Fortune 500 customer sits behind a password reset. An agentic framework (LangGraph or n8n) fires automatically when the incident is declared, running extract_signals across the entire backlog in parallel — analyzing sentiment, urgency, and customer tier simultaneously. The agent clusters tickets into priority buckets, re-orders the queue, routes critical cases to senior agents, and generates an account-segment impact report for the retrospective.
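The re-prioritization step boils down to scoring each ticket on urgency plus customer tier and sorting the queue by that score instead of arrival order. A toy sketch; the scores, tier weights, and ticket shape are all invented:

```python
# Hypothetical weight per customer tier.
TIER_WEIGHT = {"enterprise": 2, "mid-market": 1, "smb": 0}

def triage(tickets: list) -> list:
    """Sort tickets so urgent, high-tier cases jump the FIFO queue."""
    return sorted(tickets,
                  key=lambda t: t["urgency"] + TIER_WEIGHT[t["tier"]],
                  reverse=True)

queue = triage([
    {"id": "T1", "urgency": 1, "tier": "smb"},          # long-tail status question
    {"id": "T2", "urgency": 3, "tier": "enterprise"},   # production blocked
    {"id": "T3", "urgency": 2, "tier": "mid-market"},   # frustrated account
])
order = [t["id"] for t in queue]
```

The production-blocked enterprise case jumps to the front, and the password reset no longer outranks a Fortune 500 outage.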
Scenario 03
100% QA coverage — without adding headcount
auto_qa
→ auto_qa (all tickets last 24h)
→ Score CES, quality, compliance
→ Coaching notes for flagged cases
→ Daily digest to supervisors
Traditional QA covers 5–10% of closed tickets. The rest — and every coaching opportunity inside them — goes unexamined. Feedback arrives weeks late. With SupportLogic MCP, a scheduled nightly agent triggers auto_qa across all tickets resolved in the past 24 hours — evaluating each for response quality, Customer Effort Score, communication guidelines, and escalation compliance. The agent aggregates scores, flags outliers, and generates specific coaching notes. Supervisors receive a daily digest of only the cases needing attention. Agents get near-real-time feedback instead of a monthly review cycle — compressing the learning loop dramatically and surfacing systemic training gaps at the team level.
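The digest step is a filter: keep only the cases whose scores fall below threshold, so supervisors review exceptions rather than every ticket. A minimal sketch, with an invented quality threshold and case shape:

```python
def flag_outliers(qa_scores: list, min_quality: float = 0.7) -> list:
    """Return cases whose auto_qa quality score falls below the threshold."""
    return [case for case in qa_scores if case["quality"] < min_quality]

# Scores as a nightly auto_qa pass might produce them (invented values).
digest = flag_outliers([
    {"case": "C1", "quality": 0.92},
    {"case": "C2", "quality": 0.55},   # flagged for coaching
    {"case": "C3", "quality": 0.81},
])
```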
Scenario 04
SLA breach prevention — always-on monitoring for high-value accounts
case_details · account_details
→ Poll all open high-priority cases (15min)
→ Threshold breach: case_details chain fires
→ Auto-reassign + Slack alert to on-call lead
→ account_details for structural pattern analysis
SLA breaches rarely arrive suddenly — they build from cases that went quiet or fell through shift handoffs. A persistent monitoring agent polls all open high-priority cases every 15 minutes. When a platinum-tier customer’s case crosses a configurable inactivity threshold — say, 2 hours from breach with no agent response — the MCP tool chain fires autonomously: case_details assembles full context, the case is re-assigned to an available senior agent, and an alert posts to the on-call lead’s Slack with the sentiment score and breach countdown attached. Over time, patterns in near-breach cases surface through account_details trend data — giving ops leaders the insight to fix structural causes, not just individual fires.
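The trigger condition in the polling loop combines time-to-breach with agent inactivity. A sketch of that check; the 2-hour threshold mirrors the example in the text, while the case shape and priority labels are invented:

```python
THRESHOLD_HOURS = 2.0  # fire when this close to breach, per the text's example

def needs_intervention(case: dict, now_hours: float) -> bool:
    """Fire the case_details chain when a high-priority case nears breach."""
    time_left = case["sla_deadline_hours"] - now_hours
    quiet_for = now_hours - case["last_agent_response_hours"]
    return (case["priority"] == "high"
            and time_left <= THRESHOLD_HOURS
            and quiet_for > 0)

# A platinum-tier case: 1.5h from breach, no agent response for 2.5h.
alert = needs_intervention(
    {"priority": "high",
     "sla_deadline_hours": 10.0,
     "last_agent_response_hours": 6.0},
    now_hours=8.5,
)
```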
Quick start

Connect in minutes. One URL, any client.

Retrieve your credentials from Settings → MCP in SupportLogic, then connect to your preferred AI assistant or editor. The remote server URL is the same for every client.

MCP Server URL
https://mcp.supportlogic.io/mcp
+ your Client ID
🤖 Claude Desktop (Pro plan required)
A. Via UI: Settings → Connectors → Add Custom Connector. Enter "SupportLogic" as the name, then paste your URL and Client ID.
B. Via config file: add to claude_desktop_config.json:
"mcpServers": {
  "supportlogic": {
    "type": "http",
    "url": "https://mcp.supportlogic.io/mcp"
  }
}
💬 ChatGPT (Plus / Enterprise)
1. Go to Settings → Apps & Connectors → Advanced Settings and toggle Developer Mode to ON.
2. Click Create in the Connectors settings.
3. Set the Connector URL to your SupportLogic MCP URL and use your Client ID for authentication.
📝 VS Code (GitHub Copilot)
1. Open Command Palette: Ctrl+Shift+P (Win) / Cmd+Shift+P (Mac) → search MCP: Add Server.
2. Choose transport: HTTP Streaming (SSE).
3. Paste your SupportLogic MCP URL, then ensure GitHub Copilot is in Agent Mode.
⌨️ Cursor
1. Navigate to Settings: Ctrl+Shift+J (Win) / Cmd+Shift+J (Mac) → select MCP.
2. Click Add new global MCP server.
3. Set type to url (Streamable HTTP) and paste your SupportLogic URL.
⚡ One-command install
Skip manual config — install across Claude, Cursor, and VS Code simultaneously:
npx add-mcp https://mcp.supportlogic.io/mcp
Security & compliance
ISO 27001 · SOC 2 Type II · GDPR Compliant · HIPAA Compliant · Zero-trust architecture · OAuth / Enterprise SSO

Elevate Your Support Experience

Reduce escalations and cut through backlog to increase customer retention and revenue with the first Support Experience Platform.