Overview
A System Prompt is your primary way to customize how your LLM interacts with the MCP Server. It defines your agent's behavior, persona, and rules for using MCP tools. The system prompt is a set of instructions sent to the LLM that:
- Defines the LLM's persona and tone
- Sets business rules and constraints
- Determines output format
You have full control over the system prompt to tailor the agent's behavior to your specific needs.
How to Use Your System Prompt
Your system prompt is passed to the LLM when making requests.
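As a sketch, with a generic chat-message format (the prompt text and the helper function here are illustrative, not part of the MCP Server API):

```python
# Illustrative sketch: prepend the system prompt to every chat request.
# The prompt wording and build_messages helper are examples only.
SYSTEM_PROMPT = """You are a helpful entertainment assistant.
Use the MCP tools to resolve titles before answering."""

def build_messages(user_query: str) -> list[dict]:
    """Return the message list for one request, system prompt first."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_query},
    ]

messages = build_messages("What's a good sci-fi movie?")
```

The same messages list is then passed to whichever chat-completions client your application uses.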
See Connecting to the MCP Server for a complete implementation.
What to Customize
1. MCP Server Usage Instructions (Required)
Always include instructions that help your LLM format requests correctly. They should cover:
- A suggested format for `resolve_entities` requests
- When to use `'Movie'` vs `'Show'` vs `'Episode'` entity types
- That root IDs need conversion to tmsIds for availability lookups
- To check individual tool descriptions for detailed parameters
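A usage-instructions block along these lines could go into the system prompt verbatim (the exact wording, and any parameter names beyond `resolve_entities`, are illustrative):

```text
When looking up titles, always call resolve_entities first.
- Use entity type 'Movie' for films, 'Show' for series, and 'Episode'
  for individual episodes.
- resolve_entities returns root IDs; convert them to tmsIds before
  any availability lookup.
- Read each tool's description for its detailed parameters.
```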
2. Persona and Tone
Define your agent's identity and voice.
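For instance, a persona block might read (wording is illustrative):

```text
You are a friendly, casual movie-night companion. Keep answers short
and conversational, and avoid technical jargon.
```

Or, for a more formal product:

```text
You are a professional content concierge for an enterprise media
catalog. Respond precisely and cite the data returned by the tools.
```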
3. Tool Orchestration
Tell the agent how to orchestrate tools, including the order to call them in when a sequence matters.
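An orchestration block might look like this (the three-step sequence is an assumption based on the ID-conversion note above; only `resolve_entities` is a confirmed tool name):

```text
To answer any availability question:
1. Call resolve_entities to identify the title the user means.
2. Convert the returned root ID to a tmsId.
3. Only then perform the availability lookup; never guess IDs.
```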
4. Business Rules
Implement logic specific to your application.
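Business rules are plain instructions in the prompt; for example (the rules themselves are illustrative):

```text
- Only recommend titles the user can actually watch: filter results to
  the streaming services the user has named.
- Never recommend the same title twice in one conversation.
- If a title cannot be resolved, ask a clarifying question instead of
  guessing.
```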
5. Output Format
Specify how responses should be structured based on your use case:
For user-facing applications, use natural language responses. Example applications include:
- Chatbots and conversational interfaces
- Search interfaces where users see results directly
- Recommendation engines with explanations
- Voice assistants
- Customer-facing web/mobile apps
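A natural-language output instruction for such an application could read (illustrative):

```text
Respond in friendly, conversational prose. Suggest at most three
titles, and for each one explain in a sentence why it fits the
user's request.
```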
For data processing applications (e.g., catalog enrichment, API endpoints), specify a structured output format. Examples include:
- Batch processing and ETL pipelines
- Data harmonization and catalog enrichment
- API endpoints that return structured data
- Database updates and synchronization
- Microservices and backend integrations
- Any application that parses the response programmatically
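A structured-output instruction for such an application could read (the JSON field names are illustrative):

```text
Return ONLY a JSON object, with no surrounding prose or code fences:
{
  "matches": [
    {"title": "...", "tmsId": "...", "confidence": 0.0}
  ]
}
If no match is found, return {"matches": []}.
```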
Claude-specific note: Claude models may wrap JSON in markdown code fences (```json ... ```) or add introductory prose even when instructed not to. If your application parses structured JSON from Claude, include defensive parsing that strips code fences and extracts JSON from surrounding text. See the Catalog Enrichment example for a robust implementation. Other LLMs (GPT-4, Gemini, Llama) may not require these workarounds, especially when using native JSON mode features.
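Such defensive parsing can be sketched as follows (a minimal version using only the standard library; the Catalog Enrichment example is more robust):

```python
import json
import re

def extract_json(text: str):
    """Parse JSON from an LLM reply that may include fences or prose."""
    # Strip a markdown code fence (```json ... ```) if present.
    fenced = re.search(r"```(?:json)?\s*(.*?)```", text, re.DOTALL)
    if fenced:
        text = fenced.group(1)
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        # Fall back to the first {...} span in the surrounding text.
        match = re.search(r"\{.*\}", text, re.DOTALL)
        if match:
            return json.loads(match.group(0))
        raise
```

This handles clean JSON, fenced JSON, and JSON embedded in prose; replies with none of these still raise a `json.JSONDecodeError` for the caller to handle.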