Explanation
Understanding-oriented content for gomcptest architecture and concepts
Explanation documents discuss and clarify concepts to broaden the reader’s understanding of topics. They provide context and illuminate ideas.
This section provides deeper background on how gomcptest works, its architecture, and the concepts behind it.
1 - gomcptest Architecture
Deep dive into the system architecture and design decisions
This document explains the architecture of gomcptest, the design decisions behind it, and how the various components interact to create a custom Model Context Protocol (MCP) host.
The Big Picture
The gomcptest project implements a custom host with its own Model Context Protocol (MCP) implementation. It's designed to enable testing and experimentation with agentic systems without requiring direct integration with commercial LLM platforms.
The system is built with these key principles in mind:
- Modularity: Components are designed to be interchangeable
- Compatibility: The API mimics the OpenAI API for easy integration
- Extensibility: New tools can be easily added to the system
- Testing: The architecture facilitates testing of agentic applications
Core Components
Host (OpenAI Server)
The host is the central component, located in /host/openaiserver. It presents an OpenAI-compatible API interface and connects to Google's Vertex AI for model inference. This compatibility layer makes it easy to integrate with existing tools and libraries designed for OpenAI.
The host has several key responsibilities:
- API Compatibility: Implementing the OpenAI chat completions API
- Session Management: Maintaining chat history and context
- Model Integration: Connecting to Vertex AI’s Gemini models
- Function Calling: Orchestrating function/tool calls based on model outputs
- Response Streaming: Supporting streaming responses to the client
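Because the interface is the standard chat completions API, any HTTP client can talk to the host directly. The sketch below assumes a locally running openaiserver instance; the address, port, and model name are placeholders rather than values defined by gomcptest:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Placeholder address; adjust to wherever your openaiserver instance listens.
	url := "http://localhost:8080/v1/chat/completions"

	// A minimal OpenAI-style chat completion request body (model name is a placeholder).
	body := []byte(`{
		"model": "gemini-pro",
		"messages": [{"role": "user", "content": "List the files in the current directory"}]
	}`)

	resp, err := http.Post(url, "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Print the raw JSON response (or the SSE stream, if streaming was requested).
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}
```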
Unlike commercial implementations, this host is designed for local development and testing, emphasizing flexibility and observability over production-ready features like authentication or rate limiting.
Tools
The tools are standalone executables that implement the Model Context Protocol. Each tool is designed to perform a specific function, such as executing shell commands or manipulating files.
Tools follow a consistent pattern:
- They communicate via standard I/O using the MCP JSON-RPC protocol
- They expose a specific set of parameters
- They handle their own error conditions
- They return results in a standardized format
This approach allows tools to be:
- Developed independently
- Tested in isolation
- Used in different host environments
- Chained together in complex workflows
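To make the standalone-executable model concrete, here is a minimal sketch of how a host-like program could start a tool process and exchange a single request and response over standard I/O. The binary path and request fields are hypothetical, and the real host adds JSON-RPC framing, long-lived processes, and error handling on top of this:

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	// Hypothetical tool binary; any MCP tool that reads a JSON request on stdin works the same way.
	cmd := exec.Command("./bin/ls-tool")

	// The request the host writes to the tool's stdin (parameter name is illustrative).
	cmd.Stdin = bytes.NewReader([]byte(`{"path": "."}`))

	var stdout bytes.Buffer
	cmd.Stdout = &stdout

	if err := cmd.Run(); err != nil {
		fmt.Println("tool failed:", err)
		return
	}

	// The tool's JSON response arrives on stdout.
	fmt.Println(stdout.String())
}
```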
CLI
The CLI provides a user interface similar to tools like “Claude Code” or “OpenAI ChatGPT”. It connects to the OpenAI-compatible server and provides a way to interact with the LLM and tools through a conversational interface.
Data Flow
- The user sends a request to the CLI
- The CLI forwards this request to the OpenAI-compatible server
- The server sends the request to Vertex AI’s Gemini model
- The model may identify function calls in its response
- The server executes these function calls by invoking the appropriate MCP tools
- The results are provided back to the model to continue its response
- The final response is streamed back to the CLI and presented to the user
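The same flow can be summarized as a host-side loop. This is a simplified sketch, not the actual server code: the types and the callModel and runTool helpers are stand-ins for the real Vertex AI client and MCP tool plumbing, and the parameter names are invented.

```go
package main

import "fmt"

// Message is a simplified chat message; the real server keeps richer state.
type Message struct {
	Role    string
	Content string
}

// FunctionCall is a simplified tool invocation requested by the model.
type FunctionCall struct {
	Name   string
	Params map[string]any
}

// ModelResponse is what one round-trip to the model yields.
type ModelResponse struct {
	Text  string
	Calls []FunctionCall
}

// callModel stands in for the Vertex AI / Gemini round-trip.
func callModel(history []Message) ModelResponse {
	// Stub: pretend the model asks for one tool call, then finishes.
	if len(history) == 1 {
		return ModelResponse{Calls: []FunctionCall{{Name: "LS", Params: map[string]any{"path": "."}}}}
	}
	return ModelResponse{Text: "Here are the files I found..."}
}

// runTool stands in for launching the MCP tool executable and decoding its JSON result.
func runTool(call FunctionCall) string {
	return `{"result": "main.go, README.md"}`
}

func main() {
	history := []Message{{Role: "user", Content: "List the files in this directory"}}

	for {
		resp := callModel(history)
		if len(resp.Calls) == 0 {
			// No more tool calls: stream/return the final answer to the client.
			fmt.Println(resp.Text)
			return
		}
		// Execute each requested tool call and feed the result back into the conversation.
		for _, call := range resp.Calls {
			result := runTool(call)
			history = append(history, Message{Role: "tool", Content: result})
		}
	}
}
```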
Design Decisions Explained
Why OpenAI API Compatibility?
The OpenAI API has become a de facto standard in the LLM space. By implementing this interface, gomcptest can work with a wide variety of existing tools, libraries, and frontends with minimal adaptation.
Why Google Vertex AI?
Vertex AI provides access to Google’s Gemini models, which have strong function calling capabilities. The implementation could be extended to support other model providers as needed.
Why Standalone Tool Executables?
By implementing tools as standalone executables rather than library functions, we gain several advantages:
- Security through isolation
- Language agnosticism (tools can be written in any language)
- Ability to distribute tools separately from the host
- Easier testing and development
Why MCP?
The Model Context Protocol provides a standardized way for LLMs to interact with external tools. By adopting this protocol, gomcptest ensures compatibility with tools developed for other MCP-compatible hosts.
Limitations and Future Directions
The current implementation has several limitations:
- Single chat session per instance
- Limited support for authentication and authorization
- No persistence of chat history between restarts
- No built-in support for rate limiting or quotas
Future enhancements could include:
- Support for multiple chat sessions
- Integration with additional model providers
- Enhanced security features
- Improved error handling and logging
- Performance optimizations for large-scale deployments
Conclusion
The gomcptest architecture represents a flexible and extensible approach to building custom MCP hosts. It prioritizes simplicity, modularity, and developer experience, making it an excellent platform for experimentation with agentic systems.
By understanding this architecture, developers can effectively utilize the system, extend it with new tools, and potentially adapt it for their specific needs.
2 - Understanding the Model Context Protocol (MCP)
Exploration of what MCP is, how it works, and design decisions behind it
This document explores the Model Context Protocol (MCP), how it works, the design decisions behind it, and how it compares to alternative approaches for LLM tool integration.
What is the Model Context Protocol?
The Model Context Protocol (MCP) is a standardized communication protocol that enables Large Language Models (LLMs) to interact with external tools and capabilities. It defines a structured way for models to request information or take actions in the real world, and for tools to provide responses back to the model.
MCP is designed to solve the problem of extending LLMs beyond their training data by giving them access to:
- Current information (e.g., via web search)
- Computational capabilities (e.g., calculators, code execution)
- External systems (e.g., databases, APIs)
- User environment (e.g., file system, terminal)
How MCP Works
At its core, MCP is a protocol based on JSON-RPC that enables bidirectional communication between LLMs and tools. The basic workflow is:
- The LLM generates a call to a tool with specific parameters
- The host intercepts this call and routes it to the appropriate tool
- The tool executes the requested action and returns the result
- The result is injected into the model’s context
- The model continues generating a response incorporating the new information
The protocol specifies:
- How tools declare their capabilities and parameters
- How the model requests tool actions
- How tools return results or errors
- How multiple tools can be combined
MCP in gomcptest
In gomcptest, MCP is implemented using a set of independent executables that communicate over standard I/O. This approach has several advantages:
- Language-agnostic: Tools can be written in any programming language
- Process isolation: Each tool runs in its own process for security and stability
- Compatibility: The protocol works with various LLM providers
- Extensibility: New tools can be easily added to the system
Each tool in gomcptest follows a consistent pattern:
- It receives a JSON request on stdin
- It parses the parameters and performs its action
- It formats the result as JSON and returns it on stdout
The Protocol Specification
The core MCP protocol in gomcptest follows this format:
Tool Registration
Tools register themselves with a schema that defines their capabilities:
```json
{
  "name": "ToolName",
  "description": "Description of what the tool does",
  "parameters": {
    "type": "object",
    "properties": {
      "param1": {
        "type": "string",
        "description": "Description of parameter 1"
      },
      "param2": {
        "type": "number",
        "description": "Description of parameter 2"
      }
    },
    "required": ["param1"]
  }
}
```
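In Go, a registration schema like the one above maps naturally onto a pair of small structs. The types shown here are a sketch of how the JSON shape could be modeled, not the definitions used inside gomcptest:

```go
package main

import (
	"encoding/json"
	"os"
)

// Property describes a single parameter in the tool's JSON schema.
type Property struct {
	Type        string `json:"type"`
	Description string `json:"description"`
}

// ToolSchema is the registration payload a tool advertises to the host.
type ToolSchema struct {
	Name        string `json:"name"`
	Description string `json:"description"`
	Parameters  struct {
		Type       string              `json:"type"`
		Properties map[string]Property `json:"properties"`
		Required   []string            `json:"required"`
	} `json:"parameters"`
}

func main() {
	schema := ToolSchema{Name: "ToolName", Description: "Description of what the tool does"}
	schema.Parameters.Type = "object"
	schema.Parameters.Properties = map[string]Property{
		"param1": {Type: "string", Description: "Description of parameter 1"},
		"param2": {Type: "number", Description: "Description of parameter 2"},
	}
	schema.Parameters.Required = []string{"param1"}

	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")
	enc.Encode(schema) // prints the same JSON document shown above
}
```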
Function Call Request
When a model wants to use a tool, it generates a function call like:
```json
{
  "name": "ToolName",
  "params": {
    "param1": "value1",
    "param2": 42
  }
}
```
Function Call Response
The tool executes the requested action and returns:
```json
{
  "result": "Output of the tool's execution"
}
```
Or, in case of an error:
```json
{
  "error": {
    "message": "Error message",
    "code": "ERROR_CODE"
  }
}
```
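The request and response shapes are just as easy to model in Go. The following sketch keeps success and error responses as separate structs purely for clarity; the actual implementation may organize them differently:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// FunctionCall is what the model emits when it wants to use a tool.
type FunctionCall struct {
	Name   string         `json:"name"`
	Params map[string]any `json:"params"`
}

// SuccessResponse carries a tool's normal result.
type SuccessResponse struct {
	Result string `json:"result"`
}

// ErrorResponse carries a structured error back to the model.
type ErrorResponse struct {
	Error struct {
		Message string `json:"message"`
		Code    string `json:"code"`
	} `json:"error"`
}

func main() {
	call := FunctionCall{Name: "ToolName", Params: map[string]any{"param1": "value1", "param2": 42}}
	out, _ := json.Marshal(call)
	fmt.Println(string(out)) // {"name":"ToolName","params":{"param1":"value1","param2":42}}
}
```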
Design Decisions in MCP
Several key design decisions shape the MCP implementation in gomcptest:
Standard I/O Communication
By using stdin/stdout for communication, tools can be written in any language that can read from stdin and write to stdout. This makes it easy to integrate existing utilities and libraries.
JSON Schema Definitions
Using JSON Schema for tool definitions provides a clear contract between the model and the tools. It enables:
- Validation of parameters
- Documentation of capabilities
- Potential for automatic code generation
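One practical benefit is that a host can check a call's parameters against the declared schema before launching a tool. The sketch below performs only the simplest checks (required fields and a rough type match); real JSON Schema validation would normally rely on a dedicated library:

```go
package main

import "fmt"

// ParamSpec is a minimal slice of a tool schema: required parameter names and expected JSON types.
type ParamSpec struct {
	Required []string
	Types    map[string]string // parameter name -> "string", "number", ...
}

// validate performs a shallow check of params against the spec.
func validate(spec ParamSpec, params map[string]any) error {
	for _, name := range spec.Required {
		if _, ok := params[name]; !ok {
			return fmt.Errorf("missing required parameter %q", name)
		}
	}
	for name, value := range params {
		switch spec.Types[name] {
		case "string":
			if _, ok := value.(string); !ok {
				return fmt.Errorf("parameter %q must be a string", name)
			}
		case "number":
			if _, ok := value.(float64); !ok { // encoding/json decodes JSON numbers to float64
				return fmt.Errorf("parameter %q must be a number", name)
			}
		}
	}
	return nil
}

func main() {
	spec := ParamSpec{Required: []string{"param1"}, Types: map[string]string{"param1": "string", "param2": "number"}}
	err := validate(spec, map[string]any{"param2": 42.0})
	fmt.Println(err) // missing required parameter "param1"
}
```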
Stateless Design
Tools are designed to be stateless, with each invocation being independent. This simplifies the protocol and makes tools easier to reason about and test.
Pass-through Authentication
The protocol doesn’t handle authentication directly; instead, it relies on the host to manage permissions and authentication. This separation of concerns keeps the protocol simple.
Comparison with Alternatives
vs. OpenAI Function Calling
MCP is similar to OpenAI’s function calling feature but with these key differences:
- MCP is designed to be provider-agnostic
- MCP tools run as separate processes
- MCP provides more detailed error handling
vs. LangChain
Compared to LangChain:
- MCP is a lower-level protocol rather than a framework
- MCP focuses on interoperability rather than abstraction
- MCP allows for stronger process isolation
vs. Agent Protocols
Other agent protocols often focus on higher-level concepts like goals and planning, while MCP focuses specifically on the mechanics of tool invocation.
Future Directions
The MCP protocol in gomcptest could evolve in several ways:
- Enhanced security: More granular permissions and sandboxing
- Streaming responses: Support for tools that produce incremental results
- Bidirectional communication: Supporting tools that can request clarification
- Tool composition: First-class support for chaining tools together
- State management: Optional session state for tools that need to maintain context
Conclusion
The Model Context Protocol as implemented in gomcptest represents a pragmatic approach to extending LLM capabilities through external tools. Its simplicity, extensibility, and focus on interoperability make it a solid foundation for building and experimenting with agentic systems.
By understanding the protocol, developers can create new tools that seamlessly integrate with the system, unlocking new capabilities for LLM applications.
3 - Understanding the MCP Tools
Detailed explanation of the MCP tools architecture and implementation
This document explains the architecture and implementation of the MCP tools in gomcptest, how they work, and the design principles behind them.
MCP (Model Context Protocol) tools are standalone executables that provide specific functions that can be invoked by AI models. They allow the AI to interact with its environment, performing tasks like reading and writing files, executing commands, or searching for information.
In gomcptest, tools are implemented as independent Go executables that follow a standard protocol for receiving requests and returning results through standard input/output streams.
Each tool in gomcptest follows a consistent architecture:
- Standard I/O Interface: Tools communicate via stdin/stdout using JSON-formatted requests and responses
- Parameter Validation: Tools validate their input parameters according to a JSON schema
- Stateless Execution: Each tool invocation is independent and does not maintain state
- Controlled Access: Tools implement appropriate security measures and permission checks
- Structured Results: Results are returned in a standardized JSON format
Common Components
Most tools share these common components:
- Main Function: Parses JSON input, validates parameters, executes the core function, formats and returns the result
- Parameter Structure: Defines the expected input parameters for the tool
- Result Structure: Defines the format of the tool’s output
- Error Handling: Standardized error reporting and handling
- Security Checks: Validation to prevent dangerous operations
Tool Categories
The tools in gomcptest can be categorized into several functional groups:
Filesystem Navigation
- LS: Lists files and directories, providing metadata and structure
- GlobTool: Finds files matching specific patterns, making it easier to locate relevant files
- GrepTool: Searches file contents using regular expressions, helping find specific information in codebases
Content Management
- View: Reads and displays file contents, allowing the model to analyze existing code or documentation
- Edit: Makes targeted modifications to files, enabling precise changes without overwriting the entire file
- Replace: Completely overwrites file contents, useful for generating new files or making major changes
System Interaction
- Bash: Executes shell commands, allowing the model to run commands, scripts, and programs
- dispatch_agent: A meta-tool that can create specialized sub-agents for specific tasks
Design Principles
The tools in gomcptest were designed with several key principles in mind:
1. Modularity
Each tool is a standalone executable that can be developed, tested, and deployed independently. This modular approach allows for:
- Independent development cycles
- Targeted testing
- Simpler debugging
- Ability to add or replace tools without affecting the entire system
2. Security
Security is a major consideration in the tool design:
- Tools validate inputs to prevent injection attacks
- File operations are limited to appropriate directories
- Bash command execution is restricted with banned commands
- Timeouts prevent infinite operations
- Process isolation prevents one tool from affecting others
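As one illustration of these checks, a Bash-style tool might reject obviously dangerous commands and bound execution time with a context deadline. The banned patterns and timeout below are examples, not the policy actually shipped with gomcptest:

```go
package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// bannedCommands is illustrative only; the real tool defines its own policy.
var bannedCommands = []string{"rm -rf /", "shutdown", "mkfs"}

func runCommand(command string, timeout time.Duration) (string, error) {
	for _, banned := range bannedCommands {
		if strings.Contains(command, banned) {
			return "", fmt.Errorf("command rejected: contains banned pattern %q", banned)
		}
	}

	// The context deadline kills the child process if it runs too long.
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()

	out, err := exec.CommandContext(ctx, "sh", "-c", command).CombinedOutput()
	if ctx.Err() == context.DeadlineExceeded {
		return "", fmt.Errorf("command timed out after %s", timeout)
	}
	return string(out), err
}

func main() {
	out, err := runCommand("echo hello", 5*time.Second)
	fmt.Println(out, err)
}
```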
3. Simplicity
The tools are designed to be simple to understand and use:
- Clear, focused functionality for each tool
- Straightforward parameter structures
- Consistent result formats
- Well-documented behaviors and limitations
4. Extensibility
The system is designed to be easily extended:
- New tools can be added by following the standard protocol
- Existing tools can be enhanced with additional parameters
- Alternative implementations can replace existing tools
Communication Protocol
The communication protocol for tools follows this pattern:
Tools receive JSON input on stdin in this format:
```json
{
  "param1": "value1",
  "param2": "value2",
  "param3": 123
}
```
Tools return JSON output on stdout in one of these formats:
Success:
```json
{
  "result": "text result"
}
```
or
```json
{
  "results": [
    {"field1": "value1", "field2": "value2"},
    {"field1": "value3", "field2": "value4"}
  ]
}
```
Error:
```json
{
  "error": "Error message",
  "code": "ERROR_CODE"
}
```
Implementation Examples
Most tools follow this basic structure:
```go
package main

import (
	"encoding/json"
	"os"
)

// Parameters defines the expected input structure
type Parameters struct {
	Param1 string `json:"param1"`
	Param2 int    `json:"param2,omitempty"`
}

// Result defines the output structure
type Result struct {
	Result string `json:"result,omitempty"`
	Error  string `json:"error,omitempty"`
	Code   string `json:"code,omitempty"`
}

func main() {
	// Parse input
	var params Parameters
	decoder := json.NewDecoder(os.Stdin)
	if err := decoder.Decode(&params); err != nil {
		outputError("Failed to parse input", "INVALID_INPUT")
		return
	}

	// Validate parameters
	if params.Param1 == "" {
		outputError("param1 is required", "MISSING_PARAMETER")
		return
	}

	// Execute core functionality
	result, err := executeTool(params)
	if err != nil {
		outputError(err.Error(), "EXECUTION_ERROR")
		return
	}

	// Return result
	output := Result{Result: result}
	encoder := json.NewEncoder(os.Stdout)
	encoder.Encode(output)
}

func executeTool(params Parameters) (string, error) {
	// Tool-specific logic here
	return "result", nil
}

func outputError(message, code string) {
	result := Result{
		Error: message,
		Code:  code,
	}
	encoder := json.NewEncoder(os.Stdout)
	encoder.Encode(result)
}
```
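To make the skeleton concrete, executeTool for a hypothetical file-reading tool might look like the following, treating Param1 as a path and Param2 as an optional size limit (a meaning invented here for illustration). It drops into the skeleton above without further changes:

```go
// A hypothetical executeTool for a simple file-reading tool: Param1 is treated
// as a file path and Param2 as an optional maximum number of bytes to return.
func executeTool(params Parameters) (string, error) {
	data, err := os.ReadFile(params.Param1)
	if err != nil {
		return "", err
	}
	if params.Param2 > 0 && len(data) > params.Param2 {
		// Truncate large files so the result does not overflow the model's context.
		data = data[:params.Param2]
	}
	return string(data), nil
}
```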
Advanced Concepts
Tool Composition
The dispatch_agent tool demonstrates how tools can be composed to create more powerful capabilities. It:
- Accepts a high-level task description
- Plans a sequence of tool operations to accomplish the task
- Executes these operations using the available tools
- Synthesizes the results into a coherent response
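Its control flow can be pictured as a small planning loop. Everything in this sketch is illustrative: the helper names, the step representation, and the tool parameters are assumptions rather than details of the actual dispatch_agent implementation.

```go
package main

import "fmt"

// Step is one planned tool invocation.
type Step struct {
	Tool   string
	Params map[string]any
}

// planNextStep stands in for asking the model what to do next; it returns
// false when the task is considered complete.
func planNextStep(task string, done []string) (Step, bool) {
	if len(done) >= 2 {
		return Step{}, false
	}
	steps := []Step{
		{Tool: "GlobTool", Params: map[string]any{"pattern": "**/*.go"}},
		{Tool: "GrepTool", Params: map[string]any{"pattern": "TODO"}},
	}
	return steps[len(done)], true
}

// runTool stands in for invoking the MCP tool and collecting its result.
func runTool(step Step) string {
	return fmt.Sprintf("(output of %s)", step.Tool)
}

func main() {
	task := "Find all TODO comments in the Go sources"
	var results []string

	for {
		step, more := planNextStep(task, results)
		if !more {
			break
		}
		results = append(results, runTool(step))
	}

	// Synthesize the collected results into a single response for the caller.
	fmt.Println("Task:", task)
	for _, r := range results {
		fmt.Println(" -", r)
	}
}
```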
Error Propagation
The tool error mechanism is designed to provide useful information back to the model:
- Error messages are human-readable and descriptive
- Error codes allow programmatic handling of specific error types
- Stacktraces and debugging information are not exposed to maintain security
Performance Considerations
Tools are designed with performance in mind:
- File operations use efficient libraries and patterns
- Search operations employ indexing and filtering when appropriate
- Large results can be paginated or truncated to prevent context overflows
- Resource-intensive operations have configurable timeouts
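Truncation in particular is cheap to implement. A minimal sketch of capping a tool result before it is written to stdout (the limit is arbitrary, not a value taken from gomcptest):

```go
package main

import "fmt"

// maxResultBytes is an arbitrary example limit, not a value taken from gomcptest.
const maxResultBytes = 16 * 1024

// truncateResult caps a result string and marks it as truncated so the model
// knows the output is incomplete.
func truncateResult(s string) string {
	if len(s) <= maxResultBytes {
		return s
	}
	return s[:maxResultBytes] + "\n...[output truncated]"
}

func main() {
	fmt.Println(truncateResult("a short result"))
}
```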
Future Directions
The tool architecture in gomcptest could evolve in several ways:
- Streaming Results: Supporting incremental results for long-running operations
- Tool Discovery: More sophisticated mechanisms for models to discover available tools
- Tool Chaining: First-class support for composing multiple tools in sequences or pipelines
- Interactive Tools: Tools that can engage in multi-step interactions with the model
- Persistent State: Optional state maintenance for tools that benefit from context
Conclusion
The MCP tools in gomcptest provide a flexible, secure, and extensible foundation for enabling AI agents to interact with their environment. By understanding the architecture and design principles of these tools, developers can effectively utilize the existing tools, extend them with new capabilities, or create entirely new tools that integrate seamlessly with the system.