Reference

Technical reference documentation for gomcptest components and tools

Reference guides are technical descriptions of the machinery and how to operate it. They describe how things work in detail and are accurate and complete.

This section provides detailed technical documentation on gomcptest’s components, APIs, parameters, and tools.

1 - Tools Reference

Comprehensive reference of all available MCP-compatible tools

This reference guide documents all available MCP-compatible tools in the gomcptest project, their parameters, and response formats.

Bash

Executes bash commands in a persistent shell session.

Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| command | string | Yes | The command to execute |
| timeout | number | No | Timeout in milliseconds (max 600000) |

Response

The tool returns the command output as a string.

Banned Commands

For security reasons, the following commands are banned: alias, curl, curlie, wget, axel, aria2c, nc, telnet, lynx, w3m, links, httpie, xh, http-prompt, chrome, firefox, safari
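
For example, a client could invoke this tool with a standard MCP tools/call request. This is an illustrative sketch; the registered tool name and arguments depend on your setup:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "Bash",
    "arguments": {
      "command": "echo hello",
      "timeout": 5000
    }
  }
}

The other tools in this reference are invoked the same way, with their respective parameter names.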

Edit

Modifies file content by replacing specified text.

Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| file_path | string | Yes | Absolute path to the file to modify |
| old_string | string | Yes | Text to replace |
| new_string | string | Yes | Replacement text |

Response

Confirmation message with the updated content.

GlobTool

Finds files matching glob patterns with metadata.

Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| pattern | string | Yes | Glob pattern to match files against |
| path | string | No | Directory to search in (default: current directory) |
| exclude | string | No | Glob pattern to exclude from results |
| limit | number | No | Maximum number of results to return |
| absolute | boolean | No | Return absolute paths instead of relative |

Response

A list of matching files with metadata including path, size, modification time, and permissions.

GrepTool

Searches file contents using regular expressions.

Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| pattern | string | Yes | Regular expression pattern to search for |
| path | string | No | Directory to search in (default: current directory) |
| include | string | No | File pattern to include in the search |

Response

A list of matches with file paths, line numbers, and matched content.

LS

Lists files and directories in a given path.

Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| path | string | Yes | Absolute path to the directory to list |
| ignore | array | No | List of glob patterns to ignore |

Response

A list of files and directories with metadata.

Replace

Completely replaces a file’s contents.

Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| file_path | string | Yes | Absolute path to the file to write |
| content | string | Yes | Content to write to the file |

Response

Confirmation message with the content written.

View

Reads file contents with optional line range.

Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| file_path | string | Yes | Absolute path to the file to read |
| offset | number | No | Line number to start reading from |
| limit | number | No | Number of lines to read |

Response

The file content with line numbers in cat -n format.
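
For example, reading the first three lines of a Go source file yields output like this (illustrative):

     1	package main
     2	
     3	import "fmt"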

dispatch_agent

Launches a new agent with access to specific tools.

Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| prompt | string | Yes | The task for the agent to perform |

Response

The result of the agent’s task execution.

imagen

Generates and manipulates images using Google’s Imagen API.

Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| prompt | string | Yes | Description of the image to generate |
| aspectRatio | string | No | Aspect ratio for the image (default: "1:1") |
| safetyFilterLevel | string | No | Safety filter level (default: "block_some") |
| personGeneration | string | No | Person generation policy (default: "dont_allow") |

Response

Returns a JSON object with the generated image path and metadata.

duckdbserver

Provides data processing capabilities using DuckDB.

Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| query | string | Yes | SQL query to execute |
| database | string | No | Database file path (default: in-memory) |

Response

Query results in JSON format.
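
For example (illustrative values), a call with the arguments:

{
  "query": "SELECT 42 AS answer"
}

would return something like:

[{"answer": 42}]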

imagen_edit

Edits images using Google’s Gemini 2.0 Flash model with natural language instructions.

Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| base64_image | string | Yes | Base64 encoded image data (without data:image/... prefix) |
| mime_type | string | Yes | MIME type of the image (e.g., "image/jpeg", "image/png") |
| edit_instruction | string | Yes | Text describing the edit to perform |
| temperature | number | No | Randomness in generation (0.0-2.0, default: 1.0) |
| top_p | number | No | Nucleus sampling parameter (0.0-1.0, default: 0.95) |

Response

Returns edited image information including file path and HTTP URL.

plantuml

Generates PlantUML diagram URLs from plain text diagrams with syntax validation and error correction.

Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| plantuml_code | string | Yes | PlantUML diagram code in plain text format |
| output_format | string | No | Output format: "svg" (default) or "png" |

Response

Returns URL pointing to PlantUML server for SVG/PNG rendering.

plantuml_check

Validates PlantUML file syntax using the official PlantUML processor.

Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| file_path | string | Yes | Path to PlantUML file (.puml, .plantuml, .pu) |

Response

Returns validation result with detailed error messages if syntax issues are found.

sleep

Pauses execution for a specified number of seconds (useful for testing and demonstrations).

Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| seconds | number | Yes | Number of seconds to sleep |

Response

Confirmation message after sleep completion.

Tool Response Format

Most tools return JSON responses with the following structure:

{
  "result": "...",    // string result, or
  "results": [...],   // array of results
  "error": "..."      // error message, if applicable
}

Error Handling

All tools follow a consistent error reporting format:

{
  "error": "Error message",
  "code": "ERROR_CODE"
}

Common error codes include:

  • INVALID_PARAMS: Parameters are missing or invalid
  • EXECUTION_ERROR: Error executing the requested operation
  • PERMISSION_DENIED: Permission issues
  • TIMEOUT: Operation timed out

2 - OpenAI-Compatible Server Reference

Technical documentation of the server’s architecture, API endpoints, and configuration

This reference guide provides detailed technical documentation on the OpenAI-compatible server’s architecture, API endpoints, configuration options, and integration details with Vertex AI.

Overview

The OpenAI-compatible server is a core component of the gomcptest system. It implements an API surface compatible with the OpenAI Chat Completions API while connecting to Google’s Vertex AI for model inference. The server acts as a bridge between clients (like the modern AgentFlow web UI) and the underlying LLM models, handling session management, function calling, and tool execution.

AgentFlow Web UI

The server includes AgentFlow, a modern web-based interface that is embedded directly in the openaiserver binary. It provides:

  • Mobile-First Design: Optimized for iPhone and mobile devices
  • Real-time Streaming: Server-sent events for immediate response display
  • Professional Styling: Clean, modern interface with accessibility features
  • Conversation Management: Persistent conversation history
  • Attachment Support: File uploads including PDF support
  • Embedded Architecture: Built into the main server binary for easy deployment

UI Access

Access AgentFlow by starting the openaiserver and navigating to the /ui endpoint:

./bin/openaiserver
# AgentFlow available at: http://localhost:8080/ui

Development Note

The host/openaiserver/simpleui directory contains a standalone UI server used exclusively for development and testing. Production users should use the embedded UI via the /ui endpoint.

API Endpoints

POST /v1/chat/completions

The primary endpoint that mimics the OpenAI Chat Completions API.

Request

{
  "model": "gemini-pro",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello, world!"}
  ],
  "stream": true,
  "max_tokens": 1024,
  "temperature": 0.7,
  "functions": [
    {
      "name": "get_weather",
      "description": "Get the current weather in a given location",
      "parameters": {
        "type": "object",
        "properties": {
          "location": {
            "type": "string",
            "description": "The city and state, e.g. San Francisco, CA"
          }
        },
        "required": ["location"]
      }
    }
  ]
}

Response (non-streamed)

{
  "id": "chatcmpl-123456789",
  "object": "chat.completion",
  "created": 1677858242,
  "model": "gemini-pro",
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?"
      },
      "finish_reason": "stop",
      "index": 0
    }
  ],
  "usage": {
    "prompt_tokens": 13,
    "completion_tokens": 7,
    "total_tokens": 20
  }
}

Response (streamed)

When stream is set to true, the server returns a stream of Server-Sent Events (SSE) containing partial responses:

data: {"id":"chatcmpl-123456789","object":"chat.completion.chunk","created":1677858242,"model":"gemini-pro","choices":[{"delta":{"role":"assistant"},"index":0,"finish_reason":null}]}

data: {"id":"chatcmpl-123456789","object":"chat.completion.chunk","created":1677858242,"model":"gemini-pro","choices":[{"delta":{"content":"Hello"},"index":0,"finish_reason":null}]}

data: {"id":"chatcmpl-123456789","object":"chat.completion.chunk","created":1677858242,"model":"gemini-pro","choices":[{"delta":{"content":"!"},"index":0,"finish_reason":null}]}

data: {"id":"chatcmpl-123456789","object":"chat.completion.chunk","created":1677858242,"model":"gemini-pro","choices":[{"delta":{"content":" How"},"index":0,"finish_reason":null}]}

data: {"id":"chatcmpl-123456789","object":"chat.completion.chunk","created":1677858242,"model":"gemini-pro","choices":[{"delta":{},"index":0,"finish_reason":"stop"}]}

data: [DONE]

Supported Features

Models

The server supports the following Vertex AI models:

  • gemini-1.5-pro
  • gemini-2.0-flash
  • gemini-pro-vision (legacy)

Vertex AI Built-in Tools

The server supports Google’s native Vertex AI tools:

  • Code Execution: Enables the model to execute code as part of generation
  • Google Search: Specialized search tool powered by Google
  • Google Search Retrieval: Advanced retrieval tool with Google search backend

Parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| model | string | gemini-pro | The model to use for generating completions |
| messages | array | Required | An array of messages in the conversation |
| stream | boolean | false | Whether to stream the response or not |
| max_tokens | integer | 1024 | Maximum number of tokens to generate |
| temperature | number | 0.7 | Sampling temperature (0-1) |
| functions | array | [] | Function definitions the model can call |
| function_call | string or object | auto | Controls function calling behavior |

Function Calling

The server supports function calling similar to the OpenAI API. When the model identifies that a function should be called, the server does the following (a message sketch follows the list):

  1. Parses the function call parameters
  2. Locates the appropriate MCP tool
  3. Executes the tool with the provided parameters
  4. Returns the result to the model for further processing
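
A sketch of the message shapes involved, following the OpenAI functions convention used elsewhere in this document (names and values are illustrative):

// 1. The model emits a function call
{
  "role": "assistant",
  "content": null,
  "function_call": {
    "name": "get_weather",
    "arguments": "{\"location\": \"San Francisco, CA\"}"
  }
}

// 2. The server executes the matching MCP tool and returns the result to the model
{
  "role": "function",
  "name": "get_weather",
  "content": "{\"temperature\": \"18C\", \"conditions\": \"cloudy\"}"
}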

Architecture

The server consists of these key components:

HTTP Server

A standard Go HTTP server that handles incoming requests and routes them to the appropriate handlers.

Session Manager

Maintains chat history and context for ongoing conversations. Ensures that the model has necessary context when generating responses.

Vertex AI Client

Communicates with Google’s Vertex AI API to:

  • Send prompt templates to the model
  • Receive completions from the model
  • Stream partial responses back to the client

MCP Tool Manager

Manages the available MCP tools and handles:

  • Tool registration and discovery
  • Parameter validation
  • Tool execution
  • Response processing

Response Streamer

Handles streaming responses to clients in SSE format, ensuring low latency and progressive rendering.

Configuration

The server can be configured using environment variables and command-line flags:

Command-Line Options

| Flag | Description | Default |
|------|-------------|---------|
| -mcpservers | Input string of MCP servers | - |
| -withAllEvents | Include all events (tool calls, tool responses) in stream output, not just content chunks | false |

⚠️ Important for Testing: The -withAllEvents flag is mandatory for testing tool event flows in development. It enables streaming of all tool execution events including tool calls and responses, which is essential for debugging and development. Without this flag, only standard chat completion responses are streamed.

Environment Variables

The server can be configured using environment variables:

Core Configuration

| Variable | Description | Default |
|----------|-------------|---------|
| GCP_PROJECT | Google Cloud project ID | - |
| GCP_REGION | Google Cloud region | us-central1 |
| GEMINI_MODELS | Comma-separated list of available models | gemini-1.5-pro,gemini-2.0-flash |
| PORT | HTTP server port | 8080 |
| LOG_LEVEL | Logging level (DEBUG, INFO, WARN, ERROR) | INFO |

Vertex AI Tools Configuration

| Variable | Description | Default |
|----------|-------------|---------|
| VERTEX_AI_CODE_EXECUTION | Enable Code Execution tool | false |
| VERTEX_AI_GOOGLE_SEARCH | Enable Google Search tool | false |
| VERTEX_AI_GOOGLE_SEARCH_RETRIEVAL | Enable Google Search Retrieval tool | false |

Legacy Configuration

| Variable | Description | Default |
|----------|-------------|---------|
| GOOGLE_APPLICATION_CREDENTIALS | Path to Google Cloud credentials file | - |
| GOOGLE_CLOUD_PROJECT | Legacy alias for GCP_PROJECT | - |
| GOOGLE_CLOUD_LOCATION | Legacy alias for GCP_REGION | us-central1 |

Error Handling

The server implements consistent error handling with HTTP status codes:

| Status Code | Description |
|-------------|-------------|
| 400 | Bad Request - Invalid parameters or request format |
| 401 | Unauthorized - Missing or invalid authentication |
| 404 | Not Found - Model or endpoint not found |
| 429 | Too Many Requests - Rate limit exceeded |
| 500 | Internal Server Error - Server-side error |
| 503 | Service Unavailable - Vertex AI service unavailable |

Error responses follow this format:

{
  "error": {
    "message": "Detailed error message",
    "type": "error_type",
    "param": "parameter_name",
    "code": "error_code"
  }
}

Security Considerations

The server does not implement authentication or authorization by default. In production deployments, consider:

  • Running behind a reverse proxy with authentication
  • Using API keys or OAuth2
  • Implementing rate limiting
  • Setting up proper firewall rules

Examples

Basic Usage

export GCP_PROJECT="your-project-id"
export GCP_REGION="us-central1"
./bin/openaiserver
# Access AgentFlow UI at: http://localhost:8080/ui

Development with Full Event Streaming

export GCP_PROJECT="your-project-id"
export GCP_REGION="us-central1"
./bin/openaiserver -withAllEvents
# Access AgentFlow UI with full tool events at: http://localhost:8080/ui

With Vertex AI Tools

export GCP_PROJECT="your-project-id"
export VERTEX_AI_CODE_EXECUTION=true
export VERTEX_AI_GOOGLE_SEARCH=true
./bin/openaiserver
# AgentFlow UI with Vertex AI tools at: http://localhost:8080/ui

Development UI Server (For Developers Only)

# Terminal 1: Start API server
export GCP_PROJECT="your-project-id"
./bin/openaiserver -port=4000

# Terminal 2: Start development UI server
cd host/openaiserver/simpleui
go run . -ui-port=8081 -api-url=http://localhost:4000
# Development UI at: http://localhost:8081

Note: The standalone UI server is for development purposes only. Production users should use the embedded UI via /ui.

Client Connection

curl -X POST http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gemini-2.0-flash",
    "messages": [{"role": "user", "content": "Hello, world!"}]
  }'
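
To stream the response instead, set stream to true and disable curl's output buffering with -N:

curl -N -X POST http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gemini-2.0-flash",
    "stream": true,
    "messages": [{"role": "user", "content": "Hello, world!"}]
  }'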

Limitations

  • Single chat session support only
  • No persistent storage of conversations
  • Limited authentication options
  • Basic rate limiting
  • Limited model parameter controls

Advanced Usage

Tool Registration

Tools are automatically registered when the server starts. To register custom tools (a sketch follows the list):

  1. Place executable files in the MCP_TOOLS_PATH directory
  2. Ensure they follow the MCP protocol
  3. Restart the server
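
A sketch of that workflow (paths and the tool name are illustrative):

# Place a new MCP tool where the server discovers tools at startup
export MCP_TOOLS_PATH=./tools
cp ./bin/mytool "$MCP_TOOLS_PATH/"

# Restart so the new tool is registered
./bin/openaiserver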

Streaming with Function Calls

When using function calling with streaming, the stream will pause during tool execution and resume with the tool results included in the context.
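
A hypothetical excerpt of such a stream, following the OpenAI chunk convention shown earlier (the exact shape of the tool events emitted with -withAllEvents may differ):

data: {"id":"chatcmpl-123456789","object":"chat.completion.chunk","created":1677858242,"model":"gemini-2.0-flash","choices":[{"delta":{"function_call":{"name":"get_weather","arguments":"{\"location\":"}},"index":0,"finish_reason":null}]}

data: {"id":"chatcmpl-123456789","object":"chat.completion.chunk","created":1677858242,"model":"gemini-2.0-flash","choices":[{"delta":{"function_call":{"arguments":"\"Paris\"}"}},"index":0,"finish_reason":null}]}

data: {"id":"chatcmpl-123456789","object":"chat.completion.chunk","created":1677858242,"model":"gemini-2.0-flash","choices":[{"delta":{"content":"It is currently cloudy in Paris."},"index":0,"finish_reason":null}]}

data: [DONE]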

3 - cliGCP Reference (Deprecated)

Detailed reference of the cliGCP command-line interface (deprecated in favor of AgentFlow UI)

⚠️ DEPRECATED: The cliGCP command-line interface is deprecated in favor of the modern AgentFlow web UI. New users should use the AgentFlow UI instead. This documentation is maintained for legacy users.

Overview

The cliGCP (Command Line Interface for Google Cloud Platform) is a legacy command-line tool that provides a chat interface similar to tools like “Claude Code” or “ChatGPT”. It connects to an OpenAI-compatible server and allows users to interact with LLMs and MCP tools through a conversational interface.

For new projects, we recommend using the AgentFlow web UI which provides a modern, mobile-optimized interface with better features and user experience.

Command Structure

Basic Usage

./bin/cliGCP [flags]

Flags

| Flag | Description | Default |
|------|-------------|---------|
| -mcpservers | Comma-separated list of MCP tool paths | "" |
| -server | URL of the OpenAI-compatible server | http://localhost:8080 |
| -model | LLM model to use | gemini-pro |
| -prompt | Initial system prompt | "You are a helpful assistant." |
| -temp | Temperature setting for model responses | 0.7 |
| -maxtokens | Maximum number of tokens in responses | 1024 |
| -history | File path to store/load chat history | "" |
| -verbose | Enable verbose logging | false |

Example

./bin/cliGCP -mcpservers "./bin/Bash;./bin/View;./bin/GlobTool;./bin/GrepTool;./bin/LS;./bin/Edit;./bin/Replace;./bin/dispatch_agent" -server "http://localhost:8080" -model "gemini-pro" -prompt "You are a helpful command-line assistant."

Components

Chat Interface

The chat interface provides:

  • Text-based input for user messages
  • Markdown rendering of AI responses
  • Real-time streaming of responses
  • Input history and navigation
  • Multi-line input support

MCP Tool Manager

The tool manager:

  • Loads and initializes MCP tools
  • Registers tools with the OpenAI-compatible server
  • Routes function calls to appropriate tools
  • Processes tool results

Session Manager

The session manager:

  • Maintains chat history within the session
  • Handles context windowing for long conversations
  • Optionally persists conversations to disk
  • Provides conversation resume functionality

Interaction Patterns

Basic Chat

The most common interaction pattern is a simple turn-based chat:

  1. User enters a message
  2. Model generates and streams a response
  3. Chat history is updated
  4. User enters the next message

Function Calling

When the model determines a function should be called:

  1. User enters a message requesting an action (e.g., “List files in /tmp”)
  2. Model analyzes the request and generates a function call
  3. cliGCP intercepts the function call and routes it to the appropriate tool
  4. Tool executes and returns results
  5. Results are injected back into the model’s context
  6. Model continues generating a response that incorporates the tool results
  7. The complete response is shown to the user

Multi-turn Function Calling

For complex tasks, the model may make multiple function calls:

  1. User requests a complex task (e.g., “Find all Python files containing ‘error’”)
  2. Model makes a function call to list directories
  3. Tool returns directory listing
  4. Model makes additional function calls to search file contents
  5. Each tool result is returned to the model
  6. Model synthesizes the information and responds to the user

Technical Details

Message Format

Messages between cliGCP and the server follow the OpenAI Chat API format:

{
  "role": "user"|"assistant"|"system",
  "content": "Message text"
}

Function calls use this format:

{
  "role": "assistant",
  "content": null,
  "function_call": {
    "name": "function_name",
    "arguments": "{\"arg1\":\"value1\",\"arg2\":\"value2\"}"
  }
}

Tool Registration

Tools are registered with the server using JSONSchema:

{
  "name": "tool_name",
  "description": "Tool description",
  "parameters": {
    "type": "object",
    "properties": {
      "param1": {
        "type": "string",
        "description": "Parameter description"
      }
    },
    "required": ["param1"]
  }
}

Error Handling

The CLI implements robust error handling for:

  • Connection issues with the server
  • Tool execution failures
  • Model errors
  • Input validation

Error messages are displayed to the user with context and possible solutions.

Configuration

Environment Variables

| Variable | Description | Default |
|----------|-------------|---------|
| OPENAI_API_URL | URL of the OpenAI-compatible server | http://localhost:8080 |
| OPENAI_API_KEY | API key for authentication (if required) | "" |
| MCP_TOOLS_PATH | Path to MCP tools (overridden by -mcpservers) | ./tools |
| DEFAULT_MODEL | Default model to use | gemini-pro |
| SYSTEM_PROMPT | Default system prompt | "You are a helpful assistant." |

Configuration File

You can create a ~/.cligcp.json configuration file with these settings:

{
  "server": "http://localhost:8080",
  "model": "gemini-pro",
  "prompt": "You are a helpful assistant.",
  "temperature": 0.7,
  "max_tokens": 1024,
  "tools": [
    "./bin/Bash",
    "./bin/View",
    "./bin/GlobTool"
  ]
}

Advanced Usage

Persistent History

To save and load chat history:

./bin/cliGCP -history ./chat_history.json

Custom System Prompt

To set a specific system prompt:

./bin/cliGCP -prompt "You are a Linux command-line expert that helps users with shell commands and filesystem operations."

Combining with Shell Scripts

You can use cliGCP in shell scripts by piping input and capturing output:

echo "Explain how to find large files in Linux" | ./bin/cliGCP -noninteractive

Limitations

  • Single conversation per instance
  • Limited rendering capabilities for complex markdown
  • No built-in authentication management
  • Limited offline functionality
  • No multi-modal input support (e.g., images)

Troubleshooting

Common Issues

| Issue | Possible Solution |
|-------|-------------------|
| Connection refused | Ensure the OpenAI server is running |
| Tool not found | Check tool paths and permissions |
| Out of memory | Reduce history size or split conversation |
| Slow responses | Check network connection and server load |

Diagnostic Mode

Run with the -verbose flag to enable detailed logging:

./bin/cliGCP -verbose

This will show all API requests, responses, and tool interactions, which can be helpful for debugging.

4 - Artifact Storage API Reference

Complete API reference for the artifact storage endpoints in the OpenAI server

The OpenAI server provides a RESTful API for storing and retrieving generic artifacts (files). This API allows you to upload any type of file and retrieve it later using a unique identifier.

Authentication

The artifact API endpoints do not require authentication and are publicly accessible. In production environments, consider implementing authentication middleware as needed.

Content Types

The API supports any content type. Common examples include:

  • text/plain - Text files
  • application/json - JSON documents
  • image/jpeg, image/png - Images
  • application/pdf - PDF documents
  • audio/webm, audio/wav - Audio files
  • application/octet-stream - Binary files

Endpoints

Upload Artifact

Uploads a new artifact to the server.

Request:

POST /artifact/

Headers:

  • Content-Type (required): MIME type of the file being uploaded
  • X-Original-Filename (required): Original filename including extension

Request Body:

  • Binary file data

Response:

Success (201 Created):

{
  "artifactId": "7f33ee3d-b589-4b3c-b8c8-a9a3ee04eacf"
}

Response Headers:

  • Location: URL where the artifact can be retrieved
  • Content-Type: application/json

Error Responses:

400 Bad Request:

Missing 'Content-Type' or 'X-Original-Filename' header

413 Payload Too Large:

Error saving file: http: request body too large

500 Internal Server Error:

Could not create file on server

Example:

curl -X POST http://localhost:8080/artifact/ \
  -H "Content-Type: text/plain" \
  -H "X-Original-Filename: example.txt" \
  --data-binary @example.txt

Retrieve Artifact

Downloads an artifact by its unique identifier.

Request:

GET /artifact/{artifactId}

Path Parameters:

  • artifactId (required): UUID of the artifact to retrieve

Response:

Success (200 OK):

  • Returns the original file content as binary data

Response Headers:

  • Content-Type: Original MIME type of the file
  • Content-Disposition: inline; filename="original-filename.ext"
  • Content-Length: Size of the file in bytes
  • Accept-Ranges: bytes (supports range requests)
  • Last-Modified: Timestamp when the file was uploaded

Error Responses:

400 Bad Request:

Invalid artifact ID format

404 Not Found:

404 page not found

500 Internal Server Error:

Could not read artifact metadata
Corrupted artifact metadata

Example:

curl http://localhost:8080/artifact/7f33ee3d-b589-4b3c-b8c8-a9a3ee04eacf
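
Because responses include Accept-Ranges: bytes, partial content can also be fetched with a standard Range header:

curl -H "Range: bytes=0-1023" \
  http://localhost:8080/artifact/7f33ee3d-b589-4b3c-b8c8-a9a3ee04eacf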

Data Models

Artifact Metadata

Each uploaded artifact has associated metadata stored in a .meta.json file:

{
  "originalFilename": "example.txt",
  "contentType": "text/plain",
  "size": 1024,
  "uploadTimestamp": "2025-09-19T12:01:11.277651Z"
}

Fields:

  • originalFilename (string): The original name of the uploaded file
  • contentType (string): MIME type of the file
  • size (number): Size of the file in bytes
  • uploadTimestamp (string): ISO 8601 timestamp of when the file was uploaded

Configuration

The artifact storage behavior can be configured using environment variables:

  • ARTIFACT_PATH: Directory where artifacts are stored (default: ~/openaiserver/artifacts)
  • MAX_UPLOAD_SIZE: Maximum file size in bytes (default: 52428800 = 50MB)
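
For example, to store artifacts in a custom directory and raise the limit to 100MB:

export ARTIFACT_PATH="/var/lib/openaiserver/artifacts"
export MAX_UPLOAD_SIZE=104857600   # 100MB in bytes
./bin/openaiserver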

File Storage

Storage Structure

Artifacts are stored using the following directory structure:

${ARTIFACT_PATH}/
├── 7f33ee3d-b589-4b3c-b8c8-a9a3ee04eacf           # Binary file content
├── 7f33ee3d-b589-4b3c-b8c8-a9a3ee04eacf.meta.json # Metadata file
├── 123e4567-e89b-12d3-a456-426614174000           # Another file
└── 123e4567-e89b-12d3-a456-426614174000.meta.json # Its metadata

File Naming

  • Artifact files: Named using the UUID (no extension)
  • Metadata files: Named using the UUID + .meta.json suffix
  • UUIDs: Generated using UUID v4 standard (RFC 4122)

Security Considerations

File Size Limits

  • Default maximum upload size: 50MB
  • Configurable via MAX_UPLOAD_SIZE environment variable
  • Requests exceeding the limit return HTTP 413 (Payload Too Large)

File Type Validation

  • The API accepts any content type
  • Content type validation is based on the Content-Type header
  • No server-side file content inspection is performed

Path Traversal Protection

  • Artifact IDs must be valid UUIDs
  • Invalid UUID format returns HTTP 400 (Bad Request)
  • File paths are constructed using secure filepath.Join()
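
A minimal Go sketch of this validation pattern (illustrative, not the server's actual code; assumes the github.com/google/uuid package):

package main

import (
    "fmt"
    "path/filepath"

    "github.com/google/uuid"
)

// artifactPath rejects anything that is not a well-formed UUID before
// building a filesystem path, so crafted IDs such as "../secret" never
// reach filepath.Join.
func artifactPath(baseDir, artifactID string) (string, error) {
    if _, err := uuid.Parse(artifactID); err != nil {
        return "", fmt.Errorf("invalid artifact ID format: %w", err)
    }
    return filepath.Join(baseDir, artifactID), nil
}

func main() {
    path, err := artifactPath("/var/lib/artifacts", "7f33ee3d-b589-4b3c-b8c8-a9a3ee04eacf")
    fmt.Println(path, err)
}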

Storage Directory

  • Default storage path uses user home directory
  • Tilde (~) expansion is supported
  • Directory is created automatically with 0755 permissions
  • Metadata files are created with 0644 permissions

Error Handling

Client Errors (4xx)

  • 400 Bad Request: Invalid UUID format or missing required headers
  • 404 Not Found: Artifact with the specified ID does not exist
  • 413 Payload Too Large: File exceeds maximum upload size

Server Errors (5xx)

  • 500 Internal Server Error: File system errors, metadata corruption, or server misconfiguration

Logging

All artifact operations are logged with structured logging:

INFO Artifact uploaded successfully artifactID=7f33ee3d-b589-4b3c-b8c8-a9a3ee04eacf filename=example.txt size=1024
DEBUG Artifact served artifactID=7f33ee3d-b589-4b3c-b8c8-a9a3ee04eacf filename=example.txt
ERROR Could not create artifact file error="permission denied" path=/artifacts/uuid

Rate Limiting

The artifact API does not implement built-in rate limiting. For production environments, consider implementing rate limiting at the reverse proxy level or using middleware.

CORS Support

The artifact API includes CORS headers to support web-based clients:

Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, POST, PUT, DELETE, OPTIONS
Access-Control-Allow-Headers: Content-Type, Authorization, X-Requested-With
Access-Control-Allow-Credentials: true

Integration Examples

JavaScript/Fetch API

// Upload file
const uploadFile = async (file) => {
  const response = await fetch('/artifact/', {
    method: 'POST',
    headers: {
      'Content-Type': file.type,
      'X-Original-Filename': file.name
    },
    body: file
  });

  const result = await response.json();
  return result.artifactId;
};

// Download file
const downloadFile = async (artifactId) => {
  const response = await fetch(`/artifact/${artifactId}`);
  return response.blob();
};

Python/Requests

import os

import requests

# Upload file
def upload_file(file_path, content_type):
    with open(file_path, 'rb') as f:
        headers = {
            'Content-Type': content_type,
            'X-Original-Filename': os.path.basename(file_path)
        }
        response = requests.post('http://localhost:8080/artifact/',
                               headers=headers, data=f)
        return response.json()['artifactId']

# Download file
def download_file(artifact_id, output_path):
    response = requests.get(f'http://localhost:8080/artifact/{artifact_id}')
    with open(output_path, 'wb') as f:
        f.write(response.content)

Go

package main

import (
    "encoding/json"
    "fmt"
    "io"
    "net/http"
    "os"
    "path/filepath"
)

// Upload file
func uploadFile(filePath, contentType string) (string, error) {
    file, err := os.Open(filePath)
    if err != nil {
        return "", err
    }
    defer file.Close()

    req, err := http.NewRequest("POST", "http://localhost:8080/artifact/", file)
    if err != nil {
        return "", err
    }

    req.Header.Set("Content-Type", contentType)
    req.Header.Set("X-Original-Filename", filepath.Base(filePath))

    client := &http.Client{}
    resp, err := client.Do(req)
    if err != nil {
        return "", err
    }
    defer resp.Body.Close()

    var result map[string]string
    if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
        return "", err
    }
    return result["artifactId"], nil
}

// Download file
func downloadFile(artifactID, outputPath string) error {
    resp, err := http.Get(fmt.Sprintf("http://localhost:8080/artifact/%s", artifactID))
    if err != nil {
        return err
    }
    defer resp.Body.Close()

    file, err := os.Create(outputPath)
    if err != nil {
        return err
    }
    defer file.Close()

    _, err = io.Copy(file, resp.Body)
    return err
}