1 - Tools Reference
Comprehensive reference of all available MCP-compatible tools
This reference guide documents all available MCP-compatible tools in the gomcptest project, their parameters, and response formats.
Bash
Executes bash commands in a persistent shell session.
Parameters
Parameter | Type | Required | Description |
---|---|---|---|
command | string | Yes | The command to execute |
timeout | number | No | Timeout in milliseconds (max 600000) |
Response
The tool returns the command output as a string.
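For illustration, the arguments object for a call to this tool might look like the following (parameter names are the documented ones; values are hypothetical):
{
  "command": "ls -la /tmp",
  "timeout": 30000
}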
Banned Commands
For security reasons, the following commands are banned:
alias, curl, curlie, wget, axel, aria2c, nc, telnet, lynx, w3m, links, httpie, xh, http-prompt, chrome, firefox, safari
Edit
Modifies file content by replacing specified text.
Parameters
Parameter | Type | Required | Description |
---|---|---|---|
file_path | string | Yes | Absolute path to the file to modify |
old_string | string | Yes | Text to replace |
new_string | string | Yes | Replacement text |
Response
Confirmation message with the updated content.
GlobTool
Finds files matching glob patterns with metadata.
Parameters
Parameter | Type | Required | Description |
---|---|---|---|
pattern | string | Yes | Glob pattern to match files against |
path | string | No | Directory to search in (default: current directory) |
exclude | string | No | Glob pattern to exclude from results |
limit | number | No | Maximum number of results to return |
absolute | boolean | No | Return absolute paths instead of relative |
Response
A list of matching files with metadata including path, size, modification time, and permissions.
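An illustrative arguments object, using only the documented parameters:
{
  "pattern": "**/*.go",
  "exclude": "vendor/**",
  "limit": 100
}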
GrepTool
Searches file contents using regular expressions.
Parameters
Parameter | Type | Required | Description |
---|---|---|---|
pattern | string | Yes | Regular expression pattern to search for |
path | string | No | Directory to search in (default: current directory) |
include | string | No | File pattern to include in the search |
Response
A list of matches with file paths, line numbers, and matched content.
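As a hypothetical example, searching Go files for main function declarations:
{
  "pattern": "func main\\(",
  "include": "*.go"
}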
LS
Lists files and directories in a given path.
Parameters
Parameter | Type | Required | Description |
---|---|---|---|
path | string | Yes | Absolute path to the directory to list |
ignore | array | No | List of glob patterns to ignore |
Response
A list of files and directories with metadata.
Replace
Completely replaces a file’s contents.
Parameters
Parameter | Type | Required | Description |
---|---|---|---|
file_path | string | Yes | Absolute path to the file to write |
content | string | Yes | Content to write to the file |
Response
Confirmation message with the content written.
View
Reads file contents with optional line range.
Parameters
Parameter | Type | Required | Description |
---|---|---|---|
file_path | string | Yes | Absolute path to the file to read |
offset | number | No | Line number to start reading from |
limit | number | No | Number of lines to read |
Response
The file content with line numbers in cat -n format.
dispatch_agent
Launches a new agent with access to specific tools.
Parameters
Parameter | Type | Required | Description |
---|---|---|---|
prompt | string | Yes | The task for the agent to perform |
Response
The result of the agent’s task execution.
imagen
Generates and manipulates images using Google’s Imagen API.
Parameters
Parameter | Type | Required | Description |
---|---|---|---|
prompt | string | Yes | Description of the image to generate |
aspectRatio | string | No | Aspect ratio for the image (default: "1:1") |
safetyFilterLevel | string | No | Safety filter level (default: "block_some") |
personGeneration | string | No | Person generation policy (default: "dont_allow") |
Response
Returns a JSON object with the generated image path and metadata.
duckdbserver
Provides data processing capabilities using DuckDB.
Parameters
Parameter | Type | Required | Description |
---|---|---|---|
query | string | Yes | SQL query to execute |
database | string | No | Database file path (default: in-memory) |
Response
Query results in JSON format.
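A minimal illustrative call, using only the documented query parameter against the default in-memory database:
{
  "query": "SELECT 42 AS answer"
}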
imagen_edit
Edits images using Google’s Gemini 2.0 Flash model with natural language instructions.
Parameters
Parameter | Type | Required | Description |
---|---|---|---|
base64_image | string | Yes | Base64 encoded image data (without data:image/… prefix) |
mime_type | string | Yes | MIME type of the image (e.g., "image/jpeg", "image/png") |
edit_instruction | string | Yes | Text describing the edit to perform |
temperature | number | No | Randomness in generation (0.0-2.0, default: 1.0) |
top_p | number | No | Nucleus sampling parameter (0.0-1.0, default: 0.95) |
Response
Returns edited image information including file path and HTTP URL.
plantuml
Generates PlantUML diagram URLs from plain text diagrams with syntax validation and error correction.
Parameters
Parameter | Type | Required | Description |
---|---|---|---|
plantuml_code | string | Yes | PlantUML diagram code in plain text format |
output_format | string | No | Output format: "svg" (default) or "png" |
Response
Returns URL pointing to PlantUML server for SVG/PNG rendering.
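An illustrative arguments object for a small sequence diagram:
{
  "plantuml_code": "@startuml\nAlice -> Bob: hello\n@enduml",
  "output_format": "svg"
}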
plantuml_check
Validates PlantUML file syntax using the official PlantUML processor.
Parameters
Parameter | Type | Required | Description |
---|---|---|---|
file_path | string | Yes | Path to PlantUML file (.puml, .plantuml, .pu) |
Response
Returns validation result with detailed error messages if syntax issues are found.
sleep
Pauses execution for a specified number of seconds (useful for testing and demonstrations).
Parameters
Parameter | Type | Required | Description |
---|---|---|---|
seconds | number | Yes | Number of seconds to sleep |
Response
Confirmation message after sleep completion.
Response Format
Most tools return JSON responses with the following structure:
{
  "result": "...",   // String result or
  "results": [...],  // Array of results
  "error": "..."     // Error message if applicable
}
Error Handling
All tools follow a consistent error reporting format:
{
  "error": "Error message",
  "code": "ERROR_CODE"
}
Common error codes include:
- INVALID_PARAMS: Parameters are missing or invalid
- EXECUTION_ERROR: Error executing the requested operation
- PERMISSION_DENIED: Permission issues
- TIMEOUT: Operation timed out
2 - OpenAI-Compatible Server Reference
Technical documentation of the server’s architecture, API endpoints, and configuration
This reference guide provides detailed technical documentation on the OpenAI-compatible server’s architecture, API endpoints, configuration options, and integration details with Vertex AI.
Overview
The OpenAI-compatible server is a core component of the gomcptest system. It implements an API surface compatible with the OpenAI Chat Completions API while connecting to Google’s Vertex AI for model inference. The server acts as a bridge between clients (like the modern AgentFlow web UI) and the underlying LLM models, handling session management, function calling, and tool execution.
AgentFlow Web UI
The server includes AgentFlow, a modern web-based interface that is embedded directly in the openaiserver binary. It provides:
- Mobile-First Design: Optimized for iPhone and mobile devices
- Real-time Streaming: Server-sent events for immediate response display
- Professional Styling: Clean, modern interface with accessibility features
- Conversation Management: Persistent conversation history
- Attachment Support: File uploads including PDF support
- Embedded Architecture: Built into the main server binary for easy deployment
UI Access
Access AgentFlow by starting the openaiserver and navigating to the /ui endpoint:
./bin/openaiserver
# AgentFlow available at: http://localhost:8080/ui
Development Note
The host/openaiserver/simpleui directory contains a standalone UI server used exclusively for development and testing. Production users should use the embedded UI via the /ui endpoint.
API Endpoints
POST /v1/chat/completions
The primary endpoint that mimics the OpenAI Chat Completions API.
Request
{
  "model": "gemini-pro",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello, world!"}
  ],
  "stream": true,
  "max_tokens": 1024,
  "temperature": 0.7,
  "functions": [
    {
      "name": "get_weather",
      "description": "Get the current weather in a given location",
      "parameters": {
        "type": "object",
        "properties": {
          "location": {
            "type": "string",
            "description": "The city and state, e.g. San Francisco, CA"
          }
        },
        "required": ["location"]
      }
    }
  ]
}
Response (non-streamed)
{
  "id": "chatcmpl-123456789",
  "object": "chat.completion",
  "created": 1677858242,
  "model": "gemini-pro",
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?"
      },
      "finish_reason": "stop",
      "index": 0
    }
  ],
  "usage": {
    "prompt_tokens": 13,
    "completion_tokens": 7,
    "total_tokens": 20
  }
}
Response (streamed)
When stream is set to true, the server returns a stream of SSE (Server-Sent Events) with partial responses:
data: {"id":"chatcmpl-123456789","object":"chat.completion.chunk","created":1677858242,"model":"gemini-pro","choices":[{"delta":{"role":"assistant"},"index":0,"finish_reason":null}]}
data: {"id":"chatcmpl-123456789","object":"chat.completion.chunk","created":1677858242,"model":"gemini-pro","choices":[{"delta":{"content":"Hello"},"index":0,"finish_reason":null}]}
data: {"id":"chatcmpl-123456789","object":"chat.completion.chunk","created":1677858242,"model":"gemini-pro","choices":[{"delta":{"content":"!"},"index":0,"finish_reason":null}]}
data: {"id":"chatcmpl-123456789","object":"chat.completion.chunk","created":1677858242,"model":"gemini-pro","choices":[{"delta":{"content":" How"},"index":0,"finish_reason":null}]}
data: {"id":"chatcmpl-123456789","object":"chat.completion.chunk","created":1677858242,"model":"gemini-pro","choices":[{"delta":{},"index":0,"finish_reason":"stop"}]}
data: [DONE]
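As a sketch of how a client might consume this stream, the following Go program posts a streaming request to the endpoint above and prints each chunk payload, stopping at the [DONE] sentinel. The endpoint and model come from this document; the line-by-line SSE reader is a minimal assumption, not the project's own client code.

package main

import (
    "bufio"
    "fmt"
    "net/http"
    "strings"
)

func main() {
    body := strings.NewReader(`{
        "model": "gemini-2.0-flash",
        "messages": [{"role": "user", "content": "Hello, world!"}],
        "stream": true
    }`)
    resp, err := http.Post("http://localhost:8080/v1/chat/completions", "application/json", body)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    scanner := bufio.NewScanner(resp.Body)
    for scanner.Scan() {
        line := scanner.Text()
        if !strings.HasPrefix(line, "data: ") {
            continue // skip blank lines between events
        }
        payload := strings.TrimPrefix(line, "data: ")
        if payload == "[DONE]" {
            break // end-of-stream sentinel
        }
        fmt.Println(payload) // each payload is a chat.completion.chunk JSON object
    }
}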
Supported Features
Models
The server supports the following Vertex AI models:
- gemini-1.5-pro
- gemini-2.0-flash
- gemini-pro-vision (legacy)
Vertex AI Tools
The server supports Google’s native Vertex AI tools:
- Code Execution: Enables the model to execute code as part of generation
- Google Search: Specialized search tool powered by Google
- Google Search Retrieval: Advanced retrieval tool with Google search backend
Parameters
Parameter | Type | Default | Description |
---|---|---|---|
model | string | gemini-pro | The model to use for generating completions |
messages | array | Required | An array of messages in the conversation |
stream | boolean | false | Whether to stream the response or not |
max_tokens | integer | 1024 | Maximum number of tokens to generate |
temperature | number | 0.7 | Sampling temperature (0-1) |
functions | array | [] | Function definitions the model can call |
function_call | string or object | auto | Controls function calling behavior |
Function Calling
The server supports function calling similar to the OpenAI API. When the model identifies that a function should be called, the server:
- Parses the function call parameters
- Locates the appropriate MCP tool
- Executes the tool with the provided parameters
- Returns the result to the model for further processing (see the example below)
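For illustration, the tool result might be passed back to the model as a function-role message in the OpenAI style (field values are hypothetical):
{
  "role": "function",
  "name": "get_weather",
  "content": "{\"temperature\":\"15C\",\"condition\":\"sunny\"}"
}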
Architecture
The server consists of these key components:
HTTP Server
A standard Go HTTP server that handles incoming requests and routes them to the appropriate handlers.
Session Manager
Maintains chat history and context for ongoing conversations. Ensures that the model has necessary context when generating responses.
Vertex AI Client
Communicates with Google’s Vertex AI API to:
- Send prompt templates to the model
- Receive completions from the model
- Stream partial responses back to the client
Tool Manager
Manages the available MCP tools and handles:
- Tool registration and discovery
- Parameter validation
- Tool execution
- Response processing
Response Streamer
Handles streaming responses to clients in SSE format, ensuring low latency and progressive rendering.
Configuration
The server can be configured using environment variables and command-line flags:
Command-Line Options
Flag | Description | Default |
---|---|---|
-mcpservers | Input string of MCP servers | - |
-withAllEvents | Include all events (tool calls, tool responses) in stream output, not just content chunks | false |
⚠️ Important for Testing: The -withAllEvents flag is mandatory for testing tool event flows in development. It enables streaming of all tool execution events, including tool calls and responses, which is essential for debugging. Without this flag, only standard chat completion content chunks are streamed.
Environment Variables
The following environment variables are supported:
Core Configuration
Variable | Description | Default |
---|---|---|
GCP_PROJECT | Google Cloud project ID | - |
GCP_REGION | Google Cloud region | us-central1 |
GEMINI_MODELS | Comma-separated list of available models | gemini-1.5-pro,gemini-2.0-flash |
PORT | HTTP server port | 8080 |
LOG_LEVEL | Logging level (DEBUG, INFO, WARN, ERROR) | INFO |
Vertex AI Tools Configuration
Variable | Description | Default |
---|---|---|
VERTEX_AI_CODE_EXECUTION | Enable Code Execution tool | false |
VERTEX_AI_GOOGLE_SEARCH | Enable Google Search tool | false |
VERTEX_AI_GOOGLE_SEARCH_RETRIEVAL | Enable Google Search Retrieval tool | false |
Legacy Configuration
Variable | Description | Default |
---|---|---|
GOOGLE_APPLICATION_CREDENTIALS | Path to Google Cloud credentials file | - |
GOOGLE_CLOUD_PROJECT | Legacy alias for GCP_PROJECT | - |
GOOGLE_CLOUD_LOCATION | Legacy alias for GCP_REGION | us-central1 |
Error Handling
The server implements consistent error handling with HTTP status codes:
Status Code | Description |
---|---|
400 | Bad Request - Invalid parameters or request format |
401 | Unauthorized - Missing or invalid authentication |
404 | Not Found - Model or endpoint not found |
429 | Too Many Requests - Rate limit exceeded |
500 | Internal Server Error - Server-side error |
503 | Service Unavailable - Vertex AI service unavailable |
Error responses follow this format:
{
  "error": {
    "message": "Detailed error message",
    "type": "error_type",
    "param": "parameter_name",
    "code": "error_code"
  }
}
Security Considerations
The server does not implement authentication or authorization by default. In production deployments, consider:
- Running behind a reverse proxy with authentication
- Using API keys or OAuth2
- Implementing rate limiting
- Setting up proper firewall rules
Examples
Basic Usage
export GCP_PROJECT="your-project-id"
export GCP_REGION="us-central1"
./bin/openaiserver
# Access AgentFlow UI at: http://localhost:8080/ui
Development with Full Event Streaming
export GCP_PROJECT="your-project-id"
export GCP_REGION="us-central1"
./bin/openaiserver -withAllEvents
# Access AgentFlow UI with full tool events at: http://localhost:8080/ui
With Vertex AI Tools
export GCP_PROJECT="your-project-id"
export VERTEX_AI_CODE_EXECUTION=true
export VERTEX_AI_GOOGLE_SEARCH=true
./bin/openaiserver
# AgentFlow UI with Vertex AI tools at: http://localhost:8080/ui
Development UI Server (For Developers Only)
# Terminal 1: Start API server
export GCP_PROJECT="your-project-id"
./bin/openaiserver -port=4000
# Terminal 2: Start development UI server
cd host/openaiserver/simpleui
go run . -ui-port=8081 -api-url=http://localhost:4000
# Development UI at: http://localhost:8081
Note: The standalone UI server is for development purposes only. Production users should use the embedded UI via /ui.
Client Connection
curl -X POST http://localhost:8080/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "gemini-2.0-flash",
"messages": [{"role": "user", "content": "Hello, world!"}]
}'
Limitations
- Single chat session support only
- No persistent storage of conversations
- Limited authentication options
- Basic rate limiting
- Limited model parameter controls
Advanced Usage
Tools are automatically registered when the server starts. To register custom tools:
- Place executable files in the MCP_TOOLS_PATH directory
- Ensure they follow the MCP protocol
- Restart the server
Streaming with Function Calls
When using function calling with streaming, the stream will pause during tool execution and resume with the tool results included in the context.
3 - cliGCP Reference (Deprecated)
Detailed reference of the cliGCP command-line interface (deprecated in favor of AgentFlow UI)
⚠️ DEPRECATED: The cliGCP command-line interface is deprecated in favor of the modern AgentFlow web UI. New users should use the AgentFlow UI instead. This documentation is maintained for legacy users.
Overview
The cliGCP (Command Line Interface for Google Cloud Platform) is a legacy command-line tool that provides a chat interface similar to tools like “Claude Code” or “ChatGPT”. It connects to an OpenAI-compatible server and allows users to interact with LLMs and MCP tools through a conversational interface.
For new projects, we recommend using the AgentFlow web UI which provides a modern, mobile-optimized interface with better features and user experience.
Command Structure
Basic Usage
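A minimal invocation using only the documented flags (see the Example below for a full command with tools):
./bin/cliGCP -server "http://localhost:8080" -model "gemini-pro"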
Flags
Flag | Description | Default |
---|---|---|
-mcpservers | Semicolon-separated list of MCP tool paths | "" |
-server | URL of the OpenAI-compatible server | "http://localhost:8080" |
-model | LLM model to use | "gemini-pro" |
-prompt | Initial system prompt | "You are a helpful assistant." |
-temp | Temperature setting for model responses | 0.7 |
-maxtokens | Maximum number of tokens in responses | 1024 |
-history | File path to store/load chat history | "" |
-verbose | Enable verbose logging | false |
Example
./bin/cliGCP -mcpservers "./bin/Bash;./bin/View;./bin/GlobTool;./bin/GrepTool;./bin/LS;./bin/Edit;./bin/Replace;./bin/dispatch_agent" -server "http://localhost:8080" -model "gemini-pro" -prompt "You are a helpful command-line assistant."
Components
Chat Interface
The chat interface provides:
- Text-based input for user messages
- Markdown rendering of AI responses
- Real-time streaming of responses
- Input history and navigation
- Multi-line input support
Tool Manager
The tool manager:
- Loads and initializes MCP tools
- Registers tools with the OpenAI-compatible server
- Routes function calls to appropriate tools
- Processes tool results
Session Manager
The session manager:
- Maintains chat history within the session
- Handles context windowing for long conversations
- Optionally persists conversations to disk
- Provides conversation resume functionality
Interaction Patterns
Basic Chat
The most common interaction pattern is a simple turn-based chat:
- User enters a message
- Model generates and streams a response
- Chat history is updated
- User enters the next message
Function Calling
When the model determines a function should be called:
- User enters a message requesting an action (e.g., “List files in /tmp”)
- Model analyzes the request and generates a function call
- cliGCP intercepts the function call and routes it to the appropriate tool
- Tool executes and returns results
- Results are injected back into the model’s context
- Model continues generating a response that incorporates the tool results
- The complete response is shown to the user
Multi-turn Function Calling
For complex tasks, the model may make multiple function calls:
- User requests a complex task (e.g., “Find all Python files containing ‘error’”)
- Model makes a function call to list directories
- Tool returns directory listing
- Model makes additional function calls to search file contents
- Each tool result is returned to the model
- Model synthesizes the information and responds to the user
Technical Details
Messages between cliGCP and the server follow the OpenAI Chat API format:
{
  "role": "user"|"assistant"|"system",
  "content": "Message text"
}
Function calls use this format:
{
  "role": "assistant",
  "content": null,
  "function_call": {
    "name": "function_name",
    "arguments": "{\"arg1\":\"value1\",\"arg2\":\"value2\"}"
  }
}
Tools are registered with the server using JSONSchema:
{
  "name": "tool_name",
  "description": "Tool description",
  "parameters": {
    "type": "object",
    "properties": {
      "param1": {
        "type": "string",
        "description": "Parameter description"
      }
    },
    "required": ["param1"]
  }
}
Error Handling
The CLI implements robust error handling for:
- Connection issues with the server
- Tool execution failures
- Model errors
- Input validation
Error messages are displayed to the user with context and possible solutions.
Configuration
Environment Variables
Variable | Description | Default |
---|---|---|
OPENAI_API_URL | URL of the OpenAI-compatible server | http://localhost:8080 |
OPENAI_API_KEY | API key for authentication (if required) | "" |
MCP_TOOLS_PATH | Path to MCP tools (overridden by -mcpservers) | "./tools" |
DEFAULT_MODEL | Default model to use | "gemini-pro" |
SYSTEM_PROMPT | Default system prompt | "You are a helpful assistant." |
Configuration File
You can create a ~/.cligcp.json configuration file with these settings:
{
  "server": "http://localhost:8080",
  "model": "gemini-pro",
  "prompt": "You are a helpful assistant.",
  "temperature": 0.7,
  "max_tokens": 1024,
  "tools": [
    "./bin/Bash",
    "./bin/View",
    "./bin/GlobTool"
  ]
}
Advanced Usage
Persistent History
To save and load chat history:
./bin/cliGCP -history ./chat_history.json
Custom System Prompt
To set a specific system prompt:
./bin/cliGCP -prompt "You are a Linux command-line expert that helps users with shell commands and filesystem operations."
Combining with Shell Scripts
You can use cliGCP in shell scripts by piping input and capturing output:
echo "Explain how to find large files in Linux" | ./bin/cliGCP -noninteractive
Limitations
- Single conversation per instance
- Limited rendering capabilities for complex markdown
- No built-in authentication management
- Limited offline functionality
- No multi-modal input support (e.g., images)
Troubleshooting
Common Issues
Issue | Possible Solution |
---|---|
Connection refused | Ensure the OpenAI server is running |
Tool not found | Check tool paths and permissions |
Out of memory | Reduce history size or split conversation |
Slow responses | Check network connection and server load |
Diagnostic Mode
Run with the -verbose flag to enable detailed logging:
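./bin/cliGCP -verbose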
This will show all API requests, responses, and tool interactions, which can be helpful for debugging.
4 - Artifact Storage API Reference
Complete API reference for the artifact storage endpoints in the OpenAI server
The OpenAI server provides a RESTful API for storing and retrieving generic artifacts (files). This API allows you to upload any type of file and retrieve it later using a unique identifier.
Authentication
The artifact API endpoints do not require authentication and are publicly accessible. In production environments, consider implementing authentication middleware as needed.
Content Types
The API supports any content type. Common examples include:
- text/plain - Text files
- application/json - JSON documents
- image/jpeg, image/png - Images
- application/pdf - PDF documents
- audio/webm, audio/wav - Audio files
- application/octet-stream - Binary files
Endpoints
Upload Artifact
Uploads a new artifact to the server.
Request:
POST /artifact/
Headers:
- Content-Type (required): MIME type of the file being uploaded
- X-Original-Filename (required): Original filename including extension
Request Body:
The raw binary content of the file to upload.
Response:
Success (201 Created):
{
  "artifactId": "7f33ee3d-b589-4b3c-b8c8-a9a3ee04eacf"
}
Response Headers:
- Location: URL where the artifact can be retrieved
- Content-Type: application/json
Error Responses:
400 Bad Request:
Missing 'Content-Type' or 'X-Original-Filename' header
413 Payload Too Large:
Error saving file: http: request body too large
500 Internal Server Error:
Could not create file on server
Example:
curl -X POST http://localhost:8080/artifact/ \
-H "Content-Type: text/plain" \
-H "X-Original-Filename: example.txt" \
--data-binary @example.txt
Retrieve Artifact
Downloads an artifact by its unique identifier.
Request:
GET /artifact/{artifactId}
Path Parameters:
- artifactId (required): UUID of the artifact to retrieve
Response:
Success (200 OK):
- Returns the original file content as binary data
Response Headers:
- Content-Type: Original MIME type of the file
- Content-Disposition: inline; filename="original-filename.ext"
- Content-Length: Size of the file in bytes
- Accept-Ranges: bytes (supports range requests)
- Last-Modified: Timestamp when the file was uploaded
Error Responses:
400 Bad Request:
Invalid artifact ID format
404 Not Found:
404 page not found
500 Internal Server Error:
Could not read artifact metadata
Corrupted artifact metadata
Example:
curl http://localhost:8080/artifact/7f33ee3d-b589-4b3c-b8c8-a9a3ee04eacf
Data Models
Each uploaded artifact has associated metadata stored in a .meta.json file:
{
  "originalFilename": "example.txt",
  "contentType": "text/plain",
  "size": 1024,
  "uploadTimestamp": "2025-09-19T12:01:11.277651Z"
}
Fields:
- originalFilename (string): The original name of the uploaded file
- contentType (string): MIME type of the file
- size (number): Size of the file in bytes
- uploadTimestamp (string): ISO 8601 timestamp of when the file was uploaded
Configuration
The artifact storage behavior can be configured using environment variables:
- ARTIFACT_PATH: Directory where artifacts are stored (default: ~/openaiserver/artifacts)
- MAX_UPLOAD_SIZE: Maximum file size in bytes (default: 52428800 = 50MB)
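For example, to relocate storage and raise the limit to 100MB (paths and values illustrative):
export ARTIFACT_PATH="/data/artifacts"
export MAX_UPLOAD_SIZE=104857600
./bin/openaiserver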
File Storage
Storage Structure
Artifacts are stored using the following directory structure:
${ARTIFACT_PATH}/
├── 7f33ee3d-b589-4b3c-b8c8-a9a3ee04eacf # Binary file content
├── 7f33ee3d-b589-4b3c-b8c8-a9a3ee04eacf.meta.json # Metadata file
├── 123e4567-e89b-12d3-a456-426614174000 # Another file
└── 123e4567-e89b-12d3-a456-426614174000.meta.json # Its metadata
File Naming
- Artifact files: Named using the UUID (no extension)
- Metadata files: Named using the UUID + .meta.json suffix
- UUIDs: Generated using the UUID v4 standard (RFC 4122)
Security Considerations
File Size Limits
- Default maximum upload size: 50MB
- Configurable via the MAX_UPLOAD_SIZE environment variable
- Requests exceeding the limit return HTTP 413 (Payload Too Large)
File Type Validation
- The API accepts any content type
- Content type validation is based on the Content-Type header
- No server-side file content inspection is performed
Path Traversal Protection
- Artifact IDs must be valid UUIDs
- Invalid UUID format returns HTTP 400 (Bad Request)
- File paths are constructed using filepath.Join() (see the sketch below)
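The server's handler is not reproduced here, but a minimal Go sketch of this kind of protection, assuming a plain UUID-to-path mapping (function and variable names are illustrative), looks like:

package main

import (
    "fmt"
    "path/filepath"
    "regexp"
)

// uuidRe matches the canonical textual UUID form (RFC 4122).
var uuidRe = regexp.MustCompile(`^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$`)

// artifactPath rejects anything that is not a UUID before touching the
// filesystem, so an input like "../etc/passwd" never reaches filepath.Join.
func artifactPath(storageDir, artifactID string) (string, error) {
    if !uuidRe.MatchString(artifactID) {
        return "", fmt.Errorf("invalid artifact ID format") // maps to HTTP 400
    }
    return filepath.Join(storageDir, artifactID), nil
}

func main() {
    p, err := artifactPath("/var/artifacts", "7f33ee3d-b589-4b3c-b8c8-a9a3ee04eacf")
    fmt.Println(p, err)
}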
Storage Directory
- Default storage path uses user home directory
- Tilde (~) expansion is supported
- Directory is created automatically with 0755 permissions
- Metadata files are created with 0644 permissions
Error Handling
Client Errors (4xx)
- 400 Bad Request: Invalid UUID format or missing required headers
- 404 Not Found: Artifact with the specified ID does not exist
- 413 Payload Too Large: File exceeds maximum upload size
Server Errors (5xx)
- 500 Internal Server Error: File system errors, metadata corruption, or server misconfiguration
Logging
All artifact operations are logged with structured logging:
INFO Artifact uploaded successfully artifactID=7f33ee3d-b589-4b3c-b8c8-a9a3ee04eacf filename=example.txt size=1024
DEBUG Artifact served artifactID=7f33ee3d-b589-4b3c-b8c8-a9a3ee04eacf filename=example.txt
ERROR Could not create artifact file error="permission denied" path=/artifacts/uuid
Rate Limiting
The artifact API does not implement built-in rate limiting. For production environments, consider implementing rate limiting at the reverse proxy level or using middleware.
CORS Support
The artifact API includes CORS headers to support web-based clients:
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, POST, PUT, DELETE, OPTIONS
Access-Control-Allow-Headers: Content-Type, Authorization, X-Requested-With
Access-Control-Allow-Credentials: true
Integration Examples
JavaScript/Fetch API
// Upload file
const uploadFile = async (file) => {
  const response = await fetch('/artifact/', {
    method: 'POST',
    headers: {
      'Content-Type': file.type,
      'X-Original-Filename': file.name
    },
    body: file
  });
  const result = await response.json();
  return result.artifactId;
};

// Download file
const downloadFile = async (artifactId) => {
  const response = await fetch(`/artifact/${artifactId}`);
  return response.blob();
};
Python/Requests
import os
import requests

# Upload file
def upload_file(file_path, content_type):
    with open(file_path, 'rb') as f:
        headers = {
            'Content-Type': content_type,
            'X-Original-Filename': os.path.basename(file_path)
        }
        response = requests.post('http://localhost:8080/artifact/',
                                 headers=headers, data=f)
    return response.json()['artifactId']

# Download file
def download_file(artifact_id, output_path):
    response = requests.get(f'http://localhost:8080/artifact/{artifact_id}')
    with open(output_path, 'wb') as f:
        f.write(response.content)
Go
package main

import (
    "encoding/json"
    "fmt"
    "io"
    "net/http"
    "os"
    "path/filepath"
)

// Upload file
func uploadFile(filePath, contentType string) (string, error) {
    file, err := os.Open(filePath)
    if err != nil {
        return "", err
    }
    defer file.Close()
    req, err := http.NewRequest("POST", "http://localhost:8080/artifact/", file)
    if err != nil {
        return "", err
    }
    req.Header.Set("Content-Type", contentType)
    req.Header.Set("X-Original-Filename", filepath.Base(filePath))
    client := &http.Client{}
    resp, err := client.Do(req)
    if err != nil {
        return "", err
    }
    defer resp.Body.Close()
    var result map[string]string
    if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
        return "", err
    }
    return result["artifactId"], nil
}

// Download file
func downloadFile(artifactID, outputPath string) error {
    resp, err := http.Get(fmt.Sprintf("http://localhost:8080/artifact/%s", artifactID))
    if err != nil {
        return err
    }
    defer resp.Body.Close()
    file, err := os.Create(outputPath)
    if err != nil {
        return err
    }
    defer file.Close()
    _, err = io.Copy(file, resp.Body)
    return err
}