# System Requirements
The following are general recommendations for running the Jaxon platform. The platform runs on any Docker-compatible system (amd64 architecture), and exact resource needs vary with workload and the number of concurrent evaluations.
## Hardware
| Component | Minimum | Recommended |
|---|---|---|
| CPU | 4 cores | 8+ cores (e.g., AMD Ryzen 7950X3D, Intel i9-14900K) |
| RAM | 32 GB | 64 GB+ |
| Disk | 128 GB | 256 GB+ SSD |
| GPU | Not required | Recommended if running local LLMs |
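A quick way to compare a host against the minimums above is a short shell check. This is a sketch assuming a Linux host with GNU coreutils (`nproc`, `free`, `df`); the thresholds mirror the table's minimum column.

```shell
# Compare host resources against the documented minimums
# (4 CPU cores, 32 GB RAM, 128 GB disk).
cores=$(nproc)
ram_gb=$(free -g | awk '/^Mem:/ {print $2}')
disk_gb=$(df -BG --output=avail / | tail -1 | tr -dc '0-9')

[ "$cores" -ge 4 ]     || echo "WARN: only $cores CPU cores (minimum 4)"
[ "$ram_gb" -ge 32 ]   || echo "WARN: only ${ram_gb} GB RAM (minimum 32)"
[ "$disk_gb" -ge 128 ] || echo "WARN: only ${disk_gb} GB free disk on / (minimum 128)"
```

If the deployment lives on a dedicated volume rather than `/`, point the `df` check at that mount point instead.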
## GPU Requirements
A GPU is needed only if you plan to run local LLMs; when using cloud-based LLM providers (OpenAI, Anthropic, etc.), none is required. GPU sizing depends on which local models you intend to run.
## Software
| Component | Requirement |
|---|---|
| Operating System | Ubuntu 22.04+ or another modern Linux distribution |
| Docker | Version 27.x with Docker Compose V2 |
| unzip | Required for deployment package extraction |
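The software prerequisites can be verified in one pass. This is a minimal sketch: it only confirms each tool is on `PATH` and that the Compose V2 plugin responds; checking the exact Docker 27.x version is left to `docker --version`.

```shell
# Report any missing prerequisite from the software table above.
for tool in docker unzip; do
    command -v "$tool" >/dev/null 2>&1 || echo "MISSING: $tool"
done
# Compose V2 ships as a Docker CLI plugin, so test the subcommand itself.
docker compose version >/dev/null 2>&1 || echo "MISSING: docker compose (V2 plugin)"
```

No output means all three prerequisites were found.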
## Network
The platform runs as a Docker Compose stack with multiple services. All inter-service communication happens within the Docker network. External access is through a single edge-proxy endpoint (default port 8443, HTTPS).
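After the stack is up, the edge-proxy endpoint can be probed from the host. This sketch assumes the default port 8443 from above and a deployment reachable at `localhost`; `-k` skips certificate verification in case the deployment uses a self-signed certificate.

```shell
# Print the HTTP status code returned by the edge proxy
# ("000" means the connection itself failed, e.g. stack not running).
curl -sk -o /dev/null -w '%{http_code}\n' https://localhost:8443/
```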
## LLM Connectivity
If using cloud-based LLM providers, the host machine needs outbound HTTPS access to the provider endpoints (e.g., api.openai.com). If running in an air-gapped environment, configure local LLM support instead.
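Outbound HTTPS reachability can be confirmed with a single request. This is a sketch using the api.openai.com endpoint mentioned above; an authentication error such as 401 still proves connectivity, since only timeouts and DNS failures indicate a blocked egress path.

```shell
# Print the HTTP status from the provider endpoint; any numeric status
# other than "000" means outbound HTTPS is working.
curl -s -o /dev/null -w '%{http_code}\n' --max-time 10 https://api.openai.com/v1/models
```

In an air-gapped environment this check is expected to fail; configure local LLM support as described above.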