Platform Overview

Application Main Screen

Applications

Applications are containers that group together cooperative guardrails and shared data assets. You can think of them as projects. You must create an application before creating your first guardrail.

An application in the platform corresponds loosely to the verification assets for an external application. The application's Context Document allows a "hook" in which application-wide information, such as a list of facts or custom terms, can be injected automatically into all guardrails within the application.

Applications can also contain DAG (Directed Acyclic Graph) configurations that define complex workflows combining multiple guardrails with custom orchestration logic. These DAGs enable sophisticated verification pipelines beyond simple individual guardrail execution.
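The platform's actual DAG configuration schema is not shown here, but the idea can be sketched in plain Python: each node is a guardrail step, edges are its dependencies, and because the graph is acyclic a valid execution order always exists. All node names below are illustrative assumptions, not real guardrails.

```python
from graphlib import TopologicalSorter

# Hypothetical DAG: maps each step to the steps it depends on.
# These guardrail names are invented for illustration only.
dag = {
    "extract_claims": [],                            # no dependencies; runs first
    "fact_check": ["extract_claims"],                # needs the extracted claims
    "policy_check": ["extract_claims"],
    "human_review": ["fact_check", "policy_check"],  # final gate over both checks
}

# Acyclicity guarantees a topological order exists; this is the
# "custom orchestration logic" in miniature.
order = list(TopologicalSorter(dag).static_order())
print(order)
```

Running this prints one valid execution order: `extract_claims` first, the two independent checks next, and `human_review` last.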

For detailed information about creating, managing, and configuring applications and DAG workflows, see the Applications documentation.

Guardrails

Guardrails are the primary verification assets created in this platform. Each defines specific procedures that verify the outputs of an application using a particular technique or along a specific dimension. Guardrails come in three primary categories:

  1. Agentic: General-purpose guardrails that use an LLM to "reason" about the output of another LLM. These have the benefit of generality and can address a wide swath of hallucinations and other error classes, but they ultimately rely on an LLM to "reason": they can effectively reduce error rates, but cannot provably eliminate them.
  2. Neurosymbolic: Integrating LLM outputs into formal symbolic logic can perform reasoning tasks with provably perfect accuracy (subject to garbage in, garbage out). The tradeoff is that the symbolic reasoning must typically be constrained to a specific policy. To bridge natural language and the symbolic language, Jaxon's neurosymbolic guardrails use LLMs for data extraction rather than verification reasoning (an easier, better-suited task), with optional human review of the extracted data for guaranteed consistency.
  3. Manual: Some processes warrant direct human verification. Jaxon offers a manual Human Review guardrail that can implement these manual steps.
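The neurosymbolic split described above can be sketched in a few lines: an extraction step (standing in for the LLM, which pulls structured data out of free text) followed by an ordinary symbolic rule check, whose verdict is exact for the stated policy. The regex extractor, the dollar-limit policy, and all function names are illustrative assumptions, not the platform's API.

```python
import re

def extract_amount(text: str):
    """Stand-in for LLM data extraction: pull a dollar amount from text."""
    match = re.search(r"\$([\d,]+(?:\.\d+)?)", text)
    return float(match.group(1).replace(",", "")) if match else None

def symbolic_check(amount, limit=10_000.0):
    """Symbolic policy: amounts over the limit (or missing) fail the check.
    This step is plain logic, so its verdict is exact for this policy."""
    return amount is not None and amount <= limit

claim = "Approve a transfer of $12,500 to the vendor."
amount = extract_amount(claim)       # 12500.0
print(symbolic_check(amount))        # prints False: 12500.0 exceeds the limit
```

The point of the split is that any residual LLM error is confined to the extraction step, which is exactly where the optional human review of extracted data slots in.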

Playground

The Playground provides an interactive testing environment where you can quickly test and validate guardrails and applications without writing code. It's accessible through the Playground tab in the admin interface and serves as a crucial development and debugging tool.

The Playground supports:

  • Individual Guardrail Testing: Test specific guardrails with custom inputs to verify behavior and tune configurations
  • Application DAG Testing: Execute complete workflow scenarios to validate complex guardrail orchestrations
  • Response Analysis: Switch between human-readable Table View and technical JSON View for comprehensive result analysis
  • Rapid Iteration: Quickly test changes and refinements without deployment cycles

The Playground automatically handles the underlying infrastructure, message routing, and response collection, allowing you to focus on verification logic and results. It's designed to be the primary tool for guardrail development, testing, and validation workflows.

For comprehensive usage instructions, see the Playground documentation.

Human Review

Jaxon offers a Human Review dashboard for manual verification and correction of guardrail inputs and outputs. It can be used for guardrail testing, but it is intended for analysts and reviewers who are incorporated into a verification process, such as the manual Human Review guardrail and the review of extracted data in neurosymbolic guardrails described above. See elsewhere in this documentation for specific usages of the Human Review dashboard.

Dashboards

The system dashboards give an overall perspective on guardrail usage and statistics.

Logs

System logs are available both through Docker (programmatically) and in the admin interface for manual review and monitoring.