Context Engineering, Finally
Design, version, and observe the full context of your LLM application. #JustLookAtYourData
Built for context engineering
The Editor
A real editor for LLM context. Rich text, markdown, mermaid diagrams, and code blocks with syntax highlighting. Embed images and PDFs directly—see them exactly as they're passed to providers.
Define typed variables that map to your database and API types. Not just primitives—full complex objects, arrays, and nested structures with auto-generated Pydantic and Zod models.
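The generated models pair a snake_case input model with a camelCase rendered shape for template compilation. A minimal sketch of that pattern, using a stdlib dataclass in place of the generated Pydantic model (the `User`/`UserRendered` names are illustrative, not Moxn output):

```python
# Illustrative sketch of a generated variable model's shape.
# Stdlib dataclass stands in for Pydantic; names are hypothetical.
from dataclasses import dataclass
from typing import TypedDict


class UserRendered(TypedDict):
    """camelCase keys, ready for prompt compilation"""
    userName: str
    signupYear: int


@dataclass
class User:
    user_name: str
    signup_year: int

    def render(self) -> UserRendered:
        # Map snake_case fields to the camelCase keys templates expect
        return {"userName": self.user_name, "signupYear": self.signup_year}


rendered = User(user_name="Alex", signup_year=2023).render()
# rendered == {"userName": "Alex", "signupYear": 2023}
```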


#JustLookAtYourData
See exactly what your LLM received and returned—rendered visually, not as raw JSON. Every message, every image, every variable. Filter, search, and trace back from outputs to the context that produced them.
Version Control
Branch, iterate, commit. Test changes safely before shipping. Roll back when needed.

From design to production
Close the loop between authoring context and observing results.
DESIGN
Editor
GENERATE
Type-safe
INVOKE
SDK
OBSERVE
Traces
Type-safe from design to runtime
Auto-generate Pydantic models from your schemas. Invoke with full type safety.
```python
# Auto-generated from Moxn schema
from datetime import date
from typing import TypedDict

from pydantic import BaseModel


class DocumentSearchItemRendered(TypedDict):
    """TypedDict for prompt variable compilation"""
    title: str
    chunkIndex: int
    query: str


class DocumentSearchItem(BaseModel):
    title: str
    chunk_index: int
    query: str

    def render(self) -> DocumentSearchItemRendered:
        return {
            "title": self.title,
            "chunkIndex": self.chunk_index,
            "query": self.query,
        }


class ProductHelpInputRendered(TypedDict):
    """TypedDict for prompt variable compilation"""
    userName: str
    today: str
    documentSearchContext: str


class ProductHelpInput(BaseModel):
    """Type-safe input for product-help prompt"""
    user_name: str
    today: date
    document_search_context: list[DocumentSearchItem]

    def render(self) -> ProductHelpInputRendered:
        # Render each item, then combine into XML
        docs_xml = "<documents>\n"
        for i, item in enumerate(self.document_search_context):
            rendered = item.render()
            docs_xml += f'  <doc id="{i}" title="{rendered["title"]}" '
            docs_xml += f'chunk="{rendered["chunkIndex"]}"/>\n'
        docs_xml += "</documents>"
        return {
            "userName": self.user_name,
            "today": str(self.today),
            "documentSearchContext": docs_xml,
        }
```

```python
import asyncio
from datetime import date

from anthropic import AsyncAnthropic
from moxn import MoxnClient
from moxn.types.content import Provider

from models import DocumentSearchItem, ProductHelpInput

anthropic = AsyncAnthropic()  # Uses ANTHROPIC_API_KEY env var


async def main() -> None:
    async with MoxnClient() as client:
        # Fetch prompt template
        prompt = await client.get_prompt(
            prompt_id="a8a2078d-...",
            branch_name="main",
        )

        # Create typed input with nested items
        input_data = ProductHelpInput(
            user_name="Alex",
            today=date.today(),
            document_search_context=[
                DocumentSearchItem(
                    title="Password Reset Guide",
                    chunk_index=0,
                    query="how to reset password",
                ),
                DocumentSearchItem(
                    title="Account Settings",
                    chunk_index=2,
                    query="change email",
                ),
            ],
        )

        # Render compiles to XML automatically
        # input_data.render()["documentSearchContext"] ->
        # <documents>
        #   <doc id="0" title="Password Reset Guide" chunk="0"/>
        #   <doc id="1" title="Account Settings" chunk="2"/>
        # </documents>
        session = prompt.to_prompt_session(input_data)

        # to_invocation() creates provider-specific payloads
        # Switch providers without changing code:
        # payload = session.to_invocation(provider=Provider.OPENAI_CHAT)
        payload = session.to_invocation(provider=Provider.ANTHROPIC)
        response = await anthropic.messages.create(**payload)

        # Log telemetry with provider-aware response parsing
        await client.log_telemetry_event_from_response(
            session, response, provider=Provider.ANTHROPIC
        )


asyncio.run(main())
```

Full observability
Trace every LLM call. See the full context. Debug with confidence.
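At its core, a trace ties an output back to the exact context that produced it. A minimal sketch of that idea — the `Span` structure and its field names below are illustrative, not the actual Moxn telemetry schema:

```python
# Hypothetical sketch of what a trace span can capture.
# Field names are illustrative, not the Moxn telemetry schema.
from dataclasses import dataclass, field


@dataclass
class Span:
    prompt_id: str
    branch: str
    provider: str
    rendered_messages: list[dict]  # the exact context the LLM received
    response_text: str             # what it returned
    latency_ms: float
    children: list["Span"] = field(default_factory=list)  # nested calls


root = Span(
    prompt_id="a8a2078d-...",
    branch="main",
    provider="anthropic",
    rendered_messages=[{"role": "user", "content": "how to reset password"}],
    response_text="Go to Settings, then Security...",
    latency_ms=412.0,
)

# Tracing back: from an output, recover the exact input context
bad_output_context = root.rendered_messages
```

Hierarchical spans like this are what make the trace explorer's drill-down possible: every child call keeps a pointer to the context it was given.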

Trace Explorer: Hierarchical view of all LLM calls with timing and metadata

Span Detail: See exactly what your LLM received and returned

Studio: Test your context before shipping
The context engineering problem
Context is invisible
You can't see what your LLM actually receives. Debugging means printf and prayer.
No proper editor
Building context in strings, YAML, and notebooks. No tooling for structured content.
No iteration loop
Can't trace back from bad outputs to the context that caused them.
Simple, transparent pricing
Start free, scale as you grow.
Individual
Perfect for personal projects
- Unlimited prompts
- Version control
- API access
- 1,000 LLM traces/month
- 7-day log retention
- Email support
Team
Built for collaboration
- Everything in Individual
- Up to 5 team members
- 5,000 LLM traces/user/month
- 30-day log retention
- Team collaboration
- Priority support
Enterprise
For large teams and advanced needs
- Everything in Team
- Unlimited seats
- Unlimited LLM traces
- 365-day log retention
- SSO/SAML
- Dedicated support
- Custom integrations