Managed Variables¶
Managed variables let you define and reference configuration in your code, but control the runtime values from the Logfire UI without redeploying.
Define a variable once with a sensible default, deploy your application, then iterate on the values in production. You can target specific populations (opted-in beta users, internal developers, enterprise customers, etc.) using flexible targeting rules that integrate with your existing OpenTelemetry attributes.
Changes take effect quickly, and every variable resolution is visible in your traces. This trace-level visibility means you can correlate application behavior directly with configuration variants, enabling A/B testing, automated prompt optimization, and online evaluations using the same observability data you're already sending to Logfire.
What Are Managed Variables?¶
Managed variables are a way to externalize runtime configuration from your code. While they're especially powerful for AI applications (where rapid prompt iteration is often critical), they work for any configuration you want to change without redeploying:
- Any type: Use primitives (strings, bools, ints) or structured types (dataclasses, Pydantic models, etc.)
- Observability-integrated: Every variable resolution creates a span, and using the context manager automatically sets baggage so downstream operations are tagged with which variant was used
- Variants and rollouts: Define multiple values (variants) for a variable and control what percentage of requests get each variant
- Targeting: Route specific users or segments to specific variants based on attributes
Structured Configuration¶
While you can use simple primitive types as variables, the real power comes from using structured types—Pydantic models that group related configuration together:
from pydantic import BaseModel
import logfire
logfire.configure()
class AgentConfig(BaseModel):
"""Configuration for an AI agent."""
instructions: str
model: str
temperature: float
max_tokens: int
# Create a managed variable with this structured type
agent_config = logfire.var(
name='agent-config',
type=AgentConfig,
default=AgentConfig(
instructions='You are a helpful assistant.',
model='openai:gpt-4o-mini',
temperature=0.7,
max_tokens=500,
),
)
Why group configuration together instead of using separate variables?
- Coherent variants: A variant isn't just "instructions v2", it's a complete configuration where all the pieces work well together. The temperature that works with a detailed prompt might not work as well with a concise one.
- Atomic changes: When you roll out a new variant, all settings change together. No risk of mismatched configurations.
- Holistic A/B testing: Compare "config v1" vs "config v2" as complete packages, not individual parameters in isolation.
- Simpler management: One variable to manage in the UI instead of many.
When to use primitives
Simple standalone settings like feature flags (debug_mode: bool), rate limits (max_requests: int), or even just agent instructions work great as primitive variables. Use structured types when you have multiple settings you want to vary together.
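For example, a standalone flag or limit can be defined with just a name, a primitive type, and a default. A minimal sketch (the variable names and defaults here are illustrative):

```python
import logfire

logfire.configure()

# Illustrative primitive variables; names and defaults are examples only
debug_mode = logfire.var(name='debug-mode', type=bool, default=False)
max_requests = logfire.var(name='max-requests', type=int, default=100)

# Resolved the same way as structured variables
with debug_mode.get() as flag:
    if flag.value:
        ...  # enable verbose behavior
```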
Why This Is So Useful For AI Applications¶
In AI applications, prompts and model configurations are often critical to application behavior. Some changes are minor tweaks that don't significantly affect outputs, while others can have substantial positive or negative consequences. The traditional iteration process looks like:
- Edit the code
- Open a PR and get it reviewed
- Merge and deploy
- Wait to see the effect in production
This process is problematic for AI configuration because:
- Production data is essential: Useful AI agents often need access to production data and real user interactions. Testing locally or in staging environments rarely captures the full range of inputs your application will encounter.
- Representative testing is hard: Even a fast deployment cycle adds significant friction when you're iterating on prompts. What works in a test environment may behave differently with real user queries.
- Risk affects all users: Without targeting controls, every change affects your entire user base immediately.
With managed variables, you can iterate safely in production:
- Iteration speed: Edit prompts in the Logfire UI and see the effect in real traces immediately
- A/B testing: Run multiple prompt/model/temperature combinations simultaneously and compare their performance in your traces
- Gradual rollouts: Start a new configuration at 5% of traffic, watch the metrics, then gradually increase
- Emergency rollback: If a configuration is causing problems, revert to the previous variant in seconds, with no deploy required
How It Works¶
Here's the typical workflow using the AgentConfig example from above:
- Define the variable in code with your current configuration as the default
- Deploy your application: it starts using the default immediately
- Create the variable in the Logfire UI with your initial value
- Add variants: create additional variants like v2-detailed with different configurations
- Set up a rollout: start with 10% of traffic going to the new variant
- Monitor in real-time: filter traces by variant to compare response quality, latency, and token usage
- Adjust based on data: if v2 performs better, gradually increase to 50%, then 100%
- Iterate: create new variants, adjust rollouts, all without code changes
Managing Variables in the Logfire UI¶
The Logfire web UI provides a complete interface for managing your variables without any code changes. You can find it under Settings > Variables in your project.
Creating a Variable¶
To create a new variable, click New variable and fill in:
- Name: A unique identifier using lowercase letters, numbers, and hyphens (e.g., agent-config, feature-flag)
- Description: Optional text explaining what the variable controls
- Value Type: Choose from:
- Text: Plain text values, ideal for prompts and messages
- Number: Numeric values for thresholds, limits, etc.
- Boolean: True/false flags for feature toggles
- JSON: Complex structured data matching your Pydantic models
For JSON variables, you can optionally provide a JSON Schema to validate variant values.
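If you push the variable from code (see Pushing Variables from Code below), this schema is generated automatically from your Pydantic model; when creating the variable manually, you can supply an equivalent schema yourself. A sketch matching the AgentConfig model above:

```python
# JSON Schema for the agent-config variable (a sketch matching AgentConfig above)
agent_config_schema = {
    'type': 'object',
    'properties': {
        'instructions': {'type': 'string'},
        'model': {'type': 'string'},
        'temperature': {'type': 'number'},
        'max_tokens': {'type': 'integer'},
    },
}
```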
Working with Variants¶
Each variable can have multiple variants—different values that can be served to different users or traffic segments.
To add variants:
- Click Add Variant in the Variants section
- Enter a unique key for the variant (e.g., premium, experimental, v2-detailed)
- Provide an optional description
- Enter the value (the format depends on your value type)
Using the example value
When you push a variable from code using logfire.push_variables(), the code's default value is stored as an "example". This example appears pre-filled when you create a new variant in the UI, making it easy to start from a working configuration and modify it.
Each variant tracks its version history, accessible via the View history button. You can also browse all variant history using Browse All History to see changes over time or restore previous versions.
No variants = code default
If a variable has no variants configured, your application uses the code default value. This is the expected state immediately after push_variables(). You create variants in the UI when you want to serve different values to different users or run experiments.
Configuring Rollouts¶
The Default Rollout section controls what percentage of requests receive each variant. The weights must sum to 1.0 or less:
- Set default to 0.5 and premium to 0.5 for a 50/50 A/B test
- Set default to 0.9 and experimental to 0.1 for a 10% canary deployment
- If weights sum to less than 1.0, the remaining percentage uses your code's default value
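The same weights can be expressed in a code-based configuration (covered under Local Variables below). A minimal sketch, assuming variants with the keys used above already exist:

```python
from logfire.variables.config import Rollout

# 50/50 A/B test between the 'default' and 'premium' variants
ab_test = Rollout(variants={'default': 0.5, 'premium': 0.5})

# Weights summing to less than 1.0 leave the remainder to the code default:
# here roughly 90% of requests fall back to the value passed to logfire.var(default=...)
canary = Rollout(variants={'experimental': 0.1})
```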
Targeting with Override Rules¶
Rollout Overrides let you route specific users or segments to specific variants based on attributes. Rules are evaluated in order, and the first matching rule determines the rollout.
To add a targeting rule:
- Click Add Rule in the Rollout Overrides section
- Add one or more conditions (all conditions must match):
- Choose an attribute name (e.g., plan, region, is_beta_user)
- Select an operator (equals, does not equal, is in, is not in, matches regex, etc.)
- Enter the value to match
- Configure the rollout percentages when this rule matches
For example, to give enterprise customers the premium variant:
- Condition: plan equals enterprise
- Rollout: premium = 100%
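The same rule expressed in a code-based configuration (see Local Variables below) looks roughly like this sketch:

```python
from logfire.variables.config import Rollout, RolloutOverride, ValueEquals

# Enterprise customers always receive the 'premium' variant
enterprise_override = RolloutOverride(
    conditions=[ValueEquals(attribute='plan', value='enterprise')],
    rollout=Rollout(variants={'premium': 1.0}),
)
```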
Variable names must match
The variable name in the UI must exactly match the name parameter in your logfire.var() call. If they don't match, your application will use the code default instead of the remote configuration.
Quick Start¶
Define a Variable¶
Use logfire.var() to define a managed variable. Here's an example using a structured configuration:
from pydantic import BaseModel
import logfire
logfire.configure()
class AgentConfig(BaseModel):
"""Configuration for a customer support agent."""
instructions: str
model: str
temperature: float
max_tokens: int
# Define the variable with a sensible default
agent_config = logfire.var(
name='support-agent-config',
type=AgentConfig,
default=AgentConfig(
instructions='You are a helpful customer support agent. Be friendly and concise.',
model='openai:gpt-4o-mini',
temperature=0.7,
max_tokens=500,
),
)
Use the Variable¶
The recommended pattern is to use the variable's .get() method as a context manager. This automatically:
- Creates a span for the variable resolution
- Sets baggage with the variable name and selected variant
When using the Logfire SDK, baggage values are automatically added as attributes to all downstream spans. This means any spans created inside the context manager will be tagged with which variant was used, making it easy to filter and compare behavior by variant in the Logfire UI.
from pydantic_ai import Agent
async def handle_support_ticket(user_id: str, message: str) -> str:
"""Handle a customer support request."""
# Get the configuration - same user always gets the same variant
with agent_config.get(targeting_key=user_id) as config:
# Inside this context, baggage is set:
# logfire.variables.support-agent-config = <variant_name>
agent = Agent(
config.value.model,
system_prompt=config.value.instructions,
)
result = await agent.run(
message,
model_settings={
'temperature': config.value.temperature,
'max_tokens': config.value.max_tokens,
},
)
return result.output
The targeting_key ensures deterministic variant selection: the same user always gets the same variant, which is essential for consistent application behavior when A/B testing.
In practice, depending on your application structure, you may want to use tenant_id or another identifier for targeting_key instead of user_id. If no targeting_key is provided and there's an active trace, the trace_id is used automatically to ensure consistent behavior within a single request.
Variable Parameters¶
| Parameter | Description |
|---|---|
| name | Unique identifier for the variable |
| type | Expected type for validation; can be a primitive type or Pydantic model |
| default | Default value when no configuration is found (can also be a function) |
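Since default can be a function, you can defer building the default value until it is first needed. A sketch continuing the AgentConfig example above, assuming a zero-argument callable is accepted:

```python
import logfire

def default_agent_config() -> AgentConfig:
    # Built lazily instead of at import time (sketch; assumes a zero-argument callable)
    return AgentConfig(
        instructions='You are a helpful customer support agent. Be friendly and concise.',
        model='openai:gpt-4o-mini',
        temperature=0.7,
        max_tokens=500,
    )

agent_config = logfire.var(
    name='support-agent-config',
    type=AgentConfig,
    default=default_agent_config,
)
```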
A/B Testing Configurations¶
Here's a complete example showing how to A/B test two complete agent configurations:
from pydantic import BaseModel
from pydantic_ai import Agent
import logfire
from logfire.variables.config import (
Rollout,
VariableConfig,
VariablesConfig,
Variant,
)
class AgentConfig(BaseModel):
"""Configuration for a customer support agent."""
instructions: str
model: str
temperature: float
max_tokens: int
# For local development/testing, you can define variants in code
# In production, you'd typically configure these in the Logfire UI
# and configure logfire to retrieve and sync with the remotely-managed config.
variables_config = VariablesConfig(
variables={
'support-agent-config': VariableConfig(
name='support-agent-config',
variants={
'v1-concise': Variant(
key='v1-concise',
serialized_value="""{
"instructions": "You are a helpful support agent. Be brief and direct.",
"model": "openai:gpt-4o-mini",
"temperature": 0.7,
"max_tokens": 300
}""",
description='Concise responses with faster model',
),
'v2-detailed': Variant(
key='v2-detailed',
serialized_value="""{
"instructions": "You are an expert support agent. Provide thorough explanations with examples. Always acknowledge the customer's concern before providing assistance.",
"model": "openai:gpt-4o",
"temperature": 0.3,
"max_tokens": 800
}""",
description='Detailed responses with more capable model',
),
},
# 50/50 A/B test
rollout=Rollout(variants={'v1-concise': 0.5, 'v2-detailed': 0.5}),
overrides=[],
json_schema={
'type': 'object',
'properties': {
'instructions': {'type': 'string'},
'model': {'type': 'string'},
'temperature': {'type': 'number'},
'max_tokens': {'type': 'integer'},
},
},
),
}
)
logfire.configure(
variables=logfire.VariablesOptions(config=variables_config),
)
# Define the variable
agent_config = logfire.var(
name='support-agent-config',
type=AgentConfig,
default=AgentConfig(
instructions='You are a helpful assistant.',
model='openai:gpt-4o-mini',
temperature=0.7,
max_tokens=500,
),
)
async def handle_ticket(user_id: str, message: str) -> str:
"""Handle a support ticket with A/B tested configuration."""
with agent_config.get(targeting_key=user_id) as config:
# The variant (v1-concise or v2-detailed) is now in baggage
# All spans created below, including those from the call to agent.run, will be tagged with the variant
agent = Agent(config.value.model, system_prompt=config.value.instructions)
result = await agent.run(
message,
model_settings={
'temperature': config.value.temperature,
'max_tokens': config.value.max_tokens,
},
)
return result.output
Analyzing the A/B test in Logfire:
After running traffic through both variants, you can:
- Filter traces by the variant baggage to see only requests that used a specific variant
- Compare metrics like response latency, token usage, and error rates between variants
- Look at actual responses to qualitatively assess which variant performs better
- Make data-driven decisions about which configuration to roll out to 100%
Targeting Users and Segments¶
Targeting Key¶
The targeting_key parameter ensures deterministic variant selection. The same key always produces the same variant, which is useful for:
- Consistent user experience: You typically want users to see consistent configuration behavior within a session, or even across sessions. You may also want all users within a single tenant to receive the same variant.
- Debugging: By controlling the targeting_key, you can deterministically get the same configuration variant that a user received. Note that this reproduces the configuration, not the exact behavior; if your application includes stochastic elements like LLM calls, outputs will still vary.
# User-based targeting
with agent_config.get(targeting_key=user_id) as config:
...
# Request-based targeting (if no targeting_key provided and there's an active trace,
# the trace ID is used automatically)
with agent_config.get() as config:
...
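For debugging, you can resolve the variable with a specific user's key and inspect which variant and value they would receive (a small sketch; the key is illustrative):

```python
# Reproduce the configuration a given user receives ('user-123' is illustrative)
with agent_config.get(targeting_key='user-123') as config:
    print(config.variant)  # e.g. 'v2-detailed'
    print(config.value)    # the resolved AgentConfig instance
```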
Attributes for Conditional Rules¶
Pass attributes to enable condition-based targeting:
with agent_config.get(
targeting_key=user_id,
attributes={
'plan': 'enterprise',
'region': 'us-east',
'is_beta_user': True,
},
) as config:
...
These attributes can be used in override rules to route specific segments to specific variants:
from logfire.variables.config import (
Rollout,
RolloutOverride,
ValueEquals,
VariableConfig,
VariablesConfig,
Variant,
)
variables_config = VariablesConfig(
variables={
'support-agent-config': VariableConfig(
name='support-agent-config',
variants={
'standard': Variant(
key='standard',
serialized_value='{"instructions": "Be helpful and concise.", ...}',
),
'premium': Variant(
key='premium',
serialized_value='{"instructions": "Provide detailed, thorough responses...", ...}',
),
},
# Default: everyone gets 'standard'
rollout=Rollout(variants={'standard': 1.0}),
overrides=[
# Enterprise plan users always get the premium variant
RolloutOverride(
conditions=[ValueEquals(attribute='plan', value='enterprise')],
rollout=Rollout(variants={'premium': 1.0}),
),
],
json_schema={'type': 'object'},
),
}
)
# Now when you call get() with attributes:
with agent_config.get(
targeting_key=user_id,
attributes={'plan': 'enterprise'}, # Matches the override condition
) as config:
# config.variant will be 'premium' because of the override
...
with agent_config.get(
targeting_key=user_id,
attributes={'plan': 'free'}, # Does not match override
) as config:
# config.variant will be 'standard' (the default rollout)
...
Automatic Context Enrichment¶
By default, Logfire automatically includes additional context when resolving variables:
- Resource attributes: OpenTelemetry resource attributes (service name, version, environment)
- Baggage: Values set via logfire.set_baggage()
This means your targeting rules can match against service identity or request-scoped baggage without explicitly passing them.
Example: Plan-based targeting with baggage
If your application sets the user's plan as baggage early in the request lifecycle, you can use it for targeting without passing it explicitly to every variable resolution:
# In your middleware or request handler, set the plan once
with logfire.set_baggage(plan='enterprise'):
# ... later in your application code ...
with agent_config.get(targeting_key=user_id) as config:
# The variable resolution automatically sees plan='enterprise'
# If you have an override targeting enterprise users, it will match
...
This is useful when you want different configurations based on user plan—for example, enterprise users might get a prompt variant that references tools only available to them.
Example: Environment-based targeting with resource attributes
Resource attributes like deployment.environment are automatically included, allowing you to use different configurations in different environments without code changes:
- Use a more experimental prompt on staging to test changes before production
- Enable verbose logging in development but not in production
- Route all staging traffic to a "debug" variant that includes extra context
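For example, routing all staging traffic to a dedicated variant could be expressed with an override on the deployment.environment resource attribute. A sketch (the debug variant key is illustrative):

```python
from logfire.variables.config import Rollout, RolloutOverride, ValueEquals

# Staging traffic always gets the 'debug' variant; deployment.environment is read
# from the OpenTelemetry resource attributes included in the context by default
staging_override = RolloutOverride(
    conditions=[ValueEquals(attribute='deployment.environment', value='staging')],
    rollout=Rollout(variants={'debug': 1.0}),
)
```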
To disable automatic context enrichment:
logfire.configure(
variables=logfire.VariablesOptions(
include_resource_attributes_in_context=False,
include_baggage_in_context=False,
),
)
Remote Variables¶
When connected to Logfire, variables are managed through the Logfire UI. This is the recommended setup for production.
To enable remote variables, you need to explicitly opt in using VariablesOptions:
import logfire
from logfire.variables.config import RemoteVariablesConfig
# Enable remote variables
logfire.configure(
variables=logfire.VariablesOptions(
config=RemoteVariablesConfig(),
),
)
# Define your variables
agent_config = logfire.var(
name='support-agent-config',
type=AgentConfig,
default=AgentConfig(...),
)
API Token Required
Remote variables require an API token with the project:read_variables scope. This is different from the write token (LOGFIRE_TOKEN) used to send traces and logs. Set it via the LOGFIRE_API_TOKEN environment variable or pass it directly to RemoteVariablesConfig(api_token=...).
How remote variables work:
- Your application connects to Logfire using your API token
- Variable configurations are fetched from the Logfire API
- A background thread polls for updates (default: every 30 seconds)
- When you change a variant or rollout in the UI, running applications pick up the change automatically on the next poll
Configuration options:
from datetime import timedelta

import logfire
from logfire.variables.config import RemoteVariablesConfig
logfire.configure(
variables=logfire.VariablesOptions(
config=RemoteVariablesConfig(
# Block until first fetch completes (default: True)
# Set to False if you want the app to start immediately using defaults
block_before_first_resolve=True,
# How often to poll for updates (default: 30 seconds)
polling_interval=timedelta(seconds=30),
),
),
)
Pushing Variables from Code¶
Instead of manually creating variables in the Logfire UI, you can push your variable definitions directly from your code using logfire.push_variables().
The primary benefit of pushing from code is automatic JSON schema generation. When you use a Pydantic model as your variable type, push_variables() automatically generates the JSON schema from your model definition. This means the Logfire UI will validate variant values against your schema, catching type errors before they reach production. Creating these schemas manually in the UI would be tedious and error-prone, especially for complex nested models.
from pydantic import BaseModel
import logfire
from logfire.variables.config import RemoteVariablesConfig
logfire.configure(
variables=logfire.VariablesOptions(
config=RemoteVariablesConfig(),
),
)
class AgentConfig(BaseModel):
"""Configuration for an AI agent."""
instructions: str
model: str
temperature: float
max_tokens: int
# Define your variables
agent_config = logfire.var(
name='agent-config',
type=AgentConfig,
default=AgentConfig(
instructions='You are a helpful assistant.',
model='openai:gpt-4o-mini',
temperature=0.7,
max_tokens=500,
),
)
# Push all registered variables to the remote provider
if __name__ == '__main__':
logfire.push_variables()
When you run this script, it will:
- Compare your local variable definitions with what exists in Logfire
- Show you a diff of what will be created or updated
- Prompt for confirmation before applying changes
No variants created by push_variables
When push_variables() creates a new variable, it does not create any variants. Instead, it stores your code's default value as an "example" that can be used as a template when creating variants in the Logfire UI. Until you create variants, your application will use the code default.
Example output:
=== Variables to CREATE ===
+ agent-config
Example value: {"instructions":"You are a helpful assistant.","model":"openai:gpt-4o-mini","temperature":0.7,"max_tokens":500}
Apply these changes? [y/N] y
Applying changes...
Successfully applied changes.
Options:
| Parameter | Description |
|---|---|
| variables | List of specific variables to push. If not provided, all registered variables are pushed. |
| dry_run | If True, shows what would change without actually applying changes. |
| yes | If True, skips the confirmation prompt. |
| strict | If True, fails if any existing variants in Logfire are incompatible with your new schema. |
Pushing specific variables:
feature_flag = logfire.var(name='feature-enabled', type=bool, default=False)
max_retries = logfire.var(name='max-retries', type=int, default=3)
# Push only the feature flag
logfire.push_variables([feature_flag])
# Dry run to see what would change
logfire.push_variables(dry_run=True)
# Skip confirmation prompt (useful in CI/CD)
logfire.push_variables(yes=True)
Schema Updates
When you push a variable that already exists in Logfire, push_variables will update the JSON schema if it has changed but will preserve existing variants and rollout configurations. If existing variant values are incompatible with the new schema, you'll see a warning (or an error if using strict=True).
Validating Variables¶
You can validate that your remote variable configurations match your local type definitions using logfire.validate_variables():
import logfire
from logfire.variables import ValidationReport
# Validate all registered variables
report: ValidationReport = logfire.validate_variables()
if report.has_errors:
print('Validation errors found:')
print(report.format())
else:
print('All variables are valid!')
# Check specific issues
if report.variables_not_on_server:
print(f'Variables missing from server: {report.variables_not_on_server}')
The ValidationReport provides detailed information about validation results:
| Property | Description |
|---|---|
| has_errors | True if any validation errors were found |
| errors | List of variant validation errors with details |
| variables_checked | Number of variables that were validated |
| variables_not_on_server | Names of local variables not found on the server |
| description_differences | Variables where local and server descriptions differ |
| format() | Returns a human-readable string of the validation results |
This is useful in CI/CD pipelines to catch configuration drift where someone may have edited a variant value in the UI that no longer matches your expected type.
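A minimal CI check might run the validation and fail the build when drift is detected. A sketch, assuming remote variables are configured and the modules containing your logfire.var() definitions have been imported:

```python
import sys

import logfire
from logfire.variables.config import RemoteVariablesConfig

logfire.configure(variables=logfire.VariablesOptions(config=RemoteVariablesConfig()))

# ... import the modules that define your variables with logfire.var(...) ...

report = logfire.validate_variables()
if report.has_errors or report.variables_not_on_server:
    print(report.format())
    sys.exit(1)  # fail the CI job on configuration drift
```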
Config File Workflow¶
For more control over your variable configurations, you can work with config files directly. This workflow allows you to:
- Generate a template config from your code
- Edit the config locally (add variants, rollouts, overrides)
- Push the complete config to Logfire
- Pull existing configs for backup or migration
Generating a config template:
import logfire
from logfire.variables import VariablesConfig
# Define your variables
agent_config = logfire.var(name='agent-config', type=AgentConfig, default=AgentConfig(...))
feature_flag = logfire.var(name='feature-enabled', type=bool, default=False)
# Generate a config with name, schema, and example for each variable
config = logfire.generate_config()
# Save to a YAML file (JSON also supported)
config.write('variables.yaml')
The generated file will look like:
variables:
agent-config:
name: agent-config
variants: {}
rollout:
variants: {}
overrides: []
json_schema:
type: object
properties:
instructions: {type: string}
model: {type: string}
temperature: {type: number}
max_tokens: {type: integer}
example: '{"instructions":"You are a helpful assistant.","model":"openai:gpt-4o-mini","temperature":0.7,"max_tokens":500}'
feature-enabled:
name: feature-enabled
variants: {}
rollout:
variants: {}
overrides: []
json_schema: {type: boolean}
example: 'false'
Editing and syncing:
Edit the YAML file to add variants and rollouts:
variables:
agent-config:
name: agent-config
variants:
concise:
key: concise
serialized_value: '{"instructions":"Be brief.","model":"openai:gpt-4o-mini","temperature":0.7,"max_tokens":300}'
detailed:
key: detailed
serialized_value: '{"instructions":"Provide thorough explanations.","model":"openai:gpt-4o","temperature":0.3,"max_tokens":1000}'
rollout:
variants:
concise: 0.8
detailed: 0.2
overrides: []
json_schema: {...}
Then sync to Logfire:
import logfire
from logfire.variables import VariablesConfig
# Read the edited config
config = VariablesConfig.read('variables.yaml')
# Sync to the server
logfire.sync_config(config)
Sync modes:
| Mode | Description |
|---|---|
| 'merge' (default) | Only create/update variables in the config. Other variables on the server are unchanged. |
| 'replace' | Make the server match the config exactly. Variables not in the config will be deleted. |
# Partial sync - only update variables in the config
logfire.sync_config(config, mode='merge')
# Full sync - delete server variables not in config
logfire.sync_config(config, mode='replace')
# Preview changes without applying
logfire.sync_config(config, dry_run=True)
Pulling existing config:
# Fetch current config from server
server_config = logfire.pull_config()
# Save for backup or migration
server_config.write('backup.yaml')
# Merge with local changes
merged = server_config.merge(local_config)
VariablesConfig methods:
| Method | Description |
|---|---|
| VariablesConfig.read(path) | Read config from JSON or YAML file |
| config.write(path) | Write config to JSON or YAML file |
| VariablesConfig.from_json(string) | Parse from JSON string |
| VariablesConfig.from_yaml(string) | Parse from YAML string |
| config.to_json() | Convert to JSON string |
| config.to_yaml() | Convert to YAML string |
| config.to_dict() | Convert to dictionary |
| config.merge(other) | Merge with another config (other takes precedence) |
| VariablesConfig.from_variables(vars) | Create minimal config from Variable instances |
YAML vs JSON
YAML is recommended for config files because it's more readable and supports comments. JSON is available for programmatic use. The format is auto-detected from the file extension (.yaml, .yml, or .json).
Local Variables¶
For development, testing, or self-hosted deployments, you can configure variables locally using VariablesConfig:
import logfire
from logfire.variables.config import (
Rollout,
RolloutOverride,
ValueEquals,
VariableConfig,
VariablesConfig,
Variant,
)
variables_config = VariablesConfig(
variables={
'support-agent-config': VariableConfig(
name='support-agent-config',
variants={
'default': Variant(
key='default',
serialized_value='{"instructions": "...", "model": "...", "temperature": 0.7, "max_tokens": 500}',
),
'premium': Variant(
key='premium',
serialized_value='{"instructions": "...", "model": "...", "temperature": 0.3, "max_tokens": 1000}',
),
},
# Default: everyone gets 'default'
rollout=Rollout(variants={'default': 1.0}),
overrides=[
# Enterprise users get 'premium'
RolloutOverride(
conditions=[ValueEquals(attribute='plan', value='enterprise')],
rollout=Rollout(variants={'premium': 1.0}),
),
],
json_schema={'type': 'object'},
),
}
)
logfire.configure(
variables=logfire.VariablesOptions(config=variables_config),
)
When to use local variables:
- Development: Test different configurations without connecting to Logfire
- Testing: Use fixed configurations in your test suite
- Self-hosted: Full control over variable configuration without external dependencies
- Optimization harnesses: Build automated optimization loops that monitor performance metrics and programmatically update variable values
The local provider exposes methods to create, update, and delete variables and variants programmatically. This makes it possible to build optimization harnesses that:
- Run your application with different configurations
- Collect performance metrics from traces
- Use the metrics to decide on new configurations to try
- Update variable values via the local provider's API
- Repeat until optimal configuration is found
This workflow is particularly useful for automated prompt optimization, where you want to systematically explore different prompt variations and measure their effectiveness.
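One way to prototype such a loop without touching the provider directly is the override() mechanism documented under Advanced Usage below. A sketch continuing the AgentConfig example, where evaluate() and the candidate configurations are placeholders you would supply:

```python
# Hypothetical optimization loop; evaluate() is a placeholder you would implement,
# e.g. by running traffic with the candidate config and scoring the resulting traces.
candidates = [
    AgentConfig(instructions='Be brief.', model='openai:gpt-4o-mini', temperature=0.7, max_tokens=300),
    AgentConfig(instructions='Be thorough.', model='openai:gpt-4o', temperature=0.3, max_tokens=800),
]

def evaluate() -> float:
    """Run your workload and return a quality score (placeholder)."""
    raise NotImplementedError

scores: list[float] = []
for candidate in candidates:
    # Within this context, agent_config.get() resolves to the candidate value
    with agent_config.override(candidate):
        scores.append(evaluate())

best = candidates[scores.index(max(scores))]
```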
Configuration Reference¶
Variants and Rollouts¶
VariableConfig - Full configuration for a variable:
| Field | Description |
|---|---|
| name | Variable name (must match the name in logfire.var()) |
| variants | Dict of variant key to Variant objects |
| rollout | Default Rollout specifying variant weights |
| overrides | List of RolloutOverride for conditional targeting |
| json_schema | JSON Schema for validation (optional) |
| description | Human-readable description (optional) |
| aliases | Alternative names that resolve to this variable (optional, for migrations) |
| example | JSON-serialized example value, used as template in UI (optional) |
Variant - A single variant value:
| Field | Description |
|---|---|
| key | Unique identifier for this variant |
| serialized_value | JSON-serialized value |
| description | Human-readable description (optional) |
Rollout - Variant selection weights:
| Field | Description |
|---|---|
| variants | Dict of variant key to weight (0.0-1.0). Weights should sum to 1.0 or less. |
If weights sum to less than 1.0, there's a chance no variant is selected and the code default is used.
Condition Types¶
Overrides use conditions to match against attributes:
| Condition | Description |
|---|---|
| ValueEquals | Attribute equals a specific value |
| ValueDoesNotEqual | Attribute does not equal a specific value |
| ValueIsIn | Attribute is in a list of values |
| ValueIsNotIn | Attribute is not in a list of values |
| ValueMatchesRegex | Attribute matches a regex pattern |
| ValueDoesNotMatchRegex | Attribute does not match a regex pattern |
| KeyIsPresent | Attribute key exists |
| KeyIsNotPresent | Attribute key does not exist |
Override Example¶
from logfire.variables.config import (
KeyIsPresent,
Rollout,
RolloutOverride,
ValueEquals,
ValueIsIn,
)
overrides = [
# Beta users in US/UK get the experimental variant
RolloutOverride(
conditions=[
ValueEquals(attribute='is_beta', value=True),
ValueIsIn(attribute='country', values=['US', 'UK']),
],
rollout=Rollout(variants={'experimental': 1.0}),
),
# Anyone with a custom config attribute gets the custom variant
RolloutOverride(
conditions=[KeyIsPresent(attribute='custom_config')],
rollout=Rollout(variants={'custom': 1.0}),
),
]
Conditions within an override are AND-ed together. Overrides are evaluated in order; the first matching override's rollout is used.
Advanced Usage¶
Contextual Overrides¶
Use variable.override() to temporarily override a variable's value within a context. This is useful for testing:
def test_premium_config_handling():
"""Test that premium configuration works correctly."""
premium_config = AgentConfig(
instructions='Premium instructions...',
model='openai:gpt-4o',
temperature=0.3,
max_tokens=1000,
)
with agent_config.override(premium_config):
# Inside this context, agent_config.get() returns premium_config
with agent_config.get() as config:
assert config.value.model == 'openai:gpt-4o'
# Back to normal after context exits
Dynamic Override Functions¶
Override with a function that computes the value based on context:
from collections.abc import Mapping
from typing import Any
def get_config_for_context(
targeting_key: str | None, attributes: Mapping[str, Any] | None
) -> AgentConfig:
"""Compute configuration based on context."""
if attributes and attributes.get('mode') == 'creative':
return AgentConfig(
instructions='Be creative and expressive...',
model='openai:gpt-4o',
temperature=1.0,
max_tokens=1000,
)
return AgentConfig(
instructions='Be precise and factual...',
model='openai:gpt-4o-mini',
temperature=0.2,
max_tokens=500,
)
with agent_config.override(get_config_for_context):
# Configuration will be computed based on the attributes passed to get()
with agent_config.get(attributes={'mode': 'creative'}) as config:
assert config.value.temperature == 1.0
Refreshing Variables¶
Variables are automatically refreshed in the background when using the remote provider. You can also manually trigger a refresh:
# Synchronous refresh
agent_config.refresh_sync(force=True)
# Async refresh
await agent_config.refresh(force=True)
The force=True parameter bypasses the polling interval check and fetches the latest configuration immediately.


