When you trace LLM calls with LangSmith, you often want to track costs, compare model configurations, and analyze performance across different providers. LangSmith’s native integrations (like LangChain or the OpenAI/Anthropic wrappers) handle this automatically, but custom model wrappers and self-hosted models require a standardized way to provide this information. LangSmith uses ls_ metadata parameters for this purpose.
These metadata parameters (all prefixed with ls_) let you pass model configuration and identification information through the standard metadata field. Once set, LangSmith can automatically calculate costs, display model information in the UI, and enable filtering and analytics across your traces.
Use ls_ metadata parameters to:
- Enable automatic cost tracking for custom or self-hosted models by identifying the provider and model name.
- Track model configuration like temperature, max tokens, and other parameters for experiment comparison.
- Filter and analyze traces by provider or configuration settings.
- Improve debugging by recording exactly which model settings were used for each run.
Basic usage example
The most common use case is enabling cost tracking for custom model wrappers. To do this, you need to provide two key pieces of information: the provider name (ls_provider) and the model name (ls_model_name). These work together to match against LangSmith’s pricing database.
```python
from langsmith import traceable

@traceable(
    run_type="llm",
    metadata={
        "ls_provider": "my_provider",
        "ls_model_name": "my_custom_model"
    }
)
def my_custom_llm(prompt: str):
    return call_custom_api(prompt)
```
This minimal setup tells LangSmith what model you’re using, enabling automatic cost calculation if the model exists in the pricing database or if you’ve configured custom pricing.
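For cost calculation to produce a number, LangSmith also needs token counts for the run. A minimal sketch of one common pattern for run_type="llm" traces: include a usage_metadata object in the traced function's return value. Here call_custom_api and the response attributes are hypothetical placeholders for your own wrapper.

```python
from langsmith import traceable

@traceable(
    run_type="llm",
    metadata={"ls_provider": "my_provider", "ls_model_name": "my_custom_model"}
)
def my_custom_llm(prompt: str):
    response = call_custom_api(prompt)  # hypothetical helper from the example above
    # Returning usage_metadata alongside the output gives LangSmith the token
    # counts it needs to compute costs for the matched model.
    return {
        "content": response.text,  # hypothetical response attributes
        "usage_metadata": {
            "input_tokens": response.prompt_tokens,
            "output_tokens": response.completion_tokens,
            "total_tokens": response.prompt_tokens + response.completion_tokens,
        },
    }
```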
For more comprehensive tracking, you can include additional configuration parameters. This is especially useful when running experiments or comparing different model settings:
```python
@traceable(
    run_type="llm",
    metadata={
        "ls_provider": "openai",
        "ls_model_name": "gpt-4o",
        "ls_model_type": "chat",
        "ls_temperature": 0.7,
        "ls_max_tokens": 4096,
        "ls_stop": ["END"],
        "ls_invocation_params": {
            "top_p": 0.9,
            "frequency_penalty": 0.5
        }
    }
)
def my_configured_llm(messages: list):
    return call_llm(messages)
```
With this setup, you can later filter traces by temperature, compare runs with different max token settings, or analyze which configuration parameters produce the best results. All these parameters are optional except for the ls_provider and ls_model_name pair needed for cost tracking.
All parameters
User-configurable parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| ls_provider | string | Yes* | LLM provider name for cost tracking |
| ls_model_name | string | Yes* | Model identifier for cost tracking |
| ls_model_type | "chat" | No | Type of model (chat completion) |
| ls_temperature | number | No | Temperature parameter used |
| ls_max_tokens | number | No | Maximum tokens parameter used |
| ls_stop | string[] | No | Stop sequences used |
| ls_invocation_params | object | No | Additional invocation parameters |

\* ls_provider and ls_model_name must be provided together for cost tracking.
System-generated parameters
| Parameter | Type | Description |
|---|---|---|
| ls_run_depth | integer | Depth in trace tree (0 = root, 1 = child, etc.); automatically calculated |
| ls_method | string | Tracing method used (e.g., “traceable”); set by the SDK |
Experiment parameters
| Parameter | Type | Description |
|---|---|---|
| ls_example_* | any | Example metadata prefixed with ls_example_; added during experiments |
| ls_experiment_id | string (UUID) | Unique experiment identifier; added during experiments |
Parameter details
ls_provider
- Type: string
- Required: Yes (with ls_model_name)
What it does:
Identifies the LLM provider. Combined with ls_model_name, enables automatic cost calculation by matching against LangSmith’s model pricing database.
Common values:
"openai"
"anthropic"
"azure"
"bedrock"
"google_vertexai"
"google_genai"
"fireworks"
"mistral"
"groq"
- Or, any custom string
When to use:
When you want automatic cost tracking for custom model wrappers or self-hosted models.
Example:
```python
@traceable(
    run_type="llm",
    metadata={
        "ls_provider": "openai",
        "ls_model_name": "gpt-4o"
    }
)
def my_llm_call(prompt: str):
    return call_api(prompt)
```
Relationships:
- Requires ls_model_name for cost tracking to work.
- Works with token usage data to calculate costs.
ls_model_name
- Type: string
- Required: Yes (with ls_provider)
What it does:
Identifies the specific model. Combined with ls_provider, matches against pricing database for automatic cost calculation.
Common values:
- OpenAI: "gpt-4o", "gpt-4o-mini", "gpt-3.5-turbo"
- Anthropic: "claude-3-5-sonnet-20241022", "claude-3-opus-20240229"
- Custom: Any model identifier
When to use:
When you want automatic cost tracking and model identification in the UI.
Example:
```python
@traceable(
    run_type="llm",
    metadata={
        "ls_provider": "anthropic",
        "ls_model_name": "claude-3-5-sonnet-20241022"
    }
)
def my_claude_call(messages: list):
    return call_claude(messages)
```
Relationships:
- Requires ls_provider for cost tracking to work.
- Works with token usage data to calculate costs.
ls_model_type
Deprecation notice: Values other than "chat" are deprecated for the ls_model_type parameter.
- Type: "chat" | "text" (deprecated)
- Required: No
What it does:
Categorizes whether the model is chat-based or text completion. Used for UI display and analytics.
Values:
"chat": Chat-based models (most common)
"text": Text completion models (deprecated)
When to use:
When you want proper categorization in the LangSmith UI.
Example:
```python
metadata={
    "ls_provider": "openai",
    "ls_model_name": "gpt-4o",
    "ls_model_type": "chat"
}
```
Relationships:
- Independent: works with or without other parameters.
ls_temperature
- Type: number (nullable)
- Required: No
What it does:
Records the temperature setting used. This is for tracking only—does not affect LangSmith behavior.
When to use:
When you want to track model configuration for experiments or debugging.
Example:
```python
metadata={
    "ls_provider": "openai",
    "ls_model_name": "gpt-4o",
    "ls_temperature": 0.7
}
```
Relationships:
- Independent; just for tracking.
- Useful alongside other config parameters for experiment comparison.
ls_max_tokens
- Type: number (nullable)
- Required: No
What it does:
Records the maximum tokens setting used. This is for tracking only—does not affect LangSmith behavior.
When to use:
When you want to track model configuration for experiments or debugging.
Example:
```python
metadata={
    "ls_provider": "openai",
    "ls_model_name": "gpt-4o",
    "ls_max_tokens": 4096
}
```
Relationships:
- Independent; just for tracking.
- Useful for cost analysis when combined with actual token usage.
ls_stop
- Type: string[] (nullable)
- Required: No
What it does:
Records stop sequences used. This is for tracking only—does not affect LangSmith behavior.
When to use:
When you want to track model configuration for experiments or debugging.
Example:
```python
metadata={
    "ls_provider": "openai",
    "ls_model_name": "gpt-4o",
    "ls_stop": ["END", "STOP", "\n\n"]
}
```
Relationships:
- Independent; just for tracking.
ls_invocation_params
- Type: object (any key-value pairs)
- Required: No
What it does:
Stores additional model parameters that don’t fit the specific ls_ parameters. Can include provider-specific settings.
Common parameters:
top_p, frequency_penalty, presence_penalty, top_k, seed, or any custom parameters
When to use:
When you need to track additional configuration beyond the standard parameters.
Example:
```python
metadata={
    "ls_provider": "openai",
    "ls_model_name": "gpt-4o",
    "ls_invocation_params": {
        "top_p": 0.9,
        "frequency_penalty": 0.5,
        "presence_penalty": 0.3,
        "seed": 12345
    }
}
```
Relationships:
- Independent; stores arbitrary configuration.
ls_run_depth
- Type: integer
- Set by: LangSmith backend (automatic)
- Cannot be overridden
What it does:
Indicates depth in the trace tree:
- 0 = Root run (top-level)
- 1 = Direct child
- 2 = Grandchild
- etc.
When it’s used:
Automatically calculated during trace ingestion. Used for filtering (e.g., “show only root runs”) and UI visualization.
Example query:
```
metadata_key = 'ls_run_depth' AND metadata_value = 0
```
Relationships:
- Determined by trace parent-child structure.
- Cannot be set manually.
ls_method
- Type: string
- Set by: SDK (automatic)
What it does:
Indicates which SDK method created the trace (commonly "traceable" for @traceable decorator).
When it’s used:
Automatically set by the tracing SDK. Used for debugging and analytics.
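Example query (mirroring the ls_run_depth example above):
```
metadata_key = 'ls_method' AND metadata_value = 'traceable'
```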
Relationships:
- Set by SDK based on how trace was created.
- Cannot be set manually.
ls_example_*
- Type: Any (depends on example metadata)
- Pattern: ls_example_{original_key}
- Set by: LangSmith experiments system (automatic)
What it does:
When running experiments on datasets, metadata from the example is automatically prefixed with ls_example_ and added to the trace.
Special parameter:
ls_example_dataset_split: Dataset split (e.g., “train”, “test”, “validation”)
When it’s used:
During dataset experiments. Allows filtering/grouping by example characteristics.
Example:
If example has metadata {"category": "technical", "difficulty": "hard"}, trace gets:
```json
{
  "metadata": {
    "ls_example_category": "technical",
    "ls_example_difficulty": "hard",
    "ls_example_dataset_split": "test"
  }
}
```
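To filter experiment traces by split, the same metadata filter syntax applies:
```
metadata_key = 'ls_example_dataset_split' AND metadata_value = 'test'
```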
Relationships:
- Automatically derived from example metadata.
- Cannot be set manually on traces.
ls_experiment_id
- Type: string (UUID)
- Set by: LangSmith experiments system (automatic)
What it does:
Unique identifier for an experiment run.
When it’s used:
Automatically added when running experiments/evaluations on datasets. Used to group all runs from the same experiment.
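Example query (with a placeholder experiment ID):
```
metadata_key = 'ls_experiment_id' AND metadata_value = '<your-experiment-id>'
```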
Relationships:
- Links runs to specific experiments.
- Cannot be set manually.
Parameter relationships
Cost tracking dependencies
For LangSmith to automatically calculate costs, several parameters must work together. Here’s what’s required:
Primary requirement: ls_provider + ls_model_name
Additional requirements: token usage data must be recorded on the run, and the model must exist in LangSmith’s pricing database (or have custom pricing configured).
Fallback behavior:
If ls_model_name is not in metadata, the system checks ls_invocation_params for model identifiers like "model" before giving up on cost tracking.
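For example, a wrapper that only sets "model" inside ls_invocation_params can still be matched against the pricing database. A sketch (call_api is a hypothetical helper):

```python
from langsmith import traceable

@traceable(
    run_type="llm",
    metadata={
        "ls_provider": "openai",
        # No ls_model_name here: the "model" key below acts as the fallback
        # identifier for cost tracking.
        "ls_invocation_params": {"model": "gpt-4o", "top_p": 0.9},
    }
)
def my_fallback_llm(prompt: str):
    return call_api(prompt)  # hypothetical helper
```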
Configuration tracking group
These parameters help you track model settings but don’t affect LangSmith’s core functionality:
Optional, work independently: ls_model_type, ls_temperature, ls_max_tokens, ls_stop
- These are for tracking/display.
- Do not affect LangSmith behavior or cost calculation.
- Useful for experiment comparison and debugging.
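Since these values are plain metadata, you can vary them per call rather than fixing them in the decorator. A sketch using the langsmith_extra keyword that @traceable-wrapped functions accept (per-call metadata is merged with the decorator’s metadata; call_llm is a hypothetical helper):

```python
from langsmith import traceable

@traceable(
    run_type="llm",
    metadata={"ls_provider": "openai", "ls_model_name": "gpt-4o"}
)
def my_llm(prompt: str, temperature: float):
    return call_llm(prompt, temperature=temperature)  # hypothetical helper

# Each call records its own ls_temperature, so runs can be compared later.
for temp in (0.0, 0.5, 1.0):
    my_llm(
        "Summarize the quarterly report.",
        temperature=temp,
        langsmith_extra={"metadata": {"ls_temperature": temp}},
    )
```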
Invocation params special case
The ls_invocation_params parameter has a dual role as both a tracking field and a fallback mechanism:
ls_invocation_params is partially independent, with a fallback role:
- Primarily stores arbitrary configuration for tracking.
- Can serve as a fallback for cost tracking if ls_model_name is missing.
- Does not directly affect cost calculation when ls_model_name is present.
System parameters
These parameters are automatically generated by LangSmith and cannot be manually set:
Cannot be user-set: ls_run_depth, ls_method, ls_example_*, ls_experiment_id
- Automatically set by system.
- Used for filtering, analytics, and system tracking.
Once you’ve added ls_ metadata parameters to your traces, you can use them to filter and search traces programmatically via the API or interactively in the LangSmith UI. This lets you narrow down traces by model, provider, configuration settings, or trace depth.
Use the API
Use the Client class with the list_runs() method (Python) or listRuns() method (TypeScript) to query traces based on metadata values. The filter syntax supports equality checks, comparisons, and logical operators.
```python
from langsmith import Client

client = Client()

# Filter runs by provider
runs = client.list_runs(
    project_name="my-app",
    filter='metadata_key = "ls_provider" AND metadata_value = "openai"'
)

# Filter by specific model
runs = client.list_runs(
    project_name="my-app",
    filter='metadata_key = "ls_model_name" AND metadata_value = "gpt-4o"'
)

# Filter root runs only (top-level traces)
runs = client.list_runs(
    project_name="my-app",
    filter='metadata_key = "ls_run_depth" AND metadata_value = 0'
)

# Filter by temperature threshold
runs = client.list_runs(
    project_name="my-app",
    filter='metadata_key = "ls_temperature" AND metadata_value > 0.5'
)
```
These examples show common filtering patterns:
- Filter by provider or model to analyze usage patterns or costs for specific models.
- Filter by run depth to get only root traces (depth 0) or child runs at specific nesting levels.
- Filter by configuration to compare experiments with different temperature, max tokens, or other settings.
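Once you have the filtered runs, you can aggregate over their ls_ metadata in plain Python. A minimal sketch, assuming each run’s metadata is exposed under run.extra["metadata"] in the SDK’s Run objects:

```python
from collections import Counter

from langsmith import Client

client = Client()
runs = client.list_runs(
    project_name="my-app",
    filter='metadata_key = "ls_provider" AND metadata_value = "openai"'
)

# Count runs per model to see which models dominate this provider's traffic.
model_counts = Counter(
    (run.extra or {}).get("metadata", {}).get("ls_model_name", "unknown")
    for run in runs
)
print(model_counts.most_common())
```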
Use the UI
In the LangSmith UI, use the filter/search bar with the filter syntax:
```
metadata_key = 'ls_provider' AND metadata_value = 'openai'
metadata_key = 'ls_model_name' AND metadata_value = 'gpt-4o'
metadata_key = 'ls_run_depth' AND metadata_value = 0
```