Dataset Preview

All 15 rows shown in the dataset viewer come from the same model (openai/gpt-oss-120b) and the same evaluation run (timestamps all fall at 2025-12-06 04:13:43). Every row records the same failure pattern: success, tool_called, correct_tool, and final_answer_called are false; response_correct is true; tools_used is []; steps is 0; response is null; total_tokens is 0; and cost_usd is 0.0. Tool-agent rows report "Error while generating output" and code-agent rows report "Error in generating model output", in both cases caused by:

Bad request: Unable to tokenize message with status code 400 for model gpt-oss-120b: Invalid 'content' type. Expected one of: ['str'], got list.

The per-row values that differ are listed below. The enhanced_trace_info column repeats the trace ID, duration, token count, and cost as JSON, plus a span_count of 3 for every run except tool_weather_time_combined (4). The shared_basic_weather and shared_basic_search tasks appear once per agent type but reuse the same trace IDs and timings.

| task_id | agent_type | difficulty | prompt | execution_time_ms | trace_id |
|---|---|---|---|---|---|
| tool_weather_single | tool | easy | What's the weather in Paris, France? | 1,724.579382 | 0x5285034194fff6485346371a163ab0f6 |
| tool_time_single | tool | easy | What time is it in UTC? | 1,184.514559 | 0x8e755f019bc1ce21ef6e28a74b189b9b |
| tool_search_single | tool | easy | Search for information about Python programming language | 1,011.09092 | 0xab37142a0f2ab323a7ca626d52e38a7e |
| tool_weather_compare | tool | medium | Compare the weather in Paris, France and London, UK. Which one is warmer? | 1,056.74845 | 0x75a6ff101b014ca91b5ed94920d1b54a |
| tool_search_and_summarize | tool | medium | Search for the latest news about AI and tell me what you find. | 1,771.848951 | 0xa68a9533dc7cfb9acdda6c760dad7924 |
| tool_weather_time_combined | tool | hard | What's the current time in UTC and what's the weather in Tokyo, Japan? | 135,530.761441 | 0x90410f59ea8855690ea2f60490a6c35d |
| shared_basic_weather | tool | easy | What's the weather like in Sydney, Australia? | 869.746274 | 0xd6d8d865570b0b2fba1e6956687395a8 |
| shared_basic_search | tool | easy | Search for information about machine learning | 816.047433 | 0xa071d36a131d153ca98cff3b3458882b |
| code_calculator_single | code | easy | What is 234 multiplied by 67? | 1,035.608161 | 0x8301f600024b3c040a410080f683f2c5 |
| code_calculator_complex | code | medium | Calculate (450 + 230) * 3, then divide the result by 10 | 1,103.477402 | 0xd3434fa2a76a790428da4bd18b4b7416 |
| code_weather_with_calc | code | hard | Get the weather in Paris and if the temperature is above 15°C, calculate 15 * 2 | 1,101.852293 | 0x57e4279c505f3fbbcff64e8b3823c369 |
| code_search_calculate | code | hard | Search for the population of Paris, then if you find it's around 2 million, calculate what 2 million divided by 365 is | 1,029.206184 | 0x7b17fa62adbcb23235c9a5cfa64bcb97 |
| code_list_processing | code | hard | Get weather for Paris, London, and Tokyo, then tell me which cities have temperature above 18°C | 1,151.987886 | 0x1626be9f6adfa10469bcf7478c7c26ba |
| shared_basic_weather | code | easy | What's the weather like in Sydney, Australia? | 869.746274 | 0xd6d8d865570b0b2fba1e6956687395a8 |
| shared_basic_search | code | easy | Search for information about machine learning | 816.047433 | 0xa071d36a131d153ca98cff3b3458882b |
SMOLTRACE Evaluation Results
This dataset contains evaluation results from a SMOLTRACE benchmark run.
Dataset Information
| Field | Value |
|---|---|
| Model | openai/gpt-oss-120b |
| Run ID | job_4acee6f5 |
| Agent Type | both |
| Total Tests | 15 |
| Generated | 2025-12-06 04:13:45 UTC |
| Source Dataset | kshitijthakkar/smoltrace-tasks |
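The Source Dataset row points at the benchmark tasks this run was drawn from. As a minimal sketch of inspecting it (the splits and columns of that repository are not documented in this card, so the code only prints what is available):

```python
from datasets import load_dataset

# Repository id taken from the Source Dataset row above; its exact splits
# and columns are not documented here, so just inspect the returned object.
tasks = load_dataset("kshitijthakkar/smoltrace-tasks")
print(tasks)
```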
Schema
| Column | Type | Description |
|---|---|---|
| model | string | Model identifier |
| evaluation_date | string | ISO timestamp of evaluation |
| task_id | string | Unique test case identifier |
| agent_type | string | "tool" or "code" agent type |
| difficulty | string | Test difficulty level |
| prompt | string | Test prompt/question |
| success | bool | Whether the test passed |
| tool_called | bool | Whether a tool was invoked |
| correct_tool | bool | Whether the correct tool was used |
| final_answer_called | bool | Whether final_answer was called |
| response_correct | bool | Whether the response was correct |
| tools_used | string | Comma-separated list of tools used |
| steps | int | Number of agent steps taken |
| response | string | Agent's final response |
| error | string | Error message if failed |
| trace_id | string | OpenTelemetry trace ID |
| execution_time_ms | float | Execution time in milliseconds |
| total_tokens | int | Total tokens consumed |
| cost_usd | float | API cost in USD |
| enhanced_trace_info | string | JSON with detailed trace data |
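The boolean result columns combine naturally with agent_type and difficulty for a quick pass/fail breakdown. A minimal sketch, assuming the same placeholder repository id used in the Usage section below:

```python
from collections import Counter

from datasets import load_dataset

# Placeholder repository id, as in the Usage section; replace with the real one.
ds = load_dataset("YOUR_USERNAME/smoltrace-results-TIMESTAMP", split="train")

# Tally pass/fail per (agent_type, difficulty) bucket using the columns above.
buckets = Counter(
    (row["agent_type"], row["difficulty"], "pass" if row["success"] else "fail")
    for row in ds
)

for (agent_type, difficulty, outcome), n in sorted(buckets.items()):
    print(f"{agent_type:>4} | {difficulty:<6} | {outcome}: {n}")
```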
Usage
```python
from datasets import load_dataset

# Load the results dataset
ds = load_dataset("YOUR_USERNAME/smoltrace-results-TIMESTAMP")

# Filter successful tests
successful = ds.filter(lambda x: x['success'])

# Calculate success rate
success_rate = sum(1 for r in ds['train'] if r['success']) / len(ds['train']) * 100
print(f"Success Rate: {success_rate:.2f}%")
```
Related Datasets
This evaluation run also generated:
- Traces Dataset: Detailed OpenTelemetry execution traces
- Metrics Dataset: GPU utilization and environmental metrics
- Leaderboard: Aggregated metrics for model comparison
About SMOLTRACE
SMOLTRACE is a comprehensive benchmarking and evaluation framework for Smolagents, Hugging Face's lightweight agent library.
Key Features
- Automated agent evaluation with customizable test cases
- OpenTelemetry-based tracing for detailed execution insights
- GPU metrics collection (utilization, memory, temperature, power)
- CO2 emissions and power cost tracking
- Leaderboard aggregation and comparison
Installation
```bash
pip install smoltrace
```
Citation
If you use SMOLTRACE in your research, please cite:
```bibtex
@software{smoltrace,
  title  = {SMOLTRACE: Benchmarking Framework for Smolagents},
  author = {Thakkar, Kshitij},
  url    = {https://github.com/Mandark-droid/SMOLTRACE},
  year   = {2025}
}
```
Generated by SMOLTRACE