A lightweight, easy-to-configure Rust profiler that shows exactly where your code spends time and allocates memory. Instrument any function or code block to quickly spot bottlenecks, and focus your optimizations where they matter most.
- Zero-cost when disabled — fully gated by a feature flag.
- Low-overhead profiling for both sync and async code.
- Memory allocation tracking — track bytes allocated or allocation counts per function.
- Detailed stats: avg, total time, call count, % of total runtime, and configurable percentiles (p95, p99, etc.).
- Background processing for minimal profiling impact.
⚠️ Note: This README reflects the latest development on the `main` branch. For documentation matching the current release, see crates.io — it stays in sync with the published crate.
Add to your `Cargo.toml`:

```toml
[dependencies]
hotpath = { version = "0.2", optional = true }

[features]
hotpath = ["dep:hotpath", "hotpath/hotpath"]
hotpath-alloc-bytes-total = ["hotpath/hotpath-alloc-bytes-total"]
hotpath-alloc-bytes-max = ["hotpath/hotpath-alloc-bytes-max"]
hotpath-alloc-count-total = ["hotpath/hotpath-alloc-count-total"]
hotpath-alloc-count-max = ["hotpath/hotpath-alloc-count-max"]
hotpath-off = ["hotpath/hotpath-off"]
```
This configuration ensures the library has zero overhead unless it is explicitly enabled via the `hotpath` feature.

The profiling features are mutually exclusive. To stay compatible with the `--all-features` flag, the crate defines an additional `hotpath-off` feature. This is handled automatically - you should never need to enable it manually.
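For example, a build that turns on every feature should still compile, since `hotpath-off` is enabled for you in that case (a quick sanity check, not an official workflow):

```sh
# Builds even though the profiling features are normally mutually exclusive,
# because `hotpath-off` is enabled automatically.
cargo check --all-features
```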
```rust
use std::time::Duration;

#[cfg_attr(feature = "hotpath", hotpath::measure)]
fn sync_function(sleep: u64) {
    std::thread::sleep(Duration::from_nanos(sleep));
}

#[cfg_attr(feature = "hotpath", hotpath::measure)]
async fn async_function(sleep: u64) {
    tokio::time::sleep(Duration::from_nanos(sleep)).await;
}

// When using with tokio, place the #[tokio::main] attribute first.
#[tokio::main]
// You can configure any percentile between 0 and 100.
#[cfg_attr(feature = "hotpath", hotpath::main(percentiles = [99]))]
async fn main() {
    for i in 0..100 {
        // Measured functions automatically send their metrics.
        sync_function(i);
        async_function(i * 2).await;

        // Measure code blocks with static labels.
        #[cfg(feature = "hotpath")]
        hotpath::measure_block!("custom_block", {
            std::thread::sleep(Duration::from_nanos(i * 3))
        });
    }
}
```
Run your program with the `hotpath` feature enabled:

```sh
cargo run --features=hotpath
```
Output:

```
[hotpath] Performance summary from basic::main (Total time: 122.13ms):
+-----------------------+-------+---------+---------+----------+---------+
| Function              | Calls | Avg     | P99     | Total    | % Total |
+-----------------------+-------+---------+---------+----------+---------+
| basic::async_function | 100   | 1.16ms  | 1.20ms  | 116.03ms | 95.01%  |
+-----------------------+-------+---------+---------+----------+---------+
| custom_block          | 100   | 17.09µs | 39.55µs | 1.71ms   | 1.40%   |
+-----------------------+-------+---------+---------+----------+---------+
| basic::sync_function  | 100   | 16.99µs | 35.42µs | 1.70ms   | 1.39%   |
+-----------------------+-------+---------+---------+----------+---------+
```
In addition to time-based profiling, `hotpath` can track memory allocations. This feature uses a custom global allocator from the allocation-counter crate to intercept all memory allocations and provide detailed statistics about memory usage per function.
Available allocation profiling modes:

- `hotpath-alloc-bytes-total` - Tracks total bytes allocated during each function call
- `hotpath-alloc-bytes-max` - Tracks peak memory usage during each function call
- `hotpath-alloc-count-total` - Tracks total number of allocations per function call
- `hotpath-alloc-count-max` - Tracks peak number of live allocations per function call
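With any of these modes enabled, functions are instrumented the same way as for timing. The `build_buffer` function below is purely illustrative:

```rust
// Hypothetical example: with `hotpath-alloc-bytes-total` enabled, the bytes
// allocated by this Vec are attributed to `build_buffer` in the report.
#[cfg_attr(feature = "hotpath", hotpath::measure)]
fn build_buffer(n: usize) -> Vec<u8> {
    vec![0u8; n]
}
```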
Run your program with one of these features selected to print a similar report:

```sh
cargo run --features='hotpath,hotpath-alloc-bytes-max'
```
To profile the memory usage of `async` functions, you have to use a config like this:
```rust
#[cfg(any(
    feature = "hotpath-alloc-bytes-total",
    feature = "hotpath-alloc-bytes-max",
    feature = "hotpath-alloc-count-total",
    feature = "hotpath-alloc-count-max",
))]
#[tokio::main(flavor = "current_thread")]
async fn main() {
    _ = inner_main().await;
}

#[cfg(not(any(
    feature = "hotpath-alloc-bytes-total",
    feature = "hotpath-alloc-bytes-max",
    feature = "hotpath-alloc-count-total",
    feature = "hotpath-alloc-count-max",
)))]
#[tokio::main]
async fn main() {
    _ = inner_main().await;
}

#[cfg_attr(feature = "hotpath", hotpath::main)]
async fn inner_main() {
    // ...
}
```
This ensures that tokio runs with the `current_thread` runtime flavor whenever any of the allocation profiling features is enabled.
Why this limitation exists: The allocation tracking uses thread-local storage to track memory usage. In multi-threaded runtimes, async tasks can migrate between threads, making it impossible to accurately attribute allocations to specific function calls.
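As a rough illustration (not hotpath's actual code), attribution via thread-locals only works if a measured call starts and finishes on the same thread, which the `current_thread` flavor guarantees:

```rust
use std::cell::Cell;

thread_local! {
    // Illustrative per-thread counter. On a multi-threaded runtime, a task that
    // suspends on one thread and resumes on another reads a different counter,
    // so its allocations can no longer be attributed to the right call.
    static BYTES_ALLOCATED: Cell<u64> = Cell::new(0);
}

fn record_alloc(bytes: u64) {
    BYTES_ALLOCATED.with(|b| b.set(b.get() + bytes));
}
```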
- `#[cfg_attr(feature = "hotpath", hotpath::main)]` - Macro that initializes the background measurement processing
- `#[cfg_attr(feature = "hotpath", hotpath::measure)]` - Macro that wraps functions with profiling code
- Background thread - Measurements are sent to a dedicated worker thread via a bounded channel (see the sketch below)
- Statistics aggregation - The worker thread maintains running statistics for each function/code block
- Automatic reporting - A performance summary is displayed when the program exits
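A minimal sketch of that pattern (purely illustrative, not the crate's actual implementation): timings are pushed onto a bounded channel, and a worker thread keeps running aggregates, printing a summary once the channel closes.

```rust
use std::collections::HashMap;
use std::sync::mpsc;
use std::thread;
use std::time::{Duration, Instant};

fn main() {
    // Bounded channel carrying (label, elapsed time) measurements.
    let (tx, rx) = mpsc::sync_channel::<(&'static str, Duration)>(1024);

    // Worker thread aggregating per-label statistics.
    let worker = thread::spawn(move || {
        let mut stats: HashMap<&'static str, (u64, Duration)> = HashMap::new();
        for (label, elapsed) in rx {
            let entry = stats.entry(label).or_insert((0, Duration::ZERO));
            entry.0 += 1;
            entry.1 += elapsed;
        }
        // The channel is closed: report what was collected.
        for (label, (calls, total)) in &stats {
            println!("{label}: {calls} calls, {total:?} total");
        }
    });

    // An "instrumented" call site: time the work, then send the measurement.
    let start = Instant::now();
    thread::sleep(Duration::from_millis(5));
    tx.send(("example_block", start.elapsed())).unwrap();

    drop(tx); // Closing the sender lets the worker finish and report.
    worker.join().unwrap();
}
```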
`#[cfg_attr(feature = "hotpath", hotpath::main)]`

Attribute macro that initializes the background measurement processing when applied. Supported parameters:

- `percentiles = [50, 95, 99]` - Custom percentiles to display
- `format = "json"` - Output format (`"table"`, `"json"`, `"json-pretty"`)
`#[cfg_attr(feature = "hotpath", hotpath::measure)]`
An opt-in attribute macro that instruments functions to send timing measurements to the background processor.
`hotpath::measure_block!(label, expr)`
Macro that measures the execution time of a code block with a static string label.
`hotpath::GuardBuilder::new(caller_name)` - Creates a new builder with the specified caller name.

Configuration methods:

- `.percentiles(&[u8])` - Set custom percentiles to display (default: `[95]`)
- `.format(Format)` - Set output format (`Table`, `Json`, `JsonPretty`)
- `.reporter(Box<dyn Reporter>)` - Set a custom reporter (overrides `format`)
- `.build()` - Build and return the HotPath guard
Example:

```rust
let _guard = hotpath::GuardBuilder::new("main")
    .percentiles(&[50, 90, 95, 99])
    .format(hotpath::Format::JsonPretty)
    .build();
```
The `#[hotpath::main]` macro is convenient for most use cases, but the `GuardBuilder` API provides more control over when profiling starts and stops.
Key differences:

- `#[hotpath::main]` - Automatic initialization and cleanup; the report is printed at program exit
- `let _guard = GuardBuilder::new("name").build()` - Manual control; the report is printed when the guard is dropped, so you can fine-tune the measured scope
Only one hotpath guard may be alive at a time, regardless of whether it was created by the `main` macro or by the builder API. If a second guard is created, the library will panic.
```rust
use std::time::Duration;

#[cfg_attr(feature = "hotpath", hotpath::measure)]
fn example_function() {
    std::thread::sleep(Duration::from_millis(10));
}

fn main() {
    #[cfg(feature = "hotpath")]
    let _guard = hotpath::GuardBuilder::new("my_program")
        .percentiles(&[50, 95, 99])
        .format(hotpath::Format::Table)
        .build();

    example_function();

    // This will print the report.
    #[cfg(feature = "hotpath")]
    drop(_guard);

    // Immediate exit (no drops); `#[hotpath::main]` wouldn't print.
    std::process::exit(1);
}
```
In unit tests you can profile each individual test case:
```rust
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_sync_function() {
        #[cfg(feature = "hotpath")]
        let _hotpath = hotpath::GuardBuilder::new("test_sync_function")
            .percentiles(&[50, 90, 95])
            .format(hotpath::Format::Table)
            .build();

        sync_function();
    }

    #[tokio::test(flavor = "current_thread")]
    async fn test_async_function() {
        #[cfg(feature = "hotpath")]
        let _hotpath = hotpath::GuardBuilder::new("test_async_function")
            .percentiles(&[50, 90, 95])
            .format(hotpath::Format::Table)
            .build();

        async_function().await;
    }
}
```
Run tests with profiling enabled:
```sh
cargo test --features hotpath -- --test-threads=1
```
Note: Use `--test-threads=1` to ensure tests run sequentially, as only one hotpath guard can be active at a time.
By default, `hotpath` displays the P95 percentile in the performance summary. You can customize which percentiles to display using the `percentiles` parameter:
```rust
#[tokio::main]
#[cfg_attr(feature = "hotpath", hotpath::main(percentiles = [50, 75, 90, 95, 99]))]
async fn main() {
    // Your code here
}
```
When the same function or code block is measured many times, percentiles help reveal how its performance is distributed. Use percentile 0 to display the minimum value and 100 to display the maximum.
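For example, percentiles 0 and 100 can be combined with the median to show the full range (an illustrative configuration):

```rust
#[tokio::main]
// 0 and 100 correspond to the minimum and maximum observed values.
#[cfg_attr(feature = "hotpath", hotpath::main(percentiles = [0, 50, 100]))]
async fn main() {
    // Your code here
}
```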
By default, `hotpath` displays results in a human-readable table format. You can also output results in JSON format for programmatic processing:
```rust
#[tokio::main]
#[cfg_attr(feature = "hotpath", hotpath::main(format = "json-pretty"))]
async fn main() {
    // Your code here
}
```
Supported format options:

- `"table"` (default) - Human-readable table format
- `"json"` - Compact, single-line JSON format
- `"json-pretty"` - Pretty-printed JSON format
Example JSON output:

```json
{
  "hotpath_profiling_mode": "timing",
  "output": {
    "basic::async_function": {
      "calls": "100",
      "avg": "1.16ms",
      "p95": "1.26ms",
      "total": "116.41ms",
      "percent_total": "96.18%"
    },
    "basic::sync_function": {
      "calls": "100",
      "avg": "23.10µs",
      "p95": "37.89µs",
      "total": "2.31ms",
      "percent_total": "1.87%"
    }
  }
}
```
You can combine the `percentiles` and `format` parameters:

```rust
#[cfg_attr(feature = "hotpath", hotpath::main(percentiles = [50, 90, 99], format = "json"))]
```
You can implement your own reporter to control how profiling results are handled. This lets you plug `hotpath` into existing tools such as loggers, CI pipelines, or monitoring systems.
For complete working examples, see:

- `examples/csv_file_reporter.rs` - Save metrics to a CSV file
- `examples/json_file_reporter.rs` - Save metrics to a JSON file
- `examples/tracing_reporter.rs` - Log metrics using the tracing crate
Measure the overhead of profiling 100k method calls with hyperfine:

Timing:

```sh
cargo build --example benchmark --features hotpath --release
hyperfine --warmup 3 './target/release/examples/benchmark'
```

Allocations:

```sh
cargo build --example benchmark --features='hotpath,hotpath-alloc-count-max' --release
hyperfine --warmup 3 './target/release/examples/benchmark'
```