# axm-smelt

Deterministic token compaction for LLM inputs.

## What it does

axm-smelt reduces token consumption for LLM inputs by applying deterministic compaction strategies. It detects the input format, runs the selected strategies in pipeline order, and reports exact token savings via tiktoken.
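The pipeline idea — each strategy is a text transformation, and a run chains them in order — can be sketched in plain Python. This is an illustrative sketch only, not the library's internals; the function names `run_pipeline`, `minify`, and `round_numbers` here are hypothetical stand-ins:

```python
import json

# Illustrative strategies: each is a pure function str -> str.
def minify(text: str) -> str:
    # Re-serialize JSON with no whitespace between tokens.
    return json.dumps(json.loads(text), separators=(",", ":"))

def round_numbers(text: str, places: int = 2) -> str:
    # Round every float in the JSON tree to `places` decimals.
    def walk(node):
        if isinstance(node, float):
            return round(node, places)
        if isinstance(node, list):
            return [walk(v) for v in node]
        if isinstance(node, dict):
            return {k: walk(v) for k, v in node.items()}
        return node
    return json.dumps(walk(json.loads(text)), separators=(",", ":"))

def run_pipeline(text: str, strategies) -> str:
    # Apply each strategy in order; the output of one feeds the next.
    for strategy in strategies:
        text = strategy(text)
    return text

print(run_pipeline('{"pi": 3.14159, "name": "Alice"}', [round_numbers, minify]))
# {"pi":3.14,"name":"Alice"}
```

Because strategies compose like this, presets are just predefined strategy lists.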
| Strategy | Category | Effect |
|---|---|---|
| `minify` | whitespace | Remove whitespace from JSON |
| `drop_nulls` | structural | Remove `None`, `""`, `[]`, `{}` values |
| `flatten` | structural | Collapse single-child wrapper dicts |
| `tabular` | structural | Convert `list[dict]` to pipe-separated tables |
| `dedup_values` | structural | Replace repeated long strings with aliases |
| `strip_quotes` | cosmetic | Remove quotes on simple JSON keys |
| `round_numbers` | cosmetic | Round floats to N decimal places |
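To make one of the structural strategies concrete, here is a minimal sketch of what a `drop_nulls`-style pass does on parsed JSON. It is a hedged illustration under stated assumptions, not the library's actual implementation:

```python
import json

def drop_nulls(node):
    """Recursively drop values equal to None, "", [], or {} (illustrative)."""
    if isinstance(node, dict):
        cleaned = {k: drop_nulls(v) for k, v in node.items()}
        return {k: v for k, v in cleaned.items() if v not in (None, "", [], {})}
    if isinstance(node, list):
        cleaned = [drop_nulls(v) for v in node]
        return [v for v in cleaned if v not in (None, "", [], {})]
    return node

data = json.loads('{"name": "Alice", "middle": "", "tags": [], "meta": {"note": null}}')
print(json.dumps(drop_nulls(data)))
# {"name": "Alice"}
```

Note that emptiness is checked bottom-up: `"meta"` becomes `{}` only after its null child is dropped, and is then removed itself.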
## Quick Example

```bash
# CLI
echo '{"name": "Alice", "age": 30}' | axm-smelt compact

# Or use a preset
axm-smelt compact --file data.json --preset aggressive
```

```python
# Python API
from axm_smelt import smelt, check, count

report = smelt('{\n "name": "Alice",\n "age": 30\n}')
print(f"{report.savings_pct:.1f}% saved")
# 35.7% saved
```
## Features

- Format detection — auto-detect JSON, YAML, XML, TOML, CSV, Markdown, and plain text
- Token counting — exact counts via tiktoken (`o200k_base`), with `len // 4` fallback
- Composable pipeline — chain strategies or use presets (`safe`, `moderate`, `aggressive`)
- CLI — `axm-smelt compact|check|count|version` commands
- MCP tool — available to AI agents via `axm-mcp`
- Modern Python — 3.12+ with strict typing
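The token-counting fallback described above can be sketched as follows. The helper name `count_tokens` is hypothetical; only the `o200k_base` encoding and the `len // 4` heuristic come from the feature list:

```python
def count_tokens(text: str) -> int:
    """Exact count via tiktoken's o200k_base encoding when available,
    otherwise the rough len(text) // 4 heuristic (sketch, not the real API)."""
    try:
        import tiktoken
        return len(tiktoken.get_encoding("o200k_base").encode(text))
    except ImportError:
        # tiktoken not installed: fall back to ~4 characters per token.
        return len(text) // 4

print(count_tokens("hello world, this is a test"))
```

The `len // 4` heuristic reflects the common rule of thumb that English text averages roughly four characters per token.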