Introducing the Apitally CLI and skill for agents
With full access to your API metrics and request logs in Apitally, agents can now run investigations and analyses that go beyond what pre-built dashboards can answer.

Today we’re launching the Apitally CLI and agent skill, making API metrics and request log data in Apitally accessible to AI agents such as Claude Code, Cursor, and Codex.
Apitally already captures detailed API metrics and full request logs. The dashboard surfaces that data through a focused set of views. But real investigations often bring up questions that don’t fit a pre-built view. With the Apitally CLI, agents are now able to answer those kinds of questions.
The Apitally CLI is a Rust binary with no runtime dependencies and comes with a bundled DuckDB engine. It can be used via npx without prior installation.
Thanks to the instructions and reference material in the skill, agents can use the CLI efficiently without digging through documentation or `--help` text every time.
Key features
- Retrieve API metrics with various aggregation options and filters
- Retrieve request logs with filters and field selection, including headers and payloads
- Fetch full details about specific API requests, including application logs and traces
- Load data into a local DuckDB database and run arbitrary SQL queries against it
Getting started
- Install the `apitally-cli` skill using the skills CLI: `npx skills add apitally/cli`
- Authenticate using a browser-based flow (creates and saves an API key): `npx @apitally/cli auth`
- Ask your agent to use the Apitally CLI for investigations or analyses. It’ll know what to do.
Example use cases
Here are some questions that agents can now answer with the Apitally CLI:
- There was a bug in the `POST /v1/orders` endpoint where the `order_number` field in the response sometimes had a `"null"` value. Which consumers were affected?
- Compare latency for all endpoints under `/api/v2/*` before and after 2026-04-15 14:30 UTC. Did our deployment impact API performance? Which endpoints are affected the most?
- Which were the most common books that `GET /v1/books/{book_id}` was called for in the last 7 days? Include the book title and author in the ranking.
A more detailed example
Imagine your API has a book search endpoint `GET /v1/books?query=<search-term>` and you want to analyze the most common search terms that return empty results.
The transcript below shows how the agent identifies the relevant app, fetches request log data from Apitally into a local DuckDB database, and then runs a SQL query to extract the search terms and count occurrences.
❯ Use the Apitally CLI to get the most common book search terms with empty
results. The book search endpoint is GET /v1/books?query=<search-term>
⏺ Skill(apitally-cli)
⎿ Successfully loaded skill
⏺ I'll start by identifying the relevant app.
⏺ Bash(npx @apitally/cli apps 2>/dev/null)
⎿ {
"id": 1,
"name": "Boundless Bookstore API",
… +119 lines (ctrl+o to expand)
⏺ The relevant app ID is 1. Let me read the command reference for available
fields and filters. Then I'll fetch request logs with response bodies.
⏺ Read(.claude/skills/apitally-cli/references/commands.md)
⎿ Read 212 lines
⏺ Bash(npx @apitally/cli request-logs 1 --since "2026-04-12T00:00:00Z" \
--fields '["url","response_body_json"]' \
--filters '[{"field":"url","op":"contains","value":"/v1/books?query="}]' \
--db 2>/dev/null)
⎿ (No output)
⏺ Bash(npx @apitally/cli sql "
SELECT
trim(replace(regexp_extract(url, '[?&]query=([^&]+)', 1), '+', ' ')) AS query,
COUNT(*) AS count
FROM request_logs
WHERE app_id = 1
AND timestamp >= '2026-04-12'
AND method = 'GET'
AND url LIKE '%/v1/books?query=%'
AND response_body_json IS NOT NULL
AND json_array_length(response_body_json) = 0
AND regexp_extract(url, '[?&]query=([^&]+)', 1) <> ''
GROUP BY query
ORDER BY count DESC
LIMIT 10" 2>/dev/null)
⎿ {"query":"the midnight library","count":46}
{"query":"project hail mary","count":31}
{"query":"tomorrow and tomorrow","count":24}
… +7 lines (ctrl+o to expand)
⏺ The most common book search terms with empty search results are:
┌────────────────────────────┬───────┐
│ Query │ Count │
├────────────────────────────┼───────┤
│ the midnight library │ 46 │
├────────────────────────────┼───────┤
│ project hail mary │ 31 │
├────────────────────────────┼───────┤
│ tomorrow and tomorrow │ 24 │
├────────────────────────────┼───────┤
│ the priory of the orange │ 19 │
├────────────────────────────┼───────┤
│ babel │ 17 │
├────────────────────────────┼───────┤
│ sea of tranquility │ 15 │
├────────────────────────────┼───────┤
│ piranesi │ 12 │
├────────────────────────────┼───────┤
│ the atlas six │ 11 │
├────────────────────────────┼───────┤
│ yellowface │ 9 │
├────────────────────────────┼───────┤
│ this is how you lose │ 8 │
└────────────────────────────┴───────┘
This is based on GET requests to `/v1/books` over the last 7 days where the
`query` parameter was present and the response body was an empty JSON array.
✻ Cooked for 27s
Available commands
Here’s a list of the commands the CLI provides:
| Command | Description |
|---|---|
| `auth` | Configure API key |
| `whoami` | Check authentication and show team info |
| `apps` | List all apps in your team |
| `consumers` | List consumers for an app |
| `endpoints` | List endpoints for an app, with method and path filters |
| `metrics` | Fetch metrics for an app, with filters and different aggregation options |
| `request-logs` | Fetch request log data, with filters and field selection |
| `request-details` | Fetch full details for a specific request and response, including headers, payloads, application logs and traces |
| `sql` | Run SQL queries against a local DuckDB database |
| `reset-db` | Drop and recreate all tables in the local DuckDB database |
All commands output newline-delimited JSON to stdout by default. You can use the `--db` flag to write data to a local DuckDB database instead, which can then be queried with the `sql` command.
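Newline-delimited JSON is also straightforward to consume from your own scripts. As a minimal sketch (using hypothetical NDJSON shaped like the `apps` output in the transcript above, rather than live CLI output):

```python
import json

# Hypothetical NDJSON, shaped like `npx @apitally/cli apps` output:
# one JSON object per line.
ndjson = (
    '{"id": 1, "name": "Boundless Bookstore API"}\n'
    '{"id": 2, "name": "Internal Admin API"}\n'
)

# Parse each non-empty line independently -- that is all NDJSON requires.
apps = [json.loads(line) for line in ndjson.splitlines() if line.strip()]
names = [app["name"] for app in apps]
print(names)  # ['Boundless Bookstore API', 'Internal Admin API']
```

In practice you would pipe the CLI's stdout into such a script (or into `jq`) instead of hardcoding the data.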
Wrapping up
The Apitally CLI is now available to users on all plans.
While the CLI is primarily designed for agents, it works just as well directly in your terminal or as part of scripts and automations. Since all output is JSON, you can pipe it into `jq` for further processing.
We’d love to hear what you’ll use the CLI for and what possibilities you’ll discover. Equally, if you hit any limitations or have feature requests, please let us know.