QQL (Quick Query Language)

QQL is the unified query language in Cruncher for searching, filtering, and transforming logs from any data source.

Inspired by observability languages like Splunk SPL2 and Kusto KQL, QQL is designed to be easy to learn while providing powerful capabilities for log analysis and investigation.

Different log sources (Grafana, Loki, Kubernetes, Docker) have different native query languages. QQL abstracts away these differences, letting you write once and query any source. You get:

  • Consistent syntax across all adapters
  • Powerful pipeline model for data transformation
  • Simple learning curve with familiar operators
  • Local execution for fast feedback

QQL uses a pipeline model: data flows through a series of commands, with each command filtering, transforming, or visualizing the data.

How Data Flows Through QQL

  1. Data Source — logs are retrieved from your configured adapter
  2. Pipeline Commands — each stage filters (where), transforms (eval, regex), or aggregates (stats, timechart) the data
  3. Output — view results as Logs, Table, or Chart
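For example, a minimal query that touches each stage in order (field names borrowed from the worked example later on this page):

service="api" | where duration > 0 | stats count by endpoint

The adapter retrieves records matching service="api", where filters them locally, and stats aggregates the survivors into a table.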

A QQL query has two parts:

  1. Controller parameters (optional): Passed to the adapter for server-side filtering
  2. Pipeline commands: Executed locally on results
[controller_params] | command1 | command2 | command3

Example:

level=error | where contains(message, "timeout") | stats count by service
  • level=error — a controller parameter, passed to the adapter for server-side filtering
  • where contains(...) — filters records where message contains "timeout"
  • stats count by service — counts records grouped by the service field

See Commands for full details.
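One command listed in the flow above but not demonstrated yet is timechart. Assuming it shares the grouping syntax of stats (an assumption; the Commands page has the authoritative signature), a query bucketing errors over time might look like:

level=error | timechart count by service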

QQL provides functions in four categories:

  • Boolean — boolean logic
  • String — string manipulation
  • Number — math and numeric operations
  • Conditional — conditional evaluation

See Functions for full details.
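As an illustration, here is a sketch combining the categories — contains appears elsewhere on this page, while if() and the == and and operators are assumptions modeled on SPL2/KQL and may differ in QQL:

level=error
| eval kind = if(contains(message, "timeout"), "network", "other")
| where kind == "network"

eval applies the conditional if() to the result of the string function contains, and where then applies a boolean test to the derived field.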

QQL supports four data types: string, number, boolean, and null. The adapter assigns types when extracting fields, and type coercion happens implicitly in expressions.

See Data Types for details.
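For example, assuming coercion behaves as in comparable languages (an illustration, not documented behavior), a string literal compared against a numeric field would be coerced before the comparison:

| where duration > "100"

Here the string "100" would be treated as the number 100 because duration is numeric.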

Suppose you want to investigate slow API requests in your logs:

service="api" method="POST"
| eval duration_ms = duration / 1000
| where duration_ms > 100
| stats avg(duration_ms), max(duration_ms), count() by endpoint
| sort count desc

What happens:

  1. The adapter filters for service=api and method=POST before sending data
  2. eval derives a duration_ms field from duration
  3. where keeps only requests slower than 100 ms
  4. stats aggregates per endpoint: average duration, max duration, and count
  5. sort orders the results by count, descending

Result: A table showing which endpoints have the most slow requests.


Next steps:

  • Commands — the full pipeline command reference
  • Functions — the full function reference
  • Data Types — details on types and coercion