Codedock
Enterprise Integration · 7 min read · Written by Tomáš Mikeš

TimescaleDB vs. InfluxDB vs. ClickHouse: picking a database for time-series data

For the Netigo NetFlow ingestion pipeline we had to pick a database that could process GB/s of network data in real time. A concrete comparison of the three candidates on performance, query language, operations and TCO.

Tags: TimescaleDB · InfluxDB · ClickHouse · Time-series · Performance

For Netigo we were building NetFlow Manager — a system that ingests and analyses network traffic from IPFIX/NetFlow sensors in real time, with peaks in the GB/s range. The database choice was critical: the wrong one means a system that falls over under load, or one that costs five times more to run.

We evaluated three candidates: TimescaleDB, InfluxDB, ClickHouse. Here is what we weighed and why we picked TimescaleDB.

The workload we were picking for

  • Write-heavy: 500k to 2M rows/s ingestion
  • Rows typically 100-200 bytes (compact events)
  • Retention 90 days hot, 2 years cold
  • Query patterns: aggregations over time windows (sum bytes per src_ip per hour last 24h), top-N queries (top 100 flows by volume)
  • Consumers: API backend, dashboards, alerting system
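The top-N pattern above, sketched as SQL against a hypothetical `flows` table (the table and column names are illustrative, not the production schema):

```sql
-- Hypothetical schema: flows(ts timestamptz, src_ip inet, dst_ip inet, bytes bigint)
-- Top 100 sources by traffic volume over the last 24 hours
SELECT src_ip, sum(bytes) AS total_bytes
FROM flows
WHERE ts > now() - INTERVAL '24 hours'
GROUP BY src_ip
ORDER BY total_bytes DESC
LIMIT 100;
```

Both the hourly aggregation and the top-N query reduce to this shape: a time-window filter, a GROUP BY on a high-cardinality key, and an ORDER BY on an aggregate — which is exactly what the three candidates handle very differently.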

TimescaleDB — a Postgres extension

Pros:

  • It's Postgres. Everyone knows it, tooling exists, backend integration is trivial.
  • SQL — no new query language to learn.
  • Hypertables with native compression — the table is automatically partitioned into time-based chunks, and older chunks compress down 10-20×.
  • Continuous aggregates — pre-computed hourly/daily roll-up tables updated automatically.
  • JOINs with regular tables (user, config, device metadata) with no contortions — rare for a TSDB.
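A minimal sketch of what this looks like in practice — hypertable, compression policy and an hourly continuous aggregate. The schema is illustrative (our naming, not Netigo's), but the functions are TimescaleDB's standard API:

```sql
CREATE TABLE flows (
  ts      timestamptz NOT NULL,
  src_ip  inet,
  dst_ip  inet,
  bytes   bigint
);

-- Turn the plain table into a hypertable, chunked by time
SELECT create_hypertable('flows', 'ts');

-- Compress chunks older than 7 days, segmented by source IP
ALTER TABLE flows SET (timescaledb.compress,
                       timescaledb.compress_segmentby = 'src_ip');
SELECT add_compression_policy('flows', INTERVAL '7 days');

-- Continuous aggregate: hourly bytes per source
CREATE MATERIALIZED VIEW flows_hourly
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 hour', ts) AS bucket,
       src_ip,
       sum(bytes) AS total_bytes
FROM flows
GROUP BY bucket, src_ip;

-- Keep the aggregate refreshed in the background
SELECT add_continuous_aggregate_policy('flows_hourly',
  start_offset      => INTERVAL '3 hours',
  end_offset        => INTERVAL '1 hour',
  schedule_interval => INTERVAL '30 minutes');
```

Dashboards then query `flows_hourly` instead of the raw table, which is what keeps the aggregation queries cheap at this ingest rate.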

Cons:

  • Write throughput lower than ClickHouse. On our hardware (32 cores, NVMe SSD) we measured ~400k inserts/s on a single node. For 2M rows/s we'd need a cluster.
  • Licensing — open-source Apache 2.0 core, some enterprise features (multi-node, advanced compression) are paid.
  • Disk usage before compression ~30-50% larger than ClickHouse.

InfluxDB — a time-series purist

Pros:

  • Designed purely for time-series. Good benchmarks on synthetic workloads.
  • Retention policies and downsampling out-of-the-box.
  • Smaller disk footprint than Postgres/TimescaleDB.

Cons:

  • Flux query language (InfluxDB 2.x) — yet another language, yet another learning curve, fewer people know it. 3.x reverts to SQL via an IOx backend, but the ecosystem is turbulent.
  • No JOINs with relational data. If you want to aggregate flows by device metadata (in Postgres), you do it in the app.
  • In our internal load test we hit cardinality explosion — at GB/s traffic the unique src_ip × dst_ip pairs run to millions; InfluxDB slowed down.
  • Historically turbulent: major versions 1 → 2 → 3 changed both the storage engine and the query language. That track record makes a long-term stability bet hard to justify.

ClickHouse — raw throughput monster

Pros:

  • Highest write throughput. On the same hardware we measured 1.8M inserts/s — nearly 5× TimescaleDB.
  • Columnar storage, aggressive compression, smallest disk footprint (30-40% of TimescaleDB).
  • SQL dialect — ClickHouse has its own, but it stays very close to standard SQL.
  • Excellent for OLAP — top-N, aggregations, rollups.
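For comparison, the equivalent table in ClickHouse's dialect — a MergeTree sorted to serve the per-source aggregations (again an illustrative schema, not the exact one we benchmarked):

```sql
CREATE TABLE flows (
  ts     DateTime,
  src_ip IPv4,
  dst_ip IPv4,
  bytes  UInt64
)
ENGINE = MergeTree
PARTITION BY toYYYYMMDD(ts)   -- one partition per day makes retention drops cheap
ORDER BY (src_ip, ts);        -- sort key drives both compression and top-N scans
```

The sort key is the main design decision here: ordering by `(src_ip, ts)` clusters each source's rows together, which is what makes the columnar compression and the top-N scans so effective.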

Cons:

  • Weak update/delete operations. ClickHouse is append-optimised. Deletes and updates are expensive and async. Unsuitable for GDPR deletes.
  • Eventual consistency on replication. For some use cases (audit trail) unacceptable.
  • Operations complexity — ZooKeeper/Keeper coordination, sharding strategy, merge tree tuning. Needs a dedicated devops hand.
  • Smaller ecosystem than Postgres. Some tools (e.g. Liquibase schema migrations) don't support it.
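The delete limitation is visible in the syntax itself: a classic ClickHouse delete is an `ALTER TABLE` mutation that rewrites the affected data parts asynchronously, so there is no guarantee when the rows are actually gone (the IP below is a documentation example, not real data):

```sql
-- An asynchronous mutation, not a transactional DELETE
ALTER TABLE flows DELETE WHERE src_ip = toIPv4('203.0.113.7');

-- Completion has to be tracked separately
SELECT command, is_done
FROM system.mutations
WHERE is_done = 0;
```

For a GDPR erasure request with a deadline, "the rows will disappear once the merge finishes" is an awkward answer.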

The decision matrix

For Netigo we scored on 5 dimensions:

  • Write throughput: CH > TS > Influx. CH wins by ~5×.
  • Query expressiveness: TS > CH > Influx. TS SQL with JOINs against metadata tables won.
  • Operations: TS > Influx > CH. TS is “just Postgres.”
  • Team expertise: TS high. The whole team knows Postgres.
  • Long-term risk: TS > CH > Influx. Postgres ecosystem is the most stable.

For our specific numbers (500k-2M rows/s writes, the need for JOINs, no dedicated DBA), TimescaleDB came out optimal. ClickHouse would save on hardware, but the extra operational overhead would cancel the savings.

When you'd pick differently

Pick ClickHouse if:

  • Write throughput > 3M/s on a single node is required
  • You have a dedicated platform team or experienced DBA
  • You don't need JOINs with transactional data (analytics-only workload)
  • GDPR-style deletes or frequent updates aren't use cases

Pick InfluxDB if:

  • Pure IoT/observability use case with low cardinality
  • Team tolerates Flux or waits for 3.x stability
  • You need out-of-the-box downsampling without custom code

Netigo outcome after 10 months

  • TimescaleDB, single-node (for now), 64 cores, 1TB NVMe
  • Average write throughput 650k rows/s
  • P99 query latency on top-100 queries: 180 ms
  • Compression ratio on 7-day chunks: 12×
  • Disk storage: 2.3 TB for 90 days of hot data
  • Scaling plan: move to TimescaleDB multi-node once > 1.5M/s (current headroom 2.3×)
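The 90-day hot window from the original requirements maps directly onto a retention policy. A sketch of how that is typically wired up in TimescaleDB (the interval comes from our spec; the table name is illustrative):

```sql
-- Automatically drop chunks once they age out of the 90-day hot window;
-- data bound for the 2-year cold tier must be exported before the drop.
SELECT add_retention_policy('flows', INTERVAL '90 days');
```

Because chunks are time-partitioned, the drop is a cheap metadata operation rather than a mass DELETE.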

Takeaway

TSDB selection isn't about “fastest wins.” It's about how well the choice fits your overall stack, team know-how and operational capacity. For 80% of projects handling time-series data, TimescaleDB is the pragmatic default — unless you have an explicit reason to go elsewhere. ClickHouse is stronger for OLAP warehousing; InfluxDB for IoT with low cardinality.

The most important advice: benchmark your real workload. Synthetic numbers tell you nothing. Spin up a prototype, point 10% of your expected traffic at it, watch what happens.

Working on something similar?

Book a 30-minute technical call. No sales process — direct architectural feedback.
