And other times it’s a benchmark that compares apples to… a forklift.
This “1000x faster than PostgreSQL” slogan is doing the rounds again, thanks to new database projects, and because people keep bumping into one cursed query or ingest pipeline that’s quietly doing the slowest possible thing.
Key takeaways
- The “1000x faster than PostgreSQL” claim can be true for a specific workload, especially once you fix query shape or ingestion strategy.
- Mattermost hit a real-world case where one batch query used for indexing got slower and slower, then started slamming into ~30s timeouts. They dug in with EXPLAIN.
- A bunch of “Postgres is slow” stories are basically “my query can’t use the indexes I thought it could.” Classic example: OR conditions that block good index usage. Often you can dodge it with UNION ALL.
- Bulk ingest is where the huge multipliers love to hide. PostgreSQL’s COPY is explicitly built for loading lots of rows fast.
- New systems like SpacetimeDB might absolutely beat Postgres in certain application shapes. But you want to verify the benchmarks and the constraints before you hand it your production data and say good luck.
What does “1000x faster than PostgreSQL” even mean?
When someone drops 1000x faster than PostgreSQL, they’re usually talking about one of these things:
- Throughput: more operations per second.
- Latency: the same query comes back in 10ms instead of 10s.
- End-to-end app time: not just the DB, but network time, serialization, ORM overhead, query planning, the whole messy path.
Here’s the catch. All of those are real measurements… and they’re not interchangeable.
A database can be “1000x faster” on a carefully chosen benchmark and still lose badly on your day-to-day work: joins, constraints, weird ad-hoc queries, migrations, backups, extensions, all the stuff you only notice once you’re living with it.
SpacetimeDB and the “1000x faster than PostgreSQL” headline
The YouTube video “1000X Faster Than PostgreSQL?!” talks about SpacetimeDB, a newer database claiming 100–1000x faster than traditional databases in some scenarios. Those kinds of numbers usually come from architectural choices like:
- keeping more state “hot” in memory
- tighter integration between compute and storage
- changing how apps talk to the DB so you avoid ORM and network overhead
- optimizing hard for a narrower workload, often real-time or multiplayer or sync-heavy systems
None of this automatically makes the claim bogus. It just changes how you should read it.
Think of it more like: “1000x faster than PostgreSQL in this app pattern, under these assumptions.” If your world is SQL analytics, reporting, and the occasional truly unholy join, you might not see anything close.
Want a fair comparison? Benchmark the same thing. Concurrency. Durability settings. Transaction semantics. Network boundaries. Otherwise you’re not comparing, you’re doing performance cosplay.
A “1000x faster than PostgreSQL” win… without leaving PostgreSQL
One of my favorite reality checks is the Mattermost write-up. They were indexing a database with ~100 million posts. It took ~18 hours, then kept getting worse until it started timing out. Eventually they found the culprit: one query running repeatedly inside a batch loop.
Here’s the simplified query they shared:
```sql
SELECT Posts.*, Channels.TeamId
FROM Posts
LEFT JOIN Channels ON Posts.ChannelId = Channels.Id
WHERE
    Posts.CreateAt > $1
    OR (Posts.CreateAt = $1 AND Posts.Id > $2)
ORDER BY Posts.CreateAt ASC, Posts.Id ASC
LIMIT $3;
```

Then they did the thing more people should do before blaming the database:
```sql
EXPLAIN ...
```

And oof. The plan showed tens of millions of rows removed by filter, and most of the time went to CPU rather than I/O. Translation: Postgres wasn’t “being slow” for fun. It was doing a ridiculous amount of work because the filter shape stopped it from using the index efficiently for the full condition.
If you haven’t bookmarked it yet, the official docs on plan inspection are genuinely worth your time.
External link: https://www.postgresql.org/docs/current/using-explain.html
PostgreSQL tip: OR vs UNION ALL
A common fix people brought up in the Hacker News discussion around the Mattermost post: swap an OR for a UNION or UNION ALL so the planner can use indexes better on each branch.
Example pattern:
```sql
SELECT Posts.*, Channels.TeamId
FROM Posts
LEFT JOIN Channels ON Posts.ChannelId = Channels.Id
WHERE Posts.CreateAt > $1

UNION ALL

SELECT Posts.*, Channels.TeamId
FROM Posts
LEFT JOIN Channels ON Posts.ChannelId = Channels.Id
WHERE Posts.CreateAt = $1 AND Posts.Id > $2

ORDER BY CreateAt ASC, Id ASC
LIMIT $3;
```

Will this always help? Nope. Sometimes it makes no difference, sometimes it gets worse, and sometimes it’s like flipping on the lights in a dark room.
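For either shape to run fast, each branch needs an index it can actually walk. Assuming the column names from the simplified query above (the exact index Mattermost uses isn’t shown in this post, so treat this as a sketch), the usual companion is a composite index on the keyset columns:

```sql
-- Composite index matching the ORDER BY and both WHERE branches
CREATE INDEX idx_posts_createat_id ON Posts (CreateAt, Id);
```

With that in place, each SELECT can do an index range scan on (CreateAt, Id) instead of filtering millions of rows on CPU.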
And yeah, a lot of “1000x faster than PostgreSQL” stories are really “1000x faster than the original SQL I wrote on a tired Tuesday.”
1000x faster than PostgreSQL for inserts: COPY and CopyManager
Bulk ingestion is the other place where you see wild multipliers without any magic fairy dust.
A Medium post measured saving 100K rows one-at-a-time using ORM-style inserts at 2166 seconds (36 minutes). Then they loaded 1 million rows using PostgreSQL’s CopyManager in ~4 seconds. They called it a 1000x+ improvement over the naive approach.
PostgreSQL basically spells this out. The COPY command is optimized for loading large numbers of rows and has significantly less overhead than repeated INSERTs.
External link: https://www.postgresql.org/docs/current/populate.html
CLI-style:
```sql
COPY people(name, address)
FROM '/path/to/people.csv'
WITH (FORMAT csv, HEADER true);
```

If you’re doing it in application code, especially Java-ish land, CopyManager is the usual route. Same core idea either way: stop paying round-trip and per-row overhead over and over and over.
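One wrinkle worth knowing: plain COPY ... FROM '/path' reads the file on the server, so it has to live where the Postgres process can see it. From a client machine, psql’s \copy meta-command does the same load but streams the file from the client side:

```sql
-- psql client-side variant; the file is read by psql, not the server
\copy people(name, address) FROM 'people.csv' WITH (FORMAT csv, HEADER true)
```

Same COPY protocol underneath, so you keep most of the speedup without server filesystem access.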
How I sanity-check “1000x faster than PostgreSQL” claims
When I hear 1000x faster than PostgreSQL, I run a quick mental checklist. Nothing fancy, just the stuff people love to accidentally change while “only switching the database.”
- What else changed besides the DB? RPC layer, ORM behavior, caching, batching.
- Are durability settings comparable? fsync, WAL, replication.
- Same query shape? Watch for OR, LIKE, joins, sorts.
- Is the dataset realistic, and does it all fit in memory?
- Is one hot path dominating everything? If yes, maybe Postgres can be fixed first.
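The durability item is the one people forget most often. Before trusting any head-to-head numbers, it’s worth dumping the relevant settings on the Postgres side so you know what you’re actually comparing against; something like:

```sql
-- What durability mode is this Postgres actually running in?
SHOW fsync;
SHOW synchronous_commit;
SHOW wal_level;
```

A benchmark run with synchronous_commit = off is playing a different sport than one with full durability turned on.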
Two quick sanity commands for PostgreSQL
These are my go-tos:
```sql
-- See the real plan and timing
EXPLAIN (ANALYZE, BUFFERS) <your query>;

-- Update stats after bulk changes
ANALYZE;
```

And if you’re stuck, the boring stuff still wins: the right composite index, selecting fewer columns, a better pagination strategy like keyset over offset, and avoiding accidental full scans. Unsexy. Effective.
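Since keyset pagination keeps coming up, here’s the difference in miniature. Table and column names are illustrative, not from any real schema:

```sql
-- OFFSET: the server still walks and throws away the first 100,000 rows
SELECT * FROM posts ORDER BY createat, id LIMIT 100 OFFSET 100000;

-- Keyset: remember the last (createat, id) you saw and seek straight past it
SELECT * FROM posts
WHERE (createat, id) > ($1, $2)
ORDER BY createat, id
LIMIT 100;
```

The row-comparison form (createat, id) > ($1, $2) is valid PostgreSQL, and it can use a composite index on (createat, id), so page 1,000 costs about the same as page 1.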
“1000x faster than PostgreSQL” is a question, not an answer
So, is 1000x faster than PostgreSQL real? Sometimes, yes. Especially when you’re comparing against a naive implementation, or a query shape that blocks index usage.
The Mattermost story is a great reminder that one bad query can melt CPU and hit timeouts. And the bulk ingest examples show how COPY can absolutely obliterate row-by-row inserts.
Before you swap databases, try to earn the 10x–1000x win inside PostgreSQL first with EXPLAIN (ANALYZE, BUFFERS), better query shapes like using UNION ALL when it fits, and proper bulk loading. And if a system like SpacetimeDB matches your application shape, benchmark it honestly.
Got a “Postgres is slow” query right now? Paste anonymized EXPLAIN (ANALYZE, BUFFERS) output in the comments and I’ll tell you what I’d look at first.
Internal link: https://www.basantasapkota026.com.np/2026/03/video-calls-inside-postgresql-yes-really.html
Sources
- YouTube. “1000X Faster Than PostgreSQL?!” (SpacetimeDB claim discussion) — https://www.youtube.com/watch?v=k7ZemI82Qxs
- Mattermost Engineering. “Making a Postgres query 1,000 times faster” — https://mattermost.com/blog/making-a-postgres-query-1000-times-faster/
- Hacker News discussion. “Making a Postgres query 1k times faster” (OR vs UNION/index usage discussion) — https://news.ycombinator.com/item?id=40372296
- PostgreSQL Documentation. “14.1. Using EXPLAIN” — https://www.postgresql.org/docs/current/using-explain.html
- PostgreSQL Documentation. “14.4. Populating a Database” (COPY guidance) — https://www.postgresql.org/docs/current/populate.html
- Balkrishan Nagpal (Medium): “How I improved data insertion speed by a factor of more than 1000x” (CopyManager example, timings) — https://balkrishan-nagpal.medium.com/postgres-how-i-improved-data-insertion-speed-by-a-factor-of-more-than-1000x-1a968e736e86