Video Calls inside PostgreSQL work end-to-end. Camera frames go into the database as rows. Another browser pulls them back out and plays them in near real time. It’s clever. It’s a little unhinged. It’s also extremely “engineer-brain,” and I mean that as a compliment.
So what’s going on here? How does it work, why is logical replication the whole magic trick, and where do things get sharp enough to draw blood if you try to ship it as a real product?
Key Takeaways
- Yes, Video Calls inside PostgreSQL can work. You store encoded frames in tables, then stream them out using logical replication, so you don’t need a polling loop.
- PlanetScale’s reference build hit 640×360 at 15fps. JPEG frames landed around 25–40KB, which works out to roughly 375–600KB/s of video per direction. Audio was PCM samples.
- LISTEN/NOTIFY won’t carry video frames cleanly, because Postgres limits NOTIFY payloads to under 8000 bytes in the default configuration.
- Treat this as a real-time changefeed demo, not a serious media transport. For actual calls, you still want WebRTC.
- Storing frames as rows opens a strange door. The call becomes queryable later, and research like DeepVQL leans into video analytics inside PostgreSQL.
What “Video Calls inside PostgreSQL” actually means
When most people hear “video call,” they picture WebRTC and all the usual chaos. Peer-to-peer streams. TURN servers. Jitter buffers. Codec negotiation. The whole parade.
This demo flips the script. Here, PostgreSQL plays message broker for the media frames themselves.
Roughly like this, not fancy:
- The browser captures video and audio
- The client encodes them into reasonably compact chunks, JPEG for video and PCM samples for audio
- A relay inserts those frames into Postgres tables
- Another component reads those inserts as a stream and forwards them to the other person
So the “call” is basically: INSERT frames → stream changes → play frames.
Nick Van Wiggeren showed this in PlanetScale’s write-up, inspired by SpacetimeDB’s “video call over a database” challenge. PlanetScale also links the open-source implementation they started from. Their post is still the clearest public walkthrough I’ve seen of the full pipeline.
External reference: https://planetscale.com/blog/video-conferencing-with-postgres
How it works
PlanetScale’s demo used a pretty relatable stack:
A SvelteKit frontend.
A small Node.js WebSocket server in the middle they call “pg-relay.”
And a PostgreSQL database. In the demo, it was a $5 PlanetScale PostgreSQL instance.
The flow, end to end:

1. Capture and encode in the browser. Video frames get drawn to a canvas, then encoded to JPEG. Audio samples get collected, resampled to 16kHz mono, and encoded as PCM16LE.
2. Ship frames to the relay over WebSocket.
3. Relay inserts frames into Postgres.
4. Stream those inserts back out. This is where logical replication steps in as the ordered change stream.
5. Other browser plays it back. Video becomes a blob URL from the JPEG and gets rendered into an <img>. Audio gets scheduled through the Web Audio API with a small jitter buffer.
And yeah, it feels a little weird reading “render video into an <img>” in 2026, but it works.
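The audio leg of step 1 is mostly a float-to-PCM conversion. Here's a minimal sketch of that conversion, written independently of the Web Audio capture code (the function name is mine, not from the demo):

```typescript
// Convert Float32 audio samples (range -1..1, as Web Audio hands them over)
// into 16-bit little-endian PCM, the format the demo ships to Postgres.
function floatToPcm16le(samples: Float32Array): Uint8Array {
  const out = new Uint8Array(samples.length * 2);
  const view = new DataView(out.buffer);
  for (let i = 0; i < samples.length; i++) {
    // Clamp to [-1, 1], then scale to the signed 16-bit range.
    const s = Math.max(-1, Math.min(1, samples[i]));
    view.setInt16(i * 2, s < 0 ? s * 0x8000 : s * 0x7fff, true); // true = little-endian
  }
  return out;
}
```

Resampling to 16kHz mono happens before this step; by the time samples reach this function, they're already a flat Float32Array.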
The PostgreSQL table schema PlanetScale used
This is almost verbatim from what PlanetScale used for video frames:
CREATE TABLE video_frames
).Frames get inserted like:
INSERT INTO video_frames
VALUES;No sorcery here. It’s BYTEA plus some metadata so you can tell what you’re looking at.
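On the relay side, turning a WebSocket frame into that INSERT is a few lines. A sketch, assuming a hypothetical frame envelope of my own design (field names are illustrative, not from the demo):

```typescript
// Hypothetical envelope for a frame arriving over WebSocket.
type FrameKind = "video" | "audio";

interface FrameMessage {
  kind: FrameKind;
  fromId: string;       // which participant sent it
  data: Uint8Array;     // JPEG bytes or PCM16LE samples
}

// Build the parameterized (text, values) pair a relay would hand to a
// Postgres driver like node-postgres. Table/column names mirror this post.
function frameToInsert(frame: FrameMessage): { text: string; values: unknown[] } {
  const table = frame.kind === "video" ? "video_frames" : "audio_frames";
  return {
    text: `INSERT INTO ${table} (from_id, data) VALUES ($1, $2)`,
    values: [frame.fromId, Buffer.from(frame.data)],
  };
}
```

Parameterized values matter here: frame bytes go in as a bind parameter, not interpolated into the SQL string.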
Real numbers from the demo
PlanetScale reports these operating points:
- 640×360 @ 15fps
- JPEG quality around 0.65
- Each frame about 25–40KB
- That works out to 375–600KB/s per direction for video, since you’re doing 15 frames times 25–40KB
And the row counts? Oh they get goofy fast.
At 15fps, you’re staring down about 108,000 rows per hour per active call if you keep everything. Which… you don’t. Unless you enjoy buying disks and explaining it later.
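Both numbers fall straight out of arithmetic, which is worth sanity-checking. A quick sketch (assuming the 108,000 figure counts video rows from both directions of a two-party call, and ignores audio rows):

```typescript
// Back-of-envelope math for the demo's operating point.
const FPS = 15;
const FRAME_KB_MIN = 25;
const FRAME_KB_MAX = 40;

// Video bandwidth per direction, in KB/s: frames per second times bytes per frame.
function videoKbPerSecond(fps: number, frameKb: number): number {
  return fps * frameKb;
}

// Rows accumulated per hour if every frame is kept.
// `directions` is 2 for a two-party call (each side inserts its own frames).
function videoRowsPerHour(fps: number, directions: number): number {
  return fps * 3600 * directions;
}
```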
Why logical replication makes this feasible
Logical replication is the whole trick.
Postgres logical replication gives you a reliable, ordered change stream based on the WAL, the write-ahead log. PlanetScale’s relay consumes the replication stream, notices a new row, and forwards the frame bytes out over WebSocket.
So you avoid the obvious pain:
- No tight “SELECT loop” polling like a maniac trying to keep up with 15fps.
- Updates arrive in commit order.
- And you can run presence, chat, call state, and other app events through the same mechanism.
If you’ve lived in Kafka-land or Redis Pub/Sub land, the vibe will feel familiar. Same general idea, heavier machinery, different failure modes. Still, it’s a clean demo of “Postgres as the event log.”
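What the relay does with each decoded change isn't magic either. Assuming a wal2json-style decoded message (the demo may use a different output plugin; the helper name is mine), the fan-out step looks roughly like:

```typescript
// Simplified shape of one entry in a wal2json "change" array.
interface Wal2JsonChange {
  kind: "insert" | "update" | "delete";
  table: string;
  columnnames?: string[];
  columnvalues?: unknown[];
}

// Pull freshly inserted frame rows out of a decoded replication message as
// plain { column: value } records, ready to push to clients over WebSocket.
function extractFrameInserts(
  changes: Wal2JsonChange[],
): Array<{ table: string; row: Record<string, unknown> }> {
  const frames: Array<{ table: string; row: Record<string, unknown> }> = [];
  for (const c of changes) {
    if (c.kind !== "insert") continue; // deletes from the cleanup job get skipped
    if (c.table !== "video_frames" && c.table !== "audio_frames") continue;
    const row: Record<string, unknown> = {};
    c.columnnames?.forEach((name, i) => (row[name] = c.columnvalues?.[i]));
    frames.push({ table: c.table, row });
  }
  return frames;
}
```

The filter on `kind` is load-bearing: the retention DELETEs flow through the same replication stream, and you don't want to forward those to viewers.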
Why LISTEN/NOTIFY isn’t enough
Everybody’s first instinct is Postgres pub/sub. LISTEN/NOTIFY. It’s sitting right there. Tempting.
But Postgres is blunt about it. The NOTIFY payload string, in default configuration, must be shorter than 8000 bytes. That’s straight from the docs.
Reference: https://www.postgresql.org/docs/current/sql-notify.html
A single JPEG frame in this demo is 25–40KB, so you immediately smash into the limit. Sure, you could chunk every frame across multiple NOTIFY messages, reassemble them, manage ordering, handle dropped chunks… and now you’ve reinvented a worse transport protocol on top of something that was meant for lightweight signaling. Congrats?
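To make the pain concrete, here's the chunking you'd be signing up for. A sketch, not a recommendation (all names are mine):

```typescript
// NOTIFY payloads must stay under 8000 bytes in a default install, so a
// 25-40KB JPEG frame has to be split, tagged, shipped, and reassembled.
const MAX_PAYLOAD = 7000; // headroom below 8000 for headers and encoding

interface Chunk {
  frameId: string;
  index: number;
  total: number;
  data: string; // base64, since NOTIFY payloads are text
}

function chunkFrame(frameId: string, frame: Uint8Array): Chunk[] {
  const b64 = Buffer.from(frame).toString("base64");
  const total = Math.ceil(b64.length / MAX_PAYLOAD);
  const chunks: Chunk[] = [];
  for (let i = 0; i < total; i++) {
    chunks.push({ frameId, index: i, total, data: b64.slice(i * MAX_PAYLOAD, (i + 1) * MAX_PAYLOAD) });
  }
  return chunks;
}

// Reassembly: returns the frame once every chunk has arrived, else null.
// And you still haven't handled interleaved frames, drops, or timeouts.
function reassemble(chunks: Chunk[]): Uint8Array | null {
  if (chunks.length === 0 || chunks.length !== chunks[0].total) return null;
  const sorted = [...chunks].sort((a, b) => a.index - b.index);
  return new Uint8Array(Buffer.from(sorted.map((c) => c.data).join(""), "base64"));
}
```

A 30KB frame becomes six notifications, every one of which can be missed by a listener that wasn't connected at commit time. That's the transport protocol you'd be reinventing.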
A few more rough edges the docs call out:
- Notifications are delivered at commit time, not instantly mid-transaction.
- Long-running transactions can block notification queue cleanup.
- There’s a notification queue, which the docs size at 8GB in a standard install, and if it fills up, commits with NOTIFY can fail.
LISTEN/NOTIFY is great for “hey, something changed, go fetch it.”
It’s not a video hose. Not even close.
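Used the way it's intended, the payload stays tiny: notify a key, let the listener fetch the row. A sketch of the payload side (helper name and size guard are mine; the guard mirrors the documented default limit):

```typescript
// NOTIFY works fine as "row X landed, go read it" - ship a key, not the bytes.
// A default install rejects payloads of 8000 bytes or more.
const NOTIFY_LIMIT = 8000;

function frameAvailablePayload(table: string, frameId: number): string {
  const payload = JSON.stringify({ table, frameId });
  if (Buffer.byteLength(payload, "utf8") >= NOTIFY_LIMIT) {
    throw new Error("NOTIFY payload too large for a default install");
  }
  return payload;
}
```

The listener then does an ordinary SELECT by id, which is exactly the polling round-trip the logical-replication approach avoids.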
Cleanup strategy so the database doesn’t eat your disk
PlanetScale handled retention with aggressive pruning. Practical. A little brutal. Exactly what you’d do if you don’t want the database to balloon into a science project.
- A cleanup job runs every ~2 seconds
- It deletes frames older than about 5 seconds
Example:
DELETE FROM audio_frames WHERE inserted_at < NOW() - INTERVAL '5 seconds';
DELETE FROM video_frames WHERE inserted_at < NOW() - INTERVAL '5 seconds';

So you keep a small rolling window; they expected around 5–7 seconds of frames to exist at any moment. They also computed approximate FPS by counting recent rows, which is one of those “database as instrument panel” moments I honestly love. It feels wrong and right at the same time.
SELECT
from_id,
COUNT(*) AS frames_5s,
ROUND(COUNT(*) / 5.0, 1) AS approx_fps
FROM video_frames
WHERE inserted_at >= NOW() - INTERVAL '5 seconds'
GROUP BY from_id
ORDER BY frames_5s DESC;

But… why not just use WebRTC like a normal person?
You should. Most of the time.
WebRTC is literally built for low-latency real-time media. Browsers capture and stream audio and video, plus you can exchange arbitrary data peer-to-peer through data channels.
Reference: https://developer.mozilla.org/en-US/docs/Web/API/WebRTC_API
So why do Video Calls inside PostgreSQL at all?
I’ve only got a few answers that feel honest:
You want to see how far Postgres logical replication can stretch as a real-time backend.
You’re experimenting with a single “stream of truth” for state, chat, presence, call state, and all that.
Or you’re interested in durability and queryability of the media stream. Weird, yes. Interesting, also yes.
That last one isn’t just a thought experiment.
Video data in PostgreSQL, and where DeepVQL fits
Video conferencing is one angle. Video analytics is another.
The DeepVQL paper, titled “Deep Video Queries on PostgreSQL,” lives in the multimedia database and analytics world. It describes extending PostgreSQL with video database functions and user-defined functions for tasks like object detection, object tracking, and video analytics queries, based on public listings and snippets summarizing the work.
Reference: https://dl.acm.org/doi/10.14778/3611540.3611583
Different goal than video calls. Same underlying idea. Once video is “database-shaped,” you can do database-ish things to it.
Common mistakes people make if they try this
A few problems show up fast. Like, “ten minutes into the build” fast.
Trying to push frames through NOTIFY
You hit the 8KB payload limit and waste your life chunking.

Forgetting how fast rows pile up

At 15fps, you’re looking at about 108,000 rows/hour per call unless you prune. Retention has to be part of the plan from day one.

Treating Postgres like a media server

Postgres can move bytes. Sure. But production video needs congestion control, NAT traversal, codec negotiation, adaptive bitrate. That’s WebRTC territory.

Ignoring the community’s consistent advice about “streaming through a DB”
Even on r/PostgreSQL, the repeating pattern is basically “don’t route the live stream through the database, use a message bus or proper streaming path, store what you need for history or analytics.”
Reference discussion starter: https://www.reddit.com/r/PostgreSQL/comments/1fzlo6a/live_streaming_data_in_postgres/
So… should we build Video Calls inside PostgreSQL?
They’re real, and they’re possible. PlanetScale proved it with a working implementation that inserts actual frames, streams them out through logical replication, and hits 15fps at 640×360 with throughput numbers that sound like reality, not a slide deck.
But I’d still treat this as a killer learning project and a sharp demo of Postgres change streams, not as a recommended media transport. For real calls, WebRTC remains the grown-up choice.
If you try your own version, I genuinely want to know what snaps first in your environment. Latency? Egress cost? Replication lag? Something you didn’t expect? And if you’re in a real-time mood, you might also like this related post on browser-native tooling: https://www.basantasapkota026.com.np/2026/02/webmcp-is-awesome-browser-native-tools.html
Sources
- PlanetScale — “Video Conferencing with Postgres” (Nick Van Wiggeren, Feb 27, 2026). https://planetscale.com/blog/video-conferencing-with-postgres
- PostgreSQL Documentation — NOTIFY payload limit and behavior. https://www.postgresql.org/docs/current/sql-notify.html
- MDN Web Docs — WebRTC API overview (RTCPeerConnection, media streams, data channels). https://developer.mozilla.org/en-US/docs/Web/API/WebRTC_API
- ACM Digital Library — “DeepVQL: Deep Video Queries on PostgreSQL”. https://dl.acm.org/doi/10.14778/3611540.3611583
- Reddit — “Live streaming data in Postgres” discussion starter. https://www.reddit.com/r/PostgreSQL/comments/1fzlo6a/live_streaming_data_in_postgres/
- YouTube — “Video Calls INSIDE PostgreSQL?” (demo/video reference). https://www.youtube.com/watch?v=O8M8C_fs4_4
- X (tweet reference mentioned by PlanetScale) — SpacetimeDB discussion thread: https://x.com/spacetime_db/status/2027252321986510900