
We Mass-Deployed 15-Year-Old Screen Sharing Technology and It's Actually Better

By Luke Marsden

Or: How JPEG Screenshots Defeated Our Beautiful H.264 WebCodecs Pipeline

Part 2 of our video streaming saga. Read Part 1: How we replaced WebRTC with WebSockets →

The Year is 2025 and We're Sending JPEGs

Let me tell you about the time we spent three months building a gorgeous, hardware-accelerated, WebCodecs-powered, 60fps H.264 streaming pipeline over WebSockets...

...and then replaced it with grim | curl when the WiFi got a bit sketchy.

I wish I was joking.

Act I: Hubris (Also Known As "Enterprise Networking Exists")

We're building Helix, an AI platform where autonomous coding agents work in cloud sandboxes. Users need to watch their AI assistants work. Think "screen share, but the thing being shared is a robot writing code."

Last week, we explained how we replaced WebRTC with a custom WebSocket streaming pipeline. This week: why that wasn't enough.

The constraint that ruined everything: it has to work on enterprise networks.

You know what enterprise networks love? HTTP. HTTPS. Port 443. That's it. That's the list.

You know what enterprise networks hate?

- UDP - Blocked. Deprioritized. Dropped. "Security risk."
- WebRTC - Requires TURN servers, which require UDP, which is blocked
- Custom ports - Firewall says no
- STUN/ICE - NAT traversal? In my corporate network? Absolutely not
- Literally anything fun - Denied by policy

We tried WebRTC first. Worked great in dev. Worked great in our cloud. Deployed to an enterprise customer. "The video doesn't connect."

Checks network: outbound UDP blocked. TURN server unreachable. ICE negotiation failing.

We could fight this. Set up TURN servers. Configure enterprise proxies. Work with IT departments. Or we could accept reality: everything must go through HTTPS on port 443.

So we built a pure WebSocket video pipeline:

- H.264 encoding via GStreamer + VA-API (hardware acceleration, baby)
- Binary frames over WebSocket (L7 only, works through any proxy)
- WebCodecs API for hardware decoding in the browser
- 60fps at 40Mbps with sub-100ms latency

We were so proud. We wrote Rust. We wrote TypeScript. We implemented our own binary protocol. We measured things in microseconds.

Then someone tried to use it from a coffee shop.

Act II: Denial

"The video is frozen."

"Your WiFi is bad."

"No, the video is definitely frozen. And now my keyboard isn't working."

Checks the video: it's showing what the AI was doing 30 seconds ago. And the delay is growing.

Turns out, 40Mbps video streams don't appreciate 200ms+ network latency. Who knew.

When the network gets congested:

- Frames buffer up in the TCP/WebSocket layer
- They arrive in order (thanks, TCP!) but increasingly delayed
- Video falls further and further behind real time
- You're watching the AI type code from 45 seconds ago
- By the time you see a bug, the AI has already committed it to main
- Everything is terrible forever

"Just lower the bitrate," you say. Great idea. Now it's 10Mbps of blocky garbage that's still 30 seconds behind.

Act III: Bargaining

We tried everything: "What if we only send...
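To make the WebSocket pipeline above concrete, here is a minimal sketch of the browser-side decode path it describes: binary H.264 frames arriving over a WebSocket and fed into a hardware-accelerated WebCodecs VideoDecoder. The URL, codec string, and keyframe handling are illustrative assumptions, not Helix's actual protocol.

```typescript
// Sketch: WebSocket -> WebCodecs H.264 decode -> canvas.
// Assumes the server sends raw Annex-B H.264 access units, one per message.

const canvas = document.querySelector("canvas")!;
const ctx = canvas.getContext("2d")!;

const decoder = new VideoDecoder({
  output: (frame: VideoFrame) => {
    // Paint each decoded frame, then release it so the decoder's
    // internal frame pool doesn't stall.
    ctx.drawImage(frame, 0, 0, canvas.width, canvas.height);
    frame.close();
  },
  error: (e: DOMException) => console.error("decode error", e),
});

decoder.configure({
  codec: "avc1.42E01E",                  // Baseline profile; depends on the encoder
  hardwareAcceleration: "prefer-hardware",
  optimizeForLatency: true,
});

const ws = new WebSocket("wss://example.invalid/stream"); // placeholder URL
ws.binaryType = "arraybuffer";

let firstFrame = true;
ws.onmessage = (ev: MessageEvent<ArrayBuffer>) => {
  decoder.decode(
    new EncodedVideoChunk({
      type: firstFrame ? "key" : "delta", // a real protocol flags keyframes explicitly
      timestamp: performance.now() * 1000, // microseconds
      data: ev.data,
    })
  );
  firstFrame = false;
};
```

Note that nothing in this path can drop a late frame: once chunks are queued behind a congested TCP connection, decode order is delivery order, which is exactly the growing-delay problem described in Act II.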
