<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: kixelated</title><link>https://news.ycombinator.com/user?id=kixelated</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 09 May 2026 14:40:00 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=kixelated" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by kixelated in "OpenAI’s WebRTC problem"]]></title><description><![CDATA[
<p>To clarify, I meant waiting an extra 200ms if the alternative was dropping part of the prompt. During periods of zero congestion, the latency would be the same.</p>
]]></description><pubDate>Sat, 09 May 2026 03:58:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=48071687</link><dc:creator>kixelated</dc:creator><comments>https://news.ycombinator.com/item?id=48071687</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48071687</guid></item><item><title><![CDATA[New comment by kixelated in "OpenAI’s WebRTC problem"]]></title><description><![CDATA[
<p>HELLO MR SEAN,<p>1. Of course users want lower latency, but they also want fewer instances where the LLM "misheard" them. It would be amazing to run A/B experiments on the trade-off between latency vs quality, but WebRTC makes that knob difficult to turn.<p>2. I'm obviously not a TTS expert, but what benefit is there to trickling out the result? The silicon doesn't care how quickly the time number increments.<p>3. Yeah, sometimes the client is aware when their IP changes and can do an ICE renegotiation. But often they aren't aware, and would normally rely on the server detecting the change; that's not possible with your LB setup. It's not a big deal, just unfortunate given how many hoops you have to jump through already.<p>4. Okay, that draft means 7 RTTs instead of 8 RTTs? Again, some can be pipelined, so the real number is a bit lower. But the real issue is the mandatory signaling server, which causes a double TLS handshake just in case P2P is being used.<p>5. Of course WebRTC is easier for a new developer because it's a black-box conferencing app. But for a large company like OpenAI, that black box starts to cause problems that could really be fixed with lower-level primitives.<p>I absolutely think you should mess around with RTP over QUIC and would love to help. If you're worried about code size, the browser (and one day the OS) provides the QUIC library. And if you switch to something closer to MoQ, QUIC handles fragmentation, retransmissions, congestion control, etc. Your application ends up being surprisingly small.<p>The main shortcoming with RoQ/MoQ is that we can't implement GCC because QUIC is congestion controlled (including datagrams). We're stuck with cubic/BBR when sending from the browser for now.</p>
]]></description><pubDate>Sat, 09 May 2026 03:28:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=48071545</link><dc:creator>kixelated</dc:creator><comments>https://news.ycombinator.com/item?id=48071545</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48071545</guid></item><item><title><![CDATA[New comment by kixelated in "OpenAI’s WebRTC problem"]]></title><description><![CDATA[
<p>Hello Mr Author here. Apologies that my comment replies aren't as funny.<p>Every low-latency application has to decide the user-experience trade-off between quality and latency. Congestion causes queuing (aka latency), and to avoid that, something needs to be skipped (lower quality).<p>The WebRTC latency vs. quality knob is fixed. It's great at minimizing latency, but suffers from a lack of flexibility. We still (try to) use WebRTC anyway because, as you implied, browser support has made it one of the only options.<p>Until now of course! WebTransport means you can achieve WebRTC-like behavior via a generic protocol. Choose how long you want to wait before dropping/resetting a stream, instead of that decision being made for you.<p>And yeah, my point in the blog is that often the user wants streaming, but not dropping. Obviously you can stream audio input/output without WebRTC. The application should be able to decide when audio packets are lost forever... is it 50ms or 500ms or 5000ms? My argument is that voice AI shouldn't pick the 50ms option.</p>
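The app-chosen deadline described above can be sketched in a few lines. This is a toy model, not any real MoQ or WebTransport API; `QueuedGroup`, `groupsToDrop`, and the numbers are invented for illustration.

```typescript
// Toy sketch: the application, not the protocol, decides how stale is too
// stale. Each audio group is queued with a timestamp; anything older than
// the app-chosen deadline gets its stream reset (dropped).
type QueuedGroup = { id: number; queuedAtMs: number };

// Returns the ids of groups to drop, given the app's deadline (50? 500? 5000ms?).
function groupsToDrop(queue: QueuedGroup[], nowMs: number, deadlineMs: number): number[] {
  return queue
    .filter((g) => nowMs - g.queuedAtMs > deadlineMs)
    .map((g) => g.id);
}
```

With a 50ms deadline this behaves like WebRTC's fixed knob; with a 5000ms deadline, nothing queued under five seconds is ever dropped.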
]]></description><pubDate>Sat, 09 May 2026 02:41:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=48071260</link><dc:creator>kixelated</dc:creator><comments>https://news.ycombinator.com/item?id=48071260</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48071260</guid></item><item><title><![CDATA[MoqBoy: Anarchy Gameboy Player]]></title><description><![CDATA[
<p>Article URL: <a href="https://moq.dev/blog/moq-boy/">https://moq.dev/blog/moq-boy/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47808585">https://news.ycombinator.com/item?id=47808585</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Fri, 17 Apr 2026 17:48:35 +0000</pubDate><link>https://moq.dev/blog/moq-boy/</link><dc:creator>kixelated</dc:creator><comments>https://news.ycombinator.com/item?id=47808585</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47808585</guid></item><item><title><![CDATA[New comment by kixelated in "On a Boat"]]></title><description><![CDATA[
<p>QUIC libraries work by looping over pending streams (in priority order) to determine which UDP packet to send next. If there's more stream data than congestion window available, the excess sits in the stream's send buffer.<p>Either side can abort a stream if it's taking too long, clearing the send buffer and officially dropping the data. It's a lot more flexible than opaque UDP send buffers and random packet loss.<p>FEC would make the most sense at the QUIC level because random packet loss is primarily hop-by-hop. But I'm not aware of any serious efforts to do that. There are a lot of ideas out there, but TBH MoQ is too young to have the production usage required to evaluate a FEC scheme.</p>
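The send loop described above can be caricatured in a few lines. This is a toy model, not any real QUIC library's scheduler; `SendStream`, the fixed packet size, and the byte-counting congestion window are all simplifications.

```typescript
// Toy model: loop over pending streams in priority order, emitting packets
// while the congestion window allows. Leftover data stays in each stream's
// send buffer until the window opens up, or until the stream is reset.
type SendStream = { id: number; priority: number; buffered: number };

// Returns [streamId, bytes] pairs in send order.
function schedule(streams: SendStream[], cwndBytes: number, packetBytes: number): Array<[number, number]> {
  const order = [...streams].sort((a, b) => b.priority - a.priority);
  const sent: Array<[number, number]> = [];
  let budget = cwndBytes;
  for (const s of order) {
    while (s.buffered > 0 && budget >= packetBytes) {
      const n = Math.min(s.buffered, packetBytes);
      sent.push([s.id, n]);
      s.buffered -= n;
      budget -= n;
    }
  }
  return sent;
}
```

With a 2400-byte window and 1200-byte packets, the high-priority stream drains first and the low-priority one keeps its remainder buffered.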
]]></description><pubDate>Wed, 18 Mar 2026 21:36:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47431730</link><dc:creator>kixelated</dc:creator><comments>https://news.ycombinator.com/item?id=47431730</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47431730</guid></item><item><title><![CDATA[New comment by kixelated in "On a Boat"]]></title><description><![CDATA[
<p>Yep, it's similar to multicast but L7.<p>But a huge difference is that there's a plan for congestion. We heavily rely on QUIC to drain network queues and prioritize/queue media based on importance. It's doable with multicast+unicast, but complicated.</p>
]]></description><pubDate>Wed, 18 Mar 2026 16:13:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=47427585</link><dc:creator>kixelated</dc:creator><comments>https://news.ycombinator.com/item?id=47427585</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47427585</guid></item><item><title><![CDATA[New comment by kixelated in "On a Boat"]]></title><description><![CDATA[
<p>Yeah for Safari support I'm using polyfills; it sucks.<p>- libav.js for AudioEncoder/AudioDecoder.
- QMux over WebSockets for WebTransport.<p>Both are NPM packages if you want to use them. @kixelated/libavjs-webcodecs-polyfill and @moq/qmux<p>26.4 removes the need for both so there's hope!</p>
]]></description><pubDate>Wed, 18 Mar 2026 16:09:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47427545</link><dc:creator>kixelated</dc:creator><comments>https://news.ycombinator.com/item?id=47427545</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47427545</guid></item><item><title><![CDATA[New comment by kixelated in "On a Boat"]]></title><description><![CDATA[
<p>Absolutely agree.<p>You can convert any push-based protocol into a pull-based one with a custom protocol to toggle sources on/off. But it's a non-standard solution, and soon enough you have to control the entire stack.<p>The goal of MoQ is to split WebRTC into 3-4 standard layers for reusability. You can use QUIC for networking, moq-lite/moq-transport for pub/sub, hang/msf for media, etc. Or don't! The composability depends on your use case.<p>And yeah lemme know if you want some help/advice on your QUIC-based solution. Join the discord and DM @kixelated.</p>
]]></description><pubDate>Wed, 18 Mar 2026 16:06:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=47427494</link><dc:creator>kixelated</dc:creator><comments>https://news.ycombinator.com/item?id=47427494</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47427494</guid></item><item><title><![CDATA[New comment by kixelated in "We replaced H.264 streaming with JPEG screenshots (and it worked better)"]]></title><description><![CDATA[
<p>Hey lewq, 40Mbps is an absolutely ridiculous bitrate. For context, Twitch maxes out around 8.5Mb/s for 1440p60. Your encoder was poorly configured, that's it. Also, it sounds like your mostly static content would greatly benefit from VBR; you could get the bitrate down to 1Mb/s or so for screen sharing.<p>And yeah, the usual approach is to adapt your bitrate to network conditions, but it's also common to modify the frame rate. There's actually no requirement for a fixed frame rate with video codecs. You could also do the same "encode on demand" approach with a codec like H.264, provided you're okay with it being low FPS on high-RTT connections (poor Australians).<p>Overall, using keyframes only is a very bad idea. It's how low-quality animated GIFs used to work before they were secretly replaced with video files. Video codecs are extremely efficient <i>because</i> of delta encoding.<p>But I totally agree with ditching WebRTC. WebSockets + WebCodecs is fine provided you have a plan for bufferbloat (ex. adaptive bitrate (ABR), GoP skipping).</p>
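The bitrate/frame-rate adaptation mentioned above can be sketched like this. All thresholds and the 25% headroom are invented for illustration; real encoders expose these knobs differently.

```typescript
// Toy sketch: scale the encoder bitrate to measured throughput, and once the
// bitrate hits a floor, halve the frame rate instead of starving the encoder.
function adapt(throughputKbps: number): { bitrateKbps: number; fps: number } {
  // Leave ~25% headroom so queues can drain after a burst (the bufferbloat plan).
  const target = Math.floor(throughputKbps * 0.75);
  const floorKbps = 500;
  if (target >= floorKbps) return { bitrateKbps: target, fps: 30 };
  return { bitrateKbps: floorKbps, fps: 15 };
}
```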
]]></description><pubDate>Wed, 24 Dec 2025 05:46:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=46372841</link><dc:creator>kixelated</dc:creator><comments>https://news.ycombinator.com/item?id=46372841</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46372841</guid></item><item><title><![CDATA[Media over QUIC: You Don't Need It]]></title><description><![CDATA[
<p>Article URL: <a href="https://moq.dev/blog/you-dont-need-it/">https://moq.dev/blog/you-dont-need-it/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46353472">https://news.ycombinator.com/item?id=46353472</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 22 Dec 2025 11:57:26 +0000</pubDate><link>https://moq.dev/blog/you-dont-need-it/</link><dc:creator>kixelated</dc:creator><comments>https://news.ycombinator.com/item?id=46353472</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46353472</guid></item><item><title><![CDATA[New comment by kixelated in "WebTransport is almost here to allow UDP-like exchange in the browser"]]></title><description><![CDATA[
<p>Yeah, technically it's SCTP over DTLS for data channels. Only the media layer gets to use raw UDP, limiting the scope.</p>
]]></description><pubDate>Mon, 17 Nov 2025 20:30:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=45957929</link><dc:creator>kixelated</dc:creator><comments>https://news.ycombinator.com/item?id=45957929</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45957929</guid></item><item><title><![CDATA[New comment by kixelated in "WebTransport is almost here to allow UDP-like exchange in the browser"]]></title><description><![CDATA[
<p>QUIC has a much better alternative to FORWARD-TSN, either via RESET_STREAM or QUIC datagrams.<p>I've implemented SCTP before to hack in "datagram" support by spamming FORWARD-TSN. Fun fact: you can't use FORWARD-TSN if there's still reliable data outstanding. TSN is sequential, after all; you have to drop all or nothing.<p>QUIC as a protocol is significantly better than SCTP. I really recommend the RFC.</p>
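The FORWARD-TSN limitation above can be shown with a toy model (the `Chunk` shape is invented; this is not any SCTP library's API). Because the cumulative TSN is sequential, a sender can only skip a prefix of the send queue, and the first outstanding reliable chunk blocks everything behind it; QUIC's RESET_STREAM instead drops exactly one stream's data.

```typescript
// Toy model of FORWARD-TSN: advance the cumulative TSN past unreliable
// chunks, but stop at the first reliable one still outstanding.
type Chunk = { tsn: number; reliable: boolean };

// Returns how many queued chunks can actually be skipped.
function forwardTsn(queue: Chunk[]): number {
  let skipped = 0;
  for (const c of queue) {
    if (c.reliable) break; // reliable data outstanding: can't skip past it
    skipped++;
  }
  return skipped;
}
```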
]]></description><pubDate>Mon, 17 Nov 2025 20:22:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=45957836</link><dc:creator>kixelated</dc:creator><comments>https://news.ycombinator.com/item?id=45957836</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45957836</guid></item><item><title><![CDATA[New comment by kixelated in "WebTransport is almost here to allow UDP-like exchange in the browser"]]></title><description><![CDATA[
<p>For sure, if you want an ordered/reliable stream then WebSocket is ideal. WebTransport is useful when you also want prioritization and semi-reliable networking, similar in concept to WebRTC data channels.</p>
]]></description><pubDate>Mon, 17 Nov 2025 20:16:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=45957782</link><dc:creator>kixelated</dc:creator><comments>https://news.ycombinator.com/item?id=45957782</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45957782</guid></item><item><title><![CDATA[New comment by kixelated in "WebTransport is almost here to allow UDP-like exchange in the browser"]]></title><description><![CDATA[
<p>I like to frame WebTransport as multiple WebSocket connections to the same host, but using a shared handshake.<p>It's common to multiplex a WebSocket connection, but you don't need that with WebTransport, and you avoid head-of-line blocking.<p>But yeah, I wish WebTransport had a better TCP fallback. I still use WebSocket for that.</p>
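For contrast, this is roughly what "multiplexing a WebSocket" means in practice: a hand-rolled framing layer tagging every message with a channel id. The 5-byte header here is invented for illustration; WebTransport's independent streams make this layer (and its head-of-line blocking) unnecessary.

```typescript
// Toy framing: [channel: u8][length: u32 BE][payload] -- the kind of layer a
// multiplexed WebSocket needs and WebTransport does not.
function muxFrame(channel: number, payload: Uint8Array): Uint8Array {
  const out = new Uint8Array(payload.length + 5);
  const view = new DataView(out.buffer);
  view.setUint8(0, channel);
  view.setUint32(1, payload.length); // big-endian by default
  out.set(payload, 5);
  return out;
}

function demuxFrame(frame: Uint8Array): { channel: number; payload: Uint8Array } {
  const view = new DataView(frame.buffer, frame.byteOffset);
  const len = view.getUint32(1);
  return { channel: view.getUint8(0), payload: frame.slice(5, 5 + len) };
}
```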
]]></description><pubDate>Mon, 17 Nov 2025 18:56:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=45956717</link><dc:creator>kixelated</dc:creator><comments>https://news.ycombinator.com/item?id=45956717</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45956717</guid></item><item><title><![CDATA[New comment by kixelated in "WebTransport is almost here to allow UDP-like exchange in the browser"]]></title><description><![CDATA[
<p>I maintain <a href="https://github.com/kixelated/web-transport" rel="nofollow">https://github.com/kixelated/web-transport</a><p>But yeah the HTTP/3 integration definitely makes WebTransport harder to support. The QUIC connection needs to be shared between HTTP/3 and WebTransport.</p>
]]></description><pubDate>Mon, 17 Nov 2025 18:51:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=45956656</link><dc:creator>kixelated</dc:creator><comments>https://news.ycombinator.com/item?id=45956656</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45956656</guid></item><item><title><![CDATA[New comment by kixelated in "WebTransport is almost here to allow UDP-like exchange in the browser"]]></title><description><![CDATA[
<p>There's no probing in any QUIC implementation but it's possible. There's a QUIC extension in the IETF similar to transport-wide-cc but it would still be up to the browser to use it for any upload CC.</p>
]]></description><pubDate>Mon, 17 Nov 2025 18:48:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=45956623</link><dc:creator>kixelated</dc:creator><comments>https://news.ycombinator.com/item?id=45956623</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45956623</guid></item><item><title><![CDATA[New comment by kixelated in "WebTransport is almost here to allow UDP-like exchange in the browser"]]></title><description><![CDATA[
<p>SCTP, and by extension WebRTC data channels, are supposed to use the same congestion control algorithms as TCP/QUIC. But I don't know which CC libsctp uses these days.<p>WebTransport in Chrome currently uses CUBIC, but the Google folks want to turn on BBR everywhere. It uses the same QUIC implementation as HTTP/3, so it's going to be more battle-hardened.</p>
]]></description><pubDate>Mon, 17 Nov 2025 18:46:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=45956596</link><dc:creator>kixelated</dc:creator><comments>https://news.ycombinator.com/item?id=45956596</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45956596</guid></item><item><title><![CDATA[New comment by kixelated in "Ask HN: Those who applied to the OpenAI Grove program, did you ever hear back?"]]></title><description><![CDATA[
<p>I had a 30 minute intro call and got a rejection a few days later. It was VERY timely (Sep 21 email, Sep 23 meeting, Sep 26 rejection).<p>Neither of us was sure my project was a good fit for the program. It was still a positive experience, and they were nice enough to offer me an intro to a more relevant team within OpenAI.<p>I couldn't quite figure out the goal of Grove. The line about "pre-idea" individuals, and of course the referral offer, made me feel that it's more of a hiring pipeline than a traditional incubator. But we'll see when they announce the cohort.</p>
]]></description><pubDate>Sun, 19 Oct 2025 22:09:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=45638496</link><dc:creator>kixelated</dc:creator><comments>https://news.ycombinator.com/item?id=45638496</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45638496</guid></item><item><title><![CDATA[New comment by kixelated in "The first Media over QUIC CDN: Cloudflare"]]></title><description><![CDATA[
<p>Thanks!<p>You don't need multicast! CDNs effectively implement multicast, with caching, at L7 instead of relying on routers and ISPs to implement it at L3. That's actually what I did at Twitch for 5 years.<p>In theory, multicast could reduce the traffic from CDN edge to ISP, but only for the largest broadcasts of the year (ex. the Super Bowl). A lot of CDNs are getting around this by putting CDN edges within ISPs. Smaller events don't benefit because of the low probability of two viewers sharing the same path.<p>There are other issues with multicast, namely congestion control and encryption. Not unsolvable, but the federated nature of multicast makes things more difficult to fix.<p>Multicast would benefit P2P the most. I just don't see it catching on given how huge CDNs have become. Even WebRTC, which uses RTP (designed with multicast in mind), has shown no interest in supporting it. But I did hear a rumor that Google was using multicast for Meet within their network, so maaaybe?</p>
]]></description><pubDate>Sat, 23 Aug 2025 14:55:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=44996441</link><dc:creator>kixelated</dc:creator><comments>https://news.ycombinator.com/item?id=44996441</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44996441</guid></item><item><title><![CDATA[New comment by kixelated in "The first Media over QUIC CDN: Cloudflare"]]></title><description><![CDATA[
<p>QUIC has support for a preferred address, where anycast is used for the QUIC handshake and then the connection migrates to a unicast address. It still has issues, but it's nice to have sticky established connections and avoid flapping mid-connection.</p>
]]></description><pubDate>Sat, 23 Aug 2025 04:33:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=44993182</link><dc:creator>kixelated</dc:creator><comments>https://news.ycombinator.com/item?id=44993182</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44993182</guid></item></channel></rss>