<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: winwang</title><link>https://news.ycombinator.com/user?id=winwang</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 17 Apr 2026 07:15:51 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=winwang" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by winwang in "Rust Threads on the GPU"]]></title><description><![CDATA[
<p>Each SM should have 4 independent SMSPs (32 lanes each), no? Effectively a "4-core" task-parallel system per SM.</p>
]]></description><pubDate>Wed, 15 Apr 2026 22:45:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47786312</link><dc:creator>winwang</dc:creator><comments>https://news.ycombinator.com/item?id=47786312</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47786312</guid></item><item><title><![CDATA[New comment by winwang in "McGridsort: Warping Grids for GPU k-way mergesort"]]></title><description><![CDATA[
<p>Had a fun little idea for a weird GPU/SIMD k-way mergesort a couple years back, finally decided to write it up! (Anti-)jumpscare: no hard perf numbers in the post (though I have profiled it somewhat already).</p>
]]></description><pubDate>Tue, 07 Apr 2026 17:55:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47678926</link><dc:creator>winwang</dc:creator><comments>https://news.ycombinator.com/item?id=47678926</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47678926</guid></item><item><title><![CDATA[McGridsort: Warping Grids for GPU k-way mergesort]]></title><description><![CDATA[
<p>Article URL: <a href="https://winwang.blog/posts/mcgridsort">https://winwang.blog/posts/mcgridsort</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47678925">https://news.ycombinator.com/item?id=47678925</a></p>
<p>Points: 3</p>
<p># Comments: 1</p>
]]></description><pubDate>Tue, 07 Apr 2026 17:55:15 +0000</pubDate><link>https://winwang.blog/posts/mcgridsort</link><dc:creator>winwang</dc:creator><comments>https://news.ycombinator.com/item?id=47678925</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47678925</guid></item><item><title><![CDATA[New comment by winwang in "ARC-AGI-3"]]></title><description><![CDATA[
<p>Interestingly, I find that the models generalize decently well as long as the "training" (more analogous to that for humans) fits in (small enough) context. That's to say, "in-context learning" seems good enough for real use.<p>But of course, that's not quite "long term"</p>
]]></description><pubDate>Thu, 26 Mar 2026 23:34:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47537231</link><dc:creator>winwang</dc:creator><comments>https://news.ycombinator.com/item?id=47537231</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47537231</guid></item><item><title><![CDATA[New comment by winwang in "ARC-AGI-3"]]></title><description><![CDATA[
<p>How much of this is expectation-setting by the heights models reach? i.e. if we could assess a consistent floor of model performance in a vacuum, would we say it's better at "AGI" than the bottom 0.1% of humans?</p>
]]></description><pubDate>Thu, 26 Mar 2026 19:17:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47534475</link><dc:creator>winwang</dc:creator><comments>https://news.ycombinator.com/item?id=47534475</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47534475</guid></item><item><title><![CDATA[New comment by winwang in "iPhone 17 Pro Demonstrated Running a 400B LLM"]]></title><description><![CDATA[
<p>It would be much worse if it had said "You are absolutely wrong to be confused", haha.</p>
]]></description><pubDate>Mon, 23 Mar 2026 17:09:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=47492247</link><dc:creator>winwang</dc:creator><comments>https://news.ycombinator.com/item?id=47492247</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47492247</guid></item><item><title><![CDATA[New comment by winwang in "Non-Messing-Up++: Diagonal Sorting and Young Tableaux"]]></title><description><![CDATA[
<p>Hey HN, I figured I'd just share this for feedback despite its dryness and small-idea-ness.</p>
]]></description><pubDate>Mon, 23 Mar 2026 12:45:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47488727</link><dc:creator>winwang</dc:creator><comments>https://news.ycombinator.com/item?id=47488727</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47488727</guid></item><item><title><![CDATA[Non-Messing-Up++: Diagonal Sorting and Young Tableaux]]></title><description><![CDATA[
<p>Article URL: <a href="https://winwang.blog/posts/non-messing-up++">https://winwang.blog/posts/non-messing-up++</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47488726">https://news.ycombinator.com/item?id=47488726</a></p>
<p>Points: 14</p>
<p># Comments: 1</p>
]]></description><pubDate>Mon, 23 Mar 2026 12:45:57 +0000</pubDate><link>https://winwang.blog/posts/non-messing-up++</link><dc:creator>winwang</dc:creator><comments>https://news.ycombinator.com/item?id=47488726</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47488726</guid></item><item><title><![CDATA[New comment by winwang in "Dataframe 1.0.0.0"]]></title><description><![CDATA[
<p>(no idea but) I feel like changing the first number carries psychological weight, but the 2nd number sometimes feels more important than just "minor". So may as well let the schema set the mind free?</p>
]]></description><pubDate>Mon, 23 Mar 2026 12:33:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47488586</link><dc:creator>winwang</dc:creator><comments>https://news.ycombinator.com/item?id=47488586</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47488586</guid></item><item><title><![CDATA[New comment by winwang in "Our commitment to Windows quality"]]></title><description><![CDATA[
<p>...I almost thought it was a parody site!</p>
]]></description><pubDate>Fri, 20 Mar 2026 20:04:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47459894</link><dc:creator>winwang</dc:creator><comments>https://news.ycombinator.com/item?id=47459894</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47459894</guid></item><item><title><![CDATA[New comment by winwang in "Ask HN: What is it like being in a CS major program these days?"]]></title><description><![CDATA[
<p>Interesting. I've felt like it's never been easier to learn things, but I suppose that's not quite the same as "acquiring new skills". I don't know if it applies, but it's always been easy to take the easy way out?<p>I feel like AI has made it a bit easier to do harder things too.</p>
]]></description><pubDate>Mon, 16 Mar 2026 12:43:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47398212</link><dc:creator>winwang</dc:creator><comments>https://news.ycombinator.com/item?id=47398212</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47398212</guid></item><item><title><![CDATA[New comment by winwang in "LLM Writing Tropes.md"]]></title><description><![CDATA[
<p>Yeah that's somewhat close to what I meant, though there's an irony here in that your comment (and this one) are pretty reddit-esque.</p>
]]></description><pubDate>Sun, 08 Mar 2026 23:40:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47302867</link><dc:creator>winwang</dc:creator><comments>https://news.ycombinator.com/item?id=47302867</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47302867</guid></item><item><title><![CDATA[New comment by winwang in "LLM Writing Tropes.md"]]></title><description><![CDATA[
<p>I don't think lived experience matters too much to me.
In some sense, AI has very unique "lived" experience, which is what creates the voice it uses ("doesn't have a voice" seems like an impossibility to me by definition).<p>I find AI very "human-esque", and its "self-reported" phenomenology is very entertaining to me, at least.<p>I also think AI writing might feel trashy because most human writing is trashy.</p>
]]></description><pubDate>Sun, 08 Mar 2026 21:47:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=47301881</link><dc:creator>winwang</dc:creator><comments>https://news.ycombinator.com/item?id=47301881</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47301881</guid></item><item><title><![CDATA[New comment by winwang in "Google Workspace CLI"]]></title><description><![CDATA[
<p>Really interesting. I was thinking about something similar regarding the shape of code. I have no qualms recommending my agents take static analysis to the extreme, though it would be cumbersome for most people.</p>
]]></description><pubDate>Thu, 05 Mar 2026 02:54:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47256909</link><dc:creator>winwang</dc:creator><comments>https://news.ycombinator.com/item?id=47256909</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47256909</guid></item><item><title><![CDATA[New comment by winwang in "Nobody gets promoted for simplicity"]]></title><description><![CDATA[
<p>What about someone inexperienced but skeptical, using AI to learn + fix their own code before opening the PR?</p>
]]></description><pubDate>Wed, 04 Mar 2026 14:13:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47247604</link><dc:creator>winwang</dc:creator><comments>https://news.ycombinator.com/item?id=47247604</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47247604</guid></item><item><title><![CDATA[New comment by winwang in "Agentic Engineering Patterns"]]></title><description><![CDATA[
<p>Linear walkthrough: I ask my agents to give me a numbered tree. Controlling tree size specifies granularity. Numbering means it's simple to refer to points for discussion.<p>Other things that I feel are useful:<p>- Very strict typing/static analysis<p>- Denying tool usage with a hook telling the agent why+what they should do (instead of simple denial, or dangerously accepting everything)<p>- Using different models for code review</p>
]]></description><pubDate>Wed, 04 Mar 2026 09:09:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47244948</link><dc:creator>winwang</dc:creator><comments>https://news.ycombinator.com/item?id=47244948</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47244948</guid></item><item><title><![CDATA[New comment by winwang in "Intel's make-or-break 18A process node debuts for data center with 288-core Xeon"]]></title><description><![CDATA[
<p>If you have enough cores, you could pool the L1 together for makeshift RAM!</p>
]]></description><pubDate>Tue, 03 Mar 2026 22:08:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47239757</link><dc:creator>winwang</dc:creator><comments>https://news.ycombinator.com/item?id=47239757</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47239757</guid></item><item><title><![CDATA[New comment by winwang in "Elevated Errors in Claude.ai"]]></title><description><![CDATA[
<p>Kind of agreed. I like vibe coding as "just" another tool. It's nice to review code in IDE (well, VSCode), make changes without fully refactoring, and have the AI "autocomplete". Interestingly, it's sometimes way faster + easier to refactor by hand because of IDE tooling.<p>The ways that agents actually make me "faster" are typically:
1. more fun to slog through tedious/annoying parts
2. fast code review iterations
3. parallel agents</p>
]]></description><pubDate>Tue, 03 Mar 2026 09:21:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47230123</link><dc:creator>winwang</dc:creator><comments>https://news.ycombinator.com/item?id=47230123</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47230123</guid></item><item><title><![CDATA[New comment by winwang in "[dead]"]]></title><description><![CDATA[
<p>Not sure, but <a href="https://status.claude.com/" rel="nofollow">https://status.claude.com/</a> uptime is pretty spotty. Funnily, the latest bar is still green despite there being an incident (and the messages even acknowledge this).</p>
]]></description><pubDate>Tue, 03 Mar 2026 05:00:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47228286</link><dc:creator>winwang</dc:creator><comments>https://news.ycombinator.com/item?id=47228286</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47228286</guid></item><item><title><![CDATA[New comment by winwang in "If AI writes code, should the session be part of the commit?"]]></title><description><![CDATA[
<p>Interesting! I actually split up larger goals into two plan files: one detailed plan for design, and one "exec plan" which is effectively a build graph but the nodes are individual agents and what they should do. I throw the two-plan-file thing into a protocol md file along with a code/review loop.</p>
]]></description><pubDate>Mon, 02 Mar 2026 16:26:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47220113</link><dc:creator>winwang</dc:creator><comments>https://news.ycombinator.com/item?id=47220113</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47220113</guid></item></channel></rss>