<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: gmaster1440</title><link>https://news.ycombinator.com/user?id=gmaster1440</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 08 Apr 2026 01:44:55 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=gmaster1440" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by gmaster1440 in "Taste in the age of AI and LLMs"]]></title><description><![CDATA[
<p>Fair enough. I really like the tarpit analogy; I wasn't familiar with it. You can keep pulling your feet out faster than the tar rises, as long as you're willing to keep spending the energy, possibly with diminishing returns over time.</p>
]]></description><pubDate>Tue, 07 Apr 2026 20:52:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47681227</link><dc:creator>gmaster1440</dc:creator><comments>https://news.ycombinator.com/item?id=47681227</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47681227</guid></item><item><title><![CDATA[New comment by gmaster1440 in "Taste in the age of AI and LLMs"]]></title><description><![CDATA[
<p>I think we're basically agreeing here. Your point (if I'm reading it right) is that taste and discernment do scale, but the gains come through pretraining/parameter scaling, which is slow and expensive compared to the fast, cheap wins in math/coding from smaller models. So taste is more of a lagging indicator of scale: it improves, but it's the last thing people notice because the benchmarkable stuff races ahead. Which also means taste isn't really a moat, just late to get commoditized.</p>
]]></description><pubDate>Tue, 07 Apr 2026 19:36:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47680296</link><dc:creator>gmaster1440</dc:creator><comments>https://news.ycombinator.com/item?id=47680296</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47680296</guid></item><item><title><![CDATA[New comment by gmaster1440 in "Taste in the age of AI and LLMs"]]></title><description><![CDATA[
<p>If you're properly bitter-lesson-pilled then why wouldn't better models continue to develop and improve taste and discernment when it comes to design, development, and just better thinking overall?</p>
]]></description><pubDate>Tue, 07 Apr 2026 16:12:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47677533</link><dc:creator>gmaster1440</dc:creator><comments>https://news.ycombinator.com/item?id=47677533</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47677533</guid></item><item><title><![CDATA[Show HN: I built a 30x faster svelte-check in 2 days with AI]]></title><description><![CDATA[
<p>I built a Rust drop-in replacement for svelte-check that's 10-30x faster for Svelte 5 projects.<p>What it does:<p>- Parses Svelte files with a custom Rust parser
- Transforms them to TSX in parallel using Rayon
- Runs type-checking via Microsoft's tsgo (the native Go port of TypeScript)
- Maps errors back to original .svelte locations via source maps<p>Why it's fast:<p>The official svelte-check uses TypeScript's Language Service API optimized for IDEs with persistent connections. Great for autocomplete but slow for batch CLI checks.<p>svelte-check-rs writes real TSX files to disk and runs tsgo as a standalone compiler. This enables incremental builds with persistent .tsbuildinfo, so subsequent runs only re-check changed files.<p>Benchmarks on a 650-file SvelteKit monorepo (M4 Max):<p><pre><code>  Cold: 17.5s vs 39.6s (2.3x faster)
  Warm: 1.3s vs 39.4s (30x faster)
  Iterative: 2.5s vs 39.8s (16x faster)
</code></pre>
The AI part:<p>I built this in ~2 days using Claude Code (Opus 4.5) and Codex CLI (GPT-5.2 xhigh). The Svelte parser, TSX transformer, diagnostics engine, and CLI were written entirely by AI. I focused on architecture decisions and testing against real codebases while the models handled the implementation.<p>My motivation was actually to make AI coding agents more effective. When agents write code, they need to verify it works, and waiting 40 seconds for type-checking kills the feedback loop. With 1-2 second checks, agents can iterate much faster and catch their own mistakes immediately on our large and growing production SvelteKit codebase.<p>Website: <a href="https://svelte-check-rs.vercel.app/" rel="nofollow">https://svelte-check-rs.vercel.app/</a></p>
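<p>The error-mapping step above can be sketched in miniature. This is a hypothetical illustration of the pipeline's shape only: the type names and the toy line-based "source map" below are invented for the example and are not the actual svelte-check-rs internals (which use Rayon for parallelism and real source maps).</p>

```rust
// Hypothetical sketch: transform a .svelte script to "TSX" while recording
// line correspondences, then map a checker diagnostic back to the original
// .svelte line. Names and structure are illustrative, not the real tool.

/// Generated-TSX lines paired with the .svelte lines they came from.
struct LineMap {
    entries: Vec<(usize, usize)>, // (tsx_line, svelte_line), 1-indexed
}

impl LineMap {
    /// Map a diagnostic reported on a TSX line back to its original
    /// .svelte line, if that TSX line was derived from the source file.
    fn to_svelte_line(&self, tsx_line: usize) -> Option<usize> {
        self.entries
            .iter()
            .find(|(t, _)| *t == tsx_line)
            .map(|(_, s)| *s)
    }
}

/// Toy "transform": prepend a synthetic prelude line and copy the script
/// body through, recording which TSX line each source line landed on.
fn transform_to_tsx(svelte_script: &str) -> (String, LineMap) {
    let mut tsx = String::from("// generated prelude\n"); // TSX line 1: synthetic
    let mut entries = Vec::new();
    for (i, line) in svelte_script.lines().enumerate() {
        entries.push((i + 2, i + 1)); // TSX line -> original .svelte line
        tsx.push_str(line);
        tsx.push('\n');
    }
    (tsx, LineMap { entries })
}

fn main() {
    let script = "let count: number = 0;\ncount = \"oops\";";
    let (tsx, map) = transform_to_tsx(script);
    assert!(tsx.starts_with("// generated prelude"));
    // A type error the checker reports on TSX line 3 maps back to
    // .svelte line 2; the synthetic prelude line maps to nothing.
    assert_eq!(map.to_svelte_line(3), Some(2));
    assert_eq!(map.to_svelte_line(1), None);
    println!("error at .svelte line {}", map.to_svelte_line(3).unwrap());
}
```

<p>The real tool does this per-file with full position (line + column) mappings; the one-entry-per-line table here is just the smallest structure that shows why diagnostics from the generated TSX can be reported against the original .svelte locations.</p>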
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46478789">https://news.ycombinator.com/item?id=46478789</a></p>
<p>Points: 6</p>
<p># Comments: 0</p>
]]></description><pubDate>Sat, 03 Jan 2026 16:51:41 +0000</pubDate><link>https://svelte-check-rs.vercel.app/</link><dc:creator>gmaster1440</dc:creator><comments>https://news.ycombinator.com/item?id=46478789</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46478789</guid></item><item><title><![CDATA[New comment by gmaster1440 in "Windows XP Professional"]]></title><description><![CDATA[
<p>No Pinball :(</p>
]]></description><pubDate>Thu, 07 Aug 2025 14:37:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=44825035</link><dc:creator>gmaster1440</dc:creator><comments>https://news.ycombinator.com/item?id=44825035</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44825035</guid></item><item><title><![CDATA[New comment by gmaster1440 in "Measuring the impact of AI on experienced open-source developer productivity"]]></title><description><![CDATA[
<p>What if the slowdown isn't a bug but a feature? What if AI tools are forcing developers to think more carefully about their code, making them slower but potentially producing better results? AFAIK the study measured speed, not quality, maintainability, or correctness.<p>The developers might feel more productive because they're engaging with their code at a higher level of abstraction, even if it takes longer. This would be consistent with why they maintained positive perceptions despite the slowdown.</p>
]]></description><pubDate>Thu, 10 Jul 2025 19:11:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=44524441</link><dc:creator>gmaster1440</dc:creator><comments>https://news.ycombinator.com/item?id=44524441</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44524441</guid></item><item><title><![CDATA[New comment by gmaster1440 in "The Model Is the Product"]]></title><description><![CDATA[
<p>> This is also an uncomfortable direction. All investors have been betting on the application layer. In the next stage of AI evolution, the application layer is likely to be the first to be automated and disrupted.<p>Highly agree with the sentiments expressed in this post, I wrote about something similar in my blog post on "Artificial General Software": <a href="https://www.markfayngersh.com/posts/artificial-general-software" rel="nofollow">https://www.markfayngersh.com/posts/artificial-general-softw...</a></p>
]]></description><pubDate>Tue, 18 Mar 2025 12:41:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=43398727</link><dc:creator>gmaster1440</dc:creator><comments>https://news.ycombinator.com/item?id=43398727</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43398727</guid></item><item><title><![CDATA[New comment by gmaster1440 in "The Anthropic Economic Index"]]></title><description><![CDATA[
<p>The entire premise of this economic index is that they're showing you actual usage and insights from millions of anonymized Claude conversations.</p>
]]></description><pubDate>Mon, 10 Feb 2025 14:57:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=43001029</link><dc:creator>gmaster1440</dc:creator><comments>https://news.ycombinator.com/item?id=43001029</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43001029</guid></item><item><title><![CDATA[New comment by gmaster1440 in "OpenAI O3-Mini"]]></title><description><![CDATA[
<p>I think it says, amongst other things, that there is a salient difference between competitive programming like Codeforces and real-world programming. You can train a model to hill-climb Elo ratings on Codeforces, but that won't necessarily translate directly to working on a prod JavaScript codebase.<p>Anthropic figured out something about real-world coding that OpenAI is still trying to catch up to, o3-mini-high notwithstanding.</p>
]]></description><pubDate>Sat, 01 Feb 2025 14:01:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=42898333</link><dc:creator>gmaster1440</dc:creator><comments>https://news.ycombinator.com/item?id=42898333</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42898333</guid></item><item><title><![CDATA[Artificial General Software]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.markfayngersh.com/posts/artificial-general-software">https://www.markfayngersh.com/posts/artificial-general-software</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=42480725">https://news.ycombinator.com/item?id=42480725</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Sat, 21 Dec 2024 16:58:12 +0000</pubDate><link>https://www.markfayngersh.com/posts/artificial-general-software</link><dc:creator>gmaster1440</dc:creator><comments>https://news.ycombinator.com/item?id=42480725</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42480725</guid></item><item><title><![CDATA[Supermaven Joins Cursor]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.cursor.com/blog/supermaven">https://www.cursor.com/blog/supermaven</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=42119106">https://news.ycombinator.com/item?id=42119106</a></p>
<p>Points: 14</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 12 Nov 2024 20:03:49 +0000</pubDate><link>https://www.cursor.com/blog/supermaven</link><dc:creator>gmaster1440</dc:creator><comments>https://news.ycombinator.com/item?id=42119106</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42119106</guid></item><item><title><![CDATA[New comment by gmaster1440 in "Large language models in national security applications"]]></title><description><![CDATA[
<p>The "second year university student" analogy is interesting, but might not fully capture what's unique about LLMs in strategic analysis. Unlike students, LLMs can simultaneously process and synthesize insights from thousands of historical conflicts, military doctrines, and real-time data points without human cognitive limitations or biases.<p>The paper actually makes a stronger case for using LLMs to enhance rather than replace human strategists - imagine a military commander with instant access to an aide that has deeply analyzed every military campaign in history and can spot relevant patterns. The question isn't about putting LLMs "in charge," but whether we're fully leveraging their unique capabilities for strategic innovation while maintaining human oversight.</p>
]]></description><pubDate>Tue, 12 Nov 2024 19:21:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=42118683</link><dc:creator>gmaster1440</dc:creator><comments>https://news.ycombinator.com/item?id=42118683</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42118683</guid></item><item><title><![CDATA[New comment by gmaster1440 in "Large language models in national security applications"]]></title><description><![CDATA[
<p>The paper argues against using LLMs for military strategy, claiming "no textbook contains the right answers" and strategy can't be learned from text alone (the "Virtual Clausewitz" Problem). But this seems to underestimate LLMs' demonstrated ability to reason through novel situations. Rather than just pattern-matching historical examples, modern LLMs can synthesize insights across domains, identify non-obvious patterns, and generate novel strategic approaches. The real question isn't whether perfect answers exist in training data, but whether LLMs can engage in effective strategic reasoning—which increasingly appears to be the case, especially with reasoning models like o1.</p>
]]></description><pubDate>Tue, 12 Nov 2024 19:08:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=42118559</link><dc:creator>gmaster1440</dc:creator><comments>https://news.ycombinator.com/item?id=42118559</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42118559</guid></item><item><title><![CDATA[New comment by gmaster1440 in "Artificial Intelligence, Scientific Discovery, and Product Innovation [pdf]"]]></title><description><![CDATA[
<p>How generalizable are these findings given the rapid pace of AI advancement? The paper studies a snapshot in time with current AI capabilities, but the relationship between human expertise and AI could look very different with more advanced models. I would love to have seen the paper:<p>- Examine how the human-AI relationship evolved as the AI system improved during the study period<p>- Theorize more explicitly about which aspects of human judgment might be more vs less persistent<p>- Consider how their findings might change with more capable AI systems</p>
]]></description><pubDate>Tue, 12 Nov 2024 18:43:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=42118340</link><dc:creator>gmaster1440</dc:creator><comments>https://news.ycombinator.com/item?id=42118340</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42118340</guid></item><item><title><![CDATA[New comment by gmaster1440 in "Artificial Intelligence, Scientific Discovery, and Product Innovation [pdf]"]]></title><description><![CDATA[
<p>AI appears to have automated aspects of the job scientists found most intellectually satisfying.<p>- Reduced creativity and ideation work (dropping from 39% to 16% of time)<p>- Increased focus on evaluating AI suggestions (rising to 40% of time)<p>- Feelings of skill underutilization</p>
]]></description><pubDate>Tue, 12 Nov 2024 18:30:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=42118238</link><dc:creator>gmaster1440</dc:creator><comments>https://news.ycombinator.com/item?id=42118238</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42118238</guid></item><item><title><![CDATA[New comment by gmaster1440 in "Launch HN: Codebuff (YC F24) – CLI tool that writes code for you"]]></title><description><![CDATA[
<p>Does Codebuff / the tree sitter implementation support Svelte?</p>
]]></description><pubDate>Thu, 07 Nov 2024 19:50:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=42080202</link><dc:creator>gmaster1440</dc:creator><comments>https://news.ycombinator.com/item?id=42080202</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42080202</guid></item><item><title><![CDATA[Show HN: HN Push – Web push notifications for top stories on Hacker News]]></title><description><![CDATA[
<p>I created HN Push to help reduce the urge to refresh Hacker News constantly. Since I rely on HN for real-time tech news, I wanted an efficient way to stay informed without having to check the site. Receiving summaries with Apple Intelligence was an added bonus. The source code is available on GitHub[^1].<p>[^1]: <a href="https://github.com/pheuter/hnpush">https://github.com/pheuter/hnpush</a></p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=42041470">https://news.ycombinator.com/item?id=42041470</a></p>
<p>Points: 4</p>
<p># Comments: 2</p>
]]></description><pubDate>Mon, 04 Nov 2024 13:47:48 +0000</pubDate><link>https://www.hnpush.com</link><dc:creator>gmaster1440</dc:creator><comments>https://news.ycombinator.com/item?id=42041470</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42041470</guid></item><item><title><![CDATA[New comment by gmaster1440 in "OpenAI completes deal that values company at $157B"]]></title><description><![CDATA[
<p>Gift Link: <a href="https://www.nytimes.com/2024/10/02/technology/openai-valuation-150-billion.html?unlocked_article_code=1.PE4.zFlk.U9zKlbwKU6dO&smid=url-share" rel="nofollow">https://www.nytimes.com/2024/10/02/technology/openai-valuati...</a></p>
]]></description><pubDate>Wed, 02 Oct 2024 17:05:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=41722747</link><dc:creator>gmaster1440</dc:creator><comments>https://news.ycombinator.com/item?id=41722747</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41722747</guid></item><item><title><![CDATA[OpenAI completes deal that values company at $157B]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.nytimes.com/2024/10/02/technology/openai-valuation-150-billion.html">https://www.nytimes.com/2024/10/02/technology/openai-valuation-150-billion.html</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=41722742">https://news.ycombinator.com/item?id=41722742</a></p>
<p>Points: 236</p>
<p># Comments: 424</p>
]]></description><pubDate>Wed, 02 Oct 2024 17:04:48 +0000</pubDate><link>https://www.nytimes.com/2024/10/02/technology/openai-valuation-150-billion.html</link><dc:creator>gmaster1440</dc:creator><comments>https://news.ycombinator.com/item?id=41722742</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41722742</guid></item><item><title><![CDATA[New comment by gmaster1440 in "I Quit Teaching Because of ChatGPT"]]></title><description><![CDATA[
<p>I imagine it's difficult to be a good teacher and find effective ways to encourage students to rigorously think about things they care about in spite of the discomfort it might cause.<p>I also believe increasingly capable and sophisticated AI systems will play a formative role in transforming education, not as the current chatbots that are disrupting education as mentioned in the article, but as active participants in the reimagined classrooms of the future. The transition will probably be rough, but it has the potential to bring about a better future and more fruitful learning and writing.</p>
]]></description><pubDate>Tue, 01 Oct 2024 16:10:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=41710323</link><dc:creator>gmaster1440</dc:creator><comments>https://news.ycombinator.com/item?id=41710323</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41710323</guid></item></channel></rss>