<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: rogermarley</title><link>https://news.ycombinator.com/user?id=rogermarley</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 03 May 2026 19:44:57 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=rogermarley" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by rogermarley in "Specsmaxxing – On overcoming AI psychosis, and why I write specs in YAML"]]></title><description><![CDATA[
<p>Exactly. There's little gap between a spec written to the level of detail needed and actual code. There's some, but it's not a big gap after decades of umpteen new frameworks, languages, and new forms of abstraction.<p>The core of the misunderstanding is the difference between new builds and changes to existing builds (where most software dev work actually happens). Yes, you'll get a great head start with a detailed spec for a new build. The issue is the hundreds of changes that follow.<p>Do people think the desire to take shortcuts and make minimum-effort changes will stop just because the spec looks a bit more like natural language? And with an AI underneath making probabilistic changes to code that's now essentially a compile target, do they really think the dev pace won't collapse, and that things will instead just get faster at the cost of a big ongoing inference bill?<p>LLMs <i>do not form mental models</i>. You are not going to get better results from an LLM vibe coding against spec diffs than from a dev prompting it from a position of understanding the codebase and the requested change.</p>
]]></description><pubDate>Sun, 03 May 2026 12:07:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=47996120</link><dc:creator>rogermarley</dc:creator><comments>https://news.ycombinator.com/item?id=47996120</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47996120</guid></item><item><title><![CDATA[New comment by rogermarley in "Specsmaxxing – On overcoming AI psychosis, and why I write specs in YAML"]]></title><description><![CDATA[
<p>This ultimately converges on what source code is, though.<p>The most common form of what you'd call a "spec" is the acceptance criteria on a work ticket, which is an accretive spec, i.e. a description of a desired change: "given what already exists, change it as follows". If you somehow layered, summarized, and condensed all the tickets created since the product started, you'd have your "spec".<p>But it was the devs doing that condensing, by reconciling each desired spec addition with the reality of the existing codebase.<p>So the gap between what people are currently calling "specs" and what the code already does is not big and will not stay big, except that you're effectively adding another (quasi) compile step underneath, and in this case it's a non-deterministic one.</p>
]]></description><pubDate>Sun, 03 May 2026 11:54:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=47996037</link><dc:creator>rogermarley</dc:creator><comments>https://news.ycombinator.com/item?id=47996037</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47996037</guid></item><item><title><![CDATA[New comment by rogermarley in "AI Self-preferencing in Algorithmic Hiring: Empirical Evidence and Insights"]]></title><description><![CDATA[
<p>But I think even if it were purely leetcode-like, devs would actually be quite happy with this, since at least you'd only have to do it <i>once</i> and then it's reusable for every application.<p>At the end of the day, what matters isn't our opinion of what good screening looks like, but the salary-payers'. Personally I just rely on live (& conversational) task-based coding tests.</p>
]]></description><pubDate>Sun, 03 May 2026 11:42:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47995963</link><dc:creator>rogermarley</dc:creator><comments>https://news.ycombinator.com/item?id=47995963</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47995963</guid></item><item><title><![CDATA[New comment by rogermarley in "AI Self-preferencing in Algorithmic Hiring: Empirical Evidence and Insights"]]></title><description><![CDATA[
<p>Its purpose isn't really to test practical skills, though; it's more to screen for intelligence and conscientiousness (like a tournament of who can take the most mental punishment), both of which are extremely useful in software development.</p>
]]></description><pubDate>Sun, 03 May 2026 11:37:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47995942</link><dc:creator>rogermarley</dc:creator><comments>https://news.ycombinator.com/item?id=47995942</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47995942</guid></item><item><title><![CDATA[New comment by rogermarley in "AI Self-preferencing in Algorithmic Hiring: Empirical Evidence and Insights"]]></title><description><![CDATA[
<p>I think resumes will eventually become obsolete in tech (or already have). The SNR is so low that they offer very thin filtering value.<p>Even the tiny bits of the resume that are "hard signal", like GPA, certifications, and prior roles, don't translate into performance in the initial screening interview.<p>This is why I think the industry sorely needs examination consortia.<p>Rather than trying to guess capability from the name of the university a candidate went to, leading tech companies would create standardized tests in various fields, and your test scores would form your "resume", so that developers could just focus on improving their scores rather than wasting time on resume/application/repetitive-screening toil.</p>
]]></description><pubDate>Sat, 02 May 2026 16:07:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47987621</link><dc:creator>rogermarley</dc:creator><comments>https://news.ycombinator.com/item?id=47987621</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47987621</guid></item><item><title><![CDATA[New comment by rogermarley in "A report on burnout in open source software communities (2025) [pdf]"]]></title><description><![CDATA[
<p>It's more a question of whether one allows oneself to be taken advantage of. Only a vocal minority take that stance; others would just be disappointed if it went away.<p>I like the approach ngrok took: it made a v1 that remained open source, but once it realized demand was high, it built a closed v2 as a paid service.<p>Another approach is to offer professional services around the project.<p>Basically, you don't have to accept other people's attitude that you owe them value for free.</p>
]]></description><pubDate>Sat, 02 May 2026 15:44:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47987390</link><dc:creator>rogermarley</dc:creator><comments>https://news.ycombinator.com/item?id=47987390</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47987390</guid></item><item><title><![CDATA[New comment by rogermarley in "A report on burnout in open source software communities (2025) [pdf]"]]></title><description><![CDATA[
<p>It's an unfortunate reality of giving things away for free: it reduces respect and attracts the wrong kind of people.<p>It echoes a well-known pattern in consulting, where higher-paying clients cause fewer headaches than lower-paying ones.</p>
]]></description><pubDate>Sat, 02 May 2026 15:34:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47987306</link><dc:creator>rogermarley</dc:creator><comments>https://news.ycombinator.com/item?id=47987306</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47987306</guid></item></channel></rss>