<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: tunesmith</title><link>https://news.ycombinator.com/user?id=tunesmith</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 12 Apr 2026 08:54:15 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=tunesmith" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by tunesmith in "AI got the blame for the Iran school bombing. The truth is more worrying"]]></title><description><![CDATA[
<p>It feels like an appreciation for hypotheticals or givens is missing here. One can be against the war and the bombing in general while also accepting it as a given, and then consider whether a certain situation is understandable within that given.</p>
]]></description><pubDate>Fri, 27 Mar 2026 21:18:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=47548392</link><dc:creator>tunesmith</dc:creator><comments>https://news.ycombinator.com/item?id=47548392</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47548392</guid></item><item><title><![CDATA[New comment by tunesmith in "AI got the blame for the Iran school bombing. The truth is more worrying"]]></title><description><![CDATA[
<p>Really fascinating article. Bits of bias here and there, like "The US military has been trying to close the gap between seeing something and destroying it for as long as that gap has existed" -- you can respond to seeing and understanding something without destroying it -- but it underscores, to me at least, how much denser the "fog of war" has become. The fog of media reporting in general. Those first few paragraphs felt like a breath of fresh air.</p>
]]></description><pubDate>Fri, 27 Mar 2026 17:26:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47545659</link><dc:creator>tunesmith</dc:creator><comments>https://news.ycombinator.com/item?id=47545659</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47545659</guid></item><item><title><![CDATA[New comment by tunesmith in "Regular army and reserve components enlistment program: Summary of change"]]></title><description><![CDATA[
<p>What's your therefore??</p>
]]></description><pubDate>Wed, 25 Mar 2026 05:11:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47513539</link><dc:creator>tunesmith</dc:creator><comments>https://news.ycombinator.com/item?id=47513539</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47513539</guid></item><item><title><![CDATA[New comment by tunesmith in "The bridge to wealth is being pulled up with AI"]]></title><description><![CDATA[
<p>haha. Nervous to. :)</p>
]]></description><pubDate>Tue, 24 Mar 2026 22:10:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47510200</link><dc:creator>tunesmith</dc:creator><comments>https://news.ycombinator.com/item?id=47510200</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47510200</guid></item><item><title><![CDATA[New comment by tunesmith in "The bridge to wealth is being pulled up with AI"]]></title><description><![CDATA[
<p>The article says it's from first principles and looks pretty deep, and I didn't have time to read it exhaustively myself, so I used my side project concludia to try to analyze and graph the argument... you can see the argument graph here if you're interested in that kind of thing.<p><a href="https://concludia.org/step/6f3cbaa2-65d4-3c44-8c1f-23f6ddf2b289" rel="nofollow">https://concludia.org/step/6f3cbaa2-65d4-3c44-8c1f-23f6ddf2b...</a><p>Reading the thread below, I'm always curious where in the argument the various counterpoints would attach -- like whether a counterpoint is fatal or just an offshoot. I didn't have the system try to semantically analyze it for flaws/counterpoints yet; I just tried to get it to depict the article's reasoning. Not sure yet how good a job it did.</p>
]]></description><pubDate>Tue, 24 Mar 2026 19:02:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47507543</link><dc:creator>tunesmith</dc:creator><comments>https://news.ycombinator.com/item?id=47507543</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47507543</guid></item><item><title><![CDATA[New comment by tunesmith in "So where are all the AI apps?"]]></title><description><![CDATA[
<p>Isn't most of the positive impact going to come not from "new projects" but from the relative strength of the ideas that make it into the codebase? Which is almost impossible to measure. You know, the bigger ideas that were put off before and are now more tractable.</p>
]]></description><pubDate>Tue, 24 Mar 2026 17:16:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47506023</link><dc:creator>tunesmith</dc:creator><comments>https://news.ycombinator.com/item?id=47506023</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47506023</guid></item><item><title><![CDATA[New comment by tunesmith in "Tech employment now significantly worse than the 2008 or 2020 recessions"]]></title><description><![CDATA[
<p>Not even close, not when all things are considered. $50/hour is 100k/year, which is still considered a decent salary. 24k/year in 2000-2002 was definitely not considered a decent salary. $12/hour for sw engineers was evil. I hung up on that recruiter and cursed for a while, cold-called my way to a transitional $20/hr job, and then finally landed somewhere at $55/hr which is when things started to feel normal again. $55/hr back then is not the same as $230/hr now.</p>
]]></description><pubDate>Fri, 06 Mar 2026 23:27:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47282479</link><dc:creator>tunesmith</dc:creator><comments>https://news.ycombinator.com/item?id=47282479</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47282479</guid></item><item><title><![CDATA[New comment by tunesmith in "Tech employment now significantly worse than the 2008 or 2020 recessions"]]></title><description><![CDATA[
<p>In Portland, there was a time in 2000-2002 where Nike and Intel had contract offers out to SW developers for $12/hour, and were getting slammed with applications.</p>
]]></description><pubDate>Fri, 06 Mar 2026 19:06:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=47279572</link><dc:creator>tunesmith</dc:creator><comments>https://news.ycombinator.com/item?id=47279572</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47279572</guid></item><item><title><![CDATA[New comment by tunesmith in "Ask HN: What are you working on? (February 2026)"]]></title><description><![CDATA[
<p>That's not on my radar right now. I'm not explicitly rejecting the path, but I'm more intent on figuring out ways to make it easier for people to collaborate on arguments together, which means one destination for now.</p>
]]></description><pubDate>Fri, 13 Feb 2026 18:59:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47006358</link><dc:creator>tunesmith</dc:creator><comments>https://news.ycombinator.com/item?id=47006358</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47006358</guid></item><item><title><![CDATA[New comment by tunesmith in "Carl Sagan's Baloney Detection Kit: Tools for Thinking Critically (2025)"]]></title><description><![CDATA[
<p>While each of these is good to keep in mind while reading, I don't find them very exhaustive or derivable. I prefer the Theory of Constraints (Eli Goldratt)'s "Thinking Tools", specifically the "categories of legitimate reservation". Depending on the source, there are between six and eight.<p>1. Is it clear?<p>2. Does it actually exist? Is it true?<p>3. Does the cause actually cause the effect?<p>4. For the proposed cause, do its other implied effects exist?<p>5. Is the cause sufficient for the effect?<p>6. Are cause and effect reversed?</p>
]]></description><pubDate>Thu, 12 Feb 2026 22:52:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=46996424</link><dc:creator>tunesmith</dc:creator><comments>https://news.ycombinator.com/item?id=46996424</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46996424</guid></item><item><title><![CDATA[New comment by tunesmith in "Ask HN: What are you working on? (February 2026)"]]></title><description><![CDATA[
<p>Yeah... so far, I have found that trying to fully justify a political conclusion has a way of moderating the conclusion. But it's still possible to arrive at very different well-reasoned conclusions just from different axiomatic personal values.</p>
]]></description><pubDate>Wed, 11 Feb 2026 05:21:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=46971172</link><dc:creator>tunesmith</dc:creator><comments>https://news.ycombinator.com/item?id=46971172</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46971172</guid></item><item><title><![CDATA[New comment by tunesmith in "Ask HN: What are you working on? (February 2026)"]]></title><description><![CDATA[
<p>Coincidentally, I've been toying with using concludia to make the argument behind a tech design for an upcoming project, since one of our teams is enamored with a graph database for it - probably Neptune in our case. So far I'm having trouble really nailing down the argument that would justify it.</p>
]]></description><pubDate>Tue, 10 Feb 2026 18:18:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=46964315</link><dc:creator>tunesmith</dc:creator><comments>https://news.ycombinator.com/item?id=46964315</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46964315</guid></item><item><title><![CDATA[New comment by tunesmith in "Ask HN: What are you working on? (February 2026)"]]></title><description><![CDATA[
<p>Yes - currently, each argument/graph is independent, but I've designed it in a way that should be compatible with future plans to "transclude" parts of other public graphs - like if some lemma is really valuable to your own unrelated argument, being able to include it.<p>I do think there's quite a lot that could be done with LLM assistance here, like finding "duplicate" candidates: statements with the same semantic meaning, for potential merge. It's really complicated to think through the side effects, though, so I'm going slow. :)</p>
]]></description><pubDate>Mon, 09 Feb 2026 21:53:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=46952027</link><dc:creator>tunesmith</dc:creator><comments>https://news.ycombinator.com/item?id=46952027</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46952027</guid></item><item><title><![CDATA[New comment by tunesmith in "OpenClaw is changing my life"]]></title><description><![CDATA[
<p>Yeah I usually ask what open questions it has, versus when it thinks it is ready to implement.</p>
]]></description><pubDate>Mon, 09 Feb 2026 04:16:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=46941535</link><dc:creator>tunesmith</dc:creator><comments>https://news.ycombinator.com/item?id=46941535</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46941535</guid></item><item><title><![CDATA[New comment by tunesmith in "OpenClaw is changing my life"]]></title><description><![CDATA[
<p>ah, so cool. Yeah that is definitely bigger than what I ask for. I'd say the bigger risk I'm dealing with right now is that while it passes all my very strict linting and static analysis toolsets, I neglected to put detailed layered-architecture guidelines in place, so my code files are approaching several hundred lines now. I don't actually know if the "most efficient file size" for an agent is the same as for a human, but I'd like them to be shorter so I can understand them more easily.</p>
]]></description><pubDate>Mon, 09 Feb 2026 01:46:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=46940639</link><dc:creator>tunesmith</dc:creator><comments>https://news.ycombinator.com/item?id=46940639</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46940639</guid></item><item><title><![CDATA[New comment by tunesmith in "OpenClaw is changing my life"]]></title><description><![CDATA[
<p>I guess this is another example - I literally have not experienced what you described in... several weeks, at least.</p>
]]></description><pubDate>Mon, 09 Feb 2026 01:14:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=46940441</link><dc:creator>tunesmith</dc:creator><comments>https://news.ycombinator.com/item?id=46940441</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46940441</guid></item><item><title><![CDATA[New comment by tunesmith in "Ask HN: What are you working on? (February 2026)"]]></title><description><![CDATA[
<p>You can actually register now (with a waiting list) and make your own private graphs, if that's what you meant by a personal version. (You'd be like member #4 haha)<p>I've actually had a lot of fun hooking it up to an LLM. I have a private MCP server for it. The tools tell it how to read a concludia argument and validate it. It's what generated all the counterpoints for the "carbon offset" argument (<a href="https://concludia.org/step/9b8d443e-9a52-3006-8c2d-472406db729e" rel="nofollow">https://concludia.org/step/9b8d443e-9a52-3006-8c2d-472406db7...</a>).<p>And yeah... when I've tried to fully justify my own conclusions that I was sure were correct... it's pretty humbling to realize how many assumptions we build into our own beliefs!</p>
]]></description><pubDate>Mon, 09 Feb 2026 01:09:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=46940404</link><dc:creator>tunesmith</dc:creator><comments>https://news.ycombinator.com/item?id=46940404</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46940404</guid></item><item><title><![CDATA[New comment by tunesmith in "OpenClaw is changing my life"]]></title><description><![CDATA[
<p>I tend to be surprised by the variance of reported experiences with agentic flows like Claude Code and Codex CLI.<p>It's possible some of it is due to codebase size or tech stack, but I really think there might be more of a human learning curve going on here than a lot of people want to admit.<p>I think I am firmly in the average of people who are getting decent use out of these tools. I'm not writing specialized tools to create agents of agents with incredibly detailed instructions on how each should act. I haven't even gotten around to installing a Playwright MCP (probably my next step).<p>But I've:<p>- created project directories with soft links to several of my employer's repos, and been able to answer several cross-project and cross-team questions within minutes that normally would have required "Spike/Disco" Jira tickets for teams to investigate<p>- interviewed codebases along with product requirements to come up with very detailed Jira AC, and then, just for the heck of it, had the agent use that AC to implement the actual PR. My team still code-reviewed it but agreed it saved time<p>- in side projects, have shipped several really valuable (to me) features that would have been too hard to consider otherwise, like generating PDF book manuscripts for my branching-fiction creative writing club, and launching a whole new website that had been mired in a half-done state for years<p>Really my only tricks are the basics: AGENTS.md, brainstorm with the agent, continually ask it to write markdown specs for any cohesive idea, and then pick one at a time to implement in commit-sized or PR-sized chunks. GPT-5.2 xhigh is a marvel at this stuff.<p>My codebases are Scala, Pekko, TypeScript/React, and LilyPond - yeah, the best models even understand LilyPond now, so I can give it a lead sheet and have it arrange two-hand jazz piano exercises for me.<p>I generally think that if people can't reach the above level of success at this point in time, they need to think more about how to communicate better with the models. There's a real "you get out of it what you put into it" aspect to using these tools.</p>
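The "project directory of soft links" setup mentioned above can be sketched roughly like this (a minimal sketch; all paths and repo names here are hypothetical placeholders, not the actual repos):

```shell
# Build an umbrella workspace whose entries are symlinks into existing
# checkouts, so one agent session can traverse several repos at once.
workspace=$(mktemp -d)

# Stand-ins for real clones (in practice: existing checkouts like ~/code/repo-a).
mkdir -p "$workspace/checkouts/repo-a" "$workspace/checkouts/repo-b"

# The directory the agent is launched from, containing only symlinks.
mkdir -p "$workspace/agent-workspace"
ln -s "$workspace/checkouts/repo-a" "$workspace/agent-workspace/repo-a"
ln -s "$workspace/checkouts/repo-b" "$workspace/agent-workspace/repo-b"

# Starting the agent in $workspace/agent-workspace lets it answer
# cross-project questions without restructuring any of the real repos.
ls -l "$workspace/agent-workspace"
```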
]]></description><pubDate>Mon, 09 Feb 2026 01:03:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=46940358</link><dc:creator>tunesmith</dc:creator><comments>https://news.ycombinator.com/item?id=46940358</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46940358</guid></item><item><title><![CDATA[New comment by tunesmith in "Ask HN: What are you working on? (February 2026)"]]></title><description><![CDATA[
<p>Frustration at that kind of debate has been a large part of the motivation: how it occludes so much of what ideally should be a dialectic. I especially dislike how, if someone gets flustered, they're seen as losing.</p>
]]></description><pubDate>Mon, 09 Feb 2026 00:44:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=46940227</link><dc:creator>tunesmith</dc:creator><comments>https://news.ycombinator.com/item?id=46940227</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46940227</guid></item><item><title><![CDATA[New comment by tunesmith in "Ask HN: What are you working on? (February 2026)"]]></title><description><![CDATA[
<p>Yeah, I only just yesterday got it to the point where people can create their own arguments. I was just using it to check my own assumptions on why I have such a complicated "end-of-month finances" list of things to do. :) But I also like the idea of using it for political arguments or even fun stuff like mystery-solving.</p>
]]></description><pubDate>Mon, 09 Feb 2026 00:36:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=46940155</link><dc:creator>tunesmith</dc:creator><comments>https://news.ycombinator.com/item?id=46940155</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46940155</guid></item></channel></rss>