<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: scottfalconer</title><link>https://news.ycombinator.com/user?id=scottfalconer</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 30 Apr 2026 22:51:56 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=scottfalconer" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by scottfalconer in "[dead]"]]></title><description><![CDATA[
<p>For the last 20 years, most software processes were built around the assumption that creating software is slow and expensive.<p>That has changed.</p>
]]></description><pubDate>Fri, 13 Mar 2026 02:39:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47360076</link><dc:creator>scottfalconer</dc:creator><comments>https://news.ycombinator.com/item?id=47360076</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47360076</guid></item><item><title><![CDATA[New comment by scottfalconer in "[dead]"]]></title><description><![CDATA[
<p>I built self-learning-skills because I noticed my agents often spent time poking around and guessing at things I had already solved in previous runs. I used to manually copy-paste those fixes into future prompts or backport them into my skills.<p>This repo streamlines that workflow. It acts as a sidecar memory that:
- Stops the guessing: Records “Aha moments” locally so the agent doesn’t start from zero next time.
- Graduates knowledge: Includes a CLI workflow to backport proven memories into permanent improvements in your actual skills or docs.<p>It works with Claude Code, GitHub Copilot, Codex, and any other system that implements the <a href="https://agentskills.io" rel="nofollow">https://agentskills.io</a> specification.</p>
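<p>Roughly the shape of the sidecar, as a simplified Python sketch (not the actual repo code; the file name and the "uses" counter are made up here for illustration):</p>
<pre><code># Simplified sketch of a sidecar memory (hypothetical, not the repo's code).
import json, time
from pathlib import Path

MEMORY = Path(".agent-memory.jsonl")  # assumed location, one JSON object per line

def record_aha(lesson: str, context: str) -> None:
    """Store a solved problem so the next run doesn't rediscover it."""
    entry = {"ts": time.time(), "lesson": lesson, "context": context, "uses": 0}
    with MEMORY.open("a") as f:
        f.write(json.dumps(entry) + "\n")

def recall() -> list[dict]:
    """Everything learned so far; inject this into the agent's prompt."""
    if not MEMORY.exists():
        return []
    return [json.loads(line) for line in MEMORY.read_text().splitlines()]

def backport_candidates(min_uses: int = 3) -> list[dict]:
    """Memories reused often enough to graduate into permanent skills/docs."""
    return [m for m in recall() if m["uses"] >= min_uses]
</code></pre>
<p>The idea is that recall() stays cheap enough to run at the start of every session, while backport_candidates() surfaces what's worth promoting into your actual skills or docs.</p>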
]]></description><pubDate>Mon, 29 Dec 2025 21:20:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=46425844</link><dc:creator>scottfalconer</dc:creator><comments>https://news.ycombinator.com/item?id=46425844</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46425844</guid></item><item><title><![CDATA[New comment by scottfalconer in "OpenAI codex can now generate best of N responses"]]></title><description><![CDATA[
<p>At first pass this seems 1) incredibly useful for me and 2) incredibly expensive for them, but after using it a bit I'm thinking it might be incredibly valuable for them too: once I review and approve one of the options, they're essentially getting preference data on which of the approaches I felt was "best".<p>Thoughts from those who have used it?</p>
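<p>To make the preference-data angle concrete, here's a hypothetical sketch (generate stands in for whatever call produces a single candidate, and the jsonl logging is my own illustration, not anything OpenAI has described):</p>
<pre><code># Hypothetical sketch: best-of-N review doubles as preference-data collection.
import json
from typing import Callable

def best_of_n(prompt: str, generate: Callable[[str], str], n: int = 4) -> str:
    candidates = [generate(prompt) for _ in range(n)]
    for i, c in enumerate(candidates):
        print(f"[{i}] {c[:120]}")
    pick = int(input("which looks best? "))
    # The human choice is the valuable part: every pick implies n-1
    # (prompt, chosen, rejected) pairs, i.e. reward-model/DPO training data.
    with open("preferences.jsonl", "a") as f:
        for i, c in enumerate(candidates):
            if i != pick:
                pair = {"prompt": prompt, "chosen": candidates[pick], "rejected": c}
                f.write(json.dumps(pair) + "\n")
    return candidates[pick]
</code></pre>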
]]></description><pubDate>Sat, 14 Jun 2025 15:01:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=44276814</link><dc:creator>scottfalconer</dc:creator><comments>https://news.ycombinator.com/item?id=44276814</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44276814</guid></item><item><title><![CDATA[OpenAI codex can now generate best of N responses]]></title><description><![CDATA[
<p>Article URL: <a href="https://help.openai.com/en/articles/11428266-codex-changelog">https://help.openai.com/en/articles/11428266-codex-changelog</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44276813">https://news.ycombinator.com/item?id=44276813</a></p>
<p>Points: 2</p>
<p># Comments: 2</p>
]]></description><pubDate>Sat, 14 Jun 2025 15:01:03 +0000</pubDate><link>https://help.openai.com/en/articles/11428266-codex-changelog</link><dc:creator>scottfalconer</dc:creator><comments>https://news.ycombinator.com/item?id=44276813</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44276813</guid></item><item><title><![CDATA[Any good strategies to avoid merge conflicts when using Codex, Jules, etc.?]]></title><description><![CDATA[
<p>I tend to start tasks one at a time, let them complete, and then start the next. I know I can manually resolve the conflicts, but in a flow that's otherwise generally painless, that feels like a huge diversion.<p>I tried asking it to rebase, but that tends to error out due to resource limitations (and when I asked Jules to do it, it flat out refused).<p>My plan is to explore structuring my tasks like I would with a human dev team - targeting specific areas/files in isolation - but I was wondering if anyone else has good tips here.<p>tl;dr: I'm lazy and would prefer not to deal with merge conflicts in an otherwise great workflow.</p>
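<p>For the isolation plan, something like this rough helper is what I have in mind (hypothetical sketch; it just shells out to git diff to flag file overlap before kicking off a second task):</p>
<pre><code># Rough, hypothetical helper: flag file overlap between in-flight agent
# branches and the files a new task is planned to touch, before starting it.
import subprocess

def files_touched(branch: str, base: str = "main") -> set[str]:
    """Files changed on `branch` relative to its merge base with `base`."""
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{base}...{branch}"],
        capture_output=True, text=True, check=True,
    )
    return set(out.stdout.split())

def safe_to_start(planned_files: set[str], active_branches: list[str]) -> bool:
    """True when no in-flight branch touches a file the new task will edit."""
    for branch in active_branches:
        clash = planned_files & files_touched(branch)
        if clash:
            print(f"conflict risk with {branch}: {sorted(clash)}")
            return False
    return True

# e.g. safe_to_start({"src/parser.py"}, ["codex/task-1", "jules/task-2"])
</code></pre>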
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44225188">https://news.ycombinator.com/item?id=44225188</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 09 Jun 2025 15:00:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=44225188</link><dc:creator>scottfalconer</dc:creator><comments>https://news.ycombinator.com/item?id=44225188</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44225188</guid></item><item><title><![CDATA[Completing four development tasks while on a trail run]]></title><description><![CDATA[
<p>I tend to get my best ideas when I'm not sitting in front of a computer.<p>My general workflow was:<p>- Be out.
- Think of idea. 
- Make a note on my phone.
- Hopefully remember to look at it later. (Rarely happened)<p>but now it's:<p>- Be out. 
- Think of idea. 
- Kick off coding / creative / research agent to do whatever I’m thinking of. 
- Review when I’m home.<p>Why make a note when you can just as easily start doing the thing?<p>So today I put it to the test and decided to see how much dev work I could get done while on a run.<p>My workflow: 
Kick off an initial task, head out on the trails, and whenever I hit a shady spot, check the tasks, merge the ones with passing tests, and start new tasks as needed.<p>End results:<p>- ~5 miles through the Boise foothills.
- ~550 ft elevation gain.
- 7 development tasks kicked off.
- 4 pull requests reviewed and merged.<p>Development tasks initiated, developed, and merged while on the run:<p>https://github.com/scottfalconer/compact-memory/pull/399<p>https://github.com/scottfalconer/compact-memory/pull/400<p>https://github.com/scottfalconer/compact-memory/pull/401<p>https://github.com/scottfalconer/compact-memory/pull/402<p>Strava map:<p>https://strava.app.link/e83SL3bz2Tb</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44219172">https://news.ycombinator.com/item?id=44219172</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Sun, 08 Jun 2025 19:59:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=44219172</link><dc:creator>scottfalconer</dc:creator><comments>https://news.ycombinator.com/item?id=44219172</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44219172</guid></item><item><title><![CDATA[New comment by scottfalconer in "A Research Preview of Codex"]]></title><description><![CDATA[
<p>Next week: OpenAI rebrands Windsurf as Codex.</p>
]]></description><pubDate>Fri, 16 May 2025 17:04:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=44007680</link><dc:creator>scottfalconer</dc:creator><comments>https://news.ycombinator.com/item?id=44007680</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44007680</guid></item><item><title><![CDATA[New comment by scottfalconer in "AI Is Like a Crappy Consultant"]]></title><description><![CDATA[
<p>A good manager can make a less-than-ideal contributor highly effective with the right guidance and feedback. Applies to AI as well.</p>
]]></description><pubDate>Tue, 13 May 2025 13:33:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=43972791</link><dc:creator>scottfalconer</dc:creator><comments>https://news.ycombinator.com/item?id=43972791</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43972791</guid></item><item><title><![CDATA[New comment by scottfalconer in "Ask HN: What are you working on? (April 2025)"]]></title><description><![CDATA[
<p>A minimalist, text-based memory system designed to naturally store and recall important events. It emphasizes simplicity, portability, and human-friendly structure by using six optional fields: who, what, when, where, how, and thing. These fields capture factual context clearly, deferring interpretation for later use or analysis.<p><a href="https://github.com/scottfalconer/vibedb">https://github.com/scottfalconer/vibedb</a><p>I still have no idea if it's a good idea or a bad idea but it's been fun to think through.</p>
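<p>Roughly what a record looks like (an illustrative sketch only; the field names come from the description above, but the rest may differ from the repo):</p>
<pre><code># Illustrative sketch of the six-field record (actual vibedb format may differ).
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class Memory:
    who: Optional[str] = None
    what: Optional[str] = None
    when: Optional[str] = None
    where: Optional[str] = None
    how: Optional[str] = None
    thing: Optional[str] = None

    def to_text(self) -> str:
        """Render only the fields that were filled in, one per line."""
        return "\n".join(f"{k}: {v}" for k, v in asdict(self).items() if v)

m = Memory(who="alice", what="fixed the login bug", when="2025-04-28")
print(m.to_text())  # prints three lines: who, what, when
</code></pre>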
]]></description><pubDate>Mon, 28 Apr 2025 17:08:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=43823657</link><dc:creator>scottfalconer</dc:creator><comments>https://news.ycombinator.com/item?id=43823657</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43823657</guid></item><item><title><![CDATA[New comment by scottfalconer in "Not worried about AI taking our jobs; worried it won't take our current jobs"]]></title><description><![CDATA[
<p>Happy to clarify. The parent comment argues against a macroeconomic utopia I never proposed.
My post was about individual-level gains: using AI to automate routine work, offload mental load, and free up time to think, create, or just live.<p>I’m not claiming to fix the global economy, nor denying real risks like job loss or scarcity.
Labeling me a "summer child" assumes I am naive about those challenges... another projection.<p>In short, I described a practical benefit available today, not a perfect future.
A thoughtful reply would engage with those points instead of refuting a position I never took.</p>
]]></description><pubDate>Sun, 27 Apr 2025 21:18:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=43815217</link><dc:creator>scottfalconer</dc:creator><comments>https://news.ycombinator.com/item?id=43815217</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43815217</guid></item><item><title><![CDATA[New comment by scottfalconer in "Not worried about AI taking our jobs; worried it won't take our current jobs"]]></title><description><![CDATA[
<p>I believe there's still plenty of margin to capture beyond merely overseeing AI. Could we reach a point where humans add no marginal utility? Maybe. I hope not, but we can't discount the possibility.</p>
]]></description><pubDate>Sun, 27 Apr 2025 20:02:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=43814673</link><dc:creator>scottfalconer</dc:creator><comments>https://news.ycombinator.com/item?id=43814673</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43814673</guid></item><item><title><![CDATA[New comment by scottfalconer in "Not worried about AI taking our jobs; worried it won't take our current jobs"]]></title><description><![CDATA[
<p>Once again you make an incorrect assumption and then build an argument against it. Good luck.</p>
]]></description><pubDate>Sun, 27 Apr 2025 19:57:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=43814635</link><dc:creator>scottfalconer</dc:creator><comments>https://news.ycombinator.com/item?id=43814635</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43814635</guid></item><item><title><![CDATA[New comment by scottfalconer in "Not worried about AI taking our jobs; worried it won't take our current jobs"]]></title><description><![CDATA[
<p>What are the false things that I believe in? It seems like you're making some pretty big assumptions about how I see the world.</p>
]]></description><pubDate>Sun, 27 Apr 2025 19:16:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=43814379</link><dc:creator>scottfalconer</dc:creator><comments>https://news.ycombinator.com/item?id=43814379</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43814379</guid></item><item><title><![CDATA[New comment by scottfalconer in "Not worried about AI taking our jobs; worried it won't take our current jobs"]]></title><description><![CDATA[
<p>Totally hear you. Reinvesting saved time in higher-value / more interesting work doesn’t remove the structural-equity risk, which arguably might be the most challenging problem to solve.</p>
]]></description><pubDate>Sun, 27 Apr 2025 19:09:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=43814332</link><dc:creator>scottfalconer</dc:creator><comments>https://news.ycombinator.com/item?id=43814332</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43814332</guid></item><item><title><![CDATA[New comment by scottfalconer in "Not worried about AI taking our jobs; worried it won't take our current jobs"]]></title><description><![CDATA[
<p>100% agree. Workflow automation is the easy part...a system that leads to fair value distribution is a whole other issue.</p>
]]></description><pubDate>Sun, 27 Apr 2025 19:01:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=43814294</link><dc:creator>scottfalconer</dc:creator><comments>https://news.ycombinator.com/item?id=43814294</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43814294</guid></item><item><title><![CDATA[Not worried about AI taking our jobs; worried it won't take our current jobs]]></title><description><![CDATA[
<p>I want us to plan, strategize, review, and set AI tools to auto. While they work, we're free to be human - thinking, creating, living. Agree, disagree?</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=43813256">https://news.ycombinator.com/item?id=43813256</a></p>
<p>Points: 13</p>
<p># Comments: 18</p>
]]></description><pubDate>Sun, 27 Apr 2025 16:51:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=43813256</link><dc:creator>scottfalconer</dc:creator><comments>https://news.ycombinator.com/item?id=43813256</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43813256</guid></item><item><title><![CDATA[New comment by scottfalconer in "Lessons Learned Writing a Book Collaboratively with LLMs"]]></title><description><![CDATA[
<p>In the end there are plenty of stories, but they're ones that are relevant. The story the LLM gave feedback on was about flipping a raft on the Grand Canyon; the LLM's advice was that it felt unrelated to the point I was trying to make. That made me realize I had included it more because I wanted to talk about rafting the Grand Canyon than because it was useful and entertaining to readers.</p>
]]></description><pubDate>Fri, 25 Apr 2025 11:39:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=43792543</link><dc:creator>scottfalconer</dc:creator><comments>https://news.ycombinator.com/item?id=43792543</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43792543</guid></item><item><title><![CDATA[New comment by scottfalconer in "Lessons Learned Writing a Book Collaboratively with LLMs"]]></title><description><![CDATA[
<p>I think it was faster in the sense that I would never have written the book without the LLMs. Essentially they unlocked the swirl of thoughts and notes that lived somewhere between my head, TextEdit, emails to myself, and anywhere else I stashed things.<p>It's like it unblocked the "hard part" (getting the words into a coherent form for others) while letting me focus on the "value parts" (my unique perspective and ideas).<p>It may not have saved me time overall, but it made the process a hell of a lot more fun, so in the end I completed it - and maybe AI helping us see things through to completion is where we'll see a big unblock in human potential.</p>
]]></description><pubDate>Thu, 24 Apr 2025 15:25:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=43783919</link><dc:creator>scottfalconer</dc:creator><comments>https://news.ycombinator.com/item?id=43783919</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43783919</guid></item><item><title><![CDATA[New comment by scottfalconer in "Ask HN: Is politeness towards LLMs good training data, or just expensive noise?"]]></title><description><![CDATA[
<p>Even in that case it likely depends on what you're measuring for waste. Is it wasted electricity, or is it wasted productivity/opportunity time while you wait for your machine to boot up?</p>
]]></description><pubDate>Thu, 24 Apr 2025 15:06:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=43783723</link><dc:creator>scottfalconer</dc:creator><comments>https://news.ycombinator.com/item?id=43783723</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43783723</guid></item><item><title><![CDATA[New comment by scottfalconer in "Ask HN: Is politeness towards LLMs good training data, or just expensive noise?"]]></title><description><![CDATA[
<p>The email is a good callout; chat would feel the same. What's interesting is the nuance in those channels, i.e. someone saying "hi" by itself in a work chat seems rude to me... just get to the point. But flip that to a real conversation, and it'd feel rude without a greeting.</p>
]]></description><pubDate>Thu, 24 Apr 2025 00:09:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=43778046</link><dc:creator>scottfalconer</dc:creator><comments>https://news.ycombinator.com/item?id=43778046</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43778046</guid></item></channel></rss>