<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: digikata</title><link>https://news.ycombinator.com/user?id=digikata</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 15 Apr 2026 09:30:15 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=digikata" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by digikata in "Ask HN: What dev tools do you rely on that nobody talks about?"]]></title><description><![CDATA[
<p>Last I checked (and things might have changed), atuin runs a full PostgreSQL database to store and sync the history. mcfly is lighter, but it also has a narrower feature scope.</p>
]]></description><pubDate>Mon, 13 Apr 2026 11:56:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47750732</link><dc:creator>digikata</dc:creator><comments>https://news.ycombinator.com/item?id=47750732</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47750732</guid></item><item><title><![CDATA[New comment by digikata in "Ask HN: What dev tools do you rely on that nobody talks about?"]]></title><description><![CDATA[
<p>You might be interested in:<p><a href="https://github.com/cantino/mcfly" rel="nofollow">https://github.com/cantino/mcfly</a> - fuzzy shell history (feels lighter than atuin to me; in Rust)<p><a href="https://github.com/watchexec/watchexec" rel="nofollow">https://github.com/watchexec/watchexec</a> - rerun a command on file change; knows about .gitignore/.ignore etc. (in Rust)<p><a href="https://github.com/jonas/tig" rel="nofollow">https://github.com/jonas/tig</a> - instead of lazygit; mostly for easier git log viewing, as I use straight git most of the time<p>Otherwise, a lot of crossover with what I use too.</p>
]]></description><pubDate>Thu, 02 Apr 2026 11:34:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47613019</link><dc:creator>digikata</dc:creator><comments>https://news.ycombinator.com/item?id=47613019</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47613019</guid></item><item><title><![CDATA[New comment by digikata in "Go hard on agents, not on your filesystem"]]></title><description><![CDATA[
<p>One could run Claude Code in a Docker container with a bind mount to the project directory. I do that, but I also run my Docker daemon/containers in a Linux VM.</p>
]]></description><pubDate>Sat, 28 Mar 2026 10:22:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47553262</link><dc:creator>digikata</dc:creator><comments>https://news.ycombinator.com/item?id=47553262</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47553262</guid></item><item><title><![CDATA[New comment by digikata in "Ask HN: what’s your favorite line in your Claude/agents.md files?"]]></title><description><![CDATA[
<p>When writing tables in markdown files, text-align the data in each column so the table stays readable in a plain text editor.</p>
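The kind of alignment meant here can be sketched in a few lines of Python (a minimal illustration; the table contents are made up):

```python
# Pad each markdown table column to its widest cell so the rendered source
# stays aligned in a plain text editor.
rows = [
    ["tool", "language", "purpose"],
    ["mcfly", "Rust", "fuzzy shell history"],
    ["tig", "C", "git log viewer"],
]
widths = [max(len(r[i]) for r in rows) for i in range(len(rows[0]))]

def fmt(row):
    return "| " + " | ".join(c.ljust(w) for c, w in zip(row, widths)) + " |"

# Header, separator row, then data rows, all padded to the same widths.
lines = [fmt(rows[0]), fmt(["-" * w for w in widths])] + [fmt(r) for r in rows[1:]]
print("\n".join(lines))
```

Every emitted line ends up the same length, so the pipes line up vertically.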
]]></description><pubDate>Sun, 22 Mar 2026 07:21:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47475231</link><dc:creator>digikata</dc:creator><comments>https://news.ycombinator.com/item?id=47475231</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47475231</guid></item><item><title><![CDATA[New comment by digikata in "Node.js needs a virtual file system"]]></title><description><![CDATA[
<p>Go slow to go fast. Breaking up the PR this way also allows humans and AI alike to understand the codebase later. Slowing down the PR process with standards lets the project move faster overall.<p>If a bug slips past review, having the PR broken down semantically allows quicker analysis and recovery later, for one case. Even if you have an AI reviewing new Node.js releases to decide whether to take in a new version, the commit log will be more analyzable by the AI with semantic commits.<p>Treating code as throwaway is valid in a few small contexts, but that is not the case for PRs going into maintained projects like Node.js.</p>
]]></description><pubDate>Tue, 17 Mar 2026 17:06:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47415436</link><dc:creator>digikata</dc:creator><comments>https://news.ycombinator.com/item?id=47415436</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47415436</guid></item><item><title><![CDATA[New comment by digikata in "Node.js needs a virtual file system"]]></title><description><![CDATA[
<p>Large PRs could follow the practices of the Linux kernel dev lists. Large subsystem changes are sometimes carried separately by the submitter for a while, for testing and maintenance, before being accepted in principle, reviewed, and, if ready, merged.<p>While those large code changes were being maintained, they were often split into a set of semantically meaningful commits for purposes of review and maintenance.<p>With AI blowing up the line counts on PRs, this is a skill set that more developers need to mature. It's good for their own review to take the mass of changes, ask themselves how they would want to systematically review it in parts, then split the PR into meaningful commits: e.g. interfaces, docs, subsets of the changed implementation, etc.</p>
]]></description><pubDate>Tue, 17 Mar 2026 16:33:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=47414965</link><dc:creator>digikata</dc:creator><comments>https://news.ycombinator.com/item?id=47414965</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47414965</guid></item><item><title><![CDATA[New comment by digikata in "PeppyOS: A simpler alternative to ROS 2 (now with containers support)"]]></title><description><![CDATA[
<p>Crash? The software, or physically? 200 Hz as a minimum control loop rate seems on the fast side as a general default, but it all depends on the control environment - and I may be biased, as I've done a lot more bare-silicon control work than ROS.</p>
]]></description><pubDate>Wed, 11 Mar 2026 13:32:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47335351</link><dc:creator>digikata</dc:creator><comments>https://news.ycombinator.com/item?id=47335351</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47335351</guid></item><item><title><![CDATA[New comment by digikata in "AI doesn’t reduce work, it intensifies it"]]></title><description><![CDATA[
<p>A couple of historical notes come to mind.<p>When washing machines were introduced, the number of hours spent on the chore of laundry did not necessarily decrease until some 40 years after their introduction.<p>When project management software was introduced, it made the task of managing project tasks easier. One could create an order of magnitude or more of detailed plans in the same amount of time - poorly used, this decreased the odds of project success by eating up everyone's time. And the software itself has not moved the needle on the project success factors of completing within the planned budget, time, and resources.</p>
]]></description><pubDate>Tue, 10 Feb 2026 11:36:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=46958347</link><dc:creator>digikata</dc:creator><comments>https://news.ycombinator.com/item?id=46958347</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46958347</guid></item><item><title><![CDATA[New comment by digikata in "Ask HN: Who wants to be hired? (February 2026)"]]></title><description><![CDATA[
<p>Location: Portugal<p>Remote: Yes<p>Willing to relocate: No<p>Technologies: Rust, Python, C/C++, TypeScript, LLM APIs, Distributed Systems, Embedded Systems, DevOps, Linux Kernel<p>Resume: <a href="https://uplinklabs.com" rel="nofollow">https://uplinklabs.com</a><p>Email: alan@uplinklabs.com<p>Hands-on builder, fractional CTO/architect. 25+ years of US tech experience. Full stack, with data-intensive backend experience. Multi-domain expertise: 0 -> 1 startup stacks, AI prototype cleanup for production, cloud, storage, embedded, autonomous vehicles, regulated industries. Problem solver using both tech and team leadership skills. Open to fractional and contract opportunities. US B2B invoicing available.</p>
]]></description><pubDate>Tue, 03 Feb 2026 12:00:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=46869904</link><dc:creator>digikata</dc:creator><comments>https://news.ycombinator.com/item?id=46869904</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46869904</guid></item><item><title><![CDATA[New comment by digikata in "Ask HN: How do you find the "why" behind old code decisions?"]]></title><description><![CDATA[
<p>The easiest approach is to add short notes in comments, with longer explanations in some sort of document referenced from the comments.<p>Lightweight ADRs are a good recommendation. I've put similar practices into place with teams I've worked with, though I prefer the term "Technical Memo", of which some contain architectural decisions. Retroactive documentation is a little misaligned with the term ADR, in that it isn't really making any sort of decision. I've found that the term ADR sometimes makes team members hesitant to record the information because of that misalignment.<p>As for retroactively discovering why: code-archaeology skills in the form of git blame and git log, plus general search skills, are very helpful.</p>
]]></description><pubDate>Fri, 23 Jan 2026 09:58:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=46730568</link><dc:creator>digikata</dc:creator><comments>https://news.ycombinator.com/item?id=46730568</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46730568</guid></item><item><title><![CDATA[New comment by digikata in "TimeCapsuleLLM: LLM trained only on data from 1800-1875"]]></title><description><![CDATA[
<p>A fun use of this kind of approach would be to see whether conversational game NPCs could be generated that stick to the lore of the game and their character.</p>
]]></description><pubDate>Tue, 13 Jan 2026 10:35:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=46599293</link><dc:creator>digikata</dc:creator><comments>https://news.ycombinator.com/item?id=46599293</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46599293</guid></item><item><title><![CDATA[New comment by digikata in "“Erdos problem #728 was solved more or less autonomously by AI”"]]></title><description><![CDATA[
<p>To borrow some definitions from systems engineering for verification and validation, this question is one of validation. Verification is performed by Lean's spec syntax and logic enforcement. But validation asks whether the Lean spec encodes a true representation of the problem statement (was the right thing specced?). Validation at the highest level is probably an irreplaceable human activity.<p>Also, on the verification side, there is a window of failure where Lean itself has a hidden bug. And with automated systems that seek correctness, the odds are slightly elevated that some missed crack of a bug gets exploited in the dev-check-dev loop run by the AI.</p>
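A toy illustration of the verification/validation split (this has nothing to do with Lean itself; all names are invented for the example): an implementation can fully satisfy a spec as written while the spec fails to capture the real intent.

```python
# "Verification": does the implementation satisfy the spec as written?
# "Validation": is the spec itself the right statement of the problem?

def sort_impl(xs):
    # Wrong implementation: returns the input unchanged.
    return list(xs)

def spec_holds(inp, out):
    # Under-specified "sort" spec: only requires length preservation.
    return len(out) == len(inp)

inp = [3, 1, 2]
out = sort_impl(inp)
assert spec_holds(inp, out)   # verification passes: impl meets the spec
assert out != sorted(inp)     # validation gap: spec never required sortedness
```

The checker can be perfect and the proof airtight; if the spec encodes the wrong predicate, the guarantee is about the wrong thing.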
]]></description><pubDate>Sat, 10 Jan 2026 11:40:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=46564876</link><dc:creator>digikata</dc:creator><comments>https://news.ycombinator.com/item?id=46564876</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46564876</guid></item><item><title><![CDATA[New comment by digikata in "What Does a Database for SSDs Look Like?"]]></title><description><![CDATA[
<p>I would guess by now none have that internally. As a rule of thumb, every major flash density increase (SLC, TLC, QLC) has also tended to double the internal page size. There were also internal transfer-performance reasons for large sizes. Low-level 16k-64k flash "pages" are common, sometimes with even larger stripes of pages due to the internal firmware sw/hw design.</p>
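A quick back-of-the-envelope of why this matters to a database (illustrative numbers, not from any particular datasheet):

```python
# If the database issues 4 KiB writes but the flash programs 16 KiB pages,
# each sub-page update implies touching a whole page internally (read-modify-
# write or remapping), a rough floor on write amplification.
logical_write = 4 * 1024    # database I/O unit (assumed)
flash_page = 16 * 1024      # low-level flash page (assumed)
write_amplification = flash_page / logical_write
print(write_amplification)  # 4.0
```

Real FTLs buffer and coalesce writes, so the observed amplification is usually lower; the point is just that the ratio of page size to write size sets the worst case.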
]]></description><pubDate>Sat, 20 Dec 2025 11:53:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=46335511</link><dc:creator>digikata</dc:creator><comments>https://news.ycombinator.com/item?id=46335511</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46335511</guid></item><item><title><![CDATA[New comment by digikata in "MinIO is now in maintenance-mode"]]></title><description><![CDATA[
<p>Garage is really good for core S3; the only thing I ran into was that it didn't support object tagging. That's arguably a more esoteric corner of the S3 API, though minio does support it. If you're just standing Garage in for a test API, object tagging is most likely an unneeded feature anyway.<p>It's listed as a "Misc" endpoint in the Garage docs here:
<a href="https://garagehq.deuxfleurs.fr/documentation/reference-manual/s3-compatibility/" rel="nofollow">https://garagehq.deuxfleurs.fr/documentation/reference-manua...</a></p>
]]></description><pubDate>Wed, 03 Dec 2025 17:14:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=46137108</link><dc:creator>digikata</dc:creator><comments>https://news.ycombinator.com/item?id=46137108</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46137108</guid></item><item><title><![CDATA[New comment by digikata in "MinIO stops distributing free Docker images"]]></title><description><![CDATA[
<p>Incidentally, there is an open-source S3 project in Rust that I have been following. About a year ago I used Garage images to replace some minio instances in CI pipelines - lighter weight and faster to come up.<p><a href="https://github.com/deuxfleurs-org/garage" rel="nofollow">https://github.com/deuxfleurs-org/garage</a></p>
]]></description><pubDate>Wed, 22 Oct 2025 22:31:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=45676003</link><dc:creator>digikata</dc:creator><comments>https://news.ycombinator.com/item?id=45676003</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45676003</guid></item><item><title><![CDATA[New comment by digikata in "Show HN: FeOx – Fast embedded KV store in Rust"]]></title><description><![CDATA[
<p>This seems to be around the durability that most databases can reach. Aside from more specialized hardware arrangements, with a single-computer embedded database there is always a window of data loss. The durability expectation is that some in-flight window of data will be lost, but on restart the database should recover to a consistent state of the last settled operation if at all possible.<p>A related question is whether the code base is mature enough, when configured for higher durability, to work as intended. Even with Rust, there needs to be some hard systems testing, and it's often not just a matter of sprinkling flushes around. Further optimization can try to close the window tighter - maybe with a transaction log - but then you obviously trade some speed for it.</p>
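A minimal sketch of the trade-off being described, assuming a POSIX-style filesystem (the `durable_append` helper is hypothetical, not from FeOx):

```python
import os
import tempfile

def durable_append(path, record: bytes):
    # A durable append must reach stable storage before acknowledging; this
    # is exactly where the speed cost lives.
    with open(path, "ab") as f:
        f.write(record)
        f.flush()             # push user-space buffers to the kernel
        os.fsync(f.fileno())  # ask the kernel to push to the device
        # Note: even fsync has caveats (device write caches, filesystem
        # semantics); hard systems testing is how you find out what survives.

path = os.path.join(tempfile.mkdtemp(), "wal.log")
durable_append(path, b"op1\n")
durable_append(path, b"op2\n")
```

Skipping the fsync makes appends much faster but reopens the in-flight loss window to everything still sitting in kernel buffers.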
]]></description><pubDate>Fri, 22 Aug 2025 20:36:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=44989509</link><dc:creator>digikata</dc:creator><comments>https://news.ycombinator.com/item?id=44989509</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44989509</guid></item><item><title><![CDATA[New comment by digikata in "Using Podman, Compose and BuildKit"]]></title><description><![CDATA[
<p>On Linux I'm using Colima with docker compose and buildx, and it seems to work OK for my limited cases.<p>On Mac it works OK too, but there are networking cases that Colima on Mac doesn't handle - so I use OrbStack there.</p>
]]></description><pubDate>Thu, 21 Aug 2025 17:43:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=44975760</link><dc:creator>digikata</dc:creator><comments>https://news.ycombinator.com/item?id=44975760</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44975760</guid></item><item><title><![CDATA[New comment by digikata in "Who Invented Backpropagation?"]]></title><description><![CDATA[
<p>There are large bodies of work on optimization in state-space control theory that I strongly suspect have a lot of crossover with AI, and at least have very similar mathematical structure.<p>e.g. optimizing state-space control coefficients looks something like training an LLM matrix...</p>
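A toy example of the structural similarity (purely illustrative, all numbers made up): fitting a single state-transition coefficient by gradient descent on squared error has the same shape as a training step on a 1x1 weight matrix.

```python
# "True" system: x_{t+1} = 0.7 * x_t. Generate a noiseless trajectory.
true_a = 0.7
xs = [1.0]
for _ in range(10):
    xs.append(true_a * xs[-1])

# Estimate the coefficient by gradient descent on sum (a*x_t - x_{t+1})^2,
# i.e. the same loss/update structure as training a scalar weight.
a_hat, lr = 0.0, 0.1
for _ in range(500):
    grad = sum(2 * (a_hat * xs[t] - xs[t + 1]) * xs[t] for t in range(10))
    a_hat -= lr * grad
print(round(a_hat, 3))  # 0.7
```

Swap the scalar for matrices and the trajectory for token sequences and the bookkeeping changes, but the loss-gradient-update loop is recognizably the same object.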
]]></description><pubDate>Mon, 18 Aug 2025 17:04:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=44942895</link><dc:creator>digikata</dc:creator><comments>https://news.ycombinator.com/item?id=44942895</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44942895</guid></item><item><title><![CDATA[New comment by digikata in "The future of large files in Git is Git"]]></title><description><![CDATA[
<p>I don't know - it's the probability distribution of the dataset that makes one optimization strategy better than another. Git-annex, IIRC, does file-level dedupe. That would take care of most of the problem if you're storing binaries that are compressed or encrypted. It's a lot of work to go beyond that, which is probably one reason no one has bothered for git yet. But borg and restic both do chunked dedupe, I think.</p>
]]></description><pubDate>Fri, 15 Aug 2025 23:18:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=44918380</link><dc:creator>digikata</dc:creator><comments>https://news.ycombinator.com/item?id=44918380</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44918380</guid></item><item><title><![CDATA[New comment by digikata in "The future of large files in Git is Git"]]></title><description><![CDATA[
<p>I would think that git would need a parallel storage scheme for binaries: something that does binary chunking and deduplication between revisions, but keeps the same Merkle referencing scheme as everything else.</p>
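A minimal sketch of what such a scheme could look like, using fixed-size chunks for brevity (tools like borg use content-defined chunk boundaries instead; all names here are hypothetical):

```python
import hashlib

CHUNK = 4       # toy chunk size; real systems use KiB-to-MiB chunks
store = {}      # content-addressed store: hash -> chunk bytes

def add_revision(data: bytes):
    # Split the blob into chunks, store each unique chunk once by hash,
    # and represent the revision as an ordered list of chunk hashes
    # (the same Merkle-style referencing as the rest of the object store).
    refs = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        h = hashlib.sha256(chunk).hexdigest()
        store.setdefault(h, chunk)  # dedupe across revisions
        refs.append(h)
    return refs

v1 = add_revision(b"AAAABBBBCCCC")
v2 = add_revision(b"AAAABBBBDDDD")  # shares its first two chunks with v1
print(len(store))  # 4 unique chunks stored, though 6 are referenced
```

The second revision only adds one new chunk to the store; unchanged regions of the binary are shared by reference between revisions.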
]]></description><pubDate>Fri, 15 Aug 2025 22:52:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=44918192</link><dc:creator>digikata</dc:creator><comments>https://news.ycombinator.com/item?id=44918192</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44918192</guid></item></channel></rss>