<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: hxtk</title><link>https://news.ycombinator.com/user?id=hxtk</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 11 Apr 2026 11:39:34 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=hxtk" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by hxtk in "Lichess and Take Take Take Sign Cooperation Agreement"]]></title><description><![CDATA[
<p>Cloud is more cost effective the less of it you use: it doesn’t cost 3x more to maintain a Kubernetes cluster with three times the nodes, but it does cost 3x more to rent one. This is even more true for serverless.<p>I can imagine a lot of small apps buy into serverless at a time when it’s legitimately the most cost-effective solution and then get stuck, because serverless platforms are easy to lock yourself into.</p>
]]></description><pubDate>Thu, 09 Apr 2026 18:52:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47708029</link><dc:creator>hxtk</dc:creator><comments>https://news.ycombinator.com/item?id=47708029</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47708029</guid></item><item><title><![CDATA[New comment by hxtk in "Ghidra by NSA"]]></title><description><![CDATA[
<p>The Nightmare Course [1], so named because someone with that skillset (developing zero-days) is a nightmare for security, not because the course itself is a nightmare, and Roppers Academy [2] are both good for learning how to reverse engineer software and look for vulnerabilities.<p>The Nightmare Course explicitly covers how to use Ghidra.<p>1: <a href="https://guyinatuxedo.github.io" rel="nofollow">https://guyinatuxedo.github.io</a>
2: <a href="https://www.roppers.org" rel="nofollow">https://www.roppers.org</a></p>
]]></description><pubDate>Mon, 16 Feb 2026 18:37:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47038485</link><dc:creator>hxtk</dc:creator><comments>https://news.ycombinator.com/item?id=47038485</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47038485</guid></item><item><title><![CDATA[New comment by hxtk in "Zig – io_uring and Grand Central Dispatch std.Io implementations landed"]]></title><description><![CDATA[
<p>When I look at the historical cases, they seem different from a language doing this today. If I’m a programmer in the ’60s wanting async in my “low-level language,” what I actually want is to make some of the highest-level languages available at the time even more high-level in their IO abstractions. As I understand it, C was a high-level language when it was invented, as opposed to assembly with macros. The people wanting to add async were extending the state of the art in high-level abstraction.<p>A language doing it today is doing it in the context of an ecosystem where even higher-level languages exist, and it has made the choice to target a lower level of abstraction.</p>
]]></description><pubDate>Sat, 14 Feb 2026 19:01:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=47017252</link><dc:creator>hxtk</dc:creator><comments>https://news.ycombinator.com/item?id=47017252</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47017252</guid></item><item><title><![CDATA[New comment by hxtk in "Zig – io_uring and Grand Central Dispatch std.Io implementations landed"]]></title><description><![CDATA[
<p>It’s surprising to me how much people seem to want async in low level languages. Async is very nice in Go, but the reason I reach for a language like Zig is to explicitly control those things. I’m happily writing a Zig project right now using libxev as my io_uring abstraction.</p>
]]></description><pubDate>Sat, 14 Feb 2026 13:26:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47014362</link><dc:creator>hxtk</dc:creator><comments>https://news.ycombinator.com/item?id=47014362</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47014362</guid></item><item><title><![CDATA[New comment by hxtk in "An AI agent published a hit piece on me"]]></title><description><![CDATA[
<p>Apparently there are lots of people who signed up just to check it out but never actually added a mechanism to get paid, signaling no intent to actually be "hired" on the service.</p>
]]></description><pubDate>Fri, 13 Feb 2026 14:51:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47003370</link><dc:creator>hxtk</dc:creator><comments>https://news.ycombinator.com/item?id=47003370</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47003370</guid></item><item><title><![CDATA[New comment by hxtk in "Text classification with Python 3.14's ZSTD module"]]></title><description><![CDATA[
<p>I've actually been experimenting with that lately. I did a really naive version that tokenizes the input, feeds the max context window up to the token being encoded into an LLM, uses that to produce a distribution of likely next tokens, and then encodes the actual token with Huffman coding under the LLM's estimated distribution. I could almost certainly get better results with arithmetic coding.<p>It outperforms zstd by a long shot (I haven't dedicated the compute horsepower to figuring out what "a long shot" means quantitatively with reasonably small confidence intervals) on natural language, like Wikipedia articles or markdown documents, but (using GPT-2) it's about as good as zstd or worse on things like files in the Kubernetes source repository.<p>In some cases you already get a significant amount of compression just from the tokenization ("The quick red fox jumps over the lazy brown dog." encodes to one token per word plus one token for the '.' with the GPT-2 tokenizer), whereas with code a lot of your tokens represent a single character, so the entropy coding is doing all the work, which means your compression is only as good as the accuracy of your LLM plus the efficiency of your entropy coding.<p>I would need to encode multiple tokens per Huffman codeword to approach the entropy bound, since Huffman coding spends at least one bit per symbol, so if tokens are mostly single bytes I can't do better than a 12.5% compression ratio with one token per codeword. And doing otherwise gets computationally infeasible very fast. Arithmetic coding would do much better, especially on code, because it can encode a symbol with fractional bits.<p>I used Huffman coding for my first attempt because it's easier to implement and most libraries don't support dynamically updating the distribution throughout the process.</p>
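To make the Huffman-floor point concrete, here is a tiny back-of-the-envelope sketch (the 0.97 probability is an invented example, not a measured GPT-2 number):

```python
import math

# Invented example: the model is very confident about the next token.
p_true = 0.97

# Huffman coding assigns whole-bit codewords, so even a 97%-likely
# symbol costs at least 1 bit.
huffman_bits = 1.0

# Arithmetic coding can spend the information content, -log2(p),
# which here is a small fraction of a bit (~0.044 bits).
arithmetic_bits = -math.log2(p_true)

# If tokens are mostly single bytes (8 bits), Huffman's 1-bit floor
# caps the achievable compression ratio at 1/8 = 12.5%.
bits_per_raw_token = 8
best_huffman_ratio = huffman_bits / bits_per_raw_token

print(round(arithmetic_bits, 3), best_huffman_ratio)
```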
]]></description><pubDate>Thu, 12 Feb 2026 04:57:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=46985039</link><dc:creator>hxtk</dc:creator><comments>https://news.ycombinator.com/item?id=46985039</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46985039</guid></item><item><title><![CDATA[New comment by hxtk in "End of an era for me: no more self-hosted git"]]></title><description><![CDATA[
<p>Even that functions as a sort of proof of work, requiring a commitment of compute resources that is table stakes for individual users but multiplies the cost of making millions of requests.</p>
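The same economics drive hashcash-style proof of work; a minimal sketch of the asymmetry (purely an illustration, not what any site actually deploys):

```python
import hashlib

def solve(challenge: str, difficulty_bits: int) -> int:
    """Grind nonces until sha256(challenge:nonce) falls below a target.

    Expected cost for the client: roughly 2**difficulty_bits hashes.
    """
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(challenge: str, nonce: int, difficulty_bits: int) -> bool:
    """Checking a solution costs the server a single hash."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

# A trivial difficulty so the demo runs instantly; the point is that a
# scraper making millions of requests pays the solve cost millions of
# times, while a human user pays it once per page load.
nonce = solve("example-challenge", 8)
print(verify("example-challenge", nonce, 8))  # True
```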
]]></description><pubDate>Wed, 11 Feb 2026 22:04:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=46981821</link><dc:creator>hxtk</dc:creator><comments>https://news.ycombinator.com/item?id=46981821</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46981821</guid></item><item><title><![CDATA[New comment by hxtk in "Bun v1.3.9"]]></title><description><![CDATA[
<p>It’s gotten easier of late because Bazel modules are nice and Gazelle has started supporting plugins, so it can do build file generation for other languages.<p>I don’t like generative AI for rote tasks like this, but I’ve had good luck using generative AI to write deterministic code generators that I can commit to a project and reuse.</p>
]]></description><pubDate>Mon, 09 Feb 2026 02:50:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=46941042</link><dc:creator>hxtk</dc:creator><comments>https://news.ycombinator.com/item?id=46941042</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46941042</guid></item><item><title><![CDATA[New comment by hxtk in "Bun v1.3.9"]]></title><description><![CDATA[
<p>I’m not a fan of generative AI for this use case, because it’s rote enough to do deterministically, and deterministic code generation is getting better and better.<p>Gazelle, the BUILD file generator for Go, now supports plugins, and several other languages have Gazelle plugins.<p>I’ve used AI to generate BUILD file generators before, though. I had good luck getting it to write a script that would analyze a Java project with circular dependencies and aggregate the cycle participants into a single target.</p>
]]></description><pubDate>Mon, 09 Feb 2026 02:47:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=46941028</link><dc:creator>hxtk</dc:creator><comments>https://news.ycombinator.com/item?id=46941028</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46941028</guid></item><item><title><![CDATA[New comment by hxtk in "Speed up responses with fast mode"]]></title><description><![CDATA[
<p>I’ve been experimenting with this today. I still don’t think AI is a very good use of my programming time… but it’s a pretty good use of my non-programming time.<p>I ran OpenCode with some 30B local models today and it got some useful stuff done while I was doing my budget, folding laundry, etc.<p>It’s less likely to “one shot” apples to apples compared to the big cloud models; Gemini 3 Pro can one shot reasonably complex coding problems through the chat interface. But through the agent interface where it can run tests, linters, etc. it does a pretty good job for the size of task I find reasonable to outsource to AI.<p>This is with a high end but not specifically AI-focused desktop that I mostly built with VMs, code compilation tasks, and gaming in mind some three years ago.</p>
]]></description><pubDate>Sun, 08 Feb 2026 07:30:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=46932147</link><dc:creator>hxtk</dc:creator><comments>https://news.ycombinator.com/item?id=46932147</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46932147</guid></item><item><title><![CDATA[New comment by hxtk in "The Jeff Dean Facts"]]></title><description><![CDATA[
<p>Even with the ellipsized link I knew you were talking about one of a few things because the link shows up as `:visited` for me... had to be either BigTable, MapReduce, or Spanner. All good reads.</p>
]]></description><pubDate>Fri, 09 Jan 2026 04:33:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=46550071</link><dc:creator>hxtk</dc:creator><comments>https://news.ycombinator.com/item?id=46550071</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46550071</guid></item><item><title><![CDATA[New comment by hxtk in "Eat Real Food"]]></title><description><![CDATA[
<p>There's some real science there, for a couple of reasons. Protein is a macronutrient you can be malnourished without, even if you eat enough calories and the right micronutrients. And if most of your calories come from protein, you're probably not getting as many "burnable" calories as you think, because (1) the protein you need to meet your daily protein requirement never enters the citric acid cycle to be oxidized for ATP regeneration, (2) protein is the macronutrient that feels the most filling, and (3) excess protein that goes to the liver to be converted into carbs loses around 30% of its net usable calories to the energy cost of that conversion.<p>The way we count calories is based on the calories in a meal versus those in the resulting scat, and that just isn't an accurate representation of how the body processes protein: a protein-heavy diet doesn't have as many usable calories as you probably think it does, which makes it a healthy choice in an environment where most food-related health problems stem from overeating.<p>However, I agree with your skepticism insofar as when they say "prioritizing protein" they probably mean "prioritizing meat," which is more suspect from a health standpoint and looks somewhat suspicious considering the lobbyists involved.</p>
]]></description><pubDate>Thu, 08 Jan 2026 06:30:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=46537913</link><dc:creator>hxtk</dc:creator><comments>https://news.ycombinator.com/item?id=46537913</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46537913</guid></item><item><title><![CDATA[New comment by hxtk in "Web development is fun again"]]></title><description><![CDATA[
<p>This goes further into LLM usage than I prefer to go. I learn so much better when I do the research and make the plan myself that I wouldn’t let an LLM do that part even if I trusted the LLM to do a good job.<p>I basically don’t outsource stuff to an LLM unless I know roughly what to expect the LLM output to look like and I’m just saving myself a bunch of typing.<p>“Could you make me a Go module with an API similar to archive/tar.Writer that produces a CPIO archive in the newcx format?” was an example from this project.</p>
]]></description><pubDate>Mon, 05 Jan 2026 12:07:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=46497819</link><dc:creator>hxtk</dc:creator><comments>https://news.ycombinator.com/item?id=46497819</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46497819</guid></item><item><title><![CDATA[New comment by hxtk in "Web development is fun again"]]></title><description><![CDATA[
<p>As I've gotten more experience I've tended to find more fun in tinkering with architectures than in tinkering with code. I'm currently working on a secure, zero-trust, bare-metal Kubernetes deployment that relies on an immutable UKI and TPM remote attestation, and I'm making heavy use of LLMs for the various implementation details as I experiment with the architecture. As far as I know, to the extent I'm doing anything novel, it's only because the approach isn't reasonable for engineering reasons even if it technically works, but I'm learning a lot about how TPMs work, the boot process, and the kernel.<p>I still enjoy writing code as well, but I see them as separate hobbies. LLMs can pry my hand-optimized assembly drag racing or the joy of writing a well-crafted library from my cold, dead hands, but that's not always what I'm trying to do, and I'll gladly have an LLM write my OCI-layout-to-CPIO helper or my Bazel rule for assembling a configuration file and building the kernel, so that I can spend my time thinking about how the big pieces fit together and how I want to handle trust roots and cold starts.</p>
]]></description><pubDate>Sun, 04 Jan 2026 23:56:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=46493716</link><dc:creator>hxtk</dc:creator><comments>https://news.ycombinator.com/item?id=46493716</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46493716</guid></item><item><title><![CDATA[New comment by hxtk in "Unix "find" expressions compiled to bytecode"]]></title><description><![CDATA[
<p>The latter sounds like a reimplementation of AIDE, which exists in major Linux distributions’ default package managers.<p>Did you ever compare what you wrote to that?</p>
]]></description><pubDate>Wed, 31 Dec 2025 04:53:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=46441403</link><dc:creator>hxtk</dc:creator><comments>https://news.ycombinator.com/item?id=46441403</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46441403</guid></item><item><title><![CDATA[New comment by hxtk in "A faster path to container images in Bazel"]]></title><description><![CDATA[
<p>If you did that, Bazel would work a lot better. Most of Bazel's complexity exists because it was originally an export of Google's internal "Blaze," and the roughest pain points in its ergonomics were around pulling in external dependencies, because that just wasn't something Google ever did: all their dependencies were vendored into the Google3 source tree.<p>WORKSPACE files came into being to avoid needing to do that, and now we're on MODULE files instead because they do the same things much more nicely.<p>That being said, Bazel will absolutely build fully offline if you add the one step of running `bazel sync //...` between cloning the repo and yanking the cable, with some caveats depending on how your toolchains are set up, and of course the possibility that every mirror of your remote dependency has been deleted.</p>
]]></description><pubDate>Tue, 30 Dec 2025 17:57:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=46436004</link><dc:creator>hxtk</dc:creator><comments>https://news.ycombinator.com/item?id=46436004</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46436004</guid></item><item><title><![CDATA[New comment by hxtk in "Pre-commit hooks are broken"]]></title><description><![CDATA[
<p>It's how code is written at Google (including in their open-source products like AOSP and Chromium), in ffmpeg, the Linux kernel, Git, Docker, the Go compiler, Kubernetes, Bitcoin, etc., and it's how things are done at my workplace.<p>I'm surprised by how confident you are that things simply aren't done this way, considering the number of high-profile users of workflows where the commit history is expected to tell a story of how the software evolved over time.</p>
]]></description><pubDate>Sat, 27 Dec 2025 22:32:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=46406051</link><dc:creator>hxtk</dc:creator><comments>https://news.ycombinator.com/item?id=46406051</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46406051</guid></item><item><title><![CDATA[New comment by hxtk in "Unix "find" expressions compiled to bytecode"]]></title><description><![CDATA[
<p>Virtually all databases compile queries in one way or another, but their approaches vary. SQLite, for example, uses bytecode, while Postgres and MySQL both compile the query to a computation tree: basically the query AST with table and index operations substituted in by the query planner.<p>SQLite discusses the trade-offs between the two here: <a href="https://sqlite.org/whybytecode.html" rel="nofollow">https://sqlite.org/whybytecode.html</a></p>
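As a toy illustration of the two shapes a compiled filter like `x > 3` can take (this mimics neither SQLite's VDBE nor Postgres's executor, just the general idea):

```python
# Tree style: the plan is nested callables walked once per row.
def gt(col, k):
    return lambda row: row[col] > k

tree_plan = gt("x", 3)

# Bytecode style: the plan is a flat instruction list run by a small
# stack-based virtual machine.
bytecode = [("load", "x"), ("push", 3), ("gt",)]

def run(code, row):
    stack = []
    for op, *args in code:
        if op == "load":
            stack.append(row[args[0]])
        elif op == "push":
            stack.append(args[0])
        elif op == "gt":
            b, a = stack.pop(), stack.pop()
            stack.append(a > b)
    return stack.pop()

row = {"x": 5}
print(tree_plan(row), run(bytecode, row))  # True True
```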
]]></description><pubDate>Fri, 26 Dec 2025 23:52:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=46397679</link><dc:creator>hxtk</dc:creator><comments>https://news.ycombinator.com/item?id=46397679</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46397679</guid></item><item><title><![CDATA[New comment by hxtk in "CSRF protection without tokens or hidden form fields"]]></title><description><![CDATA[
<p>It’s a real problem for defense sites because .mil is a public suffix, so all navy.mil sites count as the “same site,” as do all af.mil sites, and so on.</p>
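A rough sketch of why that falls out of the "site" definition, using a hand-rolled registrable-domain check (the hostnames are invented, and the real Public Suffix List is far larger than this two-entry stand-in):

```python
# Tiny stand-in for the Public Suffix List; browsers ship thousands
# of entries, including .mil.
PUBLIC_SUFFIXES = {"mil", "com"}

def registrable_domain(host: str) -> str:
    """Return the public suffix plus one label (the 'site' for SameSite)."""
    labels = host.split(".")
    for i in range(len(labels)):
        if ".".join(labels[i:]) in PUBLIC_SUFFIXES:
            return ".".join(labels[i - 1:]) if i > 0 else host
    return ".".join(labels[-2:])

def same_site(a: str, b: str) -> bool:
    return registrable_domain(a) == registrable_domain(b)

# Because .mil itself is the public suffix, every *.navy.mil host
# shares the site "navy.mil", regardless of which command runs it:
print(same_site("pay.navy.mil", "hr.portal.navy.mil"))  # True
print(same_site("pay.navy.mil", "www.af.mil"))          # False
```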
]]></description><pubDate>Thu, 25 Dec 2025 17:21:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=46385725</link><dc:creator>hxtk</dc:creator><comments>https://news.ycombinator.com/item?id=46385725</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46385725</guid></item><item><title><![CDATA[New comment by hxtk in "Some Epstein file redactions are being undone"]]></title><description><![CDATA[
<p>Or if the document is just text, simply scan it in black and white (as in, binary, not grayscale).</p>
]]></description><pubDate>Wed, 24 Dec 2025 14:17:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=46375766</link><dc:creator>hxtk</dc:creator><comments>https://news.ycombinator.com/item?id=46375766</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46375766</guid></item></channel></rss>