<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: jkhdigital</title><link>https://news.ycombinator.com/user?id=jkhdigital</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 06 Apr 2026 09:43:14 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=jkhdigital" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by jkhdigital in "New 'negative light' technology hides data transfers in plain sight"]]></title><description><![CDATA[
<p>The paper itself mentions steganography as early as the second sentence.</p>
]]></description><pubDate>Fri, 13 Mar 2026 21:35:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47370273</link><dc:creator>jkhdigital</dc:creator><comments>https://news.ycombinator.com/item?id=47370273</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47370273</guid></item><item><title><![CDATA[New comment by jkhdigital in "New 'negative light' technology hides data transfers in plain sight"]]></title><description><![CDATA[
<p>Secure channels can still be jammed. Undetectability is a fundamentally different goal than secrecy.</p>
]]></description><pubDate>Fri, 13 Mar 2026 21:34:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47370268</link><dc:creator>jkhdigital</dc:creator><comments>https://news.ycombinator.com/item?id=47370268</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47370268</guid></item><item><title><![CDATA[Large pipe protrudes 13m above roadway in Osaka]]></title><description><![CDATA[
<p>Article URL: <a href="https://www3.nhk.or.jp/nhkworld/en/news/20260311_20/">https://www3.nhk.or.jp/nhkworld/en/news/20260311_20/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47342817">https://news.ycombinator.com/item?id=47342817</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 11 Mar 2026 22:07:46 +0000</pubDate><link>https://www3.nhk.or.jp/nhkworld/en/news/20260311_20/</link><dc:creator>jkhdigital</dc:creator><comments>https://news.ycombinator.com/item?id=47342817</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47342817</guid></item><item><title><![CDATA[New comment by jkhdigital in "Agentic Engineering Patterns"]]></title><description><![CDATA[
<p>I strongly agree with that last statement—I hate using agents because their code smells awful even if it works. But I have to use them now because otherwise I’m going to wake up one day and be 100% obsolete and never even notice how it happened.</p>
]]></description><pubDate>Wed, 04 Mar 2026 09:23:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47245055</link><dc:creator>jkhdigital</dc:creator><comments>https://news.ycombinator.com/item?id=47245055</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47245055</guid></item><item><title><![CDATA[New comment by jkhdigital in "Agentic Engineering Patterns"]]></title><description><![CDATA[
<p>Today I gave a lecture to my undergraduate data structures students about the evolution of CPU and GPU architectures since the late 1970s. The main themes:<p>- Through the last two decades of the 20th century, Moore’s Law held and ensured that more transistors could be packed into next year’s chips, which could also run at faster and faster clock speeds. Software floated on a rising tide of hardware performance, so writing fast code wasn’t always worth the effort.<p>- Power consumption doesn’t vary with transistor density but varies roughly with the <i>cube</i> of clock frequency, so by the early 2000s Intel hit a wall and couldn’t push the clock above ~4GHz with normal heat dissipation methods. Multi-core processors were the only way to keep performance increasing year after year.<p>- Up to this point the CPU could squeeze out performance increases by parallelizing sequential code through clever scheduling tricks (and compilers could provide an assist by unrolling loops), but with multiple cores software developers could no longer pretend that concurrent programming was something only academics and HPC clusters cared about.<p>CS curricula are mostly still stuck in the early 2000s, or at least it feels that way.
We teach big-O and use it to show that mergesort or quicksort will beat the pants off of bubble sort, but topics like Amdahl’s Law are buried in an upper-level elective when in fact Amdahl’s Law is far more directly relevant to the performance of real code, on real present-day workloads, than a typical big-O analysis.<p>In any case, I used all this as justification for teaching bitonic sort to 2nd- and 3rd-year undergrads.<p>My point here is that Simon’s assertion that “code is cheap” feels a lot like the kind of paradigm shift that comes from realizing that in a world with easily accessible, massively parallel compute hardware, the things that matter for writing performant software have completely shifted: minimizing branching and data dependencies produces code that looks profoundly different from what most developers are used to. E.g., running five linear passes over a column might actually be faster than a single merged pass if those five passes touch different memory and the merged pass has to wait to shuffle all that data in and out of the cache because it doesn’t fit.<p>What all this means for the software development process I can’t say, but the payoff will be tremendous (10-100x, just like with properly parallelized code) for those who can see the new paradigm first and exploit it.</p>
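<p>The Amdahl’s Law point above fits in a few lines of code. A minimal sketch (class and method names are mine, not from the lecture):</p>

```java
// Amdahl's Law: the overall speedup from parallelizing a fraction p of a
// program across n cores is 1 / ((1 - p) + p / n).
public class Amdahl {
    static double speedup(double p, int n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

    public static void main(String[] args) {
        // Even with 95% of the work parallelized, 64 cores deliver far
        // less than a 64x speedup -- the serial 5% dominates.
        System.out.printf("p=0.95, n=64       -> %.1fx%n", speedup(0.95, 64));
        // The limit as n grows is 1 / (1 - p), here 20x.
        System.out.printf("p=0.95, n -> inf   -> %.1fx%n", 1.0 / (1.0 - 0.95));
    }
}
```

<p>No big-O analysis of the parallel section matters until that serial fraction shrinks, which is the sense in which Amdahl’s Law bears more directly on real workloads.</p>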
]]></description><pubDate>Wed, 04 Mar 2026 09:08:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47244936</link><dc:creator>jkhdigital</dc:creator><comments>https://news.ycombinator.com/item?id=47244936</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47244936</guid></item><item><title><![CDATA[New comment by jkhdigital in "What's the best way to learn a new language?"]]></title><description><![CDATA[
<p>It definitely wasn’t a waste of time! I passed JLPT N1 back in 2014 after ~6 years of mostly Anki-based studying. Did Heisig’s RtK first and then mostly played old Japanese console games that I was familiar with. Never opened a JLPT study guide and passed the test on my first attempt.<p>Could I speak Japanese at that point? No not really… I even had a Japanese spouse! But we spoke mostly English at home. I could read quite well, but conversation was very challenging.<p>Then we moved to Japan. Despite not having a job that requires me to speak Japanese, I got enough live exposure just from chatting with people at the gym or in social activities that now, a few years later, I’ve backfilled all that conversational fluency that was missing. No special extra effort required, just living in an environment where I used the language reasonably often.<p>Anyways, the point is that all the time spent in Anki laid a rock-solid foundation that merely needed activation in the right environment for active fluency to emerge. Of course I no longer do my daily flashcard drills (and I’ve forgotten how to write quite a few kanji as a result) but the work paid off.</p>
]]></description><pubDate>Sun, 22 Feb 2026 23:32:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47116025</link><dc:creator>jkhdigital</dc:creator><comments>https://news.ycombinator.com/item?id=47116025</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47116025</guid></item><item><title><![CDATA[New comment by jkhdigital in "Japan's Dododo Land, the most irritating place on Earth"]]></title><description><![CDATA[
<p>Miraikan is one of our favorites; we’ve been there 3-4 times with my son. The current exhibit that turns quantum logic gates into a DJ game is really innovative, but they only give you like 5 minutes, which is barely enough time to figure out WTF is even going on.</p>
]]></description><pubDate>Fri, 13 Feb 2026 11:25:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47001534</link><dc:creator>jkhdigital</dc:creator><comments>https://news.ycombinator.com/item?id=47001534</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47001534</guid></item><item><title><![CDATA[New comment by jkhdigital in "Learning from context is harder than we thought"]]></title><description><![CDATA[
<p>For all the disparagement of “fact regurgitation” as pedagogical practice, it’s not like there’s some proven better alternative. Higher-order reasoning doesn’t happen without a thorough catalogue of domain knowledge readily accessible in your context window.</p>
]]></description><pubDate>Fri, 06 Feb 2026 23:37:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=46919677</link><dc:creator>jkhdigital</dc:creator><comments>https://news.ycombinator.com/item?id=46919677</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46919677</guid></item><item><title><![CDATA[New comment by jkhdigital in "Learning from context is harder than we thought"]]></title><description><![CDATA[
<p>Your last statement misses the mark—of course the brain is the root of human intelligence. The error is in assuming that <i>consciousness</i> is the primary learning modality. Or, as you put it, “arguing semantics”.<p>From my own personal experience, this realization came after finally learning a difficult foreign language after years and years of “wanting” to learn it but making little progress. The shift came when I approached it like learning martial arts rather than mathematics. Nobody would be foolish enough to suggest that you could “think” your way to a black belt, but we mistakenly assume that skills which involve only the organs in our head (eyes, ears, mouth) can be reduced to a thought process.</p>
]]></description><pubDate>Fri, 06 Feb 2026 23:22:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=46919542</link><dc:creator>jkhdigital</dc:creator><comments>https://news.ycombinator.com/item?id=46919542</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46919542</guid></item><item><title><![CDATA[New comment by jkhdigital in "The Codex app illustrates the shift left of IDEs and coding GUIs"]]></title><description><![CDATA[
<p>Your analogy to PHP developers not reading assembly got me thinking.<p>Early resistance to high-level (i.e. compiled) languages came from assembly programmers who couldn’t imagine that the compiler could generate code that was just as performant as their hand-crafted product. For a while they were right, but improved compiler design and the relentless performance increases in hardware made it so that even an extra 10-20% boost you might get from perfectly hand-crafted assembly was almost never worth the developer time.<p>There is an obvious parallel here, but it’s not quite the same. The high-level language is effectively a formal spec for the abstract machine which is faithfully translated by the (hopefully bug-free) compiler. Natural language is not a formal spec for anything, and LLM-based agents are not formally verifiable software. So the tradeoffs involved are not only about developer time vs. performance, but also <i>correctness</i>.</p>
]]></description><pubDate>Wed, 04 Feb 2026 23:40:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=46893540</link><dc:creator>jkhdigital</dc:creator><comments>https://news.ycombinator.com/item?id=46893540</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46893540</guid></item><item><title><![CDATA[New comment by jkhdigital in "Silver plunges 30% in worst day since 1980, gold tumbles"]]></title><description><![CDATA[
<p>People choose to hold non-yield-bearing assets when they believe the returns offered by current investment opportunities are not sufficient to justify the risks.<p>It is the miracle of modern capital markets that enables almost anyone to quickly and easily invest their savings in productive assets, but of course capital markets aren’t perfect. The availability of “none of the above” options (like gold) that remove savings from the pool of active investment capital is the essential feedback loop that balances risk and return.</p>
]]></description><pubDate>Fri, 30 Jan 2026 22:46:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=46831012</link><dc:creator>jkhdigital</dc:creator><comments>https://news.ycombinator.com/item?id=46831012</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46831012</guid></item><item><title><![CDATA[New comment by jkhdigital in "The Five Levels: From spicy autocomplete to the dark factory"]]></title><description><![CDATA[
<p>This comment is quintessential HN poetry</p>
]]></description><pubDate>Thu, 29 Jan 2026 00:31:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=46803942</link><dc:creator>jkhdigital</dc:creator><comments>https://news.ycombinator.com/item?id=46803942</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46803942</guid></item><item><title><![CDATA[New comment by jkhdigital in "Local agents will win"]]></title><description><![CDATA[
<p>more than a little… at least there are no gratuitous emojis</p>
]]></description><pubDate>Wed, 28 Jan 2026 22:43:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=46802708</link><dc:creator>jkhdigital</dc:creator><comments>https://news.ycombinator.com/item?id=46802708</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46802708</guid></item><item><title><![CDATA[New comment by jkhdigital in "Talking to LLMs has improved my thinking"]]></title><description><![CDATA[
<p>Another principle that builds on the other two, and is specifically applicable to Java:<p>- To the greatest extent possible, base interfaces should define a <i>single</i> abstract method, which makes them functional interfaces that can be instantiated through lambdas for easy mocking.<p>- Terminal interfaces (those intended to be implemented directly by a concrete class) should <i>always</i> provide an abstract decorator implementation that wraps the concrete class, for (1) interface isolation that can’t be bypassed by runtime reflection, and (2) decoration as an automatic extension point.</p>
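<p>A minimal sketch of the two principles together (the names Store, StoreDecorator, and LoggingStore are invented for illustration):</p>

```java
// A single-abstract-method base interface is a functional interface,
// so tests can mock it with a lambda.
@FunctionalInterface
interface Store {
    String get(String key);   // the single abstract method
}

// The abstract decorator wraps any Store and forwards by default,
// making decoration the standard extension point.
abstract class StoreDecorator implements Store {
    protected final Store inner;
    protected StoreDecorator(Store inner) { this.inner = inner; }
    @Override public String get(String key) { return inner.get(key); }
}

// Extension by decoration: logging is layered on without touching
// the wrapped implementation.
class LoggingStore extends StoreDecorator {
    LoggingStore(Store inner) { super(inner); }
    @Override public String get(String key) {
        System.out.println("get(" + key + ")");
        return super.get(key);
    }
}

class StoreDemo {
    public static void main(String[] args) {
        Store mock = key -> "stub:" + key;       // lambda as a mock
        Store logged = new LoggingStore(mock);   // decorated mock
        System.out.println(logged.get("user"));  // prints stub:user
    }
}
```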
]]></description><pubDate>Fri, 23 Jan 2026 12:05:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=46731518</link><dc:creator>jkhdigital</dc:creator><comments>https://news.ycombinator.com/item?id=46731518</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46731518</guid></item><item><title><![CDATA[New comment by jkhdigital in "Talking to LLMs has improved my thinking"]]></title><description><![CDATA[
<p>I don’t <i>completely</i> disagree, just like 90%. Junior developers typically aren’t taught the design patterns in the first place so they dig deeper holes rather than immediately recognizing that they need to refactor, pull out an interface and decorate/delegate/etc.<p>I’m also going to point out that this is a <i>data structures</i> course that I’m teaching, so extensibility and composability are paramount concerns because the interfaces we implement are at the heart of almost everything else in an application.</p>
]]></description><pubDate>Fri, 23 Jan 2026 11:57:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=46731458</link><dc:creator>jkhdigital</dc:creator><comments>https://news.ycombinator.com/item?id=46731458</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46731458</guid></item><item><title><![CDATA[New comment by jkhdigital in "Talking to LLMs has improved my thinking"]]></title><description><![CDATA[
<p>I’m required to teach DSA in Java, so I lay down a couple rules early in the course that prohibit 95% of the nonsense garbage that unconstrained OOP allows. Granted, neither of these rules is original or novel, but they are <i>rarely</i> acknowledged in educational settings:<p>1. All public methods <i>must</i> implement an interface, no exceptions.
2. The super implementation <i>must</i> be called if overriding a non-abstract method.<p>The end result of strict adherence to these rules is basically that every feature will look like a GoF design pattern. True creative freedom emerges through constraints, because the only allowable designs are the ones that are proven to be maximally extensible and composable.</p>
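<p>As a sketch of how the two rules look in student code (Counter and friends are invented names, not from the course):</p>

```java
// Rule 1: every public method implements an interface method.
interface Counter {
    void increment();
    int value();
}

class BasicCounter implements Counter {
    private int count = 0;
    @Override public void increment() { count++; }
    @Override public int value() { return count; }
}

// Rule 2: an override of a non-abstract method must call super,
// so a subclass can only extend behavior, never silently replace it.
class AuditedCounter extends BasicCounter {
    @Override public void increment() {
        System.out.println("increment()");
        super.increment();   // mandated by rule 2
    }
}

class RulesDemo {
    public static void main(String[] args) {
        Counter c = new AuditedCounter();
        c.increment();
        c.increment();
        System.out.println(c.value());   // prints 2
    }
}
```

<p>Following both rules, the subclass ends up looking like a decorator built via inheritance, which is the sense in which every feature converges on a GoF pattern.</p>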
]]></description><pubDate>Fri, 23 Jan 2026 08:52:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=46730138</link><dc:creator>jkhdigital</dc:creator><comments>https://news.ycombinator.com/item?id=46730138</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46730138</guid></item><item><title><![CDATA[New comment by jkhdigital in "Talking to LLMs has improved my thinking"]]></title><description><![CDATA[
<p>I started teaching undergraduate computer science courses a year ago, after ~20 years in various other careers. My campus has relatively low enrollment, but has seen a massive increase in CS majors recently (for reasons I won’t go into) so they are hiring a lot without much instructional support in place. I was basically given zero preparation other than a zip file with the current instructor’s tests and homeworks (which are on paper, btw).<p>I thought that I would be using LLMs for coding, but it turns out that they have been much more useful as a sounding board for conceptual framing that I’d like to use while teaching. I have strong opinions about good software design, some of them unconventional, and these conversations have been incredibly helpful for turning my vague notions into precise, repeatable explanations for difficult abstractions.</p>
]]></description><pubDate>Fri, 23 Jan 2026 06:09:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=46728998</link><dc:creator>jkhdigital</dc:creator><comments>https://news.ycombinator.com/item?id=46728998</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46728998</guid></item><item><title><![CDATA[New comment by jkhdigital in "Gas Town Decoded"]]></title><description><![CDATA[
<p>I can’t stop thinking about this exact notion. The main reason we don’t always use stuff like TLA+ to spec out our software is because it’s tedious AF for anything smaller than, like, mission-critical enterprise-grade systems and we can generally trust the humans to get the details right eventually through carrot-and-stick incentive systems. LLM agents have none of the attentional and motivational constraints of humans so there’s no reason not to do things the right way.</p>
]]></description><pubDate>Mon, 19 Jan 2026 11:30:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=46677784</link><dc:creator>jkhdigital</dc:creator><comments>https://news.ycombinator.com/item?id=46677784</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46677784</guid></item><item><title><![CDATA[New comment by jkhdigital in "Gas Town Decoded"]]></title><description><![CDATA[
<p>That last line is exactly what I was thinking. Find an expressive language and then progressively formalize your workflows in DSLs that enforce correctness by design, not through layers and layers of natural language “skills” and deadweight agentic watchdogs.</p>
]]></description><pubDate>Mon, 19 Jan 2026 11:16:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=46677672</link><dc:creator>jkhdigital</dc:creator><comments>https://news.ycombinator.com/item?id=46677672</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46677672</guid></item><item><title><![CDATA[New comment by jkhdigital in "Provide agents with automated feedback"]]></title><description><![CDATA[
<p>Still basically relies on feeding context through natural-language instructions, which can be ignored or poorly followed?<p>The answer is not more natural-language guardrails; it is (progressive) formal specification of workflows and acceptance criteria. The task cannot be marked as complete when completion is only accessible through an API that rejects changes lacking proof that the acceptance criteria were met.</p>
]]></description><pubDate>Mon, 19 Jan 2026 10:53:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=46677493</link><dc:creator>jkhdigital</dc:creator><comments>https://news.ycombinator.com/item?id=46677493</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46677493</guid></item></channel></rss>