<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: NilMostChill</title><link>https://news.ycombinator.com/user?id=NilMostChill</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 12 Apr 2026 23:35:46 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=NilMostChill" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by NilMostChill in "Tadpole – A modular and extensible DSL built for web scraping"]]></title><description><![CDATA[
<p>What is the specific domain of jQuery?</p>
]]></description><pubDate>Wed, 04 Feb 2026 09:00:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=46883340</link><dc:creator>NilMostChill</dc:creator><comments>https://news.ycombinator.com/item?id=46883340</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46883340</guid></item><item><title><![CDATA[New comment by NilMostChill in "AI code and software craft"]]></title><description><![CDATA[
<p>TL;DR: If it's working for you, great, but presenting it as a general, direct replacement for development teams is disingenuous.<p>---<p>> Looks like we just have different expectations: i don't want to micromanage my coding agents any more than i micromanage the developers i work with as a product manager. If the output does what it is supposed to do, and the software is maintainable and extendable by following certain best practices, i'm happy. And i expect that goes for most business people.<p>None of what I said implied any expectations about the process of using the tools, but if you've found something that works for you, that's good.<p>On the subject of maintainability and extension: those are usually bound to the project's level of complexity, and complexity does not generally grow linearly with requirements.<p>I agree that many business people would love what you've described; very few are getting it.<p>> And in practice i have more control with a coding agent than with developers as i can iterate over ideas quickly: "build this idea", "no change this", "remove this and replace it with this". Within an hour you can quickly iterate an idea into something that works well. With developers this would have taken days if not more. 
And they would've complained i need to better prepare my requirements.<p>Up to a point, yes.<p>If your application of this methodology works well enough before you hit the limitations of the tooling, that's great.<p>There is, however, a threshold of complexity where this starts to break down. The threshold can be pushed out somewhat with experience and a better understanding of how to utilise the tooling, but it still exists (currently).<p>Once you reach it, the approaches you're talking about start to work less effectively and can even actively hinder progress.<p>There are software development techniques and approaches that can push the threshold out further, but then you're into the territory of having to know enough to instruct the LLM to use those approaches.</p>
]]></description><pubDate>Tue, 27 Jan 2026 19:39:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=46785293</link><dc:creator>NilMostChill</dc:creator><comments>https://news.ycombinator.com/item?id=46785293</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46785293</guid></item><item><title><![CDATA[New comment by NilMostChill in "AI code and software craft"]]></title><description><![CDATA[
<p>> I think i have enough control.<p>This is probably just a disagreement about the term "control", so we can agree to disagree on that one, I suppose.<p>The rest of the reply doesn't really relate to any of the points I mentioned.<p>That it's possible to successfully use the tool to achieve your goals wasn't in dispute.<p>I'll try to narrow it down:<p>---<p>> You are not a victim at the mercy of your LLM.<p>Yes, you absolutely are; it's how they work.<p>As I said, you can suggest guidelines and directions, but it's not guaranteed they'll be adhered to.<p>To be clear, this applies to people as well.<p>---<p>Directing an LLM (or LLM-based orchestration system) is not the same as directing a team of people.<p>The "interface" is similar, in that you provide instructions and guidelines and receive an attempt at the desired outcome.<p>However, the underlying mechanisms are so different that the analogy you were trying to draw doesn't make sense.<p>---<p>Again, LLMs can be useful tools, but presenting them as something they aren't only serves to muddy the waters of understanding how best to use them.<p>---<p>As an aside, IMO, the sketchy-salesman approach of over-promising on features and obscuring the limitations will do great harm to the adoption of LLMs in the medium to long term.<p>The misrepresentation of terminology is also contributing to this.<p>The term AI is intentionally being used to attribute a level of reasoning and problem-solving capability beyond what actually exists in these systems.</p>
]]></description><pubDate>Tue, 27 Jan 2026 15:01:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=46780878</link><dc:creator>NilMostChill</dc:creator><comments>https://news.ycombinator.com/item?id=46780878</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46780878</guid></item><item><title><![CDATA[New comment by NilMostChill in "AI code and software craft"]]></title><description><![CDATA[
<p>> You have a lot of control over what the LLM creates.<p>No, you don't; you have "influence" or "suggestion".<p>You can absolutely narrow the probability ranges of what is produced, but there is no guarantee that it will stick to your guidelines.<p>So far, at least, that's just not how they work.<p>> You don't have 100% control over what your LLM devs are doing, but more than you think. Just like normal managers don't micromanage every action of their team.<p>This overlooks the actual reasoning and interpretation involved when dealing with real people.<p>While directing an LLM might seem similar in practice to managing a team of people, the underlying mechanisms are not the same.<p>If you analyse based on comparisons between those two approaches without understanding the fundamental differences in what's happening beneath the surface, any conclusions drawn will be flawed.<p>---<p>I'm not against LLMs; I'm against using them poorly and presenting them as something they are not.</p>
]]></description><pubDate>Tue, 27 Jan 2026 14:09:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=46780112</link><dc:creator>NilMostChill</dc:creator><comments>https://news.ycombinator.com/item?id=46780112</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46780112</guid></item><item><title><![CDATA[New comment by NilMostChill in "Aldous Huxley predicts Adderall and champions alternative therapies"]]></title><description><![CDATA[
<p>Isn't the whole point of amphetamine-based treatment for ADHD to correct (or beneficially alter, depending on your point of view) a non-standard brain chemistry?<p>AFAIK some neurodivergent brains process amphetamines differently, and the baseline levels of the chemicals affected by amphetamines are different.<p>Wear and tear might be a thing, I don't know, but the analogy of putting NO2 in their car feels a bit off.<p>It'd be more like finally putting premium unleaded in your car after years of "back of the lorry" pseudo-unleaded.</p>
]]></description><pubDate>Tue, 18 Nov 2025 07:43:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=45962415</link><dc:creator>NilMostChill</dc:creator><comments>https://news.ycombinator.com/item?id=45962415</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45962415</guid></item><item><title><![CDATA[New comment by NilMostChill in "Century-old stone “tsunami stones” dot Japan's coastline (2015)"]]></title><description><![CDATA[
<p>Depends on how metaphorical and/or political you want to get.<p>Arguably, books could be considered warning waystones, but that's a stretch in this context.<p>Physical monuments, though, we have loads; lots of war memorials are/were intended as warnings about the cost of war.<p>Auschwitz-Birkenau being left as it is could be considered another.<p>If you want to get really close to similar intentions, there are the long-term nuclear waste warnings:<p><a href="https://en.wikipedia.org/wiki/Long-term_nuclear_waste_warning_messages" rel="nofollow">https://en.wikipedia.org/wiki/Long-term_nuclear_waste_warnin...</a><p>A bit more esoteric (and less warning-like), and you get the signals we intentionally send into space as a time-capsule/marker for potential alien contact.</p>
]]></description><pubDate>Mon, 04 Aug 2025 14:52:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=44786640</link><dc:creator>NilMostChill</dc:creator><comments>https://news.ycombinator.com/item?id=44786640</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44786640</guid></item><item><title><![CDATA[New comment by NilMostChill in "AI comes up with bizarre physics experiments, but they work"]]></title><description><![CDATA[
<p>Indeed, there are lots of methods, but I was specifically thinking of a method an isolated AI might feasibly figure out with only the tools it has readily available.<p>But as someone said earlier, the really interesting part is when/if they start figuring out novel concepts we humans haven't even considered.</p>
]]></description><pubDate>Tue, 22 Jul 2025 17:07:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=44650130</link><dc:creator>NilMostChill</dc:creator><comments>https://news.ycombinator.com/item?id=44650130</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44650130</guid></item><item><title><![CDATA[New comment by NilMostChill in "AI comes up with bizarre physics experiments, but they work"]]></title><description><![CDATA[
<p>The comic (manga, actually) I was referring to was "Origin" by the manga author Boichi.<p>I'll have a read of the paper; it seems similar in concept.</p>
]]></description><pubDate>Tue, 22 Jul 2025 17:03:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=44650068</link><dc:creator>NilMostChill</dc:creator><comments>https://news.ycombinator.com/item?id=44650068</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44650068</guid></item><item><title><![CDATA[New comment by NilMostChill in "I've launched 37 products in 5 years and not doing that again"]]></title><description><![CDATA[
<p>There is an argument to be made that it's good training, like coding katas but end to end.<p>If you're looking to make a living off of it, though, the training argument only works if you then go on to use the trained skills.<p>In this instance, if the numbers provided are to be believed, they made bank.<p>I'm seeing a six-figure sale on a five-figure investment, among others.<p>Though I suspect, as is usually the case, that the actual costs are much higher than the numbers provided once time, opportunity cost, etc. are factored in.</p>
]]></description><pubDate>Tue, 22 Jul 2025 12:48:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=44646222</link><dc:creator>NilMostChill</dc:creator><comments>https://news.ycombinator.com/item?id=44646222</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44646222</guid></item><item><title><![CDATA[New comment by NilMostChill in "CBA hiring Indian ICT workers after firing Australians"]]></title><description><![CDATA[
<p>Source?<p>For the trends, I mean, not the extrapolation.</p>
]]></description><pubDate>Tue, 22 Jul 2025 12:13:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=44645897</link><dc:creator>NilMostChill</dc:creator><comments>https://news.ycombinator.com/item?id=44645897</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44645897</guid></item><item><title><![CDATA[New comment by NilMostChill in "AI comes up with bizarre physics experiments, but they work"]]></title><description><![CDATA[
<p>There's a comic out right now positing that a sufficiently intelligent AI with appropriate access could use imperceptible (to us) vibrations from mechanical computing parts like spinning-rust HDDs, etc.<p>It's a throwaway mechanic in the comic, but it seems plausible.<p>In certain places, the power companies are/were passing time information throughout the whole grid - <a href="https://www.nist.gov/publications/time-and-frequency-electrical-power-lines" rel="nofollow">https://www.nist.gov/publications/time-and-frequency-electri...</a></p>
]]></description><pubDate>Tue, 22 Jul 2025 10:43:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=44645309</link><dc:creator>NilMostChill</dc:creator><comments>https://news.ycombinator.com/item?id=44645309</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44645309</guid></item><item><title><![CDATA[New comment by NilMostChill in "Woke Nature of Svelte"]]></title><description><![CDATA[
<p>That's generally how it is with opinionated frameworks (and I use that term literally, as opposed to agnostic): you do the thing in the way expected, or it's a fight.<p>Svelte is one of the less opinionated ones I've used, but it still has its eccentricities.<p>"Woke" as a pejorative is a big red flag for a lot of people, by the way, especially here; you're going to get significantly fewer positive interactions by opening your text with the textual equivalent of a red baseball cap.</p>
]]></description><pubDate>Wed, 16 Jul 2025 11:26:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=44580988</link><dc:creator>NilMostChill</dc:creator><comments>https://news.ycombinator.com/item?id=44580988</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44580988</guid></item><item><title><![CDATA[New comment by NilMostChill in "LLM Inevitabilism"]]></title><description><![CDATA[
<p>> LLMs in their current state have integrated into the workflows for many, many IT roles. They'll never be niche, unless governing bodies come together to kill them.<p>That is an exaggeration; they are integrated into <i>some</i> workflows, usually in a provisional manner while the full implications of such integrations are assessed for mid- to long-term viability.<p>At least in the fields of which I have first-hand knowledge.<p>> Straw man argument - this is in no way a metric for validating the power of LLMs as a tool for IT roles. Can you not find open source code bases that leverage LLMS because you haven't looked, or because you can't tell the difference between human and LLM code?<p>Straw man rebuttal: presenting an imaginary position in which this statement doesn't apply doesn't invalidate the statement as a whole.<p>> As I said, you haven't been paying attention.<p>Or, alternatively, you've been paying attention to a selective subset of your specific industry and have made wide extrapolations based on that.<p>> Denialism - the practice of denying the existence, truth, or validity of something despite proof or strong evidence that it is real, true, or valid<p>What's the one where you claim strong proof or evidence while only providing anecdotal "trust me bro"?</p>
]]></description><pubDate>Tue, 15 Jul 2025 15:01:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=44571856</link><dc:creator>NilMostChill</dc:creator><comments>https://news.ycombinator.com/item?id=44571856</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44571856</guid></item><item><title><![CDATA[New comment by NilMostChill in "What a Hacker Stole from Me"]]></title><description><![CDATA[
<p>> The author definitely saw it as a targeted attack that, when it failed, caused the attacker to switch tactics to intentionally cause harm.<p>Saying "someone or something" is generic and also accurate; it doesn't explicitly imply a specific person or targeting, though I'll concede it could be interpreted that way.<p>As interesting a side conversation as this is, it isn't my original point.<p>As I said in my original reply:<p>> It being common doesn't mean it's OK, it also doesn't mean people aren't allowed to be upset by it.<p>> "You probably need to calm down a bit" is dismissive and condescending.<p>It's entirely possible to explain context to someone without being dismissive of their feelings on the subject.</p>
]]></description><pubDate>Tue, 08 Jul 2025 10:35:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=44498748</link><dc:creator>NilMostChill</dc:creator><comments>https://news.ycombinator.com/item?id=44498748</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44498748</guid></item><item><title><![CDATA[New comment by NilMostChill in "XAI data center gets air permit to run 15 turbines, but imaging shows 24 on site"]]></title><description><![CDATA[
<p>> I'm no Elon fan, but I can not think of a single human who has done more to reduce dependence on fossil fuels.<p>We could use some of that clean energy he's facilitated to extract a small amount of gold from seawater.<p>Enough to fashion a gold medal we could then award you for first place in Olympic-level mental gymnastics.</p>
]]></description><pubDate>Mon, 07 Jul 2025 10:37:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=44488771</link><dc:creator>NilMostChill</dc:creator><comments>https://news.ycombinator.com/item?id=44488771</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44488771</guid></item><item><title><![CDATA[New comment by NilMostChill in "What a Hacker Stole from Me"]]></title><description><![CDATA[
<p>None of what I said had anything to do with the nature of the attack, personal or otherwise.<p>The author seems to be taking it a bit personally, but they don't seem to be implying an attack targeted at them exclusively so much as an attack they experienced personally; it could be either, I suppose.<p>The blog post was: "this is a thing that happened, followed by another thing I think was related; I am upset, here is why".<p>Your response was: "this is common, suck it up".<p>The post itself doesn't mention any sort of persecution or targeted attack.<p>What you said was dismissive and condescending; being technically correct about unrelated things doesn't negate that.</p>
]]></description><pubDate>Sun, 06 Jul 2025 23:41:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=44485182</link><dc:creator>NilMostChill</dc:creator><comments>https://news.ycombinator.com/item?id=44485182</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44485182</guid></item><item><title><![CDATA[New comment by NilMostChill in "The force-feeding of AI features on an unwilling public"]]></title><description><![CDATA[
<p>Entitled? Probably not. Able to communicate frustrations and suggest alternative options? Absolutely.</p>
]]></description><pubDate>Sun, 06 Jul 2025 13:20:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=44480590</link><dc:creator>NilMostChill</dc:creator><comments>https://news.ycombinator.com/item?id=44480590</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44480590</guid></item><item><title><![CDATA[New comment by NilMostChill in "What a Hacker Stole from Me"]]></title><description><![CDATA[
<p>It being common doesn't mean it's OK; it also doesn't mean people aren't allowed to be upset by it.<p>Casual racism and bigotry are common too. "You probably need to calm down a bit" is dismissive and condescending.</p>
]]></description><pubDate>Sun, 06 Jul 2025 13:14:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=44480550</link><dc:creator>NilMostChill</dc:creator><comments>https://news.ycombinator.com/item?id=44480550</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44480550</guid></item><item><title><![CDATA[New comment by NilMostChill in "Accumulation of cognitive debt when using an AI assistant for essay writing task"]]></title><description><![CDATA[
<p>I'm aware of my own perspective; I don't generally crusade against whatever flavour of machine learning is currently being pushed.<p>I was just pointing out that arguing against crusading with an argument (or analogies) that leaves out half of the salient context could be considered disingenuous.<p>The difference between:<p>"You're using it incorrectly"<p>vs.<p>"Of the ones that are fit for a particular purpose, they can work well if used correctly."<p>Perhaps I'm just nitpicking.</p>
]]></description><pubDate>Tue, 17 Jun 2025 13:42:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=44299111</link><dc:creator>NilMostChill</dc:creator><comments>https://news.ycombinator.com/item?id=44299111</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44299111</guid></item><item><title><![CDATA[New comment by NilMostChill in "Accumulation of Cognitive Debt When Using an AI Assistant for Essay Writing Task"]]></title><description><![CDATA[
<p>Shallow take.<p>Your analogies only work if you don't take into account that there are different degrees of utility/quality/usefulness of the product.<p>People absolutely crusade against dangerous food, or even just food that has no nutritional benefit.<p>The parent analogy also only holds up on your happy path.</p>
]]></description><pubDate>Tue, 17 Jun 2025 10:45:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=44297591</link><dc:creator>NilMostChill</dc:creator><comments>https://news.ycombinator.com/item?id=44297591</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44297591</guid></item></channel></rss>