<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: azath92</title><link>https://news.ycombinator.com/user?id=azath92</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 22 Apr 2026 10:18:12 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=azath92" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by azath92 in "Laws of Software Engineering"]]></title><description><![CDATA[
<p>The Python Cookbook is good, and Fluent Python works more from principles than application (obviously both are Python specific). I also like A Philosophy of Software Design: a tiny little book that uses a simple example (a class that implements a text editor) to talk about complexity; it's not actually about making a text editor at all.</p>
]]></description><pubDate>Tue, 21 Apr 2026 12:17:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47847730</link><dc:creator>azath92</dc:creator><comments>https://news.ycombinator.com/item?id=47847730</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47847730</guid></item><item><title><![CDATA[New comment by azath92 in "M 7.4 earthquake – 100 km ENE of Miyako, Japan"]]></title><description><![CDATA[
<p>This is just the best. A very serious company, doing seriously cool and important stuff, also has an anime name/icon.<p>I wish more corps took themselves so lightly, while remaining serious about what they do.</p>
]]></description><pubDate>Mon, 20 Apr 2026 12:52:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47833570</link><dc:creator>azath92</dc:creator><comments>https://news.ycombinator.com/item?id=47833570</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47833570</guid></item><item><title><![CDATA[New comment by azath92 in "IEA: Solar overtakes all energy sources in a major global first"]]></title><description><![CDATA[
<p>I made the same gut assumption, and it points to either poor writing or deliberately misreadable writing that they mix units like that in the same paragraph, where presumably the idea is that we get a feel for growth in both.<p>It's probably nitpick-correct, because the 12GW is planned capacity while the solar figure might be measured use, but simple assumptions or conversions, as another comment points out, get you comparable numbers. Taking the title into account, the whole article is a little bit smoke and mirrors on clear communication, despite having plenty of numbers. That's a shame, because it sounds like even unvarnished these are good results!</p>
]]></description><pubDate>Mon, 20 Apr 2026 10:49:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47832519</link><dc:creator>azath92</dc:creator><comments>https://news.ycombinator.com/item?id=47832519</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47832519</guid></item><item><title><![CDATA[New comment by azath92 in "Stop trying to engineer your way out of listening to people"]]></title><description><![CDATA[
<p>I'd guess by your smile there is an element of humor in your response, so this isn't a rebuttal; rather, I identified a lot with your point, and I was thinking that this is such a human response to vulnerability.<p>If it was guaranteed that it would not be abused, or that I would not regret it, it would not _be_ vulnerable. Just like it's not bravery if I am not afraid, or am assured of my safety. Such a paradox. Being vulnerable, for me, is acknowledging that it might carry an increased probability of a more negative outcome, but still trying to be vulnerable because of the huge connections that (often) get unlocked, in my experience.<p>On balance, intellectually, I am coming to see that the expected value of being vulnerable in communication is high, but my little lizard brain keeps saying "what if you get hurt though" and staying closed off, haha. It's an exercise to shut it up.</p>
]]></description><pubDate>Mon, 20 Apr 2026 08:25:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47831615</link><dc:creator>azath92</dc:creator><comments>https://news.ycombinator.com/item?id=47831615</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47831615</guid></item><item><title><![CDATA[New comment by azath92 in "US private credit defaults hit record 9.2% in 2025, Fitch says"]]></title><description><![CDATA[
<p>I find the Money Stuff newsletter by Matt Levine (Bloomberg) great for this. The link is behind a paywall, but the newsletter is free; strong rec. Today's newsletter: <a href="https://www.bloomberg.com/opinion/newsletters/2026-03-11/private-credit-gets-marked-down" rel="nofollow">https://www.bloomberg.com/opinion/newsletters/2026-03-11/pri...</a><p>From that newsletter:<p>> At the Financial Times, Jill Shah and Eric Platt report:<p>> JPMorgan Chase ... informed private credit lenders that it had marked down the value of certain loans in their portfolios, which serve as the collateral the funds use to borrow from the bank, according to people familiar with the matter.<p>> ...<p>> The loans that have been devalued are to software companies, which are seen as particularly vulnerable to the onset of AI. ...<p>From what I can tell, the problem isn't that an individual who had cash to invest in a private (tech, in this case) company loses out.<p>The problem is that private credit firms running retail-focused funds ("business development companies", or BDCs) took out a bunch of loans to invest in private tech companies, and the underlying assets they borrowed against (long-term investments in private tech companies) are now being valued lower.<p>The link I'm missing is what happens when the people who invested in BDCs want their money back, when their actual money is locked up in long-term investments in private tech companies and the funds' ability to get loans is now reduced. I think this is called a "run": if someone starts pulling money out, and ultimately you can't, then it's a race to get your money out before others do, which applies to both the individuals and the institutional lenders.<p>Note: my quotes are from the Bloomberg newsletter I mention, which helped me, not the OP article. And I am writing as much to clarify my own thinking as from a place of understanding. I welcome clarification.</p>
]]></description><pubDate>Thu, 12 Mar 2026 14:30:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47351053</link><dc:creator>azath92</dc:creator><comments>https://news.ycombinator.com/item?id=47351053</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47351053</guid></item><item><title><![CDATA[New comment by azath92 in "Qwen3.5 Fine-Tuning Guide"]]></title><description><![CDATA[
<p>Thank you so much! I suffered with this, and now I never will again!</p>
]]></description><pubDate>Thu, 05 Mar 2026 09:04:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47259367</link><dc:creator>azath92</dc:creator><comments>https://news.ycombinator.com/item?id=47259367</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47259367</guid></item><item><title><![CDATA[New comment by azath92 in "Qwen3.5 Fine-Tuning Guide"]]></title><description><![CDATA[
<p>OK, even that "few thousand examples" heuristic is useful. The use case would be to run this task over, I'd say, somewhere on the order of 100k extractions per run, batched rather than real time, and we'd be interested in (and already do) regular reruns with minor tweaks to the extracted blob (1-10 simple fields, nothing complex).<p>My interest in fine-tuning at all is based on an adjacent interest in self-hosting small models, although I tested this on AWS Bedrock for ease of comparison. My hope is that, given we are self-hosting, fine-tuning and hosting our tuned model shouldn't be terribly difficult, at least compared to the managed fine-tuning solutions on cloud providers, which I'm generally wary of. Happy for those assumptions to be challenged.</p>
]]></description><pubDate>Thu, 05 Mar 2026 09:03:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47259356</link><dc:creator>azath92</dc:creator><comments>https://news.ycombinator.com/item?id=47259356</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47259356</guid></item><item><title><![CDATA[New comment by azath92 in "Qwen3.5 Fine-Tuning Guide"]]></title><description><![CDATA[
<p>Only to prompt thought on this exact question; I'm interested in answers.<p>I just ran a benchmark against Haiku on a very simple document classification task that at the moment we farm out to Haiku in parallel. Very naive: same prompt, same API (AWS Bedrock), and I can see that a few of the 4b-class models are a pretty good match, and could easily be run locally or cheaply via a hosted provider. The "how much data and how much improvement" question is one I don't have a good intuition for anymore; I don't even have an order-of-magnitude guess on those two axes.<p>Here are raw numbers to spark discussion:<p><pre><code>| Model          | DocType% | Year% | Subject% | In $/MTok |
|----------------|----------|-------|----------|-----------|
| llama-70b      |       83 |    98 |       96 |     $0.72 |
| gpt-oss-20b    |       83 |    97 |       92 |     $0.07 |
| ministral-14b  |       84 |   100 |       90 |     $0.20 |
| gemma-4b       |       75 |    93 |       91 |     $0.04 |
| glm-flash-30b  |       83 |    93 |       90 |     $0.07 |
| llama-1b       |       47 |    90 |       58 |     $0.10 |
</code></pre><p>Percents are doc type (categorical), year, and subject name match against Haiku; it just uses the first 4 pages of each document.<p>In the old world, where these were my own in-house models, I'd be interested in seeing if I could lift those numbers with training, but I haven't done that with the new LLMs in a while. Keen to get even a finger in the air if possible.<p>Can easily generate tens of thousands of examples.<p>Might try myself, but always keen for an opinion.<p>_edit for table formatting_</p>
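For concreteness, the match percentages above can be computed with a small sketch like this (the field names `doc_type`, `year`, `subject` and the sample records are purely illustrative, mirroring the three columns in the table):

```python
# Sketch: score a candidate model's extractions against reference (Haiku)
# labels, field by field. Records and field names are hypothetical examples.

def agreement(reference: list[dict], candidate: list[dict], field: str) -> float:
    """Percent of documents where the candidate's value matches the reference."""
    matches = sum(1 for ref, cand in zip(reference, candidate) if ref[field] == cand[field])
    return 100 * matches / len(reference)

reference = [
    {"doc_type": "invoice", "year": 2024, "subject": "Acme Corp"},
    {"doc_type": "report", "year": 2023, "subject": "Solar Ltd"},
]
candidate = [
    {"doc_type": "invoice", "year": 2024, "subject": "Acme Corp"},
    {"doc_type": "letter", "year": 2023, "subject": "Solar Ltd"},
]

for field in ("doc_type", "year", "subject"):
    print(f"{field}: {agreement(reference, candidate, field):.0f}%")
# → doc_type: 50%, year: 100%, subject: 100%
```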
]]></description><pubDate>Wed, 04 Mar 2026 15:02:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=47248504</link><dc:creator>azath92</dc:creator><comments>https://news.ycombinator.com/item?id=47248504</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47248504</guid></item><item><title><![CDATA[New comment by azath92 in "Why No AI Games?"]]></title><description><![CDATA[
<p>I was aware of the patent, and agree, I think it's overly narrow and you could get around it easily. I think the reason we haven't seen it or something like it in another game (or I haven't, but someone pleeeease, I'd love to hear about systems like it) is less that it's not useful, and more that it may not work as plug-and-play: the only reason it works is the super exhaustive care taken in tuning its parameters and giving it enough variety to make it interesting to play.<p>Kinda like the dialogue/story paths in something like Hades, where IIRC they made a whole system to manage it, but the reality is that the system only matters when the tree is suuuuuuuuper complex. Or maybe it was Disco Elysium, or both ...</p>
]]></description><pubDate>Wed, 04 Mar 2026 13:51:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=47247396</link><dc:creator>azath92</dc:creator><comments>https://news.ycombinator.com/item?id=47247396</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47247396</guid></item><item><title><![CDATA[New comment by azath92 in "Why No AI Games?"]]></title><description><![CDATA[
<p>And to separate my thoughts from the info blob:<p>I think the culture war point is also super true of the game design industry, not just the consumers: the already ultra-competitive nature of the work means that the creatives and the industry as a whole have taken a veeeery strong stance against genAI. That's a reckon, and I don't know if it's good or bad.<p>It does feel a little counter to the march of progress, but in a medium where high effort can be enjoyed by many, I'm personally cool with artisanal handmade games.</p>
]]></description><pubDate>Tue, 03 Mar 2026 16:41:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47235026</link><dc:creator>azath92</dc:creator><comments>https://news.ycombinator.com/item?id=47235026</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47235026</guid></item><item><title><![CDATA[New comment by azath92 in "Why No AI Games?"]]></title><description><![CDATA[
<p>Only because it is something I find fascinating:
> There are those Orcs in that one Lord of the Rings game who hold grudges against you.<p>This is referring to the Nemesis system in Middle-earth: Shadow of Mordor and Shadow of War, and it's an amazing set of interlocking procedural systems that do genuinely feel like AI, but it's AI in the sense the term has always been used in games (the rules games follow to govern NPCs and the world), not AI in the sense of modern LLMs or even other generative systems. This video is a great look at what it is and why it's great, IMO: <a href="https://www.youtube.com/watch?v=Lm_AzK27mZY" rel="nofollow">https://www.youtube.com/watch?v=Lm_AzK27mZY</a><p>I think a system like this could really work well with some modern LLM stuff, but it certainly feels like magic without it.</p>
]]></description><pubDate>Tue, 03 Mar 2026 16:39:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47234995</link><dc:creator>azath92</dc:creator><comments>https://news.ycombinator.com/item?id=47234995</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47234995</guid></item><item><title><![CDATA[New comment by azath92 in "Ask HN: Who is hiring? (March 2026)"]]></title><description><![CDATA[
<p>Climatealigned | Onsite | London<p>We've spent the last three years producing climate finance data at a fraction of the usual time and cost using AI. We started pre-GPT and have continued to evolve and rebuild our stack with the times. These days we are fully agentic, powered by Opus/Haiku, using Python/JS as best fits the job, but we ride the wave and aren't attached to the past. We are a small, focused, in-person, and technically capable team with deep industry connections.<p>We are looking for an early-career builder to take ownership of end-to-end data creation, and who can stay on their toes with a rapidly changing tech stack and ways of working as the models continue to evolve.<p>Reach out to our CEO, aleksi[at]climatealigned[dot]co, to discuss, or drop me a line (founding engineer).<p>Check out some of our past work at <a href="https://climatealigned.co" rel="nofollow">https://climatealigned.co</a> or touch base with any of the team with questions on LI.</p>
]]></description><pubDate>Tue, 03 Mar 2026 09:34:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47230207</link><dc:creator>azath92</dc:creator><comments>https://news.ycombinator.com/item?id=47230207</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47230207</guid></item><item><title><![CDATA[New comment by azath92 in "Claude Sonnet 4.6"]]></title><description><![CDATA[
<p>This whole comment thread is really echoing and adding to some thoughts I've had lately on the shift from considering LLMs as replacing engineering to make software (much of which is about integration, longevity, and customization of a general system) to LLMs as replacing buying software.<p>If most software is just used by me to do a specific task, then being able to make software for me to do that task will become the norm. Following that thought, we are going to see a drastic reduction in SaaS solutions, as many people who were buying a flexible toolbox for one use case, to use occasionally, just get an LLM to make them the script/software to do that task as and when they need it, without any concern for things like security, longevity, or ease of use by others (for better or for worse).<p>I guess what I'm circling around is this: if we define engineering as building the complex tools that have to interact with many other systems, persist, and be generally useful and understandable to many people, and we consider that many people don't actually need that complexity for their use of the system (the complexity arises from it needing to serve its purpose at huge scale over time), then maybe there will be less need for engineers, but perhaps first and foremost because the problems engineering is required to solve shrink once bespoke, focused solutions to people's problems are available on demand.<p>As an engineer I have often felt threatened by LLMs and agents of late, but I find that if I reframe it from agents replacing me to agents shifting which types of problems are even valuable to solve, it feels less threatening for some reason. I'll have to mull it more.</p>
]]></description><pubDate>Wed, 18 Feb 2026 09:23:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47058997</link><dc:creator>azath92</dc:creator><comments>https://news.ycombinator.com/item?id=47058997</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47058997</guid></item><item><title><![CDATA[New comment by azath92 in "The Adolescence of Technology"]]></title><description><![CDATA[
<p>I am continually surprised by how "voluntary actions taken by companies" gets brought up in discussions of the risks of AI without some nuance about why companies would take those actions. The paragraph on surgical action goes into about 5-10 times more detail on the potential issues with government regulation, implying to me that voluntary action is better. Even from someone at Anthropic, I would hope for further discussion.<p>I am genuinely curious to understand the incentives for companies who have the power to mitigate risk to actually do so. Are there good past examples of companies taking action harmful to their bottom line to mitigate the societal risk of harm from their products? My premise is that their primary motive is profit/growth, dictated by revenue or investment for mature and growth companies respectively (collectively, the "bottom line").<p>I'm only in my mid-30s, so I don't have much perspective on past examples of voluntary action of this sort from tech or pre-tech corporates where there was concern of harm. Probably too late in this thread for replies, but I'll think about it for the next time this comes up.</p>
]]></description><pubDate>Tue, 27 Jan 2026 11:32:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=46778624</link><dc:creator>azath92</dc:creator><comments>https://news.ycombinator.com/item?id=46778624</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46778624</guid></item><item><title><![CDATA[New comment by azath92 in "Photographing the hidden world of slime mould"]]></title><description><![CDATA[
<p>My understanding is that modern mobile phone cameras do heaps of "stacking" across multiple axes (focus, exposure, time, etc.) to compose the photo that saves onto your phone. I believe it's one of the reasons for the multiple cameras on most flagship phones, and each of them might take many "photos", or runs of data from their sensors, per "photo" you take. I'd love to see a good writeup of the process, but my gut says exactly what they do under the hood would be pretty "trade secret"-y.</p>
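As a toy illustration of why stacking helps at all (purely a sketch of the principle; real phone pipelines do far more, and the numbers here are made up): averaging N noisy readings of the same pixel shrinks the noise by roughly sqrt(N).

```python
import random
import statistics

# Toy sketch of exposure stacking: average 16 noisy readings of one true
# pixel value and compare the noise to a single reading. Illustrative only.
random.seed(0)
TRUE_PIXEL = 100.0

def noisy_frame() -> float:
    """One sensor reading: the true value plus gaussian noise."""
    return TRUE_PIXEL + random.gauss(0, 10)

single = [noisy_frame() for _ in range(1000)]
stacked = [statistics.mean(noisy_frame() for _ in range(16)) for _ in range(1000)]

# Averaging 16 frames cuts the noise standard deviation by about 4x (sqrt(16)).
print(statistics.stdev(single), statistics.stdev(stacked))
```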
]]></description><pubDate>Fri, 09 Jan 2026 12:52:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=46553328</link><dc:creator>azath92</dc:creator><comments>https://news.ycombinator.com/item?id=46553328</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46553328</guid></item><item><title><![CDATA[New comment by azath92 in "Programming languages used for music"]]></title><description><![CDATA[
<p>Almost an esolang, but Orca is an amazing example of spatial programming for music production (GH: <a href="https://github.com/hundredrabbits/Orca" rel="nofollow">https://github.com/hundredrabbits/Orca</a>, and a video to see it in action: <a href="https://www.youtube.com/watch?v=gSFrBFBd7vY" rel="nofollow">https://www.youtube.com/watch?v=gSFrBFBd7vY</a>).</p>
]]></description><pubDate>Mon, 22 Dec 2025 10:29:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=46352943</link><dc:creator>azath92</dc:creator><comments>https://news.ycombinator.com/item?id=46352943</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46352943</guid></item><item><title><![CDATA[New comment by azath92 in "LLM from scratch, part 28 – training a base model from scratch on an RTX 3090"]]></title><description><![CDATA[
<p>For small models this is for sure the way forward; there are some great small datasets out there (check out the TinyStories dataset, which limits vocabulary to what a young child would know but keeps the core reasoning inherent in even simple language: <a href="https://huggingface.co/datasets/roneneldan/TinyStories" rel="nofollow">https://huggingface.co/datasets/roneneldan/TinyStories</a> <a href="https://arxiv.org/abs/2305.07759" rel="nofollow">https://arxiv.org/abs/2305.07759</a>).<p>I have fewer concrete examples, but my understanding is that dataset curation is for sure how many improvements are gained at any model size. Unless you are building a frontier model, you can use a better model to help curate or generate that dataset; TinyStories was generated with GPT-4, for example.</p>
]]></description><pubDate>Tue, 09 Dec 2025 12:51:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=46204399</link><dc:creator>azath92</dc:creator><comments>https://news.ycombinator.com/item?id=46204399</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46204399</guid></item><item><title><![CDATA[Show HN: AI carbon and energy calculator – > pick greener cloud regions]]></title><description><![CDATA[
<p>We built a little LLM-use -> energy and carbon footprint calculator at work to spark discussion with our user base about the impact of using AI to solve problems in the climate space. A side effect of playing around with it is that we decreased our own product's carbon intensity 4x by switching to a lower-intensity region.<p>We often have people (rightly) asking how we justify our AI use, so we wanted to do the work to be able to speak to it.<p>The reason I wanted to show HN, though, is that after we updated it to include some sub-national regional granularity for data centers, I realised this was a super easy carbon intensity win when picking cloud regions, something I normally don't put much thought into.<p>us-east-1 -> us-west-2 was a 4x decrease in carbon intensity!<p>IDK about others (keen to hear), but I have no metric for picking regions for hosted LLM models, because the normal considerations like latency don't matter much: the models themselves dominate those metrics. So I usually just pick the first one.<p>But the tool got me thinking I could quickly, and with no tradeoff (provided the model exists there), switch to a lower carbon intensity data center.<p>If you play with the calculator you can easily see that an individual's use has to be suuuuuper sustained and large to make a difference compared to other personal activities, but when you have a product with many users, a small change like region can have a real impact! A super cool side effect of a tool meant to spark client discussion.<p>Is this something anyone even cares about? How much effort does it take to switch to, or pick, a lower-intensity region?</p>
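The region comparison boils down to simple arithmetic: footprint = tokens x energy per token x grid carbon intensity. A minimal sketch, where the energy figure and per-region intensities are illustrative assumptions rather than measured values:

```python
# Sketch: compare the carbon footprint of the same LLM workload in two cloud
# regions. All numbers are illustrative assumptions, not measured values.

ENERGY_KWH_PER_MTOK = 0.4  # assumed energy per million tokens served

# Assumed grid carbon intensity, kg CO2e per kWh, per region.
GRID_INTENSITY = {
    "us-east-1": 0.40,
    "us-west-2": 0.10,
}

def footprint_kg(tokens_millions: float, region: str) -> float:
    """kg CO2e for a workload of `tokens_millions` MTok served in `region`."""
    return tokens_millions * ENERGY_KWH_PER_MTOK * GRID_INTENSITY[region]

east = footprint_kg(1000, "us-east-1")
west = footprint_kg(1000, "us-west-2")
print(f"us-east-1: {east:.0f} kg, us-west-2: {west:.0f} kg, ratio: {east / west:.0f}x")
# → us-east-1: 160 kg, us-west-2: 40 kg, ratio: 4x
```

With these assumed intensities the same workload is 4x cleaner in the lower-intensity region, which is the shape of the win described above.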
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45694778">https://news.ycombinator.com/item?id=45694778</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Fri, 24 Oct 2025 14:04:14 +0000</pubDate><link>https://www.climatealigned.co/ai-footprint-calculator</link><dc:creator>azath92</dc:creator><comments>https://news.ycombinator.com/item?id=45694778</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45694778</guid></item><item><title><![CDATA[New comment by azath92 in "What's the strongest AI model you can train on a laptop in five minutes?"]]></title><description><![CDATA[
<p>Totally agree. One of the most interesting podcasts I've listened to in a while was from a couple of years ago, on the TinyStories paper and dataset (the author here used that dataset), which focuses on stories that only contain simple words and concepts (like bedtime stories for a 3-year-old), but which can be used to train smaller models to produce coherent English, with grammar, diversity, and reasoning.<p>The podcast itself, with one of the authors, was fantastic for explaining and discussing the capabilities of LLMs more broadly, using this small controlled research example.<p>As an aside: I don't know what the dataset is in the biological analogy; maybe the agar plate, a super simple and controlled environment in which to study simple organisms.<p>For ref:
- Podcast ep <a href="https://www.cognitiverevolution.ai/the-tiny-model-revolution-with-ronen-eldan-and-yuanzhi-li-of-microsoft-research/" rel="nofollow">https://www.cognitiverevolution.ai/the-tiny-model-revolution...</a>
- tinystories paper <a href="https://arxiv.org/abs/2305.07759" rel="nofollow">https://arxiv.org/abs/2305.07759</a></p>
]]></description><pubDate>Thu, 14 Aug 2025 15:24:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=44901522</link><dc:creator>azath92</dc:creator><comments>https://news.ycombinator.com/item?id=44901522</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44901522</guid></item><item><title><![CDATA[New comment by azath92 in "Jules, our asynchronous coding agent"]]></title><description><![CDATA[
<p>I'm not sure how this translates to React Native (AFAICT build chains for apps are less optimised), but using Vercel for deployment, and Neon for the DB if needed, I've really been digging the ability for any branch/commit/PR to be deployed to a live site I can preview.<p>Coming from the Python ecosystem, I've found the commit -> deployed code toolchain very easy, which for this kind of vibe coding really reduces friction when you are using it to explore functional features, many of which you will discard.<p>It moves the decision surface on the right thing to build to _after_ you have built it, which is quite interesting.<p>I will caveat this by saying this flow only works seamlessly if the feature is simple enough for the LLM to one-shot it, but for the right thing it's an interesting flow.</p>
]]></description><pubDate>Thu, 07 Aug 2025 13:17:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=44824096</link><dc:creator>azath92</dc:creator><comments>https://news.ycombinator.com/item?id=44824096</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44824096</guid></item></channel></rss>