<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: emn13</title><link>https://news.ycombinator.com/user?id=emn13</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 14 Apr 2026 09:50:47 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=emn13" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by emn13 in "The Impossible Prompt"]]></title><description><![CDATA[
<p>Create an image that displays two seven-pointed stars, two eight-pointed stars, and two nine-pointed stars. All stars are connected to each other, except for the ones with the same number of strands. The lines connecting the stars must NOT intersect.</p>
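A quick way to sanity-check the prompt's constraints (a sketch; modeling it as a graph is my reading, not necessarily the article's): treat the six stars as vertices and the required connections as edges, then test Euler's necessary planarity bound E &lt;= 3V - 6 for simple graphs.

```typescript
// Model the prompt as a graph: six stars (two each of 7-, 8-, 9-pointed),
// with an edge between every pair EXCEPT pairs with the same point count.
const stars: number[] = [7, 7, 8, 8, 9, 9];

let edges = 0;
for (let i = 0; i < stars.length; i++) {
  for (let j = i + 1; j < stars.length; j++) {
    if (stars[i] !== stars[j]) edges++; // connect only differing star types
  }
}

// Euler's bound for planar simple graphs: E <= 3V - 6 (necessary, not sufficient).
const v = stars.length;
console.log(`E=${edges}, bound=${3 * v - 6}`); // E=12, bound=12
```

The bound alone doesn't prove planarity, but this particular graph (the complete tripartite graph K2,2,2, i.e. the octahedron) is in fact planar, so the layout is mathematically satisfiable; the difficulty would lie in getting an image model to produce it.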
]]></description><pubDate>Thu, 27 Nov 2025 21:11:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=46073264</link><dc:creator>emn13</dc:creator><comments>https://news.ycombinator.com/item?id=46073264</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46073264</guid></item><item><title><![CDATA[The Impossible Prompt]]></title><description><![CDATA[
<p>Article URL: <a href="https://teodordyakov.github.io/the-impossible-promt/">https://teodordyakov.github.io/the-impossible-promt/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46073263">https://news.ycombinator.com/item?id=46073263</a></p>
<p>Points: 3</p>
<p># Comments: 1</p>
]]></description><pubDate>Thu, 27 Nov 2025 21:11:41 +0000</pubDate><link>https://teodordyakov.github.io/the-impossible-promt/</link><dc:creator>emn13</dc:creator><comments>https://news.ycombinator.com/item?id=46073263</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46073263</guid></item><item><title><![CDATA[New comment by emn13 in "Claude Skills are awesome, maybe a bigger deal than MCP"]]></title><description><![CDATA[
<p>I think it's not at all a marshmallow test; quite the opposite - docs used to be written way, way in advance of their consumption. The problem that implies is twofold. Firstly, and less significantly, it's just not a great return on investment to spend tons of effort now to maybe help slightly in the far future.<p>But the real problem with docs is that for MOST use cases, the audience and context of the readers matter HUGELY. Most docs are bad because we can't predict those. People waste ridiculous amounts of time writing docs that nobody reads or nobody needs, based on hypotheses about the future that turn out to be false.<p>And _that_ is completely different when you're writing context-window documents. These aren't really documents describing any codebase, or the context within which the codebase exists, in some timeless fashion; they're better understood as part of a _current_ plan for action on an acute, real concern. They're battle-tested the way docs only rarely are. And as a bonus, sure, they're retainable and might help for the next problem too, but that's not why they work; they work because they're useful in an almost testable way right away.<p>The exceptions to this pattern kind of prove the rule - people have for years done better at documenting isolatable dependencies, i.e. libraries - precisely because those happen to sit at boundaries where it's easier to make decent predictions about future usage, and often also because those docs might have a far larger readership, so it's more worth taking the risk that an incorrect hypothesis about the future wastes effort - the cost/benefit is skewed towards the benefit by sheer numbers and the kind of code it is.<p>Having said that, the dust hasn't settled on the best way to distill context like this. It'd be a mistake to overanalyze the current situation and conclude that documentation is certain to be the long-term answer - it's definitely helpful now, but it's certainly conceivable that more automated and structured representations might emerge, or forms better suited for machine consumption that look a little more alien to us than conventional docs.</p>
]]></description><pubDate>Sat, 18 Oct 2025 10:11:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=45626190</link><dc:creator>emn13</dc:creator><comments>https://news.ycombinator.com/item?id=45626190</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45626190</guid></item><item><title><![CDATA[New comment by emn13 in "Apple M5 chip"]]></title><description><![CDATA[
<p>Yep.  Still, I think it's a pretty decent benchmark in the sense that it's fairly short, quite repeatable, does have quite a few subtests, and isn't horribly different from the nebulous concept that is "typical workloads". It's suspiciously memory-latency bound, perhaps more than most workloads, but that's a quibble.  If they'd simply labelled it "lightly threaded" instead of "multithreaded", it would have been fine.<p>As it is, it's just clearly misleading to people that haven't somehow figured out that it's not really a great test of multithreaded throughput.</p>
]]></description><pubDate>Thu, 16 Oct 2025 21:58:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=45611193</link><dc:creator>emn13</dc:creator><comments>https://news.ycombinator.com/item?id=45611193</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45611193</guid></item><item><title><![CDATA[New comment by emn13 in "Apple M5 chip"]]></title><description><![CDATA[
<p>It's not trash - it's quite nice for its niche.  It's just not very scalable with cores, so it's best interpreted as a benchmark of lightly threaded workloads - like lots of typical consumer workloads are (gaming, web browsing, light office work).  Then again, it's not hard to find workloads that scale much better, and Geekbench 6 doesn't really have a benchmark for those.<p>For the first 8 threads or so, it's fine.  Once you hit 20 or so it's questionable, or at least that's my impression.</p>
]]></description><pubDate>Thu, 16 Oct 2025 14:27:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=45605775</link><dc:creator>emn13</dc:creator><comments>https://news.ycombinator.com/item?id=45605775</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45605775</guid></item><item><title><![CDATA[New comment by emn13 in "Upcoming Rust language features for kernel development"]]></title><description><![CDATA[
<p>I mean, reliably tracking ownership and therefore knowing that e.g. an aliased write must complete before a read is surely helpful?<p>It won't prevent all races, but it might help avoid mistakes in a few of them.  And concurrency is such a pain; any such machine-checked guarantees are probably nice to have for those dealing with them - caveat being that I'm not such a person.</p>
]]></description><pubDate>Thu, 16 Oct 2025 13:56:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=45605385</link><dc:creator>emn13</dc:creator><comments>https://news.ycombinator.com/item?id=45605385</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45605385</guid></item><item><title><![CDATA[New comment by emn13 in "Detect Electron apps on Mac that hasn't been updated to fix the system wide lag"]]></title><description><![CDATA[
<p>There's also the alternative of announcing this breakage publicly to Electron beforehand; and the alternative of having a hack and publicly announcing it will be removed in a year. There's even the alternative of just announcing the caveat at all, so your users aren't unwitting guinea pigs. If they don't want to support a million workarounds forever, they don't have to; it's not all or nothing.</p>
]]></description><pubDate>Thu, 02 Oct 2025 06:54:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=45446968</link><dc:creator>emn13</dc:creator><comments>https://news.ycombinator.com/item?id=45446968</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45446968</guid></item><item><title><![CDATA[New comment by emn13 in "Detect Electron apps on Mac that hasn't been updated to fix the system wide lag"]]></title><description><![CDATA[
<p>Put it this way: if I were in charge of a major OS, and one of the major app frameworks used on my OS hadn't been tested against my annual upgrade, I'd feel pretty embarrassed, even if there's a fig-leaf excuse why it's not my fault.<p>This doesn't exactly instill confidence in Apple's competence.</p>
]]></description><pubDate>Wed, 01 Oct 2025 14:25:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=45438100</link><dc:creator>emn13</dc:creator><comments>https://news.ycombinator.com/item?id=45438100</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45438100</guid></item><item><title><![CDATA[New comment by emn13 in "Solar panels + cold = A potential problem"]]></title><description><![CDATA[
<p>Hyper amusing, thanks for sharing!  Doesn't really improve the analogy, but fun quirk of history :-D.</p>
]]></description><pubDate>Mon, 29 Sep 2025 12:30:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=45412884</link><dc:creator>emn13</dc:creator><comments>https://news.ycombinator.com/item?id=45412884</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45412884</guid></item><item><title><![CDATA[New comment by emn13 in "Solar panels + cold = A potential problem"]]></title><description><![CDATA[
<p>So on the one hand we have a product which isn't even remotely designed for the use case (hamsters), and which during normal use shows obvious behaviour (cooking) that should imply risk to said hamsters.  On the other side, we have a product designed to be installed in an electrical system, which shows no signs during normal use that it's installed unsafely, and whose advertised specs are not actually safe for normal usage.<p>Whether or not the company in this case shares some or most of the blame with novice users - the analogy is not a great one.</p>
]]></description><pubDate>Sun, 28 Sep 2025 09:16:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=45402929</link><dc:creator>emn13</dc:creator><comments>https://news.ycombinator.com/item?id=45402929</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45402929</guid></item><item><title><![CDATA[New comment by emn13 in "Next.js is infuriating"]]></title><description><![CDATA[
<p>The author's examples of rough edges are, however, no better when hosted on Vercel. The architecture seems... overly clever, leading to all kinds of issues.<p>I'm sure commercial incentives would lead issues that affect paying (hosted) customers to be resolved better than those that only affect self-hosters, but that's not enough to explain this level of pain, especially not in issues that would affect paying customers just as much.</p>
]]></description><pubDate>Tue, 02 Sep 2025 13:01:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=45102563</link><dc:creator>emn13</dc:creator><comments>https://news.ycombinator.com/item?id=45102563</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45102563</guid></item><item><title><![CDATA[New comment by emn13 in "Beware of Fast-Math"]]></title><description><![CDATA[
<p>If you care about absolute accuracy, I'm skeptical you want floats at all. I'm sure it depends on the use case.<p>Whether it's the standard's fault or the language's fault for following the standard in terms of preventing auto-vectorization is splitting hairs; the whole point of the standard is to have predictable and usually fairly low-error ways of performing these operations, which only works when the order of operations is defined. That very aim is the problem; to the extent the standard is harmless when ordering guarantees don't exist, you're essentially applying some of those tricky -ffast-math sub-optimizations.<p>But to be clear, in any case: there are obviously cases where order of operations is relevant enough and accuracy-altering reorderings are not valid.  It's just that those are rare enough that for many of these features I'd much prefer that to be the opt-in behavior, not opt-out.  There's absolutely nothing wrong with having a classic IEEE 754 mode, and I expect it's an essential feature in some niche corner cases.<p>However, given the obviously huge application of massively parallel processors and algorithms that accept rounding errors (or sometimes conversely overly precise results!), clearly most software is willing to generally accept rounding errors to be able to run efficiently on modern chips.  It just so happens that none of the computer languages that rely on mapping floats to IEEE 754 floats in a straightforward fashion are any good at that, which seems like a bad trade-off.<p>There could be multiple types of floats instead; or code-local flags that delineate special sections that need precise ordering; or perhaps even expressions that clarify how much error the user is willing to accept, and then just let the compiler do some but not all transformations; and perhaps even other solutions.</p>
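A minimal illustration of why those ordering guarantees matter: floating-point addition is not associative, so the reassociation that -ffast-math-style optimizations (and vectorized reductions) perform can change results.

```typescript
// IEEE 754 double addition is not associative: the same three terms,
// summed in a different order, round differently.
const left = (0.1 + 0.2) + 0.3;  // 0.6000000000000001
const right = 0.1 + (0.2 + 0.3); // 0.6
console.log(left === right); // false
```

A compiler that honors the source ordering must compute `left`; a reassociating one may compute `right`, which is exactly the determinism-for-speed trade discussed above.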
]]></description><pubDate>Sat, 31 May 2025 18:13:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=44145981</link><dc:creator>emn13</dc:creator><comments>https://news.ycombinator.com/item?id=44145981</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44145981</guid></item><item><title><![CDATA[New comment by emn13 in "Beware of Fast-Math"]]></title><description><![CDATA[
<p>I get the feeling that the real problem here is the IEEE specs themselves.  They include a huge bunch of restrictions that each individually aren't relevant to something like 99.9% of floating point code, and probably even in aggregate not a single one is relevant to a large majority of code segments out in the wild. That doesn't mean they're not important - but some of these features should have been locally opt-in, not opt-out. And at the very least, standards need to evolve to support the hardware realities of today.<p>Not being able to auto-vectorize seems like a pretty critical bug given hardware trends that have been going on for decades now; on the other hand, sacrificing platform-independent determinism isn't a trivial cost to pay either.<p>I'm not familiar with the details of OpenCL and CUDA on this front - do they have some way to guarantee a specific order of operations such that code always has a predictable result on all platforms and nevertheless parallelizes well on a GPU?</p>
]]></description><pubDate>Sat, 31 May 2025 10:34:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=44143321</link><dc:creator>emn13</dc:creator><comments>https://news.ycombinator.com/item?id=44143321</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44143321</guid></item><item><title><![CDATA[New comment by emn13 in "Past, present, and future of Sorbet type syntax"]]></title><description><![CDATA[
<p>Yeah, before required properties/fields, C#'s nullability story was quite weak; they're a pretty critical part of making the annotations cover enough of a codebase to really matter. (Technically constructors could have done what required does, but that implies _tons_ of duplication and boilerplate if you have a non-trivial number of such classes, records, structs and properties/fields within them; not really viable.)<p>TypeScript's Partial can however do more than that - required means you can practically express a type that cannot be instantiated partially (without absurd amounts of boilerplate anyhow), but if you do, you can't _also_ express that same type partially initialized. There are lots of really boring everyday cases where partial initialization is very practical. Any code that collects various bits of required input, but needs to set aside and express the intermediate state of that data while it's being collected or in the event that collection fails, wants something like Partial.<p>E.g. if you're using the most common C# web platform, ASP.NET Core, to map inputs into a typed object, you're now forced either to express "semantically required but not type-system required" via some other path. Or, if you use C# required, you must choose between unsafe code that nevertheless allows access to objects that never had those properties initialized, or safe code, but then you can't access any of the rest of the input either, which is annoying for error handling.<p>TypeScript's type system could on the other hand express the notion that all or even just some of those properties are missing; it's even pretty easy to express the notion of a mapped type wherein all of the _values_ are replaced by strings - or, say, by a result type.
And flow-sensitive type analysis means that sometimes you don't even need any kind of extra type checks to "convert" from such a partial type into the fully initialized flavor; that's implicitly deduced simply because once all properties are statically known to be non-null, well, at that point in the code the object _is_ of the fully initialized type.<p>So yeah, C#'s nullability story is pretty decent really, but that doesn't mean it's perfect either.  I think it's important to mention stuff like Partial because sometimes features like this are looked at without considering the context. Most of these features sound neat in isolation, but are also quite useless in isolation. The real value is in how they allow you to express and change programs whilst simultaneously avoiding programmer error. Having a bit of unsafe code here and there isn't the end of the world, nor is a bit of boilerplate.  But if your language requires tons of it all over the place, well, then you're more likely to make stupid mistakes and less likely to have the compiler catch them. So how we deal with the intentional inflexibility of non-nullable reference types matters, at least, IMHO.<p>Also, this isn't intended to imply that TypeScript is "better".  It has even more common holes, ones that are also unfixable given where it came from and the essential nature of so much interop with type-unsafe JS, and a bunch of other challenges.  But in order to mitigate those challenges TS implemented various features, and then we're able to talk about what those features bring to the table and conversely how their absence affects other languages. Nor is "MOAR FEATURE" a free lunch; I'm sure anybody that's played with almost any language with heavy generics has experienced how complicated it can get. IIRC didn't somebody implement DOOM in the TS type system?  I mean, when your error messages are literally demonic, understanding the code may take a while ;-).</p>
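A sketch of the Partial&lt;&gt; pattern described above (the type and field names are illustrative, not from any real API): a user-defined type guard lets flow analysis "convert" the partially initialized shape into the fully initialized one, with no cast.

```typescript
interface UserInput {
  name: string;
  email: string;
}

// While input is still being collected, any field may be missing:
const draft: Partial<UserInput> = {};
draft.name = "Ada";
draft.email = "ada@example.com";

// A type guard checking completeness; in a true-branch, flow-sensitive
// analysis treats the value as a full UserInput.
function isComplete(d: Partial<UserInput>): d is UserInput {
  return d.name !== undefined && d.email !== undefined;
}

if (isComplete(draft)) {
  console.log(draft.email.toUpperCase()); // draft has type UserInput here
}
```

This is the typesafe version of the "POCO in the process of being initialized" scenario: the incomplete state is expressible, and the error path can still inspect whatever fields did arrive.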
]]></description><pubDate>Fri, 09 May 2025 21:38:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=43941151</link><dc:creator>emn13</dc:creator><comments>https://news.ycombinator.com/item?id=43941151</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43941151</guid></item><item><title><![CDATA[New comment by emn13 in "Past, present, and future of Sorbet type syntax"]]></title><description><![CDATA[
<p>I love building libraries, so having the chance to talk about the gotchas with things like this is a fun chance to reflect on what is and is not possible with the tools we have. I guess my favorite "feature" in C# is how willing they are to improve, and that many of the improvements really matter, especially when accumulated over the years.  A C# 13 codebase can be so much nicer than a C# 3 codebase... and faster and more portable too.  But nothing's perfect!</p>
]]></description><pubDate>Fri, 09 May 2025 18:13:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=43939619</link><dc:creator>emn13</dc:creator><comments>https://news.ycombinator.com/item?id=43939619</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43939619</guid></item><item><title><![CDATA[New comment by emn13 in "Past, present, and future of Sorbet type syntax"]]></title><description><![CDATA[
<p>"Recovered" sounds so binary.<p>I think it's pretty usuable now, but there is scarring. The solution would have been much nicer had it been around from day one; especially surrounding generics and constraints.<p>It's not _entirely_ sound, nor can it warn about most mistakes when those are in the "here-be-dragons" annotations in generic code.<p>The flow sensitive bit is quite nice, but not as powerful as in e.g. typescript, and sometimes the differences hurt.<p>It's got weird gotcha interactions with value-types, for instance but likely not limited to interaction with generics that aren't constrained to struct but _do_ allow nullable usage for ref types.<p>Support in reflection is present, but it's not a "real" type, and so everything works differently, and hence you'll see that code leveraging reflection that needs to deal with this kind of stuff tends to have special considerations for ref type vs. value-type nullabilty, and it often leaks out into API consumers too - not sure if that's just a practical limitation or a fundamental one, but it's very common anyhow.<p>There wasn't last I looked code that allowed runtime checking for incorrect nulls in non-nullable marked fields, which is particularly annoying if there's even an iota of not-yet annoted or incorrectly annotated code, including e.g. stuff like deserialization.<p>Related features like TS Partial<> are missing, and that means that expressing concepts like POCOs that are in the process of being initialized but aren't yet is a real pain; most code that does that in the wild is not typesafe.<p>Still, if you engage constructively and are willing to massage your patterns and habbits you can surely get like 99% type-checkable code, and that's still a really good help.</p>
]]></description><pubDate>Fri, 09 May 2025 18:10:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=43939597</link><dc:creator>emn13</dc:creator><comments>https://news.ycombinator.com/item?id=43939597</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43939597</guid></item><item><title><![CDATA[New comment by emn13 in "Past, present, and future of Sorbet type syntax"]]></title><description><![CDATA[
<p>While I'm most familiar with C#, and haven't used Ruby professionally for almost a decade now, I think we'd be better off looking at TypeScript, for at least 3 reasons, probably more.<p>1. Flow sensitivity: it's a sure thing that in a dynamic language people use coding conventions that fit naturally with the runtime-checked nature of those types.  That makes flow-sensitive typing really important.<p>2. Duck typing: dynamic languages, and certainly the Ruby codebases I knew, often use duck typing.  That works really well in something like TypeScript, including via really simple features such as type intersections and unions, but those features aren't present in C#.<p>3. Proof by survival: TypeScript is empirically a huge success.  They're doing something right when it comes to retroactively bolting static types onto a dynamic language. Almost certainly there are more things than I can think of off the top of my head.<p>Even though I prefer C# to TypeScript or Ruby _personally_ for most tasks, I don't think it's perfect, nor is it likely a good crib sheet for historically dynamic languages looking to add a bit of static typing - at least, IMHO.<p>Bit of a tangent, but there was a talk by Anders Hejlsberg as to why they're porting the TS compiler to Go (and implicitly not C#) - <a href="https://www.youtube.com/watch?v=10qowKUW82U" rel="nofollow">https://www.youtube.com/watch?v=10qowKUW82U</a> - I think it's worth recognizing the kind of stuff that goes into these choices that's inevitably not obvious at first glance.  It's not about the "best" language in a vacuum, it's about the best tool for _your_ job and _your_ team.</p>
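For point 2, a tiny sketch of what intersection and union types look like in TypeScript (the type names are illustrative), including the flow-sensitive narrowing mentioned in point 1:

```typescript
type Named = { name: string };
type Aged = { age: number };

// Intersection: the value must satisfy both shapes - statically checked duck typing.
type Person = Named & Aged;
const p: Person = { name: "Ada", age: 36 };

// Union: the value may be either shape; a typeof check narrows it flow-sensitively,
// so each branch sees only one member of the union.
type Id = string | number;
function describe(id: Id): string {
  return typeof id === "number" ? `#${id}` : id;
}

console.log(describe(42));    // "#42"
console.log(describe("ada")); // "ada"
```

Neither intersections, unions, nor this kind of narrowing has a direct equivalent in C#'s nominal type system, which is part of why TypeScript fits duck-typed codebases so naturally.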
]]></description><pubDate>Fri, 09 May 2025 17:23:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=43939171</link><dc:creator>emn13</dc:creator><comments>https://news.ycombinator.com/item?id=43939171</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43939171</guid></item><item><title><![CDATA[New comment by emn13 in "Cursor IDE support hallucinates lockout policy, causes user cancellations"]]></title><description><![CDATA[
<p>Of course they had a choice: they could have stuck with Google Maps for longer, and they probably also could have invested more in data and UI beforehand. They could have launched a submarine, non-Apple-branded product to test the waters.  They could likely have done other things we haven't thought of here, in this thread.<p>Quite plausibly they just didn't realize how rocky the start would be, or perhaps they valued that immediate strategic autonomy in the short term more than we think, and willingly chose to take the hit to their reputation rather than wait.<p>Regardless, they had choices.</p>
]]></description><pubDate>Wed, 16 Apr 2025 09:10:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=43703202</link><dc:creator>emn13</dc:creator><comments>https://news.ycombinator.com/item?id=43703202</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43703202</guid></item><item><title><![CDATA[New comment by emn13 in "Cursor IDE support hallucinates lockout policy, causes user cancellations"]]></title><description><![CDATA[
<p>While some of what you say is an interesting thought experiment, I think the second half of this argument has, as you'd put it, low symbolic coherence and low plausibility.<p>Recognizing the relevance of coherence and plausibility does not need to imply that other aspects are any less relevant.  Redefining truth merely because coherence is important and sometimes misinterpreted is not at all reasonable.<p>Logically, a falsehood can validly be derived from assumptions when those assumptions are false. That simple reasoning step alone is sufficient to explain how a coherent-looking reasoning chain can result in incorrect conclusions.  Also, there are other ways a coherent-looking reasoning chain can fail. What you're saying is just not a convincing argument that we need to redefine what truth is.</p>
]]></description><pubDate>Wed, 16 Apr 2025 09:04:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=43703165</link><dc:creator>emn13</dc:creator><comments>https://news.ycombinator.com/item?id=43703165</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43703165</guid></item><item><title><![CDATA[New comment by emn13 in "AI agents: Less capability, more reliability, please"]]></title><description><![CDATA[
<p>Perhaps the solution(s) need to focus less on output quality, and more on having a solid process for dealing with errors.  Think undo, containers, git, CRDTs or whatever, rather than zero tolerance for errors. That probably also means some kind of review for the irreversible bits of any process, and perhaps even process changes where possible to make common processes more reversible (which sounds like an extreme challenge in some cases).<p>I can't imagine we're anywhere even close to the kind of perfection required not to need something like this - if it's even possible.  Humans use all kinds of review and audit processes precisely because perfection is rarely attainable, and that might be fundamental.</p>
]]></description><pubDate>Mon, 31 Mar 2025 15:26:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=43536142</link><dc:creator>emn13</dc:creator><comments>https://news.ycombinator.com/item?id=43536142</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43536142</guid></item></channel></rss>