<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: CHY872</title><link>https://news.ycombinator.com/user?id=CHY872</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 17 Apr 2026 13:13:56 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=CHY872" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by CHY872 in "'Attention is all you need' coauthor says he's 'sick' of transformers"]]></title><description><![CDATA[
<p>In computer vision, transformers have basically taken over most perception fields. If you look at paperswithcode benchmarks, it’s common to find 10/10 recent leaders being transformer-based on common CV problems. Note, I’m not talking about VLMs here, just small ViTs with a few million parameters. YOLOs and other CNNs are still hanging around for detection, but it’s only a matter of time.</p>
]]></description><pubDate>Fri, 24 Oct 2025 20:19:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=45698661</link><dc:creator>CHY872</dc:creator><comments>https://news.ycombinator.com/item?id=45698661</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45698661</guid></item><item><title><![CDATA[New comment by CHY872 in "The Speed of VITs and CNNs"]]></title><description><![CDATA[
<p>The article basically argues that you would expect similarly good results with subsampling in practice. E.g. there’s no need to process at 1920x1080 when you can do 960x540. Separately, you can break many problems down into smaller tiles and get similar-quality results without the compute overheads of a high-res ViT.</p>
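To put rough numbers on the subsampling point - this is a sketch, with class and method names of my own; it assumes a patch-16 ViT and pads 1080 to 1088 so the patch size divides evenly:

```java
public class VitCost {
    // Token count for a ViT with square patches; assumes dims divide by patch.
    static long tokens(int h, int w, int patch) {
        return (long) (h / patch) * (w / patch);
    }

    // Self-attention cost grows roughly quadratically with token count.
    static double attnRatio(long n1, long n2) {
        double r = (double) n1 / n2;
        return r * r;
    }

    public static void main(String[] args) {
        long full = tokens(1920, 1088, 16); // 1080 padded up to 1088
        long half = tokens(960, 544, 16);
        System.out.println(full / half);                    // 4x fewer tokens
        System.out.printf("%.0f%n", attnRatio(full, half)); // 16x less attention compute
    }
}
```

Halving each dimension quarters the token count and cuts the quadratic attention term by roughly 16x, which is the headroom that subsampling and tiling are spending.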
]]></description><pubDate>Sun, 04 May 2025 21:11:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=43889627</link><dc:creator>CHY872</dc:creator><comments>https://news.ycombinator.com/item?id=43889627</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43889627</guid></item><item><title><![CDATA[New comment by CHY872 in "Signal to leave Sweden if backdoor law passes"]]></title><description><![CDATA[
<p>Archive formats are hard to make reproducible because there are lots of ways of making different yet equivalent archives. So it’s not surprising to me that someone would fall at this hurdle and find it frustrating to resolve. Nix defined its own archive format (NAR) to avoid this exact problem.</p>
]]></description><pubDate>Tue, 25 Feb 2025 14:43:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=43172465</link><dc:creator>CHY872</dc:creator><comments>https://news.ycombinator.com/item?id=43172465</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43172465</guid></item><item><title><![CDATA[New comment by CHY872 in "Why does FM sound better than AM?"]]></title><description><![CDATA[
<p>It’s not immediately clear that Shannon’s theorem is a good point of comparison here, since it’s only recently that coding schemes have really approached the Shannon limits, and FM and AM do not use these.<p>Even if one does assume a Shannon-perfect coding scheme, as the signal-to-noise ratio worsens, the benefits of spreading a signal across a higher bandwidth fade. Furthermore, most coding schemes are at their least efficient as the signal-to-noise ratio decreases and messages start to be too garbled to decode well.<p>I’d additionally note that folks get near the Shannon noise limit _through_ ‘magic noise rejection’ (aka turbo and LDPC codes). It’s therefore not obvious that FM isn’t gaining clarity due to a noise-rejection mechanic. The ‘capture effect’ is well described as an interference-reducing mechanism.<p>Empirically, radio manufacturers who do produce sophisticated long-range radios usually advertise a longer range when spreading available power across a narrower rather than a wider bandwidth.</p>
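The diminishing-returns point can be made numerically with the Shannon capacity formula C = B·log2(1 + P/(N0·B)). This is a hedged sketch, not a model of real receivers: the power and noise-density values are arbitrary, and 10 kHz / 200 kHz merely stand in for AM-like vs FM-like channel widths:

```java
public class ShannonSketch {
    // Shannon capacity in bit/s for bandwidth b (Hz), received power p (W),
    // and noise spectral density n0 (W/Hz): C = b * log2(1 + p / (n0 * b)).
    static double capacity(double b, double p, double n0) {
        return b * Math.log(1 + p / (n0 * b)) / Math.log(2);
    }

    public static void main(String[] args) {
        double p = 1e-12;  // received power (arbitrary illustrative value)
        double n0 = 1e-17; // noise spectral density (arbitrary)
        double cNarrow = capacity(10_000, p, n0);  // AM-like 10 kHz channel
        double cWide = capacity(200_000, p, n0);   // FM-like 200 kHz channel
        // 20x the bandwidth buys only ~3.4x the capacity at these power levels,
        // and the gap closes further as the SNR worsens.
        System.out.printf("narrow: %.0f bit/s, wide: %.0f bit/s%n", cNarrow, cWide);
    }
}
```

As B grows with P fixed, C approaches the finite limit P/(N0·ln 2), so spreading a fixed power ever wider stops paying.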
]]></description><pubDate>Mon, 14 Oct 2024 11:28:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=41836495</link><dc:creator>CHY872</dc:creator><comments>https://news.ycombinator.com/item?id=41836495</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41836495</guid></item><item><title><![CDATA[New comment by CHY872 in "Ask HN: What's an appropriate compensation counter offer in London 2024?"]]></title><description><![CDATA[
<p>No, you (generally) pay income tax on the current value when you receive it, and then capital gains tax on any change in value.<p>Essentially, if I pay you £15k for some services, you pay income tax on that £15k. If I buy you a car worth £15k, the taxman still treats it as £15k of income. Same with equity.<p>How it can work is that you can be granted shares and pay the taxes at the time of granting (which for a founder is roughly zero), though that might not be nice for an employee.<p>You can also give an employee options, and this can get complicated (single or double vest).<p>But in general, for any equity instrument in a stock plan, you get charged income tax at one stage, then capital gains tax at a later stage, and you can trade off when you want to trigger each. In this case, the employer has gone for a model where income tax is deferred as late as possible.</p>
]]></description><pubDate>Sat, 27 Jul 2024 18:41:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=41088457</link><dc:creator>CHY872</dc:creator><comments>https://news.ycombinator.com/item?id=41088457</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41088457</guid></item><item><title><![CDATA[New comment by CHY872 in "Ask HN: What's an appropriate compensation counter offer in London 2024?"]]></title><description><![CDATA[
<p>Firstly, work out how much your 0.2% equity is likely to convert into. You'll probably pay income tax and employer's NI on it (and in one year!), so you'll likely end up paying 55% tax on it. How much do the founders think the company is worth right now? If it's close to £500M, that's a big part of your comp. If it's close to £50M, it's not.<p>Next, work out where you want to be in terms of comp in a few years, rather than thinking of how to optimise the cash right now. For example, I'd stop worrying about your tax-free allowance gradually disappearing, and instead try to work out how to get it all to be gone. In 5 years, the person who makes £120k is £40k better off than the person who makes £100k, after tax. That... sounds worth it. And it's usually easier for the person who's getting paid £120k to get to £130k than it is for the person who's getting paid £100k. This is to say, being in a high tax bracket is a benefit, not a curse.<p>And then, it's probably worth noting - this isn't directly their money, especially if they're looking to sell. It's probably worth having a conversation like, 'what would I need to do in order to justify £100k/year?'. Or, alternatively, negotiating on the vesting of your stock, since that's effectively free. If they think the company is going to be sold in the next few years, that's a relatively small giveaway for you. If the company's grown a lot in four years, it's unlikely a significant increase in stock is on the table.<p>Don't overestimate how long it'd take a good new person to catch up. I've rarely seen a role where a new person can't be effective within 6 months.</p>
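The £40k figure checks out arithmetically. A sketch, assuming the 40% higher rate and the standard £1-of-personal-allowance-lost-per-£2-over-£100k taper, ignoring NI (class and method names are mine):

```java
public class TaperSketch {
    // Net yearly gain going from £100k to £120k gross: 40% higher rate,
    // personal allowance tapered at £1 per £2 over £100k, NI ignored.
    static double extraNetPerYear() {
        double extraGross = 20_000;
        double higherRateTax = 0.40 * extraGross;          // £8,000
        double allowanceLost = extraGross / 2;             // £10,000 of allowance gone
        double taxOnLostAllowance = 0.40 * allowanceLost;  // £4,000 more tax
        return extraGross - higherRateTax - taxOnLostAllowance; // £8,000/yr net
    }

    public static void main(String[] args) {
        // Effective marginal rate in the taper band is 60%; over 5 years
        // the £120k earner nets £40k more than the £100k earner.
        System.out.printf("£%.0f/yr net, £%.0f over 5 years%n",
                extraNetPerYear(), 5 * extraNetPerYear());
    }
}
```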
]]></description><pubDate>Sat, 27 Jul 2024 16:10:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=41087445</link><dc:creator>CHY872</dc:creator><comments>https://news.ycombinator.com/item?id=41087445</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41087445</guid></item><item><title><![CDATA[New comment by CHY872 in "Batteries as a Military Enabler"]]></title><description><![CDATA[
<p>Probably a few factors.<p>1. Scaling. You want to reap the rewards of someone else investing billions, and while ICE engines are built by the hundred million every year, most of them are much bigger than 50cc.
2. Tolerances, as you say. I know, for example, that jet engines have low efficiency at small sizes because efficiency is driven by the tip clearances around certain rotating parts, and those clearances are relatively larger at small scales.
3. Certain parts that need miniaturisation are disproportionately expensive on smaller engines. For example, a 50cc engine would typically have a carburettor, a bigger engine fuel injection; a fuel injection system would be a significantly larger share of the cost of a small engine.
4. Some parts are just harder to miniaturise. For example, small turbochargers have to work harder and at much higher RPMs to achieve the same boost, due to area scaling quadratically with diameter.</p>
]]></description><pubDate>Sun, 23 Jun 2024 17:19:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=40768982</link><dc:creator>CHY872</dc:creator><comments>https://news.ycombinator.com/item?id=40768982</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40768982</guid></item><item><title><![CDATA[New comment by CHY872 in "Batteries as a Military Enabler"]]></title><description><![CDATA[
<p>Battery capacity scales _broadly_ linearly with mass; 10kg of batteries will give you 100x the capacity of 0.1kg. This isn't the case with generators.<p>Under 5kg of batteries, engines basically can't compete. The smallest viable petrol engines (you want engines made in large quantities) are around 4kg and need fuel, so you end up in a situation in which 4kg of engine and 1kg of fuel is as useful as 5kg of batteries (for motors of that size), but every subsequent 1kg of fuel is then also as useful as 5kg of batteries. You can't really drop this down much, though, as a 1kg engine is much less useful than 1kg of batteries, and a drone with 5kg of batteries is really a very large drone.<p>Drones are additionally typically very small and light, with mass at an absolute premium. A typical quadcopter will weigh under a kilogram and have maybe 200g of batteries for 20-30 minutes of endurance. A drone with a 2.5m wingspan will typically have room for maybe 1-2kg of extra payload, and an engine will not fit into the battery slot.<p>Furthermore, they are extremely sensitive to weight-balance issues.<p>This is to say, once you're in the world where you want chemical fuel, you might as well design a drone for it, rather than trying to retrofit. The mass of retrofitting will mess up your prior drone design, and petrol changes the dynamics of what you're trying to do enough that you might as well just do it all differently.<p>Essentially, if you're making a 1-10kg drone, you want battery. If you're making a 10-20kg drone, you might want battery, you might want petrol. Above 20kg, you probably want petrol.</p>
]]></description><pubDate>Sun, 23 Jun 2024 16:54:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=40768805</link><dc:creator>CHY872</dc:creator><comments>https://news.ycombinator.com/item?id=40768805</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40768805</guid></item><item><title><![CDATA[New comment by CHY872 in "LLM-generated code must not be committed without prior written approval by core"]]></title><description><![CDATA[
<p>'Please translate this 256-bit AVX algorithm to Arm NEON' is absolutely a prompt I'd give ChatGPT, and similar prompts have revealed new and useful intrinsics. Of course, the results are to be checked.<p>Assurance is a complex topic, and any safety-critical device should have a carefully-thought-through architecture and a rigorous testing program which minimises the risk of incidents. It therefore seems scarcely relevant here, beyond the fact that a well-defined delivery system should be able to handle multiple human errors during implementation without crucial failure modes occurring.</p>
]]></description><pubDate>Sat, 18 May 2024 15:30:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=40399816</link><dc:creator>CHY872</dc:creator><comments>https://news.ycombinator.com/item?id=40399816</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40399816</guid></item><item><title><![CDATA[New comment by CHY872 in "LLM-generated code must not be committed without prior written approval by core"]]></title><description><![CDATA[
<p>Yes, if you take a problem you don't understand, ask a GPT to write a solution, do nothing to check the solution, trust it blindly, and then use it for some safety-critical problem, you're playing with fire. But there's a spectrum, and please don't assume I'm at that end of it.<p>Lots of the time you understand the problem, but the problem is repetitive. Parsing a weird file format might well be that. Beyond that, there are solutions that are easily checked: for example, if I ask ChatGPT to optimise an algorithm for a certain CPU cache, I can easily read whether it did that. And then, there are parts of a software job that are crucial and subtle, and parts that are not.<p>As a practitioner, that traditionally leads to a shift in the focus and speed with which you approach a task - some pieces of code are 100 lines that took you 2 weeks and were hard fought; some are 2000 lines which you wrote in a day.<p>Lastly, so much of solid software is being able to understand a probably-unfamiliar domain, and ChatGPT can be a great buddy in terms of gaining problem context and finding the limits of your own understanding.<p>I don't use Copilot-like things, but I've found ChatGPT to be a massive enabler in terms of being able to be productive in unfamiliar problem spaces.</p>
]]></description><pubDate>Sat, 18 May 2024 15:24:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=40399767</link><dc:creator>CHY872</dc:creator><comments>https://news.ycombinator.com/item?id=40399767</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40399767</guid></item><item><title><![CDATA[New comment by CHY872 in "LLM-generated code must not be committed without prior written approval by core"]]></title><description><![CDATA[
<p>'My customer has given me the documentation for arcane, antiquated format X (insert PDF here - but it includes 24-bit integers, hex-encoded data, semantically significant whitespace). Here is a sample of the message format, and this struct should represent the contents. Please write me a Python parser which takes an input file in the format and provides the output. In particular, given input X, the output should be equivalent to Y. There should be a set of unit tests for important functionality, which should explain to a reader what is complicated about the format'.
is something that ChatGPT-4 will just drop a working solution to in a few seconds.<p>If your job is to make an accurate parser for it, you probably want to hand-code it. If your job is to make sense of the data the customer has provided, the format is merely an impediment to your actual job, and ChatGPT has you covered. Yes, there'll be mistakes. But ChatGPT can do in a few seconds what'd take you hours.</p>
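For flavour, the fiddly bit of a format like that - say, a hex-encoded, big-endian, signed 24-bit integer - is only a few lines, though note this is my invented illustration (in Java rather than the Python the prompt asks for; the endianness and field width are assumptions, not the customer's actual format):

```java
public class Int24Parser {
    // Decode a hex-encoded, big-endian, signed 24-bit integer,
    // e.g. "FFFFFF" -> -1, "000001" -> 1.
    static int parseInt24(String hex) {
        if (hex.length() != 6) {
            throw new IllegalArgumentException("expected 6 hex digits, got: " + hex);
        }
        int raw = Integer.parseInt(hex, 16); // 0 .. 0xFFFFFF
        // Sign-extend from bit 23.
        return (raw & 0x800000) != 0 ? raw - 0x1000000 : raw;
    }

    public static void main(String[] args) {
        System.out.println(parseInt24("000001")); // 1
        System.out.println(parseInt24("FFFFFF")); // -1
        System.out.println(parseInt24("800000")); // -8388608
    }
}
```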
]]></description><pubDate>Sat, 18 May 2024 12:40:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=40398525</link><dc:creator>CHY872</dc:creator><comments>https://news.ycombinator.com/item?id=40398525</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40398525</guid></item><item><title><![CDATA[New comment by CHY872 in "Java virtual threads hit with pinning issue"]]></title><description><![CDATA[
<p>The specific edge case which is most annoying here is that locking a Java object with synchronized converts all the non-blocking calls that Loom does further down the stack into blocking ones, which then lock up your carrier threads. So you get this action-at-a-distance thing.<p>E.g.<p><pre><code>    for (int i = 0; i < 500; i++) {
        Thread.ofVirtual().start(() -> {
            synchronized (new Object()) {
                try {
                    Thread.sleep(100_000);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
    }
</code></pre>
will blow up your JVM (modulo some compensation mechanisms that only kinda work), and that's odd.</p>
]]></description><pubDate>Sun, 25 Feb 2024 22:45:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=39505706</link><dc:creator>CHY872</dc:creator><comments>https://news.ycombinator.com/item?id=39505706</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39505706</guid></item><item><title><![CDATA[New comment by CHY872 in "Java virtual threads hit with pinning issue"]]></title><description><![CDATA[
<p>I think this is a bit more subtle and nasty than bog-standard priority inversion. Specifically, with present virtual threads, if you take out a lock with synchronized and then make a normally-non-blocking asynchronous API call while holding the lock, you block one of your very few OS threads, because the virtual thread can't now be migrated off the OS thread.<p>IMO the compounding is what makes it nasty. E.g. you have a function `doSomething` which does some RPC, and that's all nicely non-blocking. But someone called map.computeIfAbsent(x, k -> doSomething(k)), and that uses synchronized on the inside, so now your non-blocking API calls all magically became blocking, no further action required.</p>
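A minimal sketch of the compound case (JDK 21 semantics - JEP 491 in JDK 24 removes most synchronized-related pinning; the names below are mine). On JDK 21, running with -Djdk.tracePinnedThreads=full reports the pin, because computeIfAbsent holds a bin monitor while the stand-in "RPC" sleeps:

```java
import java.util.concurrent.ConcurrentHashMap;

public class PinningDemo {
    static final ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();

    // ConcurrentHashMap.computeIfAbsent locks the map bin internally with
    // synchronized, so on JDK 21 the sleep inside the mapping function pins
    // the virtual thread to its carrier for the whole duration.
    static String fetch(String key) {
        Thread t = Thread.ofVirtual().start(() ->
            cache.computeIfAbsent(key, k -> {
                try {
                    Thread.sleep(100); // stand-in for a "non-blocking" RPC
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                return "value-for-" + k;
            }));
        try {
            t.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return cache.get(key);
    }

    public static void main(String[] args) {
        System.out.println(fetch("key")); // value-for-key
    }
}
```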
]]></description><pubDate>Sun, 25 Feb 2024 22:42:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=39505685</link><dc:creator>CHY872</dc:creator><comments>https://news.ycombinator.com/item?id=39505685</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39505685</guid></item><item><title><![CDATA[New comment by CHY872 in "Java virtual threads hit with pinning issue"]]></title><description><![CDATA[
<p>Nah, it's not normal or really documented. Normally when languages have async, they end up with dual primitives - the synchronous one and the asynchronous one - and everyone learns that if they do the blocking operation on the async thread, they deserve their deadlock. Generally you then end up with two variants of the API, with the standard function-colouring problem where you can call lockBlocking() from a non-async context, lockAsync() from an async one, and woe betide you if you get mixed up. Ideally your language can sometimes help you avoid these issues (e.g. with Rust you can't hold a non-async lock over await points).<p>Java instead did a big thing where they made all the primitives work fine in both cases with one API, something which is honestly really hard and reflects the level of thought put into modern Java features. But there's a long tail of stuff that's still getting cleaned up (e.g. various weird I/O APIs), and honestly I think it _is_ weird that an actual language <i>keyword</i> made it into the long tail.<p>It's extra fun that it's actually quite hard to hit performance problems as a result of this, as the JVM will detect that it's getting into this state and boot up threads to compensate.</p>
]]></description><pubDate>Sun, 25 Feb 2024 22:32:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=39505602</link><dc:creator>CHY872</dc:creator><comments>https://news.ycombinator.com/item?id=39505602</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39505602</guid></item><item><title><![CDATA[New comment by CHY872 in "Java virtual threads hit with pinning issue"]]></title><description><![CDATA[
<p>I think this one's weird because the language puts so much effort into making it hard to hit this that you won't notice until it's a gigantic problem. For me, this was enough of a problem that I wrote a Java agent which patches the method that blocks the carrier thread to just throw (ByteBuddy makes it easy!).<p>Like, it makes sense that when you synchronize on an object, there's room for contention. It seems weird that synchronizing on an _uncontended_ object can cause contention. But that's what this is. The behaviour is that you have a target of numCores carrier threads, and if someone synchronizes on an object and then does a <i>non-blocking</i> sleep, it's now blocking, because synchronizing upgraded the non-blocking sleep into a blocking one.<p>So basically, when you hit this issue, it's because not only has the bad thing been happening, it's been happening badly enough that all the compensation mechanisms have failed.<p>It's just weird that a whole language-level keyword behaves this badly.</p>
]]></description><pubDate>Sun, 25 Feb 2024 22:24:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=39505530</link><dc:creator>CHY872</dc:creator><comments>https://news.ycombinator.com/item?id=39505530</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39505530</guid></item><item><title><![CDATA[New comment by CHY872 in "Java virtual threads hit with pinning issue"]]></title><description><![CDATA[
<p>I've been coding in Java for a bit longer than that, and in my memory it's always been known that ReentrantLock is _faster_, but synchronized has better language-level support (as in, the language won't let you forget to unlock, whereas with locks you need to remember your finally block). And then, unlike ReentrantLock, synchronized uses no extra memory, which is good for things that end up being uncontended; and in both cases, the performance doesn't matter if the lock is uncontended, which most of mine were.<p>There are of course things like Error Prone that statically check that you unlocked your lock, but bugs are still possible.</p>
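The trade-off in code (a sketch with invented names): synchronized cannot leak the monitor, while ReentrantLock compiles fine without the finally and simply leaks the lock later:

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockStyles {
    private final Object monitor = new Object();
    private final ReentrantLock lock = new ReentrantLock();
    private int count = 0;

    // synchronized: the language releases the monitor for you, even on exception.
    int incrementSynchronized() {
        synchronized (monitor) {
            return ++count;
        }
    }

    // ReentrantLock: unlock is on you; forget the finally and the lock leaks.
    int incrementWithLock() {
        lock.lock();
        try {
            return ++count;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        LockStyles s = new LockStyles();
        System.out.println(s.incrementSynchronized()); // 1
        System.out.println(s.incrementWithLock());     // 2
    }
}
```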
]]></description><pubDate>Sun, 25 Feb 2024 22:09:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=39505390</link><dc:creator>CHY872</dc:creator><comments>https://news.ycombinator.com/item?id=39505390</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39505390</guid></item><item><title><![CDATA[New comment by CHY872 in "Meta's new LLM-based test generator"]]></title><description><![CDATA[
<p>bazhenov/tango does something like this for performance tests: to cancel out drifting system behaviour, you run the old and new implementations at the same time and compare them pairwise.</p>
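The idea, sketched in Java (tango itself is a Rust harness, and everything below is my invention rather than its API): interleave the two candidates within the same run, so drift in machine state hits both roughly equally and the _difference_ stays meaningful even when the absolute numbers wander:

```java
import java.util.function.IntSupplier;

public class PairedBench {
    // Time one invocation of f in nanoseconds.
    static long timeNanos(IntSupplier f) {
        long t0 = System.nanoTime();
        f.getAsInt();
        return System.nanoTime() - t0;
    }

    // Alternate old and new within the same loop instead of batching each,
    // so slow environmental drift affects both totals about equally.
    static long[] run(IntSupplier oldImpl, IntSupplier newImpl, int rounds) {
        long oldTotal = 0, newTotal = 0;
        for (int i = 0; i < rounds; i++) {
            oldTotal += timeNanos(oldImpl);
            newTotal += timeNanos(newImpl);
        }
        return new long[] {oldTotal, newTotal};
    }

    public static void main(String[] args) {
        IntSupplier slow = () -> { int s = 0; for (int i = 0; i < 1_000_000; i++) s += i; return s; };
        IntSupplier fast = () -> { int s = 0; for (int i = 0; i < 100_000; i++) s += i; return s; };
        long[] r = run(slow, fast, 50);
        System.out.println(r[0] > r[1]); // usually true: slow does 10x the work
    }
}
```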
]]></description><pubDate>Sat, 24 Feb 2024 13:31:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=39491348</link><dc:creator>CHY872</dc:creator><comments>https://news.ycombinator.com/item?id=39491348</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39491348</guid></item><item><title><![CDATA[New comment by CHY872 in "Microsoft seeks Rust developers to rewrite core C# code"]]></title><description><![CDATA[
<p>C# has a garbage collector specifically for tracking memory, but lifetimes are more broadly useful.<p>For example, Rust lifetimes (C++ gets similar effects via RAII, afaik) can be used to suitably scope the lifetimes of mutexes, to have temporary folders which are deleted when they go out of scope, to require that a connection pool is destroyed _after_ the last connection inside it is returned, etc., etc.<p>Mostly, garbage-collected languages do a bad job of cleaning up objects which refer to resources held elsewhere. Java had persistent issues with direct ByteBuffers (which were wrappers around malloc (but not free!)). Locks are easily held too long. File handles are easily left open. And depending on your GC settings, that file descriptor that's holding a 10GB file around may not get cleaned up for hours.<p>Refcounted languages can be somewhat better, but they don't avoid the bug, they just mitigate the effects.</p>
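In Java, the escape hatch for this is try-with-resources, which gives scope-bound (if not compiler-enforced) cleanup. A sketch of the temporary-folder example - the class here is my own, not a library API:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Deterministic, scope-bound cleanup in a GC'd language: the temp directory
// is deleted when the try block exits, not whenever the GC gets around to it.
public class ScopedTempDir implements AutoCloseable {
    final Path dir;

    ScopedTempDir() {
        try {
            dir = Files.createTempDirectory("scoped-");
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    @Override
    public void close() {
        try {
            Files.deleteIfExists(dir); // sketch assumes the directory is empty
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        Path seen;
        try (ScopedTempDir tmp = new ScopedTempDir()) {
            seen = tmp.dir;
            System.out.println(Files.exists(seen)); // true
        }
        System.out.println(Files.exists(seen)); // false - gone at scope exit
    }
}
```

Unlike a Rust lifetime, nothing stops you from forgetting the try block; the cleanup is deterministic only if you opt in.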
]]></description><pubDate>Sat, 03 Feb 2024 17:25:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=39242537</link><dc:creator>CHY872</dc:creator><comments>https://news.ycombinator.com/item?id=39242537</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39242537</guid></item><item><title><![CDATA[New comment by CHY872 in "If you can't reproduce the model then it's not open-source"]]></title><description><![CDATA[
<p>And that's obviously fun, because with LLMs you have the LLM itself, which cost hundreds of thousands of dollars in compute to train, but given you have the weights it's eminently fine-tunable. So it's actually not really like Linux - rather, it's closer to something like a car, where you had no hope of making it in the first place, but now that you have it, maybe you can modify it.</p>
]]></description><pubDate>Wed, 17 Jan 2024 21:53:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=39034044</link><dc:creator>CHY872</dc:creator><comments>https://news.ycombinator.com/item?id=39034044</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39034044</guid></item><item><title><![CDATA[New comment by CHY872 in "SD4J – Stable Diffusion pipeline in Java using ONNX Runtime"]]></title><description><![CDATA[
<p>Java 5 and Java 8 were both very big - generics in 5, lambdas in 8. 6 and 7 were iterative in comparison.<p>There were many other important changes over that timeframe, but generics and lambdas fundamentally changed how you use the language - Java 4 is not the same language as 5, and the same goes for 7 and 8. This is not the case for the 6 and 7 releases.</p>
]]></description><pubDate>Mon, 01 Jan 2024 20:34:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=38835115</link><dc:creator>CHY872</dc:creator><comments>https://news.ycombinator.com/item?id=38835115</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38835115</guid></item></channel></rss>