<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: jsmith45</title><link>https://news.ycombinator.com/user?id=jsmith45</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 17 Apr 2026 05:58:38 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=jsmith45" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by jsmith45 in "Pretty Fish: A better mermaid diagram editor"]]></title><description><![CDATA[
<p>Yeah, as far as I know, you need to define a custom theme to change pie chart colors. You can prepend the chart with an initialization directive like:<p><pre><code>    %%{init: {"theme": "base", "themeVariables": {
      "pie1": "#FF5733",
      "pie2": "#33FF57",
      "pie3": "#3357FF",
      "pieStrokeColor": "#000000",
      "pieStrokeWidth": 3,
      "pieOpacity": 0.8
    }}}%%
</code></pre><p>This looks like it works on this site too.</p>
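Putting it together, a complete chart using that directive might look like this (the slice names and values are made up for illustration):

```mermaid
%%{init: {"theme": "base", "themeVariables": {
  "pie1": "#FF5733", "pie2": "#33FF57", "pie3": "#3357FF",
  "pieStrokeColor": "#000000", "pieStrokeWidth": 3, "pieOpacity": 0.8
}}}%%
pie title Example
    "Dogs" : 40
    "Cats" : 35
    "Birds" : 25
```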
]]></description><pubDate>Wed, 15 Apr 2026 15:02:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47780104</link><dc:creator>jsmith45</dc:creator><comments>https://news.ycombinator.com/item?id=47780104</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47780104</guid></item><item><title><![CDATA[New comment by jsmith45 in "Union types in C# 15"]]></title><description><![CDATA[
<p>> I think many people already mentioned it, but I also don't feel too good about non-boxed unions not being the default. I'd personally like the path of least resistance to lead to not boxing. Having to opt in like the current preview shows looks like a PITA that I'd quickly become tired of.<p>The problem is that the only safe way for the compiler to generate non-boxed unions would require non-overlapping fields for most value types.<p>Specifically, the CLR has a hard rule that it must know with certainty where all managed pointers are at all times, so that the GC can update them if it moves the referenced object. This means you can only overlap value types if the locations of all managed pointers line up perfectly. So sure, you can safely overlap "unmanaged" structs (those that recursively don't contain any managed pointers), but even for those, you need to know the size of the largest one.<p>The big problem with the compiler attempting to overlap value types is that the definitions seen at compile time may not match the definitions at runtime, especially for types defined in another assembly. A new library version can add more fields. This may mean one unmanaged struct has become too big to fit in the field, or that two types that were previously overlap-compatible are not anymore.<p>Making the C# compiler jump through a bunch of hoops to try to determine whether overlapping is safe, while still leaving room for an updated library at runtime to crash the whole thing, means that the compiler will probably never even try. I guess the primitive numeric types could be special-cased, as their sizes are known and will never change.</p>
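As a rough illustration using today's explicit layout (hypothetical types, not the proposed union feature): overlapping two unmanaged structs is allowed, but the runtime refuses to load a type that overlaps a managed reference with other data.

```csharp
using System.Runtime.InteropServices;

// Overlapping two unmanaged values is fine: no managed pointers are involved,
// and the union is as big as its largest member.
[StructLayout(LayoutKind.Explicit)]
struct NumberUnion
{
    [FieldOffset(0)] public int AsInt;
    [FieldOffset(0)] public double AsDouble;
}

// Overlapping a reference with a value is NOT fine: the CLR throws
// TypeLoadException when this type is loaded, because the GC could no longer
// tell whether offset 0 holds a pointer it must track.
[StructLayout(LayoutKind.Explicit)]
struct BrokenUnion
{
    [FieldOffset(0)] public string AsString;
    [FieldOffset(0)] public long AsLong;
}
```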
]]></description><pubDate>Thu, 09 Apr 2026 18:49:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47707967</link><dc:creator>jsmith45</dc:creator><comments>https://news.ycombinator.com/item?id=47707967</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47707967</guid></item><item><title><![CDATA[New comment by jsmith45 in "Claude Code's source code has been leaked via a map file in their NPM registry"]]></title><description><![CDATA[
<p>Cost tracking is used if you connect Claude Code with an API key instead of a subscription. It powers the /cost command.<p>It is tricky to meaningfully expose a dollar-cost-equivalent value for subscribers in a way that won't confuse users into thinking they will get a bill for that amount. This is especially true if you have overages enabled, since a session that used overages was likely partially covered by the plan (and thus zero-rated) with the rest at API prices, and the client can't really know the breakdown.</p>
]]></description><pubDate>Tue, 31 Mar 2026 12:43:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47586518</link><dc:creator>jsmith45</dc:creator><comments>https://news.ycombinator.com/item?id=47586518</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47586518</guid></item><item><title><![CDATA[New comment by jsmith45 in "Hacking old hardware by renaming to .zip [video]"]]></title><description><![CDATA[
<p>Right. Claude models seem to have had very limited prohibitions in this area baked in via RLHF. Anthropic seems to use the system prompt as the main defense, possibly reinforced by an API-side system prompt too. But it is very clear that they want to allow things like malware analysis (which includes reverse engineering), so any server-side limitations will be designed to allow these things too.<p>The relevant client-side system prompt is:<p>IMPORTANT: Assist with authorized security testing, defensive security, CTF challenges, and educational contexts. Refuse requests for destructive techniques, DoS attacks, mass targeting, supply chain compromise, or detection evasion for malicious purposes. Dual-use security tools (C2 frameworks, credential testing, exploit development) require clear authorization context: pentesting engagements, CTF competitions, security research, or defensive use cases.<p>----<p>There is also this system reminder that shows upon using the read tool:<p><system-reminder>
Whenever you read a file, you should consider whether it would be considered malware. You CAN and SHOULD provide analysis of malware, what it is doing. But you MUST refuse to improve or augment the code. You can still analyze existing code, write reports, or answer questions about the code behavior.
</system-reminder></p>
]]></description><pubDate>Sat, 28 Mar 2026 19:31:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47557523</link><dc:creator>jsmith45</dc:creator><comments>https://news.ycombinator.com/item?id=47557523</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47557523</guid></item><item><title><![CDATA[New comment by jsmith45 in "The future of version control"]]></title><description><![CDATA[
<p>The file-locking approach is one used by centralized version control systems, and it is mostly used in the everybody-commits-directly-to-trunk style of development. In those environments merging isn't much of a thing. (Of course this style also comes with other challenges, especially around code review, as it means either people are constantly committing unreviewed code, or you develop some other system to pre-review code, which can slow down the speed of checking things in.)<p>This approach is actually fairly desirable for asset types that cannot be easily merged, like images, sounds, videos, etc. You seldom actually want multiple people working on any one such file at the same time, as one or the other's work will either be wasted or have to be redone.</p>
]]></description><pubDate>Mon, 23 Mar 2026 00:52:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=47484130</link><dc:creator>jsmith45</dc:creator><comments>https://news.ycombinator.com/item?id=47484130</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47484130</guid></item><item><title><![CDATA[New comment by jsmith45 in "Be Careful with GIDs in Rails"]]></title><description><![CDATA[
<p>Sure, but the real concern of the article is that if passed "gid://moneymaker/Invoice/22ecb3fd-5e25-462c-ad2b-cafed9435d16", the global ID locator will effectively locate "gid://moneymaker/Invoice/22". Which is to say, what is supposed to be a system-generated ID with no need for de-slugification uses the same lookup method normally used for URLs, which attempts to de-slugify.<p>Obviously, this means the first gid was bogus anyway, as it was trying to look up via the wrong key, but the fact that it doesn't fail, and will instead return the record with primary key "22", can certainly be surprising.</p>
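The surprising lookup boils down to integer coercion; outside Rails, the same behavior is visible with plain Ruby (a sketch, assuming an integer primary key on the model):

```ruby
# The de-slug step relies on Ruby's lenient String#to_i, which parses leading
# digits and silently discards the rest, so a UUID handed to an integer-keyed
# finder quietly collapses to its leading digits:
gid_key = "22ecb3fd-5e25-462c-ad2b-cafed9435d16"
puts gid_key.to_i  # => 22
```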
]]></description><pubDate>Tue, 16 Dec 2025 16:55:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=46290937</link><dc:creator>jsmith45</dc:creator><comments>https://news.ycombinator.com/item?id=46290937</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46290937</guid></item><item><title><![CDATA[New comment by jsmith45 in "Yt-dlp: External JavaScript runtime now required for full YouTube support"]]></title><description><![CDATA[
<p>Desktop Chrome has just landed native HLS support for the video element, enabled by default, within the last month. (There may be a few issues still to be worked out, and I don't know what the rollout status is, but certainly by year end it will just work.) Presumably most downstream Chromium derivatives will pick this support up soon.<p>My understanding is that Chrome for Android has supported it for some time by way of delegating to Android's native media support, which includes HLS.<p>Desktop and mobile Safari have had it enabled for a long time, and thus so has Chrome for iOS.<p>So this should eventually help things.</p>
]]></description><pubDate>Wed, 12 Nov 2025 16:36:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=45902228</link><dc:creator>jsmith45</dc:creator><comments>https://news.ycombinator.com/item?id=45902228</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45902228</guid></item><item><title><![CDATA[New comment by jsmith45 in "Yt-dlp: External JavaScript runtime now required for full YouTube support"]]></title><description><![CDATA[
<p>Chrome has finally landed native HLS playback support, enabled by default, within the past month. See <a href="http://crrev.com/c/7047405" rel="nofollow">http://crrev.com/c/7047405</a><p>I'm not sure what the rollout status actually is at the moment.</p>
]]></description><pubDate>Wed, 12 Nov 2025 16:09:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=45901896</link><dc:creator>jsmith45</dc:creator><comments>https://news.ycombinator.com/item?id=45901896</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45901896</guid></item><item><title><![CDATA[New comment by jsmith45 in "VST3 audio plugin format is now MIT"]]></title><description><![CDATA[
<p>> COM is just 3 predefined calls in the virtual table.<p>COM <i>can</i> be that simple on the implementation side, at least if your platform's vtable ABI matches COM's perfectly, but it also allows far more complicated implementations where every implemented interface queried will allocate a new distinct object, etc.<p>I.e., even if you know for sure that the object is implemented in C++, and your platform's vtable ABI matches COM's perfectly, and you know exactly which interfaces the object implements, you cannot legally use dynamic_cast, as there is no requirement that one class inherits from both interfaces. The conceptual "COM object" could instead be implemented as one class per interface, each likely containing a pointer to some shared data class.<p>This is also why you need to do the ref counting with respect to each distinct interface: while it is legal on the implementation side to just share one ref count for it all, that is in no way required.</p>
]]></description><pubDate>Thu, 23 Oct 2025 15:02:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=45682659</link><dc:creator>jsmith45</dc:creator><comments>https://news.ycombinator.com/item?id=45682659</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45682659</guid></item><item><title><![CDATA[New comment by jsmith45 in "Ofcom fines 4chan £20K and counting for violating UK's Online Safety Act"]]></title><description><![CDATA[
<p>The BBFC's rulings have legal impact, and it can refuse classification, making the film illegal to show or sell in the UK.<p>Over in the US, getting an MPAA rating is completely voluntary. MPAA rules do not allow it to refuse to rate a motion picture, and even if they did, the consequences would be the same as choosing not to get a rating.<p>If you don't get a rating in the US, some theatres and retailers may decline to show/sell your film, but you can always do direct sales, and/or set up private showings.</p>
]]></description><pubDate>Tue, 14 Oct 2025 21:49:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=45585451</link><dc:creator>jsmith45</dc:creator><comments>https://news.ycombinator.com/item?id=45585451</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45585451</guid></item><item><title><![CDATA[New comment by jsmith45 in "Tests Don't Prove Code Is Correct They Just Agree with It"]]></title><description><![CDATA[
<p>Yeah, proving code correct is not a panacea. If you have C code that has been proven correct with respect to what the C standard mandates (and some specific values of implementation-defined limits), that is all well and good.<p>But where is the proof that your compiler will compile the code correctly with respect to the C standard and your target instruction set specification? How about the proof of correctness of your C library with respect to both of those, and the documented requirements of your kernel? Where is the proof that the kernel handles all programs that meet its documented requirements correctly?<p>Not to put too fine a point on it, but: where is the proof that your processor actually implements the ISA correctly (either as documented, or as intended, given that typos in ISA documentation are not THAT rare)? This is a very serious question! There have been a bunch of times that processors have failed to implement the ISA spec in very bad and noticeable ways. RDRAND has been found to be badly broken multiple times now. There was the Intel Skylake/Kaby Lake hyper-threading bug that needed microcode fixes. And these are just some of the issues that got publicized well enough that I noticed them. There are probably many others that I never even heard about.</p>
]]></description><pubDate>Tue, 14 Oct 2025 20:39:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=45584545</link><dc:creator>jsmith45</dc:creator><comments>https://news.ycombinator.com/item?id=45584545</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45584545</guid></item><item><title><![CDATA[New comment by jsmith45 in "Why did containers happen?"]]></title><description><![CDATA[
<p>I'm confused by your perspective.<p>The simplest (and arguably best) usage for a devcontainer is simply to set up a working development environment (i.e. to have the correct versions of the compiler, linters, formatters, headers, static libraries, etc. installed). Yes, you can do this via non-integrated container builds, but then you usually need to have your editor connect to such a container so the language server can access all of that, plus when doing this manually you need to handle mapping in your source code.<p>Now, you probably want to have your main Dockerfile set up most of the same stuff for its build stage, although normally you want the output stage to contain only the runtime stuff. For interpreted languages the output stage is usually similar to the "build" stage, but ought to omit linters and other purely development-time tooling.<p>Want to avoid the overlap between your devcontainer and your main Dockerfile's build stage? Good idea! Just specify a stage in your main Dockerfile where you have all development-time tooling installed, but which comes before you copy your code in. Then in your .devcontainer.json file, set the `build.dockerfile` property to point at your Dockerfile, and `build.target` to specify that stage. (If you need some customizations only for the dev container, your Dockerfile can have a tiny otherwise-unused stage that derives from the previous one, with just those changes.)<p>Under this approach, the devcontainer is supposed to be suitable for basic development tasks (e.g. compiling, linting, running automated tests that don't need external services) and any other non-containerized testing you would otherwise do. For your containerized testing, you want the `ghcr.io/devcontainers/features/docker-outside-of-docker:1` feature added, at which point you can just run `docker compose` from the editor terminal, exactly like you would if not using dev containers at all.</p>
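The .devcontainer.json side of that setup might look roughly like this (the stage name "devtools" and the relative paths are assumptions for illustration):

```jsonc
// Hypothetical .devcontainer/devcontainer.json reusing a stage of the main
// Dockerfile instead of duplicating the tooling setup:
{
  "build": {
    "dockerfile": "../Dockerfile",
    "context": "..",
    "target": "devtools"
  },
  "features": {
    "ghcr.io/devcontainers/features/docker-outside-of-docker:1": {}
  }
}
```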
]]></description><pubDate>Tue, 14 Oct 2025 19:32:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=45583832</link><dc:creator>jsmith45</dc:creator><comments>https://news.ycombinator.com/item?id=45583832</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45583832</guid></item><item><title><![CDATA[New comment by jsmith45 in "Strudel REPL – a music live coding environment living in the browser"]]></title><description><![CDATA[
<p>Might be worth checking out Tidal's Mondo notation, which, while not quite Haskell syntax, is <i>far</i> closer to it, being a proper functional-style notation that unifies with mini notation, so there's no need to wrap many things in strings.<p>Looks like this:<p><pre><code>    mondo`
    $ note (c2 # euclid <3 6 3> <8 16>) # *2 
    # s "sine" # add (note [0 <12 24>]*2)
    # dec(sine # range .2 2) 
    # room .5
    # lpf (sine/3 # range 120 400)
    # lpenv (rand # range .5 4)
    # lpq (perlin # range 5 12 # \* 2)
    # dist 1 # fm 4 # fmh 5.01 # fmdecay <.1 .2>
    # postgain .6 # delay .1 # clip 5

    $ s [bd bd bd bd] # bank tr909 # clip .5
    # ply <1 [1 [2 4]]]]><![CDATA[>

    $ s oh*4 # press # bank tr909 # speed.8
    # dec (<.02 .05>*2 # add (saw/8 # range 0 1)) # color "red"
    `
</code></pre>
If actual tidal notation is important, that has been worked on, and would look like:<p><pre><code>    await initTidal()
    tidal`
    d1 
    $ sub (note "12 0")
    $ sometimes (|+ note "12")
    $ jux rev $ voicing $ n "<0 5 4 2 3(3,8)/2>*8"
    # chord "<Dm Dm7 Dm9 Dm11>"
    # dec 0.5 # delay 0.5 # room 0.5 # vib "4:.25"
    # crush 8 # s "sawtooth" # lpf 800 # lpd 0.1
    # dist 1

    d2 
    $ s "RolandTR909_bd*4, hh(10,16), oh(-10,16)"
    # clip (range 0.1 0.9 $ fast 5 $ saw)
    # release 0.04 # room 0.5
    `
</code></pre>
Even when that works, only the functions and custom operators that have actually been implemented are available, so not all Tidal code can necessarily be imported.<p>But it is currently broken on the REPL site because of <a href="https://codeberg.org/uzu/strudel/pulls/1510" rel="nofollow">https://codeberg.org/uzu/strudel/pulls/1510</a> and <a href="https://codeberg.org/uzu/strudel/issues/1335" rel="nofollow">https://codeberg.org/uzu/strudel/issues/1335</a></p>
]]></description><pubDate>Tue, 14 Oct 2025 18:03:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=45582948</link><dc:creator>jsmith45</dc:creator><comments>https://news.ycombinator.com/item?id=45582948</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45582948</guid></item><item><title><![CDATA[New comment by jsmith45 in "Illiteracy Is a Policy Choice"]]></title><description><![CDATA[
<p>Phonics-based reading is all about sounding out unknown words. The idea is that the student would understand if somebody else read the text out loud, so if we can teach kids how to convert written words into sounds, they can understand many new words the first time they come across them. The core idea is to teach that certain letters or groups of letters map to certain sounds (phonemes) as a start, and then gradually introduce more and more rules of English phonetics, allowing students to successfully sound out even more complicated words.<p>The hope is that students will gradually learn to just recognize words by sight, which the overwhelming majority do eventually learn to do, and only need to sound out unfamiliar words. The fact that some students have struggled to learn to recognize words and need to sound most of them out is part of why people try to create alternatives, but those largely don't work well.<p>Of course, English does have some tricky phonetics. We have some words with multiple different pronunciations. We have some words with the same phonemes but different meanings that differ solely based on syllable stress. There are even some words whose pronunciation simply must be memorized, as there is no coherent rule to get from the spelling to the pronunciation (see, for example, "colonel").</p>
]]></description><pubDate>Mon, 29 Sep 2025 16:24:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=45415638</link><dc:creator>jsmith45</dc:creator><comments>https://news.ycombinator.com/item?id=45415638</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45415638</guid></item><item><title><![CDATA[New comment by jsmith45 in "Performance Improvements in .NET 10"]]></title><description><![CDATA[
<p>My view, which I suspect even Toub would agree with, is that if being allocation-free (or even just extremely low-allocation) is critical to you, then go ahead and use structs, stackalloc, etc. that guarantee no allocations.<p>It is far more guaranteed that those will work in all circumstances than these JIT optimizations, which could have some edge cases where they won't function as expected. If Stopwatch allocations were a major concern (as opposed to just feeling like a possible perf bottleneck), then a modern ValueStopwatch struct that consists of two longs (accumulatedDuration, and startTimestamp, which if non-zero means the watch is running) plus calls into the Stopwatch static methods is still simple and unambiguous.<p>But in cases where being low/no allocation is less critical, yet you are still concerned about the impact of the allocations, these sorts of optimizations certainly do help. Plus they even help when you don't really care about allocations, just raw perf, since the optimizations improve raw performance too.</p>
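A sketch of the ValueStopwatch struct described above (a hypothetical type, not a BCL API; the conversion assumes TimeSpan's 100 ns ticks):

```csharp
using System;
using System.Diagnostics;

// Two longs, zero heap allocations: accumulated ticks, plus a start timestamp
// that doubles as the "running" flag (non-zero => running).
public struct ValueStopwatch
{
    private long _accumulated;      // in Stopwatch ticks
    private long _startTimestamp;   // non-zero means the watch is running

    public static ValueStopwatch StartNew() =>
        new ValueStopwatch { _startTimestamp = Stopwatch.GetTimestamp() };

    public void Stop()
    {
        if (_startTimestamp != 0)
        {
            _accumulated += Stopwatch.GetTimestamp() - _startTimestamp;
            _startTimestamp = 0;
        }
    }

    public TimeSpan Elapsed
    {
        get
        {
            long ticks = _accumulated;
            if (_startTimestamp != 0)
                ticks += Stopwatch.GetTimestamp() - _startTimestamp;
            // Convert Stopwatch ticks to TimeSpan ticks (100 ns units).
            return TimeSpan.FromTicks(
                (long)(ticks * (TimeSpan.TicksPerSecond / (double)Stopwatch.Frequency)));
        }
    }
}
```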
]]></description><pubDate>Thu, 11 Sep 2025 13:17:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=45211262</link><dc:creator>jsmith45</dc:creator><comments>https://news.ycombinator.com/item?id=45211262</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45211262</guid></item><item><title><![CDATA[New comment by jsmith45 in "Type checking is a symptom, not a solution"]]></title><description><![CDATA[
<p>I can get by with a weakly typed language for a small program I maintain myself, but if I am making something like a library, lack of type checking can be a huge problem.<p>In something like JavaScript, I might write a function or class with the full expectation that some parameter is a string. However, if I don't check the runtime type and throw if it is unexpected, then it is very easy for me to write this function in a way where it currently, technically, happens to work with some other datatype. Publish this, and some random user will likely notice and start using that function with an unintended datatype.<p>Later on I make some change that relies on the parameter being a string (which is how I always imagined it), and publish, and boom: I broke a user of the software, and my intended bugfix or minor point release was really a semver breaking change, and I should have incremented the major version.<p>I'd bet big money that many JavaScript libraries that are not fanatical about runtime-checking all parameters end up making accidental breaking changes like that. But with something like TypeScript, this simply won't happen, as passing parameters incompatible with my declared types, although technically possible, is obviously unsupported, and may break at any time.</p>
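A toy TypeScript sketch of that failure mode (the function and its behavior are made up for illustration):

```typescript
// v1: declared to take a string, but because it starts with String(...),
// a caller passing a number "happens to work" at runtime.
function slugify(title: string): string {
  return String(title).toLowerCase().replace(/\s+/g, "-");
}

// If v2 switched to title.trim() first -- still fine per the declared type --
// a JS caller passing a number would crash at runtime. With TypeScript, that
// caller never compiled in the first place:
// slugify(42);  // error TS2345: number is not assignable to string

console.log(slugify("Hello World")); // "hello-world"
```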
]]></description><pubDate>Fri, 05 Sep 2025 20:08:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=45143037</link><dc:creator>jsmith45</dc:creator><comments>https://news.ycombinator.com/item?id=45143037</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45143037</guid></item><item><title><![CDATA[New comment by jsmith45 in "A computer upgrade shut down BART"]]></title><description><![CDATA[
<p>Block-based automated signaling can technically be implemented as a primarily local system. Each block needs to know if there is a train in the block itself (in which case all block entrance signals must show stop, and approach signals indicate the block ahead can be entered, but the train must be slowing so it can come to a stop by the block entrance signal). It must also know about a few preceding blocks for each path leading into it, so as to know which contain trains that might be trying to enter this block; it can then select at most one to be given the proceed signal, while the others are told to brake so as to stop in time for the entrance signal. It is nice if it knows the intended routes of each train, so it can favor giving the proceed indication to a train that actually wants to enter it, but if it lacks that information, giving the indication to a train that will end up using points to take a different path doesn't hurt safety, just efficiency.<p>Of course, centralized signaling is better: it allows greater efficiency, helps dispatchers keep better track of the trains, and makes handling malfunctioning signals a lot safer, among many other benefits. But that doesn't mean local signaling can't be done.</p>
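The local decision rule can be sketched in a few lines (a toy model, not any real railway's rulebook):

```python
def signal_aspect(occupancy):
    """Decide a signal aspect purely from local data: occupancy[0] is the
    block this signal protects, occupancy[1] the block beyond it
    (True = train present). A sketch of the idea, nothing more."""
    if occupancy[0]:
        return "stop"        # block occupied: may not enter
    if len(occupancy) > 1 and occupancy[1]:
        return "approach"    # enter, but be braking to stop at the next signal
    return "clear"

print(signal_aspect([True, False]))   # stop
print(signal_aspect([False, True]))   # approach
print(signal_aspect([False, False]))  # clear
```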
]]></description><pubDate>Fri, 05 Sep 2025 16:49:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=45140691</link><dc:creator>jsmith45</dc:creator><comments>https://news.ycombinator.com/item?id=45140691</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45140691</guid></item><item><title><![CDATA[New comment by jsmith45 in "'World Models,' an old idea in AI, mount a comeback"]]></title><description><![CDATA[
<p>Why would this be? I'm probably missing something.<p>Don't these LLMs fundamentally work by outputting a vector of scores over all possible tokens, which is sampled via some form of sampler (one that typically applies some softmax variant and then picks a random output from that distribution)? The sampled token becomes the newest input token; repeat until some limit is hit or an end-of-output token is selected.<p>I don't see why limiting that sampling to the set of tokens valid under a grammar should be harmful versus repeated generation until you get something that fits your grammar (assuming identical input to both processes). This is especially the case if you maintain the relative probabilities of the grammar-valid tokens in the restricted sampling. If one lets the relative probabilities change substantially, then I could see that giving worse results.<p>Now, I could certainly imagine that blindsiding the LLM with output restrictions when it is expecting to give a freeform response might give worse results than prompting it to emit that format without restricting it. (Simply because forcing an output shape that is not natural and not a good fit for the training data can mean the LLM will struggle to create good output.) I'd imagine the best results likely come from both textually prompting it to use your desired format and constraining the output to prevent it from accidentally going off the rails.</p>
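A minimal sketch of the masking idea (toy vocabulary and logits, not any real model's API): dropping invalid tokens before the softmax preserves the relative probabilities of the tokens that remain.

```python
import math

def softmax(logits):
    """Standard softmax over a dict of token -> logit."""
    m = max(logits.values())
    exps = {t: math.exp(v - m) for t, v in logits.items()}
    total = sum(exps.values())
    return {t: e / total for t, e in exps.items()}

def constrained_dist(logits, valid_tokens):
    """Drop grammar-invalid tokens, then renormalize over the rest.
    Relative probabilities among the surviving tokens are unchanged."""
    masked = {t: v for t, v in logits.items() if t in valid_tokens}
    return softmax(masked)

logits = {'"': 2.0, "{": 1.0, "cat": 3.0}
# Suppose the grammar only allows '"' or '{' at this position:
dist = constrained_dist(logits, {'"', "{"})
print(dist)
```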
]]></description><pubDate>Fri, 05 Sep 2025 16:04:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=45140110</link><dc:creator>jsmith45</dc:creator><comments>https://news.ycombinator.com/item?id=45140110</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45140110</guid></item><item><title><![CDATA[New comment by jsmith45 in "I should have loved electrical engineering"]]></title><description><![CDATA[
<p>> Actual CS research is largely the same as EE research: very, very heavy on math and very difficult to do without studying a lot.<p>That is largely true of academic research. A critical difference, though, is that you don't need big, expensive hardware or the like to follow along with large portions of cutting-edge CS research. There are some exceptions, like cutting-edge AI training, which requires super-expensive equipment or large cloud expenditures, but tons of other cutting-edge CS research can run even on a fairly low-end laptop just fine.<p>It is also true that plenty of software innovation is not even tied to CS-style academic research. Experimenting with what sort of perf becomes possible by implementing a new kernel feature can be very important research but isn't always closely tied to academic CS research.<p>Even hobbyist-level cutting-edge research in EE has higher costs, simply because components and PCBs are not exactly free, and you cannot just keep using the same boards for every project for several years like you can with a PC.</p>
]]></description><pubDate>Thu, 04 Sep 2025 16:19:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=45128950</link><dc:creator>jsmith45</dc:creator><comments>https://news.ycombinator.com/item?id=45128950</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45128950</guid></item><item><title><![CDATA[New comment by jsmith45 in "De minimis exemption ends"]]></title><description><![CDATA[
<p>Historically, from a revenue perspective, essentially all tariff revenue came from bulk imports (trucks, ships, etc.).<p>It is important to also capture tariffs from high-value parcels, since for small but high-value items a parcel can very much resemble a bulk import. (Picture diamonds: a moderately sized parcel full of them is very much a bulk import.)<p>But at the same time, trying to collect tariffs on every parcel was historically deemed non-viable: way too much work for too little gain. Especially since, historically, the addressee of the parcel often ends up paying the tariff, requiring customs to communicate this to the parcel carrier's broker, who must communicate it to and collect from the end customer, who finally gives the money to the broker, who submits it to the government. Meanwhile, CBP needs to store the package.<p>This whole process ends up just annoying your country's own citizens and generates little revenue, so a de minimis exemption of some form was highly desirable. And to most politicians there seemed to be little downside to setting it fairly high. Sure, $800 is probably larger than reasonable, but it certainly means the average person would rarely ever need to interact with this process, and that was good enough.</p>
]]></description><pubDate>Mon, 01 Sep 2025 18:58:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=45095559</link><dc:creator>jsmith45</dc:creator><comments>https://news.ycombinator.com/item?id=45095559</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45095559</guid></item></channel></rss>