<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: citruspi</title><link>https://news.ycombinator.com/user?id=citruspi</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 03 May 2026 14:24:28 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=citruspi" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by citruspi in "ICE Will Use AI to Surveil Social Media"]]></title><description><![CDATA[
<p>> You pay tax direct as us residents, or as tariff if you are in rest of world.<p>Tariffs on goods coming into the US are paid by US residents. (Just had to pay customs to clear a shipment from the UK - I had to pay the tariffs, not the seller.)</p>
]]></description><pubDate>Mon, 27 Oct 2025 13:49:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=45721029</link><dc:creator>citruspi</dc:creator><comments>https://news.ycombinator.com/item?id=45721029</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45721029</guid></item><item><title><![CDATA[New comment by citruspi in "WebDAV isn't dead yet"]]></title><description><![CDATA[
<p>OmniFocus also supports WebDAV for folks that prefer to self-host - <a href="https://support.omnigroup.com/documentation/omnifocus/universal/4.3.3/en/managing-your-data/#sync-advanced-webdav" rel="nofollow">https://support.omnigroup.com/documentation/omnifocus/univer...</a></p>
]]></description><pubDate>Sun, 26 Oct 2025 00:48:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=45708173</link><dc:creator>citruspi</dc:creator><comments>https://news.ycombinator.com/item?id=45708173</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45708173</guid></item><item><title><![CDATA[New comment by citruspi in "Jeffrey Hudson the Court Dwarf of the English Queen Henrietta Maria of France"]]></title><description><![CDATA[
<p>TIL about <a href="https://en.wikipedia.org/wiki/Bear-baiting" rel="nofollow">https://en.wikipedia.org/wiki/Bear-baiting</a> as well<p>Apparently it's still practiced (although illegal) in Pakistan, and it was occurring in the United States as recently as the last decade<p><a href="https://en.wikipedia.org/wiki/Lion-baiting" rel="nofollow">https://en.wikipedia.org/wiki/Lion-baiting</a> too</p>
]]></description><pubDate>Mon, 13 Oct 2025 12:38:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=45567685</link><dc:creator>citruspi</dc:creator><comments>https://news.ycombinator.com/item?id=45567685</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45567685</guid></item><item><title><![CDATA[New comment by citruspi in "NSF starts vetting all grants to comply with executive orders"]]></title><description><![CDATA[
<p>> Things are moving ridiculously fast for government to the point I suspect there was alot of desire to get rid of all this stuff before, but people were unable to do so.<p>Assuming "was alot of desire" is meant as "widespread support," not just "the President really desired it," that is an absolutely ridiculous interpretation. Fast does not mean strong consensus.<p>The government has also moved _ridiculously fast_ to<p>- pardon/commute the vast majority of J6 defendants, including those convicted of violence towards law enforcement<p>- freeze federal aid across the board<p>- blame the recent aviation crash on DEI<p>- rename the Gulf of Mexico to Gulf of America<p>- revoke birthright citizenship<p>Does that mean that there "was alot of desire" to do those things? Absolutely not. It just means that those are things that the President has done unilaterally via executive order or press release.<p>To be clear, I'm not advocating for or against the govt.<p>But the idea that the government is moving "ridiculously fast" because there "was alot of desire" is a massive stretch. They are moving fast because they are steamrolling everyone in their path (allies included), and in their desire to "get shit done" they are implementing changes that are riddled with errors and, in some cases, are even flat-out illegal.</p>
]]></description><pubDate>Fri, 31 Jan 2025 14:01:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=42887743</link><dc:creator>citruspi</dc:creator><comments>https://news.ycombinator.com/item?id=42887743</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42887743</guid></item><item><title><![CDATA[New comment by citruspi in "Sega is delisting 60 classic games from Steam, so now's the time to grab them"]]></title><description><![CDATA[
<p>> Sega might claim that they have only licensed them and you don't own them; a court might disagree, given the big "buy" button and the consideration paid for them.<p>I'd really like to believe that is the case, but I think we've already seen that it's generally not true, based on other digital marketplaces (e.g. Kindle books, iTunes media, etc.)<p>But specifically regarding Steam... this was just last month[0][1]:<p>> Valve is now explicitly disclosing that you don’t own the games you buy from its Steam online store. The company has added a note on the payment checkout screen stating that “a purchase of a digital product grants a license for the product on Steam,” as reported earlier by Engadget.<p>><p>> ...<p>><p>> Why? Probably, a new law. California has a law going into effect next year that’ll require digital storefronts like Valve’s Steam platform to clearly say that you’re only purchasing a license for your digital media because some companies like Ubisoft and PlayStation were removing digital purchases from users’ libraries, keeping them from playing games like The Crew or watching their old Discovery shows.<p>[0] <a href="https://www.theverge.com/2024/10/11/24267864/steam-buy-purchase-license-digital-storefront" rel="nofollow">https://www.theverge.com/2024/10/11/24267864/steam-buy-purch...</a><p>[1] <a href="https://news.ycombinator.com/item?id=41809193">https://news.ycombinator.com/item?id=41809193</a></p>
]]></description><pubDate>Fri, 08 Nov 2024 01:03:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=42083048</link><dc:creator>citruspi</dc:creator><comments>https://news.ycombinator.com/item?id=42083048</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42083048</guid></item><item><title><![CDATA[New comment by citruspi in "Ask HN: Anyone tired of everything being a subscription now?"]]></title><description><![CDATA[
<p>> 1Password 8 requires iOS 15, so the updates that you're paying for as part of your subscription model require that you pay for a new phone to get them.<p>That assumes that iOS 15 requires a new phone... which isn't the case. Apple generally has an excellent track record of supporting older devices when they release new major OS versions.<p>Taking a look at the devices that support iOS 15[0], the oldest one appears to be the iPhone 6S... which was released in 2015. So you could theoretically use the new 1Password 8 on iOS 15 on a phone that is 7 years old. No need to pay for a new phone.<p>[0] <a href="https://support.apple.com/guide/iphone/supported-models-iphe3fa5df43/15.0/ios/15.0" rel="nofollow">https://support.apple.com/guide/iphone/supported-models-iphe...</a></p>
]]></description><pubDate>Tue, 20 Dec 2022 12:31:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=34064963</link><dc:creator>citruspi</dc:creator><comments>https://news.ycombinator.com/item?id=34064963</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=34064963</guid></item><item><title><![CDATA[New comment by citruspi in "Are “bring your own storage” apps a thing?"]]></title><description><![CDATA[
<p>A little surprised no one has mentioned OmniGroup's applications[0].<p>Some of their applications (e.g. OmniFocus) support syncing online via an Omni account, but also support syncing to a custom WebDAV server[1]. I run my own WebDAV server and point the macOS and iOS apps at it.<p>[0]: <a href="https://www.omnigroup.com" rel="nofollow">https://www.omnigroup.com</a><p>[1]: <a href="https://support.omnigroup.com/documentation/omnifocus/mac/3.0/en/getting-synced/#other-webdav-options" rel="nofollow">https://support.omnigroup.com/documentation/omnifocus/mac/3....</a></p>
]]></description><pubDate>Thu, 11 Mar 2021 11:27:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=26422871</link><dc:creator>citruspi</dc:creator><comments>https://news.ycombinator.com/item?id=26422871</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=26422871</guid></item><item><title><![CDATA[New comment by citruspi in "Next.js 10"]]></title><description><![CDATA[
<p>> Did you notice that we wrote a detailed documentation page about how to report those metrics to any service of your choice?<p>Perhaps you should link that on the Analytics[0] page instead of directing users to contact the sales team for more information on non-Vercel deployments.<p>[0] <a href="https://nextjs.org/analytics" rel="nofollow">https://nextjs.org/analytics</a></p>
]]></description><pubDate>Tue, 27 Oct 2020 19:05:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=24910658</link><dc:creator>citruspi</dc:creator><comments>https://news.ycombinator.com/item?id=24910658</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=24910658</guid></item><item><title><![CDATA[New comment by citruspi in "APFS changes in Big Sur: how Time Machine backs up to APFS"]]></title><description><![CDATA[
<p>> I've been generally happy with Time Machine's stability, but this is getting me a little worried.<p>For the longest time, I backed up my MacBooks with Time Machine to a NAS.<p>Seemed to work fine - Time Machine was successful and I was able to browse previous versions on the machine being backed up without an issue (browse history of machine A on machine A).<p>Then one day I was planning to wipe a MacBook and do a clean install - figured I'd confirm I could browse my backups made on machine A on machine B before I wiped A. I spent over an hour attempting to open the sparse bundle (w/ Time Machine and manually) and just couldn't do it - it kept loading forever or giving me errors about volume verification, among other things[0].<p>> I guess now's a good time to looking into Arq (or similar)<p>Like you, I decided to take a look at alternatives. I'd previously played around with Arq (v5) and it looked awesome - stable, well-documented, etc. Well, by the time I actually needed an alternative to Time Machine, Arq had released v6 - earlier this year[1].<p>Unfortunately, it appeared to be bug-prone (not great for backups!) and lacked ANY documentation (one of the great things about v5 was the in-depth documentation, particularly around backup format). Users on the subreddit[2] weren't thrilled, and you can't purchase v5 licenses (and TBH I wouldn't recommend purchasing software that isn't supported anymore).<p>Within the last week or two, Arq has released its second major version within a year - v7[3]. Feedback appears to be better, and the author has acknowledged mistakes, but TBH I'm wary. 
Definitely not adopting two-week-old software as my primary method for backing up.<p>I've been playing around with Carbon Copy Cloner[4] more recently.<p>The ideal goal would be bootable backups to a disk image hosted remotely, but that doesn't appear to be possible[5] - so I'm resigning myself to file-based instead - no bootable disk image, but at least I'm a little more confident in my backups? And a single "file" (or image) becoming corrupt doesn't blow away the rest of my backup ¯\_(ツ)_/¯<p>If anyone has any suggestions or ideas, I'm all ears.<p>Edit: Probably worth noting that in this case machine A was running 10.14 (and HFS) and machine B 10.15 (and APFS) - but I'd imagine 10.15 should be able to open a 10.14 HFS sparse bundle without an issue.<p>[0] <a href="https://pastebin.com/Le6Q407e" rel="nofollow">https://pastebin.com/Le6Q407e</a><p>[1] <a href="https://www.arqbackup.com/blog/arq-6-more-power-more-security-more-storage-savings/" rel="nofollow">https://www.arqbackup.com/blog/arq-6-more-power-more-securit...</a><p>[2] <a href="https://old.reddit.com/r/Arqbackup/" rel="nofollow">https://old.reddit.com/r/Arqbackup/</a><p>[3] <a href="https://www.arqbackup.com/blog/next-up-arq-7/" rel="nofollow">https://www.arqbackup.com/blog/next-up-arq-7/</a><p>[4] <a href="https://bombich.com/" rel="nofollow">https://bombich.com/</a><p>[5] <a href="https://bombich.com/kb/ccc5/i-want-back-up-my-whole-mac-time-capsule-nas-or-other-network-volume" rel="nofollow">https://bombich.com/kb/ccc5/i-want-back-up-my-whole-mac-time...</a></p>
]]></description><pubDate>Mon, 28 Sep 2020 22:32:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=24621943</link><dc:creator>citruspi</dc:creator><comments>https://news.ycombinator.com/item?id=24621943</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=24621943</guid></item><item><title><![CDATA[New comment by citruspi in "Things we learned running Postgres 13"]]></title><description><![CDATA[
<p>> It would be great if we stop adding features now... I wish there was a Postgres branch that took previous version and then just applied optimizations and bugfixes. No more.<p>To be fair, quoting from the article:<p>> There are no big new features in Postgres 13, but there are a lot of small but important incremental improvements. Let's take a look.<p>But also, in general, yes there are pieces of software that do this - most recently Moment.js[1]. There was some discussion earlier this week[2].<p>[1] <a href="https://momentjs.com/docs/#/-project-status/" rel="nofollow">https://momentjs.com/docs/#/-project-status/</a><p>[2] <a href="https://news.ycombinator.com/item?id=24477941" rel="nofollow">https://news.ycombinator.com/item?id=24477941</a></p>
]]></description><pubDate>Mon, 21 Sep 2020 18:52:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=24547010</link><dc:creator>citruspi</dc:creator><comments>https://news.ycombinator.com/item?id=24547010</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=24547010</guid></item><item><title><![CDATA[New comment by citruspi in "A year ago, We picked up, tracked, and analyzed 130k pieces of litter in SF"]]></title><description><![CDATA[
<p>> Every submission and comment must teach everyone something, expand views, ideas and contribute to society in some way. I don't think you telling how much trash you're picking is contributing to society.<p>I don't know about that. Last year the OP and friends worked to pick up and document trash and now, a year later, that information has been used to make informed decisions about how to keep our streets - and society - cleaner.<p>I'd argue that OP has quite definitively _contributed to society_.<p>If he'd just posted "I picked up some trash" - I could understand your comments, but he's done the work, gathered data, written about it, etc. So w/r/t _teaching something_, I know more now than I did before reading his post.<p>Quite frankly, I could attempt to compare your contributions to society (aside from the snarky, dismissive comments on HN), but you've opted to hide behind a new, anonymous account. If you believe a submission is off-topic, just flag it; creating a new throwaway account just to disparage it seems a bit extreme.<p>And please understand, I don't say this to disparage you or insult your incredible intellect - just in the hope that perhaps I can _expand your views_ on how OP's submission is of interest to members of this community.</p>
]]></description><pubDate>Sat, 19 Sep 2020 02:15:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=24524276</link><dc:creator>citruspi</dc:creator><comments>https://news.ycombinator.com/item?id=24524276</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=24524276</guid></item><item><title><![CDATA[New comment by citruspi in "A year ago, We picked up, tracked, and analyzed 130k pieces of litter in SF"]]></title><description><![CDATA[
<p>> I don't think talking about how much trash you pick is any interest of us<p>That's rather presumptuous; it is of interest to me.<p>And considering the previous post from last year received 100+ upvotes and 100+ comments, I think it's safe to say that it's of interest to others as well.</p>
]]></description><pubDate>Sat, 19 Sep 2020 01:58:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=24524181</link><dc:creator>citruspi</dc:creator><comments>https://news.ycombinator.com/item?id=24524181</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=24524181</guid></item><item><title><![CDATA[New comment by citruspi in "Flow browser passes the Acid tests"]]></title><description><![CDATA[
<p>Sure, but I wasn't suggesting that WebKit is an original work of art - the discussion wasn't about the history of rendering engines and browsers - it was about which companies _today_ are willing to maintain their own rendering engines and browsers instead of just skinning Chromium.<p>- Apple has Safari which uses WebKit (which was forked from KHTML, as you noted)<p>- Mozilla has Firefox which uses Gecko (originally from Netscape)<p>- Google has Chrome which uses Blink (forked from WebKit)<p>- Microsoft has Edge which uses Blink<p>- Opera has Opera which uses Blink<p>... The only two major players currently maintaining a non-Chromium/Blink-based browser are Mozilla (mentioned by the OP) and Apple (mentioned by myself).<p>I was simply trying to provide some additional context about the history of Chrome; perhaps I should've also included the context about the history of Safari.<p>But anyways... blah blah blah</p>
]]></description><pubDate>Sun, 14 Jun 2020 12:50:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=23517300</link><dc:creator>citruspi</dc:creator><comments>https://news.ycombinator.com/item?id=23517300</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=23517300</guid></item><item><title><![CDATA[New comment by citruspi in "Flow browser passes the Acid tests"]]></title><description><![CDATA[
<p>> they will slowly stop bothering and start cartelizing around Chromium, one way or another, because no corp except for Mozilla solely depends on their browser<p>I'd say Apple falls into that category (they don't solely depend on their browser, but it is a major part of their ecosystem) - the day they stop maintaining Safari and just ship Chrome (or a skinned Chromium) on iOS and macOS is the day I stop using the Apple ecosystem.</p>
]]></description><pubDate>Sat, 13 Jun 2020 18:57:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=23511741</link><dc:creator>citruspi</dc:creator><comments>https://news.ycombinator.com/item?id=23511741</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=23511741</guid></item><item><title><![CDATA[New comment by citruspi in "Flow browser passes the Acid tests"]]></title><description><![CDATA[
<p>> Even the largest companies in the world decided it wasn’t worth it. MSFT has a 1.4T market cap,<p>To be fair, Apple has (or had on the 10th of June) a market cap of 1.5T, and they maintain Safari and WebKit (which GOOG forked to create Blink).</p>
]]></description><pubDate>Sat, 13 Jun 2020 18:56:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=23511728</link><dc:creator>citruspi</dc:creator><comments>https://news.ycombinator.com/item?id=23511728</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=23511728</guid></item><item><title><![CDATA[New comment by citruspi in "Biohacking Lite"]]></title><description><![CDATA[
<p>Yeah, I think you misread the original comment. The "average" apple contains nearly 10g of sugar[0].<p>[0]: <a href="https://fdc.nal.usda.gov/fdc-app.html#/food-details/171688/nutrients" rel="nofollow">https://fdc.nal.usda.gov/fdc-app.html#/food-details/171688/n...</a></p>
]]></description><pubDate>Fri, 12 Jun 2020 19:17:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=23502619</link><dc:creator>citruspi</dc:creator><comments>https://news.ycombinator.com/item?id=23502619</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=23502619</guid></item><item><title><![CDATA[New comment by citruspi in "“Tesla will now move its HQ and future programs to Texas/Nevada immediately”"]]></title><description><![CDATA[
<p>> He has built a $150 billion company from the ground up<p>That's a bit of a stretch.<p>The company was founded by other folks (Martin Eberhard and Marc Tarpenning) in 2003; he invested in 2004, joined the board of directors, ousted the CEO in 2007, and was then only able to refer to himself as a cofounder after settling a lawsuit.<p>Musk didn't build anything "from the ground up"; he took control of a company that already existed and grew it into something larger - still admirable, but let's not rewrite history. Musk didn't create Tesla and he wasn't one of the original cofounders.</p>
]]></description><pubDate>Sat, 09 May 2020 18:01:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=23126935</link><dc:creator>citruspi</dc:creator><comments>https://news.ycombinator.com/item?id=23126935</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=23126935</guid></item><item><title><![CDATA[New comment by citruspi in "Show HN: Errorship, use datadog as an error tracker"]]></title><description><![CDATA[
<p>> If it fails it raises an Exception... If your server is down, my application would crash too<p>So first off, let me just say that I completely agree. If this were the case, that'd be fucking atrocious and would definitely be a blocker for using it.<p>But I'm curious how you came to that conclusion.<p>It took me < 2 minutes of looking at the source code[0] to determine that your claim was incorrect. Not only does it appear to gracefully handle the server being unavailable, but the developer also literally commented that code, explaining that they wanted to ensure users could continue uninterrupted if the errorship server is unavailable.<p>> We give people the benefit of doubt.<p>> We only consider people to be not authorized if the backend comes back with an authoritative answer to that effect.<p>> Else, any errors or any other outcome; we assume authorization is there and also assume they belong to the highest pricing plan: Enterprise<p>> # failure of errorship should not lead to people been unable to ship exceptions<p>And it took even less time than that to run a new Python Docker container, install the library, run the sample code, and validate my assumptions[1] (the first attempt fails because the key is invalid; I disabled Internet access for the second attempt and it succeeded).<p>So I'm legitimately curious - did I miss something? Is there another failure case I didn't catch or test for? Or did you just make an assumption and not bother to verify it? And if it's the latter, why? What was the point? Like, to be frank, if this were a news piece I could understand the (possibly inaccurate) commentary. But why take the time and energy to write your comment and tear down someone's personal project with seemingly inaccurate claims?<p>(To be clear, no affiliation with errorship; I'm not even a DataDog user. 
Just a random dev browsing HN).<p>[0] <a href="https://gitlab.com/errorship/errorship_python/-/blob/master/errorship/http.py#L70" rel="nofollow">https://gitlab.com/errorship/errorship_python/-/blob/master/...</a><p>[1] <a href="https://gist.github.com/citruspi/16d359ac2dafef6fc876e2dd101089bf" rel="nofollow">https://gist.github.com/citruspi/16d359ac2dafef6fc876e2dd101...</a></p>
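For anyone curious, the fail-open pattern quoted above (deny only on an authoritative "no"; treat any error as authorized) can be sketched roughly like this. This is a minimal illustration, not errorship's actual code - the endpoint and the response shape are hypothetical:

```python
import urllib.error
import urllib.request


def check_authorization(endpoint: str, timeout: float = 2.0) -> bool:
    """Fail-open license check: deny only on an authoritative rejection.

    Any network failure (backend down, no internet, timeout) is treated
    as authorized, so an outage never blocks the host application.
    """
    try:
        with urllib.request.urlopen(endpoint, timeout=timeout) as resp:
            body = resp.read().decode("utf-8", errors="replace").strip()
            # Deny only if the backend explicitly says so.
            return body != "unauthorized"
    except (urllib.error.URLError, OSError, ValueError):
        # Backend unreachable or response unusable: benefit of the doubt.
        return True
```

With the backend unreachable (e.g. Internet access disabled), this returns True instead of raising - the same behavior observed in the Docker test above.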
]]></description><pubDate>Sun, 19 Apr 2020 13:47:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=22915854</link><dc:creator>citruspi</dc:creator><comments>https://news.ycombinator.com/item?id=22915854</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=22915854</guid></item><item><title><![CDATA[New comment by citruspi in "Ask HN: Where to publish Rust command-line tools for easy use?"]]></title><description><![CDATA[
<p>bat and exa also have ~19,100 and ~9,300 stars, respectively, on GitHub and (presumably) even more actual users - unless your project is that popular, I honestly wouldn't worry about it.<p>If your project does get that far, it'll start popping up in all of those package managers. To be super blunt, if you're interested in building and sharing CLI tools written in Rust, I'd just start building and sharing CLI tools written in Rust and stop worrying about all the different package management solutions available. crates.io is the standard for the Rust ecosystem and is more than sufficient until your project has hundreds or thousands of users - most package managers won't even accept your package until it has a decent amount of popularity (e.g. in use by more than 50 people).<p>Once the project is popular enough that it warrants placement in package managers, you'll know because a user will either file an issue requesting the project be added to a package manager or they'll simply do it themselves :D</p>
]]></description><pubDate>Sun, 19 Apr 2020 08:42:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=22914453</link><dc:creator>citruspi</dc:creator><comments>https://news.ycombinator.com/item?id=22914453</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=22914453</guid></item><item><title><![CDATA[New comment by citruspi in "Ask HN: Where to publish Rust command-line tools for easy use?"]]></title><description><![CDATA[
<p>TL;DR Just publish it on crates.io and call it a day.<p>> What I'm most familiar with when it comes to publishing CLI tools is NPM. However, since Rust requires a build step I don't really feel like crates.io is set up for that on its own.<p>FWIW, I'd second hazebooth's nod towards simply publishing on crates.io.<p>I think you may be overthinking this, but that said, I personally go the route of having CI cross-compile binaries for each OS and arch type that I host in order to minimize dependencies on the systems that run them (e.g. avoid installing the Go or Rust compiler).<p>I don't personally maintain package manager options (e.g. apt and brew) for projects because, as you mentioned, it'd be a lot of work. Plus, to be quite frank, when and if a project is popular enough that a large enough audience uses it, one day you'll find that someone else has added a package for it to e.g. apt or brew.<p>But given your familiarity w/ NPM - if you are interested in precompiling binaries for each system - you might want to take a look at Cloudflare's wrangler[0], their tooling for interacting with Cloudflare Workers. wrangler is published to both crates.io and NPM (the latter using a script to pull down pre-compiled binaries they host).<p>Edit: Another option, if you really want to make your project available via a "package manager" that isn't crates.io, would be to release a Docker image. This is an option for CLI tooling - e.g. HashiCorp publishes Packer[1] and Terraform[2] Docker images. That way you don't need to worry about compiling binaries for macOS, Linux w/ glibc, Linux w/ musl, etc. Just publish a single Docker image, ideally as small as possible.<p>[0] <a href="https://github.com/cloudflare/wrangler" rel="nofollow">https://github.com/cloudflare/wrangler</a>
[1] <a href="http://hub.docker.com/r/hashicorp/packer" rel="nofollow">http://hub.docker.com/r/hashicorp/packer</a>
[2] <a href="http://hub.docker.com/r/hashicorp/terraform" rel="nofollow">http://hub.docker.com/r/hashicorp/terraform</a></p>
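For the Docker-image route, a minimal multi-stage Dockerfile along these lines is usually enough - a sketch under the assumption the crate's binary is named "mytool" (a placeholder, not a real project):

```dockerfile
# Build stage: compile the release binary with the official Rust image
FROM rust:1 AS build
WORKDIR /src
COPY . .
RUN cargo build --release

# Runtime stage: copy only the binary into a small base image
FROM debian:stable-slim
COPY --from=build /src/target/release/mytool /usr/local/bin/mytool
ENTRYPOINT ["mytool"]
```

Users then run it as e.g. `docker run --rm you/mytool --help`; if the tool has no system dependencies, building against musl and using a `FROM scratch` runtime stage shrinks the image further.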
]]></description><pubDate>Sun, 19 Apr 2020 08:25:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=22914389</link><dc:creator>citruspi</dc:creator><comments>https://news.ycombinator.com/item?id=22914389</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=22914389</guid></item></channel></rss>