<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: Zandikar</title><link>https://news.ycombinator.com/user?id=Zandikar</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 15 Apr 2026 00:23:44 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=Zandikar" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by Zandikar in "Gemini 2.5"]]></title><description><![CDATA[
<p>I have bad news for you if you think non-paywalled / non-phone#-required Discord communities are immune to AI scraping, especially since it costs less than hammering traditional websites: the push-on-change event is done for you in real-time chat contexts.<p>Especially as the company archives all those chats (not sure for how long) and is small enough that a billion-dollar "data sharing" agreement would be a very enticing offer.<p>If there isn't a significant barrier to access, it's being scraped. And if that barrier is money, it's being scraped, but less often.</p>
]]></description><pubDate>Tue, 25 Mar 2025 19:06:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=43474657</link><dc:creator>Zandikar</dc:creator><comments>https://news.ycombinator.com/item?id=43474657</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43474657</guid></item><item><title><![CDATA[New comment by Zandikar in "Docker limits unauthenticated pulls to 10/HR/IP from Docker Hub, from March 1"]]></title><description><![CDATA[
<p>> Does this commercial company expect volunteers to give them images for free which give their paid subscriptions value?<p>Yes, to an extent, because it costs money to store and serve data, no matter what kind of data it is or its associated IP rights/licensing/ownership. Regardless, this isn't requiring people to buy a subscription or otherwise charging anyone to access the data. It's not even preventing unauthenticated users from accessing the data. It's reducing the rate at which that data can be ingested without ID/Auth, to reduce the operational expense of making that data freely (as in money) and publicly available. Given the explosion in traffic (demand), and the ability to make those demands thanks to automation and AI, relative to the operational expense of supplying it, rate limiting free and public data egress is not in and of itself unreasonable. Especially if those responsible for that increased OpEx aren't respecting fair use (legally or conceptually), and are even potentially abusing the IP rights/licensing of "images [given] for free" to the "Library built on the back of volunteers".<p>To what extent that's happening, how relevant it is to Docker, and how effective/reasonable Docker's response is are all perfectly reasonable discussions to have. The entitlement refers to those that explicitly or implicitly expect or demand such a service be provided for free.<p>Note: you mentioned you don't use Docker. A single docker pull can easily be hundreds of MBs (the official psql image is ~150MB, for example) or in some cases over a GB of network transfer, depending on the image. Additionally, there is no restriction by Docker/Docker Hub that prevents or discourages people from linking to source code or alternative hosts of the data. Furthermore, you don't have to do a pull every time you wish to use an image, and caching/redistributing images within your LAN/Cluster is easy.
It should also be mentioned that Docker Hub is more than just a publicly accessible storage endpoint for a specific kind of data, and their subscription services provide more than just hosting/serving that data.</p>
]]></description><pubDate>Fri, 21 Feb 2025 13:52:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=43127396</link><dc:creator>Zandikar</dc:creator><comments>https://news.ycombinator.com/item?id=43127396</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43127396</guid></item><item><title><![CDATA[New comment by Zandikar in "Why We're Falling Out of Love with Tesla"]]></title><description><![CDATA[
<p>> Luckily, GP made it abundantly clear he wasn’t talking about a beemer<p>He did not, nor did he make it clear - certainly not abundantly so - what he WAS talking about, which is the core and more important problem.<p>For example, just because it's abundantly clear someone isn't talking about a Boeing 747 doesn't mean I have any idea what they <i>are</i> on about.</p>
]]></description><pubDate>Sat, 15 Feb 2025 08:23:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=43056872</link><dc:creator>Zandikar</dc:creator><comments>https://news.ycombinator.com/item?id=43056872</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43056872</guid></item><item><title><![CDATA[New comment by Zandikar in "Passing the Torch on Asahi Linux"]]></title><description><![CDATA[
<p>> why not bite the bullet and write it in C<p>Are they doing things the way they're doing them just to get it done, or because they feel it's the right way to do things and they want to get it done right?<p>If the former, then fair question. If the latter, then you answered your own question at the start of that sentence.<p>> Don’t let that work go to waste.<p>Which is what would happen if they gave up and wrote it in C, if their goal is not just to get it done, but to get it done right.<p>That doesn't mean there aren't better/alternative ways to do things. It doesn't mean it's not worth asking whether a language other than Rust may be better for certain/all parts of this (and yes, that includes C). But it also doesn't mean their hard work is going to waste just because the work is hard.</p>
]]></description><pubDate>Fri, 14 Feb 2025 05:21:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=43045116</link><dc:creator>Zandikar</dc:creator><comments>https://news.ycombinator.com/item?id=43045116</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43045116</guid></item><item><title><![CDATA[New comment by Zandikar in "PostgreSQL Best Practices"]]></title><description><![CDATA[
<p>General AI on non-objective ("best" is undefined here - for what usecase/priorities?), broadly covered topics like this one is mostly just a regression to the mean, with bias bleeding in from other knowledge graphs (eg, trying to use correct grammar/tense (linguistics) in place of endianness (compsci)). As we traverse further into the depths of dead internet theory, and more AI slop pollutes the internet (and, in turn/tandem, poorly curated synthetic datasets), there is some inevitable Ouroboros-style reinforcement here too.<p>So a simple filter in the sense of "omit anything too similar to X" would just omit the mean result within your given deviation. It's effectively asking "What are some truly insane ways to use PostgreSQL?", which is an interesting thought experiment, though if it actually produces useful results then you've basically just written a unit test (domain test?) for when AI slop evolves into full-on AI jumping the shark.<p>If you're doing it based on cross-linking (source-citing), you're basically doing PageRank for AI.<p>If you time-gate familiarity to posts only up to the NLP/General AI explosion in 2022 or so, well, that might still be useful today, but for how long?<p>If you were to write a "smart" filter, you're basically just writing the "PostgreSQL Best Practices" article yourself, but writing it for machines instead of humans. And I don't know what to make of that, but frankly I was led to believe that, if nothing else, the robopocalypse would be more interesting than this.</p>
]]></description><pubDate>Mon, 10 Feb 2025 03:09:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=42996526</link><dc:creator>Zandikar</dc:creator><comments>https://news.ycombinator.com/item?id=42996526</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42996526</guid></item><item><title><![CDATA[New comment by Zandikar in "Go is a well designed language, actually"]]></title><description><![CDATA[
<p>> That isn't my argument<p>That is in fact what I'm trying to get you to understand. You're arguing a different point than what was proposed. What you propose is entirely valid, but is missing the forest for the trees.<p>Your goals for what a language should be/do and Golang's goals for what a language should be/do are not equivalent. You admit it yourself in part here:<p>>  I just think starting from a "C mindset" was the wrong foundation in which to build a language at this point in time.<p>It's why I've entirely bypassed your attempts to discuss the nuance.<p>You may disagree with the "C mindset" and other design principles, but ignoring the context of why decisions were made is not productive discourse for determining whether something is designed well or not, which again, was your original and very firmly stated assertion up top.<p>There is more to a language than its abstractions, syntax/syntactic sugar, and paradigm. Learning curve, intuitiveness, familiarity, conventions, devex (creating and maintaining), usecase, and who is intended/expected to use it are all important as well, and inform why certain decisions are made.<p>In other words, "people need to move on from the C mindset" is an entirely valid argument to make. It just has no bearing on whether C or C-inspired languages are designed well, as it entirely ignores what those designs were intended to achieve and who they were trying to cater to.<p>TL;DR: How well something caters to your goals isn't the same discussion as how well it's designed to cater to someone else's/its own stated goals. So yes, you feel it's badly designed because you refuse to acknowledge that it's not trying to cater to you, and that it has no duty to.</p>
]]></description><pubDate>Fri, 10 Jan 2025 19:23:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=42659055</link><dc:creator>Zandikar</dc:creator><comments>https://news.ycombinator.com/item?id=42659055</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42659055</guid></item><item><title><![CDATA[New comment by Zandikar in "Go is a well designed language, actually"]]></title><description><![CDATA[
<p>> I'm not sure how making the language more expressive and less prone to errors makes programs more difficult to maintain<p>I never said nor contested that it did. I was questioning what the design philosophy or general appeal of the language was. If what you believe it is/should be differs from the maintainers themselves, then naturally you're going to likely have friction with how it solves problems or implements features in the first place, as you have divergent goals/philosophies. That doesn't make it a poorly designed language, it makes it the wrong tool (for you) for the job.<p>To abstract the concept: Nails and Screws are both perfectly valid approaches to fastening things in general, but if you expect a hammer to turn screws effectively, you're gonna have a bad time because your approach/philosophies are misaligned, not because the Hammer is poorly designed. That also doesn't mean there isn't merit in the discussion of the pros and cons of nails and screws and when/how to use them, but that's fundamentally a separate (if adjacent, and still valid) discussion.<p>EDIT: also, just want to clarify, I don't know Golang, so have no skin in the "is it better/worse/correct". I've long been a supporter of "the best tool for the job is the one you know", with perhaps the only exception to that being Brainfuck[0], unless your intended goal is just to fuck with people lol.<p>0: <a href="https://en.wikipedia.org/wiki/Brainfuck" rel="nofollow">https://en.wikipedia.org/wiki/Brainfuck</a></p>
]]></description><pubDate>Thu, 09 Jan 2025 17:04:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=42647608</link><dc:creator>Zandikar</dc:creator><comments>https://news.ycombinator.com/item?id=42647608</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42647608</guid></item><item><title><![CDATA[New comment by Zandikar in "Go is a well designed language, actually"]]></title><description><![CDATA[
<p>> the goal is to make it "easy" for the users to write correct programs.<p>I've only ever dabbled in Golang, but isn't the goal of Go ultimately to make it easy for devs to <i>maintain</i> programs, with its hyperfocus on non-breaking changes and backwards compatibility (with previous versions)? It's less about being easy/nice to write the first time, and more that you don't have to re-write it again and again with each version change, no?<p>I'm not saying the OP article is correct - again, I'm not familiar enough with the language to comment on that - but the whole reason I keep wanting to adopt it (I just don't have the time) is that everyone I know who uses it sings its praises for the above features. That seems to be the defining point driving its adoption, at least among those I interface with.</p>
]]></description><pubDate>Thu, 09 Jan 2025 16:39:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=42647348</link><dc:creator>Zandikar</dc:creator><comments>https://news.ycombinator.com/item?id=42647348</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42647348</guid></item><item><title><![CDATA[New comment by Zandikar in "Ergo S-1 – An open-source ergonomic wireless keyboard"]]></title><description><![CDATA[
<p>You can change the behavior of your layer activation key(s) so that you aren't n+1ing your buckies, and you can also customize the keymap of all layers (including the base layer) so that you don't have Ctrl and A sharing the same physical key between layers, avoiding that exact issue. Though, if you do choose to do that, there is still a way to send Ctrl-A (using one-shot keys [2], for example). I've listed a few options for your layer activation key behavior from the QMK wiki [0], as it's more succinct than the ZMK wiki [1], but QMK and ZMK (which the Ergo S-1 uses) share similar functionality here. Non-exhaustive list of layer-activation behaviors from the QMK wiki:<p>> MO(layer) - momentarily activates layer. As soon as you let go of the key, the layer is deactivated.<p>> TG(layer) - toggles layer, activating it if it's inactive and vice versa<p>> TT(layer) - Layer Tap-Toggle. If you hold the key down, layer is activated, and then is de-activated when you let go (like MO). If you repeatedly tap it, the layer will be toggled on or off<p>You can also use macros if you'd prefer (but they're not required) to handle triple (or more) buckies, which both the ZMK and QMK firmwares support.<p>I will note that this Ergo S-1 seems to be missing at least 8 keys that most other Ergodox keyboards have (the 3 keys of the inner column on each side and the bottom left and right corner keys), so the total physical keycount appears closer to a 60% keyboard. In that way, you're going to be more dependent on layers (or macros) in general than with other keyboards of this type (such as the Ergodox-EZ [3]).<p>EDIT: Apologies, I wasn't paying attention to usernames when responding to comments and basically gave you this answer twice across two different comments. 
Deleted the other as this one is more complete/to the point.<p>0: <a href="https://docs.qmk.fm/feature_layers" rel="nofollow">https://docs.qmk.fm/feature_layers</a><p>1: <a href="https://zmk.dev/docs/keymaps/behaviors/layers" rel="nofollow">https://zmk.dev/docs/keymaps/behaviors/layers</a><p>2: <a href="https://docs.qmk.fm/one_shot_keys" rel="nofollow">https://docs.qmk.fm/one_shot_keys</a><p>3: <a href="https://ergodox-ez.com/" rel="nofollow">https://ergodox-ez.com/</a></p>
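For a concrete sense of how MO/TG/TT are used, here's a minimal QMK keymap fragment. This is an illustrative sketch, not the Ergo S-1's actual config: the layer names (_BASE, _NAV), the tiny LAYOUT, and key choices are all hypothetical, and the real LAYOUT macro is board-specific.

```c
// Hypothetical two-layer QMK keymap fragment illustrating layer-switch keys.
// _BASE and _NAV are made-up layer names; LAYOUT is the board's layout macro.
enum layers { _BASE, _NAV };

const uint16_t PROGMEM keymaps[][MATRIX_ROWS][MATRIX_COLS] = {
    [_BASE] = LAYOUT(
        KC_A,      KC_B,
        MO(_NAV),  // held: _NAV is active; released: back to _BASE
        TT(_NAV)   // held: acts like MO; tapped repeatedly: toggles _NAV
    ),
    [_NAV] = LAYOUT(
        KC_LEFT,   KC_RGHT,
        KC_TRNS,   // transparent: falls through to whatever _BASE has here
        KC_TRNS
    ),
};
```

Because _NAV leaves its switch positions transparent (KC_TRNS), the MO/TT keys keep working while the layer is active, which is how you avoid trapping yourself in a layer.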
]]></description><pubDate>Fri, 03 Jan 2025 17:41:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=42587741</link><dc:creator>Zandikar</dc:creator><comments>https://news.ycombinator.com/item?id=42587741</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42587741</guid></item><item><title><![CDATA[New comment by Zandikar in "Intel announces Arc B-series "Battlemage" discrete graphics with Linux support"]]></title><description><![CDATA[
<p>Of course. A cheap card with oodles of VRAM would benefit <i>some</i> people, I'm not denying that. I'm tackling the question of whether it would benefit Intel (as the original question was "why doesn't Intel do this"), and the answer is: profit/loss.<p>There's a huge number of people in that community who would <i>love</i> to have such a card. How many are <i>actually</i> willing and able to pony up >=$3k per unit? How many units would they buy? Given all of the other considerations that go into making such cards useful and easy to use (as described), the answer is - in Intel's mind - nowhere near enough, especially when the financial side of the company's jimmies are so rustled that they sacked Pat G without a proper replacement and nominated some finance bros as interim CEOs. Intel is ALREADY taking a big risk and financial burden trying to get into this space in the first place, and they're already struggling, so the prospect of betting the house like that just isn't going to fly for finance bros who can't see past the next 2 quarters.<p>To be clear, I <i>personally</i> think there is huge potential value in trying to support the OSS community to, in essence, "crowd source" and speedrun some of that ecosystem by supplying (compared to the competition) "cheap" cards that eschew the artificial segmentation everyone else is doing, and investing in that community. But I'm not running Intel, so while that'd be nice, it's not really relevant.</p>
]]></description><pubDate>Wed, 04 Dec 2024 18:55:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=42320683</link><dc:creator>Zandikar</dc:creator><comments>https://news.ycombinator.com/item?id=42320683</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42320683</guid></item><item><title><![CDATA[New comment by Zandikar in "Intel announces Arc B-series "Battlemage" discrete graphics with Linux support"]]></title><description><![CDATA[
<p>> They can't just slap more memory on the board, they would need to dedicate significantly more silicon area to memory IO and drive up the cost of the part,<p>In the pedantic sense of just literally slapping more on existing boards? No, they might have one empty spot for an extra BGA VRAM chip, but not enough for the gains we're talking about. But this is absolutely possible, trivially so, for someone like Intel/AMD/Nvidia with full control over the architectural and design process. Is it a switch they flip at the factory 3 days before shipping? No, obviously not. But if they had intended this ~2 years ago, when it was just a product on the drawing board? Absolutely. There is zero technical/hardware/manufacturing reason they couldn't do it. And considering the "entry level" competitor product is the M4 Max, which starts at at least $3,000 (for a 128GB-equipped one), the pricing margin more than exists to cover a few hundred extra in RAM and the overhead of higher-layer, more populated PCBs.<p>The real impediment is what you landed on at the end there, combined with the greater ecosystem not having support for it. Intel could drop a card that is, by all rights, far better performing hardware than a competing Nvidia GPU, but Nvidia's dominance in APIs, CUDA, networking, and fabric switches (NVLink, Mellanox, BlueField) for the past 10+ years, plus all of the skilled labor familiar with them, would largely render a 128GB Arc GPU a dud on delivery, even if it was priced as a steal. The same thing happened with the Radeon VII. 
Killer compute card that no one used because while the card itself was phenomenal, the rest of the ecosystem just wasn't there.<p>Now, if intel committed to that card, and poured their considerable resources into that ecosystem, and continued to iterate on that card/family, then now we're talking, but yeah, you can't just 10X VRAM on a card that's currently a non-player in the GPGPU market and expect anyone in the industry to really give a damn. Raise an eyebrow or make a note to check back in a year? Sure. But raise the issue to get a greenlight on the corpo credit line? Fat chance.</p>
]]></description><pubDate>Wed, 04 Dec 2024 02:00:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=42313825</link><dc:creator>Zandikar</dc:creator><comments>https://news.ycombinator.com/item?id=42313825</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42313825</guid></item><item><title><![CDATA[New comment by Zandikar in "Show HN: I am Building a Producthunt alternative"]]></title><description><![CDATA[
<p>As an American, I can't recall "the next best thing" ever meaning "second best", but rather the up-and-coming latest and greatest thing.<p>Big country though; could be a regional thing.</p>
]]></description><pubDate>Tue, 26 Nov 2024 22:13:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=42250645</link><dc:creator>Zandikar</dc:creator><comments>https://news.ycombinator.com/item?id=42250645</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42250645</guid></item><item><title><![CDATA[New comment by Zandikar in "AI chipmaker Cerebras files for IPO"]]></title><description><![CDATA[
<p>Comparing a WSE-3 to an H100 without considering the systems they go in, or the cooling, networking, etc. that supports them, means little when doing cost analysis, be it CapEx or TCO. A better (but still flawed) comparison would be a DGX H200 (a cluster of H100s and their essential supporting infra) to a CS-3 (a cluster of WSE-3s and their essential supporting infra in a similar form factor/volume).<p>Now, is Cerebras going to eventually beat Nvidia, or at least compete healthily with Nvidia and other tech titans in the general market or a given lucrative niche of it? No idea. That'd be a cool plot twist, but hard to say. But it's worth acknowledging that investing in a company and buying their products are two entirely separate decisions. Many of Silicon Valley's success stories are a result of people investing in the potential of what a company could become, not because it was already the best on the market, and if nothing else, Cerebras' approach is certainly novel and promising.</p>
]]></description><pubDate>Tue, 01 Oct 2024 03:35:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=41704415</link><dc:creator>Zandikar</dc:creator><comments>https://news.ycombinator.com/item?id=41704415</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41704415</guid></item><item><title><![CDATA[New comment by Zandikar in "National Park Service Will Cite AWD Drivers for Driving on 4WD-Only Trails"]]></title><description><![CDATA[
<p>> That is to say, I'm not convinced by the article's hypothesis about locking diffs.<p>I'm not an offroader, but I did own a vehicle without a locking diff that I later upgraded (slapped a G80 on the rear of an '80s GMC Sierra), and that made a huge difference even on pavement in inclement weather. Granted, that was a RWD pickup with very little weight (typically) over the drive wheels. I'd honestly be shocked if the impact was minimal in truly offroad conditions. Granted, RWD is even less capable than AWD or 4WD, so it's by no means an apples-to-apples comparison; just my 2 cents.<p>That said, this isn't a binary thing (locking vs open). There's a wide variety of AWD technology out there, and I could nerd out on the specifics, but at the end of the day, some systems are very limited in their ability to send power to one set of wheels vs the other, may not have locking/limited-slip diffs at all, and just use the brakes to prevent wheel spin. I will say, Subarus (especially the higher/sportier trims like the WRX/STi) can often hang with and even shame some 4WD vehicles in some conditions. There's no shortage of videos of Subarus helping a 4WD out of a jam, or completing a course one could not, but how much of that is a function of their specific AWD tech and limited-slip diffs vs proper tires and lighter weight and any number of things is a matter of debate I'm not qualified to weigh in on. Again, I'm a gearhead, but not an offroader.<p>So I suspect it's not so much the Park saying "Subaru/AWD can't cut it", but rather that keeping track of which years, brands, models, trims, and/or optional equipment <i>does</i> cut it is a much more massive headache to track and verify than just saying "4WD yes, everything else no", and I can't really fault them for that.</p>
]]></description><pubDate>Fri, 09 Aug 2024 00:01:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=41197646</link><dc:creator>Zandikar</dc:creator><comments>https://news.ycombinator.com/item?id=41197646</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41197646</guid></item><item><title><![CDATA[New comment by Zandikar in "GPUs can now use PCIe-attached memory or SSDs to boost VRAM capacity"]]></title><description><![CDATA[
<p>Depends on the PCIe/DMA topology of the system, but in short: in an ideal system you can avoid the bottleneck of the CPU interconnect (eg, AMD's Infinity Fabric) and reduce overall CPU load by (un)loading data directly from your NVMe storage to your PCIe accelerator [0]. You can also combine this with RDMA/RoCE (provided everything in the chain supports it) to build a clustered network with NVMeoF, serving data from high speed NVMe flash array(s) to clusters of GPUs; this can even reduce cost/space/power by reducing the need for high cost/high power CPUs. Prior to CXL's proliferation (which realistically we haven't achieved yet), this is mostly limited to bespoke HPC systems; most consumer systems lack the PCIe lanes/topology to really make use of it in a practical way.<p>On the consumer side, you're right, using the system RAM is probably a better approach, as most consumer motherboards route the NVMe storage up to the CPU interconnect and then back "down" to the GPU (or worse, through the "southbridge" chipset(s), like on X570), so you take that hit anyway.<p>However, if you have a PCIe switch on board that allows data to flow directly from storage to GPU without a round trip across the CPU, then NVMe/CXL/SCM modules would theoretically be better than system RAM. Depends on the switch, retimers, muxing, topology, etc.<p>Regardless of what you're using for direct storage and how ideal your topology is, the MTps and GBps over PCIe are <i>significantly</i> lower than onboard VRAM (be it GDDR or especially HBM), and bandwidth limited to boot. That doesn't mean it's useless by any means, but it's important to point out that this doesn't turn a 20GB VRAM card into a 2.02TB VRAM card just because you DirectStorage'd a 2TB drive to it, no matter how ideal the setup is. However, as PCIe increases in bandwidth and Storage-Class-Memory devices (and storage tech in general) continue to improve, it's rapidly becoming more viable. 
On PCIe Gen 3, you're probably shooting yourself in the foot. On PCIe Gen 6, you can realistically see a very real benefit. But again, there's a lot of "depends" here, and for now you're probably better off buying a bigger GPU (or multiple) if you're not on the cutting edge with the corporate credit line.<p>0: <a href="https://developer.nvidia.com/blog/gpudirect-storage/" rel="nofollow">https://developer.nvidia.com/blog/gpudirect-storage/</a></p>
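A rough back-of-envelope makes the PCIe-vs-VRAM gap concrete. The bandwidth numbers below are approximate peak figures I've assumed for illustration (real-world throughput is lower, and exact specs vary by card and link width):

```python
# Illustrative peak bandwidths in GB/s (approximate, assumed for this sketch):
# PCIe x16 per generation vs. typical on-card memory.
PCIE_X16 = {"gen3": 16, "gen4": 32, "gen5": 64, "gen6": 128}
VRAM = {"gddr6": 500, "hbm3": 3000}  # ballpark figures, not measurements

def transfer_seconds(size_gb, bandwidth_gbps):
    """Idealized time to move size_gb at the given bandwidth, zero overhead."""
    return size_gb / bandwidth_gbps

# Streaming 100 GB of working set over the link vs. reading it from VRAM:
for gen, bw in sorted(PCIE_X16.items()):
    print(f"PCIe {gen} x16: {transfer_seconds(100, bw):6.2f} s")
print(f"HBM3 on-card:  {transfer_seconds(100, VRAM['hbm3']):6.2f} s")
```

Even under these generous assumptions, Gen 6 is still over an order of magnitude behind HBM, which is why direct storage supplements VRAM rather than substitutes for it.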
]]></description><pubDate>Tue, 02 Jul 2024 15:37:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=40857682</link><dc:creator>Zandikar</dc:creator><comments>https://news.ycombinator.com/item?id=40857682</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40857682</guid></item><item><title><![CDATA[New comment by Zandikar in "Social-Media Influencers Aren't Getting Rich–They're Barely Getting By"]]></title><description><![CDATA[
<p>Article is paywalled, but I assume that's the reason they specified "creator earners" and not just "content creators" or "influencers".</p>
]]></description><pubDate>Tue, 18 Jun 2024 17:30:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=40720169</link><dc:creator>Zandikar</dc:creator><comments>https://news.ycombinator.com/item?id=40720169</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40720169</guid></item><item><title><![CDATA[New comment by Zandikar in "Jike: The obscure social media app beloved by China's tech scene"]]></title><description><![CDATA[
<p>> Hard to follow someone<p>IMO: Good. The harder it is to shout "look at <i>ME</i>" for clout and profit, the more productive and on-topic the discourse tends to be, and the easier it is for moderation to weed out trolls and off-topic/hateful/spammy discussion. The fact that this place isn't about who any given poster is, but what they have to say, is part of what makes it such a vibrant, valuable, and informative place to have discussions.<p>> hard to form groups discussing various topics<p>Why is this necessary when various topics tend to get substantial discussion already? Sure, some more than others, but that activity tends to form a rather organic filter without facilitating the echo chambers and mob mentality that tend to emerge when you start erecting walled gardens. Sure, that still happens to an extent, but much less than on, say, Reddit or Twitter.<p>I fail to see how making this more like platforms succumbing to the enshittification of the internet is a path to improvement here.</p>
]]></description><pubDate>Fri, 17 May 2024 15:42:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=40391092</link><dc:creator>Zandikar</dc:creator><comments>https://news.ycombinator.com/item?id=40391092</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40391092</guid></item><item><title><![CDATA[New comment by Zandikar in "Save the Web by Being Nice"]]></title><description><![CDATA[
<p>I mean, it's a matter of pedantry and subjectivity.<p>Technically, social media is a superset of all of those things. They're all media platforms that primarily operate around user socialization (aka engagement). They are, by definition, social media, and in turn social media has been around since long before Facebook (eg, Slashdot, BBS, Usenet, etc).<p>I do agree there's a difference/nuance to be recognized here though (eg, old-web vs new-web social interactions). I think user-vs-content focus kinda misses the mark; the truly key differentiator for me is what influences the activity, both in terms of driving people to post in the first place and in curating what they can('t) or should(n't) post. In other words, is the platform a community trying to serve its users, or a company trying to serve its stake/shareholders?</p>
]]></description><pubDate>Tue, 30 Apr 2024 15:39:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=40212261</link><dc:creator>Zandikar</dc:creator><comments>https://news.ycombinator.com/item?id=40212261</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40212261</guid></item><item><title><![CDATA[New comment by Zandikar in "Walmart joins other big retailers in scaling back on self-checkout"]]></title><description><![CDATA[
<p>Sam's offers this, actually; it's great.</p>
]]></description><pubDate>Mon, 22 Apr 2024 14:45:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=40114882</link><dc:creator>Zandikar</dc:creator><comments>https://news.ycombinator.com/item?id=40114882</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40114882</guid></item><item><title><![CDATA[New comment by Zandikar in "How many bathrooms have Neanderthals in the tile?"]]></title><description><![CDATA[
<p>If the recommended course of action to contribute here is to involve the police and inform them there might be human remains on your property, then I strongly doubt you're gonna get many people willing to participate at all. If this is a genuine and serious potential source of fact-finding/analysis that is of value to the field, then the field needs to find a less... let's call it polarizing, option.</p>
]]></description><pubDate>Wed, 17 Apr 2024 14:32:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=40065204</link><dc:creator>Zandikar</dc:creator><comments>https://news.ycombinator.com/item?id=40065204</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40065204</guid></item></channel></rss>