<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: sebow</title><link>https://news.ycombinator.com/user?id=sebow</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 13 Apr 2026 05:26:37 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=sebow" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by sebow in "X blames users for Grok-generated CSAM; no fixes announced"]]></title><description><![CDATA[
<p>If you post pictures of yourself on X and don't want grok to "bikini you", block grok.<p>Yes, under the TOS, what grok is doing is not the "fault" of grok (the cause is the post itself, enabled by two humans: the poster and the prompter; human intent is what initiates the generated post, not the bot, just as a gun is fired by a human, not by strong winds). You could argue it's the fault of the "prompter", but then we circle back to the cat-and-mouse censorship issue. And no, I don't want a more censored grok version that's unable to "bikini a NAS" (which is what I've been fortunate to witness) just because "new internet users" don't understand what the Internet is. (Yes, I know you can obviously fine-tune the model to allow funny generations and deny explicit/spicy ones.)<p>If X implemented what the so-called "moralists" want, it would just turn into Facebook.<p>And for the "protect the children" folks: it's really disappointing how we keep coming back to this bullsh*t excuse every time a moral issue arises. Blocking grok is a fix both for the person who doesn't want to get edited AND for the user who doesn't want to see grok replies (in case the posts don't get the NSFW tag in time).<p>Ironically, a decent number of the people who want to censor grok are bluesky users, where "lolicon" and similar dubious degenerate content is posted non-stop AS HUMAN-MADE content. Or what, it's suddenly a problem just because it's an AI? The fact that you can "strip" someone by tweeting at a bot?<p>And lastly, sex sells. If people haven't figured out that "bikinis", "boobs", and everything related to sex will be what wins the AI/AGI/etc. race (it actually happens in ANY industry), then that's their problem. Dystopian? Sure, but it's not an issue you can win with moral arguments like "don't strip me". You will get stripped down if it creates 1M impressions and drives engagement. You will not convince Musk (or anyone who makes such decisions) to stop grok from "stripping you", because the alternative is that other, non-grok/xAI/etc. entities/people will make the content, drive the engagement, and make the money.</p>
]]></description><pubDate>Mon, 05 Jan 2026 20:52:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=46504785</link><dc:creator>sebow</dc:creator><comments>https://news.ycombinator.com/item?id=46504785</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46504785</guid></item><item><title><![CDATA[New comment by sebow in "US sanctions EU government officials behind the DSA"]]></title><description><![CDATA[
<p>I disagree in principle, but let's say people decide to do so. Not only are these not media companies in the US (under Section 230); in the EU too, social networks like Facebook/Instagram/etc. are treated legally as "public squares" and not as media companies like the BBC. When you defame somebody on Instagram, you're the one held legally responsible, not Meta. Why would social networks be responsible for DSA violations committed by their users? This is beyond the fact that implementing an "instant-takedown" censorship mechanism is draconian. The DSA's Articles 16-17 do not require the person reporting the content (who can also be anonymous, which is ironic) to provide >legally sufficient< evidence for the takedown. That goes directly against what I would consider "normal" in a society where you're innocent until proven guilty. The "trusted flaggers" (Article 22) do need to submit more evidence, but this just becomes a problem of partisanship and bias. In practice, this means you can report someone for illegal activity, provide insufficient evidence (in the legal sense), and the content is taken down, with the "battle" starting afterwards.<p>YouTube's DMCA takedown system (copyright being a legally far more serious issue than what the DSA is supposed to protect against) is not perfect and cannot be perfect (proven by the fact that content is unjustly taken down all the time). The DSA is just the same, except more vague, more complicated, and (imo) ultimately worse.<p>The DSA has an appeal mechanism, with an option for out-of-court settlements, which means you can employ independent fact-checkers (certified by Digital Services Coordinators (DSCs)); the list of certified bodies is, of course, maintained by the European Commission. The problem is that these DSCs are appointed by each country's government, which means there's potential room for conflicts of interest not only at the national level (I find it hard to believe appointed DSCs are completely impartial to the government that appointed them) but also at an EU-wide level (certified fact-checking bodies that are supposedly not influenced by the EC when judging cases pertaining to the EU in international cases).</p>
]]></description><pubDate>Wed, 24 Dec 2025 15:02:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=46376166</link><dc:creator>sebow</dc:creator><comments>https://news.ycombinator.com/item?id=46376166</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46376166</guid></item><item><title><![CDATA[New comment by sebow in "US sanctions EU government officials behind the DSA"]]></title><description><![CDATA[
<p>Everybody knows about Cambridge Analytica being used in the US/UK, but, for example, hardly anyone knows that Cambridge Analytica was also used by political parties within the EU (I won't give specific names [for now], but parties [from Italy, Malta, CZ, and Romania], members of the euro-parliamentary groups EPP/RE/SD, in the 2014-2016 period). Why did nothing happen back then? The parties in question were usually pro-EU, so it's not really surprising that no such "scandal" was discovered until later, when Cambridge Analytica was used by the UK/US.<p>And the Cambridge Analytica "phenomenon" is not really something you can realistically prevent. I'm sure it happens now with some other, better firm (Palantir, probably), but this is really beside the point. The point is that normal citizens, like you and me, are effectively censored upon suspicion before any burden of proof is met. Nothing says "protecting democracy" like deleting posts from social media and only finding out the context afterwards.<p>> Individual free speech is not - of course - ethically or politically identical to "free speech" produced by weaponised industrial content farms funded by corporations and foreign actors.<p>Sure, nobody likes bots/paid shills. But of course, in a normal society, you have to prove those posts are made by actual bots/content farms before taking any action. Otherwise it's just censorship. Election interference always happens, without exception, though the degree varies. This is not to say we shouldn't point it out when it happens, but we should not censor our own citizens because "the models indicate a pattern akin to foreign entities." Patterns are not proof, and so employing a crowdsourced fact-checking system like Community Notes or YouTube's is at least partly the actual solution, instead of directly removing content. Under the DSA, you can effectively remove content without meeting any burden of proof regarding the identity of the poster. Platforms must provide a "statement of reasons" (Article 17) to affected users for any removal, including appeal rights, but this does not impose pre-removal identity checks on posters.</p>
]]></description><pubDate>Wed, 24 Dec 2025 13:01:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=46375209</link><dc:creator>sebow</dc:creator><comments>https://news.ycombinator.com/item?id=46375209</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46375209</guid></item><item><title><![CDATA[New comment by sebow in "US sanctions EU government officials behind the DSA"]]></title><description><![CDATA[
<p>In the last electoral cycle I saw firsthand censorship applied to remote acquaintances because of the newly added EU DSA (which in and of itself would not be a huge disaster [by EU standards] if it weren't accompanied by arrests), used as justification over some posts on TikTok and X; so I don't really care who hurts the pro-censorship faction within the EU. People have been arrested in Western Europe for speech online for more than a decade now, but it now also happens in Eastern Europe, where I live, bringing back communist-era "vibes". You'll excuse me if the anti-Trump or anti-US (because of the current administration) rhetoric doesn't move me on this.<p>Or let me guess, "Trump bad, and therefore we should accept the DSA/Chat Control 2.0/3.0/etc."? Sorry, I don't care. And people who think this is only about the recent X fine are also wrong (this started last year, when Thierry Breton began influencing European elections while also boasting about how he could annul such elections without repercussions; you can deduce what I'm talking about by asking an LLM). This is in part the US government protecting private companies (and thus itself) from fines, sure, but the broader point about censorship within the West applies. Everything that hurts the people making legislation about the Internet (or software in general) within the EU should be welcomed with open arms.<p>EU apologists would rather change the subject and talk about Trump and the polarizing social environment in the US than acknowledge that within the EU there's not even a chance for discourse about any policy (especially the nonexistent free speech) because of the aforementioned laws. The same people will act surprised when extreme positions on the EU are adopted by an ever-increasing number of people "until morale improves".</p>
]]></description><pubDate>Wed, 24 Dec 2025 12:06:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=46374870</link><dc:creator>sebow</dc:creator><comments>https://news.ycombinator.com/item?id=46374870</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46374870</guid></item><item><title><![CDATA[New comment by sebow in "The U.S. Is Funding Fewer Grants in Every Area of Science and Medicine"]]></title><description><![CDATA[
<p>Thank you for the detailed insight. You've touched on an aspect that outsiders (like me) cannot truly grasp but can only guess at: motivation. And it's definitely true that sustaining motivation in the private sector is somewhat harder (you've explained it best), at least in the majority of private companies; but, as you've mentioned, it doesn't seem to be a problem with the system itself so much as with the kinds of environments that grow inside companies. Corporate culture is, more often than not, very toxic, especially when big money is involved (and/or big ideas; in science, the subject of research can matter even more than money).<p>Or maybe it is a problem with the structure that fosters the environment. What comes to mind is the exceptional case of OpenAI, which started as a nonprofit. Sure, it "ended badly" because of the well-known drama, but my guess is that, beyond the money poured into it, it thrived because researchers had a kind of "emotional safety net", meaning they weren't pressured for results as much. That's probably also why some startups perform much better.<p>I think career continuity matters, and you don't necessarily get that in the private sector. This discontinuity then leads to practical work discontinuity, which means less work done (amplified by the non-decentralized nature of private-sector work compared to shared science in public research, as you've explained).<p>My bottom line is that the private sector could do better, and frankly it's kind of their loss. What I'm curious about is whether a "semi-private" approach is better: a non-profit or some kind of foundation. I guess in practice they're still private, but whether the money side can be "solved" through crowdfunding or other modern methods, and whether they're viable long-term, remains to be seen. One thing is for sure: a culture appreciative of science will open more doors to novel methods of funding and organizing (maybe in the future these methods could rival the "traditional ways" of public science).</p>
]]></description><pubDate>Mon, 22 Dec 2025 22:08:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=46359731</link><dc:creator>sebow</dc:creator><comments>https://news.ycombinator.com/item?id=46359731</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46359731</guid></item><item><title><![CDATA[New comment by sebow in "The U.S. Is Funding Fewer Grants in Every Area of Science and Medicine"]]></title><description><![CDATA[
<p>Please convince me that government funding is better than the private sector. Before people jump on the "late capitalism, everything will be profit-incentivized" bandwagon: I fail to see how things like finding a good new medicine/the next propulsion system/the most efficient new energy solution/etc. cannot be linked to the more theoretical fields, which I'm assuming are some of, if not most of, the positions/areas of science affected by this.<p>Everything can be "sold", especially in today's age, with the new methods of discoverability. But I would argue scientists don't need to "sell" something in the capitalist sense. They need to link the hope of a new discovery to inventors, innovators, and entrepreneurs. Sure, some things might "fail" to continue by failing to adjust to the markets, and some scientific discoveries might be used for (ethically) bad things, but this is (1) inevitable and (2) the responsibility of the scientists and of the people buying the end product/service. If I'm not mistaken, most bad/evil/etc. discoveries throughout history were made by scientists working FOR the government/king/etc. If anything, democratizing science through capitalist markets seems like a more beneficial way to develop self-sustaining science. The key thing is transparency, which can be less present in the private sector, especially when corruption is involved (assuming transparency is demanded by the government at all).</p>
]]></description><pubDate>Mon, 22 Dec 2025 18:09:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=46356786</link><dc:creator>sebow</dc:creator><comments>https://news.ycombinator.com/item?id=46356786</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46356786</guid></item><item><title><![CDATA[New comment by sebow in "Ask HN: My mother was scammed out of all her savings. What should I do?"]]></title><description><![CDATA[
<p>(If you're OP: this is not a solution per se but more of a generalist rant; just so you don't waste critical time.)<p>People talk about changing laws or technical solutions, but the inconvenient truth is that technically literate people should peer-pressure nearby friends/family/etc. into being more aware of such possibilities. I've done so, to the extent that some people find it either borderline schizophrenic or paranoid (to my "luck", I live in an ex-communist country, where most people are usually skeptical of strangers in many contexts, so this group of people is relatively small).<p>People who know better bear a responsibility to help those who don't, and those who are too kind (or naive) for their own good. Even though I'm the "tech guy" in my close circles (family, friends, etc.), like many here, I often do the >opposite< of what other "pro-technologists" do these days: I don't encourage people, especially the older generation OR the more tech-illiterate, to use more technology, because doing so obviously "injects" another attack vector into their lives. Increasingly, this is not possible: everything gets digitalized to the detriment of such groups. But this also gets into the politics of keeping "older options" (cash, paper trails, etc.) available even as digitalization happens. Oftentimes the older options are more secure, though obviously less convenient.<p>This is a non-solution, yes, but it is (imo) the correct way to approach this, as more and more places LEGALLY force the digitalization of different institutions (banking, government agencies, etc.), which inherently either adds to, or worse, completely shifts the risk into virtual spaces. This is why a "legal" solution is more often than not either a slow one or a completely pointless one. It will always be an arms race between scammers (who operate more effectively [in theory] due to their decentralized nature) and the governments/banks/etc., which operate in a more centralized fashion, thus demanding and imposing more control over all parties involved. A legal fix will always demand more than it's worth.<p>I digress with my shift into politics, but the bottom line is this: don't let your peers/family/loved ones get into these situations. If you have an "authoritative" voice regarding tech, use it first to cultivate awareness of the dangers, before cultivating hype or anything else. (Obviously not talking about anyone specifically, but about the "geeksphere" as a whole.)<p>Good luck to you and your family.</p>
]]></description><pubDate>Mon, 22 Dec 2025 17:20:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=46356195</link><dc:creator>sebow</dc:creator><comments>https://news.ycombinator.com/item?id=46356195</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46356195</guid></item><item><title><![CDATA[New comment by sebow in "Devastated PC builder orders DDR5 RAM from Amazon receives DDR2 and some weights"]]></title><description><![CDATA[
<p>I'm more impressed by the fact that there's still DDR2 going around. I know DDR3 is still alive and well, even still manufactured (I've noticed new DDR3 kits appearing, which is weird); but I didn't know DDR2 was still in stock. I'm assuming industrial/embedded applications still use it for obvious reasons, but I have to wonder to what degree DDR2 kits are still being produced.</p>
]]></description><pubDate>Fri, 19 Dec 2025 01:13:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=46321138</link><dc:creator>sebow</dc:creator><comments>https://news.ycombinator.com/item?id=46321138</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46321138</guid></item><item><title><![CDATA[New comment by sebow in "AI Is Breaking the Moral Foundation of Modern Society"]]></title><description><![CDATA[
<p>AI and social media are only exacerbating the decline of morality in society, spreading it and making it more visible. Morality has been "breaking", objectively speaking, for at least a century, most noticeably since the advent of postmodernism.</p>
]]></description><pubDate>Wed, 03 Dec 2025 08:44:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=46131902</link><dc:creator>sebow</dc:creator><comments>https://news.ycombinator.com/item?id=46131902</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46131902</guid></item><item><title><![CDATA[New comment by sebow in "Europe's New War on Privacy"]]></title><description><![CDATA[
<p>Firstly, the EU =/= Europe.<p>Secondly: the death of the EU cannot come soon enough. Ten years ago, euroskeptics like me were wrongfully called "russophiles" (lmao), even though I'm from a country that is constantly threatened by Russia with drones, propaganda, etc. (RO, for the curious). Ironically enough, to those coming from ex-communist countries, the EU increasingly looks like the USSR, but in blue instead of red. It's infinitely better than communism, sure, but the optics and trajectory of the EU resemble those of the USSR in its "wellbeing of the workers" (and other socio-cultural) propaganda phase (the irony being Russia here, obviously).<p>History never repeats, but it rhymes. A calcified supranational institution drifts into authoritarianism in the later stages of its existence. Reform never happens, and when it does, it's only at face value. It's much "easier" (for the people in those institutions) to double down on the status quo than to reform; and it obviously becomes increasingly harder for the vast majority of people to voice their opinions or concerns, especially those not aligned with the status quo. (* A key difference here is between NATO, which is US-led, and the EU, which is still Western Europe-led; the US is still a functional democracy, unlike the EU institutions.)<p>Although it's not really a very complex topic and the causal factors are relatively simple (at least to identify; solutions are much harder to propose [mainly due to the constant doubling down mentioned above]), it would take a long time to explain why the existence of the "current" EU is detrimental to Europeans (at least to people not aligned with the status quo). The so-called benefits stopped at the common-market (EEC) iteration of the EU. Security is not and should not be in the EU's purview; we have NATO for that. And the most obvious issues the EU keeps worsening are socio-cultural positions that either (1) dilute the differences between nations or (2) [in case (1) was not a "problem" due to shared values] directly propose completely different values and/or positions. There's no "objective morality" debate to be had here; democracy does not inherently mean choosing the most "scientific"/"moral"/(any other metric) position in policy: it simply means choosing what the majority of people want (if you want to change policy, change people's minds). The doubling down of the "EU regime" is usually defended with the rhetoric that it is done in the name of "democracy", "morality", "tolerance", "the objective wellbeing of society", etc., but there's no mechanism for true democracy if the EU undermines/punishes the will of individual nations on the pretext that "it does not conform to EU-wide proposed policy". This leads to "multi-step" issues and other regional conflicts between the interests of nations (the winning bloc [usually the wealthier one] basically gets to impose policy on the smaller one).<p>I'm quite off-topic on the subject of privacy, but my key point is this: don't expect things to get better, or at least not without a huge cost. Recent pullbacks by the EU regarding the DSA/AI Act/GDPR are done out of necessity: the EU is losing ground massively (= money) in the tech space due to stupid policies made by dumb bureaucrats. Half-assed "reforms" like these will not bring huge improvements (mainly because the fiscal policies remain unchanged [and are going to get worse: see the upcoming euro stablecoin]) but will keep the EU afloat among those who can't see the sinking ship. Oh, you like privacy? Well, expect it to get worse, as eID is surely coming for all citizens in the name of safety (which has been eroded by "our" [i.e. the regime's] stupid policies).<p>Finally, as I foresee some will keep replying with "Russian <something>", let me just say this: pray that the downfall of the EU won't be an opportunity for Russia to do anything. At this rate in the regional conflict, Russia doesn't look good long-term, but if history has tried to teach us anything, it's that Russia is unpredictable.</p>
]]></description><pubDate>Sun, 30 Nov 2025 15:21:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=46097330</link><dc:creator>sebow</dc:creator><comments>https://news.ycombinator.com/item?id=46097330</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46097330</guid></item><item><title><![CDATA[New comment by sebow in "OpenAI says dead teen violated TOS when he used ChatGPT to plan suicide"]]></title><description><![CDATA[
<p>I'm not sure that focusing on this mudslinging towards OpenAI (or any other company, for that matter) will achieve anything. It didn't work in the past and it won't work in the future. The reality is that parents, guardians, teachers, and society as a whole need to be held responsible (at least morally, if not legally) in order to address the core issue of suicide and similar behaviors, such as murder.<p>Besides the obvious compromise in quality that companies would have to make to appease the 'karens' of society (not to mention the additional compliance and regulatory burden imposed on new companies), wouldn't it be simpler to just have users take a basic 'TOS test' when creating an account? Sure, it's inconvenient, but at least the company would be legally protected. The purpose is obviously not to protect companies, but to move the spotlight towards the real causal factors.<p>No matter how simple the TOS acceptance process becomes, people will still find a way to blame the product or company, ignoring the core issue of how someone got into a mental state where they use LLMs to cause self-harm. I don't see people suing rope manufacturers for facilitating suicide.</p>
]]></description><pubDate>Sat, 29 Nov 2025 11:57:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=46086878</link><dc:creator>sebow</dc:creator><comments>https://news.ycombinator.com/item?id=46086878</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46086878</guid></item><item><title><![CDATA[New comment by sebow in "OpenAI needs to raise at least $207B by 2030"]]></title><description><![CDATA[
<p>(Imo) They will turn to all of these (especially porn and gambling) when the core "enhance your life" model slowly fades away. The "academia space", the teen/boomer demographics: all of those will stop using OpenAI at scale if they're bombarded with vices (porn, gambling, etc.).<p>Ads and referrals are already in the works, and people are generally tolerant of those. But, as with any company, appearances matter. ChatGPT will definitely lose users at the slightest possibility of non-sanitized content being served to the more morally sensitive groups.</p>
]]></description><pubDate>Wed, 26 Nov 2025 15:35:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=46058406</link><dc:creator>sebow</dc:creator><comments>https://news.ycombinator.com/item?id=46058406</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46058406</guid></item><item><title><![CDATA[New comment by sebow in "Users Stuck in YubiKey Re-Enrollment Loop on X (Twitter)"]]></title><description><![CDATA[
<p>(replying instead of editing for timestamp purposes) I clicked "enroll" randomly again > an "error has occurred" message appeared > the page randomly refreshed and everything works now.</p>
]]></description><pubDate>Wed, 12 Nov 2025 23:08:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=45908195</link><dc:creator>sebow</dc:creator><comments>https://news.ycombinator.com/item?id=45908195</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45908195</guid></item><item><title><![CDATA[New comment by sebow in "Users Stuck in YubiKey Re-Enrollment Loop on X (Twitter)"]]></title><description><![CDATA[
<p>Same here on two accounts, both with 2FA through a hardware key (YubiKey; though passkeys show the same behavior). At some point today (a few hours ago), both my desktop and phone got redirected to x.com/account/access, where the loop started.<p>Frustratingly, I had already done the "re-enrollment" a long time ago (basically when they announced it was mandatory), but it seems that was pointless (hopefully not).<p>I saw some prompts about birdhouse, re-did the enrollment, and badly enough (I think I dug my own hole with this one) it asked to remove the other 2FA option (SMS), to which I clicked yes.<p>This might sound bad, but I sincerely hope X fixes it somehow, and that all the keys enrolled/re-(re-[etc.])-enrolled are not lost, especially those that were not added today. It might be a good idea (in practice; bad for security) to disable this new "<a href="https://x.com/account/access?flow=two-factor-security-key-policy-enforcement" rel="nofollow">https://x.com/account/access?flow=two-factor-security-key-po...</a>" garbage entirely, as I don't see myself contacting X support anytime soon (for obvious reasons).</p>
]]></description><pubDate>Wed, 12 Nov 2025 20:25:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=45906014</link><dc:creator>sebow</dc:creator><comments>https://news.ycombinator.com/item?id=45906014</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45906014</guid></item><item><title><![CDATA[New comment by sebow in "BBC director general and News CEO resign in bias controversy"]]></title><description><![CDATA[
<p>I've watched the doctored video myself. The CEO resigning is "nothing" compared to the accountability they should face for the propaganda they constantly push, and specifically for the extreme bias in this case.<p>If it were incompetence, one could argue that nothing should happen, perhaps an apology or some useless corporate article. But it was malice, and denying that is (imo) the real issue here. (I'm not saying you're doing it, just that some people do/did.)<p>I'm curious whether you think people outside the US/Western Europe (like me; greetings from Eastern Europe [we saw such edits in our communist period, fyi]) who disagree with the assessment that the BBC is "centrist with a slight leftward skew" are far-right with obvious biases? And if so, on what grounds? Most people who say the BBC is propaganda (like me) don't consume MSM at all (or, in the case of the US, stick to Fox or something). To say that all the alternative media I consume (which you'd be correct in assuming) "skews far-right" is, ironically, to behave like the ones you're pointing at. It's also incorrect: alternative media is infinitely more diverse today, even after all the reshuffling/restructuring of the past 10-12 months (which culled a decent amount of the left-leaning alternative media), than MSM is.</p>
]]></description><pubDate>Mon, 10 Nov 2025 10:39:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=45874585</link><dc:creator>sebow</dc:creator><comments>https://news.ycombinator.com/item?id=45874585</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45874585</guid></item><item><title><![CDATA[New comment by sebow in "Climate change deniers make up nearly a quarter of US Congress"]]></title><description><![CDATA[
<p>"Climate change deniers"<p>Like holocaust deniers, like nazis, etc.? More and more buzzwords and labels used broadly and freely, diluting their respective meanings.<p>Our parents, grandparents, etc. did not teach us good virtues "by shaming" (i.e. by telling us we're X/Y/Z buzzwords), but either through example (of what not to do) or through rationality (explanations until comprehension). It's no wonder we have rising "X/Y/Z" sentiments when, ironically, the people who supposedly advocate against "these bad boys" resort to merely labeling and categorizing individuals instead of putting in the effort to educate, explain, and reference. Low effort means low results, and virtuous traits are definitely not gained through complacency.<p>You solve things with dialogue (not monologue*). And if the people who use these buzzwords don't like dialogue "because some guy proved it" [it's actually irrelevant whether the referenced fact is true or not] and dismiss discourse shamelessly, then they're doing more damage to their own narrative. Nobody likes being told what to do (this includes what to know/believe/etc.), and facts matter only as much as they're understood.</p>
]]></description><pubDate>Mon, 05 Aug 2024 12:56:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=41160930</link><dc:creator>sebow</dc:creator><comments>https://news.ycombinator.com/item?id=41160930</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41160930</guid></item><item><title><![CDATA[New comment by sebow in "Eating Processed Red Meat Linked to Increased Dementia Risk"]]></title><description><![CDATA[
<p>I expect this "red meat inherently bad" paper to be rebutted faster than the previous one, which lasted quite a while. Perhaps even retracted, if there are still people with integrity in academia (who I'm sure exist, but as a tiny minority when we're talking about industries of mass consumption: tobacco, food, pharma, etc.).</p>
]]></description><pubDate>Fri, 02 Aug 2024 13:18:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=41138621</link><dc:creator>sebow</dc:creator><comments>https://news.ycombinator.com/item?id=41138621</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41138621</guid></item><item><title><![CDATA[New comment by sebow in "ChatGPT generates fake data set to support scientific hypothesis"]]></title><description><![CDATA[
<p>The framing makes it sound like it's a "bug" or something. From my understanding it's not, because it's hardly a reliable reasoning tool, whether working with statements or with data. Unless we come up with or advance a better architecture, similar "panic porn" is useless, not to mention this reeks of a hit piece. Just verify everything and stop with the blind trust.</p>
]]></description><pubDate>Thu, 23 Nov 2023 10:44:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=38391421</link><dc:creator>sebow</dc:creator><comments>https://news.ycombinator.com/item?id=38391421</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38391421</guid></item><item><title><![CDATA[New comment by sebow in "Musk says X subscribers will get early access to xAI's chatbot, Grok"]]></title><description><![CDATA[
<p>The most interesting snippet from the article:<p>> Early Friday, Musk said that xAI would release its first AI system — presumably Grok — to a “select group” on Saturday, November 4. But in a follow-up tweet tonight, Musk said all subscribers to X’s recently launched Premium Plus plan, which costs $16 per month for ad-free access to X, will get access to Grok “once it’s out of early beta.”<p>The lower tiers getting in on the deal as well sounded too good to be true. But still, at $16/month, if grok is anything like GPT-4 it's worth it, imo. What I'm going to be interested in is Musk's "promise" that it will not be censored/lobotomized like ChatGPT was (and is). I'm not sure what technology they're using such that grok draws on realtime X data (if anyone has ideas, feel free to share), though I'm assuming it's something like Bard (which, from my experience, does a similar thing) and the knowledge isn't "trained" per se into the model.</p>
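[Editor's note] The guess in the last paragraph, that the chatbot grounds itself in realtime posts rather than having them "trained" into its weights, describes retrieval-augmented generation. A minimal toy sketch of the idea, where the posts, function names, and the naive keyword scoring are all invented for illustration (real systems would use a proper search or embedding index):

```python
# Toy sketch of retrieval-augmented generation (RAG): instead of training
# recent posts into the model's weights, fetch matching posts at query
# time and prepend them to the prompt. All posts below are made up.

POSTS = [
    "DDR5 prices dropped again this week.",
    "New GPU drivers fix the stutter issue.",
    "The conference keynote starts at 9am.",
]

def retrieve(query, posts, k=2):
    """Rank posts by naive keyword overlap with the query (stand-in for real search)."""
    q_words = set(query.lower().split())
    scored = sorted(posts, key=lambda p: -len(q_words & set(p.lower().split())))
    return scored[:k]

def build_prompt(query):
    """Assemble a prompt that grounds the model in the retrieved context."""
    context = "\n".join(retrieve(query, POSTS))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("what happened to DDR5 prices"))
```

The model never "learns" the posts; they only appear in the prompt at inference time, which is why such knowledge can be fresh without retraining.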
]]></description><pubDate>Tue, 07 Nov 2023 15:59:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=38178365</link><dc:creator>sebow</dc:creator><comments>https://news.ycombinator.com/item?id=38178365</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38178365</guid></item><item><title><![CDATA[New comment by sebow in "Audioflare: An all-in-one AI audio playground using Cloudflare AI Workers"]]></title><description><![CDATA[
<p>I wasn't aware tooling for Whisper had come out. Whishper looks neat, definitely better than what I'm using now (Whisper locally/Colab, then editing with SubtitleEdit/gaupol). Thanks</p>
]]></description><pubDate>Sat, 28 Oct 2023 13:49:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=38049777</link><dc:creator>sebow</dc:creator><comments>https://news.ycombinator.com/item?id=38049777</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38049777</guid></item></channel></rss>