<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: someone_eu</title><link>https://news.ycombinator.com/user?id=someone_eu</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 29 Apr 2026 18:44:29 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=someone_eu" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by someone_eu in "An update on GitHub availability"]]></title><description><![CDATA[
<p>GitHub's scaling issues are caused by their own vendor lock-in approach and monopoly. Yes, of course _their_ goal is to be even bigger and even more all-consuming, so _they_ have to deal with the scale. Why would a user be sympathetic to that?<p>The user's answer to scaling issues (as opposed to a big tech monopoly's) is almost always to stop scaling and start federating and interoperating.</p>
]]></description><pubDate>Tue, 28 Apr 2026 11:11:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47932853</link><dc:creator>someone_eu</dc:creator><comments>https://news.ycombinator.com/item?id=47932853</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47932853</guid></item><item><title><![CDATA[New comment by someone_eu in "Our newsroom AI policy"]]></title><description><![CDATA[
<p>> I was imagining if LLMs could finally solve the micropayments solution people have always proposed for the internet. Part of my monthly payment gets split between all of the sites the LLM scraped knowledge from. Paid out like Spotify pays out artists.<p>This system is usually called taxes.<p>Which then pay for universal healthcare, free education, affordable housing, libraries, parks, and so on.<p>LLMs don't need to invent it; we should stop allowing them (the people and companies behind LLMs) to avoid it.</p>
]]></description><pubDate>Thu, 23 Apr 2026 10:13:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47873963</link><dc:creator>someone_eu</dc:creator><comments>https://news.ycombinator.com/item?id=47873963</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47873963</guid></item><item><title><![CDATA[New comment by someone_eu in "Dr Matthew Garrett v Dr Roy Schestowitz and Anor"]]></title><description><![CDATA[
<p>It's the QAnon of FOSS.</p>
]]></description><pubDate>Thu, 20 Nov 2025 17:59:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=45995534</link><dc:creator>someone_eu</dc:creator><comments>https://news.ycombinator.com/item?id=45995534</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45995534</guid></item><item><title><![CDATA[New comment by someone_eu in "AI haters build tarpits to trap and trick AI scrapers that ignore robots.txt"]]></title><description><![CDATA[
<p>It is not even just the copyright issue.<p>The article completely misses the point that AI scrapers are not a "future threat of AI domination". They already do damage by DDoSing sites' networking infrastructure and inflicting very real costs on site hosts.<p>Even when the data is completely free, as in the case of Wikipedia or OpenStreetMap, scraping it is unethical and should be illegal. Most open data resources have procedures that allow downloading the data in archived form, without any need for scraping. They are built with sharing in mind.<p>So the argument the article tries to make (what if it is for the public good?) makes no sense: 1) it is not, and 2) there are many ways to fetch open data properly and respectfully.</p>
]]></description><pubDate>Tue, 28 Jan 2025 22:56:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=42859238</link><dc:creator>someone_eu</dc:creator><comments>https://news.ycombinator.com/item?id=42859238</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42859238</guid></item><item><title><![CDATA[New comment by someone_eu in "I prefer semi-automation"]]></title><description><![CDATA[
<p>> "Automating things makes you forget how to do them." That was an... interesting argument.<p>Let me put it differently: automating things significantly increases the cost of changing them.<p>While you are doing something manually, you have the knowledge, the flexibility and the power to change and adjust it if needed. Once it is automated, you lose knowledge, flexibility and control over the task.<p>That can be good or bad depending on the situation. When we talk about strictly defined actions with a high risk of human error, automation is indeed necessary. Its power to resist change helps us.<p>When you automate fuzzy, volatile, complex decision-making logic, it becomes a curse rather than a blessing. It is almost impossible to do it correctly, with test coverage, verification, debugging and coverage of all the corner cases. (How often do you see proper QA for infrastructure glue?) And even if you do invest an enormous amount of resources, you then put all that knowledge of the process in a black box and lose the key. This leads to situations where, rather than look inside the box to change the process hidden in it, people create new layers of automation around it.<p>Refactoring a mess of a process is hard. But refactoring a mess that has been automated is simply impossible.<p>(Not saying this justifies the argument against Let's Encrypt, though.)</p>
]]></description><pubDate>Sat, 07 Jan 2023 14:23:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=34288190</link><dc:creator>someone_eu</dc:creator><comments>https://news.ycombinator.com/item?id=34288190</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=34288190</guid></item><item><title><![CDATA[New comment by someone_eu in "EU Voice"]]></title><description><![CDATA[
<p>It is worth mentioning that the EU also funds the open source development required to enable the translation engine in Mastodon:<p><a href="https://github.com/mastodon/mastodon/pull/19218" rel="nofollow">https://github.com/mastodon/mastodon/pull/19218</a><p>"This project was funded through the NGI0 Discovery Fund, a fund established by NLnet with financial support from the European Commission's Next Generation Internet programme, under the aegis of DG Communications Networks, Content and Technology under grant agreement No 825322."<p>I think it is a much better investment in the future of federated social networking than trying to gain control of it by setting up a centralized instance for EU citizens, as someone else suggests in the comments.</p>
]]></description><pubDate>Sun, 06 Nov 2022 14:09:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=33492780</link><dc:creator>someone_eu</dc:creator><comments>https://news.ycombinator.com/item?id=33492780</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=33492780</guid></item></channel></rss>