<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: atleta</title><link>https://news.ycombinator.com/user?id=atleta</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 30 Apr 2026 10:10:08 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=atleta" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by atleta in "Show HN: Simple script to cripple personalized targeting from Facebook"]]></title><description><![CDATA[
<p>I use a separate email address basically everywhere, so this can't be the reason. (I don't even use my main email address for Facebook.)</p>
]]></description><pubDate>Fri, 28 Jun 2024 02:11:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=40817126</link><dc:creator>atleta</dc:creator><comments>https://news.ycombinator.com/item?id=40817126</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40817126</guid></item><item><title><![CDATA[New comment by atleta in "Show HN: Simple script to cripple personalized targeting from Facebook"]]></title><description><![CDATA[
<p>For me the correct URL seems to be: <a href="https://www.facebook.com/adpreferences/ad_settings/?section=audience_based_advertising&entry_product=accounts_center" rel="nofollow">https://www.facebook.com/adpreferences/ad_settings/?section=...</a> instead of the one in the gist. (But the script errors out on the awaits.)</p>
]]></description><pubDate>Sat, 22 Jun 2024 23:13:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=40763109</link><dc:creator>atleta</dc:creator><comments>https://news.ycombinator.com/item?id=40763109</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40763109</guid></item><item><title><![CDATA[New comment by atleta in "Show HN: Simple script to cripple personalized targeting from Facebook"]]></title><description><![CDATA[
<p>I don't know how well it works in practice. The other day I bought something (bike parts) from a local webshop on my laptop. The next day I saw an ad from that same webshop in my Facebook feed on my mobile. Yes, it could be a coincidence, though I do see a lot of bike-related ads and practically never ones from this company. (Even though I am a returning, if not very frequent, customer of theirs.)</p>
]]></description><pubDate>Sat, 22 Jun 2024 23:06:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=40763059</link><dc:creator>atleta</dc:creator><comments>https://news.ycombinator.com/item?id=40763059</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40763059</guid></item><item><title><![CDATA[New comment by atleta in "Nokia made too many phones"]]></title><description><![CDATA[
<p>The problem was still Symbian underneath S60, if you like. Yes, it had code signing (which seemed like an unnecessary restriction when it was introduced), a decent browser, email that synced in the background (unlike on the first iPhone!), background tasks in general, etc.<p>But developing for Symbian was convoluted, painful, and slow. That slowed down Nokia itself, not to mention the third-party/external app developers. There was no reasonable way to fix Symbian, because these issues stemmed from its very foundations. One of them was memory management; the other was probably cooperative multitasking and callbacks. But the memory-management scheme was all over the code (think string handling, so everywhere), and it made reusing existing software hard too. Linux would have been the way to go, one way or another. Sure, they would have had to rebuild most things for that platform, but e.g. WebKit would have been a no-brainer, and they could have used a lot of existing open-source software.</p>
]]></description><pubDate>Wed, 14 Feb 2024 17:56:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=39372871</link><dc:creator>atleta</dc:creator><comments>https://news.ycombinator.com/item?id=39372871</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39372871</guid></item><item><title><![CDATA[New comment by atleta in "Nokia made too many phones"]]></title><description><![CDATA[
<p>I'm not sure that was a problem. Let's not forget that smartphones, as we know them, hadn't really been invented yet back then, so there wasn't really a common form factor and feature package that every customer was looking for.<p>Sure, there were Symbian phones that could <i>functionally</i> do almost anything smartphones can do now (and more than the first iPhone), but those weren't for everyone, and they didn't use touch screens, so there were multiple form factors: the full-keyboard communicators (9210, 9300/9500), the BlackBerry clone (E61, I think), the slide keyboard (7650?), and then all the non-Symbian phones (S40 OS, IIRC). And, of course, cameras were new and shitty, so not every phone had one.<p>Now, this could have caused a problem in itself, and what the article says about the organization could also have caused problems, but (I keep saying this when this topic comes up) the real problem was that Nokia's management was too comfortable/cowardly and didn't dare to switch away from Symbian. Especially since they had bought out Ericsson and Sony (again, IIRC), their former partners in the Symbian consortium, in ~2004/5.<p>There were experiments with a Linux-based phone OS around that time. They created the Nokia 770 "internet tablet" [1], a PDA-like touch-screen device with a landscape screen layout, a pen, and a removable front cover. Obviously it was an experiment (later followed by the 810 and then the 900, the latter being a phone). However, no one in management was brave enough to give a Linux phone a go. Especially not to commit to a strategy of switching over to Linux.
Symbian phones were selling great, Nokia was the market leader, and you can't really do better than that...<p>I remember, at one point, a team in the Helsinki office of NRC (Nokia Research Center) came up with the idea of creating a unified architecture (called the "Grand Unified Architecture") where they would build a uniform platform around the three operating systems: a Linux-based one, Symbian S60, and the (non-Symbian) S40. The genius idea was that they'd create a HAL (hardware abstraction layer), above that would sit one of the three OSs, and above those a uniform API that all app developers could use. This would have been a great strategy for side-stepping an actual decision, but other than that it didn't really make any sense. (Maybe you could argue back then that the S40 hardware was not capable of running Linux, but there was no excuse for trying to keep both Symbian and Linux while hiding them below a uniform API.) So the switch to Linux never happened, and Symbian remained a pain in the ass to develop for. Just concatenating two strings took several lines of code in their C++-based API, which didn't even look like actual C++. And this made developing in-house software slow and made third-party software pretty scarce.<p>Nokia also had an aversion to touch screens. One of the reasons must have been that back then only resistive touch screens were available (I think the original iPhone was the first phone with a capacitive one, i.e. one that was an actual <i>touch</i> screen and not a <i>press/push</i> screen). The other reason must have been Symbian (and the S60 skin), which was really not designed for touch screens and was hard to develop for.<p>So Nokia just continued to enjoy being the market leader, with management not taking the risk of trying to switch direction.
And then the iPhone came, and then Android came (whose team, after seeing an iPhone demo, very quickly changed direction, because at first they thought they were competing with BlackBerry, so their UI was similar to that and maybe to Symbian).<p>[1] <a href="https://en.wikipedia.org/wiki/Nokia_770_Internet_Tablet" rel="nofollow">https://en.wikipedia.org/wiki/Nokia_770_Internet_Tablet</a></p>
]]></description><pubDate>Wed, 14 Feb 2024 03:13:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=39365988</link><dc:creator>atleta</dc:creator><comments>https://news.ycombinator.com/item?id=39365988</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39365988</guid></item><item><title><![CDATA[New comment by atleta in "Nokia made too many phones"]]></title><description><![CDATA[
<p>The main issue they had was Symbian. Period. That, and not being willing to let it go. It was f*&^d up before Elop. Years before him.</p>
]]></description><pubDate>Wed, 14 Feb 2024 02:50:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=39365797</link><dc:creator>atleta</dc:creator><comments>https://news.ycombinator.com/item?id=39365797</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39365797</guid></item><item><title><![CDATA[New comment by atleta in "Scale of methane leaks from fossil fuel production and landfills exposed"]]></title><description><![CDATA[
<p>GP talked about who <i>causes</i> the pollution, not where to punch (who to blame). Everybody has a little say in policies, at least in democracies. When you say that most of America is not wealthy enough to play this game, you basically admit this. And this is what is happening: people are not keen on making policies happen if those policies mean lowering their standard of living. But the thing is that, unfortunately, it is that very standard of living (i.e. consumption) that causes the problem.<p>You can punch up as much as you want; things are not going to change without people lowering their standards. And once we accept that, we can easily force politicians to do the right thing. The tragedy of the situation is that everybody is complicit and most people will not accept that they themselves are. Sure, everybody <i>but them</i>.<p>And I'm not saying this to blame anyone. Blaming doesn't make sense. Identifying the causes and what needs to change does.</p>
]]></description><pubDate>Sat, 27 Jan 2024 21:02:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=39159803</link><dc:creator>atleta</dc:creator><comments>https://news.ycombinator.com/item?id=39159803</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39159803</guid></item><item><title><![CDATA[New comment by atleta in "My AI costs went from $100 to less than $1/day: Fine-tuning Mixtral with GPT4"]]></title><description><![CDATA[
<p>So, if I understand you correctly, the business strategy for an AI wrapper company would be to acquire customers quickly from a specific niche and build a name, while having very little custom technology, and then get acquired by one of the larger players who do have actual AI tech in-house. And, for them, it would be worth it for the brand/market/existing client base.<p>Assuming that the advances made in AI in the meantime don't eradicate the whole thing. I mean, say some company builds a personal assistant for managers to supplant secretaries, they become the go-to name, and then Google buys them in 2-3-5 years. Unless Google's AI becomes so good in the meantime that you can just instruct it in 1-2 sentences to do this for you.</p>
]]></description><pubDate>Fri, 19 Jan 2024 16:14:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=39057114</link><dc:creator>atleta</dc:creator><comments>https://news.ycombinator.com/item?id=39057114</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39057114</guid></item><item><title><![CDATA[New comment by atleta in "My AI costs went from $100 to less than $1/day: Fine-tuning Mixtral with GPT4"]]></title><description><![CDATA[
<p>The thing I don't understand about this strategy is that it itself shows that there really is no money to be made here. I mean, it's a pretty obvious giveaway that:<p>1. they don't have the resources to build their own technology and probably never will<p>2. even if they did, the best they could do is come up with something very similar to OpenAI's GPT, i.e. a (somewhat) generic AI model. This means that OpenAI can also easily compete with them.<p>All these companies are doing (if anything) is testing the market for OpenAI (or Google, or MS) for free.</p>
]]></description><pubDate>Fri, 19 Jan 2024 02:02:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=39050747</link><dc:creator>atleta</dc:creator><comments>https://news.ycombinator.com/item?id=39050747</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39050747</guid></item><item><title><![CDATA[New comment by atleta in "My AI costs went from $100 to less than $1/day: Fine-tuning Mixtral with GPT4"]]></title><description><![CDATA[
<p>Well, it seems he initially started with GPT-4, but his costs were getting high, so he had to do something, and do it quickly. Technically he could have written a few hundred responses himself, using the prompts from the users, while the site was still on GPT-4, but that could have been slow (expensive)/boring, etc.</p>
]]></description><pubDate>Fri, 19 Jan 2024 01:57:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=39050707</link><dc:creator>atleta</dc:creator><comments>https://news.ycombinator.com/item?id=39050707</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39050707</guid></item><item><title><![CDATA[New comment by atleta in "I made a website to share rejection letters"]]></title><description><![CDATA[
<p>I think the joke hints at the recent events when Sam Altman was fired (for a few days) and MS announced that they would take over the whole team, after the team said they would quit in response to Sam being fired.</p>
]]></description><pubDate>Fri, 19 Jan 2024 01:18:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=39050441</link><dc:creator>atleta</dc:creator><comments>https://news.ycombinator.com/item?id=39050441</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39050441</guid></item><item><title><![CDATA[New comment by atleta in "Hans Reiser on ReiserFS deprecation in the Linux kernel"]]></title><description><![CDATA[
<p>Yep. That's what I was saying too. The first line of my comment quotes the GP and I was correcting that.</p>
]]></description><pubDate>Thu, 18 Jan 2024 18:48:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=39045910</link><dc:creator>atleta</dc:creator><comments>https://news.ycombinator.com/item?id=39045910</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39045910</guid></item><item><title><![CDATA[New comment by atleta in "Hans Reiser on ReiserFS deprecation in the Linux kernel"]]></title><description><![CDATA[
<p>The word "more" is missing from the first half of the sentence: "more proportional".</p>
]]></description><pubDate>Thu, 18 Jan 2024 17:08:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=39044338</link><dc:creator>atleta</dc:creator><comments>https://news.ycombinator.com/item?id=39044338</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39044338</guid></item><item><title><![CDATA[New comment by atleta in "Hans Reiser on ReiserFS deprecation in the Linux kernel"]]></title><description><![CDATA[
<p>> Generally speaking, "the value of a graph is proportional to the square of the number of edges"<p>No, what Metcalfe's law assumes is that the value of the graph is proportional to the number of edges (not their square). And from that assumption and the fact that the graph is fully connected, it follows that the value is proportional to the square of the number of nodes. (Because you can have (n-1)*n/2 edges with n nodes in a fully connected graph.)<p>And hence the Reiser quote above is similar, but it emphasizes something else: it states what Metcalfe's law (I think) uses as a premise (or implicit claim), namely that the value is in the connections. Because a network is not necessarily a fully connected graph.<p>Edit: originally I'd given (n-1)*2/2 as the number of edges instead of (n-1)*n/2.</p>
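<p>(For illustration, the edge count above can be checked with a tiny snippet; the function name here is my own:)</p>

```javascript
// Maximum number of edges in a fully connected simple, undirected
// graph with n nodes: each of the n nodes connects to n-1 others,
// and that counts every edge twice, hence n*(n-1)/2.
function maxEdges(n) {
  return (n * (n - 1)) / 2;
}

console.log(maxEdges(4));  // 6
console.log(maxEdges(10)); // 45
```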
]]></description><pubDate>Thu, 18 Jan 2024 17:07:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=39044316</link><dc:creator>atleta</dc:creator><comments>https://news.ycombinator.com/item?id=39044316</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39044316</guid></item><item><title><![CDATA[New comment by atleta in "LLMs and Programming in the first days of 2024"]]></title><description><![CDATA[
<p>There are two problems with this argument. The first, and easier to accept, one is that while society might be better off <i>in the long run</i>, the affected individuals probably will not be. We tend to generalize from a single historical example, the industrial revolution and, more specifically, the automatic loom, and in that case the displaced workers ended up doing worse. Better jobs and opportunities only got created later.<p>The other problem, of course, is that the historical examples (the data) are too few to generalize from, while we do see how these examples differ from each other. As technological evolution progresses, automation gets more and more sophisticated; it can replace jobs that require more and more skill and talent. In other words, jobs that fewer and fewer people were able to do in the first place. This means that the bar for successfully competing in the labor market gets higher and higher, and it will get to a point where a substantial number of people will just be plain uncompetitive for any job.<p>Or at least that was one of the models until LLMs were invented. (Mostly everyone thought that automation would take over opportunities from the bottom up in general.) Now it seems that white-collar jobs are actually more in danger for now. But I digress.<p>The point here is that past examples are false analogies, because AI (and I mostly mean future AI) is fundamentally different from past inventions. Its capabilities seem to improve quickly, but we're mostly stuck with what evolution gave us. (We, as a species, are evolving, but it's very slow compared to the rate of technological evolution, and also we, as individuals, are stuck with whatever we were born with.)</p>
]]></description><pubDate>Tue, 02 Jan 2024 15:45:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=38842766</link><dc:creator>atleta</dc:creator><comments>https://news.ycombinator.com/item?id=38842766</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38842766</guid></item><item><title><![CDATA[New comment by atleta in "4B If Statements"]]></title><description><![CDATA[
<p>I hadn't heard about him, but checking the source trees of our two front-end apps, his 'is-number' package (which his is-odd depends on) seems to be imported by quite a few other packages.<p>Now, looking at the source, that package <i>may</i> make sense if figuring out whether something is a number type in JS is really that cumbersome. (Though I'd expect there to be a more generic package that covers the other built-in types as well.)<p>Also, since isNumber treats strings that can be converted to a number as numbers, it can yield weird results, since adding two such strings will naturally just concatenate them. So e.g.:<p><pre><code>    const isNumber = require('is-number');

    const a = '1';
    isNumber(a); // returns true
    const b = a + a; // Now you have a string in b: '11'

</code></pre>
Of course, it's standard JS stupidity (2*a would be 2, and 1+'1' and '1'+1 would both be '11'), but then maybe stating that '1' is a number is not the right response. However, the package was downloaded 46 million times last week, and that seems to be so low only because of Christmas; the previous weeks averaged around 70M. And most of these are dependencies, like in our projects, I'm sure.</p>
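<p>(If you only need to accept actual number values, and reject numeric strings like '1', a built-in check is enough; this is a minimal sketch, and the function name is my own:)</p>

```javascript
// Accept only real, finite number values. Number.isFinite does not
// coerce its argument, so it already rejects numeric strings like '1'
// as well as NaN and +/-Infinity; the typeof check just makes the
// intent explicit.
function isFiniteNumber(value) {
  return typeof value === 'number' && Number.isFinite(value);
}

console.log(isFiniteNumber(1));    // true
console.log(isFiniteNumber('1'));  // false
console.log(isFiniteNumber(NaN));  // false
```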
]]></description><pubDate>Thu, 28 Dec 2023 13:39:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=38793355</link><dc:creator>atleta</dc:creator><comments>https://news.ycombinator.com/item?id=38793355</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38793355</guid></item><item><title><![CDATA[New comment by atleta in "Apollo 11 vs. USB-C Chargers (2020)"]]></title><description><![CDATA[
<p>It seems it's going to be a new, RISC-V-based chip:<p>[1] <a href="https://www.zdnet.com/article/nasa-has-chosen-these-cpus-to-power-its-next-generation-of-spaceflight-computers/" rel="nofollow">https://www.zdnet.com/article/nasa-has-chosen-these-cpus-to-...</a>
[2] <a href="https://www.nasa.gov/news-release/nasa-awards-next-generation-spaceflight-computing-processor-contract/" rel="nofollow">https://www.nasa.gov/news-release/nasa-awards-next-generatio...</a></p>
]]></description><pubDate>Wed, 27 Dec 2023 03:54:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=38778954</link><dc:creator>atleta</dc:creator><comments>https://news.ycombinator.com/item?id=38778954</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38778954</guid></item><item><title><![CDATA[New comment by atleta in "AGI's impact on Tech, SaaS valuations"]]></title><description><![CDATA[
<p>No, it's not just you; a lot of people are downplaying the <i>dangers</i> of AI. The easiest one to accept is that it can cause mass unemployment and displacement of the workforce. No, it doesn't matter that we have better jobs than what the Luddite textile workers lost 200 years ago, because it's not guaranteed to be the same situation (indeed, I'd say it's guaranteed to be different) <i>and</i> those Luddites ended up in a way worse situation anyway.<p>So the thing is that nobody knows what the development curve of AI is going to be and what the exact economic and societal effects are going to be. Whether it's 5 years to AGI or 50. (Neither of these seems very likely, BTW.) Now, since we do expect that there can be problems, and since we at least can't rule out that they will manifest in the foreseeable (near) future, it's better to assume that we will have (at least economic) problems soon. It doesn't matter what LLMs can do <i>today</i>.<p>The development curve is what matters. And even if I said we don't know it, we have pretty good reasons to think (see above) that it's going to be powerful enough soonish. Just remember: about 1.5-2 years ago, basically nobody would have predicted that LLMs would be able to do what they can do today. And I mean most experts would probably have said that it's not possible for LLMs to do this <i>at all</i>. Definitely not that they would be doing it by mid-2023. Or even just that they would be so powerful that a lot of non-technical people would use them. (Though, sure, there is still very little practical use as of today, but the capabilities did make a huge and unexpected jump. It even surprised researchers like Geoffrey Hinton.)</p>
]]></description><pubDate>Sat, 21 Oct 2023 00:41:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=37963065</link><dc:creator>atleta</dc:creator><comments>https://news.ycombinator.com/item?id=37963065</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37963065</guid></item><item><title><![CDATA[New comment by atleta in "Show HN: Firefox add-on to open YouTube videos in alternative front ends"]]></title><description><![CDATA[
<p>There are a lot of reasons why Firefox or other browsers can't do that, but my claim was that FF (or any browser) can't do it <i>without</i> writing code specifically to get around YT. And this was a response to the parent, who said that FF should (and could) simply ignore the CSS.<p>> Sure I can, uBlock Origin provides exactly that.<p>Obviously, I meant that it doesn't work financially, so there is no point being upset about it. If enough people block the ads, then they'll do something about it. Actually, it's not hypothetical anymore; I just started seeing these warnings a few days ago. (I wasn't deliberately blocking the ads; I've just been using Ghostery, which, it seems, started blocking YT ads.) So yeah, in the end, as you also say, people in general can't consume ad-supported services without paying with their attention. It just doesn't work business-wise.</p>
]]></description><pubDate>Mon, 16 Oct 2023 09:46:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=37897574</link><dc:creator>atleta</dc:creator><comments>https://news.ycombinator.com/item?id=37897574</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37897574</guid></item><item><title><![CDATA[New comment by atleta in "Show HN: Firefox add-on to open YouTube videos in alternative front ends"]]></title><description><![CDATA[
<p>Looking at its source (the page downloaded when you open a YT link pointing to a video), it's almost certain that YT doesn't load without JS. It's not an HTML page with some extra functionality implemented in JS; it's a web app that builds the page you see from JS.<p>So Firefox can't do much about it without actively trying to circumvent YT specifically.<p>I don't think browsers made the turn you mention. It's more that browsers became more and more capable and web developers made use of it. Sometimes it's annoying, because most websites are not websites anymore but apps (GUIs) that run in the browser, and some of the sites/apps people use could never work without that. Sure, we could all deploy those apps onto our machines (or have them deploy automatically in a sandbox), and there were actually technologies that did just that (think Java Web Start, or whatever the name ended up being), but they lost to what we have now: running these apps in the browser.<p>Also, you can't have an ad-free experience if the price of using a service is that the ad is delivered to you. On YT you can buy a subscription and you'll see no ads. But sure, most sites don't offer this.</p>
]]></description><pubDate>Sun, 15 Oct 2023 03:45:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=37886689</link><dc:creator>atleta</dc:creator><comments>https://news.ycombinator.com/item?id=37886689</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37886689</guid></item></channel></rss>