<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: vlthr</title><link>https://news.ycombinator.com/user?id=vlthr</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 17 Apr 2026 00:28:19 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=vlthr" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by vlthr in "The Registers of Rust"]]></title><description><![CDATA[
<p>This is exceptionally well communicated. Language design is really hard, and it's so easy to lose sight of the bigger picture when trying to find the next incremental step. This post helped me understand a lot of vague frustrations I've experienced in the past, and the bigger picture it outlines really resonates with me.</p>
]]></description><pubDate>Thu, 09 Mar 2023 12:34:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=35080571</link><dc:creator>vlthr</dc:creator><comments>https://news.ycombinator.com/item?id=35080571</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35080571</guid></item><item><title><![CDATA[New comment by vlthr in "The Registers of Rust"]]></title><description><![CDATA[
<p>That example actually highlights why I think <i>register</i> is a helpful name. Dialects vary across people, but registers vary across situations for each person.<p>In the programming language context, <i>dialect</i> can be applied on varying levels but usually signifies the former, where each individual or group has a persistent preference for some language or style. Within a single programming language, dialects are usually a bad thing because they risk splintering the community into mutually incompatible subgroups (e.g. scala fp styles, c++ boost).<p>Part of the reason why evolving a language is hard is because every time you introduce a new way to do something which could be done before, users have to choose. If that choice divides users into groups that persistently pick one over the other based on style or community affiliation, you’ve introduced a new dialect. If the choice flows more naturally from the situation the user finds themselves in, you’ve introduced a new register.</p>
]]></description><pubDate>Wed, 08 Mar 2023 22:04:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=35075370</link><dc:creator>vlthr</dc:creator><comments>https://news.ycombinator.com/item?id=35075370</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35075370</guid></item><item><title><![CDATA[New comment by vlthr in "What Copilot means for open source"]]></title><description><![CDATA[
<p>I’m definitely not worried for Microsoft or the other big tech companies developing copilot-like products. To the extent that legal blowback focuses on issues that are both impactful and solvable (e.g. plagiarizing non-trivial snippets), they should be held to a high standard. Your point about the risk of poisoning the public’s acceptance of these technologies also resonates with me.<p>What worries me the most is the effect the public backlash towards these big companies can have on smaller actors that could enter this space in the near future. In the past we’ve seen open source projects like GPT-J come together to fund and reproduce closed models, and if we’re not careful to be nuanced in our criticism of big-tech frontrunners we might end up poisoning the waters enough to deter small actors without a dedicated legal team.<p>Copyright law is ultimately designed around humans as the only kind of actor. In an ideal world we would sit down and think about the way non-human learners should fit into this system and the balance of tradeoffs we want those laws to aim for. I hope that happens someday, but until then I hope we can cultivate a world where small actors are able to experiment with these technologies without fear of legal action.<p>That’s why it bothers me to see people arguing that language models should be thought of like human programmers making derivative works, even suggesting that we should require attribution for all generated outputs (i.e. the entire training set, <i>always</i>). That helps nobody, except of course big companies with infinite manpower.</p>
]]></description><pubDate>Mon, 27 Jun 2022 07:17:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=31891525</link><dc:creator>vlthr</dc:creator><comments>https://news.ycombinator.com/item?id=31891525</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=31891525</guid></item><item><title><![CDATA[New comment by vlthr in "What Copilot means for open source"]]></title><description><![CDATA[
<p>I have a really hard time understanding what future world this article (and other detractors) is arguing for and why that world is made better by taking their arguments seriously.<p>On the practical level I agree with the part advising caution to those that might end up embedding an identifiably licensed snippet in their codebase via copilot. I also agree that copilot users plagiarizing significant chunks of GPL code for profit is immoral. This needs to be prevented.<p>I also share the frustration stemming from big companies leveraging their disproportionate access to data and resources for profit, given that the greatest value of these models lies precisely in the open source code they are trained on.<p>Ultimately though, what I care about is the potential for building better tools. LLMs potentially offer paths towards genuinely new forms of human-machine interaction, and I don’t want that exploration to be suffocated by legalism.</p>
]]></description><pubDate>Sun, 26 Jun 2022 15:33:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=31884901</link><dc:creator>vlthr</dc:creator><comments>https://news.ycombinator.com/item?id=31884901</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=31884901</guid></item><item><title><![CDATA[New comment by vlthr in "Why Computers Won't Make Themselves Smarter"]]></title><description><![CDATA[
<p>I think the author is right to argue against the claim that “the singularity <i>must</i> happen eventually”, but I’m not sure how applicable your observations are to the question of whether an AI singularity <i>could</i> happen, or whether we should be planning for that contingency.<p>Today, progress in machine learning is almost entirely empirical, with limited theoretical advances following later to make sense of the empirical findings. Our theoretical underpinnings are so weak we don’t even have a rough estimate of how much harder it is to make a self-improving AGI than e.g. GPT-3. Maybe it’s many orders of magnitude harder, and not even remotely solvable by taking iterative steps from where we are now. Maybe it’s just one unifying theoretical advance away.<p>As for waiting and hoping it happens on its own, that’s not a great description of what’s happening. A huge number of people (possibly too many, leading to short-term incentives) are trying to make improvements in any way they can think of. Progress is happening at a staggering pace -- we just don’t have any idea how much progress is needed to reach the goal of AGI.</p>
]]></description><pubDate>Wed, 31 Mar 2021 08:19:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=26644730</link><dc:creator>vlthr</dc:creator><comments>https://news.ycombinator.com/item?id=26644730</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=26644730</guid></item><item><title><![CDATA[New comment by vlthr in "I Am Deleting the Blog"]]></title><description><![CDATA[
<p>I agree with all of your points about the diffusion of responsibility that is common in ML, though I think you may not be sensitive enough to the harmful framing being created by the "anti-bias" side.<p>The original locus of the debate was how the recent face-depixelation paper turned out to depixelate pictures of black faces into ones with white features. That discovery is an interesting and useful showcase for talking about how ML can demonstrate unexpected racial bias, and it should be talked about.<p>As often happens, the nuances of what exactly this discovery means and what we can learn from it quickly got simplified away. Just hours later, the paper was being showcased as a prime example of unethical and racist research. When LeCun originally commented on this, I took his point to be pretty simple: that for an algorithm trained to depixelate faces, it's no surprise that it fills in the blank with white features, because that's just what the Flickr-Faces-HQ (FFHQ) dataset looks like. If it had been trained on a majority-black dataset, we would expect the inverse.<p>That in no way dismisses all of the real concerns people have (and should have!) about bias in ML. But many critics of this paper seem far too willing to catastrophize about how irresponsible and unethical it is. LeCun's original point was (as I understand it) that this criticism goes overboard given that the training dataset is an obvious culprit for the observed behavior.<p>Following his original comment, he has been met with some extremely uncharitable responses. The most circulated example is this tweet (<a href="https://twitter.com/timnitGebru/status/1274809417653866496?s=20" rel="nofollow">https://twitter.com/timnitGebru/status/1274809417653866496?s...</a>) where a bias-in-ML researcher calls him out without so much as a mention of <i>why</i> he is wrong, or even <i>what</i> he is wrong about.
LeCun responds with a 17-tweet thread clarifying his stance, and her response is to claim that educating him is not worth her time (<a href="https://twitter.com/timnitGebru/status/1275191341455048704?s=20" rel="nofollow">https://twitter.com/timnitGebru/status/1275191341455048704?s...</a>).<p>The overwhelming attitude there and elsewhere is in support of the attacker. Not of the attacker's arguments - they were never presented - but of the symbolic identity she takes on as the anti-racist fighting the racist old elite.<p>I apologize if my frustration with their behavior shines through, but it really pains me to see this identity-driven mob mentality take hold in our community. Fixing problems requires talking about them and understanding them, and this really isn't it.</p>
]]></description><pubDate>Tue, 23 Jun 2020 16:23:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=23616101</link><dc:creator>vlthr</dc:creator><comments>https://news.ycombinator.com/item?id=23616101</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=23616101</guid></item><item><title><![CDATA[New comment by vlthr in "I Am Deleting the Blog"]]></title><description><![CDATA[
<p>I take the point about subscribers being hard to count to mean that even though most of the money comes from subscribers, each individual subscriber doesn't have much leverage or bandwidth to communicate their desires to NYT. On the flip side, each individual advertiser commands some sizeable chunk of NYT's revenue as leverage.</p>
]]></description><pubDate>Tue, 23 Jun 2020 15:17:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=23615021</link><dc:creator>vlthr</dc:creator><comments>https://news.ycombinator.com/item?id=23615021</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=23615021</guid></item><item><title><![CDATA[New comment by vlthr in "Is There a Case for Skepticism of Psychedelic Therapy?"]]></title><description><![CDATA[
<p>I don't know if psychedelics will turn out to have any medical value in the end, but I wouldn't be surprised if much of their failure stems from the way we try to fit them into the existing structures of medicine.<p>Psychedelics amplify the perceived significance of many experiences, but what experience are you likely to be getting at a hospital or psychiatric facility? Every encounter I have had with modern medicine has been uncomfortable or even slightly demeaning.<p>The way we practice medicine today has bought us repeatability and accountability of treatment, but with it comes an overwhelming air of impersonality. That's not an issue if you need surgery, but if you're prescribed a transformative experience I'd say that might be a showstopper.</p>
]]></description><pubDate>Sat, 07 Sep 2019 20:53:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=20906447</link><dc:creator>vlthr</dc:creator><comments>https://news.ycombinator.com/item?id=20906447</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=20906447</guid></item><item><title><![CDATA[New comment by vlthr in "Ask HN: Do you think technology progress will flatten out 100 years from now"]]></title><description><![CDATA[
<p>I don't think it's clear that the only options are continued exponential progress or stagnation. Both of those seem like possibilities in the medium term, but I wouldn't rule out the possibilities of decline (in the worst case) or cyclical re-development of the same ideas.<p>Technological discovery doesn't just happen in a vacuum. There needs to be a societal production of potential new inventors and scientists. Where their interests lie is to some extent influenced by the culture they grow up in, and which research interests will receive funding (or societal status) is also dependent on culture, etc. Worse, if a problem has been widely solved using a <i>bad</i> solution, there is no real desire for a good one.<p>Take electrical engineering and circuit design as an example. I would make the case that despite the immense and obvious success of computers, our society is in some significant way worse off than it was 50 years ago if you were to measure our collective ability to solve <i>new</i> problems using electrical engineering. Fewer and fewer electrical engineers are being educated, while the competency threshold of the field is rising. As that threshold rises, our education system shifts from teaching people to invent things to teaching them practical skills like how to work all the menus in [some modeling tool], and how to generate report templates for MS Word.<p>Software is so new that we've barely exceeded a single human lifespan (hardly enough time for information to get lost, is it?), but how many times have you seen companies with a code base that was written by some greybeard in the 80's, which is critical to the success of the company but nobody understands? When the company realised that, did they try to address the root of the problem, or did they decide to keep piling shit on top of what they already had?<p>It may be that software itself is the problem. Software allows you to snapshot your current problem solving capacity and continue delivering it long after the problem solvers themselves are gone. Maybe if we're lucky, the current generation of AI researchers will get us to something that approaches human level intelligence before these problems become intractable.</p>
]]></description><pubDate>Wed, 29 May 2019 11:53:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=20038987</link><dc:creator>vlthr</dc:creator><comments>https://news.ycombinator.com/item?id=20038987</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=20038987</guid></item><item><title><![CDATA[New comment by vlthr in "Go makes it easier to write correct, clear and efficient code"]]></title><description><![CDATA[
<p>Aren’t you being a bit uncharitable?<p>- The article focuses on the properties of codebases, not individual code snippets. While it would be very valuable to compare the properties of entire codebases, that is demanding on both the reader and the author.<p>- Opening the article with a statement like “Choosing a programming language is never easy...” is an acknowledgement that he is not claiming Go is unequivocally the best in all scenarios. The author is signaling that he is a reasonable person, and while that may seem like fluff it is a necessary component of communicating with a wide audience.</p>
]]></description><pubDate>Fri, 10 May 2019 10:57:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=19876654</link><dc:creator>vlthr</dc:creator><comments>https://news.ycombinator.com/item?id=19876654</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=19876654</guid></item><item><title><![CDATA[New comment by vlthr in "Sundar Pichai: Privacy Should Not Be a Luxury Good"]]></title><description><![CDATA[
<p>There is definitely some distinction to be made between Google's and Facebook's approaches to privacy, but data anonymization is more of a PR technique than a privacy one. It can be done if you accept that you may lose nearly all of the valuable structure in the data, but that is always going to be a hard sell.<p>Recently there has been a lot of discussion in Sweden (maybe elsewhere too) about anonymized mobile phone location data that is sold online. In that case "data anonymization" usually meant swapping out personal identifiers for some token. If that were the only information available you'd be more or less fine, but what if you have access to some correlated side-channel information that IS personally linked? In the location data example, just combining it with publicly available home address data is enough to de-anonymize nearly every person in the dataset (i.e. where does anonymous token X go every night and leave every morning?).<p>This problem emerges very quickly as soon as you start linking together multiple pieces of anonymized data (or just sampling the data at high enough resolution). The only real virtue of data anonymization is that it prevents casual snooping by the people who work with the data.</p>
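The re-linking attack described above can be sketched in a few lines. This is a toy illustration only: every token, coordinate, and name below is invented, and a real attack would have to cope with GPS noise and shared addresses.

```python
# Toy sketch of de-anonymization by joining datasets.
# All tokens, coordinates, and names here are invented.

# "Anonymized" location feed: a stable token per person, plus the
# coordinates where that device rests every night.
nightly_pings = {
    "tok_7f3a": (59.334, 18.063),
    "tok_91bc": (57.708, 11.974),
}

# Publicly available home-address register: coordinates with names attached.
address_register = {
    (59.334, 18.063): "Alice Andersson",
    (57.708, 11.974): "Bo Bergstrom",
}

# The "anonymization" collapses as soon as the two datasets are joined:
# a stable token that sleeps at a known address is no longer anonymous.
re_identified = {
    token: address_register[coords]
    for token, coords in nightly_pings.items()
    if coords in address_register
}
```

With high-resolution sampling, the nightly resting point is a near-unique fingerprint, which is why swapping identifiers for tokens alone buys so little.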
]]></description><pubDate>Wed, 08 May 2019 09:12:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=19857133</link><dc:creator>vlthr</dc:creator><comments>https://news.ycombinator.com/item?id=19857133</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=19857133</guid></item><item><title><![CDATA[New comment by vlthr in "Period-tracking apps are not for women"]]></title><description><![CDATA[
<p>The author makes a lot of good points about how these apps don't fulfill the needs of their customers in various ways, but to attribute this to sexism seems like a stretch. Software being badly designed, inflexible, and out of touch with the needs of users is the norm rather than the exception.</p>
]]></description><pubDate>Wed, 14 Nov 2018 21:43:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=18454295</link><dc:creator>vlthr</dc:creator><comments>https://news.ycombinator.com/item?id=18454295</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=18454295</guid></item><item><title><![CDATA[New comment by vlthr in "Don't learn Dvorak"]]></title><description><![CDATA[
<p>I switched to dvorak for RSI reasons about two years ago and have now switched back to qwerty. If dvorak (plus split programmable keyboards, vim, etc.) made any difference to my symptoms, it was at least not enough to be noticeable (though it is hard to say, since the symptoms vary in intensity).<p>One of the things I'm considering as a possible explanation for my issues is that all of the changes I've made have left my usage patterns smaller and more repetitive, and that going back to less efficient methods that slow me down and force bigger arm movements might actually be less painful.</p>
]]></description><pubDate>Fri, 21 Sep 2018 05:49:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=18037421</link><dc:creator>vlthr</dc:creator><comments>https://news.ycombinator.com/item?id=18037421</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=18037421</guid></item><item><title><![CDATA[New comment by vlthr in "Don't learn Dvorak"]]></title><description><![CDATA[
<p>If you're mostly on Linux/Mac I can strongly recommend using a dotfiles repository containing all of your config files, plus a script that you can run to create symlinks to all of these files in the correct places.<p>With small tweaks like a non-symlinked file that contains only machine-local variables (I call mine .bashrc.local and source it from the main bashrc), you can even make the configs vary across machines for the subtler things.<p>This works wonders for getting my custom configs everywhere, with the exception of Windows machines, which are always a huge headache to work with.</p>
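A minimal sketch of what such a linking script can look like. The repo path, tracked file names, and function name here are all illustrative, and real dotfiles scripts usually handle many more cases (directories, backups, per-OS variants):

```python
#!/usr/bin/env python3
"""Minimal dotfiles-linking sketch: symlink each tracked config from a
repo checkout into the home directory under a dot-prefixed name."""
from pathlib import Path

# Illustrative list of configs tracked in the repo (stored without the dot).
TRACKED = ("bashrc", "vimrc", "gitconfig")

def link_dotfiles(repo: Path, home: Path) -> list[Path]:
    """Create or refresh a symlink for every tracked config present in repo."""
    linked = []
    for name in TRACKED:
        src = repo / name
        if not src.exists():          # skip configs this machine doesn't use
            continue
        dest = home / f".{name}"
        if dest.is_symlink() or dest.exists():
            dest.unlink()             # replace stale links so re-runs are safe
        dest.symlink_to(src)
        linked.append(dest)
    return linked
```

The machine-local split then lives in the shared bashrc itself, e.g. a final line like `[ -f ~/.bashrc.local ] && . ~/.bashrc.local`, so that the non-symlinked per-machine overrides get sourced without being tracked in the repo.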
]]></description><pubDate>Fri, 21 Sep 2018 05:26:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=18037344</link><dc:creator>vlthr</dc:creator><comments>https://news.ycombinator.com/item?id=18037344</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=18037344</guid></item><item><title><![CDATA[New comment by vlthr in "Don't learn Dvorak"]]></title><description><![CDATA[
<p>Is there a video/article you could share describing how to do these with good form?<p>I am now about 7 years into an RSI issue that nothing seems to work on, but I haven't committed to any training routine for more than a couple of weeks (with no effect).</p>
]]></description><pubDate>Fri, 21 Sep 2018 05:15:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=18037304</link><dc:creator>vlthr</dc:creator><comments>https://news.ycombinator.com/item?id=18037304</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=18037304</guid></item><item><title><![CDATA[New comment by vlthr in "Alan Kay's: Why is FP seen as the opposite of OOP rather than an addition?"]]></title><description><![CDATA[
<p>Thank you! Fascinating reading.</p>
]]></description><pubDate>Sun, 25 Mar 2018 10:19:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=16671557</link><dc:creator>vlthr</dc:creator><comments>https://news.ycombinator.com/item?id=16671557</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=16671557</guid></item></channel></rss>