<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: SkyBelow</title><link>https://news.ycombinator.com/user?id=SkyBelow</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 16 May 2026 12:26:49 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=SkyBelow" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by SkyBelow in "A message from President Kornbluth about funding and the talent pipeline"]]></title><description><![CDATA[
<p>>Are you also disillusioned with professional sports, music, acting, and art?<p>Not the person you were asking, but I think we need to double down on disillusionment in these.  I've spoken to too many kids who dreamed of careers in these fields well into high school, often at the cost of other academic paths, when their performance already clearly showed they weren't going this route.  Sadly, it is hard to be firm about correcting kids, because it is seen as not believing in them and not encouraging them.<p>As disillusioned as one might become with academia, the path one takes to get there tends to do a better job of setting students up for a successful career outside of it than the paths you listed.</p>
]]></description><pubDate>Thu, 14 May 2026 18:16:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=48139090</link><dc:creator>SkyBelow</dc:creator><comments>https://news.ycombinator.com/item?id=48139090</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48139090</guid></item><item><title><![CDATA[New comment by SkyBelow in "50K Tahoe residents need power as utility eyes redirecting lines to data centers"]]></title><description><![CDATA[
<p>Isn't this just a false dichotomy in web comic form?  It implies that there are only two options, full participation or complete withdrawal from society, as a means of suggesting that one cannot apply morals to how one chooses to engage with society.<p>But in reality, there are many, many ways to engage with society, some more or less ethical/moral than others, and one is free to criticize individual choices.<p>Even if we consider something like social media, there is still a range of choices between fully engaging with social media and rejecting it entirely.  There are attempts to use it responsibly, limiting and curating use to less harmful versions while attempting to get most of the benefit, while still holding that the overall effect of the average use of social media harms society.<p>It feels a lot like saying that, since it is impossible to live a perfectly ethical/moral life, ethics and morals can be completely ignored without regard for what options one does have available.</p>
]]></description><pubDate>Wed, 13 May 2026 17:16:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=48124728</link><dc:creator>SkyBelow</dc:creator><comments>https://news.ycombinator.com/item?id=48124728</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48124728</guid></item><item><title><![CDATA[New comment by SkyBelow in "The bottleneck was never the code"]]></title><description><![CDATA[
<p>>I believe in A, I don't take a strong position on B<p>But if A and B are opposed, then there is a question of why a strong position on A is allowed alongside a weak position on B, if the reason for the strong position on A would also indicate a strong position against B.<p>The underlying argument being implied (but rarely stated directly) is to question whether your stated reason for the strong position on A is the real one, or just the reason that sounds good while the actual motive for your belief lies elsewhere.<p>In effect, your failure to apply the stated reason to B, despite it fitting, is the counterargument: it suggests the reason doesn't actually support A either.<p>If arguments are applied inconsistently, any formal discussion falls apart and people effectively take up positions simply because they like them, contradictions irrelevant.  This generally isn't a good outcome for public discourse.</p>
]]></description><pubDate>Wed, 06 May 2026 18:17:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=48039631</link><dc:creator>SkyBelow</dc:creator><comments>https://news.ycombinator.com/item?id=48039631</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48039631</guid></item><item><title><![CDATA[New comment by SkyBelow in "The bottleneck was never the code"]]></title><description><![CDATA[
<p>Sometimes there are two groups of people with different opinions who don't interact, but given the extent to which they share the same platform and don't seem to see each other, I'm not sure it is really a fallacy even then.<p>First, it becomes possible for people who do hold a double standard to hide behind this.  One can try to track an individual's stance, but a lot of internet etiquette seems to be based on the idea of not looking up a person's history to see if they are being contradictory.  (And while being hypocritical doesn't necessarily invalidate an argument, it can help indicate when someone is arguing in bad faith and it is a waste of time, since they will simply reach for whichever axioms support the conclusion they favor at the moment.)<p>Second, I think there is the ability to call out a group as being hypocritical, even when there are two subgroups.  If one group generally supports A and another generally supports B (and A + B together is hypocritical), but each stops voicing support when it would bring them into conflict with the other, that change in behavior indicates a level of acceptance.  This is too hard to measure for any individual (maybe they are tired today, or distracted, or didn't even see it), but as a group, we can still measure the overall direction.<p>So if a website ends up being very vocally in support of two contradictory positions, I think there is still a valid argument to be made about contradictory opinions, and the goomba fallacy is itself a fallacy.<p>Edit: Removed example, might be too distracting to bring up an otherwise off topic issue as an example.</p>
]]></description><pubDate>Wed, 06 May 2026 14:08:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=48036461</link><dc:creator>SkyBelow</dc:creator><comments>https://news.ycombinator.com/item?id=48036461</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48036461</guid></item><item><title><![CDATA[New comment by SkyBelow in "Claude Code refuses requests or charges extra if your commits mention "OpenClaw""]]></title><description><![CDATA[
<p>This is the same logic as 'not a booby trap' booby traps, which sometimes do work out in favor of the one setting them, if they weren't too open about it.  If your commit message shows you mentioned OpenClaw just to booby trap your repo, then I suspect it wouldn't fly, whereas if you gave it some plausible deniability, a lawyer would be able to get any suit or charges dismissed.<p>This is all under the assumption that we eventually live in a world where booby trapping repositories becomes a legal issue.  On one hand that feels silly.  On the other hand, we have had far less sensible cases make it to court, and there is a small kernel of similarity which the legal system might latch onto.</p>
]]></description><pubDate>Fri, 01 May 2026 13:02:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47974304</link><dc:creator>SkyBelow</dc:creator><comments>https://news.ycombinator.com/item?id=47974304</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47974304</guid></item><item><title><![CDATA[New comment by SkyBelow in "Claude Code refuses requests or charges extra if your commits mention "OpenClaw""]]></title><description><![CDATA[
<p>What's the chance that it is market motivated?  That the companies most likely to succeed are those willing to break the rules (this isn't to say that breaking the rules makes one likely to succeed; you have to break the right rules and not the wrong ones, and that distinction is often unknown until after the fact).<p>This might mean that the companies we see explode in popularity are those whose cultures are already biased in ways that don't consider negative outcomes, as the companies that did consider them have already excluded themselves from exploding in the market (they might still be entirely successful startups, but at a vastly smaller scale of success).</p>
]]></description><pubDate>Fri, 01 May 2026 12:43:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=47974132</link><dc:creator>SkyBelow</dc:creator><comments>https://news.ycombinator.com/item?id=47974132</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47974132</guid></item><item><title><![CDATA[New comment by SkyBelow in "AI should elevate your thinking, not replace it"]]></title><description><![CDATA[
<p>Technically LLMs can be run in deterministic mode as well, but I don't think that is enough.  Even a deterministic LLM is too chaotic: small changes in the prompt or the general context can result in vastly different outputs.  This makes me think we need some other property that is stronger than (or maybe orthogonal to) determinism.  A notion of being well-behaved, or some other non-chaotic term, so that slightly different inputs don't lead to vastly unexpected results.<p>Even that doesn't feel quite right, because a compiler does seem quite chaotic.  Forget a semicolon, and an otherwise 99.99% identical code base results in a vastly different output.  But it is still a very understandable output.  Very predictable.  So while treating both compilers and LLMs as functions that map massive input strings to massive output strings, there is some property that compilers have and LLMs still lack (and the question is whether they'll always lack it).  I can't really define what it is, but it is something more than determinism.</p>
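The "deterministic but chaotic" distinction can be made concrete with a toy stand-in (a hash function, not an LLM): a map that always gives the same output for the same input, yet where a one-character input change flips roughly half the output bits.

```python
import hashlib

def bits(s: str) -> str:
    """SHA-256 digest of s as a 256-character bit string."""
    digest = hashlib.sha256(s.encode()).digest()
    return "".join(f"{byte:08b}" for byte in digest)

a = bits("print('hello world')")
b = bits("print('hello world!')")   # one character added

# Deterministic: the same input always yields the same digest.
assert a == bits("print('hello world')")

# Chaotic: a tiny input change flips roughly half of the 256 output bits.
flipped = sum(x != y for x, y in zip(a, b))
print(f"{flipped}/256 bits differ")
```

A compiler's semicolon example is deterministic and *locally* drastic too, but the mapping stays legible; the hash (and arguably the LLM) is drastic everywhere.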
]]></description><pubDate>Mon, 27 Apr 2026 17:59:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=47924991</link><dc:creator>SkyBelow</dc:creator><comments>https://news.ycombinator.com/item?id=47924991</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47924991</guid></item><item><title><![CDATA[New comment by SkyBelow in "AI should elevate your thinking, not replace it"]]></title><description><![CDATA[
<p>As much as I use AI, even for coding, I really do not like the argument.  LLMs are too chaotic to be compilers.  The descent from prompt to code has far too many branches, and even small requests begin to build up bad patterns.<p>There is some fun to be had when sufficiently advanced AI allows this in areas where we are okay with things going wrong, but that seems a very limited domain: fun and games, not serious software that needs to be as correct as possible.<p>I can see vibe coding building very simple systems, and it likely will get better at one-off throwaways where edge cases don't matter because we have a one-time need to turn input X into output Y.  But for systems where correctness matters, long term support must be provided, and ease of adding new functionality is a serious consideration, it seems we are as far from having prompt-as-code as we are from AGI.</p>
]]></description><pubDate>Mon, 27 Apr 2026 17:48:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47924855</link><dc:creator>SkyBelow</dc:creator><comments>https://news.ycombinator.com/item?id=47924855</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47924855</guid></item><item><title><![CDATA[New comment by SkyBelow in "DeepSeek v4"]]></title><description><![CDATA[
<p>Even with all training data provided, won't it still be a black box?  Unless one trains it exactly the same way, in the exact same order for each piece of data, potentially requiring the exact same hardware with specific optimizations disabled due to race conditions, etc., the final weights will be different.  So knowing whether the original weights actually contain anything extra still leaves any released weights as a black box, no?  There isn't an equivalent of reproducible builds for LLM weights, even if all of this were provided, right?</p>
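One reason bit-identical retraining is so hard, in miniature: floating-point addition is not associative, so merely accumulating the same gradients in a different order (which parallel hardware does not fix) can produce different results. The values below are contrived to make the effect obvious.

```python
import math

# Floating-point addition is not associative, so reduction order matters.
vals = [1e20, 1.0, -1e20, 1.0]

left_to_right = ((vals[0] + vals[1]) + vals[2]) + vals[3]   # the 1.0 is lost
pairwise      = (vals[0] + vals[2]) + (vals[1] + vals[3])   # cancels first

print(left_to_right)    # 1.0
print(pairwise)         # 2.0
print(math.fsum(vals))  # 2.0, the correctly rounded sum
```

Scale this up to billions of summed gradient terms across nondeterministically scheduled GPU kernels and the weights drift apart even with identical data and code.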
]]></description><pubDate>Fri, 24 Apr 2026 14:55:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47891149</link><dc:creator>SkyBelow</dc:creator><comments>https://news.ycombinator.com/item?id=47891149</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47891149</guid></item><item><title><![CDATA[New comment by SkyBelow in "DeepSeek v4"]]></title><description><![CDATA[
<p>One issue with factual reporting is which facts get reported, given that public attention is a very limited resource.  People consistently extrapolate from data without knowing if that data is representative.  So if I show you 100 stories of people doing awful things on channel A and 100 stories of people doing awesome things on channel B, both will be factual, but one will have you living in fear of everyone while the other will inspire you.  These are still biases.<p>One of the least political examples (to the extent possible given the topic) is stranger danger.  Kids are safer than ever before, but due to the way stories are reported when bad things do happen to kids, parents are less trusting of strangers than ever before (and this despite the evidence that it isn't strangers who are the risk to kids).  The sum total experience that media provides now leads to parents being far more fearful and restrictive of their children than past generations, all without telling any lies.<p>If all the police reports and research into stranger danger being a false narrative can't combat it, how will ideas with far less evidence to the contrary be countered?  Should parents trust the news when it comes to the topic of stranger danger?</p>
]]></description><pubDate>Fri, 24 Apr 2026 14:36:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=47890903</link><dc:creator>SkyBelow</dc:creator><comments>https://news.ycombinator.com/item?id=47890903</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47890903</guid></item><item><title><![CDATA[New comment by SkyBelow in "GPT-5.5"]]></title><description><![CDATA[
<p>Wait, I thought we were onto raccoons on e-scooters to avoid (some of) the issues with Goodhart's Law coming into play.</p>
]]></description><pubDate>Thu, 23 Apr 2026 20:15:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=47881225</link><dc:creator>SkyBelow</dc:creator><comments>https://news.ycombinator.com/item?id=47881225</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47881225</guid></item><item><title><![CDATA[New comment by SkyBelow in "ChatGPT Images 2.0"]]></title><description><![CDATA[
<p>It seems most takes on this hold that the ends either always or never justify the means, but rarely is there discussion of the option that they sometimes can, and of developing a system for when they do and don't.  At least in the general public discourse I've seen involving means and ends.</p>
]]></description><pubDate>Wed, 22 Apr 2026 13:02:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47862964</link><dc:creator>SkyBelow</dc:creator><comments>https://news.ycombinator.com/item?id=47862964</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47862964</guid></item><item><title><![CDATA[New comment by SkyBelow in "10 years ago, someone wrote a test for Servo that included an expiry in 2026"]]></title><description><![CDATA[
<p>Can't one get randomness and determinism at the same time?  Randomly generate the data, but do so when building the test, not when running the test.  This way something that fails will consistently fail, but you also have better chances of finding the missed edge cases that humans would overlook.  Seeded randomness might also be great, as it is far cleaner to generate and expand/update/redo, but still deterministic when it comes time to debug an issue.</p>
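A minimal sketch of the seeded approach in Python (the `sorted` call is a stand-in for whatever code is actually under test):

```python
import random

def make_test_inputs(seed, n=50):
    """Build 'random' test data deterministically: same seed, same data."""
    rng = random.Random(seed)              # local generator, no global state
    return [rng.randint(-1000, 1000) for _ in range(n)]

def run_case(seed):
    data = make_test_inputs(seed)
    result = sorted(data)                  # stand-in for the code under test
    assert all(a <= b for a, b in zip(result, result[1:]))

# Sweep many seeds for coverage; a failure reports a seed that
# regenerates the exact same inputs on every future run.
for seed in range(20):
    run_case(seed)
```

Any failing seed can be hard-coded as a permanent regression case, which gives the broader coverage of random data with the repeatability of a fixed fixture.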
]]></description><pubDate>Mon, 20 Apr 2026 18:05:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47838245</link><dc:creator>SkyBelow</dc:creator><comments>https://news.ycombinator.com/item?id=47838245</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47838245</guid></item><item><title><![CDATA[New comment by SkyBelow in "GitHub's fake star economy"]]></title><description><![CDATA[
<p>Could it be that stars were a good proxy, but as people realized such, they started being gamed, resulting in them becoming a bad proxy?  Goodhart's Law would seem to always be in play for any proxy, because once it is recognized as a good proxy, bad actors will begin gaming it.  A proxy that can't be gamed would be ideal, but I feel that is akin to wishing for the Philosopher's Stone.</p>
]]></description><pubDate>Mon, 20 Apr 2026 15:41:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47835932</link><dc:creator>SkyBelow</dc:creator><comments>https://news.ycombinator.com/item?id=47835932</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47835932</guid></item><item><title><![CDATA[New comment by SkyBelow in "US v. Heppner (S.D.N.Y. 2026) no attorney-client privilege for AI chats [pdf]"]]></title><description><![CDATA[
<p>So, how would it apply to web searches?  If a lawyer searches something for a person's case, is it protected?  If a person searches something for their own case, does it have a similar level of protection?  Seems AI chats would need to follow the same rules.</p>
]]></description><pubDate>Thu, 16 Apr 2026 08:02:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=47790065</link><dc:creator>SkyBelow</dc:creator><comments>https://news.ycombinator.com/item?id=47790065</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47790065</guid></item><item><title><![CDATA[New comment by SkyBelow in "AI could be the end of the digital wave, not the next big thing"]]></title><description><![CDATA[
<p>It needs to be something stronger than just deterministic.<p>With the right settings, an LLM is deterministic.  But even then, small variations in input can cause very unforeseen changes in output, sometimes drastic, sometimes minor.  I'm likely misusing the vocabulary, but I would say this counts as the output being chaotic, so we need compilers to be non-chaotic (and deterministic; I think you might be able to have something that is non-deterministic yet non-chaotic).  I'm not sure that a non-chaotic LLM could ever exist.<p>(Thinking on it a bit more, there are some esoteric languages that might be chaotic, so this might be more difficult to pin down than I thought.)</p>
]]></description><pubDate>Mon, 13 Apr 2026 14:22:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47752415</link><dc:creator>SkyBelow</dc:creator><comments>https://news.ycombinator.com/item?id=47752415</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47752415</guid></item><item><title><![CDATA[New comment by SkyBelow in "Claude mixes up who said what"]]></title><description><![CDATA[
<p>You can try to set up a NN where some of the neurons activate only off of 'safe' input (directly, or indirectly via other 'safe' neurons), but at some point the information from them has to flow into the main output neurons, which are also activating off unsafe user input.  The point where the information combines is where the user's input can corrupt whatever comes from the safe input.  There are plenty of attempts to make this less likely, but at the point of combining, there is a mixing of sources that can't fully be separated.  It isn't that these attempts don't help, but that they can't guarantee safety.<p>Then again, ever since the first von Neumann machine mixed data and instructions, we have never again been able to guarantee safely splitting them.  Is there any computer connected to the internet that is truly unhackable?</p>
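A toy illustration of that mixing point (made-up two-dimensional vectors, not a real model): once softmax attention forms a weighted sum over all token values, the output already blends trusted and untrusted sources, so changing only the untrusted token moves the result.

```python
import math

def attention_output(query, keys, values):
    """Single-query softmax attention: a weighted sum over ALL values."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# One "safe" system token and one untrusted user token (toy key, value pairs).
safe_key, safe_val = [1.0, 0.0], [1.0, 1.0]
user_key, user_val = [0.0, 1.0], [5.0, -3.0]
query = [1.0, 1.0]

out = attention_output(query, [safe_key, user_key], [safe_val, user_val])

# Change only the untrusted value; the blended output moves too.
out2 = attention_output(query, [safe_key, user_key], [safe_val, [-5.0, 3.0]])
print(out != out2)  # True: untrusted input leaks into the mixed output
```

Downstream layers only ever see the blend, which is why provenance of the "safe" signal can't be recovered after the combine.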
]]></description><pubDate>Thu, 09 Apr 2026 17:31:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47706633</link><dc:creator>SkyBelow</dc:creator><comments>https://news.ycombinator.com/item?id=47706633</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47706633</guid></item><item><title><![CDATA[New comment by SkyBelow in "The Future of Everything Is Lies, I Guess"]]></title><description><![CDATA[
<p>Scale is very different, but I wonder if human trust isn't the real issue.  We trust technology too much as a group.  We expect perfection, but we also assume perfection.  This might be because the machines output confident-sounding answers and humans default to treating confidence as a proxy for accuracy, but I think there is another level where people just blindly trust machines because they are so used to using them for algorithms that tend to give correct responses.<p>Even before LLMs were in the public discourse, I would have businesses ask about using AI instead of building some algorithm manually, and when I asked if they had considered the failure rate, they would return either blank stares or say that would count as a bug.  To them, AI meant an algorithm just as good as one built to handle all edge cases in business logic, but easier and faster to implement.<p>We can generally recognize AIs being off when they deal in our area of expertise, but there is some AI variant of Gell-Mann Amnesia at play that leads us to go back to trusting AI when it gives outputs in areas where we are novices.</p>
]]></description><pubDate>Wed, 08 Apr 2026 16:00:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47692026</link><dc:creator>SkyBelow</dc:creator><comments>https://news.ycombinator.com/item?id=47692026</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47692026</guid></item><item><title><![CDATA[New comment by SkyBelow in "Sweden goes back to basics, swapping screens for books in the classroom"]]></title><description><![CDATA[
<p>>Would it be a mistake to use Desmos in a math classroom<p>Maybe.  Back in the day I had classes where we had to learn the rough shape of a number of basic functions, which built intuition that helped.  This involved drawing a lot of them by hand: initially by calculating points and estimating, and later by being given an arbitrary function and graphing it.  Using Desmos too early would've prevented building these skills.<p>Once the skills are built, using it doesn't seem a major negative.<p>I think of it like a calculator.  Don't let kids learning basic arithmetic use a four-function calculator, but once you hit algebra, that's fine (though graphing calculators still aren't).<p>Best might be to mix it up, some with and some without, but no calculator is preferable to always-calculator.</p>
]]></description><pubDate>Thu, 02 Apr 2026 20:46:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=47619938</link><dc:creator>SkyBelow</dc:creator><comments>https://news.ycombinator.com/item?id=47619938</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47619938</guid></item><item><title><![CDATA[New comment by SkyBelow in "Nvidia NemoClaw"]]></title><description><![CDATA[
<p>NFTs were fueled by two different drives: one interested in the technology and whether it could do something new and interesting, and another seeing it as an area of speculation (be that get-rich-quick-and-cash-out, or thinking it is a long term investment, which generally depended on how much the first factor played in).<p>OpenClaw seems to lack as much of that monetary interest driving it.  Not to say there is none, but I don't see people doing nearly as much to get me to buy their OpenClaw.<p>So, yes, on some level, hype alone doesn't prove use, because it can also come from making money.  But, on the other hand, this specific version of hype seems much more focused on "Look at what I built" and much less on "Better buy in now" from the builders themselves.  Of course the API providers selling tokens are loving it for financial reasons.</p>
]]></description><pubDate>Wed, 18 Mar 2026 21:35:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=47431711</link><dc:creator>SkyBelow</dc:creator><comments>https://news.ycombinator.com/item?id=47431711</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47431711</guid></item></channel></rss>