<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: sixfiveotwo</title><link>https://news.ycombinator.com/user?id=sixfiveotwo</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 03 May 2026 18:31:03 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=sixfiveotwo" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by sixfiveotwo in "Making memcpy(NULL, NULL, 0) well-defined"]]></title><description><![CDATA[
<p>> I think a memory address is a number that CPU considers to be a memory address<p>I meant to say that, indeed, there must be some concept of a CPU for a memory address to have a meaning, and for that concept of a CPU to be as widely applicable as possible, surely defining it as abstractly as possible is the way to go. Ergo, the idea of a C abstract machine.<p>Anyway, other people in this thread are discussing the matter more accurately and in more detail than I could hope to, so I'll leave it at that.</p>
]]></description><pubDate>Wed, 11 Dec 2024 15:55:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=42389028</link><dc:creator>sixfiveotwo</dc:creator><comments>https://news.ycombinator.com/item?id=42389028</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42389028</guid></item><item><title><![CDATA[New comment by sixfiveotwo in "Making memcpy(NULL, NULL, 0) well-defined"]]></title><description><![CDATA[
<p>How would you define what a memory address is without first defining in which context it has a meaning?</p>
]]></description><pubDate>Wed, 11 Dec 2024 13:34:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=42387506</link><dc:creator>sixfiveotwo</dc:creator><comments>https://news.ycombinator.com/item?id=42387506</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42387506</guid></item><item><title><![CDATA[New comment by sixfiveotwo in "OK, I can partly explain the LLM chess weirdness now"]]></title><description><![CDATA[
<p>I'm sorry if you feel that way.<p>I am in no way trying to judge you; rather, I'm trying to get closer to the truth of the matter, and your input is valuable, as it points out a discrepancy wrt TFA. But it is also subject to caution, since it reports the results of only one chess player (right?). Furthermore, in the case of both TFA and this YouTuber, we don't have full access to their experiments, so we can't reproduce the results, nor can we try to understand why there is a difference.<p>I might very well be mistaken though, and I am open to criticism and corrections, of course.</p>
]]></description><pubDate>Tue, 26 Nov 2024 06:22:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=42243001</link><dc:creator>sixfiveotwo</dc:creator><comments>https://news.ycombinator.com/item?id=42243001</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42243001</guid></item><item><title><![CDATA[New comment by sixfiveotwo in "OK, I can partly explain the LLM chess weirdness now"]]></title><description><![CDATA[
<p>> Yeah like several hundred "Chess IM/GMs react to ChatGPT playing chess" videos on youtube.<p>If I were to take that sentence literally, I would ask for at least 199 other examples, but I imagine it was just a figure of speech. Nevertheless, if it's only one player complaining (even several times), can we really conclude that ChatGPT cannot play? Is that enough evidence, or is there something else at work?<p>I suppose one could conclude that, if one expected an LLM to be ready to play out of the box, and that would be a fair criticism.</p>
]]></description><pubDate>Sat, 23 Nov 2024 23:51:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=42224828</link><dc:creator>sixfiveotwo</dc:creator><comments>https://news.ycombinator.com/item?id=42224828</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42224828</guid></item><item><title><![CDATA[New comment by sixfiveotwo in "OK, I can partly explain the LLM chess weirdness now"]]></title><description><![CDATA[
<p>Very strange, I cannot spot any that specifically say ChatGPT cheated or played an illegal move. Can you help?</p>
]]></description><pubDate>Fri, 22 Nov 2024 22:53:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=42218027</link><dc:creator>sixfiveotwo</dc:creator><comments>https://news.ycombinator.com/item?id=42218027</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42218027</guid></item><item><title><![CDATA[New comment by sixfiveotwo in "OK, I can partly explain the LLM chess weirdness now"]]></title><description><![CDATA[
<p>That's simple logic. Quoting you again:<p>> Machines are good at applying rules, so when they fail to apply rules correctly, it means they have incorrect, incomplete, or totally absent models.<p>If this line of reasoning applies to machines, but LLMs aren't machines, how can you derive any of these claims?<p>"A implies B" may be right, but you must first demonstrate A before reaching conclusion B.<p>> I think we are discussing whether LLMs can emulate chess playing machines<p>That is incorrect. We're discussing whether LLMs can play chess. Unless you think that human players also emulate chess playing machines?</p>
]]></description><pubDate>Fri, 22 Nov 2024 22:44:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=42217974</link><dc:creator>sixfiveotwo</dc:creator><comments>https://news.ycombinator.com/item?id=42217974</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42217974</guid></item><item><title><![CDATA[New comment by sixfiveotwo in "OK, I can partly explain the LLM chess weirdness now"]]></title><description><![CDATA[
<p>I think the article briefly touches on that topic at some point:<p>> For one, gpt-3.5-turbo-instruct rarely suggests illegal moves, even in the late game. This requires “understanding” chess. If this doesn’t convince you, I encourage you to write a program that can take strings like 1. e4 d5 2. exd5 Qxd5 3. Nc3 and then say if the last move was legal.<p>However, I can't say whether LLMs fall into the "statistical AI" category.</p>
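The article's challenge is harder than it looks because piece geometry is the easy part. As a minimal sketch (not the program the article proposes, just an illustration), here is the purely geometric check for a single piece, the knight; everything a real legality checker still needs on top of this (board occupancy, turn order, pins, checks, castling rights) is exactly the state-tracking the article is pointing at:

```python
def knight_geometry_ok(frm: str, to: str) -> bool:
    """Check only the geometric shape of a knight move, e.g. 'g1' -> 'f3'.

    This is the easy part of legality. A full checker must also track
    board state: whose turn it is, occupied squares, pins, checks, and
    special rules - none of which appears in this function.
    """
    file_delta = abs(ord(frm[0]) - ord(to[0]))
    rank_delta = abs(int(frm[1]) - int(to[1]))
    return {file_delta, rank_delta} == {1, 2}
```

For instance, `knight_geometry_ok("g1", "f3")` holds while `knight_geometry_ok("g1", "g3")` does not; yet neither answer says whether the move is legal in a given position.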
]]></description><pubDate>Fri, 22 Nov 2024 20:08:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=42216997</link><dc:creator>sixfiveotwo</dc:creator><comments>https://news.ycombinator.com/item?id=42216997</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42216997</guid></item><item><title><![CDATA[New comment by sixfiveotwo in "OK, I can partly explain the LLM chess weirdness now"]]></title><description><![CDATA[
<p>> Whereas the LLM makes "moves" that clearly indicate no ability to play chess: moving pieces to squares well outside their legal moveset, moving pieces that aren't on the board, etc.<p>Do you have any evidence of that? TFA doesn't talk about the nature of these errors.</p>
]]></description><pubDate>Fri, 22 Nov 2024 19:55:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=42216875</link><dc:creator>sixfiveotwo</dc:creator><comments>https://news.ycombinator.com/item?id=42216875</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42216875</guid></item><item><title><![CDATA[New comment by sixfiveotwo in "OK, I can partly explain the LLM chess weirdness now"]]></title><description><![CDATA[
<p>> Machines are good at applying rules, so when they fail to apply rules correctly, it means they have incorrect, incomplete, or totally absent models.<p>That's assuming that, somehow, an LLM is a machine. Why would you think that?</p>
]]></description><pubDate>Fri, 22 Nov 2024 19:00:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=42216466</link><dc:creator>sixfiveotwo</dc:creator><comments>https://news.ycombinator.com/item?id=42216466</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42216466</guid></item><item><title><![CDATA[New comment by sixfiveotwo in "OK, I can partly explain the LLM chess weirdness now"]]></title><description><![CDATA[
<p>It's a good point.<p>But this math analogy is not quite appropriate: there's abstract math and there's arithmetic. A good math practitioner (LLM or human) can be bad at arithmetic yet good at abstract reasoning. The latter doesn't (necessarily) require the former.<p>In chess, I don't think you can build a good strategy if it relies on illegal moves, because tactics and strategy are tied together.</p>
]]></description><pubDate>Fri, 22 Nov 2024 18:39:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=42216279</link><dc:creator>sixfiveotwo</dc:creator><comments>https://news.ycombinator.com/item?id=42216279</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42216279</guid></item><item><title><![CDATA[New comment by sixfiveotwo in "Hyrum's Law in Golang"]]></title><description><![CDATA[
<p>Quite interesting, thank you.<p>However, in this specific instance, even if the text cannot be changed, couldn't the server process and signal the error differently, e.g. by returning status code 413[1], since clients ought to recognize that status code anyway?<p>[1]: <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/413" rel="nofollow">https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/413</a></p>
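To illustrate the idea of signaling by status code rather than by error text, here is a minimal sketch using Python's standard library (not the Go server from the article; the MAX_BODY limit is made up for illustration):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

MAX_BODY = 1024  # hypothetical server-side limit, for illustration only

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        if length > MAX_BODY:
            self.rfile.read(length)  # drain the body before rejecting
            # Signal the condition through the status code rather than
            # the error text, so clients match on 413, not on a string.
            self.send_error(413)
            return
        self.rfile.read(length)
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # keep the sketch quiet
        pass
```

A client then keys its retry/size logic off the 413 code, which survives any rewording of the error message.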
]]></description><pubDate>Thu, 21 Nov 2024 10:47:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=42202977</link><dc:creator>sixfiveotwo</dc:creator><comments>https://news.ycombinator.com/item?id=42202977</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42202977</guid></item><item><title><![CDATA[New comment by sixfiveotwo in "SVDQuant: 4-Bit Quantization Powers 12B Flux on a 16GB 4090 GPU with 3x Speedup"]]></title><description><![CDATA[
<p>Thank you, that looks awesome.</p>
]]></description><pubDate>Sun, 17 Nov 2024 14:27:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=42164343</link><dc:creator>sixfiveotwo</dc:creator><comments>https://news.ycombinator.com/item?id=42164343</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42164343</guid></item><item><title><![CDATA[New comment by sixfiveotwo in "SVDQuant: 4-Bit Quantization Powers 12B Flux on a 16GB 4090 GPU with 3x Speedup"]]></title><description><![CDATA[
<p>Sorry, I understand it was a bit intrusively direct. For context: I toyed a little with neural networks a few years ago and wondered about this topic of training a so-called quantized network myself (I wanted to write a small multilayer-perceptron-based library parameterized by the coefficient type - floating point or integers of various precisions - but didn't implement it). Since you mentioned your own work in that area, it piqued my interest, but I don't want to waste your time unnecessarily.</p>
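For a sense of what "parameterized by the coefficient type" might mean in practice, here is a toy sketch of symmetric uniform quantization in pure Python; the function names and the 4-bit default are illustrative assumptions, not anyone's actual library:

```python
def quantize(weights, bits=4):
    """Symmetric uniform quantization of a weight list (toy sketch).

    Maps floats onto the integer grid [-levels, +levels] with a single
    shared scale - roughly what an 'int4' coefficient type would store
    in the kind of parameterized MLP library described above.
    """
    levels = 2 ** (bits - 1) - 1
    scale = (max(abs(w) for w in weights) / levels) or 1.0
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate float weights from the integer grid."""
    return [x * scale for x in q]
```

The open problem alluded to in the thread is not this forward mapping but backpropagating through it, since rounding has zero gradient almost everywhere.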
]]></description><pubDate>Sat, 09 Nov 2024 19:23:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=42096307</link><dc:creator>sixfiveotwo</dc:creator><comments>https://news.ycombinator.com/item?id=42096307</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42096307</guid></item><item><title><![CDATA[New comment by sixfiveotwo in "SVDQuant: 4-Bit Quantization Powers 12B Flux on a 16GB 4090 GPU with 3x Speedup"]]></title><description><![CDATA[
<p>> I spend much longer trying to figure out how to get 1-trit training to work and I never could.<p>What did you try? What were the research directions at the time?</p>
]]></description><pubDate>Sat, 09 Nov 2024 13:14:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=42094237</link><dc:creator>sixfiveotwo</dc:creator><comments>https://news.ycombinator.com/item?id=42094237</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42094237</guid></item><item><title><![CDATA[New comment by sixfiveotwo in "I'm Not Mutable, I'm Partially Instantiated (Prolog)"]]></title><description><![CDATA[
<p>Indeed, you can get a lot more from dependent types than Damas-Hindley-Milner inference, yet does it mean that you should use the former everywhere?</p>
]]></description><pubDate>Thu, 07 Nov 2024 13:36:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=42076518</link><dc:creator>sixfiveotwo</dc:creator><comments>https://news.ycombinator.com/item?id=42076518</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42076518</guid></item><item><title><![CDATA[New comment by sixfiveotwo in "A Hamiltonian Circuit for Rubik's Cube"]]></title><description><![CDATA[
<p>Yes indeed, I realized that it was way more complicated than what I initially imagined.<p>When I first read the article, the sequence of subgroups that were described evoked that image of a combination lock to me:<p>< UR ><p>< U, R ><p>< U, R, D ><p>< U, R, D, L ><p>< U, R, D, L, F ><p>The behavior of the basic operations on the cube reminds me of the product of the quaternion basis vectors (i, j, k). For instance, the product of i and j yields either k or -k depending on the order of i and j. The point I wanted to make is that on a combination lock, each operation on a wheel only affects that wheel, not the others, so one cannot produce one operation by combining several others, unlike what we see with quaternions. On the cube, however, it is often possible to go from one combination to another by different sequences of operations.<p>But that may not matter much if all we care about is going through every possible combination exactly once, just like what one does when using Gray code on binary numbers (which is why I alluded to that in my other post), provided that for this purpose we can find a set of sequences of operations - let's call them large operations - that are orthogonal (and thus emulate the rotating-wheel aspect of the combination lock). I suppose that these subgroups represent the large operations. The problem you bring up now is that these large operations are not commutative, so finding a correct way to apply them to build the circuit is more involved than simply spinning the wheels on a lock.<p>Is that correct?<p>Edit: I just had a first look at Cayley graphs on Wikipedia, and they use quaternion rotations as an example!</p>
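The i, j, k order-dependence mentioned above can be made concrete with a tiny multiplication table over the quaternion basis units only (a sketch; full quaternion arithmetic would also handle sums and real scalars):

```python
# Products of quaternion basis units as (sign, symbol) pairs:
# i*i = j*j = k*k = -1, and i*j = k while j*i = -k, etc.
QTABLE = {
    ("i", "i"): (-1, "1"), ("j", "j"): (-1, "1"), ("k", "k"): (-1, "1"),
    ("i", "j"): (1, "k"), ("j", "i"): (-1, "k"),
    ("j", "k"): (1, "i"), ("k", "j"): (-1, "i"),
    ("k", "i"): (1, "j"), ("i", "k"): (-1, "j"),
}

def qmul(a, b):
    """Product of two signed basis units, each a (sign, symbol) pair."""
    sign, sym = QTABLE[(a[1], b[1])]
    return (a[0] * b[0] * sign, sym)
```

Swapping the operands flips the sign, which is the non-commutativity that a combination lock's independent wheels never exhibit.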
]]></description><pubDate>Mon, 04 Nov 2024 17:47:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=42044101</link><dc:creator>sixfiveotwo</dc:creator><comments>https://news.ycombinator.com/item?id=42044101</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42044101</guid></item><item><title><![CDATA[New comment by sixfiveotwo in "A Hamiltonian Circuit for Rubik's Cube"]]></title><description><![CDATA[
<p>Clearly that first intuition doesn't work. Is the Hamiltonian cycle for decimal numbers perhaps an equivalent of Gray code? And if it exists, is there a connection with the Rubik's cube cycle?</p>
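The decimal analogue does exist: the reflected Gray code construction generalizes to any base, ordering all digit strings so that consecutive entries differ in exactly one digit, by exactly 1. A minimal sketch (note the caveat that for even bases like 10 the ends don't join up, so it is a Hamiltonian path rather than a cycle; whether it connects to the Rubik's cube circuit is a separate question):

```python
def nary_gray(base, digits):
    """Reflected Gray code over `digits` wheels of size `base`.

    Consecutive codes differ in exactly one position, and by exactly 1,
    so the sequence is a Hamiltonian path on the combination-lock graph.
    """
    codes = [[]]
    for _ in range(digits):
        # Prepend each new digit, reversing the previous sequence on
        # odd digits so adjacent blocks meet at identical tails.
        codes = [
            [d] + c
            for d in range(base)
            for c in (codes if d % 2 == 0 else codes[::-1])
        ]
    return codes
```

With base 2 this reduces to the ordinary binary Gray code mentioned in the sibling comment.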
]]></description><pubDate>Mon, 04 Nov 2024 11:51:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=42040687</link><dc:creator>sixfiveotwo</dc:creator><comments>https://news.ycombinator.com/item?id=42040687</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42040687</guid></item><item><title><![CDATA[New comment by sixfiveotwo in "A Hamiltonian Circuit for Rubik's Cube"]]></title><description><![CDATA[
<p>My intuition when reading the first lines of this article was that, just like when searching exhaustively for the correct combination on a padlock, one would cycle through each subgroup, where each of them would represent a digit on the lock. On the lock, one would do 9 steps on the least significant digit (not 10, as that would loop the lock back to a previously seen combination), then propagate the carry to the next digits. But it seems that this is more complicated than that, as the steps at which subgroups connect (the carry) are not always the same?</p>
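The carry in the padlock picture can be made visible with plain mixed-radix counting (a sketch using the standard library's odometer-order enumeration):

```python
from itertools import product

def odometer(base, wheels):
    """All wheel combinations in plain counting order,
    least significant wheel last - i.e. odometer order."""
    return list(product(range(base), repeat=wheels))
```

In `odometer(10, 2)`, the step from `(0, 9)` to `(1, 0)` moves two wheels at once; that is the carry described above, and it is exactly what a Hamiltonian path through the combinations, which moves one wheel per step, has to avoid.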
]]></description><pubDate>Mon, 04 Nov 2024 08:53:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=42039739</link><dc:creator>sixfiveotwo</dc:creator><comments>https://news.ycombinator.com/item?id=42039739</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42039739</guid></item><item><title><![CDATA[New comment by sixfiveotwo in "Geothermal Energy Could Outperform Nuclear Power"]]></title><description><![CDATA[
<p>Perhaps the problem could be framed differently. Wouldn't the energy gathered from renewable sources otherwise be used by nature? At what point will the amount we capture be large enough to significantly impact the ecosystem?</p>
]]></description><pubDate>Sun, 29 Sep 2024 23:17:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=41691748</link><dc:creator>sixfiveotwo</dc:creator><comments>https://news.ycombinator.com/item?id=41691748</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41691748</guid></item><item><title><![CDATA[New comment by sixfiveotwo in "Uber charges more if you have credits in your account"]]></title><description><![CDATA[
<p>> This is a very good example of how you can cause real world harm by trying to game the system for yourself.<p>Isn't that the rule that Uber itself already imposes on its customers and drivers?</p>
]]></description><pubDate>Mon, 23 Sep 2024 07:40:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=41623393</link><dc:creator>sixfiveotwo</dc:creator><comments>https://news.ycombinator.com/item?id=41623393</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41623393</guid></item></channel></rss>