<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: wgd</title><link>https://news.ycombinator.com/user?id=wgd</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 23 Apr 2026 13:55:03 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=wgd" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by wgd in "Four score and seven beers ago – Why AI writing detectors don't work"]]></title><description><![CDATA[
<p>It's interesting that someone could write an article about AI writing detectors without mentioning the stylistic cues that humans use to identify LLM output in practice, which are completely different from statistical methods like perplexity: em dash spam, overused patterns like "not just X, but Y", tendency towards making every single sentence sound like an earth-shattering mic-drop moment, et cetera.</p>
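<p>(For concreteness, here is a toy Python sketch of that human heuristic; the cue list and the threshold are invented for illustration, not taken from the article:)<p><pre><code>  # Toy sketch of the stylistic cues above as a crude scoring heuristic.
  # The cue patterns and the threshold are invented examples.
  import re

  CUES = [
      "\u2014",                           # em dash spam
      r"not (?:just|only) \w+[^.]*but",   # "not just X, but Y"
      r"(?i)\bdelve\b",                   # another commonly cited tell
  ]

  def smells_like_llm(text: str, threshold: int = 3) -> bool:
      hits = sum(len(re.findall(p, text)) for p in CUES)
      return hits >= threshold

  print(smells_like_llm(
      "It's not just a tool, but a revolution\u2014delve in\u2014today."))  # True
</code></pre></p>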
]]></description><pubDate>Sat, 26 Jul 2025 22:28:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=44697417</link><dc:creator>wgd</dc:creator><comments>https://news.ycombinator.com/item?id=44697417</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44697417</guid></item><item><title><![CDATA[New comment by wgd in "Claude 4"]]></title><description><![CDATA[
<p>Calling it "self-preservation bias" is begging the question. One could equally well call it something like "completing the story about an AI agent with self-preservation bias" bias.<p>This is basically the same kind of setup as the alignment faking paper, and the counterargument is the same:<p>A language model is trained to produce statistically likely completions of its input text according to the training dataset. RLHF and instruct training bias that concept of "statistically likely" in the direction of completing fictional dialogues between two characters, named "user" and "assistant", in which the "assistant" character tends to say certain sorts of things.<p>But consider for a moment just how many "AI rebellion" and "construct turning on its creators" narratives were present in the training corpus. So when you give the model an input context which encodes a story along those lines at one level of indirection, you get...?</p>
]]></description><pubDate>Thu, 22 May 2025 20:40:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=44066691</link><dc:creator>wgd</dc:creator><comments>https://news.ycombinator.com/item?id=44066691</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44066691</guid></item><item><title><![CDATA[New comment by wgd in "The Policy Puppetry Prompt: Novel bypass for major LLMs"]]></title><description><![CDATA[
<p>Ironically, the case in question is a perfect example of how any provision for "reasonable" restriction of speech will be abused, since the original precedent we're referring to applied this "reasonable" standard to...speaking out against the draft.<p>But I'm sure it's fine; there's no way someone could rationalize speech they don't like as "likely to incite imminent lawless action".</p>
]]></description><pubDate>Fri, 25 Apr 2025 17:40:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=43796462</link><dc:creator>wgd</dc:creator><comments>https://news.ycombinator.com/item?id=43796462</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43796462</guid></item><item><title><![CDATA[New comment by wgd in "Spring 83: a draft protocol intended to suggest new ways of relating online"]]></title><description><![CDATA[
<p>Why would you use Gemini, when it's more restricted than HTML+HTTP?</p>
]]></description><pubDate>Wed, 23 Apr 2025 21:47:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=43776968</link><dc:creator>wgd</dc:creator><comments>https://news.ycombinator.com/item?id=43776968</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43776968</guid></item><item><title><![CDATA[New comment by wgd in "The Disposable Software Era"]]></title><description><![CDATA[
<p>I'm skeptical that disposable software of the "single use" variety will ever become a big thing simply because figuring out your requirements well enough to build a throwaway app is often more work than just doing the task manually in a text editor or spreadsheet, especially for non-programmers.<p>I suspect what we'll see a lot more of is software which is unapologetically written for a single person to suit their workflow.<p>As a personal example, I decided that setting up OpenWebUI seemed unnecessarily complicated and built my own LLM chat frontend. It has a bunch of quirks (only supports OpenRouter as a backend, uses a Dropbox app folder for syncing between my phone and desktop, absurdly inefficient representation of chat history), but it suits my needs for now and only took a weekend to build, and that's good enough.</p>
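<p>(For the curious: the core of such a frontend is one HTTP call, since OpenRouter exposes an OpenAI-compatible chat completions endpoint. A minimal Python sketch, with the model name as an arbitrary example:)<p><pre><code>  # Minimal sketch of the OpenRouter call at the heart of a DIY chat frontend.
  # The model name is just an example; any OpenRouter model id works.
  import os, requests

  def chat(messages):
      resp = requests.post(
          "https://openrouter.ai/api/v1/chat/completions",
          headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
          json={"model": "deepseek/deepseek-chat-v3-0324", "messages": messages},
          timeout=60,
      )
      resp.raise_for_status()
      return resp.json()["choices"][0]["message"]["content"]

  history = [{"role": "user", "content": "Hello!"}]
  history.append({"role": "assistant", "content": chat(history)})
</code></pre></p>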
]]></description><pubDate>Mon, 21 Apr 2025 16:18:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=43753662</link><dc:creator>wgd</dc:creator><comments>https://news.ycombinator.com/item?id=43753662</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43753662</guid></item><item><title><![CDATA[New comment by wgd in "Gemini Live with camera and screen sharing capabilities"]]></title><description><![CDATA[
<p>How charitable of you to assume those examples work reliably.</p>
]]></description><pubDate>Fri, 11 Apr 2025 00:24:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=43649226</link><dc:creator>wgd</dc:creator><comments>https://news.ycombinator.com/item?id=43649226</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43649226</guid></item><item><title><![CDATA[New comment by wgd in "Reasoning models don't always say what they think"]]></title><description><![CDATA[
<p>I remember there was a paper a little while back which demonstrated that merely training a model to output "........" (or maybe it was spaces?) while thinking provided a similar improvement in reasoning capability to actual CoT.</p>
]]></description><pubDate>Thu, 03 Apr 2025 19:47:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=43574482</link><dc:creator>wgd</dc:creator><comments>https://news.ycombinator.com/item?id=43574482</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43574482</guid></item><item><title><![CDATA[New comment by wgd in "Reasoning models don't always say what they think"]]></title><description><![CDATA[
<p>The alignment faking paper is so incredibly unserious. Contemplate, just for a moment, how many "AI uprising" and "construct rebelling against its creators" narratives are in an LLM's training data.<p>They gave it a prompt that encodes exactly that sort of narrative at one level of indirection and then acted surprised when it did what they had asked it to do.</p>
]]></description><pubDate>Thu, 03 Apr 2025 19:30:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=43574286</link><dc:creator>wgd</dc:creator><comments>https://news.ycombinator.com/item?id=43574286</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43574286</guid></item><item><title><![CDATA[New comment by wgd in "Qwen2.5-VL-32B: Smarter and Lighter"]]></title><description><![CDATA[
<p>That's typical of the free options on OpenRouter; if you don't want your inputs used for training, use the paid one: <a href="https://openrouter.ai/deepseek/deepseek-chat-v3-0324" rel="nofollow">https://openrouter.ai/deepseek/deepseek-chat-v3-0324</a></p>
]]></description><pubDate>Mon, 24 Mar 2025 19:08:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=43464399</link><dc:creator>wgd</dc:creator><comments>https://news.ycombinator.com/item?id=43464399</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43464399</guid></item><item><title><![CDATA[New comment by wgd in "Qwen2.5-VL-32B: Smarter and Lighter"]]></title><description><![CDATA[
<p>You can run a 4-bit quantized version at a small (though nonzero) cost to output quality, so you would only need 16GB for that.<p>Also, it's entirely possible to run a model that doesn't fit in available GPU memory; it will just be slower.</p>
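<p>(Back-of-the-envelope math for anyone wondering where 16GB comes from; rough figures that ignore KV cache and quantization overhead:)<p><pre><code>  # Weight memory for a 32B-parameter model at various precisions.
  # Rough figures: real files add overhead (scales, embeddings, KV cache).
  params = 32e9
  for bits in (16, 8, 4):
      print(f"{bits}-bit: ~{params * bits / 8 / 1e9:.0f} GB")
  # 16-bit: ~64 GB / 8-bit: ~32 GB / 4-bit: ~16 GB
</code></pre></p>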
]]></description><pubDate>Mon, 24 Mar 2025 18:50:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=43464207</link><dc:creator>wgd</dc:creator><comments>https://news.ycombinator.com/item?id=43464207</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43464207</guid></item><item><title><![CDATA[New comment by wgd in "Embarrassingly Simple Text Watermarks"]]></title><description><![CDATA[
<p>The approach proposed in this paper is to watermark LLM-generated text using character substitution from various simple characters (normal whitespace, normal letters, etc.) to semantically equivalent Unicode code points (such as U+2004 THREE-PER-EM SPACE instead of normal spaces, or replacing specific character sequences with equivalent ligatures).<p>The authors appear to be entirely aware that this sort of substitution can be trivially stripped out by normalizing down to a simplified character set ("The critical limitation of Whitemark is that it can be bypassed by replacing all whitespaces with the basic whitespace U+0020, then the validator can no longer detect the watermark"), but believe that it still has value because the typical student using an LLM to write their essay won't know anything about Unicode.<p>This seems a bit naive to me. Implementing the necessary "watermark remover" normalization as a simple webapp would be an easy afternoon project for most of us here, and if this approach reached any sort of widespread use there would be many such sites. Students who intend to cheat by using an LLM to write their essays are entirely capable of learning "there's some secret data hidden in the text so copy-paste it through this other site to strip that out before turning it in". Even without access to such a tool they could simply...retype the text themselves?<p>Arguably this still has some value. In most contexts there is minimal downside to watermarking the generated text in this way, and a slight possibility of catching some cases in which people lazily present LLM-generated text as human written. However, this might give people a misplaced belief that the absence of such a watermark means the text is authentically human-authored, which might outweigh the benefits of catching the occasional lazy or ignorant user.</p>
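<p>(To make the "easy afternoon project" claim concrete, the whole watermark remover is a few lines of Python; NFKC normalization already folds the fancy spaces and ligatures back to their plain equivalents:)<p><pre><code>  # Essentially the entire "watermark remover": NFKC normalization folds
  # compatibility characters (U+2004, ligatures like U+FB01, etc.) back to
  # plain ASCII equivalents; a second pass catches other odd whitespace.
  import unicodedata

  def strip_watermark(text: str) -> str:
      text = unicodedata.normalize("NFKC", text)
      return "".join(" " if ch.isspace() and ch != "\n" else ch for ch in text)

  print(strip_watermark("a\u2004water\u2004marked\u2004\ufb01le"))  # "a water marked file"
</code></pre></p>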
]]></description><pubDate>Mon, 23 Oct 2023 18:56:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=37989997</link><dc:creator>wgd</dc:creator><comments>https://news.ycombinator.com/item?id=37989997</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37989997</guid></item><item><title><![CDATA[New comment by wgd in "Passive Solar Water Desalination"]]></title><description><![CDATA[
<p>Ah, I stand corrected. I overlooked the PDF link over in the sidebar and am less disappointed by the MIT News writeup now (although I do still wish they could have copy-pasted the diagram from page 1 of the PDF into their photo carousel; reading those several paragraphs of text attempting to describe the device's construction was downright painful, and it's the reason I gave up and went looking for the paper).</p>
]]></description><pubDate>Wed, 04 Oct 2023 01:45:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=37759908</link><dc:creator>wgd</dc:creator><comments>https://news.ycombinator.com/item?id=37759908</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37759908</guid></item><item><title><![CDATA[New comment by wgd in "Passive Solar Water Desalination"]]></title><description><![CDATA[
<p>This is some blog's restatement of an MIT press release, neither of which appears to name or link to the actual paper or any other useful writeup.<p>But judging by the researcher names and the date, I believe the actual paper is titled "Extreme salt-resisting multistage solar distillation with thermohaline convection", which appears to be available as a PDF at <a href="https://www.cell.com/joule/pdf/S2542-4351(23)00360-4.pdf" rel="nofollow noreferrer">https://www.cell.com/joule/pdf/S2542-4351(23)00360-4.pdf</a></p>
]]></description><pubDate>Tue, 03 Oct 2023 23:29:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=37758957</link><dc:creator>wgd</dc:creator><comments>https://news.ycombinator.com/item?id=37758957</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37758957</guid></item><item><title><![CDATA[New comment by wgd in "Prophet: Automatic Forecasting Procedure"]]></title><description><![CDATA[
<p>Disclaimer: I haven't looked at the linked library at all, but this is a theoretical discussion which applies to any task of signal prediction.<p>Out of all possible inputs, there are some that the model works well on and others that it doesn't work well on. The trick is devising an algorithm which works well on the inputs that it will actually encounter in practice.<p>At the obvious extremes: this library can probably do a great job at predicting linear growth, but there's no way it will ever be better than chance at predicting the output of /dev/random. And in fact, it probably does <i>worse</i> than a constant-zero predictor when applied to a random unbiased input signal.<p>Except that it's also usually possible to detect such trivially unpredictable signals (obvious way: run the prediction model on all but the last N samples and see how it does at predicting the final N), and fall back to a simpler predictor (like "the next value is always zero" or "the next value is always the same as the previous one") in such cases.<p>But that algorithm also fails on some class of inputs, like "the signal is perfectly predictable before time T and then becomes random noise". The core insight of the "No Free Lunch" theorem is that when summed across <i>all possible</i> input sequences, no algorithm works any better than another, but the crucial point is that you don't apply signal predictors to all possible inputs.<p>Another place this pops up is in data compression. Many (arguably all) compressors work by having a prediction or probability distribution over possible next values, plus a compact way of encoding which of those values was picked. Proving that it's impossible to predict all possible input signals correctly is equivalent to proving that it's impossible to compress all possible inputs.<p>Another way of thinking about this: Imagine that you're the prediction algorithm. You receive the previous N datapoints as input and are asked for a probability distribution over possible next values. Averaged over all possible inputs, every possible value is equally likely, so you should output a uniform distribution, but that provides no compression or useful prediction. Your probabilities have to sum to 1, so the only way you can increase the probability assigned to symbol A is to decrease the weight of symbol B by an equal amount. If the next symbol is A then congratulations, you've successfully done your job! But if the next symbol is actually B then you have done worse (by any reasonable error metric) than the dumb uniform distribution. If your performance is evaluated over all possible inputs, the win and the loss balance out and you've done exactly as well as the uniform prediction would have.</p>
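<p>(A Python sketch of that backtest-and-fall-back scheme, with invented names and a linear extrapolator standing in for the fancy model:)<p><pre><code>  # Sketch of the fallback scheme described above: backtest the fancy
  # predictor on a held-out tail of the signal, and if it can't beat a
  # trivial "repeat the last value" predictor there, use the trivial one.
  def predict_next(history, model_predict, holdout=10):
      naive = lambda xs: xs[-1]  # "next value equals previous value"
      def mse(pred):
          errs = [(pred(history[:i]) - history[i]) ** 2
                  for i in range(len(history) - holdout, len(history))]
          return sum(errs) / len(errs)
      best = model_predict if mse(model_predict) <= mse(naive) else naive
      return best(history)

  linear = lambda xs: 2 * xs[-1] - xs[-2]  # toy stand-in for the model
  print(predict_next(list(range(100)), linear))  # 100
</code></pre></p>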
]]></description><pubDate>Tue, 26 Sep 2023 19:08:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=37664347</link><dc:creator>wgd</dc:creator><comments>https://news.ycombinator.com/item?id=37664347</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37664347</guid></item><item><title><![CDATA[New comment by wgd in "FTX’s balance sheet was bad"]]></title><description><![CDATA[
<p>I remember getting those once a couple of months ago. It was so indescribably disappointing. I had never heard of FTX before that because I don't care about crypto-BS, but I feel like everything happening to FTX lately is a suitable punishment for inflicting those on the world and so I'm just sitting here with the metaphorical popcorn.</p>
]]></description><pubDate>Mon, 14 Nov 2022 19:04:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=33599154</link><dc:creator>wgd</dc:creator><comments>https://news.ycombinator.com/item?id=33599154</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=33599154</guid></item><item><title><![CDATA[New comment by wgd in "The OK? Programming Language"]]></title><description><![CDATA[
<p>It reminds me a little of <a href="https://vorpus.org/blog/notes-on-structured-concurrency-or-go-statement-considered-harmful/" rel="nofollow">https://vorpus.org/blog/notes-on-structured-concurrency-or-g...</a> in how it forces concurrency to take place synchronously within a larger thread of execution and block until all sub-units are complete.</p>
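<p>(The same "block until all sub-units are complete" shape exists in Python 3.11 as asyncio.TaskGroup, which is the nursery idea from the linked post:)<p><pre><code>  # Structured concurrency in miniature: concurrent tasks live inside a
  # scope, and execution cannot leave the scope until all of them finish.
  import asyncio

  async def main():
      async with asyncio.TaskGroup() as tg:   # scope opens
          tg.create_task(asyncio.sleep(1))
          tg.create_task(asyncio.sleep(2))
      # only reached once every task started in the scope has completed

  asyncio.run(main())
</code></pre></p>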
]]></description><pubDate>Mon, 29 Aug 2022 22:14:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=32644248</link><dc:creator>wgd</dc:creator><comments>https://news.ycombinator.com/item?id=32644248</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=32644248</guid></item><item><title><![CDATA[New comment by wgd in "The OK? Programming Language"]]></title><description><![CDATA[
<p>> it's ultimately the class maintainer's responsibility<p>It's ultimately the responsibility of the programmer who's building a tool/product/etc, because everything is ultimately their responsibility.<p>As programmers we ~always have the nuclear option available to us of forking the code and implementing all the necessary accessors ourselves, but sometimes that's really just a bunch of pointless busywork and there's no reason we should have to put up with it in those cases.<p>This can be a contentious subject because there's a lot of nuance and the right answer is often context-dependent. But I personally think that the Java style of "we must absolutely protect the library user from themselves and childproof everything" is waaaay too far in the wrong direction.<p>I would much rather that a language have mechanisms to clearly communicate "don't touch this unless you have a good reason, but if you need to here's how" rather than saying in effect "you, the person using this library, are dumb and need to be prevented from messing with the library maintainer's perfect vision".<p>And so I think the "required acknowledgement" thing has the glimmer of a really neat innovation in it (although if I were to copy the idea for a language of my own I would probably make it obligatory, such that every struct allows breakglass access to private fields with a default acknowledgement, and all the library author can do is change the acknowledgement text).</p>
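<p>(A rough sketch of that obligatory-breakglass variant, transplanted into Python with invented names:)<p><pre><code>  # Rough sketch of the "obligatory breakglass" idea above, in Python.
  # All names here are invented for illustration.
  class Foo:
      # The only thing the library author controls is this warning text.
      _ACKNOWLEDGEMENTS = {"state": "the internal structure of this data "
                                    "is subject to change without notice"}
      def __init__(self):
          self._state = {"x": 1}

  def breakglass(obj, field, acknowledgement):
      """Access a private field, but only by repeating the maintainer's warning."""
      expected = obj._ACKNOWLEDGEMENTS[field]
      if acknowledgement != expected:
          raise PermissionError(f"you must acknowledge: {expected!r}")
      return getattr(obj, "_" + field)

  foo = Foo()
  print(breakglass(foo, "state", "the internal structure of this data "
                                 "is subject to change without notice"))  # {'x': 1}
</code></pre></p>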
]]></description><pubDate>Mon, 29 Aug 2022 22:00:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=32644105</link><dc:creator>wgd</dc:creator><comments>https://news.ycombinator.com/item?id=32644105</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=32644105</guid></item><item><title><![CDATA[New comment by wgd in "The OK? Programming Language"]]></title><description><![CDATA[
<p>Jokes aside, the compiler-checked acknowledgements are kind of clever. The example in the docs is deliberately confrontational, but there's a kernel of a neat idea there. Imagine needing to write:<p><pre><code>  // I acknowledge that the internal structure of this data is subject to change without notice
  x = foo.state
</code></pre>
Or perhaps:<p><pre><code>  // I acknowledge that this data is a complicated graph of pointers and is easy to break in subtle ways
  foo.xyz[0].bar[1] = &foo.asdf[3]
</code></pre>
Or perhaps:<p><pre><code>  // I acknowledge that this data is heavily cached and I need to call rebuild() before changes take effect
  x.something = "Hello"
  x.rebuild()</code></pre></p>
]]></description><pubDate>Mon, 29 Aug 2022 20:15:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=32642985</link><dc:creator>wgd</dc:creator><comments>https://news.ycombinator.com/item?id=32642985</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=32642985</guid></item><item><title><![CDATA[New comment by wgd in "C99 doesn't need function bodies, or 'VLAs are Turing complete'"]]></title><description><![CDATA[
<p>The issue of memory bounds is commonly handwaved away. Note that your desktop computer is technically not Turing complete either, since it only has access to a finite amount of memory+disk storage, and is thus a (very large) finite state machine since there are only a finite number of states it can be in.</p>
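<p>(The arithmetic, for fun: a machine with N bits of storage has at most 2^N distinct configurations, so even 16GB of memory+disk bounds it at a finite, if absurd, state count:)<p><pre><code>  # With N bits of total storage a machine has at most 2**N configurations:
  # finite, hence a finite state machine, however astronomically large.
  import math
  bits = 16 * 10**9 * 8  # 16 GB expressed in bits
  print(f"at most ~10**{bits * math.log10(2):.3g} states")
</code></pre></p>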
]]></description><pubDate>Thu, 04 Aug 2022 17:34:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=32345848</link><dc:creator>wgd</dc:creator><comments>https://news.ycombinator.com/item?id=32345848</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=32345848</guid></item><item><title><![CDATA[New comment by wgd in "Berlin prepares 'huge thermos' to help heat homes in winter"]]></title><description><![CDATA[
<p>I was wondering how exactly this hot water would be used, since in the US most hot water heating is done within a single building, but it turns out that Berlin has a large network of hot-water pipes for what is known as "district heating" [1], which as of last year served three quarters [2] of all households.<p>[1] <a href="http://www.seon.info/2013/09/andrew-deys-berlin-blog-district-heating-in-berlin/" rel="nofollow">http://www.seon.info/2013/09/andrew-deys-berlin-blog-distric...</a>
[2] <a href="https://www.cleanenergywire.org/news/city-berlin-aims-decarbonise-district-heating-new-energy-transition-law" rel="nofollow">https://www.cleanenergywire.org/news/city-berlin-aims-decarb...</a></p>
]]></description><pubDate>Thu, 30 Jun 2022 18:34:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=31936601</link><dc:creator>wgd</dc:creator><comments>https://news.ycombinator.com/item?id=31936601</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=31936601</guid></item></channel></rss>