<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: unignorant</title><link>https://news.ycombinator.com/user?id=unignorant</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 02 May 2026 13:04:46 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=unignorant" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by unignorant in "Vera: a programming language designed for machines to write"]]></title><description><![CDATA[
<p>This isn't my project, but I shared it here because it has a few important ideas I've been thinking about in my own work. Effect type systems in particular are a really good fit for LLMs because they allow you to reason very precisely about a program's capabilities before runtime (basically, using the type system for capability proofs). This helps you trust agent-created code (for example, you know it can't do IO), or, if the code <i>does</i> require certain capabilities, run it in a sandbox (e.g., with a mocked network or filesystem). This kind of language design also provides a safer foundation for complex meta-systems of agents-that-create-agents, depending on how the runtime is implemented, though Vera may be somewhat limited in that particular respect.<p>The major design decision I'm a little skeptical about is removing variable names; it would be interesting to see empirical data on that, as it seems a bit unintuitive. I would expect almost the opposite: that variable names give LLMs some useful local semantics.</p>
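To make the capability-proof idea concrete, here is a toy Python sketch (all names hypothetical; a real effect type system like the one described would prove these properties statically from the types, rather than checking declared tags at run time):

```python
# Toy stand-in for effect-based capability checks: functions declare the
# effects they need, and a runner refuses anything outside an allowlist.
ALLOWED_EFFECTS = {"pure"}  # policy: agent-generated code must be IO-free

def effects(*declared):
    """Attach a declared effect set to a function (illustrative helper)."""
    def wrap(fn):
        fn.__effects__ = set(declared)
        return fn
    return wrap

def run_trusted(fn, *args):
    """Run fn only if every declared effect is on the allowlist."""
    undeclared = getattr(fn, "__effects__", {"unknown"}) - ALLOWED_EFFECTS
    if undeclared:
        raise PermissionError(f"refusing to run: requires {undeclared}")
    return fn(*args)

@effects("pure")
def double(x):
    return 2 * x

@effects("network")
def fetch(url):
    ...  # would hit the network; never reached under this policy

print(run_trusted(double, 21))       # -> 42
# run_trusted(fetch, "https://...")  # -> PermissionError
```

In an effect-typed language the same guarantee comes for free at compile time, which is what makes the agent-sandboxing story appealing.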
]]></description><pubDate>Wed, 29 Apr 2026 23:21:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47956015</link><dc:creator>unignorant</dc:creator><comments>https://news.ycombinator.com/item?id=47956015</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47956015</guid></item><item><title><![CDATA[Vera: a programming language designed for machines to write]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/aallan/vera">https://github.com/aallan/vera</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47955118">https://news.ycombinator.com/item?id=47955118</a></p>
<p>Points: 111</p>
<p># Comments: 95</p>
]]></description><pubDate>Wed, 29 Apr 2026 21:41:32 +0000</pubDate><link>https://github.com/aallan/vera</link><dc:creator>unignorant</dc:creator><comments>https://news.ycombinator.com/item?id=47955118</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47955118</guid></item><item><title><![CDATA[Distilling a Tiny Model for Fast Interpretability]]></title><description><![CDATA[
<p>Article URL: <a href="https://ethanfast.substack.com/p/a-tiny-model-for-fast-interpretability">https://ethanfast.substack.com/p/a-tiny-model-for-fast-interpretability</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47937735">https://news.ycombinator.com/item?id=47937735</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 28 Apr 2026 17:35:57 +0000</pubDate><link>https://ethanfast.substack.com/p/a-tiny-model-for-fast-interpretability</link><dc:creator>unignorant</dc:creator><comments>https://news.ycombinator.com/item?id=47937735</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47937735</guid></item><item><title><![CDATA[Fast-AI-detector: a fast local CLI for detecting AI-generated text]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/Ejhfast/fast-ai-detector">https://github.com/Ejhfast/fast-ai-detector</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47897581">https://news.ycombinator.com/item?id=47897581</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Sat, 25 Apr 2026 00:50:35 +0000</pubDate><link>https://github.com/Ejhfast/fast-ai-detector</link><dc:creator>unignorant</dc:creator><comments>https://news.ycombinator.com/item?id=47897581</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47897581</guid></item><item><title><![CDATA[New comment by unignorant in "Metatextual Literacy"]]></title><description><![CDATA[
<p>I agree: the more likely psychology of the Greg character is that he doesn't understand that the way he presents himself in the pictures damns his surface-level framing. You can really go quite far with more sophisticated versions of this technique in fiction -- Ishiguro's <i>The Remains of the Day</i> is my favorite example!</p>
]]></description><pubDate>Sun, 19 Apr 2026 06:17:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47822183</link><dc:creator>unignorant</dc:creator><comments>https://news.ycombinator.com/item?id=47822183</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47822183</guid></item><item><title><![CDATA[New comment by unignorant in "Launch HN: Tamarind Bio (YC W24) – AI Inference Provider for Drug Discovery"]]></title><description><![CDATA[
<p>These days it's almost trivial to design a binder against a target of interest with computation alone (tools like boltzgen, among many others). While that's not the main bottleneck to drug development (imo you are correct about the main bottlenecks), it's still a huge change from the state of the technology even one or two years ago, when finding that same binder could take months or years, generally with a lot more resources thrown at the problem. These kinds of computational tools only started working really well quite recently (e.g., high enough hit rates for small-scale screening where you just order a few designs, good Kd, target specificity out of the box).<p>So both things can be true: the more important bottlenecks remain, but progress on discovery work has been very exciting.</p>
]]></description><pubDate>Wed, 07 Jan 2026 01:12:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=46521249</link><dc:creator>unignorant</dc:creator><comments>https://news.ycombinator.com/item?id=46521249</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46521249</guid></item><item><title><![CDATA[New comment by unignorant in "AI vs. Professional Authors Results"]]></title><description><![CDATA[
<p>In another reply I gave an example of something you can do: <a href="https://news.ycombinator.com/item?id=44937774">https://news.ycombinator.com/item?id=44937774</a><p>I enjoy writing so a system like this would never replace that for me. But for someone who doesn't enjoy writing (or maybe can't generate work that meets their bar in the Ira Glass sense of taste) I think this kind of setup works okay for generating flash even with today's models.</p>
]]></description><pubDate>Mon, 18 Aug 2025 05:59:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=44937811</link><dc:creator>unignorant</dc:creator><comments>https://news.ycombinator.com/item?id=44937811</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44937811</guid></item><item><title><![CDATA[New comment by unignorant in "AI vs. Professional Authors Results"]]></title><description><![CDATA[
<p>For flash you can get much better results by asking the system to first generate a detailed scaffold. Here's an example of some metadata you might try to generate before actually writing the story: genres the story should fit into; POV of the story; high-level structure of the story; list of characters in the story along with significant details; themes and topics present in the story; detailed style notes.<p>From there you use a second prompt to generate a story that follows those details. You can also generate many candidates and have another model instance rate the stories on both general literary criteria and how well they fit the prompt, then read only the best.<p>This has produced some work I've been reasonably impressed by, though it's not at the level of the best human flash writers.<p>Also, one easy way to get output that completely avoids the "smell" you're talking about is to give specific guidance on style and perspective (e.g., GPT-5 Thinking can do "literary stream-of-consciousness 1st person teenage perspective" reasonably well and will not sound at all like typical model writing).</p>
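The two-stage setup above can be sketched as follows (Python; `generate` and `rate` are hypothetical stand-ins for calls to whatever LLM provider you use, and the prompt wording is illustrative):

```python
# Scaffold-first pipeline: one prompt builds detailed story metadata, a
# second writes the story from it, and a rater picks the best of many.
SCAFFOLD_PROMPT = (
    "For a flash story about {idea}, write: genres it should fit into; "
    "POV; high-level structure; characters with significant details; "
    "themes and topics; detailed style notes."
)
STORY_PROMPT = "Write a flash story that follows this scaffold:\n{scaffold}"

def write_candidates(idea, generate, n=8):
    """Generate a scaffold once, then n stories conditioned on it."""
    scaffold = generate(SCAFFOLD_PROMPT.format(idea=idea))
    return [generate(STORY_PROMPT.format(scaffold=scaffold)) for _ in range(n)]

def best_story(candidates, rate):
    """`rate` is a second model instance scoring literary quality and
    how well each candidate fits the prompt."""
    return max(candidates, key=rate)
```

Nothing here is model-specific; the gains come from forcing the structural decisions to happen before any prose is written.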
]]></description><pubDate>Mon, 18 Aug 2025 05:48:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=44937774</link><dc:creator>unignorant</dc:creator><comments>https://news.ycombinator.com/item?id=44937774</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44937774</guid></item><item><title><![CDATA[New comment by unignorant in "AI vs. Professional Authors Results"]]></title><description><![CDATA[
<p>I'm not sure I agree that the human stories felt original. I was pretty unimpressed with all of the stories except maybe 6, and even that one dealt in some common tropes. 5 had fewer tropes than 6 (and maybe as a result received the highest average scores from his readers), but I could tell from the style that it was AI.</p>
]]></description><pubDate>Mon, 18 Aug 2025 05:30:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=44937699</link><dc:creator>unignorant</dc:creator><comments>https://news.ycombinator.com/item?id=44937699</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44937699</guid></item><item><title><![CDATA[New comment by unignorant in "AI vs. Professional Authors Results"]]></title><description><![CDATA[
<p>Here are my notes and guesses on the stories in case people here find it interesting. Like some others in the blog post comments I got 6/8 right:<p>1.) probably human, low on style but a solid twist (CORRECT)
2.) interesting imagery but some continuity issues, maybe AI (INCORRECT)
3.) more a scene than a story, highly confident it's AI given the style (CORRECT)
4.) style could go either way, maybe human given some successful characterization (INCORRECT)
5.) I like the style but it's probably AI; the metaphors are too dense and there are very minor continuity errors (CORRECT)
6.) some genuinely funny stuff and good world building, almost certainly human (CORRECT)
7.) probably AI prompted to go for humor, some minor continuity issues (CORRECT)
8.) nicely subverted expectations, probably human (CORRECT)<p>My personal ranking (again blind to author) was:<p>6 (human);
8 (human);
4 (AI);
1 (human) and 5 (AI) -- tied;
2 (human);
3 and 7 (AI) -- tied<p>So for me the two best stories were human and the two worst were AI. That said, I read a lot of flash fiction, and none of these stories really approached good flash imo. I've also done some of my own experiments, and AI can do much better at flash than what is posted above if given more sophisticated prompting.</p>
]]></description><pubDate>Mon, 18 Aug 2025 04:08:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=44937321</link><dc:creator>unignorant</dc:creator><comments>https://news.ycombinator.com/item?id=44937321</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44937321</guid></item><item><title><![CDATA[An Ultra Opinionated Guide to Reinforcement Learning]]></title><description><![CDATA[
<p>Article URL: <a href="https://twitter.com/jsuarez5341/status/1943692998975402064">https://twitter.com/jsuarez5341/status/1943692998975402064</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44551641">https://news.ycombinator.com/item?id=44551641</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Sun, 13 Jul 2025 16:43:33 +0000</pubDate><link>https://twitter.com/jsuarez5341/status/1943692998975402064</link><dc:creator>unignorant</dc:creator><comments>https://news.ycombinator.com/item?id=44551641</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44551641</guid></item><item><title><![CDATA[New comment by unignorant in "The cultural decline of literary fiction"]]></title><description><![CDATA[
<p>I really enjoyed this article but the claim of no literary fiction making the Publishers Weekly yearly top 10 lists since 2001 isn't really true:<p><a href="https://en.wikipedia.org/wiki/Publishers_Weekly_list_of_bestselling_novels_in_the_United_States_in_the_2020s" rel="nofollow">https://en.wikipedia.org/wiki/Publishers_Weekly_list_of_best...</a><p><a href="https://en.wikipedia.org/wiki/Publishers_Weekly_list_of_bestselling_novels_in_the_United_States_in_the_2010s" rel="nofollow">https://en.wikipedia.org/wiki/Publishers_Weekly_list_of_best...</a><p>It is true that there isn't that <i>much</i> literary stuff that breaks through, and the stuff that does is usually somewhat crossover (e.g., All the Light We Cannot See in 2015 or Song of Achilles in 2021) but it exists. These two books are shelved under literary codes (though also historical). Song of Achilles in particular is beautifully written and a personal favorite of mine, at least among books published in recent years.<p>Then there are other works like Little Fires Everywhere and The Midnight Library that I might not consider super literary but nonetheless are also often considered so by book shops or libraries (e.g., <a href="https://lightsailed.com/catalog/book/the-midnight-library-a-novel-haig-matt/9780525559481/?utm_source=chatgpt.com" rel="nofollow">https://lightsailed.com/catalog/book/the-midnight-library-a-...</a>; the lit fic code is FIC019000).<p>I was really surprised that Ferrante's Neapolitan series, the best example (I would have thought) of recent work with both high literary acclaim and popular appeal, did not actually make the top 10 list for any year.</p>
]]></description><pubDate>Sun, 22 Jun 2025 21:13:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=44350335</link><dc:creator>unignorant</dc:creator><comments>https://news.ycombinator.com/item?id=44350335</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44350335</guid></item><item><title><![CDATA[New comment by unignorant in "Surprisingly fast AI-generated kernels we didn't mean to publish yet"]]></title><description><![CDATA[
<p>Yeah, it seems likely the underlying task here (one reasoning step away) was: replace as many fp32 operations as possible in this kernel with fp16. I'm not sure exactly how challenging a port like that is, but intuitively it seems a bit less impressive.<p>Maybe this intuition is wrong, but if so it would be great for the work to address it explicitly!</p>
]]></description><pubDate>Fri, 30 May 2025 23:16:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=44140626</link><dc:creator>unignorant</dc:creator><comments>https://news.ycombinator.com/item?id=44140626</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44140626</guid></item><item><title><![CDATA[New comment by unignorant in "I think it's time to give Nix a chance"]]></title><description><![CDATA[
<p>I do a lot of ML work too and recently gave NixOS a try. It's actually not too hard to just use conda/miniconda/micromamba to manage Python environments as you would on any other Linux system, with only a few lines of configuration: pretty much just add micromamba to your configuration.nix plus a few lines of config for nix-ld. Many other Python/ML projects are set up to use Docker, and that's another easy option.<p>I don't have the time or desire to switch all my Python/ML work to more conventional Nix, and haven't really had any issues so far.</p>
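Concretely, the configuration.nix additions look roughly like this (a sketch: `programs.nix-ld.*` and `pkgs.micromamba` are real NixOS options/packages, but the exact library list depends on your ML stack):

```nix
{ pkgs, ... }:
{
  # Make micromamba available system-wide for managing Python envs.
  environment.systemPackages = [ pkgs.micromamba ];

  # nix-ld lets conda-installed binaries find the dynamic linker and
  # shared libraries at the conventional FHS paths they expect.
  programs.nix-ld.enable = true;
  programs.nix-ld.libraries = with pkgs; [
    stdenv.cc.cc  # libstdc++, needed by many Python wheels
    zlib
  ];
}
```

After a rebuild, `micromamba create`/`activate` work much as they would on Ubuntu or Fedora.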
]]></description><pubDate>Mon, 26 May 2025 18:00:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=44099894</link><dc:creator>unignorant</dc:creator><comments>https://news.ycombinator.com/item?id=44099894</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44099894</guid></item><item><title><![CDATA[New comment by unignorant in "AlphaEvolve: A Gemini-powered coding agent for designing advanced algorithms"]]></title><description><![CDATA[
<p>This technique doesn't actually use RL at all! There's no policy-gradient training, value function, or self-play RL loop like in AlphaZero/AlphaTensor/AlphaDev.<p>As far as I can tell, the weights of the LLM are not modified. They select candidates via an evolutionary algorithm and place them in the LLM prompt, which the LLM then remixes into new candidates. This process iterates like a typical evolutionary algorithm.</p>
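My reading of the loop, as a Python sketch (function names are illustrative, not AlphaEvolve's actual API; `mutate` stands in for an LLM call that remixes a parent candidate presented in its prompt):

```python
import random

def evolve(seed_programs, mutate, score, generations=10, pop=20):
    """Evolutionary search over candidate programs; no model weights
    are ever updated, only the candidate pool."""
    population = list(seed_programs)
    for _ in range(generations):
        ranked = sorted(population, key=score, reverse=True)
        parents = ranked[: max(1, pop // 4)]   # keep the best (elitism)
        children = [mutate(random.choice(parents)) for _ in range(pop)]
        population = parents + children        # survivors + new variants
    return max(population, key=score)
```

All of the "learning" lives in the evaluator `score` and in whatever context the prompt carries between generations, which is exactly why it isn't RL in the AlphaZero sense.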
]]></description><pubDate>Wed, 14 May 2025 17:20:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=43987021</link><dc:creator>unignorant</dc:creator><comments>https://news.ycombinator.com/item?id=43987021</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43987021</guid></item><item><title><![CDATA[NSA: Hardware-Aligned and Natively Trainable Sparse Attention]]></title><description><![CDATA[
<p>Article URL: <a href="https://arxiv.org/abs/2502.11089">https://arxiv.org/abs/2502.11089</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=43098140">https://news.ycombinator.com/item?id=43098140</a></p>
<p>Points: 4</p>
<p># Comments: 2</p>
]]></description><pubDate>Wed, 19 Feb 2025 03:12:01 +0000</pubDate><link>https://arxiv.org/abs/2502.11089</link><dc:creator>unignorant</dc:creator><comments>https://news.ycombinator.com/item?id=43098140</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43098140</guid></item><item><title><![CDATA[K Prize: $1M for an AI that can close 90% of new GitHub issues]]></title><description><![CDATA[
<p>Article URL: <a href="https://kprize.ai">https://kprize.ai</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=42403375">https://news.ycombinator.com/item?id=42403375</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Thu, 12 Dec 2024 21:11:42 +0000</pubDate><link>https://kprize.ai</link><dc:creator>unignorant</dc:creator><comments>https://news.ycombinator.com/item?id=42403375</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42403375</guid></item><item><title><![CDATA[New comment by unignorant in "Show HN: My AI writing assistant for Chinese"]]></title><description><![CDATA[
<p>Thanks for sharing this! I occasionally use Google Translate and/or GPT-4 for similar purposes, but your tool makes the workflow a bit simpler.<p>I've found creative writing in a target language is great for learning.</p>
]]></description><pubDate>Sat, 09 Mar 2024 05:25:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=39649601</link><dc:creator>unignorant</dc:creator><comments>https://news.ycombinator.com/item?id=39649601</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39649601</guid></item><item><title><![CDATA[Claude 3 translates a low-resource language from a few thousand examples]]></title><description><![CDATA[
<p>Article URL: <a href="https://twitter.com/hahahahohohe/status/1765088860592394250">https://twitter.com/hahahahohohe/status/1765088860592394250</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=39608434">https://news.ycombinator.com/item?id=39608434</a></p>
<p>Points: 39</p>
<p># Comments: 8</p>
]]></description><pubDate>Tue, 05 Mar 2024 19:56:58 +0000</pubDate><link>https://twitter.com/hahahahohohe/status/1765088860592394250</link><dc:creator>unignorant</dc:creator><comments>https://news.ycombinator.com/item?id=39608434</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39608434</guid></item><item><title><![CDATA[New comment by unignorant in "AI Generated Podcast Platform"]]></title><description><![CDATA[
<p>I sort of similarly used LLMs and speech synthesis tools to make a prototype that could generate short (<10min) podcasts in Mandarin on any topic I specified. Being interesting is less important in a language learning context, though it's notable that I haven't used the tool much and prefer listening to Mandarin audiobooks and real human podcasts, perhaps because they are more interesting.</p>
]]></description><pubDate>Wed, 11 Oct 2023 19:11:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=37848939</link><dc:creator>unignorant</dc:creator><comments>https://news.ycombinator.com/item?id=37848939</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37848939</guid></item></channel></rss>