<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: RoboTeddy</title><link>https://news.ycombinator.com/user?id=RoboTeddy</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 23 Apr 2026 10:36:59 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=RoboTeddy" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by RoboTeddy in "Teen mathematicians tie knots through a mind-blowing fractal"]]></title><description><![CDATA[
<p>Quanta Magazine consistently explains mathematics and physics for an advanced lay audience without badly oversimplifying, while still exposing you to the real ideas. It's really nice! I don't know of any other source like it.</p>
]]></description><pubDate>Tue, 26 Nov 2024 23:40:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=42251368</link><dc:creator>RoboTeddy</dc:creator><comments>https://news.ycombinator.com/item?id=42251368</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42251368</guid></item><item><title><![CDATA[New comment by RoboTeddy in "The vagus nerve orchestrates the mind-body connection"]]></title><description><![CDATA[
<p>>  My hope is that my own cadaver is ripped apart by somebody as crazy/appreciative as me =D<p><a href="https://meded.ucsf.edu/willed-body-program" rel="nofollow">https://meded.ucsf.edu/willed-body-program</a> :)</p>
]]></description><pubDate>Sun, 01 Sep 2024 18:01:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=41418875</link><dc:creator>RoboTeddy</dc:creator><comments>https://news.ycombinator.com/item?id=41418875</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41418875</guid></item><item><title><![CDATA[New comment by RoboTeddy in "Show HN: InstantDB – A Modern Firebase"]]></title><description><![CDATA[
<p>Nice!<p>Re: favorite systems for migrations: not really. I've always just gone without one, or rolled my own. Desiderata:<p>* fully atomic (everything goes through, or nothing does)<p>* low-boilerplate<p>* can include execution of arbitrary application code; query-only migrations feel limiting<p>* painless with multiple developers, several of whom might be writing migrations concurrently</p>
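<p>For illustration, a minimal sketch of a runner meeting these desiderata, using stdlib sqlite3 so it is self-contained (table and function names are made up). Migrations are plain Python functions applied in registration order inside one explicit transaction, so a failure rolls everything back:</p>

```python
import sqlite3

MIGRATIONS = []

def migration(fn):
    """Register a migration; registration order is the application order."""
    MIGRATIONS.append(fn)
    return fn

@migration
def m001_create_users(db):
    db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")

@migration
def m002_backfill(db):
    # arbitrary application code, not just schema DDL
    db.execute("INSERT INTO users (email) VALUES (?)", ("a@example.com",))

def migrate(db):
    db.isolation_level = None  # autocommit mode; we manage the transaction ourselves
    db.execute("BEGIN")
    try:
        db.execute("CREATE TABLE IF NOT EXISTS schema_migrations (name TEXT PRIMARY KEY)")
        applied = {row[0] for row in db.execute("SELECT name FROM schema_migrations")}
        for fn in MIGRATIONS:
            if fn.__name__ not in applied:
                fn(db)
                db.execute("INSERT INTO schema_migrations (name) VALUES (?)", (fn.__name__,))
        db.execute("COMMIT")   # all-or-nothing: nothing is visible until here
    except Exception:
        db.execute("ROLLBACK")
        raise
```

<p>SQLite makes DDL transactional inside an explicit BEGIN, which is what gives the all-or-nothing property; Postgres has the same property, while MySQL notably does not.</p>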
]]></description><pubDate>Sat, 24 Aug 2024 19:36:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=41340835</link><dc:creator>RoboTeddy</dc:creator><comments>https://news.ycombinator.com/item?id=41340835</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41340835</guid></item><item><title><![CDATA[New comment by RoboTeddy in "Show HN: InstantDB – A Modern Firebase"]]></title><description><![CDATA[
<p>(1) This is awesome. Feels like this wraps enough complexity that it won't just be a toy / for prototyping.<p>(2) When a schema is provided, is it fully enforced? Is there a way to do migrations?<p>Migrations are the only remaining challenge I can think of that could screw up this tool long-term unless a good approach gets baked in early. (They're critically important, and very often done poorly or not supported.) When you're dealing with a lot of data in a production app, you definitely want some means of making schema changes safely. It also matters for devex when working on a project with multiple people: you need a way to sync migrations across developers.<p>Stuff like scalability doesn't worry me; this tool seems fundamentally possible to scale, and your team is smart :) Migrations, though... hope you focus on them early if you haven't yet!</p>
]]></description><pubDate>Sat, 24 Aug 2024 18:18:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=41340193</link><dc:creator>RoboTeddy</dc:creator><comments>https://news.ycombinator.com/item?id=41340193</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41340193</guid></item><item><title><![CDATA[New comment by RoboTeddy in "Show HN: InstantDB – A Modern Firebase"]]></title><description><![CDATA[
<p>I haven’t seen a better solution than remolacha’s #2 (create separate temporary state for the form).<p>Forms inherently can have partially-finished/invalid states, and it feels wrong to try to corral model objects into carrying intermediate/invalid data for them (in some cases it won’t work at all, e.g. if a single form field is parsed into structured data in the model).</p>
]]></description><pubDate>Fri, 23 Aug 2024 17:05:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=41330744</link><dc:creator>RoboTeddy</dc:creator><comments>https://news.ycombinator.com/item?id=41330744</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41330744</guid></item><item><title><![CDATA[Ask HN: Pragmatic way to avoid supply chain attacks as a developer]]></title><description><![CDATA[
<p>In the usual course of writing software, it's common to install huge dependency chains (npm, pypi), and any vulnerable package could spell doom. There's some nasty stuff out there, like https://pytorch.org/blog/compromised-nightly-dependency/ which uploaded people's SSH keys to the attacker.<p>It's easy to say just "use containers" or "use VMs" — but are there pragmatic workflows for doing these things that don't suffer from too many performance problems or general pain/inconvenience?<p>Are containers the way to go, or VMs? Which virtualization software? Is it best to use one isolated environment per project no matter how small, or for convenience's sake have a grab-bag VM that contains many projects all of which are low value?<p>Theorycrafting is welcome, but I'm particularly interested in hearing from anyone who has made this work well in practice.</p>
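<p>For concreteness, the per-project container workflow can be as small as mounting only the project directory into a throwaway container (the image tag and paths here are illustrative, not a recommendation):</p>

```shell
# Install and build inside a disposable container that can see only the
# project directory -- SSH keys and the rest of $HOME stay invisible.
docker run --rm -it -v "$PWD":/work -w /work node:20 \
  sh -c "npm install && npm test"
```

<p>The trade-offs the post asks about show up exactly here: bind-mount I/O overhead (notable on macOS), and the inconvenience of maintaining one image per toolchain versus a shared grab-bag image.</p>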
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=41259900">https://news.ycombinator.com/item?id=41259900</a></p>
<p>Points: 27</p>
<p># Comments: 21</p>
]]></description><pubDate>Thu, 15 Aug 2024 20:15:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=41259900</link><dc:creator>RoboTeddy</dc:creator><comments>https://news.ycombinator.com/item?id=41259900</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41259900</guid></item><item><title><![CDATA[New comment by RoboTeddy in "Aboriginal ritual passed down over 12,000 years, cave find shows"]]></title><description><![CDATA[
<p>>  slightly charred ends of the sticks had been cut specially to stick into the fire, and both were coated in human or animal fat.<p>Wait, what? Where did they get human fat from…?</p>
]]></description><pubDate>Wed, 03 Jul 2024 13:16:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=40865680</link><dc:creator>RoboTeddy</dc:creator><comments>https://news.ycombinator.com/item?id=40865680</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40865680</guid></item><item><title><![CDATA[New comment by RoboTeddy in "The hikikomori in Asia: A life within four walls"]]></title><description><![CDATA[
<p>In all the cases in the article it looks like shame plays a big role. I wonder if hikikomori is caused by a loop of [adverse circumstances that cause the person to feel shame] -> withdrawal to avoid shame -> being ashamed of having withdrawn [loop]</p>
]]></description><pubDate>Sat, 25 May 2024 15:34:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=40475637</link><dc:creator>RoboTeddy</dc:creator><comments>https://news.ycombinator.com/item?id=40475637</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40475637</guid></item><item><title><![CDATA[New comment by RoboTeddy in "Hacking on PostgreSQL Is Hard"]]></title><description><![CDATA[
<p>How’s PostgreSQL’s code quality? If projects have tons of technical debt or poor abstractions it can often be hard to make significant changes. Is that the case here, or no?</p>
]]></description><pubDate>Sun, 05 May 2024 12:53:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=40264507</link><dc:creator>RoboTeddy</dc:creator><comments>https://news.ycombinator.com/item?id=40264507</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40264507</guid></item><item><title><![CDATA[Bounty: Diverse hard tasks for LLM agents]]></title><description><![CDATA[
<p>Article URL: <a href="https://metr.org/blog/2023-12-16-bounty-diverse-hard-tasks-for-llm-agents/">https://metr.org/blog/2023-12-16-bounty-diverse-hard-tasks-for-llm-agents/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=39064645">https://news.ycombinator.com/item?id=39064645</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Sat, 20 Jan 2024 04:21:08 +0000</pubDate><link>https://metr.org/blog/2023-12-16-bounty-diverse-hard-tasks-for-llm-agents/</link><dc:creator>RoboTeddy</dc:creator><comments>https://news.ycombinator.com/item?id=39064645</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39064645</guid></item><item><title><![CDATA[New comment by RoboTeddy in "On Sleeper Agent LLMs"]]></title><description><![CDATA[
<p>It was certainly an unresolved question before they did this work!<p>Naively, it seems reasonable to believe that if you adjust all the weights of a neural net toward the behavior you want via SFT and RLHF, the result would compete with/mute/obscure undesired behavior like a backdoor. But it seems not to be so… Indeed, the cute mask does not cover the entire shoggoth; it may still have tentacles (<a href="https://images.app.goo.gl/YW9g3BvwGqGwYTgd6" rel="nofollow">https://images.app.goo.gl/YW9g3BvwGqGwYTgd6</a>)</p>
]]></description><pubDate>Sun, 14 Jan 2024 17:02:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=38992100</link><dc:creator>RoboTeddy</dc:creator><comments>https://news.ycombinator.com/item?id=38992100</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38992100</guid></item><item><title><![CDATA[New comment by RoboTeddy in "On Sleeper Agent LLMs"]]></title><description><![CDATA[
<p>An LLM can be executed in an “OODA” loop like in AutoGPT and given a goal towards which it takes agentic actions, especially if the LLM is fine-tuned for function calling. So, it can be the main component of an agent that does have goals/de-facto motives! The wrapper code can just be a couple hundred lines.<p>AutoGPT itself is pretty weak, but it’s possible to write wrapper code that leads to stronger agency. Also, agents formed this way with GPT4 are way stronger than with GPT3.5… so expect this trend to continue.</p>
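<p>For concreteness, a minimal sketch of such a wrapper loop. The one-JSON-action-per-turn protocol, <code>call_llm</code>, and the <code>tools</code> dict are illustrative assumptions, not AutoGPT's actual interface:</p>

```python
import json

def run_agent(call_llm, tools, goal, max_steps=10):
    """Minimal agentic loop: ask the model for an action, execute it, feed back the result."""
    history = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        reply = call_llm(history)            # model replies with one JSON action
        action = json.loads(reply)
        if action["tool"] == "finish":
            return action["args"]["answer"]
        observation = tools[action["tool"]](**action["args"])  # act on the world
        history.append({"role": "user", "content": f"observation: {observation}"})
    raise RuntimeError("goal not reached within max_steps")
```

<p>Real wrappers add a system prompt describing the available tools, retries for malformed JSON, and memory management for long histories, but the agency comes from this loop, not from the model alone.</p>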
]]></description><pubDate>Sun, 14 Jan 2024 16:59:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=38992071</link><dc:creator>RoboTeddy</dc:creator><comments>https://news.ycombinator.com/item?id=38992071</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38992071</guid></item><item><title><![CDATA[New comment by RoboTeddy in "TSA Policy on Light Sabers"]]></title><description><![CDATA[
<p>Hmm wonder if that applies to <a href="https://www.hacksmith.tech/lightsaber" rel="nofollow noreferrer">https://www.hacksmith.tech/lightsaber</a> :D</p>
]]></description><pubDate>Sat, 23 Sep 2023 17:09:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=37625172</link><dc:creator>RoboTeddy</dc:creator><comments>https://news.ycombinator.com/item?id=37625172</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37625172</guid></item><item><title><![CDATA[New comment by RoboTeddy in "The Myth of AI Omniscience: AI's Epistemological Limits"]]></title><description><![CDATA[
<p>> But the ways in which a LLM can “talk about” the universe (and everything it contains) are limited to the ways in which humans have previously talked about the universe.<p>This is often said, but it isn't so.<p>The task of predicting the next token in human speech really well requires immense intelligence — potentially far more intelligence than possessed by the original speaker! Imagine yourself engaging in the task of listening to someone who isn't that smart speak and then trying to figure out what they'll say next — in doing so, you might make all sorts of extrapolations about the person, their motivations, their manner, their dialect, etc — calling on all sorts of internal models that you've built up about people over time. This is what models are being trained to do when we train them on predicting tokens.<p>There are concrete examples of models inventing new ways of thinking that are not described in their training set. For example, when training a transformer from scratch to perform addition mod P (and having no training data other than examples of addition mod P), the transformer was able to <i>discover</i> the use of discrete fourier transforms and trigonometric identities [1]. As we can see, neural nets can build all sorts of internal mental models that no one explained to them beforehand. These internal mental models can then be elicited and used for other purposes by e.g. fine-tuning.<p>I think a good mental model for transformers/neural nets is that they're automatic scientists. They figure out ways of modeling things in order to predict the output from the input — which is what scientists do! 
As part of this, they can de-facto discover new theories, and come to rely on the theories that prove useful in their prediction task.<p>Also, not all tokens in the training set are from human speech, so models are being trained to model all manner of data-generating processes.<p>[1] <a href="https://arxiv.org/pdf/2301.05217.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/2301.05217.pdf</a></p>
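<p>The mechanism the cited paper reverse-engineers can be hand-checked: encode each residue as an angle on the unit circle, and addition mod P becomes multiplication of complex exponentials. A sketch (this verifies the trigonometric identity directly; it is not the learned network):</p>

```python
import cmath

P = 113  # the modulus used in the cited paper

def encode(x):
    """Represent residue x as a point on the unit circle."""
    return cmath.exp(2j * cmath.pi * x / P)

def add_mod_p(a, b):
    # exp(2*pi*i*a/P) * exp(2*pi*i*b/P) = exp(2*pi*i*(a+b)/P):
    # multiplying the exponentials adds the angles, i.e. addition mod P
    angle = cmath.phase(encode(a) * encode(b))
    return round(angle * P / (2 * cmath.pi)) % P
```

<p>The final <code>% P</code> handles <code>cmath.phase</code> returning angles in (-pi, pi]. The trained transformer in the paper effectively learned this encoding and the product-to-sum identities on its own.</p>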
]]></description><pubDate>Sat, 05 Aug 2023 16:55:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=37013981</link><dc:creator>RoboTeddy</dc:creator><comments>https://news.ycombinator.com/item?id=37013981</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37013981</guid></item><item><title><![CDATA[New comment by RoboTeddy in "Douglas Hofstadter changes his mind on Deep Learning and AI risk"]]></title><description><![CDATA[
<p>> but LLMs can’t do anything, really<p>Do you think <a href="https://github.com/Significant-Gravitas/Auto-GPT">https://github.com/Significant-Gravitas/Auto-GPT</a> et al will become more performant as models improve?</p>
]]></description><pubDate>Tue, 04 Jul 2023 05:42:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=36582630</link><dc:creator>RoboTeddy</dc:creator><comments>https://news.ycombinator.com/item?id=36582630</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36582630</guid></item><item><title><![CDATA[New comment by RoboTeddy in "Bluesky facing degraded performance due to record high traffic"]]></title><description><![CDATA[
<p>Happen to have one more? thank you either way!</p>
]]></description><pubDate>Sat, 01 Jul 2023 20:12:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=36554602</link><dc:creator>RoboTeddy</dc:creator><comments>https://news.ycombinator.com/item?id=36554602</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36554602</guid></item><item><title><![CDATA[New comment by RoboTeddy in "M5.4 Earthquake – Northern California"]]></title><description><![CDATA[
<p>Is it me, or have there been more earthquakes in the Bay Area lately?</p>
]]></description><pubDate>Fri, 12 May 2023 00:03:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=35910081</link><dc:creator>RoboTeddy</dc:creator><comments>https://news.ycombinator.com/item?id=35910081</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35910081</guid></item><item><title><![CDATA[New comment by RoboTeddy in "Llama.cpp: Port of Facebook's LLaMA model in C/C++, with Apple Silicon support"]]></title><description><![CDATA[
<p>LLaMA isn't built on RLHF, so it may be necessary to create a more extensive prompt. For example:</p><pre>
You are a super intelligent honest question-answering system.

Q: What's 2+2?

A: 4

Q: What color is the sky?

A:</pre>
]]></description><pubDate>Fri, 10 Mar 2023 23:59:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=35103125</link><dc:creator>RoboTeddy</dc:creator><comments>https://news.ycombinator.com/item?id=35103125</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35103125</guid></item><item><title><![CDATA[New comment by RoboTeddy in "Parler’s Parent Company Laid Off Nearly All of Its Employees, Only Has 20 Left"]]></title><description><![CDATA[
<p>It's true that many of the people on Jan 6 didn't view themselves as extreme and didn't realize they were being fascist — they probably mostly truly did believe they were protecting democracy — however this does not preclude them being de-facto fascists.</p>
]]></description><pubDate>Thu, 12 Jan 2023 01:28:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=34347849</link><dc:creator>RoboTeddy</dc:creator><comments>https://news.ycombinator.com/item?id=34347849</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=34347849</guid></item><item><title><![CDATA[New comment by RoboTeddy in "A civil war over semicolons: Robert Caro and his editor"]]></title><description><![CDATA[
<p>Caro understands his subjects so so well that you feel you are inside their heads.<p>Caro’s works have spoiled me — other biographies feel out-of-focus and surface-level and speculative.<p>(And all this despite the fact that Caro isn’t even really focusing directly on his subjects, because actually his books are about power and its dynamics.)</p>
]]></description><pubDate>Sun, 08 Jan 2023 18:28:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=34301658</link><dc:creator>RoboTeddy</dc:creator><comments>https://news.ycombinator.com/item?id=34301658</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=34301658</guid></item></channel></rss>