<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: abel_</title><link>https://news.ycombinator.com/user?id=abel_</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 12 Apr 2026 07:40:41 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=abel_" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by abel_ in "Small models also found the vulnerabilities that Mythos found"]]></title><description><![CDATA[
<p>This misses the broader ongoing trend. Of course, for a few million dollars you can create a startup that builds tooling to find code vulnerabilities more efficiently. And of course you can do this with weaker models wrapped in scaffolds that encode a lot of human understanding. The difference now is that you no longer need the expensive team, the pile of human heuristics, or the millions of dollars. The requisite cost and skill are falling rapidly.</p>
]]></description><pubDate>Sat, 11 Apr 2026 18:53:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47733047</link><dc:creator>abel_</dc:creator><comments>https://news.ycombinator.com/item?id=47733047</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47733047</guid></item><item><title><![CDATA[New comment by abel_ in "Imagen, a text-to-image diffusion model"]]></title><description><![CDATA[
<p>On the contrary: the opposite will happen. There's a decent body of research showing that simply training foundation models on their own outputs amplifies their capabilities.<p>Less common opinion: this is also how you end up with models that understand the concept of themselves, which has high economic value.<p>Even less common opinion: that's really dangerous.</p>
]]></description><pubDate>Tue, 24 May 2022 10:01:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=31490057</link><dc:creator>abel_</dc:creator><comments>https://news.ycombinator.com/item?id=31490057</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=31490057</guid></item><item><title><![CDATA[New comment by abel_ in "Gato – A Generalist Agent"]]></title><description><![CDATA[
<p>This was done so that the real-world robot manipulation tasks could run fast enough. In the future, we may always need small models mixed with large ones for some tasks (e.g., large models for slow long-term planning and small models for fast short-term control), though compute does have a tendency to improve exponentially...</p>
]]></description><pubDate>Wed, 18 May 2022 00:28:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=31417850</link><dc:creator>abel_</dc:creator><comments>https://news.ycombinator.com/item?id=31417850</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=31417850</guid></item><item><title><![CDATA[New comment by abel_ in "Gato – A Generalist Agent"]]></title><description><![CDATA[
<p>There's a neat argument against these models doing interpolation: the data manifold is so sparsely sampled that it's vanishingly unlikely for a good predictor to be interpolating between existing points on it.<p><a href="https://arxiv.org/abs/2110.09485" rel="nofollow">https://arxiv.org/abs/2110.09485</a></p>
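<p>For intuition on why interpolation (in the convex-hull sense the paper uses) essentially never happens in high dimension, here's a small sketch. It is not from the paper; the helper name, sample sizes, and dimensions are my own illustration. Hull membership is tested as a feasibility LP: x is in the hull iff it's a convex combination of the training points.

```python
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(x, points):
    """Feasibility LP: is x a convex combination of the rows of `points`?"""
    n = len(points)
    # Constraints: points.T @ lam = x, sum(lam) = 1, lam >= 0
    A_eq = np.vstack([points.T, np.ones((1, n))])
    b_eq = np.append(x, 1.0)
    res = linprog(np.zeros(n), A_eq=A_eq, b_eq=b_eq, bounds=[(0.0, None)] * n)
    return res.success

rng = np.random.default_rng(0)
inside_frac = {}
for dim in (2, 32):
    train = rng.standard_normal((500, dim))
    test = rng.standard_normal((100, dim))
    inside_frac[dim] = np.mean([in_convex_hull(x, train) for x in test])
```

With a few hundred Gaussian samples, fresh points in 2-D almost always land inside the training hull, while in 32-D they essentially never do — you'd need a number of samples exponential in the dimension, which is the paper's core observation.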
]]></description><pubDate>Wed, 18 May 2022 00:23:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=31417808</link><dc:creator>abel_</dc:creator><comments>https://news.ycombinator.com/item?id=31417808</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=31417808</guid></item><item><title><![CDATA[New comment by abel_ in "Bits of advice I wish I had known"]]></title><description><![CDATA[
<p>I wonder what the net effect of writing like this is. The problem is that these abstract, contextless statements land only if they prompt the reader to reflect on some experience of their own, and so at best they mildly reinforce currently held beliefs. Otherwise, I can't see how the statements would stick for most people (not even as a cached memory).<p>What would add significantly to this is a bunch of Gwern-style links embedded within each of these quips. The author is clearly speaking from a vantage point not many others have attained, and he'd be able to provide a story or other context for each.</p>
]]></description><pubDate>Fri, 29 Apr 2022 02:19:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=31200641</link><dc:creator>abel_</dc:creator><comments>https://news.ycombinator.com/item?id=31200641</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=31200641</guid></item><item><title><![CDATA[New comment by abel_ in "You’re muted – or are you? Videoconferencing apps may listen when mic is off"]]></title><description><![CDATA[
<p>The problem with software-controlled permissions is that nation-state actors (who have effectively unbounded resources) can snoop on your private matters with significantly greater ease.<p>At least with a hardware switch, someone would have to physically capture the sound waves in the room you're in. In software, the surface for OS-level vulnerabilities is massive, and state-sponsored mass surveillance just gets easier.<p>Sadly, this is a trade-off we have made as a society for "ergonomics".</p>
]]></description><pubDate>Thu, 14 Apr 2022 00:21:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=31021967</link><dc:creator>abel_</dc:creator><comments>https://news.ycombinator.com/item?id=31021967</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=31021967</guid></item><item><title><![CDATA[New comment by abel_ in "It’s Like GPT-3 but for Code – Fun, Fast, and Full of Flaws"]]></title><description><![CDATA[
<p>While others here have touched on how Codex has changed their coding habits, what I find interesting is that it has changed the kind of code I write altogether. For example, I had to connect a database to an API a little while ago. I could have used an ORM as one normally would. Instead, I just wrote out all the SQL commands and wrapper functions in one big file. Since it was all tedious and predictable, Codex helped me write it in just a few minutes, and I didn't need to muck around with a complex ORM. These are the trade-offs I'm personally excited about.</p>
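<p>To make that concrete, the file was mostly wrappers of this shape (the table, columns, and function names below are made up for illustration, not my actual schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")

def insert_user(conn, name, email):
    # Tedious, predictable boilerplate -- exactly what Codex autocompletes well
    cur = conn.execute("INSERT INTO users (name, email) VALUES (?, ?)", (name, email))
    conn.commit()
    return cur.lastrowid

def get_user_by_email(conn, email):
    row = conn.execute(
        "SELECT id, name, email FROM users WHERE email = ?", (email,)
    ).fetchone()
    return None if row is None else {"id": row[0], "name": row[1], "email": row[2]}
```

One of these per query is boring to type but trivial to autocomplete, and you keep full visibility into the SQL instead of learning an ORM's query language.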
]]></description><pubDate>Sun, 20 Mar 2022 02:29:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=30739489</link><dc:creator>abel_</dc:creator><comments>https://news.ycombinator.com/item?id=30739489</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=30739489</guid></item><item><title><![CDATA[New comment by abel_ in "It’s Like GPT-3 but for Code – Fun, Fast, and Full of Flaws"]]></title><description><![CDATA[
<p>As someone else already mentioned, the scaling laws tell a different story empirically: we haven't hit diminishing returns at all, and there's no end in sight.<p>More anecdotally, one of the first applied neural network papers, LeCun's 1989 digit-recognition work, has pretty much the same shape as the GPT paper: a large neural network trained on a large dataset (both relative to the era). <a href="https://karpathy.github.io/2022/03/14/lecun1989/" rel="nofollow">https://karpathy.github.io/2022/03/14/lecun1989/</a><p>It really does seem there's a certain number of FLOPs you need before certain capabilities can emerge.</p>
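<p>Concretely, the scaling laws are empirical power laws in compute; a minimal sketch (the constants here are illustrative, loosely in the spirit of the Kaplan et al. 2020 fits, not exact values):

```python
def power_law_loss(compute, c_c=3.1e8, alpha=0.050):
    # Kaplan-style form L(C) = (C_c / C)**alpha: loss keeps falling smoothly
    # with compute -- diminishing per-FLOP, but with no plateau or floor.
    return (c_c / compute) ** alpha

# Loss across fourteen orders of magnitude of compute: still decreasing
losses = [power_law_loss(10.0 ** k) for k in range(10, 26, 2)]
```

On a log-log plot this is a straight line, which is why "diminishing returns" never shows up as a visible bend in the published curves.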
]]></description><pubDate>Sun, 20 Mar 2022 02:16:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=30739421</link><dc:creator>abel_</dc:creator><comments>https://news.ycombinator.com/item?id=30739421</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=30739421</guid></item><item><title><![CDATA[New comment by abel_ in "A DARPA Perspective on Artificial Intelligence [pdf]"]]></title><description><![CDATA[
<p>The most pressing dangers of AI as most researchers see them:<p>- the error rate is too high<p>- you can trick a classifier with noise<p>- it's racist sometimes<p>Actual dangers of AI:<p>- the stop-button problem<p>- the infeasibility of sandboxing<p>- the difficulty of aligning black boxes with human values</p>
]]></description><pubDate>Sun, 20 Mar 2022 00:30:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=30738838</link><dc:creator>abel_</dc:creator><comments>https://news.ycombinator.com/item?id=30738838</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=30738838</guid></item><item><title><![CDATA[New comment by abel_ in "Newly published 9/11 footage [video]"]]></title><description><![CDATA[
<p>While progress has been made in computer vision, that progress has been relatively narrow up until now, and I think the activation energy required to produce this level of quality would be more than it's worth. As others have mentioned, new footage comes out all the time.<p>However, I agree with the sentiment. Someday, we will have a massive foundation model capable of producing any video with a little conditioning on text. But we don't currently have such a model. In some sense, we're still in the era of easily verifiable video, and this era might end someday soon.</p>
]]></description><pubDate>Sat, 26 Feb 2022 13:54:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=30477569</link><dc:creator>abel_</dc:creator><comments>https://news.ycombinator.com/item?id=30477569</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=30477569</guid></item><item><title><![CDATA[New comment by abel_ in "Get better sleep – Anecdata and sleep tech"]]></title><description><![CDATA[
<p>Interesting! I hadn't heard this perspective before. Sounds like Goodharting [0] applies to sleep too.<p>[0] <a href="https://en.wikipedia.org/wiki/Goodhart%27s_law" rel="nofollow">https://en.wikipedia.org/wiki/Goodhart%27s_law</a></p>
]]></description><pubDate>Tue, 22 Feb 2022 06:48:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=30424722</link><dc:creator>abel_</dc:creator><comments>https://news.ycombinator.com/item?id=30424722</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=30424722</guid></item><item><title><![CDATA[New comment by abel_ in "Don’t point out something wrong immediately"]]></title><description><![CDATA[
<p>Completely disagree. The _delivery_ is important, yes, but the interval between discovering a problem and announcing it should be minimal.<p>There's a huge difference between maintaining the social etiquette of letting your interlocutor explain themselves fully and simply waiting a while before raising a problem. Raising it right away, in a respectful manner, also gets you a correction right away.</p>
]]></description><pubDate>Fri, 18 Feb 2022 08:54:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=30383521</link><dc:creator>abel_</dc:creator><comments>https://news.ycombinator.com/item?id=30383521</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=30383521</guid></item><item><title><![CDATA[New comment by abel_ in "Show HN: Clone your voice and speak a foreign language"]]></title><description><![CDATA[
<p>An interesting reflection is how quickly TTS/STT research has progressed. I remember reading [0] and thinking we were a long way off. And things will get much better with multi-task and multi-modal learning in the coming years (or months, really).<p>In fact, just a year after that post was written, Coqui AI started their open-source projects [1].<p>[0] <a href="https://news.ycombinator.com/item?id=22869365" rel="nofollow">https://news.ycombinator.com/item?id=22869365</a> (<a href="https://thegradient.pub/towards-an-imagenet-moment-for-speech-to-text/" rel="nofollow">https://thegradient.pub/towards-an-imagenet-moment-for-speec...</a>)<p>[1] <a href="https://star-history.com/#coqui-ai/TTS&coqui-ai/STT" rel="nofollow">https://star-history.com/#coqui-ai/TTS&coqui-ai/STT</a></p>
]]></description><pubDate>Tue, 04 Jan 2022 03:20:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=29790890</link><dc:creator>abel_</dc:creator><comments>https://news.ycombinator.com/item?id=29790890</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=29790890</guid></item><item><title><![CDATA[New comment by abel_ in "Who won the Amstel Gold Race? Human error in photo-finishes"]]></title><description><![CDATA[
<p>I think the distortion of the photo-finish camera should be considered when making these estimates. There's quite a lot of it in the shot. Narrowing down the specific camera intrinsics may be a challenge; the EXIF data of the original footage might be fruitful.<p>It might also be possible to undistort using assumptions about collinear points in the video (i.e., the finish line and signs should appear as straight lines).</p>
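<p>A sketch of the straight-line idea on synthetic data, using a one-parameter division model of radial distortion (real lens models and EXIF-derived intrinsics are messier; the distortion center and the constants here are made up for the demo):

```python
import numpy as np
from scipy.optimize import minimize_scalar

CENTER = np.array([0.5, 0.5])  # assumed distortion center, in normalized coords

def undistort(points, k):
    # One-parameter division model: r_u = r_d / (1 + k * r_d**2)
    d = points - CENTER
    r2 = (d ** 2).sum(axis=1, keepdims=True)
    return CENTER + d / (1.0 + k * r2)

def straightness_residual(points):
    # Smallest singular value of the centered points = perpendicular scatter
    # around the best-fit line (zero iff the points are exactly collinear)
    centered = points - points.mean(axis=0)
    return np.linalg.svd(centered, compute_uv=False)[-1]

def distort(points, k):
    # Exact inverse of `undistort`: solve the quadratic in r_d per point
    d = points - CENTER
    r_u = np.linalg.norm(d, axis=1, keepdims=True)
    r_d = (1.0 - np.sqrt(1.0 - 4.0 * k * r_u ** 2)) / (2.0 * k * r_u)
    return CENTER + d * (r_d / r_u)

# A straight "finish line" bent by barrel distortion, then recovered by
# searching for the k that makes it straight again.
t = np.linspace(0.0, 1.0, 50)
line = np.stack([t, np.full_like(t, 0.1)], axis=1)
observed = distort(line, k=-0.3)
fit = minimize_scalar(lambda k: straightness_residual(undistort(observed, k)),
                      bounds=(-0.8, 0.8), method="bounded")
```

In practice you'd minimize over several known-straight features jointly (finish line plus sign edges), and a single-k division model is only a rough stand-in for a proper intrinsic calibration.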
]]></description><pubDate>Thu, 29 Apr 2021 06:09:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=26977928</link><dc:creator>abel_</dc:creator><comments>https://news.ycombinator.com/item?id=26977928</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=26977928</guid></item><item><title><![CDATA[New comment by abel_ in "Thanks for the Bonus, I Quit"]]></title><description><![CDATA[
<p>While I see the value in gratitude as the author describes it, the lesson learned still rings with a certain complacency about the situation at hand. Had there been more transparency in the bonus program, there's a chance it would have been reformulated so as not to fuel internal wars, allowing for the long-term success of the company. While it's important to always exercise gratitude, it's also important to demand good incentive structures.</p>
]]></description><pubDate>Sun, 18 Apr 2021 19:58:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=26855772</link><dc:creator>abel_</dc:creator><comments>https://news.ycombinator.com/item?id=26855772</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=26855772</guid></item></channel></rss>