<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: dmvaldman</title><link>https://news.ycombinator.com/user?id=dmvaldman</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 13 Apr 2026 08:39:04 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=dmvaldman" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by dmvaldman in "Ask HN: What Are You Working On? (April 2026)"]]></title><description><![CDATA[
<p>A ring you can talk into that controls an agent on your phone. E.g., say "pick me up" and an Uber arrives.<p>Looking for people who know hardware well. Let's get to know one another on a flight to Shenzhen :P</p>
]]></description><pubDate>Sun, 12 Apr 2026 21:28:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47744718</link><dc:creator>dmvaldman</dc:creator><comments>https://news.ycombinator.com/item?id=47744718</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47744718</guid></item><item><title><![CDATA[New comment by dmvaldman in "Training my smartwatch to track intelligence"]]></title><description><![CDATA[
<p>may the REM sleep gods light your path</p>
]]></description><pubDate>Fri, 16 Jan 2026 19:27:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=46650938</link><dc:creator>dmvaldman</dc:creator><comments>https://news.ycombinator.com/item?id=46650938</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46650938</guid></item><item><title><![CDATA[New comment by dmvaldman in "Training my smartwatch to track intelligence"]]></title><description><![CDATA[
<p>thanks for catching this!</p>
]]></description><pubDate>Fri, 16 Jan 2026 19:27:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=46650927</link><dc:creator>dmvaldman</dc:creator><comments>https://news.ycombinator.com/item?id=46650927</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46650927</guid></item><item><title><![CDATA[New comment by dmvaldman in "Training my smartwatch to track intelligence"]]></title><description><![CDATA[
<p>1000%</p>
]]></description><pubDate>Fri, 16 Jan 2026 19:24:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=46650899</link><dc:creator>dmvaldman</dc:creator><comments>https://news.ycombinator.com/item?id=46650899</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46650899</guid></item><item><title><![CDATA[The Alignment Game (2023)]]></title><description><![CDATA[
<p><a href="https://docs.google.com/spreadsheets/d/1BYh9ZtEv4k7xoSXmtf1qCP8bYHBCZLEuTVHsTDPQM1M/edit?gid=2033972304#gid=2033972304" rel="nofollow">https://docs.google.com/spreadsheets/d/1BYh9ZtEv4k7xoSXmtf1q...</a></p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46641216">https://news.ycombinator.com/item?id=46641216</a></p>
<p>Points: 55</p>
<p># Comments: 9</p>
]]></description><pubDate>Thu, 15 Jan 2026 23:56:26 +0000</pubDate><link>https://dmvaldman.github.io/alignment-game/</link><dc:creator>dmvaldman</dc:creator><comments>https://news.ycombinator.com/item?id=46641216</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46641216</guid></item><item><title><![CDATA[New comment by dmvaldman in "Training my smartwatch to track intelligence"]]></title><description><![CDATA[
<p>it's a lot of work, but something you could do is track how you feel (manually or some other way) and do a similar statistical analysis. chess elo was just convenient and aligned for me.</p>
]]></description><pubDate>Wed, 14 Jan 2026 23:32:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=46625595</link><dc:creator>dmvaldman</dc:creator><comments>https://news.ycombinator.com/item?id=46625595</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46625595</guid></item><item><title><![CDATA[New comment by dmvaldman in "Training my smartwatch to track intelligence"]]></title><description><![CDATA[
<p>thank you! i think it's ridiculous how little they invest in their developer ecosystem. i have been thinking about jumping ship to oura or whoop simply because of this.</p>
]]></description><pubDate>Wed, 14 Jan 2026 23:30:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=46625568</link><dc:creator>dmvaldman</dc:creator><comments>https://news.ycombinator.com/item?id=46625568</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46625568</guid></item><item><title><![CDATA[Training my smartwatch to track intelligence]]></title><description><![CDATA[
<p>Article URL: <a href="https://dmvaldman.github.io/rooklift/">https://dmvaldman.github.io/rooklift/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46624563">https://news.ycombinator.com/item?id=46624563</a></p>
<p>Points: 156</p>
<p># Comments: 68</p>
]]></description><pubDate>Wed, 14 Jan 2026 22:19:36 +0000</pubDate><link>https://dmvaldman.github.io/rooklift/</link><dc:creator>dmvaldman</dc:creator><comments>https://news.ycombinator.com/item?id=46624563</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46624563</guid></item><item><title><![CDATA[New comment by dmvaldman in "How to code Claude Code in 200 lines of code"]]></title><description><![CDATA[
<p>This misses that agentic LLMs are trained via RL to use specific tools. Custom tools added later underperform the ones the model was trained with. That's why Claude Code has an advantage over, say, Cursor: it is vertically integrated.</p>
]]></description><pubDate>Fri, 09 Jan 2026 01:55:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=46549213</link><dc:creator>dmvaldman</dc:creator><comments>https://news.ycombinator.com/item?id=46549213</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46549213</guid></item><item><title><![CDATA[A Game to Align People and Priorities]]></title><description><![CDATA[
<p>Article URL: <a href="https://docs.google.com/spreadsheets/d/1BYh9ZtEv4k7xoSXmtf1qCP8bYHBCZLEuTVHsTDPQM1M/edit">https://docs.google.com/spreadsheets/d/1BYh9ZtEv4k7xoSXmtf1qCP8bYHBCZLEuTVHsTDPQM1M/edit</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=37685547">https://news.ycombinator.com/item?id=37685547</a></p>
<p>Points: 3</p>
<p># Comments: 1</p>
]]></description><pubDate>Thu, 28 Sep 2023 05:25:15 +0000</pubDate><link>https://docs.google.com/spreadsheets/d/1BYh9ZtEv4k7xoSXmtf1qCP8bYHBCZLEuTVHsTDPQM1M/edit</link><dc:creator>dmvaldman</dc:creator><comments>https://news.ycombinator.com/item?id=37685547</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37685547</guid></item><item><title><![CDATA[Leaving the Company I Co-founded]]></title><description><![CDATA[
<p>Article URL: <a href="https://dmvaldman.github.io/">https://dmvaldman.github.io/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=30711353">https://news.ycombinator.com/item?id=30711353</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Thu, 17 Mar 2022 14:11:19 +0000</pubDate><link>https://dmvaldman.github.io/</link><dc:creator>dmvaldman</dc:creator><comments>https://news.ycombinator.com/item?id=30711353</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=30711353</guid></item><item><title><![CDATA[New comment by dmvaldman in "In defense of flat earthers (2020)"]]></title><description><![CDATA[
<p>i have a different take. i'm very glad flat earthers exist. in general, i would hope that the number of people who believe an idea is proportional to the probability of its truth, so even the wildest ideas should have some modicum of support. consider a world without this: it would, i imagine, necessarily have to be thought-policed. i believe this is the right frame for the discussion.<p>the real issue is that we have a broadcasting machine (social media, news, etc.) that runs on sensationalism, so you constantly hear about fringe ideas with no signal of how large the population supporting them actually is.</p>
]]></description><pubDate>Tue, 18 Jan 2022 04:56:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=29975285</link><dc:creator>dmvaldman</dc:creator><comments>https://news.ycombinator.com/item?id=29975285</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=29975285</guid></item><item><title><![CDATA[New comment by dmvaldman in "StyleCLIP: Text-Driven Manipulation of StyleGAN Imagery"]]></title><description><![CDATA[
<p>language will be the next interface to software. to get software to do something, you will simply ask it. this work is an example.<p>i've been documenting this theme in a twitter thread here <a href="https://twitter.com/dmvaldman/status/1358916558857269250" rel="nofollow">https://twitter.com/dmvaldman/status/1358916558857269250</a></p>
]]></description><pubDate>Sun, 04 Apr 2021 15:52:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=26690439</link><dc:creator>dmvaldman</dc:creator><comments>https://news.ycombinator.com/item?id=26690439</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=26690439</guid></item><item><title><![CDATA[New comment by dmvaldman in "OpenAI API"]]></title><description><![CDATA[
<p>I think this is generally a good answer, but keep in mind I said AGI "in text". My forecast is that within 3 years you will be able to give arbitrary text commands and get the textual output of the equivalents of problems like "clean my house, take care of my kids, ...".<p>I also contend that there is reasoning happening, and that zero-shot learning demonstrates this: specifically, reasoning about the intent of the prompt. The fact that you get this simply by building a general-purpose text model is a surprise to me.<p>Something I haven't seen yet is a model simulating the mind of the questioner, the way humans do, over time (minutes, days, years).<p>In 3 years, I'll ping you :) Already made a calendar reminder.</p>
]]></description><pubDate>Thu, 11 Jun 2020 20:57:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=23493227</link><dc:creator>dmvaldman</dc:creator><comments>https://news.ycombinator.com/item?id=23493227</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=23493227</guid></item><item><title><![CDATA[New comment by dmvaldman in "OpenAI API"]]></title><description><![CDATA[
<p>what is the difference between zero-shot learning in text and AGI? not saying there isn't one, but can you state what it is?<p>you can express any intent in text (unlike other media), so to solve zero-shot in text is equivalent to the model responding to all intents.<p>many people have different definitions of AGI though. for me it clicked when i realized that text has this universality property of capturing any intent.</p>
]]></description><pubDate>Thu, 11 Jun 2020 19:19:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=23492388</link><dc:creator>dmvaldman</dc:creator><comments>https://news.ycombinator.com/item?id=23492388</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=23492388</guid></item><item><title><![CDATA[New comment by dmvaldman in "OpenAI API"]]></title><description><![CDATA[
<p>i think you are assuming that what is happening under the hood is that a human-inputted sentence is being parsed into a grammar. it is not.</p>
]]></description><pubDate>Thu, 11 Jun 2020 19:13:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=23492338</link><dc:creator>dmvaldman</dc:creator><comments>https://news.ycombinator.com/item?id=23492338</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=23492338</guid></item><item><title><![CDATA[New comment by dmvaldman in "OpenAI API"]]></title><description><![CDATA[
<p>Zero-shot and few-shot learning in GPT-3, and the lack of significant diminishing returns in scaling text models. Zero-shot learning is equivalent to saying "i'm just going to ask the model something it was not trained to do".</p>
]]></description><pubDate>Thu, 11 Jun 2020 19:06:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=23492265</link><dc:creator>dmvaldman</dc:creator><comments>https://news.ycombinator.com/item?id=23492265</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=23492265</guid></item><item><title><![CDATA[New comment by dmvaldman in "OpenAI API"]]></title><description><![CDATA[
<p>AGI in text is < 3yrs away.</p>
]]></description><pubDate>Thu, 11 Jun 2020 17:16:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=23491009</link><dc:creator>dmvaldman</dc:creator><comments>https://news.ycombinator.com/item?id=23491009</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=23491009</guid></item><item><title><![CDATA[New comment by dmvaldman in "A 2020 Vision of Linear Algebra"]]></title><description><![CDATA[
<p>I think these are good examples, but to me "linear algebra thinking" lies in its generality. For example, the derivative is a linear operator, so how do you write it down as a matrix? Google's PageRank is the solution of a matrix equation; what does that matrix represent? Etc.</p>
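<p>The derivative question has a concrete answer: on the coefficient basis {1, x, x^2, x^3}, differentiation acts as a matrix. A minimal numpy sketch (an illustration, not part of the original comment; the basis and truncation degree are arbitrary choices):</p>

```python
import numpy as np

# Represent a polynomial a0 + a1*x + a2*x^2 + a3*x^3 by its coefficient
# vector [a0, a1, a2, a3]. Differentiation is linear, so it is a matrix D
# acting on coefficient vectors.
n = 4  # dimension of the space of polynomials of degree < 4
D = np.zeros((n, n))
for k in range(1, n):
    D[k - 1, k] = k  # d/dx of x^k is k * x^(k-1)

# p(x) = 1 + 2x + 3x^2  ->  p'(x) = 2 + 6x
p = np.array([1.0, 2.0, 3.0, 0.0])
print(D @ p)  # [2. 6. 0. 0.]
```

<p>Note D is nilpotent (D^4 = 0 here), which mirrors the fact that differentiating a cubic four times gives zero.</p>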
]]></description><pubDate>Tue, 12 May 2020 13:49:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=23153942</link><dc:creator>dmvaldman</dc:creator><comments>https://news.ycombinator.com/item?id=23153942</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=23153942</guid></item><item><title><![CDATA[New comment by dmvaldman in "Keys.pub – Manage cryptographic keys and user identities"]]></title><description><![CDATA[
<p>80/20 rule? Make the problem simpler and deliver a better solution for it. Revisit and grow the problem space as needed.</p>
]]></description><pubDate>Mon, 27 Apr 2020 21:19:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=23000157</link><dc:creator>dmvaldman</dc:creator><comments>https://news.ycombinator.com/item?id=23000157</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=23000157</guid></item></channel></rss>