<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: ohxh</title><link>https://news.ycombinator.com/user?id=ohxh</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 02 May 2026 02:11:54 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=ohxh" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[We stopped hiring engineers for coding ability]]></title><description><![CDATA[
<p>Article URL: <a href="https://eliseai.com/blog/we-stopped-hiring-engineers-for-coding-ability">https://eliseai.com/blog/we-stopped-hiring-engineers-for-coding-ability</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47977434">https://news.ycombinator.com/item?id=47977434</a></p>
<p>Points: 8</p>
<p># Comments: 4</p>
]]></description><pubDate>Fri, 01 May 2026 17:23:40 +0000</pubDate><link>https://eliseai.com/blog/we-stopped-hiring-engineers-for-coding-ability</link><dc:creator>ohxh</dc:creator><comments>https://news.ycombinator.com/item?id=47977434</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47977434</guid></item><item><title><![CDATA[New comment by ohxh in "PlanetScale for Postgres is now GA"]]></title><description><![CDATA[
<p>Took a while to find on their website, but here’s a benchmark vs. AWS Aurora:<p><a href="https://planetscale.com/benchmarks/aurora" rel="nofollow">https://planetscale.com/benchmarks/aurora</a><p>Seems a bit better, but they benchmarked on a fairly small database (500 GB, on a db.r8g.xlarge)</p>
]]></description><pubDate>Mon, 22 Sep 2025 17:46:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=45336943</link><dc:creator>ohxh</dc:creator><comments>https://news.ycombinator.com/item?id=45336943</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45336943</guid></item><item><title><![CDATA[New comment by ohxh in "Ask HN: What are you actually using LLMs for in production?"]]></title><description><![CDATA[
<p>Lots of non-chatbot uses in property management. Auditing leases vs. payment ledgers. Classifying maintenance work orders. Creating work orders from inspections (photos + text). Scheduling vendors to fix these issues. Etc.</p>
]]></description><pubDate>Sat, 28 Jun 2025 16:49:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=44406135</link><dc:creator>ohxh</dc:creator><comments>https://news.ycombinator.com/item?id=44406135</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44406135</guid></item><item><title><![CDATA[New comment by ohxh in "Lateralized sleeping positions in domestic cats"]]></title><description><![CDATA[
<p>They say "thus, on average, about two-thirds of cats preferred to sleep on the left side of their body with their left shoulder down", and their image for leftward lateral bias shows this. So I guess leftward means "lying on their left side", not "curling left".<p>But, they suggest this is because "Upon awakening, a leftward sleeping position would provide a fast left visual field view of objects", which seems suspect. When my cats sleep on their left, it's their left eye that's obscured by their paw, and their right eye that has a better field of view!</p>
]]></description><pubDate>Thu, 26 Jun 2025 18:58:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=44390253</link><dc:creator>ohxh</dc:creator><comments>https://news.ycombinator.com/item?id=44390253</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44390253</guid></item><item><title><![CDATA[New comment by ohxh in "Infinite Grid of Resistors"]]></title><description><![CDATA[
<p>> Here, I don't think it's even useful to look at this problem in electronic terms<p>I always thought this problem was a funny choice for the comic, because it’s <i>not</i> that esoteric! It’s equivalent to asking about a 2D simple random walk on the lattice, which comes up fairly often. And in general the electrical-network <-> random-walk correspondence is a useful perspective too</p>
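<p>If you want to poke at that correspondence concretely, here’s a minimal sketch (my own illustration, not from the comic or the thread): compute the effective resistance between two adjacent nodes on a finite n-by-n truncation of the grid from the graph Laplacian. The infinite-grid answer for adjacent nodes is 1/2 Ω, and the Doyle–Snell form of the correspondence says a walk started at a hits b before returning to a with probability 1/(deg(a) · R_eff(a, b)).</p>
<pre><code># Sketch (not from the article): effective resistance between two adjacent
# center nodes of a finite n x n grid of 1-ohm resistors, via the Laplacian
# pseudoinverse. The infinite-grid value for adjacent nodes is 1/2 ohm, and the
# finite truncation should shrink toward it as n grows. Random-walk reading:
# a walk from a hits b before returning to a with probability
# 1 / (deg(a) * R_eff(a, b)).
import numpy as np

def grid_effective_resistance(n: int) -> float:
    idx = lambda r, c: r * n + c
    L = np.zeros((n * n, n * n))
    for r in range(n):
        for c in range(n):
            for r2, c2 in ((r, c + 1), (r + 1, c)):      # right and down neighbors
                if r2 in range(n) and c2 in range(n):     # stay inside the finite grid
                    u, v = idx(r, c), idx(r2, c2)
                    L[u, u] += 1; L[v, v] += 1
                    L[u, v] -= 1; L[v, u] -= 1
    a, b = idx(n // 2, n // 2), idx(n // 2, n // 2 + 1)   # two adjacent center nodes
    e = np.zeros(n * n)
    e[a], e[b] = 1.0, -1.0
    # R_eff(a, b) = (e_a - e_b)^T L^+ (e_a - e_b)
    return float(e @ np.linalg.pinv(L) @ e)

print(grid_effective_resistance(15))   # a bit above 0.5; approaches 1/2 as n grows
</code></pre>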
]]></description><pubDate>Sun, 15 Jun 2025 09:00:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=44281335</link><dc:creator>ohxh</dc:creator><comments>https://news.ycombinator.com/item?id=44281335</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44281335</guid></item><item><title><![CDATA[4dv.ai (4d Gaussian Splatting)]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.4dv.ai/en">https://www.4dv.ai/en</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44203642">https://news.ycombinator.com/item?id=44203642</a></p>
<p>Points: 6</p>
<p># Comments: 0</p>
]]></description><pubDate>Fri, 06 Jun 2025 18:28:33 +0000</pubDate><link>https://www.4dv.ai/en</link><dc:creator>ohxh</dc:creator><comments>https://news.ycombinator.com/item?id=44203642</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44203642</guid></item><item><title><![CDATA[New comment by ohxh in "ChatGPT Is a Gimmick"]]></title><description><![CDATA[
<p>This seems unusually shallow for the Hedgehog Review. I thought we'd largely moved on from this sort of sentimental, "I can't get good outputs, therefore nobody can" style of essay -- not to mention the water-use argument! They've published far better writing on LLMs too: see "Language Machinery" from Fall 2023 [1]<p>[1] <a href="https://hedgehogreview.com/issues/markets-and-the-good/articles/language-machinery" rel="nofollow">https://hedgehogreview.com/issues/markets-and-the-good/artic...</a></p>
]]></description><pubDate>Thu, 22 May 2025 08:11:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=44059863</link><dc:creator>ohxh</dc:creator><comments>https://news.ycombinator.com/item?id=44059863</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44059863</guid></item><item><title><![CDATA[New comment by ohxh in "Embeddings are underrated (2024)"]]></title><description><![CDATA[
<p>Johnson–Lindenstrauss lemma [1] for anyone curious. But you can only map down to k > 8 ln(N) / ε² dimensions if you want a JL transform to preserve all pairwise distances to within a 1 ± ε factor, and this is tight up to a constant factor.<p>I always wondered: if we want to preserve distances between a billion points within 10%, that would mean we need ~18k dimensions; 1% would be ~1.8 million. Is there a stronger version of the lemma for points that are well spread out? Or are embeddings really just fine with low precision for the distance?<p>[1] <a href="https://en.wikipedia.org/wiki/Johnson%E2%80%93Lindenstrauss_lemma" rel="nofollow">https://en.wikipedia.org/wiki/Johnson%E2%80%93Lindenstrauss_...</a></p>
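<p>For what it’s worth, here’s the back-of-the-envelope arithmetic behind those figures, using the common 4 ln(N) / (ε^2/2 - ε^3/3) form of the bound (a sketch of my own; the numbers land in the same ballpark as above):</p>
<pre><code># Back-of-the-envelope JL dimension check (my own sketch): roughly
# k = 4 ln(N) / (eps^2/2 - eps^3/3) dimensions suffice to preserve all pairwise
# distances among N points to within a (1 ± eps) factor.
import math

def jl_min_dim(n_points: int, eps: float) -> int:
    return math.ceil(4 * math.log(n_points) / (eps**2 / 2 - eps**3 / 3))

print(jl_min_dim(10**9, 0.10))   # roughly 18k dimensions for a billion points at 10%
print(jl_min_dim(10**9, 0.01))   # roughly 1.7 million dimensions at 1%
</code></pre>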
]]></description><pubDate>Mon, 12 May 2025 19:12:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=43966531</link><dc:creator>ohxh</dc:creator><comments>https://news.ycombinator.com/item?id=43966531</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43966531</guid></item><item><title><![CDATA[New comment by ohxh in "Does current AI represent a dead end?"]]></title><description><![CDATA[
<p>“One could offer so many examples of such categorical prophecies being quickly refuted by experience! In fact, this type of negative prediction is repeated so frequently that one might ask if it is not prompted by the very proximity of the discovery that one solemnly proclaims will never take place. In every period, any important discovery will threaten some organization of knowledge.”
<p>René Girard, <i>Things Hidden Since the Foundation of the World</i>, p. 4</p>
]]></description><pubDate>Fri, 27 Dec 2024 18:21:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=42524583</link><dc:creator>ohxh</dc:creator><comments>https://news.ycombinator.com/item?id=42524583</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42524583</guid></item><item><title><![CDATA[Matrix Multiplications Run Faster When Given "Predictable" Data]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.thonking.ai/p/strangely-matrix-multiplications">https://www.thonking.ai/p/strangely-matrix-multiplications</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=42161942">https://news.ycombinator.com/item?id=42161942</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Sun, 17 Nov 2024 04:22:50 +0000</pubDate><link>https://www.thonking.ai/p/strangely-matrix-multiplications</link><dc:creator>ohxh</dc:creator><comments>https://news.ycombinator.com/item?id=42161942</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42161942</guid></item><item><title><![CDATA[New comment by ohxh in "Goldman Sachs: AI Is overhyped, expensive, and unreliable"]]></title><description><![CDATA[
<p>Or maybe they <i>do</i> believe this, and entered into trades to express this sentiment. Now, they need the market to correct to what (they believe) is accurate, so they can take profits and free up their capital again.</p>
]]></description><pubDate>Fri, 12 Jul 2024 22:50:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=40950173</link><dc:creator>ohxh</dc:creator><comments>https://news.ycombinator.com/item?id=40950173</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40950173</guid></item><item><title><![CDATA[New comment by ohxh in "Quantum is unimportant to post-quantum"]]></title><description><![CDATA[
<p>> These are all special instances of a more general computational problem called the hidden subgroup problem. And quantum computers are good at solving the hidden subgroup problem. They’re really good at it.<p>I assume they mean the hidden subgroup problem <i>for abelian groups</i>? Later they mention short integer solutions (SIS) and learning with errors (LWE), which by my understanding both rely on the hardness of the shortest vector problem, corresponding to the hidden subgroup problem for some non-abelian groups. I haven't read into this stuff for a while, though</p>
]]></description><pubDate>Mon, 01 Jul 2024 21:57:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=40851210</link><dc:creator>ohxh</dc:creator><comments>https://news.ycombinator.com/item?id=40851210</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40851210</guid></item><item><title><![CDATA[New comment by ohxh in "Japan's Comfort Food: The Onigiri"]]></title><description><![CDATA[
<p>There’s a fun version of this that they have at 7-11 (or 7 & I) there. Probably other convenience stores too. They come in a plastic wrapper that separates the nori from the rice and filling so it doesn’t get soggy. When you pull a little tab, it somehow removes the plastic from in between without messing up the shape. Magic!</p>
]]></description><pubDate>Mon, 15 Jan 2024 17:57:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=39003858</link><dc:creator>ohxh</dc:creator><comments>https://news.ycombinator.com/item?id=39003858</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39003858</guid></item><item><title><![CDATA[The purpose of a system is what it does]]></title><description><![CDATA[
<p>Article URL: <a href="https://en.wikipedia.org/wiki/The_purpose_of_a_system_is_what_it_does">https://en.wikipedia.org/wiki/The_purpose_of_a_system_is_what_it_does</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=38974639">https://news.ycombinator.com/item?id=38974639</a></p>
<p>Points: 7</p>
<p># Comments: 3</p>
]]></description><pubDate>Fri, 12 Jan 2024 21:53:33 +0000</pubDate><link>https://en.wikipedia.org/wiki/The_purpose_of_a_system_is_what_it_does</link><dc:creator>ohxh</dc:creator><comments>https://news.ycombinator.com/item?id=38974639</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38974639</guid></item><item><title><![CDATA[Language Machinery]]></title><description><![CDATA[
<p>Article URL: <a href="https://hedgehogreview.com/issues/markets-and-the-good/articles/language-machinery">https://hedgehogreview.com/issues/markets-and-the-good/articles/language-machinery</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=38287210">https://news.ycombinator.com/item?id=38287210</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Thu, 16 Nov 2023 09:04:58 +0000</pubDate><link>https://hedgehogreview.com/issues/markets-and-the-good/articles/language-machinery</link><dc:creator>ohxh</dc:creator><comments>https://news.ycombinator.com/item?id=38287210</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38287210</guid></item><item><title><![CDATA[New comment by ohxh in "Every time you click this link, it will send you to a random Web 1.0 website"]]></title><description><![CDATA[
<p>As far as I understand, Web 1.0 is: browser makes a request -> backend delivers some HTML, with subsequent requests just for CSS/images/iframes. It also had a characteristic style, with layouts made from tables and simple but busy designs. Web 2.0 is many of the web apps you see today, where you don’t need to reload the page to fetch new content; instead asynchronous JavaScript grabs it and edits the HTML (think Gmail or Google Maps). Web 3.0 is unclear to me, but most people who use the term seem to mean decentralized or peer-to-peer applications and crypto.</p>
]]></description><pubDate>Sat, 15 Jul 2023 19:19:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=36740053</link><dc:creator>ohxh</dc:creator><comments>https://news.ycombinator.com/item?id=36740053</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36740053</guid></item><item><title><![CDATA[New comment by ohxh in "Show HN: Thoughts on Flash in 2023, in Flash, in 2023"]]></title><description><![CDATA[
<p>You've made something really cool here. Watching it transfixed me in a way similar to a video exhibition I saw at the Tate Modern when I was younger. It was a loop with footage of a JAXA rocket launch, an anti-nuclear protest after the Fukushima-Daiichi disaster, and the eruption of a volcano, with narration from a shaman in Japanese (subtitled) contemplating our relationship with nature. "Transit" by Susan Norrie. <a href="https://www.youtube.com/watch?v=9TS3d4URcB4&t=173s">https://www.youtube.com/watch?v=9TS3d4URcB4&t=173s</a></p>
]]></description><pubDate>Fri, 21 Apr 2023 05:28:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=35650076</link><dc:creator>ohxh</dc:creator><comments>https://news.ycombinator.com/item?id=35650076</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35650076</guid></item><item><title><![CDATA[Aligning ML systems with human intent]]></title><description><![CDATA[
<p>Article URL: <a href="https://jsteinhardt.stat.berkeley.edu/talks/satml/tutorial.html">https://jsteinhardt.stat.berkeley.edu/talks/satml/tutorial.html</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=35349486">https://news.ycombinator.com/item?id=35349486</a></p>
<p>Points: 3</p>
<p># Comments: 2</p>
]]></description><pubDate>Tue, 28 Mar 2023 23:19:34 +0000</pubDate><link>https://jsteinhardt.stat.berkeley.edu/talks/satml/tutorial.html</link><dc:creator>ohxh</dc:creator><comments>https://news.ycombinator.com/item?id=35349486</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35349486</guid></item><item><title><![CDATA[New comment by ohxh in "Room Generation Using Constraint Satisfaction"]]></title><description><![CDATA[
<p>There’s an interesting approach similar to this called Wave Function Collapse [1] (no relation to wfc in physics besides inspiration). It can infer the probabilistic constraints from one input example, and it seems to generalize quite well. Here’s a little demo: <a href="https://oskarstalberg.com/game/wave/wave.html" rel="nofollow">https://oskarstalberg.com/game/wave/wave.html</a><p>[1] <a href="https://github.com/mxgmn/WaveFunctionCollapse">https://github.com/mxgmn/WaveFunctionCollapse</a></p>
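<p>The "infer the constraints from one example" step is the neat part, and it’s tiny. A rough sketch of just that step for the simple tiled flavor (my own simplification with a made-up sample map, not mxgmn’s overlapping-patterns version):</p>
<pre><code># Constraint-learning step of tile-based WFC, roughly: scan one example grid
# and record which tiles have been seen next to which, per direction. The
# solver then only places a tile if every already-decided neighbor allows it.
from collections import defaultdict

DIRECTIONS = {"right": (0, 1), "down": (1, 0), "left": (0, -1), "up": (-1, 0)}

def learn_adjacency(sample):
    """Map (tile, direction) -> set of tiles observed in that direction."""
    allowed = defaultdict(set)
    rows, cols = len(sample), len(sample[0])
    for r in range(rows):
        for c in range(cols):
            for name, (dr, dc) in DIRECTIONS.items():
                r2, c2 = r + dr, c + dc
                if r2 in range(rows) and c2 in range(cols):
                    allowed[(sample[r][c], name)].add(sample[r2][c2])
    return allowed

sample = [          # '~' water, '.' sand, '#' grass -- one tiny example map
    "~~~.####",
    "~~..####",
    "~....###",
]
rules = learn_adjacency(sample)
print(sorted(rules[("~", "right")]))   # tiles ever seen to the right of water
</code></pre>
<p>The solver half then just repeats: pick the undecided cell with the fewest remaining options, collapse it to one tile (weighted by how often that tile appeared in the sample, which is where the probabilistic part comes in), and propagate the learned adjacency rules to its neighbors.</p>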
]]></description><pubDate>Thu, 23 Mar 2023 16:59:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=35277714</link><dc:creator>ohxh</dc:creator><comments>https://news.ycombinator.com/item?id=35277714</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35277714</guid></item><item><title><![CDATA[No One Can Explain Why Planes Stay in the Air (2020)]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.scientificamerican.com/article/no-one-can-explain-why-planes-stay-in-the-air/">https://www.scientificamerican.com/article/no-one-can-explain-why-planes-stay-in-the-air/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=34550139">https://news.ycombinator.com/item?id=34550139</a></p>
<p>Points: 11</p>
<p># Comments: 20</p>
]]></description><pubDate>Fri, 27 Jan 2023 18:56:12 +0000</pubDate><link>https://www.scientificamerican.com/article/no-one-can-explain-why-planes-stay-in-the-air/</link><dc:creator>ohxh</dc:creator><comments>https://news.ycombinator.com/item?id=34550139</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=34550139</guid></item></channel></rss>