<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: olliepro</title><link>https://news.ycombinator.com/user?id=olliepro</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 09 Apr 2026 05:34:10 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=olliepro" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by olliepro in "MegaTrain: Full Precision Training of 100B+ Parameter LLMs on a Single GPU"]]></title><description><![CDATA[
<p>Decentralized training makes a lot more sense when the required hardware isn't a $40K GPU...</p>
]]></description><pubDate>Wed, 08 Apr 2026 16:34:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=47692556</link><dc:creator>olliepro</dc:creator><comments>https://news.ycombinator.com/item?id=47692556</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47692556</guid></item><item><title><![CDATA[New comment by olliepro in "MegaTrain: Full Precision Training of 100B+ Parameter LLMs on a Single GPU"]]></title><description><![CDATA[
<p>This would likely only get used for small finetuning jobs. It’s too slow for the scale of pretraining.</p>
]]></description><pubDate>Wed, 08 Apr 2026 13:01:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=47689587</link><dc:creator>olliepro</dc:creator><comments>https://news.ycombinator.com/item?id=47689587</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47689587</guid></item><item><title><![CDATA[New comment by olliepro in "GPT-5.4"]]></title><description><![CDATA[
<p>I bet they lack good long-context training data and need to start a flywheel of collecting it via their API (from willing customers).</p>
]]></description><pubDate>Thu, 05 Mar 2026 23:25:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47268648</link><dc:creator>olliepro</dc:creator><comments>https://news.ycombinator.com/item?id=47268648</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47268648</guid></item><item><title><![CDATA[New comment by olliepro in "A shortage of tenors"]]></title><description><![CDATA[
<p>Tensors are in no shortage nowadays. I did read this as "tensors" at first, though, and got a good laugh.</p>
]]></description><pubDate>Wed, 11 Feb 2026 19:20:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=46979483</link><dc:creator>olliepro</dc:creator><comments>https://news.ycombinator.com/item?id=46979483</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46979483</guid></item><item><title><![CDATA[New comment by olliepro in "Hard-braking events as indicators of road segment crash risk"]]></title><description><![CDATA[
<p>There’s a section of I-15 in Utah’s Salt Lake County that reliably has a crash on weekdays at 6pm. It’s unfortunately at a pinch point in the mountains with no good alternate route… very annoying.<p>In a similar way that Google Maps shows eco routes, it’d be fun for them to show “safest” routes that avoid areas with common crashes. (Not always possible, but valuable knowledge when it is.)</p>
]]></description><pubDate>Mon, 09 Feb 2026 18:23:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=46948854</link><dc:creator>olliepro</dc:creator><comments>https://news.ycombinator.com/item?id=46948854</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46948854</guid></item><item><title><![CDATA[New comment by olliepro in "Anthropic AI tool sparks selloff from software to broader market"]]></title><description><![CDATA[
<p>Much of the scientific medical literature is behind paywalls. They have tapped into that data source (whereas ChatGPT doesn't have access to it). I suspect that if the medical journals made a deal with OpenAI to open up access to their articles and data, OpenEvidence would have to rely on its existing customers and the stickiness of its product; in that circumstance, they'd be pretty screwed.<p>For example, only 7% of pharmaceutical research is publicly accessible without paying.
See <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC7048123/" rel="nofollow">https://pmc.ncbi.nlm.nih.gov/articles/PMC7048123/</a></p>
]]></description><pubDate>Tue, 03 Feb 2026 22:51:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=46878534</link><dc:creator>olliepro</dc:creator><comments>https://news.ycombinator.com/item?id=46878534</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46878534</guid></item><item><title><![CDATA[New comment by olliepro in "Doing the thing is doing the thing"]]></title><description><![CDATA[
<p>It depends on your thing. If the marathon was just the motivation, your thing is running... if the marathon was the bucket-list item, it is the thing.</p>
]]></description><pubDate>Tue, 27 Jan 2026 21:23:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=46787051</link><dc:creator>olliepro</dc:creator><comments>https://news.ycombinator.com/item?id=46787051</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46787051</guid></item><item><title><![CDATA[New comment by olliepro in "Doing the thing is doing the thing"]]></title><description><![CDATA[
<p>Getting everyone to fall in love with the thing is not doing the thing... I learned this as a data scientist brought in to work on a project which ended soon thereafter. A team of 20 people spent 1.5 years getting people to love an idea that never materialized. Time was wasted because the technical limitations and issues came to light too late... it died as a 40-page postmortem that will never see daylight.</p>
]]></description><pubDate>Tue, 27 Jan 2026 21:16:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=46786897</link><dc:creator>olliepro</dc:creator><comments>https://news.ycombinator.com/item?id=46786897</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46786897</guid></item><item><title><![CDATA[New comment by olliepro in "Doing the thing is doing the thing"]]></title><description><![CDATA[
<p>Everyone's threshold is different. I aspire to "move fast and break things", but more often than not, I obsess over the rough edges.</p>
]]></description><pubDate>Tue, 27 Jan 2026 21:12:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=46786822</link><dc:creator>olliepro</dc:creator><comments>https://news.ycombinator.com/item?id=46786822</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46786822</guid></item><item><title><![CDATA[New comment by olliepro in "Doing the thing is doing the thing"]]></title><description><![CDATA[
<p>The more I use AI to do the thing, the more it feels like I didn't do the thing.</p>
]]></description><pubDate>Tue, 27 Jan 2026 21:07:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=46786752</link><dc:creator>olliepro</dc:creator><comments>https://news.ycombinator.com/item?id=46786752</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46786752</guid></item><item><title><![CDATA[New comment by olliepro in "After two years of vibecoding, I'm back to writing by hand"]]></title><description><![CDATA[
<p>What abstraction levels do you expect will remain only in the human domain?<p>The progression from basic arithmetic, to complex ratios and basic algebra, graphing, geometry, trig, calculus, linear algebra, differential equations… all along the way, there are calculators that can help students (Wolfram Alpha, basically). When they get to theory, proofs, etc… historically, that’s where the calculator ended, but now there’s LLMs… it feels like the levels of abstraction without a “calculator” are running out.<p>The compiler was the “calculator” abstraction of programming, and it seems like high-level languages now have LLMs to convert natural language to code as a sort of compiler. Especially with the explicitly stated goal of LLM companies to create the “software singularity”, I’d be interested to hear the rationale for abstractions in CS which will remain off limits to LLMs.</p>
]]></description><pubDate>Tue, 27 Jan 2026 01:40:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=46774424</link><dc:creator>olliepro</dc:creator><comments>https://news.ycombinator.com/item?id=46774424</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46774424</guid></item><item><title><![CDATA[New comment by olliepro in "Unrolling the Codex agent loop"]]></title><description><![CDATA[
<p>I made a skill that reflects on past conversations via parallel headless Codex sessions. It's great for context building. Repo: <a href="https://github.com/olliepro/Codex-Reflect-Skill" rel="nofollow">https://github.com/olliepro/Codex-Reflect-Skill</a></p>
]]></description><pubDate>Fri, 23 Jan 2026 22:56:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=46739134</link><dc:creator>olliepro</dc:creator><comments>https://news.ycombinator.com/item?id=46739134</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46739134</guid></item><item><title><![CDATA[New comment by olliepro in "Show HN: Codex Self-Reflect Skill and CLI to run subagents on past Codex convos"]]></title><description><![CDATA[
<p>I was thinking about something like this, but I don't have Codex running on a server. Keep me posted on how it goes!</p>
]]></description><pubDate>Fri, 23 Jan 2026 22:04:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=46738606</link><dc:creator>olliepro</dc:creator><comments>https://news.ycombinator.com/item?id=46738606</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46738606</guid></item><item><title><![CDATA[Show HN: Codex Self-Reflect Skill and CLI to run subagents on past Codex convos]]></title><description><![CDATA[
<p>This skill is useful for identifying agent friction points and brainstorming new skills, building context about past work for a new conversation, identifying code bloat from failed solutions, etc.<p>It is a simple Codex meta-“skill” that uses a CLI to analyze your past Codex sessions and generate reflections. It uses headless Codex subagents running in parallel on temporary copies of conversation history. The CLI orchestrates these reflections, and a Codex agent can synthesize them for downstream use. Reflections are cached for efficiency.<p>Repo: <a href="https://github.com/olliepro/Codex-Reflect-Skill" rel="nofollow">https://github.com/olliepro/Codex-Reflect-Skill</a></p>
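<p>For a rough idea of the orchestration pattern, here is a minimal Python sketch: copy each session to a temp dir, fan out a headless subagent per session in parallel, and cache reflections by content hash. The <i>codex exec</i> invocation, session file handling, and cache location here are illustrative assumptions, not the repo's actual internals:</p><pre><code># Sketch of the reflect loop, under the assumptions above: copy each
# past Codex session to a temp dir, run a headless subagent on each
# copy in parallel, and cache the resulting reflections.
import hashlib
import shutil
import subprocess
import tempfile
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

CACHE = Path.home() / ".codex-reflect-cache"  # hypothetical cache location

def reflect(session: Path, prompt: str) -> str:
    key = hashlib.sha256(session.read_bytes()).hexdigest()
    hit = CACHE / f"{key}.md"
    if hit.exists():                      # reflections are cached by content hash
        return hit.read_text()
    with tempfile.TemporaryDirectory() as tmp:
        copy = Path(tmp) / session.name   # subagent reads a copy, never the original
        shutil.copy(session, copy)
        out = subprocess.run(             # headless invocation (assumed)
            ["codex", "exec", f"{prompt}\n\nTranscript file: {copy}"],
            capture_output=True, text=True, check=True,
        ).stdout
    CACHE.mkdir(parents=True, exist_ok=True)
    hit.write_text(out)
    return out

def reflect_all(sessions: list[Path], prompt: str) -> list[str]:
    # fan out one headless subagent per session, in parallel
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(lambda s: reflect(s, prompt), sessions))
</code></pre>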
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46738461">https://news.ycombinator.com/item?id=46738461</a></p>
<p>Points: 3</p>
<p># Comments: 2</p>
]]></description><pubDate>Fri, 23 Jan 2026 21:54:05 +0000</pubDate><link>https://github.com/olliepro/Codex-Reflect-Skill</link><dc:creator>olliepro</dc:creator><comments>https://news.ycombinator.com/item?id=46738461</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46738461</guid></item><item><title><![CDATA[New comment by olliepro in "Cowork: Claude Code for the rest of your work"]]></title><description><![CDATA[
<p>I believe the idea is that it “files away” the files into folders.</p>
]]></description><pubDate>Tue, 13 Jan 2026 04:40:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=46597384</link><dc:creator>olliepro</dc:creator><comments>https://news.ycombinator.com/item?id=46597384</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46597384</guid></item><item><title><![CDATA[New comment by olliepro in "Cowork: Claude Code for the rest of your work"]]></title><description><![CDATA[
<p>Lol</p>
]]></description><pubDate>Tue, 13 Jan 2026 04:32:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=46597347</link><dc:creator>olliepro</dc:creator><comments>https://news.ycombinator.com/item?id=46597347</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46597347</guid></item><item><title><![CDATA[New comment by olliepro in "Cowork: Claude Code for the rest of your work"]]></title><description><![CDATA[
<p>Can Claude Code jump through the hoops for you?</p>
]]></description><pubDate>Tue, 13 Jan 2026 04:31:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=46597338</link><dc:creator>olliepro</dc:creator><comments>https://news.ycombinator.com/item?id=46597338</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46597338</guid></item><item><title><![CDATA[New comment by olliepro in "2025, the year we took the red pill"]]></title><description><![CDATA[
<p>Three things that shook me awake to the idea that the information barrage of the internet is a tranquilizer/red herring:<p>- Bad Mental Health: At the start of the war in Ukraine I read/listened to the news every day. I’d frequently cry, hearing an interview with someone who was trapped in a bombed-out building, etc. After a few weeks I realized I was being emotionally exhausted by something around the world over which I had no control.<p>- Enshittification: Working for a B2B “tech/coding education web app” company as a data scientist and realizing the perversity of the incentives that were ruining the product.<p>- Increasing Opportunity Cost: Working on LLMs and realizing that the possibilities for what I could do were expanding, because I would now have more answers and information at my fingertips than ever before.<p>Great post… it was useful to be able to reflect on and understand my experiences with these abstractions.</p>
]]></description><pubDate>Fri, 02 Jan 2026 15:52:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=46465990</link><dc:creator>olliepro</dc:creator><comments>https://news.ycombinator.com/item?id=46465990</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46465990</guid></item><item><title><![CDATA[New comment by olliepro in "Court report detailing ChatGPT's involvement with a recent murder suicide [pdf]"]]></title><description><![CDATA[
<p>Although there are many examples of troubling sycophantic responses confirming or encouraging delusions, this document is the original complaint (the initial filing) in a lawsuit against OpenAI. Because it is an initial legal complaint, it represents only the plaintiff's side of the story. It'll be interesting to see how this plays out as more information comes to light. It is likely that the filing selectively quotes ChatGPT to strengthen its argument. Additionally, it's plausible that Mr. Soelberg actively sought this type of behavior from the model or ignored/regenerated responses when they pushed back on the delusion.</p>
]]></description><pubDate>Wed, 31 Dec 2025 19:33:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=46447461</link><dc:creator>olliepro</dc:creator><comments>https://news.ycombinator.com/item?id=46447461</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46447461</guid></item><item><title><![CDATA[New comment by olliepro in "GPT-5.2"]]></title><description><![CDATA[
<p>It feels like this should work, but the breadth of knowledge in these models is so vast. Everyone knows how to taste, but not everyone knows physics, biology, math, every language… poetry, etc. Enumerating the breadth of valuable human tasks is hard, so both approaches suffer from the scale of the models’ surface area.<p>An interesting problem, since the creators of OLMo have mentioned that throughout training they use 1/3 of their compute just doing evaluations.<p>Edit:<p>One nice thing about the “critic” approach is that the restaurant (or model provider) doesn’t have access to the benchmark to quasi-directly optimize against.</p>
]]></description><pubDate>Fri, 12 Dec 2025 18:43:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=46247226</link><dc:creator>olliepro</dc:creator><comments>https://news.ycombinator.com/item?id=46247226</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46247226</guid></item></channel></rss>