<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: georgemandis</title><link>https://news.ycombinator.com/user?id=georgemandis</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 06 Apr 2026 02:42:12 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=georgemandis" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by georgemandis in "OpenAI charges by the minute, so speed up your audio"]]></title><description><![CDATA[
<p>Definitely in the same spirit!<p>Clearly the next thing we need to test is removing all the vowels from words, or something like that :)</p>
]]></description><pubDate>Thu, 26 Jun 2025 15:24:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=44388314</link><dc:creator>georgemandis</dc:creator><comments>https://news.ycombinator.com/item?id=44388314</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44388314</guid></item><item><title><![CDATA[New comment by georgemandis in "OpenAI charges by the minute, so speed up your audio"]]></title><description><![CDATA[
<p>I had this same thought and won't pretend my fear was rational, haha.<p>One thing that I thought was fairly clear in my write-up but feels a little lost in the comments: I didn't just try this with Whisper. I also tried it with their newer gpt-4o-transcription model, which seems considerably faster. There's no way to run that one locally.</p>
]]></description><pubDate>Thu, 26 Jun 2025 15:22:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=44388301</link><dc:creator>georgemandis</dc:creator><comments>https://news.ycombinator.com/item?id=44388301</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44388301</guid></item><item><title><![CDATA[New comment by georgemandis in "OpenAI charges by the minute, so speed up your audio"]]></title><description><![CDATA[
<p>I kind of want to take a more proper poke at this but focus more on summarization accuracy over word-for-word accuracy, though I see the value in both.<p>I'm actually curious: if I run transcriptions back-to-back-to-back on the exact same audio, how much variance should I expect?<p>Maybe I'll try three approaches:<p>- A straight diff comparison (I know a lot of people are calling for this, but I really think it's less useful than it sounds)<p>- A "variance within the model" test, running it multiple times against the same audio and tracking how much it varies between runs<p>- An LLM analysis assessing whether the primary points from a talk were captured and summarized at 1x, 2x, 3x, and 4x speeds (I think this is far more useful and interesting)</p>
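<p>Not the author's code, but the "variance within the model" test above could be sketched roughly like this, assuming each run's transcript has already been collected as a plain string:</p>

```python
from difflib import SequenceMatcher

def transcript_similarity(a: str, b: str) -> float:
    """Word-level similarity of two transcripts, as a ratio in [0, 1]."""
    return SequenceMatcher(None, a.split(), b.split()).ratio()

def variance_report(runs: list[str]) -> float:
    """Average pairwise similarity across repeated transcription runs.

    1.0 means every run produced an identical transcript; lower values
    mean the model varied between runs on the same audio.
    """
    pairs = [(i, j) for i in range(len(runs)) for j in range(i + 1, len(runs))]
    return sum(transcript_similarity(runs[i], runs[j]) for i, j in pairs) / len(pairs)
```

<p>The same helper would work for the 1x-vs-3x comparison too, by feeding it transcripts from differently sped-up copies of the same talk instead of repeated runs.</p>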
]]></description><pubDate>Thu, 26 Jun 2025 04:21:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=44384158</link><dc:creator>georgemandis</dc:creator><comments>https://news.ycombinator.com/item?id=44384158</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44384158</guid></item><item><title><![CDATA[New comment by georgemandis in "OpenAI charges by the minute, so speed up your audio"]]></title><description><![CDATA[
<p>I watched your talk. There are so many more interesting ideas in there that resonated with me that the summary (unsurprisingly) skipped over. I'm glad I watched it!<p>LLMs as the operating system, the way you interface with vibe-coding (smaller chunks) and the idea that maybe we haven't found the "GUI for AI" yet are all things I've pondered and discussed with people. You articulated them well.<p>I think some formats, like a talk, don't lend themselves easily to meaningful summaries. It's about giving the audience things to think about, to your point. It's the whole of storytelling being more than the sum of its parts, and why we still do it.<p>My post is, at the end of the day, really more about a neat trick to optimize transcriptions. This particular video might be a great example of why you may not always want to do that :)<p>Anyway, thanks for the time and thanks for the talk!</p>
]]></description><pubDate>Wed, 25 Jun 2025 18:29:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=44380436</link><dc:creator>georgemandis</dc:creator><comments>https://news.ycombinator.com/item?id=44380436</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44380436</guid></item><item><title><![CDATA[New comment by georgemandis in "OpenAI charges by the minute, so speed up your audio"]]></title><description><![CDATA[
<p>Hahaha. Okay, okay... I will watch it now ;)<p>(Thanks for your good sense of humor)</p>
]]></description><pubDate>Wed, 25 Jun 2025 17:29:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=44379806</link><dc:creator>georgemandis</dc:creator><comments>https://news.ycombinator.com/item?id=44379806</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44379806</guid></item><item><title><![CDATA[New comment by georgemandis in "OpenAI charges by the minute, so speed up your audio"]]></title><description><![CDATA[
<p>Interesting! At $0.02 to $0.04 an hour I don't suspect you've been hunting for optimizations, but I wonder if this "speed up the audio" trick would save you even more.<p>> We do this internally with our tool that automatically transcribes local government council meetings right when they get uploaded to YouTube<p>Doesn't YouTube do this for you automatically these days within a day or so?</p>
]]></description><pubDate>Wed, 25 Jun 2025 16:35:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=44379183</link><dc:creator>georgemandis</dc:creator><comments>https://news.ycombinator.com/item?id=44379183</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44379183</guid></item><item><title><![CDATA[New comment by georgemandis in "OpenAI Charges by the Minute, So Make the Minutes Shorter"]]></title><description><![CDATA[
<p>Yeah, I'd like to do a more formal analysis of the outputs if I can carve out the time.<p>I don't think a simple diff is the way to go, at least for what I'm interested in. What I care about more is the overall accuracy of the summary—not the word-for-word transcription.<p>The test I want to set up is using LLMs to evaluate the summarized output and see if the primary themes/topics persist. That's more interesting and useful to me for this exercise.</p>
]]></description><pubDate>Wed, 25 Jun 2025 16:32:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=44379143</link><dc:creator>georgemandis</dc:creator><comments>https://news.ycombinator.com/item?id=44379143</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44379143</guid></item><item><title><![CDATA[New comment by georgemandis in "OpenAI charges by the minute, so speed up your audio"]]></title><description><![CDATA[
<p>Should be fixed now. Thank you!</p>
]]></description><pubDate>Wed, 25 Jun 2025 15:44:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=44378604</link><dc:creator>georgemandis</dc:creator><comments>https://news.ycombinator.com/item?id=44378604</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44378604</guid></item><item><title><![CDATA[New comment by georgemandis in "OpenAI charges by the minute, so speed up your audio"]]></title><description><![CDATA[
<p>For what it's worth, I completely agree with you, for all the reasons you're saying. With talks in particular I think it's seldom about the raw content and ideas presented and more about the ancillary ideas they provoke and inspire, like you're describing.<p>There is just <i>so</i> much content out there. And context is everything. If the person sharing it had led with some specific ideas or thoughts I might have taken the time to watch and looked for those ideas. But in the context it was received—a quick link with no additional context—I really just wanted the "gist" to know what I was even potentially responding to.<p>In this case, for me, it was worth it. I can go back and decide if I want to watch it. Your comment has intrigued me so I very well might!<p>++ to "Slower is usually better for thinking"</p>
]]></description><pubDate>Wed, 25 Jun 2025 15:40:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=44378560</link><dc:creator>georgemandis</dc:creator><comments>https://news.ycombinator.com/item?id=44378560</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44378560</guid></item><item><title><![CDATA[New comment by georgemandis in "OpenAI charges by the minute, so speed up your audio"]]></title><description><![CDATA[
<p>Oooh fun! I had a feeling there was more ffmpeg wizardry I could be leaning into here. I'll have to try this later—thanks for the idea!</p>
]]></description><pubDate>Wed, 25 Jun 2025 15:33:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=44378492</link><dc:creator>georgemandis</dc:creator><comments>https://news.ycombinator.com/item?id=44378492</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44378492</guid></item><item><title><![CDATA[New comment by georgemandis in "OpenAI charges by the minute, so speed up your audio"]]></title><description><![CDATA[
<p>I was trying to summarize a 40-minute talk with OpenAI’s transcription API, but it was too long. So I sped it up with ffmpeg to fit within the 25-minute cap. It worked quite well (up to 3x speed) and was cheaper and faster, so I wrote about it.<p>Felt like a fun trick worth sharing. There’s a full script and cost breakdown.</p>
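<p>A minimal sketch of the trick, not the author's exact script: build an ffmpeg command that chains two atempo filters (a single atempo stage is commonly limited to 2.0x, so 3x is expressed as 2.0 × 1.5), and estimate the cost, assuming the filenames shown and per-minute billing at roughly Whisper's published rate of $0.006/min:</p>

```python
def ffmpeg_speedup_cmd(src: str, dst: str, speed: float = 3.0) -> list[str]:
    """Command to speed up audio without changing pitch.

    atempo only accepts values up to 2.0 in many ffmpeg builds, so
    higher speeds are expressed as a chain of two stages.
    """
    first = min(speed, 2.0)
    rest = speed / first
    return ["ffmpeg", "-i", src, "-filter:a", f"atempo={first},atempo={rest}", dst]

def transcription_cost(minutes: float, speed: float, price_per_min: float = 0.006) -> float:
    """Dollar cost of transcribing `minutes` of audio sped up by `speed`."""
    return round(minutes / speed * price_per_min, 4)
```

<p>For a 40-minute talk, 3x speed also brings the upload under the 25-minute cap (40 / 3 ≈ 13.3 minutes) while cutting the per-minute bill to a third.</p>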
]]></description><pubDate>Wed, 25 Jun 2025 13:17:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=44376990</link><dc:creator>georgemandis</dc:creator><comments>https://news.ycombinator.com/item?id=44376990</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44376990</guid></item><item><title><![CDATA[OpenAI charges by the minute, so speed up your audio]]></title><description><![CDATA[
<p>Article URL: <a href="https://george.mand.is/2025/06/openai-charges-by-the-minute-so-make-the-minutes-shorter/">https://george.mand.is/2025/06/openai-charges-by-the-minute-so-make-the-minutes-shorter/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44376989">https://news.ycombinator.com/item?id=44376989</a></p>
<p>Points: 740</p>
<p># Comments: 228</p>
]]></description><pubDate>Wed, 25 Jun 2025 13:17:25 +0000</pubDate><link>https://george.mand.is/2025/06/openai-charges-by-the-minute-so-make-the-minutes-shorter/</link><dc:creator>georgemandis</dc:creator><comments>https://news.ycombinator.com/item?id=44376989</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44376989</guid></item><item><title><![CDATA[Ask a computer A toy powered by GPT-3 and reckless abandon]]></title><description><![CDATA[
<p>Article URL: <a href="https://george.mand.is/2022/12/ask-a-computer-a-toy-powered-by-gpt-3-and-reckless-abandon/">https://george.mand.is/2022/12/ask-a-computer-a-toy-powered-by-gpt-3-and-reckless-abandon/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=33995154">https://news.ycombinator.com/item?id=33995154</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Thu, 15 Dec 2022 04:21:36 +0000</pubDate><link>https://george.mand.is/2022/12/ask-a-computer-a-toy-powered-by-gpt-3-and-reckless-abandon/</link><dc:creator>georgemandis</dc:creator><comments>https://news.ycombinator.com/item?id=33995154</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=33995154</guid></item><item><title><![CDATA[New Elmo fire memes with DALL-E 2]]></title><description><![CDATA[
<p>Article URL: <a href="https://george.mand.is/2022/07/new-elmo-fire-memes-with-dall-e-2/">https://george.mand.is/2022/07/new-elmo-fire-memes-with-dall-e-2/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=32235005">https://news.ycombinator.com/item?id=32235005</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 26 Jul 2022 06:23:28 +0000</pubDate><link>https://george.mand.is/2022/07/new-elmo-fire-memes-with-dall-e-2/</link><dc:creator>georgemandis</dc:creator><comments>https://news.ycombinator.com/item?id=32235005</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=32235005</guid></item><item><title><![CDATA[npm install turboencabulator]]></title><description><![CDATA[
<p>Article URL: <a href="https://george.mand.is/2022/01/npm-install-turboencabulator/">https://george.mand.is/2022/01/npm-install-turboencabulator/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=29931737">https://news.ycombinator.com/item?id=29931737</a></p>
<p>Points: 1</p>
<p># Comments: 1</p>
]]></description><pubDate>Fri, 14 Jan 2022 08:05:38 +0000</pubDate><link>https://george.mand.is/2022/01/npm-install-turboencabulator/</link><dc:creator>georgemandis</dc:creator><comments>https://news.ycombinator.com/item?id=29931737</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=29931737</guid></item><item><title><![CDATA[Facebook Recruiter Correspondence]]></title><description><![CDATA[
<p>Article URL: <a href="https://george.mand.is/2021/10/facebook-recruiter-correspondence/">https://george.mand.is/2021/10/facebook-recruiter-correspondence/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=28815032">https://news.ycombinator.com/item?id=28815032</a></p>
<p>Points: 156</p>
<p># Comments: 146</p>
]]></description><pubDate>Sun, 10 Oct 2021 01:42:06 +0000</pubDate><link>https://george.mand.is/2021/10/facebook-recruiter-correspondence/</link><dc:creator>georgemandis</dc:creator><comments>https://news.ycombinator.com/item?id=28815032</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=28815032</guid></item><item><title><![CDATA[New comment by georgemandis in "Ask HN: Should I create an API for iTunes apps?"]]></title><description><![CDATA[
<p>Why not just release it as an open-source project?</p>
]]></description><pubDate>Wed, 13 May 2009 21:42:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=607715</link><dc:creator>georgemandis</dc:creator><comments>https://news.ycombinator.com/item?id=607715</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=607715</guid></item></channel></rss>