<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: conjecTech</title><link>https://news.ycombinator.com/user?id=conjecTech</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 08 Apr 2026 10:27:27 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=conjecTech" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by conjecTech in "Show HN: Whisper at 1.58 bits with custom kernels for edge inference"]]></title><description><![CDATA[
<p>Very nice work. Training these from scratch is a big undertaking.<p>- Did you train the encoder & decoder together or separately? It would be nice to have the encoder representation be compatible with the existing whisper implementation, since it would mean you could swap your implementation into models where it's used as a component, like in the recent Voxtral model. I'd imagine it might also make training a bit faster.<p>- Did you consider training the turbo model as well?</p>
]]></description><pubDate>Mon, 28 Jul 2025 14:41:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=44711348</link><dc:creator>conjecTech</dc:creator><comments>https://news.ycombinator.com/item?id=44711348</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44711348</guid></item><item><title><![CDATA[New comment by conjecTech in "Impacts of adding PV solar system to internal combustion engine vehicles"]]></title><description><![CDATA[
<p>EPA range tends to be pessimistic for EVs as it assumes you are always traveling at highway speeds. Even small reductions in speed can make EVs much more efficient, since drag is quadratic in speed. A quick Google search shows Prius Prime owners reporting 4-5.5 miles/kWh, so the 3-6 mile range is entirely plausible.</p>
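<p>The arithmetic behind that plausibility check can be sketched in a few lines. All the inputs here are assumptions for illustration (panel size, sun hours, and efficiency are round numbers, not figures from the article):</p>
<p>
```python
# Rough plausibility check for rooftop-PV range on a car.
# Every number below is an assumption, not a measured value.
panel_watts = 200        # modest car-roof PV panel
peak_sun_hours = 5       # daily average in a sunny climate
miles_per_kwh = 5.0      # mid-range of reported Prius Prime efficiency

daily_kwh = panel_watts * peak_sun_hours / 1000   # energy harvested per day
daily_miles = daily_kwh * miles_per_kwh           # added range per day

print(daily_kwh, daily_miles)   # 1.0 kWh -> 5.0 miles, inside the 3-6 mile claim
```
</p>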
]]></description><pubDate>Mon, 14 Jul 2025 16:52:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=44562333</link><dc:creator>conjecTech</dc:creator><comments>https://news.ycombinator.com/item?id=44562333</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44562333</guid></item><item><title><![CDATA[New comment by conjecTech in "Ask HN: How can I invest in Solar Power?"]]></title><description><![CDATA[
<p>You can buy stock in a solar "yieldco", which is exactly this. It holds the assets of solar farms and pays out the cashflows. There were a lot circa 2018; I believe Brookfield bought up a bunch, so there may be fewer options now.</p>
]]></description><pubDate>Thu, 10 Jul 2025 19:04:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=44524373</link><dc:creator>conjecTech</dc:creator><comments>https://news.ycombinator.com/item?id=44524373</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44524373</guid></item><item><title><![CDATA[New comment by conjecTech in "OpenAI charges by the minute, so speed up your audio"]]></title><description><![CDATA[
<p>It's a very simple change in a vanilla Python implementation. The encoder is a set of attention blocks, and the length of the attention context can be changed without changing the calculation at all.<p>Here (<a href="https://github.com/openai/whisper/blob/main/whisper/model.py#L201">https://github.com/openai/whisper/blob/main/whisper/model.py...</a>) is the relevant code in the whisper repo. You'd just need to change the for loop to an enumerate and subsample the context along its length at the point you want. I believe it would be:<p>for i, block in enumerate(self.blocks):
  x = block(x)
  if i == 4:
    x = x[:, ::2]</p>
]]></description><pubDate>Fri, 27 Jun 2025 09:42:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=44395338</link><dc:creator>conjecTech</dc:creator><comments>https://news.ycombinator.com/item?id=44395338</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44395338</guid></item><item><title><![CDATA[New comment by conjecTech in "OpenAI charges by the minute, so speed up your audio"]]></title><description><![CDATA[
<p>If you are hosting whisper yourself, you can do something slightly more elegant, but with the same effect. You can downsample/pool the context 2:1 (or potentially more) a few layers into the encoder. That allows you to do the equivalent of speeding up audio without worrying about potential spectral losses. For whisper large v3, that gets you nearly double throughput in exchange for a relative ~4% WER increase.</p>
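<p>A minimal NumPy sketch of the pooling idea (not the poster's actual implementation). The shapes mirror openai/whisper's encoder, where x is (batch, n_ctx, n_state) and n_ctx is 1500, but the identity "blocks" and the layer at which pooling happens are purely illustrative:</p>
<p>
```python
import numpy as np

def pool_context(x, stride=2):
    # Average adjacent frames along the context axis.
    # x: (batch, n_ctx, n_state); n_ctx assumed divisible by stride.
    b, n, d = x.shape
    return x.reshape(b, n // stride, stride, d).mean(axis=2)

# Toy "encoder": identity functions stand in for attention blocks.
blocks = [lambda t: t for _ in range(8)]
x = np.random.randn(1, 1500, 64)   # Whisper's encoder context is 1500 frames
for i, block in enumerate(blocks):
    x = block(x)
    if i == 4:                     # pool 2:1 a few layers in
        x = pool_context(x, stride=2)

print(x.shape)   # (1, 750, 64): half the context for all later layers
```
</p>
<p>Averaging adjacent frames keeps more information than the strided slice (x[:, ::2]) from the sibling comment, which simply drops every other frame.</p>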
]]></description><pubDate>Thu, 26 Jun 2025 06:56:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=44384893</link><dc:creator>conjecTech</dc:creator><comments>https://news.ycombinator.com/item?id=44384893</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44384893</guid></item><item><title><![CDATA[New comment by conjecTech in "Japan's declining births on track to fall below 700k"]]></title><description><![CDATA[
<p>Most of this decline isn't driven by a change in current fertility rates, but instead by a persistent trend downward in number of reproductive-aged adults. That was locked in by the fertility rates 20-40 years ago. These things move in the timescale of decades. Even if policy were reasonably successful, it would be a quarter of a century before things stabilized.</p>
]]></description><pubDate>Wed, 06 Nov 2024 20:26:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=42068798</link><dc:creator>conjecTech</dc:creator><comments>https://news.ycombinator.com/item?id=42068798</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42068798</guid></item><item><title><![CDATA[New comment by conjecTech in "AMD's Turin: 5th Gen EPYC Launched"]]></title><description><![CDATA[
<p>You're measuring wall time, not CPU time. It may be that they are similar, but I'd suspect you aren't loading the worker nodes well. If the savings are from the reduced shuffles & serde, it's probably something you can measure. I'd be curious to see the findings.<p>I'm not against using simple methods where appropriate. 95% of the companies out there probably do not need frameworks like spark. I think the main argument against them is operational complexity though, not the compute overhead.</p>
]]></description><pubDate>Sun, 13 Oct 2024 03:56:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=41825042</link><dc:creator>conjecTech</dc:creator><comments>https://news.ycombinator.com/item?id=41825042</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41825042</guid></item><item><title><![CDATA[New comment by conjecTech in "AMD's Turin: 5th Gen EPYC Launched"]]></title><description><![CDATA[
<p>When you talk between remote machines, you have to translate to a format that can be transmitted and distributed between machines (serialization). You then have to undo that at the other end (deserialization). If what you are sending along is just a few floats, that can be very cheap. If you're sending along a large nested dictionary or even a full program, not so much.<p>Imagine an example where you have two arrays of 1 billion numbers, and you want to add them pairwise. You could use spark to do that by having each "task" be a single addition. But the time it would take to structure and transmit the 1 billion requests will be many multiples of the amount of time it would take to just do the additions.</p>
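<p>The framing overhead alone makes the point, even before any network is involved. A small sketch (pickle as a stand-in for whatever wire format a scheduler would use; the array sizes are scaled down for illustration):</p>
<p>
```python
import pickle
import numpy as np

a = np.arange(1_000_000, dtype=np.float64)
b = np.arange(1_000_000, dtype=np.float64)

# Local, vectorized: one pass over contiguous memory, zero serialization.
c = a + b

# Per-pair "task" framing: serializing a single pair of floats already
# costs several times the 16 raw bytes actually being added, and that
# is before any network round trip.
payload = pickle.dumps((float(a[0]), float(b[0])))
print(len(payload) > 16)   # True: dozens of bytes of framing per pair
```
</p>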
]]></description><pubDate>Sun, 13 Oct 2024 03:51:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=41825029</link><dc:creator>conjecTech</dc:creator><comments>https://news.ycombinator.com/item?id=41825029</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41825029</guid></item><item><title><![CDATA[New comment by conjecTech in "AMD's Turin: 5th Gen EPYC Launched"]]></title><description><![CDATA[
<p>The difference in throughput for local versus distributed orchestration would mainly come from serdes, networking, and switching. Serdes can be substantial. Networking and switching have been aggressively offloaded from the CPU through better hardware support.<p>Individual tasks would definitely have better latency, but I'd suspect the impact on throughput/CPU usage might be muted. Of course at the extremes (very small jobs, very large/complex objects being passed) you'd see big gains.</p>
]]></description><pubDate>Sat, 12 Oct 2024 18:11:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=41821126</link><dc:creator>conjecTech</dc:creator><comments>https://news.ycombinator.com/item?id=41821126</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41821126</guid></item><item><title><![CDATA[New comment by conjecTech in "East and Gulf Coast ports strike"]]></title><description><![CDATA[
<p>Longshoreman is not a typical blue collar job. You cannot just go out and become one today. It is a nepotistic profession where positions are passed within families and closely guarded from external competition. Go read The Box. It outlines all of this in the later chapters.</p>
]]></description><pubDate>Tue, 01 Oct 2024 13:30:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=41708139</link><dc:creator>conjecTech</dc:creator><comments>https://news.ycombinator.com/item?id=41708139</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41708139</guid></item><item><title><![CDATA[New comment by conjecTech in "Batch delivery orders provide cheap food, higher cost for workers"]]></title><description><![CDATA[
<p>It seems like the program has some issues to iron out, but the general concept is really compelling. Being able to group both the supply and demand sides of delivery systems could make it really efficient. When you increase driver productivity like that, it should leave room to increase their wages, so hopefully that happens here.<p>It would be great to see programs like this in the US. The closest I can think of is MealPal, where they simplify logistics by having each restaurant focus on only 1-2 dishes so they can churn them out incredibly quickly. I know Marc Lore's Wonder is also trying something similar with ghost kitchens, but I don't know many of the details. Being able to serve more people with the same store space also cuts down on the fixed costs per person. I enjoy cooking, but it is hugely inefficient at a civilizational level for everyone to do it for themselves. If we could make healthy restaurants cost competitive with home cooking, it would probably equate to trillions of dollars' worth of saved time.</p>
]]></description><pubDate>Tue, 11 Jun 2024 15:27:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=40647385</link><dc:creator>conjecTech</dc:creator><comments>https://news.ycombinator.com/item?id=40647385</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40647385</guid></item><item><title><![CDATA[New comment by conjecTech in "Solar Passes 100% of Power Demand in California [Updated]"]]></title><description><![CDATA[
<p>Norway's vehicle fleet is now 25% EVs and their per capita electricity usage is the same as it was a decade ago[1]. Oil infrastructure uses a ton of electricity, and California refines most of the oil it consumes because of its geography and unique fuel blend. EVs may actually reduce California's overall electricity usage.<p>[1] <a href="https://www.ssb.no/en/statbank/table/08313" rel="nofollow">https://www.ssb.no/en/statbank/table/08313</a></p>
]]></description><pubDate>Mon, 03 Jun 2024 16:14:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=40564163</link><dc:creator>conjecTech</dc:creator><comments>https://news.ycombinator.com/item?id=40564163</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40564163</guid></item><item><title><![CDATA[New comment by conjecTech in "China's ageing tech workers hit by 'curse of 35'"]]></title><description><![CDATA[
<p>This feels like a convenient excuse for cost cutting. Experienced people are expensive. If you need to make cuts, it's an easy place to start.<p>Ironically, it's felt like US tech companies have been moving in the opposite direction lately. There seems to be a lot of demand for very senior engineers at the same time it's become increasingly difficult for new grads to find jobs. Maybe this is just a function of the pinch in CS graduation rates ~15 years ago[1]. From my experience as a manager, the premium you pay for senior talent also seems like a good deal. Even among good programs, there is huge variance in the productivity of new grads and training is a difficult proposition when changing jobs every 2 years is common.<p>[1] <a href="https://www.jamiefosterscience.com/wp-content/uploads/2023/11/graduates-per-year.webp" rel="nofollow">https://www.jamiefosterscience.com/wp-content/uploads/2023/1...</a></p>
]]></description><pubDate>Thu, 25 Apr 2024 16:12:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=40159350</link><dc:creator>conjecTech</dc:creator><comments>https://news.ycombinator.com/item?id=40159350</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40159350</guid></item><item><title><![CDATA[New comment by conjecTech in "How Hertz’s bet on Teslas went sideways"]]></title><description><![CDATA[
<p>I think that's the real loss here. Rental vehicles are a big source of cheap used cars. Having a steady flow of EVs from there would have brought down used prices and made them more financially accessible. Shame we won't see that now, at least in the near future.</p>
]]></description><pubDate>Thu, 04 Apr 2024 15:49:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=39931941</link><dc:creator>conjecTech</dc:creator><comments>https://news.ycombinator.com/item?id=39931941</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39931941</guid></item><item><title><![CDATA[New comment by conjecTech in "Dell tells remote workers that they won't be eligible for promotion"]]></title><description><![CDATA[
<p>They just laid off 6,000 people. Sounds more like it was an attempt to get people to quit so they don't have to pay as much severance.<p><a href="https://www.firstpost.com/tech/tech-layoffs-2024-dell-fires-6000-employees-has-laid-of-13000-employees-in-a-year-13753150.html" rel="nofollow">https://www.firstpost.com/tech/tech-layoffs-2024-dell-fires-...</a></p>
]]></description><pubDate>Thu, 28 Mar 2024 22:53:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=39858392</link><dc:creator>conjecTech</dc:creator><comments>https://news.ycombinator.com/item?id=39858392</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39858392</guid></item><item><title><![CDATA[New comment by conjecTech in "A near 100pct renewable grid for Australia is feasible and affordable"]]></title><description><![CDATA[
<p>The article you linked is from 2017. 80% of Australia's solar was installed after that came out. Hard to argue that it's somehow responsible. Seems like your knowledge is a bit outdated.</p>
]]></description><pubDate>Thu, 28 Mar 2024 13:57:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=39851519</link><dc:creator>conjecTech</dc:creator><comments>https://news.ycombinator.com/item?id=39851519</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39851519</guid></item><item><title><![CDATA[New comment by conjecTech in "A near 100pct renewable grid for Australia is feasible and affordable"]]></title><description><![CDATA[
<p>Australia is transitioning >3% of their electricity supply to solar per year and still accelerating. Rooftop solar is near 50% uptake. Combined with wind, they are on track to transition almost fully in the next 10 years. So there will be one soon.</p>
]]></description><pubDate>Thu, 28 Mar 2024 13:49:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=39851424</link><dc:creator>conjecTech</dc:creator><comments>https://news.ycombinator.com/item?id=39851424</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39851424</guid></item><item><title><![CDATA[New comment by conjecTech in "A near 100pct renewable grid for Australia is feasible and affordable"]]></title><description><![CDATA[
<p>The grid was much nearer to collapse seven years ago when there were actually brownouts, and renewables have been entirely responsible for its improvement. Moving toward renewables and keeping the lights on in Australia are the same thing.</p>
]]></description><pubDate>Thu, 28 Mar 2024 13:42:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=39851325</link><dc:creator>conjecTech</dc:creator><comments>https://news.ycombinator.com/item?id=39851325</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39851325</guid></item><item><title><![CDATA[New comment by conjecTech in "MTA board votes to approve new $15 toll to drive into Manhattan"]]></title><description><![CDATA[
<p>You make it sound like they have no alternative. A literal majority of the city uses the trains regularly. The person you're describing can just live within walking distance of a station. The $20k they aren't paying for the car and tolls should be just about enough for the rent on a 1-bedroom.</p>
]]></description><pubDate>Thu, 28 Mar 2024 02:43:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=39847208</link><dc:creator>conjecTech</dc:creator><comments>https://news.ycombinator.com/item?id=39847208</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39847208</guid></item><item><title><![CDATA[New comment by conjecTech in "MTA board votes to approve new $15 toll to drive into Manhattan"]]></title><description><![CDATA[
<p>I do not think we should plan the city around the needs of doctors living in Westchester.</p>
]]></description><pubDate>Thu, 28 Mar 2024 02:34:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=39847164</link><dc:creator>conjecTech</dc:creator><comments>https://news.ycombinator.com/item?id=39847164</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39847164</guid></item></channel></rss>