<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: apinstein</title><link>https://news.ycombinator.com/user?id=apinstein</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 23 Apr 2026 12:47:55 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=apinstein" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by apinstein in "Show HN: AI SDLC Scaffold, repo template for AI-assisted software development"]]></title><description><![CDATA[
<p>I am playing around with building my own similar setup and am faced with the same question you pose.<p>How can you tell if your prompt process works? I feel like the outputs from an SDLC process are so much higher level than what evals can measure, but I am no eval expert.<p>How would you benchmark this?</p>
]]></description><pubDate>Sat, 21 Mar 2026 17:10:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47468930</link><dc:creator>apinstein</dc:creator><comments>https://news.ycombinator.com/item?id=47468930</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47468930</guid></item><item><title><![CDATA[New comment by apinstein in "Ask HN: What are you working on? (February 2026)"]]></title><description><![CDATA[
<p>I built <a href="https://www.chatmycal.com/" rel="nofollow">https://www.chatmycal.com/</a><p>Ever get an email or handout with a massive schedule (in text, image, or PDF) and have to hand re-enter dozens of events? I built ChatMyCal to fix this.<p>Copy/paste the email, or take a pic, and it will perfectly extract the schedule and publish it as a subscribable calendar. Then “transfer” it to the group admin and save everyone else from the same chore.<p>On the inside, it’s basically Cursor for calendars. You can use AI to batch-edit things, decorate events with coordinated icons, add rules that apply to all events, etc. It can also develop full schedules like “make a monthly book club for zombie books” or plan a weekend foodie trip to Miami. Not sure of all the best uses yet!</p>
]]></description><pubDate>Tue, 10 Feb 2026 12:31:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=46958864</link><dc:creator>apinstein</dc:creator><comments>https://news.ycombinator.com/item?id=46958864</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46958864</guid></item><item><title><![CDATA[New comment by apinstein in "The guide to real-world EV battery health"]]></title><description><![CDATA[
<p>While true, this either matters for you or it doesn’t. Classic Innovator’s Dilemma.<p>My EV gets only 230 mi of range at max, and I only charge to 85%, which is about 190 mi. But I do it at home and never have any range anxiety.<p>The trajectory of battery improvements suggests it is just a matter of time before larger range needs are addressed satisfactorily.<p>If you cannot slow-charge at home or work, it’s a tougher story: EVs aren’t right for you yet, and that’s OK. It is less clear that the rollout of slow charging will be solved in a scaled way. I am not one who believes that 5-10 minute EV charging is a good goal; it requires very high power and is likely not a good price trade-off for the time saved. The current 20-30 minutes will likely be the broad solution for those who want an EV and cannot charge at home, though I think that’s not a very good solution.</p>
]]></description><pubDate>Sun, 18 Jan 2026 14:52:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=46668225</link><dc:creator>apinstein</dc:creator><comments>https://news.ycombinator.com/item?id=46668225</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46668225</guid></item><item><title><![CDATA[New comment by apinstein in "OpenAI's H1 2025: $4.3B in income, $13.5B in loss"]]></title><description><![CDATA[
<p>It’s not a hand wave…<p>The cost to serve a given level of AI capability drops by roughly 10x a year. AI has gotten good enough that next year people can keep using current-gen models, and at that point serving them will be profitable. Probably 70%+ gross margin.<p>Right now it’s a race for market share.<p>But once that backs off, prices will adjust to profitability. Not unlike the Uber/Lyft wars.</p>
]]></description><pubDate>Thu, 02 Oct 2025 21:01:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=45455508</link><dc:creator>apinstein</dc:creator><comments>https://news.ycombinator.com/item?id=45455508</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45455508</guid></item><item><title><![CDATA[New comment by apinstein in "Stripe Launches L1 Blockchain: Tempo"]]></title><description><![CDATA[
<p>Here's the play. It's very simple, and it's quite good.<p>Stripe processes a LOT of money. The customers that receive that money need to move it around, often to banks. Stripe makes no money on that.<p>Over the last few years, stablecoins have become a preferred means to hold and move money (for convenience, etc.).<p>Stablecoin providers make money on their float: selling stablecoins means you get free deposits, and risk-free rates are presently around 4%. For every $1M in stablecoins your customers hold, you can make $40k/year. Stablecoin providers like Circle pay about half of that back out to partners that sell the tokens.<p>Stripe is huge and well trusted by customers for handling payments. By adopting stablecoin infrastructure to control financial flows into stablecoins, they can amass huge amounts of stablecoin sales.<p>If even ~3% of their transaction volume gets held in stablecoins, and they make 1% a year on that, it's about $1B a year in bottom line.<p>~$10e9 (daily avg vol) * 365 * 3% (converted to stablecoins) * 1% (net income) = ~$1B</p>
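The back-of-envelope above can be sanity-checked in a few lines. Note that every input here is the comment's own assumption (the $10B/day volume, the 3% held share, the 1% net yield), not a published Stripe figure:

```python
# Sanity-check of the stablecoin-float estimate above.
# All inputs are the comment's assumptions, not published Stripe numbers.
daily_volume = 10e9   # assumed average daily processing volume, in dollars
held_share = 0.03     # fraction of annual volume assumed held in stablecoins
net_yield = 0.01      # assumed net income on the float per year

annual_income = daily_volume * 365 * held_share * net_yield
print(f"~${annual_income / 1e9:.1f}B/year")  # ~$1.1B/year
```

So the "~$1B" figure is consistent with the stated assumptions; the estimate scales linearly with each of the three inputs.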
]]></description><pubDate>Thu, 04 Sep 2025 18:28:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=45130564</link><dc:creator>apinstein</dc:creator><comments>https://news.ycombinator.com/item?id=45130564</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45130564</guid></item><item><title><![CDATA[New comment by apinstein in "Do viruses trigger Alzheimer's?"]]></title><description><![CDATA[
<p>Oh wow, a bunch of Alzheimer’s grants at Columbia were canceled, including the Alzheimer’s Disease Research Center. Unclear if this study was affected…<p><a href="https://taggs.hhs.gov/Content/Data/HHS_Grants_Terminated.pdf" rel="nofollow">https://taggs.hhs.gov/Content/Data/HHS_Grants_Terminated.pdf</a></p>
]]></description><pubDate>Sun, 23 Mar 2025 12:01:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=43452369</link><dc:creator>apinstein</dc:creator><comments>https://news.ycombinator.com/item?id=43452369</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43452369</guid></item><item><title><![CDATA[New comment by apinstein in "Waymo Robotaxis Much Safer Than Any Human-Driven Cars"]]></title><description><![CDATA[
<p>There are so many things to control for… 
- compare to taxi / professional drivers
- locations
- times<p>Still a great nominal achievement, but any time a sponsored research study doesn’t even attempt to control for basic factors, it raises a lot of red flags.</p>
]]></description><pubDate>Sun, 05 Jan 2025 15:12:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=42602260</link><dc:creator>apinstein</dc:creator><comments>https://news.ycombinator.com/item?id=42602260</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42602260</guid></item><item><title><![CDATA[New comment by apinstein in "Show HN: Free mortgage analysis tool to avoid getting screwed by closing costs"]]></title><description><![CDATA[
<p>Hmm. I don't see why you wouldn't optimize closing costs. I have done so on 4-5 mortgages myself. When sourcing your lending options, if you are a good borrower at least, you will have multiple options. I always asked for a full closing cost estimate and compared. I usually saved $2,000-5,000 through a combination of:<p>- effectively shopping around items like title insurance, appraisals, etc. by pointing out differences between competing vendors
- identifying BS items that are not even on all offers, and simply having them removed. People like to add bogus fee lines.<p>For sure, doing this as lead-gen is great. Agreed that there is a huge risk in uploading personal info; in the future, local AIs will be able to do this. In the short term, they should partner with a known brand to gain credibility.</p>
]]></description><pubDate>Sat, 16 Nov 2024 19:39:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=42158655</link><dc:creator>apinstein</dc:creator><comments>https://news.ycombinator.com/item?id=42158655</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42158655</guid></item><item><title><![CDATA[New comment by apinstein in "Trump wins presidency for second time"]]></title><description><![CDATA[
<p>It’s fascinating how no one mentions that Trump didn’t pass comprehensive immigration legislation during his first term, despite it being core to his platform.<p>This issue is a mess and has been kicked down the road for literally decades at this point. Maybe it will finally get passed…</p>
]]></description><pubDate>Wed, 06 Nov 2024 11:22:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=42060240</link><dc:creator>apinstein</dc:creator><comments>https://news.ycombinator.com/item?id=42060240</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42060240</guid></item><item><title><![CDATA[New comment by apinstein in "MVSplat: Efficient 3D Gaussian Splatting from Sparse Multi-View Images"]]></title><description><![CDATA[
<p>I started and ran a real estate photography platform from 2004-2018. We started R&D on this in ~2016 when consumer VR first came out. At the time we used photogrammetry, and it was “dreadful” to capture due to mirrors, glass, etc.<p>So I have been following GS tech for a while. I’ve not yet seen anything (open source / papers) that quite gets there. I do think it will.<p>In my opinion, there are two useful capabilities GS can bring to this industry.<p>The first is the ability to use photo capture to re-render as high-production-quality video, similar to what people do with Luma AI today. While this is a really cool capability, it’s also not really that hard to do anymore with drones and gimbals. So the experience of creating the same thing via GS has to be better and easier, and it’s not clear when that will happen given how painful the capture side is. You really need good real-time capture feedback to make sure you have good coverage. Finding out there’s a hole once you’re off location is a deal breaker.<p>The second is to create VR-capable experiences. I think the first really useful thing for consumers will be being able to walk around in a small three- or four-foot area and get a stereo sense of what it’s like to be there. This is an amazing consumer experience. But the practicality of scaling this depends on VR hardware and adoption, and that hasn’t yet become commonplace enough to make consumer use “adjacent possible” for broad deployment.<p>I could see it being used at the super high end to start.</p>
]]></description><pubDate>Wed, 14 Aug 2024 10:52:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=41244661</link><dc:creator>apinstein</dc:creator><comments>https://news.ycombinator.com/item?id=41244661</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41244661</guid></item><item><title><![CDATA[New comment by apinstein in "Ask HN: Who wants to be hired? (July 2024)"]]></title><description><![CDATA[
<p>--- Founder / Technical Co-Founder / Exec Leadership / Head of Product ---<p><pre><code>  Location: Atlanta, GA
  Remote: Yes, but prefer hybrid/on-site.
  Willing to relocate: No, but open to occasional travel.
  Technologies: 25 years of full stack work in C/C++/Obj-C/php/js/ruby and more. Data Science in R. Embedded, MacOS, iOS, Web (1.0, 2.0, SPA). AWS/GCP. Postgres.
  Résumé/CV: https://www.linkedin.com/in/alanpinstein/
  Email: please contact through LinkedIn
</code></pre>
I am a 3x-exited founder looking to join an ambitious startup focused on society-altering products. I am particularly interested in the AI, biotech, and sustainability spaces.</p>
]]></description><pubDate>Mon, 01 Jul 2024 22:15:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=40851394</link><dc:creator>apinstein</dc:creator><comments>https://news.ycombinator.com/item?id=40851394</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40851394</guid></item><item><title><![CDATA[New comment by apinstein in "Silicon Valley's best kept secret: Founder liquidity"]]></title><description><![CDATA[
<p>The only fair way to analyze this is by looking at opportunity cost, which isn’t what TFA does.<p>Founders often have slightly higher market value than first employees (though not always), so they are giving up more to go the startup route.<p>Separately, TFA further underestimates founder risk, as founders typically take no salary during pre-seed and no or low salary during seed, whereas employees 1-5 typically get mostly cash, often much closer to market.<p>Thirdly, there is often a lot more stress in being the founder. It is a complex, all-day job. You carry the weight of keeping things going for all employees, and when cash is low it’s your paycheck that gets delayed or cut first, not your employees’.<p>That said, I am all for reasonable early-stage liquidity where it makes sense, but as many other commenters have mentioned, it tends not to be life-changing super early for most early employees. Most employees would rather keep the bet on the table. Also, I am strongly against large founder secondaries. I think it’s helpful for founders to remain feeling “not financially successful”, especially first-time founders, so that they keep their heart in the game. I followed this practice with my companies.</p>
]]></description><pubDate>Wed, 12 Jun 2024 16:26:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=40659903</link><dc:creator>apinstein</dc:creator><comments>https://news.ycombinator.com/item?id=40659903</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40659903</guid></item><item><title><![CDATA[New comment by apinstein in "Launch HN: Aqua Voice (YC W24) – Voice-driven text editor"]]></title><description><![CDATA[
<p>This is really great. I imagined such a thing should exist; it's amazing to see it in reality. It would be great for those of us not limited exclusively to voice to be able to use commands as well, as I still think that in some cases doing simple things explicitly is easier than figuring out how to explain them :)</p>
]]></description><pubDate>Tue, 26 Mar 2024 15:46:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=39829317</link><dc:creator>apinstein</dc:creator><comments>https://news.ycombinator.com/item?id=39829317</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39829317</guid></item><item><title><![CDATA[New comment by apinstein in "Physical Intelligence is building a brain for robots"]]></title><description><![CDATA[
<p>I think it would be significantly less accurate. Their error rates for performing physical tasks would be higher because they lack the sensors to accurately train decent world models. For instance, I don't think they could catch a ball at the same skill level as a sighted child, no matter how hard they tried.<p>So the lack of that sensor will cause the brain to develop poor representations of motion in 3D space.<p>How the lack of those representations would affect other representations is less clear, because the fusion between the LLM (which similarly doesn't have an embodied world-model representation) and the robot AI (which presumably does) obviously works really well.<p>Now, it's possible that the two models are just inter-communicating between their own features (apple the concept and apple the image/object) and connecting them together. The point being that there could be benefits from separate training followed by a post-training connection to bridge any gaps in learned representations.<p>However, I'd think that ultimately a model that can train simultaneously on more sensory input vs. less will have a better, more efficient world model, with more useful and interesting cross-connections between that space and applied uses in non-physical domains.</p>
]]></description><pubDate>Wed, 13 Mar 2024 15:54:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=39693004</link><dc:creator>apinstein</dc:creator><comments>https://news.ycombinator.com/item?id=39693004</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39693004</guid></item><item><title><![CDATA[New comment by apinstein in "Stephen Fry Warns About the Dangers of Voice Clones"]]></title><description><![CDATA[
<p>A reminder: if you have any accounts where the company uses a “voice phrase as password”, call them and have it disabled. They usually have other options, like a secret passphrase.<p>I also taught my whole family a passphrase to verify that any call “from family” is actually that family member and not a shakedown scam.<p>Super easy precautions against really painful consequences.</p>
]]></description><pubDate>Sun, 08 Oct 2023 12:42:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=37809944</link><dc:creator>apinstein</dc:creator><comments>https://news.ycombinator.com/item?id=37809944</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37809944</guid></item><item><title><![CDATA[New comment by apinstein in "Apple Vision"]]></title><description><![CDATA[
<p>I played around a lot with VR180 when it came out. The experience is incredibly, almost uncomfortably, intimate for personal videos. I felt so awkward watching demos of other people’s “blow out the birthday candles” moments. However, the fact that it was uncomfortable means that the technology itself is very good; otherwise it couldn’t produce such an emotional experience.<p>On the tech side, I’m just guessing, but it looks like Apple has an even better version of VR180. A 6DoF version of VR180 seems entirely plausible for Apple to pull off with NeRFs and would be even more incredible.<p>Again, I agree that it’s a bit weird for personal memories, both on the recording side (possibly awkward to wear goggles in those situations) and even when watching them.<p>However, I’d expect Apple to make recording spatial videos possible with iPhone/iPad, which at least fixes the awkward recording issue.<p>Even with that possibility, I think Apple hurt themselves by using this “personal memory spatial video” example.<p>For me, the far better use cases are in entertainment. Professional, live (and recorded) spatial video will be <i>huge</i>. Everyone can have front-row, courtside, or even bird’s-eye views of all forms of in-person entertainment: sports, plays, comedy, concerts, orchestras. The experience of watching it is so intimate experientially that I think it will be amazing. It looks like the tech to make it happen is finally here. Imagine them owning “the App Store” for spatial-video pay-per-view…<p>Excited to see where this goes!</p>
]]></description><pubDate>Tue, 06 Jun 2023 14:10:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=36213064</link><dc:creator>apinstein</dc:creator><comments>https://news.ycombinator.com/item?id=36213064</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36213064</guid></item><item><title><![CDATA[New comment by apinstein in "We come to bury ChatGPT, not to praise it"]]></title><description><![CDATA[
<p>Subject Matter Expert</p>
]]></description><pubDate>Tue, 07 Feb 2023 02:43:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=34687961</link><dc:creator>apinstein</dc:creator><comments>https://news.ycombinator.com/item?id=34687961</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=34687961</guid></item><item><title><![CDATA[New comment by apinstein in "Essential Climbing Knots"]]></title><description><![CDATA[
<p>IIRC the figure 8 on a bight is best for load parallel to the rope, and the butterfly is best for load perpendicular to it. The butterfly is really fun to tie, though :)</p>
]]></description><pubDate>Wed, 10 Aug 2022 11:03:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=32410082</link><dc:creator>apinstein</dc:creator><comments>https://news.ycombinator.com/item?id=32410082</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=32410082</guid></item><item><title><![CDATA[New comment by apinstein in "Barcode Detection API"]]></title><description><![CDATA[
<p>I used the microblink one for a small toy project and it worked well in all browsers.<p><a href="https://demo.microblink.com/self-hosted-api/pdf417" rel="nofollow">https://demo.microblink.com/self-hosted-api/pdf417</a></p>
]]></description><pubDate>Thu, 10 Mar 2022 07:57:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=30624329</link><dc:creator>apinstein</dc:creator><comments>https://news.ycombinator.com/item?id=30624329</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=30624329</guid></item><item><title><![CDATA[New comment by apinstein in "Ivermectin Prophylaxis Used for Covid-19"]]></title><description><![CDATA[
<p>Are you sure?<p>> Results: Of the 223,128 citizens of Itajaí considered for the study, a total of 159,561 subjects were included in the analysis: 113,845 (71.3%) regular ivermectin users and 45,716 (23.3%) non-users.<p>That reads to me like 160k people participated, and 113k optionally chose to take ivermectin as prophylaxis.<p>Am I missing something?</p>
]]></description><pubDate>Tue, 01 Feb 2022 04:22:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=30158595</link><dc:creator>apinstein</dc:creator><comments>https://news.ycombinator.com/item?id=30158595</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=30158595</guid></item></channel></rss>