<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: boshalfoshal</title><link>https://news.ycombinator.com/user?id=boshalfoshal</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 08 Apr 2026 01:44:40 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=boshalfoshal" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by boshalfoshal in "Taste in the age of AI and LLMs"]]></title><description><![CDATA[
<p>The joke is that "taste" usually implies you have some strong personal sense of self and style, but if you walked into tech offices in the bay area everyone looks like that and acts/talks the same.<p>So its ironic that these same people are talking about "taste" when they ostensibly have very little.</p>
]]></description><pubDate>Tue, 07 Apr 2026 17:39:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47678748</link><dc:creator>boshalfoshal</dc:creator><comments>https://news.ycombinator.com/item?id=47678748</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47678748</guid></item><item><title><![CDATA[New comment by boshalfoshal in "Taste in the age of AI and LLMs"]]></title><description><![CDATA[
<p>The thing is, do humans _need_ most software? The fewer surfaces that need to interact with humans, the fewer humans you need in the loop to design those surfaces.<p>In a hypothetical world where AI agents or assistants do the vast majority of random tasks for you, does it matter how pleasing the DoorDash website looks to you? If anything, it should look "good" to an AI agent so that it's easier to navigate. And maybe "looking good" just amounts to exposing some public API to do various things.<p>UIs are wrappers around APIs. Agents only need to use APIs.</p>
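<p>To make the last point concrete - a minimal sketch, assuming a hypothetical ordering API (the base URL, path, and fields are made up for illustration, not any real service's endpoints) - the agent-facing "surface" can be as small as one HTTP call:</p><pre><code>import json
import urllib.request


def place_order(api_base: str, item_id: str, quantity: int) -> dict:
    """POST an order to a hypothetical /v1/orders endpoint and return the parsed response."""
    payload = json.dumps({"item_id": item_id, "quantity": quantity}).encode("utf-8")
    req = urllib.request.Request(
        f"{api_base}/v1/orders",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
</code></pre>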
]]></description><pubDate>Tue, 07 Apr 2026 17:34:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=47678699</link><dc:creator>boshalfoshal</dc:creator><comments>https://news.ycombinator.com/item?id=47678699</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47678699</guid></item><item><title><![CDATA[New comment by boshalfoshal in "Taste in the age of AI and LLMs"]]></title><description><![CDATA[
<p>I think "taste" is definitely an overused meme at this point, its like tech twitter discovered this word in 2024 and never stopped using it (same with "agency", "high leverage", etc).<p>Having read the article, I think I see the author's argument (*). I think "taste" here in an engineering context basically just comes down to an innate feeling of what engineering or product directions are right or wrong. I think this is different from the type of "taste" most people here are talking about, though I'm sure product "taste" specifically is somewhat correlated with your overall "taste." Engineering "taste" seems more correlated with experience building systems and/or strong intuitions about the fundamentals. I think this is a little different from the totally subjective, "vibes based taste" that you might think of in the context of design or art.<p>Now where I disagree is that<p>1. "taste" is a defensible moat<p>2. "taste" is "ai-proof" to some extent<p>"Taste" is only defensible to the extent that knowing what to do and cutting off the _right_ cruft is essential to moving faster. Moving faster and out executing is the real "moat" there. And obviously any cognitive task, including something as nebulous as "taste," can in theory be done by a sufficiently good AI. Clarity of thought when communicating with AI is, imo, not "taste."<p>Talking specifically about engineering - the article talks about product constraints and tradeoffs. I'd argue that these are actually _data_ problems, and once you solve those, tradeoffs and solving for constraints go from being a judgement call to being a "correct" solution. That is to say, if you provide more information to your AI about your business context, the less judgement _you_ as the implementer need to give. This thinking is in line with what other people here have already said (real moats are data, distribution, execution speed).<p>I think there's something a bit more interesting to say about the user empathy part, since it could be difficult for LLMs to truly put themselves in users shows when designing some interactive surfaces. But I'm sure that can be "solved" too, or at least, it can be done with far less human labor than it already takes.<p>In general though, tech people are some of the least tasteful people, so its always funny to see posts like this.</p>
]]></description><pubDate>Tue, 07 Apr 2026 16:36:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47677923</link><dc:creator>boshalfoshal</dc:creator><comments>https://news.ycombinator.com/item?id=47677923</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47677923</guid></item><item><title><![CDATA[New comment by boshalfoshal in "What if AI doesn't need more RAM but better math?"]]></title><description><![CDATA[
<p>Well, considering basically the entire market was down these past few days, Google included, it's unlikely to be attributable to this paper alone. It's most likely correlated with general fears about war, trade-route restrictions, and a potential recession, or at least more correlated with those than with this paper.<p>This paper was released a year ago and was probably part of how Google got to 1M context before other labs.</p>
]]></description><pubDate>Mon, 30 Mar 2026 06:27:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47571055</link><dc:creator>boshalfoshal</dc:creator><comments>https://news.ycombinator.com/item?id=47571055</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47571055</guid></item><item><title><![CDATA[New comment by boshalfoshal in "Which jobs are most vulnerable to AI?"]]></title><description><![CDATA[
<p>Answer: any job where the majority (or all) of your work can be done strictly on a computer, and where the tasks have easily verifiable, objective outcomes. And from an economic perspective, jobs with the highest labor cost (i.e., the highest margins for AI companies to capture by replacing them) have the strongest economic incentive to be automated first. So Software, Finance, Accounting, Law, etc.<p>Yes - this means software engineers are likely the first to go, along with other high-paying computer jobs.</p>
]]></description><pubDate>Mon, 16 Mar 2026 22:52:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=47406108</link><dc:creator>boshalfoshal</dc:creator><comments>https://news.ycombinator.com/item?id=47406108</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47406108</guid></item><item><title><![CDATA[New comment by boshalfoshal in "Ask HN: What is it like being in a CS major program these days?"]]></title><description><![CDATA[
<p>Yeah, and 2s has not been doing too hot for a few years now. Jane Street I buy - they tend to recruit a lot of CMU students. But definitely fewer than 15 of the new grads they hire each year are from CMU. They maybe hire on the order of 50-100 new-grad SWEs a year.</p>
]]></description><pubDate>Mon, 16 Mar 2026 22:48:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47406052</link><dc:creator>boshalfoshal</dc:creator><comments>https://news.ycombinator.com/item?id=47406052</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47406052</guid></item><item><title><![CDATA[New comment by boshalfoshal in "AI doesn't replace white collar work"]]></title><description><![CDATA[
<p>It will probably be a lot worse, since white-collar workers (especially the ones AI is targeting, like banking and software, since those are very high-margin jobs to automate) traditionally make and spend more than the average worker.<p>These are the people getting mortgages and sending kids to private school and whatnot. If their spending power suddenly drops to 0, it's probably going to be pretty bad. I wonder what the housing market would look like in that case.</p>
]]></description><pubDate>Tue, 10 Mar 2026 21:18:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=47328891</link><dc:creator>boshalfoshal</dc:creator><comments>https://news.ycombinator.com/item?id=47328891</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47328891</guid></item><item><title><![CDATA[New comment by boshalfoshal in "AI doesn't replace white collar work"]]></title><description><![CDATA[
<p>I agree. I think most companies would be better off being 100% AI-driven, since the synchronization problem for agents (or whatever the fad will be) is likely much smaller than human social synchronization, and there is richer information transfer between "workers" (so less ambiguity, fewer tradeoffs to be made, etc.).<p>As soon as a person enters the loop you add a manual sync point that probably doesn't need to be there. I think this is why you are increasingly seeing companies tell their people to be "on the loop" or "out of the loop" with their AI. The less syncing with a person, the better. And I think once this experiment runs its course, we will probably find out that human social interaction matters much less than we thought it did, especially for highly transactional things like a corporate job where most of your work is done on a computer.</p>
]]></description><pubDate>Tue, 10 Mar 2026 21:14:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=47328861</link><dc:creator>boshalfoshal</dc:creator><comments>https://news.ycombinator.com/item?id=47328861</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47328861</guid></item><item><title><![CDATA[New comment by boshalfoshal in "AI doesn't replace white collar work"]]></title><description><![CDATA[
<p>> ... Just that it doesn’t replace the social, human, and relationship based aspects of work, whether this is trust, or just being interested in what someone else says.<p>Yeah, I also don't buy this. Most white-collar work _seemingly_ necessitates trust, social/human aspects, etc. because we _have_ to interact with other humans, and the way we interact with each other is lossy and often involves misaligned or unstated motivations.<p>In other words, most white-collar work _seems_ bottlenecked on people-centric things because we have imperfect information about what other people want, so we have to use soft skills (i.e., skills only real humans have) to actually figure out the motivations of various stakeholders and align expectations, garner favor, etc. amongst all of them. In a world where most of the workforce is AI, I think this problem of tacit information gets largely solved, since AIs can, in theory, convey their intent and losslessly send information to one another without the need to waste time "aligning."<p>The other thing people argue, especially in software, is that architecture and tradeoff decisions will remain in the human realm, because apparently only people have the "taste" to pick and choose the right solutions. I also think that:<p>(1) this will be easily solved by AI/current LLMs, since logically there shouldn't be a big difference between designing and writing good code and designing good systems architecture, and LLMs are ostensibly already good at coding<p>(2) "taste" and "tradeoffs" are information problems: if you had more information (once again, if you could convey most or all necessary information losslessly between everyone in your org), things that appeared to be "tradeoffs" before might just collapse into clear-cut decisions.<p>Also, just practically speaking, the stated goal of AI companies is to automate all labor. They won't just sit back happily collecting checks if there are parts of the economy they can't automate; that's revenue they could easily capture. Whatever people claim AI lacks today will just be added to it in 6 months; AI companies are strongly incentivized to work towards this.<p>And at the end of the day, work is a transaction between employees and employers. A company's primary purpose is to generate money for shareholders, and human labor is just how it gets done. It doesn't matter if I _want_ to talk to a nice coworker instead of Claude 4.6 opus. If Claude costs less than my nice coworker and has the same or better output, the company will happily replace that coworker with Claude because it's strictly beneficial for the company.</p>
]]></description><pubDate>Tue, 10 Mar 2026 21:04:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47328766</link><dc:creator>boshalfoshal</dc:creator><comments>https://news.ycombinator.com/item?id=47328766</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47328766</guid></item><item><title><![CDATA[New comment by boshalfoshal in "Why software stocks are getting pummelled"]]></title><description><![CDATA[
<p>Two things:<p>1. An AI that can code well seems like it would also get pretty close to being good at basically everything else you described. If coding is a game of reasoning, then once you solve that, you have effectively solved reasoning, and you can likely map it to most other problems provided you have a sufficiently good harness and tool-calling setup.
2. Let's assume AI won't replace everyone as point (1) implies, and it just replaces _most_ people. Under this assumption, we will likely see large swathes of layoffs. Many SaaS companies have a pay-per-seat model. Fewer people employed at companies = fewer seats being paid for = less SaaS revenue.<p>So not only is there a threat of companies just vibe-coding various SaaS products in-house, but there is also a threat that the TAM of many SaaS products (which is typically proportional to the number of employees out there) will actually _shrink_.<p>I think the main SaaS companies that will remain in the medium term are the ones in legally touchy or compliance-heavy industries - think healthcare, finance, and security (Workday, for example). But even Workday will be affected by point (2) above. Overall, I think the mid-to-long-term outlook for SaaS, especially per-seat SaaS, is not great.</p>
]]></description><pubDate>Tue, 03 Feb 2026 10:25:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=46869137</link><dc:creator>boshalfoshal</dc:creator><comments>https://news.ycombinator.com/item?id=46869137</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46869137</guid></item><item><title><![CDATA[New comment by boshalfoshal in "Will AI Replace Human Thinking? The Case for Writing and Coding Manually"]]></title><description><![CDATA[
<p>Yes, it will replace human thinking. That's quite literally the explicit goal of every AI company.<p>Historically, every technological revolution serves to replace some facet of human labor (usually with the incentive of squeezing out more profit as technology gets cheaper over time while wages do not).<p>Industrial revolution == automate non-dexterous manual labor<p>Information age == automate "computational"/numerical thinking<p>AI == automate thinking<p>Robotics + AI == automate dexterous manual labor</p>
]]></description><pubDate>Thu, 28 Aug 2025 20:02:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=45056425</link><dc:creator>boshalfoshal</dc:creator><comments>https://news.ycombinator.com/item?id=45056425</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45056425</guid></item><item><title><![CDATA[New comment by boshalfoshal in "Mark Zuckerberg freezes AI hiring amid bubble fears"]]></title><description><![CDATA[
<p>Clickbait title and article. There was a large reorg of GenAI/MSL and several other teams, so things have been shuffled around and they likely don't want to hire into the org while that is being finalized.<p>A freeze like this is common and basically just signals that they are ready to get to work with the current team they have. The whole point of the AI org is to be a smaller, more focused, leaner org, and they have been making several strategic hires for months at this point. All this says is that Zuck thinks the org is in a good spot to start executing.<p>From talking with people at and outside of the company, I don't have much reason to believe this is some kneejerk reaction to a supposed realization that "it's all a bubble." I think people are conflating this with whatever Sam Altman said about a bubble.</p>
]]></description><pubDate>Thu, 21 Aug 2025 15:05:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=44973706</link><dc:creator>boshalfoshal</dc:creator><comments>https://news.ycombinator.com/item?id=44973706</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44973706</guid></item><item><title><![CDATA[New comment by boshalfoshal in "Mira Murati’s AI startup Thinking Machines valued at $12B in early-stage funding"]]></title><description><![CDATA[
<p>Well, this is blatantly false: she linked the careers page, and I know of people who received offers recently.<p>They have very strong talent from Meta's FAIR/PyTorch teams, as well as a lot of strong people from OAI.</p>
]]></description><pubDate>Tue, 15 Jul 2025 20:23:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=44575444</link><dc:creator>boshalfoshal</dc:creator><comments>https://news.ycombinator.com/item?id=44575444</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44575444</guid></item><item><title><![CDATA[New comment by boshalfoshal in "Nvidia Becomes First Company to Reach $4T Market Cap"]]></title><description><![CDATA[
<p>I think Tesla's valuation is a bit disconnected from fundamentals too, but calling its revenue "collapsing" is a bit dramatic.</p>
]]></description><pubDate>Thu, 10 Jul 2025 01:54:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=44516460</link><dc:creator>boshalfoshal</dc:creator><comments>https://news.ycombinator.com/item?id=44516460</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44516460</guid></item><item><title><![CDATA[New comment by boshalfoshal in "I don't think AGI is right around the corner"]]></title><description><![CDATA[
<p>Keep in mind - this is not reaffirming HN's anti-AGI/extremely-long-timeline beliefs.<p>The article explicitly states that the author thinks we will have an AI system that "Will be able to do your taxes" by 2028, and a system that could basically replace all white-collar work by 2032.<p>I think an autonomous system that can reliably do your taxes with minimal to no input is already very, very good, and 2032 as the benchmark for being able to replace 90% to all of white-collar work is pretty much AGI, in my opinion.<p>FWIW, I think the fundamental problems he describes in the article as AGI blockers are likely to be solved sooner than we think. Labs are not stupid enough to throw all their eggs and talent into the scaling basket; they are most definitely allocating resources to tackling problems like the ones described in the article, while putting the remaining resources into bottom-line production (scaling current model capabilities without expensive R&D and reducing serving/training costs).</p>
]]></description><pubDate>Mon, 07 Jul 2025 18:34:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=44493336</link><dc:creator>boshalfoshal</dc:creator><comments>https://news.ycombinator.com/item?id=44493336</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44493336</guid></item><item><title><![CDATA[New comment by boshalfoshal in "CEOs Start Saying the Quiet Part Out Loud: AI Will Wipe Out Jobs"]]></title><description><![CDATA[
<p>The first line is just some cope people use to tell themselves they are different.<p>Someone using AI won't "take" your job; they'll just get more done than you, and when the company inevitably fires more people because AI can do more and more work autonomously, the first people to go will be the ones not producing as much (i.e., the people not using AI).<p>In the limit, both groups are getting their jobs taken by AI. Knowing how to use AI is not some special skill.</p>
]]></description><pubDate>Thu, 03 Jul 2025 15:25:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=44456026</link><dc:creator>boshalfoshal</dc:creator><comments>https://news.ycombinator.com/item?id=44456026</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44456026</guid></item><item><title><![CDATA[New comment by boshalfoshal in "GitHub CEO: manual coding remains key despite AI boom"]]></title><description><![CDATA[
<p>IMO this is a misunderstanding of what AI companies want AI tools to be and where the industry is heading in the near future. The endgame for many companies is SWE automation, not augmentation.<p>To expand -<p>1. Models "reason" and can increasingly generate code given natural language. It's not just fancy autocomplete; it's like having an intern-to-mid-level engineer at your beck and call to implement some feature. Natural language is generally sufficient when I interact with other engineers, so why is it not sufficient for an AI, which (in the limit) approaches an actual human engineer?<p>2. Business-wise, companies will not settle for augmentation. Software companies pay tons of money in headcount; it's probably most mid-sized companies' top or second line item. The endgame for leadership at these companies is to do more with less. This necessitates automation (in addition to augmenting the remaining roles).<p>People need to stop thinking of LLMs as "autocomplete on steroids" and start thinking of them as a "24/7 junior SWE who doesn't need to eat or sleep and can do small tasks at 90% accuracy given a reasonable spec." Yeah, you'll need to edit their code once in a while, but they also keep getting better and cost less than an actual person.</p>
]]></description><pubDate>Mon, 23 Jun 2025 23:30:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=44361280</link><dc:creator>boshalfoshal</dc:creator><comments>https://news.ycombinator.com/item?id=44361280</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44361280</guid></item><item><title><![CDATA[New comment by boshalfoshal in "The Gentle Singularity"]]></title><description><![CDATA[
<p>Pessimistically, you are right: there will be no new jobs. The entire goal of these companies is to monopolize near-zero-marginal-cost labor. Another way to read this is that humans are no longer necessary for economic progress.<p>All that I hope for in this case is that governments actually take this seriously and that labs, governments, and people work together to create better societal systems to handle it. Because as it stands, under capitalism I don't think anyone is going to willingly give up the wealth they made from AI to spread to the populace as UBI. This is necessary in some capitalist system (if we want to maintain one), since it's built on consumption and spending.<p>Though if it's truly an "abundance" scenario, then I'd imagine it probably wouldn't matter that people don't have jobs, since I'd assume everything would be dirt cheap and quality of life would be very high. Personally, though, I am very cynical when it comes to "AGI is magic pixie dust that can solve any problem" takes, and I'd assume in the short term companies will lay people off in swathes because "AI can do your job," while AI will be nowhere close to increasing those laid-off people's quality of life. It'll be a tough few years if we don't actually get transformative AI.</p>
]]></description><pubDate>Tue, 10 Jun 2025 22:22:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=44242148</link><dc:creator>boshalfoshal</dc:creator><comments>https://news.ycombinator.com/item?id=44242148</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44242148</guid></item><item><title><![CDATA[New comment by boshalfoshal in "The ‘white-collar bloodbath’ is all part of the AI hype machine"]]></title><description><![CDATA[
<p>And as it stands, AI is nowhere close to (1) and (2), but is pretty close to making all of (3) redundant.<p>This could be because most work is actually frivolous (very possible), but it's also easy for them to sell those, since ostensibly (1) and (2) actually require a lot of out-of-distribution reasoning, thinking, and real agentic research (which current models probably aren't capable of).<p>(3) just makes the most money now with the current technology. Curing cancer with LLMs, though altruistic, is less realistic and has no clear path to immediate profitability because of that.<p>These "AGI" companies aren't doing this out of the goodness of their hearts with humanity in mind; it's pretty clearly meant to be a "final company standing" type race where everyone at the {winning AI Company} is super rich and powerful in whatever new world paradigm shows up afterwards.</p>
]]></description><pubDate>Tue, 03 Jun 2025 02:33:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=44165714</link><dc:creator>boshalfoshal</dc:creator><comments>https://news.ycombinator.com/item?id=44165714</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44165714</guid></item><item><title><![CDATA[New comment by boshalfoshal in "A Song of “Full Self-Driving”"]]></title><description><![CDATA[
<p>You are thinking about "hard" and "easy" in the wrong frame of mind. What Tesla does is not "easy" either. Their moat is manufacturing, the R&D they've spent co-designing their HW and SW stack, and their insane supply chain.<p>Ford does not suddenly have several million cars with 8-9 cameras to tap into for training data, nor does it have the infrastructure/talent to train models on the data it might get. I think you are underselling the Tesla moat.<p>It's the same reason there are only 3-4 "frontier" AI labs and the rest are just playing catch-up, despite a lot of LLM improvements being pretty well researched and openly published in papers.</p>
]]></description><pubDate>Thu, 29 May 2025 15:31:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=44127036</link><dc:creator>boshalfoshal</dc:creator><comments>https://news.ycombinator.com/item?id=44127036</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44127036</guid></item></channel></rss>