<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: opportune</title><link>https://news.ycombinator.com/user?id=opportune</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 10 Apr 2026 11:09:38 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=opportune" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by opportune in "How Figma's databases team lived to tell the scale"]]></title><description><![CDATA[
<p>What is your exact question? To me it makes sense that you wouldn’t want to use NoSQL if you’re dealing with data that’s already relational and heavily leveraging features common in relational DBs that may not come out of the box with NoSQL DBs.<p>My understanding is that they’re basically saying NoSQL DBs solve a lot of horizontal scaling problems but aren’t a good fit for their highly relational data. Not that they couldn’t get NoSQL-style functionality, eg at the query level, in relational DBs.</p>
]]></description><pubDate>Thu, 14 Mar 2024 19:47:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=39708205</link><dc:creator>opportune</dc:creator><comments>https://news.ycombinator.com/item?id=39708205</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39708205</guid></item><item><title><![CDATA[New comment by opportune in "How Figma's databases team lived to tell the scale"]]></title><description><![CDATA[
<p>My perspective from working both inside and outside of Google:<p>The external Spanner documentation doesn’t seem as good as the internal documentation, in my opinion. Because it’s not generally well known outside of G, they ought to do a better job explaining it and its benefits. It truly is magical technology, but you have to be a database nerd to see why.<p>It’s also pretty expensive, and because you generally need to rewrite your applications to work with it, there is a degree of lock-in. So taking on Spanner is a risky proposition - if prices get hiked or it starts costing more than you want, you’ll have to spend even more time and money migrating off it. Spanner’s advantages over other DBs (trying to “solve” the CAP theorem) then become a curse, because it’s hard to find any other DB that gives you horizontal scaling, ACID, and high availability out of the box, and you might have to solve those problems yourself/redesign the rest of your system.<p>Personally I would consider using Cloud Spanner, but I wouldn’t bet my business on it.</p>
]]></description><pubDate>Thu, 14 Mar 2024 19:37:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=39708083</link><dc:creator>opportune</dc:creator><comments>https://news.ycombinator.com/item?id=39708083</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39708083</guid></item><item><title><![CDATA[New comment by opportune in "Biden proposes 30% tax on crypto mining"]]></title><description><![CDATA[
<p>This is the lump of labor fallacy in reverse isn't it? Those people would eventually be employed by other industries - both the labor and $ formerly going to yachts would be used somewhere else.</p>
]]></description><pubDate>Tue, 12 Mar 2024 01:30:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=39675193</link><dc:creator>opportune</dc:creator><comments>https://news.ycombinator.com/item?id=39675193</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39675193</guid></item><item><title><![CDATA[New comment by opportune in "AI behavior guardrails should be public"]]></title><description><![CDATA[
<p>I think this is a fair approach when things work well enough that a typical user doesn’t need to worry about whether they’ll trigger some kind of special content/moderation logic. If you shadowban spammers and real users almost never get flagged as spammers, the benefits of being tight-lipped outweigh those of the very few users who get improperly flagged or are just curious.<p>With some of these models the guardrails are so clumsy and forced that I think almost any typical user will notice them. Because they include outright work-refusal, it’s a very frustrating UX to have to “discover” the policy for yourself through trial and error.<p>And because they’re more about brand management than preventing fraud/bad UX for other users, the failure modes are “someone deliberately engineered a way to get objectionable content generated in spite of our policies.” Obviously some kinds of content are objectionable enough for this to still be worth it, but those are mostly in the porn area - if somebody figures out a way to generate an image that’s just not PC, despite all the safety features, shouldn’t that be on them rather than the provider?<p>Even tuning the model for political correctness is not the end of the world in my opinion; a lot of LLMs do a perfectly reasonable job for my regular use cases. With image generators, though, they go so far as to obviously (there’s no other way that makes sense) insert diversity sub-prompts for some fraction of images, which is simply confusing and amateur. Everybody who uses these products even a little bit will notice it. It’s also so cautious that even mild stuff (I tried to do the “now make it even more X” with “American” and it stopped at one iteration) gets caught in the filters. You’re going to find out the policies anyway because they’re so broad and likely to be encountered while using the product innocently - anything a real non-malicious user is likely to get blocked by should be documented.</p>
]]></description><pubDate>Thu, 22 Feb 2024 00:36:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=39461614</link><dc:creator>opportune</dc:creator><comments>https://news.ycombinator.com/item?id=39461614</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39461614</guid></item><item><title><![CDATA[New comment by opportune in "The case against caffeine"]]></title><description><![CDATA[
<p>I think you are using energy drinks in your example for effect, only because people overrate their caffeine content relative to coffee. Coffee drinkers who have a full tumbler/to-go cup in the morning, plus a Starbucks or continual refills from the office pot at work, can far exceed the caffeine intake of 3 energy drinks. Meanwhile, Red Bulls actually have deceptively little caffeine and are barely worse than a regular Coca-Cola.<p>It’s very easy to get into the habit of extremely high caffeine intake, speaking from experience. Not only is caffeine in many drinks and extremely widely distributed, its psychological effects are pretty mild. But I think the worst part is that people don’t really think about how much caffeine is in the coffee they’re drinking, because it’s hard to measure.</p>
]]></description><pubDate>Tue, 20 Feb 2024 01:45:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=39437085</link><dc:creator>opportune</dc:creator><comments>https://news.ycombinator.com/item?id=39437085</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39437085</guid></item><item><title><![CDATA[New comment by opportune in "The case against caffeine"]]></title><description><![CDATA[
<p>For people with high daily intake it’s not so much that they want to get “high” on coffee to enjoy the conversation, as it is that their brain is in acute caffeine withdrawal and they will feel so shitty without coffee that they won’t be able to participate in conversation very well.</p>
]]></description><pubDate>Tue, 20 Feb 2024 01:29:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=39436985</link><dc:creator>opportune</dc:creator><comments>https://news.ycombinator.com/item?id=39436985</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39436985</guid></item><item><title><![CDATA[New comment by opportune in "The AI bullshit singularity"]]></title><description><![CDATA[
<p>What does me telling LLMs to write articles for "best vacuum cleaners 2024" and putting it on the Internet have to do with the ability of LLMs to improve themselves? Humans write those kinds of articles for the Internet as it is, and yet humans are the ones designing and improving LLMs now.</p>
]]></description><pubDate>Sun, 18 Feb 2024 20:34:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=39422929</link><dc:creator>opportune</dc:creator><comments>https://news.ycombinator.com/item?id=39422929</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39422929</guid></item><item><title><![CDATA[New comment by opportune in "The AI bullshit singularity"]]></title><description><![CDATA[
<p>The Internet is a pull model. I can go to Gwern's website directly and not care that most other websites have crap on them.<p>People choose to use push models for content through Meta properties, TikTok, and aggregators like Reddit and HN, but nothing is forcing them to. If they push enough bad content, people won't keep using them. It already happened to Facebook's and Reddit's predecessors, and it's probably happening to Reddit now.<p>It doesn't matter how big the haystack is when you have the ability to go directly to the needle.</p>
]]></description><pubDate>Sun, 18 Feb 2024 20:22:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=39422772</link><dc:creator>opportune</dc:creator><comments>https://news.ycombinator.com/item?id=39422772</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39422772</guid></item><item><title><![CDATA[New comment by opportune in "The AI bullshit singularity"]]></title><description><![CDATA[
<p>The internet is already mostly filled with low-quality bullshit though, and GPT-4/Gemini are much better writers than whoever is churning out SEO content as we know it now.<p>It's a lazy argument to imply this 1. invalidates the technological achievement or 2. prevents iterative improvement a la the singularity. For one, the Internet itself is not bullshit just because a lot of spammers/hustlers put bad content on it to try to make money. And secondly, you can curate datasets... nothing's stopping researchers from training LLMs on shitty SEO now, and if they wanted to they could curate datasets going forward to try to prevent LLM spam from entering the training sets of future models.<p>And finally, people already use reputation/identity/branding, and proxies for them, as quality filters on the internet. For example, this is an unfamiliar blogger to me, so I approached the post with skepticism I wouldn't have with people like Gwern or Lynn Alden. Good writing from people like Gwern and Lynn Alden won't disappear just because LLM content exists on the internet - it just makes reputation and identity (eg ties to a real human) more important.</p>
]]></description><pubDate>Sun, 18 Feb 2024 20:14:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=39422666</link><dc:creator>opportune</dc:creator><comments>https://news.ycombinator.com/item?id=39422666</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39422666</guid></item><item><title><![CDATA[New comment by opportune in "Why is everything based on likelihoods even though likelihoods are so small?"]]></title><description><![CDATA[
<p>I've seen this exact same fallacy happen several times throughout my career, which isn't even very long.<p>I think in many cases it boils down to some subtype not being identified and evaluated on its own. As in your case, it's especially impactful, and yet IME also usually improperly prioritized, when it's a user's first impression or when it occurs in a way that causes a user to just sit and wait on the other end, as these are often "special" cases with different logic in your application code.<p>OTOH sometimes users try weird/wrong/adversarial shit and so their high failure rate is working as intended. But it pollutes your stats such that it can hide real issues with similar symptoms and skew distributions.</p>
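<p>A toy sketch of that stats-pollution effect, with made-up numbers and a hypothetical "first_login" subtype: a subtype that fails 100% of the time can hide inside a healthy-looking aggregate success rate.</p>

```python
# Made-up traffic: 990 ordinary requests that succeed, plus 10
# "first_login" requests hitting a broken special-case code path.
requests = [("normal", True)] * 990 + [("first_login", False)] * 10

# Aggregate success rate over all requests.
overall_success = sum(ok for _, ok in requests) / len(requests)

# Success rate for the subtype evaluated on its own.
first_login = [ok for kind, ok in requests if kind == "first_login"]
first_login_success = sum(first_login) / len(first_login)

print(overall_success)      # 0.99 - looks fine in aggregate
print(first_login_success)  # 0.0  - the subtype is completely broken
```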
]]></description><pubDate>Sun, 18 Feb 2024 19:33:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=39422286</link><dc:creator>opportune</dc:creator><comments>https://news.ycombinator.com/item?id=39422286</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39422286</guid></item><item><title><![CDATA[New comment by opportune in "My sixth year as a bootstrapped founder"]]></title><description><![CDATA[
<p>Not everybody wants to sell their startup though. Accessing that liquidity without selling can be very, very risky. Also, small companies like this can’t command big premiums because too much operational/institutional knowledge is held by the owners, so a lot of that “profit” is really more like a wage for the founder.<p>I’m in the infancy of a bootstrapped company while witnessing the general enshittification of all these formerly celebrated VC-backed companies as they prepare for/adjust to being public companies. I also recall many decent products like Quora (yes, there was a period where it was actually good) going to shit and failing chasing $$. I feel like the culture is shifting for new founders, and a lot of us want to aim for sustainable businesses that deliver real value rather than VC moonshots that say yes to everything that makes them more money.<p>I think “lifestyle” business carries negative connotations, and bootstrapping/roofshots don’t capture this mindset yet. For me personally I guess it’s like: I’d rather have a $100m business that does cool shit and is focused on doing one thing than be forced into chasing growth at all costs in all directions to get a potentially much bigger exit. If I had such an exit I’d draw it down at a sustainable rate anyway, which ends up not much different from the earnings of a private company of similar value.<p>I guess with similar luck and results maybe I could be a billionaire instead of a hundred-millionaire, but idk if that really matters, and I’d then get the typical SV meaningless ennui while being glad to leave the shitshow I created before it got even shittier. Running something I’m proud of for the long term seems like a more meaningful and rewarding way to spend my time, and it can actually make the world better even if it doesn’t capture as much of that value.<p>Like, what if the dominant social media company had refused to serve ads and optimize for watch time? What if major cloud providers had a cohesive vision instead of cobbling together every stupid thing a $100mm-spend customer wanted? What if YouTube could serve recommendations based on similarity/enjoyment? They could still be major successful companies. But selling your company, taking on investors, and maybe even lending against your equity threatens to destroy that.</p>
]]></description><pubDate>Sat, 17 Feb 2024 01:15:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=39405281</link><dc:creator>opportune</dc:creator><comments>https://news.ycombinator.com/item?id=39405281</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39405281</guid></item><item><title><![CDATA[New comment by opportune in "Trading trust"]]></title><description><![CDATA[
<p>A lot of this is because of news media though.<p>News media is in direct competition with big tech companies for advertising. The more eyeballs go towards big tech and not news companies, the less market share and relevancy they have.<p>Even worse for them, when people want news now they generally go to aggregators, search for it on Google, or get it served up by a Meta property. It used to be that instead people would read the newspaper or go to a news channel on TV. So news media is furious that big tech controls their top of funnel and distribution channels, as consumers typically prefer it that way vs directly seeking out news by going to cnn.com. In some places they’ve pushed link taxes, which tech companies strongly criticized for entirely legit reasons and threatened to pull services over, which upped the animosity.<p>Also, because news is monetized through advertising, they need stories and narratives that capture people’s interest and attention. Nobody would care about a story like “Google Scholar revolutionized research discovery and accessibility and improved geographical collaboration a billion %” or “Waymo actually works pretty well no complaints” or “most SF residents actually like Waymo”. But controversy like “Waymo ran into something” is more attention-grabbing the more they spin it as evil. Additionally, “good thing continues to be good” is not news, but “good thing is actually bad” and “recognizable company X did a bad thing” are news. Similarly, “fall from grace,” “David vs Goliath,” and “these people made a lot of money so you should blame them for not having money” are consistently popular narratives people like.<p>So news media have literally every reason to drag big tech through the mud and pretty much no reason to ever say anything good about them.
For sure these companies have problems but you don’t hear about the good things (IMO Meta has made huge improvements in organizational/data security, and their products drive a lot of commerce in developing countries; Amazon warehouses are usually in places where $15-20/hr is actually a huge step up for local inhabitants; big tech is much better than Microsoft and other old school players at fighting unreasonable law enforcement requests) and the bad things are often overplayed/slanted.</p>
]]></description><pubDate>Sat, 17 Feb 2024 00:46:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=39405058</link><dc:creator>opportune</dc:creator><comments>https://news.ycombinator.com/item?id=39405058</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39405058</guid></item><item><title><![CDATA[New comment by opportune in "Trading trust"]]></title><description><![CDATA[
<p>I think there are a few different but related concepts bundled into “law enforcement” here.<p>A low-trust authoritarian state that restricts personal freedoms has to use heavy-handed policing/secret policing with severe punishments to maintain that system. Without heavy policing the state would probably collapse internally.<p>A regular place needs policing to prevent the kinds of crimes we consider unambiguously bad, like murder, theft, and rape. Like I mentioned elsewhere, trust/cooperation is game theoretical, and I think that shows up in our genes such that there is always some latent number of people predisposed to antisocial behavior. So you always need that.<p>But in the second case, the level of trust does reflect how much policing you need. If there is not much crime you don’t need that many police. You probably do always need some, but the real world demonstrates that some places just have more criminality (lower trust) and require more police.</p>
]]></description><pubDate>Sat, 17 Feb 2024 00:28:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=39404888</link><dc:creator>opportune</dc:creator><comments>https://news.ycombinator.com/item?id=39404888</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39404888</guid></item><item><title><![CDATA[New comment by opportune in "Trading trust"]]></title><description><![CDATA[
<p>I mean, that was the case even more so in the Soviet Union, right? They are the originators of the phrase “Trust, but verify,” after all, as well as the term “politically correct” (before it turned into a slightly different culture-war term).<p>I think trust is not really one singular thing either, and it kind of falls apart when you look at places like China or Japan. For example, in Japan people don’t generally fear petty theft of bikes or electronics, but they have women-only train cars and the government forces smartphone cameras to make a sound to prevent creepshots. In China you have the zero-sum “it is good for me when others fail” mentality but also Guanxi and genuine patriotism.<p>Technology and law enforcement probably do allow large-scale societies to persist with extremely low trust, but the more concentrated power becomes, the more that state’s continuance is subject to the whims of a small number of people who could either change their mind (like Gorbachev) or fuck things up so badly that they get overthrown (Romania). I think it helps that leaders and police/secret police also live within that broader low-trust society, so they do have some incentive to not make it too bad.</p>
]]></description><pubDate>Sat, 17 Feb 2024 00:18:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=39404805</link><dc:creator>opportune</dc:creator><comments>https://news.ycombinator.com/item?id=39404805</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39404805</guid></item><item><title><![CDATA[New comment by opportune in "Trading trust"]]></title><description><![CDATA[
<p>The level of trust in society is game theoretical and unstable at the extremes. I don’t really consider it technology either; you can easily envision hunter-gatherers having high- or low-trust perspectives towards other bands based on eg prior interactions or shared language/beliefs/culture. Calling this technology is like calling language technology - maybe it is, but I think it’s also something we developed evolutionarily because it was advantageous.<p>From the game theory perspective, a high-trust society makes it easy for bad actors to abuse that trust for personal gain, which at large scale lowers trust at the societal level. A low-trust society incentivizes people to build subcommunities of higher trust to get things done (which can grow to encompass lots of society) or can be outcompeted by a higher-trust society, as you say. Maybe this is all covered in that book.<p>Clearly there is enough variance to say that societies do not all gravitate towards a fixed equilibrium though. I think a lot of this is due to institutions (eg religion, government, educational systems, militaries) and cultural factors (some cultures value cunning and ruthlessness, others conformity, etc., which can be influenced even by language or the physical environment). Many edgy internet commenters seem to equate low/high trust with race and ethnicity, but if you have ever been in a well run technology company or the US military, or a low-trust homogenous society, you’ll see this is obviously wrong.<p>What I’ve been thinking about a lot lately while I bootstrap is whether it’s possible for a group to be resilient to “selfish” bad actors by making cooperation strictly more optimal than defection. At small scales I think this can be accomplished through a BDFL, but I’m really interested in figuring out if another approach can scale into the ~thousands.</p>
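<p>One standard formalization of the “make cooperation strictly better than defection” condition is the iterated prisoner’s dilemma with grim-trigger punishment; a minimal sketch with illustrative payoff values (T, R, P, S are assumptions, not from the thread):</p>

```python
from fractions import Fraction

# Illustrative prisoner's dilemma payoffs, with T > R > P > S:
# T = temptation to defect, R = mutual cooperation,
# P = mutual defection,     S = sucker's payoff.
T, R, P, S = 5, 3, 1, 0

def cooperation_sustainable(delta):
    """Under grim trigger (defect forever after any defection), cooperating
    forever beats a one-shot defection iff
        R/(1-d) >= T + d*P/(1-d),  i.e.  d >= (T-R)/(T-P),
    where d is the discount factor (how much players value the future)."""
    return delta >= Fraction(T - R, T - P)

print(Fraction(T - R, T - P))                   # threshold: 1/2
print(cooperation_sustainable(Fraction(1, 4)))  # False: too short-sighted
print(cooperation_sustainable(Fraction(3, 4)))  # True: future matters enough
```

The group-design question in the comment maps onto raising R or lowering T until the threshold is trivially met for everyone.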
]]></description><pubDate>Fri, 16 Feb 2024 18:18:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=39400888</link><dc:creator>opportune</dc:creator><comments>https://news.ycombinator.com/item?id=39400888</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39400888</guid></item><item><title><![CDATA[New comment by opportune in "Satya Nadella uses an IBM AS/400 in 1993"]]></title><description><![CDATA[
<p>Microsoft is one of the few companies that seems to do a good job allowing employees to rise through the ranks from bottom to top. Most of the Partners I have met were cultivated internally. Facebook is also like this, with the added bonus of allowing much faster progression for high performers than most other companies (I know someone who became an E6 Engineering Manager/Tech Lead 3 years out of college. I don’t think it was necessarily for “bullshit” either; his work was very fundamental and important to the company).<p>But this is rather rare, and most companies have a soft ceiling for internal growth. At Google, for example, for years they have been filling most Director positions externally, so most employees find it very hard to get there and progress past that level. Progression is also often subject to norms that make racking up the sheer number of promotions required to make it high up in the company practically impossible.</p>
]]></description><pubDate>Fri, 16 Feb 2024 18:01:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=39400624</link><dc:creator>opportune</dc:creator><comments>https://news.ycombinator.com/item?id=39400624</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39400624</guid></item><item><title><![CDATA[New comment by opportune in "Sora: Creating video from text"]]></title><description><![CDATA[
<p>How convenient for all the OpenAI employees trying to make millions of dollars by commercializing their technology. Surely this technology won’t be well-understood and easily replicable in a few years as FOSS</p>
]]></description><pubDate>Thu, 15 Feb 2024 23:35:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=39390833</link><dc:creator>opportune</dc:creator><comments>https://news.ycombinator.com/item?id=39390833</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39390833</guid></item><item><title><![CDATA[New comment by opportune in "The happiest kids in the world have social safety nets"]]></title><description><![CDATA[
<p>I think you are being unfairly downvoted. Campuses have the ability to do things like expel their residents and physically remove trespassers, which is only possible in the most draconian gated communities in the “real world.” They filter for things like SES and at least a nominal desire to learn. They can shunt the really hard problems, and problematic people, to the rest of society.</p>
]]></description><pubDate>Wed, 14 Feb 2024 22:15:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=39376330</link><dc:creator>opportune</dc:creator><comments>https://news.ycombinator.com/item?id=39376330</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39376330</guid></item><item><title><![CDATA[New comment by opportune in "Inside the proton, the ‘most complicated thing you could possibly imagine’"]]></title><description><![CDATA[
<p>Is it true that the quarks themselves, in isolation, have that charge? Or is it that combining quarks into a baryon or meson gives the resultant particle a charge according to a fixed ratio of the constituent quarks?<p>Gemini Advanced says it’s the latter, because of color confinement. But I’d defer to a human expert.</p>
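<p>For reference, the textbook fractional charge assignments do sum to the observed hadron charges; a quick arithmetic check (not a substitute for the human expert asked for above):</p>

```python
from fractions import Fraction

# Standard quark electric charges, in units of the elementary charge e.
CHARGE = {"u": Fraction(2, 3), "d": Fraction(-1, 3)}

def hadron_charge(quarks):
    """Sum the charges of the valence quarks given as a string like 'uud'."""
    return sum(CHARGE[q] for q in quarks)

print(hadron_charge("uud"))  # proton:  2/3 + 2/3 - 1/3 = 1
print(hadron_charge("udd"))  # neutron: 2/3 - 1/3 - 1/3 = 0
```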
]]></description><pubDate>Wed, 14 Feb 2024 21:52:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=39376050</link><dc:creator>opportune</dc:creator><comments>https://news.ycombinator.com/item?id=39376050</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39376050</guid></item><item><title><![CDATA[New comment by opportune in "The happiest kids in the world have social safety nets"]]></title><description><![CDATA[
<p>And on top of that, your social peers are very nearby. Whereas even in walkable cities like NYC or SF, the cities are big enough that your peers may also be in a walkable area but far enough away to require public transit/careful planning.<p>I have read that one consequence of the Japanese practice of tearing down buildings and buying new ones is that you tend to get colocated with many people of the same socioeconomic class and age. We have similar forces going on in the Western world (families may prefer suburbs, which also tend to sort by SES; yuppies prefer nice urban areas; etc) but I think in Japan it is a bit more deliberate.</p>
]]></description><pubDate>Wed, 14 Feb 2024 21:42:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=39375903</link><dc:creator>opportune</dc:creator><comments>https://news.ycombinator.com/item?id=39375903</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39375903</guid></item></channel></rss>