<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: chrischen</title><link>https://news.ycombinator.com/user?id=chrischen</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 09 Apr 2026 08:44:20 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=chrischen" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by chrischen in "Is legal the same as legitimate: AI reimplementation and the erosion of copyleft"]]></title><description><![CDATA[
<p>A photo is easy to take but hard to reproduce.</p>
]]></description><pubDate>Tue, 10 Mar 2026 00:10:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47317574</link><dc:creator>chrischen</dc:creator><comments>https://news.ycombinator.com/item?id=47317574</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47317574</guid></item><item><title><![CDATA[New comment by chrischen in "US Court of Appeals: TOS may be updated by email, use can imply consent [pdf]"]]></title><description><![CDATA[
<p>The whole concept of intellectual property rights is a social and legal construct designed to promote innovation in an economy. If you don't care about that, then there really isn't any moral or immoral aspect to it. The immorality of it and associating it with stealing was just MPAA propaganda to try to shame people into paying for stuff.<p>If I found some DVD lying on the ground and watched it and I didn't pay for it, it's really up to me to decide if I want to pay the creator so they can continue to produce content. If I don't pay then obviously it doesn't help them produce more content... but the consumption of the content itself neither felt nor heard by the creators.</p>
]]></description><pubDate>Mon, 09 Mar 2026 14:32:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47309665</link><dc:creator>chrischen</dc:creator><comments>https://news.ycombinator.com/item?id=47309665</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47309665</guid></item><item><title><![CDATA[New comment by chrischen in "US Military leaders meet with Anthropic to argue against Claude safeguards"]]></title><description><![CDATA[
<p>Yesterday I was trying to figure out if my expired nacho dip would be safe to eat and wanted to know how much botulism would be toxic if I ate it and so I asked Claude. It refused to answer that question so I could see how the current safeguards can be limiting.</p>
]]></description><pubDate>Wed, 25 Feb 2026 06:29:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=47148065</link><dc:creator>chrischen</dc:creator><comments>https://news.ycombinator.com/item?id=47148065</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47148065</guid></item><item><title><![CDATA[New comment by chrischen in "Comma openpilot – Open source driver-assistance"]]></title><description><![CDATA[
<p>It beeps at you if you stop paying attention, which is superior. Hands on wheel is an arbitrary design decision more likely to placate what a layman would think is necessary to ensure safe AI steering.</p>
]]></description><pubDate>Sat, 24 Jan 2026 03:45:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=46740869</link><dc:creator>chrischen</dc:creator><comments>https://news.ycombinator.com/item?id=46740869</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46740869</guid></item><item><title><![CDATA[New comment by chrischen in "Total monthly number of StackOverflow questions over time"]]></title><description><![CDATA[
<p>This doesn't mean that it's over for SO. It just means we'll probably trend towards more quality over quantity. Measuring SO's success by measuring number of questions asked is like measuring code quality by lines of code. Eventually SO would trend down simply by advancements of search technology helping users find existing answers rather than asking new ones. It just so happened that AI advanced made it even better (in terms of not having to need to ask redundant questions).</p>
]]></description><pubDate>Sun, 04 Jan 2026 07:12:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=46485714</link><dc:creator>chrischen</dc:creator><comments>https://news.ycombinator.com/item?id=46485714</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46485714</guid></item><item><title><![CDATA[New comment by chrischen in "I program on the subway"]]></title><description><![CDATA[
<p>To be fair with the languages I use there are only a finite number of ways a particular line or even function can be implemented due to high level algebraic data structures and strict type checking. Business logic is encoded as data requirements, which is encoded into types, which is enforced by the type checker. Even a non-AI based system can technically be made to fill in the code, but AI system allows this to sort of be generalized across many languages that did not implement auto-complete.</p>
]]></description><pubDate>Mon, 29 Dec 2025 08:42:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=46418666</link><dc:creator>chrischen</dc:creator><comments>https://news.ycombinator.com/item?id=46418666</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46418666</guid></item><item><title><![CDATA[New comment by chrischen in "I program on the subway"]]></title><description><![CDATA[
<p>Was it the github issues copilot integration? I found that to be slow compared to natively running copilot in the IDE.</p>
]]></description><pubDate>Mon, 29 Dec 2025 08:39:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=46418656</link><dc:creator>chrischen</dc:creator><comments>https://news.ycombinator.com/item?id=46418656</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46418656</guid></item><item><title><![CDATA[New comment by chrischen in "I program on the subway"]]></title><description><![CDATA[
<p>With coding agents AI almost never manually type code anymore. It would be great to have a code editor that runs on my phone so I can do voice prompts and let the coding agents type stuff for me.</p>
]]></description><pubDate>Sun, 21 Dec 2025 17:14:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=46346330</link><dc:creator>chrischen</dc:creator><comments>https://news.ycombinator.com/item?id=46346330</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46346330</guid></item><item><title><![CDATA[New comment by chrischen in "The "confident idiot" problem: Why AI needs hard rules, not vibe checks"]]></title><description><![CDATA[
<p>I also try to do verbose type classes using Ocaml's module system and it's been handling these patterns pretty well. My guess is there is probably good documentation / training data in there for these patterns since they are well documented. I haven't actually used coding agents with Haskell yet so it's possible that Ocaml's verbosity helps the agent.</p>
]]></description><pubDate>Wed, 10 Dec 2025 15:06:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=46218481</link><dc:creator>chrischen</dc:creator><comments>https://news.ycombinator.com/item?id=46218481</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46218481</guid></item><item><title><![CDATA[New comment by chrischen in "The "confident idiot" problem: Why AI needs hard rules, not vibe checks"]]></title><description><![CDATA[
<p>Actually I found the coding models to work really well with these languages. And the type systems are not actually complex. Ocaml's type system is actually really simple, which is probably why the compiler can be so fast. Even back in the "beta" days of Copilot, despite being marketed as Python only, I found it worked for Ocaml syntax and worked just as well.<p>The coding models work really well with esoteric syntaxes so if the biggest hurdle to adoption of haskell was syntax, that's definitely less of a hurdle now.<p>> Instead, you're writing non-trivial proofs about all possible runs of the program.<p>All possible runs of a program is exactly what HM type systems type check for. This fed into the coding model automatically iterates until it finds a solution that doesn't violate any possible run of the program.</p>
]]></description><pubDate>Mon, 08 Dec 2025 16:38:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=46194392</link><dc:creator>chrischen</dc:creator><comments>https://news.ycombinator.com/item?id=46194392</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46194392</guid></item><item><title><![CDATA[New comment by chrischen in "The "confident idiot" problem: Why AI needs hard rules, not vibe checks"]]></title><description><![CDATA[
<p>We already have verification layers: high level strictly typed languages like Haskell, Ocaml, Rescript/Melange (js ecosystem), purescript (js), elm, gleam (erlang), f# (for .net ecosystem).<p>These aren’t just strict type systems but the language allows for algebraic data types, nominal types, etc, which allow for encoding higher level types enforced by the language compiler.<p>The AI essentially becomes a glorified blank filler filling in the blanks. Basic syntax errors or type errors, while common, are automatically caught by the compiler as part of the vibe coding feedback loop.</p>
]]></description><pubDate>Mon, 08 Dec 2025 13:51:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=46192215</link><dc:creator>chrischen</dc:creator><comments>https://news.ycombinator.com/item?id=46192215</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46192215</guid></item><item><title><![CDATA[New comment by chrischen in "GPT-5.1: A smarter, more conversational ChatGPT"]]></title><description><![CDATA[
<p>I would guess HN readers are not an average cross-section of broader society, but I would also guess that because of that HN readers would be pretty bad at understanding what broader society is thinking.</p>
]]></description><pubDate>Wed, 19 Nov 2025 13:27:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=45979285</link><dc:creator>chrischen</dc:creator><comments>https://news.ycombinator.com/item?id=45979285</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45979285</guid></item><item><title><![CDATA[New comment by chrischen in "Even Realities Smart Glasses: G2"]]></title><description><![CDATA[
<p>I guess the translation can always update itself in real time if the model is fast enough.</p>
]]></description><pubDate>Wed, 19 Nov 2025 13:26:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=45979270</link><dc:creator>chrischen</dc:creator><comments>https://news.ycombinator.com/item?id=45979270</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45979270</guid></item><item><title><![CDATA[New comment by chrischen in "Even Realities Smart Glasses: G2"]]></title><description><![CDATA[
<p>Some of your points are already considered with current implementations. Airpods live translate uses your phone to display what you say to the target person, and the target person's speech is played to your airpods. I think the main issue is that there is a massive delay and apple's translation models are inferior to ChatGPT. The other thing is the airpods don't really add much. It works the same as if you had the translation app open and both people are talking to it.<p>Aircaps demos show it to be pretty fast and almost real time. Meta's live captioning works really fast and is supposed to be able to pick out who is talking in a noisy environment by having you look at the person.<p>I think most of your issues are just a matter of the models improving themselves and running faster. I've found translations tend to not be out of whack, but this is something that can't really be solved except by having better translation models. In the case of Airpods live translate the app will show both people's text.</p>
]]></description><pubDate>Wed, 19 Nov 2025 13:23:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=45979252</link><dc:creator>chrischen</dc:creator><comments>https://news.ycombinator.com/item?id=45979252</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45979252</guid></item><item><title><![CDATA[New comment by chrischen in "Even Realities Smart Glasses: G2"]]></title><description><![CDATA[
<p>This is the sad reality of most if these AI products and it’s that they are just taking poor feature implementations on the hardware. It seems like if they just picked one or these features and doing it well will make the glasses useful.<p>Meta has a model just for isolating speech in noisy environments (the “live captioning feature”) and it seems that’s also the main feature of the Aircaps glasses. Translation is a relatively solved problem. The issue is isolating the conversation.<p>I’ve found meta is pretty good about not overdelivering on promised features, and as a result even though they probably have the best hardware and software stack of any glasses, the stuff you can do with the Rayban displays are extremely limited.</p>
]]></description><pubDate>Wed, 19 Nov 2025 11:17:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=45978241</link><dc:creator>chrischen</dc:creator><comments>https://news.ycombinator.com/item?id=45978241</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45978241</guid></item><item><title><![CDATA[New comment by chrischen in "Even Realities Smart Glasses: G2"]]></title><description><![CDATA[
<p>Real time translations is a real good use case. The problem is most implementations such as the Airpods live translate are not great.</p>
]]></description><pubDate>Wed, 19 Nov 2025 10:21:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=45977883</link><dc:creator>chrischen</dc:creator><comments>https://news.ycombinator.com/item?id=45977883</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45977883</guid></item><item><title><![CDATA[New comment by chrischen in "A new Google model is nearly perfect on automated handwriting recognition"]]></title><description><![CDATA[
<p>Current LLMs are doing this for coding, and it's very effective. It delegates to tool calls, but a specialized model can just be thought of as another tool. The LLM can be weak in some stuff handled by simple shell scripts or utilities, but strong in knowing what scripts/commands to call. For example, doing math via the model natively may be inaccurate, but the model may know to write the code to do math. An LLM can automate a higher level of abstraction, in the same way a manager or CEO might delegate tasks to specialists.</p>
]]></description><pubDate>Sun, 16 Nov 2025 18:36:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=45947286</link><dc:creator>chrischen</dc:creator><comments>https://news.ycombinator.com/item?id=45947286</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45947286</guid></item><item><title><![CDATA[New comment by chrischen in "A new Google model is nearly perfect on automated handwriting recognition"]]></title><description><![CDATA[
<p>But these models are more like generalists no? Couldn’t they simply be hooked up to more specialized models and just defer to them the way coding agents now use tools to assist?</p>
]]></description><pubDate>Sat, 15 Nov 2025 03:46:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=45934890</link><dc:creator>chrischen</dc:creator><comments>https://news.ycombinator.com/item?id=45934890</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45934890</guid></item><item><title><![CDATA[New comment by chrischen in "GPT-5.1: A smarter, more conversational ChatGPT"]]></title><description><![CDATA[
<p>I think a bigger problem is the HN reader mind reading what the rest of the world wants. At least when an HN reader telling us what they want it's a primary source, but reading a comment about an HN reader postulating what the rest of the world wants is simply more noisy than an unrepresentative sample of what the world may want.</p>
]]></description><pubDate>Thu, 13 Nov 2025 07:26:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=45911814</link><dc:creator>chrischen</dc:creator><comments>https://news.ycombinator.com/item?id=45911814</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45911814</guid></item><item><title><![CDATA[New comment by chrischen in "Meta projected 10% of 2024 revenue came from scams"]]></title><description><![CDATA[
<p>Honest companies are priced out by scammy companies, and as long as these companies share the profits they are totally fine profiting off scams. They make more money off the scams, simply put.</p>
]]></description><pubDate>Fri, 07 Nov 2025 16:52:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=45848340</link><dc:creator>chrischen</dc:creator><comments>https://news.ycombinator.com/item?id=45848340</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45848340</guid></item></channel></rss>