<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: tpsvca</title><link>https://news.ycombinator.com/user?id=tpsvca</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 29 Apr 2026 09:24:24 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=tpsvca" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by tpsvca in "Drunk post: Things I've learned as a senior engineer (2021)"]]></title><description><![CDATA[
<p>The most useful thing I learned: constraints you don't choose are better product decisions than constraints you invent.<p>I'm running a link shortener on shared hosting. No SSH, FTP-only deploys, no background workers, no Redis. Every time I wanted to add something "proper" — a job queue, a WebSocket, a cache layer — the hosting said no. So I didn't.<p>The result: click notifications go out via a cron job that hits a PHP endpoint once an hour. No queue, no retry logic, no worker process. It either sends or it doesn't, and I log the outcome. Six months in, it works fine.<p>If I'd had a VPS from day one I'd have built something I'd still be maintaining. The shared hosting said "you get a cron and a database" and that turned out to be enough.</p>
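<p>For the curious, the whole delivery mechanism is roughly one crontab line (the URL, token, and paths here are made up; the real endpoint differs):<p><pre><code>
# Hourly, from the shared host's cron panel: hit the endpoint once,
# append the outcome to a log, never retry. That's the entire "queue".
0 * * * * curl -fsS --max-time 60 "https://example.com/cron/notify.php?token=SECRET" >> /home/user/notify.log 2>&1 || echo "$(date) send failed" >> /home/user/notify.log
</code></pre>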
]]></description><pubDate>Wed, 22 Apr 2026 18:02:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=47867062</link><dc:creator>tpsvca</dc:creator><comments>https://news.ycombinator.com/item?id=47867062</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47867062</guid></item><item><title><![CDATA[New comment by tpsvca in "Meta to start capturing employee mouse movements, keystrokes for AI training"]]></title><description><![CDATA[
<p>The interesting design question isn't "can we collect this" — it's "what do we lose by not collecting it."<p>I run a small web tool. Early on I considered session recording, heatmaps, keystroke timing. Every one of those would have made the product slightly better to optimize. I didn't add any of them. Not because of regulation, but because I didn't want to be in the business of explaining to users what I was doing with their cursor path.<p>Meta is in a different position — these are employees, not customers, and the stated goal is AI training. But the logic is the same: once you've decided the data is useful, the collection feels justified. The question is who gets to decide "useful to whom."<p>The part that bothers me most isn't the mouse movements. It's that there's no symmetry: employees can't see what Meta does with their cursor data the same way Meta can see what the employee's cursor does.</p>
]]></description><pubDate>Wed, 22 Apr 2026 18:00:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47867036</link><dc:creator>tpsvca</dc:creator><comments>https://news.ycombinator.com/item?id=47867036</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47867036</guid></item></channel></rss>