<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: artee_49</title><link>https://news.ycombinator.com/user?id=artee_49</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 15 Apr 2026 04:29:15 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=artee_49" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by artee_49 in "Toward automated verification of unreviewed AI-generated code"]]></title><description><![CDATA[
<p>Unintended side-effects are the biggest problem with AI-generated code. I can't think of a proper way to solve that.</p>
]]></description><pubDate>Tue, 17 Mar 2026 20:29:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47417879</link><dc:creator>artee_49</dc:creator><comments>https://news.ycombinator.com/item?id=47417879</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47417879</guid></item><item><title><![CDATA[New comment by artee_49 in "Google spoofed via DKIM replay attack: A technical breakdown"]]></title><description><![CDATA[
<p>TLDR:<p>Google allows you to input long paragraphs and URLs into a field called "App name", and then sends you an email containing the paragraph you entered (malicious, with phishing links) to your inbox. Since Google sends this email, it's DKIM-signed and passes DMARC, so an attacker can simply download the entire email and resend it verbatim to other people; it will still be signed and will land in their inboxes.<p>The other thing is that the attacker cannot change the "To" header in the email (not the envelope "To", which determines where the email is delivered, but what shows up in "To" when the client renders the email), so the attacker bought a domain that looks Google-owned: "(rand)goog-ssl.com". When looking at emails in your inbox, ensure that the "To" is always valid along with the "From".</p>
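To make the "check the rendered To header" advice concrete, here's a rough Python sketch. The allowlist of expected domains is hypothetical; the point is that the visible "To" in a replayed email will show a lookalike domain rather than your own:

```python
import email
from email.utils import parseaddr

# Hypothetical allowlist: domains you actually receive mail under.
EXPECTED_TO_DOMAINS = {"gmail.com", "example.com"}

def visible_to_is_plausible(raw_message: str) -> bool:
    """Check the rendered "To" header (not the envelope recipient)
    against an allowlist, flagging lookalike domains."""
    msg = email.message_from_string(raw_message)
    _, addr = parseaddr(msg.get("To", ""))
    domain = addr.rpartition("@")[2].lower()
    return domain in EXPECTED_TO_DOMAINS
```

A replayed email whose "To" reads victim@forms-goog-ssl.com would fail this check even though its DKIM signature and DMARC alignment still pass.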
]]></description><pubDate>Fri, 25 Jul 2025 13:52:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=44683140</link><dc:creator>artee_49</dc:creator><comments>https://news.ycombinator.com/item?id=44683140</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44683140</guid></item><item><title><![CDATA[New comment by artee_49 in "Measuring the impact of AI on experienced open-source developer productivity"]]></title><description><![CDATA[
<p>Even at senior levels, the claim has been that AI will speed up coding (take it over) so engineers can focus on higher-level decisions and abstract concepts. These contributions are not that, and based on prior predictions, productivity should have gone up.</p>
]]></description><pubDate>Thu, 10 Jul 2025 19:02:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=44524341</link><dc:creator>artee_49</dc:creator><comments>https://news.ycombinator.com/item?id=44524341</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44524341</guid></item><item><title><![CDATA[New comment by artee_49 in "Spammers are better at SPF, DKIM, and DMARC than everyone else"]]></title><description><![CDATA[
<p>It does work that way, but IP reputation is a factor as well, so you need to keep that in mind. IPs need to be "seasoned" and "trusted", just like domains.<p>This is how email-as-infrastructure works: you send from a shared pool of their IPs, they sign your emails with DKIM, and you have SPF set up on your own domain as well.</p>
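For reference, the SPF side of that setup is just a DNS TXT record authorizing the provider's shared IP pool. A toy Python sketch of the ip4/ip6 check (hypothetical record; real SPF evaluation also resolves include:, a:, mx:, etc. per RFC 7208):

```python
import ipaddress

def ip_passes_spf(spf_record: str, sender_ip: str) -> bool:
    """Rough SPF sketch: handles only ip4:/ip6: mechanisms.
    Returns True if sender_ip falls in an authorized range."""
    ip = ipaddress.ip_address(sender_ip)
    for term in spf_record.split():
        if term.startswith(("ip4:", "ip6:")):
            network = ipaddress.ip_network(term.split(":", 1)[1], strict=False)
            if ip in network:
                return True
    return False

# A hypothetical record authorizing a provider's shared pool:
record = "v=spf1 ip4:203.0.113.0/24 ~all"
```

An email sent from 203.0.113.45 would pass this record; one sent from an unrelated IP would fall through to the ~all softfail.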
]]></description><pubDate>Tue, 25 Mar 2025 14:23:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=43471724</link><dc:creator>artee_49</dc:creator><comments>https://news.ycombinator.com/item?id=43471724</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43471724</guid></item><item><title><![CDATA[New comment by artee_49 in "Spammers are better at SPF, DKIM, and DMARC than everyone else"]]></title><description><![CDATA[
<p>DKIM is not meant to block spam; it's meant to authenticate that the sender had access to the private key corresponding to the public key published on the sending domain, implying that the sender has sufficient permission to send from that domain.<p>It should not be used to imply anything else. None of these mechanisms have anything to do with spam; that's reputation (and yes, having DKIM set up boosts your reputation, but it is not sufficient), which has to be built up by the domains sending the emails.</p>
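The "public key exposed on the domain" is literally a DNS TXT record at <selector>._domainkey.<domain>, in tag=value form. A small sketch of parsing one (the key material below is made up and truncated for illustration):

```python
def parse_dkim_record(txt: str) -> dict:
    """Parse a DKIM DNS TXT record into its tag=value pairs,
    e.g. v (version), k (key type), p (base64 public key)."""
    return dict(
        tag.strip().split("=", 1)
        for tag in txt.split(";")
        if "=" in tag
    )

# Illustrative record; the p= value is a truncated placeholder.
record = "v=DKIM1; k=rsa; p=MIGfMA0GCSq..."
```

Verifiers fetch this record, then check the message's DKIM-Signature header against the p= key; nothing in that process says anything about whether the content is spam.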
]]></description><pubDate>Tue, 25 Mar 2025 14:20:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=43471691</link><dc:creator>artee_49</dc:creator><comments>https://news.ycombinator.com/item?id=43471691</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43471691</guid></item><item><title><![CDATA[New comment by artee_49 in "When imperfect systems are good: Bluesky's lossy timelines"]]></title><description><![CDATA[
<p>I think shuffle sharding is beneficial for read-only replica cases, not for write scenarios like this. You'd have to write to the primary and not to a "virtual node", right? Or am I understanding it incorrectly? I just read that article now.</p>
]]></description><pubDate>Wed, 19 Feb 2025 22:58:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=43108848</link><dc:creator>artee_49</dc:creator><comments>https://news.ycombinator.com/item?id=43108848</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43108848</guid></item><item><title><![CDATA[New comment by artee_49 in "When imperfect systems are good: Bluesky's lossy timelines"]]></title><description><![CDATA[
<p>I think you'd have to pay a team millions to figure that out; it is unlikely to be a static rate, but rather one decided based on multiple traits like time of year, time of flight, distance of flight, cost of ticket, etc.</p>
]]></description><pubDate>Wed, 19 Feb 2025 20:53:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=43107512</link><dc:creator>artee_49</dc:creator><comments>https://news.ycombinator.com/item?id=43107512</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43107512</guid></item><item><title><![CDATA[New comment by artee_49 in "When imperfect systems are good: Bluesky's lossy timelines"]]></title><description><![CDATA[
<p>I am a bit perplexed, though, as to why they implemented fan-out in a way where each "page" blocks fetching further pages; they would not have been affected by the high tail latencies if they had not done this:<p>"In the case of timelines, each “page” of followers is 10,000 users large and each “page” must be fanned out before we fetch the next page. This means that our slowest writes will hold up the fetching and Fanout of the next page."<p>This basically means they block on each page, process all the items on the page, and then move on to the next page. Why wouldn't you decouple fetching pages from processing them?<p>A page-fetching activity should be able to keep fetching successive sets of followers one after another, and should not wait for every item in a page to be updated before continuing.<p>Something that comes to mind would be a fetcher component that fetches pages, stores each page in S3, and publishes the metadata (content) and the S3 location to a queue (SQS) consumed by timeline publishers, which can scale independently based on load. You can control the concurrency in such a system much better, and you could also partition on the shards with another system like Kafka, using the shards as keys in the queue, to "slow down" the work without effectively dropping tweets from timelines (timelines are eventually consistent regardless).<p>I feel like I'm missing something and there's a valid reason to do it this way.</p>
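A toy sketch of that decoupling, with hypothetical stand-ins (a bounded queue.Queue in place of SQS, a list in place of the timeline store): the fetcher streams pages without waiting, and a pool of publishers drains the queue, so one slow write never holds up fetching the next page.

```python
import queue
import threading

def fetch_pages(pages, page_queue, n_workers):
    # Producer: stream follower pages into the queue one after
    # another, then enqueue one sentinel per worker to signal "done".
    for page in pages:
        page_queue.put(page)
    for _ in range(n_workers):
        page_queue.put(None)

def run_fanout(pages, n_workers=4):
    page_queue = queue.Queue(maxsize=8)  # bounded queue = backpressure
    delivered = []
    lock = threading.Lock()

    def publisher():
        # Consumer: drain pages and "write" each follower's timeline.
        while True:
            page = page_queue.get()
            if page is None:
                break
            for follower in page:
                with lock:
                    delivered.append(follower)

    workers = [threading.Thread(target=publisher) for _ in range(n_workers)]
    for w in workers:
        w.start()
    fetch_pages(pages, page_queue, n_workers)
    for w in workers:
        w.join()
    return delivered
```

The bounded queue gives you the "slow down instead of drop" behavior: when publishers fall behind, the fetcher blocks on put() rather than losing work, and the timeline stays eventually consistent.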
]]></description><pubDate>Wed, 19 Feb 2025 20:51:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=43107494</link><dc:creator>artee_49</dc:creator><comments>https://news.ycombinator.com/item?id=43107494</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43107494</guid></item></channel></rss>