<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: HeinrichAQS</title><link>https://news.ycombinator.com/user?id=HeinrichAQS</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 24 Apr 2026 22:35:30 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=HeinrichAQS" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by HeinrichAQS in "Git commands I run before reading any code"]]></title><description><![CDATA[
<p>One big problem with commit messages is that you write them from your own perspective. If you write them while actively engaged with the topic and the changes, you are heavily biased and might miss things that are obvious to you but not to future readers with less context. I think abstracting away your personal perspective is quite hard - I agree that LLMs are a good fit for writing these, since they simply don't have a "personal opinion". At least I have made this mistake many times in the past: focusing on the things that were not obvious to me while leaving out the things that were non-obvious to others.</p>
]]></description><pubDate>Thu, 09 Apr 2026 09:02:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47701025</link><dc:creator>HeinrichAQS</dc:creator><comments>https://news.ycombinator.com/item?id=47701025</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47701025</guid></item><item><title><![CDATA[Observability >> Predictability]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/HeinrichvH/articles/blob/main/building-with-ai/02-observability-principle/02-observability-principle.md">https://github.com/HeinrichvH/articles/blob/main/building-with-ai/02-observability-principle/02-observability-principle.md</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47671953">https://news.ycombinator.com/item?id=47671953</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 07 Apr 2026 07:46:05 +0000</pubDate><link>https://github.com/HeinrichvH/articles/blob/main/building-with-ai/02-observability-principle/02-observability-principle.md</link><dc:creator>HeinrichAQS</dc:creator><comments>https://news.ycombinator.com/item?id=47671953</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47671953</guid></item><item><title><![CDATA[Refactoring Is Not Heroism – An Information-Theoretic Proof]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/HeinrichvH/articles/blob/main/building-with-ai/01-entropy-cycle/01-entropy-cycle.md">https://github.com/HeinrichvH/articles/blob/main/building-with-ai/01-entropy-cycle/01-entropy-cycle.md</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47611037">https://news.ycombinator.com/item?id=47611037</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Thu, 02 Apr 2026 07:17:20 +0000</pubDate><link>https://github.com/HeinrichvH/articles/blob/main/building-with-ai/01-entropy-cycle/01-entropy-cycle.md</link><dc:creator>HeinrichAQS</dc:creator><comments>https://news.ycombinator.com/item?id=47611037</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47611037</guid></item><item><title><![CDATA[Refactoring Is Not Heroism – An Information-Theoretic Proof]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/HeinrichvH/articles/blob/main/building-with-ai/01-entropy-cycle.md">https://github.com/HeinrichvH/articles/blob/main/building-with-ai/01-entropy-cycle.md</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47600380">https://news.ycombinator.com/item?id=47600380</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 01 Apr 2026 13:10:23 +0000</pubDate><link>https://github.com/HeinrichvH/articles/blob/main/building-with-ai/01-entropy-cycle.md</link><dc:creator>HeinrichAQS</dc:creator><comments>https://news.ycombinator.com/item?id=47600380</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47600380</guid></item><item><title><![CDATA[New comment by HeinrichAQS in "Do your own writing"]]></title><description><![CDATA[
<p>If we see intent as a real part of information that substantially influences the output, then this is perfectly explainable - intent is not intrinsic to any artificial generation. Intent can only come from external input and will therefore always be transformed by the statistical operations that happen during generation of the output. Wrestling with the ideas means finding the sweet spot between intent, the anticipated outcome, and the anticipated recognition of the result by others. Even if we put intent into the generation, the assessment of the consequences of a given output in a given context is still missing. And the core of this problem is, in my opinion, the missing causal chain - without it, no causal link between the generation and its consequences can be established, and therefore no "depth" can be found in the generated artifacts.</p>
]]></description><pubDate>Tue, 31 Mar 2026 16:05:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47589446</link><dc:creator>HeinrichAQS</dc:creator><comments>https://news.ycombinator.com/item?id=47589446</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47589446</guid></item></channel></rss>