<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: hazrmard</title><link>https://news.ycombinator.com/user?id=hazrmard</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 08 May 2026 14:43:10 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=hazrmard" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by hazrmard in "Natural Language Autoencoders: Turning Claude's Thoughts into Text"]]></title><description><![CDATA[
<p>Check my understanding & follow-up Qs:<p>An auto-encoder is trained on [activation] -AV-> [text] -AR-> [activation], where [activation] belongs to one layer of the LLM M.<p>Architecture:<p><pre><code>    Model being analyzed (M): >|||||>  
    Auto-Verbalizer (AV) same as M, with tokens for activation: >|||||>  
    Auto-Reconstructor (AR) truncated up to the layer being analyzed: ||>
</code></pre>
The AV and AR models are initialized using supervised learning on a summarization task, the assumption being that model thoughts resemble a summary of the context.<p>The AR is trained on a simple reconstruction loss.<p>The AV is trained with an RL objective: reconstruction loss plus a KL penalty that keeps the verbalizations close to the initial weights (to maintain linguistic fluency).<p>- The authors acknowledge, and expect, confabulations in the verbalizations: factually incorrect or unsubstantiated statements. But the internal thought we seek is itself, by definition, unsubstantiated. How can we tell it is not duplicitous?<p>- They test this on a layer roughly 2/3 of the way into the models. I wonder how shallower and deeper abstractions affect thought verbalization.</p>
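A toy round-trip of the objective as I'm picturing it (every name here is illustrative, not from the paper; the real AV/AR are transformers and the "text" is natural language, not numbers):

```python
# Hypothetical sketch of the AV -> AR round trip on toy vectors.

def auto_verbalize(activation):
    # AV: activation -> "text" (here, a trivial numeric string)
    return " ".join(f"{x:.3f}" for x in activation)

def auto_reconstruct(text):
    # AR: "text" -> reconstructed activation
    return [float(tok) for tok in text.split()]

def reconstruction_loss(activation):
    # The AR trains on this directly; the AV receives it as an RL reward
    # (plus a KL penalty for fluency, omitted here).
    recon = auto_reconstruct(auto_verbalize(activation))
    return sum((a - r) ** 2 for a, r in zip(activation, recon))

print(reconstruction_loss([0.12, -0.5, 1.0]))
```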
]]></description><pubDate>Thu, 07 May 2026 19:47:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=48053966</link><dc:creator>hazrmard</dc:creator><comments>https://news.ycombinator.com/item?id=48053966</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48053966</guid></item><item><title><![CDATA[New comment by hazrmard in "Show HN: I built an interactive 3D three-body problem simulator in the browser"]]></title><description><![CDATA[
<p>Very cool! Interesting how the choice of solver affects the solution. Euler doesn't handle misbehaved equations very well. You can see this in the Helix setup where the bodies just fly off.</p>
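A toy illustration of the fly-off (not the simulator's code): on a plain harmonic oscillator, explicit Euler injects energy every step, while semi-implicit (symplectic) Euler keeps it bounded:

```python
# Explicit vs. semi-implicit Euler on x'' = -x (unit mass and stiffness).

def explicit_euler(x, v, dt, steps):
    for _ in range(steps):
        x, v = x + dt * v, v - dt * x   # both updates use the old state
    return x, v

def symplectic_euler(x, v, dt, steps):
    for _ in range(steps):
        v = v - dt * x                  # update velocity first...
        x = x + dt * v                  # ...then position uses the new velocity
    return x, v

def energy(x, v):
    return 0.5 * (x * x + v * v)

# Start at rest with unit displacement: true energy is 0.5 forever.
print(energy(*explicit_euler(1.0, 0.0, 0.01, 10_000)))   # drifts well above 0.5
print(energy(*symplectic_euler(1.0, 0.0, 0.01, 10_000))) # stays near 0.5
```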
]]></description><pubDate>Tue, 17 Mar 2026 23:17:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47419655</link><dc:creator>hazrmard</dc:creator><comments>https://news.ycombinator.com/item?id=47419655</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47419655</guid></item><item><title><![CDATA[New comment by hazrmard in "My Homelab Setup"]]></title><description><![CDATA[
<p>I should read up on Tailscale more. I have been using ddclient[1] or the router's built-in dynamic DNS[2] to set up my servers / homelab. This leaves the endpoints exposed to the public internet, as the author says.<p><pre><code>    [1]: https://github.com/ddclient/ddclient  
    [2]: https://kb.netgear.com/1058/What-is-Dynamic-DNS-DDNS</code></pre></p>
]]></description><pubDate>Mon, 09 Mar 2026 22:41:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47316758</link><dc:creator>hazrmard</dc:creator><comments>https://news.ycombinator.com/item?id=47316758</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47316758</guid></item><item><title><![CDATA[New comment by hazrmard in "The Waymo World Model"]]></title><description><![CDATA[
<p>cue the bell curve meme for learning autonomy:<p><pre><code>                 ____.----.____
          ______/              \______
    _____/                            \_____
    ________________________________________

    (simulations)  (real world data)  (simulations)
</code></pre>
Seems like it, no?<p>We started with physics-based simulators for training policies. Then put them in the real world using modular perception/prediction/planning systems. Once enough data was collected, we went back to making simulators. This time, they're physics "informed" deep learning models.</p>
]]></description><pubDate>Fri, 06 Feb 2026 18:25:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=46916298</link><dc:creator>hazrmard</dc:creator><comments>https://news.ycombinator.com/item?id=46916298</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46916298</guid></item><item><title><![CDATA[New comment by hazrmard in "Ask HN: Share your personal website"]]></title><description><![CDATA[
<p><pre><code>    https://iahmed.me
</code></pre>
Hugo website, with a theme I made from scratch.<p>GitHub Pages deployment.<p>Here's my first website from when I was in college and had no experience in web dev. I still keep it up for nostalgia:<p><pre><code>    https://iahmed.me/old_www/</code></pre></p>
]]></description><pubDate>Wed, 14 Jan 2026 20:25:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=46622608</link><dc:creator>hazrmard</dc:creator><comments>https://news.ycombinator.com/item?id=46622608</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46622608</guid></item><item><title><![CDATA[New comment by hazrmard in "How to code Claude Code in 200 lines of code"]]></title><description><![CDATA[
<p>This reflects my experience. Yet, I <i>feel</i> that getting reliability out of LLM calls with a while-loop harness is elusive.<p>For example<p>- how can I reliably have a decision block to end the loop (or keep it running)?<p>- how can I reliably call tools with the right schema?<p>- how can I reliably summarize context / excise noise from the conversation?<p>Perhaps, as the models get better, they'll approach some threshold where my worries just go away. However, I can't quantify that threshold myself and that leaves a cloud of uncertainty hanging over any agentic loops I build.<p>Perhaps I should accept that it's a feature and not a bug? :)</p>
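One partial answer to the decision-block question is a hard iteration cap plus a structured done-check, so a flaky model can't spin forever; a minimal sketch, where `call_llm` is a hypothetical stub standing in for any chat-completion API:

```python
import json

def call_llm(messages):
    # Stub: a real implementation would hit an LLM API. Here we pretend
    # the model finishes after a couple of turns.
    if len(messages) >= 3:
        return json.dumps({"done": True, "answer": "42"})
    return json.dumps({"done": False, "thought": "still working"})

def run_agent(task, max_turns=8):
    messages = [{"role": "user", "content": task}]
    for _ in range(max_turns):           # hard cap: a reliability floor
        reply = call_llm(messages)
        messages.append({"role": "assistant", "content": reply})
        try:
            step = json.loads(reply)     # schema check on every turn
        except json.JSONDecodeError:
            messages.append({"role": "user", "content": "Reply in JSON."})
            continue
        if step.get("done"):
            return step.get("answer")
    return None                          # gave up: surface it, don't loop

print(run_agent("What is 6 * 7?"))
```

The cap doesn't make any single decision reliable, but it bounds the blast radius of an unreliable one.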
]]></description><pubDate>Thu, 08 Jan 2026 20:45:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=46546203</link><dc:creator>hazrmard</dc:creator><comments>https://news.ycombinator.com/item?id=46546203</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46546203</guid></item><item><title><![CDATA[New comment by hazrmard in "Scientific production in the era of large language models [pdf]"]]></title><description><![CDATA[
<p>The paper finds:<p>- For LLM-assisted output, the more complex the LLM-writing is, the less likely the paper is to be published. From eyeballing, at WC=-30, both have similar chances of publication (~46%). At the upper range of WC=25, LLM-assisted papers are ~17% less likely to be published.<p>- LLM-assisted authors produced more preprints (+36%).<p>I wonder:<p>- What is the distribution of writing complexity?<p><pre><code>  * Does the 17% publication deficit at WC=25 correspond to 17% of the 36% excess LLM-assisted papers being WC=25, thus nullifying the effect? Although, it puts extra strain on the review process.</code></pre></p>
]]></description><pubDate>Tue, 06 Jan 2026 18:19:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=46516270</link><dc:creator>hazrmard</dc:creator><comments>https://news.ycombinator.com/item?id=46516270</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46516270</guid></item><item><title><![CDATA[A Derivation of Entropy]]></title><description><![CDATA[
<p>Article URL: <a href="https://iahmed.me/post/surprise-derivation-entropy/">https://iahmed.me/post/surprise-derivation-entropy/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46507149">https://news.ycombinator.com/item?id=46507149</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 06 Jan 2026 00:21:59 +0000</pubDate><link>https://iahmed.me/post/surprise-derivation-entropy/</link><dc:creator>hazrmard</dc:creator><comments>https://news.ycombinator.com/item?id=46507149</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46507149</guid></item><item><title><![CDATA[New comment by hazrmard in "I rebooted my social life"]]></title><description><![CDATA[
<p>I can vouch for this from my own experience.<p>Back in grad school, I was out making new friends. I was playing tennis 4-5 times a week, and I'd invite players for post-game coffee (mornings) or dinner (evenings) after every game. Consistency mattered: I'd ask every time. Slowly we had our regulars, and our coffee times became an institution in and of themselves.<p>People are busy, yes. But people also want to be in demand. People don't want to be rejected. And people don't want to be left out.<p>By asking around, I was exposing myself to rejection. Some folks appreciated their time being in demand. More still joined because they didn't want to be left out.</p>
]]></description><pubDate>Thu, 01 Jan 2026 21:28:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=46458216</link><dc:creator>hazrmard</dc:creator><comments>https://news.ycombinator.com/item?id=46458216</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46458216</guid></item><item><title><![CDATA[New comment by hazrmard in "AdapTive-LeArning Speculator System (ATLAS): Faster LLM inference"]]></title><description><![CDATA[
<p>Do I understand this right?<p>A light-weight speculative model adapts to usage, keeping the acceptance rate for the static heavy-weight model within acceptable bounds.<p>Do they adapt with LoRAs?</p>
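For context, the accept loop in speculative decoding looks roughly like this (a greedy toy version; real systems accept draft tokens probabilistically and verify them in one batched target pass):

```python
def speculative_step(draft_model, target_model, prefix, k=4):
    """Draft k tokens cheaply, then keep the prefix the target agrees with."""
    draft, ctx = [], list(prefix)
    for _ in range(k):
        tok = draft_model(ctx)          # cheap guess from the light model
        draft.append(tok)
        ctx.append(tok)
    accepted, ctx = [], list(prefix)
    for tok in draft:
        if target_model(ctx) == tok:    # heavy model agrees: keep the token
            accepted.append(tok)
            ctx.append(tok)
        else:
            accepted.append(target_model(ctx))  # fall back to the target's token
            break
    return accepted

# Toy models: the target counts up; the draft copies it but stumbles at 3.
target = lambda ctx: ctx[-1] + 1
draft = lambda ctx: ctx[-1] + 1 if ctx[-1] != 3 else 99

print(speculative_step(draft, target, [0]))
```

The longer the agreed prefix, the more target-model forward passes are saved, which is why keeping the acceptance rate high as usage drifts is the whole game.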
]]></description><pubDate>Sun, 12 Oct 2025 19:23:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=45561016</link><dc:creator>hazrmard</dc:creator><comments>https://news.ycombinator.com/item?id=45561016</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45561016</guid></item><item><title><![CDATA[New comment by hazrmard in "CubeSats are fascinating learning tools for space"]]></title><description><![CDATA[
<p>This takes me down memory lane! For my undergrad capstone project, we made a cubesat tracker for our university's satellite using an RPi/Arduino/software-defined radio to receive transmissions every time it passed over us. I cringe a little looking at the code now - but it worked!<p>I agree, cubesats are a wonderful way, even for college students, to tinker with space(-adjacent) tech.<p><a href="https://github.com/hazrmard/SatTrack" rel="nofollow">https://github.com/hazrmard/SatTrack</a></p>
]]></description><pubDate>Mon, 15 Sep 2025 15:09:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=45250602</link><dc:creator>hazrmard</dc:creator><comments>https://news.ycombinator.com/item?id=45250602</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45250602</guid></item><item><title><![CDATA[New comment by hazrmard in "Ask HN: What Are You Working On? (June 2025)"]]></title><description><![CDATA[
<p>I am working on a budgeting app!<p>Features:<p><pre><code>  - Local. No internet connection needed.  
  - Manual. Every transaction is added by the user.
  - One-off or arbitrarily recurring transactions.  
  - No lock-in. Check out your data any time.  
  - Arbitrary metrics to track performance. 
  - Hosting on the cloud for mobile access. 
</code></pre>
Why?<p>I've been using Google Sheets + Forms for the last 8 years to track my finances. It's worked well, except for minor inconveniences. This app is my answer to my own problems.</p>
]]></description><pubDate>Mon, 30 Jun 2025 18:14:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=44426258</link><dc:creator>hazrmard</dc:creator><comments>https://news.ycombinator.com/item?id=44426258</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44426258</guid></item><item><title><![CDATA[New comment by hazrmard in "Ask HN: What is the latest on treatment of Metastatic Breast Cancer?"]]></title><description><![CDATA[
<p>Thank you very much! I had been reading up on the effect of diet on treatment & management. I like the focus on citations.</p>
]]></description><pubDate>Wed, 11 Jun 2025 16:42:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=44249382</link><dc:creator>hazrmard</dc:creator><comments>https://news.ycombinator.com/item?id=44249382</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44249382</guid></item><item><title><![CDATA[Ask HN: What is the latest on treatment of Metastatic Breast Cancer?]]></title><description><![CDATA[
<p>I was reading this recently shared post on the promise of CAR-T therapy for cancer[1]. However, it seems that it may not be universally applicable to all cancers[2], including breast cancer.<p>What have we discovered (and made available as treatment) for advanced breast cancers?<p>[1]: https://news.ycombinator.com/item?id=44219379  
[2]: https://pmc.ncbi.nlm.nih.gov/articles/PMC8990477/</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44244450">https://news.ycombinator.com/item?id=44244450</a></p>
<p>Points: 28</p>
<p># Comments: 10</p>
]]></description><pubDate>Wed, 11 Jun 2025 05:23:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=44244450</link><dc:creator>hazrmard</dc:creator><comments>https://news.ycombinator.com/item?id=44244450</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44244450</guid></item><item><title><![CDATA[New comment by hazrmard in "Mathematical Foundations of Reinforcement Learning"]]></title><description><![CDATA[
<p>Thank you. This is great. I also appreciated the linked code for MinRL (<a href="https://github.com/10-OASIS-01/minrl" rel="nofollow">https://github.com/10-OASIS-01/minrl</a>).<p>Having done research in RL, a big problem with incremental work was reproducing comparative studies and validating my own contributions. A simple library like this, with built-in tools for visualization and a gridworld sandbox where I can validate just by observation, is very helpful!</p>
]]></description><pubDate>Tue, 11 Mar 2025 17:24:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=43334873</link><dc:creator>hazrmard</dc:creator><comments>https://news.ycombinator.com/item?id=43334873</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43334873</guid></item><item><title><![CDATA[New comment by hazrmard in "Mistral Saba"]]></title><description><![CDATA[
<p>It's great to see the proliferation of models in other languages!<p>Shoutout to Alif, a finetune of Llama 3 8B on Urdu datasets:  
<a href="https://huggingface.co/large-traversaal/Alif-1.0-8B-Instruct" rel="nofollow">https://huggingface.co/large-traversaal/Alif-1.0-8B-Instruct</a><p>It'd be great to see a comparison.</p>
]]></description><pubDate>Mon, 17 Feb 2025 16:27:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=43080594</link><dc:creator>hazrmard</dc:creator><comments>https://news.ycombinator.com/item?id=43080594</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43080594</guid></item><item><title><![CDATA[New comment by hazrmard in "Physics Informed Neural Networks"]]></title><description><![CDATA[
<p>Good read! I am developing PINNs at work and this certainly helped me recall important concepts. This post uses the deepxde library [1] to compose the PINN. Can anyone comment on how NVIDIA's modulus [2] compares to it? Modulus appears to be much more verbose and poorly documented.<p>[1]: <a href="https://github.com/lululxvi/deepxde">https://github.com/lululxvi/deepxde</a>  
[2]: <a href="https://github.com/nvidia/modulus">https://github.com/nvidia/modulus</a></p>
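For readers new to PINNs, the core idea either library wraps is a loss that penalizes the PDE residual at collocation points; a library-agnostic toy for the ODE u' = u, with finite differences standing in for autodiff:

```python
import math

def residual_loss(u, xs, h=1e-5):
    # Mean squared residual of u'(x) - u(x) at collocation points xs.
    loss = 0.0
    for x in xs:
        du = (u(x + h) - u(x - h)) / (2 * h)   # central-difference derivative
        loss += (du - u(x)) ** 2
    return loss / len(xs)

xs = [i / 10 for i in range(11)]               # collocation points in [0, 1]
print(residual_loss(math.exp, xs))             # exact solution: near-zero residual
print(residual_loss(lambda x: 1 + x, xs))      # wrong ansatz: large residual
```

A real PINN would parameterize u as a neural network and minimize this residual (plus boundary terms) with gradient descent; deepxde and modulus mainly differ in how much of that plumbing they hide.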
]]></description><pubDate>Mon, 17 Feb 2025 15:27:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=43079933</link><dc:creator>hazrmard</dc:creator><comments>https://news.ycombinator.com/item?id=43079933</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43079933</guid></item><item><title><![CDATA[New comment by hazrmard in "Time-Series Anomaly Detection: A Decade Review"]]></title><description><![CDATA[
<p>Anomaly detection (AD) can arguably be a value-add to any industry. It may not be a core product, but AD can help optimize operations for almost anyone.<p>* Manufacturing: Computer vision to pick anomalies off the assembly line.<p>* Operations: Accelerometers/temperature sensors w/ frequency analysis to detect the onset of faults (prognostics/diagnostics) and do predictive maintenance.<p>* Sales: Time-series analyses of sales numbers / support calls to detect up/downticks in cashflows, customer satisfaction, etc.</p>
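Even a rolling z-score makes a serviceable baseline for the time-series cases above (a sketch, not a method from the review):

```python
import statistics

def zscore_anomalies(series, window=5, k=3.0):
    """Flag indices more than k rolling standard deviations from the rolling mean."""
    flagged = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu = statistics.fmean(hist)
        sigma = statistics.stdev(hist) or 1e-9   # guard against constant windows
        if abs(series[i] - mu) > k * sigma:
            flagged.append(i)
    return flagged

data = [10, 11, 10, 12, 11, 10, 11, 50, 11, 10]
print(zscore_anomalies(data))   # the spike at index 7
```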
]]></description><pubDate>Mon, 06 Jan 2025 17:13:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=42612526</link><dc:creator>hazrmard</dc:creator><comments>https://news.ycombinator.com/item?id=42612526</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42612526</guid></item><item><title><![CDATA[New comment by hazrmard in "Modelica"]]></title><description><![CDATA[
<p>I studied bond graphs in modeling & simulation courses in college. I thought they were so cool! A utility knife for understanding mechanical phenomena.<p>Until I discovered Hamiltonian physics :)</p>
]]></description><pubDate>Mon, 16 Dec 2024 21:40:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=42435756</link><dc:creator>hazrmard</dc:creator><comments>https://news.ycombinator.com/item?id=42435756</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42435756</guid></item><item><title><![CDATA[New comment by hazrmard in "Modelica"]]></title><description><![CDATA[
<p>MATLAB/Simulink is imperative. It takes a signal-flow/causal approach, so you must know ahead of time which variable <i>causes</i> another variable to change, i.e. which is defined first.<p>Modelica is acausal. You define the variables and how they are related (equations); the compiler handles variable dependencies and resolution internally.<p>There are pros & cons to each. Both are used for simulating cyber-physical systems.</p>
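A toy contrast in Python (illustrative only, not Modelica syntax): the causal version bakes in the direction of computation, while the acausal version states Ohm's law as a residual and lets a solver recover whichever variable is unknown:

```python
def causal_current(v, r):
    return v / r                     # direction baked in: (v, r) -> i

def solve_acausal(residual, lo=-1e6, hi=1e6, tol=1e-9):
    """Find x with residual(x) == 0 by bisection (assumes a sign change in [lo, hi])."""
    if residual(lo) * residual(hi) > 0:
        raise ValueError("no sign change in bracket")
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if residual(lo) * residual(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

ohm = lambda v, i, r: v - i * r      # one equation, no preferred direction

i = solve_acausal(lambda i: ohm(12.0, i, 4.0))   # solve the equation for current
v = solve_acausal(lambda v: ohm(v, 3.0, 4.0))    # same equation, solve for voltage
print(i, v)
```

A Modelica compiler does this sorting and solving symbolically over thousands of equations; the bisection here just stands in for "the tool picks the unknown, not you."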
]]></description><pubDate>Mon, 16 Dec 2024 21:35:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=42435713</link><dc:creator>hazrmard</dc:creator><comments>https://news.ycombinator.com/item?id=42435713</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42435713</guid></item></channel></rss>