<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: fultonn</title><link>https://news.ycombinator.com/user?id=fultonn</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 01 May 2026 23:30:22 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=fultonn" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by fultonn in "Ask HN: Who is hiring? (May 2026)"]]></title><description><![CDATA[
<p>IBM Research | Boston, MA, USA (hybrid - 3 days in office; location flexibility - Boston preferred but NYC/SF possible) | Research Scientists and Research Engineers<p>Our team thinks there's a lot of value in co-designing software harnesses and LLMs, particularly for small/medium models in the open model space.<p>Our colleagues across the aisle do an awesome job training the Granite model series, so we (physically and organizationally) sit in a uniquely good place to do impactful work in this space.<p>I am currently looking for early-career scientists and engineers who are interested in LLMs and also in one of {programming languages, formal methods, compilers}. Experience with Rust is a plus. Cool systems-y projects with formalizations on paper or in Rocq/Lean/etc. are also a plus. Neither is necessary, per se.<p>If this sounds interesting, please send over an email: nathan@ibm.com</p>
]]></description><pubDate>Fri, 01 May 2026 17:09:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47977258</link><dc:creator>fultonn</dc:creator><comments>https://news.ycombinator.com/item?id=47977258</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47977258</guid></item><item><title><![CDATA[New comment by fultonn in "Show HN: Price Per Ball – Site that sorts golf balls on Amazon by price per ball"]]></title><description><![CDATA[
<p>What I've done for a similar script in the past:<p><pre><code>    answer_initial = llm(prompt=prompt, site=site)  # JSON with the answer and any fields needed for heuristic checks.
    answer_final = llm(prompt=prompt, site=site, answer=answer_initial)  # second pass, refining the first answer.
    heuristic_results = heuristics(answer_final)  # rule-based checks.
    mark_for_review = ...  # basically just a bunch of hard-coded stuff I add to flag possible failures for review.

</code></pre>
You can use an extremely small/cheap model for something like this -- granite 4.0 micro works fine for me, and granite 3.3 8b did as well; both run on my MacBook. YMMV / try different models and see how it goes.
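<p>For concreteness, here's a minimal runnable sketch of that loop against a local ollama server (its /api/generate endpoint). The prompt, the "price" field, and the heuristic rules are made-up placeholders, not my actual script:<p><pre><code>    import json
    import requests

    OLLAMA_URL = "http://localhost:11434/api/generate"  # default local ollama endpoint

    def llm(prompt, site, answer=None, model="granite4:micro"):
        """One non-streaming generation call that asks the model for JSON."""
        full = prompt + "\n\nPAGE:\n" + site
        if answer is not None:
            full += "\n\nPrevious attempt, please refine:\n" + json.dumps(answer)
        r = requests.post(OLLAMA_URL, json={"model": model, "prompt": full, "stream": False})
        r.raise_for_status()
        return json.loads(r.json()["response"])  # raises if the model strays from JSON

    def heuristics(answer):
        """Illustrative rule-based checks; swap in whatever your task needs."""
        problems = []
        price = answer.get("price")
        if not isinstance(price, (int, float)):
            problems.append("price missing or not a number")
        return problems

    site = open("page.html").read()  # page text to extract from
    prompt = 'Extract the product price as JSON like {"price": 12.34}. Respond with JSON only.'
    answer_initial = llm(prompt=prompt, site=site)
    answer_final = llm(prompt=prompt, site=site, answer=answer_initial)
    mark_for_review = heuristics(answer_final)  # non-empty list means a human should look
</code></pre><p>mark_for_review here is just the heuristic output; anything non-empty gets eyeballed by a human rather than trusted blindly.</p>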
]]></description><pubDate>Tue, 17 Feb 2026 17:37:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47050331</link><dc:creator>fultonn</dc:creator><comments>https://news.ycombinator.com/item?id=47050331</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47050331</guid></item><item><title><![CDATA[New comment by fultonn in "Ask HN: Who uses open LLMs and coding assistants locally? Share setup and laptop"]]></title><description><![CDATA[
<p>> Edit: Hey which of the models on that page were you referring to?<p>I was referring to the smaller ones -- `granite4:micro`, `granite4:latest`, `granite4:350m`.<p>> I'm grabbing one now that's apparently double digit GB?<p>You are probably downloading one of these two ids: `granite4:small-h` or `granite4:32b-a9b-h`.<p>The "small" model _is_ small in relative terms, but is also the largest of the currently released granite models! At 32B parameters (19GB download) it's runnable locally, but not in the same "run on your laptop with acceptable performance" category as the nano/micro models.<p>> Also my "dev environment" is vi -- I come from infosec (so basically a glorified sysadmin) so I'm mostly making little bash and python scripts, so I'm learning a lot of new things about software engineering as I explore this space :-)<p>Shameless plug: if you're writing Python scripts to automate things using small locally hosted models, consider trying out <a href="https://github.com/generative-computing/mellea" rel="nofollow">https://github.com/generative-computing/mellea</a><p>Mellea tries to nudge toward good software engineering practices -- breaking down big tasks into smaller parts, checking outputs after nondeterministic steps, thinking in terms of data structures and invariants rather than flow charts, etc. We built it with "actual fully automated robust workflows" in mind. You can use it with big models or small models, but it really shines when used with small models.
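<p>To make "checking outputs after nondeterministic steps" concrete, here's a tiny generic sketch of the pattern in plain Python -- a hypothetical validate-and-retry wrapper, not Mellea's actual API:<p><pre><code>    import ollama  # pip install ollama; assumes a local ollama server is running

    def checked(step, valid, retries=3):
        """Run a nondeterministic step, validate its output, retry on failure."""
        for _ in range(retries):
            out = step()
            if valid(out):
                return out
        raise RuntimeError("step failed validation after retries")

    # usage: generate a one-line summary and insist it is actually one line
    summary = checked(
        step=lambda: ollama.generate(
            model="granite4:micro",
            prompt="Summarize what logrotate does, in one line.",
        ).response,  # ollama-python >= 0.4; older versions return a dict
        valid=lambda s: "\n" not in s.strip(),
    )
    print(summary)
</code></pre><p>The point is the shape: every LLM call gets an explicit postcondition, so failures surface at the step that caused them instead of three scripts downstream.</p>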
]]></description><pubDate>Mon, 03 Nov 2025 16:38:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=45800992</link><dc:creator>fultonn</dc:creator><comments>https://news.ycombinator.com/item?id=45800992</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45800992</guid></item><item><title><![CDATA[New comment by fultonn in "Ask HN: Who uses open LLMs and coding assistants locally? Share setup and laptop"]]></title><description><![CDATA[
<p>> Is there a way to load this into Ollama?<p>Yes, the granite 4 models are on ollama:<p><a href="https://ollama.com/library/granite4">https://ollama.com/library/granite4</a><p>> but my interest is specifically in privacy respecting LLMs -- my goal is to run the most powerful one I can on my personal machine<p>The HF Spaces demo for granite 4 nano does run on your local machine, using Transformers.js and ONNX. After downloading the model weights you can disconnect from the internet and things should still work. It's all happening in your browser, locally.<p>Of course ollama is preferable for your own dev environment. But ONNX and Transformers.js are amazingly useful for edge deployment and for easily sharing things with non-technical users. When I want to share a little demo, I typically just do that instead of the old way (stand everything up on a server and eat the inference cost).
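<p>For the ollama route, the quickstart is short (model tag is an example; pick whichever size fits your machine):<p><pre><code>    # first, in a shell: ollama pull granite4:micro
    import ollama  # pip install ollama

    reply = ollama.chat(
        model="granite4:micro",
        messages=[{"role": "user", "content": "Why does local inference help privacy?"}],
    )
    print(reply.message.content)  # ollama-python >= 0.4; older clients return a dict
</code></pre><p>Everything stays on your machine: the pull downloads weights once, and inference never leaves localhost.</p>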
]]></description><pubDate>Fri, 31 Oct 2025 19:23:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=45775719</link><dc:creator>fultonn</dc:creator><comments>https://news.ycombinator.com/item?id=45775719</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45775719</guid></item></channel></rss>