<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: jncraton</title><link>https://news.ycombinator.com/user?id=jncraton</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 15 Apr 2026 00:12:36 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=jncraton" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[How dangerous is Mythos, Anthropic's new AI model?]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.economist.com/business/2026/04/08/how-dangerous-is-mythos-anthropics-new-ai-model">https://www.economist.com/business/2026/04/08/how-dangerous-is-mythos-anthropics-new-ai-model</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47695154">https://news.ycombinator.com/item?id=47695154</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 08 Apr 2026 19:32:54 +0000</pubDate><link>https://www.economist.com/business/2026/04/08/how-dangerous-is-mythos-anthropics-new-ai-model</link><dc:creator>jncraton</dc:creator><comments>https://news.ycombinator.com/item?id=47695154</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47695154</guid></item><item><title><![CDATA[New comment by jncraton in "Goodbye InnerHTML, Hello SetHTML: Stronger XSS Protection in Firefox 148"]]></title><description><![CDATA[
<p>You are right that the concept of "safe" is nebulous, but the goal here is specifically to be XSS-safe [1]. Elements or properties that could allow scripts to execute are removed. This functionality lives in the user agent and prevents adding unsafe elements to the DOM itself, so it should be easier to get correct than a string-to-string sanitizer. The logic of "is the element currently being added to the DOM a <script>" is fundamentally easier to get right than "does this HTML string include a script tag".<p>[1] <a href="https://developer.mozilla.org/en-US/docs/Web/API/Element/setHTML" rel="nofollow">https://developer.mozilla.org/en-US/docs/Web/API/Element/set...</a></p>
]]></description><pubDate>Tue, 24 Feb 2026 14:03:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47137284</link><dc:creator>jncraton</dc:creator><comments>https://news.ycombinator.com/item?id=47137284</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47137284</guid></item><item><title><![CDATA[New comment by jncraton in "Qwen3.5: Towards Native Multimodal Agents"]]></title><description><![CDATA[
<p>2-bit and 3-bit quantization is where quality typically starts to really drop off. MXFP4 or another 4-bit quantization is often the sweet spot.</p>
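<p>As a back-of-envelope illustration of why bit width matters, here is a quick sketch of weight memory at different quantization widths. The 7B parameter count is just an illustrative assumption, and real formats (including MXFP4) add per-block scale overhead that this ignores:</p>

```python
# Back-of-envelope memory for model weights at different quantization
# widths. Real formats store per-block scales that add some overhead,
# which this ignores; the 7B parameter count is an assumption for the
# example only.
def weight_bytes(n_params: int, bits_per_weight: float) -> int:
    """Approximate bytes needed to store the weights alone."""
    return int(n_params * bits_per_weight / 8)

n = 7_000_000_000
for bits in (16, 8, 4, 3, 2):
    print(f"{bits:>2}-bit: {weight_bytes(n, bits) / 1e9:5.1f} GB")
```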
]]></description><pubDate>Mon, 16 Feb 2026 14:41:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47035577</link><dc:creator>jncraton</dc:creator><comments>https://news.ycombinator.com/item?id=47035577</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47035577</guid></item><item><title><![CDATA[New comment by jncraton in "My article on why AI is great (or terrible) or how to use it"]]></title><description><![CDATA[
<p>That's great. Here's "me" implementing a JS version of that library in one shot using Github Copilot and a 1 sentence prompt:<p>> Implement when.js as a simple, zero-dependency js library following SPEC.md exactly.<p><a href="https://github.com/jncraton/whenwords/pulls" rel="nofollow">https://github.com/jncraton/whenwords/pulls</a></p>
]]></description><pubDate>Sat, 10 Jan 2026 14:47:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=46566132</link><dc:creator>jncraton</dc:creator><comments>https://news.ycombinator.com/item?id=46566132</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46566132</guid></item><item><title><![CDATA[New comment by jncraton in "Word spacing"]]></title><description><![CDATA[
<p>I've adjusted or removed those sentences in the article.</p>
]]></description><pubDate>Tue, 09 Dec 2025 01:08:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=46200066</link><dc:creator>jncraton</dc:creator><comments>https://news.ycombinator.com/item?id=46200066</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46200066</guid></item><item><title><![CDATA[New comment by jncraton in "Run Python in the Browser Effortlessly"]]></title><description><![CDATA[
<p>Pyodide supports numpy and scipy.<p><a href="https://pyodide.org/en/stable/usage/packages-in-pyodide.html" rel="nofollow">https://pyodide.org/en/stable/usage/packages-in-pyodide.html</a></p>
]]></description><pubDate>Wed, 08 Jan 2025 17:10:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=42636244</link><dc:creator>jncraton</dc:creator><comments>https://news.ycombinator.com/item?id=42636244</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42636244</guid></item><item><title><![CDATA[New comment by jncraton in "Go library for in-process vector search and embeddings with llama.cpp"]]></title><description><![CDATA[
<p>The languagemodels[1] package that I maintain might meet your needs.<p>My primary use case is education, as I and others use this for short student projects[2] related to LLMs, but there's nothing preventing this package from being used in other ways. It includes a basic in-process vector store[3].<p>[1] <a href="https://github.com/jncraton/languagemodels">https://github.com/jncraton/languagemodels</a><p>[2] <a href="https://www.merlot.org/merlot/viewMaterial.htm?id=773418755" rel="nofollow">https://www.merlot.org/merlot/viewMaterial.htm?id=773418755</a><p>[3] <a href="https://github.com/jncraton/languagemodels?tab=readme-ov-file#semantic-search">https://github.com/jncraton/languagemodels?tab=readme-ov-fil...</a></p>
]]></description><pubDate>Wed, 30 Oct 2024 13:39:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=41994611</link><dc:creator>jncraton</dc:creator><comments>https://news.ycombinator.com/item?id=41994611</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41994611</guid></item><item><title><![CDATA[New comment by jncraton in "Phind-405B and faster, high quality AI answers for everyone"]]></title><description><![CDATA[
<p>It would be nice to see the Phind Instant weights released under a permissive license. It looks like it could be a useful tool in the local-only code model toolbox.</p>
]]></description><pubDate>Thu, 05 Sep 2024 18:32:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=41459272</link><dc:creator>jncraton</dc:creator><comments>https://news.ycombinator.com/item?id=41459272</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41459272</guid></item><item><title><![CDATA[New comment by jncraton in "Lossless Acceleration of LLM via Adaptive N-Gram Parallel Decoding"]]></title><description><![CDATA[
<p>The speedup would not be that high in practice for folks already using speculative decoding[1]. ANPD is similar but uses a simpler and faster drafting approach. These two enhancements can't be meaningfully stacked. Here's how the paper describes it:<p>> ANPD dynamically generates draft outputs via an adaptive N-gram module using real-time statistics, after which the drafts are verified by the LLM. This characteristic is exactly the difference between ANPD and the previous speculative decoding methods.<p>ANPD does provide a more general-purpose solution to drafting that does not require training, loading, and running draft LLMs.<p>[1] <a href="https://github.com/ggerganov/llama.cpp/pull/2926">https://github.com/ggerganov/llama.cpp/pull/2926</a></p>
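<p>The drafting idea can be sketched in a few lines. This is a bigram simplification for illustration, not the paper's adaptive N-gram algorithm: keep running statistics over the tokens generated so far, then propose a multi-token draft by greedily following the most frequent continuation. A real system would verify the whole draft with the full LLM in one batched forward pass and keep the longest matching prefix:</p>

```python
# Bigram drafting sketch (a simplification of the adaptive N-gram
# idea): track which token most often follows each token seen so far,
# then greedily extend a draft from the last generated token.
from collections import Counter, defaultdict

class BigramDrafter:
    def __init__(self):
        # previous token -> counts of tokens that followed it
        self.counts = defaultdict(Counter)

    def update(self, tokens):
        for prev, nxt in zip(tokens, tokens[1:]):
            self.counts[prev][nxt] += 1

    def draft(self, last_token, max_len=4):
        out, cur = [], last_token
        for _ in range(max_len):
            if not self.counts[cur]:
                break
            cur = self.counts[cur].most_common(1)[0][0]
            out.append(cur)
        return out

drafter = BigramDrafter()
drafter.update("the quick fox saw the quick fox".split())
print(drafter.draft("the"))  # ['quick', 'fox', 'saw', 'the']
```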
]]></description><pubDate>Sun, 21 Apr 2024 20:02:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=40108740</link><dc:creator>jncraton</dc:creator><comments>https://news.ycombinator.com/item?id=40108740</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40108740</guid></item><item><title><![CDATA[New comment by jncraton in "Launch HN: Greptile (YC W24) - RAG on codebases that actually works"]]></title><description><![CDATA[
<p>You might be interested in "Text Embeddings Reveal (Almost) As Much As Text":<p>> We train our model to decode text embeddings from two state-of-the-art embedding models, and also show that our model can recover important personal information (full names) from a dataset of clinical notes.<p><a href="https://arxiv.org/pdf/2310.06816.pdf" rel="nofollow">https://arxiv.org/pdf/2310.06816.pdf</a><p>There's certainly information loss, but there is also a lot of information still present.</p>
]]></description><pubDate>Tue, 05 Mar 2024 19:43:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=39608312</link><dc:creator>jncraton</dc:creator><comments>https://news.ycombinator.com/item?id=39608312</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39608312</guid></item><item><title><![CDATA[New comment by jncraton in "Gemma: New Open Models"]]></title><description><![CDATA[
<p>Google released the T5 paper about 5 years ago:<p><a href="https://arxiv.org/abs/1910.10683" rel="nofollow">https://arxiv.org/abs/1910.10683</a><p>This included full model weights along with a detailed description of the dataset, training process, and ablations that led them to that architecture. T5 was state-of-the-art on many benchmarks when it was released, but it was of course quickly eclipsed by GPT-3.<p>It was common practice for Google (BERT, T5), Meta (BART), OpenAI (GPT-1, GPT-2), and others to release full training details and model weights. Following GPT-3, it became much more common for labs to withhold full details or model weights.</p>
]]></description><pubDate>Wed, 21 Feb 2024 14:32:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=39454273</link><dc:creator>jncraton</dc:creator><comments>https://news.ycombinator.com/item?id=39454273</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39454273</guid></item><item><title><![CDATA[New comment by jncraton in "Compressing Text into Images"]]></title><description><![CDATA[
<p>> PNG uses deflate. General byte-level patterns. It does not do bespoke image-specific stuff.<p>That's not quite the whole story. PNG does include simple filters to represent a line as a difference from the line above, and that may be what the original post is referring to. [1]<p>[1] <a href="https://en.wikipedia.org/wiki/PNG#Filtering" rel="nofollow">https://en.wikipedia.org/wiki/PNG#Filtering</a></p>
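<p>The "Up" filter mentioned above can be sketched directly. Each byte is stored as its difference from the byte directly above it, mod 256, so vertically repeating content becomes runs of zeros that DEFLATE compresses well. This is illustrative only; a real encoder chooses a filter per scanline:</p>

```python
# Sketch of PNG's "Up" filter (filter type 2): each byte is stored as
# its difference from the byte directly above, mod 256. Rows that
# repeat vertically become runs of zeros, which DEFLATE compresses
# well. Illustrative only, not a full PNG encoder.
def up_filter(row: bytes, prev_row: bytes) -> bytes:
    return bytes((b - p) % 256 for b, p in zip(row, prev_row))

def up_unfilter(filtered: bytes, prev_row: bytes) -> bytes:
    return bytes((f + p) % 256 for f, p in zip(filtered, prev_row))

prev = bytes([10, 20, 30, 40])
row = bytes([12, 20, 33, 40])
filtered = up_filter(row, prev)
print(list(filtered))  # [2, 0, 3, 0] -- mostly zeros for similar rows
assert up_unfilter(filtered, prev) == row  # round-trips losslessly
```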
]]></description><pubDate>Sun, 14 Jan 2024 18:33:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=38993020</link><dc:creator>jncraton</dc:creator><comments>https://news.ycombinator.com/item?id=38993020</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38993020</guid></item><item><title><![CDATA[New comment by jncraton in "Lego Mechanical Computer"]]></title><description><![CDATA[
<p>There is a large amount of theoretical research on the subject of energy limits in computing. For example, Landauer's principle states that any irreversible change in information requires some amount of dissipated heat, and therefore some energy input [1].<p>Reversible computing is an attempt to get around this limit by removing irreversible state changes [2].<p>[1] <a href="https://en.wikipedia.org/wiki/Landauer%27s_principle" rel="nofollow">https://en.wikipedia.org/wiki/Landauer%27s_principle</a><p>[2] <a href="https://en.wikipedia.org/wiki/Reversible_computing" rel="nofollow">https://en.wikipedia.org/wiki/Reversible_computing</a></p>
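<p>For a sense of scale, the Landauer bound is a one-line computation: erasing one bit dissipates at least k·T·ln(2), which at room temperature is around 3e-21 J, far below what current hardware dissipates per operation:</p>

```python
# Landauer's principle puts a floor on the heat dissipated when one
# bit of information is irreversibly erased: E >= k * T * ln(2).
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact since 2019 SI)

def landauer_limit(temp_kelvin: float) -> float:
    """Minimum energy in joules to erase one bit at temperature T."""
    return K_B * temp_kelvin * math.log(2)

print(f"{landauer_limit(300):.2e} J per bit at 300 K")  # 2.87e-21
```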
]]></description><pubDate>Thu, 11 Jan 2024 14:05:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=38952294</link><dc:creator>jncraton</dc:creator><comments>https://news.ycombinator.com/item?id=38952294</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38952294</guid></item><item><title><![CDATA[New comment by jncraton in "Thousands of AI Authors on the Future of AI"]]></title><description><![CDATA[
<p>You might be interested in OpenWorm:<p><a href="https://openworm.org/" rel="nofollow">https://openworm.org/</a><p>This paper might be helpful for understanding the nervous system in particular:<p><a href="https://royalsocietypublishing.org/doi/10.1098/rstb.2017.0379" rel="nofollow">https://royalsocietypublishing.org/doi/10.1098/rstb.2017.037...</a></p>
]]></description><pubDate>Mon, 08 Jan 2024 22:40:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=38919362</link><dc:creator>jncraton</dc:creator><comments>https://news.ycombinator.com/item?id=38919362</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38919362</guid></item><item><title><![CDATA[Details emerge of surprise board coup that ousted CEO Sam Altman at OpenAI]]></title><description><![CDATA[
<p>Article URL: <a href="https://arstechnica.com/information-technology/2023/11/report-sutskever-led-board-coup-at-openai-that-ousted-altman-over-ai-safety-concerns/">https://arstechnica.com/information-technology/2023/11/report-sutskever-led-board-coup-at-openai-that-ousted-altman-over-ai-safety-concerns/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=38321003">https://news.ycombinator.com/item?id=38321003</a></p>
<p>Points: 581</p>
<p># Comments: 722</p>
]]></description><pubDate>Sat, 18 Nov 2023 16:07:04 +0000</pubDate><link>https://arstechnica.com/information-technology/2023/11/report-sutskever-led-board-coup-at-openai-that-ousted-altman-over-ai-safety-concerns/</link><dc:creator>jncraton</dc:creator><comments>https://news.ycombinator.com/item?id=38321003</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38321003</guid></item><item><title><![CDATA[New comment by jncraton in "Jina AI launches open-source 8k text embedding"]]></title><description><![CDATA[
<p>This is great to see. It looks like the size of the embedding vector is half the size of text-embedding-ada-002 (768 vs 1536) while providing competitive performance. This will save space in databases and make lookups somewhat faster.<p>For those unaware, if 512 tokens of context is sufficient for your use case, there are already many options that outperform text-embedding-ada-002 on common benchmarks:<p><a href="https://huggingface.co/spaces/mteb/leaderboard" rel="nofollow noreferrer">https://huggingface.co/spaces/mteb/leaderboard</a></p>
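<p>The database savings are easy to quantify. A rough sketch, assuming uncompressed float32 vectors (4 bytes per dimension), no index overhead, and a purely illustrative corpus of one million chunks:</p>

```python
# Raw storage for a dense embedding table at the two widths discussed
# above (768 vs 1536 dimensions). Assumes float32 with no compression
# or index overhead; the corpus size is an illustrative assumption.
def storage_mb(n_vectors: int, dims: int, bytes_per_dim: int = 4) -> float:
    """Raw storage in megabytes for a dense embedding table."""
    return n_vectors * dims * bytes_per_dim / 1e6

n = 1_000_000
print(f"768-dim:  {storage_mb(n, 768):,.0f} MB")   # 3,072 MB
print(f"1536-dim: {storage_mb(n, 1536):,.0f} MB")  # 6,144 MB
```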
]]></description><pubDate>Thu, 26 Oct 2023 01:17:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=38020552</link><dc:creator>jncraton</dc:creator><comments>https://news.ycombinator.com/item?id=38020552</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38020552</guid></item><item><title><![CDATA[New comment by jncraton in "Expanding Transformer size without losing function or starting from scratch"]]></title><description><![CDATA[
<p>You might be interested in TinyStories:<p><a href="https://arxiv.org/abs/2305.07759" rel="nofollow noreferrer">https://arxiv.org/abs/2305.07759</a><p>> In this work, we introduce TinyStories, a synthetic dataset of short stories that only contain words that a typical 3 to 4-year-olds usually understand, generated by GPT-3.5 and GPT-4. We show that TinyStories can be used to train and evaluate LMs that are much smaller than the state-of-the-art models (below 10 million total parameters), or have much simpler architectures (with only one transformer block), yet still produce fluent and consistent stories with several paragraphs that are diverse and have almost perfect grammar, and demonstrate reasoning capabilities.</p>
]]></description><pubDate>Fri, 18 Aug 2023 23:12:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=37183350</link><dc:creator>jncraton</dc:creator><comments>https://news.ycombinator.com/item?id=37183350</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37183350</guid></item><item><title><![CDATA[New comment by jncraton in "Databricks Strikes $1.3B Deal for Generative AI Startup MosaicML"]]></title><description><![CDATA[
<p>OpenLLaMA models up to 13B parameters have now been trained on 1T tokens:<p><a href="https://github.com/openlm-research/open_llama">https://github.com/openlm-research/open_llama</a></p>
]]></description><pubDate>Mon, 26 Jun 2023 14:22:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=36479985</link><dc:creator>jncraton</dc:creator><comments>https://news.ycombinator.com/item?id=36479985</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36479985</guid></item><item><title><![CDATA[New comment by jncraton in "Show HN: Explore large language models with 512MB of RAM"]]></title><description><![CDATA[
<p>Thanks for pointing that out. Classification is half-baked at the moment. It should ultimately restrict output to only the appropriate labels, but right now it simply samples.</p>
]]></description><pubDate>Sat, 17 Jun 2023 22:43:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=36375488</link><dc:creator>jncraton</dc:creator><comments>https://news.ycombinator.com/item?id=36375488</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36375488</guid></item><item><title><![CDATA[New comment by jncraton in "Show HN: Explore large language models with 512MB of RAM"]]></title><description><![CDATA[
<p>You can actually get these models to do this, but you have to ask:<p><pre><code>    >>> lm.do(f"Answer from the context: What is YCombinator? {lm.get_wiki('Python')}")
    'The context does not provide information about YCombinator.'
    >>> lm.do(f"Answer from the context: What is YCombinator? {lm.get_wiki('YCombinator')}")
    'YCombinator is an American technology startup accelerator that has launched over 4,000 companies, including Airbnb, Coinbase, Cruise, DoorDash, Dropbox, Instacart, Quora, PagerDuty, Reddit, Stripe and Twitch.'
</code></pre>
Without being told to be grounded, the model will guess. When asked to answer from the context, however, it can often recognize that the requested information is not present.<p>One of my goals for this package is to provide a way for folks to learn about the basics of grounding and semantic search.</p>
]]></description><pubDate>Sat, 17 Jun 2023 17:52:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=36372569</link><dc:creator>jncraton</dc:creator><comments>https://news.ycombinator.com/item?id=36372569</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36372569</guid></item></channel></rss>