<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: numberalltheway</title><link>https://news.ycombinator.com/user?id=numberalltheway</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 28 Apr 2026 22:57:55 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=numberalltheway" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by numberalltheway in "Ask HN: Could you share your personal blog here?"]]></title><description><![CDATA[
<p>Trying my best to explain the computer science topics I'm interested in.<p><a href="https://numbersallthewaydown.com/" rel="nofollow noreferrer">https://numbersallthewaydown.com/</a></p>
]]></description><pubDate>Tue, 04 Jul 2023 17:52:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=36590210</link><dc:creator>numberalltheway</dc:creator><comments>https://news.ycombinator.com/item?id=36590210</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36590210</guid></item><item><title><![CDATA[New comment by numberalltheway in "Europe to ChatGPT: disclose your sources"]]></title><description><![CDATA[
<p>This is a great first step. It's a joke that OpenAI thinks they can get away with saying they use "both publicly available data (such as internet data) and data licensed from third-party providers" in their Technical Report.<p>That statement rules out almost nothing! A description that broad could cover practically any training data.<p>If you're going to pretend to be doing science, you should at least be held to some of the standards we typically associate with doing science.<p>I know the article focuses on copyright, but declining to state any data sources at all sets a bad precedent.</p>
]]></description><pubDate>Fri, 28 Apr 2023 02:54:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=35737009</link><dc:creator>numberalltheway</dc:creator><comments>https://news.ycombinator.com/item?id=35737009</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35737009</guid></item><item><title><![CDATA[New comment by numberalltheway in "GPT-4 Technical Report: A blog post masquerading as scientific literature"]]></title><description><![CDATA[
<p>I see what you're saying. I'm pointing this out as especially notable and frustrating because of the success OpenAI has had with GPT-4.<p>To me, there's a world of difference between a random arXiv entry and this.</p>
]]></description><pubDate>Fri, 07 Apr 2023 13:28:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=35481368</link><dc:creator>numberalltheway</dc:creator><comments>https://news.ycombinator.com/item?id=35481368</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35481368</guid></item><item><title><![CDATA[GPT-4 Technical Report: A blog post masquerading as scientific literature]]></title><description><![CDATA[
<p>Article URL: <a href="https://numbersallthewaydown.com/2023/04/06/gpt-4-technical-report-a-blog-post-masquerading-as-scientific-literature/">https://numbersallthewaydown.com/2023/04/06/gpt-4-technical-report-a-blog-post-masquerading-as-scientific-literature/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=35481120">https://news.ycombinator.com/item?id=35481120</a></p>
<p>Points: 3</p>
<p># Comments: 2</p>
]]></description><pubDate>Fri, 07 Apr 2023 12:59:07 +0000</pubDate><link>https://numbersallthewaydown.com/2023/04/06/gpt-4-technical-report-a-blog-post-masquerading-as-scientific-literature/</link><dc:creator>numberalltheway</dc:creator><comments>https://news.ycombinator.com/item?id=35481120</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35481120</guid></item><item><title><![CDATA[New comment by numberalltheway in "OpenAI’s policies hinder reproducible research on language models"]]></title><description><![CDATA[
<p>It's even more frustrating that, from what I can tell, nothing has been published about how GPT-4 improved.<p>I take specific exception to the hiding of the data and techniques used to build the model. There must be something specific going on that allows it to outperform GPT-3 and anything its contemporaries are able to produce. Withholding this information hinders the progress of the field as a whole.</p>
]]></description><pubDate>Thu, 23 Mar 2023 03:31:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=35270221</link><dc:creator>numberalltheway</dc:creator><comments>https://news.ycombinator.com/item?id=35270221</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35270221</guid></item><item><title><![CDATA[New comment by numberalltheway in "GPT-4 is Computer Vision on Steroids"]]></title><description><![CDATA[
<p>I'm particularly interested in how GPT-4 manages multi-modal processing. Do the images share the same domain as the text inputs, or is there some location in the model inputs that is ~for images only~? The Technical Report states that "the model generates text outputs given inputs consisting of arbitrarily interlaced text and image"[1], but that doesn't really clear up how the images are being treated here.<p>[1] <a href="https://arxiv.org/pdf/2303.08774.pdf" rel="nofollow">https://arxiv.org/pdf/2303.08774.pdf</a></p>
]]></description><pubDate>Sat, 18 Mar 2023 04:27:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=35206089</link><dc:creator>numberalltheway</dc:creator><comments>https://news.ycombinator.com/item?id=35206089</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35206089</guid></item></channel></rss>