<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: mks_shuffle</title><link>https://news.ycombinator.com/user?id=mks_shuffle</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 15 May 2026 15:24:40 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=mks_shuffle" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by mks_shuffle in "New arXiv policy: 1-year ban for hallucinated references"]]></title><description><![CDATA[
<p>While this is certainly a welcome step, I hope more work is done to address the underlying problem: it is not easy to create correct BibTeX entries for cited papers. Citations for any given paper can come from a wide range of journals, publishers, conferences, and preprints. The same paper can be available from multiple sources with varying details, e.g. arXiv and the conference website. Tools like Zotero have made it significantly easier to extract citations from publishers' webpages, but I still find issues with the extracted BibTeX details. Author names and titles are usually extracted correctly, but I still have to manually verify that details like publication venue, year, volume, page numbers, URL, etc. are extracted correctly and rendered correctly in LaTeX. Different publications can also use different citation styles. The lack of an easy, unified way to obtain consistent citation data can unfortunately lead to shortcuts like AI-generated citations. I am not sure whether hallucinated citations are being generated in the main manuscript or in a separate BibTeX file, so I may be a bit off in my understanding.</p>
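<p>As a hypothetical illustration (the paper title, authors, keys, and identifiers below are invented), the same paper often yields two diverging BibTeX entries depending on where it was exported from:</p>

```bibtex
% Export from arXiv (hypothetical entry)
@misc{doe2025example,
  title         = {An Example Paper},
  author        = {Doe, Jane and Smith, John},
  year          = {2025},
  eprint        = {2501.00000},
  archivePrefix = {arXiv},
  primaryClass  = {cs.LG}
}

% Export from the conference site (same hypothetical paper, different fields)
@inproceedings{doe2025exampleconf,
  title     = {An Example Paper},
  author    = {Doe, Jane and Smith, John},
  booktitle = {Proceedings of a Hypothetical Conference},
  year      = {2025},
  pages     = {1--10}
}
```

<p>Neither entry is wrong on its own, but mixing them across a bibliography gives inconsistent venue, page, and URL fields, which is exactly the manual cleanup I am describing.</p>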
]]></description><pubDate>Fri, 15 May 2026 02:22:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=48143789</link><dc:creator>mks_shuffle</dc:creator><comments>https://news.ycombinator.com/item?id=48143789</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48143789</guid></item><item><title><![CDATA[New comment by mks_shuffle in "Ask HN: Can companies claim copyright over their LLM-generated codebases?"]]></title><description><![CDATA[
<p>Thanks for the reply. I’ve mostly heard that AI-generated images are considered non-copyrightable (even when the prompts are written by a human). Would the situation be different for code than for images, since both are created with generative AI tools? Or does it depend on whether the generated artifact is created by an individual versus within a company? Thanks.</p>
]]></description><pubDate>Sat, 17 Jan 2026 03:56:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=46655086</link><dc:creator>mks_shuffle</dc:creator><comments>https://news.ycombinator.com/item?id=46655086</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46655086</guid></item><item><title><![CDATA[Ask HN: Can companies claim copyright over their LLM-generated codebases?]]></title><description><![CDATA[
<p>As tools like Claude Code and Codex become more widely used across industries, will companies be able to claim copyright over their codebases (or products) or impose license restrictions when a significant portion of the code is generated by LLMs?</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46654918">https://news.ycombinator.com/item?id=46654918</a></p>
<p>Points: 6</p>
<p># Comments: 5</p>
]]></description><pubDate>Sat, 17 Jan 2026 03:19:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=46654918</link><dc:creator>mks_shuffle</dc:creator><comments>https://news.ycombinator.com/item?id=46654918</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46654918</guid></item><item><title><![CDATA[New comment by mks_shuffle in "The creator of Claude Code's Claude setup"]]></title><description><![CDATA[
<p>How different are Codex and Claude Code from each other?
I have been using Codex for a few weeks, running experiments related to data analysis and training models with some architecture modifications. I wouldn't say I have used it extensively, but so far my experience has been good. The only annoying part has been not being able to use the GPU in Codex without the `--sandbox danger-full-access` flag. Today I started using Claude Code and ran experiments similar to the ones I ran with Codex. I find the interface quite similar to Codex's. However, I hit the usage limit quite quickly in Claude Code. I will be exploring its features further. I would appreciate it if anyone could share their experience of using both tools.</p>
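<p>For reference, this is the invocation I mean (assuming the Codex CLI; flag behavior may change between versions):</p>

```shell
# Run Codex without sandbox isolation so the process can reach the GPU.
# "danger" is in the flag name for a reason: the agent gets full system access.
codex --sandbox danger-full-access
```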
]]></description><pubDate>Wed, 07 Jan 2026 07:44:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=46523674</link><dc:creator>mks_shuffle</dc:creator><comments>https://news.ycombinator.com/item?id=46523674</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46523674</guid></item><item><title><![CDATA[New comment by mks_shuffle in "Average DRAM price in USD over last 18 months"]]></title><description><![CDATA[
<p>What could be the potential impact on smartphone and tablet prices in the coming months or years? I am assuming that laptop prices will start increasing next year.</p>
]]></description><pubDate>Thu, 04 Dec 2025 13:19:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=46147344</link><dc:creator>mks_shuffle</dc:creator><comments>https://news.ycombinator.com/item?id=46147344</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46147344</guid></item><item><title><![CDATA[New comment by mks_shuffle in "Attention is your scarcest resource (2020)"]]></title><description><![CDATA[
<p>I think the parent comment is referring to "Attention Is All You Need", the famous transformer paper.</p>
]]></description><pubDate>Thu, 31 Jul 2025 13:25:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=44745409</link><dc:creator>mks_shuffle</dc:creator><comments>https://news.ycombinator.com/item?id=44745409</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44745409</guid></item><item><title><![CDATA[New comment by mks_shuffle in "Qwen3: Think deeper, act faster"]]></title><description><![CDATA[
<p>These are the recommendations provided on each model's Hugging Face page under the usage guidelines:
QwQ-32b: <a href="https://huggingface.co/Qwen/QwQ-32B" rel="nofollow">https://huggingface.co/Qwen/QwQ-32B</a>
DeepSeek-R1: <a href="https://huggingface.co/deepseek-ai/DeepSeek-R1" rel="nofollow">https://huggingface.co/deepseek-ai/DeepSeek-R1</a></p>
]]></description><pubDate>Tue, 29 Apr 2025 21:49:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=43838442</link><dc:creator>mks_shuffle</dc:creator><comments>https://news.ycombinator.com/item?id=43838442</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43838442</guid></item><item><title><![CDATA[New comment by mks_shuffle in "Qwen3: Think deeper, act faster"]]></title><description><![CDATA[
<p>Does anyone have insights on the best approaches to comparing reasoning models? It is often recommended to use a higher temperature for more creative answers and lower temperature values for more logical, deterministic outputs. However, I am not sure how applicable this advice is to reasoning models. For example, DeepSeek-R1 and QwQ-32B recommend a temperature around 0.6, rather than lower values like 0.1–0.3. The Qwen3 blog provides performance comparisons between multiple reasoning models, and I am interested in knowing what configurations they used. However, the paper is not available yet. If anyone has links to papers focused on this topic, please share them here. Also, please feel free to correct me if I’m mistaken about anything. Thanks!</p>
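<p>To make the temperature knob concrete, here is a minimal sketch (plain NumPy, not tied to any particular model) of how temperature rescales next-token probabilities before sampling: lower values sharpen the distribution toward the argmax, higher values flatten it. The logits are made up for illustration.</p>

```python
import numpy as np

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature, then apply a numerically stable softmax."""
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()  # subtract the max so exp() cannot overflow
    exp = np.exp(scaled)
    return exp / exp.sum()

logits = [2.0, 1.0, 0.1]                          # hypothetical next-token logits
sharp = softmax_with_temperature(logits, 0.1)     # near-deterministic
mild  = softmax_with_temperature(logits, 0.6)     # the R1 / QwQ-style recommendation
flat  = softmax_with_temperature(logits, 2.0)     # much more uniform
```

<p>At 0.1 almost all probability mass lands on the top token, while at 2.0 the three options become close to equally likely, so a recommendation of 0.6 still leaves meaningful randomness in the sampled reasoning traces.</p>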
]]></description><pubDate>Mon, 28 Apr 2025 23:43:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=43827317</link><dc:creator>mks_shuffle</dc:creator><comments>https://news.ycombinator.com/item?id=43827317</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43827317</guid></item><item><title><![CDATA[New comment by mks_shuffle in "Ask HN: Would you recommend a framework laptop?"]]></title><description><![CDATA[
<p>Can you please elaborate a bit more on your issues with PyTorch on the M4 Mac? I read that PyTorch has some support for the Mac GPU via the MPS backend, but I am not sure how extensive it is. I am looking for a new machine, and PyTorch and LLM inference are among my main intended uses. Sorry for being a bit off-topic from the thread. Thanks.</p>
]]></description><pubDate>Sun, 24 Nov 2024 05:50:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=42226165</link><dc:creator>mks_shuffle</dc:creator><comments>https://news.ycombinator.com/item?id=42226165</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42226165</guid></item><item><title><![CDATA[New comment by mks_shuffle in "Ask HN: Simple web server with drag and drop upload?"]]></title><description><![CDATA[
<p>Another option with Python is uploadserver: <a href="https://github.com/Densaugeo/uploadserver">https://github.com/Densaugeo/uploadserver</a></p><p>The Python built-in, python -m http.server &lt;port&gt;, does not support upload.</p>
]]></description><pubDate>Sat, 10 Aug 2024 09:55:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=41208422</link><dc:creator>mks_shuffle</dc:creator><comments>https://news.ycombinator.com/item?id=41208422</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41208422</guid></item><item><title><![CDATA[New comment by mks_shuffle in "How fast can one reasonably expect to get inference on a ~70B model?"]]></title><description><![CDATA[
<p>You can try the Groq API for faster inference. They use custom hardware to speed up inference. Supported open models can be found here: <a href="https://console.groq.com/docs/models" rel="nofollow">https://console.groq.com/docs/models</a> (includes llama-70b)</p>
]]></description><pubDate>Sat, 25 May 2024 05:44:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=40472876</link><dc:creator>mks_shuffle</dc:creator><comments>https://news.ycombinator.com/item?id=40472876</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40472876</guid></item><item><title><![CDATA[New comment by mks_shuffle in "Ask HN: Is there a data set for GitHub repos associated with academic papers?"]]></title><description><![CDATA[
<p>For ML/DL papers you can check <a href="https://paperswithcode.com/" rel="nofollow">https://paperswithcode.com/</a></p>
]]></description><pubDate>Mon, 01 Jan 2024 02:07:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=38829186</link><dc:creator>mks_shuffle</dc:creator><comments>https://news.ycombinator.com/item?id=38829186</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38829186</guid></item></channel></rss>