<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: Oranguru</title><link>https://news.ycombinator.com/user?id=Oranguru</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 08 Apr 2026 22:36:40 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=Oranguru" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by Oranguru in "Executing programs inside transformers with exponentially faster inference"]]></title><description><![CDATA[
<p>Prompt injection is still a possibility, so while this improves the security posture, it doesn't improve it by much.</p>
]]></description><pubDate>Sat, 14 Mar 2026 01:26:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47372337</link><dc:creator>Oranguru</dc:creator><comments>https://news.ycombinator.com/item?id=47372337</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47372337</guid></item><item><title><![CDATA[New comment by Oranguru in "AirPods liberated from Apple's ecosystem"]]></title><description><![CDATA[
<p>What "other platforms" are you talking about? Just an Android app would suffice. It's not a huge deal for a company worth trillions, especially if the features are already there and they're just blocking non-Apple products. If they deliberately do that, it makes you think they don't really care about their customers and are more interested in locking people into their ecosystem.</p>
]]></description><pubDate>Sun, 16 Nov 2025 16:22:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=45946238</link><dc:creator>Oranguru</dc:creator><comments>https://news.ycombinator.com/item?id=45946238</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45946238</guid></item><item><title><![CDATA[New comment by Oranguru in "Notion releases offline mode"]]></title><description><![CDATA[
<p>You can absolutely sync your vault without a paid subscription. Simply save it within your OneDrive or Google Drive folder. Alternatively, you could use Syncthing if you prefer a self-hosted solution.</p>
]]></description><pubDate>Wed, 20 Aug 2025 04:14:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=44958574</link><dc:creator>Oranguru</dc:creator><comments>https://news.ycombinator.com/item?id=44958574</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44958574</guid></item><item><title><![CDATA[New comment by Oranguru in "TikTok goes dark in the US"]]></title><description><![CDATA[
<p>However, these rights should be guaranteed to a company operating in the USA and strictly adhering to US law. Of course, if the law is (arbitrarily) changed to make this illegal because of the Chinese government's stake, then the company could be forced to shut down, but that would be inconsistent with the Constitution.</p>
]]></description><pubDate>Sun, 19 Jan 2025 16:54:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=42758742</link><dc:creator>Oranguru</dc:creator><comments>https://news.ycombinator.com/item?id=42758742</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42758742</guid></item><item><title><![CDATA[New comment by Oranguru in "Google are deliberately breaking YouTube when it detects you're running Firefox"]]></title><description><![CDATA[
<p>Because both of these are based on Chromium (the open-source version of Chrome).</p>
]]></description><pubDate>Wed, 11 Dec 2024 16:27:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=42389432</link><dc:creator>Oranguru</dc:creator><comments>https://news.ycombinator.com/item?id=42389432</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42389432</guid></item><item><title><![CDATA[New comment by Oranguru in "Bend: a high-level language that runs on GPUs (via HVM2)"]]></title><description><![CDATA[
<p>This approach is valuable because it abstracts away certain complexities, allowing users to focus on the code itself. I find it especially beneficial for users who are unwilling to learn functional languages or to parallelize code in imperative languages. HPC specialists may not be the current target audience, and code generation can always be improved over time; based on the dev's comments, I trust that it will be.</p>
]]></description><pubDate>Sat, 18 May 2024 00:38:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=40395558</link><dc:creator>Oranguru</dc:creator><comments>https://news.ycombinator.com/item?id=40395558</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40395558</guid></item><item><title><![CDATA[New comment by Oranguru in "Yi 1.5"]]></title><description><![CDATA[
<p>You can easily fix this using a grammar constraint with llama.cpp. Add this to the command:
--grammar "root ::= [^一-鿿ぁ-ゟァ-ヿ가-힣]*"<p>This will ban Chinese characters from the sampling process. Works for Yi and Qwen models.</p>
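For checking generated text offline, here is a minimal Python sketch of the same ban, using the same four Unicode ranges as the grammar above (the function name is just for illustration):

```python
# Unicode ranges banned by the grammar constraint:
# CJK Unified Ideographs, Hiragana, Katakana, Hangul Syllables.
BANNED_RANGES = [
    (0x4E00, 0x9FFF),  # 一-鿿 CJK Unified Ideographs
    (0x3041, 0x309F),  # ぁ-ゟ Hiragana
    (0x30A1, 0x30FF),  # ァ-ヿ Katakana
    (0xAC00, 0xD7A3),  # 가-힣 Hangul Syllables
]

def has_banned_chars(text: str) -> bool:
    """Return True if text contains any character from the banned ranges."""
    return any(lo <= ord(ch) <= hi for ch in text for lo, hi in BANNED_RANGES)
```

The grammar enforces this at sampling time, so the model never emits the tokens at all; the sketch is only useful for verifying output after the fact.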
]]></description><pubDate>Mon, 13 May 2024 04:46:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=40339862</link><dc:creator>Oranguru</dc:creator><comments>https://news.ycombinator.com/item?id=40339862</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40339862</guid></item><item><title><![CDATA[New comment by Oranguru in "Show HN: Lapdev, a new open-source remote dev environment management software"]]></title><description><![CDATA[
<p>You can access GPUs within containers using CDI (Container Device Interface):
<a href="https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/cdi-support.html" rel="nofollow">https://docs.nvidia.com/datacenter/cloud-native/container-to...</a>
No additional tools (e.g., nvidia-ctk) are needed.
Docker has recently added support for CDI in version 25.0.</p>
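As a rough sketch, with Docker 25.0+ and a CDI spec already installed under /etc/cdi/, a GPU can be requested by its CDI device name (the exact device names depend on the spec generated on your host):

```shell
# Request a single GPU by its CDI device name:
docker run --rm --device nvidia.com/gpu=0 ubuntu nvidia-smi

# Or request all GPUs:
docker run --rm --device nvidia.com/gpu=all ubuntu nvidia-smi
```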
]]></description><pubDate>Sat, 23 Mar 2024 19:34:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=39802528</link><dc:creator>Oranguru</dc:creator><comments>https://news.ycombinator.com/item?id=39802528</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39802528</guid></item><item><title><![CDATA[New comment by Oranguru in "Show HN: WhisperFusion – Low-latency conversations with an AI chatbot"]]></title><description><![CDATA[
<p>Very interesting. Thanks for the references. Have you released the code or pre-trained models yet or do you plan to do so at some point?</p>
]]></description><pubDate>Tue, 30 Jan 2024 10:32:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=39188482</link><dc:creator>Oranguru</dc:creator><comments>https://news.ycombinator.com/item?id=39188482</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39188482</guid></item><item><title><![CDATA[New comment by Oranguru in "Show HN: Fine-grained stylistic control of LLMs using model arithmetic"]]></title><description><![CDATA[
<p>Thank you for your work. I have been having trouble achieving the desired level of formality in generated text. When I ask for slightly formal content, the result tends to be too formal. However, when I ask the model to reduce the formality or use a semi-formal tone, the text becomes too informal. This tool should allow me to exercise more control over the style of the model's output and stop constantly battling with it.</p>
]]></description><pubDate>Sun, 10 Dec 2023 13:21:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=38591361</link><dc:creator>Oranguru</dc:creator><comments>https://news.ycombinator.com/item?id=38591361</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38591361</guid></item><item><title><![CDATA[New comment by Oranguru in "Getting Started with Orca-2-13B"]]></title><description><![CDATA[
<p>This comment does nothing more than repeat the information provided in the title, and in a very verbose style. I don't understand the trend of GPT-generated comments that has lately been taking over HN and Twitter threads. They are insipid, add absolutely nothing of value to the discussion, and are generally a waste of time to read.</p>
]]></description><pubDate>Mon, 27 Nov 2023 06:31:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=38428930</link><dc:creator>Oranguru</dc:creator><comments>https://news.ycombinator.com/item?id=38428930</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38428930</guid></item><item><title><![CDATA[New comment by Oranguru in "Learn AutoHotKey by stealing my scripts"]]></title><description><![CDATA[
<p>It's about minimising friction in our tasks and reducing any unnecessary obstacles. Even seemingly minor actions that only take a few seconds can build up over time and generate a sense of frustration, especially when they become frequent.
Personally, I have found that dealing with this kind of friction can erode my overall productivity, as I unconsciously shy away from these tedious tasks that involve manual repetition, no matter how small. That's why many of us choose to invest a little time in coming up with these small automations that free us from the clutches of monotonous and repetitive tasks, allowing us to focus on the core of our work.</p>
]]></description><pubDate>Tue, 22 Aug 2023 14:44:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=37223482</link><dc:creator>Oranguru</dc:creator><comments>https://news.ycombinator.com/item?id=37223482</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37223482</guid></item><item><title><![CDATA[New comment by Oranguru in "Show HN: Chie – a cross-platform, native, and extensible desktop client for LLMs"]]></title><description><![CDATA[
<p>Great job!<p>In the future, would you add support for local LLMs, such as LLaMa?</p>
]]></description><pubDate>Mon, 31 Jul 2023 05:03:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=36938982</link><dc:creator>Oranguru</dc:creator><comments>https://news.ycombinator.com/item?id=36938982</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36938982</guid></item><item><title><![CDATA[New comment by Oranguru in "LLaMA2 Chat 70B outperformed ChatGPT"]]></title><description><![CDATA[
<p>Yes, check out: <a href="https://huggingface.co/chat/" rel="nofollow noreferrer">https://huggingface.co/chat/</a><p>You can easily opt out of the data sharing.</p>
]]></description><pubDate>Thu, 27 Jul 2023 22:39:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=36901085</link><dc:creator>Oranguru</dc:creator><comments>https://news.ycombinator.com/item?id=36901085</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36901085</guid></item><item><title><![CDATA[New comment by Oranguru in "In the LLM space, "open source" is being used to mean "downloadable weights""]]></title><description><![CDATA[
<p>Useless for what? Are you comparing the base model with chat-tuned models?<p>Chat-tuned derivatives of LLaMa 2 are already appearing. Given that the base LLaMa 2 model is more efficient than LLaMa 1, it is reasonable to expect that these chat-tuned versions will outperform the ones you mention.</p>
]]></description><pubDate>Fri, 21 Jul 2023 17:09:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=36816497</link><dc:creator>Oranguru</dc:creator><comments>https://news.ycombinator.com/item?id=36816497</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36816497</guid></item><item><title><![CDATA[New comment by Oranguru in "Run Llama 13B with a 6GB graphics card"]]></title><description><![CDATA[
<p>"Compared to GPT2 it’s on par"
Any benchmarks or evidence to support this claim? If you try to find them, official benchmarks will tell you that this is not true. Even the smallest LLaMa model (7B) is far ahead of GPT2, like an order of magnitude better in perplexity.</p>
]]></description><pubDate>Mon, 15 May 2023 05:27:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=35944187</link><dc:creator>Oranguru</dc:creator><comments>https://news.ycombinator.com/item?id=35944187</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35944187</guid></item><item><title><![CDATA[New comment by Oranguru in "Walkout at global science journal over ‘unethical’ fees"]]></title><description><![CDATA[
<p>This is also a requirement for research conducted under projects funded by the European Commission.</p>
]]></description><pubDate>Tue, 09 May 2023 02:19:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=35869493</link><dc:creator>Oranguru</dc:creator><comments>https://news.ycombinator.com/item?id=35869493</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35869493</guid></item><item><title><![CDATA[New comment by Oranguru in "Releasing 3B and 7B RedPajama"]]></title><description><![CDATA[
<p>Take a look at: <a href="https://huggingface.co/blog/trl-peft" rel="nofollow">https://huggingface.co/blog/trl-peft</a></p>
]]></description><pubDate>Sat, 06 May 2023 04:09:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=35837885</link><dc:creator>Oranguru</dc:creator><comments>https://news.ycombinator.com/item?id=35837885</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35837885</guid></item><item><title><![CDATA[New comment by Oranguru in "The Coming of Local LLMs"]]></title><description><![CDATA[
<p>Yes, there are. For example: <a href="https://laion.ai/blog/open-flamingo/" rel="nofollow">https://laion.ai/blog/open-flamingo/</a></p>
]]></description><pubDate>Wed, 12 Apr 2023 00:16:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=35533502</link><dc:creator>Oranguru</dc:creator><comments>https://news.ycombinator.com/item?id=35533502</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35533502</guid></item><item><title><![CDATA[New comment by Oranguru in "StackLlama: A hands-on guide to train LlaMa with RLHF"]]></title><description><![CDATA[
<p>I must remind you that large language models are not designed to perform arithmetic calculations, nor have they been trained to do so. They are trained to recognize patterns in large amounts of text data and generate responses based on that learned information. While they may not be able to perform some specific tasks, they can still provide useful information and insights in a wide range of applications. Judging the quality of *language* models by their inability to do basic math is completely unfair.</p>
]]></description><pubDate>Sat, 08 Apr 2023 02:16:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=35489796</link><dc:creator>Oranguru</dc:creator><comments>https://news.ycombinator.com/item?id=35489796</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35489796</guid></item></channel></rss>