<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: xlayn</title><link>https://news.ycombinator.com/user?id=xlayn</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 21 Apr 2026 16:17:12 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=xlayn" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by xlayn in "AMD GPU LLM Performance Testing"]]></title><description><![CDATA[
<p>I had a 6950 in my PC from when I built the thing... then bought the 7900 for $5xx, which lets me run more models. Then I saw the "Radeon AI PRO" and, after a couple of frustrating talks with a certain LLM trying to get an idea of what the card's speed is, I decided to just go buy it and test it to check the actual speed.</p>
]]></description><pubDate>Sat, 11 Apr 2026 02:41:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47726789</link><dc:creator>xlayn</dc:creator><comments>https://news.ycombinator.com/item?id=47726789</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47726789</guid></item><item><title><![CDATA[AMD GPU LLM Performance Testing]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/alainnothere/AmdPerformanceTesting">https://github.com/alainnothere/AmdPerformanceTesting</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47726788">https://news.ycombinator.com/item?id=47726788</a></p>
<p>Points: 4</p>
<p># Comments: 1</p>
]]></description><pubDate>Sat, 11 Apr 2026 02:41:42 +0000</pubDate><link>https://github.com/alainnothere/AmdPerformanceTesting</link><dc:creator>xlayn</dc:creator><comments>https://news.ycombinator.com/item?id=47726788</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47726788</guid></item><item><title><![CDATA[New comment by xlayn in "Show HN: Duplicate 3 layers in a 24B LLM, logical deduction .22→.76. No training"]]></title><description><![CDATA[
<p>I updated the results with just the Devstral part, but ran the full suite for it, and posted all the results files as well as a script to re-run the process.<p>The results are more spectacular...<p>The model scored way better on gsm8k, but lost a bit in the other categories.</p>
]]></description><pubDate>Fri, 20 Mar 2026 01:54:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=47449413</link><dc:creator>xlayn</dc:creator><comments>https://news.ycombinator.com/item?id=47449413</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47449413</guid></item><item><title><![CDATA[New comment by xlayn in "Show HN: Duplicate 3 layers in a 24B LLM, logical deduction .22→.76. No training"]]></title><description><![CDATA[
<p>Fair point on the writing style; I used Claude extensively on this project, including drafting. The experiments and ideas are mine, though.<p>On the prior art: you're right that layer duplication has been explored before. What I think is new here is the systematic sweep toolkit + validation on standard benchmarks (lm-eval BBH, GSM8K, MBPP) showing exactly which 3 layers matter for which model. The Devstral logical deduction result (0.22→0.76) was a surprise to me.<p>If there are ComfyUI nodes that do this for image models, I'd love links; the "cognitive modes" finding (different duplication patterns that lead to different capability profiles from the same weights) might be even more interesting for diffusion models.</p>
]]></description><pubDate>Thu, 19 Mar 2026 02:22:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47434040</link><dc:creator>xlayn</dc:creator><comments>https://news.ycombinator.com/item?id=47434040</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47434040</guid></item><item><title><![CDATA[New comment by xlayn in "Show HN: Duplicate 3 layers in a 24B LLM, logical deduction .22→.76. No training"]]></title><description><![CDATA[
<p>You can check the results for Devstral here; speed limits me, but these are the results for the first 50 tests of the command<p><pre><code>  # Run lm-evaluation-harness
  lm_eval --model local-chat-completions \
      --model_args model=test,base_url=http://localhost:8089/v1/chat/completions,num_concurrent=1,max_retries=3,tokenized_requests=False \
      --tasks gsm8k_cot,ifeval,mbpp,bbh_cot_fewshot_logical_deduction_five_objects \
      --apply_chat_template --limit 50 \
      --output_path ./eval_results</code></pre></p>
]]></description><pubDate>Thu, 19 Mar 2026 02:00:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47433883</link><dc:creator>xlayn</dc:creator><comments>https://news.ycombinator.com/item?id=47433883</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47433883</guid></item><item><title><![CDATA[New comment by xlayn in "Show HN: Duplicate 3 layers in a 24B LLM, logical deduction .22→.76. No training"]]></title><description><![CDATA[
<p>I explored that, again with Devstral, but executing the same circuit 4 times led to lower scores on the tests.<p>I chatted with the model to see if the thing was still working, and it seemed coherent to me; I didn't notice anything off.<p>I need to automate testing like that, where you pick the local maximum and then iterate over it, picking layers to see if it's actually better, and then leave the thing running overnight.</p>
]]></description><pubDate>Thu, 19 Mar 2026 01:56:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47433853</link><dc:creator>xlayn</dc:creator><comments>https://news.ycombinator.com/item?id=47433853</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47433853</guid></item><item><title><![CDATA[New comment by xlayn in "Show HN: Duplicate 3 layers in a 24B LLM, logical deduction .22→.76. No training"]]></title><description><![CDATA[
<p>The other interesting point is that right now I'm copy-pasting the layers, but a patch in llama.cpp could make the same model behave better simply by following a different "flow", without needing more VRAM...<p>if this is validated enough, it could eventually lead to shipping some kind of "mix" architecture with layers executed to fit some "vibe?"<p>Devstral was the first one I tried, optimizing for math/eq, but that didn't result in any better model; then I added the reason part, and that resulted in a "better" model.<p>I used the Devstral with the vibe.cli and it looked sharp to me, the thing didn't fail; I also used the chat to "vibe" check it and it looked ok to me.<p>The other thing is that I picked a particular circuit and that was "good", but I don't know if it was a local maximum. I think I ran just like 10 sets of the "fast test harness" and picked the config that gave the best score... once I had that, I used that model and ran it against lm_eval limited to only 50 tests... again for the sake of speed; I didn't want to wait a week to discover the config was bad.</p>
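<p>The "different flow, same VRAM" idea can be sketched in plain Python (the <code>Layer</code>/<code>run_layers</code> names here are hypothetical, not from the repo or llama.cpp): a route is just a list of layer indices, and a duplicated index reuses the same weights object, so memory stays flat while the block runs twice.</p>

```python
# Sketch: routing is a list of layer indices; duplicated indices reuse the
# same layer object, so "running layers 12-14 twice" costs no extra memory.

class Layer:
    def __init__(self, idx):
        self.idx = idx  # stand-in for this block's weights

    def forward(self, x):
        # A real transformer block would transform hidden states here;
        # appending the index just makes the execution order visible.
        return x + [self.idx]

def run_layers(layers, route, x):
    """Run the shared layers in the order given by `route`."""
    for i in route:
        x = layers[i].forward(x)
    return x

n = 40
layers = [Layer(i) for i in range(n)]             # one set of weights
base_route = list(range(n))                       # normal forward pass
dup_route = list(range(15)) + [12, 13, 14] + list(range(15, n))  # 12-14 run again

# Same 40 weight objects either way; only the flow differs.
assert len({id(layers[i]) for i in dup_route}) == n
```

<p>A patched runtime would only need to consult the route list instead of iterating layers 0..n-1.</p>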
]]></description><pubDate>Thu, 19 Mar 2026 01:53:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=47433819</link><dc:creator>xlayn</dc:creator><comments>https://news.ycombinator.com/item?id=47433819</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47433819</guid></item><item><title><![CDATA[New comment by xlayn in "Show HN: Duplicate 3 layers in a 24B LLM, logical deduction .22→.76. No training"]]></title><description><![CDATA[
<p>I published the results for Devstral... results folder of the GitHub <a href="https://github.com/alainnothere/llm-circuit-finder/tree/main/results" rel="nofollow">https://github.com/alainnothere/llm-circuit-finder/tree/main...</a><p>I'm using the following configuration:
--tasks gsm8k_cot,ifeval,mbpp,bbh_cot_fewshot_logical_deduction_five_objects. I also tried humaneval, but something in the harness is missing and it failed...<p>Note that I'm running 50 tests for each task, mostly because of time limitations, as it takes about two hours to validate the run for the base model and the modified one.<p>I'll also try to publish the results of the small test harness when I'm testing the multiple-layer configurations; for reference this is phi-4-Q6_K.gguf, still running. I'm now giving more importance to the Reason factor, which comes from running a small subset of all the problems in the task config above.<p>Initially I tried the approach of the highest math/eq, but it resulted in models that were less capable overall with the exception of math. And math, like in the original research, is basically how good the model was at giving you the answer to a really tough question, say the cube root of some really large number... but that didn't translate to the model being better at other tasks...<p><pre><code>  Config  | Lyr | Math   | EQ    | Reas   | Math Δ  | EQ Δ  | Reas Δ  | Comb Δ
  --------|-----|--------|-------|--------|---------|-------|---------|-------
  BASE    |   0 | 0.7405 | 94.49 | 94.12% |     --- |   --- |     --- |    ---
  (6,9)   |   3 | 0.7806 | 95.70 | 94.12% | +0.0401 | +1.21 |  +0.00% |  +1.21
  (9,12)  |   3 | 0.7247 | 95.04 | 94.12% | -0.0158 | +0.55 |  +0.00% |  +0.55
  (12,15) |   3 | 0.7258 | 94.14 | 88.24% | -0.0147 | -0.35 |  -5.88% |  -6.23
  (15,18) |   3 | 0.7493 | 95.74 | 88.24% | +0.0088 | +1.25 |  -5.88% |  -4.63
  (18,21) |   3 | 0.7204 | 93.40 | 94.12% | -0.0201 | -1.09 |  +0.00% |  -1.09
  (21,24) |   3 | 0.7107 | 92.97 | 88.24% | -0.0298 | -1.52 |  -5.88% |  -7.41
  (24,27) |   3 | 0.6487 | 95.27 | 88.24% | -0.0918 | +0.78 |  -5.88% |  -5.10
  (27,30) |   3 | 0.7180 | 94.65 | 88.24% | -0.0225 | +0.16 |  -5.88% |  -5.73
  (30,33) |   3 | 0.7139 | 94.02 | 94.12% | -0.0266 | -0.47 |  +0.00% |  -0.47
  (33,36) |   3 | 0.7104 | 94.53 | 94.12% | -0.0301 | +0.04 |  +0.00% |  +0.04
  (36,39) |   3 | 0.7017 | 94.69 | 94.12% | -0.0388 | +0.20 |  +0.00% |  +0.20
  (6,10)  |   4 | 0.8125 | 96.37 | 88.24% | +0.0720 | +1.88 |  -5.88% |  -4.01
  (9,13)  |   4 | 0.7598 | 95.08 | 94.12% | +0.0193 | +0.59 |  +0.00% |  +0.59
  (12,16) |   4 | 0.7482 | 93.71 | 88.24% | +0.0076 | -0.78 |  -5.88% |  -6.66
  (15,19) |   4 | 0.7617 | 95.16 | 82.35% | +0.0212 | +0.66 | -11.76% | -11.10
  (18,22) |   4 | 0.6902 | 92.27 | 88.24% | -0.0504 | -2.23 |  -5.88% |  -8.11
  (21,25) |   4 | 0.7288 | 94.10 | 88.24% | -0.0117 | -0.39 |  -5.88% |  -6.27
  (24,28) |   4 | 0.6823 | 94.57 | 88.24% | -0.0583 | +0.08 |  -5.88% |  -5.80
  (27,31) |   4 | 0.7224 | 94.41 | 82.35% | -0.0181 | -0.08 | -11.76% | -11.84
  (30,34) |   4 | 0.7070 | 94.73 | 94.12% | -0.0335 | +0.23 |  +0.00% |  +0.23
  (33,37) |   4 | 0.7009 | 94.38 |100.00% | -0.0396 | -0.12 |  +5.88% |  +5.77
  (36,40) |   4 | 0.7057 | 94.84 | 88.24% | -0.0348 | +0.35 |  -5.88% |  -5.53
  (6,11)  |   5 | 0.8168 | 95.62 |100.00% | +0.0762 | +1.13 |  +5.88% |  +7.02
  (9,14)  |   5 | 0.7245 | 95.23 | 88.24% | -0.0160 | +0.74 |  -5.88% |  -5.14
  (12,17) |   5 | 0.7825 | 94.88 | 88.24% | +0.0420 | +0.39 |  -5.88% |  -5.49
  (15,20) |   5 | 0.7832 | 95.86 | 88.24% | +0.0427 | +1.37 |  -5.88% |  -4.52
  (18,23) |   5 | 0.7208 | 92.42 | 88.24% | -0.0197 | -2.07 |  -5.88% |  -7.95
  (21,26) |   5 | 0.7055 | 92.89 | 88.24% | -0.0350 | -1.60 |  -5.88% |  -7.48
  (24,29) |   5 | 0.5825 | 95.04 | 94.12% | -0.1580 | +0.55 |  +0.00% |  +0.55
  (27,32) |   5 | 0.7088 | 94.18 | 88.24% | -0.0317 | -0.31 |  -5.88% |  -6.19
  (30,35) |   5 | 0.6787 | 94.69 | 88.24% | -0.0618 | +0.20 |  -5.88% |  -5.69
  (33,38) |   5 | 0.6650 | 94.96 | 88.24% | -0.0755 | +0.47 |  -5.88% |  -5.41
  (6,12)  |   6 | 0.7692 | 95.39 | 94.12% | +0.0287 | +0.90 |  +0.00% |  +0.90
  (9,15)  |   6 | 0.7405 | 94.65 | 94.12% | -0.0000 | +0.16 |  +0.00% |  +0.16
  (12,18) |   6 | 0.7582 | 94.57 | 88.24% | +0.0177 | +0.08 |  -5.88% |  -5.80
  (15,21) |   6 | 0.7828 | 93.52 | 88.24% | +0.0423 | -0.98 |  -5.88% |  -6.86
  (18,24) |   6 | 0.7308 | 92.93 | 94.12% | -0.0097 | -1.56 |  +0.00% |  -1.56
  (21,27) |   6 | 0.6791 | 92.54 | 82.35% | -0.0615 | -1.95 | -11.76% | -13.72</code></pre></p>
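<p>For what it's worth, the Comb Δ column in the table above looks like EQ Δ + Reas Δ, with Math Δ tracked separately; a throwaway Python check (ad-hoc names, a few rows copied from the table):</p>

```python
# Recompute the combined delta (EQ delta + Reasoning delta, which matches the
# table's Comb column up to rounding) and rank a few configs by it.
BASE_EQ, BASE_REAS = 94.49, 94.12

rows = {            # config -> (EQ, Reas%), values copied from the table
    "(6,9)":  (95.70, 94.12),
    "(6,10)": (96.37, 88.24),
    "(6,11)": (95.62, 100.00),
}

def comb_delta(eq, reas):
    return (eq - BASE_EQ) + (reas - BASE_REAS)

ranked = sorted(rows, key=lambda c: comb_delta(*rows[c]), reverse=True)
```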
]]></description><pubDate>Thu, 19 Mar 2026 01:37:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47433703</link><dc:creator>xlayn</dc:creator><comments>https://news.ycombinator.com/item?id=47433703</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47433703</guid></item><item><title><![CDATA[Show HN: Duplicate 3 layers in a 24B LLM, logical deduction .22→.76. No training]]></title><description><![CDATA[
<p>I replicated David Ng's RYS method (<a href="https://dnhkng.github.io/posts/rys/" rel="nofollow">https://dnhkng.github.io/posts/rys/</a>) on consumer AMD GPUs 
(RX 7900 XT + RX 6950 XT) and found something I didn't expect.<p>Transformers appear to have discrete "reasoning circuits" — contiguous blocks of 3-4 layers that 
act as indivisible cognitive units. Duplicate the right block and the model runs its reasoning 
pipeline twice. No weights change. No training. The model just thinks longer.<p>The results on standard benchmarks (lm-evaluation-harness, n=50):<p>Devstral-24B, layers 12-14 duplicated once:
- BBH Logical Deduction: 0.22 → 0.76
- GSM8K (strict): 0.48 → 0.64
- MBPP (code gen): 0.72 → 0.78
- Nothing degraded<p>Qwen2.5-Coder-32B, layers 7-9 duplicated once:
- Reasoning probe: 76% → 94%<p>The weird part: different duplication patterns create different cognitive "modes" from the same 
weights. Double-pass boosts math. Triple-pass boosts emotional reasoning. Interleaved doubling 
(13,13,14,14,15,15,16) creates a pure math specialist. Same model, same VRAM, different routing.<p>The circuit boundaries are sharp — shift by one layer and the effect disappears or inverts. 
Smaller models (24B) have tighter circuits (3 layers) than larger ones (Ng found 7 layers in 72B).<p>Tools to find circuits in any GGUF model and apply arbitrary layer routing are in the repo. 
The whole thing — sweep, discovery, validation — took one evening.<p>Happy to answer questions.</p>
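<p>The block vs. interleaved patterns described above can be sketched as route builders (hypothetical helper names; the repo's actual API may differ): each returns the layer execution order for a model with <code>n_layers</code> layers.</p>

```python
# Two route shapes from the post: run a block twice in sequence, or run each
# layer in the block twice back-to-back (e.g. 13,13,14,14,15,15,16,...).

def duplicate_block(n_layers, start, end):
    """Block duplication: ..., start..end, start..end, end+1, ..."""
    block = list(range(start, end + 1))
    return list(range(start)) + block + block + list(range(end + 1, n_layers))

def interleave_block(n_layers, start, end):
    """Interleaved doubling: ..., start, start, start+1, start+1, ..."""
    doubled = [i for layer in range(start, end + 1) for i in (layer, layer)]
    return list(range(start)) + doubled + list(range(end + 1, n_layers))
```

<p>For Devstral's "layers 12-14 duplicated once", <code>duplicate_block(n, 12, 14)</code> yields ...,11,12,13,14,12,13,14,15,...; <code>interleave_block(n, 13, 15)</code> produces the 13,13,14,14,15,15,16 pattern quoted above.</p>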
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47431671">https://news.ycombinator.com/item?id=47431671</a></p>
<p>Points: 265</p>
<p># Comments: 80</p>
]]></description><pubDate>Wed, 18 Mar 2026 21:31:12 +0000</pubDate><link>https://github.com/alainnothere/llm-circuit-finder</link><dc:creator>xlayn</dc:creator><comments>https://news.ycombinator.com/item?id=47431671</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47431671</guid></item><item><title><![CDATA[New comment by xlayn in "Steam Deck OLED"]]></title><description><![CDATA[
<p>There is a performance improvement: as per [0][1], the memory speed went up from 5500 MT/s to 6400 MT/s.<p>[0] <a href="https://www.steamdeck.com/en/tech" rel="nofollow noreferrer">https://www.steamdeck.com/en/tech</a>
[1] <a href="https://www.steamdeck.com/en/tech/deck" rel="nofollow noreferrer">https://www.steamdeck.com/en/tech/deck</a></p>
]]></description><pubDate>Thu, 09 Nov 2023 22:35:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=38212360</link><dc:creator>xlayn</dc:creator><comments>https://news.ycombinator.com/item?id=38212360</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38212360</guid></item><item><title><![CDATA[Tell HN: CryptoWorker Running on Telegram Web]]></title><description><![CDATA[
<p>While creating an entry in uBlock Origin for the sidebar with the contacts, I fired up the developer tools in Firefox and noticed these entries:<p><pre><code>  CryptoWorker start b3d84ba5-2f9c-419b-b555-f8d8c4024d5b:1:68170
  CryptoWorker start b3d84ba5-2f9c-419b-b555-f8d8c4024d5b:1:68170
  CryptoWorker start cbab8426-c9ae-44fd-9b77-e6858fc7cfae:1:68170
  CryptoWorker start cbab8426-c9ae-44fd-9b77-e6858fc7cfae:1:68170
  CryptoWorker start e8360a4a-aae4-493c-afa9-ff19753fae18:1:68170
  CryptoWorker start e8360a4a-aae4-493c-afa9-ff19753fae18:1:68170
</code></pre>
The Telegram version reported is Telegram WebK 1.9.0 (395),<p>using Firefox 116 64-bit on Ubuntu 23.04</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=37596827">https://news.ycombinator.com/item?id=37596827</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Thu, 21 Sep 2023 12:47:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=37596827</link><dc:creator>xlayn</dc:creator><comments>https://news.ycombinator.com/item?id=37596827</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37596827</guid></item><item><title><![CDATA[New comment by xlayn in "Ask HN: Remote workers, what headphone/mic combo do you use for video calls?"]]></title><description><![CDATA[
<p>Sennheiser PC31. I'm not sure if the speakers on those are shared with the PX headphones, which, by the way, sound very nice for music.<p><a href="https://www.amazon.com/Sennheiser-31-II-Binaural-Headset-Microphone/dp/B0077L2WCY" rel="nofollow">https://www.amazon.com/Sennheiser-31-II-Binaural-Headset-Mic...</a><p>And in case your laptop/desktop doesn't have mic and headphone jacks, you can use<p><a href="https://www.amazon.com/Sabrent-External-Adapter-Windows-AU-MMSA/dp/B00IRVQ0F8" rel="nofollow">https://www.amazon.com/Sabrent-External-Adapter-Windows-AU-M...</a><p>which works with Linux; not sure about Windows/Mac.<p>For the cellphone, the Plantronics Voyager Legend, which is expensive but works very well every time.</p>
]]></description><pubDate>Sat, 26 Nov 2016 15:03:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=13043830</link><dc:creator>xlayn</dc:creator><comments>https://news.ycombinator.com/item?id=13043830</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=13043830</guid></item><item><title><![CDATA[New comment by xlayn in "HP Unveils Mini Workstation"]]></title><description><![CDATA[
<p>HP Mac Mini.
This is actually interesting: when Apple capped their PRO line at 16 GB of RAM, HP throws out something with PRO graphics and a Xeon.</p>
]]></description><pubDate>Tue, 15 Nov 2016 18:50:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=12960982</link><dc:creator>xlayn</dc:creator><comments>https://news.ycombinator.com/item?id=12960982</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=12960982</guid></item><item><title><![CDATA[New comment by xlayn in "Use It Too Much and Lose It? The Effect of Working Hours on Cognitive Ability [pdf]"]]></title><description><![CDATA[
<p>Nice catch. There are two parts to supporting my theory; the first was included in my post:<p>As we get older, more and more energy is used on preservation (read it as fixing damage, and less efficient processes as a result of age); therefore shrinking/eliminating everything not being used is necessary.<p>The second is an entry on how our bodies are machines oriented toward avoiding wasting energy... or better said, preserving it... for that part my canonical reference would be the Algernon argument:<p><a href="http://www.gwern.net/Drug%20heuristics" rel="nofollow">http://www.gwern.net/Drug%20heuristics</a></p>
]]></description><pubDate>Thu, 25 Aug 2016 20:59:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=12362458</link><dc:creator>xlayn</dc:creator><comments>https://news.ycombinator.com/item?id=12362458</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=12362458</guid></item><item><title><![CDATA[New comment by xlayn in "Use It Too Much and Lose It? The Effect of Working Hours on Cognitive Ability [pdf]"]]></title><description><![CDATA[
<p>To me it seems logical:
As we get older, more and more energy is used on preservation (read it as fixing damage, and less efficient processes as a result of age); therefore shrinking/eliminating everything not being used is necessary.<p>Edit: TL;DR: From the PDF conclusions:<p>it is found that working hours up to 25–30
hours per week have a positive impact on cognition for males depending on the measure
and up to 22–27 hours for females. After that, working hours have a negative impact on
cognitive functioning.</p>
]]></description><pubDate>Thu, 25 Aug 2016 19:51:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=12361945</link><dc:creator>xlayn</dc:creator><comments>https://news.ycombinator.com/item?id=12361945</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=12361945</guid></item><item><title><![CDATA[New comment by xlayn in "Neural network spotted deep inside Samsung's Galaxy S7 silicon brain"]]></title><description><![CDATA[
<p>If you have to pull an army of cores to fight two, the really impressive arch belongs to Apple.<p>OTOH I see this as hardware being optimized to fix the cap that software, paradigms, and devs impose.</p>
]]></description><pubDate>Tue, 23 Aug 2016 01:37:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=12340752</link><dc:creator>xlayn</dc:creator><comments>https://news.ycombinator.com/item?id=12340752</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=12340752</guid></item><item><title><![CDATA[New comment by xlayn in "Intel Licenses ARM Technology to Boost Foundry Business"]]></title><description><![CDATA[
<p>Another fab should mean options, and with that, price decreases.
I wonder how much a given chip's price can be decreased with this move.<p>Another way of reading this is that making more use of a given tech should pay off the initial cost faster, thus making more research and improved processes possible.<p>Sadly, another way of reading this is that Intel doesn't have any tech that would translate into better chips anymore.</p>
]]></description><pubDate>Wed, 17 Aug 2016 01:48:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=12301975</link><dc:creator>xlayn</dc:creator><comments>https://news.ycombinator.com/item?id=12301975</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=12301975</guid></item><item><title><![CDATA[New comment by xlayn in "Nvidia stuffs desktop GTX 1080, 1070, 1060 into laptops, drops the “M”"]]></title><description><![CDATA[
<p>Yes, I gave it a read, but if I remember correctly there was a piece that didn't perform up to the speed of the Thunderbolt link, making the whole solution run at 1/4 speed.<p>The solution approach, I think, goes as far back as using the ExpressCard port on EliteBooks:<p><a href="https://www.youtube.com/watch?v=IDiizICogMQ" rel="nofollow">https://www.youtube.com/watch?v=IDiizICogMQ</a><p>It would be incredible, on the other hand, to get these solutions as official products from vendors like Nvidia or Asus without the DIY (not because I'm against it, but because official support would improve the state of these solutions).</p>
]]></description><pubDate>Tue, 16 Aug 2016 13:10:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=12297206</link><dc:creator>xlayn</dc:creator><comments>https://news.ycombinator.com/item?id=12297206</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=12297206</guid></item><item><title><![CDATA[New comment by xlayn in "Nvidia stuffs desktop GTX 1080, 1070, 1060 into laptops, drops the “M”"]]></title><description><![CDATA[
<p>Another good reason for dropping the M is, in my opinion, all related to how good another company got at creating GPUs... that's Intel.<p>Intel first eliminated the whole aftermarket entry-level GPU industry and will probably eliminate the middle tier as well.<p>As Pluma states:<p>"If you're going with a dedicated graphics card in your laptop, battery life is already out of the window, so you might as well get as much processing power as the thing can handle"</p>
]]></description><pubDate>Tue, 16 Aug 2016 12:29:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=12296982</link><dc:creator>xlayn</dc:creator><comments>https://news.ycombinator.com/item?id=12296982</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=12296982</guid></item><item><title><![CDATA[New comment by xlayn in "The mystery noise driving the world mad [video]"]]></title><description><![CDATA[
<p>I had to disable every security measure just to get to a "we need Flash" notice.<p>Anyone with a YouTube link?</p>
]]></description><pubDate>Tue, 16 Aug 2016 00:06:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=12294544</link><dc:creator>xlayn</dc:creator><comments>https://news.ycombinator.com/item?id=12294544</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=12294544</guid></item></channel></rss>