<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: psanchez</title><link>https://news.ycombinator.com/user?id=psanchez</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 25 Apr 2026 19:06:11 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=psanchez" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by psanchez in "Show HN: A Karpathy-style LLM wiki your agents maintain (Markdown and Git)"]]></title><description><![CDATA[
<p>Well, my comment was meant as an example of a setup for actually building something real with reasonable quality. I was replying to that part of the previous comment.<p>In my experience, the difference is context. Agents without structure produce slop, but with a well-curated knowledge base and iteration, they can be useful. I was just sharing a setup that has been working for me lately.<p>Edit: minimal changes for clarity</p>
]]></description><pubDate>Sat, 25 Apr 2026 15:43:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47902280</link><dc:creator>psanchez</dc:creator><comments>https://news.ycombinator.com/item?id=47902280</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47902280</guid></item><item><title><![CDATA[New comment by psanchez in "Show HN: A Karpathy-style LLM wiki your agents maintain (Markdown and Git)"]]></title><description><![CDATA[
<p>Even though I did not know about Andrej Karpathy's tweet from earlier this month, I ended up converging on something very similar.<p>A couple of weeks ago I built a git-based knowledge base designed to run agent prompts on top of it.<p>I connected our company's ticketing system, wiki, GitHub, Jenkins, etc., and spent several hours effectively "onboarding" the AI (I used Claude Opus 4.6). I explained where to find company policies, how developers work, how the build system operates, and how different projects relate to each other.<p>In practice, I treated it like onboarding a new engineer: I fed it a lot of context and had it organize everything into AI-friendly documentation (including an AGENTS.md). I barely wrote anything myself; mostly I just instructed the AI to write and update the files, while I guided the overall structure and refactored as needed.<p>The result was a git-based knowledge base that agents could operate on directly. Since the agent had access to multiple parts of the company, I could give high-level prompts like: investigate this bug (with not much context), produce a root cause analysis, open a ticket, fix it, and verify a build on Jenkins. I did not even need to have the repos locally; the AI would figure it out, clone them, analyze them, create branches following our company policy, etc...<p>For me, this ended up working as a multi-project coordination layer across the company, and it worked much better than I expected.<p>It wasn't all smooth, though. When the AI failed at a task, I had to step in, provide more context, and let it update the documentation itself. But through incremental iterations, each failure improved the system, and its capabilities compounded very quickly.</p>
]]></description><pubDate>Sat, 25 Apr 2026 11:37:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47900637</link><dc:creator>psanchez</dc:creator><comments>https://news.ycombinator.com/item?id=47900637</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47900637</guid></item><item><title><![CDATA[New comment by psanchez in "Show HN: Tiny VM sandbox in C with apps in Rust, C and Zig"]]></title><description><![CDATA[
<p>I just had a look at the code and it is indeed very compact. I haven't compiled or used it.<p>Looks like RISC-V 32-bit with the integer, multiply and atomic instruction extensions. Floating point is supported when compiling the example apps via gcc or similar (not by the emulator itself, but by the compiler emitting the software functions required to emulate the floating point operations).<p>I think it is very clever. Very compact instruction set, with the advantage of being supported by several compilers.<p>It is a wrapper over this other project, which is the one implementing the instruction set itself:
<a href="https://github.com/cnlohr/mini-rv32ima" rel="nofollow">https://github.com/cnlohr/mini-rv32ima</a><p>Kudos to both projects.</p>
]]></description><pubDate>Sat, 13 Dec 2025 11:34:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=46253838</link><dc:creator>psanchez</dc:creator><comments>https://news.ycombinator.com/item?id=46253838</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46253838</guid></item><item><title><![CDATA[New comment by psanchez in "Jiratui – A Textual UI for interacting with Atlassian Jira from your shell"]]></title><description><![CDATA[
<p>BTW, just to make it clear: in the case of jiratui you can also download it directly from the GitHub repo and inspect the code if you wish :D</p>
]]></description><pubDate>Thu, 11 Sep 2025 04:41:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=45207857</link><dc:creator>psanchez</dc:creator><comments>https://news.ycombinator.com/item?id=45207857</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45207857</guid></item><item><title><![CDATA[New comment by psanchez in "Jiratui – A Textual UI for interacting with Atlassian Jira from your shell"]]></title><description><![CDATA[
<p>Indeed. That's why I was transparent from the start. As I mentioned, using an API key this way is generally a bad idea. Even if I'm not a bad actor (which I'm not, but you shouldn't trust me), if someone compromises my server and forges requests, they could potentially access your projects.<p>JIRA's OAuth implementation requires apps to be registered, involves public/private key pairs, and changes the auth flow. That adds complexity and makes setup harder, which is why I opted for a simpler API key setup: you get the API key, you write it down, you can make requests. It is just simpler and does not require JIRA admin rights.<p>For comparison, JiraTUI also uses the user's API token. The difference, I guess, is that it runs locally on your machine, but they could also send it somewhere else. At the end of the day, it comes down to whether you trust what you're downloading versus trusting what runs on a remote server. It is true that locally you could potentially inspect all HTTPS or even TCP requests, whereas with the remote server you don't have a clue.<p>The thing is, OAuth in JIRA demands app registration and certificate management, so I guess many developers end up defaulting to user API keys as the path of least resistance, even if they encourage OAuth as well.</p>
]]></description><pubDate>Thu, 11 Sep 2025 04:31:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=45207798</link><dc:creator>psanchez</dc:creator><comments>https://news.ycombinator.com/item?id=45207798</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45207798</guid></item><item><title><![CDATA[New comment by psanchez in "Jiratui – A Textual UI for interacting with Atlassian Jira from your shell"]]></title><description><![CDATA[
<p>Wow. Really cool. I wasn't expecting something so polished.<p>JIRA speed drives me crazy sometimes, so a couple of months ago I decided to build myself a tool to do instant searches/filters on multiple projects right from the browser, just to scratch my own itch.<p>I just wanted to see if I could have near-instant filtering. I think I got pretty decent performance by using some JS tricks. I'm sure there might be ways to make it even faster.<p>The page is around 70kb (HTML+CSS+JS). Everything is manually crafted. I know the design won't win a beauty contest, but it does feel instant and works for my personal use-case. I had a lot of fun building this side-project.<p>There is a public URL, feel free to try it out [1]. Already mentioned in a previous comment on HN a while ago [2].<p>[1] <a href="https://jetboard.pausanchez.com" rel="nofollow">https://jetboard.pausanchez.com</a>
[2] <a href="https://news.ycombinator.com/item?id=44740472">https://news.ycombinator.com/item?id=44740472</a><p>For the record, it uses a proxy because of CORS. The proxy is a few lines of Go. No npm or any other framework was used to make the project. Anyway, if anybody is interested in the source code to run it yourself, I'm happy to make the project public. Trusting a proxy from some random guy on the internet is probably a bad idea, given all the npm shit that happened yesterday. In any case, if you want to try it, feel free, but use at your own risk :P</p>
]]></description><pubDate>Wed, 10 Sep 2025 20:52:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=45203512</link><dc:creator>psanchez</dc:creator><comments>https://news.ycombinator.com/item?id=45203512</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45203512</guid></item><item><title><![CDATA[New comment by psanchez in "Fast"]]></title><description><![CDATA[
<p>Fast is a distinctive feature.<p>For what it's worth, I built myself a custom Jira board last month, so I could instantly search, filter and group tickets (by title, status, assignee, version, ...)<p>Motivation: running queries and finding tickets on JIRA kills me sometimes.<p>The board is not perfect, but it works fast and I made it super lightweight. In case anybody wants to give it a try:<p><a href="https://jetboard.pausanchez.com/" rel="nofollow">https://jetboard.pausanchez.com/</a><p>Don't try it on mobile, use desktop. Unfortunately it uses a proxy and requires an API key, but it doesn't store anything in the backend (it just proxies the request because of CORS). Maybe there is an API or a way to query a Jira cloud instance directly from the browser; I just tried the first approach and moved on. It even crossed my mind to add it somehow to the Jira marketplace...<p>Anyway, it caches stuff locally and refreshes often. Filtering uses several tricks to feel instant.<p>The UI can be improved, but it uses a minimalistic interface on purpose, like HN.<p>If anybody tries it, I'll be glad to hear your thoughts.</p>
]]></description><pubDate>Wed, 30 Jul 2025 22:51:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=44740472</link><dc:creator>psanchez</dc:creator><comments>https://news.ycombinator.com/item?id=44740472</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44740472</guid></item><item><title><![CDATA[New comment by psanchez in "Strategies for Fast Lexers"]]></title><description><![CDATA[
<p>The jump table is interesting, although I guess the performance of a switch will be similar if properly optimized by the compiler, but I would not be able to tell without trying. Also, different compilers might take different approaches.<p>A few months ago I built a toy boolean expression parser as a weekend project. The main goal was simple: evaluate an expression and return true or false. It supported basic types like int, float, string, arrays, variables, and even custom operators.<p>The syntax and grammar were intentionally kept simple. I wanted the whole implementation to be self-contained and compact, something that could live in just a .h and .cc file. Single pass for lexing, parsing, and evaluation.<p>Once the first version was functional, I challenged myself to optimize it for speed. Here are some of the performance-related tricks I remember using:<p><pre><code>  - No string allocations: used the input *str directly, relying on pointer manipulation instead of allocating memory for substrings.
  - Stateful parsing: maintained a parsing state structure passed by reference to avoid unnecessary copies or allocations.
  - Minimized allocations: tried to avoid heap allocations wherever possible. Some were unavoidable during evaluation, but I kept them to a minimum.
  - Branch prediction-friendly design: used lookup tables to assist with token identification (mapping the first character to token type and validating identifier characters).
  - Inline literal parsing: converted integer and float literals to their native values directly during lexing instead of deferring conversion to a later phase.
</code></pre>
I think all the tricks are mentioned in the article already.<p>For what it's worth, here is the project:<p><pre><code>  https://github.com/pausan/tinyrulechecker
</code></pre>
I used this expression to assess the performance on an Intel(R) Core(TM) i7-8565U CPU @ 1.80GHz (launched Q3 2018):<p><pre><code>  myfloat.eq(1.9999999) || myint.eq(32)

</code></pre>
I know it is a simple expression and a larger expression would likely perform worse due to variable lookups, ... I could get a speed of 287MB/s or 142ns per evaluation (7M evaluations per second). I was pleasantly surprised to reach those speeds given that 1 evaluation is a full cycle of lexing, parsing and evaluating the expression itself.<p>The next step I considered was using SIMD for tokenizing, but I'm not sure it would have helped much with the overall expression evaluation times; I seem to recall most of the time was spent in the parser or evaluation phases anyway, not the lexer.<p>It was a fun project.</p>
]]></description><pubDate>Mon, 14 Jul 2025 17:50:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=44563105</link><dc:creator>psanchez</dc:creator><comments>https://news.ycombinator.com/item?id=44563105</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44563105</guid></item><item><title><![CDATA[New comment by psanchez in "AI Meets WinDBG"]]></title><description><![CDATA[
<p>It looks like it is using "Microsoft Console Debugger (CDB)" as the interface to windbg.<p>Just had a quick look at the code:
<a href="https://github.com/svnscha/mcp-windbg/blob/main/src/mcp_server_windbg/server.py">https://github.com/svnscha/mcp-windbg/blob/main/src/mcp_serv...</a><p>I might be wrong, but at first glance I don't think it is only using those 4 commands. It might be using them internally to get context to pass to the AI agent, but it looks like it exposes:<p><pre><code>    - open_windbg_dump
    - run_windbg_cmd
    - close_windbg_dump
    - list_windbg_dumps
</code></pre>
The most interesting one is "run_windbg_cmd" because it might allow the MCP server to send whatever the AI agent wants. E.g:<p><pre><code>    elif name == "run_windbg_cmd":
        args = RunWindbgCmdParams(**arguments)
        session = get_or_create_session(
            args.dump_path, cdb_path, symbols_path, timeout, verbose
        )
        output = session.send_command(args.command)
        return [TextContent(
            type="text",
            text=f"Command: {args.command}\n\nOutput:\n```\n" + "\n".join(output) + "\n```"
        )]

</code></pre>
(edit: formatting)</p>
]]></description><pubDate>Mon, 05 May 2025 06:54:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=43892535</link><dc:creator>psanchez</dc:creator><comments>https://news.ycombinator.com/item?id=43892535</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43892535</guid></item><item><title><![CDATA[New comment by psanchez in "When Greedy Algorithms Can Be Faster [C++]"]]></title><description><![CDATA[
<p>Didn't know about the Ziggurat algorithm. The use of a table to directly accept or reject is interesting, although I think I would need to implement it myself to fully understand it. Your comments in the code are great, but I would still need to dedicate some time to fully grasp it.<p>I'm wondering what would happen if a 2D or 3D array was used instead, so that instead of working with the unit circle / unit sphere, you worked on a 256x circle/sphere.<p>Assuming the center of the circle/sphere was at position (127, 127) or (127, 127, 127), you could precompute which of those elements in the array would be part of that 256 sphere/circle radius, and only the elements on the boundary of the circle/sphere would need to be marked as special. You would only need 3 values (2 bits per item).<p><pre><code>   0 = not in the circle/sphere
   1 = in the circle/sphere
   2 = might be in or out
</code></pre>
Then you would only need to randomly pick a point and do a single lookup to evaluate whether it is in the 2D/3D array. Most of the time only simple math would be involved and a simple accept/reject would cause it to return a value. I guess it would also reduce the number of additional retries to about 0.78% on a circle (one circle intersection for every 128 items = 1/128 = 0.78%).<p>From my limited understanding, what I'm describing looks like a simpler implementation but would require more memory, and in the end it would have the same runtime performance as yours (assuming memory and processor caches were the same, which they are probably not). Uhm... I guess the implementation you present is actually doing something similar but with a quarter of the circle, so you need less memory.<p>Interesting, thanks for sharing.</p>
]]></description><pubDate>Sun, 02 Feb 2025 12:15:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=42908187</link><dc:creator>psanchez</dc:creator><comments>https://news.ycombinator.com/item?id=42908187</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42908187</guid></item><item><title><![CDATA[New comment by psanchez in "When Greedy Algorithms Can Be Faster [C++]"]]></title><description><![CDATA[
<p>I think I would call it a naive algorithm rather than greedy.<p>It looked like an interesting problem, so I spent some time this morning exploring whether there would be any performance improvement by pregenerating an array of X items (where X is around 1M to 16M items) and then randomly returning one of them at a time. I explored the project and copied the functions to be as faithful to the original implementation as possible.<p>Generating 10M unit sphere vectors (best of 3 runs, g++ 13, Linux, Intel i7-8565U, one core for tests):<p><pre><code>  - naive/rejection: ~574ms
  - analytical: ~1122ms
  - pregen 1M elements: ~96ms
</code></pre>
That's almost 6x faster than the rejection method. Setup of the 1M elements is done once and does not count toward the metrics. The numbers above use the double type; using float yields around a 4x improvement.<p>After looking at those results I decided to try it on the project itself, so I downloaded it, compiled it and applied similar optimizations, only updating the circle and sphere random generators (with 16M unit vectors created only once over the app lifetime), but got almost no noticeable benefits (marginal at most). Hard to tell because of the random nature of the raytracing implementation. On the bright side, the image quality was on par. Honestly, I was afraid this method would generate poor visuals.<p>Just for the record, I'm talking about something as simple as:<p><pre><code>  std::vector<Vec3> g_unitSpherePregen;
  uint32_t g_unitSpherePregenIndex = 0;

  void setupUnitSpherePregen(uint32_t nElements) {
    g_unitSpherePregen.resize(nElements);
    for (uint32_t i = 0; i < nElements; i++) {
      g_unitSpherePregen[i] = unitSphereNaive();  // call the original naive or analytical method
    }
  }

  Vec3 unitSpherePregen() {
    g_unitSpherePregenIndex = (g_unitSpherePregenIndex + 1) % g_unitSpherePregen.size();
    return g_unitSpherePregen[g_unitSpherePregenIndex];
  }
 
</code></pre>
I also tried using a PRNG (std::mt19937 and xorshf96) in unitSpherePregen instead of the incremented variable, but the increment was faster and still yielded good visual results.<p>The next step would be profiling, but I don't think I will invest more time in this.<p>Edit: fix formatting</p>
]]></description><pubDate>Sat, 01 Feb 2025 18:57:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=42901078</link><dc:creator>psanchez</dc:creator><comments>https://news.ycombinator.com/item?id=42901078</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42901078</guid></item><item><title><![CDATA[New comment by psanchez in "Show HN: Lightpanda, an open-source headless browser in Zig"]]></title><description><![CDATA[
<p>I think this is a really cool project. Scraping aside, I would definitely use this with Playwright for end2end tests if it had 100% compatibility with Chrome and ran with a fraction of the time/memory.<p>At my company we have a small project where we are running the equivalent of 6.5 hours of end2end tests daily using Playwright. Running the tests in parallel takes around half an hour. Your project is still in very early stages, but assuming 10x speed, that would mean we could pass all our tests in roughly 3 min (best case scenario).<p>That being said, I would make use of your browser, but would likely not make use of your business offering (our tests require an internal VPN, we have a custom solution for reporting, and it would be a lot of work to change for little savings; we currently run all tests on spot/preemptible instances which are already 80% cheaper).<p>Business-wise I found very little info on your website. "4x the efficiency at half the cost" is a good catch phrase, but compared to what? I mean, you can have servers in Hetzner or in AWS and one is already a fraction of the cost of the other. How convenient is it to launch things on your remote platform vs launching them locally or setting it up yourself? Does it provide any advantages for web scraping compared to other solutions? How parallelizable is it? Do you have any paying customers already?<p>Supercool tech project. Best of luck!</p>
]]></description><pubDate>Sat, 25 Jan 2025 07:14:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=42820090</link><dc:creator>psanchez</dc:creator><comments>https://news.ycombinator.com/item?id=42820090</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42820090</guid></item><item><title><![CDATA[New comment by psanchez in "My favourite computer ergonomics hack"]]></title><description><![CDATA[
<p>One more thing I forgot to mention on the "Con" side is the noise.<p>Treadmills aren't completely silent; there's always some level of sound from the motor. Over time, you tend to get used to it. Personally, I wear regular headphones to listen to music, which helps mask the noise.</p>
]]></description><pubDate>Thu, 02 Jan 2025 14:05:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=42574494</link><dc:creator>psanchez</dc:creator><comments>https://news.ycombinator.com/item?id=42574494</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42574494</guid></item><item><title><![CDATA[New comment by psanchez in "My favourite computer ergonomics hack"]]></title><description><![CDATA[
<p>Just sharing my own experience :D<p>I have an "old" IKEA model that is not sold anymore, but the equivalent would be something like this one: 
<a href="https://www.ikea.com/us/en/p/rodulf-desk-sit-stand-gray-white-s39396321/" rel="nofollow">https://www.ikea.com/us/en/p/rodulf-desk-sit-stand-gray-whit...</a><p>You can also buy the legs and use your own table (slightly cheaper)
<a href="https://www.ikea.com/us/en/p/rodulf-underframe-sit-stand-f-table-top-electric-white-40497376/" rel="nofollow">https://www.ikea.com/us/en/p/rodulf-underframe-sit-stand-f-t...</a><p>Again, the advantage is the ability to adjust the height, so you can work either seated, standing or walking if you also have a treadmill.</p>
]]></description><pubDate>Thu, 02 Jan 2025 13:54:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=42574402</link><dc:creator>psanchez</dc:creator><comments>https://news.ycombinator.com/item?id=42574402</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42574402</guid></item><item><title><![CDATA[New comment by psanchez in "My favourite computer ergonomics hack"]]></title><description><![CDATA[
<p>If you choose a manual or motorized standing desk with adjustable height (like the one I use), you can easily move the treadmill to the side when you're not walking and switch to working while seated.<p>I mean, you need to have the space to put the treadmill on the side, but other than that you'll have the flexibility to choose between walking and sitting as needed.</p>
]]></description><pubDate>Thu, 02 Jan 2025 13:40:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=42574310</link><dc:creator>psanchez</dc:creator><comments>https://news.ycombinator.com/item?id=42574310</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42574310</guid></item><item><title><![CDATA[New comment by psanchez in "My favourite computer ergonomics hack"]]></title><description><![CDATA[
<p>WalkingPad R1 Pro. I thought the ability to run would be a plus, but honestly, I've only used it for running twice (I'd rather run outdoors than stare at a wall, TV or computer).<p>KingSmith walking pads can be folded and take up less space. The R1 can also be stored vertically, but I always keep it horizontal for convenience.<p>If I had to buy a treadmill again, I would choose either a regular model or a cheaper foldable model. I would probably lean towards a smaller and cheaper regular model, since I believe 40cm x 80cm (16in x 32in) is enough to walk on and is not that big.</p>
]]></description><pubDate>Thu, 02 Jan 2025 13:35:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=42574262</link><dc:creator>psanchez</dc:creator><comments>https://news.ycombinator.com/item?id=42574262</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42574262</guid></item><item><title><![CDATA[New comment by psanchez in "My favourite computer ergonomics hack"]]></title><description><![CDATA[
<p>I bought a treadmill + standing desk 2.5 years ago, and to this day, it remains the best investment I've made to avoid sitting for most of the day.<p>Before I started using the treadmill desk, I averaged around 2.5-3k steps per day. On days when I exercised, it could go up to 8-10k steps, although I wasn't exercising regularly at that time. Now, 2.5 years later, I consistently reach 10k-12k steps on a bad day (about 2 hours of walking) and can go up to 18-24k steps on a good day (3-4 hours). Occasionally, I hit 30k steps, but that's quite rare, to be honest.<p>I was hesitant about the idea, but a friend who got one himself and shared his experience encouraged me to give it a try.<p>Pros:<p>- Feels more natural than just standing at the desk (after 20 min I get tired of standing still, whereas I can walk 2h without even realizing)<p>- I can work comfortably with the computer when typing or using the mouse (programming, writing... and even playing games), at speeds up to ~4.5 km/h (~2.8 mph). Beyond that, the thoughts don't flow in the same way. Below this threshold I don't notice much difference in my work. I initially found 3 km/h (~1.8 mph) fast enough, but over time, 4 km/h (~2.5 mph) has become my sweet spot.<p>- You can enter a flow state just as easily as when seated (or at least that's my feeling)<p>Cons:<p>- Space: The treadmill takes up room, so I keep it next to my desk when not in use for convenience. Setting up the treadmill desk takes around 1 minute.<p>- Meetings: It felt awkward at first. Initially, I avoided attending meetings while walking, but I gradually started participating in 1:1s and eventually team meetings. Nowadays, I’m comfortable walking during most meetings, although I avoid it during large group or company-wide calls. 
My webcam is positioned to show only my shoulders and face, minimizing visible movement and reducing distractions for others during calls (the others probably won't care anyway).<p>- Limited Upper Body Movement: The upper body remains relatively still since my hands are usually on the keyboard or mouse. This limits overall activity compared to walking outside. However, when reading, my arms and hands move off the desk, mimicking the motion of walking, so it really depends.<p>- Noise: I live in a flat, and while the treadmill isn't very noisy, it could be bothersome if people are sleeping (whether in the next room or on the floor below). I avoid using it early in the morning or late in the evening.<p>My treadmill automatically beeps after 2 hours and shuts off for 30 minutes. It does force me to take a break (or even take a shower, depending on the speed I was walking at). After the break, I switch to a seated position. I typically have one walking session in the morning, and on some days, another in the afternoon. When it beeps and I'm in the zone, I just move it aside and continue seated (sometimes I just continue standing still for some minutes), so it does not get in the way if you are focused.<p>Overall I think it is an improvement over staying still for most of the day (seated or standing), and also an improvement over forcing regular/spaced interruptions (I honestly tried that several times, but it breaks my concentration and prevents me from getting into the zone). Standing desk + treadmill: totally worth the investment.</p>
]]></description><pubDate>Thu, 02 Jan 2025 08:53:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=42572821</link><dc:creator>psanchez</dc:creator><comments>https://news.ycombinator.com/item?id=42572821</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42572821</guid></item><item><title><![CDATA[New comment by psanchez in "Flappy Bird for Android, only C, under 100KB"]]></title><description><![CDATA[
<p>I was under the impression Java glue code was required to create Android APKs. Really nice to see this project: 0 Java files. Bravo.<p>Also worth looking at the rawdrawandroid project, as others noted:
<a href="https://github.com/cnlohr/rawdrawandroid/tree/master">https://github.com/cnlohr/rawdrawandroid/tree/master</a></p>
]]></description><pubDate>Sun, 22 Sep 2024 16:19:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=41618055</link><dc:creator>psanchez</dc:creator><comments>https://news.ycombinator.com/item?id=41618055</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41618055</guid></item><item><title><![CDATA[New comment by psanchez in "Bun v1.0.0"]]></title><description><![CDATA[
<p>Jarred, congrats on the release!<p>I've been following the progress of bun since your initial announcement, and today I decided to give it a try, just to play around.<p>Haven't done much with it, but even after the 5 min I played with it I'm kind of impressed so far.<p>- Super quick install, PLUS it did not require root (unlike many other installs)<p>- It identified I was using fish shell and added bun to the path (nice!)<p>- I ran a very quick bench of "npm install" vs "bun install" on one of my projects, and the performance is amazing: 50 seconds vs 4.5 seconds on the first install. Moreover, re-executing "bun install" takes 122ms on my machine. Removing node_modules and re-executing it takes 769ms (because, of course, it uses a local cache elsewhere, but still). Amazing.<p>I'll probably continue exploring tomorrow and see whether it is able to run the rest of the backend/frontend or whether it gives me a hard time. I've seen there are certain things that are not 100% compatible with node yet, but since the initial impressions are great I'll explore further.<p>BTW, a code formatter and a linter would be a great addition to bun.<p>I know there is this ticket:
<a href="https://github.com/oven-sh/bun/discussions/712">https://github.com/oven-sh/bun/discussions/712</a><p>But one of the advantages of integrating both things into bun is that it makes it the perfect standard tool to be used inside a team. So no extra installations from other projects, no extra mental burden of what to use, etc... bun would be the perfect dev companion with both ;)<p>A linter is probably a different beast (and I'm not sure you or the rest of the people working on bun want to get into that... probably not important right now), but a formatter seems doable, and it does add a lot of value from my point of view. Given that bun already runs, installs, tests and bundles, at the very least _formatting_ seems like a natural addition to the family. To me, a formatter is part of the standard toolset for developers nowadays.<p>Once again, thanks a lot to you and the rest of the people contributing to the project for the effort!<p>(edit: re-formatted my comment :p)</p>
]]></description><pubDate>Fri, 08 Sep 2023 18:43:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=37437658</link><dc:creator>psanchez</dc:creator><comments>https://news.ycombinator.com/item?id=37437658</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37437658</guid></item><item><title><![CDATA[New comment by psanchez in "Show HN: EnvKey 2.0 – End-To-End Encrypted Environments (now open source)"]]></title><description><![CDATA[
<p>Hi Dane, thanks for taking the time to reply and for listening to our feedback.<p>It is still not clear to me what "40 server ENVKEYs" means. Is this 40 different projects, or does each ENVKEY in each project count toward the quota?<p>I read a comment from you today (on another thread) about migrating from OpenPGP (RSA) in v1 to NaCl (EC) in v2, so I guess v2 encryption/decryption is faster in the GUI/CLI and the security is stronger. As a customer, though, I would have loved for EnvKey to do this transparently. I have no idea about the internals, but something along the lines of: whenever a customer updates any of their secrets, re-encrypt everything to use v2. Given the existing architecture/design this may be too complex or unfeasible. Which makes me wonder: what would happen if an attack were found against curve25519, or some class of attack against the scheme? Just out of curiosity, would the current v2 design support re-encrypting with a different algorithm (or even another key) on the client side without other major changes (even if the client has to re-encrypt messages from the CLI/GUI)? Just wondering.<p>I've decided to try re-importing all keys to see what a migration would look like and whether I hit any limits beyond the free tier. If I do, even though I would pay 2-3x what I'm paying now, I think I'll either move to the open-source version or look for something else. In any case I'll drop you an e-mail with my experience.<p>Coming back to the pricing discussion, as a customer I still like the v1 pricing for its simplicity and clarity: you pay per user and that's the end of it. I believe a combination of number of projects and number of users might be the way to go for your product, because for a customer it is easy to understand and easy to predict; with a fixed price per user/project, the more projects you add, the more you pay, incrementally... but this is just a thought.
The same goes for the limits: it would be nice to say, here are the limits, and if you surpass them regularly you are charged X extra, which is also incremental.<p>Anyway, thanks for mentioning in another comment that you are considering some adjustments. As drcongo said, maybe we are not your target anymore, maybe we are just a vocal minority; you are the one with all the info anyway. The new pricing might be the right thing for your company, I have no clue, although I honestly think there is a middle ground that, even if it gets you marginally more money/users at the beginning, would allow your customers to stay and grow with you as their companies grow. Final thought: the current jump in pricing from the free tier to the business tier makes me hesitant to even use the free tier.<p>And again, the product itself is amazing and I am very happy with it, no complaints at all.<p>- minor edits for clarity -</p>
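<p>To make the re-encryption question above concrete, here is a toy envelope-encryption sketch (my own illustration, not EnvKey's actual design): each secret is encrypted under a random data key, and only the wrapped data key depends on the user's key, so rotating a key means re-wrapping rather than re-encrypting the bulk ciphertext. The XOR "cipher" is a deliberately insecure stand-in for a real primitive such as a NaCl box.</p>

```python
# Toy envelope-encryption sketch. XOR is NOT secure; it stands in for a
# real cipher (e.g. a NaCl box) purely to illustrate the rotation pattern.
import secrets


def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Symmetric stand-in cipher: XOR data against a repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))


def encrypt_env(plaintext: bytes, user_key: bytes) -> dict:
    """Encrypt a secret under a fresh data key; wrap the data key for the user."""
    data_key = secrets.token_bytes(32)
    return {
        "ciphertext": xor_cipher(data_key, plaintext),
        "wrapped_key": xor_cipher(user_key, data_key),
    }


def rotate(blob: dict, old_user_key: bytes, new_user_key: bytes) -> dict:
    """Client-side rotation: unwrap the data key with the old user key and
    re-wrap it with the new one. The bulk ciphertext is never touched."""
    data_key = xor_cipher(old_user_key, blob["wrapped_key"])
    return {
        "ciphertext": blob["ciphertext"],
        "wrapped_key": xor_cipher(new_user_key, data_key),
    }


def decrypt_env(blob: dict, user_key: bytes) -> bytes:
    """Unwrap the data key, then decrypt the secret."""
    data_key = xor_cipher(user_key, blob["wrapped_key"])
    return xor_cipher(data_key, blob["ciphertext"])
```

<p>Under this pattern, swapping algorithms or keys only requires re-running the wrap step client-side, which is the kind of transparent migration path the comment asks about.</p>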
]]></description><pubDate>Sat, 02 Apr 2022 06:58:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=30886366</link><dc:creator>psanchez</dc:creator><comments>https://news.ycombinator.com/item?id=30886366</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=30886366</guid></item></channel></rss>