<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: pwatsonwailes</title><link>https://news.ycombinator.com/user?id=pwatsonwailes</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 29 Apr 2026 10:52:00 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=pwatsonwailes" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by pwatsonwailes in "Modern Board Games: and why you should play them (2022)"]]></title><description><![CDATA[
<p>Terra Mystica, Brass Birmingham, Scythe, Root, ARCS and its expansions, Nemesis: Retaliation and SETI would all get shouts from me for that sort of thing. Slightly depends on your definition of complex and sophisticated, but I'd put all of those in that list.</p>
]]></description><pubDate>Thu, 23 Apr 2026 14:37:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47876271</link><dc:creator>pwatsonwailes</dc:creator><comments>https://news.ycombinator.com/item?id=47876271</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47876271</guid></item><item><title><![CDATA[New comment by pwatsonwailes in "Alberta startup sells no-tech tractors for half price"]]></title><description><![CDATA[
<p>You may want to check out Siromer tractors, depending on where you are. Similar idea.</p>
]]></description><pubDate>Wed, 22 Apr 2026 17:15:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47866422</link><dc:creator>pwatsonwailes</dc:creator><comments>https://news.ycombinator.com/item?id=47866422</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47866422</guid></item><item><title><![CDATA[New comment by pwatsonwailes in "Europe's rearmament meets reality: the story of a failed frigate project"]]></title><description><![CDATA[
<p><a href="https://archive.ph/tPWvX" rel="nofollow">https://archive.ph/tPWvX</a></p>
]]></description><pubDate>Tue, 07 Apr 2026 13:24:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=47675029</link><dc:creator>pwatsonwailes</dc:creator><comments>https://news.ycombinator.com/item?id=47675029</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47675029</guid></item><item><title><![CDATA[Europe's rearmament meets reality: the story of a failed frigate project]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.ft.com/content/124c9dfc-18da-49fa-aab5-6389dce833ae">https://www.ft.com/content/124c9dfc-18da-49fa-aab5-6389dce833ae</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47675027">https://news.ycombinator.com/item?id=47675027</a></p>
<p>Points: 3</p>
<p># Comments: 2</p>
]]></description><pubDate>Tue, 07 Apr 2026 13:24:39 +0000</pubDate><link>https://www.ft.com/content/124c9dfc-18da-49fa-aab5-6389dce833ae</link><dc:creator>pwatsonwailes</dc:creator><comments>https://news.ycombinator.com/item?id=47675027</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47675027</guid></item><item><title><![CDATA[New comment by pwatsonwailes in "Show HN: Browser grand strategy game for hundreds of players on huge maps"]]></title><description><![CDATA[
<p>Doesn't seem to do anything on custom game.</p>
]]></description><pubDate>Thu, 19 Mar 2026 11:49:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=47437784</link><dc:creator>pwatsonwailes</dc:creator><comments>https://news.ycombinator.com/item?id=47437784</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47437784</guid></item><item><title><![CDATA[New comment by pwatsonwailes in "We Have Learned Nothing"]]></title><description><![CDATA[
<p>A product being good enough isn't enough. At some point you also need to price it, communicate its existence persuasively enough to win market share, and distribute it effectively.<p>Most businesses fail because they solve for the easier bit (product) and then have no idea about the rest.</p>
]]></description><pubDate>Thu, 19 Mar 2026 07:34:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47436068</link><dc:creator>pwatsonwailes</dc:creator><comments>https://news.ycombinator.com/item?id=47436068</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47436068</guid></item><item><title><![CDATA[New comment by pwatsonwailes in "AirPods Max 2"]]></title><description><![CDATA[
<p>My mistake. Nice catch.<p>That being said, AirPods use a steel frame construction to get the weight up for the same reason. There's a whole thing with weight in neuromarketing for luxury and luxury-adjacent products. There are fringe benefits acoustically, mainly around resonance, but mostly it's to add weight and influence perception. Same reason as the knock-offs, different mechanism.</p>
]]></description><pubDate>Mon, 16 Mar 2026 22:38:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47405976</link><dc:creator>pwatsonwailes</dc:creator><comments>https://news.ycombinator.com/item?id=47405976</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47405976</guid></item><item><title><![CDATA[New comment by pwatsonwailes in "AirPods Max 2"]]></title><description><![CDATA[
<p>1. They have steel weights in them to make them that heavy so they feel more premium.</p>
]]></description><pubDate>Mon, 16 Mar 2026 20:23:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47404369</link><dc:creator>pwatsonwailes</dc:creator><comments>https://news.ycombinator.com/item?id=47404369</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47404369</guid></item><item><title><![CDATA[New comment by pwatsonwailes in "Autoresearch Hub"]]></title><description><![CDATA[
<p>Both built by Claude Sonnet 4.6</p>
]]></description><pubDate>Sun, 15 Mar 2026 21:57:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47392378</link><dc:creator>pwatsonwailes</dc:creator><comments>https://news.ycombinator.com/item?id=47392378</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47392378</guid></item><item><title><![CDATA[New comment by pwatsonwailes in "Clay Christensen's Milkshake Marketing (2011)"]]></title><description><![CDATA[
<p>Doesn't need to be a single person, but yes, all good products come from someone or a (generally) small team, spotting something, and fixing it.<p>As for the marketing department - depends on definition, but assuming you mean (as it mostly is nowadays) comms, it's two things:<p>1. Making sure that potential future customers know you exist, so when they enter the market to buy, they know you're relevant and can purchase from you.<p>2. Making sure that when someone is in the market to buy now, they're more likely to buy from you, because you have a compelling reason for people to pick you over the other brands in the space that they can think of.<p>Mostly 1, some of 2, if the marketing team is good. The ratio varies by more than you'd think, though, depending on industry, customer lifecycle, brand maturity and so on.</p>
]]></description><pubDate>Thu, 12 Feb 2026 14:18:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=46989128</link><dc:creator>pwatsonwailes</dc:creator><comments>https://news.ycombinator.com/item?id=46989128</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46989128</guid></item><item><title><![CDATA[New comment by pwatsonwailes in "Clay Christensen's Milkshake Marketing (2011)"]]></title><description><![CDATA[
<p>Good segmentation is mostly not by demography nowadays. At best, demographics are a correlative element to something more fundamental, usually economic, behavioural or psychographic.</p>
]]></description><pubDate>Thu, 12 Feb 2026 13:30:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=46988598</link><dc:creator>pwatsonwailes</dc:creator><comments>https://news.ycombinator.com/item?id=46988598</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46988598</guid></item><item><title><![CDATA[New comment by pwatsonwailes in "Clay Christensen's Milkshake Marketing (2011)"]]></title><description><![CDATA[
<p>"how come no one, in the past 25 years of my life as a consumer, has ever had any time to ask me a single real question"<p>There's a lot of shit marketers, is my short answer.<p>Like, a lot.</p>
]]></description><pubDate>Thu, 12 Feb 2026 13:29:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=46988589</link><dc:creator>pwatsonwailes</dc:creator><comments>https://news.ycombinator.com/item?id=46988589</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46988589</guid></item><item><title><![CDATA[New comment by pwatsonwailes in "Clay Christensen's Milkshake Marketing (2011)"]]></title><description><![CDATA[
<p>The problem I tend to see is that companies say they're doing JTBD research, but they're actually just running attribute preference surveys (asking customers to rank features from a list of things the company would like to build, rather than starting by assuming you don't know what customers require).<p>Listening to what people say they want (feature preferences) almost always diverges from what they actually want the product to do (a functional, emotional, or social outcome). That gets more complex when we consider that there are different levels at which you can evaluate what someone wants, which in the JTBD world are thought of as jobs as progress (why they're doing the thing) and jobs as outcomes (how they're doing the thing). There's another famous example, from Bosch's circular saw evolution. Professionals said they wanted lighter tools (and that's true), but what they actually experienced were the downstream impacts of that weight. So you can solve for weight, or you can solve for improved usability. Symptoms vs causes, sort of thing.<p>This is also why product teams should involve marketers, and why marketers should understand research design. The teams I've seen do this well aren't running quick preference tests and A/B tests on features most of the time. They're generally more focused on running continuous feedback loops, where they conduct broader research, then engage in grounded theory style interpretation to understand what they <i>can</i> do, look at field validation to figure out what they <i>should</i> do, and then iterate.<p>For B2B especially, as a side note: if your value proposition is something like accountability or proof of value, but your product's workflows don't make accountability or proving value effortless, fixing that workflow will do more for brand perception than any campaign, because nothing nukes good comms like a poor experience.</p>
]]></description><pubDate>Thu, 12 Feb 2026 11:33:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=46987476</link><dc:creator>pwatsonwailes</dc:creator><comments>https://news.ycombinator.com/item?id=46987476</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46987476</guid></item><item><title><![CDATA[New comment by pwatsonwailes in "Clay Christensen's Milkshake Marketing (2011)"]]></title><description><![CDATA[
<p>It's what always happens when there's a disconnect between the product built and the actual thing people want to do. In marketing, we differentiate between Jobs-As-Activities (the task of "changing a background") and Jobs-As-Progress (the user trying to go from something being unsatisfactory to something better).<p>When UI feels dumbed down to that level, or hidden behind advanced settings, it's often because the product team ends up treating users as a gestalt persona, rather than thinking about their constraints around time and attention. The most meaningful innovations occur when customer insights influence development before launch; sadly, that frequently doesn't happen. People launch the thing, come up with features they could add, ask what people want from that list (and potentially don't even do that), and then add stuff like barnacles accumulating on a ship.</p>
]]></description><pubDate>Thu, 12 Feb 2026 11:16:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=46987364</link><dc:creator>pwatsonwailes</dc:creator><comments>https://news.ycombinator.com/item?id=46987364</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46987364</guid></item><item><title><![CDATA[New comment by pwatsonwailes in "Frontier AI agents violate ethical constraints 30–50% of time, pressured by KPIs"]]></title><description><![CDATA[
<p>Not sure where you get that for Milgram. That's been replicated lots of times, in different countries, with different compositions of people, and found to be broadly replicable. Burger in '09, Sheridan & King in '72, Dolinski and co in '17, Caspar in '16, Haslam & Reicher which I referenced somewhere else in the thread...</p>
]]></description><pubDate>Tue, 10 Feb 2026 11:40:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=46958383</link><dc:creator>pwatsonwailes</dc:creator><comments>https://news.ycombinator.com/item?id=46958383</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46958383</guid></item><item><title><![CDATA[New comment by pwatsonwailes in "Frontier AI agents violate ethical constraints 30–50% of time, pressured by KPIs"]]></title><description><![CDATA[
<p>You're not wrong strictly speaking - the challenge comes in getting KPIs for ethical and moral behaviour to be things that the company signs up for. Some are geared that way inherently (Patagonia is the cliché example), but most aren't.<p>People will always find other goalposts to move. The trick is making sure the KPIs you set define the goalposts you care about staying in place.<p>Side note: Jordan Peterson is pretty much an example of inventing goalposts to move. Everything he argues about is about setting a goalpost, and then inventing others to move around to avoid being pinned down. Motte-and-bailey fallacy happens with KPIs as much as it does with debates.</p>
]]></description><pubDate>Tue, 10 Feb 2026 11:18:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=46958203</link><dc:creator>pwatsonwailes</dc:creator><comments>https://news.ycombinator.com/item?id=46958203</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46958203</guid></item><item><title><![CDATA[New comment by pwatsonwailes in "Frontier AI agents violate ethical constraints 30–50% of time, pressured by KPIs"]]></title><description><![CDATA[
<p>Responded on this line of thinking a bit further down, so I'll be brief here. Yes, there's selection bias in organisations as you go up the ladder of power and influence, which selects for various traits (psychopathy being an obvious one).<p>That being said, there's a side view on this from interactionism: it's not just a person's default modes of behaviour, but their belief in the goal, and their view of how it's framed, that feed into this. Research on cult behaviours has a lot of overlap with that.<p>The culture and the environment, what the mission is seen as, how contextually broad that is and so on all feed into that too.<p>I do a workshop on KPI setting which overlaps here as well. In short: choose mutually conflicting KPIs which narrow the state space for success, such that attempting to cheat one causes another to fail. Ideally, you want goals for an organisation that push for high upside, with limited downside, and counteracting metrics, such that only by meeting all of them do you get to where you want to be. Otherwise it's like drawing a line on a piece of paper, asking someone to place a dot on one side of it, and being upset that they didn't put it where you wanted. More lines narrow the field to just the areas where you're prepared to accept success.<p>That division can also then be used to narrow what you're willing to accept (for good or ill) of people in meeting those goals. The challenge is that they tend to see meeting all the goals as the goal, not acting in a moral way, because the goals become the target and decontextualise the importance of everything else.<p>TL;DR: value setting for positive behaviour and corporate performance is hard.<p>EDIT: actually this wasn't that short as an answer really. Sorry for that.</p>
]]></description><pubDate>Tue, 10 Feb 2026 10:38:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=46957850</link><dc:creator>pwatsonwailes</dc:creator><comments>https://news.ycombinator.com/item?id=46957850</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46957850</guid></item><item><title><![CDATA[New comment by pwatsonwailes in "Frontier AI agents violate ethical constraints 30–50% of time, pressured by KPIs"]]></title><description><![CDATA[
<p>It's instructive though, despite the flaws, and at this point it's been replicated enough in different ways that we know it's got some basis in reality. There's a whole bunch of constructivist research around interactionism showing that whilst it's not just a person's default ways of behaving or just the situation that matters, the situational context definitely influences what people are likely to do in any given scenario.<p>Reicher & Haslam's research around engaged followership gives a pretty good insight into why Zimbardo got the results he did, because he wasn't just observing what went on. That gets into all sorts of things around good study design, constructivist vs positivist analysis etc, but that's a whole different thing.<p>I suspect, particularly with regards to different levels, there's an element of selection bias going on (if for no other reason than the levels of psychopathy we see in higher levels of management), but I'd guess (and it's a guess) that culture convincing people that achieving the KPI is the moral good is more of a factor.<p>That gets into a whole separate thing around what happens in more cultlike corporations and the dynamics with the VC world (WeWork is an obvious example), and why organisations can end up with workforces which will do things of questionable purpose: because the organisation has a visible, fearless leader who has to be pleased/obeyed etc (Musk, Jobs etc), or more insidiously, a valuable goal that must be pursued regardless of cost (weaponised effective altruism, sort of).<p>That then gets into a whole thing about what happens with something like the UK civil service, where you're asked to implement things and obviously you can't care about the politics, because you'll serve lots of governments that believe lots of different things, and you can't just quit and get rehired every time a party you disagree with personally gets into power. But again, that diverges into other things.<p>At the risk
of narrative fallacy - <a href="https://www.youtube.com/watch?v=wKDdLWAdcbM" rel="nofollow">https://www.youtube.com/watch?v=wKDdLWAdcbM</a></p>
]]></description><pubDate>Tue, 10 Feb 2026 10:29:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=46957775</link><dc:creator>pwatsonwailes</dc:creator><comments>https://news.ycombinator.com/item?id=46957775</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46957775</guid></item><item><title><![CDATA[New comment by pwatsonwailes in "LiftKit – UI where "everything derives from the golden ratio""]]></title><description><![CDATA[
<p>People's actual measured experience and their experience of the experience are rarely the same thing when they have prior knowledge of one option and little of the alternative. They prefer the thing they know, even when it's worse.</p>
]]></description><pubDate>Tue, 10 Feb 2026 09:46:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=46957416</link><dc:creator>pwatsonwailes</dc:creator><comments>https://news.ycombinator.com/item?id=46957416</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46957416</guid></item><item><title><![CDATA[New comment by pwatsonwailes in "LiftKit – UI where "everything derives from the golden ratio""]]></title><description><![CDATA[
<p>There's loads of this in the UX space. To overly simplify, people's brains use expected ideas about what things are like in order to interact with the world. We build models of what things are like, and things that look like what we expect, we over-weight as things we understand.<p>So when people are presented with something which is visually appealing, we think it's easy to use, even when it isn't. And people will then default to blaming themselves, not the pretty, elegant thing, because clearly the pretty, elegant thing isn't the issue.<p>We call this the aesthetic-usability effect. Perception of the expected experience, and attribution of the actual experience, matter more than the actual experience itself.<p>It's one of the many ways in which engineers, economists and analysts (in my experience) tend to run into issues. They want people to behave rationally, based on their KPIs, not as people actually experience and interact with the world.<p>There's all sorts of research that then comes off this. As one quick example: people enjoy wine they've been told is more expensive over wine they've been told is cheaper, and the physiological response as measured with fMRI confirms their reported divergence in experience, even though the wines are the same.<p>Low contextuality evaluations (my term for where you ask someone to state things about something where they lack enough experience, with enough breadth and depth, to answer reliably) are always wonky. People can't comment on wine, because they don't know enough about wine, so they seek other clues to tell them about what they're experiencing. Similarly, people don't know about things that are new to them (by default) or that look different from what they expect, so their experience is always reported as worse than it probably actually is, because their brain doesn't like expending energy learning about something new. They'd rather something they understood.
It's where contextualisation and mimicry come in really useful from a design of experience standpoint.</p>
]]></description><pubDate>Tue, 10 Feb 2026 09:45:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=46957411</link><dc:creator>pwatsonwailes</dc:creator><comments>https://news.ycombinator.com/item?id=46957411</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46957411</guid></item></channel></rss>