Posts

Inescapably Value-Laden Experience—a Catchy Term I Made Up to Make Morality Rationalisable 2024-12-19T04:45:37.906Z
James Stephen Brown's Shortform 2024-12-19T04:09:09.869Z
What is Confidence—in Game Theory and Life? 2024-12-10T23:06:24.072Z
Implications—How Conscious Significance Could Inform Our lives 2024-11-26T17:42:49.085Z
Paradigm Shifts—change everything... except almost everything 2024-11-23T18:34:13.088Z
Both-Sidesism—When Fair & Balanced Goes Wrong 2024-11-02T03:04:03.820Z
A Case for Conscious Significance rather than Free Will. 2024-10-25T23:20:30.834Z
Methodology: Contagious Beliefs 2024-10-19T03:58:17.966Z
Contagious Beliefs—Simulating Political Alignment 2024-10-13T00:27:08.084Z
An Interactive Shapley Value Explainer 2024-09-28T05:01:21.169Z
The Other Existential Crisis 2024-09-21T01:16:38.011Z
Could Things Be Very Different?—How Historical Inertia Might Blind Us To Optimal Solutions 2024-09-11T09:53:07.474Z
We Don't Just Let People Die—So What Next? 2024-08-03T01:04:49.756Z
Unlocking Solutions—By Understanding Coordination Problems 2024-07-27T04:52:13.435Z
Why People in Poverty Make Bad Decisions 2024-07-15T23:40:32.116Z
Saving Lives Reduces Over-Population—A Counter-Intuitive Non-Zero-Sum Game 2024-06-28T19:29:55.238Z
Capitalising On Trust—A Simulation 2024-06-21T04:43:29.971Z
Masculinity—A Case For Courage 2024-06-04T00:04:48.411Z
Moloch—An Illustrated Primer 2024-05-26T01:04:55.442Z
A Positive Double Standard—Self-Help Principles Work For Individuals Not Populations 2024-05-22T21:37:16.578Z
What Are Non-Zero-Sum Games?—A Primer 2024-05-18T09:19:52.493Z
Why I'll Keep My Crummy Drawings—How Generative AI Art Won't Supplant... Art. 2024-05-15T19:30:05.410Z
Emergence Is a Universal Non-Zero-Sum Phenomenon. 2024-05-14T08:06:51.503Z
The Alignment Problem No One Is Talking About 2024-05-10T18:34:34.300Z

Comments

Comment by James Stephen Brown (james-brown) on Daniel Tan's Shortform · 2025-01-25T01:16:04.767Z · LW · GW

I personally love the idea of having a highly rational partner to bounce ideas off, and I think LLMs have high utility in this regard: I use them to challenge my knowledge, fill in gaps, unweave confusion, and check my biases.

However, what I've heard about how others use chat, and what I've seen in how kids use it, suggests something much more like a cognitive off-loader, which has large consequences for learning, because "cognitive load" is how we learn. I've heard many adults say "It's a great way to get a piece of writing going" or "to make something more concise". These are mental skills we use when communicating, and they will atrophy with disuse; unless we are going to have an omnipresent LLM filter for our thoughts, this is likely to have consequences for our ability to conceive of ideas and compress them into a digestible form.

But, as John Milton says, "A fool will be a fool with the best book". It really depends on the user: the internet gave us the world's knowledge at our fingertips, and we managed to fill it with misinformation. Now we have the power of reason at our fingertips, but I'm not sure that's where we want it. At the same time, I think more information, better information, and greater rationality are a net positive, so I'm hopeful.

Comment by James Stephen Brown (james-brown) on James Stephen Brown's Shortform · 2024-12-19T04:09:10.059Z · LW · GW

Developing an idea about complexity and emergence which looks at the stages of an emergent cycle—how a substrate gives rise to an emergent phenomenon, which reaches equilibrium, providing the substrate for the next phenomenon. The way I see it, it goes something like this:

quantum randomness > is predictable at a certain scale > reaches equilibrium > becomes base + randomness (as a byproduct)

or this

substrate + free energy > patterns emerge (disturbances in the uniformity of the free energy) > equilibrium reached > substrate + free energy

This echoes Hegel's cycle regarding history...

thesis > antithesis > synthesis (thesis - the substrate for further development)

But it's cumulative, like a spiral (so is Hegel's, actually, since it refers to history, which moves forward, so cycles don't fold back on themselves).

Karl Popper has a similar cycle regarding intellectual discovery...

Problem 1 > Tentative Theory > Error Elimination (equilibrium) > Problem 2 (the byproduct left out of the solution to P1)

Popper suggests that this is analogous across inorganic physics, biology (using the example of an amoeba responding to heat) and intellectual discovery. Popper refers to organisms as problem-solving structures (to my mind, the problem being solved is probably how to serve entropy; organisms are said to be dissipative structures that, while being ordered themselves, increase entropy more efficiently than if they weren't there).

My sense is that all creative or emergent processes follow this pattern: substrate + randomness > patterning (un-uniforming) > equilibrium > substrate + randomness.

I'd be interested in any criticism, better codifications, or elements I've missed in this very rough outline, before I solidify this kernel of an idea into a proper post (probably with pictures or interactives).

Comment by James Stephen Brown (james-brown) on The "Think It Faster" Exercise · 2024-12-13T01:36:02.542Z · LW · GW

That (deliberate grieving) was also an interesting read, yes, exactly.

Comment by James Stephen Brown (james-brown) on The "Think It Faster" Exercise · 2024-12-12T18:39:45.540Z · LW · GW

I see, I think you're right not to change it—it's just provocative enough to be catchy.

Comment by James Stephen Brown (james-brown) on The "Think It Faster" Exercise · 2024-12-12T17:35:02.570Z · LW · GW

Wow, that was quick. I mean, rather than scaffolding work that seems unproductive but is actually necessary, most creative time (for me at least) is wasted in resisting change (my number 3 point was about trying changes even if you don't immediately agree with them).

Comment by James Stephen Brown (james-brown) on The "Think It Faster" Exercise · 2024-12-12T17:29:28.284Z · LW · GW

Thanks for this, nice writing.

The idea of 'thinking it faster' is provocative, because it seems to over-optimise for speed rather than other values, whereas the way you're implementing it is by generating more meaningful or efficient decisions underpinned by a meta-analysis of your process—which is actually about increasing the quality of your decision-making.

I think it's worthwhile seeing where we're wasting time. But often I find wasted time isn't what you'd expect it to be. As someone who also works in the creative industry, criticism is a lot easier than creating something out of whole cloth. Your senior partner doesn't just have more experience, but is also a fresh pair of eyes looking at the product you're creating from a macroscopic (user's) perspective—this is much easier when you're not mired in the minutiae. I get this feedback in my job (as a documentary editor) not only from people more experienced than me, but also from those less experienced.

There are two things I have learned from experience:

1. Blocking out a scene is useful, even though the scene will never stay in that form—the boring version makes it easier to step back and see a more creative approach. The time spent making the picture clearer isn't wasted.
2. When working alone, step away and view your work from a fresh perspective (in my case the audience, in yours the user) to be your own director / senior partner.

That being said, I think it's well worth meta-analysing your own process and that of your more experienced colleagues, another thing I've learned is...

3. When someone you trust gives you changes you don't agree with, try them; they probably have a clearer perspective than you do.

Anyway, thanks for the post. I'm planning to implement your advice in my own job; it sounds like a worthwhile process. I actually think this third thing is likely to be a key lesson learned from meta-analysis: not being stubborn, and pivoting to the better solution more freely, what I call "back it up and break it".

Comment by James Stephen Brown (james-brown) on Both-Sidesism—When Fair & Balanced Goes Wrong · 2024-11-09T17:22:14.509Z · LW · GW

Thanks Hastings,

I think at that time you could reason much better if you could recognize that the separation between left and right was not natural.

I think you're saying it was easier in the past to see unorthodox or contradictory views within parties because the wings were more clearly delineated. I'd agree, it was a divided time, but a less chaotic divided time.

The effective left right split is mono-factor: you are right exactly in proportion to your personal loyalty to one Donald J. Trump

Absolutely. It's also bizarre regarding his tariff policy, which is wholly anti-free-market; that's a point, obvious to me, that the left didn't pick up on (because of the chaos, I imagine). As a left-wing (pro-taxation) person who also believes in free markets, I find his approach antithetical to my own views, as if he took the last good idea on the right (free markets) and abandoned it in order to create a party based on all the bad ideas. This sort of contrarianism is something I've read Steven Pinker write about as a loyalty test (to despots and cult leaders): the inducement to followers to knowingly lie or act contrary to their own interests as a statement of loyalty to each other through joint faith in the dear leader.

Comment by James Stephen Brown (james-brown) on Both-Sidesism—When Fair & Balanced Goes Wrong · 2024-11-08T22:58:45.869Z · LW · GW

Thanks Mr Frege for clarifying your points. As I have mentioned (in other comments) I've conceded that I probably should have contextualised my own abandonment of both-sidesism before taking a partisan approach that makes my post appear more biased than it actually is, and probably colours the way it is read.

advocating that we should not consider the other side of the story

Okay, I definitely should have clarified that this is not my intention at all. Both-sidesism, as I'm referring to it, is creating a false equivalence between two issues and giving them equal weight regardless of their validity. I am strongly for considering the other side of the story. I think it is important to steel-man your opponent's position, and your comment has revealed that I failed to do this in the post. I should have made a clear case for both-sidesism, beyond merely stating that it is "well intentioned", before addressing the problems with it. Thank you for your feedback on those two points, which seem obvious to me in retrospect.

Both sides are prone to such anti-democratic behaviour, but the findings also suggest that one side is "slightly more willing to sacrifice democracy"

This was interesting. It provides a counter to the Nature study referenced in the post, which makes sense when considering the different methodologies, specifically what they count as equally anti-democratic actions. I have some ideas about how one could interpret these results, but after writing them down they were pretty lengthy, and would invite a larger argument that I don't really have time for. I think the study provides an important lesson—the danger that accusing your opponent of something can end up justifying your own doing of that very thing. This is something I've called negative moral licensing.

The events of Nov 6/7, 2024 might support the argument that the original argument was indeed self-defeating; ie, an argument against the argument against both-sidesism---effectively, an argument for both-sidesism.

I'm afraid I can't quite parse this point. I'm not sure whether you're saying the election results support my post or the contrary, and either way, I'm not sure why the result would support either position.

Thanks again for clarifying your points. As you will see I've taken on board a few of your points. Hopefully this has been a worthwhile interaction for you, it has for me. Happy to hear your thoughts on the negative moral licensing post if you get around to reading it.

Comment by James Stephen Brown (james-brown) on Both-Sidesism—When Fair & Balanced Goes Wrong · 2024-11-08T17:52:55.936Z · LW · GW

Hi notfnofn, thanks again for the well considered comment, and for responding to my edited response. I think you've made good points which have revealed clarifications I could have made within the post.

Okay Trump is president now. Hoping that things go well regardless.

Me too. And over the next couple of months we'll see whether the right-wing and online media's concern that Harris is an equal threat to democracy is borne out. If she were an equal threat, we shouldn't expect to see a peaceful transfer of power, as when Trump lost. Although she has already graciously conceded, as would be expected of any political candidate except Trump (who has continued to lie about the result of the 2020 election and to require his followers and compatriots to do the same), because he is held to a different standard. Obviously no one seriously expects Harris to lead an insurrection on the Capitol, but people have been convinced that both sides are equally dangerous, giving a permission structure to vote for Trump.

It's not necessarily that it was the worst issue, but the easiest target.

First of all, I am part of the majority that believes trans women shouldn't be competing in the women's category in sport. It's dangerous, and undermines the integrity of the category due to the natural physical advantages of being born male, particularly at the extremes.

But my point is, as you say, "it's not necessarily the worst issue", whereas the promise to "root out" the "enemy within" is literally the worst issue. The radical left want to fight for the rights of trans people in all areas, and unfortunately, I believe, have over-stepped in terms of sport—an entirely optional recreational activity of little to no consequence, in my opinion. This is an issue that is adjudicated largely independently of government by international sports bodies, and I hope that over time a fair and consistent ruling will prevail.

Rooting out the enemy within, on the other hand, is not even considered radical on the right, it's said out loud by a mainstream candidate with popular support. This is how far the centre has shifted.

I'm a bit concerned that you referred to cancel culture as "accountability culture"

I think this is a fair use of both-sidesism: if I'm going to use the loaded term 'cancel culture', I'm going to qualify that this is opposed by others who see it as 'accountability culture'. I'm a believer in the free market of ideas, and my support for this principle doesn't stop when a group of less powerful people collectively use their ideas to combat powerful individuals; I also think companies should be able to act to protect themselves from public backlash—I largely believe in free markets in general. Where there are instances of top-down cancelling, which, as you mention, happens on both sides, I'm opposed to this, and would happily call it cancel culture without qualification. But in my experience that's a small proportion of what people call "cancel culture".

Do you not see this as a false equivalence?—Yes...

Great.

Are you comparing the opinions of US politicians on the left with US politicians on the right?

I'm comparing activists on the left with activists on the right. Both the Democratic and Republican parties profess strong pro-Israel support.

How seriously have you investigated the claim that "Harris's plan is based on what many top economists think is best" and not "Economists find Harris' plan overall better than Trump's, despite its many weaknesses"?  Have you controlled for the likelihood that they have other reasons to prefer Kamala to Trump?

The first I'd heard of this was in the debate, as a claim of Harris' that Goldman Sachs and the Wharton School supported her plan, and that 16 Nobel laureates had said that Trump's plan would invite a recession and increase the deficit. This demonstrated her respect for those experts. Trump wasn't able to make any similar claims. Since then I have tried to understand more about tariffs, looking to the Wall St Journal and their explanation of Tariffs, Trump's own interview with John Micklethwait, where he claimed the room full of economists didn't understand tariffs, and this interview with The Economist Editor in Chief Zanny Minton Beddoes where she underscores the strength of Harris's plan relative to Trump's.

These are all respected, relatively right-leaning sources who all agree with Harris, and whose opinions are respected by Harris, as opposed to Trump, who has shown disdain for the opinion of the majority of these experts in favour of his own expertise, borne out of his experience going bankrupt six times. I expect that when developing their plans, this same respect for expertise was also at play. So, I think I've investigated this claim seriously enough to have a fair opinion on it.

I'd like to thank you again for this response. I believe you've raised important clarifications that I will consider making in the text itself. As you might know, this is cross-posted from my blog, which is actually a series of webpages that I edit continually, comprising a growing philosophical framework, and I will likely attempt to make the post more evergreen by relying less on a current event. Posting here helps guide my editing process by getting critical feedback from smart people like yourself, so I appreciate your time and efforts.

Comment by James Stephen Brown (james-brown) on GPTs are Predictors, not Imitators · 2024-11-06T18:27:59.214Z · LW · GW

This was a fascinating, original idea as usual. I loved the notion of a brilliant, condescending sort of robot capable of doing a task perfectly who chooses (in order to demonstrate its own artistry) to predict and act out how we would get it wrong.

It did make me wonder though, whether when we reframe something like this for GPTs it's also important to apply the reframing to our own human intelligence to determine if the claim is distinct; in this case asking the question "are we imitators, simulators or predictors?". It might be possible to make the case that we are also predictors in as much as our consciousness projects an expectation of the results of our behaviour on to the world, an idea well explained by cognitive scientist Andy Clark.

I agree though, it would be remarkable if GPTs did end up thinking the way we do. And ironically, if they don't think the way we do, and instead begin to do away with the inefficiencies of predicting and playing out human errors, that would put us in the position of doing the hard work of predicting how they will act.

Comment by James Stephen Brown (james-brown) on If we solve alignment, do we die anyway? · 2024-11-05T17:56:00.554Z · LW · GW

Hi Seth,

I share your concern that AGI comes with the potential for a unilateral first-strike capability that, at present, no nuclear power has (which is vital to the maintenance of MAD), though I think, in game-theoretical terms, this becomes more difficult the more self-interested (in survival) players there are. As in open-source software, there is a level of protection against malicious code because bad players are outnumbered; even if they try to hide their code, there are many others who can find it. But I appreciate that hundreds of coders finding malicious code within a single repository is much easier than finding something hidden in the real world, and I have to admit I'm not even sure how robust the open-source model is (I only know how it works in theory). I'm pointing to the principle, not as an excuse for complacency but as a safety model on which to capitalise.

My point about the UN's law against aggression wasn't that in and of itself it is a deterrent, only that it gives a permission structure for any party to legitimately retaliate.

I also agree that RSI-capable AGI introduces a level of independence that we haven't seen before in a threat. And I do understand inter-dependence is a key driver of cooperation. Another driver is confidence, and my hope is that the more intelligent a system gets, the more confident it is, and the better it is able to balance the autonomy of others with its goals, meaning it is able to "confide" in others—in the same way that the strongest kid in class was very rarely the bully, because they had nothing to prove. Collateral damage is still damage, after all; a truly confident power doesn't need these sorts of inefficiencies. I stress this is a hope, and not a cause for complacency. I recognise that, in the analogy, the strongest kid, the true class alpha, gets whatever they want with the willing complicity of the classroom. RSI-capable AGI might get what it wants coercively in a way that makes us happy with our own subjugation, which is still a species of dystopia.

But if you've got a super-intelligent inventor on your side and a few resources, you can be pretty sure you and some immediate loved ones can survive and live in material comfort, while rebuilding a new society according to your preferences.

This sort of illustrates the contradiction here: if you're pretty intelligent (as in, you're designing a super-intelligent AGI), you're probably smart enough to know that the scenario outlined here has a near-100% chance of failure for you and your family. Because you've created something more intelligent than you that is willing to hide its intentions and destroy billions of people, it doesn't take much to realise that that intelligence isn't going to think twice about also destroying you.

Now, I realise this sounds a lot like the situation humanity is in as a whole... so I agree with you that...

multipolar human-controlled AGI scenario will necessitate ubiquitous surveillance.

I'm just suggesting that the other AGI teams do (or can, leveraging the right incentives) provide a significant contribution to this surveillance.

Comment by James Stephen Brown (james-brown) on Both-Sidesism—When Fair & Balanced Goes Wrong · 2024-11-03T16:09:13.486Z · LW · GW

Sorry, you’re right, I did misread that—I've edited my response, correcting for my mistake.

Comment by James Stephen Brown (james-brown) on Both-Sidesism—When Fair & Balanced Goes Wrong · 2024-11-02T23:37:24.683Z · LW · GW

Thanks for your comment. The post itself is meant to challenge the reader to question what is really bias, and what is actually an even-handed view with apparent bias due to the shifted centre. But I certainly take your point: beginning in a clearly partisan manner might not have been the best approach before putting it in context.

I do think there are defences that can be made of the points you raise.

You took one of the tamest aspects of the radical left here

I agree I have taken a tame aspect of the radical left, because there are only tame aspects available. This is my point. The claim you point to, that the left is involved in cancelling conservative voices (not arresting conservatives, as you've clarified this claim is not supported by evidence) isn't any less tame than the accusation of pro-LGBTQ woke-ness. Cancel culture is just a naturally occurring aspect of the free market of ideas (people are free to boycott whatever they like and employers are free to protect their businesses from public backlash). People on the right who usually advocate for free markets should know this best. 

The trans issue is a perennial touchstone that has been used as the consistent example of radical left woke-ness for years, and throughout this campaign.

There is antisemitism amongst the pro-Palestine crowd

I don't doubt you are correct that anti-Israel sentiment can stray into anti-semitism. But the point is about motivations: one side is motivated by sympathy for a population with tens of thousands of people killed over a year, and millions displaced and having their homes destroyed, while the other is motivated by white supremacy. Do you not see this as a false equivalence?

In short, I think pronouns and Palestine were fair comparisons. 'Cancel' (or 'accountability') culture could well be counted as another valid comparison, with a similar tameness to the examples I did use. The reason they sound so tame is that they are tame; they are not comparable, which is the point I'm making—it is the assumption of both-sidesism that leads people to draw the false equivalence.

But, again I agree that I should have spent some time explaining the problem of both-sidesism and the shifted centre before acting in accordance with the principles with which the post concludes.

I don't know nearly enough about economics to know for sure.

I'm in the same position. It's at these points where I defer to experts, which is what I have advised in the post.

Thanks again for your comment. I hope my comment hasn't been too argumentative; it's meant as an explanatory extension of the post.

Comment by James Stephen Brown (james-brown) on Both-Sidesism—When Fair & Balanced Goes Wrong · 2024-11-02T22:48:09.121Z · LW · GW

I agree, it seems as though the incentives aren't aligned that way, so it ends up incumbent upon the audience to distill nuance out of binary messaging, and to recognise the value of those who do present unique perspectives.

Comment by James Stephen Brown (james-brown) on "Real AGI" · 2024-11-02T18:28:00.453Z · LW · GW

This made me think about how this will come about: whether we will have multiple discrete systems for different functions (language, image recognition, physical balance, executive functions, etc.) working interdependently, communicating through compressed-bandwidth conduits, or whether at some point we can, or need to, literally chuck all the raw data from all the systems into one learning system and let that sort it out (likely creating its own virtual semi-independent systems).

Comment by James Stephen Brown (james-brown) on If we solve alignment, do we die anyway? · 2024-11-02T18:11:43.168Z · LW · GW

The nuclear MAD standoff with nonproliferation agreements is fairly similar to the scenario I've described.  We've survived that so far- but with only nine participants to date.

I wonder if there's a clue in this. When you say "only" nine participants, it suggests that more would introduce more risk, but that's not what we've seen with MAD. The greater the number becomes, the bigger the deterrent gets. If, for a minute, we forgo alliances, there is a natural alliance of "everyone else" at play when it comes to an aggressor. Military aggression is, after all, illegal. So, the greater the number of players, the smaller the advantage any one aggressive player has against the natural coalition of all other peaceful players. If we take alliances into account, then this simply returns to a more binary question, and the number of players makes no difference.

So, what happens if we apply this to an AGI scenario?

First, I want to admit I'm immediately skeptical when anyone mentions a non-iterated Prisoner's Dilemma playing out in the real world, because a Prisoner's Dilemma requires extremely confined parameters, and ignores externalities that are present even in an actual prisoner's dilemma (between two actual prisoners). The world is a continuous game, and as such almost all games are iterated games.
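For what it's worth, the one-shot vs iterated distinction can be made concrete with a toy simulation (the strategies and payoff numbers below are the standard textbook ones, my own illustrative sketch rather than anything from this discussion): in a single round defection dominates, but over repeated rounds a retaliatory cooperator like tit-for-tat harvests a cooperative surplus that an unconditional defector forgoes.

```python
# Row player's payoffs: mutual cooperation 3, mutual defection 1,
# lone defector 5, lone cooperator 0 — the standard PD ordering.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's last move.
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strat_a, strat_b, rounds=100):
    """Run an iterated PD and return cumulative scores (a, b)."""
    hist_a, hist_b = [], []  # each side's record of the *opponent's* moves
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_a), strat_b(hist_b)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(b)
        hist_b.append(a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300)
print(play(always_defect, always_defect))  # (100, 100)
print(play(tit_for_tat, always_defect))    # (99, 104)
```

Tit-for-tat pairs earn 300 each over 100 rounds while defector pairs earn only 100 each; against a defector, tit-for-tat loses narrowly (99 vs 104) but only in that pairing, which is why in a mixed population conditional cooperators come out ahead, the flavour of result Axelrod's tournaments made famous.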

If we take the AGI situation, we have an increasing number of players (as you mention, "and N increasing"): different AGIs, different human teams, and mixtures of AGI and human teams, all of which want to survive, some of which may want to dominate or eliminate all other teams. There is a natural coalition of teams that want to survive and don't want to eliminate all other teams, and that coalition will always be larger and more distributed than the nefarious team that seeks to destroy them. We can observe such robustness in many distributed systems that seem, on the face of it, vulnerable. This dynamic makes it increasingly difficult for the nefarious team to hide their activities, while the others are able to capitalise on the benefits of cooperation.

I think we discount the benefit of cooperation because it's so ubiquitous in our modern world. This ubiquity is a product of a tendency in intelligent systems to evolve toward greater non-zero-sumness. While I share many reservations about AGI, when I remember this fact I am somewhat reassured: our growing capability to destroy everything is born out of our greater interconnectedness, and it is our intelligence and rationality that allow us to harness the benefits of greater cooperation. So, I don't see why greater rationality on the part of AGI should suddenly reverse this trend.

I don't want to suggest that this is a non-problem, rather that an acknowledgement of these advantages might allow us to capitalise on them.

 

Comment by James Stephen Brown (james-brown) on Both-Sidesism—When Fair & Balanced Goes Wrong · 2024-11-02T16:39:45.702Z · LW · GW

Good point, I guess all-sidesism would be more desirable; this would take the form of panels representing different experts, opinions or demographics. Some issues, like US politics, do end up necessarily polarised though, given there are only two options, even if you begin with a panel—they did start with an anti-vax candidate too, in RFK Jr (with the Ds and even the Rs being arguably pro-vax), but political expediency resulted in his being subsumed into the binary.

Comment by James Stephen Brown (james-brown) on A Case for Conscious Significance rather than Free Will. · 2024-10-29T03:50:28.971Z · LW · GW

Thanks Seth, yes, I think we're pretty aligned on this topic. Which gives me some more confidence, given you actually have relevant education and experience in this area.

I'm not sure it's fully satisfying. I'm afraid someone who's really bothered by determinism and their lack of "free will" wouldn't find this comforting at all

I absolutely agree, which is why I followed this section up with the caveat

Now, I'll admit this is not very satisfying, in terms of understanding how our intuitions relate to physical reality

The reason for including this was that it can be an end-of-argument claim for hard determinists. I was meaning only to highlight that an intuition is being smuggled into an otherwise reductionist argument. I get that this will not be satisfying to believers in free will, as it's not a positive argument for free will, and is not intended to be. Reducing anything to its component parts can remove intuitive meaning from anything and everything, and if an argument can be used to undermine anything and everything, it is self-defeating and essentially meaningless.

I did a whole bunch of work on exactly how the brain does exatly that process of conscious predictions to choose outcomes

This looks fascinating. I should add that I'm aware that prediction is not only involved in internal processes but is also active while taking actions, where we project our expectations on to the world and our consciousness acts as a sort of error correction, or evaluation function. But for the purposes of not over-complicating the logic I was trying to clarify, I omitted this from the model.

So my response to people being bothered by being "just" the "continuation of deterministic forces via genetics and experience" is that those are condensed as beliefs, values, knowledge, and skills, and the effort with which those are applied is what determines outcomes and therefore your results and your future.

I agree, I had implicitly included beliefs, values etc in my 'model of self', and also emphasise effort (or deliberation) as the key "variety of free will worth having". I'm not, in the slightest, concerned that my desires and intentions are not conscious decisions (I've never believed this to be something I was in control of, and when people ask "but are you really in control" at this level, I believe they are accidentally arguing against a straw-man)—although I think desires and intentions can be consciously reviewed, to check their consistency with other values, just like any other aspect of life through the same internal iterative process. 

Thanks again for your thought-provoking comment, I'm chuffed that you thought the post was worth engaging with.

Comment by James Stephen Brown (james-brown) on An argument that consequentialism is incomplete · 2024-10-11T04:48:10.171Z · LW · GW

This seems to be discounting the consequentialist value of short term pleasure seeking. Doing something because you enjoy the process has immediate positive consequences. Doing something because it is enriching for your life has both positive short term and long term consequences.

To discount short term pleasures as hedonism (as some might) is to miss the point of consequentialism (well of Utilitarianism at least), which is to increase well-being (which can be either short or long term). Well-being can only be measured (ultimately) in terms of pleasure and pain.

Though I agree consequentialism is necessarily incomplete as we don't have perfect predictive powers.

Comment by James Stephen Brown (james-brown) on An Interactive Shapley Value Explainer · 2024-10-08T23:00:38.125Z · LW · GW

Brilliant, thanks.

Comment by James Stephen Brown (james-brown) on An Interactive Shapley Value Explainer · 2024-10-08T20:26:55.725Z · LW · GW

Thanks Cubefox, very interesting ideas, I like the idea of generalising a coalition so that it can be treated as a player, that seems to make a lot of practical sense, I'll look into Jeffrey to try and get my head around that. 

Comment by James Stephen Brown (james-brown) on An Interactive Shapley Value Explainer · 2024-09-30T08:20:23.185Z · LW · GW

Nice observation. I'm certainly not meaning to advocate for Shapley value—this was largely an attempt to adjust my negative attitude about Shapley value's flaws, and the attempt was not very successful, but I thought it would be useful to others struggling to understand it, as I was.

I can imagine a way to address the probability issue you raise: create probabilistic value entries, where we add the participants in the ratio they exist in reality. So, pretending there are twice as many nurses as doctors, you could fill out values for a 3-person coalition (2 nurses, 1 doctor), with a value of 10 awarded to any coalition that satisfies a successful surgery (at least one doctor and one nurse). This results in each nurse being awarded a Shapley value of 1.67 and the doctor being awarded 6.67.
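To make those numbers checkable, here's a minimal brute-force sketch of the permutation-based Shapley calculation applied to that hypothetical 2-nurse, 1-doctor example (the player names and 0-or-10 value function are just the illustration above):

```python
from itertools import permutations
from math import factorial

def shapley(players, v):
    # Average each player's marginal contribution over every arrival order.
    totals = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            grown = coalition | {p}
            totals[p] += v(grown) - v(coalition)
            coalition = grown
    return {p: t / factorial(len(players)) for p, t in totals.items()}

# Value function for the example: 10 for any coalition containing
# at least one doctor and at least one nurse, otherwise 0.
def surgery(coalition):
    return 10 if "doctor" in coalition and any(
        p.startswith("nurse") for p in coalition) else 0

result = shapley(["doctor", "nurse1", "nurse2"], surgery)
# doctor = 40/6 ≈ 6.67, each nurse = 10/6 ≈ 1.67
```

The loop over n! orderings is also where the method's computational expense comes from.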

This hack would give a reasonable answer in this limited situation. But for cases like the oxygen:match-strike example, it would require a massive increase in the number of coalitions, making the computation prohibitively expensive.

Thanks for your comments. I might incorporate the empty coalition actually, thanks for alerting me to that.

Comment by James Stephen Brown (james-brown) on An Interactive Shapley Value Explainer · 2024-09-30T06:14:34.015Z · LW · GW

Thanks Austin, yes—the weeks I've spent trying to really understand why Shapley uses such a complicated method to calculate the possible coalitions have left me feeling that it is actually prohibitively cumbersome for most applications. It has been popular in machine learning algorithms, but faces the problem that it is computationally expensive.

I created a comparison calculator to show Shapley next to my own method that simply weights by dividing all the explicit marginal values by the total of all the explicit marginal values and multiplying that by the grand coalition value. I found that, for realistic values being entered, it yields very similar results to Shapley, and yet is easy to calculate on a spare napkin. It also satisfies Shapley's 4 axioms, and seems more intuitive to me at least.

There might be an issue with mine that you need the total of all marginal values (which is involved in the weighting) to find any one weighted value, whereas Shapley can be used to calculate each weighted marginal value in isolation.
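For concreteness, here's a rough sketch of that napkin method in code, taking each player's "explicit marginal value" to mean their marginal contribution to the grand coalition (just one way of operationalising it; other readings are possible):

```python
def napkin_values(players, v):
    # Normalise each player's marginal contribution to the grand
    # coalition, then scale by the grand coalition's value, so the
    # payouts always sum to v(grand) by construction.
    grand = frozenset(players)
    marginals = {p: v(grand) - v(grand - {p}) for p in players}
    total = sum(marginals.values())
    return {p: v(grand) * m / total for p, m in marginals.items()}

# The same 0-or-10 surgery value function as before — note that
# binary, all-or-nothing examples like this are the edge case
# where this reading diverges most from Shapley.
def surgery(coalition):
    return 10 if "doctor" in coalition and any(
        p.startswith("nurse") for p in coalition) else 0

result = napkin_values(["doctor", "nurse1", "nurse2"], surgery)
```

For more realistic, graded value entries the two methods track each other more closely, as noted above.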

Regardless... who am I to argue with a Nobel Prize winning economist? But I can't be accused of not trying to get on board :) I like the look of Quadratic Funding, perhaps for a future post.

Comment by James Stephen Brown (james-brown) on The Other Existential Crisis · 2024-09-30T05:57:35.086Z · LW · GW

What I'm meaning to say is that if you naively believe that "you" (as in someone's sense of self—a result of their genes, experiences and reflections) have no control over yourself, you might feel a lot more relaxed about past mistakes, or future ones, since you have a ready excuse, resulting in lazy decision-making (decision-making involving less effort). Of course, you'll probably still satisfy the bare necessities for survival—although some of the early existentialists sound like they would barely bother with this.

Of course those same existentialists did write long, ground-breaking books that no doubt required significant cognitive effort, so "shrugs".

Comment by James Stephen Brown (james-brown) on The Other Existential Crisis · 2024-09-27T23:00:12.416Z · LW · GW

Part of the answer is to note that mixtures of indeterminism and determinism are possible, so that libertarian free will is not just pure randomness, where any action is equally likely.

This is really interesting, because I agree with this, but also agree with what Seth's saying. I think this disagreement might actually be largely a semantic one. As such, I'm going to (try to) avoid using the terms 'libertarian' or 'compatibilist' free will. First of all I agree with the use of "indeterminism" to mean non-uniform randomness. I agree that there is a way that determinism and indeterminism can be mixed in such a way as to give rise to an emergent property that is not present in either purely determined or purely random systems. I understand this in relation to the idea of evolutionary "design", which emerges from a system that necessarily has both determined and indeterminate properties (indeterminate at least at the level of the genes, though they might not be ultimately indeterminate).

I'm going to employ a decision-making map that seeks to clarify my understanding of how we make decisions and where we might get "what we want" from.

As I see it, the items in white are largely set, and change only gradually, and with no sense of control involved. I don't believe we have any control over our genes, our intentions or desires, or what results our actions will have on the world—I also don't think we have any control over our model of ourselves or the world, as those are formed subconsciously. But our effort (in the green areas) allows for deliberative decision making, following an evolutionary selection process, in which our conscious awareness is involved.

In this way we are not beholden to the first action available to us: we can, instead of taking an action in the world, make a series of simulated actions in our head, consciously experiencing the predicted outcomes of those actions, until we find a satisfactory one. So, you don't end up with a determined or a random solution; you end up with an option based on your conscious experience of your simulated options. This process satisfies my wants in terms of my sense that I have some control (when I make the effort) over my decisions. At the same time, I'm agnostic about whether true indeterminism exists at all, but, like with evolution—with randomness at the level of the cell that may not be ultimately random—I think even in an entirely determined universe, we exist on a level that is subject to at least some apparent indeterminism. And even if that apparent indeterminism turns out to be determined, our (eternal) inability to calculate what is determined still means we have no grounds to act in any way other than as if we have the control we feel we have.

I'm actually not sure if this makes me a compatibilist or not.

Determinists are always telling each other to act like libertarians. That's a clue that libertarianism is worth wanting.

So, I don't think my position is the same as asserting that we should act like libertarians; I have (now) described my conception of the situation, and I just think I should act consistently with that conception. By analogy, there are still people who say atheists often act according to "religious" moral values, but in fact they're not—it's just that morality is a mode of behaviour that has all the same functions regardless of one's belief system.

Comment by James Stephen Brown (james-brown) on The Other Existential Crisis · 2024-09-22T07:45:30.367Z · LW · GW

I am yet to find a statement by Popper that I disagree with.

Comment by James Stephen Brown (james-brown) on The Other Existential Crisis · 2024-09-22T07:44:25.823Z · LW · GW

I think Seth is not so much contradicting you here but using a deterministic definition of "self" as that which we are referring to as a particular categorisation of the deterministic process, the one experienced as "making decisions", and importantly "deliberating over decisions". Whether we are determined or not, the effort one puts into their choices is not wasted, it is data-processing that produces better outcomes in general.

One might be determined to throw in the towel on cognitive effort if they were to take a particular interpretation of determinism, and they, and the rest of us, would be worse off for it. So, the more of us who expend the effort to convince others of the benefits of continuing cognitive effort in spite of a deterministic universe are doing a service to the future, determined or otherwise.

Comment by James Stephen Brown (james-brown) on The Other Existential Crisis · 2024-09-22T07:35:49.373Z · LW · GW

Very well said, I have expressed this exact sentiment more clumsily many times. I concur.

Comment by James Stephen Brown (james-brown) on The Other Existential Crisis · 2024-09-22T07:34:10.687Z · LW · GW

In the absence of AI we can already pass through this phase of realisation/acceptance (the nausea of the realisation of being an object)


Clearly—Sartre was going through it in the early 20th century. I think, while I've never had much trouble with squaring my existence with materialism, I do feel like this period is giving me somewhat of a relapse, and as you say, this is not really within my control.

This feels like the intersection of Analytic and Continental Philosophy

Exactly, I think I was having a continental moment, please forgive me; we can now return to our scheduled rationality. But really, that's what I'm on about: our claim on rationality has, I guess, been pretty important to me, and I may just be suffering the side effects of seeing it being eroded in real time (the claim, not the rationality itself).

Comment by James Stephen Brown (james-brown) on The Other Existential Crisis · 2024-09-22T07:22:53.964Z · LW · GW

Well that is (commendably) the most positive interpretation of the pet hypothesis I've heard. When we think about it, we're really half way there already. By many measures, many of us are living in what anyone in the past would call Utopia, and are very much cared for by many over-arching systems, the market, the government, the internet. I've also never been one to complain "Oh, but what will I do with all my spare time if I don't have to work?"

Perhaps my daughter will have time to reach goals she actually wants to achieve rather than those of necessity. I appreciate your thoughts. 

Comment by James Stephen Brown (james-brown) on The Other Existential Crisis · 2024-09-22T07:14:03.647Z · LW · GW

Fair point. From the perspective of one who sees significant value in earning a living and providing goods and services, how are you feeling about the prospects of many marketable skills being mastered by AI? Do we need to reevaluate the value of jobs?

By pointing to a situation where people already don't feel they are contributing much, it seems like Seth is saying that we're not losing much through this rise of AI. But your objection suggests to me you might think that we are losing something significant?

Comment by James Stephen Brown (james-brown) on The Other Existential Crisis · 2024-09-22T07:06:57.430Z · LW · GW

Thanks Seth, I really appreciate what you've said here—it's good to be reminded that it's not necessary to have your kids change the world, and that caring for each other and expressing themselves contributes positively to the whole.

I'm less worried actually about paths my daughter might take, she's very bright and creative and I'm sure she'll do fine, I guess I don't want her to shy away from things just because someone or something else can do it better. I was mainly posting because I felt like that feeling of nausea might be affecting others who keep abreast of these things, and that the coincidence of reading the book right now might have helped me name something others might be experiencing.

But, again, I appreciate your thoughtful insights.

Comment by James Stephen Brown (james-brown) on We Don't Just Let People Die—So What Next? · 2024-08-06T11:37:18.168Z · LW · GW

Thanks for your comment. In workshopping this post, I definitely need to work on clarity :) I certainly wasn't meaning to refer to life-extension—I'm meaning the state of the world as it is, where we (most of us at least) don't find it acceptable to let people die of starvation from poverty (as evolution would have us do).

I'll add a couple of edits now just to clarify that, as you're not the first person to whom that wasn't clear.

Comment by James Stephen Brown (james-brown) on We Don't Just Let People Die—So What Next? · 2024-08-03T21:16:19.256Z · LW · GW

I would genuinely like to understand what you mean, but it’s not clear to me at present. You are allowed to read the entire post.

A starting point to understanding your point of view would be if you could please, in good faith, answer the question I asked in the previous comment. Do you believe that we should let poor people die?

Comment by James Stephen Brown (james-brown) on We Don't Just Let People Die—So What Next? · 2024-08-03T19:07:50.715Z · LW · GW

Thanks for your points npostavs

the real-world smartphone market is surely much closer to oligopoly than perfect competition

This is essentially my point: the government actually has to take measures to break up oligopolies, because oligopolies are beneficial to companies, allowing them to maximise profits by charging as much as possible for the least improvement (cost).

How sure are you that this isn't rather the costs of lack of competition?

The costs we've been discussing are externalities like environmental degradation and economic inequality. Competition has been shown to bring down prices—by making sales dependent on lower prices, so this is good for consumers but doesn't take into account those externalities (hence why they're called externalities).

A need for lower prices means companies necessarily have to look for ways to cut costs, which means prioritising lower cost materials over environmentally friendly materials and lower cost labour and automation over good pay for employees. So, as long as the externalities aren't part of the profit equation, the wider system will bear the cost and the system won't self-balance.

There are many ways that externalities can be made part of the profit equation; carbon credits or minimum wage requirements and regulations, but these necessarily have to be introduced from outside the market, they are not produced by the market.

Comment by James Stephen Brown (james-brown) on We Don't Just Let People Die—So What Next? · 2024-08-03T18:22:00.102Z · LW · GW

The article is about people living in poverty who fail to succeed in an open economic competition (the Covid point was a side point that had "shaken my faith").

I proposed that if you think we should let these people die, then you may as well stop reading. Do you think we should let poor people die? Or did I not phrase that clearly enough?

Comment by James Stephen Brown (james-brown) on Unlocking Solutions—By Understanding Coordination Problems · 2024-08-03T01:17:43.479Z · LW · GW

Ha! Love the meme. Thanks so much for your comment, what a compliment to have "unlocked something" in someone's brain! I absolutely hear you on the addiction issue, that's an interesting take to stack the measures—glad that's working for you.

Comment by James Stephen Brown (james-brown) on Unlocking Solutions—By Understanding Coordination Problems · 2024-07-30T08:42:38.799Z · LW · GW

Thanks Viliam,

I think that's a fair interpretation: if you are restricted in your resources, stick to quantifiable outcomes—a stoic dichotomy-of-control approach. The article is, however, about how to solve coordination problems, rather than how to choose appropriate problems for your capacity, because there are some unavoidable coordination problems we face as a civilisation.

The original home of this post, nonzerosum.games is a world-help site as opposed to a self-help site. So, it is focused on these wider, unavoidable social issues. Although individuals will certainly face their own coordination problems, and I think your interpretation leads to some good personal advice.

Comment by James Stephen Brown (james-brown) on Failures in Kindness · 2024-07-27T19:55:23.997Z · LW · GW

I am guilty of both offering opt-ins and fake exits, and also being one of those people that don't want to rock the boat by taking an opt-in or a fake exit. Thanks for this article, as it's highlighted a double standard in me. I knew already that I have a tendency towards cognitive unloading, but this gives very clear examples of situations I might want to have a pre-prepared position for that's not contradictory.

Comment by James Stephen Brown (james-brown) on Why People in Poverty Make Bad Decisions · 2024-07-24T18:53:17.823Z · LW · GW

I don't mean to assert that one effect is bigger than the other, more that together they create a vicious cycle. No one disputes that bad decisions can lead to poverty, that's common sense, or that other factors influence it, but if poverty itself is a multiplier it stands to reason that that needs to be addressed as part of any potential solution. The next post (dropping Saturday) is about how, in such coordination problems, multiple factors must align in order for any one solution to be effective.

Comment by James Stephen Brown (james-brown) on Why People in Poverty Make Bad Decisions · 2024-07-20T05:08:01.097Z · LW · GW

Oh, sorry, that was largely boiler-plate. While this post did have some hover-over info which didn't translate to LW (which was actually kind of important, as it provided some disclaimers and caveats to points made), it's probably not what you'd call a "full experience". Some other posts on the site have simulations.

Though I do think the overall aesthetic of the posts on the site is subtly important for the tone of my writing (generally a not too serious tone).

Comment by James Stephen Brown (james-brown) on Why People in Poverty Make Bad Decisions · 2024-07-16T04:04:34.410Z · LW · GW

I'd be interested in reading up on the replication problems with priming if you have any links. I wasn't on guard for this sort of research, so it seemed plausible to me. All of this goes against our general intuitions that people need to feel their poverty to get motivated for working, so I'm more likely to accept scientific research than assume it's wrong and that my intuitions are correct.

Comment by James Stephen Brown (james-brown) on Why People in Poverty Make Bad Decisions · 2024-07-16T00:45:24.682Z · LW · GW

Wait, are you trying to tell me that drug addiction and mental illness contribute to poverty? That seems like a stretch... (jokes)

I feel like that's a given; the factors that perpetuate poverty are manifold. The article focuses on this finding because it reveals a counter-intuitive and therefore easily overlooked contributory factor (and a significant one at that).

I didn't mean at all to suggest that this finding was the only contributory factor, or that it could be solved with any one solution (which is why the title of the last section reads "a" solution, not "the" solution).

Thanks for reading and responding :)

Comment by James Stephen Brown (james-brown) on A civilization ran by amateurs · 2024-06-06T05:33:46.022Z · LW · GW

I think the idea is really interesting. As someone who spent 5 years creating student video resources, I appreciate the impact they can have, and I have at times tried to convince my father—a life-long maths teacher to collaborate with me on replicating his course... but the fool didn't take me up on the offer.

the cost of showing it to every student in the country is approximately zero

I feel like the cost-effectiveness argument is valid but might run into issues. To begin with, as you have pointed out in one of your comments, a video resource with a teacher who can respond dynamically adds much more than a video alone. So, this means there is no cost saving in terms of teachers' time—which I think is a good thing (I'll put a pin in that for later)—and then video production on top of that is not at all cheap. One thing that was consistent, in my experience creating educational resources, was the need to constantly update them (there was a team of us working full time just to maintain one course).

So, while the cycle of feedback and constant improvement of the resources is a vital part of the process, it makes what seems like a one-off expense into a perpetual expense.

Furthermore, teachers are already underpaid relative to other professions requiring similar skills, so the additional funding for these new resources would need to come from an unprecedented increase in education funding (which could have gone to teachers) or would have to be taken from the budget at the expense of teachers.

Unless, of course, you leave it to the private sector, in which case you have to worry about advertising, special interests and competition leading to optimisation for what is appealing to students rather than what is necessarily effective—Hollywood, after all, only has the mandate to entertain; it doesn't have to also educate.

To get back to that pin: if we did manage to create a resource, perhaps incorporating generative AI, that can present ideas in an engaging way and provide dynamic feedback, making teachers unnecessary, we run into another issue. There's something to be said for having well-rounded educators in society; learning in a non-specialised way is enriching for people in general. One negative side of ChatGPT is the drastic drop-off in activity on forums like Stack Overflow, because people don't need other people any more.

There's something about the person to person trading of ideas that I think contributes to a robust community, in the same way that international trade helps to curb international conflicts—we might find that making human interaction unnecessary to education whether in schools or on forums might lead to a fragmentation of the social fabric. Personally I really like the idea of lots of amateurs sharing ideas—like on LessWrong and other forums, there's something uniquely human about learning from sharing, with benefits for the teacher also (à la the Feynman Technique).

But, I think you make a good case. Thanks for sharing the idea.

Comment by James Stephen Brown (james-brown) on Why I'll Keep My Crummy Drawings—How Generative AI Art Won't Supplant... Art. · 2024-05-25T19:37:44.183Z · LW · GW

Hey, again good points.

But I have recognized sparks of true understanding in one-shot AI works.

I absolutely agree here, this is what I was referring to when I wrote...

I think we can appreciate the beauty of connecting with humanity as a whole, knowing that it is the big data of humanity that has informed AI art - I suspect this is what we find so magical about it.

I suspect that AI has an appeal not just because of its fantastic rendering capacity but also the fact that it is synthesising works not just from a prompt but from a vast library of shared human experience.

you're looking at hours of thought and effort

Regarding the arduous* process of iteratively prompting and selecting AI art, I think the analogy with photography works in terms of evoking emotions. Photographers approach their works in a similar way, shooting multiple angles and subjects and selecting those that resonate with them (and presumably others) for exhibition or publication. I think there is something special about connecting with what a human artist recognised in a piece whether it came from a camera or an algorithm. I acknowledge this is a form of connection that is still present in AI art, just as it is in photography.

* I caveat "arduous" because, while it might take hours of wrangling the AI to express something approximating what we intend, the skill that takes artists years to master—that of actually creating the work, is largely performed, in the case of AI art, by the non-sentient algorithm. It is not the hours of work that goes into one painting that impresses the viewer generally, it's the unseen years of toil and graft that allowed the artist to make something magic within those hours. The vast majority of the magic in AI art is provided by the algorithm.

This is why I see it as analogous to photography. Still a valid art form, but not one that need make actual painting obsolete.

Comment by James Stephen Brown (james-brown) on What Are Non-Zero-Sum Games?—A Primer · 2024-05-24T09:20:27.744Z · LW · GW

Okay, so I think I get you now: in the imbalanced game, where the payoff is 100 or 1 as in the "Zero Sum is a misnomer" example, a rational player will still make the same decision, regardless of the imbalance with the other player, given the resulting preference ordering.

However while this imbalance makes no difference to the players' decisions, it does make a difference to the total payoff, making it non-zero-sum. I'm having difficulty understanding why values such as happiness or resources cannot be substituted for utility—surely at some point this must happen if game theory is to have any application in the real world. Personally I'm interested in real world applications, but I fully acknowledge my ignorance on this particular aspect.

I find a practical way to look at a zero-sum game is to imagine that each of two players must contribute half of the total payoff in order to play. This takes a game that is constant-sum, and makes it zero-sum, and does so in a way that doesn't break the constant-sumness. In the case of the imbalanced game, because it is not constant-sum it doesn't reduce to a zero-sum game in this way, remaining a non-zero-sum game with terrible odds for one player, meaning that a rational player won't opt in if given the option.
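To put numbers on that ante idea, here's a quick sketch with an arbitrary constant-sum payoff table (the values are made up purely for illustration):

```python
# A constant-sum game: every outcome's payoffs sum to c.
c = 10
outcomes = [(7, 3), (5, 5), (2, 8)]

# Each player contributes c/2 to play; netting out the ante turns the
# constant-sum game into an exactly zero-sum one, without changing
# either player's preference ordering over outcomes.
net = [(a - c / 2, b - c / 2) for a, b in outcomes]
assert all(x + y == 0 for x, y in net)  # sums to zero in every outcome
```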

If I'm not mistaken, this is generally what is meant when someone refers to a zero-sum game. In chess, for instance, you enter a competition with your rating (essentially your bet) and the outcome of the game has either a negative or positive (or no) impact on your rating and an opposite impact on your opponent's rating. (I'm not exactly sure if chess ratings are calculated as exactly zero-sum, but you get the idea.) So, the game is zero-sum. Of course there are outside factors that make it beneficial to both players; enjoyment, brain-exercise, socialising etc which may have positive utility on another level, but the game itself and the resulting rating changes are essentially zero-sum.

This is the sense in which I am using the term "zero-sum", in the most basic sense for someone to win (relative to their starting point, bet or rating) another must lose by an equal amount.

There is probably a more mathematically succinct way of expressing this, but I don't have those tools at my disposal at present. Again, thanks for your comments. Please don't feel the need to continue your labours educating me on this topic, I understand that you clearly have a better understanding of game theory than I do, so I appreciate your time. I should probably continue reading further to level up my understanding. Of course if you feel like continuing the floor is yours. 

Comment by James Stephen Brown (james-brown) on What Are Non-Zero-Sum Games?—A Primer · 2024-05-23T22:27:16.461Z · LW · GW

I’m not sure how the game is the same when you add a constant. The game as proposed in the example is clearly different. I can see that multiplication makes no difference, and as such doesn’t make the sum non-constant. I don’t see how asymmetrically changing the parameters is a “mere change in notation”.

By the way, I’m sure you’re entirely correct about this, I just simply don’t see how there is a problem with using the concept of zero-sum understood as constant-sum.

Comment by James Stephen Brown (james-brown) on What Are Non-Zero-Sum Games?—A Primer · 2024-05-23T19:23:22.337Z · LW · GW

Hi Vladimir, thanks for your input, it has been fascinating going down the rabbit hole of nuance regarding the term "zero-sum".

I agree that the term is more accurately denoting "constant-sum", and I think this is generally implied by most people using it. There was the interesting "zero-sum" example in the linked article that veered away from "constant-sum" with asymmetrical payoffs, 100,0 or 0,1, meaning that depending on the outcome of the game the total sum would be different. This, to me, disqualifies it from being called a zero-sum game, given the common understanding that zero-sum denotes constant-sum. The example seemed to solve the problem by conflating zero-sum and constant-sum and then proceeded to stick to a strict definition of zero-sum, which was confusing. But perhaps I just need to sit with it longer.

To your point about Kaldor-Hicks, yes I guess many positive-sum situations could be described in these terms but I'm really referring to something more general—any situation where the total sum payoff increases regardless of Pareto improvements or promised reimbursement by other means to any party left worse off. For instance if a left-wing government were to increase taxes on the wealthy, not offering them any reimbursement, but rather doing this based on the mandate that comes with being democratically elected, then this policy might be positive-sum due to the fact that dollar-for-dollar money makes a bigger difference to a poor person than a rich person, due to diminishing returns on happiness.

I really appreciate your comments, and intend to continue exploring the nuances you've raised. I think for a primer on non-zero-sum games, particularly with a site that is focused on practical solutions in the real world rather than pure theory, the more accessible (perhaps less nuanced) definitions I've used are probably appropriate.

Comment by James Stephen Brown (james-brown) on What Are Non-Zero-Sum Games?—A Primer · 2024-05-23T05:46:46.764Z · LW · GW

Hi Vladimir,

Thanks for your comment, please excuse the delay in getting back, I'm actually busily digesting your response and the various branches of dependencies that comprise it (in terms of links to other concepts). I intend to get back to you with a considered answer, but am enjoying taking my time exploring the ideas you've linked to.

Comment by James Stephen Brown (james-brown) on A Positive Double Standard—Self-Help Principles Work For Individuals Not Populations · 2024-05-22T22:59:20.110Z · LW · GW

Hi bideup, thanks for your comment. The graph is simplified from one in the Pew Report with the left bar representing the lower quintile and the right representing the upper. I see what you mean, but the intention of pointing to the 20% mark is to show where it should be given 100% social mobility. Perhaps the omission of the central quintiles didn’t help.