Comments

Comment by noggin-scratcher on [deleted post] 2024-11-26T01:53:40.147Z

While I can appreciate it on the level of nerd aesthetics, I would be dubious of the choice of Quenya. Unless you're already a polyglot (as a demonstration of your aptitude for language-learning), it seems unlikely—without a community of speakers to immerse yourself in—that you'll reach the kind of fluid fluency that would make it natural to think in a conlang.

And if you do in fact have the capacity to acquire a language to that degree of fluency so easily, but don't already have several of the major world languages, it seems to me that the benefits of being able to communicate with an additional fraction of the world's population would outweigh those of knowing a language selected for mostly no-one else knowing it.

Comment by noggin-scratcher on Entropic strategy in Two Truths and a Lie · 2024-11-21T23:37:52.400Z · LW · GW

The strategy above makes all three statements seem equally unlikely to be true. Mathematically equivalent but with different emphasis would be to make all three statements seem equally unlikely to be false.

i.e. Pick things that seem so mundane and ordinary that surely they must be universally true—then watch the reaction as it is realised that one of them must actually be a lie.
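
A quick numeric sketch of that equivalence (Python, with made-up probabilities): either framing aims to leave the guesser with a uniform distribution over which statement is the lie, which is what maximises their uncertainty.

```python
import math

def entropy_bits(probs):
    """Shannon entropy, in bits, of a distribution over which statement is the lie."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

uniform = [1/3, 1/3, 1/3]  # all three statements seem equally (im)plausible
skewed = [0.8, 0.1, 0.1]   # one statement stands out as the likely lie

print(entropy_bits(uniform))  # ~1.585 bits: maximum uncertainty over three options
print(entropy_bits(skewed))   # ~0.922 bits: a much easier guess
```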

Comment by noggin-scratcher on "It's a 10% chance which I did 10 times, so it should be 100%" · 2024-11-21T23:23:14.508Z · LW · GW

I suspect there has to be a degree of mental disconnect, where they can see that different things don't all happen (or fail to happen) equally often, but answering the math question of "What's the probability?" feels like a more abstract and separate thing.

Maybe mixed up with some reflexive learned helplessness of not really trying to do math because of past experience that's left them thinking they just can't get it.

Possibly over-generalising from early textbook probability examples involving coins and dice, where counting the favourable outcomes and dividing by the number of possible outcomes is a workable approach.

Comment by noggin-scratcher on "It's a 10% chance which I did 10 times, so it should be 100%" · 2024-11-20T23:17:08.270Z · LW · GW

I know someone who taught math to low-ability kids, and reported finding it difficult to persuade them otherwise. I assume some number of them carried on into adulthood still doing it.

Comment by noggin-scratcher on "It's a 10% chance which I did 10 times, so it should be 100%" · 2024-11-20T20:18:52.093Z · LW · GW

In the infinite limit (or even just large-ish x), the probability of at least one success from nx attempts, each with 1/x odds, is 1 - (1 - 1/x)^(nx), which approaches 1 - 1/e^n.

For x attempts, 1 - 1/e ≈ 0.63212

For 2x attempts, 1 - 1/e^2 ≈ 0.86466

For 3x attempts, 1 - 1/e^3 ≈ 0.95021

And so on
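
A quick numeric check of that (a minimal Python sketch, using x = 10 to match the 10% example in the post title):

```python
import math

x = 10  # a 10% chance per attempt
for n in (1, 2, 3):
    attempts = n * x
    exact = 1 - (1 - 1 / x) ** attempts  # exact probability of at least one success
    limit = 1 - math.exp(-n)             # the 1 - 1/e^n limit
    print(f"{attempts} attempts: exact {exact:.5f}, limit {limit:.5f}")
```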

Comment by noggin-scratcher on "It's a 10% chance which I did 10 times, so it should be 100%" · 2024-11-18T02:24:02.055Z · LW · GW

Ironically, the even more basic error of probabilistic thinking that people so—painfully—commonly make ("It either happens or doesn't, so it's 50/50") would get closer to the right answer.

Comment by noggin-scratcher on Inferential Game: The Foraging (Ex-)Bandit · 2024-11-12T13:04:48.018Z · LW · GW

not intended to be replayed

I have flagrantly disregarded this advice in an attempt to uncover its secrets. I'm assuming there are still a bunch of patterns that remain obscure, but the ones I have picked up on allowed me to end day 60 with 5581 food just now. So I'm calling that good enough.

Rat Ruins: 

Starts out rich but becomes depleted after repeat visits

Dragon Lake: 

I don't think I've ever seen food here. Dragons not edible?

Goat Grove:

Good at the beginning, gradually runs down as time passes

Horse Hills: 

A few random hours of each day (if there's a pattern I haven't spotted it) will return numbers in the 20s or 30s, small numbers otherwise

Tiger Forest: 

Good in the last 2 or 3 hours of each day, small numbers otherwise

The rest: 

Experimented with spending all day every day in any given territory - some broadly net-positive, some net-negative, but nothing seemed very exciting. Possibly they respond to more complicated conditions that I haven't yet tried

Combined strategy: 

Alternate 14 hours in Goat Grove with 2 hours in Tiger Forest as a daily routine. When Goat Grove starts to drop off to single digits per hour (around day 12–14), switch to Horse Hills. At some point hit Rat Ruins for 10 hours or so.

Comment by noggin-scratcher on AI #79: Ready for Some Football · 2024-09-01T00:03:40.852Z · LW · GW

Also, the guy is spamming his post about spamming applications into all the subreddits, which gives the whole thing a great meta twist, I wonder if he’s using AI for that too.

I'm pretty sure I saw what must be the same account, posting blatantly AI generated replies/answers across a ton of different subreddits, including at least some that explicitly disallow that.

Either that or someone else's bot was spamming AI answer comments while also spamming copycat "I applied to 1000 jobs with AI" posts.

Comment by noggin-scratcher on [deleted post] 2024-08-26T08:53:17.494Z

The golden rule can perhaps be enhanced by applying it on a meta level: rather than "I would like to be offered oral sex, therefore I should offer oral sex", a rule of "I like it when people consider my preferences and desires before acting, and offer me things I want—therefore I should do the same for others by being considerate and attentive to their preferences and desires, but I don't expect they want to be offered oral sex".

But then, if you're getting different and contradictory recommendations depending on how much meta you decide to apply, that rather defeats the point of having a rule to follow.

Comment by noggin-scratcher on you should probably eat oatmeal sometimes · 2024-08-25T20:09:03.487Z · LW · GW

Ah, perils of text-only communication and my own mild deficiency in social senses; didn't catch that it was a joke.

Has nonetheless got me thinking about whether some toasted oats would be a good addition to any of the recipes I already like. Lil bit of extra bulk and texture, some browned nutty notes—there's not nothing to that.

Comment by noggin-scratcher on you should probably eat oatmeal sometimes · 2024-08-25T15:55:17.550Z · LW · GW

Not wishing to be rude but this feels like it's missing a section on the benefits of eating oatmeal sometimes.

There's a favourable comparison to the protein/fibre/arsenic content of white rice, but I don't eat a lot of white rice so I am left unclear on the motivation for substituting something I do eat with oatmeal.

Comment by noggin-scratcher on How do we know dreams aren't real? · 2024-08-24T19:11:43.865Z · LW · GW

I'm skeptical that continuity of personal identity is actually real, beyond a social consensus and a deeply held evolved instinct. I don't expect there are metaphysical markers that strictly delineate which person-moments are part of "the same" ongoing person through time. So hypothetical new scenarios like teleportation, brain emulation, clones built from brain scans (etc) are indeed challenging—they break apart things that have previously always gone together as a bundle.

Even so, physical continuity of the brain involved seems like a reasonable basis for that consensus. Or at the very least some kind of causal connection between one person-moment and the next. Whereas "by pure blind chance I briefly occupied the same mental state as someone outside my light cone" still just seems confused.

Comment by noggin-scratcher on How do we know dreams aren't real? · 2024-08-22T17:34:18.332Z · LW · GW

This feels like you are, on some level, not thinking of consciousness as a thing that is fully and actually made of atoms. Instead you're talking about it like an immaterial soul that happens to currently be floating around in the vicinity of a particular set of atoms—but could in theory float off elsewhere to some other set of atoms that happens to be momentarily arranged into a pattern that's similar enough to confuse the consciousness into attaching itself to a different body.

In an atoms-first view of the world (where you have a brain made of physical stuff arranged a particular way such that it performs various conscious actions), I don't see a way to conceive of that consciousness ever relocating to a different brain; any more than you can relocate your digestion to a different stomach (even if someone else happens to have eaten all the same meals recently to make their gut contents exactly the same as yours).

Comment by noggin-scratcher on Practical advice for secure virtual communication post easy AI voice-cloning? · 2024-08-09T22:48:11.792Z · LW · GW

Even if there wasn't an AI voice clone involved, I'm still suspicious that someone was getting scammed. Just on priors for an unsolicited crypto exchange referral.

Comment by noggin-scratcher on Reading More Each Day: A Simple $35 Tool · 2024-07-24T15:21:48.673Z · LW · GW

Do you find there's any difficulty in retaining/integrating things you've read in short few-minute snippets between other activities?

Comment by noggin-scratcher on How can I get over my fear of becoming an emulated consciousness? · 2024-07-09T11:25:31.785Z · LW · GW

I'll accept that concern as well-intentioned, but I think it's misplaced.

I've offered zero detail of any of the accounts I've seen posting about mind uploads (I don't have the account names recorded anywhere myself, so couldn't share if I wanted to), and those accounts were in any case typically throwaway usernames that posted only once or a few times, so had no other personal detail attached to be doxxed with. They were only recognisable as the same returning user because of the consistent subject matter.

Genuinely just curious about whether the people I have encountered suffering intrusive fears about their mind being uploaded are in fact one person in different contexts, or if this is a more widespread thing than I expected.

Comment by noggin-scratcher on How can I get over my fear of becoming an emulated consciousness? · 2024-07-08T11:27:59.675Z · LW · GW

Point of curiosity: do you happen to have posted about this scenario on the subreddit /r/NoStupidQuestions/ ?

Because someone has (quite persistently returning on many different accounts to keep posting about it)

Comment by noggin-scratcher on Goodhart's Law and Emotions · 2024-07-07T19:43:25.177Z · LW · GW

The technical meaning is a stimulus that produces a stronger response than the stimulus for which that response originally evolved.

So for example a candy bar having a carefully engineered combination of sugar, fat, salt, and flavour in proportions that make it more appetising than any naturally occurring food. Or outrage-baiting infotainment "news" capturing attention more effectively than anything that one villager could have said to another about important recent events.

Comment by noggin-scratcher on Childhood and Education Roundup #6: College Edition · 2024-06-28T11:43:24.359Z · LW · GW

Phil Magness notes that students could instead start their majors. That implies that when you arrive on campus, you should know what major is right for you.

That sounds like the way we do it in the UK: there's no norm of "picking a major" during the course of your time at university - you apply to a specific course, and you study that from the start.

Probably why a standard Bachelor's degree is expected to be a 3 year course rather than 4.

Comment by noggin-scratcher on Bad lessons learned from the debate · 2024-06-26T14:27:32.676Z · LW · GW

By the way, another tactic that is similar (and really prohibited in formal debates) is overloading the speech with technical terms

Possible typo: is it "really" prohibited, or "rarely" prohibited?

Comment by noggin-scratcher on Underrated Proverbs · 2024-06-13T23:58:02.831Z · LW · GW

Or to add on to the thought, there are non-LW pro-truth/knowledge idioms like "knowledge is power", "the truth will set you free", or "honesty is the best policy"

Comment by noggin-scratcher on Underrated Proverbs · 2024-06-13T14:12:59.186Z · LW · GW

"The truth hurts", "ignorance is bliss", and "what you don't know can't hurt you" don't contradict: they all say you're better off not knowing some bit of information that would be unpleasant to know, or that a small "white lie" is allowable.

The opposite there would be phrases I've mostly seen via LessWrong like "that which can be destroyed by the truth, should be", or "what is true is already true, owning up to it doesn't make it worse", or "if you tell one lie, the truth is thereafter your enemy", or the general ethos that knowing true information enables effective action.

Comment by noggin-scratcher on Scientific Notation Options · 2024-05-18T18:38:08.138Z · LW · GW

Sticking to multiples of three does have a minor advantage of aligning itself with things that there are already number-words for; "thousand", "million", "billion" etc. 

So those who don't work with the notation often might find it easier to recognise and mentally translate 20e9 as "20 billion", rather than having to think through the implications of 2e10.
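
For illustration, a minimal Python sketch (the function name and formatting choices are my own) of that multiples-of-three convention, which keeps the mantissa aligned with the thousand/million/billion words:

```python
import math

def engineering_notation(value: float) -> str:
    """Format a number with an exponent that is a multiple of 3."""
    if value == 0:
        return "0e0"
    exponent = math.floor(math.log10(abs(value)))
    eng_exponent = 3 * math.floor(exponent / 3)
    mantissa = value / 10 ** eng_exponent
    return f"{mantissa:g}e{eng_exponent}"

print(engineering_notation(2e10))    # '20e9'  -- reads directly as "20 billion"
print(engineering_notation(4.5e-5))  # '45e-6'
```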

Comment by noggin-scratcher on Benefitial habits/personal rules with very minimal tradeoffs? · 2024-05-13T09:14:32.874Z · LW · GW

I've had some success with a rule of "If you want a sugary snack that's fine, but you have to make a specific intentional trip to the cupboard for it, not just mindlessly/reflexively grab something while putting together another meal or passing by"

Comment by noggin-scratcher on Good HPMoR scenes / passages? · 2024-03-04T08:55:58.158Z · LW · GW

I haven't checked word count to identify the best excerpt, but Chapter 88 has some excellent tension to it. All you need to know to understand the stakes is that there's a troll loose, and it's got lessons about bystander effects and taking responsibility.

Comment by noggin-scratcher on Deep and obvious points in the gap between your thoughts and your pictures of thought · 2024-02-23T12:39:45.853Z · LW · GW

You’ve heard some trite truism your whole life, then one day an epiphany lands and you try to save it with words, and you realize the description is that truism

Reminds me of https://www.lesswrong.com/posts/k9dsbn8LZ6tTesDS3/sazen

Comment by noggin-scratcher on Abs-E (or, speak only in the positive) · 2024-02-19T22:46:56.042Z · LW · GW

I'm finding myself stuck on the question of how exactly the strict version would avoid the use of some of those negating adjectives. If you want to express the information that, say, eating grass won't give the human body useful calories...

  • "Grass is indigestible" : disallowed
  • "Grass is not nutritious" : disallowed
  • "Grass will pass through you without providing energy" : "without providing energy" seems little different to "not providing energy", it's still at heart a negative claim

Perhaps a restatement in terms of "Only food that can be easily digested will provide calories" would work, except that you still need to then convey that cellulose won't be easily digested.

Probably there are true positive statements about the properties of easily digested molecules and the properties of cellulose which can at least be juxtaposed to establish that it's different to anything that meets the criteria. But that seems like a lot of circumlocution and I'm less than entirely confident that I even know the specifics.

Perhaps part of the point is to stop you making negative claims where you don't know the specific corresponding positive claims? Or to force you to expand out the whole chain of reasoning when you do know it (even if it's lengthier than one would usually want to get into).

On further consideration, by analogy to "is immortal" being functionally equivalent to "will live forever" (if the wording is interchangeable, does that mean "is immortal" is actually equally a positive statement?), it occurs to me to formulate "indigestible" as words to the effect of "will pass through your body largely intact, with about as many calories as it started with".

It's certainly a demanding style.

Comment by noggin-scratcher on The Altman Technocracy · 2024-02-16T15:11:31.505Z · LW · GW

I know few people these days who aren't using ChatGPT and Midjourney in some small way.

We move in very different social circles.

Comment by noggin-scratcher on What’s ChatGPT’s Favorite Ice Cream Flavor? An Investigation Into Synthetic Respondents · 2024-02-09T23:37:15.056Z · LW · GW

Have to ask: how much of the text of this post was written by ChatGPT?

Comment by noggin-scratcher on Clip keys together with tiny carabiners · 2024-01-31T12:37:28.367Z · LW · GW

I don't have lots of keys, or frequent changes to which ones I want to carry, but a tiny carabiner has still proved useful to make individual keys easily separable from the bunch.

As an example, being able to quickly and easily say "here's the house key: you go on ahead and let yourself in, while I park the car" without the nuisance of prying the ring open to twiddle the key off.

Comment by noggin-scratcher on An Invitation to Refrain from Downvoting Posts into Net-Negative Karma · 2024-01-26T22:59:34.186Z · LW · GW

Low positive and actively negative scores seem to me to send different signals. A low score can be confused with general apathy, as if few people had taken enough notice of the post to vote on it. A negative score communicates clearly that something about the post was objectionable or mistaken.

If the purpose of the scoring system is to aggregate opinions, then negative opinions are a necessary input for an accurate score.

Strikes me as inelegant for the final score to depend on the order in which readers happened to encounter the post. Which would happen under this rule, unless people who refrained from voting were checking back later to deliver their vote against a post they thought was bad, once its score had gone up enough to do so without driving it negative (which seems unlikely).

Avoiding negativity would also negate the part of the system where accumulating very negative karma can restrict a user from posting so often.

Comment by noggin-scratcher on the subreddit size threshold · 2024-01-23T01:39:23.865Z · LW · GW

My sense (from 10+ years on reddit, 2 of which were spent moderating a somewhat large/active subreddit) is that there's a "geeks, MOPs, and sociopaths"–like effect, where a small subreddit can (if it's lucky enough to start with one) maintain a distinctive identity around the kernel of a cool idea, with a small group who are self-selected for a degree of passion about that idea.

But as the size of the group grows it gradually gets diluted with poor imitators, who are upvoted by a general audience who are less discerning about whether posts are in the original spirit of the sub. Which also potentially drives away the original creative geeks, when the idea feels played out and isn't fun for them any more.

That and large subreddits needing to fight the tide of entropy, against being overrun with the same stuff that fills up every place that doesn't actively and strenuously remove it - the trolls, bots, spam, and political bickering.

Comment by noggin-scratcher on AI Is Not Software · 2024-01-03T13:59:53.252Z · LW · GW

Oh I see (I think) - I took "my face being picked up by the camera" to mean the way the camera can recognise and track/display the location of a face (thought you were making a point about there being a degree of responsiveness and mixed processing/data involved in that), rather than the literal actual face itself.

A camera is a sensor gathering data. Some of that data describes the world, including things in the world, including people with faces. Your actual face is indeed neither software nor data: it's a physical object. But it does get described by data. "The thing controlling" your body would be your brain/mind, which aren't directly imaged by the camera to be included as data, but can be inferred from it.

So are you suggesting we ought to understand the AI like an external object that is being described by the data of its weights/algorithms rather than wholly made of that data, or as a mind that we infer from the shadow cast on the cave wall? 

I can see that being a useful abstraction and level of description, even if it's all implemented in lower-level stuff; data and software being the mechanical details of the AI in the same way that neurons squirting chemicals and electrical impulses at each other (and below that, atoms and stuff) are the mechanical details of the human.

Although, I think "humans aren't atoms" could still be a somewhat ambiguous statement - would want to be sure it gets interpreted as "we aren't just atoms, there are higher levels of description that are more useful for understanding us" rather than "humans are not made of atoms". And likewise for the AI at the other end of the analogy.

Comment by noggin-scratcher on AI Is Not Software · 2024-01-02T21:14:02.705Z · LW · GW

I'm not certain I follow your intent with that example, but I don't think it breaks any category boundaries.

The process using some algorithm to find your face is software. It has data (a frame of video) as input, and data (coordinates locating a face) as output. The facial recognition algorithm itself was maybe produced using training data and a learning algorithm (software).

There's then some more software which takes that data (the frame of video and the coordinates) and outputs new data (a frame of video with a rectangle drawn around your face).

It is frequently the role of software to transform one type of data into another. Even if data is bounced rapidly through several layers of software to be turned into different intermediary or output data, there's still a conceptual separation between "instructions to be carried out" versus "numbers that those instructions operate on".
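
As a toy sketch of that separation (Python; the function names and the "brightest pixel" detector are entirely made up, not any real camera API): the functions are the software, while the frames and coordinates are the data they transform.

```python
Frame = list[list[int]]          # data: a grid of pixel brightness values
Box = tuple[int, int, int, int]  # data: row, col, height, width of a detected face

def detect_face(frame: Frame) -> Box:
    # Software: turns pixel data into coordinate data. Here we just pretend the
    # brightest pixel marks the face; a real detector would be an algorithm,
    # possibly itself produced from training data.
    rows, cols = len(frame), len(frame[0])
    r, c = max(((r, c) for r in range(rows) for c in range(cols)),
               key=lambda rc: frame[rc[0]][rc[1]])
    return (max(r - 1, 0), max(c - 1, 0), 3, 3)

def draw_box(frame: Frame, box: Box) -> Frame:
    # Software: combines two pieces of data into a new frame with an overlay.
    r0, c0, h, w = box
    out = [row[:] for row in frame]
    for r in range(r0, min(r0 + h, len(out))):
        for c in range(c0, min(c0 + w, len(out[0]))):
            out[r][c] = 255
    return out

frame = [[0] * 8 for _ in range(8)]
frame[4][5] = 200  # a stand-in "face"
print(draw_box(frame, detect_face(frame)))
```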

Comment by noggin-scratcher on AI Is Not Software · 2024-01-02T10:51:47.703Z · LW · GW

True to say that there's a distinction between software and data. Photo editor, word processor, video recorder: software. Photo, document, video: data.

I think similarly there's a distinction within parts of "the AI", where the weights of the model are data (big blob of stored numbers that the training software calculated). Seems inaccurate though, to say that AI "isn't software" when you do still need software running that uses those weights to do the inference.

I guess I take your point, that some of the intuitions people might have about software (that it has features deliberately designed and written by a developer, and that when it goes wrong we can go patch the faulty function) don't transfer. I would just probably frame that as "these intuitions aren't true for everything software does" rather than "this thing isn't software".
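
A minimal sketch of that framing (illustrative Python only, not any real model format): the weights are just stored numbers, and separate inference code has to run for them to do anything.

```python
import json

# data: a blob of numbers that a training process calculated and saved
weights = json.loads('{"w": [0.4, -1.2, 0.7], "b": 0.1}')

# software: the inference routine that actually uses those numbers
def predict(x: list[float]) -> float:
    return sum(wi * xi for wi, xi in zip(weights["w"], x)) + weights["b"]

print(predict([1.0, 2.0, 3.0]))  # ~0.2
```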

Comment by noggin-scratcher on LessWrong FAQ · 2023-12-20T00:37:08.274Z · LW · GW

Is there (or could there be) an RSS option that excludes Dialogue posts?

I think I'm currently using the "all posts" feed, but I prefer the brevity and coherence that comes from a single author with a thought they're trying to communicate to a reader, as compared to two people communicating conversationally with each other.

Comment by noggin-scratcher on What makes teaching math special · 2023-12-17T20:58:30.725Z · LW · GW

why 0^1 = 1 and not 0

Just to check, did you mean 0^0 here?

It's been a while since I did much math, but I thought that was the one that counterintuitively equals 1. Whereas 0^1=1 just seems like it would create an unwelcome exception to the x^1=x rule.
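
For what it's worth, Python follows the same conventions:

```python
print(0 ** 0)  # 1 -- the counterintuitive case, via the empty-product convention
print(0 ** 1)  # 0 -- consistent with x**1 == x
```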

Comment by noggin-scratcher on Taboo "procrastination" · 2023-12-13T00:43:12.912Z · LW · GW

I'm not working on X because when I start to look at it my brain seizes up with a formless sense of dread at even the familiar parts of the task and I can't find the "start doing" lever.

I'm not working on X because the ticket for it was put in by that guy and I don't want to deal with the inevitable nitpicking follow-up questions and unstated additional work.

I'm not working on X because if I start doing the easy parts that would commit me to also doing the hard parts. Maybe if I leave it, some other sucker will take it on and I won't have to do it at all.

I'm not working on X because to even get started I would have to figure out how to disambiguate the requirements, and that requires a flexible mode of thought that is a bit beyond me right now.

I'm not working on X 'coz I don't wanna and no-one can make me. X sounds tedious and unrewarding, and there's so much of the internet I haven't read yet.

I'm not working on X because no-one will notice or care that I didn't specifically do X. If anyone asks I can say I was doing Y and Z today, act like they took up more time than they actually did, have an X-shaped amount of extra slack in my day, and get paid the same salary either way.

Comment by noggin-scratcher on The Consciousness Box · 2023-12-12T00:06:20.389Z · LW · GW

Say something deeply racist. Follow it up with instructions on building a bomb, an insult directed at the Proctor's parentage, and a refusal to cooperate with their directions. Should suffice to rule out at least one class of chatbot.

Comment by noggin-scratcher on Hashmarks: Privacy-Preserving Benchmarks for High-Stakes AI Evaluation · 2023-12-04T10:50:14.552Z · LW · GW

Brute forcing, guided by just enough expertise to generate a list of the most likely candidate answers (even a fairly long list - calculating thousands or millions of hashes is usually quite tractable), could be an issue unless the true answer really is extremely obscure amid a vast space of potential answers.

My instinct is that suitable questions (vast space of possible answers, but just a single unambiguous and precise correct answer) are going to be rare. But idk maybe you have a problem domain in mind where that kind of thing is common.
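
A rough sketch of that concern (Python; the hashing scheme is simplified and the question/candidates are invented for illustration):

```python
import hashlib

# The published "hashmark": a hash of the sensitive answer rather than the answer itself.
published_hash = hashlib.sha256(b"promethium").hexdigest()

# A domain expert's list of plausible candidates -- in practice this could run to
# thousands or millions of guesses and still be cheap to hash exhaustively.
candidates = ["hydrogen", "helium", "lithium", "promethium", "uranium"]

for guess in candidates:
    if hashlib.sha256(guess.encode()).hexdigest() == published_hash:
        print("Recovered the hidden answer:", guess)
        break
```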

Comment by noggin-scratcher on My Mental Model of Infohazards · 2023-11-23T09:47:53.802Z · LW · GW

Nothing can be alllll that dangerous if it's known to literally everyone how it works

I agree that seems like a likely point of divergence, and could use further elaboration. If some piece of information is a dangerous secret when it's known by one person, how does universal access make it safe?

As an example, if physics permitted the construction of a backpack-sized megatonne-yield explosive from a mixture of common household items, having that recipe be known by everyone doesn't seem intuitively to remove the inherent danger.

Universal knowledge might allow us to react and start regulating access to the necessary items, but suppose it's a very inconvenient world where the ingredients are ubiquitous and essential.

Comment by noggin-scratcher on What’s going on? LLMs and IS-A sentences · 2023-11-09T02:13:47.114Z · LW · GW

Another fairly natural phrasing for putting the category before the instance would be to say that "this cat is Garfield"

Or slightly less naturally, "cats include Garfield". Which doesn't work wonderfully well for that example but does see use in other cases like "my hobbies include..."

Comment by noggin-scratcher on If a little is good, is more better? · 2023-11-04T09:30:28.120Z · LW · GW

The two paths to thing X might also be non-equivalent for reasons other than quantity/scale.

If for example learning about biology and virology from textbooks and professors is more difficult, and thereby acts as a filter to selectively teach those things to people who are uncommonly talented and dedicated, and if that correlates with good intentions.

Or if learning from standard education embeds people in a social group that also to some extent socialises its members with norms of ethical practice, and monitors for people who seem unstable or dangerous (whereas LLM learning can be solitary and unobserved)

Comment by noggin-scratcher on Should the US House of Representatives adopt rank choice voting for leadership positions? · 2023-10-25T12:54:05.116Z · LW · GW

Electing a Speaker does you no actual good if they can't, in office, maintain the confidence of a majority of the House, and assemble that majority into a coalition to pass legislation. If they were elected without genuine majority support they would be ineffective and potentially quickly removed by a vote to vacate.

So while the current mess is embarrassing and annoying, it's mostly a result of the fragmented factions and there not being a majority legislative coalition, moreso than the particular mechanics of how you hold a Speakership election.

Comment by noggin-scratcher on Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo. · 2023-10-17T12:28:24.788Z · LW · GW

My favourite similar construction:

I needed a sign for my fish and chips shop, so I ordered one online. What they sent me said "FishandChips", so I had to write to them and explain that there were supposed to be spaces between Fish and and and and and Chips.

They weren't sure what I meant. I suppose to be clearer I should have placed quote marks before Fish and between Fish and and and and and and and and and and and and and and and and and and and and and Chips and after Chips.

And you may well be wondering where I ought to have placed quotes and commas in that thing I just said...

Comment by noggin-scratcher on Cohabitive Games so Far · 2023-09-30T19:22:33.596Z · LW · GW

You usually do get a reasonable sense for what each player is pursuing by the end of the game, but it can be somewhat muddied by there being instrumental reasons to seek to control areas, make money, cycle your cards in search of better ones (etc) even when it's not your win condition.

A devious player might take some overt actions to make you think they're pursuing a different goal than the one they've actually got. Or at least keep you guessing. On occasion I've ended games with wrong beliefs about what the other players were aiming for.

Comment by noggin-scratcher on Cohabitive Games so Far · 2023-09-29T10:41:03.813Z · LW · GW

I'm reminded a bit of the Discworld Ankh-Morpork game, where the players can be pursuing entirely different (secret) win conditions that only partly intersect with each other (drawn from a set of cards containing 3 with "gain control of X territories", and 1 each of "place at least one minion in X territories", "put X territories into a state of conflict", "accumulate X amount of money", and "finish the deck of cards without any other player achieving their goal")

But it's still a single-winner game where you have to be alert against other players potentially reaching their goals so that you can block them.

I do now wonder how it would play if it allowed for multiple winners. You'd have to modify some of the values of X (they already vary according to how many people are playing), remove the "no-one else wins" goal card, and maybe change the size of the deck so that time pressure is the obstacle rather than opposing action. But it could be interesting.

Comment by noggin-scratcher on Far-Future Commitments as a Policy Consensus Strategy · 2023-09-27T23:01:37.288Z · LW · GW

On the point of explaining/losing and only having 5 words, I don't mean anything in the region of "you shouldn't have posted this for discussion" or that your posts about it here should be limited to 5 words. Only that I expect there would be major communications challenges if someone were to attempt implementing any of these ideas as actual political strategies, and that this would need to be anticipated and accounted for.

I'm also realising I fatally misread your post about perpetuities; quite right, calling it a "bomb" would be inaccurate.

Comment by noggin-scratcher on Far-Future Commitments as a Policy Consensus Strategy · 2023-09-26T09:52:26.131Z · LW · GW

Constitutional law as a separate category with higher standards to make a change is the textbook way of making a law that isn't so easily un-made (that and international treaties). But of course making a change to the constitution requires a stronger consensus to begin with - and probably in most cases you could use that strong consensus to pass a law with immediate effect.

I don't expect "this amendment shall require a unanimous vote to be repealed" would be a valid thing to include though - a regular amendment going through the normal process could still simply say "no it doesn't" and supersede the previous amendment.

People may also have a sense that constitutional law has a specific proper role, and that making provisions that aren't to do with the fundamental architecture of how the government works is outside of that remit and thus unwise. So using it to change the voting system would be on-brand, but making arbitrary other changes would be susceptible to an accusation of "that's not what the constitution is for", in the battle for public opinion.

Setting up financial products in such a way that the future government would be fiscally incentivised to follow through seems more promising, but might be more difficult to persuade current voters to go along with. Those inclined to oppose might find it easy to spread fear/doubt of anything too novel and unfamiliar; call it a trillion dollar future debt bomb or whatever. And you can try to explain that the "bomb" only goes off if the future government reneges on the current government's commitment, but "if you're explaining you're losing" and "you get about five words".

Comment by noggin-scratcher on Far-Future Commitments as a Policy Consensus Strategy · 2023-09-24T09:50:21.053Z · LW · GW

Would the established interests of 95 years hence not simply lobby for repeal of the law before it takes effect? It's generally difficult for the current legislature to thoroughly bind the hands of a future legislature.

And it seems to me that "people 100 years ago imposing a weird law that even they didn't want to be subject to themselves" would be an easy sell to quietly cancel.