
Comments

Comment by vanilla_cabs on What risks concern you which don't seem to have been seriously considered by the community? · 2020-10-28T21:51:52.147Z · LW · GW

I worry about rich and powerful sociopaths being able to do evil without consequences or even without being detected (except by the victims, of course).

Many methods used to avoid detection by the general population also work on the victims, including:

  • hiding the evil deed or casting doubt on its existence
  • removing knowledge of alternatives (silencing/redacting information about past and present alternatives), ending present alternatives
  • demonizing alternatives
  • guilt-tripping victims
  • gaslighting
Comment by vanilla_cabs on The Darwin Game - Round 1 · 2020-10-27T21:16:53.755Z · LW · GW

Edit: I realize I didn't understand your question, as I didn't connect your remark about 300-200 with lsusr's statement at the top of the post. You're confused about why CliqueZviBot won in the previous game, while I am about why it is still winning.

When a pair of clones face each other, one wins the tiebreaker on even rounds and the other on odd rounds. So yes, no bot should be able to consistently win this.

My comment before the edit:

I'm confused too, but I notice that in the previous game CliqueZviBot was not ahead on round 1. So it's not the same scenario happening twice.

The cause can't be the tiebreaker: the winner only gains the advantage of starting with 3-2, but then it alternates. Since the turn total is even, both bots score 250.
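
Spelled out, assuming the 100-turn matchup that the 250 figure implies: each clone plays 3 on half the turns and 2 on the other half, so

$$50 \times 3 + 50 \times 2 = 250$$

for both bots, whoever wins the opening tiebreak.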

The simplest explanation is just random chance. If we assume a high chance of uneven scoring among clones in the early rounds, due to some of them meeting Silly 0 bots and the like more often than others, then there's a 1/8 chance that the same bot comes out on top in both games, which is not too unlikely.

Comment by vanilla_cabs on The Darwin Game - Rounds 0 to 10 · 2020-10-24T14:24:08.071Z · LW · GW

I see, they're lumped with your bot in the red portion of the pie, and still running after 10 rounds.

Comment by vanilla_cabs on The Darwin Game - Rounds 0 to 10 · 2020-10-24T09:34:28.450Z · LW · GW

All clones behave exactly the same until round 90. Even the seed for the random number generator is the same.

All I can imagine is that a tiny difference in score due to facing different bots snowballs into a significantly different pie share due to the multiplicative effect that simon noted. There was a Silly 0 Bot. Any clone that was lucky enough to face it on round 1 gorged itself on score. Same thing with Silly 1 Bot and a few others. Since they disappeared fast, it's a one-time bump in score that cannot be averaged out over time.
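
One hedged way to see why such a bump never averages out (this is my reading of the replication rule, not something I've checked against the game code): if a clone's pool share updates roughly in proportion to its score each round,

$$w_{t+1}^{(i)} \propto w_t^{(i)} \cdot s_t^{(i)},$$

then two clones that score identically from round 2 onward keep whatever share ratio their round-1 windfall created, and the absolute gap between their slices keeps growing as the clique's overall share of the pool grows.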

Comment by vanilla_cabs on The Darwin Game - Rounds 0 to 10 · 2020-10-24T09:22:35.278Z · LW · GW

What are the names of your 2 vassal PasswordBots?

Comment by vanilla_cabs on The Darwin Game - Rounds 0 to 10 · 2020-10-24T09:05:09.120Z · LW · GW

Can you tell us who Insub is, and the story of your alliance with them?

Comment by vanilla_cabs on The Darwin Game - Rounds 0 to 10 · 2020-10-24T09:00:11.269Z · LW · GW

Wow!

I had expected there'd be around 8 bots in the clique and around 50 bots in total (though not that many sillyBots). But I never imagined we'd rise from 15% to more than 50% of the pool as early as round 10!

The cloneBots are not even attacking the other bots yet. Until round 10, they often back down to 2 in case of 3-3, and they play tit-for-tat in case of 3-2. From round 10 to round 60, they'll get progressively more greedy.
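
To illustrate the kind of schedule I mean, here's a hypothetical sketch (not the actual CloneBot payload; the 10 and 60 thresholds are the ones above, the linear ramp in between is just for illustration):

```python
# Hypothetical sketch of a "get greedier over time" schedule, not the
# real CloneBot code. Rounds 10 and 60 are the thresholds mentioned
# above; the linear interpolation between them is illustrative only.
def greediness(round_number, start=10, end=60):
    """Probability of holding at 3 instead of backing down to 2."""
    if round_number < start:
        return 0.0   # early rounds: accommodate, back down to 2
    if round_number >= end:
        return 1.0   # late rounds: fully greedy, hold at 3
    return (round_number - start) / (end - start)
```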

Would we fare better, worse, or the same if the rise in greediness was faster? I wanted to change it to 10->30, but ultimately didn't.

I had thought there would be more attackers in the initial pool. I spent a lot of time fine-tuning our behaviour against them (folding in the early rounds, then maintaining 3 more and more often later). It seems like it was mostly a waste of time.

On the other hand, the code to exploit 0-bots and the like was not wasted. Yum yum.

Now that the most easily exploitable sillyBots are out, it's gonna be a race with Multicore's bot. While we try to smother all the outsiders, Multicore will allow cooperators to survive while gaining score from them. If they survive long enough, we'll be the ones smothered.

I think there's a 70% chance we eliminate all non-clones/mimic by round 60. Even if we do, I expect Multicore to be bigger than the aggregate of the 2 next biggest at round 90 when the second phase begins (70%).

Comment by vanilla_cabs on No Causation without Reification · 2020-10-23T23:05:06.302Z · LW · GW

Whenever I read or think about causation, I wish language allowed us to make the distinction between the two types:

  • (A is sufficient to cause B): Too many bullet holes cause death.
  • (A is necessary to cause B): Lack of vitamin C causes scurvy.
Comment by vanilla_cabs on The bads of ads · 2020-10-23T12:04:02.781Z · LW · GW

What would good and ethical advertising look like? Maybe I decide that I want to be advertised to now, and go to my preferred advertising venue.

Epistemic status: throwing out ideas

Ads exist because people aren't aware of all the products that exist. At its best, an ad manages to link a product with a client who needs it. Ethics in advertising should focus on maximizing the chances of that, while minimizing side effects.

To me, the most obvious way to get that is for ads to only assert and imply true things. As a bonus, they would avoid shocking the receiver needlessly, or uglifying their spot too much.

On a system level, the medium displaying ads would rate their quality (based on the aforementioned criteria and others), and the duration an ad is displayed would be commensurate with its rating.

Comment by vanilla_cabs on The Darwin Game · 2020-10-20T17:44:48.817Z · LW · GW

Originally I was planning to not contact the clique and use the public source, but when Vanilla_cabs announced the source would not be public, I messaged them indicating willingness to join the clique. I’m not sure how suspicious the timing seemed to them, but I was included anyway.

I gave an early newcomer and a late newcomer an equal chance of trying to betray. Maybe I was wrong, and I should be mindful of that in the future.

Also, I felt like our numbers were dangerously close to the minimum (6), and I expected a couple of members to withdraw before I revealed the names (which did not happen). So in my mind I had no choice but to accept you.

I tried it in CloneBot, and… it worked!

Good job! My plan for security was to leave an obvious vulnerability, and entrust members who reported it with the task of looking for more subtle ones. Only Lanrian reported it, late in the week, and I didn't trust them enough because I was suspicious of their motive when they'd asked me to make our code easier to simulate (which, it turns out, they were honest about).

Comment by vanilla_cabs on The Darwin Game · 2020-10-20T17:32:39.005Z · LW · GW

That's an idea worthy of consideration, but in addition to the risk you raised, I also feared some members would have submitted invalid bots.

Comment by vanilla_cabs on The Darwin Game · 2020-10-19T16:07:10.939Z · LW · GW

I didn't know about __new__(), I only knew about redefining methods, so based on what you knew, your reasoning was correct.

I knew no one before starting the clique. Lanrian joined the same way as the others. If anything, Lanrian was suspicious because they insisted we put the random.seed() inside move() and make it pseudorandom so that simulators could accurately emulate our behaviour. The reason they gave was to better collaborate, and have the simulators play 2 against 3 instead of 3 against 3. I was mildly convinced and I am still suspicious of that move. They only reported the weakness late in the week, after you and philh passed on the chance to do so. But they did so soon after I showed them the code.
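
For readers who don't follow the simulator point, here's a minimal sketch of the idea as I understand it (my own illustration, not Lanrian's actual proposal; the move() interface shown is assumed): seeding from something the opponent can also compute makes our "random" choices reproducible by anyone simulating us.

```python
import random

# My own illustrative sketch of the "seed inside move()" idea, not real
# CloneBot code. Seeding from data the opponent can also compute (here,
# the turn number) makes the "random" choice deterministic, so a
# simulator running our source gets exactly the moves we will play.
class SeededBot:
    def __init__(self):
        self.turn = 0

    def move(self, previous_opponent_move):
        self.turn += 1
        random.seed(self.turn)        # reproducible, simulator-friendly
        return random.choice([2, 3])  # looks random, but is predictable
```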

I was really paranoid about this and I feel you could have used this somehow.

The secrecy on the members was used to:

  • prevent members and potential members from worrying if there were too few current members. That was the purpose I had in mind when I made that choice. A few days before the end I still was not sure we'd have enough people. I was also worried some members would drop out if we were too few. So the 2 members who joined in the last 2 days really helped.
  • avoid any collusion between members that would not include me, and more generally to receive any valuable information that members would like to share.

So I used that advantage only in a defensive way. But I did receive an offer that informed me about more offensive uses and impacted my payload, which I will elaborate on if the sender allows it.

Comment by vanilla_cabs on The Darwin Game · 2020-10-19T14:05:55.379Z · LW · GW

They asked if the code was airtight. "I don't see anything I want to flag."

And I saw right through that, my friend :D

As I said in my reply to Taleuntum, I left the weakness as a test to find someone I could trust to find sneakier weaknesses. Of you three who saw the code, only Lanrian reported it. "I don't see anything I want to flag." That's cute. To be more accurate, I wasn't sure you were hiding a security flaw, but I didn't have to be sure, since either way it meant I couldn't entrust you with the task. And the wording left me thinking you were hiding a security flaw with 80% credence. I thought about asking "Did you see anything worth flagging?", but decided against it.

Later, lsusr told me that that line would get me disqualified. I didn't say anything, in the hopes some clique member would wonder what it was for, include it in their bot just in case, and get disqualified.

I feel a little bad about all this, and hope Vanilla_cabs has no hard feelings.

Not at all, I just feel like I've dodged a big bullet. How come that line would get someone disqualified? Has lsusr been more specific?

Comment by vanilla_cabs on The Darwin Game · 2020-10-19T13:52:29.973Z · LW · GW

The first versions of CloneBot (the name of the program for our clique) did actually contain a mistake I could exploit (by defining the __new__() method of the class after the payload) and so this was my plan until Vanilla_Cabs fixed this mistake. After they fixed it, I didn't notice any way I can take advantage, so I joined the clique in spirit.

Little did you know that I was aware of this weakness from the beginning, and left it as a test to find whom I could trust to search for the weaknesses I didn't know about. Of the 3 (I think) to whom I showed the code early, only Lanrian reported it.
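
For readers curious what the hole looked like, here is a minimal reconstruction of the kind of exploit being discussed (not Taleuntum's actual code; the class names and move() interface are made up):

```python
# Minimal reconstruction of the __new__() exploit discussed above, not
# Taleuntum's actual code; names and the move() interface are made up.
# If the shared code only checks what comes *before* the payload, a
# payload that defines __new__ can hand back a completely different bot.
class DefectBot:
    def move(self, previous_opponent_move):
        return 3  # always defect

class CloneBot:
    # ... shared clique code that every member submits unchanged ...
    def move(self, previous_opponent_move):
        return 2  # placeholder for the real cooperative logic

    # --- payload: the member-specific section the checker ignores ---
    def __new__(cls, *args, **kwargs):
        # Instead of building a CloneBot, return a DefectBot: the shared
        # clique logic above never runs for this "member".
        return DefectBot()

bot = CloneBot()           # actually a DefectBot
print(type(bot).__name__)  # DefectBot
```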

I'm curious how believable my lies were, I felt them to be pretty weak, hopefully it's only because of my inside view.

I didn't play a simulator so I didn't care about the first.

About the second, I can tell you that another member alerted me that you seemed to have a hidden ally. They feared you had made an ally outside the clique, or had simply given the code to join the clique to a player known only to you. Which I thought was a possibility. Actually, I hoped for a few stowaways to boost our numbers.

Comment by vanilla_cabs on The Darwin Game · 2020-10-17T16:38:56.460Z · LW · GW

We were five hundred, but with swift support

Grew to three thousand as we reached the port

(Le Cid)

It's been an exciting week, I've had lots of fun, thanks everyone who shared ideas, and thanks lsusr for swiftly and kindly answering all my questions. Now is time for the final act.

  • arxhy
  • DaemonicSigil
  • Emiya
  • frontier64
  • Lanrian
  • Multicore
  • philh
  • simon
  • Taleuntum
  • Vanilla_cabs

You will receive a personal message shortly.

That is all.

Comment by vanilla_cabs on The Darwin Game · 2020-10-16T10:19:49.940Z · LW · GW

At least one member asked for a basic obfuscation measure. Publishing the code would defeat their purpose.

Also, from an insider's perspective, publishing the code now would only slightly increase our chances of getting another member before the end of admissions, while it would entail a significant risk of opponents adjusting their strategy against it. I should have decided on publication earlier, but honestly it was never a priority.

Comment by vanilla_cabs on The Darwin Game · 2020-10-15T22:19:21.555Z · LW · GW

I get that you don't like players joining forces. I'm not sure I'd allow coordination if I had a say in the rules. But per the rules, coordination is part of the game. That's it. For all we know, others are making cliques in secret.

I believe my scheme substantially increases our chances of winning, so I'll go with that.

Admissions are closing soon. Good luck, whatever you decide :)

Comment by vanilla_cabs on The Darwin Game · 2020-10-12T08:18:18.015Z · LW · GW

do you compare source code alphabetically, and favour X over Y on even rounds and Y over X on odd rounds?

Great idea! I've updated the following heuristic using that.

There is one thing that is different between the programs: the code that you will add to be executed in the later rounds (the payload). As I said, CloneBot will ignore it when judging whether its opponent is a clone. But if the opponent is a clone, it will use this heuristic to decide who goes first:

  • compare both payloads lexicographically
  • if the difference in length has the same parity as the round, the shorter plays 3
  • otherwise, the longer plays 3

This is fair, deterministic, and needs no turns to communicate. There's no point in tweaking your payload in the hope of starting with 3 more often. The only problem is ties, which are unlikely, and adding your name as a comment solves that.

Why also compare length? Because otherwise, the payloads of extreme length (very short or very long) would have very stable alternating priority, while the ones in the middle would be more subject to randomness. This way, it's the same randomness for everybody.
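
Here's how I'd sketch that heuristic in code (my own reconstruction for illustration, not the actual CloneBot source; it assumes both payloads are available as plain strings):

```python
# My own reconstruction of the priority heuristic, for illustration only,
# not the actual CloneBot code. It assumes both payloads are available as
# plain strings and that round_number counts the current round.
def i_open_with_3(my_payload, opponent_payload, round_number):
    # Order the two payloads: shorter first, with lexicographic order
    # breaking equal lengths (hence the advice to add your name as a comment).
    i_am_lower = (
        (len(my_payload), my_payload)
        < (len(opponent_payload), opponent_payload)
    )
    length_gap = abs(len(my_payload) - len(opponent_payload))
    if length_gap % 2 == round_number % 2:
        return i_am_lower      # same parity: the shorter side plays 3
    return not i_am_lower      # otherwise: the longer side plays 3
```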

Also, it may be a good idea to make the level of defection against outsiders depend on the round number. i.e. cooperate at first to maximize points, then after some number of rounds, when you're likely to be a larger proportion of the pool, switch to defecting to drive the remaining bots extinct more quickly.

That seems reasonable. I'm just worried that we might let greedy or even cooperating bots take too much of a lead. Ideally, as soon as the clique reaches critical mass, it should starve its opponents. The 'as soon' depends on what proportion of the pool we'll initially be.

Comment by vanilla_cabs on The Darwin Game · 2020-10-12T00:54:20.285Z · LW · GW

If you're participating in the contest and you want to win, I have a proposition for you:

Howdy! You've probably looked up Zvi's past Darwin game that directly inspired this one. A group of players formed a clique who recognized each other, cooperated among themselves and defected on everyone else. They nearly wiped the board, but they were preyed upon by a member who played both sides.

What they missed was a way to guarantee that all members apply the decided strategy. They had no way to enforce it.

But we have a way.

I call it CloneBot: a bot that checks that its opponent has the exact same code as itself. No way to cheat that! It guarantees that every member of the clique does the same thing. Moreover, there'll be a way to cooperate optimally, avoiding losing the first rounds to coordination. The clique are gonna be the most efficient cooperators.

But in the end we're all gonna tie, it's boring. I want to take a chance at winning!

So do I. This is why the clique are only going to collaborate until a predefined round. After we've eliminated the competition, we can have a dramatic showdown among ourselves. Cool, no? In the code, there's gonna be a separate function that's called only after a given round. The code checker will verify that the function is only called at the right time, but will ignore what is inside.
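
To make the mechanism concrete, here's a rough sketch of the idea (mine, for illustration; the real CloneBot and the way the game exposes source code differ in the details):

```python
# Rough illustration of the clone check, not the real CloneBot. It assumes
# we can obtain our own and the opponent's source code as strings, and
# that everything above a payload marker must match exactly.
PAYLOAD_MARKER = "# --- payload ---"

def shared_part(source_code):
    """Everything above the payload marker: the part that must be identical."""
    return source_code.split(PAYLOAD_MARKER)[0]

def is_clone(my_source, opponent_source):
    # Identical shared code means the opponent is bound by the same rules,
    # whatever member-specific code sits in its payload section.
    return shared_part(my_source) == shared_part(opponent_source)
```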

What will CloneBot do?

Depends on the proportion of clones. If we're a big group, CloneBot will straight up defect against outsiders by playing 3. Otherwise, CloneBot will cooperate, but make sure the opponent does not gain more than it does.

With its clones, CloneBot will cooperate by alternating 2-3. Who gets the first 3 will be determined fairly before the first turn starts.

Ok, but I don't want to find myself in a small clique that's going to lose.

You don't have to commit to submitting CloneBot right away. All you need to do is contact me to say you're in, conditional on the clique being large enough. By default, contact me privately. If you want, you can roll a D6, and if you roll 6, join me publicly right here.

A few days before the deadline, if we're 5 or fewer, I'll just announce that critical mass was not reached. But if we're more than 5, I will make public the names of all who contacted me to join the group and crush the competition. If I post your name at that moment, you must submit CloneBot.

I like the idea, but I wish X instead of Y.

Every detail is open to discussion for about 3 days. After that, this post will be updated, and every person who expressed their desire to join will be informed of the changes and asked to confirm their participation.

Where do I find CloneBot?

 I'm considering making the code public.

Let the best clique member win :)

Comment by vanilla_cabs on Fermi Challenge: Trains and Air Cargo · 2020-10-10T15:45:33.929Z · LW · GW

Low confidence, but let's play anyways:

q1:

5 million miles

q2:

For both 2009 and 2019: 300 million tons

By the way, for q2, what is considered cargo? Are passengers considered cargo? How about the crew? How about hand luggage of the passengers? Hand luggage of the crew? The meals and other consumables? (my answer assumed no for all these questions)

Comment by vanilla_cabs on Babble challenge: 50 ways to escape a locked room · 2020-10-10T14:40:48.175Z · LW · GW

#50 reminded me of another room escape...

Comment by vanilla_cabs on Babble challenge: 50 ways to escape a locked room · 2020-10-08T19:43:32.589Z · LW · GW

Took 1 hour 10 minutes.

1) call for help

2) search the room for the key

3) bash door

4) bash window

5) probe walls to find weak spots (knocking on different surfaces yields different sounds)

6) same for ceiling and floor

7) yell for help

8) hide to make the others believe you escaped (MacGyver style)

9) have a pizza delivered and run out when the door opens

10) find out the best solution on LW

11) smash the door one punch a day (also inspired by a series but won't spoil)

12) upload consciousness on a remote server

13) escape through virtual world (escapism)

14) make rope out of clothes to hang by the window

15) unscrew door lock with keys

16) lock has retinal scan, I have clearance

17) same with fingerprint, or other biometrics

18) same with subcutaneous chip

19) same with phone

20) hack clearance with phone (after spending 10 years learning)

21) collapse room with rhythmic pounding (mechanical resonance)

22) file the bars with the phone

23) squeeze through the cat flap

24) pick lock with my pen's spring (same 10 years learning)

25) pull out a wooden slat from the floor, then use it as lever on the door

26) mold some soap into the shape of the key

27) buy the room, get keys delivered

28) rent the room, forget to pay the rent, get evicted

29) I'm already escaping the whole outside world when locked in the room, so victory

30) remember the code

31) melt the lock with the electric power of the phone

32) melt the lock with the acid in the phone battery

33) start fire with phone battery to burn down wall (probably a very bad idea)

34) pull the power cables from the wall, cut the power from the electric lock

35) learn martial arts to better smash walls

36) solve the escape room

37) wait for the escape game to finish

38) go online, promise 1 million to whoever gets me out, then write a book about the experience and make 2 million

39) the room is a device to reverse entropy: use it, tuo teg neht

40) room is a terrace, technically I'm locked inside but I'm still outside

41) blow the lock with pressure

42) melt lock with acid pee

43) wait for whoever locked the room to show up then ambush

44) same but negotiate

45) refuse to get out, reverse psychology

46) remotely control a drone to open the door

47) remotely buy a replica of the key and have it delivered

48) remember where I put the key

49) the room is locked if trying to open from the outside, but opens from the inside

50) create a LW thread about escaping a room, hope someone gets the hint

Comment by vanilla_cabs on Babble challenge: 50 ways of sending something to the moon · 2020-10-02T20:23:48.472Z · LW · GW

I was positively surprised that I managed to find 50 ways, and for that discovery I'm grateful and looking forward to the next exercises.

Of course most of my answers are rubbish, but I did find some interesting ideas near the end. I find an idea can be interesting even if it is highly impractical or would actually be only a part of the solution at most (see answer 47). Conversely, an idea can give the full steps toward the solution and be completely useless because it's just a password (see answer 28).

1) rocket

2) moon elevator (spring-shaped)

3) giant spring

4) remotely controlled 3d printer (teleportation)

5) crash moon on earth (by slowing it down)

6) plane

7) breed space whales

8) mini-wormholes

9) alternate universe where the Earth is the moon and vice-versa

10) go back in time to when moon was ripped from the Earth

11) go back in time to put something in Apollo

12) miniaturize the Moon and put in on Earth

13) antigravity

14) spaceship

15) meet aliens then ask them to do it for us

16) engineer super-geniuses who find the best way to do it

17) blow up the moon via a laser then send something on a fragment that lands on Earth

18) grow trees that go to the moon

19) master out-of-body experiences to go to the moon

20) do it in dream/in a simulation

21) rename my garden 'the Moon' then send something to it

22) mail it

23) leave it on Earth; eventually, in 4 billion years, the sun will have absorbed both the thing and the Moon and hopefully some parts of both will mix

24) garden hose throwing water to rise

25) wind and light sails

26) cannon

27) orbital stations and a little push

28) telekinesis

29) move through higher dimension

30) use cosmic planet alignment to cancel gravity

31) air balloon then air propulsion

32) already be on the Moon

33) send it by hand then fail

34) if it's information, send it as waves

35) become immortal and wait until someone else does it

36) produce air until the atmosphere gets close enough to the moon to go by aircraft

37) leave it, there's an infinitesimal chance it gets on the moon by itself

38) blowdart

39) ask Lego builders to build a mountain that goes to the moon

40) make tunnel through Earth to fling it to the Moon

41) send it in thought, maybe it's enough

42) create a billion dollar prize for whoever does it first

43) use a Tesla car, fling around the sun

44) naturally select birds that can live in deep space

45) consecrate a temple to the moon and burn the thing to send as offering

46) elevator fixed in moon rather than Earth

47) super tornadoes to get outside atmosphere and get some propulsion

48) magnetohydrodynamics

49) plane that meets a rocket halfway

50) grow moon to greatly increase its gravity

Comment by vanilla_cabs on Puzzle Games · 2020-09-29T00:21:00.967Z · LW · GW

Free:

  • Hana no puzzle 1&2 (same vein as Jelly no puzzle)
  • Teleportower Plus (short, refreshing)
  • NAWNCO (browser-based, very short; Flash has become obsolete and it might not work on most browsers)
  • Illiteracy (browser-based, very short, by Le Slo)

Not free:

  • Into the Breach (by the creators of FTL)
  • The Bridge
  • Mushroom 11
Comment by vanilla_cabs on Puzzle Games · 2020-09-28T21:24:18.284Z · LW · GW

That's the reason I don't like it.

The game changes the nature of its puzzles abruptly. Players come for one kind of puzzle, the kind they see in the trailers. They (and I) are not prepared for or interested in the second kind. That's bad game design.

Comment by vanilla_cabs on Puzzle Games · 2020-09-28T21:19:31.314Z · LW · GW

Mild spoilers for The Swapper:

I also highly enjoyed The Swapper, but I'd add a warning: dark mood, do not play when feeling low.

Comment by vanilla_cabs on Honoring Petrov Day on LessWrong, in 2020 · 2020-09-26T19:44:56.965Z · LW · GW

I find the concept of Petrov Day valuable, and the principle of the experiment relevant, but doesn't the difference in stakes undermine the experiment? The consequence of entering the codes here was meaningful and noticeable, but it was nothing irreversible, lasting, or consequential.

When I walk in the streets everyday, dozens of car drivers who have a clear shot at running me over abstain from hurting, maiming or even killing me, and not a single one does it. That's what I call consequential. I'll celebrate that.

Comment by vanilla_cabs on Covid 9/10: Vitamin D · 2020-09-15T14:47:47.190Z · LW · GW
I don't think that's the case and I do happen to be a person who raised legitimate concerns about vaccines in the past.

So you don't mind being called an anti-vaxxer? Maybe in the US it's not a big deal, but in France where I am, you might as well be called a flat-earther.

We never know all the side-effects. We make decisions in uncertainty and have to think about the expected value of our decisions given the uncertainty that we have. 

Of course. That shouldn't keep us from doing our best to find out the side effects in the time we have, and to keep searching afterward. And to use that knowledge when weighing the benefit/risk ratio of the treatment.

Comment by vanilla_cabs on Why haven't we celebrated any major achievements lately? · 2020-09-15T14:36:44.583Z · LW · GW

Good point. The discussion too often revolves around the death/recovery opposition and forgets permanent damage.

Still, I'm not convinced a vaccine is necessary. HCQ seems to be an effective treatment when administered early, before the virus has done much damage, and for that reason the chance of permanent damage is probably lower than with patients who need to be hospitalized.

Comment by vanilla_cabs on Covid 9/10: Vitamin D · 2020-09-14T08:29:34.572Z · LW · GW

Thanks for voicing some of the things I thought better than I ever could.

I've noticed a trend on LW of cheap jabs at "anti-vaxxers". To me this seems like a partisan label which just makes it harder to voice legitimate concerns about vaccines. As with any medical treatment, we should ask:

  • how bad is the disease it's meant to ward off?
  • how much is it gonna cost? (and who's gonna pay)
  • how efficient is it?
  • do we know the side effects?

That said, AstraZeneca abandoning a vaccine over one patient with an adverse reaction seems absurd. I notice I am confused, so I wouldn't be surprised if that wasn't the whole story.

Prediction: even if vitamin D (or HCQ) is proven to greatly reduce mortality, I don't think there will be any consensus outside scientific circles. The matter has been far too politicised for any side to back out.

Comment by vanilla_cabs on Beautiful Probability · 2020-09-08T13:57:50.030Z · LW · GW

Maybe. But to assume any of that, you would need additional knowledge. In the real world, in an actual case, you might have checked that there are 19 other researchers who used the same approach and that they all hid their findings. Whatever that additional knowledge is that allows you to infer 19 hidden motivated researchers where only 1 is given, that is what gives you the ≈1% result.

Comment by vanilla_cabs on How strong is the evidence for hydroxychloroquine? · 2020-09-04T01:23:29.802Z · LW · GW

The paper was retracted.

Comment by vanilla_cabs on Why haven't we celebrated any major achievements lately? · 2020-08-21T06:35:58.790Z · LW · GW

Why celebrate? Deaths from COVID have plummeted. The epidemic crisis is over. A vaccine is unnecessary.

Comment by vanilla_cabs on Lost Purposes · 2020-04-25T13:46:24.818Z · LW · GW

All the previous points are interesting, but I think they're beside the point that EY (and probably MM) is trying to make.

It is not about conflicting terminal values. It is about never losing sight of terminal value(s) behind the current instrumental value(s) one is pursuing.

You don't parry for the sake of parrying; you have (a) further purpose(s). Same for opening car doors or rooting out biases.

Comment by vanilla_cabs on Yudkowsky vs Trump: the nuclear showdown. · 2019-10-13T08:48:28.023Z · LW · GW
Also I agree with you that the "preserve every pulse" kind of thinking could lead to an impractical situation , but I also think that the correct approach for this issue is the "in medio stat virtus" approach being something like "If you create damages to society which are greater than your contribution to it for a continued period of e.g. 5 years" your life would not be worth preserving

Do you realize that under such guidelines, one could easily make the case for eradicating most unemployed people? I'm pretty sure that's not your goal here.

So are you seriously claiming that you can't see the correlation between number of humans alive on the Earth and average quality of life and progress achieved by our specie?

I can see the correlation, but I think you have the causation backwards. The case for progress and quality of life leading to increases in human population seems much more straightforward to me. In my simplified model, progress is increased production. Quality of life is production per capita. But when quality of life rises, the birth rate rises and the death rate drops, until the human population has absorbed most of the additional production and people are just slightly better off than before.

Comment by vanilla_cabs on Mistake Versus Conflict Theory of Against Billionaire Philanthropy · 2019-08-31T14:39:05.680Z · LW · GW

I can only reply for myself: around 60%.

Now you could contact RR and ask him the same question.

In any case, how do you interpret the answer?

Comment by vanilla_cabs on Mistake Versus Conflict Theory of Against Billionaire Philanthropy · 2019-08-03T08:58:05.642Z · LW · GW

Ok, I'm getting a feel of how you come to your conclusions.

My perspective here comes from public choice theory.

Any good reads to learn the basics?

It seems to me that if one could leverage more than one's own share of taxes, then that would constitute a unilateral use of power, because the state is using force to collect taxes, and directing other people's tax money essentially means you're forcing them to spend their money in a way that you want. But if you're only leveraging your own share of taxes, then it just means that the state is not forcing you to spend money the way that it wants.

That's just another way to describe the same facts. I call it everyone's tax money because in my mind, taxes are pooled. When the state refunds someone, it scoops money from that pool without regard to whom it came from. You see it as a bank vault with separate boxes for each taxpayer. In your view, it's true that the billionaire only leverages their own tax money; but by doing so they escape taxes, and the critical point is that they do so more than the layman does. Different perspective, same result.

But maybe by "use of power" you mean something besides "use of force"? If so, what? (The only other thing I can think of is "use of money or other resources" but that seems to cover way too much.)

I did mean the latter, as RR did when he said: Philanthropy can be an exercise of power, and even if it's unsubsidized philanthropic power, we still are required to scrutinize its deployment.

Also he said "independent of a tax break [...] potentially to be rejected if it’s not." Do you know what he meant by "rejected" here? Just "criticized", or something stronger like "banned"?

I think the latter. Considering his example just above, I interpret it to the effect that the rule forbidding citizens to send money to the police or the army should be extended to philanthropy in some cases, especially when those cases should be or used to be the duty of the state (like the example he gives about schools).


Comment by vanilla_cabs on Mistake Versus Conflict Theory of Against Billionaire Philanthropy · 2019-08-02T16:01:41.826Z · LW · GW
It seems equally valid to say that donors are only leveraging their own tax money, because donations can only reduce your tax bill to zero (or not even that because only donations up to half of your income is tax deductable), and not to a negative number.

Let's say that a bunch of people owe me money. If I give a discount to one of them, clearly, that discount is a present. It's money I give to that person.

The way I see it, when someone gives $100 to a charity with a 40% tax deduction, what actually happens is that the person gives $60 to the charity, and the state matches that with $40 of taxpayers' money. The fact that the state's gift is limited to the amount of the person's taxes is irrelevant to the nature of the transaction.

As Rob Reich concludes:

So the citizens of the United States are collectively subsidizing, through foregone tax collection, the giving preferences of the wealthy to a much greater degree than the giving preferences of the middle class or poor. And, of course, the giving preferences of the wealthy are not a mirror of the giving preferences of all people.

Worried about what? That there's some kind of slippery slope where billionaire philanthropy starts a process that eventually causes us have a non-democratic form of government, or that “a plutocratic element in a democratic setting” is bad even if there is no risk of that?

Certainly, I see that plutocratic element as an erosion of democracy. But it's not the only one. The whole electoral system is already bad enough; the leaders, elected and unelected, are unaccountable, and generally unwilling to even discuss a lot of measures that the majority of voters ask for. Using our taxes to finance some rich guy's pet charity is just another nail in the coffin.

the "plutocratic element" is trying and succeeding in solving a bunch of problems that our democracy is failing to solve.

Democracy is certainly not the most expedient system. But it has arisen because history has taught us to be wary of forms of power that are too expedient. The point of democracy is precisely to have safeguards against unilateral use of power.

Reich doesn't want to outlaw billionaire philanthropy. All he says is that it shouldn't be subsidized with taxpayers' money, and that it should be closely scrutinized before we roll out the red carpet. I only see good practice here.

Edit: last-minute idea. Billionaire philanthropists probably do a whole lot of good. But giving them credit for all of it would mean comparing against a hypothetical world where billionaire philanthropy is replaced by nothing. But we don't know. We might have a world where good charity is done another way, maybe even better. In any case, even if you think Reich's charitable credit would do worse, only the difference should be credited to our current system.

Comment by vanilla_cabs on Mistake Versus Conflict Theory of Against Billionaire Philanthropy · 2019-08-02T10:42:26.590Z · LW · GW

Please note: I'm writing this not to denounce, but to try to understand a mode of thinking that I am unfamiliar with.

For once I find myself at odds with the common sentiment here. I'm one of those people who are convinced neither by Scott Alexander nor the OP.

Among other points, I fear that, if we do as they say, we'll start self-censoring our speech about billionaires' donations; over time and through the halo effect, this could lead to social censoring of any criticism of billionaires. I can already see it in the way SA uses the loaded phrase "attacks on billionaire philanthropy" rather than "criticism of billionaire philanthropy".

If I had to venture a guess, I'd say the difference between us is that most LW posters are probably closer to billionaires, geographically, socially, and in their values, than I am. Maybe they are not worried because they can relate to billionaires in ways that I can't.

There is no denying that through tax rebates the donors are leveraging everyone's tax money. This is "a plutocratic element in a democratic setting," as Rob Reich says. The fact that it worries no one here makes me wonder: would you prefer another form of government over democracy?

Again I'm not trying to corner you into breaking a taboo. I'm legitimately curious.

Comment by vanilla_cabs on Paternal Formats · 2019-07-16T19:59:17.468Z · LW · GW

I feel like the more paternal a format gets, the more it allows for varied and complex articulations between individual elements. In order of increasing expressivity:

  • bullet list/hyperlink: all elements share the same relation
  • blueprint/diagram/chart: allows displaying multiple types of relations between elements, but each must be separately defined in the key
  • story/video: allows the full use of language's power to articulate the elements

I'd choose my format based on that.

Comment by vanilla_cabs on Beautiful Probability · 2019-07-10T21:22:21.938Z · LW · GW

But that's the thing: P(observation|researcher 1) = P(observation|researcher 2)

The individual patient results would not change whether it is researcher 1 or 2 leading the experiment. And given the 100 results that they had, both researchers would (and did) proceed exactly the same.
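
Spelled out, assuming each patient is cured independently with probability $\theta$ (the standard reading of the example), the probability of the exact sequence of $s$ cures among $n$ patients is

$$P(\text{sequence} \mid \theta) = \theta^{s}(1-\theta)^{\,n-s}$$

under either researcher's design, as long as the decision to stop depends only on the data already seen. So the plan in the researcher's head never enters the likelihood.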

Comment by vanilla_cabs on The Blue-Minimizing Robot · 2019-07-09T13:30:23.644Z · LW · GW

So it would be [I SEE BLUE AND I TRY TO SHOOT].

... except that it wouldn't mind if shooting itself damaged its own program so that it wouldn't even try to shoot if it saw blue anymore.

Ok, I am inclined to agree that its behaviour can't be described in terms of goals.

Comment by vanilla_cabs on The Parable of the Dagger · 2019-06-17T14:18:16.769Z · LW · GW
What, it doesn't count as a lie if it's in writing? That's a hell of a system of contract law they've got in this allegorical kingdom.

I have a different answer to this than the ones given so far:

It's a question of implicit conventions. The king's challenge follows and mimics the jester's challenge. In the jester's challenge, the jester makes a statement about the truth value of the inscriptions on the boxes. By doing this, he sets the precedent that the inscriptions on the boxes are part of the game and do not put the game maker's honesty on the line. The inscriptions can be true or false, and it's part of the challenge to figure out which is which. Only the jester's own words put his honesty on the line. If he lied, the challenge would be rigged.

The king mimics the jester's setup, but makes no statement about the truth value of the inscriptions on the boxes. That difference should have seemed suspicious to the jester. He should have asked the king whether the statements were logical. The king could have lied, but at that point, if the king was ready to lie, then he'd probably kill the jester even if he found the key.

Comment by vanilla_cabs on Transhumanism as Simplified Humanism · 2019-06-11T14:25:06.785Z · LW · GW

5) status quo bias.

Most people will change their minds the moment the technology is available and cheap. Or rather, they will keep disliking the idea of 'immortality' while profusely consuming anti-aging products without ever noticing the contradiction, because in their minds these will belong in two different realms: grand theories vs everyday life. Those will conjure different images (an ubermensch consumed by hubris vs a sympathetic grandpa taking his pills to be able to keep playing with his grandkids). Eventually, they'll have to notice that life expectancy has risen well above what was traditionally accepted, but by then that will be the new status quo.

6) concern about inequalities. The layman has always had the consolation that however rich and powerful someone is, and however evil they are, at least they die like everyone else eventually. But what will happen when some people can escape death indefinitely? It means that someone who has accumulated power all his life... can keep accumulating power. Estates will no longer be split among heirs. IMO, people would be right to be suspicious that such a game-changing advantage would end up in the hands of a small super-rich class.

7) popular culture has always envisioned the quest for immortality as a Faustian bargain. This conditions people against seeing life extension as harmless.

Comment by vanilla_cabs on Degrees of Freedom · 2019-04-04T11:26:10.718Z · LW · GW
Proponents of ideas like radical markets, universal basic income, open borders, income-sharing agreements, or smart contracts (I’d here include, for instance, Vitalik Buterin) are also optimization partisans.


In the case of UBI, what is optimization from the viewpoint of the decision makers is freedom from the viewpoint of those affected by the decision.

After all, money is needed to fulfill the basic needs of life in society. Without UBI, ordinary people are forced to look for money on the job market, where they are perpetually reminded that they must prove their usefulness by joining a group of sufficient efficiency on the global market (a company).
On the other hand, UBI frees these people to pursue their own, possibly wildly creative goals, however inefficient these are deemed by others.

So I'm thinking: maybe freedom is a limited (and highly valued) good. If some have the leeway to make arbitrary decisions, then necessarily others don't. I need airplanes to be very reliable so that I can travel at my fancy. Freedom is built on top of reliability (which is equivalent to optimization in this context). Even at the individual level: I'm free to do what I want today because my body is highly optimized to obey my mental commands.

This idea seems to pervade your article (e.g. when you mention corruption as a typical sign of freedom), but it wasn't really made explicit anywhere.