Posts

cousin_it's Shortform 2019-10-26T17:37:44.390Z · score: 3 (1 votes)
Announcement: AI alignment prize round 4 winners 2019-01-20T14:46:47.912Z · score: 80 (19 votes)
Announcement: AI alignment prize round 3 winners and next round 2018-07-15T07:40:20.507Z · score: 102 (29 votes)
How to formalize predictors 2018-06-28T13:08:11.549Z · score: 16 (5 votes)
UDT can learn anthropic probabilities 2018-06-24T18:04:37.262Z · score: 63 (19 votes)
Using the universal prior for logical uncertainty 2018-06-16T14:11:27.000Z · score: 0 (0 votes)
Understanding is translation 2018-05-28T13:56:11.903Z · score: 139 (47 votes)
Announcement: AI alignment prize round 2 winners and next round 2018-04-16T03:08:20.412Z · score: 153 (45 votes)
Using the universal prior for logical uncertainty (retracted) 2018-02-28T13:07:23.644Z · score: 39 (10 votes)
UDT as a Nash Equilibrium 2018-02-06T14:08:30.211Z · score: 36 (12 votes)
Beware arguments from possibility 2018-02-03T10:21:12.914Z · score: 13 (9 votes)
An experiment 2018-01-31T12:20:25.248Z · score: 32 (11 votes)
Biological humans and the rising tide of AI 2018-01-29T16:04:54.749Z · score: 55 (18 votes)
A simpler way to think about positive test bias 2018-01-22T09:38:03.535Z · score: 34 (13 votes)
How the LW2.0 front page could be better at incentivizing good content 2018-01-21T16:11:17.092Z · score: 38 (19 votes)
Beware of black boxes in AI alignment research 2018-01-18T15:07:08.461Z · score: 71 (30 votes)
Announcement: AI alignment prize winners and next round 2018-01-15T14:33:59.892Z · score: 166 (63 votes)
Announcing the AI Alignment Prize 2017-11-04T11:44:19.000Z · score: 1 (1 votes)
Announcing the AI Alignment Prize 2017-11-03T15:47:00.092Z · score: 155 (67 votes)
Announcing the AI Alignment Prize 2017-11-03T15:45:14.810Z · score: 7 (7 votes)
The Limits of Correctness, by Bryan Cantwell Smith [pdf] 2017-08-25T11:36:38.585Z · score: 3 (3 votes)
Using modal fixed points to formalize logical causality 2017-08-24T14:33:09.000Z · score: 3 (3 votes)
Against lone wolf self-improvement 2017-07-07T15:31:46.908Z · score: 30 (28 votes)
Steelmanning the Chinese Room Argument 2017-07-06T09:37:06.760Z · score: 5 (5 votes)
A cheating approach to the tiling agents problem 2017-06-30T13:56:46.000Z · score: 3 (3 votes)
What useless things did you understand recently? 2017-06-28T19:32:20.513Z · score: 7 (7 votes)
Self-modification as a game theory problem 2017-06-26T20:47:54.080Z · score: 10 (10 votes)
Loebian cooperation in the tiling agents problem 2017-06-26T14:52:54.000Z · score: 5 (5 votes)
Thought experiment: coarse-grained VR utopia 2017-06-14T08:03:20.276Z · score: 16 (16 votes)
Bet or update: fixing the will-to-wager assumption 2017-06-07T15:03:23.923Z · score: 26 (26 votes)
Overpaying for happiness? 2015-01-01T12:22:31.833Z · score: 32 (33 votes)
A proof of Löb's theorem in Haskell 2014-09-19T13:01:41.032Z · score: 29 (30 votes)
Consistent extrapolated beliefs about math? 2014-09-04T11:32:06.282Z · score: 6 (7 votes)
Hal Finney has just died. 2014-08-28T19:39:51.866Z · score: 33 (35 votes)
"Follow your dreams" as a case study in incorrect thinking 2014-08-20T13:18:02.863Z · score: 29 (31 votes)
Three questions about source code uncertainty 2014-07-24T13:18:01.363Z · score: 9 (10 votes)
Single player extensive-form games as a model of UDT 2014-02-25T10:43:12.746Z · score: 21 (12 votes)
True numbers and fake numbers 2014-02-06T12:29:08.136Z · score: 19 (29 votes)
Rationality, competitiveness and akrasia 2013-10-02T13:45:31.589Z · score: 14 (15 votes)
Bayesian probability as an approximate theory of uncertainty? 2013-09-26T09:16:04.448Z · score: 16 (18 votes)
Notes on logical priors from the MIRI workshop 2013-09-15T22:43:35.864Z · score: 18 (19 votes)
An argument against indirect normativity 2013-07-24T18:35:04.130Z · score: 1 (14 votes)
"Epiphany addiction" 2012-08-03T17:52:47.311Z · score: 52 (56 votes)
AI cooperation is already studied in academia as "program equilibrium" 2012-07-30T15:22:32.031Z · score: 36 (37 votes)
Should you try to do good work on LW? 2012-07-05T12:36:41.277Z · score: 36 (41 votes)
Bounded versions of Gödel's and Löb's theorems 2012-06-27T18:28:04.744Z · score: 32 (33 votes)
Loebian cooperation, version 2 2012-05-31T18:41:52.131Z · score: 13 (14 votes)
Should logical probabilities be updateless too? 2012-03-28T10:02:09.575Z · score: 12 (15 votes)
Common mistakes people make when thinking about decision theory 2012-03-27T20:03:08.340Z · score: 51 (46 votes)
An example of self-fulfilling spurious proofs in UDT 2012-03-25T11:47:16.343Z · score: 20 (21 votes)

Comments

Comment by cousin_it on Moral public goods · 2020-01-26T11:54:20.797Z · score: 3 (1 votes) · LW · GW

I think the example works fine with numbers like “the welfare effect of $1 is a hundred times larger for this poor person than that rich person”

I'm having some trouble working it out. Let's say there are 1000 nobles and 1000000 peasants; each noble has 10000 dollars, each peasant has 1 dollar, and peasants get 100x more utility per dollar. Then each noble's utility is simply 10000 + 1×100, because each of the 1000000 peasants has weight 1/1000000, which cancels out. Now let's take 1 dollar from each noble and distribute it to the peasants. That's 1000 dollars total, so 0.001 dollars per peasant. Now each noble's utility is 9999 + (1 + 0.001)×100, which is a decrease. What am I missing?
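
As a sanity check, here's the same arithmetic as a small Python sketch (just the hypothetical setup above, with names and numbers of my own choosing, not anything from the post):

```python
# A sanity check of the arithmetic above (illustrative only).
nobles, peasants = 1000, 1_000_000
marginal_value = 100  # peasants get 100x more utility per dollar

def noble_utility(noble_dollars, peasant_dollars):
    # Each peasant gets weight 1/1,000,000, which cancels against the
    # 1,000,000 peasants, leaving a single "average peasant" term.
    return noble_dollars + marginal_value * peasant_dollars

before = noble_utility(10_000, 1.0)
# Each noble gives $1: 1000 nobles -> $1000, spread over 1,000,000 peasants.
after = noble_utility(9_999, 1.0 + nobles * 1.0 / peasants)
print(before, after)  # 10100.0 before vs about 10099.1 after -- a net loss
```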

Comment by cousin_it on Have epistemic conditions always been this bad? · 2020-01-26T10:37:24.263Z · score: 4 (2 votes) · LW · GW

In the Soviet Union, when someone said or thought something wrong, the local party leaders would advise their employers, who would then remove them from the ability to work.

Were people in the USSR getting barred from their constitutional duty to work? I was born there and it sounds weird. You can say many other bad things but not this one.

I believe the major change is that now people lose their jobs due to online mobbing and rage mobs can cancel someone’s entire ability to avoid homelessness through Twitter assaults and Facebook campaigns.

Has anyone ended up poor or homeless due to cancel culture? I don't actually know any examples.

Comment by cousin_it on Moral public goods · 2020-01-26T01:15:13.652Z · score: 1 (2 votes) · LW · GW

If welfare is logarithmic in income, you can get huge utility by giving a dollar to someone who has almost no money. I think that's what makes the numbers work out in your example, and at the same time makes it unrealistic.

Comment by cousin_it on Reality-Revealing and Reality-Masking Puzzles · 2020-01-22T13:05:58.074Z · score: 3 (1 votes) · LW · GW

That's fair. Though I'm also worried that when Alice and Bob exchange beliefs ("I believe in global warming" "I don't"), they might not go on to exchange evidence, because one or both of them just get frustrated and leave. When someone states their belief first, it's hard to know where to even start arguing. This effect is kind of unseen, but I think it stops a lot of good conversations from happening.

While if you start with evidence, there's at least some chance of conversation about the actual thing. And it's not that time-consuming, if everyone shares their strongest evidence first and gets a chance to respond to the other person's strongest evidence. I wish more conversations went like that.

Comment by cousin_it on Reality-Revealing and Reality-Masking Puzzles · 2020-01-21T20:05:22.436Z · score: 3 (1 votes) · LW · GW

For some reason it's not as annoying to me when you do it. But still, in most cases I'd prefer to learn the actual evidence that someone saw, rather than their posterior beliefs or even their likelihood ratios (as your conversation with Hal Finney here shows very nicely). And when sharing evidence you don't have to qualify it as much, you can just say what you saw.

Comment by cousin_it on Reality-Revealing and Reality-Masking Puzzles · 2020-01-21T15:33:08.520Z · score: 15 (6 votes) · LW · GW

Seeing you write about this problem, in such harsh terms as "formerly-known-as-rationality community" and "effects are iffier and getting worse", is surprising in a good way.

Maybe talking clearly could help against these effects. The American talking style has been getting more oblique lately, and it's especially bad on LW, maybe due to all the mind practices: "I feel this," "I guess that," "I'd like to understand better..." For contrast, read DeMille's interview after he quit Dianetics. It's such a refreshingly direct style, as if he spent years mired in oblique talk and mind practices, then got fed up and flipped to the opposite, total clarity. I'd love to see more of that here.

Comment by cousin_it on Reality-Revealing and Reality-Masking Puzzles · 2020-01-17T08:48:45.099Z · score: 26 (10 votes) · LW · GW

To me, doing things because they are important seems to invite this kind of self-deception (and other problems as well), while doing things because they are interesting seems to invite many good outcomes. Don't know if other people have the same experience, though.

Comment by cousin_it on Repossessing Degrees · 2020-01-14T16:01:10.348Z · score: 3 (1 votes) · LW · GW

Maybe the most actionable path is to try influencing employers. For example, Google doesn't require college degrees for some jobs, but there's still a difference between not requiring degrees (but still preferring candidates with degrees) and ignoring degrees. One phrase to watch for is "degree or equivalent work experience". I'd be happier if Google's hiring process officially ignored degrees, and the same for other tech companies. But I don't know if anyone's campaigning for that kind of thing.

Comment by cousin_it on Is there a moral obligation to respect disagreed analysis? · 2020-01-11T10:55:36.911Z · score: 11 (4 votes) · LW · GW

Your question is pretty abstract, so my answer will be abstract too: if you're ready to own the negative consequences of your action should they arise, then take the action; otherwise don't. Whatever arguments you read here, they won't be a valid excuse for you later; it's on you.

Comment by cousin_it on Of arguments and wagers · 2020-01-11T10:25:52.435Z · score: 7 (4 votes) · LW · GW

So we can continuously raise the required stakes for each wager, until either (1) the market approximately clears

I think the resulting odds won't reflect the probability of anything, because they depend a lot on whether Alice or Bob is more risk-tolerant (=rich).

Also, it seems to me that your scheme works best for yes/no questions. For anything more complicated, Alice and Bob can cooperate to mislead Judy, which is especially scary in case of AIs. I'm not sure how to fix that problem: it seems to require a way for a non-expert to check the work of a malicious expert, not just adjudicate between two experts.

Comment by cousin_it on Principles of Disagreement · 2020-01-10T16:58:12.104Z · score: 3 (1 votes) · LW · GW

Yeah, I made a pointlessly longer calculation and got the same answer. (And by varying the prior from 0.5 to other values, you can get any other answer.)

Comment by cousin_it on Criticism as Entertainment · 2020-01-10T08:57:24.679Z · score: 9 (8 votes) · LW · GW

One of Eliezer's favorite writing tools is framing things as a two-sided conflict: atheism vs religion, MWI vs Copenhagen, Bayes vs frequentism, and even when presenting his views about AI he was always riffing off the absurdity of opposing views. That's fun to read and makes the reader care about the thing. I think it worked on us for basically the same reason that criticism-as-entertainment works.

Comment by cousin_it on In Favor of Niceness, Community, and Civilization · 2020-01-09T23:01:08.602Z · score: 8 (4 votes) · LW · GW

I'm not Scott, but I think he's arguing against using force to violate rights (kill people or shut them up). He's not against using force to defend rights (when someone tries to kill you or shut you up).

Comment by cousin_it on Circling as Cousin to Rationality · 2020-01-09T08:28:58.585Z · score: 14 (3 votes) · LW · GW

I think I see another drawback of these kinds of techniques: when someone criticizes your thing, your first thought is "let's analyze why the person said that", rather than "wait, is my thing bad?" It's worrying that the thing you're defending happens to teach that kind of mental move.

Comment by cousin_it on (Double-)Inverse Embedded Agency Problem · 2020-01-08T13:45:54.127Z · score: 7 (4 votes) · LW · GW

To me the problem of embedded agency isn't about fitting a large description of the world into a small part of the world. That's easy with quining, which is mentioned in the MIRI writeup. The problem is more about the weird consequences of learning about something that contains the learner.

Also, I love your wording that the problem has many faucets. Please don't edit it out :-)

Comment by cousin_it on Circling as Cousin to Rationality · 2020-01-08T12:11:20.212Z · score: 4 (2 votes) · LW · GW

Thank you for writing this. I was trying to express the same kind of feelings, but you did it better.

Comment by cousin_it on Circling as Cousin to Rationality · 2020-01-07T13:56:28.453Z · score: 5 (2 votes) · LW · GW

It seems you're thinking of Bob as someone who's already pretty assertive and just needs tactical advice, and in that case I agree it can be good advice. But for someone who's less assertive, they might interpret the advice as basically "be more meek", especially if there's pressure to follow it. For such people, I don't think the first exercise should involve lowering of boundaries. Instead it'd be something like practicing saying "no" and laughing in someone's face, until it no longer feels uncomfortable. Doing these kinds of things certainly helped me.

Comment by cousin_it on What is Life in an Immoral Maze? · 2020-01-06T15:24:39.073Z · score: 8 (2 votes) · LW · GW

Yeah, same here. Maybe Zvi is talking about non-tech. But I'd really like to hear where he's getting this. Because if it's all from a book, well, books can exaggerate, or talk about a different time and place than ours.

Comment by cousin_it on Circling as Cousin to Rationality · 2020-01-06T13:47:58.894Z · score: 4 (2 votes) · LW · GW

Yes, accusing someone of betrayal is costly in the short term. But letting betrayal slide is costly in the long term.

Comment by cousin_it on What is Life in an Immoral Maze? · 2020-01-05T17:57:25.964Z · score: 18 (7 votes) · LW · GW

Have you seen this kind of stuff first hand? I mean, it sounds super pessimistic and maybe not 100% accurate - I know a fair number of managers who seem to have fulfilling lives.

Comment by cousin_it on Underappreciated points about utility functions (of both sorts) · 2020-01-05T11:46:50.928Z · score: 4 (2 votes) · LW · GW

I think it makes sense to worry about value fragility and shoehorning, but it's a cost-benefit thing. The benefits of consistency are large: it lets you prove stuff. And the costs seem small to me, because consistency requires nothing more than having an ordering on possible worlds. For example, if some possible world seems ok to you, you can put it at the top of the ordering. So assuming infinite power, any ok outcome that can be achieved by any other system can be achieved by a consistent system.

And even if you want to abandon consistency and talk about messy human values, OP's point still stands: unbounded utility functions are useless. They allow "St Petersburg inconsistencies" and disallow "bounded inconsistencies", but human values probably have both.

Comment by cousin_it on Underappreciated points about utility functions (of both sorts) · 2020-01-05T11:05:56.298Z · score: 3 (1 votes) · LW · GW

Sure, but that's a reason to research consistent values that are close to ours, so we have something to program into a certain kind of FAI. That's why people research "idealizing values", and I think it's a worthwhile direction. Figuring out how to optimize inconsistent values could be another direction, they are not mutually exclusive.

Comment by cousin_it on Underappreciated points about utility functions (of both sorts) · 2020-01-04T19:42:02.422Z · score: 3 (1 votes) · LW · GW

Building FAI that will correctly optimize something inconsistent seems like an even more daunting task than building FAI :-)

Comment by cousin_it on Antimemes · 2020-01-04T14:10:33.154Z · score: 3 (1 votes) · LW · GW

I think both Template Haskell and MetaOCaml have some typechecking of macros.

Though I haven't found macros very useful or exciting. To me the most fun way to program is with simple code, but relying on a lot of implicit understanding. Something like what Kragen describes here:

Also, sometime around 1993, I read Robert Sedgewick's textbook, "Algorithms in C", which I borrowed from my father, Greg Sittler. It opened a whole new world to me. The programs in the book are all concise crystals of beauty, showing how a few lines of code can transform a pile of structs of integers and pointers into a binary search tree, or a hash table, or the minimal spanning tree of a set of points. It used only the basics of C, and did not rely on any libraries.

This was a revelation to me; there was beauty and magic in these programs, and it wasn't because they were calling on powerful libraries hidden, Wizard-of-Oz-style, behind curtains. They were just plain C code, and not very much of it, that "developed a set of operations that were not obviously implicit in the original set," to borrow Bruce Mills's phrase.

Comment by cousin_it on Circling as Cousin to Rationality · 2020-01-02T09:37:06.065Z · score: 11 (5 votes) · LW · GW

Orient towards your impressions and emotions and stories as being yours, instead of about the external world. “I feel alone” instead of “you betrayed me.”

"Alice betrayed Bob" contains some information that "Bob feels alone" doesn't contain, though. I don't think we should always discard such information.

Comment by cousin_it on Toy Organ · 2020-01-01T22:25:55.284Z · score: 3 (1 votes) · LW · GW

I think the most logical way to map chords to buttons would be based on the tonnetz: Bb Dm F Am C Em G and so on. That way every two neighboring chords share two notes.
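
A quick Python sketch of the shared-notes property (the chord spellings are mine):

```python
# Triads along the sequence above; every neighboring pair shares two notes.
chords = {
    "Bb": {"Bb", "D", "F"},
    "Dm": {"D", "F", "A"},
    "F":  {"F", "A", "C"},
    "Am": {"A", "C", "E"},
    "C":  {"C", "E", "G"},
    "Em": {"E", "G", "B"},
    "G":  {"G", "B", "D"},
}
names = list(chords)
for a, b in zip(names, names[1:]):
    print(a, b, sorted(chords[a] & chords[b]))  # always two common notes
```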

Comment by cousin_it on Perfect Competition · 2019-12-31T00:40:43.716Z · score: 8 (3 votes) · LW · GW

One could even argue this has already begun – that people in many nations having less than the replacement level number of children represents competitive forces so strong that young people no longer have the surplus to be comfortable having enough kids.

That doesn't sound right. Usually people in poorer countries have more kids, and poorer people in each country have more kids.

Comment by cousin_it on Yet another Simpson's Paradox Post · 2019-12-23T19:15:25.853Z · score: 11 (6 votes) · LW · GW

Here's the simplest explanation of Simpson's paradox that I know: take any two variables that are positively correlated, for example height and weight. Now consider all people whose height in cm + weight in kg equals 250. In that group, height and weight are negatively correlated. But all people can be divided into such groups :-)
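
Here's a minimal Python sketch of that construction, with made-up distributions, just to illustrate the sign flip:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two positively correlated variables, roughly "height" (cm) and "weight" (kg).
height = rng.normal(170, 10, n)
weight = height - 100 + rng.normal(0, 8, n)
print(np.corrcoef(height, weight)[0, 1])   # positive overall

# Group people by height + weight (rounded). Within one such group,
# weight is (nearly) 250 - height, so the correlation flips sign.
group = np.round(height + weight)
mask = group == 250
print(mask.sum(), np.corrcoef(height[mask], weight[mask])[0, 1])  # close to -1
```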

Comment by cousin_it on Book Recommendations for social skill development? · 2019-12-20T08:36:10.002Z · score: 1 (2 votes) · LW · GW

Well, someone who never learned music surely doesn't have enough knowledge or skills about what to do in a band, and will perform terribly on stage. Their problems, as you say, are wide. And yet, if such a person goes around asking "where do I start learning music", I just know that they're not that into the whole idea. If they were, they would've picked up a guitar and been playing already.

Comment by cousin_it on Counterfactual Mugging: Why should you pay? · 2019-12-19T22:01:45.083Z · score: 3 (1 votes) · LW · GW

No, wasn't planning. Go ahead and write the post, and maybe link to my comment as independent discovery.

Comment by cousin_it on Counterfactual Mugging: Why should you pay? · 2019-12-19T15:29:02.527Z · score: 4 (2 votes) · LW · GW

Yeah. I don't remember seeing this argument before, it just came to my mind today.

Comment by cousin_it on Counterfactual Mugging: Why should you pay? · 2019-12-19T13:09:00.793Z · score: 3 (1 votes) · LW · GW

I just thought of another argument. Imagine that before being faced with counterfactual mugging, the agent can make a side bet on Omega's coin. Let's say the agent who doesn't care about counterfactual selves chooses to bet X dollars on heads, so the income is X in case of heads and -X in case of tails. Then the agent who cares about counterfactual selves can bet X-5050 on heads (or if that's negative, bet 5050-X on tails). Since this agent agrees to pay Omega, the income will be 10000+X-5050=4950+X in case of heads, and 5050-X-100=4950-X in case of tails. So in both cases the caring agent gets 4950 dollars more than the non-caring agent. And the opposite is impossible: no matter how the two agents bet, the caring agent always gets more in at least one of the cases.
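
The same bookkeeping as a short Python sketch, for a few values of X (the function names and structure are mine):

```python
def noncaring(X):
    # Bets X on heads, never pays Omega, so gets nothing from Omega.
    return {"heads": X, "tails": -X}

def caring(X):
    # Bets X - 5050 on heads (equivalently 5050 - X on tails) and pays
    # Omega $100 on tails, so Omega pays $10000 on heads.
    bet = X - 5050
    return {"heads": 10000 + bet, "tails": -bet - 100}

for X in (0, 1000, 5050, 20000):
    n, c = noncaring(X), caring(X)
    print(X, c["heads"] - n["heads"], c["tails"] - n["tails"])
    # The caring agent comes out exactly $4950 ahead in both branches.
```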

Comment by cousin_it on Counterfactual Mugging: Why should you pay? · 2019-12-19T12:27:23.494Z · score: 16 (4 votes) · LW · GW

Let's imagine a kind of symmetric counterfactual mugging. In case of heads, Omega says: "The coin came up heads, now you can either give me $100 or refuse. After that, I'll give you $10000 if you would've given me $100 in case of tails". In case of tails, Omega says the same thing, but with heads and tails reversed. In this situation, an agent who doesn't care about counterfactual selves always gets 0 regardless of the coin, while an agent who does care always gets $9900 regardless of the coin.

I can't think of any situation where the opposite happens (the non-caring agent gets more with certainty). To me that suggests the caring agent is more rational.

Comment by cousin_it on Counterfactual Mugging: Why should you pay? · 2019-12-18T17:41:50.721Z · score: 3 (1 votes) · LW · GW

No, not like that. I think there is an argument for caring about counterfactual selves. But it cannot be carried out from the assumption that the agent doesn't care about counterfactual selves. You're just asking me to do something impossible.

Comment by cousin_it on Counterfactual Mugging: Why should you pay? · 2019-12-18T11:46:27.679Z · score: 3 (1 votes) · LW · GW

why can’t we just imagine that you are an agent that doesn’t care about counterfactual selves?

Caring about counterfactual selves is part of UDT, though. If you simply assume that it doesn't hold, and ask proponents of UDT to argue under that assumption, I'm not sure there's a good answer.

Comment by cousin_it on What determines the balance between intelligence signaling and virtue signaling? · 2019-12-17T12:22:53.964Z · score: 3 (1 votes) · LW · GW

Interesting idea, but not sure it cuts reality at the joints: 1) left philosophy is full of intelligence signaling, 2) the Russian revolution was opposed to traditional virtues like family.

I think civilization mainly depends on the idea that punishing people for disagreement is not ok, and that's the idea we should try to reinforce.

Comment by cousin_it on Approval Extraction Advertised as Production · 2019-12-16T13:15:15.509Z · score: 18 (8 votes) · LW · GW

YC improves a startup's chance of success, but that chance can be improved in other ways too; that doesn't make YC a gatekeeper or a scam.

Comment by cousin_it on Approval Extraction Advertised as Production · 2019-12-16T09:07:24.395Z · score: 5 (5 votes) · LW · GW

Most people seem happy with what they get from YC, so I'm not sure why you call it a scam.

Comment by cousin_it on Approval Extraction Advertised as Production · 2019-12-16T01:14:19.905Z · score: 2 (7 votes) · LW · GW

This matches the impression I’ve gotten from most of the people I’ve talked to about their startups, that Y-Combinator is singularly important as a certifier of potential, and therefore gatekeeper to the kinds of network connections that can enable a fledgling business—especially one building business tools, the ostensible means of production—get off the ground.

Maybe PG could reply that if your startup is good, YC isn't singularly important to it, so there's no point getting angry about gatekeeping?

Comment by cousin_it on Book Recommendations for social skill development? · 2019-12-14T22:11:48.703Z · score: 22 (8 votes) · LW · GW

Here's the thing though. When someone asks a vague question like "how do I learn music", to me that means they're not trying. Otherwise they'd have specific questions, like what's that chord in a given song, or how to pick across strings, or how to improve timing on the kick drum. If you have no such questions, just a vague "how to get better at social", you need to start trying and getting more specific questions, not reading ahead.

Comment by cousin_it on cousin_it's Shortform · 2019-12-04T17:17:20.809Z · score: 3 (1 votes) · LW · GW

Edit: no point asking this question here.

Comment by cousin_it on CO2 Stripper Postmortem Thoughts · 2019-12-04T10:55:44.473Z · score: 3 (1 votes) · LW · GW

Ah, silly me. Thanks!

Comment by cousin_it on CO2 Stripper Postmortem Thoughts · 2019-12-04T07:59:18.502Z · score: 3 (1 votes) · LW · GW

One human produces about 1 kg of CO2 in 24 hours. We can idealize a perfect CO2 stripper as a magic box that inhales air and spits it out at 0 ppm. If you want a steady-state concentration of 500 ppm for 2 people, then we can see how much air-flow is required to lock up 2 kg of CO2 in 24 hours. This comes out to about 100 cubic feet per minute. This is the bare minimum air flow for any CO2 stripper
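
(For reference, the quoted figure comes from arithmetic along these lines; a rough sketch with constants and assumptions of my own, which lands in the same ballpark:)

```python
# A rough sketch of the quoted calculation (assumptions and constants are mine).
co2_per_person = 1.0        # kg CO2 per person per day (quoted figure)
people = 2
target_fraction = 500e-6    # 500 ppm CO2 by volume in the intake air
co2_density = 1.8           # kg per cubic meter, approx., at room temperature
m3_per_ft3 = 0.0283

co2_to_remove = co2_per_person * people                        # kg per day
airflow_m3_per_day = co2_to_remove / (target_fraction * co2_density)
airflow_cfm = airflow_m3_per_day / m3_per_ft3 / (24 * 60)
print(round(airflow_cfm))   # on the order of 50-100 CFM, depending on assumptions
```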

Wait, why does stripping one person's CO2 require more airflow than breathing (0.3 cubic feet per minute)?

Comment by cousin_it on A Practical Theory of Memory Reconsolidation · 2019-12-02T13:47:23.050Z · score: 6 (3 votes) · LW · GW

Thank you! Not sure about writing a life advice column, that's not quite who I am, but if you're interested in anything in particular, I'll be happy to answer.

Comment by cousin_it on A Practical Theory of Memory Reconsolidation · 2019-12-01T13:04:25.358Z · score: 17 (5 votes) · LW · GW

Welcome to LW! I haven't been in your situation, but it feels like I could have been, if things turned out a bit differently. So take this for what it's worth.

I think a lot of it comes down to the way you talk and recount your feelings to others. It can feel either "spiky" or "rounded", like the kiki/bouba effect. For example, when you say you want to secure love and affection, that's you being honest, and also "spiky". These are not the same thing! There's a whole art form of expressing your feelings, even going to some very dark places, while still coming across as "rounded". Take the edge off of your word choices; understate things; allow a range of possible reactions; be kind to the feelings of the person you're talking to. It might feel like a restrictive filter, but to me it's liberating. I can just have a normal conversation with anyone, anytime, about anything that's in my head.

It's not quite the "route to love" that you're looking for, but it opens the door to some new connections, and sometimes you'll click with someone in a deeper way. Hope that made sense.

Comment by cousin_it on The Correct Contrarian Cluster · 2019-11-29T13:59:55.916Z · score: 15 (6 votes) · LW · GW

Eliezer's econ case is based on reading Scott Sumner's blog, so it's not very informative that Sumner praises Eliezer (Sumner accounts for 3 of the 4 endorsements you linked; the remaining one is anonymous).

Comment by cousin_it on Open-Box Newcomb's Problem and the limitations of the Erasure framing · 2019-11-28T17:12:30.588Z · score: 3 (1 votes) · LW · GW

I see, thanks, that makes it clearer. There's no disagreement, you're trying to justify the approach that people are already using. Sorry about the noise.

Comment by cousin_it on Open-Box Newcomb's Problem and the limitations of the Erasure framing · 2019-11-28T15:14:28.496Z · score: 3 (1 votes) · LW · GW

Well, the program is my formalization. All the premises are right there. You should be able to point out where you disagree.

Comment by cousin_it on Open-Box Newcomb's Problem and the limitations of the Erasure framing · 2019-11-28T14:19:35.584Z · score: 3 (1 votes) · LW · GW

I couldn't understand your comment, so I wrote a small Haskell program to show that two-boxing in the transparent Newcomb problem is a consistent outcome. What parts of it do you disagree with?

Comment by cousin_it on Open-Box Newcomb's Problem and the limitations of the Erasure framing · 2019-11-28T12:31:59.211Z · score: 3 (1 votes) · LW · GW

If you see a full box, then you must be going to one-box if the predictor really is perfect.

Huh? If I'm a two-boxer, the predictor can still make a simulation of me, show it a simulated full box, and see what happens. It's easy to formalize, with computer programs for the agent and the predictor.
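
Something like this minimal Python sketch, with all the details invented for illustration (it's not the Haskell program mentioned above):

```python
def agent(box_is_full):
    # A committed two-boxer: takes both boxes no matter what it sees.
    return "two-box"

def predictor(agent):
    # The predictor simulates the agent seeing a full box and fills the
    # real box only if the simulated agent would one-box.
    return agent(box_is_full=True) == "one-box"

box_is_full = predictor(agent)
print(box_is_full, agent(box_is_full))
# False two-box: the two-boxer simply faces an empty box, with no contradiction.
```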