Posts

Nonspecific discomfort 2021-09-04T14:15:22.636Z
Fixing the arbitrariness of game depth 2021-07-17T12:37:11.669Z
Feedback calibration 2021-03-15T14:24:44.244Z
Three more stories about causation 2020-11-03T15:51:58.820Z
cousin_it's Shortform 2019-10-26T17:37:44.390Z
Announcement: AI alignment prize round 4 winners 2019-01-20T14:46:47.912Z
Announcement: AI alignment prize round 3 winners and next round 2018-07-15T07:40:20.507Z
How to formalize predictors 2018-06-28T13:08:11.549Z
UDT can learn anthropic probabilities 2018-06-24T18:04:37.262Z
Using the universal prior for logical uncertainty 2018-06-16T14:11:27.000Z
Understanding is translation 2018-05-28T13:56:11.903Z
Announcement: AI alignment prize round 2 winners and next round 2018-04-16T03:08:20.412Z
Using the universal prior for logical uncertainty (retracted) 2018-02-28T13:07:23.644Z
UDT as a Nash Equilibrium 2018-02-06T14:08:30.211Z
Beware arguments from possibility 2018-02-03T10:21:12.914Z
An experiment 2018-01-31T12:20:25.248Z
Biological humans and the rising tide of AI 2018-01-29T16:04:54.749Z
A simpler way to think about positive test bias 2018-01-22T09:38:03.535Z
How the LW2.0 front page could be better at incentivizing good content 2018-01-21T16:11:17.092Z
Beware of black boxes in AI alignment research 2018-01-18T15:07:08.461Z
Announcement: AI alignment prize winners and next round 2018-01-15T14:33:59.892Z
Announcing the AI Alignment Prize 2017-11-04T11:44:19.000Z
Announcing the AI Alignment Prize 2017-11-03T15:47:00.092Z
Announcing the AI Alignment Prize 2017-11-03T15:45:14.810Z
The Limits of Correctness, by Bryan Cantwell Smith [pdf] 2017-08-25T11:36:38.585Z
Using modal fixed points to formalize logical causality 2017-08-24T14:33:09.000Z
Against lone wolf self-improvement 2017-07-07T15:31:46.908Z
Steelmanning the Chinese Room Argument 2017-07-06T09:37:06.760Z
A cheating approach to the tiling agents problem 2017-06-30T13:56:46.000Z
What useless things did you understand recently? 2017-06-28T19:32:20.513Z
Self-modification as a game theory problem 2017-06-26T20:47:54.080Z
Loebian cooperation in the tiling agents problem 2017-06-26T14:52:54.000Z
Thought experiment: coarse-grained VR utopia 2017-06-14T08:03:20.276Z
Bet or update: fixing the will-to-wager assumption 2017-06-07T15:03:23.923Z
Overpaying for happiness? 2015-01-01T12:22:31.833Z
A proof of Löb's theorem in Haskell 2014-09-19T13:01:41.032Z
Consistent extrapolated beliefs about math? 2014-09-04T11:32:06.282Z
Hal Finney has just died. 2014-08-28T19:39:51.866Z
"Follow your dreams" as a case study in incorrect thinking 2014-08-20T13:18:02.863Z
Three questions about source code uncertainty 2014-07-24T13:18:01.363Z
Single player extensive-form games as a model of UDT 2014-02-25T10:43:12.746Z
True numbers and fake numbers 2014-02-06T12:29:08.136Z
Rationality, competitiveness and akrasia 2013-10-02T13:45:31.589Z
Bayesian probability as an approximate theory of uncertainty? 2013-09-26T09:16:04.448Z
Notes on logical priors from the MIRI workshop 2013-09-15T22:43:35.864Z
An argument against indirect normativity 2013-07-24T18:35:04.130Z
"Epiphany addiction" 2012-08-03T17:52:47.311Z
AI cooperation is already studied in academia as "program equilibrium" 2012-07-30T15:22:32.031Z
Should you try to do good work on LW? 2012-07-05T12:36:41.277Z
Bounded versions of Gödel's and Löb's theorems 2012-06-27T18:28:04.744Z

Comments

Comment by cousin_it on The Duplicator: Instant Cloning Would Make the World Economy Explode · 2021-09-09T15:31:39.805Z · LW · GW

Okay, yeah, I had no idea that this much parallelism already existed. There could still be a reason for serial overhang (serial algorithms have more clever optimizations open to them, and neurons firing could be quite sparse at any given moment), but I'm no longer sure things will play out this way.

Comment by cousin_it on The Duplicator: Instant Cloning Would Make the World Economy Explode · 2021-09-09T09:35:19.867Z · LW · GW

Yeah, maybe my intuition was pointing a different way: that the brain is a physical object, physics is local, and the particular physics governing the brain seems to be very local (signals travel at tens of meters per second). And signals from one part of the brain to another have to cross the intervening space. So if we divide the brain into thousands of little cubes, then each one only needs to be connected to its six neighbors, while having plenty of interesting stuff going on inside - rewiring and so on.

Edit: maybe another aspect of my intuition is that "tick" isn't really a thing. Each little cube gets a constant stream of incoming activations, at time resolution much higher than typical firing time of one neuron, and generates a corresponding outgoing stream. Generating the outgoing stream requires simulating everything in the cube (at similar high time resolution), and doesn't need any other information from the rest of the brain, except the incoming stream.

Comment by cousin_it on The Duplicator: Instant Cloning Would Make the World Economy Explode · 2021-09-08T23:51:31.523Z · LW · GW

My point is, the whole "age of em" might well come and go in the following regime: many neurons per processor, many processors per em, few ems per data center. In this regime, adding more processors to an em speeds up their subjective time almost linearly. You may ask, how can "few ems per data center" stay true? First of all, today's data centers are like 100K processors, while one em has 100B neurons and way more synapses, so adding processors will make sense for quite a while. Second of all, it won't take that much subjective time for a handful of Von Neumann-smart ems to figure out how to scale themselves to more neurons per em, allowing "few, smarter ems per data center" to go on longer, which then leads smoothly to the post-em regime.

Also your mentions of clock speed are still puzzling to me. My whole argument still works if there's only ever one type of processor with one clock speed fixed in stone.

Comment by cousin_it on The Duplicator: Instant Cloning Would Make the World Economy Explode · 2021-09-08T17:43:20.747Z · LW · GW

To Amdahl's law - I think simulating a brain won't have any big serial bottlenecks. Split up by physical locality, each machine simulates a little cube of neurons and talks to machines simulating the six adjacent cubes. You can probably split one em into a million machines and get like a 500K times speedup or something. Heck, maybe even more than a million times, because each machine has better memory locality. If your intuition is different, can you explain?
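The "million machines, ~500K-fold speedup" claim can be sanity-checked against Amdahl's law. A minimal sketch - the parallel fraction here is an assumption chosen to match the comment's guess, not a measured property of brain simulation:

```python
def amdahl_speedup(parallel_fraction, n_machines):
    """Amdahl's law: overall speedup when only part of the work parallelizes."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_machines)

# If 99.9999% of the simulation parallelizes (a one-in-a-million serial
# residue), a million machines give roughly the 500K-fold speedup above.
print(amdahl_speedup(0.999999, 1_000_000))
```

So the comment's guess corresponds to an almost negligible serial bottleneck - which is exactly what "each cube only talks to its six neighbors" would imply.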

To overclocking - it seems you're saying parallelization depends on it somehow? I didn't really understand this part.

Comment by cousin_it on The Duplicator: Instant Cloning Would Make the World Economy Explode · 2021-09-08T12:02:06.535Z · LW · GW

Here's something that just came to my mind: simulating a human brain is probably very parallelizable, since it has a huge number of neurons, and each neuron fires a couple hundred times per second at most. So if you have some problem which is difficult but still can be solved by one person, it's probably more efficient to give it to one person running at 1000x speedup, not 1000 people at 1x speed who have to pay fixed costs to understand the problem and communication costs to split it up. And as computers get faster, the arithmetic keeps working - a 1M em is better than 1K 1K ems. So it seems possible that the most efficient population of ems will be quite small, one or a handful of people per data center. It's true that as knowledge grows, more ems are needed to understand it all; but Von Neumann was a living example that one person can understand quite a lot of things, and knowledge aids like Wikipedia will certainly be much cheaper to run than ems.
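The fixed-cost argument can be put in a toy model. All the numbers below are illustrative assumptions, not estimates - the point is only the shape of the comparison between one fast em and many slow ones:

```python
def time_to_solve(n_people, speedup, work=1000.0, fixed=10.0, comm_per_person=1.0):
    """Toy model: wall-clock time for n people, each running at `speedup`x.
    Everyone pays a fixed cost to understand the problem, the work splits
    evenly, and coordination overhead grows with team size (assumed linear)."""
    ramp_up = fixed / speedup
    shared_work = work / (n_people * speedup)
    coordination = comm_per_person * (n_people - 1)
    return ramp_up + shared_work + coordination

# Same total compute, spent two ways:
print(time_to_solve(1, 1000))   # one em at 1000x: fixed cost nearly vanishes
print(time_to_solve(1000, 1))   # 1000 ems at 1x: fixed + coordination costs dominate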

In the slightly longer perspective, I expect our handful of ems to come up with enough self-improvement tech, like bigger working memory or just adding more neurons, that a small population can continue to be optimal. No point paying the "fixed costs of being human" (ancestral circuitry) for billions or trillions of less efficient ems, if a smaller number of improved ems gives a better benefit-to-cost ratio.

So in short, it seems to me that the world with lots and lots of ems will simply never arrive, and the whole "duplicator" concept is a bit of a red herring. Instead we should imagine a world with a much smaller number of "Wise Ones", human-derived entities with godlike speed and understanding. They will probably be quite happy about their lot in life, not miserable and exploited. And since they'll have an obvious incentive to improve coordination among themselves as well, that likely leads to the usual singleton scenario.

I don't know if this argument is new, welcome to be shown wrong.

Comment by cousin_it on [deleted post] 2021-09-07T17:32:43.980Z

No way to remove it from the internet at this point, but the obvious thing to do for LW goodwill is to remove the post and replace it with an apology; I'm not sure why that hasn't happened yet.

Comment by cousin_it on I read “White Fragility” so you don’t have to (but maybe you should) · 2021-09-07T16:13:18.467Z · LW · GW

Well, it depends on which side of the conversation you focus on. One side is refusing to examine their blindspots - damning. The other side is calling it "resistance" when their arguments get rejected - more damning in my eyes; only shoddy salesmen and door-to-door religious folks do that.

Comment by cousin_it on I read “White Fragility” so you don’t have to (but maybe you should) · 2021-09-07T10:39:55.232Z · LW · GW

I think this book should be seen in a larger context: it would make sense for US white people to materially support the descendants of their slaves for three hundred years, but so far that just ain't happening (even the forty acres and a mule were given and then taken away). Why isn't it happening? Gotta strike the wall until the fists bleed! And so the book is another strike at the wall. Asking for money directly - the materialist tack - didn't work, so now comes a different intellectual tack instead.

And by rights, material restitution is what ought to happen. Ought to have happened in the first place. No? Maybe DiAngelo has a few grifter bones in her body, and the book itself should be thrown through a penthouse window rather than read; the question is historical injustice. "We can't fix all historical injustice!" Yeah, but you can fix this one. "But the left tried, and it's making things worse!" Correct. How do you propose to fix it? Answer the question.

Comment by cousin_it on LessWrong is providing feedback and proofreading on drafts as a service · 2021-09-07T07:52:05.682Z · LW · GW

Yeah, I was also wondering about the minimum requirement. It seems feedback would be most useful to people writing their first posts, and there's no limitation on making a first post, is there? In the AI Alignment Prize I tried to write feedback to everyone and it ended up being a very valuable experience, both for the participants and for me.

Comment by cousin_it on Three Principles to Writing Original Nonfiction · 2021-09-07T07:07:56.498Z · LW · GW

So, I think your message has two contradictory parts:

  1. Write from your own experience, in your personal voice, and so on

  2. Write like this! Remove X, remove Y, etc

This took me a while to realize, but (1) is the more valuable advice. Different people's personalities are tuned to different ways of writing, and any trick you learned elsewhere and try to layer "on top" (yes, even brevity) can get in the way. The goal of writing is partly to discover how you write. Not how you write at current skill level - that probably just sucks - but something more like an idealization of how you read.

There was some book that a friend once mentioned to me - I don't know the author or what the book is called, but an anecdote stuck with me. It's about a guy who's trying to learn opera singing, and keeps trying to find some imagined teacher, "the man with the voice of a red bull", a phrase that came to him in a dream. Quite an image for an idealized opera singer, don't you think? Well, he spends years and never realizes that the thing he was dreaming about was his own voice - in a potential future where he chased the art in exactly the right way.

To me that story sums up what art should be about. You've got to have a dream; it has to be your dream, and it will suggest ways to chase it. For some people the dream tells them to omit unnecessary words; to Nabokov it suggested something quite different. And to you personally, I guess the question is: are short sentences getting straight to the point really the most enjoyable thing about writing, for you? It's certainly the thing that Paul Graham likes, so more power to him, but you're allowed to like other things too.

Comment by cousin_it on Nonspecific discomfort · 2021-09-06T13:41:53.732Z · LW · GW

This is all very suspicious. Let's say I write a program for a robot that will gather apples and avoid tigers. So most of its hardware and software complexity will be taken up by circuitry to recognize apples, recognize tigers, move legs, and so on. There seems no reason why any measure of "symmetry" of the mental state, taken from outside, would correlate much with whether the robot is currently picking an apple or running from a tiger - or in other words, with pleasure or pain.

Maybe we have some basic difference from such robots, but I'd bet that we're not that different. Most of our brain is workaday machinery. If it makes "waves", these waves are probably about workaday functioning. If you're measuring anything real, it's probably not a correlate of consciousness at all, and more likely a correlate of how busy the brain is being at any moment. No?

Comment by cousin_it on Nonspecific discomfort · 2021-09-05T07:10:41.560Z · LW · GW

I don't know, meditation is very inward and mental, the opposite of the stuff I'd recommend. And people who meditate a lot tend to change their affect in a way that's kinda off-putting to me; while people who live "outward" in the way I describe tend to have a pretty attractive (to me) manner.

Comment by cousin_it on Nonspecific discomfort · 2021-09-04T23:02:21.599Z · LW · GW

I think the various kinds of soma (drink, TV, internet) are a different species of thing. They don't just pretend to solve the problem, they actually solve it, make nonspecific discomfort go away. Not in the most healthy way, but I think they can often distract people from worse things.

Comment by cousin_it on Nonspecific discomfort · 2021-09-04T21:46:42.799Z · LW · GW

Not an expert on Buddhism either, but I'm not sure that the feeling characterizes life itself. I feel none of it when baking a cake, solving an interesting math problem, or going down a waterslide :-) It could be that it characterizes a certain state of mind, but wouldn't that suggest we should spend less time in that state of mind?

Comment by cousin_it on Nonspecific discomfort · 2021-09-04T21:43:23.724Z · LW · GW

Spot on. I would add that even when you know the actual cause of some problem, doing unrelated healthy stuff can still help you relax, make yourself feel "larger" than the problem and less bothered by it.

Comment by cousin_it on Nonspecific discomfort · 2021-09-04T21:20:31.946Z · LW · GW

Spot on. Though, and this might sound awful, I'm not sure most overworked people are really that low on free time; it's more like, their free time is eaten up by some flavor of soma (alcohol, TV, internet and so on). Which is then another of the fake remedies we're talking about. You probably can't attack it directly by resolutions to quit and so on; the only strategy is to squeeze it out, by squeezing in some healthy time-spending by hook or by crook. That said, people with literally no free time do exist, and none of my smart advice will work for them - that's true too.

Comment by cousin_it on Nonspecific discomfort · 2021-09-04T20:59:43.305Z · LW · GW

Yeah, great point. I wanted to say something about weather specifically; I've added it to the latest edit of the post.

Just curious about "wishing you'd figured it out long ago" - at what age did you in fact figure it out? For me, I think I understood it instinctively from my twenties on, and it saved me from a lot of trouble.

Comment by cousin_it on We Live in an Era of Unprecedented World Peace · 2021-09-01T11:57:40.284Z · LW · GW

Yeah. The first chart seems strange too. It says the world in the 20th century had 6 violent deaths per 100K people per year, so the number of violent deaths should be (6 / 100K) × 100 years × average world population. Taking average world population as 4B, that comes out to 24M, which seems smaller than WWI + WWII + all other wars and mass killings.
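For anyone who wants to verify the comment's arithmetic - using the comment's own rough inputs, not exact historical data:

```python
# Inputs are the comment's assumptions: the chart's rate, a century,
# and a rough average 20th-century world population.
rate_per_100k = 6            # violent deaths per 100K people per year
years = 100
avg_population = 4e9         # ~4 billion

violent_deaths = rate_per_100k / 100_000 * years * avg_population
print(violent_deaths / 1e6, "million")  # ~24 million
```

So the implied century-long total is indeed about 24 million, which is the figure the comment finds suspiciously low.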

Comment by cousin_it on Petition To Make Inarticulate Downvoting More Difficult · 2021-09-01T11:14:40.348Z · LW · GW

Hmm. You joined three days ago, wrote a post, it got downvoted to -13 (I didn't downvote but also felt it was nonsense), several people explained to you why, you didn't understand their explanations, and now you write a petition to make inarticulate downvoting more difficult, which wouldn't help with your problem. Not a good start.

And honestly I think your feelings of "strong affinity" with LW aren't on a good path. Maybe revert to not-very-strong affinity, treat LW as a place to chat in a more relaxed way, and let affinity grow mutually? As one of my teachers in university said, when giving an A to a student who had attended each and every class: "Relationships should be symmetric".

Comment by cousin_it on [deleted post] 2021-08-31T22:09:25.568Z

Imagine we want to store $1B worth of energy for 100 years. In other words, we can spend $1B worth of resources today to create $1B worth of value, and store it in whatever form can retain all or most of it for 100 years. What are our best options?

Buy an oilfield. Oil has stayed in the ground for millions of years, it'll be fine for another century. And you'll get more energy from it in a century than you can get today for the same money, because extraction technology will have improved by then.

Comment by cousin_it on A Small Vacation · 2021-08-31T21:31:23.416Z · LW · GW

Let's put some numbers on it. The average US worker is backed by tens of thousands of dollars in physical capital; there's no other way to achieve high productivity. Multiply that by the number of people in the proposed country, and you'll get a sum that even a government won't spend willy-nilly. It's a far cry from "let's donate our vacant land to refugees, they will become highly productive for free". A better analogy would be "let's build a vacant city - housing, roads, power, water, stores, warehouses, factories and all the rest, for a million people - and invite a million refugees to fill it".

Comment by cousin_it on Antidotes to Number Numbness · 2021-08-31T15:36:01.948Z · LW · GW

I think physical size of things isn't the best mnemonic for numbers, because we get confused by the difference between length, area, and volume/weight. For example, a cruise ship is 60 times longer than a minivan (5m vs 300m) but 100000 times heavier (2T vs 200KT). So which number should jump to mind when imagining their relative sizes, 60 or 100000 or something in between?

My favourite way to imagine large numbers is to use units of time instead. A thousand seconds is about 15 minutes, a million seconds is about 10 days, and a billion seconds is about 30 years. It's really easy to remember.

It's especially vivid when thinking about population or death toll numbers. If 1 person = 1 second, then 9/11 was about an hour, the Afghan war was two days, the Vietnam war was a month, and WWII was two years. Covid is two months so far, AIDS was a year, the Spanish flu was two years, the Black Death was three years. NYC population is three months, US population is ten years, world population is two centuries and a half, all people who ever lived are three thousand years.
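The mnemonic is easy to mechanize. A sketch (the round numbers above shave a little off the exact values - a thousand seconds is closer to 17 minutes, a billion closer to 32 years):

```python
MINUTE = 60
HOUR = 3600
DAY = 86400
YEAR = 365.25 * DAY

def as_duration(n):
    """1 person = 1 second: turn a count into a rough duration string."""
    if n < HOUR:
        return f"{n / MINUTE:.0f} minutes"
    if n < DAY:
        return f"{n / HOUR:.0f} hours"
    if n < 60 * DAY:
        return f"{n / DAY:.0f} days"
    if n < 2 * YEAR:
        return f"{n / (30 * DAY):.0f} months"
    return f"{n / YEAR:.0f} years"

print(as_duration(1_000))          # ~17 minutes
print(as_duration(1_000_000))      # ~12 days
print(as_duration(1_000_000_000))  # ~32 years
```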

Comment by cousin_it on A Small Vacation · 2021-08-30T11:09:52.002Z · LW · GW

Building a country on vacant land requires a lot of capital per person, but refugees don't have much capital. Where will it come from?

Comment by cousin_it on Existential Angst Factory · 2021-08-11T08:19:41.901Z · LW · GW

It means that anything one strives for can actually be achieved

And then un-achieved, and the opposite achieved instead. So it's a reason for optimism and pessimism equally. To single out optimism, you need the universe to have some ratchet in the direction of good, but it seems hard to find. The only ratchet we did find (law of increasing entropy) doesn't seem too optimistic.

Comment by cousin_it on The Reductionist Trap · 2021-08-11T07:53:31.041Z · LW · GW

Philosophical reductionism is (sort of) the belief that complex systems act the way they do because of the simpler actions of simpler systems that make them up. I think that this is pretty solidly true in our universe, and I won’t discuss it further.

Is it true though? Fundamental physics seems to require more and more complex math.

You could say "physics can be approximated by some Turing machine, made of simple things like bits and state transitions", which sounds plausible, but then I'm not sure why we'd call it reductionism.

Comment by cousin_it on Fixing the arbitrariness of game depth · 2021-07-18T07:42:52.220Z · LW · GW

"How many Elo levels until diminishing returns on effort" seems like a sensible idea, and might work on humans as well as computers. But I still think coin-chess and poker show that using win probabilities to infer rating differences (as Elo does) isn't very meaningful, it's better to look only at the binary fact of whether Alice wins against Bob more than half the time.

Comment by cousin_it on Fixing the arbitrariness of game depth · 2021-07-17T20:59:02.388Z · LW · GW

This was one of my first ideas. It fixes the deca-chess bug, but not the coin-chess bug, I think.

Comment by cousin_it on Fixing the arbitrariness of game depth · 2021-07-17T20:57:38.039Z · LW · GW

Yes, my definition fixes some bugs in the standard definition, but not this bug in particular. I'm not sure it's even fixable without losing the intuition of "depth" as "number of distinguishable levels".

Comment by cousin_it on Fixing the arbitrariness of game depth · 2021-07-17T17:32:47.750Z · LW · GW

Yeah. Maybe it would make sense to pull in even more information - not just how well people play with effort vs without, but also how people improve over time with effort vs without. I'm not sure how to do this cleanly yet.

Comment by cousin_it on [Link] Musk's non-missing mood · 2021-07-14T06:59:07.509Z · LW · GW

Are you sure they aren't just trying to be tactful?

Comment by cousin_it on The SIA population update can be surprisingly small · 2021-07-08T11:04:11.875Z · LW · GW

Instant strong upvote. This post changed my view as much as the risk aversion post (which was also by you!)

Comment by cousin_it on Relentlessness · 2021-07-06T21:55:32.786Z · LW · GW

I hesitated before writing this post because I don’t know what is special about languages and childrearing—I can’t think of other obvious things in the category, though there are probably some.

Maybe the skills that are best learned by immersion and just doing them are those that were useful in the ancestral environment, and so come with a lot of preinstalled "machinery" in our bodies and minds that just need to be tapped into. It's easy to see why language acquisition and childrearing would be in this category; other examples are social skills, cooking, running, fighting. As opposed to skills like math or piano, where most people don't start out with the basic machinery, and need to laboriously install it by solving a hundred problems or playing scales over and over.

Comment by cousin_it on How much interest would there be in a fringe theories wiki? · 2021-06-29T13:07:17.625Z · LW · GW

Wow, that's a lot of ideas. Many are obviously wrong, but there are some interesting ones, like the portable airport.

Comment by cousin_it on Voicing Voice · 2021-06-28T14:46:05.304Z · LW · GW

I think your post expresses an important point of view, but the opposite point of view is equally important - that feelings don't mean that much. You can have lots of feelings, and be charming and assertive at expressing them, but still, at the end of the day, have nothing to say. While a mousy, stilted, boring person thinking and working day and night on something external to themselves will almost certainly end up with something worth saying. My favorite artist is M.C. Escher, whose "voice" was unmistakable, but he didn't need to assert himself or break free from fear or anything like that. He just found an external thing that interested him - patterns, mosaics, permutations and inversions of space - and worked on it as faithfully as he could.

Edit: Rereading what I wrote above, I'm not sure I subscribe to it fully. Both views have merit, and maybe those who sympathize with one would benefit most from the other and vice versa.

Comment by cousin_it on The Unexpected Hanging Paradox · 2021-06-27T08:44:55.949Z · LW · GW

I agree with resolving the paradox along these lines: the judge was simply making a false statement.

Maybe a more satisfying way to reach that conclusion is to use Gödelian machinery. Interpret the judge's statement S as saying "there is an integer N from 1 to 5, such that for any M from 1 to 5 the statement 'N=M' is not provable from the statement 'N>M-1 and S is true' and the axioms of PA". Since the self-reference in S happens within a nested statement about provability, S can be interpreted as a statement about integers and arithmetic, using an arithmetized definition of provability in PA, and using the diagonal lemma (quining) to get its own Gödel number. And then yup, S can be shown to be false, by an argument similar to the paradox itself.
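One hedged way to put S in symbols (my transcription of the comment's prose, with ⌜·⌝ for Gödel quotation and Prov_PA the arithmetized provability predicate; the exact handling of the numerals for N and M is glossed over):

```latex
S \;\leftrightarrow\; \exists N \in \{1,\dots,5\}\;\; \forall M \in \{1,\dots,5\}:\;
\neg\,\mathrm{Prov}_{\mathrm{PA}}\!\left(\ulcorner (N > M-1 \,\wedge\, S) \rightarrow N = M \urcorner\right)
```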

Comment by cousin_it on On the limits of idealized values · 2021-06-23T09:34:22.373Z · LW · GW

Very nice and clear writing, thank you! This is exactly the kind of stuff I'd love to see more on LW:

Suppose I can create either this galaxy Joe’s favorite world, or a world of happy puppies frolicking in the grass. The puppies, from my perspective, are a pretty safe bet: I myself can see the appeal.

Though I think some parts could use more work, shorter words and clearer images:

Second (though maybe minor/surmountable): even if your actual attitudes yield determinate verdicts about the authoritative form of idealization, it seems like we’re now giving your procedural/meta evaluative attitudes an unjustified amount of authority relative to your more object-level evaluative attitudes.

But most of the post is good.

R. Scott Bakker made a related point in Crash Space:

The reliability of our heuristic cues utterly depends on the stability of the systems involved. Anyone who has witnessed psychotic episodes has firsthand experience of consequences of finding themselves with no reliable connection to the hidden systems involved. Any time our heuristic systems are miscued, we very quickly find ourselves in ‘crash space,’ a problem solving domain where our tools seem to fit the description, but cannot seem to get the job done.

And now we’re set to begin engineering our brains in earnest. Engineering environments has the effect of transforming the ancestral context of our cognitive capacities, changing the structure of the problems to be solved such that we gradually accumulate local crash spaces, domains where our intuitions have become maladaptive. Everything from irrational fears to the ‘modern malaise’ comes to mind here. Engineering ourselves, on the other hand, has the effect of transforming our relationship to all contexts, in ways large or small, simultaneously. It very well could be the case that something as apparently innocuous as the mass ability to wipe painful memories will precipitate our destruction. Who knows? The only thing we can say in advance is that it will be globally disruptive somehow, as will every other ‘improvement’ that finds its way to market.

Human cognition is about to be tested by an unparalleled age of ‘habitat destruction.’ The more we change ourselves, the more we change the nature of the job, the less reliable our ancestral tools become, the deeper we wade into crash space.

In other words, yeah, I can imagine an alter ego who sees more and thinks better than me. As long as it stays within human evolutionary bounds, I'm even okay with trusting it more than myself. But once it steps outside these bounds, it seems like veering into "crash space" is the expected outcome.

Comment by cousin_it on How can there be a godless moral world ? · 2021-06-21T13:46:28.336Z · LW · GW

"Immoral" interactions between people are mostly interactions that reduce the total pie. So groups that are best at suppressing such interactions within the group (while maybe still allowing harm to outsiders) end up with the biggest total pie - the nicest goods, the best weapons and so on. That's why all Earth is now ruled by governments that reduce murder far below hunter-gatherer level. That doesn't explain all niceness we see, but a big part of it, I think.

Comment by cousin_it on Non-poisonous cake: anthropic updates are normal · 2021-06-18T22:04:28.058Z · LW · GW

Where are you on the spectrum from "SSA and SIA are equally valid ways of reasoning" to "it's more and more likely that in some sense SIA is just true"? I feel like I've been at the latter position for a few years now.

Comment by cousin_it on Reply to Nate Soares on Dolphins · 2021-06-10T15:28:02.862Z · LW · GW

I think the genealogical definition is fine in this case - once you diverge from fish, you're no longer fish, same as birds are no longer dinosaurs. But I would also add that Nate might not have been fully serious, and you tend to get a bit worked up sometimes :-)

Comment by cousin_it on Often, enemies really are innately evil. · 2021-06-07T20:26:14.392Z · LW · GW

So you think that between two theories - "evil comes from people's choices" and "evil comes from circumstances" - the former can't be "leveraged" and we should adopt the latter a priori, regardless of which one is closer to truth? I think that's jumping the gun a bit. Let's figure out what's true first, then make decisions based on that.

Comment by cousin_it on Often, enemies really are innately evil. · 2021-06-07T15:39:13.004Z · LW · GW

I think that theory is false. In an unconstrained wild west environment, an asshole with a gun will happily bully those who he knows don't have guns. And conversely, people have found ways to be good even in very constrained environments. Good and evil are the responsibility of the person doing it, not the environment.

Comment by cousin_it on What to optimize for in life? · 2021-06-06T09:52:05.352Z · LW · GW

One possible answer is "maximize win-win trades with other people", explained a bit more in this comment.

Comment by cousin_it on Alcohol, health, and the ruthless logic of the Asian flush · 2021-06-05T15:02:00.670Z · LW · GW

Wait, you don't know? Disulfiram implants are widely used in Eastern Europe.

Comment by cousin_it on Social behavior curves, equilibria, and radicalism · 2021-06-05T08:30:50.525Z · LW · GW

What a beautiful model! Indeed it seems like a rediscovery of Granovetter's threshold model, but still, great work finding it.

I'm not sure "radical" is the best word for people at the edges of the curve, since figure 19 shows that the more of them you have, the more society is resistant to change. Maybe "independent" instead?

Comment by cousin_it on Rationalists should meet Integral Theory · 2021-06-05T01:51:54.026Z · LW · GW

I agree that something like a math theorem can be independent from its author's life details. But Wilber is a philosopher of life, talking about human development and so on, and the people he holds up as examples again and again turn out to be abusers and frauds. There's just no way his philosophy of life is any good.

Comment by cousin_it on Rationalists should meet Integral Theory · 2021-06-05T00:22:01.608Z · LW · GW

I've read a lot of stuff from EST, Castaneda, Rajneesh and so on. Before my first comment on this post, I downloaded a book by Wilber and read a good chunk of it. It's woo all right.

But attacking woo on substance isn't always the best approach. I don't want to write a treatise on "holons" to which some acolyte will respond with another treatise. As Pelevin wrote, "a dull mind will sink like an iron in an ocean of shit, and a sharp mind will sink like a Damascene blade". It's enough that the idea comes from a self-aggrandizing "guru" who surrounds himself with identical "gurus", each one with a harem, a love for big donations, and a trail of abuse lawsuits. For those who have seen such things before, the first link I gave (showing the founder of the movement promoting a do-nothing quantum trinket) is already plenty.

Comment by cousin_it on Unattributed Good · 2021-06-04T22:22:07.882Z · LW · GW

Good post. I think the principle you attribute to Truman is originally from the Sermon on the Mount ("don't let your left hand know what your right hand is doing").

Comment by cousin_it on Rationalists should meet Integral Theory · 2021-06-04T21:36:13.455Z · LW · GW

Well, he's the founder and leader of the whole thing. Often referred to as the "Einstein of consciousness studies", as he describes himself.

He also enthusiastically promoted this guy (Ctrl+F "craniosacral rhythm"), this guy (Ctrl+F "wives"), and this guy (Ctrl+F "blood"). Are these examples of great people we'd gain?

Comment by cousin_it on Alcohol, health, and the ruthless logic of the Asian flush · 2021-06-04T21:04:15.714Z · LW · GW

Just kidding. It’s called disulfiram, and it was approved by the FDA in 1951.

Cute turnaround + mention of FDA = instant feeling of reading Scott.

Comment by cousin_it on What question would you like to collaborate on? · 2021-06-04T20:29:22.749Z · LW · GW

Specifically looking for conceptual shifts that allow you to do something better.