Posts

The evolution of superstitious and superstition-like behaviour 2017-06-23T16:14:02.589Z
Where do hypotheses come from? 2017-06-11T04:05:14.894Z
IEEE released the first draft of their AI ethics guide 2016-12-14T13:30:42.825Z

Comments

Comment by c0rw1n on COVID Skepticism Isn't About Science · 2022-01-01T21:51:18.741Z · LW · GW

I'd bet at odds of at least 1:20 that lung scarring and brain damage are permanent.

Comment by c0rw1n on Reasons against anti-aging · 2021-01-25T08:09:10.609Z · LW · GW

please go read the most basic counterarguments to this class of objections to anti-aging at https://agingbiotech.info/objections/

Comment by c0rw1n on Learning magic · 2019-06-09T20:23:31.160Z · LW · GW

In my experience as a subject of hypnosis, I always have a background thought that I could choose not to do/feel the thing, and that I am choosing to do/feel it as I'm told. I distinctly remember feeling that background thought there, before choosing to do, or letting myself feel, the thing I'm told. It is still surprising how much, and how many things that are usually subconscious, can be controlled through it, though.

Comment by c0rw1n on The Very Repugnant Conclusion · 2019-01-18T18:51:42.573Z · LW · GW

If your theory leads you to an obviously stupid conclusion, you need a better theory.

Total utilitarianism is boringly wrong for this reason, yes.

What you need is non-stupid utilitarianism.

First, utility is not a scalar number, even for one person. Utility and disutility are not the same axis: if I hug a plushie, that is utility without any disutility; if I kick a bedpost, that is disutility without utility; and if I do both at the same time, neither ends up compensating for the other. They are not the same dimension with the sign reversed. This is before going into the details where, for example, preference utilitarianism is a model in which each preference is its own axis, and so is each dispreference. Those axes are sometimes orthogonal and sometimes trade off against each other, a little or a lot. The numbers are fuzzy and imprecise, and the weighting of the needs/preferences/goals/values also changes over time: for example, it is impossible to maximize for sleep, because if you sleep all the time you starve and die, and if you maximize for food you die of eating too much or of never sleeping, or whatever. We are not maximizers, we are satisficers, and trying to maximize any one need/goal/value by trading it off against all the others leads to a very stupid death. We are more like feedback-based control systems that need to keep a lot of parameters within good boundaries.
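
A toy sketch of that last point, with made-up needs and bounds (nothing here is a real model of anything; it is only meant to show the shape of "satisficer" versus "maximizer"):

```python
# Toy sketch (hypothetical needs and bounds): a satisficer keeps every
# parameter inside its acceptable band instead of maximizing a single scalar.
bounds = {"sleep": (6.0, 9.0), "food": (1500, 3000), "social": (1.0, 8.0)}  # made-up units
state  = {"sleep": 4.5, "food": 2000, "social": 0.5}

def out_of_bounds(state, bounds):
    """Return the parameters that currently need correcting, control-system style."""
    return {k: ("raise" if v < bounds[k][0] else "lower")
            for k, v in state.items()
            if not (bounds[k][0] <= v <= bounds[k][1])}

print(out_of_bounds(state, bounds))  # {'sleep': 'raise', 'social': 'raise'}
# A maximizer of any single key (sleep alone, say) would happily push the other
# keys out of their bands; the satisficer only cares that all of them stay inside.
```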

Second, interpersonal comparisons are somewhere between hazardous and impossible. Going back to the example of preference utilitarianism, people get different levels of enjoyment from the same things (in addition to those degrees also changing over time intrapersonally).

Third, there are limits to the disutility that a person will endure before it rounds off to infinite disutility. Under sufficient torture, people will prefer to die rather than bear it for any longer length of time; at that point, it can be called subjective infinite disutility (simplifying, so as not to get bogged down in discussing discounting rates and limited lifespans).

Third and a halfth, it is impossible to get so much utility that it can be rounded off to positive infinity, short of maybe FiO or some other form of hedonium/orgasmium/eudaimonium/whatever of the sort. It is not infinite, but it is "whatever the limit is for a sapient mind" (which is something like "all the thermostat variables satisfied, including those that require making a modicum of effort to satisfy the others", because minds are a tool for doing that and seem to require doing it to some, intra- and interpersonally varying, extent).

Fourth, and the most important point against total utilitarianism: you need to account for the entire distribution. Even assuming, very wrongly as explained above, that you can actually measure the utility that one person gets and compare it to the utility that another person gets, you can still have the bottom of your distribution of utility being sufficiently low that the bottom whatever% of the population would prefer to die immediately, which is (simplified) infinite disutility and cannot be traded against the limit of positive utility. (Torture and dust specks: no finite amount of dust specks can trade off against the infinite disutility of a degree of torture sufficient to make even one single victim prefer to die.) (This still works even if the victim dies in a day, because you need to measure over all of history, from the beginning of when your moral theory takes effect.) (For the smartasses in the back row: no, the fact that that level of torture has already happened in the past does not absolve you of avoiding it in the future under the pretext that the disutility over all of history already sums to infinity. Yes, it already does; don't you make it worse.)

But alright. Assuming you can measure utility Correctly, let's say you have the floor of the distribution at least epsilon above the minimum viable level. What then? Job done? No. You also want to maximize the entire area under the curve, raising it as high as possible, which is the point that total utilitarianism actually got right. And, in a condition of scarcity, that may require having not too many people; at least, having the number of people rise more slowly than the distributable utility does.
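
Very roughly, and still assuming (contrary to all of the above) that per-person utility $u_i$ could actually be measured, the shape of the thing is:

$$\max \sum_i u_i \quad \text{subject to} \quad u_i \ge u_{\text{floor}} + \epsilon \ \text{ for every person } i$$

where $u_{\text{floor}}$ is the level below which a person would prefer to die immediately. Total utilitarianism keeps the objective and throws away the constraint, and that is exactly the part it gets wrong.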

Comment by c0rw1n on Unikernels: No Longer an Academic Exercise · 2018-10-23T22:01:57.122Z · LW · GW

didn't we use to call those "exokernels" before?

Comment by c0rw1n on The Bizarre Behavior of Berkeley Rationalists · 2018-10-20T05:07:58.100Z · LW · GW

I'm curious who the half is and why. Is it that they are half a rationalist? Half (the time?) in Berkeley? (If it is not half the time then where is the other half?)

Also. The N should be equal to the cardinality of the entire set of rationalists you interacted with, not just of those who are going insane; so, if you have been interacting with seven and a half rationalists in total, how many of those are diving into the woo? Or if you have been interacting with dozens of rationalists, how many times more people were they than 7.5?

Comment by c0rw1n on Outline of Metarationality, or much less than you wanted to know about postrationality · 2018-10-16T01:11:02.454Z · LW · GW
Comment by c0rw1n on Modes of Petrov Day · 2018-09-24T01:13:24.436Z · LW · GW

There was a web thing with a Big Red Button, running in Seattle, Oxford (and I think Boston also).

Each group had a cake and if they got nuked, they wouldn't get to eat the cake.

One second after the Seattle counter said that the game was over, someone there punched the button for the lulz, but the Oxford counter was not at zero yet, so Oxford got nuked; they then decided to burn the cake instead of just not eating it.

Comment by c0rw1n on Realism about rationality · 2018-09-17T04:10:49.865Z · LW · GW
  • Aumann's agreement theorem says that two people acting rationally (in a certain precise sense) and with common knowledge of each other's beliefs cannot agree to disagree. More specifically, if two people are genuine Bayesian rationalists with common priors, and if they each have common knowledge of their individual posterior probabilities, then their posteriors must be equal.

With common priors.

This is what does all the work there! If the disagreers have non-equal priors on one of the points, then of course they'll have different posteriors.

Of course applying Bayes' Theorem with the same inputs is going to give the same outputs, that's not even a theorem, that's an equals sign.

If the disagreers find a different set of parameters to be relevant, and/or the parameters they both find relevant do not have the same values, the outputs will differ, and they will continue to disagree.
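
A minimal numeric sketch of the point (the numbers are made up; they are only there to show that the disagreement survives a shared update rule):

```python
# Bayes' rule gives identical posteriors only when priors and likelihoods are identical.
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H | E) from a prior P(H) and the two likelihoods."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

likelihoods = (0.8, 0.3)  # P(E|H), P(E|~H) -- shared by both people

# Common priors: same posterior, as the theorem (trivially) requires.
print(posterior(0.5, *likelihoods), posterior(0.5, *likelihoods))  # 0.727... and 0.727...

# Different priors: same update rule, same evidence, different posteriors -- they keep disagreeing.
print(posterior(0.5, *likelihoods), posterior(0.1, *likelihoods))  # 0.727... and 0.228...
```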

Comment by c0rw1n on thought: the problem with less wrong's epistemic health is that stuff isn't short form · 2018-09-05T08:48:14.526Z · LW · GW

"the problem with lesswrong : it's not literally twitter"

Comment by c0rw1n on Unrolling social metacognition: Three levels of meta are not enough. · 2018-08-25T21:11:53.396Z · LW · GW

Thank you so much for writing this! I remember reading a tumblr post a while back that explained the main point and could never find it again (because tumblr is an unsearchable memory hole), and I kept needing to link it to people who got stuck on taking Eliezer's joking one-liner seriously.

Comment by c0rw1n on We Agree: Speeches All Around! · 2018-06-18T03:47:34.155Z · LW · GW

It may be that the person keeps expounding their reasons for wanting you to do the thing because it feels aversive to them to stop infodumping, and/or because they expect you to respond with your reasons for doing the thing, so that they know whether your doing the thing is an instance of 2 or of 3.

Comment by c0rw1n on The abruptness of nuclear weapons · 2018-03-30T14:45:10.794Z · LW · GW

The AIs still have to make atoms move for anything Actually Bad to happen.

Comment by c0rw1n on Correct Models Are Bad · 2018-03-04T17:22:11.945Z · LW · GW

No. Correct models are good. Or rather, more correct models, applied properly, are better than less correct models, or models applied wrongly.

All those examples, however, are bad:

  1. Calories in / Calories out is a bad model because different sources of calories are metabolized differently and have different effects on the organism. It is bad because it is incomplete and is used improperly, for things it is bad at. It remains true that to get output from a mechanism you have to put some fuel into it; CICO is good enough to calculate, for example, how many calories one should eat in, say, polar climates where one needs many thousands of them to survive the cold. That is not an argument against using a Correct model; it is an argument for using a model that outputs the correct type of data for the problem domain.

  2. Being unwilling to update is bad, yes, but that is a problem with the user of the model, not with the model being used. Do you mean that knowing and using the best known model makes one unwilling to update to the subsequent better model? Because that is still not a problem with using a Correct model.

  3. That is an entirely different definition of Bad. "Bad for the person holding the model (provided they can't pretend to hold the more socially-accepted model when that is more relevant to staying alive and non-ostracized)" is absolutely not the same thing as "Bad at actually predicting how reality will react on inputs".

  4. That is also a problem with the users of the model, both those making the prediction and those accepting the predictions, in domains that the model is not good at predicting.

  5. Still not a problem caused by using a Correct model. Also, bad outcomes for whom? If immigration is good for the immigrants in some respects and bad for them in some other respects, that is neither all good nor all bad; if it is good for some citizens already there in some ways but bad in others (with some things being a lot of good for some citizens and other things a little bad, while some things are a little good and other things a lot of bad for other citizens), then the Correct model is the one that takes into account the entirety of all the distributions of the good and the bad things, across all citizens, and predicts them Correctly.

( and now please in the name of Truth let this thread not devolve into an object-level discussion of that specific example >_< )

Comment by c0rw1n on Kenshō · 2018-01-20T05:19:25.681Z · LW · GW

Not easy, no. But there is a shorter version here: http://nonsymbolic.org/PNSE-Summary-2013.pdf

Comment by c0rw1n on Kenshō · 2018-01-20T01:58:47.802Z · LW · GW

Is this enlightenment anything like described in https://aellagirl.com/2017/07/07/the-abyss-of-want/ ?

Also possibly related: http://nonsymbolic.org/wp-content/uploads/2014/02/PNSE-Article.pdf (can you point out on that map where you think you found yourself?)

Comment by c0rw1n on The Craft & The Community - A Post-Mortem & Resurrection · 2017-11-11T16:23:43.161Z · LW · GW

I'm thinking of something like a section on the main lesserwrong.com page showing the latest edits to the wiki, so that the users of the site could see them and choose to go look at whether what changed in the article is worth points.

Comment by c0rw1n on The Craft & The Community - A Post-Mortem & Resurrection · 2017-11-11T09:12:06.871Z · LW · GW

I think the lesswrong wiki was supposed to be that repository of the interesting/important things that were posted to the community blog.

It could be a good idea to make a wiki in lw2.0 and award site karma to people contributing to it.

Comment by c0rw1n on The Craft & The Community - A Post-Mortem & Resurrection · 2017-11-04T04:58:11.134Z · LW · GW

welp, 2/4 current residents and the next one planned to come there are trans women so um, what gender ratio issue again?

Comment by c0rw1n on The Craft & The Community - A Post-Mortem & Resurrection · 2017-11-04T01:49:48.487Z · LW · GW

Why yes, there should be such a list; I don't know of any existing one.

Comment by c0rw1n on The Craft & The Community - A Post-Mortem & Resurrection · 2017-11-03T23:40:33.181Z · LW · GW

Well, so far it's ... a group house, with long late-night conversations, running self-experiments (currently measuring the results of a low-carb diet), and organizing the local monthly rationalist meetup.

We are developing social tech solutions such as a database of competing access needs and a formal system for dealing with house logistics.

Comment by c0rw1n on The Craft & The Community - A Post-Mortem & Resurrection · 2017-11-03T21:23:40.073Z · LW · GW

  • I am confused about what sort of blog post you are requesting people write. I assume you don't mean that people should list off a variety of interesting facts about the Bay Area, e.g. "the public transit system, while deeply inadequate, is one of the best in the country," "UCSF is one of the top hospitals in the United States for labor and delivery," "everything in San Francisco smells like urine," "adult night at the Exploratorium is awesome," "there are multiple socialist pizza places in Berkeley which schismed over whether to serve beer." This seems of limited interest to people who don't live in the Bay Area and of equally limited interest to people who live in the Bay Area (they know). If people are moving to the Bay Area I assume they'd discuss this with their friends who already live here.

Those posts would be useful indeed.

Specifically because:

  • If you are ignorant of verifiable facts about the Bay Area like "it is relatively uncommon to commute by car", I have to question your ability to make accurate statements about more subtle and hard-to-check issues of social dynamics. If you have not talked to enough Berkeley residents to hear "gotta go, train's here", then I doubt your ability to comment on why our social demographics are the way they are.

You literally just pointed out that people are not writing about the practical details of living there, yet you dismiss Ben's points, which reference the things he can ostensibly know only because people do write about them, on the grounds that he does not know the very things that you yourself just pointed out nobody is writing about!

Comment by c0rw1n on Our values are underdefined, changeable, and manipulable · 2017-11-02T19:59:42.101Z · LW · GW

This means you are trying to Procrustes the human squishiness into legibility, with consistent values. You should, instead, be trying to make pragmatic AIs that would frame the world for the humans, in ways that the humans would approve of*, taking into account their objectively stupid incoherence. Because that would be Friendly, and would be parsed as such by the humans.

*=this doesn't mean that such human preferences as those that violate meta-universalizability from behind the veil of ignorance should not be factored out of the calculation of what is ethically relevant; but it means that the states of the world that violate those preferences should still be hidden from the humans who have those preferences. This obviously results in humans being allowed to more accurately see the states of the world, the more their preferences are tolerant of other people's preferences; there is absolutely nothing that could possibly ever go wrong from this, considering that the AIs, being Friendly, would simply prevent them from sociopathically exploiting that information asymmetry since that would violate the ethical principle.

Comment by c0rw1n on The Craft & The Community - A Post-Mortem & Resurrection · 2017-11-02T06:48:45.183Z · LW · GW

https://thezvi.wordpress.com/2017/08/12/what-is-rationalist-berkleys-community-culture/#comment-494

Comment by c0rw1n on HOWTO: Screw Up The LessWrong Survey and Bring Great Shame To Your Family · 2017-10-21T18:20:53.437Z · LW · GW

Note, last year's survey was also run by /u/ingres

Comment by c0rw1n on Stupidity as a mental illness · 2017-02-12T20:31:26.002Z · LW · GW

http://www.gwern.net/Embryo%20selection

Comment by c0rw1n on Lesswrong Survey - invitation for suggestions · 2016-03-06T22:53:48.049Z · LW · GW

I think the questions on the next survey should be a superset of those on the last survey. Maybe not strictly, but tracking year-on-year changes is too interesting to give up by removing questions, unless it's really unquestionably obvious that they're superfluous.

Comment by c0rw1n on Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 114 + chapter 115 · 2015-03-04T19:27:09.196Z · LW · GW

New users need 2 points to vote.