My thoughts are that you probably haven't read Malcolm's post on communication cultures, or you disagree with it.
Roughly, different styles of communication cultures (guess, ask, tell) are supported by mutual assumptions of trust in different things (and produce hurt and confusion in the absence of that trust). Telling someone you would enjoy a hug is likely to harm a relationship where the other person's assumptions are aligned with ask or guess, even if you don't expect the other person to automatically hug you!
You need to coordinate with people on which type of culture, and which particular culture, to use (and that coordination usually happens through inference and experimentation). I certainly expect people who happen to coordinate on a Tell Culture to do better, but I doubt that it works as an intervention unless they make the underlying adjustments in trust.
The article isn't so much about Reiki as about intentionally utilizing the placebo effect in medicine - and about there being some evidence that, for the group of people who currently believe medicine X is effective, the placebo effect of fake medicine X may be stronger than that of fake medicine Y, while medicine X has fewer medically significant side effects than medicine Y.
Thinking, Fast and Slow references studies in which disbelief requires attention - which is what I assume you mean by "easier".
We're a long way from having any semblance of a complete art of rationality, and I think that holding on to even the names used in the greater Less Wrong community is a mistake. Good names for concepts are important, and though it may be confusing in the short term while we're still developing the art, we are able to do better if we don't tie ourselves to the past. Put the old names at the end of the entry, or under a history heading, but pushing jargon innovation forward is valuable.
I've been introducing rationality not by name, but by description. As in, “I've been working on forming more accurate beliefs and taking more effective action.”
- Ionizing Radiation - preferably expressed as synthetic heat or pain with a tolerable cap. The various types could be differentiated by location or flavor, but mostly it's the warning that matters.
There are a significant number of people who judge themselves harshly. Too harshly. It's not fun and not productive; see Ozy's Post on Scrupulosity. It might be helpful for the unscrupulous to judge themselves with a bit more rigor, but leniency has a lot to recommend it as viewed from over here.
Basic version debug apk here, (more recent) source on GitHub, and Google Play.
The most notable missing feature is locking the phone when the start time arrives. PM me if you run into problems. Don't set the end time one minute before the start time, or you'll only be able to unlock the phone during that one minute.
A more advanced version of this would be to lock the phone into "emergency calls only" mode within a specific time window. I don't know how hard that would be to pull off.
This appears to be possible with the Device Administration API: relock the screen upon receiving an ACTION_USER_PRESENT intent. Neither the API nor the intent requires a rooted phone.
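A minimal sketch of how that might look, assuming the app already holds device-admin rights (which lockNow() requires, via a user-approved DeviceAdminReceiver); isInsideBlockedWindow() is a stand-in for the app's own time check, not real code from the app:

```kotlin
import android.app.Service
import android.app.admin.DevicePolicyManager
import android.content.BroadcastReceiver
import android.content.Context
import android.content.Intent
import android.content.IntentFilter
import android.os.IBinder

class RelockService : Service() {
    // ACTION_USER_PRESENT fires each time the user dismisses the keyguard.
    private val relocker = object : BroadcastReceiver() {
        override fun onReceive(context: Context, intent: Intent) {
            if (isInsideBlockedWindow()) {
                val dpm = context.getSystemService(Context.DEVICE_POLICY_SERVICE)
                        as DevicePolicyManager
                dpm.lockNow()  // needs active device-admin status, not root
            }
        }
    }

    override fun onCreate() {
        super.onCreate()
        // Registered at runtime, so the relock behavior only exists while
        // this service is running.
        registerReceiver(relocker, IntentFilter(Intent.ACTION_USER_PRESENT))
    }

    override fun onDestroy() {
        unregisterReceiver(relocker)
        super.onDestroy()
    }

    override fun onBind(intent: Intent?): IBinder? = null

    // Hypothetical helper: compare the current time against the stored window.
    private fun isInsideBlockedWindow(): Boolean = TODO("check now against start/end")
}
```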
Probably because they have been dead for forty or fifty years.
The best example still living might be Robert Aumann, though his science (economics) is less central than that of anyone on your list. Find a well-known modern scientist who is doing impressive work and believes in any reasonably traditional sense of God! It's not interesting to show a bunch of people who believed in God when >99% of the rest of their society did.
I'm talking about things on the level of selecting which concepts are necessary and useful to implement in a system or higher. At the simplest that's recognizing that you have three types of things that have arbitrary attributes attached and implementing an underlying thing-with-arbitrary-attributes type instead of three special cases. You tend to get that kind of review from people with whom you share a project and a social relationship such that they can tell you what you're doing wrong without offense.
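A toy sketch of that simplest case (all names invented for illustration):

```kotlin
// Instead of Document, User, and Device each growing their own ad-hoc
// attribute fields (three special cases), implement the shared concept once...
open class Attributed {
    private val attributes = mutableMapOf<String, String>()
    operator fun get(key: String): String? = attributes[key]
    operator fun set(key: String, value: String) { attributes[key] = value }
}

// ...and let each of the three types reuse it.
class Document(val title: String) : Attributed()
class User(val name: String) : Attributed()
class Device(val id: String) : Attributed()

fun main() {
    val doc = Document("Design notes")
    doc["reviewed-by"] = "someone"  // arbitrary attribute, no special casing
    println(doc["reviewed-by"])     // prints: someone
}
```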
I think the "learn to program by programming" adage came from a lack of places teaching the stuff that makes people good programmers. I've never worked with someone who has gone through one of the new programming schools, but I don't think they purport to turn out senior-level programmers, much less 99th-percentile programmers. As far as I can tell, folks either learn everything beyond the mechanics and algorithms of programming from their seniors in the workplace or discover it for themselves.
So I'd say that there are nodes on the graph that I don't have labels for, and that are not taught formally as far as I know. The best way to learn them is to read lots of big, well-written code bases and try to figure out why everything was done one way and not some other. Second best, maybe, is to write a few huge code bases and figure out why things keep falling apart?
Ok, then, humble from the OED: "Having a low estimate of one's importance, worthiness, or merits; marked by the absence of self-assertion or self-exaltation; lowly: the opposite of proud."
Clicking out.
I think you understand the concept that I was trying to convey, and are trying to say that 'humble' and 'humility' are the wrong labels for that concept. Right? I basically agree with the OED's definition of humility: “The quality of being humble or having a lowly opinion of oneself; meekness, lowliness, humbleness: the opposite of pride or haughtiness.” Note the use of the word opposite, not absence.
Besides, shouldn't a person who believes himself unworthy tend to accept ideas that contradict his own original beliefs more easily? E.g., "Oh, Dr. Kopernikus claims that the earth ISN'T flat? Well, who am I to come along and believe otherwise?"
That's exactly the problem, at best one ends up following whoever is loudest, at worst one ends up saying "everybody is right" and "but we can't really know" and not even pretending to try to figure out the truth.
I was speaking more to how someone acts inside than how they present themselves. If they believe themselves unworthy or unimportant or without merit, they tend not to reject ideas very well and do a lot of equivocating. (Though, I think, all my evidence for that is anecdotal.)
You might say that they are both traps, at least from a truth seeker's perspective. The arrogant will not question their belief sufficiently; the humble will not sufficiently believe.
There're other calculations to consider too (edit: and they almost certainly outweigh the torture possibilities)! For instance:
Suppose that you can give one year of life by giving $25 to AMF (GiveWell says $3,340 to save a child's life, not counting the other benefits).
If all MIRI does is delay the development of any type of Unfriendly AI, your $25 would need to let MIRI delay that by, ah, 4.3 milliseconds (139 picoyears). With 10%-a-year exponential future discounting, 100 years before you expect Unfriendly AI to be created if you don't help MIRI, and no population growth, that $25 now needs to give them enough resources to delay UFAI by about 31 seconds.
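Spelling out the first figure (a back-of-the-envelope reconstruction, assuming a world population of roughly 7.2 billion): delaying UFAI by a time $t$ buys $7.2 \times 10^9 \cdot t$ life-years, so matching one life-year requires

$$t = \frac{1\ \text{life-year}}{7.2 \times 10^{9}\ \text{lives}} \approx 1.39 \times 10^{-10}\ \text{yr} = 139\ \text{picoyears} \approx 4\ \text{ms}.$$

The 100 years of 10%-a-year discounting then multiply the required delay by roughly $1.1^{100} \approx 1.4 \times 10^{4}$, which is what pushes it from milliseconds up into the tens-of-seconds range.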
This is true for any project that reduces humanity's existential risk. AI is just the saddest if it goes wrong, because then it goes wrong for everything in (slightly less than) our light cone.
It started happening well before the story was complete...
But what does one maximize?
We cannot maximize more than one thing (except in trivial cases). It's not too hard to call the thing that we want to maximize our utility, and the balance of priorities and desires our utility function. I imagine that most of the components of that function are subject to diminishing returns, and such components I would satisfice. So I understand this whole thing as saying that these things have the potential for unbounded linear or superlinear utility? (A sketch in symbols follows the list.)
- epistemic rationality
- ethics
- social interaction
- existence
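In symbols (my own sketch, not anything from the post): write $U(x) = \sum_i u_i(x_i)$. For most components, $u_i$ is concave and bounded - diminishing returns, so satisficing them is sensible - while the claim here would be that for these four components $u_i$ grows at least linearly without bound, so no finite level is ever "enough".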
I'm not sure if I'm confused.
The zip file has some extra Apple metadata files included. Nothing too revealing, just Dropbox bits.
As is Tom Riddle. I imagine the point of divergence is in Tom Riddle's childhood somewhere, which pushed Albus into consulting the maze of the future, which...
Alastor Moody went to Minerva's right and sat down.
…
Amelia Bones sat down in a chair, taking Minerva's right. Mad-Eye Moody took the chair to her own right.
Oops!
I had always modeled part of the appeal of the workout/gym as being that one doesn't need to coordinate with other people.
Timing note: While this update was at 12pm Pacific, that is no longer the same as 8pm UTC, due to daylight saving time beginning in the US. I'm assuming tomorrow will be the same (at 19:00/7pm UTC)?
Your question is: after an airliner accident, how often do any of the next n flights following the same route also have an accident?
Guessing (2/3 confidence) lower than the base rate.
Nicholas Flamel is dead, at least according to Dumbledore. (Or tucked away for later secret extraction?)
Posit a world where sustenance, shelter, and well-being are magically provided - nobody actually needs to do anything to continue existing. This would be an instance of what is colloquially, and perhaps to an economist incorrectly, termed a post-scarcity society.
I'm less certain about this phrasing - I'm not yet comfortable with the semantics of the economic definition of scarce - but one could try: a society where only time and some luxuries are (economically) scarce.
This is why I don't take promises of a post-scarcity society very seriously. They seem to think in terms of leaps in production technology, as if the key to ending scarcity is producing lots and lots of stuff.
Is this simply a matter of people using the word scarcity differently?
When someone talks about a post-scarcity future, I doubt that they are thinking about a future without choice between alternatives, but indeed a future without unmet needs of one sort or another. Indeed, such futures tend to have a bewildering amount of choice and alternative uses of time.
I wonder if this (distrusting imperfect algorithms more than imperfect people) holds for programmers and mathematicians. Indeed, the popular perception seems to be that such folks overly trust algorithms...
Different methods are more and less likely to lead one to the truth (in a given universe). I see little harm in calling those less likely arts dark. Rhetoric is surely grey at the lightest.
Adapting the Horcrux (2.0 in HPMoR) spell to make Amulets of Life Saving was the very first thing I thought of when considering ethical immortality in HPverse.
Hermione can always transfigure herself older - possibly with help from the stone - if that becomes a problem.
Voldemort believes that Harry “WILL TEAR APART THE VERY STARS IN HEAVEN” without Hermione. What wouldn't you do to protect the person preventing that, given that you are willing to murder unknown hundreds for Horcruxes?
One does not get set back 49 years of hard work toward immortality every day.
And might possibly have prompted Harry to insist on hearing about Bellatrix in Parseltongue.
You cannot transfigure from air - hard physical limit. Harry tested this.
I mean, he just forged a note "from yourself"
Or Harry just wrote a note that looked like Quirrell had forged it, to help his past-self figure it out at the appropriate time.
I could imagine calling all the changes that take place in one's mind due to an event the memory of that event - not just the ones that involve conscious recall. Still, to be a little more general, I would maybe frame it as process vs. consequences.
Though honestly I'm more interested in understanding the different types of mind-changes it is useful to have names for.
The in-progress spell that may kill hundreds of students, and that the Stone can fix - it sounds like something transfigured into a gas.
They want the teeth frozen immediately, shipped in an insulated box with an ice pack; then they extract cells and store the cells cryogenically. So that's probably not sufficient.
The two tooth-storage services I looked at both cost US $120/year. One-time fees were in the $600-1800 range. Both quotes cover up to four teeth extracted simultaneously.
This is not a test of whether we should judge truth by what the church condemns, but rather of the OP's thesis that they are/were not specifically opposing the progress of truth at the object level.
Galileo was eventually demonstrated correct. Were there trials where the church was eventually demonstrated correct?
I would hazard that cloning comes a lot closer to 100% fidelity than a child comes to 50% fidelity. In any case, one cannot transfer one's self to clones or children with our current means - I doubt one can even convey 1%.
Upvoted for cuteness.
However, my understanding is that technology has already reached the level of making copies with ~100% hardware fidelity.
Note - images and links are broken.
Noticing when you're confused and confidence calibration are two rationality skills that are necessary to have in your System 1 in order to progress as a rationalist… and much of instrumental rationality can be construed as retraining System 1.
There is a dependency tree for Eliezer Yudkowsky's early posts. It's not terribly pretty, but with a couple hours and a decent data presentation toolkit someone could probably make a pretty graphical version. It doesn't include a lot of later contributions by other people, but it'd be a start.
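For instance (a rough sketch - the edge list and titles here are invented placeholders, not the actual dependency data), one could dump the edges to Graphviz DOT and render with `dot -Tsvg`:

```kotlin
// Turn a post-dependency edge list into Graphviz DOT source.
fun toDot(edges: List<Pair<String, String>>): String = buildString {
    appendLine("digraph posts {")
    appendLine("  rankdir=LR;")  // left-to-right layout reads better for deep trees
    for ((from, to) in edges) {
        appendLine("  \"$from\" -> \"$to\";")
    }
    appendLine("}")
}

fun main() {
    val edges = listOf(
        "Making Beliefs Pay Rent" to "Belief in Belief",  // placeholder edges
        "Belief in Belief" to "Religion's Claim to be Non-Disprovable",
    )
    print(toDot(edges))  // pipe the output into: dot -Tsvg -o posts.svg
}
```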
Consider it to be public domain.
If you pull the image from its current location and message me when you add more folks, I might even update it. Or I can send you my data if you want to go for more consistency.
Birth Year vs Foom:
A bit less striking for the subset famous enough to have Google pop up their birth year (green).