Posts

Meetup: Maine: Automatic Cognition 2015-06-17T23:48:03.804Z

Comments

Comment by imuli on [Link] Mainstreaming Tell Culture · 2015-11-12T13:56:45.489Z · LW · GW

My thoughts are that you probably haven't read Malcolm's post on communication cultures, or you disagree.

Roughly, different styles of communication cultures (guess, ask, tell) are supported by mutual assumptions of trust in different things (and produce hurt and confusion in the absence of that trust). Telling someone you would enjoy a hug is likely to harm a relationship where the other person's assumptions are aligned with ask or guess, even if you don't expect the other person to automatically hug you!

You need to coordinate with people on what type of and which particular culture to use (and that coordination usually happens through inference and experimentation). I certainly expect people who happen to coordinate on a Tell Culture to do better, but I doubt that it works as an intervention, unless they make the underlying adjustments in trust.

Comment by imuli on Nature publishes an article about alternative therapy · 2015-10-19T18:08:49.684Z · LW · GW

The article isn't so much about Reiki as about intentionally utilizing the placebo effect in medicine. It also presents some evidence that, for people who currently believe (medicine x) is effective, the placebo effect of fake (medicine x) may be stronger than that of fake (medicine y), while (medicine x) has fewer medically significant side effects than (medicine y).

Comment by imuli on Rationality Compendium: Principle 2 - You are implemented on a human brain · 2015-08-30T01:40:19.928Z · LW · GW

Thinking, Fast and Slow references studies of disbelief requiring attention - which is what I assume you mean by "easier".

Comment by imuli on Rationality Compendium · 2015-08-25T17:31:44.694Z · LW · GW

We're a long way from having any semblance of a complete art of rationality, and I think that holding on to even the names used in the greater Less Wrong community is a mistake. Good names for concepts are important, and while it may be confusing in the short term while we're still developing the art, we can do better if we don't tie ourselves to the past. Put the old names at the end of the entry, or under a history heading, but pushing the innovation of jargon forward is valuable.

Comment by imuli on Palatable presentation of rationality to the layperson · 2015-06-06T05:13:14.805Z · LW · GW

I've been introducing rationality not by name, but by description. As in, “I've been working on forming more accurate beliefs and taking more effective action.”

Comment by imuli on Brainstorming new senses · 2015-05-21T14:32:31.884Z · LW · GW
  • Ionizing Radiation - preferably expressed as synthetic heat or pain with a tolerable cap. The various types could be differentiated by location or flavor, but mostly it's the warning that matters.
Comment by imuli on If you could push a button to eliminate one cognitive bias, which would you choose? · 2015-04-09T15:37:14.669Z · LW · GW

There are a significant number of people who judge themselves harshly. Too harshly. It's not fun and not productive; see Ozy's Post on Scrupulosity. It might be helpful for the unscrupulous to judge themselves with a bit more rigor, but leniency has a lot to recommend it as viewed from over here.

Comment by imuli on Request for help: Android app to shut down a smartphone late at night · 2015-04-02T21:21:38.090Z · LW · GW

Basic version debug apk here, (more recent) source on GitHub, and Google Play.

The most notable feature lacking is locking the phone when the start time arrives. PM me if you run into problems. Don't set the end time one minute before the start time, or you'll only be able to unlock the phone in that minute.

Comment by imuli on Request for help: Android app to shut down a smartphone late at night · 2015-04-02T17:51:39.424Z · LW · GW

A more advanced version of this would be to lock the phone into "emergency calls only" mode within a specific time window. I don't know how hard that would be to pull off.

This appears to be possible with the Device Administration API to relock the screen upon receiving an ACTION_USER_PRESENT intent. Neither of which requires a rooted phone.

Comment by imuli on Some famous scientists who believed in a god · 2015-03-26T21:33:36.621Z · LW · GW

Probably because they have been dead for forty or fifty years.

The best example still living might be Robert Aumann, though his field (economics) is less central than those of anyone on your list. Find a well-known modern scientist who is doing impressive work and believes in any reasonably traditional sense of God! It's not interesting to show a bunch of people who believed in God when >99% of the rest of their society did.

Comment by imuli on Learning by Doing · 2015-03-24T15:57:58.471Z · LW · GW

I'm talking about things on the level of selecting which concepts are necessary and useful to implement in a system or higher. At the simplest that's recognizing that you have three types of things that have arbitrary attributes attached and implementing an underlying thing-with-arbitrary-attributes type instead of three special cases. You tend to get that kind of review from people with whom you share a project and a social relationship such that they can tell you what you're doing wrong without offense.

Comment by imuli on Learning by Doing · 2015-03-24T03:12:55.336Z · LW · GW

I think the "learn to program by programming" adage came from a lack of places teaching the stuff that makes people good programmers. I've never worked with someone who has gone through one of the new programming schools, but I don't think they purport to turn out senior-level programmers, much less 99th-percentile programmers. As far as I can tell, folks either learn everything beyond the mechanics and algorithms of programming from their seniors in the workplace or discover it for themselves.

So I'd say that there are nodes on the graph that I don't have labels for, and that are not taught formally as far as I know. The best way to learn them is to read lots of big, well-written code bases and try to figure out why everything was done one way and not some other. Second best, maybe, is to write a few huge code bases and figure out why things keep falling apart?

Comment by imuli on Is arrogance a symptom of bad intellectual hygeine? · 2015-03-22T21:24:28.081Z · LW · GW

Ok, then, humble from the OED: "Having a low estimate of one's importance, worthiness, or merits; marked by the absence of self-assertion or self-exaltation; lowly: the opposite of proud."

Clicking out.

Comment by imuli on Is arrogance a symptom of bad intellectual hygeine? · 2015-03-22T19:42:18.270Z · LW · GW

I think you understand the concept that I was trying to convey, and are trying to say that 'humble' and 'humility' are the wrong labels for that concept. Right? I basically agree with the OED's definition of humility: “The quality of being humble or having a lowly opinion of oneself; meekness, lowliness, humbleness: the opposite of pride or haughtiness.” Note the use of the word opposite, not absence.

Besides, shouldn't a person who believes himself unworthy tend to accept ideas that contradict his own original beliefs more easily? E.g. Oh, Dr. Kopernikues claims that the earth ISN'T flat? Well, who am I to come and believe otherwise?

That's exactly the problem, at best one ends up following whoever is loudest, at worst one ends up saying "everybody is right" and "but we can't really know" and not even pretending to try to figure out the truth.

Comment by imuli on Is arrogance a symptom of bad intellectual hygeine? · 2015-03-22T15:50:37.750Z · LW · GW

I was speaking more to how someone acts inside than how someone presents themselves. If they believe themselves unworthy or unimportant or without merit, they tend not to reject ideas very well and do a lot of equivocating. (Though, I think, all my evidence for that is anecdotal.)

Comment by imuli on Is arrogance a symptom of bad intellectual hygeine? · 2015-03-21T20:25:14.798Z · LW · GW

You might say that they are both traps, at least from a truth seeker's perspective. The arrogant will not question their belief sufficiently; the humble will not sufficiently believe.

Comment by imuli on Seeking Estimates for P(Hell) · 2015-03-21T18:55:26.002Z · LW · GW

There are other calculations to consider too (edit: and they almost certainly outweigh the torture possibilities)! For instance:

Suppose that you can give one year of life by giving $25 to AMF (GiveWell says $3340 to save a child's life, not counting the other benefits).

If all MIRI does is delay the development of any type of Unfriendly AI, your $25 would need to let MIRI delay that by, ah, 4.3 milliseconds (139 picoyears). With 10%-a-year exponential future discounting, 100 years before you expect Unfriendly AI to be created if you don't help MIRI, and no population growth, that $25 now needs to give them enough resources to delay UFAI by about 31 seconds.
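A quick sketch of the arithmetic, for anyone who wants to redo it with their own numbers. The population figure, year length, and discounting model here are my assumptions, not the original poster's, so the discounted figure lands in the same ballpark rather than at exactly 31 seconds:

```python
# Back-of-the-envelope: how much UFAI delay must $25 of delay buy to
# match one year of life from a $25 AMF donation?
# All figures below are assumptions for illustration.

SECONDS_PER_YEAR = 365.25 * 24 * 3600   # ~3.16e7 seconds
WORLD_POPULATION = 7.3e9                # assumed mid-2010s population

# Undiscounted: delaying extinction by t seconds gives humanity
# WORLD_POPULATION * t / SECONDS_PER_YEAR extra person-years.
# Setting that equal to 1 person-year and solving for t:
base_delay_s = SECONDS_PER_YEAR / WORLD_POPULATION
print(f"undiscounted break-even delay: {base_delay_s * 1000:.1f} ms")

# With 10%-a-year exponential discounting of a benefit 100 years out,
# the required delay grows by a factor of 1.1**100 (~14,000x):
discount_factor = 1.1 ** 100
discounted_delay_s = base_delay_s * discount_factor
print(f"discounted break-even delay: {discounted_delay_s:.0f} s")
```

Under these particular assumptions the discounted figure comes out closer to a minute than to 31 seconds; the order of magnitude, not the exact value, is the point.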

This is true for any project that reduces humanity's existential risk. AI is just the saddest if it goes wrong, because then it goes wrong for everything in (slightly less than) our light cone.

Comment by imuli on [FINAL CHAPTER] Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 122 · 2015-03-14T20:58:22.318Z · LW · GW

It started happening well before the story was complete...

Comment by imuli on In Praise of Maximizing – With Some Caveats · 2015-03-14T20:53:59.507Z · LW · GW

But what does one maximize?

We cannot maximize more than one thing (except in trivial cases). It's not too hard to call the thing that we want to maximize our utility, and the balance of priorities and desires our utility function. I imagine that most of the components of that function are subject to diminishing returns, and such components I would satisfice. So I understand this whole thing as saying that these things have the potential for unbounded linear or superlinear utility?

  • epistemic rationality
  • ethics
  • social interaction
  • existence

I'm not sure if I'm confused.

Comment by imuli on Rationality: From AI to Zombies · 2015-03-13T18:45:15.053Z · LW · GW

The zip file has some extra Apple metadata files included. Nothing too revealing, just Dropbox bits.

Comment by imuli on Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 119 · 2015-03-10T21:08:33.410Z · LW · GW

As is Tom Riddle. I imagine the point of divergence is in Tom Riddle's childhood somewhere, which pushed Albus into consulting the maze of the future, which...

Comment by imuli on Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 119 · 2015-03-10T19:07:09.963Z · LW · GW

Alastor Moody went to Minerva's right and sat down.

Amelia Bones sat down in a chair, taking Minerva's right. Mad-Eye Moody took the chair to her own right.

Oops!

Comment by imuli on Why the culture of exercise/fitness is broken and how to fix it · 2015-03-10T16:54:47.182Z · LW · GW

I had always modeled part of the appeal of the workout/gym as being that one doesn't need to coordinate with other people.

Comment by imuli on Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 117 · 2015-03-08T19:14:25.598Z · LW · GW

Timing note: While this update was at 12pm Pacific, this is no longer the same as 8pm UTC, due to daylight saving time beginning in the US. I'm assuming tomorrow will be the same (at 19:00/7pm UTC)?

Comment by imuli on Plane crashes · 2015-03-08T17:52:12.723Z · LW · GW

Your question is: after an airliner accident, how often do any of the next n flights following the same route also have an accident?

Guessing (2/3 confidence) lower than the base rate.

Comment by imuli on Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 116 · 2015-03-05T18:08:23.362Z · LW · GW

Nicholas Flamel is dead, at least according to Dumbledore. (Or tucked away for later secret extraction?)

Comment by imuli on What Scarcity Is and Isn't · 2015-03-04T10:15:09.340Z · LW · GW

Posit a world where sustenance, shelter, and well-being are magically provided - nobody actually needs to do anything to continue existing. This would be an instance of what is colloquially, and perhaps to an economist incorrectly, termed a post-scarcity society.

I'm less certain about this phrasing; I'm not yet comfortable with the semantics of the economic definition of scarce, but one could try: A society where only time and some luxuries are (economically) scarce.

Comment by imuli on What Scarcity Is and Isn't · 2015-03-03T05:07:22.857Z · LW · GW

This is why I don't take promises of a post-scarcity society very seriously. They seem to think in terms of leaps in production technology, as if the key to ending scarcity is producing lots and lots of stuff.

Is this simply a matter of people using the word scarcity differently?

When someone talks about a post-scarcity future, I doubt that they are thinking about a future without choice between alternatives, but indeed a future without unmet needs of one sort or another. Indeed, such futures tend to have a bewildering amount of choice and alternative uses of time.

Comment by imuli on [Link] Algorithm aversion · 2015-02-27T20:42:01.451Z · LW · GW

I wonder if this (distrusting imperfect algorithms more than imperfect people) holds for programmers and mathematicians. Indeed, the popular perception seems to be that such folks overly trust algorithms...

Comment by imuli on What subjects are important to rationality, but not covered in Less Wrong? · 2015-02-27T15:20:07.625Z · LW · GW

Different methods are more and less likely to lead one to the truth (in a given universe). I see little harm in calling those less likely arts dark. Rhetoric is surely grey at the lightest.

Comment by imuli on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 112 · 2015-02-26T05:46:32.755Z · LW · GW

Adapting the Horcrux (2.0 in HPMoR) spell to make Amulets of Life Saving was the very first thing I thought of when considering ethical immortality in HPverse.

Comment by imuli on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 111 · 2015-02-25T20:13:46.302Z · LW · GW

Hermione can always transfigure herself older - possibly with help from the stone - if that becomes a problem.

Comment by imuli on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 111 · 2015-02-25T19:57:05.375Z · LW · GW

Voldemort believes that Harry “WILL TEAR APART THE VERY STARS IN HEAVEN” without Hermione. What wouldn't you do to protect the person preventing that, given that you are willing to murder unknown hundreds for Horcruxes?

Comment by imuli on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 111 · 2015-02-25T19:14:53.946Z · LW · GW

One does not get set back 49 years of hard work toward immortality every day.

Comment by imuli on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapters 105-107 · 2015-02-19T01:26:46.027Z · LW · GW

And might possibly have prompted Harry to insist on hearing about Bellatrix in Parselmouth.

Comment by imuli on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapters 105-107 · 2015-02-18T00:14:23.028Z · LW · GW

You cannot transfigure from air; that's a hard physical limit. Harry tested this.

Comment by imuli on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapters 105-107 · 2015-02-17T22:39:31.362Z · LW · GW

I mean, he just forged a note "from yourself"

Or Harry just wrote a note that looked like Quirrell had forged it, to help his past-self figure it out at the appropriate time.

Comment by imuli on Deconstructing the riddle of experience vs. memory · 2015-02-17T21:09:30.675Z · LW · GW

I could imagine calling all the changes that take place in one's mind due to an event the memory of that event - not just the ones that involve conscious recall. Still, to be a little more general, I would maybe frame it as process vs. consequences.

Though honestly I'm more interested in understanding the different types of mind-changes it is useful to have names for.

Comment by imuli on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapters 105-107 · 2015-02-17T01:33:44.440Z · LW · GW

The spell in progress that may kill hundreds of students that the stone can fix — sounds like something transfigured into a gas.

Comment by imuli on An alarming fact about the anti-aging community · 2015-02-16T21:59:46.488Z · LW · GW

They want them frozen immediately, shipped in an insulated box with an ice pack, and then they extract cells and store the cells cryogenically. So that's probably not sufficient.

Comment by imuli on An alarming fact about the anti-aging community · 2015-02-16T20:13:06.452Z · LW · GW

The two tooth-storage services I looked at both cost US $120/year. One-time fees were in the $600-1800 range. Both figure for up to four teeth extracted simultaneously.

Comment by imuli on The Galileo affair: who was on the side of rationality? · 2015-02-16T02:05:48.195Z · LW · GW

This is not a test as to whether we should judge the truth by what the church condemns, but rather for the OP's thesis that they are/were not specifically opposing the progress of truth on an object level.

Comment by imuli on The Galileo affair: who was on the side of rationality? · 2015-02-15T23:32:27.390Z · LW · GW

Galileo was eventually demonstrated correct. Were there trials where the church was eventually demonstrated correct?

Comment by imuli on A rational approach to the issue of permanent death-prevention · 2015-02-15T13:42:34.813Z · LW · GW

I would hazard that cloning comes a lot closer to 100% fidelity than a child comes to 50% fidelity. In any case, one cannot transfer their self to clones or children with our current means - I doubt one can even convey 1%.

Comment by imuli on A rational approach to the issue of permanent death-prevention · 2015-02-11T22:09:06.432Z · LW · GW

Upvoted for cuteness.

However, my understanding is that technology has already reached the level of making copies with ~100% of hardware fidelity.

Comment by imuli on Causality does not imply correlation · 2015-02-06T03:30:47.325Z · LW · GW

Note - images and links are broken.

Comment by imuli on Is there a rationalist skill tree yet? · 2015-01-31T19:31:24.523Z · LW · GW

Noticing when you're confused and confidence calibration are two rationality skills that are necessary to have in your system 1 in order to progress as a rationalist… and much of instrumental rationality can be construed as retraining system 1.

Comment by imuli on Is there a rationalist skill tree yet? · 2015-01-30T18:38:24.654Z · LW · GW

There is a dependency tree for Eliezer Yudkowsky's early posts. It's not terribly pretty, but with a couple hours and a decent data presentation toolkit someone could probably make a pretty graphical version. It doesn't include a lot of later contributions by other people, but it'd be a start.

Comment by imuli on Inverse relationship between belief in foom and years worked in commercial software · 2015-01-16T04:43:04.303Z · LW · GW

Consider it to be public domain.

If you pull the image from its current location and message me when you add more folks, I might even update it. Or I can send you my data if you want to go for more consistency.

Comment by imuli on Inverse relationship between belief in foom and years worked in commercial software · 2015-01-13T17:09:42.772Z · LW · GW

Birth Year vs Foom:

A bit less striking for the subset famous enough to have Google pop up their birth year (green).