Posts

MWI, weird quantum experiments and future-directed continuity of conscious experience 2009-09-18T16:45:47.741Z
Minds that make optimal use of small amounts of sensory data 2009-08-15T14:11:34.964Z

Comments

Comment by SforSingularity on Bloggingheads: Robert Wright and Eliezer Yudkowsky · 2010-08-07T22:31:41.165Z · LW · GW

I was on Robert Wright's side towards the end of this debate when he claimed that there was a higher optimization process that created natural selection for a purpose.

The purpose of natural selection, of the fine-tuning of physical constants in our universe, and of countless other detailed coincidences (1) was to create me. (Or, for the readers of this comment, to create you.)

The optimization process that optimized all these things is called anthropics. Its principle of operation is absurdly simple: you can't find yourself in a part of the universe that can't create you.

When Robert Wright looks at evolution and sees purpose in the existence of the process of evolution itself (and the particular way it happened to play out, including increasing complexity), he is seeing the evidence for anthropics and big worlds.

Once you take away all the meta-purpose that is caused by anthropics, then I really do think there is no more purpose left. Eli should re-do the debate with this insight on the table.

(Note 1) Including that evolution on earth happened to create intelligence, which seems to be a highly unlikely outcome of a generic biochemical replicator process on a generic planet; we know this because earth managed to have life for 4 billion years -- half of its total viability as a place for life -- without intelligence emerging, and said intelligence seemed to depend in an essential way on a random asteroid impact at approximately the right moment.

Comment by SforSingularity on Contrived infinite-torture scenarios: July 2010 · 2010-07-26T21:57:09.010Z · LW · GW

1) if something very bad is about to happen to you, what's your credence that you're in a rescue sim and have nothing to fear?

I'd give that some credence, though note that we're talking about subjective anticipation, which is a piece of humanly-compelling nonsense.

Comment by SforSingularity on Financial incentives don't get rid of bias? Prize for best answer. · 2010-07-15T21:13:43.661Z · LW · GW

However, if you approach them with a serious deal where some bias identified in the lab would lead them to accept unfavorable terms with real consequences, they won't trust their unreliable judgments, and instead they'll ask for third-party advice and see what the normal and usual way to handle such a situation is. If no such guidance is available, they'll fall back on the status quo heuristic. People hate to admit their intellectual limitations explicitly, but they're good at recognizing them instinctively before they get themselves into trouble by relying on their faulty reasoning too much. This is why for all the systematic biases discussed here, it's extremely hard to actually exploit these biases in practice to make money.

Yeah... that sounds right. Also, suppose that you have an irrational stock price. One or two contrarians can't make much more than double their stake money out of it, because if they go leveraged, the market might get more irrational before it gets less irrational, and wipe out their position.
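A toy illustration of that leverage point (purely illustrative; the stake, leverage, and price moves are numbers I made up):

```python
# Toy example only: all numbers are invented, this is not from the original comment.
stake = 100.0          # contrarian's own capital
leverage = 5           # borrow to control 5x the stake
position = stake * leverage

# Suppose the mispricing eventually corrects, but the price first moves
# another 20% against the contrarian before it does.
adverse_move = -0.20
loss = position * adverse_move          # -100.0: the entire stake is gone
print(loss <= -stake)                   # True -> margin call wipes out the position
# The later correction never helps, because the position was closed before it arrived.
```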

Comment by SforSingularity on Financial incentives don't get rid of bias? Prize for best answer. · 2010-07-15T21:11:21.141Z · LW · GW

People hate to admit their intellectual limitations explicitly, but they're good at recognizing them instinctively before they get themselves into trouble by relying on their faulty reasoning too much.

Yeah... this is what Bryan Caplan says in The Myth of the Rational Voter.

Comment by SforSingularity on Reason as memetic immune disorder · 2010-07-14T23:06:33.231Z · LW · GW

There is a point I am trying to make with this: the human race is a collective where the individual parts pretend to care about the whole, but actually don't care, and we (mostly) do this the insidious way, i.e. using lots of biased thinking. In fact most people even have themselves fooled, and this is an illusion that they're not keen on being disabused of.

The results... well, we'll see.

Comment by SforSingularity on Reason as memetic immune disorder · 2010-07-14T22:42:31.955Z · LW · GW

Look, maybe it does sound kooky, but people who really genuinely cared might at least invest more time in finding out how good its pedigree was. On the other hand, people who just wanted an excuse to ignore it would say "it's kooky, I'm going to ignore it".

But one could look at other cases, for example direct donation of money to the future (Robin has done this).

Or the relative lack of attention to more scientifically respectable existential risks, or even existential risks in general. (Human extinction risk, etc).

Comment by SforSingularity on Open Thread: April 2010 · 2010-04-03T15:51:10.029Z · LW · GW

As you grow up, you start to see that the world is full of waste, injustice and bad incentives. You try frantically to tell people about this, and it always seems to go badly for you.

Then you grow up a bit more, get a bit wise, and realize that the mother-of-all-bad-incentives, the worst injustice, and the greatest meta-cause of waste ... is that people who point out such problems get punished, (especially) including pointing out this problem. If you are wise, you then become an initiate of the secret conspiracy of the successful.

Discuss.

Comment by SforSingularity on The Amanda Knox Test: How an Hour on the Internet Beats a Year in the Courtroom · 2010-01-01T14:28:44.425Z · LW · GW

Think about it in evolutionary terms. Roughly speaking, taking the action of attempting to kill someone is risky. An attractive female body is pretty much a guaranteed win for the genes concerned, so it's pointless taking risks. [Note: I just made this up, it might be wrong, but definitely look for an evo-psych explanation]

This explanation also accounts for the lower violent crime rate amongst women, since women are, from a gene's point of view, a low risk strategy, whereas violence is a risky business: you might win, but then again, you might die.

It would also predict, other things equal, lower crime rates amongst physically attractive men.

Comment by SforSingularity on The Amanda Knox Test: How an Hour on the Internet Beats a Year in the Courtroom · 2009-12-27T20:05:16.161Z · LW · GW

I had heard about the case casually on the news a few months ago. It was obvious to me that Amanda Knox was innocent. My probability estimate of guilt was around 1%. This makes me one of the few people in reasonably good agreement with Eli's conclusion.

I know almost nothing of the facts of the case.

I only saw a photo of Amanda Knox's face. Girls with cute smiles like that don't brutally murder people. I was horrified to see that among 300 posts on Less Wrong, only two mentioned this, and it was to urge people to ignore the photos. Are they all too PC or something? Have they never read Ekman, or at least Gladwell? Perhaps Less Wrong commenters are distrustful of their instincts to the point of throwing out the baby with the bathwater.

http://www.amandadefensefund.org/Family_Photos.html

Perhaps it is confusing to people that the actual killer is probably a scary looking black guy with a sunken brow. Obviously most scary looking black guys with sunken brows never kill anyone. So that guy's appearance is only very weak evidence of his guilt. But wholesome-looking apple-cheeked college girls don't brutally murder people ever, pretty much. So that is strong evidence of her innocence.

Comment by SforSingularity on In conclusion: in the land beyond money pumps lie extreme events · 2009-11-26T02:54:40.781Z · LW · GW

Yes, but you can manipulate whether the world getting saved had anything to do with you, and you can influence what kind of world you survive into.

If you make a low-probability, high-reward bet and really commit to donating the money to an X-risks organization, you may find yourself winning that bet more often than you would probabilistically expect.

In general, QI means that you care about the nature of your survival, but not whether you survive.

Comment by SforSingularity on In conclusion: in the land beyond money pumps lie extreme events · 2009-11-25T01:39:19.915Z · LW · GW

the singularity institute's budget grows much faster than linearly with cash. ... sunk all its income into triple-rollover lottery tickets

I had the same idea of buying very risky investments. Intuitively, it seems that world-saving probability is superlinear in cash. But I think that the intuition is probably incorrect, though I'll have to rethink now that someone else has had it.

Another advantage of buying triple rollover tickets is that if you adhere to quantum immortality plus the belief that uFAI reliably kills the world, then you'll win the lottery in all the worlds that you care about.

Comment by SforSingularity on Agree, Retort, or Ignore? A Post From the Future · 2009-11-25T01:19:15.978Z · LW · GW

I think that this is a great idea. I often find myself ending a debate with someone important and rational without the sense that our disagreement has been made explicit, and without a good reason for why we still disagree.

I suspect that if we imposed a norm on LW that said: every time two people disagree, they have to write down, at the end, why they disagree, we would do better.

Comment by SforSingularity on A Less Wrong singularity article? · 2009-11-18T06:15:44.764Z · LW · GW

That is plausible.

Comment by SforSingularity on A Less Wrong singularity article? · 2009-11-18T04:52:41.697Z · LW · GW

we disagree about what reply we would hear if we asked a friendly AI how to talk and think about morality in order to maximize human welfare as construed in most traditional utilitarian senses.

Surely you should both have large error bars around the answer to that question in the form of fairly wide probability distributions over the set of possible answers. If you're both well-calibrated rationalists those distributions should overlap a lot. Perhaps you should go talk to Greene? I vote for a bloggingheads.

Comment by SforSingularity on A Less Wrong singularity article? · 2009-11-18T04:29:20.173Z · LW · GW

people should do different things.

Whose version of "should" are you using in that sentence? If you're using the EY version of "should" then it is not possible for you and Greene to think people should do different things unless you and Greene anticipate different experimental results...

... since the EY version of "should" is (correct me if I am wrong) a long list of specific constraints and valuators that together define one specific utility function U_humanMoralityAccordingToEY. You can't disagree with Greene over what the concrete result of maximizing U_humanMoralityAccordingToEY is unless one of you is factually wrong.

Comment by SforSingularity on A Less Wrong singularity article? · 2009-11-18T02:52:30.034Z · LW · GW

Correct. I'm a moral cognitivist;

I think you're just using different words to say the same thing that Greene is saying; in particular, you use "should" and "morally right" in a nonstandard way - but I don't really care about the particular way you formulate the correct position, just as I wouldn't care if you used the variable "x" where Greene used "y" in an integral.

You do agree that you and Greene are actually saying the same thing, yes?

Comment by SforSingularity on Bay area LW meet-up · 2009-11-08T23:52:50.811Z · LW · GW

Alicorn, I hereby award you 10 points. These are redeemable after the singularity for kudos, catgirls and other cool stuff.

Comment by SforSingularity on Bay area LW meet-up · 2009-11-08T22:39:12.526Z · LW · GW

For example, having a goal of not going outside its box.

It would be nice if you could tell an AI not to affect anything outside its box.

10 points will be awarded to the first person who spots why "don't affect anything outside your box" is problematic.

Comment by SforSingularity on Bay area LW meet-up · 2009-11-08T10:16:21.080Z · LW · GW

Great meetup; conversation was had about the probability of AI risk. Initially I thought that the probability of AI disaster was close to 5%, but speaking to Anna Salamon convinced me that it was more like 60%.

Also some discussion about what strategies to follow for AI friendliness.

Comment by SforSingularity on Bay area LW meet-up · 2009-11-06T19:06:12.013Z · LW · GW

I'm traveling to the west coast especially for this. Hoping to see you all there.

Comment by SforSingularity on Why the beliefs/values dichotomy? · 2009-10-26T07:53:07.542Z · LW · GW

ungrounded beliefs can be adopted voluntarily to an extent.

I cannot do this, and I don't understand anyone who can. If you consciously say "OK, it would be really nice to believe X, now I am going to try really hard to start believing it despite the evidence against it", then you already disbelieve X.

Comment by SforSingularity on Why the beliefs/values dichotomy? · 2009-10-21T00:12:40.782Z · LW · GW

Since we can imagine a continuous sequence of ever-better-Roombas, the notion of "has beliefs and values" seems to be a continuous one, rather than a discrete yes/no issue.

Comment by SforSingularity on Why the beliefs/values dichotomy? · 2009-10-20T23:55:54.656Z · LW · GW

Does that have implication for self-awareness and consciousness?

Yes, I think so. One prominent hypothesis is that the reason we evolved consciousness is that there has to be some way for us to take an overview of ourselves, our goals, and the environment, and of the way in which we think our effort is producing achievement of those goals. We need this so that we can run the "Am I failing to achieve my goals?" check. Why this results in "experience" is not something I am going to attempt in this post.

Comment by SforSingularity on Why the beliefs/values dichotomy? · 2009-10-20T23:36:27.105Z · LW · GW

As I said,

With the superRoomba, the pressure that the superRoomba applies to the environment doesn't vary as much with the kind of trick you play on it; it will eventually work out what changes you have made, and adapt its strategy so that you end up with a clean floor.

This criterion seems to separate an "inanimate" object like a hydrogen atom or a pebble bouncing around the world from a superRoomba.

Comment by SforSingularity on Why the beliefs/values dichotomy? · 2009-10-20T23:24:54.994Z · LW · GW

See heavily edited comment above, good point.

Comment by SforSingularity on Why the beliefs/values dichotomy? · 2009-10-20T22:44:57.836Z · LW · GW

Clearly these are two different things; the real question you are asking is in what relevant way are they different, right?

First of all, the Roomba does not "recognize" a wall as a reason to stop going forward. It gets some input from its front sensor, and then it turns to the right.

So what is the relevant difference between the Roomba that gets some input from its front sensor and then turns to the right, and the superRoomba that gets evidence from its wheels that it is cleaning the room, but entertains the hypothesis that maybe someone has suspended it in the air, and goes and tests whether this alternative (disturbing) hypothesis is true, for example by calculating what the inertial difference between being suspended and actually being on the floor would be?

The difference is the difference between a simple input-response architecture, and an architecture where the mind actually has a model of the world, including itself as part of the model.

SilasBarta notes below that the word "model" is playing too great a role in this comment for me to use it without defining it precisely. What does a Roomba not have that causes it to behave in that laughable way when you suspend it so that its wheels spin?

What does the superRoomba have that lets it work out that it is being suspended (by performing experiments involving its inertial sensor), hack into your computer, and blackmail you into letting it get back onto the floor to clean it (or even cause you to clean the floor yourself)?

Imagine a collection of tricks that you could play on the Roomba: ways of changing its environment outside of what the designers had in mind. The pressure that it applies to its environment (defined, for example, as the derivative of the final state of the environment with respect to how long you leave the Roomba on) would then vary with which trick you play. For example, if you replace its dirt-sucker with a black spray-paint can, you end up with a black floor. If you put it on a nonstandard floor surface that produces dirt in response to stimulation, you get a dirtier floor than you had to start with.

With the superRoomba, the pressure that the superRoomba applies to the environment doesn't vary as much with the kind of trick you play on it; it will eventually work out what changes you have made, and adapt its strategy so that you end up with a clean floor.
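Something like the following toy sketch captures the distinction I have in mind (purely illustrative; the class names and the stuck-wheel check are invented for the example, not how any real Roomba works):

```python
# Illustrative sketch only: invented sensor names and logic, not actual Roomba firmware.

class ReactiveRoomba:
    """Simple input-response architecture: sensor reading -> fixed action."""
    def step(self, sensors):
        if sensors["bump_front"]:
            return "turn_right"
        return "drive_forward"   # keeps "cleaning" even while suspended in midair


class ModelBasedRoomba:
    """Keeps a model of the world (including itself) and checks it against evidence."""
    def __init__(self):
        self.believes_on_floor = True

    def step(self, sensors):
        # Compare the predicted consequences of its actions against observations.
        if sensors["wheels_spinning"] and not sensors["accelerometer_shows_motion"]:
            self.believes_on_floor = False    # the "I am failing to achieve my goals" check
        if not self.believes_on_floor:
            return "find_way_back_to_floor"   # adapt strategy so the floor still gets cleaned
        if sensors["bump_front"]:
            return "turn_right"
        return "drive_forward"
```

Whatever trick you play, the second architecture eventually notices the mismatch between its model and its evidence and routes around it; the first just keeps executing its fixed responses.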

Comment by SforSingularity on Why the beliefs/values dichotomy? · 2009-10-20T22:14:06.323Z · LW · GW

If, however, you programmed the Roomba not to interpret the input it gets from being in midair as an example of being in a room it should clean

then you would be building a beliefs/desires distinction into it.

Comment by SforSingularity on Why the beliefs/values dichotomy? · 2009-10-20T21:32:14.662Z · LW · GW

The difference between the Roomba spinning its wheels and you working for nothing is that if you told the Roomba that it was just spinning its wheels, it wouldn't react. It has no concept of "I am failing to achieve my goals". You, on the other hand, would investigate: prod your environment to check whether it was actually as you thought, and eventually you would update your beliefs and change your behavior.

Comment by SforSingularity on Why the beliefs/values dichotomy? · 2009-10-20T21:27:14.009Z · LW · GW

Would you claim the dog has no belief/value distinction?

Actually, I think I would. I think that pretty much all nonhuman animals also lack the belief/value distinction.

I think that having a belief/values distinction requires being at least as sophisticated as a human. There are cases where a human sets a particular goal and then does things that are unpleasant in the short term (like working hard and not wasting all day commenting on blogs) in order to obtain a long-term valuable thing.

Comment by SforSingularity on Why the beliefs/values dichotomy? · 2009-10-20T18:48:40.711Z · LW · GW

An agent using UDT doesn't necessarily have a beliefs/values separation,

I am behind on your recent work on UDT; this fact comes as a shock to me. Can you provide a link to a post of yours/provide an example here making clear that UDT doesn't necessarily have a beliefs/values separation? Thanks.

Comment by SforSingularity on Why the beliefs/values dichotomy? · 2009-10-20T17:43:20.384Z · LW · GW

One possible response here: We could consider simple optimizers like amoeba or Roomba vacuum cleaners as falling into the category: "mind without a clear belief/values distinction"; they definitely do a lot of signal processing and feature extraction and control theory, but they don't really have values. The Roomba would happily sit with wheels lifted off the ground thinking that it was cleaning a nonexistent room.

Comment by SforSingularity on Why the beliefs/values dichotomy? · 2009-10-20T17:07:51.855Z · LW · GW

Is it possible that the dichotomy between beliefs and values is just an accidental byproduct of our evolution, perhaps a consequence of the specific environment that we’re adapted to, instead of a common feature of all rational minds?

In the normal usage, "mind" implies the existence of a distinction between beliefs and values. In the LW/OB usage, it implies that the mind is connected to some actuators and sensors which connect to an environment and is actually doing some optimization toward those values. Certainly "rational mind" entails a beliefs/values separation.

But suppose we abandon the beliefs/values separation: what properties do we have left? Is the concept "mind without a beliefs/values separation" simply the concept "thing"?

Comment by SforSingularity on Boston Area Less Wrong Meetup: 2 pm Sunday October 11th · 2009-10-11T23:57:32.961Z · LW · GW

Thought they nearly discovered my true identity....

Comment by SforSingularity on Boston Area Less Wrong Meetup: 2 pm Sunday October 11th · 2009-10-11T22:14:41.173Z · LW · GW

The meetup has been good fun. Much conversing, coffee, and a restaurant meal.

Comment by SforSingularity on Of Gender and Rationality · 2009-10-07T23:54:55.594Z · LW · GW

It would be an evolutionary win to be interested in things that the other gender is interested in.

Why? I think that perhaps your reasoning is that you date someone based upon whether they have the same interests as you. But I suspect that this may be false - i.e. we confabulate shared interests as an explanation, where the real explanation is status or looks.

Comment by SforSingularity on Of Gender and Rationality · 2009-10-07T23:49:36.443Z · LW · GW

Upvoted. I came to exactly the same conclusion. Men are extremophiles, and in (7), Eliezer explained why.

As to Anna's point below, we should ask how much good can be expected to accumulate from trying to go against nature here, versus how difficult it will be. I.e. spending effort X on attracting more women to LW must be balanced against spending that same effort on something else.

Comment by SforSingularity on Of Gender and Rationality · 2009-10-07T23:22:04.029Z · LW · GW

If high intellectual curiosity is a rare trait in males and a very rare one in females, then given that you are here this doesn't surprise me. You are more intellectually curious than most of the men I have met, and they are themselves a high-intellectual-curiosity sample.

Comment by SforSingularity on Of Gender and Rationality · 2009-10-07T23:14:58.225Z · LW · GW

his group feels "cliquey". There are a lot of in-phrases and technical jargon

every incorrect comment is completely and utterly destroyed by multiple people.

These apply to both genders...

Comment by SforSingularity on Of Gender and Rationality · 2009-10-07T22:50:53.755Z · LW · GW

The obvious evolutionary psychology hypothesis behind the imbalanced gender ratio in the iconoclastic community is the idea that males are inherently more attracted to gambles that seem high-risk and high-reward; they are more driven to try out strange ideas that come with big promises, because the genetic payoff for an unusually successful male has a much higher upper bound than the genetic payoff for an unusually successful female. ... a difference as basic as "more male teenagers have a high cognitive temperature" could prove very hard to address completely.

You ask evo-psych why we have a problem, and evo-psych provides the answer. The gender that has a biological reason to pursue low risk strategies - shockingly! - tends to not show much interest in weird, high-risk, high-payoff looking things like saving the world.

Ask evo-psych how to solve the problem, then. We already know that women tend to like doing highly visible charitable activities (for signaling reasons). Maybe we should provide a way for people to make little sacrifices of their time and then make it visible over the web. I am thinking of a rationalist social network that allowed people to very prominently (perhaps even with a publicly visible part here on LW) show off how many hours they had volunteered next to a picture of themselves. I once attended an Amnesty International letter-writing group that was 90% female, for example.

However, remember that association with any radical sounding idea is high-risk compared to association with a less radical but equally charitable sounding idea. Thus I would predict that women will, on average, tend to not get involved with singularitarianism, transhumanism, existential risks, etc, until these ideas go mainstream.

Comment by SforSingularity on 'oy, girls on lw, want to get together some time?' · 2009-10-07T21:55:43.698Z · LW · GW

psychology, yes, definitely. Bio, I do not know, but I would like to see what it looks like for evo psych.

Comment by SforSingularity on 'oy, girls on lw, want to get together some time?' · 2009-10-07T21:52:11.114Z · LW · GW

Upvoted for a sensible analysis of the problem. Want girls? Go get them. My experience is that a common mistake amongst academically inclined people is to expect reality to reward them for doing the right thing - for example, men on LW may (implicitly, without realizing that they are doing it) expect attractive, eligible women to be abundant in the risk-mitigation movement, because mitigating existential risks is the right thing to do, and the universe is a just place which rewards good behavior.

The reality of the situation is that a male who spends time attempting to reduce existential risks will find himself in a community which is full of other males, which, relative to other hobbies he could have, will reduce his pool of available women.

Women who spend time attempting to reduce existential risks will find themselves surrounded by guys, who are preselected for intelligence and high ethical standards.

Comment by SforSingularity on 'oy, girls on lw, want to get together some time?' · 2009-10-07T21:27:13.184Z · LW · GW

(Also, if anybody knows or can estimate, are the gender ratios similar in the relevant areas of academia?)

All male-biased as far as I know (math, philosophy, AI/CS).

Comment by SforSingularity on 'oy, girls on lw, want to get together some time?' · 2009-10-07T21:26:28.454Z · LW · GW

Typo. Thanks for pointing it out.

Comment by SforSingularity on 'oy, girls on lw, want to get together some time?' · 2009-10-03T02:44:40.248Z · LW · GW

I assign a 99.9% probability to there being more male readers than female readers of LW. The most recent LW meetup that I attended had a gender ratio of roughly 20:1 male:female.

Males who feel that they are competing for a small pool of females will attempt to gain status over each other, diminishing the amount of honest, rational dialogue, and replacing it with oneupmanship.

Hence the idea of mixing LW - in its current state - with dating may not be good.

However, there is the possibility of re-framing LW so that it appeals more to women. Perhaps we need to re-frame saving the world as a charitable sacrifice?

I would love to know what the gender ratio looks like within the atheist movement; I think we should regard that as a bound on what is achievable.

Comment by SforSingularity on The Anthropic Trilemma · 2009-10-03T00:59:10.047Z · LW · GW

a truly remarkable observation: quantum measure seems to behave in a way that would avoid this trilemma completely

Which is why Roger Penrose is so keen to show that consciousness is a quantum phenomenon.

Comment by SforSingularity on Non-Malthusian Scenarios · 2009-09-26T17:35:11.595Z · LW · GW

"Singleton

A world government or superpower imposes a population control policy over the whole world."

  • it has to be stable essentially forever. It seems to me that no human government could achieve this, because of the randomness of human nature. Therefore, only an AI would suffice.

Comment by SforSingularity on Solutions to Political Problems As Counterfactuals · 2009-09-26T00:20:13.892Z · LW · GW

I've observed far more clannishness among children than political perspicuity

but what about the relative amounts in children vs adults?

Comment by SforSingularity on Solutions to Political Problems As Counterfactuals · 2009-09-25T23:35:11.674Z · LW · GW

A priori we should expect children to be genuine knowledge seekers, because in our EEA there would have been facts of life (such as which plants were poisonous) that were important to know early on. Our EEA was probably sufficiently simple and unchanging that once you were an adult there were few new abstract facts to know.

This "story" explains why children ask adults awkward questions about politics, often displaying a wisdom apparently beyond their age. In reality, they just haven't traded in their curiosity for signalling yet.

At least, that is one possible hypothesis.

Comment by SforSingularity on Solutions to Political Problems As Counterfactuals · 2009-09-25T22:21:59.620Z · LW · GW

I do sometimes wonder what proportion of people who think about political matters are asking questions with genuine curiosity, versus engaging in praise for the idea that they and their group have gone into a happy death spiral about.

I suspect that those who ask with genuine curiosity are overwhelmingly children.

EDIT: Others disagree that children are more genuinely curious. Perhaps it's just the nerds who ask genuine questions then?

Comment by SforSingularity on Solutions to Political Problems As Counterfactuals · 2009-09-25T21:39:41.951Z · LW · GW

Great! Now that we've both signalled our allegiance to the h+ ideology, would you like to mate with me!?

For an explanation of why I call this "Hansonian", see, for example, this. Hanson has lots of posts on how charity, ideology, etc. is all about affiliating with a tribe and finding mates.