Posts

On the Galactic Zoo hypothesis 2015-07-16T19:12:03.017Z
The Joy of Bias 2015-06-09T19:04:30.588Z
Why IQ shouldn't be considered an external factor 2015-04-04T17:58:46.348Z

Comments

Comment by estimator on Abuse of Productivity Systems · 2016-03-31T18:15:24.153Z · LW · GW

Is this actually a bad thing? In both cases, Bob and Sally not only succeeded at their initial goals, but also made some extra progress.

Also, fictional evidence. It is not implausible to imagine a scenario where Bob does all the same things, learns French and German, and then fails at, e.g., Spanish. The same goes for Sally.

In general, if you have tried some strategy and succeeded, it does make sense to go ahead and try it on other problems (until it finally stops working). If you have invented, e.g., a new machine learning method to solve a specific practical problem, the obvious next step is to try to apply it to other problems. If you found a very interesting article on a blog, it makes sense to take a look at its other articles. And so on. A method being successful is evidence for it being successful in the future / on other sets of problems / etc.
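
(A minimal sketch of that last sentence in Bayesian terms -- the numbers below are made up purely for illustration. Let T = "the method works on this class of problems" and S = "it succeeded on the problem I just tried". Then

    P(T \mid S) = \frac{P(S \mid T)\,P(T)}{P(S \mid T)\,P(T) + P(S \mid \neg T)\,P(\neg T)}

e.g. with P(T) = 0.3, P(S | T) = 0.8, P(S | ~T) = 0.2 the posterior is 0.24 / 0.38 ≈ 0.63; a single success multiplies the odds that the method generalizes by the likelihood ratio P(S | T) / P(S | ~T) = 4.)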

So, I wouldn't choose to change those mistakes into successes, because they weren't mistakes in the first place. An optimal strategy is not guaranteed to succeed every single time; rather, it should maximize the probability of success.

Comment by estimator on Instrumental Rationality Questions Thread · 2015-08-24T14:44:10.443Z · LW · GW

Well, I agree, that would help an FAI build people similar to you. But why do you want an FAI to do that?

And what copying precision is OK for you? Would just making a clone based on your DNA suffice? Maybe you don't even need to bother with all these screenshots and photos.

Comment by estimator on Instrumental Rationality Questions Thread · 2015-08-24T06:57:10.198Z · LW · GW

I'm very skeptical of the third. A human brain contains ~10^10 neurons and ~10^14 synapses -- which would be hard to infer from ~10^5 photos/screenshots, especially considering that they don't convey much information about your brain structure. DNA and comprehensive brain scans are better, but I suspect that getting brain scans with the required precision isn't easy.
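
(A back-of-the-envelope version of that gap, under loose, illustrative assumptions about how many bits each item carries:

    \text{brain: } \sim 10^{14}\ \text{synapses} \times \sim 10\ \text{bits per synapse} \approx 10^{15}\ \text{bits}
    \text{photos: } \sim 10^{5}\ \text{photos} \times \sim 10^{7}\ \text{bits per photo} \approx 10^{12}\ \text{bits}

So even if every bit of every photo carried brain-relevant information -- which it clearly doesn't -- you'd still be short by roughly three orders of magnitude.)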

Cryonics, at least, might work.

Comment by estimator on On the Galactic Zoo hypothesis · 2015-07-17T01:20:00.359Z · LW · GW

It is; and actually it is a more plausible scenario. Aliens may well want that; humans do, both in fiction and in reality -- see, for example, the Prime Directive in Star Trek and the real-life practice of sterilizing rovers before sending them to other planets.

I, however, investigated that particular flavor of the Zoo hypothesis in the post.

Comment by estimator on On the Galactic Zoo hypothesis · 2015-07-17T00:57:17.816Z · LW · GW

I don't know whether the statement (intelligence => consciousness) is true, so I assign a non-zero probability to it being false.

Suppose I said "Assume NP = P", or the contrary "Assume NP != P". One of those statements is logically false (the same way 1 = 2 is false). Still, while you can dismiss an argument which starts "Assume 1 = 2", you probably shouldn't do the same with those NP ones, even if one of them is, strictly speaking, logical nonsense.

Also, a few words about concepts. You can explain a concept using other concepts, and then explain the concepts you have used to explain the first one, and so on, but the chain has to end somewhere, right? Here it ends at consciousness.

1) I know that there is a phenomenon (that I call 'consciousness'), because I observe it directly.

2) I don't know a decent theory that explains what it really is and what properties it has.

3) To my knowledge, nobody does. That is why the problem of consciousness is labeled 'hard'.

Too many people, I've noticed, just pick the theory of consciousness they consider best, and then become overconfident in it. Not a good idea, given that there is so little data.

So even if the most plausible theory says (intelligence => consciousness) is true, you shouldn't immediately dismiss everything that is based on the opposite. The Bayesian way is to integrate over all possible theories, weighted by their probabilities.
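
(Spelled out, with T_i ranging over candidate theories of consciousness and A being a claim such as "unconscious intelligence is possible" -- the labels are just mine:

    P(A) = \sum_i P(A \mid T_i)\, P(T_i)

so even if the single most probable T_i implies not-A, P(A) stays non-negligible as long as some not-too-improbable theory permits A.)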

Comment by estimator on On the Galactic Zoo hypothesis · 2015-07-17T00:26:59.663Z · LW · GW

Modern computers can be programmed to do almost every task a human can do, including very high-level ones, so in a sense, yes, they are (and maybe sort of conscious, if you are willing to stretch the concept that far).

Some time ago we could only program computers to execute a specific algorithm which solves a problem; now we have machine learning and don't have to provide an algorithm for every task; but we still have different machine learning algorithms for different areas/meta-tasks (computer vision, classification, time series prediction, etc.). When we build systems that are capable of solving problems in all these areas simultaneously -- and combining the results to reach some goal -- I would call such systems truly intelligent.

Having said that, I don't think I need an insight or explanation here -- because, well, I mostly agree with you and jacob_cannel -- it's likely that intelligence and unconsciousness are logically incompatible. Yet as long as the problem of consciousness is not fully resolved, I can't be certain, and therefore I assign a non-zero probability to the conjunction being possible.

Comment by estimator on On the Galactic Zoo hypothesis · 2015-07-16T21:48:33.073Z · LW · GW

Makes sense.

Anyway, any trait which isn't consciousness (and obviously it wouldn't be consciousness) would suffice, provided there is some reason to hide from Earth rather than destroy it.

Comment by estimator on On the Galactic Zoo hypothesis · 2015-07-16T21:18:25.737Z · LW · GW

There are concepts which are hard to explain (given our current understanding of them). Consciousness is one of them. Qualia. Subjective experience. The thing which separates p-zombies from non-p-zombies.

If you don't already understand what I mean, there is little chance that I would be able to explain.

As for the assumption, I agree that it is implausible, yet possible. Do you consider your computer conscious?

And no doubt the scenarios you mention are more plausible.

Comment by estimator on On the Galactic Zoo hypothesis · 2015-07-16T21:07:25.285Z · LW · GW

Why do you think it is unlikely? I think any simple criterion which separates the aliens from their environment would suffice.

Personally, I think that the scenario is implausible for another reason: the human moral system would easily adapt to such aliens. People sometimes personify things that aren't remotely sentient, let alone aliens who would actually act as sentient/conscious beings.

The other reason is that I consider sentience without consciousness relatively implausible.

Comment by estimator on Open Thread, Jun. 15 - Jun. 21, 2015 · 2015-06-18T07:16:56.266Z · LW · GW

Filters don't have to be mutually exclusive, and as for the collectively exhaustive part, take all plausible Great Filter candidates.

I don't quite understand the Great Filter hype, by the way; having a single cause of civilization failure seems very implausible (<1%).

Comment by estimator on Open Thread, Jun. 15 - Jun. 21, 2015 · 2015-06-17T06:54:18.220Z · LW · GW

It's extremely hard to ban the research worldwide, and then it's extremely hard to enforce such a decision.

Firstly, you'll have to convince all the world's governments (btw, there are >200) to pass such laws.

Then, you'll likely have all the powerful nations doing the research secretly, because it provides powerful weaponry / other ways to acquire power; or just out of fear that some other government will do it first.

And even if you somehow managed to pass the law worldwide, and stopped governments from doing research secretly, how would you stop individual researchers?

Humanity hasn't prevented the use of nuclear bombs, and has barely prevented a full-blown nuclear war -- and that's while nuclear bombs require national-level industry to produce and are available to only a few countries. How can we hope to ban something which can be researched and launched in your basement?

Comment by estimator on Open Thread, Jun. 8 - Jun. 14, 2015 · 2015-06-11T15:04:48.078Z · LW · GW

Why do you prefer offline conversations to online ones?

Off the top of my head, I can name 3 advantages of online communication, which are quite important to LessWrong:

  • You don't have to go anywhere. Since the LW community is distributed all over the world, this really matters: at meetups you can only communicate with people who happen to be in the same place as you, while online you can communicate with everyone.

  • You have more time to think before replying, if you need it. For example, you can support your arguments with relevant research papers or data.

  • As you have noticed, online articles and discussions remain available on the site. You have proposed writing articles after offline events, but a) not everything will be covered by them and b) it requires additional effort.

Well, enjoy offline events if you like them; but the claim that people should always prefer offline activities over online ones is highly questionable, IMO.

Comment by estimator on Open Thread, Jun. 8 - Jun. 14, 2015 · 2015-06-10T00:40:12.559Z · LW · GW

I have noticed that many people here want LW resurrection for the sake of LW resurrection.

But why do you want it in the first place?

Do you care about rationality? Then research rationality and write about it, here or anywhere else. Do you enjoy the community of LWers? Then participate in meetups, discuss random things in OTs, have nice conversations, etc. Do you want to write more rationalist fiction? Do it. And so on.

After all, if you think that Eliezer's writings constitute most of LW's value, and Eliezer doesn't write here anymore, maybe the wise decision is to let it decay.

Beware the lost purposes.

Comment by estimator on Stupid Questions June 2015 · 2015-05-31T15:28:47.651Z · LW · GW

What is the point of having separate Open Threads and Stupid Questions threads, instead of allowing "stupid questions" in OTs and making OTs more frequent?

Comment by estimator on Stupid Questions June 2015 · 2015-05-31T11:44:29.641Z · LW · GW

And the effort required to earn the money to buy the ring is also wasted.

No, it's not. You have produced (hopefully) valuable goods or services; why are they wasted, from the viewpoint of society?

Comment by estimator on Stupid Questions June 2015 · 2015-05-31T09:58:37.347Z · LW · GW

Such cost calculations are wildly overestimated.

Suppose you buy a luxury item, like a gold ring with diamonds. You pay a lot of money, but your money isn't going to disappear; it is redistributed among traders, jewelers, miners, etc. The only thing that's lost is the total effort required to produce that ring, which often costs less by an order of magnitude. And if the item you buy is actually useful, the wasted effort is even lower.

The cost of having kids is so high for you because you will likely raise well-educated children with high intelligence, who are valuable assets to our society -- likely net positive, after all. Needless to say, actually ensuring that those poor children in Africa end up that well, rather than, say, die of starvation the next year, is going to cost you much more than $800. So you pay for quality here.

Comment by estimator on Stupid Questions June 2015 · 2015-05-31T08:32:22.596Z · LW · GW

Well, everyone will likely die sooner or later, even post-Singularity (provided that it happens, which isn't quite a solid fact).

Anyway, I think that any morality system that proclaims unethical each and every birth that has happened so far is inadequate.

Comment by estimator on The most important meta-skill · 2015-05-29T21:09:57.728Z · LW · GW

That only works if there are few levels of abstraction; I doubt that you can derive how programs work at the machine-code level from your knowledge of physics and high-level programming. Sometimes the gears are so small that you can't even see them in your top-level big picture, and sometimes just climbing up one level of abstraction takes enormous effort if you don't know in advance how to do it.

I think that you should understand, at least once, how the system works at each level, and refresh/deepen that knowledge when you need it.

Comment by estimator on The most important meta-skill · 2015-05-29T20:57:28.324Z · LW · GW

Read what a matrix is, how to add, multiply, and invert matrices, what a determinant is, and what an eigenvector is -- and that's enough to get you started. There are many algorithms in ML where vectors/matrices are used mostly as handy notation.
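
For concreteness, here is roughly that starter toolkit in NumPy -- a minimal sketch, with matrices made up purely for illustration:

```python
import numpy as np

# Two small example matrices (values chosen arbitrarily).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])

print(A + B)              # element-wise addition
print(A @ B)              # matrix multiplication
print(np.linalg.inv(A))   # inverse (A must be non-singular)
print(np.linalg.det(A))   # determinant
vals, vecs = np.linalg.eig(A)
print(vals, vecs)         # eigenvalues and eigenvectors (as columns)
```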

Yes, you will be unable to understand some parts of ML which substantially require linear algebra; yes, understanding ML without linear algebra is harder; yes, you need linear algebra for almost any kind of serious ML research -- but that doesn't mean you have to spend a few years studying arcane math before you can open an ML textbook.

Comment by estimator on The most important meta-skill · 2015-05-29T18:45:08.130Z · LW · GW

You're right; you have to learn a solid background for research. But still, it often makes sense to learn in the reverse order.

Comment by estimator on Approximating Solomonoff Induction · 2015-05-29T18:22:30.497Z · LW · GW

Can you unpack "approximation of Solomonoff induction"? Approximation in what sense?

Comment by estimator on The most important meta-skill · 2015-05-29T18:18:15.982Z · LW · GW

In my experience, in math/science prerequisites can often (and should) be ignored, and learned as you actually need them. People who thoroughly follow all the prerequisites often end up bogged down in numerous science fields which actually have only a weak connection to what they wanted to learn initially, and then get demotivated and drop their endeavor. This is a common failure mode.

For example, you need probability theory to do machine learning, but you are unlikely to encounter some parts of it, and there are also parts of ML which require very little of it. It totally makes sense to start with those.

Comment by estimator on Ideas to Improve LessWrong · 2015-05-27T23:57:24.587Z · LW · GW

One simple UI improvement for the site: add a link from comments in the inbox to that comment in the context of its post; right now I have to click twice to get to the post and then scroll down to the comment.

Comment by estimator on The most important meta-skill · 2015-05-27T23:50:45.068Z · LW · GW

But these are the things pretty much everybody does while learning languages.

Comment by estimator on The most important meta-skill · 2015-05-27T23:06:27.261Z · LW · GW

Also, I'd like to compare your system against a common-sense reasoning baseline. What do you think are the main differences between your approach and the usual approaches to skill learning? What would be the difference in actions?

I'm asking because your guide contains quite a long list of recommendations/actions, many of which are used (probably intuitively/implicitly) by almost any sensible person. Also, some of the recommendations clearly have more impact than others. So, what happens if we apply the Pareto principle to your learning system? Which 20% are the most important? What is at the core of your approach?

Comment by estimator on The most important meta-skill · 2015-05-27T22:47:58.466Z · LW · GW

I meant something like this.

... take part in routine conversations; write & understand simple written text; make notes & understand most of the general meaning of lectures, meetings, TV programmes and extract basic information from a written document.

Comment by estimator on A resolution to the Doomsday Argument. · 2015-05-27T22:43:23.093Z · LW · GW

I don't think it's strange. Firstly, it does have distinguishing qualities; the question is whether they are relevant or not. So, you choose an analogy which shares the qualities you currently think are relevant; then you do some analysis of your analogy and come to certain conclusions. But it is easy to overlook a step in the analysis which happens to depend significantly on a property that you previously thought was irrelevant in the original model, and you can fail to see it, because it is absent in the analogy. So I think that double-checking results obtained by reasoning from analogy is a necessary safety measure.

As for specific examples: something like quantum consciousness by Penrose (although I don't actually believe it). Or any other reason why consciousness (not intelligence!) can't be reproduced in our computing devices (I don't actually believe that either).

Comment by estimator on A resolution to the Doomsday Argument. · 2015-05-27T22:15:22.731Z · LW · GW

That's a difficult question to answer, so I'll give you the first thing I can think of. It's still me, just a lower percentage of me. I'm not that confident that it can be put on a linear scale, though.

That is one of the reasons why I think binary-consciousness models are likely to be wrong.

There are many differences between brains and computers; they have different structure, different purpose, different properties. I'm pretty confident (>90%) that my computer isn't conscious now, and the consciousness phenomenon may have specific qualities which are absent from its counterpart in your analogy. My objection to using such analogies is that you can miss important details. However, they are often useful to illustrate one's beliefs.

Comment by estimator on The most important meta-skill · 2015-05-27T21:56:17.219Z · LW · GW

Nice, but beware reasoning after you've written the bottom line.

As for the actual content, I basically fail to see its area of applicability. For sufficiently complex skills -- say, math, languages, or football -- the decision-trees & how-to-guides approach will likely fail as too shallow; for isolated skills like changing a tire, complex learning approaches are overkill -- just google it and follow the instructions. Can you elaborate the languages example further? Because, you know, learning a bunch of phrases from a phrasebook in order to say a few words in a foreign country is a non-issue. Actually learning a language is. How would you apply your system to achieve intermediate-level language knowledge? Any other non-trivial skill-learning example would also suffice. What skills have you trained using your learning system, and how?

Comment by estimator on A resolution to the Doomsday Argument. · 2015-05-27T21:30:34.846Z · LW · GW

OK, suppose I come to you while you're sleeping and add/remove a single neuron. Will you wake up, in your model? Yes, because while you're naturally sleeping, many more neurons than that change. Now imagine that I alter your entire brain. Now the answer seems to be no. Therefore, there must be some minimal change to your brain that ensures a different person will wake up (i.e. with different consciousness/qualia). This seems strange.

You don't assume that the person who wakes up always has a different consciousness from the person who fell asleep, do you?

It would be the same computer, but different working session. Anyway, I doubt such analogies are precise and allow for reliable reasoning.

Comment by estimator on A resolution to the Doomsday Argument. · 2015-05-27T21:11:35.942Z · LW · GW

p("your model") < p("my model") < 50% -- that's how I see things :)

Here is another objection to your consciousness model. You say that you are unconscious while sleeping; so, at the beginning of sleep your consciousness flow disappears, and then appears again when you wake up. But your brain state is different before and after sleep. How does your consciousness flow "find" your brain after sleep? What if I, standing on another planet many light years away from Earth, build atom-by-atom a brain whose state is closer to your before-sleep brain state than your after-sleep brain state is?

The reason why I don't believe these theories with a significant degree of certainty isn't that I know some other brilliant consistent theory; rather, I think that all of them are more or less inconsistent.

Actually, I think that it's probably a mistake to consider consciousness a binary trait; but a non-binary consciousness assumption makes it even harder to find out what is actually going on. I hope that progress in machine learning or neuroscience will provide some insights.

Comment by estimator on A resolution to the Doomsday Argument. · 2015-05-27T20:42:13.889Z · LW · GW

That's a typo; I meant that my model doesn't imply continuous time. By the way, does it make sense to call it "my model" if my estimate of the probability of it being true is < 50%?

So, why do I think that consciousness requires continuity?

I guess you meant "doesn't require"?

I'd say that the continuity requirement is the main cause of the divergence in our plausibility rankings, at least.

What is your probability estimate of your model being (mostly) true?

Comment by estimator on The most important meta-skill · 2015-05-27T20:14:48.375Z · LW · GW

I've started commenting here recently, but I'm a long-time lurker (>1 year). Also, I was speaking about self-help articles in general, not conditioning on whether they are posted on LW -- which makes sense, because pretty much anyone can post on LW.

Now, I found a somewhat less extreme example of what I think is an OK self-help post even though it doesn't have scientific references, because a) the author told us what actual results he achieved and, more importantly, b) the author explained why he thinks the advice works in the first place.

Personally, I don't find your post consistent with my observations, but that's not my main objection -- my main objection is that throwing out instructions without any justification is bad practice, especially on such a controversial topic, and especially in a rationalist community.

Comment by estimator on A resolution to the Doomsday Argument. · 2015-05-27T19:58:50.496Z · LW · GW

I find a model plausible if it isn't contradicted by evidence and matches my intuitions.

My model doesn't imply discrete time; I don't think I can precisely explain why, because I basically don't know how consciousness works at that level; intuitively, just replace t + dt with t + 1. Needless to say, I'm uncertain of this, too.

Honestly, my best guess is that all these models are wrong.

Now, what arguments cause you to find your model plausible?

Comment by estimator on A resolution to the Doomsday Argument. · 2015-05-27T19:18:37.726Z · LW · GW

OK, I either wake up in a room with no envelope or die; which one happens depends (deterministically) on which envelope you have put in my room.

What exactly happens in the process of cloning certainly depends on the particular cloning technology; the real one is the one which shares a continuous line of conscious experience with me. The (obvious) way for an outsider to detect which one was real is to look at where it came from -- if it was built as a clone, then, well, it is a clone.

Note that I'm not saying that it's the true model, just that I currently find it more plausible; none of the consciousness theories I've seen so far is truly satisfactory.

I've read the Ebborian posts and wasn't convinced; a thought experiment is just a thought experiment, and there are many ways it can be flawed (that is true for all the thought experiments I proposed in this discussion, btw). But yes, that's a problem.

Comment by estimator on The most important meta-skill · 2015-05-27T18:37:23.758Z · LW · GW

So, taking a look at what you actually propose to do, this reduces to a) learning some phrases from a tourist phrasebook and b) learning the rest of the language while c) avoiding high-stakes situations where you need language knowledge. Reminds me of this.

Comment by estimator on The most important meta-skill · 2015-05-27T18:11:06.771Z · LW · GW

Articles on such topics are notorious for their bad average quality. Reformulating in Bayesian terms, the prior probability of your statements being true is low, so you should provide some proof or evidence -- otherwise, why should I (or anyone) believe you? Have you actually checked whether it works? Have you actually checked whether it works for somebody else?

I don't think that personal achievements are bullet-proof argumentation for such advice. Still, when I read something like this, I'm pretty sure that it contains valuable information, although it is probably a mistake to follow such advice verbatim anyway. So, if you have Hamming-level credentials, it will help.

As for your article, probably the only way to fix it is to add supporting evidence for your statements. What evidence supports them? Is there any psychological research to back up your claims? Why do you think this is an optimal (or near-optimal) way to learn skills?

This is a good self-help article. Can you see the reference list? :)

Comment by estimator on A resolution to the Doomsday Argument. · 2015-05-27T17:44:09.619Z · LW · GW

Do you think you won't awaken in a room with no in the envelope?

I think that I either wake up in a room with no in the envelope, or die, in which case my clone continues to live.

Yes, but I also think conscious experience is halted during regular sleep. Also, should multiple copies survive, his conscious experience will continue in multiple copies. His subjective probability of finding himself as any particular copy depends on the relative weightings (i.e. self-locating uncertainty).

I find this model implausible. Is there any evidence I can update on?

Comment by estimator on The most important meta-skill · 2015-05-27T17:33:07.160Z · LW · GW

I think that all self-help / "learning to learn" / etc. articles should contain a short summary giving us some reason to actually believe anything written below. Like references to relevant research, or the author's real-life achievements, or something. Generally, one shouldn't rely on personal anecdotes; but for self-help, even having a single data point is often too high a standard.

In your article, I couldn't find a single bit of evidence in support of your claims.

Comment by estimator on A resolution to the Doomsday Argument. · 2015-05-27T15:27:14.114Z · LW · GW

I think the problem with consciousness/qualia discussions is that we don't have a good set of terms to describe such phenomena, while also being unable to reduce them to other terms.

No, one copy will see 1, another 2, etc. Something like that will fork my consciousness, which has uncertain effects, which is why I proposed being asleep throughout.

I mean, one of the copies would be you (and share your qualia), while the others are forks of you. That's because I think that a) your consciousness is preserved by the branching process and b) you don't experience living in different branches, at least after you have observed their difference. So, if the quantum lottery works while you're awake, it requires look-ahead in time.

Now about sleeping. My best guess about consciousness is that we are sort-of conscious even in non-REM sleep phases and under anesthesia, and that halting (almost) all electrical activity in the brain doesn't preserve consciousness. That's derived from the requirement of continuity of experience, which I find plausible. But that's probably irrelevant to our discussion.

As far as I understand, in your model one's conscious experience is halted during the quantum lottery (i.e. sleep is some kind of temporary death). And then one's conscious experience continues in one of the surviving copies. Is this a correct description of your model?

Comment by estimator on A resolution to the Doomsday Argument. · 2015-05-27T14:05:52.942Z · LW · GW

OK, now imagine that the computer shows you the number n on its screen. What will you see? You say that both copies have your consciousness; will you see a superposition of numbers? I don't see how simultaneously being in different branches makes sense from the qualia viewpoint.

Also, let's remove sleeping from the thought experiment. It is an unnecessary complication; by the way, I don't think that consciousness flow is interrupted while sleeping.

And no, I'm currently unable to dissolve the hard problem of consciousness.

Comment by estimator on How do we learn from errors? · 2015-05-26T23:19:46.756Z · LW · GW

Why not both?

Comment by estimator on Open Thread, May 25 - May 31, 2015 · 2015-05-26T23:17:56.199Z · LW · GW

At least in math, a paper can actually be verified during peer review.

Comment by estimator on Less Wrong lacks direction · 2015-05-26T23:02:36.464Z · LW · GW

My impression is that inside LW they are usually assumed true, while outside LW they are usually assumed false or highly questionable. Again, I'm not saying that these theories are wrong, but the pattern looks suspicious; almost every non-mainstream LW belief can be traced back to Eliezer. What a coincidence. One of the possible explanations is the halo effect of the Sequences. Or they are actually underrated outside LW. Or my impressions are distorted.

Comment by estimator on Less Wrong lacks direction · 2015-05-26T22:34:51.804Z · LW · GW

Can you unpack "optimizing thought processes"? Under some definitions the statement is questionable, under others trivially true.

Also, the articles you've linked to describe techniques that are very popular outside LW -- so if they are overrated, it isn't a LW-specific mistake.

Comment by estimator on Less Wrong lacks direction · 2015-05-26T22:15:28.955Z · LW · GW

TDT, FAI (esp. CEV), acausal trading, MWI -- regardless of whether they are true or not, the level of criticism is lower than one would expect; either because of the halo effect or ADS.

Comment by estimator on A resolution to the Doomsday Argument. · 2015-05-26T21:53:26.240Z · LW · GW

I don't have a model which I believe with certainty even provided MWI is true.

I think that, given MWI, your consciousness is in any world in which you exist, so that if you kill yourself in the other worlds, you only exist in worlds that you didn't kill yourself. I'm not sure what else could happen; obviously you can't exist in the worlds you're dead in.

What happens if you die in a non-MWI world? Pretty much the same as in the case of MWI with random branch choice. If your random branch happens to be a bad one, you cease to exist, and maybe some of your clones in other branches are still alive.

So at time t, the data is already determined from the computer's perspective, but not from mine. At t+dt, the data is determined from my perspective, as I've awoken. In the time between t and t+dt, it's meaningless to ask what "branch" I'm in; there's no test I can do to determine that in theory, as I only awaken if I'm in the data=n branch. It's meaningful to other people, but not to me. I don't see anywhere that requires non-local laws in this scenario.

Non-locality is required if you claim that you (the copy of you which has your consciousness) will always wake up. Otherwise, it's just a twisted version of Russian roulette and has nothing to do with quantum mechanics.

At time t, the computer either shoots you or not. At time t + dt, its bullet kills you (or not). So you say that at time t you will go to the branch where the computer doesn't kill you. But such a choice of branch requires information from time t + dt (whether you are alive or not in that branch). So, physical laws have to perform a look-ahead in time to decide in which Everett branch they should put your consciousness.

Now, imagine that your (quantum) computer generates a random number n from a Poisson distribution. Then it will kill you after n days. Now n = ... what? Well, thanks to thermodynamics, your (and the computer's) lifespan is limited, so hopefully it will be a finite number -- but look, if the universe allowed unbounded lifespans, there would be a logical contradiction in the physical laws. Anyway, you can see that the look-ahead in time required after the random number generation can be arbitrarily large. That's what I mean by non-locality here.
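
(To be precise about "arbitrarily large": a Poisson-distributed n has unbounded support, since

    P(N = n) = \frac{\lambda^{n} e^{-\lambda}}{n!} > 0 \quad \text{for every integer } n \ge 0,

so there is no finite horizon the branch-choosing process could look ahead to and be guaranteed to cover every possible outcome.)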

Comment by estimator on Ideas to Improve LessWrong · 2015-05-26T19:26:21.473Z · LW · GW

What are you trying to improve on LW, and why? What is the purpose of improvements? What do you want LW to be like after you'll apply them?

Personally, I'd want LW to be an effective tool for learning how to apply rationality, discussing rationality and rationality-related topics, and developing new rationality techniques. Every instrument has its purpose; if you want to study, say, math, isn't it more effective to go to some math community and seek help and assistance there? If you want to chat like on Facebook, why not go to Facebook? If you have a brilliant startup idea, why not go to the (potentially much broader) community of people interested in such ideas? And so on.

The obvious, and very important, improvement is writing more high-quality posts on rationality-related topics, like the sequences.

The harder subsection mostly consists of technical improvements. Are they an actual bottleneck for LW, or are our needs mostly satisfied by nested comments?

[pollid:986]

Comment by estimator on A resolution to the Doomsday Argument. · 2015-05-26T18:35:43.904Z · LW · GW

I don't have a model which I believe with certainty, and I think it is a mistake to have one, unless you know sufficiently more than modern physics knows.

Why do you think that your consciousness always moves to the branch where you live, rather than at random? Quantum lotteries, quantum immortality and the like require not just MWI, but MWI with a bunch of additional assumptions. And if some flavor of QM interpretation violates causality, that is more an argument against such an interpretation than against causality.

The thing I don't like about this way of winning quantum lotteries is that it requires non-local physical laws. Imagine that a machine shoots you iff some condition is not fulfilled; you say that you will therefore find yourself in the branch where the condition is fulfilled. But the machine won't kill you instantly, so the choice of branch at time t must be made based on what happens at time t + dt.

Comment by estimator on Less Wrong lacks direction · 2015-05-25T21:50:28.119Z · LW · GW

It's just my impression; I don't claim that it is precise.

As for the recent post by Loosemore, I think that it is sane and well-written, and clearly required a substantial amount of analysis and thinking to write. I consider it a central example of high-quality non-LW-mainstream posts.

Having said that, I mostly disagree with its conclusions. All the reasoning there is based on the assumption that the AGI will be logic-based (CLAI, following the post's terminology), which I find unlikely. I'm 95% certain that if AGI is built anytime soon, it will be based on machine learning; in any case, the claim that CLAI is "the only meaningful class of AI worth discussing" is far from true.