Posts

Paris LW Meetup - LHC Exhibit - 17/01/2015 2015-01-01T12:22:01.678Z
Meetup : Paris LW Meetup - LHC Exhibit 2015-01-01T12:17:49.346Z
The VNM independence axiom ignores the value of information 2013-03-02T14:36:53.279Z
How to tell apart science from pseudo-science in a field you don't know ? 2012-09-02T10:25:41.694Z
Neil Armstrong died before we could defeat death 2012-08-25T19:49:02.906Z
Why space stopped captivating minds ? 2012-07-29T09:58:18.172Z
Interesting rationalist exercise about French election 2012-04-16T15:34:53.811Z
Is risk aversion really irrational ? 2012-01-31T20:34:13.529Z
Less Wrong and non-native English speakers 2011-11-06T13:37:29.702Z
Why would we think artists perform better on drugs ? 2011-10-30T14:01:33.566Z
A few analogies to illustrate key rationality points 2011-10-09T13:00:00.084Z
Signup to wiki broken ? 2011-09-12T18:52:18.995Z

Comments

Comment by kilobug on The Pyramid And The Garden · 2017-12-22T10:39:17.943Z · LW · GW

Just a small nitpicking correction: the metric system wasn't invented in the 1600s, but in the late 1700s, during the French Revolution.

Comment by kilobug on The Moral Of The Story · 2017-12-22T10:38:01.763Z · LW · GW

Just a small nitpicking correction: the metric system wasn't created in the 1600s, but in the late 1700s, during the French Revolution.

Comment by kilobug on Circles of discussion · 2016-12-16T13:05:44.021Z · LW · GW

Interesting proposal.

I would suggest one modification: a "probation" period for content, changing the rule "Content ratings above 2 never go down, except to 0; they only go up." to "Once content has stayed long enough (two days? one week?) at level 2 or above, it can never go down, only up", to make the system less vulnerable to the order in which content gets rated.

Comment by kilobug on Circles of discussion · 2016-12-16T13:02:36.660Z · LW · GW

The same as with "Content ratings above 2 never go down, except to 0": once content has been promoted to level 3 (or 4 or 5), it'll never go lower than that.

Comment by kilobug on 2016: a year in review in science · 2016-12-12T16:35:51.550Z · LW · GW

Something important, IMHO, is missing from the list: no new physics was discovered at the LHC, even running at 13 TeV. No SUSY, no new particle, nothing but a confirmation of all the predictions of the Standard Model.

It's relatively easy to miss because it's a "negative" discovery (nothing new), but since many were expecting some hints of new physics from the 2016 LHC runs, the confirmation of the Standard Model (and the death sentence it deals to many theories, like many forms of SUSY) is news.

Comment by kilobug on Mismatched Vocabularies · 2016-11-22T10:34:40.948Z · LW · GW

Answer 1 is not always possible - it works when you're answering on IRC or an Internet forum, but usually not in a real-life conversation.

As for #3, it is sometimes justified - there are people out there who will use unnecessarily obscure words just to appear smarter or impress people, or who will deliberately use unnecessarily complex language just to obfuscate the flaws in their reasoning.

You're right that #1 is (when available) nearly always the best reaction, and that the cases where #3 is true (unless you're speaking to someone trying to sell you homeopathy, or to some politicians) are rare, but people having mis-calibrated heuristics is sadly a reality we have to deal with.

Comment by kilobug on The 12 Second Rule (i.e. think before answering) and other Epistemic Norms · 2016-09-06T14:49:17.718Z · LW · GW

Sounds like a good idea, but from a practical point of view, how do you count those 12 seconds? I can count 12 seconds more or less accurately, but I can't do that as a background process while trying to think hard. Do you use some kind of timer/watch/clock? Or does the one asking the question count on their fingers?

I know the "12 seconds" isn't a magical number - if it ends up being "10" or "15" it won't change much - but if you give a precise number (not just "think before answering"), you have to somehow try to respect it.

Comment by kilobug on Inverse cryonics: one weird trick to persuade anyone to sign up for cryonics today! · 2016-08-17T09:04:31.684Z · LW · GW

I expect that the utility per unit time of future life is significantly higher than what we have today, even taking into account loss of social network.

Perhaps, but that's highly debatable. Anyway, my main point was that the two scenarios (bullet / cryonics) are not anywhere near mathematically equivalent; there are a lot of differences, both in favor of and against cryonics, and pretending they don't exist is not helping. If anything, it just reinforces the Hollywood stereotype of the "Vulcan rationalist" who doesn't have any feelings or emotions, and who basically fails to understand what makes life worth living. And that's pretty harmful from a PR point of view.

Of course this asymmetry goes away if you persuade your friends and family to sign up too.

Even then it's not the case, unless everyone dies and is frozen at the same time. If I sign up for cryonics, die tomorrow and am resurrected in 200 years, and my 4-year-old niece signs up for cryonics as an adult, dies in 80 years and is also resurrected in 200 years, she'll still have grown up without her uncle, and I will still have missed her childhood - in fact, she would likely not even remember me, and the 84-year-old person she would be wouldn't be much like the one I remembered.

I think it's probably 2-10 times better in utility than the best we have today.

Perhaps. There is a lot of uncertainty about that (which compounds with the odds of cryonics working at all), and while there are possible futures in which it's the case, it's not certain at all - especially for someone from now.

But you also forget a very important point - utility for other people. Perhaps I would be happier in the future than now - but to take the same example, my niece would still miss her uncle (and that would be even worse if I were a father, not "just" an uncle), and have less utility in her childhood because of it. And I value her life more than my own.

Comment by kilobug on Inverse cryonics: one weird trick to persuade anyone to sign up for cryonics today! · 2016-08-16T16:11:34.665Z · LW · GW

Hum, first, I find your numbers very unlikely - cryonics costs more than $1/day, and definitely has less than a 10% chance of working (between the brain damage done by the freezing, the chance that the freezing can't be done in time, disaster striking the storage place before resurrection, the risk of societal collapse, the unwillingness of future people to resurrect you, ...).

Then, the "bullet" scenario isn't comparable to cryonics, because it completely ignores all the context and social network. A significant part of why I don't want to die (not the only reason, by far, but definitely not a minor one either) is that there are people I care about who enjoy having me around and/or depend on me financially, at least partially, and I enjoy spending time with them. If I were to die tomorrow of a bullet in the head, it would deprive me of time with them and them of time with me. If I were to die of whatever other cause, and then be resurrected centuries in the future, it wouldn't change anything for them (unless they sign up for cryonics too, but that's a wholly different issue).

That doesn't mean cryonics isn't worth it at all - but the two scenarios are far from mathematically equivalent. And I would definitely pay more than $1 a day to keep the "I'm cut off from all the people I care about" scenario from happening.

Comment by kilobug on Open thread, Jul. 18 - Jul. 24, 2016 · 2016-07-23T07:19:52.536Z · LW · GW

Well, I would consider it worrying if a major public advocate of antideathism were also publicly advocating a sexuality that is considered disgusting by most people - like, say, pedophilia or zoophilia.

It is an unfortunate state of the world, because sexual (or political) preferences shouldn't have any significant impact on how you evaluate someone's position on unrelated topics, but that's how the world works.

Consider someone who never really thought about antideathism, who opens the newspaper in the morning, reads about that person who publicly advocates disgusting political/sexual/whatever opinions, and then learns in the same article that he also "considers death to be a curable disease". What will happen? The reader will bundle "death is a curable disease" with the kind of opinions disgusting persons have, and reject it. That's what I'm worried about - it's bad in terms of PR when the spokesperson for something unusual you support also happens to be considered "disgusting" by many.

The same happens, for example, when Dawkins takes positions that are disgusting to many people about what he calls "mild pedophilia" - regardless of whether Dawkins is right or wrong about it, it reflects badly on atheism that a major public advocate of atheism also happens to be a public advocate of something considered "disgusting" by many. Except that it's even worse in the Thiel case, because atheism is relatively mainstream, so it's unlikely people will learn about atheism and about Dawkins defending "mild pedophilia" on the same day.

And btw, I'm not saying I have a solution to that problem - saying Peter Thiel shouldn't be "allowed" to express his political views (however much I dislike them) is neither possible nor even desirable - but it's still worrying, for the cause of antideathism.

Comment by kilobug on Open thread, Jul. 18 - Jul. 24, 2016 · 2016-07-22T20:52:06.961Z · LW · GW

"Infinite" is only well-defined as the precise limit of a finite process. When you say "infinite" in the absolute, it's a vague notion that is very hard to manipulate without making mistakes. One of my university-level maths teachers kept saying that speaking of "infinity" without it being the precise limit of something finite is equivalent to dividing by zero.
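One way to make the teacher's point concrete (my illustration, not from the comment): the standard epsilon-N definition unpacks a statement "about infinity" into a statement quantified only over finite numbers.

```latex
% "1/n goes to 0 as n goes to infinity", unpacked as a purely finite statement:
\lim_{n \to \infty} \frac{1}{n} = 0
\quad\Longleftrightarrow\quad
\forall \varepsilon > 0,\ \exists N \in \mathbb{N},\ \forall n \ge N:\ \left|\frac{1}{n}\right| < \varepsilon
```

No actual "infinite object" appears on the right-hand side; the infinity on the left is just shorthand for the finite quantifiers on the right.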

Comment by kilobug on Open thread, Jul. 18 - Jul. 24, 2016 · 2016-07-22T20:49:13.782Z · LW · GW

I am, and not just about MIRI/AI safety, but also about other topics like anti-deathism. Just today I read in a major French newspaper an article explaining how Peter Thiel is the only one from Silicon Valley to support the "populist demagogue Trump", and, in the same article, that he also has this weird idea that death might ultimately be a curable disease...

I know that reverse stupidity isn't intelligence, and about the halo effect, and that Peter Thiel having disgusting (to me, and to most French citizens) political tastes has no bearing on him being right or wrong about death, but many people will end up associating antideathism with being a Trump-supporting lunatic :/

Comment by kilobug on Zombies Redacted · 2016-07-08T08:42:25.889Z · LW · GW

Imagine a cookie like Oreo to the last atom, except that it's deadly poisonous, weighs 100 tons and runs away when scared.

Well, I honestly can't. When you tell me that, I picture a real Oreo, and then at its side a cartoonish Oreo with all those weird properties, but then trying to assume the microscopic structure of the cartoonish Oreo is the same as that of a real Oreo just fails.

It's as if you told me to imagine an equilateral triangle which is also a right triangle. Knowing non-Euclidean geometry, I sure can cheat around it, but assuming I don't know about non-Euclidean geometry, or you explicitly add the constraint of staying Euclidean, it just fails. You can hold the two sets of properties next to each other, but not unite them.

Or if you tell me to imagine an arrangement of 7 small stones as a rectangle which isn't a 7x1 line. I can hold the image of 7 stones and the image of a 4x2 rectangle side by side, but uniting the two just fails. Or it leads to 4 stones in a line with 3 stones in a line below, which is no longer a rectangle.
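The stone-arrangement example can be checked mechanically - a quick sketch (the `rectangles` helper is my own illustration, not from the comment):

```python
def rectangles(n):
    """All ways to arrange n stones in an a x b grid (a rows of b stones each)."""
    return [(a, n // a) for a in range(1, n + 1) if n % a == 0]

# 7 is prime, so the only "rectangles" are the degenerate 1x7 and 7x1 lines.
print(rectangles(7))   # [(1, 7), (7, 1)]

# 8 stones, by contrast, admit a genuine 2x4 rectangle.
print(rectangles(8))   # [(1, 8), (2, 4), (4, 2), (8, 1)]
```

A non-degenerate rectangle exists exactly when n is composite, which is why the imagined 7-stone rectangle can never cohere.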

When you multiply constraints to the point of logical impossibility, imagination just breaks - it holds the properties in two side-by-side sets, unable to reconcile them into a single coherent entity.

That's what your weird Oreo or zombies do to me.

Comment by kilobug on Zombies Redacted · 2016-07-07T07:20:11.751Z · LW · GW

Because consciousness supervenes upon physical states, and other brains have similar physical states.

But why, and how? If consciousness is not a direct product of physical states, if p-zombies are possible, how can you tell apart the hypotheses "every other human is conscious", "only some humans are conscious", "I'm the only one conscious, by luck", and "everything, including rocks, is conscious"?

Comment by kilobug on Zombies Redacted · 2016-07-06T07:19:45.517Z · LW · GW

Is "it" zombies, or epiphenomenalism?

The hypothesis I was replying to, the "person with an inverted spectrum".

Comment by kilobug on Zombies Redacted · 2016-07-05T21:34:37.223Z · LW · GW

It definitely does matter.

If you build a human-like robot, remotely controlled by a living human (or by a brain-in-a-vat), and interact with the robot, it'll appear to be conscious but isn't, and yet it wouldn't be a zombie in any way: what actually produces the responses about being conscious would be the human (or the brain), not the robot.

If the GLUT was produced by a conscious human (or a conscious human simulation), then it's akin to a telepresence robot, only slightly more remote (just as the telepresence robot is only slightly more remote than a phone).

And if it "sprung up of pure randomness"... if you are ready to accept such a level of improbability, you can accept anything - like the hypothesis that no human actually wrote what I'm replying to, and it's just the product of cosmic rays hitting my computer in the exact pattern for such a text to be displayed in my browser. Or that Shakespeare's works were actually written by monkeys typing at random. If you start accepting such ridiculous levels of improbability, something even below one chance in a googolplex, you are just accepting everything and anything, making all attempts to reason or discuss pointless.

Comment by kilobug on Zombies Redacted · 2016-07-05T14:03:21.899Z · LW · GW

Did you read the GAZP vs GLUT article? In the GLUT setup, the conscious entity is the conscious human (or actually, more like a googolplex of conscious humans) that produced the GLUT, and the robot replaying the GLUT is no more conscious than a phone transmitting the answer from one conscious human to another - which is basically what it is doing: replaying the answer given by a previous, conscious human to the same input.

Comment by kilobug on Zombies Redacted · 2016-07-05T12:27:37.371Z · LW · GW

Sorry to go meta, but could someone explain to me how "Welcome back!" can be at -1 (0 after my upvote) and yet "Seconded." at +2?

Doesn't sound like very consistent scoring...

Comment by kilobug on Zombies Redacted · 2016-07-05T12:22:15.339Z · LW · GW

Not having a solution doesn't prevent one from criticizing a hypothesis or theory on the subject. I don't know what the prime factors of 4567613486214 are, but I know that "5" is not a valid answer (numbers having 5 among their prime factors end in 5 or 0) and that "blue" doesn't even have the shape of a valid answer. So saying p-zombism and epiphenomenalism aren't valid answers to the "hard problem of consciousness" doesn't require having a solution to it.
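The "5" half of the example is just the last-digit divisibility rule, which is trivial to verify without ever factoring the number:

```python
# A number is divisible by 5 iff its decimal representation ends in 0 or 5,
# so 5 can be ruled out as a prime factor by inspecting the last digit alone.
n = 4567613486214
assert (n % 5 == 0) == (str(n)[-1] in "05")
print(n % 5 == 0)  # False: 5 is not among the prime factors
```

Ruling out a wrong answer took one digit of the number; producing the full factorization is a much harder problem - which is exactly the asymmetry the comment is pointing at.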

Comment by kilobug on Zombies Redacted · 2016-07-05T12:03:08.990Z · LW · GW

Or, more likely:

d) the term "qualia" isn't properly defined, and what turchin means by "qualia" isn't exactly what VAuroch means by "qualia" - basically an illusion-of-transparency / distance-of-inference issue.

Comment by kilobug on Zombies Redacted · 2016-07-05T12:01:27.603Z · LW · GW

I would like to suggest zombies of second kind. This is a person with inverted spectrum. It even could be my copy, which speaks all the same philosophical nonsense as me, but any time I see green, he sees red, but names it green. Is he possible? I could imagine such atom-exact copy of me, but with inverted spectrum.

I can't.

As a reductionist and materialist, it doesn't make sense to me - the feelings of "red" and "green" are a consequence of the way your brain is wired and structured; an atom-exact copy would have the same feelings.

But leaving aside the reductionist/materialist view (which, after all, is part of the debate), it still wouldn't make sense. The special quality that "red" has in my consciousness, the emotions it calls upon, the analogies it triggers, have consequences on how I would invoke the color "red" in poetry, or use it in a drawing. And on how I would feel about a poem or drawing using "red".

If seeing #ff0000 triggers exactly the same emotions, feelings and analogies in the consciousness of your clone, then he's having the same experience as you, and he's seeing "red", not "green".

Comment by kilobug on Zombies Redacted · 2016-07-05T11:50:40.117Z · LW · GW

Another, more directly worrying question is why - or whether - the p-zombie philosophers postulate that other persons have consciousness.

After all, if you can speak about consciousness exactly like we do and yet be a p-zombie, why doesn't Chalmers assume he's the only one who is not a zombie, and therefore let go of all forms of caring for others and all morality?

The fact that Chalmers and people like him still behave as if they consider other people to be as conscious as they are probably points to them having belief-in-belief, more than actual belief, in the possibility of zombieness.

Comment by kilobug on Zombies Redacted · 2016-07-05T11:47:44.794Z · LW · GW

I agree with your point in general, and it does speak against an immaterial soul surviving death, but I don't think it necessarily applies to p-zombies. The p-zombie hypothesis is that the consciousness "property" has no causal power over the physical world, but it doesn't say there is no causality the other way around: that the state of the physical brain can't affect consciousness. So a traumatic brain injury would (through some unexplained, mysterious mechanism) be reflected in that immaterial consciousness.

But sure, it's yet more epicycles.

Comment by kilobug on State your physical account of experienced color · 2016-03-31T07:06:33.294Z · LW · GW

No, it is much simpler than that - "green" is a wavelength of light, and "the feeling of green" is how the information "green" is encoded in your information processing system; that's it. No special ontology for qualia or whatever. Qualia aren't a fundamental component of the universe like quarks and photons are; they're only an encoding of information in your brain.

But yes, how reality is encoded in an information system sometimes doesn't match the external world; the information system can be wrong. That's a natural, direct consequence of that ontology, not a new postulate, and definitely not another ontology. The fact that "the feeling of green" is how "green wavelength" is encoded in an information processing system automatically implies that if you perturb the information processing system by giving it LSD, it may very well encode "green wavelength" without any green wavelength actually being present.

In short, ontology is not the right level at which to look at qualia - a quale is information in a (very) complex information processing system; it has no fundamental existence. Trying to explain it at an ontological level just makes you ask invalid questions.

Comment by kilobug on "3 Reasons It’s Irrational to Demand ‘Rationalism’ in Social Justice Activism" · 2016-03-30T15:49:03.457Z · LW · GW

First, "social justice" is a broad and very diverse movement of people wanting to reduce the amount of (real or perceived) injustice people face for a variety of reasons (skin color, gender, sexual orientation, place of birth, economic position, disability, ...). As in any such broad political movement, some subparts of the movement are more rational than others.

Overall, "social justice" is still mostly a force of reason and rationality against the most frequent and pervasive forms of irrationality in society, which are mostly religion-based, but yes, it varies across subparts of the movement. It is, historically, a byproduct of the Enlightenment, after all.

That said, there are several levels of "rationality" and "rationalism", and it might be very rational to make irrational demands.

When you make demands in a social and political context, you know your demands will usually not be completely fulfilled. Asking for something "impossible" may be the best way, from a game-theoretical point of view, to end up with something not too far from what you really want - the same way as when you're bargaining over the price of an item in an informal market (as in Latin America or the Maghreb).

It can also be a powerful way to make people think about a question in novel ways and try to find alternative solutions which aren't part of the hypothesis space they usually wander. "Abolish prisons" may seem an irrational demand, and it's very likely that something "like prison" will be required for a few very dangerous individuals, but it can make people think about possible alternatives to prison, something they don't usually do, and which could very well be used for 90% or even perhaps 99% of the people currently in prison.

Of course, making "irrational" demands can also be counterproductive - it can discredit the movement, make you appear to be a lunatic, ... - but it's a powerful tool to have in your toolbox when you rationally pursue deep changes in society.

Comment by kilobug on Genetic "Nature" is cultural too · 2016-03-30T08:11:54.941Z · LW · GW

One issue I have with statements like "~50% of the variation is heritable and ~50% is due to non-shared environment" is that they assume the two kinds of factors are unrelated, and that you can take an arithmetic average of the two.

But very often the effects are not unrelated, and it works more like a geometric average. In many ways, it's more that genetics gives you a potential, an ease of learning/training yourself, but then it depends on your environment whether you actually develop that potential or not. Someone with a very high "genetic IQ" who is underfed, kept isolated, and not even taught to read will likely not be a very bright adult; it won't be "(genes + environment)/2" but more "(genes * environment)".

Other times, it's more that the environment can help compensate for the genes, offsetting a disability, in a way that you end up with "min(genes, environment)" rather than an average.

The truth is that the interaction between genes and environment is much more complicated than a mere weighted arithmetic average, and this is rarely considered seriously when people speak of "how much of it is genetic, how much is environmental".
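The three interaction models mentioned above can be sketched with toy numbers - purely illustrative scores in [0, 1], not an actual heritability model:

```python
# Three toy models for combining a "genetic potential" score and an
# "environment quality" score. The scores and model names are my own
# illustrative assumptions, not from the comment or any real study.
def arithmetic(g, e):
    return (g + e) / 2        # independent, additive contributions

def geometric(g, e):
    return (g * e) ** 0.5     # environment needed to realize the potential

def bottleneck(g, e):
    return min(g, e)          # whichever factor is scarcer dominates

g, e = 0.9, 0.1  # high genetic potential, severely deprived environment
print(arithmetic(g, e))           # 0.5 - additive model still looks moderate
print(round(geometric(g, e), 3))  # 0.3 - multiplicative model is much lower
print(bottleneck(g, e))           # 0.1 - bottleneck model is lowest
```

The same pair of inputs yields very different outcomes depending on the interaction model, which is the comment's point: quoting "50%/50%" silently assumes the additive one.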

Comment by kilobug on State your physical account of experienced color · 2016-03-23T16:21:15.657Z · LW · GW

The experience of green has nothing to with wavelengths of light. Wavelengths of light are completely incidental to the experience.

Not at all. The experience of green is the way our information processing system internally represents "light of green wavelength", nothing else. That voluntarily messing with your cognitive hardware by taking drugs, or background maintenance tasks, or "bugs" in the processing system can lead to an "experience of green" when there is no real green to be perceived doesn't change anything about it - the experience of green is the way "green wavelength" is encoded in our information processing system, nothing less, nothing more.

Comment by kilobug on The ethics of eating meat · 2016-02-22T15:39:56.527Z · LW · GW

Do you think there's something wrong about all that? Because it seems obviously reasonable to me.

Well, perhaps it is a reason of "cognitive simplicity", but it really feels like a very artificial line when someone refuses to eat meat in every situation, with all the associated consequences - like being invited to relatives' for Christmas Eve dinner and not eating meat, putting an extra burden on the host who has to cook a secondary vegetarian meal - and yet not caring much about the rats that are regularly killed in the basement of their apartment building by pest control.

It feels more like a religious interdiction than a utilitarian decision. There are people who avoid eating meat, but do so occasionally ("flexitarians", I think they are called). That appears much more reasonable than a strict "no meat" policy: you admit that killing animals is something society has to do anyway, so you try to avoid it, but not in a strict manner.

I myself have lots of "ethical behaviors" - like trying to buy fair trade products when I can for stuff like tea, coffee, chocolate, ..., because I want third-world producers to be treated decently. But I know that my computer was probably assembled by workers in sweatshops, and if I'm offered non-fair-trade tea at a relative's I won't refuse it.

Comment by kilobug on The ethics of eating meat · 2016-02-22T13:57:58.296Z · LW · GW

I guess the average driver kills at most one animal ever by bumping into them, whereas the average meat-eater may consume thousands of animals.

There we touch on another problem with the "no meat eating" thing: where do you draw the line? Would people who refuse to eat chicken and beef be OK with eating shrimp or insects? What about fish - is it "meat" and unethical? Because whenever you drive, you kill hundreds of flies and butterflies and the like, which are animals.

So where do you draw the line - vertebrates? Eating shrimp and insects would be fine? But it's not like a chicken or a cow has lots of cognitive abilities, so it feels quite arbitrary to me.

Comment by kilobug on The ethics of eating meat · 2016-02-22T11:10:54.964Z · LW · GW

I always felt that argument 1 is a bit hypocritical and not very rational. We kill animals constantly, for many reasons - farming, even for vegetables, requires killing rodents and birds to prevent them from eating the crops; we kill rats and other pests in our buildings to keep them from transmitting disease and damaging cables; we regularly kill animals by bumping into them when we drive a car or take a train or a plane; ... And of course, we massively take living space away from animals, leading them to die.

So why stop eating meat, and yet disregard all the other multiple cases in which our technological civilization massively kills animals? I personally don't think most animals matter from a utilitarian point of view (they have no consciousness), but if they did, "not eating meat" wouldn't be enough, and eating meat from "dumb" fish or chicken would be less of a violation of ethics than killing "smart" rats for pest control.

Reason 2 would prevent eating factory-farmed meat, but it wouldn't prevent eating meat from less intensive forms of meat production (or from wild game), which is usually available in supermarkets here in France, at a slightly higher price.

Reason 4 is just false taken in its absolute form - there are several studies showing that eating too much meat (especially processed meat) is harmful, but so far it seems some kinds of meat (like chicken) are pretty harmless, and that eating a bit of meat is better health-wise than not eating any.

Reasons 3 and 5 could justify eating less meat, but not eating no meat at all.

So with the available data, I would recommend eating perhaps less meat (for reasons 3, 4 and 5), less high-fat processed meat (like bacon), and trying to buy meat from more "humane" farms (for reason 2), but not stopping eating meat completely.

Comment by kilobug on Consciousness and Sleep · 2016-01-08T14:18:45.288Z · LW · GW

Regular sleep may not suspend consciousness (although it can very well be argued that in some phases of sleep it does), but anesthesia, deep hypothermia, coma, ... definitely do, and are very valid examples to bring up in the "teleporter" debate.

I've yet to see a definition of consciousness that doesn't have problems with all those states of "deep sleep" (which most people don't have any trouble with) while still saying it's not "the same person" in the teleporter case.

Comment by kilobug on Voiceofra is banned · 2015-12-24T08:49:08.428Z · LW · GW

+1 for something like "no more than 5 downvotes/week on content which is more than a month old", but be careful that a new comment on an old article doesn't count as old content.

Comment by kilobug on In favour of total utilitarianism over average · 2015-12-23T08:39:13.565Z · LW · GW

There is no objective, absolute morality that exists in a vacuum. Our morality is a byproduct of evolution and culture. Of course we should use rationality to streamline and improve it, not limit ourselves to the intuitive version that our genes and education gave us. But that doesn't mean we can streamline it to the point of a simple average or sum and yet have it remain even roughly compatible with our intuitive morality.

Utility theory, the prisoner's dilemma, Occam's razor, and many other mathematical structures put constraints on what a self-consistent, formalized morality has to be like. But they can't and won't pinpoint a single formula in the huge hypothesis space of morality; we'll always have to rely heavily on our intuitive morality in the end. And that one isn't simple, and can't be made that simple.

That's the whole point of the CEV: finding a "better morality", one that we would follow if we knew more and were more what we wished we were, but that remains rooted in intuitive morality.

Comment by kilobug on In favour of total utilitarianism over average · 2015-12-22T10:12:32.559Z · LW · GW

In the same way that human values are complicated and can't be summarized as "seek happiness!", the way we should aggregate utility is complicated and can't be summarized with just a sum or an average. Trying to use too simple a metric will lead to ridiculous cases (the utility monster, ...). The formula we should use to aggregate individual utilities is likely to involve total, median, average, Gini, and probably other statistical tools, and finding it is a significant part of finding our CEV.
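A toy sketch of why a single statistic isn't enough - the utility numbers are made up, and the `gini` helper is the standard mean-absolute-difference formula, my own addition rather than anything from the comment:

```python
from statistics import mean, median

def gini(xs):
    """Gini coefficient via mean absolute difference (0 = equal, higher = less equal)."""
    n = len(xs)
    diffs = sum(abs(a - b) for a in xs for b in xs)
    return diffs / (2 * n * n * mean(xs))

equal   = [5, 5, 5, 5]    # modest but evenly distributed utilities
monster = [17, 1, 1, 1]   # a "utility monster" hoarding almost everything

print(sum(equal), sum(monster))        # 20 20  - total utilitarianism: a tie
print(mean(equal), mean(monster))      # 5 5    - average utilitarianism: also a tie
print(median(equal), median(monster))  # 5 1    - median strongly prefers "equal"
print(round(gini(equal), 2), round(gini(monster), 2))  # 0.0 0.6
```

Sum and average alone can't tell the two populations apart; it takes distribution-sensitive statistics like the median or the Gini coefficient to register the difference - hence the point that any acceptable aggregation formula will have to combine several such tools.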

Comment by kilobug on timeless quantum immortality · 2015-12-07T08:46:30.752Z · LW · GW

The MWI doesn't necessarily mean that every possible event, however unlikely, "exists". As long as we don't know where the Born rule comes from, we just don't know.

Worlds in MWI aren't discrete and completely isolated from each other; they are more like ink stains on paper than clearly delimited blobs, where "counting the blobs" can't be defined in a non-ambiguous way. There are hypotheses (sometimes called "mangled worlds") under which worlds of too small a probability (ink stains not thick enough) are unstable and get "contaminated" by "nearby" high-probability worlds.

But the main issue is that as long as we don't have a formal derivation of the Born rule inside MWI, we can't make any formal analysis of things like QI. We are left with, at best, a semi-intuitive analysis of what "MWI" means, but QI being highly counter-intuitive, a semi-intuitive analysis breaks down there.

Comment by kilobug on LessWrong 2.0 · 2015-12-03T12:32:54.110Z · LW · GW

Personally, I liked LW for being an integrated place with all of that: the Sequences, interesting posts and discussions between rationalists/transhumanists (be it original thoughts/viewpoints/analyses, news related to those topics, links to related fanfiction, book suggestions, ...), and the meetup organization (I went to several meetups in Paris).

If that were to be replaced by many different things (one for news, one or more for discussion, one for meetups, ...), I probably wouldn't bother.

Also, I'm not on Facebook and would not consider going there. I think replacing the open ecosystem of the Internet with a proprietary platform is a very dangerous trend for the future of innovation, and I oppose the global surveillance that Facebook is a part of. I know we are entering politics, which is considered "dirty" by many here, but politics is part of the Many Causes, and I don't think we should alienate people for political reasons. The current LW is politically neutral and allows "socialists" to discuss without much friction with "libertarians", which is part of its merits, and we should keep that.

Comment by kilobug on Gatekeeper variation · 2015-08-10T13:41:27.028Z · LW · GW

This won't work, like all other similar schemes, because you can't "prove" the gatekeeper down to the quark level of the hardware it's made of (so you're vulnerable to some kind of side-attack, like the memory bit-flipping attack that was discussed recently), nor shield the AI from communicating through side channels (like varying the temperature of its internal processing unit, which in turn will influence the air conditioning system, ...).

And that's not even considering that the AI could actually discover new physics (new particles, ...) and have some ability to manipulate them with its own hardware.

This whole class of approaches can't work, because there are just too many possible side-attacks and side channels of communication, and you can't formally prove none of them are available without making the proof over the whole system (AI + gatekeeper + power generator + air conditioner + ...) down at the Schrödinger-equation level.

Comment by kilobug on How to escape from your sandbox and from your hardware host · 2015-08-02T07:22:58.456Z · LW · GW

To be fair, the DRAM bit-flipping attack doesn't work on ECC RAM, and any half-decent server (especially one running an AI) should have ECC RAM.

But the main idea remains, yes: even a program proven to be secure can be defeated by attacking one of the assumptions made in the proof (such as the hardware being 100% reliable, which it rarely is). Proving a program secure starting from Schrödinger's equation applied to the quarks and electrons the computer is made of is way beyond our current abilities, and will remain so for a very long time.

Comment by kilobug on Agency is bugs and uncertainty · 2015-06-08T14:31:05.427Z · LW · GW

I see your point, but I think you're confusing a partial overlap with an identity.

There are many bugs/uncertainties that appear as agency, but there are also many that don't (as you said about true randomness), and there are also behaviors that are actually smart and appear as agency because of that smartness. For example, I was delighted with Emacs the first time I realized that if I asked it to replace "blue" with "red", it would also replace "Blue" with "Red" and "BLUE" with "RED"; I got the same "feeling of agency" there that I can get from bugs.

So I wouldn't say that agency is bugs, but that we have evolved to mis-attribute agency to things that are dangerous/unpleasant (because it's safer to wrongly attribute agency to something that doesn't have it than to fail to attribute it to something that does), the same way our ancestors used to see the sun, storms, volcanoes, ... as having agency.

Agency is something different, hard to pinpoint exactly (philosophers have been at it for centuries), but it involves the ability to have a representation of reality, to plan ahead for a goal, a complexity of representation, and an ability to explore the solution-space in a way that will end up surprising us, not because of bugs, but because of its inherent complexity. And we have evolved to mis-attribute agency to things which behave in unexpected ways. But that's a bug of our own agency-detection, not a feature of agency itself.

Comment by kilobug on The lymphatic system is found to connect to the Central Nervous System · 2015-06-08T13:22:52.065Z · LW · GW

I'm really skeptical of claims like "the 'thinking unit' is really the whole body"; they tend to discard quantitative considerations for purely qualitative ones.

Yes, the brain is influenced by, and influences, the whole body. But that doesn't mean the whole body has the same importance in the thinking. The brain is also influenced by lots of external factors (such as ambient light or sounds, ...). If, as soon as there is a "connection" between two parts, you say "it's the whole system that does the processing", you'll just end up considering the solar system as a whole, or even the entire event-horizon sphere.

There is abundant evidence that, while your body and your environment have a significant influence on your thinking, it's just influence, not a fundamental part of the cognition. For example, people who have grafts or amputations rarely change personality, memory or cognitive abilities in any way comparable to what brain damage can do.

Comment by kilobug on Perceptual Entropy and Frozen Estimates · 2015-06-04T08:43:28.827Z · LW · GW

A little nitpicking about the "2 dice" thing: usually when you throw two dice, it doesn't matter which die gives which result. Sure, you could use colored dice and have "blue 2, red 3" be different from "blue 3, red 2", but that's very rarely the case. Usually you take the sum (or look for patterns like doubles), so "2, 3" and "3, 2" are equivalent, and in that case the entropy isn't double that of one die, but lower.

What you wrote is technically right, but it goes against the common usage of dice, so it would be worth adding a footnote or clarification about that, IMHO.
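To make the quantitative point concrete, here is a minimal Python sketch (my own illustration, not part of the original comment) comparing the Shannon entropy of an ordered (colored) pair of dice with that of an unordered pair:

```python
import math

def entropy(probs):
    """Shannon entropy, in bits, of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

one_die = entropy([1/6] * 6)          # ~2.585 bits

# Ordered (colored) pair: 36 equally likely outcomes.
ordered = [1/36] * 36                 # entropy ~5.170 bits = exactly 2 * one_die

# Unordered pair: 6 doubles (prob 1/36 each) + 15 mixed pairs (prob 2/36 each).
unordered = [1/36] * 6 + [2/36] * 15  # entropy ~4.337 bits, strictly lower
```

So the unordered roll carries about 4.34 bits instead of 5.17: more than one die, but less than double, exactly as the comment says.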

Comment by kilobug on Less Wrong lacks direction · 2015-05-26T08:18:39.443Z · LW · GW

I'm not really sure the issue is about "direction"; it's more about having people with enough time and ideas to write awesome (or at least interesting) posts like the Sequences (the initial ones by Eliezer or the additional ones by various contributors).

What I would like to see are sequences of posts that build on each other, starting from the basics and going into deep things (a bit like the Sequences). It could be collective work (and then need a "direction"), but it could also be the work of a single person.

As for myself, I did write a few posts (a few in Main and a few in Discussion), but the reason I haven't written recently is mostly three issues:

  1. Lack of time, like I guess many of us.

  2. The feeling of not being "good enough". That's the problem with a community of "smart" people like LW, with high-quality base content (the Sequences): it's a bit intimidating.

  3. The "taboo" subjects (like politics) which I do understand and respect, but they limit what I could write about.

There are a few things I would like to write about, but either I feel I lack the skill/knowledge to do it at LW's level (point 2) or they border too much on the "taboo" subjects (point 3).

Comment by kilobug on Astronomy, space exploration and the Great Filter · 2015-04-22T09:07:38.527Z · LW · GW

I don't see why it's likely that one of the numbers has to be big. There really are lots of complicated steps you need to cross to go from inert matter to space-faring civilizations; it's very easy to point out a dozen such steps that could fail in various ways or just take too long, and there are many disasters that can wipe everything out.

If you have a long ridge to climb in a limited time and most people fail to do it, it's not very likely that there is one very specific part of it which is very hard; it's more likely (unless you have actual data showing most people fail at the same place) that there are lots of moderately difficult parts, and few people succeed in all of them in time.

Or if you have a complicated project that takes 4x longer than expected, it's much less likely that there was a single big difficulty you didn't foresee than many small-to-moderate unforeseen difficulties stacking on top of each other. The planning fallacy isn't usually due to black swans, but to accumulating smaller factors. It's the same here.

Comment by kilobug on Astronomy, space exploration and the Great Filter · 2015-04-20T11:35:17.431Z · LW · GW

There is something that really bothers me about the "Great Filter" idea/terminology: it implies that the filter is a single event (which is either in the past or the future).

My view on the "Fermi paradox" is not that there is a single filter, cutting ~10 orders of magnitude (ie, from 10 billions of planets in our galaxy with could have a life to just one), but more a combination of many small filters, each taking their cuts.

To have intelligent space-faring life, we need a lot of things to happen without any disaster (nearby supernova, too big an asteroid, ...) disrupting them too much. It's more like a "game of the goose", where the optimization process steadily advances but events will either accelerate it or make it go backwards, and you need to reach the "win" cell before time runs out (i.e., your star becoming too hot, as the Sun will in less than a billion years) or before reaching a "you lose" cell (a nearby supernova blasting away your atmosphere, or thermonuclear warfare).

I don't see any reason to believe there is a single "Great Filter", instead of a much more continuous process with many intermediate filters you have to pass through.
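A toy numerical sketch (with made-up numbers, purely to illustrate the arithmetic of the argument) shows that ten independent "small" filters, each letting only 10% of candidates through, produce the same ~10 orders of magnitude of cut that a single "Great Filter" would otherwise have to provide on its own:

```python
# Ten hypothetical small filters, each passing 10% of candidates.
# (Illustrative numbers only, not actual astrobiological estimates.)
small_filters = [0.1] * 10

survival = 1.0
for p in small_filters:
    survival *= p

# survival is ~1e-10: the same overall cut as one single "Great Filter",
# without any individual step being especially hard.
```

The point of the sketch is just that a product of many modest factors is observationally indistinguishable, from the outside, from one enormous factor.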

Comment by kilobug on Desire is the direction, rationality is the magnitude · 2015-04-07T08:46:14.974Z · LW · GW

Nicely put for an introduction, but of course things are in reality not as clear-cut: "rationality" can also change the direction, and "desire" the magnitude.

  1. Rationality can make you realize some contradictions between your desires, and force you to change them. It can also make you realize that what you truly desire isn't what you thought you desired. Or it can make you desire whole new things, that you didn't believe to be possible initially.

  2. Desire will affect the magnitude because it'll affect how much effort you put into your endeavor. With things like akrasia and procrastination around, if you don't have a strong desire to do something, you are much less likely to do it, especially if there is an initial cost. That's what Eliezer calls "something to protect".

Of course those two are mostly positive feedbacks between rationality and desire, but there can also be negative feedbacks between the two, usually due to human imperfections.

Comment by kilobug on Calibration Test with database of 150,000+ questions · 2015-03-20T14:33:00.889Z · LW · GW

Interesting idea, thanks for doing it, but sadly many questions are very US-centric. It would be nice to have some "tags" on the questions, and let users select which kinds of questions they want (for example, non-US people could filter out the US-specific ones).

Comment by kilobug on [FINAL CHAPTER] Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 122 · 2015-03-16T21:00:49.901Z · LW · GW

Yes, it is a bit suspicious - but then Azkaban and Dementors are so terrible that it's worth the risk, IMHO.

And I don't think Harry is counting just on the Horcrux; I think he's counting on the Horcrux as a last fallback, counting on the unicorn blood and on the fact that "she knows death can be defeated because she did come back from death", and maybe even on Hermione calling a phoenix.

Comment by kilobug on [FINAL CHAPTER] Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 122 · 2015-03-15T23:20:23.828Z · LW · GW

Chapter 122 in itself was good, I liked it, but I feel a bit disappointed that it's the end of the whole of HPMOR.

Not to be unfairly critical (it's still a great story, and many thanks to Eliezer for writing it), but there are way too many remaining unanswered questions and too much unfinished business for this to be the complete end. It feels more like "end of season 1, see season 2 for the rest" than "and now it's over".

First, I would really have liked a "something to protect" about Harry's parents.

But mostly, there are lots of unanswered questions: how does magic work? What destroyed Atlantis? Where do the Deathly Hallows and the Stone come from? What is the thing threatening to destroy the world? What exactly are the effects of all the transformations on Hermione?

And many plot arcs aren't finished: will Hermione manage to destroy the Dementors, and with what consequences? How will the political landscape reshape, both in the UK and worldwide? How will Harry manage to find a safe way to save the muggles too? How will Hogwarts evolve? How will Harry manage the Elder Wand?

Comment by kilobug on [FINAL CHAPTER] Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 122 · 2015-03-15T11:21:27.285Z · LW · GW

I don't really see the point of the antimatter suicide. It won't kill Voldemort, thanks to the Horcrux network, so it would just kill the Death Eaters while leaving Voldemort in power, and Voldemort would be so pissed off that he would do the worst he can to Harry's family and friends... How is that any better than letting Voldemort kill Harry and managing to save a couple of people by telling him a few secrets?

Comment by kilobug on Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 119 · 2015-03-10T22:59:53.264Z · LW · GW

If I remember correctly, it's not just "person" but information. I can't use a Time-Turner to go 6 hours back into the past, give a piece of paper to someone (or some information to that person), and have that person go back 6 more hours.

So while it is an interesting hypothesis, it would require that no information be carried... and isn't the fact that the Stone still exists and works a piece of information in itself? Or is that nitpicking?