Posts

Welcome to LW-Cologne 2018-04-16T08:34:20.068Z · score: 7 (2 votes)
Meetup : LW-cologne meetup May 2016-05-16T20:26:20.363Z · score: 0 (1 votes)
Meetup : LW Cologne meetup 2016-02-15T23:49:46.979Z · score: 1 (2 votes)
Meetup : LW Cologne meetup 2016-01-17T22:04:19.753Z · score: 1 (2 votes)
Meetup : LW-cologne meetup 2015-11-23T21:50:11.726Z · score: 1 (2 votes)
Meetup : LW Cologne meetup 2015-09-20T19:51:54.698Z · score: 1 (2 votes)
Meetup : LW Cologne meetup 2015-08-10T10:27:33.455Z · score: 1 (2 votes)
Meetup : LW Cologne meetup 2015-07-06T20:49:11.428Z · score: 1 (2 votes)
Meetup : LW-cologne Meetup 2015-06-07T21:14:01.132Z · score: 1 (2 votes)
Meetup : LW Cologne meetup 2015-05-04T00:12:02.468Z · score: 1 (2 votes)
Meetup : LW Cologne meetup 2015-04-06T20:00:23.334Z · score: 1 (2 votes)
Meetup : HPMOR wrap party Cologne / Cologne LW meetup (restart) 2015-03-09T15:31:57.335Z · score: 1 (2 votes)
Existential biotech hazard that was designed in the 90s? 2015-03-08T01:08:54.154Z · score: 5 (8 votes)
Newcomb's Problem dissolved? 2013-02-25T15:34:34.536Z · score: -3 (12 votes)

Comments

Comment by egi on Open thread, Nov. 16 - Nov. 22, 2015 · 2015-11-17T17:20:21.391Z · score: 6 (6 votes) · LW · GW

Um, no, we cannot colonise the stars with current tech. What a surprise! We cannot even colonise Mars, Antarctica or the ocean floor.

Of course you first need to solve bottom-up manufacturing (nanotech or some functional equivalent), making you independent of ecosystem services, agricultural food production, long supply chains and the like. This also vastly reduces radiation problems and probably solves ageing. Then you have a fair chance.

So yes, if we wreck Earth, the stars are not plan B; we need to get our shit together here first.

Whether at that point there is still a reason to send canned monkeys is a completely different question.

Comment by egi on There is no such thing as strength: a parody · 2015-07-09T03:15:59.389Z · score: 1 (3 votes) · LW · GW

While this post is meant as a parody / reductio, I think the idea that "there is no such thing as strength" is not entirely invalid. This has of course nothing to do with strength being culturally constructed or some such nonsense, but with "strength" - as it is used colloquially - being highly multidimensional.

Thus there is no unambiguous way to say my strength is [number] [unit]. You can of course devise a strength test and define a strength quotient as the output of this test. And if the test is any good, this strength quotient will correlate with various abilities and outcomes, such as digging ditches, carrying stones, or the probability of having back pain. But this does not mean that "your strength" as measured by the strength test behaves like a physical quantity.

It may for example (depending on the exact nature of the test) not be meaningful to ask how a non-human like an ant or a zebra or an excavator would rate on the test, because the test may involve handling dumbbells (which neither an ant nor a zebra can do) or involve endurance tests (which the excavator can either do until its fuel tank is empty, or not at all). I hope the parallel to AI is obvious. On the other hand, if I measure a single dimension of strength, this problem goes away. If muscle x at maximum tension applies a torque of y to joint z, this does behave as a physical quantity and can readily be applied to any system with joints, be it ant, zebra or excavator.

Furthermore, the strength test is to a certain degree arbitrary. You could do a slightly different test with slightly different correlations and still call it "strength". This is not the case with a single dimension of strength. That muscle x at maximum tension applies a torque of y to joint z is an objective fact about the world, which can be ascertained with a host of different methods, all of which will yield the same result (at least theoretically).
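Here is a minimal sketch of that arbitrariness (all numbers and dimension names are made up for illustration): two equally defensible weightings of the same underlying one-dimensional measurements can rank the same two people in opposite order.

```python
# Toy illustration (hypothetical numbers): two "strength tests", i.e.
# two weightings of the same underlying one-dimensional measurements,
# rank the same people differently.

people = {
    "A": {"grip_torque": 60, "leg_torque": 90, "endurance": 30},
    "B": {"grip_torque": 80, "leg_torque": 50, "endurance": 70},
}

def strength_quotient(person, weights):
    return sum(person[dim] * w for dim, w in weights.items())

test_1 = {"grip_torque": 0.2, "leg_torque": 0.7, "endurance": 0.1}
test_2 = {"grip_torque": 0.5, "leg_torque": 0.1, "endurance": 0.4}

for name, person in people.items():
    print(name, strength_quotient(person, test_1), strength_quotient(person, test_2))
# A: 78.0 vs 51.0; B: 58.0 vs 73.0 - A "is stronger" on test 1, B on test 2.
```

The underlying torques stay fixed; only the composite "strength quotient" flips, which is exactly why the composite does not behave like a physical quantity.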

Concerning intelligence, we unfortunately do not know the one-dimensional subcomponents. I think this is the proper steelman for "there is no such thing as intelligence".

Comment by egi on FAI Research Constraints and AGI Side Effects · 2015-06-08T07:18:14.256Z · score: 3 (3 votes) · LW · GW

Problem is that this formalisation is probably bullshit. It looks a bit like a video game where you generate "research points" for AGI and/or FAI. Research IRL does not work like that. You need certain key insights for AGI and a different set for FAI; if some insights are shared between both sets (they probably are), the above model no longer works. Further problem: how do you quantify G and F? A mathematical model with variables you can't quantify is of, um, very limited use (or should I say ornamental?).
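To illustrate the objection, a toy sketch (the insight sets are entirely made up, not from the post): once the two research programs share insights, a single discovery advances both at once, so they cannot be modelled as two independently accumulating point totals.

```python
# Toy model (hypothetical insight sets): research progress as discovered
# key insights rather than accumulating scalar "points".

agi_insights = {"search", "world_models", "self_modification"}
fai_insights = {"value_learning", "corrigibility", "world_models"}

discovered = set()

def progress(goal_insights):
    """Fraction of a goal's key insights already discovered."""
    return len(discovered & goal_insights) / len(goal_insights)

discovered.add("world_models")  # one shared discovery...
print(progress(agi_insights), progress(fai_insights))
# ...moves BOTH "G" and "F" at once, so they cannot be traded off
# independently the way the video-game-style point model assumes.
```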

Comment by egi on Even better cryonics – because who needs nanites anyway? · 2015-05-26T18:11:36.489Z · score: 0 (0 votes) · LW · GW

Thanks!

Comment by egi on Even better cryonics – because who needs nanites anyway? · 2015-05-26T17:54:57.922Z · score: 0 (0 votes) · LW · GW

Huh, after copying the link into my own post, it works! The link in the above post still does not. Weird!

Comment by egi on Even better cryonics – because who needs nanites anyway? · 2015-05-26T17:44:24.045Z · score: 0 (0 votes) · LW · GW

I would be VERY interested in reading that http://onlinelibrary.wiley.com/doi/10.1111/acel.12344/pdf paper. Unfortunately the link does not work for me (page not found).

Comment by egi on HPMOR Wrap Parties: Resources, Information and Discussion · 2015-03-13T21:00:35.396Z · score: 2 (2 votes) · LW · GW

Thinking about it from this direction, you are probably correct in doing this via Facebook.

Comment by egi on HPMOR Wrap Parties: Resources, Information and Discussion · 2015-03-09T15:18:56.412Z · score: 2 (2 votes) · LW · GW

Why not do the whole coordination here on LW instead of Facebook? Much easier to access, since everything on LW is visible without login. And creating an account is easy and has no privacy/terms of use issues.

Comment by egi on Existential biotech hazard that was designed in the 90s? · 2015-03-08T18:17:16.543Z · score: 1 (1 votes) · LW · GW

Oh, never noticed! Thanks!

Comment by egi on Existential biotech hazard that was designed in the 90s? · 2015-03-08T07:36:51.132Z · score: 1 (1 votes) · LW · GW

The Google Scholar link hits the same paywall for me, but the ask-force.org link fortunately works. Thanks!

Comment by egi on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 113 · 2015-03-02T17:29:56.043Z · score: 7 (7 votes) · LW · GW

Here is my stab at a solution (already posted at ffnet):

First, Harry tells V. that Dementors are death, that Patronuses work by not thinking about death, and that the true Patronus works by using a different mind-state which V. probably cannot attain (without giving specifics). Second, Harry states that as long as Dementors are around, every person, including V., has at each moment a small but finite probability of being kissed by one. Over an indefinite timeframe, the aggregate probability that V. is kissed approaches one. How this would interact with V.'s Horcruxes is unclear, but he may easily suffer a fate worse than death. Therefore he should keep Harry around at least until the Dementors are dealt with.
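A quick sketch of that aggregate-risk arithmetic (the per-moment probability is a made-up number): the chance of being kissed at least once in n moments is 1 - (1 - p)^n, which tends to one as n grows.

```python
# Aggregate risk from a tiny constant per-moment hazard p (hypothetical).
p = 1e-6
for n in (10**6, 10**7, 10**8):
    print(n, 1 - (1 - p)**n)
# ~0.63, ~0.99995, ~1.0 - even a tiny constant hazard becomes near-certain
# doom over an indefinite timeframe.
```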

Then he points out that, given what he knows about the ambiguity of prophecies, the prophecy V. heard has probably not clearly identified that Harry, and not V., is the threat. Thus V. killing Harry might easily doom the world. This is especially likely as V. is not bound by the vow. Thus V. should keep Harry around to guard against his own mistakes, and probably take a similar vow. Harry himself may offer more vows to further V.'s goals in exchange for V. vowing to further Harry's goals, and so on. This should be beneficial even for a purely selfish V. who wants the world to survive.

In case V. is not convinced by this offer of cooperation, Harry uses the time they are talking to prepare an attack on V. and the Death Eaters using partial transfiguration. Thinking about avenues of attack, he first considers transfiguring an invisible nanoweapon such as a monofilament knife to decapitate the Death Eaters. But he quickly realizes that this will not work, since no known material, including carbon nanotubes, is stiff enough to form an invisible blade several meters long. Independently acting nanobots are out too, because he lacks the time and knowledge to design them, let alone test them for safety and efficacy. Then he realizes he does not need them, because partial transfiguration can do everything a nanobot could, and more.

He points his wand at a patch of skin on his leg and starts to transfigure the stratum corneum. An invisible bundle of carbon nanotubes extends from his skin to the ground and branches out to each Death Eater, running up their robes and into their necks. (They do not feel this, since the bundle of tubes has a cross-section of only 50 nm; pain or touch receptors would not pick that up.) Another branch extends to the Dark Lord, but Harry does not dare touch him with his construct, fearing the resonance. Instead he builds a small tower from the ground, using carbon nanotubes in a pattern resembling the Eiffel Tower, extending right into the muzzle of his gun (beneath the moonlight glints a tiny fragment of silver, a fraction of a line...). He seals the muzzle with a thin sheet of carbon nanotubes and fills the barrel with nitroglycerine, contained by a second thin sheet of carbon nanotubes just before the bullet. All of this is very low volume and quickly transfigured.

If the Dark Lord refuses cooperation, Harry snaps his fingers and immediately extends the tube in each Death Eater's neck to sever the brainstem from the spinal cord, the language center from the brain (to prevent wordless, wandless magic) and the neck from the body (black robes, falling). To make sure that everything is properly separated, he turns his entire construct (except for the part in V.'s gun) into pressurised air (...blood spills out in litres...). Now the Dark Lord either surrenders or fires his gun. ...and Harry screams a word: "Rennervate!", pointing at Hermione to wake her up. Hermione stuns V. Even if V. fired, he should not die immediately, unless part of the gun passed through his brain. Hermione transfigures V. into a small stone to prevent him from dying and thus from coming back. Afterwards they transfigure the Death Eaters for eventual revival.

I wrote multiple redundant plans because I genuinely think Harry should be able to convince V. to cooperate for purely selfish reasons. But even if V. is not only rational and selfish but "for the Evulz" evil and thus refuses, the transfiguration attack should secure Harry's victory.

Comment by egi on A discussion of heroic responsibility · 2014-10-30T23:43:23.015Z · score: 0 (0 votes) · LW · GW

they just don't have the option of not picking a treatment.

They do: they call the problem "psychosomatic" and send you to therapy, or give you some echinacea "to support your immune system", or prescribe "something homeopathic", or whatever... And in very rare cases, especially honest doctors may even admit that they have no idea what to do.

Comment by egi on 2014 Less Wrong Census/Survey · 2014-10-27T23:16:34.011Z · score: 30 (30 votes) · LW · GW

Survey taken!

Concerning the mental health questions, how do you weight self-diagnosed versus diagnosed by a psychiatrist? Do you think that, given the Less Wrong demographic, self-diagnosis is less or more reliable (intuitively I would lean towards more)? How should cases like mine answer - diagnosed with Asperger's by psychiatrist 1, two years later diagnosed with ADHD but not Asperger's by psychiatrist 2, several months later diagnosed with neither Asperger's nor ADHD by psychiatrist 3?

Comment by egi on The Octopus, the Dolphin and Us: a Great Filter tale · 2014-09-15T11:28:59.901Z · score: 4 (4 votes) · LW · GW

and you certainly do get underwater volcanoes, so the ash should be available

No, the ash would react with water immediately and thus be useless. And you need burned lime (CaO or Ca(OH)2), not limestone (CaCO3) - making burned lime means calcining limestone (CaCO3 → CaO + CO2) at roughly 900 °C, i.e. it requires fire.

Comment by egi on The Octopus, the Dolphin and Us: a Great Filter tale · 2014-09-14T19:49:20.894Z · score: 2 (2 votes) · LW · GW

Given an expectation of how hard it is to solve the problem....

Agreed

... like dolphins in all relevant respects except they also have hands (maybe as additional retractable limbs, to preserve swimming capabilities). So they should be about as waterbound as dolphins.

No, they are not. They are much less waterbound than seals (watch the video), because they can move around on their hands and use their hands to cover themselves with seaweed or some such to protect against drying / sun. I fully agree with you that such creatures can bootstrap a civilisation, especially if they have scientific knowledge.

Where I disagree is the point where an unmodified dolphin, or a strictly waterbound (arbitrarily defined as: cannot leave the water for more than 5 seconds) "dolphin with hands", gets anything done on the surface without significant technology to start with (arbitrarily defined as: anything humans could not build 40,000 years ago). They would run into the problem that they have to build complex contraptions

Let's make the handle of the axe a much longer stick, and also attach another stick perpendicularly...

to perform simple tasks (felling a tree), without being able to build those complex contraptions without the help of even more complex contraptions. (You cannot build what is described in the above quote without having wood and being able to work it - and you would have to do that in a terrestrial environment, where you cannot do anything in the first place, because you cannot move.)

Comment by egi on The Octopus, the Dolphin and Us: a Great Filter tale · 2014-09-14T17:34:30.132Z · score: 2 (2 votes) · LW · GW

It's not impossible. Significant evidence of the negative will be obtained if performing a thorough investigation (which would be expected to solve the problem if it can be solved) fails to solve the problem.

You could always argue both that we are not creative / intelligent enough to find a solution, and that this is not indicative of whether a whole society would find one. And this argument may well be correct.

Start with the simpler problem of developing technology as dolphins with hands.

What does that even mean? A dolphin body with functional human arms and a human brain attached, plus the necessary modifications to make that work? Well, now you have more or less a mermaid with very substantial terrestrial capabilities (well exceeding those of a seal; watch this to get an impression of what I mean). A group of creatures like that, with general knowledge of science, might well make it.

Now imagine this creature as strictly waterbound, and I think even in this much simpler problem we can identify a major showstopper: iron smelting. Imagine this mermaid civilisation with proper hands, flint tools (can flint be found in the oceans? I don't know) and modern scientific knowledge trying to light a fire. They gather mangroves using their flint axes, build a raft and throw some wood on top to dry. What now? They cannot board the raft to strike or drill fire, so they might try to build a mirror to use sunlight. Humans did not do that, but humans did not know science, so granted. How do they build it without glass or metal? I don't know, but let's say they manage. So now they have fire - not controlled fire, but a bonfire atop a wooden raft. But they don't need a bonfire; they need something like a bloomery, and then they need to do some very serious smithing, only to build something like a very crude excavator arm to perform very basic manipulations in a terrestrial environment. And you cannot do smithing under water.

Let's suppose that it's possible to solve this simpler problem ... Can you come up with a particular example of a very simple action that can be performed with hands (underwater, etc.), which doesn't look like it can be reduced to working without hands?

Can you imagine a way a group of quadriplegics (imho a good approximation of a stranded dolphin with a human brain - except that their skin does not dry out) could fell a tree with stone tools? And delimb it? And bring it to the construction site? And erect it as a pillar?

Comment by egi on The Octopus, the Dolphin and Us: a Great Filter tale · 2014-09-09T21:30:18.592Z · score: 7 (7 votes) · LW · GW

I think the basic problem here is that I have to prove a negative, which is, as we all know, impossible. Thus I am pretty much reduced to debating your suggestions. This will sound quite nitpicky, but it is not meant as an offense; it is meant to demonstrate where the difficulties would be:

Initially, power can be provided by pulling strong vines (some kind of seaweed will probably work) attached together.

Power to what? Whatever it is, it has to be built without hands (!) and with very basic tools. And no, seaweed would not work, because there is no evolutionary pressure on aquatic plants to build the strong supportive structures we use from terrestrial plants.

It should be possible to farm trees somewhere on the shoreline

No, trees do not grow in salty environments (except mangroves). And how does a dolphin plant and harvest mangroves without hands and without an axe or a saw (see below)?

A saw could be made of something like a shark jaw with vines attached to the sides, so that it can be dragged back and forth.

No, it cannot: shark teeth would break quickly, and even if they did not, they do not have the right shape for sawing wood. Humans almost exclusively used axes and knives for woodworking before the advent of advanced metallurgy. And you do not get vines.

These enable screws, joints, jars and all kinds of basic mechanical components, which can be used in the construction of tools for controlling things on surfaces of rafts, so that in principle it becomes possible to do anything there given enough woodcrafting and bonecrafting work. At this point we also probably have fire and can use tides to power simple machinery, so that it's practical to create bigger controlled environments and study chemistry and materials.

I think you severely underestimate just how helpless a dolphin would be on such a raft - or are we talking about remote operation? Without metal? Without precision tools? (I mean real 19th-century precision tools - lathe, milling cutter and so on - not stone-age "precision tools".)

To get land access and do useful work there (gather wood, create fire, smelt metal etc.), a dolphin would imho need something like a powered exoskeleton, controlled perhaps by fin movement or better by DNI. Modern humanity might perhaps be able to build something to enable a dolphin to work on land, but not a medieval or a stone-age human civilisation, and certainly not a stone-age civilisation without hands.

I hope I have brought across what kind of difficulties I think would prevent your dolphin engineers from ever getting anywhere. If you disagree on a certain point, I am willing to discuss it in greater detail.

Comment by egi on The Octopus, the Dolphin and Us: a Great Filter tale · 2014-09-07T19:13:46.666Z · score: 3 (5 votes) · LW · GW

Do you expect animals with human-like intelligence and dolphin-like bodies will fail to develop technological civilization? As a first approximation, I expect a community of modern human engineers (with basic technical background, but no specific knowledge) in dolphin bodies can manage to do that eventually,

How? You cannot have fire (no, magnesium, phosphorus and so on do not count, since you do not get them without fire), thus you do not get metals, steam or internal combustion engines. Since you do not get metals, you do not get precision tools or electricity. You are more or less stuck with sharpened rocks and whale bones as a very poor substitute for wood (if you get them in the first place). I am very curious how you think a human, or even smarter-than-human, intelligence might bootstrap an industrial civilisation from there.

Comment by egi on 2013 Less Wrong Census/Survey · 2013-11-29T11:56:05.298Z · score: 2 (2 votes) · LW · GW

Thanks! Did not know that.

Comment by egi on 2013 Less Wrong Census/Survey · 2013-11-29T11:51:49.402Z · score: 0 (0 votes) · LW · GW

Interesting thought. So how would you define ontologically basic?

Comment by egi on 2013 Less Wrong Census/Survey · 2013-11-29T11:34:04.918Z · score: 0 (0 votes) · LW · GW

45 to 95 %

Comment by egi on 2013 Less Wrong Census/Survey · 2013-11-28T09:10:15.608Z · score: 0 (0 votes) · LW · GW

What does it mean to not have Kolmogorov complexity?

What I meant is that (apart from positional information) you can only give one bit of information about the thing in question: it is there or not. There is no internal complexity to be described. Perhaps I overstretched the meaning of Kolmogorov complexity slightly. Sorry for that.

Do you mean that the entity is capable of engaging in non-computable computations?

No.

Comment by egi on 2013 Less Wrong Census/Survey · 2013-11-28T08:58:15.696Z · score: 0 (0 votes) · LW · GW

Before I knew of Hilbert space and the universal wave function, I would have said 1; now I am somewhat confused about that.

Comment by egi on 2013 Less Wrong Census/Survey · 2013-11-26T22:20:23.678Z · score: 1 (1 votes) · LW · GW

Here I understand "ontologically basic" to mean "having no Kolmogorov complexity / not amenable to reductionist explanations / does not possess an internal mechanism". Why do you think this is not coherent?

Comment by egi on 2013 Less Wrong Census/Survey · 2013-11-26T22:13:45.184Z · score: 1 (1 votes) · LW · GW

I personally think it's a strawman...

Why?

Comment by egi on 2013 Less Wrong Census/Survey · 2013-11-26T22:11:53.657Z · score: 3 (3 votes) · LW · GW

My confidence bounds were 75% and 98% for defect, so my estimate was diametrically opposed to yours. If the admittedly low sample size of these comments is any indication, we were both way off.

I expected most of the LessWrong community to cooperate for two reasons:

  1. I model them as altruistic, as in Kurros' comment.
  2. I model them as one-boxing in Newcomb's problem.

One consideration I did not factor into my prediction is that - judging from the comments - many people refuse to cooperate in transferring money from CFAR/Yvain to a random community member.

Comment by egi on 2013 Less Wrong Census/Survey · 2013-11-25T12:25:05.714Z · score: 4 (4 votes) · LW · GW

Supernatural: AFAIK there is no agreed-on definition of "supernatural" events other than "physically impossible" ones which of course have a probability of 0 (epsilon). OTOH, if you specify "events that the average human observer would use the word 'supernatural' to describe", the probability is very high.

Somewhere on LessWrong I have seen supernatural defined as "involving ontologically basic mental entities". This is imho the best definition of supernatural I have ever seen and should probably be included in this question in the future. Other definitions do not really make sense for this question, as you already pointed out.

Comment by egi on 2013 Less Wrong Census/Survey · 2013-11-24T18:20:12.792Z · score: 21 (21 votes) · LW · GW

Surveyed, including bonus.

I really liked the monetary-reward prisoner's dilemma. I am really curious how it turns out. Given the demographic here, I would predict ~85% cooperate.

The free-text options were rendered in German ("Sonstige"). Was that a bug, or does it serve some hidden purpose?

Comment by egi on Meetup : First Meetup in Cologne (Köln) · 2013-10-29T12:21:55.166Z · score: 1 (1 votes) · LW · GW

Me, and probably my girlfriend too. Awesome that someone finally did it. I wanted to start a meetup too, but kept procrastinating about it.

Comment by egi on Rationality Quotes April 2013 · 2013-04-28T18:24:58.487Z · score: -2 (2 votes) · LW · GW

Some people want it to happen, some wish it would happen, others make it happen.

"Michael Jordan"

Comment by egi on Rationality Quotes April 2013 · 2013-04-23T14:30:33.692Z · score: 0 (0 votes) · LW · GW

10-5-10 against veteran by trying to predict the computer and occasionally changing levels of recursion.

Second try: 14-16-15, by trying to act randomly (without consciously using an algorithm).

Comment by egi on Musk, Mars and x-risk · 2013-03-20T11:05:20.356Z · score: 1 (1 votes) · LW · GW

In most cases of "conventional" x-risks (<10 km diameter impactors, (current-tech) wars, famine, supervolcanoes, (nearly all) climate change scenarios, global computer failure, most pandemics (sub-100% lethality), most supernova scenarios) you don't even need the shelter.

It is sufficient to have the tech necessary for a small group (say 1000 people) to be self-sufficient, independent of ecosystem services (i.e. food, water, maybe air - not an issue in most scenarios - organic raw materials, fossil fuels). This is the minimum requirement for a Martian colony or a deep shelter anyway, and it is much easier, especially if you use outside air. Though it is still very hard - today we don't come close to being independent of ecosystem services even with a supply chain of 7 billion people. I doubt it is possible at all short of some sort of MNT.

As long as we are not independent of ecosystem services in a small group, a space, underground or oceanic colony as protection from x-risks is a pipe dream, because any settlement not completely independent of the mother civilisation will die long before the mother civilisation breaks down. Especially if transport is as demanding as it is to Mars.

Comment by egi on Musk, Mars and x-risk · 2013-03-19T14:08:32.114Z · score: 1 (1 votes) · LW · GW

If we can build a self-sufficient small-scale economy which is independent of Earth's ecosystem services and industrial base - i.e. an independent Martian colony - most of the listed existential risks a Martian colony might mitigate cease to be existential. This is because the mechanism of these existential risks is a reduction of the ecosystem services provided by Earth's biosphere triggering a breakdown of our interconnected world economy, with subsequent starvation of most people - or even a breakdown of our interconnected world economy without significantly reduced ecosystem services.

This obviously applies to: <20 km diameter impactors, (current-tech) wars, famine, supervolcanoes, (nearly all) climate change scenarios, global computer failure.

It also applies to: most pandemics (sub-100% lethality, or shelter available, or some region spared), most supernova scenarios (breakdown of agriculture due to ozone layer disruption, far enough away not to instantly fry the Earth), and some bio- and nanoweapons (sub-100% lethality, or shelter available, or some region spared).

So a Mars colony will exclusively survive only some highly specific and thus unlikely scenarios: a nano-outbreak which can break into an earthbound shelter but does not spread through space, a very intense GRB which hits Earth but not Mars (is this even possible?), an Earth impactor large enough to heat the atmosphere to several hundred °C, or perhaps some weird physics disaster.

So what we should do to mitigate x-risks is build a self-sufficient small-scale economy which is independent of Earth's ecosystem services and industrial base - not ship it to Mars. Though I fear this is not possible at our current tech level.

Comment by egi on Newcomb's Problem dissolved? · 2013-02-27T15:09:47.228Z · score: 0 (0 votes) · LW · GW

Newcomb's problem doesn't rely on existence of predictors who can predict any agent in any situation. It relies on existence of rational agents that can be predicted at least in certain situations including the scenario with boxes.

This was probably just me (how I read / what I think is interesting about Newcomb's problem). As I understand the responses, most people think the main point of Newcomb's problem is that you rationally should cooperate given the 1000000 / 1000 payoff matrix. I emphasized in my post that I take that as a given. I thought most about the question of whether you can successfully two-box at all, so that was the "point" of Newcomb's problem for me. To formalize this, say I replaced the payoff matrix by 1000 / 1000, or even device A / device B, where device A corresponds to $1000 and device B corresponds to $1000, but device A + device B together correspond to $100000 (e.g. they have a combined function).

I still don't understand why would you be so much surprised if you saw Omega doing the trick hundred times, assuming no stage magic. Do you find it so improbable that out of the hundred people Omega has questioned not a single one had a quantum coin by him and a desire to toss it on the occasion? Even game-theoretical experiment volunteers usually don't carry quantum widgets.

Well, I thought about people actively resisting prediction, so some of them flipping a coin, or at least using a mental process with several recursion levels (I think that Omega thinks that I think...). I am pretty, though not absolutely, sure that these processes are partly quantum-random, or at least chaotic enough to be computationally intractable for everything within our universe. Though Omega would probably still do much better than random (except if everyone flips a coin; I am not sure whether that is predictable with computational power levels realizable in our universe).
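To make the coin-flipping point concrete, a toy calculation (the setup is assumed for illustration, not from the thread): if a fraction f of subjects decide by a fair quantum coin that no scanner can predict better than chance, while everyone else is perfectly predictable, Omega's accuracy is capped at 1 - f/2.

```python
# Cap on Omega's accuracy when a fraction f of subjects use a fair
# quantum coin (unpredictable, 50% guessable) and the rest are
# perfectly predictable.

for f in (0.0, 0.1, 0.5, 1.0):
    best_accuracy = (1 - f) * 1.0 + f * 0.5
    print(f, best_accuracy)
# With f = 1 (everyone flips), Omega cannot beat 50%, i.e. random guessing.
```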

Comment by egi on Newcomb's Problem dissolved? · 2013-02-25T21:59:43.136Z · score: 0 (0 votes) · LW · GW

In particular, I am extremely sceptical that simply not making your mind up, and then at the last minute doing something that feels random, would actually correspond to making use of quantum nondeterminism. In particular, if individual neurons are reasonably deterministic, then regardless of quantum physics any human's actions can be predicted pretty perfectly, at least on a 5/10 minute scale.

As stated in my post, I am not sure about this either, though my reasoning is that while memory is probably easy to read out, thinking is probably a chaotic process whose outcome may depend on single action potentials, especially if the process does not heavily rely on things stored in memory. Whether a single action potential occurs can be determined by a few - in the limit, one - sodium ions passing or not passing a channel. And whether a sodium ion passes a channel is a quantum-probabilistic process. Though as I said before, I am not sure of this, so I precommit to using a suitable device.

Alternatively, even if it is possible to be delibrately non-cooperative, the problem can just be changed so that if Omega notices you are deliberately making its judgement hard, then it just doesn't fill the box. The problem in this version seems exactly as hard as Newcomb's.

Yep! Omega can of course do so.

Comment by egi on Newcomb's Problem dissolved? · 2013-02-25T18:59:18.475Z · score: 1 (3 votes) · LW · GW

ad 1: As I pointed out in my post twice, in this case he precommits to one-boxing and that's it, since assuming atomic-resolution scanning and practically infinite processing power, he cannot hide his intention to cheat if he wants to two-box.

ad 2: You can; I did not. I suspect - as pointed out - that he could do that with his own brain too, but of course if so, Omega would know and still exclude him.

ad 3:

First of all I want to point out that I would still one-box after seeing Omega predicting 50 or 100 other people correctly, since 50 to 100 bits of evidence are enough to overcome (nearly) any prior I have about how the universe works.

This assumed that I could somehow rule out stage magic. I did not say that; my mistake.

On terminology: see my response to shminux. Yes, there is probably an aspect of fighting the hypothetical, but I think not primarily, since I think it is rather interesting to establish that you can prevent being predicted in a Newcomb-like problem.

Comment by egi on Newcomb's Problem dissolved? · 2013-02-25T18:32:37.605Z · score: 1 (3 votes) · LW · GW

I do not see any way Omega and the boxes now can be entangled with a photon that passes or does not pass through a semitransparent mirror in 5 minutes.

Comment by egi on Newcomb's Problem dissolved? · 2013-02-25T18:27:59.429Z · score: 1 (3 votes) · LW · GW

Newcomb's problem is, in my reading, about how an agent A should decide in a counterfactual in which another agent B decides conditional on the outcome of a future decision of A.

I tried to show that, under certain conditions (deliberate noncompliance of A), it is not possible for B to know A's future decision any better than random (which - in the limit of atomic-resolution scanning and practically infinite processing power - is only possible due to "quantum mumbo-jumbo"). This is IMHO a form of "dissolving" the question, though perhaps the meaning of "dissolving" is somewhat stretched here.

This is of course not applicable to all Newcomb-like problems - namely all those where A complies and B can gather enough data about A and enough processing power.

Comment by egi on Newcomb's Problem dissolved? · 2013-02-25T18:03:31.056Z · score: 1 (3 votes) · LW · GW

1000000 x 0.99 + 0 x 0.01 > 1001000 x 0.01 + 1000 x 0.99, so yes. But this is rather beside (my) point. As I pointed out, if my aim is to make money, I do everything to make Omega's job as easy as possible (by precommitting) and then one-box (if Omega is any better than random).
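For the record, a quick check of that expected-value comparison (0.99 is the assumed prediction accuracy):

```python
# Expected values under an assumed 99% prediction accuracy.
p = 0.99

ev_one_box = 1_000_000 * p + 0 * (1 - p)      # predicted right: full box
ev_two_box = 1_001_000 * (1 - p) + 1_000 * p  # predicted wrong: both boxes

print(ev_one_box, ev_two_box)  # 990000.0 vs 11000.0 -> one-boxing wins
```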

My point is rather that Omega can be fooled regardless of its power - and fooled thoroughly enough that Omega's precision can be no better than random.

Comment by egi on Hard Takeoff · 2013-02-23T15:06:43.805Z · score: 0 (0 votes) · LW · GW

If that were literally more efficient, evolution would have designed humans to have four chimpanzee heads that argued with each other.

No! This is what a human engineer would have done. Evolution cannot do that, because it can only work through incremental modifications of existing body plans! (Though the premise is still correct.)

Comment by egi on Welcome to Less Wrong! (July 2012) · 2013-02-09T23:40:41.858Z · score: 4 (4 votes) · LW · GW

Hello,

I found this site via HPMOR, which was the most awesome book I have read in several years. Besides being awesome as a book, there were a lot of moments while reading when I thought: wow, there is someone who really thinks quite like me. (Which is unfortunately something I do not experience too often.) Thus I was interested in who the author of HPMOR is, so I googled “less wrong”.

This site really delivered what HPMOR promised, so I spent quite some time reading through many articles, absorbing a lot of new and interesting concepts.

Regarding my own person: I am a 30-year-old biochemist currently working on my master's thesis in structural biology. I grew up and live in Cologne, Germany.

I have been, since early childhood, very interested in everything science-, engineering- and philosophy-related, so the inferential distances to most topics discussed here were not too large. On the downside, most people perceive me as quite nerdy. This is reinforced by my rather poor social skills (I am possibly on the spectrum), so I was bullied a lot during childhood. Thus my social life was quite dim, though it improved quite a lot during my twenties, mostly due to being in a relationship.

I was raised with an agnostic or weakly Catholic ("maybe there is a god, perhaps", or something) worldview, and became increasingly atheistic during my teen years, though this is not really remarkable and pretty much the default for scientifically educated people in Germany. Furthermore, a lot of transhumanist idea(l)s hold a lot of appeal for me.

Besides the clarity and high intellectual level of discourse on this site, I really like the technophilic / progress-optimistic worldview of most people here. The general "technology is evil" meme held by a lot of "intellectuals" really puts me off, especially when they do not realize that their entire lives depend utterly on the very technology they shun.

My main criticism is an (IMHO) over-representation of the AI-foom scenario as a projected future, though this is a post of its own (which I hope to write up soon).

I have been lurking on the site for quite some time now (> 1 year), mostly for akrasia-related reasons. First, I really like reading interesting ideas and dislike writing, so if I spend time on Less Wrong, that time has a much higher hedonic quality for me if I read articles than if I write my own articles or comments. Second, whenever I read a post here and find something missing or imprecise or even wrong, in most cases someone has already pointed it out, often more precisely and eloquently than I could have done, so I mostly did not feel much need to comment anyway.

I decided to delurk now anyway, because I have several ideas for posts in mind, which I hope to write up over the next few weeks or months, hopefully contributing to the awesomeness of this site. Furthermore, I am contemplating starting an LW meetup group in my hometown (I could use some help / advice there).

Kudos and an unconditional upvote to the person who first guesses the meaning of my username.