Posts

Comments

Comment by ema on Rationality Quotes December 2013 · 2013-12-22T18:00:40.179Z · LW · GW

If you get one bitter cucumber, asking for its cause may be a waste of time. But if you get a lot of bitter cucumbers, spending some time on changing that might give net positive utility.

Comment by ema on Open Thread, June 2-15, 2013 · 2013-06-07T19:07:12.401Z · LW · GW

The subset of people who are Anki users and members of the competitive conspiracy might be interested in the Anki high score list addon I wrote: Ankichallenge

Comment by ema on MetaMed: Evidence-Based Healthcare · 2013-03-05T18:08:05.101Z · LW · GW

According to their site, Jaan Tallinn is not the CEO but the chairman of the board. Zvi Mowshowitz is the CEO.

Comment by ema on What if "status" IS a terminal value for most people? · 2012-12-25T21:56:26.076Z · LW · GW

I go without shaving my legs, I don't mind wearing stained clothing, I'll happily sit on wet grass

There are communities where this is high-status behavior, but I presume you would have considered this if you belonged to such a community.

Comment by ema on Programming Thread · 2012-12-07T09:27:36.456Z · LW · GW

Paul Graham's writings on programming and learning Haskell leveled me up, although I suspect you are already at that level.

Comment by ema on Open Thread, November 1-15, 2012 · 2012-11-14T08:49:39.817Z · LW · GW

Simulated paperclips.

Now we get to the question of how detailed the paperclips have to be for the paperclipper to care. I expect the paperclipper to care only when the paperclips are simulated individually, and we can't simulate 3^^^^^3 paperclips individually.

I see no reason to think any work of fiction can lead to such a distortion of reality.

I see no reason to think works of fiction that lead to such a distortion of reality are impossible.

Comment by ema on Open Thread, November 1-15, 2012 · 2012-11-13T09:15:33.091Z · LW · GW

Which is a good thing, because we really do have such powers and we really don't value paperclips.

Our universe does not contain enough atoms or energy to destroy 3^^^^^3 paperclips.
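For a rough sense of scale (my own sketch in Knuth's up-arrow notation, which the caret shorthand abbreviates): even the far smaller number 3^^^3 already dwarfs the atom count of the observable universe, and 3^^^^^3 sits two up-arrow levels above that.

```latex
\begin{align*}
3\uparrow 3 &= 3^3 = 27 \\
3\uparrow\uparrow 3 &= 3^{3^3} = 3^{27} \approx 7.6\times 10^{12} \\
3\uparrow\uparrow\uparrow 3 &= \underbrace{3^{3^{\cdot^{\cdot^{3}}}}}_{3\uparrow\uparrow 3\ \text{threes}}
  \;\gg\; 10^{80} \approx \text{atoms in the observable universe}
\end{align*}
```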

... were you seriously that confused or are you extrapolating to a "supercharged" novel?

I am extrapolating.

I somehow doubt there would be a single, full-time guard.

Groups of people are not that much harder to manipulate than single persons.

Comment by ema on Open Thread, November 1-15, 2012 · 2012-11-12T20:15:37.673Z · LW · GW

Because we have magical powers from outside the matrix [...].

The AI is vastly smarter than we are and can communicate with us. So it asks us questions that sound innocent to us, but from the answers it can derive a fairly accurate map of what things look like outside the matrix.

It would have to argue that destroying humanity and replacing it with paperclips was a good thing.

The goal of the AI is to have the guard execute the code that would let the AI access the outside world. Arguing with us could be one way to achieve this goal, although I agree it sounds like an unlikely way to succeed. Another possible way would be to write a novel so interesting that the guard doesn't put it down, and that leaves him in such a confused state that he types in the code, thinking he is saving princess Foo from the evil lord Bar.

A super smart AI that wants to reach this goal very badly will likely come up with a whole bunch of other possible ways, some of which I would never consider even if I spent the next four decades thinking about it.

That sounds like more a side effect of reading the same thing "nonstop for 24 hours" than a property of the book [...]

Yes. I am sure any other well-written book read for 24 hours would have a similar effect. I think it is likely that a potential guard is at most two orders of magnitude less vulnerable to such things than I was at that time. That's not enough against an AI that has six orders of magnitude more optimization power.

Comment by ema on Open Thread, November 1-15, 2012 · 2012-11-11T09:52:20.935Z · LW · GW

Why would it believe us that we are able to destroy 3^^^^^3 paperclips?

"arguing" is to narrow a word for describing the possibilities the AI has. For example it could manipulate us emotionally. It could write us a novel that leaves us in a very irrational state and then give us a bogus, but effective on us, argument for why we should let it out.

I once read the fifth Harry Potter book nonstop for 24 hours, and for a couple of hours afterwards I had difficulty distinguishing between myself and Harry Potter. It seems likely that an author who is a million times smarter than Rowling, and who has that as an explicit goal, could write a novel that leaves me with far bigger misconceptions.

Comment by ema on Open Thread, November 1-15, 2012 · 2012-11-10T09:54:50.328Z · LW · GW

Could it be that you are confusing the complexity of an agent's utility function with its optimization power? A superintelligent paperclipper has a simple utility function, but would have no problem reasoning about humans in great enough detail to figure out what it has to say to get the guard to let it out of the box.

Comment by ema on Open Thread, November 1-15, 2012 · 2012-11-06T10:31:11.496Z · LW · GW

Maybe this is more to your liking: https://dl.dropbox.com/u/3943312/gwern-small.png (I just cropped and rescaled it in GIMP.)

Comment by ema on 2012 Less Wrong Census Survey: Call For Critiques/Questions · 2012-10-21T08:13:38.336Z · LW · GW

People with a photographic memory could still use SRS for learning sounds.

Comment by ema on 2012 Less Wrong Census Survey: Call For Critiques/Questions · 2012-10-19T09:57:55.838Z · LW · GW

But what should Spaniards answer?

i think "White (non-Hispanic)". Not that i understand the category Hispanic, but putting Swedish and Greek people in one category while excluding Spaniards seems deeply weird to me.

Comment by ema on Firewalling the Optimal from the Rational · 2012-10-07T21:29:33.753Z · LW · GW

I like that idea, but I think there can be too much granularity. The feeling of 'People who agree with me on X also agree with me on completely unrelated Y' is awesome.

Comment by ema on Less Wrong Polls in Comments · 2012-09-21T14:35:30.970Z · LW · GW

That doesn't really prevent trolling, so I'm not sure it would be helpful.

Comment by ema on The Yudkowsky Ambition Scale · 2012-09-12T18:48:57.487Z · LW · GW

I would put it lower than 9 because a general AI is science as software, which means it is already contained in 9.

Comment by ema on Meetup : Berlin Meetup · 2012-08-31T19:24:35.967Z · LW · GW

I will come too.

Comment by ema on [video] Robin Hanson: Uploads Economics 101 · 2012-08-08T07:31:40.537Z · LW · GW

One benefit of running at a lower speed is that you can interact with things farther away from you while it still seems instantaneous. Although I have no idea why that would be more important for the workers than for the boss.

Comment by ema on New Singularity.org · 2012-06-21T15:21:19.489Z · LW · GW

Now it works for me too.

Comment by ema on New Singularity.org · 2012-06-19T10:10:27.770Z · LW · GW

On the about page, "Meet the Team" links to http://singularity.org/visiting-fellows/ instead of http://singularity.org/team/

Comment by ema on What are you working on? June 2012 · 2012-06-04T13:00:43.876Z · LW · GW

I want a UI that suits me better. Concretely this means: more keyboard shortcuts; dragging the mouse only changes the selection (in Inkscape it also moves paths, which can get annoying); and non-destructive boolean operations, which make shading way easier.
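To illustrate what I mean by non-destructive boolean operations, here is a minimal sketch (hypothetical names, not code from my actual program): the operation is stored as a node that keeps both operands, and the concrete outline is only computed at render time, so the original paths stay editable afterwards.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List, Tuple, Union


Point = Tuple[float, float]


class BoolOp(Enum):
    UNION = "union"
    DIFFERENCE = "difference"
    INTERSECTION = "intersection"


@dataclass
class Path:
    # An ordinary editable path; its points stay editable at all times.
    points: List[Point]


@dataclass
class BooleanNode:
    # A non-destructive boolean operation: the two operand shapes are kept
    # as children instead of being replaced by their result, so they can
    # still be edited after the operation has been applied.
    op: BoolOp
    a: "Shape"
    b: "Shape"


Shape = Union[Path, BooleanNode]


def flatten(shape: Shape) -> List[Point]:
    """Resolve a shape to concrete geometry only when it is rendered."""
    if isinstance(shape, Path):
        return shape.points
    left, right = flatten(shape.a), flatten(shape.b)
    # Placeholder: a real implementation would run a polygon-clipping
    # algorithm (e.g. Vatti or Greiner-Hormann) on the operands here.
    return left + right


# Usage: editing `circle` later automatically changes the shaded result.
circle = Path([(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)])
square = Path([(0.5, 0.5), (1.5, 0.5), (1.5, 1.5)])
shade = BooleanNode(BoolOp.DIFFERENCE, square, circle)
print(flatten(shade))
```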

Comment by ema on What are you working on? June 2012 · 2012-06-03T18:40:10.007Z · LW · GW

I am developing a vector drawing program. It seems to have a good balance between achievability and ambition for me. So far it has 80% of the Inkscape functionality I use. Currently I'm struggling to get the performance from barely usable to smooth.

Comment by ema on Meetup : First Berlin meetup · 2012-05-30T16:13:06.834Z · LW · GW

I will also attend.

Comment by ema on Why a Human (Or Group of Humans) Might Create UnFriendly AI Halfway On Purpose · 2012-05-01T18:21:54.746Z · LW · GW

Of course we want to favor the group we are part of. Otherwise our CEVs wouldn't differ.

Comment by ema on New Year's Prediction Thread (2012) · 2012-01-03T13:26:43.888Z · LW · GW

They don't exclude each other.

Comment by ema on Rationality Quotes December 2011 · 2011-12-02T07:56:23.799Z · LW · GW

I can't see how this is a rationality quote. It would imply that humans have a hard time controlling their actions. How else could someone who thinks wisely act in an absurd fashion? Isn't rationality about overcoming the fact that humans don't think wisely?

Comment by ema on Reconstructing visual experiences from brain activity evoked by natural movies. [link] · 2011-09-23T07:56:25.600Z · LW · GW

I would have expected more details on the faces, considering how much processing power is assigned to the task.

The results are obtained by mixing together 100 other movies, so it is not surprising that there are no details.