Posts

Question about metaethics 2016-12-02T10:21:43.741Z

Comments

Comment by pangel on Open thread, September 4 - September 10, 2017 · 2017-09-08T01:12:25.277Z · LW · GW

An intuition is that red-black trees encode 2-3-4 trees (B-trees of order 4) as binary trees.

For a simpler case, 2-3 trees (i.e. B-trees of order 3) are either empty, a (2-)node with 1 value and 2 subtrees, or a (3-)node with 2 values and 3 subtrees. The idea is to insert new values at their sorted position, expand 2-nodes into 3-nodes when necessary, and bubble the extra value up when a 3-node would need to be expanded. This keeps the tree balanced.

A 2-3-4 tree just generalises the above.
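
To make the bubbling-up concrete, here is a minimal Python sketch of 2-3 insertion (illustrative only, not from any textbook or library; the tuple representation and the names _insert, _fix and insert are made up for the example):

    # Leaves are None; an internal node is a pair (values, children)
    # with len(children) == len(values) + 1, and a bottom-level node
    # has None for all its children.

    def _insert(node, x):
        """Insert x into a 2-3 subtree. Returns either a node, or a
        split (left, value, right) that the caller must absorb --
        this is the extra value 'bubbling up'."""
        if node is None:
            return ([x], [None, None])     # very first value: a 2-node
        values, children = node
        i = sum(v < x for v in values)     # branch to follow / slot for x
        if children[i] is None:
            # Bottom level: insert in sorted position, split if overfull.
            return _fix(values[:i] + [x] + values[i:],
                        [None] * (len(values) + 2))
        res = _insert(children[i], x)
        if len(res) == 2:                  # the child absorbed x
            return (values, children[:i] + [res] + children[i + 1:])
        left, v, right = res               # the child split: absorb v here
        return _fix(values[:i] + [v] + values[i:],
                    children[:i] + [left, right] + children[i + 1:])

    def _fix(values, children):
        if len(values) <= 2:               # still a legal 2- or 3-node
            return (values, children)
        # A would-be 4-node: split it, bubble the middle value up.
        return ((values[:1], children[:2]), values[1],
                (values[2:], children[2:]))

    def insert(root, x):
        res = _insert(root, x)
        if len(res) == 3:                  # the split reached the root:
            left, v, right = res           # the tree grows one level,
            return ([v], [left, right])    # staying perfectly balanced
        return res

    tree = None
    for x in [1, 2, 3, 4, 5]:
        tree = insert(tree, x)
    print(tree)
    # ([2, 4], [([1], [None, None]), ([3], [None, None]), ([5], [None, None])])

The only way the tree gains height is when a split bubbles all the way up to the root, which is why every leaf stays at the same depth.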

Now the intuition is that red means "I am part of a bigger node." That is, a red node holds a value that belongs to the same 2-3-4 node as its black parent. If the black node represents a 2-node, it has no red children; if it represents a 3-node, it has one red child; and if it represents a 4-node, it has two red children.

In this context, the "rules" of red-black trees make complete sense. For instance, we only count black nodes when comparing path heights, because only those represent actual 2-3-4 nodes. I'm sure that with a bit of work it's possible to make complete sense of the insertion/deletion rules through the B-tree lens, but I haven't done it.

Edit: I went through the insertion rules and they do make complete sense if you think about a B-tree while you read them.
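
To make the encoding concrete, here is a small Python sketch (again just an illustration; Node, RED, BLACK and to_234 are names I made up) that decodes a red-black tree into the 2-3-4 tree it represents, with each black node absorbing its red children:

    RED, BLACK = "red", "black"

    class Node:
        def __init__(self, value, color, left=None, right=None):
            self.value, self.color = value, color
            self.left, self.right = left, right

    def to_234(node):
        """Decode the red-black subtree rooted at a black node (or None)
        into the (values, children) of the corresponding 2-3-4 node:
        0, 1 or 2 red children yield a 2-, 3- or 4-node."""
        if node is None:
            return [], []
        assert node.color == BLACK, "2-3-4 nodes are rooted at black nodes"
        values = [node.value]

        def absorb(child, prepend):
            # A red child is part of *this* 2-3-4 node: merge its value
            # in and recurse into its subtrees. A black or empty child
            # is the root of a separate 2-3-4 node below.
            if child is not None and child.color == RED:
                if prepend:
                    values.insert(0, child.value)
                else:
                    values.append(child.value)
                return [to_234(child.left), to_234(child.right)]
            return [to_234(child)]

        children = absorb(node.left, True) + absorb(node.right, False)
        return values, children

    # A black 5 with a red left child 3 decodes to the 3-node {3, 5}:
    print(to_234(Node(5, BLACK, left=Node(3, RED))))
    # -> ([3, 5], [([], []), ([], []), ([], [])])

Returning plain (values, children) pairs keeps the sketch short; a real 2-3-4 tree would of course deserve its own node type.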

Comment by pangel on Open Thread May 30 - June 5, 2016 · 2016-06-02T15:21:09.342Z · LW · GW

Although I appreciate the parallel, and am skeptical of both, the mental paths that lead to those somewhat related ideas are seriously dissimilar.

Comment by pangel on Open Thread May 30 - June 5, 2016 · 2016-06-02T14:40:38.036Z · LW · GW

I have a question, but I try to be careful about the virtue of silence. So I'll try to ask my question as a link:

http://www.theverge.com/2016/6/2/11837874/elon-musk-says-odds-living-in-simulation

Also, these ideas are still weird enough to override even his level of status, as I think the comments here show:

https://news.ycombinator.com/item?id=11822302

Comment by pangel on Open Thread March 7 - March 13, 2016 · 2016-03-07T23:07:40.291Z · LW · GW

Could you expand on this?

...there are reasons why a capitalist economy works and a command economy doesn't. These reasons are relevant to evaluating whether a basic income is a good idea.

Comment by pangel on Open Thread March 7 - March 13, 2016 · 2016-03-07T23:04:55.527Z · LW · GW

Sorry, "fine" was way stronger than what I actually think. It just makes it better than the (possibly straw) alternative I mentioned.

Comment by pangel on Open Thread March 7 - March 13, 2016 · 2016-03-07T23:01:35.973Z · LW · GW

No. Thanks for making me notice how relevant that could be.

I see that I haven't even thought through the basics of the problem. "Power over" is felt whenever scarcity lets the wealthier take precedence. Okay, so to try to generalise a little: I've never really been hit by the scarcity that exists, because my desires are (for one reason or another) adjusted to my means.

I could be a lot wealthier yet have cravings I can't afford, or be poorer and still content. But if what I wanted kept hitting a wealth ceiling (a specific type, one due to scarcity, such that increasing my wealth and everyone else's in proportion wouldn't help), I'd start caring about relative wealth really fast.

Comment by pangel on Open Thread March 7 - March 13, 2016 · 2016-03-07T18:16:17.573Z · LW · GW

I see it as a question of preference, so I know from never having felt envy, etc., toward someone richer than me just for being richer. I only feel interested in my wealth relative to what I need or want to purchase.

As noted in the comment thread I linked, I could start caring if someone's relative wealth gave them power over me, but I haven't been in that situation so far (stuff like boarding priority for first-class tickets is a minor example I have experienced, but that's never bothered me).

Comment by pangel on Open Thread March 7 - March 13, 2016 · 2016-03-07T11:56:09.033Z · LW · GW

Responding to a point about the rise of absolute wealth since 1916, this article makes a point (not very well) about the importance of relative wealth.

Comparing folks of different economic strata across the ages ignores a simple fact: Wealth is relative to your peers, both in time and geography.

I had a short discussion about this earlier, and I find the topic very interesting.

In particular, I sincerely do not care about my relative wealth. I used to think that was universal, then found out I was wrong. But is it typical? To me it has profound implications for what kind of economic world we should strive for -- if most folks are like me, the current system is fine. If they are like some people I have met, a flatter real wealth distribution, even at the price of a much, much lower mean, could be preferable.

I'm interested in any thoughts you all might have on the topic :)

Comment by pangel on The Brain Preservation Foundation's Small Mammalian Brain Prize won · 2016-02-11T16:49:03.211Z · LW · GW

...people have already set up their fallback arguments once the soldier of '...' has been knocked down.

Is this really good phrasing, or did you manage to think that way naturally? If you do it automatically, I would like to learn to do it too.

It often takes me a long time to recognize an argument war. Until that moment, I'm confused as to how anyone could be unfazed by new information X w.r.t. some topic. How do you detect that you're not having a discussion but walking on a battlefield?

Comment by pangel on Are there really no ghosts in the machine? · 2015-09-24T09:05:09.897Z · LW · GW

I think practitioners of ML should be more wary of their tools. I'm not saying ML is a fast track to strong AI, just that we don't know whether it is. Several ML people voiced reassurances recently, but I would have expected them to do that even if it were possible to detect danger at this point. So I think someone should find a way to make the field more careful.

I don't think that someone should be MIRI though; status differences are too high, they are not insiders, etc. My best bet would be a prominent ML researcher starting to speak up and giving detailed, plausible hypotheticals in public (I mean near-future hypotheticals where some error creates a lot of trouble for everyone).

Comment by pangel on [Link] First almost fully-formed human [foetus] brain grown in lab, researchers claim · 2015-08-22T16:32:19.085Z · LW · GW

I meant it in the sense you understood first. I don't know what to make of the other interpretation. If a concept is well-defined, the question "Does X match the concept?" is clear. Of course it may be hard to answer.

But suppose you only have a vague understanding of ancestry. Actually, you've only recently coined the word "ancestor" to point at some blob of thought in your head. You think there's a useful idea there, but the best you can do for now is: "someone who relates to me in a way similar to how my dad and my grandmother relate to me". You go around telling people about this, and someone responds "yes, this is the brute fact from which the conundrums of ancestry start". Another tells you that you ought to stop using that word if you don't know what the referent is. Then they go on to say your definition is fine; it doesn't matter that you don't know how someone comes to be an ancestor, you can still talk about an ancestor and make sense. You have not gone through all the tribe's initiation rituals yet, so you don't know how you relate to grey wolves. Maybe they're your ancestors, maybe not. But the other says: "At least you know what you mean when you claim they are or are not your ancestors."

Then your little sister drops by and says: "Is this rock one of your ancestors?" No, certainly not. "OK, didn't think so. Am I one of your ancestors?" You feel about it for a minute and say no. "Why? We're really close family. It's very similar to how dad or grandma relate to you." Well, you didn't include it in your original definition, but someone younger than you definitely can't be your ancestor. It's not that kind of "similar". A bit of time and a good number of family members later, you have a better definition. Your first definition was just two examples, something about "relating", and the word "similar" thrown in to mean "and everyone else who is also an ancestor." But similar in what way?

Now the word means "the smallest set such that your parents are in it, and any parent of an ancestor is an ancestor"..."union the elders of the tribe, dead or alive, and a couple of noble animal species." Maybe a few generations later you'll drop the second term of the definition and start talking about genes, whatever.

My "fuzziest starting point" was really fuzzy, and not a good definition. It was one example, something about being able to "experience" stuff, and the word "similar" thrown in to mean "and everyone else who is conscious." I may (kind of) know what I mean when I say a rock is not conscious, since it doesn't experience anything, but what do I mean exactly when I say that a dog isn't conscious?

I don't think I know what I mean when I say that, but I think it can help to keep using the word.

P.S. The final answer could be as in the ancestor story, a definition which closely matches the initial intuition. It could also be something really weird where you realize you were just confused and stop using the word. I mean, the life force of vitalism was probably a brute fact for a long time.

Comment by pangel on [Link] First almost fully-formed human [foetus] brain grown in lab, researchers claim · 2015-08-20T20:25:17.704Z · LW · GW

As an instance of the limits of replacing words with their definitions to clarify debates, this looks like an important conversation.

The fuzziest starting point for "consciousness" is "something similar to what I experience when I consider my own mind". But this doesn't help much. Someone can still claim "So rocks probably have consciousness!", and another can respond "Certainly not, but brains grown in labs likely do!". Arguing from physical similarity, etc. just relies on the other person sharing your intuitions.

For some concepts, we disagree on definitions because we don't actually know what those concepts refer to (this doesn't include concepts like "art", etc.). I'm not sure of the best way to talk about whether an entity falls under such a concept. Are there existing articles/discussions about that?

Comment by pangel on What are "the really good ideas" that Peter Thiel says are too dangerous to mention? · 2015-04-13T23:39:17.838Z · LW · GW

Straussian thinking seems like a deep well full of status moves!

  • Level 0 - Laugh at the conspiracy-like idea. Shows you are in the pack.
  • Level 1 - As Strauss does, explain it / present instances of it. Shows you are the guru.
  • Level 2 - Like Thiel, hint at it while playing the Straussian game. Shows you are an initiate.
  • Level 3 - Criticize it for failing too often (bad thinking attractor, ideas that are hard to check and deploy usual rationality tools on). Shows you see through the phyg's distortion field.

Comment by pangel on Are there really no ghosts in the machine? · 2015-04-13T23:19:38.581Z · LW · GW

You probably already agreed with "Ghosts in the Machine" before reading it, since obviously a program executes exactly its code, even in the context of AI. Also obviously, the program can still appear not to do what it's supposed to if "supposed" is taken to mean the programmer's intent.

These statements don't ignore machine learning; they imply that we should not try to build an FAI using current machine learning techniques. You're right, we understand (program + parameters learned from dataset) even less than (program). So while the outside view might say: "current machine learning techniques are very powerful, so they are likely to be used for FAI," that piece of inside view says: "actually, they aren't. Or at least they shouldn't." ("learn" has a precise operational meaning here, so this is unrelated to whether an FAI should "learn" in some other sense of the word).

Again, the fact that a development has been successful or promising in some field doesn't mean it will be as successful for FAI, so imitation of the human brain isn't necessarily good here. Reasoning by analogy and thinking about evolution are also unlikely to help; nature may have given us "goals", but they are not goals in the same sense as: "The goal of this function is to add 2 to its input," or "The goal of this program is to play chess well," or "The goal of this FAI is to maximize human utility."

Comment by pangel on Bragging Thread March 2015 · 2015-03-08T14:16:32.818Z · LW · GW

Congratulations!

Comment by pangel on LessWrong help desk - free paper downloads and more · 2013-09-21T11:11:30.173Z · LW · GW

Thank you!

Comment by pangel on [LINK] Larry = Harry sans magic? Google vs. Death · 2013-09-18T23:48:08.990Z · LW · GW

I have met people who explicitly say they prefer a smaller gap between themselves and the better-off over a higher absolute level for themselves. IIRC they were more concerned about 'fairness' than about what the powerful might do to them. They also believed that most people would agree with them (I believe the opposite).

Comment by pangel on LessWrong help desk - free paper downloads and more · 2013-09-18T12:19:17.617Z · LW · GW

Gentzen’s Cut Elimination Theorem for Non-Logicians

Knowledge and Value, Tulane Studies in Philosophy Volume 21, 1972, pp 115-126

Comment by pangel on To what degree do you model people as agents? · 2013-08-28T21:39:40.128Z · LW · GW

Being in a situation somewhat similar to yours, I've been worrying that my lowered expectations about others' level of agency (together with elevated expectations as to what constitutes a "good" level of agency) have an influence on those I interact with: if I assume that people are somewhat influenced by what others expect of them, I must conclude that I should behave (as far as they can see) as if I believed them to be as capable of agency as myself, so that their actual level of agency improves. This would work on me; for instance, I'd be more prone to take initiative if I saw trust in my peers' eyes.

Comment by pangel on Improving Enjoyment and Retention Reading Technical Literature · 2013-08-08T09:52:09.266Z · LW · GW

There is an animated series for children, aimed at explaining the human body, which personifies bacteria, viruses, etc. Anyone interested in pursuing your idea may want to pick up techniques from the show:

Wikipedia article: http://en.wikipedia.org/wiki/Once_Upon_a_Time..._Life

Example: http://www.youtube.com/watch?v=LIyvrcHnriE&t=1m11s

Comment by pangel on Harry Potter and the Methods of Rationality discussion thread, part 24, chapter 95 · 2013-07-18T05:52:52.060Z · LW · GW

So MoR might be a meta-fantasy of the wizarding world in the way The Sword of Good is a meta-fantasy of the muggle world. Or at least, MoR!Harry might make the same impression on a wizard reading one fic as Hirou does on a muggle reading the other.

Although my instinct is still that Harry fails at the end.

Comment by pangel on Harry Potter and the Methods of Rationality discussion thread, part 23, chapter 94 · 2013-07-08T13:40:31.988Z · LW · GW

Or Harry transfigured Hermione's body into a rock and then the rock into a brown diamond. Unless the story explicitly disallows double transfigurations and I missed it.

Comment by pangel on Meetup : Paris Meetup: Sunday, May 26. · 2013-05-16T20:54:11.144Z · LW · GW

I'll be there as well.

Comment by pangel on Harry Potter and the Methods of Rationality discussion thread, part 17, chapter 86 · 2012-12-21T18:00:34.175Z · LW · GW

Sounds right, but the present-day situation is the same: orbs may float to you if and only if you enter the Hall. So Dumbledore should know whether he is involved in the prophecy or not. Unless I missed something?

Comment by pangel on Harry Potter and the Methods of Rationality discussion thread, part 17, chapter 86 · 2012-12-18T23:47:01.675Z · LW · GW

... a great room of shelves filled with glowing orbs, one after another appearing over the years. (...) Those mentioned within a prophecy would have a glowing orb float to their hand, and then hear the prophet's true voice speaking.

I interpret it as: Anyone who enters this room sees a glowing orb float to their hand for every prophecy that mentions them. How do you interpret it?

Comment by pangel on Harry Potter and the Methods of Rationality discussion thread, part 17, chapter 86 · 2012-12-17T22:37:24.265Z · LW · GW

"Those who are spoken of in a prophecy, may listen to that prophecy there. Do you see the implication, Harry?"

Shouldn't Minerva see another implication, that Dumbledore has no reason to wonder whether he is the dark lord of the prophecy?

Comment by pangel on Meetup : Paris Meetup · 2012-08-22T23:26:47.266Z · LW · GW

Same here.

Comment by pangel on Mentioning cryonics to a dying person · 2012-08-11T10:07:37.005Z · LW · GW

Thank you for the link! Note that the .pdf version of the article (which is also referenced in dbaupp's link) has a record of the "hostile-wife" cases over a span of 8 years.

Comment by pangel on Mentioning cryonics to a dying person · 2012-08-09T14:32:59.166Z · LW · GW

Women don't like cryonics.

What made you believe this? Is there a pattern to the declared reasons?

Comment by pangel on What Is Signaling, Really? · 2012-07-10T15:45:10.842Z · LW · GW

The fictional college in the article selects incoming students only on price.

Comment by pangel on How to deal with non-realism? · 2012-05-22T18:33:08.717Z · LW · GW

I had the exact same argument with my girlfriend (a bad idea) a while ago and asked on the IRC channel for references to point her to. I was given The Simple Truth and The Relativity of Wrong.

So I was about to write a very supportive response when I saw Mitchell Porter's comment. And this

(...) the children of post-Christian agnostics grow up to be ideologically aggressive posthuman rationalists.

aptly describes recent interactions I've had with my father¹. The accusation of narrowmindedness was present.

So, recurring conflicts with friends and family because of a newfound perspective on, well, everything? Values quickly changing as a consequence of new beliefs on what is true and what is not? Assuming we are in the they-were-right-this-time subgroup of this cliché, there must be smarter ways of dealing with it than making ourselves look crazy in front of the people who care about us.

¹ Except he's a raging atheist but has never propagated the consequences of this belief to his philosophy.

Comment by pangel on Harry Potter and the Methods of Rationality discussion thread, part 16, chapter 85 · 2012-04-20T13:21:04.689Z · LW · GW

I see your point. As an author I would think I'm misdirecting my readers by doing that though; "Voldemort has the same deformity as in canon? He's been playing with Horcruxes!" is the reasoning I would expect from them. Which is why I would, say, remove Quirrell's turban as soon as my plot had Voldemort not on the back of Quirrell's head.

Comment by pangel on Harry Potter and the Methods of Rationality discussion thread, part 16, chapter 85 · 2012-04-20T01:04:41.758Z · LW · GW

The soul-mangling is what causes Voldemort's snake-like appearance, IIRC, and MoR!McGonagall remembers a snake-like Voldemort from her battles. So either MoR!Voldemort has been doing some serious damage to his soul, or he decided to look freakish just for effect and stumbled by chance upon the exact same look which canon!Voldemort got from making Horcruxes.

Comment by pangel on Interesting rationalist exercise about French election · 2012-04-16T22:43:57.695Z · LW · GW

As an anecdote, I had a slight opposite tendency to go for what seemed like the worst answer, and I had to switch answers twice because of it.

Comment by pangel on Fictional Bias · 2012-04-01T04:40:54.869Z · LW · GW

I understood the introductory question as "Frodo Baggins from the Lord of the Rings is buying pants. Which of these is he most likely to buy?", and correctly answered (c). I suggest rephrasing your question to ensure that it actually tests the reader's fictional bias. Also, Szalinski in Journal of Cognitive Minification is a nice one.

Comment by pangel on The AI design space near the FAI [draft] · 2012-03-18T15:01:07.145Z · LW · GW

Unless its utility function has a maximum, we are at risk. Observing Mandelbrot fractals, for instance, is probably enhanced by having all the atoms of a galaxy play the role of pixels.

Would you agree that unless the utility function of a random AI has a (rather low) maximum, and barring the discovery of infinite matter/energy sources, its immediate neighbourhood is likely to get repurposed?

I must say that at least I finally understand why you think botched FAIs are riskier than other AIs.

But consider, as Ben Goertzel mentioned, that nobody is trying to build a random AI. Whatever achieves AGI level is likely to have a built-in representation of humans and a tendency to interact with them. To check whether I understood you correctly: does the previous sentence make it more likely that any future AGI will be destructive?

Comment by pangel on Where are we? · 2011-05-03T21:36:01.286Z · LW · GW

I am too.