Posts
Comments
An appropriate post: I've come to find EY's posts very boring. Subtle, intelligent, all that, sure. A mind far finer than my own, sure. But it never gets anywhere, never goes anywhere. He spends so much time posting that he's clearly not moving AI forward. His book is still out of sight, two years down the line. I can understand the main thrust of his posts, and the comments, if I invest enough; my intelligence and knowledge are just about up to that. But why bother? It's sterile. Boredom = sterility. As for Robin's comment, which is pertinent and bears on the real world of lived emotions, the connection is that boredom is not a result of what you are doing; it's a result of what you're not doing. Think about it.
We do not know that the territory is single-level. It is conceivable that it is not, and the available evidence does not exclude the possibility.
The territory is single-level... BY DEFINITION... waaaahahahahahahahahahaha!!!
Things in thingspace commonly coming within the boundary 'free will':
- moral responsibility
- could have done otherwise
- possible irrational action
- possible self-sacrificial action
- gallantry and style (thanks to Kurt Vonnegut for that one)
- non-caused agency
- I am a point in spacetime and my vector at t+1 has no determinant outside myself
- whimsy
- 'car c'est mon bon désir' ('for such is my good pleasure')
- absolute monarchy
- you can put a gun at my head and I'll still say 'no'
- idealistic non-dualism
- consciousness subtending matter
- disagreeing with Mum & Dad
- disagreeing with the big Mom & Pop up there in the White House
- armed response
- no taxation without representation... no taxation even with representation (daft)
- 'No dear, not tonight, I've got a headache...'
aw hell, just go read Dennett, 'Elbow Room'; he did it better than I could.
You've forgotten one important caveat in the phrase "And the way to carve reality at its joints, is to draw your boundaries around concentrations of unusually high probability density in Thingspace." The important caveat is: 'boundaries around where concentrations of unusually high probability density lie, to the best of our knowledge and belief'. All the imperfections in categorisation in existing languages come from that limitation. Other problems in categorisation, like those of Antonio in 'The Merchant of Venice', or those of the founding fathers who wrote that it is 'self-evident that all men are created equal' while being slave owners, do not come from language problems in categorisation (they would have acknowledged that Shylock or the slaves were human) but from different types of cognitive compromise. Apart from that, it's an intellectually satisfying approach, and you might, if you persevere, end up with a poor relation to an existing language. Why a poor relation? Because it would lack nuance, ambiguity, and redundancy, which are the roots of poetry. It would also lack words for the surprising but significant improbable phenomenon, like genius, or albino. Then again, once you get around to saying you will have words for significant low hills of probability, the whole argument blows away. Bon courage (good luck).
Firstly, saying "you can define a word any way you want" is not the same thing as "any way which is meaningful to you". Secondly, I don't believe the development on entropy has anything to do with the convenience of using short words for often-used concepts. "Chair" is a meaningful piece of jointed reality not because of its intrinsic physical properties but because of its handiness for humans; a dolphin would find "chair" as a significant piece of jointed reality absurd. Thirdly, there is an obvious distinction between using language descriptively and using it predictively. I would agree with you that mistakes often arise when moving from the descriptive to the predictive incautiously. That doesn't, however, make the descriptive use of language invalid, or even unwise: 98% of the use of language is descriptive. (I have proof of that statistic, but it won't fit in this margin.)
Psychoh, do not despair. Remember: "The real challenge can be played as a single-player game, without speaking aloud." We are looking for the natural joints of reality, and that is a purely subjective assessment. Every single pair of phenomena in the Universe can be the subject of a natural joint if the difference in one of their attributes happens to be a salient division for you. So draw the line around Christmas any way you want, just like you can draw the line around 'food things living in the sea' any way which is relevant to your way of fishing. Just don't speak it aloud.
While we're staking out the new language, I want a word for red flowers, because I like red flowers, and that is much more important to me than their genotype or taxonomy. Also, I want a special word for slightly-out-of-focus photos, which is a very important category for reasons I'm not at liberty to disclose. The joints of reality are articulated in a rather large number of dimensions. Carving it correspondingly is going to need one heck of a... dictionary.
Ben, Rolf, no problem, I just thought that 'people who look at dictionaries' was starting to be a category subject to sneaky connotations... :)
I'll second Frank Hirsch's comment and add one point. I don't get this obsession with 'dictionary definitions' either. An etymological dictionary is endlessly fascinating precisely because it shows you the evolution of thought processes, concepts, and word usages in action. Very much the opposite of the sort of table-thumping that dictionaries are here supposed to give rise to. Eliezer's examples seem to be taken from a pretty toxic discussion environment.
So if we have 100 pieces of information about phenomenon A, then we have 100 separate, weaker or stronger, potential categorisations, each with its own set of potential, weaker or stronger, inferences. All legit and above board, nothing sneaky about it. One could imagine the interactions of these 100 sets of inferences as a multi-dimensional interference pattern, with some nodes glowing brightly as inferences reinforce, others vanishing completely. The 101st piece of information will bring its own potential categorisation and an additional set of potential inferences. The alternative, I suppose, is just buying a whole truckload of hemlock and going round paying calls on all my friends...
Eliezer seems to want us to strike out some category of words from our vocabulary, but the category is not well defined. Perhaps a meta-Taboo game is necessary to find out what the heck we are supposed to be doing without. I'm not too bothered; grunting and pointing are reasonably effective ways of communicating. Who needs words?
Albert's and Barry's different usages of the word 'sound' are both perfectly testable. Once they've taken the reasonable and sufficient step of looking 'sound' up in a dictionary, and identified the two (out of many) possible meanings they were using, one can go off and test for the presence of pressure waves in the air, while the other tests for auditory perceptions in the humans (and/or other animals endowed with hearing) nearest to the event. They can later compare their results: Albert will say 'there was sound according to the definition I was using (Webster: sound(1) 1a)', while Barry can happily agree while saying there wasn't, according to the definition he was using (Webster: sound(1) 1b). Having got that over with, they will go off for a beer at the nearest bar and have a good laugh over that time-travelling guy's not even knowing how to use a dictionary...
Silas, billswift, Eliezer does say, introducing his diagrams in the Neural Categories post : "Then I might design a neural network that looks something like this:"
Again, very interesting. A mind composed of type-1 neural networks looks as though it wouldn't in fact be able to do any categorising, so wouldn't be able to do any predicting, and so would be pretty dumb and lead a very Hobbesian life...
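One rough way to see the cost of the un-categorising design (a sketch; the function names and counts are my own illustration, following only the general shape of the two diagrams): a network that wires every pair of observable features directly needs far more connections, each learned separately, than one that routes everything through a single central category node.

```python
# Illustrative connection counts for the two network shapes (my own framing,
# not EY's figures): direct pairwise wiring vs. one central category node.
def pairwise_connections(n_features: int) -> int:
    """Network 1 style: one link per pair of observable features."""
    return n_features * (n_features - 1) // 2

def central_node_connections(n_features: int) -> int:
    """Network 2 style: one link per feature, all through a category node."""
    return n_features

for n in (5, 20, 100):
    print(n, pairwise_connections(n), central_node_connections(n))
# → 5 10 5
# → 20 190 20
# → 100 4950 100
```

The pairwise version has to learn every association independently, which is one way of cashing out why a mind built only of such networks would never get around to predicting anything.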
The primary categorisation is "Threat / Not a threat", and the main categorisation bias is "Better safe than sorry". You'll find that many of your specific categorisation biases are particular examples of that. Examples are: nervousness about your Great Thing being a cult, or Asch experiment situations where you have to join the group or stick out from it. Diagram 1b has 'Threat' written all over it...
To summarise: a storm in a teacup between a pot and a kettle.
Excellent post; however: "But people often don't realize that their argument about where to draw a definitional boundary, is really a dispute over whether to infer a characteristic shared by most things inside an empirical cluster..." Indeed so, but there are other aspects. Humans also have obsessions with (a) how far your cluster is from mine (kinship or the lack of it), and (b) given one empirical cluster, how can I pick a characteristic, however minor, which will allow me to split it into 'us vs them' (Robbers Cave). So when you get to discussing whether an uploaded human brain is part of the cluster 'human', those are the considerations which will be foremost.
I sense these 6 essays on cognitive semantics are going to bring us back to transhumanism sooner or later. As of right now, whatever the radial distance from the prototype, and except on the Island of Dr Moreau, you are DEFINITELY human or definitely not, definitely a bird or definitely not. Pluto is DEFINITELY a pla... whoops.
Thanks for the stuff on typicality, interesting. Just as a side thought, I suspect this has a bearing on Robin's recent post on complexity in political discourse. If one 'plank' of a candidate's position becomes 'typical' of his whole set of ideas, then that gives strength and coherence to Candidate X as a concept.
I'm a little puzzled by all the above. A prediction market is supposed to squeeze out the last drop of (solvent) expert knowledge concerning a given outcome, but voting is a complex and chaotic phenomenon where 'expert knowledge' is thin on the ground or non-existent. If anyone could reliably forecast election results, I think we'd know about it by now. Think weather forecasting, as a comparison. So we're left with, at best, a few people who really believe in their favorite groundhog and are prepared to place serious money, and others who are prepared to have a flutter on their hunch or their hopes. The result, even ignoring any possible manipulation, looks awfully like a poll of forecasts, distorted by individuals' liking for betting and amounts of spare cash as selection biases. Polls of forecasts are less reliable than polls of intentions. And as for Phil, who seems to think not betting is suspect, perhaps un-American, or perhaps downright criminal: consider that many people have a conservative mindset, don't bet as a general rule, and so just couldn't be bothered. Apart from which, my Dad told me you should work for a living and not bet, and my Dad is an Authority.
I suppose I should add, for those who are really stuck in maths or formal logic, that changing the definition of a symbol in a formal system is not the same thing as changing the meaning of a word in a language. In fact you can't, individually and as a decision of will, change the meaning of a word in a language. It either changes, as per my previous comment, or it doesn't.
Reactions to 500lb stripy feline things jumping unexpectedly come from pre-verbal categorisations (the 'low road', in Daniel Goleman's terms), so have nothing to do with word definitions. The same is true for many highly emotionally charged categorisations (e.g., for a previous generation, a person with skin colour different from mine...). Words themselves do get their meanings from networks of associations. The content of these networks can drift over time, for an individual as for a culture. Words change their meanings. A deliberate attempt to change the meaning of a word by introducing new associations (e.g. via the media) can be successful. Changes in the meanings of political labels, or the associations with a person's name, are good examples. Whether the direct amygdala circuit can be reprogrammed is a different matter. Certainly not as easily as the neocortex. If you lived in the world of Calvin and Hobbes for six months, would you start to instinctively see large stripy feline things jumping out at you unexpectedly as an invitation to play?
Hollerith, if 'most psychologists are idiots', I wonder how they discovered all the cognitive biases?
Under Many-Worlds, aren't you condemned, whatever you do or don't do, to there being a number tending to infinity of worlds where what you want to protect is protected, and a number tending to infinity where it is not?
Eliezer, I don't read the main thrust of your post as being about Newcomb's problem per se. Having distinguished between 'rationality as means' to whatever end you choose, and 'rationality as a way of discriminating between ends', can we agree that the whole specks/torture debate was something of a red herring? A red herring because it was a discussion on using rationality to discriminate between ends, without having first defined one's meta-objectives, or, if one's meta-objectives involved hedonism, without establishing the rules for performing math over subjective experiences. To illustrate the distinction using your other example, I could state that I prefer to save 400 lives certainly, simply because the purple fairy in my closet tells me to (my arbitrary preferred objective), and that would be perfectly legitimate. It would only be incoherent if I also declared it to be a strategy which would maximise the number of lives saved if a majority of people adopted it in similar circumstances (a different arbitrary preferred objective). I could in fact have as my preferred meta-objective for the universe that all the squilth in flobjuckstooge be globberised, and that would be perfectly legitimate. An FAI (or a BFG, for that matter (Roald Dahl, not Tom Hall)) could scan me, work towards creating the universe in which my proposition is meaningful, and make sure it happens. If now someone else's preferred meta-objective for the universe is ensuring that the princess on page 3 gets a fairy cake, how is the FAI to prioritise?
A Utilitarian should care about the outcomes of Utilitarianism... and yes, as soon as ends justify means, you do get Stalin, Mao, Pol Pot, who were all striving for good consequences... Which is relevant, as your arguments mostly involve saving lives (a single type of outcome, so making options intuitively comparable). I'm afraid the 'rant' doesn't add much in terms of content, that I can see.
An AGI project would presumably need a generally accepted, watertight, axiom-based formal system of ethics, whose rules can reliably be applied right up to limit cases. I am guessing that that is the reason why Eliezer et al. are arguing from the basis that such an animal exists.
If it does, please point to it. The FHI has ethics specialists on its staff; what do they have to say on the subject?
Based on the current discussion, such an animal, at least as far as 'generally accepted' goes, does not exist. My belief is that what we have are more or less consensual guidelines which apply to situations and choices within human experience. Unknown's examples, for instance, tend to be 'middle of the range' ones. When we get towards the limits of everyday experience, these guidelines break down.
Eliezer has not provided us with a formal framework within which summing over single experiences for multiple people can be compared to summing over multiple experiences for one person. For me it stops there.
Great New Theorem in color perception: adding together 10 people's perceptions of light pink is equivalent to one person's perception of dark red. This is demonstrable, as there is a continuous scale between pink and red.
The answer to 'shut up and multiply' is 'that's the way people are, deal with it'. One thing apparent from these exchanges is that 'inferential distance' works both ways.
To get back to the 'human life' examples EY quotes. Imagine instead the first scenario pair as the last lifeboat on the Titanic. You can launch it safely with 40 people on board, or load in another 10 people, who would otherwise die a certain, wet, and icy death, and create a 1-in-10 chance that it will sink before the Carpathia arrives, killing all. I find that a strangely more convincing case for option 2. The scenarios as presented combine emotionally salient and abstract elements, with the result that the emotionally salient part will tend to be foreground and the '% probabilities' background. After all, no-one ever saw anyone who was 10% dead (jokes aside).
Put baldly, the main underlying question is: how do you compare the value of (a) a unit of work expended now, today, on the well-being of a person alive now, with the value of (b) the same unit of work expended now, today, for the well-being of 500 potential people who might be alive in 500 years' time, given that units of work are in limited supply? I suspect any attempt at a mathematical answer would only be an expression of a subjective emotional preference. What is more, the mathematical answer wouldn't be a discount function; it would be a compounding function, since it would come from comparing all the AI units of work available between now and a future time t with the units of work required between now and t to address all the potential needs of humanity and trans-humanity between now and the end of time, which looks seriously like infinity.
Ben Jones, and Patrick (orthonormal), if you offer me $400 I'll say 'yes, thank you'. If you offer me $500 I'll say 'yes, thank you'. If, from whatever my current position is after you've been so generous, you ask me to choose between "a certain loss of $100 or a 20% chance of losing $200", I'll choose the 20% chance of losing $200. That's my math, and I accept money orders, wire transfers, or cash...
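For what it's worth, the dollar amounts above can be run through a quick expected-value check (a sketch; the framing of the three positions as probability-weighted outcomes is my own, and the percentages are kept as whole numbers to avoid floating-point noise):

```python
def expected_value(outcomes):
    """Probability-weighted average; probabilities given in whole percent."""
    return sum(pct * value for pct, value in outcomes) / 100

take_400 = expected_value([(100, 400)])                 # certain $400
certain_loss = expected_value([(100, 500 - 100)])       # $500 then a sure $100 loss
gamble = expected_value([(80, 500), (20, 500 - 200)])   # $500 then 20% chance of -$200

print(take_400, certain_loss, gamble)  # → 400.0 400.0 460.0
```

So taking the $500 and then the gamble maximises expected dollars; whether expected dollars is the thing to maximise is, of course, the whole argument.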
James Bach, your point and EY's are not incompatible: it is a given that what you care about and give importance to is subjective and irrational; however, having chosen which outcomes you care about, your best road to achieving them must be Bayesian... perhaps. My problem with this whole Bayesian kick is that it reminds me of putting three masts and a full set of square-rigged sails on what is basically a canoe: the masts and sails are the Bayesian edifice, the canoe is our useful knowledge in any given real-life situation.
Risk aversion, and the degree to which it is felt, is a personality trait with high variance between individuals and over the lifespan. To ignore it in a utility calculation would be absurd. Maurice Allais should have listened to his homonym Alphonse Allais (no apparent relation), humorist and theoretician of the absurd, who famously remarked "La logique mène à tout à condition d'en sortir": logic leads to everything, provided you can get out of it.
Just to respond to the theme that 'right wing' is a meaningless label: not so. It originally arose from the seating arrangements in the French Assembly, where the right wing were the monarchists. Hence 'right wing' became generally accepted as a label for the authoritarian defence of a monarchic, aristocratic, or oligarchic power structure. As these power structures tended to be the ones in place, you get the confusion with Conservatism (e.g. the Tories). By a further semantic slide it came, for some, to mean any authoritarian power structure with power concentrated in the hands of the few, hence the lumping together of the various 20th-century dictatorships as right wing. For those who conceive the power of 'Big Business' to be oligarchic and oppressive, any political program favouring the large corporations is right wing. One source of confusion between 'right wing' and Libertarianism comes from the disingenuous protest that any politics which limit the power of the corporate world are 'attacking free enterprise' and thus attacking individual freedom. This is compounded by the myths attached to the notion of private property, where 'mine' as in 'my log cabin and my boots' is extended to 'my corporation over which I have Regalian powers' simply because I invested some bucks in it 30 years ago. Libertarianism as described here seems to be a peculiarly American movement, which would map somewhat but not completely to the European anarchists. Finally, of course individual politics are multi-dimensional. However, all countries which aren't dictatorships seem to end up with two-party systems, so all those dimensions have to be projected down, hopefully on a 'best-fit' basis, to the single axis most appropriate to the country in question.
Charlie (Colorado), I'd appreciate your thoughts on the difference between 'hard core libertarian' and 'right wing'. For me they map to pretty much the same territory, obviously not for you.
When one got past pre-adolescence, one realised that Heinlein's writing skills, such as they were, were in the service of a political philosophy somewhat to the right of Attila the Hun. Whatever floats your boat.
I just saw an incredibly beautiful sunset. I also see the beauty in some of EY's stuff. Does that mean the sunset was Bayesian, or indeed subject to underlying lawfulness? No, it only means my enhanced primate brain has a tendency to see beauty in certain things. Not that there is any more epistemic significance in a sunset than there is in a theorem.
OK thanks, nice intuition pump.
"only because our fundamental theory tells us quite definitely that different versions of us will see different results".
EY, on what do you base your 'quite definitely'? David Lewis?
Thanks for the beauty, it feels good. Some thinking out loud: I can't help but feel that the key is in the successive layers of maps and territories. Maths is (or contains) the map of which physics is the territory; physics is the map of which 'the real world' is the territory; 'the real world' is the map our brains create from the sensory input concerning the territory which is the 'play of energies' out there, while that in itself is another map. Antony Garrett Lisi's proposal, as an example, would be the most elegant meta-map yet. What these maps have in common is being created by the human brain, a wet lump of nervous tissue comprising ad-hoc, purpose-specific modules. It has specific ways of making maps, so small wonder all these layers of maps are coherent. Now if the 'mathematics' layer of maps has unforeseen and self-consistent properties, that could be just a manifestation of the nature of our map-making modules: they are rules-driven. So, is the Universe a geometric figure corresponding to the E8 Lie group, or does that just happen to be the way the human brain is built to interpret things?
The nature of 0 & 1 as limit cases seems to be fascinating for the theorists. However, in terms of 'Overcoming Bias', shouldn't we be looking at more mundane conceptions of probability? EY's posts have drawn attention to the idea that the amount of evidence needed for additional certainty about a proposition grows without bound as the probability approaches 1: each extra 'nine' of certainty costs about as much evidence as the last. In utilitarian terms, this says that not many situations will warrant chasing the additional information above 99.9% certainty (outside technical implementations in nuclear physics, rocket science, or whatever). 99.9% as a number is taken out of a hat. In human terms, when we say 'I'm 99.9% sure that 2+2 always = 4', we're not talking about 1000 equivalent statements. We're talking about one statement, with a spatial representation of what '100% sure' means with respect to that statement, and 0.1% of that representation allowed for 'niggling doubts' of the sort: what have I forgotten? What don't I know? What is inconceivable for me? The interesting question for 'overcoming bias' is: how do we make the tradeoff between seeking additional information on the one hand and accepting a limited degree of certainty on the other? As an example (cf. the Evil Lords of the Matrix), considering whether our minds are being controlled by magic mushrooms from Alpha Pictoris may someday increase the 'niggling doubt' range from 0.1% to 5%, but the evidence would have to be shoved in our faces pretty hard first.
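The 'each extra nine costs as much as the last' point can be made concrete in log-odds (a sketch; the bits-of-evidence framing assumes you start from a 50/50 prior, which is my simplifying assumption):

```python
import math

def bits_of_evidence(p: float) -> float:
    """Log-odds of p in bits: evidence needed to move a 50/50 prior to probability p."""
    return math.log2(p / (1 - p))

for p in (0.9, 0.99, 0.999, 0.9999):
    print(f"{p}: {bits_of_evidence(p):.2f} bits")
# Each extra 'nine' costs roughly log2(10) ≈ 3.32 more bits, with no upper limit,
# which is why certainty of exactly 1 is never reachable by finite evidence.
```

The practical upshot matches the comment: past some point, the next increment of certainty costs as much as everything that came before, and usually isn't worth buying.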