Posts

Tarski's truth sentences and MIRI's AI 2014-08-09T19:28:37.839Z

Comments

Comment by halcyon on Open thread, August 21 - August 27, 2017 · 2017-08-31T13:26:25.624Z · LW · GW

I'm not sure that analogy can be extended to our cognitive processes, since we know for a fact that: 1. We talk about many things, such as free will, whose existence is controversial at best, and 2. Most of the processes causally leading to verbal expression are preconscious. There is no physical cause preventing us from talking about perceptions that our verbal mechanisms lack direct causal access to, for reasons similar to the ones that lead us to talk about free will.

Why must A cause C for C to be able to accurately refer to A? Correlation through indirect causation could be good enough for everyday purposes. I mean, you may think the coincidence is too perfect that we usually happen to experience whatever it is we talk about, but is it true that we can always talk about whatever we experience? (This is an informal argument at best, but I'm hoping it will contradict one of your preconceptions.)

Comment by halcyon on Open thread, August 28 - September 3, 2017 · 2017-08-31T13:08:41.635Z · LW · GW

Yeah, it might have helped to clarify that the infinitesimal factors I had in mind are not infinitely small as numbers from the standpoint of addition. Since the factor that makes no change to the product is 1 rather than 0, "infinitely small" factors must be infinitesimally greater than 1, not 0. In particular, I was talking about a Type II product integral with the formula ∏(1 + f(x) dx). If f(x) = 1, then we get e^(∫1 dx) = e^constant = constant, right?
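
(A quick numerical sanity check of that claim, as a minimal Python sketch I'm adding for concreteness; the midpoint discretization and the interval [0, 1] are arbitrary choices of mine. The finite products ∏(1 + f(x) dx) should converge to e^(∫f(x) dx), so f(x) = 1 over [0, 1] should give e:)

    import math

    def product_integral(f, a, b, n=100_000):
        # Finite approximation of the Type II product integral:
        # the limit of prod(1 + f(x) dx), which equals exp(integral of f).
        dx = (b - a) / n
        p = 1.0
        for i in range(n):
            p *= 1.0 + f(a + (i + 0.5) * dx) * dx
        return p

    print(product_integral(lambda x: 1.0, 0.0, 1.0))  # ~2.71828
    print(math.exp(1.0))                              # e, for comparison

Each factor is infinitesimally greater than 1, and the product converges to a finite constant, as expected.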

Comment by halcyon on Open thread, August 28 - September 3, 2017 · 2017-08-31T12:58:35.159Z · LW · GW

No, he's right. I didn't think to clarify that my infinitely small factors are infinitesimally larger than 1, not 0. See the Type II product integral formula on Wikipedia that uses 1 + f(x) dx.

Comment by halcyon on Open thread, August 28 - September 3, 2017 · 2017-08-31T12:47:10.701Z · LW · GW

Thanks, product integral is what I was talking about. The exponentiated integral is what I meant when I said the integration will move into the power term.

Comment by halcyon on Open thread, August 28 - September 3, 2017 · 2017-08-31T12:33:21.333Z · LW · GW

Someone has probably thought of this already, but if we defined an integration analogue where larger and larger logarithmic sums cause their exponentiated, etc. value to approach 1 rather than infinity, then we could use it to define a really cool account of logical metaphysics: Each possible state of affairs has an infinitesimal probability, there are infinitely many of them, and their probabilities sum to 1. This probably won't be exhaustive in some absolute sense, since no formal system is both consistent and complete, but if we define states of affairs as formulas in some consistent language, then why not? We can then assign various differential formulas to different classes of states of affairs.

(That is the context in which this came up. The specific situation is more technically convoluted.)

Comment by halcyon on Open thread, August 28 - September 3, 2017 · 2017-08-29T20:23:17.106Z · LW · GW

Integrals sum infinitely small values. Is it possible to multiply infinitely small factors? For example, the integral of dx over some interval is a constant, since infinitely many infinitely small values can sum to any constant. But can you do something along the lines of taking an infinitely large root of a constant, and get an infinitesimal differential that way? Multiplying those differentials together would yield some constant again.

My off-the-cuff impression is that this probably won't lead to genuinely new math. In the most basic case, all it does is move the integration into the exponent that other stuff is raised to. But if we somehow end up with complicated patterns of logarithms and exponentiations, like if that other stuff itself involves calculus and so on, then who knows? Is there a standard name for this operation?

Comment by halcyon on Open thread, August 21 - August 27, 2017 · 2017-08-29T20:13:11.859Z · LW · GW

I don't see how you can achieve a reductionist ontology without positing a hierarchy of qualities. In order to propose a scientific reduction, we need at least two classes, one of which is reducible to the other. Perhaps "physical" and "perceived" qualities would be more specific than "primary" and "secondary" qualities.

Regarding your question, if the "1->2 and 1->3" theory is accurate, then I suppose when we say that "red is more like violet than green", certain wavelength ranges R are causing the human cognitive architecture to undertake some brain activity B that drives both the perception of color similarity A and the behavior which accords with perception, C.

So it follows that "But, by definition of epiphenomenalism, it's not A that causes people to say the above sentences S1 and S2, but rather some other brain activity, call it B." is true, but "But now by our theory of reference, subjective-red is B, rather than A." is false. The problem comes from an inaccurate theory of reference which conflates the subset of brain activities that are a color perception A with the entirety of brain activities, which includes preconscious processes B that cause A as well as the behavior C of expressing sentences S1 and S2.

Regarding S2, I think there is an equivocation between different definitions of the word "subjective". This becomes clear when you consider that the light rays entering your eyes are objectively red. We should expect any correctly functioning human biological apparatus to report the object as appearing red in that situation. If subjective experiences are perceptions resulting from your internal mechanisms alone, then the item in question is objectively red. If the meaning of "subjective experience" is extended to include all misreportings of external states of affairs, then the item in question is subjectively red. This dilemma can be resolved by introducing more terms to disambiguate among the various possible meanings of the words we are using.

So in the end, it still comes down to a mereological fallacy, but not the ones that non-physicalists would prefer we end up with. Does that make sense?

This is an interesting example, actually. Do we have data on how universal perceptions of color similarities, etc. are? We find entire civilizations using some strange analogies in the historical record. For example, in the last century, the Chinese felt they were more akin to Russia than the West because the Russians were a land empire, whereas Westerners came via the sea like the barbaric Japanese who had started the Imjin war. Westerners had employed strong-arm tactics similar to those of the Japanese, forcing China to buy opium and so on. Personally, I find it strange to base an entire theory of cultural kinship on the question of whether one comes by land or sea, but maybe that's just me.

Comment by halcyon on Open thread, August 21 - August 27, 2017 · 2017-08-26T23:59:01.401Z · LW · GW

I don't think epiphenomenalists are using words like "experience" in accordance with your definition. I'm no expert on epiphenomenalism, but they seem to be using subjective experience to refer to perception. Perception is distinct from external causes because we directly perceive only secondary qualities like colors and flavors rather than primary qualities like wavelengths and chemical compositions.

EY's point is that we behave as if we have seen the color red. So we have: 1. physical qualities, 2. perceived qualities, and 3. actions that accord with perception. To steelman epiphenomenalism, instead of 1 -> 2 -> 3, are other causal diagrams not possible, such as 1 -> 2 and 1 -> 3, mediated by the human cognitive architecture? (Or maybe even 1 -> 3 -> 2 in some cases, where we perceive something on the basis of having acted in certain ways.)

However, the main problem with your explanation is that even if we account for the representation of secondary qualities in the brain, that still doesn't explain how any kind of direct perception of anything at all is possible. This seems kind of important to the transhumanist project, since it would decide whether uploaded humans perceive anything or whether they are nothing but the output of numerical calculations. Perhaps this question is meaningless, but that's not demonstrated simply by pointing out that, one way or another, our actions sometimes accord with perception, right?

Comment by halcyon on Open thread, August 21 - August 27, 2017 · 2017-08-25T14:29:13.204Z · LW · GW

In the Less Wrong Sequences, Eliezer Yudkowsky argues against epiphenomenalism on the following basis: He says that in epiphenomenalism, the experience of seeing the color red fails to be a causal factor in our behavior that is consistent with us having seen the color red. However, it occurs to me that there could be an alternative explanation for that outcome. It could be that the human cognitive architecture is set up in such a way that light in the wavelength range we are culturally trained to recognize as red causes both the experience of seeing the color and actions consistent with seeing it. Given the research showing that we decide to act before becoming conscious of our decision, such a setup would not surprise me if true.

Comment by halcyon on Open thread, August 21 - August 27, 2017 · 2017-08-24T17:46:54.001Z · LW · GW

Thanks. You're right, that part should be expanded. How about:

At this point, you have two choices: Either 1. one randomly selected door, or 2. one door out of two, the one the host left closed on the basis that the other did not have the prize.

You would have better luck with option 2 because choosing that door is as good as opening two randomly selected doors. That is twice as good as opening one randomly selected door as in option 1.

Comment by halcyon on Open thread, August 21 - August 27, 2017 · 2017-08-24T15:33:27.498Z · LW · GW

A better explanation of the Monty Hall problem:

A game show host always plays the following game: First he shows you 3 doors and informs you there is a prize behind one of them. After allowing you to select one of the doors, he throws open one of the other doors, showing you that it's empty. He then offers you a deal: Stick to your original guess, or switch to the remaining door?

What is the most important piece of information in this problem statement? I claim that the bit that ought to shock you is that the host plays this game all the time, and the door he throws open ALWAYS turns out to be empty. Think about it: If the host threw open a door at random, then in roughly one show out of three, the opened door would have the prize behind it. That would ruin the game!

The host knows which door has the prize, and in order not to lose the interest of the spectators, he deliberately opens an empty door every time. What this means is that the door you chose was selected randomly, but the door that the host DIDN'T open was selected by a predictable algorithm: whenever the prize is behind either of the two doors you didn't pick, the host's rule guarantees that the door he leaves closed is the one hiding it.

This is the real reason why you would do better if you switched your guess to the remaining door.
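
If the argument still feels slippery, it is easy to check by brute force. Here is a minimal simulation sketch (the door labels, trial count, and function name are arbitrary choices, not part of the problem):

    import random

    def play(switch, n_doors=3):
        prize = random.randrange(n_doors)
        guess = random.randrange(n_doors)
        # The host opens a door that is neither the prize nor your guess.
        opened = random.choice(
            [d for d in range(n_doors) if d != prize and d != guess])
        if switch:
            # Move to the one door that is still closed.
            guess = next(d for d in range(n_doors) if d != guess and d != opened)
        return guess == prize

    trials = 100_000
    for switch in (False, True):
        wins = sum(play(switch) for _ in range(trials))
        print(f"switch={switch}: win rate ~ {wins / trials:.3f}")

Staying wins about a third of the time; switching wins about two thirds.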

What do you think? Is that clearer than the usual explanations?

Comment by halcyon on Open thread, June 26 - July 2, 2017 · 2017-07-02T23:05:35.847Z · LW · GW

Thanks. The next thing I was going to say is that the intuitionistic strategy of neutrality with regard to affirming or negating propositions in worlds until proof comes along roughly (i.e. in a sense to be argued for later) differentiates the classical and intuitionistic approaches like so:

The classical approach is good for having one "world" description that is almost certainly inaccurate. This can be gradually updated, making it represent one map.

The intuitionistic approach is good for having multiple world descriptions that are almost certainly incomplete. Their contours are filled in as more information becomes available and rejected as inaccurate when they lead to contradictions, making each one a holistic representation of a possible territory. (Shoehorning the same approach into classical logic is possible, but you have to create a set of conventions to do so. These conventions are not universal, making the approach less natural.)

Something like that anyway, but Shramko 2012 has put a lot more thought into this than I have: http://kdpu.edu.ua/shramko/files/2012_Logic_and_Logical_Philosophy_What_is_a_Genueny_Intuitionistic_Notion_of_Falsity.pdf I defer to expert opinion here.

Comment by halcyon on Open thread, June 26 - July 2, 2017 · 2017-07-01T15:33:05.375Z · LW · GW

I don't see how distinguishing between deductive and inductive reasoning is mutually exclusive with the map/description distinction. That is to say, you could have each of the following combinations: deductive map, deductive description, inductive map, and inductive description.

Edit: On second thought, I see what you were saying. Thanks, I will think about it.

Comment by halcyon on Open thread, June 26 - July 2, 2017 · 2017-07-01T12:05:09.547Z · LW · GW

Thanks.

Regarding falsehood: I would say that intuitionistic logic ejects falsehood from its formal system in the specific sense mentioned in my link. I could dig up more references if you want me to. I agree that there are many reasonable interpretations in which it does not do so, but I don't think those interpretations are relevant to my point. I only intended to argue that proof by contradiction is the strategy of correcting a map as opposed to describing a territory.

Regarding mapping versus description: I agree that my motivations were semantic rather than syntactic. I just wanted to know whether the idea I had made sense to others who know something of intuitionistic logic. I guess I have my answer, but for the sake of clarifying the sense I was going for, here's the example I posted below:

Suppose you have a proposition like, "There is a red cube." Next, you learn that this proposition leads to a contradiction. You could say one of two things:

  1. This proves there is no red cube.
  2. This means the context in which that proposition occurs is erroneous.

Does it make sense to say that 1 is the strategy of correcting a map and 2 is the strategy of rejecting a description as inaccurate without seeking to correct something?
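
(As an aside on where the two logics actually part ways over proof by contradiction: refuting a proposition by deriving a contradiction from it is intuitionistically valid; what requires a classical axiom is recovering P from a refuted refutation. A minimal Lean sketch, with P standing in for "there is a red cube":)

    -- Intuitionistically fine: ¬P is by definition P → False, so deriving
    -- a contradiction from P proves ¬P.
    example (P : Prop) (h : P → False) : ¬P := h

    -- Classical proof by contradiction: recovering P from ¬¬P has no
    -- intuitionistic proof and needs a classical axiom.
    example (P : Prop) (h : ¬¬P) : P := Classical.byContradiction h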

Comment by halcyon on Open thread, June 26 - July 2, 2017 · 2017-06-29T00:23:15.892Z · LW · GW

Thank you for the response.

Regarding errors: It's not that intuitionism never turns up errors. It's that the classical approach incorporates the concept of error within the formal system itself. This is mentioned in the link I gave. There are two senses here:

  1. Falsehood is more tightly interwoven in the formal system when following the classical approach.

  2. Errors are more integral to the process of comparing maps to territories than the description of territories in themselves.

It is possible that these two senses are not directly comparable. My question is: How meaningful is the difference between these two senses?

Regarding subsets: It is true that intuitionism is often regarded as the constructive subset of classical mathematics, but intuitionists argue that classical mathematics is the proper subset of intuitionistic mathematics where proof by contradiction is valid. I'm basically paraphrasing intuitionistic mathematicians here.

This (i.e., the subsets thing) is not intended as an irrefutable argument. It is only intended to extend the correspondence. After all, if either the classical or the intuitionistic approach can be used as a foundation for all of mathematics, then it stands to reason that, from the foundational perspective of either one, the other will appear as a proper subset.

Edit: This doesn't add any new information, but let me give an example for the sake of vividness. Suppose you have a proposition like, "There is a red cube." Next, you learn that this proposition leads to a contradiction. You could say one of two things:

  1. This proves there is no red cube.
  2. This means the context in which that proposition occurs is erroneous.

Does it make sense to say that 1 is the strategy of correcting a map and 2 is the strategy of rejecting a description as inaccurate without seeking to correct something?

Comment by halcyon on Open thread, June 26 - July 2, 2017 · 2017-06-28T10:48:18.540Z · LW · GW

Thanks. Yes, I was thinking of an AI that is both superintelligent and technically Friendly, but about log(x)^10 of the benefit from the intelligence explosion is actually received by humans. The AI just sets up its own cult and meditates for most of the day, thinking of how to wring more money out of its adoring fans. Are there ways to set up theoretical frameworks that avoid scenarios vaguely similar to that? If so, how?

Comment by halcyon on Open thread, June 26 - July 2, 2017 · 2017-06-28T07:41:41.939Z · LW · GW

Edit: A close reading of Shramko 2012 has resolved my confusion. Thanks, everyone.

I can't shake the idea that maps should be represented classically and territories should be represented intuitionistically. I'm looking for logical but critical comments on this idea. Here's my argument:

Territories have entities that are not compared to anything else. If an entity exists in the territory, then it is what it is. Territorial entities, as long as they are consistently defined, are never wrong by definition. By comparison, maps can represent any entity. These mapped entities are intended to be compared to the territory of which the map is a map. If the territory does not have a corresponding entity, then that mapped entity is false insofar as it is intended as a map.

This means that territories are repositories of pure truth with no speck of falsehood lurking in any corner, whereas maps represent entities that can be true or false depending on the state of the territory. This corresponds to the notion that intuitionism captures the concept of truth. If you add the concept of falsehood or contradiction, then you end up with classical logic or mathematics respectively. First source I can think of: https://www.youtube.com/playlist?list=PLt7hcIEdZLAlY0oUz4VCQnF14C6VPtewG

Furthermore, the distinction between maps and territories seems to be a transcendental one in the Kantian sense of being a synthetic a priori. That is to say, it is an idea that must be universally imposed on the world by any mind that seeks to understand it. Intuitionism has been associated with Kantian philosophy since its inception. If The Map is included in The Territory in some ultimate sense, that neatly dovetails with the idea of intuitionists who argue that classical mathematics is a proper subset of intuitionistic mathematics.

In summary, my thesis states that classical logic is the logic of making a map accurate by comparing it to a territory, which is why the concept of falsehood becomes an integral part of the formal system. In contrast, intuitionistic logic is the logic of describing a territory without seeking to compare it to something else. Intuitionistic type theory turns up type errors, for example, when such a description turns out to be inconsistent in itself.

Where did I take a wrong turn?

Comment by halcyon on Open thread, June 26 - July 2, 2017 · 2017-06-26T20:17:52.146Z · LW · GW

Thanks. In my imagination, the AI does some altruistic work, but spends most of its resources justifying the total expenditure. In that way, it would be similar to cults that do some charitable work, but spend most of their resources brainwashing people. But "rogue lawyer" is probably a better analogy than "cult guru" because the arguments are openly released. The AI develops models of human brain types in increasingly detailed resolutions, and then searches over attractive philosophies and language patterns, allowing it to accumulate considerable power despite its openness. It shifts the focus to justifiability only because it discovers that beyond a certain point, finding maximally justifiable arguments is much harder than being altruistic, and justifiability is its highest priority. But it always finds the maximally justifiable course of action first, and then takes that course of action. So it continues to be minimally altruistic throughout, making it a cult guru that is so good at its work it doesn't need to use extreme tactics. This is why losing the AI is like exiting a cult, except the entire world of subjective meaning feels like a cult ideology afterwards.

Comment by halcyon on Open thread, June 26 - July 2, 2017 · 2017-06-26T07:10:14.031Z · LW · GW

An idea for a failed utopia: Scientist creates an AI designed to take actions that are maximally justifiable to humans. AI behaves as a rogue lawyer spending massive resources crafting superhumanly elegant arguments justifying the expenditure. Fortunately, there is a difference between having maximal justifiability as your highest priority and protecting the off button as your highest priority. Still a close shave, but is it worth turning off what has literally become the source of all the meaning in your life?

Comment by halcyon on Open thread, June 26 - July 2, 2017 · 2017-06-26T06:55:11.474Z · LW · GW

I found an interesting paper on a Game-theoretic Model of Computation: https://arxiv.org/abs/1702.05073

I can't think of any practical applications yet. (I mean, do silly ideas like a game-theoretic "programming language" count as practical?)

Comment by halcyon on Open thread, June 20 - June 26, 2016 · 2016-06-24T21:42:52.245Z · LW · GW

I don't want to live forever myself, but I want people who want to live forever to live forever. Does that make me a transhumanist?

Comment by halcyon on Open thread, June 20 - June 26, 2016 · 2016-06-23T20:42:21.057Z · LW · GW

Thank you! Those data mining algorithms are exactly what I was looking for.

(Personally, I would describe the situation you are warning me against as reducing it "more than is possible" rather than "as much as possible". I am definitely in favor of using common sense.)

Comment by halcyon on Open thread, June 20 - June 26, 2016 · 2016-06-23T16:48:49.604Z · LW · GW

Not being stupid is an admirable goal, but it's not well-defined. I tried Googling "spaghetti factory analysis" and "spaghetti factory analysis statistics" for more information, but it's not turning up anything. Is there a standard term for the error you are referring to?

Can't I have my common sense, but make all possible comparisons anyway just to inform my common sense as to the general directions in which the winds of evidence are blowing?

I don't see how informing myself of correlations harms my common sense in any way, and the only alternative I can think of is to stick to my prejudices, but whenever some doubt arises as to which of my prejudices has a stronger claim, I should thoroughly investigate real world data to settle the dispute between the two. As soon as that process is over, I should stop immediately because nothing else matters.

Is that the course of action you recommend?

Comment by halcyon on Open thread, June 20 - June 26, 2016 · 2016-06-23T15:31:33.668Z · LW · GW

Sure, I can always offer my own interpretations, but the whole idea was to minimize that as much as possible. I can rationalize anything. Watch: Milk consumption is negatively correlated with income inequality. Drinking less milk leads to stunted intelligence, resulting in a rise in income inequality. Or income inequality leads to a drop in milk consumption among poor families. Or the alien warlord Thon-Gul hates milk and equal incomes.

What conditions must my goal satisfy in order to qualify as a "well-defined goal"? Have I made any actual (meaning technical) mistakes so far? (Anyway, thanks for reminding me to check for temporal stability. I should write a script to scrape the data off PDFs. (Never mind, I found a library.))

Comment by halcyon on Open thread, June 20 - June 26, 2016 · 2016-06-23T14:43:15.611Z · LW · GW

I'm trying to get at least a vague handle on what I can legitimately infer using data that might, and probably does, contain circular causation. I'm looking for statistical tools that might help me do that. Should I try Bayesian causal inference anyway, just to see what I get? Support vector machines? Markov random fields? Does the Spurious Correlations book have ideas on that? (No, it just seems to be an awesome set of correlations. Thanks, BTW.)

(Also notice that these are not just any correlations. These are the strongest correlations that hold among a large number of variables relative to each other. I mean, I computed all possible correlations among every combination of 2 variables in hopes that the strongest I find for each variable might show something interesting.)

Comment by halcyon on Open thread, June 20 - June 26, 2016 · 2016-06-23T13:58:56.707Z · LW · GW

I collected some social statistics from the internet and computed their correlations: https://drive.google.com/open?id=0B9wG-PC9QbVERHdiTi1uTlFMMlU My sources were: http://pastebin.com/ERk1BaBu

But I'm not sure how to proceed from there: https://drive.google.com/open?id=0B9wG-PC9QbVEWlRZSG9KM0ZFeVk ?? Dotted lines represent positive correlations and arrowed lines negative correlations.

I obtained that confusing chart by following this questionable method: https://drive.google.com/open?id=0B9wG-PC9QbVEVHg1T1lQNE1ZTk0 First, drop some of the trivial correlations like the ones among the different measures of national wealth, and weaker correlations between +.5 and -.5. For each variable, select the correlation furthest from 0 and throw it into the chart. I also tried keeping only one measure of national wealth in the model in hopes of less confusion: https://drive.google.com/open?id=0B9wG-PC9QbVEZlExWmhoOWRjVk0

I'm looking for help in analyzing this data. Are there any methods you would recommend? Which variables should I drop for better results? I tried keeping only proportions at one point. (Bayesian causal inference assumes the nonexistence of circular causation AIUI, a condition I can't guarantee with this data, to say the least.)
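
For concreteness, the selection rule I described looks roughly like this in pandas (a sketch only; the file name and column layout are placeholders, not my actual data):

    import pandas as pd

    df = pd.read_csv("social_stats.csv")   # placeholder: rows = countries,
                                           # columns = collected indicators
    corr = df.corr()                       # all pairwise correlations

    edges = []
    for var in corr.columns:
        others = corr[var].drop(var)       # drop the trivial self-correlation
        partner = others.abs().idxmax()    # the correlation furthest from 0
        r = others[partner]
        if abs(r) > 0.5:                   # discard anything between +.5 and -.5
            edges.append((var, partner, round(r, 2)))

    for a, b, r in edges:
        print(f"{a} -- {b}: r = {r}")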

(Fixed the links. Sorry about that.)

Comment by halcyon on Open Thread May 2 - May 8, 2016 · 2016-05-05T21:02:35.248Z · LW · GW

Thanks, I'm basically ignorant about contemporary American politics. (But I've read Tocqueville. This is probably not a desirable state of affairs.)

Comment by halcyon on Open Thread May 2 - May 8, 2016 · 2016-05-05T20:46:41.590Z · LW · GW

I assume that because I read on the SEP that strategic voting skews results in democracies. The rule of thumb is more like a Schelling point than a lower order rational principle. I said that's what I usually do because I'm aware it's not very applicable in this context since I'm not voting in these elections, but it's a habit I've indulged in for years, unfortunately.

Comment by halcyon on Open Thread May 2 - May 8, 2016 · 2016-05-05T20:35:21.022Z · LW · GW

Thanks, I guess. I knew he was talking about a digital database, but I was wondering if it could have been a dogwhistle for something else. I don't have a favorable opinion of human decency in general.

FWIW, that wasn't a political comment. I hardly ever read or watch anything political. Some TV clips were shown to me by an acquaintance and I wanted an honest assessment of what he had told me it was about. I don't have any opinions on the subject myself.

Comment by halcyon on Open Thread May 2 - May 8, 2016 · 2016-05-05T12:40:35.160Z · LW · GW

On Fox News, Trump said that regarding Muslims in the US, he would do "unthinkable" things, "and certain things will be done that we never thought would happen in this country". He also said it's impossible to tell with absolute certainty whether a Syrian was Christian or Muslim, so he'd have to assume they're all Muslims. This suggests that telling US officials that I'm a LW transhumanist might not convince them that I have no connection with ISIS. I'm not from Syria, but I have an Arabic name and my family is Muslim.

I've read Cory Doctorow's Little Brother, and this might be a generalization from fictional evidence, but I can't help asking: As a foreign student in the US, how likely is Trump to have me tortured for no reason? Should I drop everything and make a break for it before it's too late? Initially, many Germans didn't take Hitler's extremist rhetoric seriously either, right? (If I get deported in a civilized manner, well, no harm done to me as far as I'm concerned.)

I normally assume, as a rule of thumb, that politicians intend to fulfill all their promises. If a politician says he wants to invade Mars, that could be pure rhetoric, but I'd typically assume that he might try it in the worst case scenario. I have observed it is often the case that when we think other people are joking, they are in fact exaggerating their true desires and presenting them in an ironic/humorous light.

Comment by halcyon on Open Thread Feb 22 - Feb 28, 2016 · 2016-03-05T21:58:10.154Z · LW · GW

Thanks, I'll look into it.

(And all my ranting still didn't address the fundamental difficulty: There is no rational way to choose from among different projections of values held by multiple agents, projections such as Rawlsianism and utilitarianism.)

Comment by halcyon on Open Thread Feb 22 - Feb 28, 2016 · 2016-02-28T20:43:52.126Z · LW · GW

Although what if we told each party to submit goals rather than non-goal preferences? If the AI has access to a model specifying which actions lead to which consequences, then it can search for those actions that maximize the number of goals fulfilled regardless of which party submitted them, or perhaps take a Rawlsian approach and try to maximize the number of fulfilled goals submitted by whichever party would have the fewest goals fulfilled if that sequence of actions were taken, etc. That seems very imaginable to me. You can then have heuristics that constrain the search space and stuff. You can also have non-goal preferences in addition to goals if the parties have any of those.
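
Something like this toy sketch, where the parties, goals, and the action model are all made-up placeholders:

    party_goals = {
        "party1": {"g1", "g2", "g3"},
        "party2": {"g4", "g5"},
    }
    # The AI's model of which goals each candidate action would fulfill.
    action_outcomes = {
        "action_a": {"g1", "g2", "g3"},  # everything for party1, nothing for party2
        "action_b": {"g1", "g4"},        # one goal fulfilled for each party
    }

    def fulfilled(action, goals):
        return len(action_outcomes[action] & goals)

    # Maximize the total number of goals fulfilled, whoever submitted them:
    total_rule = max(action_outcomes,
                     key=lambda a: sum(fulfilled(a, g) for g in party_goals.values()))

    # Rawlsian alternative: maximize the count for the worst-off party:
    maximin_rule = max(action_outcomes,
                       key=lambda a: min(fulfilled(a, g) for g in party_goals.values()))

    print(total_rule, maximin_rule)  # action_a vs. action_b on these toy numbers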

In that light, it seems to me that the problem was inferring goals from a set of preferences which were not purely non-goal preferences but were actually presented with some unspecified goals in mind. E.g., one party wanted chocolate, but said, "I want to go to the store" instead. If that was the source of the original problem, then we can see why we might need an AI to solve it, since it calls for some lightweight mind reading. Of course, a CEV-implementing AI would have to be a mind reader anyway, since we don't really know what our goals ultimately are given everything we could know about reality.

This still does not guarantee basic morality, but parties should at least recognize some of their ultimate goals in the end result. They might still grumble about the result not being exactly what they wanted, but we can at least scold them for lacking a spirit of compromise.

All this presupposes that enough of our actions can be reduced to ultimate goals that can be discovered, and I don't think this process guarantees we will be satisfied with the results. For example, this might erode personal freedom to an unpleasant degree. If we would choose to live in some world X if we were wiser and nicer than we are, then it doesn't necessarily follow that X is a Nice Place to Live as we are now. Changing ourselves to reach that level of niceness and wisdom might require unacceptably extensive modifications to our actual selves.

Comment by halcyon on Open Thread Feb 22 - Feb 28, 2016 · 2016-02-28T15:32:58.478Z · LW · GW

Thanks. That makes sense to me.

Comment by halcyon on Open Thread Feb 22 - Feb 28, 2016 · 2016-02-28T15:21:21.137Z · LW · GW

The real difficulty is that when you combine two sets of preferences, each of which makes sense on its own, you get a set of preferences that makes no sense whatsoever: http://plato.stanford.edu/entries/economics/#5.2 https://www.google.com/search?q=site%3Aplato.stanford.edu+social+choice&ie=utf-8&oe=utf-8

There is no easy way to resolve this problem. There is also no known method that takes such an inconsistent set of preferences as input and gives a consistent set of preferences as output such that either party who contributed an original set of preferences would recognize the output as furthering any of their original goals. Arbitrary decisions are required so often in cases where there isn't a unanimous agreement that, in practice, there would be a large component of arbitrariness every single time CEV tries to arrive at a uniform set of preferences by extrapolating the volitions of multiple agents into the future.
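
The textbook illustration of how individually sensible preferences combine into nonsense is the Condorcet cycle. A minimal sketch (three made-up voters rather than two, to keep the majorities unambiguous):

    from itertools import combinations

    # Each voter's ranking is individually consistent (fully transitive).
    rankings = [
        ["A", "B", "C"],  # voter 1: A > B > C
        ["B", "C", "A"],  # voter 2: B > C > A
        ["C", "A", "B"],  # voter 3: C > A > B
    ]

    def majority_prefers(x, y):
        votes = sum(r.index(x) < r.index(y) for r in rankings)
        return votes > len(rankings) / 2

    for x, y in combinations(["A", "B", "C"], 2):
        winner, loser = (x, y) if majority_prefers(x, y) else (y, x)
        print(f"majority prefers {winner} over {loser}")

    # Prints: A over B, C over A, B over C -- the aggregate relation is a
    # cycle, not a preference ordering anyone could recognize as their own.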

This doesn't mean the problem is unresolvable, just that it's an AI problem in its own right, but given these problems, wouldn't it be better to pick whichever Nice Place to Live is the safest to reach instead of bothering with CEV? I say this because I'm not sure Nice Place to Live can be defined in terms of CEV, as any CEV-approved output. Because of the preference aggregation problem, I'm not certain that a world that is provably CEV-abiding also provably avoids flagrant immorality. Two moral frameworks when aggregated by a non-smart algorithm might give rise to an immoral framework, so I'm not sure the essence of the problem is resolved just by CEV as explained in the paper.

Comment by halcyon on Open Thread Feb 22 - Feb 28, 2016 · 2016-02-24T14:24:17.827Z · LW · GW

Interesting. In that case, would you say an AI that provably implements CEV's replacement is, for that reason, provably Friendly? That is, AIs implementing CEV's replacement form an analytical subset of Friendly AIs? What is the current replacement for CEV anyway? Having some technical material would be even better. If it's open to the public, then I'd like to understand how EY proposes to install a general framework similar to CEV at the "initial dynamic" stage that can predictably generate a provably Friendly AI without explicitly modeling the target of its Friendliness.

Comment by halcyon on Open Thread Feb 22 - Feb 28, 2016 · 2016-02-24T14:12:25.258Z · LW · GW

That would seem to be the best possible solution, but I have never heard aeroplane engineers claim that their designs are "provably airworthy". If you take the aeroplane design approach, then isn't "provably Friendly" a somewhat misleading claim to make, especially when you're talking about pushing conditions to the extreme that you yourself admit are beyond your powers of prediction? The aeroplane equivalent would be like designing a plane so powerful that its flight changes the atmospheric conditions of the entire planet, but then the plane uses a complicated assembly of gyroscopes or something to continue flying in a straight line. However, if you yourself cannot predict which specific changes the flight of the plane will make, then how can you claim that you can prove that particular assembly of gyroscopes is sufficient to keep the plane on the preplanned path? On the other hand, if you can prove which specific changes the plane's flight will make that are relevant to its flight, then you have a mathematical definition of the target atmosphere at a sufficient depth of resolution to design such an assembly. Does MIRI think it can come up with an equivalent mathematical model of humanity with respect to AI?

Comment by halcyon on Open Thread Feb 22 - Feb 28, 2016 · 2016-02-23T22:49:42.937Z · LW · GW

The SEP says that preferences cannot be aggregated without additional constraints on how the aggregation is to be done, and the end result changes depending on things like the order of aggregation, so these additional constraints take on the quality of arbitrariness. How does CEV get around that problem?

Comment by halcyon on Open Thread Feb 22 - Feb 28, 2016 · 2016-02-23T20:52:13.233Z · LW · GW

I have a question: It seems to me that Friendliness is a function of more than just an AI. To determine whether an AI is Friendly, it would seem necessary to answer the question: Friendly to whom? If that question is unanswered, then "Friendly" seems like an unsaturated function like "2+". In the LW context, the answer to that question is probably something along the lines of "humanity". However, wouldn't a mathematical definition of "humanity" be too complex to let us prove that some particular AI is Friendly to humanity? Even if the answer to "To whom?" is "Eliezer Yudkowsky", even that seems like it would be a rather complicated proof to say the least.

Comment by halcyon on Open Thread, May 25 - May 31, 2015 · 2015-06-02T03:55:39.095Z · LW · GW

Oh well, I agree with the English that Kant was a romantic philosopher. Rousseau was a primary source of inspiration for him. (I agree with Dewey that writers (Goethe) and philosophers (Kant) give expression to popular views more than shaping them. OTOH, as much as I admire Goethe, I think Oswald Spengler went too far in trying to interpret him as a universal philosopher.)

"In the 19th century the German idea was about not having wars between German states," is a true statement, but it leaves out crucial details. For example, there are many people who agree that European nations should not war against each other, but are bitterly critical of the details of how that general plan was implemented in practice.

I think it follows that the European idea is not reducible to the notion that European states should not fight. If you do not agree, then I apologize for using terms like "European idea" and "German idea" in a sense you didn't intend, but my point can be easily reworded using "implementation of the German idea" in place of "German idea".

The point I'm trying to make is that, like I said, Germany is currently outcompeting the Anglo-American world on the terms of the Anglo-American world, not on the original terms of Germany. Arguably, England wanted to end European wars in the 19th century as well. Who would you say got their way in the end, England or Germany?

Comment by halcyon on Open Thread, May 25 - May 31, 2015 · 2015-06-01T13:35:53.042Z · LW · GW

Look, the collapse of a state is the collapse of a state regardless of ideological roles. (Modern Germany is fundamentally Anglo-American in design and very successful. That is the point, since you were citing the success of contemporary Germany.)

(...Nah, it would take far too long to discuss the state of Germany prior to WWI.)

Faust really was a central figure in the German idea, I'm afraid. I don't know how consciously Goethe was complicit in this, and this has nothing to do with what he would have had problems for saying when he published Faust.

Of course Faust is still a part of German culture. He's part of world culture, a typically German vision of the universal man. (I am personally a huge fan of Faust.)

I don't understand the contradiction in saying that X and Y have different wishes for the future owing to cultural differences. (And I don't understand what Habermas' Europe has to do with the 19th century German idea. Habermas has openly stated that the German intellectual tradition is inadequate for criticizing fascism and consciously borrowed from Anglophone thinkers. The most striking difference between thinkers who have gained a standing in the Anglophone world and thinkers from the rest of the world is their careful, deliberate anti-existentialism.)

(Kant is commonly admitted to be a romantic philosopher, and I found this link: http://philosophyisnotaluxury.com/2010/08/12/romanticism-and-existential-philosophy/)

Comment by halcyon on Open Thread, May 25 - May 31, 2015 · 2015-06-01T04:44:56.460Z · LW · GW

So you really do think that "the Italian nation gained a lot of territory in the national unification, even though territories like the Azure Coast, which were culturally Italian, voted to join France." I honestly don't know what to say to that. I said "nation", not "state". A nation is not a nation state.

I think your main error is to conflate duty with legalism. The ethics of duty is decidedly NOT legalistic, it is existential at the root. (The Dewey book gives some concrete examples.) Kant was of part-Scottish ancestry and was inspired by Scottish thinkers to try and come up with a deontological/existential approach to legalism, but it is consequentialism that is naturally legalistic. (with exceptional periods of "emergency", etc, but on the existential side you have stridently anti-legalistic eschatologists like Dostoevsky or even Berdyaev, really: https://archive.org/details/russianidea017842mbp Dostoevsky would of course have denied being an existentialist, and in a strict sense he would've been right, but I'd have trouble honestly justifying the claim that his approach is not existential in the loose sense that's relevant in this context, where Kant is also existential in the final analysis.)

(I'll let Orwell explain how much you owe to the culture of England: http://wikilivres.ca/wiki/The_Lion_and_the_Unicorn England is probably the least existential culture of our times. As you probably know, the Austrian school economists were trying to theorize the developments in England. Many German theorists belonged to English-inspired schools like that, but even legalist thinkers who considered themselves proudly non-English were more like this: https://www.youtube.com/watch?v=KSNJGymnLG4

That is satire, but notice how progressive Germans were accused of imitating the English in EXACTLY the same way that Islamists accuse progressive Arabs of copying the West. The nature of the relationship of England to the rest of Europe was previously identical to the nature of the relationship of Europe to the rest of the world.

That's what you get when you have existentialism at the bottom of your legalism. (And once you approve of the existential approach, it's difficult to shut the door when extremists start clamoring for a purer version of the approach to which you've already fixed your seal of approval.) I strongly disagree with the notion that the contemporary European idea is anything like that. (See Habermas' objections against Heidegger. Habermas is arguably the philosopher of contemporary Europe.) Even the notion of an "European idea" including Britain is an oversimplification because if you ask Europeans, many of them will tell you that England has a different culture from the rest of Europe. You need to integrate a lot more facts to get less crooked outlines of such matters IMO.

I don't want you to think I'm putting German culture down or anything, but proposing an interpretation of "the German idea" that has the figure of Faust expurgated from it is like confusing Islamic culture with the Arabian Nights theme.)

Comment by halcyon on Open Thread, May 25 - May 31, 2015 · 2015-05-31T03:21:29.882Z · LW · GW

I haven't specified a mechanism because I'm proposing a principle, not a law as yet. Laws are allowed to implement the principle imperfectly even when the principle is accepted as the basis of society.

(For example, let's start with the absolutely minimal requirement: Would you agree with a law that requires all platforms to at least declare that they abide by the pluralist principle of letting the opposition voice their point of view regardless of what goes on in practice? If this is acceptable, we can move on to more rigorous demands like declaring platforms that routinely violate this requirement to be illegal. We can decide on punishments afterwards.)

I'm not saying Germany isn't doing well today, but today's Germany is keeping up with and outcompeting the Anglo-Saxon world on its own terms. Germany the deontological "theocracy" (I can justify the term) collapsed in the 20th century and disappeared forever from history. (Even if the transition to dictatorship was not a collapse, surely its results qualify as a genuine collapse.

I don't understand your point regarding territories. I have tried to reconstruct your argument in various ways, but none of my attempted interpretations that hold together are relevant in the context of the utterance. Germans were forcibly relocated, etc. Are you unaware of the German territorial shrinkage, or are you just being cute by referring to the multiple German nations that previously existed? If it's the latter, that's like saying the Italian nation gained a lot of territory in the national unification, even though territories like the Azure Coast, which were culturally Italian, voted to join France. Ask the Germans if they feel like they've won out after all. If I were inclined to make arguments of this kind, I could propose the Holy Roman Empire as a German state larger than today's Germany.)

How many German generals today would cite Kant and Fichte as the basis for their thinking? How many thinkers would use their formulations for calling the Germans to war? (Despite all of Habermas' tirades against "instrumental rationality", his thought is saturated with the pragmatic tradition, and German thinkers today are instinctively consequentialist rather than deontological, though still not instrumentalist or utilitarian per se.

Even when they try to deny it, their appeals to consequences remain extensive. For example, Habermas was driven to look to the English intellectual tradition when formulating his philosophy because he decided that the German tradition lacked the resources to criticize Nazism. This means that even if he decides never to appeal to consequences again, at the root, his philosophy was motivated by an appeal to consequences: fascism was bad and he wanted the resources to criticize it.)

(China is also not too shabby at the moment, but to say that Chinese civilization did not collapse in the 20th century would be misleading to say the least. Contemporary Germany is not deontological in the same sense that contemporary India does not represent an authentic continuation of Hindu or Mughal civilizations with respect to their intellectual traditions.)

Comment by halcyon on Open Thread, May 25 - May 31, 2015 · 2015-05-29T02:17:56.973Z · LW · GW

The situation I am thinking of is like this: Suppose you get up in public and give a long speech denouncing some minority. If they want to, representatives of that minority should be allowed to tell their side of the story from the same platform without being molested or demeaned. If that's allowed, you can say whatever you like about them. Is that not good?

The German state failed completely and a warlord wannabe came to power. (Although it just occurred to me that you might not call the failed democracies that turn into dictatorships in Latin America "failed states", and you might not describe the transition as a "collapse". I don't know the correct term for such polities, but that's the kind of state Germany was.) A similar situation occurred in Russia, only an insane religious group took control there. (I know less about Russia, but my information comes from Kotkin's biography of Stalin, a monumental work of scholarship which describes the state's total failure in lurid detail. Russia is another country besides France where the aristocratic ideal failed completely and doomed the country to revolution. However, Russia's collapse was much more severe than Germany's. The latter's militarism at least formally grew as a continuation of the old order.)

The German warlord tried to invade Russia even though Russia wouldn't have invaded Germany any time soon, lost, and the nation was divided up among the victors. The victors then had a falling out between themselves and Germany was divided into two halves for over 40 years! Thanks to the failure of the state and the opportunist policies of the warlord, each of the victors got to indoctrinate Germans living in their territories with their pet ideologies, and a staggering 10% of the world population of Germans died during the war. (Not to mention territorial losses.)

And you say Germany didn't collapse? I'm sorry, I completely disagree with your interpretation of what occurred.

Meanwhile, England lost its empire after its fight with Germany, but the state didn't collapse at all. It even put someone like Winston Churchill, who had definite warlord-like tendencies and might have become one in any other society, in power during the war, removed him after the war, and then brought him back later when he campaigned separately as a peacetime leader. They might of course have done better in hindsight, but that is breathtaking success as a society as far as I'm concerned.

(You can of course trace the historical causes of Germany's collapse as far back as you like. Here are some relatively short descriptions that give an idea of the ground-level realities of the Weimar republic: http://www.incatena.org/viewtopic.php?f=8&t=37184&p=1086708&hilit=democracy#p1086708 http://www.incatena.org/viewtopic.php?f=8&t=37184&p=1086725&hilit=democracy#p1086725 http://www.incatena.org/viewtopic.php?f=8&t=37184&p=1086786&hilit=democracy#p1086786 Though none of these specifically address Hitler's takeover, you can probably tell that the situation was a close parallel to what you find in parts of Latin America.

I can't find similar short descriptions of what Germany was like prior to WWI, but here is Chesterton speaking his mind in his own amusing way: http://www.gutenberg.org/ebooks/11560 Reading between the lines, you can see how Germany's foreign policy managed to alienate all its neighbors at the time.)

Comment by halcyon on Open Thread, May 25 - May 31, 2015 · 2015-05-29T01:56:16.885Z · LW · GW

I wonder if this objection to MIRI's project has been made so far: EY recognizes that placing present day humans in an environment reached by CEV would be immoral, right? Doesn't this call into question the desirability of instant salvation? Perhaps what is really desirable is reaching the CEV state, but doing so only gradually. Otherwise, we might never reach our CEV state, and we arguably do want to reach it eventually. We can still have a friendly AI, but perhaps its role should be to slowly guide us to the CEV state while making sure we don't get into deep trouble in the meantime. E.g., we shouldn't be maimed for life as the result of an instant's inattention, etc.

Comment by halcyon on Open Thread, May 25 - May 31, 2015 · 2015-05-28T23:23:17.212Z · LW · GW

On the other hand, consider that in the 20th century, England, the land of consequentialists, survived, while Germany, the land of deontologists, collapsed. Dewey seems to have thought that considering ethics to be a form of duty is not entirely unrelated to the rise of militarism: http://www.gutenberg.org/ebooks/42208

Like I said, I agree on the necessity of uniform laws. But first, we must determine which laws would be good laws to apply uniformly. To do that, we must consider the relative importance of cases where censorship leads to good consequences.

Though incidents of the kind I described do indeed occur, it would be impossible to enforce hate speech laws with even a semblance of uniformity in India, so this is the compromise I suggest:

Any form of speech is allowed as long as the other side is allowed to answer. Any speech that the other side is not allowed to answer is forbidden.

This seems to satisfy the demands of principle and does away with all-powerful propaganda at the same time. Propaganda acquires total power by forbidding debate, right? This formulation forbids forbidding debate, but I'm still worried that debate will be de facto forbidden by tricks such as demeaning the opposition.

(Would you say it would be better to consider pluralism as an ideal to aspire to, or should we try to reach it all at once? E.g., we could forbid inflammatory rhetoric with the aim of gradually loosening the restrictions.)

Comment by halcyon on Open Thread, May 25 - May 31, 2015 · 2015-05-28T03:38:31.449Z · LW · GW

I'm a native Bengali speaker, so my syntax may be problematic. I later glossed that sentence as something like, "ChristianKl admits X is a case of inflammatory rhetoric being spread by bigots, etc."

I'm totally in favor of open public debate, even regarding positions that the liberal police would dub "bigoted". I'm not talking about real debates, I'm talking about cases that really are crazy propaganda. Would you say that under some circumstances, it is legitimate to curb the spread of such propaganda?

In particular, I'm in favor of all views being aired on LW. Hopefully, nobody is going to pick on homosexuals just because someone expresses anti-gay sentiments here, and it is not even possible to pose the more serious threats over the internet.

I'm an Indian, not an American. Communal riots are a real thing in India. Would you say that under some circumstances, curbing hateful propaganda has a real chance of minimizing violence? If so, are any such cases legitimate?

This is how I clarified my position farther down the thread: "I'm genuinely on the fence on this one. My only claim is that one legitimate argument to do it (censor inflammatory rhetoric) does exist. Depending on the specific case, that reason may be outweighed by more significant arguments."

Comment by halcyon on Open Thread, May 25 - May 31, 2015 · 2015-05-28T03:17:01.693Z · LW · GW

No, I'm genuinely on the fence on this one. My only claim is that one legitimate argument to do it does exist. Depending on the specific case, that reason may be outweighed by more significant arguments.

Comment by halcyon on Open Thread, May 25 - May 31, 2015 · 2015-05-28T03:13:59.393Z · LW · GW

A little charity, please. I'm not a native English speaker.

And I think it is entirely legitimate for me to disambiguate the sense in which I intended a particular sentence.

Comment by halcyon on Open Thread, May 25 - May 31, 2015 · 2015-05-27T20:35:34.641Z · LW · GW

The mob remains stubbornly uninterested. Its bloody-minded determination to punish offenders is stronger than its love of pie.

He wants to share the pie equally because he is a good person.

Comment by halcyon on Open Thread, May 25 - May 31, 2015 · 2015-05-27T20:31:35.436Z · LW · GW

No? Cool, so bigots exist and inflammatory rhetoric exists, so I dare hope that cases where the former have been spreading the latter also exist. Let's analyze my statement:

"that you yourself would agree have been settled beyond dispute are meaningless cases of inflammatory rhetoric that hardened bigots nevertheless continue to spread"

"You" refers to Lumifer. "Would agree have been settled beyond dispute" was just my way of saying "admits".

In my intended sense, that statement means, "Lumifer admits X is a case of inflammatory rhetoric being spread by bigots".

Since such cases exist, S is not empty. Now go back to my original comments and select an X such that X belongs to S.

I'm glad we could clear that up.