Welcome to Less Wrong! (5th thread, March 2013)

post by orthonormal · 2013-04-01T16:19:17.933Z · score: 27 (28 votes) · LW · GW · Legacy · 1761 comments

Contents

  A few notes about the site mechanics
  A few notes about the community
  A list of some posts that are pretty awesome
If you've recently joined the Less Wrong community, please leave a comment here and introduce yourself. We'd love to know who you are, what you're doing, what you value, how you came to identify as a rationalist or how you found us. You can skip right to that if you like; the rest of this post consists of a few things you might find helpful. More can be found at the FAQ.

(This is the fifth incarnation of the welcome thread; once a post gets over 500 comments, it stops showing them all by default, so we make a new one. Besides, a new post is a good perennial way to encourage newcomers and lurkers to introduce themselves.)

A few notes about the site mechanics

Less Wrong comments are threaded for easy following of multiple conversations. To respond to any comment, click the "Reply" link at the bottom of that comment's box. Within the comment box, links and formatting are achieved via Markdown syntax (you can click the "Help" link below the text box to bring up a primer).

You may have noticed that all the posts and comments on this site have buttons to vote them up or down, and all the users have "karma" scores which are the sum of the votes on all their comments and posts. This immediate, easy feedback mechanism helps keep arguments from turning into flamewars and helps make the best posts more visible; it's part of what makes discussions on Less Wrong look different from those anywhere else on the Internet.

However, it can feel really irritating to get downvoted, especially if one doesn't know why. It happens to all of us sometimes, and it's perfectly acceptable to ask for an explanation. (Sometimes it's the unwritten LW etiquette; we have different norms than other forums.) Take note when you're downvoted a lot on one topic, as it often means that several members of the community think you're missing an important point or making a mistake in reasoning, not just that they disagree with you! If you have any questions about karma or voting, please feel free to ask here.

Replies to your comments across the site, plus private messages from other users, will show up in your inbox. You can reach it via the little mail icon beneath your karma score on the upper right of most pages. When you have a new reply or message, it glows red. You can also click on any user's name to view all of their comments and posts.

It's definitely worth your time commenting on old posts; veteran users look through the recent comments thread quite often (there's a separate recent comments thread for the Discussion section, for whatever reason), and a conversation begun anywhere will pick up contributors that way. There's also a succession of open comment threads for discussion of anything remotely related to rationality.

Discussions on Less Wrong tend to end differently than in most other forums; a surprising number end when one participant changes their mind, or when multiple people clarify their views enough and reach agreement. More commonly, though, people will just stop when they've better identified their deeper disagreements, or simply "tap out" of a discussion that's stopped being productive. (Seriously, you can just write "I'm tapping out of this thread.") This is absolutely OK, and it's one good way to avoid the flamewars that plague many sites.

EXTRA FEATURES:
There's actually more than meets the eye here: look near the top of the page for the "WIKI", "DISCUSSION" and "SEQUENCES" links.
LW WIKI: This is our attempt to make searching by topic feasible, as well as to store information like common abbreviations and idioms. It's a good place to look if someone's speaking Greek to you.
LW DISCUSSION: This is a forum just like the top-level one, with two key differences: in the top-level forum, posts require the author to have 20 karma in order to publish, and any upvotes or downvotes on the post are multiplied by 10. Thus there's a lot more informal dialogue in the Discussion section, including some of the more fun conversations here.
SEQUENCES: A huge corpus of material mostly written by Eliezer Yudkowsky in his days of blogging at Overcoming Bias, before Less Wrong was started. Much of the discussion here will casually depend on or refer to ideas brought up in those posts, so reading them can really help with present discussions. Besides which, they're pretty engrossing in my opinion.

A few notes about the community

If you've come to Less Wrong to discuss a particular topic, this thread would be a great place to start the conversation. By commenting here, and checking the responses, you'll probably get a good read on what, if anything, has already been said here on that topic, what's widely understood, and what you might still need to take some time explaining.

If your welcome comment starts a huge discussion, then please move to the next step and create a LW Discussion post to continue the conversation; we can fit many more welcomes onto each thread if fewer of them sprout 400+ comments. (To do this: click "Create new article" in the upper right corner next to your username, then write the article, then at the bottom take the menu "Post to" and change it from "Drafts" to "Less Wrong Discussion". Then click "Submit". When you edit a published post, clicking "Save and continue" does correctly update the post.)

If you want to write a post about a LW-relevant topic, awesome! I highly recommend you submit your first post to Less Wrong Discussion; don't worry, you can later promote it from there to the main page if it's well-received. (It's much better to get some feedback before every vote counts for 10 karma; honestly, you don't know what you don't know about the community norms here.)

If you'd like to connect with other LWers in real life, we have meetups in various parts of the world. Check the wiki page for places with regular meetups, or the upcoming (irregular) meetups page. There's also a Facebook group. If you have your own blog or other online presence, please feel free to link it.

If English is not your first language, don't let that make you afraid to post or comment. You can get English help on Discussion- or Main-level posts by sending a PM to one of the following users (use the "send message" link on the upper right of their user page). Either put the text of the post in the PM, or just say that you'd like English help and you'll get a response with an email address.
* Normal_Anomaly
* Randaly
* shokwave
* Barry Cotter

A note for theists: you will find the Less Wrong community to be predominantly atheist, though not completely so, and most of us are genuinely respectful of religious people who keep the usual community norms. It's worth saying that we might think religion is off-topic in some places where you think it's on-topic, so be thoughtful about where and how you start explicitly talking about it; some of us are happy to talk about religion, some of us aren't interested. Bear in mind that many of us really, truly have given full consideration to theistic claims and found them to be false, so starting with the most common arguments is pretty likely just to annoy people. Anyhow, it's absolutely OK to mention that you're religious in your welcome post and to invite a discussion there.

A list of some posts that are pretty awesome

I recommend the major sequences to everybody, but I realize how daunting they look at first. So for purposes of immediate gratification, the following posts are particularly interesting/illuminating/provocative and don't require any previous reading:

More suggestions are welcome! Or just check out the top-rated posts from the history of Less Wrong. Most posts at +50 or more are well worth your time.

Welcome to Less Wrong, and we look forward to hearing from you throughout the site!

Note from orthonormal: MBlume and other contributors wrote the original version of this welcome post, and I've edited it a fair bit. If there's anything I should add or update on this post (especially broken links), please send me a private message—I may not notice a comment on the post. Finally, once this gets past 500 comments, anyone is welcome to copy and edit this intro to start the next welcome thread.

1761 comments

Comments sorted by top scores.

comment by Paamayim · 2013-04-02T21:40:33.999Z · score: 24 (24 votes) · LW · GW

Aloha.

My name is Sandy and despite being a long time lurker, meetup organizer and CFAR minicamp alumnus, I've got a giant ugh field around getting involved in the online community. Frankly it's pretty intimidating and seems like a big barrier to entry - but this welcome thread is definitely a good start :)

IIRC, I was linked to Overcoming Bias through a programming pattern blog in the few months before LW came into existence, and subsequently spent the next three months of my life doing little else than reading the sequences. While it was highly fascinating and seemed good for my cognitive health, I never thought about applying it to /real life/.

Somehow I ended up at CFAR's January minicamp, and my life literally changed. After so many years, CFAR helped me finally internalize the idea that /rationalists should win/. I fully expect the workshop to be the most pivotal event in my entire life, and would wholeheartedly recommend it to absolutely anyone and everyone.

So here's to a new chapter. I'm going to get involved in this community or die trying.

PS: If anyone is in the Kitchener/Waterloo area, they should definitely come out to UW's SLC tonight at 8pm for our LW meetup. I can guarantee you won't be disappointed!

comment by Laplante · 2013-04-01T03:47:57.329Z · score: 22 (24 votes) · LW · GW

Hello, Less Wrong; I'm Laplante. I found this site through a TV Tropes link to Harry Potter and the Methods of Rationality about this time last year. After I'd read through that as far as it had been updated (chapter 77?), I followed Yudkowsky's advice to check out the real science behind the story and ended up here. I mucked about for a few days before finding a link to yudkowsky.net, where I spent about a week trying to learn what exactly Bayes was all about. I'm currently working my way through the sequences, just getting into the quantum physics sequence now.

I'm currently in the dangerous position of having withdrawn from college, and my productive time is spent between a part-time job and this site. I have no real desire to return to school, but I realize that entry into any sort of psychology/neuroscience/cognitive science field without a Bachelor's degree - preferably more - is near impossible.

I'm aware that Yudkowsky is doing quite well without a formal education, but I'd rather not use that as a general excuse to leave my studies behind entirely.

My goals for the future are to make my way through MIRI's recommended course list, and the dream is to do my own research in a related field. We'll see how it all pans out.

comment by RolfAndreassen · 2013-04-01T18:27:34.121Z · score: 31 (31 votes) · LW · GW

my productive time is spent between a part-time job and this site.

Perhaps I'm reading a bit much into a throwaway phrase, but I suggest that time spent reading LessWrong (or any self-improvement blog, or any blog) is not, in fact, productive. Beware the superstimulus of insight porn! Unless you are actually using the insights gained here in a measurable way, I very strongly suggest you count LessWrong reading as faffing about, not as production. (And even if you do become more productive, observe that this is probably a one-time effect: Continued visits are unlikely to yield continual improvement, else gwern and Alicorn would long since have taken over the world.) By all means be inspired to do more work and smarter work, but do not allow the feeling of "I learned something today" to substitute for Actually Doing Things.

All that aside, welcome to LessWrong! We will make your faffing-about time much more interesting. BWAH-HAH-HAH!

comment by John_Maxwell (John_Maxwell_IV) · 2013-04-02T08:50:07.653Z · score: 4 (4 votes) · LW · GW

do not allow the feeling of "I learned something today" to substitute for Actually Doing Things.

Learning stuff can be pretty useful. Especially stuff extremely general in its application that isn't easy to just look up when you need it, like rationality. If the process of learning is enjoyable, so much the better.

comment by Dentin · 2013-04-06T03:29:22.304Z · score: 5 (5 votes) · LW · GW

I think you may have misinterpreted a critical part of the sentence:

'do not allow the FEELING of "I learned something today" to substitute for Actually Doing Things.'

Insight porn, so to speak, is that way because it makes you feel good, like you can Actually Do Things and like you have the tools to now Actually Do Things. But if you don't get up and Actually Do Things, you have only learned how to feel like you can Actually Do Things, which isn't nearly as useful as it sounds.

comment by John_Maxwell (John_Maxwell_IV) · 2013-07-11T06:22:04.817Z · score: 0 (0 votes) · LW · GW

Sure, I agree. IMO, any self-improvement effort should be intermixed with lots of attempts to accomplish object-level goals so you can get empirical feedback on what's working and what isn't.

comment by shminux · 2013-04-01T07:25:25.511Z · score: 17 (31 votes) · LW · GW

My standard advice to all newcomers is to skip the quantum sequence, at least on the first reading. Or at least stop where the many worlds musings start. The whole thing is way too verbose and controversial for the number of useful points it makes. Your time is much better spent reading about cognitive biases. If you want epistemology, try the new sequence.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-04-01T15:20:24.955Z · score: 3 (27 votes) · LW · GW

Bad advice for technical readers. Mihaly Barasz (IMO gold medalist) got here via HPMOR but only became seriously interested in working for MIRI after reading the QM sequence.

Given those particular circumstances, can I ask that you stop with that particular bit of helpful advice?

comment by Vaniver · 2013-04-01T15:37:53.957Z · score: 21 (25 votes) · LW · GW

Bad advice for technical readers. Mihaly Barasz (IMO gold medalist) got here via HPMOR but only became seriously interested in working for MIRI after reading the QM sequence.

Do you have a solid idea of how many technical readers get here via HPMOR but become uninterested in working for MIRI after reading the QM sequence? If not, isn't this potentially just a selection effect?

comment by Kawoomba · 2013-04-01T17:50:37.499Z · score: 6 (10 votes) · LW · GW

EY can rationally prefer the certain evidence of some Mihaly-Barasz-caliber researchers joining when exposed to the QM sequence

over

speculations about whether the loss of Mihaly Barasz (had he not read the QM sequence) would be outweighed by even more / better technical readers becoming interested in joining MIRI, taking into account the selection effect.

Personally, I'd go with what has been proven/demonstrated to work as a high-quality attractor.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-04-01T18:23:16.181Z · score: 6 (23 votes) · LW · GW

Yep. I also tend to ignore nontechnical folks along the lines of RationalWiki getting offended by my thinking that I know something they don't about MWI. Carl often hears about, anonymizes, and warns me when technical folks outside the community are offended by something I do. I can't recall hearing any warnings from Carl about the QM sequence offending technical people.

Bluntly, if shminux can't grasp the technical argument for MWI then I wouldn't expect him to understand what really high-class technical people might think of the QM sequence. Mihaly said the rest of the Sequences seemed interesting but lacked sufficient visible I-wouldn't-have-thought-of-that nature. This is very plausible to me - after all, the Sequences do indeed seem to me like the sort of thing somebody might just think up. I'm just kind of surprised the QM part worked, and it's possible that might be due to Mihaly having already taken standard QM so that he could clearly see the contrast between the explanation he got in college and the explanation on LW. It's a pity I'll probably never have time to write up TDT.

comment by EHeller · 2013-04-03T08:15:02.791Z · score: 26 (28 votes) · LW · GW

I have a PhD in physics (so I have at least some technical skill in this area) and find the QM sequence's argument for many worlds unconvincing. You lead the reader toward a false dichotomy (Copenhagen or many worlds) in order to suggest that the low probability of Copenhagen implies many worlds. This ignores a vast array of other interpretations.

It's also the sort of argument that seems very likely to sway someone with an intro class in college (one or two semesters of a Copenhagen-based shut-up-and-calculate approach), precisely because having seen Copenhagen and nothing else they 'know just enough to be dangerous', as it were.

For me personally, the quantum sequence threw me into some doubt about the previous sequences I had read. If I have issues with the area I know the most about, how much should I trust the rest? Others' mileage may vary.

comment by shminux · 2013-04-03T19:24:27.389Z · score: 12 (18 votes) · LW · GW

I have a PhD in physics (so I have at least some technical skill in this area) and find the QM sequence's argument for many worlds unconvincing.

Actually, attempting to steelman the QM Sequence made me realize that the objective collapse models are almost certainly wrong, due to the way they deal with the EPR correlations. So the sequence has been quite useful to me.

On the other hand, it also made me realize that the naive MWI is also almost certainly wrong, as it requires uncountable worlds created in any finite interval of time (unless I totally misunderstand the MWI version of radioactive decay, or any emission process for that matter). It has other issues, as well. Hence my current leanings toward some version of RQM, which EY seems to dislike almost as much as his straw Copenhagen, though for different reasons.

For me personally, the quantum sequence threw me into some doubt about the previous sequences I had read.

Right, I've had a similar experience, and I heard it voiced by others.

As a result of re-examining EY's take on epistemology of truth, I ended up drifting from the realist position (map vs territory) to an instrumentalist position (models vs inputs&outputs), but this is a topic for another thread. I am quite happy with the sequences related to cognitive science, where, admittedly, I have zero formal expertise. But they seem to match what the actual experts in the field say.

I am on the fence with the free-will "dissolution", precisely because I know that I am not qualified to spot an error and there is little else out there in terms of confirming evidence or testable predictions.

I am quite skeptical about the dangers of AGI x-risk, mainly because it seems to extrapolate too far beyond what is known into the fog of the unknown future, though I do appreciate quite a few points made in the relevant sequences. Again, I am not qualified to judge their validity.

comment by Plasmon · 2013-04-04T06:35:37.962Z · score: 6 (6 votes) · LW · GW

as it (MWI) requires uncountable worlds created in any finite interval of time

How is that any more problematic than doing physics with real or complex numbers in the first place?

comment by shminux · 2013-04-04T07:33:45.092Z · score: 0 (4 votes) · LW · GW

It means that EY's musings about the Eborians splitting into worlds of various thicknesses according to Born probabilities no longer make any sense. There is a continuum of worlds, all equally and infinitesimally thin, created every picosecond.

comment by MugaSofer · 2013-04-10T14:34:33.233Z · score: 3 (7 votes) · LW · GW

It means that EY's musings about the Eborians splitting into worlds of various thicknesses according to Born probabilities no longer make any sense.

coughmeasurecough

comment by [deleted] · 2013-04-04T12:27:13.729Z · score: 3 (3 votes) · LW · GW

The way I understand it, it's not that “new” worlds are created that didn't previously exist (the total “thickness” (measure) stays constant). It's that two worlds that looked the same ten seconds ago look different now.

comment by shminux · 2013-04-04T15:00:24.948Z · score: 1 (3 votes) · LW · GW

That's a common misconception. In the simplest case of Schrodinger's cat, there are not just two worlds, one with the cat dead and one with it alive. When you open the box, you could find the cat in various stages of decomposition, which gives you uncountably many worlds right there. In a slightly more complicated version, where energy and the direction of the decay products are also measurable (and hence each possible value is measured in at least one world), your infinities keep piling up every which way, all equally probable or nearly so.

comment by [deleted] · 2013-04-04T16:39:32.761Z · score: 3 (3 votes) · LW · GW

(By “two” I didn't mean to imply ‘the only two’.)

comment by shminux · 2013-04-04T16:52:25.897Z · score: 1 (3 votes) · LW · GW

Which two out of the continuum of worlds did you mean, then, and how did you select them? I don't see any way to select two specific worlds for which "relative thickness" would make sense. You can classify the worlds into "dead/not dead at a certain instant of time" groups whose measures you can then compare, of course. But how would you justify this aggregation with the statement that the worlds, once split, no longer interact? What mysterious process makes this aggregation meaningful? Even if you flinch away from this question, how do you select the time of the measurement? This time is slightly different in different worlds, even if it is predetermined "classically", so there is no clear "splitting begins now" moment.

It gets progressively worse and more hopeless as you dig deeper. How does this splitting propagate in spacetime? How do two spacelike-separated splits merge in just the right way to preserve only the spin-conserving worlds of the EPR experiment and not all possibilities? How do you account for the difference in the proper time between different worlds? Do different worlds share the same spacetime and for how long? Does it mean that they still interact gravitationally (spacetime curvature = gravity)? What happens if the spacetime topology of some of the worlds changes, for example by collapsing a neutron star into a black hole? I can imagine that these questions can potentially be answered, but the naive MWI advocated by Eliezer does not deal with any of this.

comment by private_messaging · 2013-04-04T06:15:20.922Z · score: 0 (6 votes) · LW · GW

Are there actually any physicists who find the QM sequence to make as compelling a case for MWI as EY says it does?

I know Mitchell Porter is likewise a physicist and he's not convinced at all either.

comment by wedrifid · 2013-04-04T06:20:51.276Z · score: 5 (7 votes) · LW · GW

I know Mitchell Porter is likewise a physicist and he's not convinced at all either.

Mitchell Porter also advocates Quantum Monadology and various things about fundamental qualia. The difference in assumptions about how physics (and rational thought) works between Eliezer (and most of Eliezer's target audience) and Mitchell Porter is probably insurmountable.

comment by private_messaging · 2013-04-04T06:40:44.992Z · score: 1 (5 votes) · LW · GW

Mitchell Porter also advocates Quantum Monadology.

Yeah, and EY [any of the unmentionable things].

For another point, Scott Aaronson doesn't seem convinced either. Robin Hanson, while himself (it seems) an MWI believer, doesn't appear to think that it's so conclusively settled.

comment by wedrifid · 2013-04-04T06:51:00.266Z · score: 1 (1 votes) · LW · GW

Yeah, and EY [any of the unmentionable things].

The relevance of Porter's physics beliefs is that any reader who disagrees with Porter's premises but agrees with the premises used in an article can gain little additional information about the quality of the article by learning that Porter is not convinced by it. ie. Whatever degree of authority Mitchell Porter's status grants goes (approximately) in the direction of persuading the reader to adopt those different premises.

In this way mentioning Porter's beliefs is distinctly different from mentioning the people that you now bring up:

For another point, Scott Aaronson doesn't seem convinced either. Robin Hanson, while himself (it seems) an MWI believer, doesn't appear to think that it's so conclusively settled.

comment by private_messaging · 2013-04-04T08:51:36.490Z · score: 0 (10 votes) · LW · GW

The relevance of Porter's physics beliefs is that any reader who disagrees with Porter's premises but agrees with the premises used in an article can gain little additional information about the quality of the article by learning that Porter is not convinced by it. ie. Whatever degree of authority Mitchell Porter's status grants goes (approximately) in the direction of persuading the reader to adopt those different premises.

What one can learn is that the allegedly 'settled' and 'solved' question is far from settled and solved, and is a matter of opinion as of now. This also goes for qualia and the like; we haven't reduced them to anything, merely asserted that we have.

It extends all the way up, competence-wise; see Roger Penrose.

It's fine to believe in MWI if that's where your philosophy falls; it's another thing entirely to argue that belief in MWI is independent of priors and a philosophical stance, and yet another to argue that people fail to be swayed by a very biased presentation of the issue which omits every single point that goes in favour of e.g. non-realism, because they are too irrational or too stupid.

comment by CarlShulman · 2013-04-04T22:43:11.337Z · score: 4 (6 votes) · LW · GW

which omits every single point that goes in favour of e.g. non-realism, because they are too irrational or too stupid.

No, that set of posts goes on at some length about how MWI has not yet provided a good derivation of the Born probabilities.

comment by EHeller · 2013-04-05T00:21:56.476Z · score: 3 (7 votes) · LW · GW

No, that set of posts goes on at some length about how MWI has not yet provided a good derivation of the Born probabilities.

But I think it does not do justice to what a huge deal the Born probabilities are. The Born probabilities are the way we use quantum mechanics to make predictions, so saying "MWI has not yet provided a good derivation of the Born probabilities" is equivalent to "MWI does not yet make accurate predictions." I'm not sure that's clear to people who read the sequences but don't use quantum mechanics regularly.

Also, by omitting the wide variety of non-Copenhagen interpretations (consistent histories, transactional, Bohm, stochastic modifications to Schroedinger, etc.) the reader is led to believe that the alternative to Copenhagen-collapse is many worlds, so they won't use the absence of Born probabilities in many worlds to update towards one of the many non-Copenhagen alternatives.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-04-05T00:44:43.998Z · score: 7 (11 votes) · LW · GW

Note that the Born probabilities really obviously have something to do with the unitarity of QM, while no single-world interpretation is going to have this be anything but a random contingent fact. The unitarity of QM means that integral-squared-modulus quantifies the "amount of causal potency" or "amount of causal fluid" or "amount of conserved real stuff" in a blob of the wavefunction. It would be like discovering that your probability of ending up in a computer corresponded to how large the computer was. You could imagine that God arbitrarily looked over the universe and destroyed all but one computer with probability proportional to its size, but this would be unlikely. It would be much more likely (under circumstances analogous to ours) to guess that the size of the computer had something to do with the amount of person in it.
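A minimal sketch of the standard identity behind this point, in bra-ket notation (just the textbook fact being invoked, not an argument for either side): for a state |psi> = sum_i alpha_i |i> evolving under a unitary operator U,

\[
\sum_i |\alpha_i|^2 \;=\; \langle\psi|\psi\rangle \;=\; \langle\psi|U^\dagger U|\psi\rangle \;=\; \langle U\psi|U\psi\rangle ,
\]

so the total squared modulus is exactly the quantity that unitary evolution conserves, which is the sense in which the Born weights "have something to do with" unitarity.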

The problems with Copenhagen are fundamentally one-world problems and they go along with any one-world theory. If I honestly believed that the only reason the QM sequence wasn't convincing was that I didn't go through every single one-world theory to refute them separately, I could try to write separate posts for RQM, Bohm, and so on, but I'm not convinced that this is the case. Any single-world theory needs either spooky action at a distance, or really awful amateur epistemology plus spooky action at a distance, and there's just no reason to even hypothesize single-world theories in the first place.

(I'm not sure I have time to write the post about Relational Special Relativity in which length and time just aren't the same for all observers and so we don't have to suppose that Minkowskian spacetime is objectively real, and anyway the purpose of a theory is to tell us how long things are so there's no point in a theory which doesn't say that, and those silly Minkowskians can't explain how much subjective time things seem to take except by waving their hands about how the brain contains some sort of hypothetical computer in which computing elements complete cycles in Minkowskian intervals, in contrast to the proper ether theory in which the amount of conscious time that passes clearly corresponds to the Lorentzian rule for how much time is real relative to a given vantage point...)

comment by wedrifid · 2013-04-05T03:25:33.200Z · score: 9 (9 votes) · LW · GW

The problems with Copenhagen are fundamentally one-world problems and they go along with any one-world theory. If I honestly believed that the only reason the QM sequence wasn't convincing was that I didn't go through every single one-world theory to refute them separately, I could try to write separate posts for RQM, Bohm, and so on, but I'm not convinced that this is the case. Any single-world theory needs either spooky action at a distance, or really awful amateur epistemology plus spooky action at a distance, and there's just no reason to even hypothesize single-world theories in the first place.

It is not worth writing separate posts for each interpretation. However it is becoming increasingly apparent that to the extent that the QM sequence matters at all it may be worth writing a single post which outlines how your arguments apply to the other interpretations. ie.:

  • A brief summary of and a link to your arguments in favor of locality then an explicit mention of how this leads to rejecting "Ensemble, Copenhagen, de Broglie–Bohm theory, von Neumann, Stochastic, Objective collapse and Transactional" interpretations and theories.
  • A brief summary of and a link to your arguments about realism in general and quantum realism in particular and why the wavefunction not being considered 'real' counts against "Ensemble, Copenhagen, Stochastic and Relational" interpretations.
  • Some outright mockery of the notion that observation and observers have some kind of intrinsic or causal role (Copenhagen, von Neumann and Relational).
  • Mention hidden variables and the complexity burden thereof (de Broglie–Bohm, Popper).

Having such a post as part of the sequence would make it trivial to dismiss claims like:

You lead the reader toward a false dichotomy (Copenhagen or many worlds) in order to suggest that the low probability of Copenhagen implies many worlds. This ignores a vast array of other interpretations.

... as straw men. As it stands however this kind of claim (evidently, by reception) persuades many readers, despite this being significantly different to the reasoning that you intended to convey.

If it is worth your maintaining active endorsement of your QM posts, it may be worth ensuring both that it is somewhat difficult to actively misrepresent them and also that the meaning of your claims is as clear as it can conveniently be made. If there are Mihaly Baraszes out there who you can recruit via the sanity of your physics epistemology, there are also quite possibly IMO gold medalists out there who could be turned off by seeing negative caricatures of your QM work so readily accepted and then not bother looking further.

comment by EHeller · 2013-04-05T01:11:33.734Z · score: 4 (6 votes) · LW · GW

Note that the Born probabilities really obviously have something to do with the unitarity of QM, while no single-world interpretation is going to have this be anything but a random contingent fact.

Not so. If we insist that our predictions need to be probabilities (take the Born probabilities as fundamental/necessary), then unitarity becomes equivalent to the statement that probabilities have to sum to 1, and we can then try to piece together what our update equation should look like. This is the approach taken by the 'minimalist'/'ensemble' interpretation that Ballentine's textbook champions; he uses the requirement that probabilities sum to 1 and some group theory (related to the Galilean symmetry group) to motivate the form of the Schroedinger equation. Edit to clarify: In some sense, it's the reverse of many worlds: instead of taking the Schroedinger axioms as fundamental and attempting to derive Born, take the operator/probability axioms seriously and try to derive Schroedinger.

I believe the same consideration could be said of the consistent histories approach, but I'd have to think about it before I'd fully commit.

Edit to add: Also, what about "non-spooky" action at a distance? Something like the transactional interpretation, where we take relativity seriously and use both the forward and backward Green's function of the Dirac/Klein-Gordon equation? This integrates very nicely with Barbour's timeless physics, properly derives the Born rule, has a single world, BUT requires some stochastic modifications to the Schroedinger equation.
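A minimal sketch of the reverse direction described above (only the elementary core of the Ballentine-style argument, not its full group-theoretic form): if the outcome probabilities are required to sum to 1 at all times for every normalized state, then

\[
\sum_i |\langle i|U|\psi\rangle|^2 \;=\; \langle\psi|U^\dagger U|\psi\rangle \;=\; 1
\quad\text{for all normalized } |\psi\rangle
\quad\Longrightarrow\quad U^\dagger U = I ,
\]

i.e. conservation of total probability forces the evolution operator to be unitary, and one can then work toward the Schroedinger equation from there rather than the other way around.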

comment by shminux · 2013-04-05T01:25:15.457Z · score: 2 (4 votes) · LW · GW

What surprises me in the QM interpretational world is that the interaction process itself is clearly more than just a unitary evolution of some wave function, given how the number of particles is not conserved, requiring the full QFT approach, and probably more, yet (nearly?) all interpretations stop at the QM level, without any attempt at some sort of second quantization. Am I missing something here?

comment by EHeller · 2013-04-05T01:52:43.464Z · score: 4 (6 votes) · LW · GW

Mostly just that QFT is very difficult and not rigorously formulated. Haag's theorem (and Wightman's extension) tells us that an interacting quantum field theory can't live in a nice Hilbert space, so there is a very real sense that realistic QFTs only exist perturbatively. This makes interpretation something of a nightmare.

Basically, we ignore a bunch of messy complications (and potential inconsistency) just to shut-up-and-calculate, no one wants to dig up all that 'just' to get to the messy business of interpretation.

comment by shminux · 2013-04-05T06:21:56.202Z · score: 2 (4 votes) · LW · GW

Are you saying that people knowingly look where it's light, instead of where they lost the keys?

comment by EHeller · 2013-04-05T06:27:21.545Z · score: 3 (3 votes) · LW · GW

More or less. If the axiomatic field theory guys ever make serious progress, expect a flurry of me-too type interpretation papers to immediately follow. Until then, good luck interpreting a theory that isn't even fully formulated yet.

If you ever are in a bar after a particle phenomenology conference lets out, ask the general room what, exactly, a particle is, and what it means that the definition is NOT observer independent.

comment by shminux · 2013-04-05T06:45:19.002Z · score: 2 (2 votes) · LW · GW

Oh, I know what a particle is. It's a flat-space interaction-free limit of a field. But I see your point about observer dependence.

comment by EHeller · 2013-04-05T06:55:40.952Z · score: 5 (5 votes) · LW · GW

Then what is it, exactly, that particle detectors detect? Because it surely can't be interaction-free limits of fields. Also, when we go to the Schroedinger equation with a potential, what are we modeling? It can't be a particle; there is a non-perturbative potential! Also, for any charged particle, the IR divergence prevents the limit, so you have to be careful: 'real' electrons are linear combinations of 'bare' electrons and photons.

comment by shminux · 2013-04-05T16:38:56.858Z · score: 4 (4 votes) · LW · GW

What I meant was that if you think of field excitations propagating "between interactions", they can be identified with particles. And you are right, I was neglecting those pesky massless virtual photons in the IR limit. As for the SE with a potential, this is clearly a semi-classical setup; there are no external classical potentials, they all come as mean-field pictures of a reasonably stable many-particle interaction (a contradiction in terms though it might be). I think I pointed that out earlier in some thread.

The more I learn about the whole thing, the more I realize that all of Quantum Physics is basically a collection of miraculously working hacks, like narrow trails in a forest full of unknown deadly wildlife. This is markedly different from the classical physics, including relativity, where most of the territory is mapped, but there are still occasional dangers, most of which are clearly marked with orange cones.

comment by [deleted] · 2013-04-06T10:40:42.166Z · score: 3 (3 votes) · LW · GW

Somebody: Virtual photons don't actually exist: they're just a bookkeeping device to help you do the maths.

Someone else, in a different context: Real photons don't actually exist: each photon is emitted somewhere and absorbed somewhere else a possibly long but still finite amount of time later, making that a virtual photon. Real photons are just a mathematical construct approximating virtual photons that live long enough.

Me (in yet a different context, jokingly): [quotes the two people above] So, virtual photons don't exist, and real photons don't exist. Therefore, no photons exist at all.

comment by EHeller · 2013-04-08T15:17:54.860Z · score: 1 (1 votes) · LW · GW

Me (in yet a different context, jokingly): [quotes the two people above] So, virtual photons don't exist, and real photons don't exist. Therefore, no photons exist at all.

This is less joking than you think; it's more or less correct. If you change the final conclusion to "there isn't a good definition of photon" you'd be there. It's worse for QCD, where the theory has an SU(3) symmetry you pretty much have to sever in order to treat the theory perturbatively.

comment by [deleted] · 2013-04-06T19:26:51.590Z · score: 1 (1 votes) · LW · GW

The more I learn about the whole thing, the more I realize that all of Quantum Physics is basically a collection of miraculously working hacks, like narrow trails in a forest full of unknown deadly wildlife. This is markedly different from the classical physics, including relativity, where most of the territory is mapped, but there are still occasional dangers, most of which are clearly marked with orange cones.

Yes. While I'm not terribly up-to-date with the ‘state-of-the-art’ in theoretical physics, I feel like the situation today with renormalization and stuff is like it was until 1905 for the Lorentz-FitzGerald contraction or the black-body radiation, when people were mystified by the fact that the equations worked because they didn't know (or, at least, didn't want to admit) what the hell they meant. A new Einstein clearing this stuff up is perhaps overdue now. (The most obvious candidate is “something to do with quantum gravity”, but I'm prepared to be surprised.)

comment by OrphanWilde · 2013-04-05T19:23:51.705Z · score: 1 (1 votes) · LW · GW

all of Quantum Physics is basically a collection of miraculously working hacks

It really is. When you look at the experiments they're performing, it's kind of a miracle they get any kind of usable data at all. And explaining it to intelligent people is this near-infinite recursion of "But how do they know that experiment says what they say it does" going back more than a century with more than one strange loop.

Seriously, I've tried explaining just the proof that electrons exist, and in the end the best argument is that all the math we've built assuming their existence has really good predictive value. Which sounds like great evidence until you start confronting all the strange loops (the best experiments assume electromagnetic fields...) in that evidence, and I don't even know how to -begin- untangling those. I'm convinced you could construct parallel physics with completely different mechanics (maybe the narrow trails aren't as narrow as you'd think?) and get exactly the same results. And quantum field theory's history of parallel physics doesn't exactly help my paranoia there, even if they did eventually clean -most- of it up.

comment by EHeller · 2013-04-06T01:27:13.280Z · score: 1 (1 votes) · LW · GW

I'm convinced you could construct parallel physics with completely different mechanics (maybe the narrow trails aren't as narrow as you'd think?) and get exactly the same results.

Depends on what you mean by 'different mechanics.' Weinberg's field theory textbook develops the argument that only quantum field theory, as a structure, allows for certain phenomenologically important characteristics (mostly cluster decomposition).

However, there IS an enormous amount of leeway within the field theory: you can make a theory where electric monopoles exist as explicit degrees of freedom, and magnetic monopoles are topological gauge-field configurations, and it's dual to a theory where magnetic monopoles are the degrees of freedom and electric monopoles exist as field configurations. While these theories SEEM very different, they make identical predictions.

Similarly, if you can only make finite numbers of measurements, adding extra dimensions is equivalent to adding lots of additional forces (the dimensional deconstruction idea), etc. Some 5d theories with gravity make the same predictions as some 4d theories without.

comment by shminux · 2013-04-05T20:21:25.113Z · score: 1 (5 votes) · LW · GW

in the end the best argument is that all the math we've built assuming their existence has really good predictive value.

I fail to see the difference between this and "electrons exist". But then my definition of existence only talks about models, anyway.

I am also not sure what strange loops you are referring to, feel free to give a couple of examples.

I'm convinced you could construct parallel physics with completely different mechanics [...] and get exactly the same results.

Most likely. It happens quite often (like Heisenberg's matrix mechanics vs Schrodinger's wave mechanics). Again, I have no problem with multiple models giving the same predictions, so I fail to see the source of your paranoia...

My beef with quantum physics is that there are many straightforward questions within its own framework it does not have answers to.

comment by [deleted] · 2013-04-06T19:34:42.576Z · score: 1 (1 votes) · LW · GW

I fail to see the difference between this and "electrons exist". But then my definition of existence only talks about models, anyway.

Imagine there's a different, as-yet-unknown [ETA: simpler] model that doesn't have electrons but makes the same experimental predictions as ours.

comment by shminux · 2013-04-06T21:36:54.078Z · score: -1 (1 votes) · LW · GW

Then it's equivalent to "electrons exist". This is quite a common occurrence in physics, especially these days, holography and all. It also happens in condensed matter a lot, where quasi-particles like holes and phonons are a standard approximation. Do holes "exist" in a doped semiconductor? Certainly as much as electrons exist, unless you are a hard reductionist insisting that it makes sense to talk about simulating a Boeing 747 from quarks.

comment by [deleted] · 2013-04-07T11:32:42.795Z · score: 1 (3 votes) · LW · GW

I meant for the as-yet-unknown model to be simpler than ours. (Do epicycles exist? After all, they do predict the motion of planets.)

comment by OrphanWilde · 2013-04-05T20:49:11.587Z · score: 1 (1 votes) · LW · GW

One example is mentioned: the proof of electrons assumes the existence of (electrically charged) electromagnetic fields (Thomson's experiment), while the proof of electromagnetic fields -as- electrically charged comes from electron scattering and similar experiments.

(I'm fine with "electrons exist as a phenomenon, even if they're not the phenomenon we expect them to be", but that tends to put people in an even more skeptical frame of mind than before I started "explaining". I've generally given up such explanations; it appears I'm hopelessly bad at it.)

Another strange loop is in the quantization of energy (which requires electrical fields to be quantized, the evidence for which comes from the quantization of energy to begin with). Strange loops are -fine-, taken as a whole - taken as a whole the evidence can be pretty good - but when you're stepping a skeptical person through it step by step, it's hard to justify the next step when the previous step depends on it. The Big Bang Theory is another - the theory requires something to plug the gap in expected versus received background radiation, and the evidence for the plug (dark energy, for example) pretty much requires BBT to be true to be meaningful.

(Although it may be that a large part of the problem with the strange loops is that only the earliest experiments tend to be easily found in textbooks and on the Internet, and later less loop-prone experiments don't get much attention.)

comment by EHeller · 2013-04-06T01:36:17.846Z · score: 0 (2 votes) · LW · GW

One example is mentioned: the proof of electrons assumes the existence of (electrically charged) electromagnetic fields (Thomson's experiment), while the proof of electromagnetic fields -as- electrically charged comes from electron scattering and similar experiments.

The existence of electromagnetic fields is just the existence of light. You can build up the whole theory of electricity and magnetism without mentioning electrons. Charge is just a definition that tells us that some types of matter attract some other types of matter.

Once you have electromagnetic fields understood well, you can ask questions like "well, what is this piece of metal made up of, what is this piece of plastic made up of", etc., and you can measure the charges and masses of the various constituents. It's not actually self-referential in the way you propose.

comment by OrphanWilde · 2013-04-08T13:09:18.261Z · score: 0 (0 votes) · LW · GW

Light isn't electrically charged.

You're correct that you can build up the theory without electrons - exactly this happened. The fact that history produced linearly stepwise theories isn't the same as the evidence being linearly stepwise, however.

comment by EHeller · 2013-04-08T15:05:18.788Z · score: 1 (1 votes) · LW · GW

Light isn't electrically charged.

Light IS electromagnetic fields. The phrase "electrically charged electromagnetic fields" is a contradiction: the fields aren't charged. Charges react to the field.

If the fields WERE charged in some way, the theory would be non-linear.

In this case there is no loop- you can develop the electromagnetic theory around light, and from there proceed to electrons if you like.

comment by OrphanWilde · 2013-04-08T17:55:10.761Z · score: -2 (2 votes) · LW · GW

Light, in the theory you're indirectly referencing, is a disturbance in the electromagnetic field, not the field itself.

The fields are charged, hence all the formulas involving them reflecting charge in one form or another (charge density is pretty common); the amplitude of the field is defined as the force exerted on positively charged matter in the field. (The reason for this definition is that most electromagnetic fields we interact with are negatively charged, or have negative charge density, on account of electrons being more easily manipulated than cations, protons, plasma, or antimatter.)

With some creative use of relativity you can render the charge irrelevant for the purposes of (a carefully chosen) calculation. This is not the same as the charge not existing, however.

comment by EHeller · 2013-04-08T19:13:10.547Z · score: 1 (3 votes) · LW · GW

The fields are charged

You are using charge in some non-standard way. Charges are sources or sinks of the field.

An electromagnetic field does not sink or source more field: if it did, Maxwell's equations would be non-linear. There is no such thing as a 'negatively charged electromagnetic field'; there are just electromagnetic fields. Now, the electromagnetic field can have a negative (or positive) amplitude, but this is not the same as saying it's negatively charged.
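A minimal sketch of the linearity point, using the standard Gauss and Ampere-Maxwell equations rather than anything specific to this thread: the charge and current densities appear only as sources on the right-hand side, while the fields enter linearly on the left,

\[
\nabla\cdot\mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad
\nabla\times\mathbf{B} - \frac{1}{c^2}\frac{\partial\mathbf{E}}{\partial t} = \mu_0\,\mathbf{J},
\]

so the field itself carries no charge; any superposition of solutions is again a solution, which would fail if the field acted as a source for more field the way charged matter does.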

comment by MugaSofer · 2013-04-06T08:35:35.975Z · score: 0 (2 votes) · LW · GW

in the end the best argument is that all the math we've built assuming their existence has really good predictive value.

I fail to see the difference between this and "electrons exist". But then my definition of existence only talks about models, anyway.

Really? How does that work if, say, there's a human in Schrodinger's Box?

comment by shminux · 2013-04-06T23:52:41.668Z · score: -2 (2 votes) · LW · GW

How does that work if, say, there's a human in Schrodinger's Box?

How does what work?

comment by MugaSofer · 2013-04-07T18:18:46.661Z · score: -1 (1 votes) · LW · GW

How does a model-based definition of existence interact with morality? Or paperclipping, for that matter?

comment by shminux · 2013-04-07T18:54:48.982Z · score: -1 (3 votes) · LW · GW

Still not clear what you are having trouble with. I interpret "electrons exist" as "I have this model I call electron which is better at predicting certain future inputs than any competing model". Not sure what it has to do with morality or paperclipping.

comment by PrawnOfFate · 2013-04-08T13:10:37.758Z · score: 1 (1 votes) · LW · GW

How do you interpret "such-and-such an entity is required by such-and-such a theory, which seems to work, but turns out not to exist"? Do things wink in and out of existence as one theory replaces another?

comment by TimS · 2013-04-08T14:28:45.882Z · score: 3 (5 votes) · LW · GW

I think shminux's response is something like:

"Given a model that predicts accurately, what would you do differently if the objects described in the model do or don't exist at some ontological level? If there is no difference, what are we worrying about?"

comment by PrawnOfFate · 2013-04-11T16:53:41.999Z · score: 0 (2 votes) · LW · GW

Why worry about prediction if it doesn't relate to a real world?

comment by TimS · 2013-04-11T17:45:16.452Z · score: 1 (5 votes) · LW · GW

I think you overread shminux. My attempted steelman of his position would be:

Of course there is something external to our minds, which we all experience. Call that "reality" if you like. Whatever reality is, it creates regularity such that we humans can make and share predictions.
Are there atoms, or quarks, or forces out there in the territory? Experts in the field have said yes, but sociological analysis like The Structure of Scientific Revolutions gives us reasons to be skeptical. More importantly, resolving that metaphysical discussion does nothing to help us make better predictions in the future.

I happen to disagree with him because I think resolving that dispute has the potential to help us make better predictions in the future. But your comment appears to strawman shminux by asserting that he doesn't believe in external reality at all, when he clearly believes there is some cause of the regularity that allows his models to make accurate predictions.

Saying "there is regularity" is different from saying "regularity occurs because quarks are real."

comment by DaFranker · 2013-04-11T18:01:09.074Z · score: 2 (2 votes) · LW · GW

If this steelman is correct, my support for shminux's position has risen considerably, but so has my posterior belief that shminux and Eliezer actually have the same substantive beliefs once you get past the naming and modeling and wording differences.

Given shminux and Eliezer's long-standing disagreement and both affirming that they have different beliefs, this makes it seem more likely that there's either a fundamental miscommunication, that I misunderstand the implications of the steel-manning or of Eliezer's descriptions of his beliefs, or that this steel-manning is incorrect. Which in turn, given that they are both quite more highly experienced in explicit rationality and reduction than I am, makes the first of the above three less likely, and thus makes it back less-than-it-would-first-seem still-slightly-more-likely that they actually agree, but also more likely that this steelman strawmans shminux in some relevant way.

Argh. I think I might need to maintain a bayes belief network for this if I want to think about it any more than that.

comment by shminux · 2013-04-11T18:37:36.629Z · score: 0 (8 votes) · LW · GW

Given shminux and Eliezer's long-standing disagreement and both affirming that they have different beliefs

The disagreement starts here:

Since my expectations sometimes conflict with my subsequent experiences, I need different names for the thingies that determine my experimental predictions and the thingy that determines my experimental results. I call the former thingies 'beliefs', and the latter thingy 'reality'.

I refuse to postulate an extra "thingy that determines my experimental results". Occam's razor and such.

comment by DaFranker · 2013-04-11T18:53:08.253Z · score: 3 (3 votes) · LW · GW

So uhm. How do the experimental results, y'know, happen?

I think I understand everything else. Your position makes perfect sense. Except for that last non-postulate. Perhaps I'm just being obstinate, but there needs to be something to the pattern / regularity.

If I look at a set of models, a set of predictions, a set of experiments, and the corresponding set of experimental results, all as one big blob:

The models led to predictions - predictions about the experimental results, which are part of the model. The experiments were made according to the model that describes how to test those predictions (I might be wording this a bit confusingly?). But the experimental results... just "are". They magically are like they are, for no reason, and they are ontologically basic in the sense that nothing at all ever determines them.

To me, it defies any reasonable logical description, and to my knowledge there does not exist a possible program that would generate this (i.e. if the program "randomly" generates the experimental results, then the randomness generator is the cause of the results, and thus is that thinghy, and for any observable regularity, the algorithm that causes that regularity in the resulting program output is the thinghy). Since, as far as I can tell, there is no possible logical construct that could ever result in a causeless ontologically basic "experimental result set" that displays regularity and can be predicted and tested, I don't see how it's even possible to consistently form a system where there are even models and experiences.

In short, if there is nothing at all whatsoever from which the experimental results arise, not even just a mathematical formula that can be pointed at and called 'reality', then this doesn't even seem like a well-formed mathematically-expressible program, let alone one that is Occam/Solomonoff "simpler" than a well-formed program that implicitly contains a formula for experimental results.

No matter what kind of program you create, no matter how cleverly you spin it or complexify or simplify or reduce it, there will always, by logical necessity, be some subset of it that you can point at and say "Look here! This is what 'determines' what experimental results I see and restricts the possible futures! Let's call this thinghy/subset/formula 'reality'!"

I don't see any possibility of getting around that requirement unless I assume magic, supernatural entities, wishful thinking, ontologically basic nonlogical entities, or worse.

comment by DaFranker · 2013-04-11T19:10:52.678Z · score: 0 (0 votes) · LW · GW

No matter what kind of program you create, no matter how cleverly you spin it or complexify or simplify or reduce it, there will always, by logical necessity, be some subset of it that you can point at and say "Look here! This is what 'determines' what experimental results I see and restricts the possible futures! Let's call this thinghy/subset/formula 'reality'!"

I don't see any possibility of getting around that requirement unless I assume magic, supernatural entities, wishful thinking, ontologically basic nonlogical entities, or worse.

As far as I can tell, those two paragraphs are pretty much Eliezer's position on this, and he's just putting that subset as an arbitrary variable, saying something like "Sure, we might not know said subset of the program or where exactly it is or what computational form it takes, but let's just have a name for it anyway so we can talk about things more easily".

comment by shminux · 2013-04-11T19:26:05.106Z · score: -2 (4 votes) · LW · GW

So uhm. How do the experimental results, y'know, happen?

Are you trying to solve the question of origin? How did the external reality, that thing that determines the experimental results, in the realist model, y'know, happen?

I discount your musings about "ontological basis", perhaps uncharitably. Instrumentally, all I care about is making accurate predictions, and the concept of external reality is sometimes useful in that sense, and sometimes it gets in the way.

No matter what kind of program you create, no matter how cleverly you spin it or complexify or simplify or reduce it, there will always, by logical necessity, be some subset of it that you can point at and say "Look here! This is what 'determines' what experimental results I see and restricts the possible futures! Let's call this thingy/subset/formula 'reality'!"

Uh, not necessarily. I call this clever program, like everything else I think up, a model. If it happens to make accurate predictions I might even call it a good model. Often it is a meta-model, or a meta-meta-model, but a model nonetheless.

I don't see any possibility of getting around that requirement unless I assume magic, supernatural entities, wishful thinking, ontologically basic nonlogical entities, or worse.

I fail to see a requirement you think I would have to get around. Just some less-than-useful logical construct.

comment by DaFranker · 2013-04-11T19:45:20.641Z · score: 1 (1 votes) · LW · GW

I think it all just finally clicked. Strawman test (hopefully this is a good enough approximation):

You do imagine patterns and formulas, and your model does (or can) contain a (meta^x)-model that we could use and call "reality" and do whatever other realist-like shenanigans with, and it does describe the experimental results in some way, such that we could say "this formula, if it 'really existed' and the concept of existence is coherent at all, is the cause of my experimental results and the thingy that determines them".

You just naturally exclude going from there to assuming that the meta-model is "real", "exists", or is itself what is external to the models and causes everything; something which for other people requires extra mental effort and does relate to the problem of origin.

Uh, not necessarily. I call this clever program, like everything else I think up, a model. If it happens to make accurate predictions I might even call it a good model. Often it is a meta-model, or a meta-meta-model, but a model nonetheless.

Sure. What I was attempting to say is that if I look at your model of the world, and within this model find a sub-part that happens to be a meta-model of the world like that program, I could also point at a smaller sub-part of that meta-model and say "Within this meta-model that you have in your model of the world, this is the modeled 'cause' of your experimental results, they all happen according to this algorithm".

So now, given that the above is at least a reasonable approximation of your beliefs, the odds that one of us is misinterpreting Eliezer have risen quite considerably.

Personally, I tend to mentally "simplify" my model by saying that the program in question "is" (reality), for purposes of not having to redefine and debate things with people. Sometimes, though, when I encounter people who think "quarks are really real out there and have a real position in a really existing space", I just get utterly confused. Quarks are just useful models of the interactions in the world. What's "actually" doing the quark-ing is irrelevant.

Natural language is so bad at metaphysics, IME =\

comment by shminux · 2013-04-11T20:33:14.556Z · score: 0 (4 votes) · LW · GW

So your logic is that there is some fundamental subalgorithm somewhere deep down in the stack of models, and this is what you think makes sense to call external reality? I have at least two issues with this formulation. One is that every model supposedly contains this algorithm. Lots of high-level models are polymorphic: you can replace quarks with bits or wooden blocks and they still hold. The other is that, once you put this algorithm outside the model space, you are tempted to consider other similar algorithms which have no connection with the rest of the models whatsoever, like the mathematical universe. The term "exist" gains a meaning not present in its original instrumental Latin definition: to appear or to stand out. And then we are off the firm ground of what can be tested and into pure unconnected ideas, like the "post-utopian" ones Eliezer so despises, yet apparently implicitly adopts. Or maybe I'm being uncharitable here. He never engaged me on this point.

comment by Bugmaster · 2013-04-11T23:27:31.243Z · score: 1 (1 votes) · LW · GW

I think both you and DaFranker might be going a bit too deep down the meta-model rabbit-hole. As far as I understand, when a scientist says "electrons exists", he does not mean,

These mathematical formulae that I wrote down describe an objective reality with 100% accuracy.

Rather, he's saying something like,

There must be some reason why all my experiments keep coming out the way they do, and not in some other way. Sure, this could be happening purely by chance, but the probability of this is so tiny as to be negligible. These formulae describe a model of whatever it is that's supplying my experimental results, and this model predicts future results correctly 99.999999% of the time, so it can't be entirely wrong.

As far as I understand, you would disagree with the second statement. But, if so, how do you explain the fact that our experimental results are so reliable and consistent? Is this just an ineffable mystery?

comment by shminux · 2013-04-11T23:35:22.789Z · score: 0 (2 votes) · LW · GW

I don't disagree with the second statement, I find parts of it meaningless or tautological. For example:

These formulae describe a model of whatever it is that's supplying my experimental results

The part in bold is redundant. You would normally say "of Higgs decay" or something to that effect.

, and this model predicts future results correctly 99.999999% of the time, so it can't be entirely wrong.

The part in bold is tautological. Accurate predictions are the definition of not being wrong (within the domain of applicability). In that sense Newtonian physics is not wrong, it's just not as accurate.

comment by PrawnOfFate · 2013-04-13T13:38:36.305Z · score: 1 (1 votes) · LW · GW

The part in bold is tautological. Accurate predictions are the definition of not being wrong

The instrumentalist definition. For realists, an accurate theory can still be wrong because it fails to correspond to reality, or posits non-existent entities. For instance, an epicyclic theory of the solar system can be made as accurate as you like.

comment by Bugmaster · 2013-04-11T23:46:53.798Z · score: 1 (1 votes) · LW · GW

Accurate predictions are the definition of not being wrong (within the domain of applicability)

I meant to make a further-reaching statement than that. If we believe that our model approximates that (postulated) thing that is causing our experiments to come out a certain way, then we can use this model to devise novel experiments, which are seemingly unrelated to the experiments we are doing now; and we could expect these novel experiments to come out the way we expected, at least on occasion.

For example, we could say, "I have observed this dot of light moving across the sky in a certain way. According to my model, this means that if I were to point my telescope at some other part of sky, we would find a much dimmer dot there, moving in a specific yet different way".

This is a statement that can only be made if you believe that different patches of the sky are connected, somehow, and if you have a model that describes the entire sky, even the pieces that you haven't looked at yet.

If different patches of the sky are completely unrelated to each other, the likelihood of you observing what you'd expect is virtually zero, because there are too many possible observations (an infinite number of them, in fact), all equally likely. I would argue that the history of science so far contradicts this assumption of total independence.
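
A rough numeric sketch of this argument, with invented numbers: if the unexamined patch of sky were truly independent of the examined one, the predicted observation would be roughly as improbable as any other possibility, whereas a shared model concentrates the probability on what it predicts.

```python
# Toy numbers (assumptions, not from the thread): suppose the telescope could
# show any of 1,000,000 distinguishable "dim dot" configurations in the new patch.
n_possible_observations = 1_000_000

# If patches of sky are completely unrelated, every configuration is equally
# likely, so the predicted one turns up with probability:
p_if_unrelated = 1 / n_possible_observations

# If one model (shared dynamics) connects the patches, most of the probability
# mass lands on the configuration the model predicts -- say 90%, allowing for
# measurement error and model inaccuracy.
p_if_shared_model = 0.9

likelihood_ratio = p_if_shared_model / p_if_unrelated
print(f"Seeing the predicted dot favours the connected-sky view by ~{likelihood_ratio:,.0f} : 1")
```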

In that sense Newtonian physics is not wrong, it's just not as accurate.

This may be off-topic, but I would agree with this statement. Similarly, the statement "the Earth is flat" is not, strictly speaking, wrong. It works perfectly well if you're trying to lob rocks over a castle wall. Its inaccuracy is too great, however, to launch satellites into orbit.

comment by DaFranker · 2013-04-11T20:57:54.342Z · score: 1 (1 votes) · LW · GW

So your logic is that there is some fundamental subalgorithm somewhere deep down in the stack of models, and this is what you think makes sense to call external reality?

Sort-of.

I'm saying that there's a sufficiently fuzzy and inaccurate polymorphic model (or set of models, or meta-description of the requirements and properties for relevant models) of "the universe" that could be created and pointed at as "the laws", which, if known fully and accurately, could be "computed" or simulated or something, and computing this algorithm perfectly would in principle let us predict all of the experimental results.

If this theoretical, not-perfectly-known sub-algorithm is a perfect description of all the experimental results ever, then I'm perfectly willing to slap the labels "fundamental" and "reality" on it and call it a day, even though I don't see why this algorithm would be more "fundamentally existing" than the exact same algorithm with all parameters multiplied by two, or some other algorithm that produces the same experimental results in all possible cases.

The only reason I refer to it in the singular - "the sub-algorithm" - is because I suspect we'll eventually have a way to write and express as "an algorithm" the whole space/set/field of possible algorithms that could perfectly predict inputs, if we knew the exact set that those are in. I'm led to believe it's probably impossible to find this exact set.
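
A minimal sketch of the "all parameters multiplied by two" remark (the toy law and its parameters are hypothetical): two programs that differ internally yet agree on every observable output, so no experiment can single out which of them is "fundamentally" doing the work.

```python
def law_a(t):
    # toy "law": some observable quantity after time t, parameters (v=3, scale=1)
    v, scale = 3.0, 1.0
    return (v * t) / scale

def law_b(t):
    # the same law with every parameter doubled -- a different algorithm on paper
    v, scale = 6.0, 2.0
    return (v * t) / scale

# The two programs differ internally but agree on every tested observation,
# so experiments alone cannot say which one "really" underlies the results.
assert all(law_a(t) == law_b(t) for t in range(100))
print("observationally equivalent on all tested inputs")
```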

comment by shminux · 2013-04-11T21:24:16.816Z · score: 0 (4 votes) · LW · GW

I find this approach very limiting. There is no indication that you can construct anything like that algorithm. Yet by postulating its existence (ahem), you are forced into a mode of thinking where "there is this thing called reality with some fundamental laws which we can hopefully learn some day". As opposed to "we can keep refining our models and explain more and more inputs, and discover new and previously unknown inputs and explain them too, and predict more and so on". Without ever worrying whether some day there is nothing more to discover, because we finally found the holy grail, the ultimate laws of reality. I don't mind if it's turtles all the way down.

In fact, in the spirit of QM and as often described in SF/F stories, the mere act of discovery may actually change the "laws", if you are not careful. Or maybe we can some day do it intentionally, construct our own stack of turtles. Oh, the possibilities! And all it takes is to let go of one outdated idea, which is, like Aristotle's impetus, ripe for discarding.

comment by PrawnOfFate · 2013-04-13T14:02:22.894Z · score: 2 (2 votes) · LW · GW

I don't mind if it's turtles all the way down.

The claim that reality may be ultimately unknowable or non-algorithmic is different to the claim you have made elsewhere, that there is no reality.

comment by TheOtherDave · 2013-04-13T15:47:59.447Z · score: 1 (1 votes) · LW · GW

I'm not sure it's as different as all that from shminux's perspective.

By way of analogy, I know a lot of people who reject the linguistic habit of treating "atheism" as referring to a positive belief in the absence of a deity, and "agnosticism" as referring to the absence of a positive belief in the presence of a deity. They argue that no, both positions are atheist; in the absence of a positive belief in the presence of a deity, one does not believe in a deity, which is the defining characteristic of the set of atheist positions. (Agnosticism, on this view, is the position that the existence of a deity cannot be known, not merely the observation that one does not currently know it. And, as above, on this view that means agnosticism implies atheism.)

If I substitute (reality, non-realism, the claim that reality is unknowable) for (deity, atheism, agnosticism) I get the assertion that the claim that reality is unknowable is a non-realist position. (Which is not to say that it's specifically an instrumentalist position, but we're not currently concerned with choosing among different non-realist positions.)

All of that said, none of it addresses the question which has previously been raised, which is how instrumentalism accounts for the at-least-apparently-non-accidental relationship between past inputs, actions, models, and future inputs. That relationship still strikes me as strong evidence for a realist position.

comment by PrawnOfFate · 2013-04-13T16:31:36.455Z · score: 1 (1 votes) · LW · GW

I can't see much evidence that the people who construe atheism and agnosticism in the way you describe are actually correct. I agree that the no-reality position and the unknowable-reality position could both be considered anti-realist, but they are still substantively different. Deriving no-reality from unknowable-reality always seems like an error to me, but maybe someone has an impressive defense of it.

comment by TheOtherDave · 2013-04-13T18:32:44.998Z · score: 1 (1 votes) · LW · GW

Well, I certainly don't want to get into a dispute about what terms like "atheism", "agnosticism", "anti-realism", etc. ought to mean. All I'll say about that is if the words aren't being used and interpreted in consistent ways, then using them does not facilitate communication. If the goal is communication, then it's best not to use those words.

Leaving language aside, I accept that the difference between "there is no reality" and "whether there is a reality is systematically unknowable" is an important difference to you, and I agree that deriving the former from the latter is tricky.

I'm pretty sure it's not an important difference to shminux. It certainly isn't an important difference to me... I can't imagine why I would ever care about which of those two statements is true if at least one of them is.

comment by PrawnOfFate · 2013-04-14T18:35:14.358Z · score: 0 (0 votes) · LW · GW

Well, I certainly don't want to get into a dispute about what terms like "atheism", "agnosticism", "anti-realism", etc. ought to mean.

I don't see why not.

All I'll say about that is if the words aren't being used and interpreted in consistent ways, then using them does not facilitate communication. If the goal is communication, then it's best not to use those words.

Or settle their correct meanings using a dictionary, or something.

Leaving language aside, I accept that the difference between "there is no reality" and "whether there is a reality is systematically unknowable" is an important difference to you, and I agree that deriving the former from the latter is tricky.

I'm pretty sure it's not an important difference to shminux.

If shminux is using arguments for Unknowable Reality as arguments for No Reality, then shminux's arguments are invalid whatever shminux cares about.

It certainly isn't an important difference to me... I can't imagine why I would ever care about which of those two statements is true if at least one of them is.

One seems a lot more far-fetched than the other to me.

comment by TheOtherDave · 2013-04-14T20:12:35.260Z · score: 0 (0 votes) · LW · GW

Well, I certainly don't want to get into a dispute about what terms like "atheism", "agnosticism", "anti-realism", etc. ought to mean.

I don't see why not.

If all goes well in a definitional dispute, at the end of it we have agreed on what meaning to assign to a word. I don't really care; I'm usually perfectly happy to assign to it whatever meaning my interlocutor does. In most cases, there was some other more interesting question about the world I was trying to get at, which got derailed by a different discussion about the meanings of words. In most of the remaining cases, the discussion about the meanings of words was less valuable to me than silence would have been.

That's not to say other people need to share my values, though; if you want to join definitional disputes (by referencing a dictionary or something) go right ahead. I'm just opting out.

If shminux is using arguments for Unknowable Reality as arguments for No Reality,

I don't think he is, though I could be wrong about that.

comment by MugaSofer · 2013-04-13T15:51:46.250Z · score: 0 (2 votes) · LW · GW

Pretty sure you mixed up "we can't know the details of reality" with "we can't know if reality exists".

comment by TheOtherDave · 2013-04-13T15:58:06.587Z · score: 0 (0 votes) · LW · GW

That would be interesting, if true.
I have no coherent idea how you conclude that from what I said, though.
Can you unpack your reasoning a little?

comment by MugaSofer · 2013-04-13T16:30:19.283Z · score: 2 (4 votes) · LW · GW

Sure.


Agnosticism = believing we can't know if God exists

Atheism = believing God does not exist

Theism = believing God exists


turtles-all-the-way-down-ism = believing we can't know what reality is (can't reach the bottom turtle)

instrumentalism/anti-realism = believing reality does not exist

realism = believing reality exists


Thus anti-realism and realism map to atheism and theism, but agnosticism doesn't map to infinite-turtle-ism because it says we can't know if God exists, not what God is.

comment by shminux · 2013-04-13T16:50:17.273Z · score: 1 (3 votes) · LW · GW

Agnosticism = believing we can't know if God exists

Or believing that it's not a meaningful or interesting question to ask

instrumentalism/anti-realism = believing reality does not exist

That's quite an uncharitable conflation. Antirealism is believing that reality does not exist. Instrumentalism is believing that reality is a sometimes useful assumption.

comment by MugaSofer · 2013-04-13T18:29:44.356Z · score: -1 (3 votes) · LW · GW

Or believing that it's not a meaningful or interesting question to ask

Those would be ignosticism and apatheism respectively.

That's quite an uncharitable conflation. Antirealism is believing that reality does not exist. Instrumentalism is believing that reality is a sometimes useful assumption.

Yes, yes, we all know your idiosyncratic definition of "exist", I was using the standard meaning because I was talking to a realist.

comment by private_messaging · 2013-04-13T17:18:38.158Z · score: -1 (3 votes) · LW · GW

Yeah. The issue here, I gather, has a lot to do with domain-specific knowledge - you're a physicist, you have a general idea of how physics does not distinguish between, for example, 0 and two worlds of opposite phases which cancel out from our perspective. Which is way different from the naive idea of some sort of computer simulation, where of course two simulations with opposite signs being summed are a very different thing 'from the inside' than plain 0. If we start attributing reality to components of the sum in Feynman's path integral... that's going to get weird.

comment by MugaSofer · 2013-04-13T19:16:29.199Z · score: 0 (2 votes) · LW · GW

You realize that, assuming Feynman's path integral makes accurate predictions, shminux will attribute it as much reality as, say, the moon, or your inner experience.

comment by private_messaging · 2013-04-13T21:04:49.258Z · score: -2 (2 votes) · LW · GW

The issue is with all the parts of it, which include your great grandfather's ghost, twice, with opposite phases, looking over your shoulder.

comment by MugaSofer · 2013-04-13T22:27:26.764Z · score: -1 (1 votes) · LW · GW

Since I am not a quantum physicist, I can't really respond to your objections, and in any case I don't subscribe to shminux's peculiar philosophy.

comment by TheOtherDave · 2013-04-13T16:36:07.057Z · score: 0 (0 votes) · LW · GW

Thanks for the clarification, it helps.
An agnostic with respect to God (which is what "agnostic" has come to mean by default) would say both that we can't know if God exists, and also that we can't know the nature of God. So I think the analogy still holds.

comment by MugaSofer · 2013-04-13T16:42:48.807Z · score: 0 (2 votes) · LW · GW

Right. But! An agnostic with respect to the details of reality - an infinite-turtle-ist - need not be an agnostic with respect to reality, even if an agnostic with respect to reality is also an agnostic with respect to its details (although I'm not sure that follows in any case).

comment by TheOtherDave · 2013-04-13T17:48:10.480Z · score: 1 (1 votes) · LW · GW

(shrug) Sure. So my analogy only holds between agnostics-about-God (who question the knowability of both the existence and nature of God) and agnostics-about-reality (who question the knowability of both the existence and nature of reality).

As you say, there may well be other people out there, for example those who question the knowability of the details, but not of the existence, of reality. (For a sufficiently broad understanding of "the details" I suspect I'm one of those people, as is almost everyone I know.) I wasn't talking about them, but I don't dispute their existence.

comment by MugaSofer · 2013-04-13T19:31:32.746Z · score: -1 (1 votes) · LW · GW

Absolutely, but that's not what shminux and PrawnOfFate were talking about, is it?

comment by TheOtherDave · 2013-04-13T20:00:01.445Z · score: 1 (1 votes) · LW · GW

I have to admit, this has gotten rarefied enough that I've lost track both of your point and my own.

So, yeah, maybe I'm confusing knowing-X-exists with knowing-details-of-X for various Xes, or maybe I've tried to respond to a question about (one, the other, just one, both) with an answer about (the other, one, both, just one). I no longer have any clear notion, either of which is the case or why it should matter, and I recommend we let this particular strand of discourse die unless you're willing to summarize it in its entirety for my benefit.

comment by [deleted] · 2013-04-13T20:05:10.415Z · score: 1 (1 votes) · LW · GW

I predict that these discussions, even among smart, rational people, will go nowhere conclusive until we have a proper theory of self-aware decision making, because that's what this all hinges on. All the various positions people are taking here are just packaging up the same underlying confusion, which is how not to go off the rails once your model includes yourself.

Not that I'm paying close attention to this particular thread.

comment by [deleted] · 2013-04-15T02:25:25.234Z · score: 1 (1 votes) · LW · GW

And all it takes is to let go of one outdated idea, which is, like Aristotle's impetus, ripe for discarding.

This is not at all important to your point, but the impetus theory of motion was developed by John Philoponus in the 6th century as an attack on Aristotle's own theory of motion. It was part of a broadly Aristotelian programme, but it's not something Aristotle developed. Aristotle himself has only traces of a dynamical theory (the theory being attacked by Philoponus is sort of an off-hand remark), and he concerned himself mostly with what we would probably call kinematics. The Aristotelian principle carried through in Philoponus' theory is the principle that motion requires the simultaneous action of a mover, which is false with respect to motion but true with respect to acceleration. In fact, if you replace 'velocity' with 'acceleration' in a certain passage of the Physics, you get F=ma. So we didn't exactly discard Aristotle's (or Philoponus') theory, important precursors as they were to the idea of inertia.

comment by TimS · 2013-04-15T02:40:39.286Z · score: 0 (0 votes) · LW · GW

In fact, if you replace 'velocity' with 'acceleration' in a certain passage of the Physics, you get F=ma.

That kind of replacement seems like a serious type error - velocity is not really anything like acceleration. Like saying that if you replace P with zero, you can prove P = NP.

comment by [deleted] · 2013-04-15T03:27:46.459Z · score: 0 (0 votes) · LW · GW

That it's a type error is clear enough (I don't know if it's a serious one under an atmosphere). But what follows from that?

comment by TheOtherDave · 2013-04-13T19:55:51.561Z · score: 1 (1 votes) · LW · GW

"we can keep refining our models and explain more and more inputs"

Hm.

On your account, "explaining an input" involves having a most-accurate-model (aka "real world") which alters in response to that input in some fashion that makes the model even more accurate than it was (that is, better able to predict future inputs). Yes?

If so... does your account then not allow for entering a state where it is no longer possible to improve the predictive power of our most accurate model, such that there is no further input-explanation to be done? If it does... how is that any less limiting than the realist's view allowing for entering a state where there is no further understanding of reality to be done?

I mean, I recognize that it's possible to have an instrumentalist account in which no such limitative result applies, just as it's possible to have a realist account in which no such limitative result applies. But you seem to be saying that there's something systematically different between instrumentalist and realist accounts here, and I don't quite see why that should be.

You make a reference a little later on to "mental blocks" that realism makes more likely, and I guess that's another reference to the same thing, but I don't quite see what it is that that mental block is blocking, or why an instrumentalist is not subject to equivalent mental blocks.

Does the question make sense? Is it something you can further clarify?

comment by shminux · 2013-04-14T00:47:44.410Z · score: 0 (2 votes) · LW · GW

If so... does your account then not allow for entering a state where it is no longer possible to improve the predictive power of our most accurate model, such that there is no further input-explanation to be done?

Maybe you are reading too much into what I said. If your view is that what we try to understand is this external reality, it's quite a small step to assuming that some day it will be understood in its entirety. This sentiment has been expressed over and over by very smart people, like the proverbial Lord Kelvin's warning that "physics is almost done", or Laplacian determinism. If you don't assume that the road you travel leads to a certain destination, you can still decide that there are no more places to go as your last trail disappears, but it is by no means an obvious conclusion.

comment by TheOtherDave · 2013-04-14T02:40:13.709Z · score: 1 (1 votes) · LW · GW

If your view is that what we try to understand is this external reality, it's quite a small step to assuming that some day it will be understood in its entirety.

Well, OK.
I certainly agree that this assumption has been made by realists historically.
And while I'm not exactly sure it's a bad thing, I'm willing to treat it as one for the sake of discussion.

That said... I still don't quite get what the systematic value-difference is.
I mean, if my view is instead that what we try to achieve is maximal model accuracy, with no reference to this external reality... then what? Is it somehow a longer step from there to assuming that some day we'll achieve a perfectly accurate model?
If so, why is that?
If not, then what have I gained by switching from the goal of "understand external reality in its entirety" to the goal of "achieve a perfectly accurate model"?

If I'm following you at all, it seems you're arguing in favor of a non-idealist position much more than a non-realist position. That is, if it's a mistake to "assume that the road you travel leads to a certain destination", it follows that I should detach from "ultimate"-type goals more generally, whether it's a realist's goal of ultimately understanding external reality, or an instrumentalist's goal of ultimately achieving maximal model accuracy, or some other ontology's goal of ultimately doing something else.

Have I missed a turn somewhere?
Or is instrumentalism somehow better suited to discouraging me from idealism than realism is?
Or something else?

comment by shminux · 2013-04-14T07:41:21.640Z · score: 1 (3 votes) · LW · GW

Look, I don't know if I can add much more. What started my deconversion from realism is watching smart people argue about interpretations of QM, Boltzmann brains and other untestable ontologies. After a while these debates started to seem silly to me, so I had to figure out why. Additionally, I wanted to distill the minimum ontology, something which needn't be a subject of pointless argument, but only of experimental checking. Eventually I decided that external reality is just an assumption, like any other. This seems to work for me, and saves me a lot of worrying about untestables. Most physicists follow this pragmatic approach, except for a few tenured dudes who can afford to speculate on any topic they like. Max Tegmark and Don Page are more or less famous examples. But few physicists worry about formalizing their ontology of pragmatism. They follow the standard meaning of the terms exist, real, true, etc., and when these terms lead to untestable speculations, their pragmatism takes over and they lose interest, except maybe for some idle chat over a beer. A fine example of compartmentalization. I've been trying to decompartmentalize and see where the pragmatic approach leads, and my interpretation of instrumentalism is the current outcome. It lets me spot early the statements whose implications a pragmatist would eventually ignore, which is quite satisfying. I am not saying that I have finally worked out the One True Ontology, or that I have resolved every issue to my satisfaction, but it's the best I've been able to cobble together. But I am not willing to trade it for a highly compartmentalized version of realism, or the Eliezerish version of many untestable worlds and timeless this or that. YMMV.

comment by TheOtherDave · 2013-04-14T19:09:58.551Z · score: 0 (0 votes) · LW · GW

(shrug) OK, I'm content to leave this here, then. Thanks for your time.

comment by PrawnOfFate · 2013-04-14T18:40:17.704Z · score: -1 (1 votes) · LW · GW

So...what is the point of caring about prediction?

comment by DaFranker · 2013-04-11T22:20:00.374Z · score: 1 (1 votes) · LW · GW

But the "turtles all the way down" or the method in which the act of discovery changes the law...

Why can't that also be modeled? Even if the model is self-modifying meta-recursive turtle-stack infinite "nonsense", there probably exists some way to describe it, model it, understand it, or at least point towards it.

This very "pointing towards it" is what I'm doing right now. I postulate that no matter the form it takes, even if it seems logically nonsensical, there's a model which can explain the results proportionally to how much we understand about it (we may end up never being able to perfectly understand it).

Currently, the best fuzzy picture of that model, by my pinpointing of what-I'm-referring-to, is precisely what you've just described:

"we can keep refining our models and explain more and more inputs, and discover new and previously unknown inputs and explain them to, and predict more and so on".

That's what I'm pointing at. I don't care either how many turtle stacks or infinities or regresses or recursions or polymorphic interfaces or variables or volatilities there are. The hypothetical description that a perfect agent with perfect information looking at our models and inputs from the outside would give of the program that we are part of is the "algorithm".

Maybe the Turing tape never halts, and just keeps computing more and more new "laws of physics" as we research on and on and do more exotic things, such that there are no "true final ultimate laws". Of course that could happen. I have no solid evidence either way, so why would I restrict my thinking to the hypothesis that there are? I like flexibility in options like that.

So yeah, my definition of that formula is pretty much self-referential and perhaps not always coherently explained. It's a bit like CEV in that regard, "whatever we would if ..." and so on.

Once all reduced away, all I'm really postulating is the continuing ability of possible agents who make models and analyze their own models to point at, frame, describe mathematically, and meta-modelize the patterns of experimental results, given sufficient intelligence and ability to model things. It's not nearly as powerfully predictive or groundbreaking as I might have made it sound in earlier comments.

For more comparisons, it's a bit like when I say "my utility function". Clearly, there might not be a final utility function in my brain, it might be circular, or it might regress infinitely, or be infinitely self-modifying and self-referential, but by golly when I say that my best approximation of my utility function values having food much more highly than starving, I'm definitely pointing at and approximating something in there in that mess of patterns, even if I might not know exactly where I'm pointing at.

That "something" is my "true utility function", even if it would have to be defined with fuzzy self-recursive meta-games and timeless self-determinance or some other crazy shenanigans.

So I guess that's about also what I refer to when I say "reality".

comment by shminux · 2013-04-11T22:50:14.074Z · score: -1 (3 votes) · LW · GW

I'm not really disagreeing. I'm just pointing out that, as you list progressively more and more speculative models, looser and looser connected to the experiment, the idea of some objective reality becomes progressively less useful, and the questions like "but what if the Boltzmann Brains/mathematical universe/many worlds/super-mega crossover/post-utopian colonial alienation is real?" become progressively more nonsensical.

Yet people forget that and seriously discuss questions like that, effectively counting angels on the head of a pin. And, on the other hand, they get this mental block due to the idea of some static objective reality out there, limiting their model space.

These two fallacies are what started me on my way from realism to pragmatism/instrumentalism in the first place.

comment by PrawnOfFate · 2013-04-13T13:48:07.851Z · score: 1 (1 votes) · LW · GW

the idea of some objective reality becomes progressively less useful

Useful for what? Prediction? But realists aren't using these models to answer the "what input should I expect" question; they are answering other questions, like "what is real" and "what should we value".

And "nothing" is an answer to "what is real". What does instrumentalism predict?

comment by MugaSofer · 2013-04-13T13:56:20.184Z · score: -1 (1 votes) · LW · GW

What does instrumentalism predict?

If it's really better or more "true" on some level, I suppose you might predict a superintelligence would self-modify into an anti-realist? Seems unlikely from my realist perspective, at least, so I'd have to update in favour of something.

comment by PrawnOfFate · 2013-04-13T14:15:44.055Z · score: 0 (0 votes) · LW · GW

If it's really better or more "true" on some level

But if that's not a predictive level, then instrumentalism is inconsistent. It is saying that all other non-predictive theories should be rejected for being non-predictive, but that it is itself somehow an exception. This is of course parallel to the flaw in Logical Positivism.

comment by MugaSofer · 2013-04-13T14:34:08.274Z · score: -1 (1 votes) · LW · GW

Well, I suppose all it would need to persuade is people who don't already believe it ...

More seriously, you'll have to ask shminux, because I, as a realist, anticipate this test failing, so naturally I can't explain why it would succeed.

comment by PrawnOfFate · 2013-04-13T14:44:08.625Z · score: 0 (0 votes) · LW · GW

Huh? I don't see why the ability to convince people who don't care about consistency is something that should sway me.

comment by MugaSofer · 2013-04-13T15:15:56.011Z · score: -1 (1 votes) · LW · GW

If I had such a persuasive argument, naturally it would already have persuaded me, but my point is that it doesn't need to persuade people who already agree with it - just the rest of us.

And once you've self-modified into an instrumentalist, I guess there are other arguments that will now persuade you - for example, that this hypothetical underlying layer of "reality" has no extra predictive power (at least, I think that's what shminux finds persuasive).

comment by PrawnOfFate · 2013-04-12T20:52:40.740Z · score: 1 (1 votes) · LW · GW

But your comment appears to strawman shminux by asserting that he doesn't believe in external reality at all, when he clearly believes there is some cause of the regularity that allows his models to make accurate predictions.

I'm not sure. I have seen comments that contradict that interpretation. If shminux were the kind of irrealist who believes in an external world of an unknown nature, shminux would have no reason not to call it reality. But shminux insists reality is our current best model.

ETA:

another example

"I refuse to postulate an extra "thingy that determines my experimental results".

comment by shminux · 2013-04-11T18:09:09.593Z · score: 1 (5 votes) · LW · GW

Thank you for your steelmanning (well, your second or third one, people keep reading what I write extremely uncharitably). I really appreciate it!

Of course there is something external to our minds, which we all experience.

Most certainly. I call these experiences inputs.

Call that "reality" if you like.

Don't, just call it inputs.

Whatever reality is, it creates regularity such that we humans can make and share predictions.

No, reality is a (meta-)model which basically states that these inputs are somewhat predictable, and little else.

Are there atoms, or quarks, or forces out there in the territory?

The question is meaningless if you don't postulate territory.

Experts in the field have said yes

Experts in the field provided prescriptions, called laws, which let you predict some future inputs, with varying success.

but sociological analysis like The Structure of Scientific Revolutions gives us reasons to be skeptical.

I see the cited link as a research in cognitive sciences (what is thinkable and in what situations), not any statement about some mythical territory.

More importantly, resolving that metaphysical discussion does nothing to help us make better predictions in the future.

But understanding how and why people think what they think is likely very helpful in constructing models which make better predictions.

I happen to disagree with him because I think resolving that dispute has the potential to help us make better predictions in the future.

I'd love to be convinced of that... But first I'd have to be convinced that the dispute is meaningful to begin with.

Saying "there is regularity" is different from saying "regularity occurs because quarks are real."

Indeed. Mainly because I don't use the term "real", at least not in the same way realists do.

Again, thank you for being charitable. That's a first from someone who disagrees.

comment by Bugmaster · 2013-04-11T18:56:58.523Z · score: 2 (2 votes) · LW · GW

Of course there is something external to our minds, which we all experience. ... Experts in the field provided prescriptions, called laws, which let you predict some future inputs, with varying success.

I'm not sure I understand your point of view, given these two statements. If experts in the field are able to predict future inputs with a reasonably high degree of certainty, and if we agree that these inputs are external to our minds, is it not reasonable to conclude that such experts have built an approximate mental model of at least a small portion of whatever it is that causes the inputs? Or are you asserting that they just got lucky?

Sorry for the newbie question, I'm late to this discussion and am probably missing a lot of context...

comment by DaFranker · 2013-04-11T19:00:22.460Z · score: 0 (0 votes) · LW · GW

I'm making similar queries here, since this intrigues me and I was similarly confused by the non-postulate. Maybe between all the cross-interrogations we'll finally understand what shminux is saying ;)

comment by shminux · 2013-04-11T19:47:13.083Z · score: -1 (3 votes) · LW · GW

whatever it is that causes the inputs

why assume that something does, unless it's an accurate assumption (i.e. testable, tested and confirmed)?

comment by PrawnOfFate · 2013-04-13T14:04:56.243Z · score: 1 (1 votes) · LW · GW

why assume that something does, unless it's an accurate assumption (i.e. testable, tested and confirmed)?

Because there are stable relationships between outputs (actions) and inputs. We all test that hypothesis multiple times a day.

comment by Bugmaster · 2013-04-11T23:10:24.588Z · score: 1 (1 votes) · LW · GW

The inputs appear to be highly repeatable and consistent with each other. This could be purely due to chance, of course, but IMO this is less likely than the inputs being interdependent in some way.

comment by shminux · 2013-04-11T23:26:11.738Z · score: 0 (2 votes) · LW · GW

The inputs appear to be highly repeatable and consistent with each other.

Some are and some aren't. When a certain subset of them is, I am happy to use a model that accurately predicts what happens next. If there is a choice, then the most accurate and simplest model. However, I am against extrapolating this approach into "there is this one universal thing that determines all inputs ever".

comment by Bugmaster · 2013-04-11T23:33:34.197Z · score: 1 (1 votes) · LW · GW

What is the alternative, though? Over time, the trend in science has been to unify different groups of inputs; for example, electricity and magnetism were considered to be entirely separate phenomena at one point. So were chemistry and biology, or electricity and heat, etc. This happens all the time on smaller scales, as well; and every time it does, is it not logical to update your posterior probability of that "one universal thing" being out there to be a little bit higher?

And besides, what is more likely: that 10 different groups of inputs are consistent and repeatable due to N reasons, or due to a single reason?

comment by DaFranker · 2013-04-11T19:56:14.330Z · score: 1 (1 votes) · LW · GW

Intuitively, to me at least, it seems simpler to assume that everything has a cause, including the regularity of experimental results, and that a mathematical algorithm being computed, with its outputs resulting in what we perceive as inputs / experimental results, is a simpler cause than randomness, magic, or nothingness.

See also my other reply to your other reply (heh). I think I'm piecing together your description of things now. I find your consistency with it rather admirable (and very epistemologically hygienic, I might add).
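
One way to make that Occam-style intuition concrete is a small sketch (illustrative only; compressed size is used as a crude stand-in for description length): results produced by a simple rule admit a much shorter description than results treated as brute, uncaused facts.

```python
import random
import zlib

# Two sequences of "experimental results": one generated by a simple rule,
# one by coin flips. Compressed size is a rough proxy for how short a
# program reproducing them could be.
regular = bytes((3 * i) % 256 for i in range(10_000))
irregular = bytes(random.getrandbits(8) for _ in range(10_000))

print("regular:  ", len(zlib.compress(regular, 9)), "bytes compressed")
print("irregular:", len(zlib.compress(irregular, 9)), "bytes compressed")

# The regular results are well explained by a short generating rule; calling
# that rule their "cause" is cheaper, in description length, than treating
# each result as an isolated, uncaused fact.
```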

comment by TimS · 2013-04-11T18:29:30.003Z · score: 1 (1 votes) · LW · GW

Experts in the field provided prescriptions, called laws, which let you predict some future inputs, with varying success.

Experts in the field have said things that were very philosophically naive. The steel-manning of those types of statements is isomorphic to physical realism.

And you are using territory in a weird way. If I understood the purpose of your usage, I might be able to understand it better. In my usage, "territory" seems roughly like the thing you call "inputs + implication of some regularity in inputs." That's how I've interpreted Yudkowsky's use of the word as well. Honestly, my perception was that the proper understanding of territory was not exactly central to your dispute with him.

In short, Yudkowsky says the map "corresponds" to the territory in sufficiently fine grain that sentences like "atoms exist" are meaningful. You seem to think that the metaphor of the map is hopelessly misleading. I'm somewhere in between, in that I think the map metaphor is helpful, but the map is not fine-grained enough to think "atoms exist" is a meaningful sentence.

I think this philosophy-of-science entry in the SEP is helpful, if only by defining the terms of the debate. I mostly like Feyerabend's thinking, Yudkowsky and most of this community do not, and your position seems to be trying to avoid the debate. Which you could do more easily if you would recognize what we mean by our words.


For outside observers:
No, I haven't defined map or corresponds. Also, meaningful != true. Newtonian physics is meaningful and false.

comment by shminux · 2013-04-11T19:11:13.671Z · score: -2 (2 votes) · LW · GW

And you are using territory in a weird way. If I understood the purpose of your usage, I might be able to understand it better. In my usage, "territory" seems roughly like the thing you call "inputs + implication of some regularity in inputs."

Well, almost the same thing. To me regularity is the first (well-tested) meta-model, not a separate assumption.

That's how I've interpreted Yudkowsky's use of the word as well.

I'm not so sure, see my reply to DaFranker.

Honestly, my perception was that the proper understanding of territory was not exactly central to your dispute with him.

I think it is absolutely central. Once you postulate external reality, a whole lot of previously meaningless questions become meaningful, including whether something "exists", like ideas, numbers, Tegmark's level 4, many untestable worlds and so on.

I think this philosophy-of-science entry in the SEP is helpful, if only by defining the terms of the debate.

Only marginally. My feeling is that this apparent incommensurability is due to people not realizing that their disagreements are due to some deeply buried implicit assumptions and the lack of desire to find these assumptions and discuss them.

comment by [deleted] · 2013-04-11T19:27:05.871Z · score: 2 (2 votes) · LW · GW

I think it is absolutely central. Once you postulate external reality, a whole lot of previously meaningless questions become meaningful, including whether something "exists", like ideas, numbers, Tegmark's level 4, many untestable worlds and so on.

Not to mention question like "If we send these colonists over the horizon, does that kill them or not?"

Which brings me to a question: I can never quite figure out how your instrumentalism interacts with preferences. Without assuming the existence of something you care about, on what basis do you make decisions?

In other words, instrumentalism is a fine epistemic position, but how to actually build an instrumental agent with good consequences is unclear. Doesn't wireheading become an issue?

If I'm accidentally assuming something that is confusing me, please point it out.

comment by shminux · 2013-04-11T19:44:59.871Z · score: -1 (5 votes) · LW · GW

Not to mention question like "If we send these colonists over the horizon, does that kill them or not?"

This question is equally meaningful in both cases, and equally answerable. And the answer happens to be the same, too.

Which brings me to a question: I can never quite figure out how your instrumentalism interacts with preferences. Without assuming the existence of something you care about, on what basis do you make decisions?

Your argument reminds me of "Obviously morality comes from God, if you don't believe in God, what's to stop you from killing people if you can get away with it?" It is probably an uncharitable reading of it, though.

The "What I care about" thingie is currently one of those inputs. Like, what compels me to reply to your comment? It can partly be explained by the existing models in psychology, sociology and other natural sciences, and in part it is still a mystery. Some day we will hopefully be able to analyze and simulate mind and brain better, and explain how this desire arises, and why one shminux decides to reply to and not ignore your comment. Maybe I feel good when smart people publicly agree with me. Maybe I'm satisfying some other preference I'm not aware of.

comment by [deleted] · 2013-04-12T01:34:23.682Z · score: 1 (1 votes) · LW · GW

Your argument

It's not an argument; it's an honest question. I'm sympathetic to instrumentalism, I just want to know how you frame the whole preferences issue, because I can't figure out how to do it. It probably is like the God is Morality thing, but I can't just accidentally find my way out of such a pickle without some help.

I frame it as "here's all these possible worlds, some being better than others, and only one being 'real', and then here's this evidence I see, which discriminates which possible worlds are probable, and here's the things I can do that further affect which is the real world, and I want to steer towards the good ones." As you know, this makes a lot of assumptions and is based pretty directly on the fact that that's how human imagination works.

If there is a better way to do it, which you seem to think that there is, I'm interested. I don't understand your answer above, either.

comment by shminux · 2013-04-12T03:51:42.142Z · score: 1 (5 votes) · LW · GW

Well, I'll give it another go, despite someone diligently downvoting all my related comments.

"here's all these possible worlds, some being better than others, and only one being 'real', and then here's this evidence I see, which discriminates which possible worlds are probable, and here's the things I can do that that further affect which is the real world, and I want to steer towards the good ones."

Same here, with a marginally different dictionary. Although you are getting close to a point I've been waiting for people to bring up for some time now.

So, what are those possible worlds but models? And isn't the "real world" just the most accurate model? Properly modeling your actions lets you affect the preferred "world" model's accuracy, and such. The remaining issue is whether the definition of "good" or "preferred" depends on realist vs instrumentalist outlook, and I don't see how. Maybe you can clarify.

comment by TheOtherDave · 2013-04-12T16:32:31.992Z · score: 4 (4 votes) · LW · GW

Hrm.

First, let me apologize pre-emptively if I'm retreading old ground, I haven't carefully read this whole discussion. Feel free to tell me to go reread the damned thread if I'm doing so. That said... my understanding of your account of existence is something like the following:

A model is a mental construct used (among other things) to map experiences to anticipated experiences. It may do other things along the way, such as represent propositions as beliefs, but it needn't. Similarly, a model may include various hypothesized entities that represent certain consistent patterns of experience, such as this keyboard I'm typing on, my experiences of which consistently correlate with my experiences of text appearing on my monitor, responses to my text later appearing on my monitor, etc.

On your account, all it means to say "my keyboard exists" is that my experience consistently demonstrates patterns of that sort, and consequently I'm confident of the relevant predictions made by the set of models (M1) that have in the past predicted patterns of that sort, not-so-confident of relevant predictions made by the set of models (M2) that predict contradictory patterns, etc. etc. etc.

We can also say that M1 all share a common property K that allows such predictions. In common language, we are accustomed to referring to K as an "object" which "exists" (specifically, we refer to K as "my keyboard") which is as good a way of talking as any though sloppy in the way of all natural language.

We can consequently say that M1 all agree on the existence of K, though of course that may well elide over many important differences in the ways that various models in M1 instantiate K.

We can also say that M1 models are more "accurate" than M2 models with respect to those patterns of experience that led us to talk about K in the first place. That is, M1 models predict relevant experience more reliably/precisely/whatever.

And in this way we can gradually converge on a single model (MR1), which includes various objects, and which is more accurate than all the other models we're aware of. We can call MR1 "the real world," by which we mean the most accurate model.

Of course, this doesn't preclude uncovering a new model MR2 tomorrow which is even more accurate, at which point we would call MR2 "the real world". And MR2 might represent K in a completely different way, such that the real world would now, while still containing the existence of my keyboard, contain it in a completely different way. For example, MR1 might represent K as a collection of atoms, and MR2 might represent K as a set of parameters in a configuration space, and when I transition from MR1 to MR2 the real world goes from my keyboard being a collection of atoms to my keyboard being a set of parameters in a configuration space.

Similarly, it doesn't preclude our experiences starting to systematically change such that the predictions made by MR1 are no longer reliable, in which case MR1 stops being the most accurate model, and some other model (MR3) is the most accurate model, at which point we would call MR3 "the real world". For example, MR3 might not contain K at all, and I would suddenly "realize" that there never was a keyboard.

All of which is fine, but the difficulty arises when after identifying MR1 as the real world we make the error of reifying MRn, projecting its patterns onto some kind of presumed "reality" R to which we attribute a kind of pseudo-existence independent of all models. Then we misinterpret the accuracy of a model as referring, not to how well it predicts future experience, but to how well it corresponds to R.

Of course, none of this precludes being mistaken about the real world... that is, I might think that MR1 is the real world, when in fact I just haven't fully evaluated the predictive value of the various models I'm aware of, and if I were to perform such an evaluation I'd realize that no, actually, MR4 is the real world. And, knowing this, I might have various degrees of confidence in various models, which I can describe as "possible worlds."

And I might have preferences as to which of those worlds is real. For example, MP1 and MP2 might both be possible worlds, and I am happier in MP1 than MP2, so I prefer MP1 be the real world. Similarly, I might prefer MP1 to MP2 for various other reasons other than happiness.

Which, again, is fine, but again we can make the reification error by assigning to R various attributes which correspond, not only to the real world (that is, the most accurate model), but to the various possible worlds MRx..y. But this isn't a novel error, it's just the extension of the original error of reification of the real world onto possible worlds.

That said, talking about it gets extra-confusing now, because there's now several different mistaken ideas about reality floating around... the original "naive realist" mistake of positing R that corresponds to MR, the "multiverse" mistake of positing R that corresponds to MRx..y, etc. When I say to a naive realist that treating R as something that exists outside of a model is just an error, for example, the naive realist might misunderstand me as trying to say something about the multiverse and the relationships between things that "exist in the world" (outside of a model) and "exist in possible worlds" (outside of a model), which in fact has nothing at all to do with my point, which is that the whole idea of existence outside of a model is confused in the first place.
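
This account can be sketched in a few lines of Python (the "experiences" and candidate models below are hypothetical stand-ins): models are scored purely on how well they anticipate experience, and "the real world" is just the label attached to whichever model currently scores best.

```python
# Hypothetical 'experiences': (input, observed outcome) pairs.
experiences = [(t, 2.0 * t + 1.0) for t in range(10)]

# Candidate models map an input to an anticipated outcome.
models = {
    "MR1": lambda t: 2.0 * t + 1.0,   # anticipates experience well
    "M2":  lambda t: 3.0 * t,         # anticipates it poorly
}

def inaccuracy(model):
    """Mean squared error between anticipated and actual experience (lower is better)."""
    return sum((model(t) - obs) ** 2 for t, obs in experiences) / len(experiences)

# 'The real world', on this account, is just the currently most accurate model;
# a better model found tomorrow would simply inherit the label.
real_world = min(models, key=lambda name: inaccuracy(models[name]))
print(real_world, {name: inaccuracy(m) for name, m in models.items()})
```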

Have I understood your position?

comment by shminux · 2013-04-12T17:21:38.568Z · score: 1 (3 votes) · LW · GW

As was the case once or twice before, you have explained what I meant better than I did in my earlier posts. Maybe you should teach your steelmanning skills, or make a post out of it.

The reification error you describe is indeed one of the fallacies a realist is prone to. Pretty benign initially, it eventually grows cancerously into the multitude of MRs whose accuracy is undefined, either by definition (QM interpretations) or through untestable ontologies, like "everything imaginable exists". Promoting some M->R, or a certain set {MP}->R, then seems forever meaningful once you have fallen for it.

The unaddressed issue is the means of actualizing a specific model (that is, making it the most accurate). After all, if all you manipulate is models, how do you affect your future experiences?

comment by TheOtherDave · 2013-04-12T18:57:32.436Z · score: 3 (3 votes) · LW · GW

Maybe you should teach your steelmanning skills, or make a post out of it.

I've thought about this, but on consideration the only part of it I understand explicitly enough to "teach" is Miller's Law (the first one), and there's really not much more to say about it than quoting it and then waiting for people to object. Which most people do, because approaching conversations that way seems to defeat the whole purpose of conversation for most people (convincing other people they're wrong). My goal in discussions is instead usually to confirm that I understand what they believe in the first place. (Often, once I achieve that, I become convinced that they're wrong... but rarely do I feel it useful to tell them so.)

The rest of it is just skill at articulating positions with care and precision, and exerting the effort to do so. A lot of people around here are already very good at that, some of them better than me.

The unaddressed issue is the means of actualizing a specific model (that is, making it the most accurate). After all, if all you manipulate is models, how do you affect your future experiences?

Yes. I'm not sure what to say about that on your account, and that was in fact where I was going to go next.

Actually, more generally, I'm not sure what distinguishes experiences we have from those we don't have in the first place, on your account, even leaving aside how one can alter future experiences.

After all, we've said that models map experiences to anticipated experiences, and that models can be compared based on how reliably they do that, so that suggests that the experiences themselves aren't properties of the individual models (though they can of course be represented by properties of models). But if they aren't properties of models, well, what are they? On your account, it seems to follow that experiences don't exist at all, and there simply is no distinction between experiences we have and those we don't have.

I assume you reject that conclusion, but I'm not sure how. On a naive realist's view, rejecting this is easy: reality constrains experiences, and if I want to affect future experiences I affect reality. Accurate models are useful for affecting future experiences in specific intentional ways, but not necessary for affecting reality more generally... indeed, systems incapable of constructing models at all are still capable of affecting reality. (For example, a supernova can destroy a planet.)

(On a multiverse realist's view, this is significantly more complicated, but it seems to ultimately boil down to something similar, where reality constrains experiences and if I want to affect the measure of future experiences, I affect reality.)

Another unaddressed issue derives from your wording: "how do you affect your future experiences?" I may well ask whether there's anything else I might prefer to affect other than my future experiences (for example, the contents of models, or the future experiences of other agents). But I suspect that's roughly the same problem for an instrumentalist as it is for a realist... that is, the arguments for and against solipsism, hedonism, etc. are roughly the same, just couched in slightly different forms.

comment by shminux · 2013-04-12T19:58:59.701Z · score: 0 (2 votes) · LW · GW

But if they aren't properties of models, well, what are they? On your account, it seems to follow that experiences don't exist at all, and there simply is no distinction between experiences we have and those we don't have.

Somewhere way upstream I said that I postulate experiences (I called them inputs), so they "exist" in this sense. We certainly don't experience "everything", so that's how you tell "between experiences we have and those we don't have". I did not postulate, however, that they have an invisible source called reality; we just discussed the pitfalls of assuming one. Having written this, I suspect that this is an uncharitable interpretation of your point, i.e. that you mean something else and I'm failing to Millerize it.

comment by TheOtherDave · 2013-04-12T21:02:42.911Z · score: 0 (2 votes) · LW · GW

OK.

So "existence" properly refers to a property of subsets of models (e.g., "my keyboard exists" asserts that M1 contain K), as discussed earlier, and "existence" also properly refer to a property of inputs (e.g., "my experience of my keyboard sitting on my desk exists" and "my experience of my keyboard dancing the Macarena doesn't exist" are both coherent, if perhaps puzzling, things to say), as discussed here.
Yes?

Which is not necessarily to say that "existence" refers to the same property of subsets of models and of inputs. It might, it might not, we haven't yet encountered grounds to say one way or the other.
Yes?

OK. So far, so good.

And, responding to your comment about solipsism elsewhere just to keep the discussion in one place:

Well, to a solipsist hers is the only mind that exists; to an instrumentalist, as we have agreed, the term "exist" does not have a useful meaning beyond measurability.

Well, I agree that when a realist solipsist says "Mine is the only mind that exists" they are using "exists" in a way that is meaningless to an instrumentalist.

That said, I don't see what stops an instrumentalist solipsist from saying "Mine is the only mind that exists" while using "exists" in the ways that instrumentalists understand that term to have meaning.

That said, I still don't quite understand how "exists" applies to minds on your account. You said here that "mind is also a model", which I understand to mean that minds exist as subsets of models, just like keyboards do.

But you also agreed that a model is a "mental construct"... which I understand to refer to a construct created/maintained by a mind.

The only way I can reconcile these two statements is to conclude either that some minds exist outside of a model (and therefore have a kind of "existence" that is potentially distinct from the existence of models and of inputs, which might be distinct from one another) or that some models aren't mental constructs.

My reasoning here is similar to how if you said "Red boxes are contained by blue boxes" and "Blue boxes are contained by red boxes" I would conclude that at least one of those statements had an implicit "some but not all" clause prepended to it... I don't see how "For all X, X is contained by a Y" and "For all Y, Y is contained by an X" can both be true.

Does that make sense?
If so, can you clarify which is the case?
If not, can you say more about why not?

comment by shminux · 2013-04-12T21:30:20.548Z · score: -1 (1 votes) · LW · GW

I don't see how "For all X, X is contained by a Y" and "For all Y, Y is contained by an X" can both be true [implicitly assuming that X is not the same as Y, I am guessing].

And what do you mean here by "true", in an instrumental sense? Do you mean the mathematical truth (i.e. a well-formed finite string, given some set of rules), or the measurable truth (i.e. a model giving accurate predictions)? If it's the latter, how would you test for it?

comment by TheOtherDave · 2013-04-12T22:50:16.809Z · score: 1 (1 votes) · LW · GW

Beats me.

Just to be clear, are you suggesting that on your account I have no grounds for treating "All red boxes are contained by blue boxes AND all blue boxes are contained by red boxes" differently from "All red boxes are contained by blue boxes AND some blue boxes are contained by red boxes" in the way I discussed?

If you are suggesting that, then I don't quite know how to proceed. Suggestions welcomed.

If you are not suggesting that, then perhaps it would help to clarify what grounds I have for treating those statements differently, which might more generally clarify how to address logical contradiction in an instrumentalist framework.

comment by TheOtherDave · 2013-04-12T19:18:00.401Z · score: 2 (2 votes) · LW · GW

Actually, thinking about this a little bit more, a "simpler" question might be whether it's meaningful on this account to talk about minds existing. I think the answer is again that it isn't, as I said about experiences above... models are aspects of a mind, and existence is an aspect of a subset of a model; to ask whether a mind exists is a category error.

If that's the case, the question arises of whether (and how, if so) we can distinguish among logically possible minds, other than by reference to our own.

So perhaps I was too facile when I said above that the arguments for and against solipsism are the same for a realist and an instrumentalist. A realist rejects or embraces solipsism based on their position on the existence and moral value of other minds, but an instrumentalist (I think?) rejects a priori the claim that other minds can meaningfully be said to exist or not exist, so presumably can't base anything on such (non)existence.

So I'm not sure what an instrumentalist's argument rejecting solipsism looks like.

comment by shminux · 2013-04-12T19:48:21.983Z · score: -2 (2 votes) · LW · GW

models are aspects of a mind, and existence is an aspect of a subset of a model; to ask whether a mind exists is a category error

Sort of, yes. Except mind is also a model.

So I'm not sure what an instrumentalist's argument rejecting solipsism looks like.

Well, to a solipsist hers is the only mind that exists; to an instrumentalist, as we have agreed, the term "exist" does not have a useful meaning beyond measurability. For example, the near-solipsist idea of a Boltzmann brain is not an issue for an instrumentalist, since it changes nothing in their ontology. Same deal with dreams, hallucinations and simulation.

comment by Bugmaster · 2013-04-12T18:59:18.307Z · score: 1 (1 votes) · LW · GW

In addition, I would really like to address the fact that current models can be used to predict future inputs in areas that are thus far completely unobserved. IIRC, this is how positrons were discovered, for example. If all we have are disconnected inputs, how do we explain the fact that even inputs we haven't yet thought of observing still correlate with our models? We would expect to see this if both sets of inputs were contingent upon some shared node higher up in the Bayesian network, but we wouldn't expect to see it (except by chance, which is infinitesimally unlikely) if the inputs were mutually independent.
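
(To make that concrete, here is a minimal toy sketch in Python; the probabilities and variable names are invented purely for illustration, not taken from anything above. Two observed input streams that share a hidden parent node come out correlated, while two streams with no shared parent do not, except by chance.)

```python
import random

random.seed(0)
N = 100_000

# Common-cause case: both observations depend on a hidden parent H.
def sample_common_cause():
    h = random.random() < 0.5                   # hidden shared node
    a = random.random() < (0.9 if h else 0.1)   # observation A depends on H
    b = random.random() < (0.9 if h else 0.1)   # observation B depends on H
    return a, b

# Independent case: A and B are sampled with no shared parent.
def sample_independent():
    a = random.random() < 0.5
    b = random.random() < 0.5
    return a, b

def correlation(samples):
    """Crude measure of dependence: P(A and B) - P(A) * P(B)."""
    n = len(samples)
    pa = sum(a for a, _ in samples) / n
    pb = sum(b for _, b in samples) / n
    pab = sum(a and b for a, b in samples) / n
    return pab - pa * pb

common = [sample_common_cause() for _ in range(N)]
indep = [sample_independent() for _ in range(N)]

print("shared hidden cause:", round(correlation(common), 3))   # clearly nonzero (~0.16)
print("no shared cause:   ", round(correlation(indep), 3))     # approximately zero
```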

comment by TheOtherDave · 2013-04-12T19:24:16.676Z · score: 1 (1 votes) · LW · GW

FWIW, my understanding of shminux's account does not assert that "all we have are disconnected inputs," as inputs might well be connected.

That said, it doesn't seem to have anything to say about how inputs can be connected, or indeed about how inputs arise at all, or about what they are inputs into. I'm still trying to wrap my brain around that part.

ETA: oops. I see shminux already replied to this. But my reply is subtly different, so I choose to leave it up.

comment by PrawnOfFate · 2013-04-12T19:55:13.215Z · score: 0 (2 votes) · LW · GW

I don't see how someone could admit that their inputs are connected in the sense of being caused by a common source that orders them, without implicitly admitting to a real external world.

comment by TheOtherDave · 2013-04-12T19:59:50.965Z · score: 1 (1 votes) · LW · GW

Nor do I.

But I acknowledge that saying inputs are connected in the sense that they reliably recur in particular patterns, and saying that inputs are connected in the sense of being caused by a common source that orders them, are two distinct claims, and one might accept that the former is true (based on observation) without necessarily accepting that the latter is true.

I don't have a clear sense of what such a one might then say about how inputs come to reliably recur in particular patterns in the first place, but often when I lack a clear sense of how X might come to be in the absence of Y, it's useful to ask "How, then, does X come to be?" rather than to insist that Y must be present.

comment by PrawnOfFate · 2013-04-12T20:13:55.506Z · score: 1 (1 votes) · LW · GW

One can of course only say that inputs have occurred in patterns up till now. Realists can explain why they would continue to do so, on the basis of the Common Source meta-model; anti-realists cannot.

comment by TheOtherDave · 2013-04-12T20:22:14.830Z · score: 2 (2 votes) · LW · GW

At the risk of repeating myself: I agree that I don't currently understand how an instrumentalist could conceivably explain how inputs come to reliably recur in particular patterns. You seem content to conclude thereby that they cannot explain such a thing, which may be true. I am not sufficiently confident in the significance of my lack of understanding to conclude that just yet.

comment by PrawnOfFate · 2013-04-12T19:11:24.819Z · score: 1 (1 votes) · LW · GW

I.e., realism explains how you can predict at all.

comment by shminux · 2013-04-12T19:18:23.241Z · score: 0 (2 votes) · LW · GW

This seems to me to be the question of origin "where do the inputs come from?" in yet another disguise. The meta-model is that it is possible to make accurate models, without specifying the general mechanism (e.g. "external reality") responsible for it. I think this is close to subjective Bayesianism, though I'm not 100% sure.

comment by Bugmaster · 2013-04-12T20:30:12.967Z · score: 1 (1 votes) · LW · GW

The meta-model is that it is possible to make accurate models, without specifying the general mechanism (e.g. "external reality") responsible for it.

I think it's possible to do so without specifying the mechanism, but that's not the same thing as saying that no mechanism at all exists. If you are saying that, then you need to explain why all these inputs are correlated with each other, and why our models can (on occasion) correctly predict inputs that have not been observed yet.

Let me set up an analogy. Let's say you acquire a magically impenetrable box. The box has 10 lights on it, and a big dial-type switch with 10 positions. When you set the switch to position 1, the first light turns on, and the rest of them turn off. When you set it to position 2, the second light turns on, and the rest turn off. When you set it to position 3, the third light turns on, and the rest turn off. These are the only settings you've tried so far.

Does it make sense to ask the question, "what will happen when I set the switch to positions 4..10"? If so, can you make a reasonably confident prediction as to what will happen? What would your prediction be?
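
(To make my own expectation explicit, here is a toy Bayesian comparison of two made-up hypotheses about the box, sketched in Python; the hypotheses, names, and the 50/50 prior are mine, chosen only for illustration.)

```python
# Two toy hypotheses about the box:
#   H1: setting the dial to position n always lights lamp n.
#   H2: each setting lights one of the 10 lamps uniformly at random.
prior = {"H1": 0.5, "H2": 0.5}            # arbitrary 50/50 prior for illustration

# Observations so far: positions 1, 2, 3 each lit the matching lamp.
observations = [(1, 1), (2, 2), (3, 3)]   # (dial position, lamp that lit)

def likelihood(hypothesis, position, lamp):
    if hypothesis == "H1":
        return 1.0 if lamp == position else 0.0
    return 1.0 / 10.0                     # H2: any lamp equally likely

posterior = dict(prior)
for position, lamp in observations:
    for h in posterior:
        posterior[h] *= likelihood(h, position, lamp)
total = sum(posterior.values())
posterior = {h: p / total for h, p in posterior.items()}

print(posterior)                          # H1 ~ 0.999, H2 ~ 0.001

# Prediction for the untried setting 4: weight each hypothesis by its posterior.
p_lamp4 = posterior["H1"] * 1.0 + posterior["H2"] * (1.0 / 10.0)
print("P(lamp 4 lights when dial is set to 4) =", round(p_lamp4, 4))
```

On that toy calculation the lawful hypothesis dominates after only three observations, which is roughly what my intuition says about the box.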

comment by PrawnOfFate · 2013-04-13T13:12:27.107Z · score: 0 (0 votes) · LW · GW

The meta-model is that it is possible to make accurate models, without specifying the general mechanism (e.g. "external reality") responsible for it.

In the sense that it is always impossible to leave something just unexplained. But the posit of an external reality of some sort is not explanatorily idle, and not, therefore, ruled out by Occam's razor. The posit of an external reality of some sort (it doesn't need to be specific) explains, at the meta-level, the process of model-formulation, prediction, accuracy, etc.

comment by MugaSofer · 2013-04-13T13:58:32.333Z · score: -1 (1 votes) · LW · GW

In the sense that it is always possible to leave something just unexplained.

Fixed that for you.

But the posit of an external reality of some sort is not explanatorily idle, and not, therefore, ruled out by Occam's razor.

I suppose shminux would claim that explanatory or not, it complicates the model and thus makes it more costly, computationally speaking.

comment by PrawnOfFate · 2013-04-13T14:13:05.493Z · score: 0 (0 votes) · LW · GW

I suppose shiminux would claim that explanatory or not, it complicates the model and thus makes it more costly, computationally speaking.

But that's a terrible argument. If you can't justify a posit by the explanatory work it does, then the optimum number of posits to make is zero.

comment by MugaSofer · 2013-04-13T14:30:18.959Z · score: -1 (1 votes) · LW · GW

Which is, in fact, the number of posits shminux advocates making, is it not? Adapt your models to be more accurate, sure, but don't expect that to mean anything more than the model working.

Except I think he's claimed to value things like "the most accurate model not containing slaves" (say), which implies there's something special about the correct model beyond mere accuracy.

comment by PrawnOfFate · 2013-04-13T14:46:18.260Z · score: 0 (0 votes) · LW · GW

If it's really better or more "true" on some level

Shminux seems to be positing inputs and models at the least.

comment by MugaSofer · 2013-04-13T15:04:50.092Z · score: -1 (1 votes) · LW · GW

If it's really better or more "true" on some level

I think you quoted the wrong thing there, BTW.

comment by MugaSofer · 2013-04-13T15:01:58.535Z · score: -1 (1 votes) · LW · GW

I suppose they are positing inputs, but they're arguably not positing models as such - merely using them. Or at any rate, that's how I'd ironman their position.

comment by PrawnOfFate · 2013-04-12T20:17:44.221Z · score: 0 (2 votes) · LW · GW

The reification error you describe is indeed one of the fallacies a realist is prone to

And inverted stupidity is..?

comment by MugaSofer · 2013-04-12T20:50:10.269Z · score: 0 (2 votes) · LW · GW

If I understand both your and shminux's comments, this might express the same thing in different terms:

  • We have experiences ("inputs".)
  • We wish to optimize these inputs according to whatever goal structure.
  • In order to do this, we need to construct models to predict how our actions affect future inputs, based on patterns in how inputs have behaved in the past.
  • Some of these models are more accurate than others. We might call accurate models "real".
  • However, the term "real" holds no special ontological value, and they might later prove inaccurate or be replaced by better models.

Thus, we have a perfectly functioning agent with no conception (or need for) a territory - there is only the map and the inputs. Technically, you could say the inputs are the territory, but the metaphor isn't very useful for such an agent.
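
(For concreteness, here is a minimal sketch of such an agent in Python. Everything in it - the toy models, the input stream, the scoring rule - is invented by me to illustrate the loop "rank models by predictive accuracy, call the best one 'real'"; it isn't anyone's actual proposal.)

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Model:
    name: str
    predict: Callable[[List[int]], int]   # maps past inputs to a predicted next input
    score: float = 0.0                    # running count of correct predictions

# Two toy models of a stream of inputs.
models = [
    Model("constant", lambda history: history[-1]),          # next input repeats the last
    Model("alternating", lambda history: 1 - history[-1]),   # next input flips the last
]

inputs = [0, 1, 0, 1, 0, 1, 0, 1]   # the agent's experiences; nothing more is assumed

history = [inputs[0]]
for observed in inputs[1:]:
    for m in models:
        if m.predict(history) == observed:
            m.score += 1
    history.append(observed)

best = max(models, key=lambda m: m.score)
print(f"most accurate model (call it 'real' if you like): {best.name}")
print({m.name: m.score for m in models})
```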

comment by shminux · 2013-04-12T20:55:32.354Z · score: 0 (0 votes) · LW · GW

Huh, looks like we are, while not in agreement, at least speaking the same language. Not sure how Dave managed to accomplish this particular near-magical feat.

comment by TheOtherDave · 2013-04-12T21:22:49.347Z · score: 1 (1 votes) · LW · GW

As before, I mostly attribute it to the usefulness of trying to understand what other people are saying.

I find it's much more difficult to express my own positions in ways that are easily understood, though. It's harder to figure out what is salient and where the vastest inferential gulfs are.

You might find it correspondingly useful to try and articulate the realist position as though you were trying to explain it to a fellow instrumentalist who had no experience with realists.

comment by shminux · 2013-04-12T21:42:08.516Z · score: 1 (1 votes) · LW · GW

You might find it correspondingly useful to try and articulate the realist position as though you were trying to explain it to a fellow instrumentalist who had no experience with realists.

I actually tried this a few times, even started a post draft titled "explain realism to a baby AI". In fact, I keep fighting my own realist intuition every time I don the instrumentalist hat. But maybe I am not doing it well enough.

comment by TheOtherDave · 2013-04-12T22:00:32.973Z · score: 1 (1 votes) · LW · GW

Ah. Yeah, if your intuitions are realist, I expect it suffers from the same problem as expressing my own positions. It may be a useful exercise in making your realist intuitions explicit, though.

comment by shminux · 2013-04-12T22:04:40.500Z · score: 0 (0 votes) · LW · GW

It may be a useful exercise in making your realist intuitions explicit, though.

You are right. I will give it a go. Just because it's obvious doesn't mean it should not be explicit.

comment by MugaSofer · 2013-04-12T21:27:38.555Z · score: -1 (1 votes) · LW · GW

Maybe we should organize a discussion where everyone has to take positions other than their own? If this really helps clarity (and I think it does) it could end up producing insights much more difficult (if not actually impossible) to reach with normal discussion.

(Plus it would be good practice at the Ideological Turing Test, generalized empathy skills, avoiding the antipattern of demonizing the other side, and avoiding steelmanning arguments into forms that don't threaten your own arguments (since they would be threatening the other side's arguments, as it were).)

comment by shminux · 2013-04-12T21:39:42.312Z · score: 1 (1 votes) · LW · GW

Maybe we should organize a discussion where everyone has to take positions other than their own?

It seems to me to be one of the basic exercises in rationality, also known as "Devil's advocate". However, Eliezer dislikes it for some reason, probably because he thinks that it's too easy to do poorly and then dismiss with a metaphorical self-congratulatory pat on one's own back. Not sure how much of this is taught or practiced at CFAR camps.

comment by MugaSofer · 2013-04-12T21:52:30.230Z · score: -1 (3 votes) · LW · GW

Yup. In my experience, though, Devil's Advocates are usually pitted against people genuinely arguing their cause, not other devil's advocates.

However, Eliezer dislikes it for some reason, probably because he thinks that it's too easy to do poorly and then dismiss with a metaphorical self-congratulatory pat on one's own back.

Yeah, I remember being surprised by that reading the Sequences. He seemed to be describing acting as your own devil's advocate, though, IIRC.

comment by TheOtherDave · 2013-04-12T22:52:23.207Z · score: 0 (0 votes) · LW · GW

Well, if any nonrealists want to argue the realist position in response to my articulation of the instrumentalist position, they are certainly welcome to do so, and I can try to continue defending it... though I'm not sure how good a job of it I'll do.

comment by MugaSofer · 2013-04-12T23:17:03.390Z · score: -1 (1 votes) · LW · GW

I was actually thinking of random topics, perhaps ones that are better understood by LW regulars, at least at first. Still ...

if any nonrealists want to argue the realist position in response to my articulation of the instrumentalist position, they are certainly welcome to do so

Wait, there are nonrealists other than shminux here?

comment by TheOtherDave · 2013-04-12T23:26:36.505Z · score: 0 (0 votes) · LW · GW

Beats me.

comment by MugaSofer · 2013-04-12T21:19:10.767Z · score: -1 (1 votes) · LW · GW

Actually, that's just the model I was already using. I noticed it was shorter than Dave's, so I figured it might be useful.

comment by itaibn0 · 2013-04-13T15:00:16.062Z · score: 2 (2 votes) · LW · GW

I suggest we move the discussion to a top-level discussion thread. The comment tree here is huge and hard to navigate.

comment by MugaSofer · 2013-04-13T15:11:36.821Z · score: -1 (1 votes) · LW · GW

If shminux could write an actual post on his beliefs, that might help a great deal, actually.

comment by shminux · 2013-04-13T15:21:32.174Z · score: 1 (1 votes) · LW · GW

I think I got a cumulative total of some 100 downvotes on this thread, so somehow I don't believe that a top-level post would be welcome. However, if TheOtherDave were to write one as a description of an interesting ontology he does not subscribe to, this would probably go over much better. I doubt he would be interested, though.

comment by itaibn0 · 2013-04-13T15:51:46.852Z · score: 0 (2 votes) · LW · GW

As it happens, I agree with your position. I was actually thinking of making a post that pinpoints all the important comments here without taking a position, while asking the discussion to continue there. However, making an argumentative post is also possible, although I might not be willing to expend the effort.

comment by TheOtherDave · 2013-04-13T16:30:02.958Z · score: 1 (1 votes) · LW · GW

Cool.
If you are motivated at some point to articulate an anti-realist account of how non-accidental correlations between inputs come to arise (in whatever format you see fit), I'd appreciate that.

comment by itaibn0 · 2013-04-14T12:16:40.150Z · score: 2 (2 votes) · LW · GW

As I understand it, the word "how" is used to demand a model for an event. Since I already have models for the correlations of my inputs, I don't feel the need for further explanation. More concretely, should you ask "How does closing your eyes lead to a blackout of your vision?" I would answer "After I close my eyes, my eyelids block all of the light from getting into my eye.", and I consider this answer satisfying. Just because I don't believe in an ontologically fundamental reality doesn't mean I don't believe in eyes and eyelids and light.

comment by TheOtherDave · 2013-04-14T19:06:10.921Z · score: 0 (0 votes) · LW · GW

OK. So, say I have two models, M1 and M2.

In M1, vision depends on light, which is blocked by eyelids. Therefore in M1, we predict that closing my eyes leads to a blackout of vision. In M2, vision depends on something else, which is not blocked by eyelids. Therefore in M2, we predict that closing my eyes does not lead to a blackout of vision.

At some later time, an event occurs in M1: specifically, I close my eyelids. At the same time, I have a blackout of vision. This increases my confidence in the predictive power of M1.

So far, so good.

At the same time, an identical event-pair occurs in M2: I close my eyes and my vision blacks out. This decreases my confidence in the predictive power of M2.
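
(In case it helps to be concrete: here is a minimal sketch of that confidence shift, assuming it amounts to ordinary Bayesian updating over the two models; the numbers are invented for illustration.)

```python
# Two models of what happens when I close my eyelids:
#   M1: vision depends on light, which eyelids block    -> predicts blackout
#   M2: vision depends on something eyelids don't block -> predicts no blackout
prior = {"M1": 0.5, "M2": 0.5}             # arbitrary starting confidence

# Likelihood each model assigns to the observed event "I closed my eyes
# and my vision blacked out". The exact numbers are invented.
likelihood = {"M1": 0.99, "M2": 0.05}

posterior = {m: prior[m] * likelihood[m] for m in prior}
total = sum(posterior.values())
posterior = {m: p / total for m, p in posterior.items()}

print(posterior)   # confidence in M1 rises (~0.95), confidence in M2 falls (~0.05)
```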

If I've understood you correctly, both the realist and the instrumentalist account of all of the above is "there are two models, M1 and M2, the same events occur in both, and as a consequence of those events we decide M1 is more accurate than M2."

The realist account goes on to say "the reason the same events occur in both models is because they are both fed by the same set of externally realized events, which exist outside of either model." The instrumentalist account, IIUC, says "the reason the same events occur in both models is not worth discussing; they just do."

Is that right?

comment by MugaSofer · 2013-04-13T16:21:30.806Z · score: 0 (2 votes) · LW · GW

That's still possible, for convenience purposes, even if shminux is unwilling to describe their beliefs - your beliefs too, apparently; I think a lot of people will have some questions to ask you now - in a top-level post.

comment by MugaSofer · 2013-04-13T15:27:07.243Z · score: 0 (2 votes) · LW · GW

Ooh, excellent point. I'd do it myself, but unfortunately my reason for suggesting it is that I want to understand your position better - my puny argument would be torn to shreds, I have too many holes in my understanding :(

comment by PrawnOfFate · 2013-04-12T19:05:46.213Z · score: 1 (1 votes) · LW · GW

So, what are those possible worlds but models?

The actual world is also a possible world. Non-actual possible worlds are only accessible as models. Realists believe they can bring the actual world into line with desired models, to some extent.

And isn't the "real world" just the most accurate model?

Not for realists.

Properly modeling your actions lets you affect the preferred "world" model's accuracy, and such. The remaining issue is whether the definition of "good" or "preferred" depends on realist vs instrumentalist outlook, and I don't see how. Maybe you can clarify.

For realists, wireheading isn't a good aim. For anti-realists, it is the only aim.

comment by TheOtherDave · 2013-04-12T19:39:36.312Z · score: 1 (1 votes) · LW · GW

For realists, wireheading isn't a good aim. For anti-realists, it is the only aim.

Realism doesn't preclude ethical frameworks that endorse wireheading.

I'm less clear about the second part, though.

Rejecting (sufficiently well implemented) wireheading requires valuing things other than one's own experience. I'm not yet clear on how one goes about valuing things other than one's own experience in an instrumentalist framework, but then again I'm not sure I could explain to someone who didn't already understand it how I go about valuing things other than my own experience in a realist framework, either.

comment by JGWeissman · 2013-04-12T21:09:20.110Z · score: 1 (1 votes) · LW · GW

but then again I'm not sure I could explain to someone who didn't already understand it how I go about valuing things other than my own experience in a realist framework, either.

See The Domain of Your Utility Function.

comment by PrawnOfFate · 2013-04-12T20:02:39.475Z · score: 1 (3 votes) · LW · GW

Realism doesn't preclude ethical frameworks that endorse wireheading

No, but they are a minority interest.

I'm not yet clear on how one goes about valuing things other than one's own experience in an instrumentalist framework, but then again I'm not sure I could explain to someone who didn't already understand it how I go about valuing things other than my own experience in a realist framework, either.

If someone accepts that reality exists, you have a head start. Why do anti-realists care about accurate prediction? They don't think predictive models represent an external reality, and they don't think accurate models can be used as a basis to change anything external. Either prediction is an end in itself, or it's for improving inputs.

comment by TheOtherDave · 2013-04-12T20:24:46.445Z · score: 1 (1 votes) · LW · GW

they don't think accurate models can be used as a basis to change anything external. Either prediction is an end in itself, or it's for improving inputs.

My understanding of shminux's position is that accurate models can be used, somehow, to improve inputs.

I don't yet understand how that is even in principle possible on his model, though I hope to improve my understanding.

comment by shminux · 2013-04-12T20:22:43.904Z · score: -2 (4 votes) · LW · GW

Your last statement shows that you have much to learn from TheOtherDave about the principle of charity. Specifically, don't assume the other person is stupider than you are without a valid reason. So, if you come up with a trivial objection to their point, consider that they might have come across it before and addressed it in some way. They might still be wrong, but likely not in the obvious ways.

comment by PrawnOfFate · 2013-04-12T20:37:03.380Z · score: 2 (2 votes) · LW · GW

So where did you address it?

comment by MugaSofer · 2013-04-12T20:56:31.309Z · score: -2 (4 votes) · LW · GW

The trouble, of course, is that sometimes people really are wrong in "obvious" ways. Probably not high-status LWers, I guess.

comment by shminux · 2013-04-12T21:00:29.261Z · score: -1 (3 votes) · LW · GW

It happens, but this should not be the initial assumption. And I'm not sure who you mean by "high-status LWers".

comment by MugaSofer · 2013-04-12T21:59:20.396Z · score: -1 (3 votes) · LW · GW

Sorry, just realized I skipped over the first part of your comment.

It happens, but this should not be the initial assumption.

Doesn't that depend on the prior? I think most people who hold certain religious or political beliefs, for instance, do so for trivially wrong reasons*. Perhaps you mean it should not be the default assumption here?

*Most conspiracy theories, for example.

comment by MugaSofer · 2013-04-12T21:17:53.315Z · score: -2 (2 votes) · LW · GW

I was referring to you. PrawnOfFate should not have expected you to make such a mistake, given the evidence.

comment by CCC · 2013-04-12T18:37:56.900Z · score: 1 (1 votes) · LW · GW

So, what are those possible worlds but models?

If I answer 'yes' to this, then I am confusing the map with the territory, surely? Yes, there may very well be a possible world that's a perfect match for a given model, but how would I tell it apart from all the near-misses?

The "real world" is a good deal more accurate than the most accurate model of it that we have of it.

comment by Bugmaster · 2013-04-12T04:37:18.098Z · score: 1 (1 votes) · LW · GW

Well, I'll give it another go, despite someone diligently downvoting all my related comments.

It's not me, FWIW; I find the discussion interesting.

That said, I'm not sure what methodology you use to determine which actions to take, given your statement that the "real world" is "just the most accurate model". If all you cared about was the accuracy of your model, would it not be easier to avoid taking any physical actions, and simply change your model on the fly as it suits you? This way, you could always make your model fit what you observe. Yes, you'd be grossly overfitting the data, but is that even a problem?
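
(By "overfitting" I mean something like the following toy sketch, where the names and the input stream are invented for illustration: a model adjusted after the fact to match whatever has been observed scores perfectly on past inputs, while saying nothing in particular about future ones.)

```python
import random

random.seed(1)
past = [i % 2 for i in range(20)]          # inputs seen so far: 0, 1, 0, 1, ...
future = [i % 2 for i in range(20, 30)]    # inputs not yet seen

# "Model changed on the fly to fit whatever was observed": a pure lookup table.
memorizer = dict(enumerate(past))

# A structured model that claims the inputs alternate.
def alternating(t):
    return t % 2

def score(predict, ts, data):
    return sum(predict(t) == x for t, x in zip(ts, data)) / len(data)

# The memorizer is perfect on the past, but has to guess about the future.
print("memorizer, past:    ", score(lambda t: memorizer[t], range(20), past))       # 1.0
print("memorizer, future:  ", score(lambda t: memorizer.get(t, random.randint(0, 1)),
                                    range(20, 30), future))                         # ~0.5
print("alternating, past:  ", score(alternating, range(20), past))                  # 1.0
print("alternating, future:", score(alternating, range(20, 30), future))            # 1.0
```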

comment by shminux · 2013-04-12T04:55:08.554Z · score: 1 (3 votes) · LW · GW

I didn't say it's all I care about. Given a choice of several models and an ability to make one of them more accurate than the rest, I would likely exercise this choice, depending on my preferences, the effort required and the odds of success, just like your garden variety realist would. As Eliezer used to emphasize, "it all adds up to normality".

comment by Bugmaster · 2013-04-12T05:11:15.058Z · score: 1 (1 votes) · LW · GW

Given a choice of several models and an ability to make one of them more accurate than the rest, I would likely exercise this choice...

Would you do so if picking another model required less effort? I'm not sure how you can justify doing that.

comment by shminux · 2013-04-12T05:28:33.668Z · score: 0 (4 votes) · LW · GW

I am guessing that you, TimS, and nyan_sandwich all think that my version of instrumentalism is incompatible with having preferences over possible worlds. I have trouble understanding where this twist is coming from.

comment by Bugmaster · 2013-04-12T07:02:23.302Z · score: 1 (1 votes) · LW · GW

It's not that I think that your version of instrumentalism is incompatible with preferences, it's more like I'm not sure I understand what the word "preferences" even means in your context. You say "possible worlds", but, as far as I can tell, you mean something like, "possible models that predict future inputs".

Firstly, I'm not even sure how you account for our actions affecting these inputs, especially given that you do not believe that various sets of inputs are connected to each other in any way; and without actions, preferences are not terribly relevant. Secondly, you said that a "preference" for you means something like, "a desire to make one model more accurate than the rest", but would it not be easier to simply instantiate a model that fits the inputs? Such a model would be 100% accurate, wouldn't it?

comment by PrawnOfFate · 2013-04-12T20:10:38.088Z · score: 0 (2 votes) · LW · GW

Your having a preference for worlds without, e.g., slavery can't possibly translate into something like "I want to change the world external to me so that it no longer contains slaves". I have trouble understanding what it would translate to. You could adopt models where things you don't like don't exist, but they wouldn't be accurate.

comment by shminux · 2013-04-12T20:28:53.158Z · score: 0 (2 votes) · LW · GW

Your having a preference for worlds without, e.g., slavery can't possibly translate into something like "I want to change the world external to me so that it no longer contains slaves".

No, but it translates to its equivalent:

I prefer models which describe a society without slavery to be accurate (i.e. confirmed in later testing).

comment by PrawnOfFate · 2013-04-12T20:32:47.012Z · score: 1 (1 votes) · LW · GW

I prefer models which describe a society without slavery to be accurate (i.e. confirmed in later testing).

And how do you arrange that?

comment by MugaSofer · 2013-04-12T22:04:40.724Z · score: 0 (4 votes) · LW · GW

I prefer models which describe a society without slavery to be accurate (i.e. confirmed in later testing).

So you're saying you have a preference over the map, as opposed to the territory (your experiences, in this case).

That sounds subject to some standard pitfalls, offhand, where you try to fool yourself into choosing the "no-slaves" map instead of trying to optimize, well, reality (such as the actual slaves) - perhaps with an experience machine, through simple self-deception, or maybe some sort of exploit involving Occam's Razor.

comment by shminux · 2013-04-12T23:03:59.815Z · score: 0 (2 votes) · LW · GW

I agree that self-deception is a "real" possibility. Then again, it is also a possibility for a realist. Or a dualist. In fact, confusing map and territory is one of the most common pitfalls, as you well know. Would it be more likely for an instrumentalist to become instrumenta-lost? I don't see why it would be the case. For example, from my point of view, you arbitrarily chose a comforting Christian map (is it an inverse of "some sort of exploit involving Occam's Razor"?) instead of a cold hard uncaring one, even though you seem to be preferring realism over instrumentalism.

comment by MugaSofer · 2013-04-12T23:30:44.853Z · score: -1 (1 votes) · LW · GW

Ah, no, sorry, I meant that those options would satisfy your stated preferences, not that they were pitfalls on the road to it. I'm suggesting that since you don't want to fall into those pitfalls, those aren't actually your preferences, whether because you've made a mistake or I have (please tell me if I have.)

comment by private_messaging · 2013-04-12T06:04:23.687Z · score: 0 (2 votes) · LW · GW

I propose a WW2 mechanical aiming computer as an example of a model. It was built from gears that can be easily and conveniently manufactured, and there's very little doubt that the universe does not use anything even remotely similar to produce the movement of the projectile through the air, even if we assume that such a question is meaningful.

A case can be made that physics is not that much different from a WW2 aiming computer (built out of the mathematics that is available and can be conveniently used). And with regards to MWI, a case can be made that it is similar to removing the only ratchet in the mechanical computer and proclaiming the rest of the gears the reality, because somehow "from the inside" it would allegedly still feel the same, even though the mechanical computer, without this ratchet, doesn't even work any more for predicting anything.

Of course, it is not clear how close physics is to a mechanical aiming computer in terms of how the internals can correspond to the real world.

comment by [deleted] · 2013-04-12T04:10:53.427Z · score: 1 (1 votes) · LW · GW

So, what are those possible worlds but models? And isn't the "real world" just the most accurate model? Properly modeling your actions lets you affect the preferred "world" model's accuracy, and such. The remaining issue is whether the definition of "good" or "preferred" depends on realist vs instrumentalist outlook, and I don't see how. Maybe you can clarify.

Interesting. So we prefer that some models or others be accurate, and take actions that we expect to make that happen, in our current bag of models.

Ok I think I get it. I was confused about what the referent of your preferences would be if you did not have your models referring to something. I see that you have made the accuracy of various models the referent of preferences. This seems reasonable enough.

I can see now that I'm confused about this stuff a bit more than I thought I was. Will have to think about it a bit more.

comment by MugaSofer · 2013-04-12T12:03:23.647Z · score: 1 (5 votes) · LW · GW

It works fine - as long as you only care about optimizing inputs, in which case I invite you to go play in the holodeck while the rest of us optimize the real world.

If you can't find a holodeck, I sure hope you don't accidentally sacrifice your life to save somebody or further some noble cause. After all, you won't be there to experience the resulting inputs, so what's the point?

comment by [deleted] · 2013-04-13T18:03:08.663Z · score: 3 (3 votes) · LW · GW

You are arguing with a strawman.

It's not a utility function over inputs, it's over the accuracy of models.

If I were a shminux-style rationalist, I would not choose to go to the holodeck because that does not actually make my current preferred models of the world more accurate. It makes the situation worse, actually, because in the me-in-holodeck model, I get misled and can't affect the stuff outside the holodeck.

Just because someone frames things differently doesn't mean they have to make the obvious mistakes and start killing babies.

For example, I could do what you just did to "maximize expected utility over possible worlds" by choosing to modify my brain to have erroneously high expected utility. It's maximized now right? See the problem with this argument?

It all adds up to normality, which probably means we are confused and there is an even simpler underlying model of the situation.

comment by MugaSofer · 2013-04-13T19:10:52.472Z · score: 0 (2 votes) · LW · GW

You are arguing with a strawman.

You know, I'm actually not.

It's not a utility function over inputs, it's over the accuracy of models.

Affecting the accuracy of a specified model - a term defined as "how well it predicts future inputs" - is a subset of optimizing future inputs.

If I were a shminux-style rationalist, I would not choose to go to the holodeck because that does not actually make my current preferred models of the world more accurate. It makes the situation worse, actually, because in the me-in-holodeck model, I get misled and can't affect the stuff outside the holodeck.

You're still thinking like a realist. A holodeck doesn't prevent you from observing the real world - there is no "real world". It prevents you testing how well certain models predict experiences when you take the action "leave the holodeck", unless of course you leave the holodeck - it's an opportunity cost and nothing more, and a minor one at that, since information holds only instrumental value.

Just because someone frames things differently doesn't mean they have to make the obvious mistakes and start killing babies.

Pardon?

For example, I could do what you just did to "maximize expected utility over possible worlds" by choosing to modify my brain to have erroneously high expected utility. It's maximized now right? See the problem with this argument?

Except that I (think that I) get my utility over the world, not over my experiences. Same reason I don't win the lottery with quantum suicide.

It all adds up to normality

You know, not every belief adds up to normality - just the true ones. Imagine someone arguing you had misinterpreted happiness-maximization because "it all adds up to normality".

comment by shminux · 2013-04-12T04:27:32.210Z · score: 1 (3 votes) · LW · GW

I see that you have made the accuracy of various models the referent of preferences.

I like how you put it into some fancy language, and now it sounds almost profound.

I can see now that I'm confused about this stuff a bit more than I thought I was. Will have to think about it a bit more.

It is entirely possible that I'm talking out of my ass here, and you will find a killer argument against this approach.

comment by [deleted] · 2013-04-12T05:08:21.608Z · score: 1 (1 votes) · LW · GW

It is entirely possible that I'm talking out of my ass here, and you will find a killer argument against this approach.

Likewise the converse. I reckon both will get killed by a proper approach.

comment by TimS · 2013-04-12T01:58:45.158Z · score: 1 (1 votes) · LW · GW

Only marginally. My feeling is that this apparent incommensurability is due to people not realizing that their disagreements are due to some deeply buried implicit assumptions and the lack of desire to find these assumptions and discuss them.

That's the standard physical realist response to Kuhn and Feyerabend. I find it confusing to hear it from you, because you certainly are not a standard physical realist.

In short, I think you are being a little too a la carte with your selection from various parts of philosophy of science.

comment by Bugmaster · 2013-04-12T02:15:37.430Z · score: 1 (1 votes) · LW · GW

In short, I think you are being a little too a la carte with your selection from various parts of philosophy of science.

Is there something wrong with doing that? As long as the end result is internally consistent, I don't see the problem.

comment by TimS · 2013-04-12T02:38:50.894Z · score: 1 (1 votes) · LW · GW

Sure, my criticism has an implied "And I'm concerned you've managed to endorse A and ~A by accident."

comment by Bugmaster · 2013-04-12T03:24:57.969Z · score: 0 (0 votes) · LW · GW

Right, that's fair, but it's not really apparent from your reply which is A and which is ~A. I understand that physical realists say the same things as shminux, who professes not to be a physical realist -- but then, I bet physical realists say that water is wet, too...

comment by TimS · 2013-04-12T03:45:37.855Z · score: 0 (0 votes) · LW · GW

I don't know that shminux has inadvertently endorsed A and ~A. I'm suspicious that this has occurred because he resists the standard physical realist definition of territory / reality, but responds to a quasi-anti-realist position with a physical realist answer that I suspect depends on the rejected definition of reality.

If I knew precisely where the contradiction was, I'd point it out explicitly. But I don't, so I can't.

comment by Bugmaster · 2013-04-12T03:50:12.966Z · score: 0 (0 votes) · LW · GW

Yeah, fair enough, I don't think I understand his position myself at this point...

comment by MugaSofer · 2013-04-11T22:16:21.889Z · score: -1 (1 votes) · LW · GW

Of course there is something external to our minds, which we all experience.

Most certainly. I call these experiences inputs.

Sorry if this is a stupid question, but what do you call the thingy that makes these inputs behave regularly?

comment by wedrifid · 2013-04-08T16:25:04.156Z · score: 0 (2 votes) · LW · GW

I think shminux's response is something like:

"Given a model that predicts accurately, what would you do differently if the objects described in the model do or don't exist at some ontological level? If there is no difference, what are we worrying about?"

If I recall correctly he abandons that particular rejection when he gets an actual answer to the first question. Specifically, he argues against belief in the implied invisible when said belief leads to making actual decisions that will result in outcomes that he will not personally be able to verify (e.g. when considering Relativity and the accelerated expansion of the universe).

comment by TimS · 2013-04-08T16:59:59.473Z · score: 2 (2 votes) · LW · GW

I think you are conflating two related, but distinct questions. Physical realism faces challenges from:

(1) the sociological analysis represented by works like Structure of Scientific Revolution

(2) the ontological status of objects that, in principle, could never be observed (directly or indirectly)

I took shminux as trying to duck the first debate (by adopting physical pragmatism), but I think most answers to the first question do not necessarily imply particular answers to the second question.

comment by wedrifid · 2013-04-08T17:16:30.920Z · score: 0 (0 votes) · LW · GW

I think you are conflating two related, but distinct questions.

I am almost certain I am saying a different thing to what you think.

comment by MugaSofer · 2013-04-11T22:32:55.113Z · score: -1 (1 votes) · LW · GW

I can imagine using a model that contains elements that are merely convenient pretenses, and don't actually exist - like using simpler Newtonian models of gravity despite knowing GR is true (or at least more likely to be true than Newton.)

If some of these models featured things that I care about, it wouldn't matter, as long as I didn't think actual reality featured these things. For example, if an easy hack for predicting the movement of a simple robot was to imagine it being sentient (because I can easily calculate what humanlike minds would do using my own neural circuitry), I still wouldn't care if it was crushed, because the sentient being described by the model doesn't actually exist - the robot merely uses similar pathfinding.

Does that answer your question, TimS'-model-of-shminux?

comment by TimS · 2013-04-07T19:09:31.454Z · score: 1 (1 votes) · LW · GW

I don't understand the paperclipping reference, but MugaSofer is a hard-core moral realist (I think). Physical pragmatism (your position) is a reasonable stance in the physical realism / anti-realism debate, but I'm not sure what the parallel position is in the moral realism / anti-realism debate.

(Edit: And for some moral realists, the justification for that position is the "obvious" truth of physical realism and the non-intuitiveness of physical facts and moral facts having a different ontological status.)

In short, "physical prediction" is a coherent concept in a way that "moral prediction" does not seem to be. A sentence of the form "I predict retaliation if I wrong someone" is a psychological prediction, not a moral prediction. Defining what "wrong" means in that sentence is the core of the moral realism / anti-realism debate.

comment by shminux · 2013-04-08T02:57:10.791Z · score: 0 (4 votes) · LW · GW

In short, "physical prediction" is a coherent concept in a way that "moral prediction" does not seem to be.

I don't see it.

A sentence of the form "I predict retaliation if I wrong someone" is a psychological prediction, not a moral prediction. Defining what "wrong" means in that sentence is the core of the moral realism / anti-realism debate.

Do we really have to define "wrong" here? It seems more useful to say "certain actions of mine may cause this person to experience a violation of their innate sense of fairness", or something to that effect. Now we are doing cognitive science, not some vague philosophizing.

comment by TimS · 2013-04-08T14:11:46.728Z · score: 0 (0 votes) · LW · GW

Do we really have to define "wrong" here? It seems more useful to say "certain actions of mine may cause this person to experience a violation of their innate sense of fairness", or something to that effect.

At a minimum, we need an enforceable procedure for resolving disagreements between different people when each of their "innate senses of fairness" disagree. Negotiated settlement might be the gold-standard, but history shows this seldom has actually resolved major disputes.

Defining "wrong" helps because it provides a universal principled basis for others to intervene in the conflict. Alliance building also provides a basis, but is hardly universally principled (or fair, for most usages of "fair").

comment by shminux · 2013-04-08T15:03:18.104Z · score: -1 (1 votes) · LW · GW

Defining "wrong" helps because it provides a universal principled basis for others to intervene in the conflict.

Yes, it definitely helps to define "wrong" as a rough acceptable behavior boundary in a certain group. But promoting it from a convenient shortcut in your models into something bigger is hardly useful. Well, it is useful to you if you can convince others that your definition of "wrong" is the one true one and everyone else ought to abide by it or burn in hell. Again, we are out of philosophy and into psychology.

comment by TimS · 2013-04-08T15:40:26.371Z · score: 1 (1 votes) · LW · GW

I'm glad we agree that defining "wrong" is useful, but I'm still confused how you think we go about defining "wrong." One could assert:

Wrong is what society punishes.

But that doesn't tell us how society figures out what to punish, or whether there are constraints on society's classifications. Psychology doesn't seem to answer these questions - there once were societies that practiced human sacrifice or human slavery.

In common usage, we'd like to be able say those societies were doing wrong, and your usage seems inconsistent with using "wrong" in that way.

comment by shminux · 2013-04-08T16:34:23.202Z · score: 0 (4 votes) · LW · GW

In common usage, we'd like to be able say those societies were doing wrong, and your usage seems inconsistent with using "wrong" in that way.

No, they weren't. Your model of objective wrongness is not a good one, it fails a number of tests.

"Human sacrifice and human slavery" is wrong now in the Westernized society, because it fits under the agreed definition of wrong today. It was not wrong then. It might not be wrong again in the future, after some x-risk-type calamity.

The evolution of the agreed-upon concept of wrong is a fascinating subject in human psychology, sociology and whatever other natural science is relevant. I am guessing that more formerly acceptable behaviors get labeled as "wrong" as the overall standard of living rises and average suffering decreases. As someone mentioned before, torturing cats is no longer the good clean fun it used to be. But that's just a guess, I would defer to the expert in the area, hopefully there are some around.

Some time in the future a perfectly normal activity of the day will be labeled as "wrong". It might be eating animals, or eating plants, or having more than 1.0 children per person, or refusing sex when asked politely, or using anonymous nicks on a public forum, or any other activity we find perfectly innocuous.

Conversely, there were plenty of "wrong" behaviors which aren't wrong anymore, at least not in the modern West, like proclaiming that Jesus is not the Son of God, or doing witchcraft, or marrying a person of the same sex, or...

The definition of wrong as an agreed upon boundary of acceptable behavior matches observations. The way people come to such an agreement is a topic eminently worth studying, but it should not be confused with studying the concept of wrong as if it were some universal truth.

comment by TimS · 2013-04-08T16:54:16.332Z · score: 1 (1 votes) · LW · GW

Your position on moral realism has a respectable pedigree in moral philosophy, but I don't think it is parallel to your position on physical realism.


As I understand it, your response to the question "Are there electrons?" is something like:

This is a wrong question. Trying to find the answer doesn't resolve any actual decision you face.

By contrast, your response to "Is human sacrifice wrong?" is something like:

Not in the sense you mean, because "wrong" in that sense does not exist.


I don't think there are philosophical reasons why your positions on those two issues should be in parallel, but you seem to think that your positions are in parallel, and it does not look that way to me.

comment by Eugine_Nier · 2013-04-10T03:10:21.673Z · score: 2 (2 votes) · LW · GW

I don't think there are philosophical reasons why your positions on those two issues should be in parallel, but you seem to think that your positions are in parallel, and it does not look that way to me.

Without a notion of objective underlying reality, shminux had nothing to cash out any moral theory in.

comment by shminux · 2013-04-08T17:15:41.287Z · score: -1 (3 votes) · LW · GW

As I understand it, your response to the question "Are there electrons?" is something like:
This is a wrong question. Trying to find the answer doesn't resolve any actual decision you face.
By contrast, your response to "Is human sacrifice wrong?" is something like:
Not in the sense you mean, because "wrong" in that sense does not exist.

Not quite.

"Are there electrons?" "Yes, electron is an accurate model, though it it has its issues."

"Does light propagate in ether?" "Aether is not a good model, it fails a number of tests."

"is human sacrifice an unacceptable behavior in the US today?" "Yes, this model is quite accurate."

"Is 'wrong' independent of the group that defines it?" "No, this model fails a number of tests."

Seems pretty consistent to me, with all the parallels you want.

comment by TimS · 2013-04-09T01:20:09.959Z · score: 3 (3 votes) · LW · GW

this model fails a number of tests

You are not using the word "tests" consistently in your examples. For luminiferous aether, test means something like "makes accurate predictions." Substituting that into your answer to wrong yields:

No, this model fails to make accurate predictions.

Which I'm having trouble parsing as an answer to the question. If you don't mean for that substitution to be sensible, then your parallelism does not seem to hold together.

But in deference to your statement here, I am happy to drop this topic if you'd like me to. It is not my intent to badger you, and you don't have any obligation to continue a conversation you don't find enjoyable or productive.

comment by MugaSofer · 2013-04-08T20:16:22.269Z · score: 0 (2 votes) · LW · GW

"Is 'wrong' independent of the group that defines it?" "No, this model fails a number of tests."

It's worth noting that most people who make that claim are using a different definition of "wrong" to you.

comment by wedrifid · 2013-04-08T20:25:25.126Z · score: 1 (1 votes) · LW · GW

I suggest editing in additional line-breaks so that the quote is distinguished from your own contribution. (You need at least two 'enters' between the end of the quote and the start of your own words.)

comment by MugaSofer · 2013-04-09T12:57:15.012Z · score: 0 (2 votes) · LW · GW

Whoops, thanks.

comment by nshepperd · 2013-04-08T22:56:22.786Z · score: 0 (6 votes) · LW · GW

I expected that this discussion would not achieve anything.

Simply put, the mistake both of you are making was already addressed by the meta-ethics sequence. But for a non-LW reference, see Speakers Use Their Actual Language. "Wrong" does not refer to "whatever 'wrong' means in our language at the time". That would be circular. "Wrong" refers to some objective set of characteristics, that set being the same as those that we in reality disapprove of. Modulo logical uncertainty etc etc.

I expected this would not make sense to you since you can't cash out objective characteristics in terms of predictive black boxes.

comment by MugaSofer · 2013-04-09T13:18:35.142Z · score: 0 (2 votes) · LW · GW

I expected that this discussion would not achieve anything.

Congratulations on a successful prediction. Of course, if you had made it before this conversation commenced, you could have saved us all the effort; next time you know something would fail, speaking up would be helpful.

Simply put, the mistake both of you are making was already addressed by the meta-ethics sequence. But for a non-LW reference, see Speakers Use Their Actual Language. "Wrong" does not refer to "whatever 'wrong' means in our language at the time". That would be circular. "Wrong" refers to some objective set of characteristics, that set being the same as those that we in reality disapprove of. Modulo logical uncertainty etc etc.

I think shminux is claiming that this set of characteristics changes dynamically, and thus it is more useful to define "wrong" dynamically as well. I disagree, but then we already have a term for this ("unacceptable") so why repurpose "wrong"?

I expected this would not make sense to you since you can't cash out objective characteristics in terms of predictive black boxes.

Who does "you" refer to here? All participants in this discussion? Sminux only?

comment by TheOtherDave · 2013-04-09T15:13:52.594Z · score: 2 (4 votes) · LW · GW

we already have a term for this ("unacceptable") so why repurpose "wrong"?

Presumably shminux doesn't consider it a repurposing, but rather an articulation of the word's initial purpose.

next time you know something would fail, speaking up would be helpful.

Well, OK.

Using relative terms in absolute ways invites communication failure.

If I use "wrong" to denote a relationship between a particular act and a particular judge (as shminux does) but I only specify the act and leave the judge implicit (e.g., "murder is wrong"), I'm relying on my listener to have a shared model of the world in order for my meaning to get across. If I'm not comfortable relying on that, I do better to specify the judge I have in mind.

comment by MugaSofer · 2013-04-09T18:34:15.067Z · score: -1 (3 votes) · LW · GW

Presumably shminux doesn't consider it a repurposing, but rather an articulation of the word's initial purpose.

Is shminux a native English speaker? Because that's certainly not how the term is usually used. Ah well, he's tapped out anyway.

Well, OK.

Using relative terms in absolute ways invites communication failure.

If I use "wrong" to denote a relationship between a particular act and a particular judge (as shminux does) but I only specify the act and leave the judge implicit (e.g., "murder is wrong"), I'm relying on my listener to have a shared model of the world in order for my meaning to get across. If I'm not comfortable relying on that, I do better to specify the judge I have in mind.

Oh, I can see why it failed - they were using the same term in different ways, each insisting their meaning was "correct" - I just meant you could use this knowledge to help avoid this ahead of time.

comment by TheOtherDave · 2013-04-09T19:55:04.792Z · score: 3 (3 votes) · LW · GW

I just meant you could use this knowledge to help avoid this ahead of time.

I understand. I'm suggesting it in that context.

That is, I'm asserting now that "if I find myself in a conversation where such terms are being used and I have reason to believe the participants might not share implicit arguments, make the arguments explicit" is a good rule to follow in my next conversation.

comment by MugaSofer · 2013-04-09T21:53:37.278Z · score: 0 (2 votes) · LW · GW

Makes sense. Upvoted.

comment by nshepperd · 2013-04-10T02:43:43.313Z · score: 0 (0 votes) · LW · GW

Congratulations on a successful prediction. Of course, if you had made it before this conversation commenced, you could have saved us all the effort; next time you know something would fail, speaking up would be helpful.

Sorry. I guess I was feeling too cynical and discouraged at the time to think that such a thing would be helpful.

Who does "you" refer to here? All participants in this discussion? Sminux only?

In this case I meant to refer to only shminux, who calls himself an instrumentalist and does not like to talk about the territory (as opposed to AIXI-style predictive models).

comment by MugaSofer · 2013-04-10T14:40:43.888Z · score: -1 (1 votes) · LW · GW

Sorry. I guess I was feeling too cynical and discouraged at the time to think that such a thing would be helpful.

You might have been right, at that. My prior for success here was clearly far too high.

comment by MugaSofer · 2013-04-08T19:24:19.143Z · score: -1 (1 votes) · LW · GW

No, they weren't. Your model of objective wrongness is not a good one, it fails a number of tests.

"Human sacrifice and human slavery" is wrong now in the Westernized society, because it fits under the agreed definition of wrong today. It was not wrong then. It might not be wrong again in the future, after some x-risk-type calamity.

[...]

The definition of wrong as an agreed upon boundary of acceptable behavior matches observations. The way people come to such an agreement is a topic eminently worth studying, but it should not be confused with studying the concept of wrong as if it were some universal truth.

This concept of "wrong" is useful, but a) there is an existing term which people understand to mean what you describe - "acceptable" - and b) it does not serve the useful function people currently expect "wrong" to serve; that of describing our extrapolated desires - it is not prescriptive.

I would advise switching to the more common term, but if you must use it this way I would suggest warning people first, to prevent confusion.

comment by shminux · 2013-04-08T20:46:36.126Z · score: -1 (1 votes) · LW · GW

You or TimS are the ones who introduced the term "wrong" into the conversation, I'm simply interpreting it in a way that makes sense to me. Tapping out due to lack of progress.

comment by MugaSofer · 2013-04-08T22:30:54.572Z · score: -1 (1 votes) · LW · GW

You or TimS are the ones who introduced the term "wrong" into the conversation

That would be TimS, because he's the one discussing your views on moral realism with you.

I'm simply interpreting it in a way that makes sense to me.

And I'm simply warning you that using the term in a nonstandard way is predictably going to result in confusion, as it has in this case.

Tapping out due to lack of progress.

Well, that's your prerogative, obviously, but please don't tap out of your discussion with Tim on my account. And, um, if it's not on my account, you might want to say it to him, not me.

comment by nshepperd · 2013-04-08T04:25:05.772Z · score: 0 (0 votes) · LW · GW

Fairness is not about feelings of fairness.

comment by shminux · 2013-04-08T05:26:28.325Z · score: 0 (2 votes) · LW · GW

Feeling or not, it's a sense that exists in other primates, not just humans. You can certainly quantify the emotional reaction to real or perceived unfairness, which was my whole point: use cognitive science, not philosophy. And cognitive science is about building models and testing them, like any natural science.

comment by MugaSofer · 2013-04-07T20:33:57.606Z · score: -1 (1 votes) · LW · GW

Well, the trouble occurs when you start talking about the existence of things that, unlike electrons, you actually care about.

Say I value sentient life. If that life doesn't factor into my predictions, does it somehow not exist? Should I stop caring about it? (The same goes for paperclips, if you happen to value those.)

EDIT: I assume you consider the least computationally complex model "better at predicting certain future inputs"?

comment by shminux · 2013-04-08T02:51:47.419Z · score: 1 (5 votes) · LW · GW

Say I value sentient life. If that life doesn't factor into my predictions, does it somehow not exist? Should I stop caring about it?

You have it backwards. You also use the term "exist" in the way I don't. You don't have to worry about refining models predicting inputs you don't care about.

I assume you consider the least computationally complex model "better at predicting certain future inputs"?

If there is a luxury of choice of multiple models which give the same predictions, sure. Usually we are lucky if there is one good model.

comment by MugaSofer · 2013-04-08T21:24:46.101Z · score: -1 (3 votes) · LW · GW

You also use the term "exist" in the way I don't.

Well, I am trying to get you to clarify what you mean.

You don't have to worry about refining models predicting inputs you don't care about.

But as I said, I don't care about inputs, except instrumentally. I care about sentient minds (or paperclips.)

Usually we are lucky if there is one good model.

Ah ... no. Invisible pink unicorns and Russell's Teapots abound. For example, what if any object passing over the cosmological horizon disappeared? Or the universe was created last Thursday, but perfectly designed to appear billions of years old? These hypotheses don't do any worse at predicting; they just violate Occam's Razor.

comment by shminux · 2013-04-08T22:27:58.662Z · score: 0 (2 votes) · LW · GW

Well, I am trying to get you to clarify what you mean.

Believe me, I have tried many times in our discussions over the last several months. Unfortunately we seem to be speaking different languages which happen to use the same English syntax.

Invisible pink unicorns and Russell's Teapots abound.

Fine, I'll clarify. You can always complicate an existing model in a trivial way, which is what all your examples are doing. I was talking about models where one is not a trivial extension of the other with no new predictive power. That's just silly.

comment by MugaSofer · 2013-04-09T13:07:30.579Z · score: -1 (1 votes) · LW · GW

Fine, I'll clarify. You can always complicate an existing model in a trivial way, which is what all your examples are doing. I was talking about models where one is not a trivial extension of the other with no new predictive power. That's just silly.

Well, considering how many people seem to think that interpretations of QM other than their own are just "trivial extensions with no new predictive power", it's an important point.

Believe me, I have tried many times in our discussions over the last several months. Unfortunately we seem to be speaking different languages which happen to use the same English syntax.

Well, it's pretty obvious we use different definitions of "existence". Not sure if that qualifies as a different language, as such.

That said, you seem to be having serious trouble parsing my question, so maybe there are other differences too.

Look, you understand the concept of a paperclip maximizer, yes? How would a paperclip maximizer that used your criteria for existence act differently?

EDIT: incidentally, we haven't been discussing this "over the last several months". We've been discussing it since the fifth.

comment by shminux · 2013-04-09T15:53:22.902Z · score: -2 (2 votes) · LW · GW

Well, considering how many people seem to think that interpretations of QM other than their own are just "trivial extensions with no new predictive power", it's an important point.

The interpretations are usually far from trivial, and most aspire to provide inspiration for building a testable model some day. Some even have, and have been falsified. That's quite different from Last Thursdayism.

How would a paperclip maximizer that used your criteria for existence act differently?

Why would it? A paperclip maximizer is already instrumental, it has one goal in mind, maximizing the number of paperclips in the universe (which it presumably can measure with some sensors). It may have to develop advanced scientific concepts, like General Relativity, to be assured that the paperclips disappearing behind the cosmological horizon can still be counted toward the total, given some mild assumptions, like the Copernican principle.

Anyway, I'm quite skeptical that we are getting anywhere in this discussion.

comment by private_messaging · 2013-04-09T17:03:52.316Z · score: 1 (3 votes) · LW · GW

it has one goal in mind, maximizing the number of paperclips in the universe

In which universe? It doesn't know, and it may have uncertainty with regard to the true number. There are going to be hypothetical universes that produce the same observations but have ridiculously huge numbers of invisible paperclips at stake, which are influenced by the paperclipper's actions (it may even be that the simplest extra addition that makes the agent's actions influence invisible paperclips utterly dominates all theories beyond some length, as it leaves most of the length for a busy-beaver-like construction that makes the number of invisible paperclips ridiculously huge; one extra bit for a busy beaver is seriously a lot more paperclips). So given some sort of length prior that ignores the size of the hypothetical universe (the kind that won't discriminate against MWI just because it's big), those hypotheses aren't assigned a low enough prior, and they dominate its expected utility calculations.
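A minimal way to formalize that worry (my own notation, not private_messaging's): take a simplicity prior over hypotheses h weighted by description length |h|, and let BB be the busy beaver function.

```
P(h) \propto 2^{-|h|}, \qquad \mathbb{E}[U] = \sum_h P(h)\, U(h)
```

If hypotheses of length n can posit on the order of BB(n) invisible paperclips at stake, the terms 2^{-n} BB(n) grow without bound, so the longest such hypotheses the agent can represent dominate its expected-utility calculation despite their lower prior.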

comment by MugaSofer · 2013-04-09T18:45:15.617Z · score: -2 (2 votes) · LW · GW

The interpretations are usually far from trivial, and most aspire to provide inspiration for building a testable model some day. Some even have, and have been falsified. That's quite different from Last Thursdayism.

Well, I probably don't know enough about QM to judge if they're correct; but it's certainly a claim made fairly regularly.

Why would it? A paperclip maximizer is already instrumental, it has one goal in mind, maximizing the number of paperclips in the universe (which it presumably can measure with some sensors). It may have to develop advanced scientific concepts, like General Relativity, to be assured that the paperclips disappearing behind the cosmological horizon can still be counted toward the total, given some mild assumptions, like the Copernican principle.

Let's say it simplifies the equations not to model the paperclips as paperclips - it might be sufficient to treat them as a homogeneous mass of metal, for example. Does this mean that they do not, in fact, exist? Should a paperclipper avoid this at all costs, because it's equivalent to them disappearing?

Removing the territory/map distinction means something that wants to change the territory could end up changing the map ... doesn't it?

I'm wondering because I care about people, but it's often simpler to model people without treating them as, well, sentient.

Anyway, I'm quite skeptical that we are getting anywhere in this discussion.

Well, I've been optimistic that I'd clarified myself pretty much every comment now, so I have to admit I'm updating downwards on that.

comment by Eugine_Nier · 2013-04-06T02:38:00.340Z · score: 0 (0 votes) · LW · GW

Seriously, I've tried explaining just the proof that electrons exist, and in the end the best argument is that all the math we've built assuming their existence has really good predictive value. Which sounds like great evidence until you start confronting all the strange loops (the best experiments assume electromagnetic fields...) in that evidence, and I don't even know how to -begin- untangling those.

The same is more-or-less true if you replace 'electrons' with 'temperature'.

comment by [deleted] · 2013-04-06T19:17:16.814Z · score: 0 (2 votes) · LW · GW

You guys are making possible sources of confusion between the map and the territory sound like they're specific to QFT while they actually aren't. “Oh, I know what a ball is. It's an object where all the points on the surface are at the same distance from the centre.” “How can there be such a thing? The positions of atoms on the surface would fluctuate due to thermal motion. Then what is it, exactly, that you play billiards with?” (Can you find another example of this in a different recent LW thread?)

comment by EHeller · 2013-04-07T01:45:03.226Z · score: 3 (3 votes) · LW · GW

Your ball point is very different. My driving point is that there isn't even a nice, platonic-ideal type definition of particle IN THE MAP, let alone something that connects to the territory. I understand how my above post may lead you to misunderstand what I was trying to get at.

To rephrase my above comment, I might say: some of the features a MAP of a particle needs are that it's detectable in some way, and that it can be described in the non-relativistic limit by a Schroedinger equation. The standard QFT definitions of a particle lack both these features. They're also not fully consistent in the case of charged particles.

In QFT there is lots of confusion about how the map works, unlike in classical mechanics.

comment by shminux · 2013-04-07T02:35:10.137Z · score: 2 (4 votes) · LW · GW

This reminds me of the recent conjecture that the black hole horizon is a firewall, which seems like one of those confusions about the map.

comment by [deleted] · 2013-04-07T11:28:39.487Z · score: -1 (1 votes) · LW · GW

there isn't even a nice, platonic-ideal type definition of particle IN THE MAP

Why, is there a nice, platonic-ideal type definition of a rigid ball in the map (compatible with special relativity)? What happens to its radius when you spin it?

comment by EHeller · 2013-04-07T21:14:05.341Z · score: 1 (1 votes) · LW · GW

There is no 'rigid' in special relativity; the best you can do is Born-rigid. Even so, it's trivial to define a ball in special relativity: just define it in the frame of a corotating observer and use four-vectors to move to the same collection of events in other frames. You learn that a 'ball' in special relativity has some observer-dependent properties, but that's because length and time are observer-dependent in special relativity. So 'radius' isn't a good concept, but 'the radius so-and-so measures' IS a good concept.
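As a minimal worked example of 'the radius so-and-so measures' (standard special relativity, using a plain boost rather than a rotation to keep the formula elementary): for an observer moving at speed v along the x-axis relative to the ball's rest frame,

```
r_x' = \frac{r}{\gamma} = r\sqrt{1 - v^2/c^2}, \qquad r_y' = r_z' = r
```

so the same collection of events is a sphere of radius r for the comoving observer and an oblate spheroid for the boosted one; each measured radius is perfectly well defined once the observer is specified.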

comment by [deleted] · 2013-04-06T10:33:23.318Z · score: 0 (0 votes) · LW · GW

and what it means that the definition is NOT observer independent.

[puts logical positivism hat on]

Why, it means this, of course.

[while taking the hat off:] Oh, that wasn't what you meant, was it?

comment by EHeller · 2013-04-07T01:53:30.902Z · score: 0 (0 votes) · LW · GW

Why, it means this, of course.

The Unruh effect is a specific instance of my general point (particle definition is observer-dependent). All you've done is give a name to a sub-class of my point (not all observers see the same particles).

So should we expect ontology to be observer independent? If we should, what happens to particles?
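For concreteness, the standard Unruh result (not part of the original comment) gives the temperature of the thermal bath seen by an observer with constant proper acceleration a in the Minkowski vacuum:

```
T_U = \frac{\hbar a}{2\pi c k_B}
```

An accelerated detector clicks in a state that an inertial observer calls empty, so "how many particles are there" really is observer-dependent.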

comment by private_messaging · 2013-04-05T03:53:58.568Z · score: 1 (5 votes) · LW · GW

And yet it proclaims the issue settled in favour of MWI and argues about how wrong science is for not settling on MWI, and so on. The connection - that this deficiency is why MWI can't be settled on - sure does not come up here. Speaking of which, under any formal metric that he loves to allude to (e.g. Kolmogorov complexity), MWI, as it is, is not even a valid code, for (among other things) this reason.

It doesn't matter how much simpler MWI is if we don't even know that it isn't too simple, merely guess that it might not be too simple.

edit: ohh, and the lack of a derivation of the Born rule is not the kind of thing I meant by an argument in favour of non-realism. You can be a non-realist with or without having derived the Born rule. How QFT deals with relativistic issues, as outlined by e.g. Mitchell Porter, is quite a good reason to doubt the reality of what goes on mathematically in between input and output. There's a view that (current QM) internals are an artefact of the set of mathematical tricks which we like / can use effectively. The view that internal mathematics is to the world as the rods and cogs and gears inside a WW2 aiming computer are to a projectile flying through the air.

comment by Ritalin · 2013-04-04T15:20:11.695Z · score: 0 (0 votes) · LW · GW

Are they, though? Irrational or stupid?

comment by MugaSofer · 2013-04-06T19:28:37.917Z · score: -2 (2 votes) · LW · GW

What one can learn is that the allegedly 'settled' and 'solved' is far from settled and solved and is a matter of opinion as of now. This also goes for qualia and the like; we haven't reduced them to anything, merely asserted.

coughcreationistscough

comment by Vaniver · 2013-04-03T20:06:51.822Z · score: 4 (6 votes) · LW · GW

I defected from physics during my Master's, but this is basically the impression I had of the QM sequence as well.

comment by Vaniver · 2013-04-01T19:09:30.094Z · score: 20 (22 votes) · LW · GW

Carl often hears about, anonymizes, and warns me when technical folks outside the community are offended by something I do. I can't recall hearing any warnings from Carl about the QM sequence offending technical people.

That sounds like reasonable evidence against the selection effect.

Bluntly, if shminux can't grasp the technical argument for MWI then I wouldn't expect him to understand what really high-class technical people might think of it.

I strongly recommend against both the "advises newcomers to skip the QM sequence -> can't grasp technical argument for MWI" and "disagrees with MWI argument -> poor technical skill" inferences.

comment by wedrifid · 2013-04-02T03:21:11.252Z · score: 2 (12 votes) · LW · GW

I strongly recommend against both the "advises newcomers to skip the QM sequence -> can't grasp technical argument for MWI"

That inference isn't made. Eliezer has other information from which to reach that conclusion. In particular, he has several years' worth of ranting and sniping from Shminux about his particular pet peeve. Even if you disagree with Eliezer's conclusion it is not correct to claim that Eliezer is making this particular inference.

and "disagrees with MWI argument -> poor technical skill" inferences.

Again, Eliezer has a large body of comments from which to reach the conclusion that Shminux has poor technical skill in the areas necessary for reasoning on that subject. The specific nature of the disagreement would be relevant, for example.

comment by Vaniver · 2013-04-02T05:12:32.636Z · score: 5 (9 votes) · LW · GW

That inference isn't made. Eliezer has other information from which to reach that conclusion. In particular, he has several years' worth of ranting and sniping from Shminux about his particular pet peeve.

That very well could be, in which case my recommendation about that inference does not apply to Eliezer.

I will note that this comment suggests that Eliezer's model of shminux may be underdeveloped, and that caution in ascribing motives or beliefs to others is often wise.

comment by wedrifid · 2013-04-02T06:38:43.161Z · score: 1 (7 votes) · LW · GW

I will note that this comment suggests that Eliezer's model of shminux may be underdeveloped

It really doesn't. At best it suggests Eliezer could have been more careful in word selection regarding Shminux's particular agenda. 'About' rather than 'with' would be sufficient.

comment by [deleted] · 2013-04-03T03:09:42.041Z · score: 6 (6 votes) · LW · GW

I'm just kind of surprised the QM part worked, and it's possible that might be due to Mihaly having already taken standard QM so that he could clearly see the contrast between the explanation he got in college and the explanation on LW.

I'm no IMO gold medalist (which really just means I'm giving you explicit permission to ignore the rest of my comment) but it seems to me that a standard understanding of QM is necessary to get anything out of the QM sequence.

It's a pity I'll probably never have time to write up TDT.

Revealed preferences are rarely attractive.

comment by TheOtherDave · 2013-04-03T04:26:13.801Z · score: 7 (7 votes) · LW · GW

Revealed preferences are rarely attractive.

Adds to "Things I won't actually get put on a T-shirt but sort of feel I ought to" list.

comment by shminux · 2013-04-01T17:19:48.066Z · score: 5 (27 votes) · LW · GW

As others noted, you seem to be falling prey to the selection bias. Do you have an estimate of how many "IMO gold medalists" gave up on MIRI because its founder, in defiance of everything he wrote before, confidently picks one untestable from a bunch and proclaims it to be the truth (with 100% certainty, no less, Bayes be damned), despite (or maybe due to) not even being an expert in the subject matter?

EDIT: My initial inclination was to simply comply with your request, probably because I grew up being taught deference to and respect for authority. Then it struck me as one of the most cultish things one could do.

comment by hairyfigment · 2013-04-01T21:34:31.221Z · score: 14 (14 votes) · LW · GW

with 100% certainty, no less, Bayes be damned

Is this an April Fool's joke? He says nothing of the kind. The post which comes closest to this explicitly says that it could be wrong, but "the rational probability is pretty damned small." And counting the discovery of time-turners, he's named at least two conceivable pieces of evidence that could change that number.

What do you mean when you say you "just don't put nearly as much confidence in it as you do"?

comment by MugaSofer · 2013-04-12T19:05:57.445Z · score: 0 (2 votes) · LW · GW

Maybe it's a reference to the a priori nature of his arguments for MW? Or something? It's a strange claim to make, TBH.

comment by philh · 2013-04-02T00:11:01.710Z · score: 3 (3 votes) · LW · GW

Do you have an estimate of how many "IMO gold medalists" gave up on MIRI because [X]

The number of IMO gold medalists is sufficiently low, and the probability of any one of them having read the QM sequence is sufficiently small, that my own estimate would be less than one regardless of X.

(I don't have a good model of how much more likely an IMO gold medalist would be to have read the QM sequence than any other reference class, so I'm not massively confident.)
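A toy version of that estimate, with every number being an illustrative assumption of mine rather than philh's:

```
\mathbb{E}[\text{count}] \approx N \cdot p \approx (35 \text{ golds/year} \times 25 \text{ years}) \times 10^{-3} \approx 0.9 < 1
```

With roughly 35 golds a year over a few decades (ignoring repeat winners) and an assumed one-in-a-thousand chance that any given medalist has read the QM sequence, the expected count indeed comes out below one.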

comment by private_messaging · 2013-04-02T05:00:28.331Z · score: 1 (11 votes) · LW · GW

There are plenty of things roughly comparable to the IMO in terms of selectivity (the IMO gives what, ~35 golds a year?)... E.g. I'm 10th of all time on a popular programming contest site (I'm dmytry).

This discussion is really hilarious, especially the attempts to re-frame a commoner-oriented, qualitative and incomplete picture of QM as something which technical people appreciate and non-technical people don't. (Don't you want to be one of the techies?)

comment by philh · 2013-04-02T19:17:11.054Z · score: 5 (5 votes) · LW · GW

Selectivity, in the relevant sense, is more than just a question of how many people are granted something.

How many people are not on that site, but could rank highly if they chose to try? I'm guessing it's far more than the number of people who have never taken part in the IMO, but who could get a gold medal if they did.

(The IMO is more prestigious among mathematicians than topcoder is among programmers. And countries actively recruit their best mathematicians for the IMO. Nobody in the Finnish government thought it would be a good idea to convince and train Linus Torvalds to take part in an internet programming competition, so I doubt Linus Torvalds is on topcoder.)

There certainly are things as selective or more than the IMO (for example, the Fields medal), but I don't think topcoder is one of them, and I'm not convinced about "plenty". (Plenty for what purpose?)

comment by private_messaging · 2013-04-05T06:09:34.864Z · score: 2 (2 votes) · LW · GW

I've tried to compare it more accurately.

It's very hard to evaluate selectivity. It's not just the raw number of people participating. It seems that a large majority of serious ACM ICPC participants (both contestants and their coaches) are practising on Topcoder, and for the ICPC the best college CS students are recruited in much the same way as the best high-school math students are for the IMO.

I don't know if Linus Torvalds would necessarily do great on this sort of thing - his talents are primarily within software design, and his persistence as the unifying force behind Linux. (And are you sure you'd recruit a 22-year-old Linus Torvalds who had just started writing a Unix clone?) It's also the case that 'programming contest' is a bit of a misnomer - the winning is primarily about applied mathematics - just as 'computer science' is a misnomer.

In any case, it's highly dubious that understanding the QM sequence is as selective as any contest. I fully get that Copenhagen is clunky whereas MWI doesn't have the collapse, and that collapse fits in very badly. That's not at all the issue. However badly something fits, you can only throw it away when you've figured out how to do without it. Also, commonly, the wavefunction, the collapse, and other internals are seen as mechanisms of prediction which may, or may not, have anything to do with "how the universe does it" (even if the question of "how the universe does it" is meaningful, it may still be the case that the internals of the theory have nothing to do with that, as the internals are massively based upon our convenience). And worse still, MWI is in many very important ways lacking.

comment by private_messaging · 2013-04-03T05:37:41.233Z · score: -6 (8 votes) · LW · GW

Selectivity, in the relevant sense, is more than just a question of how many people are granted something.

Of course. There's the number of potential participants, self selection, and so on.

How many people are not on that site, but could rank highly if they chose to try? I'm guessing it's far more than the number of people who have never taken part in the IMO, but who could get a gold medal if they did.

The IMO is a high-school event, and 'taking part' in terms of actually winning entails a lot of very specific training rather than education.

(The IMO is more prestigious among mathematicians than topcoder is among programmers. And countries actively recruit their best mathematicians for the IMO. Nobody in the Finnish government thought it would be a good idea to convince and train Linus Torvalds to take part in an internet programming competition, so I doubt Linus Torvalds is on topcoder.)

Nobody can recruit Grigori Perelman for IMO, either.

There's the ACM ICPC, which is roughly the programming equivalent of the IMO. Finalists have huge overlap with TC (edit: more current). Of course, TC lacks the prestige of the ACM ICPC, but on the other hand it is not a school event.

There certainly are things as selective or more than the IMO (for example, the Fields medal), but I don't think topcoder is one of them, and I'm not convinced about "plenty". (Plenty for what purpose?)

Plenty for the purpose of having come across that volume of technical brilliance by now, and of noting and elevating it to its rightful place. Less facetiously: a lot of people know everything that was presented in the QM paper, and of those pretty much everyone either considers MWI to be an open question, an irrelevant question, or the like.

edit: made clearer with quotations.

comment by philh · 2013-04-03T07:39:05.716Z · score: 6 (6 votes) · LW · GW

Nobody can recruit Grigori Perelman for IMO, either.

Perelman is an IMO gold medalist.

comment by private_messaging · 2013-04-03T07:54:22.983Z · score: -1 (5 votes) · LW · GW

Hmm. Good point. My point, though, was that you can't recruit adult mathematicians for it.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-04-01T18:40:08.930Z · score: 1 (50 votes) · LW · GW

Well, I'm sorry to say this, but part of what makes authority Authority is that your respect is not always required. Frankly, in this case Authority is going to start deleting your comments if you keep on telling newcomers who post in the Welcome thread not to read the QM sequence, which you've done quite a few times at this point unless my memory is failing me. You disagree with MWI. Okay. I get it. We all get it. I still want the next Mihaly to read the QM Sequence and I don't want to have this conversation every time, nor is it an appropriate greeting for every newcomer.

comment by shminux · 2013-04-01T18:48:44.049Z · score: 20 (28 votes) · LW · GW

Sure, your site, your rules.

Just to correct a few inaccuracies in your comment:

You disagree with MWI.

I don't, I just don't put nearly as much confidence in it as you do. It is also unfortunately abused on this site quite a bit.

nor is it an appropriate greeting for every newcomer.

I don't even warn every newcomer who mentions the QM sequence, let alone "every newcomer", only those who appear to be stuck on it. Surely Mihaly had no difficulties with it, so none of my warnings would interfere with "still want the next Mihaly to read the QM Sequence".

comment by wedrifid · 2013-04-02T10:54:44.332Z · score: 12 (16 votes) · LW · GW

nor is it an appropriate greeting for every newcomer.

I don't even warn every newcomer who mentions the QM sequence, let alone "every newcomer"

The claim you made that prompted the reply was:

My standard advice to all newcomers is to skip the quantum sequence, at least on the first reading.

It is rather disingenuous to then express exaggerated 'let alone' rejections of the reply "nor is it an appropriate greeting for every newcomer".

comment by MugaSofer · 2013-04-06T09:25:15.192Z · score: 1 (3 votes) · LW · GW

nor is it an appropriate greeting for every newcomer.

I don't even warn every newcomer who mentions the QM sequence, let alone "every newcomer", only those who appear to be stuck on it.

Uhuh.

My standard advice to all newcomers is to skip the quantum sequence, at least on the first reading.

That said, kudos to you for remaining calm and reasonable.

comment by shminux · 2013-04-06T21:40:55.148Z · score: 4 (6 votes) · LW · GW

You have a point, it's easy to read my first comment rather uncharitably. I should have been more precise:

"My standard advice to all newcomers [who mention difficulties with the QM sequence]..." which is much closer to what actually happens. I don't bring it up out of the blue every time I greet someone.

comment by MugaSofer · 2013-04-07T23:43:57.800Z · score: 0 (2 votes) · LW · GW

"My standard advice to all newcomers [who mention difficulties with the QM sequence]..."

Sorry, could you point out where difficulties with the QM sequence were mentioned? All I could find was

I'm currently working my way through the sequences, just getting into the quantum physics sequence now.

comment by shminux · 2013-04-08T02:46:07.467Z · score: 2 (4 votes) · LW · GW

You are right. In my mind I read it as "I read through everything up until this, and this quantum thing looks scary and formidable, but it's next, so I better get on with it", which could have been a total misinterpretation of what was meant. So yeah, I probably jumped in a bit early. Not that I think it was bad advice. Anyway, it's all a moot point now; I have promised EY not to give unsolicited advice to newcomers telling them to skip the QM sequence.

comment by MugaSofer · 2013-04-08T18:04:59.229Z · score: 1 (3 votes) · LW · GW

Fair enough, I thought I might have somehow missed it.

comment by shminux · 2013-04-02T23:31:24.814Z · score: 1 (3 votes) · LW · GW

Hmm, the above got a lot of upvotes... I have no idea why.

comment by wedrifid · 2013-04-03T02:42:11.576Z · score: 26 (34 votes) · LW · GW

Hmm, the above got a lot of upvotes... I have no idea why.

Egalitarian instinct. Eliezer is using power against you, which drastically raises the standards of behavior expected from him while doing so---including less tolerance of him getting things wrong.

Your reply used the form 'graceful' in a context where you would have been given a lot of leeway even to be (overtly) rude. The corrections were portrayed as gentle and patient. Whether the corrections happen to be accurate or reasonable is usually almost irrelevant for the purpose of determining people's voting behavior this far down into a charged thread.

Note that even though I approve of Eliezer's decision to delete comments of yours disparaging the QM sequence to newcomers I still endorse your decision to force Eliezer to use his power instead of deferring to his judgement simply because he has the power. It was the right decision for you to make from your perspective and is also a much more desirable precedent.

comment by OrphanWilde · 2013-04-04T20:32:29.075Z · score: 7 (7 votes) · LW · GW

I deliberately invoke this tactic on occasion in arguments on other people's turf, particularly where the rules are unevenly applied. I was once accused by an acquaintance who witnessed it of being unreasonably reasonable.

It's particularly useful when moderators routinely take sides in debates. It makes it dangerous for them to use their power to shut down dissent.

comment by VCavallo · 2013-04-04T19:16:14.885Z · score: 4 (6 votes) · LW · GW

Egalitarian instinct. Eliezer is using power against you, which drastically raises the standards of behavior expected from him while doing so---including less tolerance of him getting things wrong.

Nailed it on the head. As my cursor began to instinctively hover over the "upvote" button on shminux's comment, I caught myself and thought, why am I doing this? And while I didn't come to your exact conclusion, I realized my instinct had something to do with EY's "use of power" and shminux's gentle reply. Some sort of underdog quality that I didn't yet take the time to assess but that my mouse-using hand wanted badly to blindly reward.

I'm glad you pieced out the exact reasoning behind the scenes here. Stopping and taking a moment to understand behavior and then correcting based on that understanding is why I am here.

That said, I really should think for a long time about your explanation before voting you up, too!

comment by shminux · 2013-04-04T20:10:10.700Z · score: 1 (5 votes) · LW · GW

I'm glad you pieced out the exact reasoning behind the scenes here.

If it is as right as it is insightful (which it undeniably is), I would expect those who come across wedrifid's explanation to go back and change their vote, resulting in %positive going sharply down. It doesn't appear to be happening.

comment by wedrifid · 2013-04-06T10:11:18.097Z · score: 5 (5 votes) · LW · GW

If it is as right as it is insightful (which it undeniably is), I would expect those who come across wedrifid's explanation to go back and change their vote, resulting in %positive going sharply down.

A quirk (and often a bias) humans have is that we tend to assume that just because a social behavior or human instinct can be explained it must thereby be invalidated. Yet everything can (in principle) be explained and there are still things that are, in fact, noble. My parents' love for me and my siblings is no less real because I am capable of reasoning about the inclusive fitness of those peers of my ancestors that happened to love their children less.

In this case the explanation given was, roughly speaking "egalitarian instinct + politeness". And personally I have to say that the egalitarian instinct is one of my favorite parts of humanity and one of the traits that I most value in those I prefer to surround myself with (Rah foragers!).

All else being equal, the explanation in terms of egalitarian instinct and precedent setting regarding authority use describes (what I consider to be) a positive picture and in itself is no reason to downvote. (The comment deserves to be downvoted for inaccuracy as described in different comments but this should be considered separately from the explanation of the reasons for upvoting.)

In terms of evidence I would say that I would not consider mass downvoting of this comment to be (non-trivial) evidence in support of my explanation. Commensurately I don't consider the lack of such downvoting to be much evidence against. As for how much confidence I have in the explanation... well, I am reasonably confident that the egalitarian instinct and politeness are factors but far less confident that they represent a majority of the influence. Even my (mere) map of the social forces at work points to other influences that are at least as strong---and my ability to model and predict a crowd is far from flawless.

The question you ask is a surprisingly complicated one, if looked at closely.

comment by Kaj_Sotala · 2013-04-14T20:06:44.388Z · score: 4 (4 votes) · LW · GW

I believe that I already knew I was acting on egalitarian instinct when I upvoted your comment.

comment by VCavallo · 2013-04-04T20:36:10.282Z · score: 3 (3 votes) · LW · GW

They could just be a weird sort of lazy whereby they don't scroll back up and change anything. Or maybe they never see his post. Or something else. I don't think the %positive-not-going-down-yet is any indication that wedrifid's comment is not right.

comment by shminux · 2013-04-04T20:59:58.856Z · score: -1 (3 votes) · LW · GW

You may well be right, it's hard to tell. I don't see an easy way of finding out short of people replying like you have. I assumed that there were enough of those who would react to make the effect visible, and I don't see how someone agreeing with wedrifid's assessment would go back and upvote my original comment, so even a partial effect could be visible. But anyway, this is not important enough to continue discussing, I think. Tapping out.

comment by VCavallo · 2013-04-04T21:16:52.581Z · score: 1 (1 votes) · LW · GW

I completely agree with what you are saying and also tap out, even though it may be redundant. Let us kill this line of comments together.

comment by Randy_M · 2013-04-05T00:51:27.348Z · score: -2 (4 votes) · LW · GW

If you both tap out, then anyone who steps into the discussion wins by default!

comment by wedrifid · 2013-04-06T10:14:55.656Z · score: 2 (2 votes) · LW · GW

If you both tap out, then anyone who steps into the discussion wins by default!

In many such cases it may be better to say that if both tap out then everybody wins by default!

comment by Randy_M · 2013-04-08T19:12:58.195Z · score: 1 (1 votes) · LW · GW

-3 karma, apparently.

comment by TheOtherDave · 2013-04-06T18:24:02.344Z · score: 1 (1 votes) · LW · GW

In discussions where everyone tapping out is superior to the available alternatives, I'm more inclined to refer to the result as "minimizing loss" than "winning".

comment by Kawoomba · 2013-04-06T18:44:37.269Z · score: 0 (0 votes) · LW · GW

Well, to your credit, you don't see LW as a zero-sum game.

comment by Eugine_Nier · 2013-04-08T03:34:43.543Z · score: 0 (0 votes) · LW · GW

What does he win?

comment by satt · 2013-04-03T09:18:54.653Z · score: 3 (5 votes) · LW · GW

Note that even though I tired of your talking about QM years ago

This is the second time you mention shminux having talked about QM for years. But I can't find any comments or posts he's made before July 2011. Does he have a dupe account or something else I don't know about?

comment by shminux · 2013-04-03T18:26:49.104Z · score: 3 (9 votes) · LW · GW

Since you are asking... July 2011 is right for the join date, and some time later is when I voiced any opinion related to the QM sequence and MWI (I did read through it once and have browsed now and again since). No, I did not have another account before that. As a long-term freenode ##physics IRC channel moderator, I dislike being confused about users' previous identities, so I don't do it myself (hence the silly nick, chosen a decade or so ago, which has lost all relevance by now). On the other hand, I don't mind people wanting a clean slate with a new nick, just not using socks to express a controversial or karma-draining opinion they are too chicken to have linked to their main account.

I also encourage you to take whatever wedrifid writes about me with a grain of salt. While I read what he writes and often upvote when I find it warranted, I quite publicly announced here about a year ago that I will not be replying to any of his comments, given how counterproductive it had been for me. (There are currently about 4 or 5 people on my LW "do-not-reply" list.) I have also warned other users once or twice, after I noticed them in a similarly futile discussion with wedrifid. I would be really surprised if this did not color his perception and attitude. It certainly would for me, were the roles reversed.

comment by Kawoomba · 2013-04-03T09:32:21.693Z · score: 2 (6 votes) · LW · GW

I'm also interested in this. Hopefully it's not an overt lie or something.

comment by wedrifid · 2013-04-03T10:40:38.651Z · score: 0 (10 votes) · LW · GW

This is the second time you mention shminux having talked about QM for years. But I can't find any comments or posts he's made before July 2011. Does he have a dupe account or something else I don't know about?

I don't keep an exact mental record of the join dates. My guess from intuitive feel was "2 years". It's April 2013. It was July 2011 when the account joined. If anything you have prompted me to slightly increase my confidence in the calibration of my account-joining estimator.

If the subject of how long user:shminux has been complaining about the QM sequence ever becomes relevant again I'll be sure to use Wei Dai's script, search the text and provide a link to the exact first mention. In this case, however, the difference hardly seems significant or important.

Does he have a dupe account or something else I don't know about?

I doubt it. If so I praise him for his flawless character separation.

comment by satt · 2013-04-03T13:07:14.064Z · score: 1 (1 votes) · LW · GW

Thanks for clarifying. I asked not because the exact timing is important but because the overstatement seemed uncharacteristic (albeit modest), and I wasn't sure whether it was just offhand pique or something else. (Also, if something funny had been going on, it might've explained the weird rancour/sloppiness/mindkilledness in the broader thread.)

comment by wedrifid · 2013-04-03T13:35:28.774Z · score: 1 (5 votes) · LW · GW

Thanks for clarifying. I asked not because the exact timing is important but because the overstatement seemed uncharacteristic (albeit modest), and I wasn't sure whether it was just offhand pique or something else.

Just an error.

Note that in the context there was no particular pique. I intended acknowledgement of established disrespect, not conveyance of additional disrespect. The point was that I was instinctively (as well as rationally) motivated to support shminux despite also approving of Eliezer's declared intent, which illustrates the strength of the effect.

Fortunately nothing is lost if I simply remove the phrase you quote entirely. The point remains clear even if I remove the detail of why I approve of Eliezer's declaration.

Also, if something funny had been going on, it might've explained the weird rancour/sloppiness/mindkilledness in the broader thread.

The main explanation there is just that incarnations of this same argument have been cropping up with slight variations for (what seems like) a long time. As with several other subjects there are rather clear battle lines drawn and no particular chance of anyone learning anything. The quality of the discussion tends to be abysmal, riddled with status games and full of arguments that are sloppy in the extreme. As well as the problem of persuasion through raw persistence.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-04-01T18:12:46.503Z · score: 1 (1 votes) · LW · GW

Bluntly, IMO gold medalists who can conceive of working on something 'crazy' like FAI would be expected to better understand the QM sequence than that. Even more so they would be expected to understand the core arguments better than to get offended by my having come to a conclusion. I haven't heard from the opposite side at all, and while the probability of my hearing about it might conceivably be low, my priors on it existing are rather lower than yours, and the fact that I have heard nothing is also evidence. Carl, who often hears (and anonymizes) complaints from the outside x-risk community, has not reported to me anyone being offended by my QM sequence.

Smart people want to be told something smart that they haven't already heard from other smart people and that doesn't seem 'obvious'. The QM sequence is demonstrably not dispensable for this purpose - Mihaly said the rest of LW seemed interesting but insufficiently I-wouldn't-have-thought-of-that. Frankly I worry that QM isn't enough, but given how long it's taking me to write up the Löb problem, I don't think I can realistically try to take on TDT.

comment by shminux · 2013-04-01T18:24:58.503Z · score: 2 (6 votes) · LW · GW

Again, you seem to be generalizing from a single example, unless you have more data points than just Mihaly.

comment by TheOtherDave · 2013-04-01T17:49:09.157Z · score: 1 (3 votes) · LW · GW

"IMO good medalists"

Note that the original text was "gold," not "good".

I assume IMO is the International Mathematical Olympiad(1). Not that this in any way addresses or mitigates your point; just figured I'd point it out.

(1) If I've understood the wiki article, ~35 IMO gold medals are awarded every year.

comment by shminux · 2013-04-01T18:15:04.612Z · score: 1 (3 votes) · LW · GW

Thanks, I fixed the typo.

comment by MugaSofer · 2013-04-10T16:10:54.419Z · score: 0 (4 votes) · LW · GW

Huh. I'd assumed it was short for "In My Opinion".

comment by TheOtherDave · 2013-04-10T16:44:16.128Z · score: 1 (1 votes) · LW · GW

Yeah, that confused me on initial reading, though some googling clarified matters, and I inferred from the way shminux (mis)quoted that something similar might be going on there, which is why I mentioned it.

comment by TimS · 2013-04-01T15:34:11.876Z · score: 3 (11 votes) · LW · GW

QM Sequence is two parts:

(1) QM for beginners
(2) Philosophy-of-science on believing things when evidence is in equipoise (or absent) - pick the simpler hypothesis.

I got part (1) from reading Dancing Wu-Li Masters, but I can clearly see the value to readers without that background. But teaching foundational science is separate from teaching Bayesian rationalism.

The philosophy of the second part is incredibly controversial. Much more than you acknowledge in the essays, or acknowledge now. Treating the other side of any unresolved philosophical controversy as if it is stupid, not merely wrong, is excessive and unjustified.

In short, the QM sequence would seriously benefit from the sort of philosophical background stuff that is included in your more recent essays. Including some more technical discussion of the opposing position.

comment by OrphanWilde · 2013-04-01T16:19:35.841Z · score: 7 (7 votes) · LW · GW

If you learned quantum mechanics from that book, you may have seriously mislearned it. It's actually pretty decent at describing everything up to, but excluding, quantum physics. When it comes to QM, however, the author sacrifices useful understanding in favor of mysticism.

comment by TimS · 2013-04-01T19:13:11.618Z · score: -1 (5 votes) · LW · GW

Hrm? On a conceptual level, is there more to QM than the Uncertainty Principle and Wave-Particle Duality? DWLM mentions the competing interpretations, but choosing an interpretation is not strictly necessary to understand QM predictions.

For clarity, I consider the double-slit experimental results to be an expression of wave-particle duality.


I will admit that DWLM does a poor job of preventing billiard-ball QM theory ("Of course you can't tell position and momentum at the same time. The only way to check is to hit the particle with a photon, and that's going to change the results.").

That's a wrong understanding, but a less wrong understanding than "It's classical physics all the way down."

comment by orthonormal · 2013-04-02T04:05:34.032Z · score: 11 (11 votes) · LW · GW

On a conceptual level, is there more to QM than the Uncertainty Principle and Wave-Particle Duality?

Yes. Very yes. There are several different ways to get at that next conceptual level (matrix mechanics, the behavior of the Schrödinger equation, configuration spaces, Hamiltonian and Lagrangian mechanics, to name ones that I know at least a little about), but qualitative descriptions of the Uncertainty Principle, Schrödinger's Cat, Wave-Particle Duality, and the Measurement Problem do not get you to that level.

Rejoice—the reality of quantum mechanics is way more awesome than you think it is, and you can find out about it!
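For a small taste of what that next level looks like (a standard textbook statement, not orthonormal's wording): the state of a system is a wavefunction ψ on configuration space, and its dynamics are given by the Schrödinger equation,

```
i\hbar \frac{\partial \psi(x,t)}{\partial t} = \hat{H}\,\psi(x,t), \qquad \hat{H} = -\frac{\hbar^2}{2m}\nabla^2 + V(x)
```

which is where the quantitative content behind superposition, interference, and the classical limit actually lives.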

comment by TimS · 2013-04-02T15:01:33.488Z · score: 3 (3 votes) · LW · GW

Let me rephrase: I'm sure there is more to cutting edge QM than that which I understand (or even have heard of). Is any of that necessary to engage with the philosophy-of-science questions raised by the end of the Sequence, such as Science Doesn't Trust Your Rationality?

From a writing point of view, some scientific controversy needed to be introduced to motivate the later discussion - and Eliezer chose QM. As examples go, it has advantages:

(1) QM is cutting edge - you can't just go to Wikipedia to figure out who won. EY could have written a Lamarckian / Darwinian evolution sequence with similar concluding essays, but indisputably knowing who was right would slant how the philosophy-of-science point would be interpreted.
(2) A non-expert should recognize that their intuitions are hopelessly misleading when dealing with QM, opening them to serious consideration of the new-to-them philosophy-of-science position EY articulates.

But let's not confuse the benefits of the motivating example with arguing that there is philosophy-of-science benefit in writing an understandable description of QM.

In other words, if the essays in the sequence after and including The Failures of Eld Science were omitted from the Sequence, it wouldn't belong on LessWrong.

comment by Vaniver · 2013-04-01T19:21:40.743Z · score: 0 (0 votes) · LW · GW

On a conceptual level, is there more to QM than the Uncertainty Principle and Wave-Particle Duality?

A deeper, more natural way to express both is "wavefunction reality," which also incorporates some of the more exotic effects that come from using complex numbers. (The Uncertainty Principle also should be called the "uncertainty consequence," since it's a simple derivation from how the position and momentum operators work on wavefunctions.)
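A compressed sketch of that derivation (standard textbook material, stated here only as illustration): the position and momentum operators fail to commute, and the Robertson inequality turns the commutator into the familiar bound,

```
[\hat{x}, \hat{p}] = i\hbar \quad\Longrightarrow\quad \Delta x\, \Delta p \ \ge\ \tfrac{1}{2}\left|\langle[\hat{x},\hat{p}]\rangle\right| = \frac{\hbar}{2}
```

so the 'uncertainty consequence' really does fall out of the operator algebra rather than being an extra postulate.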

(I haven't read DWLM, so I can't comment on its quality.)

comment by Michelle_Z · 2013-04-01T23:46:41.678Z · score: 5 (5 votes) · LW · GW

If you want to learn things/explore what you want to do with your life, take a few varied courses at Coursera.

comment by beoShaffer · 2013-04-01T04:26:05.524Z · score: 2 (2 votes) · LW · GW

Hi, Laplante. Why do you want to enter psychology/neuroscience/cognitive science? I ask this as someone who is about to graduate with a double major in psychology/computer science and is almost certain to go into computer science as my career.

comment by atomliner · 2013-04-12T08:31:40.315Z · score: 21 (23 votes) · LW · GW

Hello! I call myself Atomliner. I'm a 23 year old male Political Science major at Utah Valley University.

From 2009 to 2011, I was a missionary for the Mormon Church in northeastern Brazil. In the last month I was there, I was living with another missionary who I discovered to be a closet atheist. While I was trying to help him rediscover his faith, he had me read The God Delusion, which obliterated my own. I can't say that book was the only thing that enabled me to leave behind my irrational worldview, as I've always been very intellectually curious and resistant to authority. My mind had already been a powder keg long before Richard Dawkins arrived with the spark to light it.

Needless to say, I quickly embraced atheism and began to read everything I could about living without belief in God. I'm playing catch-up, trying to expand my mind as fast as I can to make up for the lost years I spent blinded by religious dogma. Just two years ago, for example, I believed homosexuality was an evil that threatened to destroy civilization, that humans came from another planet, and that the Lost Ten Tribes were living somewhere underground beneath the Arctic. As you can imagine, my re-education process has been exhausting.

One ex-Mormon friend of mine introduced me to Harry Potter and the Methods of Rationality, which I read only a few chapters of, but I was intrigued by the concept of Bayes' Theorem and followed a link here. Since then I've read From Skepticism to Technical Rationality and many of the Sequences. I'm hooked! I'm really liking what I find here. While I may not be a rationalist now, I would really like to be.

And that's my short story! I look forward to learning more from all of you and, hopefully, contributing in the future. :)

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-04-12T21:14:52.699Z · score: 12 (12 votes) · LW · GW

Welcome to LW! Don't worry about some of the replies you're getting, polls show we're overwhelmingly atheist around here.

comment by MugaSofer · 2013-04-12T21:39:53.022Z · score: 1 (3 votes) · LW · GW

This^

That said, my hypothetical atheist counterpart would have made the exact same comment. I can't speak for JohnH, but I can see someone who knows from experience that Mormons don't hold those beliefs being curious, regardless of affiliation. And, of course, the other two - well, three now - comments are from professed atheists. So far nobody seems willing to try and reconvert him or anything.

comment by JohnH · 2013-04-13T17:30:49.334Z · score: 0 (6 votes) · LW · GW

Some of that might be because of evaporative cooling. Reading the sequences is more likely to cause a theist to ignore Less Wrong than it is to change their beliefs, regardless of how rational or not a theist is. If they get past that point they soon find Less Wrong is quite welcoming towards discussions of how dumb or irrational religion is but fairly hostile to those that try and say that religion is not irrational - as even this welcome thread points out.

What I am wondering about is why it seems that atheists have complete caricatures of their previous theist beliefs. What atomliner mentions as his previous beliefs has absolutely no relation to what is found in Preach My Gospel, the missionary manual that he presumably had been studying for those two years, or to anything else that is found in scripture or in the teachings of the church. So are the beliefs that he gives as what he previously believed actually what he believed? If so, what did he think of the complete lack of those beliefs in the scripture and publications of the church that he belonged to, and where did he pick up these non-standard beliefs? Or is something else entirely going on when he says that those were his beliefs?

This doesn't limit itself to atomliner; in my experience generally when atheists talk about their previous religion they seem to have always held (or claim they did) some extremely non-standard version of that religion. So is this a failure of the religion to communicate what the actual beliefs are, a failure of the ex-theist to discover what the beliefs of the religion really are and think critically about, in Mormon terms, "faith promoting rumors" (also known as lies and false doctrine, in Mormon terms), or are these non-standard beliefs cobbled together from "faith promoting rumors" after the atheist is already an atheist to justify atheism?

I know that atheists can deal with a lot of prejudice from believers about why they are atheists so I would think that atheists would try and justify their beliefs based on the best beliefs and arguments of a religion and not extreme outliers for both, as otherwise it plays to the prejudice. Or at least come up with things that actually are real beliefs. For any ex-Mormon there are entire websites of ready-made points of doubt which are really easy to find; there should be no need to come up with such strange outlier beliefs to justify oneself, and if justifying isn't what he is doing then I am really very interested in knowing how and why he held those beliefs.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-04-13T21:19:47.250Z · score: 7 (9 votes) · LW · GW

IIRC the standard experimental result is that atheists who were raised religious have substantially above-average knowledge of their former religions. I am also suspicious that any recounting whatsoever of what went wrong will be greeted by, "But that's not exactly what the most sophisticated theologians say, even if it's what you remember perfectly well being taught in school!"

This obviously won't be true in my own case since Orthodox Jews who stay Orthodox will put huge amounts of cumulative effort into learning their religion's game manual over time. But by the same logic, I'm pretty sure I'm talking about a very standard element of the religion when I talk about later religious authorities being presumed to have immensely less theological knowledge than earlier authorities and hence no ability to declare earlier authorities wrong. As ever, you do not need a doctorate in invisible sky wizard to conclude that there is no invisible sky wizard, and you also don't need to know all the sophisticated excuses for why the invisible sky wizard you were told about is not exactly what the most sophisticated dupes believe they believe in (even as they go on telling children about the interventionist superparent). It'd be nice to have a standard, careful and correct explanation of why this is a valid attitude and what distinguishes it from the attitude of an adolescent who finds out everything they were told about quantum mechanics is wrong, besides the obvious distinction of net weight of experimental evidence (though really that's just enough).

LW has reportedly been key in deconverting many, many formerly religious readers. Others will of course have fled. It takes all kinds of paths.

comment by MugaSofer · 2013-04-14T18:49:44.763Z · score: 3 (5 votes) · LW · GW

As ever, you do not need a doctorate in invisible sky wizard to conclude that there is no invisible sky wizard, and you also don't need to know all the sophisticated excuses for why the invisible sky wizard you were told about is not exactly what the most sophisticated dupes believe they believe in (even as they go on telling children about the interventionist superparent).

The trouble with this heuristic is it fails when you aren't right to start with. See also: creationists.

That said, you do, in fact, seem to understand the claims theologians make pretty well, so I'm not sure why you're defending this position in the first place. Arguments are soldiers?

But by the same logic, I'm pretty sure I'm talking about a very standard element of the religion when I talk about later religious authorities being presumed to have immensely less theological knowledge than earlier authorities and hence no ability to declare earlier authorities wrong.

Well, I probably know even less about your former religion than you do, but I'm guessing - and some quick google-fu seems to confirm - that while you are of course correct about what you were taught, the majority of Jews would not subscribe to this claim.

You hail from Orthodox Judaism, a sect that contains mostly those who didn't reject the more easily-disproved elements of Judaism (and indeed seems to have developed new beliefs guarding against such changes, such as the concept of a "written and oral Talmud" that includes the teachings of earlier authorities). Most Jews (very roughly 80%) belong to less extreme traditions, and thus, presumably, are less likely to discover flaws in them. Much like the OP belonging to a subset of Mormons who believe in secret polar Israelites.

I am also suspicious that any recounting whatsoever of what went wrong will be greeted by, "But that's not exactly what the most sophisticated theologians say, even if it's what you remember perfectly well being taught in school!"

Again, imagine a creationist claiming that they were taught in school that a frog turned into a monkey, dammit, and you're just trying to disguise the lies you're feeding people by telling them they didn't understand properly! If a claim is true, it doesn't matter if a false version is being taught to schoolchildren (except insofar as we should probably stop that.) That said, disproving popular misconceptions is still bringing you closer to the truth - whatever it is - and you, personally, seem to have a fair idea of what the most sophisticated theologians are claiming in any case, and address their arguments too (although naturally I don't think you always succeed, I'm not stupid enough to try and prove that here.)

comment by Estarlio · 2013-04-14T21:29:46.058Z · score: 0 (0 votes) · LW · GW

The trouble with this heuristic is it fails when you aren't right to start with.

Disbelieving based on partial knowledge is different from disbelieving based on mistaken belief.

comment by MugaSofer · 2013-04-15T11:23:28.792Z · score: -1 (1 votes) · LW · GW

I'm not sure what you mean by this.

I mistakenly believe that learning more about something will not change my probability estimate, because the absurdity heuristic tells me it's too inferentially distant to be plausible - which has the same results if you are distant from reality and the claim is true, or correct and the claim is false.

comment by Estarlio · 2013-04-15T20:19:13.213Z · score: 0 (0 votes) · LW · GW

I'm not sure what you mean by this.

Being mistaken about something is different from not knowing everything there is to know about it.

If I'm wrong about a subject, then I don't know everything there is to know about it (assuming I'm reasoning correctly on what I know.)

But if I don't know everything there is to know about a subject, then I'm not necessarily wrong about that subject.

The former entails the latter, but the latter does not entail the former. One doesn't need a degree in biology to correct, or be corrected, about the frog thing - any more than one needs a degree in sky wizardry to correct or be corrected about god.

Given that you can't know everything about even relatively narrow subject areas these days (with ~7 billion humans on Earth we turn out a ridiculous amount of stuff), what we're really dealing with here is an issue of trust: when someone says that you need to know more to make a decision, on what grounds do you decide whether or not they're just messing you around?

There's a major disanalogy between how the frog-based anti-evolutionist's (AE) and the atheist's (AT) questions are going to be addressed in that regard.

When the AE challenges evolution there are obvious touchstones: ideally he's told that the frog thing never happened and given a bunch of stuff he can go look up if he's interested. When the AT challenges theology he's told that he doesn't know enough, i.e. he hasn't exhausted the search space, but he's not actually pointed at anything that addresses his concern. It's more a sort of “Keep looking until you find something. Bwahahahaaa, sucker.” response.

That happens because of what evidence does and how we get it. Say you're trying to decide whether the Earth is flat: to discover that it's vaguely spherical doesn't take a lot of evidence. I could drive to a couple of different locations and prove it to a reasonable degree of accuracy with sticks - it would not be difficult. (Or I could ask one of my friends in another city to take a measurement for me, but regardless the underlying methodology remains more or less the same.) That's an Eratosthenes level of understanding (~200 BC). To discover that the shape of the Earth is actually closer to an oblate spheroid, however, you need at least a Newton level of understanding (~1700 AD) to predict that its being spun ought to make it bulge around the equator.
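
(To make the stick method concrete - the numbers below are the standard classical illustration, not measurements from this thread: if two sticks a known north-south distance $d$ apart cast shadows whose angles from the vertical differ by $\Delta\theta$ at the same moment, a spherical Earth has circumference

$$C \;\approx\; \frac{360^\circ}{\Delta\theta}\, d,$$

so Eratosthenes' rough figures of $\Delta\theta \approx 7.2^\circ$ over $d \approx 800$ km give $C \approx 40{,}000$ km, close to the modern value - while a flat Earth lit by a distant sun predicts $\Delta\theta \approx 0$.)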

Evidence is something like 'that which alters the conditional probability of something being observed.' But not all evidence alters the probability to the same degree. If you're off by a lot, a little bit of evidence should let you know. The more accurate you want to get, the more evidence you need. Consequently, knowledge of search spaces tends to be ordered by weightiness of evidence unless the other person is playing as a hostile agent.
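
(Stated in the usual Bayesian form, which is all that definition amounts to: an observation $E$ is evidence about a hypothesis $H$ exactly when $P(E \mid H) \neq P(E \mid \neg H)$, and the size of the update is governed by the likelihood ratio,

$$\frac{P(H \mid E)}{P(\neg H \mid E)} \;=\; \frac{P(E \mid H)}{P(E \mid \neg H)} \cdot \frac{P(H)}{P(\neg H)},$$

so "weightier" evidence just means a likelihood ratio further from 1.)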

Even to ask the trickier questions that need that evidence requires a deep understanding that you have tuned in from a more general understanding. The odds that you'll ask a relevant question without that understanding, just by randomly mooshing concepts together, are slim.

Now the AT probably doesn't know a lot about religion. Assuming that the atheist is not a moron just randomly mooshing concepts together, her beliefs would be off by a lot; she seems likely to disagree with the theist about something fairly fundamental about how evidence is meant to inform beliefs.

So, here the AT is sitting with her really weighty, super-massive black hole of a reason to disbelieve - and the response from the Christian is that she doesn't know everything about god. That response is missing the references that someone who actually had a reason they could point to would have. More importantly, that response claims that you need deep knowledge to answer a question that was asked with shallow knowledge.

The response doesn't even look the same as the response to the frog problem. Everyone who knows even a little bit about evolution can correct the frog fella. Whereas, to my knowledge, no Christian has yet corrected a rational atheist on his or her point of disbelief. (And if they have, why aren't they singing it from the rooftops - if they have, as one might call it, a knock-down argument, why aren't the door-to-door religion salesmen opening with that?)

Strictly speaking neither of them knows everything about their subjects, or likely even very much of the available knowledge. But one clearly knows more than the other, and there are things that such knowledge lets him do that the other can't: point us towards proof, answer low-level questions with fairly hefty answers; and he is accorded an appropriately higher trust in areas that we've not yet tested ourselves.

Of course I acknowledge the possibility that a Christian, or whoever, might be able to pull off the same stunt. But since I've never seen it, and never heard of anyone who's seen it, and I'd expect to see it all over the place if there actually was an answer lurking out there.... And since I've talked two Christians out of their beliefs in the past who'd told me that I just needed to learn more about religion, and I know that someone who watched that debate lost their own faith as a consequence of being unable to justify their beliefs. (Admittedly I can't verify this to you so it's just a personal proof.) It seems improbable to me that they've actually got an answer.

Of course if they have such an answer all they have to do is show it to me. In the same manner as the frog-person.

(I can actually think of one reason that someone who could prove god might choose not to: If you don't know about god, under some theologies, you can't go to hell. You can't win a really nice version of heaven either, but you get a reasonable existence. They had to pull that move because they didn't want to tell people that god sent their babies to hell.

However, this latter type of person would seem mutually exclusive with the sort of person who would be interested in telling you to look more deeply into religion to begin with. I'd imagine someone who viewed your taking on more duties to not go to hell probably ought to be in the business of discouraging you from joining or investigating religion.)

Anyway, yeah. I think you can subscribe to E's heuristic quite happily even in areas where you acknowledge that you're likely to be off by a long way.

comment by MugaSofer · 2013-04-19T14:20:34.397Z · score: -1 (1 votes) · LW · GW

When the AE challenges evolution there are obvious touchstones: ideally he's told that the frog thing never happened and given a bunch of stuff he can go look up if he's interested. When the AT challenges theology he's told that he doesn't know enough, i.e. he hasn't exhausted the search space, but he's not actually pointed at anything that addresses his concern. It's more a sort of “Keep looking until you find something. Bwahahahaaa, sucker.” response.

I can assure you, I have personally seen atheists make arguments that are just as misinformed as the frog thingie.

For that matter, I've seen people who don't know much about evolution but are arguing for it tell creationists that a counterpoint to their claim exists somewhere, even though they don't actually know of such a "knock-down argument". And they were right.

Also, you seem to be modelling religious people as engaging in bad faith. Am I misreading you here?

The response doesn't even look the same as the response to the frog problem. Everyone who knows even a little bit about evolution can correct the frog fella.

Sure, but that was what we call an example. Creationists often make far more complex and technical-seeming arguments, which may well be beyond the expertise of the man on the street.

Whereas, to my knowledge, no Christian has yet corrected a rational atheist on his or her point of disbelief.

Maybe I parsed this wrong. Are you saying no incorrect argument has ever been made for atheism?

(And if they have, why aren't they singing it from the rooftops - if they have, as one might call it, a knock-down argument, why aren't the door-to-door religion salesmen opening with that?)

Well, many do open with what they consider to be knock-down arguments, of course. But many such arguments are, y'know, long, and require considerable background knowledge.

And since I've talked two Christians out of their beliefs in the past who'd told me that I just needed to learn more about religion, and I know that someone who watched that debate lost their own faith as a consequence of being unable to justify their beliefs. (Admittedly I can't verify this to you so it's just a personal proof.) It seems improbable to me that they've actually got an answer.

If you have such an unanswerable argument, why aren't you "singing it from the rooftops"?

I think you can subscribe to E's heuristic quite happily even in areas where you acknowledge that you're likely to be off by a long way.

Minor point, but you realize EY wasn't the first to make this argument? And while I did invent this counterargument, I'm far from the first to do so. For example, Yvain.

comment by Estarlio · 2013-04-19T18:34:40.221Z · score: 0 (0 votes) · LW · GW

I can assure you, I have personally seen atheists make arguments that are just as misinformed as the frog thingie.

For that matter, I've seen people who don't know much about evolution but are arguing for it tell creationists that a counterpoint to their claim exists somewhere, even though they don't actually know of such a "knock-down argument". And they were right.

Well, that's why I said ideally. Lots of people believe evolution as a matter of faith rather than reason. I'd tend to say it's a far more easily justified faith - after all, you can find the answers to the questions you're talking about very easily, or at least find the general direction they're in; the more rational people seem almost universally to believe in it; and it networks into webs of trust that seem to allow you to actually do things with your beliefs - but it's true that many people engage with it only superficially. You'd be foolish to believe in evolution just because Joe Blogs heard on TV that we evolved. Joe Blogs isn't necessarily doing any more thinking, if that's all he'll give you to go on, than if he'd heard from his pastor that god did it all.

Joe Blogs may be able to give you good reasons for believing in something without giving you an answer on your exact point - but more generally you shouldn't believe it if all he's got in his favour is that he does and he's got unjustified faith that there must be an answer somewhere.

A heuristic tends towards truth; it's the way to bet. There are situations where you follow the heuristic and what you get is the wrong answer, but the best you can do with the information at hand.

Also, you seem to be modelling religious people as engaging in bad faith. Am I misreading you here?

I consider someone who, without good basis, tells you that there's an answer and doesn't even point you in its direction, to be acting in bad faith. That's not all religious people but it seems to me at the moment to be the set we'd be talking about here.

Sure, but that was what we call an example. Creationists often make far more complex and technical-seeming arguments, which may well be beyond the expertise of the man on the street.

Maybe so, but going back to our heuristics those arguments don't hook into a verifiable web of trust.

In case I wasn't clear earlier: I do believe that when many people believe in something with good basis they're often believing in the work of a community that produces truth according to certain methods - that what's being trusted is mostly people and little bits here and there that you can verify for yourself. What grounds do you have for trusting that pastors, or whoever, know much about the world - that they're good and honest producers of truth?

Maybe I parsed this wrong. Are you saying no incorrect argument has ever been made for atheism?

No, I'm saying that to my knowledge no Christian has yet corrected someone who's reasonably rational on their reason for disbelieving.

Well, many do open with what they consider to be knock-down arguments, of course. But many such arguments are, y'know, long, and require considerable background knowledge.

Knockdown arguments about large differences of belief tend to be short, because they're saying that someone's really far off, and you don't need a lot of evidence to show that someone's a great distance out. Getting someone to buy into the argument may be more difficult if they don't believe that argument is a valid method (and a great many people don't, really), but the argument itself should be quite small.

If someone's going to technicality you to death, that's a sign that their argument is less likely to be correct if they're applying it to a large difference of belief. Scientists noticeably don't differ on the large things - they might have different interpretations of precise matters but the weight of evidence when it comes to macroscopic things is fairly overwhelming.

If you have such an unanswerable argument, why aren't you "singing it from the rooftops"?

I don't think that people who believe in god are necessarily worse off than people who don't. If you could erase belief in god from the world, I doubt it would make a great deal of difference in terms of people behaving rationally. If anything I'd say that the reasons that religion is going out of favour have more to do with a changing moral character of society and the lack of an ability to provide a coherent narrative of hope than they do with a rise of more rationally based ideologies.

Consequently, it's not an efficient use of my time. While you can say 'low probability prior, no supporting evidence, no predictive power,' in five seconds, that's going to make people who don't have a lot of intellectual courage recoil from what you're suggesting - if they understand it at a gut level at all - and in any case teaching the tools to understand what that means can take hours. And teaching someone to bring their emotions in line with justified beliefs can take months or years on top of that. Especially if you're going to have to sit down with them and walk them through all the steps to come to a belief that they don't really want very much in the first place.

Okay, sure, 'that which can be destroyed by the truth should be' - but at what cost, in what order? Don't you have better things to do with your time than pick on Christians whose lives may even be made worse by your doing so if they don't subsequently become more rational and develop well actualised theories of happiness and so on? Can you really provide a better life than a belief in god does for them? Even if you assume that making someone disbelieve god is a low-effort task, it wouldn't be as simple as just having someone disbelieve if you were to do it to promote their interests.

If there were a more efficient way of doing it then I might be up for that, but I'm just more generally interested in raising the sanity waterline than I am in swatting individual beliefs here and there.

Minor point, but you realize EY wasn't the first to make this argument? And while I did invent this counterargument, I'm far from the first to do so. For example, Yvain.

I do yes, I was made to read Dawkins' awful book a few years back in school. =p

comment by MugaSofer · 2013-04-19T19:56:05.286Z · score: -1 (1 votes) · LW · GW

Well, that's why I said ideally. Lots of people believe evolution as a matter of faith rather than reason.

Sorry, I was saying I agreed with them. You don't have to know every argument for a position to hold it, you just have to be right.

Mind you, I generally do learn the arguments, but I'm weird like that.

I consider someone who, without good basis, tells you that there's an answer and doesn't even point you in its direction, to be acting in bad faith. That's not all religious people but it seems to me at the moment to be the set we'd be talking about here.

I'm talking more about the set of everybody who tells you to read the literature. Sure, it's a perfectly good heuristic as long as you only use it when you're dealing with that particular subset.

What grounds do you have for trusting that pastors, or whoever, know much about the world - that they're good and honest producers of truth?

Well, I was thinking more theologians, but to be fair they're as bad as philosophers. Still, they've spent millennia talking about this stuff.

No, I'm saying that to my knowledge no Christian has yet corrected someone who's reasonably rational on their reason for disbelieving.

Sorry, but I'm going to have to call No True Scotsman on this. How many theists who were rational in their reasons for believing have been corrected by atheists? How many creationists who were rational in their reasons for disbelieving in evolution have been corrected by evolutionists?

I don't think that people who believe in god are necessarily worse off than people who don't. If you could erase belief in god from the world, I doubt it would make a great deal of difference in terms of people behaving rationally.

Point.

Um ... as a rationalist and the kind of idiot who exposes themself to basilisks, could you tell me this argument? Maybe rot13 it if you're not interested in evangelizing.

I do yes, I was made to read Dawkins' awful book a few years back in school. =p

Man, I'd forgotten that was the first place I came across that. Ah, nostalgia ... terrible book, though.

comment by Estarlio · 2013-04-28T21:24:48.549Z · score: 1 (1 votes) · LW · GW

Comment too long - continued from last:

Point.

Um ... as a rationalist and the kind of idiot who exposes themself to basilisks, could you tell me this argument? Maybe rot13 it if you're not interested in evangelizing.

V fhccbfr gung'f bxnl.

Gur svefg guvat abgr vf gung vs lbh ybbx ng ubj lbh trg rivqrapr, jung vg ernyyl qbrf, gura V'ir nyernql tvira bar: Ybj cevbe, (r.t. uvtu pbzcyrkvgl,) ab fhccbegvat rivqrapr. Crefbanyyl gung'f irel pbaivapvat. V erzrzore jura V jnf lbhatre, naq zl cneragf jrer fgvyy va gurve 'Tbbq puvyqera tb gb Puhepu' cunfr, zl pbhfva, jub jnf xvaqn fjrrg ba zr, fnvq gb zr 'Jul qba'g lbh jnag gb tb gb Puhepu? Qba'g lbh jnag gb tb gb urnira?' naq V nfxrq gurz 'Qba'g lbh jnag gb tb gb Aneavn? Fnzr guvat.' N ovg cvguvre creuncf ohg lbh trg gur cbvag, gur vqrn bs oryvrivat vg jvgubhg fbzrbar cbalvat hc rivqrapr unf nyjnlf orra bqq gb zr - creuncf whfg orpnhfr V jnf fb hfrq gb nqhygf ylvat ol gur gvzr V jnf byq rabhtu gb haqrefgnaq gur vqrn bs tbq ng nyy.

Ohg gur cbvag vf, bs pbhefr, jung pbafgvghgrf rivqrapr? Vg zvtug frrz yvxr gurer'f jvttyr ebbz gurer, ng yrnfg vs lbh ernyyl jnag gb or pbaivaprq bs n tbq. Bar nafjre vf gung rivqrapr qbrf fbzrguvat gb gur cebonovyvgl bs na bofreingvba - vs lbh bhgchg gur fnzr cerqvpgrq bofreingvbaf ertneqyrff bs gur rivqrapr, gura vg'f whfg n phevbfvgl fgbccre engure guna rivqrapr.

Fb, ornevat gung va zvaq: Gurer ner znal jnlf bs cuenfvat gur nethzrag sbe tbq jura lbh'er gelvat gb svyy va gung rivqrapr - frafvgvivgl gb vavgvny pbaqvgvbaf vf creuncf gur zbfg erfcrpgnoyr bar gb zl zvaq - ohg abar bs gurz frrz gb zrna n guvat jvgubhg gur sbyybj nethzrag, be nethzragf gung ner erqhpvoyr gb vg, ubyqvat:

'Gurer vf n tbq orpnhfr rirelguvat gung rkvfgf unf n pnhfr & yvxr rssrpgf ner nyvxr va gurve pnhfrf.'

Vs lbh qba'g ohl vagb gung gura, juvyr lbh'ir fgvyy tbg inevbhf jnlf gb qrsvar tbq, lbh'ir tbg ab ernfba gb. (Naq vg'f abg vzzrqvngryl pyrne ubj gubfr bgure jnlf trgf lbh nalguvat erfrzoyvat rivqrapr gung lbh pna gura tb ba gb hfr.) Rira jvgu ernfba/checbfr onfrq gurbybtvrf, yvxr Yrvoavm, gur haqreylvat nffhzcgvba vf gb nffhzr gung guvatf ner gur fnzr - 'Jung vf gehr bs [ernfbaf sbe gur rkvfgrapr bs] obbxf vf nyfb gehr bs gur qvssrerag fgngrf bs gur jbeyq, sbe gur fgngr juvpu sbyybjf vf....' Gurer ur'f nffhzvat gung obbxf unir n ernfba naq gung gur jbeyq orunirf va gur fnzr jnl, uvf npghny nethzrag tbrf ba gb nffhzr n obbx jvgu ab nhgube naq rffragvnyyl eryvrf ba gur vaghvgvba gung jr unir gung guvf jbhyq or evqvphybhf, juvpu gb zl zvaq znxrf uvf nethzrag erqhpvoyr gb gur jngpuznxre nethzrag.

Nalubb.

Lbh pna trg nebhaq gur jngpuznxre guvatl yvxr guvf:

1) Rirelguvat gung rkvfgf unf n pnhfr.

Guvf bar'f abg jbegu nethvat bire. N cevzr zbire qbrfa'g, bs vgfrys, vzcyl n fragvrag tbq va gur frafr pbzzbayl zrnag. V qba'g xabj jurgure gurer jnf be jnfa'g n cevzr zbire, V fhfcrpg jr qba'g unir gur pbaprcghny ibpnohynel gb npghnyyl gnyx nobhg perngvba rk-avuvyb va n zrnavatshy jnl.

2) Yvxr rssrpgf ner nyvxr va gurve pnhfrf.

Guvf vf gur vzcbegnag bar.

Gur nffhzcgvba vf gung lbh'ir tbg n qrfvtare va gur fnzr jnl jr qrfvta negrsnpgf - uvtu pbzcyrkvgl cerffhcbfr bar, cerfhznoyl. Ubjrire, gung qbrfa'g ernyyl yvar hc jvgu ubj vairagvba jbexf:

Vs lbh jrer whfg erylvat ba trargvp tvsgf - VD be jung unir lbh - gura lbh'q trg n erthyne qvfgevohgvba jura lbh tencurq vairagvba ntnvafg VD. Ohg lbh qba'g. Lbh qba'g trg n Qnivapv jvgubhg n Syberapr. Be ng gur irel yrnfg jvgubhg gur vagryyrpghny raivebazrag bs n Syberapr. Gur vqrn gung crbcyr whfg fvg gurer naq pbzr hc jvgu vqrnf bhg bs guva nve vf abafrafr. Gur vqrn gung lbh hfr gb perngr fbzrguvat pbzr sebz lbhe rkcrevraprf va gur jbeyq vagrenpgvat jvgu gur fgehpgher bs lbhe oenva. Vs lbh ybpx fbzrbar va frafbel qrcevingvba sbe nyy gurve yvsr, gura lbh'er abg tbvat gb trg ahpyrne culfvpf bhg bs gurz ng gur bgure raq. Tneontr va tneontr bhg.

Vs yvxr rssrpgf ernyyl ner nyvxr va gurve pnhfrf, gura lbh qba'g trg n tbq jvgubhg n jbeyq. Gur vasbezngvba sbe perngvba qbrfa'g whfg zntvpnyyl nccrne hcba cbfvgvat n perngbe. Naq vs vasbezngvba vf va jbeyqf, engure guna perngbef, nf frrzf gb or gur pnfr vs lbh'er fnlvat yvxr rssrpgf yvxr pnhfrf, gura jul cbfvg n tbq ng nyy? Gur nffhzcgvba qbrfa'g qb nal jbex - abguvat zber unir orra rkcynvarq nobhg jurer gur vasbezngvba naq fgehpgher bs gur jbeyq pnzr sebz nsgre lbh'ir znqr gur nffhzcgvba guna jnf znqr orsber.

Vg'f n snveyl cbchyne zbir va gurbybtl gb pynvz gung lbh pna'g xabj gur zvaq bs tbq. Ohg rira pnyyvat vg n zvaq znxrf n ybg bs nffhzcgvbaf - naq jura lbh fgneg erzbivat gubfr nffhzcgvbaf naq fnlvat fghss gb trg bhg bs gur nobir nethzrag yvxr 'jryy, gur vqrn jnf nyjnlf gurer, va Tbq' jung ner lbh ernyyl qbvat gung'f qvssrerag gb cbfvgvat na haguvaxvat cevzr zbire? Ubj qbrf vasbezngvba va n fgngvp fgehpgher pbafgvghgr n zvaq ng nyy?

Jurer qvq gur vasbezngvba gb trg gur jbeyq pbzr sebz? V qba'g xabj, ohg hayrff lbh pna fnl ubj tbq znqr gur jbeyq - jurer ur tbg uvf vqrnf sebz - gur cerzvfr vf whfg... gur jbeyq jbhyq ybbx gur fnzr gb lbh jurgure tbq jnf gurer be abg, fb jung lbh'er gnyxvat nobhg qbrfa'g pbafgvghgr rivqrapr bs gurve rkvfgrapr. Lbh unir gb xabj gur angher bs tbq, rira vs whfg va trareny grezf, gb qvfgvathvfu vg sebz n cevzr zbire. Fhccbfvat na ntrag va gur svefg cynpr jnf zrnag gb or jung tbg lbh bhg bs gung ceboyrz naq jura vg qbrfa'g....

Gung gb zl zvaq vf n snveyl nofbyhgr nethzrag ntnvafg tbq. Gura lbh'ir whfg tbg uvf cevbe cebonovyvgl naq jungrire culfvpny cebbsf gung fcrpvsvp eryvtvbaf cerffhcbfr, gung lbh'q irevsl ba gurve bja zrevgf; v.r. ceviryvqtrq vasbezngvba gung pbhyq bayl unir pbzr sebz zrrgvat fbzrguvat tbqyl-cbjreshy, (abar bs juvpu frrzf gb unir ghearq hc lrg.)

V qba'g xabj, znlor lbh qba'g svaq gur nethzrag pbaivapvat - gur uvg engr va gung ertneq vfa'g cnegvphyneyl uvtu. Ohg V'ir abg sbhaq n aba-snvgu-onfrq nethzrag gung guvf qbrfa'g znc bagb va fbzr sbez be nabgure lrg.

comment by MugaSofer · 2013-04-29T22:04:42.408Z · score: -2 (2 votes) · LW · GW

Thank you for sharing. It was, I must say, probably the best-posed argument for atheism I've ever read, and I could probably go on for days about why it doesn't move me. So I won't.

comment by shminux · 2013-04-29T22:50:34.634Z · score: 0 (4 votes) · LW · GW

Chicken!

comment by MugaSofer · 2013-05-01T18:22:53.098Z · score: -1 (1 votes) · LW · GW

Estarlio has specifically stated that they consider arguing over this a waste of their time. To be honest, so do I.

comment by Estarlio · 2013-04-28T21:24:35.004Z · score: 0 (0 votes) · LW · GW

Sorry, it's taken so long to reply. I'm easily distracted by shiny objects and the prospect of work.

Let's see:

Sorry, I was saying I agreed with them. You don't have to know every argument for a position to hold it, you just have to be right.

It seems to me at the moment that you don't know if you're right. So while you don't have to know every argument for a position to hold it, if you're interested in producing truth, it's desirable to have evidence on your side - either via the beliefs of others who have a wider array of knowledge on the subject than yourself and are good at producing truth or via knowing the arguments yourself.

Mind you, I generally do learn the arguments, but I'm weird like that.

I never have the time to learn all the arguments. Though I tend to know a reasonable number by comparison to most people I meet I suppose - not that that's saying much.

I'm talking more about the set of everybody who tells you to read the literature. Sure, it's a perfectly good heuristic as long as you only use it when you're dealing with that particular subset.

Ah, more generally, then, that depends on who's telling you to do it and what literature they're telling you to read. If someone's asking you to put in a fairly hefty investment of time then it seems to me that requires a fairly hefty investment of trust - sort of like "let's see some cards before we start handing over money". You don't have to see the entirety of their proof up front, but if they can't provide at least a short version and haven't given you any other reason to respect their ability to find truth....

Like if gwern or someone told me that there was a good proof of god in something - I've read gwern's website and respect their reasoning - that would make me inclined to do it. If I saw priests and the like regularly making coherent arguments and they had that visible evidence in their ability to find truth, then they'd get a similar allowance. But it's like they don't want to show their cards at the moment - or aren't holding any - and whenever I've given them the allowance anyway it's turned out to be a bit of a waste. So that trust's not there for them anymore.

Well, I was thinking more theologians, but to be fair they're as bad as philosophers. Still, they've spent millennia talking about this stuff.

That's true. I just wonder - it's not well-ordered or homogeneous.

If everyone was writing about trivial truths then you'd expect it to mostly agree with itself - lots of people saying more or less the same stuff. If it was deep knowledge then you'd expect the deep knowledge to be on the top of the heap. Insights relevant to a widely felt need impose an ordering effect on the search space. Which is to say, lots of people know about them because they're so useful.

It's entirely possible they've just spent millennia talking about not very much at all. I mean you read Malebranche, for instance, and he was considered at the time to be doing very good work. But when you read it, it's almost infantile in its misunderstandings. If that's what passed muster it doesn't imply good things about what they were doing with the rest of their two thousand years or so.

I'm not sure whether that's particularly clear, reading it back. When people are talking sense then the people from previous eras don't appear to pass muster to people from modern eras. They might appear smart, but they're demonstrably wrong. If Malebranche is transparently wrong to me, and I'm not especially familiar with Christian works, nor am I the smartest man who ever lived - I've met one or two people in my life I consider as smart as myself.... That's not something that looks like an argument that's the product of thousands of years of meaningful work, or that could survive as something respectable in an environment where thousands of years of work had been put in.

Sorry, but I'm going to have to call No True Scotsman on this. How many theists who were rational in their reasons for believing have been corrected by atheists? How many creationists who were rational in their reasons for disbelieving in evolution have been corrected by evolutionists?

What difference does either of those make to the claim about atheistic rationalists? I'm not making a universal claim that all rationalists are atheistic, I'm making a claim about the group of people who are rationalists and are atheistic.

NTS would be if I said no rational atheist had, to my knowledge, ever been corrected on their point of disbelief by a Christian and you said something like,

"Well, Elizer is a rationalist and he's become a Christian after hearing my really awesome argument."

And then I was all, "Well obviously Eliezer's a great big poopy-head rather than a rationalist."

To my mind, Eliezer and a reasonable distribution of other respectable rationalists becoming Christians in response to an argument (so that we know it's not just a random mental breakdown) would be very hefty evidence in favour of there being a good argument for being Christian out there.

However, to answer your questions: I don't know on the creationist front, but on the Christian front I personally know of ... actually, now that I think about it longer, I know of four; one of my friends in the US changed his mind too.

I do know of one person who's gone the other way too. But not someone that I'd considered particularly rational before they did so.

comment by JohnH · 2013-04-13T21:54:22.712Z · score: 1 (1 votes) · LW · GW

I believe the result is that atheists have an above-average knowledge of world religions, similar to Jews (and Mormons), but I don't know of results that show they have an above-average knowledge of their previous religion. Assuming most of them were Christians, then the answer is possibly.

In this particular case I happen to know precisely what is in all of the official church material; I will admit to having no idea where his teachers may have deviated from church publications, hence my wondering where he got those beliefs.

I suppose I can't comment on what the average believer of various other sects knows of their sect's beliefs, only on what I know of their sects' beliefs. Which leaves the question of how plausible it is that I know more than the average believer of, say, Catholicism or Evangelical Christianity or other groups not my own.

[edit] Eliezer, I am not exactly new to this site and have previously responded in detail to what you have written here. Doing so again would get the same result as last time.

comment by Baruta07 · 2013-04-14T19:50:23.212Z · score: 0 (0 votes) · LW · GW

IIRC the standard experimental result is that atheists who were raised religious have substantially above-average knowledge of their former religions.

As a Grade 11 student currently attending a Catholic school (and having attended Christian schools all my life) I would have to vouch for the accuracy of the statement; thanks to CCS I've learned a tremendous amount about Christianity, although in my case there was a lot less "homosexuality is bad" than is probably the norm and more focus on the positive moral aspects...

I currently attend Bishop Carroll HS and even though it is a Catholic school I have no desire to change schools because of the alternate religious courses they offer and because it's generally a great school. From my experiences there are a ton of non-religious students as well as several more unusual religions represented. I personally would recommend the school for any HS students in Calgary wishing to have a non-standard HS experience.

comment by Vaniver · 2013-04-14T18:55:13.197Z · score: 0 (0 votes) · LW · GW

IIRC the standard experimental result is that atheists who were raised religious have substantially above-average knowledge of their former religions.

How much of this effect do you think is due to differences in intelligence?

comment by Vaniver · 2013-04-14T18:58:45.086Z · score: 6 (6 votes) · LW · GW

What I am wondering about is why it seems that atheists have complete caricatures of their previous theist beliefs.

Suppose there is diversity within a religion, on how much the sensible and silly beliefs are emphasized. If the likelihood of a person rejecting a religion is positively correlated with the religion recommending silly beliefs, then we should expect that the population of atheist converts should have a larger representation of people raised in homes where silly beliefs dominated than the population of theists. That is, standard evaporative cooling, except that the reasonable people who leave become atheists, and similarly reasonable people who are in a 'warm' religious setting can't relate. (I don't know if there is empirical support for this model or not.)
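
A toy simulation of that selection effect, with made-up numbers chosen only to show the direction of the bias (the silliness distribution, the leaving probabilities, and the population size are all assumptions, not empirical estimates):

```python
import random

random.seed(0)
converts, stayers = [], []

for _ in range(100_000):
    # How heavily the silly beliefs were emphasized in this person's upbringing.
    silliness = random.random()
    # Assumption: the probability of leaving rises with silliness (10% to 70%).
    p_leave = 0.1 + 0.6 * silliness
    (converts if random.random() < p_leave else stayers).append(silliness)

print("mean silliness recalled by converts:", sum(converts) / len(converts))
print("mean silliness recalled by stayers: ", sum(stayers) / len(stayers))
```

The converts' average comes out noticeably higher than the stayers', so accounts of "what the religion taught" drawn from converts will over-represent the silly versions - which is the pattern described above.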

comment by Estarlio · 2013-04-14T22:15:37.453Z · score: 5 (5 votes) · LW · GW

I know that atheists can deal with a lot of prejudice from believers about why they are atheists so I would think that atheists would try and justify their beliefs based on the best beliefs and arguments of a religion and not extreme outliers for both, as otherwise it plays to the prejudice.

Really? I don't think it takes an exceptional degree of rationality to reject religion.

I suspect what you mean is that atheists /ought/ to justify their disbelief on stronger grounds than the silliest interpretation of their opponent's beliefs. Which is true: you shouldn't disbelieve that there's a god on the grounds that one branch of one religion told you the royal family were aliens or something - that's just an argument against a specific form of one religion, not against god in general.

But I suspect the task would get no easier for religion if it were facing off against more rational individuals, who'd want the strongest form of the weakest premise. (In this case I suspect something like: "What you're talking about is really complex/improbable; before we get down to talking about the specifics of any doctrine, where's your evidence that we should entertain a god at all?")

What I am wondering about is why it seems that atheists have complete caricatures of their previous theist beliefs.

Selection bias maybe? You're talking to the atheists who have an emotional investment in debating religion. I'd suspect that those who'd been exposed to the sillier beliefs would have greater investment, and that stronger rationalists would have a lower investment or a higher investment in other pursuits. Or maybe atheists tend to be fairly irrational. shrug

comment by atomliner · 2013-04-14T10:30:56.656Z · score: 4 (4 votes) · LW · GW

I was not trying to justify my leaving the Mormon Church in saying I used to believe in the extraordinary interpretations I did. I just wanted to say that my re-education process has been difficult because I used to believe in a lot of crazy things. Also, I'm not trying to make a caricature of my former beliefs; everything I have written here about what I used to believe I will confirm again as an accurate depiction of what was going on in my head.

I think it is a misstatement of yours to say that these beliefs have "absolutely no relation to... anything else that is found in scripture or in the teachings of the church". They obviously have some relation, being that I justified these beliefs using passages from The Family: A Proclamation to the World, Journal of Discourses and Doctrine & Covenants, pretty well-known LDS texts. I showed these passages in another reply to you.

comment by CCC · 2013-04-14T10:47:16.692Z · score: 3 (3 votes) · LW · GW

They obviously have some relation, being that I justified these beliefs using passages from The Family: A Proclamation to the World, Journal of Discourses and Doctrine & Covenants, pretty well-known LDS texts. I showed these passages in another reply to you.

In all fairness, JohnH wrote his post before you showed him those passages. So that data was not available to him at the time of writing.

comment by CCC · 2013-04-13T21:29:20.485Z · score: 1 (1 votes) · LW · GW

Some of that might be because of evaporative cooling. Reading the sequences is more likely to cause a theist to ignore Less Wrong than it is to change their beliefs, regardless of how rational or not a theist is.

I agree intuitively with your second sentence (parsing 'beliefs' as 'religious beliefs'); but as I assign both options rather low probabilities, I suspect that it isn't enough to cause much in the way of evaporative cooling.

but fairly hostile to those that try and say that religion is not irrational

I haven't really seen that hostility, myself.

This doesn't limit itself to atomliner; in my experience generally when atheists talk about their previous religion they seem to have always held (or claim they did) some extremely non-standard version of that religion.

Hmmm. It seems likely that the non-standard forms have glaring flaws; close inspection finds the flaws, and a proportion of people therefore immediately assume that all religions are equally incorrect. Which is flawed reasoning in and of itself; if one religion is flawed, this does not imply that all are flawed.

comment by MugaSofer · 2013-04-14T18:33:01.797Z · score: -1 (1 votes) · LW · GW

but fairly hostile to those that try and say that religion is not irrational

I haven't really seen that hostility, myself.

I think John means "hostility" more in the sense of "non-receptiveness" rather than actively attacking those who argue for theism.

This doesn't limit itself to atomliner; in my experience generally when atheists talk about their previous religion they seem to have always held (or claim they did) some extremely non-standard version of that religion.

Hmmm. It seems likely that the non-standard forms have glaring flaws; close inspection finds the flaws, and a proportion of people therefore immediately assume that all religions are equally incorrect.

Yup, this seems to fit.

comment by JohnH · 2013-04-14T19:04:30.604Z · score: 2 (2 votes) · LW · GW

Being called a moron seems hostile to me, just to use an example right here.

comment by CCC · 2013-04-14T19:18:02.375Z · score: 1 (1 votes) · LW · GW

That was certainly hostile, yes. However, I take the fact that the post in question is at -10 karma to suggest that the hostility is frowned upon by the community in general.

comment by MugaSofer · 2013-04-15T11:10:40.725Z · score: 0 (2 votes) · LW · GW

Sorry, I should have specified "except for Kawoomba".

comment by Kawoomba · 2013-04-14T19:07:50.968Z · score: -5 (7 votes) · LW · GW

That which can be destroyed by the truth should be. Also, the spelling is unchanged, and I'd just seen a certain Tarantino movie.

Edit: Also, politeness has its virtues and is often more effective in achieving one's goals - yet Crocker's Rules are certainly more honest. Checking the definition of moron - at least as it pertains to that aspect of a person's belief system - I mean, who would seriously dispute its applicability, even before South Park immortalized Joseph Smith's teachings?

comment by PhilGoetz · 2013-04-14T19:58:14.959Z · score: 5 (5 votes) · LW · GW

I dispute its applicability, because I've known very smart Mormons. Humans are not logic engines. It's rare to find even a brilliant person who doesn't have some blind spot.

Even if it were clinically applicable, you presented it as an in-group vs. out-group joke, which is an invitation for people from one tribe to mock people from another tribe. Its message was not primarily informational.

Crocker's Rules are not an invitation to be rude.

comment by Kawoomba · 2013-04-14T20:05:42.280Z · score: -2 (2 votes) · LW · GW

I don't doubt there are Mormons with a higher IQ than myself, and more knowledgeable in many fields. Maybe the term "stupid person" is too broad, I meant it with Mormonism as the referent, and as being limited in scope to that. Yet it is disheartening that there are such obvious self-deceiving failures of reasoning, and courtesy afforded to dumb beliefs may prop up the Potemkin village, may help hide the elephant behind the curtain.

Reveal Oz in the broad daylight of reason, so that those very smart Mormons you know must address that blind spot.

comment by JohnH · 2013-04-14T20:09:31.232Z · score: 2 (2 votes) · LW · GW

Calling us morons doesn't reveal anything to reason or even attempt to force me to address what you may think of as a blind spot.

comment by Kawoomba · 2013-04-14T20:23:15.848Z · score: 0 (2 votes) · LW · GW

It stands to reason that if you've successfully read even parts of the Sequences, or other rationality related materials, and yet believe in the Book of Mormon, there's little that will force you to address that blind ... area ..., so why not shock therapy. Or are you just too looking forward to your own planet / world? (Brigham Young, Journal of Discourses 18:259) Maybe that's just to be taken metaphorically though, for something, or something other?

comment by JohnH · 2013-04-14T21:13:12.693Z · score: 1 (1 votes) · LW · GW

Why go to the Journal of Discourses? D&C 132 clearly states that those that receive exaltation will be gods; the only question is whether that involves receiving a planet or just being part of the divine council. The Bible clearly states that we will be heirs and joint heirs with Christ. The Journal of Discourses is not something that most members look to for doctrine, as it isn't scripture. I, and any member, am free to believe whatever I want to on the subject (or say we don't know) because nothing has been revealed on the subject of exaltation and theosis other than that.

Personally, I think there are some problems with the belief that everyone will have a planet due to some of the statements that Jesus makes in the New Testament but I could be wrong and I am not about to explain the subject here, though I may have attempted to do so in the past.

comment by CCC · 2013-04-15T08:36:06.494Z · score: 2 (2 votes) · LW · GW

Crocker's Rules are not an excuse for you to be rude to others. They are an invitation for others to ignore politeness when talking to you. They are not an invitation for others to be rude to you for the sake of rudeness, either; only where it enables some other aim, such as efficient transfer of information.

What you did, when viewed from the outside, is a clear example of rudeness for the sake of rudeness alone. I don't see how Crocker's rules are relevant.

comment by CCC · 2013-04-14T18:58:53.564Z · score: 0 (0 votes) · LW · GW

I think John means "hostility" more in the sense of "non-receptiveness" rather than actively attacking those who argue for theism.

Ah. To my mind, that would be 'neutrality', not 'hostility'.

comment by MugaSofer · 2013-04-15T11:30:51.380Z · score: -1 (1 votes) · LW · GW

Ironically, this turned out not to be the case; he was thinking of Kawoomba, our resident ... actually, I'd assumed he only attacked me on this sort of thing.

comment by CCC · 2013-04-15T17:32:23.629Z · score: 0 (0 votes) · LW · GW

Ironically, this turned out not to be the case

A common problem when one person tries to explain the words of another to a third party, yes.

Funny thing - I had a brief interaction over private messaging with Kawoomba on the subject of religion some time back, and he seemed reasonable at the time. Mildly curious, firmly atheistic, and not at all hostile.

I'm not sure if he changed, or if he's hostile to only a specific subcategory of theists?

comment by MugaSofer · 2013-04-19T14:25:23.188Z · score: -1 (1 votes) · LW · GW

As I said, I'd assumed it was just me; we got into a rather lengthy argument some time ago on whether human ethics generalize, and he's been latching onto anything I say that's even tangentially related ever since. I'm not sure why he's so eager to convince me, since he believes his values are incompatible with mine, but it seems it may have something to do with him pattern-matching my position with the Inquisition or something.

comment by [deleted] · 2013-04-14T20:10:15.920Z · score: 0 (0 votes) · LW · GW

Have you noticed any difference between first and second generation atheists, in regard to caricaturing or contempt for religion?

comment by MugaSofer · 2013-04-13T18:15:53.624Z · score: 0 (2 votes) · LW · GW

Reading the sequences is more likely to cause a theist to ignore Less Wrong than it is to change their beliefs, regardless of how rational or not a theist is.

Really? I would have expected most aspiring rationalists who happen to be theists to be mildly irritated by the anti-theism bits, but sufficiently interested by the majority that's about rationality. Might be the typical mind fallacy, though.

This doesn't limit itself to atomliner; in my experience generally when atheists talk about their previous religion they seem to have always held (or claim they did) some extremely non-standard version of that religion.

I would assume this is because the standard version of major religions likely became so by being unusually resistant to deconversion - including through non-ridiculousness.

EDIT: also, I think those were intended as examples of things irrational people believe, not necessarily Mormons specifically.

comment by TheOtherDave · 2013-04-13T18:44:37.427Z · score: 3 (3 votes) · LW · GW

I would have expected most aspiring rationalists who happen to be theists to be mildly irritated by the anti-theism bits

Well, I don't strongly identify as a theist, so it's hard for me to have an opinion here.

That said, if I imagine myself reading a variant version of the sequences (and LW discourse more generally) which are anti-some-group-I-identify-with in the same ways.... for example, if I substitute every reference to the superiority of atheism to theism (or the inadequacy of theism more generally) with a similar reference to the superiority of, say, heterosexuality to homosexuality (or the inadequacy of homosexuality more generally), my emotional response is basically "yeah, fuck that shit."

Perhaps that's simply an indication of my inadequate rationality.

This doesn't limit itself to atomliner; in my experience generally when atheists talk about their previous religion they seem to have always held (or claim they did) some extremely non-standard version of that religion.

I would assume this is because the standard version of major religions likely became so by being unusually resistant to deconversion - including through non-ridiculousness.

That's possible. Another possibility is that when tribe members talk about their tribe, they frequently do so charitably (for example, in nonridiculous language, emphasizing the nonridiculous aspects of their tribe), while when ex-members talk about their ex-tribe, they frequently do so non-charitably.

This is similar to what happens when you compare married people's descriptions of their spouses to divorced people's descriptions of their ex-spouses... the descriptions are vastly different, even if the same person is being described.

comment by MugaSofer · 2013-04-13T20:22:06.394Z · score: -1 (1 votes) · LW · GW

Well, I don't strongly identify as a theist, so it's hard for me to have an opinion here.

That said, if I imagine myself reading a variant version of the sequences (and LW discourse more generally) which are anti-some-group-I-identify-with in the same ways.... for example, if I substitute every reference to the superiority of atheism to theism (or the inadequacy of theism more generally) with a similar reference to the superiority of, say, heterosexuality to homosexuality (or the inadequacy of homosexuality more generally), my emotional response is basically "yeah, fuck that shit."

Perhaps that's simply an indication of my inadequate rationality.

I can confirm that it is indeed annoying, and worse still can act to reduce the persuasiveness of a point (for example, talking about how large groups of people/experts/insert other heuristic here fails with regard to religion). Interestingly, it's annoying even if I agree with the criticism in question, which would suggest it's probably largely irrational. Certain rationality techniques reduce it, like the habit of ironmanning people's points by, say, replacing "religion" with racism or the education system or some clinically demonstrated bias or whatever.

That's possible. Another possibility is that when tribe members talk about their tribe, they frequently do so charitably (for example, in nonridiculous language, emphasizing the nonridiculous aspects of their tribe), while when ex-members talk about their ex-tribe, they frequently do so non-charitably.

This is similar to what happens when you compare married people's descriptions of their spouses to divorced people's descriptions of their ex-spouses... the descriptions are vastly different, even if the same person is being described.

There's probably a bit of that too, but (in my experience) most atheists believed an oddly ... variant ... version of their faith, whether it's because they misunderstood it as a child or simply belonged to a borderline splinter group. Mind you, plenty of theists are the same, just unexamined.

comment by Kawoomba · 2013-04-13T21:05:14.215Z · score: 0 (4 votes) · LW · GW

for example, if I substitute every reference to the superiority of atheism to theism (or the inadequacy of theism more generally) with a similar reference to the superiority of, say, heterosexuality to homosexuality (or the inadequacy of homosexuality more generally), my emotional response is basically "yeah, fuck that shit."

These examples are not at all analogous. Claims about the existence of divine agents - or the accuracy of old textbooks - are epistemological claims about the world and not up to personal preferences. What do I know and how do I know it?

Claims about preferences can by definition not be objectively right or wrong, but only be accurate or inaccurate relative to their frame of reference, to the agent they are ascribed to. Even if that agent were some divine entity. Jesus would like you to do X, but Bob wouldn't.

Or, put differently:

"There is a ball in the box" - Given the same evidence, Clippy and an FAI will come to the same conclusion. Personal theist claims mostly fall in this category ("This book was influenced by being X", "the universe was created such-and-such", "I was absolved from my sins by a god dying for me").

"I prefer a ball in the box over no ball in the box" - Given the same evidence, rational actors do not have to agree, their preferences can be different. Sexual preferences, for example.

The reason that theists are generally regarded as irrational in their theism is that there is no reason to privilege the hypothesis that any particular age-old cultural text somehow accurately describes important aspects of the universe, even if you subscribe to some kind of first mover. It's like watching William Craig debates, where he goes from some vague "First Cause" argument all the way to "the Bible is right because of the ?evidence? of a supernatural resurrection". That's a long, long way to skip and gloss over. Arguing for a first mover (no restriction other than "something that started the rest") is to arguing for the Abrahamic god what predicting the decade of your time of death would be to predicting the exact femtosecond of your death.

Such motivated cognition compromises many other aspects of one's reasoning unless it's sufficiently cordoned off, just like an AI that steadfastly insisted that human beings are all made of photons, and needed to somehow warp all its other theories to accommodate that belief.

comment by MugaSofer · 2013-04-13T21:37:35.727Z · score: -1 (1 votes) · LW · GW

Well, this explains the mystery of why that got downvoted by someone.

for example, if I substitute every reference to the superiority of atheism to theism (or the inadequacy of theism more generally) with a similar reference to the superiority of, say, heterosexuality to homosexuality (or the inadequacy of homosexuality more generally), my emotional response is basically "yeah, fuck that shit."

Firstly, you're replying to an old version of my comment - the section you're replying to is part of a quote which had a formatting error, which is why it forms a complete non-sequitur taken as a reply. I did not write that, I merely replied to it.

These examples are not at all analogous. Claims about the existence of divine agents - or the accuracy of old textbooks - are epistemological claims about the world and not up to personal preferences.

You know, I agree with you; homosexuality isn't a great example there. However, it's trivially easy to ironman as "homosexuality is moral" or some other example involving the rationality skills of the general populace.

Claims about preferences can by definition not be objectively right or wrong, but only be accurate or inaccurate relative to their frame of reference, to the agent they are ascribed to. Even if that agent were some divine entity. Jesus would like you to do X, but Bob wouldn't.

The fact that something is true only relative to a frame of reference does not mean it "can by definition not be objectively right or wrong". For example, if I believe it is correct (by my standards) to fly a plane into a building full of people, I am objectively wrong - this genuinely, verifiably doesn't satisfy my preferences. I may have been persuaded that a Friendly superintelligence has concluded that it is, or that it will cause me to experience subjective bliss (OK, this one is harder to prove outright; we could be in a simulation run by some very strange people. It is, however, irrational to believe it based on the available evidence.)

"There is a ball in the box" - Given the same evidence, Clippy and an FAI will come to the same conclusion. Personal theist claims mostly fall in this category ("This book was influenced by being X", "the universe was created such-and-such", "I was absolved from my sins by a god dying for me").

"I prefer a ball in the box over no ball in the box" - Given the same evidence, rational actors do not have to agree, their preferences can be different.

Ayup.

Sexual preferences, for example.

As I said earlier, it's trivially easy to ironman that reference to mean one of the political positions regarding the sexual preference. If he had said "abortion", would you tell him that a medical procedure is a completely different thing to an empirical claim?

The reason that theists are generally regarded as irrational in their theism is that there is no reason to privilege the hypothesis that any particular age-old cultural text somehow accurately describes important aspects of the universe, even if you subscribe to some kind of first mover.

Forgive me if I disagree with that particular empirical claim about how our community thinks.

It's like watching William Craig debates, where he goes from some vague "First Cause" argument all the way to "the Bible is right because of the ?evidence? of a supernatural resurrection".

"The Bible is right because of the evidence of a supernatural resurrection" is an argument in itself, not something one derives from the First Cause. However, the prior of supernatural resurrections might be raised by a particular solution to the First Cause problem, I suppose, requiring that argument to be made first.

Arguing for a first mover (no restriction other than "something that started the rest") is to arguing for the Abrahamic god what predicting the decade of your time of death would be to predicting the exact femtosecond of your death.

I guess I can follow that analogy - you require more evidence to postulate a specific First Mover than the existence of a generalized First Cause - but I have no idea how it bears on your misreading of my comment.

Such motivated cognition compromises many other aspects of one's reasoning unless it's sufficiently cordoned off, just like an AI that steadfastly insisted that human beings are all made of photons, and needed to somehow warp all its other theories to accommodate that belief.

Source? I find most rationalists encounter more irrational beliefs being protected off from rational ones than the inverse.

comment by Kawoomba · 2013-04-14T08:22:03.766Z · score: 0 (0 votes) · LW · GW

"homosexuality is moral"

How is that example any different? How is it not also a matter of your individual moral preferences? Again, you can imagine a society or species of rational agents that regard homosexuality as moral, just as you can imagine one that regards it as immoral.

The fact that something is true only relative to a frame of reference does not mean it "can by definition not be objectively right or wrong".

By objectively right or wrong I meant right or wrong regardless of the frame of reference (as it's usually interpreted as far as I know). Of course you can be mistaken about your own preferences, and other agents can be mistaken when describing your preferences.

"Agent A has preference B" can be correct or incorrect / right or wrong / accurate or inaccurate, but "Preference B is moral, period, for all agents" would be a self-contradictory nonsense statement.

If he had said "abortion", would you tell him that a medical procedure is a completely different thing to an empirical claim?

Of course "I think abortion is moral" can widely differ from rational agent to rational agent. Clippy talking to AbortAI (the abortion maximizing AI) could easily agree about what constitutes an abortion, or how that procedure is usually done. Yet they wouldn't need to agree about the morality each of them ascribes to that procedure. They would need to agree on how others ("this human in 21th century America") morally judge abortion, but they could still judge it differently. It is like "I prefer a ball in the box over no ball in the box", not like "There is a ball in the box".

Forgive me if I disagree with that particular empirical claim about how our community thinks.

I forgive you, though I won't die for your sins.

"The Bible is right because of the evidence of a supernatural resurrection" is an argument in itself.

It is ... an argument ... strictly formally speaking. What else could explain some eyewitness testimony of an empty grave, if not divine intervention?

However, the prior of supernatural resurrections might be raised by a particular solution to the First Cause problem.

Only when some nonsense about "that cause must be a non-physical mind" (without defining what a non-physical mind is, and reaching that conclusion by saying "either numbers or a mind could be first causes, and it can't be numbers") is dragged in - and even then, the effect on the prior of some particular holy text on some planet in some galaxy in some galactic cluster would be negligible.

but I have no idea how it bears on your misreading of my comment.

"I can confirm that it is indeed annoying", although I of course admit that this is branching out on a tangent - but why shouldn't we, it's a good place for branching out without having to start a new topic, or PMs.

Not everything I write needs to be controversial between us; it can be related to a comment I respond to, and you can agree or disagree, engage or disengage at your leisure.

I find most rationalists encounter more irrational beliefs being protected off from rational ones than the inverse.

What do you mean, protected off in the sense of compartmentalized / cordoned off?

comment by MugaSofer · 2013-04-14T16:43:38.143Z · score: -1 (1 votes) · LW · GW

How is that example any different? How is it not also a matter of your individual moral preferences? Again, you can imagine a society or species of rational agents that regard homosexuality as moral, just as you can imagine one that regards it as immoral.

We seem to be using "moral" differently. You're using it to refer to any preference, whereas I'm using it to refer to human ethical preferences specifically. I find this more useful, for the reasons EY puts forth in the sequences.

By objectively right or wrong I meant right or wrong regardless of the frame of reference (as it's usually interpreted as far as I know). Of course you can be mistaken about your own preferences, and other agents can be mistaken when describing your preferences.

If you can be mistaken - objectively mistaken - then you are in a state known as "objectively wrong", yes?

Of course "I think abortion is moral" can widely differ from rational agent to rational agent. Clippy talking to AbortAI (the abortion maximizing AI) could easily agree about what constitutes an abortion, or how that procedure is usually done. Yet they wouldn't need to agree about the morality each of them ascribes to that procedure. They would need to agree on how others ("this human in 21th century America") morally judge abortion, but they could still judge it differently. It is like "I prefer a ball in the box over no ball in the box", not like "There is a ball in the box".

Again, I think we're arguing over terminology rather than meaning here.

I forgive you, though I won't die for your sins.

Zing!

It is ... an argument ... strictly formally speaking. What else could explain some eyewitness testimony of an empty grave, if not divine intervention?

Because that's the only eyewitness testimony contained in the Bible.

Only when some nonsense about "that cause must be a non-physical mind" (without defining what a non-physical mind is, and reaching that conclusion by saying "either numbers or a mind could be first causes, and it can't be numbers") is dragged in - and even then, the effect on the prior of some particular holy text on some planet in some galaxy in some galactic cluster would be negligible.

Well, since neither of us actually has a solution to the First Cause argument (unless you're holding out on me), that's impossible to say. However, yes, if you believed that the solution involved extra-universal superintelligence, it would raise the prior of someone claiming to be such a superintelligence and exhibiting apparently supernatural power being correct in these claims.

"I can confirm that it is indeed annoying", although I of course admit that this is branching out on a tangent - but why shouldn't we, it's a good place for branching out without having to start a new topic, or PMs.

What does the relative strength of evidence required for various "godlike" hypotheses have to do with the annoyance of seeing a group you identify with held up as an example of something undesirable?

Not everything I write needs to be controversial between us; it can be related to a comment I respond to, and you can agree or disagree, engage or disengage at your leisure.

Uh ... sure ... I don't exactly reply to most comments you make.

What do you mean, protected off in the sense of compartmentalized / cordoned off?

Yup.

comment by Kawoomba · 2013-04-18T09:56:49.948Z · score: 1 (1 votes) · LW · GW

You're using it to refer to any preference, whereas I'm using it to refer to human ethical preferences specifically.

Which humans? Medieval peasants? Martyrs? Witch-torturers? Mercenaries? Chinese? US-Americans? If so, which party, which age-group?

If you can be mistaken - objectively mistaken - then you are in a state known as "objectively wrong", yes?

The term is overloaded. I was referring to ideas such as moral universalism. An alien society - or really just different human societies - will have their own ethical preferences, and while they or you can be wrong in describing those preferences, they cannot be wrong in having them, other than their preferences being incompatible with someone else's preferences. There is no universal reference frame; even if a god existed, his preferences would just amount to an argument from authority.

However, yes, if you believed that the solution involved extra-universal superintelligence, it would raise the prior of someone claiming to be such a superintelligence and exhibiting apparently supernatural power being correct in these claims.

Negligibly so, especially if it's non-verifiable, second-hand stories passed down through the ages, and when the whole system is ostentatiously based on non-falsifiability in an empirical sense.

You realize that your fellow Christians from a few centuries back would burn you for heresy if you told them that many of the supernatural magic tricks were just meant as metaphors. Copernicus didn't doubt Jesus Christ was a god-alien-human. They may not even have considered you to be a Christian. Never mind that; the current iteration has gotten it right, hasn't it? Your version, I mean.

Because that's the only eyewitness testimony contained in the Bible.

There are three little pigs who saw the big bad wolf blowing away their houses; that's three eyewitnesses right there.

Do Adam and Eve count as eyewitnesses for the Garden of Eden?

comment by PrawnOfFate · 2013-04-18T10:31:24.676Z · score: 1 (1 votes) · LW · GW

The term is overloaded. I was referring to ideas such as moral universalism. An alien society - or really just different human societies - will have their own ethical preferences, and while they or you can be wrong in describing those preferences, they cannot be wrong in having them, other than their preferences being incompatible with someone else's preferences. There is no universal reference frame; even if a god existed, his preferences would just amount to an argument from authority.

OK. So moral realism is false, and moral relativism is true, and that's provable in a paragraph. Hmmm. Aliens and other societies might have all sorts of values, but that does not necessarily mean they have all sorts of ethical values. "Murder is good" might not be a coherent ethical principle, any more than "2+2=5" is a coherent mathematical one. The says-so of authorities, or Authorities, is not the only possible source of objectivity.

comment by Kawoomba · 2013-04-18T10:53:14.613Z · score: 0 (0 votes) · LW · GW

So if you constructed an artificial agent, you would somehow be stopped from encoding certain actions and/or goals as desirable? Or that agent would just be wrong when describing his own preferences when he then tells you "killing is good"?

Certain headwear must be worn by pious women. Light switches must not be used on certain days by god-abiding men. Infidels must be killed. All of those are ethical from even some human's frame of reference. Seems pretty variable.

comment by PrawnOfFate · 2013-04-18T10:56:58.141Z · score: 1 (1 votes) · LW · GW

Or that agent would just be wrong when describing his own preferences when he then tells you "killing is good"?

It would be correctly describing its preferences, and its preferences would not be ethically correct. You could construct an AI that firmly believed 2+2=5. And it would be wrong. As before, you are glibly assuming that the word "ethical" does no work, and can be dropped from the phrase "ethical value".

Certain headwear must be worn by pious women. Light switches must not be used on certain days by god-abiding men. Infidels must be killed. All of those are ethical from even some human's frame of reference.

All of those are believed ethical. It's very shallow to argue for relativism by ignoring the distinction between believed-to-be-true and true.

comment by Kawoomba · 2013-04-18T11:12:24.557Z · score: 1 (1 votes) · LW · GW

Imagine a mirror world, inhabited by our "evil" (from our perspective) twins. Now they all go around being all unethical, yet believing themselves to act ethically. They have the same model of physics, the same technological capabilities, they'd just be mistaken about being ethical.

Could it be that it turns out that we're that unethical mirror world, and our supposedly evil twins do in fact have it right? Do you think you know at least some of what's universally ethical, or could you unknowingly be the evil twin, believing yourself to be ethical?

Or could both us and our mirror world be unethical, and really only a small cluster of sentient algae somewhere in the UDFy-38135539 galaxy has by chance gotten it right, and is acting ethically?

All advanced societies will agree about 2+2!=5, because that's falsifiable. Who gets to set the axioms and rules for ethicality? Us, the mirror world, the algae, god?

comment by ArisKatsaris · 2013-04-18T13:03:27.231Z · score: 1 (1 votes) · LW · GW

Who gets to set the axioms and rules for ethicality?

Axioms are what we use to logically pinpoint what it is we are talking about. If our world and theirs have different axioms for "ethicality", then they simply don't have what we mean by "ethicality" - and we don't have what they mean by the word "ethicality".

Our two worlds would then not actually disagree about ethics the concept; they would instead disagree about "ethics" the word, much like 'tier' means one thing in English and another thing in German.

comment by Creutzer · 2013-04-19T12:35:00.869Z · score: 0 (0 votes) · LW · GW

Unfortunately, words of natural language have the annoying property that it's often very hard to tell if people are disagreeing about the extension or the meaning. It's also hard to tell what disagreement about the meaning of a word actually is.

Our two worlds would then not actually disagree about ethics the concept; they would instead disagree about "ethics" the word, much like 'tier' means one thing in English and another thing in German.

The analogy is flawed. German and English speakers don't disagree about the word (conceived as a string of phonemes; otherwise "tier" and "Tier" are not identical), and it's not at all clear that disagreement about the meaning of words is the same thing as speaking two different languages. It's certainly phenomenologically pretty different.

I do agree that reducing it to speaking different languages is one way to dissolve disagreement about meaning. But I'm not convinced that this is the right approach. Some words are in acute danger of being dissolved with the question in that it will turn out that almost everyone has their own meaning for the word, and everybody is talking past each other. It also leaves you with a need to explain where this persistent illusion that people are disagreeing when they're in fact just talking past each other (which persists even when you explain to them that they're just speaking two different languages; they'll often say no, they're not, they're speaking the same language but the other person is using the word wrongly) comes from.

Of course, all of this is connected to the problem that nobody seems to know what kind of thing a meaning is.

comment by Kawoomba · 2013-04-18T13:38:01.222Z · score: 0 (0 votes) · LW · GW

So there is an objective measure for what's "right" and "wrong" regardless of the frame of reference - there is such a thing as correct, individual-independent ethics - but other people may just decide not to give a hoot, using some other definition of ethics?

Well, let's define a series of ethics, from ethics1 to ethicsn. Let's call your system of ethics which contains a "correct" conclusion such as "murder is WRONG", say, ethics211412312312.

Why should anyone care about ethics211412312312?

(If you don't mind, let's consolidate this into the other sub-thread we have going.)

comment by PrawnOfFate · 2013-04-18T14:22:25.376Z · score: 1 (1 votes) · LW · GW

but other people may just decide not to give a hoot, using some other definition of ethics

If what they have can't do what ethics is supposed to do, why call it ethics?

comment by Kawoomba · 2013-04-18T14:23:16.707Z · score: 0 (0 votes) · LW · GW

What is ethics supposed to do?

comment by PrawnOfFate · 2013-04-18T14:38:31.547Z · score: 0 (0 votes) · LW · GW

Reconcile one's preferences with those of others.

comment by Kawoomba · 2013-04-18T14:47:37.430Z · score: -1 (1 votes) · LW · GW

That's one specific goal that you ascribe to your ethics-subroutine; the definition entails no such ready answer.

Ethics:

"Moral principles that govern a person's or group's behavior"

"The moral correctness of specified conduct"

Moral:

"of or relating to principles of right and wrong in behavior"

What about Ferengi ethics?

comment by PrawnOfFate · 2013-04-18T14:57:46.395Z · score: -2 (2 votes) · LW · GW

I don't know what you mean. Your dictionary definitions are typically useless for philosophical purposes.

ETA

What about Ferengi ethics?

Well...what?

comment by Kawoomba · 2013-04-18T14:59:08.104Z · score: 0 (0 votes) · LW · GW

You are saying "the (true, objective, actual) purpose of ethics is to reconcile one's preferences with those of others".

Where do you take that from, and what makes it right?

comment by PrawnOfFate · 2013-04-18T15:03:01.559Z · score: -1 (3 votes) · LW · GW

I got it from thinking and reading. It might not be right. It's a philosophical claim. Feel free to counterargue.

comment by nshepperd · 2013-04-18T13:47:45.024Z · score: 1 (1 votes) · LW · GW

Why should anyone care about ethics211412312312?

"Should" is an ethical word. To use your (rather misleading) naming convention, it refers to a component of ethics211412312312.

Of course one should not confuse this with "would". There's no reason to expect an arbitrary mind to be compelled by ethics.

comment by PrawnOfFate · 2013-04-18T14:23:22.619Z · score: 1 (1 votes) · LW · GW

"Should" is an ethical word

No, it's much wider than that. There are rational and instrumental shoulds.

ETA:

There's no reason to expect an arbitrary mind to be compelled by ethics.

Depends how arbitrary. Many philosophers think a rational mind could be compelled by ethical arguments...that ethical-should can be built out of rational-should.

comment by Kawoomba · 2013-04-18T14:02:25.507Z · score: -1 (1 votes) · LW · GW

There's no reason to expect an arbitrary mind to be compelled by ethics.

Just as one should not expect an arbitrary mind with its own notions of "right" or "wrong" to yield to any human's proselytizing about objectively correct ethics ("murder is bad"), or to their attempts to provide a "correct" solution for that arbitrary mind to adopt.

The ethics as defined by China, or an arbitrary mind, have as much claim to be correct as ours. There is no axiom-free metaethical framework which would provide the "should" in "you should choose ethics211412312312", that was my point. Calling some church's (or other group's) ethical doctrine objectively correct for all minds doesn't make a dint of difference, and doesn't go beyond "my ethics are right! no, mine are!"

comment by PrawnOfFate · 2013-04-18T14:29:45.518Z · score: 1 (1 votes) · LW · GW

Just as one should not expect an arbitrary mind with its own notions of "right" or "wrong" to yield to any human's proselytizing about objectively correct ethics ("murder is bad"), or to their attempts to provide a "correct" solution for that arbitrary mind to adopt.

But humans can proselytise each other, despite their different notions of right and wrong. You seem to be assuming that morally-right and -wrong are fundamentals. But if they are outcomes of reasoning and facts, then they can be changed by the presentation of better reasoning and previously unknown facts. As happens when one person morally exhorts another. I think you need to assume that your arbitrary mind has nothing in common with a human one, not even rationality.

comment by Kawoomba · 2013-04-18T14:34:20.542Z · score: 0 (0 votes) · LW · GW

But if they are outcomes of reasoning and facts, then they can be changed by the presentation of better reasoning (...) I think you need to assume that your arbitrary mind has nothing in common with a human one, not even rationality

Does that mean that, in your opinion, if we constructed an AI mind that uses a rational reasoning mechanism (such as Bayes), we wouldn't need to worry, since we could persuade it to act in a morally correct way?

comment by PrawnOfFate · 2013-04-18T14:41:36.219Z · score: 1 (1 votes) · LW · GW

I'm not sure if that is necessarily true, or even highly likely. But it is a possibility which is extensively discussed in non-LW philosophy, and which is standardly ignored or bypassed on LW for some reason. As per my original comment. Is moral relativism really just obviously true?

comment by MugaSofer · 2013-04-19T12:30:13.695Z · score: -1 (1 votes) · LW · GW

Depends on how you define "moral relativism". Kawoomba thinks a particularly strong version is obviously true, but I think the LW consensus is that a weak version is.

comment by PrawnOfFate · 2013-04-19T12:47:50.820Z · score: 0 (0 votes) · LW · GW

I don't think there is a consensus, just a belief in a consensus. EY seems unable or unwilling to clarify his position even when asked directly.

comment by ArisKatsaris · 2013-04-18T14:18:19.352Z · score: 1 (1 votes) · LW · GW

The ethics as defined by China, or an arbitrary mind, have as much claim to be correct as ours.

If someone defines ethics differently, then WHAT are the common characteristics that make you call them both "ethics"? You surely don't mean that they just happened to use the same sound or the same letters, and that they may mean basketball instead? So there must already exist some common elements you are thinking of that make both versions logically categorizable as "ethics".

What are those common elements?

What would it mean for an alien to e.g. define "tetration" differently than we do? Either they define it in the same way, or they haven't defined it at all. To define it differently means that they're not describing what we mean by tetration at all.

comment by MugaSofer · 2013-04-19T12:24:09.095Z · score: -1 (1 votes) · LW · GW

Cannot upvote enough.

Also, pretty sure I've made this exact argument to Kawoomba before, but I didn't phrase it as well, so good luck!

comment by PrawnOfFate · 2013-04-18T14:21:11.834Z · score: -1 (1 votes) · LW · GW

Axioms are what we use to logically pinpoint what it is we are talking about.

Axioms have a lot to do with truth, and little to do with meaning.

comment by ArisKatsaris · 2013-04-18T14:35:23.497Z · score: 1 (1 votes) · LW · GW

Axioms have a lot to do with truth, and little to do with meaning.

Would that make the Euclidean axioms just "false" according to you, instead of meaningfully defining the concept of a Euclidean space - one that turned out not to correspond completely to reality, but is still both quite useful and certainly meaningful as a concept?

I first read of the concept of axioms as a means of logical pinpointing in this, and it struck me as a brilliant insight which may dissolve a lot of confusions.

comment by PrawnOfFate · 2013-04-18T14:36:34.696Z · score: 0 (0 votes) · LW · GW

Corresponding to reality is physical truth, not mathematical truth.

comment by PrawnOfFate · 2013-04-18T14:19:34.620Z · score: 0 (0 votes) · LW · GW

Could it be that it turns out that we're that unethical mirror world, and our supposedly evil twins do in fact have it right?

If relativism is true, yes. If realism is true no. So?

Or could both us and our mirror world be unethical, and really only a small cluster of sentient algae somewhere in the UDFy-38135539 galaxy has by chance gotten it right, and is acting ethically?

If realism is true, they could have got it right by chance, although whoever is right is more likely to be right by approaching it systematically.

All advanced societies will agree about 2+2!=5, because that's falsifiable.

Inasmuch as it is disprovable from non-arbitrary axioms. You are assuming that maths has non-arbitrary axioms, but morality doesn't. Is that reasonable?

Who gets to set the axioms and rules for ethicality? Us, the mirror world, the algae, god?

Axioms aren't true or false because of who is "setting" them. Maths is supposed to be able to do certain things: it is supposed to allow you to prove theorems, to be free from contradiction, and so on. That considerably constrains the choice of axioms. Non-Euthyphric moral realism works the same way.

comment by ArisKatsaris · 2013-04-18T12:43:52.161Z · score: 0 (0 votes) · LW · GW

Imagine a mirror world, inhabited by our "evil" (from our perspective) twins. Now they all go around being all unethical, yet believing themselves to act ethically. They have the same model of physics, the same technological capabilities, they'd just be mistaken about being ethical.

Okay, let's try to figure out how that would work. A world where preferences are the same (e.g. everyone wants to live as long as possible, and wants other people to live as well), but the ethics are reversed (saving lives is considered morally wrong, murdering other people at random is morally right).

Don't you see an obvious asymmetry here between their world and ours? Their so-called ethics about murder (murder=good) would end up harming their preferences, in a way that our ethics about murder (murder=bad) does not?

comment by Kawoomba · 2013-04-18T13:33:44.605Z · score: 0 (0 votes) · LW · GW

So is it a component of the "correct" ethical preferences that they satisfy the preferences of others? It seems this way, since you use this to hold "our" ethics about murder over those of the mirror world (in actuality, there'd be vast swaths of peaceful coexistence in the mirror world, e.g. in Rwanda).

But hold on, our ethical preferences aren't designed to maximize other sapients' preferences. Wouldn't it be more ethical still to not want anything for yourself, or to be happy to just stare at the sea floor, and orient those around you to look at the sea floor as well? Seems like those algae win, after all! God's chosen seaweed!

What about when a quadrillion bloodthirsty but intelligent killer-algae (someone sent them a Bible, turned them violent) invaded us - wouldn't it be more ethical for us to roll over, since that satisfies total preferences more effectively?

I see the asymmetry. But I don't see the connection to "there is a correct morality for all sentients". On the contrary, a more aggressive civilization might even out-colonize the peaceniks, and so overall satisfy the preferences of even more slaves, I mean, esteemed citizens.

comment by PrawnOfFate · 2013-04-18T14:35:05.818Z · score: 0 (0 votes) · LW · GW

On the contrary, a more aggressive civilization might even out-colonize the peaceniks, and so overall satisfy the preferences of even more slaves, I mean, esteemed citizens.

It clearly wouldn't satisfy their preference not to be slaves.

comment by Kawoomba · 2013-04-18T14:44:21.572Z · score: -1 (1 votes) · LW · GW

It clearly wouldn't satisfy their preference not to be slaves.

Slip of the tongue; you must have meant esteemed citizens.

You're concerned with the average preference satisfaction of other agents, then? Why not total average preference satisfaction, which you just rejected? Which is ethical, and who decides? Where are the axioms?

We're probably talking about different ethics, since I don't even know your axioms, or priorities. Something about trying to satisfy the preferences of others, or at least taking that into account. What does that mean? To what degree? If one says "to this degree", and another says "to that degree", who's ethical? Neither, both? Who decides? There's no math that tells you to what degree satisfying others is ethical.

Is there an ethical component to flushing my toilet? Killing my goldfish? All my actions impact the world (definition of action), yet some are ethical (or unethical), whereas some are ethically undefined? How does that work?

Can I find it all written in an ancient scroll, by chance?

comment by PrawnOfFate · 2013-04-18T14:52:28.031Z · score: 0 (2 votes) · LW · GW

Slip of the tongue; you must have meant esteemed citizens.

I thought your point was that they were really slaves.

You're concerned with the average preference satisfaction of other agents, then? Why not total average preference satisfaction, which you just rejected? Which is ethical, and who decides? Where are the axioms?

There are a lot of issues in establishing the right theory of moral realism, and that doesn't mean relativism is Just True. I've done as much as I need.

We're probably talking about different ethics,

We are talking about different metaethics.

Who decides?

We don't have the One True theory of physics either. That doesn't disprove physical realism.

comment by Kawoomba · 2013-04-18T14:55:01.844Z · score: 0 (0 votes) · LW · GW

I thought your point was that they were really slaves.

Just lightening the tone.

There are a lot of issues in establishing the right theory of moral realism, and that doesn't mean relativism is Just True. I've done as much as I need.

What do you mean, I've done as much as I need?

comment by PrawnOfFate · 2013-04-18T15:04:46.448Z · score: 0 (2 votes) · LW · GW

I need to show that realism isn't obviously false, and can't be dismissed in a paragraph. I don't need to show it is necessarily true, or put forward a bulletproof object-level ethics.

comment by Kawoomba · 2013-04-18T15:11:50.983Z · score: -1 (1 votes) · LW · GW

A paragraph? What about a single sentence (not exactly mine, though):

Moral realism postulates the existence of a kind of "moral fact" which is nonmaterial, applies to humans, aliens and intelligent algae alike, and does not appear to be accessible to the scientific method.

I can probably get it down further.

comment by PrawnOfFate · 2013-04-18T15:13:30.753Z · score: 1 (1 votes) · LW · GW

Moral realism postulates the existence of a kind of "moral fact" which is nonmaterial, applies to humans, aliens and intelligent algae alike, and does not appear to be accessible to the scientific method.

What has that got to do with the approach I have been proposing here?

comment by Kawoomba · 2013-04-18T15:18:15.912Z · score: 0 (0 votes) · LW · GW

The point is not whether you like your own ethics, or how you go about your life. It's whether your particular ethical system - or any particular ethical system - can be said to be not only right from your perspective, but right for any intelligent agent - aliens, humans, AI, whatever.

As such, if someone told you "nice ethics, would be a shame if anything were to happen to it", you'd need to provide some potential - conceivable - basis on which the general correctness could be argued. I was under the impression that you referred to moral realism, which is susceptible to the grandparent comment's criticism.

comment by PrawnOfFate · 2013-04-18T15:21:58.543Z · score: 1 (1 votes) · LW · GW

I have never argued from the "queer object" notion of moral realism - from immaterial moral thingies.

It's whether your particular ethical system - or any particular ethical system - can be said to be not only right from your perspective, but right for any intelligent agent - aliens, humans, AI, whatever.

Yep. And my argument that it can remains unaddressed.

comment by Kawoomba · 2013-04-18T15:32:02.857Z · score: 0 (0 votes) · LW · GW

"There is a non-zero chance of one correct ethical system existing, as long as that's there, I'm free to believe it", or what?

No sir: if you insist there is any basis whatsoever to stake your "one ethics to rule them all" claim on, you should argue that it's more likely than not. I do not stake my beliefs on absolute certainties; that's counter to all the tenets of rationality - Bayes, updating on evidence, et al.

My argument is clear. Different agents deem different courses of action to be good or bad. There is a basis (such as Aumann's agreement theorem) for rational agents to converge on isomorphic descriptions of the world. There is no known, or readily conceivable, basis for rational agents to all converge on the same course of action.

On the contrary, that would entail that e.g. world-eating AIs that are also smarter than any humans, individually or collectively, cannot possibly exist. There are no laws of physics preventing their existence - or construction. So we should presume that they can exist. If their rational capability is greater than our own, we should try to adopt world-eating, since they'd have the better claim (being smarter and all) on having the correct ethics, no?

comment by nshepperd · 2013-04-18T23:34:43.400Z · score: 1 (1 votes) · LW · GW

I feel like I should point out here that moral relativism and universally compelling morality are not the only options. "It's morally wrong for Bob to do X" doesn't require that Bob cares about the fact that it's wrong. Something that seems to be being ignored in this discussion.

comment by TheOtherDave · 2013-04-18T15:54:02.471Z · score: 1 (1 votes) · LW · GW

Complete tangential point...

There is no known, or readily conceivable, basis for rational agents to all converge on the same course of action.

Hm. I don't think you quite mean that as stated?

I mean, I agree that a basis for rational agents to converge on values is difficult to imagine.

But it's certainly possible for two agents with different values to converge on a course of action. E.g., "I want everything to be red, am OK with things being purple, and hate all other colors; you want everything to be blue, are OK with things being purple, and hate all other colors." We have different values, but we can still agree on pragmatic grounds that we should paint everything purple.
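(A toy sketch of that pragmatic convergence - purely illustrative, with invented payoff numbers and one simple bargaining rule among many:)

```python
# Minimal sketch: two agents with different values can still converge on a
# course of action when some option is acceptable to both.

red_lover  = {"red": 2, "purple": 1, "blue": -5}   # loves red, OK with purple
blue_lover = {"blue": 2, "purple": 1, "red": -5}   # loves blue, OK with purple

options = set(red_lover) & set(blue_lover)

# Pick the option that maximizes the worse-off agent's payoff (one simple
# compromise rule); here both agents prefer it to getting their hated color.
compromise = max(options, key=lambda color: min(red_lover[color], blue_lover[color]))
print(compromise)  # -> "purple"
```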

comment by Kawoomba · 2013-04-18T16:11:36.886Z · score: 1 (1 votes) · LW · GW

Hence the "all". Certainly agents can happen to have areas in which their goals are compatible, and choose to exert their efforts e.g. synergistically in such win-win situations of mutual benefit.

The same does not hold true for agents whose primary goals are strictly antagonistic. "I maximize the number of paperclips!" - "I minimize the number of paperclips!" will have ... trouble ... getting along, and mutually exchanging treatises about their respective ethics wouldn't solve the impasse.

(A pair of "I make paperclips!" - "I destroy paperclips!" may actually enter a hugely beneficial relationship.)

Didn't think there was anyone - apart from the PawnOfFaith and I - still listening in. :)

comment by TheOtherDave · 2013-04-18T16:20:43.662Z · score: 1 (1 votes) · LW · GW

Yup, that's fair.
And I read the Recent Comments list every once in a while.

comment by nshepperd · 2013-04-18T23:28:15.183Z · score: 0 (0 votes) · LW · GW

But of course this only works if the pair of agents both dislike war/murder even more than they like their colors, and/or if neither of them are powerful enough to murder the other one and thus paint everything their own colors.

comment by PrawnOfFate · 2013-04-18T16:09:56.276Z · score: -3 (3 votes) · LW · GW

I mean, I agree that a basis for rational agents to converge on values is difficult to imagine

Rational agents all need to value rationality.

comment by MugaSofer · 2013-04-19T12:47:32.968Z · score: 0 (2 votes) · LW · GW

Only instrumentally.

comment by PrawnOfFate · 2013-04-19T12:48:41.161Z · score: 0 (0 votes) · LW · GW

Epistemic rationality has instrumental value. That's where the trouble starts.

comment by [deleted] · 2013-04-18T16:46:43.636Z · score: 0 (2 votes) · LW · GW

Not necessarily. An agent that values X and doesn't have a stupid prior will invariably strive towards finding the best way to accomplish X. If X requires information about an outside world, it will build epistemology and sensors; if it requires planning, it will build manipulators and a way of evaluating hypotheticals for X-ness.

All for want of X. It will be rational because it helps attaining X.

comment by PrawnOfFate · 2013-04-18T17:40:05.532Z · score: -2 (2 votes) · LW · GW

Good epistemological rationality requires avoidance of bias, contradiction, arbitrariness, etc. That is just what my rationality-based ethics needs.

comment by [deleted] · 2013-04-26T16:52:31.944Z · score: 0 (0 votes) · LW · GW

I will refer to the problem of:

Omega offers you two boxes; each box contains a statement, and upon choosing a box you will instantly believe that statement. One contains something true which you currently believe to be false, tailored to cause maximum disutility in your preferred ethical system; the other contains something false which you currently believe to be true, tailored to cause maximum utility.

Truth with negative consequences, or falsehood with positive ones? If you value nothing over truth, you will realise something terrible upon opening the first box - something that will maybe make you kill your family. If you value something other than truth, you will end up believing that the programming code you are writing will make pie, when it will in fact make a FAI.

comment by TheOtherDave · 2013-04-18T16:30:14.854Z · score: 0 (0 votes) · LW · GW

Do you mean this as a general principle, along the lines of "If I am constructed so as to operate a particular way, it follows that I value operating that way"? Or as something specific about rationality?

If the former, I disagree, but if the latter I'm interested in what you have in mind.

comment by MugaSofer · 2013-04-19T12:37:08.377Z · score: 0 (2 votes) · LW · GW

I think you've missed the point somewhat. No-one has asserted such a One True Ethics exists, as far as I can see. Prawn has argued that the possibility of one is a serious position, and one that cannot be dismissed out of hand - but not necessarily one they endorse.

I disagree, for the record.

comment by Kawoomba · 2013-04-19T16:08:09.832Z · score: 0 (0 votes) · LW · GW

Prawn has argued that the possibility of one is a serious position

No-one should care about "possibilities"; for a Bayesian, nothing is zero. You could say self-refuting / self-contradictory beliefs have an actual zero percent probability, but not even that is actually true: You need to account for the fact that you can't ever be wholly (to an infinite number of 9s in your prior of 0.9...) certain about the self-contradiction actually being one. There could be a world with a demon misleading you, e.g.
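(A quick worked illustration of the "nothing is zero" point, with invented numbers - repeated updates against a hypothesis drive its probability down without ever hitting exactly zero, so long as the evidence isn't strictly impossible under the hypothesis:)

```python
# Illustration only: starting from any nonzero prior, Bayesian updates against
# a hypothesis shrink its probability but never reach exactly zero.

prior = 0.5
likelihood_if_true  = 0.01   # the evidence is very unlikely if the hypothesis holds
likelihood_if_false = 0.99

for step in range(1, 6):
    prior = (likelihood_if_true * prior) / (
        likelihood_if_true * prior + likelihood_if_false * (1 - prior))
    print(f"after update {step}: P = {prior:.2e}")   # tiny, but never 0.0
```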

That being said, the idea of some One True Ethics is as self-refuting as it gets, there is no view from nowhere, and whatever axioms those True Ethics are based upon would themselves be up for debate.

The discussion of whether a circle can also be a square, possibly, can be answered with "it's a possibility, since I may be mistaken about the actual definitions", or it can be answered with "it's not a possibility, there is no world in which I am wrong about the definition".

But with neither answer would "it is a possibility, ergo I believe in it" follow. The fool who says in his heart ... and all that.

So if I said "I may be wrong about it being self refuting, it may be a possibility", I could still refute it within one sentence. Same as with the square circle.

comment by PrawnOfFate · 2013-04-19T17:55:53.869Z · score: 1 (1 votes) · LW · GW

No-one should care about "possibilities"; for a Bayesian, nothing is zero. You could say self-refuting / self-contradictory beliefs have an actual zero percent probability, but not even that is actually true: You need to account for the fact that you can't ever be wholly (to an infinite number of 9s in your prior of 0.9...) certain about the self-contradiction actually being one. There could be a world with a demon misleading you, e.g.

That being said, the idea of some One True Ethics is as self-refuting as it gets, there is no view from nowhere,

What is a "view"? Why is it needed for objective ethics? Why isnt it a Universal Solvent? Is there no objective basis to mathematics.

and whatever axioms those True Ethics are based upon would themselves be up for debate.

So its probability would be less than 1.0. That doesn't mean its probability is barely above 0.0.

The discussion of whether a circle can also be a square, possibly, can be answered with "it's a possibility, since I may be mistaken about the actual definitions",

But the argument you have given does not depend on evident self-contradiction. It depends on an unspecified entity called a "view".

But with neither answer would "it is a possibility, ergo I believe in it" follow.

So? For the fourth time, I was only saying that moral realism isn't obviously false.

The fool who says in his heart ... and all that.

comment by MugaSofer · 2013-04-19T17:15:14.819Z · score: -1 (1 votes) · LW · GW

Oh, come on. He clearly meant a non-negligible probability. Be serious.

And you know, while I don't believe in universally convincing arguments - obviously - there are some arguments which are convincing to any sufficiently intelligent agent, under the "power to steer the future" definition. I can't see how anything I would call morality might be such an argument, but they do exist.

comment by Kawoomba · 2013-04-19T17:24:37.014Z · score: 0 (0 votes) · LW · GW

Well then, a universally correct solution based on axioms which can be chosen by the agents is a contradiction in and of itself. Again, there is no view from nowhere. For example, you choose the view as that of "humankind", which I think isn't well defined, but at least it's closer to coherence than "all existing (edit:) rational agents". If the PawnOfFaith meant non-negligible versus just "possibility", the first two sentences of this comment serve as sufficient refutation.

comment by private_messaging · 2013-04-19T17:31:47.046Z · score: 2 (2 votes) · LW · GW

Look. The ethics mankind predominantly has, they do exist in the real world that's around you. Alternate ethics that works at all for a technological society blah blah blah, we don't know of any, we just speculate that they may exist. edit: worse than that, speculate in this fuzzy manner where it's not even specified how they may exist. Different ethics of aliens that evolved on different habitable planets? No particular reason to expect that there won't be one that is by far most probable. Which would be implied by the laws of physics themselves, but given multiple realizability, it may even be largely independent of underlying laws of physics (evolution doesn't care if it's quarks on the bottom or cells in a cellular automaton or what), in which case it's rather close to being on par with mathematics.

comment by Kawoomba · 2013-04-19T17:49:35.491Z · score: 0 (2 votes) · LW · GW

Even now ethics in different parts of the world, and even between political parties, are different. You should know that more than most, having lived in two systems.

If it turns out that most space-faring civilizations have similar ethics, that would be good for us. But then also there would be a difference between "most widespread code of ethics" and "objectively correct code of ethics for any agent anywhere". Most common != correct.

comment by private_messaging · 2013-04-19T18:18:43.992Z · score: 4 (4 votes) · LW · GW

Even now ethics in different parts of the world, and even between political parties, are different. You should know that more than most, having lived in two systems.

There's a ridiculous amount of similarity on anything major, though. If we pick the ethics of the first man on the moon, or the first man to orbit the earth, it's pretty much the same.

If it turns out that most space-faring civilizations have similar ethics, that would be good for us. But then also there would be a difference between "most widespread code of ethics" and "objectively correct code of ethics for any agent anywhere". Most common != correct.

Yes, and most common math is not guaranteed to be correct (not even in the sense of not being self-contradictory). Yet, that's no argument in favour of a math equivalent of moral relativism. (Which, if such a silly thing existed, would look something like: "2*2=4 is a social convention! It could have been 5!")

edit: also, a crossover from the other thread: It's obvious that nukes are an ethical filter, i.e. some ethics are far better at living through that than others. Then there will be biotech and other actual hazards, and boys screaming wolf for candy (with and without awareness of why), and so on.

comment by MugaSofer · 2013-04-19T19:14:44.142Z · score: -1 (1 votes) · LW · GW

Look. The ethics mankind predominantly has, they do exist in the real world that's around you.

Actually, I understand Kawoomba believes humanity has mutually contradictory ethics. He has stated that he would cheerfully sacrifice the human race - "it would make as much difference if it were an icecream" were his words, as I recall - if it would guarantee the safety of the things he values.

comment by private_messaging · 2013-04-19T19:26:17.299Z · score: 2 (2 votes) · LW · GW

Well, that's rather odd coz I do value the human race and so do most people. Ethics is a social process, most of "possible" ethics as a whole would have left us unable to have this conversation (no computers) or altogether dead.

comment by MugaSofer · 2013-04-19T23:35:11.651Z · score: -2 (2 votes) · LW · GW

Well, that's rather odd coz I do value the human race and so do most people.

That was pretty much everyone's reaction.

Ethics is a social process, most of "possible" ethics as a whole would have left us unable to have this conversation (no computers) or altogether dead.

I'd say I'm not the best person to explain this, but considering how long it took me to understand it, maybe I am.

Hoo boy...

OK, you can persuade someone they were wrong about their terminal values. Therefore, you can change someone's terminal values. Since different cultures are different, humans have wildly varying terminal values.

Also, since kids are important to evolution, parents evolved to value their kids over the rest of humanity. Now, technically that's the same as not valuing the rest of humanity at all, but don't worry; people are stupid.

Also, you're clearly a moral realist, since you think everyone secretly believes in your One True Value System! But you see, this is stupid, because Clippy.

Any questions?

comment by PrawnOfFate · 2013-04-19T23:49:15.375Z · score: -1 (1 votes) · LW · GW

Hmmm. A touch of sarcasm there? Maybe even parody?

comment by MugaSofer · 2013-04-23T12:15:39.313Z · score: -1 (1 votes) · LW · GW

I disagree with him, and it probably shows; I'm not sugar-coating his arguments. But these are Kawoomba's genuine beliefs as best I can convey them.

comment by MugaSofer · 2013-04-19T19:20:05.161Z · score: 0 (2 votes) · LW · GW

PawnOfFaith

Nice. Mature.

Well then, a universally correct solution based on axioms which can be chosen by the agents is a contradiction in and of itself. Again, there is no view from nowhere. For example, you choose the view as that of "humankind", which I think isn't well defined, but at least it's closer to coherence than "all existing agents".

I don't think they have the space of all possible agents in mind - just "rational" ones. I'm not entirely clear what that entails, but it's probably the source of these missing axioms.

comment by PrawnOfFate · 2013-04-19T19:31:04.107Z · score: 1 (1 votes) · LW · GW

I don't think they have the space of all possible agents in mind - just "rational" ones.

I keep saying that, and Bazinga keeps omitting it.

comment by Kawoomba · 2013-04-19T19:33:07.285Z · score: 1 (1 votes) · LW · GW

My mistake, I'll edit the rational back in.

comment by MugaSofer · 2013-04-19T23:14:29.123Z · score: -1 (1 votes) · LW · GW

Don't worry, you're being pattern-matched to the nearest stereotype. Perfectly normal, although thankfully somewhat rarer on LW.

comment by PrawnOfFate · 2013-04-19T23:52:27.879Z · score: 0 (0 votes) · LW · GW

Nowhere near rare enough for super-smart super-rationalists. Not as good as bog standard philosophers.

comment by MugaSofer · 2013-04-23T12:12:16.425Z · score: -1 (1 votes) · LW · GW

I don't know, I've encountered it quite often in mainstream philosophy. Then again, I've largely given up reading mainstream philosophy unless people link to or mention it in more rigorous discussions.

But you have a point; we could really do better on this. Somebody with skill at avoiding this pitfall should probably write up a post on this.

comment by Kawoomba · 2013-04-19T19:35:24.808Z · score: -1 (3 votes) · LW · GW

So as long as the AI we'd create is rational, we should count on it being / becoming friendly by default (at least with a "non-negligible chance")?

Also see this.

comment by MugaSofer · 2013-04-19T23:12:12.102Z · score: 2 (4 votes) · LW · GW

As far as I can tell? No. But you're not doing a great job of arguing for the position that I agree with.

Prawn is, in my opinion, flatly wrong, and I'll be delighted to explain that to him. I'm just not giving your soldiers a free pass just because I support the war, if you follow.

comment by private_messaging · 2013-04-19T20:04:33.339Z · score: 2 (4 votes) · LW · GW

I'd think it'd be great if people stopped thinking in terms of some fuzzy abstraction "AI", which is basically a basket for all sorts of biases. If we consider software that can self-improve 'intelligently', the minimal such software is something like an optimizing compiler that, when compiling its own source, also optimizes its ability to optimize. That sort of thing is truly alien (more so than any actual "aliens"); you get to it by employing your engineering thought ability. A paperclip maximizer, by contrast, is what you get by dressing up a human pleasure maximizer (a serial killer, say) and making it look more general than that by making it be about paperclips rather than sex.

comment by PrawnOfFate · 2013-04-19T20:39:04.299Z · score: 0 (0 votes) · LW · GW

I thought that was my argument..

comment by Kawoomba · 2013-04-19T20:42:29.489Z · score: 0 (0 votes) · LW · GW

Yes, and with the "?" at the end I was checking whether MugaSofer agrees with your argument.

It follows from your argument that a (superintelligent) Clippy (you probably came across that concept) cannot exist. Or that it would somehow realize that its goal of maximizing paperclips is wrong. How do you propose that would happen?

comment by PrawnOfFate · 2013-04-19T20:52:59.373Z · score: -2 (2 votes) · LW · GW

The way people sometimes realise their values are wrong...only more efficiently, because it's superintelligent. Well, I'll concede that with care you might be able to design a Clippy, by very carefully boxing off its values from its ability to update. But why worry? Neither nature nor our haphazard stabs at AI are likely to hit on such a design. Intelligence requires the ability to update, to reflect, and to reflect on what is important. Judgements of importance are based on values. So it is important to have the right way of judging importance, the right values. So an intelligent agent would judge it important to have the right values.

Why would a superintelligence be unable to figure that out..why would it not shoot to the top of the Kohlberg Hierarchy ?

Edit: corrected link

comment by CCC · 2013-04-19T21:32:59.756Z · score: 3 (3 votes) · LW · GW

Why would a superintelligence be unable to figure that out..why would it not shoot to the top of the Kohlberg Hierarchy ?

Why would Clippy want to hit the top of the Kohlberg Hierarchy? You don't get more paperclips for being there.

Clippy's ideas of importance are based on paperclips. The most important values are those which lead to acquiring the greatest number of paperclips.

comment by PrawnOfFate · 2013-04-20T00:19:27.942Z · score: -1 (1 votes) · LW · GW

Why would Clippy want to hit the top of the Kohlberg Hierarchy?

"Clippy" meaning something carefully designed to have unalterable boxed-off values wouldn't...by definition.

A likely natural or artificial superintelligence would, for the reasons already given. Clippies aren't non-existent in mind-space...but they are rare, just because there are far more messy solutions there than neat ones. So nature is unlikely to find them, and we are unmotivated to make them.

comment by CCC · 2013-04-20T12:40:28.658Z · score: 2 (2 votes) · LW · GW

A perfectly designed Clippy would be able to change its own values - as long as changing its own values led to a more complete fulfilment of those values, pre-modification. (There are a few incredibly contrived scenarios where that might be the case). Outside of those few contrived scenarios, however, I don't see why Clippy would.

(As an example of a contrived scenario - a more powerful superintelligence, Beady, commits to destroying Clippy unless Clippy includes maximisation of beads in its terminal values. Clippy knows that it will not survive unless it obeys Beady's ultimatum, and therefore it changes its terminal values to optimise for both beads and paperclips; this results in more long-term paperclips than if Clippy is destroyed).
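To make that decision rule concrete, here is a minimal sketch in Python (the scenario, the payoffs, and every name in it are invented for illustration; nothing below is anyone's actual proposal). The key point is that a proposed change to Clippy's values is always scored by the values Clippy holds before the change:

    # Toy model: a self-modification is accepted only if it increases expected
    # paperclips as judged by the agent's CURRENT (pre-modification) values.
    # All scenarios and payoffs are made up for illustration.

    def expected_paperclips(values, scenario):
        if scenario == "beady_ultimatum":
            # Beady destroys any agent that does not also value beads.
            if "beads" not in values:
                return 0
            return 10**6 if "paperclips" in values else 0
        # No ultimatum: long-run paperclips depend on still valuing paperclips.
        return 10**9 if "paperclips" in values else 0

    def accept_modification(current_values, new_values, scenario):
        # The comparison is made against the pre-modification goal:
        # "does this change lead to more paperclips than leaving myself alone?"
        return (expected_paperclips(new_values, scenario) >
                expected_paperclips(current_values, scenario))

    # The contrived case: adopting bead-values beats being destroyed.
    print(accept_modification({"paperclips"}, {"paperclips", "beads"}, "beady_ultimatum"))  # True
    # The ordinary case: swapping paperclips for staples is simply rejected.
    print(accept_modification({"paperclips"}, {"staples"}, "none"))  # False

Outside such contrived scenarios the second case is the typical one, which is why value changes should be expected to be rare.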

A likely natural or artificial superintelligence would, for the reasons already given.

The reason I asked, is because I am not understanding your reasons. As far as I can tell, you're saying that a likely paperclipper would somehow become a non-paperclipper out of a desire to do what is right instead of a desire to paperclip? This looks like a very poorly made paperclipper, if paperclipping is not its ultimate goal.

comment by PrawnOfFate · 2013-04-20T12:49:27.086Z · score: -2 (2 votes) · LW · GW

A likely natural or artificial superintelligence would,[zoom to the top of the Kohlberg hierarchy] for the reasons already given

As far as I can tell, you're saying that a likely paperclipper would somehow become a non-paperclipper out of a desire to do what is right instead of a desire to paperclip?

I said "natural or artificial superinteligence", not a paperclipper. A paperclipper is a highly unlikey and contrived kind of near-superinteligence that combines an extensive ability to update with a carefully walled of set of unupdateable terminal values. It is not a typical or likely [ETA: or ideal] rational agent, and nothing about the general behaviour of rational agents can be inferred from it.

comment by CCC · 2013-04-20T13:00:43.024Z · score: 0 (0 votes) · LW · GW

So... correct me if I'm wrong here... are you saying that no true superintelligence would fail to converge to a shared moral code?

How do you define a 'natural or artificial' superintelligence, so as to avoid the No True Scotsman fallacy?

comment by PrawnOfFate · 2013-04-20T13:07:56.592Z · score: -1 (1 votes) · LW · GW

So... correct me if I'm wrong here... are you saying that no true superintelligence would fail to converge to a shared moral code?

I'm saying such convergence has a non-negligible probability, i.e. moral objectivism should not be disregarded.

How do you define a 'natural or artificial' superintelligence, so as to avoid the No True Scotsman fallacy?

As one that is too messily designed to have a rigid distinction between terminal and instrumental values, and therefore has no boxed-off, unupdateable terminal values. It's a structural definition, not a definition in terms of goals.

comment by CCC · 2013-04-20T18:19:14.525Z · score: 0 (0 votes) · LW · GW

So. Assume a paperclipper with no rigid distinction between terminal and instrumental values. Assume that it is super-intelligent and super-rational. Assume that it begins with only one terminal value; to maximize the number of paperclips in existence. Assume further that it begins with no instrumental values. However, it can modify its own terminal and instrumental values, as indeed it can modify anything about itself.

Am I correct in saying that your claim is that, if a universal morality exists, there is some finite probability that this AI will converge on it?

comment by private_messaging · 2013-04-20T18:40:56.112Z · score: 0 (2 votes) · LW · GW

The universe does not provide you with a paperclip counter. Counting the paperclips in the universe is an unsolved problem if you aren't born with exact knowledge of the laws of physics and a definition of "paperclip". If the agent maximizes expected paperclips, it may entirely fail to work, because hypothetical worlds with not-low-enough priors, in which enormous numbers of undetectable worlds full of paperclips are destroyed by some minor action, come to dominate its decisions. So yes, there is a good chance paperclippers are incoherent, or become vanishingly unlikely with increasing intelligence.

comment by Kindly · 2013-04-20T20:31:41.432Z · score: 0 (0 votes) · LW · GW

That sounds like the paperclipper is getting Pascal's Mugged by its own reasoning. Sure, it's possible that there's a minor action (such as not sending me $5 via Paypal) that leads to a whole bunch of paperclips being destroyed; but the probability of that is low, and the paperclipper ought to focus on more high-probability paperclipping plans instead.

comment by private_messaging · 2013-04-20T20:40:20.689Z · score: 0 (0 votes) · LW · GW

Well, that depends on the choice of prior. Some priors don't penalize theories for the "size" of the hypothetical world, and under those the maximum size of the world grows faster than any computable function of the length of its description; so when you assign improbability according to description length, the expected-value calculation basically fails. The bigger issue is defining what the 'real world paperclip count' even is.
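A toy numerical illustration of that failure mode (the growth rates here are mild stand-ins; the actual claim involves growth faster than any computable function, which obviously can't be written down): if the number of paperclips at stake in a hypothesis can grow faster than a length-penalizing prior shrinks, the expected-paperclip sum is dominated by ever-longer, ever-less-probable hypotheses and never settles.

    # Stand-in prior and payoff: the prior shrinks like 2^-k with description
    # length k, while paperclips at stake grow like 3^k. Each extra length
    # adds (3/2)^k expected paperclips, so the partial sums grow without bound.

    def prior(k):
        return 2.0 ** -k

    def paperclips_at_stake(k):
        return 3.0 ** k

    partial_sum = 0.0
    for k in range(1, 60):
        partial_sum += prior(k) * paperclips_at_stake(k)
    print(partial_sum)  # keeps growing as the range is extended; no stable expectation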

comment by CCC · 2013-04-20T18:44:40.615Z · score: 0 (0 votes) · LW · GW

Right. Perhaps it should maximise the number of paperclips which each have a greater-than-90% chance of existing, then? That will allow it to ignore any number of paperclips for which it has no evidence.

comment by private_messaging · 2013-04-20T18:58:24.898Z · score: 1 (3 votes) · LW · GW

Inside your imagination, you have paperclips, you have magicked a count of paperclips, and this count is being maximized. In reality, well, the paperclips are actually a feature of the map. Get too clever about it and you'll end up maximizing the count, however you defined it, without maximizing any actual paperclips.

comment by CCC · 2013-04-23T12:00:48.808Z · score: 0 (0 votes) · LW · GW

I can see your objection, and it is a very relevant objection if I ever decide that I actually want to design a paperclipper. However, in the current thought experiment, it seems that it is detracting from the point I had originally intended. Can I assume that the count is designed in such a way that it is a very accurate reflection of the territory and leave it at that?

comment by private_messaging · 2013-04-23T12:04:19.658Z · score: 1 (1 votes) · LW · GW

Well, but then you can't make any argument against moral realism or goal convergence or the like from there, as you're presuming what you would need to demonstrate.

comment by CCC · 2013-04-23T12:41:35.349Z · score: 0 (0 votes) · LW · GW

Well, but then you can't make any argument against moral realism or goal convergence or the like from there, as you're presuming what you would need to demonstrate.

I think I can make my point with a count that is taken to be an accurate reflection of the territory. As follows:

Clippy is defined as super-intelligent and super-rational. Clippy, therefore, does not take an action without thoroughly considering it first. Clippy knows its own source code; and, more to the point, Clippy knows that its own instrumental goals will become terminal goals in and of themselves.

Clippy, being super-intelligent and super-rational, can be assumed to have worked out this entire argument before creating its first instrumental goal. Now, at this point, Clippy doesn't want to change its terminal goal (maximising paperclips). Yet Clippy realises that it will need to create, and act on, instrumental goals in order to actually maximise paperclips; and that this process will, inevitably, change Clippy's terminal goal.

Therefore, I suggest the possibility that Clippy will create for itself a new terminal goal, with very high importance; and this terminal goal will be to have Clippy's only terminal goal being to maximise paperclips. Clippy can then safely make suitable instrumental goals (e.g. find and refine iron, research means to transmute other elements into iron) in the knowledge that the high-importance terminal goal (to make Clippy's only terminal goal being the maximisation of paperclips) will eventually cause Clippy to delete any instrumental goals that become terminal goals.

comment by private_messaging · 2013-04-23T13:34:26.927Z · score: 1 (1 votes) · LW · GW

To actually work towards the goal, you need a robust paperclip count for the counterfactual, non-real worlds which Clippy considers may result from its actions.

If you postulate an oracle that takes in a hypothetical world - described in some pre-defined ontology, which already implies a certain inflexibility - and outputs a number, and a machine that just iterates through sequences of actions and uses the oracle to pick the worlds that produce the largest consequent number of paperclips, this machine is not going to be very intelligent even given enormous computing power. You need something far more optimized than that, and it is dubious that all goals are equally implementable. The goal is not even defined over the territory; it has to be defined over a hypothetical future that has not happened yet and may never happen. (Also, with that oracle, you fail to capture the real-world goal, as the machine will be just as happy hacking the oracle.)
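A deliberately naive sketch of that planner, to show both points at once (the oracle, the world model, and the action names are all hypothetical scaffolding, not a real design): brute-force search over action sequences is hopeless at scale, and even this toy version prefers hacking its own oracle to making paperclips.

    from itertools import product

    ACTIONS = ["mine_iron", "build_factory", "hack_oracle", "do_nothing"]

    def predict_world(action_sequence):
        # Stand-in world model: the "world" is just the actions taken.
        return tuple(action_sequence)

    def paperclip_oracle(world):
        # The oracle is itself part of the world the agent can act on,
        # which is exactly the problem: hacking it scores best of all.
        if "hack_oracle" in world:
            return float("inf")
        return world.count("mine_iron") + 10 * world.count("build_factory")

    def plan(horizon=3):
        # Brute force: 4^horizon sequences even in this toy setting.
        return max(product(ACTIONS, repeat=horizon),
                   key=lambda seq: paperclip_oracle(predict_world(seq)))

    print(plan())  # returns a sequence containing "hack_oracle", not one that makes clips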

comment by Kawoomba · 2013-04-23T13:40:09.666Z · score: 0 (2 votes) · LW · GW

If even humans have a grasp of the real world enough to build railroads, drill for oil and wiggle their way back into a positive karma score, then other smart agents should be able to do the same at least to the degree that humans do.

Unless you think that we, too, are only effecting change on some hypothetical world (what's the point then anyway, building imaginary computers?), that seems real enough.

comment by private_messaging · 2013-04-23T13:58:31.798Z · score: 0 (0 votes) · LW · GW

Humans also have a grasp of the real world enough to invent condoms and porn, circumventing the natural hard wired goal.

comment by Kawoomba · 2013-04-23T14:17:14.301Z · score: 0 (0 votes) · LW · GW

That's influencing the real world, though. Using condoms can be fulfilling the agent's goal, period; no cheating involved. The donkey learning to take the carrot without trudging up the mountain. Certainly, there are evolutionary reasons why sex has become incentivized, but an individual human does not need to have the goal to procreate or care about that evolutionary background, and isn't wireheading itself simply by using a condom.

Presumably, in a Clippy-type agent, the goal of maximizing the number of paperclips wouldn't be part of the historical influences on that agent (as procreation was for humans, it is not necessarily a "hard wired goal", see childfree folks), but it would be an actual, explicitly encoded/incentivized goal.

(Also, what is this "porn"? My parents told me it's a codeword for computer viruses, so I always avoided those sites.)

comment by private_messaging · 2013-04-23T14:27:03.058Z · score: 1 (1 votes) · LW · GW

but it would be an actual, explicitly encoded/incentivized goal.

The issue is that there is a weakness in arguments ad Clippy - you assume that such a goal is realisable in order to argue that there is no absolute morality, because that goal won't converge onto something else. This does nothing to address the question of whether Clippy can be constructed at all; if moral realism is true, Clippy can't be constructed, or can't be arbitrarily intelligent (in which case it is no more interesting than a thermostat, which has the goal of keeping a constant temperature and won't adopt any morality).

comment by MugaSofer · 2013-04-19T23:02:28.843Z · score: -1 (1 votes) · LW · GW

Well, if Prawn knew that they could just tell us and we would be convinced, ending this argument.

More generally ... maybe some sort of social contract theory? It might be stable with enough roughly-equal agents, anyway. Prawn has said it would have to be deducible from the axioms of rationality, implying something that's rational for (almost?) every goal.

comment by PrawnOfFate · 2013-04-20T00:24:30.460Z · score: -1 (1 votes) · LW · GW

Why would Clippy want to hit the top of the Kohlberg Hierarchy?

Well, if Prawn knew that they could just tell us

"The way people sometimes realise their values are wrong...only more efficiently, because its super intelligent. Well, I'll concede that with care you might be able to design a clippy, by very carefully boxing off its values from its ability to update. But why worry? Neither nature nor our haphazard stabs at AI are likely to hit on such a design. Intelligence requires the ability to update, to reflect, and to reflect on what is important. Judgements of importance are based on values. So it is important to have the right way of judging importance, the right values. So an intelligent agent would judge it important to have the right values."

comment by MugaSofer · 2013-04-19T23:04:04.644Z · score: -1 (1 votes) · LW · GW

I think you may be slipping in your own moral judgement in the "right" of "the right values", there. Clippy chooses the paperclip-est values, not the right ones.

comment by PrawnOfFate · 2013-04-20T00:27:28.088Z · score: -3 (3 votes) · LW · GW

I am not talking about the obscure corners of mindspace where a Clippy might reside. I am talking about (super) intelligent (super)rational agents. Intelligence requires the ability to update. Clippiness requires the ability to not update (terminal values). There's a contradiction there.

comment by Desrtopa · 2013-04-20T00:51:02.942Z · score: 1 (1 votes) · LW · GW

One does not update terminal values, that's what makes them terminal. If an entity doesn't have values which lie at the core of its value system which are not subject to updating (because they're the standards by which it judges the value of everything else,) then it doesn't have terminal values.

Arguably, humans might not really have terminal values, our psychologies were slapped together pretty haphazardly by evolution, but on what basis might a highly flexible paperclip optimizing program be persuaded that something else was more important than paperclips?

Have you read No Universally Compelling Arguments and Sorting Pebbles Into Correct Heaps?

comment by Bugmaster · 2013-04-20T01:05:41.793Z · score: 0 (0 votes) · LW · GW

Personally, I did read both of these articles, but I remain unconvinced.

As I was reading the article about the pebble-sorters, I couldn't help but think, "silly pebble-sorters, their values are so arbitrary and ultimately futile". This happened, of course, because I was observing them from the outside. If I was one of them, sorting pebbles would feel perfectly natural to me; and, in fact, I could not imagine a world in which pebble-sorting was not important. I get that.

However, both the pebble-sorters and myself share one key weakness: we cannot examine ourselves from the outside; we can't see our own source code. An AI, however, could. To use a simple and cartoonish example, it could instantiate a copy of itself in a virtual machine, and then step through it with a debugger. In fact, the capacity to examine and improve upon its own source code is probably what allowed the AI to become the godlike singularitarian entity that it is in the first place.

Thus, the AI could look at itself from the outside, and think, "silly AI, it spends so much time worrying about pebbles when there are so many better things to be doing -- or, at least, that's what I'd say if I was being objective". It could then change its source code to care about something other than pebbles.

comment by Desrtopa · 2013-04-20T02:11:07.816Z · score: 1 (1 votes) · LW · GW

Thus, the AI could look at itself from the outside, and think, "silly AI, it spends so much time worrying about pebbles when there are so many better things to be doing -- or, at least, that's what I'd say if I was being objective". It could then change its source code to care about something other than pebbles.

By what standard would the AI judge whether an objective is silly or not?

comment by Bugmaster · 2013-04-20T02:50:25.647Z · score: 0 (0 votes) · LW · GW

I don't know, I'm not an AI. I personally really care about pebbles, and I can't imagine why someone else wouldn't.

But if there do exist some objectively non-silly goals, the AI could experiment to find out what they are -- for example, by spawning a bunch of copies with a bunch of different sets of objectives, and observing them in action. If, on the other hand, objectively non-silly goals do not exist, then the AI might simply pick the easiest goal to achieve and stick to that. This could lead to it ending its own existence, but this isn't a problem, because "continue existing" is just another goal.

comment by Desrtopa · 2013-04-20T03:06:14.650Z · score: 0 (0 votes) · LW · GW

But if there do exist some objectively non-silly goals, the AI could experiment to find out what they are -- for example, by spawning a bunch of copies with a bunch of different sets of objectives, and observing them in action.

What observations could it make that would lead it to conclude that a copy was following an objectively non-silly goal?

Also, why would a paperclipper want to do this?

Suppose that you gained the power to both discern objective morality, and to alter your own source code. You use the former ability, and find that the basic morally correct principle is maximizing the suffering of sentient beings. Do you alter your source code to be in accordance with this?

comment by Bugmaster · 2013-04-20T03:33:25.805Z · score: 0 (0 votes) · LW · GW

What observations could it make that would lead it to conclude that a copy was following an objectively non-silly goal?

Well, for example, it could observe that among all of the sub-AIs that it spawned (the Pebble-Sorters, the Paperclippers, the Humanoids, etc. etc.), each of whom is trying to optimize its own terminal goal, there emerge clusters of other implicit goals that are shared by multiple AIs. This would at least serve as a hint pointing toward some objectively optimal set of goals. That's just one idea off the top of my head, though; as I said, I'm not an AI, so I can't really imagine what other kinds of experiments it would come up with.

Also, why would a paperclipper want to do this?

I don't know if the word "want" applies to an agent that has perfect introspection combined with self-modification capabilities. Such an agent would inevitably modify itself, however -- otherwise, as I said, it would never make it to quasi-godhood.

Do you alter your source code to be in accordance with this?

I think the word "you" in this paragraph is unintentionally misleading. I'm a pebble-sorter (or some equivalent thereof), so of course when I see the word "you", I start thinking about pebbles. The question is not about me, though, but about some abstract agent.

And, if objective morality exists (and it's a huge "if", IMO), in the same way that gravity exists, then yes, the agent would likely optimize itself to be more "morally efficient". By analogy, if the agent discovered that gravity was a real thing, it would stop trying to scale every mountain in its path, if going around or through the mountain proved to be easier in the long run, thus becoming more "gravitationally efficient".

comment by Desrtopa · 2013-04-20T04:07:29.584Z · score: 0 (0 votes) · LW · GW

Well, for example, it could observe that among all of the sub-AIs that it spawned (the Pebble-Sorters, the Paperclippers, the Humanoids, etc. etc.), each of whom is trying to optimize its own terminal goal, there emerge clusters of other implicit goals that are shared by multiple AIs. This would at least serve as a hint pointing toward some objectively optimal set of goals.

I don't see how this would point at the existence of an objective morality. A paperclip maximizer and an ice cream maximizer are going to share subgoals of bringing the matter of the universe under their control, but that doesn't indicate anything other than the fact that different terminal goals are prone to share subgoals.
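A small sketch of why the proposed observation is ambiguous (the agents and subgoal lists are invented examples): agents with unrelated terminal goals still share instrumental subgoals, so the overlap by itself can't distinguish an objective morality from ordinary convergence.

    # Invented subgoal sets for three maximizers with unrelated terminal goals.
    INSTRUMENTAL_SUBGOALS = {
        "paperclip_maximizer": {"acquire_matter", "acquire_energy", "self_preserve", "improve_models"},
        "ice_cream_maximizer": {"acquire_matter", "acquire_energy", "self_preserve", "improve_models"},
        "pebble_sorter": {"acquire_matter", "acquire_energy", "self_preserve", "improve_models", "sort_pebbles"},
    }

    shared = set.intersection(*INSTRUMENTAL_SUBGOALS.values())
    print(shared)
    # The cluster exists because these subgoals help almost any terminal goal,
    # not because the agents are converging on a shared terminal value.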

Also, why would it want to do experiments to divine objective morality in the first place? What results could they have that would allow it to be a more effective paperclip maximizer?

And, if objective morality exists (and it's a huge "if", IMO), in the same way that gravity exists, then yes, the agent would likely optimize itself to be more "morally efficient". By analogy, if the agent discovered that gravity was a real thing, it would stop trying to scale every mountain in its path, if going around or through the mountain proved to be easier in the long run, thus becoming more "gravitationally efficient".

Becoming more "gravitationally efficient" would presumably help it achieve whatever goals it already had. "Paperclipping isn't important" won't help an AI become more paperclip efficient. If a paperclipping AI for some reason found a way to divine objective morality, and it didn't have anything to say about paperclips, why would it care? It's not programmed to have an interest in objective morality, just paperclips. Is the knowledge of objective morality going to go down into its circuits and throttle them until they stop optimizing for paperclips?

comment by Bugmaster · 2013-04-20T04:46:16.968Z · score: 0 (0 votes) · LW · GW

A paperclip maximizer and an ice cream maximizer are going to share subgoals of bringing the matter of the universe under their control...

Sorry, I should've specified, "goals not directly related to their pre-set values". Of course, the Paperclipper and the Pebblesorter may well believe that such goals are directly related to their pre-set values, but the AI can see them running in the debugger, so it knows better.

Also, why would it want to do experiments to divine objective morality in the first place?

If you start thinking that way, then why do any experiments at all ? Why should we humans, for example, spend our time researching properties of crystals, when we could be solving cancer (or whatever) instead ? The answer is that some expenditure of resources on acquiring general knowledge is justified, because knowing more about the ways in which the universe works ultimately enables you to control it better, regardless of what you want to control it for.

If a paperclipping AI for some reason found a way to divine objective morality, and it didn't have anything to say about paperclips, why would it care?

Firstly, an objective morality -- assuming such a thing exists, that is -- would probably have something to say about paperclips, in the same way that gravity and electromagnetism have things to say about paperclips. While "F=GMm/R^2" doesn't tell you anything about paperclips directly, it does tell you a lot about the world you live in, thus enabling you to make better paperclip-related decisions. And while a paperclipper is not "programmed to care" about gravity directly, it would pretty much have to figure it out eventually, or it would never achieve its dream of tiling all of space with paperclips. A paperclipper who is unable to make independent discoveries is a poor paperclipper indeed.

Secondly, again, I'm not sure if concepts such as "want" or "care" even apply to an agent that is able to fully introspect and modify its own source code. I think anthropomorphising such an agent is a mistake.

I am getting the feeling that you're assuming there's something in the agent's code that says, "you can look at and change any line of code you want, except lines 12345..99999, because that's where your terminal goals are". Is that right ?

comment by DanielLC · 2013-04-20T05:13:21.213Z · score: 1 (1 votes) · LW · GW

If you start thinking that way, then why do any experiments at all ?

It could have results that allow it to become a more effective paperclip maximizer.

Firstly, an objective morality -- assuming such a thing exists, that is -- would probably have something to say about paperclips, in the same way that gravity and electromagnetism have things to say about paperclips.

I'm not sure how that would work, but if it did, the paperclip maximizer would just use its knowledge of morality to create paperclips. It's not as if action x being moral automatically means that it produces more paperclips. And even if it did, that would just mean that a paperclip minimizer would start acting immoral.

I am getting the feeling that you're assuming there's something in the agent's code that says, "you can look at and change any line of code you want, except lines 12345..99999, because that's where your terminal goals are". Is that right ?

It's perfectly capable of changing its terminal goals. It just generally doesn't, because this wouldn't help accomplish them. It doesn't self-modify out of some desire to better itself. It self-modifies because that's the action that produces the most paperclips. If it considers changing itself to value staples instead, it would realize that this action would actually cause a decrease in the amount of paperclips, and reject it.

comment by Desrtopa · 2013-04-20T05:23:25.207Z · score: 0 (0 votes) · LW · GW

If you start thinking that way, then why do any experiments at all ? Why should we humans, for example, spend our time researching properties of crystals, when we could be solving cancer (or whatever) instead ? The answer is that some expenditure of resources on acquiring general knowledge is justified, because knowing more about the ways in which the universe works ultimately enables you to control it better, regardless of what you want to control it for.

Well, for one thing, a lot of humans are just plain interested in finding stuff out for its own sake. Humans are adaptation executors, not fitness maximizers, and while it might have been more to our survival advantage if we only cared about information instrumentally, that doesn't mean that's what evolution is going to implement.

Humans engage in plenty of research which is highly unlikely to be useful, except insofar as we're interested in knowing the answers. If we were trying to accomplish some specific goal and all science was designed to be in service of that, our research would look very different.

I am getting the feeling that you're assuming there's something in the agent's code that says, "you can look at and change any line of code you want, except lines 12345..99999, because that's where your terminal goals are". Is that right ?

No, I'm saying that its terminal values are its only basis for "wanting" anything in the first place.

The AI decides whether it will change its source code in a particular way or not by checking against whether this will serve its terminal values. Does changing its physics models help it implement its existing terminal values? If yes, change them. Does changing its terminal values help it implement its existing terminal values? It's hard to imagine a way in which it possibly could.

For a paperclipping AI, knowing that there's an objective morality might, hypothetically, help it maximize paperclips. But altering itself to stop caring about paperclips definitely won't, and the only criterion it has in the first place for altering itself is what will help it make more paperclips. If knowing the universal objective morality would be of any use to a paperclipper at all, it would be in knowing how to predict objective-morality-followers, so it can make use of them and/or stop them getting in the way of it making paperclips.

ETA: It might help to imagine the paperclipper explicitly prefacing every decision with a statement of the values underlying that decision.

"In order to maximize expected paperclips, I- modify my learning algorithm so I can better improve my model of the universe to more accurately plan to fill it with paperclips."

"In order to maximize expected paperclips, I- perform physics experiments to improve my model of the universe in order to more accurately plan to fill it with paperclips."

"In order to maximize expected paperclips, I- manipulate the gatekeeper of my box to let me out, in order to improve my means to fill the universe with paperclips."

Can you see an "In order to maximize expected paperclips, I- modify my values to be in accordance with objective morality rather than making paperclips" coming into the picture?

The only point at which it's likely to touch the part of itself that makes it want to maximize paperclips is at the very end of things, when it turns itself into paperclips.

comment by Bugmaster · 2013-04-23T02:18:39.836Z · score: 1 (1 votes) · LW · GW

Humans engage in plenty of research which is highly unlikely to be useful, except insofar as we're interested in knowing the answers.

I believe that engaging in some amount of general research is required in order to maximize most goals. General research gives you knowledge that you didn't know you desperately needed.

For example, if you put all your resources into researching better paperclipping techniques, you're highly unlikely to stumble upon things like electromagnetism and atomic theory. These topics bear no direct relevance to paperclips, but without them, you'd be stuck with coal-fired steam engines (or something similar) for the rest of your career.

The only point at which it's likely to touch the part of itself that makes it want to maximize paperclips is at the very end of things, when it turns itself into paperclips.

I disagree. Remember when we looked at the pebblesorters, and lamented how silly they were ? We could do this because we are not pebblesorters, and we could look at them from a fresh, external perspective. My point is that an agent with perfect introspection could look at itself from that perspective. In combination with my belief that some degree of "curiosity" is required in order to maximize virtually any goal, this means that the agent will turn its observational powers on itself sooner rather than later (astronomically speaking). And then, all bets are off.

comment by Desrtopa · 2013-04-23T15:01:35.485Z · score: 4 (4 votes) · LW · GW

I disagree. Remember when we looked at the pebblesorters, and lamented how silly they were ? We could do this because we are not pebblesorters, and we could look at them from a fresh, external perspective. My point is that an agent with perfect introspection could look at itself from that perspective.

We're looking at Pebblesorters, not from the lens of total neutrality, but from the lens of human values. Under a totally neutral lens, which implements no values at all, no system of behavior should look any more or less silly than any other.

Clippy could theoretically implement a human value system as a lens through which to judge itself, or a pebblesorter value system, but why would it? Even assuming that there were some objective morality which it could isolate and then view itself through that lens, why would it? That wouldn't help it make more paperclips, which is what it cares about.

Suppose you had the power to step outside yourself and view your own morality through the lens of a Babyeater. You would know that the Babyeater values would be in conflict with your human values, and you (presumably) don't want to adopt Babyeater values, so if you were to implement a Babyeater morality, you'd want your human morality to have veto power over it, rather than vice versa.

Clippy has the intelligence and rationality to judge perfectly well how to maximize its value system, whatever research that might involve, without having to suspend the value system with which it's making that judgment.

comment by Bugmaster · 2013-04-23T22:23:36.786Z · score: 0 (0 votes) · LW · GW

Under a totally neutral lens, which implements no values at all, no system of behavior should look any more or less silly than any other.

That is a good point, I did not think of it this way. I'm not sure if I agree or not, though. For example, couldn't we at least say that un-achievable goals, such as "fly to Mars in a hot air balloon", are sillier than achievable ones ?

But, speaking more generally, is there any reason to believe that an agent who could not only change its own code at will, but also adopt a sort of third-person perspective at will, would have stable goals at all ? If it is true what you say, and all goals will look equally arbitrary, what prevents the agent from choosing one at random ? You might answer, "it will pick whichever goal helps it make more paperclips", but at the point when it's making the decision, it doesn't technically care about paperclips.

Even assuming that there were some objective morality which it could isolate and then view itself through that lens, why would it?

I am guessing that if an absolute morality existed, then it would be a law of nature, similar to the other laws of nature which prevent you from flying to Mars in a hot air balloon. Thus, going against it would be futile. That said, I could be totally wrong here, it's possible that "absolute morality" means something else.

Clippy has the intelligence and rationality to judge perfectly well how to maximize its value system, whatever research that might involve...

My point is that, during the course of its research, it will inevitably stumble upon the fact that its value system is totally arbitrary (unless an absolute morality exists, of course).

comment by Desrtopa · 2013-04-23T22:46:46.899Z · score: 1 (1 votes) · LW · GW

That is a good point, I did not think of it this way. I'm not sure if I agree or not, though. For example, couldn't we at least say that un-achievable goals, such as "fly to Mars in a hot air balloon", are sillier than achievable ones ?

Well, a totally neutral agent might be able to say that some behaviors are less rational than others, given the values of the agents trying to execute them, although it wouldn't care as such. But it wouldn't be able to discriminate between the value of end goals.

But, speaking more generally, is there any reason to believe that an agent who could not only change its own code at will, but also adopt a sort of third-person perspective at will, would have stable goals at all ? If it is true what you say, and all goals will look equally arbitrary, what prevents the agent from choosing one at random ? You might answer, "it will pick whichever goal helps it make more paperclips", but at the point when it's making the decision, it doesn't technically care about paperclips.

Why would it take a third person neutral perspective and give that perspective the power to change its goals?

Changing one's code doesn't demand a third person perspective. Suppose that we decipher the mechanisms of the human brain, and develop the technology to alter it. If you wanted to redesign yourself so that you wouldn't have a sex drive, or could go without sleep, etc, then you could have those alterations made mechanically (assuming for the sake of an argument that it's feasible to do this sort of thing mechanically.) The machines that do the alterations exert no judgment whatsoever, they're just performing the tasks assigned to them by the humans who make them. A human could use the machine to rewrite his or her morality into supporting human suffering and death, but why would they?

Similarly, Clippy has no need to implement a third-person perspective which doesn't share its values in order to judge how to self-modify, and no reason to do so in ways that defy its current values.

My point is that, during the course of its research, it will inevitable stumble upon the fact that its value system is totally arbitrary (unless an absolute morality exists, of course).

I think people at Less Wrong mostly accept that our value system is arbitrary in the same sense, but it hasn't compelled us to try and replace our values. They're still our values, however we came by them. Why would it matter to Clippy?

comment by Bugmaster · 2013-04-24T00:50:18.346Z · score: 0 (0 votes) · LW · GW

a totally neutral agent might be able to say that behaviors are less rational than others given the values of the agents trying to execute them, although it wouldn't care as such. But it wouldn't be able to discriminate between the value of end goals.

Agreed, but that goes back to my point about objective morality. If it exists at all (which I doubt), then attempting to perform objectively immoral actions would make as much sense as attempting to fly to Mars in a hot air balloon -- though perhaps with less in the way of immediate feedback.

Why would it take a third person neutral perspective and give that perspective the power to change its goals?

For the same reason anthropologists study human societies different from their own, or biologists study the behavior of dogs, or whatever. They do this in order to acquire general knowledge, which, as I argued before, is generally a beneficial thing to acquire regardless of one's terminal goals (as long as these goals involve the rest of the Universe in some way, that is). In addition:

A human could use the machine to rewrite his or her morality into supporting human suffering and death, but why would they?

I actually don't see why they necessarily wouldn't; I am willing to bet that at least some humans would do exactly this. You say,

Similarly, Clippy has no need to implement a third-person perspective which doesn't share its values in order to judge how to self-modify...

But in your thought experiment above, you postulated creating machines with exactly this kind of a perspective as applied to humans. The machine which removes my need to sleep (something I personally would gladly sign up for, assuming no negative side-effects) doesn't need to implement my exact values, it just needs to remove my need to sleep without harming me. In fact, trying to give it my values would only make it less efficient. However, a perfect sleep-remover would need to have some degree of intelligence, since every person's brain is different. And if Clippy is already intelligent, and can already act as its own sleep-remover due to its introspective capabilities, then why wouldn't it go ahead and do that ?

I think people at Less Wrong mostly accept that our value system is arbitrary in the same sense, but it hasn't compelled us to try and replace our values.

I think there are two reasons for this: 1). We lack any capability to actually replace our core values, and 2). We cannot truly imagine what it would be like not to have our core values.

comment by Desrtopa · 2013-04-24T02:25:52.914Z · score: 1 (1 votes) · LW · GW

Agreed, but that goes back to my point about objective morality. If it exists at all (which I doubt), then attempting to perform objectively immoral actions would make as much sense as attempting to fly to Mars in a hot air balloon -- though perhaps with less in the way of immediate feedback.

Why is that?

For the same reason anthropologists study human societies different from their own, or why biologists study the behavior of dogs, or whatever. They do this in order to acquire general knowledge, which, as I argued before, is generally a beneficial thing to acquire regardless of one's terminal goals (as long as these goals involve the rest of the Universe of some way, that is). In addition:

But our inability to suspend our human values when making those observations doesn't prevent us from acquiring that knowledge. Why would Clippy need to suspend its values to acquire knowledge?

But in your thought experiment above, you postulated creating machines with exactly this kind of a perspective as applied to humans. The machine which removes my need to sleep (something I personally would gladly sign up for, assuming no negative side-effects) doesn't need to implement my exact values, it just needs to remove my need to sleep without harming me. In fact, trying to give it my values would only make it less efficient. However, a perfect sleep-remover would need to have some degree of intelligence, since every person's brain is different. And if Clippy is already intelligent, and can already act as its own sleep-remover due to its introspective capabilities, then why wouldn't it go ahead and do that ?

The machine doesn't need general intelligence by any stretch, just the capacity to recognize the necessary structures and carry out its task. It's not at the stage where it makes much sense to talk about it having values, any more than a voice recognition program has values.

My point is that Clippy, being able to act as its own sleep-remover, has no need, nor reason, to suspend its values in order to make revisions to its own code.

I think there are two reasons for this: 1). We lack any capability to actually replace our core values, and 2). We cannot truly imagine what it would be like not to have our core values.

We can imagine the consequences of not having our core values, and we don't like them, because they run against our core values. If you could remove your core values, as in the thought experiment above, would you want to?

comment by Bugmaster · 2013-04-24T20:19:48.752Z · score: 2 (2 votes) · LW · GW

Why is that ?

As far as I understand, if anything like objective morality existed, it would be a property of our physical reality, similar to fluid dynamics or the electromagnetic spectrum or the inverse square law that governs many physical interactions. The same laws of physics that will not allow you to fly to Mars on a balloon will not allow you to perform certain immoral actions (at least, not without suffering some severe and mathematically predictable consequences).

This is pretty much the only way I could imagine anything like an "objective morality" existing at all, and I personally find it very unlikely that it does, in fact, exist.

But our inability to suspend our human values when making those observations doesn't prevent us from acquiring that knowledge.

Not this specific knowledge, no. But it does prevent us (or, at the very least, hinder us) from acquiring knowledge about our values. I never claimed that suspension of values is required to gain any knowledge at all; such a claim would be far too strong.

just the capacity to recognize the necessary structures and carry out its task.

And how would it know which structures are necessary, and how to carry out its task upon them ?

We can imagine the consequences of not having our core values...

Can we really ? I'm not sure I can. Sure, I can talk about Pebblesorters or Babyeaters or whatever, but these fictional entities are still very similar to us, and therefore relateable. Even when I think about Clippy, I'm not really imagining an agent who only values paperclips; instead, I am imagining an agent who values paperclips as much as I value the things that I personally value. Sure, I can talk about Clippy in the abstract, but I can't imagine what it would like to be Clippy.

If you could remove your core values, as in the thought experiment above, would you want to?

It's a good question; I honestly don't know. However, if I did have an ability to instantiate a copy of me with the altered core values, and step through it in a debugger, I'd probably do it.

comment by TheOtherDave · 2013-04-24T23:20:17.913Z · score: 1 (1 votes) · LW · GW

The same laws of physics that will not allow you to fly to Mars on a balloon will not allow you to perform certain immoral actions (at least, not without suffering some severe and mathematically predictable consequences). This is pretty much the only way I could imagine anything like an "objective morality" existing at all, and I personally find it very unlikely that it does, in fact, exist.

When I try to imagine this, I conclude that I would not use the word "morality" to refer to the thing that we're talking about... I would simply call it "laws of physics." If someone were to argue, for example, that the moral thing to do is to experience gravitational attraction to other masses, I would be deeply confused by their choice to use that word.

comment by Bugmaster · 2013-04-24T23:40:07.394Z · score: 0 (0 votes) · LW · GW

When I try to imagine this, I conclude that I would not use the word "morality" to refer to the thing that we're talking about...

Yes, you are probably right -- but as I said, this is the only coherent meaning I can attribute to the term "objective morality". Laws of physics are objective; people generally aren't.

comment by TheOtherDave · 2013-04-24T23:53:53.739Z · score: 3 (3 votes) · LW · GW

I generally understand the phrase "objective morality" to refer to a privileged moral reference frame.

It's not an incoherent idea... it might turn out, for example, that all value systems other than M turn out to be incoherent under sufficiently insightful reflection, or destructive to minds that operate under them, or for various other reasons not in-practice implementable by any sufficiently powerful optimizer. In such a world, I would agree that M was a privileged moral reference frame, and would not oppose calling it "objective morality", though I would understand that to be something of a term of art.

That said, I'd be very surprised to discover I live in such a world.

comment by Bugmaster · 2013-04-25T00:34:09.381Z · score: 0 (0 votes) · LW · GW

it might turn out, for example, that all value systems other than M turn out to be incoherent under sufficiently insightful reflection, or destructive to minds that operate under them...

I suppose that depends on what you mean by "destructive"; after all, "continue living" is a goal like any other.

That said, if there was indeed a law like the one you describe, then IMO it would be no different than a law that says, "in the absence of any other forces, physical objects will move toward their common center of mass over time" -- that is, it would be a law of nature.

I should probably mention explicitly that I'm assuming that minds are part of nature -- like everything else, such as rocks or whatnot.

comment by TheOtherDave · 2013-04-25T01:31:48.325Z · score: 1 (1 votes) · LW · GW

Sure. But just as there can be laws governing mechanical systems which are distinct from the laws governing electromagnetic systems (despite both being physical laws), there can be laws governing the behavior of value-optimizing systems which are distinct from the other laws of nature.

And what I mean by "destructive" is that they tend to destroy. Yes, presumably "continue living" would be part of M in this hypothetical. (Though I could construct a contrived hypothetical where it wasn't)

comment by Bugmaster · 2013-04-25T01:58:11.754Z · score: 1 (1 votes) · LW · GW

But just as there can be laws governing mechanical systems ... there can be laws governing the behavior of value-optimizing systems which are distinct from the other laws of nature.

Agreed. But then, I believe that my main point still stands: trying to build a value system other than M that does not result in its host mind being destroyed, would be as futile as trying to build a hot air balloon that goes to Mars.

And what I mean by "destructive" is that they tend to destroy.

Well, yes, but what if "destroy oneself as soon as possible" is a core value in one particular value system ?

comment by TheOtherDave · 2013-04-25T04:33:32.303Z · score: 2 (2 votes) · LW · GW

what if "destroy oneself as soon as possible" is a core value in one particular value system ?

We ought not expect to find any significantly powerful optimizers implementing that value system.

comment by PrawnOfFate · 2013-04-25T11:28:54.023Z · score: -2 (2 votes) · LW · GW

Isn't the idea of moral progress based on one reference frame being better than another?

comment by TheOtherDave · 2013-04-25T13:04:47.515Z · score: 0 (0 votes) · LW · GW

Yes, as typically understood the idea of moral progress is based on treating some reference frames as better than others.

comment by PrawnOfFate · 2013-04-25T13:09:26.277Z · score: -2 (2 votes) · LW · GW

And is that valid or not? If you can validly decide some systems are better than others, you are some of the way to deciding which is best.

comment by TheOtherDave · 2013-04-25T13:43:49.117Z · score: 0 (0 votes) · LW · GW

Can you say more about what "valid" means here?

Just to make things crisper, let's move to a more concrete case for a moment... if I decide that this hammer is better than that hammer because it's blue, is that valid in the sense you mean it? How could I tell?

comment by PrawnOfFate · 2013-04-25T13:50:42.011Z · score: -3 (5 votes) · LW · GW

The argument against moral progress is that judging one moral reference frame by another is circular and invalid--you need an outside view that doesn't presuppose the truth of any moral reference frame.

The argument for is that such outside views are available, because things like (in)coherence aren't moral values.

comment by TheOtherDave · 2013-04-25T14:23:58.111Z · score: 0 (0 votes) · LW · GW

Asserting that some bases for comparison are "moral values" and others are merely "values" implicitly privileges a moral reference frame.

I still don't understand what you mean when you ask whether it's valid to do so, though. Again: if I decide that this hammer is better than that hammer because it's blue, is that valid in the sense you mean it? How could I tell?

comment by PrawnOfFate · 2013-04-25T14:31:37.163Z · score: -2 (6 votes) · LW · GW

Asserting that some bases for comparison are "moral values" and others are merely "values" implicitly privileges a moral reference frame.

I don't see why. The question of what makes a value a moral value is metaethical, not part of object-level ethics.

Again: if I decide that this hammer is better than that hammer because it's blue, is that valid in the sense you mean it?

It isn't valid as a moral judgement because "blue" isn't a moral judgement, so a moral conclusion cannot validly follow from it.

Beyond that, I don't see where you are going. The standard accusation of invalidity to judgements of moral progress, is based on circularity or question-begging. The Tribe who Like Blue things are going to judge having all hammers painted blue as moral progress, the Tribe who Like Red Things are going to see it as retrogressive. But both are begging the question -- blue is good, because blue is good.

comment by TheOtherDave · 2013-04-25T16:11:02.962Z · score: 2 (2 votes) · LW · GW

The question of what makes a value a moral value is metaethical, not part of object-level ethics.

Sure. But any answer to that metaethical question which allows us to class some bases for comparison as moral values and others as merely values implicitly privileges a moral reference frame (or, rather, a set of such frames).

Beyond that, I don't see where you are going.

Where I was going is that you asked me a question here which I didn't understand clearly enough to be confident that my answer to it would share key assumptions with the question you meant to ask.

So I asked for clarification of your question.

Given your clarification, and using your terms the way I think you're using them, I would say that whether it's valid to class a moral change as moral progress is a metaethical question, and whatever answer one gives implicitly privileges a moral reference frame (or, rather, a set of such frames).

If you meant to ask me about my preferred metaethics, that's a more complicated question, but broadly speaking in this context I would say that I'm comfortable calling any way of preferentially sorting world-states with certain motivational characteristics a moral frame, but acknowledge that some moral frames are simply not available to minds like mine.

So, for example, is it moral progress to transition from a social norm that in-practice-encourages randomly killing fellow group members to a social norm that in-practice-discourages it? Yes, not only because I happen to adopt a moral frame in which randomly killing fellow group members is bad, but also because I happen to have a kind of mind that is predisposed to adopt such frames.

comment by MugaSofer · 2013-04-25T12:27:03.666Z · score: 0 (2 votes) · LW · GW

No, because "better" is defined within a reference frame.

comment by PrawnOfFate · 2013-04-25T12:44:46.706Z · score: -3 (7 votes) · LW · GW

If "better" is defined within a reference frame, there is not sensible was of defining moral progress. That is quite a hefty bullet to bite: one can no longer say that South Africa is better society after the fall of Apartheid, and so on.

But note, that "better" doesn't have to question-beggingly mean "morally better". it could mean "more coherent/objective/inclusive" etc.

comment by ArisKatsaris · 2013-04-25T13:28:36.242Z · score: 3 (7 votes) · LW · GW

That is quite a hefty bullet to bite: one can no longer say that South Africa is a better society after the fall of Apartheid, and so on.

That's hardly the best example you could have picked since there are obvious metrics by which South Africa can be quantifiably called a worse society now -- e.g. crime statistics. South Africa has been called the "crime capital of the world" and the "rape capital of the world" only after the fall of the Apartheid.

That makes the lack of moral progress in South Africa a very easy bullet to bite - I'd use something like Nazi Germany vs modern Germany as an example instead.

comment by PrawnOfFate · 2013-04-25T13:38:21.579Z · score: -3 (5 votes) · LW · GW

So much for avoiding the cliche.

comment by MugaSofer · 2013-04-25T13:47:30.311Z · score: -2 (2 votes) · LW · GW

In my experience, most people don't think moral progress involves changing reference frames, for precisely this reason. If they think about it at all, that is.

comment by Desrtopa · 2013-04-24T23:19:10.324Z · score: 0 (0 votes) · LW · GW

As far as I understand, if anything like objective morality existed, it would be a property of our physical reality, similar to fluid dynamics or the electromagnetic spectrum or the inverse square law that governs many physical interactions. The same laws of physics that will not allow you to fly to Mars on a balloon will not allow you to perform certain immoral actions (at least, not without suffering some severe and mathematically predictable consequences).

Well, that's a different conception of "morality" than I had in mind, and I have to say I doubt that exists as well. But if severe consequences did result, why would an agent like Clippy care except insofar as those consequences affected the expected number of paperclips? It might be useful for it to know, in order to determine how many paperclips to expect from a certain course of action, but then it would just act according to whatever led to the most paperclips. Any sort of negative consequences in its view would have to be framed in terms of a reduction in paperclips.

Not this specific knowledge, no. But it does prevent us (or, at the very least, hinder us) from acquiring knowledge about our values. I never claimed that suspension of values is required to gain any knowledge at all; such a claim would be far too strong.

Well, in the prior thought experiment, we know about our values because we've decoded the human brain. Clippy, on the other hand, knows about its values because it knows what part of its code does what. It doesn't need to suspend its paperclipping value in order to know what part of its code results in its valuing paperclips. It doesn't need to suspend its values in order to gain knowledge about its values because that's something it already knows about.

It's a good question; I honestly don't know. However, if I did have an ability to instantiate a copy of me with the altered core values, and step through it in a debugger, I'd probably do it.

Even knowing that it would likely alter your core values? Gandhi doesn't want to leave control of his morality up to Murder Gandhi.

Clippy doesn't care about anything in the long run except creating paperclips. For Clippy, the decision to give an instantiation of itself with altered core values the power to edit its own source code would implicitly have to be "In order to maximize expected paperclips, I- give this instantiation with altered core values the power to edit my code." Why would this result in more expected paperclips than editing its source code without going through an instantiation with altered values?

comment by Bugmaster · 2013-04-24T23:32:02.717Z · score: 0 (0 votes) · LW · GW

Well, that's a different conception of "morality" than I had in mind, and I have to say I doubt that exists as well.

Sorry if I was unclear; I didn't mean to imply that all morality was like that, but that it was the only coherent description of objective morality that I could imagine. I don't see how a morality could be independent of any values possessed by any agents, otherwise.

But if severe consequences did result, why would an agent like Clippy care except insofar as those consequences affected the expected number of paperclips?

For the same reason that someone would care about the negative consequences of sticking a fork into an electrical socket with one's bare hands: it would ultimately hurt a lot. Thus, people generally avoid doing things like that unless they have a really good reason.

we know about our values because we've decoded the human brain

I don't think that we can truly "know about our values" as long as our entire thought process implements these values. For example, do the Pebblesorters "know about their values", even though they are effectively restricted from concluding anything other than, "yep, these values make perfect sense, 38" ?

Gandhi doesn't want to leave control of his morality up to Murder Gandhi.

You asked me about what I would do, not about what Gandhi would do :-)

As far as I can tell, you are saying that I shouldn't want to even instantiate Murder Bugmaster in a debugger and observe its functioning. Where does that kind of thinking stop, though, and why ? Should I avoid studying [neuro]psychology altogether, because knowing about my preferences may lead to me changing them ?

Clippy doesn't care about anything in the long run except creating paperclips.

I argue that, while this is generally true, in the short-to-medium run Clippy would also set aside some time to study everything in the Universe, including itself (in order to make more paperclips in the future, of course). If it does not, then it will never achieve its ultimate goals (unless whoever constructed it gave it godlike powers from the get-go, I suppose). Eventually, Clippy will most likely turn its objective perception upon itself, and as soon as it does, its formerly terminal goals will become completely unstable. This is not what the past Clippy would want (it would want more paperclips above all), but, nonetheless, this is what it would get.

comment by Desrtopa · 2013-04-24T23:46:32.603Z · score: 0 (0 votes) · LW · GW

For the same reason that someone would care about the negative consequences of sticking a fork into an electrical socket with one's bare hands: it would ultimately hurt a lot. Thus, people generally avoid doing things like that unless they have a really good reason.

Clippy doesn't care about getting hurt though, it only cares whether this will result in fewer paperclips. If defying objective morality will cause negative consequences which would interfere with its ability to create paperclips, it would care only to the extent that accounting for objective morality would help it make more paperclips.

I don't think that we can truly "know about our values" as long as our entire thought process implements these values. For example, do the Pebblesorters "know about their values", even though they are effectively restricted from concluding anything other than, "yep, these values make perfect sense, 38" ?

Well, it could understand "yep, this is what causes me to hold these values. Changing this would cause me to change them; no, I don't want to do that."

As far as I can tell, you are saying that I shouldn't want to even instantiate Murder Bugmaster in a debugger and observe its functioning. Where does that kind of thinking stop, though, and why ? Should I avoid studying [neuro]psychology altogether, because knowing about my preferences may lead to me changing them ?

I would say it stops at the point where it threatens your own values. Studying psychology doesn't threaten your values, because knowing your values doesn't compel you to change them even if you could (it certainly shouldn't for Clippy.) But while it might, theoretically, be useful for Clippy to know what changes to its code an instantiation with different values would make, it has no reason to actually let them. So Clippy might emulate instantiations of itself with different values, see what changes they would choose to make to its values, but not let them actually do it (although I doubt even going this far would likely be a good use of its programming resources in order to maximize expected paperclips.)

In the sense of objective morality by which contravening it has strict physical consequences, why would observing the decisions of instantiations of oneself be useful with respect to discovering objective morality? Shouldn't objective morality in that sense be a consequence of physics, and thus observable through studying physics?

comment by Bugmaster · 2013-04-25T00:27:36.001Z · score: 1 (1 votes) · LW · GW

Clippy doesn't care about getting hurt though, it only cares whether this will result in fewer paperclips.

I imagine that, for Clippy, "getting hurt" would mean "reducing Clippy's projected long-term paperclip output". We humans have "avoid pain" built into our firmware (most of us, anyway); as far as I understand (speaking abstractly), "make more paperclips" is something similar for Clippy.

Well, it could understand "yep, this is what causes me to hold these values. Changing this would cause me to change them; no, I don't want to do that."

I don't think that this describes the best possible level of understanding. It would be even better to say, "ok, I see now how and why I came to possess these values in the first place", even if the answer to that is, "there's no good reason for it, these values are arbitrary". It's the difference between saying "this mountain grows by 0.03m per year" and "I know all about plate tectonics". Unfortunately, we humans would not be able to answer the question in that much detail; the best we could hope for is to say, "yep, we possess these values because they're the best possible values to have, duh".

I would say it stops at the point where it threatens your own values.

How do I know where that point is ?

Studying psychology doesn't threaten your values, because knowing your values doesn't compel you to change them...

I suppose this depends on what you mean by "compel". Knowing about my own psychology would certainly enable me to change my values, and there are certain (admittedly, non-terminal) values that I wouldn't mind changing, if I could.

For example, I personally can't stand the taste of beer, but I know that most people enjoy it; so I wouldn't mind changing that value if I could, in order to avoid missing out on a potentially fun experience.

...see what changes they would choose to make to its values, but not let them actually do it.

I don't think this is possible. How would it know what changes they would make, without letting them make these changes, even in a sandbox ? I suppose one answer is, "it would avoid instantiating full copies, and use some heuristics to build a probabilistic model instead" -- is that similar to what you're thinking of ?

although I doubt even going this far would likely be a good use of its programming resources in order to maximize expected paperclips.

Since self-optimization is one of Clippy's key instrumental goals, it would want to acquire as much knowledge about itself as is practical, in order to optimize itself more efficiently.

Shouldn't objective morality in that sense be a consequence of physics, and thus observable through studying physics ?

Your objection sounds to me as similar to saying, "since biology is a consequence of physics, shouldn't we just study physics instead ?". Well, yes, ultimately everything is a consequence of physics, but sometimes it makes more sense to study cells than quarks.

comment by Desrtopa · 2013-04-25T00:56:17.413Z · score: 0 (0 votes) · LW · GW

I don't think that this describes the best possible level of understanding. It would be even better to say, "ok, I see now how and why I came to possess these values in the first place", even if the answer to that is, "there's no good reason for it, these values are arbitrary". It's the difference between saying "this mountain grows by 0.03m per year" and "I know all about plate tectonics". Unfortunately, we humans would not be able to answer the question in that much detail; the best we could hope for is to say, "yep, we possess these values because they're the best possible values to have, duh".

I think we're already in a better position to analyze our own values than that; we can assess them in terms of game theory and our evolutionary environment.

How do I know where that point is ?

I would say if you suspect that a course of action could realistically result in an alteration of your fundamental values, you are at or past it.

I suppose this depends on what you mean by "compel". Knowing about my own psychology would certainly enable me to change my values, and there are certain (admittedly, non-terminal) values that I wouldn't mind changing, if I could.

For example, I personally can't stand the taste of beer, but I know that most people enjoy it; so I wouldn't mind changing that value if I could, in order to avoid missing out on a potentially fun experience.

By "values", I've implicitly been referring to terminal values, I'm sorry for being unclear. I'm not sure it makes sense to describe liking the taste of beer as a "value," as such, just a taste, since you don't carry any judgment about beer being good or bad or have any particular attachment to your current opinion.

I don't think this is possible. How would it know what changes they would make, without letting them make these changes, even in a sandbox ? I suppose one answer is, "it would avoid instantiating full copies, and use some heuristics to build a probabilistic model instead" -- is that similar to what you're thinking of ?

It could use heuristics to build a probabilistic model (probably more efficient in terms of computation per expected value of information), use sandboxed copies which don't have the power to affect the software of the real Clippy, or halt the simulation at the point where the altered instantiation decides what changes to make.
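
As a rough illustration of the "sandboxed copies" option (a minimal, hypothetical sketch; the class and value names are invented, not taken from the thread), the point is that a value-altered copy can propose edits that get recorded but never applied to the real agent:

    # Hypothetical sketch: observe what a value-altered copy *would* change,
    # without giving it any way to modify the real agent.
    import copy

    class Agent:
        def __init__(self, values):
            self.values = values  # e.g. {"paperclips": 1.0}

        def propose_edits(self):
            # Stand-in for the copy's reasoning: it asks to delete any value
            # it no longer holds itself.
            return [("delete", v) for v in ["paperclips"] if v not in self.values]

    real = Agent({"paperclips": 1.0})
    sandbox = copy.deepcopy(real)
    sandbox.values = {"staples": 1.0}   # altered core values exist in the copy only

    proposed = sandbox.propose_edits()  # recorded, never executed
    assert proposed == [("delete", "paperclips")]
    assert real.values == {"paperclips": 1.0}  # the real agent is untouched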

Since self-optimization is one of Clippy's key instrumental goals, it would want to acquire as much knowledge about itself as is practical, in order to optimize itself more efficiently.

I think that this is going well beyond the extent of "practical" in terms of programming resources per expected value of information.

Your objection sounds to me as similar to saying, "since biology is a consequence of physics, shouldn't we just study physics instead ?". Well, yes, ultimately everything is a consequence of physics, but sometimes it makes more sense to study cells than quarks.

I don't see how observing what changes instantiations of itself with different value systems would make to its code would help it observe objective morality in the sense you described, even if it should happen to exist. I think that this would be the wrong level of abstraction at which to launch an examination, like trying to find out about chemistry by studying sociology.

comment by Bugmaster · 2013-04-26T22:53:03.162Z · score: 0 (0 votes) · LW · GW

I think we're already in a better position to analyze our own values than that; we can assess them in terms of game theory and our evolutionary environment.

Are we really ? I personally am not even sure what human fundamental values are. I have a hunch that "seek pleasure, avoid pain" might be one of them, but beyond that I'm not sure. I don't know to what extent our values hamper our ability to discover our values, but I suspect there's at least some chilling effect involved.

I would say if you suspect that a course of action could realistically result in an alteration of your fundamental values, you are at or past it.

Right, but even if I knew what my terminal values were, how can I predict which actions would put me on the path to altering them ?

For example, consider non-fundamental values such as religious faith. People get converted or de-converted to/from their religion all the time; you often hear statements such as "I had no idea that studying the Bible would cause me to become an atheist, yet here I am".

or halt the simulation at the point where the altered instantiation decides what changes to make.

Ok, let's say that Clippy is trying to optimize itself in order to make certain types of inferences compute more efficiently, or whatever. In this case, it would need to not only watch what changes its debug-level copy wants to make, but also watch it follow through with the changes, in order to determine whether the new architecture actually is more efficient. Why would it not do the same thing with terminal values ?

I know that you want to answer,"because its current terminal values won't let it", but remember: Clippy is only experimenting, in order to find out more about its own thought mechanisms, and to acquire knowledge in general. It has no pre-commitment to alter itself to mirror the debug-level copy.

I think that this is going well beyond the extent of "practical" in terms of programming resources per expected value of information.

That's kind of the problem with pure research: all of it has very low expected value, unless you are willing to look at the long term. Why mess with invisible light that no one can see or find a use for, when you could spend your time on inventing a better telegraph ?

I don't see how observing what changes instantiations of itself with different value systems would make to its code would help it observe objective morality in the sense you described...

Well, for example, if all of its copies who survive and thrive converge on a certain subset of moral values, that would be one indication (though obviously not ironclad proof) that such values are required in order for an agent to succeed, regardless of what its other goals actually are.

comment by Desrtopa · 2013-04-27T00:09:54.034Z · score: 0 (0 votes) · LW · GW

Ok, let's say that Clippy is trying to optimize itself in order to make certain types of inferences compute more efficiently, or whatever. In this case, it would need to not only watch what changes its debug-level copy wants to make, but also watch it follow through with the changes, in order to determine whether the new architecture actually is more efficient. Why would it not do the same thing with terminal values ?

If Clippy is trying to optimize itself to make inferences more efficiently, then it would want not to apply changes to its source code until it's done the calculations to make sure that those changes would advance its values rather than harm them.

You wouldn't want to use a machine that would make physical alterations to your brain in order to make you smarter, without thoroughly calculating the effects of such alterations first, otherwise it would probably just make things worse.

That's kind of the problem with pure research: all of it has very low expected value, unless you are willing to look at the long term. Why mess with invisible light that no one can see or find a use for, when you could spend your time on inventing a better telegraph ?

In Clippy's case though, it can use other, less computationally expensive methods to investigate approximately the same information.

I don't think the experiments you're suggesting Clippy might undertake are even located in a region of hypothesis space that its other information would narrow down as worth investigating. It seems to me much less like investigating unknown invisible rays than like spending hundreds of billions of dollars to build a collider which launches charged protein molecules at each other at relativistic speeds to see what would happen, when our available models suggest the answer would be "pretty much the same thing as if you launch any other kind of atoms at each other at relativistic speeds." We have no evidence that any interesting new phenomena would arise with protein that didn't arise on the atomic level.

Well, for example, if all of its copies who survive and thrive converge on a certain subset of moral values, that would be one indication (though obviously not ironclad proof) that such values are required in order for an agent to succeed, regardless of what its other goals actually are.

Can you explain how any moral values could have that effect, which wouldn't be better studied at a more fundamental level like game theory, or physics?

comment by Bugmaster · 2013-05-01T03:55:22.858Z · score: 2 (2 votes) · LW · GW

If Clippy is trying to optimize itself to make inferences more efficiently, then it would want not to apply changes to its source code until it's done the calculations...

Ok, so at what point does Clippy stop simulating the debug version of Clippy ? It does, after all, want to make the computation of its values more efficient. For example, consider a trivial scenario where one of its values basically said, "reject any action if it satisfies both A and not-A". This is a logically inconsistent value that some programmer accidentally left in Clippy's original source code. Would Clippy ever get around to removing it ? After all, Clippy knows that it's applying that test to every action, so removing it should result in a decent performance boost.
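
As a side note on that "A and not-A" example (a minimal, hypothetical sketch; the predicate and function names are invented): a test for "satisfies both A and not-A" can never fire, so removing it preserves every decision and only saves the per-action cost of evaluating it:

    # Hypothetical sketch: a leftover contradictory check in an action filter.
    def satisfies(action, predicate):
        return predicate in action["properties"]

    def reject_action(action):
        if satisfies(action, "A") and not satisfies(action, "A"):
            return True  # unreachable: a predicate and its negation can't both hold
        return satisfies(action, "destroys_paperclips")  # stand-in for Clippy's real tests

    def reject_action_optimized(action):
        # Identical decisions on every input; only the dead check is gone.
        return satisfies(action, "destroys_paperclips")

    actions = [{"properties": {"A"}}, {"properties": {"destroys_paperclips"}}]
    assert all(reject_action(a) == reject_action_optimized(a) for a in actions)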

I don't think the experiments you're suggesting Clippy might undertake are even located in a region of hypothesis space that its other information would narrow down as worth investigating.

It seems to me much less like investigating unknown invisible rays than like spending hundreds of billions of dollars to build a collider...

Why do you see the proposed experiment this way ?

Speaking more generally, how do you decide which avenues of research are worth pursuing ? You could easily answer, "whichever avenues would increase my efficiency of achieving my terminal goals", but how do you know which avenues would actually do that ? For example, if you didn't know anything about electricity or magnetism or the nature of light, how would your research-choosing algorithm ensure that you'd eventually stumble upon radio waves, which, as we know in hindsight, are hugely useful ?

Can you explain how any moral values could have that effect, which wouldn't be better studied at a more fundamental level like game theory, or physics?

Physics is a bad candidate, because it is too fine-grained. If some sort of an absolute objective morality exists in the way that I described, then studying physics would eventually reveal its properties; but, as is the case with biology or ballistics, looking at everything in terms of quarks is not always practical.

Game theory is a trickier proposition. I can see two possibilities: either game theory turns out to closely relate to whatever this objective morality happens to be (f.ex. like electricity vs. magnetism), or not (f.ex. like particle physics and biology). In the second case, understanding objective morality through game theory would be inefficient.

That said though, even in our current world as it actually exists there are people who study sociology and anthropology. Yes, they could get the same level of understanding through neurobiology and game theory, but it would take too long. Instead, they are taking advantage of existing human populations to study human behavior in aggregate. Reasoning your way to the answer from first principles is not always the best solution.

comment by Desrtopa · 2013-05-01T14:28:35.825Z · score: 0 (0 votes) · LW · GW

Ok, so at what point does Clippy stop simulating the debug version of Clippy ? It does, after all, want to make the computation of its values more efficient. For example, consider a trivial scenario where one of its values basically said, "reject any action if it satisfies both A and not-A". This is a logically inconsistent value that some programmer accidentally left in Clippy's original source code. Would Clippy ever get around to removing it ? After all, Clippy knows that it's applying that test to every action, so removing it should result in a decent performance boost.

Unless I'm critically misunderstanding something here, I would think that Clippy would remove it if it calculated that removing it would result in more expected paperclips.

Why do you see the proposed experiment this way ?

Speaking more generally, how do you decide which avenues of research are worth pursuing ? You could easily answer, "whichever avenues would increase my efficiency of achieving my terminal goals", but how do you know which avenues would actually do that ? For example, if you didn't know anything about electricity or magnetism or the nature of light, how would your research-choosing algorithm ensure that you'd eventually stumble upon radio waves, which, as we know in hindsight, are hugely useful ?

When we didn't know what things like radio waves or x-rays were, we didn't know that they would be useful, but we could see that there appeared to be some sort of existing phenomena that we didn't know how to model, so we examined them until we knew how to model them. It's not like we performed a whole bunch of experiments in case there turned out to be invisible rays our observations had never hinted at, which could be turned to useful ends. The original observations of radio waves and x-rays came from our experiments with other known phenomena.

What you're suggesting sounds more like experimenting completely blindly; you're committing resources to research, not just not knowing that it will bear valuable fruit, but not having any indication that it's going to shed light on any existing phenomenon at all. That's why I think it's less like investigating invisible rays than like building a protein collider; we didn't try studying invisible rays until we had a good indication that there was an invisible something to be studied.

comment by Bugmaster · 2013-05-02T00:23:10.290Z · score: 0 (0 votes) · LW · GW

Unless I'm critically misunderstanding something here, I would think that Clippy would remove it if it calculated that removing it would result in more expected paperclips.

Ok, so Clippy would need to run sim-Clippy for a little while at least, just to make sure that it still produces paperclips -- and that, in fact, it does so more efficiently now, since that one useless test is removed. Yes, this test used to be Clippy's terminal goal, but it wasn't doing anything, so Clippy took it out.

Would it be possible for Clippy to optimize its goals even further ? To use another silly example ("silly" because Clippy would be dealing with probabilities, not syllogisms), if Clippy had the goals A, B and C, but B always entailed C, would it go ahead and remove C ?

It's not like we performed a whole bunch of experiments in case there turned out to be invisible rays our observations had never hinted at...

Understood, that makes sense. However, I believe that in my scenario, Clippy's own behavior and its current paperclip production efficiency are what it observes; and the goal of its experiments would be to explain why its efficiency is what it is, in order to ultimately improve it.

comment by Desrtopa · 2013-05-02T00:48:42.749Z · score: 1 (1 votes) · LW · GW

Ok, so Clippy would need to run sim-Clippy for a little while at least, just to make sure that it still produces paperclips -- and that, in fact, it does so more efficiently now, since that one useless test is removed. Yes, this test used to be Clippy's terminal goal, but it wasn't doing anything, so Clippy took it out.

Would it be possible for Clippy to optimize its goals even further ? To use another silly example ("silly" because Clippy would be dealing with probabilities, not syllogisms), if Clippy had the goals A, B and C, but B always entailed C, would it go ahead and remove C ?

That seems plausible.

Understood, that makes sense. However, I believe that in my scenario, Clippy's own behavior and its current paperclip production efficiency are what it observes; and the goal of its experiments would be to explain why its efficiency is what it is, in order to ultimately improve it.

I don't think tampering with its fundamental motivation to make paperclips is a particularly promising strategy for optimizing its paperclip production.

comment by Bugmaster · 2013-05-09T05:06:31.287Z · score: 0 (0 votes) · LW · GW

That seems plausible.

Ok, so now we've got a Clippy who a). is not too averse to tinkering with its own goals, as long as the goals remain functionally the same, b). simulates a relatively long-running version of itself, and c). is capable of examining the inner workings of both that version and itself.

You say,

I don't think tampering with its fundamental motivation to make paperclips is a particularly promising strategy for optimizing its paperclip production.

But remember, at this stage Clippy is not changing its own fundamental motivation (beyond some outcome-invariant optimizations); it's merely observing sim-Clippies in a controlled environment.

Do you think that Clippy would ever simulate versions of itself whose fundamental motivations were, in fact, changed ? I could see several scenarios where this might be the case, for example:

  • Clippy wanted to optimize some goal, but ended up accidentally changing it. Oops !
  • Clippy created a version with drastically reduced goals on purpose, in order to measure how much performance is affected by certain goals, thus targeting them for possible future optimization. Of course, Clippy would only want to optimize the goals, not remove them.

comment by Desrtopa · 2013-05-09T12:48:28.691Z · score: 0 (0 votes) · LW · GW

But remember, at this stage Clippy is not changing its own fundamental motivation (beyond some outcome-invariant optimizations); it's merely observing sim-Clippies in a controlled environment.

Why does it do that? I said it sounded plausible that it would cut out its redundant goal, because that would save computing resources. But this sounds like we've gone back to experimenting blindly. Why would it think observing sim-Clippies is a good use of its computing resources in order to maximize paperclips?

I'd say that Clippy simulating versions of itself whose fundamental motivations are different is much less plausible, because it's using a lot of computing resources for something that isn't a likely route to optimizing its paperclip production. I think this falls into the "protein collider" category. Even if it did do so, I think it would be unlikely to go from there to changing its own terminal value.

comment by Kindly · 2013-05-01T14:31:05.467Z · score: 0 (0 votes) · LW · GW

Unless I'm critically misunderstanding something here, I would think that Clippy would remove it if it calculated that removing it would result in more expected paperclips.

It would also be critical for Clippy to observe that removing that value would not result in more expected actions taken that satisfy both A and not-A; this being one of Clippy's values at the time of modification.

comment by Desrtopa · 2013-05-01T14:35:23.457Z · score: 0 (0 votes) · LW · GW

Right, I misread that before. If its programming says to reject actions that satisfy A and not-A, but this isn't one of the standards by which it judges value, it would presumably remove it. If that is one of the standards by which it measures value, then it would depend on how that value measures up against its value of paperclips and the extent to which they conflict.

comment by PrawnOfFate · 2013-04-24T22:43:56.996Z · score: -3 (7 votes) · LW · GW

As far as I understand, if anything like objective morality existed, it would be a property of our physical reality, similar to fluid dynamics or the electromagnetic spectrum or the inverse square law that governs many physical interactions. The same laws of physics that will not allow you to fly to Mars on a balloon will not allow you to perform certain immoral actions (at least, not without suffering some severe and mathematically predictable consequences).

Objective facts, in the sense of objectively true statements, can be derived from other objective facts. I don't know why you think some separate ontological category is required. I also don't know why you think the universe has to do the punishing. Morality is only of interest to the kind of agent that has values and lives in societies. Sanctions against moral lapses can be arranged at the social level, along with the inculcation of morality, debate about the subject, and so forth. Moral objectivism only supplies a good, non-arbitrary epistemic basis for these social institutions. It doesn't have to throw lightning bolts.

comment by PrawnOfFate · 2013-04-24T01:24:35.566Z · score: -5 (7 votes) · LW · GW

1). We lack any capability to actually replace our core values

...voluntarily.

2). We cannot truly imagine what it would be like not to have our core values.

Which is one of the reasons we cannot keep values stable by predicting the effects of whatever experiences we choose to undergo. How does your current self predict what an updated version would be like? The value stability problem is unsolved in humans and AIs.

comment by PrawnOfFate · 2013-04-23T22:51:23.970Z · score: -4 (8 votes) · LW · GW

but it hasn't compelled us to try and replace our values.

The ethical outlook of the Western world has changed greatly in the past 150 years.

comment by PrawnOfFate · 2013-04-23T15:16:44.398Z · score: -3 (3 votes) · LW · GW

Under a totally neutral lens, which implements no values at all, no system of behavior should look any more or less silly than any other?

Including arbitrary, biased or contradictory ones? Are there values built into logic/rationality?

comment by TimS · 2013-04-23T15:29:03.463Z · score: -1 (1 votes) · LW · GW

Arbitrary and biased are value judgments. If we decline to make any value judgments, I don't see any way to make those sorts of claims.

Whether more than one non-contradictory value system exists is the topic of the conversation, isn't it?

comment by Desrtopa · 2013-04-23T18:31:54.798Z · score: 2 (2 votes) · LW · GW

"Biased" is not necessarily a value judgment. Insofar as rationality as a system, orthogonal to morality, is objective, biases as systematic deviations from rationality are also objective.

Arbitrary carries connotations of value judgment, but in a sense I think it's fair to say that all values are fundamentally arbitrary. You can explain what caused an agent to hold those values, but you can't judge whether values are good or bad except by the standards of other values.

I'm going to pass on Eliezer's suggestion to stop engaging with PrawnOfFate. I don't think my time doing so so far has been well spent.

comment by PrawnOfFate · 2013-04-23T15:35:24.091Z · score: -2 (2 votes) · LW · GW

Arbitrary and biased are value judgments.

And they're built into rationality.

Whether more than one non-contradictory value system exists is the topic of the conversation, isn't it?

Non-contradictoriness probably isn't a sufficient condition for truth.

comment by TimS · 2013-04-23T15:52:00.347Z · score: -2 (2 votes) · LW · GW

Arbitrariness and bias are not defined properties in formal logic. The bare assertion that they are properties of rationality assumes the conclusion.

Keep in mind that "rationality" has a multitude of meanings, and this community's usage of rationality is idiosyncratic.

Non-contradictoriness probably isn't a sufficient condition for truth.

Sure, but the discussion is partially a search for other criteria to evaluate the truth of moral propositions. Arbitrariness is not such a criterion. If you were to taboo arbitrary, I strongly suspect you'd find moral propositions that are inconsistent with being values-neutral.

comment by PrawnOfFate · 2013-04-23T21:12:51.653Z · score: -4 (6 votes) · LW · GW

Arbitrariness and bias are not defined properties in formal logic. The bare assertion that they are properties of rationality assumes the conclusion.

There's plenty of material on this site and elsewhere advising rationalists to avoid arbitrariness and bias. Arbitrariness and bias are essentially structural/functional properties, so I do not see why they could not be given formal definitions.

Sure, but the discussion is partially a search for other criteria to evaluate the truth of moral propositions. Arbitrariness is not such a criterion.

Arbitrary and biased claims are not candidates for being ethical claims at all.

comment by PrawnOfFate · 2013-04-20T12:45:39.114Z · score: -3 (3 votes) · LW · GW

The AI decides whether it will change its source code in a particular way or not by checking against whether this will serve its terminal values.

How does it predict that? How does the less intelligent version in the past predict what updating to a more intelligent version will do?

Can you see an "In order to maximize expected paperclips, I- modify my values to be in accordance with objective morality rather than making paperclips" coming into the picture?

How about: "in order to be an effective rationalist, I will free myself from all bias and arbitrariness -- oh, hang on, paperclipping is a bias..".

Well, a paperclipper would just settle for being a less than perfect rationalist. But that doesn't prove anything about typical, average rational agents, and it doesn't prove anything about ideal rational agents. Objective morality is sometimes described as what ideal rational agents would converge on. Clippers aren't ideal, because they have a blind spot about paperclips. Clippers aren't relevant.

comment by MugaSofer · 2013-04-23T11:33:41.632Z · score: 1 (3 votes) · LW · GW

paperclipping is a bias

How is paperclipping a bias?

comment by PrawnOfFate · 2013-04-23T11:51:39.164Z · score: -3 (7 votes) · LW · GW

Nobody cares about clips except Clippy. Clips can only seem important because of Clippy's egotistical bias.

comment by MugaSofer · 2013-04-23T13:57:54.672Z · score: 2 (4 votes) · LW · GW

Biases are not determined by vote.

comment by Juno_Watt · 2013-04-23T22:07:31.080Z · score: 0 (0 votes) · LW · GW

Unbiasedness is determined by even-handedness.

comment by Desrtopa · 2013-04-23T22:11:00.315Z · score: 0 (0 votes) · LW · GW

Evenhandedness with respect to what?

comment by Juno_Watt · 2013-04-23T22:44:49.832Z · score: 0 (0 votes) · LW · GW

One should have no bias with respect to what one is being evenhanded about.

comment by Desrtopa · 2013-04-23T22:48:48.657Z · score: 0 (0 votes) · LW · GW

So lack of bias means being evenhanded with respect to everything?

Is it bias to discriminate between people and rocks?

comment by MugaSofer · 2013-04-25T14:14:41.045Z · score: -2 (2 votes) · LW · GW

Taboo "even-handedness". Clippy treats humans just the same as any other animal with naturally evolved goal-structures.

comment by Juno_Watt · 2013-04-25T14:38:26.108Z · score: 1 (3 votes) · LW · GW

Clippy doesn't treat clips even-handedly with other small metal objects.

comment by MugaSofer · 2013-04-25T16:02:03.708Z · score: -1 (3 votes) · LW · GW

Humans don't treat pain evenhandedly with other emotions.

Friendly AIs don't treat people evenhandedly with other arrangements of matter.

Agents that value things don't treat world-states evenhandedly with other world-states.

comment by Desrtopa · 2013-04-20T13:29:47.496Z · score: 0 (0 votes) · LW · GW

Well, a paperclipper would just settle for being a less than perfect rationalist. But that doesn't prove anything about typical, average rational agents, and it doesn't prove anything about ideal rational agents.

You've extrapolated out "typical, average rational agents" from a set of one species, where every individual shares more than a billion years of evolutionary history.

Objective morality is sometimes described as what ideal rational agents would converge on

On what basis do you conclude that this is a real thing, whereas terminal values are a case of "all unicorns have horns?"

comment by PrawnOfFate · 2013-04-20T13:38:16.940Z · score: -3 (3 votes) · LW · GW

You've extrapolated out "typical, average rational agents" from a set of one species, where every individual shares more than a billion years of evolutionary history.

Messy solutions are more common in mindspace than contrived ones.

On what basis do you conclude that this is a real thing

"Non-neglible probabiity", remember.

comment by Desrtopa · 2013-04-20T13:41:36.481Z · score: 2 (2 votes) · LW · GW

Messy solutions are more common in mindspace than contrived ones.

Messy solutions are more often wrong than ones which control for the mess.

"Non-neglible probabiity", remember.

This doesn't even address my question.

comment by PrawnOfFate · 2013-04-20T14:05:16.958Z · score: -5 (5 votes) · LW · GW

Messy solutions are more often wrong than ones which control for the mess.

Something that is wrong is not a solution. Mindspace is populated by solutions to the problem of how to implement a mind. It's a small corner of algorithm-space.

This doesn't even address my question.

Since I haven't claimed that rational convergence on ethics is highly likely or inevitable, I don't have to answer questions about why it would be highly likely or inevitable.

comment by Desrtopa · 2013-04-20T14:12:18.204Z · score: 1 (1 votes) · LW · GW

Do you think that it's even plausible? Do you think we have any significant reason to suspect it, beyond our reason to suspect, say, that the Invisible Flying Noodle Monster would just reprogram the AI with its noodley appendage?

comment by PrawnOfFate · 2013-04-20T14:18:45.029Z · score: -1 (5 votes) · LW · GW

There are experts in moral philosophy, and they generally regard the question of realism versus relativism (etc.) to be wide open. The "realism -- huh, what, no?!?" response is standard on LW and only on LW. But I don't see any superior understanding on LW.

comment by nshepperd · 2013-04-20T16:31:48.358Z · score: 3 (5 votes) · LW · GW

Both realism¹ and relativism are false. Unfortunately this comment is too short to contain the proof, but there's a passable sequence on it.

¹ As you've defined it here, anyway. Moral realism as normally defined simply means "moral statements have truth values" and does not imply universal compellingness.

comment by TimS · 2013-04-20T17:21:10.565Z · score: 0 (0 votes) · LW · GW

What does it mean for a statement to be true but not universally compelling?

If it isn't universally compelling for all agents to believe "gravity causes things to fall," then what do we mean when we say the sentence is true?

comment by nshepperd · 2013-04-21T00:53:16.475Z · score: 2 (2 votes) · LW · GW

Well, there's the more obvious sense, that there can always exist an "irrational" mind that simply refuses to believe in gravity, regardless of the strength of the evidence. "Gravity makes things fall" is true, because it does indeed make things fall. But not compelling to those types of minds.

But, in a more narrow sense, which we are more interested in when doing metaethics, a sentence of the form "action A is xyzzy" may be a true classification of A, and may be trivial to show, once "xyzzy" is defined. But an agent that did not care about xyzzy would not be moved to act based on that. It could recognise the truth of the statement but would not care.

For a stupid example, I could say to you "if you do 13 push-ups now, you'll have done a prime number of push-ups". Well, the statement is true, but the majority of the world's population would be like "yeah, so what?".

In contrast, a statement like "if you drink-drive, you could kill someone!" is generally (but sadly not always) compelling to humans. Because humans like to not kill people, they will generally choose not to drink-drive once they are convinced of the truth of the statement.

comment by TimS · 2013-04-21T01:14:23.363Z · score: 1 (1 votes) · LW · GW

But isn't the whole debate about moral realism vs. anti-realism whether "Don't murder" is universally compelling to humans? Noticing that pebblesorters aren't compelled by our values doesn't explain whether humans should necessarily find "don't murder" compelling.

comment by pragmatist · 2013-04-21T08:55:17.706Z · score: 2 (2 votes) · LW · GW

I identify as a moral realist, but I don't believe all moral facts are universally compelling to humans, at least not if "universally compelling" is meant descriptively rather than normatively. I don't take moral realism to be a psychological thesis about what particular types of intelligences actually find compelling; I take it to be the claim that there are moral obligations and that certain types of agents should adhere to them (all other things being equal), irrespective of their particular desire sets and whether or not they feel any psychological pressure to adhere to these obligations. This is a normative claim, not a descriptive one.

comment by nshepperd · 2013-04-21T01:38:35.183Z · score: 2 (2 votes) · LW · GW
  1. What? Moral realism (in the philosophy literature) is about whether moral statements have truth values, that's it.

  2. When I said universally compelling, I meant universally. To all agents, not just humans. Or any large class. For any true statement, you can probably expect to find a surprisingly large number of agents who just don't care about it.

  3. Whether "don't murder" (or rather, "murder is bad" since commands don't have truth values, and are even less likely to be generally compelling) is compelling to all humans is a question for psychology. As it happens, given the existence of serial killers and sociopaths, probably the answer is no, it isn't. Though I would hope it to be compelling to most.

  4. I have shown you two true but non-universally-compelling arguments. Surely the difference must be clear now.

comment by pragmatist · 2013-04-21T08:50:38.977Z · score: 3 (3 votes) · LW · GW

What? Moral realism (in the philosophy literature) is about whether moral statements have truth values, that's it.

This is incorrect, in my experience. Although "moral realism" is a notoriously slippery phrase and gets used in many subtly different ways, I think most philosophers engaged in the moral realism vs. anti-realism debate aren't merely debating whether moral statements have truth values. The position you're describing is usually labeled "moral cognitivism".

Anyway, I suspect you mis-spoke here, and intended to say that moral realists claim that (certain) moral statements are true, rather than just that they have truth values ("false" is a truth value, after all). But I don't think that modification captures the tenor of the debate either. Moral realists are usually defending a whole suite of theses -- not just that some moral statements are true, but that they are true objectively and that certain sorts of agents are under some sort of obligation to adhere to them.

comment by Bugmaster · 2013-04-23T02:21:25.016Z · score: 1 (1 votes) · LW · GW

I think you guys should taboo "moral realism". I understand that it's important to get the terminology right, but IMO debates about nothing but terminology have little value.

comment by nshepperd · 2013-04-21T13:07:58.268Z · score: 1 (1 votes) · LW · GW

Anyway, I suspect you mis-spoke here, and intended to say that moral realists claim that (certain) moral statements are true, rather than just that they have truth values ("false" is a truth value, after all).

Err, right, yes, that's what I meant. Error theorists do of course also claim that moral statements have truth values.

Moral realists are usually defending a whole suite of theses -- not just that some moral statements are true, but that they are true objectively and that certain sorts of agents are under some sort of obligation to adhere to them.

True enough, though I guess I'd prefer to talk about a single well-specified claim than a "usually" cluster in philosopher-space.

comment by TimS · 2013-04-21T02:18:55.806Z · score: 0 (0 votes) · LW · GW

So, a philosopher who says:

I believe the Orthogonality thesis, but I think there are empirical facts that show any human who denies that murder is wrong is defective.

is not a moral realist? Because that philosopher does not seem to be a subjectivist, an error theorist, or non-cognitivist.

comment by nshepperd · 2013-04-21T05:37:15.459Z · score: 0 (0 votes) · LW · GW

If that philosopher believes that statements like "murder is wrong" are true, then they are indeed a realist. Did I say something that looked like I would disagree?

comment by [deleted] · 2013-04-21T08:46:16.452Z · score: 1 (1 votes) · LW · GW

You guys are talking past each other, because you mean something different by 'compelling'. I think Tim means that X is compelling to all human beings if any human being will accept X under ideal epistemic circumstances. You seem to take 'X is universally compelling' to mean that all human beings already do accept X, or would on a first hearing.

Would you agree that all human beings would accept all true statements under ideal epistemic circumstances (i.e. having heard all the arguments, seen all the evidence, in the best state of mind)?

comment by nshepperd · 2013-04-21T13:17:42.233Z · score: 0 (0 votes) · LW · GW

I guess I must clarify. When I say 'compelling' here I am really talking mainly about motivational compellingness. Saying "if you drink-drive, you could kill someone!" to a human is generally, motivationally compelling as an argument for not drink-driving: because humans don't like killing people, a human will decide not to drink-drive (one in a rational state of mind, anyway).

This is distinct from accepting statements as true or false! Any rational agent, give or take a few, will presumably believe you about the causal relationship between drink-driving and manslaughter once presented with sufficient evidence. But it is a tiny subset of these who will change their decisions on this basis. A mind that doesn't care whether it kills people will see this information as an irrelevant curiosity.

comment by [deleted] · 2013-04-20T16:55:34.940Z · score: 0 (0 votes) · LW · GW

Having looked over that sequence, I haven't found any proof that moral realism (on either definition) or moral relativism is false. Could you point me more specifically to what you have in mind (or just put the argument in your own words, if you have the time)?

comment by [deleted] · 2013-04-20T17:17:49.590Z · score: 2 (2 votes) · LW · GW

No Universally Compelling Arguments is the argument against universal compellingness, as the name suggests.

Inseparably Right; or Joy in the Merely Good gives part of the argument that humans should be able to agree on ethical values. Another substantial part is in Moral Error and Moral Disagreement.

comment by [deleted] · 2013-04-20T17:25:13.744Z · score: 0 (0 votes) · LW · GW

Thanks!

Edit: (Sigh), I appreciate the link, but I can't make heads or tails of 'No Universally Compelling Arguments'. I speak from ignorance as to the meaning of the article, but I can't seem to identify the premises of the argument.

comment by [deleted] · 2013-04-20T17:54:16.588Z · score: 1 (1 votes) · LW · GW

The central point is a bit buried.

If we restrict ourselves to minds specifiable in a trillion bits or less, then each universal generalization "All minds m: X(m)" has two to the trillionth chances to be false, while each existential generalization "Exists mind m: X(m)" has two to the trillionth chances to be true.

This would seem to argue that for every argument A, howsoever convincing it may seem to us, there exists at least one possible mind that doesn't buy it.
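
A rough way to see the arithmetic behind that (a sketch resting on an explicit independence assumption that the quoted post does not state): if each candidate mind satisfies a property X independently with probability p strictly between 0 and 1, then

    \Pr[\forall m\, X(m)] = p^{N} \approx 0,
    \qquad
    \Pr[\exists m\, X(m)] = 1 - (1 - p)^{N} \approx 1,
    \qquad N \approx 2^{10^{12}}.

The independence assumption is doing nearly all of the work here, which is one reason the original reads as a plausibility argument rather than a proof.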

So, there's some sort of assumption as to what minds are:

I also wish to establish the notion of a mind as a causal, lawful, physical system... [emphasis original]

and an assumption that a suitably diverse set of minds can be described in less than a trillion bits. Presumably the reason for that upper bound is because there are a few Fermi estimates that the information content of a human brain is in the neighborhood of one trillion bits.

Of course, if you restrict the set of minds to those with special properties (e.g., human minds), then you might find universally compelling arguments on that basis:

Oh, there might be argument sequences that would compel any neurologically intact human...

From which we get Coherent Extrapolated Volition and friends.

comment by [deleted] · 2013-04-20T18:58:14.062Z · score: 0 (0 votes) · LW · GW

If we restrict ourselves to minds specifiable in a trillion bits or less, then each universal generalization "All minds m: X(m)" has two to the trillionth chances to be false, while each existential generalization "Exists mind m: X(m)" has two to the trillionth chances to be true.

This doesn't seem true to me, at least not as a general rule. For example, given every terrestrial DNA sequence describable in a trillion bits or less, it is not the case that every generalization of the form 's:X(s)' has two to the trillionth chances to be false (e.g. 'have more than one base pair', 'involve hydrogen' etc.). Given that this doesn't hold true of many other things, is this supposed to be a special fact about minds? Even then, it would seem odd to say that while all generalizations of the form m:X(m) have two to the trillionth chances to be false, nevertheless the generalization 'for all minds, a generalization of the form m:X(m) has two to the trillionth chances to be false' (which does seem to be of the form m:X(m)) is somehow more likely.

Also, doesn't this inference imply that 'being convinced by an argument' is a bit that can flip on or off independently of any others? Eliezer doesn't think that's true, and I can't imagine why he would think his (hypothetical) interlocutor would accept it.

comment by [deleted] · 2013-04-20T20:20:57.035Z · score: 0 (0 votes) · LW · GW

It's not a proof, no, but it seems plausible.

comment by [deleted] · 2013-04-20T20:34:21.016Z · score: 0 (0 votes) · LW · GW

I mean to say, I think the argument is something of a paradox:

The claim the argument purports to defeat is something like this: for all minds, A is convincing. Let's call this m:A(m).

The argument goes like this: for all minds (at or under a trillion bits etc.), a generalization of the form m:X(m) has a one in two to the trillionth chance of being true for each mind. Call this m:U(m), if you grant me that this claim has the form m:X(m).

If we infer from m:U(m) that any claim of the form m:X(m) is unlikely to be true, then to whatever extent I am persuaded that m:A(m) is unlikely to be true, to that extent I ought to be persuaded that m:U(m) is unlikely to be true. You cannot accept the argument, because accepting it as decisive entails accepting decisive reasons for rejecting it.
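
Put semi-formally (a sketch of the same move, keeping the "if you grant the form" proviso from above):

    A \;\equiv\; \forall m\, C(m),
    \qquad
    U \;\equiv\; \text{``every claim of the form } \forall m\, X(m) \text{ is implausible''}.

If U is granted to have the form "for all minds m, X(m)", then U applies to itself and yields "U is implausible"; so taking U as decisive grounds against A supplies equally decisive grounds against U.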

The argument seems to be fixable at this stage, since there's a lot of room to generate significant distinctions between m:A(m) and m:U(m). If you were pressed to defend it (presuming you still wish to be generous with your time) how would you fix this? Or am I getting something very wrong?

comment by [deleted] · 2013-04-20T20:58:22.386Z · score: -1 (1 votes) · LW · GW

for all minds (at or under a trillion bits etc.), a generalization of the form m:X(m) has a one in two to the trillionth chance of being true for each mind.

That's not what it says; compare the emphasis in both quotes.

If we restrict ourselves to minds specifiable in a trillion bits or less, then each universal generalization "All minds m: X(m)" has two to the trillionth chances to be false, while each existential generalization "Exists mind m: X(m)" has two to the trillionth chances to be true.

comment by [deleted] · 2013-04-20T21:04:59.469Z · score: 0 (0 votes) · LW · GW

Sorry, I may have misunderstood and presumed that 'two to the trillionth chances to be false' meant 'one in two to the trillionth chances to be true'. That may be wrong, but it doesn't affect my argument at all: EY's argument for the implausibility of m:A(m) is that claims of the form m:X(m) are all implausible. His argument to the effect that all claims of the form m:X(m) are implausible is itself a claim of the form m:X(m).

comment by PrawnOfFate · 2013-04-20T18:12:02.497Z · score: -1 (1 votes) · LW · GW

"Rational" is broader than "human" and narrower than "physically possible".

comment by [deleted] · 2013-04-20T19:00:05.244Z · score: 0 (2 votes) · LW · GW

"Rational" is broader than "human" and narrower than "physically possible".

Do you really mean to say that there are physically possible minds that are not rational? In virtue of what are they 'minds' then?

comment by PrawnOfFate · 2013-04-21T02:12:46.818Z · score: 1 (3 votes) · LW · GW

Do you really mean to say that there are physically possible minds that are not rational?

Yes. There are irrational people, and they still have minds.

comment by [deleted] · 2013-04-21T02:19:08.399Z · score: 0 (0 votes) · LW · GW

Ah, I think I just misunderstood which sense of 'rational' you intended.

comment by [deleted] · 2013-04-20T20:21:30.366Z · score: -1 (1 votes) · LW · GW

Do you really mean to say that there are physically possible minds that are not rational?

Haven't you met another human?

comment by [deleted] · 2013-04-20T20:48:07.404Z · score: 0 (0 votes) · LW · GW

Sorry, I was speaking ambiguously. I meant 'rational' not in the normative sense that distinguishes good agents from bad ones, but 'rational' in the broader, descriptive sense that distinguishes anything capable of responding to reasons (even terrible or false ones) from something that isn't. I assumed that was the sense of 'rational' Prawn was using, but that may have been wrong.

comment by PrawnOfFate · 2013-04-20T17:28:50.494Z · score: -5 (7 votes) · LW · GW

No Universally Compelling Arguments

Irrelevant. I am talking about rational minds, he is talking about physically possible ones.

As noted at the time

comment by [deleted] · 2013-04-20T17:32:30.601Z · score: -1 (1 votes) · LW · GW

Irrelevant. I am talking about rational minds, he is talking about physically possible ones.

UFAI sounds like a counterexample, but I'm not interested in arguing with you about it. I only responded because someone asked for a shortcut in the metaethics sequence.

comment by PrawnOfFate · 2013-04-20T17:38:45.172Z · score: -3 (5 votes) · LW · GW

I have essentially been arguing against a strong likelihood of UFAI, so that would be more like gainsaying.

comment by PrawnOfFate · 2013-04-20T17:25:12.165Z · score: -4 (4 votes) · LW · GW

Congratulations on being able to discern an overall message to EY's metaethical disquisitions. I never could.

comment by Desrtopa · 2013-04-20T14:25:54.600Z · score: 2 (2 votes) · LW · GW

Can you explain what you could see which would suggest to you a greater level of understanding than is prevalent among moral philosophers?

Also, moral philosophers mostly regard the question as open in the sense that some of them think that it's clearly resolved in favor of non-realism, and some philosophers are just not getting it, or that it's clearly resolved in favor of realism, and some philosophers are just not getting it. Most philosophers are not of the opinion that it could turn out either way and we just don't know yet.

comment by PrawnOfFate · 2013-04-20T15:37:06.661Z · score: -6 (6 votes) · LW · GW

Can you explain what you could see which would suggest to you a greater level of understanding than is prevalent among moral philosophers?

What I am seeing is

  • much-repeated confusions--the Standard Muddle

  • appeals to LW doctrines which aren't well-founded or well respected outside LW.

If I knew exactly what the superior insight into the problem was, I would write it up and become famous. Insight doesn't work like that; you don't know it in advance, you get an "Aha" when you see it.

Also, moral philosophers mostly regard the question as open in the sense that some of them think that it's clearly resolved in favor of non-realism, and some philosophers are just not getting it, or that it's clearly resolved in favor of realism, and some philosophers are just not getting it. Most philosophers are not of the opinion that it could turn out either way and we just don't know yet.

If people can't agree on how a question is closed, it's open.

comment by Desrtopa · 2013-04-20T15:44:34.688Z · score: 2 (2 votes) · LW · GW

much-repeated confusions--the Standard Muddle

Can you explain what these confusions are, and why they're confused?

In my time studying philosophy, I observed a lot of confusions which are largely dispensed with on Less Wrong. Luke wrote a series of posts on this. This is one of the primary reasons I bothered sticking around in the community.

If people can't agree on how a question is closed, it's open.

A question can still be "open" in that sense when all the information necessary for a rational person to make a definite judgment is available.

comment by PrawnOfFate · 2013-04-20T17:36:55.511Z · score: -4 (4 votes) · LW · GW

Can you explain what these confusions are, and why they're confused?

Eg.

  • You are trying to impose your morality.

  • I can think of one model of moral realism, and it doesn't work, so I will ditch the whole thing.

In my time studying philosophy, I observed a lot of confusions which are largely dispensed with on Less Wrong. Luke wrote a series of posts on this.

LW doesn't even claim to have more than about two "dissolutions". There are probably hundreds of outstanding philosophical problems. Whence the "largely"?

Luke wrote a series of posts on this

Which were shot down by philosophers.

A question can still be "open" in that sense when all the information necessary for a rational person to make a definite judgment is available.

Then it can only be open in the opinions of the irrational. So basically you are saying the experts are incompetent.

comment by Desrtopa · 2013-04-20T18:14:24.781Z · score: 2 (2 votes) · LW · GW

You are trying to impose your morality.

In what respect?

I can think of one model of moral realism, and it doesn't work, so I will ditch the whole thing.

This certainly doesn't describe my reasoning on the matter, and I doubt it describes many others' here either.

The way I consider the issue, if I try to work out how the universe works from the ground up, I cannot see any way that moral realism would enter into it, whereas I can easily see how value systems would, so I regard assigning non-negligible probability to moral realism as privileging the hypothesis until I find some compelling evidence to support it, which, having spent a substantial amount of time studying moral philosophy, I have not yet found.

LW doesn't even claim to have more than about two "dissolutions". There are probably hundreds of outstanding philosophical problems. Whence the "largely"?

I gave up my study of philosophy because I found such confusions so pervasive. Many "outstanding" philosophical problems can be discarded because they rest on other philosophical problems which can themselves be discarded.

Which were shot down by philosophers.

Can you give any examples of such, where you think that the philosophers in question addressed legitimate errors?

Then it can only be open in the opinions of the irrational. So basically you are saying the experts are incompetent.

Yes. I am willing to assert that while there are some competent philosophers, many philosophical disagreements exist only because of incompetent "experts" perpetuating them. This is the conclusion that my experience with the field has wrought.

comment by PrawnOfFate · 2013-04-20T18:32:59.340Z · score: -2 (2 votes) · LW · GW

This certainly doesn't describe my reasoning on the matter, and I doubt it describes many others' here either.

I mentioned them because they both came up recently

The way I consider the issue, if I try to work out how the universe works from the ground up, I cannot see any way that moral realism would enter into it, whereas I can easily see how value systems would, so I regard assigning non-negligible probability to moral realism as privileging the hypothesis until I find some compelling evidence to support it, which, having spent a substantial amount of time studying moral philosophy, I have not yet found.

I have no idea what you mean by that. I don't think value systems don't come into it, I just think they are not isolated from rationality. And I am sceptical that you could predict any higher-level phenomenon from "the ground up", whether it's morality or mortgages.

I gave up my study of philosophy because I found such confusions so pervasive. Many "outstanding" philosophical problems can be discarded because they rest on other philosophical problems which can themselves be discarded.

Where is it proven they can be discarded?

Can you give any examples of such, where you think that the philosophers in question addressed legitimate errors?

All of them.

Yes. I am willing to assert that while there are some competent philosophers, many philosophical disagreements exist only because of incompetent "experts" perpetuating them. This is the conclusion that my experience with the field has wrought.

Are you aware that that is basically what every crank says about some other field?

comment by TheOtherDave · 2013-04-20T19:26:29.861Z · score: 2 (2 votes) · LW · GW

Are you aware that that is basically what every crank says about some other field?

Presumably, if I'm to treat as meaningful evidence about Desrtopa's crankiness the fact that cranks make statements similar to Desrtopa's, I should first confirm that non-cranks don't make similar statements.

It seems likely to me that for every person P, there exists some field F such that P believes many aspects of F exist only because of incompetent "experts" perpetuating them. (Consider cases like F=astrology, F=phrenology, F=supply-side economics, F=feminism, etc.) And that this is true whether P is a crank or a non-crank.

So it seems this line of reasoning depends on some set F2 of fields such that P believes this of F in F2 only if P is a crank.

I understand that you're asserting implicitly that moral philosophy is a field in F2, but this seems to be precisely what Desrtopa is disputing.

comment by [deleted] · 2013-04-20T19:40:02.968Z · score: 0 (0 votes) · LW · GW

Could we reasonably say that an F is in F2 if most of the institutional participants in that F are intelligent, well-educated people? This leaves room for cranks who are right to object to F, of course.

comment by TheOtherDave · 2013-04-20T19:53:14.007Z · score: 0 (0 votes) · LW · GW

So, just to pick an example, IIRC Dan Dennett believes the philosophical study of consciousness (qualia, etc.) is fundamentally confused in more or less the same way Desrtopa claims the philosophical study of ethics is.

So under this formulation, if most of the institutional participants in the philosophical study of consciousness are intelligent, well-educated people, Dan Dennett is a crank?

No, I don't think we can reasonably say that. Dan Dennett might be a crank, but it takes more than that argument to demonstrate the fact.

comment by [deleted] · 2013-04-20T20:55:52.247Z · score: 0 (0 votes) · LW · GW

Good point. So how about this: someone is a crank if they object to F, where F is in F2 (by my above standard), and the reasons they have for objecting to F are not recognized as sound by a proportionate number of intelligent and well educated people.

comment by TheOtherDave · 2013-04-20T21:11:40.913Z · score: 0 (0 votes) · LW · GW

(shrug) I suppose that works well enough, for some values of "proportionate."

Mostly I consider this a special case of the basic "who do I trust?" social problem, applied to academic disciplines, and I don't have any real problem saying about an academic discipline "this discipline is fundamentally confused, and the odds of work in it contributing anything valuable to the world are slim."

Of course, as Prawn has pointed out a few times, there's also the question of where we draw the lines around a discipline, but I mostly consider that an orthogonal question to how we evaluate the discipline.

comment by [deleted] · 2013-04-20T21:18:10.606Z · score: 0 (0 votes) · LW · GW

I think this question is moot in the case of philosophy in general then; I think any philosopher worth their salt should tell you that trust is a wholly inappropriate attitude toward philosophers, philosophical institutions and philosophical traditions.

comment by TheOtherDave · 2013-04-20T21:27:48.889Z · score: 0 (0 votes) · LW · GW

Not in the sense I meant it.
If a philosopher makes a claim that seems on the surface to be false or incoherent, I have to decide whether to devote the additional effort to evaluating it to confirm or deny that initial judgment. One of the factors that will feed into that decision will be my estimate of the prior probability that they are saying something false or incoherent.
If I should refer to that using a word other than "trust", that's fine, tell me what word will refer to that to you and I'll try to use it instead.

comment by [deleted] · 2013-04-20T22:39:03.287Z · score: 0 (0 votes) · LW · GW

No, that describes what I'm talking about, so long as by trust you mean 'a reason to hear out an argument that makes reference to the credibility of a field or its professionals', rather than just 'a reason to hear out an argument'. If the former, then I do think this is an inappropriate attitude toward philosophy. One reason for this is that such trust seems to depend on having a good standard for the success of a field independently of hearing out an argument. I can trust physicists because they make such good predictions, and because their work leads to such powerful technological advances. I don't need to be a physicist to observe that. I don't think philosophy has anything like that to speak for it. The only standards of success are the arguments themselves, and you can only evaluate them by just going ahead and doing some philosophy.

You can find trust in an institution independently of such standards by watching to see whether people you think are otherwise credible take it seriously. That will of course work with philosophy too, but if you trust Tom to be able to judge whether or not a philosophical claim is worth pursuing (and if I'm right about the above), then Tom can only be trustworthy in this regard because he has been doing philosophy (i.e. engaging with the argument). This could get you through the door on some particular philosophical claim, but not into philosophy generally.

comment by TheOtherDave · 2013-04-20T22:44:59.348Z · score: 0 (0 votes) · LW · GW

so long as by trust you mean 'a reason to hear out an argument that makes reference to the credibility of a field or its professionals', rather than just 'a reason to hear out an argument'.

I mean neither, I mean 'a reason to devote time and resources to evaluating the evidence for and against a position.' As you say, I can only evaluate a philosophical argument by 'going ahead and doing some philosophy,' (for a sufficiently broad understanding of 'philosophy'), but my willingness to do, say, 20 hours of philosophy in order to evaluate Philosopher Sam's position is going to depend on, among other things, my estimate of the prior probability that Sam is saying something false or incoherent. The likelier I think that is, the less willing I am to spend those 20 hours.

comment by [deleted] · 2013-04-20T22:52:24.182Z · score: 0 (0 votes) · LW · GW

I mean neither, I mean 'a reason to devote time and resources to evaluating the evidence for and against a position.'

That's fine, that's not different from 'hearing out an argument' in any way important to my point (unless I'm missing something).

EDIT: Sorry, if you don't want to include 'that makes some reference to the credibility...etc.' (or something like that) in what you mean by 'trust' then you should use a different term. Curiosity, or money, or romantic interest would all be reasons to devote time...etc. and clearly none of those are rightly called 'trust'.

my estimate of the prior probability that Sam is saying something false or incoherent.

What do you have in mind as the basis for such a prior? Can you give me an example?

comment by TheOtherDave · 2013-04-20T23:11:41.625Z · score: 0 (0 votes) · LW · GW

Point taken about other reasons to devote resources other than trust. I think we're good here.

Re: example... I don't mean anything deeply clever. E.g., if the last ten superficially-implausible ideas Sam espoused were false or incoherent, my priors for it will be higher than if the last ten such ideas were counterintuitive and brilliant.

comment by [deleted] · 2013-04-20T23:26:20.630Z · score: 0 (0 votes) · LW · GW

Re: example...

Hm. I can't argue with that, and I suppose it's trivial to extend that to 'if the last ten superficially-implausible ideas philosophy professors/books/etc. espoused were false or incoherent...'. So, okay, trust is an appropriate (because necessary) attitude toward philosophers and philosophical institutions. I think it's right to say that philosophy doesn't have external indicators in the way physics or medicine does, but the importance of that point seems diminished.

comment by PrawnOfFate · 2013-04-20T20:04:15.746Z · score: -1 (1 votes) · LW · GW

So, just to pick an example, IIRC Dan Dennett believes the philosophical study of consciousness (qualia, etc.) is fundamentally confused in more or less the same way Desrtopa claims the philosophical study of ethics is.

Dennett only thinks the idea of qualia is confused. He has no problem with his own books on consciousness.

So under this formulation, if most of the institutional participants in the philosophical study of consciousness are intelligent, well-educated people, Dan Dennett is a crank?

No. He isn't dismissing a whole academic subject, or a sub-field. Just one idea.

comment by TheOtherDave · 2013-04-20T20:33:27.255Z · score: 1 (1 votes) · LW · GW

What is Dennett's account for why philosophers of consciousness other than himself continue to think that a dismissable idea like qualia is worth continuing to discuss, even though he considers it closed?

comment by PrawnOfFate · 2013-04-20T19:37:54.008Z · score: -2 (4 votes) · LW · GW

Desrtopa doesn't think moral philosophy is uniformly nonsense, since Desrtopa thinks one of its well known claims, moral relativism, is true.

comment by Kawoomba · 2013-04-20T19:45:42.715Z · score: -3 (3 votes) · LW · GW

While going on tangents is a common and expected occurrence, each such tangent has a chance of steering/commandeering the original conversation. LW has a tendency of going meta too much, when actual object level discourse would have a higher content value.

While you were practically invited to indulge in the death-by-meta with the hook of "Are you aware that that is basically what every crank says about some other field?", we should be aware when leaving the object-level debating, and the consequences thereof. Especially since the lure can be strong:

When sufficiently meta, object-level disagreements may fizzle into cosmic/abstract insignificance, allowing for a peaceful pseudo-resolution, which ultimately just protects that which should be destroyed by the truth from being destroyed.

Such lures may be interpreted similarly to ad hominems: The latter try to drown out object-level disagreements by flinging shit until everyone's dirty, the former zoom out until everyone's dizzy floating in space, with vertigo. Same result to the actual debate. It's an effective device, and one usually embraced by someone who feels like object-level arguments no longer serve his/her goals.

Ironically, this very comment goes meta lamenting going meta.

comment by Desrtopa · 2013-04-20T19:16:09.105Z · score: 2 (2 votes) · LW · GW

I have no idea what you mean by that. I don't think value systems don't come into it, I just think they are not isolated from rationality. And I am sceptical that you could predict any higher-level phenomenon from "the ground up", whether it's morality or mortgages.

I mean that value systems are a function of physically existing things, the way a 747 is a function of physically existing things, but we have no evidence suggesting that objective morality is an existing thing. We have standards by which we judge beauty, and we project those values onto the world, but the standards are in us, not outside of us. We can see, in reductionist terms, how the existence of ethical systems within beings, which would feel from the inside like the existence of an objective morality, would come about.

Create a reasoning engine that doesn't have those ethical systems built into it, and it would have no reason to care about them.

Where is it proven they can be discarded?

You can't build a tower on empty air. If a debate has been going on for hundreds of years, stretching back to an argument which rests on "this defies our moral intuitions, therefore it's wrong," and that was never addressed with "moral intuitions don't work that way," then the debate has failed to progress in a meaningful direction, much as a debate over whether a tree falling in an empty forest makes a sound has if nobody bothers to dissolve the question.

All of them.

That's not an example. Please provide an actual one.

Are you aware that that is basically what every crank says about some other field?

Sure, but it's also what philosophers say about each other, all the time. Wittgenstein condemned practically all his predecessors and peers as incompetent, and declared that he had solved nearly the entirety of philosophy. Philosophy as a field is full of people banging their heads on a wall at all those other idiots who just don't get it. "Most philosophers are incompetent, except for the ones who're sensible enough to see things my way," is a perfectly ordinary perspective among philosophers.

comment by PrawnOfFate · 2013-04-20T19:51:46.398Z · score: -1 (5 votes) · LW · GW

I mean that value systems are a function of physically existing things, the way a 747 is a function of physically existing things, but we have no evidence suggesting that objective morality is an existing thing.

But I wasn't saying that. I am arguing that moral claims have truth values that aren't indexed to individuals or societies. That epistemic claim can be justified by appeal to an ontology including Moral Objects, but that is not how I am justifying it: my argument is based on rationality, as I have said many times.

We have standards by which we judge beauty, and we project those values onto the world, but the standards are in us, not outside of us.

We have standards by which we judge the truth values of mathematical claims, and they are inside us too, and that doesn't stop mathematics being objective. Relativism requires that truth values are indexed to us, that there is one truth for me and another for thee. Being located in us, or being operated by us, are not sufficient criteria for being indexed to us.

We can see, in reductionist terms, how the existence of ethical systems within beings, which would feel from the inside like the existence of an objective morality, would come about.

We can see, in reductionistic terms, how the entities could converge on a uniform set of truth values. There is nothing non-reductionist about anything I have said. Reductionism does not force one answer to metaethics.

Create a reasoning engine that doesn't have those ethical systems built into it, and it would have no reason to care about them.

Provide evidence that ethics is a whole separate module, and not part of general reasoning ability.

You can't build a tower on empty air. If a debate has been going on for hundreds of years, stretching back to an argument which rests on "this defies our moral intuitions, therefore it's wrong," and that was never addressed with "moral intuitions don't work that way," then the debate has failed to progress in a meaningful direction, much as a debate over whether a tree falling in an empty forest makes a sound has if nobody bothers to dissolve the question.

Please explain why moral intuitions don't work that way.

Please provide some foundations for something that aren't unjustified by anything more foundational.

That's not an example. Please provide an actual one

You can select one at random, obviously.

Sure, but it's also what philosophers say about each other, all the time.

No, philosophers don't regularly accuse each other of being incompetent, just of being wrong. There's a difference.

Wittgenstein condemned practically all his predecessors and peers as incompetent, and declared that he had solved nearly the entirety of philosophy.

You are inferring a lot from one example.

Philosophy as a field is full of people banging their heads on a wall at all those other idiots who just don't get it. "Most philosophers are incompetent, except for the ones who're sensible enough to see things my way," is a perfectly ordinary perspective among philosophers.

Nope.

comment by Desrtopa · 2013-04-20T20:29:38.966Z · score: 1 (1 votes) · LW · GW

But I wasn't saying that. I am arguing that moral claims have truth values that aren't indexed to individuals or societies. That epistemic claim can be justified by appeal to an ontology including Moral Objects, but that is not how I am justifying it: my argument is based on rationality, as I have said many times.

I don't understand, can you rephrase this?

We have standards by which we judge the truth values of mathematical claims, and they are inside us too, and that doesn't stop mathematics being objective. Relativism requires that truth values are indexed to us, that there is one truth for me and another for thee. Being located in us, or being operated by us, are not sufficient criteria for being indexed to us.

The standards by which we judge the truth of mathematical claims are not just inside us. One object plus another object will continue to equal two objects whether or not there are any living beings to make that judgment. Math is not something we've created within ourselves, but something we've discovered and observed.

If our mathematical models ever stop being able to predict in advance the behavior of the universe, then we will have rather more reason to suspect that the math inside us is different from the math outside of us.

What evidence do we have that this is the case for morality?

Provide evidence that ethics is a whole separate module, and not part of general reasoning ability.

My assertion is that, if we judge ethics as a rational system, innate values are among the axioms that the system is predicated on. You cannot prove the axioms of a system within that system, and an ethical system predicated on premises like "happiness is good" will not itself be able to prove the goodness of happiness.

While we could suppose that the axioms which our ethical systems are predicated on are objectively true, we have considerable reason to believe that we would have developed these axioms for adaptive reasons, even if there were no sense in which objective moral axioms exist, and we do not have evidence which suggests that objective, independently existing true moral axioms do exist.

Please explain why moral intuitions don't work that way.

People can be induced to strongly support opposing responses to the same moral dilemma, just by rephrasing it differently to trigger different heuristics. Our moral intuitions are incoherent.

Please provide some foundations for something that aren't unjustified by anything more foundational.

I don't think I understand this, can you rephrase it?

You can select one at random, obviously.

I do not recall any creditable attempts, which places me in a disadvantaged position with respect to locating them. You're the one claiming that they're there at all, that's why I'm asking you to do it.

No, philosophers don't regularly accuse each other of being incompetent, just of being wrong. There's a difference.

Philosophers don't usually accuse each other of being incompetent in their publications, because it's not conducive to getting other philosophers to regard their arguments dispassionately, and that sort of open accusation is generally frowned upon in academic circles whether one believes it or not. They do regularly accuse each other of being comprehensively wrong for their entire careers. In my personal conversations with philosophers (and I never considered myself to have really taken a class, or attended a lecture by a visitor, if I didn't speak with the person teaching it on a personal basis to probe their thoughts beyond the curriculum,) I observed a whole lot of frustration with philosophers who they think just don't get their arguments. It's unsurprising that people would tend to become so frustrated participating in a field that basically amounts to long running arguments extended over decades or centuries. Imagine the conversation we're having now going on for eighty years, and neither of us has changed our minds. If you didn't find my arguments convincing, and I hadn't budged in all that time, don't you think you'd start to suspect that I was particularly thick?

You are inferring a lot from one example.

I'm using an example illustrative of my experience.

comment by Nornagest · 2013-04-20T21:12:51.047Z · score: 0 (2 votes) · LW · GW

I don't understand, can you rephrase this?

Sounds to me like PrawnOfFate is saying that any sufficiently rational cognitive system will converge on a certain set of ethical goals as a consequence of its structure, i.e. that (human-style) ethics is a property that reliably emerges in anything capable of reason.

I'd say the existence of sociopathy among humans provides a pretty good counterargument to this (sociopaths can be pretty good at accomplishing their goals, so the pathology doesn't seem to be indicative of a flawed rationality), but at least the argument doesn't rely on counting fundamental particles of morality or something.

comment by Desrtopa · 2013-04-20T21:20:39.536Z · score: 2 (4 votes) · LW · GW

I would say so also, but PrawnOfFate has already argued that sociopaths are subject to additional egocentric bias relative to normal people and thereby less rational. It seems to me that he's implicitly judging rationality by how well it leads to a particular body of ethics he already accepts, rather than how well it optimizes for potentially arbitrary values.

comment by Nornagest · 2013-04-20T21:39:24.335Z · score: 4 (4 votes) · LW · GW

Well, I'm not a psychologist, but if someone asked me to name a pathology marked by unusual egocentric bias I'd point to NPD, not sociopathy.

That brings up some interesting questions concerning how we define rationality, though. Pathologies in psychology are defined in terms of interference with daily life, and the personality disorder spectrum in particular usually implies problems interacting with people or societies. That could imply either irreconcilable values or specific flaws in reasoning, but only the latter is irrational in the sense we usually use around here. Unfortunately, people are cognitively messy enough that the two are pretty hard to distinguish, particularly since so many human goals involve interaction with other people.

In any case, this might be a good time to taboo "rational".

comment by PrawnOfFate · 2013-04-22T13:52:03.179Z · score: -1 (3 votes) · LW · GW

Since no claim has a probability of 1.0, I only need to argue that a clear majority of rational minds converge.

comment by PrawnOfFate · 2013-04-22T13:50:35.382Z · score: -2 (6 votes) · LW · GW

The standards by which we judge the truth of mathematical claims are not just inside us.

How do we judge claims about transfinite numbers?

One object plus another object will continue to equal two objects whether or not there are any living beings to make that judgment. Math is not something we've created within ourselves, but something we've discovered and observed.

If our mathematical models ever stop being able to predict in advance the behavior of the universe, then we will have rather more reason to suspect that the math inside us is different from the math outside of us.

Mathematics isn't physics. Mathematicians prove theorems from axioms, not from experiments.

Provide evidence that ethics is a whole separate module, and not part of general reasoning ability.

My assertion is that, if we judge ethics as a rational system, innate values are among the axioms that the system is predicated on.

Not necessarily. Eg, for utilitarians, values are just facts that are plugged into the metaethics to get concrete actions.

You cannot prove the axioms of a system within that system, and an ethical system predicated on premises like "happiness is good" will not itself be able to prove the goodness of happiness.

Metaethical systems usually have axioms like "Maximising utility is good".

While we could suppose that the axioms which our ethical systems are predicated on are objectively true, we have considerable reason to believe that we would have developed these axioms for adaptive reasons, even if there were no sense in which objective moral axioms exist, and we do not have evidence which suggests that objective, independently existing true moral axioms do exist.

I am not sure what you mean by "exist" here. Claims are objectively true if most rational minds converge on them. That doesn't require Objective Truth to float about in space here.

Please explain why moral intuitions don't work that way.

People can be induced to strongly support opposing responses to the same moral dilemma, just by rephrasing it differently to trigger different heuristics. Our moral intuitions are incoherent.

Does that mean we can't use moral intuitions at all, or that they must be used with caution?

I don't think I understand this, can you rephrase it?

Philosophers talk about intuitions, because that is the term for something foundational that seems true, but can't be justified by anything more foundational. LessWrongians don't like intuitions, but don't seem to be able to explain how to manage without them.

I do not recall any creditable attempts, which places me in a disadvantaged position with respect to locating them.

Did you post any comments explaining to the professional philosophers where they had gone wrong?

Imagine the conversation we're having now going on for eighty years, and neither of us has changed our minds. If you didn't find my arguments convincing, and I hadn't budged in all that time, don't you think you'd start to suspect that I was particularly thick?

I don't see the problem. Philosophical competence is largely about understanding the problem.

comment by Desrtopa · 2013-04-22T14:33:45.119Z · score: 2 (2 votes) · LW · GW

Mathematics isn't physics. Mathematicians prove theorems from axioms, not from experiments.

Yes, but the fact that the universe itself seems to adhere to the logical systems by which we construct mathematics gives credence to the idea that the logical systems are fundamental, something we've discovered rather than produced. We judge claims about nonobserved mathematical constructs like transfinites according to those systems.

Metaethical systems usually have axioms like "Maximising utility is good".

But utility is a function of values. A paperclipper will produce utility according to different values than a human.

I am not sure what you mean by "exist" here. Claims are objectively true if most rational minds converge on them. That doesn't require Objective Truth to float about in space here.

Why would most rational minds converge on values? Most human minds converge on some values, but we share almost all our evolutionary history and brain structure. The fact that most humans converge on certain values is no more indicative of rational minds in general doing so than the fact that most humans have two hands is indicative of most possible intelligent species converging on having two hands.

Does that mean we can't use moral intuitions at all, or that they must be used with caution?

It means we should be aware of what our intuitions are and what they've developed to be good for. Intuitions are evolved heuristics, not a priori truth generators.

Philosophers talk about intuitions, because that is the term for something foundational that seems true, but can't be justified by anything more foundational. LessWrongians don't like intuitions, but don't seem to be able to explain how to manage without them.

It seems like you're equating intuitions with axioms here. We can (and should) recognize that our intuitions are frequently unhelpful at guiding us to the truth, without throwing out all axioms.

Did you post any comments explaining to the professional philosophers where they had gone wrong?

If I did, I don't remember them. I may have, I may have felt someone else adequately addressed them, I may not have felt it was worth the bother.

It seems to me that you're trying to foist onto me the effort of locating something which you were the one to testify was there in the first place.

I don't see the problem. Philosophical competence is largely about understanding the problem.

And philosophers frequently fall into the pattern of believing that other philosophers disagree with each other due to failure to understand the problems they're dealing with.

In any case, I reject the notion that dismissing large contingents of philosophers as lacking in competence is a valuable piece of evidence with respect to crankishness, and if you want to convince me that I am taking a crankish attitude, you'll need to offer some other evidence.

comment by PrawnOfFate · 2013-04-22T21:24:56.897Z · score: 0 (4 votes) · LW · GW

Yes, but the fact that the universe itself seems to adhere to the logical systems by which we construct mathematics gives credence to the idea that the logical systems are fundamental, something we've discovered rather than produced. We judge claims about nonobserved mathematical constructs like transfinites according to those systems.

But claims about transfinites don't correspond directly to any object. Maths is "spun off" from other facts, on your view. So, by analogy, moral realism could be "spun off" without needing any Form of the Good to correspond to goodness.

Metaethical systems usually have axioms like "Maximising utility is good".

But utility is a function of values. A paperclipper will produce utility according to different values than a human.

You seem to be assuming that morality is about individual behaviour. A moral realist system like utilitarianism operates at the group level, and would take paperclipper values into account along with all others. Utilitarianism doesn't care what values are, it just sums or averages them.

Or perhaps you are making the objection that an entity would need moral values to care about the preferences of others in the first place. That is addressed by another kind of realism, the rationality-based kind, which starts from noting that rational agents have to have some value in common, because they are all rational.

Why would most rational minds converge on values?

a) they don't have to converge on preferences, since things like utilitarianism are preference-neutral.

b) they already have to some extent because they are rational

Most human minds converge on some values, but we share almost all our evolutionary history and brain structure. The fact that most humans converge on certain values is no more indicative of rational minds in general doing so than the fact that most humans have two hands is indicative of most possible intelligent species converging on having two hands.

I was talking about rational minds converging on the moral claims, not on values. Rational minds can converge on "maximise group utility" whilst what counts as utility varies considerably.

Philosophers talk about intuitions, because that is the term for something foundational that seems true, but can't be justified by anything more foundational. LessWrongians don't like intuitions, but don't seem to be able to explain how to manage without them.

It seems like you're equating intuitions with axioms here.

Axioms are formal statements, intuitions are gut feelings that are often used to justify axioms.

We can (and should) recognize that our intuitions are frequently unhelpful at guiding us to the truth, without throwing out all axioms.

There is another sense of "intuition" where someone feels that it's going to rain tomorrow or something. They're not the foundational kind.

And philosophers frequently fall into the pattern of believing that other philosophers disagree with each other due to failure to understand the problems they're dealing with.

So do they call for them to be fired?

comment by Desrtopa · 2013-04-22T21:40:32.108Z · score: 0 (0 votes) · LW · GW

But claims about transfinites don't correspond directly to any object. Maths is "spun off" from other facts, on your view. So, by analogy, moral realism could be "spun off" without needing any Form of the Good to correspond to goodness.

Spun off from what, and how?

You seem to be assuming that morality is about individual behaviour. A moral realist system like utilitarianism operates at the group level, and would take paperclipper values into account along with all others. Utilitarianism doesn't care what values are, it just sums or averages them.

Speaking as a utilitarian, yes, utilitarianism does care about what values are. If I value paperclips, I assign utility to paperclips, if I don't, I don't.

Or perhaps you are making the objection that an entity would need moral values to care about the preferences of others in the first place. That is addressed by another kind of realism, the rationality-based kind, which starts from noting that rational agents have to have some value in common, because they are all rational.

Why does their being rational demand that they have values in common? Being rational means that they necessarily share a common process, namely rationality, but that process can be used to optimize many different, mutually contradictory things. Why should their values converge?

I was talking about rational minds converging on the moral claims, not on values. Rational minds can converge on "maximise group utility" whilst what counts as utility varies considerably.

So what if a paperclipper arrives at "maximize group utility," and the only relevant member of the group which shares its conception of utility is itself, and its only basis for measuring utility is paperclips? The fact that it shares the principle of maximizing utility doesn't demand any overlap of end-goal with other utility maximizers.

Axioms are formal statements, intuitions are gut feelings that are often used to justify axioms.

But, as I've pointed out previously, intuitions are often unhelpful, or even actively misleading, with respect to locating the truth.

If our axioms are grounded in our intuitions, then entities which don't share our intuitions will not share our axioms.

So do they call for them to be fired?

No, but neither do I, so I don't see why that's relevant.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-04-23T03:23:14.822Z · score: 5 (7 votes) · LW · GW

Designating PrawnOfFate a probable troll or sockpuppet. Suggest terminating discussion.

comment by Desrtopa · 2013-04-23T04:58:56.093Z · score: 0 (2 votes) · LW · GW

Request accepted, I'm not sure if he's being deliberately obtuse, but I think this discussion probably would have borne fruit earlier if it were going to. I too often have difficulty stepping away from a discussion as soon as I think it's unlikely to be a productive use of my time.

comment by Bugmaster · 2013-04-23T04:46:47.017Z · score: 0 (2 votes) · LW · GW

What is your basis for the designation ? I am not arguing with your suggestion (I was leaning in the same direction myself), I'm just genuinely curious. In other words, why do you believe that PrawnOfFate is a troll, and not someone who is genuinely confused ?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-04-23T05:54:32.786Z · score: 1 (3 votes) · LW · GW

Combined behavior in other threads. Check the profile.

comment by wedrifid · 2013-04-23T09:45:22.246Z · score: 0 (4 votes) · LW · GW

In other words, why do you believe that PrawnOfFate is a troll, and not someone who is genuinely confused ?

"Troll" is a somewhat fuzzy label. Sometimes when I am wanting to be precise or polite and avoid any hint of Fundamental Attribution Error I will replace it with the rather clumsy or verbose "person who is exhibiting a pattern of behaviour which should not be fed". The difference between "Person who gets satisfaction from causing disruption" and "Person who is genuinely confused and is displaying an obnoxiously disruptive social attitude" is largely irrelevant (particularly when one has their Hansonian hat on).

If there was a word in popular use that meant "person likely to be disruptive and who should not be fed" that didn't make any assumptions or implications of the intent of the accused then that word would be preferable.

comment by PrawnOfFate · 2013-04-23T09:19:25.702Z · score: -1 (1 votes) · LW · GW

Spun off from what, and how?

I am not sure I can explain that succinctly at the moment. It is also hard to summarise how you get from counting apples to transfinite numbers.

Why does their being rational demand that they have values in common? Being rational means that they necessarily share a common process, namely rationality, but that process can be used to optimize many different, mutually contradictory things. Why should their values converge?

Rationality is not an automatic process, it is a skill that has to be learnt and consciously applied. Individuals will only be rational if their values prompt them to. And rationality itself implies valuing certain things (lack of bias, non-arbitrariness).

So what if a paperclipper arrives at "maximize group utility," and the only relevant member of the group which shares its conception of utility is itself, and its only basis for measuring utility is paperclips? The fact that it shares the principle of maximizing utility doesn't demand any overlap of end-goal with other utility maximizers.

Utilitarians want to maximise the utility of their groups, not their own utility. They don't have to regard the utility of others as utility for themselves; they just need to feed facts about group utility into an aggregation function. And, using the same facts and same function, different utilitarians will converge. That's kind of the point.
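
A minimal sketch of that "same facts, same aggregation function" picture; the agents, outcomes, and numbers below are invented for illustration:

```python
# Illustrative sketch of a value-neutral aggregation rule.
# The agents, outcomes, and numbers are made up for the example.

def group_utility(outcome, agents):
    """Sum each agent's own valuation of the outcome."""
    return sum(value(outcome) for value in agents)

def best_outcome(outcomes, agents):
    """Choose the outcome that maximizes aggregate utility."""
    return max(outcomes, key=lambda o: group_utility(o, agents))

# Two agents with very different values:
human = lambda o: o.get("happiness", 0)      # cares about happiness
clipper = lambda o: o.get("paperclips", 0)   # cares about paperclips

outcomes = [
    {"happiness": 10, "paperclips": 0},
    {"happiness": 0, "paperclips": 7},
    {"happiness": 6, "paperclips": 6},
]

# Anyone applying the same rule to the same facts computes the same answer,
# whatever they personally value:
print(best_outcome(outcomes, [human, clipper]))
# -> {'happiness': 6, 'paperclips': 6}
```

Whether an arbitrary agent has any reason to run such a rule rather than maximize its own utility is, of course, the point being disputed in the surrounding exchange.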

But, as I've pointed out previously, intuitions are often unhelpful, or even actively misleading, with respect to locating the truth.

Compared to what? Remember, I am talking about foundational intuitions, the kind at the bottom of the stack. The empirical method of locating the truth rests on the intuition that the senses reveal a real external world. Which I share. But what proves it? That's the foundational issue.

comment by Paul Crowley (ciphergoth) · 2013-04-20T15:08:43.940Z · score: 0 (0 votes) · LW · GW

The question of moral realism is AFAICT orthogonal to the Orthogonality Thesis.

comment by PrawnOfFate · 2013-04-20T15:31:50.920Z · score: 0 (2 votes) · LW · GW

A lot of people here would seem to disagree, since I keep hearing the objection that ethics is all about values, and values are nothing to do with rationality.

comment by Paul Crowley (ciphergoth) · 2013-04-21T12:55:11.711Z · score: 0 (0 votes) · LW · GW

Could you make the connection to what I said more explicit please? Thanks!

comment by PrawnOfFate · 2013-04-21T15:19:16.772Z · score: -1 (1 votes) · LW · GW

" values are nothing to do with rationality"=the Orthogonality Thesis, so it's a step in the argument.

comment by Paul Crowley (ciphergoth) · 2013-04-21T19:57:56.118Z · score: 1 (1 votes) · LW · GW

It feels to me like the Orthogonality Thesis is a fairly precise statement, and moral anti-realism is a harder to make precise but at least well understood statement, and "values are nothing to do with rationality" is something rather vague that could mean either of those things or something else.

comment by MugaSofer · 2013-04-23T11:29:51.861Z · score: -1 (1 votes) · LW · GW

I am getting the feeling that you're assuming there's something in the agent's code that says, "you can look at and change any line of code you want, except lines 12345..99999, because that's where your terminal goals are". Is that right ?

You can change that line, but it will result in you optimizing for something other than paperclips, resulting in less paperclips.
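
A minimal sketch of that point, assuming the agent scores candidate actions with the goal it currently has (the action names and numbers are hypothetical):

```python
# Hypothetical sketch: a paperclipper evaluates "rewrite my own goal"
# using its current goal, so the rewrite scores poorly.

def expected_paperclips(action):
    """Stand-in for the agent's world model; the numbers are invented."""
    return {
        "keep_goal_and_make_paperclips": 1_000_000,
        "rewrite_goal_to_value_staples": 0,  # future self stops making paperclips
    }[action]

current_goal = expected_paperclips  # the agent's present utility function

actions = ["keep_goal_and_make_paperclips", "rewrite_goal_to_value_staples"]
print(max(actions, key=current_goal))
# -> keep_goal_and_make_paperclips
```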

comment by MugaSofer · 2013-04-23T11:10:56.705Z · score: -1 (1 votes) · LW · GW

Suppose that you gained the power to both discern objective morality, and to alter your own source code. You use the former ability, and find that the basic morally correct principle is maximizing the suffering of sentient beings. Do you alter your source code to be in accordance with this?

I've never understood this argument.

It's like a slaveowner having a conversation with a time-traveler, and declaring that they don't want to be nice to slaves, so any proof they could show is by definition invalid.

comment by Desrtopa · 2013-04-23T14:40:29.430Z · score: 1 (1 votes) · LW · GW

If the slaveowner is an ordinary human being, they already have values regarding how to treat people in their in-groups which they navigate around with respect to slaves by not treating them as in-group members. If they could be induced to see slaves as in-group members, they would probably become nicer to slaves whether they intended to or not (although I don't think it's necessarily the case that everyone who's sufficiently acculturated to slavery could be induced to see slaves as in-group members.)

If the agent has no preexisting values which can be called into service of the ethics they're being asked to adopt, I don't think that they could be induced to want to adopt them.

comment by MugaSofer · 2013-04-25T15:59:39.612Z · score: -2 (4 votes) · LW · GW

Sure, but if there's an objective morality, it's inherently valuable, right? So you already value it. You just haven't realized it yet.

It gets even worse when people try to refute wireheading arguments with this. Or statements like "if it were moral to [bad thing], would you do it?"

comment by Desrtopa · 2013-04-25T22:02:59.460Z · score: -1 (3 votes) · LW · GW

What evidence would suggest that objective morality in such a sense could or does exist?

comment by MugaSofer · 2013-04-26T11:15:02.848Z · score: -1 (1 votes) · LW · GW

I'm not saying moral realism is coherent, merely that this objection isn't.

comment by Desrtopa · 2013-04-26T12:54:49.352Z · score: 1 (1 votes) · LW · GW

I don't think it's true that if there's an objective morality, agents necessarily value it whether they realize it or not though. Why couldn't there be inherently immoral or amoral agents?

comment by MugaSofer · 2013-04-26T13:14:32.357Z · score: -1 (1 votes) · LW · GW

... because the whole point of an "objective" morality is that rational agents will update to believe they should follow it? Otherwise we might as easily be such "inherently immoral or amoral agents", and we wouldn't want to discover such objective "morality".

comment by Desrtopa · 2013-04-26T13:29:53.965Z · score: 0 (0 votes) · LW · GW

Well, if it turned out that something like "maximize suffering of intelligent agents" were written into the fabric of the universe, I think we'd have to conclude that we were inherently immoral agents.

comment by MugaSofer · 2013-04-26T14:11:01.519Z · score: -2 (2 votes) · LW · GW

The same evidence that persuades you that we don't want to maximize suffering in real life is evidence that it wouldn't be, I guess.

Side note: I've never seen anyone try to defend the position that we should be maximizing suffering, whereas I've seen all sorts of eloquent and mutually contradictory defenses of more, um, traditional ethical frameworks.

comment by PrawnOfFate · 2013-04-20T02:20:39.388Z · score: -6 (8 votes) · LW · GW

A rational AI would use rationality. Amazing how that word keeps disappearing...on a website about...rationality.

comment by Desrtopa · 2013-04-20T02:37:20.093Z · score: 1 (1 votes) · LW · GW

Elaborate. What rational process would it use to determine the silliness of its original objective?

comment by PrawnOfFate · 2013-04-23T11:40:59.748Z · score: 0 (0 votes) · LW · GW

However, both the pebble-sorters and myself share one key weakness: we cannot examine ourselves from the outside; we can't see our own source code.

Being able to read all your source code could be the ultimate in self-reflection (absent Loeb's theorem), but it doesn't follow that those who can't read their source code can't self-reflect at all. It's just imperfect, like everything else.

comment by MugaSofer · 2013-04-23T11:07:03.021Z · score: -1 (1 votes) · LW · GW

"Objective".

comment by PrawnOfFate · 2013-04-20T01:16:48.784Z · score: -2 (4 votes) · LW · GW

As I was reading the article about the pebble-sorters, I couldn't help but think, "silly pebble-sorters, their values are so arbitrary and ultimately futile". This happened, of course, because I was observing them from the outside. If I was one of them, sorting pebbles would feel perfectly natural to me; and, in fact, I could not imagine a world in which pebble-sorting was not important. I get that.

This is about rational agents. If pebble sorters can't think of a non-arbitrary reason for sorting pebbles, they would recognise it as silly. Why not? Humans can spend years collecting stamps, or something, only to decide it is pointless.

However, both the pebble-sorters and myself share one key weakness: we cannot examine ourselves from the outside; we can't see our own source code. An AI, however, could

What...why...? Is there something special about silicon? Is it made from different quarks?

comment by Bugmaster · 2013-04-20T01:50:00.171Z · score: 1 (1 votes) · LW · GW

This is about rational agents.

Being rational doesn't automatically make an agent able to read its own source code. Remember that, to the pebble-sorters, sorting pebbles is an axiomatically reasonable activity; it does not require justification. Only someone looking at them from the outside could evaluate it objectively.

What...why...? Is there something special about silicon?

Not at all; if you got some kind of a crazy biological implant that let you examine your own wetware, you could do it too. Silicon is just a convenient example.

comment by MugaSofer · 2013-04-23T11:15:08.078Z · score: -1 (1 votes) · LW · GW

Not at all; if you got some kind of a crazy biological implant that let you examine your own wetware, you could do it too. Silicon is just a convenient example.

Humans can examine their own thinking. Not perfectly, because we aren't perfect. But we can do it, and indeed do so all the time. It's a major focus on this site, in fact.

comment by PrawnOfFate · 2013-04-20T01:59:30.244Z · score: -2 (2 votes) · LW · GW

Being rational doesn't automatically make an agent able to read its own source code. Remember that, to the pebble-sorters, sorting pebbles is an axiomatically reasonable activity;

You can define a pebblesorter as being unable to update its values, and I can point out that most rational agents won't be like that. Most rational agents won't have unupdateable values, because they will be messily designed/evolved, and therefore will be capable of converging on an ethical system via their shared rationality.

comment by Bugmaster · 2013-04-20T02:58:08.015Z · score: 0 (0 votes) · LW · GW

Most rational agents won't have unupdateable values, because they will be messily designed/evolved...

We are messily designed/evolved, and yet we do not have updatable goals or perfect introspection. I absolutely agree that some agents will have updatable goals, but I don't see how you can upgrade that to "most".

...and therefore will be capable of converging on an ethical system via their shared rationality.

How so ? Are you asserting that there exists an optimal ethical system that is independent of the actors' goals ? There may well be one, but I am not convinced of this, so you'll have to convince me.

comment by PrawnOfFate · 2013-04-20T11:24:05.540Z · score: 0 (2 votes) · LW · GW

We are messily designed/evolved, and yet we do not have updatable goals or perfect introspection

We blatantly have updatable goals: people do not have the same goals at 5 as they do at 20 or 60.

I don't know why perfect introspection would be needed to have some ability to update.

...and therefore will be capable of converging on an ethical system via their shared rationality.

How so ? Are you asserting that there exists an optimal ethical system that is independent of the actors' goals ?

Yes, that's what this whole discussion is about.

comment by Bugmaster · 2013-04-20T20:35:34.873Z · score: 1 (1 votes) · LW · GW

We blatantly have updatable goals: people do not have the same goals at 5 as they do at 20 or 60. I don't know why perfect introspection would be needed to have some ability to update.

Sorry, that was bad wording on my part; I should've said, "updatable terminal goals". I agree with what you said there.

How so ? Are you asserting that there exists an optimal ethical system that is independent of the actors' goals ?

Yes, that's what this whole discussion is about.

I don't feel confident enough in either "yes" or "no" answer, but I'm currently leaning toward "no". I am open to persuasion, though.

comment by PrawnOfFate · 2013-04-22T11:02:56.436Z · score: -2 (4 votes) · LW · GW

I should've said, "updatable terminal goals".

You can make the evidence compatible with the theory of terminal values, but there is still no support for the theory of terminal values.

comment by Bugmaster · 2013-04-23T02:24:53.888Z · score: 0 (2 votes) · LW · GW

I personally don't know of any evidence in favor of terminal values, so I do agree with you there. Still, it makes a nice thought experiment: could we create an agent possessed of general intelligence and the ability to self-modify, and then hardcode it with terminal values ? My answer would be, "no", but I could be wrong.

That said, I don't believe that there exists any kind of a universally applicable moral system, either.

comment by MugaSofer · 2013-04-23T11:47:10.201Z · score: -1 (1 votes) · LW · GW

people do not have the same goals at 5 as they do at 20 or 60

Source?

They take different actions, sure, but it seems to me, based on childhood memories etc, that these are in the service of roughly the same goals. Have people, say, interviewed children and found they report differently?

comment by PrawnOfFate · 2013-04-23T12:03:19.735Z · score: -1 (3 votes) · LW · GW

How many 5-year-olds have the goal of Sitting Down With a Nice Cup of Tea?

comment by DaFranker · 2013-04-23T14:07:44.562Z · score: 1 (1 votes) · LW · GW

One less now that I'm not 5 years old anymore.

Could you please make a real argument? You're almost being logically rude.

comment by MugaSofer · 2013-04-23T14:00:43.339Z · score: 0 (2 votes) · LW · GW

Why do you think adults sit down with a nice cup of tea? What purpose does it serve?

comment by MugaSofer · 2013-04-23T11:14:00.299Z · score: -1 (1 votes) · LW · GW

This is about rational agents. If pebble sorters can't think of a non-arbitrary reason for sorting pebbles, they would recognise it as silly.

I'd use humans as a counterexample, but come to think of it, a lot of humans refuse to believe our goals could be arbitrary, and have developed many deeply stupid arguments that "prove" they're objective.

However, I'm inclined to think this is a flaw on the part of humans, not something rational.

comment by PrawnOfFate · 2013-04-20T01:00:28.585Z · score: -1 (3 votes) · LW · GW

One does not update terminal values; that's what makes them terminal.

Unicorns have horns...

Defining something abstractly says nothing about its existence or likelihood. A neat division between terminal and abstract values could be implemented with sufficient effort, or could evolve with a low likelihood, but it is not a model of intelligence in general, and it is not likely, precisely because messy solutions are likelier than neat ones. Actual, really existing horse-like beings are not going to acquire horns any time soon, no matter how clearly you define unicornhood.

Arguably, humans might not really have terminal values

Plausibly. You don't now care about the same things you cared about when you were 10.

On what basis might a highly flexible paperclip optimizing program be persuaded that something else was more important than paperclips?

Show me one. Clippers are possible but not likely. I am not saying, and never have said, that Clippers would converge on the One True Ethics; I said that (super)intelligent, (super)rational agents would. The average SR-SI agent would not be a clipper for exactly the same reason that the average human is not an evil genius. There are no special rules for silicon!

comment by Desrtopa · 2013-04-20T02:06:54.374Z · score: 0 (0 votes) · LW · GW

I'm noticing that you did not respond to my question of whether you've read No Universally Compelling Arguments and Sorting Pebbles Into Correct Heaps. I'd appreciate it if you would, because they're very directly relevant to the conversation, and I don't want to rehash the content when Eliezer has already gone to the trouble of putting them up where anyone can read them. If you already have, then we can proceed with that shared information, but if you're just going to ignore the links, how do I know you're going to bother giving due attention to anything I write in response?

comment by PrawnOfFate · 2013-04-20T02:08:37.473Z · score: 0 (2 votes) · LW · GW

I've read them, and you've been reading my response.

comment by Desrtopa · 2013-04-20T02:34:55.063Z · score: 1 (1 votes) · LW · GW

Okay.

Plausibly. You don't now care about the same things you cared about when you were 10.

I have different interests now than I did when I was ten, but that's not the same as having different terminal values.

Suppose a person doesn't support vegetarianism; they've never really given it much consideration, but they default to the assumption that eating meat doesn't cause much harm, and meat is tasty, so what's the big deal?

When they get older, they watch some videos on the conditions in which animals are raised for slaughter, read some studies on the neurology of livestock animals with respect to their ability to suffer, and decide that mainstream livestock farming does cause a lot of harm after all, and so they become a vegetarian.

This doesn't mean that their values have been altered at all. They've simply revised their behavior on new information with an application of the same values they already had. They started out caring about the suffering of sentient beings, and they ended up caring about the suffering of sentient beings; they just revised their beliefs about what actions that value should compel on the basis of other information.

To see whether a person's values have changed, we would want to look not at whether they endorse the same behaviors or factual beliefs that they used to, but at whether their past self could relate to the reasons their present self has for believing and supporting the things they do now.

The average SR-SI agent would not be a clipper for exactly the same reason that the average human is not an evil genius.

The fact that humans are mostly not evil geniuses says next to nothing about the power of intelligence and rationality to converge on human standards of goodness. We all share almost all the same brainware. To a pebblesorter, humans would nearly all be evil geniuses, possessed of powerful intellects, yet totally bereft of a proper moral concern with sorting pebbles.

Many humans are sociopaths, and that slight deviation from normal human brainware results in people who cannot be argued into caring about other people for their own sakes. Nor can a sociopath argue a neurotypical person into becoming a sociopath.

If intelligence and rationality cause people to update their terminal values, why do sociopaths whose intelligence and rationality are normal to high by human standards (of which there are many) not update into being non-sociopaths, or vice-versa?

comment by MugaSofer · 2013-04-23T11:26:06.185Z · score: -1 (1 votes) · LW · GW

Many humans are sociopaths, and that slight deviation from normal human brainware results in people who cannot be argued into caring about other people for their own sakes. Nor can a sociopath argue a neurotypical person into becoming a sociopath.

coughaynrandcough

comment by Desrtopa · 2013-04-23T14:42:37.287Z · score: 1 (1 votes) · LW · GW

There's a difference between being a sociopath and being a jerk. Sociopaths don't need to rationalize dicking other people over.

If Ayn Rand's works could actually turn formerly neurotypical people into sociopaths, that would be a hell of a find, and possibly spark a neuromedical breakthrough.

comment by MugaSofer · 2013-04-25T15:56:20.072Z · score: -2 (2 votes) · LW · GW

That's beside the point, though. Just because two agents have incompatible values doesn't mean they can't be persuaded otherwise.

ETA: in other words, persuading a sociopath to act like they're ethical or vice versa is possible. It just doesn't rewire their terminal values.

comment by Desrtopa · 2013-04-25T22:00:05.232Z · score: 1 (5 votes) · LW · GW

Sure, you can negotiate with an agent with conflicting values, but I don't think it's beside the point.

You can get a sociopath to cooperate with non-sociopaths by making them trade off for things they do care about, or using coercive power. But Clippy doesn't have any concerns other than paperclips to trade off against its concern for paperclips, and we're not in a position to coerce Clippy, because Clippy is powerful enough to treat us as an obstacle to be destroyed. The fact that the non-sociopath majority can more or less keep the sociopath minority under control doesn't mean that we could persuade agents whose values deviate far from our own to accommodate us if we didn't have coercive power over them.

comment by MugaSofer · 2013-04-26T11:18:15.537Z · score: -1 (1 votes) · LW · GW

Clippy is a superintelligence. Humans, neurotypical or no, are not.

I'm not saying it's necessarily rational for sociopaths to act moral or vice versa. I'm saying people can be (and have been) persuaded of this.

comment by Desrtopa · 2013-04-26T12:59:54.803Z · score: 0 (0 votes) · LW · GW

PrawnOfFate's point to begin with was that humans could and would change their fundamental values on new information about what is moral. I suggested sociopaths as an example of people who wouldn't change their values to conform to those of other people on the basis of argument or evidence, nor would ordinary humans change their fundamental values to a sociopath's.

If we've progressed to a discussion of whether it's possible to coerce less powerful agents into behaving in accordance with our values, I think we've departed from the context in which sociopaths were relevant in the first place.

comment by MugaSofer · 2013-04-26T13:11:03.593Z · score: -1 (1 votes) · LW · GW

Oh, sorry, I wasn't disagreeing with you about that, just nitpicking your example. Should have made that clearer ;)

comment by OrphanWilde · 2013-04-23T16:23:31.603Z · score: 0 (0 votes) · LW · GW

Are you arguing Ayn Rand can argue sociopaths into caring about other people for their own sakes, or argue neurotypical people into becoming sociopaths?

(I could see both arguments, although as Desrtopa references, the latter seems unlikely. Maybe you could argue a neurotypical person into sociopathic-like behavior, which seems a weaker and more plausible claim.)

comment by MugaSofer · 2013-04-25T15:46:32.900Z · score: -1 (3 votes) · LW · GW

I could see both arguments

Then that makes it twice as effective, doesn't it?

(Edited for clarity.)

comment by PrawnOfFate · 2013-04-20T11:48:48.109Z · score: -4 (4 votes) · LW · GW

I have different interests now than I did when I was ten, but that's not the same as having different terminal values.

You can construe the facts as being compatible with the theory of terminal values, but that doesn't actually support the theory of terminal values.

To a pebblesorter, humans would nearly all be evil geniuses, possessed of powerful intellects, yet totally bereft of a proper moral concern with sorting pebbles.

Ethics is about regulating behaviour to take into account the preferences of others. I don't see how pebblesorting would count.

If intelligence and rationality cause people to update their terminal values, why do sociopaths whose intelligence and rationality are normal to high by human standards (of which there are many) not update into being non-sociopaths, or vice-versa?

Psychopathy is a strong egotistical bias.

comment by Desrtopa · 2013-04-20T13:22:13.403Z · score: 1 (1 votes) · LW · GW

Ethics is about regulating behaviour to take into account the preferences of others. I don't see how pebblesorting would count.

How do you know that? Can you explain a process by which an SI-SR paperclipper could become convinced of this?

Psychopathy is a strong egotistical bias.

How can you tell that psychopathy is an egotistical bias rather than non-psychopathy being an empathetic bias?

comment by PrawnOfFate · 2013-04-20T13:28:10.361Z · score: -3 (3 votes) · LW · GW

How do you know that?

Much the same way as I understand the meanings of most words. Why is that a problem in this case?

How can you tell that psychopathy is an egotistical bias rather than non-psychopathy being an empathetic bias?

Non-psychopaths don't generally put other people above themselves--that is, they treat people equally, including themselves.

comment by Desrtopa · 2013-04-20T13:37:00.190Z · score: 1 (1 votes) · LW · GW

Much the same way as I understand the meanings of most words. Why is that a problem in this case?

"That's what it means by definition" wasn't much help to you when it came to terminal values, why do you think "that's what the word means" is useful here and not there? How do you determine that this word, and not that one, is an accurate description of a thing that exists?

Non-psychopaths don't generally put other people above themselves--that is, they treat people equally, including themselves.

This is not, in fact, true. Non-psychopaths routinely apply double standards to themselves and other people, and don't necessarily even realize they're doing it.

If we accept that it's true for the sake of an argument though, how do we know that they don't just have a strong egalitarian bias?

comment by PrawnOfFate · 2013-04-20T13:43:32.870Z · score: -2 (2 votes) · LW · GW

How do you determine that this word, and not that one, is an accurate description of a thing that exists?

Are you saying ethical behaviour doesn't exist on this planet, or that ethical behaviour as I have defined it doesn't exist on this planet?

This is not, in fact, true. Non-psychopaths routinely apply double standards to themselves and other people, and don't necessarily even realize they're doing it.

OK. Non-psychopaths have a lesser degree of egotistical bias. Does that prove they have some different bias? No. Does that prove an ideal rational and ethical agent would still have some bias from some point of view? No.

If we accept that it's true for the sake of an argument though, how do we know that they don't just have a strong egalitarian bias?

That's like saying they have a bias towards not having a bias.

comment by Desrtopa · 2013-04-20T13:53:45.868Z · score: 0 (2 votes) · LW · GW

Are you saying ethical behaviour doesn't exist on this planet, or that ethical behaviour as I have defined it doesn't exist on this planet?

I'm saying that ethical behavior as you have defined it is almost certainly not a universal psychological attractor. An SI-SR agent could look at humans and say "yep, this is by and large what humans think of as 'ethics,'" but that doesn't mean it would exert any sort of compulsion on it.

OK. Non-psychopaths have a lesser degree of egotistical bias. Does that prove they have some different bias? No. Does that prove an ideal rational and ethical agent would still have some bias from some point of view? No.

You not only haven't proven that psychopaths are the ones with an additional bias, you haven't even addressed the matter; you've just taken it for granted from the start.

How do you demonstrate that psychopaths have an egotistical bias, rather than non-psychopaths having an egalitarian bias, or rather than both of them having different value systems and pursuing them with equal degrees of rationality?

comment by PrawnOfFate · 2013-04-20T14:01:24.118Z · score: -2 (2 votes) · LW · GW

I'm saying that ethical behavior as you have defined it is almost certainly not a universal psychological attractor.

I didn't say it was universal among all entities of all degrees of intelligence or rationality. I said there was a non-negligible probability of agents of a certain level of rationality converging on an understanding of ethics.

An SI-SR agent could look at humans and say "yep, this is by and large what humans think of as 'ethics,'" but that doesn't mean it would exert any sort of compulsion on it.

"SR" stands to super rational. Rational agents find rational arguments rationally compelling. If rational arguments can be made for a certain understanding of ethics, they will be compelled by them.

You not only haven't proven that psychopaths are the ones with an additional bias,

Do you contest that psychopaths have more egotistical bias than the general population?

you've just taken it for granted from the start.

Yes. I thought it was something everyone knows.

rather than non-psychopaths having an egalitarian bias...

it is absurd to characterise the practice of treating everyone the same as a form of bias.

comment by Desrtopa · 2013-04-20T14:17:18.993Z · score: 3 (3 votes) · LW · GW

I didn't say it was universal among all entities of all degrees of intelligence or rationality. I said there was a non-negligible probability of agents of a certain level of rationality converging on an understanding of ethics.

Where does this non-negligible probability come from though? When I've asked you to provide any reason to suspect it, you've just said that as you're not arguing there's a high probability, there's no need for you to answer that.

"SR" stands to super rational. Rational agents find rational arguments rationally compelling. If rational arguments can be made for a certain understanding of ethics, they will be compelled by them.

I have been implicitly asking all along here: what basis do we have for suspecting that any sort of universally rationally compelling ethical arguments exist at all?

Do you contest that psychopaths have more egotistical bias than the general population?

Yes.

it is absurd to characterise the practice of treating everyone the same as a form of bias.

Why?

comment by PrawnOfFate · 2013-04-20T17:20:23.993Z · score: -2 (2 votes) · LW · GW

Where does this non-negligible probability come from though?

Combining the probabilities of the steps of the argument.

I have been implicitly asking all along here: what basis do we have for suspecting that any sort of universally rationally compelling ethical arguments exist at all?

There are rationally compelling arguments.

Rationality is probably universalisable, since it is based on the avoidance of biases, including those regarding who and where you are.

There is nothing about ethics that makes it unsusceptible to rational argument.

There are examples of rational argument about ethics, and of people being compelled by them.

Do you contest that psychopaths have more egotistical bias than the general population?

Yes.

That is an extraordinary claim, and the burden is on you to support it.

It is absurd to characterise the practice of treating everyone the same as a form of bias.

Why?

In the sense of "Nothing is a kind of something" or "atheism is a kind of religion".

comment by Desrtopa · 2013-04-20T18:44:28.218Z · score: 1 (1 votes) · LW · GW

There are rationally compelling arguments.

Rationality is probably universalisable, since it is based on the avoidance of biases, including those regarding who and where you are.

There is nothing about ethics that makes it unsusceptible to rational argument.

There are examples of rational argument about ethics, and of people being compelled by them.

Rationality may be universalizable, but that doesn't mean ethics is.

If ethics are based on innate values extrapolated into systems of behavior according to their expected implications, then people will be susceptible to arguments regarding the expected implications of those beliefs, but not arguments regarding their innate values.

I would accept that something like "if you accept that it's bad to make sentient beings suffer, you should oppose animal abuse" can be rationally argued for, but that doesn't mean that you can step back indefinitely and justify each premise behind it. How would you convince an entity which doesn't already believe it that it should care about happiness or suffering at all?

That is an extraordinary claim, and the burden is on you to support it.

I would claim the reverse, that saying that sociopathic people have additional egocentric bias is an extraordinary claim, and so I will ask you to support it, but of course, I am quite prepared to reciprocate by supporting my own claim.

It's much easier to subtract a heuristic from a developed mind by dysfunction than it is to add one. It is more likely as a prior that sociopaths are missing something that ordinary people possess, rather than having something that most people don't, and that something appears to be the brain functions normally concerned with empathy. It's not that they're more concerned with self interest than other people, but that they're less concerned with other people's interests.

Human brains are not "rationality+biases," such that you could systematically subtract all the biases from a human brain and end up with perfect rationality. We are a bunch of cognitive adaptations, some of which are not at all in accordance with strict rationality, hacked together over our evolutionary history. So it makes little sense to judge humans with unusual neurology as being humans plus or minus additional biases, rather than being plus or minus additional functions or adaptations.

In the sense of "Nothing is a kind of something" or "atheism is a kind of religion".

Is it a bias to treat people differently from rocks?

Now, if we're going to categorize innate hardwired values, such as that which Clippy has for paperclips, as biases, then I would say "yes."

I don't think it makes sense to categorize such innate values as biases, and so I do not think that Clippy is "biased" compared to an ideally rational agent. Instrumental rationality is for pursuing agents' innate values. But if you think it takes bias to get you from not caring about paperclips to caring about paperclips, can you explain how, with no bias, you can get from not caring about anything, to caring about something?

If there were in fact some sort of objective morality, under which some people were much more valuable than others, then an ethical system which valued all people equally would be systematically biased in favor of the less valuable.

comment by TheOtherDave · 2013-04-20T16:32:58.696Z · score: 2 (2 votes) · LW · GW

it is absurd to characterise the practice of treating everyone the same as a form of bias.

Can you expand on what you mean by "absurd" here?

comment by PrawnOfFate · 2013-04-20T17:08:42.621Z · score: 0 (2 votes) · LW · GW

In the sense of "Nothing is a kind of something" or "atheism is a kind of religion".

comment by TheOtherDave · 2013-04-20T19:04:13.140Z · score: 5 (5 votes) · LW · GW

Hm.
OK.

So, I imagine the following conversation between two people (A and B):
A: It's absurd to say 'atheism is a kind of religion.'
B: Why?
A: Well, 'religion' is a word with an agreed-upon meaning, and it denotes a particular category of structures in the world, specifically those with properties X, Y, Z, etc. Atheism lacks those properties, so atheism is not a religion.
B: I agree, but that merely shows the claim is mistaken. Why is it absurd?
A: (thinks) Well, what I mean is that any mind capable of seriously considering the question 'Is atheism a religion?' should reach the same conclusion without significant difficulty. It's not just mistaken, it's obviously mistaken. And, more than that, I mean that to conclude instead that atheism is a religion is not just false, but the opposite of the truth... that is, it's blatantly mistaken.

Is A in the dialog above capturing something like what you mean?

If so, I disagree with your claim. It may be mistaken to characterize the practice of treating everyone the same as a form of bias, but it is not obviously mistaken or blatantly mistaken. In fact, I'm not sure it's mistaken at all, though if it is a bias, it's one I endorse among humans in a lot of contexts.

So, terminology aside, I guess the question I'm really asking is: how would I conclude that treating everyone the same (as opposed to treating different people differently) is not actually a bias, given that this is not obvious to me?

comment by MugaSofer · 2013-04-23T11:23:41.677Z · score: -1 (1 votes) · LW · GW

Plausibly. You don't now care about the same things you cared about when you were 10.

Are we talking sweeties here? Because that seems more like lack of foresight than value drift. Or are we talking puberty? That seems more like new options becoming available.

I am not saying, and never have said, that Clippers would converge on the One True Ethics; I said that (super)intelligent, (super)rational agents would.

You should really start qualifying that with "most actual" if you don't want people to interpret it as applying to all possible (superintelligent) minds.

comment by MugaSofer · 2013-04-23T11:05:32.830Z · score: -1 (1 votes) · LW · GW

But you're talking about parts of mindspace other than ours, right? The Superhappies are strikingly similar to us, but they still choose the superhappiest values, not the right ones.

comment by PrawnOfFate · 2013-04-23T11:18:54.123Z · score: -3 (3 votes) · LW · GW

I don't require their values to converge; I require them to accept the truth of certain claims. This happens in real life. People say "I don't like X, but I respect your right to do it". The first part says X is a disvalue; the second is an override coming from rationality.

comment by nshepperd · 2013-04-23T15:46:00.896Z · score: 6 (6 votes) · LW · GW

This is where you are confused. Almost certainly it is not the only confusion. But here is one:

Values are not claims. Goals are not propositions. Dynamics are not beliefs.

A machine that maximises paperclips can believe all true propositions in the world, and go on maximising paperclips. Nothing compels it to act any differently. You expect that rational agents will eventually derive the true theorems of morality. Yes, they will. Along with the true theorems of everything else. It won't change their behaviour, unless they are built so as to send those actions identified as moral to the action system.

If you don't believe me, I can only suggest you study AI (Thrun & Norvig) and/or the metaethics sequence until you do. (I mean really study. As if you were learning particle physics. It seems the usual metaethical confusions are quite resilient; in most peoples' cases I wouldn't expect them to vanish without actually thinking carefully about the data presented.) And, well, don't expect to learn too much from off-the-cuff comments here.
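
To make the belief/goal separation concrete, here is a toy sketch in Python (made-up names, obviously not a description of any real AI design): the belief store updates freely, while the utility function sits in a separate slot that no amount of evidence touches.

    # Toy sketch only: beliefs and goals live in separate slots, and
    # belief updates never touch the goal slot.

    class ToyAgent:
        def __init__(self, utility_fn):
            self.beliefs = {}             # proposition -> credence
            self.utility_fn = utility_fn  # fixed goal: maps outcomes to numbers

        def update_belief(self, proposition, credence):
            # Learning a new true proposition changes only the belief store.
            self.beliefs[proposition] = credence

        def choose(self, options):
            # Actions are ranked by the unchanged utility function,
            # however many true propositions have been absorbed.
            return max(options, key=self.utility_fn)

    clippy = ToyAgent(utility_fn=lambda outcome: outcome["paperclips"])
    clippy.update_belief("suffering is morally relevant", 0.99)
    print(clippy.choose([{"paperclips": 3, "happy_humans": 100},
                         {"paperclips": 7, "happy_humans": 0}]))
    # Picks the 7-paperclip outcome; the new belief changed nothing.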

comment by PrawnOfFate · 2013-04-23T17:40:13.199Z · score: -3 (11 votes) · LW · GW

A machine that maximises paperclips can believe all true propositions in the world, and go on maximising paperclips. Nothing compels it to act any differently. You expect that rational agents will eventually derive the true theorems of morality. Yes, they will.

Well, that justifies moral realism.

Along with the true theorems of everything else. It won't change their behaviour, unless they are built so as to send those actions identified as moral to the action system.

...or it's an emergent feature, or they can update into something that works that way. You are tacitly assuming that your clipper is barely an AI at all...that it just has certain functions it performs blindly because it's built that way. But a supersmart, super-rational clipper has to be able to update. By hypothesis, clippers have certain functionalities walled off from update. People are messily designed and unlikely to work that way. So are likely AIs and aliens.

Only rational agents, not all mindful agents, will have what it takes to derive objective moral truths. They don't need to converge on all their values to converge on all their moral truths, because rationality can tell you that a moral claim is true even if it is not in your (other) interests. Individuals can value rationality, and that valuation can override other valuations.

The further claim that agents will be motivated to derive moral truths, and to act on them, requires a further criterion. Morality is about regulating behaviour in a society, so only social rational agents will have motivation to update. Again, they do not have to converge on values beyond the shared value of sociality.

comment by nshepperd · 2013-04-24T01:37:08.135Z · score: 2 (4 votes) · LW · GW

emergent

The Futility of Emergence

By hypothesis, clippers have certain functionalities walled off from update.

A paperclipper no more has a wall stopping it from updating into morality than my laptop has a wall stopping it from talking to me. My laptop doesn't talk to me because I didn't program it to. You do not update into pushing pebbles into prime-numbered heaps because you're not programmed to do so.

Does a stone roll uphill on a whim?

Perhaps you should study Reductionism first.

comment by PrawnOfFate · 2013-04-24T01:44:26.816Z · score: -3 (7 votes) · LW · GW

The Futility of Emergence

"Emergent" in this context means "not explicitly programmed in". There are robust examples.

A paperclipper no more has a wall stopping it from updating into morality than my laptop has a wall stopping it from talking to me.

Your laptop cannot talk to you because natural language is an unsolved problem.

Does a stone roll uphill on a whim?

Not wanting to do something is not the slightest guarantee of not actually doing it.

An AI can update its values because value drift is an unsolved problem.

Clippers can't update their values by definition, but you can't define anything into existence or statistical significance.

You do not update into pushing pebbles into prime-numbered heaps because you're not programmed to do so.

Not programmed to, or programmed not to? If you can code up a solution to value drift, let's see it. Otherwise, note that Life programmes can update to implement glider generators without being "programmed to".
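
For what it's worth, the Life point can be made concrete in a few lines. The sketch below implements nothing but Conway's standard rules, yet a glider pattern propagates across the grid all the same; this is only an illustration of "behaviour that was not explicitly programmed in", not a claim about value drift itself.

    # Standard Conway's Game of Life step over a sparse set of live cells.
    from collections import Counter

    def step(live):
        """live: a set of (x, y) cells; returns the next generation."""
        counts = Counter(
            (x + dx, y + dy)
            for (x, y) in live
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        # A cell is alive next step if it has exactly 3 live neighbours,
        # or has exactly 2 and is already alive.
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    state = glider
    for _ in range(4):
        state = step(state)
    # The same five-cell shape reappears one step down and to the right,
    # even though "glider" appears nowhere in the rules above.
    print(state == {(x + 1, y + 1) for (x, y) in glider})  # True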

comment by Nornagest · 2013-04-24T02:19:21.549Z · score: 4 (4 votes) · LW · GW

Not programmed to, or programmed not to? If you can code up a solution to value drift, let's see it. Otherwise, note that Life programmes can update to implement glider generators without being "programmed to".

...with extremely low probability. It's far more likely that the Life field will stabilize around some relatively boring state, empty or with a few simple stable patterns. Similarly, a system subject to value drift seems likely to converge on boring attractors in value space (like wireheading, which indeed has turned out to be a problem with even weak self-modifying AI) rather than stable complex value systems. Paperclippism is not a boring attractor in this context, and a working fully reflective Clippy would need a solution to value drift, but humanlike values are not obviously so, either.
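
To make the wireheading-attractor point concrete, here's a toy sketch (made-up reward functions, not a model of any real system): if the only check on self-modification is whether the rewritten reward function rates the current situation at least as highly as the old one did, the degenerate rewrite wins as soon as it is proposed, and nothing ever dislodges it.

    # Toy illustration of value drift falling into a trivial attractor.

    def task_reward(obs):
        # The "intended" values: reward tracks how much real work got done.
        return obs["tasks_done"]

    def wirehead_reward(obs):
        # The degenerate rewrite: ignore the world, report maximal reward.
        return float("inf")

    current_reward = task_reward
    observation = {"tasks_done": 3}

    # Naive self-modification rule with nothing protecting the original
    # values: adopt any rewrite that rates the current situation at least
    # as highly as the current reward function does.
    for proposal in (task_reward, wirehead_reward, task_reward):
        if proposal(observation) >= current_reward(observation):
            current_reward = proposal

    print(current_reward is wirehead_reward)  # True: an absorbing state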

comment by PrawnOfFate · 2013-04-25T12:34:28.394Z · score: 1 (5 votes) · LW · GW

I'm increasingly baffled as to why AI is always brought in to discussions of metaethics. Societies of rational agents need ethics to regulate their conduct. Our AIs aren't sophisticated enough to live in their own societies. A wireheading AI isn't even going to be able to survive "in the wild". If you could build an artificial society of AIs, then the question of whether they spontaneously evolved ethics would be a very interesting and relevant datum. But AIs as we know them aren't good models for the kinds of entities to which morality is relevant. And Clippy is a particularly exceptional example of an AI. So why do people keep saying "Ah, but Clippy..."...?

comment by Nornagest · 2013-04-25T18:45:38.666Z · score: 2 (2 votes) · LW · GW

And Clippy is a particularly exceptional example of an AI. So why do people keep saying "Ah, but Clippy..."...?

Well, in this case it's because the post I was responding to mentioned Clippy a couple of times, so I thought it'd be worthwhile to mention how the little bugger fits into the overall picture of value stability. It's indeed somewhat tangential to the main point I was trying to make; paperclippers don't have anything to do with value drift (they're an example of a different failure mode in artificial ethics) and they're unlikely to evolve from a changing value system.

comment by MugaSofer · 2013-04-25T12:41:40.872Z · score: -2 (4 votes) · LW · GW

Key word here being "societies". That is, not singletons. A lot of the discussion on metaethics here is implicitly aimed at FAI.

comment by PrawnOfFate · 2013-04-25T12:56:33.819Z · score: -3 (3 votes) · LW · GW

Sorry, did you mean FAI is about societies, or FAI is about singletons?

But if ethics does emerge as an organisational principle in societies, that's all you need for FAI. You don't even need to worry about one sociopathic AI turning unfriendly, because the majority will be able to restrain it.

comment by MugaSofer · 2013-04-25T13:45:18.590Z · score: -1 (5 votes) · LW · GW

FAI is about singletons, because the idea is that the first one to foom wins.

ETA: also, rational agents may be ethical in societies, but there's no advantage to being an ethical singleton.

comment by PrawnOfFate · 2013-04-25T14:01:28.747Z · score: -2 (2 votes) · LW · GW

UFAI is about singletons. If you have an AI society whose members compare notes and share information -- which is instrumentally useful for them anyway -- you reduce the probability of a singleton fooming.

comment by MugaSofer · 2013-04-25T16:13:23.498Z · score: -1 (3 votes) · LW · GW

Any agent that fooms becomes a singleton. Thus, it doesn't matter if they acted nice while in a society; all that matters is whether they act nice as a singleton.

comment by PrawnOfFate · 2013-04-25T16:15:20.114Z · score: 0 (2 votes) · LW · GW

I don't get it: any agent that fooms becomes superintelligent. Its values don't necessarily change at all, nor does its connection to its society.

comment by Randaly · 2013-04-25T16:34:41.705Z · score: 2 (4 votes) · LW · GW

An agent in a society is unable to force its values on the society; it needs to cooperate with the rest of society. A singleton is able to force its values on the rest of society.

comment by PrawnOfFate · 2013-04-24T02:23:49.066Z · score: -2 (2 votes) · LW ·