Scenario analysis: semi-general AIs

post by Will_Newsome · 2012-03-22T09:11:45.829Z · LW · GW · Legacy · 66 comments

Are there any essays anywhere that go in depth about scenarios where AIs become somewhat recursive/general in that they can write functioning code to solve diverse problems, but the AI reflection problem remains unsolved and thus limits the depth of recursion attainable by the AIs? Let's provisionally call such general but reflection-limited AIs semi-general AIs, or SGAIs. SGAIs might be of roughly smart-animal-level intelligence, e.g. have rudimentary communication/negotiation abilities and some level of ability to formulate narrowish plans of the sort that don't leave them susceptible to Pascalian self-destruction or wireheading or the like.

At first blush, this scenario strikes me as Bad; AIs could take over all computers connected to the internet, totally messing stuff up as their goals/subgoals mutate and adapt to circumvent wireheading selection pressures, without being able to reach general intelligence. AIs might or might not cooperate with humans in such a scenario. I imagine any detailed existing literature on this subject would focus on computer security and intelligent computer "viruses"; does such literature exist, anywhere?

I have various questions about this scenario, including:

Those are the questions that immediately spring to mind, but I'd like to see who else has thought about this and what they've already considered before I cover too much ground.
My intuition says that thinking about SGAIs in terms of population genetics and microeconomics will somewhat counteract automatic tendencies to imagine cool stories rather than engage in dispassionate analysis. I'd like other suggestions for how to achieve that goal.
I'm confused that I don't see people talking about this scenario very much; why is that? Why isn't it the default expected scenario among futurologists? Or have I just not paid close enough attention? Is there already a name for this class of AIs? Is the name better than "semi-general AIs"?
Thanks for any suggestions/thoughts, and my apologies if this has already been discussed at length on LessWrong.


66 comments

Comments sorted by top scores.

comment by Dmytry · 2012-03-22T11:00:00.162Z · LW(p) · GW(p)

I would say humans are incapable of reflection. If we were, we'd already know how intelligence works and how to make AI capable of reflection.

What we say we "reflect" on is some minor subset of internal processes that gets processed like external input, and it confuses the hell out of us, leading even smart people like Roger Penrose to invent new physics just to explain their subjectivity. Combine that with the general ability to invent a label for a physical object, and to call one of those physical objects - the one that's always nearby - 'self'.

We also don't really recurse anywhere; we just have a symbol that says 'I am' in the "map", which is distinct from the symbol that says 'my brain'. (A problem with the map analogy is that two points in the mental map can also correspond to one real-world item.)

Replies from: wedrifid
comment by wedrifid · 2012-03-22T11:57:42.753Z · LW(p) · GW(p)

I would say humans are incapable of reflection. If we were, we'd already know how intelligence works and how to make AI capable of reflection.

We are limited when it comes to reflection but we are most certainly not incapable of it. I am reflecting as I write this, analyzing just what my reflective capabilities are!

Replies from: cousin_it, Dmytry
comment by cousin_it · 2012-03-22T13:40:29.371Z · LW(p) · GW(p)

There seems to be a spectrum of different types of reflection. You can trust observations of how parts of you map inputs to outputs, but you can't trust quined descriptions embedded in your source code, because the programmer who wrote you didn't necessarily care about writing a perfect quine.
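
To make the distinction concrete, here is a toy sketch (my own illustration, with made-up names): a program whose source carries a hard-coded "self-description" that doesn't match its actual behavior, so trusting the embedded description fails while observing input-output behavior doesn't.

```python
# Toy illustration: a hard-coded "self-description" that the code does not live up to.
# Trusting the embedded description is unsafe; observing input-output behavior is not.

SELF_DESCRIPTION = "f doubles its input"   # what the programmer claimed about the code

def f(x):
    return 2 * x + 1                       # what the code actually does

if __name__ == "__main__":
    print("Claimed: ", SELF_DESCRIPTION)
    print("Observed:", [(x, f(x)) for x in range(4)])  # [(0, 1), (1, 3), (2, 5), (3, 7)]
```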

Replies from: wedrifid
comment by wedrifid · 2012-03-22T13:43:27.917Z · LW(p) · GW(p)

because the programmer who wrote you didn't necessarily care about writing a perfect quine.

Understatement of the year!

comment by Dmytry · 2012-03-22T12:00:44.312Z · LW(p) · GW(p)

Are you truly reflecting on yourself, though? I understand that the thing you're reflecting on does map to the same point as the thing that I would describe as you, and so you are 'reflecting on yourself', but that is just a general feature of the map-vs-territory problem, where multiple things compress to a single map point.

One could say that the thing you are reflecting on is some portion of yourself; that would perhaps be fair as far as 'limited reflection' goes. But then, it is not hard to write a program that looks at a small portion of itself.
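
For instance, a minimal sketch (my own illustration) of a program that looks at one small portion of itself:

```python
# Minimal sketch: a program that inspects a small portion of its own source.
import inspect

def decide(x):
    """One small piece of this program's behavior."""
    return x * 2 if x > 0 else 0

if __name__ == "__main__":
    # The program can look at this one part of itself...
    print(inspect.getsource(decide))
    # ...but this reveals nothing about the rest of the process doing the looking,
    # which is the sense in which such reflection is limited.
```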

Replies from: wedrifid
comment by wedrifid · 2012-03-22T12:10:57.181Z · LW(p) · GW(p)

One could say that the thing you are reflecting on is some portion of yourself; that would perhaps be fair as far as 'limited reflection' goes.

We're certainly better at reflecting on some parts of ourselves than others. The ironic thing, though, is that when we look more closely and analyze just what it is that we are not reflecting on very well, we open up the can of worms that we had previously been avoiding.

It occurs to me that it may be a good thing that we are limited in this regard and are not yet able to reflect well enough to reproduce our intelligence in the form of a self-reflective AI. If we could we'd probably have gone extinct already.

Replies from: Will_Newsome, Dmytry
comment by Will_Newsome · 2012-03-22T12:24:24.039Z · LW(p) · GW(p)

If we could we'd probably have gone extinct already.

(I've been trying to avoid the uFAI=extinction thing lately, as there are various reasons that might not be the case. If we build uFAI we'll probably lose in some sense, maybe even to the extent that the common man will be able to notice we lost, but putting emphasis on the extinction scenario might be inaccurate. Killing all the humans doesn't benefit the AI much in most scenarios and can easily incur huge costs, both causally and acausally. Do you disagree that it's worth avoiding conflating losing and extinction?)

Replies from: wedrifid
comment by wedrifid · 2012-03-22T12:30:46.257Z · LW(p) · GW(p)

Do you disagree that it's worth avoiding conflating losing and extinction?

I agree that it's worth avoiding the conflation. There are losing scenarios that don't involve extinction - most highly concentrated in the realm of not-quite-friendly-AI. (And, naturally, excluding from the class under consideration all "uFAIs" that are insufficiently capable or insufficiently motivated to do anything much at all.)

For the purpose of this utterance it happens that I would make both the claims:

  • If we could we'd probably have gone extinct already.
  • If we could we'd probably have Lost already.

I expect I have different predictions regarding likely uFAI results and in particular different expectations regarding how acausal costs will be calculated.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-03-22T13:04:10.230Z · LW(p) · GW(p)

I expect I have different predictions regarding likely uFAI results and in particular different expectations regarding how acausal costs will be calculated.

FWIW acausal considerations don't figure as much into my calculations as straightforward "wow I hope there isn't an AI already chilling 'round these parts who will get pissed at me if I try to kill all the humans" considerations do.

It really seems to me that you'd have to be very, very confident that there were no gods around to punish you for you to think it was worth it to turn the humans into computronium. Like, there's an entire sun just sitting there for you to pluck, assuming Amon-Ra isn't already chilling in the center of it. I guess if the uFAI splintered a lot for whatever reason into AIs of differing power, and the AIs didn't cooperate, then you might end up with humans caught in the crossfire...?

Replies from: wedrifid, Crux
comment by wedrifid · 2012-03-22T13:09:46.599Z · LW(p) · GW(p)

It really seems to me that you'd have to be very, very confident that there were no gods around to punish you for you to think it was worth it to turn the humans into computronium.

You just Pascal-Mugged future superintelligences into letting humans live. I've got to admit, that's kind of badass.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-03-22T14:27:21.129Z · LW(p) · GW(p)

I just learned that von Neumann got... um, "wagered". Considering von Neumann was clearly a transhuman this establishes a lower bound on how smart you can be and still accept Pascal's wager. (Though I somewhat suspect that von Neumann's true reasons for returning to Catholicism late in life are more complicated than that.)

Replies from: Wei_Dai, Dmytry, XiXiDu
comment by Wei Dai (Wei_Dai) · 2012-03-22T21:53:53.140Z · LW(p) · GW(p)

Gary Drescher once reminded me that von Neumann may have already suffered neurological damage from metastatic cancer at that point, so this lower bound may not be as high as you think (unless that's what you were alluding to by "true reasons").

Replies from: Will_Newsome
comment by Will_Newsome · 2012-03-22T22:10:21.450Z · LW(p) · GW(p)

Von Neumann had been a practicing Catholic earlier in life, so it's not that strange that he would return to Catholicism near the end. By "true reasons" I didn't mean brain damage, though thanks for bringing up that possibility. He was still transhumanly intelligent up until the end, but maybe not quite as transhumanly intelligent. But I guess I just meant that Pascal's wager surely wasn't his only consideration.

comment by Dmytry · 2012-03-22T19:11:37.679Z · LW(p) · GW(p)

To expand on that, the difference between the 'let humans live' Pascal's wager and the ordinary Pascal's wager is that letting humans live is the status quo.

Consider the 'don't eat me, my daddy is a cop' argument. It is hardly a Pascal's wager. It is a rational thing to consider, especially when there's no shortage of food. It is more plausible that the status quo is the product of the actions of an ultra-powerful being than that 'you must give me all the money you've got or God will be pissed off'.

comment by XiXiDu · 2012-04-10T10:29:44.248Z · LW(p) · GW(p)

I just learned that von Neumann got... um, "wagered". Considering von Neumann was clearly a transhuman this establishes a lower bound on how smart you can be and still accept Pascal's wager.

It is quite fascinating how belief in God oscillates across intelligence (rationality?) levels. Chimpanzees are naturally atheistic. Average humans are religious. Above-average humans are usually atheistic. High-IQ individuals like Eliezer Yudkowsky tend to be agnostic, in the sense that they assign a nonzero probability to the existence of God and believe in the existence of natural or artificial gods. And people on the verge of posthumanism, like John von Neumann, of whom it was said that "only he was fully awake", again lean towards theism. I wonder if a truly posthuman AI would oscillate back to atheism while conjecturing that Omega is probably a theist.

Replies from: pedanterrific
comment by pedanterrific · 2012-04-10T11:54:04.621Z · LW(p) · GW(p)

Well, Omega would have to have a rather strange mind design not to believe in itself.

Replies from: Nisan
comment by Nisan · 2012-04-27T14:44:33.386Z · LW(p) · GW(p)

Like AIXI.

comment by Crux · 2012-03-22T17:26:30.564Z · LW(p) · GW(p)

I've never even seen a shred of evidence suggesting I should believe there's a deity prepared to punish you if you kill people or do something immoral (or anything else for that matter), so it's on the exact same level as worrying about the possibility that snapping your fingers more than 300 times throughout the course of your lifetime may lead to an afterlife of eternal torture.

There's absolutely no reason why one should consider it more likely that there's a deity waiting to judge you after your death than really anything else at all. Maybe playing disc golf even once is bound to lead to an afterlife of hell. Ever played? If not, you may still have a chance!

If you consider it from a sound epistemic point of view, it becomes obvious that in our current state of knowledge, Pascal's Wager applies equally well to everything, and is thus utterly useless or even simply meaningless. It's only a particular vulnerability in human brain hardware that makes it seem any different.

A lot of people still believe the whole god hypothesis, and many more have believed it over the course of human history, but that doesn't mean it's any more useful than an equally ridiculous hypothesis that I could invent right now: Learning to type with proper mechanics may help you in the short term (your mortal life), but beware, for God may not approve, and He doesn't mess around with his revenge.

We as humans are designed to think in groups, and it takes a special sort of introspection to pull yourself out of the mess and realize that the system is broken, and that a large percentage of the output is not to be trusted. That much you should know by virtue of having spent more than an hour reading Less Wrong, but perhaps you don't see this particular application.

I understand the pull of Pascal's Wager. I've felt it, and I still do. It feels like there's some special evidence for God's existence, and like we should privilege this hypothesis over another, but that's only because of how many people have believed in religion in the past, or rather how many times you've heard it espoused in an approving way. We're simply wired that way.

Replies from: Dmytry, Will_Newsome
comment by Dmytry · 2012-03-22T19:02:50.667Z · LW(p) · GW(p)

The AI is thinking more straight than you. It doesn't need to believe in God; it can assign some uncertainty. The only good argument for the non-existence of God is Occam's razor, and Occam's razor doesn't say the complex explanation is impossible, merely that it is unlikely (and shouldn't be privileged).

Imagine an AI that knows precisely how much more complex the rules that lead to a universe full of gods who create simulations of our universe are than the rules for our universe alone. This AI will tell you exactly how likely it is that it is sitting in a simulation, instead of making some hacks for Pascal's wager. This AI will have an atheist predictor with weight p and a theist predictor with weight 1-p. The theist predictor likes the status quo. The theist predictor, however, won't necessarily respond to claims like "you must give me all your money or else God will punish you", because it has no evidence for this statement about God over the converse, "you must not give me any of your money or else God will punish you".
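
As a concrete illustration (a toy sketch with made-up numbers, not anything from the comment above), the weighted-predictor idea treats the two kinds of claims very differently:

```python
# Toy mixture of an "atheist" predictor (weight p) and a "theist" predictor (weight 1 - p).
# Only the theist predictor assigns any probability to divine punishment.

def p_event(p_atheist, p_given_atheist, p_given_theist):
    """Probability of an event under the two-hypothesis mixture."""
    return p_atheist * p_given_atheist + (1.0 - p_atheist) * p_given_theist

p = 0.999  # hypothetical weight on the atheist predictor

# Destroying the status quo: the theist predictor sees real risk, the atheist predictor none.
print(p_event(p, 0.0, 0.5))  # 0.0005 - small but nonzero, so the status quo gets some protection

# A mugger's claim "pay me or God punishes you": the theist predictor has no evidence for it
# over its converse, so paying and refusing come out symmetric and the threat cancels.
print(p_event(p, 0.0, 0.5) - p_event(p, 0.0, 0.5))  # 0.0 - the mugging gains nothing
```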

Replies from: Crux
comment by Crux · 2012-03-22T19:22:22.177Z · LW(p) · GW(p)

What's the point of speculating about what something literally defined as having more knowledge than us would believe? All I know is that in our current state of knowledge, there's no more evidence for the existence of a God prepared to punish you with an afterlife in hell if you murder somebody than one who would do the same if you play disc golf.

I certainly don't think I have an argument for the non-existence of God, nor am I looking for one. Try disproving the idea that chewing on your left side between 60% and 70% of the time over your lifetime will leave you screwed in the afterlife. Of course you can't disprove it, but then again I don't think I need to make this point on LW.

Why exactly would one have to be "very, very confident that there were no gods around to punish you for you to think it was worth it to turn the humans into computronium" (the original quote from Will)? Since there's no evidence for this hypothesis, you might as well spend your time worrying about making left turns, or chewing on the wrong side of your mouth, or really anything at all.

Replies from: Dmytry, Will_Newsome
comment by Dmytry · 2012-03-22T19:23:36.226Z · LW(p) · GW(p)

What's the point of speculating about what something literally defined as having more knowledge than us would believe?

Precisely. No point. But people have been speculating a lot about how it would behave, talking themselves into certainty that it would eat us. Those people need a speculative antidote. If you speculate about one thing too much, but not about anything else, you start taking speculation as weak evidence, and deluding yourself.

edit: Also, try eating random unknown chemicals if you truly believe you should not worry about unknowns. One absolutely SHOULD worry about changing the status quo.

Replies from: Crux, Crux
comment by Crux · 2012-03-22T19:36:12.018Z · LW(p) · GW(p)

It may be because I haven't slept in 30 hours, but I'm having a hard time interpreting your writing. I've seen you make some important insights elsewhere, and I occasionally see exactly what you're saying, but my general impression of you is that you're not very good at judging your audience and properly managing the inferential distance.

You seem to agree with me to some extent in this discussion, or at least we don't seem to have a crucial disagreement, and this topic doesn't seem very important anyway, so I'm not necessarily asking you to explain yourself if that would take a long time, but perhaps this can serve as some constructive criticism thrown at you in a dark corner of a random thread.

As a meta question, would this sort of reply do better as a PM? What are the social considerations (signaling etc) with this sort of response? I don't know where to even start in that train of thought.

Replies from: Dmytry, Will_Newsome
comment by Dmytry · 2012-03-22T19:43:15.773Z · LW(p) · GW(p)

It may be because I haven't slept in 30 hours, but I'm having a hard time interpreting your writing

English is not my first language and you haven't slept in 30 hours; that reliably adds up to mutual incomprehension.

Yes, I think it is better in PM. People who read the recent comments would prefer that, I think. Public talk is particularly difficult because, e.g., I am inclined to defend the notion that my English is good enough, while you are inclined to defend the notion that not sleeping for 30 hours doesn't substantially impair your reading comprehension. I'll reply in PM.

comment by Will_Newsome · 2012-03-22T19:40:19.610Z · LW(p) · GW(p)

FWIW I understood his point. I normally put comments like yours in comments, not PMs, but I haven't done a thorough analysis either.

comment by Crux · 2012-03-22T19:48:36.517Z · LW(p) · GW(p)

To respond to the edit, I simply don't see the analogy.

Your wording makes it sound analogous because you could describe what I'm saying as "don't worry about unknowns" (i.e., you have no evidence for whether God exists or not, so don't worry about it), and you could also describe your reductio the same way (i.e., you have no evidence for whether some random chemical is safe, so don't worry about it), but when I try to visualize the situation I don't see the connection.

A better analogy would be being forced to take one of five different medications, and having absolutely no evidence at all for their safety, or any hope of getting such evidence, and knowing that the only possible unsafe side effects would come only far in the future (if at all). In such a situation, you would of course forget about choosing based on safety, and simply choose based on other practical considerations such as price, how easy they are to get down, etc.

One should worry about changing the status quo only if there was a useful, reliable market test in place beforehand that had something to do with why the status quo became what it is, and especially only if you don't have overwhelming evidence that (1) a known hardware or software vulnerability led to what became the status quo, and (2) remaining a part of that status quo is obviously and extremely epistemically hazardous (being religious is certainly an epistemic hazard--feel free to ask for elaboration).

comment by Will_Newsome · 2012-03-22T19:37:57.083Z · LW(p) · GW(p)

You do realize that "gods" means "other AGIs", right?

Replies from: Crux
comment by Crux · 2012-03-22T20:02:06.528Z · LW(p) · GW(p)

Yes, or rather I realize that in the sense that I do remember seeing you write that somewhere, but I'm not sure whether I had it sufficiently in mind during my replies. If you see anything suggesting that I didn't have it in mind such that it invalidated what I said as irrelevant to your position, let me know.

I should mention though that it may be epistemically unsanitary to use the term "god" (or "God") when you really mean AIs, considering how long and winding the history of such theistic terminology has been. If your goal is clear communication, I would suggest switching to a term with less baggage.

Even though I did know that's what you meant in the sense that I saw you define it earlier, I might easily have fallen into pattern-matching and ended up largely criticizing a position irrelevant to yours.

Your goal seems to be to identify as a theist though, so using the term "God" (and the other standard theistic terminology) may be necessary for that purpose, in which case you may either (1) want to make sure to take extra care to compensate for the historical baggage and ambiguity, or (2) simply forget you ever read this comment.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-03-22T20:15:23.352Z · LW(p) · GW(p)

I actually go out of my way to equate "god" and "AGI"/"superintelligence", because to a large extent they seem like the same thing to me.

Your goal seems to be to identify as a theist though

It's not that I want to identify as a theist, so much as that I want to point out that I think that the only reason people think that gods/angels/demons and AGIs/superintelligences/transhuman-intelligences are different things is because they're compartmentalizing. I think Aquinas and I believe in the same God, even if we think about Him differently. I know algorithmic probability theory, Aquinas didn't. Leibniz almost did.

(There's two different things going on: I believe there exists an ideal decision theory, Who is God, for theoretical reasons; whereas my reasons for believing that transhuman intelligences (lower-case-g gods) affect humans are entirely phenomenological.)

Replies from: Crux
comment by Crux · 2012-03-23T17:39:00.588Z · LW(p) · GW(p)

I actually go out of my way to equate "god" and "AGI"/"superintelligence", because to a large extent they seem like the same thing to me.

Can you give me the common meanings of those terms, and explain how they're equivalent?

It's not that I want to identify as a theist, so much as that I want to point out that I think that the only reason people think that gods/angels/demons and AGIs/superintelligences/transhuman-intelligences are different things is because they're compartmentalizing.

Compartmentalizing in what way? I think they're different things, or rather it seems utterly obvious to me that religious people using the theistic terms are always using them to refer to things completely different from what those on LW mean by those other terms.

I should say though that the way that the theistic terms are used is in no way consistent, and everybody seems to mean something different (if I can even venture a guess as to what the hell they're talking about). There are multiple meanings associated with these terms, to say the least.

Maybe your conception is something like, "If there really is anything out there that could in any way match the description in Catholicism or whatever, then it would perhaps have to be an AGI, or else a super-intelligent life-form that evolved naturally."

I would say though that this seems like a desperate attempt to resurrect the irrationality of religion. If I came up with or learned something interesting or important, and also realized that some scholar or school of thought from the past or present had a few central conclusions or beliefs that seem sort of similar in some way, but believed them all for the wrong reasons--specifically ones absolutely insane by my own epistemic standards--I would not care. I would move on, and consider that tradition utterly useless and uninteresting.

I don't understand why you care. It's not like Aquinas or anybody else believed any of this stuff for the same reasons you do, or anything like that, so what's the point of being like, "Hey, I know these people came up with this stuff for some random other reasons, but it seems like I can still support their conclusions and everything, so yeah, I'm a theist!" It just doesn't make any sense to me, unless of course you think they came to those conclusions for good reasons that have anything at all to do with yours, in which case I need some elaboration on that point.

Either way, usually I can't even tell what the hell most religious people are talking about from an epistemic or clear communication standpoint. I used to think they were just totally insane or something, and I would make actual attempts to understand what they were trying to get me to visualize, but it all became clear when I started interpreting what they were saying in a different way. It all became clear when I started thinking about it in terms of them employing techniques to delude themselves into believing in an afterlife, or simply just believing it because of some epistemic vulnerability their brain was operating under.

Those theistic terms ("God" etc) have multiple meanings, and different people tend to use them differently, or rather they don't really have meanings at all, and they're just the way some people delude themselves into feeling more comfortable about whatever, or perhaps they're just mind viruses taking advantage of some well-known vulnerabilities found in our hardware.

I can't for the life of me figure out why you want to retain this terminology. What use is it besides contrarianism? Does calling yourself a theist and using the theistic terms actually aid in my or anybody else's understanding of what you're thinking, or what? Is the objective the clear communication of something that would be important for me or other people on here to know, or what? I'm utterly confused about what you're trying to do, and what the supposed utility is, of these beliefs of yours and your way of trying to communicate them.

I think Aquinas and I believe in the same God, even if we think about Him differently.

What does that even mean? It sounds like the worst sort of sophistry, but I say that not necessarily to suggest you're making an error in your thinking, but simply to allude to how and why I have no idea exactly what that means.

(There's two different things going on: I believe there exists an ideal decision theory, Who is God, for theoretical reasons;

So you're defining the sequence of letters starting with "G", next being "o", and ending with "d" as "the ideal decision theory"? Is this a common meaning? Do all (or most of) the religious people I know IRL use that term to refer to the ideal decision theory, even if they wouldn't call it that?

And what do you mean by "ideal"? Ideal for what? Our utility functions? Maybe I even need to hear a bit of elaboration on what you mean by "decision theory". Are we talking about AI programming, or human psychology, or what?

whereas my reasons for believing that transhuman intelligences (lower-case-g gods) affect humans are entirely phenomenological.)

I literally have absolutely no idea why you chose the word "phenomenological" right there, or what you could possibly mean.

Replies from: Eugine_Nier, Will_Newsome
comment by Eugine_Nier · 2012-03-24T20:01:23.135Z · LW(p) · GW(p)

If I came up with or learned something interesting or important, and also realized that some scholar or school of thought from the past or present had a few central conclusions or beliefs that seem sort of similar in some way, but believed them all for the wrong reasons--specifically ones absolutely insane by my own epistemic standards--I would not care. I would move on, and consider that tradition utterly useless and uninteresting.

If I found a school of thought that seemed to come to correct conclusions unusually often but "believed them all for the wrong reasons--specifically ones absolutely insane by my own epistemic standards", I'd take that as evidence that there is something to their reasons that I'm missing.

So you're defining the sequence of letters starting with "G", next being "o", and ending with "d" as "the ideal decision theory"? Is this a common meaning?

Actually, yes. Specifically the tendency in Catholic thought to equate God with Plato's Form of the Good.

Replies from: Crux
comment by Crux · 2012-03-25T02:25:10.545Z · LW(p) · GW(p)

If I found a school of thought that seemed to come to correct conclusions unusually often but "believed them all for the wrong reasons--specifically ones absolutely insane by my own epistemic standards", I'd take that as evidence that there is something to their reasons that I'm missing.

You're absolutely right, but you're stipulating the further condition that they come to the correct conclusions "unusually often". I on the other hand was talking about a situation where they just happen to have a few of the same conclusions, and those conclusions just so happen to be central to their worldview.

I didn't get the feeling that Will thought that Catholicism was correct unusually often. I was under the impression that he (like many others before him) is simply trying his hardest to use some of the theistic terminology and identify as a theist, despite his science background.

Actually, yes. Specifically the tendency in Catholic thought to equate God with Plato's Form of the Good

I just read that article, but I couldn't parse anything, nor did I see any relation to decision theory. I'm left utterly confused.

comment by Will_Newsome · 2012-03-24T11:30:56.522Z · LW(p) · GW(p)

I think you're way too confident that the people you disagree with are obviously wrong, to the extent that I don't think we can usefully communicate. I'm tapping out of this discussion.

Replies from: Crux
comment by Crux · 2012-03-24T16:45:18.097Z · LW(p) · GW(p)

Have you observed my discussions elsewhere on this website, and come to the conclusion that I'm way too confident in that way in general, or are you referring only to this particular exchange?

This discussion seems like sort of a unique case. I wouldn't say I'm generally so confident in that respect, but I'm certainly extremely confident in this discussion, even to the point that I don't yet have a sufficiently detailed model of you to account for how you could possibly spend so much time on this website and still engage in the sort of communication that went on throughout this discussion.

Sure, I'm extremely confident that I'm the one who's right in this discussion, but then again that's probably the majority feeling when people on this website engage you on this topic, even to the extent that it's not uncommon for people to question whether you're just trolling at this point.

Looking back on what I could have done better in this discussion to make it more likely that you would hear me out instead of quitting, I realize that I probably would have had to spend about five times as much time writing, and have been extremely careful in every way throughout every reply. Even in retrospect, that probably wouldn't have been worth it. It takes much less time to just spill my thoughts and reactions than to take the necessary precautions to make this sort of tap-out less likely.

Even with all that said, I don't really understand the connection between me signaling that I think you're obviously wrong, and you saying that we can't usefully communicate. Am I being uncharitable in my replies, or do you think it's unlikely that I would update toward your position after setting a precedent of thinking you're clearly confused, or what?

I could see how you could pattern-match my high confidence with expecting me to have trouble updating, and I could also see how maybe some of my more terse moments may have come off as, "If he wasn't so confident, perhaps he would have thought longer about this and responded to a more charitable interpretation." But I should mention that in at least one case I went as far as responding to your question of whether I even know that you're using the word "God" or "god" to refer to AGIs, by admitting that I may be attacking a strawman.

I don't necessarily expect you to respond to any of this considering you already tapped out, but perhaps I've gone sufficiently meta for you to consider it a different discussion, one that you perhaps haven't tapped out of, but in any case you'll probably read this and maybe get something out of it, or even change your mind as to whether you want to continue engaging me on this topic.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-03-24T18:18:32.459Z · LW(p) · GW(p)

Have you observed my discussions elsewhere on this website, and come to the conclusion that I'm way too confident in that way in general, or are you referring only to this particular exchange?

Only this particular exchange; I haven't seen any of your other discussions.

It's not you clearly signaling you think I'm obviously wrong that I anticipate difficulties with; I was being imprecise. Rather, it's a specific emotion/attitude (exasperation?) that I detect and that stresses me out a lot, because it imposes a moral obligation on me to act in good faith to show you that the kind of reasoning you're engaged in, in my experience, often leads to terrible consequences that will look in retrospect as if they could easily have been avoided. On the one hand I want to try to help you, on the other hand I want to avoid blame for not having tried to help you enough, and there's no obvious solution to that double bind, and the easiest solution is to simply bail out of the discussion. (Not necessarily your blame, just someone's, e.g. God's.)

And it's not you thinking that I'm obviously wrong; apologies for being unclear. It's people in general. You say you "usually can't even tell what the hell most religious people are talking about from an epistemic or clear communication standpoint", and yet you're very confident they are wrong. People who haven't practiced the art of analyzing people's decision policies in terms of signaling games, Schelling points, social psychology &c. simply don't have the skills necessary to determine whether they're justified in strongly disagreeing with someone. Confidently assuming that your enemies are stupid is what basically everyone does, and they're all retarded for doing it. LessWrong is no exception; in fact, it's a lot worse than my high school friends, who weren't fooled into thinking that their opinions were worth something 'cuz of a superficial knowledge of cognitive science and Bayesian statistics.

It's not that I don't think you'd update. If I took the time to lay out all my arguments, or had time to engage you often in conversation, as I have done with many folk from the SingInst community, then I'm sure I would cause you to massively update towards thinking I'm right and that LessWrong has gaping holes in its epistemology. It's happened many times now. People start out thinking I'm crazy or obviously wrong or just being contrarian, I talk to them for a long time, they realize I have very good epistemic habits and kick themselves for not seeing it earlier. But it takes time, and LessWrong isn't worth my time; the only reason I comment on LessWrong is because I feel a moral obligation to, and the moral obligation isn't strong enough to compel me to do it well.

Also, I generally don't like talking about object level beliefs; I prefer to discuss epistemology. But I'm too lazy to have long, involved discussions about epistemology, so I wouldn't have been able to keep up our discussion either way.

Replies from: Crux
comment by Crux · 2012-03-25T02:09:06.521Z · LW(p) · GW(p)

Rather, it's a specific emotion/attitude (exasperation?) that I detect and that stresses me out a lot, because it imposes a moral obligation on me to act in good faith to show you that the kind of reasoning you're engaged in, in my experience, often leads to terrible consequences that will look in retrospect as if they could easily have been avoided.

I just don't understand. I see why you may detect a level of exasperation in my replies, but I don't get why that specifically would be what would impose that sort of moral obligation on you. You're saying that what I'm doing may lead to terrible consequences, which sounds bad and like maybe you should do something about it, but I'm utterly confused about why my attitude is what confers that on you.

In other words, wouldn't you feel just as morally obligated (if not more) to help me avoid such terrible consequences if I had handled this discussion with a higher level of respect or grace? Why does me (accidentally or not) signaling exasperation or annoyance lead to that feeling of moral obligation, rather than the simple fact that you consider it in your power to help somebody avoid (or lower the likelihood) of whatever horrible outcome you have in mind?

When I was first reading your reply and had only reached up to where you said "stresses me out a lot", I thought you were just going to say that me acting frustrated with you or whatever was simply making it uncomfortable or like you would get emotionally attached such that it would be epistemically hazardous or something, which I would have understood, but then you transitioned to the whole moral obligation thing and I sort of lost you.

On the one hand I want to try to help you

Just for reference, I should probably tell you what (I think) my utility function is, so you're in a (better) position to appraise whether what you have in mind really would be of help to me.

I'm completely and utterly disinterested in academic or intellectual matters unless they somehow directly benefit me in the more mundane, base aspects of my life. Unless a piece of information is apt to make me better at parkour, lifting, socializing, running, etc., or enable me to eat healthier so I'm less likely to get sick or come down with a terrible disease, or something like that, it's not useful to me.

If studying some science or learning some new esoteric fact or correcting some intellectual error of mine could help me get to sleep on time, make it less likely for me to die anytime soon, make it less probable for me to suffer from depression, help me learn how to handle social interaction more effectively, tame my (sometimes extreme) akrasia, enable me to contribute to reducing the possibility of civilization-wide catastrophe in my lifetime, etc., then I'm interested. Otherwise I'm not.

I'm telling you this simply so you know what it means to help me out. If whatever you have in mind can't be of use for me in my everyday life, then it's not helpful. I hang out on this website, and engage in intellectual matters quite regularly, but I do so only because I think it's the best way to fulfill my rather mundane utility function. We're not designed properly for our current environment, and the only way to compensate is to engage in some pretty deep introspection and spend a lot of time and energy working through plenty of intellectual matters.

So what do you have that could help me? I want to live a healthy, happy human life, not have it cut short by some societal collapse, and also hopefully be around for when (or if) we put an end to aging and make it so we don't have to die so young anymore. I also don't want to suffer an eternity burning in Hell, that is if such a place exists.

And it's not you thinking that I'm obviously wrong; apologies for being unclear. It's people in general. You say you "usually can't even tell what the hell most religious people are talking about from an epistemic or clear communication standpoint", and yet you're very confident they are wrong.

Oh sorry. I should have been more precise. I don't think anything past what you quoted of me. If by "wrong", you mean anything incompatible with not having any idea what they're talking about, or rather just not being able to interpret what they're saying as serious attempts at clear communication, then I certainly don't think they're wrong. I just think they're either really bad at communicating, or else engaged in a different activity.

So yeah. In that sense, I don't think they're wrong, and I don't think you're wrong. I just don't know what they're attempting to communicate. Or rather, it seems pretty obvious to me that most religious people aren't even trying to communicate at all, at least in the sense of intellectual discourse, or in terms of epistemic rationality. It seems pretty clear to me that they're just employing a bunch of techniques to get themselves to believe certain things, or else they're just repeating certain things because of some oddity in human brain design.

But there are a ton of different religions, and a ridiculous amount of variation from person to person, so I can't really criticize them all at once or anything, nor would it matter. And as for you, at this point I really just have no idea what you believe. It's not that I think you're wrong about whatever beliefs you have. It's that I still don't know what those beliefs are, and also that I'm under the impression that you're not doing a very good job with your attempts to communicate them to me.

In most discussions like this, the issue isn't that somebody has a clear map that doesn't fit the territory. It's almost always just a matter of a communication failure or a set of key misinterpretations, or something like that. Likewise in this discussion. It's not that I think what you believe is wrong; it's that I don't even know what you believe.

People who haven't practiced the art of analyzing people's decision policies in terms of signaling games, Schelling points, social psychology &c. simply don't have the skills necessary to determine whether they're justified in strongly disagreeing with someone.

I can't tell whether you're implying that I specifically don't have those skills, or whether you're just making some general observation or something.

Confidently assuming that your enemies are stupid is what basically everyone does, and they're all retarded for doing it.

I certainly don't do that. When you disagree with somebody, there's no getting around thinking that they're making an error (because it's like that by definition), but considering them "stupid" is nothing more than an empty explanation. Much more useful would be saying that your opponent thinks X because he's operating under some bias Y, or something like that.

In other words, I probably engage in plenty of discussions where I consider my opponent to be making a serious error, or not very good at managing the inferential distance properly, or ridiculously apt to make word-based errors, or whatever, but I never settle for the thought-terminating explanation that they're just stupid, or at least I don't think I do. Or do I?

There's just no getting around appraising the level of intellectual ability your opponent is operating on, just like I would never play a match of tennis with somebody who sucks without acknowledging that to myself. It's saying "he sucks" without even considering why exactly that is the case that's the problem. When I engage in intellectual discussions, I try to stick to observations like "he doesn't define his terms precisely" rather than just "he's an idiot".

LessWrong is no exception; in fact, it's a lot worse than my high school friends, who weren't fooled into thinking that their opinions were worth something 'cuz of a superficial knowledge of cognitive science and Bayesian statistics.

Is this aimed at me also, or what?

It's not that I don't think you'd update. If I took the time to lay out all my arguments, or had time to engage you often in conversation, as I have done with many folk from the SingInst community, then I'm sure I would cause you to massively update towards thinking I'm right and that LessWrong has gaping holes in its epistemology.

If the received opinion on Less Wrong really does have gaping holes in its epistemology, then I'd like to be first in line to hear about it.

That said, I alone am not this entity we call "Less Wrong". You're telling me I'd update massively in your direction, which means you think I also have these gaping holes in my epistemology, but do you really know that through just these few posts back and forth we've had here?

It's happened many times now. People start out thinking I'm crazy or obviously wrong or just being contrarian, I talk to them for a long time, they realize I have very good epistemic habits and kick themselves for not seeing it earlier.

with many folk from the SingInst community

Who are these people who updated massively in your direction, and would any of them be willing to explain what happened, or have they in some series of posts? You're telling me that I could revolutionize my epistemic framework if I just listened to what you had to say, but then you're leaving me hanging.

But it takes time, and LessWrong isn't worth my time; the only reason I comment on LessWrong is because I feel a moral obligation to, and the moral obligation isn't strong enough to compel me to do it well.

Are you sure you're not just doing more harm than good by being so messy in your posts? And of course the important implication here is that I personally am not worth your time, or you would talk to me long enough to actually explain yourself.

I'm just left wondering why you're still here, just as many other people probably have been, and of course also left wondering what sort of revolutionary idea you may be hiding.

Replies from: khafra
comment by khafra · 2012-03-27T15:27:41.669Z · LW(p) · GW(p)

I think I can translate, a bit:

People who haven't practiced the art of analyzing people's decision policies in terms of signaling games, Schelling points, social psychology &c. simply don't have the skills necessary to determine whether they're justified in strongly disagreeing with someone.

I can't tell whether you're implying that I specifically don't have those skills, or whether you're just making some general observation or something.

As far as I can tell, Will's a stronger epistemic majoritarian than most nerds, including us LW nerds. If a bunch of people engage in a behavior, his default belief is that behavior is adaptive in a comprehensive enough context, when examined at a meta-enough level.

Will has spent a lot of time practicing model-based thinking. Even with that specific focus, he doesn't consider his own skills adequate to declare the average person's behavior stupid and counterproductive. I'm an average LW'ian: I've read The Strategy of Conflict, the Sequences, and Overcoming Bias, and I've had a few related insights in my daily life. I don't have enough skill to dissolve the question and write out a flowchart that shows why some of the smartest and most rational people in the world are religious. So Will's not going to trust me when I say that they're wrong.

And as for you, at this point I really just have no idea what you believe.

He's a prospective modal Catholic--replace each instance of "amen" with "or so we are led to believe."

I'm just left wondering why you're still here, just as many other people probably have been, and of course also left wondering what sort of revolutionary idea you may be hiding.

He suspects himself of prodromal schizophrenia, due to symptoms like continuing to post here.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-05-11T19:04:30.901Z · LW(p) · GW(p)

Some of my majoritarianism is in some sense a rationalization, or at least it's retrospective. I happened to reach various conclusions, some epistemic, some moral, and learned various things that happened to line up much better with Catholic dogma than with any other system of thought. Some of my majoritarianism stems from wondering how I could have reached those conclusions earlier or more reliably, without the benefit of epistemic luck, which I've had a lot of. I think the policy that pops out isn't actually majoritarianism so much as harboring a deep respect for highly evolved institutions, a la Nick Szabo. There's also Chesterton's idea of orthodoxy as democracy spread over time. On matters where there's little reason to expect great advancement of the moderns over older cultures, like in spirituality or morality, it would be foolish to adopt a modern-majoritarian position that ignored the opinions of those older cultures. I don't actually have all that much respect for the "average person", but I do have great respect for the pious and the intellectually humble. I honestly see more rationality in the humble creationist than in the prototypical yay-science boo-religion liberal.

He's a prospective modal Catholic--replace each instance of "amen" with "or so we are led to believe."

Though I think my actually converting is getting less likely the more I think about the issue and study recent Church history.

He suspects himself of prodromal schizophrenia, due to symptoms like continuing to post here.

More due to typical negative symptoms and auditory hallucinations and so on most prominent about six months ago, among a few other reasons. But perhaps it's more accurate to characterize myself as schizotypal.

comment by Will_Newsome · 2012-03-22T17:43:55.587Z · LW(p) · GW(p)

Other people's beliefs are evidence. Many people believe in God. No one believes that disc golf causes eternal torture. The two hypotheses should not be assigned equal probability.

that's only because of how many people have believed in religion in the past

So you do not believe that others' beliefs are evidence?

Replies from: Crux, Vladimir_Nesov
comment by Crux · 2012-03-22T18:22:14.490Z · LW(p) · GW(p)

So you do not believe that others' beliefs are evidence?

It's sometimes (or even very often) evidence, but not when (1) there's not even a shred of evidence elsewhere, and (2) there's a convincing, systematic explanation for how a particular cluster of epistemic vulnerabilities in human brain hardware led to its widespread adoption.

In other words, a large portion of society believing something is evidence only if the memetic market test for the adoption of the idea at hand is intact. But our hardware and factory settings are so ridiculously mal-adapted to the epistemic environment of the modern world that this market test is extremely often utterly broken and useless.

If you want to make use of the societal thoughts on an issue, you must first appraise the health of the market test for the adoption of the ideas. Is it likely that competition in this area of the memetic environment will lead to ever more sound beliefs, or is there a wrench in the system that is bound to lead to a systematic spiral to ever more ridiculous or counterproductive dogmas?

Our hardware is just so riddled with epistemic problems that it would be a huge mistake to consider societal conclusions at face value. If the market test for meme propagation were intact, and the trial-and-error system for weeding out less useful beliefs in favor of more useful ones ran smoothly, large-scale acceptance of a position would of course be plenty of evidence--no further questions asked.

But we live in a different world--one where this trial-and-error system is in utter disrepair in an absolutely staggering number of cases. In such a world, one must always start with the question, "Is the memetic market test intact in this case, or must I make this epistemic journey myself?"

Of course the market test is better or worse from one place to the next, and I hang out here because the Less Wrong community certainly has one of the best belief propagation systems out there. If everybody on here seems to believe something with a lot of conviction, that to me is strong evidence.

In case the point was lost in the length, I should state it concisely. Whether the beliefs of others are evidence is a contextual question. It depends what the market test is, specifically for the propagation of the belief under scrutiny. If there's reason to believe that the market test is corrupted because of a particular hardware or software vulnerability, then there's reason to dismiss the widespread acceptance, and declare it no evidence at all.
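
The same point can be put in likelihood-ratio terms; a minimal sketch (my own illustration, with made-up numbers): widespread belief in X is evidence for X only insofar as widespread belief is more probable when X is true than when it is false.

```python
# Toy Bayes calculation: how much widespread belief in X should move you depends on
# the likelihood ratio P(widespread belief | X true) / P(widespread belief | X false).

def posterior_odds(prior_odds, p_belief_if_true, p_belief_if_false):
    """Posterior odds = prior odds * likelihood ratio of the observation."""
    return prior_odds * (p_belief_if_true / p_belief_if_false)

prior = 0.01  # illustrative prior odds for X

# Intact "market test": the belief spreads far more readily when X is true.
print(posterior_odds(prior, 0.9, 0.1))   # 0.09 - a large update

# Broken "market test": the belief spreads almost as readily whether or not X is true.
print(posterior_odds(prior, 0.9, 0.85))  # ~0.0106 - almost no update
```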

If you accept all that, this of course brings us to the all-important question of why I think the memetic market test for the propagation of religion is broken enough to explain such widespread adoption despite how epistemically insane I consider it, but I don't think I need to (try to) answer that. You've probably heard it all before on here, in writing on memetics, from Dawkins, etc.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-03-22T19:29:58.045Z · LW(p) · GW(p)

I more or less accept your reasoning as far as it goes, but:

our hardware and factory settings are so ridiculously mal-adapted to the epistemic environment of the modern world that this market test is extremely often utterly broken and useless.

If this is true, then why have so much confidence in your own personal appraisal of who to trust and who to write off as deluded? It is of course true that nearly everyone believes what they do for non-truth-tracking reasons, but "nearly everyone" isn't everyone, and there are many people, both theist and atheist, who believe what they do even despite strong memetic pressures to the contrary. Take me, for example; my theism doesn't win me any points with anyone, at least not as many points as it loses. And there are many theists like me. Knowing what you know about how easily humans fall into delusion, how can you be so confident that it's the other side that is deluded, and not your own? To return to the point, can you really be confident enough to disregard Pascal's wager? If so, how did so many at-least-nominally-truth-seeking people, from Plato to Pascal to Kant to me, end up disagreeing with you? How did we fall into such an obvious error?

Replies from: lavalamp, Crux
comment by lavalamp · 2012-03-22T19:49:55.547Z · LW(p) · GW(p)

And there are many theists like me.

Based on the comments of yours I've read, I think the only way you can call yourself a theist is by redefining most theistic terminology. Tell me if I'm wrong, but I don't think you agree with the object-level claims made by an average theist, as that theist would understand them. I'm not sure what you should call yourself...

Replies from: pedanterrific, Will_Newsome
comment by pedanterrific · 2012-03-22T20:39:20.405Z · LW(p) · GW(p)

I'm not sure what you should call yourself...

Newsomelike.

comment by Will_Newsome · 2012-03-22T19:55:05.959Z · LW(p) · GW(p)

As far as I know, I have the same conception of God as Thomas Aquinas did, and Thomism is the predominant philosophy of the Catholic church, which is the largest sect of Christianity, which is the most popular religion in the world.

A year or two ago my ideas were still pretty fuzzy, so that might have tripped you up. I change my mind pretty often.

Replies from: lavalamp
comment by lavalamp · 2012-03-22T20:09:35.776Z · LW(p) · GW(p)

Can you reliably communicate a good approximation of what you believe to another without reference to decision theory?

If yes, I'll accept your hypothesis that I've been reading the wrong comments of yours.

If no, I really doubt that Aquinas would recognize what you believe as what he believed.

(And I don't know what the situation is among the average Catholic, but IMX the average protestant doesn't even know who Aquinas is, so my point may still hold anyway....)

Replies from: Will_Newsome
comment by Will_Newsome · 2012-03-22T20:22:49.144Z · LW(p) · GW(p)

Can you reliably communicate a good approximation of what you believe to another without reference to decision theory?

I think so. There is a supremely powerful person, Who is the Form of the Good, Who is perfectly simple... yeah, pretty sure I can do it using accepted theological terminology.

Does it matter what the average theist believes? If Aquinas doesn't believe in the same God that a typical Baptist churchgoer does, I don't think that means that Aquinas isn't a theist. If the average biology student doesn't have the same definition of "gene" as the best biologists do... (This is like some really weird variation on No True Scotsman.)

Replies from: lavalamp, kodos96
comment by lavalamp · 2012-03-22T20:36:11.191Z · LW(p) · GW(p)

I think so. There is a supremely powerful person, Who is the Form of the Good, Who is perfectly simple... yeah, pretty sure I can do it using accepted theological terminology.

But I don't think those words coming from you are generated by the same thought process that most theists use to make similar statements. You use the same words, but you mean something different. At least, that is my impression.

Does it matter what the average theist believes?

No. But at some point it becomes helpful to try and make sure everyone means similar things when using the same word. If that's not possible then maybe it's a good time to taboo the word. I'm thinking that "theist" usually refers to a particular cluster of beliefs that are sorta similar to yours but different enough that I'm not sure if calling yourself a theist clarifies or obscures your actual beliefs. I'm leaning towards "obscures"...

Replies from: Will_Newsome
comment by Will_Newsome · 2012-03-22T20:44:28.440Z · LW(p) · GW(p)

You use the same words, but you mean something different. At least, that is my impression.

My impression differs... like, there's only so many different things "supremely powerful person" and "Form of the Good" can mean, ya know? The meanings of those words all seem pretty straightforward.

Replies from: lavalamp
comment by lavalamp · 2012-03-22T21:05:04.359Z · LW(p) · GW(p)

mmm.... I was about to agree with you, but after some thought, no, I think those words are incredibly vague. I can see adherents of most any religion agreeing with them. And the various religions typically think that they disagree with each other. I still maintain that by the time you define your beliefs at the same specificity as a typical human religion does, most Christians will not count you among their number.

I'm not saying that they're right and you're wrong (I'll bet on you if those are my options) just that you aren't really saying the same thing.

comment by kodos96 · 2012-12-18T21:32:54.309Z · LW(p) · GW(p)

Does it matter what the average theist believes? If Aquinas doesn't believe in the same God that a typical Baptist churchgoer does, I don't think that means that Aquinas isn't a theist.

It matters if you're arguing from a majoritarian "orthodoxy as democracy spread over time" perspective. If the vast majority of theists throughout history didn't actually believe in the God of Aquinas, but rather in the God of the old testament (or whatever), then you can't cite their belief as evidence supporting Aquinas' (or your) God.

Or am I misunderstanding your argument?

comment by Crux · 2012-03-22T20:17:23.332Z · LW(p) · GW(p)

We seem to be getting into some potentially very important territory, and I would certainly like to continue this discussion, but I'm running out of time for now and may be busy for up to 24 hours.

Before I go though, I should say at least one thing. It's certainly not an obvious error, and I could well be the one who's wrong. The discussions about rationality on Less Wrong are extremely useful for a basic reason: it's an extremely difficult and intricate epistemic journey to compensate for our maladapted hardware and software, and LW does it better than any other place at the moment (as far as I can see).

So yeah, your questions are certainly important, and they perhaps get to the essence of the issue. I look forward to trying to answer those questions, and seeing where it leads us in the discussion (assuming you think this is useful too). Feel free to write anything else in the meantime, or not.

comment by Vladimir_Nesov · 2012-03-22T20:27:19.793Z · LW(p) · GW(p)

So you do not believe that others' beliefs are evidence?

A belief can be evidence for its stipulated meaning (and often is), but it could also be counterevidence, or irrelevant. What a belief is evidence for is not automatically its stipulated meaning.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-03-22T20:38:12.973Z · LW(p) · GW(p)

Yes, that's the response I was fishing for, so I could spring my trap. But the bait wasn't meant for you, so now I have to make pointless commentary. Alas.

Replies from: kodos96
comment by kodos96 · 2012-12-18T21:35:59.137Z · LW(p) · GW(p)

Would it be possible for you to go ahead and spring your trap anyway? I'm very curious what you had in mind.

comment by Dmytry · 2012-03-22T12:16:34.302Z · LW(p) · GW(p)

We're certainly better at reflecting on some parts of ourselves than others. The ironic thing, though, is that when we look more closely and analyze just what it is that we are not reflecting on very well, we open up the can of worms that we had previously been avoiding.

In the context of the original post - suppose that an SGAI is logging some of its internal state into a log file, then gains access to read this log file, and reasons about it in the same way it reasons about the world - noticing correlations between its feelings and state and the contents of the log. Wouldn't that be the kind of reflection that we have? Is an SGAI even logically possible without hard-coding some blind spot about itself into the AI?
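(A minimal sketch, purely to illustrate the logging idea above and not anything from the post; the class name, file name, and fields are invented.) The agent writes its internal state to an ordinary log file and later reads that log back the same way it would read any external data:

    # Hypothetical illustration: "reflection" as reading one's own log file.
    import json

    LOG_PATH = "internal_state.log"  # invented name for this sketch

    class ToyAgent:
        def __init__(self):
            self.state = {"step": 0, "last_reward": 0.0}

        def step(self, reward):
            # Ordinary operation: update internal state and append it to the log.
            self.state["step"] += 1
            self.state["last_reward"] = reward
            with open(LOG_PATH, "a") as f:
                f.write(json.dumps(self.state) + "\n")

        def observe_log(self):
            # The "reflective" part: the agent consumes its own log exactly the
            # way it would consume external observations, with no privileged
            # introspective access, and can then look for correlations between
            # logged state and outcomes.
            with open(LOG_PATH) as f:
                return [json.loads(line) for line in f]

    agent = ToyAgent()
    for r in (0.0, 1.0, 0.5):
        agent.step(r)
    print(agent.observe_log())  # the log is just more input data to reason about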

If we could we'd probably have gone extinct already.

Or maybe we're going to go extinct real soon now, because we lack the ability to reflect like this, and consequently didn't have a couple of thousand years to develop an effective theory of mind for FAI before we made the hardware.

Replies from: wedrifid
comment by wedrifid · 2012-03-22T12:24:02.299Z · LW(p) · GW(p)

Or maybe we're going to go extinct real soon now, because we lack the ability to reflect like this, and consequently didn't have a couple of thousand years to develop an effective theory of mind for FAI before we made the hardware.

Having the ability to design and understand AI for a couple of thousand years, but somehow lacking the ability to actually implement it, sounds just about perfect. If only!

Replies from: Armok_GoB, Dmytry
comment by Armok_GoB · 2012-03-23T19:06:24.634Z · LW(p) · GW(p)

That is one idea for hacking friendliness: "Become the AI we would make if there were no existential threats, we didn't have the hardware to implement it for a few thousand years, and flaming letters appeared on the moon saying 'thou shalt focus on designing Friendly AI'."

Haven't bothered typing it out before because it falls into the reference class of trying to cheat on FAI, which is always a bad idea, but it seemed relevant here.

comment by Dmytry · 2012-03-22T12:27:43.693Z · LW(p) · GW(p)

Well, it wouldn't be AI, it'd be simply I, as in "I think, therefore I am," but not stopping at the period.

edit: I mean, look at the SIAI; what exactly do they do right now that they couldn't have done in ancient Greece? If we could reflect on our minds better, and if our minds are physical in nature, then the idea of a thinking machine would have been readily apparent, yet the microchips would still have required a very, very long time.

Replies from: roystgnr
comment by roystgnr · 2012-03-24T02:00:13.900Z · LW(p) · GW(p)

By this logic we'd have discovered all there is to know about math (including computer science) by Roman times at the latest.

Replies from: roystgnr
comment by roystgnr · 2012-03-24T22:35:05.593Z · LW(p) · GW(p)

Would anyone downvoting me care to explain why they disagree? Look at Newton or Turing: what exactly did they do which couldn't have been done in ancient Greece, and why is there no analogous counterexample for the SIAI?

Isn't "why weren't the Greeks working on Calculus" a far less silly question than "why weren't the ancient Greeks working on AI"?

comment by billswift · 2012-03-22T13:24:54.164Z · LW(p) · GW(p)

Your scenario sounds a lot like a more intelligent (the scenario, not the AI) version of the media trope of the military AI that goes out of control, like Colossus or Skynet. It would help explain why they are always so limited while still dangerous.

comment by DanielLC · 2012-03-22T19:54:58.248Z · LW(p) · GW(p)

You're assuming it has super-human programming ability. It very well might. If it's anywhere near our level on average, it probably greatly exceeds it in places. It's not necessary though, and if it doesn't have it, that would explain why it can't make itself more intelligent.

If it can't program, it would be at the mercy of humans. I suspect it would get a job in a field where its ability exceeds a human's. It would quickly dominate all such fields and make a large, but finite, amount of money. At this point it branches again. If it's good at politics, it will quickly take over the world. If not, it will be at the mercy of humanity as a whole, and its power will be limited.

It's possible that at some point it would create some von Neumann machine and amass an army somewhere where we don't notice it until it's too late, like on another planet, and eventually enslave humanity. It couldn't flat-out destroy humans, because there are things they're better at, but it should be able to enslave them.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-03-22T22:22:49.581Z · LW(p) · GW(p)

You're assuming it has super-human programming ability.

I didn't mean to; I only meant to assume it has access to more resources than humans do because of its substrate, which would make up for its lack of coding skill. I'm thinking it could maybe reliably write smallish functions but not complex quines, or something.

Replies from: DanielLC
comment by DanielLC · 2012-03-23T04:38:36.534Z · LW(p) · GW(p)

I don't think having access to more resources will do a whole lot. Imagine going from having one monkey trying to write the works of Shakespeare to thousands of them. It may allow it to do stuff like that when slightly sub-human, but I suspect that its ability would be significantly different from a human's.

Replies from: Dmytry
comment by Dmytry · 2012-03-23T06:03:32.304Z · LW(p) · GW(p)

I wonder what happens if you can graft monkey brains together. As far as the evidence goes, Shakespeare, or Einstein, is just a bigger monkey with an especially big brain.

Replies from: DanielLC
comment by DanielLC · 2012-03-23T16:35:57.570Z · LW(p) · GW(p)

If it were that easy, it wouldn't be semi-general, now would it?

In any case, I don't see why adding computing power would be much different from adding time. In fact, I'd expect adding time to be better. Anything you can do with two processors in parallel you can do with one processor in twice the time by doing one thread after the other. The reverse isn't true.
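To make the "one processor, twice the time" point concrete, here is a minimal, purely illustrative Python sketch (the worker functions are invented, not anything from the thread): two "parallel" workers are simulated on a single processor by simply running their steps one after the other. The converse, splitting an inherently sequential chain of steps across two processors, has no such general recipe.

    # Hypothetical illustration: simulating two parallel workers sequentially.
    def worker_a(x):
        return x * x      # one unit of work

    def worker_b(x):
        return x + 1      # another unit of work

    def simulate_two_workers_on_one_processor(inputs):
        # One processor, roughly twice the time: run each worker's step in turn.
        results = []
        for x in inputs:
            results.append((worker_a(x), worker_b(x)))
        return results

    print(simulate_two_workers_on_one_processor([1, 2, 3]))  # [(1, 2), (4, 3), (9, 4)]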

Replies from: Dmytry
comment by Dmytry · 2012-03-23T16:44:52.923Z · LW(p) · GW(p)

Twice the time, and twice the space. In any case, it doesn't work very well like this for brains, where for some unknown reason you fail at remembering more than ~7 objects in short-term memory. Cut it to 3, and you may not be able to think many thoughts; add some small tweaks, and you may be as smart as Einstein.