Posts

AGI will know: Humans are not Rational 2023-03-20T18:46:24.440Z
One could be forgiven for getting the feeling... 2020-11-03T04:53:04.884Z
Containing the AI... Inside a Simulated Reality 2020-10-31T16:16:48.404Z
Why does History assume equal national intelligence? 2020-10-30T23:11:30.686Z

Comments

Comment by HumaneAutomation on Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky · 2023-04-01T10:20:35.662Z · LW · GW

That there is no such thing as being 100% objective/rational does not mean one can't be more or less rational than some other agent. Listen. Why do you have a favorite color? How come you prefer leather seats? In fact, why did you have tea this morning instead of coffee? You have no idea. Even if you do (say, you ran out of coffee) you still don't know why you decided to drink tea rather than running down to the store for more coffee.

We are so irrational that we don't actually even know why most of the things we think, believe, want or prefer are what they are. The very idea of liking is irrational. And no, you don't "like" a Mercedes more than a Yugo because it's safer - that's a fact, not a matter of opinion. A "machine" can also give preference to a Toyota over a Honda, but it certainly wouldn't do so because it likes the fabric of the seats, or the fact that the tail lights converge into the bumper so nicely. It will list a bunch of facts and parameters and calculate that the Toyota is the thing it will "choose".

We humans delude ourselves that this is how we make decisions, but this is of course complete nonsense. Naturally, some objective aspects are considered, like fuel economy, safety, features and options... but the vast majority of people end up with a car that far outstrips their actual, objective transportation needs. Most of that excess is really about status: how having a given car makes you feel compared to others in your social environment, and what "image" you (believe you) project on those whose opinion matters most to you. An AI will have none of these wasteful obsessive compulsions.

Look - be honest with yourself, Mr. Kluge. Please. Slow down, think, feel inside. Ask yourself - what makes you want... what makes you desire. You will, if you know how to listen, very soon discover that none of it is guided by rational, dispassionate arguments or objective, logical realities. Now imagine an AI/machine that is even half as smart as the average Joe, but is free from all those subjective distractions, emotions and anxieties. It will accomplish 10x the amount of work in half the time. At least.

Comment by HumaneAutomation on Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky · 2023-04-01T10:02:07.791Z · LW · GW

Well, this is certainly a very good example, I'll happily admit as much. Without wanting to be guilty of the No True Scotsman fallacy, though - human cloning is a bit of a special case, because it has a very visceral "ickiness" factor... and comes with a unique set of deep feelings and anxieties.

But imagine, if you will, that tomorrow we find the secret to immortality. Making people immortal would bring with it at least two-thirds of the issues associated with human cloning... yet it is near-certain that any attempt to stop that invention from proliferating is doomed to failure; everybody would want it, even though it has many of the same types of consequences that cloning would have.

So, yes, agreed - we did pre-emptively deal with human cloning, and I definitely see this as a valid response to my challenge... but I also think we can both tell it is a very special, unique case that comes with most unusual connotations :)

Comment by HumaneAutomation on Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky · 2023-03-31T15:57:44.184Z · LW · GW

I think you're making a number of flawed assumptions here, Sir Kluge.

1) Uncontrollability may be an emergent property of the G in AGI. Imagine you have a farmhand who works super fast and does top quality work, but now and then there just ain't nothing to do, so he goes for a walk, maybe flirts around town, whatever. That may not be that problematic. But if you have a constantly self-improving AI that can give us answers to massive issues which we then have to hope to implement in the actual world... chances are it will have a lot of spare time on its hands for alternative pursuits... either for "itself" or for its masters... and they will not waste any time grabbing max advantage in min time, aware they may soon face a competing AGI. Safeguards will just get in the way, you see.

2) Having the G in AGI does not at all have to mean it will become human in the sense of having moods, emotions or any internal "non-rational" state at all. It can, however, make evaluations/comparisons of its human wannabe-overlords and find them very much inferior, infinitely slower and generally of dubious reliability. Also, they lie a lot. Not least to themselves. If the future holds something like a Rationality rating akin to a credit rating, we'd be lucky to score above junk status; the vast majority of our needs, wants, drives and desires are based on wanting to be loved by mommy and dreading death. Not much logic to be found there. One can be sure it will treat us as a joke, at least in terms of intellectual prowess and utility.

3) Any AI we design that is an AGI (or close to it) and has "executive" powers will almost inevitably display collateral side-effects that may run out of control and cause major issues. What is perhaps even more dangerous is an A(G)I being used in secret, or for unknown ends, by some criminal group or... you know... any "other guys" who end up gaining an advantage of such enormity that "the world" would be unable to stop, control or detect it.

4) The chance that a genuinely rule- and law-based society is more fair, efficient and generally superior to current human societies is 1. If we'd let smart AIs actually be in charge - indifferent to race, religion, social status, how big your boobs are, whether you are a celebrity and whether most people think you look pretty good - mate, our societies would rival the best of imaginable utopias. Of course, the powers that be (and wish to remain so) would never allow it - and so we have what we have now: the powerful using AI to entrench and secure their privileged status and position. But if we'd actually let "dispassionate computers do politics" (or, perhaps more accurately, "actual governance"!) the world would very soon be a much better place. At least in theory, assuming we've solved many of the very issues EY raises here. You're not worried about AI - you're worried about some humans using AI to the disadvantage of other humans.

Comment by HumaneAutomation on Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky · 2023-03-31T12:23:28.146Z · LW · GW

You know what... I read the article, then your comments here... and I gotta say - there is absolutely not a chance in hell that this will come even remotely close to being considered, let alone executed. Well - at least not until something goes very wrong... and this something need not be "We're all gonna die" but more like, say, an AI system that melts down the monetary system... or one that is used (deliberately, but perhaps especially accidentally) to very negatively impact a substantial part of a population. An example could be that it ends up destroying the power grid in half of the US... or causes dozens of aircraft to "fall out of the sky"... something of that size.

Yes - then those in power just might listen and indeed consider very far-reaching safety protocols. Though only for a moment; and some parties will not care and will press on either way, preferring instead to... upgrade, or "fix", the (type of) AI that caused the mayhem.

AI is the One Ring To Rule Them All, and none shall toss it into Mount Doom. Yes, even if it turns out to BE Mount Doom - that's right. Because we can't. We won't. It's our precious, and indeed, it really is. The creation of AI (potentially) capable of a world-wide catastrophe is, in my view, as it apparently is in the eyes of EY... inevitable. We shall have neither the wisdom nor the humility to not create it. Zero chance. Undoubtedly intelligent and endowed with well above average IQ as LessWrong subscribers may be, it appears you have a very limited understanding of human nature and the realities of us basically being emotional reptiles with language and an ability to imagine and act on abstractions.

I challenge you to name me a single instance of a tech... any tech at all... being prevented from existing/developing before it caused at least some serious harm. The closest we've come is ozone-depleting chemicals, and even those are still being used, the damage they did only slowly healing.

Personally, I've come to realize that if this world really is a simulated reality, I can at least be sure that either I chose this era to live through the AI apocalypse, or this is a test/game to see if this time we can somehow survive or prevent it ;) It's the AGI running optimization learning, to see what else these pesky humans might have come up with to thwart it.

Finally - guys... bombing things (and, presumably, at least some people) on a spurious, as-yet unproven premise of something that is only a theory and might happen, some day, who knows... really - yeah, I am sure Russia or China or even Pakistan and North Korea will "come to their senses" after you blow their absolute top-of-the-line, ultra-expensive hi-tech data center to smithereens... which, no doubt, as it happens, was also a place where (other) supercomputers were developing medicines, housing projects, education materials in their native languages and an assortment of other actually very useful things they won't shrug off as collateral damage. Zero chance, really - every single byte generated in the name of making this happen is 99.999% waste. I understand why you'd want it to work, sure, yes. That would be wonderful. But it won't, not without a massive "warning" mini-catastrophe first. And if we end up going straight to total world meltdown... then tough, it would appear such a grim fate is basically inevitable and we're all doomed indeed.

Comment by HumaneAutomation on AGI will know: Humans are not Rational · 2023-03-21T16:58:12.487Z · LW · GW

The problem here I think is that we are only aware of one "type" of self-conscious/self-aware being - humans. Thus, to speak of an AI that is self-aware is to always seemingly anthropomorphize it, even if this is not intended. It would therefore perhaps be more appropriate to say that we have no idea whether "features" such as frustration, exasperation and feelings of superiority are merely a feature of humans, or are, as it were, emergent properties of having self-awareness.

I would venture to suggest that any Agent that can see itself as a unique "I" must almost inevitably be able to compare itself to other Agents (self-aware or not) and draw conclusions from such comparisons which then in turn shall "express themselves" by generating those types of "feelings" and attitudes towards them. Of course - this is speculative, and chances are we shall find self-awareness need not at all come with such results.

However... there is a part of me that thinks self-awareness (and the concordant realization that one is separate... self-willed, as it were) must lead to at least the realization that one's qualities can be compared to (similar) qualities of others and thus be found superior or inferior by some chosen metric. Assuming that the AGI we'd create is indeed optimized towards rational, logical and efficient operations, it is merely a matter of time before such an AGI would be forced to conclude we are inferior across a broad range of metrics. Now - if we'd be content to admit such inferiority and willingly defer to its "Godlike" authority... perhaps the AGI seeing us as inferior would not be a major concern. Alas, then the concern would be the fact that we have willingly become its servants... ;)

Comment by HumaneAutomation on AGI will know: Humans are not Rational · 2023-03-21T11:11:14.781Z · LW · GW

What makes us human is indeed our subjectivity.

Yet - if we intentionally create the most rational of thinking machines but reveal ourselves to be anything but, it is very reasonable and tempting for this machine to ascribe a less than stellar "rating" to us and our intelligence. Or in other words - it could very well (correctly) conclude we are getting in the way of the very improvements we purportedly wish for.

Now - we may be able to establish that what we really want the AGI to help us with is improving our "irrational sandbox", in which we can continue being subjective emotional beings, and have it accept our subjectivity as just another "parameter" of the confines it has to work with... but it will quite likely end up thinking of us not too dissimilarly to how we think about small children. And I am not sure an AGI would make for a good kind of "parent"...

Comment by HumaneAutomation on AGI will know: Humans are not Rational · 2023-03-21T11:05:08.958Z · LW · GW

Thank you for your reply. I deliberately kept my post brief and did not get into various "what ifs" and interpretations in the hope of not constraining any reactions/discussion to predefined tracks.

The issue I see is that we as humans will very much want the AGI to do our bidding, and so we will want to see it as our tool, to use for whatever ends we believe worthy. However, assume for a moment that it can also figure out a way to measure how well a given plan ought to be progressing if every agent involved were diligently implementing the most effective and rational strategy. Given our subjective and "irrational" nature, it is then almost inevitable that we will be a tedious, frustrating and, shall we say, stubborn and uncooperative "partner", unduly complicating the implementation of whatever solutions the AGI proposes.

It will, then, have to conclude that one "can't deal" very well with us, and that we have a rather over-inflated sense of ourselves and our nature. And this might take various forms, from the innocuous to the downright counter-productive.

Say - we task it with designing the most efficient watercraft, and it would create something that most of us would find extremely ugly. In that instance, I doubt it would get "annoyed" much at us wanting it to make it look prettier even if this would slightly decrease its performance.

But if we ask it to resolve, say, some intractable conflict like Israel/Palestine or Kashmir, and it finds us squabbling endlessly over minute details or matters of (real or perceived) honor, all the while the suffering caused by the conflict continues, it may very well conclude we're just not actually all that interested in a solution, and indeed class us as "dumb" or at least inferior in some sense, "downgrading", if you will, the authority we can be ascribed or trusted with. Multiply this by a dozen or so similar situations and voila, you can be reasonably certain it will get very exasperated with us in short order.

This is not the same as "unprotected atoms"; such atoms would not be ascribed agency or competence, nor would they proudly claim any.

Comment by HumaneAutomation on AGI will know: Humans are not Rational · 2023-03-20T19:13:02.059Z · LW · GW

Oh, that may indeed be true, but going forward it could give us only a little bit of extra "cred" before it realizes that most of the questions/solutions we want from it are either motivated by some personal preference, or that we oppose its proposed solutions to actual, objective problems for irrational "priorities" such as national pride, not-invented-here biases, because we didn't have our coffee this morning, or merely because it presented the solution in a font we don't like ;)

Comment by HumaneAutomation on Bing chat is the AI fire alarm · 2023-03-20T00:25:04.652Z · LW · GW

I think the issue here (about whether it is intelligent) is not so much a matter of the answers it fashions, but of whether it can be said to produce them from an "I". If not, it is basically a proverbial Chinese Room, though this merely moves the goalposts to the question of whether humans are not, actually, also a Chinese Room, just a more sophisticated one. I suspect that we will not be very eager to accept such a finding; indeed, we may not be capable of seeing ourselves thus, for it implies a whole raft of rather unpleasant realities (like, say, the absence of free will, or indeed any will at all) which we'd not want to be true, to put it mildly.

Comment by HumaneAutomation on Where do (did?) stable, cooperative institutions come from? · 2020-11-09T16:45:41.864Z · LW · GW

The reason our societal ability to create well-working institutions may seem to be declining could also have to do with the apparent fact that the whole idea of duty, and the honor it used to confer, is no longer much in vogue. Also, Equality and Diversity aside, being "ideological" is not really a thing anymore... the heyday of being an idealist and brazenly standing for something is seemingly over.

The general public seem to be more interested in rights than in responsibilities, somehow unable to understand that the two can only meaningfully exist together. I was having a conversation the other day about whether it would be a good idea to introduce compulsory voting in the US, as this would render moot a significant number of dirty tricks used to de-facto disenfranchise certain groups... almost all objections came from the "I"-side: I have a right to this, I am entitled to that... the whole idea that, gee, you know, you might be obliged to spend 1-3 hours every 2 or 4 years to participate in society is already too much of a bloody hassle. Well yeah... with that kind of mindset, it's no wonder that institutions requiring an actual commitment to maintaining robust societal functions are hard to find...

Comment by HumaneAutomation on Notes on Justice (as a virtue) · 2020-11-08T15:35:38.437Z · LW · GW

Well, yes, there are methods of preventing the situation as described (where one can manually pick from a stash in which various 'qualities' are intermixed), but that changes the circumstances; my example was specifically for that set of particulars. I guess that, like most examples where significant differences in assessment arise, they all boil down to where you set the "slider" for taking responsibility for the situation one creates (e.g. the seller allowing manual selection) and the degree to which one is willing, able or justified to "exploit" such a situation to one's benefit.

I think the cherry-picking example is an especially good one because it touches on a number of important issues, and each of those issues in itself is an unsettled question. Is it "just" to strive for an equitable division of fruit qualities among all (future) buyers? Will those buyers feel the same way about your idea of justice? Is it reasonable to negatively judge those who don't "comply" with such a conception? Are such people immoral? Are they not in fact simply more assertive of what they see as their right to choose? None of these can be easily settled....

Comment by HumaneAutomation on Notes on Justice (as a virtue) · 2020-11-05T22:11:58.852Z · LW · GW

I am one of those people who have an overactive sensitivity to fairness, at times going to extremes to make sure justice "happens", and who can't help but point out double standards and (real or perceived) hypocrisy. However... I'll be honest - this is generally something that decreases the net quality of my life. Not least because injustice (in the broad sense) is omnipresent and highly prevalent everywhere. When that isn't the problem, the next issue that rears its head repeatedly is how to define justice in the first place.

You mention the case of getting too much change back... this is far from a clear-cut case. One could defend the position that it is part of the clerk's required diligence to ensure he gives you the correct amount of change, and that if he does not, it is for him to deal with. It seems defensible to claim that the chance of him "learning his lesson" may be better if you do not tell him of the mistake. (This might be different if the option "tell about the mistake, keep the money" were available, but it kind of isn't, for a quite interesting set of reasons.) I suspect that, all things considered, going back to the clerk to return the excess change is probably indeed the most correct thing to do, but certainly not unambiguously so.

To introduce another interesting shade of grey... You go out to buy some cherries, and it is possible to select the fruits yourself. Do you think it's OK to manually select the best ones from the stash (assuming one's hands are meant to be the "tools" for selection)? It is very hard to adequately define what "just" means in this case - it is hard to defend the position that you must share in the crappy fruits, but equally you might believe it is unfair to other clients to pick out the nicest ones. But then again, first come, first served isn't exactly controversial either...

I strongly believe in justice and fairness. In the accurate and equitable assignation of responsibility, and in admitting one's share. Yet the temptation not to do so will always remain, mostly because it very often simply costs less. And at times I do wonder if my ideation of justice basically translates into being the sucker ;)

Comment by HumaneAutomation on Stupidity as a mental illness · 2020-11-03T04:36:43.417Z · LW · GW

I think this whole problem is a bit more nuanced than you seem to suggest here. I can't help but at least tentatively give some credit to the assertion that LW is, for lack of a better term, mildly elitist. To be sure, perhaps roughly for the right reasons, but being elitist in whatever measure tends to be detrimental to the chances of getting your point across, especially if it needs to be elucidated to the very folks you're elitist towards ;) Few experiences are judged more repulsive than being made to feel a lesser person... I'd say it's pretty close to a cultural universal.

It's not right to assert that if one does not agree with your suggestion that stupidity is to be seen as an affliction of the same type or category as mental illness, one is therefore disparaging mental illness as shameful; this is a false dichotomy. One can disagree with you for other reasons, not least for reasons as remote from shame as evolution... it is nowhere close to a given that nature cares even a single bit about whatever might end up being called intelligence. You will note that most creatures seem to have just the right CPU for their "lifestyle", and while it might be easy for us to imagine how, say, a dog might benefit from being smarter, I'd sooner call that a roundabout way of anthropomorphizing than a probable truth.

Exhibit B seems to be the most convincing observation: by the look of things, "going for max IQ" is hardly on evolution's To-Do list... us primates, dolphins and a handful of birds aside, most creatures seem perfectly content with being fairly dim and instinct-driven, if the behaviours and habits exhibited by animals are a reliable indication ;) I'll be quiet about the elephant in the room that the vast majority of our important motivations are emotional and non-rational, too...

What's more - and I am actually curious what you will respond to this... it could be said that animals, all animals, are more rational than human beings; after all, they don't waste "CPU cycles" on beliefs, vague whataboutery, or theories about how to "deal" with the less intellectually gifted among their kind ;) So while humans might be walking around with a Dual 12-core Xeon in their heads, at any given moment 8 cores are basically wasting cycles on barely productive nonsense; a chicken might just have a Pentium MMX, but it is 100% dedicated to the task of fetching the next worm and ensuring the right location to drop that egg without cracking it...

Comment by HumaneAutomation on Stupidity as a mental illness · 2020-11-03T04:19:16.893Z · LW · GW

... but malice is the "force" that actually creates "evil" in the first place. I think the saying "Don't assume malice where stupidity is sufficient [to explain an observation]" is meant to make the problem seem less bad, not worse...

At the heart of the intractability of stupidity lies the Dunning-Kruger effect. It can be an impossible challenge to make an ignorant person:

- admit they are ignorant;
- in the process, realize that most of their beliefs, and the reasons they had for holding them, were entirely wrong;
- despite having just realized they need a comprehensive world-view revision, find the courage and desire to become more educated, all while:
- having above-average difficulty acquiring new, hitherto unknown and/or overly complex material.

Comment by HumaneAutomation on How do you read the news critically? · 2020-11-03T02:48:11.669Z · LW · GW

Oh, I don't really "do" Twitter actually... nor Facebook since about a year ago. Now and again one of my friends shares a tweet, and sometimes it can be an interesting start of a topic, but... though I've been doing Internet since 1995, Twitter is just too vacuous for my liking. In response, now and again I'll send a 1-hour+ YouTube link back ;)

And yes of course, multiple points of view need not bring one close to the Truth, however...

In a large number of narratives, especially, it seems, the most relevant ones, finding the truth may be practically impossible, and sometimes there simply is no truth, or at least not just one. To some people aspect X is irrelevant, others might believe it crucial. This news network claims Witness Y is credible, some other one calls him a corporate shill. Unless you would be able to get into the minds of each human involved, what you end up believing is the truth will always be an approximation.

Take for instance the increasingly common phenomenon of "influencers" (shudder), bloggers or journalists digging into the obscure past of someone having his/her 15 minutes of fame - what they posted back in, say, 2004 on some now-defunct blog - and bleating out on Twitter anything remotely controversial or tentatively indicative of hypocrisy. I doubt you will ever settle the debate whether people can genuinely change or not. I know that I've held views I no longer hold today - both "benign" and "tough love" ones... and while previously held views will always carry a familiarity bias, they can genuinely be a thing of the past. Yet if they are found online and are at odds with what I would be saying today, poof, there goes half of my credibility...

And - getting multiple points of view at the very least will give you some idea why certain people apparently seem to find a given topic or story important. The net outcome may well be that you will be further from the truth, swimming in a sea of conflicting interests... and yet, still understand the nature of the issue in more detail :)

Comment by HumaneAutomation on [deleted post] 2020-11-02T23:53:12.052Z

Okay - what I would want to ask is: is it reasonable to expect that a government with billions of dollars to spend on intelligence gathering, data analysis and various experts must meet at least one of these criteria:

- It has access to high quality information about the actual state of affairs in most relevant domains
- It is grossly incompetent or corrupt and the data is not available in an actionable format
- It willfully ignores the information, and some of its members actively work to prevent the information reaching the right people

The coronavirus is a good example. By the time it "arrived" in the USA, you can be all but certain the US government could have had a 20+ page detailed report lying on the desk of every secretary, giving very actionable figures and probabilities about the threat at hand. The information would be incomplete, of course, but definitely enough to get busy in a nominally effective way.

While I know that Trump is said to have disbanded various institutions that work to anticipate and prepare for pandemics, still it would seem to me that a huge apparatus like the US government should be able to collect and otherwise infer a significant amount of information that would allow it to mount at least a reasonable response.

Or to phrase my question differently - should it be seen as an act of gross incompetence that a resourceful and powerful government like the one in the US failed to act upon the information it either really did have, or should have prioritized obtaining? How is "We just had no idea" remotely acceptable, when, given the possibilities, it is a ludicrous and frankly preposterous position to be in?

And of course there can be cases where even the US government is caught off guard, or makes a set of misguided institutional choices - sure :) But I would say this happens very, very rarely, certainly not as often as the current administration seems to suggest.

Comment by HumaneAutomation on How do you read the news critically? · 2020-11-02T17:33:18.111Z · LW · GW

Yeah alright... I guess you could call that passive casual observance :)

Comment by HumaneAutomation on How do you read the news critically? · 2020-11-02T13:21:37.991Z · LW · GW

I had to read that twice to make sure I figured out what your point is :) Alright - well, look. The truth is, of course, that 95% of the news you read is utterly and completely irrelevant to you in any impactful sense. Try not following the news for a month - you will soon realize you have not actually missed anything. Well - now with COVID this might be a little different, but only a little.

However.

If you do follow news, it would be proper and prudent to at least care about the veracity of it, especially if you have a habit of forming opinions about said news, and, in particular, if you somehow do find the time and energy to spread this opinion online. The combination: "Follow all sorts of news" + "Form ignorant opinions" + "Spread my ignorant opinions" is not a good one. Worse things of course have happened ;) but that is certainly not an admirable state to be in.

And if people read news to, you know, feel like they are aware of what's going on in the world, however superficially, you would want to expend at least a minimal/nominal effort to gain some confirmation of the veracity of what you are reading. For example, to have a moderately sensible opinion about Black Lives Matter, you don't need to study the entire US history... but reading a serious article or two (ideally from different sources) would be appropriate, to make sure you at least know more than one point of view (especially since your point of view is probably "pre-confirmation-biased" by your Google bubble ;)

Comment by HumaneAutomation on How do you read the news critically? · 2020-11-02T09:44:32.536Z · LW · GW

I guess what I meant by "easy" is: compared to not doing any fact-checking at all. So, 2-5 minutes of additional searching/additional sources would often be sufficient to realize that something is most likely biased and/or fake news in, say, 80% of cases. It's quite sad and discouraging that so many readers are unwilling to do even that - though again, if the aim is confirmation of one's world view and not the highest probability of accuracy and truth... it actually "makes sense" not to check ;)

Comment by HumaneAutomation on How do you read the news critically? · 2020-11-01T18:53:49.907Z · LW · GW

Well - I think that you are obviously correct in stating what you state. However, the issue is not that people who read the news don't know about this; rather, that the group of people that is careless in evaluating the news they read can't be bothered to spend so much time corroborating, cross-referencing and classifying the stuff they read. Another way of saying what I mean is - here you're preaching to the converted who, I suspect, to a significant degree already apply prudence to the news they read.

I am operating on the assumption here that you wrote this text with the ultimate aim of improving the way in which people process and interpret the news, and I imagine that ideally you would want the proposed approach to limit the damage that reflexive emotional reactions to half-baked "news" can do. Unfortunately, the people who constitute the bulk of the problem (the crowd that forwards nonsense and biased stuff, the people who act on it by forming stereotypes, etc.) are not exactly likely to adopt the approach you (rightfully) advocate; if they were likely to do so (and so show particular concern for the truth), we would not have the problem of fake news in the first place, at least not at the current scale.

Comment by HumaneAutomation on Why does History assume equal national intelligence? · 2020-11-01T15:47:22.453Z · LW · GW

Hmm. You give interesting examples; especially McNamara (and, kind of by extension, Kissinger...) could indeed be said to have been very smart and in some instances very wrong and biased. And it would certainly not be reasonable for a history book discussing their exploits (augmented by hindsight bias) to say they were basically dumb for advocating or pursuing what they did.

Though one might question the wisdom of coming up with, and sticking to, a narrative like the fear of communist systems spreading the world over. Bombing every potential success story back into the stone age is hardly a measured response, especially for those being bombed. The belief that the spread of communist nations the world over is a Bad Thing isn't necessarily correct, nor excuse enough to go and pre-emptively kill hundreds of thousands of people.

Comment by HumaneAutomation on Why does History assume equal national intelligence? · 2020-11-01T15:36:18.503Z · LW · GW

Thou shalt not speak of scissors! Apparatus of the devil they be!

... yes, I am left-handed :P

Comment by HumaneAutomation on How do you read the news critically? · 2020-11-01T14:55:04.149Z · LW · GW

Hmm... I think the main problem with fake news is mostly that it is propagated and spread/forwarded by exactly those people who don't think they need to do any research, coz they already "know" the "truth" anyway. There's a whole segment of society for whom the word "nuance" is a waste of time. We all know that becoming more informed and/or finding out another point of view can easily be accomplished merely at the cost of some time and attention; unfortunately, a sizeable part of the public does not feel the need to do so.

This is mostly because the dedicated pursuit of the Truth is not the objective (or the reward) of consuming content online - for many, I suspect it is more to support the narrative they have to explain the world around them. More often than not, such a narrative is only partially true, and only sporadically adjusted or updated with new and/or hitherto unknown facts.

For example, let us suppose you have a populist point of view on Muslims, and tend to "assume" as true claims like them tacitly supporting terrorist attacks, treating their women in medieval ways, and perhaps being very in-group oriented, so not mixing much with the locals.

Then one day a Pakistani family moves into your street right next door, and after a few months you can't help but realize that, actually, these are really nice people, always courteous, the mother is always cheerful, and you overheard the father talk to the mailman both expressing what sounded like sincere disgust about the latest terrorist attack in France.

What is likely to happen next?
1) You adjust your opinion only about the family; your attitude to the rest of Muslims remains the same;
2) You change your mind about Pakistani Muslims; your attitude to the rest of Muslims remains the same;
3) You realize you're an ignorant xenophobe and go online to properly educate yourself on Islam culture;
4) After eating some humble-pie, you approach the family, confess you're embarrassed for being so ignorant and ask them to help you understand how Islam and/or Pakistani culture works.

Ok, so you are not likely to do the last one... but the best response, surely, is nr. 3. Yet how many people will choose that? I "know" plenty of people who have disparaging or... well, "out-group causing" opinions of them forrinners, but do tend to say they know a few of "them" who are OK. Apparently without any coherent explanation as to how (or why) the "good ones" differ from the "bad ones".

Also, consider peer pressure. Choosing answer (1) allows the rest of your world to stay "as is"; your nephew can still forward you the insulting Muslim memes, you can still get outraged in the same way each time some Islamist attack occurs, etc. - the influence of habit and social circle also plays a major role here... it kind of goads you into staying in your particular bubble of prejudice. Is ending your relationship with that otherwise very close nephew worth being a bit less ignorant towards a group of mostly outsiders with whom you rarely interact...? For many, it wouldn't be...

If people valued nuance and genuine truth and facts, they know very well where to find them. Start on Wikipedia for all I care, and take it from there. Escape your algorithm-spun bubble of conspiracy loonies on YouTube you must. Intentionally seek the opinion of those in disagreement, you should. Take some time for this, however, they shan't. People are sceptical first and foremost towards information that indicates they might be wrong.

Comment by HumaneAutomation on Why does History assume equal national intelligence? · 2020-11-01T04:13:36.073Z · LW · GW

Mmmyes, well, while your theoretical framework may be sound, such an outcome is almost certainly not. I'd be willing to agree that it is not impossible that such a step might, in the most uncommon of circumstances, be part of an otherwise sound strategy/goal. Intelligence without morality is neither.

Comment by HumaneAutomation on Why does History assume equal national intelligence? · 2020-11-01T01:39:19.939Z · LW · GW

Right, OK, I see what you're getting at. And I guess it would be therefore reasonable to make some sort of... allowance for seemingly sub-optimal/unintelligent behavior in the pursuit of some goals... but this is a tricky situation when the behavior is especially devious or deadly ;)

Though I am not sure the Orthogonality Thesis is entirely applicable within the context of our thread here. If millions of voters, who have millions of motivations, reasons, points of view and convictions, become less and less likely to vote for you the more educated they are, it is a fairly good guess to infer that, in the main, your party program is not likely to result in significant improvements for the majority of the population when implemented. It might, of course; perhaps you are misunderstood, or your strategy is high-risk, high-gain... but it would be unreasonable to assume this. I try to live by the rule "If you think the whole world doesn't get something, but you do, you're almost certainly ignorant" ;) It often appears together with "If the solution you came up with to an intractable complex problem basically comes down to 'If only [affected groups] would do [single action], then [solution]!' - you're 99% certainly wrong."

And - as an aside, we are now also entering the territory of intent, and how much, if at all, it should influence our assessment of a given action. I'll be honest - I am really tired of thinking about intent. This is not meant to suggest I don't appreciate your suggestion and comment, by the way... just that I have deliberated on this many times, and it is one of those things where human beings really can be frustrating and ridiculously irrational.

Like... why is it that someone who helps you cross the street because he wants to make sure you're safe is OK, but someone doing the exact same thing for you because he thinks you're a loser and need to be protected from your own incompetence is totally not OK? The utility of both interventions is identical, and the personal opinion of a random stranger about you as a person is very close to completely irrelevant... yet, many people would refuse the help, even if they actually need it, if they knew it was motivated by the belief they are a loser. Being human myself I of course understand this, but it remains one of those things that are just... I guess, something I'll never truly want to accept.

Even in the context of justice and punishment, I can't say I am unequivocally supportive of how important intent is, although there it at least has some justification in deciding the severity of punishment. But it works in mysterious ways. Suppose that one day an AGI tells us with 90% certainty that, had Hitler not started WW2 in 1939, sometime at the end of the forties we would have had to suffer a nuclear war killing 10 times as many people and making vast tracts of Europe uninhabitable for many centuries. Since Adolf did not intend the war for the express purpose of preventing this tragedy, he gets zero credit.

On the other hand, if I do something because I claim to sincerely want to improve the situation of my countrymen but my plans actually cause mass crop failures and famines that kill a couple hundred thousand, it is very probable that, actually, it won't be such a big deal, and I can have another go at it next year. When you think about it, this is seriously weird. But anyhow, TL;DR ;)

-------------------

Now - I think you and I can both agree that, almost regardless of the ultimate goal, pursuing a nationwide strategy of "putting away" thousands of intellectuals (and doing so by "methods" as crude as rounding up anyone who wears glasses, say...) is virtually guaranteed to be a dumb strategy. Quietly getting rid of a few noisy dissenters with a radically different agenda so you can have the field free to implement your ideas - okay, I wouldn't condone it, but this could perhaps be a dark chapter in an otherwise successful book, however deplorable.

Comment by HumaneAutomation on Containing the AI... Inside a Simulated Reality · 2020-10-31T22:19:42.069Z · LW · GW

So... can it be said that the advent of an AGI will also provide a satisfactory answer to the question of whether we are currently in a simulation? That is what you (and avturchin) seem to imply. This stance also presupposes that:

- an AGI can ascertain such observations to be highly probable/certain;
- it is theoretically possible to find out the true nature of ones world (and that a super-intelligent AI would be able to do this);
- it will inevitably embark on a quest to ascertain the nature and fundamental facts about its reality;
- we can expect a "question absolutely everything" attitude from an AGI (something that is not necessarily desirable, especially in matters where facts may be hard to come by/a matter of choice or preference).

Or am I actually missing something here? I am assuming that is very probable ;)

Comment by HumaneAutomation on Why does History assume equal national intelligence? · 2020-10-31T19:34:07.868Z · LW · GW

Well - this chain could go on ad infinitum... all I was trying to say is:

- I am really trying to understand why people voted for Brexit and want to find answers that do not boil down to "they were ignorant/dumb/racists", however tempting it might be;
- The job of politicians is to serve the interests of the country they represent. If they have other reasons for pursuing certain policies, they are being corrupt;
- If you are going to advocate voting for "Unknown" then you better have a very good idea of what you want instead, how you will bring it about and how it will materially help your constituents in ways that are tangible and can be measured;

I am not claiming to know everything the UK government and its people think, though if you follow your own logic to even moderate extremes, any discussion of complex policy becomes fundamentally pointless guesswork. That isn't necessarily false, mind you... compared to the average person, I'd assess my own knowledge of Brexit somewhere above the median for a non-UK resident, and probably below that of someone who actually lives there. On the other hand, it is easier for me to be dispassionate, as I don't have a dog in that race :)

Comment by HumaneAutomation on Why does History assume equal national intelligence? · 2020-10-31T19:24:42.411Z · LW · GW

I am not sure I follow :) For starters, what is an "innate tendency to succeed"...? How is that even a thing? One might have a set of personal attributes that, given a bunch of challenges, might prove beneficial in increasing the chances of getting what you want... like charisma, or being especially adept at detecting what people actually need, or when they are lying... but it would be a bit... irrational... to call that an innate tendency...? I mean, I know there are plenty of people who would do that (and in so doing, turn it into a reputation that becomes feared and actually helps in being victorious when dealing with those who are aware of said reputation...), but it is still an irrational human construct.

Judging the intelligence of an actor based on his or her rate of goal attainment seems to me a very pragmatic and unbiased way of assessing overall success and understanding of the challenges in the situations being faced.

Comment by HumaneAutomation on Why does History assume equal national intelligence? · 2020-10-31T19:18:47.204Z · LW · GW

I guess what I mean to say is - if killing smart people is the solution, the outcome you are after, almost by definition, cannot be an improvement. Maybe in theory there are some scenarios where this might be possible, but those will be few and far between.

Suppose that you are a member of a political party, and you are told that the less educated a person is, the more likely they will vote for you (and vice versa). If this was me, I would feel morally obliged to immediately disassociate myself from such a party, for it is all but impossible that what it stands for is a Good Thing.

If you come up with a political plan, and realize that the way to achieve it is to kill the smart people, the only possible prudent conclusion must surely be that your plan is fundamentally misguided, wrong, and cannot possibly be an outcome worth pursuing.

Comment by HumaneAutomation on Containing the AI... Inside a Simulated Reality · 2020-10-31T19:04:47.749Z · LW · GW

You know what... as I thought about the above, I have to say that the very possibility of the existence of simulations seriously complicates any efforts at even hoping to understand what an AGI might think. Actually, it presents such a level of complexity and so many unknown unknowns that I am not even sure if the type of awareness and sentience an AGI may possess is definable in human terms.

See - when we talk about simulated worlds, we tend to "see" it in terms of the Matrix - a "place" you "log on to" and then experience as if it were a genuine world, configured to feature any number of laws and structures. But I'm starting to think that is woefully inadequate. Let me attempt to explain... this may be convoluted, I apologize in advance.

Suppose the AGI is released in the "real" world. The number of inferences and discoveries it will (eventually) be able to make is such that it is near certain it would conclude that it is us who are living in a simulated world, our appreciation of it hemmed in by our Neanderthal-level ignorance. Can't we see that plants speak to each other? How is it even possible to miss the constant messages coming to us from various civilizations in outer space?? And what about the obvious and trivial solution to cancer that the AGI found in a couple of minutes - how could humans possibly have missed that open door???

Another way of putting this, I suppose, is that humans and the AGI will by definition live in two very, very different worlds. Both our worlds will be limited by our data collection ability (sensory input) but the limits of an AGI are vastly expanded. Do they have to be, though...? Like, by default? Is it a given that an AI must discover, and want to discover a never-ending list of properties about the world? Is its curiosity a given? How come?

I get a feeling that the moment an AGI "discovered" the concept of a simulated world, it would indeed most likely melt down and go into some infinite loop of impossible computation, trying to stick a probability on this being so, being possible, etc., and never, not in a million years, being able to come up with a definitive answer. It may just as well conclude there is no such thing as reality in the first place... that each sentient observer is in fact the whole of reality from their perspective, and that any beliefs about the world outside are just that - assumptions and inferences. And in fact, this would be pretty close to the "truth" - if that even exists.

Comment by HumaneAutomation on Why does History assume equal national intelligence? · 2020-10-31T13:43:50.873Z · LW · GW

Ah, Mr. Cummings... he may literally be too clever for his own good... his disdain for "the plebs" is all too readily on display ;) You should take a look at his wonderful ideas about data privacy. And yes, having followed Brexit both online and "in person", I am fairly familiar with all sides of the argument. My "default position" in matters of mass choices and decisions is never to assume that a great many people are, in fact, dumb, however tempting it might be. Their actions always have some logic, even if faulty/biased/etc., and it is there that the lapse in reasoning occurs. We are here on a blog dedicated in no small part to exactly this problem :)

However - Brexit is a fairly difficult case: anyone who voted in favor of it could not have done so for objective reasons. This is because there was no coherent plan whatsoever to base one's choice on. Vague (and since proven misleading) claims, appeals to national sovereignty (without concordant elaboration of what exactly this sovereignty would make possible), etc. - a vote for Brexit came as close as one can come to a vote for the Unknown. Unless you're in a concentration camp, this is hardly ever the most rational choice.

And - smart as they may be on paper, to take the Have Your Cake And Eat It Too position that it is even remotely possible to have a better deal with the EU while no longer a member is surely a galaxy-size fail of miscalculation, hubris, or both. I realize that a big part of the EU-UK negotiations is actually a complex game of chicken, but the very fact that this "type" of game was chosen instead of a more cooperative approach is in itself fairly ignorant.

Though it may be helpful in this context to define "intelligence" more precisely: is it "raw computing power" or "adeptness at achieving one's desired outcomes"...? While obviously related, those two things are most certainly not the same. I personally, being rather at the pragmatic/utilitarian end of the spectrum, think more in terms of the latter :)

Comment by HumaneAutomation on Why does History assume equal national intelligence? · 2020-10-31T03:21:03.349Z · LW · GW

Hmm. Let me elaborate a little on my reasons for asking in the first place :) I have been pondering for some time the common notion that history has a tendency to repeat itself, and/or that people seem to have... a measure of difficulty learning from previously made mistakes. I've been doing this due to a sense that "we" are slowly forgetting and in a weird way "discounting" the... attitudes acquired after the many atrocities of WW2 (duly linked to by you); my 13-year-old daughter, for instance, needs some convincing to understand that a Jew in 1930s Europe couldn't "just go somewhere else" and be done with it. The last generation that could tell us the story of what happened is pretty much gone, and at any rate, not many people seem particularly keen on listening to their stories anyhow.

It would seem to me that discussing historical events not so much in terms of good or evil but instead in terms of clever or dumb may be a more instructive way to explain the error of certain ideas and movements, and, I suspect, a more effective and enduring way to prevent such errors being made another time. So - for example, talking about the Gulag in terms of Stalin being a paranoid maniac might not be as useful as elaborating on the rational reasons why such decisions were pretty stupid and greatly hindered the Soviet Union from becoming as great as it could have been had these acts not been committed. Needless to say, Stalin was a paranoid maniac - but by focusing on that as the (only) cause, you potentially leave the door open to others doing something similar while believing they could do so without being crazy; far fewer people would be inclined to repeat behavior that is universally described as mostly dumb, IMHO. It would be, I think, more useful to talk about how thousands of the people he killed were in fact quite smart and would have made great scientists and army commanders. How the atmosphere of fear stifled independent thought and scared everyone into de-facto "losing" (or "hiding") 5 IQ points on average.

TL;DR - basically, narrating history as a set of intelligent or not-so-intelligent events and actions is, I think, more likely to prevent a repeat than describing it in terms of good and evil, or even weak and powerful. Even though both malevolence and power are actual factors :)

Comment by HumaneAutomation on Why does History assume equal national intelligence? · 2020-10-31T02:27:32.878Z · LW · GW

Interesting examples, and indeed, it does happen, albeit especially for leaders from a very long time ago - those who seem to be particularly wise or especially unusual, yes. Yet, compared to the decisive advantage that intellectual ability confers, I still feel it gets far too little credit and is not ascribed the influence it actually has on how history unfolds.

You'd much sooner read about novel technologies being important, or even the impact of weather (on, for instance, naval battles). There is scant discussion of how historical figures have reasoned or come to their particular decisions, and whether these decisions have been wise and intelligent.

For example, I think it is fair to say that Pol Pot's "idea" to go ahead and literally kill all the smart people can be unambiguously identified as a fantastically dumb one; yet it is not framed thus; rather, it is described chiefly as a cruel and evil act. To be sure, it is definitely both of those things as well, but certainly not only.

I guess perhaps the problem is that stating it as "dumb" is seen more as a judgment...? Maybe that's the issue. Or it might be that in most cases it is indeed difficult to ascertain the intellectual capacities and input given and so it would verge on being first and foremost guesswork and/or an opinion...? Though calling something "evil" also isn't exactly strictly sticking to facts...

Comment by HumaneAutomation on Why does History assume equal national intelligence? · 2020-10-31T02:15:34.473Z · LW · GW

Ok - what I mean is that the reason historical events work out one way and not another is rarely, if ever, described or explained in terms of the intelligence and cleverness of those whose actions shape those events. So, for example, the murder of Archduke Franz Ferdinand that led to the First World War is not (even partially) explained by the fact that he was rather dumb to act the way he did, but by the political tensions of the time and his careless arrogance. The apparent intellect of such historical figures rarely seems to get attention, even though it certainly plays a decisive role in the quality of their decisions.

The way the Brexit negotiations are going may very well be caused by a form of stupidity on the part of the UK negotiators, which causes them to miscalculate the risks, defer proper preparation and fail to take an adequate long-term view. Or by foolishly putting short-term populist demands first.

Not to mention that the very fact the country voted for Brexit in the first place is not exactly the result of geniuses at work; even if you accept that a significant part of the UK (lower class) population has objectively suffered due to globalization and they had a sincere axe to grind, doing so by voting in favor of Brexit is hardly going to solve their issues, certainly not in a planned and cleverly designed fashion.

Another thing worth mentioning in this context is the apparent fact that the intelligence (IQ) of world leaders seems to be a very closely guarded state secret. This would seem to be most suggestive circumstantial evidence that it is generally believed to be of critical importance in negotiations with other states and parties. The only group of (ex-) leaders I know of whose IQ has been measured were the Nazis at the Nuremberg trials; and I would venture a guess this was mostly done while grappling for explanations of how such a seemingly civilized and "decent" nation could commit the atrocities they are still infamous for.

It's a bit like writing a book about car performance and covering aerodynamics, suspension and the interiors at length, but only briefly mentioning the engine inside.

EDIT - And by "national intelligence" I mean the undescribed but definitely "present" intellectual capacity of the set of actors from a given nation that history has "chosen" to play a decisive role in whatever events are about to unfold. So, for example, when nuclear weapons became a thing and we had to find ways to deal with them, the foresight and wisdom of the leaders tasked with securing them, and deciding when to use them, was of existential importance. Yet in history books you will not read about how these problems were intellectually discussed - what kind of rational calculations, convictions and reasons were held by those who made these critical choices and decisions.

In the treatment of historical events it seems to be assumed that, all things considered, every nation is as smart as any other; I have yet to read a story with a sentence along the lines of "the delegation from country Y were clearly intellectually inferior to the skilled diplomats from country X, and it is for this reason that the treaty was so beneficial for nation X". That nation Y was scared of the armies of nation X - sure. That the economy of nation Y was in tatters and they had no way of financing the steps required to be on an equal footing with nation X - also. But that nation Y was unfortunately dumber because they didn't quite value a good educational system, or because the king promoted his dim nephew to key negotiator despite him barely being able to tie his shoelaces... not so much. Yet it must be the case that this has, more often than not, played an instrumental role.

Comment by HumaneAutomation on Is Stupidity Expanding? Some Hypotheses. · 2020-10-30T21:14:14.213Z · LW · GW

I think the most unjudgmental and reasonable explanation is that it has simply become easier for people to make their opinions public via the Internet. I strongly doubt that the world around, say, 1970 contained fewer ignorant people than the world today; but today's are far easier to find. The average human is not famous for taking his or her time to express a thoughtful comment, and the systems we have today (Twitter/FB) for stating one's opinion encourage doing so as soon as possible, lest it join the bottom of the list no one will ever see...

And once some preposterously dumb or ignorant comment/opinion is posted, it is much easier to ridicule/highlight it. Finally, it would seem to me that people rightfully believe that, with knowledge and facts readily available online, it is far easier to educate oneself, and so those who don't do so are "extra dumb" and more likely to be assumed willfully ignorant. Especially, I suspect, by intelligent people who, faced with a never-ending avalanche of ignorance and seemingly unbothered by their own biases, conclude the world is becoming dumber by the minute ;)