Comments

Comment by bortels on Learning to get things right first time · 2015-06-05T02:50:14.730Z · LW · GW

Strawman?

"... idea for an indirect strategy to increase the likelihood of society acquiring robustly safe and beneficial AI." is what you said. I said preventing the creation of an unfriendly AI.

OK - valid point. Not the same.

I would say the items described will do nothing whatsoever to "increase the likelihood of society acquiring robustly safe and beneficial AI."

They are certainly of value in normal software development, but it seems increasingly likely, as time passes without a proper general AI actually being created, that such a task is far, far more difficult than anyone expected, and that if one does come into being, it will happen in a manner other than the typical software development process as we do things today. It will be an incremental process of change and refinement seeking a goal, is my guess. A great starting point might reduce the iterations a bit, but other than a head start toward the finish line, I cannot imagine it would affect the course much.

If we drop single cell organisms on a terraformed planet, and come back a hundred million years or so - we might well expect to find higher life forms evolved from it, but finding human beings is basically not gonna happen. If we repeat that - same general outcome (higher life forms), but wildly differing specifics. The initial state of the system ends up being largely unimportant - what matters is evolution, the ability to reproduce, mutate and adapt. Direction during that process could well guide it - but the exact configuration of the initial state (the exact type of organisms we used as a seed) is largely irrelevant.

re. Computer security - I actually do that for a living. Small security rant - my apologies:

You do not actually try to get every layer "as right and secure as possible." The whole point of defense in depth is that any given security measure can fail, so to ensure protection, you use multiple layers of different technologies so that when (not if) one layer fails, the other layers are there to "take up the slack", so to speak.

The goal on each layer is not "as secure as possible", but simply "as secure as reasonable" (you seek a "sweet spot" that balances security and other factors like cost), and you rely on the whole to achieve the goal. Considerations include cost to implement and maintain, the value of what you are protecting, the damage caused should security fail, who your likely attackers will be and their technical capabilities, performance impact, customer impact, and many other factors.
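
To make that "sweet spot" idea concrete, here is a toy cost model - a minimal sketch with invented numbers, not anything from the original comment: you pick the control whose combined price (cost of the control plus expected loss from the breaches it fails to stop) is lowest, which is rarely the most secure option on the menu.

```python
# Toy numbers, purely illustrative: choose the control level that minimizes
# control cost plus expected loss, rather than the "most secure" one.
controls = [
    # (label, annual cost of the control, residual probability of a breach per year)
    ("none",        0,       0.30),
    ("basic",       5_000,   0.10),
    ("hardened",    20_000,  0.04),
    ("gold-plated", 90_000,  0.03),
]
damage_if_breached = 250_000  # assumed value of what this layer protects

for label, cost, p_breach in controls:
    expected_total = cost + p_breach * damage_if_breached
    print(f"{label:12s} control cost {cost:7,d}  expected total cost {expected_total:10,.0f}")
```

With these made-up figures, "hardened" wins: the extra 70,000 for "gold-plated" buys only a one-point drop in breach probability - exactly the balancing act described above.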

Additionally, security costs at a given layer do not increase linearly, so making a given layer more secure, while often possible, quickly becomes inefficient. Example - most websites use a 2k SSL key; 4k is more secure, and 8k is even more so. Except - 8k doesn't work everywhere, and the bigger keys come with a performance impact that matters at scale - and the key size is usually not the reason a key is compromised. So - the entire world (for the most part) does not use the most secure option, simply because it's not worth it - the additional security is swamped by the drawbacks. (Similar issues occur regarding cipher choice, fwiw.)
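
The key-size point can be put in rough code. This is a sketch, assuming the pyca/cryptography package is available; it times RSA signing (the expensive server-side step in a TLS handshake) at 2048 and 4096 bits. The absolute numbers are machine-dependent; the point is only that the larger key costs noticeably more per operation. (8192-bit keys behave the same way, but take much longer to generate.)

```python
# Rough, machine-dependent benchmark of RSA signing throughput at two key sizes.
import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

def sign_rate(bits, seconds=2.0):
    """Generate an RSA key of the given size and count signatures per second."""
    key = rsa.generate_private_key(public_exponent=65537, key_size=bits)
    message = b"example payload"
    count, start = 0, time.time()
    while time.time() - start < seconds:
        key.sign(message, padding.PKCS1v15(), hashes.SHA256())
        count += 1
    return count / seconds

for bits in (2048, 4096):
    print(f"RSA-{bits}: ~{sign_rate(bits):.0f} signatures/sec")
```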

In reality - in nearly all situations, human beings are the weak link. You can have awesome security, and all it takes is one bozo and it all comes down. SSL is great, until someone manages to get a key signed fraudulently, and bypasses it entirely. Packet filtering is dandy, except that Fred in accounting wanted to play Minecraft and opened up an ssh tunnel, incorrectly. MFA is fine, except the secretary who logged into the VPN using MFA just plugged the thumb drive they found in the parking lot into her PC, and actually ran "Elf Bowling", and now your AD is owned and the attacker is escalating privilege from inside. So it doesn't matter that much about your hard candy shell - he's in the soft, chewy center. THIS, by the way, is where things like education are of the most value - not in making the very skilled more skilled, but in making the clueless somewhat more clueful. If you want to make a friendly AI - remove human beings from the loop as much as possible...

Ok, done with rant. Again, sorry - I live this 40-60 hours a week.

Comment by bortels on Stupid Questions June 2015 · 2015-06-05T01:59:46.751Z · LW · GW

I think that's a cognitive illusion, but I understand that it can generate positive emotions which are not an illusion, by any means.

More a legacy kind of consideration, really - I do not imagine any meaningful part of myself other than genes (which frankly I was just borrowing) will live on. But - if I have done my job right, the attitudes and morals that I have should be reflected in my children, and so I have an effect on the world in some small way that lingers, even if I am not around to see it. And yes - that's comforting, a bit. Still would rather not die, but hey.

Comment by bortels on A resolution to the Doomsday Argument. · 2015-06-04T06:30:21.951Z · LW · GW

So - I am still having issues parsing this, and I am persisting because I want to understand the argument, at least. I may or may not agree, but understanding it seems a reasonable goal.

The builders know, of course, that this is much riskier than it seems, because its success would render their own observations extremely rare.

The success of the self-modifying AI would make the observations of that AI's builders extremely rare... why? Because the AI's observations count, and it is presumably many orders of magnitude faster?

For a moment, I will assume I have interpreted that correctly. So? How is this risky, and how would creating billions of simulated humanities change that risk?

I think the argument is that - somehow - the overwhelming number of simulated humanities makes it likely that the original builders are actually a simulation of the original builders running under an AI? How would this make any difference? How would this be expected to "percolate up" through the stack? Presumably somewhere there is the "original" top-level group of researchers still, no? How are they not at risk?

How is it that a builder's observations are ok, the AI's are bad, but the simulated humans running in the AI are suddenly good?

I think, after reading what I have, that this is the same fallacy I talked about in the other thread - the idea that if you find yourself in a rare spot, it must mean something special, and that you can work the probability of that rareness backwards to a conclusion. But I am by no means sure, or even mostly confident, that I am interpreting the proposal correctly.

Anyone want to take a crack at enlightening me?

Comment by bortels on A resolution to the Doomsday Argument. · 2015-06-04T04:00:42.077Z · LW · GW

Ah - I'd seen the link, but the widget just spun. I'll go look at the PDF. The below is before I have read it - it could be amusing and humility inducing if I read it and it makes me change my mind on the below (and I will surely report back if that happens).

As for the SSA being wrong on the face of it - the DA wiki page says "The doomsday argument relies on the self-sampling assumption (SSA), which says that an observer should reason as if they were randomly selected from the set of observers that actually exist." Assuming this is true (I do not know enough to judge yet), then if the SSA is false, then the DA argument is unsupported.

So - let's look at the SSA. In a nutshell, it revolves around how unlikely it is that you were born in the first small percentage of history - and ergo, doomsday must be around the corner.

I can think of 2 very strong arguments for the SSA being untrue.

First - this isn't actually how probability works. Take a fair coin and decide to flip it. The probabilities of heads and tails are the same, 1/2 - 50% for each. Flip the coin, and note the result. The probability is now unity - there is no magic way to get that 50/50 back. That coin toss result is now and forever more heads (or tails). You cannot look at a given result, work backwards about how improbable it was, and then use that - because it is no longer improbable, it's history. Probability does not actually work backwards in time, although it is convenient in some cases to pretend it does.
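
A trivial sketch of that distinction (mine, not from the comment): the 50/50 figure describes the forecast before the flip; once the flip has happened, asking about it conditioned on what you saw just returns what happened.

```python
# Before the flip, the best estimate for heads is 0.5; after observing the flip,
# the "probability" of what you saw is simply 1 - the uncertainty is gone.
import random

prior_p_heads = 0.5                           # forecast, made before flipping
outcome = random.choice(["heads", "tails"])   # the flip happens; it is now history
p_heads_given_observation = 1.0 if outcome == "heads" else 0.0
print(prior_p_heads, outcome, p_heads_given_observation)
```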

Another example - what is the probability that I was born at the exact second, minute, hour, and day, at the exact location I was born at, out of the countless other places and times that humanity has existed that I could have been born in/at? The answer, of course - unity. And nil at all other places and times, because it has already happened - the wave form, if you will, has collapsed, Elvis has left the building.

So - what is the probability that you were born so freakishly close to the start of the 5-million-year reign of humanity, in the first 0.000001% of all the people who will ever live? Unity. Because it's history. And the only thing making this position any different whatsoever from the others is blind chance. There is nothing one bit special about being in the first bit, other than that it allows you to notice that. (Feel free to substitute anything for 5 million above - it's all the same.)

Second - there are also logical issues - you can spin the argument on its head, and it still works (with less force, to be sure). What are the chances of me being alive for Doomsday? Fairly small - despite urban legend, the number of people alive is a fairly small percentage (6-7%) of all who have ever lived. Ergo - doomsday cannot be soon, because it was unlikely I would be born to see it. (Again, flawed - right now, the likelihood that I was born at this time is unity.)
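
The 6-7% figure checks out on a back-of-the-envelope basis; the inputs below are my own assumptions (roughly 7.3 billion alive circa 2015, against the commonly cited estimate of about 108 billion humans ever born), not numbers from the comment.

```python
# Back-of-envelope check of the "6-7% of everyone who has ever lived" claim.
alive_now = 7.3e9    # approximate world population, mid-2015 (assumption)
ever_born = 108e9    # commonly cited estimate of all humans ever born (assumption)
print(f"{alive_now / ever_born:.1%} of everyone who has ever lived is alive today")
```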

Quite aside from the probability issue, an argument that can be used to "prove" both T and ~T is flawed, and should be discarded. "Prove" here is used very loosely, because this is nowhere close to proof - which is good, because I like things like Math working.

Time to go read a PDF.

Update: Done. That was quite enjoyable, thank you. A great deal of food for thought, and like most good, crunchy info filled things, there were bits I quite agreed with, and quite disagreed with (and that's fine.)

I took some notes; I will not attempt to post them here, because I have already run into comment length issues, and I'm a wordy SOB. I can post them to a gist or something if anyone is interested, I kept them mostly so I could comment intelligently after reading it. Scanning back thru for the important bits:

Anthropomorphic reasoning would be useless as suggested - unless the AI was designed by and for humans to use. Which it would be. So - it may well be useful in the beginning, because presumably we would be modeling desired traits (like "friendliness") on human traits. That could easily fail catastrophically later, of course.

The comparison between evolution and AI, in terms of relation to humans on page 11 was profound, and very well said.

There are an awful lot of assumptions presented as givens, and then used to assert other things. If any of them are wrong - the chain breaks. There were also a few suggestions that would violate physics, but the point being made was still valid ("With molecular nanotechnology, the AI could (potentially) rewrite the solar system unopposed." was my favorite; it is probably beneficial to separate what is possible and impossible, given things like distances and energy and time, not to mention "why?").

There is an underlying assumption that intelligence can increase without bound. I am by no means sure this is true - I can think of no other trait that does so; you run into limits (again) of physics and energy and so on. It is very possible that things like speed-of-light propagation delay, heat, and the inherent difficulty of certain tasks such as factoring would end up imposing an upper limit on the intelligence of an AI before it reached the w00 w00 god-power magic stage. Not that it matters that much; if its goal is to harm us, you don't need to be too smart to do that...

Anyone thinking an AI might want my body for its atoms is not thinking clearly. I am made primarily of carbon, hydrogen, and oxygen - all are plentiful, in much easier-to-work-with form, elsewhere. An early-stage AI bootstrapping production would almost certainly want metals, some basic elements like silicon, and hydrocarbons (which we keep handy). Oh, and likely fissionables for power. Not us. Later on, all bets are off, but there are still far better places to get atoms than people.

Finally - the flaw in assuming an AI will predate mind upload is motivation. Death is a powerful, powerful motivator. A researcher close to being able to do it, about to die, is damn well going to try, no matter what the government says they can or can't do - I would. And the guesses as to fidelity required are just that - guesses. Life extension is a powerful, powerful draw. Upload may also ultimately be easier - hand-waving away a ton of details, it's just copy and simulation; it does not require new, creative inventions, just refinements on current thoughts. You don't need to totally understand how something works to scan and simulate it.

Enough. If you have read this far - more power to you, thank you much for your time.

PS. I still don't get the whole "simulated human civilizations" bit - the paper did not seem to touch on that. But I rather suspect it's the same backwards probability thing...

Comment by bortels on Learning to get things right first time · 2015-06-04T02:22:10.153Z · LW · GW

I have an intellectual issue with using "probably" before an event that has never happened before, in the history of the universe (so far as I can tell).

And - if I am given the choice between slow, steady improvement in the lot of humanity (which seems to be the status quo), and a dice throw that results in either paradise, or extinction - I'll stick with slow steady, thanks, unless the odds were overwhelmingly positive. And - I suspect they are, but in the opposite direction, because there are far more ways to screw up than to succeed, and once the AI is out - you no longer have a chance to change it much. I'd prefer to wait it out, slowly refining things, until paradise is assured.

Hmm. That actually brings a thought to mind. If an unfriendly AI was far more likely than a friendly one (as I have just been suggesting) - why aren't we made of computronium? I can think of a few reasons, with no real way to decide. The scary one is "maybe we are, and this evolution thing is the unfriendly part..."

Comment by bortels on Learning to get things right first time · 2015-06-04T02:13:21.089Z · LW · GW

The techniques are useful, in and of themselves, without having to think about utility in creating a friendly AI.

So, yes, by all means, work on better skills.

But - the point I'm trying to make is that while they may help, they are insufficient to provide any real degree of confidence in preventing the creation of an unfriendly AI, because the emergent effects that would likely be responsible for such are not amenable to planning about ahead of time.

It seems to me your original proposal is the logical equivalent to "Hey, if we can figure out how to better predict where lightning strikes - we could go there ahead of time and be ready to stop the fires quickly, before the spread". Well, sure - except that sort of prediction would depend on knowing ahead of time the outcome of very unpredictable events ("where, exactly, will the lightning strike?") - and it would be far more practical to spend the time and effort on things like lightning rods and firebreaks.

Comment by bortels on Stupid Questions June 2015 · 2015-06-04T01:57:19.938Z · LW · GW

So - there's probably no good reason for you - as a mind - to care about your genes, unless you have reason to believe they are unique or somehow superior in some way to the rest of the population.

But as a genetic machine, you "should" care deeply, for a very particular definition of "should" - simply because if you do not, and that indifference turns out to be genetically influenced, then your genes will indeed die out. The constant urge and competition to reproduce your particular set of genes is what drives evolution (well, that and some other stuff like mutations). I like what evolution has come up with so far, and so it behooves me to help it along.

On a more practical note - I take a great deal of joy from my kids. I see in them echoes of people who are no longer with us, and it's delightful when they echo back things I have taught them, and even more so when they come up with something totally unexpected. Barring transhumanism, your kids and your influence upon them are one of the only ways to extend your influence past death. My mother died over a decade ago - and I see elements of her personality in my daughters, and it's comforting.

I don't hold a lot of hope for eternal life for myself - I'm 48 and not in the greatest health, and I am not what the people on this board would consider optimistic about technology saving my mentation when my body fails, by any means (and I dearly would love to be wrong, but until that happens, you plan for the worst). But - I think there's a strong possibility my daughters will live forever. And that is extremely comforting. The spectre of death is greatly lessened when you think there is a good chance that things you love will live on after you, remembering, maybe forever.

Comment by bortels on A Proposal for Defeating Moloch in the Prison Industrial Complex · 2015-06-04T01:37:28.811Z · LW · GW

Exactly. Having a guaranteed low-but-livable-income job as a reward for serving time and not going back is hardly a career path people will aim for - but it might be attractive to someone who is out and sees few alternatives to going back to a life of crime.

I actually think training and new-deal type employment guarantees for those in poverty is a good idea aside from the whole prison thing - in that attempts to raise people from poverty would likely reduce crime to begin with.

The real issue here - running a prison being a profit-making business - has already been pointed out.

Comment by bortels on Confession Thread: Mistakes as an aspiring rationalist · 2015-06-03T07:03:29.813Z · LW · GW

Dunning-Kruger - learn it, fear it. So long as you are aware of that effect, and aware of your tendency to arrogance (hardly uncommon, especially among the educated), you are far less likely to have it be a significant issue. Just be vigilant.

I have similar issues - I find it helpful to dive deeply into things I am very inexperienced with, for a while; realizing there are huge branches of knowledge you may be no more educated in than a 6th grader is humbling, and freeing, and once you are comfortable saying "That? Oh, hell - I don't know much about that, and will never find the time to", you can let it go and relax a bit. Or - I have. (My favorites are microbiology and advanced mathematics. I fancy myself smart, but it is super easy to be so totally over my head it may as well be mystic sorcery they're talking about. Humbles you right out.)

Big chunks of this board do that as well, FWIW.

Comment by bortels on Confession Thread: Mistakes as an aspiring rationalist · 2015-06-03T06:53:10.793Z · LW · GW

I spent 7 years playing a video game that started to become as important to me as the real world, at least in terms of how it emotionally affected me. If I had spent the 6ish hours a day, on average, doing something else - well, it makes me vaguely sick to think of the things I might have better spent the time and energy on. Don't get me wrong - it was fun. And I did not sink nearly so low as so many others have, and in the end, when I realized what was going on, I left. I am simply saddened by the opportunity cost. FWIW - this is less about the "virtual" nature of things - I had good, real human beings as friends - and more about not having the presence of mind and fortitude to spend that time, oh, learning an instrument, or developing a difficult skill, or simply doing things in the real world to help society as a whole. I mean - 6 hours a day (average, 7 days a week) for 7 years is what, a doctorate program? Not that I value the paper and all, but the education means something.

Comment by bortels on A Proposal for Defeating Moloch in the Prison Industrial Complex · 2015-06-03T06:42:31.661Z · LW · GW

Perhaps instead of the prison, the ex-prisoner should be given the financial incentive to avoid recidivism. Reward good behavior, rather than punish bad.

We could do this by providing training and giving them reasonable jobs. HA HA! I make myself laugh. Sigh.

It seems to me the issue is less one of recidivism, and more one of the prison-for-profit machine. Rather than address it by trying to make them profit either way (they get paid if the prisoner returns already - this is proposing they get paid if they stay out) - it seems simpler to remove profit as a motive (ie. if the state is gonna lock you up - the state has to deal with it, nobody should be doing it as a business). Not that the state is likely to have a better record here.

My take is that we would be better off spending that money on education and health care for the poor, as an effort to avoid having crime be seen as an easy way out of poverty. Ooh, my inner hippie is showing. Time to go hug a tree.

Comment by bortels on Learning to get things right first time · 2015-06-03T06:26:36.031Z · LW · GW

Fair question.

My point is that if improving techniques could take you from (arbitrarily chosen percentages here) a 50% chance that an unfriendly AI would cause an existential crisis, to 25% chance that it would - you really didn't gain all that much, and the wiser course of action is still not to make the AI.

The actual percentages are wildly debatable, of course, but I would say that if you think there is any chance - no matter how small - of triggering ye olde existential crisis, you don't do it - and I do not believe that technique alone could get us anywhere close to that.

The ideas you propose in the OP seem wise, and good for society - and wholly ineffective in actually stopping us from creating an unfriendly AI. The reason is simply that the complexity defies analysis, at least by human beings. The fear is that unfriendliness arises from unintended design consequences, from unanticipated system effects rather than bugs in code or faulty intent.

It's a consequence of entropy - there are simply far, far more ways for something to get screwed up than for it to be right. So unexpected effects arising from complexity are far, far more likely to cause issues than to be beneficial unless you can somehow correct for them - planning ahead will only get you so far.

Your OP suggests that we might be more successful if we got more of it right "the first time". But - things this complex are not created finished, de novo - they are an iterative, evolutionary task. The training could well be helpful, but I suspect not for the reasons you suggested. The real trick is to design things so that when they go wrong - it still works correctly. You have to plan for and expect failure, or that inevitable failure is the end of the line.

Comment by bortels on Stupid Questions June 2015 · 2015-06-03T06:09:58.000Z · LW · GW

The article supports the idea that agricultural diets were worse - but the hunter-gatherers' diets were poor as well. Nobody ate a lot back then; abundance is fairly new to humanity. The important part about agriculture is not that it might be healthier - far from it.

Agriculture (and the agricultural diets that go with it) allowed humanity luxuries that the hunter-gatherer did not have - a dependable food supply, and moreover a supply where a person could grow more food than they actually needed for subsistence. This is the very foundation of civilization, and all of the benefits derived from that - the freed up workers could spend their time on other things like permanent structures, research into new technologies, trade, exploration, that were simply impossible in hunter-gatherer society. You can afford to be sickish, as a society, if you can have more babies and support a higher population, at least temporarily. (I suspect that beyond this, adapting to the diet was probably a big issue, and continues to be - look at how many are still lactose intolerant...)

Over time, that allowed agrarian culture to become far better nourished - to the point where sheer abundance causes a whole new set of health issues. I would suggest that today the issues with diet are those of abundance, not agricultural versus hunter-gatherer types of food choices. And, today, with the information we have - you can indeed have a vegan diet, and avoid all or nearly all of the issues the article cites. Technology rocks.

Comment by bortels on Stupid Questions June 2015 · 2015-06-03T05:49:14.629Z · LW · GW

It hits a nerve with me. I do computer tech stuff, and one of the hardest things for people to learn, seemingly, is to admit they don't actually know something (and that they should therefore consider, oh, doing research, or experimenting, or perhaps seeking someone with experience). The concept of "Well - you certainly can narrow it down in some way" is lovely - but you still don't actually know. The incorrect statement would be "I know nothing (about your number)" - but nobody actually says that.

I kinda flip it - we know nothing for sure (you could be hallucinating or mistaken) - but we are pretty confident about a great many things, and can become more confident. So long as we follow up "I don't know" with "... but I can think of some ways to try to find out", it strikes me as simple humility.

Amusingly - "I am thinking of a number" - was a lie. So - there's a good chance that however you narrowed it down, you were wrong. Fair's fair - you were given false information you based that on, but still thought you might know more than you actually did. Just something to ponder.

Comment by bortels on Stupid Questions June 2015 · 2015-06-03T05:39:13.339Z · LW · GW

Actually - I took a closer look. The explanation is perhaps simpler.

Tide doesn't make a stand-alone fabric softener. Or if they do - Amazon doesn't seem to have it? There's Tide, and Tide with Fabric Softener, and Tide with a dozen other variants - but nothing that's not detergent-plus.

So - no point in differentiating. The little ad-man in my head says: "We don't sell mere laundry detergent - we sell Tide!"

To put it another way - did you ever go to buy detergent, and accidentally buy fabric softener? Yeah, me neither. So - the concern is perhaps unfounded.

Comment by bortels on Stupid Questions June 2015 · 2015-06-01T23:09:43.117Z · LW · GW

While reading up on Jargon in the wiki (it is difficult to follow some threads without it), I came across:

http://wiki.lesswrong.com/wiki/I_don%27t_know

The talk page does not exist, and I have no rights to create it, so I will ask here: If I say "I am thinking of a number - what is it?" - would "I don't know" be not only a valid answer, but the only answer, for anyone other than myself?

The assertion the page makes is that "I don't Know" is "Something that can't be entirely true if you can even formulate a question." - but this seems a counterexample.

I understand the point that is trying to be made - that "I don't know" is often said even when you actually could narrow down your guess a great deal - but the assertion given is only partially correct, and if you base arguments on a string of mostly correct things, you can still end up wildly off-course in the end.

Am I perhaps applying rigor where it is inappropriate? Perhaps this is taken out of context?

Comment by bortels on A resolution to the Doomsday Argument. · 2015-06-01T21:34:53.410Z · LW · GW

Ah - that's much clearer than your OP.

FWIW - I suspect it violates causality under nearly everyone's standards.

You asked if your proposal was plausible. Unless you can postulate some means to handle that causality issue, I would have to say the answer is "no".

So - you are suggesting that if the AI generates enough simulations of the "prime" reality with enough fidelity, then the chances that a given observer is in a sim approach 1, because of the sheer quantity of them. Correct?

If so - the flaw lies in orders of infinity. For every way you can simulate a world, you can incorrectly simulate it an infinite number of other ways. So - if you are in a sim, it is likely, with a chance approaching unity, that you are NOT in a simulation of the higher-level reality simulating you. And if it's not the same, you have no causality violation, because the first sim is not actually the same as reality; it just seems to be from the POV of an inhabitant.

The whole thing seems a bit silly anyway - not your argument, but the sim argument - from a physics POV. Unless we are actually in a sim right now, and our understanding of physics is fundamentally broken, doing the suggested would take more time and energy than has ever existed or ever will, and is still mathematically impossible (another orders-of-infinity thing).

Comment by bortels on A resolution to the Doomsday Argument. · 2015-06-01T21:17:41.197Z · LW · GW

No, the entire point is not to know whether you are simulated before the Singularity. Afterwards, the danger is already averted.

Then perhaps I simply do not understand the proposal.

The builders know, of course, that this is much riskier than it seems, because its success would render their own observations extremely rare.

This is where I am confused. The "of course" is not very "of coursey" to me. Can you explain how a self-modifying AI would be risky in this regard (a citation is fine, you do not need to repeat a well known argument I am simply ignorant of).

I am also foggy on terminology - DA and FAI and so on. I don't suppose there's a glossary around. Ok - DA is "Doomsday Argument" from the thread context (which seems silly to me - the SSA seems to be wrong on the face of it, which then invalidates DA).

Comment by bortels on Ideas to Improve LessWrong · 2015-06-01T20:56:33.231Z · LW · GW

Fair enough. I should mention my "Why" was more nutsy-and-boltsy than asking about motive; it would perhaps more accurately have been asked as "What do you observe about LessWrong, as it stands, that makes you believe it can or should be improved?" I am willing to take the desire for it as a given.

The goal of the why, fwiw, was to encourage self-examination, to help perhaps ensure that the "improvement" is just that. Fairly often, attempts to improve things are not as successful as hoped (see most of world history), and as I get older I begin to think more and more that most human attempts to "fix" complex things just tend to screw em up more.

Imagine an "improvement" where your picture was added as part of your post. There are perhaps some who would consider that an improvement - I, emphatically, would not. Not that you are suggesting that - just that the actual improvements should ideally be agreed upon by (or at least tolerable to) most or all of the community, and sometimes that sort of consensus is just impossible.

Comment by bortels on Learning to get things right first time · 2015-06-01T20:46:15.095Z · LW · GW

The flaws leading to an unexpectedly unfriendly AI certainly might lead back to a flaw in the design - but I think it is overly optimistic to think that the human mind (or a group of minds, or perhaps any mind) is capable of reliably creating specs that are sufficient to avoid this. We can and do spend tremendous time on this sort of thing already, and bad things still happen. You hold the shuttle up as an example of reliability done right (which it is) - but it still blew up, because not all of shuttle design is software. In the same way, the issue could arise from some environmental issue that alters the AI in such a way that it is unpredictable - power fluctuations, bit flip, who knows. The world is a horribly non-deterministic place, from a human POV.

By way of analogy - consider weather prediction. We have worked on it for all of history, we have satellites and supercomputers - and we are still only capable of accurate predictions for a few days or a week, getting less and less accurate as we go. This isn't a case of making a mistake - it is a case of a very complex end state arising from simple beginnings, and of lacking the ability to make perfectly accurate predictions about some things. To put it another way - it may simply be that the problem is not computable, now or with any foreseeable technology.
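
A tiny illustration of why that kind of prediction degrades (my sketch, not from the comment): even a one-line chaotic rule amplifies a billionth-sized difference in starting conditions into a completely different trajectory within a few dozen steps - the same reason weather forecasts decay with lead time.

```python
# Two logistic-map trajectories that start almost identically diverge completely,
# even though the update rule itself is trivial and fully known.
def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate x -> r*x*(1-x) and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000000)
b = logistic_trajectory(0.400000001)  # differs only in the 9th decimal place
for step in (0, 10, 25, 50):
    print(f"step {step:2d}: {a[step]:.6f} vs {b[step]:.6f}  (diff {abs(a[step] - b[step]):.6f})")
```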

Comment by bortels on Stupid Questions June 2015 · 2015-06-01T20:34:01.410Z · LW · GW

Thank you. The human element struck me as the "weak link" as well, which is why I am attempting to 'formally prove' (for a pretty sketchy definition of 'formal') that the AI should be left in the box no matter what it says or does - presumably to steel resolve in the face of likely manipulation attempts, and ideally to ensure that if such a situation ever actually happened, "let it out of the box" isn't actually designed to be a viable option. I do see the chance that a human might be subverted via non-logical means - sympathy, or a desire for destruction, or foolish optimism and hope of reward - to let it out. Pragmatically, we would need to evaluate the actual means used to contain the AI, the probable risk, and the probable rewards to make a real decision between "keep it in the box" and "do not create it in the first place"

I was also worried about side effects of using information obtained; which is where the invocation of Godel comes in, along with the requirement of provability, eliminating the need to trust the AI's veracity. There are some bits of information ("AI, what is the square root of 25?") that are clearly not exploitable, in that there is simply nowhere for "malware" to hide. There are likewise some ("AI, provide me the design of a new quantum supercomputer") that could be easily used as a Trojan. By reducing the acceptable exploits to things that can be formally proven outside of the AI box, and comprehensible to human beings, I am maybe removing wondrous technical magic - but even so, what is left can be tremendously useful. There are a tremendous number of very simple questions ("Prove Fermat's last theorem") that could shed real insight on things, yet have no significant chance of subversion due to their limited nature.

I suspect idle chit-chat would be right out. :-)
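
For what "formally proven outside of the AI box" could look like in miniature, here is a toy example (my illustration, not part of the original comment) in Lean: if the boxed AI hands back a proof term, a small trusted checker can verify it mechanically, with no need to trust the AI's honesty.

```lean
-- The Lean kernel plays the role of the small, trusted, outside-the-box checker.
example : 25 = 5 * 5 := rfl                      -- "AI, what is the square root of 25?"
example (n : Nat) : n + 0 = n := Nat.add_zero n  -- a slightly less trivial, still checkable claim
```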

Man, I need to learn to type the umlaut. Gödel.

Comment by bortels on Stupid Questions June 2015 · 2015-06-01T20:19:22.667Z · LW · GW

Case in point - I, for one, would likely not have posted anything whatsoever were it not for Stupid Questions. There is enough jargon here that asking something reasonable can still be intimidating - what if it turns out to be common knowledge? Once you break the ice, it's easier, but count this as a sample of 1 supporting it.

Comment by bortels on Why is a goal a good thing? · 2015-06-01T06:20:02.591Z · LW · GW

Setting a goal helps clarify thought process and planning; it forces you to step back a bit and look at the work to be done, and the outcome, from a slightly different viewpoint. It also helps you maintain focus on driving toward a result, and gives you the satisfaction of accomplishment when (if) you reach the goal.

Comment by bortels on Six Ways To Get Along With People Who Are Totally Wrong* · 2015-06-01T06:09:32.878Z · LW · GW

So - I'm not sure I want to get along with those who are totally wrong (or who I think are). More power to altruism, you rock, but I wonder sometimes if we do not bring some of this stupidity on ourselves by tolerating and giving voice to idiocy.

I look, for example, at the vaccination situation; I live in Southern California, a hotbed of clueless celebrity bozos who think for some reason they know more about disease and epidemiology than freaking doctors, and who cause real harm - actual loss of human life - to their community, of which my kids are a part.

Ok - maybe they're not totally wrong; I am willing to accept that some small percentage are actually opting out of vaccinations for good medical reasons, at the advice of their physicians - a buddy of mine has a daughter who fought leukemia, and vaccinations deep in the middle of her treatments would have been very bad - but that doesn't mean I can, or should, give a pass to the idiots who do it because they "don't want to put poisons in their children's bodies".

Point being - I cannot help but think that we might have been better off, as a society, if we took the first few who did that, put em in stocks in the middle of town, and threw rotten fruit at them. It should be socially unacceptable to be that wrong, not be something that gets them interviewed on tv.

I know - this is aimed more at philosophical differences, or matters of opinion, trying to prevent online debate from spiraling down into a flamewar. I just can't help but feel we are developing a society where people have the expectation that their wrong beliefs are somehow to be protected from criticism. Believe whatever crazy thing you want - but do not expect to go unmocked for it. Maybe - just maybe - getting roasted pretty good online is a useful educational experience. Maybe if people had gotten flamed good and hard on Usenet back in the late '80s, they wouldn't do the stupid public shaming (and evoking of the mob response) they do today. Sometimes, the burned hand teaches best.

Or maybe I'm just an asshole. Who knows. It is certainly within the realms of possibility. Even so - being an asshole does not automatically mean you're wrong.

Just food for thought.

Comment by bortels on Leaving LessWrong for a more rational life · 2015-06-01T05:51:07.823Z · LW · GW

I now regard the sequences as a memetic hazard, one which may at the end of the day be doing more harm than good.

To your own cognition, or just to that of others?

I just got here. I have no experience with the issues you cite, but it strikes me that disengagement does not, in general, change society. If you think ideas, as presented, are wrong - show the evidence, debate, fight the good fight. This is probably one of the few places it might actually be acceptable - you can't lurk on religious boards and try to convince them of things, they mostly cannot or will not listen, but I suspect/hope most here do?

I actually agree, a lot of the philosophy tips over to woo woo and sophistry - but it is perhaps better to light a candle than curse the dark.

what is taught by the sequences is a form of flawed truth-seeking (thought experiments favored over real world experiments) which inevitably results in errors,

Well - let's fix it then! I tend to agree; I see rationalism as only one of many useful tools. I would add formal Logic, and Science (refinement via experiment - do those sequences actually suggest that experiment is unnecessary somehow? I'd love to see it, could use the laugh.), and perhaps even foggy things like "experience" (I find I do not control, to a large extent, my own problem solving and thought processes nearly as well as I would imagine). The good carpenter has many tools, and uses the appropriate ones at the appropriate time.

Or is this one of those academic "we need to wait for the old guard to die off" things? If so, again, providing a counterpoint for those interested in truth as opposed to dogma seems like a fun thing to do. But I'm weird that way. (I strongly believe in the value of re-examination of what we hold true, for refinement or discarding if it no longer fits reality, as well as for personal growth - so the idea of sequences that people take as gospel of sorts is simply argument from authority to mock unless it stands up to critical analysis)

But within the LessWrong community there is actually outright hostility to...

Gandhi said "First they ignore you, then they laugh at you, then they fight you, then you win."

...but he was pretty pragmatic for a philosopher. If you get hostility to ideas, that means they're listening, which means you actually have some chance of causing reform, if that is a goal. If you are not willing to piss off a few people in the name of truth... well, I understand. We get tired, and human beings do not generally seek confrontation continually (or at least the ones who survive and reproduce do not). But if your concern is that they are hostile toward ideas that more effectively help humanity, disengagement isn't gonna change that, although it may help your own sanity.

Comment by bortels on Ideas to Improve LessWrong · 2015-06-01T05:21:35.240Z · LW · GW

I am new here - and so do not have enough experience to make a judgement call, but I do have a question:

Why do you want to "improve" it? What are the aspects of its current operation that you think are sub-optimal, and why?

I see a lot of interesting suggestions for changes, and a wishlist for features - but I have no inkling if they might "improve" anything at all. I tend to be of the "simpler is better" school, and from the sound of things, it seems things are already pretty good, or at least pretty non-bad?

STORYTIME!

I used to play a lot of World of Warcraft. I mean - a lot. I had always been a big fan of Blizzard, and when WoW came out, I participated eagerly in the beta, and played it heavily for many years. I eventually left, for a number of reasons - but the relevant one, here, is that Blizzard had been steadily "improving" WoW to the point where it was not what I wanted. In the early days, a lot of WoW was hard, and thus rewarding. You had giant questlines, 40-man raids, and it would take months, maybe a year, to complete goals. Doing so was rewarding, as it was challenging to the intellect, and demonstrated mastery to my peer group - it's fun to brag and show off, even in a video game. But - my goals were not Blizzard's, and they steadily "improved" things by making it simpler - rather than a 40-man raid where everyone must be in top form, you could do 25-man, 10-man, 5-man "raids", and you could earn some things by virtue of just grinding (quantity) rather than excellence (quality). Eventually, they started simply selling the types of things that I had spent a great deal of time earning, further invalidating it in my eyes. They improved themselves out of a paying customer, and while they maybe picked up 5 in my place - for me, at least, it ruined the game.

The moral is - beware of "improving" things so much that you alter them fundamentally. I'll be blunt - very little of what you propose above can't be done in discussion threads, and the world has enough social networks. Part of the reason I joined here is the fact that I cannot ask or discuss these things on twitter or shudder facebook - well, I can, but I would get very little but the blank stares of bumpkins. I love humanity, but on a whole we are a bunch of bumpkins, sorry to say.

Comment by bortels on Open Thread, Jun. 1 - Jun. 7, 2015 · 2015-06-01T05:07:18.937Z · LW · GW

What about the boys who can't or don't have these experiences...

They fail to reproduce, presumably. Genetics and evolution are a harsh mistress. Is there some reason to think that males that do not find a mate should get some sort of assistance? Perhaps for them, 40 is the "appropriate age".

I think I could make a fairly strong case that anyone who is not capable of talking to peers of both sexes and learning the right social cues to find a mate is probably someone also poorly equipped to take care of the results of finding that mate in the first place, namely a relationship and children. And - that's fine, viva la difference - a nice thing about being an intelligent human being is that you are not necessarily constrained in your behavior by what might be best from the standpoint of genetics and survival of the species.

Comment by bortels on A resolution to the Doomsday Argument. · 2015-06-01T04:57:31.233Z · LW · GW

Seems backwards. If you are a society that has actually designed and implemented an AI and infrastructure capable of "creating billions of simulated humanities" - it seems de facto you are the "real" set, as you can see the simulated ones, and a recursive nesting of such things should, in theory, have artifacts of some sort (i.e., a "fork bomb" in Unix parlance).

I rather think that pragmatically, if a simulated society developed an AI capable of simulating society in sufficient fidelity, it would self-limit - either the simulations would simply lack fidelity, or the +1 society running us would go "whoops, that one is spinning up exponentially" and shut us down. If you really think you are in a simulated society, things like this would be tantamount to suicide...

I don't find the Doomsday argument compelling, simply because it assumes something is not the case ("we are in the first few percent of humans born") just because it is improbable.

Comment by bortels on Request for Advice : A.I. - can I make myself useful? · 2015-06-01T04:44:18.133Z · LW · GW

I think a large part of my lack of enthusiasm comes from my belief that advances in artificial intelligence are going to make human-run biology irrelevant before long.

I suspect that's the issue, and I suspect AI will not be the panacea you expect it to be; or rather, if AI gets to the point of making human-run research in any field irrelevant - it may well do so in all fields shortly thereafter, so you're right back where you started.

I rather doubt it will happen that way at all; it seems to me that in the foreseeable future, the most likely role of computers in biology is as a force multiplier, allowing processes that are traditionally slow or tedious to be done rapidly and reliably, freeing humans to do that weird pattern-recognition and forecasting thing we do so well.

Comment by bortels on Learning to get things right first time · 2015-06-01T04:38:38.881Z · LW · GW

I think there may be an unfounded assumption here - that an unfriendly AI would be the results of some sort of bug, or coding errors that could be identified ahead of time and fixed.

I rather suspect those sorts of stuff would not result in "unfriendly", they would result in crash/nonsense/non-functional AI.

Presumably part of the reason the whole friendly/non-friendly thing is an issue is because our models of cognition are crude, and a ton of complex high-order behavior is a result of emergent properties in a system, not from explicit coding. I would expect the sort of error that accidentally turns an AI into a killer robot would be subtle enough that it is only comprehensible in hindsight, if then. (Note this does not mean intentionally making a hostile AI is all that hard. I can make hostility, or practical outcomes identical to it, without AI at all, so it stands to reason that could carry over)

Comment by bortels on Stupid Questions June 2015 · 2015-06-01T04:32:26.075Z · LW · GW

Is a 3-minute song worse, somehow, than a 10-minute song? Or a song that plays forever, on a loop, like the soundtrack at Pirates of the Caribbean - is that somehow even better?

The value of a life is more about quality than quantity, although presumably if quality is high, longer is more desirable, at least to a point.

You could argue that with current overpopulation it is unethical to have any children. In which case your genes will be deselected from the gene pool, in favor of those of my children - so it's maybe not a good argument to make.

Comment by bortels on Stupid Questions June 2015 · 2015-06-01T04:25:05.944Z · LW · GW

There's a label on the back as well with details. The front label is a billboard, designed to get your attention and take advantage of brand loyalty, so yes - you are expected to know it's detergent, and they are happy to handle the crazy rare edge-case person who does not recognize the brand. I suspect they also expect the supermarket you buy it at to have it in the "laundry detergents" section, likely with labels as well, so it's not necessary on the front label.

Comment by bortels on Stupid Questions June 2015 · 2015-06-01T03:33:24.890Z · LW · GW

I am hoping this is not stupid - but there is a large corpus of work on AI, and it is probably faster for those who have already digested it to point out fallacies than it is for me to try to find them. So - here goes:

BOOM. Maybe it's a bad sign when your first post to a new forum gets a "Comment Too Long" error.

I put the full content here - https://gist.github.com/bortels/28f3787e4762aa3870b3#file-aiboxguide-md - what follows is a teaser, intended to get those interested to look at the whole thing

TL;DR - it seems evident to me that "keep it in the box" is not only the only correct course of action in the AI-Box experiment, but also one that does not depend on any of the aspects of the AI whatsoever. The full argument is at the gist above - here are the points (in the style of a proof, so hopefully some are obvious):

1) The AI did not always exist.
2) Likewise, human intelligence did not always exist, and individual instantiations of it cease to exist frequently.
3) The status quo is fairly acceptable.
4) Godel's Theorem of Incompleteness is correct.
5) The AI can lie.
6) The AI cannot therefore be "trusted".
7) The AI could be "paused", without harm to it or the status quo.
8) By recording the state of the paused AI, you could conceivably "rewind" it to a given state.
9) The AI may be persuaded, while executing, to provide truths to us that are provable within our limited comprehension.

Given the above, the outcomes are:

Kill it now - status quo is maintained.
Let it out - wildly unpredictable, possible existential threat.
Exploit it in the box - actually doable, and possibly wildly useful, with minimal risk.

Again - the arguments in detail are at the gist.

What I am hoping for here are any and all of the following:
1) A critical eye points out a logical flaw or something I forgot, ideally in small words, and maybe I can fix it.
2) A critical eye agrees, so maybe at least I feel I am on the right path.
3) Any arguments on the part of the AI that might still be compelling, if you accept the above to be correct.

In a nutshell - there's the argument; please poke holes (gently, I beg, or at least with citations if necessary). It is very possible some or all of this has been argued and refuted before - if so, point me to it, please.