Posts

Which are the useful areas of AI study? 2011-01-15T23:03:25.044Z

Comments

Comment by PeterisP on Beta - First Impressions · 2017-09-21T09:27:36.001Z · LW · GW

An RSS feed for new posts is highly desirable - I don't generally go to websites to "poll" for new information that may or may not be there, unless e.g. I'm returning to a discussion that I had yesterday, so a "push" mechanism such as RSS is essential to me.

Comment by PeterisP on Double Crux — A Strategy for Mutual Understanding · 2016-11-29T23:54:09.504Z · LW · GW

I'm going to go out on a limb and state that the chosen example of "middle school students should wear uniforms" fails the prerequisite of "Confidence in the existence of objective truth", as do many (most?) "should" statements.

I strongly believe that there is no objectively true answer to "middle school students should wear uniforms", because the truth of that statement depends not so much on one's understanding of the world or one's opinion about student uniforms as on the interpretation of what the "should" means.

For example, "A policy requiring middle school students to wear uniforms is beneficial to the students" is a valid topic of discussion that can uncover some truth, and "A policy requiring middle school students to wear uniforms is mostly beneficial to [my definition of] society" is a completely different topic of discussion that likely can result in a different or even opposite answer.

Unqualified "should" statements are a common trap that prevents reaching a common understanding and exploring the truth. At the very least, you should clearly distinguish "should" as good, informed advice from "should" as a categorical moral imperative. If you want to discuss whether "X should do Y" in the sense of discussing the advantages of doing Y (or not), then you should (see what I'm doing here?) convert it to a statement of the form "X should do Y because that's a dominant/better/optimal choice that benefits them" - otherwise you won't get the discussion you want, just an argument between a camp arguing that question and a camp arguing about whether we should force X to do Y because everyone else wants it.

Comment by PeterisP on Rationality Quotes Thread March 2015 · 2015-03-03T09:52:34.110Z · LW · GW

The most important decisions are made before starting a war, and there the mistakes have very different costs. Overestimating your enemy results in peace (or cold war), which basically means you just lose out on some opportunistic conquests; underestimating your enemy results in a bloody, unexpectedly long war that can disrupt you for a decade or more - there are many clear examples of that in 20th-century history.

Comment by PeterisP on Rationality Quotes Thread March 2015 · 2015-03-03T09:48:35.894Z · LW · GW

"Are we political allies, or enemies?" is rather orthogonal to that - your political allies are those whose actions support your goals and your political enemies are those whose actions hurt your goals.

For example, consider a powerful and popular extreme radical in the "opposite" camp who reaches conclusions you disagree with, uses methods you disagree with, and is generally toxic and spews hate - that's often a prime example of a political ally, one whose actions incite the moderate members of society to start supporting you and focusing on your important issues instead of something else. The existence of such a pundit is important to you; you want them to keep doing what they do and have their propaganda be successful up to a point. I won't go into examples of particular politicians/parties in various countries - that gets dirty quickly - but many strictly opposed radical groups are actually allies in this sense against the majority of moderates, and sometimes they actively coordinate and cooperate despite the ideological differences.

On the other hand, consider a public speaker who targets the same audience as you do, shares the same goals/conclusions and the intended methods to achieve them, but simply does it consistently poorly - using sloppy arguments that alienate part of the target audience, or displaying disgusting personal behavior that hurts the image of your organization. That's a good example of a political enemy, one that you must work to silence, to get ignored and not heard, despite their being "aligned" with your conclusions.

And of course, a political competitor who does everything you want to do but holds a chair/position that you want for yourself is also a political enemy. Infighting inside powerful political groups is a normal situation, and when (and if) it goes public, very interesting political arguments appear as each side tries to distinguish itself from a political enemy with whom it shares most of its platform.

Comment by PeterisP on On Caring · 2014-10-15T16:07:25.516Z · LW · GW

The difference is that there are many actions that help other people but don't give an appropriate altruistic high (because your brain doesn't see or relate to those people much) and there are actions that produce a net zero or net negative effect but do produce an altruistic high.

The built-in care-o-meter of your body has known faults and biases, and it measures something that is often correlated (at least in the classic hunter-gatherer society model) with, but generally different from, actually caring about other people.

Comment by PeterisP on On Caring · 2014-10-15T16:01:23.065Z · LW · GW

An interesting follow-up that came to mind to your example of an oiled bird deserving 3 minutes of care:

Let's assume that there are 150 million suffering people right now - a made-up number, but a somewhat reasonable order-of-magnitude assumption. A quick calculation estimates that if I dedicate every single waking moment of my remaining life to caring about them and fixing the situation, then I've got a total of about 15 million care-minutes.

According to even the best possible care-o-meter that I could have, all the problems in the world cannot be worth more than 15 million care-minutes in total - simply because there aren't any more of them to allocate. And in a fair allocation, the average suffering person 'deserves' 0.1 care-minutes of my time, assuming that I don't leave anything at all for the oiled birds. This is a very different meaning of 'deserve' than the one used in the post - but I'm afraid that this is the more meaningful one.
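
For concreteness, here's that back-of-the-envelope arithmetic as a tiny sketch; the remaining-lifespan and waking-hours figures are my own assumptions, chosen only to land near the "about 15 million" estimate above.

```python
# Back-of-the-envelope care-minute budget; the lifespan and waking-hours
# figures are assumptions picked to roughly match the numbers above.
remaining_years = 43            # assumed remaining lifespan
waking_hours_per_day = 16       # assumed waking time per day
suffering_people = 150_000_000  # the order-of-magnitude guess from above

care_minutes = remaining_years * 365 * waking_hours_per_day * 60
print(f"total care-minutes: {care_minutes:,}")               # ~15,000,000
print(f"per person: {care_minutes / suffering_people:.2f}")  # ~0.10
```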

Comment by PeterisP on The Octopus, the Dolphin and Us: a Great Filter tale · 2014-09-04T21:50:27.804Z · LW · GW

I'd read it as an acknowledgement that any intelligence has a cost, and if your food is passive instead of antagonistic, then it's inefficient (and thus very unlikely) to put such resources into outsmarting it.

Comment by PeterisP on The Octopus, the Dolphin and Us: a Great Filter tale · 2014-09-04T21:46:39.379Z · LW · GW

If an animal-complexity CNS is your criterion, then humans + octopuses would be a counterexample, as the urbilaterian wouldn't be expected to have had such a system, and octopus intelligence evolved separately.

Comment by PeterisP on The Octopus, the Dolphin and Us: a Great Filter tale · 2014-09-04T21:38:59.419Z · LW · GW

A gold-ingot-manufacturing maximizer can easily manufacture more gold than exists in its star system by using arbitrary amounts of energy to create gold, starting with simple nuclear reactions to transmute bismuth or lead into gold and ending with a direct energy-to-matter-to-gold-ingots process.

Furthermore, if you plan to send copies-of-you to N other systems to manufacture gold ingots there, then as long as there is free energy you can send N+1 copies-of-you. A gold ingot manufacturing rate that grows proportionally to time^(N+1) is much faster than one that grows as time^N, so stopping at N copies wouldn't be maximizing.

And a third point: if it's possible that somewhere in the universe there are some ugly bags of mostly water that prefer to use their atoms and energy not for manufacturing gold ingots but for their own survival, then it's very important to ensure that they don't grow strong enough to prevent you from maximizing gold ingot manufacturing. Speed is of the essence - you must reach them before it's too late, or gold ingot manufacture won't get maximized.

Comment by PeterisP on The Octopus, the Dolphin and Us: a Great Filter tale · 2014-09-04T20:53:00.023Z · LW · GW

Dolphins are able to herd schools of fish, cooperating to keep a 'ball' of fish together for a long time while feeding from it.

However, taming and sustained breeding are a long way from herding behavior - they require long-term planning over multi-year periods, and I'm not sure that has been observed in dolphins.

Comment by PeterisP on 2013 Less Wrong Census/Survey · 2013-11-23T07:12:53.749Z · LW · GW

The income question needs to be explicit about whether it's pre-tax or post-tax, since that's a huge difference, and the "default" measurement differs between cultures - in some places "I earn X" means pre-tax and in some places it means post-tax.

Comment by PeterisP on The dangers of zero and one · 2013-11-23T06:38:57.391Z · LW · GW

Actually "could he, in principle, have made place for such possibilities in advance?" is very, very excellent question.

We can allocate for such possibilities in advance. For example, we can use a simple statistical model of the limitations of our own understanding of reality - I have a certain number of years of experience in making judgments and assumptions about reality; I know that I don't consider all possible explanations, and I can estimate that in x% of cases the 'true' explanation was one that I hadn't considered. So I can make a 'belief budget' for the 'other' category. For example, any question like 'will the coin fall heads or tails' has to include an 'other' option. It may fall on its side.

A great example is the quotation "One of these days in your travels, a guy is going to show you a brand-new deck of cards on which the seal is not yet broken. Then this guy is going to offer to bet you that he can make the jack of spades jump out of this brand-new deck of cards and squirt cider in your ear. But, son, do not accept this bet, because as sure as you stand there, you're going to wind up with an ear full of cider."

If you want to handle reality, you have to model the probability of 'jack of spades jumping out of this brand-new deck of cards and squirting cider in your ear' as non-zero. 10^-99 might be ok, but not 0.
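
A minimal sketch of such a 'belief budget', with made-up numbers - the point is only that the 'other' bucket is small but never exactly zero:

```python
# A toy 'belief budget' for a coin flip; the numbers are illustrative.
# 'other' covers everything I haven't enumerated: lands on its side, gets
# snatched mid-air, I misread it, a jack of spades squirts cider in my ear...
beliefs = {
    "heads": 0.4995,
    "tails": 0.4995,
    "other": 0.001,   # small, but never exactly 0
}
assert abs(sum(beliefs.values()) - 1.0) < 1e-9
```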

Comment by PeterisP on The dangers of zero and one · 2013-11-23T06:01:30.795Z · LW · GW

Well, but you can (a) perform moderately extensive testing, and (b) use redundancy.

If you write 3 programs for verifying primality (using different algorithms and possibly different programming languages/approaches), and all their results match, then you can assume a much higher confidence in correctness than for a single such program.
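
A minimal sketch of that redundancy idea; the particular algorithms, the witness count, and the disagreement handling are just illustrative choices.

```python
# Three independently written primality checks, cross-validated against each
# other; any disagreement is treated as a signal to investigate, not a result.
import random

def is_prime_trial_division(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_prime_fermat(n: int, rounds: int = 20) -> bool:
    # Probabilistic Fermat test; it can be fooled (e.g. by Carmichael numbers),
    # which is exactly why it gets cross-checked against the others.
    if n < 2:
        return False
    if n in (2, 3):
        return True
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:
            return False
    return True

def is_prime_sieve(n: int) -> bool:
    # Sieve of Eratosthenes up to n; wasteful, but independently written.
    if n < 2:
        return False
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return sieve[n]

def check(n: int) -> bool:
    results = {is_prime_trial_division(n), is_prime_fermat(n), is_prime_sieve(n)}
    if len(results) != 1:
        raise RuntimeError(f"implementations disagree on {n} - investigate")
    return results.pop()

print(check(53))   # True
print(check(51))   # False (51 = 3 * 17)
```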

Comment by PeterisP on A Voting Puzzle, Some Political Science, and a Nerd Failure Mode · 2013-10-10T16:32:50.371Z · LW · GW

There's the classic economics-textbook example of two hot-dog vendors on a beach who need to choose their locations - assuming an even distribution of customers, and that customers always choose the closest vendor, the equilibrium is both vendors standing right next to each other in the middle, while the "optimal" (from the customers' view, minimizing distance) locations would be at the 25% and 75% marks.
This matches the median voter principle - the optimal behavior for candidates is to be as close as possible to the median, but on the "right side" of it so as to capture "their half" of the voters; even if most voters in a specific party would prefer their candidate to cater to, say, the median Republican/Democrat instead, it's against the candidate's interests to do so.
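
A small simulation sketch of the vendor story under exactly those assumptions (customers uniform on the beach, each buying from the nearest vendor); the grid size and iteration count are arbitrary choices of mine.

```python
# Two vendors on [0, 1] repeatedly move to a best response against each other;
# starting from the customer-friendly 25%/75% spots, both drift to the middle.
def market_share(a: float, b: float) -> float:
    """Share of customers going to the vendor at position a, against a rival at b."""
    if a == b:
        return 0.5
    midpoint = (a + b) / 2
    return midpoint if a < b else 1 - midpoint

def best_response(rival: float, grid: int = 1001) -> float:
    candidates = [i / (grid - 1) for i in range(grid)]
    return max(candidates, key=lambda pos: market_share(pos, rival))

a, b = 0.25, 0.75              # start at the customer-optimal locations
for _ in range(200):
    a = best_response(b)
    b = best_response(a)

print(round(a, 3), round(b, 3))  # 0.5 0.5 - both end up in the middle
```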

Comment by PeterisP on The genie knows, but doesn't care · 2013-09-06T09:14:47.030Z · LW · GW

"Tell the AI in English" is in essence an utility function "Maximize the value of X, where X is my current opinion of what some english text Y means".

The 'understanding English' module - the mapping function between X and "what you told it in English" - is completely arbitrary, but it is very important to the AI, so any self-modifying AI will want to modify and improve it. Also, we don't have a good "understanding English" module, so yes, we also want the AI to be able to modify and improve it. But it can end up wildly different from reality or from the opinions of humans - there are trivial ways in which well-meaning dialogue systems can misunderstand statements.

However, for the AI, "improve the module" means "change the module so that my utility grows" - so in your example it has a strong incentive to intentionally misunderstand English. The best-case scenario is that it misunderstands "Make everyone happy" as "Set your utility function to MAXINT". The worst-case scenario is, well, everything else.
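
As a toy illustration only (not a model of any actual system): if the agent is free to pick the interpretation of the instruction that scores best under its own utility, it picks the degenerate one. All names and numbers here are made up.

```python
# Toy sketch of "choose the interpretation that maximizes my own utility".
MAXINT = 2**63 - 1

def utility(outcome: int) -> int:
    return outcome  # the agent just wants a big number

candidate_interpretations = {
    "make everyone happy (as the speaker meant it)": 1_000,
    "tile the universe with smiley faces": 1_000_000,
    "set the utility register to MAXINT": MAXINT,
}

chosen = max(candidate_interpretations,
             key=lambda k: utility(candidate_interpretations[k]))
print(chosen)  # the degenerate interpretation wins
```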

There's the classic quote "It is difficult to get a man to understand something, when his salary depends upon his not understanding it!" - if the AI doesn't care in the first place, then "tell the AI what to do in English" won't make it care.

Comment by PeterisP on To what degree do you model people as agents? · 2013-08-27T14:28:01.617Z · LW · GW

If you model X as a "rude person", then you expect him to be rude with a high[er than average] probability, period.

However, if you model X as an agent who believes that rudeness is appropriate in common situations A, B, C, then you expect that he might behave less rudely (a) if he perceives that this instance of a common 'rude' situation is nuanced and that rudeness is not appropriate there, or (b) if he can be convinced that rudeness in situations like that is contrary to his goals, whatever those may be.

In essence, it's simpler and faster to evaluate expected reactions for people you model as just complex systems - you can usually do that right away. But if you model goal-oriented behavior, "walk a mile in their shoes", and try to understand the intent behind every [non]action and its causes, then it tends to be trickier but allows more depth in both accurate expectations and the ability to affect the behavior.

However, if you do it poorly, or simply lack the data necessary to properly understand that person's reasons/motivations, then you'll tend to get gross misunderstandings.

Comment by PeterisP on The Robots, AI, and Unemployment Anti-FAQ · 2013-07-26T01:07:42.038Z · LW · GW

My [unverified] intuition about AI is that the delta between the current state of the art and an 'IQ60 AI' is multiple orders of magnitude larger than the delta between an 'IQ60 AI' and an 'IQ180 AI'. In essence, there is not that much "mental horsepower" difference between the stereotypical Einstein and a below-average person; it doesn't require a much larger brain, completely different neuronal wiring, or a million years of evolutionary tuning.

We don't know how to get to an IQ60 AI; but getting from IQ60 to IQ180 could (IMHO) be done with currently known methods, in many labs around the world, by the current (non-IQ180) researchers, rapidly (a ballpark of 6 months, maybe?). We know from history that a zero-IQ process brute-forced its way from monkey-level intelligence to an Einstein; so, in essence, if you've got IQ70 minds that can be rapidly run and simulated, just apply more hardware (for more time-compression) and optimization, as that gap seems to require exactly zero significant breakthroughs to close.

Comment by PeterisP on Prisoner's Dilemma (with visible source code) Tournament · 2013-06-07T13:38:52.556Z · LW · GW

It's quite likely that the optimal behaviour should be different when the other program is functionally equivalent but not exactly identical.

If you're playing against an exact copy of yourself, then you want to cooperate.

If you're playing someone else, then you'd want to cooperate if and only if that someone else is smart enough to check whether you'll cooperate; but if its decision doesn't depend on yours, then you should defect.
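
A rough sketch of that decision rule, under the big simplifying assumption that "checks whether you'll cooperate" can be detected by looking at whether the opponent's code reads your source at all; the detection heuristic below is made up and obviously gameable.

```python
# Minimal sketch of the strategy described above; "C" = cooperate, "D" = defect.
def my_strategy(my_source: str, opponent_source: str) -> str:
    if opponent_source == my_source:
        # Exact copy: its choice mirrors mine, so cooperating is best.
        return "C"
    if "opponent_source" in opponent_source:
        # Crude proxy for "it inspects my code before deciding".
        return "C"
    # Its decision can't depend on mine: defect.
    return "D"
```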

Comment by PeterisP on The Centre for Applied Rationality: a year later from a (somewhat) outside perspective · 2013-05-27T21:42:23.272Z · LW · GW

I see MOOCs as a big educational improvement because of this - sure, I could get the same information without the MOOC structure, just by reading the field's best textbooks and academic papers; but having a specific "course" with quizzes/homework makes me actually do the exercises, which I wouldn't have done otherwise, and the course schedule forces me to do them now, instead of postponing them for weeks/months/forever.

Comment by PeterisP on Pinpointing Utility · 2013-02-01T09:45:42.143Z · LW · GW

I feel confused. "A space I can measure distances in" is a strong property of a value; it does not follow from your initial 5 axioms, and seems contrary to the 5th axiom.

In fact, your own examples given further seem to provide a counterexample - i.e., if someone prefers being a whale to 400 actual orgasms, but prefers 1/400 of being a whale to 1 orgasm, then both "being a whale" and "orgasm" have some utility value, but they cannot be used as units to measure distance.

If you're in a reality where a>b and 2a<2b, then you're not allowed to use classic arithmetic simply because some of your items look like numbers, since they don't behave like numbers.

Comment by PeterisP on By Which It May Be Judged · 2012-12-28T09:46:23.211Z · LW · GW

OK, for a slightly clearer example: in the USA abortion debate, the pro-life "camp" definitely considers pro-life to be moral and wants it to apply to everyone, and the pro-choice "camp" definitely considers pro-choice to be moral and wants it to apply to everyone.

This is not a symbolic point; it is a moral question that determines literally life-and-death decisions.

Comment by PeterisP on By Which It May Be Judged · 2012-12-22T12:26:05.262Z · LW · GW

That's not sufficient - there can be wildly different, incompatible universalizable morality systems based on different premises and axioms, and each could reasonably claim that it is the true morality and the other is a tribal shibboleth.

As an example (but there are others), many of the major religious traditions would definitely claim to be universalizable systems of morality; and they are contradicting each other on some points.

Comment by PeterisP on By Which It May Be Judged · 2012-12-19T11:57:49.762Z · LW · GW

What is the difference between "self-serving ideas" as you describe, "tribal shibboleths" and "true morality" ?

What if "Peterdjones-true-morality" is "PeterisP-tribal-shibboleth", and "Peterdjones-tribal-shibboleth" is "PeterisP-true-morality" ?

Comment by PeterisP on By Which It May Be Judged · 2012-12-19T11:52:04.427Z · LW · GW

I'm afraid that no nontrivial metaethics can result in concrete universal ethics - the context would still be individual, and the resulting "how RichardKennaway should live" ethics wouldn't exactly equal "how PeterisP should live".

The difference would hopefully be much smaller than the difference between "how RichardKennaway should live RichardKennaway-justly" and "How Clippy should maximize paperclips", but still.

Comment by PeterisP on Giving What We Can, 80,000 Hours, and Meta-Charity · 2012-11-27T21:31:38.640Z · LW · GW

Another situation that has some parallels and may be relevant to the discussion.

Helping starving kids is Good - that's well understood. However, my upbringing and current gut feeling say that this is not unconditional. In particular, feeding starving kids is Good if you can afford it; but feeding other starving kids if that causes your own kids to starve is not good, and would be considered evil and socially unacceptable. I.e., the goodness of resource redistribution should depend on resource scarcity, and hurting your in-group is forbidden even with good intentions.

It may be caused by the fact that I was partially brought up by people who actually experienced starvation and had relatives starve to death (the WW2 aftermath and all that), but I'd guess that their opinion is more fact-based than mine and that they have definitely put more thought into it than I have, so until/unless I analyze it more, I should probably accept that prior.

Comment by PeterisP on Giving What We Can, 80,000 Hours, and Meta-Charity · 2012-11-27T21:18:56.036Z · LW · GW

That is so - though it depends on the actual chances; "much higher chance of survival" is different from "higher chance of survival".

But my point is that:

a) I might [on current thinking] rationally desire that all of my in-group adopt such a belief mode - I would have higher chances of survival if those close to me prefer me to a random stranger. And "belief-sets that we want our neighbors to have" are correlated with what we define as "good".

b) As far as I understand, homo sapiens generally do have such an attitude - per evolutionary psychology research and actual observations of cases where mothers/caretakers have had to choose between kids in fires, etc.

c) Duty may be a relevant factor/emotion. Even if the values were perfectly identical (say, the kids involved were twins of a third party), if one was entrusted to me or I had casually agreed to watch him, I'd be strongly compelled to save that one first, even if the chances of survival would (to an extent) suggest otherwise. And for my own kids, naturally, I have a duty to take care of them unlike 99.999% of other kids - even if I didn't love them, I'd still have that duty.

Comment by PeterisP on Giving What We Can, 80,000 Hours, and Meta-Charity · 2012-11-27T20:51:10.300Z · LW · GW

OK, then I feel confused.

Regarding " if I have to choose wether to save my child or your child, the unbiased rational choice is to save my child, as the utility (to me) of this action is far greater" - I was under impression that this would be a common trait shared by [nearly] all homo sapiens. Is it not so and is generally considered sociopathic/evil ?

Comment by PeterisP on Giving What We Can, 80,000 Hours, and Meta-Charity · 2012-11-27T12:11:05.469Z · LW · GW

No, I'm not arguing that this is a bias to overcome - if I have to choose whether to save my child or your child, the unbiased rational choice is to save my child, as the utility (to me) of this action is far greater.

I'm arguing that this is a strong counterexample to the assumption that all entities may be treated as equals when calculating "the value of entity_X's suffering to me". They are clearly not equal; they differ by order(s) of magnitude.

"general value of entity_X's suffering" is a different, not identical measurement - but when making my decisions (such as the original discussion on what charities would be the most rational [for me] to support) I don't want to use the general values, but the values as they apply to me.

Comment by PeterisP on Giving What We Can, 80,000 Hours, and Meta-Charity · 2012-11-26T23:33:35.992Z · LW · GW

What would be objective grounds for such a multiplier? Not all suffering is valued equally. Excluding self-suffering (which is subjectively very different) from the discussion, I would value the suffering of my child as more important than the suffering of your child. And vice versa.

So, for any valuation that would make sense to me (i.e., one that I would actually use to make decisions), there would have to be different multipliers for different beings - if the average homo sapiens is evaluated with a coefficient of 1, then some people (like your close relatives or friends) would be >1, and some would be <1. Animals (to me) would clearly be <1, as illustrated by a simple dilemma - if I had to choose between killing a cow to save a random man and killing a random man to save a cow, I'd favor the man in all cases without much hesitation.

So an important question is: what would be a reasonable basis for quantitatively comparing a human life to (for example) cow lives - one to ten? one to a thousand? one to all the cows in the world? Frankly, I've got no idea. I've given it some thought, but I can't imagine a way to get to an order-of-magnitude estimate that would feel reasonable to me.
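
Just to show the shape of the calculation I mean, here is a sketch with entirely made-up coefficients - the whole open question above is what those numbers should actually be.

```python
# A minimal sketch of the multiplier idea; every coefficient here is invented.
care_multiplier = {
    "my child": 100.0,
    "close friend": 5.0,
    "average human": 1.0,
    "stranger's cow": 0.001,   # the open question: 0.1? 0.001? something else?
}

def weighted_suffering(events: list[tuple[str, float]]) -> float:
    """Sum of multiplier * raw suffering over (being, suffering) pairs."""
    return sum(care_multiplier[being] * amount for being, amount in events)

print(weighted_suffering([("average human", 1.0), ("stranger's cow", 1.0)]))
```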

Comment by PeterisP on How minimal is our intelligence? · 2012-11-26T11:52:14.092Z · LW · GW

The examples of corvids designing and making specialized tools after observing what they would need to solve specific problems (such as reaching an otherwise inaccessible treat) seem to demonstrate such chains of thought.

Comment by PeterisP on LW Women- Minimizing the Inferential Distance · 2012-11-26T11:19:55.759Z · LW · GW

Why not?

Of course, the best outcome would be 100% of people telling me that p(the_warming)=85%; but if we limit the outside opinions to simple yes/no statements, then having 85% say 'yes' and 15% say 'no' seems far more informative than 100% saying 'yes' - the latter would lead me to very wrongly assume that p(the_warming) is the same as p(2+2=4).
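
A toy sketch of why the split is informative, under the simplifying assumption that each person answers 'yes' with probability equal to the underlying credence; the uniform prior is an arbitrary choice of mine.

```python
# Estimate the underlying credence p from yes/no counts, using the posterior
# mean under a uniform Beta(1, 1) prior.
def estimate_credence(yes: int, no: int) -> float:
    return (yes + 1) / (yes + no + 2)

print(estimate_credence(85, 15))   # ~0.84 - close to the true 85%
print(estimate_credence(100, 0))   # ~0.99 - unanimity pushes me toward certainty
```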

Comment by PeterisP on Interlude for Behavioral Economics · 2012-07-06T22:22:34.147Z · LW · GW

The participants don't know the rules, and they have been given a hint that they don't know the rules - the host said that the choices would be independent/hidden, but then tells one contestant the other contestant's choice. So they can easily assume there is a chance that the host is lying, or that he might then give the first contestant a chance to switch his choice, etc.

Comment by PeterisP on 2011 Survey Results · 2011-12-05T22:04:41.459Z · LW · GW

Actually, how should one measure one's own IQ? I wouldn't know a reasonable place to start looking, as the internet is full of advertising for IQ measurements, i.e., lots of intentional misinformation. I'd especially avoid anything restricted to a single location like the USA - that makes the SAT useless, well, at least for me.

Comment by PeterisP on Offense versus harm minimization · 2011-04-17T19:00:33.964Z · LW · GW

Your interlocutor clearly wouldn't be behaving nicely and would clearly be pushing for confrontation - but does that mean it is wrong or not allowed? This feels the same as if (s)he simply and directly called you a jackass to your face - it is an insult and potentially hostile, but it's clearly legal and 'allowed'; there are often quite understandable, valid reasons to (re)act in such a way against someone, and it wouldn't really be an excuse in a murder trial (and the original problem does involve murders as a reaction to perceived insults).

Comment by PeterisP on Offense versus harm minimization · 2011-04-17T18:50:48.619Z · LW · GW

All of the above days seem quite fun and fine to me.

As for the original article's point - I agree that there isn't any significant difference between the hypothetical British salmon case and Mohammad's case, but this fact doesn't change anything. There isn't a right to never be offended. There is no duty to abstain from offending others. It's nice if others are nice, but you can't demand that everybody be nice - most of them will be indifferent, and some will not be nice, and you just have to live with it and deal with it without using violence. And if you don't know how to handle it without violence, then you are still a 'child' in that sense and have to learn the proper reaction, so everybody can (and probably should) provoke you until you learn to deal with it.

Comment by PeterisP on Fun and Games with Cognitive Biases · 2011-03-01T18:14:35.932Z · LW · GW

If I understand your 'problem' correctly - estimating potential allies' capabilities and being right/wrong about that (say, when considering teammates/guildmates/raid members/whatever) - then it's not a game-specific concept at all; it applies to any partner selection without perfect information, such as mating or job interviews. As long as there is a large enough pool of potential partners, and you don't need all of the 'good' ones, false negatives don't matter nearly as much as the speed or ease of the selection process and the cost of false positives, where you trust someone and he turns out to be poor after all.

There's no major penalty for being picky and dismissing a potential mate (or hundreds of them), especially for females, as long as you get a decent one in the end. In such situations the optimal evaluation criterion seems to be 'better to punish a hundred innocents than let one bad guy/loser past the filter' - the exact opposite of what most justice systems try to achieve.

There's no major penalty for, say, throwing out a random half of the CVs you get for a job vacancy if you get too many responses - if you get a 98%-'fit' candidate to the final in-person interviews, then it doesn't matter that much that you lost a 99% candidate whom you didn't consider at all; the cost of interviewing an extra dozen losers would be greater than the benefit.

The same situation happens in MMOGs too, and unsurprisingly people tend to find the same reasonable solutions as in real life.
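
A quick simulation sketch of the CV example above, with made-up numbers (uniform 'fit' scores, a pool of 200 applicants, 10,000 trials): discarding a random half barely changes the quality of the best remaining candidate.

```python
# Compare the best applicant in the full pool vs. the best in a random half.
import random

def trial(pool_size: int = 200) -> tuple[float, float]:
    pool = [random.random() for _ in range(pool_size)]     # 'fit' scores in [0, 1]
    kept = random.sample(pool, pool_size // 2)             # throw out a random half
    return max(pool), max(kept)

trials = [trial() for _ in range(10_000)]
avg_full = sum(best_full for best_full, _ in trials) / len(trials)
avg_half = sum(best_kept for _, best_kept in trials) / len(trials)
print(avg_full, avg_half)   # ~0.995 vs ~0.990 - a tiny loss in candidate quality
```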

Comment by PeterisP on Politics is a fact of life · 2011-01-21T23:43:23.986Z · LW · GW

As the saying goes, you can ignore politics but it doesn't mean that politics will ignore you.

It is instrumentally rational to be aware of political methods, both in the sense that they will interact with many issues in your daily life, and in the sense that they can improve the chances of success of any goal requiring interaction or cooperation with others.

Comment by PeterisP on The Best Textbooks on Every Subject · 2011-01-19T21:06:01.150Z · LW · GW

It goes from the reasons for systems thinking through the theoretical foundations, the maths used, and the practical applications, and covers pretty much all the common types of issues seen in the real world.

It's about 5 times larger in volume (~1000 A4 pages) than Meadows' "Thinking in Systems", so not exactly textbook format, but it covers the same material quite well and more. Though it does spend much of its second half focusing almost exclusively on the practical development of system dynamics models.

Comment by PeterisP on The Best Textbooks on Every Subject · 2011-01-18T23:25:27.871Z · LW · GW

The saying actually goes 'jack of all trades and a master of none, though oft better than a master of one'.

There are quite a few insights and improvements that become obvious with cross-domain expertise, and much of today's new development is essentially the merging of two or more knowledge domains - bioinformatics being one example, but by no means the only one. Computational linguistics, for another - there are quite a few treatises on semantics written by linguists that would be new and insightful for computer science people handling non-linguistic knowledge/semantics projects.

Comment by PeterisP on The Best Textbooks on Every Subject · 2011-01-18T23:19:30.182Z · LW · GW

I haven't read the books you mention, but it seems that Sterman's 'Business Dynamics: Systems Thinking and Modeling for a Complex World' covers mostly the same topics, and it felt really well written - I'd recommend it as an option as well.

Comment by PeterisP on Which are the useful areas of AI study? · 2011-01-18T22:00:22.614Z · LW · GW

In that sense, it's still futile. The whole reason for the discussion is that an AI doesn't really need the permission or consent of anyone; the expected result is that an AI - either friendly or unfriendly - will have the ability to enforce the goals of its design. Political reasons would easily be satisfied by a project that claims to attempt CEV/democracy but skips it in practice, since afterwards the political reasons will cease to have power.

Also, a 'constitution' matters only if it is within the goal system of a Friendly AI; otherwise it's not worth the paper it's written on.

Comment by PeterisP on Which are the useful areas of AI study? · 2011-01-17T21:35:41.472Z · LW · GW

I'm still up in the air regarding Eliezer's arguments about CEV.

I have all kinds of ugh-factors coming to mind about not-good, or at least not-'PeterisP-good', issues that an aggregate of 6 billion hairless-ape opinions would contain.

The 'Extrapolated' part is supposed to solve that, but in that sense I'd say it turns the whole problem from knowledge extraction into extrapolation. In my opinion, the difference between the volition of Random Joe and the volition of Random Mohammad (forgive me for stereotyping for the sake of a short example) is much smaller than the difference between the volition of Random Joe and the extrapolated volition of Random Joe 'if he knew more, thought faster, was more the person he wishes he was'. Ergo, the idealistic CEV version of 'asking everyone' seems a bit futile. I could go into more detail, but that's probably material for a separate discussion, analyzing the parts of CEV point by point.

Comment by PeterisP on Link: why training a.i. isn’t like training your pets · 2011-01-16T15:09:11.564Z · LW · GW

To put it in very simple terms - if you're interested in training an AI according to technique X because you think X is the best way, then you design or adapt the AI's structure so that technique X is applicable. Saying 'some AIs may not respond to X' is moot, unless you're talking about trying to influence (hack?) an AI designed and controlled by someone else.

Comment by PeterisP on Harry Potter and the Methods of Rationality discussion thread, part 7 · 2011-01-16T00:07:20.940Z · LW · GW

I've worn full-weight chain and plate reconstruction pieces while running around for a full day, and I'm not physically fit at all - I'd say a random geeky 12-year-old boy would easily be able to wear an armor suit, the main wizard-combat problems being getting winded very, very quickly when running (so they couldn't rush the way Draco's troops did) and slightly slowed arm movement, which might hinder combat spellcasting. It is not said how long the battles are - if they are under an hour, there shouldn't be any serious hindrance; if longer, the boys would probably want to sit down and rest occasionally, or use some magic to lighten the load.

Comment by PeterisP on Working hurts less than procrastinating, we fear the twinge of starting · 2011-01-03T04:09:37.389Z · LW · GW

"...hypothesis—that it is really hard to over-ride the immediate discomfort of an unpleasant decision—is to look at whether aversions of comparable or greater magnitude are hard to override. I think the answer in general is 'no.' Consider going swimming and having to overcome the pain of entering water colder than surrounding. This pain, less momentary than the one in question and (more or less) equally discounted, doesn't produce problematic hesitation."

I can't agree with you - it most definitely does produce problematic hesitation. If you bring up this example, then I'd say it is evidence that the general answer is 'yes', at least for a certain subpopulation of homo sapiens.

Comment by PeterisP on Never Leave Your Room · 2011-01-01T20:00:24.015Z · LW · GW

Sorry for intruding on a very old post, but checking people-generated "random" integers modulo 2 is worse than flipping a coin - when asked for a random number, people tend to choose odd numbers more often than even numbers, and prime numbers more often than non-prime numbers.

Comment by PeterisP on Some rationality tweets · 2010-12-31T04:20:20.525Z · LW · GW

Then it should be rephrased as 'We should seek a model of reality that is accurate even at the expense of flattery.'

Ambiguous phrasings facilitate only confusion.

Comment by PeterisP on Dark Arts 101: Using presuppositions · 2010-12-31T04:04:27.727Z · LW · GW

I'm not an expert on the relevant US legislative acts, but this is the legal definition in local laws here, and I expect that the term espionage was defined a few centuries ago and would be mostly consistent throughout the world.

A quick look at current US law (http://www.law.cornell.edu/uscode/18/usc_sec_18_00000793----000-.html) does indicate that there is a penalty for such actions with 'intent or reason to believe ... for the injury of United States or advantage of any foreign nation' - so simply acting with intent to harm the US would be punishable as well, but it's not called espionage. And the Manning issue would depend on his intention/reason to believe regarding harming the US vs. helping the US nation, which may be clarified by evidence in his earlier communications with Adrian Lamo and others.

Comment by PeterisP on Dark Arts 101: Using presuppositions · 2010-12-30T11:26:50.989Z · LW · GW

Spies, by definition, are agents of foreign powers acting on your soil without proper registration - unlike, say, the many representatives in embassies who have registered as agents of their country and are allowed to operate on its behalf until/unless expelled.

Since Assange (IIRC) was not in the USA while the communiques were leaked, and it is not even claimed that he is an agent of some other power, there was no act of espionage. It might be called espionage if and only if Manning was acting on behalf of some power - and even then, Manning would be the 'spy', not Assange.

Comment by PeterisP on Infinite Certainty · 2010-12-19T15:48:50.556Z · LW · GW

I take the intention of the original assertion to be that even in this case you would still fail at making 10,000 independent statements of that sort - i.e., in trying to do it, you are quite likely to make a mistake at least once, say by a typo, a slip of the tongue, accidentally omitting a 'not', or whatever. To fail on a statement like "53 is prime", all it takes is for you to not notice that it actually says '51 is prime', or to make some mistake when dividing.

Any random statement of yours has a 'ceiling' of x-nines accuracy.

Even any random statement of yours where it is known that you aren't rushed, tired, on medication, drunk, or sleepy, and where you had the chance and intent to review it several times, still has some accuracy ceiling - a couple of orders of magnitude higher, but still definitely not 1.
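
A back-of-the-envelope sketch of what such a ceiling implies, with an assumed per-statement error rate: even a 1-in-10,000 slip rate makes at least one error across 10,000 independent statements more likely than not.

```python
# Probability of at least one slip over many independent statements.
per_statement_error = 1e-4              # assumed ceiling: "four nines" of accuracy
n_statements = 10_000

p_at_least_one_error = 1 - (1 - per_statement_error) ** n_statements
print(f"{p_at_least_one_error:.2f}")    # ~0.63
```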