Posts

Comments

Comment by andrewkemendo on Scaling Evidence and Faith · 2009-12-29T10:11:12.339Z · score: 1 (1 votes) · LW · GW

I had not read that part. Thanks.

I do not see any difference between inductive bias as it is written there and the dictionary and Wikipedia definitions of faith:

Something that is believed especially with strong conviction (http://www.merriam-webster.com/dictionary/faith)

Faith is to commit oneself to act based on sufficient experience to warrant belief, but without absolute proof.

Comment by andrewkemendo on Scaling Evidence and Faith · 2009-12-29T10:06:14.234Z · score: 0 (0 votes) · LW · GW

I think you, EY, and most others use the term faith in a historical context related to religion, rather than in its definitional context as it relates to the epistemological concern of trust in an idea or claim.

The best definition I have found so far for faith is thus:

Faith is to commit oneself to act based on sufficient experience to warrant belief, but without absolute proof.

So I have no problem using faith and induction interchangeably because it is used just as you say:

inferring the future from the past (or the past from the present), which basically requires the universe to consistently obey the same laws.

Religions claim that they do this. Of course they don't because they do not apply a constant standard to their worldview to all events. It is not because of their faith that they are wrong, it is because of their inconsistent application of accepting claims and ignoring evidence.

The point of the system is to deconstruct why you see their claims of evidence as faith and vice versa. Hence the incorruptible example.

Comment by andrewkemendo on Scaling Evidence and Faith · 2009-12-28T07:07:17.911Z · score: 4 (6 votes) · LW · GW

Intuition (what you call "faith") is evidence.

If you will, please define intuition as you understand it.

From how I understand it, intuition is knowledge whose origin cannot be determined. I have certainly experienced the "I know I read something about that somewhere but I just can't remember" feeling before and been right about it. However, just as often I have been wrong about conclusions I reached through this means.

I think your entire post gives the same visceral description as someone would give of having "felt the holy spirit" or some other such nonsense.

I honestly think that the issue of intuition is a MAJOR hurdle for rationality. I tend to err on the side of intuition being false evidence - hence my point that our heuristics fill in the blanks. That is why I categorize intuition similarly to faith.

Comment by andrewkemendo on Scaling Evidence and Faith · 2009-12-28T06:59:43.986Z · score: 2 (2 votes) · LW · GW

confidence level.

Most people do not understand what confidence intervals or confidence levels are, at least in my experience. Unless you have had some statistics training (even basic), you probably haven't heard of them.

Comment by andrewkemendo on Scaling Evidence and Faith · 2009-12-28T06:58:08.585Z · score: 0 (2 votes) · LW · GW

I think it improperly relabels "uncertainty" as "faith."

Perhaps. The way I see uncertainty as it pertains to a claim is that there will almost always be a reasonable counterclaim; dismissing the counterclaim and accepting the premise is faith in the same sense.

The only thing one truly must have faith in (and please correct me if you can; I'd love to be wrong) is induction, and if you truly lacked faith in induction, you'd literally go insane.

Intuition and induction are, in my view, very similar to what is understood as faith. I failed to make that clear; however, I would use the terms interchangeably.

I recognize that faith is a touchy issue because it is so dramatically irrational and essentially leads down a slippery slope. I view the issue similarly to how the case was made for selecting correct contrarian views: we are drawing approximate conclusions about what we do not know, or about counterclaims.

Comment by andrewkemendo on On the Power of Intelligence and Rationality · 2009-12-25T14:10:15.510Z · score: 2 (2 votes) · LW · GW

Sure. What's not rational is to believe ... politicians

I think that is likely the best approach.

Comment by andrewkemendo on On the Power of Intelligence and Rationality · 2009-12-23T12:32:56.627Z · score: 1 (5 votes) · LW · GW

Your argument seems to conclude that:

It is impossible to reason with unreasonable people

Agreed. Now what?

Ostensibly your post is about how to swing the ethos of a large group of people towards behaving differently. I would argue that has never been necessary and still is not.

A good hard look at any large political or social movement reveals a small group of very dedicated and motivated people, and a very large group of passive marginally interested people who agree with whatever sounds like it is in their best interest without them really doing too much work.

So can rationality work on a large scale? Arguably, it always does. I rarely hear political or social arguments that are obviously (to everyone) pure hokum. If you look at how the last four U.S. presidents campaigned, it was always on "save you money" talking points and a "less waste, more justice" platform. All rational things in the mind of the average person.

I think, however, your implication is that rationality is not always obviously rational. Well, friend, that is why you have to completely understand the implications of rational decision making, in terms the majority can agree on, in order to explain why they are better decisions. You often have to connect the dots for people so that they can see how to get from some contrarian or "non-intuitive" idea to their goal of raising a happy family.

This is the essence of "selling." Of course spinners and politicians sell lots of crap to people by telling half-truths, making overcomplicated arguments, or simply lying outright. These are obviously disingenuous. If you need to lie to sell your ethos, it is probably wrong. That, or you just aren't wise enough to make it comprehensible.

Comment by andrewkemendo on December 2009 Meta Thread · 2009-12-17T11:54:08.390Z · score: 4 (4 votes) · LW · GW

I am not a fan of internet currency in all its forms generally because it draws attention away from the argument.

Reddit, which this system is based on, disabled subtractive karma for all submissions and comments. Submissions with more down votes than up votes just don't go anywhere, while negatively voted comments get buried, similar to how they do here. That seems like a good way to organize the system.

Was the karma system implemented as signaling for other users, or is it just an artifact of the Reddit API? Would disabling the display of "points" simultaneously disable the comment ranking? What would be the most rational way to organize the comments? The least biased way would be to order them by time. The current way, which is how Reddit works, is direct democracy, and that of course is the tyranny of the majority. The current way may be the most efficient if readers value their time so highly that they only read the most popular comments and skip the rest. However, even if that is efficient, it is not necessarily optimized to elucidate the best discussion points, as users typically vote up things they agree with rather than strong arguments.
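To make the contrast concrete, here is a minimal sketch (all comment IDs, vote counts, and times are hypothetical, not drawn from any actual karma system) of the two orderings discussed above: ranking comments by net vote score versus ranking them purely by submission time:

```python
from datetime import datetime, timedelta

# Hypothetical comment records: up votes, down votes, submission time.
now = datetime(2009, 12, 17)
comments = [
    {"id": "a", "ups": 10, "downs": 2, "time": now - timedelta(hours=5)},
    {"id": "b", "ups": 3, "downs": 0, "time": now - timedelta(hours=1)},
    {"id": "c", "ups": 4, "downs": 9, "time": now - timedelta(hours=3)},
]

# "Direct democracy": rank by net score, highest first.
by_score = sorted(comments, key=lambda c: c["ups"] - c["downs"], reverse=True)

# "Least biased": rank purely by time, oldest first.
by_time = sorted(comments, key=lambda c: c["time"])

print([c["id"] for c in by_score])  # ['a', 'b', 'c']
print([c["id"] for c in by_time])   # ['a', 'c', 'b']
```

Note that heavily downvoted comment "c" disappears to the bottom under the score ordering but keeps its place under the time ordering, which is exactly the trade-off between efficiency and the tyranny of the majority described above.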

I personally do not submit more responses and posts because of the karma system. As I have seen repeatedly on Reddit, there is karma momentum: people tend to vote the way others have already voted (as human nature would dictate). Because of that, I know people will reference a submitter's total points and decide how to take their comments and suggestions in light of that primed information, when the arguments should be evaluated independently.

Maybe I'm missing something though.

Comment by andrewkemendo on The Amanda Knox Test: How an Hour on the Internet Beats a Year in the Courtroom · 2009-12-13T06:43:20.323Z · score: 1 (1 votes) · LW · GW

The most important of which is: if you only do what feels epistemically "natural" all the time, you're going to be, well, wrong.

Then why do I see the term "intuitive" used around here so much?

I say this by way of preamble: be very wary of trusting in the rationality of your fellow humans, when you have serious reasons to doubt their conclusions.

Hmm, I was told here by another LW user that the closest thing humans have to truth is consensus.

Somewhere there is a disconnect between your post and much of the consensus, at least in practice, of LW users.

Comment by andrewkemendo on A question of rationality · 2009-12-13T03:27:41.802Z · score: 2 (6 votes) · LW · GW

From my understanding, Mr. Yudkowsky has two separate but linked interests: rationality, which predominates in his writings and blog posts, and designing AI, which is his interaction with SIAI. While I disagree with their particular approach (or lack thereof), I can see how it is rational to pursue both simultaneously toward similar ends.

I would argue that rationality and AI are really the same project at different levels and different stated outcomes. Even if an AI never develops, increasing rationality is a good enough goal in and of itself.

Comment by andrewkemendo on Probability Space & Aumann Agreement · 2009-12-12T12:50:21.285Z · score: 0 (0 votes) · LW · GW

I suppose my post was poorly worded. Yes, in this case Ω is the reference set of possible world histories.

What I was referring to was the baseline of w as an accurate measure. It is a normalizing reference, though not a set.

Comment by andrewkemendo on Probability Space & Aumann Agreement · 2009-12-11T11:53:42.614Z · score: -1 (1 votes) · LW · GW

The main problem I have always had with this is that the reference set is "actual world history" when in fact that is the exact thing that observers are trying to decipher.

We all realize that there is in fact an "actual world history"; however, if it were known, this wouldn't be an issue. Using it as a reference set, then, seems spurious for all practical purposes.

The most obvious way to achieve it is for the two agents to simply tell each other I(w) and J(w), after which they share a new, common information partition.

I think that summary is a good way to interpret the problem I raised, in as practical a manner as is currently available; I would note, however, that most people arbitrarily weight observational inference, so there is a skewing of the data.

The sad part about the whole thing is that two (or more) observers exchanging information may each be the same deviation away from w, such that their combined probabilities over I(w) are further from w than either's individually.
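For readers who want the partition-exchange mechanics made concrete, here is a minimal sketch (the worlds and partitions are invented purely for illustration) of two agents telling each other their information cells I(w) and J(w) and intersecting them, as the quoted passage describes:

```python
# Finite reference set of possible world histories.
omega = {1, 2, 3, 4, 5, 6}

# Each agent's information partition: sets of worlds the agent
# cannot distinguish from one another.
I = [{1, 2}, {3, 4}, {5, 6}]   # agent 1's partition
J = [{1, 3}, {2, 4}, {5, 6}]   # agent 2's partition

def cell(partition, w):
    """Return the partition cell containing world w, i.e. I(w) or J(w)."""
    return next(c for c in partition if w in c)

w = 2  # the actual world history (unknown to either agent alone)

# Each agent announces its cell; both then know the intersection,
# which is their shared, refined information about w.
shared = cell(I, w) & cell(J, w)
print(shared)  # {2}
```

In this toy case the exchange pins down the actual world exactly; the worry raised above is that when agents report skewed probabilities rather than exact cells, combining reports need not move them closer to w.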

Comment by andrewkemendo on Science - Idealistic Versus Signaling · 2009-12-08T07:44:14.143Z · score: 2 (2 votes) · LW · GW

1) In the pursuit of truth, you must always be on the lookout for the motive force of the resource-seeking that hinges on not finding the truth.

I think this sums up the "follow the money" axiom quite nicely.

Comment by andrewkemendo on Science - Idealistic Versus Signaling · 2009-12-08T07:25:33.287Z · score: 4 (4 votes) · LW · GW

There is a fantastic 24-part CBC podcast called How to Think About Science (mp3 format here). It interviews 24 different research scientists and philosophy-of-science experts on the history of, and different views on, the scientific process, historical trends, and the role of science in society. It is more than worth the time to listen to.

I have found that the series confirms what scientists have long known: researchers rarely behave differently as a group than any other profession, yet they are presented by most as an unbiased, objective, homogeneous group (of course there are always outliers). Indeed, the sciences are much more social than most would admit, and I think, as you point out, peer review best exemplifies "social networking."

This is nothing new; after all, theories and their acceptance have meant nothing without a strong group of well-respected researchers around them.

Comment by andrewkemendo on Parapsychology: the control group for science · 2009-12-06T02:28:04.839Z · score: 11 (17 votes) · LW · GW

In no way do I think that the parapsychologists have good hypotheses or reasonable claims. I also am a firm adherent to the ethos: Extraordinary claims must have extraordinary proofs. However to state the following:

one in which the null hypothesis is always true.

is making a bold statement about your level of knowledge. You are going so far as to say that there is no possible way that there are hypotheses, yet to be described, which could be understood through the methodology of this particular subgroup. This exercise seems to me to be rejecting these studies intuitively (without study), from a purely ad hominem approach to rejection: they are parapsychologists, therefore they are wrong. If they are wrong, then proper analysis would indicate that, would it not?

I have never seen a parapsychology study, so I will go look for one. However does every single study have massive flaws in it?

Comment by andrewkemendo on The Difference Between Utility and Utility · 2009-12-05T05:18:35.321Z · score: 1 (1 votes) · LW · GW

See my response here

You want to consider the utility of the terrorists, at the appropriate level of detail.

Huh? Yes it will. You mean "you will still find it undesirable and or hard for you to understand".

What are the units for expected utility? How do you measure them? Can you graph my utility function?

I can look at people's behavior and say that on this day Joe bought 5 apples and 4 oranges, on that day he bought 2 kiwis, 2 apples, and no oranges, etc., but that data doesn't reliably forecast his expected utility for oranges. There are so many exogenous variables that the data is reliably unreliable.

I have yet to see a researcher give a practical empirical formula mapping the utility of a person or group. I argue it is because it is (currently) impossible, thus trying to do so doesn't make sense in practice. I have, however, as demonstrated in the link above, seen formulas which imply weighted preference sets. Those aren't any more useful or descriptive than saying that Joe prefers apples to oranges.

Comment by andrewkemendo on The Difference Between Utility and Utility · 2009-12-05T05:06:59.845Z · score: 0 (0 votes) · LW · GW

efficient markets quite by definition are allowing greater progress along individual value scales than inefficient markets, though not necessarily as much progress as some further refinement

Inefficient markets are great for increasing individual wealth of certain groups. I think Rothbard would disagree with the second point (regulation) - as would I.

In short, I, and much of the modern profession of economics, hold little attachment to the origins of economic theory (though I am surprised that you didn't include Smith's Wealth of Nations in your list, being more directly foundational for economics through the 19th century).

The Wealth of Nations was built on the philosophical foundations set in TMS; it is even referenced as such, with Smith labeling economics as the study of the nature of morality.

Indeed, you define out the very kind of economics that is most prevalent in modern departments (Berkeley perhaps excepted): mathematical models that seek to understand and predict how humans will act.

Explain to me how that is different from statistics. You cannot do economics without good statistics, but if it stops there, then you are, by definition, a statistician. The fact that you are discussing markets is irrelevant.

As I said in other responses, modern economics seeks to be little more than advanced statistics, as you mentioned. You undoubtedly took econometrics, so you will know what I am referencing. Masters-level economics might as well be a masters in statistics currently.

The reason this is the case is that political economy was getting a bad rap around the time of the U.S. depression of 1893 and was being marginalized to the point of extinction. The result was that Thorstein Veblen, Alfred Marshall, and others formed what we now call neoclassical economics in the late 19th century. At that point the bases and market theories implicit in the assumptions of the Smith, Marx, and Mises camps diverged. Further study revolved around supply and demand, the labor theory of value, or time-preference assumptions, with the first taking the broadest foothold.

Again, this is the stuff that economic philosophers debate and really has no relation to the original topic at this level.

Comment by andrewkemendo on The Difference Between Utility and Utility · 2009-12-04T10:15:09.595Z · score: 0 (2 votes) · LW · GW

The description you gave of economic theory completely ignores the origins of micro and macro economics, price theory and comparative economics.

The assumptions that underlie these disciplines are normative.

Steve Levitt's finding that the availability of abortion caused a lagged decrease in crime.

Actually that is descriptive statistics. Just as I pointed out before - economics without normative conclusions is statistics.

Doubtful, but in your undergrad you might have read one of the following:

Adam Smith's Theory of Moral Sentiments

John Maynard Keynes' General Theory of Employment

John Kenneth Galbraith's The Affluent Society

Marx's Kapital

Even more doubtful Friedman or Rothbard

These are all philosophical works and serve as the foundations of the economics discipline. All detail, first, theories about how markets form and work, and second, how to make those markets more efficient based on their own unique goal-seeking behavior.

More than likely, however, you predominantly used Barreto and Howland's Econometrics, Greg Mankiw's Microeconomics, Paul Krugman's International Economics, or some other such text which does not describe the assumptions that developed the theories behind standard economic concepts. Yes, full employment is in fact a normative conclusion.

I do not dispute that there is a significant descriptive aspect to economics, especially at the undergraduate level. Once you start to actually do economic analysis in real life - public policy is a blatant example - it becomes clear that it is indeed normative.

This discussion got off track, however. What we are discussing now does not really add to the discussion at hand, and that is arguably my own doing because I brought the point up. My original reply was an attempt to refute a false dichotomy, and perhaps I did not do a good enough job of pointing that out. So let me do that now.

Utilitarianism as developed and introduced by Bentham was devised as a way to measure unitless "utility" for the ends of driving normative change toward hedonistic goals. It is possible to divorce the method from its origins and use the formulaic theory to simply describe a preference set. Doing so, however, only gives us a statistical metric firmly in the realm of mathematics, something which requires little knowledge of markets. Economists have used utility in both manners, leaning more heavily on the latter in recent decades. Thus if you are using utilitarian theory normatively, you are truly using the original economic theory, not simply the statistical methodology that was birthed from it.

People around here seem to use the terms interchangeably without proper context. When I see someone here say "maximize utility," either you are using Bentham's hedonic calculation method, in which the goals are implicit and you mean "maximize hedonistic happiness," OR you are using the divorced economic mechanism and are incoherent because you have not defined your goal-seeking terms.

Comment by andrewkemendo on The Difference Between Utility and Utility · 2009-12-03T02:27:43.152Z · score: 0 (0 votes) · LW · GW

Economics can conclude "If you want X then you should do Y".

This is what economists are trying to do now. Yet implicit in their advice are normative economic principles that comprise the set of Xs: full employment, lower inflation, lower taxes, higher revenue, etc. Obviously whoever wants X is normatively seeking a solution. As a result the analysis must be normative as well; it is implicit in the formulation.

The economists themselves may have no feelings one way or the other, but they are using economic and statistical principles toward normative ends, even if those ends are not their own. This is why I found the economics discipline so frustrating. Everyone wants to be a human calculator, forgetting that they are being used to solve someone else's philosophical dilemma.

Z is something that probably will happen therefore Z is something that should happen. This tends to invoke my contempt, particularly since it is seldom applied consistently.

As well it should. What you described is still normative, only it applies a naturalistic fallacy spin on the normative conclusion.

Comment by andrewkemendo on The Difference Between Utility and Utility · 2009-12-03T02:13:29.227Z · score: -2 (2 votes) · LW · GW

Murder can increase utility in the economist's utility function

That is really immaterial, though, and computationally moot. OK, so his "utility function" is negative. Is that it? Is that the difference? Besides, I would argue that reevaluating it on those terms does a poor job of actually describing motivation as a coherent set.

Yet murdering is a net negative in the ethicist's utility function.

Isn't it in the economist's? These things aren't neutral.

The broader aspect that economists seek is normative. You said it yourself in the economists assumptions. Assumptions are not exogenous when calculating value, try as they may.

Most good studies will explain in their presentation why their methodology is as it is, and why understanding their paper will solve a problem or lead to a conflict resolution. That was the purpose behind applied economic game theory: optimizing equilibria in previously zero-sum outcomes and eliminating dominated strategies in competition. One cannot successfully separate economics from ethics (I would argue this holds true for all but the explicitly classifying sciences, such as chemistry and cladistics).

If we are simply talking about mathematical notation, then feel free to slap a negative sign on the expected utility portion for terrorists in the "aggregate worldwide utility" formula. It still won't make any sense in practice.

Comment by andrewkemendo on The Difference Between Utility and Utility · 2009-12-02T07:45:55.734Z · score: 0 (4 votes) · LW · GW

As I asked in response to your other argument: Who has given utility this new definition?

I think perhaps there is a disconnect between the origins of utilitarianism, and how people who are not economists (Even some economists) understand it.

You, as well as black belt bayesian, are making the point that utilitarianism as used in an economic sense is somehow non-ethics-based, which could not be more incorrect, as utilitarianism was explicitly developed with goal-seeking behavior in mind - stated by Bentham as the greatest hedonic happiness. It was not derived as a simple calculator, and is rarely used as such in serious academic work because it is so insanely sloppy, subjective, and arguably useless as a metric.

True, some economists do use it, and it is introduced in academic economic theory as a mathematical principle, but I have yet to see an authoritative study which uses expected utility as a variable, nor was it presented in my undergraduate economics program as a reliable measure - again, this is why you do not see it in authoritative works.

You both imply that the economics version of utility is non-normative. Again, as I said before, it was created specifically to guide economic decision making in how homo economicus should act. Does the fact that it can be used both normatively and objectively in economic decision making change the definition? No, because as you said, they use the same math. People forget that political economics was and still is normative, whether economists want it to be or not.

Which leads me to what I think is the root of this problem: understanding what economics is. At its heart economics is descriptive, prescriptive, and normative. Current trends in economics seek to turn the discipline into a physics-esque one that merely describes economic patterns. Yet even these camps must hold the natural rate of employment as good, trade as enhancing, public goods as multiplicative goods, etc. Lest we forget that Keynesianism was hailed as the next great coming that would revolutionize the way humans interact. Economics without normative conclusions is just statistics.

I realize it is a semantic point; however, if we want to use a term, let's use it correctly. I know Mr. Yudkowsky has posted before about the uselessness of debating definitions; however, we are talking about the same thing here.

All of this redefining utility discussion smacks of cognitive dissonance to me because it seems to be looking to find some authority on the use of the term utility in the way that people around here want to use it. If you want to use normative utilitarianism then you'll have great fun with Bentham's utilitarianism as it is and has always been normative. The beef seems to lie between expected and average utility - which are both still normative anyway so it is really a moot point.

I have thought of making a separate post on utilitarianism, its history, and its errors, mostly because it is the aspect I have been most interested in for the past decade. However, I doubt it would give any more information than what already exists on the web and in texts for any interested parties.

edit: Here is a perfect example of my point about the silliness of expected utility calculation in empirical metrics. The author uses VNM Expected utility based on assumed results of expected utility in terms of summed monetary and psychic income. There are no units, there is no actual calculation. There are however nice pretty formulas which do nothing for us but restate that a terrorist must gain more from his terrorism than other activities.

Comment by andrewkemendo on A Nightmare for Eliezer · 2009-11-30T01:50:06.543Z · score: 0 (0 votes) · LW · GW

Any AGI will have all the dimensions required for human-level or greater intelligence. If it is indeed smarter, then it will be able to figure the theory out itself, if the theory is obviously correct, or find a way to acquire it more efficiently.

Comment by andrewkemendo on A Nightmare for Eliezer · 2009-11-29T02:59:30.560Z · score: 0 (0 votes) · LW · GW

I'm trying to be Friendly, but I'm having serious problems with my goals and preferences.

So is this an AGI or not? If it is, then it's smarter than Mr. Yudkowsky and can resolve its own problems.

Comment by andrewkemendo on Getting Feedback by Restricting Content · 2009-11-28T03:38:35.960Z · score: 1 (1 votes) · LW · GW

[P]resent only one idea at a time.

Most posts do present one idea at a time. However it may not seem like it because most of the ideas presented are additive - that is, you have to have a fairly good background on topics that have been presented previously in order to understand the current topic. OB and LW are hard to get into for the uninitiated.

To provide more background and context, with the necessarily larger numbers of ideas being presented, while still getting useful feedback from readers.

That is what the sequences were designed to do - give the background needed.

Comment by andrewkemendo on Friedman on Utility · 2009-11-23T02:04:50.587Z · score: 1 (3 votes) · LW · GW

it just takes the understanding that five lives are, all things being equal, more important than four lives.

Your examples rely too heavily on "intuitively right" and ceteris paribus conditioning. It is not always the case that five are more important than four, and the mere idea has been debunked several times.

if people agree to judge actions by how well they turn out general human preference

What is the method you use to determine how things will turn out?

similarity can probably make them agree on the best action even without complete agreement on a rigorous definition of "well"

Does consensus make decisions correct?

Comment by andrewkemendo on Friedman on Utility · 2009-11-22T21:23:49.017Z · score: -1 (3 votes) · LW · GW

The economist's utility function is not the same as the ethicist's utility function

According to who? Are we just redefining terms now?

As far as I can tell, your definition is the same as Bentham's, only with rules that bind the practitioner more weakly.

I think someone started (incorrectly) using the term and it has taken hold. Now a bunch of cognitive dissonance is fancied up to make it seem unique because people don't know where the term originated.

Comment by andrewkemendo on A Less Wrong singularity article? · 2009-11-18T02:32:11.344Z · score: 3 (3 votes) · LW · GW

This is a problem for both those who'd want to critique the concept, and for those who are more open-minded and would want to learn more about it.

Anyone who is sufficiently technically minded undoubtedly finds frustration in reading books that give broad-brush-stroke counterfactuals for decision making and explanation without delving into the details of their processes. I am thinking of books like Freakonomics, The Paradox of Choice, Outliers, Nudge, etc.

These books are very accessible but lack the in-depth analysis that allows them to be thoroughly critiqued and understood. Writings like Global Catastrophic Risks, and the other written deconstructions of the necessary steps of a technological singularity, lack those spell-it-out-for-us-all sections that Gladwell et al. make their living from. Reasonably so. The issue of the singularity is so much more complex and involved that slogans and banner phrases do not do the field justice. Indeed, they are arguably detrimental and can backfire by oversimplifying.

I think, however, what is needed is a clear, short, and easily understood consensus on why this crazy AI thing is the inevitable result of reason, why it is necessary to think about, how it will help humanity, and how it could plausibly hurt humanity.

The SIAI tried to do this:

http://www.singinst.org/overview/whatisthesingularity

http://www.singinst.org/overview/whyworktowardthesingularity

Neither of these is compelling in my view. Both go into some detail and leave the unknowledgeable reader behind. Most importantly, neither has what people want: a clear vision of exactly what we are working toward. The problem is there isn't a clear vision; there is no consensus on how to start. That is why, in my view, the SIAI focuses more on "global risks" rather than just stating "we want to build an AI"; frankly, people get scared by the latter.

So is this paper going to resolve the dichotomy between the simplified and complex approach, or will we simply be replicating what the SIAI has already done?

Comment by andrewkemendo on Consequences of arbitrage: expected cash · 2009-11-13T13:56:44.899Z · score: 2 (2 votes) · LW · GW

Thus if we want to avoid being arbitraged, we should cleave to expected utility.

Sticking with expected utility works in theory if you have a discrete number of variables (options), can discern between all of them such that they can be judged equally, and the cost (in time or whatever) is not greater than the marginal gain from the process. Here is an example I like: go to the supermarket and optimize your expected utility for breakfast cereal.

The money pump only works if your "utility function" is static - or, more accurately, if your preferences update slower than the pumper can change the statistical trade imbalance; e.g., arbitrage doesn't work if the person outsourced to can also outsource.

I can take advantage of your vN-M axioms if I have any information about one of your preferences which you do not have (this need not be obtained illegally); as a result, sticking to them would get you money-pumped regardless.
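The static-preferences condition above can be sketched directly: an agent with a fixed cyclic preference ordering (A over B, B over C, C over A) will pay a small fee for each "upgrade" and can be drained indefinitely, while an agent whose preferences update stops the pump. The three-item cycle and the fees here are hypothetical.

```python
# Minimal money-pump sketch against static cyclic preferences. Fees are in
# integer cents to keep the arithmetic exact; all values are illustrative.

CYCLE = {"A": "C", "B": "A", "C": "B"}  # held item -> item the agent prefers to it

def run_pump(held, fee_cents, rounds):
    """Offer the agent the item it prefers to its current holding each round,
    for a small fee; return the total money extracted."""
    extracted = 0
    for _ in range(rounds):
        held = CYCLE[held]       # agent "trades up" within the cycle
        extracted += fee_cents   # and pays the fee every time
    return extracted

print(run_pump("A", fee_cents=1, rounds=300))  # 300 cents extracted
```

If the agent noticed after one full cycle that it was back where it started and refused further trades, extraction would cap at three fees, which is the "preferences update faster than the pumper" condition in miniature.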

Comment by andrewkemendo on Restraint Bias · 2009-11-13T11:31:16.305Z · score: -1 (1 votes) · LW · GW

This might have something to do with how public commitment may be counterproductive: once you've effectively signaled your intentions, the pressure to actually implement them fades away.

I was thinking about this today in the context of Kurzweil's future predictions and I wonder if it is possible that there is some overlap. Obviously Kurzweil is not designing the systems he is predicting but likely the people who are designing them will read his predictions.

I wonder, if they see the timelines he predicts, whether they will think: "oh, well [this or that technology] will be designed by 2019, so I can put it off for a little while longer, or maybe someone else will take the project instead."

It might not be the case, and in fact they might use the predicted timeline as a motivator to beat. Regardless, I think it would be good for developers to keep things like that in mind.

Comment by andrewkemendo on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-12T06:37:13.287Z · score: 0 (0 votes) · LW · GW

As I replied to Tarleton, the Not for the Sake of Happiness (Alone) post does not address how he came to his conclusions through any specific decision-theoretic optimization. He gives very loose subjective terms for his conclusions:

The best way I can put it, is that my moral intuition appears to require both the objective and subjective component to grant full value.

which is why I worded my question as I did the first time. I don't think he has put the same amount of thought into his epistemology as he has into his TDT.

Comment by andrewkemendo on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-12T06:29:01.743Z · score: 0 (0 votes) · LW · GW

Yes, I remember reading both and scratching my head because both seemed to beat around the bush rather than addressing the issues explicitly. Both lean too much on the subjective aspect of non-utility-based calculations, which in my mind is a red herring.

Admittedly I should have referenced it, and perhaps the issue has been addressed as well as it will be. I would rather see this become a discussion, as in my mind it is more important than any of the topics dealt with daily here; however, that may not be appropriate for this particular thread.

Comment by andrewkemendo on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-12T02:31:40.935Z · score: -1 (1 votes) · LW · GW

Thanks, I followed up below.

Comment by andrewkemendo on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-12T02:31:26.710Z · score: -1 (1 votes) · LW · GW

You'll have to forgive me, because I am an economist by training and mentions of utility carry very specific references to Jeremy Bentham.

Your definition of "maximizing utility" and Bentham's definition (he was the originator) are significantly different. If you don't know his, I will describe it (if you do, sorry for the redundancy).

Jeremy Bentham devised the felicific calculus, a hedonistic philosophy whose defining purpose is to maximize happiness. He was of the opinion that it was possible, in theory, to create a literal formula that yields optimized preferences such that happiness is maximized for the individual. This is the foundation of all utilitarian ethics, as each variant seeks to essentially itemize all preferences.

Virtue ethics, for those who do not know, is the Aristotelian philosophy positing that each sufficiently differentiated organism or object is naturally optimized for at least one specific purpose above all others. Optimized decision making for a virtue theorist means doing the things which best express or develop that specific purpose, similar to how specialty tools are best used for their specialty. Happiness is said to spring from this as a consequence, not as its goal.

I just want to know, if it is the case that he came to follow the former (Bentham) philosophy, how he came to that decision (theoretically it is possible to combine the two).

So in this case, while the term may give an approximation of the optimal decision, using it that way is not explicitly clear about how it determines the basis for the decision in the first place; that is, unless, as some have done, it is specified that maximizing happiness is the goal (which I had assumed people were asserting implicitly anyhow).
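Bentham's "literal formula" can be given a toy rendering: his felicific calculus scores a pleasure or pain along seven circumstances (intensity, duration, certainty, propinquity, fecundity, purity, extent). The scoring formula and all numbers below are illustrative assumptions, not Bentham's own arithmetic, which he never fully formalized.

```python
# Toy felicific calculus: score an action along Bentham's seven circumstances.
# The combination rule and inputs here are hypothetical illustrations.

def hedon_value(intensity, duration, certainty, propinquity,
                fecundity, purity, extent):
    """Crude hedonic score: base pleasure scaled by likelihood and nearness,
    boosted by tendency to produce further pleasure (fecundity), discounted
    for being followed by pain (impurity), times persons affected."""
    base = intensity * duration
    discounted = base * certainty * propinquity
    adjusted = discounted * (1 + fecundity) * purity
    return adjusted * extent

# Compare two actions for a single person (extent=1): a long-payoff pleasure
# versus an immediate but fleeting and slightly impure one.
study = hedon_value(intensity=3, duration=4, certainty=0.9, propinquity=0.5,
                    fecundity=0.5, purity=1.0, extent=1)
dessert = hedon_value(intensity=8, duration=1, certainty=1.0, propinquity=1.0,
                      fecundity=0.0, purity=0.8, extent=1)
print(study, dessert)
```

The point of the sketch is the contrast in the comment above: the virtue theorist asks which purpose the action develops, while this calculation only ever aggregates pleasure terms, so the choice between the two frameworks happens before any such formula is written.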

Comment by andrewkemendo on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-12T02:06:19.664Z · score: 0 (0 votes) · LW · GW

Ha, fair enough.

I often see references to maximizing utility and individual utility functions in your writing, and it would seem to me (unless I am misinterpreting your use) that you are implying that hedonic (felicific) calculation is the most optimal way to determine what is correct when applying counterfactual outcomes to optimize decision making.

I am asking how you determined (if that is the case) that the best way to judge the optimality of decision making was through utilitarianism as opposed to, say, ethical egoism or virtue ethics (not to equivocate). Or perhaps your reference is purely abstract and does not invoke the felicific calculation.

Comment by andrewkemendo on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-11T12:34:23.280Z · score: 1 (7 votes) · LW · GW

Since you and most around here seem to be utilitarian consequentialists, how much thought have you put into developing your personal epistemological philosophy?

Worded differently, how have you come to the conclusion that "maximizing utility" is the optimal goal as opposed to, say, virtue seeking?

Comment by andrewkemendo on Light Arts · 2009-11-07T03:01:12.225Z · score: -5 (5 votes) · LW · GW

We don't have to understand the universe completely to be very confident that it contains no contradictions.

Where is the proof of concept for this?

I have several resources which point to extreme inconsistency between the current and past behaviors of particle physics and astrophysics. Beyond the natural sciences, there are inconsistencies in the way political systems are organized and interacted with, even on a local level; yet most find them acceptable enough to continue to work with.

You argue that inconsistency alone is enough to reject a theory. The point I am making is that a process working differently under different circumstances is not necessarily inconsistent and does not "guarantee" its being wrong. That is the point behind chaotic modeling.

There can still be valuable achievements that come from better understanding how seemingly inconsistent theories work, and I argue that inconsistency would not be wholly acceptable as the sole reason for rejection, as you seem to advocate.

I still am not convinced that all systems must be consistent to exist - however that is a much different discussion.

Comment by andrewkemendo on Light Arts · 2009-11-06T14:40:59.747Z · score: -2 (4 votes) · LW · GW

Inconsistency is a general, powerful case of having reason to reject something. Inconsistency brings with it the guarantee of being wrong in at least one place.

I would agree if the laws of the universe or the system, political or material, were also consistent and understood completely. I think history shows us clearly that there are few laws which, under enough scrutiny, remain consistent in their known form; hence exogenous variables and stochastic processes.

Comment by andrewkemendo on Less Wrong / Overcoming Bias meet-up groups · 2009-10-30T14:18:29.732Z · score: 0 (0 votes) · LW · GW

I looked into that, but it lacks the database support this project would want. With LW owning the XML or PHP database, closest-match algorithms can be built that optimize meeting locations for particular members.

That said, if the current LW developer wants to implement this I think it would at least be a start.
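The "closest match" idea can be sketched without any database at all: given member coordinates, find each member's nearest neighbor by great-circle (haversine) distance. The member names and coordinates below are made up for illustration; a real implementation would read them from the proposed LW location store.

```python
# Sketch of nearest-member matching over hypothetical coordinates.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

members = {                      # hypothetical member -> (lat, lon)
    "alice": (37.77, -122.42),   # San Francisco
    "bob": (37.87, -122.27),     # Berkeley
    "carol": (51.51, -0.13),     # London
}

def nearest(name):
    """Name of the member closest to `name` by great-circle distance."""
    lat, lon = members[name]
    others = (m for m in members if m != name)
    return min(others, key=lambda m: haversine_km(lat, lon, *members[m]))

print(nearest("alice"))  # bob: Berkeley is far closer than London
```

Optimizing a meetup location for a whole cluster is then a matter of minimizing the summed distance over candidate points, which is the kind of query a server-side database makes cheap.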

Comment by andrewkemendo on Less Wrong / Overcoming Bias meet-up groups · 2009-10-30T13:58:50.340Z · score: 0 (0 votes) · LW · GW

I thought so too - however not in the implementation that I think is most user friendly.

Comment by andrewkemendo on Less Wrong / Overcoming Bias meet-up groups · 2009-10-30T06:10:17.775Z · score: 4 (4 votes) · LW · GW

I am currently working on a google map API application which will allow LW/OB readers to add their location, hopefully encouraging those around them to form their own meetups. That might also make determining the next singularity summit location easier.

If there are any PHP/MySQL programmers who want to help, I could definitely use some.

Comment by andrewkemendo on A Less Wrong Q&A with Eliezer (Step 1: The Proposition) · 2009-10-30T02:17:41.563Z · score: 2 (2 votes) · LW · GW

Perhaps this could be expanded into a Q&A with the people the readers agree would comparably elucidate on all matters rationality/AGI, such as Wei Dai and Nesov, rather than a single person.

To me this gives a broader perspective and has the added benefit of eliminating any semblance of cultishness, despite Mr. Yudkowsky's protests against such a following.

Comment by andrewkemendo on Better thinking through experiential games · 2009-10-25T03:29:49.776Z · score: 0 (0 votes) · LW · GW

Would it be inappropriate to put this list somewhere on the Less Wrong Wiki?

I think it would be great if we had a good repository of mind games.

Comment by andrewkemendo on Better thinking through experiential games · 2009-10-24T05:09:51.755Z · score: 1 (1 votes) · LW · GW

I think a lot of it has to do with your experience with computer based games and web applications.

This is why I say it would have to be a controlled study: those with significant computer and gaming experience have a distinct edge over those who do not. For example, many gamers would automatically go to the WASD control pattern (which is what some first-person shooter games use) on the "alternate control" level.

5:57:18 with 15 deaths here

Comment by andrewkemendo on Better thinking through experiential games · 2009-10-23T13:32:19.231Z · score: 18 (18 votes) · LW · GW

A few months ago I stumbled upon a game wherein the goal is to guide an elephant from one side of the screen to a pipe; perhaps you have seen it:

This is the only level

Here's the rub: the rules change on every level. To do well you have to be quick to change your view of how the new virtual world works. That takes a flexible mind and accurate interpretation of the cues the game gives you.

I sent this to some of my colleagues and have concluded, anecdotally, that their mental flexibility roughly correlates with their results in the game. I think experiential games are great and, if done in a controlled setting, would be an interesting way to evaluate mental acuity.

Comment by andrewkemendo on The continued misuse of the Prisoner's Dilemma · 2009-10-23T06:22:32.664Z · score: 7 (7 votes) · LW · GW

I probably came off as more "anticapitalist" or "collectivist" than I really am, but the point is important: betraying your partners has long-term consequences which aren't apparent when you only look at the narrow version of this game.

This is actually the real meaning of "selfishness." It is in my own best interest to do things for the community.

Collectivists and anti-capitalists seem either not to realize or to ignore the fact that greedy people aren't really acting in their own best interest if they are making enemies in the process.

Comment by andrewkemendo on Dying Outside · 2009-10-05T07:41:07.769Z · score: 3 (3 votes) · LW · GW

With mechanical respiration, survival with ALS can be indefinitely extended.

What a great opportunity to start your transhuman journey (that is, if you indeed are a transhumanist). Admittedly these are not the circumstances you or anyone would have chosen, but here we are nonetheless.

If you decide to document your process then I look forward to watching your progression out of organic humanity. I think it is people like you who have both the impetus and the knowledge to really show how transhuman technology can be a bolster to our society.

Cheers!

Comment by andrewkemendo on Open Thread: October 2009 · 2009-10-03T14:04:32.224Z · score: 0 (0 votes) · LW · GW

Upon reading that link (which I imagine is now fairly outdated?), his theory falls apart under the weight of its coercive nature, as the questioner points out.

It is understood that the impact of an AI will fall on all of humanity, regardless of its implementation, if it is used for decision making. As a result, consequentialist utilitarianism still holds a majority-rule position, as the link discusses, which implies that the decisions the AI would make would favor a "utility" calculation (spare me the argument about utilons; as an economist I have previously been neck deep in Bentham).

The discussion at once dismisses and reinforces the importance of the debate itself, which seems contradictory. I personally think this is a much more important topic than it is given credit for, and I have yet to see a compelling argument otherwise.

From the people (researchers) I have talked to about this specifically, the responses I have gotten are: "I'm not interested in that, I want to know how intelligence works" or "I just want to make it work; I'm interested in the science behind it." I think this attitude is pervasive. It is ignoring the subject.

Comment by andrewkemendo on Open Thread: October 2009 · 2009-10-03T13:18:10.102Z · score: -1 (1 votes) · LW · GW

"Utilons" are a stand-in for "whatever it is you actually value"

Of course - which makes them useless as a metric.

we tend to support decision making based on consequentialist utilitarianism

Since you seem to speak for everyone in this category: how did you come to the conclusion that this is the optimal philosophy?

Thanks for the link.

Comment by andrewkemendo on Open Thread: October 2009 · 2009-10-03T13:15:13.380Z · score: 0 (0 votes) · LW · GW

Maybe I'm just dense, but I have been around a while and searched, yet I haven't stumbled upon a top-level post or anything of the like here, on the FHI, the SIAI (other than ramblings about what AI could theoretically give us), OB, or otherwise which either breaks it down or gives a general consensus.

Can you point me to where you are talking about?

Comment by andrewkemendo on Open Thread: October 2009 · 2009-10-03T08:49:20.391Z · score: 1 (1 votes) · LW · GW

I never see discussion on what the goals of the AI should be. To me this is far more important than any of the things discussed on a day to day basis.

If there is not a competent theory on what the goals of an intelligent system will be, then how can we expect to build it correctly?

Ostensibly, the goal is to make the correct decision. Yet there is nearly no discussion of what constitutes a correct decision. I see lots of contributors talking about calculating utilons, which demonstrates that most contributors are hedonistic consequentialist utilitarians.

Am I correct then to assume that the implicit goal of the AI for the majority in the community is to aid in the maximization of human happiness?

If so, I think serious problems would be encountered, and I think the goal of maximizing happiness would not be accomplished.