Comments

Comment by Venu on Are calibration and rational decisions mutually exclusive? (Part one) · 2011-08-02T21:39:06.407Z · LW · GW

I came to this post via a Google search (hence this late comment). The problem Cyan is pointing out - the lack of calibration of Bayesian posteriors - is a real one, and in fact something I'm currently facing in my own research. Upvoted for raising an important and under-discussed issue.
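For readers who haven't met the issue before, here is a minimal sketch of what "lack of calibration" can look like in practice (my own toy example, not Cyan's setup): when the prior used for inference is narrower than the process that actually generates the parameters, nominal 90% credible intervals cover the truth far less than 90% of the time.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims = 100_000

# "Nature": parameters are actually drawn from a wide distribution.
theta_true = rng.normal(0.0, 3.0, size=n_sims)
# One observation per parameter, with unit noise.
y = theta_true + rng.normal(0.0, 1.0, size=n_sims)

# The analyst's (misspecified) prior is N(0, 1).  Normal-normal conjugacy
# gives the posterior in closed form.
prior_var, noise_var = 1.0, 1.0
post_var = 1.0 / (1.0 / prior_var + 1.0 / noise_var)
post_mean = post_var * (y / noise_var)

# Central 90% credible intervals and their actual frequentist coverage.
z = stats.norm.ppf(0.95)
lo = post_mean - z * np.sqrt(post_var)
hi = post_mean + z * np.sqrt(post_var)
coverage = np.mean((theta_true >= lo) & (theta_true <= hi))
print(f"nominal 90% intervals, actual coverage ~ {coverage:.2f}")  # well below 0.90
```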

Comment by Venu on What I Think, If Not Why · 2008-12-11T22:24:03.000Z · LW · GW

"The default case of FOOM is an unFriendly AI."

Before this, we also have: "The default case of an AI is to not FOOM at all, even if it's self-modifying (like a self-optimizing compiler)." Why not anti-predict that no AIs will FOOM at all?

"This AI becomes able to improve itself in a haphazard way, makes various changes that are net improvements but may introduce value drift, and then gets smart enough to do guaranteed self-improvement, at which point its values freeze (forever)."

Given the tiny minority of AIs that will FOOM at all, what is the probability that an AI which has been designed for a purpose other than FOOMing will instead FOOM?

Comment by Venu on Selling Nonapples · 2008-11-14T01:53:32.000Z · LW · GW

@Don: Eliezer says in his AI risks paper, criticising Bill Hibbard, that one cannot use supervised learning to specify the goal system for an AI. And although he doesn't say this in the AI risks paper (contra what I said in my previous comment), I remember him saying somewhere (was it on a mailing list?) that supervised learning as such is not a reliable component to include in a Friendly AI. (I may be wrong in attributing this to him, however.) I feel this criticism is misguided, as any viable proposal for a (Friendly or not) AI will have to be built out of modules which are not themselves smart enough to be Friendly. And supervised learning sure seems like a handy module to have - it clusters highly variable lower-level sensory input into more stable higher-level objects, and its usefulness is demonstrated by how heavily Thrun's team relied on it.
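To illustrate what I mean by supervised learning as a low-level module (a toy sketch of my own, not Thrun's actual pipeline; the data and model choices are made up for the example), a classifier can map noisy, highly variable sensor readings to stable object labels:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Toy "sensor" data: three object classes, each producing noisy feature vectors.
centers = np.array([[0.0, 0.0], [3.0, 3.0], [0.0, 4.0]])
labels = rng.integers(0, 3, size=600)
features = centers[labels] + rng.normal(0.0, 0.8, size=(600, 2))

# A supervised classifier learns to map variable low-level input to stable labels.
clf = LogisticRegression(max_iter=1000).fit(features, labels)

# A new, noisy reading is mapped to a discrete object identity.
new_reading = np.array([[2.7, 3.4]])
print("predicted object class:", clf.predict(new_reading)[0])
```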

Comment by Venu on Selling Nonapples · 2008-11-13T23:37:43.000Z · LW · GW

I don't get this post. There is no big mystery to asynchronous communication - a process looks for messages whenever it is convenient for it to do so, very much like we check our mail-boxes when it is convenient for us. Although it is not clear to me how asynchronous communication helps in building an AI, I don't see any underspecification here. And if people (including Brooks) have actually used the architecture for building robots, that at least must be clear proof that there is a real architecture here.
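A minimal sketch of the "check the mailbox when it is convenient" pattern, in the spirit of what I mean (my own illustration, not Brooks's actual architecture):

```python
import queue
import threading
import time

mailbox: "queue.Queue[str]" = queue.Queue()

def sensor() -> None:
    # A producer drops messages into the mailbox whenever it has something to say.
    for i in range(3):
        time.sleep(0.3)
        mailbox.put(f"reading {i}")

def controller() -> None:
    # The consumer polls the mailbox only when convenient, never blocking on it.
    for _ in range(10):
        try:
            msg = mailbox.get_nowait()
            print("handled", msg)
        except queue.Empty:
            pass  # nothing waiting; carry on with other work
        time.sleep(0.2)

threading.Thread(target=sensor).start()
controller()
```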

Btw, from my understanding, Thrun's team made heavy use of supervised learning - the same paradigm that Eliezer knocked down as being unFriendly in his AI risks paper.

Comment by Venu on The Weighted Majority Algorithm · 2008-11-13T06:19:57.000Z · LW · GW

I am interested in what Scott Aaronson says to this.

I am unconvinced, and I agree with both the commenters g and R above. I would say Eliezer is underestimating the number of problems where the environment gives you correlated data and where the correlation is essentially a distraction. Hash functions, for example, are widely used in everyday programming tasks, not just by cryptographers. Randomized algorithms are often based on non-trivial insights into the problem at hand. For example, the insight behind hashing and related approaches is that "two (different) objects are highly unlikely to give the exact same result when (the same) random function (from a certain class of functions) is applied to both of them, and hence the result of this function can be used to distinguish the two objects."
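To make that insight concrete (a sketch of my own, using the standard ((a*x + b) mod p) mod m universal family rather than anything specific to the post): two fixed, distinct inputs collide under a randomly chosen hash function only with probability roughly 1/m, so matching hash values are strong evidence that the objects are the same.

```python
import random

p = 2_147_483_647          # a large prime (2**31 - 1)
m = 1_024                  # number of hash buckets

def random_hash():
    # Draw h(x) = ((a*x + b) mod p) mod m from the classic universal family.
    a = random.randrange(1, p)
    b = random.randrange(0, p)
    return lambda x: ((a * x + b) % p) % m

x, y = 123_456, 987_654    # two fixed, distinct objects
trials = 100_000
collisions = 0
for _ in range(trials):
    h = random_hash()
    if h(x) == h(y):
        collisions += 1

print(f"collision rate ~ {collisions / trials:.4f}  (theory: roughly 1/{m} ~ 0.001)")
```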

Comment by Venu on Inner Goodness · 2008-10-24T03:09:04.000Z · LW · GW

To me, this post seems to evade what is for me the hard question of morality: given that my own welfare often comes into conflict with the welfare of others, how much weight should I attach to my own utility in comparison to the utility of other humans? This post seems to say I should look into the mirror to get my answer - but that answer is too crude, in the sense that I know I should care for others, but not how much.

I think there is definitely a role for external influence here. Reading OB for the last year or more has made me consciously think of myself as a rationalist, and this has pushed me to behave in a manner consistent with my self-labelling as a rationalist. In a similar fashion, if I start thinking of myself as an altruist (having come under some external influence), I am quite sure it will push me to behave in a manner more consistent with that labelling. It would be trivial, or simply wrong, to then say that this altruism was "latent" in me all along.

Comment by Venu on Inner Goodness · 2008-10-24T02:53:09.000Z · LW · GW

"Ayn Rand? Aleister Crowley? How exactly do you get there? What Rubicons do you cross? It's not the justifications I'm interested in, but the critical moments of thought."

My guess is that Ayn Rand, at least, applied a "reversed stupidity = intelligence" heuristic. She saw examples of ostensible altruists committing great evil, and from there generalized to the opposite extreme: since altruism leads to evil, the only good must come from selfishness.

(Just to be clear, I am not defending Rand here.)

Comment by Venu on Could Anything Be Right? · 2008-07-18T08:40:39.000Z · LW · GW

"There are no-free-lunch theorems in computer science - in a maxentropy universe, no plan is better on average than any other. " I don't think this is correct - in this form, the theorem is of no value, since we know the universe is not max-entropy. No-free-lunch theorems say that no plan is better on average than any other, when we consider all utility functions. Hence, we cannot design an intelligence that will maximize all utility functions/moralities.

Comment by Venu on Whither Moral Progress? · 2008-07-16T11:41:09.000Z · LW · GW

@billswift: I do not want to divert the thread onto the topic of animal rights. It was only an example in any case. See Paul Gowder's comment previous to mine for a more detailed (and different) example of how empirical knowledge can affect our moral judgements.

Comment by Venu on Whither Moral Progress? · 2008-07-16T09:22:02.000Z · LW · GW

A few processes to explain moral progress (but probably not all of it):

a) Acquiring new knowledge (e.g. the knowledge that chimps and humans are, on an evolutionary scale, close relatives), which leads us to throw away moral judgements that rest on assumptions inconsistent with that knowledge.

b) Morality is only one of the many ends that we pursue, and as an end it becomes easier to pursue once you are amply fed, watered and clothed. In other words, improvements in material conditions enable improvements in morality.

c) Conquest of one culture by another means the morals of the conquerors get transferred to the conquered (to some extent). Similarly, migration and higher levels of general exposure between cultures mean that practices viewed as immoral by much of the rest of the world come under great pressure to be abolished.

Comment by Venu on Moral Complexities · 2008-07-04T16:38:59.000Z · LW · GW

@Richard: I agree with you, of course. I meant there exists no objective, built-into-the-fabric-of-the-universe morality which we can compute using an idealised philosopher program (without programming in our own intuitions, that is).

Comment by Venu on Moral Complexities · 2008-07-04T15:38:53.000Z · LW · GW

I share neither of those intuitions. Why not stick with the obvious option of morality as the set of evolved (and evolving) norms? This is it; looking for the "ideal" morality would be passing the recursive buck.

This does not compel me to abandon the notion of moral progress though; one of our deepest moral intuitions is that our morality should be (internally) consistent, and moral progress, in my view, consists of better reasoning to make our morality more and more consistent.

Comment by Venu on The Tragedy of Group Selectionism · 2007-11-07T16:43:58.000Z · LW · GW

"Rationalisation of a predetermined bottom-line" is not always be a bad thing. It is common enough in Mathematics that you intuitively feel a result is right, and you work backwards from the result to see how you can prove it. The real mistake is if you do not take care in working it out backwards, and make wrong inferential steps in the chain. You may (legitimately) point out failures of this strategy, but there are also successes that you need to acknowledge.

Comment by Venu on Priming and Contamination · 2007-10-10T03:28:08.000Z · LW · GW

"Yet the most fearsome aspect of contamination is that it serves as yet another of the thousand faces of confirmation bias. Once an idea gets into your head, it primes information compatible with it - and thereby ensures its continued existence."

I am not sure I understand this. Once an idea gets into my head, my brain should prime all information related to the idea, not just information that is compatible with the idea. I am of course not denying the existence of confirmation bias, just trying to understand how priming in particular can promote it.

Comment by Venu on Recommended Rationalist Reading · 2007-10-02T03:48:00.000Z · LW · GW

Eliezer: "Probability theory and the structure of the real world exploited by tractable cognitive algorithms: Judea Pearl, 'Probabilistic Reasoning in Intelligent Systems'"

Is the use of the phrase "cognitive algorithms" intended to mean that these algorithms are plausibly implemented in our own brains?

Comment by Venu on Why is the Future So Absurd? · 2007-09-07T23:27:57.000Z · LW · GW

".. but technological change feeds on itself, and therefore has a positive second derivative."

Nitpick: if technological progress were merely quadratic in time, it too would have a positive second derivative. Kurzweil of course claims something much stronger - that technological progress is exponential in time, which means the first derivative and all succeeding derivatives grow exponentially as well.
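Stated in symbols (just restating the nitpick):

```latex
\frac{d^2}{dt^2}\, t^2 = 2 > 0,
\qquad
\frac{d^n}{dt^n}\, e^{kt} = k^n e^{kt} \quad (k > 0)
```

So a positive second derivative is a far weaker property than the exponential growth Kurzweil actually claims.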

Comment by Venu on Suggested Posts · 2007-04-12T22:14:10.000Z · LW · GW

Why does statistical hypothesis testing continue to be used in many research fields despite its very many flaws? Are there biases at work here, given that widespread rejection of hypothesis testing would mean trashing much of many senior researchers' work? How skeptical should we be of science in general if such shaky methodology is so widely adopted?
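To give one concrete example of the kind of flaw I have in mind (my choice of illustration; optional stopping is only one of the commonly cited problems): if a researcher peeks at the p-value every few observations and stops as soon as it dips below 0.05, the false-positive rate ends up well above the nominal 5% even when the null hypothesis is exactly true.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_sims, max_n, peek_every, alpha = 2_000, 200, 10, 0.05

false_positives = 0
for _ in range(n_sims):
    data = rng.normal(0.0, 1.0, size=max_n)   # the null hypothesis is true
    for n in range(peek_every, max_n + 1, peek_every):
        _, p = stats.ttest_1samp(data[:n], 0.0)
        if p < alpha:                          # stop and "publish" at the first significant peek
            false_positives += 1
            break

print(f"false-positive rate with optional stopping ~ {false_positives / n_sims:.2f}")  # well above 0.05
```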