Comments

Comment by andreas on Rationality Quotes February 2013 · 2013-02-02T05:42:44.940Z · LW · GW

"I design a cell to not fail and then assume it will and then ask the next 'what-if' questions," Sinnett said. "And then I design the batteries that if there is a failure of one cell it won't propagate to another. And then I assume that I am wrong and that it will propagate to another and then I design the enclosure and the redundancy of the equipment to assume that all the cells are involved and the airplane needs to be able to play through that."

Mike Sinnett, Boeing's 787 chief project engineer

Comment by andreas on Random thought: What is the optimal PD strategy under imperfect information? · 2012-01-17T03:51:18.962Z · LW · GW

The game theory textbook "A Course in Microeconomic Theory" (Kreps) addresses this situation. Quoting from page 516:

We will give an exact analysis of this problem momentarily (in smaller type), but you should have no difficulty seeing the basic trade-off; too little punishment, triggered only rarely, will give your opponent the incentive to try to get away with the noncooperative strategy. You have to punish often enough and harshly enough so that your opponent is motivated to play [cooperate] instead of [defect]. But the more often/more harsh is the punishment, the less are the gains from cooperation. And even if you punish in a fashion that leads you to know that your opponent is (in her own interests) choosing [cooperate] every time (except when she is punishing), you will have to "punish" in some instances to keep your opponent honest.
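
A minimal simulation of this trade-off, in Python (the payoffs, observation noise, and punishment lengths are made-up illustration parameters, not taken from Kreps): both players honestly play a trigger strategy that punishes for k rounds whenever the noisily observed previous move looks like a defection. Longer punishments deter deliberate defection more strongly, but under imperfect information they also get triggered by accident, which is exactly the cost the quote describes.

    import random

    # Standard PD payoffs for the row player: C/C -> 3, C/D -> 0, D/C -> 5, D/D -> 1
    PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

    def observe(move, eps):
        """Return the opponent's move, misread with probability eps."""
        return move if random.random() > eps else ("D" if move == "C" else "C")

    def average_payoff(k, eps=0.05, rounds=20000):
        """Both players cooperate, but defect for k rounds after observing a defection.
        Punishments are served out in full, ignoring observations in the meantime."""
        punish = [0, 0]
        total = [0.0, 0.0]
        for _ in range(rounds):
            moves = ["D" if punish[i] > 0 else "C" for i in range(2)]
            obs = [observe(moves[1 - i], eps) for i in range(2)]
            for i in range(2):
                total[i] += PAYOFF[(moves[i], moves[1 - i])]
                if punish[i] > 0:
                    punish[i] -= 1          # serve out the punishment phase
                elif obs[i] == "D":
                    punish[i] = k           # a (possibly misread) defection triggers punishment
        return [t / rounds for t in total]

    # Average payoff per round; it falls as the punishment length k grows:
    for k in (1, 3, 10):
        print(k, average_payoff(k))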

Comment by andreas on Heading Toward: No-Nonsense Metaethics · 2011-04-24T01:31:29.600Z · LW · GW

I am more motivated to read the rest of your sequence if the summary sounds silly than if I can easily see the arguments myself.

Comment by andreas on Heading Toward: No-Nonsense Metaethics · 2011-04-24T01:13:32.063Z · LW · GW

Back when Eliezer was writing his metaethics sequence, it would have been great to know where he was going, i.e., if he had posted ahead of time a one-paragraph technical summary of the position he set out to explain. Can you post such a summary of your position now?

Comment by andreas on Making Reasoning Obviously Locally Correct · 2011-03-12T21:29:18.230Z · LW · GW

Now, citing axioms and theorems to justify a step in a proof is not a mere social convention to make mathematicians happy. It is a useful constraint on your cognition, allowing you to make only inferences that are actually valid.

When you are trying to build up a new argument, temporarily accepting steps of uncertain correctness can be helpful (if mentally tagged as such). This strategy can move you out of local optima by prompting you to think about what further assumptions would be required to make the steps correct.

Techniques based on this kind of reasoning are used in the simulation of physical systems and in machine inference more generally (tempering). Instead of exploring the state space of a system using the temperature you are actually interested in, which permits only very particular moves between states ("provably correct reasoning steps"), you explore using a higher temperature that makes it easier to move between different states ("arguments"). Afterwards, you check how probable the state you moved to is when evaluated at the original temperature.
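
As a concrete (if toy) sketch of the tempering picture, here is a random-walk Metropolis chain in Python run on a made-up bimodal target; the target, proposal, temperatures, and step counts are illustrative assumptions, not taken from any particular system.

    import math, random

    def explore(log_p, propose, x0, steps, T):
        """Random-walk Metropolis on the flattened target p(x)^(1/T)."""
        x = x0
        for _ in range(steps):
            x_new = propose(x)
            if math.log(random.random()) < (log_p(x_new) - log_p(x)) / T:
                x = x_new
        return x

    # Made-up bimodal target: two well-separated Gaussian bumps at -5 and +5.
    log_p = lambda x: math.log(math.exp(-(x - 5.0) ** 2) + math.exp(-(x + 5.0) ** 2))
    propose = lambda x: x + random.gauss(0.0, 1.0)

    x_hot = explore(log_p, propose, x0=-5.0, steps=5000, T=10.0)   # moves between modes
    x_cold = explore(log_p, propose, x0=-5.0, steps=5000, T=1.0)   # tends to stay near -5

    # Evaluate the reached states at the original temperature (T = 1):
    print(x_hot, log_p(x_hot))
    print(x_cold, log_p(x_cold))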

Comment by andreas on Hyperlinks and Less Wrong · 2011-01-24T03:14:01.542Z · LW · GW

As you wish: Drag the link on this page to your browser's bookmark bar. Clicking it on any page will turn all links black and remove the underlines, making links distinguishable from black plain text only through changes in mouse pointer style. Click again to get the original style back.

Comment by andreas on Unsolved Problems in Philosophy Part 1: The Liar's Paradox · 2010-11-30T09:47:26.069Z · LW · GW

See also: A Universal Approach to Self-Referential Paradoxes, Incompleteness and Fixed Points, which treats the Liar's paradox as an instance of a generalization of Cantor's theorem (no onto mapping from N->2^N).

The best part of this unified scheme is that it shows that there are really no paradoxes. There are limitations. Paradoxes are ways of showing that if you permit one to violate a limitation, then you will get an inconsistent system. The Liar paradox shows that if you permit natural language to talk about its own truthfulness (as it - of course - does) then we will have inconsistencies in natural languages.
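
For reference, the diagonal argument that the paper generalizes, in its standard form (Cantor's theorem for an arbitrary set S and any map f from S to its power set):

    % Cantor: no map f : S -> 2^S is onto.
    % Given any f, define the diagonal set
    D = \{\, x \in S : x \notin f(x) \,\}.
    % If f were onto, then D = f(y) for some y in S, and
    y \in D \iff y \notin f(y) = D,
    % a contradiction. Roughly, the Liar sentence plays the role of the
    % diagonal object D: a sentence asserting its own untruth.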

Comment by andreas on Rationality is Not an Attractive Tribe · 2010-11-27T06:33:59.338Z · LW · GW

Do you think that your beliefs regarding what you care about could be mistaken? That you might tell yourself that you care more about being lazy than about getting cryonics done, but that in fact, under reflection, you would prefer to get the contract?

Comment by andreas on An Xtranormal Intelligence Explosion · 2010-11-08T03:43:24.388Z · LW · GW

Please stop commenting on this topic until you have understood more of what has been written about it on LW and elsewhere. Unsubstantiated proposals harm LW as a community. LW deals with some topics that look crazy on surface examination; you don't want people who dig deeper to stumble on comments like this and find actual crazy.

Comment by andreas on The Curve of Capability · 2010-11-07T00:07:22.979Z · LW · GW

Similarly, inference (conditioning) is incomputable in general, even if your prior is computable. However, if you assume that observations are corrupted by independent, absolutely continuous noise, conditioning becomes computable.
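
A toy Python sketch of why noise helps (this illustrates the practical intuition, not the computability result itself; the prior, the Gaussian noise model, and the observed value are all made up): conditioning on an exact real-valued observation would accept prior samples with probability zero, whereas a noise density gives every sample a computable, nonzero weight.

    import math, random

    def prior():
        """Made-up latent variable: standard normal."""
        return random.gauss(0.0, 1.0)

    def noise_density(y, x, sigma=0.1):
        """Density of observing y given latent x, under Gaussian noise."""
        return math.exp(-((y - x) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

    def posterior_mean(y_obs, n=100000):
        """Importance sampling: weight prior draws by the noise density at the observation."""
        xs = [prior() for _ in range(n)]
        ws = [noise_density(y_obs, x) for x in xs]
        return sum(x * w for x, w in zip(xs, ws)) / sum(ws)

    print(posterior_mean(0.8))  # close to the analytic posterior mean, 0.8 / 1.01 ~ 0.792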

Comment by andreas on Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It) · 2010-10-30T19:53:08.815Z · LW · GW

Consider marginal utility. Many people are working on AI, machine learning, computational psychology, and related fields. Nobody is working on preference theory, a formal understanding of our goals under reflection. If you want to do interesting research and if you have the background to advance either of those fields, do you think the world will be better off with you on the one side or on the other?

Comment by andreas on A Paradox in Timeless Decision Theory · 2010-10-25T04:33:47.340Z · LW · GW

Now suppose you are playing against another timeless decision theory agent. Clearly, the best strategy is to be that actor which defects no matter what. If both agents do this, the worst possible result for both of them occurs.

Which shows that defection was not the best strategy in this situation.

Comment by andreas on Church: a language for probabilistic modeling · 2010-10-24T23:03:10.264Z · LW · GW

Yes, deriving mechanisms that take complex models and turn them into something tractable is mostly an open problem.

Comment by andreas on Church: a language for probabilistic modeling · 2010-10-24T02:10:33.751Z · LW · GW

They don't work without continuous parameters. If you have a probabilistic program that includes both discrete and continuous parameters, you can use gradient methods to generate MH proposals for your continuous parameters. I don't think there are any publications that discuss this yet.
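
As a sketch of what such a gradient-based proposal can look like for the continuous block (a Langevin/MALA-style move in Python; the target, step size, and iteration count are made-up illustration choices, and in a mixed program this would be interleaved with moves on the discrete parameters):

    import math, random

    def mala_step(x, log_p, grad_log_p, eps=0.2):
        """One Metropolis-adjusted Langevin step: drift along the gradient, then correct."""
        x_new = x + 0.5 * eps ** 2 * grad_log_p(x) + eps * random.gauss(0.0, 1.0)

        def log_q(a, b):
            """Log density (up to a constant) of proposing a when currently at b."""
            mu = b + 0.5 * eps ** 2 * grad_log_p(b)
            return -((a - mu) ** 2) / (2 * eps ** 2)

        log_accept = log_p(x_new) - log_p(x) + log_q(x, x_new) - log_q(x_new, x)
        return x_new if math.log(random.random()) < log_accept else x

    # Made-up continuous target: standard normal.
    log_p = lambda x: -0.5 * x ** 2
    grad_log_p = lambda x: -x

    x = 5.0
    for _ in range(2000):
        x = mala_step(x, log_p, grad_log_p)
    print(x)  # after burn-in, approximately a draw from the target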

Comment by andreas on Three kinds of political similarity · 2010-10-23T21:42:24.306Z · LW · GW

A PDF of the Nature intro is here.

Comment by andreas on Church: a language for probabilistic modeling · 2010-10-23T21:06:34.897Z · LW · GW

I was comparing the two choices faced by people who want to do inference in nontrivial models. You can either write the model in an existing probabilistic programming language and get inefficient inference for free, or you can write model+inference in something like Matlab. In the latter case, you may be able to use libraries if your model is similar enough to existing models, but for many interesting models this is not the case.

Comment by andreas on Church: a language for probabilistic modeling · 2010-10-23T20:44:33.892Z · LW · GW

Current universal inference methods are very limited, so the main advantages of using probabilistic programming languages are (1) the conceptual clarity you get by separating generative model and inference and (2) the ability to write down complex nonparametric models and immediately be able to do inference, even if it's inefficient. Writing a full model+inference implementation in Matlab, say, takes much longer and yields something more confusing and less flexible.

That said, some techniques that were developed for particular classes of problems have a useful analog in the setting of programs. The gradient-based methods you mention have been generalized to work on any probabilistic program with continuous parameters.

Comment by andreas on Church: a language for probabilistic modeling · 2010-10-23T20:20:26.815Z · LW · GW

Probabilistic inference in general is NP-hard, but it is not clear that (1) this property holds for the kinds of problems people are interested in and, even if it does, that (2) approximate probabilistic inference is hard for this class of problems. For example, if you believe this paper, probabilistic inference without extreme conditional probabilities is easy.

Comment by andreas on Lifelogging: the recording device · 2010-10-23T05:13:36.227Z · LW · GW

Combine this with speech-to-text transcription software and you get a searchable archive of your recorded interactions!

ETA: In theory. In practice, dictation software algorithms are probably not up to the task of turning noisy speech from different people into text with any reasonable accuracy.

Comment by andreas on Church: a language for probabilistic modeling · 2010-10-23T00:32:57.340Z · LW · GW

The key idea behind Church and similar languages is that they allow us to express and formally reason about a large class of probabilistic models, many of which cannot be formalized in any concise way as Bayes nets.

Bayes nets express generative models, i.e. processes that generate data. To infer the states of hidden variables from observations, you condition the Bayes net and compute a distribution on the hidden variable settings using Bayesian inference or some approximation thereof. A particularly popular class of approximations is the class of sampling algorithms, e.g. Markov Chain Monte Carlo methods (MCMC) and importance sampling.

Probabilistic programs express a larger class of models, but very similar approximate inference algorithms can be used to condition a program on observations and to infer the states of hidden variables. In both machine learning and cognitive science, when you are doing Bayesian inference with some model that expresses your prior belief, you usually code both model and inference algorithm and make use of problem-specific approximations to Bayesian inference. Probabilistic programs separate model from inference by using universal inference algorithms.
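
A toy version of this separation in Python (the model, a coin that is either fair or biased by an unknown amount, is made up, and plain rejection sampling stands in for the universal inference algorithm; systems like Church use much more sophisticated MCMC over program executions):

    import random

    def model():
        """Generative model, written with no inference code mixed in."""
        is_fair = random.random() < 0.5                    # hidden: which kind of coin
        weight = 0.5 if is_fair else random.uniform(0.5, 1.0)
        flips = [random.random() < weight for _ in range(8)]
        return {"is_fair": is_fair, "weight": weight, "flips": flips}

    def infer(model, condition, query, samples=200000):
        """Generic, model-agnostic inference: keep only runs consistent with the data."""
        traces = (model() for _ in range(samples))
        accepted = [t for t in traces if condition(t)]
        return sum(query(t) for t in accepted) / len(accepted)

    # Condition on seeing 8 heads; query the probability that the coin is fair.
    p_fair = infer(model,
                   condition=lambda t: all(t["flips"]),
                   query=lambda t: t["is_fair"])
    print(p_fair)  # roughly 0.02: eight heads is strong evidence for the biased coin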

If you are interested in this set of ideas in the context of cognitive science, I recommend this interactive tutorial.

Church is based on Lisp. At the lowest level, it replaces Boolean gates with stochastic digital circuits. These circuits are wired together to form Markov chains (the probabilistic counterpart of finite state machines.) At the top, it's possible to define probabilistic procedures for generating samples from recursively defined distributions.

This confuses Church as a language for expressing generative models with ideas on how to implement such a language in hardware. There are three different ideas here:

  • Church as a language for generative models with sampling-based semantics
  • MCMC as an approximate inference method for such models (that can be implemented on traditional von Neumann architectures)
  • Machine architectures that are well-suited for MCMC

So this is an open call for volunteers -- any brave Bayesians want to blog about a brand new computer language?

I'll write an exposition within the next few weeks if people are interested.

Comment by andreas on Help: When are two computations isomorphic? · 2010-10-08T02:49:11.504Z · LW · GW

The notion of abstract state machines may be useful for a formalization of operational equivalence of computations.

Comment by andreas on Consciousness doesn't exist. · 2010-10-03T02:38:03.326Z · LW · GW

Your argument leaves out necessary steps. It is not a careful analysis: it does not consider ways in which it might be mistaken, and it gives the impression that you wanted to get to your conclusion as quickly as possible.

There is, necessarily, absolutely no way to determine - given an algorithm - whether it is conscious or not. It is not even a formally undecidable statement!

It is unclear how this follows from anything you wrote.

consciousness refuses to be phrased formally (it is subjective, and computation is objective)

Consider tabooing words like "subjective" and "objective".

Comment by andreas on Open Thread September, Part 3 · 2010-09-28T23:33:43.642Z · LW · GW

Related: affect heuristic, affective death spirals.

Comment by andreas on Open Thread, September, 2010-- part 2 · 2010-09-26T16:10:07.758Z · LW · GW

From the document:

I suggest a synthesis between the approaches of Yudkowsky and de Garis.

Later, elaborating:

Yudkowsky's emphasis on pristine best scenarios will probably fail to survive the real world precisely because evolution often proceeds by upsetting such scenarios. Yudkowsky's dismissal of random mutations or evolutionary engineering could thus become the source of the downfall of his approach. Yet de Garis's overemphasis on evolutionary unpredictability fails to account for the extent to which human intelligence itself is model for learning from "dumb" random processes on a higher levels of abstraction so that they do not have to be repeated.

Comment by andreas on Let's make a deal · 2010-09-23T01:47:38.043Z · LW · GW

To make a good case for financial support, point to past results that are evidence of clear thinking and of the ability to get research done.

Comment by andreas on Error detection bias in research · 2010-09-22T04:55:21.016Z · LW · GW

90% of spreadsheets contain errors.

Source (scroll down to the last line of the first spreadsheet)

Comment by andreas on Open Thread, September, 2010-- part 2 · 2010-09-19T03:12:52.424Z · LW · GW

Ask yourself: If the LW consensus on some question was wrong, how would you notice? How do you distinguish good arguments from bad arguments? Do your criteria for good arguments depend on social context in the sense that they might change if your social context changes?

Next, consider what you believe and why you think you believe it, applying the methods you just named. According to your criteria, are the arguments in favor of your beliefs strong, and the arguments against weak? Or do your criteria not discriminate between them? Do you have difficulty explaining why you hold the positions you hold?

These two sets of questions correspond to two related problems that you could worry about and that imply different solutions. The former, more fundamental problem is broken epistemology. The latter problem is knowledge that is not truly part of you, knowledge disconnected from your epistemic machinery.

I don't see an easy way out; no simple test you could apply, only the hard work of answering the fundamental questions of rationality.

Comment by andreas on Open Thread, September, 2010-- part 2 · 2010-09-18T17:21:45.351Z · LW · GW

I'm in Cambridge, MA, looking for a rationalist roommate. PM me for details if you are interested or if you know someone who is!

Comment by andreas on Less Wrong: Open Thread, September 2010 · 2010-09-10T21:08:56.400Z · LW · GW

Thanks for coding this!

Currently, the script does not work in Chrome (which supports Greasemonkey out of the box).

Comment by andreas on A "Failure to Evaluate Return-on-Time" Fallacy · 2010-09-08T00:03:32.523Z · LW · GW

Comments on HN and LW result in immediate reward through upvoting and replies whereas writing a book is a more solitary experience. If you identify this difference as a likely cause for your behavior and if you believe that the difference in value to you is as large as you say, then you should test this hypothesis by turning book-writing into a more interactive, immediately rewarding process. Blogging and sending pieces to friends once they are written come to mind.

More generally, consider structuring your social environment such that social expectations and rewards line up with activities you consider valuable. I have found this to be a powerful way to change my behavior.

Comment by andreas on A "Failure to Evaluate Return-on-Time" Fallacy · 2010-09-07T20:36:18.824Z · LW · GW

Meanwhile, there's something on-hand I could do that'd have 300 times the impact. For sure, almost certainly 300 times the impact, because I see some proven success in the 300x area, and the frittering-away-time area is almost certainly not going to be valuable.

Your post includes a "silly" and a business-scale example, but not a personal one. In order to answer the questions about causes that you ask, it seems necessary to look at specific situations. Is there a real-life situation that you can talk about where you have two options, one almost certainly hundreds of times as good as the other, and you choose the option that is worse?

Comment by andreas on A "Failure to Evaluate Return-on-Time" Fallacy · 2010-09-07T20:27:22.026Z · LW · GW

I feel like a lot of us have those opportunities - we see that a place we're putting a small amount of effort is accounting for most of our success, but we don't say - "Okay, that area that I'm giving a little attention that's producing massive results? All attention goes there now."

If you are giving some area a little attention, this does not imply that more attention would get you proportionally better results; you may run into diminishing returns quickly. Of course, for any given situation, it is worth understanding whether this is the case or not.

Comment by andreas on Open Thread, August 2010-- part 2 · 2010-08-25T23:50:13.271Z · LW · GW

If all you want is single bits from a quantum random number generator, you can use this script.

Comment by andreas on The Threat of Cryonics · 2010-08-03T21:13:07.333Z · LW · GW

The question is what causes this sensation that cryonics is a threat? What does it specifically threaten?

It doesn't threaten the notion that we will all die eventually. Accident, homicide, and war will remain possibilities unless we can defeat them, and suicide will always remain an option.

Even if cryonics does not in fact threaten the notion of eventual death, it might still cause the sensation that it poses this threat.

Comment by andreas on Open Thread, August 2010 · 2010-08-01T22:35:55.067Z · LW · GW

Scott Aaronson asks for rational arguments for and against cryonics.

Comment by andreas on Metaphilosophical Mysteries · 2010-07-28T11:53:07.616Z · LW · GW

I use the word "prior" in the sense of priors as mathematical objects, meaning all of your starting information plus the way you learn from experience.

Comment by andreas on Metaphilosophical Mysteries · 2010-07-28T11:30:19.218Z · LW · GW

Nothing much happens to intelligent agents - because an intelligent agents' original priors mostly get left behind shortly after they are born - and get replaced by evidence-based probability estimates of events happening.

Your prior determines how evidence informs your estimates and what things you can consider at all. In order to "replace priors with evidence-based probability estimates of events", you need a notion of event, and that notion is determined by your prior.

Comment by andreas on Newcomb's Problem and Regret of Rationality · 2010-07-27T02:03:22.649Z · LW · GW

Intuitively, the notion of updating a map of fixed reality makes sense, but in the context of decision-making, a fully general formalization has so far proved elusive, and perhaps unnecessary.

By making a choice, you control the truth value of certain statements—statements about your decision-making algorithm and about mathematical objects depending on your algorithm. Only some of these mathematical objects are part of the "real world". Observations affect what choices you make ("updating is about following a plan"), but you must have decided beforehand what consequences you want to establish ("[updating is] not about deciding on a plan"). You could have decided beforehand to care only about mathematical structures that are "real", but what characterizes those structures apart from the fact that you care about them?

Vladimir talks more about his crazy idea in this comment.

Comment by andreas on [deleted post] 2010-07-23T18:49:37.805Z

Are you doing this? If not, why not?

Comment by andreas on (One reason) why capitalism is much maligned · 2010-07-19T18:50:58.243Z · LW · GW

In my experience, academics often cannot distinguish between SIAI and Kurzweil-related activities such as the Singularity University. With its $25k tuition for two months, SU is viewed as some sort of scam, and Kurzweilian ideas of exponential change are seen as naive. People hear about Kurzweil, SU, the Singularity Summit, and the Singularity Institute, and assume that the latter is behind all those crazy singularity things.

We need to make it easier to distinguish the preference and decision theory research program, an attempt to solve a hard problem, from the larger cluster of singularity ideas, which are not essential to it, even in the intelligence explosion variety.

Comment by andreas on CogSci books · 2010-04-21T03:34:30.290Z · LW · GW

Fodor's arguments for a "language of thought" make sense (see his book of the same name). In a nutshell, thought seems to be productive (out of given concepts, we can always construct new ones, e.g. arbitrary nestings of "the mother of the mother of ..."), systematic (knowing certain concepts automatically leads to the ability to construct others, e.g. knowing the concepts "child" and "wild", I can also represent "wild child"), and compositional (the meaning of "wild child" is a function of the meanings of "wild" and "child").

Comment by andreas on CogSci books · 2010-04-21T00:41:00.336Z · LW · GW

If you want to learn the fundamental concepts of a field, I find that, most of the time, textbooks with exercises are still the best option. The more introductory chapters of PhD theses are also helpful in this situation.

Comment by andreas on Open Thread: April 2010, Part 2 · 2010-04-09T03:18:20.967Z · LW · GW

Thanks! Please keep on posting, this is interesting.

Comment by andreas on Open Thread: April 2010 · 2010-04-07T01:22:45.069Z · LW · GW

Since I never described a way of extracting preference from a human (and hence defining it for a FAI), I'm not sure where do you see the regress in the process of defining preference.

Reading your previous post in this thread, I felt like I was missing something and I could have asked the question Wei Dai asked ("Once we implement this kind of FAI, how will we be better off than we are today?"). You did not explicitly describe a way of extracting preference from a human, but phrases like "if you manage to represent your preference in terms of your I/O" made it seem like capturing strategy was what you had in mind.

I now understand you as talking only about what kind of object preference is (an I/O map) and about how this kind of object can contain certain preferences that we worry might be lost (like considerations of faulty hardware). You have not said anything about what kind of static analysis would take you from an agent's s̶t̶r̶a̶t̶e̶g̶y̶ program to an agent's preference.

Comment by andreas on Open Thread: April 2010 · 2010-04-05T00:19:28.573Z · LW · GW

There is also Shades, which lets you set a tint color and which provides a slider so you can move gradually between standard and tinted mode.

Comment by andreas on Hedging our Bets: The Case for Pursuing Whole Brain Emulation to Safeguard Humanity's Future · 2010-03-15T23:39:25.970Z · LW · GW

My conclusion from this discussion is that our disagreement lies in the probability we each assign that uploads can be applied safely to building FAI, as opposed to generating more existential risk. I do not see how to resolve this disagreement right now. I agree with your statement that we need to make sure that those involved in running uploads understand the problem of preserving human preference.

Comment by andreas on The problem of pseudofriendliness · 2010-03-15T22:33:38.599Z · LW · GW

People have very feeble understanding of their own goals. Understanding is not required. Goals can't be given "from the outside", goals are what system does.

Even if we have little insight into our goals, it seems plausible that we frequently do things that are not conducive to our goals. If this is true, then in what sense can it be said that a system's goals are what it does? Is the explanation that you distinguish between preference (goals the system would want to have) and goals that it actually optimizes for, and that you were talking about the latter?

Comment by andreas on Hedging our Bets: The Case for Pursuing Whole Brain Emulation to Safeguard Humanity's Future · 2010-03-15T02:00:07.575Z · LW · GW

Good, this is progress. Your comment clarified your position greatly. However, I do not know what you mean by "how long is WBE likely to take?" — take until what happens?

Comment by andreas on Hedging our Bets: The Case for Pursuing Whole Brain Emulation to Safeguard Humanity's Future · 2010-03-14T08:52:09.940Z · LW · GW

The first option tries to capture our best current guess as to our fundamental preference. It then updates the agent (us) based on that guess.

This guess may be awful. The process of emulation and attempts to increase the intelligence of the emulations may introduce subtle psychological changes that could affect the preferences of the persons involved.

For subsequent changes based on "trying to evolve towards what the agent thinks is its exact preference", I see two options. Either they are like the first change, open to the possibility of being arbitrarily awful because we do not have much introspective insight into the nature of our preferences, so that step by step we lose part of what we value; or subsequent changes consist of the formalization and precise capture of the object preference, in which case the situation must be judged by how much value was lost in the first step versus how much value was gained by having emulations work on the project of formalization.

For the second option though, it's hard for me to imagine ever choosing to self-modify into an agent with exact, unchanging preferences.

This is not the proposal under discussion. The proposal is to build a tool that ensures that things develop according to our wishes. If it turns out that our preferred (in the exact, static sense) route of development is through a number of systems that are not reflectively consistent themselves, then this route will be realized.

Comment by andreas on Hedging our Bets: The Case for Pursuing Whole Brain Emulation to Safeguard Humanity's Future · 2010-03-13T03:05:04.044Z · LW · GW

It's not clear to me that this is the only way to evaluate my claim, or that it is even a reasonable way. My understanding of FAI is that arriving at such a resolution of human preferences is a central ingredient in building an FAI, hence using your method to evaluate my claim would require more progress on FAI.

If your statement ("The route of WBE simply takes the guess work out") were a comparison between two routes similar in approach, e.g. WBE and neuroenhancement, then you could argue that a better formal understanding of preference would be required before we could use the idea of "precise preference" to argue for one approach or the other.

Since we are comparing one option which does not try to capture preference precisely with an option that does, it does not matter what exactly precise preference says about the second option: Whatever statement our precise preferences make, the second option tries to capture it whereas the first option makes no such attempt.