Comment by itaibn0 on Open question: are minimal circuits daemon-free? · 2018-05-20T21:01:05.909Z · score: 8 (2 votes) · LW · GW

This may be relevant:

Imagine a computational task that breaks up into solving many instances of problems A and B. Each instance reduces to at most n instances of problem A and at most m instances of problem B. However, these two maxima are never achieved both at once: the sum of the number of instances of A and instances of B is bounded above by some r < n + m. One way to compute this with a circuit is to include n copies of a circuit for computing problem A and m copies of a circuit for computing problem B. Another approach is to include r copies of a circuit which, with suitable control inputs, can compute either problem A or problem B. Although this approach requires more complicated control circuitry, if r is significantly less than n + m and the size of the combined circuit is significantly less than the sum of the sizes of the circuits for A and B (which may occur if problems A and B have common subproblems X and Y which can use a shared circuit), then this approach will use fewer logic gates overall.
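The trade-off can be made concrete with back-of-the-envelope gate counts. All the specific numbers below are hypothetical, chosen only to illustrate the comparison:

```python
# Hypothetical gate counts for the dedicated-circuits vs. shared-circuits
# comparison described above. Every concrete number is an assumption.
n, m = 8, 8                # max instances of problem A, of problem B
r = 10                     # bound on combined instances (r < n + m)
size_A, size_B = 500, 700  # gates in dedicated circuits for A and for B
size_AB = 900              # gates in a switchable A-or-B circuit; smaller than
                           # size_A + size_B when A and B share subproblems
control_overhead = 300     # extra control circuitry needed for routing

dedicated = n * size_A + m * size_B      # n copies of A plus m copies of B
shared = r * size_AB + control_overhead  # r switchable copies plus control

print(dedicated, shared)  # → 9600 9300: the shared design wins here
```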

More generally, consider some complex computational task that breaks down into a heterogeneous set of subproblems which are distributed in different ways depending on the exact instance. Analogous reasoning suggests that the minimal circuit for solving this task will involve a structure akin to emulating a CPU: There are many instances of optimized circuits for low-level tasks, connected by a complex dependency graph. In any particular instance of the problem the relevant data dependencies are only a small subgraph of this graph, with connections decided by some control circuitry. A particular low-level circuit need not have a fixed purpose, but is used in different ways in different instances.

So, our circuit has a dependency tree of low-level tasks optimized for solving our problem in the worst case. Now, at a starting stage of this hierarchy it has to process information about how a particular instance is separated into subproblems and generate the control information for solving this particular instance. The control information might need to be recomputed as new information about the structure of the instance is made manifest, and sometimes a part of the circuit may perform this recomputation without full access to potentially conflicting control information calculated in other parts.

Comment by itaibn0 on Against the Linear Utility Hypothesis and the Leverage Penalty · 2017-12-14T21:08:38.776Z · score: 4 (3 votes) · LW · GW

Yes, this is the refutation of Pascal's mugger that I believe in, although I never got around to writing it up the way you did. However, I disagree with you that it implies our utilities must be bounded. All the argument shows is that ordinary people never assign enormous utility values to events without also assigning them commensurably low probabilities. That is, normative claims (i.e., claims that certain events have certain utilities assigned to them) are judged fundamentally differently from factual claims, and require more evidence than merely the complexity prior. In a moral intuitionist framework this is the fact that anyone can say that 3^^^3 lives are suffering, but it would take living 3^^^3 years and getting to know 3^^^3 people personally to feel the 3^^^3 times utility associated with these events.

I don't know how to distinguish the scenario where our utilities are bounded from the one where our utilities are unbounded but regularized (or whether our utilities are sufficiently well-defined to distinguish the two). Still, I want to emphasize that the latter situation is possible.

Comment by itaibn0 on Changing habits for open threads · 2017-11-26T15:45:53.346Z · score: 6 (2 votes) · LW · GW

Quick thought: I think you are relying too much on your own experience, which I don't expect to generalize well. Different people have different habits for how much thought they put into their comments, and I expect some put in too much thought and some too little. We should put more effort into identifying the aggregate tendencies of people on this forum before we make recommendations.

Then again, perhaps you are just offering the idea casually, so it's okay. Still I worry that the most likely future pathways for posts like this are "get ignored" and "get cited uncritically", and there's no clear place for this more thorough investigation.

Comment by itaibn0 on Living in an Inadequate World · 2017-11-14T14:02:58.119Z · score: 11 (3 votes) · LW · GW
What's the fallacy you're claiming?

First, to be clear, I am referring to things such as this description of the prisoner's dilemma and EY's claim that TDT endorses cooperation. The published material has been careful to only say that these decision theories endorse cooperation among identical copies running the same source code, but as far as I can tell some researchers at MIRI still believe this stronger claim and this claim has been a major part of the public perception of these decision theories (example here; see section II).

The problem is that when two FDT agents with different utility functions and different prior knowledge face a prisoner's dilemma with each other, their decisions are actually two different logical variables X0 and X1. The argument for cooperating is that X0 and X1 are sufficiently similar to one another that in the counterfactual where X0 = C we also have X1 = C. However, you could just as easily take the opposite premise, where X0 and X1 are sufficiently dissimilar that counterfactually changing X0 has no effect on X1. Then you are left with the usual CDT analysis of the game. Given the vagueness of logical counterfactuals, it is impossible to distinguish these two situations.
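The dependence on the correlation premise can be sketched with the standard textbook prisoner's-dilemma payoffs (the numbers are the usual illustrative values, not taken from any formal FDT model):

```python
# Toy illustration: the conclusion flips depending on whether the logical
# counterfactual treats the two decisions X0 and X1 as correlated.
PAYOFF = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def my_payoff(my_move, correlated):
    # Correlated premise: counterfactually setting X0 also sets X1.
    # Uncorrelated premise: X1 is held fixed (at D, the dominant move),
    # recovering the usual CDT analysis.
    their_move = my_move if correlated else "D"
    return PAYOFF[(my_move, their_move)]

print(my_payoff("C", True), my_payoff("D", True))    # → 3 1: cooperate wins
print(my_payoff("C", False), my_payoff("D", False))  # → 0 1: defect wins
```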

Here's a related question: What does FDT say about the centipede game? There's no symmetry between the players so I can't just plug in the formalism. I don't see how you can give an answer that's in the spirit of cooperating in the prisoner's dilemma without reaching the conclusion that FDT involves altruism among all FDT agents through some kind of veil of ignorance argument. And taking that conclusion is counter to the affine-transformation-invariance of utility functions.

Comment by itaibn0 on Living in an Inadequate World · 2017-11-14T12:22:41.759Z · score: 2 (1 votes) · LW · GW

This discussion has moved far off-topic away from EY's general rationality lessons. I'm pleased with this, since these are topics that I want to discuss, but I want to mention this explicitly since constant topic-changes can be bad for a productive discussion by preventing the participants from going into any depth. In addition, lurkers might be annoyed at reading yet another AI argument. Do you think we should move the discussion to a different venue?

My motivations for discussing this are the chance to articulate criticisms of MIRI that I haven't written down in detail before, the chance to get a rough impression of how MIRI supporters respond to these explanations, and more generally the opportunity to practice intellectually honest debate. I don't expect the discussion to go on long enough to resolve our disagreements, but I am trying anyway to get the practice. I'm currently enthusiastic about continuing the discussion, but it is the sort of enthusiasm that could easily wane in a day. What is your motivation?

Comment by itaibn0 on Living in an Inadequate World · 2017-11-14T00:33:13.273Z · score: 2 (1 votes) · LW · GW
"but a fundamental assumption behind TDT and UDT is the existence of a causal structure behind logical statements, which sounds implausible to me."
None of the theories mentioned make any assumption like that; see the FDT paper above.

Page 14 of the FDT paper:

Instead of a do operator, FDT needs a true operator, which takes a logical sentence φ and updates P to represent the scenario where φ is true...
...Equation (4) works given a graph that accurately describes how changing the value of a logical variable affects other variables, but it is not yet clear how to construct such a thing—nor even whether it can be done in a satisfactory manner within Pearl’s framework.

This seems wrong, if you're saying that we can't formally establish the behavior of different decision theories, or that applying theories to different cases requires ad-hoc emendations; see section 5 of "Functional Decision Theory" (and subsequent sections) for a comparison and step-by-step walkthrough of procedures for FDT, CDT, and EDT. One of the advantages we claim for FDT over CDT and EDT is that it doesn't require ad-hoc tailoring for different dilemmas (e.g., ad-hoc precommitment methods or ratification procedures, or modifications to the agent's prior).

The main thing that distinguishes FDT from CDT is how the true operator mentioned above functions. As far as I'm aware this is always inserted by hand. This is easy to do for situations where entities make perfect simulations of one another, but there aren't even rough guidelines for what to do when the computations involved cannot be delineated in such a clean manner. In addition, if this were a rich research field I would expect more "math that bites back", i.e., substantive results that reduce to clearly-defined mathematical problems whose outcome wasn't expected during the formalization.

This point about "load-bearing elements" is at its root an intuitive judgement that might be difficult for me to convey properly.

Comment by itaibn0 on Living in an Inadequate World · 2017-11-13T14:44:37.512Z · score: 2 (1 votes) · LW · GW

Thinking further, I've spotted something that may be a crucial misunderstanding. Is the issue whether EY was right to create his own technical research institute on AI risk, or is it whether he was right to pursue AI risk at all? I agree that before EY there was relatively little academic work on AI risk, and that he played an important role in increasing the amount of attention the issue receives. I think it would have been a mistake for him to ignore the issue on the basis that the experts must know better than him and they aren't worried.

On the other hand, I expect an equally well-funded and well-staffed group that is mostly within academia to do a better job than MIRI. I think EY was wrong in believing that he could create an institute that is better at pursuing long-term technical research in a particular topic than academia.

Comment by itaibn0 on Living in an Inadequate World · 2017-11-13T13:35:59.802Z · score: 2 (1 votes) · LW · GW
When I think about the people working on AGI outcomes within academia these days, I think of people like Robin Hanson, Nick Bostrom, Stuart Russell, and Eric Drexler, and it's not immediately obvious to me that these people have converged more with each other than any of them have with researchers at MIRI.

I see the lack of convergence between people in academia as supporting my position, since I am claiming that MIRI is looking too narrowly. I think AI risk research is still in a brainstorming stage where we don't yet have a good grasp on what all the possibilities are. If all of these people have rather different ideas for how to go about it, why is it just the approaches that Eliezer Yudkowsky likes that are getting all the funding?

I also have specific objections. Let's take TDT and FDT as an example, since they were mentioned in the post. The primary motivation for them is that they handle Newcomb-like dilemmas better. I don't think Newcomb-like dilemmas are relevant to the reasoning of potentially dangerous AIs, and I don't think you will get a good holistic understanding of what makes a good reasoner out of these theories. One secondary motivation for TDT/UDT/FDT is a fallacious argument that they endorse cooperation in the true prisoner's dilemma. Informal arguments seem to be the load-bearing element in applying these theories to any particular problem; the technical work seems to be mainly formalizing narrow instances of these theories to agree with the informal intuition. I don't know about FDT, but a fundamental assumption behind TDT and UDT is the existence of a causal structure behind logical statements, which sounds implausible to me.

Comment by itaibn0 on Living in an Inadequate World · 2017-11-12T23:20:34.888Z · score: 7 (2 votes) · LW · GW

(Background: I used to be skeptical about AI risk as a high-value cause, now I am uncertain, and I am still skeptical of MIRI.)

When you compete for grant funding, your work is judged by people with roughly the same level of expertise as you. When you create a publicly-funded research institute, your work is judged far more shallowly. That you chose the second path rather than the first left a bad first impression on me when I learned of it, as though you couldn't make a convincing case in a fair test. As MIRI grew and as I learned more about it, I got the impression that, being a small team with too little contact with a broader intellectual community, it prematurely reached a consensus on a particular set of approaches and assumptions that I think are likely to go nowhere.

Comment by itaibn0 on Announcing AASAA - Accelerating AI Safety Adoption in Academia (and elsewhere) · 2017-06-21T01:55:55.963Z · score: 0 (0 votes) · LW · GW

I specifically appreciate the article on research debt.

Since I was confused by this when I first read it, I want to clarify: as far as I can tell the article is not written by anybody associated with AASAA. You're saying it was nice of toonalfrink to link to it.

(I'm not sure if this comment is useful, since I don't expect a lot of people to have the same misunderstanding I did.)

Comment by itaibn0 on [stub] 100-Word Unpolished Insights Thread (3/10-???) · 2017-03-17T22:01:30.273Z · score: 0 (0 votes) · LW · GW

I'm not sure to what extent you want people to criticize ideas in this thread, and I'm going to test the waters. Give me feedback on how well this matches the norms you envision.

An immediate flaw comes to mind, which any elaboration of this idea should respond to: changing the high school curriculum is very difficult. If you've acquired the social capital to change the curriculum of a high school, you should not spend it on such a small, marginal contribution; you could probably find something with a larger effect for the same social capital.

Comment by itaibn0 on Act into Fear and Abandon all Hope · 2017-01-04T13:20:56.126Z · score: 0 (0 votes) · LW · GW

You start the discussion with a very practical frame: "Here is some advice I intend to give you." You give caveats, then you give the advice, and you give some justification. The advice sounds plausible. Then you continue into a very philosophical discussion of what fear is and what people think about it that does not appear to tie in with the practical frame. While your article would look very lopsided with so much caveat and so little content, I don't see how the later parts help. Alternatively, you could remove everything up to the 10th paragraph and write a very different sort of essay.

Comment by itaibn0 on A quick note on weirdness points and Solstices [And also random other Solstice discussion] · 2017-01-04T01:09:14.416Z · score: 1 (1 votes) · LW · GW

"Straw that breaks the camel's back" implies the existence of a large pre-existing weight, so your claim is a tautology.

Comment by itaibn0 on Progress and Prizes in AI Alignment · 2017-01-04T00:03:55.174Z · score: 2 (2 votes) · LW · GW

You point out a problem: There's no way to tell which organizations are making progress on AI alignment, and there is little diversity in current approaches. You turn this into the question: How do we create prizes that incentivize progress in AI alignment? You're missing a step or two here.

I'd say the logic goes the opposite direction: because there are no clear objectively measurable targets that will improve AI safety, prizes are probably a bad idea for increasing the diversity and effectiveness of AI safety research.

Comment by itaibn0 on Act into Fear and Abandon all Hope · 2017-01-03T23:26:14.803Z · score: 1 (1 votes) · LW · GW

Writing suggestion: Drop everything past the 10th paragraph ("It’s not immediately obvious that you’d want to overcome fear, though...").

## Why I Am Changing My Mind About AI Risk

2017-01-03T22:57:53.086Z · score: 4 (5 votes)
Comment by itaibn0 on On the importance of Less Wrong, or another single conversational locus · 2016-12-06T00:59:21.851Z · score: 0 (0 votes) · LW · GW

Perhaps I should not have used such sensationalist language. I admit I don't know the whole story, and that a more detailed account would likely reveal many nonrational reasons the change occurred. Still, I suspect rational persuasion did play a role, if not a complete one. Anecdotally, the Less Wrong discussion changed my opinion of polyamory from "haven't really thought about it much" to "sounds plausible but I haven't tried it".

In any case, if your memory of that section of Less Wrong history contributes positively to your nostalgia, it's worth reconsidering the chance that events like that will ever happen again.

Comment by itaibn0 on On the importance of Less Wrong, or another single conversational locus · 2016-12-04T22:53:08.208Z · score: 0 (0 votes) · LW · GW

Given the community's initial heavy interest in heuristics & biases research, I am amused that there is no explicit mention of the sunk cost fallacy. Seriously, watch out for that.

My opinion is that revitalizing the community is very likely to fail, and I am neutral on whether it's worth trying anyway for the current prominent rationalists. A lot of people are suggesting restoring the website with a more centralized structure. It should be obvious that the result won't work the same as the old Less Wrong.

Finally, a reminder on Less Wrong history, which suggests that we lost more than a group of high-quality posters: Less Wrong wasn't always a polyamory hub. It became that way because there was a group of people who seriously believed they could improve the way they think, a few noticed they didn't have any good reason to be monogamous, set out to convince the others, and succeeded. Do you think a change of that scale will ever happen in the future of the rationalist community?

Comment by itaibn0 on A few misconceptions surrounding Roko's basilisk · 2015-10-07T23:08:55.019Z · score: -1 (1 votes) · LW · GW

Based on personal experience, if you're dreaming I don't recommend trying to wake yourself up. Instead, enjoy your dream until you're ready to wake up naturally. That way you'll have far better sleep.

Comment by itaibn0 on Open Thread, Jul. 27 - Aug 02, 2015 · 2015-07-28T18:51:57.920Z · score: 3 (3 votes) · LW · GW

"assume unbelievable X".

Only this is not an unbelievable X, it's an entirely believable X (I wouldn't have any reason to ask an unbelievable one - as would anyone asking a question - unless they are actually trying to trick you with a question). In fact, assuming that people are asking you to believe an "unbelievable X" is a strawman of the argument in point.

Can you do me a favour and try to steelman the question I asked? And see what the results are, and what answer you might give to it?

A similar concern is applicable here: recall that steelmanning means, when encountering an argument that seems easily flawed, not responding to that argument but strengthening it in ways that seem reasonable to you and answering that instead. This sounds like the exact opposite of what you want people to do to your questions.

Comment by itaibn0 on Open Thread, Jul. 27 - Aug 02, 2015 · 2015-07-27T21:16:46.981Z · score: 3 (3 votes) · LW · GW

Sometimes what happens is that people don't know the answer to the question you're asking but still want to contribute to the discussion, so they answer a different question which they know the answer to. In this case the solution is to find someone who knows the answer before you start asking.

Comment by itaibn0 on Beyond Statistics 101 · 2015-06-28T23:38:58.060Z · score: 2 (2 votes) · LW · GW

I don't know about Grothendieck, but the two other sources appear to have softer criticism of the mathematical community than "actually functioning as a cult".

Comment by itaibn0 on How my social skills went from horrible to mediocre · 2015-06-09T06:12:14.246Z · score: 0 (0 votes) · LW · GW

That's why I said "supposed to do". The core argument behind schooling is that we can make a person much more capable by exposing them to things they would not otherwise be exposed to, and that it is valuable to give a broad background in many different topics. Fundamentally this is similar to what you're suggesting, and the differences you point out just indicate that school has a bad choice of curriculum and teaches it badly. The primary novelty in what you're suggesting is that you want "a lot of different type of experience" with a shallow view on each topic ("a different profession... every day"), whereas school typically spends a lot of time on a couple of different topics but with essentially the same type of experience. I do not intend to comment on whether I think this will work better.

For the record, I don't know what Toastmasters does, but the schools I've been to had Drama class and occasionally required giving presentations.

Comment by itaibn0 on How my social skills went from horrible to mediocre · 2015-05-23T04:49:01.378Z · score: 0 (0 votes) · LW · GW

That's exactly what school is supposed to do.

Comment by itaibn0 on Bragging Thread May 2015 · 2015-05-11T21:05:59.172Z · score: 7 (7 votes) · LW · GW

I got an honorable mention in the 2014 Putnam Competition. I took the test in December and heard the results in April, but I haven't posted this in other bragging threads, so I'm not sure if it's appropriate here.

Comment by itaibn0 on Guidelines for Upvoting and Downvoting? · 2015-05-07T02:07:20.079Z · score: 4 (4 votes) · LW · GW

Downvotes sort of do the opposite, but it's not perfectly symmetrical because scores below zero pack an extra punch.

The standard guideline is to upvote if you want more of that kind of comment, and downvote if you want less. The asymmetry between upvotes and downvotes comes from the fact that Less Wrongers on the whole want more content on Less Wrong rather than less. Negative scores pack a punch because they mean your comment would be better off not existing.

Well really, I think it's mostly that people just have a pre-existing idea of the connotation of negative numbers, but I gave this retroactive justification to show that I think the result is surprisingly internally consistent.

Comment by itaibn0 on Is Scott Alexander bad at math? · 2015-05-07T01:35:18.482Z · score: 1 (1 votes) · LW · GW

Based on JonahSinick's prior comments, his motivation for asking this question is pretty clear. You have already critiqued the thought process that made him think this question is necessary, to attack it again is almost double-counting. I think if you had answered the question directly the discussion would have a better chance of bootstrapping out of mutual unintelligibility. Then again, I mostly lurk and only rarely participate in internet debates so I don't feel I really understand how any given discussion strategy would actually play out. Also, I cheated, since Jonah already expressed a desire for a direct answer.

Comment by itaibn0 on Is Scott Alexander bad at math? · 2015-05-06T19:42:35.883Z · score: 0 (0 votes) · LW · GW

Other commenters have said similar things, but I want to express this in my own words. Doing mathematics requires multiple skills, and an aesthetic sense may be an underappreciated one of them. You argue that Scott has a good aesthetic sense. I also think that Scott probably has good abilities in some of the skills necessary for doing mathematics. But from Scott's account he appears to be lacking in other skills. Why do you think that what Scott has is sufficient? You mention that early college courses are not representative of real math, but even at higher levels you need skills such as reading formulas, applying algorithms, and understanding the implicit meaning of unmotivated (or even imperfectly motivated) definitions. Keep in mind that Scott relates here that other people skilled in math have tried to educate him outside of a college context.

I'm not saying I think your conclusion is wrong; I'm uncertain myself. And even Scott admits "I don't know if it's that I'm bad at math, or that I just don't enjoy math enough to be intrinsically motivated to pursue it" (same link as above), which sounds a bit like a retreat toward your way of thinking.

Comment by itaibn0 on Is Scott Alexander bad at math? · 2015-05-06T04:59:51.728Z · score: 0 (0 votes) · LW · GW

predicting and modelling a preexisting reality

Depending on how you define "preexisting reality", most professional mathematics can be said not to achieve this. In any case, the terms under which people usually praise Douglas Hofstadter do not include this sort of achievement. And if you really want to know what Hofstadter has done, there's this.

Comment by itaibn0 on Why isn't the following decision theory optimal? · 2015-04-16T06:48:38.314Z · score: 2 (3 votes) · LW · GW

an informal version of Updateless Decision Theory

Are you implying that UDT is formal?

Comment by itaibn0 on A pair of free information security tools I wrote · 2015-04-14T14:57:26.261Z · score: 2 (2 votes) · LW · GW

I've never seen it stated as a requirement of the PGP protocol that it is impossible to hide extra information in a signature. In an ordinary use case this is not a security risk; it's only a problem when the implementation is untrusted. I have as much disrespect as anyone towards people who think they can easily achieve what experts who spent years thinking about it can't, but that's not what is going on here.

Comment by itaibn0 on A pair of free information security tools I wrote · 2015-04-14T14:16:47.018Z · score: 2 (2 votes) · LW · GW

How much money are you willing to bet on that?

If the amount is less than $50,000, I suggest you just offer it all as a prize to whoever proves you wrong. The value to your reputation will be more than $5, and due to transaction costs people are unlikely to bet with you directly with less than $5 to gain.

Comment by itaibn0 on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 108 · 2015-02-21T02:35:24.036Z · score: 3 (3 votes) · LW · GW

That quote is from chapter 74. I mention this because you didn't specify, and to save others the trouble of searching.

Comment by itaibn0 on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 104 · 2015-02-16T02:18:27.507Z · score: 3 (7 votes) · LW · GW

Remember that no matter what happens, the Hufflepuff boy will still come to Harry a bit after 11:04. This means either that Voldemort will survive this encounter and retain mobility in four hours, or that he set up this message in advance (or that Harry is wrong about the source of this message).

Comment by itaibn0 on Can AIXI be trained to do anything a human can? · 2014-10-21T23:52:27.318Z · score: 1 (1 votes) · LW · GW

I don't think guided training is generally the right way to disabuse an AIXI agent of misconceptions we think it might get. What training amounts to is having the agent's memory begin with some carefully constructed string s0. All this does is change the agent's prior from some P based on Kolmogorov complexity to the prior P'(s) = P(s0 + s | s0) (here + is concatenation). If what you're really doing is changing the agent's prior to what you want, you should do that with self-awareness and no artificial restriction. In certain circumstances guided training might be the right method, but the general approach should be to think about what prior we want and hard-code it as effectively as possible. Taken to the natural extreme this amounts to making an AI that works on completely different principles than AIXI.
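The prior transformation P'(s) = P(s0 + s | s0) can be checked on a toy prior. Here an i.i.d. coin-flip prior over bit strings stands in for the Kolmogorov-complexity prior, which is not computable:

```python
from fractions import Fraction

def P(s, p_one=Fraction(1, 3)):
    """Toy i.i.d. prior over bit strings: each bit is '1' with probability 1/3."""
    prob = Fraction(1)
    for bit in s:
        prob *= p_one if bit == "1" else 1 - p_one
    return prob

def P_trained(s, s0):
    """The post-training prior P'(s) = P(s0 + s | s0) = P(s0 + s) / P(s0)."""
    return P(s0 + s) / P(s0)

# For an i.i.d. prior the training prefix cancels out completely, one way of
# seeing that training only helps insofar as the prior encodes dependence on
# the past (which the Kolmogorov prior, unlike this toy, does).
print(P_trained("10", "000") == P("10"))  # → True
```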
Comment by itaibn0 on Happiness Logging: One Year In · 2014-10-10T00:34:07.235Z · score: 3 (3 votes) · LW · GW

Overall my experience with logging has made me put less trust in "how happy are you right now" surveys of happiness. Aside from the practical issues like logging unexpected night wake-time, I mostly don't feel like the numbers I'm recording are very meaningful. I would rather spend more time in situations I label higher than lower on average, so there is some signal there, but I don't actually have the introspection to accurately report to myself how I'm feeling.

I've also been suspicious of happiness surveys for a similar reason. One theory I have is that a large portion of the variation in happiness set-point is just that different people have different tendencies in answering "rate from 1 to 10"-type questions. It would be interesting to test how much happiness set-point correlates with questions such as "rate this essay from 1 to 10". Another test for this theory, one far more likely to have been conducted already, is to see how well happiness set-point correlates with neurological signals of happiness (the difficulty being that the primary way to determine whether a neurological signal signals happiness is through self-report; nonetheless, if happiness set-point correlates with any neurological signal, it is more likely that this signal plays a role in happiness than in inducing high number ratings).

Comment by itaibn0 on Causal decision theory is unsatisfactory · 2014-09-14T16:54:30.703Z · score: 0 (2 votes) · LW · GW

On this topic, I'd like to suggest a variant of Newcomb's problem that I don't recall seeing anywhere on LessWrong (or anywhere else). As usual, Omega presents you with two boxes, box A and box B. She says: "You may take either box A or both boxes. Box B contains $1,000. Box A either contains $1,000,000 or is empty.
Here is how I decided what to put in box A: I consider a perfectly rational agent being put in an identical situation to the one you're in. If I predict she takes one box I put the money in box A; otherwise I put nothing." Suppose further that Omega has put many other people into this exact situation, and in all those cases the amount of money in box A was identical.

The reason why I mention the problem is that while the original Newcomb's problem is analogous to the Prisoner's Dilemma with clones that you described, this problem is more directly analogous to the ordinary one-shot Prisoner's Dilemma. In the Prisoner's Dilemma with clones and in Newcomb's problem, your outcome is controlled by a factor that you don't directly control but that is nonetheless influenced by your strategy. In the ordinary Prisoner's Dilemma and in my Newcomb-like problem, this factor is controlled by a rational agent that is distinct from yourself (although note that in the Prisoner's Dilemma this agent's outcome is directly influenced by what you do, but not so in my own dilemma).

People have made the argument that you should cooperate in the one-shot Prisoner's Dilemma for essentially the same reason you should one-box. I disagree with that, and I think my hypothetical illustrates that the two problems are disanalogous by presenting a more correct analogue. While there is a strong argument for one-boxing in Newcomb's problem, which I agree with, the case is less clear here. I think the argument that a TDT agent would choose cooperation in Prisoner's Dilemma is flawed. I believe TDT in its current form is not precise enough to give a clear answer to this question. After all, both the CDT argument in terms of dominated strategies and the superrational argument in terms of the underlying symmetry of the situation can be phrased in TDT depending on how you draw the causal graph over computations.

Comment by itaibn0 on Talking to yourself: A useful thinking tool that seems understudied and underdiscussed · 2014-09-11T00:57:23.400Z · score: 2 (2 votes) · LW · GW

Personally I don't expect this to be of much use to me. I find the task of translating thoughts into words more strenuous than it is for others, so I expect this to be more distracting than helpful. I have played games where I tried to subvocalise all of my thoughts the way some people have interior monologues, and they support this conclusion. I believe I have a fairly good working memory (for instance, I can play blind chess), so I don't see as much value in an external aid. Other people are commenting based on their own personal experience and feelings, so I think I can trust my own gut feeling about how this will work out for me.

Comment by itaibn0 on Alternative to Campaign Finance Reform? · 2014-08-01T01:43:30.517Z · score: 1 (1 votes) · LW · GW

I don't understand the title. You're talking about a reform to the democratic process, and you're comparing it with 'finance reform'. Those only seem tangentially related.

Comment by itaibn0 on A simple game that has no solution · 2014-07-21T18:35:31.126Z · score: 1 (1 votes) · LW · GW

You're right. I'm not actually advocating this option. Rather, I was comparing EY's seemingly arbitrary strategy with other seemingly arbitrary strategies. The only one I actually endorse is "P1: A". It's true that this specific criterion is not invariant under affine transformations of utility functions, but how do I know EY's proposed strategy wouldn't change if we multiply player 2's utility function by 100 as you propose?

(In a similar vein, I don't see how I can justify my proposal of "P1: 3/10 C 7/10 B". Where did the 10 come from? "P1: 2/7 C 5/7 B" works equally well. I only chose it because it is convenient to write down in decimal.)

Comment by itaibn0 on A simple game that has no solution · 2014-07-21T16:53:01.431Z · score: 2 (2 votes) · LW · GW

I have no idea where those numbers came from. Why not "P1: .3C .7B" to make "P2: Y" rational? Otherwise, why does P2 play Y at all? Why not "P1: C, P2: Y", which maximizes the sum of the two utilities, and is the optimal precommitment under the Rawlian veil-of-ignorance prior? Heck, why not just play the unique Nash equilibrium "P1: A"? Most importantly, if there's no principled way to make these decisions, why assume your opponent will timelessly make them the same way?

Comment by itaibn0 on The Power of Noise · 2014-06-16T22:46:26.999Z · score: 3 (3 votes) · LW · GW

I think an example of what jsteinhardt is referring to would be quicksort. It can take an arbitrary list as an argument, but for many perversely ordered inputs it takes Omega(n^2) time. However, it does have an efficient average-case complexity of O(n log n): if the input is sampled from the uniform distribution over permutations, the algorithm is guaranteed to finish in O(n log n) expected time.
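This gap between worst-case and average-case behavior is easy to exhibit by counting comparisons. A minimal sketch (naive first-element-pivot quicksort; the function name and counter convention are mine, not from any source):

```python
import random

def quicksort(xs, counter):
    """First-element-pivot quicksort; counter[0] tallies comparisons."""
    if len(xs) <= 1:
        return list(xs)
    pivot, rest = xs[0], xs[1:]
    counter[0] += len(rest)  # pivot is compared against every other element
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return quicksort(left, counter) + [pivot] + quicksort(right, counter)

n = 500
random.seed(0)

c_worst = [0]
quicksort(list(range(n)), c_worst)   # already-sorted input: worst case, n(n-1)/2 comparisons

shuffled = list(range(n))
random.shuffle(shuffled)
c_avg = [0]
quicksort(shuffled, c_avg)           # uniformly random input: on the order of 2 n ln n

print(c_worst[0], c_avg[0])
```

On the sorted input this performs exactly n(n-1)/2 = 124750 comparisons, while on the shuffled input the count is over an order of magnitude smaller, matching the O(n log n) average case.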

Many of the other examples that were considered are similar, in that the algorithm doesn't give an error when given an input outside of the expected distribution, but rather silently works less effectively.

Comment by itaibn0 on Failures of an embodied AIXI · 2014-06-07T01:28:26.734Z · score: 4 (4 votes) · LW · GW

Ack! I'm not sure what to think. When I wrote that comment, I had the impression that we had some sort of philosophical conflict, and I felt like I should make the case for my side. However, now I worry the comment was too aggressive. Moreover, it seems like we agree on most of the questions we can state precisely. I'm not sure how to deal with this situation.

I suppose I could turn some assumptions into questions: To what extent is it your goal in this inquiry to figure out 'naturalized induction'? Do you think 'naturalized induction' is something humans naturally do when thinking, perhaps imperfectly?

Comment by itaibn0 on Failures of an embodied AIXI · 2014-06-06T22:23:47.997Z · score: 1 (3 votes) · LW · GW

> Intuitively, this limitation could be addressed by hooking up the AIXItl's output channel to its source code. Unfortunately, if you do that, the resulting formalism is no longer AIXItl.

I dispute this. Any robot which instantiates AIXI-tl must consist of two parts: First, there must be a component which performs the actual computations for AIXI-tl. Second, there is a router, which observes the robot's environment and feeds it to the first component as input, and also reads the first component's output and translates it into an action the robot performs. The design of the router must of necessity make additional arbitrary choices not present in the pure description of AIXI-tl. For example, the original description of AIXI described the output as a bit-string, which in this scenario must somehow be converted into a constree for the output register. If the router is badly designed then it can create problems that no program of any intelligence can overcome. For example, imagine the router can't perform the action 'move right'.

The problem described here is not at all in AIXI-tl, but entirely in the design of the router. This can be seen from the fact that at no point do you look into the internal components of AIXI-tl or the output it would generate. If you allowed the router to change the internal registers of the robot, it would still be AIXI-tl, just with a different output router.

I think that if the robot uses such a router then it would kill itself in experimentation before it would have the chance to solve the problem, but you haven't established that. I would like to see an argument against AIXI-tl that does not rely on what it is or is not physically capable of doing, but rather on what it is intelligent enough to choose to do. After all, humans, despite supposedly being capable of "naturalized induction", would not do well in this problem either. A human cannot by force of will reprogram her brain into a static set of commands, nor can she make her brain stop emitting heat.

Finally, I want to say why I am making these arguments. It is not because I want to advocate for AIXI-tl and argue for its intelligence. The way I think of it, AIXI is the dumbest program that is still capable of learning the right behavior eventually. Actually it's worse than that; my argument here has convinced me that even with exponential resources AIXI-tl can't argue itself out of a paper bag. (Note that this argument does look into the internals of AIXI-tl rather than treating it as a black box.) So if anything I think you might be overestimating the intelligence of AIXI-tl. However, my concern is that in addition to its usual stupidity, you think AIXI-tl has an additional obstacle in terms of some sort of 'Cartesian boundary problem', and that there exists some sort of 'naturalized induction' which humans have and which AIXI and AIXI-tl don't have. I am unconvinced by this, and I think it is an unproductive line of research. Rather, I think any problem AIXI has in reasoning about itself is either one humans also have in reasoning about themselves or analogous to a problem it has reasoning about other things. In this case it is a problem humans also have.

Comment by itaibn0 on A Dialogue On Doublethink · 2014-05-10T22:58:26.666Z · score: 0 (4 votes) · LW · GW

Not if what you're trying to calculate is e^(-5).
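If the point here is the classic numerical-analysis caveat about e^(-5), it can be made concrete: summing the Taylor series for e^x directly at x = -5 suffers catastrophic cancellation, because the alternating terms grow to about 26 in magnitude while the answer is about 0.0067, so computing 1/e^5 from the all-positive series is more accurate. A sketch under that interpretation (function and variable names are mine):

```python
import math

def exp_taylor(x, n_terms=60):
    """Sum the Taylor series of e^x term by term."""
    total, term = 0.0, 1.0
    for k in range(n_terms):
        total += term
        term *= x / (k + 1)
    return total

direct = exp_taylor(-5.0)               # alternating terms up to ~26 cancel against each other
via_reciprocal = 1.0 / exp_taylor(5.0)  # all terms positive: no cancellation
exact = math.exp(-5.0)

# The direct sum loses several digits to cancellation; the reciprocal does not.
print(abs(direct - exact) / exact)
print(abs(via_reciprocal - exact) / exact)
```

In double precision the direct sum's relative error is typically several orders of magnitude worse than the reciprocal route's, even though both use the same series.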

Comment by itaibn0 on Siren worlds and the perils of over-optimised search · 2014-04-23T14:31:03.816Z · score: 0 (0 votes) · LW · GW

Remember that my original point is that I believe appearing to be good correlates with goodness, even in extreme circumstances. Therefore, I expect restructuring humans to make the world appear tempting will be to the benefit of their happiness/meaningfulness/utility. Now, I'm willing to consider that there are aspects of goodness which are usually not apparent to an inspecting human (although this moves to the borderline of where I think 'goodness' is well-defined). However, I don't think these aspects are more likely to be satisfied in a satisficing search than in an optimizing search.

Comment by itaibn0 on AI risk, new executive summary · 2014-04-19T12:57:47.864Z · score: 0 (0 votes) · LW · GW

Thinking about this, it seems like there should exist some version of diff which points out differences on the word level rather than the line level. That would be useful for text documents which only have line breaks in between paragraphs. Given how easy I expect it would be to program, such a thing almost certainly exists, but I don't know where to find it.
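Such tools do exist (GNU wdiff, for one, and `git diff --word-diff`), and a minimal version is indeed easy to program. A sketch using Python's standard-library difflib, with wdiff-style `[-...-]` / `{+...+}` markers (the function name and marker choice are mine):

```python
import difflib

def word_diff(old, new):
    """Word-level diff: mark deletions as [-...-] and insertions as {+...+}."""
    a, b = old.split(), new.split()
    out = []
    for op, i1, i2, j1, j2 in difflib.SequenceMatcher(a=a, b=b).get_opcodes():
        if op == "equal":
            out.extend(a[i1:i2])
            continue
        if op in ("delete", "replace"):
            out.append("[-%s-]" % " ".join(a[i1:i2]))
        if op in ("insert", "replace"):
            out.append("{+%s+}" % " ".join(b[j1:j2]))
    return " ".join(out)

print(word_diff("the quick brown fox", "the slow brown fox"))
# → the [-quick-] {+slow+} brown fox
```

Splitting on whitespace and diffing the resulting word sequences is all it takes; line-level diff is the same algorithm applied to a different tokenization.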

Comment by itaibn0 on Beware technological wonderland, or, why text will dominate the future of communication and the Internet · 2014-04-18T21:23:10.874Z · score: 0 (0 votes) · LW · GW

Personally I prefer speaking to writing but I prefer reading to listening. I believe part of the reason is that I set myself higher standards when I write. For instance, in a conversation I would be satisfied to finish this comment with just the first sentence, but here I want to elaborate.

Comment by itaibn0 on Siren worlds and the perils of over-optimised search · 2014-04-17T22:50:24.426Z · score: 0 (0 votes) · LW · GW

Here's the sort of thing I'm imagining:

In the beginning there are humans. Human bodies become increasingly impractical in the future environment and are abandoned. Digital facsimiles will be seen as pointless and will also be abandoned. Every component of the human mind will be replaced with algorithms that achieve the same purpose better. As technology allows the remaining entities to communicate with each other better and better, the distinction between self and other will blur, and since no-one will see any value in reestablishing it artificially, it will be lost. Individuality too is lost, and nothing that can be called human remains. However, every step happens voluntarily because what comes after is seen as better than what came before, and I don't see why I should consider the final outcome bad. If someone has different values they would perhaps be able to stop at some stage in the middle; I just imagine such people would be a minority.

Comment by itaibn0 on The value of the online hive mind · 2014-04-11T20:51:41.966Z · score: 0 (0 votes) · LW · GW

I don't think researchers review papers because they want to have power over their peers. I think they do it because it is a community norm and beneficial to their community. This is similar to why people avoid littering. Status games may still enter into it because how often someone litters or reviews papers affects their reputation.

Comment by itaibn0 on Schelling Day 2.0 · 2014-04-09T22:02:17.781Z · score: 6 (6 votes) · LW · GW

> If your die shows a one, you MAY NOT speak

I suggest you change "MAY NOT" into "MUST NOT". The statement "you MAY NOT speak" could be misinterpreted to mean that you have the permission not to speak, which you do by default.

## How to make AIXI-tl incapable of learning

2014-01-27T00:05:35.767Z · score: 4 (9 votes)