itaibn0's Shortform 2020-05-11T05:19:08.672Z
Why I Am Changing My Mind About AI Risk 2017-01-03T22:57:53.086Z
How to make AIXI-tl incapable of learning 2014-01-27T00:05:35.767Z


Comment by itaibn0 on The Point of Trade · 2021-07-13T18:02:10.229Z · LW · GW

How do you make spoiler tags?

Comment by itaibn0 on The Point of Trade · 2021-07-13T17:41:28.453Z · LW · GW

A neat thought experiment! At the end of it all, you no longer need to exchange fruit, you can just keep the fruit in place and exchange the identity of the people instead.

Comment by itaibn0 on Agency in Conway’s Game of Life · 2021-06-04T07:58:32.661Z · LW · GW

Thanks too for responding. I hope our conversation will be productive.

A crucial notion that plays into many of your objections is the distinction between the "inner intelligence" and "outer intelligence" of an object (terms derived from "inner vs. outer optimizer"). Inner intelligence is the intelligence the object has in itself as an agent, determined through its behavior in response to novel situations; outer intelligence is the intelligence required to create the object, determined through the ingenuity of its design. I understand your "AI hypothesis" to mean that any solution to the control problem must have inner intelligence. My response is claiming that while solving the control problem may require a lot of outer intelligence, I think it only requires a small amount of inner intelligence. This is because the environment in Conway's Game of Life with random dense initial conditions seems to have very low variety and to require only a small number of strategies to handle. (Although, just as I'm open-minded about intelligent life somehow arising in this environment, it's possible that there are patterns much more frequent than abiogenesis that make the environment much more variegated.)

Matter and energy are also approximately homogeneously distributed in our own physical universe, yet building a small device that expands its influence over time and eventually rearranges the cosmos into a non-trivial pattern would seem to require something like an AI.

The universe is only homogeneous at the largest scales; at smaller scales it is inhomogeneous in highly diverse ways, with structures like stars and planets and raindrops. The value of our intelligence comes from being able to deal with the extreme diversity of intermediate-scale structures. Meanwhile, at the computationally tractable scale in CGOL, dense random initial conditions do not produce intermediate-scale structures between the random small-scale sparks and ashes and the homogeneous large scale. That said, conditional on life being rare in the universe, I expect that the control problem for our universe requires lower-than-human inner intelligence.

You mention the difficulty of "building a small device that...", but that is talking about outer intelligence. Your AI hypothesis states that, however such a device can or cannot be built, the device itself must be an AI. That's where I disagree.

Now it could actually be that in our own physical universe it is also possible to build not-very-intelligent machines that begin small but eventually rearrange the cosmos. In this case I am personally more interested in the nature of these machines than in "intelligent machines", because the reason I am interested in intelligence in the first place is due to its capacity to influence the future in a directed way, and if there are simpler avenues to influencing the future in a directed way then I'd rather spend my energy investigating those avenues than investigating AI. But I don't think it's possible to influence the future in a directed way in our own physical universe without being intelligent.

Again, the distinction between inner and outer intelligence is crucial. In a pure mathematical sense of existence there exist arrangements of matter that solve the control problem for our universe, but for that to be relevant for our future there also has to be a natural process that creates these arrangements of matter at a non-negligible rate. If the arrangement requires high outer intelligence then this process must be intelligent. (For this discussion, I'm considering natural selection to be a form of intelligent design.) So intelligence is still highly relevant for influencing the future. Machines that are mathematically possible but cannot practically be created are not "simpler avenues to influence the future".

"to solve the control problem in an environment full of intelligence only requires marginally more intelligence at best"

What do you mean by this?

Sorry. I meant that the solution to the control problem need only be marginally more intelligent than the intelligent beings in its environment. The difference in intelligence between a controller in an intelligent environment and a controller in an unintelligent environment may be substantial. I realize the phrasing you quote is unclear.

In chess, one player can systematically beat another if the first is ~300 ELO rating points higher, but I'm considering that as a marginal difference in skill on the scale from zero-strategy to perfect play. If our environment is creating the equivalent of a 2000 ELO intelligence, and the solution to the control problem has 2300 ELO, then the specification of the environment contributed 2000 ELO of intelligence, and the specification of the control problem only contributed an extra 300 ELO. In other words, open-world control problems need not be an efficient way of specifying intelligence.
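For concreteness, the claim about a ~300-point gap can be checked against the standard Elo expected-score formula (a quick sketch; the 2000/2300 ratings are just the numbers from the example above):

```python
def elo_expected_score(r_a: float, r_b: float) -> float:
    """Expected score of a player rated r_a against one rated r_b,
    under the standard Elo model."""
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

# A 2300-rated controller vs. a 2000-rated environment: the stronger
# player scores about 85% of the time, i.e. it wins systematically,
# even though 300 points is a small slice of the full skill scale.
print(round(elo_expected_score(2300, 2000), 3))  # 0.849
```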

But if one entity reliably outcompetes another entity, then on what basis do you say that this other entity is the more intelligent one?

On the basis of distinguishing narrow intelligence from general intelligence. A solution to the control problem is guaranteed to outcompete other entities in force or manipulation, but it might be worse at most other tasks. The sort of thing I had in mind for "NP-hard problems in military strategy" would be "this particular pattern of gliders is particularly good at penetrating a defensive barrier, and the only way to find this pattern is through a brute-force search". Knowing this can give the controller a decisive advantage in military conflicts without making it any better at any other task, and can permit the controller to have lower general intelligence while still dominating.

Comment by itaibn0 on Agency in Conway’s Game of Life · 2021-06-04T04:53:20.936Z · LW · GW

Thanks. I also found an invite link in a recent reddit post about this discussion (was that by you?).

Comment by itaibn0 on Agency in Conway’s Game of Life · 2021-06-02T04:57:25.666Z · LW · GW

While I appreciate the analogy between our real universe and simpler physics-like mathematical models like the Game of Life, assuming intelligence doesn't arise elsewhere in your configuration, this control problem does not seem substantially different or more AI-like than any other engineering problem. After all, there are plenty of other problems that involve leveraging a narrow form of control over a predictable physical system to achieve more refined control, e.g., building a rocket that hits a specific target. The structure that arises from a randomly initialized pattern in Life should be homogeneous in a statistical sense, and so highly predictable. I expect almost all of it to stabilize into debris of stable and periodic patterns. It's not clear whether it's possible to manipulate or clear the debris in controlled ways, but if it is possible, then a single strategy will work for the entire grid. It may take a great deal of intelligence to come up with such a strategy, but once it is found it can be hard-coded into the initial Life pattern, without any need for an "inner optimizer". The easiest-to-design solution may involve computer-like patterns, with the pattern keeping track of state involved in debris-clearing and each part tracking its location to determine its role in making the final smiley pattern, but I don't see any need for AI-like patterns beyond that. On the other hand, if there are inherent limits on the ability to manipulate debris, then no amount of reflection by our starting pattern is going to fix that.

That is assuming intelligence doesn't arise in the random starting pattern. If it does, our starting configuration would need to overpower every other intelligence that arises and tries to control the space, and this would plausibly require it to be intelligent itself. But if this is the case then the evolution of the random pattern already encodes the concept of intelligence in a much simpler way than this control problem does. To predict the structures that would arise from a random initial configuration, the idea of intelligence would naturally come up. Meanwhile, to solve the control problem in an environment full of intelligence only requires marginally more intelligence at best, and compared to the no-control prediction problem the control problem adds some complexity for not very much increase in intelligence. Indeed, the solution to the control problem may even be less intelligent than the structures it competes against, and make up for that with hard-coded solutions to NP-hard problems in military strategy.

On a different note, I'm flattered to see a reference in the comments to some of my own thoughts on working through debris in the Game of Life. It was surprising to see interest in that resurge, and especially surprising to see that interest come from people in AI alignment.

Comment by itaibn0 on Agency in Conway’s Game of Life · 2021-06-02T03:12:35.483Z · LW · GW

Thanks for linking to my post! I checked the other link, on Discord, and for some reason it's not working.

Comment by itaibn0 on Six economics misconceptions of mine which I've resolved over the last few years · 2020-07-16T14:29:17.282Z · LW · GW

Do you know of any source that gives the same explanations in text instead of video?

Edit: Never mind, the course has links to "Lecture PDF" that seem to summarize them. For the first lecture the summary is undetailed and I couldn't make sense of it without watching the videos, but they appear to get more detailed later on.

Comment by itaibn0 on Preview On Hover · 2020-06-25T01:50:11.912Z · LW · GW

I don't like the fact that the preview doesn't disappear when I stop hovering. I find the preview visually jarring enough that I would prefer to spend most of my reading time without a spurious preview window. At the very least, there should be a way to manually close the preview. Otherwise I would want to avoid hovering over any links and to refresh when I do, which is a bad reading experience.

Comment by itaibn0 on A non-mystical explanation of "no-self" (three characteristics series) · 2020-05-11T06:48:54.023Z · LW · GW

My main point of disagreement is the way you characterize these judgements as feelings. With minor quibbles I agree with your paragraph after substituting "it feels" with "I think". In your article you distinguish between abstract intellectual understanding which may believe that there is no self in some sense and some sort of lower-level perception of the self which has a much harder time accepting this; I don't follow what you're pointing to in the latter.

To be clear, I do acknowledge experiencing mental phenomena that are about myself in some sense, such as a proprioceptive distinction between my body and other objects in my mental spatial model, an introspective ability to track my thoughts and feelings, and a sense of the role I play in my community that I am expected to adhere to. However, the forms of these pieces of mental content are wildly different, and it is only through an abstract mental categorization that I recognize them as all being about the same thing. Moreover, I believe these senses are imperfect but broadly accurate, so I don't know what it is that you're saying is an illusion.

Comment by itaibn0 on itaibn0's Shortform · 2020-05-11T05:19:09.125Z · LW · GW

Crossposted on my blog:

Lightspeed delays lead to multiple technological singularities.

By Yudkowsky's classification, I'm assuming the Accelerating Change Singularity: as technology gets better, the characteristic timescale at which technological progress is made becomes shorter, so that the time until this process reaches physical limits is short from the perspective of our timescale. At a short enough timescale the lightspeed limit becomes important: when information cannot traverse the diameter of civilization in the time remaining until the singularity, further progress must be made independently in different regions. The subjective time from then on may still be large, and without communication the different regions can develop different interests and, after their singularities, compete. As the characteristic timescale becomes shorter, the independent regions split further.

Comment by itaibn0 on A non-mystical explanation of "no-self" (three characteristics series) · 2020-05-11T04:57:59.745Z · LW · GW

I'm still not sure what you mean by the feeling of having a self. Your exercise of being aware of looking at an object reminds me of the bouba/kiki effect: the words "bouba" and "kiki" are meaningless, but you can ask people to label which shapes are bouba and which are kiki in spite of that. The fact that they answer does not mean they deep down believe that "bouba" and "kiki" are real words. In the same way, when you ask me to be aware of being someone looking at an object, I may have a response -- observing that the proposition "I am looking at my phone" is true, contemplating the simpleminded self-evidence of this fact, thinking about how this relates to the points Kaj is trying to make -- and there may even be some regularities in this response I can't rationally justify. Nonetheless this response is not a feeling of a self, nor is it something I am mistakenly confusing with a self -- any conflation is only being made from my attempt to interpret an unclear instruction, and is not a mistake I would make in regular thought.

A related point is that the word "self" is so rarely used in ordinary language. The suffix "-self", like "myself" or "yourself", yes, but not "self". That's only said when people are doing philosophy.

Comment by itaibn0 on TurnTrout's shortform feed · 2020-05-06T20:30:33.088Z · LW · GW

This map is not a surjection because not every map from the rational numbers to the real numbers is continuous, so not every sequence represents a continuous function. It is injective, and so it shows that a basis for the latter space is at least as large in cardinality as a basis for the former space. One can construct an injective map in the other direction, showing that both spaces have bases of the same cardinality, and so they are isomorphic.

Comment by itaibn0 on Open question: are minimal circuits daemon-free? · 2018-05-20T21:01:05.909Z · LW · GW

This may be relevant:

Imagine a computational task that breaks up into solving many instances of problems A and B. Each instance reduces to at most n instances of problem A and at most m instances of problem B. However, these two maxima are never achieved both at once: the sum of the number of instances of A and instances of B is bounded above by some r. One way to compute this with a circuit is to include n copies of a circuit for computing problem A and m copies of a circuit for computing problem B. Another approach is to include r copies of a circuit which, with suitable control inputs, can compute either problem A or problem B. Although this approach requires more complicated control circuitry, if r is significantly less than n+m and the size of the dual-purpose circuit is significantly less than the sum of the sizes of the circuits for A and B (which may occur if problems A and B have common subproblems X and Y which can use a shared circuit), then this approach will use fewer logic gates overall.
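The gate-count tradeoff above can be sketched with made-up numbers (all circuit sizes here are hypothetical, purely for illustration):

```python
# Hypothetical gate counts, purely illustrative.
size_a, size_b = 1000, 1200   # dedicated circuits for problems A and B
size_ab = 1500                # one switchable circuit computing A or B
control_overhead = 400        # extra control/routing logic for switching

n, m = 8, 8                   # worst-case counts of A- and B-instances
r = 10                        # bound on total instances: #A + #B <= r

dedicated = n * size_a + m * size_b       # n copies of A, m copies of B
shared = r * size_ab + control_overhead   # r switchable copies

print(dedicated, shared)  # 17600 15400: sharing wins when r << n + m
```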

More generally, consider some complex computational task that breaks down into a heterogeneous set of subproblems which are distributed in different ways depending on the exact instance. Analogous reasoning suggests that the minimal circuit for solving this task will involve a structure akin to emulating a CPU: There are many instances of optimized circuits for low-level tasks, connected by a complex dependency graph. In any particular instance of the problem the relevant data dependencies are only a small subgraph of this graph, with connections decided by some control circuitry. A particular low-level circuit need not have a fixed purpose, but is used in different ways in different instances.

So, our circuit has a dependency tree of low-level tasks optimized for solving our problem in the worst case. Now, at a starting stage of this hierarchy it has to process information about how a particular instance is separated into subproblems and generate the control information for solving this particular instance. The control information might need to be recomputed as new information about the structure of the instance is made manifest, and sometimes a part of the circuit may perform this recomputation without full access to potentially conflicting control information calculated in other parts.

Comment by itaibn0 on Against the Linear Utility Hypothesis and the Leverage Penalty · 2017-12-14T21:08:38.776Z · LW · GW

Yes, this is the refutation of Pascal's mugger that I believe in, although I never got around to writing it up like you did. However, I disagree with you that it implies our utilities must be bounded. All the argument shows is that ordinary people never assign events enormous utility values without also assigning them commensurably low probabilities. That is, normative claims (i.e., claims that certain events have certain utility assigned to them) are judged fundamentally differently from factual claims, and require more evidence than merely the complexity prior. In a moral intuitionist framework this is the fact that anyone can say that 3^^^3 lives are suffering, but it would take living 3^^^3 years and getting to know 3^^^3 people personally to feel the 3^^^3 times utility associated with these events.
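For reference, 3^^^3 is Knuth's up-arrow notation (3↑↑↑3). A minimal sketch of the definition; only the small cases are evaluated, since 3↑↑↑3 itself is far too large to compute:

```python
def up_arrow(a: int, n: int, b: int) -> int:
    """Knuth's up-arrow: a ↑^n b. One arrow is exponentiation;
    each extra arrow iterates the previous operation."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))  # 3^3 = 27
print(up_arrow(3, 2, 3))  # 3^(3^3) = 7625597484987
# up_arrow(3, 3, 3) is 3^^^3: a power tower of 7625597484987 threes.
```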

I don't know how to distinguish the scenarios where our utilities are bounded and where our utilities are unbounded but regularized (or whether our utilities are sufficiently well-defined to distinguish the two). Still, I want to emphasize that the latter situation is possible.

Comment by itaibn0 on Changing habits for open threads · 2017-11-26T15:45:53.346Z · LW · GW

Quick thought: I think you are relying too much on your own experience, which I don't expect to generalize well. Different people will have different habits in how much thought they put into their comments, and I expect some put in too much thought and some too little. We should put more effort into identifying the aggregate tendencies of people on this forum before we make recommendations.

Then again, perhaps you are just offering the idea casually, so it's okay. Still I worry that the most likely future pathways for posts like this are "get ignored" and "get cited uncritically", and there's no clear place for this more thorough investigation.

Comment by itaibn0 on Living in an Inadequate World · 2017-11-14T14:02:58.119Z · LW · GW
What's the fallacy you're claiming?

First, to be clear, I am referring to things such as this description of the prisoner's dilemma and EY's claim that TDT endorses cooperation. The published material has been careful to only say that these decision theories endorse cooperation among identical copies running the same source code, but as far as I can tell some researchers at MIRI still believe this stronger claim and this claim has been a major part of the public perception of these decision theories (example here; see section II).

The problem is that when two FDT agents with different utility functions and different prior knowledge are facing a prisoner's dilemma with each other, their decisions are actually two different logical variables X0 and X1. The argument for cooperating is that X0 and X1 are sufficiently similar to one another that in the counterfactual where X0=C we also have X1=C. However, you could just as easily take the opposite premise, where X0 and X1 are sufficiently dissimilar that counterfactually changing X0 will have no effect on X1. Then you are left with the usual CDT analysis of the game. Given the vagueness of logical counterfactuals, it is impossible to distinguish these two situations.

Here's a related question: What does FDT say about the centipede game? There's no symmetry between the players, so I can't just plug in the formalism. I don't see how you can give an answer that's in the spirit of cooperating in the prisoner's dilemma without reaching the conclusion that FDT involves altruism among all FDT agents through some kind of veil-of-ignorance argument. And that conclusion runs counter to the affine-transformation-invariance of utility functions.

Comment by itaibn0 on Living in an Inadequate World · 2017-11-14T12:22:41.759Z · LW · GW

Some meta-level comments and questions:

This discussion has moved far off-topic away from EY's general rationality lessons. I'm pleased with this, since these are topics that I want to discuss, but I want to mention this explicitly since constant topic-changes can be bad for a productive discussion by preventing the participants from going into any depth. In addition, lurkers might be annoyed at reading yet another AI argument. Do you think we should move the discussion to a different venue?

My motivations for discussing this are a chance to talk about criticisms of MIRI that I haven't gotten down in writing in detail before, a chance to get a rough impression of how MIRI supporters respond to these explanations, and more generally an opportunity to practice intellectually honest debate. I don't expect the discussion to go on long enough to resolve our disagreements, but I am trying anyways to get practice. I'm currently enthusiastic about continuing the discussion, but it's the sort of enthusiasm that could easily wane in a day. What is your motivation?

Comment by itaibn0 on Living in an Inadequate World · 2017-11-14T00:33:13.273Z · LW · GW
"but a fundamental assumption behind TDT and UDT is the existence of a causal structure behind logical statements, which sounds implausible to me."
None of the theories mentioned make any assumption like that; see the FDT paper above.

Page 14 of the FDT paper:

Instead of a do operator, FDT needs a true operator, which takes a logical sentence φ and updates P to represent the scenario where φ is true...
...Equation (4) works given a graph that accurately describes how changing the value of a logical variable affects other variables, but it is not yet clear how to construct such a thing—nor even whether it can be done in a satisfactory manner within Pearl’s framework.

This seems wrong, if you're saying that we can't formally establish the behavior of different decision theories, or that applying theories to different cases requires ad-hoc emendations; see section 5 of "Functional Decision Theory" (and subsequent sections) for a comparison and step-by-step walkthrough of procedures for FDT, CDT, and EDT. One of the advantages we claim for FDT over CDT and EDT is that it doesn't require ad-hoc tailoring for different dilemmas (e.g., ad-hoc precommitment methods or ratification procedures, or modifications to the agent's prior).

The main thing that distinguishes FDT from CDT is how the true operator mentioned above functions. As far as I'm aware this is always inserted by hand. This is easy to do for situations where entities make perfect simulations of one another, but there aren't even rough guidelines for what to do when the computations being done cannot be delineated in such a clean manner. In addition, if this were a rich research field I would expect more "math that bites back", i.e., substantive results that reduce to clearly-defined mathematical problems whose answer wasn't expected during the formalization.

This point about "load-bearing elements" is at its root an intuitive judgement that might be difficult for me to convey properly.

Comment by itaibn0 on Living in an Inadequate World · 2017-11-13T14:44:37.512Z · LW · GW

Thinking further, I've spotted something that may be a crucial misunderstanding. Is the issue whether EY was right to create his own technical research institute on AI risk, or is it whether he was right to pursue AI risk at all? I agree that before EY there was relatively little academic work on AI risk, and that he played an important role in increasing the amount of attention the issue receives. I think it would have been a mistake for him to ignore the issue on the basis that the experts must know better than him and they aren't worried.

On the other hand, I expect an equally well-funded and well-staffed group that is mostly within academia to do a better job than MIRI. I think EY was wrong in believing that he could create an institute that is better at pursuing long-term technical research in a particular topic than academia.

Comment by itaibn0 on Living in an Inadequate World · 2017-11-13T13:35:59.802Z · LW · GW
When I think about the people working on AGI outcomes within academia these days, I think of people like Robin Hanson, Nick Bostrom, Stuart Russell, and Eric Drexler, and it's not immediately obvious to me that these people have converged more with each other than any of them have with researchers at MIRI.

I see the lack of convergence between people in academia as supporting my position, since I am claiming that MIRI is looking too narrowly. I think AI risk research is still in a brainstorming stage where we still don't have a good grasp on what all the possibilities are. If all of these people have rather different ideas for how to go about it, why is it just the approaches that Eliezer Yudkowsky likes that are getting all the funding?

I also have specific objections. Let's take TDT and FDT as an example, since they were mentioned in the post. The primary motivation for them is that they handle Newcomb-like dilemmas better. I don't think Newcomb-like dilemmas are relevant for the reasoning of potentially dangerous AIs, and I don't think you will get a good holistic understanding of what makes a good reasoner out of these theories. One secondary motivation for TDT/UDT/FDT is a fallacious argument that they endorse cooperation in the true prisoner's dilemma. Informal arguments seem to be the load-bearing element in applying these theories to any particular problem; the technical work seems to be mainly formalizing narrow instances of these theories to agree with the informal intuition. I don't know about FDT, but a fundamental assumption behind TDT and UDT is the existence of a causal structure behind logical statements, which sounds implausible to me.

Comment by itaibn0 on Living in an Inadequate World · 2017-11-12T23:20:34.888Z · LW · GW

(Background: I used to be skeptical about AI risk as a high-value cause, now I am uncertain, and I am still skeptical of MIRI.)

I disagree with you about MIRI compared with mainstream academia. Academics may complain about the way academia discourages "long-term substantive research projects", but taking a broader perspective academia is still the best thing there is for such projects. I think you misconstrued comments by academics complaining about their situation on the margin as being statements about academia in the absolute, and thereby got the wrong idea about the relative difficulty of doing good research within and outside academia.

When you compete for grant funding, your work is judged by people with roughly the same level of expertise as you. When you make a publicly-funded research institute, your work is judged far more shallowly. That you chose the second path rather than the first left a bad first impression on me when I first learned of it, like you can't make a convincing case in a fair test. As MIRI grew and as I learned more about it, I got the impression that since MIRI is a small team with too little contact with a broader intellectual community, it prematurely reached a consensus on a particular set of approaches and assumptions that I think are likely to go nowhere.

Comment by itaibn0 on Announcing AASAA - Accelerating AI Safety Adoption in Academia (and elsewhere) · 2017-06-21T01:55:55.963Z · LW · GW

I specifically appreciate the article on research debt.

Since I was confused by this when I first read this, I want to clarify: As far as I can tell the article is not written by anybody associated with AASAA. You're saying it was nice of toonalfrink to link to it.

(I'm not sure if this comment is useful, since I don't expect a lot of people to have the same misunderstanding I did.)

Comment by itaibn0 on [stub] 100-Word Unpolished Insights Thread (3/10-???) · 2017-03-17T22:01:30.273Z · LW · GW

I'm not sure to what extent you want people to criticize ideas in this thread, and I'm going to test the waters. Give me feedback on how well this matches the norms you envision.

An immediate flaw comes to mind, that any elaboration of this idea should respond to: Changing the high school curriculum is very difficult. If you've acquired the social capital to change the curriculum of a high school, you should not spend it by making such a small, marginal contribution, but rather you could probably find something with a larger effect with the same social capital.

Comment by itaibn0 on Act into Fear and Abandon all Hope · 2017-01-04T13:20:56.126Z · LW · GW

You start the discussion with a very practical frame: "Here is some advice I intend to give you." You give caveats, then you give the advice, and you give some justification. The advice sounds plausible. Then you continue to a very philosophical discussion on what fear is and what people think about it that does not appear to tie in with the practical frame. Without the later parts the article would appear very lopsided, with so much caveat and so little content, but I don't see how they actually help. Alternately, you could remove everything up to the 10th paragraph and write a very different sort of essay.

Comment by itaibn0 on A quick note on weirdness points and Solstices [And also random other Solstice discussion] · 2017-01-04T01:09:14.416Z · LW · GW

"Straw that breaks the camel's back" implies the existence of a large pre-existing weight, so your claim is a tautology.

Comment by itaibn0 on Progress and Prizes in AI Alignment · 2017-01-04T00:03:55.174Z · LW · GW

You point out a problem: There's no way to tell which organizations are making progress on AI alignment, and there is little diversity in current approaches. You turn this into the question: How do we create prizes that incentivize progress in AI alignment? You're missing a step or two here.

I'd say the logic goes the opposite direction: because there are no clear objectively measurable targets that will improve AI safety, prizes are probably a bad idea for increasing the diversity and effectiveness of AI safety research.

Comment by itaibn0 on Act into Fear and Abandon all Hope · 2017-01-03T23:26:14.803Z · LW · GW

Writing suggestion: Drop everything past the 10th paragraph ("It’s not immediately obvious that you’d want to overcome fear, though...").

Comment by itaibn0 on On the importance of Less Wrong, or another single conversational locus · 2016-12-06T00:59:21.851Z · LW · GW

Perhaps I should not have used such sensationalist language. I admit I don't know the whole story, and that more detail would likely reveal many nonrational reasons the change occurred. Still, I suspect rational persuasion did play a role, if not a complete one. Anecdotally, the Less Wrong discussion changed my opinion of polyamory from "haven't really thought about it that much" to "sounds plausible but I haven't tried it".

In any case, if your memory of that section of Less Wrong history contributes positively to your nostalgia, it's worth reconsidering the chance that events like that will ever happen again.

Comment by itaibn0 on On the importance of Less Wrong, or another single conversational locus · 2016-12-04T22:53:08.208Z · LW · GW

Given the community's initial heavy interest in the heuristics & biases research, I am amused that there is no explicit mention of the sunk cost fallacy. Seriously, watch out for that.

My opinion is that revitalizing the community is very likely to fail, and I am neutral on whether it's worth trying anyway for the current prominent rationalists. A lot of people are suggesting restoring the website with a more centralized structure. It should be obvious the result won't work the same as the old Less Wrong.

Finally, a reminder on Less Wrong history, which suggests that we lost more than a group of high-quality posters: Less Wrong wasn't always a polyamory hub. It became that way because there was a group of people who seriously believed they could improve the way they think, a few noticed they didn't have any good reason to be monogamous, set out to convince the others, and succeeded. Do you think a change of that scale will ever happen in the future of the rationalist community?

Comment by itaibn0 on A few misconceptions surrounding Roko's basilisk · 2015-10-07T23:08:55.019Z · LW · GW

Based on personal experience, if you're dreaming I don't recommend trying to wake yourself up. Instead, enjoy your dream until you're ready to wake up naturally. That way you'll have far better sleep.

Comment by itaibn0 on Open Thread, Jul. 27 - Aug 02, 2015 · 2015-07-28T18:51:57.920Z · LW · GW

"assume unbelievable X".

Only this is not an unbelievable X, its an entirely believable X (I wouldn't have any reason to ask an unbelieveable - as would anyone asking a question - unless they are actually trying to trick you with a question). In fact - assuming that people are asking you to believe an "unbelievable X" is a strawman of the argument in point.

Are you sure that's how you want to defend your question? If you defend the question by saying that the premise is believable, you are implicitly endorsing the standard that questions should only be answered if they are reasonable. However, accepting this standard runs the risk that your conversational partner will judge your question to be unreasonable even if it isn't and fail to answer your question, in exactly the way you're complaining about. A better standard for the purpose of getting people to answer the questions you ask literally is that people should answer the questions that you ask literally even if they rely on fantastic premises.

Can you do me a favour and try to steelman the question I asked? And see what the results are, and what answer you might give to it?

A similar concern is applicable here: Recall that steelmanning means, when encountering an argument that seems easily flawed, not to respond to that argument but to strengthen it in ways that seem reasonable to you and answer that instead. That sounds like the exact opposite of what you want people to do to your questions.

Comment by itaibn0 on Open Thread, Jul. 27 - Aug 02, 2015 · 2015-07-27T21:16:46.981Z · LW · GW

Sometimes what happens is that people don't know the answer to the question you're asking but still want to contribute to the discussion, so they answer a different question which they know the answer to. In this case the solution is to find someone who knows the answer before you start asking.

Comment by itaibn0 on Beyond Statistics 101 · 2015-06-28T23:38:58.060Z · LW · GW

I don't know about Grothendieck, but the two other sources appear to have softer criticism of the mathematical community than "actually functioning as a cult".

Comment by itaibn0 on How my social skills went from horrible to mediocre · 2015-06-09T06:12:14.246Z · LW · GW

That's why I said "supposed to do". The core argument behind schooling is that we can make a person much more capable by exposing them to things they would not otherwise be exposed to, and that it is valuable to give a broad background in many different topics. Fundamentally this is similar to what you're suggesting, and the differences you point out just indicate that school has a bad choice of curriculum and teaches it badly. The primary novelty in what you're suggesting is that you want "a lot of different type of experience" with a shallow view on each topic ("a different profession... every day"), whereas school typically spends a lot of time on a couple of different topics but with essentially the same type of experience. I do not intend to comment on whether I think this will work better.

For the record, I don't know what Toastmasters does, but the schools I've been to had Drama class and occasionally required giving presentations.

Comment by itaibn0 on How my social skills went from horrible to mediocre · 2015-05-23T04:49:01.378Z · LW · GW

That's exactly what school is supposed to do.

Comment by itaibn0 on Bragging Thread May 2015 · 2015-05-11T21:05:59.172Z · LW · GW

I got an honorable mention in the 2014 Putnam Competition. I took the test in December and heard the results in April, but I haven't posted this in other bragging threads, so I'm not sure if this is appropriate here.

Comment by itaibn0 on Guidelines for Upvoting and Downvoting? · 2015-05-07T02:07:20.079Z · LW · GW

Downvotes sort of do the opposite, but it's not perfectly symmetrical because scores below zero pack an extra punch.

The standard guideline is to upvote if you want more of that kind of comment, and downvote if you want less. The asymmetry between upvotes and downvotes comes from the fact that Less Wrongers on the whole want more content on Less Wrong rather than less. Negative scores pack a punch because they mean your comment would be better off not existing.

Well really, I think it's mostly that people just have a pre-existing idea of the connotation of negative numbers, but I gave this retroactive justification to show that I think the result is surprisingly internally consistent.

Comment by itaibn0 on Is Scott Alexander bad at math? · 2015-05-07T01:35:18.482Z · LW · GW

Based on JonahSinick's prior comments, his motivation for asking this question is pretty clear. You have already critiqued the thought process that made him think this question is necessary, to attack it again is almost double-counting. I think if you had answered the question directly the discussion would have a better chance of bootstrapping out of mutual unintelligibility. Then again, I mostly lurk and only rarely participate in internet debates so I don't feel I really understand how any given discussion strategy would actually play out. Also, I cheated, since Jonah already expressed a desire for a direct answer.

Comment by itaibn0 on Is Scott Alexander bad at math? · 2015-05-06T19:42:35.883Z · LW · GW

Other commenters have said similar things, but I want to express this in my own words. Doing mathematics requires multiple skills, and an aesthetic sense may be an underappreciated one of them. You argue that Scott has a good aesthetic sense. I also think that Scott probably has good abilities in some of the skills necessary for doing mathematics. But from Scott's account he appears to be lacking in other skills. Why do you think that what Scott has is sufficient? You mention that early college courses are not representative of real math, but even at higher levels you need skills such as reading formulas, applying algorithms, and understanding the implicit meaning of unmotivated (or even imperfectly motivated) definitions. Keep in mind that Scott relates here that other people skilled in math have tried to educate him outside of a college context.

I'm not saying I think your conclusion is wrong; I'm uncertain myself. And even Scott admits "I don’t know if it’s that I’m bad at math, or that I just don’t enjoy math enough to be intrinsically motivated to pursue it," (same link as above), which sounds a bit like a retreat toward your way of thinking.

Comment by itaibn0 on Is Scott Alexander bad at math? · 2015-05-06T04:59:51.728Z · LW · GW

predicting and modelling a preexisting reality

Depending on how you define "preexisting reality", most professional mathematics can be said not to achieve this. In any case, the terms under which people usually praise Douglas Hofstadter do not include this sort of achievement. And if you really want to know what Hofstadter has done, there's this.

Comment by itaibn0 on Why isn't the following decision theory optimal? · 2015-04-16T06:48:38.314Z · LW · GW

an informal version of Updateless Decision Theory

Are you implying that UDT is formal?

Comment by itaibn0 on A pair of free information security tools I wrote · 2015-04-14T14:57:26.261Z · LW · GW

I've never seen it stated as a requirement of the PGP protocol that it is impossible to hide extra information in a signature. In an ordinary use case this is not a security risk; it's only a problem when the implementation is untrusted. I have as much disrespect as anyone towards people who think they can easily achieve what experts who spent years thinking about it can't, but that's not what is going on here.

Comment by itaibn0 on A pair of free information security tools I wrote · 2015-04-14T14:16:47.018Z · LW · GW

How much money are you willing to bet on that?

If the amount is less than $50,000, I suggest you just offer it all as prize to whoever proves you wrong. The value to your reputation will be more than $5, and due to transaction costs people are unlikely to bet with you directly with less than $5 to gain.

Comment by itaibn0 on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 108 · 2015-02-21T02:35:24.036Z · LW · GW

That quote is from chapter 74. I mention this because you didn't specify and to save the trouble for others to search.

Comment by itaibn0 on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 104 · 2015-02-16T02:18:27.507Z · LW · GW

Remember that no matter what happens, the Hufflepuff boy will still come to Harry at a bit after 11:04. This means either that Voldemort will survive this encounter and retain mobility in four hours, or that he set up this message in advance (or that Harry is wrong about the source of this message).

Comment by itaibn0 on Can AIXI be trained to do anything a human can? · 2014-10-21T23:52:27.318Z · LW · GW

I don't think guided training is generally the right way to disabuse an AIXI agent of misconceptions we think it might have. What training amounts to is having the agent's memory begin with some carefully constructed string s0. All this does is change the agent's prior from some P based on Kolmogorov complexity to the prior P'(s) = P(s0 + s | s0) (here + is concatenation). If what you're really doing is changing the agent's prior to what you want, you should do that with self-awareness and no artificial restriction. In certain circumstances guided training might be the right method, but the general approach should be to think about what prior we want and hard-code it as effectively as possible. Taken to the natural extreme this amounts to making an AI that works on completely different principles than AIXI.
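The "training is just conditioning the prior" point can be sketched concretely. This is a toy illustration, not real AIXI: the complexity-based prior is replaced by an arbitrary made-up weighting over short bit strings, and "training" on a prefix s0 is implemented as the conditional distribution P'(s) = P(s0 + s | s0).

```python
from itertools import product

def make_prior(length):
    # Toy stand-in for a complexity-based prior over bit strings:
    # weight each string by 2**(-number of bit flips), then normalize.
    # (The weighting is made up purely for illustration.)
    def flips(s):
        return sum(a != b for a, b in zip(s, s[1:]))
    strings = ["".join(bits) for bits in product("01", repeat=length)]
    weights = {s: 2.0 ** -flips(s) for s in strings}
    total = sum(weights.values())
    return {s: w / total for s, w in weights.items()}

def condition_on_prefix(prior, s0):
    # The "trained" agent's prior: the distribution over continuations s
    # given that its memory begins with the training string s0,
    # i.e. P'(s) = P(s0 + s) / P(s0).
    mass = sum(p for s, p in prior.items() if s.startswith(s0))
    return {s[len(s0):]: p / mass
            for s, p in prior.items() if s.startswith(s0)}

prior = make_prior(4)
trained = condition_on_prefix(prior, "00")
print(trained)
```

The point the sketch makes visible is that training can only ever produce priors of this conditional form, whereas hard-coding the prior directly is unrestricted.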

Comment by itaibn0 on Happiness Logging: One Year In · 2014-10-10T00:34:07.235Z · LW · GW

Overall my experience with logging has made me put less trust in "how happy are you right now" surveys of happiness. Aside from the practical issues like logging unexpected night wake-time, I mostly don't feel like the numbers I'm recording are very meaningful. I would rather spend more time in situations I label higher than lower on average, so there is some signal there, but I don't actually have the introspection to accurately report to myself how I'm feeling.

I've also been suspicious of happiness surveys for a similar reason. One theory I have is that a large portion of the variation in happiness set-point is just that different people have different tendencies in answering "rate from 1-10"-type questions. It would be interesting to test how much happiness set-point correlates with answers to questions such as "rate this essay from 1 to 10". Another test for this theory, one far more likely to have actually been conducted already, is to see how well happiness set-point correlates with neurological signals of happiness (the difficulty here being that the primary way to determine whether a neurological signal indicates happiness is through self-report; nonetheless, if the happiness set-point correlates with any neurological signal, then it is more likely that this signal plays a role in happiness than in inducing high number ratings).

Comment by itaibn0 on Causal decision theory is unsatisfactory · 2014-09-14T16:54:30.703Z · LW · GW

On this topic, I'd like to suggest a variant of Newcomb's problem that I don't recall seeing anywhere on LessWrong (or anywhere else). As usual, Omega presents you with two boxes, box A and box B. She says "You may take either box A or both boxes. Box B contains $1,000. Box A either contains $1,000,000 or is empty. Here is how I decided what to put in box A: I consider a perfectly rational agent being put in a situation identical to the one you're in. If I predict she takes one box I put the money in box A, otherwise I put nothing." Suppose further that Omega has put many other people into this exact situation, and in all those cases the amount of money in box A was identical.

The reason I mention the problem is that while the original Newcomb's problem is analogous to the Prisoner's Dilemma with clones that you described, this problem is more directly analogous to the ordinary one-shot Prisoner's Dilemma. In the Prisoner's Dilemma with clones and in Newcomb's problem, your outcome is controlled by a factor that you don't directly control but that is nonetheless influenced by your strategy. In the ordinary Prisoner's Dilemma and in my Newcomb-like problem, this factor is controlled by a rational agent that is distinct from yourself (although note that in the Prisoner's Dilemma this agent's outcome is directly influenced by what you do, but not so in my own dilemma).

People have made the argument that you should cooperate in the one-shot Prisoner's Dilemma for essentially the same reason you should one-box. I disagree with that, and I think my hypothetical illustrates that the two problems are disanalogous by presenting a more correct analogue. While there is a strong argument for one-boxing in Newcomb's problem, which I agree with, the case is less clear here. I think the argument that a TDT agent would choose cooperation in Prisoner's Dilemma is flawed. I believe TDT in its current form is not precise enough to give a clear answer to this question. After all, both the CDT argument in terms of dominated strategies and the superrational argument in terms of the underlying symmetry of the situation can be phrased in TDT depending on how you draw the causal graph over computations.
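The payoff structure of the variant can be made explicit with a small sketch (amounts taken from the problem statement; the function names are my own). Because Omega's prediction concerns a generic rational agent rather than you specifically, your payoff is a function of two separate inputs: the predicted agent's policy and your own choice.

```python
PRIZE_A = 1_000_000  # box A, filled only if the predicted rational agent one-boxes
PRIZE_B = 1_000      # box B, always present

def payoff(predicted_one_boxes: bool, you_one_box: bool) -> int:
    # Box A's contents depend only on the prediction about the
    # generic rational agent, not on your own choice.
    a = PRIZE_A if predicted_one_boxes else 0
    return a if you_one_box else a + PRIZE_B

for pred in (True, False):
    for you in (True, False):
        print(f"predicted one-box={pred}, you one-box={you}: ${payoff(pred, you):,}")
```

Holding the prediction fixed, two-boxing always gains an extra $1,000, which is the dominance argument; the superrationality argument only bites if your choice and the prediction are correlated, i.e. if you are the rational agent being modeled.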

Comment by itaibn0 on Talking to yourself: A useful thinking tool that seems understudied and underdiscussed · 2014-09-11T00:57:23.400Z · LW · GW

Personally I don't expect this to be of much use to me. I find the task of translating thoughts into words more strenuous than it is for others, and so I expect this to be more distracting than helpful. I've played games where I tried to subvocalise all of my thoughts the way some people have interior monologues, and they support this conclusion. I believe I have a fairly good working memory (for instance, I can play blind chess), so I don't see as much value in an external aid. Other people are commenting based on their own personal experience and feelings, so I think I can trust my own gut feeling about how this would work out for me.

Comment by itaibn0 on Alternative to Campaign Finance Reform? · 2014-08-01T01:43:30.517Z · LW · GW

I don't understand the title. You're talking about a reform to the democratic process, and you're comparing it with 'finance reform'. Those only seem tangentially related.