Posts

Quantilizer ≡ Optimizer with a Bounded Amount of Output 2021-11-16T01:03:34.849Z
itaibn0's Shortform 2020-05-11T05:19:08.672Z
Why I Am Changing My Mind About AI Risk 2017-01-03T22:57:53.086Z
How to make AIXI-tl incapable of learning 2014-01-27T00:05:35.767Z

Comments

Comment by itaibn0 on Why I’m not into the Free Energy Principle · 2023-03-03T19:23:33.353Z · LW · GW

By the same token, I’m generally opposed to grand unified theories of the body. The shoulder involves a ball-and-socket joint, and the kidney filters blood. OK cool, those are two important facts about the body. I’m happy to know them! I don’t feel the need for a grand unified theory of the body that includes both ball-and-socket joints and blood filtration as two pieces of a single grand narrative.

I think I am generally on board with your critiques of FEP, but I disagree with this framing against grand unified theories. The shoulder and the kidney are both made of cells. They both contain DNA which is translated into proteins. They are both designed by an evolutionary process.

Grand unified theories exist, and they are precious. I want to eke out every sliver of generality wherever I can. Grand unified theories are also extremely rare, and far more common in the public discourse are fakes that create an illusion of generality without making any substantial connections. The style of thinking that looks at a ball-and-socket joint and a blood filtration system and immediately thinks "I need to find how these are really the same", rather than studying these two things in detail and separately, is apt to create these false grand unifications, and although I haven't looked into FEP as deeply as you or other commenters, the writing I have seen on it smells more like this mistake than true generality.

But a big reason I care about exposing these false theories and the bad mental habits that are conducive to them is precisely because I care so much about true grand unified theories. I want grand unified theories to shine like beacons so we can notice their slightest nudge, and feel the faint glimmer of a new one when it is approaching from the distance, rather than be hidden by a cacophony of overblown rhetoric coming from random directions. 

Comment by itaibn0 on itaibn0's Shortform · 2023-03-02T23:55:53.640Z · LW · GW

I think MIRI's Logical Inductor idea can be factored into two components, one of which contains the elegant core that is why this idea works so well, while the other is an arbitrary embellishment that obscures what is actually going on. Of course I am calling for this to be recognized, and for people to only be teaching and thinking about the elegant core. The elegant core is infinitary markets: Markets that exist for an arbitrarily long time, with commodities that can take arbitrarily long to return dividends, and infinitely many market participants who use every computable strategy. The hack is that the commodities are labeled by sentences in a formal language and the relationships between them are governed by a proof system. This creates a misleading pattern where the value of the commodity labeled phi appears to measure the probability that phi is true; in fact what it measures is more like the probability that the proof system will eventually affirm that phi is true, or more precisely like the probability that phi is true in a random model of the theory. Of course what we really care about is the probability phi is actually true, meaning true in the standard model where the things labeled "natural numbers" are actual natural numbers and so on. By combining proof systems and infinitary markets, one obscures how much of the "work" in obtaining accurate information is done by each. I think it is better to study these two things separately. Since proof systems are already well-studied and infinitary markets are the novel idea in MIRI's work, that means they should primarily study infinitary markets.

Comment by itaibn0 on Whole Bird Emulation requires Quantum Mechanics · 2023-02-15T05:59:15.675Z · LW · GW

I think it is a mistake to focus on these kinds of weird effects as "biological systems using quantum mechanics", because it ignores the much more significant ways quantum mechanics is essential for all the ordinary things that are ubiquitous in biological systems. The stability of every single atom depends on quantum mechanics, and every chemical bond requires quantum mechanics to model. For the intended implication about the difficulty of Whole Bird Emulation, these ordinary usages of QM are much more significant. There are a huge number of different kinds of molecular interactions in a bird's body, and each one requires solving a multi-particle Schroedinger equation. The computational work for this one effect is tiny in comparison.

As I understand, the unique thing about this effect is that it involves much longer coherence times than in molecular interactions. This is cool, but unless you can argue that birds have error-correcting quantum computers inside them, which is incredibly unlikely, I don't think it is that relevant to AI timelines.

Comment by itaibn0 on itaibn0's Shortform · 2023-01-10T22:51:37.700Z · LW · GW

While I like a lot of Hanson's grabby alien model, I do not buy the inference that, since humans appeared early in cosmological history, the cosmic commons must be taken quickly, which would give a lower bound on how often grabby aliens appear. I think that neglects the possibility that the early universe is inherently more conducive to creating life, so most life is created early, but these lifeforms may be very far apart.

Comment by itaibn0 on The Onion Test for Personal and Institutional Honesty · 2022-10-23T20:35:33.688Z · LW · GW

Eliezer is very explicit and repeats many times in that essay, including in the very segment you quote, that his code of meta-honesty does in fact compel you to never lie in a meta-honesty discussion. The first 4 paragraphs of your comment are not elaborating on what Eliezer really meant; they are disagreeing with him. Reasonable disagreements too, in my opinion, but conflating them with Eliezer's proposal is corrosive to the norms that allow people to propose and test new norms.

Comment by itaibn0 on The "you-can-just" alarm · 2022-10-23T02:08:49.679Z · LW · GW

I had trouble making the connection between the first two paragraphs and the rest. Are you introducing what you mean by an "alarm" and then giving a specific proposal for an alarm afterwards? Is there significance in how the example alarms are in response to specific words being misleading?

Comment by itaibn0 on If you’re very optimistic about ELK then you should be optimistic about outer alignment · 2022-04-28T03:22:16.465Z · LW · GW

Writing suggestion: Expand the acronym "ELK" early in the piece. I looked at the title and my first question was what ELK is; I quickly skimmed the piece and wasn't able to find out until I clicked on the link to the ELK document. I now see it's also expanded in the tag list, which I normally don't examine. I haven't read the article more closely than a skim.

Comment by itaibn0 on On infinite ethics · 2022-04-15T15:06:26.255Z · LW · GW

On further thought I want to walk back a bit:

  1. I confess my comment was motivated by seeing something where it looked like I could make a quick "gotcha" point, which is a bad way to converse.
  2. Reading the original comment more carefully, I'm seeing how I disagree with it. It says (emphasis mine)

in practice the problems of infinite ethics are more likely to be solved at the level of maths, as opposed on the level of ethics and thinking about what this means for actual decisions.

I highly doubt this problem will be solved purely on the level of math, and expect it will involve more work on the level of ethics than on the level of foundations of mathematics. However, I think taking an overly realist view of the conventions mathematicians have chosen for dealing with infinities is an impediment to thinking about these issues, and studying alternative foundations is helpful to ward against that. The problems of infinite ethics, especially for uncountable infinities, seem to especially rely on such realism. I do expect that a solution to such issues, to the extent it is mathematical at all, could be formalized in ZFC. The central thing I liked about the comment is the call to rethink the relationship of math and mathematical infinity to reality, and that doesn't necessarily require changing our foundations, just changing our attitude towards them.

Comment by itaibn0 on On infinite ethics · 2022-04-14T18:03:01.330Z · LW · GW

If the only alternative you can conceive of for ZFC is removing the axiom of choice then you are proving Jan_Kulveit's point.

Comment by itaibn0 on How dath ilan coordinates around solving alignment · 2022-04-14T06:14:03.412Z · LW · GW

I was reading the story behind the first quotation, entitled "The discovery of x-risk from AGI", and I noticed something around that quotation that doesn't make sense to me; I'm curious if anyone can tell what Eliezer Yudkowsky was thinking. As referenced in a previous version of this post, after the quoted scene the highest Keeper commits suicide. Discussing the impact of this, EY writes,

And in dath ilan you would not set up an incentive where a leader needed to commit true suicide and destroy her own brain in order to get her political proposal taken seriously.  That would be trading off a sacred thing against an unsacred thing.  It would mean that only true-suicidal people became leaders.  It would be terrible terrible system design.

So if anybody did deliberately destroy their own brain in attempt to increase their credibility - then obviously, the only sensible response would be to ignore that, so as not create hideous system incentives.  Any sensible person would reason out that sensible response, expect it, and not try the true-suicide tactic.

The second paragraph is clearly a reference to acausal decision theory: people making a decision based on how they anticipate others will react to expecting that this is how they make the decision, rather than on the direct consequences of the decision. I'm not sure it really makes sense (a self-indulgent reminder: nobody knows any systematic method for producing prescriptions from acausal decision theories in everyday-life cases where they purportedly differ from causal decision theory). Still, it's fiction, and I can suspend my disbelief.

The confusing thing is that in the story the actual result of the suicide is exactly what this passage says shouldn't be the result. It convinces the Representatives to take the proposal more seriously and implement it. This passage is just used to illustrate how shocking the suicide was; no additional considerations are described for why the reasoning is incorrect in those circumstances. So it looks like the Representatives, despite being the second-highest-ranked governing body of dath ilan, are explicitly violating the Algorithm which supposedly underlies the entire dath ilan civilization and is taught to every child at least in broad strokes.

Comment by itaibn0 on Quantilizer ≡ Optimizer with a Bounded Amount of Output · 2021-11-16T01:50:35.088Z · LW · GW

Really all I need is that a strategy that takes n bits to specify will be performed by 1 in 2^n of all random strategies. Maybe a random strategy consists of a bunch of random motions that cancel each other out, and in 1 in 2^n of strategies in between these random motions are directed actions that add up to performing this n-bit strategy. Maybe 1 in 2^n strategies start off by typing this strategy to another computer and end with shutting yourself off, so that the remaining bits of the strategy will be ignored. A prefix-free encoding is basically like the latter situation except ignoring the bits after a certain point is built into the encoding rather than being an outcome of the agent's interaction with the environment.
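
The counting claim is easy to check empirically. A minimal sketch (my own illustration, not part of the original comment): a fixed n-bit description appears at the start of a uniformly random bit-string with probability 1/2^n.

```python
# Sketch: a fixed n-bit "strategy description" appears at the start of roughly
# 1 in 2^n uniformly random bit-strings.
import random

n = 8                                                # bits needed to specify the strategy
target = [random.getrandbits(1) for _ in range(n)]   # the fixed n-bit description

trials = 200_000
hits = sum(
    [random.getrandbits(1) for _ in range(n)] == target
    for _ in range(trials)
)
print(hits / trials, 1 / 2**n)  # both values should be close to ~0.0039
```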

Comment by itaibn0 on The Point of Trade · 2021-07-13T18:02:10.229Z · LW · GW

How do you make spoiler tags?

Comment by itaibn0 on The Point of Trade · 2021-07-13T17:41:28.453Z · LW · GW

A neat thought experiment! At the end of it all, you no longer need to exchange fruit, you can just keep the fruit in place and exchange the identity of the people instead.

Comment by itaibn0 on Agency in Conway’s Game of Life · 2021-06-04T07:58:32.661Z · LW · GW

Thanks too for responding. I hope our conversation will be productive.

A crucial notion that plays into many of your objections is the distinction between "inner intelligence" and "outer intelligence" of an object (terms derived from "inner vs. outer optimizer"). Inner intelligence is the intelligence the object has in itself as an agent, determined through its behavior in response to novel situations; outer intelligence is the intelligence it takes to create this object, determined through the ingenuity of its design. I understand your "AI hypothesis" to mean that any solution to the control problem must have inner intelligence. My response is claiming that while solving the control problem may require a lot of outer intelligence, I think it only requires a small amount of inner intelligence. This is because it seems like the environment in Conway's Game of Life with random dense initial conditions is very low in variety and requires only a small number of strategies to handle. (Although, just as I'm open-minded about intelligent life somehow arising in this environment, it's possible that there are patterns much more frequent than abiogenesis that make the environment much more variegated.)

Matter and energy are also approximately homogeneously distributed in our own physical universe, yet building a small device that expands its influence over time and eventually rearranges the cosmos into a non-trivial pattern would seem to require something like an AI.

The universe is only homogeneous at the largest scales; at smaller scales it is inhomogeneous in highly diverse ways, like stars and planets and raindrops. The value of our intelligence comes from being able to deal with the extreme diversity of intermediate-scale structures. Meanwhile, at the computationally tractable scale in CGOL, dense random initial conditions do not produce intermediate-scale structures between the random small-scale sparks and ashes and the homogeneous large scale. That said, conditional on life being rare in the universe, I expect that the control problem for our universe requires lower-than-human inner intelligence.

You mention the difficulty of "building a small device that...", but that is talking about outer intelligence. Your AI hypothesis states that, however such a device can or cannot be built, the device itself must be an AI. That's where I disagree.

Now it could actually be that in our own physical universe it is also possible to build not-very-intelligent machines that begin small but eventually rearrange the cosmos. In this case I am personally more interested in the nature of these machines than in "intelligent machines", because the reason I am interested in intelligence in the first place is due to its capacity to influence the future in a directed way, and if there are simpler avenues to influence in the future in a directed way then I'd rather spend my energy investigating those avenues than investigating AI. But I don't think it's possible to influence the future in a directed way in our own physical universe without being intelligent.

Again, the distinction between inner and outer intelligence is crucial. In a pure mathematical sense of existence there exist arrangements of matter that solve the control problem for our universe, but for that to be relevant for our future there also has to be a natural process that creates these arrangements of matter at a non-negligible rate. If the arrangement requires high outer intelligence then this process must be intelligent. (For this discussion, I'm considering natural selection to be a form of intelligent design.) So intelligence is still highly relevant for influencing the future. Machines that are mathematically possible but cannot practically be created are not "simpler avenues to influence the future".

"to solve the control problem in an environment full of intelligence only requires marginally more intelligence at best"

What do you mean by this?

Sorry. I meant that the solution to the control problem need only be marginally more intelligent than the intelligent beings in its environment. The difference in intelligence between a controller in an intelligent environment and a controller in an unintelligent environment may be substantial. I realize the phrasing you quote is unclear.

In chess, one player can systematically beat another if the first is ~300 ELO rating points higher, but I'm considering that as a marginal difference in skill on the scale from zero-strategy to perfect play. If our environment is creating the equivalent of a 2000 ELO intelligence, and the solution to the control problem has 2300 ELO, then the specification of the environment contributed 2000 ELO of intelligence, and the specification of the control problem only contributed an extra 300 ELO. In other words, open-world control problems need not be an efficient way of specifying intelligence.
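
For reference, the standard Elo expected-score formula behind these numbers (a small added sketch, not part of the original comment) gives the stronger player roughly an 85% expected score at a 300-point gap:

```python
# Standard Elo expected-score formula: a 300-point rating gap corresponds to
# roughly an 85% expected score for the higher-rated player.
def elo_expected_score(rating_a, rating_b):
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

print(elo_expected_score(2300, 2000))  # ~0.85
```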

But if one entity reliably outcompetes another entity, then on what basis do you say that this other entity is the more intelligent one?

On the basis of distinguishing narrow intelligence from general intelligence. A solution to the control problem is guaranteed to outcompete other entities in force or manipulation, but it might be worse at most other tasks. The sort of thing I had in mind for "NP-hard problems in military strategy" would be "this particular pattern of gliders is particularly good at penetrating a defensive barrier, and the only way to find this pattern is through a brute force search". Knowing this can give the controller a decisive advantage in military conflicts without making it any better at other tasks, and can permit the controller to have lower general intelligence while still dominating.

Comment by itaibn0 on Agency in Conway’s Game of Life · 2021-06-04T04:53:20.936Z · LW · GW

Thanks. I also found an invite link in a recent reddit post about this discussion (was that by you?).

Comment by itaibn0 on Agency in Conway’s Game of Life · 2021-06-02T04:57:25.666Z · LW · GW

While I appreciate the analogy between our real universe and simpler physics-like mathematical models like the Game of Life, assuming intelligence doesn't arise elsewhere in your configuration, this control problem does not seem substantially different or more AI-like than any other engineering problem. After all, there are plenty of other problems that involve leveraging a narrow form of control over a predictable physical system to achieve a more refined control, e.g. building a rocket that hits a specific target. The structure that arises from a randomly initialized pattern in Life should be homogeneous in a statistical sense and so highly predictable. I expect almost all of it to stabilize into debris of stable periodic patterns. It's not clear whether it's possible to manipulate or clear the debris in controlled ways, but if it is possible, then a single strategy will work for the entire grid. It may take a great deal of intelligence to come up with such a strategy, but once such a strategy is found it can be hard-coded into the initial Life pattern, without any need for an "inner optimizer". The easiest-to-design solution may involve computer-like patterns, with the pattern keeping track of state involved in debris-clearing and each part tracking its location to determine its role in making the final smiley pattern, but I don't see any need for AI-like patterns beyond that. On the other hand, if there are inherent limits on the ability to manipulate debris then no amount of reflection by our starting pattern is going to fix that.
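
The claim that a random soup settles into statistically homogeneous debris is easy to check empirically. A minimal simulation sketch (my own illustration, with an arbitrary 256x256 toroidal grid and 50% initial density):

```python
# Sketch: evolve a dense random Game of Life soup and watch the live-cell
# density fall toward the familiar low-density "ash" of stable/periodic debris.
import numpy as np

def life_step(grid):
    # Count the eight neighbors of every cell, with periodic boundaries.
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Standard rules: live cells survive with 2 or 3 neighbors; dead cells are
    # born with exactly 3 neighbors.
    return (neighbors == 3) | (grid & (neighbors == 2))

rng = np.random.default_rng(0)
grid = rng.random((256, 256)) < 0.5   # dense random soup, ~50% live cells

for step in range(2001):
    if step % 500 == 0:
        print(step, grid.mean())      # density falls toward the few-percent ash level
    grid = life_step(grid)
```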

That is assuming intelligence doesn't arise in the random starting pattern. If it does, our starting configuration would need to overpower every other intelligence that arises and tries to control the space, and this would reasonably require it to be intelligent itself. But if this is the case then the evolution of the random pattern already encodes the concept of intelligence in a much simpler way than this control problem. To predict the structures that would arise from a random initial configuration, the idea of intelligence would naturally come up. Meanwhile, solving the control problem in an environment full of intelligence only requires marginally more intelligence at best, and compared to the no-control prediction problem the control problem adds some complexity for not very much increase in intelligence. Indeed, the solution to the control problem may even be less intelligent than the structures it competes against, and make up for that with hard-coded solutions to NP-hard problems in military strategy.

On a different note, I'm flattered to see a reference in the comments to some of my own thoughts on working through debris in the Game of Life. It was surprising to see interest in that resurge, and especially surprising to see that interest come from people in AI alignment.

Comment by itaibn0 on Agency in Conway’s Game of Life · 2021-06-02T03:12:35.483Z · LW · GW

Thanks for linking to my post! I checked the other link, on Discord, and for some reason it's not working.

Comment by itaibn0 on Six economics misconceptions of mine which I've resolved over the last few years · 2020-07-16T14:29:17.282Z · LW · GW

Do you know of any source that gives the same explanations in text instead of video?

Edit: Never mind, the course has links to "Lecture PDF" that seem to summarize them. For the first lecture the summary is undetailed and I couldn't make sense of it without watching the videos, but they appear to get more detailed later on.

Comment by itaibn0 on Preview On Hover · 2020-06-25T01:50:11.912Z · LW · GW

I don't like the fact that the preview doesn't disappear when I stop hovering. I find the preview visually jarring enough that I would prefer to spend most of my reading time without a spurious preview window. At the very least, there should be a way to manually close the preview. Otherwise I would want to avoid hovering over any links and to refresh when I do, which is a bad reading experience.

Comment by itaibn0 on A non-mystical explanation of "no-self" (three characteristics series) · 2020-05-11T06:48:54.023Z · LW · GW

My main point of disagreement is the way you characterize these judgements as feelings. With minor quibbles I agree with your paragraph after substituting "it feels" with "I think". In your article you distinguish between abstract intellectual understanding which may believe that there is no self in some sense and some sort of lower-level perception of the self which has a much harder time accepting this; I don't follow what you're pointing to in the latter.

To be clear, I do acknowledge experiencing mental phenomena that are about myself in some sense, such as a proprioceptive distinction between my body and other objects in my mental spatial model, an introspective ability to track my thoughts and feelings, and a sense of the role I play in my community that I am expected to adhere to. However, the form of these pieces of mental content is wildly different, and it is only through an abstract mental categorization that I recognize them as all being about the same thing. Moreover, I believe these senses are imperfect but broadly accurate, so I don't know what it is that you're saying is an illusion.

Comment by itaibn0 on itaibn0's Shortform · 2020-05-11T05:19:09.125Z · LW · GW

Crossposted on my blog:

Lightspeed delays lead to multiple technological singularities.

By Yudkowsky's classification, I'm assuming the Accelerating Change Singularity: As technology gets better, the characteristic timescale at which technological progress is made becomes shorter, so that the time until this reaches physical limits is short from the perspective of our timescale. At a short enough timescale the lightspeed limit becomes important: When information cannot traverse the diameter of civilization in the time remaining until the singularity, further progress must be made independently in different regions. The subjective time from then may still be large, and without communication the different regions can develop different interests and, after their singularities, compete. As the characteristic timescale becomes shorter the independent regions split further.

Comment by itaibn0 on A non-mystical explanation of "no-self" (three characteristics series) · 2020-05-11T04:57:59.745Z · LW · GW

I'm still not sure what you mean by the feeling of having a self. Your exercise of being aware of looking at an object reminds me of the bouba/kiki effect: The words "bouba" and "kiki" are meaningless, but you ask people to label which shapes are bouba and which are kiki in spite of that. The fact that they answer does not mean they deep down believe that "bouba" and "kiki" are real words. In the same way, when you ask me to be aware of being someone looking at an object, I may have a response -- observing that the proposition "I am looking at my phone" is true, contemplating the simpleminded self-evidence of this fact, thinking about how this relates to the points Kaj is trying to make -- and there may even be some regularities in this response I can't rationally justify. Nonetheless this response is not a feeling of a self, nor is it something I am mistakenly confusing with a self -- any conflation is only being made from my attempt to interpret an unclear instruction, and is not a mistake I would make in regular thought.

A related point is that the word "self" is so rarely used in ordinary language. The suffix "-self", like "myself" or "yourself", yes, but not "self". That's only said when people are doing philosophy.

Comment by itaibn0 on TurnTrout's shortform feed · 2020-05-06T20:30:33.088Z · LW · GW

This map is not a surjection because not every map from the rational numbers to the real numbers is continuous, and so not every sequence represents a continuous function. It is injective, and so it shows that a basis for the latter space is at least as large in cardinality as a basis for the former space. One can construct an injective map in the other direction, showing that both spaces have bases of the same cardinality, and so they are isomorphic.

Comment by itaibn0 on Open question: are minimal circuits daemon-free? · 2018-05-20T21:01:05.909Z · LW · GW

This may be relevant:

Imagine a computational task that breaks up into solving many instances of problems A and B. Each instance reduces to at most n instances of problem A and at most m instances of problem B. However, these two maxima are never achieved both at once: The sum of the number of instances of A and instances of B is bounded above by some r < n + m. One way to compute this with a circuit is to include n copies of a circuit for computing problem A and m copies of a circuit for computing problem B. Another approach for solving the task is to include r copies of a circuit which, with suitable control inputs, can compute either problem A or problem B. Although this approach requires more complicated control circuitry, if r is significantly less than n+m and the size of the combined circuit is significantly less than the sum of the sizes of the circuits for A and B (which may occur if problems A and B have common subproblems X and Y which can use a shared circuit) then this approach will use fewer logic gates overall.
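
A toy gate-count comparison makes the trade-off concrete. The numbers below are purely hypothetical, chosen only to illustrate the inequality described above:

```python
# Sketch: compare n dedicated copies of circuit A plus m copies of circuit B
# against r copies of a shared either-A-or-B circuit plus control overhead.
def dedicated_cost(n, m, size_a, size_b):
    return n * size_a + m * size_b

def shared_cost(r, size_shared, control_overhead):
    return r * size_shared + control_overhead

# Hypothetical sizes: the shared circuit is much smaller than size_a + size_b
# (thanks to common subproblems), and r is well below n + m.
n, m, r = 10, 10, 12
size_a, size_b, size_shared = 500, 700, 800
control_overhead = 1000

print(dedicated_cost(n, m, size_a, size_b))           # 12000 gates
print(shared_cost(r, size_shared, control_overhead))  # 10600 gates
```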

More generally, consider some complex computational task that breaks down into a heterogeneous set of subproblems which are distributed in different ways depending on the exact instance. Analogous reasoning suggests that the minimal circuit for solving this task will involve a structure akin to emulating a CPU: There are many instances of optimized circuits for low-level tasks, connected by a complex dependency graph. In any particular instance of the problem the relevant data dependencies are only a small subgraph of this graph, with connections decided by some control circuitry. A particular low-level circuit need not have a fixed purpose, but is used in different ways in different instances.

So, our circuit has a dependency tree of low-level tasks optimized for solving our problem in the worst case. Now, at a starting stage of this hierarchy it has to process information about how a particular instance is separated into subproblems and generate the control information for solving this particular instance. The control information might need to be recomputed as new information about the structure of the instance is made manifest, and sometimes a part of the circuit may perform this recomputation without full access to potentially conflicting control information calculated in other parts.

Comment by itaibn0 on Against the Linear Utility Hypothesis and the Leverage Penalty · 2017-12-14T21:08:38.776Z · LW · GW

Yes, this is the refutation of Pascal's mugger that I believe in, although I never got around to writing it up like you did. However, I disagree with you that it implies that our utilities must be bounded. All the argument shows is that ordinary people never assign to events enormous utility values without also assigning them commensurably low probabilities. That is, normative claims (i.e., claims that certain events have certain utility assigned to them) are judged fundamentally differently from factual claims, and require more evidence than merely the complexity prior. In a moral intuitionist framework this is the fact that anyone can say that 3^^^3 lives are suffering, but it would take living 3^^^3 years and getting to know 3^^^3 people personally to feel the 3^^^3-times utility associated with these events.

I don't know how to distinguish the scenario where our utilities are bounded from the one where our utilities are unbounded but regularized (or whether our utilities are sufficiently well-defined to distinguish the two). Still, I want to emphasize that the latter situation is possible.

Comment by itaibn0 on Changing habits for open threads · 2017-11-26T15:45:53.346Z · LW · GW

Quick thought: I think you are relying too much on your own experience, which I don't expect to generalize well. Different people will have different habits regarding how much thought they put into their comments, and I expect some put in too much thought and some too little. We should put more effort into identifying the aggregate tendencies of people at this forum before we make recommendations.

Then again, perhaps you are just offering the idea casually, so it's okay. Still I worry that the most likely future pathways for posts like this are "get ignored" and "get cited uncritically", and there's no clear place for this more thorough investigation.

Comment by itaibn0 on Living in an Inadequate World · 2017-11-14T14:02:58.119Z · LW · GW
What's the fallacy you're claiming?

First, to be clear, I am referring to things such as this description of the prisoner's dilemma and EY's claim that TDT endorses cooperation. The published material has been careful to only say that these decision theories endorse cooperation among identical copies running the same source code, but as far as I can tell some researchers at MIRI still believe this stronger claim and this claim has been a major part of the public perception of these decision theories (example here; see section II).

The problem is that when two FDT agents with different utility functions and different prior knowledge are facing a prisoner's dilemma with each other, their decisions are actually two different logical variables X0 and X1. The argument for cooperating is that X0 and X1 are sufficiently similar to one another that in the counterfactual where X0=C we also have X1=C. However, you could just as easily take the opposite premise, where X0 and X1 are sufficiently dissimilar that counterfactually changing X0 will have no effect on X1. Then you are left with the usual CDT analysis of the game. Given the vagueness of logical counterfactuals it is impossible to distinguish these two situations.

Here's a related question: What does FDT say about the centipede game? There's no symmetry between the players so I can't just plug in the formalism. I don't see how you can give an answer that's in the spirit of cooperating in the prisoner's dilemma without reaching the conclusion that FDT involves altruism among all FDT agents through some kind of veil of ignorance argument. And taking that conclusion is counter to the affine-transformation-invariance of utility functions.

Comment by itaibn0 on Living in an Inadequate World · 2017-11-14T12:22:41.759Z · LW · GW

Some meta-level comments and questions:

This discussion has moved far off-topic away from EY's general rationality lessons. I'm pleased with this, since these are topics that I want to discuss, but I want to mention this explicitly since constant topic-changes can be bad for a productive discussion by preventing the participants from going into any depth. In addition, lurkers might be annoyed at reading yet another AI argument. Do you think we should move the discussion to a different venue?

My motivations for discussing this are a chance to talk about criticisms of MIRI that I haven't gotten down in writing in detail before, a chance to get a rough impression of how MIRI supporters respond to these explanations, and more generally an opportunity to practice intellectually honest debates. I don't expect the discussion to go on far enough to resolve our disagreements, but I am trying anyway to get practice. I'm currently enthusiastic about continuing the discussion, but it is the sort of enthusiasm that could easily wane in a day. What is your motivation?

Comment by itaibn0 on Living in an Inadequate World · 2017-11-14T00:33:13.273Z · LW · GW
"but a fundamental assumption behind TDT and UDT is the existence of a causal structure behind logical statements, which sounds implausible to me."
None of the theories mentioned make any assumption like that; see the FDT paper above.

Page 14 of the FDT paper:

Instead of a do operator, FDT needs a true operator, which takes a logical sentence φ and updates P to represent the scenario where φ is true...
...Equation (4) works given a graph that accurately describes how changing the value of a logical variable affects other variables, but it is not yet clear how to construct such a thing—nor even whether it can be done in a satisfactory manner within Pearl’s framework.

This seems wrong, if you're saying that we can't formally establish the behavior of different decision theories, or that applying theories to different cases requires ad-hoc emendations; see section 5 of "Functional Decision Theory" (and subsequent sections) for a comparison and step-by-step walkthrough of procedures for FDT, CDT, and EDT. One of the advantages we claim for FDT over CDT and EDT is that it doesn't require ad-hoc tailoring for different dilemmas (e.g., ad-hoc precommitment methods or ratification procedures, or modifications to the agent's prior).

The main thing that distinguishes FDT from CDT is how the true operator mentioned above functions. As far as I'm aware this is always inserted by hand. This is easy to do for situations where entities make perfect simulations of one another, but there aren't even rough guidelines for what to do when the computations that are done cannot be delineated in such a clean manner. In addition, if this were a rich research field I would expect more "math that bites back", i.e., substantive results that reduce to clearly-defined mathematical problems whose outcome wasn't expected during the formalization.

This point about "load-bearing elements" is at its root an intuitive judgement that might be difficult for me to convey properly.

Comment by itaibn0 on Living in an Inadequate World · 2017-11-13T14:44:37.512Z · LW · GW

Thinking further, I've spotted something that may be a crucial misunderstanding. Is the issue whether EY was right to create his own technical research institute on AI risk, or is it whether he was right to pursue AI risk at all? I agree that before EY there was relatively little academic work on AI risk, and that he played an important role in increasing the amount of attention the issue receives. I think it would have been a mistake for him to ignore the issue on the basis that the experts must know better than him and they aren't worried.

On the other hand, I expect an equally well-funded and well-staffed group that is mostly within academia to do a better job than MIRI. I think EY was wrong in believing that he could create an institute that is better at pursuing long-term technical research in a particular topic than academia.

Comment by itaibn0 on Living in an Inadequate World · 2017-11-13T13:35:59.802Z · LW · GW
When I think about the people working on AGI outcomes within academia these days, I think of people like Robin Hanson, Nick Bostrom, Stuart Russell, and Eric Drexler, and it's not immediately obvious to me that these people have converged more with each other than any of them have with researchers at MIRI.

I see the lack of convergence between people in academia as supporting my position, since I am claiming that MIRI is looking too narrowly. I think AI risk research is still in a brainstorming stage where we don't yet have a good grasp on what all the possibilities are. If all of these people have rather different ideas for how to go about it, why is it just the approaches that Eliezer Yudkowsky likes that are getting all the funding?

I also have specific objections. Let's take TDT and FDT as an example since they were mentioned in the post. The primary motivation for them is that they handle Newcomb-like dilemmas better. I don't think Newcomb-like dilemmas are relevant for the reasoning of potentially dangerous AIs, and I don't think you will get a good holistic understanding of what makes a good reasoner out of these theories. One secondary motivation for TDT/UDT/FDT is a fallacious argument that it endorses cooperation in the true prisoner's dilemma. Informal arguments seem to be the load-bearing element in applying these theories to any particular problem; the technical work seems to mainly be formalizing narrow instances of these theories to agree with the informal intuition. I don't know about FDT, but a fundamental assumption behind TDT and UDT is the existence of a causal structure behind logical statements, which sounds implausible to me.

Comment by itaibn0 on Living in an Inadequate World · 2017-11-12T23:20:34.888Z · LW · GW

(Background: I used to be skeptical about AI risk as a high-value cause, now I am uncertain, and I am still skeptical of MIRI.)

I disagree with you about MIRI compared with mainstream academia. Academics may complain about the way academia discourages "long-term substantive research projects", but taking a broader perspective academia is still the best thing there is for such projects. I think you misconstrued comments by academics complaining about their situation on the margin as being statements about academia in the absolute, and thereby got the wrong idea about the relative difficulty of doing good research within and outside academia.

When you compete for grant funding, that means your work is judged by people with roughly the same level of expertise as you. When you create a publicly-funded research institute, your work is judged far more shallowly. That you chose to go along the second path rather than the first left a bad first impression on me when I first learned of it, as if you couldn't make a convincing case in a fair test. As MIRI grew and as I learned more about it, I got the impression that, since MIRI is a small team with too little contact with a broader intellectual community, it prematurely reached a consensus on a particular set of approaches and assumptions that I think are likely to go nowhere.

Comment by itaibn0 on Announcing AASAA - Accelerating AI Safety Adoption in Academia (and elsewhere) · 2017-06-21T01:55:55.963Z · LW · GW

I specifically appreciate the article on research debt.

Since I was confused by this when I first read it, I want to clarify: As far as I can tell the article is not written by anybody associated with AASAA. You're saying it was nice of toonalfrink to link to it.

(I'm not sure if this comment is useful, since I don't expect a lot of people to have the same misunderstanding I did.)

Comment by itaibn0 on [stub] 100-Word Unpolished Insights Thread (3/10-???) · 2017-03-17T22:01:30.273Z · LW · GW

I'm not sure to what extent you want people to criticize ideas in this thread, and I'm going to test the waters. Give me feedback on how well this matches the norms you envision.

An immediate flaw comes to mind, that any elaboration of this idea should respond to: Changing the high school curriculum is very difficult. If you've acquired the social capital to change the curriculum of a high school, you should not spend it by making such a small, marginal contribution, but rather you could probably find something with a larger effect with the same social capital.

Comment by itaibn0 on Act into Fear and Abandon all Hope · 2017-01-04T13:20:56.126Z · LW · GW

You start the discussion with a very practical frame: "Here is some advice I intend to give you." You give caveats, then you give the advice, and you give some justification. The advice sounds plausible. Then you continue into a very philosophical discussion of what fear is and what people think about it that does not appear to tie in with the practical frame. While your article would appear very lopsided with so much caveat and so little content, I don't see how the later parts help. Alternatively, you could remove everything up to the 10th paragraph and write a very different sort of essay.

Comment by itaibn0 on A quick note on weirdness points and Solstices [And also random other Solstice discussion] · 2017-01-04T01:09:14.416Z · LW · GW

"Straw that breaks the camel's back" implies the existence of a large pre-existing weight, so your claim is a tautology.

Comment by itaibn0 on Progress and Prizes in AI Alignment · 2017-01-04T00:03:55.174Z · LW · GW

You point out a problem: There's no way to tell which organizations are making progress on AI alignment, and there is little diversity in current approaches. You turn this into the question: How do we create prizes that incentivize progress in AI alignment? You're missing a step or two here.

I'd say the logic goes the opposite direction: because there are no clear objectively measurable targets that will improve AI safety, prizes are probably a bad idea for increasing the diversity and effectiveness of AI safety research.

Comment by itaibn0 on Act into Fear and Abandon all Hope · 2017-01-03T23:26:14.803Z · LW · GW

Writing suggestion: Drop everything past the 10th paragraph ("It’s not immediately obvious that you’d want to overcome fear, though...").

Comment by itaibn0 on On the importance of Less Wrong, or another single conversational locus · 2016-12-06T00:59:21.851Z · LW · GW

Perhaps I should not have used such sensationalist language. I admit I don't know the whole story, and that a fuller account would likely find many nonrational reasons the change occurred. Still, I suspect rational persuasion did play a role, if not a complete one. Anecdotally, the Less Wrong discussion changed my opinion of polyamory from "haven't really thought about it that much" to "sounds plausible but I haven't tried it".

In any case, if your memory of that section of Less Wrong history contributes positively to your nostalgia, it's worth reconsidering the chance that events like that will ever happen again.

Comment by itaibn0 on On the importance of Less Wrong, or another single conversational locus · 2016-12-04T22:53:08.208Z · LW · GW

Given the community's initial heavy interest in the heuristics & biases research, I am amused that there is no explicit mention of the sunk cost fallacy. Seriously, watch out for that.

My opinion is that revitalizing the community is very likely to fail, and I am neutral on whether it's worth trying anyway by current prominent rationalists. A lot of people are suggesting restoring the website with a more centralized structure. It should be obvious that the result won't work the same as the old Less Wrong.

Finally, a reminder on Less Wrong history, which suggests that we lost more than a group of high-quality posters: Less Wrong wasn't always a polyamory hub. It became that way because there was a group of people who seriously believed they could improve the way they think, a few noticed they didn't have any good reason to be monogamous, set out to convince the others, and succeeded. Do you think a change of that scale will ever happen in the future of the rationalist community?

Comment by itaibn0 on A few misconceptions surrounding Roko's basilisk · 2015-10-07T23:08:55.019Z · LW · GW

Based on personal experience, if you're dreaming I don't recommend trying to wake yourself up. Instead, enjoy your dream until you're ready to wake up naturally. That way you'll have far better sleep.

Comment by itaibn0 on Open Thread, Jul. 27 - Aug 02, 2015 · 2015-07-28T18:51:57.920Z · LW · GW

"assume unbelievable X".

Only this is not an unbelievable X, its an entirely believable X (I wouldn't have any reason to ask an unbelieveable - as would anyone asking a question - unless they are actually trying to trick you with a question). In fact - assuming that people are asking you to believe an "unbelievable X" is a strawman of the argument in point.

Are you sure that's how you want to defend your question? If you defend the question by saying that the premise is believable, you are implicitly endorsing the standard that questions should only be answered if they are reasonable. However, accepting this standard runs the risk that your conversational partner will judge your question to be unreasonable even if it isn't and fail to answer your question, in exactly the way you're complaining about. A better standard for the purpose of getting people to answer the questions you ask literally is that people should answer the questions that you ask literally even if they rely on fantastic premises.

Can you do me a favour and try to steelman the question I asked? And see what the results are, and what answer you might give to it?

A similar concern is applicable here: Recall that steelmanning means, when encountering an argument that seems easily flawed, not to respond to that argument but to strengthen it in ways that seem reasonable to you and answer that instead. That sounds like the exact opposite of what you want people to do to your questions.

Comment by itaibn0 on Open Thread, Jul. 27 - Aug 02, 2015 · 2015-07-27T21:16:46.981Z · LW · GW

Sometimes what happens is that people don't know the answer to the question you're asking but still want to contribute to the discussion, so they answer a different question which they know the answer to. In this case the solution is to find someone who knows the answer before you start asking.

Comment by itaibn0 on Beyond Statistics 101 · 2015-06-28T23:38:58.060Z · LW · GW

I don't know about Grothendieck, but the two other sources appear to have softer criticism of the mathematical community than "actually functioning as a cult".

Comment by itaibn0 on How my social skills went from horrible to mediocre · 2015-06-09T06:12:14.246Z · LW · GW

That's why I said "supposed to do". The core argument behind schooling is that we can make a person much more capable by exposing them to things they would not otherwise be exposed to, and that it is valuable to give a broad background in many different topics. Fundamentally this is similar to what you're suggesting, and the differences you point out just indicate that school has a bad choice of curriculum and teaches it badly. The primary novelty in what you're suggesting is that you want "a lot of different type of experience" with a shallow view on each topic ("a different profession... every day"), whereas school typically spends a lot of time on a couple of different topics but with essentially the same type of experience. I do not intend to comment on whether I think this will work better.

For the record, I don't know what Toastmasters does, but the schools I've been to had Drama class and occasionally required giving presentations.

Comment by itaibn0 on How my social skills went from horrible to mediocre · 2015-05-23T04:49:01.378Z · LW · GW

That's exactly what school is supposed to do.

Comment by itaibn0 on Bragging Thread May 2015 · 2015-05-11T21:05:59.172Z · LW · GW

I got an honorable mention in the 2014 Putnam Competition. I took the test in December and heard the results in April, but I haven't posted this in other bragging threads, so I'm not sure if this is appropriate here.

Comment by itaibn0 on Guidelines for Upvoting and Downvoting? · 2015-05-07T02:07:20.079Z · LW · GW

Downvotes sort of do the opposite, but it's not perfectly symmetrical because scores below zero pack an extra punch.

The standard guideline is to upvote if you want more of that kind of comment, and downvote if you want less. The asymmetry between upvotes and downvotes comes from the fact that Less Wrongers on the whole want more content on Less Wrong rather than less. Negative scores pack a punch because they mean your comment would be better off not existing.

Well really, I think it's mostly that people just have a pre-existing idea of the connotation of negative numbers, but I gave this retroactive justification to show that I think the result is surprisingly internally consistent.

Comment by itaibn0 on Is Scott Alexander bad at math? · 2015-05-07T01:35:18.482Z · LW · GW

Based on JonahSinick's prior comments, his motivation for asking this question is pretty clear. You have already critiqued the thought process that made him think this question is necessary; to attack it again is almost double-counting. I think if you had answered the question directly the discussion would have a better chance of bootstrapping out of mutual unintelligibility. Then again, I mostly lurk and only rarely participate in internet debates, so I don't feel I really understand how any given discussion strategy would actually play out. Also, I cheated, since Jonah already expressed a desire for a direct answer.

Comment by itaibn0 on Is Scott Alexander bad at math? · 2015-05-06T19:42:35.883Z · LW · GW

Other commenters have said similar things, but I want to express this in my own words. Doing mathematics requires multiple skills, and an aesthetic sense may be an underappreciated one of them. You argue that Scott has a good aesthetic sense. I also think that Scott probably has good abilities in some of the skills necessary for doing mathematics. But from Scott's account he appears to be lacking in other skills. Why do you think that what Scott has is sufficient? You mention that early college courses are not representative of real math, but even at higher levels you need skills such as reading formulas, applying algorithms, and understanding the implicit meaning of unmotivated (or even imperfectly motivated) definitions. Keep in mind that Scott relates here that other people skilled in math have tried to educate him outside of a college context.

I'm not saying I think your conclusion is wrong; I'm uncertain myself. And even Scott admits "I don’t know if it’s that I’m bad at math, or that I just don’t enjoy math enough to be intrinsically motivated to pursue it" (same link as above), which sounds a bit like a retreat toward your way of thinking.