Posts

The simple picture on AI safety 2018-05-27T19:43:27.025Z · score: 52 (22 votes)
Opportunities for individual donors in AI safety 2018-03-31T18:37:21.875Z · score: 63 (16 votes)
Superrationality and network flow control 2013-07-22T01:49:46.093Z · score: 18 (20 votes)
Personality tests? 2012-02-29T09:33:00.489Z · score: 1 (10 votes)
What independence between ZFC and P vs NP would imply 2011-12-08T14:30:44.714Z · score: 3 (15 votes)
Weight training 2011-08-26T15:25:42.166Z · score: 6 (9 votes)
Derek Parfit, "On What Matters" 2011-07-07T16:52:51.007Z · score: 4 (5 votes)
[link] Bruce Schneier on Cognitive Biases in Risk Analysis 2011-05-03T18:37:42.698Z · score: 8 (9 votes)
What would you do with a solution to 3-SAT? 2011-04-27T18:19:51.186Z · score: 3 (8 votes)
[link] flowchart for rational discussions 2011-04-05T09:14:40.772Z · score: 0 (7 votes)
The AI-box for hunter-gatherers 2011-04-02T12:09:42.602Z · score: 9 (11 votes)
I want a better memory. 2011-04-02T11:36:01.147Z · score: 20 (21 votes)
Size of the smallest recursively self-improving AI? 2011-03-30T23:31:52.022Z · score: 4 (9 votes)
Audio from Eliezer's talk at the Oxford Transhumanists 2011-03-29T21:31:35.562Z · score: 8 (9 votes)
rationality for turing machines 2011-03-23T00:38:01.980Z · score: 2 (9 votes)
Enjoying musical fashion: why not? 2011-02-21T16:22:02.891Z · score: 2 (15 votes)
Automated theorem proving by learning from examples 2011-02-16T13:38:03.753Z · score: 3 (4 votes)
Not owning our beliefs 2011-02-15T15:02:44.137Z · score: 6 (7 votes)
This Sunday: Oxford Rationality Meetup 2011-01-28T15:06:52.670Z · score: 7 (8 votes)
Eliezer to speak in Oxford, 8pm Jan 25th 2011-01-17T23:58:21.854Z · score: 8 (9 votes)
I want to learn economics 2011-01-13T23:02:24.846Z · score: 10 (10 votes)
Stanford historian on the singularity 2010-11-06T10:01:29.868Z · score: 4 (5 votes)
That which can be destroyed by the truth should *not* necessarily be 2010-10-24T10:41:55.278Z · score: 8 (12 votes)
Does it matter if you don't remember? 2010-10-22T11:53:18.401Z · score: 7 (8 votes)
When does an insight count as evidence? 2010-01-04T09:09:23.345Z · score: 11 (20 votes)
Rational lies 2009-11-23T03:32:08.789Z · score: 6 (15 votes)

Comments

Comment by alexflint on The simple picture on AI safety · 2018-05-30T16:09:37.014Z · score: 4 (1 votes) · LW · GW

So are you saying that my distillation didn't unpack the problem sufficiently to be helpful (in which case I agree but that wasn't my goal), or are you saying that I missed something important / included something unimportant?

Comment by alexflint on The simple picture on AI safety · 2018-05-28T21:17:31.883Z · score: 8 (2 votes) · LW · GW

I parse you as pointing to the clarification of a vague problem like "flight" or "safety" or "heat" into an incrementally more precise concept or problem statement. I agree this type of clarification is ultra important and represents real progress in solving a problem, and I agree that my post absolutely did not do this. But I was actually shooting for something quite different.

I was shooting for a problem statement that (1) causes people to work on the problem, and (2) causes them to work on the right part of the problem. I claim it is possible to formulate such a problem statement without doing any clarification in the sense that you pointed at, and additionally that it is useful to do so because (1) distilled problem statements can cause additional progress to be made on a problem, and (2) clarification is super hard, so we definitely shouldn't block additional work until clarification happens, since additional work could be a key ingredient in getting to key clarifications.

To many newcomers to the AI safety space, the problem feels vast and amorphous, and it seems to take a long time before newcomers have confidence that they know what exactly other people in the space are actually trying to accomplish. During this phase, I've noticed that people are mostly not willing to work directly on the problem, because of the suspicion that they have completely misunderstood where the core of the problem actually is. This is why distillation is valuable even absent clarification.

Comment by alexflint on The simple picture on AI safety · 2018-05-28T20:56:16.146Z · score: 15 (4 votes) · LW · GW

I think the distillation needs to (1) be correct and (2) resonate with people. It's really hard to find a distillation that meets these two criteria. Finding such distillations is a good part of what a tech sector product manager spends their time doing.

I'm not at all sure that my distillation of AI safety meets those two.

Comment by alexflint on The simple picture on AI safety · 2018-05-28T16:19:13.083Z · score: 5 (3 votes) · LW · GW

I completely agree.

Comment by alexflint on Decision theory and zero-sum game theory, NP and PSPACE · 2018-05-28T04:19:14.214Z · score: 8 (2 votes) · LW · GW

Minor nit: I always thought the term "decision theory" referred to the meta-level task of formulating an algorithm which, given fully specified beliefs and values, tells you how to compare possible actions. By contrast, when I see someone making a concrete decision using an EU calculation or some such, I don't think of them as "doing decision theory". So perhaps "decision making" or "positive sum game theory" rather than "decision theory"? It probably doesn't matter much.

Comment by alexflint on The simple picture on AI safety · 2018-05-28T04:05:00.385Z · score: 5 (4 votes) · LW · GW

It doesn't matter! :P

Comment by alexflint on Critique my Model: The EV of AGI to Selfish Individuals · 2018-04-10T02:54:06.789Z · score: 11 (3 votes) · LW · GW

I'd be interested in reading the literature you mention that suggests positive outcomes are more likely than negative outcomes, conditioned on AGI being developed. My sense is that if AGI is developed and the transition goes badly for humans, but an individual still lives for a long time, then that individual quite likely has a bad life: if you select uniformly from environments that keep humans alive but are otherwise unoptimized for wellbeing, I'd expect most of them to be quite unhappy.

It also seems like you place around 66% probability (odds of 2.5 : 1.3, i.e. 2.5/(2.5+1.3) ≈ 0.66) on our chances of successfully navigating the intelligence explosion. This seems quite high and may be worth pulling out into a separate variable just to make it more explicit.

Comment by alexflint on Opportunities for individual donors in AI safety · 2018-04-09T01:00:20.086Z · score: 11 (3 votes) · LW · GW

Ah, the note about Max Tegmark and the singularity summit was supposed to point to that, but I had the wrong number. Fixed now.

Comment by alexflint on Global insect declines: Why aren't we all dead yet? · 2018-04-02T16:12:06.056Z · score: 6 (3 votes) · LW · GW

From the link:

The study is the result of a collaboration with the Entomological Society of Krefeld – essentially made up of members of the public

This decreases my confidence in the robustness of the results of this particular study. Unfortunately I'm fairly unversed in this area, so I don't know whether this is a widely repeated result or an outlier.

Comment by alexflint on The most important step · 2018-03-24T13:00:56.280Z · score: 8 (4 votes) · LW · GW

Beautiful. Please write more!

Comment by alexflint on [RESCHEDULED] NYC Rationality Megameetup and Unconference: 4/5 - 4/6 · 2014-02-18T19:57:42.693Z · score: 0 (0 votes) · LW · GW

Me, 50%

Comment by alexflint on Building Phenomenological Bridges · 2013-12-24T04:02:43.076Z · score: 2 (2 votes) · LW · GW

I think we should be at least mildly concerned about accepting this view of agents in which the agent's internal information processes are separated by a bright red line from the processes happening in the outside world. Yes I know you accept that they are both grounded in the same physics, and that they interact with one another via ordinary causation, but if you believe that bridging rules are truly inextricable from AI then you really must completely delineate this set of internal information processing phenomena from the external world. Otherwise, if you do not delineate anything, what are you bridging?

So this delineation seems somewhat difficult to remove and I don't know how to collapse it, but it's at least worth questioning whether it's at this point that we should start saying "hmmmm..."

One way to start to probe this question (although this does not come close to resolving the issue) is to think about an AI already in motion. Let's imagine an AI built out of gears and pulleys, which is busy sensing, optimizing, and acting in the world, as all well-behaved AIs are known to do. In what sense can we delineate a set of "internal information processing phenomena" within this AI from the external world? Perhaps such a delineation would exist in our model of the AI, where it would be expedient indeed to postulate that the gears and pulleys are really just implementing some advanced optimization routine. But that delineation sounds much more like something that should belong in the map than in the territory.

What I'm suggesting is that starting with the assumption of an internal sensory world delineated by a bright red line from the external world should at least give us some pause.

Comment by alexflint on The Ultimate Sleeping Beauty Problem · 2013-10-05T03:49:50.141Z · score: 0 (0 votes) · LW · GW

This is unlike the St Petersburg paradox because it involves amnesia, so assigning probabilities arguably forces you to decide on some SIA/SSA-like quandary. But I do agree that making this into a decision problem is the key.

Comment by alexflint on What makes us think _any_ of our terminal values aren't based on a misunderstanding of reality? · 2013-10-01T03:23:13.918Z · score: 2 (2 votes) · LW · GW

Another way to think about Dave's situation is that his utility function assigns the same value to all possible futures (i.e. zero) because the one future that would've been assigned a non-zero value turned out to be unrealizable. His real problem is that his utility function has very little structure: it is zero almost everywhere.

I suspect our/my/your utility function is structured in a way that even if broad swaths of possible futures turn out to be unrealizable, the remainder will still contain gradients and local maxima, so there will be some more desirable and some less desirable possibilities.

Of course this is not guaranteed, but most utility functions have gradients and local maxima over most sets. You need a very special utility function and a very special set of realizable futures in order for all futures to be assigned exactly the same value.

Comment by alexflint on Bayesian probability as an approximate theory of uncertainty? · 2013-10-01T02:35:19.255Z · score: 0 (0 votes) · LW · GW

Ha, indeed. I should have made the analogy with finding a linear change of variables such that the result is decomposable into a product of independent distributions -- i.e. if (x,y) is distributed on a narrow band about the unit circle in R^2 then there is no linear change of variables that renders this distribution independent, yet a (nonlinear) change to polar coordinates does give independence.
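As a quick numerical sketch of what I mean (my own illustration; the sampling model of "radius 1 plus small Gaussian noise" is an assumption for the example):

```python
# Points on a narrow band about the unit circle are uncorrelated but dependent
# in Cartesian coordinates, and become independent after the nonlinear change
# to polar coordinates.
import numpy as np

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 100_000)
r = 1.0 + 0.01 * rng.normal(size=theta.size)     # narrow band about the unit circle
x, y = r * np.cos(theta), r * np.sin(theta)

# x and y are (linearly) uncorrelated, yet strongly dependent: x^2 + y^2 is pinned near 1.
print(np.corrcoef(x, y)[0, 1])        # ~0
print(np.corrcoef(x**2, y**2)[0, 1])  # ~ -1, exposing the dependence

# In polar coordinates the distribution factorizes: r and theta are independent.
print(np.corrcoef(np.hypot(x, y), np.arctan2(y, x))[0, 1])  # ~0
```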

Perhaps the way to construct a counterexample to UDT is to try to create causal links between and of the same nature as the links between and the in e.g. Newcomb's problem. I haven't thought this through any further.

Comment by alexflint on Bayesian probability as an approximate theory of uncertainty? · 2013-09-28T12:47:02.112Z · score: 2 (2 votes) · LW · GW

I have also been leaning towards the existence of a theory more general than probability theory, based on a few threads of thinking.

One thread is anthropic reasoning, where it is sometimes clear how to make decisions, yet probabilities don't make sense and it feels to me that the information available in some anthropic situations just "doesn't decompose" into probabilities. Stuart Armstrong's paper on the Sleeping Beauty problem is, I think, valuable and greatly overlooked here.

Another thread is the limited-computation issue. We would all like to have a theory that pins down ideal reasoning, and then work out how to efficiently approximate that theory on a Turing machine as a completely separate problem. My intuition is that things just don't decompose this way. I think that a complete theory of reasoning will make direct reference to models of computation.

This site has collected quite a repertoire of decision problems that challenge causal decision theory. They all share the following property (including your example in the comment above): that in a causal graph containing your decision as a node, there are links from your decision to the rest of the world that do not go via your actions (for Newcomb-like problems) or that do not go via your observations (anthropic problems). Or in other words, your decisions are not independent of your beliefs about the world. The UDT solution says: "instead of drawing a graph containing your decision, draw one that contains your decision algorithm, and you will see that the independence between beliefs and decisions is restored!". This feels to me like a patch rather than a full solution, similar to saying "if your variables are correlated and you don't know how to deal with correlated distributions, try a linear change of variables -- maybe you'll find one that de-correlates them!". This only works if you're lucky enough to find a de-correlating change of variables. An alternate approach would be to work out how to deal with non-independent beliefs/decisions directly.

One thought experiment I like to do is to ask probability theory to justify itself in a non-circular way. For example, let's say I propose the following Completely Stupid Theory Of Reasoning. In CSTOR, belief states are represented by a large sheet of paper where I write down everything that I have ever observed. What is my belief state at time t, you ask? Why, it is simply the contents of the entire sheet of paper. But what is my belief state about a specific event? Again, the contents of the entire sheet of paper. How does CSTOR update on new evidence? Easy! I simply add a line of writing to the bottom of the sheet. How does CSTOR marginalize? It doesn't! Marginalization is just for dummies who use probability theory, and, as you can see, CSTOR can do all the things that a theory of reasoning should do without need for silly marginalization.

So what really distinguishes CSTOR from probability theory? I think the best non-circular answer is that probability theory gives rise to a specific algorithm for making decisions, where CSTOR doesn't. So I think we should look at decision making as primary and then figure out how to decompose decision making into some abstract belief representation plus abstract notion of utility, plus some abstract algorithm for making decisions.

Comment by alexflint on Superrationality and network flow control · 2013-07-22T13:47:25.494Z · score: 5 (5 votes) · LW · GW

You're right - most users don't rewrite their TCP stack. But suppose you're designing the next version of TCP and you think "hey, instead of using fixed rules, let's write a TCP stack that optimizes for throughput". You will face a conceptual issue as you realize that the global outcome is now total network breakdown. So what do you optimize for instead? Superrationality says: make decisions as though deciding the output for all nodes at the current information set. This is conceptually helpful because it tells you what you should be optimizing for.

Now if you start out from the beginning (as in the paper) by thinking of optimizing over algorithms, with the assumption that the resulting algorithm will be run on every node, then you're already doing superrationality. That's all superrationality is!
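To make that concrete, here is a toy sketch (my own model, not from the post or the paper): per-node goodput decays exponentially with total offered load, standing in for congestion losses, and the selfish versus superrational choices fall out of a one-line optimization.

```python
# Toy model: N nodes share a link; a node's goodput is its send rate times
# exp(-total_load / C), a crude stand-in for congestion losses.
import math

N, C = 50, 100.0   # number of nodes, link "capacity" scale

def goodput(my_rate, total_load):
    return my_rate * math.exp(-total_load / C)

# Selfish optimization: holding the others fixed, maximizing
# r * exp(-(r + rest) / C) over r gives r = C regardless of what the rest do,
# so every node blasts away at rate C and the network collapses.
selfish = goodput(C, N * C)

# Superrational optimization: choose the rate as though choosing it for every
# node at once, i.e. maximize r * exp(-N * r / C), which gives r = C / N.
superrational = goodput(C / N, N * (C / N))

print(f"selfish per-node goodput:       {selfish:.2e}")       # ~1.9e-20
print(f"superrational per-node goodput: {superrational:.2f}")  # ~0.74
```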

Comment by alexflint on Progress on automated mathematical theorem proving? · 2013-07-04T17:42:28.959Z · score: 9 (9 votes) · LW · GW

It's easy to give an algorithm that generates a proof of a mathematical theorem that's provable: choose a formal language with definitions and axioms, and for successive values of n, enumerate all sequences of mathematical deductions of length n, halting if the final line of a sequence is the statement of the desired theorem. But the running time of this algorithm is exponential in the length of the proof, and the algorithm is infeasible to implement except for theorems with very short proofs.

Yes, this approach is valid, but modern theorem provers more commonly reduce theorem proving to a sequence of SAT problems. Very roughly, for a first-order sentence P, the idea is to search for counterexamples alternately to P and ~P in models of size 1, 2, .... SAT solvers have improved rapidly over the past decade (http://www.satcompetition.org/), though they are still not good enough to facilitate theorem provers that can solve interesting math problems.
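As a toy illustration of the counterexample-search idea (a propositional analogue of my own, not the first-order procedure above): a formula is a theorem exactly when the search for a falsifying assignment comes up empty.

```python
# Formulas are encoded as Python functions of booleans (my own illustrative
# choice). A formula is a tautology iff no assignment falsifies it.
from itertools import product

def find_counterexample(formula, n_vars):
    """Return an assignment making `formula` false, or None if it is a tautology."""
    for assignment in product([False, True], repeat=n_vars):
        if not formula(*assignment):
            return assignment
    return None

# Peirce's law ((p -> q) -> p) -> p, writing a -> b as (not a or b).
peirce = lambda p, q: not ((not (not p or q)) or p) or p
print(find_counterexample(peirce, 2))                    # None: provable
print(find_counterexample(lambda p, q: not p or q, 2))   # (True, False): not valid
```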

I highly recommend the concise and excellent "Handbook of Practical Logic and Automated Reasoning" http://www.cl.cam.ac.uk/~jrh13/atp/

Comment by alexflint on An attempt at a short no-prerequisite test for programming inclination · 2013-07-01T13:15:38.976Z · score: 4 (4 votes) · LW · GW

You're asking people to execute a program, but you should be asking people to write a program.

Comment by alexflint on You only need faith in two things · 2013-03-16T15:52:05.168Z · score: 3 (3 votes) · LW · GW

To get to Bayes, don't you also need to believe not just that probability theory is internally consistent (your well-ordered ordinal gives you that much) but also that it is the correct system for deducing credences from other credences? That is, you need to believe Cox's assumptions, or equivalently (I think) Jaynes' desiderata (consistent, non-ideological, quantitative). Without these, you can do all the probability theory you want but you'll never be able to point at the number at the end of a calculation and say "that is now my credence for the sun rising tomorrow".

Comment by alexflint on Guilt: Another Gift Nobody Wants · 2012-12-26T17:08:22.930Z · score: 0 (0 votes) · LW · GW

This was from How The Mind Works: http://www.amazon.com/dp/1469228424

Comment by alexflint on Causal Universes · 2012-11-29T22:43:34.569Z · score: 0 (0 votes) · LW · GW

There would be no hypothesis in your hypothesis-space to describe the standard model of physics, where space is continuous, indefinitely divisible, and has complex amplitude assignments over uncountable cardinalities of points.

I'm not sure this is necessarily correct. We typically model quantum configurations as functions defined over a continuous domain, but it is still possible that quantum configurations could be representable by a finite set of numbers (more precisely: that all possible configurations of our universe could be expressed as f(x) for some arbitrary but fixed f and some finite vector x). This would follow if the amount of information in the universe is finite, since we know that information is neither created nor destroyed over time. In this case we could represent states of the universe as a finite set of numbers and draw causal arrows between these states over time. Of course, such a representation might be much less convenient than thinking about continuous wavefunctions etc.

Comment by alexflint on Beyond Bayesians and Frequentists · 2012-11-01T21:31:30.802Z · score: 2 (2 votes) · LW · GW

These theorems, however, ignore the issue of computation --- while the best decision procedure may be Bayesian, the best computationally-efficient decision procedure could easily be non-Bayesian.

This raises another important point against Bayes, which is that the proper Bayesian interpretation may be very mathematically complex.

if we are trying to build a software package that should be widely deployable, we might want to use a frequentist method because users can be sure that the software will work as long as some number of easily-checkable assumptions are met.

I think these are the strongest reasons you've raised that we might want to deviate from pure Bayesianism in practice. We usually think of these (computation and understandability-by-humans) as irritating side issues, to be glossed over and mostly considered after we've made our decision about which algorithm to use. But in practice they often dominate all other considerations, so it would be nice to find a way to rigorously integrate these two desiderata with the others that underpin Bayesianism.

Comment by alexflint on Beyond Bayesians and Frequentists · 2012-11-01T21:23:32.299Z · score: 2 (2 votes) · LW · GW

Support vector machines [2], which try to pick separating hyperplanes that minimize generalization error, are one example of this where the algorithm is explicitly trying to maximize worst-case utility.

Could you expand on this a little? I've always thought of SVMs as minimizing an expected loss (the sum over hinge losses) rather than any best-worst-case approach. Are you referring to the "max min" in the dual QP? I'm interested in other interpretations...
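For reference, the formulation I have in mind is the standard soft-margin primal (my own paraphrase, not quoted from the post):

```latex
\min_{w,\,b}\;\; \frac{1}{2}\lVert w\rVert^2 \;+\; C \sum_{i=1}^{n} \max\bigl(0,\; 1 - y_i\,(w \cdot x_i + b)\bigr)
```

i.e. a regularizer plus a sum of hinge losses, which reads to me as regularized empirical loss minimization rather than an explicit max-min over the data.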

Comment by alexflint on Causal Diagrams and Causal Models · 2012-11-01T10:49:52.926Z · score: 3 (3 votes) · LW · GW

I think you've missed an important piece of this picture, or perhaps have not emphasized it as much as I would. The real reason we can infer causation from correlation is that we have a prior that prefers simple explanations over complex ones, and so when some observed frequencies can be explained by a compact (simple) Bayes net we take the arrows in that Bayes net to be causation.

A fully connected Bayes net (or equivalently, a causal graph with one hidden node pointing to all observed nodes) can represent any probability distribution whatsoever. Such a Bayes net can never be flat-out falsified. Rather it is our preference for simple explanations that sometimes gives us reason to infer structure in the world.

This contradicts nothing you've said, but I guess I read this article as suggesting there is some fundamental rule that gives us a crisp method for extracting causation from observations, whereas I would look at it as a special case of inference-with-prior-and-likelihood, just like in other forms of Bayesian reasoning.
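A quick parameter count makes the "can represent anything" point concrete (my own illustration, binary variables assumed):

```python
# Free parameters of the conditional probability tables in a Bayes net over n
# binary variables. A fully connected net has 2^n - 1 parameters, exactly as
# many as an arbitrary joint distribution, so it can never be falsified; a
# sparse structure (here a chain) has far fewer, so preferring it is the
# simplicity prior doing real work.
def fully_connected_params(n):
    return sum(2**i for i in range(n))   # node i conditions on all i predecessors

def chain_params(n):
    return 1 + 2 * (n - 1)               # node i conditions only on node i-1

for n in (3, 10, 20):
    print(n, fully_connected_params(n), chain_params(n))
```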

Comment by alexflint on Who Wants To Start An Important Startup? · 2012-08-17T03:34:28.182Z · score: 2 (2 votes) · LW · GW

I'm working for a mid-size startup and have been gathering insight into successful startups for a couple of years. Here is what I think is important.

Create value. Make sure your idea actually creates value in the world. Lots of value. It should conceivably be useful to many people, and it should conceivably be of significant value to them. Value means your product would be important enough that, if forced to, they would give up other things in exchange for it.

Don't focus on monetization. Startups are subject to all sorts of counter-intuitive economics; it's unrealistic to plan exactly how you will make money. Make sure you're creating value, and check that there's nothing that would prevent you from ever collecting any of that value. Then go back to creating value.

Iteration beats brilliance. The speed at which you iterate is more important than the brilliance of the initial idea. Trying out a product in the real market is an experiment: the feedback you receive entangles your startup with other players in the market. Each experiment steers you towards a local optimum. To win you need (1) to start in the general vicinity of a good local optimum and (2) to converge rapidly to that optimum.

The quality of the team is key. Early stage investors invest largely in the perceived quality of the team, and you should likewise invest your time alongside great people. An early stage startup should never hire consultants (wrong incentives), and its founders should never live in different cities (bad communication). Entering into a startup is like a marriage: it's very hard to get out.

Choose investors cautiously. You're also "married" to your investors on the day you sign a term sheet. Pick ones that you trust, that share your goals, and that can help you in ways other than by providing capital.

Comment by alexflint on Backward Reasoning Over Decision Trees · 2012-06-30T17:41:53.924Z · score: 4 (4 votes) · LW · GW

Could you include a reference to alpha-beta pruning, since that is precisely what you're describing? Some readers may be more familiar with that subject domain and appreciate explicitly linking game theory to an established search algorithm.

I think you mean minimax. Alpha-beta pruning is the optimization to minimax that prunes a branch as soon as a min node's value drops below the best alternative already available to a max ancestor (and symmetrically for max nodes), since such a branch can no longer affect the final decision.
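For concreteness, here is a minimal sketch of minimax with alpha-beta pruning over an explicit game tree (my own illustrative encoding: nested lists whose leaves are payoffs to the maximizing player):

```python
def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    # Leaves are numeric payoffs for the maximizing player.
    if isinstance(node, (int, float)):
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:      # remaining children cannot change the outcome
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# Same backward reasoning as plain minimax, but branches get cut off early.
tree = [[3, 5], [2, 9], [0, 7]]
print(alphabeta(tree, maximizing=True))  # -> 3
```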

Comment by alexflint on Hofstadter's Superrationality · 2012-04-21T16:38:38.177Z · score: 24 (24 votes) · LW · GW

The first essay is by far the best introduction to TDT-like reasoning that I've ever read. In fact this paragraph sums up the whole informal part of the idea:

This solution depends in no way on telepathy or bizarre forms of causality. It’s just that the statement "I’ll choose C and then everyone will", though entirely correct, is somewhat misleadingly phrased. It involves the word "choice", which is incompatible with the compelling quality of logic. Schoolchildren do not choose what 507 divided by 13 is; they figure it out. Analogously, my letter really did not allow choice; it demanded reasoning. Thus, a better way to phrase the voodoo statement would be this: "If reasoning guides me to say C, then, as I am no different from anyone else as far as rational thinking is concerned, it will guide everyone to say C."

Hofstadter's comparison of "choice" and "reasoning" is getting at the idea that people have decision routines rooted in physics, which can themselves be reasoned about, including reasoning that they are similar to one's own. I think this is really the core insight of the TDT idea.

And then the one-sentence:

Likewise, the argument "Whatever I do, so will everyone else do" is simply a statement of faith that reasoning is universal, at least among rational thinkers, not an endorsement of any mystical kind of causality.

Comment by alexflint on Attention control is critical for changing/increasing/altering motivation · 2012-04-13T01:13:15.170Z · score: 4 (4 votes) · LW · GW

Wow, I had no idea that we really really knew why CBT worked. Thank you for this post.

Comment by alexflint on Occam alternatives · 2012-02-08T22:41:00.273Z · score: 0 (2 votes) · LW · GW

People who don't know about Newtonian mechanics still believe that rocks fall downwards, but people who reject it explicitly will have a harder time reconciling their beliefs with the continued falling of rocks. It would be a mistake to reject Newtonian mechanics, then say "people who reject Newtonian mechanics clearly still believe that rocks fall", then to conclude that there is no problem in rejecting Newtonian mechanics. Similarly, if you reject Occam's razor then you need to replace it with something that actually fills the explanatory gap -- it's not good enough to say "well people who reject Occam's razor clearly still believe Occam's razor", and then just carry right on.

Comment by alexflint on Occam alternatives · 2012-02-01T21:27:36.696Z · score: 0 (2 votes) · LW · GW

Occam's razor is the basis for believing that those experiments tell us anything whatsoever about the future. Without it, there is no way to assign the probabilities you mention.

Comment by alexflint on Anthropic Decision Theory III: Solving Selfless and Total Utilitarian Sleeping Beauty · 2012-01-29T12:10:27.012Z · score: 1 (1 votes) · LW · GW

These could do with forward/backward links. The Article Navigator doesn't seem to be able to get me to number 4 in this series, and the page for 'sleeping_beauty' tag appears empty.

Comment by alexflint on Occam alternatives · 2012-01-29T10:15:54.248Z · score: 1 (1 votes) · LW · GW

Occam's razor is famously difficult to justify except by circular appeal to itself. It's interesting to think of alternatives but you should be aware of what you give up when you give up Occam's razor. You can no longer make sensible inferences about the future based on your past experiences. For example, you can no longer have any confidence that the direction of gravity will still point downwards tomorrow, or that the laws of physics won't spontaneously change a minute from now. The experimental method itself no longer makes sense if you have no reason to think that the future will resemble the past.

You should read:

Comment by alexflint on Occam alternatives · 2012-01-29T10:10:39.570Z · score: 0 (0 votes) · LW · GW

The point is that to evaluate the utility of holding a belief, you need to have already decided upon a scheme to set your beliefs.

Comment by alexflint on Mandatory Secret Identities · 2012-01-28T09:08:03.536Z · score: 1 (1 votes) · LW · GW

Thanks for a thoughtful reply!

You could say much the same about painting/dancing/cooking/writing: There are many different sub-arts; it's hard to master all of them; practitioners can become unduly wedded to a single style; there are examples of styles that have "gone bonkers"; there are many factors in place that hurt the rationality of practitioners.

These are all valid concerns, but I don't think they're particularly problematic within martial arts in comparison to other hobbies.

Comment by alexflint on Mandatory Secret Identities · 2012-01-24T18:44:46.289Z · score: 0 (4 votes) · LW · GW

Martial arts seem to get an unreasonably bad rep on LW. It's at least as useful as painting or writing fiction, and I consider those to be fine personal development endeavours.

Comment by alexflint on POSITION: Design and Write Rationality Curriculum · 2012-01-19T14:48:59.884Z · score: 11 (11 votes) · LW · GW

Would it be helpful for us to try out these exercises with a small group of people and report back?

Comment by alexflint on Hack Away at the Edges · 2012-01-19T12:16:18.033Z · score: 0 (0 votes) · LW · GW

I agree. But compared to where we are right now, I think more people should actually go work directly on the core FAI problem. If the smartest half of each LW meetup earnestly and persistently worked on the most promising open problem they could identify, I'd give 50% chance that at least one would make valuable progress somewhere.

Comment by alexflint on Hack Away at the Edges · 2012-01-17T18:58:22.374Z · score: 8 (8 votes) · LW · GW

It is delusional for most people to believe that they can contribute usefully to really hard problems.

This seems more and more like the most damaging meme ever created on LessWrong. It persistently leads to people that could have made useful contributions (to AI safety) making no such contribution. Would it be a better world in which lots more people tried to contribute usefully to FAI and a small percentage succeeded? Yes, it would, even taking into account whatever cost the unsuccessful people pay.

Comment by alexflint on Advice Request: Baconmas Website · 2012-01-04T07:52:41.823Z · score: 1 (1 votes) · LW · GW

Q: What is Baconmas?

Baconmas is a relatively new holiday, celebrated on January 22nd (the birthday of Sir Francis Bacon) to celebrate the sciences, with a side order of bacon. You should try it!

That is excellent! Simple, light-hearted, and to the point.

Comment by alexflint on Prediction is hard, especially of medicine · 2011-12-24T23:35:50.506Z · score: 3 (3 votes) · LW · GW

I have three such ceramic implants. I remember having them put in during a simple half-hour operation, and being awed by the amazing advances that medicine had made to allow me to carry on my life as if I hadn't knocked my teeth out at all. Little did I know that this was one of the only success stories of the last decade of medicine!

Comment by alexflint on Prediction is hard, especially of medicine · 2011-12-24T23:31:34.084Z · score: 3 (3 votes) · LW · GW

That's pretty much the only counterexample, though.

Comment by alexflint on The Zombie Preacher of Somerset · 2011-12-21T23:07:57.582Z · score: 0 (0 votes) · LW · GW

Fascinating case Yvain, thank you for writing this up.

So, what does Simon Browne add to the p-zombie debate?

Perhaps this case provides additional evidence against the existence of (true) p-zombies. If a physical alteration to our brain can remove our experience of qualia, then this again suggests that qualia are just a by-product of a particular mental circuit.

Comment by alexflint on What independence between ZFC and P vs NP would imply · 2011-12-09T16:23:43.625Z · score: 0 (0 votes) · LW · GW

A different perspective: Godel doesn't say that there is any particular question about reality that we cannot answer, only that however far into the model-building enterprise we get, there will always be some undecidable propositions, which can be translated into questions about reality with the TM-enumerating-sentences experiment. So if we have a model of reality M and it fails to answer a question about reality Q, then there's always hope that we could discover further regularities in reality to amend M so that it answers Q, but there is no hope that we would ever be free of any open questions. Am I correct in thinking that this rules out the possibility of a GUT, at least if a GUT is defined as a model that answers all questions?

Comment by alexflint on What independence between ZFC and P vs NP would imply · 2011-12-09T14:48:11.544Z · score: 0 (0 votes) · LW · GW

I think you may be confusing reality with our models here.

Yeah, my claim was a little ambiguous. I meant to claim that either (1) our current model of reality fails to describe some truths about the universe or (2) P=NP is decidable in our model. [I'm only clarifying the claim; I'm now dubious about whether it is true.] You're right -- I should add (3) P=NP cannot be cast as a question about reality.

Comment by alexflint on AIXI and Existential Despair · 2011-12-09T09:36:10.691Z · score: 1 (1 votes) · LW · GW

This seems to be the observation that you can't have a Turing machine that implements AIXI. An approximate AIXI is not going to be able to simulate itself.

Yes, I guess you're right. But doesn't this also mean that no computable approximation of AIXI will ever hypothesize a world that contains a model of itself, for if it did then it would go into the infinite loop I described? So it seems the problem of Model 2 will never come up?

The main issue is that before you get to leverage the first N bits of AIXI's output you have to also explain the first N bits of AIXI's input

Not sure I'm understanding you correctly but this seems wrong. AIXI conditions on all its outputs so far, right? So if the world is a bit-repeater then one valid model of the world is literally a bit repeater, which explains the inputs but not the outputs.

Comment by alexflint on AIXI and Existential Despair · 2011-12-09T02:19:51.061Z · score: 0 (2 votes) · LW · GW

Voted up for being an insightful observation.

I think the core issue arises when A locates a model of the world that includes a model of A itself, thus explaining away the apparent correlation between the input and output tapes. I don't have a watertight objection to your argument, but I'm also not convinced that it goes through so easily.

Let's stick to the case where A is just a perfectly ordinary Turing approximation of AIXI. It seems to me that it's still going to have quite some difficulty reasoning about its own behaviour. In particular, suppose A locates a hypothesis H="the world consists of a connected to a and my outputs are irrelevant". Then the first step is that A asks what happens if its next output is (say) 0. To do that it needs to run H to produce the next bit that it expects to receive from the world. But running H involves running a simulation of A, and inside that simulation the exact same situation arises, namely that sim(A) considers various outputs that it might make and then runs simulations of its inferred model of the world, which themselves contain models of A, resulting in another level of recursion to sim(sim(A)), and so on in an infinite loop. Actually, I don't know what AIXI does about Turing machines that fail to produce output...
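A purely illustrative sketch of that regress (toy code, nothing to do with an actual AIXI implementation):

```python
# If the agent's best hypothesis about the world contains a copy of the agent,
# evaluating a candidate action never bottoms out: the model must first work
# out what the inner agent does, which requires running the model again, etc.
def agent_choose_action():
    # The agent evaluates an action by rolling its world model forward.
    return world_model_predict(action=0)

def world_model_predict(action):
    # The hypothesized world contains the agent itself, so predicting the next
    # input requires predicting the inner agent's choice first.
    return agent_choose_action()

try:
    agent_choose_action()
except RecursionError:
    print("evaluation never terminates")
```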

A different, perhaps weaker objection is that AIXI conditions on its outputs when performing inference, so they don't count towards the "burden of explanation". That doesn't resolve the issue you raise but perhaps this does: Is it possible to make Model 2 just slightly simpler by somehow leveraging the "free" information on the output tape? Perhaps by removing some description of some initial conditions from Model 2 and replacing that with a function of the information on the output tape. It's not clear that this is always possible but it seems plausible to me.

Comment by alexflint on What independence between ZFC and P vs NP would imply · 2011-12-09T00:36:35.915Z · score: 2 (2 votes) · LW · GW

Good point, evidently I failed to really internalize Godel. I had dismissed Godel sentences as not questions about reality but your example is compelling.

Interestingly, your post on integers seemed to suggest you were also thinking that since our models of integers fail to live up to expectations we've somehow failed to describe them, but that it might yet be possible to do so.

Comment by alexflint on Rationality Quotes December 2011 · 2011-12-02T12:22:22.160Z · score: -4 (4 votes) · LW · GW

I don't believe in an interventionist god

But I know darling that you do

But if I did I would kneel down and ask him

Not to intervene when it came to you

-- Nick Cave, Into My Arms

Comment by alexflint on Machine learning and unintended consequences · 2011-10-03T10:55:32.251Z · score: 0 (0 votes) · LW · GW

Dataset bias is a huge topic in computer vision at the moment. An excellent (and humorous) overview is given by Efros and Torralba: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5995347&tag=1.