Comments

Comment by aletheilia on Hard problem? Hack away at the edges. · 2011-09-28T10:02:49.061Z · LW · GW

He may have changed his mind since then, but in case you missed it: Recommended Reading for Friendly AI Research

Comment by aletheilia on Interview with Singularity Institute Research Fellow Luke Muehlhauser · 2011-09-19T10:19:00.796Z · LW · GW

This idea probably just comes from looking at the Blue Brain project that seems to be aiming in the direction of WBE and uses an expensive supercomputer for simulating models of neocortical columns... right, Luke? :)

(I guess because we'd like to see WBE come before AI, since creating FAI is a hell of a lot more difficult than ensuring that a few (hundred) WBEs behave in an at least humanly friendly way and can thereby be of some use in making progress on FAI itself.)

Comment by aletheilia on [SEQ RERUN] Occam's Razor · 2011-09-14T08:52:36.255Z · LW · GW

Perhaps the following review article can be of some help here: A Philosophical Treatise of Universal Induction

Comment by aletheilia on Singularity Institute Strategic Plan 2011 · 2011-08-28T15:42:08.875Z · LW · GW

Time to level-up then, eh? :)

(Just sticking to my plan of trying to encourage people for this kind of work.)

Comment by aletheilia on Singularity Institute Strategic Plan 2011 · 2011-08-28T15:37:15.305Z · LW · GW

Before resorting to 'large financial prizes', shouldn't level 1 include 'formalize open problems and publicise them'?

The trouble is, 'formalizing open problems' seems like by far the toughest part here, and it would thus be nice if we could employ collaborative problem-solving to somehow crack this part of the problem... by formalizing how to formalize various confusing FAI-related subproblems and throwing this on MathOverflow? :) Actually, I think LW is a more appropriate environment for at least attempting this endeavor, since it is, after all, what a large part of Eliezer's sequences tried to prepare us for...

Comment by aletheilia on Singularity Institute Strategic Plan 2011 · 2011-08-27T09:35:26.295Z · LW · GW

...open problems you intend to work on.

You mean we? :)

...and we can start by trying to make a list like this, which is actually a pretty hard and important problem all by itself.

Comment by aletheilia on Help Fund Lukeprog at SIAI · 2011-08-26T21:42:23.951Z · LW · GW

I wonder if anyone here shares my hesitation to donate (only a small amount, since I unfortunately can't afford anything bigger) due to thinking along the lines of "let's see, if I donate $100, that may buy a few meals in the States, especially CA, but on the other hand, if I keep it, I can live on it for ~2/3 of a month, and since I also (aspire to) work on FAI-related issues, isn't this a better way to spend the little money I have?"

But anyway, since even the smallest donations matter (tax laws an' all that, if I'm not mistaken) and −$5 isn't going to kill me, I've just made this tiny donation...

Comment by aletheilia on IntelligenceExplosion.com · 2011-08-09T23:52:20.578Z · LW · GW

What is the difference between the ideas of recursive self-improvement and intelligence explosion?

They sometimes get used interchangeably, but I'm not sure they actually refer to the same thing. It wouldn't hurt if you could clarify this somewhere, I guess.

Comment by aletheilia on Hanson Debating Yudkowsky, Jun 2011 · 2011-07-06T08:13:46.039Z · LW · GW

How about a LW poll regarding this issue?

(Is there some new way to make one, since the site redesign, or are we still at the vote-up/down-karma-balance pattern?)

Comment by aletheilia on SIAI’s Short-Term Research Program · 2011-06-24T23:01:18.401Z · LW · GW

Even if we presume to know how to build an AI, figuring out the Friendly part still seems to be a long way off. Some AI-building plans and/or architectures (e.g. evolutionary methods) are also totally useless F-wise, even though they may lead to a general AI.

What we actually need is knowledge about how to build a very specific type of AI, and unfortunately, it appears that the A(G)I (sub)field, with its "anything that works" attitude, isn't going to provide it.

Comment by aletheilia on Advice for AI makers · 2011-06-24T10:49:06.751Z · LW · GW

Well, this can actually be done (yes, in Prolog with a few metaprogramming tricks), and it's not really that hard - only very inefficient, i.e. feasible only for relatively small problems. See: Inductive logic programming.
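To make the idea concrete, here is a minimal sketch of the brute-force search at the heart of ILP, written in Python rather than Prolog (the toy family domain, predicate names and target relation are all invented for illustration): enumerate candidate rule bodies over a few background predicates and keep the ones consistent with the positive and negative examples. The exponential growth of the hypothesis space is exactly why the naive version stays feasible only for small problems; real ILP systems such as FOIL or Progol prune this search rather than enumerating exhaustively.

```python
# Illustrative brute-force ILP-style search over a toy family domain.
# All names (parent, female, daughter, ...) are made up for this sketch.
from itertools import combinations

# Background knowledge as simple lookup structures.
parent = {("ann", "bea"), ("ann", "ed"), ("carl", "dan")}   # parent(X, Y): X is a parent of Y
female = {"ann", "bea"}

# Literal templates the learner may place in a rule body for the target daughter(X, Y).
literals = {
    "parent(Y, X)": lambda x, y: (y, x) in parent,
    "parent(X, Y)": lambda x, y: (x, y) in parent,
    "female(X)":    lambda x, y: x in female,
    "female(Y)":    lambda x, y: y in female,
}

# Positive and negative examples of the target relation daughter(X, Y).
positives = {("bea", "ann")}
negatives = {("dan", "carl"), ("ann", "bea"), ("ed", "ann")}

def covers(body, x, y):
    """A candidate rule covers (x, y) iff every literal in its body holds."""
    return all(literals[name](x, y) for name in body)

# Brute-force search: try every subset of literals as a rule body and print
# each body consistent with the examples. The number of candidate bodies
# grows exponentially with the number of literals -- hence the inefficiency.
for size in range(1, len(literals) + 1):
    for body in combinations(literals, size):
        if all(covers(body, x, y) for x, y in positives) and \
           not any(covers(body, x, y) for x, y in negatives):
            print("daughter(X, Y) :- " + ", ".join(body))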

Comment by aletheilia on Meanings of Mathematical Truths · 2011-06-06T22:58:38.145Z · LW · GW

This kind of thinking (which I actually quite like) presupposes the existence of abstract mathematical facts, but I'm at a loss figuring out in what part of the territory they are stored, and, if this is the wrong question to ask, what precisely makes it so.

Does postulating their existence buy us anything other than TDT's nice decision procedure?

Comment by aletheilia on Meanings of Mathematical Truths · 2011-06-06T22:46:42.649Z · LW · GW

Talking or reasoning about the uncomputable isn't the same as "computing" the uncomputable. The first may very well be computable while the second obviously isn't.

Comment by aletheilia on Meanings of Mathematical Truths · 2011-06-06T22:10:42.048Z · LW · GW

Logic is useful because it produces true beliefs.

I'd rather say it conserves true beliefs that were put into the system at the start, but these were, in turn, produced inductively.

[math statements] would be just as valid in any other universe

I've often heard this bit of conventional wisdom but I'm not totally convinced it's actually true. How would we even know?

Well, what if in some other universe every process isomorphic to computing "2 + 2" concludes that it equals "3" instead of "4" - would this mean that the abstract fact "2 + 2 = 4" is false/invalid in that universe?

As far as I can see, this boils down to a question about where these abstract mathematical facts are stored, or perhaps, what controls these facts if not the deep physical laws of the universe that contains the calculators trying to discern them...
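One standard way to cash this out: within a fixed formal system the fact is pinned down by the definitions and axioms alone, independently of any particular calculator - a minimal Lean sketch of that view (purely as an illustration, not an answer to the question above):

```lean
-- "2 + 2 = 4" follows by unfolding the definition of addition on the
-- natural numbers; the proof is purely definitional (rfl), with no
-- appeal to the physics of any particular calculator.
example : 2 + 2 = 4 := rfl
```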

Comment by aletheilia on Meanings of Mathematical Truths · 2011-06-06T10:32:37.947Z · LW · GW

"Controlled by abstract fact" as in Controlling Constant Programs idea?

Since this notion of our brains being calculators of - and thereby providing evidence about - certain abstract mathematical facts seems transplanted from Eliezer's metaethics, I wonder if there are any important differences between these two ideas, i.e. between trying to answer "5324 + 2326 = ?" and "is X really the right thing to do?".

In other words, are moral questions just a subset of math living in a particularly complex formal system (our "moral frame of reference") or are they a beast of a different-but-similar-enough-to-become-confused kind? Is the apparent similarity just a consequence of both "using" the same (metaphorical) math/truth-discovering brain module?

Comment by aletheilia on The Multiverse Interpretation of Quantum Mechanics [link] · 2011-06-03T09:57:34.069Z · LW · GW

Sean Carroll has a nice explanation of the general idea on his blog that falls somewhere between New Scientist's uninformativeness and the usual layman-inaccessible arXiv article.

Comment by aletheilia on [SEQ RERUN] Inductive Bias · 2011-05-27T12:51:11.570Z · LW · GW

Probability Theory: The Logic of Science?

Comment by aletheilia on The Urgent Meta-Ethics of Friendly Artificial Intelligence · 2011-02-04T11:52:52.985Z · LW · GW

Perhaps he's referring to the part of CEV that says "extrapolated as we wish that extrapolated, interpreted as we wish that interpreted". In this way even logical coherence becomes a focus of the extrapolation dynamics, and if this criterion should be changed to something else - as judged by the whole of our extrapolated morality in a strange-loopy way - well, so be it. The dynamics should reflect on itself and consider the foundational assumptions it was built upon, including the compellingness of the basic logic we are currently so certain about - and, of course, whether it really should reflect on itself in this way at all.

Anyway, I'd really like to hear what Vladimir has to say about this. Even though it's often quite hard for me to parse his writings, he does seem to clear things up for me or at least direct my attention towards some new, unexplored areas...

Comment by aletheilia on Eliezer to speak in Oxford, 8pm Jan 25th · 2011-01-18T02:13:27.476Z · LW · GW

...and he also gave a talk at Oxford just recently, at FHI's Winter Intelligence Conference: http://www.fhi.ox.ac.uk/events_data/winter_conference

Videos should be available soon, and I must admit I'm a bit more eager to hear what he and others had to say at this gathering than what his forthcoming talk will bring us...

Comment by aletheilia on Best career models for doing research? · 2010-12-07T19:29:25.919Z · LW · GW

Being in a similar position (also as far as aversion to moving to e.g. the US is concerned), I decided to work part time (roughly 1/5 of the time or even less) in the software industry and spend the remainder of the day studying relevant literature, leveling up etc. for working on the FAI problem. Since I'm not quite out of the university system yet, I'm also trying to build some connections with our AI lab staff and a few other interested people in academia, but with no intention to actually join their show. It would eat away almost all my time, only for me to work on some AI-ish bioinformatics software or something similarly irrelevant FAI-wise.

There are of course some benefits to joining academia, as you mentioned, but it seems to me that you can reap quite a few of them by just befriending an assistant professor or two.