Comment by incariol on 10-Step Anti-Procrastination Checklist · 2013-05-18T12:09:56.176Z · LW · GW

Here's another one: Skyrim soundtrack (a bit over 3.5 hours of epic fantasy music, with the last ~40 minutes being purely atmospheric/ambient).

Comment by incariol on Rationality Quotes March 2013 · 2013-03-18T19:47:03.942Z · LW · GW

Choice of attention - to pay attention to this and ignore that - is to the inner life what choice of action is to the outer. In both cases, a man is responsible for his choice and must accept the consequences, whatever they may be.

W. H. Auden

Comment by incariol on Course recommendations for Friendliness researchers · 2013-01-23T00:10:28.806Z · LW · GW

Apart from Numerical Analysis and Parallel Computing which seem a bit out of place here (*), and swapping Bishop's Pattern Recognition for Murphy's ML: A Probabilistic Perspective or perhaps Barber's freely available Bayesian Reasoning and ML, this is actually quite a nice list - if complemented with Vladimir Nesov's. ;)

(*) We're still in a phase that's not quite philosophy in the standard sense of the word, but nonetheless light years away from even starting to program the damn thing. And although learning functional programming from SICP is all well and good due to its mind-expanding effects, going into the specifics of designing programs for parallel architectures, or learning about various techniques for numerical integration, is... well, I'd rather invest my time in going through the Princeton Companion to get a nice bird's-eye view of math, or grab Pearl's Probabilistic Reasoning in Intelligent Systems and Causality to get a feel for what a formal treatment/reduction of an intuitive concept looks like, and leave numerics and other software design issues for a time when they become relevant.

Comment by incariol on Godel's Completeness and Incompleteness Theorems · 2012-12-26T18:06:31.196Z · LW · GW

Given these recent logic-related posts, I'm curious how others "visualize" this part of math, e.g. what do you "see" when you try to understand Goedel's incompleteness theorem?

(And don't tell me it's kittens all the way down.)

Things like derivatives or convex functions are really easy in this regard, but when someone starts talking about models, proofs and formal systems, my mental paintbrush starts doing some pretty weird stuff. In addition to ordinary imagery like bubbles of half-imagined objects, there is also something machine-like in the concept of a formal system, for example - as if it were imbued with the potential to produce a specific universe of various thingies in a larger multiverse (another mental image)...

Anyway, this is becoming quite hard to describe - and it's not all due to me being a non-native speaker, so... if anyone is prepared to share her mind's roundabouts, that would be really nice, but apart from that - is there a book, by a professional mathematician if possible, where one can find such revelations?

Comment by incariol on Checklist of Rationality Habits · 2012-11-16T12:35:38.338Z · LW · GW

Well, it has happened to me before - girls really can be pretty insistent. :) But this is not actually what concerns me - it's the distraction/wasted time induced by a pretty-girl-contact event, as apotheon explained below.

Comment by incariol on How can I reduce existential risk from AI? · 2012-11-12T19:42:41.811Z · LW · GW

When someone proposes what we should do, where by "we" he implicitly refers to a large group of people he has no real influence over (as in the proposal to ban AGI & hardware development), I wonder what the value of this kind of speculation is - other than amusing oneself with a picture of "what would this button do" on a simulation of Earth under one's hands.

As I see it, there's no point in thinking about these kinds of "large scale" interventions that are closely interwoven with politics. Better to focus on what relatively small groups of people can do (this includes, e.g., influencing a few other AGI development teams to work on FAI), and in this context, I think our best hope lies in deeply understanding the mechanics of intelligence and thus having at least a chance at creating FAI before some team that doesn't care in the least about safety dooms us all - and there will be such teams, regardless of what we do today; just take a look at some of the "risks from AI" interviews...

Comment by incariol on Checklist of Rationality Habits · 2012-11-11T23:42:09.668Z · LW · GW

What about "when faced with a hard problem, close your eyes, clear your mind and focus your attention on the issue at hand for a few minutes"?

It sounds so simple, yet I routinely fail to do it: when, e.g., I try to solve some Project Euler problem and don't see a solution in the first few seconds, I do something else for a while, until I finally get a handle on my slippery mind, sit down and solve the bloody thing.

Comment by incariol on Checklist of Rationality Habits · 2012-11-11T00:55:37.568Z · LW · GW

Another example: as I don't feel like getting in a relationship for the foreseeable future, I try to avoid circumstances with lots of pretty girls around, e.g. not going to certain parties, taking walks in those parts of the forest where I don't expect to meet any, and in general, trying to convince other parts of my brain that the only girl I could possibly be with exists somewhere in the distant future or not at all (if she can't do a spell or two and talk to dragons, she won't do ;-)).

It also helps being focused on math, programming and abstract philosophy - and spending time on LW, it seems. :)

Comment by incariol on Logical Pinpointing · 2012-11-11T00:28:41.658Z · LW · GW

Due to all this talk about logic I've decided to take a little closer look at Goedel's theorems and related issues, and found this nice LW post that did a really good job dispelling confusion about completeness, incompleteness, SOL semantics etc.: Completeness, incompleteness, and what it all means: first versus second order logic

If there's anything else along these lines to be found here on LW - or, for that matter, anywhere - I'm all ears.

Comment by incariol on Logical Pinpointing · 2012-11-11T00:21:37.515Z · LW · GW

So this is where (one of the inspirations for) Eliezer's meta-ethics comes from! :)

A quick refresher from a former comment:

Cognitivism: Yes, moral propositions have truth-value, but not all people are talking about the same facts when they use words like "should", thus creating the illusion of disagreement.

... and now from this post:

Some people might dispute whether unicorns must be attracted to virgins, but since unicorns aren't real - since we aren't locating them within our universe using a causal reference - they'd just be talking about different models, rather than arguing about the properties of a known, fixed mathematical model.

(This little realization also holds a key to resolving the last meditation, I suppose.)

I've heard people say the meta-ethics sequence was more or less a failure, since not that many people really understood it, but if these last posts were taken as prerequisite reading, it would be at least a bit easier to understand where Eliezer's coming from.

Comment by incariol on 2012 Less Wrong Census/Survey · 2012-11-05T12:41:45.630Z · LW · GW

Done it all!

With all those personality tests and surveys it took me a bit more than an hour, but it was quite interesting (particularly CFAR questions) so I won't complain, much. :)

Comment by incariol on Causal Reference · 2012-11-02T11:29:35.854Z · LW · GW

"Mass-energy is neither created nor destroyed..." It is then an effect of that rule, combined with our previous observation of the ship itself, which tells us that there's a ship that went over the cosmological horizon and now we can't see it any more.

It seems to me that this might be a point where logical reasoning takes over from causal/graphical models, which in turn suggests why there are some problems with thinking about the laws of physics as nodes in a graph, or even as arrows, as opposed to... well, I'm not really sure what specifically - or perhaps I'm just overapplying a lesson from the history of logic in AI, where researchers tried to squeeze all the variety of cognitive processes into a logical reasoner and spectacularly failed at it.

Causal models, being as powerful as they are, represent a similar temptation as logic did, and we should be wary not to make the same old (essentially "hammer & nail") mistake, I think.

(Just thought I'd mention this so I don't forget this strange sense of something left not-quite-completely explained.)

Comment by incariol on Original Research on Less Wrong · 2012-10-30T22:04:03.878Z · LW · GW

Um... perhaps Wei Dai's analysis of the absent-minded driver problem (with its subsequent resolution in the comments) and paulfchristiano's AIXI and existential despair would qualify?

Comment by incariol on Open Thread, October 16-31, 2012 · 2012-10-24T00:05:57.501Z · LW · GW

Use your imaginary friend, to whom you try to explain the gist of what you've just read when, say, brushing your teeth. :)

(Actually writing down an explanation would certainly be more effective, but not as fast.)

Comment by incariol on Stuff That Makes Stuff Happen · 2012-10-23T23:25:39.517Z · LW · GW

Um, let's see if I get this (thinking to myself but posting here if anyone happens to find this useful - or even intelligible)...

claiming you know about X without X affecting you, you affecting X, or X and your belief having a common cause, violates the Markov condition on causal graphs

The causal Markov condition is that a phenomenon is independent of its noneffects, given its direct causes. It is equivalent to the ordinary Markov condition for Bayesian nets (any node in a network is conditionally independent of its nondescendents, given its parents) when the structure of a Bayesian network accurately depicts causality.

So, this condition induces certain (conditional) independencies between nodes in a causal graph (that can be found using the D-separation trick), and when we find two such nodes, they must also be uncorrelated (this follows from probabilistic independence being a stronger property than uncorrelatedness).

If one therefore claims there's a persistent correlation between X and a belief about X, there's got to be some active path in the Bayesian network for probabilistic influence to flow between them - otherwise, X and Belief(X) would be d-separated and thereby independent and uncorrelated. Insisting there's no such path (e.g. no chain of directed links) leads to a violation of the Markov condition, since it maintains there's a probabilistic dependence between two nodes in the graph that cannot be accounted for by the causal links currently in the graph.
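To make this concrete, here's a toy sketch in plain Python (the variable names and noise levels are my own illustrative choices, not anything from the post): a tiny "world" where Belief(X) either sits at the end of a causal chain X → observation → belief, or floats free of X. With the chain intact there's an active path, so the correlation shows up; cut it, and X and Belief(X) are d-separated, so the sample correlation drops to roughly zero.

```python
import random

random.seed(0)

def sample(n, causal_link):
    """Sample (X, Belief(X)) pairs. With the causal link, the belief is
    formed from a noisy observation of X; without it, the belief is driven
    by independent noise, so no active path connects the two nodes."""
    xs, beliefs = [], []
    for _ in range(n):
        x = random.gauss(0, 1)            # the world: X
        obs = x + random.gauss(0, 0.5)    # X -> observation
        if causal_link:
            b = obs                       # observation -> Belief(X)
        else:
            b = random.gauss(0, 1)        # belief floats free of X
        xs.append(x)
        beliefs.append(b)
    return xs, beliefs

def corr(a, b):
    """Sample Pearson correlation."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
    va = sum((x - ma) ** 2 for x in a) / n
    vb = sum((y - mb) ** 2 for y in b) / n
    return cov / (va * vb) ** 0.5

xs, bs = sample(10_000, causal_link=True)
print("active path:  ", round(corr(xs, bs), 2))   # strongly correlated
xs, bs = sample(10_000, causal_link=False)
print("d-separated:  ", round(corr(xs, bs), 2))   # roughly zero
```

The "insisting there's no such path" move corresponds to keeping the second sampler but still claiming the printed correlation should be large - a dependence the graph's links can't account for.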

Comment by incariol on Stuff That Makes Stuff Happen · 2012-10-18T09:09:48.940Z · LW · GW

Look at it as an exercise for the actively disbelieving mini-skill. :)

Comment by incariol on The Useful Idea of Truth · 2012-10-02T15:08:01.086Z · LW · GW

So... could this style of writing, with koans and pictures, be applied to transforming the majority of sequences into an even greater didactic tool?

Besides the obvious problems, I'm not sure how this would stand with Eliezer - they are, after all, his masterpiece.

Comment by incariol on The Useful Idea of Truth · 2012-10-02T15:01:56.591Z · LW · GW

Perhaps this: "accuracy" is a quantitative measure, while "truth" is only qualitative/categorical.

Comment by incariol on The Useful Idea of Truth · 2012-10-02T14:59:07.282Z · LW · GW

We know there's such a thing as reality due to the reasons you mention, not truth - that's just a relation between reality and our beliefs.

"Arrangements of atoms" play a role in the idea that not all "syntactically correct" beliefs actually are meaningful and the last koan asks us to provide some rule to achieve this meaningfulness for all constructible beliefs (in an AI).

At least that's my understanding...

Comment by incariol on Advice On Getting A Software Job · 2012-07-12T20:45:44.894Z · LW · GW

What about some kind of online employment like the one offered by e.g. oDesk? Some time ago I stumbled upon this recommendation that also gave a few tips on how to approach this kind of work.

I haven't yet found the time to try it out, but since I'm also in a similar situation (finishing a CS degree then planning to find a job that'll pay the bills and use my free time for personal projects) I treat it as one of the most promising alternatives...

Comment by incariol on Suggest alternate names for the "Singularity Institute" · 2012-06-27T11:07:15.222Z · LW · GW


"The Mandate is a Gnostic School founded by Seswatha in 2156 to continue the war against the Consult and to protect the Three Seas from the return of the No-God.

... [it] also differs in the fanaticism of its members: apparently, all sorcerers of rank continuously dream Seswatha's experiences of the Apocalypse every night ...

...the power of the Gnosis makes the Mandate more than a match for schools as large as, say, the Scarlet Spires."

No-God/UFAI, Gnosis/x-rationality, the Consult/AGI community? ;-)

Comment by incariol on Reaching young math/compsci talent · 2012-06-17T09:52:11.461Z · LW · GW

You can find a few suggestions here, for starters.