I agree that it would be dangerous.
What I'm arguing is that dividing by resource consumption is an odd way to define intelligence. For example, under this definition is a mouse more intelligent than an ant? Clearly a mouse has much more optimisation power, but it also has a vastly larger brain. So once you divide out the resource difference, maybe ants are more intelligent than mice? It's not at all clear. That this could even be a possibility runs strongly counter to the everyday meaning of intelligence, as well as definitions given by psychologists (as Tim Tyler pointed out above).
Right, but the problem with this counter example is that it isn't actually possible. A counter example that could occur would be much more convincing.
Personally, if a GLUT could cure cancer, cure aging, prove mind blowing mathematical results, write an award-winning romance novel, take over the world, and expand out to take over the universe... I'd be happy considering it to be extremely intelligent.
Sure, if you had an infinitely big and fast computer. Of course, even then you still wouldn't know what to put in the table. But if we're in infinite theory land, then why not just run AIXI on your infinite computer?
Back in reality, the lookup table approach isn't going to get anywhere. For example, if you use a video camera as the input stream, then after just one frame of data your table would already need something like 256^1000000 entries. The observable universe only has about 10^80 particles.
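A quick back-of-the-envelope check of those numbers (assuming the 256^1000000 figure comes from a 1-megapixel frame with 8 bits, i.e. 256 values, per pixel):

```python
import math

# Assume one frame is 1,000,000 pixels at 8 bits (256 values) each.
pixels = 1_000_000
values_per_pixel = 256

# Number of distinct possible frames = lookup table entries needed.
# Work in log10, since the integer itself is astronomically large.
log10_entries = pixels * math.log10(values_per_pixel)

print(f"Table entries: about 10^{log10_entries:.0f}")  # ~10^2408240
print("Particles in the observable universe: ~10^80")
```

So the table for even a single frame would need a number of entries with about 2.4 million digits, against roughly 80 digits' worth of particles to build it from.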
Machine learning and AI algorithms typically display the opposite of this, i.e. sub-linear scaling. In many cases there are hard mathematical results that show that this cannot be improved to linear, let alone super-linear.
This suggests that if a singularity were to occur, we might be faced with an intelligence implosion rather than an explosion.
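The sub-linear scaling mentioned above can be illustrated with a toy statistical example (not any actual AI algorithm): for many estimators the error falls as 1/sqrt(n), so quadrupling the resources only doubles the accuracy.

```python
import random

random.seed(0)

def estimate_error(n, trials=1000):
    """Average absolute error of the sample mean of n fair coin flips."""
    total = 0.0
    for _ in range(trials):
        mean = sum(random.random() < 0.5 for _ in range(n)) / n
        total += abs(mean - 0.5)
    return total / trials

# Quadrupling n only roughly halves the error: sub-linear returns on resources.
for n in (100, 400, 1600):
    print(n, round(estimate_error(n), 4))
```

Each fourfold increase in data buys only a factor-of-two improvement, which is the shape of the diminishing returns being described.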
If I had a moderately powerful AI and figured out that I could double its optimisation power by tripling its resources, would my improved AI actually be less intelligent? What if I repeated this process a number of times? I could end up with an AI that had enough optimisation power to take over the world, and yet its intelligence would be extremely low.
It's not clear to me from this description whether the SI predictor is also conditioned. Anyway, if the universal prior is not conditioned, then the convergence is easy as the uniform distribution has very low complexity. If it is conditioned, then you will have no doubt observed many processes well modelled by a uniform distribution over your life -- flipping a coin is a good example. So the estimated probability of encountering a uniform distribution in a new situation won't be all that low.
Indeed, with so much data SI will have built a model of language, and how this maps to mathematics and distributions, and in particular there is a good chance it will have seen a description of quantum mechanics. So if it's also been provided with information that these will be quantum coin flips, it should predict basically perfectly, including modelling the probability that you're lying or have simply set up the experiment wrong.
This is a tad confused.
A very simple measure on the binary strings is the uniform measure and so Solomonoff Induction will converge on it with high probability. This is easiest to think about from the Solomonoff-Levin definition of the universal prior where you take a mixture distribution of the measures according to their complexity -- thus a simple thing like a uniform prior gets a very high prior probability under the universal distribution. This is different from the sequence of bits itself being complex due to the bits being random. The confusing thing is when you define it the other way by sampling programs, and it's not at all obvious that things work out the same... indeed it's quite surprising I'd say.
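The mixture form described above can be written out as follows (my gloss on the standard Solomonoff-Levin construction, with $\mathcal{M}$ the class of enumerable semimeasures and $K(\nu)$ the complexity of $\nu$):

```latex
\xi(x) \;=\; \sum_{\nu \in \mathcal{M}} 2^{-K(\nu)}\, \nu(x)
```

Since the uniform measure has a very short description, its $K$ is small and so its weight $2^{-K}$ in the mixture is large, which is exactly why the universal prior converges quickly on uniformly random sequences even though the individual sequences are incompressible.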
I'd suggest reading the second chapter of "Machine Super Intelligence", I think it's clearer there than in my old master's thesis as I do more explaining and give fewer proofs.
The way it works for me is this:
First I come up with a sketch of the proof and try to formalise it and find holes in it. This is fairly creative and free and fun. After a while I go away feeling great that I might have proven the result.
The next day or so, fear starts to creep in and I go back to the proof with a fresh mind and try to break it in as many ways as possible. What is motivating me is that I know that if I show somebody this half-baked proof it's quite likely that they will point out a major flaw in it. That would be really embarrassing. Thus, I imagine that it's somebody else's proof and my job is to show why it's broken.
After a while of my trying to break it, I'll then show it to somebody kind who won't laugh at me if it's wrong, but is pretty careful at checking these things. Then another person... slowly my fear of having screwed up lifts. Then I'm ready to submit it for publication.
So in short: I'm motivated to get proofs right (I have yet to have a published proof corrected, not counting blog posts) out of a fear of looking bad. What motivates me to publish at all is the feeling of satisfaction that I draw from the achievement. In my moderate experience of mathematicians, they often seem to have similar emotional forces at work.
Glial cells are actually about 1:1 with neurons. A few years ago a researcher wanted to cite something to back up the usual 9:1 figure, but after asking everybody for several months nobody knew where the figure came from. So, they did a study themselves, did a count, and found it to be about 1:1. I don't have the reference on me; it was a talk I went to about a year ago (I work at a neuroscience research institute).
I have asked a number of neuroscientists about the importance of glia and have always received the same answer: the evidence that they are functionally important is still "very weak". They might be wrong, but given that some of these guys could give hour long lectures on exactly why they think this, and know the few works that claim otherwise... I'm inclined to believe them.
Here's my method: (+8 for me)
I have a 45 minute sand glass timer and a simple abacus on my desk. Each row on the abacus corresponds to one type of activity that I could be doing, e.g. writing, studying, coding, emails and surfing,... First, I decide what type of activity I'd like to do and then start the 45 minute sand glass. I then do that kind of activity until it ends, at which point I count it on my abacus and have at least a 5 minute break. There are no rules about what I have to do; I do whatever I want. But I always do it in focused 45 minute units.
If you try this, do it exactly as I describe, at least to start with, as there are reasons for each of the elements. Let me explain some of them. Firstly the use of a physical timer and abacus. Having them sitting on your desk in view makes them a lot more effective than using something like a digital timer and spreadsheet on your computer. When you look up you see the sand running out. When you take a break you see a colourful physical bar graph of your time allocation -- it's there looking at you.
45 minutes is important because it's long enough to get a reasonable amount done if you work in a focused way, but short enough not to be discouraging, unlike an hour. Even with something I don't particularly want to do, sitting down and doing just 45 minutes of it is a bearable concept, knowing that at the end I'll have a break and then do something else if I want to. Also, if you look at human mental performance, it doesn't make much sense trying to do more than 45 minutes of hard work at a time. Better to have a break for 5 to 15 minutes and then start again. As I think 15 minute breaks are essential, each unit plus its break comes to an hour, so at the end of the week the total number of units counted on my abacus is my total number of at-work-activity hours for the week.
Having no rule about what you have to do is also important. If you put rules in place you will start avoiding using the system. The only thing is that when you start a unit of 45 minutes you have to go through with it. But you're free not to start one if you don't want to. You might then think that you'd always just do the kind of work that you like doing, rather than units of the stuff you avoid but should be doing. Interestingly, no, indeed often I find that the reverse starts to happen, even though I'm not really aiming for that. The reason is the principle that what you measure and keep in mind you naturally tend to control. Thus you don't actually need any rules, in fact they are harmful as they make you dislike and avoid the system.
Another force at work is that momentum often builds enthusiasm. Thus you think that you'll just do 45 minutes on some project due in a week that you'd rather not be doing at all, and after that unit of time you actually feel like doing another one just to finish some part of it off.
So yeah, the only real rule is that when the sand glass is running you have to stay hard at work on the task, which isn't too bad as it's only so many minutes more before you're taking a break and once again free.
UPDATE: So it seems that what I'm doing is a variant on the "Pomodoro technique" (and probably quite a few others). The differences are that I prefer 45 minutes, which I think is a better chunk of time to get things moving, and that I like the physical aspects of a sand glass timer and an abacus. I should perhaps add that when I was doing intense memorisation study before an exam I'd use a cycle of 20 minutes on, 10 to 20 minutes off, as that matches human memory performance better. But for general tasks 45 minutes seems good to me.
A whole community of rationalists and nobody has noticed that his elementary math is wrong?
1.9 gigaFLOPS doubled 8 times is around 500 gigaFLOPS, not 500 teraFLOPS.
Big difference, and one that trashes his conclusion.
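The arithmetic, for the record:

```python
import math

# 1.9 gigaFLOPS doubled 8 times:
gflops = 1.9 * 2 ** 8
print(round(gflops, 1))  # 486.4 -- roughly 500 gigaFLOPS, not 500 teraFLOPS

# Doublings actually needed to reach ~500 teraFLOPS (= 500,000 gigaFLOPS):
print(math.log2(500_000 / 1.9))  # ~18, not 8
```

So the stated figure is off by a factor of about a thousand, i.e. roughly ten missing doublings.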
There is nothing about being a rationalist that says that you can't believe in God. I think the key point of rationality is to believe in the world as it is rather than as you might imagine it to be, which is to say that you believe in the existence of things due to the weight of evidence.
Ask yourself: do you want to believe in things due to evidence?
If the answer is no, then you have no right calling yourself a "wannabe rationalist" because, quite simply, you don't want to hold rational beliefs.
If the answer is yes, then put this into practice. Is the moon smaller than the earth? Does Zeus exist? Does my toaster still work? In each case, what is the evidence?
If you find yourself believing something that you know most rationalists don't believe in, and you think you're basing your beliefs on solid evidence and logical reasoning, then by all means come and tell us about it! At that point we can get into the details of your evidence and the many more subtle points of rational reasoning in order to determine whether you really do have a good case. If you do, we will believe.
The last Society for Neuroscience conference had 35,000 people attend. There must be at least 100 research papers coming out per week. Neuroscience is full of "known things" that most neuroscientists don't know about unless it's their particular area.
This professor, who has no doubt debated game theory with many other professors and countless students making all kinds of objections, gets three paragraphs in this article to make a point. Based on this, you figure that the very simple objection that you're making is news to him?
One thing that concerns me about LW is that it often seems to operate in a vacuum, disconnected from mainstream discourse.
Or how about Ray Solomonoff? He doesn't live that far away (Boston I believe) and still gives talks from time to time.
One of my favourite posts here in a while. When talking with theists I find it helpful to clarify that I'm not so much against their God, rather my core problem is that I have different epistemological standards to them. Not only does this take some of the emotive heat out of the conversation, but I also think it's the point where science/rationalism/atheism etc. is at its strongest and their system is very weak.
With respect to untheistic society, I remember when a guy I knew shifted to New Zealand from the US and was disappointed to find that relatively few people were interested in talking to him about atheism. The reason, I explained, is that most people simply aren't sufficiently interested in religion to be bothered with atheism. This is a society where the leaders of both major parties in the last election publicly stated that they were not believers and... almost nobody cared.
Sure, there will be a great many factors at work here in the real world that our model does not include. The challenge is to come up with a manageable collection of principles that can be observed and measured across a wide range of situations and that appears to explain the observed behaviour. For this purpose "can't be bothered" isn't a very useful principle. What we really want to know is why they can't be bothered.
For example, I know people who can be bothered going to a specific shop and queueing in line every week to get a lottery ticket, and then scheduling time to watch the draw on television. It would be a lot less total effort over the years if they went into their internet banking and transferred a few thousand dollars into an investment fund that their bank offers. Plus their expected return would be positive rather than negative. Even if you point this out to them, they probably still won't do it. Why is it then that they can't be bothered doing this, but they can be bothered buying lottery tickets? One potential explanation provided by prospect theory is probability weighting: the negative tail on stock returns gets overweighted, as does the chance of winning the lottery. No doubt you can come up with other hypotheses about what is going on.
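One common functional form for this probability weighting is Prelec's w(p) = exp(-(-ln p)^gamma); the jackpot probability and gamma below are purely illustrative numbers, not fitted values.

```python
import math

def prelec_weight(p, gamma=0.5):
    """Prelec probability weighting: w(p) = exp(-(-ln p)^gamma).
    For gamma < 1, tiny probabilities are heavily overweighted."""
    return math.exp(-((-math.log(p)) ** gamma))

p_jackpot = 1e-7              # illustrative objective chance of winning
w = prelec_weight(p_jackpot)
print(w)                      # ~0.018: a one-in-ten-million win "feels" like ~2%
print(w / p_jackpot)          # overweighted by a factor of ~180,000
```

The same curve overweights the small probability of a catastrophic stock crash, which is the other half of the asymmetry described above.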
I'll also add that, while this work is a bit different to things usually covered here, it is perhaps interesting for some people to see an attempt to incorporate some of these ideas about cognitive biases into a formal mathematical model of behaviour -- in this case the behaviour of investors.
In case it hasn't already been posted by somebody, here's a nice talk about irrational behaviour and loss aversion in particular.
Yeah... I know :-\ There are various political forces at work within this community that I try to stay clear of.
One group within the community calls it "Algorithmic Information Theory" or AIT, and another "Kolmogorov complexity". I talked to Hutter when he was writing that article for Scholarpedia that you cite. He decided to use the more neutral term "algorithmic complexity" so as not to take sides on this issue. Unfortunately, "algorithmic complexity" is more typically taken as meaning "computational complexity theory". For example, if you search for it on Wikipedia you will get redirected. I know, it's all kind of ridiculous and confusing...
The title is a bit misleading. "Algorithmic complexity" is about the time and spaces resources required for computations (P != NP? etc...), whereas this web site seems to be more about "Algorithmic Information Theory", also known as "Kolmogorov Complexity Theory".
I've never heard of this guy before, but yes, that's the same idea at work.
The following is the way I've approached the problem, and it seems to have worked for me. I've never tried to see if it would work with somebody else before, indeed I don't think I've ever explained this to anybody else before.
As I see it, these problems arise when what I think I should do and what I feel like doing are in conflict with each other. Going with what you feel is easy; it's sort of like the automatic mode of operation. Overriding this and acting on what you think takes effort, and the stronger your feelings are for doing something else, the harder it is.
The trick then is to try to reconcile the two. The way most people do it is that they start doing what they feel, and then rationalise it to the point that it's also what they think, to some degree. Fortunately, you can also do it the other way, as your feelings are trainable. Find whatever it is that you rationally want to do, and then keep on reminding yourself not just why you want to do this, but also try to feel it. Imagine how doing well in, say, some course of study is going to benefit and advance you in the future. How it will give you an edge against others who haven't studied the harder aspects of it, etc. Be creative, think of all sorts of positive reasons why doing this thing that you already know you should do is a great thing for you. And, most importantly, try to feel how you will benefit from this. Imagine yourself in the future having kicked butt in this course, or whatever, and imagine what that is going to feel like. Really try to feel it!
It takes time, but you slowly build up positive emotions around these things that you should be doing. At first, it just doesn't take quite as much effort to do them. Then it comes quite naturally. And after a while you will find yourself actually wanting to do it, to the extent that it would take an act of will power to not do it. Really.
This process itself also becomes a habit. When you decide to do something, you will automatically start to build up positive emotions around whatever it is that you've decided to do. During my PhD writing it built up to such a degree that I'd have these dreams some nights about how amazingly happy and proud I was going to be when it was finished. Motivating myself to work on it wasn't a problem.
Imagine a world where the only way to become really rich is to win the lottery (and everybody is either risk averse or at least risk neutral). With an expected return of less than $1 per $1 spent on tickets, rational people don't buy lottery tickets. Only irrational people do that. As a result, all the really rich people in this world must be irrational.
In other words, it is possible to have situations where being rational increases your expected performance, but at the same time reduces your chances of being a super achiever. Thus, the claim that "rationalists should win" is not necessarily true, even in theory, if "winning" is taken to mean being among the top performers. A more accurate statement would be, "In a world with both rational and irrational agents, the rational agents should perform better on average than the population average."
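A toy simulation of that imagined world (all numbers invented for illustration): lottery players have lower mean wealth, yet every agent at the top of the wealth ranking is a lottery player.

```python
import random

random.seed(1)

TICKET_COST, JACKPOT, P_WIN = 1.0, 500_000.0, 1e-6  # EV per ticket: -0.5
TICKETS = 10_000       # tickets bought over a lifetime
N_AGENTS = 100_000

# Chance of at least one jackpot in a lifetime (multiple wins are negligible).
p_ever_win = 1 - (1 - P_WIN) ** TICKETS             # ~0.00995

def lifetime_wealth(rational):
    if rational:
        return 0.0                                  # never plays: wealth stays at 0
    won = random.random() < p_ever_win
    return (JACKPOT if won else 0.0) - TICKETS * TICKET_COST

players = [lifetime_wealth(False) for _ in range(N_AGENTS)]

print(sum(players) / N_AGENTS)  # about -5,000: players lose on average
print(max(players))             # 490000.0: the richest agent is a lottery winner
```

The rational agents beat the average, but none of them can ever be the richest, which is the wedge between "higher expected performance" and "winning" in the top-performer sense.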
The most effective way for you to internally understand the world and make good decisions is to be super rational. However, the most effective way to get other people to aid you on your quest for success is to practice the dark arts. The degree to which the latter matters is determined by the mean rationality of the people you need to draw support from, and how important this support is for your particular ambitions.
I usually have something nice to say about most things, even the ideas of some pretty crazy people. Perhaps less so online, but more in person. In my case the reason is not tolerance, but rather a habit that I have when I analyse things: when I see something I really like I ask myself, "Ok, but what's wrong with this?" I mentally try to take an opposing position. Many self-described "rationalists" do this, habitually. The more difficult one is the reverse: when I see something I really don't like, but where the person (or better, a whole group) is clearly serious about it and has spent some time on it, I force myself to again flip sides and try to argue for their ideas. Over the years I suspect I've learnt more from the latter than the former. Externally, I might just sound like I'm being very tolerant.
I know about the birthday effect and similar. (I do math and stats for a living.) The problem is that when I try to estimate the probability of having these events happen I get probabilities that are too small.
Well, I'm getting my karma eaten so I'll return to being quiet about these events. :-)
No, but if those thousand people don't know whether they are part of the thousand or not (after all, in any normal situation I wouldn't tell these stories to anybody), shouldn't they assume that they probably aren't part of the 1 in 1000, and thus adjust their posterior distribution accordingly?
I am an atheist who does not believe in the supernatural. Great. Tons of evidence and well thought out reasoning on my side.
But... well... a few things have happened in my life that I find rather difficult to explain. I feel like a statistician looking at a data set with a nice normal distribution... and a few very low probability outliers. Did I just get a weird sample, or is something going on here? I figure that they are most likely to be just weird data points, but they are weird enough to bother me.
Let me give you one example. A few years ago I had a dream that I was eating and out of the blue I discovered a shard of glass in my mouth. The dream bothered me so much that I had a flashback to it the next day as I was walking down the road. For me that's extremely unusual. It's rare that I can even remember a dream, and when I do they certainly don't bother me the next day. So, the day after that I was eating a salad and... crunch. I spat out what was in my mouth and there was a seriously nasty looking sliver of glass. I didn't cut my mouth or anything, no harm done. I just hit it with my tooth.
To the best of my knowledge that was the only time I've ever found glass in something I was eating, and it was the only time I've had a vivid dream about it that bothered me the next day (or any dream about it at all). I didn't have any particular glass eating phobia before all this took place (except for a normal aversion to the idea), and I haven't been worried about it since (ok, except for looking rather carefully at salads from that particular cafeteria for a few weeks afterwards). Was this all just a really weird coincidence? As far as I can make out the probabilities are just too low to be ignored. To make matters worse, I have a few other stories that I find just as difficult to explain away as coincidence.
Now, I wouldn't say that I "believe" that something seriously weird is going on here. That would be much too strong. However, because I don't feel that I can adequately account for some of my observations of the world, I think I must assign a small probability that there is something very seriously strange going on in the universe and that these events were not weird flukes.
I have other things to say but that would get into topics currently banned from this blog :-/
I grew up knowing that Santa didn't exist. My parents had to then explain to me that I couldn't tell certain kids about this because their parents wanted them to still think Santa was real until they were a bit older. I still remember being quite shocked that these parents were lying to their kids, along with grandparents and other family members, and then expecting even me to join in. I was further shocked by the fact that most of these kids never worked it out themselves and had to eventually be told by their parents or a group of their friends (being told by one or two friends usually wasn't enough).
So, while I never experienced the shock of finding out that Santa wasn't real, watching these parents lying to their kids about Santa again and again certainly left a strong impression on my young mind.