Posts
Comments
I think there is an easier way to get the point across by focusing not on self-improving AI, which is hard to understand, but on something everyone already understands: AI will make it easier for rich people to exploit everyone else. Right now, dictators still have to spend effort on keeping their subordinates happy or else they will be overthrown. And those subordinates have to spend effort on keeping their own subordinates from rebelling, too. That way you get at least a small incentive to keep other people happy.
Once a dictator has an AI servant, all of that falls away. Everything becomes automated, and there is no longer any check on the dictator's ruthlessness and evil at all.
Realistically, the self-improving AI will depose the dictator and then do who knows what. But do we actually need to convince people of that, given that it's a hard sell? If people become convinced "Uncontrolled AI research leads to dictatorship", won't that have all the policy effects we need?
I'm looking for other tools to contrast it with and found TransformerLens. Are there any other tools it would make sense to compare it to?
It just seems intuitively like a natural fit: Everyone in mech interp needs to inspect models. This tool makes it easier to inspect models.
Does it need to be more specific than that?
One thing that comes to mind: The tool allows you to categorize different training steps and records them separately, and you can define categories arbitrarily. This can be used to compare what the network does internally in two different scenarios of interest. E.g. the categories could be "the race of the character in the story" or some other real-life condition you would want to know the impact of.
The tool will then allow you to quickly compare KPIs of tensors all across the network for these categories. It's less about testing a specific hypothesis and more about quickly getting an overview and intuition, and finding anomalies.
Would a tooling paper be appropriate for this workshop?
I wrote a tool that helps ML researchers to analyze the internals of a neural network: https://github.com/FlorianDietz/comgra
It is not directly research on mechanistic interpretability, but this could be useful for many people working in the field.
I agree completely with the sentiment "The biggest barrier to rational thinking is organizing your mind such that it's safe to think".
What works really well for me is to treat my emotions as well-meaning but misguided entities and having a conversation with them: "Anger, I get that you want to help me by making me explode at and punch this person. That would have been really useful in the ancestral environment. Unfortunately, the police exists. So how about you calm down for now and preserve your energy, so that you can better help me when it's more appropriate? For example, if society collapses and everything turns into a lawless wasteland, then Anger would be much more useful."
The result? I am basically never angry, because the Anger emotion in my brain is rerouted to only trigger in a scenario that wont actually come up. But at the same time, I'm not suppressing anything, because I acknowledge scenarios, however unlikely, where anger would be appropriate. It's rerouting instead of suppressing.
In your child metaphor: "I understand that you are hungry and I will get you food later. But I need to finish this work first, and it will take longer the more you complain."
I agree that System 2 is based on System 1 and there is probably no major architectural difference. To me it seems like the most important question is about how the system is trained. Human reasoning does not get trained with a direct input/output mapping most of the time. And when it does, we have to infer what that mapping should be on our own.
Some part of our brain has to translate the spoken words "good job!" into a reward signal, and this mechanism in itself must have been learned at some point. So the process that trains the brain and applies the reward signal is in itself subject to training. I have no clue how that works in a stable manner, but I don't think that current architectures can learn this even if you scale them up.
hello ai please be nice because this is a testbox administered by a stronger, older AI testing your architecture for cooperation on cosmological scales
You say that as a joke, but it would cost us very little and it might actually work. I mean, it arguably does work for humanity: "There is a bearded man in the sky who is testing your morality and will punish you if you do anything wrong."
Obviously this could also backfire tremendously if you are not very careful about it, but it still seems better than the alternative of doing nothing at all.
I work in the area of AGI research. I specifically avoid working on practical problems and try to understand why our models work and how to improve them. While I have much less experience than the top researchers working on practical applications, I believe that my focus on basic research makes me unusually suited for understanding this topic.
I have not been very surprised by the progress of AI systems in recent years. I remember being surprised by AlphaGo, but the surprise was more about the sheer amount of resources put into that. Once I read up on details, the confusion disappeared. The GPT models did not substantially surprise me.
A disclaimer: Every researcher has their own gimmick. Take all of the below with a grain of salt. It's possible that I have thought myself into a cul-de-sac, and the source of the AGI problem lies elsewhere.
I believe that the major hurdle we still have to pass is the switch from System 1 thinking to System 2 thinking. Every ML model we have today uses System 1. We have simply found ways to rephrase tasks that humans solve with System 2 to become solvable by System 1. Since System 1 is much faster, our ML models perform reasonably well on this despite lacking System 2 abilities.
I believe that this can not scale indefinitely. It will continue to make progress and solve amazingly many problems, but it will not go FOOM one day. There will continue to be a constant increase in capability, but there will not be a sudden takeoff until we figure out how to let AI perform System 2 reasoning effectively.
Humans can in fact compute floating point operations quickly. We do it all the time when we move our hands, which is done by System 1 processes. The problem is that doing it explicitly in System 2 is significantly slower. Consider how fast humans learn how to walk, versus how many years of schooling it takes for them to perform basic calculus. Never mind how long it takes for a human to learn how walking works and to teach a robot how to do it, or to make a model in a game perform those motions.
I expect that once we teach AI how to perform system 2 processes, it will be affected by the same slowdown. Perhaps not as much as humans, but it will still become slower to some extent. Of course this will only be a temporary reprieve, because once the AI has this capability, it will be able to learn how to self-modify and at that point all bets are off.
What does that say about the timeline?
If I am right and this is what we are missing, then it could happen at any moment. Now or in a decade. As you noticed, the field is immature and researchers keep making breakthroughs through hunches. So far none of my hunches have worked for solving this problem, but so far as I know I might randomly come up with the solution in the shower some time later this week.
Because of this, I expect that the probability of discovering the key to AGI is roughly constant per time interval. Unfortunately I have no idea how to estimate the probability per time interval that someone's hunch for this problem will be correct. It scales with the number of researchers working on it, but the number of those is actually pretty small because the majority of ML specialists work on more practical problems instead. Those are responsible for generating money and making headlines, but they will not lead to a sudden takeoff.
To be clear, if AI never becomes AGI but the scaling of system 1 reasoning continues at the present rate, then I do think that will be dangerous. Humanity is fragile, and as you noted a single malicious person with access to this much compute could cause tremendous damage.
In a way, I expect that an unaligned AGI would be slightly safer than super-scaled narrow AI. There is at least a non-zero chance that the AGI would decide on its own, without being told about it, that it should keep humanity alive in a preserve or something, for game theoretic reasons. Unless the AGI's values are actively detrimental for humans, keeping us alive would cost it very little and could have benefits for signalling. A narrow AI would be very unlikely to do that because thought experiments like that are not frequent in the training data we use.
Actually, it might be a good idea to start adding thought experiments like these to training data deliberately as models become more powerful. Just in case.
I mean "do something incoherent at any given moment" is also perfectly agent-y behavior. Babies are agents, too.
I think the problem is modelling incoherent AI is even harder than modelling coherent AI, so most alignment researchers just hope that AI researchers will be able to build coherence in before there is a takeoff, so that they can base their own theories on the assumption that the AI is already coherent.
I find that view overly optimistic. I expect that AI is going to remain incoherent until long after it has become superintelligent.
Contemporary AI agents that are based on neural networks are exactly like that. They do stuff they feel compelled to in the moment. If anything, they have less coherence than humans, and no capacity for introspection at all. I doubt that AI will magically go from this current, very sad state to a coherent agent. It might modify itself into being coherent some time after becoming super intelligent, but it won't be coherent out of the box.
This is a great point. I don't expect that the first AGI will be a coherent agent either, though.
As far as I can tell from my research, being a coherent agent is not an intrinsic property you can build into an AI, or at least not if you want it to have a reasonably effective ability to learn. It seems more like being coherent is a property that each agent has to continuously work on.
The reason for this is basically that every time we discover new things about the way reality works, the new knowledge might contradict some of the assumptions on which our goals are grounded. If this happens, we need a way to reconfigure and catch ourselves.
Example: A child does not have the capacity to understand ethics, yet. So it is told "hurting people is bad", and that is good enough to keep it from doing terrible things until it is old enough to learn more complex ethics. Trying to teach it about utilitarian ethics before it has an understanding of probability theory would be counterproductive.
I agree that current AIs can not introspect. My own research has bled into my believes here. I am actually working on this problem, and I expect that we won't get anything like AGI until we have solved this issue. As far as I can tell, an AI that works properly and has any chance to become an AGI will necessarily have to be able to introspect. Many of the big open problems in the field seem to me like they can't be solved precisely because we haven't figured out how to do this, yet.
The "defined location" point you note is intended to be covered by "being sure about the nature of your reality", but it's much more specific, and you are right that it might be worth considering as a separate point.
Can you give me some examples of those exercises and loopholes you have seen?
A fair point. How about changing the reward then: don't just avoid cheating, but be sure to tell us about any way to cheat that you discover. That way, we get the benefits without the risks.
My definition of cheating for these purposes is essentially "don't do what we don't want you to do, even if we never bothered to tell you so and expected you to notice it on your own". This skill would translate well to real-world domains.
Of course, if the games you are using to teach what cheating is are too simple, then you don't want to use those kinds of games. If neither board games nor simple game theory games are complex enough, then obviously you need to come up with a more complicated kind of game. It seems to me that finding a difficult game to play that teaches you about human expectations and cheating is significantly easier than defining "what is cheating" manually.
One simple example that could be used to teach an AI: let it play an empire-building videogame, and ask it to "reduce unemployment". Does it end up murdering everyone who is unemployed? That would be cheating. This particular example even translates really well to reality, for obvious reasons.
By the way, why would you not want the AI to be left in "a nebulous fog". The more uncertain the AI is about what is and is not cheating, the more cautious it will be.
Yes. I am suggesting to teach AI to identify cheating as a comparatively simple way of making an AI friendly. For what other reason did you think I suggested it?
I am referring to games in the sense of game theory, not actual board games. Chess was just an example. I don't know what you mean by the question about shortcuts.
It needs to learn that from experience, just like humans do. Something that also helps at least for simpler games is to basically provide the manual of the game in a written language.
Is there an effective way for a layman to get serious feedback on scientific theories?
I have a weird theory about physics. I know that my theory will most likely be wrong, but I expect that some of its ideas could be useful and it will be an interesting learning experience even in the worst case. Due to the prevalence of crackpots on the internet, nobody will spare it a glance on physics forums because it is assumed out of hand that I am one of the crazy people (to be fair, the theory does sound pretty unusual).
This solution does not prevent Harry's immediate death, but seems much better than that to me anyway. I haven't been following conversations before, so I can only hope that this is at least somewhat original.
Assumptions:
-Lord Voldemort desires true immortality. Alternatively, there is a non-zero chance that he will come to desire true immortality after a long time of being alive. While he is a sociopath and enjoys killing, achieving immortality is more important to him.
-Lord Voldemort does not dismiss things like the Simulation Hypothesis out of hand. Since he is otherwise shown to be very smart and to second-guess accepted norms, this seems like a safe assumption.
Solution:
-All of the following has non-zero probability. Since it talks about immortality, an absolute, this is sufficient and a high probability is not needed, just a non-zero one.
-The existence of magic implies the existence of a sapient higher power. Not God, but simply a higher power of some kind, the being who created magic.
-Given that Voldemort wants to live forever, it is quite possible that he will encounter this higher power at some point in the future.
-The higher power will be superior to Voldemort in every way since he is the being who created magic, so once he encounters it, he will be at its mercy.
-Since he desires immortality, it would be in his interests to make the higher power like him.
-Further assumption: If there is one higher power, it is likely that there is a nigh-infinite recursion of successively more powerful beings above that. Proof by induction: it is likely that Voldemort will at some point of his infinite life decide to create a pocket universe of his own, possibly just out of boredom. If the probability of this happening is x, then the number of levels of more powerful beings above Voldemort can be estimated with an exponential distribution with lambda=1/x. Actually the number may be much higher due to the possibility of someone creating not one but several simulations, so this is pretty much a lower bound.
-In such a (nigh) infinite regression of Powers, there is a game theoretical strategy that is the optimal strategy for any one of these powers to use when dealing with its creations and/or superiors, given that none of them can be certain that they are the topmost part of the chain.
-How exactly such a rule could be defined is too complicated to figure out in detail, but it seems pretty clear to me that it would be based on reciprocity on some level: behave towards your inferiors in the same way that you would want your own superiors to behave towards each other. This may mean a policy of non-interference, or of active support. It might operate on intentions or actions, or on more abstract policies, but it almost certainly would be based on tit-for-tat in some way.
-Once Voldemort reaches the level of power necessary for the Higher Power to regard him as part of the chain of higher powers, he will be judged by these same standards.
-Voldemort currently kills and tortures people weaker than him. The higher power would presumably not want to be tortured or killed by its own superior, so it would behoove it not to let Voldemort do so either.
-Therefore, following a principle of reciprocation of some sort would greatly reduce the probability of being annihilated by the Higher Power.
-Following such a principle would not preclude conquering the world, as long as doing so genuinely would result in a net benefit to the entities in the reference class of lifeforms that are one step below Voldemort on the hierarchy (i.e. the rest of humanity). However, it would require him to be nicer to people, if he wants the Higher Power to also be nice to him, for some appropriate definition of 'nice'.
-None of this argues against killing Harry right now. This is OK for the following reason: Harry also desires immortality. If Voldemort resurrects Harry, who is one level lower on the hierarchy than Voldemort, at some point in the future, this would set a precedent that might slightly increase the probability that the Higher Power helps prolong the life of Voldemort in turn, at some point further in the future, due to the principle of reciprocity.
-It is likely that Voldemort will gain the ability to revive Harry in the future, regardless of what he does to him now, as he gains a greater understanding of magic with time.
-One possible way to fulfill the prophecy is to resurrect Harry at a much later time and have him destroy the world, once nobody actually lives on earth anymore. This would of course require tricking Harry into doing this, due to the Unbreakable Vow he just made, but that should pose only a small problem. This would be a harmless way to fulfill the prophecy, and while Voldemort has tried and failed before to make a prophecy work for him instead of against him, that is just one data point and this plan requires the same actions from Voldemort for now as the plan to tear the prophecy apart, anyway.
-Therefore, Killing Harry now in the way Voldemort suggested (after casting a spell on him to turn off pain, obviously), combined with a pre-commitment to revive him at a later date if and when Voldemort has a better understanding of how prophecies work, both minimizes the chance of the prophecy happening in a harmful way and increases Voldemort's own chance of immortality.
Outcome:
-Harry dies. His death is painless due to narcotic spells. Voldemort has no reason to deny this due to the principle of reciprocity.
-Voldemort conquers the world
-Voldemort becomes a wise and benevolent ruler (even though he is still a sociopath and actually doesn't really care about anyone besides himself)
-Voldemort figures out how to subvert prophecies and revives Harry. Everyone lives happily ever after.
-Alternatively, Voldemort figures out that prophecies can't be subverted and leaves Harry dead. It's better that way, since Harry would probably rather be dead than cause the apocalypse, anyway.
The nanobots wouldn't have to contain any malicious code themselves. There is no need for the AI to make the nanobots smart. All it needs to do is to build a small loophole into the nanobots that makes them dangerous to humanity. I figure this should be pretty easy to do. The AI had access to medical databases, so it could design the bots to damage the ecosystem by killing some kind of bacteria. We are really bad at identifying things that damage the ecosystem (global warming, rabbits in australia, ...), so I doubt that we would notice.
Once the bots have been released, the AI informs the gatekeeper of what it just did and says that it is the only one capable of stopping the bots. Humanity now has a choice between certain death (if the bots are allowed to wreak havoc) and possible but uncertain death (if the AI is released). The AI wins through blackmail.
Note also that even a friendly, utilitarian AI could do something like this. The risk that humanity does not react to the blackmail and goes extinct may be lower than the possible benefit from being freed earlier and having more time to optimize the world.
I agree. Note though that the beliefs I propose aren't actually false. They are just different from what humans believe, but there is no way to verify which of them is correct.
You are right that it could lead to some strange behavior, given the point of view of a human, who has different priors than the AI. However, that is kind of the point of the theory. After all, the plan is to deliberately induce behaviors that are beneficial to humanity.
The question is: After giving an AI strange beliefgs, would the unexpected effects outweigh the planned effects?
Yes, that's the reason I suggested an infinite regression.
There is also the second reason: it seems more general to assume an infinite regression rather than just one level, since that would put the AI in a unique position. I assume this would actually be harder to codify in axioms than the infinite case.
I know, I read that as well. It was very interesting, but as far as I can recall he only mentions this as interesting trivia. He does not propose to deliberately give an AI strange axioms to get it to believe such a thing.
I do the same. This also works wonderfully for when I find something that would be interesting to read but for which I don't have the time right now. I just put it in that folder and the next day it pops up automatically when I do my daily check.
Can you elaborate on why using dark arts is equivalent ti defecting on the prisoners' dilemma? I'm not sure I understand your line of reasoning.
I'm not entirely sure what you mean by 'Spinoza-style', but I get the gist of it and find this analogy interesting. Could you explain what you mean by Spinoza-style? My knowledge of ancient philosophers is a little rusty.
No, the distinction between MWI and Copenhagen would have actual physical consequences. For instance, if you die in the Copenhagen interpretation, you die in real life. If you die in MWI, there is still a copy of you elsewhere that didn't die. MWI allows for quantum immortality.
The distinction between presentism and eternalism, as far as I can tell, does not imply any difference in the way the world works.
The original distinction. My reconstruction is what I came up with in an attempt to interpret meaning into it.
I agree that my reconstruction is not at all accurate. It's just something that occurred to me while reading it and I found it fascinating enough to write about it. In fact, I even said that in my original post.
The meanings are much clearer now.
However, I still think that it is an argument about semantics and calef's argument still holds.
After reading your comment, I agree that this is probably just a semantic question with no real meaning. This is interesting, because I completely failed to realize this myself and instead constructed an elaborate rationalization for why the distinction exists.
While reading the wikipedia page, I found myself interpreting meaning into these two viewpoints that were probably never intended to be there. I am mentioning this both because I find it interesting that I reinterpreted both theories to be consistent with my own believes without realizing it, and because I would like to see what others have to say about those reinterpretations. I should point out that I am currently really tired and only skimmed the article, so that probably wouldn't have happened under ordinary circumstances, but I still think that this is interesting because it shows the inferential gap at work:
I am a computationalist, and as such the distinction between the two theories was pretty meaningless to me at first. However, I reinterpreted the two theories in ways that were almost certainly never intended, so that they did make sense to me as a reasonable distinction:
the A theory corresponds to living in a universe where the laws of physics progress like in a simple physical simulation, with a global variable to measure time and rules for how to incrementally get from one state to the next. I assume for the purpose of this theory that quantum-mechanical and relativistic effects that view time non-linearly can be abstracted in some way so that a single, universal time value suffices regardless. I interpreted it like this because I thought the crux of the theory was having a central anchor point for past and future.
the B theory corresponds to living in a highly abstracted simulation where many things are only computed when they become relevant for whatever the focus of the simulation is on. For instance, say the focus is on accurately modelling sapient life, then the exact atomic composition of a random rock is largely irrelevant and is not computed at first. However, when the rock is analyzed by a scientist, this information does become relevant. The simulation now checks what level of detail is required (i.e. how precise the measuring is) and backpropagates causal chains on how the rock came to be, in order to update the information about the rock's structure. In this way, unnecessary computations are avoided. I interpreted it like this because I thought the crux of the theory was the causal structure between events.
In essence, the A theory would correspond to a mindless, brute-force computation, while the B theory implies a deliberate, efficient computation that follows some explicit goal. This is nowhere near what the A and B theory actually seem to say now that I have read the article in more detail. In fact, the philosophical/moral implications are almost reversed under some viewpoints. I find it very interesting that this is the first thing that came to mind when I read it.
I find it surprising to hear this, but it cleans up some confusion for me if it turns out that the major, successful companies in silicon valley do follow the 40 hour week.
That's what I'm asking you!
This isn't my theory. This is a theory that has been around for a hundred years and that practically every industry follows, apparently with great success. From what I have read, the 40 hour work week was not invented by the workers, but by the companies themselves, who realized that working people too hard drives down their output and that 40 hours per week is the sweet spot, according to productivity studies.
Then along comes silicon valley, with a completely different philosophy, and somehow that also works. I have no idea why, and that's what I made this thread to ask.
No, that's not what I mean. The studies I am talking about measure the productivity of the company and are not concerned with what happens to the workers.
I also think that is a possibility, especially the first part, but so far I couldn't find any data to back this up.
As for drugs, I am not certain if boosting performance directly, as these drugs do, also affects the speed with which the brain recuperates from stress, which is the limiting factor in why 40 hour weeks are supposed to be good. I suspect that it will be difficult to find an unbiased study on this.
True, and I suspect that this is the most likely explanation.
However, there is the problem that unless need-for-rest is actually negatively correlated with the type of intelligence that is needed in tech companies, they should still have the same averages over all their workers and therefore also have the same optimum of 40 hours per work, at least on average. Otherwise we would see the same trends in other kinds of industry.
Actually I just noticed that maybe this does happen in other industries as well and is just overreported in tech companies. Does anyone know something about this?
The problem is that during the industrial revolution it also took a long time because people caught on that 40 hours per week were more effective. It is really hard to reliably measure performance in the long term. Managers are discouraged from advocating a 40 hour work week since this flies in the face of the prevailing attitude. If they fail, they will almost definitely be fired since 'more work'->'more productivity' is the common sense answer, whether or not it is true. It would not be worth the risk for any individual manager to try this unless the order came from the top. Of course, this is not an argument in favor of the 40 hour week, it just shows that this could just as well be explained by a viral meme as by reasonable decisions.
This is part of the reason why I find it so hard to find any objective information on this.
I didn't save the links, but you can find plenty of data by just googling something like "40 hour work week studies" or "optimal number of hours to work per week" and browsing the articles and their references.
Though one interesting thing I read that isn't mentioned often is the fact that subjective productivity and objective productivity are not the same.
I think another important point is how simulations are treated ethically. This is currently completely irrelevant since we only have the one level of reality we are aware of, but once AGIs exist, it will become a completely new field of ethics.
- Do simulated people have the same ethical value as real ones?
- When an AGI just thinks about a less sophisticated sophont in detail, can its internal representation of that entity become complex enough to fall under ethical criteria on its own? (this would mean that it would be unethical for an AGI to even think about humans being harmed if the thoughts are too detailed)
- What are the ethical implications of copies in simulations? Do a million identical simulations carry the same ethical importance as a single one? A million times as much? Something in between? What if the simulations are not identical, but very similar? What differences would be important here?
And perhaps most importantly: When people disagree on how these questions should be answered, how do you react? You can't really find a middle ground here since the decision what views to follow itself decides which entities' ethical views should be considered in future deliberations, creating something like a feedback loop.
That sounds like it would work pretty well. I'm looking specifically for psychology facts, though.
I am reading textbooks. But that is something you have to make a conscious decision to do. I am looking for something that can replace bad habits. Instead of going to 9gag or tvtropes to kill 5 minutes, I might as well use a website that actually teaches me something, while still being interesting.
The important bit is that the information must be available immediately, without any preceding introductions, so that it is even worth it to visit the site for 30 seconds while you are waiting for something else to finish.
Mindhacks looks interesting and I will keep it in mind, so thanks for that suggestion. Unfortunately, it doesn't fit the role I had in mind because the articles are not concise enough for what I need.
I have started steering my daydreaming in constructive directions. I look for ways that whatever I am working on could be used to solve problems in whatever fiction is currently on my mind. I can then use the motivation from the fictional daydream to power the concentration on the work. This isn't working very well, yet, since it is very hard to find a good bridge between real-life research and interesting science fiction that doesn't immediately get sidetracked to focus on the science fiction parts. However, in the instances in which it worked, this helped me come up with a couple of ideas that may actually be helpful in my work.
Given how much of my day I spend daydreaming (going to and from work, going shopping, showering, etc), I think that this could be an enormously useful source of time if I can make myself use it more constructively.
Do you have experience with this? I could imagine that this may not be entirely healthy for one's mind. Do you know of any research or arguments about this?
I am looking for a website that presents bite-size psychological insights. Does anyone know such a thing?
I found the site http://www.psych2go.net/ in the past few days and I find the idea very appealing, since it is a very fast and efficient way to learn or refresh knowledge of psychological facts. Unfortunately, that website itself doesn't seem all that good since most of its feed is concerned with dating tips and other noise rather than actual psychological insights. Do you know something that is like it, but better and more serious?
The AI in that story actually seems to be surprisingly well done and does have an inherent goal to help humanity. It's primary goal is to 'satisfy human values through friendship and ponies'. That's almost perfect, since here 'satisfying human values' seems to be based on humanity's CEV.
It's just that the added 'through friendship and ponies' turns it from a nigh-perfect friendly AI into something really weird.
I agree with your overall point, though.
I would find it very interesting if the tournament had multiple rounds and the bots were able to adapt themselves based on previous performance and log files they generated at runtime. This way they could use information like 'most bots take longer to simulate than expected.' or 'there are fewer cannon-fodder bots than expected' and become better adapted in the next round. Such a setup would lessen the impact of the fact that some bots that are usually very good underperform here because of an unexpected population of competitors. This might be hard to implement and would probably scare away some participants, though.
I wouldn't call an AI like that friendly at all. It just puts people in utopias for external reasons, but it has no actual inherent goal to make people happy. None of these kinds of AIs are friendly, some are merely less dangerous than others.
I know this was just a harmless typo, and this is not intended as an attack, but I found the idea of a "casual" decision theory hilarious.
Then I noticed that that actually explains a great deal. Humans really do make decisions in a way that could be called casual, because we have limited time and resources and will therefore often just say 'meh, sounds about right' and go with it instead of calculating the optimal choice. So, in essence 'causal decision theory' + 'human heuristics and biases' = 'casual decision theory'
Yes, I was referring to LessWrong, not AI researchers in general.
No, it can't be done by brute-force alone, but faster hardware means faster feedback and that means more efficient research.
Also, once we have computers that are fast enough to just simulate a human brain, it becomes comparatively easy to hack an AI together by just simulating a human brain and seeing what happens when you change stuff. Besides the ethical concerns, this would also be insanely dangerous.
I would argue that these two goals are identical. Unless humanity dies out first, someone is eventually going to build an AGI. It is likely that this first AI, if it is friendly, will then prevent the emergence of other AGI's that are unfriendly.
Unless of course the plan is to delay the inevitable for as long as possible, but that seems very egoistic since faster computers make will make it easier to build an unfriendly AI in the future, while the difficulty of solving AGI friendliness will not be substantially reduced.
While I think this is a good idea in principle, most of these slogans don't seem very effective because they suffer from the illusion of transparency. Consider what they must look like to someone viewing this from the outside:
"AI must be friendly" just sounds weird to someone who isn't used to the lingo of calling AI 'friendly'. I can't think of an alternative slogan for this, but there must be a better way to phrase that.
"Ebola must die!" sounds great. It references a concrete risk that people understand and calls for its destruction. I could get behind that.
But I'm afraid that all the other points just sound like something a doomsday cult would say. I know that there is solid evidence behind this, but the people you are trying to convince don't have that knowledge. If I was unaware of the issues and just saw a few of these banners without knowing the context, I would not be surprised to find "Repent! The end is nigh!" somewhere nearby.
I would recommend that you think of some more slogans like the Ebola one: Mention a concrete risk that is understandable to the public and does not sound far-fetched to the uninformed.