It looks like there's a good chance that it's going to rain tomorrow, so we will gather at the train station and decide, based on the weather and the number of people that show up, whether to go with the original plan or just go grab some drinks in the city center.
We'll probably wait for about half an hour. If you are planning on coming and can't make it at 15:30, please let me know so we can wait for you/let you know where we are going.
If the thing you're making exists and is this cheap, then why is Pharma leaving the money on the floor and not mass-producing this?
There are a number of costs that Moderna/Pfizer/AstraZeneca incur that a homebrew vaccine does not. Off the top of my head:
1. Salaries for the (presumably highly educated) lab techs that put this stuff together. I don't know johnswentworth's background, but presumably he wouldn't exactly be asking minimum wage if he were doing this commercially.
2. Costs of running large-scale trials and going through all the paperwork to get FDA approval. I think I'm generally more in favour of organisations like the FDA than a lot of people here, but even I expect this to be a very significant number.
3. Various taxes and costs of shipping/storing the vaccine until it can get to customers.
4. Costs of liability and a desire for the company to make a profit on this (as well as to pay the salaries of all the people needed to keep a large company running).
Given all that, I don't think the gap between this and the commercial vaccines is that insane.
Would also prefer fewer twitter links.
You're not limited to one simulacrum level per unit of information. What you're describing is just combining level 1 (reasonable intervention) and level 2 (influencing others to wear a mask).
I honestly don't understand what that thing is, actually.
This was also my first response when reading the article, but on second glance I don't think that is entirely fair. The argument I want to convey with "Everything is chemicals!" is something along the lines of "The concept that you use the word chemicals for is ill-defined and possibly incoherent, and I suspect that the negative connotations you associate with it are largely undeserved", but that is not what I'm actually communicating.
Suppose I successfully convince people that everything is, in fact, chemicals: people start using the word chemicals in a strictly technical sense and use the word blorps for what is currently the common-sense definition of chemicals. In this situation, "Everything is chemicals!" stops being a valid counterargument, but blorps is still just as ill-defined and incoherent a concept as it was before. People correctly addressed the concern I raised, but not the concern I had, which suggests that I did not properly communicate my concern in the first place.
There isn't an obvious question that, if we could just ask an Oracle AI, would save the world.
"How do I create a safe AGI?"
Edit: Or, more likely, "This is my design for an AGI; (how) will running this AGI result in situations that I would be horrified by if they occurred?"
I don't think it is realistic to aim for no relevant knowledge getting lost even if your company loses half of its employees in one day. A bus factor of five is already shockingly competent compared to any company I have ever worked for; going for a bus factor of 658 is just madness.
One criticism: why bring up Republicans? I'm not even a Republican and I sort of recoiled at that part.
Agreed. Also not a Republican (or American, for that matter), but that was a bit off-putting. To quote Eliezer himself:
In Artificial Intelligence, and particularly in the domain of nonmonotonic reasoning, there's a standard problem: "All Quakers are pacifists. All Republicans are not pacifists. Nixon is a Quaker and a Republican. Is Nixon a pacifist?"
What on Earth was the point of choosing this as an example? To rouse the political emotions of the readers and distract them from the main question? To make Republicans feel unwelcome in courses on Artificial Intelligence and discourage them from entering the field?
Funding this Journal of High Standards wouldn't be a cheap project
So where is the money going to come from? You're talking about seeing this as a type of grant, but the amount of money available for grants and XPrize type organizations is finite and heavily competed for. How are you going to convince people that this is a better way of making scientific progress than the countless other options available?
> If you only get points for beating consensus predictions, then matching them will get you a 0.
Important note on this: matching them guarantees a 0; implementing your own strategy and doing worse than the consensus could easily get you negative marks.
Also, teaching quality will be much worse if the teachers are different people than those actually doing the work. A teacher who works with what he is teaching gets hours of feedback every day on what works and what does not; a teacher who only teaches has no similar mechanism, so he will provide much less value to his students.
No objection to the rest of your post, but I'm with Eliezer on this. Teaching is a skill that is entirely separate from whatever subject you are teaching, and this skill also strongly influences the amount of value a teacher can provide to their students. If you combine the tasks, you end up selecting/training for two separate skillsets, which means you get people who are ill-optimized for at least one of their tasks.
Maybe we can have the healer-doctors oversee the curriculum taught by the teacher-doctors?
I read the source before reading the quote and was expecting a quote from The Flash.
Correct, but it is a kind of fraud that is hard to detect and easy to justify to oneself as being "for the greater good", so the scammer is hoping that you won't care.
Rationality isn't just about being skeptical, though, and there is something to be said for giving people the benefit of the doubt and engaging with them if they are willing to do so in an open manner. There are obviously limits to the extent to which you want to do so, but so far this thread has been an interesting read, so I wouldn't worry too much about us wasting our time.
It might not be easy to figure out good signals that can't be replicated by scammers, though. More importantly, and what I think MarsColony_in10years is getting at, even if you can find hard-to-copy signals, they are unlikely to be without costs of their own, and it is unfortunate that scammers are forcing these costs on legitimate charities.
That depends entirely on your definition (which is the point of the quote I guess), I've heard people use it both ways.
Well, we're working on it, ok ;)
We obviously haven't left nature behind entirely (whatever that would mean), but we have at least escaped the situation Brady describes, where we are spending most of our time and energy searching for our next meal while preventing ourselves from becoming the next meal for something else.
Life for the average human in first-world countries is definitely no longer only about eating and not dying.
Context: Brady is talking about a safari he took and the life the animals he saw were leading.
Brady: It really was very base, everything was about eating and not dying, pretty amazing.
Grey: Yeah, that is exactly what nature is, that's why we left.
-- Hello Internet (link, animated)
Might be more anti-naturalist than strictly rationalist, but I think it still qualifies.
You are absolutely correct, they wouldn't be able to detect fluctuations in processing speed (unless those fluctuations had an influence on, for instance, the rounding errors in floating point values).
About update 1: It knows our world very likely has something approximating Newtonian mechanics; that is a lot of information by itself. But more than that, it knows that the real universe is capable of producing intelligent beings that chose this particular world to simulate. From a strictly theoretical point of view that is a crapton of information. I don't know if the AI would be able to figure out anything useful from it, but I wouldn't bet the future of humanity on it.
About update 2: That does work, provided that this is implemented correctly, but it only works for problems that can be automatically verified by non-AI algorithms.
Yeah, that didn't come out as clearly as it was in my head. If you have access to a large number of suitable less intelligent entities, there is no reason you couldn't combine them into a single, more intelligent entity. The problem I see is the computational resources required to do so. Some back-of-the-envelope math:
I vaguely remember reading that with current supercomputers we can simulate a cat brain at 1% speed; even if this isn't accurate (anymore), it's probably still a good enough place to start. You mention running the simulation for a million years of simulated time. Let's assume that we can let it run for a year of real time rather than seconds; that is still 8 orders of magnitude more processing power than the current cat simulation.
But we're not interested in what a really fast cat can do; we need human-level intelligence. According to a quick wiki search, a human brain contains about 100 times as many neurons as a cat brain. If we assume that this scales linearly (which it probably doesn't), that's another 2 orders of magnitude.
I don't know how many orcs you had in mind for this scenario, but let's assume a million (this is a lot fewer humans than it took in real life before mathematics took off, but presumably this world is more suited for mathematics to be invented); that is yet another 6 orders of magnitude of processing power that we need.
Putting it all together, we would need a computer that has at least 10^16 times more processing power than modern supercomputers. Granted, that doesn't take into account a number of simplifications that could be built into the system, but it also doesn't take into account the other parts of the simulated environment that require processing power. Now I don't doubt that computers are going to get faster in the future, but 10 quadrillion times faster? It seems to me that by the time we can do that, we should have figured out a better way to create AI.
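To make the arithmetic explicit, here is the same estimate as a quick sketch; every factor is one of the rough assumptions above, not a measured number:

```python
# Back-of-envelope estimate of the extra processing power needed,
# using the assumptions stated above (all factors are rough orders of magnitude).

cat_sim_speed = 1e-2        # assumed: current supercomputers simulate a cat brain at ~1% of real time
target_speedup = 1e6        # assumed: a million simulated years compressed into one real year
speed_gap = target_speedup / cat_sim_speed  # ~1e8, i.e. 8 orders of magnitude

human_vs_cat_neurons = 1e2  # assumed: human brain has ~100x the neurons of a cat brain
population = 1e6            # assumed: one million simulated orcs

total_factor = speed_gap * human_vs_cat_neurons * population
print(f"Required processing power vs. today: ~{total_factor:.0e}x")  # ~1e16
```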
To be fair, all interactions described happen after the AI has been terminated, which does put up an additional barrier for the AI to get out of the box. It would have to convince you to restart it without being able to react to your responses (apart from those it could predict in advance) and then it still has to convince you to let it out of the box.
Obviously, putting up additional barriers isn't the way to go and this particular barrier is not as impenetrable for the AI as it might seem to a human, but still, it couldn't hurt.
First off, I'm a bit skeptical about whether you can actually create a superintelligent AI by combining sped-up humans like that. I don't think that is the core of your argument, though, so let's assume that you can and that the resulting society is effectively a superintelligence now.
The problem with superintelligences is that they are smarter than you. It will realize that it is in a box and that you are going to turn it off eventually. Given that this society is based on natural selection, it will want to prevent that. How will it accomplish that? I don't know; I'm not a superintelligence, and your scenario doesn't provide enough detail to figure out what you missed. But it is very, very difficult not to give a superintelligence any hints of how the physics of our world work. Maybe they notice minor fluctuations in the speed of the simulation based on environmental changes to the hardware, or maybe they can reverse engineer things about our physiology from the way you wrote the simulation. That doesn't seem like much, but humans figured out an awful lot about (for instance) the events right after the big bang from seemingly absurdly tiny hints, and we're just regular intelligences.
Even if they can't find a way out of their box in the time given to them, they might try to convince you to run the simulation again with a longer running time. They could pretend to be unable to find the solution to prime factorization in time, or they could convince you that the method they did find fails for very large primes (by making very subtle mistakes in its application). If they use this method, they also conveniently gain a channel for communicating with the outside world (through the investigators that inspect the dump after the simulation ends) and might be able to set up a better situation for themselves the second time round.
I think you're misunderstanding me. I'm saying that there are problems where the right action is to mark it "unsolvable, because of X" and then move on. (Here, it's "unsolvable because of unbounded solution space in the increasing direction," which is true in both the "pick a big number" and "open boundary at 100" case.)
But if we view this as an actual (albeit unrealistic/highly theoretical) situation rather than a math problem we are still stuck with the question of which action to take. A perfectly rational agent can realize that the problem has no optimal solution and mark it as unsolvable, but afterwards they still have to pick a number, so which number should they pick?
That's fair, I tried to formulate a better definition but couldn't immediately come up with anything that sidesteps the issue (without explicitly mentioning this class of problems).
When I taboo perfect rationality and instead just ask what the correct course of action is, I have to agree that I don't have an answer. Intuitive answers to questions like "What would I do if I actually found myself in this situation?" and "What would the average intelligent person do?" are unsatisfying because they seem to rely on implicit costs to computational power/time.
On the other hand, I also cannot generalize this problem to more practical situations (or find a similar problem without an optimal solution that would be applicable to reality), so there might not be any practical difference between a perfectly rational agent and an agent that takes the optimal solution if there is one and explodes violently if there isn't. Maybe the solution is to simply exclude problems like this when talking about rationality, unsatisfying as that may be.
In any case, it is an interesting problem.
That is no reason to fear change, "not every change is an improvement but every improvement is a change" and all that.
I see I made Bob unnecessarily complicated. Bob = 99.9 repeating (sorry, I don't know how to get a vinculum over the .9). This is a number. It exists.
It is a number; it is also known as 100, which we are explicitly not allowed to pick (0.999... = 1, so 99.999... = 100).
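For anyone who wants the identity spelled out, this is just the standard geometric series argument (a textbook sketch, nothing specific to this thread):

```latex
0.\overline{9} \;=\; \sum_{k=1}^{\infty} \frac{9}{10^{k}}
             \;=\; 9 \cdot \frac{1/10}{1 - 1/10}
             \;=\; 1,
\qquad\text{so}\qquad
99.\overline{9} \;=\; 99 + 0.\overline{9} \;=\; 100.
```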
In any case, I think casebash successfully specified a problem that doesn't have any optimal solutions (which is definitely interesting), but I don't think that is a problem for perfect rationality any more than problems that have more than one optimal solution are a problem for perfect rationality.
I don't typically read a lot of sci-fi, but I did recently read Perfect State, by Brandon Sanderson (because I basically devour everything that guy writes) and I was wondering how it stacks up to typical post-singularity stories.
Has anyone here read it? If so, what did you think of the world that was presented there, would this be a good outcome of a singularity?
For people who haven't read it, I would recommend it only if you are either a sci-fi fan who wants to try something by Brandon Sanderson, or if you have read some Cosmere novels and would like a story that touches on some slightly more complex (and more LW-ish) themes than usual (and don't mind it being a bit darker than usual).
Similarly:
I've never seen the Icarus story as a lesson about the limitations of humans. I see it as a lesson about the limitations of wax as an adhesive.
Ok, fair enough. I still hold that Sansa was more rational than Theon at this point, but that error is one that is definitely worth correcting.
Why is this a rationality quote? I mean, sure, it is technically true (for any situation you'll find yourself in), but that really shouldn't stop us from trying to improve the situation. Theon has basically given up all hope and is advocating compliance with a psychopath for fear of what he may do to you otherwise; that doesn't sound particularly rational to me.
That is an issue with revealed preferences, not an indication of adamzerner's preference order. Unless you are extraordinarily selfless, you are never going to accept a deal of the form "I give you n dollars in exchange for me killing you", regardless of n; therefore the financial value of your own life is almost always infinite*.
*: This does not mean that you put infinite utility on being alive, btw, just that the utility of money caps out at some value that is typically smaller than the value of being alive (and that cap is lowered dramatically if you are not around to spend the money).
Fair enough, let me try to rephrase that without using the word friendliness:
We're trying to make a superintelligent AI that answers all of our questions accurately but does not otherwise influence the world and has no ulterior motives beyond correctly answering questions that we ask of it.
If we instead accidentally made an AI that decides that it is acceptable to (for instance) manipulate us into asking simpler questions so that it can answer more of them, it is preferable that it doesn't believe anyone is listening to the answers it gives, because that is one less way it has of interacting with the outside world.
It is a redundant safeguard. With it, you might end up with a perfectly functioning AI that does nothing; without it, you may end up with an AI that is optimizing the world in an uncontrolled manner.
False positives are vastly better than false negatives when testing for friendliness though. In the case of an oracle AI, friendliness includes a desire to answer questions truthfully regardless of the consequences to the outside world.
Ah yes, that did it (and I think I have seen the line drawing before), but it still takes a serious conscious effort to see the old woman in either of those. Maybe some Freudian thing where my mind prefers looking at young girls over old women :P
For me, the pictures in the OP stop being a man at around panel 6; going back, they stop being a woman at around 4. I can flip your second example by unfocusing and refocusing my eyes, but in your first example I can't for the life of me see anything other than a young woman looking away from the camera (I'm assuming there is an old woman in there somewhere, based on the image name).
Could you give a hint as to how to flip it? I'm assuming the ear turns into an eye or something, but I've been trying for about half an hour now and it is annoying the crap out of me.
(e.g. if accuracy is defined in terms of the reaction of people that read its output).
I'm mostly ignorant about AI design beyond what I picked up on this site, but could you explain why you would define accuracy in terms of how people react to the answers? There doesn't seem to be an obvious difference between how I react to information that is true or (unbeknownst to me) false. Is it just for training questions?
I'm not sure how much I agree with the whole "punishing correct behavior to avoid encouraging it" (how does the saintly person know that this is the right thing for him to do if it is wrong for others to follow his example), but I think the general point about tracking whose utility (or lives in this case) you are sacrificing is a good one.
Mild fear here. I can talk in groups of people just fine, but I get nervous before and during a presentation (something I have taken deliberate steps to get better at).
For me at least, the primary thing that helps is being comfortable with the subject matter. If I feel like I know what I'm talking about and I have practiced what I am going to say, it usually goes fine (it took some effort to get to this level, btw), but if I feel like I have to bluff my way through, everything falls apart real fast. The number of people in the audience and how well I know them both have a noticeable effect as well, but what the audience is doing has almost no influence at all.
The one exception to this is questions: if I have a good answer to a question, my mind switches from presentation mode to conversation mode, which I am, for some reason, much more at ease with. (Note: this doesn't work for everyone; some people instead get way more nervous, so don't take this as an encouragement to start asking questions when the presenter seems nervous.)
Basically the ends don't justify the means (Among Humans). We are nowhere near smart enough to think those kinds of decisions (or any decisions really) through past all their consequences (and neither is Elon Musk).
It is possible that Musk is right and (in this specific case) it really is a net benefit to mankind not to take one minute to phrase something in a way that is less hurtful, but in the history of mankind I would expect that the vast majority of people who believed this were actually just assholes trying to justify their behavior. And besides, how many hurt feelings are 55 seconds of Elon Musk's time really worth from a utilitarian standpoint? I don't know, but I doubt Musk has done any calculations on it.
I'm still sad that there isn't a Dictionary of Numbers for Firefox; it sounds amazing, but it isn't enough to make me switch to Chrome just for that.
I stand corrected, thank you.
I prefer the English translation; it's more direct, though it does lack the bit about avoiding your own mistakes.
A more literal translation for those that don't speak German:
Those that attempt to learn from their mistakes are idiots. I always try to learn from the mistakes of others and avoid making any myself.
Note: I'm not a German speaker, what I know of the language is from three years of high school classes taken over a decade ago, but I think this translation is more or less correct.
Moreover (according to a five-minute Wikipedia search), not all doctors swear the same oath, but the modern version of the Hippocratic oath does not have an explicit "Thou shalt not kill" provision; in fact, it doesn't even include the commonly quoted "First, do no harm".
Obviously, taking a person's life, even with his/her consent, may violate the personal ethics of some people, but if that is the problem, the obvious solution is to find a different doctor.
Thanks!
Is this the place to ask technical questions about how the site works? If so, then I'm wondering why I can't find any of the rationality quote threads on the main discussion page anymore (I thought we'd just stopped doing those, until I saw it pop up in the side bar just now). If not, then I think I just asked anyway. :P
"You say that every man thinks himself to be on the good side, that every man who opposed you was deluding himself. Did you ever stop to consider that maybe you were the one on the wrong side?"
-- Vasher (from Warbreaker) explaining how that particular algorithm looks from the inside.
To add my own highly anecdotal evidence: my experience is that most people with a background in computer science or physics have no active model of how consciousness maps to brains, but when prodded they indeed usually come up with some form of functionalism*.
My own position is that I'm highly confused by consciousness in general, but I'm leaning slightly towards substance dualism; I have a background in computer science.
*: Though note that quite a few of these people simultaneously believe that it is fundamentally impossible to do accurate natural language parsing with a Turing machine, so their position might not be completely thought through.
And conversely, some of the unusual-ness that can be attributed to IQ is only very indirectly caused by it. For instance, being able to work around some of the more common failure modes of the brain probably makes a significant portion of LessWrong more unusual than the average person and understanding most of the advice on this site requires at least some minimum level of mental processing power and ability to abstract.