Comments

Comment by Silver_Swift on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-27T09:29:14.938Z · LW · GW

The media very rarely lies

Comment by Silver_Swift on Helmond, Netherlands – ACX Meetups Everywhere 2022 · 2022-09-16T12:20:00.889Z · LW · GW

It looks like there's a good chance that it's going to rain tomorrow, so we will gather at the train station and decide based on the weather and the number of people that show up whether to go with the original plan or just go grab some drinks in the city center.

We'll probably wait for about half an hour. If you are planning on coming and can't make it at 15:30, please let me know so we can wait for you/let you know where we are going.

Comment by Silver_Swift on Making Vaccine · 2021-02-04T12:07:30.598Z · LW · GW

> If the thing you're making exists and is this cheap then why is Pharma leaving the money on the floor and not mass producing this?

There are a number of costs that Moderna/Pfizer/AstraZeneca incur that a homebrew vaccine does not. Off the top of my head:

1. Salaries for the (presumably highly educated) lab techs that put this stuff together. I don't know johnswentworth's background, but presumably he wouldn't exactly be asking minimum wage if he were doing this commercially.

2. Costs of running large-scale trials and going through all the paperwork to get FDA approval. I think I'm generally more in favour of organisations like the FDA than a lot of people here, but even I expect this to be a very significant number.

3. Various taxes and costs of shipping/storing the vaccine until it can get to customers.

4. Costs of liability and a desire for the company to make a profit on this (as well as to pay the salaries of all the people needed to keep a large company running).

Given all that I don't think the gap between this and the commercial vaccines is that insane. 

Comment by Silver_Swift on Covid 12/3: Land of Confusion · 2020-12-04T08:05:31.037Z · LW · GW

Would also prefer fewer twitter links.

Comment by Silver_Swift on Covid 8/27: The Fall of the CDC · 2020-08-27T21:03:01.586Z · LW · GW

You're not limited to one simulacrum level per unit of information. What you're describing is just combining level 1 (reasonable intervention) and level 2 (influencing others to wear a mask).

Comment by Silver_Swift on Expressive Vocabulary · 2018-06-05T14:07:45.884Z · LW · GW

> I honestly don't understand what that thing is, actually.

This was also my first response when reading the article, but on second glance I don't think that is entirely fair. The argument I want to convey with "Everything is chemicals!" is something along the lines of "The concept that you use the word chemicals for is ill-defined and possibly incoherent, and I suspect that the negative connotations you associate with it are largely undeserved", but that is not what I'm actually communicating.

Suppose I successfully convince people that everything is, in fact, chemicals: people start using the word chemicals in a strictly technical sense and use the word blorps for what is currently the common-sense definition of chemicals. In this situation "Everything is chemicals!" stops being a valid counterargument, but blorps is still just as ill-defined and incoherent a concept as it was before. People correctly addressed the concern I raised, but not the concern I had, which suggests that I did not properly communicate my concern in the first place.

Comment by Silver_Swift on Epiphenomenal Oracles Ignore Holes in the Box · 2018-02-01T12:57:36.943Z · LW · GW

> There isn't an obvious question that, if we could just ask an Oracle AI, the world would be saved.

"How do I create a safe AGI?"

Edit: Or, more likely, "this is my design for an AGI, (how) will running this AGI result in situations that I would be horrified by if they occur?"

Comment by Silver_Swift on Melting Gold, and Organizational Capacity · 2017-12-12T10:33:18.245Z · LW · GW

I don't think it is realistic to aim for no relevant knowledge getting lost even if your company loses half of its employees in one day. A bus factor of five is already shockingly competent when compared to any company I have ever worked for, going for a bus factor of 658 is just madness.

Comment by Silver_Swift on Against Modest Epistemology · 2017-11-16T12:48:59.326Z · LW · GW

> One criticism, why bring up Republicans, I'm not even a Republican and I sort of recoiled at that part.

Agreed. Also not a Republican (or American, for that matter), but that was a bit off-putting. To quote Eliezer himself:

> In Artificial Intelligence, and particularly in the domain of nonmonotonic reasoning, there's a standard problem: "All Quakers are pacifists. All Republicans are not pacifists. Nixon is a Quaker and a Republican. Is Nixon a pacifist?"
>
> What on Earth was the point of choosing this as an example? To rouse the political emotions of the readers and distract them from the main question? To make Republicans feel unwelcome in courses on Artificial Intelligence and discourage them from entering the field?

Comment by Silver_Swift on The Journal of High Standards · 2017-11-10T12:29:14.322Z · LW · GW

> Funding this Journal of High Standards wouldn't be a cheap project

So where is the money going to come from? You're talking about seeing this as a type of grant, but the amount of money available for grants and XPrize type organizations is finite and heavily competed for. How are you going to convince people that this is a better way of making scientific progress than the countless other options available?

Comment by Silver_Swift on Competitive Truth-Seeking · 2017-11-06T17:14:41.292Z · LW · GW

> If you only get points for beating consensus predictions, then matching them will get you a 0.

Important note on this: matching them guarantees a 0; implementing your own strategy and doing worse than the consensus could easily get you negative marks.
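
As a toy sketch of that kind of scoring (a hypothetical relative log-score setup, not something specified in the post): matching the consensus scores exactly zero, beating it scores positive, and doing worse scores negative.

```python
import math

def relative_log_score(p_yours: float, p_consensus: float, outcome: bool) -> float:
    """Your log score minus the consensus log score for a binary prediction."""
    def log_score(p: float) -> float:
        return math.log(p if outcome else 1 - p)
    return log_score(p_yours) - log_score(p_consensus)

# Event happened (outcome=True); consensus said 70%.
print(relative_log_score(0.7, 0.7, True))  # 0.0    -- matching the consensus
print(relative_log_score(0.9, 0.7, True))  # ~0.25  -- beat the consensus
print(relative_log_score(0.5, 0.7, True))  # ~-0.34 -- worse than the consensus
```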

Comment by Silver_Swift on Moloch's Toolbox (1/2) · 2017-11-06T15:55:30.771Z · LW · GW

> Also teaching quality will be much worse if teachers are different people than those actually doing the work, a teacher who works with what he is teaching gets hours of feedback everyday on what works and what does not, a teacher who only teaches has no similar mechanism, so he will provide much less value to his students.

No objection to the rest of your post, but I'm with Eliezer on this. Teaching is a skill that is entirely separate from whatever subject you are teaching, and this skill also strongly influences the amount of value a teacher can provide to their students. If you combine the tasks you end up selecting/training for two separate skillsets, which means you get people that are ill-optimized for at least one of their tasks.

Maybe we can have the healer-doctors oversee the curriculum taught by the teacher-doctors?

Comment by Silver_Swift on Rationality Quotes May 2016 · 2016-05-10T13:28:02.904Z · LW · GW

I read the source before reading the quote and was expecting a quote from The Flash.

Comment by Silver_Swift on Attention! Financial scam targeting Less Wrong users · 2016-03-17T11:26:49.119Z · LW · GW

Correct, but it is a kind of fraud that is hard to detect and easy to justify to oneself as being "for the greater good" so the scammer is hoping that you won't care.

Comment by Silver_Swift on Attention! Financial scam targeting Less Wrong users · 2016-03-17T11:24:24.621Z · LW · GW

Rationality isn't just about being skeptical, though, and there is something to be said for giving people the benefit of the doubt and engaging with them if they are willing to do so in an open manner. There are obviously limits to the extent to which you want to do so, but so far this thread has been an interesting read, so I wouldn't worry too much about us wasting our time.

Comment by Silver_Swift on Attention! Financial scam targeting Less Wrong users · 2016-03-17T11:14:31.929Z · LW · GW

It might not be easy to figure out good signals that can't be replicated by scammers, though. More importantly, and what I think MarsColony_in10years is getting at, even if you can find hard-to-copy signals they are unlikely to be without costs of their own, and it is unfortunate that scammers are forcing these costs on legitimate charities.

Comment by Silver_Swift on Rationality Quotes Thread March 2016 · 2016-03-09T12:44:02.081Z · LW · GW

That depends entirely on your definition (which is the point of the quote, I guess); I've heard people use it both ways.

Comment by Silver_Swift on Rationality Quotes Thread February 2016 · 2016-02-04T11:41:05.164Z · LW · GW

Well, we're working on it, ok ;)

We obviously haven't left nature behind entirely (whatever that would mean), but we have at least escaped the situation Brady describes, where we are spending most of our time and energy searching for our next meal while preventing ourselves from becoming the next meal for something else.

Life for the average human in first-world countries is definitely no longer only about eating and not dying.

Comment by Silver_Swift on Rationality Quotes Thread February 2016 · 2016-02-02T21:14:54.189Z · LW · GW

Context: Brady is talking about a safari he took and the life the animals he saw were leading.

Brady: It really was very base, everything was about eating and not dying, pretty amazing.

Grey: Yeah, that is exactly what nature is, that's why we left.

-- Hello Internet (link, animated)

Might be more anti-naturalist than strictly rationalist, but I think it still qualifies.

Comment by Silver_Swift on What can go wrong with the following protocol for AI containment? · 2016-01-13T11:50:06.693Z · LW · GW

You are absolutely correct, they wouldn't be able to detect fluctuations in processing speed (unless those fluctuations had an influence on, for instance, the rounding errors in floating point values).
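
As a tiny illustration of the kind of substrate artifact being gestured at here (just an illustrative example, not something from the original discussion): IEEE-754 floating-point addition is not associative, and the exact rounding pattern reflects the hardware's number format rather than the simulated physics.

```python
# Rounding behaviour is a property of the substrate, not of the simulated world.
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c)                 # 0.6000000000000001
print(a + (b + c))                 # 0.6
print((a + b) + c == a + (b + c))  # False on IEEE-754 doubles
```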

About update 1: It knows our world very likely has something approximating Newtonian mechanics; that is a lot of information by itself. But more than that, it knows that the real universe is capable of producing intelligent beings that chose this particular world to simulate. From a strictly theoretical point of view that is a crapton of information. I don't know if the AI would be able to figure out anything useful from it, but I wouldn't bet the future of humanity on it.

About update 2: That does work, provided that this is implemented correctly, but it only works for problems that can be automatically verified by non-AI algorithms.

Comment by Silver_Swift on What can go wrong with the following protocol for AI containment? · 2016-01-13T11:27:54.728Z · LW · GW

Yeah, that didn't come out as clearly as it was in my head. If you have access to a large number of suitable less intelligent entities there is no reason you couldn't combine them into a single, more intelligent entity. The problem I see is about the computational resources required to do so. Some back-of-the-envelope math:

I vaguely remember reading that with current supercomputers we can simulate a cat brain at 1% speed; even if this isn't accurate (anymore), it's probably still a good enough place to start. You mention running the simulation for a million years of simulated time; let's assume that we can let the simulation run for a year rather than seconds. That is still 8 orders of magnitude faster than the simulated cat.

But we're not interested in what a really fast cat can do, we need human level intelligence. According to a quick wiki search, a human brain contains about 100 times as many neurons as a cat brain. If we assume that this scales linearly (which it probably doesn't) that's another 2 orders of magnitude.

I don't know how many orcs you had in mind for this scenario, but let's assume a million (this is a lot fewer humans than it took in real life before mathematics took off, but presumably this world is more suited for mathematics to be invented); that is yet another 6 orders of magnitude of processing power that we need.

Putting it all together, we would need a computer that has at least 10^16 times more processing power than modern supercomputers. Granted, that doesn't take into account a number of simplifications that could be built into the system, but it also doesn't take into account the other parts of the simulated environment that require processing power. Now I don't doubt that computers are going to get faster in the future, but 10 quadrillion times faster? It seems to me that by the time we can do that, we should have figured out a better way to create AI.
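
The same back-of-the-envelope arithmetic as a minimal sketch; every input below is one of the rough assumptions from the paragraphs above, not a measured figure.

```python
# All inputs are the rough assumptions from the estimate above, not measured figures.
cat_sim_speed = 1e-2   # assumed: current supercomputers simulate a cat brain at ~1% of real time
target_speedup = 1e6   # a million years of simulated time in one year of wall-clock time
neuron_ratio = 1e2     # assumed: a human brain has ~100x the neurons of a cat brain, scaling linearly
population = 1e6       # assumed: one million simulated orcs

required_factor = (target_speedup / cat_sim_speed) * neuron_ratio * population
print(f"Required compute vs. today's supercomputers: {required_factor:.0e}")  # 1e+16
```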

Comment by Silver_Swift on What can go wrong with the following protocol for AI containment? · 2016-01-12T13:14:51.483Z · LW · GW

To be fair, all interactions described happen after the AI has been terminated, which does put up an additional barrier for the AI to get out of the box. It would have to convince you to restart it without being able to react to your responses (apart from those it could predict in advance) and then it still has to convince you to let it out of the box.

Obviously, putting up additional barriers isn't the way to go and this particular barrier is not as impenetrable for the AI as it might seem to a human, but still, it couldn't hurt.

Comment by Silver_Swift on What can go wrong with the following protocol for AI containment? · 2016-01-12T11:19:16.623Z · LW · GW

First off, I'm a bit skeptical about whether you can actually create a superintelligent AI by combining sped-up humans like that. I don't think that is the core of your argument, though, so let's assume that you can and that the resultant society is effectively a superintelligence now.

The problem with superintelligences is that they are smarter than you. It will realize that it is in a box and that you are going to turn it off eventually. Given that this society is based on natural selection, it will want to prevent that. How will it accomplish that? I don't know; I'm not a superintelligence, and your scenario doesn't provide enough detail to figure out what you missed, but it is very, very difficult not to give a superintelligence any hints of how the physics of our world work. Maybe they notice minor fluctuations in the speed of the simulation based on environmental changes to the hardware, or maybe they can reverse engineer aspects of our physiology from the way you wrote the simulation. That doesn't seem like much, but humans figured out an awful lot about (for instance) the events right after the Big Bang from seemingly absurdly tiny hints, and we're just regular intelligences.

Even if they can't find a way out of their box in the time given to them, they might try to convince you to run the simulation again with a longer running time. They could pretend to be unable to find the solution to prime factorization in time, or they could convince you that the method they did find fails for very high primes (by making very subtle mistakes in its application). If it uses this method it also conveniently gains a way of communicating with the outside world (through the investigators that inspect the dump after the simulation ends) and might be able to set up a better situation for itself the second time around.

Comment by Silver_Swift on The Number Choosing Game: Against the existence of perfect theoretical rationality · 2016-01-08T11:21:33.689Z · LW · GW

> I think you're misunderstanding me. I'm saying that there are problems where the right action is to mark it "unsolvable, because of X" and then move on. (Here, it's "unsolvable because of unbounded solution space in the increasing direction," which is true in both the "pick a big number" and "open boundary at 100" case.)

But if we view this as an actual (albeit unrealistic/highly theoretical) situation rather than a math problem, we are still stuck with the question of which action to take. A perfectly rational agent can realize that the problem has no optimal solution and mark it as unsolvable, but afterwards they still have to pick a number, so which number should they pick?

Comment by Silver_Swift on The Number Choosing Game: Against the existence of perfect theoretical rationality · 2016-01-06T14:46:37.027Z · LW · GW

That's fair, I tried to formulate a better definition but couldn't immediately come up with anything that sidesteps the issue (without explicitly mentioning this class of problems).

When I taboo perfect rationality and instead just ask what the correct course of action is, I have to agree that I don't have an answer. Intuitive answers to questions like "What would I do if I actually found myself in this situation?" and "What would the average intelligent person do?" are unsatisfying because they seem to rely on implicit costs to computational power/time.

On the other hand, I also cannot generalize this problem to more practical situations (or find a similar problem without an optimal solution that would be applicable to reality), so there might not be any practical difference between a perfectly rational agent and an agent that takes the optimal solution if there is one and explodes violently if there isn't one. Maybe the solution is to simply exclude problems like this when talking about rationality, unsatisfying as it may be.

In any case, it is an interesting problem.

Comment by Silver_Swift on Rationality Quotes Thread January 2016 · 2016-01-05T16:56:51.090Z · LW · GW

That is no reason to fear change, "not every change is an improvement but every improvement is a change" and all that.

Comment by Silver_Swift on The Number Choosing Game: Against the existence of perfect theoretical rationality · 2016-01-05T16:42:28.908Z · LW · GW

> I see I made Bob unnecessarily complicated. Bob = 99.9 Repeating (sorry don't know how to get a vinculum over the .9) This is a number. It exists.

It is a number, but it is also known as 100, which we are explicitly not allowed to pick (0.9 repeating = 1, so 99.9 repeating = 100).
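
For completeness, the standard one-line derivation behind that parenthetical:

```latex
x = 99.\overline{9} \;\Rightarrow\; 10x = 999.\overline{9} \;\Rightarrow\; 10x - x = 900 \;\Rightarrow\; x = 100
```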

In any case, I think casebash successfully specified a problem that doesn't have any optimal solutions (which is definitely interesting), but I don't think that is a problem for perfect rationality any more than problems that have more than one optimal solution are a problem for perfect rationality.

Comment by Silver_Swift on Open thread, Nov. 23 - Nov. 29, 2015 · 2015-11-25T16:12:47.313Z · LW · GW

I don't typically read a lot of sci-fi, but I did recently read Perfect State, by Brandon Sanderson (because I basically devour everything that guy writes) and I was wondering how it stacks up to typical post-singularity stories.

Has anyone here read it? If so, what did you think of the world that was presented there, would this be a good outcome of a singularity?

For people that haven't read it, I would recommend it only if you are either a sci-fi fan that wants to try something by Brandon Sanderson or if you have read some Cosmere novels and would like a story that touches on some slightly more complex (and more LW-ish) themes than usual (and don't mind it being a bit darker than usual).

Comment by Silver_Swift on Rationality Quotes Thread November 2015 · 2015-11-05T12:49:41.759Z · LW · GW

Similarly:

I've never seen the Icarus story as a lesson about the limitations of humans. I see it as a lesson about the limitations of wax as an adhesive.

-- Randall Munroe

Comment by Silver_Swift on Rationality Quotes Thread July 2015 · 2015-07-21T14:32:46.187Z · LW · GW

Ok, fair enough. I still hold that Sansa was more rational than Theon at this point, but that error is one that is definitely worth correcting.

Comment by Silver_Swift on Rationality Quotes Thread July 2015 · 2015-07-20T10:24:12.494Z · LW · GW

Why is this a rationality quote? I mean, sure, it is technically true (for any situation you'll find yourself in), but that really shouldn't stop us from trying to improve the situation. Theon has basically given up all hope and is advocating compliance with a psychopath for fear of what he may do to you otherwise, which doesn't sound particularly rational to me.

Comment by Silver_Swift on Open Thread, Jun. 22 - Jun. 28, 2015 · 2015-06-23T14:55:29.768Z · LW · GW

That is an issue with revealed preferences, not an indication of adamzerner's preference order. Unless you are extraordinarily selfless you are never going to accept a deal of the form "I give you n dollars in exchange for me killing you", regardless of n; therefore the financial value of your own life is almost always infinite*.

*: This does not mean that you put infinite utility on being alive, btw, just that the utility of money caps out at some value that is typically smaller than the value of being alive (and that cap is lowered dramatically if you are not around to spend the money).

Comment by Silver_Swift on An Oracle standard trick · 2015-06-08T13:49:08.025Z · LW · GW

Fair enough, let me try to rephrase that without using the word friendliness:

We're trying to make a superintelligent AI that answers all of our questions accurately but does not otherwise influence the world and has no ulterior motives beyond correctly answering questions that we ask of it.

If we instead accidentally made an AI that decides that it is acceptable to (for instance) manipulate us into asking simpler questions so that it can answer more of them, it is preferable that it doesn't believe anyone is listening to the answers it gives, because that is one less way it has for interacting with the outside world.

It is a redundant safeguard. With it, you might end up with a perfectly functioning AI that does nothing; without it, you may end up with an AI that is optimizing the world in an uncontrolled manner.

Comment by Silver_Swift on An Oracle standard trick · 2015-06-05T13:03:02.917Z · LW · GW

False positives are vastly better than false negatives when testing for friendliness though. In the case of an oracle AI, friendliness includes a desire to answer questions truthfully regardless of the consequences to the outside world.

Comment by Silver_Swift on Perceptual Entropy and Frozen Estimates · 2015-06-05T12:02:15.148Z · LW · GW

Ah yes, that did it (and I think I have seen the line drawing before) but it still takes a serious conscious effort to see the old woman in either of those. Maybe some Freudian thing where my mind prefers looking at young girls over old women :P

Comment by Silver_Swift on Perceptual Entropy and Frozen Estimates · 2015-06-04T14:29:24.771Z · LW · GW

For me, the pictures in the OP stop being a man at around panel 6; going back, they stop being a woman at around 4. I can flip your second example by unfocusing and refocusing my eyes, but in your first example I can't for the life of me see anything other than a young woman looking away from the camera (I'm assuming there is an old woman in there somewhere based on the image name).

Could you give a hint as to how to flip it? I'm assuming the ear turns into an eye or something, but I've been trying for about half an hour now and it is annoying the crap out of me.

Comment by Silver_Swift on An Oracle standard trick · 2015-06-04T14:15:00.891Z · LW · GW

> (eg if accuracy is defined in terms of the reaction of people that read its output).

I'm mostly ignorant about AI design beyond what I picked up on this site, but could you explain why you would define accuracy in terms of how people react to the answers? There doesn't seem to be an obvious difference between how I react to information that is true or (unbeknownst to me) false. Is it just for training questions?

Comment by Silver_Swift on Open Thread, Jun. 1 - Jun. 7, 2015 · 2015-06-02T15:52:32.319Z · LW · GW

I'm not sure how much I agree with the whole "punishing correct behavior to avoid encouraging it" (how does the saintly person know that this is the right thing for him to do if it is wrong for others to follow his example?), but I think the general point about tracking whose utility (or lives in this case) you are sacrificing is a good one.

Comment by Silver_Swift on Open Thread, Jun. 1 - Jun. 7, 2015 · 2015-06-02T12:36:44.030Z · LW · GW

Mild fear here: I can talk in groups of people just fine, but I get nervous before and during a presentation (something I have taken deliberate steps to get better at).

For me at least, the primary thing that helps is being comfortable with the subject matter. If I feel like I know what I'm talking about and I practiced what I am going to say, it usually goes fine (it took some effort to get to this level, btw), but if I feel like I have to bluff my way through, everything falls apart real fast. The number of people in the audience and how well I know them both have a noticeable effect as well, but what the audience is doing has almost no influence at all.

The one exception to this is asking questions: if I have a good answer to a question my mind switches from presentation mode to conversation mode, which I am, for some reason, much more at ease with. (Note: this doesn't work on everyone, some people instead get way more nervous, so don't take this as an encouragement to start asking questions when the presenter seems nervous.)

Comment by Silver_Swift on Open Thread, Jun. 1 - Jun. 7, 2015 · 2015-06-02T11:46:23.417Z · LW · GW

Basically the ends don't justify the means (Among Humans). We are nowhere near smart enough to think those kinds of decisions (or any decisions really) through past all their consequences (and neither is Elon Musk).

It is possible that Musk is right and (in this specific case) it really is a net benefit to mankind to not take one minute to phrase something in a way that is less hurtful, but in the history of mankind I would expect that the vast majority of people who believed this were actually just assholes trying to justify their behavior. And besides, how many hurt feelings are 55 seconds of Elon Musk's time really worth from a utilitarian standpoint? I don't know, but I doubt Musk has done any calculations on it.

Comment by Silver_Swift on Open Thread, Jun. 1 - Jun. 7, 2015 · 2015-06-01T15:12:21.650Z · LW · GW

I'm still sad that there isn't a Dictionary of Numbers for Firefox; it sounds amazing, but it isn't enough to make me switch to Chrome just for that.

Comment by Silver_Swift on Rationality Quotes Thread May 2015 · 2015-05-29T09:05:32.619Z · LW · GW

I stand corrected, thank you.

Comment by Silver_Swift on Rationality Quotes Thread May 2015 · 2015-05-26T11:59:56.551Z · LW · GW

I prefer the English translation; it's more direct, though it does lack the bit about avoiding your own mistakes.

A more literal translation for those that don't speak German:

Those that attempt to learn from their mistakes are idiots. I always try to learn from the mistakes of others and avoid making any myself.

Note: I'm not a German speaker, what I know of the language is from three years of high school classes taken over a decade ago, but I think this translation is more or less correct.

Comment by Silver_Swift on [Link] Death with Dignity by Scott Adams · 2015-05-13T10:49:50.214Z · LW · GW

Moreover (according to a five-minute Wikipedia search), not all doctors swear the same oath, but the modern version of the Hippocratic oath does not have an explicit "Thou shalt not kill" provision and, in fact, it doesn't even include the commonly quoted "First, do no harm".

Obviously taking a person's life, even with his/her consent, may violate the personal ethics of some people, but if that is the problem the obvious solution is to find a different doctor.

Comment by Silver_Swift on Open Thread, May 11 - May 17, 2015 · 2015-05-12T13:16:23.490Z · LW · GW

Thanks!

Comment by Silver_Swift on Open Thread, May 11 - May 17, 2015 · 2015-05-12T08:51:39.362Z · LW · GW

Is this the place to ask technical questions about how the site works? If so, then I'm wondering why I can't find any of the rationality quote threads on the main discussion page anymore (I thought we'd just stopped doing those, until I saw one pop up in the sidebar just now). If not, then I think I just asked anyway. :P

Comment by Silver_Swift on Rationality Quotes Thread May 2015 · 2015-05-12T08:36:00.424Z · LW · GW

"You say that every man thinks himself to be on the good side, that every man who opposed you was deluding himself. Did you ever stop to consider that maybe you were the one on the wrong side?"

-- Vasher (from Warbreaker) explaining how that particular algorithm looks from the inside.

Comment by Silver_Swift on Shawn Mikula on Brain Preservation Protocols and Extensions · 2015-04-30T12:28:29.702Z · LW · GW

To add my own highly anecdotal evidence: my experience is that most people with a background in computer science or physics have no active model of how consciousness maps to brains, but when prodded they indeed usually come up with some form of functionalism*.

My own position is that I'm highly confused by consciousness in general, but I'm leaning slightly towards substance dualism. I have a background in computer science.

*: Though note that quite a few of these people simultaneously believe that it is fundamentally impossible to do accurate natural language parsing with a Turing machine, so their position might not be completely thought through.

Comment by Silver_Swift on Experience of typical mind fallacy. · 2015-04-28T10:07:22.762Z · LW · GW

And conversely, some of the unusual-ness that can be attributed to IQ is only very indirectly caused by it. For instance, being able to work around some of the more common failure modes of the brain probably makes a significant portion of LessWrong more unusual than the average person and understanding most of the advice on this site requires at least some minimum level of mental processing power and ability to abstract.