Posts

FHI Essay Competition 2011-11-04T13:28:30.778Z

Comments

Comment by Gedusa on Where I agree and disagree with Eliezer · 2022-06-20T13:19:22.486Z · LW · GW

I found it really helpful to have a list of places where Eliezer and Paul agree. It's interesting to see that there is a lot of similarity on big picture stuff like AI being extremely dangerous.

Comment by Gedusa on AMA: Paul Christiano, alignment researcher · 2021-04-29T09:27:36.930Z · LW · GW

A number of people seem to have departed OpenAI at around the same time as you. Is there a particular reason for that which you can share? Do you still think that people interested in alignment research should apply to work at OpenAI?

Comment by Gedusa on What is the current bottleneck on genetic engineering of human embryos for improved IQ · 2020-10-26T09:24:04.474Z · LW · GW

In the case of reducing mutational load to near zero, you might be doing targeted changes to huge numbers of genes. There is presumably some point at which it's easier to create a genome from scratch.

I agree it's an open question though!

Comment by Gedusa on What is the current bottleneck on genetic engineering of human embryos for improved IQ · 2020-10-24T11:48:00.993Z · LW · GW

An alternative to editing many genes individually is to synthesise the whole genome from scratch, which is plausibly cheaper and more accurate.

Comment by Gedusa on Objective Dog Ratings: An Introduction & Explanation · 2020-08-30T16:55:49.726Z · LW · GW

I would find this more useful if you spelled out a bit more about your scoring method. You say:

They must be loyal, intelligent, and hardworking, they must have a sense of dignity, they must like humans, and above all they must be healthy.

Which of these do you think are the most important? Why do these traits matter? (for example, hardworking dogs are not really necessary in the modern world)

And why these traits and not others? (for example: size, cleanliness, appearance, getting along with other animals)

a dog which is as close to being a wolf as one can get without sacrificing any of those essential characteristics which define a dog as such

Why do you think a dog that is close to a wolf is objectively better than dogs which are further away?

Comment by Gedusa on Get Rich Real Slowly · 2019-06-11T13:30:07.233Z · LW · GW

Vanguard has a UK website, I use them and it works well.

Monevator also has a good guide to investment firms in the UK, along with a bunch of UK-specific advice.

Comment by Gedusa on 2018 AI Alignment Literature Review and Charity Comparison · 2018-12-18T11:29:54.621Z · LW · GW

OpenPhil gave Carl Shulman $5m to re-grant

I didn't realise this was happening. Is there somewhere we can read about grants from this fund when/if they occur?

Comment by Gedusa on When is unaligned AI morally valuable? · 2018-05-28T20:53:34.041Z · LW · GW

Would this approach have any advantages vs brain uploading? I would assume brain uploading to be much easier than running a realistic evolution simulation, and we would have to worry less about alignment.

Comment by Gedusa on 2014 Less Wrong Census/Survey · 2014-10-23T12:53:02.590Z · LW · GW

I filled in the survey! Like many people I didn't have a ruler to use for the digit ratio question.

Comment by Gedusa on Harry Potter and the Methods of Rationality discussion thread, part 17, chapter 86 · 2012-12-17T21:11:05.615Z · LW · GW

Also, I'm torn over how to interpret Snape's last question - my first thought was that he was verifying the truth of a story he had been told ("Your master tortured her, now join the light side already!" being the most likely), but upon rereading, I wonder if he was worried that she had been used as Horcrux fuel.

Or verifying a deal he made with Voldemort, though that might not make as much sense with Snape's character.

Comment by Gedusa on Room for more funding at the Future of Humanity Institute · 2012-11-17T13:53:02.462Z · LW · GW

Slightly off topic, but I'm very interested in the "policy impact" that FHI has had - I had heard nothing about it before and assumed it wasn't having much. Do you have more information on that? If it were significant, it would increase the odds that giving to FHI was a great option.

Comment by Gedusa on Giving What We Can, 80,000 Hours, and Meta-Charity · 2012-11-10T00:22:36.443Z · LW · GW

Possible consideration: meta-charities like GWWC and 80k cause donations to causes that one might not think are particularly important. E.g. I think x-risk research is the highest value intervention, but most of the money moved by GWWC and 80k goes to global poverty or animal welfare interventions. So if the proportion of money moved to causes I cared about was small enough, or the meta-charity didn't multiply my money much anyway, then I should give directly (or start a new meta-charity in the area I care about).

A bigger possible problem would be if I took considerations like the poor meat-eater problem to be true. In that case, donating to e.g. 80k would cause a lot of harm even though it would move a lot of money to animal welfare charities, because it causes so much to go to poverty relief, which, on that view, would be a bad thing. It seems like there are probably a few other situations like this around.

Do you have figures on what the return to donation (or volunteer time) is for 80,000 hours? i.e. is it similar to GWWC's $138 of donations per $1 of time invested? It would be helpful to know so I could calculate how much I would expect to go to the various causes.

Comment by Gedusa on Desired articles on AI risk? · 2012-11-02T15:44:56.184Z · LW · GW

Something on singletons: desirability, plausibility, paths to various kinds (strongly relates to stable attractors)

"Hell Futures - When is it better to be extinct?" (not entirely serious)

Comment by Gedusa on [Link]: GiveWell is aiming to have a new #1 charity by December · 2011-11-29T17:49:29.216Z · LW · GW

Recommendations are up!

Comment by Gedusa on Will the ems save us from the robots? · 2011-11-24T22:59:37.723Z · LW · GW

Maybe some kinds of ems could tell us how likely Oracle/AI-in-a-box scenarios were to be successful? We could see if ems of very intelligent people run at very high speeds could convince a dedicated gatekeeper to let them out of the box. It would at least give us some mild evidence for or against AIs-in-boxes being feasible.

And maybe we could use certain ems as gatekeepers - the AI wouldn't have a speed advantage anymore, and we could try to make alterations to the em to make it less likely to let the AI out.

Minor bad incidents involving ems might make people more cautious about full-blown AGI (unlikely, but I might as well mention it).

Comment by Gedusa on New Q&A by Nick Bostrom · 2011-11-16T11:09:11.884Z · LW · GW

I was the one who asked that question!

I was slightly disappointed by his answer - surely there can only be one optimal charity to give to? The only donation strategy he recommended was giving to whichever one was about to go under.

I guess what I'm really thinking is that it's pretty unlikely that the two charities are equally optimal.

Comment by Gedusa on Existential Risk · 2011-11-16T00:11:47.514Z · LW · GW

Point taken. This post seems unlikely to reach those people. Is it possible to communicate the importance of x-risks in such a short space to SL0s - maybe without mentioning exotic technologies? And would they change their charitable behavior?

I suspect the first answer is yes and the second is no (not without lots of other bits of explanation).

Comment by Gedusa on Existential Risk · 2011-11-15T21:49:39.520Z · LW · GW

I thought this article was for SL0 people - that would give it the widest audience possible, which I thought was the point?

If it's aimed at SL0s, then we'd be wanting to go for an SL1 image.

Comment by Gedusa on Existential Risk · 2011-11-15T16:04:01.631Z · LW · GW

Whilst I really, really like the last picture, it seems a little odd to include it in the article.

Isn't this meant to seem like a hard-nosed introduction to non-transhumanist/sci-fi people? And doesn't the picture sort of act against that - by being slightly sci-fi and weird?

Comment by Gedusa on Why an Intelligence Explosion might be a Low-Priority Global Risk · 2011-11-14T12:30:48.918Z · LW · GW

Due to the absence of any signs of intelligence out there, especially paper-clippers burning the cosmic commons, we might conclude that unfriendly AI could not be the most dangerous existential risk that we should worry about.

I view this as one of the single best arguments against risks from paperclippers. I'm a little concerned that it hasn't been dealt with properly by SIAI folks - aside from a few comments by Carl Shulman on Katja's blog.

I suspect the answer may be something to do with anthropics - but I'm not really certain of exactly what it is.

Comment by Gedusa on Q&A with new Executive Director of Singularity Institute · 2011-11-07T15:20:57.996Z · LW · GW

What initiatives is the Singularity Institute taking or planning to take to increase its funding to whatever the optimal level of funding is?

Comment by Gedusa on FHI Essay Competition · 2011-11-04T15:30:50.691Z · LW · GW

I'm guessing they mean a university affiliated person doing a formal philosophy degree of some kind.

Comment by Gedusa on 2011 Less Wrong Census / Survey · 2011-11-01T02:00:08.953Z · LW · GW

I didn't think of that - given that a huge chunk of people here have probably taken such tests, if Yvain allowed such an estimation, it would be very helpful.

excluded-middle bias

Yes! That's what I was thinking of :)

Comment by Gedusa on 2011 Less Wrong Census / Survey · 2011-11-01T00:57:59.059Z · LW · GW

This is great! I hope there's a big response.

It seems likely you're going to get skewed answers for the IQ question. Mostly it's the really intelligent and the below average who get (professional) IQ tests - average people seem less likely to get them.

I predict high average IQ, but low response rate on the IQ question, which will give bad results. Can you tell us how many people respond to that question this time? (The number of responses wasn't recorded on the previous survey.)

Comment by Gedusa on [link] SMBC on utilitarianism and vegatarianism. · 2011-10-16T16:39:06.538Z · LW · GW

The obvious solution is to stop eating all those kinds of animal/animal products. That would satisfy CO2 concerns and killing concerns.

Of course, it might not satisfy things like fun of eating meat, ease of eating meat, health etc.

Comment by Gedusa on [link] SMBC on utilitarianism and vegatarianism. · 2011-10-16T12:55:33.442Z · LW · GW

Here maybe?

Comment by Gedusa on Case study: Folding@home · 2011-09-15T20:07:03.178Z · LW · GW

I take it that you partially changed "my mistakes" to include nicotine. I enjoyed your article on it - but how are you using it?

Are you rotating with other stimulants on a regular basis, using it when you like, using it to promote habit formation, etc.?

Comment by Gedusa on General Bitcoin discussion thread (June 2011) · 2011-06-20T10:50:19.990Z · LW · GW

Would anyone care to comment on the recent Mt Gox hack n' crash?

Personally, I think this is very bad. The currency won't look as good to the mainstream, and I'm anticipating panic sells as soon as the exchanges get up and running again. I'm agnostic as to whether Bitcoin will die or not though...

Comment by Gedusa on Much-Better-Life Simulator™ - Sales Conversation · 2011-06-19T14:07:00.322Z · LW · GW

The obvious extra question is:

"If you think it's so great, how come you're not using it?" Unless the sales girl's enjoyable life includes selling the machine she's in to uninterested customers.

Comment by Gedusa on [LINK] Two articles on Bitcoin · 2011-05-16T20:59:39.493Z · LW · GW

And if you do assume "fiat money is doomed, doomed!" then why wouldn't something like bitcoin become the world's reserve currency?

Okay, I'm willing to grant that if the dollar/fiat money in general is doomed then something along the lines of bitcoin would probably take over. But I don't assume this. I guess it is rational to put lots of money into bitcoin if you do take this premise though.

I agree that the dollar becoming effectively worthless would be pretty bad to put it mildly!

Comment by Gedusa on [LINK] Two articles on Bitcoin · 2011-05-16T17:34:59.459Z · LW · GW

Weirdly, though I think that bitcoins will succeed (and accordingly have some), I don't think Calacanis' article is well-founded. To focus just on the points I feel able to judge:

Bitcoin is unstoppable without end-user prosecution.

I don't think this is true. Shutting down all legitimate currency exchanges would raise the barrier to investment for legitimate investors and would likely decrease interest in Bitcoin. Anecdote: I would get less interested in bitcoins if this happened. Also, a focused government campaign against it might succeed in branding it as evil in public eyes, again reducing interest. I do recognize that these factors wouldn't destroy bitcoin, but they would reduce the chances of it "changing the world".

Bitcoins will change the world unless governments ban them with harsh penalties.

Again, I'm not so sure of this. Can anyone give me any legitimate reasons to anticipate this? So far I've just seen people on the bitcoin forums saying that the dollar/euro will collapse and bitcoin will attain worldwide dominance.

Basically, I predict they'll find a decent niche, won't replace any currency system in any major country and won't increase in value to insane levels.

Comment by Gedusa on Welcome to Less Wrong! (2010-2011) · 2011-05-15T19:22:02.702Z · LW · GW

Hi Less Wrong!

Decided to register after seeing this comment and wanting to give a free $10 to a cause I value highly.

I got pulled into Less Wrong by being interested in transhumanist stuff for a few years, and finally decided to read here after realizing that this was the best place to discuss this sort of stuff and actually end up being right, as opposed to just making wild predictions with absolutely no merit. I'm an 18 year old male living in the UK. I don't have a background in maths or computer science as a lot of people here do (though I'm thinking of learning them). I'm just finishing up at school and then going on to do a philosophy degree (hopefully - though I'm scared of it making me believe crap things).

I've found the most useful LW stuff to be along the lines of instrumental rationality (the more recent stuff). Lukeprog's sequence on winning at life is great! My favorite LW-related posts have been:

  • The Cynic's Conundrum: Because I used to think idealistically about my own thought processes and cynically about other people's. In essence I fell into comfortable cynicism.

  • Tsuyoku Naritai! (I Want To Become Stronger): Because this was just really galvanizing and made me want to do better, much more than any self-help stuff ever did!

  • A Suite of Pragmatic Considerations in Favor of Niceness: Fantastic, as I tended (and still tend) to be mean for no real reason, and this post gave me a lot of motivation to stop. I've actually started to have niceness as a terminal value now, which is a tad odd.

So anyway, I'm happy to have registered and I hope to get stronger and have fun here!

Comment by Gedusa on People who want to save the world · 2011-05-15T18:42:39.257Z · LW · GW

Your right action is most excellent!