Posts

Some thoughts on criticism 2020-09-18T04:58:37.042Z
How good is humanity at coordination? 2020-07-21T20:01:39.744Z
Six economics misconceptions of mine which I've resolved over the last few years 2020-07-13T03:01:43.717Z
Buck's Shortform 2019-08-18T07:22:26.247Z
"Other people are wrong" vs "I am right" 2019-02-22T20:01:16.012Z

Comments

Comment by buck on Buck's Shortform · 2020-08-04T14:57:48.077Z · LW · GW

I used to think that slower takeoff implied shorter timelines, because slow takeoff means that pre-AGI AI is more economically valuable, which means that the economy advances faster, which means that we get AGI sooner. But there's a countervailing consideration, which is that in slow takeoff worlds, you can make arguments like ‘it’s unlikely that we’re close to AGI, because AI can’t do X yet’, where X might be ‘make a trillion dollars a year’ or ‘be as competent as a bee’. I now overall think that arguments for fast takeoff should update you towards shorter timelines.

So slow takeoffs cause shorter timelines, but are evidence for longer timelines.

This graph is a version of this argument: if we notice that current capabilities are at the level of the green line, then if we think we're on the fast takeoff curve we'll deduce we're much further ahead than we'd think on the slow takeoff curve.
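
(For concreteness, here's a rough sketch of the kind of graph I mean--my own toy reconstruction with made-up curves and numbers, not the actual figure:)

```python
# Toy illustration: two hypothetical capability curves that hit AGI at the same time.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 10, 200)                      # time, with AGI arriving at t = 10
slow = t / 10                                    # slow takeoff: steady pre-AGI progress
fast = (np.exp(t) - 1) / (np.exp(10) - 1)        # fast takeoff: flat, then a late spike

plt.plot(t, slow, label="slow takeoff")
plt.plot(t, fast, label="fast takeoff")
plt.axhline(0.2, color="green", linestyle="--", label="current capabilities")
plt.xlabel("time"); plt.ylabel("AI capabilities"); plt.legend(); plt.show()
# The green line crosses the slow curve far from t = 10 but crosses the fast
# curve only shortly before it, which is the "we're further along than we'd
# think on the slow curve" deduction.
```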

For the "slow takeoffs mean shorter timelines" argument, see here: https://sideways-view.com/2018/02/24/takeoff-speeds/

This point feels really obvious now that I've written it down, and I suspect it's obvious to many AI safety people, including the people whose writings I'm referencing here. Thanks to Caroline Ellison for pointing this out to me, and various other people for helpful comments.

I think that this is why belief in slow takeoffs is correlated with belief in long timelines among the people I know who think a lot about AI safety.

Comment by buck on How good is humanity at coordination? · 2020-07-21T20:59:48.813Z · LW · GW

I don't really know how to think about anthropics, sadly.

But I think it's pretty likely that a nuclear war would not have killed everyone. So I still lose Bayes points compared to the world where nukes were fired but not everyone died.

Comment by buck on $1000 bounty for OpenAI to show whether GPT3 was "deliberately" pretending to be stupider than it is · 2020-07-21T20:17:59.495Z · LW · GW

It's tempting to anthropomorphize GPT-3 as trying its hardest to make John smart. That's what we want GPT-3 to do, right?

I don't feel at all tempted to do that anthropomorphization, and I think it's weird that EY is acting as if this is a reasonable thing to do. Like, obviously GPT-3 is doing sequence prediction--that's what it was trained to do. Even if it turns out that GPT-3 correctly answers questions about balanced parens in some contexts, I feel pretty weird about calling that "deliberately pretending to be stupider than it is".

Comment by buck on Possible takeaways from the coronavirus pandemic for slow AI takeoff · 2020-07-21T06:32:40.284Z · LW · GW

If the linked SSC article is about the aestivation hypothesis, see the rebuttal here.

Comment by buck on Six economics misconceptions of mine which I've resolved over the last few years · 2020-07-13T15:42:25.235Z · LW · GW

Remember that I’m not interested in evidence here, this post is just about what the theoretical analysis says :)

In an economy where the relative wealth of rich and poor people is constant, poor people and rich people both have consumption equal to their income.

Comment by buck on Six economics misconceptions of mine which I've resolved over the last few years · 2020-07-13T03:57:55.879Z · LW · GW

I agree that there's some subtlety here, but I don't think that all that happened here is that my model got more complex.

I think I'm trying to say something more like "I thought that I understood the first-order considerations, but actually I didn't." Or "I thought that I understood the solution to this particular problem, but actually that problem had a different solution than I thought it did". Eg in the situations of 1, 2, and 3, I had a picture in my head of some idealized market, and I had false beliefs about what happens in that idealized market, just as I could be wrong about the Nash equilibrium of a game.

I wouldn't have included something on this list if I had just added complexity to the model in order to capture higher-order effects.

Comment by buck on Six economics misconceptions of mine which I've resolved over the last few years · 2020-07-13T03:44:39.778Z · LW · GW

I agree that the case where there are several equilibrium points that are almost as good for the employer is the case where the minimum wage looks best.

Re point 1, note that the minimum wage decreases total consumption, because it reduces efficiency.

Comment by buck on What will be the big-picture implications of the coronavirus, assuming it eventually infects >10% of the world? · 2020-03-04T22:28:34.562Z · LW · GW

I've now made a Guesstimate here. I suspect that it is very bad and dumb; please make your own that is better than mine. I'm probably not going to fix problems with mine. Some people like Daniel Filan are confused by what my model means; I am like 50-50 on whether my model is really dumb or just confusing to read.

Also don't understand this part. "4x as many mild cases as severe cases" is compatible with what I assumed (10%-20% of all cases end up severe or critical) but where does 3% come from?

Yeah my text was wrong here; I meant that I think you get 4x as many unnoticed infections as confirmed infections, then 10-20% of confirmed cases end up severe or critical.

Comment by buck on What will be the big-picture implications of the coronavirus, assuming it eventually infects >10% of the world? · 2020-03-04T19:09:22.159Z · LW · GW

Oh yeah I'm totally wrong there. I don't have time to correct this now. Some helpful onlooker should make a Guesstimate for all this.

Comment by buck on What will be the big-picture implications of the coronavirus, assuming it eventually infects >10% of the world? · 2020-03-04T18:29:19.952Z · LW · GW

Epistemic status: I don't really know what I'm talking about. I am not at all an expert here (though I have been talking to some of my more expert friends about this).

EDIT: I now have a Guesstimate model here, but its results don't really make sense. I encourage others to make their own.

Here's my model: To get such a large death toll, there would need to be lots of people who need oxygen all at once and who can't get it. So we need to multiply the proportion of people who might be infected all at once by the fatality rate for such people. I'm going to use point estimates here and note that they look way lower than yours; this should probably be a Guesstimate model.

Fatality rate

This comment suggests maybe 85% fatality of confirmed cases if they don't have a ventilator, and 75% without oxygen. EDIT: This is totally wrong, see replies. I will fix it later. Idk what it does to the bottom line.

But there are plausibly way more mild cases than confirmed cases. In places with aggressive testing, like the Diamond Princess and South Korea, you see much lower fatality rates, which suggests that lots of cases are mild and therefore don't get confirmed. So plausibly there are 4x as many mild cases as confirmed cases. This gets us to like a 3% fatality rate (again assuming no supplemental oxygen, which I don't think is a clear assumption; I expect someone else could make progress on forecasting this if they want).

How many people get it at once

(If we assume that like 1000 people in the US currently have it, and doubling time is 5 days, then peak time is like 3 months away.)
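
Here's a minimal sketch of that arithmetic (the 1000 current cases and 5-day doubling time are the assumptions from the parenthetical above; the ~330M US population is my own rough number):

```python
import math

current_cases = 1_000         # assumed current US infections (from above)
doubling_time_days = 5        # assumed doubling time (from above)
us_population = 330e6         # rough US population

doublings = math.log2(us_population / current_cases)   # ~18.3 doublings to saturate
days = doublings * doubling_time_days                  # ~92 days
print(doublings, days / 30)                            # -> roughly 3 months
```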

To get to overall 2.5% fatality, you need more than 80% of living humans to get it, in a big clump such that they don't have oxygen access. This probably won't happen (I put it at ~20%), because of arguments like the following:

  • This doesn't seem to have happened in China, so it seems possible to prevent.
    • China is probably unusually good at handling this, but even if only China does this
  • Flu is spread out over a few months, and it's more transmissible than this, and not everyone gets it. (Maybe it's because of immunity to flu from previous flus?)
  • If the fatality rate looks on the high end, people will try harder to not get it

Other factors that discount it

  • The warm weather might make it get a lot less bad. (10% hail mary?)
  • Effective countermeasures might be invented in the next few months. Eg we might need to notice that some existing antiviral is helpful. People are testing a bunch of these, and there are some that might be effective. (20% hail mary?)

Conclusion

This overall adds up to like 20% * (1-0.1-0.2) = 14% chance of 2.5% mortality, based on multiplications of point estimates which I'm sure are invalid.

Comment by buck on What will be the big-picture implications of the coronavirus, assuming it eventually infects >10% of the world? · 2020-03-04T17:27:04.511Z · LW · GW

Just for the record, I think that this estimate is pretty high and I'd be pretty surprised if it were true; I've talked to a few biosecurity friends about this and they thought it was too high. I'm worried that this answer has been highly upvoted but there are lots of people who think it's wrong. I'd be excited for more commenters giving their bottom line predictions about this, so that it's easier to see the spread.

Wei_Dai, are you open to betting about this? It seems really important for us to have well-calibrated beliefs about this.

Comment by buck on AIRCS Workshop: How I failed to be recruited at MIRI. · 2020-01-07T18:06:58.660Z · LW · GW

(I'm unsure whether I should write this comment referring to the author of this post in second or third person; I think I'm going to go with third person, though it feels a bit awkward. Arthur reviewed this comment before I posted it.)

Here are a couple of clarifications about things in this post, which might be relevant for people who are using it to learn about the MIRI recruiting process. Note that I'm the MIRI recruiter Arthur describes working with.

General comments:

I think Arthur is a really smart, good programmer. Arthur doesn't have as much background with AI safety stuff as many people who I consider as candidates for MIRI work, but it seemed worth spending effort on bringing Arthur to AIRCS etc because it would be really cool if it worked out.

In this post Arthur reports a variety of people as saying things that I think he somewhat misinterpreted, and I disagree with several of the things he describes them as saying.

I still don't understand that: what's the point of inviting me if the test fails? It would appear more cost-efficient to wait until after the test to decide whether they want me to come or not (I don't think I ever asked it out loud; I was already happy to have a trip to California for free).

I thought it was very likely Arthur would do well on the two-day project (he did).

I do not wish to disclose how much I have been paid, but I'll state that two hours at that rate was more than a day at the French PhD rate. I didn't even ask to be paid; I hadn't even thought that being paid for a job interview was possible.

It's considered good practice to pay people to do work for trials; we paid Arthur a rate which is lower than you'd pay a Bay Area software engineer as a contractor, and I was getting Arthur to do somewhat unusually difficult (though unusually interesting) work.

I assume that if EA cares about animal suffering in itself, then using throwaways is less of a direct suffering factor.

Yep

So Anna Salamon gave us a rule: We don't speak of AI safety to people who do not express the desire to hear about it. When I asked for more information, she specified that it is okay to mention the words "AI Safety", but not to give any details until the other person is sure they want to hear about it. In practice, this means it is okay to share a book/post on AI safety, but we should warn the person to read it only if they feel ready. Which leads to a related problem: some people have never experienced an existential crisis or anxiety attack in their life, so it's all too possible they can't really "be ready".

I think this is a substantial misunderstanding of what Anna said. I don't think she was trying to propose a rule that people should follow, and she definitely wasn't explaining a rule of the AIRCS workshop or something; I think she was doing something a lot more like sharing her own thoughts about how people should relate to AI risk. I might come back and edit this comment later to say more.

That means that, during circles, I was asked to be as honest as possible about my feelings while also being considered for an internship. This is extremely awkward.

For the record, I think that "being asked to be as honest as possible" is a pretty bad description of what circling is, though I'm sad that it came across this way to Arthur (I've already talked to him about this)

But just because they do not think of AIRCS as a job interview does not mean AIRCS is not a job interview. Case in point: half a week after the workshop, the recruiter told me that "After discussing some more, we decided that we don't want to move forward with you right now". So the workshop really was what led them to decide not to hire me.

For the record, the workshop indeed made the difference about whether we wanted to make Arthur an offer right then. I think this is totally reasonable--Arthur is a smart guy, but not that involved with the AI safety community; my best guess before the AIRCS workshop was that he wouldn't be a good fit at MIRI immediately because of his insufficient background in AI safety, and then at the AIRCS workshop I felt like it turned out that this guess was right and the gamble hadn't paid off (though I told Arthur, truthfully, that I hoped he'd keep in contact).

During a trip to the beach, I finally had the courage to tell the recruiter that AIRCS is quite complex to navigate for me, when it's both a CFAR workshop and a job interview.

:( This is indeed awkward and I wish I knew how to do it better. My main strategy is to be as upfront and accurate with people as I can; AFAICT, my level of transparency with applicants is quite unusual. This often isn't sufficient to make everything okay.

First: they could mention to people coming to AIRCS for a future job interview that some things will be awkward for them, but that they get the same workshop as everyone else so they'll have to deal with it.

I think I do mention this (and am somewhat surprised that it was a surprise for Arthur)

Furthermore, I do understand why it's generally a bad idea to tell unknown people in your buildings that they won't have the job.

I wasn't worried about Arthur destroying the AIRCS venue; I needed to confer with my coworkers before making a decision.

I do not believe that my first piece of advice will be listened to. During a discussion on the last night, near the fire, the recruiter was talking with some other MIRI staff and participants, and at some point they mentioned MIRI's recruiting process. I think they were saying that they loved recruiting because it leads them to work with extremely interesting people, but that it's hard to find them. Given that my goal was explicitly to be recruited, and that I didn't have any answer yet, it was extremely awkward for me. I can't state explicitly why; after all, I didn't have to add anything to their remark. But even if I can't explain why I think that, I still firmly believe that it's the kind of thing a recruiter should avoid saying near a potential hire.

I don't quite understand what Arthur's complaint is here, though I agree that it's awkward having people be at events with people who are considering hiring them.

MIRI here is an exception. I can see so many reasons not to hire me that the outcome was unsurprising. What surprised me was the process, and that they considered me in the first place.

Arthur is really smart and it seemed worth getting him more involved in all this stuff.

Comment by buck on We run the Center for Applied Rationality, AMA · 2019-12-21T06:32:02.680Z · LW · GW

For the record, parts of that ratanon post seem extremely inaccurate to me; for example, the claim that MIRI people are deferring to Dario Amodei on timelines is not even remotely reasonable. So I wouldn't take it that seriously.

Comment by buck on Let's talk about "Convergent Rationality" · 2019-12-05T01:21:31.848Z · LW · GW

In OpenAI's Roboschool blog post:

This policy itself is still a multilayer perceptron, which has no internal state, so we believe that in some cases the agent uses its arms to store information.

Comment by buck on Buck's Shortform · 2019-12-02T23:29:42.635Z · LW · GW

formatting problem, now fixed

Comment by buck on Aligning a toy model of optimization · 2019-12-02T03:47:02.605Z · LW · GW

Given a policy π we can directly search for an input on which it behaves a certain way.

(I'm sure this point is obvious to Paul, but it wasn't to me)

We can search for inputs on which a policy behaves badly, which is really helpful for verifying the worst case of a certain policy. But we can't search for a policy which has a good worst case, because that would require using the black box inside the function passed to the black box, which we can't do. I think you can also say this as "the black box is an NP oracle, not a Σ₂ oracle".

This still means that we can build a system which in the worst case does nothing, rather than in the worst case is dangerous: we do whatever thing to get some policy, then we search for an input on which it behaves badly, and if one exists we don't run the policy.
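
Here's a minimal sketch of that scheme, with a hypothetical `search` function standing in for the black box--just the shape of the argument as I understand it, not Paul's actual construction:

```python
# Hypothetical sketch: the box lets us search over inputs for a fixed policy,
# but not over policies quantified over all inputs.
def search(predicate, candidates):
    """Stand-in for the black-box optimizer: return something satisfying predicate, else None."""
    for c in candidates:
        if predicate(c):
            return c
    return None

def deploy_safely(policy, inputs, behaves_badly):
    # We CAN ask the box for an input on which this particular policy misbehaves.
    bad_input = search(lambda x: behaves_badly(policy, x), inputs)
    if bad_input is not None:
        return None      # worst case: refuse to run the policy (do nothing)
    return policy        # no bad input found: worst case is "does nothing", not "is dangerous"

# What the toy model forbids is nesting the box inside its own argument, e.g.
#   search(lambda pi: search(lambda x: behaves_badly(pi, x), inputs) is None, policies)
# which is what "search for a policy with a good worst case" would require.
```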

Comment by buck on Robustness to Scale · 2019-12-02T00:45:48.090Z · LW · GW

I think that the terms introduced by this post are great and I use them all the time

Comment by buck on Six AI Risk/Strategy Ideas · 2019-12-02T00:28:38.110Z · LW · GW

Ah yes this seems totally correct

Comment by buck on Buck's Shortform · 2019-12-02T00:27:14.852Z · LW · GW

[I'm not sure how good this is, it was interesting to me to think about, idk if it's useful, I wrote it quickly.]

Over the last year, I internalized Bayes' Theorem much more than I previously had; this led me to noticing that when I applied it in my life it tended to have counterintuitive results; after thinking about it for a while, I concluded that my intuitions were right and I was using Bayes wrong. (I'm going to call Bayes' Theorem "Bayes" from now on.)

Before I can tell you about that, I need to make sure you're thinking about Bayes in terms of ratios rather than fractions. Bayes is enormously easier to understand and use when described in terms of ratios. For example: Suppose that 1% of women have a particular type of breast cancer, and a mammogram is 20 times more likely to return a positive result if you do have breast cancer, and you want to know the probability that you have breast cancer if you got that positive result. The prior probability ratio is 1:99, and the likelihood ratio is 20:1, so the posterior odds are 1:99 × 20:1 = 20:99, so you have a probability of 20/(20+99) ≈ 17% of having breast cancer.
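
Here's a tiny sketch of that ratio-form calculation, just redoing the 1:99 × 20:1 arithmetic:

```python
from fractions import Fraction

prior_odds = Fraction(1, 99)         # 1% base rate -> 1:99 odds of cancer
likelihood_ratio = Fraction(20, 1)   # positive result is 20x more likely given cancer

posterior_odds = prior_odds * likelihood_ratio           # 20:99
posterior_prob = posterior_odds / (1 + posterior_odds)   # 20/119
print(posterior_odds, float(posterior_prob))             # 20/99, ~0.168
```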

I think that this is absurdly easier than using the fraction formulation. I think that teaching the fraction formulation is the single biggest didactic mistake that I am aware of in any field.


Anyway, a year or so ago I got into the habit of calculating things using Bayes whenever they came up in my life, and I quickly noticed that Bayes seemed surprisingly aggressive to me.

For example, the first time I went to the Hot Tubs of Berkeley, a hot tub rental place near my house, I saw a friend of mine there. I wondered how regularly he went there. Consider the hypotheses of "he goes here three times a week" and "he goes here once a month". The likelihood ratio is about 12x in favor of the former hypothesis. So if I previously was ten to one against the three-times-a-week hypothesis compared to the once-a-month hypothesis, I'd now be 12:10 = 6:5 in favor of it. This felt surprisingly high to me.

(I have a more general habit of thinking about whether the results of calculations feel intuitively too low or high to me; this has resulted in me noticing amusing inconsistencies in my numerical intuitions. For example, my intuitions say that $3.50 for ten photo prints is cheap, but 35c per print is kind of expensive.)

Another example: A while ago I walked through six cars of a train, which felt like an unusually long way to walk. But I realized that I'm 6x more likely to see someone who walks 6 cars than someone who walks 1.

In all these cases, Bayes' Theorem suggested that I update further in the direction of the hypothesis favored by the likelihood ratio than I intuitively wanted to. After considering this a bit more, I have come to the conclusion that my intuitions were directionally right; I was calculating the likelihood ratios in a biased way, and I was also bumping up against an inconsistency in how I estimated priors and how I estimated likelihood ratios.

If you want, you might enjoy trying to guess what mistake I think I was making, before I spoil it for you.


Here's the main mistake I think I was making. Remember the two hypotheses about my friend going to the hot tub place 3x a week vs once a month? I said that the likelihood ratio favored the first by 12x. I calculated this by assuming that in both cases, my friend visited the hot tub place on random nights. But in reality, when I'm asking whether my friend goes to the hot tub place 3x every week, I'm asking about the total probability of all hypotheses in which he visits the hot tub place 3x per week. There are a variety of such hypotheses, and when I construct them, I notice that some of the hypotheses placed a higher probability on me seeing my friend than the random night hypothesis. For example, it was a Saturday night when I saw my friend there and started thinking about this. It seems kind of plausible that my friend goes once a month and 50% of the times he visits are on a Saturday night. If my friend went to the hot tub place three times a week on average, no more than a third of those visits could be on a Saturday night.
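
To make that concrete, here's a rough sketch of the comparison. The "half of his visits are on Saturdays" sub-hypothesis and the ~4.3 Saturdays per month are made-up illustrative numbers:

```python
# Probability of seeing my friend there on a given Saturday night, under three hypotheses:
p_3x_week_uniform = 3 / 7          # three visits a week, spread over random nights
p_1x_month_uniform = 1 / 30        # one visit a month, on a random night
p_1x_month_saturdays = 0.5 / 4.3   # one visit a month, half of visits on Saturdays

print(p_3x_week_uniform / p_1x_month_uniform)     # ~12.9x: the naive likelihood ratio
print(p_3x_week_uniform / p_1x_month_saturdays)   # ~3.7x: much weaker evidence against "once a month"
```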

I think there's a general phenomenon where when I make a hypothesis class like "going once a month", I neglect to think about the specific hypotheses in the class which make the observed data more likely. The hypothesis class offers a tempting way to calculate the likelihood, but it's in fact a trap.

There's a general rule here, something like: When you see something happen that a hypothesis class thought was unlikely, you update a lot towards hypotheses in that class which gave it unusually high likelihood.

And this next part is something that I've noticed, rather than something that follows from the math, but it seems like most of the time when I make up hypothesis classes, something like this happens: I initially calculate the likelihood to be lower than it actually is, and the likelihoods of the different hypothesis classes end up closer together than my first calculation suggested.

(I suspect that the concept of a maximum entropy hypothesis is relevant. For every hypothesis class, there's a maximum entropy (aka maxent) hypothesis, which is the hypothesis which is maximally uncertain subject to the constraint of the hypothesis class. Eg the maximum entropy hypothesis for the class "my friend visits the hot tub place three times a month on average" is the hypothesis where the probability of my friend visiting the hot tub place on any given day is equal and uncorrelated across days. In my experience, in real-world cases, hypothesis classes tend to contain non-maxent hypotheses which fit the data much better. In general for a statistical problem, these hypotheses don't do better than the maxent hypothesis; I don't know why they tend to do better in the problems I think about.)


Another thing causing my posteriors to be excessively biased towards low-prior high-likelihood hypotheses is that priors tend to be more subjective to estimate than likelihoods are. I think I'm probably underconfident in assigning extremely high or low probabilities to hypotheses, and this means that when I see something that looks like moderate evidence of an extremely unlikely event, the likelihood ratio is more extreme than the prior, leading me to have a counterintuitively high posterior on the low-prior hypothesis. I could get around this by being more confident in my probability estimates at the 98% or 99% level, but it takes a really long time to become calibrated on those.

Comment by buck on Open & Welcome Thread - November 2019 · 2019-12-01T23:27:03.336Z · LW · GW

Email me at buck@intelligence.org with some more info about you and I might be able to give you some ideas (and we can maybe talk about things you could do for ai alignment more generally)

Comment by buck on Six AI Risk/Strategy Ideas · 2019-08-29T05:02:46.939Z · LW · GW

Minor point: I think asteroid strikes are probably very highly correlated between Everett branches (though maybe the timing of spotting an asteroid on a collision course is variable).

Comment by buck on Buck's Shortform · 2019-08-21T01:20:18.379Z · LW · GW

A couple weeks ago I spent an hour talking over video chat with Daniel Cantu, a UCLA neuroscience postdoc who I hired on Wyzant.com to spend an hour answering a variety of questions about neuroscience I had. (Thanks Daniel for reviewing this blog post for me!)

The most interesting thing I learned is that I had quite substantially misunderstood the connection between convolutional neural nets and the human visual system. People claim that these are somewhat bio-inspired, and that if you look at early layers of the visual cortex you'll find that it operates kind of like the early layers of a CNN, and so on.

The claim that the visual system works like a CNN didn’t quite make sense to me though. According to my extremely rough understanding, biological neurons operate kind of like the artificial neurons in a fully connected neural net layer--they have some input connections and a nonlinearity and some output connections, and they have some kind of mechanism for Hebbian learning or backpropagation or something. But that story doesn't seem to have a mechanism for how neurons do weight tying, which to me is the key feature of CNNs.
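
(For illustration, here's a toy parameter count of what weight tying buys you--my own made-up numbers, not anything from the conversation. A conv layer reuses one small kernel at every image position, whereas a dense layer needs separate weights for every position.)

```python
# Toy comparison: 32x32 grayscale input, 16 feature maps, 3x3 receptive fields.
kernel_h, kernel_w, in_channels, out_channels = 3, 3, 1, 16
out_h = out_w = 32 - 3 + 1    # 30x30 output positions, no padding

conv_params = kernel_h * kernel_w * in_channels * out_channels             # 144 shared weights
dense_params = (32 * 32 * in_channels) * (out_h * out_w * out_channels)    # ~14.7 million weights
print(conv_params, dense_params)   # weight tying is why the first number is so small
```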

Daniel claimed that indeed human brains don't have weight tying, and we achieve the efficiency gains over dense neural nets by two other mechanisms instead:

Firstly, the early layers of the visual cortex are set up to recognize particular low-level visual features like edges and motion, but this is largely genetically encoded rather than learned with weight-sharing. One way that we know this is that mice develop a lot of these features before their eyes open. These low-level features can be reinforced by positive signals from later layers, like other neurons, but these updates aren't done with weight-tying. So the weight-sharing and learning here is done at the genetic level.

Secondly, he thinks that we get around the need for weight-sharing at later levels by not trying to be able to recognize complicated details with different neurons. Our vision is way more detailed in the center of our field of view than around the edges, and if we need to look at something closely we move our eyes over it. He claims that this gets around the need to have weight tying, because we only need to be able to recognize images centered in one place.

I was pretty skeptical of this claim at first. I pointed out that I can in fact read letters that are a variety of distances from the center of my visual field; his guess is that I learned to read all of these separately. I'm also kind of confused by how this story fits in with the fact that humans seem to relatively quickly learn to adapt to inversion goggles. I would love to check what some other people who know neuroscience think about this.

I found this pretty mindblowing. I've heard people use CNNs as an example of how understanding brains helped us figure out how to do ML stuff better; people use this as an argument for why future AI advances will need to be based on improved neuroscience. This argument seems basically completely wrong if the story I presented here is correct.

Comment by buck on Buck's Shortform · 2019-08-20T21:04:12.980Z · LW · GW

I recommend looking on Wyzant.

Comment by buck on Buck's Shortform · 2019-08-18T07:22:26.379Z · LW · GW

I think that an extremely effective way to get a better feel for a new subject is to pay an online tutor to answer your questions about it for an hour.

It turns out that there are a bunch of grad students on Wyzant who mostly work tutoring high school math or whatever but who are very happy to spend an hour answering your weird questions.

For example, a few weeks ago I had a session with a first-year Harvard synthetic biology PhD. Before the session, I spent a ten-minute timer writing down things that I currently didn't get about biology. (This is an exercise worth doing even if you're not going to have a tutor, IMO.) We spent the time talking about some mix of the questions I'd prepared, various tangents that came up during those explanations, and his sense of the field overall.

I came away with a whole bunch of my minor misconceptions fixed, a few pointers to topics I wanted to learn more about, and a way better sense of what the field feels like and what the important problems and recent developments are.

There are a few reasons that having a paid tutor is a way better way of learning about a field than trying to meet people who happen to be in that field. I really like it that I'm paying them, and so I can aggressively direct the conversation to wherever my curiosity is, whether it's about their work or some minor point or whatever. I don't need to worry about them getting bored with me, so I can just keep asking questions until I get something.

Conversational moves I particularly like:

  • "I'm going to try to give the thirty second explanation of how gene expression is controlled in animals; you should tell me the most important things I'm wrong about."
  • "Why don't people talk about X?"
  • "What should I read to learn more about X, based on what you know about me from this conversation?"

All of the above are way faster with a live human than with the internet.

I think that doing this for an hour or two weekly will make me substantially more knowledgeable over the next year.

Various other notes on online tutors:

  • Online language tutors are super cheap--I had some Japanese tutor who was like $10 an hour. They're a great way to practice conversation. They're also super fun IMO.
  • Sadly, tutors from well paid fields like programming or ML are way more expensive.
  • If you wanted to save money, you could gamble more on less credentialed tutors, who are often $20-$40 an hour.

If you end up doing this, I'd love to hear your experience.

Comment by buck on "Other people are wrong" vs "I am right" · 2019-02-25T01:12:01.891Z · LW · GW

I'm confused about what point you're making with the bike thief example. I'm reading through that post and its comments to see if I can understand your post better with that as background context, but you might want to clarify that part of the post (with a reader who doesn't have that context in mind).

Can you clarify what is unclear about it?

Comment by buck on Current AI Safety Roles for Software Engineers · 2018-11-10T04:40:13.666Z · LW · GW

I believe they would like to hire several engineers in the next few years.

We would like to hire many more than several engineers--we want to hire as many engineers as possible; this would be dozens if we could, but it's hard to hire, so we'll more likely end up hiring more like ten over the next year.

I think that MIRI engineering is a really high impact opportunity, and I think it's definitely worth the time for EA computer science people to apply or email me (buck@intelligence.org).

Comment by buck on Weird question: could we see distant aliens? · 2018-04-21T01:04:28.924Z · LW · GW

My main concern with this is the same as the problem listed on Wei Dai's answer: whether a star near us is likely to block out this light. The sun is about 10^9m across. A star that's 10 thousand light years away (this is 10% of the diameter of the Milky Way) occupies about (1e9m / (10000 lightyears * 2 * pi))**2 = 10^-24 of the night sky. A galaxy that's 20 billion light years away occupies something like (100000 lightyears / 20 billion lightyears) ** 2 ~= 2.5e-11. So galaxies occupy more space than stars. So it would be weird if individual stars blocked out a whole galaxy.
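
Here's a quick check of that arithmetic, using the same back-of-the-envelope formula as above (the metres-per-light-year conversion is my own constant):

```python
import math

ly = 9.46e15                  # metres per light year
sun_diameter = 1e9            # metres, as assumed above

star_fraction = (sun_diameter / (10_000 * ly * 2 * math.pi)) ** 2   # ~3e-24 of the sky
galaxy_fraction = (100_000 * ly / (20e9 * ly)) ** 2                 # ~2.5e-11 of the sky
print(star_fraction, galaxy_fraction)   # galaxies cover vastly more sky than individual stars
```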

Comment by buck on Weird question: could we see distant aliens? · 2018-04-21T01:04:24.485Z · LW · GW

Another piece of idea: If you're extremely techno-optimistic, then I think it would be better to emit light at weird wavelengths than to just emit a lot of light. Eg emitting light at two wavelengths with ratio pi or something. This seems much more unmistakably intelligence-caused than an extremely bright light.

Comment by buck on Weird question: could we see distant aliens? · 2018-04-20T18:50:10.385Z · LW · GW

My first idea is to make two really big black holes and then make them merge. We observed gravitational waves from two black holes of around 25 solar masses each, located 1.8 billion light years away. Presumably this signal decreases as an inverse square times exponential decay; ignoring the exponential decay, this suggests to me that we need 100 times as much mass to be as prominent from 18 billion light years. A galaxy mass is around 10^12 solar masses. So if we spent 2500 solar masses on this each year, it would be at least as prominent as the gravitational wave that we detected, and we could do this a billion times with a galaxy. To be safe, I'd 10x the strength of the waves, so that we could do it 100 million times with a galaxy.

Currently our instruments aren't sensitive enough to detect which galaxy was emitting these bizarrely strong gravitational waves. So I'd combine this with Wei Dai's suggestion of making an extremely bright beacon using the accretion disks resulting from the creation of these black holes.