Posts

CFAR: Progress Report & Future Plans 2019-12-19T06:19:58.948Z · score: 66 (27 votes)
Why are the people who could be doing safety research, but aren’t, doing something else? 2019-08-29T08:51:33.219Z · score: 25 (6 votes)
What's the optimal procedure for picking macrostates? 2019-08-26T09:34:15.647Z · score: 12 (5 votes)
If the "one cortical algorithm" hypothesis is true, how should one update about timelines and takeoff speed? 2019-08-26T07:08:19.634Z · score: 24 (9 votes)
Cognitive Benefits of Exercise 2019-08-14T21:40:35.145Z · score: 29 (15 votes)
adam_scholl's Shortform 2019-08-12T00:53:37.221Z · score: 1 (1 votes)

Comments

Comment by adam_scholl on We run the Center for Applied Rationality, AMA · 2019-12-27T07:44:14.153Z · score: 39 (6 votes) · LW · GW

Ben, just to check, before I respond—would a fair summary of your position here be, "CFAR should write more in public, e.g. on LessWrong, so that A) it can have better feedback loops, and B) more people can benefit from its ideas?"

Comment by adam_scholl on We run the Center for Applied Rationality, AMA · 2019-12-24T07:10:57.462Z · score: 10 (6 votes) · LW · GW

To be clear, others at CFAR have spent time looking into these things, I think; Anna might be able to chime in with details. I just meant that I haven't personally.

Comment by adam_scholl on We run the Center for Applied Rationality, AMA · 2019-12-23T04:07:53.358Z · score: 8 (5 votes) · LW · GW

Thanks for spelling this out. My guess is that there are some semi-deep cruxes here, and that they would take more time to resolve than I have available to allocate at the moment. If Eli someday writes that post about the Nisbett and Wilson paper, that might be a good time to dive in further.

Comment by adam_scholl on We run the Center for Applied Rationality, AMA · 2019-12-23T03:59:31.263Z · score: 6 (4 votes) · LW · GW

(Unsure, but I'm suspicious that the distinction between these two things might not be clear).

Comment by adam_scholl on We run the Center for Applied Rationality, AMA · 2019-12-23T01:23:21.987Z · score: 8 (5 votes) · LW · GW

I just googled around for pictures of things I think are neat. I think ctenophores are neat, since they look like alien spaceships and maybe evolved neurons independently; I think it's neat that wind sometimes makes clouds do the vortex thing that canoe paddles make water do, etc.

Comment by adam_scholl on We run the Center for Applied Rationality, AMA · 2019-12-23T01:02:12.390Z · score: 5 (3 votes) · LW · GW

Yeah, same; I think this term has experienced some semantic drift, which is confusing. I meant to refer to pre-verbal intuitions in general, not just ones accompanied by physical sensation.

Comment by adam_scholl on We run the Center for Applied Rationality, AMA · 2019-12-23T00:55:30.511Z · score: 10 (6 votes) · LW · GW

I have an interest in making certain parts of philosophy more productive, and in turning some engineers into "people with more of some specific philosophical skills." I just meant that I'm not excited about most ways I can imagine of "making the average AIRCS participant's epistemics more like that of the average professional philosopher."

Comment by adam_scholl on We run the Center for Applied Rationality, AMA · 2019-12-23T00:42:06.911Z · score: 21 (9 votes) · LW · GW

CFAR does spend substantially less time circling now than it did a couple of years ago, yeah. I think this is partly because Pete left (he spent time learning about circling when he was younger, and hence found it especially easy to notice the lack of circling-type skill among rationalists, much as I spent time learning about philosophy when I was younger and hence found it especially easy to notice the lack of philosophy-type skill among AIRCS participants), and partly because many staff felt their marginal returns from circling practice were diminishing, so they started focusing more on other things.

Comment by adam_scholl on We run the Center for Applied Rationality, AMA · 2019-12-23T00:29:50.263Z · score: 16 (6 votes) · LW · GW

Said, I appreciate you pointing out that I used the term "extrospection" in a non-standard way—I actually didn't realize that. As I've heard it used, which is probably idiosyncratic local jargon, it means something like the theory-of-mind analog of introspection: something like "feeling, yourself, something of what the person you're talking with is feeling." You obviously can't do this perfectly, but I think many people find that e.g. it's easier to gain information about why someone is sad, and about how it feels for them to be currently experiencing this sadness, if you use empathy/theory of mind/the thing I think people are often gesturing at when they talk about "mirror neurons," to try to emulate their sadness in your own brain. To feel a bit of it, albeit an imperfect approximation of it, yourself.

Similarly, I think it's often easier for one to gain information about why e.g. someone feels excited about pursuing a particular line of inquiry, if one tries to emulate their excitement in one's own brain. Personally, I've found this empathy/emulation skill quite helpful for research collaboration, because it makes it easier to trade information about people's vague, sub-verbal curiosities and intuitions about e.g. "which questions are most worth asking."

Circlers don't generally use this skill for research. But it is the primary skill, I think, that circling is designed to train, and my impression is that many circlers have become relatively excellent at it as a result.

Comment by adam_scholl on We run the Center for Applied Rationality, AMA · 2019-12-22T11:26:41.995Z · score: 30 (12 votes) · LW · GW

(I want to be clear that the above is an account of why I personally feel excited about CFAR having investigated circling. I think this account also reasonably describes the motivations of many key staff, and of CFAR's behavior as an institution. But CFAR struggles with communicating research intuitions, too; I think in this case these intuitions did not propagate fully among our staff, and as a result we employed a few people for a while whose primary interest in circling was more like "for its own sake," who sometimes discussed it in ways which felt epistemically unhealthy to me. I think people correctly picked up on this as worrying, and I don't want to suggest that didn't happen; just that there is, I think, a sensible reason why CFAR as an institution tends to investigate local blindspots by searching for non-locals with a patch, thereby alarming locals about our epistemic allegiance.)

Comment by adam_scholl on We run the Center for Applied Rationality, AMA · 2019-12-22T11:26:23.819Z · score: 116 (35 votes) · LW · GW

I think a crisp summary here is: CFAR is in the business of helping create scientists, more than the business of doing science. Some of the things it makes sense to do to help create scientists look vaguely science-ish, but others don't. And this sometimes causes people to worry (understandably, I think) that CFAR isn't enthused about science, or doesn't understand its value.

Thing is, if you're looking to improve a given culture, one natural move is to explore that culture's blindspots. And exploring those blindspots is, in many cases, I think, not going to look like an activity typical of that culture.

Here's an example: there's a particular bug that I encounter extremely often at AIRCS workshops, but rarely at other sorts of workshops. I don't yet feel like I have a great model of it, but it has something to do with not fully understanding how English words have referents at different levels of abstraction. It's the sort of confusion that I think reading A Human's Guide to Words often resolves in people, and which results in people asking questions like:

  • "Should I replace [my core goal x] with [this list of "ethical" goals I recently heard about]?"
  • "Why is the fact that I have a goal a good reason to optimize for it?"
  • "Are propositions like 'x is good' or 'y is beautiful' even meaningful claims?"

When I encounter this bug I often point to a nearby tree, and start describing it at different levels of abstraction. The word "tree" refers to a bunch of different related things: to a member of an evolutionarily related category of organisms, to the general sort of object humans tend to emit the phonemes "tree" to describe, to this particular mid-sized physical object here in front of us, to the particular arrangement of particles that composes the object, etc. And it's sensible to use the term "tree" anyway, as long as you're careful to track which level of abstraction you're referring to with a given proposition—i.e., as long as you're careful to be precise about exactly which map/territory correspondence you're asserting.

This is obvious to most science-minded people. But it's often less obvious that the same procedure, with the same carefulness, is needed to sensibly discuss concepts like "goal" and "good." Just as it doesn't make sense to discuss whether a given tree is "strong" without internally distinguishing between whether you mean "in terms of its likelihood to fall over" or "in terms of its molecular bonds," it doesn't make sense to discuss whether a goal is "good" without internally distinguishing between whether you mean "relative to societal consensus" or "relative to my current set of preferences" or "relative to the set of preferences I might come to have given more time to think."

This conversation often seems to help resolve the confusion. At some point, I may design a class about this, so that more such confusions can be resolved. But I expect that if I do, some of the engineers in the audience will get nervous, since it will look an awful lot like a philosophy class! (I already get this objection regularly one-on-one). That is, I expect some may wonder whether the AIRCS staff, who claim to be running workshops for engineers, are actually more enthusiastic about philosophy than engineering.

Truth is, we're not. Philosophy strikes me as, on the whole, an unusually unproductive field full of people with highly questionable epistemics. I certainly don't want to turn the engineers into philosophers—I just want to use a particular helpful insight from philosophy to patch a bug which, for whatever reason, seems to commonly afflict AIRCS participants.

CFAR faces this dilemma a lot. For example, we spent a bunch of time circling for a while, and this made many rationalists nervous—was CFAR as an institution, which claimed to be running workshops for science-minded, sequences-reading, law-based-reasoning-enthused rationalists, actually more enthusiastic about woo-laden authentic relating games?

We weren't. But we looked around, and noticed that lots of the promising people around us seemed particularly bad at extrospection—i.e., at simulating the felt senses of their conversational partners in their own minds. This seemed worrying, among other reasons because early-stage research intuitions (e.g. about which lines of inquiry feel exciting to pursue) often seem to be stored sub-verbally. So we looked to specialists in extrospection for a patch.

Comment by adam_scholl on We run the Center for Applied Rationality, AMA · 2019-12-22T08:42:26.129Z · score: 9 (5 votes) · LW · GW

Well, I think it can both be the case that a given staff member thinks the organization's mission is important, and also that, due to their particular distribution of comparative advantages, current amount of burnout, etc., it would be on net better for them to work elsewhere. And I think most of our turnover has resulted from considerations like this, rather than from e.g. people deciding CFAR's mission was doomed.

I think the concern about short median tenure leading to research loss makes sense, and this has in fact happened to some extent. But I'm not all that worried about it, personally, for a few reasons:

  • This cost is reduced because we're in the teaching business. That is, relative to an organization that does pure research, we're somewhat better positioned to transfer institutional knowledge to new staff, since much of the relevant knowledge has already been heavily optimized for easy transferability.
  • There's significant benefit to turnover, too. I think the skills staff develop while working at CFAR are likely to be useful for work at a variety of orgs; I feel excited about the roles a number of former staff are playing elsewhere, and expect I'll be excited about future roles our current staff play elsewhere too.
  • Many of our staff already have substantial "work-related experience," in some sense, before they're hired. For example, I spent a bunch of time in college reading LessWrong, trying to figure out metaethics, etc., which I think helped me become a much better CFAR instructor than I might have been otherwise. I expect many lesswrongers, for example, have already developed substantial skill relevant to working effectively at CFAR.

Comment by adam_scholl on We run the Center for Applied Rationality, AMA · 2019-12-22T04:26:03.622Z · score: 19 (7 votes) · LW · GW

Yeah, I predict that if one showed Val or Pete the line about fitting naturally into CFAR’s environment without triggering antibodies, they would laugh hard and despairingly. There was definitely friction.

Comment by adam_scholl on We run the Center for Applied Rationality, AMA · 2019-12-22T04:12:01.235Z · score: 19 (10 votes) · LW · GW

I think it would depend a lot on which sort of individual life outcomes you wanted to compare. I have basically no idea where these programs stand, relative to CFAR, on things like increasing participant happiness, productivity, relationship quality, or financial success, since CFAR mostly isn't optimizing for producing effects in these domains.

I would be surprised if CFAR didn't come out ahead in terms of things like increasing participants' ability to notice confusion, communicate subtle intuitions, and navigate pre-paradigmatic technical research fields. But I'm not sure, since in general I model these orgs as having sufficiently different goals from ours that I haven't spent much time learning about them.

Comment by adam_scholl on We run the Center for Applied Rationality, AMA · 2019-12-22T04:09:18.391Z · score: 9 (5 votes) · LW · GW

To be honest I haven't noticed much change, except obviously for the literal absence of Duncan (which is a very noticeable absence; among other things Duncan is an amazing teacher, imo better than anyone currently on staff).

Comment by adam_scholl on We run the Center for Applied Rationality, AMA · 2019-12-22T02:40:52.667Z · score: 18 (7 votes) · LW · GW

Thanks to your recommendation I recently read New Atlantis, by Francis Bacon, and it was so great! It's basically Bacon's list of things he wished society had, ranging from "clothes made of sea-water-green satin" and "many different types of beverages" to "research universities that employ full-time specialist scholars."

Comment by adam_scholl on We run the Center for Applied Rationality, AMA · 2019-12-22T01:23:01.135Z · score: 38 (9 votes) · LW · GW

I have a Google Doc full of ideas. Probably I'll never write most of these, and if I do probably much of the content will change. But here are some titles, as they currently appear in my personal notes:

  • Mesa-Optimization in Humans
  • Primitivist Priors v. Pinker Priors
  • Local Deontology, Global Consequentialism
  • Fault-Tolerant Note-Scanning
  • Goal Convergence as Metaethical Crucial Consideration
  • Embodied Error Tracking
  • Abnormally Pleasurable Insights
  • Burnout Recovery
  • Against Goal "Legitimacy"
  • Computational Properties of Slime Mold
  • Steelmanning the Verificationist Criterion of Meaning
  • Manual Tribe Switching
  • Manual TAP Installation
  • Keep Your Hobbies

Comment by adam_scholl on We run the Center for Applied Rationality, AMA · 2019-12-21T23:53:47.959Z · score: 3 (2 votes) · LW · GW

I expect there are a bunch which never hear about us due to language barrier, and/or because they're geographically distant from most of our alumni. But I would be surprised if there weren't also lots of geographically-near, epistemically-promising people who've just never happened to encounter someone recommending a workshop.

Comment by adam_scholl on We run the Center for Applied Rationality, AMA · 2019-12-21T13:09:39.842Z · score: 12 (7 votes) · LW · GW

All hail Logmoth, the rightful caliph!

Comment by adam_scholl on We run the Center for Applied Rationality, AMA · 2019-12-21T12:40:38.570Z · score: 20 (8 votes) · LW · GW

CFAR relies heavily on selection effects for finding/filtering workshop participants. In general we do very little marketing or direct outreach, although AIRCS and MSFP do some of the latter; mostly people hear about us via word of mouth. This system actually works surprisingly well (to me, at least) at causing promising people to apply.

Still, I think many of the people we would be most happy to have at a workshop probably never hear about us, or at least never apply. One could try fixing this with marketing/outreach strategies, but many of them seem to risk disrupting the selection effects which, so far, have been a necessary ingredient for nearly all of our impact.

So I sometimes fantasize about a new organization (or something like one) being created that draws loads of people together, via selection effects similar to those that have attracted people to e.g. LessWrong, since that would make it easier for us to find more promising people.

(I also—and this isn’t a wish for an organization, exactly, but it gestures at the kind of problem I speculate some organization could potentially help solve—sometimes fantasize about developing something like “scouts” at existing places with such selection effects. For example, a bunch of safety researchers competed in IMO/IOI when they were younger; I think it would be plausibly valuable for us to make friends with some team coaches, and for them to occasionally put us in touch with promising people).

Comment by adam_scholl on We run the Center for Applied Rationality, AMA · 2019-12-21T11:16:59.603Z · score: 15 (5 votes) · LW · GW

I really like Language, Truth and Logic, by A.J. Ayer. It's an old book—published in 1936—and it's pretty silly in some ways. It's basically an early pro-empiricism manifesto, and I think some of its arguments are oversimplified, overconfident, or wrong. Even so, it does a great job of teaching some core mental motions of analytic philosophy. And its motivating intuitions feel familiar: I suspect that if 25-year-old Ayer got transported to the present, given internet access etc., we would see him on LessWrong pretty quick.

Comment by adam_scholl on We run the Center for Applied Rationality, AMA · 2019-12-21T09:52:30.601Z · score: 16 (7 votes) · LW · GW

I really loved this post on Occam's Razor. Before encountering it, I basically totally misunderstood the case for using the heuristic, and so (wrongly, I now think) considered it kind of dumb.

I also especially loved "The Second Law of Thermodynamics, and Engines of Cognition," which gave me a wonderful glimpse (for the first time) into how "laws of inference" ground in laws of physics.

Comment by adam_scholl on We run the Center for Applied Rationality, AMA · 2019-12-21T09:27:23.074Z · score: 4 (3 votes) · LW · GW

I did. I have some but not all of the images saved; happy to share what I have, feel free to pm me for links.

Comment by adam_scholl on We run the Center for Applied Rationality, AMA · 2019-12-21T03:50:35.899Z · score: 17 (8 votes) · LW · GW

I think we eat our own dogfood a lot. It’s pretty obvious in meetings—e.g., people do Focusing-like moves to explain subtle intuitions, remind each other to set TAPs, do explicit double cruxing, etc.

As to whether this dogfood allows us to perform better—I strongly suspect so, but I’m not sure what legible evidence I can give about that. It seems to me that CFAR has managed to have a surprisingly large (and surprisingly good) effect on AI safety as a field, given our historical budget and staff size. And I think there are many attractors in org space (some fairly powerful) that would have made CFAR less impactful had it fallen into them, and that it has avoided in part because its staff developed unusual skill at noticing confusion and resolving internal conflict (e.g. about org-level crucial considerations).

Comment by adam_scholl on We run the Center for Applied Rationality, AMA · 2019-12-21T03:39:11.662Z · score: 16 (6 votes) · LW · GW

The capacity to develop true beliefs, so as to better achieve your goals.

Comment by adam_scholl on We run the Center for Applied Rationality, AMA · 2019-12-21T03:32:46.048Z · score: 16 (7 votes) · LW · GW

We have not conducted a thorough scientific investigation of our lamps, food, or furniture. Just as one might have reasonable confidence in a proposition like "tired people are sometimes grumpy" without running an RCT, one can, I think, be reasonably confident that e.g. vegetarians will be upset if there’s no vegetarian food, or that people will be more likely to clump in small groups if the chairs are arranged in small groups.

I agree the lighting recommendations are quite specific. I have done lots of testing (relative to e.g. the average American) of different types of lamps, with different types of bulbs in different rooms, and have informally gathered data about people’s preferences. I have not done this super formally, since I don’t think that would be worth my time, but in my experience the bulb preferences of the subset of people who report any strong lighting preferences at all tend to correlate strongly with CRI. Currently, as far as I know, incandescents have the highest CRI of commonly available bulbs, so I generally recommend those. My other specific suggestions were developed via a similar process.

Comment by adam_scholl on We run the Center for Applied Rationality, AMA · 2019-12-21T03:00:23.889Z · score: 13 (6 votes) · LW · GW

I buy that General Semantics was in some sense a memetic precursor to some of the ideas described in the sequences/at CFAR, but I think this effect was mostly indirect, so it seems misleading to me to describe CFAR as being heavily influenced by it. Davis Kingsley, former CFAR employee and current occasional guest instructor, has read a bunch about General Semantics, I think, and mentions it frequently, but I'm not aware of direct influences aside from this.

Comment by adam_scholl on We run the Center for Applied Rationality, AMA · 2019-12-21T02:37:34.790Z · score: 8 (6 votes) · LW · GW

I also think Musk is really cool in lots of ways. I didn't intend to express skepticism of him in particular, so much as of what might happen if one created e.g. 10k more people as agenty as him. For example, I can easily imagine this accelerating capabilities progress relative to safety progress, which on my current models is risky.

Comment by adam_scholl on We run the Center for Applied Rationality, AMA · 2019-12-20T23:42:21.921Z · score: 4 (3 votes) · LW · GW

I'm not aware of existing orgs that seem likely to me to create such a factory.

Comment by adam_scholl on We run the Center for Applied Rationality, AMA · 2019-12-20T22:35:32.838Z · score: 4 (3 votes) · LW · GW

Also worth noting that there are a few different claims of the sort OP mentions that people make, I think. One thing people sometimes mean by this is “CFAR no longer does the sort of curriculum development which would be necessary to create something like an 'Elon Musk factory.'"

CFAR never had the goal of hugely amplifying the general effectiveness of large numbers of people (which I’m happy about, since I’m not sure achieving that goal would be good). One should not donate to CFAR in order to increase the chances of an Elon Musk factory.

Comment by adam_scholl on We run the Center for Applied Rationality, AMA · 2019-12-20T21:02:01.329Z · score: 16 (12 votes) · LW · GW

Ambience and physical comfort are surprisingly important. In particular:

  • Lighting: Have lots of it! Ideally incandescent, but at least ≥ 95 CRI (and mostly ≤ 3500 K) LED, ideally coming from somewhere other than the center of the ceiling, and ideally filtered through a yellow-ish lampshade that has some variation in its color, so the light that gets emitted has some variation too (sort of like sunlight does when filtered through the atmosphere).

  • Food/drink: Have lots of it! Both in terms of quantity and variety. The cost to workshop quality of people not having their preferences met here so outweighs the cost of buying too much food that, in general, it’s worth buying too much as a policy. It's particularly important to meet people's (often, for rationalists, amusingly specific) dietary needs, have a variety of caffeine options, and provide a changing supply of healthy, easily accessible snacks.

  • Furniture: As comfortable as possible, and arranged such that multiple small conversations are more likely to happen than one big one.

Comment by adam_scholl on We run the Center for Applied Rationality, AMA · 2019-12-20T19:07:03.461Z · score: 7 (5 votes) · LW · GW

So I’m imagining there might be both a question (for what types of reasons have CFAR staff left?) and a claim (CFAR’s rate of turnover is unusual) here. Anna should be able to better address the question, but with regard to the claim: I think it’s true, at least relative to average U.S. turnover. The median time Americans spend in a given job is 4.2 years, while the median time CFAR employees have stayed is 2.2 years; 32% of our employees (7 people) left within their first year.

Comment by adam_scholl on We run the Center for Applied Rationality, AMA · 2019-12-20T18:24:36.091Z · score: 27 (9 votes) · LW · GW

I think it’s true that CFAR mostly moved away from teaching things like explicit probabilistic forecasting, and toward something else, although I would describe that something else differently—more like, toward helping people develop skills relevant to, for example, hypothesis generation, noticing confusion, communicating subtle intuitions, actually updating on evidence about crucial considerations, and in general (for lack of a better way to describe this) “not going insane when thinking about x-risk.”

I favor this shift, on the whole, because my guess is that skills of the former type are less important bottlenecks for the problems CFAR is trying to help solve. That is, all else equal, if I could press a button to either make alignment researchers and the people who surround them much better calibrated, or much better at any of those latter skills, I’d currently press the latter button.

But I do think it’s plausible CFAR should move somewhat backward on this axis, at the margin. Some skills from the former category would be pretty easy to teach, I think, and in general I have some Kelly-betting-ish inclination to diversify the goals of our curricular portfolio, in case our underlying assumptions are wrong.

Comment by adam_scholl on Raemon's Scratchpad · 2019-12-07T10:39:57.076Z · score: 6 (3 votes) · LW · GW

Fwiw, my experiences with DMVs in DC, Maryland, Virginia, New York, and Minnesota have all been about as terrible as my experiences in California.

Comment by adam_scholl on adam_scholl's Shortform · 2019-12-07T06:42:16.104Z · score: 11 (5 votes) · LW · GW

So apparently Ötzi the Iceman still has a significant amount of brain tissue. Conceivably some memories are preserved?

Comment by adam_scholl on Raemon's Scratchpad · 2019-12-07T06:12:13.527Z · score: 4 (2 votes) · LW · GW

Fwiw, for reasons I can't explain I vastly prefer just the title bolded to the entire line bolded, and significantly prefer the status quo to title bolded.

Comment by adam_scholl on DanielFilan's Shortform Feed · 2019-10-18T06:58:58.876Z · score: 5 (3 votes) · LW · GW

I've been wondering recently whether CFAR should try having some workshops in India for this reason. Far more people speak English there than in China, and I expect we'd encounter fewer political impediments.

Comment by adam_scholl on adam_scholl's Shortform · 2019-09-10T04:43:07.217Z · score: 5 (3 votes) · LW · GW

TIL that (according to this study, at least) adenovirus serotype 36 is present in 30% of obese humans, but only 11% of non-obese humans. The virus appears to cause obesity in chickens, mice, rats and monkeys. It may work (paper, pop summary) by binding to and permanently activating the PI3K enzyme, causing it to activate the insulin signaling pathway even when insulin isn't present.

Previous discussion on LessWrong.

Comment by adam_scholl on Why are the people who could be doing safety research, but aren’t, doing something else? · 2019-08-30T04:22:47.451Z · score: 1 (1 votes) · LW · GW

I think nuclear physics then had more of an established paradigm than AI safety has now; from what I understand, building a bomb was considered a hard, unsolved problem, but one which it was broadly known how to solve. So I think the answer to A is basically "no."

A bunch of people on the above list do seem to me to have actually tried before the project was backed by the establishment, though—from what I understand Fermi, Szilard, Wigner and Teller were responsible for getting the government involved in the first place. But their actions seem mostly to have been in the domains of politics, engineering and paradigmatic science, rather than new-branch-of-science-style theorizing.

(I do suspect it might be useful to find more ways of promoting the problem chiefly as interesting).

Comment by adam_scholl on Why are the people who could be doing safety research, but aren’t, doing something else? · 2019-08-30T01:02:24.542Z · score: 2 (2 votes) · LW · GW

"Not debatable" seems a little strong. For example, one might suspect both that it's plausible some rational humans might disprefer persisting, and also that most humans who think they have this preference would change their minds with more reflection.

Comment by adam_scholl on Why are the people who could be doing safety research, but aren’t, doing something else? · 2019-08-30T00:56:50.458Z · score: 2 (2 votes) · LW · GW

I expect most members of the 50, by virtue of being on the list, do have some sort of relevant comparative advantage. But it seems plausible some of them don't realize that.

Comment by adam_scholl on Why are the people who could be doing safety research, but aren’t, doing something else? · 2019-08-30T00:43:41.337Z · score: 5 (3 votes) · LW · GW

Strongly agree. Awareness of this risk is, I think, the reason for some of CFAR's actions that most-often confuse people—not teaching AI risk at intro workshops, not scaling massively, etc.

Comment by adam_scholl on Why are the people who could be doing safety research, but aren’t, doing something else? · 2019-08-30T00:39:46.765Z · score: 5 (3 votes) · LW · GW

I think this is a good candidate answer, but I feel confused by (what seems to me like) the relative abundance of historical examples of optimization-type behavior among scientists during pivotal periods in the past. For example, during WWII there were some excellent scientists (e.g. Shannon) who only grudgingly pursued research that was "important" rather than "interesting." But there were many others (e.g. Fermi, Szilard, Oppenheimer, Bethe, Teller, Von Neumann, Wigner) who seemed... to truly grok the stakes. To be interested in things in part because of their importance, to ruthlessly prioritize, to actually try.

Comment by adam_scholl on Why are the people who could be doing safety research, but aren’t, doing something else? · 2019-08-29T22:45:33.265Z · score: 4 (3 votes) · LW · GW

I also have this model, and think it well-predicts lots of human behavior. But it doesn't feel obvious to me that it also well-predicts the behavior of this 50, who I would expect to be unusually motivated by explicit arguments, unusually likely to gravitate toward the most interesting explicit arguments, etc.

Comment by adam_scholl on Why are the people who could be doing safety research, but aren’t, doing something else? · 2019-08-29T08:54:51.105Z · score: 11 (6 votes) · LW · GW

Example answers which strike me as plausible:

  • Most members of this set simply haven’t yet encountered one of the common attractors—LessWrong, CFAR, Superintelligence, HPMOR, 80k, etc. Perhaps this is because they don’t speak English, or because they’re sufficiently excited about their current research that they don’t often explore beyond it, or because they’re 16 and can’t psychologically justify doing things outside the category “prepare for college,” or because they’re finally about to get tenure and are actively trying to avoid getting nerd sniped by topics in other domains, or because they don’t have many friends and so only get introduced to new topics they think to Google, or simply because, despite being exactly the sort of person who would get nerd sniped by this problem if they’d ever encountered it, they just… never have, not even the basic “maybe it will be a problem if we build machines smarter than us, huh?” And maybe it shouldn’t be much more surprising that there might still exist pockets of extremely smart people who’ve never thought to wonder this than that there presumably existed pockets of extremely smart people for millennia who never thought to wonder what effects might result from more successful organisms reproducing more.
  • Most members of this set have encountered one of the common attractors, or at least the basic ideas, but only in some poor and limited form that left them idea-inoculated. Maybe they heard Kurzweil make a weirdly specific claim once, or the advisor they really respect told them the whole field is pseudoscience that assumes AI will have human-like consciousness and drives to power, or they tried reading some of Eliezer’s posts and hated the writing style, or they felt sufficiently convinced by an argument for super-long timelines that investigating the issue more didn’t feel decision-relevant.
  • The question is ill-formed: perhaps because there just aren’t 50 people who could helpfully contribute who aren't doing so already, or because the framing of the question implies the “50” is the relevant thing to track, whereas actually research productivity is power-law-y and the vast majority of the benefit would come from finding just one or three particular members of this set, and finding them would require asking different questions.

Comment by adam_scholl on If the "one cortical algorithm" hypothesis is true, how should one update about timelines and takeoff speed? · 2019-08-29T07:16:05.375Z · score: 2 (2 votes) · LW · GW

Confused what you mean—is the argument in your second sentence that a low-complexity learner will foom more easily?

Comment by adam_scholl on If the "one cortical algorithm" hypothesis is true, how should one update about timelines and takeoff speed? · 2019-08-29T07:09:38.136Z · score: 3 (2 votes) · LW · GW

The specifics of the proposal, at least, seem relatively easy to falsify. For example, he not only predicts the existence of cortical grid and displacement cells, but also their specific location—that they'll be found in layer 6 and layer 5 of the neocortex, respectively. So we may find out whether he's right fairly soon.

Comment by adam_scholl on If the "one cortical algorithm" hypothesis is true, how should one update about timelines and takeoff speed? · 2019-08-26T23:45:42.700Z · score: 8 (3 votes) · LW · GW

Grid cells are known to exist elsewhere in the brain—for example, in the entorhinal cortex. There are preliminary hints that grid cells may exist in neocortex too, but this hasn't yet been firmly established. Displacement cells, on the other hand, have never been observed anywhere—they're just hypothesized cells Hawkins predicts must exist, assuming his theory is true. So I took him to be making a few distinct claims: 1) grid cells also exist in neocortex, 2) displacement cells exist, and 3) displacement cells are located in neocortex.

Comment by adam_scholl on What's the optimal procedure for picking macrostates? · 2019-08-26T23:24:45.402Z · score: 1 (1 votes) · LW · GW

That's really helpful, thanks. But... should I understand "class" here to mean something like "a configuration of reality that would result in the observed data obtaining?" If so, aren't there many possible such classes for any given microstate? How do you choose? For example, if one were to ask an aligned oracle with infinite compute to estimate the information theoretic entropy of a given message—say, in order to minimize the probability it misunderstood you—how would it go about estimating this?
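
(To make the ambiguity I'm asking about concrete, here's a toy sketch; the two "classes" and their probabilities below are invented purely for illustration, not taken from anything above. The point is just that the entropy assigned to the very same message depends entirely on which class of possibilities you treat it as one member of.)

```python
import math

def shannon_entropy(probs):
    """Shannon entropy, in bits, of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# The same observed message, coarse-grained two different ways:

# Class A: "which of 4 equally likely template messages was sent?"
entropy_a = shannon_entropy([0.25, 0.25, 0.25, 0.25])  # 2 bits

# Class B: "which of 2 underlying intents (yes/no) does it express,
# where 'yes' is assumed far more common a priori?"
entropy_b = shannon_entropy([0.9, 0.1])  # ~0.47 bits

print(entropy_a, entropy_b)  # different entropies for one and the same microstate
```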

Comment by adam_scholl on adam_scholl's Shortform · 2019-08-26T22:36:38.638Z · score: 3 (3 votes) · LW · GW

Turns out there's an app (Apple, Android) which compiles evidence from 179 studies on probiotics, ranks them by strength of evidence (study design, etc.), then suggests the most evidence-supported probiotic for a given "indication" (allergies, IBS, etc.). The only available CNS-related indication is "Mood/Affect", though, and the review described in the OP isn't included in the study database, nor are any of the three studies from that review that I spot-checked. But the two strains it recommends for mood/affect (b. longum and l. helveticus) are among the seven strains recommended in the OP.

Note that from what I can tell about the state of this field, "most evidence-supported intervention" should be read more as "better than choosing randomly, I guess" than "this is definitely promising."