Posts

Why are the people who could be doing safety research, but aren’t, doing something else? 2019-08-29T08:51:33.219Z · score: 25 (6 votes)
What's the optimal procedure for picking macrostates? 2019-08-26T09:34:15.647Z · score: 12 (5 votes)
If the "one cortical algorithm" hypothesis is true, how should one update about timelines and takeoff speed? 2019-08-26T07:08:19.634Z · score: 24 (9 votes)
Cognitive Benefits of Exercise 2019-08-14T21:40:35.145Z · score: 28 (14 votes)
adam_scholl's Shortform 2019-08-12T00:53:37.221Z · score: 1 (1 votes)

Comments

Comment by adam_scholl on DanielFilan's Shortform Feed · 2019-10-18T06:58:58.876Z · score: 3 (2 votes) · LW · GW

I've been wondering recently whether CFAR should try having some workshops in India for this reason. Far more people there speak English than in China, and I expect we'd encounter fewer political impediments.

Comment by adam_scholl on adam_scholl's Shortform · 2019-09-10T04:43:07.217Z · score: 5 (3 votes) · LW · GW

TIL that (according to this study, at least) adenovirus serotype 36 is present in 30% of obese humans, but only 11% of non-obese humans. The virus appears to cause obesity in chickens, mice, rats and monkeys. It may work (paper, pop summary) by binding to and permanently activating the PI3K enzyme, causing it to activate the insulin signaling pathway even when insulin isn't present.

Previous discussion on LessWrong.

Comment by adam_scholl on Why are the people who could be doing safety research, but aren’t, doing something else? · 2019-08-30T04:22:47.451Z · score: 1 (1 votes) · LW · GW

I think nuclear physics then had more of an established paradigm than AI safety has now; from what I understand, building a bomb was considered a hard, unsolved problem, but one for which the broad path to a solution was already known. So I think the answer to A is basically "no."

A bunch of people on the above list do seem to me to have actually tried before the project was backed by the establishment, though—from what I understand Fermi, Szilard, Wigner and Teller were responsible for getting the government involved in the first place. But their actions seem mostly to have been in the domains of politics, engineering and paradigmatic science, rather than new-branch-of-science-style theorizing.

(I do suspect it might be useful to find more ways of promoting the problem chiefly as interesting.)

Comment by adam_scholl on Why are the people who could be doing safety research, but aren’t, doing something else? · 2019-08-30T01:02:24.542Z · score: 2 (2 votes) · LW · GW

"Not debatable" seems a little strong. For example, one might think both that it's plausible some rational humans genuinely disprefer persisting, and that most humans who believe they have this preference would change their minds with more reflection.

Comment by adam_scholl on Why are the people who could be doing safety research, but aren’t, doing something else? · 2019-08-30T00:56:50.458Z · score: 2 (2 votes) · LW · GW

I expect most members of the 50, by virtue of being on the list, do have some sort of relevant comparative advantage. But it seems plausible some of them don't realize that.

Comment by adam_scholl on Why are the people who could be doing safety research, but aren’t, doing something else? · 2019-08-30T00:43:41.337Z · score: 5 (3 votes) · LW · GW

Strongly agree. Awareness of this risk is, I think, the reason for some of CFAR's actions that most often confuse people—not teaching AI risk at intro workshops, not scaling massively, etc.

Comment by adam_scholl on Why are the people who could be doing safety research, but aren’t, doing something else? · 2019-08-30T00:39:46.765Z · score: 5 (3 votes) · LW · GW

I think this is a good candidate answer, but I feel confused by (what seems to me like) the relative abundance of historical examples of optimization-type behavior among scientists during pivotal periods in the past. For example, during WWII there were some excellent scientists (e.g. Shannon) who only grudgingly pursued research that was "important" rather than "interesting." But there were many others (e.g. Fermi, Szilard, Oppenheimer, Bethe, Teller, Von Neumann, Wigner) who seemed... to truly grok the stakes. To be interested in things in part because of their importance, to ruthlessly prioritize, to actually try.

Comment by adam_scholl on Why are the people who could be doing safety research, but aren’t, doing something else? · 2019-08-29T22:45:33.265Z · score: 4 (3 votes) · LW · GW

I also have this model, and think it well-predicts lots of human behavior. But it doesn't feel obvious to me that it also well-predicts the behavior of this 50, who I would expect to be unusually motivated by explicit arguments, unusually likely to gravitate toward the most interesting explicit arguments, etc.

Comment by adam_scholl on Why are the people who could be doing safety research, but aren’t, doing something else? · 2019-08-29T08:54:51.105Z · score: 11 (6 votes) · LW · GW

Example answers which strike me as plausible:

  • Most members of this set simply haven’t yet encountered one of the common attractors—LessWrong, CFAR, Superintelligence, HPMOR, 80k, etc. Perhaps this is because they don’t speak English, or because they’re sufficiently excited about their current research that they rarely explore beyond it, or because they’re 16 and can’t psychologically justify doing things outside the category “prepare for college,” or because they’re finally about to get tenure and are actively trying to avoid getting nerd sniped by topics in other domains, or because they don’t have many friends, so only get introduced to new topics they think to Google. Or perhaps, despite being exactly the sort of person who would get nerd sniped by this problem if they’d ever encountered it, they just… never have, not even via the basic “maybe it will be a problem if we build machines smarter than us, huh?” And maybe it shouldn’t be much more surprising that there still exist pockets of extremely smart people who’ve never thought to wonder this than that there presumably existed, for millennia, pockets of extremely smart people who never thought to wonder what effects might result from more successful organisms reproducing more.
  • Most members of this set have encountered one of the common attractors, or at least the basic ideas, but only in some poor and limited form that left them idea-inoculated. Maybe they heard Kurzweil make a weirdly-specific claim once, or the advisor they really respect told them the whole field is pseudoscience that assumes AI will have human-like consciousness and a drive for power, or they tried reading some of Eliezer’s posts and hated the writing style, or they felt sufficiently convinced by an argument for super-long timelines that investigating the issue further didn’t feel decision-relevant.
  • The question is ill-formed: perhaps because there just aren’t 50 people who could helpfully contribute who aren’t doing so already, or because the framing of the question implies the “50” is the relevant thing to track, whereas actually research productivity is power-law-ey: the vast majority of the benefit would come from finding just one or three particular members of this set, and finding them would require asking different questions.

Comment by adam_scholl on If the "one cortical algorithm" hypothesis is true, how should one update about timelines and takeoff speed? · 2019-08-29T07:16:05.375Z · score: 2 (2 votes) · LW · GW

Confused what you mean—is the argument in your second sentence that a low-complexity learner will foom more easily?

Comment by adam_scholl on If the "one cortical algorithm" hypothesis is true, how should one update about timelines and takeoff speed? · 2019-08-29T07:09:38.136Z · score: 3 (2 votes) · LW · GW

The specifics of the proposal, at least, seem relatively easy to falsify. For example, he not only predicts the existence of cortical grid and displacement cells, but also their specific location—that they'll be found in layer 6 and layer 5 of the neocortex, respectively. So we may find out whether he's right fairly soon.

Comment by adam_scholl on If the "one cortical algorithm" hypothesis is true, how should one update about timelines and takeoff speed? · 2019-08-26T23:45:42.700Z · score: 8 (3 votes) · LW · GW

Grid cells are known to exist elsewhere in the brain—for example, in the entorhinal cortex. There are preliminary hints that grid cells may exist in neocortex too, but this hasn't yet been firmly established. Displacement cells, on the other hand, have never been observed anywhere—they're just hypothesized cells Hawkins predicts must exist, assuming his theory is true. So I took him to be making a few distinct claims: 1) grid cells also exist in neocortex, 2) displacement cells exist, and 3) displacement cells are located in neocortex.

Comment by adam_scholl on What's the optimal procedure for picking macrostates? · 2019-08-26T23:24:45.402Z · score: 1 (1 votes) · LW · GW

That's really helpful, thanks. But... should I understand "class" here to mean something like "a configuration of reality that would result in the observed data obtaining?" If so, aren't there many possible such classes for any given microstate? How do you choose? For example, if one were to ask an aligned oracle with infinite compute to estimate the information theoretic entropy of a given message—say, in order to minimize the probability it misunderstood you—how would it go about estimating this?
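To make the many-possible-classes worry concrete, here's a toy sketch (my own illustration, with an invented `entropy_bits` helper, not anything from the thread): the number of bits assigned to the very same message depends on which ensemble you model it as drawn from.

```python
from collections import Counter
from math import log2

def entropy_bits(message, model):
    """Surprisal of `message` in bits under an i.i.d.-character model:
    the sum of -log2 p(c) over each character c in the message."""
    return -sum(log2(model[c]) for c in message)

msg = "abracadabra"

# Class 1: messages as strings of characters drawn uniformly
# from the 26 lowercase letters.
uniform = {c: 1 / 26 for c in "abcdefghijklmnopqrstuvwxyz"}

# Class 2: messages as strings drawn from the empirical character
# frequencies of this particular message.
counts = Counter(msg)
empirical = {c: n / len(msg) for c, n in counts.items()}

print(entropy_bits(msg, uniform))    # ≈ 51.70 bits
print(entropy_bits(msg, empirical))  # ≈ 22.44 bits
```

Same microstate (the literal string), but two different choices of class give very different entropy estimates, which is exactly the ambiguity the question is pointing at.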

Comment by adam_scholl on adam_scholl's Shortform · 2019-08-26T22:36:38.638Z · score: 3 (3 votes) · LW · GW

Turns out there's an app (Apple, Android) which compiles evidence from 179 studies on probiotics, ranks them by strength of evidence (study design, etc.), and then suggests the most evidence-supported probiotic for a given "indication" (allergies, IBS, etc.). The only available CNS-related indication is "Mood/Affect", though, and the review described in the OP isn't included in the study database, nor were any of the three studies included in that review that I spot-checked. But the two strains it recommends for mood/affect (b. longum and l. helveticus) are among the seven strains recommended in the OP.

Note that from what I can tell about the state of this field, "most evidence-supported intervention" should be read more as "better than choosing randomly, I guess" than "this is definitely promising."

Comment by adam_scholl on Eli's shortform feed · 2019-08-26T06:34:36.666Z · score: 4 (3 votes) · LW · GW

There feel to me like two relevant questions here, which seem conflated in this analysis:

1) At what point did the USSR gain the ability to launch a comprehensively-destructive, undetectable-in-advance nuclear strike on the US? That is, at what point would a first strike have been achievable and effective?

2) At what point did the USSR gain the ability to launch such a first strike using ICBMs in particular?

By 1960 the USSR had 1,605 nuclear warheads; there may have been few ICBMs among them, but there are other ways to deliver warheads than shooting them across continents. Planes fail the "undetectable" criterion, but ocean-adjacent cities can be blown up by small boats, and by 1960 the USSR had submarines equipped with six "short"-range (650 km and 1,300 km) ballistic missiles. By 1967 they were producing subs like this, each of which was armed with 16 missiles with ranges of 2,800-4,600 km.

All of which is to say that from what I understand, RAND's fears were only a few years premature.

Comment by adam_scholl on Does it become easier, or harder, for the world to coordinate around not building AGI as time goes on? · 2019-08-26T04:49:38.695Z · score: 11 (5 votes) · LW · GW

A missing point in favor of coordination getting easier: AI safety as a field seems likely to mature over time, and as it does the argument "let's postpone running this AGI code until we first solve x" may become more compelling, as x increases in legibility and tractability.

Comment by adam_scholl on Benito's Shortform Feed · 2019-08-18T08:08:39.523Z · score: 1 (1 votes) · LW · GW

Not certain, but I think when your news feed becomes sparse enough it might actually become exhaustive.

Comment by adam_scholl on Benito's Shortform Feed · 2019-08-18T02:54:55.707Z · score: 3 (2 votes) · LW · GW

In my experience this problem is easily solved if you simply unfollow ~95% of your friends. You can mass unfollow people relatively easily from the News Feed Preferences page in Settings. Ever since doing this a few years ago, my Facebook timeline has had an extremely high signal-to-noise ratio—I'm quite glad to encounter something like 85% of posts. Also, since this 5% only produces ~5-20 minutes of reading/day, it's easy to avoid spending lots of time on the site.

Comment by adam_scholl on Cognitive Benefits of Exercise · 2019-08-15T19:58:25.366Z · score: 12 (4 votes) · LW · GW

Interesting that the Landrigan et al. review hereisonehand cited showed no effect of strength training on working memory; the review here reported no effect of aerobic exercise on working memory either, but did report a benefit from combined strength and aerobic training. Feels a bit fishy that each would have no effect individually, yet have an effect when combined.

Comment by adam_scholl on Matthew Barnett's Shortform · 2019-08-12T22:53:47.579Z · score: 8 (5 votes) · LW · GW

My impression is that academic philosophy has historically produced a lot of good deconfusion work in metaethics (e.g. this and this), as well as some really neat negative results like the logical empiricists' failed attempt to construct a language in which verbal propositions could be cached out/analyzed in terms of logic or set theory in a way similar to how one can cache out/analyze Python in terms of machine code. In recent times there's been a lot of (in my opinion) great academic philosophy done at FHI.

Comment by adam_scholl on adam_scholl's Shortform · 2019-08-12T21:19:02.908Z · score: 5 (3 votes) · LW · GW

I agree the effect is consistent enough that we should be suspicious of file-drawer effects and p-hacking (though of course consistency is also what you'd expect if the effect were in fact large). But note that they were different studies, i.e. the human studies mostly weren't based on the non-human ones.

Comment by adam_scholl on adam_scholl's Shortform · 2019-08-12T00:53:37.351Z · score: 19 (8 votes) · LW · GW

I was surprised to find a literature review about probiotics which suggested they may have significant CNS effects. The tl;dr of the review seems to be: 1) You want doses of at least or CFU, and 2) You want, in particular, the strains B. longum, B. breve, B. infantis, L. helveticus, L. rhamnosus, L. plantarum, and L. casei.

I then sorted the top 15 results on Amazon for "probiotic" by these desiderata, and found that this one seems to be best.

Some points of uncertainty:

  • Probiotic manufacturers generally don't disclose the strain proportions of their products, so there's some chance they mostly include e.g. whatever's cheapest, plus a smattering of other stuff.
  • One of the reviewed studies suggests L. casei may impair memory. I couldn't find a product that didn't have L. casei but did have at least CFU of each other recommended strain, so if you take the L. casei/memory concern seriously your best option might be combining this and this.