February 2022 Open Thread

post by Raemon · 2022-02-02T18:38:28.703Z · LW · GW · 54 comments

If it’s worth saying, but not worth its own post, here's a place to put it.

If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.

If you want to explore the community more, I recommend reading the Library, [? · GW] checking recent Curated posts [? · GW], seeing if there are any meetups in your area [? · GW], and checking out the Getting Started [? · GW] section of the LessWrong FAQ [? · GW]. If you want to orient to the content on the site, you can also check out the new Concepts section [? · GW].

The Open Thread tag is here [? · GW]. The Open Thread sequence is here [? · GW].

54 comments

Comments sorted by top scores.

comment by niplav · 2022-02-03T11:18:20.019Z · LW(p) · GW(p)

Why don't female organisms insert more of their own DNA into the offspring?

I don't think Fisher's principle explains it, because it only applies at the level of which sex your offspring is, and there's a clear advantage to having male offspring when there are many females around.

But if you're female, and sperm and egg mix, you could theoretically control how much of the male's DNA gets to contribute to the embryo. And there's not much to stop you, since the male can't check beforehand how much of his DNA you will use. But if a gene causes the female organism to insert more of its own DNA into the offspring, that gene also increases its own chance of continuing to exist.

Sure, the offspring will be less fit on average, but I'm astonished that the equilibrium is that the female organism doesn't "cheat" at all, instead of there being a 2/3-female, 1/3-male offspring genome or something.

Maybe this is a situation where every gene benefits from no one cheating, but it would make sense for individual genes to defect? And if so, how does the reproductive process prevent this?

Replies from: ChristianKl, niplav
comment by ChristianKl · 2022-02-04T18:54:40.274Z · LW(p) · GW(p)

It's worth starting by noting that male and female births are not 50-50. While conceptions are 50-50, births aren't, and there are mechanisms that cause pregnancies to fail with likelihoods that differ by sex.

While it makes sense that the value is near 50% for humans, it's not exactly 50% either in reality or in the computer models of human evolution I built (which surprised me).

Sexual selection is very useful. In humans, mitochondrial DNA is only passed maternally and we see that evolution reduced the number of mitochondrial genes to a minimum. 

For each chromosome, we get one copy from our mother and one from our father. There's no easy way to get 1.25 from the mother and 0.75 from the father. If we get two copies of a chromosome from one parent, or none, the pregnancy usually fails; in the few remaining cases, it causes severe harm (like Down syndrome).

Replies from: Alex Hollow, andrew-mcknight
comment by Alex Hollow · 2022-02-08T18:50:32.216Z · LW(p) · GW(p)

What asymmetries did you introduce into your simulations that led to a difference? In my experience, models with no gender differences but with mandatory sexual reproduction usually end up at 50/50.
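For reference, here is a minimal toy sketch of the kind of null model being described (not ChristianKl's actual simulation; the population size, mutation rate, and starting bias are arbitrary): each individual carries a single heritable "sex-ratio" allele and there are no other asymmetries between the sexes. Because whichever sex is rarer has higher per-capita reproductive success, selection pushes the mean allele back toward 0.5, which is Fisher's principle.

```python
import random

POP_SIZE = 2000
GENERATIONS = 300

def offspring_allele(mother, father):
    # Inherit the sex-ratio allele from one parent at random, with a small
    # mutation so selection has variation to act on.
    allele = random.choice([mother["p_male"], father["p_male"]])
    return min(1.0, max(0.0, allele + random.gauss(0, 0.01)))

# Start biased toward daughters: each individual's allele is the probability
# that its offspring are male.
population = [{"sex": random.choice("MF"), "p_male": 0.3} for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    males = [i for i in population if i["sex"] == "M"]
    females = [i for i in population if i["sex"] == "F"]
    if not males or not females:
        break
    new_population = []
    for _ in range(POP_SIZE):
        mother, father = random.choice(females), random.choice(males)
        allele = offspring_allele(mother, father)
        # The mother's allele controls offspring sex. Because fathers are drawn
        # from the (possibly smaller) male pool, the rarer sex has higher
        # per-capita reproductive success -- Fisher's argument.
        sex = "M" if random.random() < mother["p_male"] else "F"
        new_population.append({"sex": sex, "p_male": allele})
    population = new_population
    if gen % 50 == 0:
        mean = sum(i["p_male"] for i in population) / POP_SIZE
        print(f"generation {gen:3d}: mean sex-ratio allele = {mean:.3f}")
```

With these settings the mean allele should drift from the initial 0.3 back toward roughly 0.5 over the simulated generations; adding any of the asymmetries ChristianKl mentions (X-linked mutations, transposons, sex-specific mortality) is what would move the equilibrium away from exactly 50/50.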

Replies from: ChristianKl
comment by ChristianKl · 2022-02-08T22:22:04.536Z · LW(p) · GW(p)

My models had humans with their full 46 chromosomes and multiple genes per chromosome. In addition, I had transposons on those chromosomes. I also tried to model mating behavior, where males and females obviously have different roles.

Mutations on the X chromosome lead more frequently to pregnancy termination in male offspring. This is pretty obvious, given that female offspring have more redundancy when it comes to the X chromosome.

The transposon-related pregnancy terminations, which in turn terminate more female pregnancies than male ones, are less obvious. I think I have some insight there that could be publishable. If anyone wants to collaborate on a paper, I'm happy to say more privately.

Transposons and their effects are often not taken as seriously as they should be.

comment by Andrew McKnight (andrew-mcknight) · 2022-02-22T19:20:58.040Z · LW(p) · GW(p)

I think this makes sense because eggs are haploid (they already have only 23 chromosomes), but a natural next question is: why are eggs haploid if there is a major incentive to pass on more of the 46 chromosomes?

Replies from: ChristianKl
comment by ChristianKl · 2022-02-22T22:37:28.182Z · LW(p) · GW(p)

If there were two copies of chromosome 11 in the egg and none in the sperm, you would lose sexual selection for chromosome 11.

comment by niplav · 2022-02-03T13:27:22.851Z · LW(p) · GW(p)

From a message on reddit by /u/eniteris:

I think the biggest reason why you don't commonly see the selective incorporation of male DNA is because the machinery to do the selection would be too costly compared to just transitioning to asexual reproduction.

That being said, there's a wide breadth of parthenogenesis strategies, of which the most relevant are the kleptons, which can sometimes incorporate some of the male DNA.

Individual genes defecting are probably closest to transposons and other selfish genetic elements, and those are in competition with systems that silence them to prevent them from defecting.

Polyploid organisms probably have greater flexibility for the non-balanced incorporation of DNA, but I'm not familiar enough to comment any more.

From another message:

I guess selective incorporation machinery might not be too costly, but why selectively incorporate when you can just switch to full asexual reproduction, or have both sexual and asexual reproduction? (i.e., virgin birth in sharks, some reptiles, etc.)

I guess it's the difference between each offspring being a 90/10 split of genetics, or having 80% of your population reproduce asexually and 20% reproduce sexually.

(I am satisfied with this as an explanation)

Replies from: lsusr
comment by lsusr · 2022-02-07T05:13:29.360Z · LW(p) · GW(p)

To put it another way, if a female inserts more of her DNA into an offspring then she loses out on the benefits of sexual reproduction.

Replies from: Pattern
comment by Pattern · 2022-02-08T18:33:06.014Z · LW(p) · GW(p)
Polyploid organisms probably have greater flexibility for the non-balanced incorporation of DNA, but I'm not familiar enough to comment any more.

It's not clear this is the case when the sum isn't 1. (i.e. 1/2 + 1/2 = 1, versus 1/2 + 1 = 3/2*)

*I'm guessing that's the breakdown.

comment by a gently pricked vein (strangepoop) · 2022-02-17T08:20:06.587Z · LW(p) · GW(p)

(A suggestion for the forum)

You know that old post on r/ShowerThoughts which went something like "People who speak somewhat broken english as their second language sound stupid, but they're actually smarter than average because they know at least one other language"?

I was thinking about this. I don't struggle with my grasp of English the language so much, but I certainly do with what might be called an American/Western cadence. I'm sure it's noticeable occasionally, inducing just the slightest bit of microcringe in the typical person that hangs around here. Things like strange sentence structure, or weird use of italics, or overuse of a word, or over/under hedging... all the writing skills you already mastered in grade school. And you probably grew up seeing that the ones who continued to struggle with it often didn't get other things quickly either. 

Maybe you notice some of these already in what you're reading right now (despite my painstaking efforts otherwise). It's likely to look "wannabe" or "amateurish" because it is: one learns language and rhythm by imitating. But this imitation game is confined to language & rhythm, and it would be a mistake to infer from this that the ideas behind them are unoriginal or amateurish.

I'd like to think it wouldn't bother anyone on LW because people here believe that linguistic faux pas, as much as social ones, ought to be screened off by the content. 

But it probably still happens. You might believe it but not alieve it. Imagine someone saying profound things but using "u" and "ur" everywhere, even for "you're". You could actually try this (even though it would be a somewhat shallow experiment, because what I'm pointing at with "cadence" is deeper than spelling mistakes) to get a flavor for it.

A solution I can think of: make a [Non-Native Speaker] tag and allow people to self-tag. Readers could see it and shoot for a little bit more charity across anything linguistically-aesthetically displeasing. The other option is to take advantage of customizable display names here, but I wonder if that'd be distracting if mass-adopted, like twitter handles that say "[Name] ...is in New York".

I would (maybe, at some point) even generalize it to [English Writing Beginner] or some such, which you can self-assign even if you speak natively but are working on your writing skills. This one is more likely to be diluted though.

Replies from: nim
comment by nim · 2022-02-22T02:57:47.349Z · LW(p) · GW(p)

Could this be accomplished using custom commenting guidelines? Perhaps just adding a sentence about whether one wants to opt into or out of linguistic-aesthetic feedback would suffice if one has strong feelings on the matter.

This would work for top level posts, but for comment replies, the commenting guidelines feature would need to be expanded to show the guidelines of the person being replied to as well as the author of the main post. For instance, when writing this reply I see only Raemon's commenting guidelines.

comment by NunoSempere (Radamantis) · 2022-02-15T23:07:18.440Z · LW(p) · GW(p)

I just realized that jasoncrawford and joshwentworth are two different people. Explains so many things.

Replies from: Raemon, ege-erdil
comment by Raemon · 2022-02-16T00:13:23.828Z · LW(p) · GW(p)

...you are also probably thinking of johnswentworth.

comment by Ege Erdil (ege-erdil) · 2022-03-17T14:56:57.110Z · LW(p) · GW(p)

His username is actually johnswentworth. Easy to remember if you know his real name, John Wentworth.

comment by Wing (yun-chen) · 2022-02-27T16:06:58.236Z · LW(p) · GW(p)

Huh.

Read HPMOR (3 times start to finish), found and am currently reading the sequences, and am now actually venturing into LessWrong. Read maybe 10 articles as well as a lot of comments.

And I'm a bit intimidated :D. But that's fine LOL. Shut up and do the impossible! 

Replies from: habryka4
comment by habryka (habryka4) · 2022-02-27T21:25:36.403Z · LW(p) · GW(p)

Welcome! I agree it can be a bit intimidating at first, but I expect you'll find your bearings. And don't feel hesitant to ask questions here in this thread.

comment by lizard_brain · 2022-02-12T23:13:51.877Z · LW(p) · GW(p)

Has anyone tried to map relationships between (at least some) LessWrong posts?

What I'm looking for would be some kind of overview of what connects to what that could be parsed without clicking through links recursively. If I assume a chronologically earlier post cannot refer to a later one, I expect this type of structure to have interesting properties. A practical application that comes to mind is informing an algorithm that decides what to give attention to.

To define a relationship, links from one post to another would be a good if imperfect metric (not all links represent the same type of relationship, and not all relationships are represented by explicit links).
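As a minimal sketch of the structure this implies (hypothetical post IDs and dates, not data from the site): if earlier posts can't link to later ones, the link graph is a directed acyclic graph, and a crude attention heuristic like "how many later posts build on this one" falls out almost for free.

```python
from collections import defaultdict
from datetime import date

# Hypothetical toy data: post_id -> (publication date, earlier posts it links to).
# In practice this would be scraped or exported from the site.
posts = {
    "post-a": (date(2020, 1, 1), []),
    "post-b": (date(2020, 6, 1), ["post-a"]),
    "post-c": (date(2021, 3, 1), ["post-a", "post-b"]),
}

cited_by_later = defaultdict(int)
for post_id, (published, outgoing_links) in posts.items():
    for target in outgoing_links:
        # Check the "earlier posts can't link to later ones" assumption,
        # which is what makes the structure a DAG.
        assert posts[target][0] <= published, f"{post_id} links forward in time"
        cited_by_later[target] += 1

# One crude attention heuristic: rank posts by how many later posts build on them.
for post_id, count in sorted(cited_by_later.items(), key=lambda kv: -kv[1]):
    print(f"{post_id}: linked by {count} later post(s)")
```

The same graph would also support the other use mentioned above: walking a post's ancestry without clicking through links recursively.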

comment by Anirandis · 2022-02-12T23:07:59.168Z · LW(p) · GW(p)

I'm not sure if this is the right place to ask this, but does anyone know what point Paul's trying to make in the following part of this podcast? (Relevant section starts around 1:44:00)

Suppose you have a P probability of the best thing you can do and a one-minus-P probability of the worst thing you can do, what does P have to be so it's the difference between that and the barren universe? I think most of my probability is distributed between you would need somewhere between 50% and 99% chance of good things, and then put some probability or some credence on views where that number is a quadrillion times larger or something, in which case it's definitely going to dominate. A quadrillion is probably too big a number, but very big numbers. Numbers easily large enough to swamp the actual probabilities involved

[ . . . ]

I think that those arguments are a little bit complicated, how do you get at these? I think to clarify the basic position, the reason that you end up concluding it’s worse is just like conceal your intuition about how bad the worst thing that can happen to a person is vs the best thing or damn, the worst thing seems pretty bad and then the like first-pass responses, sort of have this debunking understanding, or we understand causally how it is that we ended up with this kind of preference with respect to really bad stuff versus really good stuff.

If you look at what happens over evolutionary history. What is the range of things that can happen to an organism and how should an organism be trading off like best possible versus worst possible outcomes. Then you end up into well, to what extent is that a debunking explanation that explains why humans in terms of their capacity to experience joy and suffering are unbiased but the reality is still biased versus to what extent is this then fundamentally reflected in our preferences about good and bad things. I think it’s just a really hard set of questions. I could easily imagine maybe shifting on them with much more deliberation.

It seems like an important topic but I'm a bit confused by what he's saying here. Is the perspective he's discussing (and puts non-negligible probability on) one that states that the worst possible suffering is a bajillion times worse than the best possible pleasure, and wouldn't that suggest every human's life is net-negative (even if your credence on this being the case is ~.1%)? Or is this just discussing the energy-efficiency of 'hedonium' and 'dolorium', in which case it's of solely altruistic concern & can be dealt with by strictly limiting compute?

 

Also, I'm not really sure if this set of views is more "a broken bone/waterboarding is a million times as morally pressing as making a happy person", or along the more empirical lines of "most suffering (e.g. waterboarding) is extremely light, humans can experience far far far far far^99 times worse; and pleasure doesn't scale to the same degree." Even a tiny chance of the second one being true is awful to contemplate. 
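One back-of-the-envelope way to make the trade-off in the quoted passage concrete (a reading of the quote, not necessarily Paul's intended framing): let the best outcome have value $B > 0$ and the worst have value $W < 0$ with $|W| = K \cdot B$, and compare the gamble to a zero-value barren universe. The gamble is preferable exactly when

$$
pB + (1-p)W \ge 0
\quad\Longleftrightarrow\quad
p \ge \frac{|W|}{B + |W|} = \frac{K}{K+1}.
$$

So $K = 1$ gives $p \ge 50\%$, $K = 99$ gives $p \ge 99\%$ (Paul's "somewhere between 50% and 99%"), and a view on which the worst is a quadrillion times worse, $K = 10^{15}$, requires $p \ge 1 - 10^{-15}$, which is why even a small credence on such views can swamp the actual probabilities involved.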

Replies from: Pattern
comment by Pattern · 2022-02-15T19:02:34.088Z · LW(p) · GW(p)

Here's a model that might simplify things:

Really negative events can affect people's lives for a long time afterward.


From that model, it's easier to have utility effects by, say, reducing extreme negative events, than say, making someone who is 'happy' a little bit happier. So while the second thing may seem easier to do (cost), the first thing may still be more impactful even if you divide by its cost.
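As a toy illustration with made-up numbers: suppose preventing one extreme negative event is worth $100$ units of utility at a cost of $10$, while a small happiness boost is worth $1$ unit at a cost of $1$. Then

$$
\frac{100}{10} = 10 \ \text{utility per unit cost} \quad > \quad \frac{1}{1} = 1,
$$

so the prevention wins even though it is ten times more expensive in absolute terms.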


The obvious connection is how things play out within a person's life. If, say, you break your arm, maybe it'll be harder to do other things because:

  • it's in a cast and you can't use it while it heals
  • You're in pain. Maybe you don't enjoy things, like watching a movie, as much when you're in a lot of pain.

[Insert argument for wearing a helmet while riding a bike or motorcycle even if it's mildly inconvenient - because it helps reduce/prevent stuff that's way more inconvenient.]


and pleasure doesn't scale to the same degree

It's easy to scale pain? This just seems like an argument that 'Becoming slightly happier' is less pressing morally than 'reducing the amount of torture* in the world'.

*Might be worth noting that if this is about extreme pain, then this implies 'improving access to medical care' can be a very powerful intervention, i.e., effective altruism.

Replies from: Anirandis
comment by Anirandis · 2022-02-19T02:44:40.078Z · LW(p) · GW(p)

Thanks for the response; I'm still somewhat confused though. The question was to do with the theoretical best/worst things possible, so I'm not entirely sure whether parallels to (relatively) minor pleasures/pains are meaningful here. 

 

Specifically I'm confused about:

Then you end up into well, to what extent is that a debunking explanation that explains why humans in terms of their capacity to experience joy and suffering are unbiased but the reality is still biased

I'm not really sure what's meant by "the reality" here, nor what's meant by biased. Is the assertion that humans' intuitive preferences are driven by the range of possible things that could happen in the ancestral environment & that this isn't likely to match the maximum possible pleasure vs. suffering ratio in the future? If so, how does this lead one to end up concluding it's worse (rather than better)? I'm not really sure how these arguments connect in a way that could lead one to conclude that the worst possible suffering is a quadrillion times as bad as the best bliss is good.

Replies from: Pattern
comment by Pattern · 2022-02-19T06:10:35.863Z · LW(p) · GW(p)
so I'm not entirely sure whether parallels to (relatively) minor pleasures/pains are meaningful here. 

Ah. I suggested them because I figured that such '(relatively) minor' things are what people have experienced and thus are the obvious source for extrapolating out to theoretical maximum/s.


I don't know what's meant by 'reality' there. Your guess seems reasonable (and was more transparent than what you quoted).

I'm not sure how to guess the maximum ratio.


I'm not really sure how these arguments connect in a way that could lead one to conclude that the worst possible suffering is a quadrillion times as bad as the best bliss is good.

Likewise. (A quadrillion seems like a lot - I'd need a detailed explanation to get why someone would choose that number.)

I think...it makes sense less as emotion, than as a utility function - but that's not what is being talked about.

Part of it is...when people are well off do they pursue the greatest pleasure? I think negative extremes prompt a focus on basics. In better conditions, people may pursue more complicated things. Overall, there's something about focus I guess:

'I don't want to die' versus 'I'm happy to be alive!'. Which sentiment is stronger? It's easy to pull the first one up for an extreme thought experiment, but if people don't have that as a risk in their lives, then maybe the second thing, or the absence of the risk, doesn't have as much salience, because the risk isn't present? (Short version: a) it's hard to reason about scenarios outside of experience*, b) this might induce asymmetry in estimates or intuition.)

*I have experienced stuff and found 'wow, that was way more intense than I'd expected' - for stuff I had never experienced before.

comment by superads91 · 2022-02-05T14:36:39.362Z · LW(p) · GW(p)

Hey everyone. I'm new here. I've recently been kinda freaking out about AGI and its timelines... Especially after reading Eliezer's post about there being no fire alarm.

However, he also mentions that "one could never be sure of a new thing coming, except for the last one or two years" (something along those lines).

So, in your opinion, are we already at a stage where AGI could arrive anytime? Because due to things like GPT-3, Wu Dao 2.0 and AlphaCode, I've been getting really scared... Plus if there is something more advanced being developed in secret...

Or will there at least be a 1-2 year "last epistemic stage" which we can be sure we haven't reached yet? (as Soares also mentions)

Cause every day I've been looking out the window expecting the nano-swarms to come lol... But I'm just a layperson, so I'd like to hear some more expert opinions.

Replies from: Benito, daniel-kokotajlo, lhc, Mawrak, scarcegreengrass
comment by Ben Pace (Benito) · 2022-02-05T20:27:33.780Z · LW(p) · GW(p)

(If you're scared, in general it's good to do things that give you courage. Perhaps think through your strengths, the ways you can change the world, and make sure you have good relationships with friends/family when you need support.)

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2022-02-05T19:47:59.599Z · LW(p) · GW(p)

IMO we are already at a stage where AGI could arrive at any time in some sense, but the probability of it arriving in the next year or so is pretty small--some AI lab would need to have some major breakthrough between now and then, something that enables them to do with merely hundreds of millions of dollars of compute what seems like it should take trillions (with current algorithms). I think we probably have like eight years left or something like that.

Replies from: superads91
comment by superads91 · 2022-02-05T23:35:19.341Z · LW(p) · GW(p)

Sober view as well, and much closer to mine. I definitely agree that compute will be the big bottleneck - GPT-3 and the scaling hypothesis scare the heck out of me.

8 years makes a lot of sense, after all many predictions point to 2030.

A more paranoid me would like to ask, what number would you give to the probabilities of it arriving: a) next week, b) next year?

And also: are you also paranoid like me, looking out the window for the nano-swarms, or do you think that at least in the very, very near term it's still close to impossible?

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2022-02-06T02:51:45.758Z · LW(p) · GW(p)

I am not looking out my window for the nano-swarms; I think there's a less than 1% chance of that happening this year. We would need a completely new AI paradigm I think, which is not impossible (It's happened a bunch of times in the past, and there are a few ideas floating around that could be it) but unlikely and especially unlikely to happen all of a sudden without me hearing signs first. And then even with said new paradigm it would be surprising if takeoff was so fast that I saw nanobots before hearing any disturbing news through the grapevine.

So, <1% chance of nano-swarms surprising me this year, <<1% this week.

Maybe something like 2% chance of agentic AGI (or, APS-AI [LW · GW] to use Carlsmith's term) happening this year?

Replies from: superads91
comment by superads91 · 2022-02-06T03:58:20.273Z · LW(p) · GW(p)

Fair argument, thanks.

comment by lhc · 2022-02-17T09:16:17.118Z · LW(p) · GW(p)

Another (very weird) counterpoint: you might not see the "swarm coming" because the annexing of our cosmic endowment might look way stranger than the best strategy human minds can come up with.

I remember a safety researcher once mentioned to me that they didn't necessarily expect us to be killed, just contained, while superintelligence takes over the universe. The argument being that it might want to preserve its history (ie. us) to study it, instead of permanently destroying it. This is basically as bad as also killing everyone, because we'd still be imprisoned away from our largest possible impact. Similar to the serious component in "global poverty is just a rounding error".

Now I think if you add that our "imprisonment" might be made barely comfortable (which is quite unlikely, but maybe plausible in some almost-aligned-but-ultimately-terrible uncanny value scenarios [LW · GW]), then it's possible that there's never a discontinuous horror that we would see striking us; instead we will suddenly be blocked from our cosmic endowment without our knowledge. Things will seem to be going essentially on track. But we never quite seem to get to the future we've been waiting for.

It would be a steganographic takeoff.

Here's a (only slightly) more fleshed out argument:

If 

  • deception is something that "blocks induction on it" (eg. you can't run a million tests on deceptive optimizers and hope for the pattern on the tests to continue), and if
  • all our "deductions" are really just an assertion of induction at higher levels of abstraction (eg. asserting that Logic will continue to hold) 

then deception could look "steganographic" when it's done at really meta levels, exploiting our more basic metaphysical mistakes.

Replies from: superads91
comment by superads91 · 2022-02-18T20:47:05.949Z · LW(p) · GW(p)

Interesting stuff. And I agree. Once you have a nanosystem or something of equivalent power, humans are no longer any threat. But we're yet to be sure if such a thing is physically possible. I know many here think so, but I still have my doubts.

Maybe it's even more likely that some random narrow AI failure will start big wars before anything more fancy. Although with the scaling hypothesis on sight, AGI could come suddenly indeed.

"This is basically as bad as also killing everyone, because we'd still be imprisoned away from our largest possible impact."

Although I quite disagree with this. I'm not a huge supporter of our largest possible impact. I guess it's naive to attribute any net positive expectation to that when you look at history or at the present. In fact, such an outcome (things staying exactly the same forever) would probably be among the most positive ones in the advent of non-aligned AI. As long as we could still take care of Earth, like ending factory farming and dictatorships, it really wouldn't be that bad...

comment by Mawrak · 2022-02-05T16:07:15.281Z · LW(p) · GW(p)

I am not an expert by any means, but here are my thoughts: While I find GPT-3 quite impressive, it's not even close to AGI. All the models you mentioned are still focused on performing specific tasks. This alone will (probably) not be enough to create AGI, even if you try to increase the size of the models even further. I believe AGI is at least decades away, perhaps even a hundred years away. Now, there is a possibility of stuff being developed in secret, which is impossible to account for, but I'd say the probability of these developments being significantly more advanced than the publicly available technologies is pretty low.

Replies from: superads91
comment by superads91 · 2022-02-05T16:46:13.416Z · LW(p) · GW(p)

A sober opinion (even if quite different from mine). My biggest fear is scaling a transformer + completing it with other "parts", as in an agent (even if a dumb one), etc. Thanks

Replies from: delton137
comment by delton137 · 2022-02-18T16:13:14.249Z · LW(p) · GW(p)

Have GPT-3 and other large transformers actually led to anything with economic value? Not from what I can tell, although anecdotal reports on Twitter are that many SWEs are finding GitHub Copilot extremely useful (it's still in private beta though). I think transformers are going to start providing actual value soon, but the fact that they haven't so far, despite almost two years of breathless hype, is interesting to contemplate. I've learned to ignore hype, demos, cool cherry-picked sample outputs, and benchmark chasing, and to actually look at what is being deployed "in the real world" and bringing value to people. So many systems that looked amazing in academic papers have flopped when deployed - even from top firms - for instance Microsoft's Tay and Google Health's system for detecting diabetic retinopathy. Another example is Google's Duplex. And for how long have we heard about burger-flipping robots taking people's jobs?

There are reasons to be skeptical about a scaled-up GPT leading to AGI. I touched on some of those points [LW · GW] here. There's also an argument that the hardware costs are going to balloon so quickly as to make the entire project economically unfeasible, but I'm pretty skeptical about that.

I'm more worried about someone reverse engineering the wiring of cortical columns in the neocortex in the next few years and then replicating it in silicon.

Long story short, is existentially dangerous AI imminent? Not as far as we can see right now, knowing what we know right now (we can't see that far into the future, since it depends on discoveries and scientific knowledge we don't have). Could that change quickly at any time? Yes. There is Knightian uncertainty here, I think (to use a concept that LessWrongers generally hate lol).

Replies from: superads91
comment by superads91 · 2022-02-18T21:03:05.597Z · LW(p) · GW(p)

Economic value might not be a perfect measure. Nuclear fission didn't generate any economic value either, until 200,000 people in Japan were incinerated. My fear is that a mixture-of-experts approach could lead to extremely fast progress towards AGI. Perhaps even less - maybe all it takes is an agent AI that can code as well as humans to start a cascade of recursive self-improvement.

But indeed, a Knightian uncertainty here would already put me at some ease. As long as you can be sure that it won't happen "just anytime" before some more barriers are crossed, at least you can still sleep at night and have the sanity to try to do something.

I don't know, I'm not a technical person, that's why I'm asking questions and hoping to learn more.

"I'm more worried about someone reverse engineering the wiring of cortical columns in the neocortex in the next few years and then replicating it in silicon."

Personally, that's what worries me the least. We can't even crack C. elegans! I don't doubt that in 100-200 years we'd get there, but I see many other way faster routes.

comment by scarcegreengrass · 2022-02-24T19:22:38.330Z · LW(p) · GW(p)

In general, whenever Reason makes you feel paralyzed, remember that Reason has many things to say. Thousands of people in history have been convinced by trains of thought of the form 'X is unavoidable, everything is about X, you are screwed'. Many pairs of those trains of thought contradict each other. This pattern is all over the history of philosophy, religion, & politics. 

Future hazards deserve more research funding, yes, but remember that the future is not certain.

Replies from: superads91
comment by superads91 · 2022-02-24T21:16:55.176Z · LW(p) · GW(p)

"Thousands of people in history have been convinced by trains of thought of the form 'X is unavoidable, everything is about X, you are screwed'."

Care to give a few examples? Because I'd venture to say that, except for religious and other superstitious beliefs, and except for crazy lunatics like fascists and communists, they were mostly right.

"the future is not certain"

Depends on what you mean by that. If you mean that it's not extremely likely, like 90% plus, that we will develop some truly dangerous form of AI this century that will pose immense control challenges, then I'd say you're deeply misguided given the smoke signals that have been coming up since 2017.

I mean, it's like worrying about nuclear war. Is it certain that we'll ever get a big nuclear war? No. Is it extremely likely if things stay the same and if enough time passes (10, 50, 100, 200, 300 years)? Hell yes. I mean, just look at the current situation...

Though I don't care about nuclear war much because it is also extremely likely that it will come with a warning, so you can also run to the countryside, and even then if things go bad like you're starving to death or dying of radiation poisoning, you can always put an end to your own suffering. With AI you might not be so lucky. You might end in an unbreakable dictatorship a la With Folded Hands.

How can you not feel paralyzed when you see chaos pointed at your head and at the heads of other humans, coming in as little as 5 or 10 years, and you see absolutely no solution, or much less anything you can do yourself?

We can't even build a provably safe plane, how are we gonna build a provably safe TAI with the work of a few hundred people over 5-30 years, and with complete ignorance by most?

The world would have to wake up, and I don't think it will.

Really, the only ways we will not build dangerous and uncontrollable AI is if either we destroy ourselves by some other way first (or even just with narrow AI), or the miracle happens that someone cracks advanced nanotechnology/magic through narrow AI and becomes a benevolent and omnipotent world dictator. There's really no other way we won't end up doing it.

comment by Pattern · 2022-02-09T22:57:14.695Z · LW(p) · GW(p)

What's the art on the frontpage now?

Replies from: Raemon
comment by Raemon · 2022-02-09T23:46:40.313Z · LW(p) · GW(p)

It’s our celebration art for the Best of LessWrong 2020, ie the end of the review. It was generated by a neural net with the prompt:

The ancient hidden library of luminescent ethereal alchemy crystalline fractals books knowledge secrets, aquarelle by Ross Tran outrun #conceptart #pixelart #monochrome | green on white color palette

Replies from: ChristianKl
comment by ChristianKl · 2022-02-10T09:11:33.514Z · LW(p) · GW(p)

Is the neural net publicly available?

Replies from: habryka4
comment by habryka (habryka4) · 2022-02-13T07:30:54.806Z · LW(p) · GW(p)

Yep, it's the one in the Eleuther AI Discord!

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2022-02-05T22:31:48.018Z · LW(p) · GW(p)

Is there any way for my non-American friend to fly domestically in the USA without his passport? He has a student ID but no other form of ID at the moment (no driver's license, etc.). The reason he doesn't have his passport is that he sent it in to get renewed and it probably won't arrive back in time.

He still has a few weeks before the flight, so he can e.g. go apply for a State ID if that's a possibility. Unfortunately, it seems that you need a passport to get a State ID... He probably can't get a driver's license for similar reasons, and anyway he doesn't know how to drive.

comment by niplav · 2022-02-03T11:06:06.038Z · LW(p) · GW(p)

This user has been banned for this post

Can someone recommend me a textbook on causal inference?

I read the first ~120 pages of Causality by Pearl, and I think it's not the right book for me. The two biggest problems were the lack of exercises and the long historical digressions. I also had difficulty wrapping my head around the big picture, even when understanding the low-level concepts (I don't think I could implement a full stack for causal inference, even with the book, although I could probably implement the IC* algorithm).

I am still looking for something that's formal, and will probably try chapter 16 of “All of Statistics” by Wasserman.

Replies from: Vaniver, Raemon, Charlie Steiner
comment by Vaniver · 2022-02-03T19:40:53.426Z · LW(p) · GW(p)

Did you check out his more recent primer? I think it's better on exercises, tho I don't think it's fully there yet. 

Replies from: niplav
comment by niplav · 2022-02-04T11:05:19.430Z · LW(p) · GW(p)

That does look more concise and to the point, thank you.

comment by Raemon · 2022-02-03T17:01:36.723Z · LW(p) · GW(p)

This comment says ‘user has been banned for this post’ which is super confusing to me

Replies from: niplav
comment by niplav · 2022-02-03T17:19:00.814Z · LW(p) · GW(p)

Sorry, I was making a meta joke about “Causality” and Pearl being super revered here, and rejecting the book being nearly heresy. Should I remove it?

Replies from: Raemon, habryka4
comment by Raemon · 2022-02-03T18:46:53.179Z · LW(p) · GW(p)

I definitely didn't get the joke, but now that you have this meta commentary here, I think it's fine.

comment by habryka (habryka4) · 2022-02-03T21:33:51.690Z · LW(p) · GW(p)

Nah, does indeed seem kinda funny.

comment by Charlie Steiner · 2022-02-05T04:31:25.967Z · LW(p) · GW(p)

Elements of Causal Inference by Peters is supposed to be good. I read Probabilistic Graphical Models by Koller and Friedman but didn't like it much, but I liked Causality, so maybe we'll be reversed and it's your jam.

comment by Pattern · 2022-02-15T19:04:27.967Z · LW(p) · GW(p)

Is the font for comments different from the font for posts?

comment by tailcalled · 2022-02-04T07:20:10.879Z · LW(p) · GW(p)

Should I expand this comment into a proper top-level post that goes into more detail in the concept and does longer/more reviews of methods?

https://www.lesswrong.com/posts/oqzasmQ9Lye45QDMZ/causality-transformative-ai-and-alignment-part-i?commentId=rJ7Ssb82bR677F2hq [LW(p) · GW(p)]

Replies from: Pattern
comment by Pattern · 2022-02-08T18:41:47.473Z · LW(p) · GW(p)

1.

That does sound interesting.

2.

If you don't want to do one big post that's super long, but still want the benefits of that structure, you could try drafting it out, then doing parts of it as shorter posts.

(I don't know how long of a post you're thinking of doing, but I'm throwing that out there in case it helps.*)

3.

If you want to do a post about doing posts (or a series of posts) afterwards, there might be interest as well. Or it might just be helpful in a 'personal blogs are useful notes for their author' way.


*Maybe this already exists, but a link FAQ or something about making posts might make a good addition to the open thread text. Or a related tag or two, if it exists.

comment by Tofly · 2022-02-20T06:12:15.017Z · LW(p) · GW(p)

[Meta] The jump from Distinct Configurations [? · GW] to Collapse Postulates [? · GW] in the Quantum Physics and Many Worlds [? · GW] sequence is a bit much - I don't think the assertiveness of Collapse Postulates is justified without a full explanation of how many worlds explains things.  I'd recommend adding at least On Being Decoherent [LW · GW] in between.

Replies from: Pattern
comment by Pattern · 2022-02-25T06:03:20.927Z · LW(p) · GW(p)

I was going to suggest this should go on a talk page, but it looks like tags have those but not Sequences.

comment by blf · 2022-02-14T08:54:45.998Z · LW(p) · GW(p)

Contest: making a one-page comic on artificial intelligence for amateur mathematicians by March 9. The text must be in French and the original drawing on paper must be sent to them. Details at https://images.math.cnrs.fr/Onzieme-edition-de-Bulles-au-carre-a-vos-crayons-jusqu-au-9-mars?lang=fr

I'm not affiliated in any way with this contest, but I figured there may be some people interested in popularizing Alignment. I can help translate text into French. The drawing quality does not need to be amazing; see some previous winners at https://images.math.cnrs.fr/Resultats-du-9e-concours-Bulles-au-carre.html?lang=fr