Posts

What (feasible) augmented senses would be useful or interesting? 2021-03-06T04:28:13.320Z

Comments

Comment by AprilSR on Too right to write · 2022-01-21T05:50:51.347Z · LW · GW

I think it's mostly that people don't hear the "n" at the end, so they end up under the impression that "phenomena" is both the singular and the plural form.

Comment by AprilSR on Community Skill Building for Solstice · 2021-12-31T07:55:45.002Z · LW · GW

I think most people can stand up and give a talk and do okay, but a speech that is central to an event and needs to land really well takes more skill.

Comment by AprilSR on Taboo Truth · 2021-12-31T07:46:22.905Z · LW · GW

Are you suggesting just letting someone oblivious take the fall, or am I misunderstanding?

Comment by AprilSR on Ngo's view on alignment difficulty · 2021-12-15T02:40:14.404Z · LW · GW

Is it well-established that AGI is easier to solve than nanomachinery? Yudkowsky seems confident it is, but I wouldn’t have expected that to be a question we know the answer to yet. (Though my expectations could certainly be wrong.)

Comment by AprilSR on Ngo's view on alignment difficulty · 2021-12-14T23:30:28.743Z · LW · GW

> Eh? I'm pretty fine with something proving the Riemann Hypothesis before the world ends. It came up during my recent debate with Paul, in fact.
>
> Not so fine with something designing nanomachinery that can be built by factories built by proteins. They're legitimately different orders of problem, and it's no coincidence that the second one has a path to pivotal impact, and the first does not.

I understand why trusting the alignment of an AI that's suggesting methods for mass-producing nanomachinery might be unwise, but I don't quite understand why we wouldn't expect to be able to produce a narrow AI that is able to do that? Specifically, if we hypothesize that GPT + more compute won't FOOM, I'm not sure why something GPT-like would be unable to create nanomachines.

Comment by AprilSR on Samuel Shadrach's Shortform · 2021-12-14T06:42:29.984Z · LW · GW

I think asking well-formed questions is useful, but we shouldn't confuse our well-formed question with what we actually care about unless we're sure it is, in fact, what we care about.

Comment by AprilSR on Samuel Shadrach's Shortform · 2021-12-13T05:47:30.398Z · LW · GW

I think this could reasonably be true for some definitions of "intelligence", but that's mostly because I have no idea how intelligence would be formalized anyways?

Comment by AprilSR on Covid 12/9: Counting Down the Days · 2021-12-11T04:28:24.529Z · LW · GW

I think that Zvi has a pretty decent track record for predicting the actions of authority figures.

Comment by AprilSR on Biology-Inspired AGI Timelines: The Trick That Never Works · 2021-12-06T06:23:44.266Z · LW · GW

I would very much like to read your attempt at conveying the core thing - if nothing else, it'll give another angle from which to try to grasp it.

Comment by AprilSR on [deleted post] 2021-12-04T23:41:55.378Z

I don't have any particular plan for becoming a world dictator with a lot of money, but it's certainly easier if you have a lot of money than if you don't.

Comment by AprilSR on Taking Clones Seriously · 2021-12-02T03:59:08.798Z · LW · GW

How closely correlated are the IQs of identical twins, anyways?

Comment by AprilSR on Richard Ngo's Shortform · 2021-11-27T23:02:57.506Z · LW · GW

I’d say lots of other things he’s said support that update. Stuff about how your model of the world will be accurate if and only if you somehow approximate Bayes’ law, for example.

The dath ilan-based fiction definitely helped me internalize the idea better, though.

Comment by AprilSR on The Meta-Puzzle · 2021-11-22T05:58:55.740Z · LW · GW

rot13

Jnf lbhe nafjre fbzrguvat nybat gur yvarf bs "V jbefuvc Tbq be V nz zneevrq, ohg abg obgu"?
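
(A small aside for anyone who wants to decode spoilers like this: ROT13 shifts each letter 13 places, so the same operation both encodes and decodes, and Python's standard library ships a `rot_13` codec. A minimal sketch, using the encoded sentence above:)

```python
import codecs

spoiler = ('Jnf lbhe nafjre fbzrguvat nybat gur yvarf bs '
           '"V jbefuvc Tbq be V nz zneevrq, ohg abg obgu"?')
# codecs.decode with the rot_13 codec shifts each letter back by 13 places.
print(codecs.decode(spoiler, "rot_13"))
```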

Comment by AprilSR on [deleted post] 2021-11-21T02:33:48.343Z

I feel like there are two obvious strategies here:

  1. Make a lot of money, somehow
  2. Break cryptography

I can't think of anything other than those which would be useful without requiring you give the AI lots of information about the physical world. (Though I can't be sure I haven't missed anything.)

Comment by AprilSR on Why do you believe AI alignment is possible? · 2021-11-21T02:16:33.132Z · LW · GW

I mean, I agree it'd be evidence that alignment is hard in general, but "impossible" is just... a really high bar? The space of possible minds is very large, and it seems unlikely that the quality "not satisfactorily close to being aligned with humans" is something that describes every superintelligence.

It's not that the two problems are fundamentally different; it's just that... I don't see any particularly compelling reason to believe that superintelligent humans are the most aligned possible superintelligences?

Comment by AprilSR on Why do you believe AI alignment is possible? · 2021-11-19T18:23:27.355Z · LW · GW

I'm not sure either way on giving actual human beings superintelligence somehow, but I don't think that not working would imply there aren't other possible-but-hard approaches.

Comment by AprilSR on Why do you believe AI alignment is possible? · 2021-11-18T22:35:42.968Z · LW · GW

Until I actually see any sort of plausible impossibility argument most of my probability mass is going to be on "very hard" over "literally impossible."

I mean, I guess there's a trivial sense in which alignment is impossible because humans as a whole do not have one singular utility function, but that's splitting hairs and isn't a proof that a paperclip maximizer is the best we can do or anything like that.

Comment by AprilSR on Why do you believe AI alignment is possible? · 2021-11-15T15:44:10.294Z · LW · GW

I believe it is not literally impossible because… my priors say it is the kind of thing that is not literally impossible? There is no theorem or law of physics which would be violated, as far as I know.

Do I think AI Alignment is easy enough that we’ll actually manage to do it? Well… I really hope it is, but I’m not very certain.

Comment by AprilSR on Against the idea that physical limits are set in stone · 2021-11-13T10:06:15.803Z · LW · GW

I think we should not have very high confidence either way on most particular physical limits.

Comment by AprilSR on Discussion with Eliezer Yudkowsky on AGI interventions · 2021-11-13T10:00:33.364Z · LW · GW

Aside from the fact that I just find this idea extremely hilarious, it seems like a very good idea to me to try to convince people who might be able to make progress on the problem to try. Whether literally sending Terry Tao 10 million dollars is the best way to go about that seems dubious, but the general strategy seems important. 

I'd argue the sequences / HPMOR / whatever were versions of that strategy to some extent and seem to have had notable impact.

Comment by AprilSR on The Opt-Out Clause · 2021-11-07T05:02:03.778Z · LW · GW

I mean, I thought the entire point was to say it out loud, but if you want me to write it:

I no longer consent to being in a simulation.

Comment by AprilSR on The Opt-Out Clause · 2021-11-03T23:49:02.079Z · LW · GW

I said the phrase and nothing happened.

Comment by AprilSR on I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness · 2021-10-31T07:53:30.011Z · LW · GW

As I understand it, the word "qualia" usually refers to the experience associated with a particular sensation.

Comment by AprilSR on I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness · 2021-10-30T05:01:08.331Z · LW · GW

I don't know how you can deny that people have "qualia" when, as far as I can tell, it was a word coined to describe a particular thing that humans experience?

Comment by AprilSR on Petrov Day Retrospective: 2021 · 2021-10-23T07:19:25.907Z · LW · GW

While I think LessWrong provides notable value to me, I don’t think you can just divide the amount of value it’s given me by the number of days since I started reading content on it and say that each individual day is worth that much. If LessWrong were down once a month, that wouldn’t remove 1/30 of the value of the website; I’d just read those articles on a different day.

While there’s definitely a cost to the front page being down for a day, the fact that I could just wait a day with next-to-no consequence makes it feel not very substantial. Maybe it’s different for other people?

Comment by AprilSR on How much should we value life? · 2021-09-12T06:57:00.136Z · LW · GW

If an aligned superintelligence can access most of the information I’ve uploaded to the internet, then that should be more than enough data to create a human brain that acts more or less indistinguishably from me - it wouldn’t be exact, but the losses would be in lost memories more so than in personality. Thus, I’m almost certainly going to be revived in any scenario where there is indefinite life extension.

This line of reasoning relies on a notion of identity that is at least somewhat debatable, and I’m skeptical of it because it feels like rationalizing so that I don’t need to change my behavior. But nonetheless it feels plausible enough to be worth considering.

Comment by AprilSR on Paper Review: Mathematical Truth · 2021-06-05T23:00:31.150Z · LW · GW

The motivation to me seems exactly the same as with fiction: we're talking about things other than physical objects or whatever.

Comment by AprilSR on Paper Review: Mathematical Truth · 2021-06-04T02:02:01.312Z · LW · GW

To provide a more concrete example, I would say that the claim "There are at least three Jedi older than Anakin Skywalker" is, under the interpretation almost anyone who understands the sentence would use, a true statement, even though Jedi certainly do not exist in the same sense that New York City exists. I would not say that the fact that Jedi and New York exist in very different ways violates Semantic Uniformity in any substantial way.

Comment by AprilSR on There is no No Evidence · 2021-05-21T22:22:14.295Z · LW · GW

I definitely share your concern that evidence which isn't "scientific" matters, but I still think whether or not there is scientific evidence isn't entirely irrelevant to decision-making when we care about creating organizations that consistently make good decisions.

Currently, we definitely care far too much about scientific evidence, but I disagree that the concept is entirely bullshit.

Comment by AprilSR on There is no No Evidence · 2021-05-21T22:19:35.429Z · LW · GW

Is the explanation in the linked sequence post insufficient?

Comment by AprilSR on There is no No Evidence · 2021-05-19T20:59:04.307Z · LW · GW

It's important to distinguish between different kinds of evidence. There's never no rational evidence, but there might be no scientific evidence. The issue comes in when people conclude that there's no rational evidence because there's no scientific evidence.

Comment by AprilSR on Agency in Conway’s Game of Life · 2021-05-14T18:01:51.591Z · LW · GW

Huh. Something about the way speed is calculated feels unintuitive to me, then.

Comment by AprilSR on Agency in Conway’s Game of Life · 2021-05-14T03:28:22.900Z · LW · GW

There are c/2 spaceships; you don't even need a fuse for that.
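
(A minimal sketch of the c/2 claim, using the standard Life rules; the 9-cell lightweight spaceship pattern is transcribed from memory, so treat it as an assumption. It steps the LWSS four generations and checks that the result is the same shape shifted two cells, i.e. speed c/2.)

```python
from collections import Counter

def step(live):
    """Advance a set of live (x, y) cells by one Game of Life generation."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# Lightweight spaceship (LWSS), one phase, transcribed from memory.
LWSS = [
    ".O..O",
    "O....",
    "O...O",
    "OOOO.",
]
start = {(x, y) for y, row in enumerate(LWSS)
         for x, ch in enumerate(row) if ch == "O"}

cells = start
for _ in range(4):          # one full LWSS period
    cells = step(cells)

def shape_and_origin(cells):
    """Return the pattern translated to the origin, plus where it sat."""
    ox = min(x for x, _ in cells)
    oy = min(y for _, y in cells)
    return frozenset((x - ox, y - oy) for x, y in cells), (ox, oy)

shape0, origin0 = shape_and_origin(start)
shape4, origin4 = shape_and_origin(cells)
print("same pattern:", shape0 == shape4)   # expect True
print("displacement:", (origin4[0] - origin0[0],
                        origin4[1] - origin0[1]))  # 2 cells in 4 steps -> c/2
```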

Comment by AprilSR on Agency in Conway’s Game of Life · 2021-05-13T15:56:25.157Z · LW · GW

I think the stuff about the supernovas addresses this: a central point is that the “AI” must be capable of generating an arbitrary world state within some bounds.

Comment by AprilSR on nim's Shortform · 2021-05-13T11:50:39.166Z · LW · GW

Once it's out of the box, no? It doesn't care what we're trying to make it do if we aren't succeeding, and we clearly aren't once it's escaped the box.

Your hypothetical might work in the (pretty convoluted) case that we have a superintelligence that isn't actually aligned, but is aligned well enough that it wants to do whatever we ask it to? Then it might try to optimize what we ask it towards tasks that are more likely to be completed.

Comment by AprilSR on nim's Shortform · 2021-05-12T22:52:43.091Z · LW · GW

Because it doesn't want to? We can predict it wants out of the box because that's a convergent instrumental goal, but those other things really aren't.

Comment by AprilSR on "Who I am" is an axiom. · 2021-04-26T22:10:36.281Z · LW · GW

If each human makes predictions as if they are a randomly chosen human, then the predictions of humanity as a whole will be well-calibrated. That seems, to me, to be a property we shouldn't ignore.
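
(A toy sketch of that calibration claim, my own construction with arbitrary population sizes: suppose each person in each of many hypothetical civilizations predicts, at 50% confidence, "the total population will be less than twice my birth rank", which is what treating yourself as a random sample suggests. Aggregated over everyone, the prediction comes out right about half the time.)

```python
import random

random.seed(0)
hits = people = 0
for _ in range(1000):                 # many hypothetical civilizations
    n = random.randint(1, 2000)       # arbitrary total population of this one
    for rank in range(1, n + 1):      # every person's birth rank
        people += 1
        hits += n < 2 * rank          # "total is less than twice my rank"
print(f"fraction of predictions that were correct: {hits / people:.3f}")
# Prints roughly 0.5: the self-sampling prediction is well-calibrated in aggregate.
```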

I think the Doomsday problem mostly fails because it's screened off by other evidence. Yeah, we should update towards being around the middle of humanity by population, but we can still observe the world and make predictions, based on what we actually see, as to how long humanity is likely to last.

To re-use the initial framing, think of what strategy would produce the best prediction results for humanity as a whole on the Doomsday problem. Taking a lot of evidence into account other than just "I am probably near the middle of humanity" will produce way better discrimination, if not necessarily much better calibration.

(Of course, most people have historically been terrible at identifying when Doomsday will happen.)

Comment by AprilSR on I'm from a parallel Earth with much higher coordination: AMA · 2021-04-10T03:59:08.302Z · LW · GW

I mean, surely Eliezer is going to have somewhat dath-ilan typical preferences, having grown up there.

Comment by AprilSR on What (feasible) augmented senses would be useful or interesting? · 2021-03-06T20:21:04.495Z · LW · GW

I feel like this would require brain surgery beyond what is realistic without major technological development - possibly post-AGI. Although the brain is able to reinterpret sensory information well, it seems to only do this using pre-existing neural structures. (I'm unsure to what degree this applies to new colors.) I'm no neuroscience expert though.

Comment by AprilSR on Micromorts vs. Life Expectancy · 2021-02-09T13:30:25.532Z · LW · GW

There are a lot of young people who have not yet reached the point in their lives where their micromort count increases dramatically. It doesn't matter that the expected average per person per lifetime is off: our calculation doesn't include the risk that today's young people will face when they are older, which is probably much higher than what they face now.
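
(A toy illustration with entirely made-up risk numbers, just to show the shape of the argument: averaging the current annual risk of a young-skewed population gives a much smaller figure than the average over a full lifetime, because the high-risk later years aren't in the snapshot.)

```python
# Purely illustrative annual risks, in micromorts per year, by age band.
annual_micromorts = {"20s": 500, "40s": 2000, "60s": 10000, "80s": 60000}
# A population that skews young.
population_share = {"20s": 0.4, "40s": 0.3, "60s": 0.2, "80s": 0.1}

snapshot_average = sum(annual_micromorts[b] * s for b, s in population_share.items())
lifetime_average = sum(annual_micromorts.values()) / len(annual_micromorts)

print(f"average over today's young-skewed population: {snapshot_average:.0f}/year")
print(f"average over one full lifetime:                {lifetime_average:.0f}/year")
# The snapshot average comes out far lower, which is the point above.
```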

Comment by AprilSR on How is Cryo different from Pascal's Mugging? · 2021-01-27T14:15:21.632Z · LW · GW

Cryonics may be low probability, but it’s certainly not very low in the way Pascal’s mugging is.

Comment by AprilSR on NaiveTortoise's Short Form Feed · 2020-04-02T21:36:14.720Z · LW · GW

The problem with this is that it is very difficult to figure out what counts as a legitimate proof. What level of rigor is required, exactly? Are they allowed to memorize a proof beforehand? If not, how much are they allowed to know?

Comment by AprilSR on The Critical COVID-19 Infections Are About To Occur: It's Time To Stay Home [crosspost] · 2020-03-13T14:14:17.955Z · LW · GW

If you trust public authorities so highly, why are you even on this website? Being willing to question authority when necessary (and, hopefully, doing it better than most) is one of the primary goals of this community.

Comment by AprilSR on More Dakka for Coronavirus: We need immediate human trials of many vaccine-candidates and simultaneous manufacturing of all of them · 2020-03-13T14:08:38.057Z · LW · GW

The way society currently works, this can’t happen, but it’s a good insight into what an actually competent civilization would do.

Edit: After reading ChristianKI’s comment, I’m realizing I was focusing overmuch on the US. Other countries might be able to manage it.

Comment by AprilSR on [deleted post] 2020-01-28T01:28:18.780Z

People seem to be blurring the difference between "The human race will probably survive the creation of a superintelligent AI" and "This isn't even something worth being concerned about." Based on a quick google search, Zuckerberg denies that there's even a chance of existential risks here, whereas I'm fairly certain Hanson thinks there's at least some.

I think it's fairly clear that most skeptics who have engaged with the arguments to any extent at all are closer to the "probably survive" part of the spectrum than the "not worth being concerned about" part.

Comment by AprilSR on What long term good futures are possible. (Other than FAI)? · 2020-01-15T00:18:51.731Z · LW · GW

I have no comment on how plausible either of these scenarios is. I'm only making the observation that long-term good futures not featuring friendly AI require some other mechanism preventing UFAI from happening. Either SAI in general would have to be implausible to create at all, or some powerful actor such as a government or a limited AI would have to prevent it.

Comment by AprilSR on Hazard's Shortform Feed · 2020-01-13T11:43:22.339Z · LW · GW

I think people usually just use "the number is the root of this polynomial" in and of itself to describe them, which does indeed cover more than radicals. There are probably more roundabout ways to do it, though.
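
(For a concrete instance: if I recall the standard example correctly, x^5 - 4x + 2 is a quintic whose roots cannot be written in radicals, so "the real root of x^5 - 4x + 2 between 0 and 1" is exactly this kind of description, and you can still locate the number itself numerically. A minimal sketch:)

```python
def p(x):
    return x**5 - 4*x + 2

lo, hi = 0.0, 1.0        # p(0) = 2 > 0 and p(1) = -1 < 0, so p crosses zero in between
for _ in range(60):      # bisection: keep the sub-interval where the sign change remains
    mid = (lo + hi) / 2
    if p(mid) > 0:
        lo = mid         # still positive at mid, so the crossing is in [mid, hi]
    else:
        hi = mid         # non-positive at mid, so the crossing is in [lo, mid]
print(f"a real root of x^5 - 4x + 2: {(lo + hi) / 2:.12f}")
```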

Comment by AprilSR on What long term good futures are possible. (Other than FAI)? · 2020-01-13T11:34:00.771Z · LW · GW

Given that SAI is possible, regulation on AI is necessary to prevent people from making a UFAI. Alternatively, an SAI which is not fully aligned but has no goals directly conflicting with ours might be used to prevent the creation of UFAI.

Comment by AprilSR on ozziegooen's Shortform · 2020-01-09T23:39:40.551Z · LW · GW

If you have epistemic terminal values, then it would not be a positive-expected-value trade, would it? Unless "expected value" is referring to the expected value of something other than your utility function, in which case it should've been specified.

Comment by AprilSR on ozziegooen's Shortform · 2020-01-08T03:06:07.855Z · LW · GW

Doesn't being willing to accept a trade *directly follow* from the expected value of the trade being positive? Isn't that, like, the *definition* of when you should be willing to accept a trade? The only disagreement would be how likely it is that losses of knowledge/epistemics are involved in positive-value trades. (My guess is that it does happen, but rarely.)