post by [deleted]

Comments sorted by top scores.

comment by [deleted] · 2022-03-14T18:08:53.931Z · LW(p) · GW(p)
comment by [deleted] · 2022-01-04T18:25:15.820Z · LW(p) · GW(p)
Replies from: Pattern
comment by Pattern · 2022-01-04T18:39:10.085Z · LW(p) · GW(p)

If you read https://meaningness.com there's probably some stuff in there about 'partial control'. (Though I haven't double checked what is where since https://metarationality.com got broken off into a separate website.)

It's critical of a lot of stuff on LW in a particular, reasoned fashion.

Although you might see some criticism on here as well, like this post today: https://www.lesswrong.com/posts/cBH9FT7AWNNhJycaG/the-map-territory-distinction-creates-confusion [LW · GW]

LessWrong isn't exactly founded on the map-territory model of truth, but it's definitely pretty core to the LessWrong worldview. The map-territory model implies a correspondence theory of truth. But I'd like to convince you that the map-territory model creates confusion and that the correspondence theory of truth, while appealing, makes unnecessary claims that infect your thinking with extraneous metaphysical assumptions. Instead we can see what's appealing about the map-territory metaphor but drop most of it in favor of a more nuanced and less confused model of how we know about the world.
Replies from: None
comment by [deleted] · 2022-01-05T08:42:29.301Z · LW(p) · GW(p)
Replies from: Pattern, Pattern, Pattern
comment by Pattern · 2022-01-05T22:07:56.296Z · LW(p) · GW(p)
Do you by any chance have a link to a summary or review?

There are some summaries in the (hypertext) book, but they're probably too short to give an overview.

I could write a review, but I'd probably want to PM rather than post that.

One reason I haven't written a review is that the book is easy to read, and short enough that a review probably wouldn't save much time without some work.


I could try to summarize anything you have questions about after or while reading this: https://meaningness.com/control

A dialogue might be more (immediately) constructive and save time relative to trying to cover everything.

comment by Pattern · 2022-01-05T21:59:50.376Z · LW(p) · GW(p)

That is, however, not as short as it could be with skimming. I figured the first two sections are more prerequisites than things that go off somewhere else.

Something rather fast, which might fail, is just reading the page: https://meaningness.com/control. Maybe it has what you're looking for, maybe it doesn't. If you have any questions, feel free to PM me.


I found that page using this search: site:meaningness.com partial control

The next two hits from that search each have only one in-page match for 'partial', at the very end. Beyond that, the hits seem to come from a repeated phrase, which is just a short summary of a section in the table of contents.

comment by Pattern · 2022-01-05T21:54:41.602Z · LW(p) · GW(p)

I don't have a map of what is a prerequisite of what. But assuming that's handled if you read it in order: 'partial control' is addressed in https://meaningness.com/control. I'd guess you can read section 1 (Why meaningness? and its 4 subpages) and section 2 (Stances and its roughly 14 subpages). The page on control comes late in section 3. The preceding subpages don't have subpages of their own, but eternalism has 3 sub-subpages of some kind or another before it.

(Things that might change: https://meaningness.com/meaningness-practice, which comes late in part 2, contains a subsection that is currently 100 words but could become more relevant to what you're looking for if it's updated or if pages are added after it. That possibility is mentioned in a note from July 2014 though, so if you read this soon, I'm guessing it won't have happened by then.)

I'm not counting https://meaningness.com/all-dimensions-schematic-overview toward the word count because it's a bunch of charts. (Though those might help summarize things if you have a little of the necessary background.)

Here's how long the first two sections are (using https://wordcount.com, copying the text in page by page and separating pages with a return, a "-", and another two returns, and skipping the table of contents on the left because it's repeated on every page. Pasting in the URL led to a massive overcount, by at least an order of magnitude, possibly from some built-in recursive counting of the links or something, so I didn't use that):

This does not include the comments (a separate page, which you don't have to read but which might be useful if you're confused or have questions, though such things might also be answered later on in the book).

Words: 12,154
Characters: 80,013
Characters without spaces: 67,310
Syllables: 21,985
Sentences: 827
Paragraphs: 487*

*That paragraph count includes every "-" I added, all 18 of them, so it's actually 469 paragraphs.


If you figure section 3 is 75% of the length of all the stuff before it, then, including the page on control, that's:

Estimated numbers:

Words: 8,000
Characters: 60,000
Characters without spaces: 50,250
Syllables: 16,500
Sentences: 620.25
Paragraphs: 351.75


That comes out to an estimated 20,000 words.
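
To make the arithmetic explicit, here is a minimal Python sketch of the estimate above. The 0.75 ratio is the assumption stated earlier; note that a straight 75% of the measured 12,154 words comes out a little above the rounder 8,000 figure.

```python
# Minimal sketch: scale the measured counts for sections 1-2 by an assumed
# 0.75 to approximate section 3 (through the control page), then add the
# word counts for a rough total.

measured = {  # wordcount.com output for sections 1 and 2
    "words": 12_154,
    "characters": 80_013,
    "characters_no_spaces": 67_310,
    "syllables": 21_985,
    "sentences": 827,
    "paragraphs": 469,  # after subtracting the 18 added "-" separators
}

ratio = 0.75  # assumed length of section 3 relative to sections 1-2
section_3 = {name: count * ratio for name, count in measured.items()}

total_words = measured["words"] + section_3["words"]
print(f"section 3 words: ~{section_3['words']:,.0f}")  # ~9,116
print(f"total words:     ~{total_words:,.0f}")         # ~21,270; with the rounder figures above, ~20,000
```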

comment by [deleted] · 2021-11-19T09:33:08.599Z · LW(p) · GW(p)
comment by [deleted] · 2021-11-09T16:42:02.763Z · LW(p) · GW(p)
Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2021-11-09T21:46:23.962Z · LW(p) · GW(p)

Plus some loops to iterate if the limit is too large. And handle errors. Plus some way to share the credentials securely - or make it into a browser plugin.
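
The post this replies to is deleted, so the surrounding context is gone; as a rough illustration of the kind of thing described (a loop that pages through an API when a single request's limit is too small, basic error handling, and the credential read from the environment rather than hard-coded), here is a minimal Python sketch. The endpoint URL, parameter names, and API_TOKEN variable are all hypothetical.

```python
# Hypothetical sketch only: page through an API in chunks, retry on transient
# errors with simple backoff, and keep the credential out of the script.
import os
import time
import requests

API_URL = "https://example.com/api/items"  # hypothetical endpoint
TOKEN = os.environ["API_TOKEN"]            # shared out-of-band, not hard-coded

def fetch_all(limit=100, max_retries=3):
    items, offset = [], 0
    while True:
        for attempt in range(max_retries):
            try:
                resp = requests.get(
                    API_URL,
                    params={"limit": limit, "offset": offset},
                    headers={"Authorization": f"Bearer {TOKEN}"},
                    timeout=30,
                )
                resp.raise_for_status()
                break
            except requests.RequestException:
                if attempt == max_retries - 1:
                    raise
                time.sleep(2 ** attempt)  # back off before retrying
        batch = resp.json()
        if not batch:            # empty page means we've seen everything
            return items
        items.extend(batch)
        offset += len(batch)     # advance past what we've already fetched
```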

comment by [deleted] · 2022-03-12T11:10:38.296Z · LW(p) · GW(p)
comment by [deleted] · 2022-01-03T20:08:44.580Z · LW(p) · GW(p)
Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2022-01-04T22:30:47.959Z · LW(p) · GW(p)

Logical uncertainty is hard.  But the intuition that I have is that humans exist, so there's at least a proof of concept for a sort of aligned AGI (although admittedly not a proof of concept for an ASI).

Replies from: None
comment by [deleted] · 2022-01-05T06:13:11.480Z · LW(p) · GW(p)
Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2022-01-05T14:13:23.241Z · LW(p) · GW(p)

I don't think it's that weak?

Replies from: None
comment by [deleted] · 2022-01-05T18:51:09.005Z · LW(p) · GW(p)
Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2022-01-05T19:35:05.771Z · LW(p) · GW(p)

But if your definition of alignment is "an AI that does things in a way such that all humans agree on its ethical choices", I think you're doomed from the start, so this counterintuition proves too much.  I don't think there is an action an AI could take or a recommendation it could make that would satisfy that criterion (in fact, many people would say that the AI by its nature shouldn't be taking actions or making recommendations).

Replies from: None
comment by [deleted] · 2022-01-06T07:47:50.110Z · LW(p) · GW(p)
Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2022-01-06T14:39:09.020Z · LW(p) · GW(p)

It seems like something like "An AI that acts and reasons in a way that most people who are broadly considered moral consider moral" would be a pretty good outcome.

Replies from: None
comment by [deleted] · 2022-01-06T17:41:55.420Z · LW(p) · GW(p)
comment by [deleted] · 2021-12-26T10:21:04.162Z · LW(p) · GW(p)
Replies from: ChristianKl
comment by ChristianKl · 2021-12-28T10:51:56.728Z · LW(p) · GW(p)

The problem is that this creates a lot of incentives for corruption.

Replies from: None
comment by [deleted] · 2021-12-28T13:01:57.738Z · LW(p) · GW(p)
Replies from: ChristianKl
comment by ChristianKl · 2021-12-28T14:55:20.889Z · LW(p) · GW(p)

People will try to drive votes to get their predictions to come true and thus the vote count becomes a worse signal for quality.

Replies from: None
comment by [deleted] · 2021-12-28T17:48:13.188Z · LW(p) · GW(p)
Replies from: ChristianKl
comment by ChristianKl · 2021-12-28T21:35:50.454Z · LW(p) · GW(p)

In the case of LessWrong, comments that argue against the OP or for its value can affect voting.

Replies from: None
comment by [deleted] · 2021-12-29T07:47:36.662Z · LW(p) · GW(p)
comment by [deleted] · 2021-12-14T15:39:54.551Z · LW(p) · GW(p)
comment by [deleted] · 2021-11-26T21:10:19.336Z · LW(p) · GW(p)
Replies from: None
comment by [deleted] · 2021-11-27T04:59:07.993Z · LW(p) · GW(p)
comment by [deleted] · 2021-11-21T18:44:28.834Z · LW(p) · GW(p)
comment by [deleted] · 2021-11-16T20:12:11.536Z · LW(p) · GW(p)
comment by [deleted] · 2021-11-04T13:52:30.953Z · LW(p) · GW(p)
Replies from: Dagon
comment by Dagon · 2021-11-04T14:47:54.681Z · LW(p) · GW(p)

I haven't seen any government, let alone the set of governments, demonstrate any capability of commitment on this kind of topic.  States (especially semi-representative ones like modern democracies) just don't operate with a model that makes this effective.

Replies from: None, None
comment by [deleted] · 2021-11-04T17:13:48.214Z · LW(p) · GW(p)
Replies from: Dagon
comment by Dagon · 2021-11-04T17:55:57.085Z · LW(p) · GW(p)

I don't know if it is or not.  Human cloning seems both less useful and less harmful (just less impactful overall), so simultaneously easier to implement and not a good comparison to AGI.

Replies from: None
comment by [deleted] · 2021-11-05T06:23:43.174Z · LW(p) · GW(p)
Replies from: Dagon
comment by Dagon · 2021-11-05T13:40:19.749Z · LW(p) · GW(p)

I'm not following the connection between human cloning and AGI.  Are you talking about something different from https://en.wikipedia.org/wiki/Human_cloning , where a baby is created with only one parent's genetic material?

To me, human cloning is just an expensive way to make normal babies.  

Replies from: None
comment by [deleted] · 2021-11-07T05:43:01.111Z · LW(p) · GW(p)
Replies from: Dagon
comment by Dagon · 2021-11-08T16:03:46.538Z · LW(p) · GW(p)

You can keep cloning the most intelligent people.

Do you have any reason to believe that this is happening AT ALL?  I'd think the selection of who gets cloned (especially when it's illicit, but probably even if it were common) would follow wealth more than intelligence.

Selective embryo implantation based on genetic examination of two-parent IVF would seem more effective, and even that's not likely to do much unless it becomes a whole lot more common and intelligence is valued more highly in the general population.

Since these clones will hopefully retain basic human values

Huh?  Why these any more than the general population?  The range of values and behaviors found in humans is very wide, and "basic human values" is a pretty thin set.

Most importantly, a 25-year improvement cycle, with a mandatory 15-20 year socialization among many, many humans for each new instance, is just not as scary as an AGI with an improvement cycle under a year (perhaps much, much faster) and with direct transmission of models/beliefs from previous generations.  Just not comparable.

Replies from: None
comment by [deleted] · 2021-11-09T16:01:09.887Z · LW(p) · GW(p)
comment by [deleted] · 2021-11-04T17:13:03.999Z · LW(p) · GW(p)
comment by [deleted] · 2021-11-03T16:20:10.842Z · LW(p) · GW(p)
Replies from: Dagon
comment by Dagon · 2021-11-03T16:53:14.257Z · LW(p) · GW(p)

I don't think, at this scale, that "the government" is a useful model.  There are MANY governments, and many non-government coalitions that will impact any large-scale system.  The trick is delivering incremental value at each stage of your path, to enough of the selectorate of each group who can destroy you.

Replies from: None
comment by [deleted] · 2021-11-04T04:04:57.817Z · LW(p) · GW(p)
comment by [deleted] · 2021-10-31T18:24:03.350Z · LW(p) · GW(p)
Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2021-10-31T22:10:35.565Z · LW(p) · GW(p)

atoms they causally impact

This doesn't help. In a counterfactual, atoms are not where they are in actuality. Worse, they are not even where the physical laws say they must be in the counterfactual: the intervention makes the future contradict the past before the intervention.

Replies from: None
comment by [deleted] · 2021-11-01T05:33:36.277Z · LW(p) · GW(p)
Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2021-11-01T12:53:11.163Z · LW(p) · GW(p)

The point is that the weirdness with counterfactuals breaking physical laws is the same for controlling the world through one agent (as in orthodox CDT) and for doing the same through multiple copies of an agent in concert (as in FDT). Similarly, in actuality neither one-agent intervention nor coordinated many-agent intervention breaks physical laws. So this doesn't seem relevant for comparing the two; that's what I meant by "doesn't help".

By "outside view" you seem to be referring to actuality. I don't know what you mean by "inside view". Counterfactuals are not actuality as normally presented, though to the extent they can be constructed out of data that also defines actuality, they can aspire to be found in some nonstandard semantics of actuality.

Replies from: None
comment by [deleted] · 2021-11-02T05:48:25.069Z · LW(p) · GW(p)
comment by [deleted] · 2021-10-30T13:31:26.605Z · LW(p) · GW(p)
Replies from: niplav
comment by niplav · 2021-10-31T13:49:16.207Z · LW(p) · GW(p)

I believe that in the LCPW it would be the right decision to kill one person to save two, and I also predict that I wouldn't do it anyway, mainly because I couldn't bring myself to do it.

In general, I understood the Complexity of Value sequence to be saying "The right way to look at ethics is consequentialism, but utilitarianism specifically is too narrow, and we want to find a more complex utility function that matches our values better."

Replies from: None
comment by [deleted] · 2021-10-31T15:54:35.333Z · LW(p) · GW(p)
Replies from: niplav
comment by niplav · 2021-11-03T13:19:10.528Z · LW(p) · GW(p)

Why do you feel it would be the right decision to kill one? Who defines "right"?

I define "right" to be what I want, or, more exactly, what I would want if I knew more, thought faster and was more the person I wish I could be. This is of course mediated by considerations on ethical injunctions [? · GW], when I know that the computations my brain carries out are not the ones I would consciously endorse, and refrain from acting since I'm running on corrupted hardware. (You asked about the LCPW, so I didn't take these into account and assumed that I could know that I was being rational enough).

It's been a while since I read Thou Art Godshatter and the related posts, so maybe I'm conflating the message in there with things I took from other LW sources.

Replies from: None, None
comment by [deleted] · 2021-11-03T15:27:18.736Z · LW(p) · GW(p)
Replies from: niplav
comment by niplav · 2021-11-03T15:52:25.449Z · LW(p) · GW(p)

Just FYI, I've become convinced that most online communication through comments with a lot of context is much better settled through conversation, so if you want, we could also talk about this over an audio call.

Replies from: None
comment by [deleted] · 2021-11-03T16:05:44.505Z · LW(p) · GW(p)
comment by [deleted] · 2021-11-04T08:34:31.421Z · LW(p) · GW(p)
comment by [deleted] · 2021-10-30T12:29:17.188Z · LW(p) · GW(p)
comment by [deleted] · 2021-09-30T09:33:04.448Z · LW(p) · GW(p)
Replies from: Viliam
comment by Viliam · 2021-10-01T09:27:53.919Z · LW(p) · GW(p)

You can't prove why inconsistent FOL theories are "bad" inside of the very same FOL theory

If the theory is inconsistent, you can prove anything in it, can't you? So you should also be able to prove that inconsistent theories are "bad".
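
For concreteness, the standard derivation behind this (the principle of explosion) fits in a few lines; a sketch, with T the inconsistent theory:

```latex
% From an inconsistent theory T (it proves some \varphi and its negation),
% any sentence \psi whatsoever is derivable:
\begin{align*}
 1.\ & T \vdash \varphi            && \text{(T is inconsistent)}\\
 2.\ & T \vdash \lnot\varphi       && \text{(T is inconsistent)}\\
 3.\ & T \vdash \varphi \lor \psi  && \text{(disjunction introduction, from 1)}\\
 4.\ & T \vdash \psi               && \text{(disjunctive syllogism, from 2 and 3)}
\end{align*}
```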

Replies from: None
comment by [deleted] · 2021-10-03T05:21:54.396Z · LW(p) · GW(p)
comment by [deleted] · 2021-12-09T20:59:17.429Z · LW(p) · GW(p)
Replies from: Dagon
comment by Dagon · 2021-12-09T21:25:29.687Z · LW(p) · GW(p)

I think that the link from micro to macro is too weak for this to be a useful line of inquiry.  "intelligence" applies on a level of abstraction that is difficult (perhaps impossible for human-level understanding) to predict/define in terms of neural configuration, let alone Turing-machine or quantum descriptions.

Replies from: None
comment by [deleted] · 2021-12-10T04:18:25.978Z · LW(p) · GW(p)
Replies from: Dagon, AprilSR
comment by Dagon · 2021-12-13T01:14:38.429Z · LW(p) · GW(p)

I'm  not sure what you're asking.  A lot of reality doesn't make sense to me, so that's pretty weak evidence either way.  And it does seem believable that, since there is a very wide range of consistency and dimensionality to human values that don't seem well-correlated to intelligence, the same could be true of AIs.

Replies from: None
comment by [deleted] · 2021-12-13T06:45:14.571Z · LW(p) · GW(p)
comment by AprilSR · 2021-12-13T05:47:30.398Z · LW(p) · GW(p)

I think this could reasonably be true for some definitions of "intelligence", but that's mostly because I have no idea how intelligence would be formalized anyways?

Replies from: None
comment by [deleted] · 2021-12-13T06:47:21.691Z · LW(p) · GW(p)
Replies from: AprilSR
comment by AprilSR · 2021-12-14T06:42:29.984Z · LW(p) · GW(p)

I think asking well-formed questions is useful, but we shouldn't confuse our well-formed question with what we actually care about unless we are sure it is in fact what we care about.

Replies from: None
comment by [deleted] · 2021-12-14T15:35:50.176Z · LW(p) · GW(p)