post by [deleted]

This is a link post for

Comments sorted by top scores.

comment by magfrump · 2017-10-16T23:21:45.434Z · LW(p) · GW(p)

I feel like there are good intentions behind this post, but I didn't get anything from reading it. I know it can be disconcerting to get downvotes without feedback, so I'll try to summarize what feels off to me.

  1. You start this post by saying you "always disagreed [with the community]" but don't outline any specific disagreements. In particular, your concluding points sound like they're repeating Eliezer's talking points.

  2. You suggest that the community doesn't have a strong background in the formal sciences, but this claim seems not only unjustified but explicitly contradicted by the various community surveys over the years: over 50% of the community worked in computers, engineering, or math as of 2016. Of course, this has fluctuated over time, and I don't want to push too hard to group professors in AI with people who work tech support, but if anything my experience is that the community is substantially more literate in formal logic, math, etc., than could reasonably be expected.

  3. I'm guessing your work in logic is really interesting and that we'd all be interested in reading your writing on the subject. But the introduction you give here doesn't distinguish between possible authors who are undergrads and authors who are Ivy League professors. In particular, outside of a couple of buzzwords, you don't tell us much about what you study, how much of an expert you are in the subject, or why formal logic specifically should be relevant to AGI.

My guess is that you have some interesting things to share with the community, so I hope this is helpful as you write your next posts for the LW audience and doesn't come off as too rude.

Replies from: jsteinhardt, None
comment by jsteinhardt · 2017-10-17T03:04:59.264Z · LW(p) · GW(p)

Galfour was specifically asked to write his thoughts up in this thread: https://www.lesserwrong.com/posts/BEtzRE2M5m9YEAQpX/there-s-no-fire-alarm-for-artificial-general-intelligence/kAywLDdLrNsCvXztL

It seems that either this was posted in the wrong place, or there is some disagreement within the community (e.g., between Ben in that thread and the people downvoting).

Replies from: gjm, None
comment by gjm · 2017-10-17T08:31:25.796Z · LW(p) · GW(p)

You may well be right, but it's also possible that some readers think (1) Galfour did well to write up his thoughts but (2) now that we've seen them, his thoughts are terrible. (Ridiculous over-the-top analogy: you ask a friend to tell you honestly and without filters what his political opinions are. He turns out to be an unreconstructed Nazi. You're glad he told you honestly, but now that he's done it you don't want to be his friend any more.)

Or some may think: (1) as above, but (2) the comments above aren't actually a write-up of his thoughts about AGI, and aren't interesting.

Or some may think: (1) as above but (2) Ben specifically suggested that personal thoughts on AGI should go on personal LW2 blogs rather than the front page, whereas here Galfour is saying that when he writes up his thoughts they will go on the front page.

(Lest I be misunderstood: I have not downvoted this post; I don't think anything he wrote above was terrible; I also don't think it's terribly interesting, but since it's intended mostly as background I don't see any particular reason why it needs to be; I have no strong opinion on whether Galfour's AGI opinions, once written, will belong on the front page.)

comment by [deleted] · 2017-10-17T09:53:04.845Z · LW(p) · GW(p)
comment by [deleted] · 2017-10-17T09:49:59.599Z · LW(p) · GW(p)

Replies from: magfrump
comment by magfrump · 2017-10-17T17:51:29.728Z · LW(p) · GW(p)

Thanks for responding! I think you make fair points--I hadn't seen the previous thread in detail; I try to read all the posts, but afaik there isn't a good way of tracking which comment threads stay active for a while.

I think the center of our disagreement on point 2 is a matter of the "purpose of LessWrong": if you intend to use it as a place for communal discussion of technical problems, with the hope of making progress on them through posts, then I agree that introducing more formal background is necessary even when everyone already has the needed foundations. I am skeptical that this is a likely outcome, since the blog serves the cross purposes of community building and general life rationality, and building technical foundations is rough in blog-post form and might be better left to textbooks and meetup groups. That limits engagement much more heavily, and I definitely don't mean to suggest you shouldn't try, but I wasn't really in that mindset when first reading this. I had a more general response along the lines of "this person wants to do something mathematically rigorous but is a bit condescending and hasn't written anything interesting." I hope/believe that will change in future posts!

Replies from: habryka4, None
comment by habryka (habryka4) · 2017-10-17T20:12:29.017Z · LW(p) · GW(p)

I personally would really like to see more direct technical work on LessWrong, though I am unsure about the best format. I am heavily in favor of people writing sequences of posts that introduce technical fields to the community, and think that a lot of the best content in the history of LessWrong was of that nature.

Replies from: RobbBB, None
comment by Rob Bensinger (RobbBB) · 2017-10-17T21:11:42.452Z · LW(p) · GW(p)

Strong +1. I'd love to see most of LW develop into object-level technical discussion of interesting things (object-level math, science, philosophy, etc.), skewed toward things that are neglected and either interesting or important, with very little meta or community-building stuff. Rationality should be an important part of all that, but most posts probably shouldn't be solely about rationality techniques.

Replies from: None
comment by [deleted] · 2017-10-17T21:12:50.710Z · LW(p) · GW(p)

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2017-10-17T21:16:37.969Z · LW(p) · GW(p)

Time allowing!

comment by [deleted] · 2017-10-17T21:13:25.584Z · LW(p) · GW(p)

Replies from: habryka4
comment by habryka (habryka4) · 2017-10-17T21:18:56.914Z · LW(p) · GW(p)

I have in the past engaged with a good amount of technical material (primarily MIRI's agent foundations agenda). In general, though, time is short, and I can't promise to participate in any specific effort.

So far I am not particularly compelled by the approach you are proposing in this and your next post, but I am open to being convinced otherwise.

Replies from: None
comment by [deleted] · 2017-10-17T21:21:54.507Z · LW(p) · GW(p)
comment by [deleted] · 2017-10-17T18:16:47.096Z · LW(p) · GW(p)

Replies from: magfrump
comment by magfrump · 2017-10-17T18:27:37.682Z · LW(p) · GW(p)

I don't think I disagree with the claim you're making here--I think formal background in things like decision theory is a big contributor to day-to-day rationality. But I think posts detailing formal background on this site will often either speak to people who already have the formal background, and be boring, or speak to people who don't, who would be better referred to textbooks or online courses.

On the other hand, if someone wanted to take on the monumental task of making it possible to run interactive Jupyter notebooks here, adding coding exercises to posts and building online courses, I'd be excited for that to happen--it just seems that, with the current site setup, any effort to build formal background will struggle to match other resources.

Replies from: None
comment by [deleted] · 2017-10-17T19:53:41.605Z · LW(p) · GW(p)
comment by gjm · 2017-10-17T13:45:46.861Z · LW(p) · GW(p)

How do you know it's a throwaway or used for trolling? So far as I can see, all we know is that so far it's been used only to ask the question above.

Replies from: None
comment by [deleted] · 2017-10-17T14:12:18.143Z · LW(p) · GW(p)