Open Thread June 2018

post by Elo · 2018-05-31T22:34:56.656Z · LW · GW · 22 comments

If it’s worth saying, but not worth its own post, then it goes here.

Notes for future OT posters:

  1. Check if there is an active Open Thread before posting a new one (search for "Open Thread").

  2. Monthly open threads seem to get lost; maybe we should switch to fortnightly ones.

  3. What accomplishments are you celebrating from the last month?

  4. What are you reading?

22 comments

Comments sorted by top scores.

comment by cousin_it · 2018-06-01T09:24:19.474Z · LW(p) · GW(p)

I made a small programming tutorial for beginners, and tried it on kids in a programming class I'm teaching. It seems to work pretty well. Don't want to publish it yet, but I'd like to try it on someone online, preferably someone who never learned programming and doesn't feel naturally gifted at it. Any takers?

comment by Elo · 2018-05-31T22:40:03.236Z · LW(p) · GW(p)

Last month I got published on an Australian national media website.

I also moved into Sydney's first LessWrong group house.

I read a book about schema therapy (post coming soon).

I read Scott Adams's "How to Fail at Almost Everything and Still Win Big".

I also read my first paper book this year (it was a novel).

I also discovered bone-conduction headphones and am impressed with the quality.

I am reading "Principles of Neural Science".

Replies from: ryjm
comment by ryjm · 2018-06-01T16:48:45.709Z · LW(p) · GW(p)
I also discovered bone-conduction headphones and am impressed with the quality.

Do you have a recommendation? I'm constantly on the lookout for new headphone styles; I have weird ear holes that nothing fits in.

Replies from: Elo, jam_brand
comment by Elo · 2018-06-02T17:20:08.801Z · LW(p) · GW(p)

eBay is where I got mine. They are AfterShokz "Bluez 2S". I would buy them again now in a heartbeat. I'll have to wait a month before I decide whether they are worth it, or whether the upgrade was worth it, but I suspect the answer is yes.

comment by jam_brand · 2018-06-01T23:32:08.141Z · LW(p) · GW(p)

Curious about this as well, since neither of these recently-updated articles from the NYTimes-owned (meta)review site The Wirecutter mentions being able to find any bone-conduction headphones they liked.

comment by mpr · 2018-06-05T15:45:38.062Z · LW(p) · GW(p)

I am moving to the Bay Area from the East Coast, and have been looking for a job out there for some time. Last week I signed an offer letter from a company I am excited to start working with.

Yesterday I published an app that will send you to a random Slate Star Codex article. You can find it at a subdomain of my blog: http://random-ssc.pulsarcoffee.com.
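For the curious, here's a minimal sketch of how such a redirect service might work (this is an illustration, not the app's actual code; the hardcoded URLs stand in for a scraped list of the full archives):

```python
import random
from flask import Flask, redirect

app = Flask(__name__)

# Stand-in list; a real version would scrape the full archive at
# https://slatestarcodex.com/archives/ instead of hardcoding URLs.
ARTICLE_URLS = [
    "https://slatestarcodex.com/2014/07/30/meditations-on-moloch/",
    "https://slatestarcodex.com/2015/01/01/untitled/",
]

@app.route("/")
def random_article():
    # Pick an article uniformly at random and redirect the visitor to it.
    return redirect(random.choice(ARTICLE_URLS))

if __name__ == "__main__":
    app.run()
```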

I am reading The Fall of Hyperion by Dan Simmons, and Programming in Scala by Martin Odersky.

comment by Rafael Harth (sil-ver) · 2018-06-15T22:53:02.695Z · LW(p) · GW(p)

OpenAI has recently released this charter outlining their strategic approach.

We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.”

We are committed to providing public goods that help society navigate the path to AGI. Today this includes publishing most of our AI research, but we expect that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research.

My reaction to this was that it sounds like incredibly good and pretty important news. It reads as very genuine, not as a mere attempt to appease critics. But I haven't seen anyone on LW mention it, which leaves me wondering if I'm being naive.

So I guess I'm just curious about other opinions here. I'm also reminded of this post [LW · GW] which seemed reasonable to me at the time.

comment by Gurkenglas · 2018-06-02T23:45:17.421Z · LW(p) · GW(p)

I have a LW draft, but I'm only mostly sure that publishing it would help AI safety research more than AI research, if the idea works. We should have a review process for this. I could just send Eliezer the .html, but surely he already gets quite enough from random internet people.

On recommendation from the #lesswrong IRC channel, I'm sending it to a particular LW account for review.

comment by philh · 2018-06-03T10:02:46.968Z · LW(p) · GW(p)

I have an exercise app that feeds me lotus if I work out every day. (In the form of streaks, achievements, and eventually unlocking more exercises.) I want not to work out every day.

I'm solving this dilemma by lying to the app.

Replies from: Gurkenglas
comment by Gurkenglas · 2018-06-03T12:22:32.639Z · LW(p) · GW(p)

Can you set it up so the lotus is conditional on you not having lied? For example, the app could praise you for not lying, so that your guilt would outweigh the feeling of achievement if you lie. It could explain that it only keeps track of an approximation to the true, platonic streaks and achievements, one that is only as accurate as the information you give it; if you lie, it becomes much harder for you to figure out whether you've earned the lotus.

Perhaps it could just ask you questions of the form "Have you lied in this time period?" and do as much with that information as won't make you lie about that. For example, there might be a "What if?" tool which lets you select a subset of your statements and shows you what your progress would have been if those were all you had stated.
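A toy sketch of that "What if?" computation, assuming a hypothetical data model in which each logged workout is tagged as truthful or not (nothing here is any particular app's API):

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Entry:
    day: date
    truthful: bool

def streak(entries, include=lambda e: True):
    """Length of the run of consecutive days ending at the most recent
    included entry, counting only entries that `include` accepts."""
    days = sorted({e.day for e in entries if include(e)}, reverse=True)
    run = 0
    for i, d in enumerate(days):
        if d == days[0] - timedelta(days=i):
            run += 1
        else:
            break
    return run

log = [
    Entry(date(2018, 6, 1), truthful=True),
    Entry(date(2018, 6, 2), truthful=False),  # a logged lie
    Entry(date(2018, 6, 3), truthful=True),
]

print(streak(log))                        # the streak the app shows: 3
print(streak(log, lambda e: e.truthful))  # "what if I hadn't lied": 1
```

The same streak function serves both views: the app's naive count, and the counterfactual count over only the statements you select.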

Replies from: philh
comment by philh · 2018-06-04T14:22:09.120Z · LW(p) · GW(p)

I feel like you missed what I was getting at. (Either that, or I missed what you're getting at.) Context is noticing the taste of lotus [LW · GW].

It kind of sounds like you think that I'm lying because I lack willpower, or something along those lines. But that's not it. The point of lying is to decouple the lotus from whether I'm actually exercising, so that I exercise when I choose to, not when the app thinks I should. (I think I should probably exercise more than I currently choose to, but less than the app thinks I should.)

With that in mind, I'm not really sure why "only get lotus when I tell the truth" is something I would want.

That said, it would be kind of nice if I could tag my lies and have the app show me only truthful workouts. As it is, I can't tell how often I'm actually exercising. (It occurs to me I can get some of that benefit by creating a custom workout named "fake".)

Replies from: maia, Ikaxas
comment by maia · 2018-06-16T07:54:46.488Z · LW(p) · GW(p)

So you don't like the gamification on the app. Have you considered... using a less gamified app to track workouts? Or not using an app at all?

Replies from: philh
comment by philh · 2018-06-17T10:50:57.343Z · LW(p) · GW(p)

I have. I like having the app better than not having an app. It's likely that a better app exists, but finding it requires more activation energy.

comment by Vaughn Papenhausen (Ikaxas) · 2018-06-04T14:46:47.716Z · LW(p) · GW(p)

Actually, that sounds like a good idea not just because you'd get more accurate information about how often you exercise, but also for the following reason: what often happens (at least to me) when I'm tracking something I want to do is that when I have to log a failed instance, I feel guilty. Due to Goodhart's Imperius, this disincentivizes me from tracking the behavior in the first place (especially if I'm failing often), because I get negative feedback from the tracking, so the simplest solution from the monkey brain's perspective is to stop tracking. But if you get the lotus whether you did the thing or not, conditional on entering that information into the app, then you have the proper incentive to track. So I would predict this would work well.

comment by Vaughn Papenhausen (Ikaxas) · 2018-06-02T02:41:59.397Z · LW(p) · GW(p)

I finished my senior thesis and graduated from college (okay, technically the thesis was done in April, but only at the end of April, and the presentation was in May).

I am reading:

  • Other Minds: The Octopus, The Sea, and the Deep Origins of Consciousness by Peter Godfrey-Smith
  • r!Animorphs: The Reckoning by Duncan Sabien
  • Volume II of On What Matters by Derek Parfit (Also about halfway through Reasons and Persons, but that's on hold for the moment)
  • I am also rereading Nausicaä of the Valley of the Wind, and when I'm done with Other Minds I intend to start Sidgwick's The Methods of Ethics

Also, I have a question: what do people think of the His Dark Materials series? I see a decent bit of discussion of Ender's Game around here, and while I love Ender's Game, I think His Dark Materials is on a similar level and should be similarly revered by rationalists. Granted, Lyra is not portrayed as extraordinarily intelligent like Ender, but she is extremely strong, and the series has several rationalist themes, e.g. s-risk (gur haqrejbeyq*), x-risk (gur fhogyr xavsr*), saving the world from these, the Problem of Evil, many-worlds (kind of), etc. Is it just that not as many people have read His Dark Materials, or is there some other reason it isn't really talked about?

*rot13

Replies from: gjm, Ikaxas
comment by gjm · 2018-06-15T23:47:45.201Z · LW(p) · GW(p)

I enjoyed His Dark Materials but felt that the quality of the writing went downward as the amount of anti-religious axe-grinding went up. (Not because I have an axe to grind; I am an atheist myself and enjoy anti-religious axe-grinding when it's done well.) I wouldn't say that the books feel particularly rationalist, for what it's worth, despite the relevant themes you mention.

Replies from: Ikaxas
comment by Vaughn Papenhausen (Ikaxas) · 2018-06-16T03:43:19.185Z · LW(p) · GW(p)

Yep, I agree (ETA: about the books not being especially "rationalist"; I don't remember thinking the quality of the writing went down as the anti-religious axe-grinding went up, but it's been long enough since I read the books that maybe I'd agree on a reread with that claim in mind). I'm rereading Ender's Game and have changed my mind about His Dark Materials being especially rationalist since writing that comment. ETA: Ender's Game has a ton more stuff in it than I remembered that could basically have come straight out of the Sequences, so my mental baseline for "especially rationalist-y fiction" was a lot lower than it probably should have been. There's also probably some halo effect going on: I like the books, I like rationalism, so my brain wanted to associate them.

comment by Vaughn Papenhausen (Ikaxas) · 2018-06-02T02:52:24.264Z · LW(p) · GW(p)

On reading this again, I suppose the technically correct answer to my question is something like: "Discussing books is not the primary purpose of this site, so the vast majority of books will never be discussed here, and it shouldn't be surprising that [x book series] isn't." But I didn't really intend the comment to ask that question literally; it was more (1) a query about people's opinions of the books, and (2) a suggestion that these books might be good candidates for a status around here similar to, e.g., Ender's Game, for purposes of metaphors, inspiration, etc. (Of course, a big factor here is "how many people read these as a kid and were influenced by them". If it turns out that very few people have even read the books, that would be a reason not to give them that status, since the references wouldn't land for many people. But if many people have read the books, what I'm doing here is something like putting in a bid to make them more culturally salient.)

comment by gwillen · 2018-06-05T06:35:08.435Z · LW(p) · GW(p)

Test comment. Please don't upvote etc. etc.

(Sorry moderators, I assume you deleted my other ones, but I can't really try to debug notification breakage without creating more.)

Replies from: gwillen-test
comment by gwillen-test · 2018-06-05T06:35:49.346Z · LW(p) · GW(p)

Test reply.

Replies from: gwillen
comment by gwillen · 2018-06-05T06:36:10.302Z · LW(p) · GW(p)

Test reply-reply.