supposedlyfun's Shortform

post by supposedlyfun · 2020-09-26T20:56:30.074Z · LW · GW · 20 comments



comment by supposedlyfun · 2020-09-26T20:56:30.386Z · LW(p) · GW(p)

I'm grateful to MIRI et al. for their work on what is probably as world-endy as nuclear war was (and look at all the intellectual work that went into THAT).

The thing that's been eating me lately, almost certainly mainly triggered by the political situation in the U.S., is how to manage the transition from 2020 to what I suspect is the only way forward for the species--genetic editing to reduce or eliminate the genetically determined cognitive biases we inherited from the savannah.  My objectives for the transition would be:

  1. Minimize death
  2. Minimize physical suffering
  3. Minimize mental/emotional suffering
  4. Maximize critical thinking
  5. Maximize sharing of economic resources

I'm extra concerned about tribalism/outgrouping, and have been thinking a lot about the U.S. lunch-counter protestors who practiced/role-played the inevitable taunts, slurs, and mild-or-worse physical violence they would receive at a sit-in, knowing that if they were anything less than absolute model minorities, their entire movement could be written off overnight.

I'm only just starting to look into what research there might already be on such a broad topic, so if you see this, and you have literally any starting points whatsoever (beyond what's on this site's wiki and SlateStarCodex), say something.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2020-09-26T22:52:23.630Z · LW(p) · GW(p)

Do you think genetic editing could remove biases?  My suspicion is that they're probably baked pretty deeply into our brains and society, and you can't just tweak a few genes to get rid of them.

Replies from: supposedlyfun
comment by supposedlyfun · 2020-09-27T21:16:09.303Z · LW(p) · GW(p)

I figure that at some point in the next ~300 years, given advances in our understanding of genetics, computers will become powerful enough to do the necessary math/modeling to figure this out.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2020-09-29T00:49:16.239Z · LW(p) · GW(p)

It just feels like "biases" are a high level of abstraction built on top of basic brain architecture.  To get rid of them would be like creating a totally different design.

comment by supposedlyfun · 2021-03-23T03:07:07.477Z · LW(p) · GW(p)

Remote Desktop is bad for your brain?  

I live abroad but work for a US company and connect to my computer, located inside the company's office, through a VPN tunnel and then Windows' Remote Desktop function. I have a two-monitor setup at my local desk and use both for RDP: the left in landscape orientation (email, Excel, billing software) and the right in portrait (for reading PDFs, drafting emails in Outlook, drafting documents in Word).

My work computer shut itself off after hours in the US, so I had to get a Word document emailed to me so I could keep drafting it on my local machine.  I feel like getting rid of the (admittedly mild) RDP lag between [keypress] and [character appears on screen] is making me 30% smarter--like the delay was making me worse at thinking.  It's palpable. So it's either real, or some kind of placebo effect associated with me being persnickety, or both.  Anyone seen any data on this?
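(Not part of the original post, but one quick way to sanity-check whether the lag is real rather than placebo: the network round-trip time to the RDP host is a hard floor on keypress-to-echo latency. A minimal Python sketch, with a hypothetical hostname, that estimates that floor by timing TCP connects:)

```python
# Rough floor on RDP keypress-to-echo latency: a TCP connect costs
# roughly one network round trip (SYN, SYN-ACK), so timing repeated
# connects to the RDP port gives a lower bound on the lag you feel.
import socket
import statistics
import time

HOST = "workstation.example.com"  # hypothetical office machine
PORT = 3389                       # standard RDP port

samples = []
for _ in range(20):
    start = time.perf_counter()
    with socket.create_connection((HOST, PORT), timeout=5):
        pass  # connection established; close immediately
    samples.append((time.perf_counter() - start) * 1000)  # milliseconds
    time.sleep(0.2)

print(f"median RTT floor: {statistics.median(samples):.1f} ms")
print(f"worst of 20:      {max(samples):.1f} ms")
```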

Replies from: wunan, ChristianKl
comment by wunan · 2021-03-24T02:17:13.688Z · LW(p) · GW(p)

Yes, the value of minimizing response time is a well-studied area of human-computer interaction: https://www.nngroup.com/articles/response-times-3-important-limits/
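(For reference, the linked article's three thresholds, sketched as a toy classifier; the cutoffs are from the article, the label wording is a paraphrase:)

```python
# Nielsen's three response-time limits, going back to Miller (1968):
# 0.1 s feels instantaneous, 1 s preserves the user's flow of thought,
# 10 s is roughly the limit for holding attention at all.
def classify_response_time(seconds: float) -> str:
    if seconds <= 0.1:
        return "instantaneous: no special feedback needed"
    if seconds <= 1.0:
        return "noticeable delay, but flow of thought stays intact"
    if seconds <= 10.0:
        return "attention wanders: the interface should show feedback"
    return "too slow: show progress and let the user task-switch"

for t in (0.05, 0.3, 2.0, 15.0):
    print(f"{t:>5.2f} s -> {classify_response_time(t)}")
```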

Replies from: supposedlyfun
comment by supposedlyfun · 2021-03-24T02:25:32.221Z · LW(p) · GW(p)

This is great. Thank you. I'm fascinated by the fact that this problem was studied as far back as the 1960s.

comment by ChristianKl · 2021-03-23T15:20:28.969Z · LW(p) · GW(p)

How do you evaluate the 30%? By outcome metrics or by how you feel during the activity?

Replies from: supposedlyfun
comment by supposedlyfun · 2021-03-23T21:17:21.116Z · LW(p) · GW(p)

Just by feel. At this stage, I'm just spitballing and reporting subjective sensation. The sensation went down but not away after a few hours.

comment by supposedlyfun · 2021-06-16T12:18:07.470Z · LW(p) · GW(p)

This is a Humble Bundle with a bunch of AI-related publications by Morgan & Claypool: $18 for 15 books. I'm a layperson re the material, but I'm pretty confident it's worth $18 just to have all of these papers collected in one place and formatted nicely. NB: increasing my payment from $18 to $25 would have raised the amount donated to charity from $0.90 to $1.25--I guess the balance of the $7 goes directly to Humble.

https://www.humblebundle.com/books/ai-morgan-claypool-books

comment by supposedlyfun · 2021-03-10T09:38:33.801Z · LW(p) · GW(p)

In case anyone sees this: I turned off my Vote Notifications, and it has increased my enjoyment of the site by at least 10%. You should, too.

Replies from: Dagon, habryka4
comment by Dagon · 2021-03-10T16:33:15.472Z · LW(p) · GW(p)

Counterpoint: I get value from being notified of votes/karma changes.  Especially when someone bothers to vote on an old post, it's nice to revisit it and update my mental model of which of my comments will be popular.  As a result, I've changed my target from 80% upvotes to 90%: if I don't get some downvotes, I'm likely over-editing and over-filtering myself, but people are kind enough that I have to be pretty bad to get many downvotes.

Definitely try it on or off for a week or two every year, and optimize for yourself :)

Replies from: supposedlyfun, supposedlyfun
comment by supposedlyfun · 2021-05-22T22:41:43.581Z · LW(p) · GW(p)

I eventually got tired of not knowing where the karma increments were coming from, so I changed it to batch once a week. I just got my first weekly batch, and the information I got from seeing what was voted on outweighed the encouragement of whatever Internet Points Neurosis I may have.

comment by supposedlyfun · 2021-03-10T22:02:35.843Z · LW(p) · GW(p)

This makes sense re old posts.  Thanks for pointing to a valid use.

Inside my brain, I feel especially susceptible to anything that acts like Internet Points, and that little star was triggering the itch.  Without the star there, I click less often on my username to see how many Internet Points I got.  (I was also clicking on the star even when I knew there was no new information there!)  Removing the star removed some of the emotional immediacy.

comment by habryka (habryka4) · 2021-03-10T22:38:00.984Z · LW(p) · GW(p)

Yep, I expect some people will want them turned off, which is why we tried to make that pretty easy! It might also make sense to batch them into a weekly batch instead of a daily one, which I've done at some points to reduce the degree to which I felt like I was goodharting on them.

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2021-05-25T15:28:31.753Z · LW(p) · GW(p)

Why aren't weekly notifications the default? Daily is likely more harmful than useful for the typical person.

Replies from: habryka4, Kaj_Sotala
comment by habryka (habryka4) · 2021-05-25T16:42:56.928Z · LW(p) · GW(p)

Most people definitely wanted daily, since that's what their LessWrong habits were already. I also am pretty okay with daily, and think it gets rid of most of the bad "repeatedly check 10 times a day" loop that things like Facebook can get me into.

comment by Kaj_Sotala · 2021-05-25T16:05:47.011Z · LW(p) · GW(p)

I'm of the type to get easily addicted to notifications, and daily has felt rare enough for me to not trigger any reaction.

comment by supposedlyfun · 2020-10-03T14:44:36.064Z · LW(p) · GW(p)

Does anyone have some good primary/canonical/especially insightful sources on the question of "Once we make a superintelligent AI, how do we get people to do what it says?"

I'm trying to keep this to the question as posed, rather than get into the weeds on "how would we know the AI's solutions were good" and "how do we know it's benign" and "evil AI in a box", as I know where to look for that information.

So assume (if you will) that all other problems with AI are solved and that the AI's solutions are perfect except that they are totally opaque.  "To fix global warming, inject 5.024 mol of boron into the ionosphere at the following GPS coordinates via a clone of Buzz Aldrin in a dirigible..."  And then maybe global warming would be solved, but Exxon's PR team spends $30 million on a campaign to convince people it was actually because we all used fewer plastic straws, because Exxon's baby AI is telling them that the superintelligence is about to tell us to dismantle Exxon and execute its board of directors by burning at the stake.

Or give me some key words to google.  

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-10-03T16:25:52.032Z · LW(p) · GW(p)

Once one species of primate evolves to be much smarter than the others, how will it come about that the others do as it says?

For the most part, it doesn't matter whether the others do as it says. The other primates aren't the ones in the driver's seat, literally and figuratively. But when it does matter, the super-apes (humans) figure out a variety of tricks and bribes that work most of the time.