supposedlyfun's Shortform

post by supposedlyfun · 2020-09-26T20:56:30.074Z · score: 2 (1 votes) · 6 comments


comment by supposedlyfun · 2020-09-26T20:56:30.386Z · score: 9 (2 votes)

I'm grateful for MIRI et al. and their work on what is probably as world-endy a threat as nuclear war was (and look at all the intellectual work that went into THAT).

The thing that's been eating me lately, almost certainly triggered by the political situation in the U.S., is how to manage the transition from 2020 to what I suspect is the only way forward for the species--genetic editing to reduce or eliminate the genetically determined cognitive biases we inherited from the savannah. My objectives for the transition would be:

  1. Minimize death
  2. Minimize physical suffering
  3. Minimize mental/emotional suffering
  4. Maximize critical thinking
  5. Maximize sharing of economic resources

I'm extra concerned about tribalism/outgrouping, and have been thinking a lot about how the lunch-counter protestors in the U.S. practiced/role-played the inevitable taunts, slurs, and mild-or-worse physical violence they would receive at a sit-in, knowing that if they were anything less than absolute model minorities, their entire movement could be written off overnight.

I'm only just starting to look into what research there might already be on such a broad topic, so if you see this and have literally any starting points whatsoever (beyond what's on this site's wiki and SlateStarCodex), say something.

comment by mr-hire · 2020-09-26T22:52:23.630Z · score: 5 (3 votes)

Do you think genetic editing could remove biases? My suspicion is that they're probably baked pretty deeply into our brains and society, and you can't just tweak a few genes to get rid of them.

comment by supposedlyfun · 2020-09-27T21:16:09.303Z · score: 1 (1 votes)

I figure that at some point in the next ~300 years, computers will become powerful enough to do the necessary math/modeling to figure this out, given advances in our understanding of genetics.

comment by mr-hire · 2020-09-29T00:49:16.239Z · score: 2 (1 votes)

It just feels like "biases" are a high-level abstraction built on basic brain architecture. Getting rid of them would be like creating a totally different design.

comment by supposedlyfun · 2020-10-03T14:44:36.064Z · score: 1 (1 votes)

Does anyone have some good primary/canonical/especially insightful sources on the question of "Once we make a superintelligent AI, how do we get people to do what it says?"

I'm trying to keep to the question as posed, rather than get into the weeds on "how would we know the AI's solutions were good," "how do we know it's benign," and "evil AI in a box," since I know where to look for that information.

So assume (if you will) that all other problems with AI are solved and that the AI's solutions are perfect except that they are totally opaque: "To fix global warming, inject 5.024 mol of boron into the ionosphere at the following GPS coordinates via a clone of Buzz Aldrin in a dirigible...". And then maybe global warming gets solved, but Exxon's PR team spends $30 million on a campaign to convince people it was actually because we all used fewer plastic straws, because Exxon's baby AI is telling them that the superintelligence is about to tell us to dismantle Exxon and execute its board of directors by burning at the stake.

Or give me some keywords to google.

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-10-03T16:25:52.032Z · score: 2 (1 votes)

Once one species of primate evolves to be much smarter than the others, how will it come about that the others do as it says?

For the most part, it doesn't matter whether the others do as it says. The other primates aren't the ones in the driver's seat, literally or figuratively.

But when it matters, the super-apes (humans) will figure out a variety of tricks and bribes that work most of the time.