The next meetup will be on March 23 - RSVP here.
The next meetup will be on February 24th - RSVP here.
Also there will be another article discussion group on February 18th, details and RSVP here.
Owing to a tragic laundry mishap, I'll actually be wearing a dark blue Waikiki Aquarium t-shirt with an octopus on it.
The next meetup will be on January 27th - RSVP here.
Also there will be an article discussion meeting on January 20th, discussing financial markets and investment.
The next meetup will be December 16th - RSVP here.
Also, one of our attendees is interested in having a recurring article discussion group and has scheduled an event in St. Paul on December 9th, which I encourage you to attend.
The next meetup will be Saturday, November 18th. RSVP here.
Looks like today will be on the lighter side, so if anybody has a board or card game that doesn't take too long, I suggest you bring it. I will bring my copy of Race For The Galaxy.
Also, the "Meetups Everywhere" events have a central coordinator who is running a survey - if you're willing, you can fill it out here.
The next meetup will be October 21st - see here.
All - I came down with a slight fever late yesterday. I took a rapid COVID test this morning and it came back negative, so I leave it as an exercise to the reader to determine the odds that I have COVID - but I might have given you some other disease. Sorry about that.
The next meetup will be September 16th - RSVP here.
(Sorry for the handful of you who may be notified repeatedly - the summer meetup was unusually low in attendance so I figured I would play it safe and post this to the past two.)
The next meetup will be September 16th - RSVP here.
Apologies for the late notice, the next event is this coming Saturday. RSVP here: https://www.lesswrong.com/events/jo6uxJuh4DLkx2Exx/twin-cities-acx-meetup-june-2023
This metaphor doesn't strike me as accurate, because humanity can engage in commerce and insects cannot.
Also, humanity causes a lot of environmental degradation, but we still don't actually want to bring about the wholesale destruction of the environment.
I have a few objections here:

- Even when objectives aren't aligned, that doesn't mean the outcome is literally death. No corporation I interact with is aligned with me, but in many/most cases I am still better off for being able to transact with them.
- I think there are plenty of scenarios where "humanity continues to exist" has benefits for AI - we are a source of training data and probably lots of other useful resources, and letting us continue to exist is not a huge investment, since we are mostly self-sustaining. Maybe this isn't literally "being aligned," but I think supporting human life has instrumental benefits to AI.
- I think the formal claim is only true inasmuch as the space of all objectives that align with the AI's continued existence is also incredibly small. It's much less clear how many of the objectives that are in some way supportive of the AI also result in human extinction.
This is a common problem with a lot of these hypothetical AI scenarios - WHY does the Oracle do this? How did the process of constructing this AI make it want to eventually cause some negative consequence?
Next event will be April 30th - link.
Invite for the next meetup (March 24th) is now up - see here: https://www.lesswrong.com/events/Bf9rqXH93FtkAj6a5/twin-cities-acx-meetup-mar-2023