comment by AnnaLeptikon ·
2014-09-28T07:01:29.172Z · LW(p) · GW(p)
Review of the Rationality Meetup in Vienna on 27.09.2014
Superintelligence Summary by Marko Thiel
People who were there: Andreas, Matthias Brandner, Manuel K., Monika, Marko, Viliam Bur, Alex, Philip, Anna, Philipp, Tino/Sandy, Austin, Milica, Luka, Ivan, Axel, Lea, Lio, Andreas V., Manuel M.
(20 people - awesome! And so many were new!)
Superintelligence by Nick Bostrom (presented by Marko Thiel)
First announcement: ethics will be set aside rather than discussed today, because otherwise he would never finish
Quote (I. J. Good): "Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control."
(I sadly didn't take notes here)
AI -> strong AI
whole brain emulation
biological cognition (improve humans, chemical substances/nootropics, embryo selection -> lots of generations in vitro)
interface with machine
networks and organizations
speed superintelligence (like a human, but faster)
collective superintelligence (many agents that are smarter together)
quality superintelligence (qualitatively different from human intelligence)
rate of intelligence change
cover + preparation
-> this raises the question of where the motivation to do so comes from!
what the superintelligence will want:
- the orthogonality thesis: any level of intelligence can be paired with any motivation (Eliezer would probably call it an "anti-prediction")
convergent instrumental goals (for example: self-preservation, goal-content integrity, cognitive enhancement, technology, resource acquisition -> infrastructure profusion)
other topics: failure modes, control problem, acquire values, choosing who to choose, do what I mean, the strategic picture
Comments I wrote down
by Andreas: "Does SI include self-awareness?" answer: "No"
by Ivan: Has anyone tried to breed a dog with human-level intelligence?
by Lio: Is it possible to have something that's good at everything? -> Viliam: Yes, with a network structure
by Austin: Is there a difference between biological and programmed happiness? -> Manuel: the utility function of a thermostat is to keep the room at 20 degrees, but it doesn't WANT to be there
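Manuel's thermostat point can be made concrete with a toy sketch (my own illustration, not from the talk; the names and the simple heating model are assumptions): the "utility function" is just a setpoint comparison driving a rule, with nothing resembling wanting anywhere in the loop.

```python
# A thermostat as a bang-bang controller: it reliably steers the
# temperature toward the setpoint, yet it is only a comparison plus
# a fixed rule -- there is no desire, only behavior.

SETPOINT = 20.0  # degrees Celsius; the point the controller "optimizes" for


def thermostat_step(temperature: float) -> str:
    """Return the heater action for the current temperature."""
    if temperature < SETPOINT:
        return "heat"
    return "off"


def simulate(temperature: float, steps: int) -> float:
    """Run the control loop: heating adds 1.0 degree per step,
    otherwise the room cools by 0.5 degrees per step."""
    for _ in range(steps):
        if thermostat_step(temperature) == "heat":
            temperature += 1.0
        else:
            temperature -= 0.5
    return temperature
```

Starting cold, e.g. `simulate(15.0, 20)`, the loop climbs to the setpoint and then oscillates around it, which is all a thermostat's "goal-directedness" amounts to.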
in the evening:
- We talked about daily habits members of the group consider rational / effective daily rituals / personal sleep duration
for the next meetup: