AI risk-related improvements to the LW wiki
post by Kaj_Sotala · 2012-11-07T09:24:05.070Z · LW · GW · Legacy · 10 comments
Back in May, Luke suggested the creation of a scholarly AI risk wiki: a large set of summary articles on topics related to AI risk, mapped out in terms of how they relate to the central debates in the field. In response, Wei Dai suggested that, among other things, the existing Less Wrong wiki could be improved instead. As a result, the Singularity Institute has massively improved the LW wiki in preparation for a more ambitious scholarly AI risk wiki. The outcome was the creation or dramatic expansion of the following articles:
- 5-and-10
- Acausal Trade
- Acceleration thesis
- Agent
- AGI chaining
- AGI skepticism
- AGI Sputnik moment
- AI advantages
- AI arms race
- AI Boxing
- AI-complete
- AI takeoff
- AIXI
- Algorithmic complexity
- Anvil problem
- Astronomical waste
- Bayesian decision theory
- Benevolence
- Ben Goertzel
- Bias
- Biological Cognitive Enhancement
- Brain-computer interfaces
- Carl Shulman
- Causal decision theory
- Church-Turing thesis
- Coherent Aggregated Volition
- Coherent Blended Volition
- Coherent Extrapolated Volition
- Computing overhang
- Computronium
- Consequentialism
- Counterfactual mugging
- Creating Friendly AI
- Cyc
- Decision theory
- Differential intellectual progress
- Economic consequences of AI and whole brain emulation
- Eliezer Yudkowsky
- Empathic inference
- Emulation argument for human-level AI
- EURISKO
- Event horizon thesis
- Evidential Decision Theory
- Evolutionary algorithm
- Evolutionary argument for human-level AI
- Existential risk
- Expected utility
- Expected value
- Extensibility argument for greater-than-human intelligence
- FAI-complete
- Fallacy
- Fragility of value
- Friendly AI
- Fun Theory
- Future of Humanity Institute
- Game theory
- Gödel machine
- Great Filter
- History of AI risk thought
- Human-AGI integration and trade
- Induction
- Infinities in ethics
- Information hazard
- Instrumental convergence thesis
- Intelligence
- Intelligence explosion
- Jeeves Problem
- Lifespan dilemma
- Machine ethics
- Machine learning
- Malthusian Scenarios
- Metaethics
- Moore's law
- Moral divergence
- Moral uncertainty
- Nanny AI
- Nanotechnology
- Neuromorphic AI
- Nick Bostrom
- Nonperson predicate
- Observation selection effect
- Ontological crisis
- Optimal philanthropy
- Optimization power
- Optimization process
- Oracle AI
- Orthogonality thesis
- Paperclip maximizer
- Pascal's mugging
- Prediction market
- Preference
- Prior probability
- Probability theory
- Recursive self-improvement
- Reflective decision theory
- Regulation and AI risk
- Reinforcement learning
- Scoring rule
- Search space
- Seed AI
- Self Indication Assumption
- Self Sampling Assumption
- Simulation argument
- Simulation hypothesis
- Singleton
- Singularitarianism
- Singularity
- Subgoal stomp
- Superintelligence
- Superorganism
- Technological forecasting
- Technological revolution
- Terminal value
- Timeless decision theory
- Tool AI
- Unfriendly AI
- Universal intelligence
- Utility
- Utility extraction
- Utility indifference
- Value extrapolation
- Value learning
- Whole brain emulation
- Wireheading
In managing the project, I focused on content over presentation, so a number of articles still have minor issues with grammar and style. It's our hope that, with the largest part of the work already done, the LW community will help improve the articles even further.
Thanks to everyone who worked on these pages: Alex Altair, Adam Bales, Caleb Bell, Costanza Riccioli, Daniel Trenor, João Lourenço, Joshua Fox, Patrick Rhodes, Pedro Chaves, Stuart Armstrong, and Steven Kaas.
10 comments
comment by gwern · 2012-11-07T16:32:54.272Z · LW(p) · GW(p)
I've watched a lot of these edits through the RSS feed as part of my daily spam-fighting; good work everyone!
↑ comment by Kaj_Sotala · 2012-11-07T22:00:09.910Z · LW(p) · GW(p)
Give this man some upvotes for his daily spam-fighting, as well as for his assistance when auto-bans targeted at spammers accidentally hit us. :)
comment by MichaelAnissimov · 2012-11-07T19:53:22.771Z · LW(p) · GW(p)
Great work! That is a lot of updated pages.
↑ comment by Kaj_Sotala · 2012-11-07T22:00:24.604Z · LW(p) · GW(p)
Thanks. :)
comment by Slackson · 2012-11-07T10:21:46.892Z · LW(p) · GW(p)
This is awesome. Thanks for doing all that work.
↑ comment by Kaj_Sotala · 2012-11-07T22:00:20.654Z · LW(p) · GW(p)
Thanks. :)
comment by lukeprog · 2012-11-17T03:22:21.640Z · LW(p) · GW(p)
LW wiki articles I wish LWers would write/expand:
- Iterated embryo selection (update: AlexMennen wrote it)
- Doomsday argument (update: AlexMennen wrote it)
- Simpleton gambit (update: AlexMennen wrote it)
- Delusion box (update: AlexMennen wrote it)
- Causality
- Robot's Rebellion
- Dysrationalia
- Epistemic prisoner's dilemma (update: D_Malik wrote it)
- Counterfactual resiliency (update: AlexMennen wrote it)
- Personal identity (update: AlexMennen wrote it)
- Adversarial collaboration (update: AlexMennen wrote it)
- Imagination inflation
comment by John_Maxwell (John_Maxwell_IV) · 2012-11-08T00:58:57.857Z · LW(p) · GW(p)
> It's our hope that, with the largest part of the work already done, the LW community will help improve the articles even further.
Has someone watchlisted these pages to make sure no one accidentally makes them less accurate in the process of improving their presentation?
comment by Alex_Altair · 2012-11-07T22:03:06.600Z · LW(p) · GW(p)
I am pretty excited about the AI risk wiki.
comment by MichaelAnissimov · 2012-12-12T02:19:27.396Z · LW(p) · GW(p)
A key element in making use of this wiki will be setting up a system that blocks spammers from registering accounts. Perhaps there should be a CAPTCHA with an answer that only a genuine Less Wronger would know? Anyone who knows how to set this up could be of tremendous help.
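For illustration, here is a minimal sketch of the custom-question idea in Python. It is not the wiki's actual setup (the LW wiki runs MediaWiki, where the ConfirmEdit extension's QuestyCaptcha module provides this kind of question-based challenge); the questions, answers, and function names below are illustrative assumptions.

```python
import random

# Illustrative challenge questions; a real deployment would keep these
# in the wiki's configuration and rotate them as spammers adapt.
QUESTIONS = {
    "Rationalists should win. What is the last word of the previous sentence?": {"win"},
    "What is the surname of the author of the Sequences?": {"yudkowsky"},
}

def pick_question() -> str:
    """Choose a random challenge question to show on the signup form."""
    return random.choice(list(QUESTIONS))

def check_answer(question: str, answer: str) -> bool:
    """Accept the registration only if the normalized answer matches."""
    return answer.strip().lower() in QUESTIONS[question]

if __name__ == "__main__":
    q = pick_question()
    print(q)
    print("accepted" if check_answer(q, input("answer: ")) else "rejected")
```

The point of the design is that answers are normalized and matched against a set of acceptable forms, so trivial variations from humans pass while generic automated registrations fail.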