Welcome, new contributors!

post by orthonormal · 2015-03-23T21:53:20.000Z · LW · GW · 2 comments

Today is the day; we're opening up this forum to allow contributions from more people! See our How to Contribute page for the details.

Now is a great time to say a bit about what the Intelligent Agent Foundations Forum is, and how it came about. The short answer is that the Machine Intelligence Research Institute (MIRI) helped build this forum in order to facilitate research discussion on the topics in its technical agenda and related subjects.

Many of the early users of this forum previously contributed to a closed email group on decision theory, or wrote relevant posts on the group blog Less Wrong. MIRI wanted to build a forum that could focus on these topics and support high-quality mathematical collaboration, while being transparent and allowing new contributors to find and join it directly.

Broadly speaking, the topics of this forum concern the difficulties of value alignment: the problem of how to ensure that machine intelligences of various levels adequately understand and pursue the goals that their developers actually intended, rather than getting stuck on some proxy for the real goal or failing in other unexpected (and possibly dangerous) ways. Since these failure modes become more devastating the further we advance in building machine intelligences, MIRI's goal is to work today on the foundations of goal systems and architectures that would work even when the machine intelligence has general creative problem-solving ability beyond that of its developers, and has the ability to modify itself or build successors. (For more on the motivations for this work, see the Future of Life Institute's research priorities letter or Nick Bostrom's recent book Superintelligence.)

In that context, there are many interesting problems that come up; here are several from MIRI's technical agenda page:

This is not an exhaustive list of topics or of progress! In the next few days, several forum contributors plan to consolidate the work and discussions already on this forum, and produce summary posts with links for each group of topics (including some not listed above).

But the list does help us point out what we consider to be on-topic in this forum. Besides the topics mentioned there, other relevant subjects include groundwork for self-modifying agents, abstract properties of goal systems, tractable theoretical or computational models of the topics above, and anything else that is directly connected to MIRI's research mission.

It's important for us to keep the forum focused, though; there are other good places to talk about subjects that are more indirectly related to MIRI's research mission, and the moderators here may close down discussions on subjects that aren't a good fit for this mission. Some examples of subjects that we would consider off-topic (unless directly applied to a more relevant area) include general advances in artificial intelligence and machine learning, general mathematical logic, general philosophy of mind, general futurism, existential risks, effective altruism, human rationality, and non-technical philosophizing.

As Benja said in the original welcome post, the software is still fairly minimal and a little rough around the edges (though we do have LaTeX support). We hope to improve quickly! If you want to help us, the code is on GitHub. And if you find bugs, we hope you’ll let us know!

We look forward to your contributions!

2 comments

Comments sorted by top scores.

comment by IAFF-User-61 (Imported-IAFF-User-61) · 2015-05-26T21:05:41.000Z · LW(p) · GW(p)

It would be very helpful if you could document whether you support HTML tags, and if so what subset.

Replies from: orthonormal
comment by orthonormal · 2015-05-27T03:27:23.000Z · LW(p) · GW(p)

The Pandoc markdown documentation discusses this: click "formatting help" next to the text box, and then follow the link at the bottom of that page. Maybe the initial help should include instructions for some more common use cases, though.
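As a rough illustration only (the authoritative list of supported constructs is whatever the formatting help documents), standard Pandoc markdown with the forum's LaTeX support would typically let a comment use something like:

```markdown
Emphasis with *asterisks*, inline code with `backticks`, and
[links](https://example.com) are standard Pandoc markdown.

Inline math: $e^{i\pi} + 1 = 0$

Display math:
$$\sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6}$$
```

Whether raw HTML tags pass through in addition to these constructs would depend on how Pandoc is configured here, which is exactly what the question above asks to have documented.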