Arbital has been imported to LessWrong
post by RobertM (T3t), jimrandomh, Ben Pace (Benito), Ruby · 2025-02-20T00:47:33.983Z · LW · GW · 22 comments

Contents: New content · New (and updated) features · The new concepts page · The new wiki/tag page design · Non-tag wiki pages · Lenses · "Voting" · Inline Reacts · Summaries · Redlinks · Claims · The edit history page · Misc.
Arbital was envisioned as a successor to Wikipedia. The project was discontinued [LW · GW] in 2017, but not before many new features had been built and a substantial amount of writing about AI alignment and mathematics had been published on the website.
If you've tried using Arbital.com in the last few years, you might have noticed that it was on its last legs - no ability to register new accounts or log in to existing ones, slow load times (when it loaded at all), etc. Rather than try to keep it afloat, the LessWrong team worked with MIRI to migrate the public Arbital content to LessWrong, along with a decent chunk of its features. Part of this effort involved a substantial revamp of our wiki/tag pages, as well as the Concepts page. After sign-off[1] from Eliezer, we'll also redirect arbital.com links to the corresponding pages on LessWrong.
As always, you are welcome to contribute edits, especially to stubs, redlinks, or otherwise incomplete pages, though note that we'll have a substantially higher bar for edits to high-quality imported Arbital pages, especially those written in a "single author" voice.
New content
While Arbital had many contributors, Eliezer was one of the most prolific, and wrote something like a quarter million words across many pages, mostly on alignment-relevant subjects.
If you just want to jump into reading, we've curated what we consider to be some of the best selections of that writing.
If you really hate clicking links, I've copied over the "Tier 1" recommendations below.
Recommendations
| # | Page | Summary |
| --- | --- | --- |
| 1 | AI safety mindset | What kind of mindset is required to successfully build an extremely advanced and powerful AGI that is "nice"? |
| 2 | Convergent instrumental strategies and Instrumental pressure | Certain sub-goals like "gather all the resources" and "don't let yourself be turned off" are useful for a very broad range of goals and values. |
| 3 | Context disaster | Current terminology would call this "misgeneralization". Do alignment properties that hold in one context (e.g. training, while less smart) generalize to another context (deployment, much smarter)? |
| 4 | Orthogonality Thesis | The Orthogonality Thesis asserts that there can exist arbitrarily intelligent agents pursuing any kind of goal. |
| 5 | Hard problem of corrigibility | It's a hard problem to build an agent which, in an intuitive sense, reasons internally as if from the developer's external perspective – that it is incomplete, that it requires external correction, etc. This is not default behavior for an agent. |
| 6 | Coherent Extrapolated Volition | If you're extremely confident in your ability to align an extremely advanced AGI on complicated targets, this is what you should have your AGI pursue. |
| 7 | Epistemic and instrumental efficiency | "Smarter than you" is vague. "Never ever makes a mistake that you could predict" is more specific. |
| 8 | Corporations vs. superintelligences | Is a corporation a superintelligence? (An example of epistemic/instrumental efficiency in practice.) |
| 9 | Rescuing the utility function | "Love" and "fun" aren't ontologically basic components of reality. When we figure out what they're made of, we should probably go on valuing them anyways. |
| 10 | Nearest unblocked strategy | If you tell a smart consequentialist mind "no murder" but it is actually trying, it will just find the next best thing that you didn't think to disallow. |
| 11 | Mindcrime | The creation of artificial minds opens up the possibility of artificial moral patients who can suffer. |
| 12 | General intelligence | Why is AGI a big deal? Well, because general intelligence is a big deal. |
| 13 | Advanced agent properties | The properties of agents for which (1) we need alignment and (2) which are relevant in the big picture. |
| 14 | Mild optimization | "Mild optimization" is where, if you ask your advanced AGI to paint one car pink, it just paints one car pink and then stops, rather than tiling the galaxies with pink-painted cars, because it's not optimizing that hard. It's okay with just painting one car pink; it isn't driven to max out the twentieth decimal place of its car-painting score. |
| 15 | Corrigibility | The property such that if you tell your AGI that you installed the wrong values in it, it lets you do something about that. An unnatural property to build into an agent. |
| 16 | Pivotal Act | An act which would make a large positive difference to things a billion years in the future, e.g. an upset of the gameboard that's a decisive "win". |
| 17 | Bayes Rule Guide | An interactive guide to Bayes' theorem, i.e., the law of probability governing the strength of evidence - the rule saying how much to revise our probabilities (change our minds) when we learn a new fact or observe new evidence. |
| 18 | Bayesian View of Scientific Virtues | A number of scientific virtues are explained intuitively by Bayes' rule. |
| 19 | A quick econ FAQ for AI/ML folks concerned about technological unemployment | An FAQ aimed at a very rapid introduction to key standard economic concepts for professionals in AI/ML who have become concerned with the potential economic impacts of their work. |
New (and updated) features
The new concepts page
The new wiki/tag page design
We rolled this change out a few weeks ago, in advance of the Arbital migration, to shake out any bugs (and to see how many people complained). It's broadly similar to the new post page design [LW(p) · GW(p)], though for wiki pages the table of contents is always visible, rather than only appearing on hover.
Non-tag wiki pages
For a while now, LessWrong has hosted wiki pages that can't be used as "tags" (though they otherwise use the same architecture and UI). These were primarily for imports from the old LessWrong wiki, and users couldn't create "wiki-only" pages themselves. Given that most Arbital pages make more sense as wiki pages rather than tags, we decided to let users create them as well. See here [? · GW] for an explanation of the distinction.
Lenses
Lenses are... tabs. Opinionated tabs! As the name implies, they're meant to be different "lenses" on the same subject. You can click between them, and even link to them directly, if you want to send someone a specific lens.
"Voting"
In the past, you could vote on individual revisions of wiki pages. The intent was to create a feedback loop that rewarded high-quality contributions. This didn't quite work out, but even if it had, it wouldn't have given readers any way to judge the quality of various wiki pages. We've implemented something similar to Arbital's "like" system, so you can now "like" wiki pages and lenses. This has no effect on the karma of any users, but it does control the ordering of pages displayed on the new Concepts page. A user's "like" has as much weight as their strong vote, and pages are displayed in descending order based on the maximum "score" of any given page across all of its lenses[2].
You can like a page by clicking on the thumbs-up button in the top-right corner of a page, or the same icon on the tab for the lens you want to like.
As was the case on Arbital, likes on wiki pages are not anonymous.
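To make the ordering rule concrete, here is a minimal sketch (in TypeScript) of sorting by the maximum score across a page and its lenses. The `WikiPage` and `Lens` shapes and the function names are hypothetical illustrations, not LessWrong's actual schema or code.

```typescript
// Hypothetical shapes; not LessWrong's actual schema.
interface Lens {
  title: string;
  likeScore: number; // strong-vote-weighted "likes" on this lens
}

interface WikiPage {
  slug: string;
  likeScore: number; // likes on the page itself
  lenses: Lens[];
}

// A page sorts by the best score among the page and all of its lenses.
// Per footnote [2]: a page scoring 30 that has a lens scoring 50 sorts
// above a page scoring 40.
function displayScore(page: WikiPage): number {
  return Math.max(page.likeScore, ...page.lenses.map((l) => l.likeScore));
}

function sortForConceptsPage(pages: WikiPage[]): WikiPage[] {
  return [...pages].sort((a, b) => displayScore(b) - displayScore(a));
}
```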
Inline Reacts
Along with voting, we've added the ability to leave inline reacts on the contents of wiki pages. Please use responsibly.
Summaries
Summaries are a way to control the preview content displayed when a user hovers over a link to a wiki page. By default, LessWrong displays the first paragraph of content from the wiki page.
Arbital did the same, but also allowed users to write one or more custom summaries, which you can now do as well.
If a page has more than one summary, you'll see a slightly different view in the hover preview, with titled tabs for each summary.
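As a rough sketch of the fallback behavior described above (custom summaries shown as titled tabs if present, otherwise the first paragraph), the hover preview could be modeled like this. The types and the `previewTabs` helper are hypothetical, not the actual LessWrong implementation.

```typescript
// Hypothetical types; the real summary data model may differ.
interface WikiSummary {
  tabTitle: string; // title of the tab shown in the hover preview
  html: string;     // summary body
}

interface PreviewablePage {
  summaries: WikiSummary[];   // custom summaries, possibly empty
  firstParagraphHtml: string; // default preview content
}

// Show custom summaries as titled tabs if any exist;
// otherwise fall back to the first paragraph of the page body.
function previewTabs(page: PreviewablePage): WikiSummary[] {
  return page.summaries.length > 0
    ? page.summaries
    : [{ tabTitle: "Summary", html: page.firstParagraphHtml }];
}
```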
Redlinks
Redlinks are links to wiki pages that don't yet exist. The typical usage on Arbital was to signal to readers: "Hey, this is a placeholder for a concept or idea that hasn't been explained yet."
You can create a redlink by just linking to a wiki URL (/w/...) that doesn't point to an existing page.
To readers, redlinks look like links... which are red.
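A minimal sketch of how a renderer might decide that a link should be styled as a redlink, assuming a hypothetical `pageExists` lookup; the actual LessWrong implementation may differ.

```typescript
// Hypothetical check: a /w/... link whose slug doesn't resolve to an
// existing page gets rendered as a redlink.
async function isRedlink(
  href: string,
  pageExists: (slug: string) => Promise<boolean>, // e.g. a database lookup
): Promise<boolean> {
  const match = href.match(/^\/w\/([^/?#]+)/);
  if (!match) return false;             // not a wiki link at all
  return !(await pageExists(match[1])); // wiki link to a missing page
}
```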
Claims
Arbital had Claims. LessWrong used to have embeddable, interactive predictions [LW · GW][3]. These were close to identical features, so we brought ours back.
The edit history page
The edit history page now includes edits to lenses, summaries, and page metadata, in addition to edits to the main page content, comments on the page, and tag applications.
Misc.
You can now triple-click to edit wiki pages.
Some other Arbital features, such as guides/paths, subject prerequisites, page speeds and relationships, etc., have been imported to LessWrong in limited read-only formats. These features will be present on imported Arbital content, to preserve its existing structure as much as possible, but we haven't implemented the ability to use them for new wiki content on LessWrong.
As always, we're interested in your feedback, though we make no promises about what we'll do with it. Please do report bugs or unexpected behavior if/when you find them, especially around wikitags - you can reach us on Intercom (bottom-right corner of the screen) or leave a comment on this post.
[1] This might involve non-trivial updates to the current features and UI. No fixed timeline, but hopefully within a few weeks.
[2] If Page A has Lens A' with a score of 50, a link to Page A will be displayed above a link to Page B with a score of 40, even if Page A itself only has a score of 30.
[3] These were originally an integration with Ought. When Ought sunset their API, we imported all the existing questions and predictions and made them read-only.
22 comments, sorted by top scores.
comment by Vladimir_Nesov · 2025-02-20T01:03:53.891Z · LW(p) · GW(p)
The GreaterWrong Arbital mirror (created many years ago [LW · GW]) is also highly useful, and doesn't suffer from the original site's loading-time issues.
↑ comment by habryka (habryka4) · 2025-02-20T01:13:55.127Z · LW(p) · GW(p)
Yeah, I've been very glad to have that up. It does lack quite a large fraction of Arbital's features (such as UI for picking between multiple lenses, probabilistic claims, and tons of other small UI things which were a lot of work to import), but it's still been a really good resource for linking to.
comment by Chris_Leong · 2025-02-20T01:22:17.560Z · LW(p) · GW(p)
Lenses are... tabs. Opinionated tabs
Could you explain the intended use further?
↑ comment by habryka (habryka4) · 2025-02-20T01:42:00.661Z · LW(p) · GW(p)
The central problem of any wiki system is "what edits do you accept to a wiki page?"[1]. The lenses system is trying to provide a better answer to that question.
My default experience on e.g. Wikipedia when I am on pages where I am highly familiar with the domain is "man, I could write a much better page". But writing a whole better page is a lot of effort, and the default consequence of rewriting the page is that the editor who wrote the previous page advocates for your edits to be reverted, because they are attached to their version of the page.
With lenses, if you want to suggest large changes to a wiki page, your default action is now "write a new lens". This leaves the work of the previous authors intact, while still giving your new page the potential for substantial readership. Lenses are sorted in order of how many people like them. If you think you can write a better lens, you can make a new one, and if it's better, it can replace the original lens once it has gained traction.
More broadly, wikis suffer a lot from everything feeling like it is written by a committee. Lenses enable more individual authorship, while still trying to have some collective iteration on canonicity and structure of the wiki.
[1] Well, after you have solved the problem of "does anyone care about this wiki?"
↑ comment by cubefox · 2025-02-20T16:02:02.369Z · LW(p) · GW(p)
Is there perhaps a more descriptive name than "lens"? Maybe "variant" or "alternative"?
↑ comment by MondSemmel · 2025-02-20T16:35:11.287Z · LW(p) · GW(p)
I assume the idea of "lens" as a term is that it's one specific person's opinionated view of a topic. As in, "here's the concept seen through EY's lens". So terms like "variant" or "alternative" are too imprecise, but e.g. "perspective" might also work.
↑ comment by EniScien · 2025-02-20T20:46:32.389Z · LW(p) · GW(p)
IIRC you are wrong; lenses are just different ways to see the page on the same topic. They're also used for a "version for ML programmers", a "version for DT professors", a "version for ordinary people". Or for Wikipedia it would be a "scientifically precise encyclopedia" and a "quickly get useful info about the topic for an ordinary person".
Edit: oh, also, as far as I know, lenses are from tvtropes (caution: addictive memetic hazard)
↑ comment by justinpombrio · 2025-02-21T03:59:28.092Z · LW(p) · GW(p)
The fact that you so naturally used the word "version" here (it was essentially invisible, it didn't feel like a terminology choice at all) suggests that "version" would be a good term to use instead of "lens". Downside being that it's a sufficiently common word that it doesn't sound like a Term of Art.
↑ comment by Nick_Tarleton · 2025-02-20T19:59:22.862Z · LW(p) · GW(p)
I don't feel a different term is needed/important (n=1), but due to some uses I've seen of 'lens' as a technical metaphor, it strongly makes me think 'different mechanically-generated view of the same data/artifact', not 'different artifact that's (supposed to be) about the same subject matter', so I found the usage here a bit disorienting at first.
↑ comment by cubefox · 2025-02-20T17:48:32.633Z · LW(p) · GW(p)
I think lens and even perspective are metaphors here, where it isn't immediately obvious what they mean.
↑ comment by Raemon · 2025-02-20T20:09:08.114Z · LW(p) · GW(p)
What would be less metaphorical than 'perspective' that still captures the 'one opinionated viewpoint' thing?
↑ comment by cubefox · 2025-02-20T20:14:32.570Z · LW(p) · GW(p)
Good question. Variant or alternative are not metaphorical but also less specific.
↑ comment by Raemon · 2025-02-20T20:22:44.304Z · LW(p) · GW(p)
I guess I'm just kinda surprised "perspective" feels metaphorical to you – it seems like that's exactly what it is.
(I think it's a bit of a long clunky word so not obviously right here, but, still surprised about your take)
↑ comment by Chris_Leong · 2025-02-20T06:50:26.782Z · LW(p) · GW(p)
Interesting idea. Will be interesting to see if this works out.
comment by plex (ete) · 2025-02-21T12:34:59.684Z · LW(p) · GW(p)
This is awesome! Three comments:
- Please make an easy-to-find Recent Changes feed (maybe a thing on the home page which only appears if you've made wiki edits). If you want an editor community, that will be their home, and the thing they keep up with and use to positively reinforce each other.
- The concepts portal is now a slightly awkward mix of articles and tags, with potentially very high-use tags being quite buried because no one's written a good article for them (e.g. Rationality Quotes has 136 pages tagged, but zero karma, so it requires many clicks to reach). I'm especially thinking about the use case of wanting to know what types of articles there are to browse around. I'm not sure exactly what to do about this... maybe have the sorting not be just about karma, but a mix of karma and number of tagged posts? Like (k+10)*(t+10) or something? The disadvantage is that this is opaque and drops alphabetical ordering much harder.
- A bunch of the uncategorized ones could be categorized, but I'm not seeing a way to do this with normal permissions.
Adjusting (2) would make it much cleaner to categorize the many pages in (3) without that clogging up the normal lists.
↑ comment by plex (ete) · 2025-02-21T12:49:18.171Z · LW(p) · GW(p)
Also, I suggest that, given the number of tags in each section, "load more" should be "load all".
comment by Nathan Young · 2025-02-20T11:56:25.516Z · LW(p) · GW(p)
I am excited about improvements to the wiki. Might write some.
comment by Nathan Young · 2025-02-20T11:55:32.107Z · LW(p) · GW(p)
Claims
The claims logo is ugly.
↑ comment by habryka (habryka4) · 2025-02-20T19:04:26.336Z · LW(p) · GW(p)
It's true
comment by Guive (GAA) · 2025-02-20T09:16:38.976Z · LW(p) · GW(p)
Thanks for doing this, guys. This import will make it easier to access some important history.
comment by EniScien · 2025-02-20T20:29:29.187Z · LW(p) · GW(p)
Very cool. For probably two years now I've wondered why Arbital was a separate site instead of being part of LW. It would be very good if I can now read it without bugs and even make edits (easily, by three clicks!). I also like that there are tabs instead of "lenses"; I've always thought that "lens" is an improper idea if it can show you a completely different set of contents.
Also, I've long thought that it would be good to post the Sequences, HPMoR, and some other things as wiki pages; they are too crucial for LW for their edits to be vetoed as personal blog pages. And also post their translations into other languages there, so you can read them on LW with all its functions, in the same place as all the other content (for a long time I wasn't on LW because I read the Sequences as fb2 translations). And you could add and edit translations in a wiki way.
And probably also write one-sentence (descriptive) names and one-paragraph summaries, so you could get a quick understanding of the Sequences better than the "highlights". And probably also figure out which points of the Sequences are the most important to be convinced of at the beginning, vs. just knowing that they're the community's opinion.
And probably also find a way to know in advance which points will be more or less obvious to me. E.g. the Pebble Sorters were completely obvious to me (which can probably be checked via the Orthogonality Thesis), but Truly Part of You was very unobvious to me, and I suspect it's a question of generation. And some points were already in HPMoR, and other people could have read planecrash before the Sequences, or Feynman, GEB, Kahneman.
And just so I don't forget to say it: I've long wanted spaced-repetition reminders for reading posts; otherwise I forget them and forget that I forgot.