Idea: Open Access AI Safety Journal
post by Gordon Seidoh Worley (gworley) · 2018-03-23T18:27:01.166Z · LW · GW · 11 comments
This is short because I'm mainly trying to gather feedback on whether the idea is worth pursuing: how much value would the existence of an open access AI safety journal provide? Some reasons I can think of in favor:
- A journal dedicated to AI safety, giving the field better visibility.
- Peer review from researchers active in AI safety research.
- Open access venue for AI safety research to avoid ethical concerns with publishing in closed journals.
- A venue for publishing AI safety ideas that are academically rigorous but too far outside the mainstream for existing journals.
- A respectable venue that satisfies AI safety researchers' career needs while reducing the effort they have to expend to get their work published, allowing more AI safety research to happen.
Some reasons against:
- There are already plenty of journals.
- Pre-prints are good enough.
- Journals waste time signaling quality that is generally easy to assess by reading the articles themselves.
- Most of the value could be had by regularly publishing a reviewed list of pre-prints and articles in AI safety published elsewhere.
- A drag on the time of folks who could be doing research instead of reviewing and editing articles.
It appears that starting an open access journal is a little tedious but not especially hard (see this step-by-step guide), so it could probably be done by a team of 1–3 volunteers, and hopefully professionalized if it takes off.
Thoughts?
11 comments
comment by Rohin Shah (rohinmshah) · 2018-03-24T17:55:04.627Z · LW(p) · GW(p)
To the point of peer review, many AI safety researchers already get peer review by circulating their drafts to other researchers.
It seems to me that this is only a good use of your time if the journal becomes respectable. (Otherwise you barely increase the visibility of the field, no one will care about publishing in the journal, and it doesn't help academics' careers much.) There can even be a negative effect where AI safety is perceived as "that fringe field that publishes in <journal>", which makes AI researchers more reluctant to work on safety.
I don't know how a journal becomes respectable, but I would expect that it's hard and takes a lot of work (and probably luck), and I would want to see a good plan for how the journal will become respectable before I'd be excited to see this happen. I would guess that this wouldn't be doable without the effort of a senior AI/ML researcher.
Replies from: gworley, gworley
↑ comment by Gordon Seidoh Worley (gworley) · 2018-03-24T19:18:28.461Z · LW(p) · GW(p)
> To the point of peer review, many AI safety researchers already get peer review by circulating their drafts to other researchers.
I expect this to become less feasible as the field grows, especially as new researchers enter who do not yet have strong connections. For example, in my own work it has been useful to share drafts with folks for early feedback, but there was also value in the comments I received in peer review, which pointed out things like related work none of us was aware of (based on my experience publishing results in mathematics some years ago).
Replies from: rohinmshah
↑ comment by Rohin Shah (rohinmshah) · 2018-04-09T20:58:09.037Z · LW(p) · GW(p)
Yeah, I think I agree with this. You are still able to get peer review from the people you work with, if you work at an organization, but it is preferable to get more varied feedback, and some people may not work at an organization.
↑ comment by Gordon Seidoh Worley (gworley) · 2018-03-24T19:23:55.329Z · LW(p) · GW(p)
> I don't know how a journal becomes respectable, but I would expect that it's hard and takes a lot of work (and probably luck), and I would want to see a good plan for how the journal will become respectable before I'd be excited to see this happen. I would guess that this wouldn't be doable without the effort of a senior AI/ML researcher.
I agree with all of this, except that to me it suggests the project is worth trying, in the spirit of attempting potentially high-impact projects even when they have a low chance of success, since the expected utility is probably enough to overcome the opportunity costs. Yes, there are many challenges and maybe only a 10% chance of success, but that seems good enough to try if the idea is otherwise valuable.
Replies from: rohinmshah
↑ comment by Rohin Shah (rohinmshah) · 2018-04-09T21:09:12.643Z · LW(p) · GW(p)
Sorry for the super late response, I only just discovered notifications.
When the things you are trying are meta-level projects that affect other people, I think it's worth trying them _well_ even if they have a low chance of success, but quite costly to try them in an okayish way when the chance of success is low.
One major downside of trying new things is that failure makes future attempts to do the same thing less likely to work (because people are less enthusiastic and expect it to fail, or you get a proliferation of new things where half the people are on one and half on the other, and you lose out on network effects and economies of scale). This means that when you try new things, especially ones that make asks of other people, you want to put a _lot_ of effort into getting it right quickly. If you do the 20%-effort version and it fails, then maybe the 90%-effort version would have succeeded had you tried it first, but now it simply can't be done, and you've lost that value entirely. Whereas if you do the 90%-effort version from the start and it fails, you can be reasonably confident that it just wasn't doable.
In this particular case, there's also an object-level downside in the case of failure, namely that AI safety is thought of as "that fringe group that publishes in <journal>".
comment by Viliam · 2018-03-26T01:31:57.985Z · LW(p) · GW(p)
Would it make sense to build something like this on top of some already existing solution? For example, to put the articles themselves on arXiv, and only have separate infrastructure for highlighting the best contributions, providing specialized discussion, or whatever you think would be the most important added value.
That way, even if things go wrong (prior probability: I think high enough), the articles would still remain on arXiv.
(To be more precise, I wouldn't make arXiv mandatory, but rather the default option; you could post the content anywhere, and just provide the link. Division of labor, don't reinvent the wheel, et cetera.)
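To make this concrete, here is a minimal sketch of what the curation layer could look like, assuming only Python's standard library and arXiv's public export API (http://export.arxiv.org/api/query); the search phrase and the fetch_candidates helper are illustrative assumptions, not part of any existing overlay tool:
```python
# Minimal overlay-journal sketch: the papers live on arXiv; we only pull
# metadata to build a curated listing. The query below is an illustrative
# placeholder, not a vetted editorial filter.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"  # Atom namespace used by the arXiv API


def fetch_candidates(query: str = 'all:"AI safety"', max_results: int = 10):
    """Return (title, link) pairs for recent arXiv preprints matching query."""
    params = urllib.parse.urlencode({
        "search_query": query,
        "start": 0,
        "max_results": max_results,
        "sortBy": "submittedDate",
        "sortOrder": "descending",
    })
    url = f"http://export.arxiv.org/api/query?{params}"
    with urllib.request.urlopen(url) as resp:
        feed = ET.fromstring(resp.read())
    return [
        (entry.findtext(ATOM + "title", "").strip(), entry.findtext(ATOM + "id", ""))
        for entry in feed.findall(ATOM + "entry")
    ]


if __name__ == "__main__":
    # Print a simple reading list; a real overlay would add review status,
    # editorial commentary, and links to discussion.
    for title, link in fetch_candidates():
        print(f"- {title}\n  {link}")
```
The point of this design is that the overlay only curates links and metadata, so even if the overlay itself folds, the articles remain on arXiv.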
Replies from: Kaj_Sotala, Davidmanheim
↑ comment by Kaj_Sotala · 2018-03-26T11:47:37.543Z · LW(p) · GW(p)
(the term for this is "overlay journal")
↑ comment by Davidmanheim · 2018-03-26T18:22:18.793Z · LW(p) · GW(p)
YES - I was going to say this, and I think an overlay journal would be an excellent idea. The key benefit to authors would be useful peer review and promotion of their work. The question is whether people would use it, and I have no idea on that front.
comment by Gordon Seidoh Worley (gworley) · 2018-03-23T18:31:29.095Z · LW(p) · GW(p)
This is the sort of project I might be willing to take the lead on if folks think it's valuable (I'm currently too uncertain to commit), but others might be better positioned to take it over if they find it valuable. I put that out there just to make clear that I'll do it if it seems worth doing; but if someone else is especially excited about the idea, I invite you to "steal" it from me and run with it, since my time is, alas, currently expensive, I already have a lot of other commitments, and I'd rather do philosophy in my spare time than more of the sort of work I normally get paid for.
comment by avturchin · 2018-03-24T15:54:59.061Z · LW(p) · GW(p)
I think it is a great idea, but maybe we need an "Existential risk journal" which would cover not only AI safety but other things?
The main obstacle I see is that to have a high-quality journal, it should be started by a well-established institution (MIRI or FHI), and there should be a highly experienced scientific editor who is also deep in the topic and working full time, as well as some funding.
In general, high-quality peer review improves the quality of papers, as reviewers are obliged to find ALL errors. Preprints are good for promoting articles that are already high quality, but lower-quality submissions will be mostly ignored, so their authors will get only vague feedback, which will slow their improvement.
An ideal system would combine obligatory peer review with commenting from everybody interested, but solving the academic publishing problem is another very large task.
Replies from: gworley
↑ comment by Gordon Seidoh Worley (gworley) · 2018-03-24T19:25:51.604Z · LW(p) · GW(p)
Interesting, I hadn't considered covering all of x-risk, but it does seem an underserved area.