Erich_Grunewald's Shortform
post by Erich_Grunewald · 2022-09-16T10:42:21.833Z · LW · GW · 5 comments
comment by Erich_Grunewald · 2022-12-07T10:33:01.823Z · LW(p) · GW(p)
Here's what I usually try when I want to get the full text of an academic paper:
- Search Sci-Hub. Give it the DOI (e.g. https://doi.org/...) and then, if that doesn't work, give it a link to the paper's page at an academic journal (e.g. https://www.sciencedirect.com/science...).
- Search Google Scholar. I can often just search the paper's name, and if I find it, there may be a link to the full paper (HTML or PDF) on the right of the search result. The linked paper is sometimes not the exact version of the paper I am after -- for example, it may be a manuscript version instead of the accepted journal version -- but in my experience this is usually fine.
- Search the web for "name of paper in quotes" filetype:pdf. If that fails, search for "name of paper in quotes" and look at a few of the results if they seem promising. (Again, I may find a different version of the paper than the one I was looking for, which is usually but not always fine.)
- Check the paper's authors' personal websites for the paper. Many researchers keep an up-to-date list of their papers with links to full versions.
- Email an author to politely ask for a copy. Researchers spend a lot of time on their research and are usually happy to learn that somebody is eager to read it.
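The search steps above can be sketched as a small script that builds the queries in order. This is just an illustration; the function name and the tuple format are mine, and Sci-Hub and Google Scholar are searched via their own sites rather than any stable API.

```python
def full_text_queries(title, doi=None):
    """Return (where-to-search, query) pairs for the steps above, in order.

    The query strings are illustrative assumptions, not a real API:
    paste them into the corresponding site's search box.
    """
    queries = []
    if doi:
        # Step 1: Sci-Hub accepts a DOI link pasted into its search box.
        queries.append(("sci-hub", f"https://doi.org/{doi}"))
    # Step 2: Google Scholar usually finds papers by title alone.
    queries.append(("google-scholar", title))
    # Step 3: a quoted-title web search, first restricted to PDFs.
    queries.append(("web", f'"{title}" filetype:pdf'))
    queries.append(("web", f'"{title}"'))
    return queries

for where, query in full_text_queries("Some Paper Title", doi="10.1000/xyz123"):
    print(f"{where}: {query}")
```

Steps 4 and 5 (authors' websites and a polite email) don't automate well, which is part of why they come last.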
↑ comment by Lao Mein (derpherpize) · 2022-12-07T12:56:40.553Z · LW(p) · GW(p)
I would add Semantic Scholar to the list. It gives consistently better search results than Google Scholar and has a better interface. I've also found a really difficult-to-find paper on pre-print websites once or twice.
↑ comment by Erich_Grunewald · 2022-12-07T16:26:00.875Z · LW(p) · GW(p)
Thanks for the suggestion! I'll be trying it out and adding it to the list if I find it useful.
comment by Erich_Grunewald · 2023-03-29T20:36:50.807Z · LW(p) · GW(p)
I'm really confused by this passage from The Six Mistakes Executives Make in Risk Management (Taleb, Goldstein, Spitznagel):
We asked participants in an experiment: “You are on vacation in a foreign country and are considering flying a local airline to see a special island. Safety statistics show that, on average, there has been one crash every 1,000 years on this airline. It is unlikely you’ll visit this part of the world again. Would you take the flight?” All the respondents said they would.
We then changed the second sentence so it read: “Safety statistics show that, on average, one in 1,000 flights on this airline has crashed.” Only 70% of the sample said they would take the flight. In both cases, the chance of a crash is 1 in 1,000; the latter formulation simply sounds more risky.
One crash every 1,000 years is only the same as one crash in 1,000 flights if the airline operates exactly one flight per year on average. I guess they must have stipulated that in the experiment (for which there's no citation), because otherwise it's perfectly rational to treat the first option as safer, since an airline generally serves more than one flight per year.
comment by Erich_Grunewald · 2022-09-16T10:42:22.084Z · LW(p) · GW(p)
A few months ago I wrote a post about Game B. The summary:
I describe Game B, a worldview and community that aims to forge a new and better kind of society. It calls the status quo Game A and what comes after it Game B. Game A is the activity we’ve been engaged in at least since the dawn of civilisation, a Molochian competition over resources. Game B is a new equilibrium, a new kind of society that’s not plagued by collective action problems.
While I agree that collective action problems (broadly construed) are crucial in any model of catastrophic risk, I think that
- civilisations like our current one are not inherently self-terminating (75% confidence);
- there are already many resources allocated to solving collective action problems (85% confidence); and
- Game B is unnecessarily vague (90% confidence) and suffers from a lack of tangible feedback loops (85% confidence).
I think it may be of interest to some LW users, though it didn't feel on-topic enough to post here in full.