Open & Welcome Thread - June 2020
post by habryka (habryka4) · 2020-06-02T18:19:36.166Z · LW · GW · 101 comments
If it’s worth saying, but not worth its own post, here's a place to put it. (You can also make a shortform post)
And, if you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are welcome.
If you want to explore the community more, I recommend reading the Library, [? · GW] checking recent Curated posts [? · GW], seeing if there are any meetups in your area [? · GW], and checking out the Getting Started [LW · GW] section of the LessWrong FAQ [LW · GW].
The Open Thread sequence is here [? · GW].
101 comments
Comments sorted by top scores.
comment by willbradshaw · 2020-06-05T19:13:32.104Z · LW(p) · GW(p)
How much rioting is actually going on in the US right now?
If you trust leftist (i.e. most US) media, the answer is "almost none, virtually all protesting has been peaceful, nothing to see here, in fact how dare you even ask the question, that sounds suspiciously like something a racist would ask".
If you take a look on the conservative side of the veil, the answer is "RIOTERS EVERYWHERE! MINNEAPOLIS IS IN FLAMES! MANHATTAN IS LOST! TAKE YOUR KIDS AND RUN!"
So...how much rioting has there actually been? How much damage (very roughly)? How many deaths? Are there estimates of the number of rioters vs peaceful protesters?
(I haven't put much effort into actually trying to answer these questions, so no-one should feel much obligation to make the effort for me, but if someone already knows some of these answers, that would be cool.)
Replies from: Charlie Steiner
↑ comment by Charlie Steiner · 2020-06-13T06:06:43.762Z · LW(p) · GW(p)
I don't know much about the national picture, but I know that locally there's been no detectable rioting, some amount of looting by opportunistic criminals / idiot teenagers (say, 1 per 700 protesters), and less cop/protester violence than expected, though still some that could look like rioting if you squint.
comment by Yitz (yitz) · 2020-06-09T03:16:42.495Z · LW(p) · GW(p)
Hi, I joined because I was trying to understand Pascal’s Wager, and someone suggested I look up “Pascal’s mugging”... next thing I know I’m a newly minted HPMOR superfan, and halfway through reading every post Yudkowsky has ever written. This place is an incredible wellspring of knowledge, and I look forward to joining in the discussion!
Replies from: mingyuan
comment by Filipe Marchesini (filipe-marchesini) · 2020-06-03T10:58:46.386Z · LW(p) · GW(p)
LessWrong warned [LW · GW] me two months before it hit here. The suggested preparedness was clear and concise, and I felt the power in my hands. I had valuable info no one in my tribe had. I alerted my mom and she listened to me, stayed home and safe, while everyone was out partying (carnival). I had long talks with friends, and explained to them what I believed was happening and why I believed it. I showed the numbers, the math, the predictions for the next week; the next week came, and reality presented its metallic taste. Week after week, the light got brighter and brighter, until it became really hard to refuse to see it, or to believe in the belief that everything was just fine.
One thing I learned is that it doesn't matter if you know something really valuable when you can't convince the people who matter to you. I tried to explain to my father, a physician with 50 years of experience, that he should listen to me. He blamed my low status. Even after weeks, with police in the streets forcing citizens to stay home, he could not believe it. He was in denial, and my incompetence at changing his mind sent him to the hospital; 16 days later, he still isn't back. Don't worry, he is getting better and I am just babbling [LW · GW] around. A cousin's father died. My brother tested positive. My stepmother obviously got it too, but tested negative. You know, I really tried, not just tried to try [LW · GW], not even just planned to try; I really did it the right way. Successive failures. It wasn't enough. I don't have anything valuable to share beyond "you urgently have to learn ways of convincing your most loved ones", if you don't already have this tool. I don't know how to do it yet; I am struggling to find a way, and I would ask you to share when you find one. Things as simple as "there is food over there" or "there is a lion coming at you", on level 1 talk. Maybe the dark arts could have helped me when level 1 failed, but I'm not sure. But I feel very happy for the many peers I did help along the way, and it is all thanks to LessWrong. I am grateful for this community, and I changed the behavior of some people I know at the right moment of the outbreak. This simple text forum saves many lives, and I am on the path to contributing on a larger scale too.
I know I am not that new to the forum; I don't remember exactly when I started here, but I believe it was in the last months of last year. I still think I am a noob, but a learning one. Don't upvote this comment, but do comment if you want to say something.
Replies from: AnnaSalamon
↑ comment by AnnaSalamon · 2020-06-06T16:40:59.056Z · LW(p) · GW(p)
I'm glad you're trying, and am sorry to hear it is so hard; that sounds really hard. You might try the book "How to Have Impossible Conversations." I don't endorse every bit of it, but there's some good stuff in there IMO, or at least I got mileage from it.
comment by Wei Dai (Wei_Dai) · 2020-06-08T05:00:19.556Z · LW(p) · GW(p)
Please share ideas/articles/resources for immunizing one's kids against mind viruses.
I think I was lucky myself in that I was partially indoctrinated in Communist China, then moved to the US before middle school, which made it hard for me to strongly believe any particular religion or ideology. Plus the US schools I went to didn't seem to emphasize ideological indoctrination as much as schools currently do. Plus there was no social media pushing students to express the same beliefs as their classmates.
What can I do to help prepare my kids? (If you have specific ideas or advice, please mention what age or grade they are appropriate for.)
Replies from: riceissa, romeostevensit, ESRogs, NancyLebovitz, AllAmericanBreakfast, ryan_b, rudi-c
↑ comment by riceissa · 2020-06-08T06:14:46.907Z · LW(p) · GW(p)
Do you think that having your kids consume rationalist and effective altruist content and/or doing homeschooling/unschooling are insufficient for protecting your kids against mind viruses? If so, I want to understand why you think so (maybe you're imagining some sort of AI-powered memetic warfare?).
Eliezer has a Facebook post where he talks about how being socialized by old science fiction was helpful for him.
For myself, I think the biggest factors that helped me become/stay sane were spending a lot of time on the internet (which led to me discovering LessWrong, effective altruism, Cognito Mentoring) and not talking to other kids (I didn't have any friends from US public school during grades 4 to 11).
Replies from: Wei_Dai
↑ comment by Wei Dai (Wei_Dai) · 2020-06-08T06:28:13.829Z · LW(p) · GW(p)
Do you think that having your kids consume rationalist and effective altruist content and/or doing homeschooling/unschooling are insufficient for protecting your kids against mind viruses?
Homeschooling takes up too much of my time and I don't think I'm very good at being a teacher (having been forced to try it during the current school closure). Unschooling seems too risky. (Maybe it would produce great results, but my wife would kill me if it didn't. :) "Consume rationalist and effective altruist content" makes sense, but some more specific advice would be helpful, like what material to introduce, when, and how to encourage their interest if they're not immediately interested. Have any parents done this and can share their experience?
and not talking to other kids (I didn’t have any friends from US public school during grades 4 to 11)
Yeah that might have been a contributing factor for myself as well, but my kids seem a lot more social than me.
Replies from: riceissa, Benito
↑ comment by riceissa · 2020-06-08T09:42:50.079Z · LW(p) · GW(p)
“Consume rationalist and effective altruist content” makes sense but some more specific advice would be helpful, like what material to introduce, when, and how to encourage their interest if they’re not immediately interested. Have any parents done this and can share their experience?
I don't have kids (yet) and I'm planning to delay any potential detailed research until I do have kids, so I don't have specific advice. You could talk to James Miller [LW(p) · GW(p)] and his son [LW(p) · GW(p)]. Bryan Caplan seems to also be doing well in terms of keeping his sons' views similar to his own; he does homeschool, but maybe you could learn something from looking at what he does anyway. There are a few other rationalist parents, but I haven't seen any detailed info on what they do in terms of introducing rationality/EA stuff. Duncan Sabien has also thought a lot about teaching children, including designing a rationality camp for kids.
I can also give my own data point: Before discovering LessWrong (age 13-15?), I consumed a bunch of traditional rationality content like Feynman, popular science, online philosophy lectures, and lower quality online discourse like the xkcd forums. I discovered LessWrong when I was 14-16 (I don't remember the exact date) and read a bunch of posts in an unstructured way (e.g. I think I read about half of the Sequences but not in order), and concurrently read things like GEB and started learning how to write mathematical proofs. That was enough to get me to stick around, and led to me discovering EA, getting much deeper into rationality, AI safety, LessWrongian philosophy, etc. I feel like I could have started much earlier though (maybe 9-10?) and that it was only because of my bad environment (in particular, having nobody tell me that LessWrong/Overcoming Bias existed) and poor English ability (I moved to the US when I was 10 and couldn't read/write English at the level of my peers until age 16 or so) that I had to start when I did.
↑ comment by Ben Pace (Benito) · 2020-06-08T06:39:59.496Z · LW(p) · GW(p)
If you're looking for a datapoint, I found and read this ePub of all of Eliezer's writing [LW · GW] when I was around 13 or 14. Would read it late into the night every day (1am, 2am) on the tablet I had at the time, I think an iPhone.
Before that... the first book I snuck out to buy+read was Sam Harris's "Letter to a Christian Nation" when I was 12-13, and I generally found his talks and books to be really exciting and mind-expanding.
↑ comment by romeostevensit · 2020-06-08T06:41:01.732Z · LW(p) · GW(p)
Opening the Heart of Compassion outlines the Buddhist model of 6 deleterious configurations that people tend to fall into. On top of this I would add that much of the negative consequences of this come from our tendency towards monism: to find one thing that works and then try to build an entire worldview out of it.
↑ comment by ESRogs · 2020-06-17T00:50:38.751Z · LW(p) · GW(p)
Are you most concerned that:
1) they will believe false things (which is bad for its own sake)
2) they will do harm to others due to false beliefs
3) harm will come to them because of their false beliefs
4) they will become alienated from you because of your disagreements with each other
5) something else?
It seems like these different possibilities would suggest different mitigations. For example, if the threat model is that they just adopt the dominant ideology around them (which happens to be false on many points), then that results in them having false beliefs (#1), but may not cause any harm to come to them from it (#3) (and may even be to their benefit, in some ways).
Similarly, depending on whether you care more about #1 or #4, you may try harder to correct their false ideas, or to establish a norm for your relationship that it's fine to disagree with each other. (Though I suspect that, generally speaking, efforts that tend to produce a healthy relationship will also tend to produce true beliefs, in the long run.)
Replies from: Wei_Dai, Wei_Dai
↑ comment by Wei Dai (Wei_Dai) · 2020-06-17T10:04:32.620Z · LW(p) · GW(p)
I should also address this part:
For example, if the threat model is that they just adopt the dominant ideology around them (which happens to be false on many points), then that results in them having false beliefs (#1), but may not cause any harm to come to them from it (#3) (and may even be to their benefit, in some ways).
Many Communist true believers in China met terrible ends as waves of "political movements" swept through the country after the CCP takeover, and pitted one group against another, all vying to be the most "revolutionary". (One of my great-grandparents could have escaped but stayed in China because he was friends with a number of high-level Communists and believed in their cause. He ended up committing suicide when his friends lost power to other factions and the government turned on him.)
More generally, ideology can change so quickly that it's very difficult to follow it closely enough to stay safe, and even if you did follow the dominant ideology perfectly, you're still vulnerable to the next "vanguard" who pushes the ideology in a new direction in order to take power. I think even if "adopt the dominant ideology" is sensible as a defensive strategy for living in some society, you'd still really want to avoid getting indoctrinated into being a true believer, so you can apply rational analysis to the political struggles that will inevitably follow.
↑ comment by Wei Dai (Wei_Dai) · 2020-06-17T05:41:06.067Z · LW(p) · GW(p)
I guess I'm worried about
- They will "waste their life", for both the real opportunity cost and the potential regret they might feel if they realize the error later in life.
- My own regret in knowing that they've been indoctrinated into believing wrong things (or into having unreasonable certainty about potentially wrong things), when I probably could have done something to prevent that.
- Their views making family life difficult. (E.g., if they were to secretly record family conversations and post them on social media as examples of wrongthink, like some kids have done.)
Can't really think of any mitigations for these aside from trying not to let them get indoctrinated in the first place...
↑ comment by NancyLebovitz · 2020-06-25T14:51:27.140Z · LW(p) · GW(p)
I don't have children, and my upbringing wasn't especially good or bad on learning rationality.
Still, what I'm noticing in your post and the comments so far is the idea that rationality is something to put into your children.
I believe that rationality mostly needs to be modeled. Take your mind and your children's connection to the universe seriously. Show them that thinking and arguing are both fun and useful.
↑ comment by DirectedEvolution (AllAmericanBreakfast) · 2020-06-11T16:10:31.256Z · LW(p) · GW(p)
Do you mean how to teach them critical thinking skills? Or how to get them to prize the truth over fitting in?
I'm going to assume you're not a radical leftist. What if your 16 year old kid started sharing every leftist meme because they've really thought about it and think it's true? What if they said "it doesn't matter if there's pressure to hold these political opinions; they're as true as gravity!"
Would you count that as a success, since they're bold enough to stand up to an authority figure (you) to honestly express their deeply-considered views? Or a failure? If the latter, why?
Replies from: ChristianKl, frontier64
↑ comment by ChristianKl · 2020-06-11T17:53:15.496Z · LW(p) · GW(p)
I'm going to assume you're not a radical leftist. What if your 16 year old kid started sharing every leftist meme because they've really thought about it and think it's true?
I don't think that most people who really think issues through agree with every leftist meme and think each meme is true. Part of modern leftish ideology is that you should say certain things even when they are not true, because you want to show solidarity. There's also a belief that certain values shouldn't be "thought through"; they are sacred and not supposed to be questioned.
Replies from: AllAmericanBreakfast
↑ comment by DirectedEvolution (AllAmericanBreakfast) · 2020-06-11T18:31:01.256Z · LW(p) · GW(p)
It sounds like you're setting the bar for epistemic hygiene (i.e. not being infected by a mind virus) at being able to justify your worldview from the ground up. Is that an isolated demand for rigor, or would you view anyone unable to do that as an unreasonable conformist?
Replies from: ChristianKl
↑ comment by ChristianKl · 2020-06-11T21:46:53.386Z · LW(p) · GW(p)
I think you're ignoring that plenty of people believe in epistemics that value engaging in critical analysis only in the sense of critical theory, not in the sense of critical thinking.
In leftish activism, people are expected to approve at the same time of the meme "homophobia should always be challenged" and the meme "Islam shouldn't be challenged". Explicit discussions about how those values should be traded off against each other are shunned because they violate the underlying sacredness.
Frequently, there's an idea that beliefs should be based on experience, or on trusting people with experience, and not on thinking things through. Valuing thinking things through is not universal.
Replies from: AllAmericanBreakfast
↑ comment by DirectedEvolution (AllAmericanBreakfast) · 2020-06-11T22:21:37.090Z · LW(p) · GW(p)
I'm just not convinced that the radical left has epistemic norms or value priorities that are unusually bad. Imagine you were about to introduce me to five of your friends to talk politics. One identifies as a radical leftist, one a progressive moderate, another a libertarian, the fourth a conservative, and the fifth apolitical. All five of them share a lot of memes on Facebook. They also each have a blog where they write about their political opinions.
I would not be particularly surprised if I had a thoughtful, stimulating conversation with any of them.
My prior is that intellectual profiling based on ideology isn't a good way to predict how thoughtful somebody is.
So for me, if Wei Dai Jr. turned out to be a 16 year old radical leftist, I wouldn't think he's any more conformist than if he'd turned out to be a progressive, libertarian, conservative, or apolitical.
That might just be a crux of disagreement for us based on differing experiences in interacting with each of these groups.
↑ comment by frontier64 · 2020-06-11T17:13:33.526Z · LW(p) · GW(p)
A 16yo going into the modern school system and coming out a radical leftist is much more often a failure state than a success state.
Young leftist conformists outnumber the thought-out, well-reasoned young leftists by at least 10 to 1, so that's where our prior should be. Hypothetical Wei then has a few conversations with his hypothetical radical leftist kid, and the kid reasons well for a 16yo. We would expect a well-reasoned leftist to reason well more often than a conformist leftist, so that updates our prior, but I don't think it goes far enough to overcome the original 10 to 1. Well-reasoned people only make their arguments sound well-reasoned to others maybe 90% of the time at most, and even conformists can make nice-sounding arguments (for a 16yo) fairly often.
Even after the conversations, it's still more likely that the hypothetical radical leftist kid is a conformist rather than well-reasoned. If hypothetical Wei had some ability to determine to a high degree of certainty whether his kid was a conformist or well-reasoned then that would be a very different case and he likely wouldn't have the concerns that his children will be indoctrinated that he expressed in the original post.
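The odds calculation in the comment above can be sketched as a one-step Bayesian update in odds form. The 10:1 prior and the 90% figure come from the comment; the 50% rate at which conformists produce nice-sounding arguments is an illustrative assumption, not a number from the thread:

```python
# Odds-form Bayesian update: posterior odds = prior odds * likelihood ratio.
prior_odds = 10 / 1  # conformist : well-reasoned (the comment's prior)

p_sounds_good_given_reasoned = 0.90    # "maybe 90% of the time max" (from the comment)
p_sounds_good_given_conformist = 0.50  # assumed for illustration

likelihood_ratio = p_sounds_good_given_conformist / p_sounds_good_given_reasoned
posterior_odds = prior_odds * likelihood_ratio

print(round(posterior_odds, 2))  # 5.56: conformist is still >5x more likely
```

Even with this fairly generous likelihood ratio, the posterior odds stay well above 1:1, matching the comment's conclusion that a few good conversations don't overcome the prior.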
Replies from: AllAmericanBreakfast
↑ comment by DirectedEvolution (AllAmericanBreakfast) · 2020-06-11T17:35:08.453Z · LW(p) · GW(p)
You're neglecting the base rate of 16 year old conformity. I think this is some pretty silly speculation, but let's run with it. Isn't the base rate for 16 year old conformity at least 10 to 1? If so, a 16 year old who's a leftist is no more likely to be a conformist than any other.
In the end, what we're looking for is a reliable signal that, whatever the 16 year old thinks, it's due to their independent reasoning.
Widely shared reasonable beliefs won't cut it, because they wouldn't have to think it out for themselves. Outrageous contrarian views won't work, because that's not reasonable.
You'd have to look for them to hold views that are both reasonable and contrarian. So, a genius. Is that a realistic bar to diagnose your kid as uninfected by mind viruses?
Replies from: frontier64
↑ comment by frontier64 · 2020-06-11T18:12:53.944Z · LW(p) · GW(p)
Ideological conformity in the school system is not uniform. A person turning left when everybody else is turning right is much less likely to be a conformist than someone else turning right.
ETA: Without metaphor, our priors for conformist vs. well-reasoned is different for young rightists or non-leftists in the school system.
↑ comment by ryan_b · 2020-06-26T16:01:24.467Z · LW(p) · GW(p)
My daughter is 2. Everything we do with her is either indoctrination or play; she doesn't have enough language yet for the learning-begets-learning we naturally assume with older kids and adults.
I was in the military, which is probably the most successful employer of indoctrination in the US. I believe the key to this success rests with the clarity of the indoctrination's purpose and effectiveness: the purpose is to keep everyone on the same page, because if we aren't our people will die (where our people means the unit). Indoctrination is the only tool available for this because there isn't time for sharing all the relevant information or doing analysis.
I plan to capture these benefits for my daughter by being explicit, when she inevitably has questions, about the fact that I'm using indoctrination and why indoctrination is a good tool for the situation, rather than about how we think or feel about it.
The bearing I think this has on the question of mind viruses is that she will know what indoctrination looks like when she sees it. Further, she will have expectations of purpose and impact; political indoctrination fails these tests, which I hope will trigger rejection (or at least forestall overcommitment).
↑ comment by Rudi C (rudi-c) · 2020-06-14T07:17:53.101Z · LW(p) · GW(p)
How are you handling the problem that rationality often has negative payoff unless it's shared by a critical mass of people (e.g., it often leads to poor signaling, or to anti-signaling if one is lucky)?
comment by Wei Dai (Wei_Dai) · 2020-06-03T22:52:46.908Z · LW(p) · GW(p)
1. People I followed on Twitter for their credible takes on COVID-19 now sound insane. Sigh...
2. I feel like I should do something to prep (e.g., hedge risk to me and my family) in advance of AI risk being politicized, but I'm not sure what. The obvious idea is to stop writing under my real name, but the cost/benefit doesn't seem worth it.
↑ comment by CarlShulman · 2020-06-05T15:43:55.152Z · LW(p) · GW(p)
Re hedging, a common technique is having multiple fairly different citizenships and foreign-held assets, i.e. such that if your country becomes dangerously oppressive, neither you nor your assets would be handed back to it. E.g. many Chinese elites pick up a Western citizenship for themselves or their children, and wealthy people fearing change in the US sometimes pick up New Zealand or Singapore homes and citizenship.
There are many countries with schemes to sell citizenship, although often you need to live in them for some years after you make your investment. Then emigrate if things are starting to look too scary before emigration is restricted.
My sense, however, is that the current risk of needing this is very low in the US, and the most likely reason for someone with the means to buy citizenship to leave would just be increases in wealth/investment taxes through the ordinary political process, with extremely low chance of a surprise cultural revolution (with large swathes of the population imprisoned, expropriated or killed for claimed ideological offenses) or ban on emigration. If you take enough precautions to deal with changes in tax law I think you'll be taking more than you need to deal with the much less likely cultural revolution story.
Replies from: hg00, Wei_Dai
↑ comment by hg00 · 2020-06-06T08:41:01.719Z · LW(p) · GW(p)
Permanent residency (as opposed to citizenship) is a budget option. For example, for Panama, I believe if you're a citizen of one of 50 nations on their "Friendly Nations" list, you can obtain permanent residency by depositing $10K in a Panamanian bank account. If I recall correctly, Paraguay's permanent residency has similar prerequisites ($5K deposit required) and is the easiest to maintain--you just need to be visiting the country every 3 years.
↑ comment by Wei Dai (Wei_Dai) · 2020-06-11T06:20:22.643Z · LW(p) · GW(p)
I was initially pretty excited about the idea of getting another passport, but on second thought I'm not sure it's worth the substantial costs involved. Today people aren't losing their passports or having their movements restricted for (them or their family members) having expressed "wrong" ideas, but just(!) losing their jobs, being publicly humiliated, etc. This is more the kind of risk I want to hedge against (with regard to AI), especially for my family. If the political situation deteriorates even further to where the US government puts official sanctions on people like me, humanity is probably just totally screwed as a whole and having another passport isn't going to help me that much.
Replies from: hg00
↑ comment by hg00 · 2020-08-08T14:26:25.916Z · LW(p) · GW(p)
I spent some time reading about the situation in Venezuela, and from what I remember, a big reason people are stuck there is simply that the bureaucracy for processing passports is extremely slow/dysfunctional (and lack of a passport is a barrier to achieving legal immigration status in any other country). So it might be worthwhile to renew your passport more regularly than is strictly necessary, so that you always have, say, at least a 5-year buffer on it, in case we see the same kind of institutional dysfunction. (Much less effort than acquiring a second passport.)
Side note: I once talked to someone who became stuck in a country that he was not a citizen of because he allowed his passport to expire and couldn't travel back home to get it renewed. (He was from a small country. My guess is that the US offers passport services without needing to travel back home. But I could be wrong.)
↑ comment by riceissa · 2020-06-04T02:10:52.556Z · LW(p) · GW(p)
People I followed on Twitter for their credible takes on COVID-19 now sound insane. Sigh...
Are you saying that you initially followed people for their good thoughts on COVID-19, but (a) now they switched to talking about other topics (George Floyd protests?), and their thoughts are much worse on these other topics, (b) their thoughts on COVID-19 became worse over time, (c) they made some COVID-19-related predictions/statements that now look obviously wrong, so that what they previously said sounds obviously wrong, or (d) something else?
Replies from: Wei_Dai
↑ comment by Wei Dai (Wei_Dai) · 2020-06-04T08:55:49.188Z · LW(p) · GW(p)
You'll have to infer it from the fact that I didn't explain more and am not giving a straight answer now. Maybe I'm being overly cautious, but my parents and other relatives lived through (and suffered in) the Cultural Revolution and other "political movements", and wouldn't it be silly if I failed to "expect the Spanish Inquisition" despite that?
Replies from: Dagon
↑ comment by Dagon · 2020-06-04T16:25:24.523Z · LW(p) · GW(p)
It's helpful (to me, in understanding the types of concerns you're having) to have mentioned the Cultural Revolution. For this, posting under a pseudonym probably doesn't help - the groups who focus on control rather than thriving have very good data collection and processing capability, and that's going to leak to anyone who gets sufficient power with them. True anonymity is gone forever, except by actually being unimportant to the new authorities/mobs.
I wasn't there, but I had neighbors growing up who'd narrowly escaped and who had friends/relatives killed. Also, a number of friends who relayed family stories from the Nazi Holocaust. The lesson I take is that it takes off quickly, but not quite overnight. There were multi-month windows in both cases where things were locked down, but still porous for those lucky enough to have planned for it, or with assets not yet confiscated, or willing to make sacrifices and take large risks to get out. I suspect those who want to control us have _ALSO_ learned this lesson, and the next time will have a smaller window - perhaps as little as a week. Or perhaps I'm underestimating the slope and it's already too late.
My advice is basically a barbell strategy for life. Get your exit plan ready to execute on very short notice, and understand that it'll be costly if you do it. Set objective thresholds for triggers that you just go without further analysis, and also do periodic gut checks to decide to go for triggers you hadn't considered in advance. HOWEVER, most of your time and energy should go to the more likely situation where you don't have to ditch. Continue building relationships and personal capital where you think it's best for your overall goals. Do what you can to keep your local environment sane, so you don't have to run, and so the world gets back onto a positive trend.
Replies from: Wei_Dai
↑ comment by Wei Dai (Wei_Dai) · 2020-06-05T04:31:25.688Z · LW(p) · GW(p)
Get your exit plan ready to execute on very short notice, and understand that it’ll be costly if you do it.
What would be a good exit plan? If you've thought about this, can you share your plan and/or discuss (privately) my specific situation?
Do what you can to keep your local environment sane, so you don’t have to run, and so the world gets back onto a positive trend.
How? I've tried to do this a bit, but it takes a huge amount of time, effort, and personal risk, and whatever gains I manage to eke out seem highly ephemeral at best. It doesn't seem like a very good use of my time when I can spend it on something like AI safety instead. Have you been doing this yourself, and if so, what has been your experience?
Replies from: Dagon, Bjartur Tómas
↑ comment by Dagon · 2020-06-05T16:27:12.411Z · LW(p) · GW(p)
I do not intend to claim that I'm particularly great at this, and I certainly don't think I have sufficient special knowledge for 1-1 planning. I'm happy to listen and make lightweight comments if you think it'd be helpful.
What would be a good exit plan?
My plans are half-formed, and include maintaining some foundational capabilities that will help in a large class of disasters that require travel. I have bank accounts in two nations and currencies, and I keep some cash in a number of currencies. Some physical precious metals or hard-to-confiscate digital currency is a good idea too. I have friends and coworkers in a number of countries (including over a border I can cross by land), who I visit enough that it will seem perfectly normal for me to want to travel there. I'm seriously considering real-estate investments in one or two of those places, to make it even easier to justify travel if it becomes restricted or suspicious.
I still think that the likelihood is low that I'll need to go, but there may come a point where the tactic of maintaining rolling refundable tickets becomes reasonable - buy a flight out at 2 weeks and 4 weeks, and every 2 weeks cancel the near one and buy a replacement further one.
Do what you can to keep your local environment sane.
How?
This is harder to advise. I'm older than most people on LW, and have been building software and saving/investing for decades, so I have resources that can help support what seem to be important causes, and I have a job that has (indirect, but clear) impact on keeping the economy and society running.
I also support and participate in protests and visibility campaigns to try to make it clear to the less-foresightful members of society that tightening control isn't going to work. This part is more personal, less clearly impactful toward my goals, and takes a huge amount of time, effort, and personal risk. It's quite possible that I'm doing it more for the social connections with friends and peers than for purely rational goal-seeking. I wouldn't fault anyone for preferring to put their effort (which will ALSO take a huge amount of time, effort, and risk, though maybe less short-term physical risk; everything worthwhile does) into other parts of the large and multidimensional risk-space.
↑ comment by Tomás B. (Bjartur Tómas) · 2020-06-05T14:26:16.056Z · LW(p) · GW(p)
What would be a good exit plan? If you've thought about this, can you share your plan and/or discuss (privately) my specific situation?
+1 for this. Would love to talk to other people seriously considering exit. Maybe we could start a Telegram or something.
Replies from: Xodarap
↑ comment by [deleted] · 2020-06-04T12:10:09.066Z · LW(p) · GW(p)
I saw this "stopped clock" assumption catching a bunch of people with COVID-19, so I wrote a quick post [LW · GW] on why it seems unlikely to be a good strategy.
↑ comment by Liam Donovan (liam-donovan-1) · 2020-07-16T06:33:49.188Z · LW(p) · GW(p)
What/who does #1 refer to? I've changed my mind a lot due to reading tweets from people I initially followed due to their credible COVID-19 takes, and you saying they sound insane would be a major update for me.
comment by Ben Pace (Benito) · 2020-06-05T05:46:32.075Z · LW(p) · GW(p)
Scott's new post on Problems With Paywalls reminds me to mention the one weird trick I use to get around paywalls. Many places like NYT will make the paywall appear a few seconds after landing on the page, so I reliably hit cmd-a and cmd-c and then paste the whole post into a text editor, and read it there instead of on the site. This works for the majority of paywalled articles I encounter personally.
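The trick works because many sites ship the full article text in the initial HTML and only overlay the paywall with client-side JavaScript afterwards. The same idea can be scripted; here is a minimal sketch using only the Python standard library (the sample HTML below is invented for illustration, and a real site would need an actual fetch plus more robust parsing):

```python
from html.parser import HTMLParser

class ParagraphExtractor(HTMLParser):
    """Collects the text inside <p> tags and ignores everything else,
    including any paywall overlay markup."""
    def __init__(self):
        super().__init__()
        self._in_p = False
        self.paragraphs = []

    def handle_starttag(self, tag, attrs):
        if tag == "p":
            self._in_p = True
            self.paragraphs.append("")

    def handle_endtag(self, tag):
        if tag == "p":
            self._in_p = False

    def handle_data(self, data):
        if self._in_p:
            self.paragraphs[-1] += data

# Illustrative page: the article text is present in the raw HTML,
# with the paywall as a separate overlay element.
sample = (
    "<html><body>"
    "<div id='paywall-overlay'>Subscribe to continue reading</div>"
    "<article><p>First paragraph of the article.</p>"
    "<p>Second paragraph.</p></article>"
    "</body></html>"
)

parser = ParagraphExtractor()
parser.feed(sample)
print("\n\n".join(parser.paragraphs))
```

If the site instead withholds the text server-side (as Scott's post notes some do), nothing like this can recover it; the select-all trick and this sketch both rely on the text having been delivered in the first place.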
Replies from: Wei_Dai, nwj, gilch
↑ comment by Wei Dai (Wei_Dai) · 2020-06-05T07:05:59.431Z · LW(p) · GW(p)
Or you can use Bypass Paywalls with Firefox or Chrome.
Replies from: ryan_b
↑ comment by nwj · 2020-06-07T14:04:37.399Z · LW(p) · GW(p)
If you use Firefox, there is an extension called Temporary Containers. This allows you to load a site in a temporary container tab, which is effectively like opening the site in a fresh install of a browser or on a new device. For sites with rate limited pay walls like the NYT, this effectively defeats the paywall as it never appears to them that you have gone over their rate limit.
The extension can be configured so that every instance of a particular url is automatically opened in its own temporary container, which defeats these paywalls at very little cost to convenience.
↑ comment by gilch · 2020-06-08T23:51:29.396Z · LW(p) · GW(p)
You can often find articles in the Wayback Machine even if they're paywalled.
comment by Wei Dai (Wei_Dai) · 2020-06-19T04:51:17.599Z · LW(p) · GW(p)
Personal update: Over the last few months, I've become much less worried that I have a tendency to be too pessimistic (because I frequently seem to be the most pessimistic person in a discussion). Things I was worried about more than others (coronavirus pandemic, epistemic conditions getting significantly worse) have come true, and when I was wrong [LW(p) · GW(p)] in a pessimistic direction, I updated [LW(p) · GW(p)] quickly after coming across a good argument (so I think I was wrong just because I didn't think of that argument, rather than due to a tendency to be pessimistic).
Feedback welcome, in case I've updated too much about this.
comment by ADITHYA SRINIVASAN (adithya-srinivasan) · 2020-06-08T19:59:16.346Z · LW(p) · GW(p)
Hi guys, I've been a long-time lurker here. Wanted to ask: have you ever done rereads of the Sequences, so that new people can engage with the content better and discuss? Just a thought.
Replies from: gilch
comment by limestone · 2020-06-23T18:09:23.994Z · LW(p) · GW(p)
Hello; just joined; working through the Library. I appreciate the combination of high standards and welcoming tone. I'm a homeschooling (pre-Covid-19) parent in the US South, so among other things I'm looking forward to finding thoughts here on education for children.
I found Slate Star Codex before LessWrong and hope this doxxing/outing situation works out safely.
Replies from: mingyuan, habryka4
↑ comment by mingyuan · 2020-06-23T18:38:42.720Z · LW(p) · GW(p)
There are certainly a lot of people here interested in the same topic! Jeff (https://www.lesswrong.com/users/jkaufman [LW · GW]) is probably the most prolific poster on raising children, though his kids are still quite young. Good luck and have fun!
↑ comment by habryka (habryka4) · 2020-06-23T18:31:45.315Z · LW(p) · GW(p)
Welcome limestone! And feel free to leave comments here or ping the admins on Intercom (the small chat bubble in the bottom right) if you run into any problems!
comment by adamShimi · 2020-06-04T21:12:05.508Z · LW(p) · GW(p)
I noticed that all posts from the last day and a half are still personal blogposts, even though many are more "Frontpage" kind of stuff. Is there a bug in the site, is it a new policy for what makes it to frontpage, or is it just that the moderation team hasn't had time to go through the posts?
Replies from: Benito, Zack_M_Davis
↑ comment by Ben Pace (Benito) · 2020-06-04T21:36:04.765Z · LW(p) · GW(p)
Thanks for commenting. So, the latest admin UI requires us to decide which core tags to give a post before deciding whether to frontpage it, which adds a trivial inconvenience, which leads to delays. At the minute I care a fair bit about getting the core tags right, so I'm not sure what the best thing to do about this is.
Replies from: Zack_M_Davis, adamShimi
↑ comment by Zack_M_Davis · 2020-06-04T21:50:36.366Z · LW(p) · GW(p)
This seems kind of terrible? I expect authors and readers care more about new posts being published than about the tags being pristine.
Replies from: Benito
↑ comment by Ben Pace (Benito) · 2020-06-04T22:27:54.187Z · LW(p) · GW(p)
Yeah, to be clear, I agree on both counts; see my reply to adam below about how long I think the frontpage decisions should take. I do think the tags are important, so it's been good to experiment with this, but it isn't the right call to have delays of this length in general, and I/the team should figure out a way to prevent the delays pretty soon.
Added: Actually, I think that as readers use tags more to filter their frontpage posts, it'll be more important to many of them that a post is filtered in/out of their feed, than whether it was frontpaged efficiently. But I agree that for author experience, efficiency of frontpage is a big deal.
↑ comment by adamShimi · 2020-06-04T21:46:01.612Z · LW(p) · GW(p)
Okay, this makes sense. Personally, that's slightly annoying because this means a post I wrote yesterday will probably be lost in the burst of posts pushed to Frontpage (as I assume it would be going to Frontpage), but I also value the tag system, so I can take a hit or two for that.
That being said, it doesn't seem sustainable for you: the backlog keeps growing, and I assume the delays will too, resulting in posts pushed to Frontpage a long time after they were posted.
Replies from: Benito
↑ comment by Ben Pace (Benito) · 2020-06-04T21:55:40.359Z · LW(p) · GW(p)
I just went through and tagged+frontpaged the 10 outstanding posts.
In general I think it's necessary for at least 95% of posts to be frontpaged-or-not within 24 hours of being published, and I think we can get the median under 12 hours, and potentially much faster. We don't actually track those numbers; maybe we should just put the average time for the past 14 days on the admin UI to help us keep track.
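The targets above are easy to compute once the delays are logged; a sketch with invented delay numbers (in practice these would come from the admin database):

```python
from statistics import median

# Hypothetical time-to-frontpage delays, in hours, for recent posts.
delays_hours = [2, 5, 8, 11, 3, 30, 14, 6, 9, 48, 1, 7, 12, 4]

# Share of posts that got a frontpage decision within 24 hours (target: >= 95%).
within_24h = sum(1 for d in delays_hours if d <= 24) / len(delays_hours)

print(f"frontpaged within 24h: {within_24h:.0%}")
print(f"median delay: {median(delays_hours)} h")  # target: <= 12 h
```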
Replies from: adamShimi
↑ comment by Zack_M_Davis · 2020-06-04T21:24:43.708Z · LW(p) · GW(p)
I was wondering about this, too. (If the implicit Frontpaging queue is "stuck", that gives me an incentive to delay publishing my new post, so that it doesn't have to compete with a big burst of backlogged posts being Frontpaged at the same time.)
comment by SurvivalBias (alex_lw) · 2020-06-24T05:30:38.208Z · LW(p) · GW(p)
Hi! I've been reading LessWrong and Slate Star Codex for years, but until today's events I commented pretty much exclusively on SSC. Hope everything resolves for the better, although personally I'm rather pessimistic.
In any case, I've been wondering for a while: are there any online places for casual discussion a la SSC Open Threads, but more closely tied to LessWrong and the Bay Area rationalist community? Threads like this are one such place obviously, but they seem rare and unpopulated. I've tried to find Facebook groups, but with very limited success. Any recommendations?
Replies from: habryka4
↑ comment by habryka (habryka4) · 2020-06-24T21:14:23.789Z · LW(p) · GW(p)
I think various discussions on LessWrong are probably your best bet. A lot of LessWrong discussion is distributed over a large number of posts and platforms, so things end up less centralized than for SSC stuff, which has benefits and drawbacks.
For an experience similar to an SSC Open Thread, I think the LW Open Threads are your best bet, though they are definitely a lot less active than the SSC ones.
comment by Lucent · 2020-06-12T05:45:49.632Z · LW(p) · GW(p)
I'm sure this phenomenon has a name by now, but I'm struggling to find it. What is it called when requirements are applied to an excess of applicants solely for the purpose of whittling them down to a manageable number, but doing so either filters no better than chance or actively eliminates the ideal candidate?
For example, a job may require a college degree, but its best workers would be those without one. Or an apartment complex is rude to applicants knowing there are an excess, scaring off good tenants in favor of those desperate. Or someone finds exceptional luck securing online dating "matches" and begins to fill their profile with requirements that put off worthwhile mates.
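The job-requirement example can be made concrete with a toy screen (all names and numbers below are invented for illustration): if the proxy is anticorrelated with quality in this particular applicant pool, the filter discards the ideal candidate before anyone even looks at them.

```python
# Toy illustration of a screening filter that eliminates the best candidate.
# 'quality' is what the employer actually cares about; 'has_degree' is the proxy.
candidates = [
    {"name": "A", "quality": 70, "has_degree": True},
    {"name": "B", "quality": 95, "has_degree": False},  # the best worker
    {"name": "C", "quality": 60, "has_degree": True},
    {"name": "D", "quality": 80, "has_degree": False},
    {"name": "E", "quality": 65, "has_degree": True},
]

# Whittle the pool down using the degree requirement.
screened = [c for c in candidates if c["has_degree"]]

best_overall = max(candidates, key=lambda c: c["quality"])
best_screened = max(screened, key=lambda c: c["quality"])

# The screen never sees B, so the best hire available after filtering is worse.
print(best_overall["name"], best_screened["name"])
```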
Replies from: alex_lw, thomas-kwa, NancyLebovitz, filipe-marchesini
↑ comment by SurvivalBias (alex_lw) · 2020-06-24T05:05:53.330Z · LW(p) · GW(p)
I think something like "market inefficiency" might be the word. Disclaimer - I'm not an economist and don't know the precise technical meaning of this term. But roughly speaking, the situations you describe seem to be those where the law of supply and demand is somehow prevented from acting directly on the monetary price, so the non-monetary "price" is increased/decreased instead. In the case of the apartments, they'd probably be happy to increase the price until they've got exactly the right number of applicants, but are kept from doing it by rent control or reputation or something, so they impose non-monetary costs on the applicants. In the case of hiring, they're probably kept from lowering their wages through some combination of: inability to lower wages of existing employees in similar positions, wages not being exactly public anyway, and maybe some psychological expectation where nobody with the required credentials will agree to work for less than X, no matter how good the conditions are (or alternatively, they're genuinely trying to pick the best and failing, in which case it's Goodhart's law). And in the case of the dating market there simply is no universal currency to begin with.
↑ comment by Thomas Kwa (thomas-kwa) · 2020-06-24T02:09:48.460Z · LW(p) · GW(p)
"throwing the baby out with the bathwater"?
↑ comment by NancyLebovitz · 2020-06-25T14:58:23.900Z · LW(p) · GW(p)
Conservation of thought, perhaps. The root problem is having more options than you can handle, probably amplified by bad premises. On the other hand, if you're swamped, when will you have time to improve your premises?
"Conservation of thought" is from an early issue of The New York Review of Science Fiction.
↑ comment by Filipe Marchesini (filipe-marchesini) · 2020-06-13T13:46:28.184Z · LW(p) · GW(p)
I think you are referring to Goodhart's law: in all your examples, a measure used as a proxy for some goal was gamed to the point that the proxy stopped working reliably.
Replies from: ESRogs
↑ comment by ESRogs · 2020-06-17T00:57:11.864Z · LW(p) · GW(p)
Hmm, this seems a little different from Goodhart's law (or at least it's a particular special case that deserves its own name).
This concept, as I understand it, is not about picking the wrong metric to optimize. It's more like picking the wrong metric to satisfice, or putting the bar for satisficing in the wrong place.
comment by habryka (habryka4) · 2020-06-11T00:46:51.748Z · LW(p) · GW(p)
Sorry for the outages today (we had two outages, one around 1:00PM PT, one around 3:30PM PT, with intermittent slow requests in the intervening time). As far as I can tell it was caused by a bot that was crawling particularly expensive pages (pages with tons of comments) at a relatively high rate. We've banned the relevant IP range and everything appears back to normal, though I am still watching the logs and server metrics attentively.
Again, sorry for any inconvenience this caused, and please let us know via Intercom if you run into any further problems.
Replies from: habryka4
↑ comment by habryka (habryka4) · 2020-06-11T00:48:26.503Z · LW(p) · GW(p)
We might still have some problems with comment and PM submissions that I am looking into. Not sure what's causing that.
Replies from: habryka4
↑ comment by habryka (habryka4) · 2020-06-11T02:33:43.788Z · LW(p) · GW(p)
All remaining problems with document submission should be resolved. If you had opted into beta features and had trouble submitting documents in the past few hours, you should be able to do that again, and please let me know via Intercom if you can't.
comment by Zack_M_Davis · 2020-06-07T05:05:13.050Z · LW(p) · GW(p)
Comment and post text fields default to "LessWrong Docs [beta]" for me, I assume because I have "Opt into experimental features" checked in my user settings. I wonder if the "Activate Markdown Editor" setting should take precedence?—no one who prefers Markdown over the Draft.js WYSIWYG editor is going to switch because our WYSIWYG editor is just that much better, right? (Why are you guys writing an editor, anyway? Like, it looks fun, but I don't understand why you'd do it other than, "It looks fun!")
Replies from: habryka4, Raemon
↑ comment by habryka (habryka4) · 2020-06-07T20:22:52.581Z · LW(p) · GW(p)
Just to clarify, I wouldn't really say that "we are building our own editor". We are just customizing the CKEditor 5 framework. It is definitely a bunch of work, but we aren't touching any low-level abstractions (and we've spent overall more time than that trying to fix bugs and inconsistencies in the current editor framework we are using, so hopefully it will save us time in the long-run).
↑ comment by Raemon · 2020-06-07T05:13:47.422Z · LW(p) · GW(p)
Ah, yeah that makes sense, just an oversight. I'll try to fix that next week.
We're using CKEditor 5 as a base to build some new features. There are a number of reasons for this (in the immediate future, it means you can finally have tables), but the most important reason (later on down the line) is that it provides Google Docs style collaborative editing. In addition to being a generally nice set of features for coauthors, I'm hoping that it dovetails significantly with the LW 2019 review in December, allowing people to suggest changes to nominated posts.
comment by Richard_Kennaway · 2020-06-22T18:26:50.798Z · LW(p) · GW(p)
Since posting this [LW · GW], I've revised my paper, now called "Unbounded utility and axiomatic foundations", and eliminated all the placeholders marking work still to be done. I believe it's now ready to send off to a journal. If anyone wants to read it, and especially if anyone wants to study it and give feedback, just drop me a message. As a taster, here's the introduction.
Several axiomatisations have been given of preference among actions, which all lead to the conclusion that these preferences are equivalent to numerical comparison of a real-valued function of these actions, called a “utility function”. Among these are those of Ramsey [11], von Neumann and Morgenstern [17], Nash [8], Marschak [7], and Savage [13, 14].
These axiomatisations generally lead also to the conclusion that utilities are bounded. (An exception is the Jeffrey-Bolker system [6, 2], which we shall not consider here.) We argue that this conclusion is unnatural, and that it arises from a defect shared by all of these axiom systems in the way that they handle infinite games. Taking the axioms proposed by Savage, we present a simple modification to the system that approaches infinite games in a more principled manner. All models of Savage’s axioms are models of the revised axioms, but the revised axioms additionally have models with unbounded utility. The arguments to bounded utility based on St. Petersburg-like gambles do not apply to the revised system.
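As a reminder for readers, the St. Petersburg-like gambles mentioned here are the standard pressure toward bounded utility (this is the textbook illustration, not an excerpt from the paper): a gamble paying $2^n$ utility with probability $2^{-n}$ has divergent expected utility when utility is unbounded,

```latex
\mathbb{E}[U] \;=\; \sum_{n=1}^{\infty} 2^{-n}\, U_n
\;=\; \sum_{n=1}^{\infty} 2^{-n}\cdot 2^{n}
\;=\; \sum_{n=1}^{\infty} 1 \;=\; \infty ,
\qquad U_n = 2^{n},
```

so axiom systems that must assign such gambles a finite value are pushed to conclude that $U$ is bounded. The paper's claim is that its revised axioms avoid this argument rather than being refuted by it.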
comment by Sherrinford · 2020-06-11T18:05:35.146Z · LW(p) · GW(p)
Can someone please explain what the following sentence from the terms of use means? "In submitting User-Generated Content to the Website, you agree to waive all moral rights in or to your User-Generated Content across the world, whether you have or have not asserted moral rights."
Replies from: habryka4
↑ comment by habryka (habryka4) · 2020-06-11T19:10:46.517Z · LW(p) · GW(p)
(We inherited the terms of use from the old LessWrong, so while I tried my best to understand them, I don't have as deep an understanding as I wish I had. It seemed more important to keep the terms of use consistent than to rewrite everything, to maintain consistency between the agreements authors made when they contributed to the site at different points in time.)
The Wikipedia article on Moral Rights:
The moral rights include the right of attribution, the right to have a work published anonymously or pseudonymously, and the right to the integrity of the work.[1] The preserving of the integrity of the work allows the author to object to alteration, distortion, or mutilation of the work that is "prejudicial to the author's honor or reputation".[2] Anything else that may detract from the artist's relationship with the work even after it leaves the artist's possession or ownership may bring these moral rights into play. Moral rights are distinct from any economic rights tied to copyrights. Even if an artist has assigned his or her copyright rights to a work to a third party, he or she still maintains the moral rights to the work.[3]
What exactly these rights are seems to differ a bunch from country to country. In the U.S. the protection of these moral rights is pretty limited. From the same article:
Moral rights[17] have had a less robust tradition in the United States. Copyright law in the United States emphasizes protection of financial reward over protection of creative attribution.[5]:xiii The exclusive rights tradition in the United States is inconsistent with the notion of moral rights as it was constituted in the Civil Code tradition stemming from post-Revolutionary France. When the United States acceded to the Berne Convention, it stipulated that the Convention's "moral rights" provisions were addressed sufficiently by other statutes, such as laws covering slander and libel.[5]
Concrete instances where I can imagine this waiving becoming relevant, and where I think this makes sense (though this is just me guessing, I have not discussed this in detail with a lawyer):
- An author leaves a comment on a post that starts with a steelman of an opposing position. We display a truncated version of the comment by default, which now only shows them arguing for a position they find abhorrent. This could potentially violate their moral rights by altering their contribution in a way that violates their honor or reputation.
- An author leaves a comment and another user quotes a subsection of that comment, bolding, or italicizing various sections that they disagree with, and inserting sentences using notation.
comment by Sammy Martin (SDM) · 2020-06-27T15:04:55.280Z · LW(p) · GW(p)
Some good news for the claim that public awareness of x-risk in general should go up after the coronavirus - The Economist's cover story: https://www.economist.com/node/21788546?frsc=dg|e, https://www.economist.com/node/21788589?frsc=dg|e
comment by Thomas Kwa (thomas-kwa) · 2020-06-24T02:08:10.790Z · LW(p) · GW(p)
I've been searching for a LW post for half an hour. I think it was written within the last few months. It's about how to understand beliefs that stronger people have, without simply deferring to them. It was on the front page while I was reading the comments to this post of mine [LW · GW], which is how I found it. Anyone know which post I'm trying to find?
Replies from: TurnTrout
↑ comment by TurnTrout · 2020-06-24T11:52:08.939Z · LW(p) · GW(p)
It's on EAForum [EA · GW], perhaps?
Replies from: thomas-kwa
↑ comment by Thomas Kwa (thomas-kwa) · 2020-06-25T01:37:53.736Z · LW(p) · GW(p)
That was it, thanks!
comment by Sherrinford · 2020-06-09T13:08:40.413Z · LW(p) · GW(p)
Is there a name for the intuition/fallacy that an advanced AI or alien race must also be morally superior?
Replies from: filipe-marchesini, frontier64, MakoYass
↑ comment by Filipe Marchesini (filipe-marchesini) · 2020-06-09T21:32:39.239Z · LW(p) · GW(p)
I think you can refer the person to the orthogonality thesis.
↑ comment by frontier64 · 2020-06-09T18:43:15.550Z · LW(p) · GW(p)
Seems like an appeal to (possibly false) authority. It may not be a fallacy, because there's a demonstrable trend between technological superiority and moral superiority, at least on Earth. Assuming that trend extends to civilizations off Earth? I'm sure there's something fallacious about that; maybe it's too geocentric.
↑ comment by mako yass (MakoYass) · 2020-07-03T04:42:10.507Z · LW(p) · GW(p)
It might generally be Moral Realism (anti-moral-relativism). The notion that morality is some universal objective truth that we gradually uncover more of as we grow wiser. That's how those people usually conceive it.
I sometimes call it anti-orthogonalism.
Replies from: sil-ver
↑ comment by Rafael Harth (sil-ver) · 2020-07-03T08:12:01.466Z · LW(p) · GW(p)
I want to explain my downvoting this post. I think you are attacking a massive strawman by equating moral realism with [disagreeing with the orthogonality thesis].
Moral realism says that moral questions have objective answers. I'm almost certain this is true. The relevant form of the orthogonality thesis says that there exist minds such that intelligence is independent of goals. I'm almost certain this is true.
It does not say that intelligence is orthogonal to goals for all agents. Relevant quote from EY:
I mean, I would potentially object a little bit to the way that Nick Bostrom took the word “orthogonality” for that thesis. I think, for example, that if you have humans and you make the human smarter, this is not orthogonal to the humans’ values. It is certainly possible to have agents such that as they get smarter, what they would report as their utility functions will change. A paperclip maximizer is not one of those agents, but humans are.
And the wiki page Filipe Marchesini [LW · GW] linked to also gets this right:
The orthogonality thesis states that an artificial intelligence can have any combination of intelligence level and goal. [emphasis added]
Replies from: MakoYass
↑ comment by mako yass (MakoYass) · 2020-07-04T02:06:59.437Z · LW(p) · GW(p)
Good comment, but... Have you read Three Worlds Collide? If you were in a situation similar to what it describes, would you still be calling your position moral realism?
I am not out to attack the position that humans fundamentally, generally align with humans. I don't yet agree with it, its claim, "every moral question has a single true answer" might turn out to be a confused paraphrasing of "every war has a victor", but I'm open to the possibility that it's meaningfully true as well.
Replies from: sil-ver
↑ comment by Rafael Harth (sil-ver) · 2020-07-04T11:20:38.862Z · LW(p) · GW(p)
Good comment, but... Have you read Three Worlds Collide? If you were in a situation similar to what it describes, would you still be calling your position moral realism?
Yes and yes. I got very emotional when reading that. I thought rejecting the happiness... surgery, or whatever it was that the advanced alien species prescribed, was blatantly insane.
comment by Sherrinford · 2020-06-09T13:08:09.258Z · LW(p) · GW(p)
Hi, just as a note: https://www.lesswrong.com/allPosts?filter=curated&view=new [? · GW] looks really weird (which you get from googling for curated posts) because the shortform posts are not filtered out.
comment by Zian · 2020-06-27T17:47:12.895Z · LW(p) · GW(p)
This is a question about prioritization and to do lists. I find that my affairs can be sorted into:
- urgent and important (do this or else you will lose your job; the doctor says to do X or you will die a horrible death in Y days)
- This stuff really needs to get done soon but the world won't end (paying bills/dealing with bugs in one's life/fixing chronic health issues)
- Everything else
Due to some of the things in the 2nd category, I have very little time to spend on the latter 2 categories. Therefore, I find that when I have a moment to sit down and try to plan the remaining minutes/hours of the day, I keep thinking of stuff I've forgotten. For instance, at T-0, I will say "I should do A, B, and C". At T+5, I will remember D and say "I should do D, A, and B; there is no time for C today". And on it goes until spending time on A or B seems profoundly foolish and doomed.
The problem is also presented near the end of HPMOR.
Has anyone written about dealing with this problem?
comment by NancyLebovitz · 2020-06-27T14:07:29.488Z · LW(p) · GW(p)
https://www.coindesk.com/blackballed-by-paypal-scientific-paper-pirate-takes-bitcoin-donations
"In 2017, a federal court, the U.S. Southern District Court of New York, sided with Elsevier and ruled Sci-Hub should stop operating and pay $15 million in damages. In a similar lawsuit, the American Chemical Society won a case against Elbakyan and the right to demand another $4.8 million in damages.
In addition, both courts effectively prohibited any U.S. company from facilitating Sci-Hub's work. Elbakyan had to migrate the website from its early .org domain, and the U.S.-based online payment services are no longer an option for her. She can no longer use Cloudflare, a service that protects websites from denial-of-service attacks, she said."
comment by adamShimi · 2020-07-02T22:24:49.311Z · LW(p) · GW(p)
Am I the only one for whom all comments in the Alignment Forum have 0 votes?
Replies from: habryka4
↑ comment by habryka (habryka4) · 2020-07-02T22:29:01.055Z · LW(p) · GW(p)
Nope, looks like a bug, will look into it.
comment by adamShimi · 2020-06-10T11:04:38.064Z · LW(p) · GW(p)
Just got a Roam account; is there any good resource on how to use it? I looked into the help page, but most links don't lead anywhere. Thanks.
Replies from: max-dalton, So-Low Growth
↑ comment by Max Dalton (max-dalton) · 2020-06-12T13:18:03.199Z · LW(p) · GW(p)
I'm planning to try this: https://learn.nateliason.com/. I think that the Roam founder also recommends it.
↑ comment by So-Low Growth · 2020-06-10T11:36:56.950Z · LW(p) · GW(p)
I recently applied for a Roam account. Can I ask when it was you applied and how long before they got back to you?
Replies from: adamShimi
↑ comment by adamShimi · 2020-06-10T12:00:15.134Z · LW(p) · GW(p)
I would say I applied a couple of weeks ago (maybe 3 weeks), and received an email yesterday telling me that accounts were opening again.
Replies from: So-Low Growth
↑ comment by So-Low Growth · 2020-06-10T13:21:33.301Z · LW(p) · GW(p)
Thanks.
comment by waveman · 2020-06-05T05:29:31.017Z · LW(p) · GW(p)
Mod note: Copied over from one of Zvi's Covid posts [LW(p) · GW(p)] to keep the relevant thread on-topic: [LW · GW]
Murdered
I would simply like to point out here 3 things.
1. The definition of homicide from Wikipedia: "A homicide requires only a volitional act by another person that results in death, and thus a homicide may result from accidental, reckless, or negligent acts even if there is no intent to cause harm." Such a finding in an autopsy report does not imply a crime, let alone murder.
2. The autopsy report ordered by his family showed numerous drugs in George Floyd's blood, including a very significant, potentially lethal quantity of fentanyl, a drug often associated with respiratory failure. Floyd was also positive for coronavirus, which is known to impact heart and lung function, and had heart disease and various other relevant medical conditions. Consider the possibility that the causal chain is less clear than it might appear superficially.
3. I see at this point no court finding of murder.
The OP asked for comments only on the COVID pandemic, but I think this inflammatory comment requires some clarification.