Open & Welcome Thread – October 2020
post by Ben Pace (Benito) · 2020-10-01T19:06:45.928Z · LW · GW · 54 comments
If it’s worth saying, but not worth its own post, here's a place to put it.
If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited.
If you want to explore the community more, I recommend reading the Library [? · GW], checking recent Curated posts [? · GW], seeing if there are any meetups in your area [? · GW], and checking out the Getting Started [LW · GW] section of the LessWrong FAQ [LW · GW]. If you want to orient to the content on the site, you can also check out the new Concepts section [? · GW].
The Open Thread tag is here [? · GW].
54 comments, sorted by top scores.
comment by Elizabeth (pktechgirl) · 2020-10-23T00:41:04.811Z · LW(p) · GW(p)
I'm Elizabeth. You may remember me from such series as Epistemic Spot Checks and the LessWrong Covid Effort [LW · GW], but for the last year my main focus has been developing a method for Knowledge Bootstrapping: going from 0 to 1 in an unfamiliar field without undue deference to credentialism. I'm at the stage where I have a system that works well for me, and I've gotten feedback from a few other people about what works and doesn't work for them, but there's a long way to go. A lot of my knowledge is implicit and not explained on the page, plus I am only one person; what works for me will not translate perfectly for every human. So I'm looking for test subjects.
One particular part of my method is breaking down one large question into many smaller questions. This serves several purposes: it forces you to clarify what you actually care about, and it makes it more obvious what information is relevant. I describe this process and the reasoning behind it here, but not very well. I'm looking for test subjects who have a research question and would like to practice breaking it down into smaller questions, with the goal of refining the technique and my teaching of it.
What This Looks Like
- Come up with a question you might like to research.
- You book a phone call with me via Calendly, or email me at elizabeth -at- acesounderglass.com to set up a time.
- We discuss your question in an attempt to break it down into smaller parts.
- I sure hope some people actually go off and research the new questions, but there's no commitment required to do so.
What Are the Expected Outcomes?
- You will have a better understanding of what you actually want to know and will be better positioned to find answers.
- You will be better able to break down your next research question, without me.
- I will make some of my metis on breaking down questions more explicit.
- I will become better at teaching the technique of breaking down questions.
- I will learn techniques from you that I couldn't have learned on my own.
comment by Wei Dai (Wei_Dai) · 2020-10-02T20:14:16.701Z · LW(p) · GW(p)
Watching cancel culture go after rationalists/EA, I feel like one of the commentators on the Known Net watching the Blight chase after Out of Band II. Also, Transcend = academia, Beyond = corporations/journalism/rest of intellectual world, Slow Zone = ...
(For those who are out of the loop on this, see https://www.facebook.com/bshlgrs/posts/10220701880351636 for the latest development.)
Replies from: Wei_Dai, ryan_b
↑ comment by Wei Dai (Wei_Dai) · 2020-10-03T16:17:50.435Z · LW(p) · GW(p)
Except it's like, the Blight has already taken over all of the Transcend and almost all of the Beyond, even a part of the ship itself and some of its crew members, and many in the crew are still saying "I'm not very worried." Or [EA(p) · GW(p)] "If worst comes to worst, we can always jump ship!"
Replies from: Alexei
↑ comment by Alexei · 2020-10-04T14:08:05.720Z · LW(p) · GW(p)
If you think we should be more worried, I’d appreciate a more detailed post. This is all new to me.
Replies from: Wei_Dai, SDM
↑ comment by Wei Dai (Wei_Dai) · 2020-10-15T16:04:29.269Z · LW(p) · GW(p)
Writing a detailed post is too costly and risky for me right now. One of my grandparents was confined in a makeshift prison for ten years during the Cultural Revolution and died shortly after, for something that would normally be considered totally innocent that he did years earlier. None of them saw that coming, so I'm going to play it on the safe side and try to avoid saying things that could be used to "cancel" me or worse. But there are plenty of articles on the Internet you can find by doing some searches. If none of them convinces you how serious the problem is, PM me and I'll send you some links.
Replies from: Benito
↑ comment by Ben Pace (Benito) · 2020-10-15T16:12:10.905Z · LW(p) · GW(p)
I do expect to be able to vacate a given country in a timely manner if it seems to be falling into a Cultural Revolution.
Replies from: Wei_Dai
↑ comment by Wei Dai (Wei_Dai) · 2020-10-15T16:20:04.107Z · LW(p) · GW(p)
My grandparents on both sides of my family seriously considered leaving China (to the point of making concrete preparations), but didn't because things didn't seem that bad, until it was finally too late.
Replies from: Benito
↑ comment by Ben Pace (Benito) · 2020-10-15T18:01:50.734Z · LW(p) · GW(p)
That's pretty scary.
I expect I have much more flexibility than your family did – I have no dependents, I have no property and few belongings to tie me down, and I expect air travel is much more readily available to me in the present day. I also expect to notice it faster than the supermajority of people (not disanalogous to how I was prepped for Covid like a month before everyone else).
↑ comment by Sammy Martin (SDM) · 2020-10-30T13:45:47.316Z · LW(p) · GW(p)
I don't know Wei Dai's specific reasons for having such a high level of concern, but I suspect that they are similar to the arguments given by the historian Niall Ferguson in this debate with Yascha Mounk on how dangerous 'cancel culture' is. Ferguson likes to try to forecast social and cultural trends years in advance, and he thinks that he sees a Cultural Revolution-like trend growing unchecked.
Ferguson doesn't give an upper bound on how bad he thinks things could get, but he thinks 'worse than McCarthyism' is reasonable to expect over the next few years, because he thinks that 'cancel culture' has more broad cultural support and might also gain hard power in institutions.
Now - I am more willing to credit such worries than I was a year ago, but there's a vast gulf between a trend being concerning and expecting another Cultural Revolution. It feels too much like a direct linear extrapolation fallacy - 'things have become worse over the last year, imagine if that keeps on happening for the next six years!' I wasn't expecting a lot of what happened over the last eight months in the US on the 'cancel culture' side, but I think that a huge amount of this is due to a temporary, Trump- and Covid- and Recession-related heating up of the political discourse, not a durable shift in soft power or people's opinions. I think the opinion polls back this up. If I'm right that this will all cool down, we'll know in another year or so.
I also think that Yascha's arguments in that debate - about the need for relatively unchecked hard institutional power to get a Cultural Revolution-like outcome - are really worth considering. I don't see any realistic path to that level of hard governmental power, at enough levels, being held by any group in the US.
comment by WrongPlanet · 2020-10-17T07:59:39.602Z · LW(p) · GW(p)
Hello there! :)
For about a month I have been reading lots on LessWrong and related websites/blogs. Now I finally want to become active and maybe, if it ever comes to that, contribute in some way in the future... But first I will introduce myself:
I am 17 years old, and currently I want to dedicate my future to AI stuff and to making the world a better place. I found LessWrong by accident and was delighted to find out that there is such a huge community around rationality, effective altruism, and other things. The way people treat each other here is rare to find elsewhere (-> awesome community -> You - yes you - are a great person). Also, many users of LessWrong have awesome blogs themselves or are fans of other awesome stuff. As a result, I have learned so much and it has changed my life in many ways :)! At first I was hoping to contribute with essays I had written for myself - about systematic approaches to improving one's life, biases, effective altruism, etc. - before finding out about LessWrong. After starting to read the Sequences and other posts, I decided that I should probably read more, write completely new essays, and throw my old ones away... :'P So just writing comments, maybe short posts, and getting lots of feedback seems like the best way to go right now... If you have tips for improving my writing style, that would be awesome!
I want to further improve myself, update my beliefs, integrate Bayesian theory etc. into my thinking, and help where I can -> become an overall more rational being. Therefore, feedback is greatly appreciated :D
Any questions or recommendations etc.?
Thanks for taking the time to read this :)
Replies from: scarcegreengrass
↑ comment by scarcegreengrass · 2020-11-02T23:13:23.117Z · LW(p) · GW(p)
Welcome! Discovering the rationalsphere is very exciting, isn't it? I admire your passion for self improvement.
I don't know if I have advice that isn't obvious. Read whoever has unfamiliar ideas. I learned a lot from reading Robin Hanson and Paul Christiano.
As needed, journal or otherwise speak to yourself.
Be wary of the false impression that your efforts have become ruined. Sometimes I encounter a disrespectful person or a shocking philosophical argument that makes me feel like giving up on a wide swathe of my life. I doubt giving up is appropriate in these disheartening circumstances.
Seek to develop friendships with people you can have great conversations with.
Speak to rationalists like you would speak to yourself, and speak tactfully to everyone else.
That's the advice I would give to a version of myself in your situation. Have fun!
Replies from: WrongPlanet
↑ comment by WrongPlanet · 2020-11-03T06:27:02.404Z · LW(p) · GW(p)
Thank you very much for your motivation and advice!
I will follow your suggestions and read the two authors you mentioned.
comment by Ben Pace (Benito) · 2020-10-01T23:35:47.559Z · LW(p) · GW(p)
I propose a norm change for using Google Docs.
I'm not a fan of the way Google Docs encourages people to only give in-line comments, rather than comments on the document as a whole. It's a format that lends itself to nitpicks over substance.
So by default, if I'm given comment access, I add a new heading at the bottom, then freely write my thoughts on the doc overall. This often fills up 0.5-1.5 pages. I leave a comment saying "delete this if you didn't want it", but 100% of the time so far the person who shared it with me has liked that I did it.
At a minimum, I'd suggest writing an overall comment inline at the top by default. Otherwise people just get lots of things that feel like nitpicks, and don't get people's overall impressions.
Replies from: Dagon
↑ comment by Dagon · 2020-10-02T15:58:03.275Z · LW(p) · GW(p)
Commenting on docs at my workplace has settled into the norm of putting general comments about the doc on the title or introductory sentence, general comments about a section on its header, and comments about missing topics on the TOC. And, of course, comments about specific items in the text go on that item.
It does put a lot of comments too early, but it also doesn't bury them after all the nitpicks. The technique to resolve that problem is to read (or at least skim) the entire doc before looking at any comments.
Replies from: Benito
↑ comment by Ben Pace (Benito) · 2020-10-02T18:45:39.108Z · LW(p) · GW(p)
Yeah, that's definitely a better norm than the status quo. The issue I still have with it is that the inline comment box is really tiny (read: thin) and has a max character count below what I want to write more than 50% of the time, which sends signals like "You're not supposed to write that much" and just generally makes it inconvenient. I want (for myself) an unrestricted space to find out what I think and how much I have to say (by babbling a bunch).
Replies from: Dagon
↑ comment by Dagon · 2020-10-02T19:31:07.928Z · LW(p) · GW(p)
There are times (not always, and I don't recommend it generally) when, instead of inline comments, I create my own comment doc, and my comments on the doc in question include links to anchors in that doc. This is especially valuable for early technical documents where there's going to be a lot of iteration, and it's OK for discussion to be spread out a bit, expected to consolidate on the next major revision (which will be a new doc, with the old doc and comments kept for historical purposes only).
comment by smiley314 · 2020-10-04T23:46:03.873Z · LW(p) · GW(p)
Hi everyone! :)
So I'm actually introducing myself now!
I'm a long-time lurker, 21 years old, living in Germany, and I'm currently in… the equivalent of high school (my education path is pretty serpentine – long story). I will hopefully be graduating next spring/summer and then presumably study mathematics.
I actually don't know how I first ended up here – I vaguely remember stumbling across a few articles and then succumbing to link-hopping (resulting in a few dozen open tabs). It feels like ages since I first found out about LessWrong. Also, last December I attended a meetup.
Currently, I am working my way through the Sequences, although I will probably revisit at least some parts later – especially those on probability theory.
It's not like I don't have at least basic knowledge of statistics and probability – I have math as AP, for Gauß's sake! – it's just (at least for now) not my particular cup of tea (I tend to like abstract stuff more – algebra, number theory, set theory; I've dipped my toes in category theory – whereas probability and statistics seem like so much calculating), and I think I should go over this with more time and an intent to practice, instead of (just?) trying to get a complete overview.
I'm not (yet?) used to explicitly calculating how certain I should be about whatever. I'm planning on starting a journal for that particular purpose, and also doing a deep dive... some time... but I don't know when I'll actually get to that, especially considering my already way-too-long to-do list. Also, perhaps I should learn to estimate probabilities (and bet) properly first, I don't know?
My interests tend to be quite broad, but very much aligned with this site and its orbit - this might partially be a chicken-and-egg problem; this site might actually have influenced me quite a bit.
I originally planned to finish the Sequences before writing anything at all here, but that might take a lot of time, and I'm starting to think that it might be a bad idea...
On the other hand, I don't know exactly what I could contribute just yet, except for maybe one other comment. But I think that will be alright in time - one comment at a time.
...I don't quite know what to write, but feel free to ask me what you would like to know!
Best regards,
smiley314
Replies from: Benito
↑ comment by Ben Pace (Benito) · 2020-10-05T00:33:07.169Z · LW(p) · GW(p)
Welcome :) Best of skill with reading The Sequences. And yeah, in the meantime do keep your eye out for anywhere that you can contribute something with a comment. See you around.
comment by Ben Pace (Benito) · 2020-10-08T23:00:43.981Z · LW(p) · GW(p)
Motherflipping new Bostrom paper! (With Carl Shulman!) Hurrah! Time for some new concepts!
https://www.nickbostrom.com/papers/monster.pdf
I'll make a post with an overview shortly, so people can share their favorite quotes in the comments.
comment by John_Maxwell (John_Maxwell_IV) · 2020-10-07T07:59:01.358Z · LW(p) · GW(p)
There hasn't been an LW survey since 2017 [LW · GW]. That's the longest we've ever gone without a survey since the first survey. Are people missing the surveys? What is the right interval to do them on, if any?
comment by Vermillion (VermillionStuka) · 2020-10-11T02:30:08.580Z · LW(p) · GW(p)
Replies from: mingyuan, Benito
↑ comment by mingyuan · 2020-10-11T06:10:22.245Z · LW(p) · GW(p)
Entering the rationalist community felt like coming back to a home I never knew I had
That was exactly my experience as well! Welcome to the community :)
Replies from: VermillionStuka
↑ comment by Vermillion (VermillionStuka) · 2020-10-11T12:20:31.362Z · LW(p) · GW(p)
↑ comment by Ben Pace (Benito) · 2020-10-11T02:34:05.499Z · LW(p) · GW(p)
Welcome! Your story is pretty cool to hear. Look forward to seeing you around more. By the way, I like your comments, and thought they were all positive contributions (I had upvoted one of them) :)
Replies from: VermillionStuka
↑ comment by Vermillion (VermillionStuka) · 2020-10-11T04:01:33.006Z · LW(p) · GW(p)
comment by Wei Dai (Wei_Dai) · 2020-10-19T18:44:04.828Z · LW(p) · GW(p)
There's a time-sensitive trading opportunity (probably lasting a few days): shorting HTZ, because it's experiencing an irrational spike in price. See https://seekingalpha.com/article/4379637-over-1-billion-hertz-shares-traded-on-friday-because-of-bankruptcy-court-filings for details. Please only do this if you know what you're doing, though - for example, you understand that HTZ could spike up even more, what the consequences of that would be if it happened, and how to hedge against it. Also, I'm not an investment advisor and this is not investment advice.
comment by Ben Pace (Benito) · 2020-10-03T05:34:36.785Z · LW(p) · GW(p)
I recently had opportunity to introspect on my conflicted feelings about Calendly. I wrote the following down, which has helped me resolve my feelings quite a bit.
My feelings are both that it's a great app and yet sometimes I'm irritated when the other person sends me theirs.
If I introspect on the times when I feel the irritation, I notice I feel like they are shirking some work. Previously we were working together to have a meeting, but now I'm doing the work to have a meeting with the other person, where it's my job and not theirs to make it happen.
I think I expect some of the following asymmetries in responsibility to happen with a much higher frequency than with old-fashioned coordination:
- I will book a time, then in a few days they will tell me actually the time doesn't work for them and I should pick again (this is a world where I had made plans around the meeting time and they hadn't)
- I will book a time, and just before the meeting they will email to say they hadn't realized when I'd booked it and actually they can't make it and need to reschedule, and they will feel this is Calendly's fault far more than theirs.
- I will book a time, and they won't show up or will show up late, and they will feel that they don't hold much responsibility for this, thinking of it as a 'technical failure' on the part of Calendly.
All of these are quite irritating, and it feels like I'm the one holding my schedule open for them, right up until it turns out they can't make it.
I think I might be happier if there were an explicit and expected part of the process where the other person confirms they are aware of the meeting and will show up - either by emailing to say "I'll see you at <time>!", or by having to click "going" on the calendar invitation so that I get a notification saying "They confirmed" - and only then would it be 'officially happening'.
Having written this out, I may start pinging people for confirmation after filling out their calendlys...
Replies from: David Hornbein
↑ comment by David Hornbein · 2020-10-05T02:50:19.479Z · LW(p) · GW(p)
Huh, in the past I've used Calendly pretty heavily from both ends, and never experienced anything like the issues you describe.
Having written this out, I may start pinging people for confirmation after filling out their calendlys...
Probably a good idea. Still, I suspect this will only partially solve your problem, considering what seems to be the attitude of the people you're scheduling with.
comment by Mary Chernyshenko (mary-chernyshenko) · 2020-10-03T06:52:04.604Z · LW(p) · GW(p)
We get to argue about published research but not the kind that was so bad it remained unpublished. And it certainly exists. I wish there were an "Editors Anonymous" platform to just rant and vent about why they don't accept manuscripts. Or how they wish they could decline ones that might meet the technical requirements but really just suck "epistemically".
So that one could unburden one's soul without compromising one's journal.
comment by habryka (habryka4) · 2020-10-21T22:29:46.970Z · LW(p) · GW(p)
Really sorry for the downtime during parts of the last hour. The cause was a straightforward merge conflict that wasn't highlighted by Git, and in a particularly sad instance of carelessness we merged even though our tests told us the build wasn't working. This was basically a full process failure on our side: we didn't pay attention to the systems we put in place to prevent exactly this, and pressed big red override buttons, because a previous problem with our CI system had forced us to press override buttons frequently enough that we had stopped thinking of them as something to be really careful with. Again, sorry for that.
I already made some of the most obvious changes to our process to prevent this from happening again, and am working on a bunch of larger changes that will make stuff in this reference class much less bad.
comment by Ben Pace (Benito) · 2020-10-17T23:35:44.527Z · LW(p) · GW(p)
Another (mild) norm proposal: I am against comments that give a line-by-line reply to the comment they're replying to.
I think it reliably makes a conversation very high-effort and in-the-weeds, at the cost of talking about big-picture disagreements. It often means there's no part of the comment that communicates directly, saying "this is my response and where I think our overarching disagreement lies"; it just has lots of small pieces.
This is similar to my open thread post about google docs [LW(p) · GW(p)] which was about how inline commenting seems to disincentivize big-picture responses.
It's fine to drop threads in conversations; not everything needs to be addressed, and the big picture is more important in most situations. Writing a flowing paragraph is often much better conversationally than loads of one-line replies to one-liners.
Replies from: Pongo
comment by Kaj_Sotala · 2020-10-13T08:17:52.167Z · LW(p) · GW(p)
This bit of popular science news crossed my feed: "A new interpretation of quantum mechanics suggests reality does not depend on the measurer", about this paper, which claims to eliminate any need for observer-dependence in QM. I guess this would apply even if one were skeptical of many-worlds, but I don't know enough to evaluate the paper or its significance at all; is anyone able to tell whether the paper is actually meaningful?
comment by Wei Dai (Wei_Dai) · 2020-10-31T18:04:39.710Z · LW(p) · GW(p)
By "planting flags" on various potentially important and/or influential ideas (e.g., cryptocurrency, UDT, human safety problems), I seem to have done well for myself in terms of maximizing the chances of gaining a place [LW(p) · GW(p)] in the history of ideas. Unfortunately, I've recently come to dread [LW(p) · GW(p)] more than welcome the attention of future historians. Be careful what you wish for, I guess.
comment by Wei Dai (Wei_Dai) · 2020-10-31T17:32:27.430Z · LW(p) · GW(p)
Free speech norms can only last if "fight hate speech with more speech" is actually an effective way to fight hate speech (and other kinds of harmful speech). Rather than being some kind of universal human constant, that's actually only true in special circumstances, when certain social and technological conditions come together in a perfect storm. That confluence of conditions has now gone away, due in part to technological change, which is why the most recent free speech era in Western civilization is rapidly drawing to an end. Unfortunately, its social scientists failed to appreciate the precious, rare opportunity for what it was, and didn't use it to make enough progress on important social scientific questions that will once again become taboo (or already have become taboo) to talk about.
Replies from: aaro-salosensaari
↑ comment by Aaro Salosensaari (aaro-salosensaari) · 2020-10-31T23:49:50.698Z · LW(p) · GW(p)
I noticed this comment on the main page and would push back on the sentiment: I don't think there have ever been conditions under which "more speech" was universally agreed to be a better way to fight hate speech (or, more generally, speech deemed harmful) than restrictions, or that there is in general something inevitable about not having free speech in certain times and places because it is simply not workable under certain conditions. (Maybe it isn't workable, but it is kind of useless to speculate about that beforehand, and it is obvious when one certainly does not have such conditions.)
Free speech - in particular, talking about and arguing for free speech - is more of a commitment to a certain set of values (against violence as a response to the dissemination of ideas, etc.), often made in the presence of opposition to those values, and less of something that was empirically deemed the best policy at some past time whose conditions have since, for some reason, been lost. Freedom of speech is not an on-off thing; the debate about free speech seems to have been quite a constant in the West since the idea's conception, while the hot topics change. (When Life of Brian came out, the Pythons found themselves debating its merits with clergymen on TV: the debate can be found on YouTube and feels antiquated to watch.)
Moreover, there is something that bugs me in the claim that under certain technological and social conditions, free speech becomes unworkable. The part about social conditions is difficult, as one could say that the social conditions in places with a free press were the necessary social conditions for a free press, and places without one lacked the necessary conditions, but that feels a bit too circuitous.
If we allow some more leeway and pick an example of a place with some degree of freedom of speech, one can quite often point to places in the same historical period with broadly similar conditions where the free speech norms were nevertheless absent. Sometimes it is the same place just a bit later, where free speech had broken down one way or another, maybe in spectacular fashion. (The history of France provides many fascinating examples of this.) Obviously there is a difference in social conditions between such a pair of societies, but are the differences inevitable, in the sense that resistance to the great tidal wave of history is futile, or is the difference just that a couple more individuals didn't put in effort or make the right move at some crucial point?
Anyway, for a specific example of how Enlightenment ideals about relations in society (freedom of speech being one of them) were argued for precisely because society was very much unlike those ideals, I'd like to highlight Voltaire's Treatise on Toleration. (While the Treatise is not exactly about free speech and more about religious and political toleration, and also good judicial procedure, I refer to it because I am familiar with it, and in any case it is close enough. Anyhow:)
The treatise deals with Voltaire's indignation at a case of cruel injustice: the brutal murder of one Jean Calas, a Protestant, committed by the local authorities in Toulouse with the cheers of the local populace (I recommend reading it for the details; it is a fairly short text). Voltaire presents various arguments and rhetoric to convince the reader that what happened was morally wrong, and also that the primary reason it happened, religious bigotry, is not a good or useful thing to have in a society.
Voltaire is one of the Enlightenment-era thinkers most famous as proponents of ideals like religious toleration and free speech. This is not because the France (or the rest of Europe) of his time was very tolerant or had lots of free speech; as he found ample evidence in the case of Jean Calas, it was not. Concerning matters of free speech in general, the French royal government had an active press-censorship bureaucracy. French public life was restricted: there was no formal avenue for political opposition to the establishment. During approximately the same period, Rousseau spent much of his time in exile from various authorities for his writings, which were banned and burned several times. (The hand of censorship was evidently imperfect.) D'Alembert and Diderot faced various troubles and widespread condemnation for their Encyclopedie. France and Europe had intolerance and restricted speech in abundance (though the restrictions were not as effectual and totalitarian as in some parts of Europe in the 20th century). At the same time, Britain generally had wider freedom of the press; many hoped to change the conditions of France to be more like the British ones, which was one reason why the Revolution played out the way it did.
However, the two reasons I launched into this somewhat longwinded tangent are these: First, to me it is quite unclear what to make of the "big" societal or technological forces in France in Voltaire's time, and there we have the benefit of retrospection. Today's future is more difficult to judge.
Second, people who wrote and acted in defence of Enlightenment values such as free speech did so because they felt they had not only an opportunity but a reason to defend such ideals. It was often unclear how the dice would fall, both for them personally in the immediate future and on the grand societal or historical scale later on. Sometimes they were successful in increasing the amount of liberty in the world.
(Phew, well, that was a bit of a mouthful, and I think I got a bit too excited and may have lost my train of thought.)
comment by Taleuntum · 2020-10-23T12:27:31.130Z · LW(p) · GW(p)
Replication Markets is going to start a new project focusing on COVID studies. Details:
- Surveys open on October 28, 2020.
- Markets open on November 11, 2020.
- A total of $14,520 in prizes will be awarded.
- The contest will forecast (1) publication, (2) citation, (3) replication, and (4) usefulness for the top 400 claims from COVID-19 research, using both surveys and markets.
comment by PeterL (peter-loksa) · 2020-10-11T05:26:32.368Z · LW(p) · GW(p)
Hello, I would like to ask whether you think that some ideas can be dangerous to discuss publicly even when you are honest about them, even when you are doing your best to be logical/rational, even when you wish nothing bad on other people/beings, and even when you are open to discussing them, in the sense of being prepared for their rejection for a justified reason.
At this stage, I will just tell you that I would like to discuss a specific moral issue, which might be original, and that is why I am skeptical and feel a little insecure about discussing it publicly.
Replies from: lsusr, Benito, seed
↑ comment by Ben Pace (Benito) · 2020-10-11T05:30:58.036Z · LW(p) · GW(p)
There is information that's dangerous to share. Private data, like your passwords. Information that can be used to do damage, like how to build an atom bomb or synthesize smallpox. And there will be more ideas that are damaging in the future.
(That said I don't expect your idea is one of these.)
Replies from: peter-loksa
↑ comment by PeterL (peter-loksa) · 2020-10-11T15:29:02.383Z · LW(p) · GW(p)
I would like to ask you whether there are some criteria (I am fine even with subjective ones) according to which you, experienced rationalists, would accept/consider some metaethics despite humankind's very bad experience with them.
I expect answers like: convincing; convincing after a very careful attempt to find its flaws; logical; convincing after a very careful attempt by 10 experienced rationalists to find its flaws; after careful questioning; useful; harmless; etc.
comment by a gently pricked vein (strangepoop) · 2020-10-04T04:45:11.028Z · LW(p) · GW(p)
- I noticed a thing that might hinder the goals of longevity as described here [LW · GW] ("build on what was already said previously"): it feels like a huge cost to add a tiny/incremental comment to something because of all the zero-sum attention games it participates in.
It would be nice to do a silent comment, which:
- Doesn't show up in Recent Comments
- Collapsed by default
- (less confident) Doesn't show up in author's notifications (unless "Notify on Silent" is enabled in personal settings)
- (kinda weird) Comment gets appended automatically to previous comment (if yours) in a nice, standard format.
- The operating metaphor is to allow the equivalent of bulleted lists to span across time, which I suppose would mostly be replies to yourself.
- It feels strange to keep editing one comment, and too silent. Also disrupts flow for readers.
- I don't see often that people have added several comments (via edit or otherwise) across months, or even days. Yet people seem to use a lot of nested lists here. Hard to believe that those list-erious ways go away if spread out in time.
↑ comment by a gently pricked vein (strangepoop) · 2020-10-04T05:14:08.063Z · LW(p) · GW(p)
As a meta-example, even to this I want to add:
- There's this other economy to keep in mind: that of readers scrolling past walls of text. Often, I can and want to make what I'm saying cater to multiple attention spans (a la Arbital?), and collapsed-by-default comments allow the reader to explore at will.
- A strange worry (that may not be true for other people) is that attempting to contribute to someone else's long thread or list feels a little uncomfortable/rude without reading it all carefully. With collapsed-by-default, you could set up norms that it's okay to reply without engaging deeply.
- It would be nice to have collapsing as part of the formatting
- With this I already feel like I'm setting up a large-ish personal garden that would inhibit people from engaging in this conversation even if they want to, because there's so much going on.
- And I can't edit this into my previous comment without cluttering it.
- There's obviously no need for having norms of "talking too much" when it's decoupled from the rest of the control system
- I do remember Eliezer saying in a small comment somewhere long ago that "the rule of thumb is to not occupy more than three places on the Recent Comments page" (paraphrased).
↑ comment by John_Maxwell (John_Maxwell_IV) · 2020-10-07T07:55:37.975Z · LW(p) · GW(p)
Why not just have a comment which is a list of bullet points and keep editing it?
comment by xiaolongbao · 2020-10-27T18:02:52.146Z · LW(p) · GW(p)
I'm posting on behalf of my friend, who is an aspiring AI researcher in his early 20s and is looking to live with like-minded individuals. He currently lives in Southern California, but is open to relocating (preferably within the USA, especially California).
Please message jeffreypythonclass+ea@gmail.com if you're interested!
comment by algon33 · 2020-10-21T18:02:02.820Z · LW(p) · GW(p)
Somewhat urgent: can anyone recommend a good therapist or psychiatrist for anxiety/depression in the UK? Virtual sessions are probably required. Private is fine. Also, they shouldn't be someone biased towards rationalist types; the person I'm thinking of has nearly no knowledge of these ideas.
Other recommendations that seem relevant are also welcome.
comment by Vermillion (VermillionStuka) · 2020-10-12T03:35:32.212Z · LW(p) · GW(p)
I just had a question about post formatting: how do I turn a link into text, like this example? Thanks.
Replies from: Benito
↑ comment by Ben Pace (Benito) · 2020-10-12T03:37:26.134Z · LW(p) · GW(p)
Highlight the text where you want the link to be, and the editor menu should appear. Then click the link icon (looks like a rotated oval with a straight line in the centre), and enter the link.
Replies from: habryka4, VermillionStuka
↑ comment by habryka (habryka4) · 2020-10-12T05:11:00.041Z · LW(p) · GW(p)
Alternatively, press CMD+K