Open & Welcome Thread – November 2020
post by habryka (habryka4) · 2020-11-03T20:46:12.745Z · LW · GW · 61 comments
If it’s worth saying, but not worth its own post, here's a place to put it.
If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited.
If you want to explore the community more, I recommend reading the Library, [? · GW] checking recent Curated posts [? · GW], seeing if there are any meetups in your area [? · GW], and checking out the Getting Started [LW · GW] section of the LessWrong FAQ [LW · GW]. If you want to orient to the content on the site, you can also check out the new Concepts section [? · GW].
The Open Thread tag is here [? · GW].
61 comments
Comments sorted by top scores.
comment by Kaj_Sotala · 2020-11-03T21:46:01.473Z · LW(p) · GW(p)
The Pope asks people to pray for AI safety this month:
Each year, the Holy Father asks for our prayers for a specific intention each month. You are invited to answer the Holy Father's request and to join with many people worldwide in praying for this intention each month. [...]
November
Artificial Intelligence
We pray that the progress of robotics and artificial intelligence may always serve humankind.
comment by AnnaSalamon · 2020-11-04T05:04:30.683Z · LW(p) · GW(p)
"And the explosive mood had rapidly faded into a collective sentiment which might perhaps have been described by the phrase: Give us a break!
Blaise Zabini had shot himself in the name of Sunshine, and the final score had been 254 to 254 to 254."
comment by Adam Zerner (adamzerner) · 2020-11-18T22:35:21.552Z · LW(p) · GW(p)
I have a story to tell that I have to get off my chest.
Two nights ago I was falling asleep and I heard faint screaming. I wasn't sure where it was coming from, but I decided to look out my window. Across the street is the parking lot of a different apartment complex. There was a car with a woman on top of the windshield. She was screaming and periodically would punch and kick the windshield of the car.
Then I saw the car accelerate and decelerate around the parking lot in an attempt to get her off, like a bull trying to get a bull rider off. I was appalled. I immediately called the police and gave a quick yell out the window saying "Hey, stop!" or something. She very well could have fallen off, been run over, and died. At the end of it, my girlfriend was also screaming out the window to tell her to get off and stop. She was able to get the girl to start talking, then the man inside the car started talking. Turns out she cheated on him, the man wanted to break up and get away, and she didn't want to let their relationship end. When my girlfriend said we were calling the police, the woman finally got off the car, the man opened the door, let her in, and they drove away.
The next day I called the local police department because after having hung up with 911 that night, I learned that the man in the car's name was Alex. I figured that providing that information to the police might be useful. Hopefully there'd only be one male resident named Alex in the apartment complex who is living with a girl and owns a hatchback car.
The lady I talked to sounded annoyed that I called and thought that this was nothing. When I described what happened she immediately blamed the woman, saying that she shouldn't have been on top of the car kicking and banging in the first place. Of course. But two wrongs don't make a right. That doesn't give the man the right to recklessly endanger her life like that.
Then she got to asking me what my new information was. I said how I learned that the guy's name is Alex and that maybe they could track him down. She said there's nothing to track down and to call back if I see anything violent happen. But she also asked me what my thinking is. So I explained that a) what the man did seems illegal and probably worth pursuing criminally, and b) he might be a danger. The lady looked at me like I had two heads. She asked me: did you see a gun? A knife? Punches? Shoving? Anything violent? No, but the guy could have run her over or seriously injured her. Seems like a dangerous person. Domestic violence seems very plausible. Seems worth following up on.
Here's where we get into rationality a little bit. After hearing me out, she explained that all of this is just my opinion. I don't have any "facts", like seeing a gun or a punch being thrown. To her, "facts" like those would be a sign that the man was dangerous and maybe worth following up on, but here it's merely my opinion that a guy who tries to whip someone off the roof of his car is dangerous.
I am seeing three (related) errors in her logic. The first is having an arbitrary cutoff for what "counts" as evidence. As explained in Scientific Evidence, Legal Evidence, Rational Evidence [LW · GW], rational evidence is anything that "moves the needle", but various social institutions may have arbitrary cutoffs for what "counts", for whatever reasons, like how science often has the cutoff of p < 0.05. It's possible that there is a good reason why she is drawing the cutoff here. It seems highly unlikely though. But more to the point, she seemed to be saying that because it is merely my opinion, it doesn't even count as rational evidence. That "I have nothing". I would have thought that someone working in law enforcement would be better at assessing danger than this.
But this brings me to the second error: belief in belief [LW · GW]. I don't think that she actually would predict this man in the car to be an average threat. I don't think she would be happy to find herself crossing paths with him in a back alley. Yet she professed as if all of this is merely my opinion which I'm not basing on any "hard facts" and thus there's no reason to suspect anything of this man.
Third error: fallacy of grey [? · GW]. Even if you want to draw an arbitrary line between opinion and fact, all opinions aren't created equally. Some shades of grey are darker than others. Some opinions are stronger indicators than others.
It's sad because there really does seem to be a practical path forward here. Maybe there were cameras that captured it. Maybe there were other witnesses. I'd imagine that there would be. They were screaming pretty loudly, the car was screeching, and there were a lot of other apartments in the vicinity. Most of which were closer than mine. Mine was across the street and my window was closed. And even if you don't want to prosecute the man for some sort of reckless endangerment it seems worth following up with the woman to see if there is any domestic abuse going on and if she feels safe.
Maybe this just doesn't relate to me and so I should stop talking about it? Maybe. Or maybe the police officer's rationality is my business [LW · GW], given that I am a member of the community that she is policing and trying to keep safe. It's certainly the business of the girl who could have been run over.
Another sad thought: there is a security patrol that is usually stationed maybe 200 feet away from where this all happened. I couldn't see if they were out there from my window, but last night I was out for a bike ride and saw a security car stationed in that usual spot. I told him what happened the previous night and asked if he thought there might have been someone stationed there. He said there probably was, because they're supposed to be out all night, but that they wouldn't have been able to do anything. They are contracted by some factory nearby, and he said that if he left his post for something unrelated to the factory like that, he'd be fired on the spot. These experiences have all shifted my beliefs about just how inflexible and inhumane bureaucracies can be.
Replies from: Gurkenglas, maxwell-peterson, Zian
↑ comment by Gurkenglas · 2020-11-22T05:57:52.501Z · LW(p) · GW(p)
Perhaps the police officer simply thinks that your average person will easily do dangerous things like shaking someone off their car without thinking much of it, but will not take a knife to another's guts. Therefore, the car incident would not mark the man as unusually dangerous.
There is no mathematically canonical way to distinguish between trade and blackmail, between act and omission, between different ways of assigning blame. The world where nobody jumps on cars is as safe as the one where nobody throws people off cars. We decide between them by human intuition, which differs by memetic background.
Replies from: adamzerner
↑ comment by Adam Zerner (adamzerner) · 2020-12-07T04:42:13.009Z · LW(p) · GW(p)
Perhaps the police officer simply thinks that your average person will easily do dangerous things like shaking someone off their car without thinking much of it, but will not take a knife to another's guts. Therefore, the car incident would not mark the man as unusually dangerous.
Perhaps. That is something that I personally would disagree with, but respect that it is the place of the officer to make that judgement.
The bigger issue I see is how the officer distinguished between fact and opinion. She called things like a knife or gun "facts" and the car thing merely my opinion. I worry that by drawing hard lines like these, she isn't giving serious enough consideration to the danger that the guy Alex might pose. E.g. "That's not hard evidence or facts. Therefore I'm going to dismiss it and not think more about it."
↑ comment by Maxwell Peterson (maxwell-peterson) · 2020-11-28T21:43:45.497Z · LW(p) · GW(p)
Great work by you and your girlfriend! It takes courage to intervene in a situation like that, and skill to actually defuse it. Well done.
I don't agree about what you're calling the first error. Her job is to take in statements like yours, and output decisions. She could output "send police to ask questions", or "send a SWAT team now", or "do nothing". She chose a decision you don't agree with, but she had to choose some decision. It's not like she could update the database with "update your prior to be a little more suspicious of Alexes in hatchbacks".
I also don't think it's correct to call it arbitrary in the same way that the p < 0.05 threshold is arbitrary. I don't really know how to say this clearly, but it's like... the p < 0.05 rule is a rule for suspending human thought. Things you want to consider when publishing include: "what's the false negative cost here? false positive cost? How bad would it be to spread this knowledge even if I'm not yet certain the studied effect is real?". The rule "p < 0.05 yes or no" is bad because it throws all those questions away. It is arbitrary, like you say. But it doesn't follow that any questionable decision was made by an arbitrary decision rule. If she thought about the things you said, and decided they didn't merit sending anyone out to follow up, that isn't arbitrary! All it takes to not be arbitrary is some thinking and some weighing of the probabilities and costs (and this process can be quick). You did that and came to one decision. She did that and came to another. That difference... seems to me... is a difference of opinion.
I don't know the actual conversation you had with her, and it sounds like she didn't do a very good job of justifying her decision to you, and possibly said obviously incorrect things, like "you have literally 0 evidence of any sort". But I don't think the step from "I think she was wrong" to "I think her decision rule is arbitrary" is justified. Reading this didn't cause me to make any negative update on police department bureaucracy. (the security company is a different story, if indeed someone was there just watching!)
↑ comment by Adam Zerner (adamzerner) · 2020-12-07T05:05:38.712Z · LW(p) · GW(p)
Great work by you and your girlfriend! It takes courage to intervene in a situation like that, and skill to actually defuse it. Well done.
Thank you :)
But I don't think the step from "I think she was wrong" to "I think her decision rule is arbitrary" is justified.
To be clear, I'm not making that step because I think her output was wrong. It's the way she went about it that made me think that her decision rule was problematic.
She treated things like guns and knives as "fact" and everything else as "opinion". It sounded to me like because she categorized the car thing as an opinion, she was very quick to dismiss it, rather than pausing to consider how dangerous it actually is, and perhaps ask some follow up questions, perhaps regarding how fast and reckless the driving really was. On the other hand, if she had the mindset that it all counts as Bayesian evidence and her job is to try to judge how strong the evidence is, I think she would have spent more effort thinking about it.
I guess that's the crux of it to me: giving things appropriate thought and not dismissing them. Thinking in terms of Bayesian evidence rather than fact vs opinion is a means to that end, but I think there are other means to that end. I even think fact vs opinion could be a means to that end, if you actually think carefully about whether the thing should be considered fact or opinion. But here, I don't think that was happening. It sounded like a snap judgement.
It is possible that hard cutoffs like these are actually for the best. That if you give officers more freedom to make judgements, bad things will happen. However, I'd bet pretty strongly against it [LW(p) · GW(p)]. I don't have any experience in the field so I can't be too, too confident, but I'd imagine that a 911 operator would be presented with highly, highly varied situations and would need to use a good amount of judgement to do the job. If you give that operator a list of things that "count" as fact/evidence, there are surely going to be things that you forget to include, if only because they are too obscure to imagine, such as the situation here with trying to fling someone off the roof of your car like a bull.
Reading this didn't cause me to make any negative update on police department bureaucracy.
I'm curious how you feel about the more punitive aspect of it. I.e. one reason to go after the guy is if you think he is a danger to society, but another reason is because he did something wrong, and wrong things are supposed to be punished, either for punitive reasons or preventative ones. Personally, I updated pretty strongly towards thinking that people often get away with things like this. Previously I would have imagined that they'd come after you for something like this, but now it seems like that usually doesn't happen, because of a) poor judgement and b) lack of resources.
↑ comment by Zian · 2020-11-22T02:17:51.263Z · LW(p) · GW(p)
inflexible
The call taker may be required to follow an algorithm (e.g. https://prioritydispatch.net/resource-library/). This is not to discount all your points; everything you wrote is likely true too.
Finally, it's possible that the high arbitrary cutoff for evidence is a reflection of the agency's priorities and resources.
Replies from: adamzerner
↑ comment by Adam Zerner (adamzerner) · 2020-12-07T04:45:36.037Z · LW(p) · GW(p)
Yeah, maybe a highly structured algorithm + hard cutoffs for what counts as evidence is the way to go. I'd bet pretty confidently against it, but not super confidently because I don't have any experience in that domain.
comment by habryka (habryka4) · 2020-11-13T19:48:00.974Z · LW(p) · GW(p)
I am experimenting with holding office hours where people can talk to me about really anything they want to talk to me about. First iteration, next week Wednesday at noon (PT) at this link:
http://garden.lesswrong.com?code=bSu7&event=habryka-s-office-hours
Some more event description:
Replies from: riceissa, ChristianKl
Come by if you want to talk to me! I am a man of many hats, so many topics are up for discussion. Some topics that seem natural:
+ LessWrong
+ AI Alignment Forum
+ LTFF Grantmaking
+ Survival and Flourishing Fund grantmaking
Some non-institutional topics that seem interesting to discuss:
+ Is any of the stuff around Moral Uncertainty real? I think it's probably all fake, but if you disagree, let's debate!
+ Should people move somewhere else than the Bay? I think they probably shouldn't, but am pretty open to changing my mind, and good arguments are appreciated.
+ Is there any way we can make it more likely that we get access to the vaccine soon and can get back to life? If you have any plans, let me know.
+ What digital infrastructure other than the EA-Forum, LessWrong, and the AI Alignment Forum do you want the LessWrong team to build? Should we revive reciprocity.io?
+ Do you think that we aren't at the hinge of history because of Will's arguments? Debate me, because I disagree, and I would really like someone to defend this position, because I feel confused about what's happening discourse wise.
Almost anything else is also fair game. Feel free to come by and tell me about any books you recently read, or respond to any of the many things I've written in comments and posts over the years.
↑ comment by riceissa · 2020-11-18T23:41:25.219Z · LW(p) · GW(p)
Is any of the stuff around Moral Uncertainty real? I think it’s probably all fake, but if you disagree, let’s debate!
Can you say more about this? I only found this comment [LW(p) · GW(p)] after a quick search.
Replies from: habryka4
↑ comment by habryka (habryka4) · 2020-11-19T00:35:07.409Z · LW(p) · GW(p)
Don't really feel like writing this up in a random comment thread. That's why I proposed it as a topic for a casual chat at my office hours!
Replies from: riceissa
↑ comment by riceissa · 2020-11-19T02:48:48.148Z · LW(p) · GW(p)
Ok. Since visiting your office hours is somewhat costly for me, I was trying to gather more information (about e.g. what kind of moral uncertainty or prior discussion you had in mind, why you decided to capitalize the term, whether this is something I might disagree with you on and might want to discuss further) to make the decision.
More generally, I've attended two LW Zoom events so far, both times because I felt excited about the topics discussed, and both times felt like I didn't learn anything/would have preferred the info to just be a text dump so I could skim and move on. So I am feeling like I should be more confident that I will find an event useful now before attending.
Replies from: habryka4
↑ comment by habryka (habryka4) · 2020-11-19T03:23:15.074Z · LW(p) · GW(p)
Yeah, that's pretty reasonable. We'll see whether I get around to typing up my thoughts around this, but not sure whether I will ever get around to it.
↑ comment by ChristianKl · 2020-11-17T13:03:11.851Z · LW(p) · GW(p)
+ What digital infrastructure other than the EA-Forum, LessWrong, and the AI Alignment Forum do you want the LessWrong team to build? Should we revive reciprocity.io?
To me that sounds like you want to divert resources away from doubling down on scaling up the existing infrastructure.
Replies from: habryka4
↑ comment by habryka (habryka4) · 2020-11-17T18:20:57.582Z · LW(p) · GW(p)
Huh, that's a weird way of phrasing it. Why would it be "divert away"? We've always worked on a bunch of different things, and while LessWrong is obviously our main project, we just work on whatever stuff seems most likely to have the best effect on the world and fits well with our other projects.
Replies from: Raemon
↑ comment by Raemon · 2020-11-17T19:17:53.478Z · LW(p) · GW(p)
I think it's not very obvious how many other projects we work on.
Replies from: habryka4
↑ comment by habryka (habryka4) · 2020-11-17T20:05:18.393Z · LW(p) · GW(p)
I... don't think I understand what this has to do with my comment? I agree that it's not overwhelmingly obvious, but what does that have to do with my comment (or Christian's, for that matter)?
I guess maybe this whole thread just feels kinda confused, since I don't understand what the goal of Christian's comment is.
Replies from: Raemon
↑ comment by Raemon · 2020-11-17T20:47:23.567Z · LW(p) · GW(p)
My read was:
- Christian responds to your comment with "you want to divert resources away from the thing you usually work on" (with possibly an implication that Christian cares a lot about the thing he thinks you usually work on, and doesn't want fewer resources allocated to it.)
- You respond "huh that's a weird way of phrasing it. why would it be 'divert away?' we've always worked on other projects"
- That seemed, to me, to be a weird way of replying, because, like, theory of mind says that ChristianKl doesn't know about all those other projects. If you assume the LessWrong team mostly builds LessWrong, it's quite reasonable to respond to a query about "what stuff should we build other than LW?" with "that sounds like you're diverting resources away from LessWrong". And a more sensible response would have been "ah, yeah I see why you'd think that if you think we only build LessWrong, but actually we do other projects."
- (moreover, I think it actually is plausibly bad that we spread our focus as thinly as we do. I don't think it's an obvious call because the other projects we work on are also important and it's a reasonable high-level-call for us to be "the rationality infrastructure team" rather than "the LessWrong team". But, a priori I do feel a lot more doomy about small teams that spread themselves thin)
↑ comment by ChristianKl · 2020-11-18T20:57:41.605Z · LW(p) · GW(p)
I feel like Raemon got where I was coming from.
I think it makes sense to have multiple installations of the same software, the way the EA-Forum and AI Alignment Forum reuse the code base. Code often has the feature of providing exponential returns, and thus it makes sense to double down on good projects instead of spreading efforts.
comment by Zian · 2020-11-09T06:29:52.289Z · LW(p) · GW(p)
I'm trying out Bayes Theorem with a simple example and getting really strange results.
p(disease A given that a patient has disease B) = p(b|a)p(a) / p(b)
p(disease B given existing diagnosis of disease A) = 0.21
p(A) = 0.07
p(B) = 0.01
I get 1.47 or 147%. I know that the answer can't be >=100% because there are patients with A and not B.
Where am I going wrong?
Replies from: Zack_M_Davis, Alexei
↑ comment by Zack_M_Davis · 2020-11-09T06:59:19.014Z · LW(p) · GW(p)
The problem is that you're lying!
You claim that P(B|A) = 0.21, P(A) = 0.07, and P(B) = 0.01. But that can't actually be true! Because P(B) = P(B|A)P(A) + P(B|¬A)P(¬A), and if what you say is true, then P(B|A)P(A) = (0.21)(0.07) = 0.0147, which is bigger than 0.01. So because P(B|¬A)P(¬A) can't be negative, P(B) also has to be bigger than 0.01. But you said it was 0.01! Stop lying!
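A minimal sketch of the arithmetic, using only the figures as quoted above (illustrative check, nothing beyond the numbers already given):

```python
p_b_given_a = 0.21  # P(B|A): probability of disease B given an existing diagnosis of A
p_a = 0.07          # P(A)
p_b = 0.01          # P(B)

# Naive application of Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
p_a_given_b = p_b_given_a * p_a / p_b
print(p_a_given_b)  # 1.47 -- impossible, a probability can't exceed 1

# Law of total probability: P(B) = P(B|A)P(A) + P(B|~A)P(~A) >= P(B|A)P(A)
print(p_b_given_a * p_a)  # 0.0147 > 0.01, so the three quoted numbers can't all be right
```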
Replies from: Zian
↑ comment by Zian · 2020-11-09T07:29:46.955Z · LW(p) · GW(p)
Wow, that's a bit strongly worded.
I'm going to have to figure out why the journal article gave those figures. Maybe I should send your comment to the authors...
Replies from: Zack_M_Davis
↑ comment by Zack_M_Davis · 2020-11-09T17:52:27.686Z · LW(p) · GW(p)
(I thought the playful intent would be inferred from lying-accusations being incongruous with the genre of math help. Curious what article this was?)
comment by khafra · 2020-11-25T21:41:16.418Z · LW(p) · GW(p)
I built a thing.
UVC lamps deactivate viruses in the air, but harm skin, eyes, and DNA. So I made a short duct out of cardboard, with a 60W UVC corn bulb in a recessed compartment, and put a fan in it.
I plan to run it whenever someone other than my wife and I visits my house.
https://imgur.com/a/QrtAaUz
comment by qyng · 2020-11-13T00:00:01.185Z · LW(p) · GW(p)
Hey LessWrong, new member here. I'm a Psych major from Australia who's still figuring things out. I discovered LW through Effective Altruism through 80,000 Hours through... I've forgotten, but most likely Internet stumbling.
I'd been feeling the itch to write as a way to express and iron out my thoughts, but it's taken me somewhere near a decade to get around to writing anywhere publicly online. So here we are. Looking forward to engaging, discussing and chiming in more with the LW community.
Replies from: Benito
↑ comment by Ben Pace (Benito) · 2020-11-13T00:35:49.540Z · LW(p) · GW(p)
Welcome :) I look forward to reading your thoughts.
In case it helps to know, you can also get started by posting to your shortform before writing full posts if you prefer.
comment by Rafael Harth (sil-ver) · 2020-11-10T20:39:43.896Z · LW(p) · GW(p)
I've just looked at scoring functions for predictions. There's the Brier score, which measures the squared distance from probabilities ($p_i$ with $p_i \in [0,1]$) to outcomes ($o_i$ with $o_i \in \{0,1\}$), i.e., $\sum_i (p_i - o_i)^2$.
(Maybe scaled by $\frac{1}{n}$.) Then there's logarithmic scoring, which sums up the logarithms of the probabilities that came to pass, i.e., $\sum_i \log p'_i$, where $p'_i$ is the probability assigned to the outcome that actually occurred.
Both of these have the property that, if $m$ out of $n$ predictions come true, then the probability that maximizes your score (provided you have to choose the same probability for all predictions) is $m/n$. That's good. However, logarithmic scoring also has the property that, for a prediction that came true, as your probability for that prediction approaches 0, your score approaches $-\infty$. This feels like a property that any system should have; for an event that came true, predicting (1:100) odds is much less bad than (1:1000) odds, which is much less bad than (1:10000) odds and so forth. The penalty shouldn't be bounded.
Brier score has bounded penalties. For an event that came true, predictions of (say) $0.01$ and $0.0001$ receive almost identical scores. This seems deeply philosophically wrong. Why is anyone using Brier scoring? Do people disagree with the intuition that penalties should be unbounded?
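A minimal sketch of the contrast, with made-up probabilities rather than anyone's actual forecasts:

```python
import math

def brier_penalty(p, outcome):
    """Squared distance between the predicted probability and the 0/1 outcome (lower is better)."""
    return (p - outcome) ** 2

def log_score(p, outcome):
    """Log of the probability assigned to whatever actually happened (higher is better)."""
    return math.log(p if outcome == 1 else 1 - p)

# An event that came true (outcome = 1), predicted with increasing confidence that it wouldn't:
for p in (0.1, 0.01, 0.001, 0.0001):
    print(p, round(brier_penalty(p, 1), 4), round(log_score(p, 1), 2))
# The Brier penalty saturates near 1, while the log score keeps falling toward -infinity.
```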
Replies from: habryka4, Radamantis
↑ comment by habryka (habryka4) · 2020-11-10T21:17:49.894Z · LW(p) · GW(p)
Yeah, I also don't like Brier scores. My guess is they are better at allowing people to pretend that brier scores on different sets of forecasts are meaningfully comparable (producing meaningless sentences like "superforecasters generally have a brier score around 0.35"), whereas the log scoring rule only ever loses you points, so it's more clear that it really isn't comparable between different question sets.
↑ comment by NunoSempere (Radamantis) · 2020-11-10T23:57:51.362Z · LW(p) · GW(p)
In practice, you can't (monetarily) reward forecasters with unbounded scoring rules. You may also want scoring rules to be somewhat forgiving.
comment by Sherrinford · 2020-11-09T15:12:57.257Z · LW(p) · GW(p)
Great vaccine news!
comment by Steven Byrnes (steve2152) · 2020-11-26T12:57:13.608Z · LW(p) · GW(p)
Age 4-7ish learning resources recommendations! Needless to say, different kids will take to different things. But here's my experience fwiw.
MATH - DragonBox - A+, whole series of games running from basic numeracy through geometry and algebra. Excellent gameplay, well-crafted, kid loves them.
Number Blocks (Netflix) - A+, basic numeracy, addition, concept of multiplication. Kid must have watched each episode 10 times and enthused about it endlessly.
Counting Kingdom - A+, Mastering mental addition. Excellent gameplay; fun for adults too. Note: Not currently available on ipad; I got it on PC Steam.
Slice Fractions 1 & 2 - A+, Teaches fractions. Great gameplay, great pedagogy.
An old-fashioned pocket calculator - A+, an underrated toy.
LITERACY: Explode The Code book - A, been around since at least the 1980s, still good.
Teach Your Monster To Read - B+, gameplay is a bit repetitive & difficulty progresses too quickly, but got a few hours of great learning in there before he lost interest
Poio - A-, Good gameplay, kid really liked it. Limited scope but great for what it is.
For reading, no individual thing seemed to make a huge difference and none of them kept his interest too long. But it all added up, bit by bit, and now he's over the hump, reading unprompted. Yay!
PROGRAMMING - Scratch Jr - A+ , duh
Replies from: steve2152
↑ comment by Steven Byrnes (steve2152) · 2020-11-26T13:40:26.843Z · LW(p) · GW(p)
Also: favorite parenting books for ages 0-2: No Bad Kids, Ferber, Anthropology of Childhood [LW(p) · GW(p)], Oh Crap, King Baby, Bringing Up Bebe
comment by Zack_M_Davis · 2020-11-23T06:39:18.207Z · LW(p) · GW(p)
I was confused why I got a "Eigil Rischel has created a new post: Demystifying the Second Law of Thermodynamics" notification. Turns out there's a "Subscribe to posts" button on people's user pages!
I don't remember clicking that option on Eigil's page [LW · GW] (nor having any particular memory of who Eigil Rischel is), but I presumably must have clicked it by accident sometime in the past, probably last year (because the document icon tab of my notifications pane also has an old notification for his previous post [LW · GW], and it looks like I had already upvoted his two next [LW · GW]-previous [LW · GW] posts from October and September 2019).
comment by a gently pricked vein (strangepoop) · 2020-11-15T15:49:47.060Z · LW(p) · GW(p)
So... it looks like the second AI-Box experiment was technically a loss.
Not sure what to make of it, since it certainly imparts the intended lesson anyway. Was it a little misleading that this detail wasn't mentioned? Possibly. Although the bet was likely conceded, a little disclaimer of "overtime" would have been nice when Eliezer discussed it [LW · GW].
Replies from: tetraspace-grouping
↑ comment by Tetraspace (tetraspace-grouping) · 2020-11-15T17:57:20.612Z · LW(p) · GW(p)
:0, information on the original AI box games!
In that round, the ASI convinced me that I would not have created it if I wanted to keep it in a virtual jail.
What's interesting about this is that, despite the framing of Player B being the creator of the AGI, they are not. They're still only playing the AI box game, in which Player B loses by saying that they lose, and otherwise they win.
For a time I suspected that the only way that Player A could win a serious game is by going meta, but apparently this was done just by keeping Player B swept up in their role enough to act how they would think the creator of the AGI would act. (Well, saying "take on the role of [someone who would lose]" is meta, in a sense.)
comment by [deleted] · 2020-11-04T15:44:45.594Z · LW(p) · GW(p)
GPT-1: *sentiment neuron*
Skeptics : Cute
GPT-2: *writes poems*
Skeptics: Meh
GPT-3: *writes code for a simple but functioning app*
Skeptics: Gimmick.
GPT-4: *proves simple but novel math theorems*
Skeptics: Interesting but not useful.
GPT-5: *creates GPT-6*
Skeptics: Wait! What?
GPT-6: *FOOM*
Skeptics: *dead*
-- XiXiDu (I added a reaction to GPT1).
Replies from: Alexei
comment by Zian · 2020-11-28T19:43:37.326Z · LW(p) · GW(p)
I found 2 bugs in the Less Wrong website. Where do they go? (this is the first bug; I couldn't find a place to report problems after looking through the FAQ and home page)
Replies from: habryka4
↑ comment by habryka (habryka4) · 2020-11-28T20:05:55.998Z · LW(p) · GW(p)
The best place for bug reports is the Intercom in the lower right corner. The second best place for bugs is our Github repo: https://github.com/LessWrong2/Lesswrong2/tree/devel
Replies from: habryka4
↑ comment by habryka (habryka4) · 2020-11-28T20:06:41.177Z · LW(p) · GW(p)
But also, doesn't the FAQ have this section?
Replies from: Zian
I have feedback, bug reports, or questions not answered in this FAQ. What should I do?
You have several options.
- Message the LessWrong team via Intercom (available in the bottom right). Ensure you don't have Hide Intercom set in your account settings [LW · GW].
- Send a private message to a member of the LessWrong team (see these on the team page [LW · GW])
- Open an issue on the LessWrong Github repository.
- Ask a question [LW · GW]
- For complaints and concerns regarding the LessWrong team, you can message Vaniver [LW · GW].
comment by MichaelLowe · 2020-11-22T21:30:23.391Z · LW(p) · GW(p)
Does anybody have recommended resources that explain the timeline of clinical trials of interventions? Specifically why they take so long and whether that is because of practical necessity or regulatory burden. Bonus points if Covid-19 is included as a context.
Replies from: Pattern
comment by Mary Chernyshenko (mary-chernyshenko) · 2020-11-08T17:15:52.398Z · LW(p) · GW(p)
So I've read Pearl's The Book of Why and although it is really well written I don't understand some things.
Say we have two variables, and variable X could 'listen' to variable Y, Y--->X. But we don't know if it is qualitative or quantitative. I would have appreciated it if the book included a case study or two on how people plot their studies around this thing.
For example, we want to know what features of an experimental system can influence the readout of our measuring equipment. Say, Y (feature) is the variety of fungi species inhabiting the root system of a plant, and X is the % of cases in which we register specific mycorrhizal structures on slides we view through a microscope (readout). And our 'measuring equipment' is a staining/viewing procedure.
Conceivably, if there are several species of fungi present, the mycorrhizal one(s) might form fewer (or more numerous) specific structures. This would be what I mean by a quantitative effect. Also conceivably, only some species or combinations of them have this effect on X. This would be qualitative.
Measuring both Y and X is more or less impossible, since you either stain a root or try to obtain a mycorrhizal culture from it (which is expensive.)· Even if we do try out some number of combinations of fungal inoculum, who knows how it compares against the diversity in the wild.
So... does this mean that we should split Y into Y-->Y1-->X and Y-->Y2-->X... or what?
· we don't consider some stains which maybe allow both.
Replies from: None
↑ comment by [deleted] · 2020-11-08T18:46:17.406Z · LW(p) · GW(p)
The first way to treat this in the DAG paradigm that comes to mind is that the "quantitative" question is a question about a causal effect given a hypothesized diagram
On the other hand, the "qualitative" question can be framed in two ways, I think. In the first, the question is about which DAG best describes reality given the choice of different DAGs that represent different sets of species having an effect. But in principle, we could also just construct a larger graph with all possible species as nodes having arrows pointing to $ X $ and try to infer all the different effects jointly, translating the qualitative question into a quantitative one. (The species that don't affect $ X $ will just have a causal effect of $ 0 $ on $ X $.)
To your point about diversity in the wild, in theoretical causality, our ability to generalize depends on 1) the structure of the DAG and 2) our level of knowledge of the underlying mechanisms. If we only have a blackbox understanding of the graph structure and the size of the average effects (that is, $ P(Y \mid \text{do}(\mathbf{X})) $), then there exist [certain situations](https://ftp.cs.ucla.edu/pub/stat_ser/r372-a.pdf) in which we can "transport" our results from the lab to other situations. If we actually know the underlying mechanisms (the structural causal model equations in causal DAG terminology), then we can potentially apply our results even outside of the situations in which our graph structure and known quantities are "transportable".
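A toy sketch of that larger-graph framing, with hypothetical species names standing in for whatever is actually in the root system (networkx is used here only to hold the graph; it is not part of the original comment):

```python
import networkx as nx

# Every candidate species gets its own node with an arrow into the readout X; the
# qualitative question ("which species matter?") then becomes the quantitative question
# of which of those arrows carry a causal effect distinguishable from 0.
species = ["fungus_1", "fungus_2", "fungus_3"]  # hypothetical placeholders

dag = nx.DiGraph()
for s in species:
    dag.add_edge(s, "X")  # each species may influence the readout

print(list(dag.predecessors("X")))  # ['fungus_1', 'fungus_2', 'fungus_3']
```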
Replies from: mary-chernyshenko
↑ comment by Mary Chernyshenko (mary-chernyshenko) · 2020-11-09T08:21:23.298Z · LW(p) · GW(p)
Thank you. It looks even more unfeasible than I thought (given the number of species of mycorrhizal and other root-inhabiting fungi); I'll have to just explicitly assume that Y does not have an effect on X, in a given root system from the wild. At least things seem much cheaper to do now)))
comment by habryka (habryka4) · 2020-11-13T18:19:51.978Z · LW(p) · GW(p)
In the interest of making people's experiences less surprising/frustrating: I am aware of a bug that seems to happen on some posts, where you can't vote on comments that are deeply nested. (It happens on the latest Zvi post for me, for example). I am working on fixing it. In the meantime, you can still vote on them, but your votes will only display when you refresh (which of course isn't an acceptable state of affairs, but might make things slightly less frustrating).
Replies from: habryka4
↑ comment by habryka (habryka4) · 2020-11-14T01:16:02.156Z · LW(p) · GW(p)
This should now be fixed. Sorry for the inconvenience!
comment by Alexei · 2020-11-12T17:38:51.418Z · LW(p) · GW(p)
I wanted to make two points as feedback for the LW designers, but I realized there's actually only one point: Personal Blog feature makes me very sad.
- I wanted to find this Open Thread page, but it didn't show up on my front page as it usually does.
- I was wondering why Zvi hasn't posted about covid in a while. I went to his user page and turns out he has been posting, but because they are Personal Blog posts, I haven't seen them.
↑ comment by habryka (habryka4) · 2020-11-12T18:00:42.928Z · LW(p) · GW(p)
Yeah, it’s a trade off. I think it makes sense for both of those to not be the first thing new people see when they show up. Happy about suggestions for changes. I don’t think the current thing is perfect, but I definitely thought about it a good amount and haven’t been able to come up with something much better.
Maybe we could make the filter button for personal blog posts more noticeable? It seems pretty prominent at the moment, but we could make it more prominent.
Replies from: Alexei
↑ comment by Alexei · 2020-11-12T18:15:05.996Z · LW(p) · GW(p)
Maybe we could make the filter button for personal blog posts more noticeable? It seems pretty prominent at the moment, but we could make it more prominent.
Oh! I did not see that at all until I just went to search for it. I think the problem there is that it was not clear at all to me what those tags (next to Latest) meant. I thought they were just showing recent tags from recent posts. I had no idea they were filter settings. I think a bit of a UI tweak should help, like maybe adding a filter icon.
Also probably have Personal Blog on by default.
Another two places I'd expect to see those settings: https://www.lesswrong.com/account [? · GW] and https://www.lesswrong.com/allPosts [? · GW] (but they aren't in either of those two places)
I think another thing that would help is when you're looking at a Personal Blog page for someone you're not subscribed to, there should be a relatively prominent message that says: "Hey, do you want to subscribe to this person? That way you'll see their posts (like this one) in your feed."
Replies from: habryka4
↑ comment by habryka (habryka4) · 2020-11-12T21:31:56.142Z · LW(p) · GW(p)
Yeah, adding a filter icon seems pretty reasonable to me. Maybe just replacing the small plus icon with a filter icon. I will give that a try.
Also probably have Personal Blog on by default.
It is a plausible decision to turn on personal blogposts by default when you log in, but I am currently leaning against it; it's definitely a non-obvious decision.
Yeah, agree that it should be here. The whole account page is kind of a mess and I really want to improve it. Currently adding UI there is kind of a pain and it's already super cluttered.
I agree that there should be filters on that page, but I do think by default it should be separate filters than from the frontpage (since I think the "All Posts" page should definitely by default show all posts).
Replies from: Alexei
↑ comment by Alexei · 2020-11-13T02:42:57.586Z · LW(p) · GW(p)
It is a plausible decision to turn on personal blogposts by default when you log in, but I am currently leaning against it; it's definitely a non-obvious decision.
I'm curious why. (You don't have to reply if you don't think it's worth the time.) My original intuition was that it's hard to discover things you've never seen. Vs turning things off is usually pretty intuitive.
Replies from: habryka4
↑ comment by habryka (habryka4) · 2020-11-13T20:08:38.992Z · LW(p) · GW(p)
The goal is really to balance the forces that move LessWrong towards a news-driven and politics-driven direction, with the need to sometimes talk about urgent and political stuff, at least with people who have context. I am worried that if we make all that stuff visible by default to logged-in users, this will skew the balance too much in the direction of having things be news-driven and political, and we end up in a bad attractor. I also like the general principle of "these topics aren't forbidden on the site, but when you write about them, you can generally expect to only get people who have been around a lot and have more context on your culture to read them and discuss them with you, instead of being exposed to tons of people who don't really know what's going on".
The other thing is that I don't really like it when the frontpage changes without making it explicit what happened. Like, I wouldn't expect logging in to change the frontpage algorithm, so it feels a bit bad to make that happen, and makes the site feel a bit less understandable and transparent. This isn't a huge deal, but it is a consideration that I do think is somewhat important.
Replies from: Alexei, Alexei
↑ comment by Alexei · 2020-11-17T23:41:49.041Z · LW(p) · GW(p)
Ok, here's the thing that doesn't quite make sense. You're mostly concerned about specific topics (like politics) not being visible. But this issue is being solved by hiding all personal blog posts. Clearly there could be a large number of personal blog posts that are not about the sensitive topics.
Now that you have tags, I think a better solution is to show all personal blog posts unless they have certain tags (like politics). Which solves the problem more directly. (Edit: I guess that opens the door for some users to add politics tags to a lot of posts to hide them from the front page... Hmm. Maybe these tags are reserved for trusted users.)
Also, yeah, definitely people should be able to say that their post shouldn't appear on the front page. That's totally fine.
Replies from: habryka4
↑ comment by habryka (habryka4) · 2020-11-17T23:55:26.970Z · LW(p) · GW(p)
There are three reasons why a post might be a personal blogpost instead of a frontpage post:
- It's about a topic we intentionally want to limit discussion on
- It's about a niche topic that's only interesting to a very small fraction of the LW audience
- The author wanted it to not show up on the frontpage
It seems that for all three of those reasons, it makes sense to limit visibility. I don't think there are personal blogposts that don't fit into any of the above three categories.
Replies from: Saran
↑ comment by Saran · 2020-11-26T00:58:08.966Z · LW(p) · GW(p)
What about threads like "Open & Welcome Thread"? I had a bit of trouble finding it today.
One way to handle these would be a second version of the Personal Blog category that does show on the main page.
Or an entirely different tag, "Community Post", available to trusted members? Though it would probably end up the same as the "Open Threads" tag.