Comments
Hey all, I've booked a table that seats up to 10, so we have room for a couple more in case there are any last-minute RSVPs (it was the only table they had left anyway). We'll meet outside the restaurant at 7pm and wait a bit before heading in. Please try to be on-time-ish, and let me know if you're likely to be late. Eight Treasures is quite busy tomorrow and they're requesting advance orders, so I'll send out a menu to you all soon and you can pick some stuff. If you're fine with whatever, then I'll fill out the order with a variety of 'recommended' dishes. I think most dishes will be shared (it's a round table with a turntable), so there should be a lot of variety for those who don't really know what to order.
Looking forward to tomorrow!
Edit: Here's the menu. Also, this FoodPanda link has dietary stuff marked (most dishes are vegan), and also has prices (click item to see small/medium/large price).
> [...] the only good thing about any form of utilitarianism: [...] Anyone with complete information of the universe could [...]
It doesn't make sense to use an impossibility as part of the judging criteria for the goodness of something.
An action/utility function can only ever be a function over (at most) all information available to the agent.
I'm not arguing for any particular view, but the causal reach of possible policies/actions should determine the reach of the averaging function, right? It makes as little sense to include other possible universes in our calculation of optimal policies for this universe, as it does to try to coordinate policies between two space-time points (within the same universe) that can't causally interact with one another.
If we're trying to find and implement good policies (according to whatever definition of "good"), then in deciding on goodness of a policy, we should only care about the things that we can actually affect, and in proportion to the degree to which we can affect them.
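Just to sketch roughly what I mean (not claiming this is the "right" formalism, and the notation here is mine): let $C(\pi)$ be the set of outcomes in the causal future of policy $\pi$ - the things the policy can actually affect - and let the averaging run only over that set:

$$\text{Score}(\pi) \;=\; \sum_{s \in C(\pi)} P(s \mid \pi)\, U(s)$$

Outcomes outside $C(\pi)$ (other possible universes, causally disconnected regions) simply don't appear in the sum, and outcomes we can barely influence only matter to the extent that $\pi$ actually shifts $P(s \mid \pi)$.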
A bunch of new names on the RSVP - looking forward to meeting you all tomorrow! Note that Yi Xin is so 'small and casual' that it doesn't usually take table bookings, but it shouldn't take any longer than 10 mins to get a table. Please try to let me know in the comments here if you're likely to be more than 20 mins late so we can time our 'chope-ing' (we'll need multiple tables).
See you tomorrow night!
Ah, interesting to think about how these principles/ideas apply to non-mathy problems!
Regarding your point about actively forgetting: Yeah, it seems like there's an interesting trade-off to consider in choosing how long to stay "steeped" in a task. I'm guessing it depends a lot on the "depth" of the problem being solved - i.e. for some problems I think it can take several days or more just to get to the point where you've loaded in enough "context" to start actually making any progress on it, and so you need to stay deep in it for several days at a time. Whereas with other problems it could take hours or less to get "deep" into the problem, and so in that case it probably makes sense to have a faster cadence of solving vs actively forgetting because you hit diminishing returns fairly quickly after attacking the problem for several hours. I'll have to think more about this.
Thanks for your comment!
Thanks for sharing, matto - I'll look into those two authors 👍
Thanks for commenting - glad to know it's useful!
The short answer is that most of the lessons come from my hobby of thinking about weird learning rules - alternatives to back-prop that make more sense under certain conditions/requirements.
(Can't say I pass my own test from the "Are you solving the right problem?" section, but it's just a hobby anyway.)
Hey all, @FlorianH is right, Yi Xin is closed today - my bad! So we'll be heading across the road to the backup location - the hawker center has lots of different options (including McDonalds). They have an excellent Mala stall (I'll be getting that), but there are a million other stalls there. We'll still be meeting at Yi Xin just in case someone doesn't get this notification.
We'll either eat at the hawker center (if it's quiet enough today), or get takeaway and sit in a public mini park thing that's nearby, or sit in McDonalds (I'll buy us some drinks as an excuse if no one orders from there).
Sorry for the late reply to Florian's message! My sleep schedule is such that I'm currently nocturnal (I'm trying to loop it around), and I just woke up. Completely understand if some of you have to change your RSVP status!
Hey all, Nathan and I are sitting inside - I'm wearing a grey shirt and black shorts. Nathan is wearing a black shirt that says "deep note".
Hey friends, just a quick reminder that the dinner is tomorrow night. I'm going to get there a bit early to try and grab us a table. See you tomorrow! :)
I'm here early, and managed to snag a table. I'm wearing a light-brown shirt and black shorts. My hat is in the middle of the table (it's one of the two outside tables).
No problem 👍
No problem!
Hey khushjammu, I think there is a second group being formed, but if that doesn't go ahead, then you're welcome to join the original group tonight, since @papa can't make it. But please don't abandon the second group if that's going ahead and you've already committed! If you don't reply by 2pm, and assuming the second group isn't being formed, then I'll open it up to the others to fill the slot. Hope that's okay!
No problem!
@khushjammu will get this message as a notification (since they have RSVPed), but I think you'll have to send a private message to @smallsilo (unless they manually subscribed to the comments on this post), because it doesn't seem like LessWrong has proper user tagging.
If you do manage to set up a second group, you may want to organise to arrive 30 minutes earlier or later if possible, since that would make it easier for us to find our respective group members, and minimise any accidental "mixing". If a big group of us all arrive at once, that could look a bit sus, even if we are being sure to remain separated.
Hey! Currently there are seven people so a group of 5, plus 2 waiting (khushjammu and yourself). I'll probably organise another catch up (probably same place and time) in the next few weeks, so if you want you can wait for that. If another person or two join then you could form a second group. In any case, I'll let you know if there's an opening for this week's event due to a couple of people having to drop out 👍
Hey everyone! Since it's such a small gathering and there are some people on the "wait list", please comment as a reply to this message to double-confirm that you're coming.
Note that you'll of course need to be vaccinated to get in (vaccinated = 2 weeks after your second shot, per the TraceTogether app).
We're not able to book a table ahead of time (walk-ins only at the moment), but I think we should be able to get a table even if we have to wait outside for 15 mins (I'll arrive a bit before 6pm). Worst case we can just order take-away and head across the road to the People's Park Complex hawker centre for seating. We may head there after Yi Xin (which closes at 8pm) anyway if we feel like we're getting kicked out in the middle of an interesting discussion.
Also, if you want, bring along a list of things that you'd like to discuss - e.g. big/important model updates that you've recently had, or things that you've been thinking a lot about.
See you on Saturday!
We may actually be full for this one since I think @papa didn't realise there was an RSVP button (he only commented), but you're first on the "wait list" in case someone needs to drop out. Either way I'm going to organise a few of these so I'll let you know when the next one is 👍
Ah, thanks!
Great! I've just posted a root-level comment in this thread, but I think since you're not RSVPed yet, you won't have received a notification?
Hey Zmavli, Joe, Sunny - looking forward to meeting you next week! There's only one slot remaining and it's been less than a day. If there ends up being a lot more than 5, we could split into a couple of groups, but we'll see. If you do need to pull out, please do so as early as possible to allow another person to swap in. I believe all who have RSVPed will get notifications if I post a comment here, so I'll post a message a few days from the date to make sure everything is in place and maybe start a whatsapp/telegram/signal group (depending on what most people have) and we can go from there 👍
A markdown version of the body doesn't seem to be in any of the possible fields of the `Post` type. Am I looking in the wrong place? I'm using this:
https://www.lesswrong.com/graphiql
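For reference, here's roughly the kind of query I'm running (a Python sketch; the post ID is a placeholder, and `htmlBody` is just the only body-like field I can find in the schema browser - so it may well be the field/selector names that I'm getting wrong):

```python
import requests

GRAPHQL_URL = "https://www.lesswrong.com/graphql"

# Placeholder post ID - substitute a real one.
query = """
{
  post(input: {selector: {_id: "POST_ID_HERE"}}) {
    result {
      title
      htmlBody
    }
  }
}
"""

response = requests.post(GRAPHQL_URL, json={"query": query})
response.raise_for_status()
print(response.json())
```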
> The researchers made the people from this area play different economic games and find out that people with a state are less cooperative than stateless people.
>
> [...]
>
> In the new dog-eat-dog society there are no rules. Everyone fights for himself and there's no shame in behaving in an anti-social way.
Did they control for the size of the village? I'd have thought a smaller village is naturally going to be more cooperative, since they're ingrained with social norms that lean on the fact that everyone knows everyone. E.g. if you screw someone over, then you can't "escape" that reputational damage by moving to a different group of friends - everyone knows what you did. As societies get bigger they can't lean on those benefits of culture as much, so they need to move to a "trustless" model - i.e. lots of laws, formal procedures, etc.
I really like this experimentation. Some thoughts:
- Regarding finding the ideal set of axes: I wonder if it would make sense to give quite a few of them (that seem plausibly good), and then collect data for a month or so, and then select a subset based on usage and orthogonality. Rather than tentatively trying new axes in a more one-by-one fashion, that is. You'd explicitly tell users that the axes are being experimented with, and to vote on the axes which seem most appropriate. This might also be a way to collect axis ideas - if the user can't find the axis that they want, they can click a button to suggest one. Relying on the in-the-moment intuitions of users could be a great way to quickly search the "axis space".
- I really like the "seeks truth/conflict" axis. A comment has an inherent "gravity" to it which makes it an inappropriate/costly tool for pointing out "small" things. If a comment is very slightly hostile, then there's a kind of social cost to pointing it out, since it really isn't worth a whole comment. This results in a threshold under which incivility/conflict-seeking can simmer, essentially immune to criticism.
- One weird experiment that probably wouldn't work, but which I'd love to see is for the reactions to be more like a tag system, where there are potentially hundreds of different tags. They're essentially "quick comments", and could be quite "subtle" in their meaning. It would be a bit like platforms that allow you to react with any emoji, except that you can be much more precise with your reactions - e.g. "Unnecessary incivility" or "Interesting direction" or "Good steelman" or "Please expand" or "Well-written" or "Hand-wavy" or "Goodhart's Law" (perhaps implying that the concept is relevant in a way that's unacknowledged by the author). There could also be some emergent use-cases with tags. For example, tags could be used as a way for a commenter to poll the people reading the comment by asking them to tag a digit between 1 and 5, for example.
- There are lots of ways this idea could end up being a net negative - in particular it may be that any level of subtlety beyond a few basic voting axes really would benefit from a comment in almost all cases, and then that comment essentially becomes the "tag" that people can vote on. Still, I'd love to see an experiment.
- This isn't about this experiment specifically, but: One problem with showing an absolute vote count is that it relies on people explicitly not voting on something if they think it has reached an appropriate level of upvotes/downvotes. E.g. if a comment that you think is kinda bad has a score of +10, you might downvote it, but if it already has a score of -3, you might leave it because to downvote further would be "too harsh". This obviously isn't ideal, because a couple of hours later that -3 comment could have climbed to +5 and so it turns out you should have actually downvoted it. There are a few ways to solve this - e.g. use more of a star rating system, or cap the upside and downside (but keep the real votes so that e.g. if a comment gets to -10, only -5 is displayed and reflected in the user's karma, but it would require 6 upvotes to get to -4; see the sketch just after this list), display as a ratio plus total number of votes, etc. - they all have their trade-offs though, so I'm not sure there's a clear solution here. This is another place where tags are interesting, because if everyone thinks a comment is just slightly conflict-seeking, then they can use the "Slightly conflict-seeking" tag, and they can all vote on that without giving the comment author the impression that everyone thinks their comment is extremely conflict-seeking.
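To make the capping idea from that last bullet concrete, here's a tiny sketch (Python; the cap of 5 is just the number from my example, not a recommendation). The real vote total is tracked internally, but the displayed score (and karma effect) is clamped, so a pile-on past the cap stays hidden, and later upvotes have to "pay back" the hidden portion before the displayed score moves:

```python
def displayed_score(real_score: int, cap: int = 5) -> int:
    """Clamp the real vote total to [-cap, +cap] for display (and karma)."""
    return max(-cap, min(cap, real_score))

# The example from above: a comment with a real score of -10 shows as -5,
# and it takes 6 upvotes (real score -10 -> -4) before the displayed score
# moves from -5 to -4.
assert displayed_score(-10) == -5
assert displayed_score(-10 + 6) == -4
```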
Like I said, I love this experimentation - please keep at it! I think this topic is completely underappreciated by basically every social platform.
Thanks!
Fair point! Done.
It is still concerning to me (of course, having read your original comment), but I can see how it may have misled others who were skimming.
> If someone spent 100 hours of close interaction with Julia or Dan or Kenzie, I would expect them to have zero negative effects and to have had a great time.
>
> If someone spent 100 hours of close interaction with Anna or Val or Pete, I would want to make absolutely sure they had lots of resources available to them just in case (those three being much more head-melty and having a much wider spread of impacts on people)
As a complete outsider who stumbled upon this post and thread, I find it surprising and concerning that there's anyone at MIRI/CFAR with whom spending a few weeks might be dangerous, mental-health-wise.
Would "Anna or Val or Pete" (I don't know who these people are) object to your statement above? If not, I'd hope they're concerned about how they are negatively affecting people around them and are working to change that. If they have this effect somewhat consistently, then the onus is probably on them to adjust their behavior.
Perhaps some clarification is needed here - unless the intended and likely readers are insiders who will have more context than me.
(Edited to make top quote include more of the original text - per Duncan's request)
> Neurons fire at around 200 Hz on average.
The average cortical neuron firing rate is much lower than this.[0] You might have meant maximum rather than average - or am I misunderstanding?
[0] https://aiimpacts.org/rate-of-neuron-firing/#:~:text=Based%20on%20the%20energy%20budget,around%200.16%20times%20per%20second.