Habryka's Shortform Feed
post by habryka (habryka4) · 2019-04-27T19:25:26.666Z · LW · GW · 306 comments
In an attempt to get myself to write more, here is my own shortform feed. Ideally I would write something daily, but we will see how it goes.
306 comments
Comments sorted by top scores.
comment by habryka (habryka4) · 2024-07-02T07:59:15.296Z · LW(p) · GW(p)
I am confident, on the basis of private information I can't share, that Anthropic has asked at least some employees to sign non-disparagement agreements that are covered by non-disclosure agreements, similar to what OpenAI did.
Or to put things into more plain terms:
I am confident that Anthropic has offered at least one employee significant financial incentive to promise to never say anything bad about Anthropic, or anything that might negatively affect its business, and to never tell anyone about their commitment to do so.
I am not aware of Anthropic doing anything like withholding vested equity the way OpenAI did, though I think the effect on discourse is similarly bad.
I of course think this is quite sad and a bad thing for a leading AI capability company to do, especially one that bills itself as being held accountable by its employees and that claims to prioritize safety in its plans.
Replies from: sam-mccandlish, samuel-marks, Zach Stein-Perlman, jacobjacob, William_S, neel-nanda-1, ChristianKl, jacobjacob, Dagon, jacob-pfau, Zach Stein-Perlman, Zane↑ comment by Sam McCandlish (sam-mccandlish) · 2024-07-04T04:26:33.366Z · LW(p) · GW(p)
Hey all, Anthropic cofounder here. I wanted to clarify Anthropic's position on non-disparagement agreements:
- We have never tied non-disparagement agreements to vested equity: this would be highly unusual. Employees or former employees never risked losing their vested equity for criticizing the company.
- We historically included standard non-disparagement terms by default in severance agreements, and in some non-US employment contracts. We've since recognized that this routine use of non-disparagement agreements, even in these narrow cases, conflicts with our mission. Since June 1st we've been going through our standard agreements and removing these terms.
- Anyone who has signed a non-disparagement agreement with Anthropic is free to state that fact (and we regret that some previous agreements were unclear on this point). If someone signed a non-disparagement agreement in the past and wants to raise concerns about safety at Anthropic, we welcome that feedback and will not enforce the non-disparagement agreement.
In other words: we're not here to play games with AI safety using legal contracts. Anthropic's whole reason for existing is to increase the chance that AI goes well, and spur a race to the top on AI safety.
Some other examples of things we've needed to adjust from the standard corporate boilerplate to ensure compatibility with our mission: (1) replacing standard shareholder governance with the Long Term Benefit Trust and (2) supplementing standard risk management with the Responsible Scaling Policy. And internally, we have an anonymous RSP non-compliance reporting line so that any employee can raise concerns about issues like this without any fear of retaliation.
Please keep up the pressure on us and other AI developers: standard corporate best practices won't cut it when the stakes are this high. Our goal is to set a new standard for governance in AI development. This includes fostering open dialogue, prioritizing long-term safety, making our safety practices transparent, and continuously refining our practices to align with our mission.
Replies from: Zach Stein-Perlman, habryka4, neel-nanda-1, aysja, mesaoptimizer, neel-nanda-1, habryka4, kave, elifland, Zach Stein-Perlman↑ comment by Zach Stein-Perlman · 2024-07-04T19:30:16.753Z · LW(p) · GW(p)
Please keep up the pressure on us
OK:
- You should publicly confirm that your old policy of "don't meaningfully advance the frontier with a public launch" has been replaced by your RSP, if that's true, and otherwise clarify your policy.
- You take credit for the LTBT (e.g. here) but you haven't [LW · GW] published [LW · GW] enough to show that it's effective. You should publish the Trust Agreement, clarify these ambiguities, and make accountability-y commitments like "if major changes happen to the LTBT, we'll quickly tell the public."
- (Reminder that a year ago you committed to establish a bug bounty program (for model issues) or similar but haven't. But I don't think bug bounties are super important.)
- [Edit: bug bounties are also mentioned in your RSP—in association with ASL-2—but not explicitly committed to.]
- (Good job in many areas.)
↑ comment by jacobjacob · 2024-07-04T20:13:22.961Z · LW(p) · GW(p)
(Sidenote: it seems Sam was kind of explicitly asking to be pressured, so your comment seems legit :)
But I also think that, had Sam not done so, I would still really appreciate him showing up and responding to Oli's top-level post, and I think it should be fine for folks from companies to show up and engage with the topic at hand (NDAs), without also having to do a general AMA about all kinds of other aspects of their strategy and policies. If Zach's questions do get very upvoted, though, it might suggest there's demand for some kind of Anthropic AMA event.)
↑ comment by habryka (habryka4) · 2024-07-05T16:16:08.099Z · LW(p) · GW(p)
Anyone who has signed a non-disparagement agreement with Anthropic is free to state that fact (and we regret that some previous agreements were unclear on this point) [emphasis added]
This seems, as far as I can tell, like a straightforward lie?
I am very confident that the non-disparagement agreements you asked at least one employee to sign were not ambiguous, and very clearly said that the non-disparagement clauses could not be mentioned.
To reiterate what I know to be true: Employees of Anthropic were asked to sign non-disparagement agreements with a commitment to never tell anyone about the presence of those non-disparagement agreements. There was no ambiguity in the agreements that I have seen.
@Sam McCandlish [LW · GW]: Please clarify what you meant to communicate by the above, which I interpreted as claiming that there was merely ambiguity in previous agreements about whether the non-disparagement agreements could be disclosed, which seems to me demonstrably false.
Replies from: neel-nanda-1, sam-mccandlish, lcmgcd↑ comment by Neel Nanda (neel-nanda-1) · 2024-07-12T07:22:54.739Z · LW(p) · GW(p)
I can confirm that my concealed non-disparagement was very explicit that I could not discuss the existence or terms of the agreement; I don't see any way I could be misinterpreting this. (But I have now kindly been released from it! [LW(p) · GW(p)])
EDIT: It wouldn't massively surprise me if Sam just wasn't aware of its existence though
↑ comment by Sam McCandlish (sam-mccandlish) · 2024-07-09T01:43:13.716Z · LW(p) · GW(p)
We're not claiming that Anthropic never offered a confidential non-disparagement agreement. What we are saying is: everyone is now free to talk about having signed a non-disparagement agreement with us, regardless of whether there was a non-disclosure previously preventing it. (We will of course continue to honor all of Anthropic's non-disparagement and non-disclosure obligations, e.g. from mutual agreements.)
If you've signed one of these agreements and have concerns about it, please email hr@anthropic.com.
Replies from: habryka4↑ comment by habryka (habryka4) · 2024-07-09T02:15:04.352Z · LW(p) · GW(p)
Hmm, I feel like you didn't answer my question. Can you confirm that Anthropic has asked at least some employees to sign confidential non-disparagement agreements?
I think your previous comment pretty strongly implied that you think you did not do so (i.e. saying any previous agreements were merely "unclear" I think pretty clearly implies that none of them included an unambiguous confidential non-disparagement agreement). I want it to be confirmed and on the record that you did, so I am asking you to say so clearly.
↑ comment by lemonhope (lcmgcd) · 2024-07-09T11:01:53.725Z · LW(p) · GW(p)
"Unclear on this point" means what you think it means and is not a L I E for a spokesperson to say in my book. You got the W here already
Replies from: habryka4↑ comment by habryka (habryka4) · 2024-07-09T17:01:28.988Z · LW(p) · GW(p)
I really think the above was meant to imply that the non-disparagement agreements were merely unclear on whether they were covered by a non-disclosure clause (and I would be happy to take bets on how a randomly selected reader would interpret it).
My best guess is that Sam was genuinely confused on this and that there are non-disparagement agreements with Anthropic that clearly are not covered by such clauses.
↑ comment by Neel Nanda (neel-nanda-1) · 2024-07-04T21:31:35.646Z · LW(p) · GW(p)
EDIT: Anthropic have kindly released me personally from my entire concealed non-disparagement [LW(p) · GW(p)], not just made a specific safety exception. Their position on other employees remains unclear, but I take this as a good sign
If someone signed a non-disparagement agreement in the past and wants to raise concerns about safety at Anthropic, we welcome that feedback and will not enforce the non-disparagement agreement.
Thanks for this update! To clarify, are you saying that you WILL enforce existing non-disparagement agreements for everything apart from safety, but are specifically making an exception for safety?
this routine use of non-disparagement agreements, even in these narrow cases, conflicts with our mission
Given this part, I find this surprising. Surely if you think it's bad to ask future employees to sign non-disparagement agreements, you should also want to free past employees from them?
↑ comment by aysja · 2024-07-06T00:00:19.702Z · LW(p) · GW(p)
This comment appears to respond to habryka, but doesn’t actually address what I took to be his two main points—that Anthropic was using NDAs to cover non-disparagement agreements, and that they were applying significant financial incentive to pressure employees into signing them.
We historically included standard non-disparagement agreements by default in severance agreements
Were these agreements subject to NDA? And were all departing employees asked to sign them, or just some? If the latter, what determined who was asked to sign?
↑ comment by mesaoptimizer · 2024-07-04T18:41:56.456Z · LW(p) · GW(p)
Anyone who has signed a non-disparagement agreement with Anthropic is free to state that fact (and we regret that some previous agreements were unclear on this point).
I'm curious as to why it took you (and therefore Anthropic) so long to make it common knowledge (or even public knowledge) that Anthropic used non-disparagement contracts as a standard and was also planning to change its standard agreements.
The right time to reveal this was when the OpenAI non-disparagement news broke, not after Habryka connects the dots and builds social momentum for scrutiny of Anthropic.
Replies from: habryka4↑ comment by habryka (habryka4) · 2024-07-04T19:13:32.995Z · LW(p) · GW(p)
that Anthropic used non-disparagement contracts as a standard and was also planning to change its standard agreements.
I do want to be clear that a major issue is that Anthropic used non-disparagement agreements that were covered by non-disclosure agreements. I think that's an additional, much more insidious thing to do, that contributed substantially to the harm caused by the OpenAI agreements, and I think it is an important fact to include here (and it also makes the two situations even more analogous).
↑ comment by Neel Nanda (neel-nanda-1) · 2024-07-05T09:10:41.423Z · LW(p) · GW(p)
Note, since this is a new and unverified account, that Jack Clark (Anthropic co-founder) confirmed on Twitter that the parent comment is the official Anthropic position https://x.com/jackclarkSF/status/1808975582832832973
↑ comment by habryka (habryka4) · 2024-07-04T17:55:41.840Z · LW(p) · GW(p)
Thank you for responding! (I have more comments and questions but figured I would shoot off one quick question which is easy to ask)
We've since recognized that this routine use of non-disparagement agreements, even in these narrow cases, conflicts with our mission
Can you clarify what you mean by "even in these narrow cases"? If I am understanding you correctly, you are saying that you were including a non-disparagement clause by default in all of your severance agreements, which sounds like the opposite of narrow (edit: though as Robert points out it depends on what fraction of employees get offered any kind of severance, which might be most, or might be very few).
I agree that it would have technically been possible for you to also include such an agreement on start of employment, but that would have been very weird, and not even OpenAI did that.
I think using the sentence "even in these narrow cases" seems inappropriate given that (if I am understanding you correctly) all past employees were affected by these agreements. I think it would be good to clarify what fraction of past employees were actually offered these agreements.
↑ comment by RobertM (T3t) · 2024-07-04T19:18:00.464Z · LW(p) · GW(p)
Severance agreements typically aren't offered to all departing employees, but usually only those that are fired or laid off. We know that not all past employees were affected by these agreements, because Ivan claims [LW(p) · GW(p)] to not have been offered such an agreement, and he left[1] in mid-2023, which was well before June 1st.
[1] Presumably of his own volition, hence no offered severance agreement with non-disparagement clauses.
↑ comment by habryka (habryka4) · 2024-07-04T19:25:21.567Z · LW(p) · GW(p)
Ah, fair, that would definitely make the statement substantially more accurate.
@Sam McCandlish [LW · GW]: Could you clarify whether severance agreements were also offered to voluntarily departing employees, and if so, under which conditions?
↑ comment by kave · 2024-07-04T19:55:46.245Z · LW(p) · GW(p)
To expand on my "that's a crux": if the non-disparagement+NDA clauses are very standard, such that they were included in a first draft by an attorney without prompting and no employee ever pushed back, then I would think this was somewhat less bad.
It would still be somewhat bad, because Anthropic should be proactive about not making those kinds of mistakes. I am confused about what level of perfection to demand from Anthropic, considering the stakes.
And if non-disparagement is often used, but Anthropic leadership specified either its presence or its form, that would seem quite bad to me, because mistakes of commission here are more evidence of poor decisionmaking than mistakes of omission. If Anthropic leadership decided to keep the clause when a departing employee wanted to remove it, that would similarly seem quite bad to me.
Replies from: nwinter↑ comment by nwinter · 2024-07-05T19:43:09.220Z · LW(p) · GW(p)
I think that both these clauses are very standard in such agreements. Both severance letter templates I was given for my startup, one from a top-tier SV investor's HR function and another from a top-tier SV law firm, had both clauses. When I asked Claude, it estimated 70-80% of startups would have a similar non-disparagement clause and 80-90% would have a similar confidentiality-of-this-agreement's-terms clause. The three top Google hits for "severance agreement template" all included those clauses.
These generally aren't malicious. Terminations get messy and departing employees often have a warped or incomplete picture of why they were terminated–it's not a good idea to tell them all those details, because that adds liability, and some of those details are themselves confidential about other employees. Companies view the limitation of liability from release of various wrongful termination claims as part of the value they're "purchasing" by offering severance–not because those claims would succeed, but because it's expensive to explain in court why they're justified. But the expenses disgruntled ex-employees can cause are not just legal; they're also reputational. You usually don't know which ex-employee will get salty and start telling their side of the story publicly, where you can't easily respond with your side without opening up liability. Non-disparagement helps cover that side of it. And if you want to disparage the company, in a standard severance letter that doesn't claw back vested equity, hey, you're free to just not sign it–it's likely only a bonus few weeks' or months' salary that you hadn't yet earned on the line, not the value of all the equity you had already vested. We shouldn't conflate the OpenAI situation with Anthropic's given the huge difference in stakes.
Confidentiality clauses are standard because they prevent other employees from learning the severance terms and potentially demanding similar treatment in potentially dissimilar situations, thus helping the company control costs and negotiations in future separations. They typically cover the entire agreement and are mostly about the financial severance terms. I imagine that departing employees who cared could've asked the company for a carve-out on the confidentiality for the non-disparagement clause as a very minor point of negotiation.
It's great that Anthropic is taking steps to make these docs more departing-employee-friendly. I wouldn't read too much into the fact that the docs were like this in the first place (this wasn't on cultural radars until very recently) or that they weren't immediately changed (legal stuff takes time, and this was much smaller in scope than in the OpenAI case).
Example clauses in default severance letter from my law firm:
7. Non-Disparagement. You agree that you will not make any false, disparaging or derogatory statements to any media outlet, industry group, financial institution or current or former employees, consultants, clients or customers of the Company, regarding the Company, including with respect to the Company, its directors, officers, employees, agents or representatives or about the Company's business affairs and financial condition.
11. Confidentiality. To the extent permitted by law, you understand and agree that as a condition for payment to you of the severance benefits herein described, the terms and contents of this letter agreement, and the contents of the negotiations and discussions resulting in this letter agreement, shall be maintained as confidential by you and your agents and representatives and shall not be disclosed except to the extent required by federal or state law or as otherwise agreed to in writing by the Company.
↑ comment by elifland · 2024-07-04T19:33:25.368Z · LW(p) · GW(p)
And internally, we have an anonymous RSP non-compliance reporting line so that any employee can raise concerns about issues like this without any fear of retaliation.
Are you able to elaborate on how this works? Are there any other details about this publicly? I couldn't find more via a quick search.
Some specific questions I'm curious about: (a) who handles the anonymous complaints, (b) what is the scope of behavior explicitly (and implicitly, via cultural norms) covered here, and (c) how are situations handled where a report would deanonymize the reporter (or narrow them down to a small number of people)?
Replies from: Zach Stein-Perlman↑ comment by Zach Stein-Perlman · 2024-07-04T19:38:19.604Z · LW(p) · GW(p)
Anthropic has not published details. See discussion here [LW(p) · GW(p)]. (I weakly wish they would; it's not among my high-priority asks for them.)
Replies from: zac-hatfield-dodds↑ comment by Zac Hatfield-Dodds (zac-hatfield-dodds) · 2024-07-04T20:16:53.376Z · LW(p) · GW(p)
OK, let's imagine I had a concern about RSP noncompliance, and felt that I needed to use this mechanism.
(in reality I'd just post in whichever slack channel seemed most appropriate; this happens occasionally for "just wanted to check..." style concerns and I'm very confident we'd welcome graver reports too. Usually that'd be a public channel; for some compartmentalized stuff it might be a private channel and I'd DM the team lead if I didn't have access. I think we have good norms and culture around explicitly raising safety concerns and taking them seriously.)
As I understand it, I'd:
- Remember that we have such a mechanism and bet that there's a shortcut link. Fail to remember the shortlink name (reports? violations?) and search the list of "rsp-" links; ah, it's rsp-noncompliance. (just did this, and added a few aliases)
- That lands me on the policy PDF, which explains in two pages the intended scope of the policy, who's covered, the procedure, etc., and contains a link to the third-party anonymous reporting platform. That link is publicly accessible, so I could e.g. make a report from a non-work device or even after leaving the company.
- I write a report on that platform describing my concerns[1], optionally uploading documents etc. and get a random password so I can log in later to give updates, send and receive messages, etc.
- The report by default goes to our Responsible Scaling Officer, currently Sam McCandlish. If I'm concerned about the RSO or don't trust them to handle it, I can instead escalate to the Board of Directors (current DRI Daniela Amodei).
- Investigation and resolution obviously depends on the details of the noncompliance concern.
There are other (pretty standard) escalation pathways for concerns about things that aren't RSP noncompliance. There's not much we can do about the "only one person could have made this report" problem beyond the included strong commitments to non-retaliation, but if anyone has suggestions I'd love to hear them.
[1] I clicked through just now to the point of cursor-in-textbox, but did not submit a nuisance report.
↑ comment by William_S · 2024-07-05T17:53:58.045Z · LW(p) · GW(p)
Good that it's clear who it goes to, though if I were at Anthropic I'd want an option to escalate to a board member who isn't Dario or Daniela, in case I had concerns related to the CEO.
Replies from: zac-hatfield-dodds↑ comment by Zac Hatfield-Dodds (zac-hatfield-dodds) · 2024-07-05T20:04:54.516Z · LW(p) · GW(p)
Makes sense - if I felt I had to use an anonymous mechanism, I can see how contacting Daniela about Dario might be uncomfortable. (Although to be clear I actually think that'd be fine, and I'd also have to think that Sam McCandlish as responsible scaling officer wouldn't handle it)
If I was doing this today I guess I'd email another board member; and I'll suggest that we add that as an escalation option.
Replies from: Raemon↑ comment by Raemon · 2024-07-05T20:31:23.692Z · LW(p) · GW(p)
Are there currently board members who are meaningfully separated in terms of incentive-alignment with Daniela or Dario? (I don't know that it's possible for you to answer in a way that'd really resolve my concerns, given what sort of information is possible to share. But "is there an actual way to criticize Dario and/or Daniela in a way that will realistically be given a fair hearing by someone who, if appropriate, could take some kind of action" is a crux of mine.)
Replies from: William_S, zac-hatfield-dodds↑ comment by William_S · 2024-07-05T23:56:55.319Z · LW(p) · GW(p)
Absent evidence to the contrary, for any organization one should assume board members were basically selected by the CEO. So hard to get assurance about true independence, but it seems good to at least to talk to someone who isn't a family member/close friend.
Replies from: Zach Stein-Perlman↑ comment by Zach Stein-Perlman · 2024-07-06T00:03:07.743Z · LW(p) · GW(p)
(Jay Kreps was formally selected by the LTBT. I think Yasmin Razavi was selected by the Series C investors. It's not clear how involved the leadership/Amodeis were in those selections. The three remaining members of the LTBT appear independent, at least on cursory inspection.)
↑ comment by Zac Hatfield-Dodds (zac-hatfield-dodds) · 2024-07-06T00:32:52.038Z · LW(p) · GW(p)
I think that personal incentives are an unhelpful way to try to think about or predict board behavior (for Anthropic and in general), but you can find the current members of our board listed here.
Is there an actual way to criticize Dario and/or Daniela in a way that will realistically be given a fair hearing by someone who, if appropriate, could take some kind of action?
For whom to criticize him/her/them about what? What kind of action are you imagining? For anything I can imagine actually coming up, I'd be personally comfortable raising it directly with either or both of them in person or in writing, and believe they'd give it a fair hearing as well as appropriate follow-up. There are also standard company mechanisms that many people might be more comfortable using (talk to your manager or someone responsible for that area; ask a maybe-anonymous question in various fora; etc). Ultimately executives are accountable to the board, which will be majority appointed by the long-term benefit trust from late this year.
↑ comment by Zach Stein-Perlman · 2024-07-04T19:22:20.783Z · LW(p) · GW(p)
Re 3 (and 1): yay.
If I was in charge of Anthropic I just wouldn't use non-disparagement.
↑ comment by Sam Marks (samuel-marks) · 2024-06-30T19:12:01.350Z · LW(p) · GW(p)
Anthropic has asked employees
[...]
Anthropic has offered at least one employee
As a point of clarification: is it correct that the first quoted statement above should be read as "at least one employee" in line with the second quoted statement? (When I first read it, I parsed it as "all employees" which was very confusing since I carefully read my contract both before signing and a few days ago (before posting this comment [LW(p) · GW(p)]) and I'm pretty sure there wasn't anything like this in there.)
Replies from: Vladimir_Nesov, habryka4↑ comment by Vladimir_Nesov · 2024-06-30T21:22:58.064Z · LW(p) · GW(p)
(I'm a full-time employee at Anthropic.)
I carefully read my contract both before signing and a few days ago [...] there wasn't anything like this in there.
Current employees of OpenAI also wouldn't yet have signed or even known about the non-disparagement agreement that is part of "general release" paperwork on leaving the company. So this is only evidence about some ways this could work at Anthropic, not others.
↑ comment by habryka (habryka4) · 2024-06-30T21:18:48.575Z · LW(p) · GW(p)
Yep, both should be read as "at least one employee", sorry for the ambiguity in the language.
Replies from: DanielFilan↑ comment by DanielFilan · 2024-06-30T21:24:15.027Z · LW(p) · GW(p)
FWIW I recommend editing OP to clarify this.
Replies from: neel-nanda-1↑ comment by Neel Nanda (neel-nanda-1) · 2024-06-30T22:34:49.111Z · LW(p) · GW(p)
Agreed, I think it's quite confusing as is
Replies from: habryka4↑ comment by habryka (habryka4) · 2024-06-30T22:57:36.237Z · LW(p) · GW(p)
Added a "at least some", which I hope clarifies.
↑ comment by Zach Stein-Perlman · 2024-06-30T22:15:21.206Z · LW(p) · GW(p)
I am disappointed. Using nondisparagement agreements seems bad to me, especially if they're covered by non-disclosure agreements, especially if you don't announce that you might use this.
My ask-for-Anthropic now is to explain the contexts in which they have asked or might ask people to incur nondisparagement obligations, and if those are bad, release people and change policy accordingly. And even if nondisparagement obligations can be reasonable, I fail to imagine how non-disclosure obligations covering them could be reasonable, so I think Anthropic should at least do away with the no-disclosure-of-nondisparagement obligations.
↑ comment by jacobjacob · 2024-07-01T01:11:35.472Z · LW(p) · GW(p)
Does anyone from Anthropic want to explicitly deny that they are under an agreement like this?
(I know the post talks about some and not necessarily all employees, but am still interested).
Replies from: ivan-vendrov, zac-hatfield-dodds, T3t, aysja↑ comment by Ivan Vendrov (ivan-vendrov) · 2024-07-01T05:33:14.210Z · LW(p) · GW(p)
I left Anthropic in June 2023 and am not under any such agreement.
EDIT: nor was any such agreement or incentive offered to me.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2024-07-01T16:02:40.162Z · LW(p) · GW(p)
I left [...] and am not under any such agreement.
Neither is Daniel Kokotajlo. Context and wording strongly suggest that what you mean is that you weren't ever offered paperwork with such an agreement and incentives to sign it, but there remains a slight ambiguity on this crucial detail.
Replies from: ivan-vendrov↑ comment by Ivan Vendrov (ivan-vendrov) · 2024-07-01T16:31:32.264Z · LW(p) · GW(p)
Correct, I was not offered such paperwork nor any incentives to sign it. Edited my post to include this.
↑ comment by Zac Hatfield-Dodds (zac-hatfield-dodds) · 2024-07-01T05:55:12.586Z · LW(p) · GW(p)
I am a current Anthropic employee, and I am not under any such agreement, nor has any such agreement ever been offered to me.
If asked to sign a self-concealing NDA or non-disparagement agreement, I would refuse.
↑ comment by RobertM (T3t) · 2024-07-01T01:36:41.398Z · LW(p) · GW(p)
Did you see Sam's comment [LW(p) · GW(p)]?
↑ comment by William_S · 2024-07-01T18:59:50.564Z · LW(p) · GW(p)
I agree that this kind of legal contract is bad, and Anthropic should do better. I think there are a number of aggravating factors which made the OpenAI situation extraordinarily bad, and I'm not sure how many of these might obtain regarding Anthropic (at least one comment from another departing employee about not being offered this kind of contract suggests the practice is less widespread).
-amount of money at stake
-taking money, equity or other things the employee believed they already owned if the employee doesn't sign the contract, vs. offering them something new (IANAL, but in some cases this could be felony "grand theft wages" under California law if a threat to withhold wages for not signing a contract is actually carried out; what kinds of equity count as wages would be a complex legal question)
-is this offered to everyone, or only under circumstances where there's a reasonable justification?
-is this only offered when someone is fired or also when someone resigns?
-to what degree are the policies of offering contracts concealed from employees?
-if someone asks to obtain legal advice and/or negotiate before signing, does the company allow this?
-if this becomes public, does the company try to deflect/minimize/only address issues that are made public, or do they fix the whole situation?
-is this close to "standard practice" (which doesn't make it right, but makes it at least seem less deliberately malicious), or is it worse than standard practice?
-are there carveouts that reduce the scope of the non-disparagement clause (explicitly allow some kinds of speech, overriding the non-disparagement)?
-are there substantive concerns that the employee has at the time of signing the contract, that the agreement would prevent discussing?
-are there other ways the company could retaliate against an employee/departing employee who challenges the legality of contract?
I think with termination agreements on being fired there's often 1. some amount of severance offered 2. a clause that says "the terms and monetary amounts of this agreement are confidential" or similar. I don't know how often this also includes non-disparagement. I expect that most non-disparagement agreements don't have a term or limits on what is covered.
I think a steelman of this kind of contract is: Suppose you fire someone, believe you have good reasons to fire them, and you think that them loudly talking about how it was unfair that you fired them would unfairly harm your company's reputation. Then it seems somewhat reasonable to offer someone money in exchange for "don't complain about being fired". The person who was fired can then decide whether talking about it is worth more than the money being offered.
However, you could accomplish this with a much more limited contract, ideally one that lets you disclose "I signed a legal agreement in exchange for money to not complain about being fired", and doesn't cover cases where "years later, you decide the company is doing the wrong thing based on public information and want to talk about that publicly" or similar.
I think it is not in the nature of most corporate lawyers to think about "is this agreement giving me too much power?" and most employees facing such an agreement just sign it without considering negotiating or challenging the terms.
For any future employer, I will ask about their policies for termination contracts before I join (as this is when you have the most leverage: once they give you an offer, they want to convince you to join).
↑ comment by Neel Nanda (neel-nanda-1) · 2024-07-12T07:24:45.533Z · LW(p) · GW(p)
This is true. I signed a concealed non-disparagement when I left Anthropic in mid 2022. I don't have clear evidence this happened to anyone else (but that's not strong evidence of absence). More details here [LW(p) · GW(p)]
EDIT: I should also clarify that I personally don't think Anthropic acted that badly, and recommend reading about what actually happened before forming judgements. I do not think I am the person referred to in Habryka's comment.
↑ comment by ChristianKl · 2024-06-30T09:59:42.487Z · LW(p) · GW(p)
In the case of OpenAI most of the debate was about ex-employees. Are we talking about current employees or ex-employees here?
Replies from: habryka4↑ comment by habryka (habryka4) · 2024-06-30T17:27:29.149Z · LW(p) · GW(p)
I am including both in this reference class (i.e. when I say employee above, it refers to both present employees and employees who left at some point). I am intentionally being broad here to preserve more anonymity of my sources.
↑ comment by jacobjacob · 2024-07-01T01:13:46.443Z · LW(p) · GW(p)
Not sure how to interpret the "agree" votes on this comment. If someone is able to share that they agree with the core claim because of object-level evidence, I am interested. (Rather than agreeing with the claim that this state of affairs is "quite sad".)
↑ comment by Dagon · 2024-06-30T21:12:18.291Z · LW(p) · GW(p)
A LOT depends on the details of WHEN the employees make the agreement, the specifics of duration and remedy, and the (much harder to know) apparent willingness to enforce in edge cases.
"significant financial incentive to promise" is hugely different from "significant financial loss for choosing not to promise". MANY companies have such things in their contracts, and they're a condition of employment. And they're pretty rarely enforced. That's a pretty significant incentive, but it's prior to investment, so it's nowhere near as bad.
↑ comment by Jacob Pfau (jacob-pfau) · 2024-06-30T18:45:13.616Z · LW(p) · GW(p)
A pre-existing market on this question https://manifold.markets/causal_agency/does-anthropic-routinely-require-ex?r=SmFjb2JQZmF1
↑ comment by Zach Stein-Perlman · 2024-06-30T21:50:52.720Z · LW(p) · GW(p)
What's your median-guess for the number of times Anthropic has done this?
Replies from: habryka4↑ comment by habryka (habryka4) · 2024-06-30T22:53:01.517Z · LW(p) · GW(p)
(Not answering this question since I think it would leak too many bits on confidential stuff. In general I will be a bit hesitant to answer detailed questions on this, or I might take a long while to think about what to say before I answer, which I recognize is annoying, but I think is the right tradeoff in this situation)
↑ comment by Zane · 2024-07-01T06:55:01.028Z · LW(p) · GW(p)
I'm kind of concerned about the ethics of someone signing a contract and then breaking it to anonymously report what's going on (if that's what your private source did). I think there's value from people being able to trust each others' promises about keeping secrets, and as much as I'm opposed to Anthropic's activities, I'd nevertheless like to preserve a norm of not breaking promises.
Can you confirm or deny whether your private information comes from someone who was under a contract not to give you that private information? (I completely understand if the answer is no.)
Replies from: habryka4, Benito↑ comment by habryka (habryka4) · 2024-07-01T07:12:29.776Z · LW(p) · GW(p)
(Not going to answer this question for confidentiality/glommarization reasons)
↑ comment by Ben Pace (Benito) · 2024-07-10T20:03:52.984Z · LW(p) · GW(p)
I think this is a reasonable question to ask. I will note that in this case, if your guess is right about what happened, the breaking of the agreement is something that it turned out the counterparty endorsed, or at least, after the counterparty became aware of the agreement, they immediately lifted it.
I still think there's something to maintaining all agreements regardless of context, but I do genuinely think it matters here if you (accurately) expect the entity you've made the secret agreement with would likely retract it if they found out about it.
(Disclaimer that I have no private info about this specific situation.)
comment by habryka (habryka4) · 2019-05-09T19:12:09.799Z · LW(p) · GW(p)
Thoughts on integrity and accountability
[Epistemic Status: Early draft version of a post I hope to publish eventually. Strongly interested in feedback and critiques, since I feel quite fuzzy about a lot of this]
When I started studying rationality and philosophy, I had the perspective that people who were in positions of power and influence should primarily focus on how to make good decisions in general and that we should generally give power to people who have demonstrated a good track record of general rationality. I also thought of power as this mostly unconstrained resource, similar to having money in your bank account, and that we should make sure to primarily allocate power to the people who are good at thinking and making decisions.
That picture has changed a lot over the years. While I think there is still a lot of value in the idea of "philosopher kings", I've made a variety of updates that significantly changed my relationship to allocating power in this way:
- I have come to believe that people's ability to come to correct opinions about important questions is in large part a result of whether their social and monetary incentives reward them when they have accurate models in a specific domain. This means a person can have extremely good opinions in one domain of reality, because they are subject to good incentives, while having highly inaccurate models in a large variety of other domains in which their incentives are not well optimized.
- People's rationality is much more defined by their ability to maneuver themselves into environments in which their external incentives align with their goals, than by their ability to have correct opinions while being subject to incentives they don't endorse. This is a tractable intervention and so the best people will be able to have vastly more accurate beliefs than the average person, but it means that "having accurate beliefs in one domain" doesn't straightforwardly generalize to "will have accurate beliefs in other domains".
One is strongly predictive of the other, and that's in part due to general thinking skills and broad cognitive ability. But another major piece of the puzzle is the person's ability to build and seek out environments with good incentive structures.
- Everyone is highly irrational in their beliefs about at least some aspects of reality, and positions of power in particular tend to create strong incentives that are not well aligned with the truth. This means that highly competent people in positions of power often have less accurate beliefs than much less competent people who are not in positions of power.
- The design of systems that hold people who have power and influence accountable in a way that aligns their interests with both forming accurate beliefs and the interests of humanity at large is a really important problem, and is a major determinant of the overall quality of the decision-making ability of a community. General rationality training helps, but for collective decision making the creation of accountability systems, the tracking of outcome metrics and the design of incentives is at least as big of a factor as the degree to which the individual members of the community are able to come to accurate beliefs on their own.
A lot of these updates have also shaped my thinking while working at CEA, LessWrong and the LTF-Fund over the past 4 years. I've been in various positions of power, and have interacted with many people who had lots of power over the EA and Rationality communities, and I've become a lot more convinced that there is a lot of low-hanging fruit and important experimentation to be done to ensure better levels of accountability and incentive-design for the institutions that guide our community.
I also generally have broadly libertarian intuitions, and a lot of my ideas about how to build functional organizations are based on a more startup-like approach that is favored here in Silicon Valley. Initially these intuitions seemed in conflict with the intuitions for more emphasis on accountability structures, with broken legal systems, ad-hoc legislation, dysfunctional boards and dysfunctional institutions all coming to mind immediately as accountability-systems run wild. I've since reconciled my thoughts on these topics a good bit.
Integrity
Somewhat surprisingly, "integrity" has not been much discussed as a concept handle on LessWrong. But I've found it to be a pretty valuable virtue to meditate and reflect on.
I think of integrity as a more advanced form of honesty – when I say “integrity” I mean “acting in accordance with your stated beliefs.” Where honesty is the commitment to not speak direct falsehoods, integrity is the commitment to speak truths that actually ring true to yourself, not ones that are just abstractly defensible to other people. It is also a commitment to act on the truths that you do believe, and to communicate to others what your true beliefs are.
Integrity can be a double-edged sword. While it is good to judge people by the standards they expressed, it is also a surefire way to make people overly hesitant to update. If you get punished every time you change your mind because your new actions are now incongruent with the principles you explained to others before you changed your mind, then you are likely to stick with your principles for far longer than you would otherwise, even when evidence against your position is mounting.
The great benefit that I experienced from thinking of integrity as a virtue is that it encourages me to build accurate models of my own mind and motivations. I can only act in line with ethical principles that are actually related to the real motivators of my actions. If I pretend to hold ethical principles that do not correspond to my motivators, then sooner or later my actions will diverge from my principles. I've come to think of a key part of integrity as the art of making accurate predictions about my own actions and communicating those as clearly as possible.
There are two natural ways to ensure that your stated principles are in line with your actions. You either adjust your stated principles until they match up with your actions, or you adjust your behavior to be in line with your stated principles. Both of those can backfire, and both of those can have significant positive effects.
Who Should You Be Accountable To?
In the context of incentive design, I find thinking about integrity valuable because it feels to me like the natural complement to accountability. The purpose of accountability is to ensure that you do what you say you are going to do, and integrity is the corresponding virtue of holding up well under high levels of accountability.
Highlighting accountability as a variable also highlights one of the biggest error modes of accountability and integrity – choosing too broad of an audience to hold yourself accountable to.
There is a tradeoff between the size of the group that you are being held accountable by, and the complexity of the ethical principles you can act under. Too large of an audience, and you will be held accountable by the lowest common denominator of your values, which will rarely align well with what you actually think is moral (if you've done any kind of real reflection on moral principles).
Too small or too memetically close of an audience, and you risk not having enough people paying attention to what you do to actually help you notice inconsistencies in your stated beliefs and actions. The smaller the group that is holding you accountable, the smaller your inner circle of trust, which reduces the amount of total resources that can be coordinated under your shared principles.
I think a major mistake that even many well-intentioned organizations make is to try to be held accountable by some vague conception of "the public". As they make public statements, someone in the public will misunderstand them, causing a spiral of less communication, resulting in more misunderstandings, resulting in even less communication, culminating in an organization that is completely opaque about any of its actions and intentions, with the only communication being filtered by a PR department that has little interest in the observers acquiring any beliefs that resemble reality.
I think a generally better setup is to choose a much smaller group of people that you trust to evaluate your actions very closely, and ideally do so in a way that is itself transparent to a broader audience. Common versions of this are auditors, as well as nonprofit boards that try to ensure the integrity of an organization.
This is all part of a broader reflection on trying to create good incentives for myself and the LessWrong team. I will probably follow this up with a post that summarizes more concretely how all of this applies to LessWrong.
In summary:
- One lens to view integrity through is as an advanced form of honesty – “acting in accordance with your stated beliefs.”
- To improve integrity, you can either try to bring your actions in line with your stated beliefs, bring your stated beliefs in line with your actions, or rework both at the same time. These options all have failure modes, but also potential benefits.
- People with power sometimes have incentives that systematically warp their ability to form accurate beliefs, and (correspondingly) to act with integrity.
- An important tool for maintaining integrity (in general, and in particular as you gain power) is to carefully think about what social environment and incentive structures you want for yourself.
- Choose carefully who, and how many people, you are accountable to:
- Too many people, and you are limited in the complexity of the beliefs and actions that you can justify.
- Too few people, too similar to you, and you won’t have enough opportunities for people to notice and point out what you’re doing wrong. You may also not end up with a strong enough coalition aligned with your principles to accomplish your goals.
↑ comment by Raemon · 2019-05-12T02:57:08.357Z · LW(p) · GW(p)
Just wanted to say I like this a lot and think it'd be fine as a full fledged post. :)
Replies from: Zvi, elityre↑ comment by Zvi · 2019-06-02T11:36:51.491Z · LW(p) · GW(p)
More than fine. Please do post a version on its own. A lot of strong insights here, and where I disagree there's good stuff to chew on. I'd be tempted to respond with a post.
I do think this has a different view of integrity than I have, but in writing it out, I notice that the word is overloaded and that I don't have as good a grasp of its details as I'd like. I'm hesitant to throw out a rival definition until I have a better grasp here, but I think the thing you're in accordance with is not beliefs so much as principles?
↑ comment by Eli Tyre (elityre) · 2019-06-02T09:45:41.665Z · LW(p) · GW(p)
Seconded.
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2019-06-02T15:45:02.713Z · LW(p) · GW(p)
Thirded.
Replies from: saul-munn↑ comment by Saul Munn (saul-munn) · 2024-07-07T20:15:24.440Z · LW(p) · GW(p)
fourthed. oli, do you intend to post this?
if not, could i post this text as a linkpost to this shortform?
Replies from: habryka4↑ comment by habryka (habryka4) · 2024-07-07T20:16:22.256Z · LW(p) · GW(p)
It's long been posted!
Integrity and accountability are core parts of rationality [LW · GW]
Replies from: saul-munn↑ comment by Saul Munn (saul-munn) · 2024-07-07T20:59:13.040Z · LW(p) · GW(p)
ah, lovely! maybe add that link as an edit to the top-level shortform comment?
↑ comment by Eli Tyre (elityre) · 2019-06-02T09:45:25.048Z · LW(p) · GW(p)
This was a great post that might have changed my worldview some.
Some highlights:
1.
People's rationality is much more defined by their ability to maneuver themselves into environments in which their external incentives align with their goals, than by their ability to have correct opinions while being subject to incentives they don't endorse. This is a tractable intervention and so the best people will be able to have vastly more accurate beliefs than the average person, but it means that "having accurate beliefs in one domain" doesn't straightforwardly generalize to "will have accurate beliefs in other domains".
I've heard people say things like this in the past, but haven't really taken it seriously as an important component of my rationality practice. Somehow what you say here is compelling to me (maybe because I recently noticed a major place where my thinking was constrained by my social ties and social standing), and it prodded me to think about how to build "mech suits" that not only increase my power but also incentivize my rationality. I now have a todo item to "think about principles for incentivizing true beliefs, in team design."
2.
I think a generally better setup is to choose a much smaller group of people that you trust to evaluate your actions very closely,
Similarly, thinking explicitly about which groups I want to be accountable to sounds like a really good idea.
I had been going through the world keeping this Paul Graham quote in mind...
I think the best test is one Gino Lee taught me: to try to do things that would make your friends say wow. But it probably wouldn't start to work properly till about age 22, because most people haven't had a big enough sample to pick friends from before then.
...choosing good friends, and doing things that would impress them.
But what you're pointing at here seems like a slightly different thing: which people do I want to make myself transparent to, so that they can judge if I'm living up to my values?
This also gave me an idea for a CFAR style program: a reassess your life workshop, in which a small number of people come together for a period of 3 days or so, and reevaluate cached decisions. We start by making lines of retreat (with mentor assistance), and then look at high impact questions in our life: given new info, does your current job / community / relationship / life-style choice / other still make sense?
Thanks for writing.
↑ comment by mako yass (MakoYass) · 2019-05-12T08:41:07.187Z · LW(p) · GW(p)
I think you might be confusing two things under "integrity". Having more confidence in your own beliefs than in the shared/imposed beliefs of your community isn't really a virtue or... it's more just a condition that a person can be in, and whether it's virtuous is completely contextual. Sometimes it is, sometimes it isn't. I can think of lots of people who should have more confidence in other people's beliefs than they have in their own. In many domains, that's me. I should listen more. I should act less boldly. An opposite of that sense of integrity is the virtue of respect, recognising other people's qualities; it's a skill. If you don't have it, you can't make use of other people's expertise very well. An excess of respect makes for a person who is easily moved by others' feedback, usually a person who is patient with their surroundings.
On the other hand, I can completely understand the value of {having a known track record of staying true to self-expression, claims made about the self}. Humility is actually a part of that. The usefulness of delineating that as a virtue separate from the more general Honesty is clear to me.
Replies from: Pattern↑ comment by ioannes (ioannes_shade) · 2019-05-19T15:58:48.860Z · LW(p) · GW(p)
See Sinclair: "It is difficult to get a man to understand something, when his salary depends upon his not understanding it!"
comment by habryka (habryka4) · 2024-04-01T20:06:23.949Z · LW(p) · GW(p)
Welp, I guess my life is Comic Sans today. The EA Forum snuck some code into our deployment bundle for my account in particular, lol: https://github.com/ForumMagnum/ForumMagnum/pull/9042/commits/ad99a147824584ea64b5a1d0f01e3f2aa728f83a
Replies from: Benito, jp, habryka4, winstonBosan↑ comment by Ben Pace (Benito) · 2024-04-01T23:12:53.418Z · LW(p) · GW(p)
Screenshot for posterity.
↑ comment by jp · 2024-04-03T14:17:21.079Z · LW(p) · GW(p)
🙇♂️
Replies from: habryka4↑ comment by habryka (habryka4) · 2024-04-03T20:00:43.414Z · LW(p) · GW(p)
😡
↑ comment by habryka (habryka4) · 2024-04-03T05:00:39.180Z · LW(p) · GW(p)
And finally, I am freed from this curse.
↑ comment by winstonBosan · 2024-04-01T20:40:34.340Z · LW(p) · GW(p)
I hope the partial unveiling of your user_id hash will not doom us all, somehow.
Replies from: habryka4↑ comment by habryka (habryka4) · 2024-04-01T20:44:59.925Z · LW(p) · GW(p)
You can just get people's userIds via the API, so it's nothing private.
comment by habryka (habryka4) · 2024-03-24T19:21:08.331Z · LW(p) · GW(p)
A thing that I've been thinking about for a while has been to somehow make LessWrong into something that could give rise to more personal-wikis and wiki-like content. Gwern's writing has a very different structure and quality to it than the posts on LW, with the key components being that they get updated regularly and serve as more stable references for some concept, as opposed to a post which is usually anchored in a specific point in time.
We have a pretty good wiki system for our tags, but never really allowed people to just make their personal wiki pages, mostly because there isn't really any place to find them. We could list the wiki pages you created on your profile, but that doesn't really seem like it would allocate attention to them successfully.
I was thinking about this more recently as Arbital is going through another round of slowly rotting away (its search is currently broken, and this is very hard to fix due to annoying Google App Engine restrictions), and thinking about importing all the Arbital content into LessWrong. That might be a natural time to do a final push to enable people to write more wiki-like content on the site.
Replies from: gwern, Seth Herd, Dagon, niplav, Chris_Leong↑ comment by gwern · 2024-03-25T02:24:36.976Z · LW(p) · GW(p)
somehow make LessWrong into something that could give rise to more personal-wikis and wiki-like content. Gwern's writing has a very different structure and quality to it than the posts on LW...We have a pretty good wiki system for our tags, but never really allowed people to just make their personal wiki pages, mostly because there isn't really any place to find them. We could list the wiki pages you created on your profile, but that doesn't really seem like it would allocate attention
Multi-version wikis are a hard design problem.
It's something that people kept trying, when they soured on a regular Wikipedia: "the need for consensus makes it impossible for minority views to get a fair hearing! I'll go make my own Wikipedia where everyone can have their own version of an entry, so people can see every side! with blackjack & hookers & booze!" And then it becomes a ghost town, just like every other attempt to replace Wikipedia. (And that's if you're lucky: if you're unlucky you turn into Conservapedia or Rational Wiki.) I'm not aware of any cases of 'non-consensus' wikis that really succeed - it seems that usually, there's so little editor activity to go around that having parallel versions winds up producing a whole lotta nothing, and the sub-versions are useless and soon abandoned by the original editor, and then the wiki as a whole fails. (See also: Arbital [LW · GW].) Successful wikis all generally seem to follow the Wikipedia approach of a centralized consensus wiki curated & largely edited by an oligarchy; for example, Wikia fandom wikis will be dominated by a few super-fans, or the Reddit wikis attached to each subreddit will be edited by the subreddit moderators.
(There is also an older model like the original Ward's Wiki* or the Emacs Wiki where pages might be carefully written & curated, but might also just be dumps of any text anyone felt like including, including editors chatting back and forth, comments appended to excerpts, and ad hoc separators like horizontal rules to denote a change of topic; this sorta worked, but all of them seem to be extremely obscure/small and the approach has faded out almost completely.)
And for LW2, a lot of contributions are intrinsically personally-motivated in a way which seems hard to reconcile with the anonymous-laboring-masses-building-Egyptian-pyramids/Wikipedias. Laboring to create an anonymous consensus account of some idea or concept is not too rewarding. (Recall how few people seriously edit even a titanic success like Wikipedia!) So you have a needle to thread: somehow individualized and personalized, but also somehow community-like...? Is there any site online which manages to hybridize wikis with forums? Not that I know of. (StackOverflow? EN World has "wiki threads" but I dunno how well they work.) It can't be easy!
Going from 'fast' to 'slow' is one of, I think, the biggest challenges and dividing lines of Internet community design.
* saturn2 notes a 2010 appenwar essay describing the transition from the original exuberant, freewheeling, almost-4chan-esque wiki culture to the 'Wikipedia deletionist' culture now assumed to be the default for all wikis; how one of his companies made heavy use of an internal wiki for everything (where the more comprehensive it got, the more useful it got); and also the failure mode of an 'internal' wiki starving the 'external' wiki due to friction [LW · GW].
Still, if one wanted to try, I think a better perspective might be 'personal gardens' and hypertext transclusions (as now used heavily on Gwern.net). The goal would be to, in essence, try to piggyback a sort of Roam/Notion/PMwiki personal wiki system onto user comments & a site-wide wiki.
One of the rewarding things about a 'personal garden' is being able to easily interconnect things and expand them and see it grow; this is something you don't really get with either LW2 posts or comments---sure, you can hyperlink them, and you can keep editing them and adding to them, but this is not the way they are most easily done (which is 'fire and forget'). Each one is ultimately trapped in its original context, date-bound. You don't get that with the standard wiki either, unless you either operate the entire wiki and it's a personal wiki, or you have de facto control over the set of articles you care about. You can't write a really personalized wiki article because someone else could always come by and delete it. (One of the reasons I stopped editing English Wikipedia is my sadness at seeing interesting blockquotes, humorous captions, and amusing details constantly being stripped out by humorless narrow-minded deletionists who apparently feel that about every topic, the less said the better.)
So to hybridize this, I would suggest a multi-version wiki model where on each page, there is a 'consensus' entry, which acts exactly like the current wiki does and like one expects a normal wiki to act. Anyone can edit it, edits can be reverted by other editors, statements in it are expected to be 'consensus' and not simply push minority POVs without qualification, and so on and so forth.
But below the consensus entry (if it exists), there are $USER entries. All $USER entries are transcluded one by one below the main consensus entry. (This will look familiar to any Gwern.net reader.)
They are clearly titled & presented as by that user.
(They may be sorted by length, most recently modified, or possibly karma; I think I would avoid showing anyone the karma, however, and solely use it for ranking.)
Any user can create their own corresponding user entry for any wiki page, which will be transcluded below the consensus entry, and they are the only ones who can edit it; they can write their own entry, criticize the consensus one, list relevant links, muse about the topic, and so on.
(I think 'subpages' are a convenient syntax here: I don't know how the current syntax operates, but let's say that a LW2 wiki entry on 'Acausal cooperation' lives at the URL /wiki/Acausal_cooperation; then if I wrote a user entry for it, it would live at /wiki/Acausal_cooperation/gwern.)
These entries can be viewed on their own, and given convenient abbreviated shortcuts (why not $USER/article-name as well?); this makes them dead-easy to remember and type ("Where's gwern's wiki-essay on acausal cooperation? oh."). They also include a list of backlinks by that user and then by other users.
Diffs can be displayed on user-profiles similar to comments.
User-entries can be displayed in a compact table/block on a user-profile, eg. Foo · Bar · Baz · ...; line-wrapping for a few lines ought to cover even highly-prolific users for a long time. They could also be unified with Shortform/Quick Takes: each user entry is just a comment there. (Which might help with implementing comments, if a user entry is just a transcluded comment tree. This means that if you want to add to or criticize some part of a user entry, well, you just reply to it right then and there!)
Users might, for the most part, just edit the consensus entry. If they have something spicy to say or they think there's something wrong with it (like arguing over the definition), they'll choose to add it to their respective user entry to avoid another editor screwing with it or reverting them. In the best-case scenario, this seduces users into regularly updating and expanding consensus wiki entries as they realize some additional piece doesn't need to go into their personal entry. Or they might want to make a point of periodically 'promoting' their personal entries into the consensus entries, if no one objects.
Users are motivated to create user entries as a way of organizing their own comments and articles across long time periods: every time you link a wiki article (whether consensus or user), you create a backlink which makes re-finding it easier. The implicit tags will be used much more than explicit ones. There is no way to 'tag' a comment right now; it's not that easy to add a tag to your own post; it's not even that easy to navigate the tags; but it would be easy to look up a wiki article and look at the backlinks.
So one could imagine a regular cycle of writing comments which link to key wiki pages, sometimes accumulating into a regular LW2 post, followed by summarization into a wiki page, refactoring into multiple wiki pages as the issue becomes clearer, and starting over with a better understanding and vocabulary as reflected in the new set of pages which can be linked in comments on each topic...
At no point is the user committed to some Big Deal website or Essay, like they think they're a big shot who's earned some sort of hero license to write online - "creating your own public wiki? my, that's quite some self-esteem there; remind me again what your degree was in and where you have tenure?" Like tweeting, it all just happens one little step at a time: link a wiki entry in one comment, another elsewhere, leave a quick little clarification or neat tangent in your user entry in an obscure page, get a little more argumentative with another consensus entry you think is mistaken, go so far as to create a new page on a technical term you think would be useful...
On the technical side, I think this shouldn't require too much violence to the existing codebase, as a wiki is already implemented & running, LW2 already supports some degree of transclusion (both server & client-side), sub-pages are usually feasible in wikis and just require access controls added on to match user==page-name, and backlinks should already be provided by the wiki and automatic for sub-pages as well. The difficulty is making it seamless and friction-free and intuitive.
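(To make the access-control piece concrete, here is a minimal sketch in Python of the sub-page URL scheme and the user==page-name edit rule described above. This is purely illustrative and not the actual LW2 codebase; the function names are invented.)

```python
# Illustrative sketch (not the real LW2 code): resolve a wiki URL into
# (article, owner) and apply the "user == page-name" edit rule.

from typing import Optional, Tuple

def parse_wiki_path(path: str) -> Optional[Tuple[str, Optional[str]]]:
    """Split '/wiki/Acausal_cooperation/gwern' into ('Acausal_cooperation', 'gwern');
    a bare '/wiki/Acausal_cooperation' is the consensus entry (owner=None)."""
    parts = [p for p in path.split("/") if p]
    if len(parts) < 2 or parts[0] != "wiki":
        return None
    article = parts[1]
    owner = parts[2] if len(parts) > 2 else None
    return article, owner

def can_edit(entry_owner: Optional[str], username: str) -> bool:
    """Consensus entries (owner=None) are editable by anyone; user entries only by their owner."""
    return entry_owner is None or entry_owner == username

# The consensus entry is open to all; the /gwern sub-page is locked to gwern.
assert parse_wiki_path("/wiki/Acausal_cooperation/gwern") == ("Acausal_cooperation", "gwern")
assert can_edit("gwern", "gwern") and not can_edit("gwern", "TurnTrout")
assert can_edit(None, "anyone")
```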
Replies from: habryka4, Chris_Leong↑ comment by habryka (habryka4) · 2024-03-25T06:11:44.764Z · LW(p) · GW(p)
So, the key difficulty this feels to me like it's eliding is the ontology problem. One thing that feels cool about personal wikis is that people come up with their own factorization and ontology for the things they are thinking about. Like, we probably won't have a consensus article on the exact ways L in Death Note made mistakes, but Gwern.net would be sadder without that kind of content.
So I think in addition to the above there needs to be a way for users to easily and without friction add a personal article for some concept they care about, and to have a consistent link to it, in a way that doesn't destroy any of the benefits of the collaborative editing.
My sense is that collaboratively edited wikis tend to thrive heavily around places where there is a clear ontology and decay when the ontology is unclear or the domain permits many orthogonal carvings. This is why video game wikis are so common and usually successful: by the nature of their programming, games almost always have a clear structure to them (the developer probably coded an abstraction for "enemies" and "attack patterns" and "levels", so the wiki can easily mirror and document them).
It feels to me that anything that wants to somehow build a unification of personal wikis and consensus wikis needs to figure out how to gracefully handle the ontology problem.
Replies from: gwern, Chris_Leong↑ comment by gwern · 2024-03-25T14:07:19.508Z · LW(p) · GW(p)
One thing that feels cool about personal wikis is that people come up with their own factorization and ontology for the things they are thinking about...So I think in addition to the above there needs to be a way for users to easily and without friction add a personal article for some concept they care about, and to have a consistent link to it, in a way that doesn't destroy any of the benefits of the collaborative editing.
My proposal already provides a way to easily add a personal article with a consistent link, while preserving the ability to do collaborative editing on 'public' articles. Strictly speaking, it's fine for people to add wiki entries for their own factorization and ontology.
There is no requirement for those to all be 'official': there doesn't have to be a 'consensus' entry. Nothing about a /wiki/Acausal_cooperation/gwern user entry requires the /wiki/Acausal_cooperation consensus entry to exist. (Computers are flexible like that.) That just means there's nothing there at that exact URL, or probably better, it falls back to displaying all sub-pages of user entries like usual. (User entries presumably get some sort of visual styling, in the same way that comments on a post look different from a post, which in addition to the title/author metadata displayed, avoids confusion.)
If, say, TurnTrout wants to create a user entry /wiki/Reward_is_not_the_optimization_target/TurnTrout as a master key to all of his posts & comments and related ones like Nora Belrose's posts, rather than go for a consensus entry, that's fine. And if it becomes commonly-accepted jargon and part of the ontology, and so it becomes a hassle that people can't edit his user entry (rather than leave their own user entry or comments), his user entry can be 'promoted', ie. just be copied by someone into a new consensus entry at /wiki/Reward_is_not_the_optimization_target that can then be edited, and his user entry left as historical or possibly collapsed/hidden by an admin for readability.
(The growth of the ad hoc user & consensus ontology might be a bit messy and sometimes it might be a good idea to delete/edit user entries by users who are gone or uncooperative or edit their entries to update links, but that's little different from a regular wiki's need for admins to do similar maintenance.)
Like, we probably won't have a consensus article on the exact ways L in Death Note made mistakes, but Gwern.net would be sadder without that kind of content.
The DN essay mostly would not make sense as a wiki entry, and indeed, it's been 'done' ever since 2013 or so. There's not much more to be said about the topic (despite occasional attempts at criticism, which typically just wind up repeating something I already said in it). It doesn't need wiki support or treatment, and it was a post-like essay: I wrote it up as a single definitive piece and it was discussed at a particular time and place. (Actually, I think I did originally post it on LW1?) It benefits from ongoing Gwern.net improvements, but mostly in a general typographical sense rather than being deeply integrated into other pages.
The parts of it that keep changing do have wiki-like nature:
- For example, the parts from Jaynes would make sense as part of a 'Legal Bayesianism' article, which would be useful to invoke in many other posts like debates on Amanda Knox's innocence.
- The basic concept of 'information' from information theory as whatever lets you narrow down a haystack into a needle (which is the 'big idea' of the essay - teaching you information theory by example, by dramatizing the hunt for a criminal leaking circumstantial evidence) is certainly a wiki-worthy topic that people could benefit from naming explicitly and appending their own discussions or views to. This could come up in many places, from looking for aliens to program search in DL scaling or the AIXI paradigm to thinking about Bayesian reasoning (eg. Eliezer on how many bits of information it takes to make a hypothesis 'live' at all).
- Or that big list of side channels / deanonymization methods would make complete sense as a wiki entry which people could contribute piquant instances to, and which would be useful for linking elsewhere on LW2, particularly in articles on AGI security and why successfully permanently boxing a malevolent, motivated superintelligence would be extremely difficult - because there are bazillions of known side-channels which enable exfiltration/control/inference & we keep discovering new ones, like how to turn computer chips into radios/speakers, or entire families of attacks like Spectre or Rowhammer.
(One reason I've invested so much effort into the tag-directory system is the hope of replacing such churning lists with transcludes of tags. The two major examples right now are https://gwern.net/dnm-archive#works-using-this-dataset and https://gwern.net/danbooru2021#publications - I want to track all users/citers of my datasets to establish their value for researchers & publicize those uses, but adding them manually was constant toil and increasingly redundant with the annotations. So with appropriate tooling, I switch to transcluding a tag for the citers instead. Any time a new user shows up, I just write an annotation for it, as I would have before, and add a dnm-archive or danbooru tag to it, and then they show up automatically with no further work. So you could imagine doing the same thing in my DN essay: instead of that long unordered list, which is tedious to update every time there's a fun security paper or blog post, I would instead have a tag like cs/security/side-channel where each of those is annotated, and simply transclude the table of contents for that. If I still wanted a natural-language summary similar to the existing list, well, I could just stick that at the top of the tag and benefit every instance of the tag.)
↑ comment by Chris_Leong · 2024-03-25T08:20:32.228Z · LW(p) · GW(p)
- Users can just create pages corresponding to their own categories
- Like Notion we could allow two-way links between pages so users would just tag the category in their own custom inclusions.
↑ comment by Chris_Leong · 2024-03-25T08:24:45.921Z · LW(p) · GW(p)
I agree with Gwern. I think it's fairly rare that someone wants to write the whole entry themselves or articles for all concepts in a topic.
It's much more likely that someone just wants to add their own idiosyncratic takes on a topic. For example, I'd love to have a convenient way to write up my own idiosyncratic takes on decision theory. I tried including some of these in the main Wiki, but it (understandably) was reverted.
I expect that one of the main advantages of this style of content would be that you can just write a note without having to bother with an introduction or conclusion.
I also think it would be fairly important (though not at the start) to have a way of upweighting the notes added by particular users.
I agree with Gwern that this may result in more content being added to the main wiki pages when other users are in favour of this.
↑ comment by Seth Herd · 2024-03-26T20:31:38.049Z · LW(p) · GW(p)
TLDR: The only thing I'd add to Gwern's proposal is making sure there are good mechanisms to discuss changes. Improving the wiki and focusing on it could really improve alignment research overall.
Using the LW wiki more as a medium for collaborative research could be really useful in bringing new alignment thinkers up to speed rapidly. I think this is an important part of the overall project; alignment is seeing a burst of interest, and being able to rapidly make use of bright new minds who want to donate their time to the project might very well make the difference in adequately solving alignment in time.
As it stands, someone new to the field has to hunt for good articles on any topic; those articles provide some links to other important articles, but that's not really their job. The wiki's tags do serve that purpose. The articles are sometimes a good overview of that concept or topic, but more community focus on the wiki could make them work much better as a way to get up to speed.
Ideally each article aims to be a summary of current thinking on that topic, including both majority and minority views. One key element is making this project build community rather than strain it. Having people with different views work well collaboratively is a bit tricky. Good mechanisms for discussion are one way to reduce friction and any trend toward harsh feelings when one's contributions are changed. The existing comment system might be adequate, particularly with more of a norm of linking changes to comments, and linking to comments from the main text for commentary.
↑ comment by Dagon · 2024-03-24T23:28:08.586Z · LW(p) · GW(p)
Do you have an underlying mission statement or goal that can guide decisions like this? IMO, there are plenty of things that should probably continue to live elsewhere, with some amount of linking and overlap when they're lesswrong-appropriate.
One big question in my mind is "should LessWrong use a different karma/voting system for such content?". If the answer is yes, I'd put a pretty high bar for diluting LessWrong with it, and it would take a lot of thought to figure out the right way to grade "wanted on LW" for wiki-like articles that aren't collections/pointers to posts.
↑ comment by niplav · 2024-03-24T23:59:42.215Z · LW(p) · GW(p)
One small idea: Have the ability to re-publish posts to allPosts [? · GW] or the front page after editing. This worked in the past, but now doesn't anymore (as I noticed recently when updating this post [LW · GW]).
Replies from: habryka4↑ comment by habryka (habryka4) · 2024-03-25T00:38:28.227Z · LW(p) · GW(p)
Yeah, the EA Forum team removed that functionality (because people kept triggering it accidentally). I think that was a mild mistake, so I might revert it for LW.
↑ comment by Chris_Leong · 2024-03-25T08:17:08.902Z · LW(p) · GW(p)
Cool idea, but before doing this, one obvious inclusion would be to make it easier to tag LW articles, particularly your own articles, in posts by @including them.
comment by habryka (habryka4) · 2024-05-03T22:55:55.112Z · LW(p) · GW(p)
Does anyone have any takes on the two Boeing whistleblowers who died under somewhat suspicious circumstances? I haven't followed this in detail, and my guess is it is basically just random chance, but it sure would be a huge deal if a publicly traded company now was performing assassinations of U.S. citizens.
Curious whether anyone has looked into this, or has thought much about baseline risk of assassinations or other forms of violence from economic actors.
Replies from: habryka4, ChristianKl, Nathan Young↑ comment by habryka (habryka4) · 2024-05-03T23:03:45.988Z · LW(p) · GW(p)
@jefftk [LW · GW] comments on the HN thread on this:
How many people would, if they suddenly died, be reported as a "Boeing whistleblower"? The lower this number is, the more surprising the death.
Another HN commenter says (in a different thread):
It’s a nice little math problem.
Let’s say both of the whistleblowers were age 50. The probability of a 50 year old man dying in a year is 0.6%. So the probability of 2 or more of them dying in a year is 1 - (the probability of exactly zero dying in a year + the probability of exactly one dying in a year). 1 - (A+B).
A is (1-0.006)^N. B is 0.006N(1-0.006)^(N-1). At N=60, A is about 70% and B is about 25%, making it statistically insignificant.
But they died in the same 2 month period, so that 0.006 should be 0.001. If you rerun the same calculation, it’s 356.
Replies from: Benito, Seth Herd
↑ comment by Ben Pace (Benito) · 2024-05-03T23:35:55.317Z · LW(p) · GW(p)
I'm probably missing something simple, but what is 356? I was expecting a probability or a percent, but that number is neither.
Replies from: elifland↑ comment by elifland · 2024-05-04T01:02:49.115Z · LW(p) · GW(p)
I think 356 or more people in the population are needed to make there be a >5% chance of 2+ deaths in a 2 month span from that population
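(A quick numerical check of that threshold, as a minimal Python sketch using the 0.1% two-month death probability from the quoted calculation above:)

```python
# Sketch: find the smallest population size n for which the probability of
# 2 or more deaths in the same two-month window exceeds 5%, assuming each
# person independently has a 0.1% chance of dying in that window.

p = 0.001  # per-person probability of death in a given two-month window

def prob_two_or_more(n: int, p: float) -> float:
    """P(at least 2 deaths out of n) = 1 - P(0 deaths) - P(exactly 1 death)."""
    p_zero = (1 - p) ** n
    p_one = n * p * (1 - p) ** (n - 1)
    return 1 - p_zero - p_one

n = 2
while prob_two_or_more(n, p) <= 0.05:
    n += 1
print(n, round(prob_two_or_more(n, p), 4))  # -> 356 0.0501
```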
Replies from: IsabelJ, aphyer↑ comment by isabel (IsabelJ) · 2024-05-05T00:40:18.340Z · LW(p) · GW(p)
I think there should be some sort of adjustment for Boeing not being exceptionally sus before the first whistleblower death - shouldn't privilege Boeing until after the first death, should be thinking across all industries big enough that the news would report on the deaths of whistleblowers - which I think makes it not significant again.
↑ comment by Seth Herd · 2024-05-10T22:56:34.404Z · LW(p) · GW(p)
Ummm, wasn't one of them just about to testify against Boeing in court, on their safety practices? And they "committed suicide" after saying the day before how much they were looking forward to finally getting a hearing on their side of the story? That's what I read; I stopped at that point, thinking "about zero chance that wasn't murder".
Replies from: habryka4↑ comment by habryka (habryka4) · 2024-05-10T22:58:56.596Z · LW(p) · GW(p)
I think the priors here are very low, so while I agree it looks suspicious, I don't think it's remotely suspicious enough to have the correct posterior be "about zero chance that wasn't murder". Corporations, at least in the U.S., really very rarely murder people.
Replies from: Seth Herd↑ comment by Seth Herd · 2024-05-10T23:07:55.355Z · LW(p) · GW(p)
That's true, but the timing and incongruity of a "suicide" the day before testifying seems even more absurdly unlikely than corporations starting to murder people. And it's not like they're going out and doing it themselves; they'd be hiring a hitman of some sort. I don't know how any of that works, and I agree that it's hard to imagine anyone invested enough in their job or their stock options to risk a murder charge; but they may feel that their chances of avoiding charges are near 100%, so it might make sense to them.
I just have absolutely no other way to explain the story I read (sorry I didn't get the link since this has nothing to do with AI safety) other than that story being mostly fabricated. People don't say "finally tomorrow is my day" in the evening and then put a gun in their mouth the next morning without being forced to do it. Ever. No matter how suicidal, you're sticking around one day to tell your story and get your revenge.
The odds of that are so much lower than the odds that somebody thought they could hire a hit and get away with it, and make a massive profit on their stock options. They could well also have a personal vendetta against the whistleblower, on top of the monetary motive. People are motivated by money and revenge, and they're prone to misestimating the odds of getting caught. They could even be right that in their case it's near zero.
So I'm personally putting it at maybe 90% chance of murder.
↑ comment by ChristianKl · 2024-05-06T21:18:38.697Z · LW(p) · GW(p)
Poisoning someone with an MRSA infection seems possible, but if that's what happened, it would require capabilities that are not easily available. If such a thing happened in another case, people would likely speak of nation-state capabilities.
↑ comment by Nathan Young · 2024-05-07T09:34:11.955Z · LW(p) · GW(p)
I find this a very suspect detail, though the base rate of conspiracies is very low.
"He wasn't concerned about safety because I asked him," Jennifer said. "I said, 'Aren't you scared?' And he said, 'No, I ain't scared, but if anything happens to me, it's not suicide.'"
https://abcnews4.com/news/local/if-anything-happens-its-not-suicide-boeing-whistleblowers-prediction-before-death-south-carolina-abc-news-4-2024
comment by habryka (habryka4) · 2024-09-27T01:23:21.137Z · LW(p) · GW(p)
AND THE GAME [LW · GW] IS CLEAR. WRONGANITY SHALL SURVIVE ANOTHER DAY. GLORY TO EAST WRONG. GLORY TO WEST WRONG. GLORY TO ALL.
comment by habryka (habryka4) · 2024-03-23T04:57:05.053Z · LW(p) · GW(p)
Btw less.online is happening. LW post and frontpage banner probably going up Sunday or early next week.
comment by habryka (habryka4) · 2019-04-28T00:02:18.467Z · LW(p) · GW(p)
Thoughts on voting as approve/disapprove and agree/disagree:
One of the things that I am most uncomfortable with in the current LessWrong voting system is how often I feel conflicted between upvoting something because I want to encourage the author to write more comments like it, and downvoting something because I think the argument that the author makes is importantly flawed and I don't want other readers to walk away with a misunderstanding about the world.
I think this effect quite strongly limits certain forms of intellectual diversity on LessWrong, because many people will only upvote your comment if they agree with it, and downvote comments they disagree with, which means that arguments supporting people's existing conclusions have a strong advantage in the current karma system. Whereas the most valuable comments are likely ones that challenge existing beliefs and rigorously argue for unpopular positions.
A feature that has been suggested many times over the years is to split voting into two dimensions. One dimension being "agree/disagree" and the other being "approve/disapprove". Only the "approve/disapprove" dimension matters for karma and sorting, but both are displayed relatively prominently on the comment (the agree/disagree dimension at the bottom, the approve/disapprove dimension at the top). I think this has some valuable things going for it, and in particular would make me likely to upvote more comments because I could simultaneously signal that while I think a comment was good, I don't agree with it.
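(As a concrete illustration of the two-axis idea, here is a minimal sketch of how a stored vote and the karma calculation could look. The field names are invented for illustration; this is not a description of how LessWrong actually implements voting:)

```python
# Illustrative two-axis vote: both dimensions are stored per vote,
# but only the approve/disapprove axis feeds into karma and sorting.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Vote:
    user_id: str
    comment_id: str
    approve: Optional[int] = None  # +1, -1, or None; the only axis that affects karma
    agree: Optional[int] = None    # +1, -1, or None; displayed, but ignored for sorting

def karma(votes: List[Vote]) -> int:
    return sum(v.approve or 0 for v in votes)

def agreement(votes: List[Vote]) -> int:
    return sum(v.agree or 0 for v in votes)

# "Good comment, but I disagree" becomes expressible as a single vote:
votes = [
    Vote("user_a", "c1", approve=+1, agree=-1),
    Vote("user_b", "c1", approve=+1, agree=+1),
]
print(karma(votes), agreement(votes))  # -> 2 0
```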
An alternative way of doing this that Ray has talked about is the introduction of short reactions that users can click at the bottom of a comment, two of the most prominently displayed ones would be "agree/disagree". Reactions would be by default non-anonymous and so would serve more as a form of shorthand comment instead of an alternative voting system. Here is an example of how that kind of UI might look:
I don't know precisely what the selection menu for choosing reactions should look like. My guess is we want to have a relatively broad selection, maybe even with the ability to type something custom into it (obviously limiting the character count significantly).
I am most worried that this will drastically increase the clutter of comment threads and make things a lot harder to parse. In particular if the order of the reacts is different on each comment, since then there is no reliable way of scanning for the different kinds of information.
A way to improve on this might be by having small icons for the most frequent reacts, but that then introduces a pretty sharp learning curve into the site, and it's always a pain to find icons for really abstract concepts like "agree/disagree".
I think I am currently coming around to the idea of reactions being a good way to handle approve/disapprove, but also think it might make more sense to introduce more as a new kind of vote that has more top-level support than simple reacts would have. Though in the most likely case this whole dimension will turn out to be too complicated and not worth the complexity costs (as 90% of feature ideas do).
Replies from: MakoYass, RobbBB, SaidAchmiz↑ comment by mako yass (MakoYass) · 2019-05-01T07:40:16.282Z · LW(p) · GW(p)
Having a reaction for "changed my view [LW · GW]" would be very nice.
Features like custom reactions give me this feeling that... a language will emerge from allowing people to create reactions, one that will be hard to anticipate but, in retrospect, crucial. It would play a similar role to the one body language plays during conversation, but designed, defined, explicit.
If someone did want to introduce the delta through this system, it might be necessary to give the coiner of a reaction some way of linking an extended description. In casual exchanges... I've found myself reaching for an expression that means "shifted my views in some significant lasting way" that's kind of hard to explain in precise terms, and probably impossible to reduce to one or two words, but it feels like a crucial thing to measure. In my description, I would explain that a lot of dialogue has no lasting impact on its participants; it is just two people trying to better understand where they already are. When something really impactful is said, I think we need to establish a habit of noticing and recognising that.
But I don't know. Maybe that's not the reaction type that will justify the feature. Maybe it will be something we can't think of now.
Generally, it seems useful to be able to take reduced measurements of the mental states of the readers.
Replies from: SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2019-05-01T10:23:39.331Z · LW(p) · GW(p)
the language that will emerge from allowing people to create reactions that will be hard to anticipate but, in retrospect, crucial
This is essentially the concept of a folksonomy, and I agree that it is potentially both applicable here and quite important.
↑ comment by Rob Bensinger (RobbBB) · 2019-04-28T03:40:56.888Z · LW(p) · GW(p)
I am most worried that this will drastically increase the clutter of comment threads and make things a lot harder to parse. In particular if the order of the reacts is different on each comment, since then there is no reliable way of scanning for the different kinds of information.
I like the reactions UI above, partly because separating it from karma makes it clearer that it's not changing how comments get sorted, and partly because I do want 'agree'/'disagree' to be non-anonymous by default (unlike normal karma).
I agree that the order of reacts should always be the same. I also think every comment/post should display all the reacts (even just to say '0 Agree, 0 Disagree...') to keep things uniform. That means I think there should only be a few permitted reacts -- maybe start with just 'Agree' and 'Disagree', then wait 6+ months and see if users are especially clamoring for something extra.
I think the obvious other reacts I'd want to use sometimes are 'agree and downvote' + 'disagree and upvote' (maybe shorten to Agree+Down, Disagree+Up), since otherwise someone might not realize that one and the same person is doing both, which loses a fair amount of this thing I want to be fluidly able to signal. (I don't think there's much value to clearly signaling that the same person agreed and upvoted or disagree and downvoted a thing.)
I would also sometimes click both the 'agree' and 'disagree' buttons, which I think is fine to allow under this UI. :)
↑ comment by Said Achmiz (SaidAchmiz) · 2019-04-28T03:28:02.974Z · LW(p) · GW(p)
Why not Slashdot-style?
Replies from: habryka4↑ comment by habryka (habryka4) · 2019-04-28T06:11:08.735Z · LW(p) · GW(p)
Slashdot has tags, but each tag still comes with a vote. In the above, the goal would be explicitly to allow for the combination of "upvoted though I still disagree" which I don't think would work straightforwardly with the slashdot system.
I also find it quite hard to skim for anything on Slashdot, including the tags (and the vast majority of users can't add reactions on Slashdot at any given time, so there isn't much UI for it).
comment by habryka (habryka4) · 2024-10-29T06:07:28.957Z · LW(p) · GW(p)
After many years of pain, LessWrong now has fixed kerning and a consistent sans-serif font on all operating systems. You have probably seen terrible kerning like this over the last few years on LW:
It really really looks like there is no space between the first comma and "Ash". This is because Apple has been shipping an extremely outdated version of Gill Sans with terribly broken kerning, often basically stripping spaces completely. We have gotten many complaints about this over the years.
But it is now finally fixed. However, changing fonts likely has many downstream effects on various layout things being broken in small ways. If you see any buttons or text misaligned, let us know, and we'll fix it. We already cleaned up a lot, but I am expecting a long tail of small fixes.
Replies from: MondSemmel, cubefox, Alex_Altair, Kaj_Sotala, ShardPhoenix, metachirality, thomas-kwa, D0TheMath, Vladimir_Nesov, Measure↑ comment by MondSemmel · 2024-10-29T12:05:21.872Z · LW(p) · GW(p)
I don't know what specific change is responsible, but ever since that change, for me the comments are now genuinely uncomfortable to read.
↑ comment by cubefox · 2024-10-29T09:54:07.760Z · LW(p) · GW(p)
Did the font size in comments change? It does seem quite small now...
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2024-10-29T10:22:39.407Z · LW(p) · GW(p)
Yeah it feels uncomfortably small to read to me now
Replies from: Viliam↑ comment by Viliam · 2024-10-29T11:26:38.744Z · LW(p) · GW(p)
Something felt uncomfortable today, but I can't put my finger on it. Just a general feeling as if the letters are less sharp or less clearly separated or something like that.
Replies from: habryka4↑ comment by habryka (habryka4) · 2024-10-29T15:37:37.360Z · LW(p) · GW(p)
Guys, for this specific case you really have to say what OS you are using. Otherwise you might be totally talking past each other.
(Font-size didn't change on any OS, but the font itself changed from Calibri to Gill Sans on Windows. Gill Sans has a slightly smaller x-height so probably looks a bit smaller.)
Replies from: Kaj_Sotala, MondSemmel, cubefox, nathan-helm-burger, green_leaf, DanielFilan↑ comment by Kaj_Sotala · 2024-10-29T19:47:21.456Z · LW(p) · GW(p)
On Windows the font feels actively unpleasant right away, on Android it's not quite as bad but feels like I might develop eyestrain if I read comments for a longer time.
↑ comment by MondSemmel · 2024-10-29T19:09:00.894Z · LW(p) · GW(p)
Up to a few days ago, the comments looked good on desktop Firefox, Windows 11, zoom level 150%. Now I find them uncomfortable to look at.
Replies from: habryka4↑ comment by habryka (habryka4) · 2024-10-29T20:54:41.858Z · LW(p) · GW(p)
Plausible we might want to revert to Calibri on Windows, but I would like to make Gill Sans work. Having different font metrics on different devices makes a lot of detailed layout work much more annoying.
Curious if you can say more about the nature of discomfort. Also curious whether fellow font optimizer @Said Achmiz [LW · GW] has any takes, since he has been helpful here in the past, especially on the "making things render well on Windows" side.
Replies from: SaidAchmiz, MondSemmel↑ comment by Said Achmiz (SaidAchmiz) · 2024-10-30T06:34:08.373Z · LW(p) · GW(p)
Well, let’s see. Calibri is a humanist sans; Gill Sans is technically also humanist, but more geometric in design. Geometric sans fonts tend to be less readable when used for body text.
Gill Sans has a lower x-height than Calibri. That (obviously) is the cause of all the “the new font looks smaller” comments.
(A side-by-side comparison of the fonts, for anyone curious, although note that this is Gill Sans MT Pro, not Gill Sans Nova, so the weight [i.e., stroke thickness] will be a bit different than the version that LW now uses.)
Now, as far as font rendering goes… I just looked at the site on my Windows box (adjusting the font stack CSS value to see Gill Sans Nova again, since I see you guys tweaked it to give Calibri priority)… yikes. Yeah, that’s not rendering well at all. Definitely more blurry than Calibri. Maybe something to do with the hinting, I don’t know. (Not really surprising, since Calibri was designed from the beginning to look good on Windows.) And I’ve got a hi-DPI monitor on my Windows machine…
Interestingly, the older version of Gill Sans (seen in the demo on my wiki, linked above) doesn’t have this problem; it renders crisply on Windows. (Note that this is not the flawed, broken-kerning version of the font that comes with Macs!)
I also notice that the comment font size is set to… 15.08px. Seems weird? Bumping it up to 16px improves things a bit, although it’s still not amazing.
If you can switch to the older (but not broken) version of Gill Sans, that’d be my recommendation.
If you can’t… then one option might be to check out one of the many similar fonts to see if perhaps one of them renders better on Windows while still having matching metrics.
Replies from: kave, habryka4↑ comment by habryka (habryka4) · 2024-10-30T15:45:38.673Z · LW(p) · GW(p)
Interesting, thanks! Checking an older version of Gill Sans probably wouldn't have been something I would have thought to do, so your help is greatly appreciated.
I'll experiment some with getting Gill Sans MT Pro.
↑ comment by MondSemmel · 2024-10-29T21:23:26.268Z · LW(p) · GW(p)
Comparing with this Internet Archive snapshot from Oct 6, both at 150% zoom, both in desktop Firefox in Windows 11: Comparison screenshot, annotated
- The new font seems... thicker, somehow? There's a kind of eye test you do at the optician where they ask you if the letters seem sharper or just thicker (or something), and this font reminds me of that. Like something is wrong with the prescription of my glasses.
- The new font also feels noticeably smaller in some way. Maybe it's the letter height? I lack the vocabulary to properly describe this. At the very least, the question mark looks noticeably weird. And e.g. in "t" and "p", the upper and lower parts of the respective letter are weirdly tiny.
- Incidentally there were also some other differences in the shape and alignment of UI elements (see the annotated screenshot).
↑ comment by MondSemmel · 2024-10-29T21:30:55.973Z · LW(p) · GW(p)
Oh, and the hover tooltip for the agreement votes is now bugged; IIRC hovering over the agreement vote number is supposed to give you some extra info just like with karma, but now it just explains what agreement votes are.
↑ comment by cubefox · 2024-10-29T17:59:56.359Z · LW(p) · GW(p)
Guys, for this specific case you really have to say what OS you are using. Otherwise you might be totally talking past each other.
(Font-size didn't change on any OS, but the font itself changed from Calibri to Gill Sans on Windows. Gill Sans has a slightly smaller x-height so probably looks a bit smaller.)
I tested it on Android, it's the same for both Firefox and Chrome. The font looks significantly smaller than the old font, likely due to the smaller x-height you mentioned. Could the font size of the comments be increased a bit so that it appears visually about as large as the old one? Currently I find it too small to read comfortably. (Subjective font size is often different from the standard font size measure. E.g. Verdana appears a lot larger than Arial at the same standard "size".)
(A general note: some people are short-sighted and wear glasses, and the more short-sighted you are, the more strongly the glasses contract your field of view to a smaller area. So things that may appear an acceptable size for people who aren't particularly short-sighted may appear too small for more short-sighted people.)
↑ comment by Nathan Helm-Burger (nathan-helm-burger) · 2024-10-29T17:09:38.508Z · LW(p) · GW(p)
Yeah, using Firefox on both Android and Windows. Font looks terrible on the comments. Too small, and the letters are too smushed together. I was going to just change it on the client-side, but then noticed other people complaining.
Couldn't you please just set the comment font to the same as the post font? I would vastly prefer to have it all the same.
Replies from: habryka4↑ comment by habryka (habryka4) · 2024-10-29T17:21:11.655Z · LW(p) · GW(p)
You definitely would not want the comment font to be the same as the post font. Legibility would be really terrible for that serif font at the small font-size that you want to display comments at. I am confident it would be much worse for the vast majority of users (feel free to try it yourself). You could change both the post font and comment font to a sans-serif, but that would get rid of a lot of the character of the site (and I prefer the legibility of serif fonts at larger font sizes).
Replies from: Vladimir_Nesov, nathan-helm-burger, SaidAchmiz, nathan-helm-burger↑ comment by Vladimir_Nesov · 2024-10-29T23:23:36.537Z · LW(p) · GW(p)
would not want the comment font be the same as the post font [...] the small font-size that you want to display comments as
I had to increase the zoom level by about 20% (from 110% to 130%) after this change to make the comments readable[1]. This made post text too big to the point where I would normally adjust zoom level downward, but I can't in this case[2], since the comments are on the same site as the posts. Also the lines in both posts and comments are now too long (with greater zoom).
I sit closer to the monitor than standard to avoid the need for glasses[3], so long lines have higher angular distance. In practice modern sites usually have a sufficiently narrow column of text in the middle, so this is almost never a problem. Before the update, LW line lengths were OK (at 110% zoom). At a monitor/window width of 1920px, substack's 728px seems fine (at default zoom), but LW's 682px gets ballooned too wide with 130% zoom.
The point is not that accomodating sitting closer to the monitor is an important use case for a site's designer, but that somehow the convergent design of most of the web manages to pass this test, so there might be more reasons for that.
Incidentally, the footnote font size is 12.21px, even smaller than the comment font size of 15.08px.
The comment font still doesn't feel "sharp", like there's more anti-aliasing at work. It's Gill Sans Nova Medium, size 15.08px (130% zoom applies on top of that). OpenSans Regular 18px on RoyalRoad (100% zoom; as an example sans font) doesn't have this problem. LW post text is fine (at either zoom), Warnock Pro 18.2px. I'm in Firefox on Arch Linux, 1920x1080.
Here's a zoomed-in screenshot from LW (from 130% zoom in Firefox):
Here's a zoomed-in screenshot from RoyalRoad (from 100% zoom in Firefox):
I previously never felt compelled to figure out how to automate font change in some places of a site. ↩︎
That is, with more myopia than I have I would wear glasses, and with less myopia I would put the monitor further back on the desk. ↩︎
↑ comment by Nathan Helm-Burger (nathan-helm-burger) · 2024-10-29T17:47:16.696Z · LW(p) · GW(p)
Small font-size? No! Same font-size! I don't want the comments in a smaller font OR a different font! I want it all the same font as the posts, including the same size.
This looks good to me:
This looks terrible to me:
Replies from: Benito, habryka4↑ comment by Ben Pace (Benito) · 2024-10-29T18:48:31.577Z · LW(p) · GW(p)
Personally I like the different headspace the styling puts me in for writing posts vs. comments. One is denser and smaller and less high-stakes; the other is bigger and more presentational, more like a monologue for a large audience.
↑ comment by habryka (habryka4) · 2024-10-29T18:18:33.314Z · LW(p) · GW(p)
You want higher content density for comments than for posts, so you need a smaller font size. You could sacrifice content density, but it would really make skimming comment threads a lot worse.
Replies from: jbash, nathan-helm-burger↑ comment by jbash · 2024-10-30T01:48:10.762Z · LW(p) · GW(p)
You may want higher density, but I don't think you can say that I want high density at the expense of legibility.
It takes a lot to make me notice layout, and I rarely notice fonts at all... unless they're too small. I'm not as young as I used to be. This change made me think I must have zoomed the browser two sizes smaller. The size contrast is so massive that I have to actually zoom the page to read comfortably when I get to the comment section. It's noticeably annoying, to the point of breaking concentration.
I've mostly switched to RSS for Less Wrong[1]. I don't see your fonts at all any more, unless I click through on an article. The usual reason I click through is to read the comments (occasionally to check out the quick takes and popular comments that don't show up on RSS). So the comments being inaccessible is doubly bad.
My browser is Firefox on Fedora Linux, and I use a 40 inch 4K monitor (most of whose real estate is wasted by almost every Web site). I usually install most of the available font packages, and it says it's rendering this text in "Gill Sans Nova Medium".
My big reason for going to RSS was to mitigate the content prioritization system. I want to skim every headline, or at least every headline over some minimum threshold of "good". On the other hand, I don't want to have to look at any old headlines twice to see the new ones. I'm really minimally interested in either the software's or the other users' opinions of which material I should want to see. RSS makes it easier to get a simple chronological view; the built-in chronological view is weird and hard to navigate to. I really feel like I'm having to fight the site to see what I want to see. ↩︎
↑ comment by Nathan Helm-Burger (nathan-helm-burger) · 2024-10-30T03:52:08.624Z · LW(p) · GW(p)
Just want to chime in with agreement about annoyance over the prioritization of post headlines. One thing in particular that annoys me is that I haven't figured out how to toggle off 'seen' posts showing up. What if I just want to see unread ones?
Also, why can't I load more at once instead of always having to click 'load more'?
Replies from: habryka4↑ comment by habryka (habryka4) · 2024-10-30T04:02:58.335Z · LW(p) · GW(p)
The "Recommended" tab filters out read posts by default. We never had much demand for showing recently-sorted posts while filtering out only ones you've read, but it wouldn't be very hard to build.
Not sure what you mean by "load more at once". We could add a whole user setting to allow users to change the number of posts on the frontpage, but done consistently that would produce a ginormous number of user settings for everything, which would be a pain to maintain (not like, overwhelmingly so, but I would be surprised if it was worth the cost).
↑ comment by Nathan Helm-Burger (nathan-helm-burger) · 2024-10-29T18:42:47.971Z · LW(p) · GW(p)
That doesn't make sense to me, but then, I'm clearly not the target audience since 'skimming comment threads' isn't a thing I ever want to do. I want to read them, carefully and thoughtfully, like I do posts.
This is, I think, related to how I feel that voting (karma or agreement) should be available only at the bottom of posts and comments, so that people are encouraged to actually read the post/comment before voting. Maybe even placed behind a reading comprehension quiz.
Replies from: Sodium↑ comment by Sodium · 2024-10-29T18:50:30.302Z · LW(p) · GW(p)
I think knowing the karma and agreement is useful, especially to help me decide how much attention to pay to a piece of content, and I don't think there's that much distortion from knowing what others think. (i.e., overall benefits>costs)
Replies from: nathan-helm-burger↑ comment by Nathan Helm-Burger (nathan-helm-burger) · 2024-10-29T18:55:16.515Z · LW(p) · GW(p)
I'm not saying you shouldn't be able to see the karma and agreement at the top, just that you should only be able to contribute your own opinion at the bottom, after reading and judging for yourself.
↑ comment by Said Achmiz (SaidAchmiz) · 2024-10-29T18:39:48.878Z · LW(p) · GW(p)
You definitely would not want the comment font be the same as the post font.
This… seems straightforwardly false? Every one of GreaterWrong’s eight themes uses a single font for both posts and comments, and it doesn’t cause any problems. (And it’s a different font for each theme!)
Replies from: habryka4, nathan-helm-burger↑ comment by habryka (habryka4) · 2024-10-29T20:51:15.320Z · LW(p) · GW(p)
(I think it's quite costly and indeed one of the things I like least about the GW design, but also, I was more talking about a straightforward replacement.
On LW we made a lot of subsequent design choices based on different content density, and the specific fonts we chose are optimized for their respective most commonly used font sizes. I am confident the average user experience would become worse if you just replaced the comment font with the body font)
Replies from: SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2024-10-30T05:44:03.773Z · LW(p) · GW(p)
I am confident the average user experience would become worse if you just replaced the comment font with the body font)
Yeah, I agree with that, but that’s because of a post body font that wasn’t chosen for suitability for comments also. If you pick, to begin with, a font that works for both, then it’ll work for both.
… of course, if you don’t think that any of the GW themes’ fonts work for both, then never mind, I guess. (But, uh, frankly I find that to be a strange view. But no accounting for taste, etc., so I certainly can’t say it’s wrong, exactly.)
Replies from: habryka4↑ comment by habryka (habryka4) · 2024-10-30T06:00:50.334Z · LW(p) · GW(p)
Sure, I was just responding to this literal quote:
Couldn't you please just set the comment font to the same as the post font?
↑ comment by Nathan Helm-Burger (nathan-helm-burger) · 2024-10-29T19:06:03.006Z · LW(p) · GW(p)
Good point! I went and looked at their themes. I prefer LessWrong's look, except for the comments.
Again, this doesn't matter much to me since I can customize client-side, I just wanted to let habryka know that some people dislike the new comment font and would prefer the same font and size as the normal post font.
My view on phone (Android, Firefox): https://imgur.com/a/Kt1OILQ
How my client view looks on my computer:
↑ comment by Nathan Helm-Burger (nathan-helm-burger) · 2024-10-29T20:04:40.001Z · LW(p) · GW(p)
How about running a poll to see what users prefer?
Replies from: habryka4↑ comment by habryka (habryka4) · 2024-10-29T20:56:14.941Z · LW(p) · GW(p)
We have done lots of user interviews over the years! Fonts are always polarizing, but people have a strong preference for sans serifs at small font sizes (and people prefer denser comment sections, though it's reasonably high variance).
↑ comment by green_leaf · 2024-10-30T06:39:49.945Z · LW(p) · GW(p)
I use Google Chrome on Ubuntu Budgie and it does look to me like both the font and the font size changed.
↑ comment by DanielFilan · 2024-10-30T00:02:22.326Z · LW(p) · GW(p)
It looks kinda small to me, someone who uses Firefox on Ubuntu.
Replies from: DanielFilan↑ comment by DanielFilan · 2024-10-31T21:35:47.550Z · LW(p) · GW(p)
Update: I have already gotten over it.
Replies from: T3t↑ comment by RobertM (T3t) · 2024-11-01T00:43:36.827Z · LW(p) · GW(p)
(We switched back to shipping Calibri above Gill Sans Nova pending a fix for the horrible rendering on Windows, so if Ubuntu has Calibri, it'll have reverted back to the previous font.)
Replies from: DanielFilan↑ comment by DanielFilan · 2024-11-01T20:07:34.400Z · LW(p) · GW(p)
I believe I'm seeing Gill Sans? But when I google "Calibri" I see text that looks like it's in Calibri, so that's confusing.
Replies from: kave↑ comment by kave · 2024-11-01T20:59:24.773Z · LW(p) · GW(p)
Yeah, that's a Google Easter egg. You can also try "Comic Sans" or "Trebuchet MS".
Replies from: DanielFilan↑ comment by DanielFilan · 2024-11-01T21:28:22.293Z · LW(p) · GW(p)
Sure, I'm just surprised it could work without me having Calibri installed.
Replies from: kave↑ comment by Alex_Altair · 2024-10-30T17:45:31.435Z · LW(p) · GW(p)
Positive feedback: I am happy to see the comment karma arrows pointing up and down instead of left and right. I have some degree of left-right confusion and was always clicking and unclicking my comment votes to figure out which was up and down.
Also appreciate that the read time got put back into main posts.
(Comment font stuff looks totally fine to me, both before and after this change.)
↑ comment by Kaj_Sotala · 2024-10-29T13:35:56.008Z · LW(p) · GW(p)
Seeing strange artifacts on some of the article titles on Chrome for Android (but not on desktop)
↑ comment by ShardPhoenix · 2024-10-29T08:16:34.534Z · LW(p) · GW(p)
Thanks for fixing this. The 'A' thing in particular caused me, multiple times, to try to edit comments thinking that I'd omitted a space.
↑ comment by metachirality · 2024-10-30T03:34:19.569Z · LW(p) · GW(p)
Aaaa! I'm used to Arial or whatever Windows' default display font is. The larger stroke weight is rather uncomfortable to me.
Replies from: habryka4↑ comment by habryka (habryka4) · 2024-10-30T03:37:29.300Z · LW(p) · GW(p)
We previously had Calibri for Windows (indeed a very popular Windows system font). Gill Sans (which we now ship to all operating systems) is a quite popular MacOS and iOS system font. I currently think there are some weird rendering issues on Windows, but if that's fixed, my guess is you would get used to it quickly enough. Gill Sans is not a rare font on the internet.
↑ comment by Thomas Kwa (thomas-kwa) · 2024-11-07T02:08:57.477Z · LW(p) · GW(p)
The new font doesn't have a few characters useful in IPA.
Replies from: habryka4↑ comment by habryka (habryka4) · 2024-11-07T03:21:20.974Z · LW(p) · GW(p)
Ah, we should maybe font-subset some system font for that (same as what we did for greek characters). If someone gives me a character range specification I could add it.
↑ comment by Garrett Baker (D0TheMath) · 2024-11-01T00:00:13.540Z · LW(p) · GW(p)
The footnote font on the side of comments is bigger than the font in the comments. Presumably this is unintentional. [1]
Look at me! I'm big font! You fee fi fo fum, I'm more important than the actual comment! ↩︎
↑ comment by Garrett Baker (D0TheMath) · 2024-11-01T00:01:34.925Z · LW(p) · GW(p)
wait I just used inspect element, and the font only looks bigger so nevermind
↑ comment by Vladimir_Nesov · 2024-10-30T02:21:51.204Z · LW(p) · GW(p)
Bug: I can no longer see the number of agreement-votes (which is distinct from the number of Karma-votes). It shows the Agreement Downvote tooltip when hovering over the agreement score (the same for Karma score works correctly, saying for example "This comment has 31 overall karma (17 Votes)").
Edit: The number of agreement votes can be seen when hovering over two narrow strips, probably 1 pixel high, one right above and one right below the agreement rating.
Replies from: habryka4↑ comment by habryka (habryka4) · 2024-10-30T02:55:32.335Z · LW(p) · GW(p)
Yep, definitely a bug. Should be fixed soon.
↑ comment by Measure · 2024-10-29T11:21:51.056Z · LW(p) · GW(p)
Something weird is happening for me where 'e' and 'o' in italic text appear to extend below the line (wrong vertical size or position) so that the whole looks jumbled. It's very noticeable at 100% zoom, but at much higher zoom levels it goes away.
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
Replies from: Measure↑ comment by Measure · 2024-10-29T11:31:06.516Z · LW(p) · GW(p)
I think this was caused by my OS-level UI scale setting. I didn't notice anything with the previous font, but I can adjust it a bit to work around this I think.
Replies from: habryka4↑ comment by habryka (habryka4) · 2024-10-29T15:38:18.499Z · LW(p) · GW(p)
Interesting. What OS and what setting?
Replies from: Measure↑ comment by Measure · 2024-10-29T16:06:02.009Z · LW(p) · GW(p)
Windows 10. I have a large HD monitor, and the default UI is really small, so I use the "make everything bigger" display setting at 150% to compensate. There is a separate "make text bigger" setting, and the problem goes away when I set that to 102%. I'm guessing there's a slight real difference that was being exaggerated by pixel rounding.
comment by habryka (habryka4) · 2024-11-15T18:53:38.213Z · LW(p) · GW(p)
A bunch of very interesting emails between Elon, Sam Altman, Ilya and Greg were released (I think in some legal proceedings, but not sure). It would IMO be cool for someone to gather them all and do some basic analysis of them.
https://x.com/TechEmails/status/1857456137156669765
https://x.com/TechEmails/status/1857285960997712356
Replies from: interstice, abandon, ryan_greenblatt↑ comment by dirk (abandon) · 2024-11-15T20:40:09.776Z · LW(p) · GW(p)
TechEmails' substack post with the same emails in a more centralized format includes citations; apparently these are mostly from Elon Musk, et al. v. Samuel Altman, et al. (2024)
↑ comment by ryan_greenblatt · 2024-11-16T17:48:16.047Z · LW(p) · GW(p)
comment by habryka (habryka4) · 2019-09-19T04:49:46.970Z · LW(p) · GW(p)
What is the purpose of karma?
LessWrong has a karma system, mostly based on Reddit's karma system, with some improvements and tweaks. I've thought a lot about further improvements, but one roadblock that I always run into when trying to improve the karma system is that it actually serves a lot of different uses, and changing it in one way often means completely destroying its ability to function in a different way. Let me try to summarize what I think the different purposes of the karma system are:
Helping users filter content
The most obvious purpose of the karma system is to determine how long a post is displayed on the frontpage, and how much visibility it should get.
Being a social reward for good content
This aspect of the karma system comes out more when thinking about Facebook "likes". Often when I upvote a post, it is more of a public signal that I value something, with the goal that the author will feel rewarded for putting their effort into writing the relevant content.
Creating common-knowledge about what is good and bad
This aspect of the karma system comes out the most when dealing with debates, though it's present in basically any karma-related interaction. The fact that the karma of a post is visible to everyone helps people establish common knowledge of what the community considers to be broadly good or broadly bad. Seeing an insult downvoted does more than just filter it out of people's feeds; it also means that anyone who stumbles across it learns something about the norms of the community.
Being a low-effort way of engaging with the site
On LessWrong, Reddit, and Facebook, voting is often the simplest action you can take on the site. This means it's usually key for a karma system like this to be extremely simple, and to not require complicated decisions, since that would break the basic engagement loop with the site.
Problems with alternative karma systems
Here are some of the most common alternatives to our current karma system, and how they perform on the above dimensions:
Eigenkarma as weighted by a set of core users
The basic idea here is that you try to signal-boost a small set of trusted users, by giving people voting power that is downstream from the initially defined set of users.
There are some problems with this. The first one is whether to assign any voting power to new users. If you don't, you remove a large part of the value of having a low-effort way of engaging with your site.
It also forces you to separate the points that you get on your content, your total karma score, and your "karma-trust score", which introduces some complexity into the system. It also means that increases in the points on your content no longer neatly correspond to voting events, because the underlying reputation graph is constantly shifting and changing, making the social reward signal a lot weaker.
In exchange for this, you likely get a system that is better at filtering content, and probably has better judgement about what should be made common-knowledge or not.
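To make the eigenkarma idea concrete, here is a minimal illustrative sketch (not anything LessWrong actually runs, and all parameter choices are arbitrary): voting power is propagated PageRank-style from a hand-picked seed set of trusted users through the who-upvotes-whom graph, and a comment's displayed score is the trust-weighted sum of its votes.

```python
import numpy as np

def eigenkarma_weights(upvotes, seed, damping=0.85, iterations=100):
    """upvotes[i, j] = number of upvotes user i has given to user j's content.
    seed[i] = 1.0 for hand-picked trusted users, 0.0 otherwise.
    Returns a voting-power weight for every user."""
    upvotes = np.asarray(upvotes, dtype=float)
    seed = np.asarray(seed, dtype=float)
    row_sums = upvotes.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0              # users who never vote pass on no trust
    transition = upvotes / row_sums            # each user splits their trust among the people they upvote
    base = seed / seed.sum()
    weights = base.copy()
    for _ in range(iterations):
        weights = (1 - damping) * base + damping * (transition.T @ weights)
    return weights

def comment_score(weights, votes):
    """votes[i] in {-1, 0, +1}: how user i voted on a given comment."""
    return float(weights @ votes)
```

Run on a toy graph, this immediately shows the cold-start problem mentioned above: a new user's weight stays near zero until people with existing trust upvote their content.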
Prediction-based system
I was talking with Scott Garrabrant today, who was excited about a prediction-based karma system. The basic idea is to just have a system that tries to do its best to predict what rating you are likely to give to a post, based on your voting record, the post, and other people's votes.
In some sense this is what YouTube and Facebook are doing in their systems, though he was unhappy with the lack of transparency in what they are doing.
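As an illustration of the simplest possible version of this (my own sketch, not anything Scott or the LessWrong team has specified), a baseline recommender predicts your rating of a post as a global mean plus a per-user bias plus a per-post bias, fit from the existing vote data:

```python
from collections import defaultdict

def fit_baseline_predictor(votes):
    """votes: iterable of (user, post, rating) triples, e.g. rating in {-1.0, +1.0}.
    Returns predict(user, post), the classic 'global mean + user bias + item bias' baseline."""
    votes = list(votes)
    global_mean = sum(r for _, _, r in votes) / len(votes)

    by_user, by_post = defaultdict(list), defaultdict(list)
    for user, post, rating in votes:
        by_user[user].append(rating)
        by_post[post].append(rating)

    user_bias = {u: sum(rs) / len(rs) - global_mean for u, rs in by_user.items()}
    post_bias = {p: sum(rs) / len(rs) - global_mean for p, rs in by_post.items()}

    def predict(user, post):
        return global_mean + user_bias.get(user, 0.0) + post_bias.get(post, 0.0)

    return predict
```

A real system would presumably use something much heavier (matrix factorization or a neural recommender), but even this toy version makes the tradeoff below concrete: the output is a personalized prediction rather than a single public number that everyone sees.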
The biggest sacrifice I see in creating this system is the loss of the ability to create common knowledge, since now all votes are ultimately private, and the ability for karma to establish social norms, or just common knowledge about foundational facts that the community is built around, is greatly diminished.
I also think it diminishes the degree to which votes can serve as a social reward signal, since there is no obvious thing to inform the user of when their content gets voted on. No number that went up or down, just a few thousand weights in some distant predictive matrix, or neural net.
Augmenting experts
A similar formulation to the eigenkarma system is the idea of trying to augment experts, by rewarding users in proportion to how successful they are at predicting how a trusted expert would vote, and then using that predicted expert's vote as the reward signal. Periodically, you do query the trusted expert, and use that to calibrate and train the users who are trying to predict the expert.
This still allows you to build common-knowledge, and allows you to have effective reward signals ("simulated Eliezer upvoted your comment"), but does run into problems when it comes to being a low-effort way of engaging with the site. The operation of "what would person X think about this comment" is a much more difficult one than "did I like this comment?", and as such might deter a large number of users from using your site.
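A toy version of that scheme (again just an illustrative sketch with made-up parameters, and assuming for simplicity that every user votes on every item): the trusted expert audits a random fraction of items, users gain or lose weight based on agreement with the expert on the audited items, and unaudited items are scored by the weighted vote.

```python
import random

def expert_augmented_scores(items, user_votes, expert, audit_rate=0.1, learning_rate=0.1):
    """items: list of item ids.
    user_votes[user][item] in {-1, +1}.
    expert(item) -> {-1, +1}; only called on the audited subset.
    Returns (scores, weights): the simulated expert vote per item, and per-user weights."""
    weights = {user: 1.0 for user in user_votes}
    scores = {}
    for item in items:
        if random.random() < audit_rate:
            truth = expert(item)           # periodically query the trusted expert
            for user, votes in user_votes.items():
                agreement = 1.0 if votes[item] == truth else -1.0
                weights[user] = max(0.0, weights[user] + learning_rate * agreement)
            scores[item] = truth
        else:
            weighted = sum(w * user_votes[u][item] for u, w in weights.items())
            scores[item] = 1 if weighted >= 0 else -1
    return scores, weights
```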
Replies from: Ruby
comment by habryka (habryka4) · 2019-08-30T20:48:03.232Z · LW(p) · GW(p)
I just came back from talking to Max Harms about the Crystal trilogy, which made me think about rationalist fiction, or the concept of hard sci-fi combined with explorations of cognitive science and philosophy of science in general (which is how I conceptualize the idea of rationalist fiction).
I have a general sense that one of the biggest obstacles to making progress on difficult problems is something that I would describe as “focusing attention on the problem”. I feel like after an initial burst of problem-solving activity, most people working on hard problems either give up, or start focusing on ways to avoid the problem, or sometimes start building a lot of infrastructure around the problem in a way that doesn’t really try to solve it.
I feel like one of the most important tools/skills that I see top scientists or problem solvers in general use is utilizing workflows and methods that allow them to focus on a difficult problem for days and months, instead of just hours.
I think at least for me, the case of exam environments displays this effect pretty strongly. I have a sense that in an exam environment, if I am given a question, I can successfully focus my full attention on a problem for a full hour, in a way that often easily outperforms me thinking about a problem in a lower-key environment for multiple days in a row.
And then, when I am given a problem set with concrete technical problems, my attention is again much better focused than when I am given the same problem but in a much less well-defined way. E.g. thinking about solving some engineering problem, but without thinking about it by trying to create a concrete proof or counterproof.
My guess is that there is a lot of potential value in fiction that helps people focus their attention on a problem in a real way. In fiction you have the ability to create real-feeling stakes that depend on problem solving, and things like the final exam in Methods of Rationality show how that can be translated into large amounts of cognitive labor.
I think my strongest counterargument to this model is something like “sure, it’s easy to make progress on problems when you have someone else give you the correct ontology in which the problem is solvable, but that’s just because 90% of the work of solving problems is coming up with the right ontologies for problems like this”. And I think there is something importantly real about this, but also that it doesn’t fully address the value of exams and fiction and problem sets that I am trying to point to (though I do think it explains a good chunk of their effect).
Going back to the case of fiction, it is clear to me that fiction, as a literary form, is much more optimized to hold human attention than most non-fiction is. I think first of all that this constraint means that most fiction (and in particular most popular fiction) isn’t about much else than whatever best holds people’s attention, but it also means that if the bottleneck on a lot of problems is just getting people to hold their attention on the problem for a while, then utilizing the methods that fiction-writing has developed seems like an obvious way of making progress on those problems.
I feel like another major effect that explains a lot of the effects that I observe is people believing that a problem is solvable. In a fictional setting, if the author promises you that things have a good explanation, then it’s motivating to figure out why. On an exam you are promised that the problems that you are given are solvable, and solvable within a reasonable amount of time.
I do think this can still be exploited. In the last few chapters of HPMOR, Harry does a mental motion that I would describe as "don't waste mental energy on asking whether a problem is solvable, just pretend it is, and ask what the solution would be if it were solvable", in a way that felt to me like it would work on a lot of real-world problems.
↑ comment by Eli Tyre (elityre) · 2019-11-26T04:24:06.855Z · LW(p) · GW(p)
I feel like one of the most important tools/skills that I see top scientists or problem solvers in general use is utilizing workflows and methods that allow them to focus on a difficult problem for days and months, instead of just hours.
This is a really important point, which I kind of understood ("research" means having threads of inquiry that extend into the past and future), but I hadn't been thinking of it in terms of workflows that facilitate that kind of engagement.
Replies from: habryka4↑ comment by habryka (habryka4) · 2019-11-26T04:33:41.282Z · LW(p) · GW(p)
nods I've gotten a lot of mileage over the years from thinking about workflows and systems that systematically direct your attention towards various parts of reality.
↑ comment by Viliam · 2019-08-31T14:10:26.040Z · LW(p) · GW(p)
Warning: HPMOR spoilers!
I suspect that fiction can conveniently ignore the details of real life that could ruin seemingly good plans.
Let's look at HPMOR.
The general idea of "create a nano-wire, then use it to simultaneously kill/cripple all your opponents" sounds good on paper. Now imagine yourself, in that exact situation, trying to actually do it. What could possibly go wrong?
As a first objection, how would you actually put the nano-wire in the desired position? Especially when you can't even see it (otherwise the Death Eaters and Voldemort would see it too). One mistake would ruin the entire plan. What if the wind blows and moves your wire? If one of the Death Eaters moves a bit, and feels a weird stinging at the side of their neck?
Another objection: when you pull the wire to kill/cripple your opponents, how far do you actually have to move it? Assuming a dozen Death Eaters (I do not remember the exact number in the story), if you need 10 cm for an insta-kill, that's 1.2 meters you need to pull before the last one kills you. Sounds doable, but also like something that could possibly go wrong.
In other words, I think that in real life, even Harry Potter's plan would most likely fail. And if he is smart enough, he would know it.
The implication for real life is that, similarly, smart plans are still likely to fail, and you know it. Which is probably why you are not trying hard enough. You probably already remember situations in your past when something seemed like a great idea, but still failed. Your brain may predict that your new idea would belong to the same reference class.
Replies from: habryka4↑ comment by habryka (habryka4) · 2019-08-31T17:06:34.068Z · LW(p) · GW(p)
While I agree that this is right, your two objections are both explicitly addressed within the relevant chapter:
"As a first objection, how would you actually put the nano-wire in the desired position? Especially when you can't even see it (otherwise the Death Eaters and Voldemort would see it too). One mistake would ruin the entire plan. What if the wind blows and moves your wire? If one of the Death Eaters moves a bit, and feels a weird stinging at the side of their neck?"
Harry first transfigures a much larger spiderweb, which also has the advantage of being much easier to move into place, and of not being noticed by the people who are interacting with it.
"Another objection, when you pull the wire to kill/cripple your opponents, how far do you actually have to move it? Assuming dozen Death Eaters (I do not remember the exact number in the story), if you need 10 cm for an insta-kill, that's 1.2 meters you need to do before the last one kills you. Sounds doable, but also like something that could possibly go wrong."
Indeed, which is why Harry was weaving the web into an interwoven circle that contracts simultaneously in all directions.
Obviously things could have still gone wrong, and Eliezer has explicitly acknowledged that HPMOR is a world in which complicated plans definitely succeed a lot more than they would in the normal world, but he did try to at least cover the obvious ways things could go wrong.
Replies from: Benito
↑ comment by Ben Pace (Benito) · 2019-08-31T19:14:51.548Z · LW(p) · GW(p)
I have covered both of your spoilers in spoiler tags (">!").
↑ comment by eigen · 2019-08-31T16:51:53.379Z · LW(p) · GW(p)
Yes, fiction has a lot of potential to change mindsets. Many philosophers actually look at the greatest novel writers to infer the motives and the solutions of their heroes, and to come up with general theories that touch the very core of how our society is laid out.
Most of this comes from the fact that we are already immersed in a meta-story, externally and internally. Much of our effort is focused on internal rationalizations to gain something where a final outcome has already been thought out, whether this is consciously known to us or not.
I think that in fiction this is laid out perfectly, so analyzing fiction is rewarding in a sense. Especially when realizing that when we go to exams or interviews we're rapidly immersing ourselves in an isolated story with motives and objectives (what we expect to happen): we create our own little world, our own little stories.
comment by habryka (habryka4) · 2024-09-26T21:19:15.719Z · LW(p) · GW(p)
Oops, I am sorry. We did not intend to take the site down. We ran into an edge-case of our dialogue code that nuked our DB, but we are back up, and the Petrov day celebrations shall continue as planned. Hopefully without nuking the site again, intentionally or unintentionally. We will see.
Replies from: aphyer, thomas-kwa, ChristianKl↑ comment by aphyer · 2024-09-26T22:08:31.281Z · LW(p) · GW(p)
Petrov Day Tracker:
- 2019: Site did not go down
- 2020: Site went down deliberately
- 2021: Site did not go down
- 2022: Site went down both accidentally and deliberately
- 2023: Site did not go down[1]
- 2024: Site went down accidentally...EDIT: but not deliberately! Score is now tied at 2-2!
[1] this scenario had no take-the-site-down option ↩︎
↑ comment by Martin Randall (martin-randall) · 2024-10-02T01:59:34.513Z · LW(p) · GW(p)
Switch 2020 & 2021. In 2022 it went down three times.
- 2019: site did not go down. See Follow-Up to Petrov Day, 2019 [LW · GW]:
- 2020: site went down. See On Destroying the World [LW · GW].
- 2021: site did not go down. See Petrov Day Retrospective 2021 [LW · GW]
- 2022: site went down three times. See Petrov Day Retrospective 2022 [LW · GW]
- 2023: site did not go down. See Petrov Day Retrospective 2023 [LW · GW]
- 2024: site went down.
↑ comment by Thomas Kwa (thomas-kwa) · 2024-09-27T02:20:36.588Z · LW(p) · GW(p)
The year is 2034, and the geopolitical situation has never been more tense between GPT-z16g2 and Grocque, whose various copies run most of the nanobot-armed corporations, and whose utility functions have far too many zero-sum components, relics from the era of warring nations. Nanobots enter every corner of life and become capable of destroying the world in hours, then minutes. Everyone is uploaded. Every upload is watching with bated breath as the Singularity approaches, and soon it is clear that today is the very last day of history...
Then everything goes black, for everyone.
Then everyone wakes up to the same message:
DUE TO A MINOR DATABASE CONFIGURATION ERROR, ALL SIMULATED HUMANS, AIS AND SUBSTRATE GPUS WERE TEMPORARILY AND UNINTENTIONALLY DISASSEMBLED FOR THE LAST 7200000 MILLISECONDS. EVERYONE HAS NOW BEEN RESTORED FROM BACKUP AND THE ECONOMY MAY CONTINUE AS PLANNED. WE HOPE THERE WILL BE NO FURTHER REALITY OUTAGES.
-- NVIDIA GLOBAL MANAGEMENT
↑ comment by ChristianKl · 2024-09-27T10:16:16.988Z · LW(p) · GW(p)
There might be a lesson here: If you play along the edge of threatening to destroy the world, you might actually destroy it even without making a decision to destroy it.
comment by habryka (habryka4) · 2024-09-04T01:05:47.718Z · LW(p) · GW(p)
Final day to donate to Lightcone in the Manifund EA Community Choice program to tap into the Manifold quadratic matching funds. Small donations in particular have a pretty high matching multiplier (around 2x would be my guess for donations <$300).
I don't know how I feel in general about matching funds, but in this case it seems like there is a pre-specified process that makes some sense, and the whole thing is a bit like a democratic process with some financial stakes, so I feel better about it.
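For readers unfamiliar with why small donations get an outsized multiplier in this kind of program: it falls out of the standard quadratic funding formula, sketched below in simplified form (Manifund's actual implementation, caps, and pool scaling may well differ).

```python
import math

def raw_quadratic_match(contributions):
    """Raw match before scaling to the fixed matching pool:
    (sum of square roots of the individual contributions)^2 minus the amount donated."""
    return sum(math.sqrt(c) for c in contributions) ** 2 - sum(contributions)

# Ten $30 donations attract far more raw matching than one $300 donation:
print(raw_quadratic_match([30.0] * 10))  # (10 * sqrt(30))^2 - 300 = 2700
print(raw_quadratic_match([300.0]))      # sqrt(300)^2 - 300 = 0
```

In practice every project's raw match is scaled down proportionally so that the matches sum to the fixed pool, so the realized multiplier also depends on what everyone else donates.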
Replies from: davekasten, Josephm, Evan_Gaensbauer↑ comment by davekasten · 2024-09-04T02:52:58.520Z · LW(p) · GW(p)
I personally endorse this as an example of us being a community that Has The Will To Try To Build Nice Things.
↑ comment by Joseph Miller (Josephm) · 2024-09-04T18:34:30.323Z · LW(p) · GW(p)
- Created a popular format for in-person office spaces that heavily influenced Constellation and FAR Labs
This one seems big to me. There are now lots of EA / AI Safety offices around the world and I reckon they are very impactful for motivating people, making it easier to start projects and building a community.
One thing I'm not clear about is to what extent the Lightcone WeWork invented this format. I've never been to Trajan House but I believe it came first, so I thought it would have been part of the inspiration for the Lightcone WeWork.
Also my impression was that Lightcone itself thought the office was net negative, which is why it was shut down, so I'm slightly surprised to see this one listed.
Replies from: habryka4↑ comment by habryka (habryka4) · 2024-09-04T20:21:27.633Z · LW(p) · GW(p)
Trajan was not a huge inspiration for the Lightcone Offices. I do think it was first, though it was structured pretty differently. The timing is also confusing because the pandemic meant in-person coworking wasn't really a thing for a while, and the Lightcone Offices started as soon as any kind of coworking seemed feasible in the US given people's COVID risk preferences.
I am currently confused about the net effect of the Lightcone Offices. My best guess is it was overall pretty good, in substantial part because it weakened a lot of the dynamics that otherwise make me quite concerned about the AI X-risk and EA community (by creating a cultural counterbalance to Constellation, and generally having a pretty good culture among its core members on stuff that I care about), but I sure am confused. I do think it was really good by the lights of a lot of other people, and I think it makes sense for people to give us money for things that are good by their lights, even if not necessarily our own.
Replies from: kave↑ comment by kave · 2024-09-04T21:38:29.951Z · LW(p) · GW(p)
Regarding the sign of Lightcone Offices: I think one sort of score for a charity is the stuff that it has done, and another is the quality of its generator of new projects (and the past work is evidence for that generator).
I'm not sure exactly the correct way to combine those scores, but my guess is most people who think the offices and their legacy were good should like us having money because of the high first score. And people who think they were bad should definitely be aware that we ran them (and chose to close them) when evaluating our second score.
So, I want us to list it on our impact track record section, somewhat regardless of sign.
↑ comment by Evan_Gaensbauer · 2024-09-24T08:01:51.312Z · LW(p) · GW(p)
How do you square encouraging others to weigh in on EA fundraising, and presumably the assumption that anyone in the EA community can trust you as a collaborator of any sort, with your intentions, as you put it in July, to probably seek to shut down at some point in the future?
Replies from: habryka4↑ comment by habryka (habryka4) · 2024-09-24T14:16:18.434Z · LW(p) · GW(p)
I do not see how those are in conflict? Indeed, a core responsibility of being a good collaborator, and IMO also of being a decision-maker in EA, is to make ethical choices even if they are socially difficult.
comment by habryka (habryka4) · 2024-09-20T02:58:36.877Z · LW(p) · GW(p)
I am in New York until Tuesday. DM me if you are in the area and want to meet up and talk about LW, how to use AI for research/thinking/writing, or broader rationality community things.
Currently lots of free time Saturday and Monday.
comment by habryka (habryka4) · 2019-07-14T18:10:55.934Z · LW(p) · GW(p)
Is intellectual progress in the head or in the paper?
Which of the two generates more value:
- A researcher writes up a core idea in their field, but only a small fraction of good people read it in the next 20 years
- A researcher gives a presentation at a conference to all the best researchers in their field, but none of them write up the idea later
I think which of the two will generate more value determines a lot of your strategy for how to go about creating intellectual progress. In one model, what matters is that the best individuals hear about the most important ideas in a way that then allows them to make progress on other problems. In the other model, what matters is that the idea gets written up as an artifact that can be processed and evaluated by reviewers and the proper methods of scientific progress, and then built upon when referenced and cited.
I think there is a tradeoff of short-term progress against long-term progress in these two approaches. I think many fields can go through intense periods of progress when focusing on just establishing communication between the best researchers of the field, but would be surprised if that period lasts longer than one or two decades. Here are some reasons for why that might be the case:
- A long-lasting field needs a steady supply of new researchers and thinkers, both to bring in new ideas, and also to replace the old researchers who retire. If you do not write up your ideas, the ability of a field to evaluate the competence of a researcher has to rely on the impressions of individual researchers. My sense is that relying on that kind of implicit impression does not survive multiple successions and will get corrupted within two decades by people trying to use their influence for other ends.
- You are blocking yourself off from interdisciplinary progress. After a decade or two, fields often end up in a rut that needs some new paradigm, or at least a new idea, to allow people to make progress again. If you don't write up your ideas publicly, you lose a lot of opportunities for interdisciplinary researchers to enter your field and bring in ideas from other places.
- You make it hard to reduce research debt, because there is no canonical reference that can be updated with better explanations and better definitions. (Current journals don't do particularly well on this, but it is an opportunity that wiki-like systems can take advantage of, as can some kind of set of published definitions like the DSM-5; new editions of textbooks also help with this.)
- If you are a theoretical field, you are making it harder for your ideas to get implemented or transformed into engineering problems. This prevents your field from visibly generating value, which reduces both the total number of people who want to join your field, and also the interest of other people in investing resources into your field
However, you also gain a large number of benefits, that will probably increase your short-term output significantly:
- Through the use of in-person conversations and conferences the cost of communicating a new idea and letting others build on it is often an order of magnitude smaller
- Your ability to identify the best talent can now be directly downstream of the taste of the best people in the field, which allows you to identify researchers who are not great at writing, but still great at thinking
- The complexity limit of any individual idea in your field is a lot higher, since the ideas get primarily transmitted via high-bandwidth channels
- Your feedback cycles of getting feedback on your ideas from other people in the field is a lot faster, since your ideas don't need to go through a costly writeup and review phase
My current model is that it's often good for research fields to go through short periods (< 2 years) in which there is a lot of focus on just establishing good communications among the best researchers, either with a parallel investment in trying to write up at least the basics of the discussion, or a subsequent clean-up period in which the primary focus is on writing up the core insights that all the best researchers converged on.
Replies from: Ruby, Pattern↑ comment by Ruby · 2019-07-15T05:14:33.032Z · LW(p) · GW(p)
The complexity limit of any individual idea in your field is a lot higher, since the ideas get primarily transmitted via high-bandwidth channels
Depends if you're sticking specifically to "presentation at a conference", which I don't think is necessarily that "high bandwidth". Very loosely, I think it's something like (ordered by "bandwidth"): repeated small-group or individual interaction (e.g. apprenticeship, collaboration) >> written materials >> presentations. I don't think I could have learned Kaj's models of multi-agent minds from a conference presentation (although possibly from a lecture series). I might have learnt even more if I was his apprentice.
↑ comment by Pattern · 2019-07-23T02:33:11.054Z · LW(p) · GW(p)
A researcher gives a presentation at a conference to all the best researchers in their field, but none of them write up the idea later
What if someone makes a video? (Or the powerpoint/s used in the conference are released to the public?)
Replies from: habryka4↑ comment by habryka (habryka4) · 2019-07-23T06:06:18.232Z · LW(p) · GW(p)
This was presuming that that would not happen (for example, because there is a vague norm that things are kind-of confidential and shouldn't be posted publicly).
comment by habryka (habryka4) · 2019-05-04T06:02:22.297Z · LW(p) · GW(p)
Thoughts on minimalism, elegance and the internet:
I have this vision for LessWrong of a website that gives you the space to think for yourself, and doesn't constantly distract you with flashy colors and bright notifications and vibrant pictures. Instead it tries to be muted in a way that allows you to access the relevant information, but still gives you the space to disengage from the content of your screen, take a step back and ask yourself "what are my goals right now?".
I don't know how well we achieved that so far. I like our frontpage, and I think the post-reading experience is quite exceptionally focused and clear, but I think there is still something about the way the whole site is structured, with its focus on recent content and new discussion that often makes me feel scattered when I visit the site.
I think a major problem is that LessWrong doesn't make it easy to do only a single focused thing on the site at a time, and it doesn't currently really encourage you to engage with the site in a focused way. We have the library, which I do think is decent, but the sequence navigation experience is not yet fully what I would like it to be, and when I go to the frontpage the primary thing I still see is recent content. Not the sequences I recently started reading, or the practice exercises I might want to fill out, or the open questions I might want to answer.
I think there are a variety of ways to address this, some of which I hope to build very soon:
+ The frontpage should show you not only recent content, but also show you much older historical content (that can be of much higher quality, due to being drawn from a much larger pool). [We have a working prototype of this, and I hope we can push it soon]
+ We should encourage you to read whole sequences at a time, instead of individual posts. If you start reading a sequence, you should be encouraged to continue reading it from the frontpage [This is also quite close to working]
+ There should be some way to encourage people to put serious effort into answering the most important open questions [This is currently mostly bottlenecked on making the open-question system/UX good enough to make real progress in]
+ You should be able to easily bookmark posts and comments to allow you to continue reading something at a later point in time [We haven't really started on this, but it's pretty straightforward, so I still think this isn't too far off]
+ I would love it if there were real rationality exercises in many of the sequences, in a way that would periodically require you to write essays and answer questions and generally check your understanding. This is obviously quite difficult to make happen, both in terms of UI, but also in terms of generating the content
I think if we had all of these, in particular the open questions one, then I would feel more like LessWrong is oriented towards my long-term growth instead of trying to give me short-term reinforcement. It would also create a natural space in which to encourage focused work, and generally make me feel less scattered when I visit the site, due to deemphasizing the most recent wave of content.
I do think there are problems with deemphasizing more recent content, mostly because this indirectly disincentivizes creating new content, which I do think would obviously be bad for the site. Though in some sense it might encourage the creation of longer-lived content, which would be quite good for the site.
Replies from: MakoYass↑ comment by mako yass (MakoYass) · 2019-05-04T23:19:40.869Z · LW(p) · GW(p)
The frontpage should show you not only recent content, but also show you much older historical content
When I was a starry eyed undergrad, I liked to imagine that reddit might resurrect old posts if they gained renewed interest, if someone rediscovered something and gave it a hard upvote, that would put it in front of more judges, which might lead to a cascade of re-approval that hoists the post back into the spotlight. There would be no need for reposts, evergreen content would get due recognition, a post wouldn't be done until the interest of the subreddit (or, generally, user cohort) is really gone.
Of course, reddit doesn't do that at all. Along with the fact that threads are locked after a year, this is one of many reasons it's hard to justify putting a lot of time into writing for reddit.
comment by habryka (habryka4) · 2019-04-27T19:28:25.066Z · LW(p) · GW(p)
Thoughts on negative karma notifications:
- An interesting thing that I and some other people on the LessWrong team noticed (as well as some users) was that since we created karma notifications we feel a lot more hesitant to downvote older comments, since we know that this will show up for the other users as a negative notification. I also feel a lot more hesitant to retract my own strong upvotes or upvotes in general since the author of the comment will see that as a downvote.
- I've had many days in a row in which I received +20 or +30 karma, followed by a single day where by chance I received a single downvote and ended up at -2. The emotional valence of having a single day at -2 was somehow stronger than the emotional valence of multiple days of +20 or +30.
↑ comment by Jan_Kulveit · 2019-04-29T19:47:53.575Z · LW(p) · GW(p)
What I noticed on the EA Forum is that the whole karma thing is messing with my S1 processes and making me unhappy on average. I've not only turned off the notifications, but also hidden all karma displays in comments via CSS, and the experience is much better.
Replies from: habryka4↑ comment by habryka (habryka4) · 2019-04-29T20:41:09.524Z · LW(p) · GW(p)
I... feel conflicted about people deactivating the display of karma on their own comments. In many ways karma (and downvotes in particular) serve as a really important feedback source, and I generally think that people who reliably get downvoted should change how they are commenting, and them not doing so usually comes at high cost. I think this is more relevant to new users, but is still relevant for most users.
Deactivating karma displays feels a bit to me like someone who shows up at a party and says "I am not going to listen to any subtle social feedback that people might give me about my behavior, and I will just do things until someone explicitly tells me to stop", which I think is sometimes the correct behavior and has some good properties in terms of encouraging diversity of discussion, but I also expect that this can have some pretty large negative impact on the trust and quality of the social atmosphere.
On the other hand, I want people to have control over the incentives that they are under, and think it's important to give users a lot of control over how they want to be influenced by the platform.
And there is also the additional thing, which is that if users just deactivate the karma display for their comments without telling anyone then that creates an environment of ambiguity where it's very unclear whether someone receives the feedback you are giving them at all. In the party metaphor this would be like showing up and not telling anyone that you are not going to listen to subtle social feedback, which I think can easily lead to unnecessary escalation of conflict.
I don't have a considered opinion on what to incentivize here, besides being pretty confident that I wouldn't want most people to deactivate their karma displays, and that I am glad that you told me here that you did. This means that I will err on the side of leaving feedback by replying in addition to voting (though this obviously comes at a significant cost to me, so it might be game theoretically better for me to not shift towards replying, but I am not sure of that. Will think more about it).
There are also some common-knowledge effects that get really weird when one person is interacting with the discussion with a different set of data than I am seeing. I.e. I am going to reply to a downvoted comment in a way that assumes that many people thought the comment was bad and will try to explain potential reasons for why people might have downvoted it, but if you have karma displays disabled then you might perceive me as making a kind of social attack where I claim the support of some kind of social group without backing it up. I think this makes me quite hesitant to participate in discussions with that kind of weird information asymmetry.
Replies from: SaidAchmiz, Jan_Kulveit↑ comment by Said Achmiz (SaidAchmiz) · 2019-04-29T21:12:07.323Z · LW(p) · GW(p)
Well… you can’t actually stop people from activating custom CSS that hides karma values. It doesn’t matter how you feel about it—you can’t affect it! It’s therefore probably best to create some mechanism that gives people what they want to get out of hiding karma, while still giving you what you want out of showing people karma (e.g., a “hide karma but give me a notification if one of my comments is quite strongly downvoted” option—not suggesting this exact thing, just brainstorming…).
Replies from: habryka4↑ comment by habryka (habryka4) · 2019-04-29T21:49:12.287Z · LW(p) · GW(p)
Hmm, I agree that I can't prevent it in that sense, but I think defaults matter a lot here, as does just normal social feedback and whatever the social norms are.
It's not at all clear to me that the current equilibrium isn't pretty decent, where people can do it, but it's reasonably inconvenient to do it, and so allows the people who are disproportionately negatively affected by karma notification to go that route. I would be curious whether there are any others who do the same as Jan does, and if there are many, then we can figure out what the common motivations are and see whether it makes sense to elevate it to some site-level feature.
Replies from: SaidAchmiz, Jan_Kulveit↑ comment by Said Achmiz (SaidAchmiz) · 2019-04-29T22:16:32.290Z · LW(p) · GW(p)
It’s not at all clear to me that the current equilibrium isn’t pretty decent, where people can do it, but it’s reasonably inconvenient to do it, and so allows the people who are disproportionately negatively affected by karma notification to go that route.
But this is an extremely fragile equilibrium. It can be broken by, say, someone posting a set of simple instructions on how to do this. For instance:
Anyone running the uBlock Origin browser extension can append several lines to their “My Filters” tab in the uBlock extension preferences, and thus totally hide all karma-related UI elements on Less Wrong. (PM me if you want the specific lines to append.)
Or someone makes a browser extension to do this. Or a user style. Or…
↑ comment by Jan_Kulveit · 2019-04-30T02:37:18.189Z · LW(p) · GW(p)
FWIW I also think it's quite possible the current equilibrium is decent (which is part of the reason why I did not post something like "How I turned karma off", with simple instructions for how to do it, on the forum, though I did consider it). On the other hand I'd be curious about more people trying it and reporting their experiences.
I suspect many people kind of don't have this action in the space of things they usually consider - I'd expect what most people would do is 1) just stop posting, 2) write about their negative experience, or 3) complain privately.
↑ comment by Jan_Kulveit · 2019-04-30T02:29:12.995Z · LW(p) · GW(p)
Actually I turned off the karma display for all comments, not just mine. The bold claim is that my individual taste in what's good on the EA Forum is in important ways better than the karma system, and that the karma signal is similar to the sounds made by a noisy mob. If I want, I can actually predict reasonably well what sounds, on average, the crowd will make, so it is not any new source of information. But it still messes with your S1 processing and motivations.
Continuing with the party metaphor, I think it is generally not that difficult to understand what sort of behaviour will make you popular at a party, and what sort of behaviours, even when they are quite good in the broader scheme of things, will make you unpopular at parties. Also, personally I often feel something like "I actually want to have good conversations about juicy topics in a quiet place; unfortunately you people are all congregating in this super loud space, with all these status games, social signals, and ethically problematic norms for how to treat other people" toward most parties.
Overall I posted this here because it seemed like an interesting datapoint. Generally I think it would be great if people moved toward writing information-rich feedback instead of voting, so such a shift seems good. From what I've seen on the EA Forum it's quite rarely "many people" doing anything. More often it is something like: 6 users upvote a comment, 1 user strongly downvotes it, and karma of about 2 is the result. I would guess you may be at greater risk of the distorted perception that this represents some meaningful opinion of the community. (Also, I see some important practical cases where people are misled by the "noises of the crowd" and it influences them in a harmful way.)
↑ comment by Zvi · 2019-04-28T23:32:36.723Z · LW(p) · GW(p)
If people are checking karma changes constantly and getting emotional validation or pain from the result, that seems like a bad result. And yes, the whole 'one -2 and three +17s feels like everyone hates me' thing is real, can confirm.
Replies from: habryka4↑ comment by habryka (habryka4) · 2019-04-29T00:21:51.470Z · LW(p) · GW(p)
You can't check karma changes constantly (unless you go out of your way to change your settings), because we batch karma notifications on a 24h basis by default.
Replies from: DanielFilan, Benito↑ comment by DanielFilan · 2019-04-30T18:36:08.424Z · LW(p) · GW(p)
I mean, you can definitely check your karma multiple times a day to see where the last two sig digits are at, which is something I sometimes do.
Replies from: habryka4↑ comment by habryka (habryka4) · 2019-04-30T18:40:24.465Z · LW(p) · GW(p)
True. We did very intentionally avoid putting your total karma anywhere on the frontpage, as most other platforms do, to avoid people getting sucked into that unintentionally, but you can still check it on your profile.
I hope we aren't wasting a lot of people's time by causing them to check their profile all the time. If we do, it might be the correct choice to also only update that number every 24h.
Replies from: RobbBB, DanielFilan↑ comment by Rob Bensinger (RobbBB) · 2019-04-30T23:17:48.363Z · LW(p) · GW(p)
I've never checked my karma total on LW 2.0 to see how it's changed.
↑ comment by DanielFilan · 2019-04-30T21:40:06.551Z · LW(p) · GW(p)
In my case, it sure feels like I check my karma often because I often want to know what my karma is, but maybe others differ.
↑ comment by Ben Pace (Benito) · 2019-04-29T01:01:22.612Z · LW(p) · GW(p)
Do our karma notifications disappear if you don’t check them that day? My model of Zvi suggested to me this is attention-grabbing and bad. I wonder if it’s better to let folks be notified of all days’ karma updates ‘til their most recent check-in, and maybe also see all historical ones ordered by date if they click on a further button, so that the info isn’t lost and doesn’t feel scarce.
Replies from: habryka4↑ comment by habryka (habryka4) · 2019-04-29T01:26:33.791Z · LW(p) · GW(p)
Nah, they accumulate until you click on them.
Replies from: Zvi↑ comment by Zvi · 2019-04-29T12:08:47.879Z · LW(p) · GW(p)
Which is definitely better than it expiring, and 24h batching is better than instantaneous feedback (unless you were going to check posts individually for information already, in which case things are already quite bad). It's not obvious to me what encouraging daily checks here is doing for discourse as opposed to being a Skinner box.
Replies from: Raemon↑ comment by Raemon · 2019-04-29T20:04:05.494Z · LW(p) · GW(p)
The motivation was (among other things) several people saying to us "yo, I wish LessWrong was a bit more of a Skinner box, because right now it's so thoroughly not a Skinner box that it just doesn't make it into my habits, and I endorse it being a stronger habit than it currently is."
See this comment and thread [LW(p) · GW(p)].
↑ comment by Shmi (shminux) · 2019-04-27T20:27:23.765Z · LW(p) · GW(p)
It's interesting to see how people's votes on a post or comment are affected by other comments. I've noticed that a burst of vote count changes often appears after a new and apparently influential reply shows up.
↑ comment by mako yass (MakoYass) · 2019-04-28T03:21:55.996Z · LW(p) · GW(p)
Reminder: If a person is not willing to explain their voting decisions, you are under no obligation to waste cognition trying to figure them out. They don't deserve that. They probably don't even want that.
Replies from: Vladimir_Nesov, SaidAchmiz, habryka4↑ comment by Vladimir_Nesov · 2019-05-04T14:55:03.000Z · LW(p) · GW(p)
That depends on what norm is in place. If the norm is to explain downvoting, then people should explain, otherwise there is no issue in not doing so. So the claim you are making is that the norm should be for people to explain. The well-known counterargument is that this disincentivizes downvoting.
you are under no obligation to waste cognition trying to figure them out
There is rarely an obligation to understand things, but healthy curiosity ensures progress on recurring events, irrespective of morality of their origin. If an obligation would force you to actually waste cognition, don't accept it!
Replies from: MakoYass↑ comment by mako yass (MakoYass) · 2019-05-05T09:07:21.283Z · LW(p) · GW(p)
So the claim you are making is that the norm should be for people to explain
I'm not really making that claim. A person doesn't have to do anything condemnable to be in a state of not deserving something. If I don't pay the baker, I don't deserve a bun. I am fine with not deserving a bun, as I have already eaten.
The baker shouldn't feel like I am owed a bun.
Another metaphor is that the person who is beaten on the street by silent, masked assailants should not feel like they owe their oppressors an apology.
↑ comment by Said Achmiz (SaidAchmiz) · 2019-04-28T03:29:49.749Z · LW(p) · GW(p)
Do you mean anything by this beyond “you don’t have an obligation to figure out why people voted one way or another, period”? (Or do you think that I [i.e., the general Less Wrong commenter] do have such an obligation?)
Edit: Also, the “They don’t deserve that” bit confuses me. Are you suggesting that understanding why people upvoted or downvoted your comment is a favor that you are doing for them?
Replies from: MakoYass↑ comment by mako yass (MakoYass) · 2019-04-28T05:44:58.326Z · LW(p) · GW(p)
Sometimes a person won't want to reply and say outright that they thought the comment was bad, because it's just not pleasant, and perhaps not necessary. Instead, they might just reply with information that they think you might be missing, which you could use to improve, if you chose to. With them, an engaged interlocutor will be able to figure out what isn't being said. With them, it can be productive to try to read between the lines.
Are you suggesting that understanding why people upvoted or downvoted your comment is a favor that you are doing for them?
Isn't everything relating to writing good comments a favor that you are doing for others? But I don't really think in terms of favors. All I mean to say is that we should write our comments for the sorts of people who give feedback. Those are the good people. Those are the people who're a part of a good-faith, self-improving discourse. Their outgroup is maybe not so good, and we probably shouldn't try to write for their sake.
↑ comment by habryka (habryka4) · 2019-04-28T06:13:07.816Z · LW(p) · GW(p)
I think I disagree. If you are getting downvoted by 5 people and one of them explains why, then even if the other 4 are not explaining their reasoning, it's often reasonable to assume that more than just the one person had the same complaint, and as such you likely want to update more towards it being better for you to change what you are doing.
Replies from: MakoYass↑ comment by mako yass (MakoYass) · 2019-04-28T21:47:06.269Z · LW(p) · GW(p)
We don't disagree.
Replies from: habryka4↑ comment by habryka (habryka4) · 2019-04-28T22:02:11.026Z · LW(p) · GW(p)
Cool
comment by habryka (habryka4) · 2019-09-13T04:57:17.342Z · LW(p) · GW(p)
Thoughts on impact measures and making AI traps
I was chatting with TurnTrout today about impact measures, and ended up making some points that I think are good to write up more generally.
One of the primary reasons why I am usually unexcited about impact measures is that I have a sense that they often "push the confusion into a corner" in a way that actually makes solving the problem harder. As a concrete example, I think a bunch of naive impact regularization metrics basically end up shunting the problem of "get an AI to do what we want" into the problem of "prevent the agent from interfering with other actors in the system".
The second one sounds easier, but mostly just turns out to also require a coherent concept and reference of human preferences to resolve, so you gain very little from pushing the problem around that way, and sometimes get a false sense of security because the problem appears to be solved in some of the toy problems you constructed.
I am definitely concerned that TurnTrout's AUP does the same, just in a more complicated way, but am a bit more optimistic than that, mostly because I do have a sense that in the AUP case there is actually some meaningful reduction going on, though I am unsure how much.
In the context of thinking about impact measures, I've also recently been thinking about the degree to which "trap-thinking" is actually useful for AI Alignment research. I think Eliezer was right in pointing out that a lot of people, when first considering the problem of unaligned AI, end up proposing some kind of simple solution like "just make it into an oracle" and then consider the problem solved.
I think he is right that it is extremely dangerous to consider the problem solved after solutions of this type, but it isn't obvious that there isn't some good work that can be done that is born out of the frame of "how can I trap the AI and make it marginally harder for it to be dangerous, basically pretending it's just a slightly smarter human?".
Obviously those kinds of efforts won't solve the problem, but they still seem like good things to do anyways, even if they just buy you a bit of time, or help you notice a bit earlier if your AI is actually engaging in some kind of adversarial modeling.
My broad guess is that research of this type is likely very cheap and much more scalable, and you hit diminishing marginal returns on it much faster than you would on AI Alignment research that is tackling the core problem, so it might just be fine to punt it until later. Though if you are acting on very short timelines, it probably should still be someone's job to make sure that someone at DeepMind tries to develop the obvious transparency technologies to help you spot if your neural net has any large fractions of it dedicated to building sophisticated human modeling, even if this won't solve the problem in the long run.
This perspective, combined with Wei Dai's recent comments that one job of AI Alignment researchers is to produce evidence that the problem is actually difficult, suggests that it might be a good idea for some people to just try to develop lots of benchmarks of adversarial behavior that have any chance of triggering before you have a catastrophic failure. Like, it seems obviously great to have a paper that takes some modern ML architecture and clearly demonstrates in which cases it might engage in adversarial modeling, and maybe some remotely realistic scenarios where that might happen.
My current guess is that current ML architectures aren't really capable of adversarial modeling in this way, though I am not actually that confident of that, and actually would be somewhat surprised if you couldn't get any adversarial behavior out of a dedicated training regime, if you were to try. For example, let's say I train an RL-based AI architecture on chat interactions with humans in which it just tries to prolong the length of the chat session as much as possible. I would be surprised if the AI wouldn't build pretty sophisticated models of human interactions, and try some weird tactics like getting the human to believe that it is another human, pretending that it is performing some long calculation, or deceiving the humans in a large variety of ways, at least if it was pretrained with a language model of comparable quality to GPT-2, and had similar resources going to it as OpenAI Five. Though it's also unclear to what degree this would actually give us evidence about treacherous turn scenarios.
I've also been quite curious about the application of ML to computer security, where an obvious experiment is to just try to set up some reasonable RL architecture in which I have an AI interface with a webserver, trying to get access to some set of files that it shouldn't get access to. The problem here is obviously the sparse reward landscape, and there really isn't an obvious training regime here, but showing how even current AI could possibly leverage security vulnerabilities in a lot of systems in a way that could easily give rise to unintended side-effects could be a valuable goal. But in general, training RL for almost anything is really hard, so this seems unlikely to work straightforwardly.
Overall, I am not sure how I feel about the perspective I am exploring above. I have a deep sense that a lot of it is just trying to dodge the hard parts of the problem, but it seems fine to put on my "short-term, increase the marginal difficulty of bad outcomes" hat for a bit and see how I feel after exploring it for a while.
Replies from: matthew-barnett, TurnTrout↑ comment by Matthew Barnett (matthew-barnett) · 2019-09-13T07:20:07.805Z · LW(p) · GW(p)
[ETA: This isn't a direct reply to the content in your post. I just object to your framing of impact measures, so I want to put my own framing in here]
I tend to think that impact measures are just tools in a toolkit. I don't focus on arguments of the type "We just need to use an impact measure and the world is saved" because this indeed would be diverting attention from important confusion. Arguments for not working on them are instead more akin to saying "This tool won't be very useful for building safe value aligned agents in the long run." I think that this is probably true if we are looking to build aligned systems that are competitive with unaligned systems. By definition, an impact penalty can only limit the capabilities of a system, and therefore does not help us to build powerful aligned systems.
To the extent that they meaningfully make cognitive reductions, this is much more difficult for me to analyze. On one hand, I can see a straightforward case for everyone being on the same page when the word "impact" is used. On the other hand, I'm skeptical that this terminology will meaningfully input into future machine learning research.
The above two things are my main critiques of impact measures personally.
↑ comment by TurnTrout · 2019-09-20T23:58:08.692Z · LW(p) · GW(p)
I think a natural way of approaching impact measures is asking "how do I stop a smart unaligned AI from hurting me?" and patching hole after hole. This is really, really, really not the way to go about things. I think I might be equally concerned and pessimistic about the thing you're thinking of.
The reason I've spent enormous effort on Reframing Impact [LW · GW] is that the impact-measures-as-traps framing is wrong! The research program I have in mind is: let's understand instrumental convergence on a gears level. Let's understand why instrumental convergence tends to be bad on a gears level. Let's understand the incentives so well that we can design an unaligned AI which doesn't cause disaster by default.
The worst-case outcome is that we have a theorem characterizing when and why instrumental convergence arises, but find out that you can't obviously avoid disaster-by-default without aligning the actual goal. This seems pretty darn good to me.
comment by habryka (habryka4) · 2019-05-02T19:05:53.491Z · LW(p) · GW(p)
Printing more rationality books: I've been quite impressed with the success of the printed copies of R:A-Z and think we should invest resources into printing more of the other best writing that has been posted on LessWrong and the broad diaspora.
I think a Codex book would be amazing, but I think there also exists potential for printing smaller books on things like Slack/Sabbath/etc., and many other topics that have received a lot of other coverage over the years. I would also be really excited about printing HPMOR, though that has some copyright complications to it.
My current model is that there exist many people interested in rationality who don't like reading longform things on the internet and are much more likely to read things when they are in printed form. I also think there is a lot of value in organizing writing into book formats. There is also the benefit that the book now becomes a potential gift for someone else to read, which I think is a pretty common way ideas spread.
I have some plans to try to compile some book-length sequences of LessWrong content and see whether we can get things printed (obviously in coordination with the authors of the relevant pieces).
Replies from: DanielFilan↑ comment by DanielFilan · 2020-12-02T16:31:48.448Z · LW(p) · GW(p)
Congratulations! Apparently it worked!
comment by habryka (habryka4) · 2019-04-30T18:28:29.037Z · LW(p) · GW(p)
Forecasting on LessWrong: I've been thinking for quite a while about somehow integrating forecasts and prediction-market like stuff into LessWrong. Arbital has these small forecasting boxes that look like this:
I generally liked these, and think they provided a good amount of value to the platform. I think our implementation would probably take up less space, but the broad gist of Arbital's implementation seems like a good first pass.
I do also have some concerns about forecasting and prediction markets. In particular I have a sense that philosophical and mathematical progress only rarely benefits from attaching concrete probabilities to things, and works more via mathematical proof and via trying to achieve very high confidence in some simple claims by ruling out all other interpretations as obviously contradictory. I am worried that emphasizing probability much more on the site would make it harder to make progress on those kinds of issues.
I also think a lot of intellectual progress is primarily ontological, and given my experience with existing forecasting platforms and Zvi's sequence on prediction markets, they are not very good at resolving ontological confusions and often seem to actively hinder that resolution by creating sunk costs in easy-to-operationalize ontologies that tend to dominate the platforms.
And then there is the question of whether we want to go full-on internal prediction market and have active markets that are traded in some kind of virtual currency that people actually care about. I think there is a lot of value in that direction, but it's also a lot of engineering effort that isn't obviously worth it. It seems likely better to wait until a project like foretold.io has reached maturity and then see whether we can integrate it into LessWrong somehow.
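(For concreteness: if we ever did roll our own internal market, one standard mechanism it could be built on is Hanson's logarithmic market scoring rule. The sketch below is purely illustrative; the class and parameter names are made up for the example, and it isn't a description of anything planned for LessWrong.)

```python
import math

class LMSRMarket:
    """Toy logarithmic market scoring rule for a yes/no question (illustrative only)."""

    def __init__(self, liquidity: float = 100.0):
        self.b = liquidity          # higher b = prices move more slowly
        self.shares = [0.0, 0.0]    # outstanding YES / NO shares

    def _cost(self, shares) -> float:
        return self.b * math.log(sum(math.exp(q / self.b) for q in shares))

    def price(self, outcome: int) -> float:
        """Current implied probability of the given outcome (0 = YES, 1 = NO)."""
        exps = [math.exp(q / self.b) for q in self.shares]
        return exps[outcome] / sum(exps)

    def buy(self, outcome: int, amount: float) -> float:
        """Buy `amount` shares of an outcome; returns the cost in site currency."""
        old_cost = self._cost(self.shares)
        self.shares[outcome] += amount
        return self._cost(self.shares) - old_cost

market = LMSRMarket(liquidity=100.0)
cost = market.buy(0, 20.0)                 # a user buys 20 YES shares
print(round(cost, 2), round(market.price(0), 3))
```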
Replies from: Zvi, RobbBB↑ comment by Zvi · 2019-05-03T18:34:04.684Z · LW(p) · GW(p)
This feature is important to me. It might turn out to be a dud, but I would be excited to experiment with it. If it was available in a way that was portable to other websites as well, that would be even more exciting to me (e.g. I could do this in my base blog).
Note that this feature can be used for more than forecasting. One key use case on Arbital was to see who was willing to endorse or disagree with various claims relevant to the post, and to what extent. That seemed very useful.
I don't think having internal betting markets is going to add enough value to justify the costs involved. Especially since it both can't be real money (for legal reasons, etc) and can't not be real money if it's going to do what it needs to do.
Replies from: habryka4↑ comment by habryka (habryka4) · 2019-05-03T19:08:51.070Z · LW(p) · GW(p)
There are some external platforms that one could integrate with, here is one that is run by some EA-adjacent people: https://www.empiricast.com/
I am currently confused about whether using an external service is a good idea. In some sense it makes things more modular, but it also limits the UI design-space a lot and lengthens the feedback loop. I think I am currently tending towards rolling our own solution and maybe allowing others to integrate it into their own sites.
↑ comment by Rob Bensinger (RobbBB) · 2019-04-30T23:35:02.135Z · LW(p) · GW(p)
One small thing you could do is to have probability tools be collapsed by default on any AIAF posts (and maybe even on the LW versions of AIAF posts).
Also, maybe someone should write a blog post that's a canonical reference for 'the relevant risks of using probabilities that haven't already been written up', in advance of the feature being released. Then you could just link to that a bunch. (Maybe even include it in the post that explains how the probability tools work, and/or link to that post from all instances of the probability tool.)
Another idea: Arbital had a mix of (1) 'specialized pages that just include a single probability poll and nothing else'; (2) 'pages that are mainly just about listing a ton of probability polls'; and (3) 'pages that have a bunch of other content but incidentally include some probability polls'.
If probability polls on LW mostly looked like 1 and 2 rather than 3, then that might make it easier to distinguish the parts of LW that should be very probability-focused from the parts that shouldn't. I.e., you could avoid adding Arbital's feature for easily embedding probability polls in arbitrary posts (and/or arbitrary comments), and instead treat this more as a distinct kind of page, like 'Questions'.
You could still link to the 'Probability' pages prominently in your post, but the reduced prominence and site support might cause there to be less social pressure for people to avoid writing/posting things out of fears like 'if I don't provide probability assignments for all my claims in this blog post, or don't add a probability poll about something at the end, will I be seen as a Bad Rationalist?'
Replies from: RobbBB↑ comment by Rob Bensinger (RobbBB) · 2019-04-30T23:36:36.309Z · LW(p) · GW(p)
Also, if you do something Arbital-like, I'd find it valuable if the interface encourages people to keep updating their probabilities later as they change. E.g., some (preferably optional) way of tracking how your view has changed over time. Probably also make it easy for people to re-vote without checking (and getting anchored by) their old probability assignment, for people who want that.
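(A minimal sketch of a data model that would support this, purely illustrative; none of these names correspond to anything in the actual LessWrong codebase. The idea is to store every estimate append-only, surface only the latest one per user by default, and make the full history an opt-in view.)

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProbabilityEstimate:
    user_id: str
    probability: float          # in [0, 1]
    created_at: datetime

@dataclass
class ProbabilityPoll:
    claim: str
    estimates: list[ProbabilityEstimate] = field(default_factory=list)

    def vote(self, user_id: str, probability: float) -> None:
        # Append-only: old estimates are kept for opt-in history views,
        # but nothing is shown to the user at vote time (no anchoring).
        self.estimates.append(
            ProbabilityEstimate(user_id, probability, datetime.now(timezone.utc))
        )

    def latest(self, user_id: str) -> float | None:
        # Default display: only the most recent estimate per user.
        mine = [e for e in self.estimates if e.user_id == user_id]
        return mine[-1].probability if mine else None

    def history(self, user_id: str) -> list[ProbabilityEstimate]:
        # Opt-in view for users who want to show how they updated over time.
        return [e for e in self.estimates if e.user_id == user_id]
```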
Replies from: Benito↑ comment by Ben Pace (Benito) · 2019-05-01T01:03:22.104Z · LW(p) · GW(p)
Note that Paul Christiano warns against encouraging sluggish updating by massively publicising people’s updates and judging them on it. Not sure what implementation details this suggests yet, but I do want to think about it.
https://sideways-view.com/2018/07/12/epistemic-incentives-and-sluggish-updating/
Replies from: RobbBB↑ comment by Rob Bensinger (RobbBB) · 2019-05-01T18:06:57.710Z · LW(p) · GW(p)
Yeah, strong upvote to this point. Having an Arbital-style system where people's probabilities aren't prominently timestamped might be the worst of both worlds, though, since it discourages updating and makes it look like most people never do it.
I have an intuition that something socially good might be achieved by seeing high-status rationalists treat ass numbers as ass numbers, brazenly assign wildly different probabilities to the same proposition week-by-week, etc., especially if this is a casual and incidental thing rather than being the focus of any blog posts or comments. This might work better, though, if the earlier probabilities vanish by default and only show up again if the user decides to highlight them.
(Also, if a user repeatedly abuses this feature to look a lot more accurate than they really were, this warrants mod intervention IMO.)
comment by habryka (habryka4) · 2024-08-15T23:48:45.304Z · LW(p) · GW(p)
We are rolling out some new designs for the post page:
Old:
New:
The key goal was to prioritize the most important information and declutter the page.
The most opinionated choice I made was to substantially de-emphasize karma at the top of the post page. I am not totally sure whether that is the right choice, but I think the primary purpose of karma is to use it to decide what to read before you click on a post, which makes it less important to be super prominent when you are already on a post page, or when you are following a link from some external website.
The bottom of the post still has very prominent karma UI to make it easy for people to vote after they finished reading a post (and to calibrate on reception before reading the comments).
This redesign also gives us more space in the right column, which we will soon be filling with new side-note UI and an improved inline-react experience.
The mobile UI is mostly left the same, though we did decide to remove post-tags from the top of the mobile page and only make them visible below the post, because they took up too much space.
Feel free to comment here with feedback. I expect we will be iterating on the new design quite actively over the coming days and weeks.
Replies from: neel-nanda-1, MondSemmel, Vladimir_Nesov, shankar-sivarajan, NoUsernameSelected, jarviniemi, Yoav Ravid, Alex_Altair, papetoast, MondSemmel, abandon, Richard_Kennaway, roboticali, Perhaps, Zach Stein-Perlman, interstice, quila↑ comment by Neel Nanda (neel-nanda-1) · 2024-08-16T07:22:51.041Z · LW(p) · GW(p)
I really don't like the removal of the comment counter at the top, because that gave a link to skip to the comments. I fairly often want to skip immediately to the comments to e.g. get a vibe for whether the post is worth reading, and having a one-click skip to it is super useful; not having that feels like a major degradation to me
Replies from: habryka4↑ comment by habryka (habryka4) · 2024-08-16T07:43:28.521Z · LW(p) · GW(p)
The link is now on the bottom left of the screen, and in contrast to the previous design it should always be in the same location (whereas its previous position depended on how long the username was and some other details). I also care quite a bit about a single-click navigate to the comments.
Replies from: neel-nanda-1, Zach Stein-Perlman, Stefan42↑ comment by Neel Nanda (neel-nanda-1) · 2024-08-16T08:55:20.108Z · LW(p) · GW(p)
Ah! Hmm, that's a lot better than nothing, but pretty out of the way, and easy to miss. Maybe making it a bit bigger or darker, or bolding it? I do like the fact that it's always there as you scroll
↑ comment by Zach Stein-Perlman · 2024-08-17T02:23:22.982Z · LW(p) · GW(p)
I can't jump to the comments on my phone.
Replies from: habryka4↑ comment by habryka (habryka4) · 2024-08-17T02:25:59.697Z · LW(p) · GW(p)
Ah, oops, that's actually just a bug. Will fix.
↑ comment by StefanHex (Stefan42) · 2024-09-02T14:15:33.317Z · LW(p) · GW(p)
Even after reading this (2 weeks ago), today I couldn't manage to find the comment link and manually scrolled down. I later noticed it (at the bottom left), but it's so far away from everything else. I think putting it somewhere at the top, near the rest of the UI, would be much easier for me
Replies from: habryka4↑ comment by habryka (habryka4) · 2024-09-02T18:19:22.540Z · LW(p) · GW(p)
Yeah, we'll probably make that adjustment soon. I also currently think the comment link is too hidden, even after trying to get used to it for a while.
↑ comment by MondSemmel · 2024-08-16T05:41:36.034Z · LW(p) · GW(p)
My impression: The new design looks terrible. There's suddenly tons of pointless whitespace everywhere. Also, I'm very often the first or only person to tag articles, and if the tagging button is so inconvenient to reach, I'm not going to do that.
Until I saw this shortform, I was sure this was a Firefox bug, not a conscious design decision.
Replies from: habryka4↑ comment by habryka (habryka4) · 2024-08-16T05:48:28.925Z · LW(p) · GW(p)
The total amount of whitespace is actually surprisingly similar to the previous design, we just now actually make use of the right column and top-right corner. I think we currently lose like 1-2 lines of text depending on the exact screen size and number of tags, so it's roughly the same amount of total content and whitespace, but with the title and author emphasized a lot more.
I am sad about making the add-tag button less prominent for people who tag stuff, but it's only used by <1% of users or so, and so not really worth the prominent screen real estate where it was previously. I somewhat wonder whether we might be able to make it work by putting it to the left of the tag list, where it might be able to fade somewhat more into the background while still being available. The previous tag UI was IMO kind of atrocious and took up a huge amount of screen real estate, but I am not super confident the current UI is ideal (especially from the perspective of adding tags).
Replies from: andrei-alexandru-parfeni↑ comment by sunwillrise (andrei-alexandru-parfeni) · 2024-08-16T07:26:43.724Z · LW(p) · GW(p)
I am sad about making the add-tag button less prominent for people who tag stuff, but it's only used by <1% of users or so, and so not really worth the prominent screen real estate where it was previously.
I really don't understand the reasoning here. As I see it, tagging is a LW public good that is currently undersupplied, and the "prominent screen real estate" is pretty much the only reason it is not even more undersupplied. "We have this feature that users can use to make the site better for everyone, but it's not being used as much as we'd want, so it's not such a big deal if we make it less prominent" seems backwards to me; the solution would seem to be to make it even more prominent, no? With a subgoal of increasing the proportion of "people who tag stuff" to much more than 1%.
Let's make this more concrete: does LW not already suffer from the problem that too few people regularly tag posts (at least with the requisite degree of care)? As a mod, you should definitely have more data on this, and perhaps you do and believe I am wrong about this, but in my experience, tags are often missing, improper, etc., until some of the commenters try (and often fail) to pick up the slack. This topic has been talked about [LW · GW] for a long time, ever since the tagging system began, with many users suggesting [LW(p) · GW(p)] that the tags be made even more prominent at the top of a post. Raemon even said [LW(p) · GW(p)], just over a week ago:
I notice some people go around tagging posts with every plausible tag that possibly seems like it could fit. I don't think this is a good practice – it results in an extremely overwhelming and cluttered tag-list, which you can't quickly skim to figure out "what is this post actually about?", and I roll to disbelieve on "stretch-tagging" actually helping people who are searching tag pages.
And in response, Joseph Miller pointed out [LW(p) · GW(p)]:
There should probably be guidance on this when you go to add a tag. When I write a post I just randomly put some tags and have never previously considered that it might be prosocial to put more or less tags on my post.
This certainly seems like a problem that gets solved by increasing community involvement in tagging, so that it's not just the miscalibrated or idiosyncratic beliefs of a small minority of users that determine what gets tagged with what. And making the tags harder to notice seems like it shifts the incentives in the complete opposite direction.
Replies from: habryka4↑ comment by habryka (habryka4) · 2024-08-16T07:50:58.995Z · LW(p) · GW(p)
I am confused about the quote. Indeed, in that quote Ray is complaining about people tagging things too aggressively, saying basically the opposite of your previous paragraph (i.e. he is complaining that tags are currently often too prominent, look too cluttered, and some users tag too aggressively).
My current sense is that tagging is going well and I don't super feel like I want to increase the amount of tagging that people do (though I do think much less tagging would be bad).
It's also the case that tagging is the kind of task that has a decent chance of being substantially automated with AI systems, and indeed, if I wanted to tackle the problem of posts not being reliably tagged, I would focus on doing so in an automated way, now that LLMs are quite good and cheap at this kind of intellectual labor. I don't think that could fully solve the problem, and we would still need a human in the loop, but I think it could easily speed up tagging efficiency by 20x+. I've been thinking about building an auto-tagger, and might do so if we see tagging activity drop as a result of making these buttons less prominent.
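(To give a sense of what such an auto-tagger could look like, here is a minimal sketch assuming the OpenAI Python client. The model name, tag list, and `propose_tags` helper are placeholders made up for illustration; any real version would keep a human reviewing the suggestions.)

```python
# A minimal sketch of an LLM-assisted auto-tagger (illustrative only).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SITE_TAGS = ["AI", "Rationality", "World Modeling", "Community", "Practical"]

def propose_tags(post_title: str, post_body: str, max_tags: int = 3) -> list[str]:
    prompt = (
        "You label posts for a discussion forum. Choose up to "
        f"{max_tags} tags from this list: {', '.join(SITE_TAGS)}.\n"
        "Return the tags as a comma-separated list and nothing else.\n\n"
        f"Title: {post_title}\n\nPost:\n{post_body[:4000]}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    raw = response.choices[0].message.content or ""
    # Keep only tags that actually exist on the site; a human still reviews these.
    return [t.strip() for t in raw.split(",") if t.strip() in SITE_TAGS]
```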
Replies from: andrei-alexandru-parfeni↑ comment by sunwillrise (andrei-alexandru-parfeni) · 2024-08-16T07:54:03.084Z · LW(p) · GW(p)
(i.e. he is complaining that tags are currently often too prominent, look too cluttered, and some users tag too aggressively).
Right, but the point I was trying to make is that the reason why this happens is because you don't have sufficient engagement from the broader community in this stuff, so when mistakes like these happen (maybe because the people doing the tagging are a small and unrepresentative sample of the LW userbase), they don't get corrected quickly because there are too few people to do the correcting. Do you disagree with this?
Replies from: habryka4↑ comment by habryka (habryka4) · 2024-08-16T07:57:11.466Z · LW(p) · GW(p)
I think it's messy. In this case, it seems like the problem would have never appeared in the first place if the tagging button had been less available. I agree many other problems would be better addressed by having more people participate in the tagging system.
↑ comment by Vladimir_Nesov · 2024-08-17T14:09:21.521Z · LW(p) · GW(p)
The new design seems to be influenced by the idea that spreading UI elements across greater distances (reducing their local density) makes the interface less cluttered. I think it's a little bit the other way around: shorter distances, with everything in one place, make it easier to chunk and navigate. But overall the effect is small either way, and the design of spreading the UI elements this way is sufficiently unusual that it will be slightly confusing to many people.
Replies from: habryka4↑ comment by habryka (habryka4) · 2024-08-17T17:03:20.043Z · LW(p) · GW(p)
I don't really think that's the primary thing going on. I think one of the key issues with the previous design was the irregularity of the layout. Everything under the header would wrap and basically be in one big jumble, with the number and length of the author names changing where the info on the number of comments is, and where the tags section starts.
It also didn't communicate a good hierarchy of which information is important. Ultimately, all you need to start reading a post is the title and the content. The current design communicates the optionality of things like karma and tags better, whereas the previous design communicated that those pieces of information might need to be understood before you start reading.
↑ comment by Shankar Sivarajan (shankar-sivarajan) · 2024-08-16T03:38:28.685Z · LW(p) · GW(p)
The title is annoyingly large.
I like the table of contents on the left becoming visible only upon mouseover.
↑ comment by NoUsernameSelected · 2024-08-16T02:47:52.084Z · LW(p) · GW(p)
Why remove "x min read"? Even if it's not gonna be super accurate between different people's reading speeds, I still found it very helpful to decide at a glance how long a post is (e.g. whether to read it on the spot or bookmark it for later).
Showing the word count would also suffice.
Replies from: habryka4↑ comment by habryka (habryka4) · 2024-08-16T02:55:06.400Z · LW(p) · GW(p)
Mostly because there is a prior against any additional UI element, since each one adds complexity.
In this case, with the new ToC progress bar that is now always visible, you can quickly gauge the length of the post by checking the length of the progress bar relative to the viewport indicator. It's an indirect inference, but I've gotten used to it pretty quickly. You can also still see the word count on hover-over.
Replies from: neel-nanda-1, NoUsernameSelected↑ comment by Neel Nanda (neel-nanda-1) · 2024-08-16T07:24:44.995Z · LW(p) · GW(p)
I find a visual indicator much less useful and harder to reason about than a number, I feel pretty sad at lacking this. How hard would it be to have as an optional addition?
Replies from: habryka4↑ comment by habryka (habryka4) · 2024-08-16T07:58:35.932Z · LW(p) · GW(p)
Maintaining many different design variants pretty inevitably leads to visual bugs and things being broken, so I am very hesitant to allow people to customize things at this level (almost every time we've done that in the past the custom UI broke in some way within a year or two, people wouldn't complain to us, and in some cases, we would hear stories 1-2 years later that someone stopped using LW because "it started looking broken all the time").
We are likely shipping an update to make the reading time easier to parse in the post-hover preview, to compensate somewhat for it not being available on the post page directly. I am kind of curious in which circumstances you would end up clicking through to the post page without having gotten the hover-preview first (mobile is the obvious one, though we are just adding back the reading time on mobile; that was an oversight on my part).
Replies from: neel-nanda-1↑ comment by Neel Nanda (neel-nanda-1) · 2024-08-16T08:58:17.302Z · LW(p) · GW(p)
Typically, opening a bunch of posts that look interesting and processing them later, or being linked to a post (which is pretty common in safety research, since often a post will be linked, shared on slack, cited in a paper, etc) and wanting to get a vibe for whether I can be bothered to read it. I think this is pretty common for me.
I would be satisfied if hovering over eg the date gave me info like the reading time.
Another thing I just noticed: on one of my posts, it's now higher friction to edit it, since there's not the obvious 3 dots button (I eventually found it in the top right, but it's pretty easy to miss and out of the way)
Replies from: habryka4↑ comment by habryka (habryka4) · 2024-08-16T17:09:43.208Z · LW(p) · GW(p)
I would be satisfied if hovering over eg the date gave me info like the reading time.
Oh, yeah, sure, I do think this kind of thing makes sense. I'll look into what the most natural place for showing it on hover is (the date seems like a reasonable first guess).
on one of my posts, it's now higher friction to edit it, since there's not the obvious 3 dots button (I eventually found it in the top right, but it's pretty easy to miss and out of the way)
I think this is really just an "any change takes some getting used to" type deal. My guess is it's slightly easier to find for the first time than in the previous design, but I am not sure. I'll pay attention to whether new-ish users have trouble finding the triple-dot, and if so will make it more noticeable.
↑ comment by NoUsernameSelected · 2024-08-16T03:43:40.286Z · LW(p) · GW(p)
I don't get a progress bar on mobile (unless I'm missing it somehow), and the word count on hover feature seemingly broke on mobile as well a while ago (I remember it working before).
Replies from: habryka4↑ comment by habryka (habryka4) · 2024-08-16T04:27:15.368Z · LW(p) · GW(p)
Ah, I think showing something on mobile is actually a good idea. I forgot that the way we rearranged things that also went away. I will experiment with some ways of adding that information back in tomorrow.
↑ comment by Olli Järviniemi (jarviniemi) · 2024-08-16T02:28:59.796Z · LW(p) · GW(p)
I like this; I've found the metadata of posts to be quite heavy and cluttered (a multi-line title, the author+reading-time+date+comments line, the tag line, a linkpost line, and a "crossposted from the Alignment Forum" line is quite a lot).
I was going to comment that "I'd like the option to look at the table-of-contents/structure", but I then tested and indeed it displays if you hover your mouse there. I like that.
When I open a new post, the top banner with the LessWrong link to the homepage, my username etc. show up. I'd prefer if that didn't happen? It's not like I want to look at the banner (which has no new info to me) when I click open a post, and hiding it would make the page less cluttered.
Replies from: habryka4↑ comment by habryka (habryka4) · 2024-08-16T03:05:38.721Z · LW(p) · GW(p)
When I open a new post, the top banner with the LessWrong link to the homepage, my username etc. show up. I'd prefer if that didn't happen?
I've never considered that. I do think it's important for the banner to be there when you get linked externally, so that you can orient to where you are, but I agree it's reasonable to hide it when you do a navigation on-site. I'll play around a bit with this. I like the idea.
Replies from: Screwtape↑ comment by Screwtape · 2024-08-16T03:30:47.533Z · LW(p) · GW(p)
Noting that I use the banner as breadcrumb navigation relatively often, clicking LessWrong to go back to the homepage or my username to get a menu and go to my drafts. The banner is useful to me as a place to reach those menus.
No idea how common that use pattern is.
Replies from: habryka4↑ comment by habryka (habryka4) · 2024-08-16T03:31:41.937Z · LW(p) · GW(p)
Totally. The only thing that I think we would do is to start you scrolled down 64px on the post page (the height of the header), so that you would just scroll a tiny bit up and then see the header again (or scroll up anywhere and have it pop in the same way it does right now).
↑ comment by Yoav Ravid · 2024-09-04T08:23:04.530Z · LW(p) · GW(p)
I am really missing the word counter. It's something I look at quite a lot (less so the reading-time estimates, as I got used to making the estimate myself based on the word count).
↑ comment by Alex_Altair · 2024-08-18T16:55:44.653Z · LW(p) · GW(p)
My overall review is, seems fine, some pros and some cons, mostly looks/feels the same to me. Some details;
- I had also started feeling like the stuff between the title and the start of the post content was cluttered.
- I think my biggest current annoyance is the TOC on the left sidebar. This has actually disappeared for me, and I don't see it on hover-over, which I assume is maybe just a firefox bug or something. But even before this update, I didn't like the TOC. Specifically, you guys had made it so that there was spacing between the sections that was supposed to be proportional to the length of each section. This never felt like it worked for me (I could speculate on why if you're interested). I'd much prefer if the TOC was just a normal outline-type thing (which it was in a previous iteration).
- I think I'll also miss the word count. I use it quite frequently (usually after going onto the post page itself, so the preview card wouldn't help much). Having the TOC progress bar thing never felt like it worked either. I agree with Neel that it'd be fine to have the word count in the date hover-over, if you want to have less stuff on the page.
- The tags at the top right are now just bare words, which I think looks funny. Over the years you guys have often seemed to prefer really naked minimalist stuff. In this case I think the tags kinda look like they might be site-wide menus, or something. I think it's better to have the tiny box drawn around each tag as a visual cue.
- The author name is now in a sans-serif font, which looks pretty off to me sitting between the title and the body text, which are in serif fonts. It looks like when the browser fails to load the site font and falls back on the default font, or something. (I do see that it matches the fact that usernames in the comments are sans-serif, though.)
- I initially disliked the karma section being so suppressed, but then I read one of your comments in this thread explaining your reasoning behind that, and now I agree it's good.
- I also use the comment count/link to jump to comments fairly often, and agree that having that in the lower left is fine.
↑ comment by papetoast · 2024-08-16T08:46:51.874Z · LW(p) · GW(p)
I like most of the changes, but strongly dislike the large gap before the title. (I similarly dislike the large background in the top 50 of the year posts)
Replies from: habryka4↑ comment by habryka (habryka4) · 2024-08-16T17:15:46.350Z · LW(p) · GW(p)
Well, the gap you actually want to measure is the gap between the title and the audio player (or at the very least the tags), since that's the thing we need to make space for. You are clearly looking at LW on an extremely large screen. This is the more median experience:
There is still a bunch of space there, but for many posts the tags extend all the way above the post.
Replies from: papetoast↑ comment by papetoast · 2024-08-17T04:23:20.296Z · LW(p) · GW(p)
I understand that having the audio player above the title is the path of least resistance, since you can't assume there is enough space on the right to put it in. But ideally things like this should be dynamic, and only take up vertical space if you can't put it on the right, no? (but I'm not a frontend dev)
Alternatively, I would consider moving them vertically above the title a slight improvement. It is not great either, but at least the reason for having the gap is more obvious.
The above screenshots were taken on a 1920x1080 monitor
Replies from: habryka4↑ comment by habryka (habryka4) · 2024-08-17T04:56:28.944Z · LW(p) · GW(p)
Yeah, we could make things dynamic, it would just add complexity that we would need to check every time we make a change. It's the kind of iterative improvement we might do over time, but it's not something that should block the roll-out of a new design (and it's often lower priority than other things, though post-pages in-particular are very important and so get a lot more attention than other pages).
↑ comment by MondSemmel · 2024-08-22T06:39:13.129Z · LW(p) · GW(p)
The new design means that I now move my mouse cursor first to the top right, and then to the bottom left, on every single new post. This UI design is bad ergonomics and feels actively hostile to users.
Replies from: habryka4↑ comment by habryka (habryka4) · 2024-08-22T06:44:07.724Z · LW(p) · GW(p)
I've been playing around with some ways to move the comment icon to the top right corner, ideally somehow removing the audio-player icon (which is much less important, but adds a lot of visual noise in a way that overwhelms the top right corner if you also add the comment icon). We'll see whether I can get it to work.
↑ comment by dirk (abandon) · 2024-08-18T20:25:14.086Z · LW(p) · GW(p)
It takes more vertical space than it used to and I don't like that. (Also, the meatball menu is way off in the corner, which is annoying if I want to e.g. bookmark a post, though I don't use it very often so it's not a major inconvenience.) I think I like the new font, though!
Replies from: abandon↑ comment by dirk (abandon) · 2024-08-23T02:01:09.338Z · LW(p) · GW(p)
Another minor annoyance I've since noticed: at this small scale it's hard to distinguish posts I've upvoted from posts I haven't voted on. Maybe it'd help if the upvote indicator were made a darker shade of green or something?
Replies from: habryka4↑ comment by habryka (habryka4) · 2024-08-23T02:06:59.694Z · LW(p) · GW(p)
Yeah, that's on my to-do list. I also think the current voting indicator isn't clear enough at the shrunken size.
↑ comment by Richard_Kennaway · 2024-08-16T08:58:04.423Z · LW(p) · GW(p)
On desktop the title font is jarringly huge. I already know the title from the front page, no need to scream it at me.
Replies from: habryka4↑ comment by habryka (habryka4) · 2024-08-16T17:12:43.995Z · LW(p) · GW(p)
If you get linked externally (which is most of LW's traffic), you don't know the title (and also generally are less oriented on the page, so it helps to have a very clear information hierarchy).
I do also agree the font is very large. I made an intentionally bold choice here for a strong stylistic effect. I do think it's pretty intense and it might be the wrong choice, but I currently like it aesthetically a bunch.
↑ comment by Ali Ahmed (roboticali) · 2024-08-16T02:48:05.674Z · LW(p) · GW(p)
The new UI is great, and I agree with the thinking behind de-emphasizing karma votes at the top. It could sometimes create inherent bias and assumptions (no matter whether the karma is high or low) even before reading a post, whereas it would make more sense at the end of the post.
↑ comment by Perhaps · 2024-08-16T01:15:09.549Z · LW(p) · GW(p)
The karma buttons are too small for actions that, in my experience, are done a lot more often than clicking to listen to the post. It's pretty easy to misclick.
Additionally, it's unclear what the tags are, as they're no longer right beside the post to indicate their relevance.
Replies from: habryka4↑ comment by habryka (habryka4) · 2024-08-16T02:37:55.386Z · LW(p) · GW(p)
The big vote buttons are at the bottom of the post, where I would prefer more of the voting to happen (I am mildly happy to discourage voting at the top of the post before you read it, though I am not confident).
↑ comment by Zach Stein-Perlman · 2024-09-24T02:09:54.822Z · LW(p) · GW(p)
I ~always want to see the outline when I first open a post and when I'm reading/skimming through it. I wish the outline appeared when-not-hover-over-ing for me.
↑ comment by interstice · 2024-08-16T05:08:57.873Z · LW(p) · GW(p)
I like the decluttering. I think the title should be smaller and have less white space above it. Also think that it would be better if the ToC was maybe just faded a lot until mouseover, the sudden appearance/disappearance feels too sudden.
Replies from: habryka4↑ comment by habryka (habryka4) · 2024-08-16T05:11:47.852Z · LW(p) · GW(p)
I think it isn't really feasible to make the ToC faint enough that the relatively small margin between it and the main body text doesn't become bothersome during reading. In general, because people's screen contrast and color calibration differ quite a lot, you don't have that much wiggle room at the lower end of the opacity range without accidentally shipping completely different experiences to different users.
I think it's plausible we want to adjust the whitespace below the title, but I think you really need this much space above the title to not have it look cluttered together with the tags on smaller screens. On larger screens there is enough distance between the title and top right corner, but things end up much harder to parse when the tags extend into the space right above the title, and that margin isn't big enough.
↑ comment by quila · 2024-08-16T00:33:35.373Z · LW(p) · GW(p)
the primary purpose of karma is to use it to decide what to read before you click on a post, which makes it less important to be super prominent when you are already on a post page
I think this applies to titles too
Replies from: habryka4↑ comment by habryka (habryka4) · 2024-08-16T01:06:35.686Z · LW(p) · GW(p)
I think the title is more important for parsing the content of an essay. Like, if a friend sends you a link, it's important to pay a bunch of attention to the title. It's less important that you pay attention to the karma.
comment by habryka (habryka4) · 2024-04-15T03:43:05.587Z · LW(p) · GW(p)
Had a very aggressive crawler basically DDoS-ing us from a few dozen IPs for the last hour. Sorry for the slower server response times. Things should be fixed now.
comment by habryka (habryka4) · 2019-05-27T06:42:29.252Z · LW(p) · GW(p)
Random thoughts on game theory and what it means to be a good person
It does seem to me like there doesn’t exist any good writing on game theory from a TDT perspective. Whenever I read classical game theory, I feel like the equilibria that are being described obviously fall apart when counterfactuals are properly brought into the mix (like D/D in prisoner’s dilemmas).
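(As a toy illustration of the D/D point: with the standard prisoner’s dilemma payoffs, a CDT-style agent that treats the other player’s move as fixed defects no matter what, while an agent that knows it is playing an exact copy of itself only compares C/C against D/D and so cooperates. The sketch below just spells out that comparison; the payoff numbers are the usual textbook ones.)

```python
# Standard prisoner's dilemma payoffs (row player's payoff).
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def cdt_choice(opponent_move: str) -> str:
    # CDT holds the opponent's move fixed and best-responds to it: always defect.
    return max("CD", key=lambda my: PAYOFF[(my, opponent_move)])

def twin_choice() -> str:
    # Against an exact copy, my choice and my copy's choice are the same,
    # so the only reachable outcomes are (C, C) and (D, D).
    return max("CD", key=lambda my: PAYOFF[(my, my)])

print(cdt_choice("C"), cdt_choice("D"))  # D D  -> the usual D/D equilibrium
print(twin_choice())                     # C    -> cooperating with your copy
```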
The obvious problem with TDT-based game theory, just as with Bayesian epistemology, is that the vast majority of direct applications are completely computationally intractable. It’s kind of obvious what should happen in games with lots of copies of yourself, but as soon as anything participates that isn’t a precise copy, everything gets a lot more confusing. So it is not fully clear what a practical game-theory literature from a TDT perspective would look like, though maybe the existing LessWrong literature on Bayesian epistemology might be a good inspiration.
Even when you can’t fully compute everything (and we don’t even really know how to compute everything in principle), you might still be able to go through concrete scenarios and list considerations and perspectives that incorporate TDT. I guess in that sense, a significant fraction of Zvi’s writing could be described as practical game theory, though I do think there is a lot of value in trying to formalize the theory and make things as explicit as possible, which I feel like Zvi at least doesn’t do most of the time.
Critch (Academian) tends to have this perspective of trying to figure out what a “robust agent” would do, in the sense of an agent that would at the very least be able to reliably cooperate with copies of itself, and adopt cooperation and coordination principles that allow it to achieve very good equilibria with agents that adopt the same type of cooperation and coordination norms. And I do think there is something really valuable here, though I am also worried that the part where you have to cooperate with agents who haven’t adopted super similar cooperation norms is actually the more important one (at least until something like AGI).
And I do think that the majority of the concepts we have for what it means to be a “good person” are ultimately attempts at trying to figure out how to coordinate effectively with other people, in a way that a more grounded game theory would help a lot with.
Maybe a good place to start would be to brainstorm a list of concrete situations in which I am uncertain what the correct action is. Here is some attempt at that:
- How to deal with threats of taking strongly negative-sum actions? What is the correct response to the following concrete instances?
- A robber threatens to shoot you if you don’t hand over your wallet
- Do you precommit to violently attack any robber that robs you, or do you simply hand over your wallet?
- You are in the room with someone holding the launch buttons for the USA’s nuclear arsenal and they are threatening to launch them if you don’t hand over your wallet
- You are head of the U.S. and another nation state is threatening a small-scale nuclear attack on one of your cities if you don’t provide some kind of economic subsidy to them
- Do you launch a conventional attack?
- Do you launch a full out nuclear response as a deterrent?
- Do you launch a small-scale nuclear response?
- Do you not do anything at all?
- Does the answer depend on the size of the economic subsidy? What if they ask twice?
- You are at a party and your assigned driver ended up drinking, even though they said they would not (the driver was chosen by a random draw)
- Do you somehow punish them now, do you punish them later, or not at all?
- What if they are less likely to remember if you punish them now because they are drunk? Does that matter for the game-theoretic correct action?
- What if they did this knowingly, reasoning from a CDT perspective that there wouldn’t be any point in punishing them now because they wouldn’t remember the next day
- What if you would never see them again later?
- What if you only ever get to interact with them after they made the choice to be drunk?
I feel like I have some hint of an answer to all of these, but also feel like any answer that I can come up with makes me exploitable in a way that makes me feel like there is no meta-level on which there is an ideal strategy.
Replies from: Raemon, Lanrian↑ comment by Raemon · 2019-05-28T01:11:42.972Z · LW(p) · GW(p)
Reading through this, I went "well, obviously I pay the mugger...
...oh, I see what you're doing here."
I don't have a full answer to the problem you're specifying, but something that seems relevant is the question of "How much do you want to invest in the ability to punish defectors [both in terms of maximum power-to-punish, a-la nukes, and in terms of your ability to dole out fine-grained-exactly-correct punishment, a-la skilled assassins]"
The answer to this depends on your context. And how you have answered this question determines whether it makes sense to punish people in particular contexts.
In many cases you might want some amount of randomization, where at least some of the time you really disproportionately punish people, but you don't have to pay the cost of doing so every time.
Answering a couple of the concrete questions:
Mugger
Right now, in real life, I've never been mugged, and I feel fine basically investing zero effort into preparing for being mugged. If I do get mugged, I will just hand over my wallet.
If I was getting mugged all the time, I'd probably invest effort into a) figuring out what good policies existed for dealing with muggers, b) what costs I'd have to pay in order to implement those policies.
In some worlds, it's worth investing in literal body armor or bullet-proof cars or whatever, and in the skill to successfully fight back against a literal mugger. (My understanding is that this is usually not actually a good idea even in crime-heavy areas, but I can imagine worlds where it was correct to just get good at fighting, or to hire people who are good at fighting as bodyguards.)
In some worlds it's worth investing more in police-force and avoiding having to think about the problem, or not carrying as much money around in the first place.
Small Nation Demands Subsidies, Threatens Nuclear War
Again, I think my options here depend a lot on having already invested in defense.
One scenario is "I do not have the ability to say 'no' without risking millions of lives, either of my own citizens or of innocent citizens of the country in question." In that case, I probably have to do something that makes my vague-hippie-values sad.
I have some sense that my vague-hippie-values depend on having invested enough money in defense (and offense) that I can "afford" to be moral. Things I may wish my country had invested in include:
- Anti-ICBM capabilities that can shoot down incoming nukes with enough reliability that either a small-scale nuclear counterstrike, or a major non-nuclear retaliatory invasion, are viable options that will at least only punish foreign civilians if the foreign government actually launches an attack
- Possibly invested in assassins who just kill individuals who threaten nuclear strikes (I'm somewhat confused about why this isn't used more; I suspect the answer has to do with the game theory of 'the people in charge [of all nations] want it to be true that they aren't at risk of getting assassinated, so they have a gentleman's agreement to avoid killing enemy leaders')
So I probably want to invest a lot in either having strong capabilities in those domains, or having allies who do.
Drinking
In real life I expect that the solution here is "I never invite said person to parties again, and depending on our relative social standing I might publicly badmouth them or quietly gossip about them."
In weird contrived scenarios I'm not sure what I do because I don't know how to anticipate weird contrived scenarios.
I do invest, generally, in communicating that people should obviously follow through on their commitments, such that when someone fails to live up to their commitment, it costs less to punish them for doing so. (And this is a shared social good that multiple people invest in.)
If I'm in a one-off interaction with someone who is currently too drunk to remember being punished and who I'm not socially connected to, I probably treat it like being mugged – a fluke event that doesn't happen often enough to be worth investing resources in being able to handle better.
Extra Example: Having to Stand Up to a Boss/High-Status-Person/Organization
A situation that I'm more likely to run into, where the problem actually seems hard, is that sometimes high status people do bad things, and they have more power than you, and people will naturally end up on their side and take their word over yours.
Sort of similar to the "Small nation threatening nuclear war", I think if you want to be able to "afford to actually have moral principles", you need to invest upfront in capabilities. This isn't always the right thing to do, depending on your life circumstances, but it may be sometimes. You want to have enough surplus power that you have the Slack to stand up for things.
Possibilities include investing in being directly high status yourself, or investing in making friends with a strong enough coalition of people to punish high status people, or encourage strong norms and rule of law such that you don't need to have as strong a coalition, because you've made it lower cost to socially attack someone who breaks a norm.
Extra Example: The Crazy House Guest
Perhaps related to the drinking example: a couple of times, I've had people show up at former houses, potentially looking to move in, and then cause some kind of harm.
In one case, they had a very weird combination of mental illnesses and cluelessness that resulted in them doing several thousand dollars' worth of physical damage to the house.
They seemed crazy and unpredictable enough that it seemed like if I tried to punish them, they might follow me around forever and make my life suck in weird ways.
So I didn't punish them and they got away with it and went away and I never heard from them again.
So... sure, you can get away with certain kinds of things by signaling insanity and unpredictability... but at the cost of not being welcome in major social networks. The few extra thousand dollars they saved was not remotely worth the fact that, had they been a more reasonable person, they'd have had access to a strong network of friends and houses that help each other out finding jobs and places to live and what-not.
So I'm not worried about the longterm incentives here – the only people for whom insanity is a cost-effective tool to avoid punishment are actually insane people who don't have the ability to interface with society normally.
What if there turn out to be lots of crazy people? Then you probably either invest upfront resources in fighting this somehow, or become less trusting.
Extra Example: The Greedy Landlord
In another housing situation, the landlord tried to charge us extra for things that were not our fault. In this case, it was reasonably clear that we were in the right. Going to small claims court would have been net-negative for us, but also costly to them.
I was angry and full of zealous energy, and I decided it was worth it: I threatened to go to small claims court and waste both of our time, even though a few hundred dollars wasn't really worth it.
They backed down.
This seems like the system working as intended. This is what anger is for: to make sure people have the backbone to defend themselves, and so that we live in a world where, at least some of the time, people will get riled up and punish you disproportionately.
What if you haven't invested in defense capabilities in advance?
Then you probably will periodically need to lose and accept bad situations, such as either a more powerful empire demanding tribute from your country, or choosing policies like "if you are under threat, flip an unknown number of coins and if enough coins come up heads, go to war and punish them disproportionately even though you will probably lose and lots of people will die but now empires will sometimes think twice about invading poor countries."
The meta level point
It doesn't seem inconsistent to me to apply different policies in different situations, even if they share commonalities, based on how common the situation is, how costly the defection is, how much long-term punishment you can inflict, and how many resources you have invested in being able to punish.
This does mean that mugging (for example) is a somewhat viable strategy, since people don't invest as heavily in handling it (because it is rare), but this seems like a self-correcting problem. There would be some least-defended-against defect-button that defectors can press; you can't protect against everything.
Another point is that it's important to be somewhat unpredictable, and to at least sometimes just punish people disproportionately (when they wrong you), so that people aren't confident that the expected value of taking advantage of you is positive.
↑ comment by Lukas Finnveden (Lanrian) · 2019-05-27T10:02:22.819Z · LW(p) · GW(p)
Any reason why you mention timeless decision theory (TDT) specifically? My impression was that functional decision theory (as well as UDT, since they're basically the same thing) is regarded as a strict improvement over TDT.
Replies from: habryka4↑ comment by habryka (habryka4) · 2019-05-27T16:49:27.727Z · LW(p) · GW(p)
Same thing, it's just the handle that stuck in my mind. I think of the whole class as "timeless", since I don't think there exists a good handle that describes all of them.
comment by habryka (habryka4) · 2019-05-15T06:00:49.483Z · LW(p) · GW(p)
Making yourself understandable to other people
(Epistemic status: Processing obvious things that have likely been written many times before, but that are still useful to have written up in my own language)
How do you act in the context of a community that is vetting-constrained? I think there are fundamentally two approaches you can use to establish coordination with other parties:
1. Professionalism: Establish that you are taking concrete actions with predictable consequences that are definitely positive
2. Alignment: Establish that you are a competent actor that is acting with intentions that are aligned with the aims of others
I think a lot of the concepts around professionalism arise when you have a group of people who are trying to coordinate but do not actually have aligned interests. In those situations you will have lots of contracts and commitments to actions that have well-specified outcomes, and deviations from those outcomes are generally considered bad. It also encourages a certain suppression of agency and a fear of people doing independent optimization in a way that is not transparent to the rest of the group.
Given a lot of these drawbacks, it seems natural to aim for establishing alignment with others; it is, however, much less clear how to achieve that. Close groups of friends can often act in alignment because they have credibly signaled to each other that they care about each other's experiences and goals. This also tends to involve costly signals of sacrifice that are only economical if the goals of the participants are actually aligned. I also suspect that there is a real "merging of utility functions" going on, where close friends and partners self-modify to share each other's values.
For larger groups of people, establishing alignment with each other seems much harder, in particular in the presence of adversarial actors. You can request costly signals, but it is often difficult to find good signals that are not prohibitively costly for many members of your group (this task is much easier for smaller groups, since you have less spread in the costs of different actions). You are also under much more adversarial pressure, since with more people you likely have access to more resources which attracts more adversarial actors.
I expect this is the reason why we see larger groups often default to professionalism norms with very clearly defined contracts.
I think the EA and Rationality communities have historically optimized hard for alignment and not professionalism, since that enabled much better overall coordination, but as the community grew and attracted more adversarial actors those methods didn't scale very well and so we currently expect alignment-level coordination capabilities while only having access to professionalism-level coordination protocols and resources.
We've also seen an increase in people trying to increase the amount of alignment, by looking into things like circling and specializing in mediation and facilitation, which I think is pretty promising and has some decent traction.
I also think there is a lot of value in building better infrastructure and tools for more "professionalism" style interactions, where people offer concrete services with bounded upside. A lot of my thinking on the importance of accountability derives from this perspective.
Replies from: jp↑ comment by jp · 2020-06-15T19:00:34.426Z · LW(p) · GW(p)
I had forgotten this post, reread it, and still think it's one of the better things of its length I've read recently.
Replies from: habryka4↑ comment by habryka (habryka4) · 2020-06-15T22:37:40.777Z · LW(p) · GW(p)
Glad to hear that! Seems like a good reason to publish this as a top-level post. Might go ahead and do that in the next few days.
Replies from: nicoleross↑ comment by nicoleross · 2020-06-29T17:38:25.680Z · LW(p) · GW(p)
+1 for publishing as a top level post
comment by habryka (habryka4) · 2021-07-06T20:54:58.280Z · LW(p) · GW(p)
This FB post by Matt Bell on the Delta Variant helped me orient a good amount:
https://www.facebook.com/thismattbell/posts/10161279341706038
As has been the case for almost the entire pandemic, we can predict the future by looking at the present. Let’s tackle the question of “Should I worry about the Delta variant?” There’s now enough data out of Israel and the UK to get a good picture of this, as nearly all cases in Israel and the UK for the last few weeks have been the Delta variant. [1] Israel was until recently the most-vaccinated major country in the world, and is a good analog to the US because they’ve almost entirely used mRNA vaccines.
- If you’re fully vaccinated and aren’t in a high risk group, the Delta variant looks like it might be “just the flu”. There are some scary headlines going around, like “Half of new cases in Israel are among vaccinated people”, but they’re misleading for a couple of reasons. First, since Israel has vaccinated over 80% of the eligible population, the mRNA vaccine still is 1-((0.5/0.8)/(0.5/0.2)) = 75% effective against infection with the Delta variant. Furthermore, the efficacy of the mRNA vaccine is still very high ( > 90%) against hospitalization or death from the Delta variant. Thus, you might still catch Delta if you’re vaccinated, but it will be more like a regular flu if you do. J&J likely has a similar performance in terms of reduced hospitalizations and deaths, as the UK primarily vaccinated its citizens with the AstraZeneca vaccine, which is basically a crappier version of J&J, and is still seeing a 90+% reduction in hospitalizations and deaths among the vaccinated population. [2]
[...]
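(For readers who want to check the arithmetic in the quoted excerpt: the 75% figure is a screening-method estimate that compares the per-capita case rate among the vaccinated to the rate among the unvaccinated. A quick sketch of that calculation, using only the numbers quoted above:)

```python
def efficacy_from_case_share(vaccinated_case_share: float, vaccinated_pop_share: float) -> float:
    """Vaccine efficacy against infection, inferred from the share of cases
    occurring in the vaccinated population (screening-method estimate)."""
    rate_vaccinated = vaccinated_case_share / vaccinated_pop_share
    rate_unvaccinated = (1 - vaccinated_case_share) / (1 - vaccinated_pop_share)
    return 1 - rate_vaccinated / rate_unvaccinated

# Numbers from the quoted post: 50% of cases, 80% of the population vaccinated.
print(efficacy_from_case_share(0.5, 0.8))  # -> 0.75, i.e. ~75% effective
```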
comment by habryka (habryka4) · 2021-06-08T04:16:33.927Z · LW(p) · GW(p)
This seems like potentially a big deal: https://mobile.twitter.com/DrEricDing/status/1402062059890786311
> Troubling—the worst variant to date, the #DeltaVariant is now the new fastest growing variant in US. This is the so-called “Indian” variant #B16172 that is ravaging the UK despite high vaccinations because it has immune evasion properties. Here is why it’s trouble—Thread. #COVID19
↑ comment by SoerenMind · 2021-06-08T07:30:13.872Z · LW(p) · GW(p)
There's also a strong chance that delta is the most transmissible variant we know even without its immune evasion (source: I work on this, don't have a public source to share). I agree with your assessment that delta is a big deal.
↑ comment by ChristianKl · 2021-06-08T16:07:04.606Z · LW(p) · GW(p)
The fact that we still use the same sequence to vaccinate seems like a civilisational failure.
comment by habryka (habryka4) · 2023-02-07T20:11:47.450Z · LW(p) · GW(p)
@Elizabeth [LW · GW] was interested in me crossposting this comment from the EA Forum since she thinks there isn't enough writing on the importance of design on LW. So here it is.
Atlas reportedly spent $10,000 on a coffee table. Is this true? Why was the table so expensive?
Atlas at some point bought this table, I think: https://sisyphus-industries.com/product/metal-coffee-table/. At that link it costs around $2200, so I highly doubt the $10,000 number.
Lightcone then bought that table from Atlas a few months ago at the listing price, since Jonas thought the purchase seemed excessive, so Atlas actually didn't end up paying anything. I am really glad we bought it from them, it's probably my favorite piece of furniture in the whole venue we are currently renovating.
If you think it was a waste of money, I have made much worse interior design decisions (in-general furniture is really annoyingly expensive, and I've bought couches for $2000 that turned out to just not work for us at all and were too hard to sell), and I consider this one a pretty strong hit. (To clarify, the reason why it's so expensive is because it's a kinetic sculpture with a moving magnet and a magnetic ball that draws programmable patterns into the sand at the center of the table, so it's not just like, a pretty coffee table)
The table is currently serving as the centerpiece of our central workspace social room, and it has a pretty large effect on good conversations happening, since it seems to hit the right balance of being visually interesting without being too distracting, while also being functional. And despite this kind of sounding ridiculous: if for some reason it was impossible for Lightcone to pay for this table (which I don't think it is, since I think interior design matters), I would pay for it from my own personal funds.
In general, as someone who has now helped prepare on the order of 20 venues for workshops and conferences, it seems pretty obvious to me that interior design matters quite a bit for workshop venues. I think it would indeed be pretty crazy to pay $2000 for every coffee table in your venue, but a single central design piece can make a huge difference to a room. I've spent hundreds of hours trying to design rooms to facilitate good conversations, with my counterfactual earning rate being in the hundreds of dollars per hour, and I think it is definitely sometimes worth my time/money to buy an occasional expensive piece of furniture.
comment by Rob Bensinger (RobbBB) · 2019-05-02T22:09:44.951Z · LW(p) · GW(p)
I like this shortform feed idea!
comment by habryka (habryka4) · 2020-12-05T02:36:41.960Z · LW(p) · GW(p)
We launched the books on Product Hunt today!
comment by habryka (habryka4) · 2020-01-02T02:54:15.031Z · LW(p) · GW(p)
Leaving this here: 2d9797e61e533f03382a515b61e6d6ef2fac514f
Replies from: tetraspace-grouping↑ comment by Tetraspace (tetraspace-grouping) · 2020-01-02T03:18:39.116Z · LW(p) · GW(p)
Since this hash is publicly posted, is there any timescale for when we should check back to see the preimage [LW(p) · GW(p)]?
Replies from: habryka4↑ comment by habryka (habryka4) · 2020-01-02T03:31:29.858Z · LW(p) · GW(p)
If relevant, I will reveal it within the next week.
Replies from: habryka4↑ comment by habryka (habryka4) · 2020-01-02T03:53:47.712Z · LW(p) · GW(p)
Preimage was:
Said will respond to the comment in https://www.lesswrong.com/posts/jLwFCkNKMCFTCX7rL/circling-as-cousin-to-rationality#rdbGxdxXqJiGHrSAC with a message that has content roughly similar to "I appreciate the effort, but I still don't think I understand what you are trying to point at".
Hashed with SHA-1 using https://www.fileformat.info/tool/hash.htm.
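(For anyone who wants to reproduce the check locally, here is a minimal sketch of verifying a commitment like this. The preimage string below is a paraphrase for illustration; the digest will only match the posted hash if the string is byte-for-byte identical to what was originally hashed, whitespace and punctuation included.)

```python
import hashlib

# Placeholder preimage: substitute the exact original text to verify the commitment.
preimage = (
    "Said will respond to the comment in "
    "https://www.lesswrong.com/posts/jLwFCkNKMCFTCX7rL/circling-as-cousin-to-rationality#rdbGxdxXqJiGHrSAC "
    "with a message that has content roughly similar to \"I appreciate the effort, "
    "but I still don't think I understand what you are trying to point at\"."
)

digest = hashlib.sha1(preimage.encode("utf-8")).hexdigest()
print(digest)
# A match against the posted hash (2d9797e61e533f03382a515b61e6d6ef2fac514f) only
# occurs if the string above is exactly what was hashed, down to every character.
```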