Posts

OpenAI Email Archives (from Musk v. Altman) 2024-11-16T06:38:03.937Z
Using Dangerous AI, But Safely? 2024-11-16T04:29:20.914Z
Open Thread Fall 2024 2024-10-05T22:28:50.398Z
If-Then Commitments for AI Risk Reduction [by Holden Karnofsky] 2024-09-13T19:38:53.194Z
Open Thread Summer 2024 2024-06-11T20:57:18.805Z
"AI Safety for Fleshy Humans" an AI Safety explainer by Nicky Case 2024-05-03T18:10:12.478Z
Goal oriented cognition in "a single forward pass" 2024-04-22T05:03:18.649Z
Express interest in an "FHI of the West" 2024-04-18T03:32:58.592Z
Structured Transparency: a framework for addressing use/mis-use trade-offs when sharing information 2024-04-11T18:35:44.824Z
LessWrong's (first) album: I Have Been A Good Bing 2024-04-01T07:33:45.242Z
How useful is "AI Control" as a framing on AI X-Risk? 2024-03-14T18:06:30.459Z
Open Thread Spring 2024 2024-03-11T19:17:23.833Z
Is a random box of gas predictable after 20 seconds? 2024-01-24T23:00:53.184Z
Will quantum randomness affect the 2028 election? 2024-01-24T22:54:30.800Z
Vote in the LessWrong review! (LW 2022 Review voting phase) 2024-01-17T07:22:17.921Z
AI Impacts 2023 Expert Survey on Progress in AI 2024-01-05T19:42:17.226Z
Originality vs. Correctness 2023-12-06T18:51:49.531Z
The LessWrong 2022 Review 2023-12-05T04:00:00.000Z
Open Thread – Winter 2023/2024 2023-12-04T22:59:49.957Z
Complex systems research as a field (and its relevance to AI Alignment) 2023-12-01T22:10:25.801Z
How useful is mechanistic interpretability? 2023-12-01T02:54:53.488Z
My techno-optimism [By Vitalik Buterin] 2023-11-27T23:53:35.859Z
"Epistemic range of motion" and LessWrong moderation 2023-11-27T21:58:40.834Z
Debate helps supervise human experts [Paper] 2023-11-17T05:25:17.030Z
How much to update on recent AI governance moves? 2023-11-16T23:46:01.601Z
AI Timelines 2023-11-10T05:28:24.841Z
How to (hopefully ethically) make money off of AGI 2023-11-06T23:35:16.476Z
Integrity in AI Governance and Advocacy 2023-11-03T19:52:33.180Z
What's up with "Responsible Scaling Policies"? 2023-10-29T04:17:07.839Z
Trying to understand John Wentworth's research agenda 2023-10-20T00:05:40.929Z
Trying to deconfuse some core AI x-risk problems 2023-10-17T18:36:56.189Z
How should TurnTrout handle his DeepMind equity situation? 2023-10-16T18:25:38.895Z
The Lighthaven Campus is open for bookings 2023-09-30T01:08:12.664Z
Navigating an ecosystem that might or might not be bad for the world 2023-09-15T23:58:00.389Z
Long-Term Future Fund Ask Us Anything (September 2023) 2023-08-31T00:28:13.953Z
Open Thread - August 2023 2023-08-09T03:52:55.729Z
Long-Term Future Fund: April 2023 grant recommendations 2023-08-02T07:54:49.083Z
Final Lightspeed Grants coworking/office hours before the application deadline 2023-07-05T06:03:37.649Z
Correctly Calibrated Trust 2023-06-24T19:48:05.702Z
My tentative best guess on how EAs and Rationalists sometimes turn crazy 2023-06-21T04:11:28.518Z
Lightcone Infrastructure/LessWrong is looking for funding 2023-06-14T04:45:53.425Z
Launching Lightspeed Grants (Apply by July 6th) 2023-06-07T02:53:29.227Z
Yoshua Bengio argues for tool-AI and to ban "executive-AI" 2023-05-09T00:13:08.719Z
Open & Welcome Thread – April 2023 2023-04-10T06:36:03.545Z
Shutting Down the Lightcone Offices 2023-03-14T22:47:51.539Z
Review AI Alignment posts to help figure out how to make a proper AI Alignment review 2023-01-10T00:19:23.503Z
Kurzgesagt – The Last Human (Youtube) 2022-06-29T03:28:44.213Z
Replacing Karma with Good Heart Tokens (Worth $1!) 2022-04-01T09:31:34.332Z
Apply to the ML for Alignment Bootcamp (MLAB) in Berkeley [Jan 3 - Jan 22] 2021-11-03T18:22:58.879Z
The LessWrong Team is now Lightcone Infrastructure, come work with us! 2021-10-01T01:20:33.411Z

Comments

Comment by habryka (habryka4) on Oliver Daniels-Koch's Shortform · 2024-11-21T04:23:45.627Z · LW · GW

Yeah, IMO we should just add a bunch of functionality for integrating Alignment Forum stuff more with academic things. It's been on my to-do list for a long time.

Comment by habryka (habryka4) on Akash's Shortform · 2024-11-20T18:57:47.835Z · LW · GW

I think "full visibility" seems like the obvious thing to ask for, and something that could maybe improve things. Also, preventing you from selling your products to the public, and basically forcing you to sell your most powerful models only to the government, gives the government more ability to stop things when it comes to it. 

I will think more about this, I don't have any immediate great ideas.

Comment by habryka (habryka4) on Akash's Shortform · 2024-11-20T17:06:58.012Z · LW · GW

If the project was fueled by a desire to beat China, the structure of such a project seems unlikely to resemble the parts of the original Manhattan Project's structure that seemed maybe advantageous here, like having a single government-controlled centralized R&D effort.

My guess is if something like this actually happens, it would involve a large number of industry subsidies, and would create strong institutional momentum to push the state of the art forward even when things get dangerous, and, in as much as there is pushback, to continue dangerous development in secret.

In the case of nuclear weapons the U.S. really went very far under the advisement of Edward Teller, so I think the outside view here really doesn't look good.

Comment by habryka (habryka4) on Making a conservative case for alignment · 2024-11-20T07:34:46.199Z · LW · GW

I don't remember ever adjudicating this, but my current intuition, having not thought about it hard, is that I don't see a super clear line here (like, in a moderation dispute I can imagine judging either way depending on the details).

Comment by habryka (habryka4) on What are the good rationality films? · 2024-11-20T06:27:34.609Z · LW · GW

The Truman Show: Great depiction of a crisis of faith and noticing your confusion, and generally about figuring out the truth.

Most relevant sequence posts: Crisis of Faith, Lonely Dissent

Comment by habryka (habryka4) on Making a conservative case for alignment · 2024-11-18T17:24:23.352Z · LW · GW

Going by today's standards, we should have banned Gwern in 2012.

(I don't understand what this is referring to)

Comment by habryka (habryka4) on Monthly Roundup #24: November 2024 · 2024-11-18T16:20:00.174Z · LW · GW

Indeed. I fixed it. Let's see whether it repeats itself (we got kind of malformed HTML from the RSS feed).

Comment by habryka (habryka4) on OpenAI Email Archives (from Musk v. Altman) · 2024-11-16T18:19:20.144Z · LW · GW

Update: I have now cross-referenced every single email for accuracy, cleaned up and clarified the thread structure, and added subject lines and date stamps wherever they were available. I now feel comfortable with people quoting anything in here without checking the original source (unless you are trying to understand the exact thread structure of who was CC'd and when, which was a bit harder to compress into a linear format).

(For anyone curious, the AI transcription and compilation made one single error, which is that it fixed a typo in one of Sam's messages from "We did this is a way" to "We did this in a way". Honestly, my guess is any non-AI effort would have had a substantially higher error rate, which was a small update for me on the reliability of AI for something like this, and also makes the handwringing about whether it is OK to post something like this feel kind of dumb. It also accidentally omitted one email with a weird thread structure.)

Comment by habryka (habryka4) on OpenAI Email Archives (from Musk v. Altman) · 2024-11-16T17:14:09.455Z · LW · GW

FWIW, my best guess is the document contains fewer errors than having a human copy-paste things and stitch it together. The errors have a different nature to them, and so it makes sense to flag them, but like, I started out with copy-pasting and OCR, and that did not actually have an overall lower error rate.

Comment by habryka (habryka4) on OpenAI Email Archives (from Musk v. Altman) · 2024-11-16T16:56:33.544Z · LW · GW

If other people have to check it before they quote it, why is it OK for you not to check it before you post it?

Because I said prominently at the top that I used AI assistance for it. Of course, feel free to do the same.

Comment by habryka (habryka4) on OpenAI Email Archives (from Musk v. Altman) · 2024-11-16T09:05:35.671Z · LW · GW

Fixed! That specific response had a very weird thread structure, so it makes sense the AI I used got confused. Plausibly something else is still missing, though I think I've now read through all the original PDFs and didn't see anything new.

Comment by habryka (habryka4) on lemonhope's Shortform · 2024-11-16T04:33:32.539Z · LW · GW

What do you mean by "applied research org"? Like, applied alignment research?

Comment by habryka (habryka4) on Habryka's Shortform Feed · 2024-11-15T18:53:38.213Z · LW · GW

A bunch of very interesting emails between Elon, Sam Altman, Ilya and Greg were released (I think in some legal proceedings, but not sure). It would IMO be cool for someone to gather them all and do some basic analysis of them. 

https://x.com/TechEmails/status/1857456137156669765 

https://x.com/TechEmails/status/1857285960997712356 

Comment by habryka (habryka4) on Seven lessons I didn't learn from election day · 2024-11-15T17:58:08.423Z · LW · GW

This was a really good analysis of a bunch of election stuff that I hadn't seen presented clearly like this anywhere else. If it wasn't about elections and news I would curate it.

Comment by habryka (habryka4) on An alternative way to browse LessWrong 2.0 · 2024-11-14T17:21:19.427Z · LW · GW

Not sure what you mean. The API continues to exist (and has existed since the beginning of LW 2.0).

Comment by habryka (habryka4) on johnswentworth's Shortform · 2024-11-14T12:45:52.158Z · LW · GW

I think the comment more confirms than disconfirms John's comment (though I still think it's too broad for other reasons). OP "funding" something historically has basically always meant recommending a grant to GV. Luke's language to me suggests that indeed the right of center grants are no longer referred to GV (based on a vague vibe of how he refers to funders in plural).

OP has always made some grant recommendations to other funders (historically OP would probably describe those grants as "rejected but referred to an external funder"). As Luke says, those are usually ignored, and OP's counterfactual effect on those grants is much less, and IMO it would be inaccurate to describe those recommendations as "OP funding something". As I said in the comment I quote in the thread, most OP staff would like to fund things right of center, but GV does not seem to want to, as such the only choice OP has is to refer them to other funders (which sometimes works, but mostly doesn't).

As another piece of evidence, when OP defunded all the orgs that GV didn't want to fund anymore, the communication emails that OP sent said that "Open Philanthropy is exiting funding area X" or "exiting organization X". By the same use of language, yes, it seems like OP has exited funding right-of-center policy work.

(I think it would make sense to taboo "OP funding X" in future conversations to avoid confusion, but also, I think historically it was very meaningfully the case that getting funded by GV is much better described as "getting funded by OP", given that you would never talk to anyone at GV and the opinions of anyone at GV would basically have no influence on you getting funded. Things are different now, and in a meaningful sense OP isn't funding anyone anymore, they are just recommending grants to others, and it matters more what those others think than what OP staff thinks.)

Comment by habryka (habryka4) on Bogdan Ionut Cirstea's Shortform · 2024-11-14T12:18:00.833Z · LW · GW

One of these types of orgs is developing a technology with the potential to kill literally all of humanity. The other type of org is funding research that, if it goes badly, mostly just wastes their own money. Of course the demands for legibility and transparency should be different.

Comment by habryka (habryka4) on johnswentworth's Shortform · 2024-11-13T00:19:35.779Z · LW · GW

My best guess is this is false. As a quick sanity-check, here are some bipartisan and right-leaning organizations historically funded by OP: 

Of those, I think FAI is the only one at risk of OP being unable to fund them, based on my guess of where things are leaning. I would be quite surprised if they defunded the other ones on bipartisan grounds.

Possibly you meant to say something more narrow like "even if you are trying to be bipartisan, if you lean right, then OP is substantially less likely to fund you" which I do think is likely true, though my guess is you meant the stronger statement, which I think is false.

Comment by habryka (habryka4) on johnswentworth's Shortform · 2024-11-12T18:46:09.364Z · LW · GW

Curious whether this is a different source than me. My current best model was described in this comment, which is a bit different (and indeed, my sense was that if you are bipartisan, you might be fine, or might not, depending on whether you seem more connected to the political right, and whether people might associate you with the right): 

Yep, my model is that OP does fund things that are explicitly bipartisan (like, they are not currently filtering on being actively affiliated with the left). My sense is in practice it's a fine balance, and if there was some high-profile thing where Horizon became more associated with the right (like maybe some alumnus becomes prominent in the Republican party and very publicly credits Horizon for that, or there is some scandal involving someone on the right who is a Horizon alumnus), then I do think their OP funding would have a decent chance of being jeopardized, and the same is not true on the left.

Another part of my model is that one of the key things about Horizon is that they are of a similar school of PR as OP themselves. They don't make public statements. They try to look very professional. They are probably very happy to compromise on messaging and public comms with Open Phil and be responsive to almost any request that OP would have messaging-wise. That makes up for a lot. I think if you had a more communicative and outspoken organization with a similar mission to Horizon, the funding situation would be a bunch dicier (though my guess is if they were competent, an organization like that could still get funding).

More broadly, I am not saying "OP staff want to only support organizations on the left". My sense is that many individual OP staff would love to fund more organizations on the right, and would hate for polarization to occur, but that organizationally and because of constraints by Dustin, they can't, and so you will see them fund organizations that aim for more engagement with the right, but there will be relatively hard lines and constraints that will mostly prevent that.

If it is true that OP has withdrawn funding from explicitly bipartisan orgs, even ones not commonly associated with the right, then that would be an additional update for me, so I am curious whether this is mostly downstream of my interpretations or whether you have additional sources.

Comment by habryka (habryka4) on Cole Wyeth's Shortform · 2024-11-09T18:36:09.122Z · LW · GW

Huh, o1 and the latest Claude were quite huge advances to me. Basically within the last year LLMs for coding went from "occasionally helpful, maybe like a 5-10% productivity improvement" to "my job now is basically to instruct LLMs to do things, depending on the task a 30% to 2x productivity improvement".

Comment by habryka (habryka4) on evhub's Shortform · 2024-11-09T04:18:25.866Z · LW · GW

(and Anthropic has a Usage Policy, with exceptions, which disallows weapons stuff — my guess is this is too strong on weapons).

I think usage policies should not be read as commitments, and so I think it would be reasonable to expect that Anthropic will allow weapons development if it becomes highly profitable (and, in contrast to other things Anthropic has promised, for this not to be interpreted as a broken promise when they do so).

Comment by habryka (habryka4) on evhub's Shortform · 2024-11-09T03:39:50.677Z · LW · GW

FWIW, as a common critic of Anthropic, I think I agree with this. I am a bit worried about engaging with the DoD being bad for Anthropic's epistemics and ability to be held accountable by the government and public, but I think the basics of engaging on defense issues seems fine to me, and I don't think risks from AI route basically at all through AI being used for building military technology, or intelligence analysis.

Comment by habryka (habryka4) on Habryka's Shortform Feed · 2024-11-07T03:21:20.974Z · LW · GW

Ah, we should maybe font-subset some system font for that (same as what we did for Greek characters). If someone gives me a character range specification I could add it.
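
For reference, here is a minimal sketch of what such a subsetting step might look like using the fontTools library (the file names and the U+0370-03FF Greek-style range are purely illustrative assumptions, not the actual fonts or ranges used on LessWrong):

```python
# Minimal sketch: cut a font down to a single Unicode code-point range using
# fontTools (pip install fonttools). All file names and ranges are illustrative.
from fontTools import subset
from fontTools.ttLib import TTFont

SOURCE = "GillSans.ttf"           # hypothetical source font file
OUTPUT = "GillSans-subset.woff2"  # hypothetical output file

font = TTFont(SOURCE)

options = subset.Options()
subsetter = subset.Subsetter(options)
subsetter.populate(unicodes=range(0x0370, 0x0400))  # keep only this code-point range
subsetter.subset(font)

font.flavor = "woff2"             # compress for web delivery (requires brotli)
font.save(OUTPUT)
```

The resulting file would then be referenced from an @font-face rule with a matching unicode-range declaration, so browsers only fetch it when characters in that range actually appear on the page.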

Comment by habryka (habryka4) on The Shallow Bench · 2024-11-06T04:46:01.843Z · LW · GW

"stop reading here if you don't want to be spoiled."

(I added that sentence based on Jonathan Claybrough's comment, feel free to suggest an alternative one)

Comment by habryka (habryka4) on Matt Goldenberg's Short Form Feed · 2024-11-05T19:14:07.990Z · LW · GW

I watched the video and didn't see any stats from their own experiment. Do you have a frame or a section?

Comment by habryka (habryka4) on Bogdan Ionut Cirstea's Shortform · 2024-11-05T17:35:07.560Z · LW · GW

(Most people in AI Alignment work at scaling labs and are therefore almost exclusively working on LLM alignment. That said, I don't actually know what it means to work on LLM alignment over aligning other systems; it's not like we have a ton of traction on LLM alignment, and most techniques and insights seem general enough to not be conditional specifically on LLMs.)

Comment by habryka (habryka4) on The Shallow Bench · 2024-11-05T16:45:59.019Z · LW · GW

Note: I added some spoiler warnings (given the one comment complaining). I don't feel strongly, so feel free to revert.

Comment by habryka (habryka4) on How to (hopefully ethically) make money off of AGI · 2024-11-04T19:05:24.245Z · LW · GW

It was "advice" just not... "investment advice"? I do admit I do not understand the proper incantations and maybe should study them more.

Comment by habryka (habryka4) on Bogdan Ionut Cirstea's Shortform · 2024-11-04T17:31:13.153Z · LW · GW

Who says those things? That doesn't really sound like something that people say. Like, I think there are real arguments about why LLM agents might not be the most likely path to AGI, but "they are still pretty dumb, therefore that's not a path to AGI" seems like an obvious strawman, and I don't think I've ever seen it (or at least not within the last 4 years or so).

Comment by habryka (habryka4) on The Median Researcher Problem · 2024-11-03T18:19:18.324Z · LW · GW

Yep, it seems like pretty standard usage to me (and IMO seems conceptually fine, despite the fact that "genetic" means something different, since for some reason using "memetic" in the same way feels very weird or confused to me, like I would almost never say "this has memetic origin")

Comment by habryka (habryka4) on JargonBot Beta Test · 2024-11-03T06:24:49.400Z · LW · GW

Great idea!

@Screwtape?

Comment by habryka (habryka4) on MichaelDickens's Shortform · 2024-11-02T17:03:47.204Z · LW · GW

The "could" here is (in context) about "could not get funding from modern OP". The whole point of my comment was about the changes that OP underwent. Sorry if that wasn't as clear, it might not be as obvious to others that of course OP was very different in the past.

Comment by habryka (habryka4) on The Compendium, A full argument about extinction risk from AGI · 2024-11-02T07:43:45.928Z · LW · GW

Like, here's a sanity-check: suppose you must convince a specific Creationist that the AGI Risk is real. Do you need to argue them out of Creationism in order to do so?

My guess is no, but also, my guess is we will probably still have better comms if I err on the side of explaining things the way they come naturally to me, entangled with the way I came to adopt a position, and then they can do a bunch of the work of generalizing. Of course, if something is deeply triggering or mindkilly to someone, then it's worth routing around, but it's not like any analogy with evolution is invalid from the perspective of someone who believes in Creationism. Yes, some of the force of such an analogy would be lost, but most of it comes from the logical consistency, not the empirical evidence.

Comment by habryka (habryka4) on MichaelDickens's Shortform · 2024-11-02T07:40:43.742Z · LW · GW

In 2023/2024 OP drastically changed its funding process and priorities (in part in response to FTX, in part in response to Dustin's preferences). This whole conversation is about the shift in OP's giving in this recent time period.

See also: https://forum.effectivealtruism.org/posts/foQPogaBeNKdocYvF/linkpost-an-update-from-good-ventures 

Comment by habryka (habryka4) on The Compendium, A full argument about extinction risk from AGI · 2024-11-01T21:53:31.361Z · LW · GW

No, I think this kind of very naive calculation does predictably result in worse arguments propagating, people rightfully dismissing those bad arguments (because they are not entangled with the real reasons why any of the people who have thought about the problem have formed beliefs on an issue themselves), and then ultimately the comms problem getting much harder.

I am in favor of people thinking hard about these issues, but exactly this kind of naive argument is in an uncanny valley where I think your comms get substantially worse.

Comment by habryka (habryka4) on The Compendium, A full argument about extinction risk from AGI · 2024-11-01T21:38:30.513Z · LW · GW

Yeah, I agree with a lot of this in principle. But the specific case of avoiding saying anything that might have something to do with evolution seems to me like a pretty wrong take on this dimension of trying to communicate clearly.

Comment by habryka (habryka4) on JargonBot Beta Test · 2024-11-01T21:03:49.152Z · LW · GW

Seems like a mistake! Agree it's not uncommon to use them less, though my guess (with like 60% confidence) is that the majority of authors on LW use them daily, or very close to daily.

Comment by habryka (habryka4) on JargonBot Beta Test · 2024-11-01T20:25:15.576Z · LW · GW

First of all, even taking what Gwern says there at face value, how many of the posts here that are written “with AI involvement” would you say actually are checked, edited, etc., in the rigorous way which Gwern describes? Realistically?

My guess is very few people are using AI output directly (at least at present it's pretty obvious, as their writing is kind of atrocious). I do think most posts probably involved people talking through their thoughts with an LLM, asking for some editing help, or asking some factual questions. My guess is basically 100% of those went through the kind of process that Gwern was describing here.

Comment by habryka (habryka4) on JargonBot Beta Test · 2024-11-01T19:44:34.190Z · LW · GW

Do you not use LLMs daily? I don't currently find them out-of-the-box useful for editing, but find them useful for a huge variety of tasks related to writing things. 

I think it would be more of an indictment of LessWrong if people somehow didn't use them; they obviously increase my productivity at a wide variety of tasks, and being an early adopter of powerful AI technologies seems like one of the things that I hope LessWrong authors excel at.

In general, I think Gwern's suggested LLM policy seems roughly right to me. Of course people should use LLMs extensively in their writing, but if they do, they really have to read any LLM writing that makes it into their post and check what it says is true: 

I am also fine with use of AI in general to make us better writers and thinkers, and I am still excited about this. (We unfortunately have not seen much benefit for the highest-quality creative nonfiction/fiction or research, like we aspire to on LW2, but this is in considerable part due to technical choices & historical contingency, which I've discussed many times before, and I still believe in the fundamental possibilities there.) We definitely shouldn't be trying to ban AI use per se.

However, if someone is posting a GPT-4 (or Claude or Llama) sample which is just a response, then they had damn well better have checked it and made sure that the references existed and said what the sample says they said and that the sample makes sense and they fixed any issues in it. If they wrote something and had the LLM edit it, then they should have checked those edits and made sure the edits are in fact improvements, and improved the improvements, instead of letting their essay degrade into ChatGPTese. And so on.

Comment by habryka (habryka4) on JargonBot Beta Test · 2024-11-01T17:21:09.949Z · LW · GW

And also, I do not personally want to be running into any writing that AI had a hand in.

(My guess is the majority of posts written daily on LW are now written with some AI involvement. My best guess is most authors on LessWrong use AI models on a daily basis, asking factual questions, and probably also asking for some amount of editing and writing feedback. As such, I don't think this is a coherent ask.)

Comment by habryka (habryka4) on The Compendium, A full argument about extinction risk from AGI · 2024-11-01T17:16:09.716Z · LW · GW

I don't think this kind of surface-level naive popularity optimization gives rise to a good comms strategy. Evolution is true, and mostly we should focus on making arguments based on true premises. 

Comment by habryka (habryka4) on Open Thread Fall 2024 · 2024-11-01T02:01:26.764Z · LW · GW

On mobile we by default use a markdown editor, so you can use markdown to format things.

Comment by habryka (habryka4) on Habryka's Shortform Feed · 2024-10-30T15:45:38.673Z · LW · GW

Interesting, thanks! Checking an older version of Gill Sans probably wouldn't have been something I would have thought to do, so your help is greatly appreciated.

I'll experiment some with getting Gill Sans MT Pro.

Comment by habryka (habryka4) on Habryka's Shortform Feed · 2024-10-30T06:00:50.334Z · LW · GW

Sure, I was just responding to this literal quote: 

Couldn't you please just set the comment font to the same as the post font?

Comment by habryka (habryka4) on MIRI 2024 Communications Strategy · 2024-10-30T04:15:19.945Z · LW · GW

(My model of Daniel thinks the AI will likely take over, but probably will give humanity some very small fraction of the universe, for a mixture of "caring a tiny bit" and game-theoretic reasons)

Comment by habryka (habryka4) on Habryka's Shortform Feed · 2024-10-30T04:02:58.335Z · LW · GW

The "Recommended" tab filters out read posts by default. We never had much demand for showing recently-sorted posts while filtering out only ones you've read, but it wouldn't be very hard to build. 

Not sure what you mean by "load more at once". We could add a whole user setting to allow users to change the number of posts on the frontpage, but done consistently that would produce a ginormous number of user settings for everything, which would be a pain to maintain (not like, overwhelmingly so, but I would be surprised if it was worth the cost).

Comment by habryka (habryka4) on Habryka's Shortform Feed · 2024-10-30T03:37:29.300Z · LW · GW

We previously had Calibri for Windows (indeed a very popular Windows system font). Gill Sans (which we now ship to all operating systems) is a quite popular macOS and iOS system font. I currently think there are some weird rendering issues on Windows, but if those are fixed, my guess is you would get used to it quickly enough. Gill Sans is not a rare font on the internet.

Comment by habryka (habryka4) on Habryka's Shortform Feed · 2024-10-30T02:55:32.335Z · LW · GW

Yep, definitely a bug. Should be fixed soon.

Comment by habryka (habryka4) on Habryka's Shortform Feed · 2024-10-29T20:56:14.941Z · LW · GW

We have done lots of user interviews over the years! Fonts are always polarizing, but people have a strong preference for sans serifs at small font sizes (and people prefer denser comment sections, though it's reasonably high variance).

Comment by habryka (habryka4) on Habryka's Shortform Feed · 2024-10-29T20:54:41.858Z · LW · GW

Plausible we might want to revert to Calibri on Windows, but I would like to make Gill Sans work. Having different font metrics on different devices makes a lot of detailed layout work much more annoying.

Curious if you can say more about the nature of the discomfort. Also curious whether fellow font optimizer @Said Achmiz has any takes, since he has been helpful here in the past, especially on the "making things render well on Windows" side.