Comments

Comment by Mitchell Reynolds (mitchell-reynolds) on Changing main content font to Valkyrie? · 2023-09-22T21:56:28.981Z · LW · GW

Why the preference for exclusively serif fonts? Maybe add the option of choosing from a small set of fonts?

Sans-serif fonts are better for those of us with dyslexia (the base rate is roughly 20% of the population). For specific suggestions, Roboto (open source under the Apache license) or Proxima Nova are clean and inclusive.

Comment by Mitchell Reynolds (mitchell-reynolds) on Estimating the Current and Future Number of AI Safety Researchers · 2023-04-05T02:53:30.430Z · LW · GW

Solid analysis; here is an exploratory comment on the future of AI safety research.

I find it striking how many well-known organizations working on AI safety were founded very recently. This trend suggests that some of the most influential AI safety organizations are yet to be founded.

In various winner-take-all scenarios, being early at the right time often matters more than being truly "the best." I'm modestly confident this will hold for capabilities work, but I'm unsure whether it will hold for AI safety organizations.

I think this matters when considering where the ~1,000 future AI safety researchers will be employed and which research agendas will be pursued. My low-confidence guess is a future distribution with slightly more organizations (higher n) than today and a higher average number of researchers per organization.

Comment by Mitchell Reynolds (mitchell-reynolds) on [$20K in Prizes] AI Safety Arguments Competition · 2022-04-26T21:42:41.115Z · LW · GW

I had a similar thought: prompt GPT-3 for one-liners, or to summarize an article (if one is available). I think involving the community in writing 500-1000 submissions would have the positive externality of prompting non-winners to distill and condense their views. My exploratory idea is that this would be instrumentally useful when talking with people new to AI x-risk topics.
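A minimal sketch of that prompting idea, assuming the legacy (pre-1.0) `openai` Python client and an `OPENAI_API_KEY` in the environment; the model name and prompt wording here are illustrative placeholders, not a tested recipe:

```python
def build_prompt(article_text: str) -> str:
    """Construct a prompt asking GPT-3 for a one-sentence summary."""
    return (
        "Summarize the following article about AI risk in one sentence, "
        "aimed at a reader new to the topic:\n\n" + article_text
    )

def summarize_for_newcomers(article_text: str) -> str:
    """Call the (legacy) Completions API with the prompt above."""
    import openai  # pip install "openai<1.0"; reads OPENAI_API_KEY from env

    response = openai.Completion.create(
        model="text-davinci-002",   # illustrative model choice
        prompt=build_prompt(article_text),
        max_tokens=60,
        temperature=0.7,
    )
    return response["choices"][0]["text"].strip()
```

Competition entries could then be generated in bulk by mapping `summarize_for_newcomers` over a list of candidate articles and hand-filtering the results.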