LessWrong 2.0 Reader
gwern on Scientific Notation Options
I think this framing conflates the question of input with that of presentation. The 'e' notation seems easiest to input - simple, unambiguous, reliable to parse, enterable everywhere - but it's not a good one to read, because, if nothing else, it now looks like it's multiplying variables and numbers.
They don't have to be the same. If numbers are written uniformly, they can be parsed & rendered differently.
For example, I think that one of the things that makes calculations or arguments hard to follow is that they shamelessly break human subitizing and intuitive numeracy by promiscuously mixing units, which makes it hard to do one of the most common things we do with numbers - compare them - while not really making anything easier.
In much the same way that people will sloppily quote dollar amounts from decades apart as if they were the same thing (which is why I inflation-adjust them automatically into current dollars), they will casually talk about "10 million" vs "20 billion", imposing a burden of constant mental arithmetic as one tries to juggle back and forth between all of these different base units. Sure, decimal numbers or metric units may not be as bad as trying to convert hogsheads to long fathoms or swapping between binary and decimal, but it's still not ideal.
It is no wonder that people are constantly off by orders of magnitude and embarrass themselves on social media when they turn out to be a factor of 10 off because they accidentally converted by 100 instead of 1,000, or they convert milligrams and grams wrong and poison themselves on film. If someone is complaining about the US federal government, which is immediately more understandable: "of $20 billion, $10 million was spent on engineering a space pen" or "of $20,000 million, $10 million was spent on a space pen"? (And this is an easy case, with about the most familiar possible units. As soon as it becomes something like milligrams and grams...)
I mean, imagine if this was normal practice with statistical graphs: "oh, the blue and red bar columns, even though they are the same size in the image and describe the same thing, dollars, are actually 10x different. Didn't you see in the legend where it clearly says that 'blue = 1; red = 10'?" "Er, OK, but if they're the same sort of thing, then why are some blue and some the larger red?" "No reason. I just didn't feel like multiplying the blue datapoints by 10 before graphing." "...I see."
So while it might look a little odd, I try to write with a single base-unit throughout a passage of writing, to enable immediate comparison. (I think this helps a lot with DL scaling too, because somehow when you talk about a model having '50 million parameters' and are comparing it to multi-billion parameter models like a "GPT-3-175b", that seems a lot bigger than if you had written '0.05b parameters'. Or if you compare, say, a Gato with 1b parameters to a GPT-4 with 1,400b parameters, the comparison feels a lot more intuitive than if I had written 'a GPT-4 with 1.4 trillion parameters'.)
This might seem too annoying for the author (although if it is, that should be a warning sign: if it's hard for you, the author, to corral these units while writing them, how do you expect the reader to handle them?), but it could just be automated. Write all numbers in a standard format, whether it's 10e2 or 1,000, and then a program can simply parse the text for numbers, take the first number, extract the largest base that makes it a single-digit number ("thousand"), and then rewrite all following numbers with that as the unit, formatted in your preferred style as '1 × 10²' or whatever.
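To illustrate, here is a minimal sketch of that normalization step in Python (a hypothetical implementation under simplifying assumptions: only word-scale bases like "thousand"/"million"/"billion", a crude regex for number matching, and the base is simply the largest one that leaves at least one digit before the decimal point):

```python
import re

# Word-scale base units; a real implementation might also handle SI prefixes, etc.
BASES = [(1e12, "trillion"), (1e9, "billion"), (1e6, "million"), (1e3, "thousand")]

# Crude pattern: matches "1,000", "20000", "10e6", "2.5e9", and similar.
NUMBER_RE = re.compile(r"\d[\d,]*(?:\.\d+)?(?:e-?\d+)?")

def parse(token: str) -> float:
    """Accept both '1,000'-style and '10e2'-style inputs."""
    return float(token.replace(",", ""))

def pick_base(x: float):
    """Largest named base that still leaves at least one digit before the decimal point."""
    for scale, name in BASES:
        if abs(x) >= scale:
            return scale, name
    return 1.0, ""

def normalize(text: str) -> str:
    """Re-express every number in the passage in the base unit chosen for the first number."""
    matches = list(NUMBER_RE.finditer(text))
    if not matches:
        return text
    scale, name = pick_base(parse(matches[0].group()))
    def rewrite(m: re.Match) -> str:
        return f"{parse(m.group()) / scale:g} {name}".rstrip()
    return NUMBER_RE.sub(rewrite, text)

print(normalize("of 20e9 dollars, 10e6 was spent on a space pen"))
# -> "of 20 billion dollars, 0.01 billion was spent on a space pen"
```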
(And you can, for HTML, make them copy-paste as regular full-length numbers through a similar trick as we do to provide the original LaTeX for math formulas which were converted from LaTeX, so it can be fully compatible with copy-pasting into a REPL or other application.)
wassname on Language Models Model Us
If you are using llama, you can use https://github.com/wassname/prob_jsonformer, or snippets of its code, to get probabilities over a selection of tokens.
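For context, here is a minimal sketch of the underlying idea - reading off next-token probabilities for a chosen set of candidate tokens from a llama-family model via Hugging Face transformers - rather than the prob_jsonformer API itself (the model name and candidate strings are placeholders):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model; any causal LM that AutoModelForCausalLM can load works the same way.
model_name = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")

prompt = "Q: Is the sky blue? Answer yes or no.\nA:"
candidates = [" yes", " no"]  # the selection of tokens we care about

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    next_token_logits = model(**inputs).logits[0, -1]  # logits for the token after the prompt
probs = torch.softmax(next_token_logits, dim=-1)

for text in candidates:
    token_id = tokenizer(text, add_special_tokens=False).input_ids[0]  # first token of the candidate
    print(f"{text!r}: {probs[token_id].item():.4f}")
```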
phib on Stephen Fowler's Shortform
Honestly, maybe a further controversial opinion, but this [$30 million for a board seat at what would become the lead company for AGI, with a novel structure for nonprofit control that could work?] still doesn't necessarily feel like as bad a decision now as others are making it out to be.
The thing that killed all value of this deal was losing the board seat(s?), and I at least haven't seen much discussion of this as a mistake.
I'm just surprised so little prioritization was given to keeping this board seat; it was probably one of the most important assets of the "AI safety community and allies", and there didn't seem to be any real fight with Sam Altman's camp over it.
So Holden has the board seat, but has to leave because of a COI, and endorses Toner to replace him: "... Karnofsky cited a potential conflict of interest because his wife, Daniela Amodei, a former OpenAI employee, helped to launch the AI company Anthropic.
Given that Toner previously worked as a senior research analyst at Open Philanthropy, Loeber speculates that Karnofsky might’ve endorsed her as his replacement."
Like, maybe it was doomed if they only had one board seat (Open Phil) vs whoever else is on the board, and there's a lot of shuffling about as Musk and Hoffman also leave for COIs, but at the start of 2023 it seems like there is an "AI safety" half to the board, and a year later there is none. Maybe it was further doomed if Sam Altman has the "take the whole company elsewhere" card, but idk... was this really inevitable? Was there really not a better way to, idk, maintain some degree of control and supervision of this vital board over the years since OP gave the grant?
I'm not sure if those are precisely the terms of the charter, but that's beside the point. It is still "private" in the sense that there is a small group of private citizens who own the thing and decide what it should do, with no political accountability to anyone else. As for the "non-profit" part, we've seen what happens to that as soon as it's in the way.
justus on What's the risk that AI tortures us all?
When do you think it would happen if it did happen?
justus on What's the risk that AI tortures us all?
What do you think the likelihood of extinction is, and when would it probably happen?
dave-orr on What's the risk that AI tortures us all?
If you want a far-future fictional treatment of this kind of situation, I recommend Surface Detail by Iain Banks.
akash-wasil on robo's Shortform
Oh, good point - I think my original phrasing was too broad. I didn't mean to suggest that there were no high-quality policy discussions on LW; rather, I meant to claim that the proportion/frequency of policy content is relatively limited. I've edited to reflect a more precise claim:
The vast majority of high-quality content on LessWrong is about technical stuff, and it's pretty rare to see high-quality policy discussions on LW these days (Zvi's coverage of various bills would be a notable exception). Partially as a result of this, some "serious policy people" don't really think LW users will have much to add.
(I haven't seen much from Scott or Robin about AI policy topics recently– agree that Zvi's posts have been helpful.)
(I also don't know of many public places that have good AI policy discussions. I do think the difference in quality between "public discussions" and "private discussions" is quite high in policy. I'm not quite sure what the difference looks like for people who are deep into technical research, but it seems likely to me that policy culture is more private/secretive than technical culture.)
viliam on On Privilege
What are the advantages of noticing all of this?