LessWrong 2.0 Reader
Note that Andy Drucker is not claiming to have discovered this; the paper you link is expository.
Since Drucker doesn't say this in the link, I'll mention that the objects you're discussing are conventionally known as PA degrees. The "PA" here stands for Peano arithmetic; a Turing degree solves the consistent guessing problem iff it computes some model of PA. This name may be a little misleading, in that PA isn't really special here: a Turing degree computes some model of PA iff it computes some model of ZFC, or more generally of any consistent Σ1-axiomatizable theory capable of expressing arithmetic.
Drucker also doesn't mention the name of the theorem that this result is a special case of: the low basis theorem. "Low" here suggests low computability strength. Explicitly, a Turing degree A is low if solving the halting problem for machines with an oracle for A is equivalent (in the sense of Turing reductions) to solving the halting problem for ordinary Turing machines without any oracle. The low basis theorem says that every infinite computable binary tree has a low path. Applying the theorem to this problem, we conclude that there is a consistent guessing oracle C which is low. So we cannot use C to solve the halting problem: if we could, then the halting problem for machines with an oracle for C would be at least as hard as the halting problem for machines with an oracle for the halting set; but by lowness the former is no harder than the ordinary halting problem, which is strictly easier than the latter, a contradiction.
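The same argument in jump notation (a sketch; here $\emptyset'$ is the halting set and $A'$ the halting problem relativized to $A$):

$$C \ge_T \emptyset' \;\Longrightarrow\; C' \ge_T \emptyset'' \quad \text{(monotonicity of the jump)},$$
$$C \text{ low} \;\Longrightarrow\; C' \equiv_T \emptyset', \quad \text{hence} \quad \emptyset' \ge_T \emptyset'',$$

contradicting the jump theorem's $\emptyset'' >_T \emptyset'$.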
Various other things are known about PA degrees, though I'm not sure what might be of interest to you or others here. This stuff is discussed in books on computability theory, like Robert Soare's Turing Computability: Theory and Applications. I thought I had learned about PA degrees from his earlier book, but now I don't see them in there, so maybe I just learned about them around the same time, possibly following my interest in your and others' work on reflective oracles. The basics of computability theory (Turing degrees, the Turing jump, and the arithmetic hierarchy in the computability sense) may be of interest to the extent there's anything there you're not already familiar with. With regard to PA degrees in particular, people like to talk about diagonally nonrecursive functions. This works as follows. Let φn denote the nth partial computable function according to some Gödel numbering. The PA degrees are exactly the Turing degrees that compute functions f : N → {0,1} such that f(n) ≠ φn(n) for all numbers n at which the right-hand side is defined. This is suggestive of the ideas around reflective oracles, the Lawvere fixed-point theorem, etc. But I wouldn't say that when I think about these things, I think of them in terms of diagonally nonrecursive functions; plausibly that's not an interesting direction to point people in.
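To make the diagonal concrete, here's a minimal Python sketch (a toy, hypothetical Gödel numbering: a "program" is a source string defining g(x), and a program's index is its own source string). It shows why no total computable f can be diagonally nonrecursive: f is itself a program, so at its own index e we get f(e) = φe(e), and the required inequality fails there.

```python
def phi(src: str, x):
    """Toy interpreter: run the "program" src (Python source defining g)
    on input x. It may diverge or raise -- that counts as "undefined"."""
    env = {}
    exec(src, env)
    return env["g"](x)

# Any total computable f, given both as a function and as its own source.
f_src = "def g(x):\n    return len(x) % 2"
def f(x):
    return len(x) % 2

# With source strings as indices, f's own source e is an index for f,
# so f(e) = phi_e(e): the inequality f(n) != phi_n(n) fails at n = e.
e = f_src
assert f(e) == phi(e, e)
print("diagonal failure at f's own index:", f(e), "==", phi(e, e))
```

A consistent guessing oracle has to evade exactly this diagonalization, which is why no computable one exists.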
wei-dai on Stephen Fowler's Shortform
It's also notable that the topic of OpenAI nondisparagement agreements was brought to Holden Karnofsky's attention in 2022, and he replied with "I don’t know whether OpenAI uses nondisparagement agreements; I haven’t signed one." (He could have asked his contacts inside OAI about it, or asked the EA board member to investigate. Or even set himself up earlier as someone OpenAI employees could whistleblow to on such issues.)
If the point was to buy a ticket to play the inside game, then it was played terribly and negative credit should be assigned on that basis, and for misleading people about how prosocial OpenAI was likely to be (due to having an EA board member).
shminux on On Privilege
Excellent point about the compounding, which is often multiplicative, not additive. Incidentally, multiplicative advantages result in a power law distribution of income/net worth, whereas additive advantages/disadvantages result in a normal distribution. But that is a separate topic, well explored in the literature.
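A quick simulation sketch of that distinction (made-up parameters; multiplicative compounding literally yields a log-normal, whose heavy right tail looks power-law-like over a wide range):

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_years = 100_000, 40

# Additive advantages: outcome is a sum of independent yearly shocks,
# so the central limit theorem gives an approximately normal distribution.
additive = rng.normal(loc=1.0, scale=0.2, size=(n_people, n_years)).sum(axis=1)

# Multiplicative advantages: outcome is a product of yearly growth factors,
# giving a log-normal distribution with a heavy right tail.
factors = rng.normal(loc=1.05, scale=0.2, size=(n_people, n_years)).clip(min=0.01)
multiplicative = factors.prod(axis=1)

for name, w in [("additive", additive), ("multiplicative", multiplicative)]:
    top1 = np.sort(w)[-n_people // 100:].sum() / w.sum()
    print(f"{name:>14}: mean={w.mean():8.2f}  median={np.median(w):8.2f}  "
          f"top-1% share={top1:.1%}")
```

The additive population clusters tightly around its mean; the multiplicative one ends up with a median far below its mean and a top 1% holding a disproportionate share.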
shminux on On Privilege
I mostly meant your second point, just generally being kinder to others, but the other two are also well taken.
kevin-dorst on The Natural Selection of Bad Vibes (Part 1)
Agreed that people have lots of goals that don't fit in this model. It's definitely a simplified model. But I'd argue that ONE of (most) people's goals is to solve problems; and I do think, broadly speaking, it is an important function (evolutionarily and currently) of conversation. So I still think this model gets at an interesting dynamic.
linch on Ilya Sutskever and Jan Leike resign from OpenAI [updated]
I'd be a bit surprised if that's the answer; if OpenAI doesn't offer any vested equity, that half-truth feels overly blatant to me.
alexander-gietelink-oldenziel on Alexander Gietelink Oldenziel's Shortform
I don't know what you mean by 'general intelligence' exactly, but I suspect you mean something like human+ capability in a broad range of domains. I agree LLMs will become generally intelligent in this sense when scaled, arguably even are, for domains with sufficient data. But that's kind of the kicker, right? Cave men didn't have the whole internet to learn from, yet somehow did something that not even you seem to claim LLMs will be able to do: create the (data of the) Internet.
(Your last claim seems surprising. Pre-2014 games don't come close to the Elo of AlphaZero. So a next-token predictor would be trained to simulate a human player up to 2800, not 3200+.)
localdeity on On Privilege
Also:
One of the things you probably notice is that having some advantages tends to make other advantages more valuable. Certainly career-wise, several of those things are like, "If you're doing badly on this dimension, then you may be unable to work at all, or be limited to far less valuable roles". For example, if one person's crippling anxiety takes them from 'law firm partner making $1 million' to 'law analyst making $200k', and another person's crippling anxiety takes them from 'bank teller making $50k' to 'unemployed', then, well, from a utilitarian perspective, fixing one person's problems is worth a lot more than the other's. That is probably already acted upon today—the former law partner is more able to pay for therapy/whatever—but it could inform people who are deciding how to allocate scarce resources to young people, such as the student versions of the potential law partner and bank teller.
(Of course, the people who originally wrote about "privilege" would probably disagree in the strongest possible terms with the conclusions of the above lines of reasoning.)
joe_collman on Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems
This seems interesting, but I've seen no plausible case that there's a version of (1) that's both sufficient and achievable. I've seen Davidad mention e.g. approaches using boundaries formalization. This seems achievable, but clearly not sufficient. (Boundaries don't help with e.g. [allow the mental influences that are desirable, but not those that are undesirable].)
The [act sufficiently conservatively for safety, relative to some distribution of safety specifications] constraint seems likely to lead to paralysis (either of the form [AI system does nothing], or [AI system keeps the world locked into some least-harmful path], depending on the setup - and here of course "least harmful" isn't a utopia, since it's a distribution of safety specifications, not desirability specifications).
Am I mistaken about this?
I'm very pleased that people are thinking about this, but I fail to understand the optimism - hopefully I'm confused somewhere!
Is anyone working on toy examples as proof of concept?
I worry that there's so much deeply technical work here that not enough time is being spent to check that the concept is workable (is anyone focusing on this?). I'd suggest focusing on mental influences: what kind of specification would allow me to radically change my ideas, but not to be driven insane? What's the basis to think we can find such a specification?
It seems to me that finding a fit-for-purpose safety/acceptability specification won't be significantly easier than finding a specification for ambitious value alignment.
tenthkrige on Forecasting: the way I think about it
Good points, well made. I'm not sure what you mean by "my expected log score is maximized" (and would like to know), but in any case it's probably your average world rather than your median world that does it?
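For reference, the log score is a proper scoring rule: your expected log score is maximized by reporting your true probability. A minimal sketch, assuming a true belief of 0.7:

```python
import numpy as np

# The log score awards log(q) if the event happens and log(1 - q) otherwise.
# With true belief p, the expected score for reporting q is
#     p * log(q) + (1 - p) * log(1 - q),
# which is maximized at q = p -- the sense in which the score is "proper".
p = 0.7                                   # assumed true belief
q = np.linspace(0.01, 0.99, 981)          # candidate reports
expected = p * np.log(q) + (1 - p) * np.log(1 - q)
print("report maximizing expected log score:", q[np.argmax(expected)])  # ~0.7
```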