LessWrong 2.0 Reader
There is a field called Forensic linguistics where detectives use someone's "linguistic fingerprint" to determine the author of a document (famously instrumental in catching Ted Kaczynski by analyzing his manifesto). It seems like text is often used to predict things like gender, socioeconomic background, and education level.
If LLMs are superhuman at this kind of work, I wonder whether anyone is developing AI tools to automate this. Maybe the demand is not very strong, but I could imagine, for example, that an authoritarian regime might have a lot of incentive to de-anonymize people. While a company like OpenAI seems likely to have an incentive to hide how much the LLM actually knows about the user, I'm curious where anyone would have a strong incentive to make full use of superhuman linguistic analysis.
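For a sense of what even pre-LLM automated stylometry looks like, here is a minimal sketch of function-word authorship attribution. Everything in it is a hypothetical illustration: the function-word list, the helper names, and the candidate texts are placeholders, and real forensic work uses far richer features and models.

```python
from collections import Counter
import math

# Hypothetical function-word list; classic stylometric studies
# (e.g. the Federalist Papers analysis) use larger, curated sets.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "it",
                  "upon", "while", "whilst", "enough", "although"]

def profile(text: str) -> list[float]:
    """Relative frequency of each function word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine(u: list[float], v: list[float]) -> float:
    """Cosine similarity between two frequency profiles."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def attribute(unknown: str, candidates: dict[str, str]) -> str:
    """Return the candidate author whose known writing is closest in style."""
    p = profile(unknown)
    return max(candidates, key=lambda name: cosine(p, profile(candidates[name])))
```

The point of the sketch is only that a "linguistic fingerprint" can be as simple as a frequency vector plus a similarity measure; an LLM-based tool would presumably replace both with learned representations.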
I think what quila is pointing at is their belief in the supposed fragility of thoughts at the edge of research questions. From that perspective I think their rebuttal is understandable, and your response completely misses the point: you can be someone who spends only four hours a day working and the rest of the time relaxing, but also care a lot about not losing the subtle and supposedly fragile threads of your thought when working.
Note: I have a different model of research thought, one that involves a systematic process towards insight, and because of that I also disagree with Johannes' decisions.
rhollerith_dot_com on Stephen Fowler's Shortform
COI == conflict of interest.
localdeity on On Privilege
Absolutely. For a quick model of why you get multiplicative results:
Then you literally multiply those three quantities together, and the product is the expected value per week of your intellectual work. My mentor says these are the three most important traits determining the best scientists.
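The multiplicative structure can be sketched numerically. The comment does not name the three quantities, so the factor names and values below are purely hypothetical placeholders:

```python
# Toy numeric sketch of a multiplicative model of research output.
# The three factors are hypothetical stand-ins, not the ones from the comment.
ideas_per_week = 5.0         # hypothetical factor 1
fraction_that_pan_out = 0.2  # hypothetical factor 2
value_per_good_idea = 10.0   # hypothetical factor 3

expected_value_per_week = (
    ideas_per_week * fraction_that_pan_out * value_per_good_idea
)

# Because the factors multiply, doubling any single one doubles the product,
# while a near-zero factor collapses it no matter how large the others are.
doubled = (2 * ideas_per_week) * fraction_that_pan_out * value_per_good_idea
```

This is why multiplicative models reward being good at everything at once rather than exceptional at one thing.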
ete on "If we go extinct due to misaligned AI, at least nature will continue, right? ... right?"
By my models of anthropics, I think this goes through.
templarrr on Monthly Roundup #18: May 2024
POSIWID. The metric being optimized is not "having the most money". Whether it should be is debatable; as one of the "poor Europeans", my personal opinion is that we're doing just fine.
ete on "If we go extinct due to misaligned AI, at least nature will continue, right? ... right?"
This is correct. I'm not arguing about p(total human extinction | superintelligence), but about p(nature survives | total human extinction from superintelligence), a conditional probability I sometimes see people get very wrong.
It's not implausible to me that we survive due to decision theoretic reasons, this seems possible though not my default expectation (I mostly expect Decision theory does not imply we get nice things [LW · GW], unless we manually win a decent chunk more timelines than I expect).
My confidence is in the claim "if AI wipes out humans, it will wipe out nature". I don't engage with counterarguments to a separate claim, as that is beyond the scope of this post and I don't have much to add over existing literature like the other posts you linked.
Edit: Partly retracted, I see how the second to last paragraph made a more overreaching claim, edited to clarify my position.
jiao-bu on Teaching CS During Take-Off
"[I]s a traditional education sequence the best way to prepare myself for [...?]"
This is hard to answer because in some ways the foundation of a broad education in all subjects is absolutely necessary. And some of them (math, for example) are a lot harder to patch in later if you are bad at them at, say, 28.
However, the other side of this is once some foundation is laid and someone has some breadth and depth, the answer to the above question, with regards to nearly anything, is often (perhaps usually) "Absolutely Not."
So, for a 17-year-old: yes. For a 25-year-old, you should be skipping as many pre-reqs and hoops as possible to do precisely what you want. You should not spend too much time on the traditional pedagogical steps, since once you know enough, a lot can be learned along the way and bootstrapped to what you need while working on harder or more cutting-edge projects or coursework. To do this type of learning you have to be "all in", and it feels exceedingly hard, but you reach a high level. Also, you should not spend too much time on books and curricula that are not very good.
Somewhere in the middle of these two points though, are things that are just being done badly (math, for example, in the USA).
These thoughts remind me of something Scott Alexander once wrote: that sometimes he hears someone say true but low-status things, and his automatic thought is that the person must be stupid to say something like that, and he has to consciously remind himself that what was said is actually true.
For anyone who's curious, this is what Scott said, in reference to him getting older – I remember it because I noticed the same in myself as I aged too:
I look back on myself now vs. ten years ago and notice I’ve become more cynical, more mellow, and more prone to believing things are complicated. For example: [list of insights] ...
All these seem like convincing insights. But most of them are in the direction of elite opinion. There’s an innocent explanation for this: intellectual elites are pretty wise, so as I grow wiser I converge to their position. But the non-innocent explanation is that I’m not getting wiser, I’m just getting better socialized. ...
I’m pretty embarrassed by Parable On Obsolete Ideologies, which I wrote eight years ago. It’s not just that it’s badly written, or that it uses an ill-advised Nazi analogy. It’s that it’s an impassioned plea to jettison everything about religion immediately, because institutions don’t matter and only raw truth-seeking is important. If I imagine myself entering that debate today, I’d be more likely to take the opposite side. But when I read Parable, there’s…nothing really wrong with it. It’s a good argument for what it argues for. I don’t have much to say against it. Ask me what changed my mind, and I’ll shrug, tell you that I guess my priorities shifted. But I can’t help noticing that eight years ago, New Atheism was really popular, and now it’s really unpopular. Or that eight years ago I was in a place where having Richard Dawkins style hyperrationalism was a useful brand, and now I’m (for some reason) in a place where having James C. Scott style intellectual conservativism is a useful brand. A lot of the “wisdom” I’ve “gained” with age is the kind of wisdom that helps me channel James C. Scott instead of Richard Dawkins; how sure am I that this is the right path?
Sometimes I can almost feel this happening. First I believe something is true, and say so. Then I realize it’s considered low-status and cringeworthy. Then I make a principled decision to avoid saying it – or say it only in a very careful way – in order to protect my reputation and ability to participate in society. Then when other people say it, I start looking down on them for being bad at public relations. Then I start looking down on them just for being low-status or cringeworthy. Finally the idea of “low-status” and “bad and wrong” have merged so fully in my mind that the idea seems terrible and ridiculous to me, and I only remember it’s true if I force myself to explicitly consider the question. And even then, it’s in a condescending way, where I feel like the people who say it’s true deserve low status for not being smart enough to remember not to say it. This is endemic, and I try to quash it when I notice it, but I don’t know how many times it’s slipped my notice all the way to the point where I can no longer remember the truth of the original statement.
This was back in 2017.
sameisenstat on The consistent guessing problem is easier than the halting problem
Note that Andy Drucker is not claiming to have discovered this; the paper you link is expository.
Since Drucker doesn't say this in the link, I'll mention that the objects you're discussing are conventionally known as PA degrees. The PA here stands for Peano arithmetic; a Turing degree solves the consistent guessing problem iff it computes some model of PA. This name may be a little misleading, in that PA isn't really special here. A Turing degree computes some model of PA iff it computes some model of ZFC, or more generally of any Σ1 theory capable of expressing arithmetic.
Drucker also doesn't mention the name of the theorem that this result is a special case of: the low basis theorem. "Low" here means low computability strength. Explicitly, a Turing degree A is low if solving the halting problem for machines with an oracle for A is equivalent (in the sense of Turing reductions) to solving the ordinary halting problem, with no oracle. The low basis theorem says that every infinite computable binary tree has a low path. Applying the theorem to this problem, we conclude that there is a consistent guessing oracle C which is low. So we cannot use this oracle to solve the halting problem: if we could, then the halting problem for machines with an oracle for C would be at least as hard as the halting problem for machines with an oracle for the halting set, but lowness says it is only as hard as the ordinary halting problem, which is strictly easier. Contradiction.
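Written out in standard jump notation (with ∅′ the halting set and A′ the jump of A), the lowness argument is:

```latex
% Low basis theorem (Jockusch--Soare): every infinite computable binary
% tree has a path $C$ with $C' \equiv_T \emptyset'$ ($C$ is ``low'').
%
% Why a low consistent guessing oracle $C$ cannot compute the halting set:
\begin{align*}
  \emptyset' \le_T C
    &\implies \emptyset'' \le_T C'
      && \text{(the jump is monotone)} \\
    &\implies \emptyset'' \le_T \emptyset'
      && \text{(lowness: } C' \equiv_T \emptyset'\text{)}
\end{align*}
% contradicting $\emptyset' <_T \emptyset''$ (the jump theorem).
```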
Various other things are known about PA degrees, though I'm not sure what might be of interest to you or others here. This stuff is discussed in books on computability theory, like Robert Soare's Turing Computability: Theory and Applications. Though I thought I learned about PA degrees from his earlier book, I now don't see them in there, so maybe I just learned about them around the same time, possibly following my interest in your and others' work on reflective oracles. The basics of computability theory (Turing degrees, the Turing jump, and the arithmetical hierarchy in the computability sense) may be of interest to the extent there is anything there that you're not already familiar with. With regard to PA degrees in particular, people like to talk about diagonally nonrecursive functions. This works as follows. Let φ_n denote the nth partial computable function according to some Gödel numbering. The PA degrees are exactly the Turing degrees that compute functions f: ℕ → {0,1} such that f(n) ≠ φ_n(n) for all n at which the right-hand side is defined. This is suggestive of the ideas around reflective oracles, the Lawvere fixed-point theorem, etc. But I wouldn't say that when I think about these things I think of them in terms of diagonally nonrecursive functions; plausibly that's not an interesting direction to point people in.
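The diagonally nonrecursive characterization, in symbols:

```latex
% Fix a G\"odel numbering and let $\varphi_n$ be the $n$th partial
% computable function. A total function is diagonally nonrecursive
% (with values in $\{0,1\}$) if
\[
  f\colon \mathbb{N} \to \{0,1\}, \qquad
  f(n) \neq \varphi_n(n) \quad \text{whenever } \varphi_n(n)\!\downarrow,
\]
% and a Turing degree is a PA degree iff it computes such an $f$.
```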