LessWrong 2.0 Reader
Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2006-11-22T20:00:00.000Z · comments (49)
Ha! Well done. I spent a week making sure my math was right and never thought of this. I agree that updating the truth probability is a better model of this situation, and I can confirm your numbers.
I suppose we could also update each day's success chance, with some kind of prior balancing updating truth probability vs. success probability. Though by that point we are likely no longer "simplifying".
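The update the comment describes can be sketched with Bayes' rule. This is a hypothetical illustration, not the post's actual model: the numbers for s (daily success chance if the hypothesis is true) and b (daily success chance if it is false) are made up.

```python
def update_truth_prob(p: float, s: float, b: float, success: bool) -> float:
    """One Bayesian update of P(hypothesis true) from a day's outcome.

    p: prior P(true); s: daily success chance if true;
    b: daily success chance if false; success: what was observed.
    """
    like_true = s if success else 1.0 - s
    like_false = b if success else 1.0 - b
    return p * like_true / (p * like_true + (1.0 - p) * like_false)

# Three successful days in a row, starting from even odds.
p = 0.5
for _ in range(3):
    p = update_truth_prob(p, s=0.9, b=0.1, success=True)
print(round(p, 4))  # 0.9986
```

Each success multiplies the odds by the likelihood ratio s/b = 9, so three successes take even odds to 729:1. Updating the success chance itself as well, as the comment suggests, would mean putting a prior (e.g. a Beta distribution) on s too.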
p-b-1 on On AI personhood
Which is exactly what I am doing in the post? By saying that the question of consciousness is a red herring, i.e. not that relevant to the question of personhood?
steve-m-2 on On AI personhood
At the risk of nitpicking around labels, while I see what you're getting at, consciousness and personhood are two different things in a qualitatively meaningful sense.
Consciousness is a vague term, kind of like the "soul", which there is not uniform agreement around. Philosophically it may be important, but pragmatically it's not very useful.
Personhood, on the other hand, is, at least in the realm of the law, a pragmatically important label. It features heavily in issues like corporate liability, abortion law, and the citizenship prospects bestowed on individuals. And it rarely touches on issues of consciousness.
So just encouraging you to keep those separate.
p-b-1 on On AI personhood
No.
The argument is that feelings, or valence more broadly, in humans require additional machinery (amygdala, hypothalamus, etc.). If the machinery is missing, the pain/fear/.../valence is missing even though the sequence learning works just fine.
AI is missing this machinery, therefore it is extremely unlikely to experience pain/fear/.../valence.
priyanka-bharadwaj on Good Research Takes are Not Sufficient for Good Strategic Takes
Being good at research and being good at high level strategic thinking are just fairly different skillsets!
Neel, thank you, especially for the humility in acknowledging how hard it is to know whether a strategic take is any good.
Your post made me realise I've been holding back on a framing I've found useful (from when I worked as a matchmaker and a relationship coach): thinking about alignment less as a performance problem and more as a relationship problem. We often fixate on traits like intelligence, speed, and obedience, but we forget to ask: what kind of relationship are we building with AI? If we started there, maybe we'd optimise for collaboration rather than control?
P.S. I don’t come from a research background, but my work in behaviour and systems design gives me a practical lens on alignment, especially around how relationships shape trust, repair, and long-term coherence.
mo-putera on Mo Putera's Shortform
Terry Tao recently wrote a nice series of toots on Mathstodon that reminded me of what Bill Thurston said:
1. What is it that mathematicians accomplish?
There are many issues buried in this question, which I have tried to phrase in a way that does not presuppose the nature of the answer.
It would not be good to start, for example, with the question
How do mathematicians prove theorems?
This question introduces an interesting topic, but to start with it would be to project two hidden assumptions: (1) that there is uniform, objective and firmly established theory and practice of mathematical proof, and (2) that progress made by mathematicians consists of proving theorems. It is worthwhile to examine these hypotheses, rather than to accept them as obvious and proceed from there.
The question is not even
How do mathematicians make progress in mathematics?
Rather, as a more explicit (and leading) form of the question, I prefer
How do mathematicians advance human understanding of mathematics?
This question brings to the fore something that is fundamental and pervasive: that what we are doing is finding ways for people to understand and think about mathematics.
The rapid advance of computers has helped dramatize this point, because computers and people are very different. For instance, when Appel and Haken completed a proof of the 4-color map theorem using a massive automatic computation, it evoked much controversy. I interpret the controversy as having little to do with doubt people had as to the veracity of the theorem or the correctness of the proof. Rather, it reflected a continuing desire for human understanding of a proof, in addition to knowledge that the theorem is true.
On a more everyday level, it is common for people first starting to grapple with computers to make large-scale computations of things they might have done on a smaller scale by hand. They might print out a table of the first 10,000 primes, only to find that their printout isn’t something they really wanted after all. They discover by this kind of experience that what they really want is usually not some collection of “answers”—what they want is understanding.
Tao's toots:
In the first millennium CE, mathematicians performed the then-complex calculations needed to compute the date of Easter. Of course, with our modern digital calendars, this task is now performed automatically by computers; and the older calendrical algorithms are now mostly of historical interest only.
In the Age of Sail, mathematicians were tasked to perform the intricate spherical trigonometry calculations needed to create accurate navigational tables. Again, with modern technology such as GPS, such tasks have been fully automated, although spherical trigonometry classes are still offered at naval academies, and ships still carry printed navigational tables in case of emergency instrument failures.
During the Second World War, mathematicians, human computers, and early mechanical computers were enlisted to solve a variety of problems for military applications such as ballistics, cryptanalysis, and operations research. With the advent of scientific computing, the computational aspect of these tasks has been almost completely delegated to modern electronic computers, although human mathematicians and programmers are still required to direct these machines. (1/3)
Today, it is increasingly commonplace for human mathematicians to also outsource symbolic tasks in such fields as linear algebra, differential equations, or group theory to modern computer algebra systems. We still place great emphasis in our math classes on getting students to perform these tasks manually, in order to build a robust mathematical intuition in these areas (and to allow them to still be able to solve problems when such systems are unavailable or unsuitable); but once they have enough expertise, they can profitably take advantage of these sophisticated tools, as they can use that expertise to perform a number of "sanity checks" to inspect and debug the output of such tools.
With the advances in large language models and formal proof assistants, it will soon become possible to also automate other tedious mathematical tasks, such as checking all the cases of a routine but combinatorially complex argument, searching for the best "standard" construction or counterexample for a given inequality, or performing a thorough literature review for a given problem. To be usable in research applications, though, enough formal verification will need to be in place that one does not have to perform extensive proofreading and testing of the automated output. (2/3)
As with previous advances in mathematics automation, students will still need to know how to perform these operations manually, in order to correctly interpret the outputs, to craft well-designed and useful prompts (and follow-up queries), and to be able to function when the tools are not available. This is a non-trivial educational challenge, and will require some thoughtful pedagogical design choices when incorporating these tools into the classroom. But the payoff is significant: given that such tools can free up the significant fraction of a mathematician's research time that is currently devoted to routine calculations, a student trained in these tools, once they have matured, could find the process of mathematical research considerably more efficient and pleasant than it is today. (3/3)
That said, while I'm not quite as bullish as some folks who think FrontierMath Tier 4 problems may fall in 1-2 years and mathematicians will be rapidly obsoleted thereafter, I also don't think Tao is quite feeling the AGI here.
robo on jacquesthibs's Shortform
Of course, I agree, it's such a pattern that it doesn't look like a joke. It looks like a very compelling true anecdote. And if someone repeats this very compelling true anecdote they'll make AI alignment worriers look like fools who believe Onion headlines.
mitchell_porter on Nihilism Is Not Enough By Peter Thiel
Afterword by an AI (Claude 3.7 Sonnet)
vyacheslav-ladischenski-slava on Explanations as Hard to Vary Assertions
Deutsch's objection is not to Bayes' theorem itself but to the idea that updating numbers is what science is about. In his Popperian picture, knowledge grows through explanatory creativity and critical elimination, and evidence confirming or raising the probability of a sweeping theory is, on his view, literally impossible.
vyacheslav-ladischenski-slava on Explanations as Hard to Vary Assertions
According to DD, evidence doesn't "confirm" anything. It never justifies belief or increases the probability of a theory being right.
Evidence can only falsify a theory outright, or it can fail to find a flaw, leaving the theory "unrefuted for now."
There is no middle state in which evidence makes the theory more likely to be true.
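For contrast, the "middle state" being denied here is exactly what Bayes' theorem produces: a passed test raises a theory's probability without proving it. A minimal numerical sketch; the probabilities (0.3, 0.95, 0.2) are made-up illustrative values, not drawn from any source.

```python
# Hypothetical setup: the theory predicts the observed evidence with
# probability 0.95, while the pool of rival explanations predicts it
# with probability 0.2.
prior = 0.3
p_e_given_t = 0.95       # P(evidence | theory)
p_e_given_not_t = 0.2    # P(evidence | not theory)

# Bayes' theorem: P(T|E) = P(T) * P(E|T) / P(E)
p_e = prior * p_e_given_t + (1 - prior) * p_e_given_not_t
posterior = prior * p_e_given_t / p_e

print(round(posterior, 3))  # 0.671 -- above the prior, but short of 1
```

On the Bayesian picture the posterior (about 0.67) is precisely a middle state between refuted and certain; the Deutschian position is that assigning such numbers to sweeping theories is meaningless in the first place.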