Are you rejecting Pascal's mugging because of the prospect of relying on uncertain models that you do not expect to confirm?
Is all your intuition captured by maximizing utility over all but the extreme billionth of the distribution?
Here's a one-shot problem for your intuition to answer: you get to design the probability distribution from which the number of paperclips is drawn, except that its expectation must be at most its negative Kolmogorov complexity. What distribution makes for a good choice?
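Spelling out that constraint (my notation, just for concreteness): if \(P\) is the distribution you choose over the number of paperclips \(N\), and \(K(P)\) is its Kolmogorov complexity, the requirement is

\[ \mathbb{E}_{N \sim P}[N] \;\le\; -K(P). \]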
shminux on Book review: The Sleepwalkers by Arthur Koestler

Funny how in this case I would side with the Church against Galileo: scientific anti-realism avoids a lot of silly arguments about what exists and what is real. Galileo committed a cardinal sin of post-rationality, claiming that his map is the territory, and amply deserved the punishment.

rohinmshah on Any rebuttals of Christiano and AI Impacts on takeoff speeds?
Humans are already capable of self-improvement. This argument would suggest that the smartest human (or the one who was best at self-improvement, if you prefer) should have undergone fast takeoff and become seriously overpowered, but this doesn't seem to have happened.
"In a world where the limiting factor is researcher talent, not compute"

Compute is definitely a limiting factor currently. Why would that change?

shminux on On the Nature of Programming Languages
Ah, I agree that mindless factorized development can lead to similar patterns, sure. But to examine this conjecture one has to do some honest numerical modeling of the process as applied to... an emergent language? Something else?

sustrik on On the Nature of Programming Languages
AFAIU, your argument is that a super-human intelligence can look at the program as a whole, be aware that both hind legs need to be the same length, and modify the code in both places to satisfy the constraint.

While imaginable, in the real world I don't see this happening except for toy examples (say, an academic exercise of writing a toy sorting algorithm). Actual software projects are big and are modified by many actors, each with little understanding of the whole. Natural selection is performed by an entity that is, from a human point of view, completely mindless. The same goes for genetic algorithms and, possibly, ML.

The point I was trying to make is that in such piecemeal, uninformed development, some patterns may emerge that are, in a way, independent of the type of development process (human-driven, evolution, etc.).
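To make this concrete, a toy sketch (hypothetical function names and numbers, purely illustrative): the same "legs must match" constraint re-encoded at two independently maintained sites, with nothing in the code itself tying them together.

```python
# Toy illustration (hypothetical): a constraint duplicated across two
# separately maintained functions. No single maintainer sees both sites,
# so the invariant exists only implicitly.

def build_left_hind_leg():
    # Maintained by one actor; the length constraint is encoded here...
    return {"side": "left", "length_cm": 40}

def build_right_hind_leg():
    # ...and independently re-encoded here by another actor.
    return {"side": "right", "length_cm": 40}

def legs_match(left, right):
    # Only a whole-program view (or a test like this) can enforce what the
    # two sites never state explicitly.
    return left["length_cm"] == right["length_cm"]

assert legs_match(build_left_hind_leg(), build_right_hind_leg())
```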

michaelcohen on Value Learning is only Asymptotically Safe
So the AI only takes action a from state s if it has already seen the human do that? If so, that seems like the root of all the safety guarantees to me.
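A minimal sketch of the mechanism as I'm reading it (my own toy formulation, not the paper's algorithm): the agent's choices in each state are restricted to actions it has already watched the human take there, and otherwise it defers.

```python
import random
from collections import defaultdict

class DemonstratedActionsOnly:
    """Toy agent that only takes (state, action) pairs a human has demonstrated;
    in any other situation it defers back to the human."""

    def __init__(self):
        # state -> set of actions the human has been observed to take in it
        self.demonstrated = defaultdict(set)

    def observe_human(self, state, action):
        self.demonstrated[state].add(action)

    def act(self, state):
        allowed = self.demonstrated[state]
        if not allowed:
            return "defer"  # never act outside demonstrated (state, action) pairs
        return random.choice(sorted(allowed))

# Example: the agent imitates in seen states and defers in unseen ones.
agent = DemonstratedActionsOnly()
agent.observe_human("kitchen", "make_tea")
print(agent.act("kitchen"))   # -> make_tea
print(agent.act("workshop"))  # -> defer
```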
danielfilan on Counterfactuals about Social Media

Eventually my family made a messenger group including my parents, sister, and maternal grandparents, which works OK for this.

danielfilan on Counterfactuals about Social Media
Things I like about Facebook, beyond what you and Elizabeth have mentioned:
Oh, I totally buy that it was relevant in the Galileo affair; indeed, the post does discuss Copernicus. But that was after the controversy had become politicized and so people had incentives to come up with weird forms of anti-epistemology. Absent that, I would not expect such a distinction to come up.