Humans are already capable of self-improvement. This argument would suggest that the smartest human (or the one who was best at self-improvement, if you prefer) should have undergone fast takeoff and become seriously overpowered, but this doesn't seem to have happened.
In a world where the limiting factor is researcher talent, not compute
Compute is definitely a limiting factor currently. Why would that change?

shminux on On the Nature of Programming Languages
Ah, I agree that mindless factorized development can lead to similar patterns, sure. But to examine this conjecture one has to do some honest numerical modeling of the process as applied to... an emergent language? Something else?

sustrik on On the Nature of Programming Languages
AFAIU, your argument is that a super-human intelligence can look at the program as a whole, be aware that both hind legs need to be the same length, and modify the code in both places to satisfy the constraint.
While imaginable, in the real world I don't see this happening except in toy examples (say, an academic exercise of writing a sorting algorithm). Actual software projects are big and modified by many actors, each with little understanding of the whole. Natural selection is performed by an entity that is, from a human point of view, completely mindless. The same goes for genetic algorithms and, possibly, ML.
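As a toy sketch of this coordination problem (hypothetical names and fitness function, not a model of any real system): a hill-climbing search whose mutations touch one "gene" at a time can get stuck when fitness couples two genes together, while an editor that sees the whole genome can move both in lockstep.

```python
import random

# Toy "genome" with two coupled values (think: the two hind legs).
# Fitness rewards long legs but heavily penalizes any mismatch.
def fitness(genome):
    left, right = genome
    return min(left, right) - 10 * abs(left - right)

def mutate_locally(genome, rng):
    # Mindless piecemeal change: tweak ONE gene, with no awareness
    # that the other gene must stay in step with it.
    i = rng.randrange(2)
    new = list(genome)
    new[i] += rng.choice([-1, 1])
    return tuple(new)

def mutate_globally(genome, rng):
    # An editor that sees the whole genome changes both legs
    # together, so the coupling constraint is never violated.
    delta = rng.choice([-1, 1])
    return (genome[0] + delta, genome[1] + delta)

def evolve(mutate, steps=2000, seed=0):
    # Greedy hill climbing: keep a mutation only if it doesn't
    # lower fitness.
    rng = random.Random(seed)
    genome = (5, 5)
    for _ in range(steps):
        candidate = mutate(genome, rng)
        if fitness(candidate) >= fitness(genome):
            genome = candidate
    return genome

print(evolve(mutate_locally))   # stuck at the start: (5, 5)
print(evolve(mutate_globally))  # both legs grow together
```

With this fitness function every single-gene mutation breaks the symmetry and is rejected, so the local search never improves at all, whereas the whole-genome editor ratchets both legs upward. (Real evolution does cross such gaps, e.g. via shared developmental programs; the sketch only illustrates the coordination constraint itself.)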
The point I was trying to make is that in such piecemeal, uninformed development, some patterns may emerge that are, in a way, independent of the type of development process (human-driven, evolution, etc.)

saidachmiz on [deleted]
michaelcohen on Value Learning is only Asymptotically Safe
So the AI only takes action a from state s if it has already seen the human do that? If so, that seems like the root of all the safety guarantees to me.

danielfilan on Counterfactuals about Social Media
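A minimal sketch of the restriction michaelcohen's question above describes — the agent may take action a in state s only if a human was observed doing so in that state (illustrative names, not code from the paper):

```python
# Hypothetical sketch: the agent's action set in each state is
# filtered down to (state, action) pairs a human has demonstrated.
class ImitationRestrictedPolicy:
    def __init__(self):
        self.demonstrated = set()  # observed (state, action) pairs

    def observe_human(self, state, action):
        # Record that the human took `action` in `state`.
        self.demonstrated.add((state, action))

    def allowed_actions(self, state, candidate_actions):
        # Only actions demonstrated in this exact state survive;
        # everything else is off-limits to the agent.
        return [a for a in candidate_actions
                if (state, a) in self.demonstrated]

policy = ImitationRestrictedPolicy()
policy.observe_human("kitchen", "make_tea")
print(policy.allowed_actions("kitchen", ["make_tea", "disassemble_stove"]))
# → ['make_tea']
```

The safety intuition is visible in the filter: the agent can never take a state-action pair outside the human's demonstrated support, whatever its value estimates say.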
Eventually my family made a messenger group including my parents, sister, and maternal grandparents, which works OK for this.danielfilan on Counterfactuals about Social Media
Things I like about Facebook, beyond what you and Elizabeth have mentioned:
Oh, I totally buy that it was relevant in the Galileo affair; indeed, the post does discuss Copernicus. But that was after the controversy had become politicized and so people had incentives to come up with weird forms of anti-epistemology. Absent that, I would not expect such a distinction to come up.

shminux on On the Nature of Programming Languages
Like you, I am a fan of Lem, who is, sadly, underrated in the West. And I am quite sure that not only would we be unable to communicate with alien lifeforms, we would not even recognize them as such. (Well, I do not even believe that we are a lifeform to begin with, but that topic is for another day.)
As for the programming languages, and your gazelle analogy, notice that you fixed the gene position, something that is not likely an issue for a non-human mind. Just restructure the algorithm as needed. As long as the effort is not exponential, who cares. Computer languages are crutches for the feeble human brain. An intelligence that is not hindered by human shortcomings would just create the algorithm and run it without any intermediate language/compiler/debugger needed.

ricraz on Book review: The Sleepwalkers by Arthur Koestler
Hmm, interesting. It doesn't discuss the Galileo affair, which seems like the most important case where the distinction is relevant. Nevertheless, in light of this, "geocentric models with epicycles had always been in the former category" is too strong and I'll amend it accordingly.