Comments

Comment by naasking on Explaining the Twitter Postrat Scene · 2023-07-15T14:02:47.867Z · LW · GW

s/compliment/complement.

Comment by naasking on LLM cognition is probably not human-like · 2023-05-09T12:14:25.915Z · LW · GW
  1. Yes, GPTs would have alien-like cognition.
  2. Whether they can translate is unclear, because the limits of translation between human languages are still unknown.
  3. Yes, they are trained on logs of human thoughts. Each log entry corresponds to a human thought, i.e. there is a bijection. There is thus no formal difference.
  4. Re: predicting encodings of human thought, I'm not sure what is supposed to be compelling about this. GPTs currently would only learn a subset of human cognition, namely the subset that generates human text. So sure, being trained on more types of human cognition might make them follow more types of human cognition more accurately. Therefore...?
  5. Yes, a brain and a Python interpreter do not have a similar internal structure in evaluating Python semantics. So what? This is as interesting as the fact that a mechanical computer is internally different from an electronic computer. What matters is that they both implement basically the same externally observable semantics in interpreting Python.

Suffice it to say that I didn't find anything here particularly compelling.

Comment by naasking on Why I think strong general AI is coming soon · 2022-09-30T12:58:37.027Z · LW · GW

I don't think any of the claims you just listed are actually true. I guess we'll see.

Comment by naasking on Why I think strong general AI is coming soon · 2022-09-30T04:23:55.014Z · LW · GW

I don't see any indication of AGI so it does not really worry me at all.

Nobody saw any indication of the atomic bomb before it was created. In hindsight would it have been rational to worry?

Your claims about the compute and data needed, and the alleged limits, remind me of the fact that Heisenberg thought there was no reason to worry because he had miscalculated the amount of U-235 that would be needed. It seems humans are doomed to keep repeating this mistake and underestimating the severity of catastrophic long tails.

Comment by naasking on Why I think strong general AI is coming soon · 2022-09-29T21:44:49.338Z · LW · GW

In this context for me, an intelligent agent is able to understand common language and act accordingly, e.g. if a question is posed it can provide a truthful answer

Humans regularly fail at such tasks but I suspect you would still consider humans generally intelligent.

In any case, it seems very plausible that whatever decision procedure is behind more general forms of inference will fall to the inexorable march of progress we've seen thus far.

If it does, the effectiveness of our compute could increase exponentially almost overnight: you are basically arguing that our current compute is hobbled by an effectively "weak" associative architecture, but that a very powerful architecture is potentially only one trick away.

The real possibility that we are only one trick away from a potentially terrifying AGI should worry you more.

Comment by naasking on Why I think strong general AI is coming soon · 2022-09-29T14:37:10.738Z · LW · GW

Chess playing is similar story, we thought that you have to be intelligent, but we found a heuristic to do that really well.

You keep distinguishing "intelligence" from "heuristics", but no one to my knowledge has demonstrated that human intelligence is not itself some set of heuristics. Heuristics are exactly what you'd expect from evolution after all.

So your argument then reduces to a god of the gaps, where we keep discovering some heuristics for an ability that we previously ascribed to intelligence, and the set of capabilities left to "real intelligence" keeps shrinking. Will we eventually be left with the null set, and conclude that humans are not intelligent either? What's your actual criterion for intelligence that would prevent this outcome?

Comment by naasking on Why has nuclear power been a flop? · 2021-04-20T18:19:49.537Z · LW · GW

From prior research, I understood the main driver of nuclear power plant cost to be recurring site-specific design adjustments leading to chronic cost and schedule overruns. This means there is no standard plant design or construction: each installation is unique, with its own quirks, its own parts and its own customizations, so nothing is fungible and training is barely transferable.

This was the main economic promise behind small modular reactors: small, standardized reactor modules that can be built at a factory, shipped to a site using regular transportation, and installed with little on-site assembly, and which you can "daisy chain" to get whatever power output you need. This strikes right at the heart of some of nuclear's biggest costs.

Of course, it might just be a little too late, as renewables are now cheaper than nuclear in almost every sense; they just need more investment in infrastructure and grid storage.

Comment by naasking on Zombies Redacted · 2016-07-27T13:08:49.003Z · LW · GW

I'm not sure in what way it's unjustified for me to have an intuition that qualia are different from physical structures

It's unjustified in the same way that vitalism was an unjustified explanation of life: it's purely a product of our ignorance. Our perception of subjective experience/first-hand knowledge is no more proof of its accuracy than our perception that water breaks pencils.

Intuition pumps supporting the accuracy of said perception either beg the question or multiply entities unnecessarily (as detailed below).

Nothing you said indicates that p-zombies are inconceivable or even impossible.

I disagree. You've said that epiphenomenalists hold that having first-hand knowledge is not causally related to our conception and discussion of first-hand knowledge. This premise has no firm justification.

Denying it yields my original argument of inconceivability via the p-zombie world. Accepting it requires multiplying entities unnecessarily, for if such knowledge is not causally efficacious, then it serves no more purpose than the vital force in vitalism, and it will inevitably be discarded given a proper scientific account of consciousness, somewhat like this one.

I previously asked for any example of knowledge that was not a permutation of properties previously observed. If you can provide even one such example, it would undermine my position.

Comment by naasking on Zombies Redacted · 2016-07-16T16:31:43.531Z · LW · GW

Epiphenomenalists, like physicalists, believe that sensory data causes the neurophysical responses in the brain which we identify with knowledge. They disagree with physicalists because they say that our subjective qualia are epiphenomenal shadows of those neurophysical responses, rather than being identical to them. There is no real world example that would prove or disprove this theory because it is a philosophical dispute. One of the main arguments for it is, well, the zombie argument.

Which seems to suggest that epiphenomenalism either begs the question, or multiplies entities unnecessarily by accepting unjustified intuitions.

So my original argument disproving p-zombies would seem to be on just as solid footing as the original p-zombie argument itself, modulo our disagreements over wording.

Comment by naasking on Zombies Redacted · 2016-07-06T15:19:43.930Z · LW · GW

Epiphenomenalists do not deny that we have first-hand experience of subjectivity; they deny that those experiences are causally responsible for our statements about consciousness.

Since this is the crux of the matter, I won't bother debating the semantics of most of the other disagreements in the interest of time.

As for whether subjectivity is causally efficacious, all knowledge would seem to derive from some set of observations. Even possibly fictitious concepts, like unicorns and abstract mathematics, are generalizations or permutations of concepts that were first observed.

Do you have even a single example of a concept that did not arise in this manner? Generalizations merely remove constraints on a concept, so they aren't a counterexample; they're just another form of permutation. If no such example exists, why should I accept the claim that knowledge of subjectivity can arise without subjectivity?

Comment by naasking on Zombies Redacted · 2016-07-06T00:39:42.763Z · LW · GW

I would hope not. 3 is entirely conceivable if we grant 2, so 4 is unsupported

It's not, and I'm surprised you find this contentious. 3 doesn't follow from 2; it follows from a contradiction between 1 and 2.

1 states that consciousness has no effect upon matter, and yet it's clear from observation that the concept of subjectivity only arises if consciousness can affect matter, i.e. we only have knowledge of subjectivity because we observe it first-hand. P-zombies do not have first-hand knowledge of subjectivity, as specified in 2.

If there were another way to infer subjectivity without first-hand knowledge, then that inference would explain how physicalism entails consciousness, and epiphenomenalism could be discarded using Occam's razor.

Of course they would - our considerations of other people's feelings and consciousness changes our behavior all the time. And if you knew every detail about the brain, you could give an atomic-level causal account as to why and how.

Except the zombie world wouldn't have feelings and consciousness, so your rebuttal doesn't apply.

The concept of a rich inner life influences decision processes.

That's an assertion, not an argument. Basically, you and epiphenomenalists are merely asserting a) that p-zombies would somehow derive the concept of subjectivity without having knowledge of subjectivity, and b) that this subjectivity would actually be meaningful to p-zombies in a way that influences their decisions, despite their having no first-hand knowledge of any such thing or its relevance to their lives.

So yes, EY is saying it's implausible because it seems to multiply entities unnecessarily; I'm taking it one step further and flat-out saying this position either multiplies entities unnecessarily or is inconsistent.

Comment by naasking on Zombies Redacted · 2016-07-04T19:31:54.065Z · LW · GW

This was longer than it needed to be

Indeed. The condensed argument against p-zombies:

  1. Assume consciousness has no effect upon matter, and is therefore not intrinsic to our behaviour.
  2. P-zombies that perfectly mimic our behaviour but have no conscious/subjective experience are then conceivable.
  3. Consider then a parallel Earth that was populated only by p-zombies from its inception. Would this Earth also develop philosophers that argue over consciousness/subjective experience in precisely the same ways we have, despite the fact that none of them could possibly have any knowledge of such a thing?
  4. This p-zombie world is inconceivable.
  5. Thus, p-zombies are not observationally indistinguishable from real people with consciousness.
  6. Thus, p-zombies are inconceivable.

In the epiphenomenalist view, for whatever evolutionary reason, we developed to have discussions and beliefs in rich inner lives.

Except such discussions would have no motivational impact. A "rich inner life" has no relation to any fact in a p-zombie's brain, so in what way could this term influence their decision process? What specific sort of discussions of "inner life" do you expect in the p-zombie world? And if it has no conceivable impact, how could we have evolved this behaviour?

Comment by naasking on A case study in fooling oneself · 2013-10-28T04:50:00.813Z · LW · GW

This is an interesting discussion, but this claim struck me as odd:

If something exists, it can be counted (or given a cardinality, if it is infinite).

This seems like an open philosophical question. Clearly you are a finitist of some sort, but as far as I know it hasn't been empirically verified that real numbers don't exist. Certainly continuous functions are widely employed in physics, but whether all of physics can be cast into a finitist framework was still an open question the last time I checked.

So your assertion above doesn't seem firmly justified, as uncountable entities could exist. I have no informed opinion as to whether worlds must be countable or can be uncountable. It certainly seems like they ought to be countable, since the total number of particle configurations in the universe at any given moment in time seems finite, but that's just an uneducated guess.

Comment by naasking on [link] Scott Aaronson on free will · 2013-06-16T15:24:47.209Z · LW · GW

"Copy" implies having more than 1 object : The Copy and the Original at the same point of time, but not space.

Why privilege space over time? Time is just another dimension, after all. buybuydandavis's definition of "copy" seems to avoid privileging a particular dimension, and so seems more general.

Comment by naasking on Why Many-Worlds Is Not The Rationally Favored Interpretation · 2013-05-13T20:02:36.140Z · LW · GW

It is not scientific induction, since you can't measure elegance quantitatively.

You can, formally, via Kolmogorov complexity.
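
To make that concrete: Kolmogorov complexity itself is uncomputable, but compressed length gives a computable upper bound that is often used as a rough proxy for it. A minimal sketch of that proxy in Python (my illustration, using zlib as the stand-in compressor; nothing here is specific to the thread):

```python
import os
import zlib

def complexity_proxy(data: bytes) -> int:
    """Computable upper bound on Kolmogorov complexity: the length of a
    zlib-compressed encoding. K itself is uncomputable, so any real
    compressor can only bound it from above."""
    return len(zlib.compress(data, 9))

# A highly regular description compresses far better than an equally long
# random one, matching the intuition that it is more "elegant".
regular = b"psi(t+1) = U * psi(t); " * 50
noise = os.urandom(len(regular))
print(complexity_proxy(regular))  # small
print(complexity_proxy(noise))    # close to len(noise)
```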

Comment by naasking on Many Worlds, One Best Guess · 2013-05-13T19:00:05.877Z · LW · GW

there is another argument speaking for many-worlds (indeed, even for all possible worlds - which raises new interesting questions of what is possible of course - certainly not everything that is imaginable): that to specify one universe with many random events requires lots of information, while if everything exists the information content is zero - which fits nicely with ex nihilo nihil fit

Now THAT's an interesting argument for MWI. It's not a final nail in the coffin for de Broglie-Bohm, but the naturalness of this property is certainly compelling.
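
In Kolmogorov-complexity terms (my gloss on the quoted argument, not the commenter's own formalism), the asymmetry is roughly:

```latex
% A program enumerating every possible n-bit history needs only O(log n) bits
% (enough to encode n), while a typical single history of n independent random
% events is incompressible:
K(\{0,1\}^n) = O(\log n), \qquad K(x) \ge n - c \ \text{for most } x \in \{0,1\}^n.
```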

Comment by naasking on An Intuitive Explanation of Solomonoff Induction · 2013-03-08T15:15:15.304Z · LW · GW

maybe there will be some good discrete model but so far the Planck length is not a straightforward discrete unit, not like cell in game of life.

't Hooft has been quite successful in defining QM in terms of discrete cellular automata, taking "successful" to mean that he has reproduced an impressive amount of quantum theory from such a humble foundation.
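
For readers who haven't seen one, a cellular automaton really is that humble a foundation: a fixed local rule applied to a row (or grid) of cells. A minimal elementary-CA sketch in Python (rule 110, purely illustrative; this is not 't Hooft's construction):

```python
def step(cells: list[int], rule: int = 110) -> list[int]:
    """One synchronous update: each cell's next state depends only on itself
    and its two neighbours (wrapping at the edges), looked up in the 8-bit
    rule table encoded by `rule`."""
    n = len(cells)
    return [
        (rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 20 + [1] + [0] * 20
for _ in range(10):
    print("".join(".#"[c] for c in row))
    row = step(row)
```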

More interesting still is why reals have been so useful (and not just reals, but also complex numbers, vectors, tensors, etc. which you can build out of reals but which are algebraic objects in their own right).

This is answered quite trivially by a simple analogy: second-order logic is more expressive than first-order logic, allowing us to state propositions more succinctly. Likewise, reals and larger numeric abstractions allow shortcuts in modelling that we couldn't take with less powerful abstractions.
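
A standard illustration of that expressiveness gap (my example, not one from the thread): induction is a single axiom in second-order arithmetic, but only an infinite axiom schema in first-order Peano arithmetic.

```latex
% Second-order induction: one axiom quantifying over all properties P.
\forall P \,\bigl( P(0) \land \forall n\,(P(n) \to P(n+1)) \to \forall n\, P(n) \bigr)
% First-order PA can only approximate this with a separate instance for every
% first-order formula, and still admits nonstandard models.
```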

Comment by naasking on An Intuitive Explanation of Solomonoff Induction · 2013-03-08T15:03:30.554Z · LW · GW

(simplicity of the map) alone is sufficient to judge a theory- you also need to take into account the theory's parsimony (simplicity of the territory).

Solomonoff Induction gauges a theory's parsimony via Kolmogorov complexity, which is a formalization of Occam's razor. It's not a naive measurement of simplicity.
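
Concretely, in one standard formulation (not specific to anything in this thread), the Solomonoff prior of an observation string x sums over every program p that makes a universal prefix machine U print something beginning with x:

```latex
M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}
% Each consistent program contributes 2^(-length), so the Kolmogorov-style
% penalty for extra axioms (longer programs) is built into the weighting.
```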

Comment by naasking on An Intuitive Explanation of Solomonoff Induction · 2013-03-08T15:01:13.666Z · LW · GW

MWI is not even considered because MWI does not output a string that begins with the observed data, i.e. MWI will never be found when doing Solomonoff induction.

The same observations that produced Copenhagen and de Broglie-Bohm produced MWI. You acknowledge as much when you state that Copenhagen extends MWI with more axioms. The observation string for MWI is then identical to Copenhagen's, and since Solomonoff induction weights shorter programs more heavily, there is no reason to prefer Copenhagen.