Posts

Help to find a blog I don't remember the name of 2023-11-23T22:49:30.599Z
how do short-timeliners reason about the differences between brain and AI? 2023-09-27T08:13:58.659Z
How much of a concern are open-source LLMs in the short, medium and long terms? 2023-05-10T09:14:47.578Z
What do "attractor dynamics" refer to in the context of social structures? 2023-03-20T01:39:57.770Z

Comments

Comment by JavierCC (javier-caeiro-canabal) on How to deal with the sense of demotivation that comes from thinking about determinism? · 2024-02-17T20:32:31.269Z · LW · GW

I don't have any tips for this, but as a note, this and the idea that the self is not 'real' (that there's no permanent Cartesian homunculus 'experiencing') caused me a lot of dread at the age of 13-14. Sometimes I stop to think about it and I'm amazed at how little it bothers me nowadays.

Comment by JavierCC (javier-caeiro-canabal) on What experiment settles the Gary Marcus vs Geoffrey Hinton debate? · 2024-02-15T23:55:05.135Z · LW · GW

There are fake/fictional presidents in the training data.

Comment by JavierCC (javier-caeiro-canabal) on Labs should be explicit about why they are building AGI · 2023-10-18T20:15:02.806Z · LW · GW

Which labs are those? OpenAI, Anthropic, maybe DeepMind? What else?

Comment by JavierCC (javier-caeiro-canabal) on how do short-timeliners reason about the differences between brain and AI? · 2023-09-29T09:37:58.345Z · LW · GW

The point you brought up seemed to rest largely on Hinton's claims, so it seems that his opinions on timelines and AI progress should be quite important.

Do you have any recent source on his claims about AI progress?

Comment by JavierCC (javier-caeiro-canabal) on how do short-timeliners reason about the differences between brain and AI? · 2023-09-28T22:51:13.850Z · LW · GW

So, in your model, how much of the progress toward AGI can be made just by adding more compute + more data + working memory + algorithms that 'just' keep up with the scaling?

Specifically, do you think that self-reflective thought already emerges from adding those?

Comment by JavierCC (javier-caeiro-canabal) on how do short-timeliners reason about the differences between brain and AI? · 2023-09-27T09:27:06.857Z · LW · GW

Can you cite any source that provides evidence for that conclusion?

Evolution optimised the structures of the brain themselves across generations; training is equivalent only to the development of the individual. The structures of the brain seem not to be determined by development alone, and that's one reason why I said "apparent complexity". From Yudkowsky:

  • "Metacognitive" is the optimization that builds the brain - in the case of a human, natural selection; in the case of an AI, either human programmers or, after some point, the AI itself.
Comment by JavierCC (javier-caeiro-canabal) on how do short-timeliners reason about the differences between brain and AI? · 2023-09-27T09:21:27.820Z · LW · GW

Yeah, but I would need more specificity than just an example of a brain with a different design.

Comment by JavierCC (javier-caeiro-canabal) on Primitive Perspectives and Sleeping Beauty · 2023-09-11T23:22:58.136Z · LW · GW

But you've generalised your position on perspective beyond conscious beings. My understanding is that perspective is not reducible to non-perspective facts in the theory because the perspective is contingent, but nothing there explicitly refers to consciousness.

You can adopt, mutatis mutandis, a different perspective in the description of a problem and arrive at the right conclusion. There's no appeal to a phenomenal perspective there.

The epistemic limitations of minds that map to the idea of a perspective-centric epistemology and metaphysics come from facts about brains.

Comment by JavierCC (javier-caeiro-canabal) on Primitive Perspectives and Sleeping Beauty · 2023-09-10T12:43:18.175Z · LW · GW

Your claims about the limitations on knowing about consciousness and free will, based on the primitivity of perspective, seem pretty arbitrary to me.

The perspective that we are taking is a primitive, but I don't understand why you connect that with consciousness, given that the perspective is completely independent of any claims about its being conscious. I don't see how to link the two non-arbitrarily; the mechanisms of consciousness exist regardless of the perspective taken. The epistemic limitations come from facts about brains, not from an underlying notion of perspective.

And in the case of free will, there's no reason why we cannot have a third-person account of what we mean by free will. There's no problematic loop.

Comment by JavierCC (javier-caeiro-canabal) on AI#28: Watching and Waiting · 2023-09-07T23:40:18.671Z · LW · GW

People turn these things into agents easily already, and they already contain goal-driven subagent processes.

Sorry, what is this referring to exactly?

Comment by JavierCC (javier-caeiro-canabal) on AI #26: Fine Tuning Time · 2023-08-24T23:06:39.860Z · LW · GW

The video in the link is not available.

Comment by JavierCC (javier-caeiro-canabal) on Seth Explains Consciousness · 2023-08-23T12:24:08.281Z · LW · GW

I don't like the word "illusionism" here because people just get caught up on the obvious semantic 'contradiction' and always complain about it.

The arguments based on perceptual illusions are generally meant to show that our perception is highly constructed by the brain; it's not something 'simple'. The point of illusionism is just to say that we are confused about what the phenomenological properties of qualia really are qua qualia, because of wrong ideas that come from introspection.

Comment by JavierCC (javier-caeiro-canabal) on Does one have reason to believe the simulation hypothesis is probably true? · 2023-08-23T00:48:53.729Z · LW · GW

I've been checking Joscha Bach's blog these days, and I found this post on the topic, which I think goes into interesting depth on the question:

Do we live in a simulation?

Comment by JavierCC (javier-caeiro-canabal) on AI #24: Week of the Podcast · 2023-08-11T13:40:09.462Z · LW · GW

I'm curious: who is the man who says it's fine for AIs to replace humanity because there will be more interesting forms of consciousness?

Comment by JavierCC (javier-caeiro-canabal) on The Control Problem: Unsolved or Unsolvable? · 2023-08-02T20:26:54.607Z · LW · GW

To what extent are humans by themselves evidence of GI alignment, though? A human can acquire values that disagree with those of the humans who taught them their values just by having new experiences/knowledge, to the point of desiring things completely opposite to their peers (like human progress vs. human extinction). Doesn't that mean that humans are not robustly aligned?

Comment by JavierCC (javier-caeiro-canabal) on My Weirdest Experience · 2023-07-12T01:36:42.981Z · LW · GW

Who knows, maybe it was your right hemisphere.

Shout-outs to them, if so. It's almost definitely the first time someone has directly referred to them; that's got to be very exciting.

Even if you are not literally their right hemisphere (not that you would know, of course), if you are there and have access to high-level knowledge of the world: hi, good job all these years!

Comment by JavierCC (javier-caeiro-canabal) on Why it's so hard to talk about Consciousness · 2023-07-05T09:49:57.234Z · LW · GW

Do you think you have experienced a dissociative crisis at any point in your life? I mean the sensations of derealisation/depersonalisation, not other symptoms, and it doesn't need to have been 'strong' at all.

I ask because those sensations are not in any obvious way about processing sensory data, and because of the feeling of detachment from reality that comes with them. So I was curious if you could identify anything like that. 

Comment by JavierCC (javier-caeiro-canabal) on What money-pumps exist, if any, for deontologists? · 2023-06-29T11:14:13.675Z · LW · GW
Comment by JavierCC (javier-caeiro-canabal) on Freedom under Naturalistic Dualism · 2023-06-28T10:34:57.856Z · LW · GW

But conscious states are strongly determined by brain states, as far as we can check. The argument people use against fully identifying the two comes down to deriving the metaphysical nature of qualia from their phenomenological properties. It seems to me that it is epistemically problematic to argue against objective claims with intuitions about something that we cannot even contrast with anything. We just have our intuitions about phenomenology, with no conceivable way to track the processes behind the phenomenon from those intuitions. This is why people imagine qualia to be individual entities and then think they can be removed ceteris paribus, or that they can't be tracked by a Laplace demon.

Consciousness doesn't need to be fundamentally distinct from non-consciousness. Rocks can't monitor their own states at all, but computers can; that doesn't mean a fundamentally new property was added when you turned a rock into a computer. If we stop trying to derive metaphysics from phenomenology, the same account can be applied to consciousness. Then whatever processes correspond to what we feel consciousness to be will be trackable by a Laplace demon.

Comment by JavierCC (javier-caeiro-canabal) on Decision Theory with the Magic Parts Highlighted · 2023-05-22T13:57:33.646Z · LW · GW

Isn't this the distinction between symbolic and non-symbolic AI? 

Comment by JavierCC (javier-caeiro-canabal) on The Stanley Parable: Making philosophy fun · 2023-05-22T13:22:54.742Z · LW · GW

I intellectually understand that (libertarian) free will is an illusion that emerges from the subjective experience of thinking about counterfactually having done something else at a specific moment, but I still often catch myself feeling bad about past mistakes, imagining how I could have done something else as if it had been an actual literal possibility at that time.

I'm sure it's not uncommon at all, but I feel like it's not the best way of framing those memories from the point of view of improving oneself.

It seems that imagining oneself as succeeding is often used as a 'substitute' for actually succeeding (even if it doesn't feel nearly as good), which might not help to motivate oneself.

Comment by JavierCC (javier-caeiro-canabal) on Uploads are Impossible · 2023-05-22T12:28:41.417Z · LW · GW

I think this is sound. I thought you were making stronger claims about cognitive processes that might be embodied.

Do you know any interesting literature on the topic?

Comment by JavierCC (javier-caeiro-canabal) on GPT as an “Intelligence Forklift.” · 2023-05-22T12:18:44.735Z · LW · GW

It's still typically acknowledged that the evolution of intelligence from more primitive apes to humans was mostly an increase in computational power (proportionally bigger brains) with little innovation in structure. So there seems to be merit to the idea.

Larger animals, all else being equal, need more neurons than we do to perform the same basic functions because of their larger bodies.

Comment by JavierCC (javier-caeiro-canabal) on Uploads are Impossible · 2023-05-20T21:05:17.795Z · LW · GW

I still don't get why, from your perspective, "it seems weird to me that you ask this".

It is true that at that time I'd missed some of the content, but I know that; I even mentioned situs inversus as an example.

But I would still need an actual example of what kind of computations you think would need to be performed in these weirdly-placed organs that are not possible under the common idea that the brain maps the positions of the organs.

Comment by JavierCC (javier-caeiro-canabal) on Uploads are Impossible · 2023-05-19T21:46:33.821Z · LW · GW

Yeah, but you didn't tell me how different the way those organs are wired is compared to the typical way. Even if the relative position is different, I would need specific examples to understand why the brain's mapping of those organs wouldn't work here.

Comment by JavierCC (javier-caeiro-canabal) on Uploads are Impossible · 2023-05-19T21:13:23.845Z · LW · GW

Mmm, maybe? Do you have an actual example of this phenomenon, or something like it? It seems weird to me that you ask this. How would this work?

Even if they are wired differently, cognition might still be solely in the brain, and the way the brain models the body would still be based on the way those nerves connect to it.

Comment by JavierCC (javier-caeiro-canabal) on Uploads are Impossible · 2023-05-16T01:36:00.400Z · LW · GW

There's not much point in having mentioned it, really, but I meant the case in which the relative position of the organs could somehow affect the way they are wired. Yeah, probably not conceivable in real life.

Something like situs inversus. 

Comment by JavierCC (javier-caeiro-canabal) on Uploads are Impossible · 2023-05-15T21:05:04.022Z · LW · GW

Sorry, at that moment I didn't read the entire comment; I don't know how that happened. I probably got distracted.

I said that the body is "simulated/emulated" instead of just "simulated" to account for the possibility of having to emulate the literal body of the individual, instead of just simulating a new body (which is confusing and might be based on a misunderstanding of the difference between the two terms, but that was my intention).

Regardless, in that quotation I was assuming that the brain was the source of cognition by itself. If that's so, the brain might even adapt to not having a proper human body (it might be problematic, but the brain's neuroplasticity is pretty flexible, so it might be possible).

Even then, if the mapping of the positions of the organs still needed to be a certain way, we could account for that by looking at the nerves that connect to the brain (if cognition is exclusively in the brain).

Comment by JavierCC (javier-caeiro-canabal) on Uploads are Impossible · 2023-05-15T18:07:18.637Z · LW · GW

As inputs to the digital nerves that link the brain with the different parts of the body. It might not be necessary to simulate a body that can actually move.

Comment by JavierCC (javier-caeiro-canabal) on Uploads are Impossible · 2023-05-15T09:00:05.157Z · LW · GW

Even if you maintain that the brain is the "sole" source of cognition, the brain is still an organ and is heavily affected by the operation of other organs.

Sure, but if all of the cognition is within the brain, the rest can conceivably be simulated as inputs to the brain. We might also have to simulate an environment for it.

Yours is ultimately a thesis about embodied cognition, as I understand it. If cognition is strongly embodied, then a functional human brain will need a very accurately simulated/emulated human body. If cognition is weakly embodied, and the digitalised brain's neuroplasticity is flexible enough, we can get away with not simulating an actual human body.

I don't think the debate is settled.

Comment by JavierCC (javier-caeiro-canabal) on Reality and reality-boxes · 2023-05-13T23:44:55.666Z · LW · GW

OP is primarily describing different things that people mean by "existing", not prescribing them. 

Comment by JavierCC (javier-caeiro-canabal) on The way AGI wins could look very stupid · 2023-05-13T22:45:06.449Z · LW · GW

Alchemists still performed experiments on chemical reactions, discovered and described new ones, and practiced separating substances, developing tools and methods for that purpose that were later used in chemistry. It's not as if it was an inherent waste of time; it was a necessary stepping-stone to chemistry, which developed from it more gradually than is typically acknowledged.

Comment by JavierCC (javier-caeiro-canabal) on AI #11: In Search of a Moat · 2023-05-11T20:56:13.438Z · LW · GW

Judea Pearl is also a Turing Award winner, and for contributions to AI.

Comment by JavierCC (javier-caeiro-canabal) on seank's Shortform · 2023-05-11T08:54:39.059Z · LW · GW

If it's fine for me to enter the discussion, it seems to me that:

A very effective narrow AI is an AI that can solve certain closed-ended problems very effectively, but can't generalise.

Since agents are necessarily limited in the number of factors they can account for in their calculations, open-ended problems are fundamentally closed-ended problems with influxes of mixed, more-or-less undetermined data that affect which solutions are viable (so we can't easily compute how that data will affect the space of possible actions, at least initially). But there are open-ended problems with so many possible factors to account for (like 'solving the economy and increasing growth') that the space of possible actions a general system (like a human) can conceivably take to solve one of those problems effectively IS, at the very least, the space of all possible actions that a narrow AI needs to consider to solve the problem as effectively as a human would.

At that point, a "narrow AI that can solve an open-ended problem" is at least as general as an average human. If the number of possible actions it can take increases, then it's even more general than the average human.

Kinds and species are fundamentally the same thing.

Comment by JavierCC (javier-caeiro-canabal) on Yoshua Bengio argues for tool-AI and to ban "executive-AI" · 2023-05-10T08:28:31.256Z · LW · GW

There's not much context to this claim made by Yoshua Bengio, but while searching Google News I found a Spanish online newspaper article* in which he claims:

We need to create machines that assist us, not independent beings. That would not be a good idea; it would lead us down a very dangerous path.

*https://www.larazon.es/sociedad/20221121/5jbb65kocvgkto5hssftdqe7uy.html

Comment by JavierCC (javier-caeiro-canabal) on Quadratic Reciprocity's Shortform · 2023-05-08T13:34:39.170Z · LW · GW

Do you think it's worth doing if it will cause them distress? I find that hard to decide.

Comment by JavierCC (javier-caeiro-canabal) on TED talk by Eliezer Yudkowsky: Unleashing the Power of Artificial Intelligence · 2023-05-07T18:51:29.765Z · LW · GW

Can't it be reported so they'll have to remove it? I assume that won't happen because they probably wouldn't even upload it at this point, but given YouTube's copyright rules... I don't know.

Comment by JavierCC (javier-caeiro-canabal) on What is it like to be a compatibilist? · 2023-05-07T13:34:16.847Z · LW · GW

Why do you think LFW is real? The only naturalistic frameworks I've seen that support LFW are ones like Penrose's Orch-OR, which postulate that 'decisions' are quantum (any process caused by the collapse of quantum states in the brain). But it seems unlikely that the brain behaves as a coherent quantum state. If the brain is classical, decisions are macroscopic and they are determined, even in Copenhagen.

And in this sense, what you have is some inherent randomness within the decision-making algorithms of the brain; there's no special capability of the self to 'freely' choose while not being determined by its circumstances, just a truly random factor in the decision-making process.

Comment by JavierCC (javier-caeiro-canabal) on What is it like to be a compatibilist? · 2023-05-05T18:56:12.867Z · LW · GW

I expected most compatibilists to hold that we do in some important sense, though of course not the libertarian one, influence the future via our choices. And I was looking to better understand why the sense in which we can influence the future is strong enough to be a good match for the concept 'free will', while the sense in which we can influence the past is presumably non-existent or too weak to worry about

What is missing here is a definition of 'people' to determine how we are effective causes of anything.

When you adopt a compatibilist view, you are already implicitly accepting a deflationary view of free will. There's no interesting sense in which people cause things to happen 'fundamentally' (non-arbitrarily; it's a matter of setting boundaries); the idea of compatibilism is just to lay down a foundation for moral responsibility. They are talking past each other, in a way. It becomes a discussion about semantics.

The different deflationary conceptions of free will are mostly just trying to repurpose the expression 'free will' to fit it for the needs of our society and our 'naive' understanding of people's behavior.

Sure, our predispositions bias the distribution of possible actions that we're gonna take such that, counterfactually, if we had different predispositions, we would have acted differently. That's all there is to it.

A different question is what the mechanistic explanation of choice-making in our brains is, but compatibilism is largely agnostic about that.

Comment by JavierCC (javier-caeiro-canabal) on We don’t need AGI for an amazing future · 2023-05-05T17:55:29.784Z · LW · GW

Good list. Another one that caught my attention, which I saw in the EU AI Act, is AIs specialised in subliminal messaging. People's choices can be somewhat conditioned for or against things by feeding them sensory data even if it's not consciously perceptible; it can also affect their emotional states more broadly.

I don't know how effective this stuff is in real life, but I know that it at least works.

Anything that tries to classify humans into risk groups based on, well, anything.

A particular example of that one is systems of social scoring, which are surely going to be used by authoritarian regimes. You can screw people up in so many ways when social control is centralised with AI systems. It's great for punishing people for not being chauvinists.

Comment by javier-caeiro-canabal on [deleted post] 2023-05-05T10:00:07.980Z

What would be a reasonable standard of action, in your view? Genuinely asking.

Comment by JavierCC (javier-caeiro-canabal) on Robin Hanson and I talk about AI risk · 2023-05-05T00:55:34.858Z · LW · GW

Wouldn't it be better to make these replies in the comment section of the video?

Comment by JavierCC (javier-caeiro-canabal) on We don’t need AGI for an amazing future · 2023-05-04T20:44:45.340Z · LW · GW

A lot of the current narrow AI applications need bannin' anyhow

Which? I wonder.

Comment by JavierCC (javier-caeiro-canabal) on Mental Models Of People Can Be People · 2023-05-02T06:38:56.760Z · LW · GW

What about dissociative identities? What is their ontology compared to the ego of a non-dissociated individual?

Since they apparently can have:

changes in behavior, attitudes, and memories. These personalities may have unique names, ages, genders, voices, and mannerisms, and may have different likes and dislikes, strengths and weaknesses, and ways of interacting with others.

They don't seem to be fundamentally different from our non-dissociated egos.

Comment by JavierCC (javier-caeiro-canabal) on [Linkpost] Sam Altman's 2015 Blog Posts Machine Intelligence Parts 1 & 2 · 2023-04-28T18:03:36.703Z · LW · GW

He said that they weren't training a GPT-5 and that they prefer to focus on adapting smaller AIs to society (I suppose they might still be researching AGI regardless, just not training new models).

I thought that it might have been to slow down the race.

Comment by JavierCC (javier-caeiro-canabal) on AI chatbots don't know why they did it · 2023-04-27T07:04:56.470Z · LW · GW

What is the reason ChatGPT has not been given any memory? Security/privacy issues? Performance?

I assume it has to do with performance, or limitations on data transfer in particular? Probably both?

Comment by JavierCC (javier-caeiro-canabal) on Philosophy by Paul Graham Link · 2023-04-27T06:38:56.823Z · LW · GW

From the positivist perspective, causality isn't something that exists because it's about counterfactual reality. Positivism is a philosophy that held science back a lot.

Could you provide actual examples, please? I'm asking genuinely, not rhetorically, but it seems to me that positivism mostly just affected edge cases, and not that strongly. People say that behaviorism was a mistake that came from positivism, but wasn't it based mostly on experimental limitations regardless?

And what do we mean by "causality is something that exists"? And how much can our heuristics change with one account of causality compared to another?

Comment by JavierCC (javier-caeiro-canabal) on Mental Models Of People Can Be People · 2023-04-26T07:53:21.441Z · LW · GW

Yeah, I find that plausible, although that doesn't have much to do with the question of how much they suffer (as far as I can say). Even if consciousness is cognitively just a form of awareness of your own perception of things (as in AST or HOT theories), you at least still need a bound locus to experience, and if the locus is the same as 'yours', then whatever the simulacra experience will be registered within your own experiences.

I think the main problem here is simulating beings that are suffering considerably; if you don't suffer too much while simulating them (which is how most people experience the simulations, except maybe those who are hyper-empathic or people with really detailed tulpas/personas), then it's not a problem.

It might be a problem if, for example, you consciously create a persona that you then want to delete, and they are aware of it and feel bad about it (or, more generally, if you know that you'll create a persona that will suffer because of things like disliking certain aspects of the world). But you should notice those feelings just as you notice the feelings of any of the 'conflicting agents' you might have in your mind.

Comment by JavierCC (javier-caeiro-canabal) on Mental Models Of People Can Be People · 2023-04-26T06:52:45.920Z · LW · GW

I realised that the level of suffering and the fidelity of the simulation don't need to be correlated, but I didn't make an explicit distinction.

Most think that you need dedicated cognitive structures to generate a subjective I; if that's so, then there's no room for conscious simulacra that feel things the simulator doesn't.

Comment by JavierCC (javier-caeiro-canabal) on Mental Models Of People Can Be People · 2023-04-26T06:30:13.464Z · LW · GW

I'd understood that already, but I would need a reason to find that believable, because it seems really unlikely. You are not directly simulating the cognitive structures of the being (that's impossible); the only way you simulate someone is by repurposing your own cognitive structures to simulate them, and then the intensity of their emotions is the same as what you register.

How simple do you think the emergence of subjective awareness is? Most people will say that you need dedicated cognitive structures to generate the subjective I; even in theories that are mostly just something like strange loops or higher-level awareness, like HOT or AST, you at least still need a bound locus to experience. If that's so, then there's no room for conscious simulacra that feel things that the simulator doesn't.

This is from a reply that I gave to Vladimir:

I think the main problem here is simulating beings that are suffering considerably; if you don't suffer too much while simulating them (which is how most people experience the simulations, except maybe those who are hyper-empathic or people with really detailed tulpas/personas), then it's not a problem.

It might be a problem if, for example, you consciously create a persona that you then want to delete, and they are aware of it and feel bad about it (or, more generally, if you know that you'll create a persona that will suffer because of things like disliking certain aspects of the world). But you should notice those feelings just as you notice the feelings of any of the 'conflicting agents' you might have in your mind.