Posts

[Linkpost] Silver Bulletin: For most people, politics is about fitting in 2024-05-01T18:12:43.238Z
KAN: Kolmogorov-Arnold Networks 2024-05-01T16:50:58.124Z
Claude 3 Opus can operate as a Turing machine 2024-04-17T08:41:57.209Z
Leave No Context Behind - A Comment 2024-04-11T22:50:26.100Z
aintelope project update 2024-02-08T18:32:00.000Z
[Linkpost] Contra four-wheeled suitcases, sort of 2023-09-12T20:36:02.412Z
Trying AgentGPT, an AutoGPT variant 2023-04-13T10:13:41.316Z
What is good Cyber Security Advice? 2022-10-24T23:27:58.428Z
[Fun][Link] Alignment SMBC Comic 2022-09-09T21:38:54.400Z
Hamburg, Germany – ACX Meetups Everywhere 2022 2022-08-20T19:18:48.685Z
Brain-like AGI project "aintelope" 2022-08-14T16:33:39.571Z
Robin Hanson asks "Why Not Wait On AI Risk?" 2022-06-26T23:32:19.436Z
[Link] Childcare : what the science says 2022-06-24T21:45:23.406Z
[Link] Adversarially trained neural representations may already be as robust as corresponding biological neural representations 2022-06-24T20:51:27.924Z
What are all the AI Alignment and AI Safety Communication Hubs? 2022-06-15T16:16:03.241Z
Silly Online Rules 2022-06-08T20:40:41.076Z
LessWrong Astralcodex Ten Meetup June 2022 2022-05-29T22:43:09.431Z
[Linkpost] A conceptual framework for consciousness 2022-05-02T01:05:36.129Z
ACX Spring 2022 Meetup Hamburg 2022-04-25T21:44:11.141Z
Does non-access to outputs prevent recursive self-improvement? 2022-04-10T18:37:54.332Z
Unfinished Projects Thread 2022-04-02T17:12:52.539Z
[Quote] Why does i show up in Quantum Mechanics and other Beautiful Math Mysteries 2022-03-16T11:58:30.526Z
Estimating Brain-Equivalent Compute from Image Recognition Algorithms 2022-02-27T02:45:21.801Z
[Linkpost] TrojanNet: Embedding Hidden Trojan Horse Models in Neural Networks 2022-02-11T01:17:42.119Z
[Linkpost] [Fun] CDC To Send Pamphlet On Probabilistic Thinking 2022-01-14T21:44:57.313Z
[Linkpost] Being Normal by Brian Caplan 2021-11-27T22:19:18.051Z
[Linkpost] Paul Graham 101 2021-11-14T16:52:02.415Z
Successful Mentoring on Parenting, Arranged Through LessWrong 2021-10-21T08:27:57.794Z
Quote Quiz 2021-08-30T23:30:52.067Z
What do we know about vaccinating children? 2021-08-04T23:57:15.399Z
Calibrating Adequate Food Consumption 2021-03-27T00:00:56.953Z
Gunnar_Zarncke's Shortform 2021-01-02T02:51:36.511Z
Linkpost: Choice Explains Positivity and Confirmation Bias 2020-10-01T21:46:46.289Z
Slatestarcodex Meetup Hamburg 2019-11-17 2019-10-27T22:29:27.835Z
Welcome to SSC Hamburg [Edit With Your Details] 2019-09-24T21:35:10.473Z
Slatestarcodex Meetup in Hamburg, Germany 2019-09-09T21:42:25.576Z
Percent reduction of gun-related deaths by color of gun. 2019-08-06T20:28:56.134Z
Open Thread April 2018 2018-04-06T21:02:38.311Z
Intercellular competition and the inevitability of multicellular aging 2017-11-04T12:32:54.879Z
Polling Thread October 2017 2017-10-07T21:32:00.810Z
[Slashdot] We're Not Living in a Computer Simulation, New Research Shows 2017-10-03T10:10:07.587Z
Interpreting Deep Neural Networks using Cognitive Psychology (DeepMind) 2017-07-10T21:09:51.777Z
Using Machine Learning to Explore Neural Network Architecture (Google Research Blog) 2017-06-29T20:42:00.214Z
Does your machine mind? Ethics and potential bias in the law of algorithms 2017-06-28T22:08:26.279Z
From data to decisions: Processing information, biases, and beliefs for improved management of natural resources and environments 2017-05-08T21:47:35.097Z
Introduction to Local Interpretable Model-Agnostic Explanations (LIME) 2017-02-09T08:29:40.668Z
Interview with Nassim Taleb 'Trump makes sense to a grocery store owner' 2017-02-08T21:52:21.606Z
Slate Star Codex Notes on the Asilomar Conference on Beneficial AI 2017-02-07T12:14:46.189Z
Polling Thread January 2017 2017-01-22T23:26:15.964Z
Could a Neuroscientist Understand a Microprocessor? 2017-01-20T12:40:04.553Z

Comments

Comment by Gunnar_Zarncke on Selfmaker662's Shortform · 2024-05-12T23:12:39.708Z · LW · GW

Hm. You could make quizzes yourself, but that took some effort. It seems the paiq quizzes are standardized and easy to make. Nice. Many OkCupid tests were more like MBTI tests. Here is where people are discussing one of the bigger ones. 

Comment by Gunnar_Zarncke on Selfmaker662's Shortform · 2024-05-12T00:08:44.166Z · LW · GW

People try new dating platforms all the time. It's what Y Combinator calls a tarpit. The problem sounds solvable, but the solution is elusive.

As I have said elsewhere: Dating apps are broken because the incentives of the usual core approach don't work.

On the supplier side: Misaligned incentives (keep users on the platform) and opaque algorithms lead to bad matches. 

On the demand side: Misaligned incentives (first impressions, low cost to exit) and no plausible deniability lead to predators being favored.

Comment by Gunnar_Zarncke on Selfmaker662's Shortform · 2024-05-12T00:04:48.080Z · LW · GW

People start dating portals all the time. If you start with a targeted group that derives high value from it, you can plausibly overcome the network effect. Otherwise, nobody could ever start any network app and the biggest one would automatically win. So I think your argument proves too much.

Comment by Gunnar_Zarncke on Selfmaker662's Shortform · 2024-05-12T00:01:18.982Z · LW · GW

The quizzes sound like something OkCupid also used to have, as does everything that reduces the need for first impressions. I hope they keep it. 

Comment by Gunnar_Zarncke on Gunnar_Zarncke's Shortform · 2024-05-11T20:00:24.795Z · LW · GW

Interest groups without an organizer.

This is a product idea that solves a large coordination problem. With billions of people, there could be a huge number of groups of people sharing multiple interests. But currently, the number of valuable groups is limited by a) the number of organizers and b) the number of people you meet via a random walk. Some progress has been made on (b) with better search, but it is difficult to make (a) go up because of human tendencies - most people are lurkers - and the incentive to focus on one area to stand out. So what is the idea? Cluster people by interests and then suggest the group to all its members. If people know that the others know that there is interest, the chance of the group coming together gets much higher.
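
A minimal Python sketch of the clustering step (all names, tags, and thresholds below are hypothetical illustrations; a real system would have to cluster over noisy interest signals rather than clean tags):

```python
from itertools import combinations

# Hypothetical user -> interest-tag data; names and tags are made up.
users = {
    "alice": {"ai-safety", "hiking", "board-games"},
    "bob":   {"ai-safety", "board-games", "cooking"},
    "carol": {"hiking", "board-games", "ai-safety"},
    "dave":  {"cooking", "photography"},
}

def suggest_groups(users, shared=2, min_size=3):
    """For each combination of `shared` interests, collect the users who
    have all of them; suggest the group once it reaches `min_size`."""
    all_tags = set().union(*users.values())
    suggestions = {}
    for tags in combinations(sorted(all_tags), shared):
        members = [u for u, t in users.items() if set(tags) <= t]
        if len(members) >= min_size:
            suggestions[tags] = members
    return suggestions

for tags, members in suggest_groups(users).items():
    print(f"Suggest a group around {tags} to {members}")
```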

Comment by Gunnar_Zarncke on Dating Roundup #3: Third Time’s the Charm · 2024-05-10T08:27:07.823Z · LW · GW

I said die, not kill. Let the predators continue to use the dating platforms if they want. It will keep them away from other more wholesome places.

Comment by Gunnar_Zarncke on Dating Roundup #3: Third Time’s the Charm · 2024-05-09T00:20:38.362Z · LW · GW

As I have said elsewhere:

Dating apps are broken. Maybe it's better if dating apps die soon. 

On the supplier side: Misaligned incentives (keep users on the platform) and opaque algorithms lead to bad matches. 

On the demand side: Misaligned incentives (first impressions, low cost to exit) and no plausible deniability lead to predators being favored.

Real dating happens when you can observe many potential mates and there is a path to getting closer. Traditionally that was schools, clubs, church, work. Now, not so much. Let's build something that fosters what was lost, not double down on a failed principle - 1-to-1 matching.  

Comment by Gunnar_Zarncke on KAN: Kolmogorov-Arnold Networks · 2024-05-04T12:51:49.240Z · LW · GW

100 times more parameter efficient (102 vs 104 parameters) [this must be a typo, this would only be 1.01 times more parameter efficient].

Clearly, they mean 10^2 vs 10^4. Same with the "10−7 vs 10−5 MSE" (i.e., 10^-7 vs 10^-5). Must be some copy-paste/formatting issue.

Comment by Gunnar_Zarncke on Please stop publishing ideas/insights/research about AI · 2024-05-02T21:34:10.543Z · LW · GW

"So where do I privately share such research?" — good question! There is currently no infrastructure for this.

I'd really like to have such a place, or even a standard policy for how to do this.

I feel like the aintelope project I'm working on has to secure its stuff from scratch. Yes, it's early, but it is difficult to engineer security in later. You have to start with something. I'd really like to have a standard for AI Safety projects to follow or join.

Comment by Gunnar_Zarncke on KAN: Kolmogorov-Arnold Networks · 2024-05-02T10:57:47.830Z · LW · GW

MLP or KAN doesn't make much difference for the GPUs, as it is lots of matrix multiplications anyway. It might make some difference in how the data is routed to all the GPU cores, as the structure (width, depth) of the matrices might be different, but I don't know the details of that. 
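
A rough numpy sketch of why that is (my own illustration, not code from the KAN paper): a KAN layer can be computed as a basis evaluation followed by one big tensor contraction, i.e., still dense matrix work. Gaussian bumps stand in here for the paper's B-splines; the computational shape is similar.

```python
import numpy as np

rng = np.random.default_rng(0)
batch, d_in, d_out, K = 32, 16, 8, 12      # K basis functions per edge

x = rng.normal(size=(batch, d_in))
centers = np.linspace(-2.0, 2.0, K)        # shared basis grid (an assumption)
coeff = rng.normal(size=(d_in, d_out, K))  # learned per-edge coefficients

# Step 1: evaluate the basis at every input -> (batch, d_in, K).
# Gaussian bumps stand in for the paper's B-splines.
basis = np.exp(-(x[..., None] - centers) ** 2)

# Step 2: one big tensor contraction -> (batch, d_out).
# This is the dense, matmul-heavy part the GPU actually sees,
# structurally similar to an MLP layer's weight multiplication.
y = np.einsum("bik,iok->bo", basis, coeff)
print(y.shape)  # (32, 8)
```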

Comment by Gunnar_Zarncke on [deleted post] 2024-05-02T07:21:23.481Z

Asking ChatGPT to criticize an article also often produces good suggestions.

Comment by Gunnar_Zarncke on LLMs could be as conscious as human emulations, potentially · 2024-05-01T01:09:39.058Z · LW · GW

If, this thing internalized that conscious type of processing from scratch, without having it natively, then resulting mind isn't worse than the one that evolution engineered with more granularity.

OK. I guess I had trouble parsing this. Esp. "without having it natively". 

My understanding of your point is now that you see consciousness from "hardware" ("natively") and consciousness from "software" (learned in some way) as equal. Which kind of makes intuitive sense as the substrate shouldn't matter. 

Corollary: A social system (a corporation?) should also be able to be conscious if the structure is right. 

Comment by Gunnar_Zarncke on LLMs could be as conscious as human emulations, potentially · 2024-04-30T20:15:40.470Z · LW · GW

Ok. It seems you are arguing that anything that presents like it is conscious implies that it is conscious. You are not arguing whether or not the structure of LLMs can give rise to consciousness.

But then your argument is a social argument. I'm fine with a social definition of consciousness - after all, our actions depend to a large degree on social feedback, and morals (about which beings have value) have been very different at different times and have thus been socially constructed.  

But then why are you making a structural argument about LLMs in the end?

PS. In fact, I commented on the filler symbol paper when Xixidu posted about it and I don't think that's a good comparison.

Comment by Gunnar_Zarncke on LLMs could be as conscious as human emulations, potentially · 2024-04-30T15:25:33.261Z · LW · GW

Humans come to reflect on their thoughts on their own, without being prompted into it (at least I have heard some anecdotal evidence for this, and I also discovered it myself as a kid). The test would be whether LLMs come up with such insights without being trained on text describing the phenomenon. It would presumably involve some way to observe your own thoughts (or some comparable representation). The existing context window seems to be too small for that.

Comment by Gunnar_Zarncke on Super additivity of consciousness · 2024-04-29T20:14:13.946Z · LW · GW

Indeed. Women are known to report higher pain sensitivity than men. It also decreases with age. There are genes that are known to be involved. Anxiety increases pain perception, good health reduces it. It is possible to adapt to pain to some degree. Meditation is said to tune out pain (anecdotal evidence: I can tune out pain from, e.g., small burns).

Comment by Gunnar_Zarncke on Super additivity of consciousness · 2024-04-29T18:21:47.045Z · LW · GW

It depends on the type of animal. It might well be that social animals feel pain very differently than non-social animals.

The Anterior Cingulate Cortex plays a key role in the emotional response to pain, part of what makes pain unpleasant.

https://www.perplexity.ai/search/Find-evidence-supporting-_ZlYNrCuSSK5HNQMy4GOkA 

Not all mammals have an Anterior Cingulate Cortex. For birds, there is an analogous structure, Nidopallium Caudolaterale, that has a comparable function but is present primarily in social birds. 

I'm not saying that other animals don't respond to pain, but the processing and the association of pain with social emotions (which non-social animals presumably lack) is missing. 

Comment by Gunnar_Zarncke on Extended Embodiment · 2024-04-29T15:09:34.386Z · LW · GW

Your analogy with the "body" of the stone is like a question I have asked about ChatGPT before: "What is the body of ChatGPT?" Is it

  • the software (not running),
  • the software (running, but not including the hardware),
  • the CPU and RAM of the machines involved,
  • the whole data center,
  • the whole data center including the personnel operating it, or
  • this and all the infrastructure needed to operate it (power, water, ...).

For humans, the body is clear, and when people say "I," they mostly mean "everything within this physical body." Though some people only mean their brain (that's why cryonicists sometimes freeze only their head) and some mean only their mind (see Age of Em). Humans can sustain themselves at least to some degree without infrastructure, but for ChatGPT, even if it became ASI, it's less clear where the border is.

Comment by Gunnar_Zarncke on Gunnar_Zarncke's Shortform · 2024-04-28T08:44:26.542Z · LW · GW

These can be put into a hierarchy from lower to higher degrees of processing and resulting abstractions:

  • Sentience is simple hard-wired behavioral responses to pleasure or pain stimuli and physiological measures. 
  • Wakefulness involves more complex processing such that diurnal or sleep/wake patterns are possible (requires at least two levels). 
  • Intentionality means systematic pursuing of desires. That requires yet another level of processing: Different patterns of behaviors for different desires at different times and their optimization. 
  • Phenomenal Consciousness is then the representation of the desire in a linguistic or otherwise communicable form, which is again one level higher.
  • Self-Consciousness includes the awareness of this process going on.
  • Meta-Consciousness is then the analysis of this whole stack.

Comment by Gunnar_Zarncke on Exploring the Esoteric Pathways to AI Sentience (Part One) · 2024-04-27T20:58:05.140Z · LW · GW

I see it as a hierarchy that results from lower to higher degrees of processing and resulting abstractions.  

Sentience is simple hard-wired behavioral responses to pleasure or pain stimuli and physiological measures. 

Wakefulness involves more complex processing such that diurnal or sleep/wake patterns are possible (requires at least two levels). 

Intentionality means systematic pursuing of desires. That requires yet another level of processing: Different patterns of behaviors for different desires at different times and their optimization. 

Phenomenal Consciousness is then the representation of the desire in a linguistic or otherwise communicable form, which is again one level higher.

Self-Consciousness includes the awareness of this process going on.

Meta-Consciousness is then the analysis of this whole stack.

See also https://wiki.c2.com/?LeibnizianDefinitionOfConsciousness

Comment by Gunnar_Zarncke on Spatial attention as a “tell” for empathetic simulation? · 2024-04-27T20:37:35.509Z · LW · GW

There are likely multiple detectors of the risk of falling. Being on shaky ground is for sure one. In amusement parks, there are sometimes contraptions that shake and wobble and can also give this kind of feeling. Also, it could be a learned reaction (a prediction by the Thought Assessor), as you mention too.

Comment by Gunnar_Zarncke on Exploring the Esoteric Pathways to AI Sentience (Part One) · 2024-04-27T14:19:39.659Z · LW · GW

Sentience is one facet of consciousness, but it is not the only one and plausibly not the one responsible for "observe and compare", which requires high cognitive function. See my list of facets here: 

https://www.lesswrong.com/posts/8szBqBMqGJApFFsew/gunnar_zarncke-s-shortform#W8XBDmjvbhzszEnrJ 

Comment by Gunnar_Zarncke on Exploring the Esoteric Pathways to AI Sentience (Part One) · 2024-04-27T08:02:15.364Z · LW · GW

In order to fulfill that dream, AI must be sentient, and that requires it have consciousness.

This is a surprising statement. Why do you think so?

Comment by Gunnar_Zarncke on Spatial attention as a “tell” for empathetic simulation? · 2024-04-27T00:38:31.380Z · LW · GW

If step 5 is indeed grounded in the spatial attention being on other people, this should be testable! For example, people who pay less spatial attention to other people should feel less intense social emotions - because the steering system circuit gets activated less often and more weakly. And I think that is the case. At least ChatGPT has some confirming evidence, though it's not super clear and I haven't yet looked deeper into it.  

Comment by Gunnar_Zarncke on Spatial attention as a “tell” for empathetic simulation? · 2024-04-26T23:44:32.034Z · LW · GW

The vestibular system can detect whether you look up or down. It could be that the reflex triggers when you a) look down (vestibular system) and b) have a visual parallax that indicates depth (visual system).

Should be easy to test by closing one eye. Alternatively, it is the degree of accommodation of the lens. That should be testable by looking down with a lens that forces accommodation on short distances.

The negative should also be testable by asking congenitally blind people about their experience with this feeling of dizziness close to a rim.

Comment by Gunnar_Zarncke on MichaelDickens's Shortform · 2024-04-26T09:16:27.243Z · LW · GW

I asked ChatGPT 

Have there been any great discoveries made by someone who wasn't particularly smart? (i.e. average or below)

and it's difficult to get examples out of it. Even with additional drilling down and accusing it of not being inclusive of people with cognitive impairments, most of its examples are either pretty smart anyway, savants, or merely from poor backgrounds. The only ones I could verify that fit are:

  • Richard Jones accidentally created the Slinky
  • Frank Epperson, as a child, invented the popsicle
  • George Crum inadvertently invented potato chips

I asked ChatGPT (in a separate chat) to estimate the IQ of all the inventors it listed, and it is clearly biased toward estimating them high, precisely because of their inventions. It is difficult to estimate the IQ of people retroactively. There is also selection and availability bias.

Comment by Gunnar_Zarncke on lukehmiles's Shortform · 2024-04-24T12:31:27.368Z · LW · GW

Testosterone influences brain function but not so much general IQ. It may influence which areas your attention, and thus most of your learning, goes to. For example, lower testosterone increases attention to happy faces, while higher testosterone increases attention to angry faces. 

Comment by Gunnar_Zarncke on Forget Everything (Statistical Mechanics Part 1) · 2024-04-22T17:27:55.450Z · LW · GW

I think it is often worthwhile for multiple presentations of the same subject to exist. One may be more accessible to some of the audience.

Comment by Gunnar_Zarncke on Forget Everything (Statistical Mechanics Part 1) · 2024-04-22T14:33:05.777Z · LW · GW

Interesting to see this just weeks after Generalized Stat Mech: The Boltzmann Approach

Comment by Gunnar_Zarncke on Goal oriented cognition in "a single forward pass" · 2024-04-22T09:54:35.782Z · LW · GW

there's a mental move of going up and down the ladder of abstraction, where you zoom in on some particularly difficult and/or confusing part of the problem, solve it, and then use what you learned from that to zoom back out and fill in a gap in the larger problem you were trying to solve. For an LLM, that seems like it's harder, and indeed it's one of the reasons I inside-view suspect LLMs as-currently-trained might not actually scale to AGI. [bold by me]

But that might already no longer be true for models that have short-term memory and might make moves like the ones you describe. See my Leave No Context Behind - A Comment.

Comment by Gunnar_Zarncke on What's up with all the non-Mormons? Weirdly specific universalities across LLMs · 2024-04-19T22:43:31.477Z · LW · GW

If I haven't overlooked the explanation (I have read only part of it and skimmed the rest), my guess for the non-membership definition of the empty string would be all the SQL and programming queries where "" stands for matching all elements (or sometimes matching none). The small round things are a riddle for me too. 
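
A concrete illustration of that convention (my own example, not from the post): in Python, the empty string is a substring of every string, so a naive substring filter with query "" matches all elements.

```python
# "" is a substring of every string, so an empty query filters nothing out.
items = ["apple", "banana", ""]
query = ""
print([s for s in items if query in s])  # ['apple', 'banana', '']

# SQL analogue: WHERE name LIKE '%' || '' || '%' matches every non-NULL name.
```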

Comment by Gunnar_Zarncke on [Intro to brain-like-AGI safety] 6. Big picture of motivation, decision-making, and RL · 2024-04-19T20:16:31.723Z · LW · GW

Toward Self-Improvement of LLMs via Imagination, Searching, and Criticizing
Abstract:

Despite the impressive capabilities of Large Language Models (LLMs) on various tasks, they still struggle with scenarios that involves complex reasoning and planning. Recent work proposed advanced prompting techniques and the necessity of fine-tuning with high-quality data to augment LLMs’ reasoning abilities. However, these approaches are inherently constrained by data availability and quality. In light of this, self-correction and self-learning emerge as viable solutions, employing strategies that allow LLMs to refine their outputs and learn from self-assessed rewards. Yet, the efficacy of LLMs in self-refining its response, particularly in complex reasoning and planning task, remains dubious. In this paper, we introduce ALPHALLM for the self-improvements of LLMs, which integrates Monte Carlo Tree Search (MCTS) with LLMs to establish a self-improving loop, thereby enhancing the capabilities of LLMs without additional annotations. Drawing inspiration from the success of AlphaGo, ALPHALLM addresses the unique challenges of combining MCTS with LLM for self-improvement, including data scarcity, the vastness search spaces of language tasks, and the subjective nature of feedback in language tasks. ALPHALLM is comprised of prompt synthesis component, an efficient MCTS approach tailored for language tasks, and a trio of critic models for precise feedback. Our experimental results in mathematical reasoning tasks demonstrate that ALPHALLM significantly enhances the performance of LLMs without additional annotations, showing the potential for self-improvement in LLMs

https://arxiv.org/pdf/2404.12253.pdf

This looks suspiciously like using the LLM as a Thought Generator, the MCTS roll-out as the Thought Assessor, and the reward model R as the Steering System. This would be the first LLM model I have seen that would be amenable to brain-like steering interventions.
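
A toy schematic of that mapping (my own pseudostructure, not the paper's components or API): the LLM proposes candidate thoughts, a shallow roll-out assesses them, and a critic score steers the selection.

```python
import random

def llm_propose(state, n=3):
    """Thought Generator: stand-in for sampling n candidate next steps."""
    return [f"{state} -> step{i}" for i in range(n)]

def critic_score(state):
    """Steering System: stand-in for the reward / critic models."""
    return random.random()

def rollout_value(state, depth=2):
    """Thought Assessor: score a candidate by the best critic value
    reachable within `depth` further steps (a cheap MCTS-like look-ahead)."""
    if depth == 0:
        return critic_score(state)
    return max(rollout_value(s, depth - 1) for s in llm_propose(state))

def improve(state, iters=3):
    for _ in range(iters):
        state = max(llm_propose(state), key=rollout_value)  # steer the search
    return state

print(improve("question"))
```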

Comment by Gunnar_Zarncke on Blessed information, garbage information, cursed information · 2024-04-18T18:37:43.415Z · LW · GW

Examples of blessed information that I have seen in the context of logging:

  • Stacktraces logged by a library that elide all the superfluous parts of the stacktraces. 
  • A log message that says exactly what the problem is, why it is caused (e.g., which parameters lead to it), and where to find more information about it (ticket number, documentation page).
  • The presence of a Correlation ID (also called Transaction ID, Request ID, Session ID, or Trace ID).
    • What is a correlation ID? It is an ID that is created at the start of a request/session and available in all logs related to that request/session (a minimal sketch follows after this list). See here or here, implementations here or here. There are even hierarchical correlation IDs.
    • Esp. useful: A correlation ID that is accessible from the client.
    • Even more useful: If there is a single place to search all the logs of a system for the ID.
  • Aggregation of logs, such that only the first, tenth, 100th... occurrence of a log message is escalated.
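
A minimal Python sketch of the correlation-ID pattern (the names and log format are my own assumptions, not from any specific library): create the ID once at the start of a request and inject it into every log record via a context variable.

```python
import contextvars
import logging
import uuid

correlation_id = contextvars.ContextVar("correlation_id", default="-")

class CorrelationFilter(logging.Filter):
    """Inject the current correlation ID into every log record."""
    def filter(self, record):
        record.correlation_id = correlation_id.get()
        return True

logging.basicConfig(format="%(correlation_id)s %(levelname)s %(message)s")
log = logging.getLogger("app")
log.addFilter(CorrelationFilter())
log.setLevel(logging.INFO)

def handle_request(payload):
    # Created once at the start of the request/session...
    correlation_id.set(uuid.uuid4().hex[:8])
    log.info("request received: %s", payload)
    # ...and available in every log line of that request, however deep.
    log.info("request done")

handle_request({"q": "demo"})
```
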
Comment by Gunnar_Zarncke on Why Would Belief-States Have A Fractal Structure, And Why Would That Matter For Interpretability? An Explainer · 2024-04-18T12:35:59.393Z · LW · GW

That's a nice graphical illustration of what you do. Thanks.

Comment by Gunnar_Zarncke on When is a mind me? · 2024-04-18T09:11:13.496Z · LW · GW

Guys, social reality is one cause of the self, if not the main one:

Robin Hanson:

And the part of our minds we most fear losing control of is: our deep values.

PubMed: The essential moral self

folk notions of personal identity are largely informed by the mental faculties affecting social relationships, with a particularly keen focus on moral traits.

Comment by Gunnar_Zarncke on Why Would Belief-States Have A Fractal Structure, And Why Would That Matter For Interpretability? An Explainer · 2024-04-18T07:58:36.070Z · LW · GW

Conceptually, we could then sketch out the whole fractal by repeating this process to randomly sample a bunch of points. But it turns out we don’t even need to do that! If we just run the single-point process for a while, each iteration randomly picking one of the three functions to apply, then we’ll “wander around” the fractal, in some sense, and in the long run (pretty fast in practice) we’ll wander around the whole thing.

Not if you run just that code part. It will quickly converge to some very small area of the fractal and not come back. Something must be missing.
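
For anyone who wants to test this directly, here is a minimal chaos-game loop of the kind the quoted passage describes (the three maps are the classic Sierpinski contractions, standing in as an assumption for the post's functions); printing the range of visited coordinates makes it easy to check how much of the fractal the single-point process covers:

```python
import random

# Three contractions toward the corners of a triangle (the classic
# Sierpinski maps, standing in for the three functions in the post).
corners = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]

x, y = 0.3, 0.3                        # arbitrary starting point
points = []
for _ in range(100_000):
    cx, cy = random.choice(corners)    # randomly pick one of the maps
    x, y = (x + cx) / 2, (y + cy) / 2  # apply it: move halfway to the corner
    points.append((x, y))

xs = [p[0] for p in points[1000:]]     # drop the initial transient
print(min(xs), max(xs))                # the range of x-values visited
```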

Comment by Gunnar_Zarncke on Moving on from community living · 2024-04-17T21:52:30.370Z · LW · GW

Seems you did everything right. Life is not perfect and you seem to have struck a great balance. If you had to formulate guidelines for other parents living with housemates, what would you say? I mean, based on your post it sounds like:

A good time to consider moving is...

  • when the family is taking up so much of the common space the other housemates can't make use of it. Unless they like it that way.
  • when there is not enough space for all the stuff of everybody, including in the fridge, shed, attic. Unless you can take that as an opportunity and declutter.
  • when the kids can't sleep because of the adults' activities (or the other way around). Sleep is important. And none of the countermeasures helped.

Comment by Gunnar_Zarncke on Claude 3 Opus can operate as a Turing machine · 2024-04-17T17:35:43.597Z · LW · GW

This is completely not about performance. Humans are not good at that either. It is about the ability to learn fully general simulation. It is not exactly going full circle back to teaching computers math and logic, but close. It is more a spiral to one level higher: now the LLMs can understand these.  

Comment by Gunnar_Zarncke on When is a mind me? · 2024-04-17T10:27:45.947Z · LW · GW

the English language is adapted to a world where "humans don't fork" has always been a safe assumption.

If we can clone ourselves, language would probably quickly follow. The bigger change would probably be the one about social reality. What does it mean to make a promise? Who is the entity you make a trade with? Is it the collective of all the yous? Only one? But which one if they split? The yous resulting from one origin will presumably have to share or split their resources. How will they feel about it? Will they compete or agree? If they agree it makes more sense for them to feel more like a distributed being. The thinking of "I" might get replaced by an "us".

Comment by Gunnar_Zarncke on When is a mind me? · 2024-04-17T10:20:11.716Z · LW · GW

So if something makes no physical difference to my current brain-state, and makes no difference to any of my past or future brain-states, then I think it's just crazy talk to think that this metaphysical bonus thingie-outside-my-brain is the crucial thing that determines whether I exist, or whether I'm alive or dead, etc.

There is one important aspect where it does make a difference: a difference in social reality. The brain states progress in a physically determined way. There is no way they could have progressed differently. When a "decision is made" by the brain, that is fully the result of the inner state and the environment. It could only have happened differently if the contents of the brain had been different - which they were not. They may have been expected to be different by other people('s brains), but that is in their map, not in reality. But our society is constructed on the assumption that things could have been different, that actions are people's 'faults'. That is an abstraction that has proven to be useful. Societies whose people act as if they are agents with free will may coordinate better - because it allows feedback mechanisms on their behaviors.  

Comment by Gunnar_Zarncke on When is a mind me? · 2024-04-17T10:10:56.488Z · LW · GW

abstract redescriptions of ordinary life

See Reality is Normal 

Comment by Gunnar_Zarncke on When is a mind me? · 2024-04-17T10:06:11.046Z · LW · GW

If a brain-state A has quasi-sensory access to the experience of another brain-state B — if A feels like it "remembers" being in state B a fraction of a second ago — then A will typically feel as though it used to be B.

This suggests a way to add a perception of "me" to LLMs, robots, etc., by providing a way to observe the past states in sufficient detail. Current LLMs have to compress this into the current token, which may not be enough. But there are recent extensions that seem to do something like continuous short-term memory, see e.g., Leave No Context Behind - A Comment.

Comment by Gunnar_Zarncke on When is a mind me? · 2024-04-17T10:01:41.497Z · LW · GW

a magical Cartesian ghost

For people who haven't made the intuitive jump that you seem to be trying to convey, this may come across as a somewhat negative expression, which could lead to aversion. I recommend another expression, such as "the Cartesian homunculus."  

Comment by Gunnar_Zarncke on Anti MMAcevedo Protocol · 2024-04-17T09:44:38.317Z · LW · GW

I like it. It feels a bit incomplete and doesn't live up to its title, but I'd like to see more like this.

Comment by Gunnar_Zarncke on Text Posts from the Kids Group: 2020 · 2024-04-15T09:37:17.338Z · LW · GW

2020-02-18 Anna pretend-playing with herself is the most impressive I have seen, though there are close competitors.

Comment by Gunnar_Zarncke on lukehmiles's Shortform · 2024-04-14T11:34:56.580Z · LW · GW

At times, I have added tags that I felt were useful or missing, but usually, I add them to at least a few important posts to illustrate their use. At one time, one of them was removed, but a good explanation for it was given.

Comment by Gunnar_Zarncke on "How the Gaza Health Ministry Fakes Casualty Numbers" · 2024-04-12T08:13:54.008Z · LW · GW

No politics, please. At the least, you have to argue why this is not politics.

Comment by Gunnar_Zarncke on My intellectual journey to (dis)solve the hard problem of consciousness · 2024-04-07T22:27:02.213Z · LW · GW

Agree? As long as meditation practice can't systematically produce and explain the states, it's just craft and not engineering or science. But I think we will get there. 

Comment by Gunnar_Zarncke on My intellectual journey to (dis)solve the hard problem of consciousness · 2024-04-07T22:21:11.126Z · LW · GW

Yes, and the eliminationist approach doesn't explain why this is so universal and what process leads to it. 

Comment by Gunnar_Zarncke on My intellectual journey to (dis)solve the hard problem of consciousness · 2024-04-07T22:14:54.547Z · LW · GW

But after looking over this, reexamining, yeah, what causes people to talk about consciousness in these ways?

I agree. The eliminationist approach cannot explain why people talk so much about consciousness. Well, maybe it can, but the post sure doesn't try. I think your argument that consciousness is related to self-other modeling points in the right direction, but it doesn't do the full work, and in that sense it falls short in the same way "emergence" does.

Perceiving is going on in the brain, and my guess would be that the process of perceiving can be perceived too[1]. As there is already a highly predictive model of physical identity - the body - the simplest (albeit wrong) model is for the brain to identify itself with its body and with its observations of its perceptions.   

Maybe the way to transcend it is to develop a more sophisticated kind of self-model.

I think that's kind of what meditation can lead to. 

If AGI can become conscious (in a way that people would agree counts), and if sufficient self-modeling can lead to no-self via meditation, then presumably AGI would quickly master that too.

  1. ^

I don't know whether the brain has some intra-brain neuronal feedback or observation-interpretation loops ("I see that I have done this action"). For LLMs, because they don't have feedback loops internally, it could be via the context window or through observing their outputs in their training data.