Posts

Claude 3 Opus can operate as a Turing machine 2024-04-17T08:41:57.209Z
Leave No Context Behind - A Comment 2024-04-11T22:50:26.100Z
aintelope project update 2024-02-08T18:32:00.000Z
[Linkpost] Contra four-wheeled suitcases, sort of 2023-09-12T20:36:02.412Z
Trying AgentGPT, an AutoGPT variant 2023-04-13T10:13:41.316Z
What is good Cyber Security Advice? 2022-10-24T23:27:58.428Z
[Fun][Link] Alignment SMBC Comic 2022-09-09T21:38:54.400Z
Hamburg, Germany – ACX Meetups Everywhere 2022 2022-08-20T19:18:48.685Z
Brain-like AGI project "aintelope" 2022-08-14T16:33:39.571Z
Robin Hanson asks "Why Not Wait On AI Risk?" 2022-06-26T23:32:19.436Z
[Link] Childcare : what the science says 2022-06-24T21:45:23.406Z
[Link] Adversarially trained neural representations may already be as robust as corresponding biological neural representations 2022-06-24T20:51:27.924Z
What are all the AI Alignment and AI Safety Communication Hubs? 2022-06-15T16:16:03.241Z
Silly Online Rules 2022-06-08T20:40:41.076Z
LessWrong Astralcodex Ten Meetup June 2022 2022-05-29T22:43:09.431Z
[Linkpost] A conceptual framework for consciousness 2022-05-02T01:05:36.129Z
ACX Spring 2022 Meetup Hamburg 2022-04-25T21:44:11.141Z
Does non-access to outputs prevent recursive self-improvement? 2022-04-10T18:37:54.332Z
Unfinished Projects Thread 2022-04-02T17:12:52.539Z
[Quote] Why does i show up in Quantum Mechanics and other Beautiful Math Mysteries 2022-03-16T11:58:30.526Z
Estimating Brain-Equivalent Compute from Image Recognition Algorithms 2022-02-27T02:45:21.801Z
[Linkpost] TrojanNet: Embedding Hidden Trojan Horse Models in Neural Networks 2022-02-11T01:17:42.119Z
[Linkpost] [Fun] CDC To Send Pamphlet On Probabilistic Thinking 2022-01-14T21:44:57.313Z
[Linkpost] Being Normal by Brian Caplan 2021-11-27T22:19:18.051Z
[Linkpost] Paul Graham 101 2021-11-14T16:52:02.415Z
Successful Mentoring on Parenting, Arranged Through LessWrong 2021-10-21T08:27:57.794Z
Quote Quiz 2021-08-30T23:30:52.067Z
What do we know about vaccinating children? 2021-08-04T23:57:15.399Z
Calibrating Adequate Food Consumption 2021-03-27T00:00:56.953Z
Gunnar_Zarncke's Shortform 2021-01-02T02:51:36.511Z
Linkpost: Choice Explains Positivity and Confirmation Bias 2020-10-01T21:46:46.289Z
Slatestarcodex Meetup Hamburg 2019-11-17 2019-10-27T22:29:27.835Z
Welcome to SSC Hamburg [Edit With Your Details] 2019-09-24T21:35:10.473Z
Slatestarcodex Meetup in Hamburg, Germany 2019-09-09T21:42:25.576Z
Percent reduction of gun-related deaths by color of gun. 2019-08-06T20:28:56.134Z
Open Thread April 2018 2018-04-06T21:02:38.311Z
Intercellular competition and the inevitability of multicellular aging 2017-11-04T12:32:54.879Z
Polling Thread October 2017 2017-10-07T21:32:00.810Z
[Slashdot] We're Not Living in a Computer Simulation, New Research Shows 2017-10-03T10:10:07.587Z
Interpreting Deep Neural Networks using Cognitive Psychology (DeepMind) 2017-07-10T21:09:51.777Z
Using Machine Learning to Explore Neural Network Architecture (Google Research Blog) 2017-06-29T20:42:00.214Z
Does your machine mind? Ethics and potential bias in the law of algorithms 2017-06-28T22:08:26.279Z
From data to decisions: Processing information, biases, and beliefs for improved management of natural resources and environments 2017-05-08T21:47:35.097Z
Introduction to Local Interpretable Model-Agnostic Explanations (LIME) 2017-02-09T08:29:40.668Z
Interview with Nassim Taleb 'Trump makes sense to a grocery store owner' 2017-02-08T21:52:21.606Z
Slate Star Codex Notes on the Asilomar Conference on Beneficial AI 2017-02-07T12:14:46.189Z
Polling Thread January 2017 2017-01-22T23:26:15.964Z
Could a Neuroscientist Understand a Microprocessor? 2017-01-20T12:40:04.553Z
Scott Adams mentions Prediction Markets and explains Cognitive Blindness bias 2016-12-20T21:23:33.468Z
Take the Rationality Test to determine your rational thinking style 2016-12-09T23:10:00.251Z

Comments

Comment by Gunnar_Zarncke on Exploring the Esoteric Pathways to AI Sentience (Part One) · 2024-04-27T08:02:15.364Z · LW · GW

In order to fulfill that dream, AI must be sentient, and that requires it have consciousness.

This is a surprising statement. Why do you think so?

Comment by Gunnar_Zarncke on Spatial attention as a “tell” for empathetic simulation? · 2024-04-27T00:38:31.380Z · LW · GW

If step 5 is indeed grounded in the spatial attention being on other people, this should be testable! For example, people who pay less spatial attention to other people should feel less intense social emotions, because the steering-system circuit gets activated less often and more weakly. And I think that is the case. At least ChatGPT surfaces some confirming evidence, though it's not super clear and I haven't yet looked deeper into it.

Comment by Gunnar_Zarncke on Spatial attention as a “tell” for empathetic simulation? · 2024-04-26T23:44:32.034Z · LW · GW

The vestibular system can detect whether you look up or down. It could be that the reflex triggers when you a) look down (vestibular system) and b) have a visual parallax that indicates depth (visual system).

Should be easy to test by closing one eye. Alternatively, it is the degree of accommodation of the lens. That should be testable by looking down with a lens that forces accommodation at short distances.

The negative should also be testable by asking congenitally blind people about their experience with this feeling of dizziness close to a rim.

Comment by Gunnar_Zarncke on MichaelDickens's Shortform · 2024-04-26T09:16:27.243Z · LW · GW

I asked ChatGPT 

Have there been any great discoveries made by someone who wasn't particularly smart? (i.e. average or below)

and it's difficult to get examples out of it. Even after drilling down further and accusing it of not being inclusive of people with cognitive impairments, most of the people it offers are either pretty smart anyway, savants, or merely from poor backgrounds. The only ones I could verify that fit are:

  • Richard Jones accidentally created the Slinky
  • Frank Epperson, who as a child invented the popsicle
  • George Crum inadvertently invented potato chips

I asked ChatGPT (in a separate chat) to estimate the IQ of all the inventors it listed, and it is clearly biased toward estimating them high, precisely because of their inventions. It is difficult to estimate the IQ of people retroactively. There is also selection and availability bias.

Comment by Gunnar_Zarncke on lukehmiles's Shortform · 2024-04-24T12:31:27.368Z · LW · GW

Testosterone influences brain function but not so much general IQ. It may influence which areas your attention, and thus most of your learning, goes to. For example, lower testosterone increases attention to happy faces, while higher testosterone increases attention to angry faces.

Comment by Gunnar_Zarncke on Forget Everything (Statistical Mechanics Part 1) · 2024-04-22T17:27:55.450Z · LW · GW

I think it is often worthwhile for multiple presentations of the same subject to exist. One may be more accessible to some of the audience.

Comment by Gunnar_Zarncke on Forget Everything (Statistical Mechanics Part 1) · 2024-04-22T14:33:05.777Z · LW · GW

Interesting to see this just weeks after Generalized Stat Mech: The Boltzmann Approach

Comment by Gunnar_Zarncke on Goal oriented cognition in "a single forward pass" · 2024-04-22T09:54:35.782Z · LW · GW

there's a mental move of going up and down the ladder of abstraction, where you zoom in on some particularly difficult and/or confusing part of the problem, solve it, and then use what you learned from that to zoom back out and fill in a gap in the larger problem you were trying to solve. For an LLM, that seems like it's harder, and indeed it's one of the reasons I inside-view suspect LLMs as-currently-trained might not actually scale to AGI. [bold by me]

But that might already no longer be true of models that have short-term memory and might make moves like yours. See my Leave No Context Behind - A Comment.

Comment by Gunnar_Zarncke on What's up with all the non-Mormons? Weirdly specific universalities across LLMs · 2024-04-19T22:43:31.477Z · LW · GW

If I haven't overlooked the explanation (I have read only part of it and skimmed the rest), my guess for the non-membership definition of the empty string would be all the SQL and programming queries where "" stands for matching all elements (or sometimes matching none). The small round things are a riddle for me too. 
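
To make that guess concrete: in many regex engines the empty pattern matches every string, while SQL's LIKE with an empty pattern matches only the empty string. A quick illustration in Python (the SQL behavior is stated from standard semantics, not executed here):

```python
import re

# An empty regex pattern finds a zero-length match in every string,
# so "" effectively "matches all elements".
strings = ["abc", "", "42"]
print(all(re.search("", s) is not None for s in strings))  # True

# By contrast, SQL's  col LIKE ''  matches only the empty string,
# i.e., "" matches none of the (non-empty) elements.
```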

Comment by Gunnar_Zarncke on [Intro to brain-like-AGI safety] 6. Big picture of motivation, decision-making, and RL · 2024-04-19T20:16:31.723Z · LW · GW

Toward Self-Improvement of LLMs via Imagination, Searching, and Criticizing
Abstract:
 

Despite the impressive capabilities of Large Language Models (LLMs) on various tasks, they still struggle with scenarios that involves complex reasoning and planning. Recent work proposed advanced prompting techniques and the necessity of fine-tuning with high-quality data to augment LLMs’ reasoning abilities. However, these approaches are inherently constrained by data availability and quality. In light of this, self-correction and self-learning emerge as viable solutions, employing strategies that allow LLMs to refine their outputs and learn from self-assessed rewards. Yet, the efficacy of LLMs in self-refining its response, particularly in complex reasoning and planning task, remains dubious. In this paper, we introduce ALPHALLM for the self-improvements of LLMs, which integrates Monte Carlo Tree Search (MCTS) with LLMs to establish a self-improving loop, thereby enhancing the capabilities of LLMs without additional annotations. Drawing inspiration from the success of AlphaGo, ALPHALLM addresses the unique challenges of combining MCTS with LLM for self-improvement, including data scarcity, the vastness search spaces of language tasks, and the subjective nature of feedback in language tasks. ALPHALLM is comprised of prompt synthesis component, an efficient MCTS approach tailored for language tasks, and a trio of critic models for precise feedback. Our experimental results in mathematical reasoning tasks demonstrate that ALPHALLM significantly enhances the performance of LLMs without additional annotations, showing the potential for self-improvement in LLMs

https://arxiv.org/pdf/2404.12253.pdf

This looks suspiciously like using the LLM as a Thought Generator, the MCTS roll-out as the Thought Assessor, and the reward model R as the Steering System. This would be the first LLM-based model I have seen that would be amenable to brain-like steering interventions.

Comment by Gunnar_Zarncke on Blessed information, garbage information, cursed information · 2024-04-18T18:37:43.415Z · LW · GW

Examples of blessed information that I have seen in the context of logging:

  • Stacktraces logged by a library that elides all the superfluous parts of the stacktraces.
  • A log message that says exactly what the problem is, why it is caused (e.g., which parameters lead to it), and where to find more information about it (ticket number, documentation page).
  • The presence of a correlation ID (also called transaction ID, request ID, session ID, or trace ID); a minimal sketch follows this list.
    • What is a correlation ID? It is an ID that is created at the start of a request/session and available in all logs related to that request/session. See here or here, implementations here or here. There are even hierarchical correlation IDs.
    • Especially useful: a correlation ID that is accessible from the client.
    • Even more useful: if there is a single place to search all the logs of a system for the ID.
  • Aggregation of logs, such that only the first, tenth, hundredth... occurrence of a log message is escalated.
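
Here is the promised sketch of threading a correlation ID through all logs of a request. It is a minimal illustration in Python, assuming a single-process service; the names (handle_request, CorrelationIdFilter) are invented for the example:

```python
import logging
import uuid
from contextvars import ContextVar

# Correlation ID for the current request/session; set once at the entry point.
correlation_id: ContextVar[str] = ContextVar("correlation_id", default="-")

class CorrelationIdFilter(logging.Filter):
    """Inject the current correlation ID into every log record."""
    def filter(self, record: logging.LogRecord) -> bool:
        record.correlation_id = correlation_id.get()
        return True

logging.basicConfig(
    format="%(asctime)s [%(correlation_id)s] %(levelname)s %(message)s",
    level=logging.INFO,
)
logger = logging.getLogger(__name__)
logger.addFilter(CorrelationIdFilter())

def handle_request(payload: str) -> None:
    # Create the ID at the start of the request; every log line below
    # carries it, so all logs of one request can be found in one search.
    correlation_id.set(uuid.uuid4().hex[:12])
    logger.info("request received: %s", payload)
    logger.info("request done")

handle_request("example")
```
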
Comment by Gunnar_Zarncke on Why Would Belief-States Have A Fractal Structure, And Why Would That Matter For Interpretability? An Explainer · 2024-04-18T12:35:59.393Z · LW · GW

That's a nice graphical illustration of what you do. Thanks.

Comment by Gunnar_Zarncke on When is a mind me? · 2024-04-18T09:11:13.496Z · LW · GW

Guys, social reality is one cause, if not the cause, of the self:

Robin Hanson:

And the part of our minds we most fear losing control of is: our deep values.

PubMed: The essential moral self

folk notions of personal identity are largely informed by the mental faculties affecting social relationships, with a particularly keen focus on moral traits.

Comment by Gunnar_Zarncke on Why Would Belief-States Have A Fractal Structure, And Why Would That Matter For Interpretability? An Explainer · 2024-04-18T07:58:36.070Z · LW · GW

Conceptually, we could then sketch out the whole fractal by repeating this process to randomly sample a bunch of points. But it turns out we don’t even need to do that! If we just run the single-point process for a while, each iteration randomly picking one of the three functions to apply, then we’ll “wander around” the fractal, in some sense, and in the long run (pretty fast in practice) we’ll wander around the whole thing.

Not if you run just that code part. It will quickly converge to some very small area of the fractal and not come back. Something must be missing.
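
To make the disagreement concrete, here is a minimal sketch of the single-point process as I read it, assuming the standard chaos game with the three Sierpinski-triangle maps standing in for the post's functions; running it and scatter-plotting the points shows where the single point actually wanders:

```python
import random

# Three contraction maps (Sierpinski triangle as a stand-in for the
# post's three functions): each moves the point halfway toward a vertex.
VERTICES = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]

def step(point):
    """One iteration: randomly pick one of the three maps and apply it."""
    vx, vy = random.choice(VERTICES)
    x, y = point
    return ((x + vx) / 2, (y + vy) / 2)

point = (random.random(), random.random())
trajectory = []
for _ in range(10_000):
    point = step(point)
    trajectory.append(point)
# Scatter-plotting `trajectory` (discarding the first few iterates)
# shows the long-run behavior of the wandering point.
```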

Comment by Gunnar_Zarncke on Moving on from community living · 2024-04-17T21:52:30.370Z · LW · GW

Seems you did everything right. Life is not perfect and you seem to have struck a great balance. If you had to formulate guidelines for other parents living with housemates, what would you say? I mean, based on your post it sounds like:

A good time to consider moving is...

  • when the family is taking up so much of the common space that the other housemates can't make use of it. Unless they like it that way.
  • when there is not enough space for all the stuff of everybody, including in the fridge, shed, attic. Unless you can take that as an opportunity to declutter.
  • when the kids can't sleep because of the adults' activities (or the other way around). Sleep is important. And none of the countermeasures helped.

Comment by Gunnar_Zarncke on Claude 3 Opus can operate as a Turing machine · 2024-04-17T17:35:43.597Z · LW · GW

This is not about performance at all; humans are not good at that either. It is about the ability to learn fully general simulation. It is not exactly going full circle back to teaching computers math and logic, but close. It is more a spiral to one level higher: now the LLMs can understand these.

Comment by Gunnar_Zarncke on When is a mind me? · 2024-04-17T10:27:45.947Z · LW · GW

the English language is adapted to a world where "humans don't fork" has always been a safe assumption.

If we could clone ourselves, language would probably quickly follow. The bigger change would probably be the one about social reality. What does it mean to make a promise? Who is the entity you make a trade with? Is it the collective of all the yous? Only one? But which one, if they split? The yous resulting from one origin will presumably have to share or split their resources. How will they feel about it? Will they compete or agree? If they agree, it makes more sense for them to feel more like a distributed being. The thinking of "I" might get replaced by an "us".

Comment by Gunnar_Zarncke on When is a mind me? · 2024-04-17T10:20:11.716Z · LW · GW

So if something makes no physical difference to my current brain-state, and makes no difference to any of my past or future brain-states, then I think it's just crazy talk to think that this metaphysical bonus thingie-outside-my-brain is the crucial thing that determines whether I exist, or whether I'm alive or dead, etc.

There is one important aspect where it does make a difference: a difference in social reality. The brain states progress in a physically determined way. There is no way they could have progressed differently. When a "decision is made" by the brain, that is fully the result of the inner state and the environment. It could only have happened differently if the contents of the brain had been different - which they were not. They may have been expected to be different by other people('s brains), but that is in their map, not in reality. But our society is constructed on the assumption that things could have been different, that actions are people's 'faults'. That is an abstraction that has proven useful: societies whose people act as if they are agents with free will may coordinate better, because it allows feedback mechanisms on their behaviors.

Comment by Gunnar_Zarncke on When is a mind me? · 2024-04-17T10:10:56.488Z · LW · GW

abstract redescriptions of ordinary life

See Reality is Normal 

Comment by Gunnar_Zarncke on When is a mind me? · 2024-04-17T10:06:11.046Z · LW · GW

If a brain-state A has quasi-sensory access to the experience of another brain-state B — if A feels like it "remembers" being in state B a fraction of a second ago — then A will typically feel as though it used to be B.

This suggests a way to add a perception of "me" to LLMs, robots, etc., by providing a way to observe the past states in sufficient detail. Current LLMs have to compress this into the current token, which may not be enough. But there are recent extensions that seem to do something like continuous short-term memory, see e.g., Leave No Context Behind - A Comment.

Comment by Gunnar_Zarncke on When is a mind me? · 2024-04-17T10:01:41.497Z · LW · GW

a magical Cartesian ghost

For people who haven't made the intuitive jump that you seem to be trying to convey, this may seem a somewhat negative expression, which could lead to aversion. I recommend another expression, such as "the Cartesian homunculus."

Comment by Gunnar_Zarncke on Anti MMAcevedo Protocol · 2024-04-17T09:44:38.317Z · LW · GW

I like it. It feels a bit incomplete and doesn't live up to its title, but I'd like to see more like this.

Comment by Gunnar_Zarncke on Text Posts from the Kids Group: 2020 · 2024-04-15T09:37:17.338Z · LW · GW

2020-02-18 Anna pretend-playing with herself is the most impressive I have seen, though there are close competitors.

Comment by Gunnar_Zarncke on lukehmiles's Shortform · 2024-04-14T11:34:56.580Z · LW · GW

At times, I have added tags that I felt were useful or missing, but usually I add them to at least a few important posts to illustrate. At one time, one of them was removed, but a good explanation was given.

Comment by Gunnar_Zarncke on "How the Gaza Health Ministry Fakes Casualty Numbers" · 2024-04-12T08:13:54.008Z · LW · GW

No politics, please. At the least, you have to argue why this is not politics.

Comment by Gunnar_Zarncke on My intellectual journey to (dis)solve the hard problem of consciousness · 2024-04-07T22:27:02.213Z · LW · GW

Agree? As long as meditation practice can't systematically produce and explain the states, it's just craft and not engineering or science. But I think we will get there. 

Comment by Gunnar_Zarncke on My intellectual journey to (dis)solve the hard problem of consciousness · 2024-04-07T22:21:11.126Z · LW · GW

Yes, and the eliminativist approach doesn't explain why this is so universal and what process leads to it.

Comment by Gunnar_Zarncke on My intellectual journey to (dis)solve the hard problem of consciousness · 2024-04-07T22:14:54.547Z · LW · GW

But after looking over this, reexamining, yeah, what causes people to talk about consciousness in these ways?

I agree. The eliminativist approach cannot explain why people talk so much about consciousness. Well, maybe it can, but the post sure doesn't try. I think your argument that consciousness is related to self-other modeling points in the right direction, but it doesn't do the full work, and in that sense falls short in the same way "emergence" does.

Perceiving is going on in the brain, and my guess would be that the process of perceiving can be perceived too[1]. As there is already a highly predictive model of physical identity - the body - the simplest (albeit wrong) model is for the brain to identify with its body and its observations of its perceptions.

Maybe the way to transcend it is to develop a more sophisticated kind of self-model.

I think that's kind of what meditation can lead to. 

If AGI can become conscious (in a way that people would agree counts), and if sufficient self-modeling can lead to no-self via meditation, then presumably AGI would quickly master that too.

  1. ^

    I don't know whether the brain has some intra-brain neuronal feedback or observation-interpretation loops ("I see that I have done this action"). For LLMs, because they don't have internal feedback loops, it could be via the context window or through observing their outputs in their training data.

Comment by Gunnar_Zarncke on My intellectual journey to (dis)solve the hard problem of consciousness · 2024-04-07T21:51:54.209Z · LW · GW

I agree with Kaj that this is a nicely presented and readable story of your intellectual journey that teaches something on the way. I think there are a lot of parts in there that could be spun out into their own posts that would be individually more digestible to some people. My first thought was to just post this as a sequence with each chapter one post, but I think that's not the best way, really, as the arc is lost. But a sequence of sorts would still be a good idea as some topics build upon earlier ones. 

One post I'd really like to be spun out is the one about pain that you have relegated to the addendum.

Comment by Gunnar_Zarncke on Privacy and writing · 2024-04-07T11:29:29.897Z · LW · GW

I knew the On Privacy post by Holly Elmore; in fact, I had copied this paragraph into my Anki deck:

I now think privacy is important for maximizing self-awareness and self-transparency. The primary function of privacy is not to hide things society finds unacceptable, but to create an environment in which your own mind feels safe to tell you things. If you’re not allowing these unshareworthy thoughts and feelings a space to come out, they still affect your feelings and behavior– you just don’t know how or why. And all the while your conscious self-image is growing more alienated from the processes that actually drive you. Privacy creates the necessary conditions for self-honesty, which is a necessary prerequisite to honesty with anyone else. When you only know a cleaned-up version of yourself, you’ll only be giving others a version of your truth.

Another entry in my Anki deck is about arguments against “If you have nothing to hide, you have nothing to fear”:

  1. The rules may change: Once the invasive surveillance is in place to enforce rules that you agree with, the ruleset that is being enforced could change in ways that you don’t agree with at all – but then, it is too late to protest the surveillance. 
  2. It’s not you who determines if you have something to fear: You may consider yourself law-abidingly white as snow, and it won’t matter a bit. What does matter is whether you set off the red flags in the mostly-automated surveillance or maybe even faulty metrics and after having been investigated, you may have lost everything.
  3. Laws must be broken for society to progress, for in hindsight, it may turn out that the criminals were the ones in the moral right. It is an absolute necessity to be able to break unjust laws for society to progress and question its own values, in order to learn from mistakes and move on as a society.
  4. Privacy is a basic human need: Implying that only the dishonest people have need of any privacy ignores a basic property of the human psyche, and sends a creepy message of strong discomfort. 

Comment by Gunnar_Zarncke on What's with all the bans recently? · 2024-04-07T09:42:39.508Z · LW · GW

Then we agree about the general moderation of LW.

Did your comment also apply to the latest automated bans?

Comment by Gunnar_Zarncke on Inferring the model dimension of API-protected LLMs · 2024-04-06T20:34:59.925Z · LW · GW

Could this be used to determine an estimate of the "number of parameters" of the brain?

Comment by Gunnar_Zarncke on What's with all the bans recently? · 2024-04-06T08:12:52.627Z · LW · GW

No. There can be many means in between or different altogether.

But back to my original comment: it was about the unstated question of what to do with bad comments. I agree that the dynamics for posts and comments are different. But I disagree with what I saw as the push that negative comments should be more strongly discouraged because they carry more weight.

But when rereading, I see that you don't say what to do about these comments. You only point out negative effects. What is your proposal? 

Note: I'm in favor of tending the garden and discouraging orcs and banning trolls. But I'm also in favor of critical and negative remarks. Reduce their visibility maybe, but don't completely prevent them.

Comment by Gunnar_Zarncke on What's with all the bans recently? · 2024-04-05T19:55:09.338Z · LW · GW

Sure, but literally bad ones will quickly get downvoted and the poster banned. This is about the less clearcut cases, right?

Comment by Gunnar_Zarncke on What's with all the bans recently? · 2024-04-05T14:08:19.699Z · LW · GW

I disagree. Negative comments often provide feedback to the author that they wouldn't get elsewhere. And if you are annoyed by them, you can filter them out (settings -> hide low votes).

Comment by Gunnar_Zarncke on Gunnar_Zarncke's Shortform · 2024-04-05T13:51:00.186Z · LW · GW

Thanks. That's helpful. 

I guess the training data was also sandwiched like that. I wonder what they took as user and system content in their training data. 

Comment by Gunnar_Zarncke on Gunnar_Zarncke's Shortform · 2024-04-03T15:51:37.843Z · LW · GW

Can somebody explain how system and user messages (as well as custom instructions in case of ChatGPT) are approximately handled by LLMs? In the end it's all text tokens, right? Is the only difference that something like "#### SYSTEM PROMPT ####" is prefixed during training and then inference will pick up the pattern? And does the same thing happen for custom instructions? How did they train that? How do OSS models handle such things?
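
For reference, my current understanding of how several OSS models handle this, roughly the ChatML convention: the roles are marked by special tokens woven into the training text, and inference simply continues the pattern. Whether the proprietary models do exactly this internally is an assumption on my part; a sketch:

```python
def chatml_prompt(system: str, user: str) -> str:
    """Flatten role-tagged messages into the single token stream the model
    sees; <|im_start|> and <|im_end|> are special tokens that the model
    learned to respect during instruction fine-tuning."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"  # the reply is generated from here
    )

print(chatml_prompt("You are a helpful assistant.", "What is a token?"))
```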

Comment by Gunnar_Zarncke on Gunnar_Zarncke's Shortform · 2024-04-03T12:30:40.312Z · LW · GW

Yes, I'd also like to search them. I edit the summary so it better reflects what I'd search for, but yes, that doesn't cover the content.

There are some alternate ChatGPT UIs you could have a look at:

https://github.com/billmei/every-chatgpt-gui 

Comment by Gunnar_Zarncke on Gunnar_Zarncke's Shortform · 2024-04-03T10:31:31.262Z · LW · GW

I'm discarding most ChatGPT conversations except for a few, typically 1-2 per day. These few fall into these categories:

  • conversations that led to insights or things I want to remember (examples: The immune function of tonsils, Ringwoodite transformation and the geological water cycle, oldest religious texts)
  • conversations that I want to continue (examples: Unusual commitment norms)
  • conversations that I expect to follow up on (a chess book for my son)
  • conversations with generated images that I want to keep and haven't yet copied elsewhere

Most job-related queries, such as code generation and debugging, I usually delete as soon as the code changes have been committed.

How do you handle it?

Comment by Gunnar_Zarncke on Falling fertility explanations and Israel · 2024-04-03T05:47:27.499Z · LW · GW

A financial payment per child that depends on the age of the parent and goes down with age would be a strong incentive, I'd say.

Comment by Gunnar_Zarncke on Fertility Roundup #3 · 2024-04-02T17:04:08.892Z · LW · GW

One failure mode of Robin Hanson's solution (or rather prediction) of fertile sub-cultures is that the surplus of such cultures may be leached away by mainstream culture. This would happen in much the same way we already see Western countries taking the smartest people from poorer countries. 

Comment by Gunnar_Zarncke on What can we learn about childrearing from J. S. Mill? · 2024-04-02T06:17:53.555Z · LW · GW

Or the son inherited the abilities of the father.

Comment by Gunnar_Zarncke on Back to Basics: Truth is Unitary · 2024-03-31T19:35:29.455Z · LW · GW

No. I used this ChatGPT-4 prompt:

Create a picture based on this description pieced together from your story: 

The Temple's stone walls were built to last, but rotting plywood covered the apertures that once framed stained glass. Inside, the Temple wasn't warm, but it was mostly dry. The large circular domed chamber was ringed with statues. Rain fell through the oculus in the eye of the dome. The statues' paint had partially worn away. The prospect's cloak was so soaked it was keeping him colder than warming him up. There were no chairs or coat rack. The girl had hung her own hagoromo on the statue of Mukami-Sama, the God of Atheism. He paced around the circumference of the chamber, taking care with each step as if the floor could collapse under him. Half the gods he didn't even recognize. Of those statues he did… Math-sama's too-perfect curves? No. Moloch? Azathoth? Multivac? Three times no. Morpheus?

I tried other prompts and such, especially to include the girl, but none were convincing.

Comment by Gunnar_Zarncke on Back to Basics: Truth is Unitary · 2024-03-31T08:24:05.029Z · LW · GW

I tried to Dall-E a picture for this, but I'm not so satisfied with the results:

Comment by Gunnar_Zarncke on Gunnar_Zarncke's Shortform · 2024-03-30T20:40:42.249Z · LW · GW

Attractors in Trains of Thought

This is a slightly extended version of my comment on Idea Black Holes, which I want to give a bit more visibility.

The prompt of an Idea Black Hole reminded me strongly of an old idea of mine. That activated a desire to reply, which led to a quick search for where I had written about it before, then to the realization that it wasn't so close. Then back to wanting to write about it, and here we are.

I have been thinking about the brain's way of creating a chain of thoughts as a dynamic process where a "current thought" moves around a continuous concept space and keeps spending much time in larger or smaller attractors. You know, one thought can lead to the next, and some thoughts keep coming back in slight variations. I'm illustrating this with the sentence above.

Examples of smaller temporary attractors are the current tasks one is working on. For example, me writing this text right now. It is any task you are focused on and keep getting back to after short distractions such as a sound or an impulse. I'm writing this post and continue doing so even after hearing my kids talk and quickly listening in or after scratching my head, also after larger distractions such as browsing the web (which may or may not end up being related to the writing). 

The thought "writing this article" is not a discrete thing but changes slightly with each letter typed and each small posture change. All of that can slightly influence the next word typed (like an LLM that has not only text tokens as inputs but all kinds of sense inputs). That's why I say that concept space is continuous (and very high-dimensional).

An example of a medium size attractor is a mood such as anger about something, that keeps influencing all kinds of behaviors. It is an attractor because the mood tends to reinforce itself. Another example is depression. If you are depressed you prefer things that keep you depressed. Scott Alexander has described depression as some kind of mental attractor. It requires a bigger change or a resolution of the underlying cause to get out of the attractor.

With the medium-sized attractors, it is more intuitive to see how the feedback on thoughts acts and thereby creates the attractor. For small attractors, you may say: How is that an attractor? Isn't it just a discrete unit of action we perform? But consider procrastination: people seem to feel that something is pulling them away from the task they want to do or should do, and instead toward some procrastination action, often a comfortable activity. That other activity is another attractor, or rather, both form a combined unstable attractor.

The biggest attractor is one's identity. Our thinking about what we are and what we want to do. I think this one results from two forces combining or being balanced: 

  1. The satisfaction of needs. Overall and over the longer term, the brain has learned a very large pattern of behaviors that satisfy the sum of all needs (not perfectly, but as well as it has managed so far). Diverging from this attractor basin will lead to impulses that pull back to it.
  2. The feedback from others. Positive and negative feedback from other people and the environment overall contributes to this. The brain has learned to anticipate this feedback ("internalized it") and creates impulses that keep us in positive states. As the brain prefers simpler patterns, this likely takes the form of a single attractor.

We are not permanently in the same attractor, even if overall it "pulls" our thoughts back, because a) our bodies and their states (hunger, tiredness, ...) and b) our physical environment (physical location and other people) change. Both exert a strong and varying influence and put us closer to one attractor state or another.

Society at large influences these attractors strongly, most prominently through the media. Meditation, on the other hand, reduces outside influence and kind of allows you to create your own very strong attractor states.

More examples of attractor states are left as exercises for the reader.
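
As a toy illustration of the dynamic I have in mind (purely illustrative, not a model of the brain; the potential and the noise level are arbitrary choices): a point doing noisy gradient descent on a double-well potential spends most of its time near one of the two attractors, small perturbations move it within a basin, and only a larger kick switches basins:

```python
import random

def grad(x: float) -> float:
    """Gradient of the double-well potential V(x) = (x**2 - 1)**2;
    the two attractors sit at x = -1 and x = +1."""
    return 4 * x * (x * x - 1)

x = 0.1                # start near the unstable ridge between the basins
dt, noise = 0.01, 0.7  # step size and noise strength (arbitrary)
trajectory = []
for _ in range(20_000):
    x += -grad(x) * dt + noise * random.gauss(0, 1) * dt ** 0.5
    trajectory.append(x)
# Most of `trajectory` clusters near -1 or +1; basin switches are
# occasional and correspond to the "bigger change" needed to leave
# an attractor.
```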

Comment by Gunnar_Zarncke on SAE reconstruction errors are (empirically) pathological · 2024-03-29T19:05:35.346Z · LW · GW

I have difficulty following all of these metrics without being able to relate them to the "concepts" being represented and measured. You say: 

What I take from this plot is that the gap has pretty high variance. It is not the case that every SAE substitution is kind-of-bad, but rather there are both many SAE reconstructions that are around the expectation and many reconstructions that are very bad.

But it is hard to judge what is a high variance and whether the bad reconstructions are so because of systematic error or insufficient stability of the model or something else.

The only thing that helps me get an intuition about the concepts is the table with the top 20 tokens by average KL gap. These tokens seem rare? I think it is plausible that the model doesn't "know" much about them, and that might lead to the larger errors? It's hard to say without more information which tokens, representing which concepts, are affected.

Comment by Gunnar_Zarncke on Idea black holes · 2024-03-29T09:38:34.292Z · LW · GW

Your prompt of an idea black hole reminded me strongly of an old idea of mine. That activated a desire to reply, which led to a quick search for where I had written about it before, then to the realization that it wasn't so close. Then back to wanting to reply, and here we are.

I have been thinking about thought processes as a dynamic process where a "current thought" moves around a continuous concept space and keeps spending much time in larger or smaller attractors. You know, one thought can lead to the next, and some thoughts keep coming back in slight variations, as illustrated by the first sentence.

Examples of smaller temporary attractors are the current task one is working on right now and that one keeps getting back to after short distractions such as a sound or an impulse. Such as writing this post and continuing it after hearing my kids talk and quickly listening in or after scratching my head. The thought "writing this article" is not a discrete thing but changes slightly with each letter typed and each small posture change. All of that can slightly influence the next word typed (like an LLM that has not only text tokens as inputs but all kinds of sense inputs). That's why I say that concept space is continuous (and very high-dimensional).

An example of a medium size attractor is a mood such as anger about something, that keeps influencing all kinds of behaviors and that tends to reinforce itself. Scott Alexander has described depression as some kind of mental attractor.   

The biggest attractor is one's identity: one's thinking about what one is and what one wants to do.

We are not permanently in the same attractor, even if overall it "pulls" our thoughts back, because a) our bodies and their states (hunger, tiredness, ...) and b) our physical environment (physical location and other people) change. Both exert a strong and varying influence and put us closer to one attractor state or another.

Society at large influences these attractors strongly, most prominently through the media. Meditation, on the other hand, reduces outside influence and kind of allows you to create your own very strong attractor states.

Your idea black holes sound very much like larger instances of these attractors, especially if they are shared by multiple people and reinforced by the shared environment.

Comment by Gunnar_Zarncke on Many people lack basic scientific knowledge · 2024-03-29T08:54:36.612Z · LW · GW

Forget about science. Most people can't really use computers.

What Most Users Can Do

(Skill level 1), [60% of users]

- Little or no navigation required to access the information or commands required to solve the problem

- Few steps and a minimal number of operators

- Problem resolution requiring the respondent to apply explicit criteria only (no implicit criteria)

- Few monitoring demands (e.g., having to check one’s progress)

- Identifying content and operators done through simple match

- No need to contrast or integrate information

https://www.nngroup.com/articles/computer-skill-levels/

http://www.oecd-ilibrary.org/education/skills-matter_9789264258051-en 

Data from the OECD study of technical skills show the distribution among skill levels across countries as well as the average for all OECD countries.

Comment by Gunnar_Zarncke on Some Things That Increase Blood Flow to the Brain · 2024-03-29T08:36:36.213Z · LW · GW

Note that vasodilators can reduce blood flow to the brain: because they potentially work on all blood vessels, not only those in the brain, dilating vessels elsewhere can lower blood pressure and divert flow away from the brain.