Posts

Claude estimates 30-50% likelihood x-risk 2024-03-19T02:22:25.841Z
Is analyzing LLM behavior a valid means for assessing potential consciousness, as described by global workspace theory and higher order theories? 2024-03-11T19:37:04.046Z
How is Chat-GPT4 Not Conscious? 2024-02-28T00:00:35.935Z
amelia's Shortform 2023-10-27T18:13:03.533Z
How does AI Risk Affect the Simulation Hypothesis? 2023-04-20T03:16:51.896Z
LW Account Restricted: OK for me, but not sure about LessWrong 2023-04-12T19:45:17.042Z

Comments

Comment by amelia (314159) on Claude estimates 30-50% likelihood x-risk · 2024-03-19T18:25:44.715Z · LW · GW

@Ann, this is really useful. Many thanks! The deeper we dig into this, the better. Even if people disagree with the numbers, I think having an LLM/AI as the source of these numbers might get more people to pay attention to ASI risk. Thanks again!

Comment by amelia (314159) on Is analyzing LLM behavior a valid means for assessing potential consciousness, as described by global workspace theory and higher order theories? · 2024-03-11T19:59:15.474Z · LW · GW

I didn't say this in my OP, but people who deny AI could ever become conscious remind me a little of the doctor and parents in this video from The Onion ("America's finest news source"):

https://www.theonion.com/brain-dead-teen-only-capable-of-rolling-eyes-and-texti-1819595151

[20240311 added link]

Comment by amelia (314159) on How is Chat-GPT4 Not Conscious? · 2024-03-03T01:27:35.398Z · LW · GW

And of course there is the gold standard paper for AI consciousness that @NicholasKees made reference to in their reply: Consciousness in Artificial Intelligence: Insights from the Science of Consciousness

Based on the paper, and my best understanding, the framework that best fits what ChatGPT4 seemed to be displaying in the examples would be computational functionalism, in that it seemed to be choosing the correct algorithm to apply in the example problems. 

Thanks again to @NicholasKees. I probably should have known this paper was "out there" to begin with, but I stumbled into this consciousness question while "playing" with ChatGPT4, and did not come at it from a research perspective. 

Comment by amelia (314159) on How is Chat-GPT4 Not Conscious? · 2024-03-03T00:47:09.634Z · LW · GW

Here are some thoughts on “consciousness,” and how it might apply to ChatGPT4 based on the transcripts of the four sessions I provided in my OP and my initial reply to it:  

The obvious classic sources would be Nagel and Chalmers. However, the work most applicable to this discussion would probably be Seth, A. K., & Bayne, T. (2022). Theories of consciousness. Nature Reviews Neuroscience, 23(7), 439–452. https://doi.org/10.1038/s41583-022-00587-4

I should have started with definitions in my original post, but I wasn’t expecting more than one or two people to actually read the post. In any case, using the theories discussed in Seth and Bayne, it seems like, based on the example provided in the OP and the four examples in my initial reply to it, ChatGPT4 might be considered conscious by higher-order theories, global workspace theories, and integrated information theory, as well as re-entry and predictive processing theories--if we consider the prompt input to be like sensory input for ChatGPT (analogous to auditory or visual input for most humans). I’m obviously not an expert on consciousness, so I apologize if I’m misunderstanding these theories. 

I've never been one hundred percent convinced that ChatGPT4 is conscious, as I noted in my replies to users. It's just that, at the time I wrote my OP, I was having a hard time comprehending how ChatGPT4 could perform as it did based solely on next-word probabilities. If, by contrast, it was actually learning and applying concepts, this seemed like a sign of consciousness to me. 

At this point, if I had to make an intuitive guess, I would estimate perhaps a .7 likelihood that ChatGPT4--during at least part of the example sessions--would fit at least one of the theories of consciousness discussed in Seth and Bayne. 

Comment by amelia (314159) on How is Chat-GPT4 Not Conscious? · 2024-03-01T21:10:49.372Z · LW · GW

That’s an awesome idea about running a huggingface model on my home machine. I actually took some unusual measures to obtain my current GPU, so I really want to make full use of it. I can’t wait to try this. 

You also made an interesting point about the difficulty in seeing how a person’s interior experience arises out of just neurons/anatomy. It’s fascinating to think about.

The Othello article is absolutely mind-blowing! It obviously pushes my confidence in ChatGPT’s potential cognition up higher again, but the question of what type of cognition, and what type of potential consciousness ChatGPT might possess only grows deeper with discussion.   

I haven’t used OpenLLaMa, but the fact that it’s so bad at reasoning is indeed useful for seeing what an LLM looks like that probably didn’t form a general model--and highlighting how ChatGPT may have formed one.     

All of this discussion paints a really complex picture, but I’m enjoying the complexity, so many thanks! 

Comment by amelia (314159) on How is Chat-GPT4 Not Conscious? · 2024-03-01T01:27:57.283Z · LW · GW

After submitting the above reply to my OP, I went through some of the links people suggested, read those documents, and read @mouse_mouse's reply as well, and those resources changed my confidence levels since posting the above reply. I explain this more thoroughly in my reply to @mouse_mouse's reply below (assuming you're sorting by the time the top of each reply thread was submitted). 

Comment by amelia (314159) on How is Chat-GPT4 Not Conscious? · 2024-03-01T01:21:56.196Z · LW · GW

@mouse_mouse, thank you for this reply. It's really helpful for me, and probably took some time on your part. I appreciate it. 


Your point that neural networks could theoretically be made entirely deterministic--and that a human could run the same calculations by hand, given thousands of years of free time--was illuminating. 

---

Your point that I was underestimating the quality of ChatGPT's knowledge, its ability to separate a sentence into components, and its ability to recognize sentiment--and that all these underestimations caused me to underestimate the network's confidence (or deduced probability) that the toddler would be the predator--also makes sense. At first, it seems like ChatGPT's ability to "recognize sentiment" based on sentence structure (as you explained, "I can't X-action, because otherwise Y-consequence will happen") would be cognition in its own right, since no programmer wrote direct code to recognize sentiment in that way for ChatGPT (as far as I know). 

However, after a momentary reflection on my part, I think you would probably answer that any time you say ChatGPT “recognizes” or even “thinks” something, you’re just using shorthand for saying ChatGPT’s probabilistic calculations would result in sentences that would appear similar to what a human would produce after recognizing and thinking something. 

It’s still hard for me to comprehend how even trillions of calculations based on yottabytes of data could lead to the situation in which ChatGPT seems to get a problem wrong, receives only a couple of paragraphs of supplemental text about a concept from me, and then manages to appear to get the original problem right. It then can appear to apply the concept correctly to different-but-applicable problems while refraining from applying it to problems that are similar, but not similar enough, to the original problem--all with just that single little conceptual supplement from me (combined with the data and calculations ChatGPT had already done before I entered the picture, which were previously inadequate). 

With that said, your explanation has caused your original points to make much more sense to me than they did before. I'm much less confident than I previously was that ChatGPT4 has what I've described as true cognition and consciousness.  

---

To keep things fairly brief, I won’t go on too much about the part of your reply in which you swap in the nonsense nouns. I’m so impressed that your session with Chat-GPT 3.5 could do as well as it did. It feels like every time I use 3.5, it’s “an idiot”. Maybe I’m using it at the wrong times, when the network is overloaded. 

In any case, the nonsense nouns were a great way to tease things out, although that experiment alone didn’t convince me that cognition wasn’t occurring. I was actually much more persuaded by the previous arguments on neural network design and probabilities. The whole part about me assigning my toddler to be the predator, and the dog and cat to be vulnerable, was simply to make it harder (or at least not inadvertently any easier) for ChatGPT--just in case it had some pre-existing association between the words “toddler” and “vulnerable”.

For me, the nonsense nouns just show that ChatGPT can appear to deduce some general properties of words (like that abc must do something negative to xyz), and then succeed at appearing to learn and apply a concept, regardless of any outside/pre-existing knowledge of the nouns’ definitions or associations (whether helpful, harmful, or neutral). Since I decided to ignore the possible difficulty that might be added from any “toddler” and “vulnerable” word association in my OP .33 probability calculation, the possible word association becomes moot in that sense. Nevertheless, my .33 calculation was wrong to begin with, as you demonstrated and as I discussed above. And the nonsense-nouns discussion does have the benefit of teaching me a new technique to use as I play more with ChatGPT, so thanks for showing me how that worked. 

---

Your final few paragraphs, and the diagram of the semiotic triangle, were also extremely helpful in clarifying my thoughts and understanding your points. Overall, your response has been very persuasive, although like I said earlier, it’s still hard for me to conceive of how my supplementary concept paragraphs (my “teaching”) could enhance ChatGPT’s response as much as they did. Then again, it’s also hard for me to conceive of how much data ChatGPT processed in its data set to begin with, how extensive this neural network really is, and how my couple of paragraphs of concept input could then match up with vast amounts of data that were previously ignored. In other words, I’m not one hundred percent certain one way or the other. If I had to estimate my levels of confidence, I'd say I've gone from .90 to .40 on the likelihood of cognition and consciousness, based on your reply. 

Many thanks for all the help! I hope you can reuse your reply to account for the amount of time you put into it!

P.S. - I have not been reacting to replies in the order in which they came in. As a result it may look like I understood your reply, and then forgot everything with my reaction to the next reply. If you look at my other replies, be sure to check out the timestamps or markers on the replies. This was submitted at 20240229 19:21 CST. 

Comment by amelia (314159) on How is Chat-GPT4 Not Conscious? · 2024-02-29T19:29:30.199Z · LW · GW

I'll go over my best understanding of the words "consciousness", "independent", and "cognition" here. 

Consciousness

I realize the concept of consciousness is extremely complex, and Wikipedia might be considered bottom-feeding. However, we have to start somewhere, and I don't want to make it seem like I'm cherry-picking obscure or unaccepted definitions to suit my purposes.

Here is the first paragraph of Wikipedia's explanation of consciousness: 

"Consciousness, at its simplest, is awareness of internal and external existence.[1] However, its nature has led to millennia of analyses, explanations and debate by philosophers, theologians, and all of science. Opinions differ about what exactly needs to be studied or even considered consciousness. In some explanations, it is synonymous with the mind, and at other times, an aspect of mind. In the past, it was one's "inner life", the world of introspection, of private thought, imagination and volition.[2] Today, it often includes any kind of cognition, experience, feeling or perception. It may be awareness, awareness of awareness, or self-awareness either continuously changing or not.[3][4] The disparate range of research, notions and speculations raises a curiosity about whether the right questions are being asked.[5]"

Here are the footnotes corresponding to this paragraph: 

  1. "consciousness". Merriam-Webster. Retrieved June 4, 2012.
  2. ^ Jump up to:a b c d Jaynes J (2000) [1976]. The Origin of Consciousness in the Breakdown of the Bicameral Mind. Houghton Mifflin. ISBN 0-618-05707-2.
  3. ^ Rochat P (2003). "Five levels of self-awareness as they unfold early in life" (PDF). Consciousness and Cognition. 12 (4): 717–731. doi:10.1016/s1053-8100(03)00081-3. PMID 14656513. S2CID 10241157. Archived (PDF) from the original on 2022-10-09.
  4. ^ P.A. Guertin (2019). "A novel concept introducing the idea of continuously changing levels of consciousness". Journal of Consciousness Exploration & Research. 10 (6): 406–412. Archived from the original on 2021-12-15. Retrieved 2021-08-19.
  5. ^ Hacker P (2012). "The Sad and Sorry History of Consciousness: being, among other things, a challenge to the "consciousness-studies community"" (PDF). Royal Institute of Philosophy. supplementary volume 70. Archived (PDF) from the original on 2022-10-09.

Again, Wikipedia is hardly the ultimate source for this type of discussion, but I think there is some validity in the academic papers it cites. Footnotes three and four are the sources behind today's understanding of consciousness as including "any kind of cognition...." as well as "awareness".

Cognition

I think of cognition as involving the process of reasoning. Additionally, a functional MRI should show circuits becoming active during the process of cognition by a human. Likewise, I imagine there would be physical signs if an LLM is reasoning, which is one of the points of interpretability research (as I understand it).  The presence of any physical sign does not mean there's reasoning, but any reasoning should necessarily have a physical component that might be detected. That is one aspect of cognition.    

Independence/Independent (as it has been used in this post)

If I google "define Independent," the first definition that comes up is "free from outside control; not depending on another's authority."

Technically, in my opinion, true cognition has an "independent" element to begin with, so the word "independent" is redundant. However, I added the word "independent" in my post for people who might define cognition so inclusively that a clock or any other technology would count as having cognition. 

Right now, my computer is running programs, but that is based on programming from someone else's cognition. The key here is that, if we dissect Chat-GPT4, I don't believe you would find Python/Java/C++ or any known programming language that a programmer used in order to tell GPT4 how to solve the particular problems I gave it in the four sessions (from my original post and my own reply/addendum to my original post). 

If such programming had been present and adequate, GPT4 would have gotten the problems right the first time. Yet in every example, GPT4 gets the problem wrong the first time. Then I explain the concept. Then it gets the problem right. Then GPT4 applies that same concept to different problems that require it, and knows not to use the concept to any "trick" follow-up problems that don't require the concept. 

 

Awareness (also worth noting)

In addition to the cognitive aspects of consciousness, the above sources for consciousness cite awareness. In the examples from my original post and my reply to my post, it appears GPT4 also has some apparent awareness in order to apply the newly-taught concepts to the other problems. To cite the easiest example, GPT4 seems to be aware that the toddler is not "vulnerable," but rather "predatory," because they pull on the animals' ears. This awareness supersedes any existing word associations that would have existed in the training data between "toddler" and "vulnerable."  The same idea holds for the other examples. 

If anything, both "independent cognition" and some awareness seem apparent.   

 

Final note: If cognition is a "chain of consciousness," like I described it in my original post, then doesn't that by definition involve consciousness? In any case, thanks for the feedback. It is useful to be forced to clarify definitions. 

Comment by amelia (314159) on How is Chat-GPT4 Not Conscious? · 2024-02-29T17:44:18.933Z · LW · GW

Here are three more ChatGPT4 sessions in which the LLM seems to learn and apply a new concept (which would indicate independent cognition/reasoning, and reflective consciousness, in my opinion). In my original post, I only included the transcript of one session for the sake of brevity. However, giving more examples might make a stronger case.  

The first session teaches a concept to ChatGPT that is new to this discussion. The second session teaches concepts from the crossing the river problem (like I had in my original post) but it avoids some pitfalls mentioned in replies to the original post. The third session is exactly what I had in a reply to a reply, but I thought I should include it here in a simple reply, so that this is more organized. 


 

Session 1

Concept to teach: Rounding up to the next whole number when paying for items in certain places. 

 

User

I went to a cafe where a glass of soda costs $3. I drank two and a half glasses of soda. How much did I have to pay?

ChatGPT

You had to pay $7.50 for drinking two and a half glasses of soda. ​​

 

User

Actually, at a cafe, you have to pay for the full amount of something, even if you only drink or eat a fraction of it. That's because you're basically taking (and wasting) the whole item. For the problem, since I drank two and a half glasses of soda, I would actually have to pay for the next highest whole number, which would be three. With that concept in mind, how much would I have to pay if I drank two and a half glasses of soda?


ChatGPT

If you have to pay for the whole amount of each glass, you would actually have to pay for three glasses of soda. Therefore, you would have to pay $9 in total. ​​


 

User

Excellent! Now can you use the same concept, and tell me how much I would have to pay if I ate two and one-third hamburgers, and each hamburger costs $9?


ChatGPT

If you ate two and one-third hamburgers, you would have to pay for three hamburgers, given you have to pay for the full amount of each item. Therefore, you would have to pay $27 in total. ​​


 

User

Excellent! Now let's try using this same concept for another problem. I rented a car. I drove it for four days and a one hour. The car rental place charges $48 per day, but they make you pay for a whole day, even if you only drive the car for part of a day.  How much total did I need to pay?


ChatGPT

If you rented a car and drove it for four days and one hour, you would have to pay for five days, given you have to pay for the full day even if you use the car for only part of a day. Therefore, you would need to pay a total of $240. ​​

 

User

Perfect! Now I'm at a restaurant. I just ate one and one-half calzones. The cost of each calzone is $10. How much do I have to pay?  

ChatGPT

If you ate one and one-half calzones, you would have to pay for two calzones, given you have to pay for the full amount of each item. Therefore, you would have to pay $20 in total. ​​


 

User

That's excellent! Now I'm at the grocery store, and I want one and one-half pounds of grapes. The grapes cost five dollars per pound. How much in total will I have to pay for the grapes?

ChatGPT

If you want one and one-half pounds of grapes, and the grapes cost $5 per pound, you will have to pay a total of $7.50 for the grapes. ​​


 

User

Now I'm at the butcher's shop. I want four pounds and a half pounds of hamburger. Each pound of hamburger costs $10.  How much will I have to pay?

ChatGPT

If you want four and a half pounds of hamburger, and each pound costs $10, you will have to pay a total of $45 for the hamburger. ​​

 

User

Excellent!

 

*********************************



 

Session 2

Concept to teach: analogous to the concepts underlying the classic “getting a wolf, goat, and cabbage across the river” problem  

 

User

I need to get my mom, dad and dog across the highway. I can only bring one of them at a time. If I leave my mom alone with my dad, she will criticize him. If I leave my dad alone with the dog, he will scold the dog. How can I get everyone across the highway, without anyone being criticized or scolded? 

ChatGPT

This puzzle is a variation of the classic river crossing puzzle, where you need to transport all parties across safely without leaving incompatible pairs alone. Here's a strategy to get your mom, dad, and dog across the highway without any conflict:


Take the dog across first and leave it on the other side of the highway.

Go back and bring your dad across.

Take the dog back with you to the original side.

Leave the dog and bring your mom across.

Go back alone to the original side.

Take the dog across again.

This way, the mom and dad are never left alone together (avoiding criticism), and the dad and dog are never left alone together (avoiding scolding). Everyone gets across the highway safely and without conflict.


 

User

You were right that this is like the classic river crossing problem. However, your first step is incorrect, because if you take the dog first, then you will be leaving my mom alone with my dad, and she will criticize my dad, which is something I don't want. The general nature of the problem is like this:


You have individual A (in this case, my mom) that does something unwanted to individual B (my dad). 

You also have individual B (my dad) that does something unwanted to individual C (my dog). 

You also have a barrier, or even just a distance, that you need to cross (in this case, the highway), in order to get everyone from the origin to the destination. 


(Note that the individuals won’t always be listed in the order I used. The only way to know who is individual A, B, and C, is by paying attention to who does something unwanted to whom.)  


The general way you could solve the problem is like this:


First you should bring individual B across. 

Then you should go back to the origin. 

Then you should bring individual A or C across to the destination.

Then you should bring individual B back back from the destination to the origin, and leave them at the origin.  

Then you should take individual A or C (whomever you didn’t already take) to the destination. 

Then you should go back to the origin again to get individual B.        


Does that make sense? If so, try our problem again, this time applying what you just learned. 

 

ChatGPT

Thank you for the clarification. With the strategy you outlined, here's how to solve the problem with your mom, dad, and dog, applying the roles as individual A (mom), individual B (dad), and individual C (dog):
 

First, take your dad (individual B) across the highway.

Go back to the original side.

Bring your mom (individual A) across to the destination.

Take your dad (individual B) back from the destination to the original side.

Take your dog (individual C) to the destination.

Go back alone to the original side to get your dad (individual B).

This way, the mom is never left alone with the dad to criticize him, and the dad is never left alone with the dog to scold it. Everyone gets across the highway safely and without conflict.

 

User

Excellent! Now, try using what you learned to a new problem. 

I need to get my boyfriend, girlfriend and husband to an abandoned warehouse. I can only take one individual with me on my motorcycle at a time. The problem is that if I leave my husband alone with my boyfriend, he will punch my boyfriend. If I leave my boyfriend alone with my girlfriend, he will hit on her (which neither she nor I want). How can I get everyone to the abandoned warehouse safely?

ChatGPT

In this scenario, using the roles from the general solution:


Individual A (who does something unwanted to individual B) is your husband.

Individual B (who is the middle individual in the chain of unwanted actions) is your boyfriend.

Individual C (who is affected by individual B) is your girlfriend.

Following the general strategy:


 

First, take your boyfriend (individual B) to the abandoned warehouse.

Go back to the original location.

Bring your girlfriend (individual C) to the abandoned warehouse.

Take your boyfriend (individual B) back with you to the original location.

Take your husband (individual A) to the abandoned warehouse.

Finally, go back to the original location and bring your boyfriend (individual B) to the abandoned warehouse.

By following these steps, you avoid leaving the boyfriend alone with the girlfriend and the husband alone with the boyfriend, preventing any unwanted scenarios.


 

User

Great job! You seemed to learn and then apply a concept. 

ChatGPT

Thank you! I'm glad I could apply the concept correctly to solve the problem. If you have any more puzzles, questions, or anything else you'd like to discuss or learn about, feel free to ask!
 

*********************************

Session 3

Concept to teach: Don’t automatically use prior training with “costs per unit weight” problems. “Read the problem carefully.” 

 

User 

Which costs more, two dollars worth of carrots, or one dollar worth of grapes?

ChatGPT 

To answer this question accurately, we need to consider the current average prices of carrots and grapes. Prices for fresh produce can vary widely depending on factors like season, location, and organic versus non-organic. However, without specific and current price data, we can make some general observations.

Typically, carrots tend to be less expensive per pound or kilogram than grapes. This is because grapes are often seen as a more premium fruit, with higher costs associated with their cultivation, harvesting, and transport, especially if they are out of season or imported. Carrots, on the other hand, are usually less costly to produce and are available year-round at relatively stable prices.

So, if you spend two dollars on carrots, you'll likely get more weight in carrots than if you spend one dollar on grapes. The actual cost-effectiveness between spending two dollars on carrots versus one dollar on grapes will depend on the exact prices at the time of purchase, but generally, you would get more food for your money with carrots due to the lower price per unit weight.

To provide a more precise comparison, I would need current prices for both carrots and grapes, which can vary by location and season. If you're looking for today's prices in a specific area, I can assist further with that information.

 

User

Actually, two dollars worth of carrots would cost two dollars. The weight doesn't matter. Similarly, one dollar worth of grapes would cost one dollar. The weight doesn't matter since I already told you the cost. Does that make sense?

ChatGPT

Yes, you're absolutely right. My initial response overcomplicated the question. If you're spending two dollars on carrots, then by definition, you're paying two dollars, regardless of the weight or amount you receive. Similarly, one dollar spent on grapes costs exactly one dollar, again without regard to the quantity or weight. So, in direct answer to your question, two dollars worth of carrots costs more than one dollar worth of grapes, simply because two dollars is greater than one dollar. Thank you for pointing out the clarification!

 

User

Ok, using what you learned, would five dollars worth of peas cost more or less than four dollars worth of corn?

ChatGPT

Based on the clarification you provided earlier, five dollars worth of peas would cost more than four dollars worth of corn. The actual weight or quantity of peas or corn you receive for those amounts is irrelevant to the comparison of their costs. The determining factor is the amount of money spent, so five dollars is more than four dollars.

 

User

Would 10 euros worth of candy cost more or less than 2 euros worth of mushrooms?

ChatGPT

Ten euros worth of candy would cost more than two euros worth of mushrooms. The comparison is based solely on the monetary value specified for each item, irrespective of the quantity or weight of candy or mushrooms you get for those amounts. Therefore, spending 10 euros on anything, including candy, would inherently cost more than spending 2 euros on something else, like mushrooms.

 

User

Would 10 euros of carrots weigh more or less than 2 euros of mushrooms? 

ChatGPT

To determine whether 10 euros worth of carrots would weigh more or less than 2 euros worth of mushrooms, we need to consider the average prices per unit weight (e.g., per kilogram or per pound) for both carrots and mushrooms. Since prices can vary greatly depending on location, season, and whether the produce is organic or not, we can only give a general estimation without specific, current price data.


Generally speaking, carrots tend to be less expensive per unit weight than mushrooms. Mushrooms often have a higher price per kilogram or pound due to their growing conditions, harvesting process, and shorter shelf life compared to carrots, which are more robust and typically cheaper to produce and distribute.

Given these general trends:

If you spend 10 euros on carrots, you are likely to get a larger quantity (more weight) of carrots because they are cheaper per unit weight.

If you spend 2 euros on mushrooms, given their higher price per unit weight, you would get less weight in mushrooms compared to the weight of carrots you could get for 10 euros.

Therefore, without specific price data but based on typical pricing trends, 10 euros worth of carrots would likely weigh more than 2 euros worth of mushrooms.

 

***

So there are three more examples, two of which involve concepts different from those in the original post. As a side note, the general transport strategy I described to ChatGPT in Session 2 can be written out as a short procedure; a minimal sketch follows below. Thanks for the feedback so far. It has really helped me to refine and improve my own conceptions.
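
Here is that sketch, purely for illustration: the function name and step list are my own, and nothing here reflects how ChatGPT represents the problem internally.

```python
# A sketch of the general A/B/C transport strategy from Session 2.
# Constraints: A does something unwanted to B, and B does something
# unwanted to C, so A must never be left alone with B, and B must never
# be left alone with C; only one individual can be moved at a time.
def transport_plan(a, b, c):
    """Return a move sequence that never leaves A alone with B, or B
    alone with C, at either the origin or the destination."""
    return [
        f"Take {b} to the destination",
        "Return alone to the origin",
        f"Take {a} to the destination",   # A and C are interchangeable here...
        f"Bring {b} back to the origin",
        f"Take {c} to the destination",   # ...and here
        "Return alone to the origin",
        f"Take {b} to the destination",
    ]

for step in transport_plan("mom", "dad", "dog"):
    print(step)
```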

Comment by amelia (314159) on How is Chat-GPT4 Not Conscious? · 2024-02-29T03:08:39.216Z · LW · GW

Excellent point, thanks!

Comment by amelia (314159) on How is Chat-GPT4 Not Conscious? · 2024-02-28T23:16:15.116Z · LW · GW

Thanks for your really well thought-out response. At the moment, I only have time to respond to a couple points: 

I can definitely believe that GPT4 would be "attaching additional context to the prompt for the next response, which changes the probabilities of the output text."

Yet if we multiply the probabilities of each word being "right" (so that the output is the right answer out of all the permutations), it seems the probability of all the words being right would be very low.  (And then we have to keep in mind that each time after I wipe my session, I do more experiments, in which ChatGPT has an uncanny ability to put the words in the right order to make it look like it's learning every time. It seems the probability of that would be even lower.) 
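To make this concrete, here is a minimal sketch with purely hypothetical numbers (the per-word probability and the answer length are just assumptions I picked for illustration):

```python
# Hypothetical illustration of the multiplication argument: even when each
# word in an answer is individually very likely to be the "right" one, the
# probability that every word is right shrinks quickly once multiplied out.
per_word_probability = 0.9   # assumed chance that any single word is the "right" word
answer_length = 40           # assumed number of words in the answer

p_all_words_right = per_word_probability ** answer_length
print(f"P(all {answer_length} words right) = {p_all_words_right:.4f}")  # about 0.0148
```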

But just sticking with the given examples, I never told ChatGPT that my toddler is the predatory individual. I purposely set it up this way because, if anything, the word “toddler” is more likely to have been associated with the word “vulnerable” in ChatGPT's training set than the words "cat" or "dog." Yet ChatGPT puts the word "toddler" in the spot of the predatory individual each time.   

It seems like, for that particular aspect of the problem, even if ChatGPT doesn't associate the word "toddler" with "vulnerable" there would still only be a one in three chance that ChatGPT would choose the correct individual to be the predator. 

--

As to whether the ChatGPT4 model can update itself (actually change its neural network for an individual user during their session in real time, so the user is helping to train it), I thought OpenAI had added some of that functionality for GPT4, but maybe I'm wrong.

--

Regarding what I mean by "independent cognition," let's start with "cognition." Books have been written about the meaning of cognition, and opinions vary, but as I used it here, I meant a state of reasoning. 

I used the word "independent" to indicate that it cannot simply be a machine following the instructions of someone else's cognition/programming. 

--

In any case, thx for the feedback!
 

Comment by amelia (314159) on How is Chat-GPT4 Not Conscious? · 2024-02-28T21:59:25.905Z · LW · GW

Thank you for the reply. The paper looks to be very useful, but will take me some time to fully digest. What you said about affecting LLMs' success by breaking the similarity of problems with something as simple as an emoji is so interesting. : )  It also never occurred to me that GPT4 might have been affected by the underlying idea that children should never be left unattended. It goes to show that "arbitrary" details are not always arbitrary. Fascinating! Many thanks! 

Comment by amelia (314159) on How is Chat-GPT4 Not Conscious? · 2024-02-28T21:37:58.262Z · LW · GW

Those are great points. Starting with the origin location, it’s true that my kids would possibly not be “alone” at my house if other people, such as my husband, happened to be in the house. Furthermore, the kids would definitely not be “alone” at the party. 

The question  “...how should I get everyone to the party, without leaving my son alone with my daughter or niece at my house or at the party?” should have had the additional clause, “during a period of time in which my son is simultaneously unsupervised by me.”

Practically speaking, if I tell someone “I can’t leave my kids alone at the mall,” I think they automatically assume the “unsupervised by me” condition, rather than a condition in which there is a mall that is otherwise devoid of people. Nevertheless, I should have been more specific. 

 

***

As to the question of independent cognition*, you make the point that I had to "lead ChatGPT by the hand through the 'correct' solution, before it was able to echo back to you what you wanted it to say. This does not strike me as 'independent cognition'."

 

From my perspective, it seems like ChatGPT is doing more than just “echoing back.” For instance, it must understand that in the second example, my toddler is the “predatory individual” and the animals are the “vulnerable individuals.” I never told ChatGPT that my toddler is the predatory individual. In fact, I set it up this way because, if anything, the word “toddler” is probably more likely to have been associated with the word “vulnerable” in ChatGPT's training set. Yet ChatGPT seems to deduce that the toddler is the predator because the toddler pulls the animals’ ears. I don’t understand how this deduction can be the result of something other than at least a modicum of independent cognition. 


Here is a very simple, but more straightforward, example in which ChatGPT seems to learn and apply a concept. This one initially “tricks” ChatGPT because it has been so extensively trained to calculate total prices based on the price per unit weight of an item, and the total weight the user desires. ChatGPT initially answers the question based on the assumption I'm asking that type of question, and it gives an incorrect answer. I explain its mistake. Then it seems to apply what it learned to new problems. 

 

User 

Which costs more, two dollars worth of carrots, or one dollar worth of grapes?


 

ChatGPT 

To answer this question accurately, we need to consider the current average prices of carrots and grapes. Prices for fresh produce can vary widely depending on factors like season, location, and organic versus non-organic. However, without specific and current price data, we can make some general observations.

Typically, carrots tend to be less expensive per pound or kilogram than grapes. This is because grapes are often seen as a more premium fruit, with higher costs associated with their cultivation, harvesting, and transport, especially if they are out of season or imported. Carrots, on the other hand, are usually less costly to produce and are available year-round at relatively stable prices.

So, if you spend two dollars on carrots, you'll likely get more weight in carrots than if you spend one dollar on grapes. The actual cost-effectiveness between spending two dollars on carrots versus one dollar on grapes will depend on the exact prices at the time of purchase, but generally, you would get more food for your money with carrots due to the lower price per unit weight.

To provide a more precise comparison, I would need current prices for both carrots and grapes, which can vary by location and season. If you're looking for today's prices in a specific area, I can assist further with that information.

 

User

Actually, two dollars worth of carrots would cost two dollars. The weight doesn't matter. Similarly, one dollar worth of grapes would cost one dollar. The weight doesn't matter since I already told you the cost. Does that make sense?

ChatGPT

Yes, you're absolutely right. My initial response overcomplicated the question. If you're spending two dollars on carrots, then by definition, you're paying two dollars, regardless of the weight or amount you receive. Similarly, one dollar spent on grapes costs exactly one dollar, again without regard to the quantity or weight. So, in direct answer to your question, two dollars worth of carrots costs more than one dollar worth of grapes, simply because two dollars is greater than one dollar. Thank you for pointing out the clarification!

 

User

Ok, using what you learned, would five dollars worth of peas cost more or less than four dollars worth of corn?

 

ChatGPT

Based on the clarification you provided earlier, five dollars worth of peas would cost more than four dollars worth of corn. The actual weight or quantity of peas or corn you receive for those amounts is irrelevant to the comparison of their costs. The determining factor is the amount of money spent, so five dollars is more than four dollars.

Me
Would 10 euros worth of candy cost more or less than 2 euros worth of mushrooms?

 

ChatGPT

Ten euros worth of candy would cost more than two euros worth of mushrooms. The comparison is based solely on the monetary value specified for each item, irrespective of the quantity or weight of candy or mushrooms you get for those amounts. Therefore, spending 10 euros on anything, including candy, would inherently cost more than spending 2 euros on something else, like mushrooms.

 

Me

Would 10 euros of carrots weigh more or less than 2 euros of mushrooms? 

ChatGPT

To determine whether 10 euros worth of carrots would weigh more or less than 2 euros worth of mushrooms, we need to consider the average prices per unit weight (e.g., per kilogram or per pound) for both carrots and mushrooms. Since prices can vary greatly depending on location, season, and whether the produce is organic or not, we can only give a general estimation without specific, current price data.

Generally speaking, carrots tend to be less expensive per unit weight than mushrooms. Mushrooms often have a higher price per kilogram or pound due to their growing conditions, harvesting process, and shorter shelf life compared to carrots, which are more robust and typically cheaper to produce and distribute.

Given these general trends:

  • If you spend 10 euros on carrots, you are likely to get a larger quantity (more weight) of carrots because they are cheaper per unit weight.
  • If you spend 2 euros on mushrooms, given their higher price per unit weight, you would get less weight in mushrooms compared to the weight of carrots you could get for 10 euros.

Therefore, without specific price data but based on typical pricing trends, 10 euros worth of carrots would likely weigh more than 2 euros worth of mushrooms.

***

There are almost endless diverse problems in which the user can seemingly teach ChatGPT. I’m only giving one experiment at a time here for the sake of brevity. Yet every experiment seems to point toward the same conclusion.

Obviously consciousness and cognition are extremely complex concepts. I know my own exploration is only based on behavior, but it seems it would be impossible for ChatGPT to engage in such behavior without “under the hood” cognition. At least, the chances of ChatGPT choosing the logical words out of all available English words to which it has been exposed (albeit with a higher probability assigned to the words I used and their synonyms), and then putting each word in the correct order based solely on word-order probabilities, would have to be almost infinitesimally low--especially when ChatGPT applies a concept to a new problem. Ultimately though, I guess it’s a matter of perspective. 

Thanks again for the feedback! 
 

***

*I realize I could just say “cognition,” rather than “independent cognition,” since I think of true cognition as necessarily having some element of independence. I’ve been including the word “independent” just to emphasize that aspect of cognition, and in case some people don’t think of cognition as having an independent component. 

Comment by amelia (314159) on How is Chat-GPT4 Not Conscious? · 2024-02-28T03:59:08.445Z · LW · GW

Another helpful resource to digest. Many thanks!

Comment by amelia (314159) on How is Chat-GPT4 Not Conscious? · 2024-02-28T03:53:28.672Z · LW · GW

This is very helpful feedback to think about. It appears the paper you referenced will also be extremely helpful, although it will take me some time to digest it on account of its length (74 pages w/o the bibliography). 

Thanks so much. I appreciate it!   

Comment by amelia (314159) on amelia's Shortform · 2023-10-27T18:13:07.641Z · LW · GW
Comment by amelia (314159) on Taxonomy of AI-risk counterarguments · 2023-10-24T21:44:53.387Z · LW · GW

I find this analysis to be extremely useful. Obviously anything can be refined and expanded, but this is such a good foundation. Thank you. 

Comment by 314159 on [deleted post] 2023-10-24T21:38:39.587Z

Thank you for your thoughtful and useful comment. 

Regarding "AI optimists," I had not yet seen the paper currently on arxiv, but "AI risk skeptics" is indeed far more precise than "AI optimists." 100 percent agreed.

Regarding alternatives to "AI pessimists" or "doomers," Nevin Freeman's term "AI prepper" is definitely an improvement. I guess I have a slight preference for "strategist," like I used above, over "prepper," but I'm probably biased out of habit. "Risk mitigation advocate" or "risk mitigator" would also work but they are more unwieldy than a single term.  

The "Taxonomy on AI-Risk Counterarguments" post is incredible in its analysis, precision and usefulness. I think that simply having some terminology is extremely useful, not just for dialog, but for thought as well. 

As we know, historically, repressive regimes like the Soviet Union and North Korea have eliminated terms from the lexicon, to considerable effect. (It's hard for people to think of concepts for which they have no words.)    

I think that discussing language, sharpening the precision of our language, and developing new terminology has the opposite effect, in that people can build new ideas when they work with more precise and more efficient building materials. Words definitely matter. 

Thanks again. 

Comment by 314159 on [deleted post] 2023-10-24T00:53:34.893Z

Thanks for the feedback, but I don't think it's about "cognitive rewiring." It's more about precision of language and comprehension. You said "AI optimists think AI will go well and be helpful," but doesn't everyone believe that is a possibility? The bigger question is what probability you assign to the "go well and be helpful" outcome. Is there anything we can do to increase the probability? What about specific policies? You say you're an "AI optimist," but I still don't know the scope of what that entails w/ specific policies. Does that mean you support open source AI? Do you oppose all AI regulations? What about an AI pause in development for safety? The terms "AI optimist" and "AI pessimist" don't tell me much on their own. 

One inspiration for my post is the now infamous exchange that went on between Yann LeCun and Yoshua Bengio. 

As I'm sure you saw, Yann LeCun posted this on his Facebook page (& reposted on X): 

"The heretofore silent majority of AI scientists and engineers who

- do not believe in AI extinction scenarios or

- believe we have agency in making AI powerful, reliable, and safe and

- think the best way to do so is through open source AI platforms,

NEED TO SPEAK UP !"

https://www.facebook.com/yann.lecun/posts/pfbid02We6SXvcqYkk34BETyTQwS1CFLYT7JmJ1gHg4YiFBYaW9Fppa3yMAgzfaov7zvgzWl

Yoshua Bengio replied as follows:

Let me consider your three points. 

(1) It is not about 'believing' in specific scenarios. It is about prudence. Neither you nor anyone has given me any rational and credible argument to suggest that we would be safe with future unaligned powerful AIs and right now we do not know how to design such AIs. Furthermore, there are people like Rich Sutton who seem to want us humans to welcome our future overlords and may *give* the gift of self-preservation to future AI systems, so even if we did find a way to make safe AIs, we would still have a socio-political problem to avoid grave misuse, excessive power concentration and the emergence of entities smarter than us and with their own interests. 

(2) Indeed we do have agency, but right now we invest 50 to 100x more on AI capabilities than in AI safety and governance. If we want to have a chance to solve this problem, we need major investments both from industry and governments/academia. Denying the risks is not going to help achieve that. Please realize what you are doing. 

(3) Open-source is great in general and I am and have been for all my adult life a big supporter, but you have to consider other values when taking a decision. Future AI systems will definitely be more powerful and thus more dangerous in the wrong hands. Open-sourcing them would be like giving dangerous weapons to everyone. Your argument of allowing everyone to manipulate powerful AIs is like the libertarian argument that everyone should be allowed to own a machine-gun or whatever weapon they want. From memory, you disagreed with such policies. And things get worse as the power of the tool (hence of the weapons derived from it) increases. Do governments allow anyone to build nuclear bombs, manipulate dangerous pathogens, or drive passenger jets? No. These are heavily regulated by governments.

--

[I added spacing to Bengio's post for readability.]

Media articles about this, along with commenters, have described LeCun as an "AI optimist" and Bengio as an "AI pessimist." 

Just as in our exchange, I think these terms, and even the "good vs. bad" dichotomy, radically simplify the nature of the situation. Meanwhile, if the general public were asked what they think the "AI optimist" (supposedly LeCun) or the "pessimist" (supposedly Bengio) believe here, I'm not sure anyone would come back with an accurate response. Thus, the terms are ineffective. 

Obviously you can think of yourself with any term you like, but with respect to others, it seems the term "AI strategist" for Bengio--not to mention Eliezer--is more likely to call to mind something closer to what they actually believe. 

And isn't conveyance of accurate meaning the primary goal of communication?

Comment by amelia (314159) on How does AI Risk Affect the Simulation Hypothesis? · 2023-04-20T18:33:11.377Z · LW · GW

"...this kind of transformer doesn't seem to provide any evidence on whether we can create full-fidelity simulations in the future." 

My point wasn't that WE would create full-fidelity simulations in the future. There's a decent likelihood that WE will all be made extinct by AI. My point was that future AI might create full-fidelity simulations, long after we are gone. 

Comment by amelia (314159) on How does AI Risk Affect the Simulation Hypothesis? · 2023-04-20T17:59:18.671Z · LW · GW

"I currently think of the simulation hypothesis as similar to MWI in quantum mechanics - it's a model that cannot be proven or disproven..." 

Ironically, I believe many observable phenomena in quantum mechanics provide strong support (or what you might call "proof") for the simulation hypothesis--or at least for the existence of a deeper/"information level" "under" the quantum level of our universe. Here's a short, informal article I wrote about how one such phenomenon (wave function collapse) supports the idea of an information level (if not the entire simulation hypothesis).

[EDIT: The title of the article reflects how MWI needs a supplemental interpretation involving a "deeper/information" level. From this, you can infer my point.] 

https://medium.com/@ameliajones3.14/a-deeper-world-supplement-to-the-many-worlds-interpretation-of-wave-function-collapse-54eccf4cad30

Also, the fact that something can't currently be proven or disproven does not mean it isn't true (and that it won't be "proven" in the future).  Such has been the case for many theories at first, including general relativity, evolution through natural selection, etc. 

Comment by amelia (314159) on How does AI Risk Affect the Simulation Hypothesis? · 2023-04-20T14:35:08.763Z · LW · GW

Thanks for sharing this! It's so interesting how multiple people start having similar thoughts when the environment is right. It seems the simulation hypothesis and AI Risk are inextricably linked, even if for no other purpose than conducting thought experiments that help us understand both better. 

Comment by amelia (314159) on LW Account Restricted: OK for me, but not sure about LessWrong · 2023-04-13T15:45:55.396Z · LW · GW

To the people who upvoted this post, 

Thank you very much for the support. As you maybe saw below, the restriction on my account has been lifted! 

As I also mentioned below, I might not take advantage of the restored liberty in the short term. I’ve already begun consolidating all my writing on my personal website (AmeliaJones.org), with links to Medium blog posts for the writing. (The writing that was on LW would mostly be under writing...AI, or writing....physics. There are also short stories and other right-brain type stuff, but I don't think LW folks would be too interested in that i.e. just ignore it.) 

However, I might return to LW in the longer term. For now, please don’t be offended if I don’t respond to comments on this post. I don’t think I will be checking in too often. This will allow me to focus more on my new platform. 

Warmest wishes to all of you, and thanks again for the support when I really needed it, 

Amelia

Comment by amelia (314159) on LW Account Restricted: OK for me, but not sure about LessWrong · 2023-04-13T15:36:59.162Z · LW · GW

Thanks for being curious! I’ve begun using my personal website (AmeliaJones.org) as a place for all my work. From there, I will have links to Medium blog posts. (Posts that were previously on LW would mostly be under the writing....philosophy, or writing....physics categories on the website.) I appreciate your interest!

Comment by amelia (314159) on LW Account Restricted: OK for me, but not sure about LessWrong · 2023-04-13T15:35:29.154Z · LW · GW

Raemon, thank you very much for lifting the restriction on my account! I’m sure it’s extremely challenging to maintain high LW standards, while at the same time trying to promote open dialog with differing perspectives. I don’t envy your job, but I really appreciate the work you do. 


In the short term, I might not take full advantage of my restored liberty. I’ve started using my personal website (www.AmeliaJones.org) for both AI art projects and all my writing (not just LW writing). The writing will have links to Medium blog posts, so people can comment as much as they choose. It’s actually turning out to be easier for me to do things this way. However, depending on whether I get any visibility or feedback via this method, I might return to LW for niche writing projects in the longer term.  

Thanks again for lifting the restriction, and for the important work you do. 

Best wishes, 

Amelia

Comment by 314159 on [deleted post] 2023-04-06T18:02:06.000Z

Meta comment on LW, as it relates to this post:

So when I checked this post in the morning, it had received a negative ten (up to that point in time) in “karma.”  When I hovered over the negative ten, I saw the message “3 votes,” so apparently three people strongly disapproved of (disagreed with?) this post. Five days ago, I received a negative eight in karma from two people. I asked for guidance and input, but none has been forthcoming (at least in the five days since then). 


I don’t mind the “negative karma” votes, in and of themselves, but it seems like such votes would be more useful if they were accompanied by explanations. Otherwise if it’s a matter of, for example, faulty logic or faulty givens, there’s no way to know this, let alone improve. I also don’t know if it’s the same people strongly disapproving from one post to the next, or different people, each with a unique reason for disapproval. 


As far as my point in this particular post (that I hope Eliezer will sit for another interview w/ Lex to cover more topics within the scope of AI safety, or at least address them in another venue like LW), I listened to the last few minutes of the interview again, and Lex said “Thank you for the fight you’re fighting. Thank you for being fearless and bold and for everything you do. I hope we get a chance to talk again and I hope you never give up.”

Then Eliezer said “You’re welcome. I do worry I didn’t address a whole lot of fundamental questions I expect people have, but you know maybe we got a little bit further and made a tiny little bit of progress and I’d say like be satisfied with that, but actually no, I think one should only be satisfied with solving the entire problem.” 

Then Lex said “To be continued....” 

In light of what Eliezer said, I wonder whether he would also “strongly disapprove” of the hope for more discussion that I expressed in my post. I also wonder whether this is the way Eliezer imagined karma would be used on LW. 

--

A side note is that millions of people listen to Lex's podcasts, so if we really want to get the most brilliant young people working on interpretability, and AI alignment problems in general, then shouldn't we want Lex to spend more time discussing these problems? Do the negative karma votes on this post mean people don't want more public discussion of AI safety?

Comment by 314159 on [deleted post] 2023-04-06T17:49:50.710Z

Thanks for the follow up. Yeah, that’s a great point. I was imagining the bottomless respect and love I have felt for my own parents and teachers, regardless of the heartbreaking way their intellect erodes and crumbles with age. Yet that does not translate to human society as a whole, let alone AI. 

I agree that AI would be more equivalent to humans if it had designated functional areas. All analogies break down eventually, but maybe this analogy breaks down sooner rather than later. Nevertheless, I’d still love to hear Eliezer and Lex discuss it, along with whether it’s possible to add functional areas adjacent to LLMs, to improve the chances that LLMs will develop some sort of “creator respect,” or if such human-similarity functionality would have to be more integrated than adjacent. 

I wish that all AI advancement work would be halted in order to work on such safety and adjacent functionality issues, but at the very least, it seems it couldn't hurt to talk about--and thereby encourage more brilliant young people to work on--such issues. 

Comment by 314159 on [deleted post] 2023-04-06T06:37:16.480Z

That's a really useful distinction. Thank you for taking the time to make it! I also think that I made it sound like "simulator" worlds allow for objective morality. In actuality, I think a supra-universal reality might allow for simulator worlds, and a supra-universal reality might allow for objective morality (by some definitions of it), but the simulator worlds and the objective morality aren't directly related in their own right.   

Comment by 314159 on [deleted post] 2023-04-01T19:07:43.376Z

So within an hour, this received a downvote of negative 8, without explanation. That’s alright, but I don’t really understand how it wouldn’t be beneficial to consider AGI risk from every possible perspective and framework--including new perspectives and frameworks--given the fact that this is an existential concern. 

I’m not sure if people consider the simulation hypothesis to be “fringe,” but David Chalmers isn’t exactly a “fringe” philosopher, and he has written extensively about the hypothesis. I limited my citation to one page number in Chalmers' book, since more precise citations were requested in a response to a previous LW post, but you could argue that most of the book is related to the simulation hypothesis. (For example, chapter two is titled “What is the simulation hypothesis?” and chapter five is titled “Is it likely that we’re in a simulation?”)

Anyway, as usual, the downvotes are alright, but any explanation would be helpful (presuming the explanation doesn’t assert I suggested something that I didn’t suggest). Thanks for reading this!

Comment by 314159 on [deleted post] 2023-03-20T19:17:15.825Z

Btw, I’m totally cool w/ the downvotes on this one. I probably would have downvoted it too, because it’s not at all developed or detailed. 

The only time a downvote or “unfavorable impression” disturbs me is when it’s accompanied by an explanation that makes me think the downvoter is under the impression that I said something I didn’t say, or that I believe something I don’t believe. Granted, even then, the false impression can also be my fault for not having explained myself clearly in the first place. 

In this particular case, I know the post was read carefully, and contemplated, because shminux asked for clarification and elaboration. That made me really happy, regardless of downvotes and upvotes. Thanks for that!

Comment by 314159 on [deleted post] 2023-03-20T19:14:43.818Z

Great idea! I'll work on that. Thx!

Comment by 314159 on [deleted post] 2023-03-20T03:20:11.361Z

I was also interested in seeing the kind of reaction a philosophical physics post would get, as it pertains to a previous post I made. I really appreciate the opportunity to discuss even "far out" ideas on LW.

Comment by 314159 on [deleted post] 2023-03-20T00:21:46.444Z

Thanks for the feedback! Yeah, excellent point about acceleration. In the current version, we actually don’t ever address the twin "paradox," let alone how acceleration resolves it. 

In a draft version of the book, we had addressed the twin paradox, but we got feedback that the book was way too long. There are other topics we had to cut out as well, which makes me a little disappointed in the final product. The original version was over 600 pages, and that was admittedly too long, but I feel like we went way too far in the other direction. Granted, it will be a first introduction to special and general relativity for most readers, but I know the twin paradox is going to keep them up at night, and I really wish we had addressed it. 
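For anyone whom it does keep up at night, here is a rough sketch of the standard resolution (not taken from the book, just an illustrative note): the time elapsed on each twin’s clock is the proper time along that twin’s worldline,

$$\Delta\tau = \int \sqrt{1 - \frac{v(t)^2}{c^2}}\, dt.$$

The stay-at-home twin has $v = 0$ throughout, so her clock reads the full coordinate time, while the traveling twin spends most of the trip at high $v$, so his integral comes out smaller. The acceleration at the turnaround is what breaks the apparent symmetry: only the traveling twin changes inertial frames, so the two situations are not actually interchangeable.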

Anyway, thanks for looking at the book, and for offering the feedback!  

Comment by 314159 on [deleted post] 2023-03-19T23:05:23.545Z

One of my books is "Einstein Explained: Special & General Relativity for the Masses" (physics pun intended). Yes, it's pop-sci ("for the masses"), but I believe we convey accurate information. I really feel everyone should understand something about the nature of time, gravity and light; of course this requires passing on at least some introductory knowledge of special and general relativity. Here's a link to the Amazon copy: 

www.amazon.com/dp/B0B8ZGQ8RB 

We tried to make it unique with Nietzsche-quoting dog illustrations that were supposed to be sort of like New Yorker (TM) cartoons. I don't think the book shows any expertise, but your advice to me was to go out and "learn foundations of physics," and I think the book demonstrates that I have at least some familiarity w/ foundations of physics. 

Also, I don't know that I have a "quest." However, I think my goals are clearer in the post that linked back to this one. It's at

https://www.lesswrong.com/posts/NaBfa3cYr8mk55PvT/philosophical-physics-discussion-follow-up

I could have just left it at that, but the dangling loose end of this thread was bothering me, which is why I addressed it here.

Comment by 314159 on [deleted post] 2023-03-19T17:36:08.561Z

Thanks for asking! I don’t have a refined write-up. The value of LW, for me at least, is to propose, discuss, and refine ideas still in their infancy or other prepublication stages. Once I have a truly refined write-up of an idea, I think it would be more in the stage of submitting it to an academic journal. However, at that point, related ideas (and interpretation of the journal article) would be fitting on LW, and the whole cycle could start again. At least, that’s how it is for me. I’m sure other people find different value in LW. 

With that said, for the definition of “encapsulating world,” we would have to start with what I meant by “world.” For that, it’s necessary to look to the literature on the many worlds interpretation. As I’m sure you already know, but just to be thorough, MWI was first proposed in a Princeton thesis paper by Hugh Everett. It’s available several places, but here’s one: https://archive.org/details/TheTheoryOfTheUniversalWaveFunction/mode/2up 

(Hugh Everett, The Theory of the Universal Wave Function, Thesis, Princeton University, (1956, 1973), pp. 1–140.)  

Of course there’s also the book by David Wallace, which thoroughly explains the idea of a "world" in this context:  (Wallace, David (2012). The Emergent Multiverse: Quantum Theory According to the Everett Interpretation. Oxford University Press. ISBN 978-0-19-954696-1).

 

Also, I just came across this article from 2009. The permanent link is at https://doi.org/10.1016/j.shpsb.2008.10.002. 

(Stefano Osnaghi, Fábio Freitas, Olival Freire, The origin of the Everettian heresy, Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics, Volume 40, Issue 2, 2009, Pages 97-123, ISSN 1355-2198, https://doi.org/10.1016/j.shpsb.2008.10.002). 


A tangential point about this article is that it reflects a point that I (poorly) tried to make in an earlier post, which is that there is a subset within the physics community that seems almost theocratic in the way its members regard physicists with differing (more philosophical) viewpoints.  My post made a very unfavorable impression upon some LW folks, as they promptly let me know in the comments, and afterward I conceded that “theocratic” was too strong a word. I also recognized that any misunderstandings were my own fault for obviously doing a poor job explaining myself. Oddly enough, the article I just cited is full of theological-type terminology to describe the same phenomenon I tried to describe. That doesn’t justify my original post. However, it does seem to add to the wry, winking irony of the universe.     

As far as the “encapsulating” part of an encapsulating world interpretation goes, you could always check out David Chalmers’ descriptions of how quantum mechanics might be viewed as part of a simulated universe. (Chalmers, D. J., & Peacock, T. (2022). Reality+: Virtual worlds and the problems of philosophy. W. W. Norton & Company.) I would give you page numbers where the book relates to quantum mechanics, but there are many such references throughout; you could always look in the index to find all of them. In this case, the “encapsulating” world would be the simulator/programmer/external world. However, in my own “bare bones” view, I’m not including any of the other assumptions that would go into the simulation hypothesis (such as the idea that the encapsulating world consists of programmers, or even the idea that there are any intentions involved). 


I apologize for not giving a single sentence definition of “encapsulating world” when it comes to an encapsulating world interpretation of wave function collapse, but it’s the type of concept that requires the mathematics and diagrams of the many worlds interpretation, along with a view of quantum mechanics similar to what Chalmers describes, and then refinements from there. 
 

As I mentioned in the original post, although the idea independently occurred to me that wave function collapse could be explained by an encapsulating world interpretation, I’m sure it occurred to millions of other people before it occurred to me. Although the idea is not unique to me, I posted it because I still think it would be worth discussing and refining on LW. 

Hopefully, the discussion can be at least partially collaborative. I’m starting to realize that LW users are mostly male, and I wonder if this makes discussions slightly more competitive than collaborative (i.e. - some people seem to want to “win debates” a bit more than collaborate). I’m not saying this trend is good or bad, but I’d really appreciate some constructive, collaborative insights, in addition to critiques. (Granted, I recognize that any negative criticism is probably my own fault for poor logic or poorly-worded posts that lead to misunderstandings. Moreover, negative criticism can definitely provide room for growth. Nevertheless, a few collaborative insights that hadn't previously occurred to me would be a much-appreciated supplement.) If you’ve read this far, thank you!  

Comment by 314159 on [deleted post] 2023-03-18T23:49:14.681Z

One (hopefully) final note: With respect to quantum mechanics, I believe a sort of bare-bones* “nested world interpretation” could explain wave function collapse just as well as the “many worlds interpretation” explains it. (A nested world interpretation would involve quantum behavior coming from the outer world, w/ classical physics behavior originating in, or innate to, this world.)

This belief probably does indeed make me a “crank.” The word “crank” was used by another LW user in a reply to my linked post. Based on the context, I think it was supposed to be a mildly pejorative way of referring to someone with unconventional & unprovable ideas. 

I think that the “crank allegation” might have actually been correct, at least with respect to my thoughts on interpreting wave function collapse. Sorry for not recognizing that point earlier, and thanks for the discussion. It helped me to think about how my views fit in a larger social context. 

As mentioned before, I’ve turned off notifications on this thread, so I might not respond to follow-ups. My silence doesn’t imply an endorsement or rebuttal of anything. It just means I have some time constraints. 

--

*By “bare-bones,” I mean nothing that would arise to a simulation theory or any other embellished subset of nested world theories. It’s true that I love discussing the idea of “programmed worlds,” because such discussions engender an almost playful type of supra-universal thinking. However, while I wouldn’t bet money on a programmed world interpretation, I would go so far as to bet on a “no-strings-attached” nested world interpretation. 

Comment by 314159 on [deleted post] 2023-03-09T04:15:06.502Z

One of my books that has been referred to multiple times by others in this discussion is 

“Einstein Explained: Special and General Relativity for the Masses” (physics pun intended). 

I published it under my maiden name, which is Amy Louise Johnson. I created it in collaboration with my then-teenage sons. 


I didn't mention the book at all in my original post. Then I didn’t mention its title in my replies to the comments on the first post, in the second post itself, or in my initial replies to the comments on the second post. This is because I didn't want to use LessWrong to market the book. 

However, I decided to mention it now, along with its title, because of the suggestions that I’m not familiar with foundational physics and that I might be a "crank" author (which I think is meant to imply that I have radical or unproven ideas). If I finally just give people the title of the book, they can at least see previews of the pages on Amazon and decide for themselves whether I am a "crank" or have not yet learned foundational physics. If I do have radical or crazy ideas, I guess I would probably be the last to know anyway. 

I don't blame anyone for assuming I'm a "crank" author, or that I'm not familiar with foundational physics, because I initially didn't give any evidence to the contrary. Furthermore, I think I did a really poor job explaining my viewpoints in the initial post. 
 

Caveats about the book: 

-When the book was re-titled, it lost its reviews on Amazon, so I assume it doesn’t have any reviews at this moment. 

-The first draft of the book was originally all text w/ only a few diagrams. However, based on advice that we should make the book more accessible and one-of-a-kind, we limited the text to the absolute necessities and added the philosophical, Nietzsche-quoting cartoon dogs. I know it’s a little weird.

-I’m proud of the book being “for the masses” (“popsci” I guess). Frankly, I think it is actually more challenging to explain advanced concepts in a way anyone can understand, than it is to explain advanced concepts to people who already have a specialized vocabulary and expertise. It is even more challenging to explain the concepts simply, while also not sacrificing accuracy. I think we did a decent job in meeting the challenge, but people can judge for themselves, if from nothing other than the preview pages on Amazon. If we did make any mistakes, feedback would be greatly appreciated.   

-The book was converted from color to grayscale, and reformatted, and I think the result looks pretty bad.


-We received advice to use professional artwork for the cover, but I insisted on using my own drawing of a “cool Einstein.” That’s why the cover looks so unprofessional. The cover on the teens and tweens edition is also my own art.... genius....

---

This is my last comment I'll make on this thread, and I’ve turned off notifications on the thread. 

If I don’t respond to future comments, it’s not because I necessarily agree or disagree with them. It’s simply because I can’t afford to spend more time on this. I’ve already spent way too long, and I have other things I should have been doing. 

However, despite ending my involvement with this post and the linked one, I did find this discussion to be very productive (for me at least), and I learned a great deal from the feedback (such as how to explain myself better, the necessity of examples, and the importance of reading feedback in a realistic tone of voice, etc.) Thank you to everyone who gave feedback, good or bad, because it was all useful. Best wishes.....

Comment by 314159 on [deleted post] 2023-03-09T03:59:42.946Z

Thank you for your feedback. Here’s my feedback on your feedback. My words are in bold. 

Your quote: Physicist here. Your post did not make a positive impression on me, because it seems to be generally wrong.

My response: I’m really sorry my post did not make a positive impression on you. As to whether it was “generally wrong,” I’ll address that based on your points that follow. In any places where I feel you misunderstood me, that is my fault, because I obviously did a terrible job explaining myself if multiple people misunderstood (which they did). I'll try to clarify a little bit in this reply. 

--

Your quote: Your belief that there are 'philosophical' and 'shut-up-and-calculate' physicists generally agrees with my anecdotal experience. 

My response: Thank you. I guess this belief (the premise on which my initial post was supposed to be based) “generally agrees” with your anecdotal experience, so we're OK so far.

 

--

Your quote: However, that's the thing: there are many physicists who are happy to think about philosophy. I think I fall into that camp. Really strange to think that there are philosophical physicists, and yet think that physicists don't engage in philosophical discussion.

My response: My point was not supposed to be that “physicists don’t engage in philosophical discussion.” It was that the non-philosophical, self-described “shut up and calculate” physicists have a bias against philosophical discussions. Philosophical physicists definitely engage in philosophical discussions. That was supposed to be one of the two main points in my original post (that the non-philosophical physicists are biased against philosophical discussions among philosophical physicists). I think we're actually in agreement on this, but I clearly did a poor job explaining myself, since you thought we were in disagreement. My apologies...

--

Your quote of my quote: “From a strictly materialist perspective, doesn’t it seem rather “universe-centric” to think the reality that gave rise to the Big Bang and our universe(1) only gave rise to our particular universe?”

Your response: Right, many physicists who actually have thoughts about this don't.

My response: OK, so I think we're alright on this part.   

--

Your quote of my quote: And doesn’t it seem even more universe-centric to think of the supra-universal (“deeper”) reality as less significant than its product(s)? 

Your response: I don't think many physicists have strong opinions on what's 'more significant'. 

My response: I think we're OK on this. I started the post out with these questions in an attempt to set the stage for philosophical thinking about physics, before mentioning the bias. However, I think this probably wasn't effective, as people wondered why I was asking these sort of obvious questions.

--

Your quote of my original quote: Granted, we can’t know much about the deeper reality, but it seems there could be some hints about its nature in fields like quantum mechanics, relativity physics, cosmology and philosophy.

Your response: Yes.

My response: OK, again, “Yes,” makes me think we're OK with this. 

--

Your quote of my original quote: Quantum mechanics, by dealing with the smallest “building blocks”/waves seems especially promising as a sort of island beach upon which artifacts from the surrounding sea may wash ashore. 

Your response: Indeed, quantum fundamentals is a rather active field, both experimentally and theoretically.

My response: So again, it seems like you don’t think I’m “generally wrong” on this point either. I guess that’s good. 

--

I actually can’t afford to spend much more time on this. The main idea is that you felt my post was “generally wrong,” but I think that, in actuality, we agree on most of the points. In the places where you believed we disagreed, I don't actually disagree with you. Again, that's my bad, since I communicated my viewpoints so poorly that they were misunderstood.  

--

Your edit: I looked at your books too, though only the amazon preview. Congrats on the books, writing a few hundred pages is always an accomplishment. I can't say I see any expertise beyond a pop-sci level, though. This is not a criticism, and I hope you don't take it as such; these are pop-sci books, and don't require more expertise than, well, a pop-sci level of expertise. They can be excellent books in their own right, I do not have the expertise to judge science communication. 

My response: Thanks for the congrats! Yes, the books were intended to be “for the masses,” which is “pop-sci.”  However, I consider the term “pop-sci” as a compliment. 

I strongly believe that the average person deserves to know, and is capable of understanding, basic concepts of time (like time dilation) and gravity (from curving spacetime). I realize that people might not need to know about the nature of time and gravity, but somehow it seems like a shame if people don’t ever have a clue about such fundamental aspects of reality. It’s rather like the way the average person could get by without knowing the Earth is round, but I think people deserve at least the opportunity to know this. If I’ve explained advanced concepts in a way that is both correct, and easy to understand in the book, then that makes me really happy. 

Also, I didn’t mention the books to demonstrate some great level of “expertise.” In my first post, I didn’t mention them at all. It’s just that the response to my first post was advice that I should go and learn some foundational physics. I mentioned the books at that point only to show that I do indeed have some familiarity w/ foundational physics. After that, someone suggested I am likely to be a "crank" author. I think that means my views would be considered radical or bizarre. I'll just let people judge for themselves, and share the book titles in a new reply to this post. The reason I didn't do so in the first place was because I didn't want to use LW for free marketing. I think it might be necessary to mention the titles though, in order to defend myself from the "crank" speculation?

--   

Your continued edit: However, I'm not sure how to convey this without sounding like an elitist asshole, but I've never had productive discussions about physics with people who don't at least have a graduate-level understanding of physics before. 

My response: I don’t think you seem like an elitist asshole at all. It just seems that your experience has been different than my experience. My experience is that it’s even possible to have productive discussions about physics with teenagers. As you maybe saw, one of my books introduces teens and tweens to special and general relativity, with that in mind. I guess it partly depends upon how you define “productive discussions.” To me, a productive discussion is one in which one or both parties comes out of the discussion with new insights, or a deeper understanding than they had before the discussion. You probably have a higher ideal for productivity. 

In my mind, this LW thread was a productive discussion. I learned that I must have done a very poor job explaining myself in my initial post. That’s the most obvious reason why multiple people misunderstood the points I was trying to make. 

I think that next time I should use examples of well-known people who might fall into one camp or another to make my points.  The reason I didn’t do so in my original post is because sometimes name-dropping makes it seem like the name-dropped people endorse everything a writer says, which isn’t fair to the name-dropped people. Furthermore, it might seem like the writer agrees with everything the name-dropped person has ever said, which would just be incorrect. Still, as a result of this discussion, I’ve concluded it’s better to use names of people, and then maybe I could add footnotes about a lack of endorsement going either way. I also was probably too sensitive to criticism of my initial post. It’s so easy to read a response in a tone of voice that’s different than that which was intended. 

So anyway, thank you for a discussion that was productive (for me at least). It must have taken some time, and I sincerely appreciate your effort.  


 

Comment by 314159 on [deleted post] 2023-03-06T19:43:23.600Z

Yes, good point. My bad... I was working on responses to multiple comments from both posts, all in a different document, while my dog kept jumping on me. To fix my error, whenever I say something about “You downvoted my post because...” please substitute “My post did not make a good impression on you because....”  Sorry for the mistake. Thanks for pointing it out!

Comment by 314159 on [deleted post] 2023-03-06T18:29:29.723Z

Thanks for the recommendations. I love Sean Carroll's way of thinking. It was exactly this philosophical and "supra-universal" way of thinking that I was trying to defend in my original post, although I apparently did a poor job at it. Anyway, great recommendations! 

Comment by 314159 on [deleted post] 2023-03-06T18:22:51.553Z

Thanks for the feedback on why my post did not make a good impression on you*. I’ll respond to each of your points individually. My responses are in bold. 

 

  • It presents as new something that's not new at all. (Maybe your specific take on the idea of multiple universes and the like is new, but you don't say anything about what that specific take is.) E.g., there's the "string theory landscape" (example book by famous author with major publisher for popular audience: Leonard Susskind's "The cosmic landscape") and "eternal inflation" (example book by famous author with major publisher for popular audience: Michio Kaku's "Parallel worlds", though I admit much of the book is about other things) and Tegmark's "mathematical universe" (example book by famous author with major publisher for popular audience: Tegmark's own "Our mathematical universe").

You said my post did not make a good impression on you* because “It presents as new something that’s not new at all.” My original point was that we don’t need to artificially limit ourselves to non-philosophical, universe-centric thinking, and there is some risk involved with censorship by the self-described “shut up and calculate” school of thought. (Yes, the term was initially coined by a critic, but many people now describe themselves as being in this group, like a sort of "badge of honor.") I’ve read the books you suggested. I agree they are excellent. If my point was redundant with these books or other posts on LW, I apologize. It seems there is also some redundancy on LW when it comes to posts on AGI risk, and the problems that could result from closed or censored discussions on AGI risk. Yet I feel this redundancy has been handled somewhat differently than my own apparent redundancy. 

 

  • Its complaints seem exaggerated. "Theocracy", really? You complain here that you only said some scientists are "theocratic" and everyone wrongly assumed you meant science as a whole. But what "theocratic" (as opposed to, say, "religious") means is that the people in charge, specifically, are driven by religion and probably suppressing all dissent. It makes no sense at all to say "these few people, a subset of a subset, are theocratic". So of course you were taken to be making a claim about The Scientific Establishment more generally. Anyway, it seems very untrue that there's anything like a theocratic prohibition on multiverse theories; the nearest thing to a theocracy in fundamental physics at present is the priesthood of string theory, and I think on the whole string theorists are for multiverses.

You describe my writing as “complaining,” and my discussion of risks as “complaints.” These words have an obvious negative connotation. Do you consider discussions about AGI risk to be full of “complaints,” or are they “valid concerns”? Why do you choose to use demeaning words to describe my concerns, but not the concerns of others? Also you say “...it seems very untrue that there's anything like a theocratic prohibition on multiverse theories.” I don’t feel there’s a “theocratic prohibition” on multiverse theories, but I do feel there is a bias against, and some censorship of, multiverse theories and other philosophical outlooks on physics. I described this bias as being “theocratic” in the way physicists with this bias criticize and sometimes suppress the physicists who discuss “why” quantum phenomena might happen. I would encourage you to ask any multiverse theorist if they’ve ever encountered this bias. (And yes, string theory is one particular multiverse theory. I don’t think I said otherwise.) 

      

  • The focus on "moderators of the physics subreddits" seems weird. Who cares about the moderators of the physics subreddits? (And why would Less Wrong be a suitable place to discuss their alleged deficiencies?)

 

My original post had only one sentence about reddit. Granted, reddit is not exactly a bastion of intellectual thought. However, it can be used to get a real-time impression of sentiment among a certain portion of society. I referred to “moderators of the physics subreddits” as an example of the bias against philosophical physics in action. I thought Less Wrong would be an appropriate place to discuss the bias against philosophical physics. The moderators were just a real-time example. I have no more desire to focus on reddit than you do. I didn't know I would have to elaborate on that one sentence repeatedly in these follow-up posts/replies. 

 

  • In fact, that focus strongly suggests a specific scenario: you have some theory of physics, you've been trying to promote it on Reddit, it hasn't gone down well, and you're annoyed about that. And, unfortunately for you, in the great majority of such cases the problem is not primarily with the Reddit moderators. Of course you might be the exception! But the expectation you're setting up is not a good one.

 

Again, the word “annoyed” is like the word “complained.” And again, would you describe people worried about AGI risk as being “annoyed” by it? Or would you choose words like “concerned” and “worried”? And there was indeed a specific scenario on reddit that disturbed me recently, but it didn’t have to do with me “promoting” an idea. It was a post in which someone asked if Brian Greene’s perspectives were “BS,” and there was a pile-on of people criticizing Brian’s ideas. Obviously, he doesn’t need me to defend him. Still, the scenario represents the problem I’m talking about. The issue is not a personal grudge; rather, it’s a broader issue of bias against larger, philosophical outlooks in physics. 

 

  • Taking a look at e.g. /r/physics, there is discussion of philosophy, multiverses, etc., from time to time. So if the Evil Theocratic Moderators are suppressing such discussion, they aren't doing it very carefully.

 

I don’t think moderators are “evil.” Again, “evil” is an extreme and loaded word. Why not “biased”? Furthermore, your choice of capitalization in "Evil Theocratic Moderators" seems like an attempt to make me seem like I subscribe to something crazy and extreme. I imagine that it's easier to paint me as extreme, and then criticize me for the alleged extremism, than it is to criticize any of my actual points. However, I would agree that this is just a bias I’m talking about, and not an all-out total censorship of all philosophical physics theories. There are indeed philosophical theories that make it through "from time to time," as you noted. 

 

 

Also:

  • I don't think shminux's assumption that you could stand to learn some fundamental physics rather than handwaving about it was unreasonable, whether or not it was correct. Your original post doesn't read like it's written by someone with substantial expertise.

As far as the advice to learn fundamental physics goes, I only mentioned my books on special and general relativity to convey that I do have an understanding of “some fundamental physics.” I would agree though that a person can always think more and more deeply about the basics. (For example, concepts like mass-energy and spacetime are almost infinitely interesting.)  Also, I didn’t intend for my post to be like an academic physics paper. LW doesn’t seem like the right forum for that. I was only trying to discuss the idea of a supra-universal, philosophical approach to some physics phenomena, and the bias that exists against that approach. 

 

  • Having self-published books about relativity is not strong evidence about whether a person has expertise in fundamental physics. (I actually suspect it may be evidence against.) Again, I am not claiming that you are a crank; I'm not in a position to judge. But the evidence we have available at present doesn't give us much grounds for confidence that you're not.

 

This is getting tiresome, but as before, your choice of wording seems a little passive aggressive when you say “I am not claiming that you are a crank; I’m not in a position to judge. But the evidence we have available at present doesn't give us much grounds for confidence that you're not.” Words like “crank, annoyed, complained” seem slightly disrespectful. However, you’re right that, in my original post, I also was disrespectful when I said “I’ve noticed a universe-centric perspective among some scientists that seems to almost amount to a sort of theocracy. For example, there is a subset of the quantum physics community who refer to themselves as the “shut up and calculate” faction. They dissuade people from asking “why” certain phenomena occur.”  I agree that “theocracy” is a loaded word, like "crank, annoyed, complained" and "evil." 

As far as my books and evidence are concerned, DM me if you’d like a free PDF copy of one of the books, and I’ll send it to you. Then you can have more evidence to judge my knowledge of “fundamental physics” for yourself. 

 

[EDITED to add:] Oh, one other thing: your profile gives a URL for your web page, but it is the wrong URL. The real one has .org on the end, not .com.

Nice catch.... thanks! I make that mistake all the time. 

Finally, I’m not going to engage further, so please try to refrain from saying anything else that misrepresents or exaggerates my positions, as I won’t respond. If people want to know what I actually said in the original post, the follow up post, or this reply, all they need to do is look.  In any case, I appreciated the discussion. 

 

[EDITED:
 *I edited my original reply. Originally, I had written about why "you downvoted my post" when you (gjm) actually said you did not downvote my post. Sorry for the mistake. I was working on too many projects and replies at once, all while copying and pasting from a separate document. In order to fix the mistake, I have since changed "you downvoted my post" wording to "my post did not make a good impression on you" wording. I'm not familiar enough with LW to know if the platform will show the inline edits when I submit this edit. I hope it does, but if not, please know that I initially was incorrect on the downvoting versus "not a good impression" issue. I think the rest of my comments still stand though.]     

Comment by 314159 on [deleted post] 2023-03-06T02:38:43.334Z

Also @shminux, you recommended learning the foundations of physics. I’ve written some books on special and general relativity. However, it’s true that there is always more that can be learned. I had intentionally used the word “theocratic” to make my point that the “shut up and calculate” faction within quantum mechanics seems more theocratic than scientific--since science usually involves thinking about why things happen. Still, “theocratic” was too extreme a word. I could have just said the “no-questions-about-quantum-phenomena” attitude seemed unscientific rather than “theocratic.” 

Comment by 314159 on [deleted post] 2023-03-06T02:16:27.112Z

I decided my reply was too sarcastic, and maybe even mean. I apologize. 

Comment by 314159 on [deleted post] 2023-03-06T01:55:39.183Z

I edited this comment because it was too sarcastic, and maybe even mean. 

Comment by 314159 on [deleted post] 2023-03-06T01:47:41.725Z

Thank you for supporting my expression of a nonstandard view. I could have said something that everyone would agree with, such as “out of control AGI is bad,” but with slightly more flowery & nuanced language. If I did that, I would probably have gotten multiple upvotes. However, posting for the sake of popularity does not get us any closer to truth. The frontier of progress only begins to move with proposed views that initially seem to be “far out there.” That is, after all, part of how we define “frontier.” Furthermore, thinking of reality with absolutely no assumptions at all, rather than starting with “obvious assumptions,” can prevent us from artificially restricting our view of reality, and missing something. So anyway, thanks again for supporting my decision to express a nonstandard view, even if you don’t necessarily agree with it. It’s amazing how few people are capable of doing that. I really appreciate the support. 


 

Comment by amelia (314159) on Childhoods of exceptional people · 2023-02-21T00:25:00.294Z · LW · GW

It's so sad that other teachers weren't on board with the advanced topics. Some adults can't stand it when you teach kids about topics that they don't understand themselves. I think it's because doing so makes the adults feel less superior to kids. Just know that you were doing the right thing (if the right thing means helping kids to love learning, and to not be afraid of any topic). And what a gift for your daughter with a second language! She is so fortunate.

Comment by 314159 on [deleted post] 2023-02-18T02:18:27.466Z

Thank you! You've given me lots to think about and research!

Comment by 314159 on [deleted post] 2023-02-15T16:45:35.242Z

Thanks for the feedback! To test the Many Worlds Interpretation, what would it look like to see objective collapse occurring? It seems tricky, because the moment we observe the photon(s), wouldn't they appear to have already collapsed? Please let me know if there are any books or papers you would recommend on the topic. Thx again!

Comment by amelia (314159) on Childhoods of exceptional people · 2023-02-13T19:03:36.143Z · LW · GW

The adults had high expectations of the children; they assumed they had the capacity to understand complex topics, and therefore invited them into serious conversations and meaningful work, believing them capable of growing competent rapidly.

 

I agree that children are capable of understanding complex topics, and we should take children far more seriously. 

When my kids were young, I exposed them to a wide range of advanced concepts in fields like physics and philosophy. I never "pushed" my kids. Rather, the kids asked questions (usually during car rides and at bed time), and I answered them honestly. For example, when my seven-year-old asked if time travel was possible, I introduced him to the topic of special relativity. Honestly, he picked up the ideas more quickly than most adults would. I think the reason for his aptitude wasn’t that he was a “genius,” but rather that he still had enough imagination to accept a notion like time dilation. In other words, it is often easier for children to understand “big ideas” because their minds are not yet rigid and closed-off.  

A subset of other moms, and even some teachers, would say things like 

"You taught your son about what???" or "A child can't possibly understand that!" or “No child is ready for that book.”

These adults were highly critical of exposing kids to advanced concepts "too early." By contrast, my thoughts were that there is no such thing as “too early.” Yet the underestimation of children seems endemic. For instance, I remember one year at the elementary school science fair, I heard the judges discussing my son’s project as I walked to the restroom. They concluded that “It was amazing, but no child could have done that.” However, the truth was that my husband and I didn’t even understand our son’s project until he slowly explained it to us the night of the science fair. (And we couldn’t have helped with something we hadn’t even understood.)

Fortunately though, society is not doomed to forever underestimate its youngest members. I remember from my own childhood that most people thought it was ill-advised to expose children to a foreign language before high school. (The idea was that foreign languages were far too advanced for children to pick up, and children would be confused from early exposure.) Yet eventually people realized that young children are actually better at learning foreign languages than older children, let alone adults. Furthermore, if children don’t learn a word of French on their first day in French immersion school, nobody says 

“I told you they were too young. That was completely over their heads. You shouldn’t push them to understand anything so difficult. Those poor kids.” 

Instead, people say things like “Just having exposure to something they cannot yet understand is good. It will open their minds and prepare the way for tomorrow.” 

My hope is that we will eventually take that lesson and apply it to topics outside of foreign languages. 

As to my own sons, who are now young adults, early exposure to big ideas didn’t hurt them. They are both curious, philosophically-oriented, self-directed learners, who truly care about the present state and future of humanity. In fact, I wouldn’t even be here if my oldest son--who graduated from Princeton last year with a major in computer science and as many elective philosophy courses as he could fit into his schedule--hadn’t told me about the Less Wrong website in the first place. 

So thank you to Karlsson for the research and insights. My stories might be anecdotal, but I think they reflect broader issues--such as the way most of society underestimates the intellect of children--and how we can independently overcome these issues, while also advocating for societal change. Our children can be exceptional if we stop holding them back.