Posts

Pondering how good or bad things will be in the AGI future 2024-07-09T22:46:31.874Z
The Underreaction to OpenAI 2024-01-18T22:08:32.188Z
Taboo "human-level intelligence" 2023-02-26T20:42:25.880Z
A poem co-written by ChatGPT 2023-02-16T10:17:36.487Z
Two very different experiences with ChatGPT 2023-02-07T13:09:27.389Z
Which intro-to-AI-risk text would you recommend to... 2022-08-01T09:36:11.733Z
Covid-19 in India: Why didn't it happen earlier? 2021-04-27T19:13:00.798Z
“Meditation for skeptics” – a review of two books, and some thoughts 2021-03-20T23:35:23.037Z
Should it be a research paper or a blog post? 2020-09-24T08:09:08.179Z
Book Review: Fooled by Randomness 2020-07-13T21:02:36.549Z
Don't punish yourself for bad luck 2020-06-24T21:52:37.045Z
Dietary Debates among the Fruit Gnomes 2020-06-03T14:09:15.561Z
Sherrinford's Shortform 2020-05-02T17:19:22.661Z
How to navigate through contradictory (health/fitness) advice? 2019-08-05T20:58:14.659Z
Is there a standard discussion of vegetarianism/veganism? 2018-12-30T20:22:33.330Z
Cargo Cult and Self-Improvement 2018-08-07T12:45:30.661Z

Comments

Comment by Sherrinford on Pondering how good or bad things will be in the AGI future · 2024-07-12T15:13:14.248Z · LW · GW

With respect to what you write here and what you wrote earlier, in particular "and have solutions to some problems you wanted to solve, but could not solve them before, novel mental visualization of math novel to you, novel insights, and an entirely new set of unsolved problems for the next day, and all of your key achievements of the night surviving into subsequent days).", it seems to me that you are describing a situation in which there is a machine that can seemingly overcome all computational, cognitive and physical limits, but that will at the same time empower you to overcome all of those limits yourself.

The machine is completely different from all machines that humanity has invented; while, for example, a telescope enables us to see the surface of the moon, we do not depend on the goodwill of the telescope, and a telescope could not explore and understand the moon without us.

Maybe my imagination of such a new kind of post-singularity machine somehow leaps too far, but I just don't see a role for you in "solving problems" in this world. The machine may give you a set of problems or exercises to solve, and maybe you can be happy when you solve them, like when you complete a level of a computer game.

The other experiences you describe maybe seem like "science and philosophy at a rave/trance party", except that if you are serious about the omnipotence of the AGI, it's probably more like reading a science book or playing with a toy lab set at a rave/trance party, because if you could come up with any new insights, the AGI would have had them a lot earlier.

So in a way, it confirms my intuition that people who are positive about AGI seem to expect a world that is similar to being on (certain) drugs all of the time. But maybe I misunderstand that.

Comment by Sherrinford on Reliable Sources: The Story of David Gerard · 2024-07-11T20:29:05.948Z · LW · GW

I'm surprised about the "disagree" vote on my comment. How do you judge the truth of the cited statement based on the post? (I'd say linked webpages do not count: the webpages are not part of the article, and the article does not list them as evidence.)

Comment by Sherrinford on Pondering how good or bad things will be in the AGI future · 2024-07-11T13:06:12.935Z · LW · GW

Thanks, mishka. That is a very interesting perspective! In some sense, it does not feel "real" to me, but I'll admit that this may be due to some bias or limited imagination on my side.

However, I'd also still be interested in your answer to the questions "How do you prepare? What do you expect your typical day to be like in 2050?"

Comment by Sherrinford on Reliable Sources: The Story of David Gerard · 2024-07-10T22:07:03.064Z · LW · GW

repeatedly caught publishing false information, conspiracy theories and hoaxes, [undue weight] for opinions

So, is this true or not? I cannot judge this based on your post.

Comment by Sherrinford on Sherrinford's Shortform · 2024-07-10T18:14:48.437Z · LW · GW

That is indeed often the case, but not always, so it may count as a bit of evidence, though not very strong evidence. Otherwise it would be very easy to automatically delete bots on Twitter and similar platforms.

Comment by Sherrinford on Sherrinford's Shortform · 2024-07-10T15:02:05.289Z · LW · GW

That's interesting, because 

b) Wouldn't an LLM end it with a rhyme precisely because that is what a user would expect it to do? Therefore, I thought that not letting it end in a rhyme is like saying "don't annoy me, now I am going to make fun of you!"

a) If my reading of b) is correct, then the account DID poke fun at the other user.

So, in a way, your reply confirms my rabbit/duck interpretation of the situation, and I assume people will have many more rabbit/duck situations in the future.


Of course you are right that the account suspension is evidence.

Comment by Sherrinford on Sherrinford's Shortform · 2024-07-10T08:42:06.186Z · LW · GW

Noah Smith writes about 

"1) AI flooding social media with slop, and 2) foreign governments flooding English-language social media with disinformation. Well, if you take a look at the screenshot at the top of this post, you’ll see the intersection of the two!"

Check the screenshot in his post and tell me whether you see a rabbit or a duck.

I see a person called A. Mason writing on Twitter and ironically subverting the assumption that she is a bot: she answers with the requested poem but ends it with a sentence about Biden that confirms her original statement and does not rhyme.

Of course, this could also be an AI being so smart that it can create exactly that impression. This would be the start of the disintegration of social reality.

Comment by Sherrinford on Open Thread Summer 2024 · 2024-07-08T15:59:10.088Z · LW · GW

Thanks! I thought the previously usual sorting was not just "latest" but also took a post's karma into account. I probably misunderstood that.

Comment by Sherrinford on Open Thread Summer 2024 · 2024-07-06T21:41:22.530Z · LW · GW

Can I somehow get the old sorting algorithm for posts back? My lesswrong homepage is flooded with very old posts.

Comment by Sherrinford on Open Thread Summer 2024 · 2024-07-06T21:36:10.161Z · LW · GW

I wonder whether more people from those areas take part in the survey. They can assume that many other participants come from the same area, and often have the same age and the same jobs, which implies that they can be sure their entries will remain anonymous.

Comment by Sherrinford on Would you have a baby in 2024? · 2024-07-05T14:47:41.566Z · LW · GW

I assume we are either lost in translation (which means I cannot phrase my thoughts clearly or am unable to put myself in your shoes) or you do not want to think about the question for some reason. I think I have to give up here. Nonetheless, thank you very much for the answer.

Comment by Sherrinford on Would you have a baby in 2024? · 2024-07-05T14:41:48.209Z · LW · GW

Of course, preferences are shaped by your social environment, but I assume that in any given situation you could still state a preference on the basis of which you would then enter into an exchange with the other relevant people?

Comment by Sherrinford on Would you have a baby in 2024? · 2024-07-05T14:36:46.598Z · LW · GW

Thanks. I don't understand the sentence "Note that my comments about my responses to this probability are different from actual responses to having a baby because the scenrio is very differenz." Would you be willing to elaborate?

Comment by Sherrinford on Would you have a baby in 2024? · 2024-07-05T12:58:58.420Z · LW · GW

This comment is just to note that I'd still be happy about an answer.

Comment by Sherrinford on Would you have a baby in 2024? · 2024-06-23T09:23:18.421Z · LW · GW

While I don't have much to elaborate on, maybe the following headline captures the relevant mood: https://unchartedterritories.tomaspueyo.com/p/what-would-you-do-if-you-had-8-years

Comment by Sherrinford on Raising children on the eve of AI · 2024-06-23T09:14:56.387Z · LW · GW

I did not intend the word 'prepper' to be derogatory, but to be a word for 'classical' preparedness skills.

While I understand your risk assessment, and it may be true that increasing societal risk makes such prepper skills more valuable, I think this neglects the problem that 'digital' skills, both for job qualifications and for disaster situations, may also become more valuable than before. As a day still has only 24 hours, it is not clear how the composition of the 'life preparedness curriculum' should differ from that of, for example, someone growing up 20 years ago.

Comment by Sherrinford on Raising children on the eve of AI · 2024-06-17T20:02:30.712Z · LW · GW

I try to summarize your position: 

  1. You think that with a relevant probability, major catastrophic events will happen that lead to situations in which traditional non-digital "prepper" skills are relevant,
  2. and therefore, parents or families should invest a larger share of their own and their children's time and resources into learning such skills,
  3. compared to a world that was not "on the eve of AI".

Right?

Comment by Sherrinford on Would you have a baby in 2024? · 2024-06-17T19:43:47.763Z · LW · GW

About your reaction to the thought experiment:

"But fine, let's come up with a scenario that might fit the bill: It must be something that can't be influenced, so natural causes are out."

No, maybe I was not clear enough: the scenario is just about something that I cannot influence to a relevant extent. It does not matter whether mankind as a whole is theoretically able to mitigate the disaster, because that is not directly relevant to individual decisions about having children.

"And I guess I can work with the 1-p as it just means "stable comparable utility.""

I am not sure whether I understand what you mean, but I just meant a world that can develop in one of two directions, where you have subjective probabilities over the two.

"What would I do? In this case, how we lived might still leave a testimonial of our life that the aliens might care about. Did we give up? Did we care about our families? Should we try to warn other civilizations about these aliens? So in this scenario, there is still a balance between what I could do, if not against it, then still about it. This trades against having children."

Yes, technically you might be able to do something relevant, but this is not why I came up with the thought experiment. So you can just assume that "you" in this scenario will not be able to save the world.

"But note that simply posing the scenario with a non-zero p slightly alters the decision for children: The presence of aliens would alter society and people might want to do something about it."

How would that affect the decision, in your opinion?

"Also, decision for children is a shared decision, so I additionally assume the partner would match this."

Even if something is a shared decision, you can always first think about your own preferences.

"But then it still depends on life circumstances. So I guess at p=~2% it would very slightly reduce the decision for children. From p=~25% it would start to draw more capacity. At =~80% there wouldn't be much left to have children."

Thanks for the precise answer!

I am surprised about the 2%. May I ask 

  • what your expectations about global catastrophic risks are for the next decades? (No extremely precise answer necessary.)
  • whether "would start to draw more capacity" implies that the whole expectation would only affect your decisions because you believe you would invest your time into saving the world, and not because of the effect of the expected future development on your (hypothetical) child's life?

Comment by Sherrinford on Would you have a baby in 2024? · 2024-06-17T19:29:03.590Z · LW · GW

First of all, thanks for the detailed answer. I do not fully understand your position here, but the clarity of the answer to the thought experiment was helpful.

You reject the usefulness of the thought experiment, but I do not really understand why. Your reasons are that "in practice, there is almost always a possibility to affect the outcome" and that "the outcome is also almost never absolute".

With respect to the possibility to affect the outcome, I would say that I, as an individual, have to take most global situations as given. With respect to whether the outcome is "absolute", you seem to mean that it is not a certain outcome or that not literally everybody would die. If it is just about the certainty, well, I included the subjective probability in the thought experiment. If it is about whether everybody dies, of course you can think of any probability distribution of outcomes, but what is gained by that?

Then you say: "And on top of that, my presumed inability to influence outcomes somehow also doesn't influence by interest in wanting to have children." I do not really understand that sentence. Do you imply that powerful people naturally have a different amount of interest in wanting to have children? If so, why does that matter for the decision in the thought experiment?

You ask what I want to gain from this thought experiment.

Following lesswrong and EA community discussions about having children, I get the impression that the factors influencing the decision seem to be:

  • potentially reduced productivity (less time and energy for saving the world?),
  • immediate happiness / stress effect on the parents.

However, the ethics of bringing children into the world seem to be touched upon only superficially. This seems strange to me for a community in which thinking about ethics and thinking about the future are seen as valuable. @Julia Wise, writing about "Raising children on the eve of AI", says: "This is all assuming that the worst case is death rather than some kind of dystopia or torture scenario. Maybe unsurprisingly, I haven’t properly thought through the population ethics there. I find that very difficult to think about, and if you’re on the fence you should think more about it." At the same time, the median community member's expectation about the future seems very gloomy to me (though there are also people who seem very excited about a future of mind uploading, turning the world into a holodeck, or whatever).

I am confused about this attitude, and I try to determine whether

  • I just do not understand whether people on lesswrong expect the future to be bad or good,
  • people think that even in the case of a disaster with relevant likelihood, the future will definitely not include suffering that could outweigh some years of happiness,
  • people (who have children) have not thought about this in detail,
  • people do not think that any of this matters for some reason I overlook,
  • people tend to be taken in by motivated reasoning,
  • or something else.

So I tried to design a clear scenario to understand some parameters driving the decisions.

Why did I ask you about it? You have four children, you take part in discussions about the topic, and you also write about alignment / AI risk.

Comment by Sherrinford on Would you have a baby in 2024? · 2024-06-16T16:12:06.658Z · LW · GW

Maybe I am not clear enough, or we are talking past each other. So forget about the overpopulation. Suppose you lived in a universe where, in the year in which you decided to have your first child, some omniscient demon appeared and told you that

  • with probability p, some disaster happens 10 years later which causes everybody to die of starvation with certainty (with all the societal and psychological side effects that such a famine implies),
  • with probability 1-p, life in your country remains forever as it was in that year (it's hypothetical, so please do not question whether that is possible).

So my questions:

  • Would there be some p where you would have decided not to have children?
  • How would the quality / kind of the disaster affect the decision?
  • How would the time horizon (10 years in my example) affect the decision?
  • Are there other societal or other global conditions where you think people should not have children?

Comment by Sherrinford on Would you have a baby in 2024? · 2024-06-16T14:13:43.929Z · LW · GW

Sorry, I don't fully understand the answer. 1) You think that you would have reacted to perceived overpopulation by having fewer children, but 2) at the same time, you think that expecting your children to live in a permanent postapocalyptic fight for food is not such a strong argument, because that was normal in former times. But if the consideration in point 2 did not matter, why would the perception of overpopulation have mattered to you?

Comment by Sherrinford on Would you have a baby in 2024? · 2024-06-16T10:20:24.562Z · LW · GW

So you thought that overpopulation was not much of a concern and the world was not so bad, right? But if you had thought that overpopulation (or something else) was a really severe problem that would also have had very bad effects on your children (for example, if you had expected with high probability that their life would be a permanent postapocalyptic fight for food), would that have affected your decision?

Comment by Sherrinford on Would you have a baby in 2024? · 2024-06-16T10:16:08.591Z · LW · GW

How would you expect the end of the world to take place if the AI doom scenarios turn out to be true?

Comment by Sherrinford on Sherrinford's Shortform · 2024-06-12T21:39:04.325Z · LW · GW

How much does technological progress of consumer goods increase happiness? 

Sure, I prefer a modern TV to the TV we had in the '90s if I have to choose, but if I compare "pleasure I had from watching Star Trek TNG, which was only possible at 3 pm" to "pleasure I have from watching one of the many currently available TV shows at any time of the day", I am not sure the second one feels much better.

Comment by Sherrinford on Raising children on the eve of AI · 2024-05-28T18:37:29.900Z · LW · GW

On the one hand, I understand your point that preparing for a breakdown of the economy may be more important if the likelihood of disasters in general increases; even though the most catastrophic AI scenarios would not leave a place to flee to, maybe the likelihood of more mundane disasters also increases? However, it is also possible that the marginal expected value of investing time in such skills goes down. After all, in a more technological society, learning technology skills may be more important than before, so the opportunity cost goes up.

Comment by Sherrinford on Raising children on the eve of AI · 2024-05-27T15:28:39.021Z · LW · GW

So that is not related to AI, right?

Comment by Sherrinford on Raising children on the eve of AI · 2024-05-27T06:27:01.551Z · LW · GW

About your "prepper" points, it would be helpful to know the scenario you have in mind here.

Comment by Sherrinford on Raising children on the eve of AI · 2024-05-27T06:26:14.568Z · LW · GW

2. AGI will more significantly disrupt white-collar job markets than blue collar job markets (with a few exceptions). Consequently, you might help your children develop some hard skills (eg. how to repair an appliance, build something with wood, patch clothing, change your car oil, an Arduino project etc.)

If we really see AI radically changing everything, why should this assessment still be correct in 10 years? I assume that 30 years ago, people thought the opposite was true. It seems hard to be sure about what to teach children; I do not really see what the uniquely useful skills of a human will be in 2040 or 2050. Nonetheless, developing these skills as a hobby, without expecting them to become specific basic job skills, may be a good idea, also with respect to your point 3.

Comment by Sherrinford on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-04-05T07:16:28.996Z · LW · GW

I said Estévez because he is the less famous aspect of the person, not because I super-finetuned the analogy.

Updating your trust in your therapist seems to be a legitimate interest even if he is not famous for his psychiatric theory or practice. Suppose, for example, that an influential and controversial (e.g., white-supremacist) politician spent half his week working as a psychiatrist and the other half doing politics, but somehow did the former anonymously. I think patients might legitimately want to know that their psychiatrist is this person. This might even be true if the psychiatrist is only locally active, like the head of a KKK chapter. And journalists might then find it inappropriate to treat the two identities as completely separate.

I assume there are reasons for publishing the name and reasons against. It is not clear that being a psychiatrist is always an argument against.

Part of the reason is, possibly, that patients often cannot directly judge the quality of therapy. Therapy is a credence good and therapists may influence you in ways that are independent of your depression or anorexia. So having more information about your psychiatrist may be helpful. At the same time, psychiatrists try to keep their private life out of the therapy, for very good reasons. It is not completely obvious to me where journalists should draw the line.

Comment by Sherrinford on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-04-03T18:03:20.483Z · LW · GW

Estévez. If I recall this correctly, Scott thought that potential or actual patients could be influenced in their therapy by knowing his public writings. (But I may misremember that.)

Comment by Sherrinford on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-30T22:30:22.916Z · LW · GW

Suppose Carlos Irwin Estévez worked as a therapist part-time, and he kept his identities separate such that his patients could not use his publicly known behavior as Sheen to update on whether they should believe his methods work. Should journalists writing about the famous Estévez method of therapy keep his name out of the article to support him?

Comment by Sherrinford on Shortform · 2024-03-15T08:07:26.256Z · LW · GW

What is that reason you are referring to?

Comment by Sherrinford on "How could I have thought that faster?" · 2024-03-13T07:24:38.201Z · LW · GW

Thanks for giving a useful example. 

For most people I guess it would be better to delete the phrase "I'm such a fool" from the evaluation, in order to avoid self-blame that becomes a self-image.

Comment by Sherrinford on Sherrinford's Shortform · 2024-02-20T07:30:12.839Z · LW · GW

The "Snake cult of consciousness" theory sounds extremely fascinating. Qt the same time, it also sounds like the explanations why the pyramids were built by aliens. For laypeople, it is hard to distinguish between Important insights and clever nonsense.

Comment by Sherrinford on Open Thread – Winter 2023/2024 · 2024-02-04T21:52:54.741Z · LW · GW

Thank you very much. Why would liability for harms caused by AIs discourage the publishing of the weights of the most powerful models?

Comment by Sherrinford on Open Thread – Winter 2023/2024 · 2024-01-28T19:43:46.205Z · LW · GW

Okay, maybe I should rephrase my question: What is the typical AI safety policy they would enact if they could advise the president, parliament, and other real-world institutions?

Comment by Sherrinford on [Repost] The Copenhagen Interpretation of Ethics · 2024-01-27T08:23:27.638Z · LW · GW

https://laneless.substack.com/p/the-copenhagen-interpretation-of-ethics Isn't this the substack of the original author?

Comment by Sherrinford on Open Thread – Winter 2023/2024 · 2024-01-23T17:23:52.015Z · LW · GW

By now there are several AI policy organizations. However, I am unsure what the typical AI safety policy is that any of them would enforce if they had unlimited power. Is there a summary of that?

Comment by Sherrinford on Open Thread – Winter 2023/2024 · 2024-01-07T17:34:59.418Z · LW · GW

I don't really understand why Substack became so popular compared to, e.g., WordPress. Is Substack writing easier to monetize?

Comment by Sherrinford on Would you have a baby in 2024? · 2024-01-06T22:36:20.489Z · LW · GW

So your timelines are the same as in 2018?

Thanks for the article recommendations.

Comment by Sherrinford on Would you have a baby in 2024? · 2024-01-06T22:29:22.137Z · LW · GW

Did you take such things into account when you made the decision, or decisions?

Comment by Sherrinford on Open Thread – Winter 2023/2024 · 2024-01-06T14:31:55.194Z · LW · GW

Almost all the blogs in the world seem to have switched to Substack, so I'm wondering whether I'm the only one whose browser is very slow in loading and displaying comments on Substack blogs. Or is this a Firefox problem?

Comment by Sherrinford on Would you have a baby in 2024? · 2023-12-26T10:55:09.978Z · LW · GW

I think the "stable totalitarianism" scenario is less science-fiction than the annihilation scenario, because you only need an extremely totalitarian state (something that already exists or existed) enhanced by AI. It is possible that this would come along with random torture. This would be possible with a misguided AI as well.

Comment by Sherrinford on Most People Don't Realize We Have No Idea How Our AIs Work · 2023-12-25T22:46:03.730Z · LW · GW

I don't fully understand your implication that unpredictable things should not be frightening. In general, there is a difference between understanding and creating. The weather is unpredictable, but we did not create it; where we did and do create it, we indeed seem to be too careless. For human brains, we at least know that preferences are mostly not too crazy, and if they are, capabilities are not superhuman. With respect to the immune system, understanding may not be very deep, but intervention is mostly limited by understanding, and where that is not true, we may be in trouble.

Comment by Sherrinford on Would you have a baby in 2024? · 2023-12-25T22:03:14.291Z · LW · GW

Do you think there could be an amount of suffering at the end of a life that would outweigh 20 good years? (Including that this end could take very long.)

Comment by Sherrinford on Would you have a baby in 2024? · 2023-12-25T20:31:00.364Z · LW · GW

Thanks. What are the things that AI will, in 10, 20 or 30 years, have "trouble with", and what are the "relevant skills" to train your kids in?

Comment by Sherrinford on Would you have a baby in 2024? · 2023-12-25T20:15:53.689Z · LW · GW

The post's starting point is "how fast AI is advancing and all the uncertainty associated with that (unemployment, potential international conflict, x-risk, etc.)". You don't need concrete high-p-of-doom timelines for that, or even to expect AGI at all. It is not necessary for "potential international conflict", for example.

Comment by Sherrinford on Would you have a baby in 2024? · 2023-12-25T20:07:57.854Z · LW · GW

Could you please briefly describe the median future you expect?

Comment by Sherrinford on EU policymakers reach an agreement on the AI Act · 2023-12-16T16:49:55.026Z · LW · GW

A minor point regarding the EU's institutions:

  • The European Parliament does not have "population-proportional membership from each country", but: "the seats are distributed according to "degressive proportionality", i.e., the larger the state, the more citizens are represented per MEP. As a result, Maltese and Luxembourgish voters have roughly 10x more influence per voter than citizens of the six largest countries." (https://en.wikipedia.org/wiki/European_Parliament)
  • The Council of the EU does not have "one vote per country", but its rules usually prescribe a more complicated majority rule and sometimes unanimity.

Comment by Sherrinford on 2023 Unofficial LessWrong Census/Survey · 2023-12-04T21:11:43.960Z · LW · GW

I completed the survey! 

I'd still like to ask those questions (or a similar set of questions) somewhere. If someone has an idea where and how that could make sense, feel free to answer that as a comment to mine.