Posts

Why comparative advantage does not help horses 2024-09-30T22:27:57.450Z
An "Observatory" For a Shy Super AI? 2024-09-27T21:22:40.296Z
Pondering how good or bad things will be in the AGI future 2024-07-09T22:46:31.874Z
The Underreaction to OpenAI 2024-01-18T22:08:32.188Z
Taboo "human-level intelligence" 2023-02-26T20:42:25.880Z
A poem co-written by ChatGPT 2023-02-16T10:17:36.487Z
Two very different experiences with ChatGPT 2023-02-07T13:09:27.389Z
Which intro-to-AI-risk text would you recommend to... 2022-08-01T09:36:11.733Z
Covid-19 in India: Why didn't it happen earlier? 2021-04-27T19:13:00.798Z
“Meditation for skeptics” – a review of two books, and some thoughts 2021-03-20T23:35:23.037Z
Should it be a research paper or a blog post? 2020-09-24T08:09:08.179Z
Book Review: Fooled by Randomness 2020-07-13T21:02:36.549Z
Don't punish yourself for bad luck 2020-06-24T21:52:37.045Z
Dietary Debates among the Fruit Gnomes 2020-06-03T14:09:15.561Z
Sherrinford's Shortform 2020-05-02T17:19:22.661Z
How to navigate through contradictory (health/fitness) advice? 2019-08-05T20:58:14.659Z
Is there a standard discussion of vegetarianism/veganism? 2018-12-30T20:22:33.330Z
Cargo Cult and Self-Improvement 2018-08-07T12:45:30.661Z

Comments

Comment by Sherrinford on Fertility Roundup #3 · 2024-11-18T23:07:45.081Z · LW · GW

Here was the combined effect

Where do the numbers come from?

Comment by Sherrinford on Open Thread Fall 2024 · 2024-11-18T23:03:53.070Z · LW · GW

Having read something about self-driving cars actually being a thing now, I wonder how the trolley problem (and whatever other ethics problems come up) was addressed in the relevant regulation.

Comment by Sherrinford on Fertility Roundup #3 · 2024-11-18T22:59:39.036Z · LW · GW

Bryan Caplan: Conformity drives a lot of fertility behavior. The main driver of the Baby Boom really was, “Everyone else is having big families; we should, too.”

 

Is that just a claim or does he provide evidence for that?

Comment by Sherrinford on Fertility Roundup #3 · 2024-11-18T22:58:10.554Z · LW · GW
  • Except then we started shaming ‘incorrectly’ having children directly.
  • We have also continuously raised the bar on what counts as ‘incorrect.’

This is not so obviously correct, or at least the "bar" seems multidimensional. Some decades ago, it was considered shameful for an unmarried couple to have children, and in particular it was a great shame for a single mother to have children. At least where I live, that has changed.

Comment by Sherrinford on Fertility Roundup #3 · 2024-11-18T22:55:07.722Z · LW · GW

The problem is that the shaming we used to do mostly did have an underlying societal purpose.

This claim would be stronger with some examples.

Comment by Sherrinford on Monthly Roundup #24: November 2024 · 2024-11-18T21:35:30.754Z · LW · GW

"Most people who want them all fired would be totally fine paying the extra salaries indefinitely. "

That is likely wrong, but in any case it is just a claim and should be phrased as one.

Comment by Sherrinford on Monthly Roundup #24: November 2024 · 2024-11-18T19:44:44.431Z · LW · GW

"Stephanie Murray reports that the village thing can still be done, and in particular has pulled off a ‘baby swapping’ system that periodically pools child care so parents can have time for themselves."

Maybe there is more detail in the linked blog, but just from this post it sounds like a reinvention of the kindergarten.

Comment by Sherrinford on Fertility Roundup #3 · 2024-11-15T22:27:32.081Z · LW · GW

Offering $7,500 total is likely on the high end of what is practical before people start inefficiently gaming the system.

What does that mean? The Wikipedia article on child benefit lists several examples of child benefit systems that pay more than $7,500.

Comment by Sherrinford on Seven lessons I didn't learn from election day · 2024-11-15T21:08:03.218Z · LW · GW

The most important fact about politics in 2024 is that across the world, it's a terrible time to be an incumbent. For the first time this year since at least World War II, the incumbent party did worse than it did in the previous election in every election in the developed world. ...

What influence does the exclusion of "years where fewer than five countries had elections" in the graph have?

Comment by Sherrinford on Was the K-T event a Great Filter? · 2024-10-22T21:31:40.524Z · LW · GW

Does this question require that there is only one big filter per species?

Comment by Sherrinford on Open Thread Fall 2024 · 2024-10-20T21:49:33.464Z · LW · GW

I appreciate that you posted a response to my question. However, I assume there is some misunderstanding here.

Zvi notes that he will not "be engaging with any of the arguments against this, of any quality" (which suggests that there are also good or relevant arguments). Zvi includes the statement that "AI is going to kill everyone", and notes that he "strongly disagrees". 

As I asked for "arguments related to or a more detailed discussion" of these issues, you mention some people you call "random idiots" and state that their arguments are "batshit insane". It thus seems like a waste of time trying to find arguments relevant to my question based on these keywords. 

So I wonder: was your answer actually meant to be helpful?

Comment by Sherrinford on Open Thread Fall 2024 · 2024-10-20T19:02:51.934Z · LW · GW

So you think that looking up "random idiots" helps me find "arguments related to or a more detailed discussion about this disagreement"?

Comment by Sherrinford on Open Thread Fall 2024 · 2024-10-16T19:05:17.272Z · LW · GW

In Fertility Rate Roundup #1, Zvi wrote   

"This post assumes the perspective that more people having more children is good, actually. I will not be engaging with any of the arguments against this, of any quality, whether they be ‘AI or climate change is going to kill everyone’ or ‘people are bad actually,’ other than to state here that I strongly disagree." 

Does anyone of you have an idea where I can find arguments related to or a more detailed discussion about this disagreement (with respect to AI or maybe other global catastrophic risks; this is not a question about how bad climate change is)?

Comment by Sherrinford on Open Thread Fall 2024 · 2024-10-14T22:08:16.342Z · LW · GW

Expecting that, how do you prepare?

Comment by Sherrinford on European Progress Conference · 2024-10-07T13:40:17.196Z · LW · GW

It is an interesting question how justified this stereotype is, given that many regulations aim at creating a single market and reducing trade barriers.

Comparing EU growth to US growth is hard for several reasons, for instance demography, but also the decarbonization efforts of the EU.

Comment by Sherrinford on European Progress Conference · 2024-10-07T13:13:05.009Z · LW · GW

I know the internal European discourse, which is why I think depicting politicians in Europe as being mostly impervious to "pro-growth ideas" seems like a strawman. It is mainstream in the EU to try to find ways to achieve higher economic growth rates. Everybody is talking about deregulation, but there are very different ideas about what kind of policies would lead to higher growth rates.

Comment by Sherrinford on European Progress Conference · 2024-10-07T11:22:18.627Z · LW · GW

are not completely impervious to pro-growth ideas

 

Depicting "eurocrats" as mostly impervious to "pro-growth ideas" seems like a strawman.

Comment by Sherrinford on European Progress Conference · 2024-10-07T11:19:19.650Z · LW · GW
Comment by Sherrinford on European Progress Conference · 2024-10-07T11:08:51.925Z · LW · GW

This stuff is scary: I've seen degrowthers

It is unclear how strongly such degrowthers are related to the beyond-growth conference used as an example in the previous sentence.

Comment by Sherrinford on European Progress Conference · 2024-10-07T11:07:12.239Z · LW · GW

European parliament even hosted a degrowth conference.

 

The linked abstract does not contain the word "degrowth". The title is "Beyond growth: Pathways towards sustainable prosperity in the EU". The abstract is relatively unclear but, among other things, seems to criticize GDP as a measure and to speak positively of "research and innovation". The executive summary of the study that can be found there also speaks positively of delivering "greener and more sustainable growth through technological or social innovations" and of "decoupling of economic growth from increased emissions of carbon dioxide". So in general, this seems to be about limiting the use of natural resources in order to stay within sustainable levels.

Comment by Sherrinford on European Progress Conference · 2024-10-07T10:51:24.586Z · LW · GW

Europe has become known as a hub of degrowth.

 

It is unclear what this claim is supposed to mean. The characters "europ" do not appear in the Conclusions of the linked article. And it is not clear what the fact that some authors of papers covering "degrowth" come from Europe, whatever that means in the specific paper, is supposed to prove.

Comment by Sherrinford on Sherrinford's Shortform · 2024-10-02T14:47:01.167Z · LW · GW

In the last weeks, I saw some posts or comments arguing why it would be in the self-interest of an extremely powerful AI to leave some power or habitat or whatever to humans. This seems to try to answer the broader question "Why should an AI do things that we want even though we are powerless?" But it skips the complicated question "What do we actually want an AI to do?" If we can answer that second question, then maybe the whole "please don't do things that we really do not want" quest becomes easier to solve.

Comment by Sherrinford on Why comparative advantage does not help horses · 2024-10-02T14:07:15.872Z · LW · GW

Right; my point was just that the hypothetical superintelligence does not need to trade with humans if it can force them; therefore trade-related arguments are not relevant. However, it is of course likely that such a superintelligence would neither want to trade nor care enough about the production of humans to force them to do anything.

Comment by Sherrinford on "Slow" takeoff is a terrible term for "maybe even faster takeoff, actually" · 2024-10-02T09:52:19.245Z · LW · GW

I just wanted to add that I proposed this because many other possible terms (like "smooth") might have positive connotations.

Comment by Sherrinford on Why comparative advantage does not help horses · 2024-10-01T22:36:07.891Z · LW · GW

With respect to the horses, I did not check Eliezer's claim. However, the exact numbers of the horse population do not really seem to matter for Eliezer's point or for mine. The same is true for the rebound of the Native American population.

Comment by Sherrinford on "Slow" takeoff is a terrible term for "maybe even faster takeoff, actually" · 2024-09-30T18:40:01.943Z · LW · GW

exponential / explosive

Comment by Sherrinford on Sherrinford's Shortform · 2024-09-28T07:26:31.158Z · LW · GW

Thanks for helping. In the end, I deleted the post and started from scratch and then it worked.

Comment by Sherrinford on Sherrinford's Shortform · 2024-09-27T13:58:38.995Z · LW · GW

Sorry, but where/how would I do that?

Comment by Sherrinford on Sherrinford's Shortform · 2024-09-27T06:35:39.048Z · LW · GW

When I write a post and select text, a menu appears where I can select  text appearance properties etc. However, in my latest post, this menu does not appear when I edit the post and select text. Any idea why that could be the case?

Comment by Sherrinford on Sherrinford's Shortform · 2024-09-27T05:45:54.200Z · LW · GW

That would be great, but maybe it is covered much more in your bubble than in large newspapers etc? Moreover, if this is covered like the OpenAI-internal fight last year, the typical news outlet comment will be: "crazy sci-fi cult paranoid people are making noise about this totally sensible change in the institutional structure of this very productive firm!"

Comment by Sherrinford on Sherrinford's Shortform · 2024-09-26T19:00:37.395Z · LW · GW

My impression is that the OpenAI thing has a larger effective negative impact on the world than the FTX thing, but fewer people will notice it.

Comment by Sherrinford on The Best Lay Argument is not a Simple English Yud Essay · 2024-09-13T10:33:08.597Z · LW · GW

It probably depends on whom you are communicating to. I guess there are people not used to such analogies or thought experiments who would immediately think, "This is a silly question, orangutans cannot invent humans!", but who would still think about the question in the way you intend if you break it down into several steps.

Comment by Sherrinford on Sherrinford's Shortform · 2024-09-07T17:12:59.840Z · LW · GW

The goal would be to force forecasters to make internally consistent forecasts. That should reduce noise: firstly by reducing unintentional errors; secondly by cleaning up probabilities (by quasi-automatically adjusting the percentages of candidates who may previously have been considered low-but-relevant-probability candidates); and thirdly by crowding out forecasters who do not want to give consistent forecasts (which I assume correlates with low-quality forecasts). It should also make forecasts more legible and thus increase the demand for Metaculus.

Metaculus currently lists 20 people who could be elected US President ("This question will resolve as Yes for the person who wins the 2024 US presidential election, and No for all other options.", "Closes Nov 7, 2024"), and the sum of their probabilities is greater than 104%. Either this is not consistent, or I don't understand it; and, with all due modesty, if that is the reason for my confusion, then many people in the target audience will be confused as well.
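The fix I have in mind could be sketched as a simple renormalization step; the candidate names and raw probabilities below are made up for illustration and are not taken from Metaculus:

```python
# Sketch: force a set of mutually exclusive forecasts to sum to 100%.
# Names and numbers are hypothetical, chosen so the raw sum is 104%,
# like the inconsistency described above.
raw_forecasts = {
    "Candidate A": 0.55,
    "Candidate B": 0.45,
    "Candidate C": 0.03,
    "none of the above": 0.01,
}

total = sum(raw_forecasts.values())  # 1.04 in this example

# Rescale every option by the same factor so they sum to exactly 1,
# preserving the forecaster's relative odds between options.
consistent = {name: p / total for name, p in raw_forecasts.items()}

assert abs(sum(consistent.values()) - 1.0) < 1e-9
```

A platform could either apply this automatically or refuse to accept a set of forecasts whose total deviates from 100%.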

Comment by Sherrinford on Kaj's shortform feed · 2024-09-07T16:36:58.594Z · LW · GW

The link is a link to a Facebook webpage telling me that I am about to leave Facebook. Is that intentional?

Comment by Sherrinford on Sherrinford's Shortform · 2024-09-07T12:35:50.752Z · LW · GW

Metaculus should adjust election forecasting questions such that forecasters are forced to make their forecasts add up to 100% over all options (with an additional option "none of the above").

Comment by Sherrinford on Beware the science fiction bias in predictions of the future · 2024-08-20T14:42:31.096Z · LW · GW

I agree that nanobots are not a necessary part of AI takeover scenarios. However, I perceive them as a very illustrative kind of "the AI is smart enough for plans that make resistance futile and make AI takeover fast" scenario.

The word "typical" is probably misleading, sorry; most scenarios on LW do not include nanobots. OTOH, LW is a place where such scenarios are at least taken seriously.

So p(scenario contains nanobots | LW or rationality community is the place of discussion of the scenario) is probably not very high, but p(LW or rationality community is the place of discussion of the scenario | scenario contains nanobots) probably is...?
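The relation between those two conditional probabilities is just Bayes' rule; writing N for "scenario contains nanobots" and L for "LW or the rationality community is the place of discussion" (my shorthand, not notation from the original post):

```latex
P(L \mid N) \;=\; \frac{P(N \mid L)\, P(L)}{P(N)}
% P(L | N) can be large even while P(N | L) is small,
% as long as nanobot scenarios are rare overall, i.e. P(N) is small.
```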

Comment by Sherrinford on Beware the science fiction bias in predictions of the future · 2024-08-20T06:27:46.206Z · LW · GW

Yes, people care about things that are expected to happen today rather than in 1,000 years or later. That is a problem that people fighting against climate change have been pointing out for a long time. At the same time, with respect to AI, my impression is that many people do not react to developments that will quickly have strong implications, while some others write a lot about caring about humanity's long-term future.

Comment by Sherrinford on Beware the science fiction bias in predictions of the future · 2024-08-20T06:00:38.712Z · LW · GW

Thanks for the list! Yes, it is possible to imagine stories that involve a superintelligence.

I could not imagine a movie or successful story where everybody is killed by an AGI within seconds because it prepared that in secrecy, nobody realized it, and nobody could do anything about it. That seems to lack a happy ending, and even a story.

However, I am glad to be corrected, and will check the links, the stories will surely be interesting!

Comment by Sherrinford on Beware the science fiction bias in predictions of the future · 2024-08-19T21:28:00.089Z · LW · GW

Gnargh. Of course someone has a counterexample. But I don't think that is the typical LW AGI warning scenario. However, this could become a "no true Scotsman" discussion...

Comment by Sherrinford on Beware the science fiction bias in predictions of the future · 2024-08-19T18:31:51.590Z · LW · GW

I don't understand this question. Why would the answer to that question matter? (In your post, you write "If the answer is yes to all of the above, I’d be a little more skeptical.") Also, the "story" is not really popular. Outside of LessWrong discussions and a few other places, people seem to think that every expectation about the future that involves a superintelligent agentic AGI sounds like science fiction and therefore does not have to be taken seriously.

Comment by Sherrinford on Beware the science fiction bias in predictions of the future · 2024-08-19T17:26:10.844Z · LW · GW

Actually, LessWrong AGI warnings don't sound like they could be the plot of a successful movie. In a movie, John Connor organizes humanity to fight against Skynet. That does not seem plausible in LW-typical nanobot scenarios.

Comment by Sherrinford on What is AI Safety’s line of retreat? · 2024-07-28T21:55:41.762Z · LW · GW

Wouldn't way 2 likely create a new species unaligned with humans?

Comment by Sherrinford on Raising children on the eve of AI · 2024-07-27T13:41:10.830Z · LW · GW

Congratulations! If it is not too personal, would you share your considerations that informed your answer to that question?

Comment by Sherrinford on Raising children on the eve of AI · 2024-07-24T18:31:31.130Z · LW · GW

I don't understand your point, is it:

a) Life always ends with death, and many people believe that if their life ends with death they don't want to live at all or

b) Giving birth always gives "joy to yourself and the newborn" while also causing "suffering of other newborns". (If so, why?)

Comment by Sherrinford on Pondering how good or bad things will be in the AGI future · 2024-07-12T19:30:45.075Z · LW · GW

If you have a source on the Roman Empire, I'd be interested, both in plain descriptions of trends and in rigorous causal analysis. I've heard somewhere that there was a population growth-rate decline in the Roman Empire below replacement level, which doesn't seem to fit with all the claims about the causes of population growth-rate decline I have heard in my life.

Comment by Sherrinford on Pondering how good or bad things will be in the AGI future · 2024-07-12T15:13:14.248Z · LW · GW

With respect to what you write here and what you wrote earlier, in particular "and have solutions to some problems you wanted to solve, but could not solve them before, novel mental visualization of math novel to you, novel insights, and an entirely new set of unsolved problems for the next day, and all of your key achievements of the night surviving into subsequent days)", it seems to me that you are describing a situation in which there is a machine that can seemingly overcome all computational, cognitive and physical limits, but that will also empower you to overcome those limits.

The machine is completely different from all machines that humanity has invented: while, for example, a telescope enables us to see the surface of the moon, we do not depend on the goodwill of the telescope, and a telescope could not explore and understand the moon without us.

Maybe my imagination of such a new kind of post-singularity machine somehow leaps too far, but I just don't see a role for you in "solving problems" in this world. The machine may give you a set of problems or exercises to solve, and maybe you can be happy when you solved them like when you complete a level of a computer game.

The other experiences you describe seem like "science and philosophy at a rave/trance party", except that if you are serious about the omnipotence of the AGI, it's probably more like reading a science book or playing with a toy lab set at a rave/trance party, because if you could come up with any new insights, the AGI would have had them a lot earlier.

So in a way, it confirms my intuition that people who are positive about AGI seem to expect a world that is similar to being on (certain) drugs all of the time. But maybe I misunderstand that.

Comment by Sherrinford on Reliable Sources: The Story of David Gerard · 2024-07-11T20:29:05.948Z · LW · GW

I'm surprised about the "disagree" vote on my comment. How do you judge the truth of the cited statement based on the post? (I'd say linked webpages do not count: they are not part of the article, and the article does not list them as evidence.)

Comment by Sherrinford on Pondering how good or bad things will be in the AGI future · 2024-07-11T13:06:12.935Z · LW · GW

Thanks, mishka. That is a very interesting perspective! It does in some sense not feel "real" to me, but I'll admit that that is possibly the case due to some bias or limited imagination on my side.  

However, I'd also still be interested in your answer to the questions "How do you prepare? What do you expect your typical day to be like in 2050?"

Comment by Sherrinford on Reliable Sources: The Story of David Gerard · 2024-07-10T22:07:03.064Z · LW · GW

repeatedly caught publishing false information, conspiracy theories and hoaxes, [undue weight] for opinions

So, is this true or not? I cannot judge this based on your post.

Comment by Sherrinford on Sherrinford's Shortform · 2024-07-10T18:14:48.437Z · LW · GW

That sure is often the case, but not always, and therefore it may count as a bit of evidence, but not very strong evidence. Otherwise it would be very easy to automatically delete bots on Twitter and similar platforms.