Mateusz Bagiński's Shortform

post by Mateusz Bagiński (mateusz-baginski) · 2022-12-26T15:16:17.970Z · LW · GW · 15 comments

15 comments

Comments sorted by top scores.

comment by Mateusz Bagiński (mateusz-baginski) · 2025-02-10T15:58:58.736Z · LW(p) · GW(p)

Are there any memes prevalent in the US government that make racing to AGI with China look obviously foolish?

The "let's race to AGI with China" meme landed for a reason. Is there something making the US gov susceptible to some sort of counter-meme, like the one expressed in this comment by Gwern [LW(p) · GW(p)]?

Replies from: sharmake-farah, Davidmanheim
comment by Noosphere89 (sharmake-farah) · 2025-02-10T17:02:19.524Z · LW(p) · GW(p)

The claim that China has no interest in an AI arms race is now looking false: apparently China as a state has devoted $137 billion to AI, which is at least a yellow flag that they are interested in racing.

Replies from: MondSemmel
comment by MondSemmel · 2025-02-10T20:50:13.010Z · LW(p) · GW(p)

apparently China as a state has devoted $1 trillion to AI

Source? I only found this article about 1 trillion Yuan, which is $137 billion.
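
As a rough sanity check of that conversion (assuming an exchange rate of roughly 7.3 CNY per USD, which is my assumption rather than a figure from the article):

$$\frac{1{,}000\ \text{billion CNY}}{\approx 7.3\ \text{CNY/USD}} \approx 137\ \text{billion USD}$$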

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2025-02-10T21:05:10.584Z · LW(p) · GW(p)

Yeah, that's what I was referring to; I thought it was actually a trillion dollars. Sorry for getting the numbers wrong.

comment by Davidmanheim · 2025-02-10T17:21:04.024Z · LW(p) · GW(p)

I'd vainly hope that everyone would know about the zero-sum nature of racing to the apocalypse from nuclear weapons, but the parallel isn't great, and no one seems to have learned the lesson anyway, given the failure to hold SALT III or even follow through on START II.

comment by Mateusz Bagiński (mateusz-baginski) · 2023-07-21T09:48:11.074Z · LW(p) · GW(p)

I've read the SEP entry on agency and was surprised by how irrelevant it feels to whatever it is that makes me interested in agency. Here I sketch some of these differences by comparing an imaginary Philosopher of Agency (roughly the embodiment of the approach the "philosopher community" seems to take to these topics) and an Investigator of Agency (roughly the approach exemplified by the LW/AI Alignment crowd).[1]

If I were to put my finger on one specific difference, it would be that Philosopher is looking for the true-idealized-ontology-of-agency-independent-of-the-purpose-to-which-you-want-to-put-this-ontology, whereas Investigator wants a mechanistic model of agency, one that includes a sufficient understanding of goals, values, the dynamics of how agency develops (or whatever adjacent concepts we end up using after conceptual refinement and deconfusion), etc.

Another important difference is Investigator's readiness to take their intuitions as a starting point while assuming they will require at least a bit of refinement before they start robustly carving reality at its joints. Sometimes you may even need to discard almost all of your intuitions and carefully rebuild your ontology from scratch, bottom-up. Philosopher, on the other hand, seems to (at least more often than Investigator) implicitly assume that their System 1 intuitions can be used as the ground truth of the matter and that the quest for formalizing agency ends when the formalism perfectly captures all of our intuitions without introducing any weird edge cases.

Philosopher asks, "what does it mean to be an agent?" Investigator asks, "how do we delineate agents from non-agents (or specify some spectrum of relevant agency-adjacent properties), such that the distinction tells us something of practical importance?"

Deviant causal chains are posed as a "challenge" to "reductive" theories of agency, which try to explain agency by reducing it to causal networks.[2] So what's the problem? Quoting:

… it seems always possible that the relevant mental states and events cause the relevant event (a certain movement, for instance) in a deviant way: so that this event is clearly not an intentional action or not an action at all. … A murderous nephew intends to kill his uncle in order to inherit his fortune. He drives to his uncle’s house and on the way he kills a pedestrian by accident. As it turns out, this pedestrian is his uncle.

At least in my experience, this is another case of a Deep Philosophical Question that no longer feels like a question once you've read The Sequences or had some equivalent exposure to the rationalist (or at least LW-rationalist) way of thinking.

About a year ago, I had a college course in philosophy of action. I recall being assigned a reading in which the author basically argued that for an entity to be an agent, it needs to have an embodied feeling-understanding of action. Otherwise, it doesn't act, so it can't be an agent. No, it doesn't matter that it's out there disassembling Mercury and reusing its matter to build a Dyson Sphere. It doesn't have the relevant concept of action, so it's not an agent.


  1. This is not a general diss on philosophizing; I certainly think there is value in philosophy-like thinking. ↩︎

  2. My wording, not SEP's, but I think it's correct. ↩︎

Replies from: dylan-2
comment by Dylan (dylan-2) · 2024-09-24T12:27:25.044Z · LW(p) · GW(p)

You are suffused with a return-to-womb mentality, desperately destined for the material tomb. Your philosophy is unsupported. Why do AI researchers think they are philosophers when it's very clear they are deeply uninvested in the human condition? There should be another term, 'conjurers of the immaterial snake oil', to describe the actions you take when you riff on Dyson Sphere narratives to legitimize your paltry and thoroughly uninteresting research.

comment by Mateusz Bagiński (mateusz-baginski) · 2025-02-13T12:28:06.195Z · LW(p) · GW(p)

Is there any research on how the actual impact of [the kind of AI that we currently have] lives up to the expectations from the time [shortly before we had that kind of AI but close enough that we could clearly see it coming]?

This is vague, but some not-unreasonable candidates for the second time period would be:

  • After OA Copilot, before ChatGPT (so summer-autumn 2022).
  • After PaLM, before Copilot.
  • After GPT-2, before GPT-3.

I'm also interested in research on the historical over- and under-performance of other tech (where we kinda saw it coming, or could have) relative to expectations.

comment by Mateusz Bagiński (mateusz-baginski) · 2023-03-14T17:42:53.455Z · LW(p) · GW(p)

Does severe vitamin C deficiency (i.e. scurvy) lead to oxytocin depletion?

According to Wikipedia:

The activity of the PAM enzyme [necessary for releasing oxytocin from the neuron] system is dependent upon vitamin C (ascorbate), which is a necessary vitamin cofactor.

I.e., if you don't have enough vitamin C, your neurons can't release oxytocin. Common sense suggests this should lead to some psychological/neurological problems, maybe with empathy/bonding/social cognition?

Quick googling of "scurvy mental problems" or "vitamin C deficiency mental symptoms" doesn't return much on that. This meta-analysis finds some association of sub-scurvy vitamin C deficiency with depression, mood problems, worse cognitive functioning, and some other psychiatric conditions, but no mention of what I'd expect from a lack of oxytocin. Possibly oxytocin is produced in such small quantities that very little vitamin C is needed, so the deficiency doesn't really matter? But on the other hand (Wikipedia again):

By chance, sodium ascorbate by itself was found to stimulate the production of oxytocin from ovarian tissue over a range of concentrations in a dose-dependent manner.

So either this (i.e. disturbed social cognition) is not how we should expect oxytocin deficiency to manifest, or vitamin C deficiency manifests in so many ways in the brain that you don't even bother with "they have worse theory of mind than when they ate one apple a day".

Replies from: Richard_Kennaway, carl-feynman
comment by Richard_Kennaway · 2023-03-17T20:13:13.924Z · LW(p) · GW(p)

"they have worse theory of mind than when they ate one apple a day".

Just a detail, but shouldn't this be one orange a day? Apples do not contain much vitamin C.

Replies from: mateusz-baginski
comment by Mateusz Bagiński (mateusz-baginski) · 2023-03-18T06:36:35.995Z · LW(p) · GW(p)

Huh, you're right. I thought most fruits have enough to cover daily requirements.

comment by Carl Feynman (carl-feynman) · 2023-03-17T20:28:42.745Z · LW(p) · GW(p)

Googling for "scurvy low mood", I find plenty of sources indicating that scurvy is accompanied by "mood swings — often irritability and depression". IIRC, this has been remarked upon for at least two hundred years.

Replies from: mateusz-baginski
comment by Mateusz Bagiński (mateusz-baginski) · 2023-03-18T06:39:18.027Z · LW(p) · GW(p)

That's also what this meta-analysis found, but I was mostly wondering about social cognition deficits (though, looking back, I see that wasn't clear in the original shortform).

comment by Mateusz Bagiński (mateusz-baginski) · 2022-12-26T15:16:18.230Z · LW(p) · GW(p)

Mlyyrczo, I summon Thee!

Replies from: mlyyrczo
comment by mlyyrczo · 2022-12-26T21:11:02.273Z · LW(p) · GW(p)

Hi.