Posts

AI Alignment Metastrategy 2023-12-31T12:06:11.433Z
Critical review of Christiano's disagreements with Yudkowsky 2023-12-27T16:02:50.499Z
Learning-theoretic agenda reading list 2023-11-09T17:25:35.046Z
[Closed] Agent Foundations track in MATS 2023-10-31T08:12:50.482Z
Which technologies are stuck on initial adoption? 2023-04-29T17:37:34.749Z
The Learning-Theoretic Agenda: Status 2023 2023-04-19T05:21:29.177Z
Compositional language for hypotheses about computations 2023-03-11T19:43:40.064Z
Human beats SOTA Go AI by learning an adversarial policy 2023-02-19T09:38:58.684Z
[Closed] Prize and fast track to alignment research at ALTER 2022-09-17T16:58:24.839Z
[Closed] Hiring a mathematician to work on the learning-theoretic AI alignment agenda 2022-04-19T06:44:18.772Z
[Closed] Job Offering: Help Communicate Infrabayesianism 2022-03-23T18:35:16.790Z
Infra-Bayesian physicalism: proofs part II 2021-11-30T22:27:04.744Z
Infra-Bayesian physicalism: proofs part I 2021-11-30T22:26:33.149Z
Infra-Bayesian physicalism: a formal theory of naturalized induction 2021-11-30T22:25:56.976Z
My Marriage Vows 2021-07-21T10:48:24.443Z
Needed: AI infohazard policy 2020-09-21T15:26:05.040Z
Introduction To The Infra-Bayesianism Sequence 2020-08-26T20:31:30.114Z
Deminatalist Total Utilitarianism 2020-04-16T15:53:13.953Z
The Reasonable Effectiveness of Mathematics or: AI vs sandwiches 2020-02-14T18:46:39.280Z
Offer of co-authorship 2020-01-10T17:44:00.977Z
Intelligence Rising 2019-11-27T17:08:40.958Z
Vanessa Kosoy's Shortform 2019-10-18T12:26:32.801Z
Biorisks and X-Risks 2019-10-07T23:29:14.898Z
Slate Star Codex Tel Aviv 2019 2019-09-05T18:29:53.039Z
Offer of collaboration and/or mentorship 2019-05-16T14:16:20.684Z
Reinforcement learning with imperceptible rewards 2019-04-07T10:27:34.127Z
Dimensional regret without resets 2018-11-16T19:22:32.551Z
Computational complexity of RL with traps 2018-08-29T09:17:08.655Z
Entropic Regret I: Deterministic MDPs 2018-08-16T13:08:15.570Z
Algo trading is a central example of AI risk 2018-07-28T20:31:55.422Z
The Learning-Theoretic AI Alignment Research Agenda 2018-07-04T09:53:31.000Z
Meta: IAFF vs LessWrong 2018-06-30T21:15:56.000Z
Computing an exact quantilal policy 2018-04-12T09:23:27.000Z
Quantilal control for finite MDPs 2018-04-12T09:21:10.000Z
Improved regret bound for DRL 2018-03-02T12:49:27.000Z
More precise regret bound for DRL 2018-02-14T11:58:31.000Z
Catastrophe Mitigation Using DRL (Appendices) 2018-02-14T11:57:47.000Z
Bugs? 2018-01-21T21:32:10.492Z
The Behavioral Economics of Welfare 2017-12-22T11:35:09.617Z
Improved formalism for corruption in DIRL 2017-11-30T16:52:42.000Z
Why DRL doesn't work for arbitrary environments 2017-11-30T12:22:37.000Z
Catastrophe Mitigation Using DRL 2017-11-22T05:54:42.000Z
Catastrophe Mitigation Using DRL 2017-11-17T15:38:18.000Z
Delegative Reinforcement Learning with a Merely Sane Advisor 2017-10-05T14:15:45.000Z
On the computational feasibility of forecasting using gamblers 2017-07-18T14:00:00.000Z
Delegative Inverse Reinforcement Learning 2017-07-12T12:18:22.000Z
Learning incomplete models using dominant markets 2017-04-28T09:57:16.000Z
Dominant stochastic markets 2017-03-17T12:16:55.000Z
A measure-theoretic generalization of logical induction 2017-01-18T13:56:20.000Z
Towards learning incomplete models using inner prediction markets 2017-01-08T13:37:53.000Z

Comments

Comment by Vanessa Kosoy (vanessa-kosoy) on Vanessa Kosoy's Shortform · 2024-04-21T14:43:05.729Z · LW · GW

Sort of obvious but good to keep in mind: Metacognitive regret bounds are not easily reducible to "plain" IBRL regret bounds when we consider the core and the envelope as the "inside" of the agent.

Assume that the action and observation sets factor as  and , where  is the interface with the external environment and  is the interface with the envelope.

Let  be a metalaw. Then, there are two natural ways to reduce it to an ordinary law:

  • Marginalizing over . That is, let  and  be the projections. Then, we have the law .
  • Assuming "logical omniscience". That is, let  be the ground truth. Then, we have the law . Here, we use the conditional defined by . It's easy to see this indeed defines a law.

However, requiring low regret w.r.t. either of these is not equivalent to requiring low regret w.r.t. the metalaw itself (see the schematic summary after the list):

  • Learning  is typically no less feasible than learning , however, it is a much weaker condition. This is because metacognitive agents can use policies that query the envelope to get higher guaranteed expected utility.
  • Learning  is a much stronger condition than learning , however it is typically infeasible. Requiring it leads to AIXI-like agents.
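
A schematic summary in placeholder notation (the symbols below are illustrative stand-ins, not the notation of the original shortform): write Ξ for the metalaw, Ξ_marg for its marginalization over the envelope interface, and Ξ_omn for the "logical omniscience" reduction. Then, as conditions on the agent:

```latex
% Placeholder notation: \Xi = metalaw, \Xi_{\mathrm{marg}} = marginalized law,
% \Xi_{\mathrm{omn}} = "logically omniscient" law.
\[
\underbrace{\text{low regret w.r.t. } \Xi_{\mathrm{omn}}}_{\text{strongest, typically infeasible}}
\;\Longrightarrow\;
\underbrace{\text{low regret w.r.t. } \Xi}_{\text{the metacognitive ``sweet spot''}}
\;\Longrightarrow\;
\underbrace{\text{low regret w.r.t. } \Xi_{\mathrm{marg}}}_{\text{weakest, easiest to learn}}
\]
```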

Therefore, metacognitive regret bounds hit a "sweet spot" of strength vs. feasibility which produces genuinely more powerful agents than IBRL[1].

  1. ^

    More precisely, more powerful than IBRL with the usual sort of hypothesis classes (e.g. nicely structured crisp infra-RDP). In principle, we can reduce metacognitive regret bounds to IBRL regret bounds using non-crisp laws, since there's a very general theorem for representing desiderata as laws. But, these laws would have a very peculiar form that seems impossible to guess without starting with metacognitive agents.

Comment by Vanessa Kosoy (vanessa-kosoy) on When is a mind me? · 2024-04-21T11:36:23.582Z · LW · GW

The topic of this thread is: In naive MWI, it is postulated that all Everett branches coexist. (For example, if I toss a quantum fair coin n times, there will be 2^n branches with all possible outcomes.) Under this assumption, it's not clear in what sense the Born rule is true. (What is the meaning of the probability measure over the branches if all branches coexist?)

Comment by Vanessa Kosoy (vanessa-kosoy) on When is a mind me? · 2024-04-20T13:05:04.297Z · LW · GW

Your reasoning is invalid, because in order to talk about updating your beliefs in this context, you need a metaphysical framework which knows how to deal with anthropic probabilities (e.g. it should be able to answer puzzles in the vein of the anthropic trilemma according to some coherent, well-defined mathematical rules). IBP is such a framework, but you haven't proposed any alternative, not to mention an argument for why that alternative is superior.

Comment by Vanessa Kosoy (vanessa-kosoy) on When is a mind me? · 2024-04-20T12:59:54.107Z · LW · GW

The problem is that this requires introducing a special decision-theory postulate that you're supposed to care about the Born measure for some reason, even though the Born measure doesn't correspond to ordinary probability.

Comment by Vanessa Kosoy (vanessa-kosoy) on When is a mind me? · 2024-04-19T09:21:21.234Z · LW · GW

Not sure what you mean by "this would require a pretty small universe".

If we live in naive MWI, an IBP agent would not care, for good reasons: naive MWI is a "library of Babel" where essentially every conceivable thing happens no matter what you do.

Also not sure what you mean by "some sort of sampling". AFAICT, quantum IBP is the closest thing to a coherent answer that we have, by a significant margin.

Comment by Vanessa Kosoy (vanessa-kosoy) on When is a mind me? · 2024-04-18T09:09:45.842Z · LW · GW

The solution is here. In a nutshell: naive MWI is wrong; not all Everett branches coexist, but a lot of Everett branches do coexist, s.t. with high probability all of them display the expected frequencies.

Comment by Vanessa Kosoy (vanessa-kosoy) on Wei Dai's Shortform · 2024-04-18T09:01:35.172Z · LW · GW

My model is that the concept of "morality" is a fiction which has 4 generators that are real:

  • People have empathy, which means they intrinsically care about other people (and sufficiently person-like entities), but mostly about those in their social vicinity. Also, different people have different strengths of empathy; a minority might have virtually none.
  • Superrational cooperation is something that people understand intuitively to some degree. Obviously, a minority of people understand it on System 2 level as well.
  • There is something virtue-ethics-like which I find in my own preferences, along the lines of "some things I would prefer not to do, not because of their consequences, but because I don't want to be the kind of person who would do that". However, I expect different people to differ in this regard.
  • The cultural standards of morality, which it might be selfishly beneficial to go along with, including lying to yourself that you're doing it for non-selfish reasons. Which, as you say, becomes irrelevant once you secure enough power. This is a sort of self-deception which people are intuitively skilled at.
Comment by Vanessa Kosoy (vanessa-kosoy) on Vanessa Kosoy's Shortform · 2024-04-09T11:56:00.342Z · LW · GW

Is it possible to replace the maximin decision rule in infra-Bayesianism with a different decision rule? One surprisingly strong desideratum for such decision rules is the learnability of some natural hypothesis classes.

In the following, all infradistributions are crisp.

Fix finite action set  and finite observation set .  For any  and , let

be defined by

In other words, this kernel samples a time step  out of the geometric distribution with parameter , and then produces the sequence of length  that appears in the destiny starting at .

For any continuous[1] function , we get a decision rule. Namely, this rule says that, given infra-Bayesian law  and discount parameter , the optimal policy is

The usual maximin is recovered when we have some reward function  and corresponding to it is

Given a set  of laws, it is said to be learnable w.r.t.  when there is a family of policies  such that for any 

For  we know that e.g. the set of all communicating[2] finite infra-RDPs is learnable. More generally, for any  we have the learnable decision rule

This is the "mesomism" I taked about before

Also, any monotonically increasing  seems to be learnable, i.e. any  s.t. for  we have . For such decision rules, you can essentially assume that "nature" (i.e. whatever resolves the ambiguity of the infradistributions) is collaborative with the agent. These rules are not very interesting.

On the other hand, decision rules of the form  are not learnable in general, and neither are decision rules of the form  for  monotonically increasing.
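
To make the contrast concrete, here is a minimal toy sketch (Python; illustrative assumptions throughout): decision rules viewed as functionals of the per-environment expected utilities over a finite credal set. The specific alternative rule shown, (min + max)/2, is just an example of a non-maximin functional, not necessarily one of the rules discussed above.

```python
# Toy sketch: a crisp law is represented crudely by the matrix of expected utilities
# u[i, j] of policy i in environment j, with the credal set ranging over the columns.
import numpy as np

u = np.array([
    [0.95, 0.10, 0.50, 0.40],   # risky policy: great in env 0, bad in env 1
    [0.50, 0.50, 0.50, 0.50],   # safe policy: mediocre everywhere
    [0.80, 0.30, 0.70, 0.20],
])

def maximin(u):
    """The usual infra-Bayesian rule: maximize the worst-case expected utility."""
    return int(np.argmax(u.min(axis=1)))

def mean_of_extremes(u):
    """An example non-maximin rule: maximize (min + max) / 2 over the credal set."""
    return int(np.argmax((u.min(axis=1) + u.max(axis=1)) / 2))

print(maximin(u))           # -> 1: the safe policy wins under maximin
print(mean_of_extremes(u))  # -> 0: a less pessimistic rule prefers the risky policy
```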

Open Problem: Are there any learnable decision rules that are not mesomism or monotonically increasing?

A positive answer to the above would provide interesting generalizations of infra-Bayesianism. A negative answer to the above would provide an interesting novel justification of the maximin. Indeed, learnability is not a criterion that was ever used in axiomatic constructions of decision theory[3], AFAIK.

  1. ^

    We can try considering discontinuous functions as well, but it seems natural to start with continuous. If we want the optimal policy to exist, we usually need  to be at least upper semicontinuous.

  2. ^

    There are weaker conditions than "communicating" that are sufficient, e.g. "resettable" (meaning that the agent can always force returning to the initial state), and some even weaker conditions that I will not spell out here.

  3. ^

    I mean theorems like VNM, Savage etc.

Comment by Vanessa Kosoy (vanessa-kosoy) on Vanessa Kosoy's Shortform · 2024-04-08T13:05:12.979Z · LW · GW

First, given nanotechnology, it might be possible to build colonies much faster.

Second, I think the best way to live is probably as uploads inside virtual reality, so terraforming is probably irrelevant.

Third, it's sufficient that the colonists are uploaded or cryopreserved (via some superintelligence-vetted method) and stored someplace safe (whether on Earth or in space) until the colony is entirely ready.

Fourth, if we can stop aging and prevent other dangers (including unaligned AI), then a timeline of decades is fine.

Comment by Vanessa Kosoy (vanessa-kosoy) on Vanessa Kosoy's Shortform · 2024-04-08T12:44:32.798Z · LW · GW

I don't know whether we live in a hard-takeoff singleton world or not. I think there is some evidence in that direction, e.g. from thinking about the kind of qualitative changes in AI algorithms that might come about in the future, and their implications on the capability growth curve, and also about the possibility of recursive self-improvement. But, the evidence is definitely far from conclusive (in any direction).

I think that the singleton world is definitely likely enough to merit some consideration. I also think that some of the same principles apply to some multipole worlds.

"Commit to not make anyone predictably regret supporting the project or not opposing it" is worrying only by omission -- it's a good guideline, but it leaves the door open for "punish anyone who failed to support the project once the project gets the power to do so".

Yes, I never imagined doing such a thing, but I definitely agree it should be made clear. Basically, don't make threats, i.e. don't try to shape others' incentives in ways that they would be better off precommitting not to go along with.

Comment by Vanessa Kosoy (vanessa-kosoy) on Vanessa Kosoy's Shortform · 2024-04-07T06:41:31.088Z · LW · GW

It's not because they're not on Earth, it's because they have a superintelligence helping them. Which might give them advice and guidance, take care of their physical and mental health, create physical constraints (e.g. that prevent violence), or even give them mind augmentation like mako yass suggested (although I don't think that's likely to be a good idea early on). And I don't expect their environment to be fragile because, again, designed by superintelligence. But I don't know the details of the solution: the AI will decide those, as it will be much smarter than me.

Comment by Vanessa Kosoy (vanessa-kosoy) on Vanessa Kosoy's Shortform · 2024-04-07T06:29:31.387Z · LW · GW

I don't have to know in advance that we're in a hard-takeoff singleton world, or even that my AI will succeed in achieving those objectives. The only thing I absolutely have to know in advance is that my AI is aligned. What sort of evidence will I have for this? A lot of detailed mathematical theory, with the modeling assumptions validated by computational experiments and knowledge from other fields of science (e.g. physics, cognitive science, evolutionary biology).

I think you're misinterpreting Yudkowsky's quote. "Using the null string as input" doesn't mean "without evidence", it means "without other people telling me parts of the answer (to this particular question)".

I'm not sure what is "extremely destructive and costly" in what I described? Unless you mean the risk of misalignment, in which case, see above.

Comment by Vanessa Kosoy (vanessa-kosoy) on Vanessa Kosoy's Shortform · 2024-04-06T19:47:13.794Z · LW · GW

I know, this is what I pointed at in footnote 1. Although "dumbest AI" is not quite right: the sort of AI MIRI envision is still very superhuman in particular domains, but is somehow kept narrowly confined to acting within those domains (e.g. designing nanobots). The rationale mostly isn't assuming that at that stage it won't be possible to create a full superintelligence, but assuming that aligning such a restricted AI would be easier. I have different views on alignment, leading me to believe that aligning a full-fledged superintelligence (sovereign) is actually easier (via PSI or something in that vein). On this view, we still need to contend with the question, what is the thing we will (honestly!) tell other people that our AI is actually going to do. Hence, the above.

Comment by Vanessa Kosoy (vanessa-kosoy) on Vanessa Kosoy's Shortform · 2024-04-06T11:06:00.691Z · LW · GW

People like Andrew Critch and Paul Christiano have criticized MIRI in the past for their "pivotal act" strategy. The latter can be described as "build superintelligence and use it to take unilateral world-scale actions in a manner inconsistent with existing law and order" (e.g. the notorious "melt all GPUs" example). The critics say (justifiably IMO), this strategy looks pretty hostile to many actors and can trigger preemptive actions against the project attempting it and generally foster mistrust.

Is there a good alternative? The critics tend to assume slow-takeoff multipole scenarios, which makes the comparison with their preferred solutions to be somewhat "apples and oranges". Suppose that we do live in a hard-takeoff singleton world, what then? One answer is "create a trustworthy, competent, multinational megaproject". Alright, but suppose you can't create a multinational megaproject, but you can build aligned AI unilaterally. What is a relatively cooperative thing you can do which would still be effective?

Here is my proposed rough sketch of such a plan[1]:

  • Commit to not make anyone predictably regret supporting the project or not opposing it. This rule is the most important and the one I'm the most confident of by far. In an ideal world, it should be more-or-less sufficient in itself. But in the real world, it might be still useful to provide more tangible details, which the next items try to do.
  • Within the bounds of Earth, commit to obeying international law, and local law at least inasmuch as the latter is consistent with international law, with only two possible exceptions (see below). Notably, this allows for actions such as (i) distributing technology that cures diseases, reverses aging, produces cheap food etc. and (ii) lobbying for societal improvements (but see the superpersuasion clause below).
  • Exception 1: You can violate any law if it's absolutely necessary to prevent a catastrophe on the scale comparable with a nuclear war or worse, but only to the extent it's necessary for that purpose. (e.g. if a lab is about to build unaligned AI that would kill millions of people and it's not possible to persuade them to stop or convince the authorities to act in a timely manner, you can sabotage it.)[2]
  • Build space colonies. These space colonies will host utopic societies and most people on Earth are invited to immigrate there.
  • Exception 2: A person held in captivity in a manner legal according to local law, who faces the death penalty or is treated in a manner violating accepted international rules about the treatment of prisoners, might be given the option to leave for the colonies. If they exercise this option, their original jurisdiction is permitted to exile them from Earth permanently and/or bar them from any interaction with Earth that can plausibly enable activities illegal according to that jurisdiction[3].
  • Commit to adequately compensate any economy hurt by emigration to the colonies or other disruption by you. For example, if space emigration causes the loss of valuable labor, you can send robots to supplant it.
  • Commit to not directly intervene in international conflicts or upset the balance of powers by supplying military tech to any side, except in cases when it is absolutely necessary to prevent massive violations of international law and human rights.
  • Commit to only use superhuman persuasion when arguing towards a valid conclusion via valid arguments, in a manner that doesn't go against the interests of the person being persuaded. 
  1. ^

    Importantly, this makes stronger assumptions about the kind of AI you can align than MIRI-style pivotal acts. Essentially, it assumes that you can directly or indirectly ask the AI to find good plans consistent with the commitments below, rather than directing it to do something much more specific. Otherwise, it is hard to use Exception 1 (see below) gracefully.

  2. ^

    A more conservative alternative is to limit Exception 1 to catastrophes that would spill over to the space colonies (see next item).

  3. ^

    It might be sensible to consider a more conservative version which doesn't have Exception 2, even though the implications are unpleasant.

Comment by Vanessa Kosoy (vanessa-kosoy) on Vanessa Kosoy's Shortform · 2024-04-05T15:25:32.512Z · LW · GW

Ratfic idea / conspiracy theory: Yudkowsky traveled back in time to yell at John Nash about how Nash equilibria are stupid[1], and that's why Nash went insane.

h/t Marcus (my spouse)

  1. ^

    They are.

Comment by Vanessa Kosoy (vanessa-kosoy) on tailcalled's Shortform · 2024-03-30T06:46:41.919Z · LW · GW

Sure, if after updating on your discovery, it seems that the current trajectory is not doomed, it might imply accelerating is good. But, here it is very far from being the case.

Comment by Vanessa Kosoy (vanessa-kosoy) on tailcalled's Shortform · 2024-03-29T18:15:51.506Z · LW · GW

I missed that paragraph on first reading, mea culpa. I think that your story about how it's a win for interpretability and alignment is very unconvincing, but I don't feel like hashing it out atm. Revised to weak downvote.

Also, if you expect this to take off, then by your own admission you are mostly accelerating the current trajectory (which I consider mostly doomed) rather than changing it. Unless you expect it to take off mostly thanks to you?

Comment by Vanessa Kosoy (vanessa-kosoy) on tailcalled's Shortform · 2024-03-29T17:47:46.764Z · LW · GW

Because it's capability research. It shortens the TAI timeline with little compensating benefit.

Comment by Vanessa Kosoy (vanessa-kosoy) on tailcalled's Shortform · 2024-03-29T17:30:32.757Z · LW · GW

Downvoted because conditional on this being true, it is harmful to publish. Don't take it personally, but this is content I don't want to see on LW.

Comment by Vanessa Kosoy (vanessa-kosoy) on Vanessa Kosoy's Shortform · 2024-03-25T01:27:56.945Z · LW · GW

Formalizing the richness of mathematics

Intuitively, it feels that there is something special about mathematical knowledge from a learning-theoretic perspective. Mathematics seems infinitely rich: no matter how much we learn, there is always more interesting structure to be discovered. Impossibility results like the halting problem and Gödel incompleteness lend some credence to this intuition, but are insufficient to fully formalize it.

Here is my proposal for how to formulate a theorem that would make this idea rigorous.

(Wrong) First Attempt

Fix some natural hypothesis class for mathematical knowledge, such as some variety of tree automata. Each such hypothesis  represents an infradistribution over : the "space of counterpossible computational universes". We can say that  is a "true hypothesis" when there is some  in the credal set  (a distribution over ) s.t. the ground truth  "looks" as if it's sampled from . The latter should be formalizable via something like a computationally bounded version of Martin-Löf randomness.

We can now try to say that  is "rich" if for any true hypothesis , there is a refinement which is also a true hypothesis and "knows" at least one bit of information that  doesn't, in some sense. This is clearly true, since there can be no automaton or even any computable hypothesis which fully describes . But, it's also completely boring: the required  can be constructed by "hardcoding" an additional fact into . This doesn't look like "discovering interesting structure", but rather just like brute-force memorization.

(Wrong) Second Attempt

What if instead we require that  knows infinitely many bits of information that  doesn't? This is already more interesting. Imagine that, instead of metacognition / mathematics, we were talking about ordinary sequence prediction. In this case it is indeed an interesting non-trivial condition that the sequence contains infinitely many regularities, s.t. each of them can be expressed by a finite automaton but their conjunction cannot. For example, maybe the n-th bit in the sequence depends only on the largest k s.t. 2^k divides n, but the dependence on k is already uncomputable (or at least inexpressible by a finite automaton).
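
A small illustrative sketch (Python; the specific functions are stand-ins chosen for illustration, not from the original): a sequence in which the n-th bit depends only on the 2-adic valuation of n. Each individual regularity ("all positions with valuation k carry bit f(k)") is simple, but the family as a whole is only as simple as f.

```python
# Toy illustration: the n-th bit depends only on v(n), the largest k such that
# 2**k divides n. Each fixed-k regularity corresponds to a periodic set of positions
# (recognizable by a finite automaton), but the conjunction over all k inherits the
# full complexity of f.
def v(n):
    """2-adic valuation of n (n >= 1): the largest k with 2**k dividing n."""
    k = 0
    while n % 2 == 0:
        n //= 2
        k += 1
    return k

def f(k):
    # Stand-in for an arbitrary, possibly very hard-to-compute, function of k.
    return (k * k + k // 3) % 2

sequence = [f(v(n)) for n in range(1, 33)]
print(sequence)
```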

However, for our original application, this is entirely insufficient. This is because the formal language we use to define  (e.g. combinator calculus) has some "easy" equivalence relations. For example, consider the family of programs of the form "if 2+2=4 then output 0, otherwise...". All of those programs would output 0, which is obvious once you know that 2+2=4. Therefore, once your automaton is able to check some such easy equivalence relations, hardcoding a single new fact (in the example, 2+2=4) generates infinitely many "new" bits of information. Once again, we are left with brute-force memorization.

(Less Wrong) Third Attempt

Here's the improved condition: For any true hypothesis , there is a true refinement  s.t. conditioning  on any finite set of observations cannot produce a refinement of .
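
A hedged formalization of this condition in placeholder notation (Φ, Ψ, the refinement symbol, and the assignment of which hypothesis gets conditioned are stand-ins, not the original symbols):

```latex
% Placeholder notation: \Phi, \Psi hypotheses; \Psi \sqsubseteq \Phi means
% "\Psi is a refinement of \Phi"; (\Phi \mid D) is \Phi conditioned on a finite
% observation set D.
\[
\forall \Phi \ \text{true hypothesis} \;\; \exists \Psi \sqsubseteq \Phi \ \text{true hypothesis, s.t.} \;\;
\forall D \ \text{finite}: \;\; (\Phi \mid D) \not\sqsubseteq \Psi .
\]
```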

There is a technicality here, because we're talking about infradistributions, so what is "conditioning" exactly? For credal sets, I think it is sufficient to allow two types of "conditioning":

  • For any given observation  and , we can form .
  • For any given observation  s.t. , we can form .

This rules out the counterexample from before: the easy equivalence relation can be represented inside , and then the entire sequence of "novel" bits can be generated by a conditioning.

Alright, so does  actually satisfy this condition? I think it's very probable, but I haven't proved it yet. 

Comment by Vanessa Kosoy (vanessa-kosoy) on New report: Safety Cases for AI · 2024-03-20T17:09:35.748Z · LW · GW

Linkpost to Twitter thread is a bad format for LessWrong. Not everyone has Twitter.

Comment by Vanessa Kosoy (vanessa-kosoy) on Tamsin Leake's Shortform · 2024-03-13T16:48:49.871Z · LW · GW

I agree that in the long-term it probably matters little. However, I find the issue interesting, because the failure of reasoning that leads people to ignore the possibility of AI personhood seems similar to the failure of reasoning that leads people to ignore existential risks from AI. In both cases it "sounds like scifi" or "it's just software". It is possible that raising awareness for the personhood issue is politically beneficial for addressing X-risk as well. (And, it would sure be nice to avoid making the world worse in the interim.)

Comment by vanessa-kosoy on [deleted post] 2024-03-04T13:06:01.266Z

 .

What is ? Also, we should allow adding some valid reward function of .

Comment by vanessa-kosoy on [deleted post] 2024-03-04T12:21:57.921Z

 is a polytope with , corresponding to allowed action distributions at that state. 

I think it's mathematically cleaner to get rid of A and have those be abstract polytopes.

Comment by Vanessa Kosoy (vanessa-kosoy) on Open Thread – Winter 2023/2024 · 2024-03-02T14:04:17.271Z · LW · GW

Did anyone around here try Relationship Hero and has opinions?

Comment by Vanessa Kosoy (vanessa-kosoy) on evhub's Shortform · 2024-02-04T15:43:35.310Z · LW · GW

First, I said I'm not a utilitarian, I didn't say that I don't value other people. There's a big difference!

Second, I'm not willing to step behind that veil of ignorance. Why should I? Decision-theoretically, it can make sense to argue "you should help agent X because in some counterfactual, agent X would be deciding whether to help you using similar reasoning". But, there might be important systematic differences between early people and late people (for example, because late people are modified in some ways compared to the human baseline) which break the symmetry. It might be a priori improbable for me to be born as a late person (and still be me in the relevant sense) or for a late person to be born in our generation[1].

Moreover, if there is a valid decision-theoretic argument to assign more weight to future people, then surely a superintelligent AI acting on my behalf would understand this argument and act on it. So, this doesn't compel me to precommit to a symmetric agreement with future people in advance.

  1. ^

    There is a stronger case for intentionally creating and giving resources to people who are early in counterfactual worlds. At least, assuming people have meaningful preferences about the state of never-being-born.

Comment by Vanessa Kosoy (vanessa-kosoy) on A sketch of acausal trade in practice · 2024-02-04T14:45:35.711Z · LW · GW

Your "psychohistory" is quite similar to my "metacosmology".

Comment by Vanessa Kosoy (vanessa-kosoy) on evhub's Shortform · 2024-02-03T19:10:25.767Z · LW · GW

Disagree. I'm in favor of (2) because I think that what you call a "tyranny of the present" makes perfect sense. Why would the people of the present not maximize their utility functions, given that it's the rational thing for them to do by definition of "utility function"? "Because utilitarianism" is a nonsensical answer IMO. I'm not a utilitarian. If you're a utilitarian, you should pay for your utilitarianism out of your own resource share. For you to demand that I pay for your utilitarianism is essentially a defection in the decision-theoretic sense, and would incentivize people like me to defect back.

As to problem (2.b), I don't think it's a serious issue in practice because time until singularity is too short for it to matter much. If it was, we could still agree on a cooperative strategy that avoids a wasteful race between present people.

Comment by Vanessa Kosoy (vanessa-kosoy) on Chapter 1 of How to Win Friends and Influence People · 2024-01-29T12:23:51.683Z · LW · GW

John Wentworth, founder of the stores that bear his name, once confessed: "I learned thirty years ago that it is foolish to scold. I have enough trouble overcoming my own limitations without fretting over the fact that God has not seen fit to distribute evenly the gift of intelligence." 

@johnswentworth is an ancient vampire, confirmed.

Comment by Vanessa Kosoy (vanessa-kosoy) on Open Thread – Winter 2023/2024 · 2024-01-28T11:06:54.208Z · LW · GW

I'm going to be in Berkeley February 8 - 25. If anyone wants to meet, hit me up!

Comment by Vanessa Kosoy (vanessa-kosoy) on AI #48: Exponentials in Geometry · 2024-01-18T15:34:57.836Z · LW · GW

Where does the Base Rate Times report on AI? I don't see it on their front page.

Comment by Vanessa Kosoy (vanessa-kosoy) on The impossible problem of due process · 2024-01-16T17:20:25.613Z · LW · GW

I honestly don't know. The discussions of this problem I encountered are all in the American (or at least Western) context[1], and I'm not sure whether it's because Americans are better at noticing this problem and fixing it, or because American men generate more unwanted advances, or because American women are more sensitive to such advances, or because this is an overreaction to a problem that's much more mild than it's portrayed.

Also, high-status men, really? Men avoiding meetups because they get too many propositions from women is a thing?

  1. ^

    To be clear, we certainly have rules against sexual harassment here in Israel, but that's very different from "don't ask a woman out the first time you meet her".

Comment by Vanessa Kosoy (vanessa-kosoy) on The impossible problem of due process · 2024-01-16T12:35:22.743Z · LW · GW

"It's true that we don't want women to be driven off by a bunch of awkward men asking them out, but if we make everyone read a document that says 'Don't ask a woman out the first time you meet her', then we'll immediately give the impression that we have a problem with men awkwardly asking women out too much — which will put women off anyway."

 

American social norms around romance continue to be weird to me. For the record, y'all can feel free to ask me out the first time you meet me, even if you do it awkwardly ;)

Comment by Vanessa Kosoy (vanessa-kosoy) on Saving the world sucks · 2024-01-13T17:28:51.259Z · LW · GW

"Virtue is its own reward" is a nice thing to believe in when you feel respected, protected and loved. When you feel tired, lonely and afraid, and nobody cares at all, it's very hard to understand why you should be making big sacrifices for the sake of virtue. But, hey, people are different. Maybe, for you virtue is truly, unconditionally, its own reward, and a sufficient one at that. And maybe EA is a community professional circle only for people who are that stoic and selfless. But, if so, please put the warning in big letters on the lid.

Comment by Vanessa Kosoy (vanessa-kosoy) on Saving the world sucks · 2024-01-13T13:36:56.913Z · LW · GW

There is tension between the stance that "EA is just a professional circle" and the (common) thesis that EA is a moral ideal. The latter carries the connotation of "things you will be rewarded for doing" (by others sharing the ideal). Likely some will claim that, in their philosophy, there is no such connotation: but it is on them to emphasize this, since this runs contrary to the intuitive perception of morality by most people. People who take up the ideology expecting the implied community aspect might understandably feel disappointed or even betrayed when they find it lacking, which might have happened to the OP.

As I said, cooperation is rational. There are, roughly speaking, two mechanisms for achieving cooperation: the "acausal" way and the "causal" way. The acausal way means doing something based on the abstract reasoning that, if many others do the same, it will be to everyone's benefit, and that many others follow the same reasoning. This might work even without a community, in principle.

However, the more robust mechanism is causal: tit-for-tat. This requires that other people actually reward you for doing the thing. One way to reward is by money, which EA does to some extent: however, it also encourages members to take pay cuts and/or make donations. Another way to reward is by the things money cannot buy: respect, friendship, emotional support and generally conveying the sense that you're a cherished member of the community. On this front, more could be done IMO.

Even if we accept that EA is nothing more than a professional circle, it is still lacking in the respects I pointed out. In many professional circles, you work in an office with peers, leading naturally to a network of personal connections. On the other hand, AFAICT many EAs work independently/remotely (I am certainly one of those), which denies them the same benefits.

Comment by Vanessa Kosoy (vanessa-kosoy) on Saving the world sucks · 2024-01-11T14:47:00.865Z · LW · GW

I agree with the OP that: Utilitarianism is not a good description of most people's values, possibly not even a good description of anyone's values. Effective altruism encourages people to pretend that they are intrinsically utilitarian, which is not healthy or truth-seeking. Intrinsic values are (to 1st approximation) immutable. It's healthy to understand your own values, it's bad to shame people for having "wrong" values.

I agree with critics of the OP that: Cooperation is rational, we should be trying to help each other over and above the (already significant) extent to which we intrinsically care about each other, because this is in our mutual interest. A healthy community rewards prosocial behavior and punishes sufficiently antisocial behavior (there should also be ample room for "neutral" though).

A point insufficiently appreciated by either: The rationalist/EA community doesn't reward prosocial behavior enough. In particular, we need much more in the way of emotional support and mental health resources for community members. I speak from personal experience here: I am very grateful to this community for support in the career/professional sense. However, on the personal/emotional level, I never felt that the community cares about what I'm going through.

Comment by Vanessa Kosoy (vanessa-kosoy) on You can just spontaneously call people you haven't met in years · 2024-01-11T08:36:54.822Z · LW · GW

For the record, I contacted 3/4 but it led to nothing, alas. (I also thought of another person to contact but she moved to a different country in the intervening time.)

Comment by Vanessa Kosoy (vanessa-kosoy) on Where I agree and disagree with Eliezer · 2024-01-11T07:07:47.910Z · LW · GW

I wrote a review here. There, I identify the main generators of Christiano's disagreement with Yudkowsky[1] and add some critical commentary. I also frame it in terms of a broader debate in the AI alignment community.

  1. ^

    I divide those into "takeoff speeds", "attitude towards prosaic alignment" and "the metadebate" (the last one is about what kind of debate norms we should have about this, or what kind of arguments we should listen to).

Comment by Vanessa Kosoy (vanessa-kosoy) on The Learning-Theoretic Agenda: Status 2023 · 2024-01-10T10:48:20.248Z · LW · GW

Yes, this is an important point, of which I am well aware. This is why I expect unbounded-ADAM to only be a toy model. A more realistic ADAM would use a complexity measure that takes computational complexity into account instead of . For example, you can look at the measure  I defined here. More realistically, this measure should be based on the frugal universal prior.

Comment by Vanessa Kosoy (vanessa-kosoy) on Why aren't Yudkowsky & Bostrom getting more attention now? · 2024-01-09T10:09:59.432Z · LW · GW

Part of the reason is that Yudkowsky radicalized his position to stay out of the Overton window. Fifteen years ago, his position was "we need to do research into AI safety, because AI will pose a threat to humanity some time this century". Now, the latter is becoming mainstream-adjacent, but he has shifted to "it's too late to do research, we need to stop all capability work or else we all die in 10-15 years". And, "even if we stop all capability work as much as an international treaty can conceivably accomplish, we must augment human intelligence in adults in order to be able to solve the problem in time."

Comment by Vanessa Kosoy (vanessa-kosoy) on MIRI 2024 Mission and Strategy Update · 2024-01-05T15:12:52.414Z · LW · GW

It is tricky, but there might be some ways for data to defend itself.

Comment by Vanessa Kosoy (vanessa-kosoy) on 2023 in AI predictions · 2024-01-02T12:11:27.821Z · LW · GW

Nice!

I'll toss in some predictions of my own. I predict that all of the following things will not happen without a breakthrough substantially more significant than the invention of transformers:

  • AI inventing new things in science and technology, not via narrow training/design for a specific subtask (like e.g. AlphaFold) but roughly the way humans do it. (Confidence: 80%)
  • AI being routinely used by corporate executives to make strategic decisions, not as a glorified search engine but as a full-fledged advisor. (Confidence: 75%)
  • As above, but politicians instead of corporate executives. (Confidence: 72%)
  • AI learning how to drive using a human driving teacher, within a number of lessons similar to what humans take, without causing accidents (that the teacher fails to prevent) and without any additional driving training data or domain-specific design. (Confidence: 67%)
  • AI winning gold in IMO, using a math training corpus comparable in size to the number of math problems human contestants see in their lifetime. (Confidence: 65%)
  • AI playing superhuman Diplomacy, using a training corpus (including self-play) comparable in size to the number of games played by human players, while facing reputation incentives similar to those of human players. (Confidence: 60%)
  • As above, but Go instead of Diplomacy. (Confidence: 55%)
Comment by Vanessa Kosoy (vanessa-kosoy) on 2023 Unofficial LessWrong Census/Survey · 2023-12-28T10:16:01.857Z · LW · GW

 is the probability of the event that actually occurred. You can't submit  without knowing what is true in advance. For example, suppose you need to predict who wins the next US presidential election. You assign probability 0.6 to Biden, 0.3 to Trump and 0.1 to Eliezer Yudkowsky. Then, if Biden wins, this probability is 0.6. But, if Yudkowsky wins, then it's 0.1.
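
For concreteness, a tiny sketch of the quantity being described (Python; the numbers are the ones from the example above):

```python
# The quantity described above: the probability you assigned, in advance, to the
# outcome that actually occurred.
forecast = {"Biden": 0.6, "Trump": 0.3, "Yudkowsky": 0.1}

def realized_probability(forecast, outcome):
    return forecast[outcome]

print(realized_probability(forecast, "Biden"))      # 0.6 if Biden wins
print(realized_probability(forecast, "Yudkowsky"))  # 0.1 if Yudkowsky wins
```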

Comment by Vanessa Kosoy (vanessa-kosoy) on Critical review of Christiano's disagreements with Yudkowsky · 2023-12-27T18:53:38.664Z · LW · GW

Thank you for the clarification.

How do you expect augmented humanity will solve the problem? Will it be something other than "guessing it with some safe weak lesser tries / clever theory"?

Comment by Vanessa Kosoy (vanessa-kosoy) on Is being sexy for your homies? · 2023-12-14T08:02:47.105Z · LW · GW

Not especially important to your main points, but for the sake of pedantry:

While it's true that transwomen are biologically distinct from ciswomen, medically-transitioning transwomen are also biologically distinct from cismen. In particular, most of them (and all of the post-op) can't make babies with anyone. So, from a purely reproductive perspective, those transwomen are in a group unto itself. From a sexual-attraction perspective, this group is somewhat more similar to ciswomen than to cismen, in the sense that a much bigger fraction of straight men would be attracted to a (medically-transitioning, advanced-stage) transwoman than the fraction of straight women attracted to that transwoman (even if the fraction of straight men attracted to a same-percentile-of-attractiveness ciswoman would be larger still).

Comment by Vanessa Kosoy (vanessa-kosoy) on Google Gemini Announced · 2023-12-07T12:24:51.182Z · LW · GW

in each of the 50 different subject areas that we tested it on, it's as good as the best expert humans in those areas

 

That sounds like an incredibly strong claim, but I suspect that the phrasing is very misleading. What kind of tests is Hassabis talking about here? Maybe those are tests that rely on remembering known facts much more than on making novel inferences? Surely Gemini is not (say) as good as the best mathematicians at solving open problems in mathematics?

Comment by Vanessa Kosoy (vanessa-kosoy) on 2023 Unofficial LessWrong Census/Survey · 2023-12-05T14:40:46.140Z · LW · GW

Imagine that, for every question, you will have to pay  dollars if the event you assigned a probability  occurs. Here,  is some sufficiently small constant (this assumes your strategy doesn't fluctuate as  approaches 0). Answer in the optimal way for that game, according to whatever decision theory you follow. (But choosing which questions to answer is not part of the game.)
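
A toy rendering of that game (Python). The payment formula below, eps * (-log q), is an assumption chosen for illustration because it is a standard proper scoring rule under which reporting your true credence minimizes expected payment; the exact formula intended above may differ.

```python
# Toy sketch of the game: you pay eps * (-log q) dollars, where q is the probability
# you reported for the outcome that occurred. (The payment formula is an assumption
# here; -log is one standard proper scoring rule.)
import math

def expected_payment(report, true_probs, eps=0.01):
    """Expected payment if your true credences are true_probs and you report `report`."""
    return sum(p * eps * (-math.log(report[o])) for o, p in true_probs.items())

true_probs = {"A": 0.7, "B": 0.3}
print(expected_payment({"A": 0.7, "B": 0.3}, true_probs))  # honest report: ~0.0061
print(expected_payment({"A": 0.9, "B": 0.1}, true_probs))  # overconfident: ~0.0076
```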

Comment by Vanessa Kosoy (vanessa-kosoy) on The LessWrong 2022 Review · 2023-12-05T08:46:48.755Z · LW · GW

The LessWrong moderation team will take the voting results as a strong indicator of which posts to include in the Best of 2022 sequence.

Will there also be a Best of 2021 sequence at some point?

Comment by Vanessa Kosoy (vanessa-kosoy) on Neither EA nor e/acc is what we need to build the future · 2023-11-28T16:50:25.325Z · LW · GW

The analogy between SBF and Helen Toner is completely misguided. SBF did deeply immoral things, with catastrophic results for everyone, whatever his motivations may have been. With Toner, we don't know what really happened, but if she indeed was willing to destroy OpenAI for safety reasons, then AFAICT she was 100% justified. The only problem is that she didn't succeed. (Where "success" would mean actually removing OpenAI from the gameboard, rather than e.g. rebranding it as part of Microsoft.)

Comment by Vanessa Kosoy (vanessa-kosoy) on Shallow review of live agendas in alignment & safety · 2023-11-27T12:29:04.739Z · LW · GW

Nice work.

Regarding the Learning-Theoretic Agenda:

  • We don't have 3-6 full-time employees. We have ~2 full-time employees and another major contributor.
  • In "funded by", Effective Ventures and Lightspeed Grants should appear as well.