Posts

Thriving in the Weird Times: Preparing for the 100X Economy 2023-05-08T13:44:40.341Z
Lucie Philippon's Shortform 2022-12-04T21:17:09.449Z
[Rough notes, BAIS] Human values and cyclical preferences 2022-05-13T13:28:04.907Z
How can I determine that Elicit is not some weak AGI's attempt at taking over the world ? 2022-04-12T00:54:10.621Z

Comments

Comment by Lucie Philippon (lucie-philippon) on Lucie Philippon's Shortform · 2024-04-22T17:34:42.909Z · LW · GW

There was this voice inside my head telling me that since I have Something to protect, relaxing is never OK beyond the strict minimum, the goal is paramount, and I should just work as hard as I can all the time.

This led to me breaking down and being unable to work at my AI governance job for a week, as I had piled up too much stress.

And then I decided to follow what motivated me in the moment, instead of coercing myself into working on what I thought was most important, and lo and behold! My total output increased, while my time spent working decreased.

I'm so angry and sad at the inadequacy of my role models, cultural norms, rationality advice, and model of the good EA who does not burn out, which still led me to smash into the wall despite their best intentions. I became so estranged from my own body and perceptions, ignoring my core motivations, finding it harder and harder to work. I dug myself into such a deep hole. I'm terrified at the prospect of having to rebuild my motivation myself again.

Comment by Lucie Philippon (lucie-philippon) on LessWrong's (first) album: I Have Been A Good Bing · 2024-04-04T08:27:40.705Z · LW · GW

On the Spotify release, there is a typo in "First they came for the epsistemology".

Comment by Lucie Philippon (lucie-philippon) on How does it feel to switch from earn-to-give? · 2024-04-01T13:19:20.984Z · LW · GW

Over the last two years, I discovered LessWrong, learned about x-risks, joined the rationalist community, joined EA, started a rationalist/EA group house, and finally left my comfy high-earning crypto job last September to start working on AI safety. During this time, I definitely felt multiple switches into taking on different kinds of responsibility.

The first responsibility I learned, by reading HPMOR and The Sequences, was the sense that more was possible: that I could achieve greatness and become as cool as I ever wanted, but that it needed actual work, that I was not on the right path to achieve it, that I would need to take risks, and that I could not count on any of my then-friends to help me with it. It was at this time that I took responsibility for what my life would be like.

I joined the rationalist community somewhat quickly, and after a few months ended up creating a rationalist group house. There, I spent lots of time with people very different from those I had spent time with before, in no small part because they questioned my beliefs. I realized that lots of the factual knowledge I got from the Internet or from school was incomplete or flat-out wrong, that the procedural knowledge I got from my parents and common culture was deeply suboptimal, that strong and counterproductive emotional patterns were driving me a large chunk of the time, and that generally my epistemics were broken, which prevented me from solving the other problems. I realized I could not trust anyone to give me correct knowledge, to show me the right way, even on the most basic stuff. It was at this time that I took responsibility for my cognitive processes and beliefs, because blind faith was not a reliable way to navigate the world.

Leaving my job for AI Safety definitely felt like jumping into the unknown. For the first time in my life, I cared about achieving a goal that was wildly above my level. I finally had something to protect, and was taking active steps each day towards getting better. It felt like taking responsibility for shaping the future the way I wanted. I realized nobody else would do it for me.

Working on AI safety full-time also led to a large increase in the amount of stress I experienced, as working harder and caring more than I ever had exposed lots of flaws that were never a problem when I was just going with the flow. I can give more details on the issues I experienced, but basically I was terrible at noticing problems and kept ignoring my emotions, nearly leading to burnout twice. I realized nobody could manage my internal states except me. It felt like taking responsibility for my motivation, my happiness, my reactions to any event. This is still a work in progress, though.

When I first read HPMOR, I expected that taking responsibility was a single jump you had to take once. Now, it seems to be a succession of realizations, where the tools I had been given prove insufficient and I have to take it upon myself to reforge better ones. I'm actually looking forward to the next realization now. I hope you and I have the courage to continue down this road.

Comment by Lucie Philippon (lucie-philippon) on Environmental allergies are curable? (Sublingual immunotherapy) · 2023-12-27T11:07:30.837Z · LW · GW

I was allergic to dust mites as a child, and this caused me a severe asthma attack when I was around 10. I live in France, and I was prescribed SLIT by the first allergy specialist my mother found, so I guess it's quite a common treatment there. I took it for more than 5 years, and now, 8 years later, I no longer have any allergy symptoms.

Comment by Lucie Philippon (lucie-philippon) on 2023 Unofficial LessWrong Census/Survey · 2023-12-09T11:19:26.886Z · LW · GW

I filled in the survey! It was a fun way to relax this morning.

Comment by Lucie Philippon (lucie-philippon) on Lucie Philippon's Shortform · 2023-07-09T13:50:32.792Z · LW · GW

Thank you for the pointer! I found the article you mentioned, and then found the tag Postmortem & Retrospective, which led me to three additional posts:

Comment by Lucie Philippon (lucie-philippon) on Lucie Philippon's Shortform · 2023-07-09T06:16:09.299Z · LW · GW

Yesterday, I was searching for posts by alignment researchers describing how they got into the field. I was searching specifically for personal stories rather than guides on how other people can get into the field.

I was trying to perform intuition flooding: reading lots of accounts to build intuitions about which techniques work for entering the field.

I only managed to find three that somewhat fit my target:

Neel Nanda's post was the central example of what I was looking for, and I was surprised not to find more. Does anyone know where I can find more posts like this?

Comment by lucie-philippon on [deleted post] 2023-06-14T06:42:50.668Z

blog.jaibot.com does not seem to exist anymore.

Comment by Lucie Philippon (lucie-philippon) on Open Thread With Experimental Feature: Reactions · 2023-05-27T14:51:45.480Z · LW · GW

I don't have the intuition that reactions will replace some comments which would have been written without this feature. What makes you think this will happen?

If reactions were tied to posting a comment, such that reactions could not decrease the number of comments, would this make you more likely to support this feature?

Incidentally, thinking about which reaction to put on this comment instead of just up- or downvoting made me realize I did not completely understand what you meant, and motivated me to write a comment instead.

Comment by Lucie Philippon (lucie-philippon) on Seize the Maximal Probability Moment · 2023-05-18T16:35:41.479Z · LW · GW

I think in this situation, you could use the momentum to implement one hack which increases the probability of implementing all of them in the future. For example: buying a whiteboard, writing down all the life-hack ideas you got from the minicamp, and putting it in a very visible place.

Comment by Lucie Philippon (lucie-philippon) on Thriving in the Weird Times: Preparing for the 100X Economy · 2023-05-09T13:00:34.926Z · LW · GW

We're in agreement. I'm not sure what my expectation is for the length of this phase or for the final productivity boost, but I was exploring what we would need to do now to prepare for the kind of world where there is a short period of time in which productivity skyrockets. If we end up in such a world, I would prefer people working on AI alignment to be ready to exploit the productivity gains fully.

Comment by Lucie Philippon (lucie-philippon) on Thriving in the Weird Times: Preparing for the 100X Economy · 2023-05-09T08:38:27.948Z · LW · GW

The question I was exploring was not how to find the tools that do make their users more productive, as I expect good curation to appear in time alongside the tools, but whether there are resources that would be necessary to use those tools yet difficult to acquire in a short time once the tools are released.

The post was not optimized for SEO, but it definitely has a ChatGPT style I dislike. It's one of my first posts, so I'm still exploring how to write good-quality posts. Thank you for the feedback!

Comment by Lucie Philippon (lucie-philippon) on Thriving in the Weird Times: Preparing for the 100X Economy · 2023-05-09T08:24:54.823Z · LW · GW

At the individual level, I expect agentic AI to enable even more powerful tools, like ACT acting as a semi-autonomous digital assistant, or AutoGPT acting as a lower-level executor, taking in your goals and doing most of the work.

Once we have powerful agentic AGI, of the kind that can run continuously and disempower humanity, I expect we'll be leaving the "world as normal but faster" phase where tools are useful, and what happens next depends on our alignment plan, I guess.

Comment by Lucie Philippon (lucie-philippon) on Thriving in the Weird Times: Preparing for the 100X Economy · 2023-05-09T08:02:14.959Z · LW · GW

I think I focused too much on the "competitive" part, but my main point was that only certain factors would maintain a difference between individuals' productivity, whether they are zero-sum or not. If future AI assistants require large personal datasets to perform well, only people with preexisting datasets will perform well for a while, even though anyone could start building their own dataset at that point.

Comment by Lucie Philippon (lucie-philippon) on Cyborgism · 2023-02-13T20:09:07.547Z · LW · GW

Conjecture is "a team of researchers dedicated to applied, scalable AI alignment research." according to their website https://www.conjecture.dev/

They publish regularly on the Alignment Forum and LessWrong: https://www.lesswrong.com/tag/conjecture-org

I also searched their website, and it does not look like Bonsai is publicly accessible. It must be some internal tool they developed?

Comment by Lucie Philippon (lucie-philippon) on The Rationalists of the 1950s (and before) also called themselves “Rationalists” · 2023-01-14T22:12:27.055Z · LW · GW

This post points at an interesting fact: some people, communities, and organizations already called themselves "rationalists" before the current rationalist movement. It brings forth the idea that the rationalist movement may be anchored in a longer history than might first seem apparent from reading the LessWrong/Overcoming Bias/Eliezer history.

However, this post reads more like a Wikipedia article, or a historical overview. It does not read like it has a goal. Is this post making some sort of argument that the current rationalist community is descended from those earlier groups? Is it poking at the consensus history of how the rationalist community ended up choosing "rationalist" as an identifier? I don't know whether any of those things is argued in this post.

This feels like an interesting bag of facts, full of promising threads of inquiry which could develop into new historical insights and make great posts. I am looking forward to reading those follow-ups, but for now this feels incomplete and lacking purpose.

Comment by Lucie Philippon (lucie-philippon) on Lucie Philippon's Shortform · 2022-12-04T21:17:09.740Z · LW · GW

TIL that the path a new LW user is expected to follow, according to https://www.lesswrong.com/posts/rEHLk9nC5TtrNoAKT/lw-2-0-strategic-overview, is to become comfortable with commenting regularly in 3-6 months, and comfortable with posting regularly in 6-9 months. I discovered the existence of shortforms. I (re)discovered the expectation that your posts should be treated as a personal blog, Medium-style?

As I'm typing this, I'm still unsure whether I'm destroying the website with my bad shortform, even though the placeholder explicitly said... (*right click, inspect*)

Write your thoughts here! What have you been thinking about?
Exploratory, draft-stage, rough, and rambly thoughts are all welcome on Shortform.

I'm definitely rambling! Look! I'm following the instructions!

I feel like a "guided tour of LW" is missing when joining the website? Some sort of premade path to get up to speed on "what am I supposed and allowed to do as a user of LW, besides reading posts?". It could take some inspiration from Duolingo, Brilliant, or any other app trying to get a user past the initial step of interacting with the content.

Comment by Lucie Philippon (lucie-philippon) on AGI Ruin: A List of Lethalities · 2022-06-09T18:26:10.573Z · LW · GW

I vehemently disagree here, based on my personal history, generalizable or not. I will illustrate with the three turning points of my recent life.

First step: I stumbled upon HPMOR, and Eliezer's way of looking straight into the irrationality of all our common ways of interacting and thinking was deeply shocking. It made me feel like he was, in a sense, angrily pointing at me, who worked more like one of the NPCs than like Harry. I heard him telling me: you're dumb, and all your ideals of making intelligent decisions, being the gifted kid, and being smarter than everyone are just delusions. You're so out of touch with reality on so many levels, where to even start?

This attitude made me embark on a journey to improve myself, read the Sequences, take the Giving What We Can pledge after having known about EA for many years, and overall reassess whether I was striving towards my goal of helping people (spoiler: I was not).

Second step: The April Fools' post also shocked me on many levels. I was once again deeply struck by the sheer pessimism of this figure I respected so much. After months of reading articles on LessWrong, and so many about AI alignment, this was the one that made me terrified in the face of the horrors to come.

Somehow this article, maybe by not caring about not hurting people, made me join an AI alignment research group in Berlin. I started investing myself in the problem, working on it regularly and diverting my donations towards effective organizations in the field. It even caused me to publish my first bit of research on preference learning.

Third step: Today this post, by not hiding any reality of the issue and striking down a lot of ideas I was relying on for hope, made me realize I was becoming complacent. Doing a bit of research on the weekend is the way to be able to say "Yeah, I participated in solving the issue" once it's solved, not the way to make sure it is in fact solved.

Therefore, based on my experience, not many works have made me significantly alter my life decisions. And those that did are all strangely ranting, smack-you-in-the-face works written by Eliezer.

Maybe I'm not the audience to optimize for to solve the problem, but on my side, I need even more smack-in-the-face, break-your-fantasy style posts.

Comment by Lucie Philippon (lucie-philippon) on Less Wrong Community Weekend 2022 · 2022-05-28T11:36:53.252Z · LW · GW

Regarding the schedule, when does the event start on Friday and end on Monday? I would like to book my trip already, to take advantage of low prices.

Comment by Lucie Philippon (lucie-philippon) on AI Safety Berlin Kickoff Meetup · 2022-04-19T11:35:32.371Z · LW · GW

I would love to go, and was pondering quite hard whether to try to find other people interested in this endeavour in Berlin. Sadly, I am not available this weekend. Can I join on Saturday the 30th without going to the first one?

Comment by Lucie Philippon (lucie-philippon) on How can I determine that Elicit is not some weak AGI's attempt at taking over the world ? · 2022-04-12T18:12:42.019Z · LW · GW

Thank you for the reply. I know that the worry is unnecessary; I was rather asking what you would do, if you didn't know for a fact that it was indeed based on GPT-3 or that humans were effectively overseeing it, to determine whether it is an unsafe AGI trying to manipulate the humans using it.

I know that no one could detect a superintelligent AGI trying to manipulate them, but I think it can be non-obvious that a sub-human AGI is trying to manipulate you if you don't look for it.

Primarily, I think that currently no one uses AI systems with the expectation that they could try to deceive them, so they don't apply the basic level of doubt you apply to every human whose intentions you don't know.

Comment by Lucie Philippon (lucie-philippon) on Open & Welcome Thread December 2021 · 2021-12-29T20:30:07.432Z · LW · GW

Thank you for the heads-up! I joined the meetup group and I am looking forward to new events :)

Comment by Lucie Philippon (lucie-philippon) on Open & Welcome Thread December 2021 · 2021-12-29T15:27:17.115Z · LW · GW

Hello everyone! My name is Lucie, and I am studying computer science. I'm fascinated by this website and started binge-reading the Sequences after finishing HPMOR. With all the information I was exposed to on this website during the last week, I am hyped and thinking frantically about how all of this can change my life goals.

However, I know that, for me, only reading more and more posts and taking in more and more information will only sustain me for a while. When my hype dies down, I think I will not be as motivated as I am right now to read posts, if I don't find a way to tie it to my life through more than pure curiosity.

I think I need to feel at least a bit part of a community and tie it into my social life to keep my interest for long enough. Therefore, I'm making this comment and asking you how to meet some people from this community, either online or offline.

Right now, I'm a bit lost as to what the next step for me is in this journey. I don't know whether the lack of an explicit way of getting into the community is an intentional filter for people with enough intrinsic motivation to continue learning on their own for a long time. Is there a desire for new active members, whatever that means?

So anyway, if you want to help me, to chat, or to meet in Berlin, feel free to reply or send me a message!