Posts

Which LessWrongers are (aspiring) YouTubers? 2023-10-23T13:21:49.004Z
Private Biostasis & Cryonics Social 2023-08-16T02:34:19.605Z
Cryonics Career Survey (more jobs than you think) 2023-06-18T02:13:11.159Z
What fraction of words written/read are AI-written? 2023-06-17T13:15:40.865Z
Rishi to outline his vision for Britain to take the world lead in policing AI threats when he meets Joe Biden 2023-06-06T04:47:30.798Z
Who are the Experts on Cryonics? 2023-05-27T19:24:39.958Z
The Case for Brain-Only Preservation 2023-04-20T22:01:43.457Z
Near-mode cryonics: A thought experiment 2023-04-09T22:21:19.704Z
What games are using the concept of a Schelling point? 2023-04-09T17:21:33.964Z
Mati's introduction to pausing giant AI experiments 2023-04-03T15:56:58.542Z
Survey on lifeloggers for a research project 2023-03-26T00:02:40.090Z
[Link] Petition on brain preservation: Allow global access to high-quality brain preservation as an option rapidly after death 2023-02-26T15:56:55.656Z
[fiction] Our Final Hour 2022-12-11T05:49:01.489Z
Great Cryonics Survey of 2022 2022-12-03T05:10:14.536Z
Exams-Only Universities 2022-11-06T22:05:39.373Z
Mérida ACX Meetup 2022-08-17T17:06:15.937Z
Should I delay having children to take advantage of polygenic screening? 2021-12-19T06:39:25.071Z
Which of our online writings was used to train GPT-3? 2021-10-30T21:52:08.706Z
What are fiction stories related to AI alignment? 2021-10-29T02:59:52.920Z
Longtermism vs short-termism for personal life extension 2021-07-17T03:52:54.407Z
If instead of giving out dividends, public companies bought total market index funds, which companies would be the biggest? 2021-06-15T17:07:05.338Z
What's your probability that the concept of probability makes sense? 2021-05-22T21:53:31.766Z
[May – weekly] Focusmate: identity preservation 2021-04-25T17:58:28.897Z
Is there any plausible mechanisms for why taking an mRNA vaccine might be undesirable for a young healthy adult? 2021-04-08T01:18:42.434Z
What do you think would be the best investment policy for a cryonics trust? 2021-04-04T19:20:22.705Z
2 innovative life extension approaches using cryonics technology 2021-04-02T00:54:04.539Z
[April – weekly] Focusmate: biostasis, lifelogging, death-focus ethics 2021-03-31T16:49:02.604Z
[event] Cryonics icebreaker 2021-03-21T17:44:29.114Z
What are fun little puzzles / games / exercises to learn interesting concepts? 2021-03-18T03:26:00.948Z
[recurrent] Focusmate: cryonics and lifelogging 2021-03-11T16:28:34.263Z
Welcome to LW/SSC/EA in San Juan, Puerto Rico 2021-03-09T18:49:35.733Z
What ethical thought experiments can be reversed? 2021-03-06T15:12:09.826Z
Focusmate: Cryonics sign up 2021-03-01T15:41:10.366Z
How much harder is it to revive a neuro-only cryonics patient? 2021-01-12T23:24:45.963Z
Signaling importance 2020-12-08T09:14:36.148Z
Predictions made by Mati Roy in early 2020 2020-11-21T03:24:56.020Z
What fraction of Dan Ariely's Irrational Game hasn't replicated? 2020-11-09T20:25:27.445Z
What features would you like a prediction platform to have? 2020-10-13T00:48:03.024Z
Reviews of the book 'The Alignment Problem' 2020-10-11T07:41:14.841Z
Reviews of TV show NeXt (about AI safety) 2020-10-11T04:31:48.363Z
Buying micro-biostasis 2020-10-07T10:55:19.006Z
What reacts would you like to be able to give on posts? (emoticons, cognicons, and more) 2020-10-04T18:31:06.596Z
What are examples of Rationalist fable-like stories? 2020-09-28T16:52:13.500Z
What are good ice breaker questions for meeting people in this community? 2020-09-28T15:07:16.798Z
What hard science fiction stories also got the social sciences right? 2020-09-27T20:37:44.256Z
Surviving Petrov Day 2020-09-26T16:40:03.169Z
Has anyone written stories happening in Hanson's em world? 2020-09-21T14:37:11.150Z
For what X would you be indifferent between living X days, but forgetting your day at the end of everyday, or living 10 days? (terminally) 2020-09-18T04:05:59.078Z
How do you celebrate your birthday? 2020-09-17T10:00:50.609Z
What are examples of simpler universes that have been described in order to explain a concept from our more complex universe? 2020-09-17T01:31:10.367Z

Comments

Comment by Mati_Roy (MathieuRoy) on Mati_Roy's Shortform · 2024-04-18T21:56:27.888Z · LW · GW

topic: economics

idea: when building something with local negative externalities, have some mechanism to measure the externalities in terms of how much the surrounding property valuation changed (or are expected to change based, say, through a prediction market) and have the owner of that new structure pay the owners of the surrounding properties.
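A rough sketch of the payment mechanism (my own toy illustration; the lot names and numbers are made up, and the valuations could come from appraisals or a prediction market):

```python
# Toy sketch of the mechanism: compare surrounding property valuations before
# and after the new structure, and bill the builder for the drops.

def externality_payments(before: dict, after: dict) -> dict:
    """Return how much the builder owes each surrounding property owner."""
    payments = {}
    for owner, value_before in before.items():
        change = after[owner] - value_before
        if change < 0:  # only negative externalities get compensated here
            payments[owner] = -change
    return payments

before = {"lot_12": 300_000, "lot_13": 310_000, "lot_14": 295_000}
after = {"lot_12": 290_000, "lot_13": 312_000, "lot_14": 280_000}

print(externality_payments(before, after))
# {'lot_12': 10000, 'lot_14': 15000} -> the builder would owe 25,000 in total
```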

Comment by Mati_Roy (MathieuRoy) on Mati_Roy's Shortform · 2024-04-15T22:38:43.952Z · LW · GW

I wonder what fraction of people identify as "normies"

I wonder if most people have something niche they identify with and label people outside of that niche as "normies"

if so, then a more objective (and maybe better) term would be "non-<whatever your thing is>"

like, athletic people could use "non-athletic" instead of "normies" for that class of people

Comment by Mati_Roy (MathieuRoy) on Mati_Roy's Shortform · 2024-04-14T17:37:17.295Z · LW · GW

just a loose thought, probably obvious

some tree species self-selected for height (i.e. there's no point in being a tall tree unless taller trees are blocking your sunlight)

humans were not the first species to self-select (although humans can now do it intentionally)

on human self-selection: https://www.researchgate.net/publication/309096532_Survival_of_the_Friendliest_Homo_sapiens_Evolved_via_Selection_for_Prosociality

Comment by Mati_Roy (MathieuRoy) on What games are using the concept of a Schelling point? · 2024-04-14T17:14:26.194Z · LW · GW

Board game: Medium

2 players each reveal a card with a word, then they both say a word based on those and get points if they say the same word (that's the gist; the real game adds some complexities).

Example at 1m20 here: https://youtu.be/yTCUIFCXRtw?si=fLvbeGiKwnaXecaX
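A toy simulation of the Schelling-point mechanic (my own sketch; the word pools and scoring below are invented for illustration, not the actual game's rules):

```python
# Toy simulation: both players independently guess a word linking the two
# revealed cards; they score only if they converge on the same word.
import random

ASSOCIATIONS = {  # invented association pools, one per pair of revealed cards
    ("ocean", "night"): ["moon", "dark", "waves"],
    ("fire", "ice"): ["water", "steam", "opposites"],
}

def play_round(card_a: str, card_b: str, rng: random.Random) -> int:
    pool = ASSOCIATIONS[(card_a, card_b)]
    guess_1 = rng.choice(pool)  # player 1's guess
    guess_2 = rng.choice(pool)  # player 2's guess
    return 1 if guess_1 == guess_2 else 0

rng = random.Random(0)
matches = sum(play_round("ocean", "night", rng) for _ in range(100))
print(f"Matches in 100 rounds: {matches}")  # ~33 if guesses were uniform and independent
```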

Comment by Mati_Roy (MathieuRoy) on What games are using the concept of a Schelling point? · 2024-04-14T03:15:56.763Z · LW · GW

I'm glad past Mati cast a wider net, as the specifics for this year's Schelling day are different ☺️☺️

Comment by Mati_Roy (MathieuRoy) on San Francisco ACX Meetup “First Saturday” · 2024-02-03T23:09:14.267Z · LW · GW

idk if the events often run over time, but I might pass by now if it's still happening ☺️

Comment by Mati_Roy (MathieuRoy) on When scientists consider whether their research will end the world · 2024-01-01T20:58:14.191Z · LW · GW

I liked reading your article; very interesting! 🙏

One point I figured I should x-post with our DMs 😊 --> IMO, if one cares about future lives (as much as present ones), then the question stops really being about expected lives and starts just being about whether an action increases or decreases x-risks. I think a lot (or all) of the tech you described also has a probability of causing an x-risk if it's not implemented. I don't think we can really determine whether the probability of some of those x-risks is low enough in absolute terms, as those probabilities would need to be unreasonably low, leading to full paralysis, and full paralysis could itself lead to x-risk. I think instead someone with those values (i.e. caring about unborn people) should compare the probability of x-risks if a tech gets developed vs. not developed (or whatever else is being evaluated). 🙂

Comment by Mati_Roy (MathieuRoy) on Cryonics signup guide #1: Overview · 2024-01-01T16:19:27.747Z · LW · GW

new, great, complementary post: Critical Questions about Patient Care in Cryonics and Biostasis

Comment by Mati_Roy (MathieuRoy) on Without - MicroFiction 250 words · 2023-12-13T05:46:41.837Z · LW · GW

I love this story so much, wow! It feels so incredibly tailored to me (because it is 😄). I value that a lot! It's a very scarce resource to begin with, but it hardly gets more tailored than that 😄

Comment by Mati_Roy (MathieuRoy) on Montreal cryonics and longevity hangout 2023-10-01 · 2023-10-24T03:12:46.262Z · LW · GW

that's awesome; thanks for letting me know :)

Comment by Mati_Roy (MathieuRoy) on Montreal cryonics and longevity hangout 2023-10-01 · 2023-10-23T12:29:51.173Z · LW · GW

i'd be curious to know how the first event went if you're inclined to share ☺

Comment by Mati_Roy (MathieuRoy) on Mati_Roy's Shortform · 2023-08-14T18:39:43.207Z · LW · GW

cars won't replace horses, horses with cars will

Comment by Mati_Roy (MathieuRoy) on What are examples of Rationalist posters or Rationalist poster ideas? · 2023-08-07T00:42:34.228Z · LW · GW

12 virtues of rationality: https://www.lesswrong.com/posts/DAFuuqpDcMztQGt7P/12-virtues-of-rationality-posters-icons?fbclid=IwAR2a8dbjJfl3hIpu0LTCttDzdraKcyyJ1eazs14Mr88hbXkuLjH4GPsWbyI

Comment by Mati_Roy (MathieuRoy) on Babble challenge: 50 ways of sending something to the moon · 2023-08-04T13:56:25.255Z · LW · GW

from ChatGPT

Sounds like a fun challenge! Here are 50 ways to send something to the moon:

1. Catapult
2. Giant crossbow
3. Balloon with an endless supply of helium
4. A giant seesaw
5. Sky elevator
6. Beam of light (if the object can ride light)
7. Teleportation device
8. Moon magnet
9. Whale blowhole
10. Bubble with a perfect vacuum inside
11. Tornado creator
12. Inflatable space tube
13. A jump by the Hulk
14. Sonic boom from a supersonic plane
15. Floating on a cloud machine
16. Warp drive
17. Ice cannon
18. Rocket rollercoaster
19. A super springboard
20. Fling via a giant trebuchet
21. Antigravity boots
22. Pneumatic tube system
23. Ride on a meteor
24. Build stairs to the moon
25. Giant hamster wheel that propels forward
26. Human pyramid
27. Bounce house to the moon
28. A moon hook and pull system
29. Jetpack
30. Superfast elevator
31. A gigantic yo-yo system
32. Umbrella carried by solar winds
33. Giant's throw
34. Rocket sneakers
35. Bungee cord to the moon
36. Space swing set
37. Moon vacuum
38. Space surfboard
39. Massive drone
40. Launch via space trebuchet
41. Space pogo stick
42. Inflatable space mountain
43. Magnetic repulsion system
44. Time travel to when the moon was closer
45. Huge space slingshot
46. Giant space slinky
47. An extremely powerful fan
48. A chain of trampolines
49. Magic carpet
50. Use a giant's bow and arrow

Comment by Mati_Roy (MathieuRoy) on Mati_Roy's Shortform · 2023-07-17T15:05:39.211Z · LW · GW

topics: AI, sociology

thought/hypothesis: when tech is able to create brains/bodies as good as or better than ours, it will change our perception of ourselves: we won't be in a separate magisterium from our tools anymore. maybe people will see humans as less sacred, and value life less. if you're constantly using, modifying, copying, deleting, enslaving AI minds (even AI minds that have a human-like interface), maybe people will become more okay doing that to human minds as well.

(which seems like it would be harmful for the purpose of reducing death)

Comment by Mati_Roy (MathieuRoy) on Catching the Eye of Sauron · 2023-07-17T14:50:26.899Z · LW · GW

I'm surprised this has this many upvotes. You're taking the person who contributed the most to warning humanity about AI x-risks, and saying what you think they could have done better, in a way that comes across as blamey to me. If you're blaming zir, you should probably blame everyone. I'd much rather you wrote what people could have done in general rather than targeting one of the best contributors.

Comment by Mati_Roy (MathieuRoy) on Mati_Roy's Shortform · 2023-07-07T18:53:17.407Z · LW · GW

ok that's fair yeah! thanks for your reply. I'm guessing a lot of those historical quotes are also taken out of context actually.

Comment by Mati_Roy (MathieuRoy) on Mati_Roy's Shortform · 2023-07-07T15:54:40.687Z · LW · GW

you know those lists of historical examples of notable people mistakenly saying that some tech won't be useful (for example)

Elon Musk saying that VR is just a TV on your nose will probably become one of those ^^

https://youtube.com/shorts/wYeGVStouqw?feature=share

Comment by Mati_Roy (MathieuRoy) on Mati_Roy's Shortform · 2023-07-07T13:45:25.599Z · LW · GW

related concept: https://en.wikipedia.org/wiki/Information_panspermia

video on this that was posted ~15 hours ago: https://www.youtube.com/watch?v=K4Zghdqvxt4

Comment by Mati_Roy (MathieuRoy) on Mati_Roy's Shortform · 2023-07-04T15:34:52.906Z · LW · GW

idea: Stream all of humanity's information through the cosmos in the hope that an alien civ reconstructs us (and defends us against an Earth-originating misaligned ASI)

I guess finding intelligent ETs would help with that as we could stream in a specific direction instead of having to broadcast the signal broadly

It could be that misaligned alien ASIs would mostly ignore our information (or at least not use it to, like, torture us) whereas friendly aligned ASIs would use it beneficially 🤷‍♀️

Comment by Mati_Roy (MathieuRoy) on My current thoughts on the risks from SETI · 2023-06-28T16:56:50.817Z · LW · GW

there remains a credible possibility that grabby aliens would benefit by sending a message that was carefully designed to only be detectable by civilizations at a certain level of technological development

oh wow, after reading this, I came up with the same explanation you wrote in the following 2 paragraphs just before reading them 😄

Comment by Mati_Roy (MathieuRoy) on Agentic Mess (A Failure Story) · 2023-06-23T13:40:34.636Z · LW · GW

I really liked the story, and love that you made a video version! I think it was really well made!

I'm impressed by the AI voice!

I just suggested to AI Impacts to add this story to their story repository.

I recommend / suggest considering adding "Agentic Mess (A Failure Story)" to your list.

It was developed at the 8th AI Safety Camp in 2023.

You can see the text-version here: https://www.lesswrong.com/posts/LyJAFBuuEfd4kxgsw/agentic-mess-a-failure-story

You can see the video-version here: https://www.youtube.com/watch?app=desktop&v=6edrFdkCEUE

It starts pretty close to our current AI reality and explores the possibility of AI agents replicating and trying to improve in order to achieve their goal, and, as a result, propagating like a virus. The story explores the selection pressure that would create and the results it would have.

Comment by Mati_Roy (MathieuRoy) on What fraction of words written/read are AI-written? · 2023-06-18T02:10:00.442Z · LW · GW

Thanks for your input :)

Comment by Mati_Roy (MathieuRoy) on Who are the Experts on Cryonics? · 2023-05-28T19:35:01.645Z · LW · GW

idk what CLARITY is, but yeah, I'd love to see room temperature preservation protocols developed for human brain preservation. it also has the potential to significantly reduce cost, given that a significant fraction of the cost goes towards paying for indefinite liquid nitrogen refills.

Nectome is working on aldehyde-stabilized cryopreservation for humans, which I think might provide some of those benefits (?). OregonCryo is also doing, or trying to do, something like that.

i know another researcher working on this who could probably use funding in the near future. if any of you know someone that might be interested in funding this, please lmk so I can put you in touch. i think this is one of the top opportunities for improving cryonics robustness and adoption (and maybe quality)

Comment by Mati_Roy (MathieuRoy) on Mati_Roy's Shortform · 2023-05-08T22:10:34.268Z · LW · GW

topic: intellectual discussion, ML tool, AI x-risks

Idea: Have a therapist present during intellectual debates to notice triggers and help defuse them. Triggers activate a politics mindset where the goal becomes focused on status/self-preservation/appearances/looking smart/making the other person look stupid/etc., which makes it hard to think clearly.

Two people I follow will soon have a debate on AI x-risks, which made me think of that. I can't really propose that intervention though, because it would likely be perceived, and responded to, as if it were a political move itself.

Another idea I had recently, also inspired by one of those people, was to develop a neural network that helps us notice when we're activated in that way, so we become aware of it and can defuse it. AI is too important for our egos to get in the way (but that's easier said than done).
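A crude sketch of what a first version could look like (my own toy illustration, using an off-the-shelf sentiment model as a rough proxy for "activation"; a real tool would need a classifier trained on labeled triggered-vs-calm discourse):

```python
# Crude proxy: flag debate turns that an off-the-shelf sentiment model scores
# as strongly negative, as candidates for "you might be activated here".
from transformers import pipeline

detector = pipeline("sentiment-analysis")  # generic model standing in for a bespoke one

transcript = [
    "That's an interesting point, let me think about it.",
    "You're completely missing the point, as usual.",
]

for turn in transcript:
    result = detector(turn)[0]
    if result["label"] == "NEGATIVE" and result["score"] > 0.9:
        print(f"possible trigger, consider pausing: {turn!r}")
```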

x-post Facebook

Comment by Mati_Roy (MathieuRoy) on Mati_Roy's Shortform · 2023-04-30T15:31:06.063Z · LW · GW

Topics: cause prioritization; metaphor

note I took on 2022-08-01; I don't remember what I had in mind, but I feel like it can apply to various things

from a utilitarian point of view though, i think this is almost like arguing whether dying with a red or blue shirt is better; while there might be an answer, i think it's missing the point, and we should focus on reducing risks of astronomical disasters

Comment by Mati_Roy (MathieuRoy) on Mati_Roy's Shortform · 2023-04-30T15:17:27.854Z · LW · GW

Topics: AI, forecasting, privacy

I wonder how much of a signature we leave in our writings. Like, how hard would it be for an AI to be rather confident I wrote this text? (say if it was trained on LessWrong writings, or all public writings, or maybe even private writings) What if I ask someone else to write an idea for me--how helpful is it in obfuscating the source?
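For reference, a very simple stylometric baseline looks something like this (my own sketch; the texts and labels are placeholders, and a serious attempt would use far more data and stronger models):

```python
# Baseline authorship attribution: character n-gram TF-IDF + logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [  # placeholder training texts
    "topic: AI, forecasting, privacy. i wonder how much of a signature we leave",
    "idea: when building something with local negative externalities...",
    "We present a method for scalable oversight of large language models.",
    "In this paper we analyze the convergence of stochastic gradient descent.",
]
labels = ["mati", "mati", "other", "other"]

clf = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),  # style-sensitive features
    LogisticRegression(),
)
clf.fit(texts, labels)

probe = "i wonder what fraction of people identify as normies"
print(dict(zip(clf.classes_, clf.predict_proba([probe])[0])))
```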

Comment by Mati_Roy (MathieuRoy) on Mati_Roy's Shortform · 2023-04-25T02:43:26.350Z · LW · GW

Topic: AI strategy (policies, malicious use of AI, AGI misalignment)

Epistemic status: simplistic; simplified line of reasoning; thinking out loud; a proposed frame

A significant "warning shot" from a sovereign misaligned AI doesn't seem likely to me because a human-level (and plausibly a subhuman-level) intelligence can both 1) learn deception, yet 2) can't (generally) do a lot of damage (i.e. perceptible for humanity). So the last "warning shot" before AI learns deception won't be very big (if even really notable at all), and then a misaligned agent would hide (its power and/or intentions) until it's confident it can overpower humanity (because it's easy to gain power that way)--at which point it would cause an omnicide. An exception to that is if an AI thinks other AIs are hiding in the world, then it might want to take a higher risk to overpower humanity before it's confident it can do so because it's concerned another AI will do so first otherwise. I'm not very hopeful this would give us a good warning shot though because I think multiple such AIs trying to overpower humanity would likely be too damaging for us to regroup in time.

However, it seems much more plausible to me that (non-agentic) AI tools would be used maliciously, which could lead the government to highly regulate AIs. Those regulations (ex.: nationalizing AI) preventing malicious uses could also potentially help with negligent uses. Assuming a negligent use (i.e. resulting in AGI misalignment) is much more likely to cause an existential catastrophe than a malicious use of AI, and that regulations against malicious uses are more memetically fit, then the ideal regulations to advocate for might be those that are good at preventing both malicious uses and the negligent creation of a misaligned AGI.

note to self: not posted on Facebook (yet)

Comment by Mati_Roy (MathieuRoy) on The April 2023 meetup will be April 29th at 2:00 PM · 2023-04-24T20:04:17.937Z · LW · GW

Wonderful, thank you! 🙏

Comment by Mati_Roy (MathieuRoy) on The April 2023 meetup will be April 29th at 2:00 PM · 2023-04-24T17:08:26.819Z · LW · GW

I'm 1h20 north of Georgia. I don't think I'll make it this time, but I'd love to connect with people in Georgia, so feel free to reach out ☺

Comment by Mati_Roy (MathieuRoy) on Cryonics companies should let people make conditions for reawakening · 2023-04-24T00:35:39.806Z · LW · GW

ah, yeah that's true, I did know that actually. What some of the people I know want though is to be thawed after a certain condition is met, rather than simply not being reanimated, and if I remember correctly, when I asked Alcor, they said they couldn't do that. Conditions included AI progress and family not being preserved (or something along those lines).

Comment by Mati_Roy (MathieuRoy) on The Case for Brain-Only Preservation · 2023-04-23T03:51:31.216Z · LW · GW

wow, i like the creativity!

Comment by Mati_Roy (MathieuRoy) on The Case for Brain-Only Preservation · 2023-04-22T15:05:52.376Z · LW · GW

Right, that one is part of "Easier emergency relocation" (I just edited the summary to add it, but it's in the post), but maybe that legal status also has more advantages than just transport.

Comment by Mati_Roy (MathieuRoy) on The Case for Brain-Only Preservation · 2023-04-22T15:01:37.948Z · LW · GW

All the advantages listed in the post are advantages compared to preserving the skull along with the brain.

Advantages of leaving the brain in the skull:

  • Additional protection provided by the skull
  • Avoid delicate procedure of removing the brain

The post links to this: Isolation of the Brain for Human Cryopreservation.

I was told by someone performing cryopreservations that you can remove the brain and just leave a scar at the top of the forehead hidden by hair.

Comment by Mati_Roy (MathieuRoy) on Cryonics companies should let people make conditions for reawakening · 2023-04-20T22:12:05.605Z · LW · GW

I only read the title, but wanted to record here that I know a few people that would sign up if they could do that.

Comment by Mati_Roy (MathieuRoy) on Mati_Roy's Shortform · 2023-04-10T01:11:58.403Z · LW · GW

topic: AI alignment, video game | status: idea

Acknowledgement: Inspired by an idea I heard from Eliezer in zir podcast with Lex Fridman, and by the game Detroit: Become Human.

Video game where you're in an alternate universe where aliens create an artificial intelligence that's a human. The human has various properties typical of AI, such as running way faster than the aliens in that world and being able to duplicate themselves. The goal of the human is to take over the world to stop some atrocity happening in that world. The aliens are trying to stop the human from taking over the world.

Comment by Mati_Roy (MathieuRoy) on What games are using the concept of a Schelling point? · 2023-04-09T19:03:15.454Z · LW · GW

Thanks for the tip! I guess I'm also interested in collecting such games more generally. I'd rather cast my net wider. ☺

Comment by Mati_Roy (MathieuRoy) on What games are using the concept of a Schelling point? · 2023-04-09T18:53:29.499Z · LW · GW

Hmmm. I guess the ideal for my immediate desire would be a 2-3 player game that takes less than 30 minutes to play. Goal is just to do something fun, and I find it extra-fun to follow the Schelling Day theme ^^

Comment by Mati_Roy (MathieuRoy) on jacquesthibs's Shortform · 2023-04-09T18:45:45.972Z · LW · GW

Nicholas' doc on Cyborgism

do you have a link?

I'd be interested in being added to the Discord

Comment by Mati_Roy (MathieuRoy) on Forecasting Thread: AI Timelines · 2023-03-26T21:28:10.935Z · LW · GW

Update: 18% <2033, 18% 2033-2043, 18% 2043-2053, 18% 2050-2070, 28% 2070+ or won't happen

see more details on my shortform: https://www.lesswrong.com/posts/DLepxRkACCu8SGqmT/mati_roy-s-shortform?commentId=KjxnsyB7EqdZAuLri

Comment by Mati_Roy (MathieuRoy) on Mati_Roy's Shortform · 2023-03-26T20:47:43.143Z · LW · GW

✨ topic: AI timelines

Note: I'm not explaining my reasoning in this post, just recording my predictions and sharing how I feel.

I'll sound like a boring cliche at this point, but I just wanted to say it publicly: my AGI timelines shortened earlier this year.

Without thinking too much about quantifying my probabilities, I'd say the probabilities that we'll get AGI, or AI strong enough to prevent AGI (including through omnicide), are:

  • 18% <2033
  • 18% 2033-2043
  • 18% 2043-2053
  • 18% 2050-2070
  • 28% 2070+ or won't happen

But at this point I feel like not much would surprise me in terms of short timelines. Transformative AI seems really close. Short timelines and AI x-risk concerns are common among people working in AI and among people trying to predict the development of this tech. It's the first time I've been feeling sick to my stomach when thinking about AI timelines. First time that my mind is this focused emotionally on the threat, simulating what the last moments before an AI omnicide would look like.

What fraction of the world would be concerned about AI x-risk 1 second before an AI omnicide? Plausibly very low.

  • Will people see their death coming? For example, because a drone breaks their house window just before shooting them in the head. And if so, will people be able to say "Ah, Mati was right" just before they die or will they just think it's a terrorist attack or something like that? I imagine losing access to Internet and cellphone communication, not thinking much of it, while a drone is on its journey to kill me.
  • Before AI overpowers humanity, will people think that I was wrong because AI is actually providing a crazy amount of wealth? (despite me already thinking this)
  • Will I have time to post my next AI x-risk fiction story before AI kills us all? I better get to it.

To be clear, this fear is not at all debilitating or otherwise pathological.

(I know some of those thoughts are silly; I'm obviously predominantly concerned about omnicide, not about publishing my fiction or being acknowledged)

I'm finding myself wanting to simplify my life, do things faster, and focus even more on AI. (I still care about and support cryonics and cause areas adjacent to AI, like genetic engineering.)

In a few years, I might live in a constant state of thinking I could drop dead at any time from an AGI.

I used to think the most likely cause of my death would be an insufficiently good cryopreservation, but now I think it's misaligned AGI. It seems likely to me that most people alive today will die from an AI omnicide.

Comment by Mati_Roy (MathieuRoy) on Mati_Roy's Shortform · 2023-03-26T19:46:34.387Z · LW · GW

topic: genetic engineering

'Revolutionary': Scientists create mice with two fathers

(I just read the title)

Comment by Mati_Roy (MathieuRoy) on Mati_Roy's Shortform · 2023-03-16T23:08:09.483Z · LW · GW

topic: genetic engineering

'Revolutionary': Scientists create mice with two fathers

(I just read the title)

Comment by Mati_Roy (MathieuRoy) on Mati_Roy's Shortform · 2023-03-16T23:07:43.220Z · LW · GW

topic: AI

Lex Fridman:

I'm doing podcast with Sam Altman (@sama), CEO of OpenAI next week, about GPT-4, ChatGPT, and AI in general. Let me know if you have any questions/topic suggestions.

PS: I'll be in SF area next week. Let me know if there are other folks I should talk to, on and off the mic.

https://twitter.com/lexfridman/status/1636425547579310080

Comment by Mati_Roy (MathieuRoy) on Alpaca: A Strong Open-Source Instruction-Following Model · 2023-03-16T23:00:25.800Z · LW · GW

Damn, that's something I had been worrying about recently.

Eliezer said:

I don't think people realize what a big deal it is that Stanford retrained a LLaMA model, into an instruction-following form, by cheaply fine-tuning it on inputs and outputs from text-davinci-003.

It means: If you allow any sufficiently wide-ranging access to your AI model, even by paid API, you're giving away your business crown jewels to competitors that can then nearly-clone your model without all the hard work you did to build up your own fine-tuning dataset. If you successfully enforce a restriction against commercializing an imitation trained on your I/O - a legal prospect that's never been tested, at this point - that means the competing checkpoints go up on bittorrent.

I'm not sure I can convey how much this is a brand new idiom of AI as a technology. Let's put it this way:

If you put a lot of work into tweaking the mask of the shoggoth, but then expose your masked shoggoth's API - or possibly just let anyone build up a big-enough database of Qs and As from your shoggoth - then anybody who's brute-forced a core unmasked shoggoth can gesture to your shoggoth and say to their shoggoth "look like that one", and poof you no longer have a competitive moat.

It's like the thing where if you let an unscrupulous potential competitor get a glimpse of your factory floor, they'll suddenly start producing a similar good - except that they just need a glimpse of the inputs and outputs of your factory. Because the kind of good you're producing is a kind of pseudointelligent gloop that gets sculpted; and it costs money and a simple process to produce the gloop, and separately more money and a complicated process to sculpt the gloop; but the raw gloop has enough pseudointelligence that it can stare at other gloop and imitate it.

In other words: The AI companies that make profits will be ones that either have a competitive moat not based on the capabilities of their model, OR those which don't expose the underlying inputs and outputs of their model to customers, OR can successfully sue any competitor that engages in shoggoth mask cloning.

https://twitter.com/ESYudkowsky/status/1635577836525469697

Interesting video on the topic: The Model That Changes Everything: Alpaca Breakthrough (ft. Apple's LLM, BritGPT, Ernie and AlexaTM)
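As a concrete illustration of the "imitation fine-tuning" idiom the quote describes, here's a minimal sketch (the base model, data, and hyperparameters are placeholders I picked, not anything from the thread):

```python
# Minimal "imitation fine-tuning" sketch: train a small open model on
# (prompt, response) pairs collected from a stronger model's API.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

pairs = [  # in practice: tens of thousands of pairs scraped from the API
    {"prompt": "Explain photosynthesis simply.",
     "response": "Plants use sunlight to turn water and CO2 into sugar and oxygen."},
]

tokenizer = AutoTokenizer.from_pretrained("gpt2")        # stand-in base model
model = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

model.train()
for pair in pairs:
    text = pair["prompt"] + "\n" + pair["response"] + tokenizer.eos_token
    batch = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    # Plain causal-LM loss over the whole sequence; real setups usually mask the prompt.
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```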

Comment by Mati_Roy (MathieuRoy) on Mati_Roy's Shortform · 2023-03-15T03:56:52.347Z · LW · GW

topic: lifelogging as life extension

pivotal acts might require destroying a lot of hardware (ex.: through EMPs); ideally this would be targeted destruction of hardware, but it increases my probability that the worlds in which lifelogging as life extension is useful are ones that require EMP-proof lifelogging

Comment by Mati_Roy (MathieuRoy) on [Link] Petition on brain preservation: Allow global access to high-quality brain preservation as an option rapidly after death · 2023-02-27T15:51:19.202Z · LW · GW

Makes sense

Comment by Mati_Roy (MathieuRoy) on Gemini modeling · 2023-02-05T23:56:58.272Z · LW · GW

i just read the beginning

I thought Alice wanted bananas for a change, but they weren't ready yet, so ze went for the Cheerios :p

Comment by Mati_Roy (MathieuRoy) on Mati_Roy's Shortform · 2022-12-13T18:45:18.701Z · LW · GW

Idea for a line of thinking: what if, as a result of automation, we could use the ~entire human population to control AI? Is there any way we could meaningfully organize this large workforce towards that goal?

Comment by Mati_Roy (MathieuRoy) on [fiction] Our Final Hour · 2022-12-11T18:40:46.698Z · LW · GW

Oh, a lot of what I wrote is for 'cinematic' effect and symbolism. Maybe tagging it as "Rationalist fic" made it seem like this was a prediction; I changed it to just "fiction" and added a note.

But I appreciate your input/perspective!