Posts

Cryonics is free 2024-09-29T17:58:17.108Z
AI: 4 levels of impact [micropost] 2024-06-12T16:58:31.888Z
Second-Order Rationality, System Rationality, and a feature suggestion for LessWrong 2024-06-05T07:20:10.178Z
Which LessWrongers are (aspiring) YouTubers? 2023-10-23T13:21:49.004Z
Private Biostasis & Cryonics Social 2023-08-16T02:34:19.605Z
Cryonics Career Survey (more jobs than you think) 2023-06-18T02:13:11.159Z
What fraction of words written/read are AI-written? 2023-06-17T13:15:40.865Z
Rishi to outline his vision for Britain to take the world lead in policing AI threats when he meets Joe Biden 2023-06-06T04:47:30.798Z
Who are the Experts on Cryonics? 2023-05-27T19:24:39.958Z
The Case for Brain-Only Preservation 2023-04-20T22:01:43.457Z
Near-mode cryonics: A thought experiment 2023-04-09T22:21:19.704Z
What games are using the concept of a Schelling point? 2023-04-09T17:21:33.964Z
Mati's introduction to pausing giant AI experiments 2023-04-03T15:56:58.542Z
Survey on lifeloggers for a research project 2023-03-26T00:02:40.090Z
[Link] Petition on brain preservation: Allow global access to high-quality brain preservation as an option rapidly after death 2023-02-26T15:56:55.656Z
[fiction] Our Final Hour 2022-12-11T05:49:01.489Z
Great Cryonics Survey of 2022 2022-12-03T05:10:14.536Z
Exams-Only Universities 2022-11-06T22:05:39.373Z
Mérida ACX Meetup 2022-08-17T17:06:15.937Z
Should I delay having children to take advantage of polygenic screening? 2021-12-19T06:39:25.071Z
Which of our online writings was used to train GPT-3? 2021-10-30T21:52:08.706Z
What are fiction stories related to AI alignment? 2021-10-29T02:59:52.920Z
Longtermism vs short-termism for personal life extension 2021-07-17T03:52:54.407Z
If instead of giving out dividends, public companies bought total market index funds, which companies would be the biggest? 2021-06-15T17:07:05.338Z
What's your probability that the concept of probability makes sense? 2021-05-22T21:53:31.766Z
[May – weekly] Focusmate: identity preservation 2021-04-25T17:58:28.897Z
Is there any plausible mechanisms for why taking an mRNA vaccine might be undesirable for a young healthy adult? 2021-04-08T01:18:42.434Z
What do you think would be the best investment policy for a cryonics trust? 2021-04-04T19:20:22.705Z
2 innovative life extension approaches using cryonics technology 2021-04-02T00:54:04.539Z
[April – weekly] Focusmate: biostasis, lifelogging, death-focus ethics 2021-03-31T16:49:02.604Z
[event] Cryonics icebreaker 2021-03-21T17:44:29.114Z
What are fun little puzzles / games / exercises to learn interesting concepts? 2021-03-18T03:26:00.948Z
[recurrent] Focusmate: cryonics and lifelogging 2021-03-11T16:28:34.263Z
Welcome to LW/SSC/EA in San Juan, Puerto Rico 2021-03-09T18:49:35.733Z
What ethical thought experiments can be reversed? 2021-03-06T15:12:09.826Z
Focusmate: Cryonics sign up 2021-03-01T15:41:10.366Z
How much harder is it to revive a neuro-only cryonics patient? 2021-01-12T23:24:45.963Z
Signaling importance 2020-12-08T09:14:36.148Z
Predictions made by Mati Roy in early 2020 2020-11-21T03:24:56.020Z
What fraction of Dan Ariely's Irrational Game hasn't replicated? 2020-11-09T20:25:27.445Z
What features would you like a prediction platform to have? 2020-10-13T00:48:03.024Z
Reviews of the book 'The Alignment Problem' 2020-10-11T07:41:14.841Z
Reviews of TV show NeXt (about AI safety) 2020-10-11T04:31:48.363Z
Buying micro-biostasis 2020-10-07T10:55:19.006Z
What reacts would you like to be able to give on posts? (emoticons, cognicons, and more) 2020-10-04T18:31:06.596Z
What are examples of Rationalist fable-like stories? 2020-09-28T16:52:13.500Z
What are good ice breaker questions for meeting people in this community? 2020-09-28T15:07:16.798Z
What hard science fiction stories also got the social sciences right? 2020-09-27T20:37:44.256Z
Surviving Petrov Day 2020-09-26T16:40:03.169Z
Has anyone written stories happening in Hanson's em world? 2020-09-21T14:37:11.150Z

Comments

Comment by Mati_Roy (MathieuRoy) on Cryonics is free · 2024-10-03T19:04:54.903Z · LW · GW

I don't know. The brain preservation prize for preserving the connectome of a large mammal was won with aldehyde-stabilization though

Comment by Mati_Roy (MathieuRoy) on Cryonics is free · 2024-10-03T19:03:52.554Z · LW · GW

Oregon Brain Preservation uses a technique allowing fridge-temperature storage, and seems well funded, so idk if the argument works out

Idk the finances for Cryonics Germany, but I would indeed guess that Tomorrow Bio has more funding + provides better SST. I would recommend using Tomorrow Bio over Cryonics Germany if you can afford it

Comment by Mati_Roy (MathieuRoy) on Cryonics is free · 2024-10-01T03:40:48.402Z · LW · GW

To be clear, it's subsidized. So it's not like there's no money to maintain you in preservation. As far as I know, Oregon Brain Preservation has a trust similar to Alcor's in terms of money per volume preserved for its cryonics patients, which seems more than enough to maintain patients in storage just with the interest. Of course, there could be major economic disruptions that change that. I'm not sure how much Cryonics Germany is putting aside though.

Plus, Oregon Brain Preservation's approach seems to work at fridge temperature rather than requiring LN2 temperature.

What would a guarantee mean here? Like they give money to your heirs if they accidentally thaw you? I'm not sure what you're asking.

Alternatives to that are paid versions of cryonics, or otherwise burial or cremation.

Comment by Mati_Roy (MathieuRoy) on Cryonics is free · 2024-09-30T20:33:39.385Z · LW · GW

fair enough! maybe i should edit my post with "brain preservation, some of it through cryonics, for indefinite storage with the purpose of future reanimation is sufficiently subsidized to be free or marginally free in some regions of the world" 😅

Comment by Mati_Roy (MathieuRoy) on Cryonics is free · 2024-09-29T22:37:05.387Z · LW · GW

i don't think killing yourself before entering the cryotank vs after is qualitatively different, but the latter maintains option value (in that specific regard re MUH) 🤷‍♂️

Comment by Mati_Roy (MathieuRoy) on Cryonics is free · 2024-09-29T22:33:57.926Z · LW · GW

if you're alive, you can kill yourself when s-risks increase beyond your comfort point. if you're preserved, then you rely on other people to execute on those wishes

Comment by Mati_Roy (MathieuRoy) on Cryonics is free · 2024-09-29T22:28:03.089Z · LW · GW

I mean, it's not a big secret, there's a wealthy person behind it. And there are 2 potential motivations for it:
1) altruistic/mission-driven
2) having more cases helps improve the service, which can benefit them as well.

But also, Oregon Brain Preservation is less expensive as a result of:
1) doing brain-only (Alcor doesn't extract the brain for its neuro cases)
2) using chemical preservation which doesn't require LN2 (this represents a significant portion of the cost)
3) not including the cost of standby (ie. having a team stay at your bedside in advance until you die), which is also a significant portion
4) collaborating with local funeral homes (instead of having a fully in-house team that can be deployed anywhere)
5) only offering the service locally (no flights)

I visited Oregon Brain Preservation, talked with Jordan Sparks and exchanged emails, and have been following them for many years, and Jordan seems really solid IMO.

Cryonics Germany people seem very caring and seem to understand well how to work with a thanatologist. I also had email exchanges with them, but not as much.

🤷‍♂️

Concerns about personal s-risks make sense.

Comment by Mati_Roy (MathieuRoy) on Cryonics is free · 2024-09-29T22:22:01.092Z · LW · GW

I mean, you can trust it to preserve your brain more than you can trust a crematorium to preserve your brain.

And if you do chemical preservation, maintaining a brain in storage is operationally fairly simple. LN2 isn't that complex either, but does carry higher risks.

That said, I would generally suggest using Tomorrow Biostasis for Europe residents if you can afford it.

Comment by Mati_Roy (MathieuRoy) on Mati_Roy's Shortform · 2024-09-29T16:48:40.120Z · LW · GW

here's my new fake-religion, taking just-world bias to its full extreme

the belief that we're simulations and we'll be transcended to Utopia in 1 second, because a future civilisation is creating many simulations of all possible people in all possible contexts and then uploading them to Utopia, so that from anyone's perspective you have a very high probability of transcending to Utopia in 1 second

^^

Comment by Mati_Roy (MathieuRoy) on Mati_Roy's Shortform · 2024-09-26T18:09:24.520Z · LW · GW

Is the opt-in button for Petrov Day a trap? Kinda scary to press on large red buttons 😆

Comment by Mati_Roy (MathieuRoy) on You Only Live Twice · 2024-08-12T07:43:07.720Z · LW · GW

The lifelogging-as-life-extension version of this post would be like "You Only Live 1.5 Times" ^^

Comment by Mati_Roy (MathieuRoy) on Mati_Roy's Shortform · 2024-07-22T01:45:27.664Z · LW · GW

epistemic status: speculative, probably simplistic and ill defined

Someone asked me "What will I do once we have AGI?"

I generally define the AGI-era as starting at the point where all economically valuable tasks can be performed by AIs at a lower cost than a human (at subsistence level, including buying any available augmentations for the human). This notably excludes:

1) any tasks that humans can do that still provide value at the margin (ie. comparing the caloric cost of feeding that human while they're working vs while they're not working, rather than vs while they're not existing)

2) things that are not "tasks", such as:

a) caring about the internal experience of the service provider (ex.: wanting a DJ that feels human emotions regardless of its actions) --> although, maybe you could include that in the AGI definition too. but what if you value having a DJ be exactly a human? then the best an AGI could do is 3D print a human or something like that. or maybe you're even more specific, and you want a "pre-singularitarian natural human", in which case AGI seems impossible by (very contrived) definition.

b) the value of the memories encoded in human brains

c) the value of doing scientific experiments on humans

For my answer to the question, I wanted to say something like: think about what I should do with my time for a long time, and keep my options open (ex.: avoid altering my mind in ways whose consequences I don't understand well). But then, that seems like something that might be economically useful to sell, so using the above definition, it seems like I should have AI systems that are able to do that better/cheaper than me (unless I intrinsically didn't want that, or something like that). So maybe I have AI systems computing that for me and keeping me posted with advice while I do whatever I want.

But maybe I can still do work that is useful at the margin, as per (1), and so would probably do that. But what if even that wasn't worth the marginal caloric cost, and it was better to feed those calories into AI systems?

(2) is a bit complex, but probably(?) wouldn't impact the answer to the initial question much.

So, what would I do? I don't know. The main thing that comes to mind is to observe how the world unfolds (and listen to what the AGIs are telling me).

But maybe "AGI" shouldn't be defined as "aligned AGI". Maybe a better definition of AGI is like "outperforming humans at all games/tasks that are well defined" (ie. where humans don't have a comparative advantage just by knowing what humans value). In which case, my answer would be "alignment research" (assuming it's not "die").

Comment by Mati_Roy (MathieuRoy) on MIRI 2024 Communications Strategy · 2024-06-02T04:27:48.743Z · LW · GW

related: https://www.lesswrong.com/posts/JEhW3HDMKzekDShva/significantly-enhancing-adult-intelligence-with-gene-editing

Comment by Mati_Roy (MathieuRoy) on Mati_Roy's Shortform · 2024-05-28T03:07:29.427Z · LW · GW

imagine (maybe all of a sudden) we're able to create barely superhuman-level AIs aligned to whatever values we want at a barely subhuman-level operation cost

we might decide to have anyone able to buy AI agents aligned with their values

or we might (generally) think that giving access to that tech this way would be bad, but many companies are already incentivized to do it individually and can't all cooperate not to (and they actually reached this point gradually, previously selling near-human-level AIs)

then it seems like everyone/most people would start to run such an AI and give it access to all their resources--at which point that AI can decide what to do, whether that's investing in some companies and then paying themselves periodically, or investing in running more copies of themselves, etc., deciding when to use those resources for the human to consume vs reinvesting them

maybe people would wish for everyone to run AI systems with "aggregated human values" instead of their personal values, but given others aren't doing that, they won't either

now, intelligence isn't static anymore--presumably, the more money you have, the more intelligence you have, and the more intelligence the more money.

so let's say we suddenly have this tech and everyone is instantiating one such agent (which will make decisions about number and type of agents) that has access to all their resources

what happens?

maximally optimistic scenario: it's not too late to solve coordination, and it gets done easily and at a low cost. utopia

optimistic scenario: we don't substantially improve coordination, but our current coordination level is good enough for an Okay Outcome

pessimistic scenario: agents are incentivized to create subagents with other goals for instrumentally convergent purposes. defecting is better than cooperating individually, but defecting-defecting still leads to extremely bad outcomes (just not as bad as if you had cooperated in a population of defectors). those subagents quickly take over and kill all humans (those who cooperated are killed slightly sooner). or, not requiring misaligned AIs, maybe the aestivation hypothesis is true but we won't coordinate to delay energy consumption, or wars will use all surplus, leaving nothing for humans to consume

I'm not confident we're in an optimistic scenario. being able to download one's values and then load them into an AI system (and having initial conditions where that's all that happens) might not be sufficient for good outcomes

this is evidence for the importance of coordinating on how AGI systems get used, and that distributing that wealth/intelligence directly might not be the way to go. rather, it might be better to keep that intelligence concentrated and have some value/decision aggregation mechanism to decide what to do with it (rather than distributing it and later not being able to pool it back together if that's needed, which seems plausible to me)

a similar reasoning can apply to poverty alleviation: if you want to donate money to a group of people (say residents of a poor country) and you think they haven't solved their coordination problem, then maybe instead of distributing that money and letting them try to coordinate to put (part of) that money back into a shared pool for collective goods, you can just directly put that money in such a pool--the problem of figuring out the shared goal remains, but it at least arguably solves the problem of pooling that money (ex.: to fund research for a remedy to a disease affecting that population)

Comment by Mati_Roy (MathieuRoy) on Mati_Roy's Shortform · 2024-05-28T01:15:50.974Z · LW · GW

AI is improving exponentially with researchers having constant intelligence. Once the AI research workforce becomes itself composed of AIs, that constant will itself become exponential, which would make AI improve even faster (superexponentially?)

it doesn't need to be the scenario of a singular AI agent improving itself; it can be a large AI population participating in the economy and collectively improving AI as a whole, with various AI clans* focusing on different subdomains (EtA: for the main purpose of making money, and then using that money to buy tech/data/resources that will improve them)

*I want to differentiate between a "template NN" and its multiple instantiations, and maybe adopting the terminology from The Age of Em for that works well
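A toy sketch of that dynamic (the growth law here is an illustrative assumption of mine, not a claim about real systems): if capability grows in proportion to researcher effort times current capability, a constant workforce gives exponential growth, while a workforce that itself scales with capability gives hyperbolic growth, i.e. finite-time blow-up, faster than any exponential.

```python
# Toy model (an illustrative assumption, not a forecast) of the dynamic described above:
# capability grows in proportion to (researcher effort) x (current capability).
# With a fixed workforce that's exponential growth; if the workforce itself scales
# with AI capability, growth becomes hyperbolic (blows up in finite time).

def simulate(workforce, k=0.05, c0=1.0, dt=0.01, t_max=10.0):
    """Integrate dC/dt = k * workforce(C) * C with simple Euler steps."""
    c, t, trajectory = c0, 0.0, []
    while t < t_max and c < 1e12:          # stop if capability "explodes"
        trajectory.append((t, c))
        c += k * workforce(c) * c * dt
        t += dt
    return trajectory

human_era = simulate(lambda c: 10.0)       # constant research workforce -> exponential
ai_era    = simulate(lambda c: 10.0 * c)   # workforce proportional to capability -> hyperbolic

print(f"human-era capability at t={human_era[-1][0]:.1f}: {human_era[-1][1]:.2e}")
print(f"AI-era run already exceeded the cap at t={ai_era[-1][0]:.2f} (capability {ai_era[-1][1]:.2e})")
```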

Comment by Mati_Roy (MathieuRoy) on Cryonics signup guide #1: Overview · 2024-05-25T22:23:11.406Z · LW · GW

Oregon Brain Preservation is a solid organization offering a free option in the US: https://www.oregoncryo.com/services.html, and Cryonics Germany a free option in Europe: https://cryonics-germany.org/en/

Comment by Mati_Roy (MathieuRoy) on Mati_Roy's Shortform · 2024-05-21T00:20:12.047Z · LW · GW

Thanks for engaging with my post. I keep thinking about that question.

I'm not quite sure what you mean by "values and beliefs are perfectly correlated here", but I'm guessing you mean they are "entangled".

there is no test we could perform which would distinguish what it wants from what it believes.

Ah yeah, that seems true for all systems (at least if you can only look at their behaviors and not their mind); ref.: Occam’s razor is insufficient to infer the preferences of irrational agents. Summary: in principle, every possible value-system has a belief-system that can lead to any set of actions.

So, in principle, the cat classifier, looked at from the outside, could actually be a human mind wanting to live a flourishing human life, but with a decision-making process that's so wrong that the human does nothing but say "cat" when they see a cat, thinking this will lead them to achieve all their deepest desires.

I think the paper says noisy errors would cancel each other (?), but correlated errors wouldn't go away. One way to solve for them would be coming up with "minimal normative assumptions".

I guess that's as relevant to the "value downloading" problem as it is to the "value (up)loading" one. (I just coined the term “value downloading” to refer to the problem of determining human values, as opposed to the problem of programming values into an AI.)

The solution-space for determining the values of an agent at a high level seems to be (I'm sure that's too simplistic, and maybe even a bit confused, but just thinking out loud):

  • Look in their brain directly to understand their values (and maybe that also requires solving the symbol-grounding problem)
  • Determine their planner (ie. “decision-making process”) (ex.: using some interpretability methods), and determine their values from the policy and the planner
  • Make minimal normative assumptions about their reasoning errors and approximations to determine their planner from their behavior (/policy)
  • Augment them to make their planners flawless (I think your example fits into improving the planner by improving the image resolution--I love that thought 💡)
  • Ask the agent questions directly about their fundamental values which doesn't require any planning (?)

Approaches like “iterated amplification” correspond to some combination of the above.

But going back to my original question, I think a similar way to put it is that I wonder how complex the concept of "preferences''/"wanting" is. Is it a (messy) concept that's highly dependent on our evolutionary history (ie. not what we want, which definitely is, but the concept of wanting itself) or is it a concept that all alien civilizations use in exactly the same way as us? It seems like a fundamental concept, but can we define it in a fully reductionist (and concise) way? What’s the simplest example of something that “wants” things? What’s the simplest planner a wanting-thing can have? Is it no planner at all?

A policy seems well defined–it’s basically an input-output map. We’re intuitively thinking of a policy as a planner + an optimization target, so if either of the latter 2 can be defined robustly, then it seems like we should be able to define the other as well. Although, maybe for a given planner or optimization target there are many possible optimization targets or planners to get a given policy, but maybe Occam’s razor would be helpful here.
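Here's a minimal toy construction of my own (not from the paper cited above) illustrating that underdetermination: the same observable policy is reproduced by two different (planner, values) pairs, one rational and one anti-rational, so no purely behavioral test distinguishes them.

```python
# A minimal illustration (my own toy construction) of the point above: the same
# observable policy is consistent with different (planner, values) decompositions,
# so behavior alone can't pick one out.

def policy(observation: str) -> str:
    """The only thing we can observe from the outside."""
    return "cat" if "cat" in observation else "not cat"

# Decomposition 1: values correct cat labels, planner rationally maximizes them.
values_1  = lambda obs, act: 1 if act == ("cat" if "cat" in obs else "not cat") else 0
planner_1 = lambda obs, values: max(["cat", "not cat"], key=lambda a: values(obs, a))

# Decomposition 2: values the *opposite* labels, planner anti-rationally minimizes them.
values_2  = lambda obs, act: 1 if act == ("not cat" if "cat" in obs else "cat") else 0
planner_2 = lambda obs, values: min(["cat", "not cat"], key=lambda a: values(obs, a))

for obs in ["a photo of a cat", "a photo of a dog"]:
    assert planner_1(obs, values_1) == planner_2(obs, values_2) == policy(obs)
print("Both (planner, values) pairs reproduce the same policy on these inputs.")
```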

Relatedly, I also just read Reward is not the optimization target, which is relevant and overlaps a lot with ideas I wanted to write about (ie. neural-net-executors, not reward-maximizers, as a reference to Adaptation-Executers, not Fitness-Maximizers). A reward function R will only select a policy π that wants R if wanting R is the best way to achieve R in the environment the policy is being developed in. (I’m speaking loosely: technically not if it’s the “best” way, but just if it’s the way the weight-update function works.)

Anyway, that’s a thread that seems valuable to pull more. If you have any other thoughts or pointers, I’d be interested 🙂

Comment by Mati_Roy (MathieuRoy) on Let's split the cake, lengthwise, upwise and slantwise · 2024-05-09T05:11:28.962Z · LW · GW

thanks, it worked! https://web.archive.org/web/20150412211654/http://reducing-suffering.org/wp-content/uploads/2015/02/wild-animals_2015-02-28.pdf

Comment by Mati_Roy (MathieuRoy) on Mati_Roy's Shortform · 2024-05-09T05:10:46.698Z · LW · GW

i want a better conceptual understanding of what "fundamental values" means, and how to disentangle that from beliefs (ex.: in an LLM). like, is there a meaningful way we can say that a "cat classifier" is valuing classifying cats even though it sometimes fails?

Comment by Mati_Roy (MathieuRoy) on Mati_Roy's Shortform · 2024-05-02T21:26:06.282Z · LW · GW

when potentially ambiguous, I generally just say something like "I have a different model" or "I have different values"

Comment by Mati_Roy (MathieuRoy) on Mati_Roy's Shortform · 2024-04-27T19:09:21.825Z · LW · GW

it seems to me that disentangling beliefs and values is an important part of being able to understand each other

and using words like "disagree" to mean both "different beliefs" and "different values" is really confusing in that regard

Comment by Mati_Roy (MathieuRoy) on Mati_Roy's Shortform · 2024-04-18T21:56:27.888Z · LW · GW

topic: economics

idea: when building something with local negative externalities, have some mechanism to measure the externalities in terms of how much the surrounding property valuations changed (or are expected to change, say, based on a prediction market) and have the owner of that new structure pay the owners of the surrounding properties.
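A minimal sketch of the settlement rule that idea describes (names and numbers below are hypothetical; the valuations could come from appraisals or from a prediction market on post-construction prices):

```python
# A minimal sketch (names and inputs are hypothetical) of the settlement rule
# described above: the builder compensates each neighbour by the drop in that
# neighbour's property valuation attributed to the new structure.

def externality_payments(valuations_before: dict, valuations_after: dict) -> dict:
    """Return owner -> payment owed by the builder (0 if the valuation rose)."""
    return {
        owner: max(0.0, float(valuations_before[owner] - valuations_after[owner]))
        for owner in valuations_before
    }

before = {"12 Elm St": 300_000, "14 Elm St": 280_000, "16 Elm St": 310_000}
after  = {"12 Elm St": 285_000, "14 Elm St": 281_000, "16 Elm St": 290_000}

payments = externality_payments(before, after)
print(payments)                                # {'12 Elm St': 15000.0, '14 Elm St': 0.0, '16 Elm St': 20000.0}
print("total owed:", sum(payments.values()))   # 35000.0
```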

Comment by Mati_Roy (MathieuRoy) on Mati_Roy's Shortform · 2024-04-15T22:38:43.952Z · LW · GW

I wonder what fraction of people identify as "normies"

I wonder if most people have something niche they identify with and label people outside of that niche as "normies"

if so, then a term with a more objective perspective (and maybe better) would be non-<whatever your thing is>

like, athletic people could use "non-athletic" instead of "normies" for that class of people

Comment by Mati_Roy (MathieuRoy) on Mati_Roy's Shortform · 2024-04-14T17:37:17.295Z · LW · GW

just a loose thought, probably obvious

some tree species self-selected themselves for height (ie. there's no point in being a tall tree unless taller trees are blocking your sunlight)

humans were not the first species to self-select (for humans, the trait being intelligence) (although humans can now do it intentionally, which is a qualitatively different level of "self-selection")

on human self-selection: https://www.researchgate.net/publication/309096532_Survival_of_the_Friendliest_Homo_sapiens_Evolved_via_Selection_for_Prosociality

Comment by Mati_Roy (MathieuRoy) on What games are using the concept of a Schelling point? · 2024-04-14T17:14:26.194Z · LW · GW

Board game: Medium

2 players reveal a card with a word, then they each need to say a word based on that and they get points if they say the same word (basically; there are some more complexities).

Example at 1m20 here: https://youtu.be/yTCUIFCXRtw?si=fLvbeGiKwnaXecaX

Comment by Mati_Roy (MathieuRoy) on What games are using the concept of a Schelling point? · 2024-04-14T03:15:56.763Z · LW · GW

I'm glad past Mati cast a wider net, as the specifics for this year's Schelling day are different ☺️☺️

Comment by Mati_Roy (MathieuRoy) on San Francisco ACX Meetup “First Saturday” · 2024-02-03T23:09:14.267Z · LW · GW

idk if the events are often going over time, but I might pass by now if it's still happening ☺️

Comment by Mati_Roy (MathieuRoy) on When scientists consider whether their research will end the world · 2024-01-01T20:58:14.191Z · LW · GW

I liked reading your article; very interesting! 🙏

One point I figured I should x-post with our DMs 😊 --> IMO, if one cares about future lives (as much as present ones), then the question stops really being about expected lives and starts just being about whether an action increases or decreases x-risks. I think a lot/all of the tech you described also has a probability of causing an x-risk if it's not implemented. I don't think we can really determine whether the probability of some of those x-risks is low enough in absolute terms, as those probabilities would need to be unreasonably low, leading to full paralysis, and full paralysis could lead to x-risk. I think instead someone with those values (ie. caring about unborn people) should compare the probability of x-risks if a tech gets developed vs not developed (or whatever else is being evaluated). 🙂
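Written out as a decision rule (my notation, just restating the comparison above):

```latex
% Decision rule restating the comparison above (notation mine):
% develop technology T exactly when doing so lowers the overall probability of existential catastrophe.
\[
  \text{develop } T \iff P(\text{x-risk} \mid \text{develop } T) < P(\text{x-risk} \mid \text{don't develop } T)
\]
```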

Comment by Mati_Roy (MathieuRoy) on Cryonics signup guide #1: Overview · 2024-01-01T16:19:27.747Z · LW · GW

new, great, complementary post: Critical Questions about Patient Care in Cryonics and Biostasis

Comment by Mati_Roy (MathieuRoy) on Without - MicroFiction 250 words · 2023-12-13T05:46:41.837Z · LW · GW

I love this story so much, wow! It feels so incredibly tailored to me (because it is 😄). I value that a lot! It's a very scarce resource to begin with, but it hardly gets more tailored than that 😄

Comment by Mati_Roy (MathieuRoy) on Montreal cryonics and longevity hangout 2023-10-01 · 2023-10-24T03:12:46.262Z · LW · GW

that's awesome; thanks for letting me know :)

Comment by Mati_Roy (MathieuRoy) on Montreal cryonics and longevity hangout 2023-10-01 · 2023-10-23T12:29:51.173Z · LW · GW

i'd be curious to know how the first event went if you're inclined to share ☺

Comment by Mati_Roy (MathieuRoy) on Mati_Roy's Shortform · 2023-08-14T18:39:43.207Z · LW · GW

cars won't replace horses, horses with cars will

Comment by Mati_Roy (MathieuRoy) on What are examples of Rationalist posters or Rationalist poster ideas? · 2023-08-07T00:42:34.228Z · LW · GW

12 virtues of rationality: https://www.lesswrong.com/posts/DAFuuqpDcMztQGt7P/12-virtues-of-rationality-posters-icons?fbclid=IwAR2a8dbjJfl3hIpu0LTCttDzdraKcyyJ1eazs14Mr88hbXkuLjH4GPsWbyI

Comment by Mati_Roy (MathieuRoy) on Babble challenge: 50 ways of sending something to the moon · 2023-08-04T13:56:25.255Z · LW · GW

from ChatGPT

Sounds like a fun challenge! Here are 50 ways to send something to the moon:

1. Catapult
2. Giant crossbow
3. Balloon with an endless supply of helium
4. A giant seesaw
5. Sky elevator
6. Beam of light (if the object can ride light)
7. Teleportation device
8. Moon magnet
9. Whale blowhole
10. Bubble with a perfect vacuum inside
11. Tornado creator
12. Inflatable space tube
13. A jump by the Hulk
14. Sonic boom from a supersonic plane
15. Floating on a cloud machine
16. Warp drive
17. Ice cannon
18. Rocket rollercoaster
19. A super springboard
20. Fling via a giant trebuchet
21. Antigravity boots
22. Pneumatic tube system
23. Ride on a meteor
24. Build stairs to the moon
25. Giant hamster wheel that propels forward
26. Human pyramid
27. Bounce house to the moon
28. A moon hook and pull system
29. Jetpack
30. Superfast elevator
31. A gigantic yo-yo system
32. Umbrella carried by solar winds
33. Giant's throw
34. Rocket sneakers
35. Bungee cord to the moon
36. Space swing set
37. Moon vacuum
38. Space surfboard
39. Massive drone
40. Launch via space trebuchet
41. Space pogo stick
42. Inflatable space mountain
43. Magnetic repulsion system
44. Time travel to when the moon was closer
45. Huge space slingshot
46. Giant space slinky
47. An extremely powerful fan
48. A chain of trampolines
49. Magic carpet
50. Use a giant's bow and arrow
 

Comment by Mati_Roy (MathieuRoy) on Mati_Roy's Shortform · 2023-07-17T15:05:39.211Z · LW · GW

topics: AI, sociology

thought/hypothesis: when tech is able to create brains/bodies as good or better than ours, it will change our perception of ourselves: we won't be in a separate magisterium from our tools anymore. maybe people will see humans as less sacred, and value life less. if you're constantly using, modifying, copying, deleting, enslaving AI minds (even AI minds that have a human-like interface), maybe people will become more okay with doing that to human minds as well.

(which seems like it would be harmful for the purpose of reducing death)

Comment by Mati_Roy (MathieuRoy) on Catching the Eye of Sauron · 2023-07-17T14:50:26.899Z · LW · GW

I'm surprised this has this many upvotes. You're taking the person who contributed the most to warning humanity about AI x-risks, and saying what you think they could have done better, in a way that comes across as blamey to me. If you're blaming zir, you should probably blame everyone. I'd much rather you wrote about what people could have done in general rather than targeting one of the best contributors.

Comment by Mati_Roy (MathieuRoy) on Mati_Roy's Shortform · 2023-07-07T18:53:17.407Z · LW · GW

ok that's fair yeah! thanks for your reply. I'm guessing a lot of those historical quotes are also taken out of context actually.

Comment by Mati_Roy (MathieuRoy) on Mati_Roy's Shortform · 2023-07-07T15:54:40.687Z · LW · GW

you know those lists about historical examples of notable people mistakenly saying that some tech will not be useful (for example)

Elon Musk saying that VR is just a TV on your nose will probably become one of those ^^

https://youtube.com/shorts/wYeGVStouqw?feature=share

Comment by Mati_Roy (MathieuRoy) on Mati_Roy's Shortform · 2023-07-07T13:45:25.599Z · LW · GW

related concept: https://en.wikipedia.org/wiki/Information_panspermia

video on this that was posted ~15 hours ago: https://www.youtube.com/watch?v=K4Zghdqvxt4

Comment by Mati_Roy (MathieuRoy) on Mati_Roy's Shortform · 2023-07-04T15:34:52.906Z · LW · GW

idea: Stream all of humanity's information through the cosmos in the hope that an alien civ reconstructs us (and defends us against an Earth-originating misaligned ASI)

I guess finding intelligent ETs would help with that as we could stream in a specific direction instead of having to broadcast the signal broadly

It could be that misaligned alien ASIs would mostly ignore our information (or at least not use it to, like, torture us) whereas friendly aligned ASIs would use it beneficially 🤷‍♀️

Comment by Mati_Roy (MathieuRoy) on My current thoughts on the risks from SETI · 2023-06-28T16:56:50.817Z · LW · GW

there remains a credible possibility that grabby aliens would benefit by sending a message that was carefully designed to only be detectable by civilizations at a certain level of technological development

oh wow, after reading this, I came up with the same explanation you wrote in the following 2 paragraphs just before reading them 😄

Comment by Mati_Roy (MathieuRoy) on Agentic Mess (A Failure Story) · 2023-06-23T13:40:34.636Z · LW · GW

I really liked the story, and love that you made a video version! I think it was really well made!

I'm impressed by the AI voice!

I just suggested to AI Impacts to add this story to their story repository.

I recommend / suggest considering adding "Agentic Mess (A Failure Story)" in your list.

It was developed at the 8th AI Safety Camp in 2023.

You can see the text-version here: https://www.lesswrong.com/posts/LyJAFBuuEfd4kxgsw/agentic-mess-a-failure-story

You can see the video-version here: https://www.youtube.com/watch?app=desktop&v=6edrFdkCEUE

It starts pretty close to our current AI reality and explores the potentiality of AI agents replicating and trying to improve in order to achieve their goal, and, as a result, propagating like a virus. The story explores the selection pressure that would bring and the results that would have.

Comment by Mati_Roy (MathieuRoy) on What fraction of words written/read are AI-written? · 2023-06-18T02:10:00.442Z · LW · GW

Thanks for your input :)

Comment by Mati_Roy (MathieuRoy) on Who are the Experts on Cryonics? · 2023-05-28T19:35:01.645Z · LW · GW

idk what CLARITY is, but yeah, I'd love to see room temperature preservation protocols developed for human brain preservation. it also has the possibility of significantly reducing cost given a significant fraction of the cost goes towards paying for indefinite liquid nitrogen refills

Nectome is working on aldehyde-stabilized cryopreservation for humans, which I think might provide some of those benefits (?). OregonCryo is also trying to do / doing something like that.

i know another researcher working on this who could probably use funding in the near future. if any of you know someone who might be interested in funding this, please lmk so I can put you in touch. i think this is one of the top opportunities for improving cryonics robustness and adoption (and maybe quality)

Comment by Mati_Roy (MathieuRoy) on Mati_Roy's Shortform · 2023-05-08T22:10:34.268Z · LW · GW

topic: intellectual discussion, ML tool, AI x-risks

Idea: Have a therapist present during intellectual debate to notice triggers, and help defuse them. Triggers activate a politics mindset where the goal becomes focused on status/self-preservation/appearances/looking smart/making the other person look stupid/etc. which makes it hard to think clearly.

Two people I follow will soon have a debate on AI x-risks, which made me think of that. I can't really propose that intervention though, because it will likely be perceived and responded to as if it were a political move itself.

Another idea I had recently, also inspired by one of those people, was to develop a neural network that helps us notice when we're activated in that way, so we become aware of it and can defuse it. AI is too important for our egos to get in the way (but it's easier said than done).

x-post Facebook

Comment by Mati_Roy (MathieuRoy) on Mati_Roy's Shortform · 2023-04-30T15:31:06.063Z · LW · GW

Topics: cause prioritization; metaphor

note I took on 2022-08-01; I don't remember what I had in mind, but I feel like it can apply to various things

from a utilitarian point of view though, i think this is almost like arguing whether dying with a red or blue shirt is better; while there might be an answer, i think it's missing the point, and we should focus on reducing risks of astronomical disasters

Comment by Mati_Roy (MathieuRoy) on Mati_Roy's Shortform · 2023-04-30T15:17:27.854Z · LW · GW

Topics: AI, forecasting, privacy

I wonder how much of a signature we leave in our writings. Like, how hard would it be for an AI to be rather confident I wrote this text? (say if it was trained on LessWrong writings, or all public writings, or maybe even private writings) What if I ask someone else to write an idea for me--how helpful is it in obfuscating the source?
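A rough sketch of how one might test this with off-the-shelf tools (scikit-learn assumed available; the corpus, labels, and snippet below are placeholders, not real data): train a simple stylometry classifier on texts of known authorship and check how confidently it attributes a new snippet.

```python
# A rough sketch (corpus and labels are placeholders) of the question above:
# train a simple stylometry classifier on texts of known authorship and see how
# confidently it attributes a held-out snippet.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# In a real test these would be many documents per author (e.g. scraped LessWrong
# posts and comments); here they are tiny stand-ins.
texts  = ["i think cryonics is underrated ...", "the market clearly priced this in ...",
          "idk, seems fine to me 🤷 ...", "we estimate the parameter as follows ..."]
labels = ["mati", "other", "mati", "other"]

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # character n-grams capture style
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

snippet = "idk if the argument works out, but seems worth checking 🤷"
print(dict(zip(model.classes_, model.predict_proba([snippet])[0])))
```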

Comment by Mati_Roy (MathieuRoy) on Mati_Roy's Shortform · 2023-04-25T02:43:26.350Z · LW · GW

Topic: AI strategy (policies, malicious use of AI, AGI misalignment)

Epistemic status: simplistic; simplified line of reasoning; thinking out loud; a proposed frame

A significant "warning shot" from a sovereign misaligned AI doesn't seem likely to me because a human-level (and plausibly a subhuman-level) intelligence can both 1) learn deception, yet 2) can't (generally) do a lot of damage (i.e. perceptible for humanity). So the last "warning shot" before AI learns deception won't be very big (if even really notable at all), and then a misaligned agent would hide (its power and/or intentions) until it's confident it can overpower humanity (because it's easy to gain power that way)--at which point it would cause an omnicide. An exception to that is if an AI thinks other AIs are hiding in the world, then it might want to take a higher risk to overpower humanity before it's confident it can do so because it's concerned another AI will do so first otherwise. I'm not very hopeful this would give us a good warning shot though because I think multiple such AIs trying to overpower humanity would likely be too damaging for us to regroup in time.

However, it seems much more plausible to me that (non-agentic) AI tools would be used maliciously, which could lead the government to highly regulate AIs. Those regulations (ex.: nationalizing AI) preventing malicious uses could also potentially help with negligent uses. Assuming a negligent use (i.e. resulting in AGI misalignment) is much more likely to cause an existential catastrophe than a malicious use of AI, and that regulations against malicious uses are more memetically fit, then the ideal regulations to advocate for might be those that are good at preventing both malicious uses and the negligent creation of a misaligned AGI.

note to self: not posted on Facebook (yet)