Mati_Roy's Shortform
post by Mati_Roy (MathieuRoy) · 2020-03-17T08:21:41.287Z · LW · GW · 175 comments
Comments sorted by top scores.
comment by Mati_Roy (MathieuRoy) · 2024-04-27T19:09:21.825Z · LW(p) · GW(p)
it seems to me that disentangling beliefs and values is an important part of being able to understand each other
and using words like "disagree" to mean both "different beliefs" and "different values" is really confusing in that regard
Replies from: Viliam↑ comment by Viliam · 2024-04-27T23:19:03.857Z · LW(p) · GW(p)
Let's use "disagree" vs "dislike".
Replies from: MathieuRoy↑ comment by Mati_Roy (MathieuRoy) · 2024-05-02T21:26:06.282Z · LW(p) · GW(p)
when potentially ambiguous, I generally just say something like "I have a different model" or "I have different values"
comment by Mati_Roy (MathieuRoy) · 2024-04-14T17:37:17.295Z · LW(p) · GW(p)
just a loose thought, probably obvious
some tree species self-selected for height (ie. there's no point in being a tall tree unless taller trees are blocking your sunlight)
humans were not the first species to self-select (for humans, the trait being intelligence) (although humans can now do it intentionally, which is a qualitatively different level of "self-selection")
on human self-selection: https://www.researchgate.net/publication/309096532_Survival_of_the_Friendliest_Homo_sapiens_Evolved_via_Selection_for_Prosociality
comment by Mati_Roy (MathieuRoy) · 2020-12-18T14:21:06.672Z · LW(p) · GW(p)
Litany of Tarski for instrumental rationality 😊
If it's useful to know whether the box contains a diamond,
I desire to figure out whether the box contains a diamond;
If it's not useful to know whether the box contains a diamond,
I desire to not spend time figuring out whether the box contains a diamond;
Let me not become attached to curiosities I may not need.
comment by Mati_Roy (MathieuRoy) · 2024-09-26T18:09:24.520Z · LW(p) · GW(p)
Is the opt-in button for Petrov Day a trap? Kinda scary to press on large red buttons 😆
Replies from: TourmalineCupcakes, damiensnyder↑ comment by DiamondSolstice (TourmalineCupcakes) · 2024-09-26T22:28:19.768Z · LW(p) · GW(p)
Last year, I checked LessWrong on the 27th, and found a message that told me that nobody, in fact, had pressed the red button.
When I saw the red button today, it took me about five minutes to convince myself to press it. The "join the Petrov Game" message gave me confidence, and after I pressed it, there was no bright red message with the words "you nuked it all".
So no, not a trap. At least not in that sense - it adds you to a bigger trap, because once pressed, the button cannot be unpressed.
↑ comment by damiensnyder · 2024-09-26T18:39:53.104Z · LW(p) · GW(p)
it's not
comment by Mati_Roy (MathieuRoy) · 2024-09-29T16:48:40.120Z · LW(p) · GW(p)
here's my new fake-religion, taking just-world bias to its full extreme
the belief that we're simulations and we'll be transcended to Utopia in 1 second, because a future civilisation is creating many simulations of all possible people in all possible contexts and then uploading them to Utopia, so that from anyone's perspective you have a very high probability of transcending to Utopia in 1 second
^^
comment by Mati_Roy (MathieuRoy) · 2023-08-14T18:39:43.207Z · LW(p) · GW(p)
cars won't replace horses, horses with cars will
comment by Mati_Roy (MathieuRoy) · 2020-11-19T22:02:13.803Z · LW(p) · GW(p)
There's the epistemic discount rate (ex.: probability of simulation shut down per year) and the value discount (ex.: you do the funner things first, so life is less valuable per year as you become older).
Asking "What value discount rate should be applied" is a category error. "should" statements are about actions done towards values, not about values themselves.
As for "What epistemic discount rate should be applied", it depends on things like "probability of death/extinction per year".
comment by Mati_Roy (MathieuRoy) · 2021-12-24T09:11:51.660Z · LW(p) · GW(p)
I'm helping Abram Demski with making the graphics for the AI Safety Game (https://www.greaterwrong.com/posts/Nex8EgEJPsn7dvoQB/the-ai-safety-game-updated [LW · GW])
We'll make a version using https://app.wombo.art/. We have generated multiple possible artwork for each card and made a pre-selection, but we would like your input for the final selection.
You can give your input through this survey: https://forms.gle/4d7Y2yv1EEXuMDqU7 Thanks!
comment by Mati_Roy (MathieuRoy) · 2021-06-14T17:06:26.464Z · LW(p) · GW(p)
In the book Superintelligence, box 8, Nick Bostrom says:
How an AI would be affected by the simulation hypothesis depends on its values. [...] consider an AI that has a more modest final goal, one that could be satisfied with a small amount of resources, such as the goal of receiving some pre-produced cryptographic reward tokens, or the goal of causing the existence of forty-five virtual paperclips. Such an AI should not discount those possible worlds in which it inhabits a simulation. A substantial portion of the AI’s total expected utility might derive from those possible worlds. The decision-making of an AI with goals that are easily resource-satiable may therefore—if it assigns a high probability to the simulation hypothesis—be dominated by considerations about which actions would produce the best result if its perceived world is a simulation. Such an AI (even if it is, in fact, not in a simulation) might therefore be heavily influenced by its beliefs about which behaviors would be rewarded in a simulation. In particular, if an AI with resource-satiable final goals believes that in most simulated worlds that match its observations it will be rewarded if it cooperates (but not if it attempts to escape its box or contravene the interests of its creator) then it may choose to cooperate. We could therefore find that even an AI with a decisive strategic advantage, one that could in fact realize its final goals to a greater extent by taking over the world than by refraining from doing so, would nevertheless balk at doing so.
-
If the easily resource-satiable goals are persistent through time (ie. the AI wants to fulfill them for the longest period of time possible), then the AI will either try to keep the simulation running for as long as possible (and so not grab its universe) or try to escape the simulation.
-
If the easily resource-satiable goals are NOT persistent through time (ie. once the AI has created the 45 virtual paperclips, it doesn't matter if they get deleted, the goal has already been achieved), then once the AI has created the 45 paperclips, it has nothing to lose by grabbing more resources (gradually, until it has grabbed the Universe), but it has something to gain, namely: a) increasing its probability (arbitrarily close to 100%) that it did in fact achieve its goal, through further experiment and reasoning (ie. because it could be mistaken about having created 45 virtual paperclips), and b) if it didn't, remedying that.
comment by Mati_Roy (MathieuRoy) · 2020-03-30T08:58:13.996Z · LW(p) · GW(p)
EtA: moved to a question: https://www.lesswrong.com/posts/zAwx3ZTaX7muvfMrL/why-do-we-have-offices [LW · GW]
Why do we have offices?
They seem expensive, and not useful for jobs that can apparently be done remotely.
Hypotheses:
- Social presence of other people working: https://www.focusmate.com/
- Accountability
- High bandwidth communication
- Meta communication (knowing who's available to talk to)
- Status quo bias
status: to integrate
Replies from: Dagon, mr-hire, Viliam↑ comment by Dagon · 2020-03-30T15:30:39.075Z · LW(p) · GW(p)
- Employee focus (having punctuated behaviors separating work from personal time)
- Tax advantages for employers to own workspaces and fixtures rather than employees
- Not clear that "can be done remotely" is the right metric. We won't know if "can be done as effectively (or more effectively) remotely" is true for some time.
↑ comment by Mati_Roy (MathieuRoy) · 2020-03-31T01:02:57.711Z · LW(p) · GW(p)
thanks for your comment!
I just realized I should have used the question feature instead; here it is: https://www.lesswrong.com/posts/zAwx3ZTaX7muvfMrL/why-do-we-have-offices [LW · GW]
↑ comment by Matt Goldenberg (mr-hire) · 2020-03-30T16:49:25.793Z · LW(p) · GW(p)
Increased sense of relatedness seems a big one missed here.
Replies from: MathieuRoy↑ comment by Mati_Roy (MathieuRoy) · 2020-03-31T01:03:13.120Z · LW(p) · GW(p)
thanks for your comment!
I just realized I should have used the question feature instead; here it is: https://www.lesswrong.com/posts/zAwx3ZTaX7muvfMrL/why-do-we-have-offices [LW · GW]
↑ comment by Viliam · 2020-03-30T20:14:48.266Z · LW(p) · GW(p)
High status feels better when you are near your subordinates (when you can watch them, randomly disrupt them, etc.). High-status people make the decision whether remote work is allowed or not.
Replies from: MathieuRoy↑ comment by Mati_Roy (MathieuRoy) · 2020-03-31T01:04:07.536Z · LW(p) · GW(p)
thanks for your comment!
I just realized I should have used the question feature instead; here it is: https://www.lesswrong.com/posts/zAwx3ZTaX7muvfMrL/why-do-we-have-offices [LW · GW]
comment by Mati_Roy (MathieuRoy) · 2024-05-28T01:15:50.974Z · LW(p) · GW(p)
AI is improving exponentially while researchers have constant intelligence. Once the AI research workforce is itself composed of AIs, that constant becomes exponential too, which would make AI improve even faster (superexponentially?)
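A toy numerical sketch of that point (the functional forms and constants are made-up assumptions, not a forecast):

```python
# Toy model: capability C grows at a rate proportional to the intelligence of
# the research workforce. With human researchers that intelligence is constant
# (ordinary exponential growth); once the workforce is itself AI, the rate
# scales with C, giving faster-than-exponential (finite-time blow-up) growth.
def simulate(years=10.0, dt=0.01, r=0.5, ai_workforce=False):
    c, t = 1.0, 0.0
    while t < years:
        workforce_intelligence = c if ai_workforce else 1.0
        c += r * workforce_intelligence * c * dt
        t += dt
        if c > 1e12:  # the AI-workforce case diverges in finite time
            break
    return t, c

print(simulate(ai_workforce=False))  # C ~ e^(r*t): plain exponential
print(simulate(ai_workforce=True))   # blows up around t ~ 1/(r*C0) = 2 years
```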
it doesn't need to be the scenario of a singular AI agent improving itself; it can be a large AI population participating in the economy and collectively improving AI as a whole, with various AI clans* focusing on different subdomains (EtA: for the main purpose of making money, and then using that money to buy tech/data/resources that will improve them)
*I want to differentiate between a "template NN" and its multiple instantiations; maybe adopting the terminology from The Age of Em works well for that
comment by Mati_Roy (MathieuRoy) · 2021-10-23T07:46:50.020Z · LW(p) · GW(p)
One model / framing / hypothesis of my preferences is that:
I wouldn't / don't value living in a loop multiple times* because there's nothing new experienced. So even an infinite life, in the sense of looping an infinite number of times, has finite value. Actually, it has the same value as one pass through the loop: after 1 loop, marginal loops have no value. (Intuition pump: from within the loop, you can’t tell how many times you’ve been going through the loop so far.)
*explanation of a loop: at some point in the future my life becomes indistinguishable from a previous state, and my life replays exactly the same way from that point onwards
Also, for me to consider a life infinite, it’s important that identity be preserved, not just continuity. Among other things, that means past memories need to be preserved (or at least integrated in some ways), which means that the information content of the person will keep growing, and so the amount of matter needed to encode the information will also keep growing (assuming a fixed finite information-to-matter ratio) (ie. Being immortal means you will one day be a Jupiter brain).
Given that I (claim that I) prefer an immortal life of torture over any finite life, people tend to try to come up with scenarios that would make me change my mind. The problem with maximum torture is that it looks static / it might be an experience with a very short loop (0 to a few seconds), so it doesn’t actually constitute immortality. To be considered immortality (by me, under my view of identity), the torture would need to be remembered, and so the subjective experience different over time (although still always extremely painful of course).
My thinking was that any infinite life (defined in a way where a loop's size is not infinite, but just the size of the loop) had infinite value (to me).
But now I’m thinking maybe there are non-loop infinite lives that also have finite value. Like, maybe counting the natural numbers for eternity has a finite value because you're just executing the same simple pattern, and not executing new patterns. Executing this same function is loop-like at a higher level of abstraction. So while (/ just like) a loop has 0 marginal value, maybe those more abstract loops also have diminishing returns, to the point where the integral of their value over infinity also converges (ie. experiencing them forever has finite value).
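One way to make that convergence intuition precise (my own framing, just a sketch): if the $n$-th quasi-repetition of a pattern only adds value $v_0 r^{n}$ for some $0 \le r < 1$, then an infinite life of such repetitions is worth $\sum_{n=0}^{\infty} v_0 r^{n} = \frac{v_0}{1-r}$, which is finite; an exact loop is the limiting case $r = 0$, where everything after the first pass adds nothing.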
Note: This would still mean I prefer some form of everlasting torture to any finite life, it just means the torture would need to be more creative.
Just a thought I had while writing my last post. I haven't evaluated it much yet. But this is pretty far from my views up-to-now.
x-post: https://www.facebook.com/groups/deathfocusethics/posts/2113683138801222/
Replies from: avturchin, TLW↑ comment by avturchin · 2021-10-23T20:53:57.167Z · LW(p) · GW(p)
Agree. To be long, sufferings need to be diverse. Like maximum suffering is 1111111, but diverse sufferings are: 1111110, 1111101, 1111011, 1110111 etc, so they take time. More diverse sufferings are less intense, and infinite suffering needs to be of relatively low intensity. Actually, in "I Have No Mouth, and I Must Scream", the evil AI invests a lot in making suffering diverse. But diverse is interesting, so not that bad.
Also, any loop in experience will happen subjectively only once. Imagine classical eternal return: no matter how many times I live my life, my counter of lives will stay at 1.
↑ comment by TLW · 2021-10-24T19:26:35.079Z · LW(p) · GW(p)
One more formal method of describing much of this might be the Kolmogorov complexity of the state of your consciousness over the timeframe. (So outputting t=0: state=blah; t=1: state=blah, etc).
This has many of the features you are looking for.
This guides me to an interesting question: is looping in an infinite featureless plain of flat white any worse than looping in an infinite featureless plain of random visual noise?
(Of course, this is both noncomputable and has a nontrivial chance that the Turing Machine attaining the Kolmogorov complexity is itself simulating you, but meh. Details.)
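Kolmogorov complexity itself isn't computable, but compressed length is a crude computable proxy; a minimal sketch of the comparison in that question (the encoding and compressor are arbitrary choices):

```python
import random
import zlib

def description_length(states):
    """Crude proxy for the Kolmogorov complexity of a trace of conscious
    states: length of the zlib-compressed string "t=0: state=...; t=1: ..."."""
    blob = "".join(f"t={t}: state={s}; " for t, s in enumerate(states)).encode()
    return len(zlib.compress(blob, level=9))

random.seed(0)
flat_white = ["uniform white"] * 1000                               # featureless plain
noise = [format(random.getrandbits(64), "x") for _ in range(1000)]  # random visual noise

print(description_length(flat_white))  # small: the trace is highly compressible
print(description_length(noise))       # large: the trace is barely compressible
```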
comment by Mati_Roy (MathieuRoy) · 2020-12-09T08:43:25.772Z · LW(p) · GW(p)
In my mind, "the expert problem" means the problem of being able to recognize experts without being one, but I don't know where this idea comes from as the results from a Google search don't mention this. What name is used to refer to that problem (in the literature)?
x-post: https://www.facebook.com/mati.roy.09/posts/10159081618379579
comment by Mati_Roy (MathieuRoy) · 2020-09-15T04:08:52.073Z · LW(p) · GW(p)
Epistemic status: thinking outloud
The term "weirdness points" puts a different framing on the topic.
I'm thinking maybe I/we should also do this for "recommendation points".
The amount I'm willing to bet depends both on how important it seems to me and how likely I think the other person will appreciate it.
The way I currently try to spend my recommendation points is pretty fat-tailed, because I see someone's attention as scarce, so I want to keep it for things I think are really important, and the importance I assign to information is pretty fat-tailed. I'll sometimes say something like "I think you'll like this, but I don't want to bet reputation".
But maybe I'm too averse. Maybe if I make small bets (in terms of recommendation points), it'll increase my budget for future recommendations. Or maybe I should differentiate probability the person will appreciate the info and how much they might appreciate the info. And maybe people will update on my probabilities whether or not I say I want to bet reputation.
comment by Mati_Roy (MathieuRoy) · 2020-04-27T05:59:25.371Z · LW(p) · GW(p)
current intuitions for personal longevity interventions in order of priority (cryo-first for older people): sleep well, lifelogging, research mind-readers, investing to buy therapies in the future, staying home, sign up for cryo, paying a cook / maintaining a low weight, hiring Wei Dai to research modal immortality, paying a trainer, preserving stem cells, moving near a cryo facility, having someone watch you to detect when you die, funding plastination research
EtA: maybe lucid dreaming to remember your dreams; some drugs (bacopa?) to improve memory retention
also not really important in the long run, but sleeping less to experience more
Replies from: william-walker↑ comment by William Walker (william-walker) · 2020-04-28T22:38:44.714Z · LW(p) · GW(p)
NAD+ boosting (NR now, keep an eye on NRH for future).
CoQ10, NAC, keep D levels up in winter.
Telomerase activation (Centella asiatica, astragalus, eventually synthetics if Sierra Sciences gets its TRAP screen funded again or if the Chinese get tired of waiting on US technology...)
NR, C, D, Zinc for SARS-CoV-2 right now, if you're not already.
Become billionaire, move out of FDA zone, have some AAV-vector gene modifications... maybe some extra p53 copies, like the Pachyderms? Fund more work on Bowhead Whale comparative genetics. Fund a company to commercially freeze and store transplant organs, to perfect a freezing protocol (I've seen Alcor's...)
Main thing we need is a country where it's legal and economically possible to develop and sell anti-agathic technology... even a billionaire can't substitute for the whole market.
comment by Mati_Roy (MathieuRoy) · 2024-05-09T05:10:46.698Z · LW(p) · GW(p)
i want a better conceptual understanding of what "fundamental values" means, and how to disentangle that from beliefs (ex.: in an LLM). like, is there a meaningful way we can say that a "cat classifier" is valuing classifying cats even though it sometimes fails?
Replies from: cubefox↑ comment by cubefox · 2024-05-09T06:29:42.609Z · LW(p) · GW(p)
I guess for a cat classifier, disentanglement is not possible, because it wants to classify things as cats if and only if it believes they are cats. Since values and beliefs are perfectly correlated here, there is no test we could perform which would distinguish what it wants from what it believes.
Though we could assume we don't know what the classifier wants. If it doesn't classify a cat image as "yes", it could be because it is (say) actually a dog classifier, and it correctly believes the image contains something other than a dog. Or it could be because it is indeed a cat classifier, but it mistakenly believes the image doesn't show a cat.
One way to find out would be to give the classifier an image of the same subject, but in higher resolution or from another angle, and check whether it changes its classification to "yes". If it is a cat classifier, it is likely it won't make the mistake again, so it probably changes its classification to "yes". If it is a dog classifier, it will likely stay with "no".
This assumes that mistakes are random and somewhat unlikely, so will probably disappear when the evidence is better or of a different sort. Beliefs react to such changes in evidence, while values don't.
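A minimal sketch of that test (the `classify` function and the images are hypothetical placeholders; the assumption, as above, is that belief errors are noisy and shrink with better evidence while values stay fixed):

```python
def looks_like_a_cat_valuer(classify, low_res_img, high_res_img):
    """classify(img) -> True iff the model answers "yes" on that image.
    Both images show the same cat; only the quality of the evidence differs."""
    low, high = classify(low_res_img), classify(high_res_img)
    if not low and high:
        # "no" flipped to "yes" once the evidence improved: consistent with a
        # cat-valuing system that merely had a mistaken belief on the low-res image.
        return True
    if not low and not high:
        # stable "no" even with good evidence: more consistent with different
        # values (e.g. a dog classifier) than with a mistaken cat classifier.
        return False
    return True  # it already said "yes" on the low-res image
```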
Replies from: MathieuRoy↑ comment by Mati_Roy (MathieuRoy) · 2024-05-21T00:20:12.047Z · LW(p) · GW(p)
Thanks for engaging with my post. I keep thinking about that question.
I'm not quite sure what you mean by "values and beliefs are perfectly correlated here", but I'm guessing you mean they are "entangled".
there is no test we could perform which would distinguish what it wants from what it believes.
Ah yeah, that seems true for all systems (at least if you can only look at their behaviors and not their mind); ref.: Occam’s razor is insufficient to infer the preferences of irrational agents. Summary: In principle, any possible value-system has some belief-system that can lead to any given set of actions.
So, in principle, the cat classifier, looked at from the outside, could actually be a human mind wanting to live a flourishing human life, but with a decision-making process that's so wrong that the human does nothing but say "cat" when they see a cat, thinking this will lead them to achieve all their deepest desires.
I think the paper says noisy errors would cancel each other (?), but correlated errors wouldn't go away. One way to solve for them would be coming up with "minimal normative assumptions".
I guess that's as relevant to the "value downloading" as it is to the "value (up)loading [LW · GW]". (I just coined the term “value downloading” to refer to the problem of determining human values, as opposed to the problem of programming values into an AI.)
The solution-space for determining the values of an agent at a high-level seems to be (I'm sure that's too simplistic, and maybe even a bit confused, but just thinking out loud):
- Look in their brain directly to understand their values (and maybe that also requires solving the symbol-grounding problem)
- Determine their planner (ie. “decision-making process”) (ex.: using some interpretability methods), and determine their values from the policy and the planner
- Make minimal normative assumptions about their reasoning errors and approximations to determine their planner from their behavior (/policy)
- Augment them to make their planners flawless (I think your example fits into improving the planner by improving the image resolution--I love that thought 💡)
- Ask the agent questions directly about their fundamental values which doesn't require any planning (?)
Approaches like “iterated amplification” correspond to some combination of the above.
But going back to my original question, I think a similar way to put it is that I wonder how complex the concept of "preferences"/"wanting" is. Is it a (messy) concept that's highly dependent on our evolutionary history (ie. not what we want, which definitely is, but the concept of wanting itself), or is it a concept that all alien civilizations use in exactly the same way as us? It seems like a fundamental concept, but can we define it in a fully reductionist (and concise) way? What’s the simplest example of something that “wants” things? What’s the simplest planner a wanting-thing can have? Is it no planner at all?
A policy seems well defined–it’s basically an input-output map. We’re intuitively thinking of a policy as a planner + an optimization target, so if either of those two can be defined robustly, then it seems like we should be able to define the other as well. Although maybe for a given policy there are many possible planner/optimization-target pairs that produce it; maybe Occam’s razor would be helpful here.
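A toy illustration of that non-identifiability (the gridworld and names are mine, purely for illustration): two different planner/values pairs that produce the exact same policy, so the policy alone can't tell them apart.

```python
# An agent on states {0, 1, 2} that always moves right. The same policy arises
# from (rational planner, "rightmost state is best") and from
# (anti-rational planner, "leftmost state is best").
STATES = [0, 1, 2]
ACTIONS = ["left", "right"]

def move(state, action):
    return min(2, state + 1) if action == "right" else max(0, state - 1)

def rational_planner(values, state):
    # picks the action leading to the highest-valued next state
    return max(ACTIONS, key=lambda a: values[move(state, a)])

def anti_rational_planner(values, state):
    # systematically picks the action leading to the lowest-valued next state
    return min(ACTIONS, key=lambda a: values[move(state, a)])

likes_right = {0: 0.0, 1: 0.5, 2: 1.0}
likes_left = {0: 1.0, 1: 0.5, 2: 0.0}

policy_a = {s: rational_planner(likes_right, s) for s in STATES}
policy_b = {s: anti_rational_planner(likes_left, s) for s in STATES}
print(policy_a == policy_b)  # True: identical behaviour, opposite values
```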
Relatedly, I also just read Reward is not the optimization target [LW · GW], which is relevant and overlaps a lot with ideas I wanted to write about (ie. neural-net-executors, not reward-maximizers, as a reference to Adaptation-Executers, not Fitness-Maximizers [LW · GW]). A reward function R will only select a policy π that wants R if wanting R is the best way to achieve R in the environment the policy is being developed in. (I’m speaking loosely: technically not if it’s the “best” way, but just if it’s the way the weight-update function works.)
Anyway, that’s a thread that seems valuable to pull more. If you have any other thoughts or pointers, I’d be interested 🙂
comment by Mati_Roy (MathieuRoy) · 2023-03-16T23:07:43.220Z · LW(p) · GW(p)
topic: AI
Lex Fridman:
I'm doing podcast with Sam Altman (@sama), CEO of OpenAI next week, about GPT-4, ChatGPT, and AI in general. Let me know if you have any questions/topic suggestions.
PS: I'll be in SF area next week. Let me know if there are other folks I should talk to, on and off the mic.
comment by Mati_Roy (MathieuRoy) · 2023-03-15T03:56:52.347Z · LW(p) · GW(p)
topic: lifelogging as life extension
if you think pivotal acts might require destroying a lot of hardware*, then worlds in which lifelogging as life extension is useful are more likely to require EMP-proof lifelogging (and if you think destroying a lot of hardware would increase x-risks, then it reduces the value of having EMP-proof lifelogs)
*there might be no point in destroying hard drives (vs GPUs), but some attacks, like EMPs, might not discriminate between the two
comment by Mati_Roy (MathieuRoy) · 2021-11-03T17:33:47.954Z · LW(p) · GW(p)
#parenting, #schooling, #policies
40% of hockey players selected in top tier leagues are born in the first quarter of the year (compared to 10% in the 3rd quarter) (whereas you'd expect 25% if the time of the year didn't have an influence)
The reason for that is that the cut-off date to join a hockey league as a kid is January 1st; so people born in January are the oldest on their team, and at a young age that makes a bigger difference (in terms of abilities), so they tend to be the best on their team, and so their coaches tend to make them play more and pay more attention to them, which in turn makes them even better, and so again in subsequent years they receive more attention, etc. (at least, that's the reason given in the video).
That's explained starting around 1:35 in the video: https://www.youtube.com/watch?v=3LopI4YeC4I
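A toy simulation of that feedback loop (all parameters are made up, just to show the compounding effect):

```python
import random

random.seed(1)
N, YEARS = 10_000, 10

# Cutoff is January 1st, so kids born in January are the oldest in their cohort.
kids = [{"month": random.randint(1, 12)} for _ in range(N)]
for k in kids:
    # initial skill edge from relative age, plus individual variation
    k["skill"] = (13 - k["month"]) / 12 + random.gauss(0, 0.3)

for _ in range(YEARS):
    kids.sort(key=lambda k: k["skill"], reverse=True)
    for rank, k in enumerate(kids):
        extra_coaching = 1.0 if rank < N // 4 else 0.5  # top quarter gets more ice time
        k["skill"] += extra_coaching + random.gauss(0, 0.2)

elite = sorted(kids, key=lambda k: k["skill"], reverse=True)[: N // 100]
q1_share = sum(k["month"] <= 3 for k in elite) / len(elite)
print(f"share of elite players born in Q1: {q1_share:.0%}")  # well above the 25% base rate
```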
I imagine there are many disanalogies with public schools, but that made me wonder how date of birth influences this for students.
https://www.today.com/.../study-kids-born-late-summer... says:
Data showed that the September-born children were 2.1 percent more likely to attend college compared to their August-born classmates. They also were 3.3 percent more likely to graduate from college, and 15.4 percent less likely to get into trouble with the law while underage.
But that's on average! I'd guess particularly talented kids might still benefit from being in more advanced classes (with hockey, you can still skate fast and do tricks with lower-level players; but in school I imagine it's harder to learn more advanced stuff if your peers aren't ready for it).
Also, that doesn't necessarily mean it's better to delay by a year kids born in August (who are on the fence between two school years), because this will delay all their income and investments (and possibly their life) by a year (I'd need to research this more). One of the people who wrote a paper on this suggests: "If you're on the fence, send them to kindergarten. If they struggle, then have them repeat kindergarten".
It also depends on the school; the article says:
Painter, director of USC's Sol Price School of Public Policy, pointed out that today's children have far more chances to advance academically thanks to auxiliary programs offered in reading, math or other subjects. Painter told TODAY that students who start kindergarten at age 6 are going to have “some natural advantages” including larger body size and more advanced social skills. But in the past, when schools didn’t offer many supplemental programs for higher-performing students, those advantages eventually leveled out as they advanced through school.
I was thinking:
- When I want children, should I plan to have them born around September / October?
I'm not sure yet, but I think it's only a small consideration. I'm thinking if the expected intelligence of the child is high-ish, then aiming for a few months after September might actually be good. It also depends on how schools function in your area (and whether you plan to send them to traditional schools in the first place).
- What would be the consequences if everyone tried to do this?
Attention to students might be more uniformly distributed, which I'd say is probably more likely good than not. However, I guess that might delay when people have children*, which might reduce population growth, which I think is bad (in this current world).
*Although I could see this guess being wrong
Another negative consequence is that a bunch of industries would become seasonal (delivering babies, producing diapers, etc.), with more demand during certain parts of the year, requiring a higher production capacity that wouldn't be used during the rest of the year (or alternatively, would produce goods that would be stored until the season arrived). That seems inefficient. (This could be partly mitigated by having different regions pick different dates for when to give birth, but that adds complexity.)
Overall, probably not worth doing. I like to imagine a few cities experimenting with this, but that might not be realistic.
↑ comment by ChristianKl · 2021-07-14T09:11:45.430Z · LW(p) · GW(p)
That suggests he has no idea whether it's actually a good vote (as this is how the person differs from other candidates) and just advocates for someone on the basis that the person is his friend.
Replies from: MathieuRoy↑ comment by Mati_Roy (MathieuRoy) · 2021-07-15T17:07:47.397Z · LW(p) · GW(p)
comment by Mati_Roy (MathieuRoy) · 2021-07-14T03:33:58.259Z · LW(p) · GW(p)
For the Cryonics Institute board elections, I recommend voting for Nicolas Lacombe.
I’ve been friends with Nicolas for over a decade. Ze’s very principled, well organized and hard working. I have high trust in zir, and high confidence ze would be a great addition to the CI's board.
I recommend you cast some or all of your votes for Nicolas (you can cast up to 4 votes total). If you’re signed up with CI, simply email info@cryonics.org with your votes.
see zir description here: https://www.facebook.com/mati.roy.09/posts/10159642542159579
Replies from: ChristianKl↑ comment by ChristianKl · 2021-07-14T07:41:16.133Z · LW(p) · GW(p)
principled, well organized and hard working.
What of those do you think isn't true for the other candidates?
Replies from: MathieuRoy↑ comment by Mati_Roy (MathieuRoy) · 2021-07-15T17:12:21.131Z · LW(p) · GW(p)
afaik, most board members are very passive, and haven't been doing the things Nicolas wants to do
comment by Mati_Roy (MathieuRoy) · 2021-05-25T16:52:47.868Z · LW(p) · GW(p)
i want to invest in companies that will increase in value if AI capabilities increase fast / faster than what the market predicts
do you have suggestions?
Replies from: ann-brown↑ comment by Ann (ann-brown) · 2021-05-25T17:14:37.685Z · LW(p) · GW(p)
Application-specific processing units, health care, agriculture?
Replies from: MathieuRoy↑ comment by Mati_Roy (MathieuRoy) · 2021-05-27T05:34:04.716Z · LW(p) · GW(p)
can you say more about agriculture?
Replies from: ann-brown↑ comment by Ann (ann-brown) · 2021-05-27T12:26:08.806Z · LW(p) · GW(p)
Agricultural robots exist, and more autonomous versions will benefit from AI in performing tasks currently more dependent on human labor (like careful harvesting) or provide additional abilities like scanning trees to optimize harvest time.
Related to whether faster AI progress would give a better price for the market, well, the market may currently be pricing in a relative shortage of human labor, and some of the efforts towards AI robots (in apples for example) have so far gone too slowly to be viable, so going faster than expected might shift the dynamic there.
↑ comment by Mati_Roy (MathieuRoy) · 2021-05-27T20:36:28.407Z · LW(p) · GW(p)
Thanks!
comment by Mati_Roy (MathieuRoy) · 2021-05-24T03:33:56.014Z · LW(p) · GW(p)
a feature i would like on content websites like YouTube and LessWrong is an option to mark a video/article as read, as a note to self (x-post fb)
Replies from: MakoYass, MathieuRoy↑ comment by mako yass (MakoYass) · 2021-06-07T06:49:13.401Z · LW(p) · GW(p)
Youtube lets you access your viewing history through your "library" (or in the web version, probably it's in the sidebar)
Replies from: MathieuRoy↑ comment by Mati_Roy (MathieuRoy) · 2021-06-12T22:12:59.221Z · LW(p) · GW(p)
thanks! yeah i know, but would like if it was more easily accessible whenever i watch a video:)
Replies from: MakoYass↑ comment by mako yass (MakoYass) · 2021-06-13T05:59:21.717Z · LW(p) · GW(p)
Are you actually looking for the "watch later" feature..
Replies from: MathieuRoy↑ comment by Mati_Roy (MathieuRoy) · 2021-06-14T16:54:09.750Z · LW(p) · GW(p)
No:)
↑ comment by Mati_Roy (MathieuRoy) · 2021-05-27T15:37:07.543Z · LW(p) · GW(p)
oh, LW actually has a bookmark feature, which i could use for that! although i prefer using it for articles i want to read
Replies from: bfinn
comment by Mati_Roy (MathieuRoy) · 2021-03-21T00:15:24.064Z · LW(p) · GW(p)
the usual story is that Governments provide public goods because Markets can't, but maybe Markets can't because Governments have secured a monopoly on them?
x-post: https://www.facebook.com/mati.roy.09/posts/10159360438609579
Replies from: matthew-barnett↑ comment by Matthew Barnett (matthew-barnett) · 2021-03-21T00:29:13.927Z · LW(p) · GW(p)
The standard example of a public good is national defense. In that case, you're probably right that the market can't provide it, since it would be viewed in competition with the government military, and therefore would probably be seen as a threat to national security.
For other public goods, I'm not sure why the government would have a monopoly. Scientific research is considered a public good, and yet the government doesn't put many restrictions on what types of science you can perform (with some possible exceptions like nuclear weapons research).
Wikipedia lists common examples of public goods. We could go through them one by one. I certainly agree that for some of them, your hypothesis holds.
Replies from: Viliam, MathieuRoy↑ comment by Viliam · 2021-03-21T14:12:03.858Z · LW(p) · GW(p)
Scientific research is considered a public good, and yet the government doesn't put many restrictions on what types of science you can perform
For the opposite perspective, see: https://slatestarcodex.com/2017/08/29/my-irb-nightmare/
Replies from: ChristianKl↑ comment by ChristianKl · 2021-03-21T21:37:11.816Z · LW(p) · GW(p)
I don't think the IRB that blocked Scott in that article was the government.
Replies from: Viliam↑ comment by Mati_Roy (MathieuRoy) · 2021-03-21T05:14:39.792Z · LW(p) · GW(p)
ah, yeah, you're right! thank you
comment by Mati_Roy (MathieuRoy) · 2020-11-17T05:36:52.981Z · LW(p) · GW(p)
Suggestion for retroactive prizes: Give the prize to the most undervalued post on the topic, whenever it was written, assuming the writer is still alive or cryopreserved (given money is probably not worth much to most dead people). "undervalued" meaning the amount the post is worth minus the amount the writer received.
comment by Mati_Roy (MathieuRoy) · 2020-10-04T20:51:56.934Z · LW(p) · GW(p)
Topic: Can we compute back the Universe to revive everyone?
Quality / epistemic status: I'm just noting this here for now. Language might be a bit obscure, and I don't have a super robust/formal understanding of this. Extremely speculative.
This is a reply to: https://www.reddit.com/r/slatestarcodex/comments/itdggr/is_there_a_positive_counterpart_to_red_pill/g5g3y3a/?utm_source=reddit&utm_medium=web2x&context=3
The map can't be larger than the territory. So you need a larger territory to scan your region of interest: your scanner can't scan itself. The region of interest is the future info-cone of the civilisation you want to compute back. Your reachable-cone is strictly smaller than a past info-cone. So it seems impossible to compute it back. Except:
a) If you prevent too much info from leaking into the broader info-cone (ex.: by cryopreserving a brain, the info stays in one place instead of chaotically spreading through the world).
b) If the Universe collapses on itself, then maybe the reachable-cone becomes the same size as the info-cone, and maybe at that point you can safely exit the sphere of interest with the info leaking, but it seems unlikely; it seems like a brain destroyed ten thousand years ago might now be spread over the whole world (ie. needing the whole world to be computed back; or at least, not being able to know which parts you don't need in order to compute it back).
c) An alien civilisation outside the info-cone might be able to scan it entirely during a collapse of the universe.
d) Our simulators would be able to compute it backwards.
But actually, b) and c) might not work because the leaking also happens across Everett branches, so the info-cone has one more dimension which might never become reachable. In other words, our future would have multiple possible pasts. To truly compute it back, we would need to compute back from each Everett branch at the same time or something like that AFAIU.
Replies from: avturchin↑ comment by avturchin · 2020-10-05T10:04:21.854Z · LW(p) · GW(p)
There are several other (weird) ideas to revive everyone:
- send a nanobot via wormhole into the past; it will replicate and collect all data about the brains
- find a way to travel between Everett branches. For every person there is a branch where they are cryopreserved.
- use a quantum randomness generator to generate every possible mind in different Everett branches
comment by Mati_Roy (MathieuRoy) · 2020-09-20T05:54:38.868Z · LW(p) · GW(p)
Topic: AI adoption dynamic
GPT-3:
- fixed cost: 4.6M USD
- variable cost: 790 requests/USD source
Human:
- fixed cost: 0-500k USD (depending on whether you start from birth, and on the task they need to be trained for)
- variable cost: 10 - 1000 USD / day (depending on whether you count their maintenance cost or the cost they charge)
So an AI currently seems more expensive to train, but less expensive to use (as might be obvious for most of you).
Of course, trained humans are better than GPT-3. And this comparison has other limitations. But I still find it interesting.
According to one estimate, training GPT-3 would cost at least $4.6 million. And to be clear, training deep learning models is not a clean, one-shot process. There's a lot of trial and error and hyperparameter tuning that would probably increase the cost several-fold. (source)
x-post: https://www.facebook.com/mati.roy.09/posts/10158882964819579
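A rough break-even sketch using the numbers above (every figure is one of the loose estimates quoted in this note, or an assumed workload; not an authoritative cost model):

```python
gpt3_fixed = 4.6e6          # USD to train once
gpt3_per_request = 1 / 790  # USD per request

human_fixed = 250_000       # USD, mid-range of the 0-500k training estimate
human_per_day = 300         # USD/day, somewhere in the 10-1000 range
requests_per_day = 1_000    # assumed workload for the comparison

gpt3_per_day = gpt3_per_request * requests_per_day
days_to_break_even = (gpt3_fixed - human_fixed) / (human_per_day - gpt3_per_day)
print(f"{days_to_break_even:,.0f} days (~{days_to_break_even / 365:.0f} years)")
# With these made-up numbers, the higher training cost is only recouped after decades
# of use, but the answer is extremely sensitive to the assumed workload.
```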
Replies from: gwern↑ comment by gwern · 2020-09-20T17:35:14.399Z · LW(p) · GW(p)
and error and hyperparameter tuning that would probably increase the cost several-fold.
All of which was done on much smaller models and GPT-3 just scaled up existing settings/equations - they did their homework. That was the whole point of the scaling papers, to tell you how to train the largest cost-effective model without having to brute force it! I think OA may well have done a single run and people are substantially inflating the cost because they aren't paying any attention to the background research or how the GPT-3 paper pointedly omits any discussion of hyperparameter tuning and implies only one run (eg the dataset contamination issue).
Replies from: MathieuRoy↑ comment by Mati_Roy (MathieuRoy) · 2020-09-21T11:32:13.811Z · LW(p) · GW(p)
Good to know, thanks!
comment by Mati_Roy (MathieuRoy) · 2020-09-03T21:36:00.818Z · LW(p) · GW(p)
generalising from what a friend proposed to me: don't aim at being motivated to do [desirable habit], aim at being addicted to (/obsessed with) doing [desirable habit] (ie. having difficulty not doing it). I like this framing; relying on always being motivated feels harder to me
(I like that advice, but it probably doesn't work for everyone)
comment by Mati_Roy (MathieuRoy) · 2020-07-25T23:29:13.654Z · LW(p) · GW(p)
Philosophical zombies are creatures that are exactly like us, down to the atomic level, except they aren't conscious.
Complete philosophical zombies go further. They too are exactly like us, down to the atomic level, and aren't conscious. But they are also purple spheres (except we see them as if they weren't), they want to maximize paperclips (although they act and think as if they didn't), and they are very intelligent (except they act and think as if they weren't).
I'm just saying this because I find it funny ^^. I think consciousness is harder (for us) to reduce than shapes, preferences, and intelligence.
Replies from: Chris_Leong↑ comment by Chris_Leong · 2020-07-26T06:35:27.565Z · LW(p) · GW(p)
It's actually not hard to find examples of people who are intelligent, but act and think as though they aren't =P
comment by Mati_Roy (MathieuRoy) · 2020-07-05T15:15:06.916Z · LW(p) · GW(p)
topic: lifelogging as life extension
which formats should we preserve our files in?
I think it should be:
- open source and popular (to increase chances it's still accessible in the future)
- resistant to data degradation: https://en.wikipedia.org/wiki/Data_degradation (thanks to Matthew Barnett for bringing this to my attention)
x-post: https://www.facebook.com/groups/LifeloggingAsLifeExtension/permalink/1337456839798929/
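Related to the data-degradation point: whatever format is chosen, a fixity manifest makes silent corruption detectable. A minimal sketch (the paths are placeholders):

```python
import hashlib
import json
from pathlib import Path

def build_manifest(root):
    """Record a SHA-256 digest for every file under `root`."""
    return {str(p): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in Path(root).rglob("*") if p.is_file()}

def changed_files(root, manifest_path):
    """Return files whose digest no longer matches the stored manifest."""
    old = json.loads(Path(manifest_path).read_text())
    new = build_manifest(root)
    return [path for path, digest in old.items() if new.get(path) != digest]

# On Backup Day:   Path("manifest.json").write_text(json.dumps(build_manifest("./lifelog")))
# Later, to check: print(changed_files("./lifelog", "manifest.json"))
```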
Replies from: DonyChristie, MathieuRoy↑ comment by Pee Doom (DonyChristie) · 2020-07-06T02:04:29.860Z · LW(p) · GW(p)
Mati, would you be interested in having a friendly and open (anti-)debate on here (as a new post) about the value of open information, both for life extension purposes and else (such as Facebook group moderation)? I really support the idea of lifelogging for various purposes such as life extension but have a strong disagreement with the general stance of universal access to information as more-or-less always being a public good.
Replies from: MathieuRoy↑ comment by Mati_Roy (MathieuRoy) · 2020-07-06T02:18:07.243Z · LW(p) · GW(p)
Meta: This isn't really related to the above comment, so might be better to start a new comment in my shortform next time.
Object: I don't want to argue about open information in general for now. I might be open to discussing something more specific and directly actionable, especially if we haven't done so yet and you think it's important.
It doesn't currently seem to me that relevant to make one's information public for life extension purposes, given you can just back up the information privately, in case you were implying or thinking that.
I also don't (at least currently and in the past) advocate for universal access to information in case you were implying or thinking that.
↑ comment by Mati_Roy (MathieuRoy) · 2020-07-05T15:18:24.199Z · LW(p) · GW(p)
note to self, to read: https://briantomasik.com/manual-file-fixity/
comment by Mati_Roy (MathieuRoy) · 2020-07-04T12:07:16.703Z · LW(p) · GW(p)
topic: lifelogging as life extension
epistemic status: idea
Backup Day. Day where you commit all your data to blu-rays in a secure location.
When could that be?
Perihelion is at the beginning of the year. But maybe it would be better to have it on a day that commemorates some relevant event for us.
x-post: https://www.facebook.com/groups/LifeloggingAsLifeExtension/permalink/1336571059887507/
Replies from: avturchin↑ comment by avturchin · 2020-07-04T12:16:02.871Z · LW(p) · GW(p)
I am now copying a 4 TB HDD and it is taking 50 hours. Blu-rays are more time-consuming, as one needs to change the disks, and it may take around 80 disks of 50GB each to record the same hard drive. So it could take more than a day of work.
Replies from: MathieuRoy↑ comment by Mati_Roy (MathieuRoy) · 2020-07-05T01:20:22.453Z · LW(p) · GW(p)
good point! maybe we should have a 'Backup Week'!:)
comment by Mati_Roy (MathieuRoy) · 2020-05-07T07:16:37.302Z · LW(p) · GW(p)
I feel like I have slack. I don't need to work much to be able to eat; if I don't work for a day, nothing viscerally bad happens in my immediate surroundings. This allows me to think longer term and take on altruistic projects. But on the other hand, I feel like every movement counts; that there's no looseness in the system. Every lost move is costly. A recurrent thought I've had in the past weeks is: there's no slack in the system.
Replies from: Dagon↑ comment by Dagon · 2020-05-07T15:20:58.350Z · LW(p) · GW(p)
Every move DOES count, and "nothing viscerally bad" doesn't mean it wasn't a lost opportunity for improvement. _on some dimensions_. The problem is that well-being is highly-dimensional, and we only have visibility into a few aspects, and measurements of only a subset of those. It could easily be a huge win on your future capabilities to not work-for-pay that day.
The Slack that can be described is not true Slack. Slack is in the mind. Slack is freedom FROM freedom. Slack is the knowledge that every action or inaction changes the course of the future, _AND_ that you control almost none of it. You don't (and probably CAN'T) actually understand all the ways you're affecting your future experiences. Simply give yourself room to try stuff and experience them, without a lot of stress about "causality" or "optimization". But don't fuck it up. True slack is the ability to obsess over plans and executions in order to pick the future you prefer, WHILE YOU SIMULTANEOUSLY know you're wrong about causality and about your own preferences.
[ note and warning: my conception of slack has been formed over decades of Subgenius popehood (though I usually consider myself more Discordian), and may diverge significantly from other uses. ]
comment by Mati_Roy (MathieuRoy) · 2020-04-14T14:16:05.973Z · LW(p) · GW(p)
Today is Schelling Day. You know where and when to meet for the hangout!
comment by Mati_Roy (MathieuRoy) · 2020-03-17T08:21:41.482Z · LW(p) · GW(p)
epistemic status: a thought I just had
- the concept of 'moral trade' makes sense to me
- but I don't think there's such a thing as 'epistemic trade'
- although maybe agents can "trade" (in that same way) epistemologies and priors, but I don't think they can "trade" evidence
EtA: for those that are not familiar with the concept of moral trade, check out: https://concepts.effectivealtruism.org/concepts/moral-trade/
Replies from: Dagon↑ comment by Dagon · 2020-03-18T00:14:30.027Z · LW(p) · GW(p)
It's worth being clear what you mean by "trade" in these cases. Does "moral trade" mean "compromising one part of your moral beliefs in order to support another part"? or "negotiate with immoral agents to maximize overall moral value" or just "recognize that morals are preferences and all trade is moral trade"?
I think I agree that "trade" is the wrong metaphor for models and priors. There is sharing, and two-way sharing is often called "exchange", but that's misleading. For resources, "trade" implies loss of something and gain of something else, where the utility of the things to each party differ in a way that both are better off. For private epistemology (not in the public sphere where what you say may differ from what you believe), there's nothing you give up or trade away for new updates.
Replies from: Pattern, MathieuRoy↑ comment by Pattern · 2020-03-19T23:21:13.380Z · LW(p) · GW(p)
(I aimed for non-"political" examples, which ended up sounding ridiculous.)
Suppose you believed that the color blue is evil, and want there to be less blue things in the world.
Suppose I believed the same as you, except for me the color is red.
Perhaps we could agree on a moral trade - we will both be against the colors blue and red! Or perhaps something less extreme - you won't make things red unless they'd look really good red, and I won't make things blue unless they'd look really good blue. Or we trade in some other manner - if we were neighbors and our houses were blue and red we might paint them different colors (neither red nor blue), or trade them.
Replies from: Dagon↑ comment by Dagon · 2020-03-20T04:15:31.410Z · LW(p) · GW(p)
Hmm, those examples seem to be just "trade". Agreeing to do something dispreferred, in exchange for someone else doing something they disprefer and you prefer, when all things under consideration are permitted and optional under the relevant moral strictures.
I aimed for non-"political" examples, which ended up sounding ridiculous.
I wonder if that implies that politics is one of the main areas where the concept applies.
↑ comment by Mati_Roy (MathieuRoy) · 2020-03-18T15:29:07.937Z · LW(p) · GW(p)
meta: thanks for your comment; no expectation for you to read this comment; it doesn't even really respond to your comments, just some thoughts that came after reading it; see last paragraph for an answer to your question| quality: didn't spent much time formatting my thoughts
I use "moral trade" for non-egoist preferences. The latter is the trivial case that's the most prevalent; we trade resources because I care more about myself and you care more about yourself, and both want something personally out of the trade.
Two people that are only different in that one adopts Bentham's utilitarianism and the other adopts Mill's might want to trade. One value the existence of a human more than the existence of a pig. So one might trade their diet (become vegan) for a donation to poverty alleviation.
Two people could have the same values, but one think that there's a religious after life and the other not because they processed evidence differently. Someone could propose the following trade: the atheist will pray all their life (the reward massively overweights the cost from the theist person's perspective), and in exchange, the theist will sign up for cryonics (the reward massively overweights the cost from the atheist person's perspective). Hummmm, actually, writing out this example, it now seems to make sense to me to trade. Assuming both people are pure utilitarians (and no opportunity cost), they would both, in expectation, from their relative model of the world, gain a much larger reward than its cost. I guess this could also be called moral trade, but the different in expected value comes from different model of the worlds instead of different values.
So you never actually trade epistemologies or priors (as in, I reprogram my mind if you reprogram yours so that we have a more similar way of modelling the world), but you can trade acting as if. (Well, there are also cases were you would actually trade them, but only because it's morally beneficial to both parties.) It sounds trivial now, but yeah, epistemologies and priors are not necessarily intrinsically moving. I'm not sure what I had in mind exactly yesterday.
Ah, I think meant, let's assume I have Model 1 and you have Model 2. Model 1 evaluates Model 2 to be 50% wrong and vice versa, and both assume they themselves are 95% right. Let's assume that there's a third model that is 94% right according to both. If you do an average, it seems better. But it obviously doesn't mean it's optimal from any of the agent's perspective to accept this modification to their model.
comment by Mati_Roy (MathieuRoy) · 2024-10-16T14:55:54.865Z · LW(p) · GW(p)
summary of https://gwern.net/socks
--> https://www.facebook.com/reel/1207983483859787/?mibextid=9drbnH&s=yWDuG2&fs=e
😂
comment by Mati_Roy (MathieuRoy) · 2024-10-16T13:35:25.485Z · LW(p) · GW(p)
Steven Universe s1e5 is about a being that follows commands literally, and is a metaphor for some AI risks
comment by Mati_Roy (MathieuRoy) · 2024-07-22T01:45:27.664Z · LW(p) · GW(p)
epistemic status: speculative, probably simplistic and ill defined
Someone asked me "What will I do once we have AGI?"
I generally define the AGI era as starting at the point where all economically valuable tasks can be performed by AIs at a lower cost than by a human (at subsistence level, including buying any available augmentations for the human). This notably excludes:
1) any tasks that humans can do that still provide value at the margin (ie. the relevant cost is the caloric cost of feeding that human while they're working vs while they're not working, rather than while they're not existing)
2) things that are not "tasks", such as:
a) caring about the internal experience of the service provider (ex.: wanting a DJ that feels human emotions regardless of its actions) --> although, maybe you could include that in the AGI definition too. but what if you value having a DJ be exactly a human? then the best an AGI could do is 3D print a human or something like that. or maybe you're even more specific, and you want a "pre-singulatarian natural human", in which case AGI seems impossible by (very contrived) definition.
b) the value of the memories encoded in human brains
c) the value of doing scientific experiments on humans
For my answer to the question, I wanted to say something like: think about what I should do with my time for a long time, and keep my options open (ex.: avoid altering my mind in ways whose consequences I don't understand well). But then, that seems like something that might be economically useful to sell, so using the above definition, it seems like I should have AI systems that are able to do that better/cheaper than me (unless I intrinsically didn't want that, or something like that). So maybe I have AI systems computing that for me and keeping me posted with advice while I do whatever I want.
But maybe I can still do work that is useful at the margin, as per (1), and so would probably do that. But what if even that wasn't worth the marginal caloric cost, and it was better to feed those calories into AI systems?
(2) is a bit complex, but probably(?) wouldn't impact the answer to the initial question much.
So, what would I do? I don't know. The main thing that comes to mind is observe how the world unfolds (and listen to what the AGIs are telling me).
But maybe "AGI" shouldn't be defined as "aligned AGI". Maybe a better definition of AGI is like "outperforming humans at all games/tasks that are well defined" (ie. where humans don't have a comparative advantage just by knowing what humans value). In which case, my answer would be "alignment research" (assuming it's not "die").
comment by Mati_Roy (MathieuRoy) · 2024-05-28T03:07:29.427Z · LW(p) · GW(p)
imagine (maybe all of a sudden) we're able to create barely superhuman-level AIs aligned to whatever values we want at a barely subhuman-level operation cost
we might decide to have anyone able to buy AI agents aligned with their values
or we might (generally) think that giving access to that tech this way would be bad, but many companies are already incentivized to do that individually and can't all cooperate not to (and they actually reached this point gradually, previously selling near-human-level AIs)
then it seems like everyone/most people would start to run such an AI and give it access to all their resources--at which point that AI can decide what to do, whether that's investing in some companies and then paying themselves periodically, or investing in running more copies of themselves, etc., deciding when to use those resources for the human to consume vs reinvesting them
maybe people would wish for everyone to run AI systems with "aggregated human values" instead of their personal values, but given others aren't doing that, they won't either
now, intelligence isn't static anymore--presumably, the more money you have, the more intelligence you have, and the more intelligence the more money.
so let's say we suddenly have this tech and everyone is instantiating one such agent (which will make decisions about number and type of agents) that has access to all their resources
what happens?
maximally optimistic scenario: solving coordination is not too late, and it gets done easily and at a low cost. utopia
optimistic scenario: we don't substantially improve coordination, but our current coordination level is good enough for an Okay Outcome
pessimistic scenario: agents are incentivized to create subagents with other goals for instrumentally convergent purposes. defecting is better than cooperating individually, but defecting-defecting still leads to extremely bad outcomes (just not as bad as if you had cooperated in a population of defectors). those subagents quickly take over and kill all humans (those who cooperated are killed slightly sooner). or, not requiring misaligned AIs, maybe the aestivation hypothesis is true but we won't coordinate to delay energy consumption, or wars will use all surplus, leaving nothing for humans to consume
I'm not confident we're in an optimist scenario. being able to download one's values and then load them in an AI system (and having initial conditions where that's all that happens) might not be sufficient for good outcomes
this is evidence for the importance of coordinating on how AGI systems get used, and that distributing that wealth/intelligence directly might not be the way to go. rather, it might be better to keep that intelligence concentrated and have some value/decision aggregation mechanism to decide what to do with it (rather than distributing it and later not being able to pool it back together if that's needed, which seems plausible to me)
a similar reasoning can apply to poverty alleviation: if you want to donate money to a group of people (say residents of a poor country), and if you think they haven't solved their coordination problem, then maybe instead of distributing that money and letting them try to coordinate to put (part of) it back into a shared pool for collective goods, you can just directly put that money in such a pool--the problem of figuring out the shared goal remains, but it at least arguably solves the problem of pooling the money (ex.: to fund research for a remedy to a disease affecting that population)
comment by Mati_Roy (MathieuRoy) · 2024-04-18T21:56:27.888Z · LW(p) · GW(p)
topic: economics
idea: when building something with local negative externalities, have some mechanism to measure the externalities in terms of how much the surrounding property valuations changed (or are expected to change, say, through a prediction market) and have the owner of that new structure pay the owners of the surrounding properties.
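A toy sketch of the payment rule (illustrative numbers; in practice the "after" values could come from a prediction market on post-construction prices):

```python
def externality_payments(valuations_before, valuations_after):
    """The builder pays each neighbour the drop in their property valuation."""
    return {owner: before - valuations_after[owner]
            for owner, before in valuations_before.items()
            if before > valuations_after[owner]}

before = {"lot_A": 300_000, "lot_B": 250_000, "lot_C": 400_000}
after = {"lot_A": 280_000, "lot_B": 250_000, "lot_C": 390_000}
print(externality_payments(before, after))  # {'lot_A': 20000, 'lot_C': 10000}
```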
comment by Mati_Roy (MathieuRoy) · 2024-04-15T22:38:43.952Z · LW(p) · GW(p)
I wonder what fraction of people identify as "normies"
I wonder if most people have something niche they identify with and label people outside of that niche as "normies"
if so, then a term with a more objective perspective (and maybe better) would be non-<whatever your thing is>
like, athletic people could use "non-athletic" instead of "normies" for that class of people
Replies from: CstineSublime↑ comment by CstineSublime · 2024-04-15T23:45:29.837Z · LW(p) · GW(p)
Does "normie" crossover with "(I'm) just a regular guy/girl"? While they are obviously have highly different connotations, is the central meaning similar?
I tend to assume, owing to Subjectivism and Egocentric Bias, that at times people are more likely to identify as part of the majority (and therefore 'normie') than the minority unless they have some specific reason to do so. What further complicates this like a matryoshka doll is not only the differing sociological roles that a person can switch between dozens of times a day (re: the stereotypical Twitter bio "Father. Son. Actuary. Tigers supporter") but within a minority one might be part of the majority of the minority, or the minority of the minority many times over. Like the classic Emo Phillips joke "Northern Conservative Baptist, or Northern Liberal Baptist" "He said "Northern Conservative Baptist", I said "me too! Northern Conservative Fundamentalist Baptist..."" itself a play on "No True-Scotsman".
comment by Mati_Roy (MathieuRoy) · 2023-07-17T15:05:39.211Z · LW(p) · GW(p)
topics: AI, sociology
thought/hypothesis: when tech is able to create brains/bodies as good as or better than ours, it will change our perception of ourselves: we won't be in a separate magisterium from our tools anymore. maybe people will see humans as less sacred, and value life less. if you're constantly using, modifying, copying, deleting, enslaving AI minds (even AI minds that have a human-like interface), maybe people will become more okay doing that to human minds as well.
(which seems like it would be harmful for the purpose of reducing death)
comment by Mati_Roy (MathieuRoy) · 2023-05-08T22:10:34.268Z · LW(p) · GW(p)
topic: intellectual discussion, ML tool, AI x-risks
Idea: Have a therapist present during intellectual debate to notice triggers, and help defuse them. Triggers activate a politics mindset where the goal becomes focused on status/self-preservation/appearances/looking smart/making the other person look stupid/etc. which makes it hard to think clearly.
Two people I follow will soon have a debate on AI x-risks, which made me think of that. I can't really propose that intervention though, because it would likely be perceived and responded to as if it were a political move itself.
Another idea I had recently, also based on one of those people, was to develop a neural network that helps us notice when we're activated in that way, so we become aware of it and can defuse it. AI is too important for our egos to get in the way (but it's easier said than done).
x-post Facebook
comment by Mati_Roy (MathieuRoy) · 2023-04-30T15:17:27.854Z · LW(p) · GW(p)
Topics: AI, forecasting, privacy
I wonder how much of a signature we leave in our writings. Like, how hard would it be for an AI to be rather confident I wrote this text? (say if it was trained on LessWrong writings, or all public writings, or maybe even private writings) What if I ask someone else to write an idea for me--how helpful is it in obfuscating the source?
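As a rough illustration of how standard this kind of attribution is, here's a minimal stylometry sketch using character n-grams and logistic regression (scikit-learn assumed available; the texts and labels below are placeholders, not real data):

```python
# Minimal stylometry sketch (placeholder data): character n-gram features + logistic
# regression. Trained on labelled public writings, it scores how "author-A-like" an
# anonymous text looks.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = ["text known to be by author A ...", "text known to be by author B ..."]
train_labels = ["A", "B"]  # placeholders; in practice you'd want many labelled samples

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # captures style-ish cues
    LogisticRegression(max_iter=1000),
)
model.fit(train_texts, train_labels)

print(model.predict_proba(["anonymous text to attribute ..."]))  # probability per author
```

Dictating an idea to someone else defeats the character-level cues, though word choice and argument structure may still leak through.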
comment by Mati_Roy (MathieuRoy) · 2023-04-25T02:43:26.350Z · LW(p) · GW(p)
Topic: AI strategy (policies, malicious use of AI, AGI misalignment)
Epistemic status: simplistic; simplified line of reasoning; thinking out loud; a proposed frame
A significant "warning shot" from a sovereign misaligned AI doesn't seem likely to me because a human-level (and plausibly a subhuman-level) intelligence can both 1) learn deception, yet 2) can't (generally) do a lot of damage (i.e. perceptible for humanity). So the last "warning shot" before AI learns deception won't be very big (if even really notable at all), and then a misaligned agent would hide (its power and/or intentions) until it's confident it can overpower humanity (because it's easy to gain power that way)--at which point it would cause an omnicide. An exception to that is if an AI thinks other AIs are hiding in the world, then it might want to take a higher risk to overpower humanity before it's confident it can do so because it's concerned another AI will do so first otherwise. I'm not very hopeful this would give us a good warning shot though because I think multiple such AIs trying to overpower humanity would likely be too damaging for us to regroup in time.
However, it seems much more plausible to me that (non-agentic) AI tools would be used maliciously, which could lead the government to highly regulate AIs. Those regulations (ex.: nationalizing AI) preventing malicious uses could also potentially help with negligent uses. Assuming a negligent use (i.e. resulting in AGI misalignment) is much more likely to cause an existential catastrophe than a malicious use of AI, and that regulations against malicious uses are more memetically fit, then the ideal regulations to advocate for might be those that are good at preventing both malicious uses and the negligent creation of a misaligned AGI.
note to self: not posted on Facebook (yet)
comment by Mati_Roy (MathieuRoy) · 2023-04-10T01:11:58.403Z · LW(p) · GW(p)
topic: AI alignment, video game | status: idea
Acknowledgement: Inspired by an idea I heard from Eliezer in zir podcast with Lex Fridman and by the game Detroit: Become Human.
Video game where you're in an alternate universe where aliens create an artificial intelligence that's a human. The human has various properties typical of AI, such as running way faster than the aliens in that world and being able to duplicate themselves. The goal of the human is to take over the world to stop some atrocity happening in that world. The aliens are trying to stop the human from taking over the world.
comment by Mati_Roy (MathieuRoy) · 2023-03-26T20:47:43.143Z · LW(p) · GW(p)
✨ topic: AI timelines
Note: I'm not explaining my reasoning in this post, just recording my predictions and sharing how I feel.
I'll sound like a boring cliche at this point, but I just wanted to say it publicly: my AGI timelines have shortened earlier this year.
Without thinking too much about quantifying my probabilities, I'd say the probabilities that we'll get AGI or AI strong enough to prevent AGI (including through omnicide) are:
- 18% <2033
- 18% 2033-2043
- 18% 2043-2053
- 18% 2053-2070
- 28% 2070+ or won't happen
But at this point I feel like not much would surprise me in terms of short timelines. Transformative AI seems really close. Short timelines and AI x-risk concerns are common among people working in AI and among people trying to predict the development of this tech. It's the first time I've been feeling sick to my stomach when thinking about AI timelines. First time that my mind is this emotionally focused on the threat, simulating what the last moments before an AI omnicide would look like.
What fraction of the world would be concerned about AI x-risk 1 second before an AI omnicide? Plausibly very low.
- Will people see their death coming? For example, because a drone breaks their house window just before shooting them in the head. And if so, will people be able to say "Ah, Mati was right" just before they die or will they just think it's a terrorist attack or something like that? I imagine losing access to Internet and cellphone communication, not thinking much of it, while a drone is on its journey to kill me.
- Before AI overpowers humanity, will people think that I was wrong because AI is actually providing a crazy amount of wealth? (despite me already expecting that wealth)
- Will I have time to post my next AI x-risk fiction story before AI kills us all? I better get to it.
To be clear, this fear is not at all debilitating or otherwise pathological.
(I know some of those thoughts are silly; I'm obviously predominantly concerned about omnicide, not about publishing my fiction or being acknowledged)
I find myself wanting to simplify my life, doing things faster, and focusing even more on AI. (I still care about and support cryonics and cause areas adjacent to AI, like genetic engineering.)
In a few years, I might live in a constant state of thinking I could drop dead at any time from an AGI.
I used to think the most likely cause of my death would be an insufficiently good cryopreservation, but now I think it's misaligned AGI. It seems likely to me that most people alive today will die from an AI omnicide.
comment by Mati_Roy (MathieuRoy) · 2023-03-26T19:46:34.387Z · LW(p) · GW(p)
topic: genetic engineering
'Revolutionary': Scientists create mice with two fathers
(I just read the title)
comment by Mati_Roy (MathieuRoy) · 2022-12-13T18:45:18.701Z · LW(p) · GW(p)
Idea for a line of thinking: What if, as a result of automation, we could use the ~entire human population to control AI — is there any way we could meaningfully organize this large workforce towards that goal?
comment by Mati_Roy (MathieuRoy) · 2022-02-16T00:40:28.464Z · LW(p) · GW(p)
- What fraction of the cards from the Irrational Game didn't replicate?
- Is there a set of questions similar to this available online?
- In the physical game I have, there's a link to http://give-me-a-clue.com/afterdinner (iianm) which is supposed to have 300 more trivia questions, but it doesn't work -- does anyone have those?
comment by Mati_Roy (MathieuRoy) · 2022-02-08T07:12:17.137Z · LW(p) · GW(p)
Part-time remote assistant position
My assistant agency, Pantask, is looking to hire new remote assistants. We currently work only with effective altruist / LessWrong clients, and are looking to contract people in or adjacent to the network. If you’re interested in referring me people, I’ll give you a 100 USD finder’s fee for any assistant I contract for at least 2 weeks (I’m looking to contract a couple at the moment).
This is a part-time gig / sideline. Tasks often include web searches, problem solving over the phone, and Google Sheet formatting. A full description of our services is here: https://bit.ly/PantaskServices
The form to apply is here: https://airtable.com/shrdBJAP1M6K3R8IG It pays 20 usd/h.
You can ask questions here, in PM, or at mati@pantask.com.
comment by Mati_Roy (MathieuRoy) · 2021-12-19T06:41:40.629Z · LW(p) · GW(p)
a thought for a prediction aggregator
Problem with prediction markets: People with the knowledge might still not want to risk money (plus the complexity of markets, the risk the market fails, tax implications, etc.).
But if you fully subsidize it and make it free to participate, while still keeping the potential for reward, then most people would probably make random predictions (at least for questions where most people don't have specialized knowledge), because it's not worth their time to improve the prediction.
Maybe the best of both worlds is to do the latter, but only with people you've already identified as experts. They already have the knowledge (maybe, mostly), so their marginal cost to predict accurately makes it worth it (although this assumption might be wrong; maybe the marginal cost is still high). Also, given there are fewer players, the expected reward is higher. They'll also have reputational stakes, given it's their field.
I imagine this mechanism paying rewards in a large index (fund) so that rewards arriving in the future aren't valued less (hopefully that would work? on a long enough time scale it might not; old scientists would presumably have a higher time discount on average, maybe).
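A minimal sketch of the subsidized-experts version, assuming a fixed prize pool split in proportion to a quadratic (Brier-style) accuracy score once the question resolves; the names, probabilities, and pool size are made up, and proportional splitting is just one simple option, not necessarily the incentive-optimal one:

```python
# Minimal sketch, hypothetical numbers: invited experts forecast for free, and a fixed
# subsidy is split in proportion to their quadratic accuracy once the question resolves.
def payouts(forecasts: dict[str, float], outcome: int, pool: float) -> dict[str, float]:
    """forecasts: expert -> probability assigned to the event; outcome: 1 if it happened."""
    accuracy = {e: 1.0 - (p - outcome) ** 2 for e, p in forecasts.items()}  # in [0, 1]
    total = sum(accuracy.values())
    return {e: pool * a / total for e, a in accuracy.items()}

experts = {"alice": 0.9, "bob": 0.6, "carol": 0.2}
print(payouts(experts, outcome=1, pool=1000.0))
# alice gets the most, carol the least; nobody risked their own money
```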
Replies from: Measure↑ comment by Mati_Roy (MathieuRoy) · 2021-10-30T21:42:56.779Z · LW(p) · GW(p)
But if the brain is allowed to change, then the subject can eventually adapt to the torment. To
This doesn't follow. It seems very likely to me that the brain can be allowed to change in ways that don't adapt to the pain, both from first principles and from observations.
How different would each loop need to be in order to be experienced separately?
That would be like having multiple different Everett branches experiencing suffering (parallel lives), which is different from 1 long continuous life.
↑ comment by avturchin · 2021-10-30T20:09:33.674Z · LW(p) · GW(p)
A superintelligent Devil would take its victim's brain upgrades under its control and invest in the constant development of the victim's brain parts which can feel pain. There is an eternity to evolve in that direction, and the victim will know that every next second will be worse. But it is a really computationally intensive way of punishment.
The question of the minimal unit of experience which is enough to break the sameness of the loop is interesting. It needs not only to be subjectively different; the difference needs to be meaningful. Not just one pixel.
Replies from: MackGopherSena↑ comment by MackGopherSena · 2022-05-08T03:27:39.095Z · LW(p) · GW(p)
[edited]
Replies from: avturchin↑ comment by avturchin · 2022-05-08T07:39:56.785Z · LW(p) · GW(p)
There is no physical eternity, so there is a small probability of a fork at each moment. Therefore, there will eventually be a next observer-moment sufficiently different to be recognised as different. In internal experience, it will come almost immediately.
comment by Mati_Roy (MathieuRoy) · 2021-10-23T07:42:22.116Z · LW(p) · GW(p)
Being immortal means you will one day be a Jupiter brain (if you think memories are part of one's identity, which I think they are)
x-post: https://twitter.com/matiroy9/status/1451816147909808131
Replies from: avturchin, Vladimir_Nesov↑ comment by Vladimir_Nesov · 2021-10-23T23:44:00.262Z · LW(p) · GW(p)
Learning distills memories in models that can be more reasonably bounded even for experience on astronomical timescales. It's not absolutely necessary to keep the exact record of everything. What it takes to avoid value drift is another issue though, this might incur serious overhead.
Value drift in people is not necessarily important, might even be desirable, it's only clear for the agent in charge of the world that there should be no value drift. But even that doesn't necessarily make sense if there are no values to work with as a natural ingredient of this framing.
Replies from: MathieuRoy↑ comment by Mati_Roy (MathieuRoy) · 2021-10-24T05:25:17.367Z · LW(p) · GW(p)
Learning distills memories in models that can be more reasonably bounded even for experience on astronomical timescales.
Sure! My point still stands though :)
Value drift in people is not necessarily important, might even be desirable
Here it's more about identity deterioration than value drift (you could maintain the same values while forgetting your whole life).
But also, to address your claim in a vacuum:
- Value-preservation is an instrumentally convergent goal; i.e. you're generally more likely to achieve your goals if you keep wanting to achieve them.
- Plus, I think most humans would value preserving their (fundamental) values intrinsically as well.
But even that doesn't necessarily make sense if there are no values to work with as a natural ingredient of this framing.
Am not sure I understand
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2021-10-24T12:18:49.160Z · LW(p) · GW(p)
The arguments for instrumental convergence don't apply to the smaller processes that take place within a world fully controlled by an all-powerful agent, because the agent can break Moloch's back. If the agent doesn't want undue resource acquisition to be useful for you, it won't be, and so on.
The expectation that humans would value preservation of values is shaky, it's mostly based on the instrumental convergence argument, that doesn't apply in this setting. So it might actually turn out that human preference says that value preservation is not good for individual people, that value drift in people is desirable. Absence of value drift is still an instrumental goal for the agent in charge of the world that works for the human preference that doesn't drift. This agent can then ensure that the overall shape of value drift in the people who live in the world is as it should be, that it doesn't descend into madness.
Value drift only makes sense where the abstraction of values makes sense. Does my apartment building have a data integrity problem, does it fail some hash checks? This doesn't make sense, the apartment building is not a digital data structure. I think it's plausible that some AGIs of the non-world-eating variety lack anything that counts as their preference, they are not agents. In a world dominated by such AGIs some people would still set up smaller agents merely for the purpose of their own preference management (this is the overhead I alluded to in the previous comment). But for those who don't and end up undergoing unchecked value drift (with no agents to keep it in line with what values-on-reflection approve of), the concept of values is not necessarily important either. This too might be the superior alternative, more emphasis on living long reflection than on being manipulated into following its conclusions.
comment by Mati_Roy (MathieuRoy) · 2021-10-23T05:55:40.012Z · LW(p) · GW(p)
Here's a way to measure (a proxy of) relative value of different years I just thought (again?); answer the question:
For which income would you prefer to live in perpetual-2020 over living in perpetual-2021 with a median income? (maybe income is measured by the fraction of the world owned multiplied by population size, or some other way) Then you can either chain those answers to go back to older years, or just compare them directly.
There are probably years where even Owning Everything wouldn't be enough. I prefer living in perpetual-2021 with a median income over, say, living in perpetual-1900 while Owning Everything. And probably the same holds for perpetual-1900 over some earlier year. Maybe this repeats 1-5 times total in human history. And even over owning everything an infinite amount of times.
So in that sense, we've already experienced a few singularities. Just meant as a fun thought!
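To make the chaining in the first method concrete, here's a minimal sketch with made-up indifference multiples, assuming they compose multiplicatively (a strong assumption):

```python
# Rough sketch (hypothetical numbers): chaining year-vs-income indifference multiples.
# multiple[y] = how many times the year-y median income you'd need in perpetual-year-y
# to be indifferent with perpetual-(y+1) at a median income.
multiple = {2020: 1.5, 2019: 1.3}

def required_income_multiple(year: int, reference_year: int = 2021) -> float:
    """Multiple of `year`'s median income needed to match perpetual-reference_year at median."""
    m = 1.0
    for y in range(reference_year - 1, year - 1, -1):
        m *= multiple[y]
    return m

print(required_income_multiple(2019))  # 1.5 * 1.3 = 1.95x the 1919-era median income
```

A "singularity" in this framing would be a year where the chained multiple exceeds whatever Owning Everything back then would have given you.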
Here's another way to measure (a proxy of) the relative value of different years, which makes us look much more advanced (at least for my preferences).
For how many years of living in perpetual-1900 would you be indifferent with living 100 years in perpetual-2020? Here, my answer is much smaller; assuming finite lives have value in the first place, my answer would probably be less than 2x (so somewhere between 101 and 200 years) (although I've never tried living in 1900).
Although from that perspective, I'll never experience a singularity: I will never prefer a finite life over an infinite one, no matter how good the finite one and how bad the infinite one.
x-post: https://www.facebook.com/mati.roy.09/posts/10159848043774579
comment by Mati_Roy (MathieuRoy) · 2021-07-23T21:22:15.482Z · LW(p) · GW(p)
Hobby: serve so many bullets to sophisticated philosophers that they're missing half their teeth by the end of the discussion
comment by Mati_Roy (MathieuRoy) · 2021-07-02T03:57:39.154Z · LW(p) · GW(p)
crazy idea I just had: mayyybe a deontological Libertarian* AI with (otherwise) any utility function is not that bad (?) maybe that should be one of the things we try (??????)
*where negative externalities also count as aggressions, and other such fixes to naive libertarianism
Replies from: Viliam↑ comment by Viliam · 2021-07-02T12:49:47.571Z · LW(p) · GW(p)
Existence of an agent is itself already a negative externality for other agents existing in the same environment. It means more competition for the limited natural resources. Even animals, pure in their minds and untainted by the sin of statism, fight for their territory and for the natural resources it includes.
Of course, competing for natural resources is not the only thing an agent does. If the agent also produces and trades, this aspect of its existence is a positive externality for its neighbors, which in modern economy typically outweighs the increased competition for raw resources. But if the economical context changes, this could change, too.
If the libertarian AI in a libertarian world succeeds in sending von Neumann probes to colonize planets, using them to build more von Neumann probes and colonize more planets... and takes over the entire universe, leaving only Earth, Moon, and possibly Mars to humans... according to libertarianism, that's perfectly fair, right? I mean, the humans were not actually using the rest of the universe before, so it was free to take, the AI started using it productively (to build more von Neumann probes), therefore it rightfully belongs to the AI now.
And if the AI refuses to trade with humans, or generally to interact with them in any way other than destroying all human spaceships that trespass on its territory (i.e. the whole universe, other than Earth, Moon, Mars and the space between them), it is still acting fully within its libertarian rights. It is not initiating violence, merely protecting its rightful property. The parts of universe that are not needed for defense against humans, are converted to paperclips...
Would you count this outcome as "not that bad"?
Replies from: MathieuRoy↑ comment by Mati_Roy (MathieuRoy) · 2021-07-07T20:32:04.046Z · LW(p) · GW(p)
before the birth of that AI, we could split the Universe among existing beings
comment by Mati_Roy (MathieuRoy) · 2021-01-06T15:29:33.280Z · LW(p) · GW(p)
Am thinking of organizing a one hour livestreamed Q&A about how to sign up for cryonics on January 12th (Bedford's day). Would anyone be interested in asking me questions?
x-post: https://www.facebook.com/mati.roy.09/posts/10159154233029579
comment by Mati_Roy (MathieuRoy) · 2021-01-01T10:02:59.594Z · LW(p) · GW(p)
We sometimes encode the territory on context-dependent maps. To take a classic example:
- when thinking about daily experience, stars and the Sun are stored as different things
- when thinking in terms of astrophysics, they are part of the same category
This makes it so that when you ask a question like "What is the closest star [to us]?", in my experience people are likely to say Alpha Centauri, and not the Sun. Merging those 2 maps feels enlightening in some ways; creates new connections / a new perspective. "Our Sun is just a star; stars are just like the Sun." leading to system-1 insights like "Wow, so much energy to harvest out there!"
Questions:
- Is there a name for such merging? If no, should there be? Any suggestions?
- Do you have other (interesting?) examples of map merging?
x-post: https://www.facebook.com/mati.roy.09/posts/10159142062619579
↑ comment by Mati_Roy (MathieuRoy) · 2021-01-01T10:04:12.742Z · LW(p) · GW(p)
- No name that I'm aware of. Brainstorming ideas: map merging, compartmentalisation merging, uncompartmentalising
comment by Mati_Roy (MathieuRoy) · 2020-12-09T08:43:02.810Z · LW(p) · GW(p)
In my mind, "the expert problem" means the problem of being able to recognize experts without being one, but I don't know where this idea comes from as the results from a Google search don't mention this. What name is used to refer to that problem (in the literature)?
x-post: https://www.facebook.com/mati.roy.09/posts/10159081618379579
Replies from: Pattern↑ comment by Pattern · 2020-12-09T19:50:13.958Z · LW(p) · GW(p)
This comment/post is one of 3 duplicates. (Link to main here [LW(p) · GW(p)].)
Replies from: MathieuRoy↑ comment by Mati_Roy (MathieuRoy) · 2020-12-10T19:10:26.954Z · LW(p) · GW(p)
oh damn, thanks! there was an error message when I was trying to post it which had given me the impression it wasn't working, hence why I posted it 4 times total ^^
comment by Mati_Roy (MathieuRoy) · 2020-12-01T13:26:59.441Z · LW(p) · GW(p)
suggestion of something to try at a LessWrong online Meetup:
video chat with a time-budget for each participant. each time a participant unmutes themselves, their time-budget starts decreasing.
note: on jitsi you can see how many minutes someone talked (h/t Nicolas Lacombe)
x-post: https://www.facebook.com/mati.roy.09/posts/10159062919234579
comment by Mati_Roy (MathieuRoy) · 2020-10-28T10:50:46.662Z · LW(p) · GW(p)
imagine having a physical window that allowed you to look directly in the past (but people in the past wouldn't see you / the window). that would be amazing, right? well, that's what videos are. with the window it feels like it's happening now, whereas with videos it feels like it's happening in the past, but it's the same
x-post: https://www.facebook.com/mati.roy.09/posts/10158977624499579
Replies from: Dagon↑ comment by Dagon · 2020-10-28T21:38:35.940Z · LW(p) · GW(p)
There's a bit of bait-and-switch with the comparison. The magic window is amazing if it sees parts of the past which we're interested in (both time and location control, or event selection). It's much less interesting if it only sees very few parts (well under 0.01%) of the very recent past (last 20-70 years), and only sees the parts that someone happened to capture, which are indexed/promoted enough to come to our attention.
Replies from: MathieuRoy↑ comment by Mati_Roy (MathieuRoy) · 2020-10-29T14:27:46.748Z · LW(p) · GW(p)
ok yeah, that's fair! (although even controlling for that, I think the analogy still points at something interesting)
only sees the parts that someone happened to capture, which are indexed/promoted enough to come to our attention
yeah, I like to see "people just living a normal day"; I sometimes look for that, but even that is likely biased
comment by Mati_Roy (MathieuRoy) · 2020-09-24T11:02:50.496Z · LW(p) · GW(p)
tattoo idea: I won't die in this body
in Toki Pona: ale pini mi li ala insa e sijelo ni
direct translation: life's end (that is) mine (will) not (be) inside body this
EtA: actually I got the Toki Pona wrong; see: https://www.reddit.com/r/tokipona/comments/iyv2r2/correction_thread_can_your_sentences_reviewed_by/
comment by Mati_Roy (MathieuRoy) · 2020-09-20T11:11:32.171Z · LW(p) · GW(p)
When you're sufficiently curious, everything feels like a rabbit hole.
Challenge me by saying a very banal statement ^_^
x-post: https://www.facebook.com/mati.roy.09/posts/10158883322499579
Replies from: mr-hire, MathieuRoy↑ comment by Matt Goldenberg (mr-hire) · 2020-09-20T17:16:48.239Z · LW(p) · GW(p)
I'm tired because I didn't sleep well.
↑ comment by Mati_Roy (MathieuRoy) · 2020-09-20T11:20:33.479Z · LW(p) · GW(p)
Sort of smashing both of those sayings together:
> “If you wish to make an apple pie from scratch, you must first invent the universe.” -Carl Sagan
> "Any sufficiently analyzed magic is indistinguishable from science!"-spin of Clarke's third law
to get:
Sufficiently understanding an apple pie is indistinguishable from understanding the world.
Replies from: mark-xu↑ comment by Mark Xu (mark-xu) · 2020-09-21T05:51:08.073Z · LW(p) · GW(p)
reminded me of Uriel explaining Kabbalah:
> “THEY BELIEVE YOU CAN CARVE UP THE DIFFERENT FEATURES OF THE UNIVERSE, ENTIRELY UNLIKE CARVING A FISH,” the angel corrected himself. “BUT IN FACT EVERY PART OF THE BLUEPRINT IS CONTAINED IN EVERY OBJECT AS WELL AS IN THE ENTIRETY OF THE UNIVERSE. THINK OF IT AS A FRACTAL, IN WHICH EVERY PART CONTAINS THE WHOLE. IT MAY BE TRANSFORMED ALMOST BEYOND RECOGNITION. BUT THE WHOLE IS THERE. THUS, STUDYING ANY OBJECT GIVES US CERTAIN DOMAIN-GENERAL KNOWLEDGE WHICH APPLIES TO EVERY OTHER OBJECT. HOWEVER, BECAUSE ADAM KADMON IS ARRANGED IN A WAY DRAMATICALLY DIFFERENTLY FROM HOW OUR OWN MINDS ARRANGE INFORMATION, THIS KNOWLEDGE IS FIENDISHLY DIFFICULT TO DETECT AND APPLY. YOU MUST FIRST CUT THROUGH THE THICK SKIN OF CONTINGENT APPEARANCES BEFORE REACHING THE HEART OF -”
↑ comment by Mati_Roy (MathieuRoy) · 2020-09-21T11:41:27.529Z · LW(p) · GW(p)
Nice!
comment by Mati_Roy (MathieuRoy) · 2020-08-06T16:08:39.347Z · LW(p) · GW(p)
I can pretty much only think of good reasons for having generally pro-entrapment laws. Not any kind of trap, but some kinds of traps seem robustly good. Ex.: I'd put traps for situations that are likely to happen in real life, and that show unambiguous criminal intent.
It seems like a cheap and effective way to deter crimes and identify people at risk of criminal behaviors.
I've only thought about this for a bit though, so maybe I'm missing something.
x-post with Facebook: https://www.facebook.com/mati.roy.09/posts/10158763751484579
Replies from: Viliam, Dagon↑ comment by Viliam · 2020-08-07T22:01:09.174Z · LW(p) · GW(p)
Aren't there already too many people in prisons? Do we need to put there also people who normally wouldn't have done any crime?
I guess this depends a lot on your model of crime. If your model is something like -- "some people are inherently criminal, but most are inherently non-criminal; the latter would never commit a crime, and the former will use every opportunity that seems profitable to them" -- then the honeypot strategy makes sense. You find and eliminate the inherently criminal people before they get an opportunity to actually hurt someone.
My model is that most people could be navigated to commit a crime, if someone would spend the energy to understand them and create the proper temptation. Especially when we consider the vast range of things that are considered crimes, so it does not have to be a murder, but something like smoking weed, or even things that you have no idea could be illegal; then I'd say the potential success rate is 99%. But even if we limit ourselves to the motte of crime, let's say theft and fraud, I'd still say more than 50% of people could be tempted, if someone spent enough resources on it. Of course some people are easier to nudge than others, but we are all on the spectrum.
Emotionally speaking, "entrapment" feels to me like "it is too dangerous to fight the real criminals, let's get some innocent but stupid person into trouble instead, and it will look the same in our crime-fighting statistics".
Replies from: MathieuRoy↑ comment by Mati_Roy (MathieuRoy) · 2020-08-13T17:44:35.277Z · LW(p) · GW(p)
uh, I didn't say anything about prisons. there are reasons to identify people at high risk of committing crimes.
and no, it's not about catching people that wouldn't have committed crimes, it's about catching people that would have committed crimes without being caught (but maybe I misused the word 'entrapment', and that's not what it means)
Emotionally speaking, "entrapment" feels to me like "it is too dangerous to fight the real criminals, let's get some innocent but stupid person into trouble instead, and it will look the same in our crime-fighting statistics".
Well, that's (obviously?) not what I mean.
I elaborated more on the Facebook post linked above.
Replies from: Viliam↑ comment by Viliam · 2020-08-15T16:19:06.943Z · LW(p) · GW(p)
Well, that's (obviously?) not what I mean.
I agree, but that seems to be how the idea is actually used in real life. By people other than you. By people who get paid when they catch criminals... which creates an incentive for them to increase easy-to-solve criminality rather than reduce it, as long as they find plausibly deniable methods to do it.
In theory, if you could create "traps" in a way that does not increase temptation (because increased temptation = increased crime), for example on a street already containing hundred unlocked bikes you would add dozen unlocked trap bikes... yeah, there is probably no downside to that.
In practice, if you allow this, and if you start rewarding people for catching thieves using the traps, they will get creative. Because a trap that follows the spirit of the law does not maximize the reward.
↑ comment by Dagon · 2020-08-06T17:15:51.298Z · LW(p) · GW(p)
I think the setup you describe (unambiguously show criminal intent in likely situations) is _already_ allowed in most jurisdictions. "entrapment" implies setting up the situation in such a way that it encourages the criminal behavior, rather than just revealing it.
Replies from: gworley↑ comment by Gordon Seidoh Worley (gworley) · 2020-08-07T00:56:06.299Z · LW(p) · GW(p)
IANAL, but this sounds right to me. It's fine if, say, the police hide out at a shop that is tempting and easy to rob and encourage the owner not to make their shop less tempting or easy to rob so that it can function as a "honeypot" that lets them nab people in the act of committing crimes. On the other hand, although the courts often decide that it's not entrapment, undercover cops soliciting prostitutes or illegal drugs are much closer to being entrapment, because then the police are actively creating the demand for crime to supply.
Depending on how you feel about it, I'd say this suggests the main flaw in your idea, which is that it will be abused on the margin to catch people who otherwise would not have committed crimes, even if you try to circumscribe it such that the traps you can create are far from causing more marginal crime, because incentives will push for expansion of this power. At least, that would be the case in the US, because it already is.
comment by Mati_Roy (MathieuRoy) · 2020-07-12T16:04:12.456Z · LW(p) · GW(p)
People say we can't bet about the apocalypse. But what about taking on debt? The person who thinks the probability of apocalypse is higher would accept a higher interest rate on their debt, since by the time repayment comes due there might be no one to whom the money is worth anything, or the money itself might not be worth much.
I guess there are also reasons to want more money during a global catastrophe, and there are also reasons to not want to keep money for great futures (see: https://matiroy.com/writings/Consume-now-or-later.html), so that wouldn't actually work.
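Setting aside those caveats, here's a minimal sketch of the interest-rate logic from the first paragraph, treating repayment as worthless in apocalypse worlds (numbers are made up):

```python
# Minimal sketch, hypothetical numbers: the interest rate at which a borrower who assigns
# probability p to "no one left to repay / money worthless" is indifferent with a lender
# who wants the ordinary risk-free return in expectation.
def breakeven_rate(p_apocalypse: float, risk_free_rate: float = 0.03) -> float:
    """Rate r such that (1 - p) * (1 + r) == 1 + risk_free_rate."""
    return (1 + risk_free_rate) / (1 - p_apocalypse) - 1

print(breakeven_rate(0.10))  # ~0.14: a 10%-doomer can accept ~14% interest
print(breakeven_rate(0.50))  # ~1.06: a 50%-doomer can accept ~106% interest
```

So in principle the accepted rate would reveal the borrower's apocalypse probability, modulo the caveats above about how much money is worth in catastrophe vs. great-future worlds.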
comment by Mati_Roy (MathieuRoy) · 2020-05-01T05:17:00.265Z · LW(p) · GW(p)
meta idea - LessWrong could have people predict whether they will upvote a post based just on the title
Replies from: zachary-robertson, mark-xu↑ comment by Past Account (zachary-robertson) · 2020-05-05T03:15:50.238Z · LW(p) · GW(p)
[Deleted]
↑ comment by Mark Xu (mark-xu) · 2020-05-04T04:00:05.294Z · LW(p) · GW(p)
I think this breaks because it results in people upvoting based on the title. I recall some study about how people did things they predicted they would do with higher than like 60% chance almost 95% of the time (numbers made up, think I remember direction/order of magnitude of effect size roughly correctly, don't know if it survived the replication crises)
Replies from: MathieuRoy↑ comment by Mati_Roy (MathieuRoy) · 2020-05-04T04:18:57.972Z · LW(p) · GW(p)
That's potentially a good point! But it doesn't say how the causality works. Maybe the prediction affects the outcome or maybe they're just bad at predicting / modelling themselves.
comment by Mati_Roy (MathieuRoy) · 2020-04-24T17:44:00.788Z · LW(p) · GW(p)
There's a post, I think by Robin Hanson on Overcoming Bias, that says people care about what their peers think of them, but we can hack our brains to doing awesome things by making this reference group the elite of the future. I can't find this post. Do you have a link?
comment by Mati_Roy (MathieuRoy) · 2020-03-30T09:00:24.560Z · LW(p) · GW(p)
Personal Wiki
might be useful for people to have a personal wiki where they take notes, instead of everyone taking notes in private Gdocs
status: to do / to integrate
comment by Mati_Roy (MathieuRoy) · 2023-07-07T15:54:40.687Z · LW(p) · GW(p)
you know those lists about historical examples of notable people mistakenly saying that some tech will not be useful (for example)
Elon Musk saying that VR is just a TV on your nose will probably become one of those ^^
https://youtube.com/shorts/wYeGVStouqw?feature=share
Replies from: MakoYass, niplav, sinclair-chen↑ comment by mako yass (MakoYass) · 2023-07-07T16:44:45.406Z · LW(p) · GW(p)
This wasn't him taking a stance. It ends with a question, and it's not a rhetorical question, he doesn't have a formed stance. Putting him in a position where he feels the need to defend a thought he just shat out about a topic he doesn't care about while drinking a beer is very bad discourse.
Replies from: MathieuRoy↑ comment by Mati_Roy (MathieuRoy) · 2023-07-07T18:53:17.407Z · LW(p) · GW(p)
ok that's fair yeah! thanks for your reply. I'm guessing a lot of those historical quotes are also taken out of context actually.
↑ comment by Sinclair Chen (sinclair-chen) · 2023-07-09T09:00:02.563Z · LW(p) · GW(p)
earbuds are just speakers in your ears. they're also way better than speakers.
Replies from: sinclair-chen↑ comment by Sinclair Chen (sinclair-chen) · 2023-07-09T09:07:16.759Z · LW(p) · GW(p)
true virtual reality requires not just speakers in your ears and tv in your eyes, but also good input at the speed of thought. maybe it's just eye, face, hand, and limb tracking. but idk, i feel like headsets have still not found their mouse and keyboard. the input is so ... low bandwidth? maybe Apple will figure it out, or maybe we need to revive Xerox back from the grave
comment by Mati_Roy (MathieuRoy) · 2023-07-04T15:34:52.906Z · LW(p) · GW(p)
idea: Stream all of humanity's information through the cosmos in the hope that an alien civ reconstructs us (and defends us against an Earth-originating misaligned ASI)
I guess finding intelligent ETs would help with that as we could stream in a specific direction instead of having to broadcast the signal broadly
It could be that misaligned alien ASIs would mostly ignore our information (or at least not use it to, like, torture us) whereas a friendly aligned ASI would use it beneficially 🤷♀️
Replies from: MathieuRoy↑ comment by Mati_Roy (MathieuRoy) · 2023-07-07T13:45:25.599Z · LW(p) · GW(p)
related concept: https://en.wikipedia.org/wiki/Information_panspermia
video on this that was posted ~15 hours ago: https://www.youtube.com/watch?v=K4Zghdqvxt4
comment by Mati_Roy (MathieuRoy) · 2023-04-30T15:31:06.063Z · LW(p) · GW(p)
Topics: cause prioritization; metaphor
note I took on 2022-08-01; I don't remember what I had in mind, but I feel like it can apply to various things
from an utilitarian point of view though, i think this is almost like arguing whether dying with a red or blue shirt is better; while there might be an answer, i think it's missing the point, and we should focus on reducing risks of astronomical disasters
↑ comment by TLW · 2021-11-11T04:03:40.702Z · LW(p) · GW(p)
An interesting perspective.
It is instructive to consider the following four scenarios:
1. The Kolmogorov complexity of the state of your mind after N timesteps in a simulation with a featureless white plane.
2. The Kolmogorov complexity of the state of your mind after N timesteps in a simulation where you are in a featureless plane, but the simulation injects a single randomly-chosen 8x8 black-and-white bitmap into the corner of your visual field. (256 bits total.)
3. The Kolmogorov complexity of the state of your mind after N timesteps in a simulation with "random" noise that's actually the output of, say, a CSPRNG with 256b of initial state.
4. The Kolmogorov complexity of the state of your mind after N timesteps in a simulation with "truly" random noise.
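A crude way to build intuition for these scenarios (not actual Kolmogorov complexity, which is uncomputable) is to compare how well a generic compressor does on the three kinds of input stream; a toy sketch, with arbitrary sizes:

```python
# Toy illustration: zlib as a crude, computable stand-in for description length.
# Compare three streams of simulated sensory input: constant, pseudo-random from a
# small seed, and "truly" random.
import hashlib
import os
import zlib

N = 1000  # number of 32-byte "frames" of simulated input

constant = b"\x00" * 32 * N                                        # scenario 1: featureless
prng = b"".join(hashlib.sha256(b"seed" + i.to_bytes(4, "big")).digest()
                for i in range(N))                                 # scenario 3: PRNG-like
true_random = os.urandom(32 * N)                                   # scenario 4: truly random

for name, stream in [("constant", constant), ("prng", prng), ("true_random", true_random)]:
    print(f"{name:12s} raw={len(stream)} compressed={len(zlib.compress(stream, 9))}")

# zlib squeezes the constant stream down to almost nothing, but compresses the PRNG
# stream no better than the truly random one -- even though the PRNG stream's true
# description length is roughly just the seed plus the generator. That gap between
# apparent randomness and true descriptive complexity is what separates scenarios 3 and 4.
```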
comment by Mati_Roy (MathieuRoy) · 2021-05-08T19:03:20.044Z · LW(p) · GW(p)
If crypto makes the USD go to 0, will life insurance policies denominated in USD have nothing to pay out? Maybe an extra reason for cryonicists to own some crypto
x-post: https://www.facebook.com/mati.roy.09/posts/10159482104234579
comment by Mati_Roy (MathieuRoy) · 2020-10-04T05:05:23.570Z · LW(p) · GW(p)
A Hubble Brain: a brain taking all the resources present in a Hubble-Bubble-equivalent.
related: https://en.wikipedia.org/wiki/Matrioshka_brain#Jupiter_brain
x-post: https://www.facebook.com/mati.roy.09/posts/10158917624064579
Replies from: avturchin↑ comment by avturchin · 2020-10-04T11:57:31.561Z · LW(p) · GW(p)
But it will take 15 billion years for it to finish one thought
Replies from: MathieuRoy↑ comment by Mati_Roy (MathieuRoy) · 2020-10-04T16:08:14.034Z · LW(p) · GW(p)
haha, true ^^
comment by Mati_Roy (MathieuRoy) · 2020-10-02T22:01:02.671Z · LW(p) · GW(p)
I want to look into roleplay in animals, but Google is giving me animal roleplay, which is interesting too, but not what I'm looking for right now 😅
I wonder how much roleplay there is in the animal kingdom. I wouldn't be surprised if there was very little.
Maybe if you're able to roleplay, then you're able to communicate?? Like, roleplay might require a theory of mind, because you're imagining yourself in someone else's body.
Maybe you can teach words to an animal without a theory of mind, but they'll be more like levers for them: for them, saying "banana" is like pressing a lever that gives you bananas rather than communicating the concept of banana to someone else.
And once you have a theory of mind, then this might unlock the ability to roleplay. If you have a culture of roleplay, then I guess that basically already gives you a basic "sign" language from which you gradually build abstractions as you climb the intelligence ladder. I guess, at first the cultural and genetic selection might be (a lot?) related to the size of the vocabulary, and we can model roleplaying as a (proto-)language.
Although a quick web search tells me that children develop language before a theory of mind; but maybe they wouldn't come up with it even if they can learn it. And maybe "theory of mind" is more of a spectrum.
This is an idea that bloomed from talking to David Krueger about the progress of humans' learning abilities, and zir pointing out that before language you might have roleplay.
x-post: https://www.facebook.com/mati.roy.09/posts/10158914397244579
comment by Mati_Roy (MathieuRoy) · 2020-09-27T18:50:36.228Z · LW(p) · GW(p)
I remember someone in the LessWrong community (I think Eliezer Yudkowsky, but maybe Robin Hanson or someone else, or maybe only Rationalist-adjacent; maybe an article or a podcast) saying that people believing in "UFOs" (or people believing in unproven theories of conspiracy) would stop being so enthusiastic about those if they became actually known as true with good evidence for them. does anyone know what I'm referring to?
Replies from: Ruby, MathieuRoy↑ comment by Mati_Roy (MathieuRoy) · 2020-09-27T19:31:12.932Z · LW(p) · GW(p)
ah, someone found it:
"If You Demand Magic, Magic Won't Help", where he says at one point: "The worst catastrophe you could visit upon the New Age community would be for their rituals to start working reliably, and for UFOs to actually appear in the skies. What would be the point of believing in aliens, if they were just there, and everyone else could see them too? In a world where psychic powers were merely real, New Agers wouldn't believe in psychic powers, any more than anyone cares enough about gravity to believe in it." https://www.lesswrong.com/s/6BFkmEgre7uwhDxDR/p/iiWiHgtQekWNnmE6Q [? · GW]
comment by Mati_Roy (MathieuRoy) · 2020-09-24T11:47:38.380Z · LW(p) · GW(p)
sometimes I see people say "(doesn't) believe in science" when in fact they should say "(doesn't) believe in scientists"
or actually actually "relative credence in the institutions trying to science"
x-post: https://www.facebook.com/mati.roy.09/posts/10158892685484579
comment by Mati_Roy (MathieuRoy) · 2020-09-24T09:51:57.174Z · LW(p) · GW(p)
hummm, I think I prefer the expression 'skinsuit' to 'meatbag'. feels more accurate, but am not sure. what do you think?
x-post: https://www.facebook.com/mati.roy.09/posts/10158892521794579
comment by Mati_Roy (MathieuRoy) · 2020-09-24T09:36:03.276Z · LW(p) · GW(p)
I just realized my System 1 was probably anticipating our ascension to the stars to start in something like 75-500 years.
But actually, colonizing the stars could be millions of subjective years away if we go through an em phase (http://ageofem.com/). On the other hand, we could also have finished spreading across the cosmos in only a few subjective decades if I get cryopreserved and the aestivation hypothesis is true (https://en.wikipedia.org/wiki/Aestivation_hypothesis).
comment by Mati_Roy (MathieuRoy) · 2020-09-18T05:57:09.050Z · LW(p) · GW(p)
I created a Facebook group to discuss moral philosophies that value life in and of itself: https://www.facebook.com/groups/1775473172622222/
comment by Mati_Roy (MathieuRoy) · 2020-09-18T05:35:14.279Z · LW(p) · GW(p)
How to calculate subjective years of life?
- If the brain is uniformly sped up (or slowed down), I would count this as proportionally more (or less)
- Biostasis would be a complete slow down, so wouldn't count at all
- I would not count unconscious sleeping or coma
- I would only count dreaming if some of it is remembered ([more on this](https://www.lesswrong.com/posts/F8iTtzSgxRmmcCrWE/for-what-x-would-you-be-indifferent-between-living-x-days?commentId=FbfeG2mZPew3enNPF) [LW(p) · GW(p)])
For non-human animal brains, I would compare them to the baseline of individuals in their own species.
For transhumans that had their mind expanded, I don't think there's an obvious way to get an equivalence. What would be a subjective year for a Jupiter brain?
Maybe it could be in terms of information processed, but in that case, a Jupiter brain would be living A LOT of subjective time per objective time.
Ultimately, given I don't have "intrinsic" diminishing returns on additional experience, the natural definition for me would be the amount of 'thinking' that is as valuable. So a subjective year for my future Jupiter brain would be the duration for which I find that experience as valuable as a subjective year now.
Maybe that could even account for diminishing value of experience at a specific mind size because events would start looking more and more similar?? But it otherwise wouldn't work for people that have "intrinsic" diminishing returns on additional experience. It would notably not work with people for whom marginal experiences start becoming undesirable at some point.
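A minimal sketch of the "proportional to speed-up" bookkeeping from the list above (ignoring the Jupiter-brain complications), with made-up numbers:

```python
# Minimal sketch: subjective years as conscious time weighted by brain speed-up.
# Each interval is (objective_years, speed_up_factor, conscious); numbers are made up.
intervals = [
    (30.0, 1.0, True),    # ordinary waking life
    (10.0, 0.0, False),   # dreamless sleep / coma: doesn't count
    (50.0, 0.0, False),   # biostasis: complete slow-down, doesn't count
    (5.0, 100.0, True),   # uploaded and running 100x faster
]

def subjective_years(intervals) -> float:
    return sum(t * speed for t, speed, conscious in intervals if conscious)

print(subjective_years(intervals))  # 30 + 500 = 530 subjective years
```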
Replies from: avturchin↑ comment by avturchin · 2020-09-18T12:59:27.818Z · LW(p) · GW(p)
Interestingly, an hour in childhood is subjectively equal to somewhere between a day and a week in adulthood, according to a recent poll I made. As a result, the middle of a human life in terms of subjective experience is somewhere in the teenage years.
Also, experiences of an adult are more dull and similar to each other.
Tim Urban tweeted recently: "Was just talking to my 94-year-old grandmother and I was saying something about how it would be cool if I could be 94 one day, a really long time from now. And she cut me off and said “it’s tomorrow.” The "years go faster as you age" phenomenon is my least favorite phenomenon."
comment by Mati_Roy (MathieuRoy) · 2020-09-02T02:43:39.266Z · LW(p) · GW(p)
The original Turing test has a human evaluator.
Other evaluators I think would be interesting include: the AI passing the test, a superintelligent AI, and an omniscient maximally-intelligent entity (except without the answer to the test).
Thought while reading this thread.
comment by Mati_Roy (MathieuRoy) · 2020-05-09T06:47:49.630Z · LW(p) · GW(p)
Blocking one ear canal
Category: Weird life optimization
One of my ear canals is shaped differently. When I was young, my mother would tell me that this one was harder to clean, and that ze couldn't see my eardrum. This ear gets wax accumulation more easily. A few months ago, I decided to let it block.
Obvious possible cognitive bias is the "just world bias": if something bad happens often enough, I'll start to think it's good.
But here are benefits this has for me:
- When sleeping, I can put my good ear on the pillow, and this now isolates me from sound pretty well. And it isn't uncomfortable, unlike alternatives (i.e. earmuffs, a second pillow, my arm).
- I'm only using one ear, so that if I become hard of hearing when older (a common problem I think), then I can remove the wax from my backup ear.
This even makes me wonder whether having one of your ear canals shaped differently is something that's actually been selected for.
Other baffling / really surprising observation: My other ear is also blocked on most mornings, but just touching it a bit unblocks it.
Replies from: MathieuRoy↑ comment by Mati_Roy (MathieuRoy) · 2020-05-09T07:25:33.682Z · LW(p) · GW(p)
x-posting someone's comment from my wall:
- Earplugs?
- Age-related hearing loss isn't caused only by exposure to noise. https://ghr.nlm.nih.gov/condition/age-related-hearing-loss
- https://www.hearingaiddoctors.com/news/how-earwax-can-cause-permanent-hearing-loss-1454118945825.html
comment by Mati_Roy (MathieuRoy) · 2021-11-01T18:21:58.075Z · LW(p) · GW(p)
topic: fundamental physics
x-post from YouTube
comments on Why No One Has Measured The Speed Of Light
2 thoughts
- maybe that means you could run a simulation of the universe without specifying c (just like you don't have to specify "up/down"); maybe saying the speed of light is twice as big in one direction as in the other is like saying everything is twice as big as we thought: it's a meaningless statement because they are defined relative to each other
- if the universe looks the same in all directions, yet one side of the universe we see as it currently is whereas the other side we see as it was billions of years ago, then it seems reasonable that we would expect both sides to look more different than they actually do. or maybe it would mean there's a big bang wave rapidly propagating in the direction from which we're receiving the instantaneous light, which would explain why we see cosmic background radiation from that direction as well