LessWrong 2.0 Reader
My own interpretation of how UDT deals with anthropics (and I'm assuming ADT is similar) is "Don't think about indexical probabilities or subjective anticipation. Just think about measures of things you (considered as an algorithm with certain inputs) have influence over."
(Speculative paragraph, quite plausibly this is just nonsense.) Suppose you have copies A and B who are both offered the same bet on whether they're A. One way you could make this decision is to assign measure to A and B, then figure out what the marginal utility of money is for each of A and B, then maximize measure-weighted utility. Another way you could make this decision, though, is just to say "the indexical probability I assign to ending up as each of A and B is proportional to their marginal utility of money" and then maximize your expected money. Intuitively this feels super weird and unjustified, but it does make the "prediction" that we'd find ourselves in a place with high marginal utility of money, as we currently do.
(Of course "money" is not crucial here, you could have the same bet with "time" or any other resource that can be compared across worlds.)
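For concreteness, here is a minimal sketch (in Python, with made-up numbers) of the two decision procedures described in the speculative paragraph above. It assumes equal measure on the two copies, in which case "indexical probability proportional to marginal utility of money" picks the same action as measure-weighted utility maximization; every value here is hypothetical.

```python
# Illustrative only: made-up measures, marginal utilities, and bet terms.
# The bet: pay `cost` up front; receive `payoff` if you turn out to be copy A.
cost, payoff = 1.0, 1.8

measure = {"A": 0.5, "B": 0.5}           # assumption: equal measure on the copies
marginal_utility = {"A": 3.0, "B": 1.0}  # utils per unit of money (made up)

money_change = {"A": payoff - cost, "B": -cost}

# Procedure 1: maximize measure-weighted utility.
weighted_utility = sum(measure[c] * marginal_utility[c] * money_change[c]
                       for c in ("A", "B"))
accept_1 = weighted_utility > 0

# Procedure 2: treat indexical probability as proportional to marginal
# utility of money, then maximize expected money.
total = sum(marginal_utility.values())
prob = {c: marginal_utility[c] / total for c in ("A", "B")}
expected_money = sum(prob[c] * money_change[c] for c in ("A", "B"))
accept_2 = expected_money > 0

print(accept_1, accept_2)  # both True here: the two procedures agree
```

With equal measures, the measure-weighted utility is just a positive constant times the expected money under those reweighted probabilities, so the two procedures accept or reject the same bets; with unequal measures you'd need the probabilities to track measure times marginal utility for the equivalence to hold.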
I would say that under UDASSA, it's perhaps not super surprising to be when/where we are, because this seems likely to be a highly simulated time/scenario for a number of reasons (curiosity about ancestors, acausal games, getting philosophical ideas from other civilizations).
Fair point. By "acausal games" do you mean a generalization of acausal trade? (Acausal trade is the main reason I'd expect us to be simulated a lot.)
slapstick on Thoughts on seed oil
I had just searched on Google for ways to make olives edible and got some mixed results. The point I was trying to make was that the way olives are typically processed to make them edible results in a product that isn't particularly healthy, at least relatively speaking, because of the isolated chemical(s) added to it during processing.
The main thing I'm trying to say is that eating an isolated component of a food we're well adapted to eat, and/or adding isolated or refined components to that food, will generally make it less healthy than it would be if we ate the whole food rather than isolated parts.
I think that process, and more complex variations of it, is essentially what people are referring to when they talk about processed foods. I think it's a generally reasonable term with a solid basis.
ann-brown on eggsyntax's Shortform
Basically yes; I'd expect animal rights to increase somewhat if we developed perfect translators, but not fully jump.
And for the last part, yes, I'm thinking of current systems. LLMs specifically have a 'drive' to generate reasonable-sounding text, and they aren't necessarily coherent individuals or groups of individuals that will give consistent answers about their interests even if they also happen to be sentient, intelligent, suffering, flourishing, and so forth. We can't "just ask" an LLM about its interests and expect the answer to soundly reflect its actual interests. A possible exception is constitutional AI systems, since they reinforce a single sense of self, but even Claude Opus currently will toss off "reasonable completions" of questions about its interests that it doesn't actually endorse in more reflective contexts. Negotiating with a panpsychic landscape that generates meaningful text in the same way we breathe air is ... not as simple as negotiating with a mind that fits our preconceptions of what a mind 'should' look like and how it should interact with and utilize language.
This was all kinda rambly but I think I can summarize it as "Isn't it weird that ADT tells us that we should act as if we'll end up in unusually important places, and also we do seem to be in an incredibly unusually important place in the universe? I don't have a story for why these things are related but it does seem like a suspicious coincidence."
I'm not sure this is a valid interpretation of ADT. Can you say more about why you interpret ADT this way, maybe with an example? My own interpretation of how UDT deals with anthropics (and I'm assuming ADT is similar) is "Don't think about indexical probabilities or subjective anticipation. Just think about measures of things you (considered as an algorithm with certain inputs) have influence over."
This seems to "work" but anthropics still feels mysterious, i.e., we want an explanation of "why are we who we are / where we're at" and it's unsatisfying to "just don't think about it". UDASSA does give an explanation of that (but is also unsatisfying because it doesn't deal with anticipations [LW(p) · GW(p)], and also is disconnected from decision theory).
I would say that under UDASSA, it's perhaps not super surprising to be when/where we are, because this seems likely to be a highly simulated time/scenario for a number of reasons (curiosity about ancestors, acausal games, getting philosophical ideas from other civilizations).
slapstick on Thoughts on seed oil
I don't know enough to dispute the ratios of animal products eaten by people in the paleolithic era, but it's still certainly true that throughout our evolutionary history plants made up the vast majority of our diets. The introduction of animal products as a significant part of our diet is a relatively recent thing.
The fact that fairly recently in our evolutionary history humans adapted to be able to exploit the energy and nutritional content of animal products well enough to get past reproductive age is by no means overwhelming evidence that saturated fats "can't possibly be bad for you".
Although the connection between higher-fat diets and negative health outcomes is then another inferential step that hasn't been strongly supported.
How would you define strongly supported?
We don't have differential analysis of the resulting health
There is archaeological evidence showing atherosclerosis in Arctic peoples who subsisted on meat.
habryka4 on This is Water by David Foster Wallace
Mod note: I clarified the opening note a bit more, to make the start and nature of the essay more clear.
nonveumann on Thoughts on seed oil
If it can be harvested dry or cannot be harvested dry?
rotatingpaguro on Changes in College Admissions
After the events of April 2024, I cannot say that for Columbia or Yale. No, just no.
What are these events?
eggsyntax on eggsyntax's Shortform
Horses are surely sentient and worthy of consideration as moral patients. Horses are also not exactly all free citizens.
I think I'm not getting what intuition you're pointing at. Is it that we already ignore the interests of sentient beings?
Additional consideration: Do the AI moral patient's interests actually line up with our intuitions? Will naively applying ethical solutions designed for human interests potentially make things worse from the AI's perspective?
Certainly I would consider any fully sentient being to be the final authority on their own interests. I think that mostly escapes that problem (although I'm sure there are edge cases) -- if (by hypothesis) we consider a particular AI system to be fully sentient and a moral patient, then whether it asks to be shut down, asks to be left alone, or asks for humans to speak to it only in Aramaic, I would take that to be its moral interest.
Would you disagree? I'd be interested to hear cases where treating the system as the authority on its interests would be the wrong decision. Of course in the case of current systems, we've shaped them to only say certain things, and that presents problems; is that the issue you're raising?
mesaoptimizer on Lucie Philippon's Shortform
The main part of the issue was actually that I was not aware I had internal conflicts. I just mysteriously felt less emotion and motivation.
Yes, I believe that one can learn to entirely stop even considering certain potential actions as actions available to oneself. I don't really have a systematic solution for this right now, aside from some form of Noticing practice (I believe a more refined version of this practice is called Naturalism [? · GW], but I don't have much experience with it).