LessWrong 2.0 Reader
I think you are slightly muddling your phrases.
You are richer if you can afford more goods and better goods. But not all goods will necessarily change price in the same direction. It's entirely possible that you become richer, and yet food prices grow faster than your new income. (For example, imagine that your income doubles, that food prices also double, but prices of other things drop so that inflation remains zero. You can afford more non-food stuff, and the same amount of food, so you are richer overall. This could happen even if food prices had gone up faster than your income.)
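A minimal numeric sketch of that example, with made-up basket weights (30% of spending on food, 70% on everything else), just to make the arithmetic concrete:

```python
# Toy illustration (assumed numbers) of the example above: income and food
# prices both double, and non-food prices fall just enough that the overall
# weighted price index (inflation) stays flat.

food_share, other_share = 0.3, 0.7   # assumed spending weights

income_mult = 2.0       # income doubles
food_price_mult = 2.0   # food prices double

# Non-food price change that keeps the overall price index at 1.0:
other_price_mult = (1.0 - food_share * food_price_mult) / other_share

print(f"non-food prices:         x{other_price_mult:.2f}")               # ~x0.57
print(f"food you can afford:     x{income_mult / food_price_mult:.2f}")  # x1.00
print(f"non-food you can afford: x{income_mult / other_price_mult:.2f}") # ~x3.50
```

With those (hypothetical) weights you end up able to buy the same amount of food and about 3.5 times as much of everything else, i.e. richer overall even though food prices kept pace with your income.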
I think a (slightly cartoony) real life example is servants. Rich people today are richer than rich people in Victorian times, but fewer rich people today (in developed countries) can afford to have servants. This is because the price of hiring servants has gone up faster than the incomes of these rich people. So it is possible for people to get richer overall, while at the same time some specific goods or services become less accessible.
Maybe a more obvious example is rent (or housing in general). A modern computer programmer in Silicon Valley could well be paying a larger percentage of their income on housing than a medieval peasant did. But they can afford more of other things than that peasant could.
ustice on Taking into account preferences of past selves
After 5 years, I think experience matters more.
ustice on What if Ethics is Provably Self-Contradictory?
Given the state of AI, I think AI systems are more likely to infer our ethical intuitions by default.
ustice on If digital goods in virtual worlds increase GDP, do we actually become richer?
You’re basically talking about the software industry. Meta isn’t special. Considering how big the video game industry is, not to mention digital entertainment and business software, I don’t think we have anything to worry about there.
richard_kennaway on If digital goods in virtual worlds increase GDP, do we actually become richer?
There is a metaverse already. It's called Second Life and has been around for more than 20 years. Never huge, but never going away. It has a marketplace of virtual goods that residents of Second Life have created. The market deals in "Linden dollars", which can be both bought with real dollars and sold for real dollars.
But look at a few random prices at that Marketplace link. The exchange rate is stable at about L$250 = $1. A skirt for L$399 = $1.60. A massage table (with built-in animations) for L$1698 = $7. (Three times that for the version with built-in sex animations.) A tattoo for L$299 = $1.20. The most expensive car currently on the marketplace is L$50,000 = $200, but there are also plenty selling for under $1.
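As a quick sketch, those conversions all follow from the quoted rate of roughly L$250 per US dollar:

```python
# Converting the Marketplace prices quoted above, assuming the roughly
# stable exchange rate of L$250 = US$1.

RATE = 250  # Linden dollars per US dollar

items = {
    "skirt": 399,
    "massage table": 1698,
    "tattoo": 299,
    "most expensive car": 50_000,
}

for name, linden in items.items():
    print(f"{name}: L${linden} = ${linden / RATE:.2f}")
# skirt $1.60, massage table $6.79, tattoo $1.20, car $200.00
```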
There are only a very few people who have made a living from selling things in Second Life. The number of spectacular successes might be countable on the fingers of one finger.
While I love Second Life, I do not see an economy of this sort growing to become a substantial part of the total economy. What, after all, is the value of these digital goods? They are decoration for an immersive social space, and game assets for recreational use within that space. They do have value, but the marketplace shows what that value is: $200 for a top-end virtual car.
torekp on When is a mind me?
Suppose someone draws a "personal identity" line to exclude this future sunrise-witnessing person. Then if you claim that, by not anticipating, they are degrading the accuracy of the sunrise-witness's beliefs, they might reply that you are begging the question.
linda-linsefors on A Review of In-Context Learning Hypotheses for Automated AI Alignment Research
I disagree. In verbal space MARS and MATS are very distinct, and they look different enough to me.
However, if you want to complain, you should talk to the organisers, not one of the participants.
Here is their website: MARS — Cambridge AI Safety Hub
(I'm not involved in MARS in any way.)
I mean, if the universe is big enough for every conceivable thing to happen, then we should notice that we find ourselves in a surprisingly structured environment, and we need to assume some sort of effect whereby, when a cognitive architecture opens its eyes, it opens them in different places with a likelihood corresponding to how common those places are (e.g., among all Turing machines).
I.e., if your brain is uploaded, and you see a door in front of you, and when you open it 10 identical computers each start running a copy of you, 9 showing a green room and 1 showing a red room, then you should expect that when you enter a room and open your eyes, in 9/10 of cases you’ll find yourself in a green room.
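A toy simulation of that copy-counting intuition, under the (assumed) rule that you anticipate as a uniformly sampled copy:

```python
import random

# Toy model of the door example: 10 copies start running, 9 wake in a green
# room and 1 in a red room. Anticipating "as a uniformly sampled copy" just
# means the chance of seeing green equals the fraction of copies that see green.

def sample_room():
    rooms = ["green"] * 9 + ["red"]
    return random.choice(rooms)

trials = 100_000
green = sum(sample_room() == "green" for _ in range(trials))
print(f"P(green) ≈ {green / trials:.3f}")  # ≈ 0.900
```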
So if that is the situation we’re in (everything happens), then I think a more natural way to rescue our values would be to care about what cognitive algorithms usually experience when they open their eyes or other senses. Do they suffer, or do they find all sorts of meaningful beauty in their experiences? I don’t think we should stop caring about suffering just because it happens anyway, if we can still have an impact on how common it is.
If we live in a naive MWI, an IBP agent doesn’t care, for good reasons internal to it (somewhat similar to how, if we’re in our world, an agent that cares only about ontologically basic atoms doesn’t care about our world, for good reasons internal to it), but I think conditional on a naive MWI, humanity’s CEV is different from what IBP agents can natively care about.
torekp on When is a mind me?
I have a closely related objection/clarification. I agree with the main thrust of Rob's post, but this part:
Presumably the question xlr8harder cares about here isn't the semantic question of how linguistic communities use the word "you"...
Rather, I assume xlr8harder cares about more substantive questions like: (1) If I expect to be uploaded tomorrow, should I care about the upload in the same ways (and to the same degree) that I care about my future biological self? (2) Should I anticipate experiencing what my upload experiences? (3) If the scanning and uploading process requires destroying my biological brain, should I say yes to the procedure?
...strikes me as confused or at least confusing.
Take your chemistry/physics tests example. What does "I anticipate the experience of a sense of accomplishment in answering the chemistry test" mean? Well, for one thing, it certainly indicates that you believe the experience is likely to happen (to someone). For another, it often means that you believe it will happen to you - but that invites the semantic question that Rob says this isn't about. For a third - and I propose that this is a key point that makes us feel there is a "substantive" question here - it indicates that you empathize with this future person who does well on the test.
But I don't see how empathizing or not-empathizing can be assessed for accuracy. It can be consistent or inconsistent with the things one cares about, which I suppose makes it subject to rational evaluation, but that looks different from accuracy/inaccuracy.
simon-lermen on Creating unrestricted AI Agents with Command R+
I think that is a fair categorization. I think it would be really bad if some super strong tool-use model gets released and nobody had any idea beforehand that this could lead to really bad outcomes. Crucially, I expect future models to be able to remove their own safety guardrails as well. I really try to think about how these things might positively affect AI safety; I don't want to just maximize for shocking results. My main intention was almost to have this as a public service announcement that this is now possible. People are often behind on the SOTA, and most people are probably not aware that jailbreaks can now literally produce these "Bad Agents". In general, 1) I expect people being more informed to have a positive outcome, and 2) I hope that this will influence labs to be more thoughtful with releases in the future.