I think the issue is with the "get what I want" part. Isn't this treating people as a means to an end, instead of treating them as ends in themselves?* (I think that Kant would not be happy - though I don't know of anything that has been written on LessWrong about this.)
If you are talking to another person and you are trying to convince them to adopt a certain view of you, that is not what I would call truth-oriented. So whether you specifically lie, omit, or whatever is already secondary. If your goal is to have an honest interaction with another being, I don't think you can, in that interaction, want to edit their perception of you (apart from misunderstandings etc.).
I'd say that the way you achieve your goal is to become what you want to be seen as. This is, of course, harder than just lying, but in a way it takes less effort, too.
Plus, you avoid another important pitfall I could see here: lying to yourself about wanting a connection with a person who doesn't share your values. If you have to lie to fit in with them, maybe not fitting in with them is a good thing, and you should pay attention to that. In this way, the impulse to lie may be useful in the same way as the tiny voice telling you that you are confused.
(The following is just about the effort it takes to lie vs. telling the truth. Not really required for the core idea, read if you wish^^)
Imagine what insane effort it would take to lie all the time but try to be perceived as being honest! While "just" being honest is hard in a different way, on subtler and subtler levels, I at least was freed of a lot of the mental overhead that lying brings with it. (Sure, part of that was replaced by the mental habits of self-checking, but still, way less. I don't have to worry about what I may have said at some point if I don't remember. I will see what I would say now, and unless I have acquired new information or insight, this will probably approximate what I said then. If I am also honest about this process, my self-perceived fault of imperfect memory isn't too bad anymore. This can never work with lying, because you need to keep tabs on what you told whom, how they may have gained additional information, etc.)
*(The fact that you specified the gender of the other person also implies a certain degree of "means to an end" to me (yes, even without knowing your gender) unless you are talking about one specific situation and nothing else. But that may just as well be wrong.)
"Would I be willing to publicly defend this as a situation in which unusually honest people should lie, if somebody posed it as a hypothetical?" Maybe that just gets turned into "It's permissible to lie so long as you'd be honest about whether you'd tell that lie if anyone asks you that exact question and remembers to say they're invoking the meta-honesty code," because people can't process the meta-part correctly.
Thank you for this direct contrast! It gave me the opportunity to understand why you added this part in the first place.
(The difference between the statements seemed obvious enough, but engaging with the difference, I think I now understand why you specifically say "willing to publicly defend ... if someone posed it as a hypothetical?" - because that thought process is needed for your counterfactual selves to feel safe with you, basically.
Speaking, for a moment, as though all realities equally exist: If you do not check that box, and someone asks you a hypothetical that describes a counterfactual self's actual circumstances, in which they believed that unusually honest people should lie, you will not think to defend it, thereby putting you roughly in the situation of "I only Glomarize if I have something to hide". (This is much less precise than your essay, obviously, but I needed to phrase it in a way that is natural to me, to check whether my understanding is actually present.))
Somehow neither spoiler is working...
:::spoiler
Doesn't that run into the same issue as Harry in HPMoR with his experiment with time? Namely, that there are a lot of scenarios in which you never (get to) open the second box. Easiest case: you died. But also probably any number of other results (it gets stolen, for example).
:::
I notice that your answer confuses me. My understanding is as follows:
Your choice doesn't change where you exist. In the situation you describe, not opening the second box doesn't actually improve your situation (being simulated), and I would expect it to go the same way (being shut down).
I agree with the reasoning that you must be in a simulation, but I fail to see how your choice actually changes things, here.
You already exist in one reality (and potentially in a simulation), and you are only finding out in which one you are. So, isn't the only thing you are preserving by not opening the second box your lack of knowledge?
Opening the box doesn't transport you to a different reality, it just either means that your understanding of Omega was incomplete, or that you were in a simulation. But, if you are in a simulation, no matter how you decide, you still are in a simulation.
(I must admit that I said yes without realizing the implications, because I didn't credit the omniscience sufficiently.)
What did I miss?
Extra Credit:
I tell someone else that I got Newcomb'ed and that I will sell them the two boxes for X amount. Neither of us knows what will happen, since I actually won't open either box, but considering there is an omniscient Omega in the world, obtaining that information should be worth more than the money that may or may not be in the boxes.
(To ensure that it doesn't count as opening the boxes by proxy, we settle on a fixed price. I think 250+ could be reasonably negotiated, both considering the value ranges known from Omega, and the potential value of the information.)
Then again, it may really not be a good idea to go out of bounds.
(I quote "'Do not mess with time.' in slightly shaky hand writing."^^)
On the other hand, if Omega meant harm... Well, this is a longer debate, I think.
In regard to the point you disagree on: As I understood it, (seemingly) linear relationships between the behaviour and the capabilities of a system don't need to stay that way. For example, I think that Robert Miles was recently featured in a video on Computerphile (YouTube) in which he described how the answers of LLMs to "What happens if you break a mirror?" actually got worse with more capability.
As far as I understand it, you can have a system that behaves in a way which seems completely aligned, and which still hits a certain level of (let's call it) "power" at which it starts behaving in a way that is not aligned (and/or becomes deceptive). The fact that GPT-4 seems to be more aligned may well be because it hasn't hit this point yet.
So, I don't see how the point you quoted would be an indicator of what future versions will bring, unless they can actually explain what exactly made the difference in behaviour, and how it is robust in more powerful systems (with access to their own code).
If I'm mistaken in my understanding, I'd be happy about corrections (:
Thank you for everything you did. My experience in this world has been a lot better since I discovered your writings, and while I agree with your assessment of the likely future, and I assume you have better things to spend your time on than reading random comments, I still wanted to say that.
I'm curious to see what exactly the future brings. Whilst the result of the game may be certain, I can't predict the exact moves.
Enjoy it while it lasts, friends.
(Not saying give up, obviously.)
Thank you for pointing out the difference between braking and stopping pedaling.
I read it, continued, and then got confused about you saying that your practice didn't leave "an empty silence".
I'm going to try what you described, because I may have gotten to that silence by braking habitually when I was younger, instead of just not putting energy into it.
Might I ask what kind of recovery you were talking about? And how it came to be?
I can very much empathize with having to loop thoughts to keep them, and if there's something that you did to improve your memory, I'd be extremely interested in trying it. Even accepting that I don't know whether it will work for me, it's still way better than having no approach.
I'm glad that you got better!
Hi! Questions about volunteering follow:
"They will only be expected to work either before, after or during the event while joining sessions is still often feasible."
Could I get a rephrasing of that? I'm not certain whether the options of before/during/after are (or can be) exclusive, and I am also unclear on what is meant by "joining sessions is still often feasible".
I am happy to help, but I would like to know how much of the time during the event (if any) would be, basically, not the event^^
Best regards
This sounds like a case of "wrong" perspective. (Whoa, what?! Yes, keep reading pls^^)
Like someone who believes (they believe) in Nihilism. I haven't thought of a good and correct counter-statement to Nihilism, except:
"You are simply wrong on all accounts, but by such a small amount that it's hard to point to, because it will sound like »You don't have a right to your own perspective«." (Of course, I also would not agree with disallowing personal opinions (as long as we ARE talking about opinions, not facts).)
Granted, I haven't tried to have that kind of discussion since I really started reading and applying the Sequences. But that may be due to my growing habit of not throwing myself into random and doomed discussions that I don't have a stake in.
But for Bruce, I think I can formulate it:
I am aware of the fact that I still don't allow myself to succeed sometimes. I have recently found that I stand before a barrier that I can summarize as a negative kind of sunk cost fallacy ("If I succeed here, I could have just done that ten years ago"), and I still haven't broken through yet.*
But... Generalizing this kind of observation to "We all have this Negativity-Agent in our brain" feels incorrect to me. It both obscures the mistake and makes it seem like there is a plan to it.
If I think "Okay, you just detected that thought-pattern that you identified as triggering a bad situation, now instead do X" I feel in control, I can see myself progress, I can do all the things.
If I think "Damn, there's Bruce again!", not only do I externalize the locus of control, I am also "creating" an entity, that can then rack up "wins" against me, making me feel less like I can "beat" them.
It's not an agent. It's a habit that I need to break. That's a very different problem!
I assume that people will say "Bruce is a metaphor". But, provided I have understood correctly, the brain is very prone to considering things as agents (e.g. nature gods, "The System", the whole bit about life being (not) fair, ...), so feeding it this narrative would seem like a bad idea.
I predict that it will be harder to get rid of the problem, once one gives it agency and/or agenthood. (Some might want an enemy to fight, but even there I take issue with externalizing the locus of control.)
[*In the spirit of "Don't tell me how flawed you are, unless you also tell me how you plan to fix it", I am reading through Fun Theory to defuse it (yes, first read, I am not procrastinating with "need to read more"):
For me it's: I don't want to do X, I want to do something enjoyable, Y. And then, when I do Y, I drift into random things that often aren't all that enjoyable, but just continue the status quo. All the while, X is beginning to loom, accruing negative charge and triggering avoidance routines. But if I do X instead, I don't know how to allow myself to take breaks without sliding into the above pattern. So I intend to optimize my fun and expand the area of things that I find fun. That reorientation should help me with dosing it, too. (And yes, I do have ADHD, in case you read it out of the text and were wondering if you should point me there ^^)
Also, I recently discovered a belief (in a belief...) that I like to learn: I realized that I really don't like learning. I like understanding, but what I call "learning" has a very negative connotation, so I barely do it. I will discover how to effectively facilitate understanding, too. ]
I hope that you are not still struggling with this, but for anyone else in this situation: I would think that you need to change the way you set your goals. There is loads of advice out there on this topic, but there are a few rules I can recall off the top of my head:
- "If you formulate a goal, make it concrete and achievable, make the path clear, and if possible decrease the number of steps required." In your case, every one of the subgoals already had a lot of required actions, so the overarching goal of "publish a book" might be too broadly formulated.
- "If at all possible, don't use external markers for your goals." What apparently usually happens is that either you drop all your good behaviour once you cross the finish line, or your goal becomes/reveals itself to be unreachable and you feel like you can do nothing right (seriously, the extent to which this happens... incredible.), etc.
- "Focus more on the trajectory than on the goal itself." Once you get there, you will want different things and what you have learned and acquired will just be normal. There is no permanent state of "achieving the goal", there is the path there, and then the path past it.
Very roughly speaking.
All the best.
If I may recommend a book that might make you shift your non-AI related life expectancy: Lifespan by Sinclair.
Quite the fascinating read, my takeaway would be: We might very well not need ASI to reach nigh-indefinite life extension. Accidents of course still happen, so in a non-ASI branch of this world I currently estimate my life expectancy at around 300-5000 years, provided this tech happens in my lifetime (which I think is likely) and given no cryonics/backups/...
(I would like to make it clear that the author barely talks about immortality, more about health and life span, but I suspect that this has to do with decreasing the risk of not being taken seriously. He mentions, e.g., millennia-old organisms as ones to "learn" from.)
Interestingly, the increased probability of non-ASI-dependent immortality automatically and drastically impacts the importance of AI safety, since a) you are way more likely to be around when it hits (a bit selfish, but whatever), b) we may actually have the opportunity to take our time (not saying we should drag our feet), so the benefit from taking risks sinks even further, and c) if we get an ASI that is not perfectly aligned, we actually risk our immortality instead of standing to gain it.
All the best to you, looking forward to meeting you all some time down the line.
(I am certain that the times and locations mentioned by HJPEV will be realized for meet-ups, provided we make it that far.)
It seems to me that the agents you are considering don't have as complex a utility function as people, who seem to at least in part consider their own well-being as part of their utility function. Additionally, people usually don't have a clear idea of what their actual utility function is, so if they want to go all-in on it, they let some values fall by the wayside. AFAIK this limitation is not a requirement for an agent.
If you had your utility function fully specified, I don't think you could be considered both rational and also not a "holy madman". (This borders on my answer to the question of free will, which so far as I can tell, is a question that should not explicitly be answered, so as to not spoil it for anyone who wants to figure it out for themselves.)
Suffice it to say that optimized/optimal function should be a convergent instrumental goal, similar to self-preservation, and a rational agent should thereby have it as a goal. If I am not mistaken, this means that a problem in work-life balance, as you put it, is not something that an actual rational agent would tolerate, provided there are options to choose from that don't include this problem and have a similar return otherwise.
Or did I misinterpret what you wrote? I can be dense sometimes...^^
An idea that might be both unsustainable and potentially dangerous, but also potentially useful, is to have someone teach as a final test. Less an exam and more a project (with oversight?). Of course, these trainees could be authentic or disguised testers.
Problems with this idea (non-exhaustive):
- Rationality doesn't necessarily make you good at teaching.
- Teaching the basics badly is likely to have negative effects on the trainee.
- This could potentially be gamed by reformulated regurgitation.
So... what behaves differently in the presence of Rationality? I like Brennan's idea of time pressure, though he himself demonstrates that you don't need to have finished training for it, and it doesn't really hit the mark.
Or: What requires Rationality? Given Hidden Knowledge (may only require facts that are known, but not to them), one could present new true facts that need to be distinguished from new well-crafted falsehoods (QM anyone?^^). This still only indicates, but it may be part of the process. If they game this by studying everything, and thinking for themselves, and coming to correct conclusions, I think that counts as passing the test. Maybe I am currently not creative enough though. This test could also be performed in isolation, and since time would probably be a relevant component, it would likely not require huge amounts of resources to provide this isolation. Repeat tests could incorporate this (or seemingly incorporate it) too.
If you wanted to invest more effort, you could also specifically not isolate them, but put them in a pressured situation (again, I am being influenced by memories of a certain ceremony. But it is simply really good.) This doesn't have to be societal pressure, but this kind at least makes rash decisions less likely to be costly.
I can't really formulate the idea concretely, but: A test inspired by some of ye olden psychology experiments might provide double yield by both testing the rationality of the person in question and also disabuse them of their trust. Though I can see a lot of ways this idea could go awry.
An issue that most if not all of my tests run into is that they limit what could be taught, since it is still part of the test. This is a problem that should be solved, not just because it irritates me, but because it also means that random chance could more easily change the results.
This is, I think, because so far all the tests check for the correct answer. This, in itself, may be the wrong approach, since we are trying to test techniques which have an impact on the whole person, not "just" their problem solving. I would, for example, hope that a crisis situation would on average benefit from the people involved being trained in rationality, not just in regards to the problem solving itself, but also in their emotional response, the ability to see the larger picture, prioritization, initial reaction speed, and so on.
(Maybe having them devise a test is a good test...^^ Productive, too, on the whole.)
(I can think of at least one problem of yours that I still haven't solved, though I therefore can't say whether or not my not-solving-it is actually showing a lack of rationality[though it's likely], or rather depends on something else. Not sure if I should mention it, but since you (thankfully) protect the answer, I don't think that I need to. This, still, is asking for a correct answer though.)
That's all I can think of for now. Though I am not really satisfied... Do I need to be "at a higher level" to be able to evaluate this, since I don't yet fully grasp what it is that should be tested? Seems like either an option or a stop sign...
"If there's any basis whatsoever to this notion of "continuity of consciousness"—I haven't quite given up on it yet, because I don't have anything better to cling to—then I would guess that this is how it works."
Why "cling to"? It all adds up to normality, right? What you are saying sounds like someone resisting the "winds of evidence" (in this case added complexity, I am guessing).
I tried to come up with ways to explain my observations of consciousness, but they all seem incomplete too, so far. But I don't see how that impacts your argument here. I'm not saying "stop asking". I just don't see the reason to "cling" to this "notion of continuity".
And if you think there is a reason, and I don't see it, I am somewhat worried.
Best regards
"I would sooner associate experience with the arrows than the nodes, if I had to pick one or the other! I would sooner associate consciousness with the change in a brain than with the brain itself, if I had to pick one or the other.
This also lets me keep, for at least a little while longer, the concept of a conscious mind being connected to its future Nows, and anticipating some future experiences rather than others. Perhaps I will have to throw out this idea eventually, because I cannot seem to formulate it consistently; but for now, at least, I still cannot do without the notion of a 'conditional probability'."
I wrote a long comment, but the main question is: Why do you guess an answer here, instead of "Shut up and calculate"?
Why are/were you favouring the hypothesis? Considering what I have read so far from you, I find it more likely that I have missed something than that there is no reason, but I can't find it...
Well, intelligence doesn't equate to skill. It's probably easier to acquire skills (like mental math) with high intelligence, but no matter the intelligence, you still need to learn them.
P(easy learning | high intelligence) may be higher than P(easy learning | not high intelligence) for a given subject (e.g. mental math), but P(mental math) is not primarily dependent on the ease of learning [otherwise P(mental math | no easy learning) would be low] but rather on actually learning/training it: P(mental math | no learning) is pretty low.
So people who learn mental math may do so at different speeds or with different levels of difficulty, but I would guess that whether they learn it at all depends more on educational context, curiosity, or need (etc.) than on ease of learning.
But, if your self-assessment is correct, and my mentioned assumptions are as well, you should be able to remedy the predicament relatively easily ;)
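(To make the direction of that comparison concrete, here is a minimal sketch with entirely made-up numbers; the specific values and the separation into "training" vs. "intelligence" effects are my assumptions, only the ordering of the ratios matters:)

```python
# Hypothetical probabilities, invented purely to illustrate the claim above:
# actually training the skill dominates, while intelligence mostly changes
# how easy the training is, not whether the skill exists without it.

p_skill_given_training = 0.85     # P(mental math | learned/trained it)
p_skill_given_no_training = 0.05  # P(mental math | no learning)

# Without training, the skill stays unlikely regardless of intelligence:
p_skill_high_iq_no_training = 0.07
p_skill_avg_iq_no_training = 0.04

print(p_skill_given_training / p_skill_given_no_training)
# ~17x: training moves the probability a lot
print(p_skill_high_iq_no_training / p_skill_avg_iq_no_training)
# <2x: intelligence alone moves it much less
```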
The finale was a specific instance of two people who were in a very unusual situation. You cannot "just" KO a powerful wizard. The whole reason that worked was the restrictions that arose from this situation.
If someone was able to KO Lord Voldemort in a confrontation in which he was allowed to use magic, I assume they would afterwards be able to perform rituals to change their mind, similar to how Bellatrix was broken.
Also... I mean, you can just kill them at that point. Being also able to change their mind doesn't seem like that much of an additional burden.
In regard to how the world looks: We have already been told several ways to overthrow the Ministry of Magic. Since nobody has bothered to do so, I would assume that the same logic proposed to answer the question of "why not" in the story also applies to this, right?
I would also like to add that the spells in this world are not at all balanced. As was also noted in HPMoR: The False Memory Charm should be unforgivable. And I think Obliviate is pretty close to that, too.
Agreed.
However... Rowena was not an Occlumens? I doubt that she would not have learned it, even without knowing that this problem might arise later. And creating the Diadem is likely to be harder than learning Occlumency... Unless there are people who just can't learn it, and she was one of them. Seems like a stretch. Or could she never use it?
Also: if the Founders had all the requirements already, why didn't they finish it back then?
I enjoy the story, however there are at least two points here that I can't just let pass:
-
How would Finite reawaken an unconscious person? Isn't it more plausible that Stupefy simply knocks you out, rather than the theory that seems to be put forth here, that the hex is actually an enchantment that persists for some amount of time?
-
A Finite has to be at the same strength as or stronger than the spell it is dispelling. I believe that the ring had several protections put on it at the end of the first year. A standard Finite, from Lockhart of all people, should not suffice to overpower something that Harry and Moody thought up together. Especially considering that the protections were probably cast by Moody himself, and General Chaos had had to consider Finite in battle for months at that point.
I don't think that these quotes imply that Occlumency is a perfect shield of the mind. IIRC it specifically counters any attempt to divine whether or not someone believes they are telling the truth.
To completely Obliviate someone, you don't need to read their mind first; you are just setting all the 1s to 0s. (Apparently somehow only hitting memory and not vital processes.)
I don't see a problem then. If you want to find out specific information, you cannot use Legilimency unless you first somehow break the person. And you will never know whether you actually did if they are a perfect Occlumens, so this doesn't work.
It would have to be tested what exactly happens if you try to delete something from memory that isn't there in the first place, and also what actually happens when you try to delete a part of the Occlumency barrier. If I had to guess, though, you would literally just cut a hole in a pretend personality. So if the Occlumens has, e.g., two layers active plus their "base", they could have each layer hold (or pretend to hold) the knowledge of how to do Occlumency. If you now delete it from the top layer, you will just be let into the next one, which is being adapted to represent the changes. Also, I would expect the Occlumens to create another layer.
This also assumes that you actually see the whole personality the Occlumens pretends to be, which is not actually correct, IIRC. You only see surface thoughts. Which would probably muddle this even further... Still, I don't see a problem there.
Did I miss something obvious?
Spoilers for HPMoR (though I guess that's a given at this point):
c) How does she miss the fact that the map shows two Tom Riddles walking down the passageway to the graveyard? That should not be easy to miss. It says "it was tangled and confused", but it would still show the name twice, and if she can read the one, it stands to reason she could read the other. Especially since we have precedent from HPMoR of both being in the same room without the map being confused.
Actually, "the poor are lazy" and "animal rights" seem to fit, to me. Animal rights were a hard sell for me, but thinking about it, I had to come to the conclusion that the bottom line "we should treat animals well" was probably motivated either by "I don't want to eat sick food" or by "Awww, cuute!". Not by "I believe that animals in general need rights, because..." Because what? They react faster to stimuli than plants? They show complex behaviour? In that case, do you not kill mosquitos? Do you want rights for some fungi as well? How about programs that show complex behaviour? It seems like this was written after the bottom line.
Similarly, since we do not live in an equal world, simply saying "the poor are lazy" makes sense if your motivation is to not feel guilty about not trying to help them.
Alternative energies, however... I think time proved our dear OP wrong on that front. We may not need to use any one of these specifically, but we need to get away from fossil fuels, and until we have fusion or solar farms in orbit, alternative energies are the longest-term option. Even nuclear runs out of fuel in a relatively short amount of time.
It's part of the worship option, I would say, as he would otherwise be contradicting himself when he outlines the actual Explain option.
Edit: Whoops, the next comment is by Eliezer addressing this question but I'm going to leave this here for ease of reading.
It seems to me that some reasons may be: a lot of interesting comments have been made, which may intimidate; the posts are often very concise; and since the posts are so old, one may expect the page to be dead.
Personally I don't think I have read any "new" articles so far^^
"If so, then as you study the skills of rationality and train yourself not to deny facts, your feelings will become stronger."
How does one go about this?
I have begun reading everything I can find by you on this page - I will probably also read other things, but it seems a foundation by (one of) the founders would be useful.
Still, while I see the ideas presented as very useful, I find myself wondering how to actually go about implementing them. Take any one thing as an example here, such as "Making Beliefs Pay Rent". (I hope you are not annoyed by this Outside The Box-Box^^)
One way of doing this seems to be to simply read or think about it over and over until I have the thought ingrained into my mind's commonly used pathways, so as to give me more opportunities to actually work on my beliefs/implement these ideas into my day.
This seems inefficient, even though I don't know if starting something like that simply is inefficient.
Another way would probably be to sit down somewhere and try to let your beliefs flow through you while watching for inconsistencies.
This, however, appears to me unlikely to actually work; in my experience I start to either drift and/or (probably) miss most things.
So, how does one study the skills of rationality and train oneself (not to deny facts/X/Y/..)?
If I missed something obvious and this annoys you, I hope I get an answer before you delete this, and I want you to know that I would feel sad about having annoyed you without offering you my submission.