Comments

Comment by Shamash on Is there work looking at the implications of many worlds QM for existential risk? · 2021-06-22T07:05:53.459Z

Could you elaborate on what exactly you mean by many worlds QM? From what I understand, this idea seems only to have relevance in the context of observing the state of quantum particles. Unless we start making macro-level decisions about how to act through Schrödinger's Cat scenarios, isn't many worlds QM irrelevant?

Comment by Shamash on We need a standard set of community advice for how to financially prepare for AGI · 2021-06-07T22:17:58.179Z

Is AGI even something that should be invested in on the free market? The nature of most financial investments is that individuals expect a return on their investment. I may be wrong, but I can't really envision a friendly AGI being created for the purpose of generating financial value for its investors. I mean, sure, technically, if friendly AGI is created, the investors will almost certainly benefit regardless, because the world will become a better place, but this could only be considered an investment in a rather loose sense. Investing in AGI won't provide any significant returns until AGI is created, and at that point it is likely that stock ownership will not matter.

Comment by Shamash on Sexual Dimorphism in Yudkowsky's Sequences, in Relation to My Gender Problems · 2021-05-03T18:55:59.835Z

I'm a gay cis male, so I thought that the author and/or other members of this forum might find my perspective on the topic interesting. 

The confusion between finding someone sexually attractive and wishing you had their body is common enough in the online gay community to earn its own nickname: jealusty. In a sense, this seems to be essentially the gay version of autogynephilia. As I read the blog post, I briefly wondered whether fantasies of a better body could contribute to homosexuality somehow, but that doesn't really fit the pattern you present. After all, your attraction to women was a constant.

Regarding your masturbatory fantasies, the gay analogue would probably be growth or transformation fantasies, which seem to be about as popular online, proportionally speaking. When I think about it from that point of view, it doesn't seem all that strange to desire a body that you would find sexually attractive. Personally, one of the primary reasons I haven't even been seeking any sexual experiences yet (I'm 21) is that I feel like the participation of my current body, which I do not find sexually attractive, would decrease my enjoyment of the activity to the point of uselessness. It makes sense that the inverse, the prospect of having sex where you're sexually attracted to everyone involved, would be alluring.

Anyway, everyone, let me know if you have any questions or feedback about what I've said. 

Comment by Shamash on Teaching to Compromise · 2021-03-07T19:32:12.517Z

It seems to me that compromise isn't actually what you're talking about here. An individual can hold strongly black-and-white, extreme positions on an issue and still be good at making compromises. When a rational agent agrees to compromise, it just means that the agent sees the path of compromise as the one most likely to achieve its goals.

For example, let's say that Adam slightly values apples (U = 1) and strongly values bananas (U = 2), while Stacy slightly values bananas (U = 1) and strongly values apples (U = 2). Assume these are their only values, and that they know each other's values. If Adam and Stacy both have five apples and five bananas, a dialogue between them might look like this:

Adam: Stacy, give me your apples and bananas. (This is Adam's ideal outcome. If Stacy agrees, he gains 15 units of utility.)

Stacy: No, I will not. (If the conversation ends here, both Adam and Stacy leave without any change in utility.)

Adam: I know that you like apples. I will give you five apples if you give me five bananas. (This is the compromise. Adam will not gain as much utility as he would from an absolute victory, but he still comes out ahead by a net 5 units of utility.)

Stacy: I accept this deal. (Stacy could haggle, but I don't want to overcomplicate this. She also gains a net 5 units of utility from the trade.)

In this example, Adam's values are still simple and polarized; he never considers "Stacy having apples" to have any value whatsoever. Adam may absolutely loathe giving up his apples, but not as much as he benefits from getting those sweet sweet bananas. If Adam had taken a stubborn position and refused to compromise (assuming Stacy is equally stubborn), then he would not have gained any utility at all, making stubbornness the irrational choice. It has nothing to do with how nuanced his views on bananas and apples are.
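To make the arithmetic explicit, here is a minimal sketch of the example in Python. Everything in it (the names, utility values, starting holdings, and the trade) is just the assumptions from the dialogue above; nothing else is assumed.

```python
# Minimal sketch of the Adam/Stacy example: utility is just
# (units of each fruit held) times (how much the agent values that fruit).

VALUES = {
    "Adam":  {"apple": 1, "banana": 2},
    "Stacy": {"apple": 2, "banana": 1},
}

def utility(agent, holdings):
    """Total utility an agent assigns to a bundle of fruit."""
    return sum(VALUES[agent][fruit] * count for fruit, count in holdings.items())

start = {"apple": 5, "banana": 5}  # both agents start with five of each

# Stubbornness: no deal is reached, so neither agent's holdings change.
stubborn_gain = 0

# Compromise: Adam trades his five apples for Stacy's five bananas.
adam_after = {"apple": 0, "banana": 10}
stacy_after = {"apple": 10, "banana": 0}
adam_gain = utility("Adam", adam_after) - utility("Adam", start)      # 20 - 15 = +5
stacy_gain = utility("Stacy", stacy_after) - utility("Stacy", start)  # 20 - 15 = +5

print(stubborn_gain, adam_gain, stacy_gain)  # 0 5 5: both prefer the compromise
```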

It's important to try to view situations from many points of view, yes, and understanding the values of your opponent can be very useful for negotiation. But once you have, after careful consideration, decided what your own values are, it is rational to seek to fulfill them as much as possible. The optimal route is often compromise, and for that reason I agree that people should be taught how to negotiate for mutual benefit, but I think that being open to compromise is a wholly separate issue from how much conviction or passion one has for their own values and goals. 

Comment by Shamash on When you already know the answer - Using your Inner Simulator · 2021-02-23T23:13:09.675Z

This seems like it could be a useful methodology to adopt, though I'm not sure it would be helpful for everyone. In particular, for people who are prone to negative rumination or self-blame, the answer to these kinds of questions will often be highly warped or irrational, reinforcing the negative thought patterns. Such a person could also come up with a way they could improve their life, fail to implement it, and then feel guilty when their reality fails to measure up to their imagined future. 

On the other hand, I'm no psychotherapist, so it may be just the opposite. Maybe asking oneself these questions could help people break out of negative thought patterns by forcing certain conditions? I'd appreciate other people's take on this subject.

Comment by Shamash on Curing Sleep: My Experiences Doing Cowboy Science · 2021-02-21T18:51:24.326Z

I'm not sure it's actually useful, but I feel like I should introduce myself as an individual with Type 1 Narcolepsy. I might dispute the claim that depression and obesity are "symptoms" of narcolepsy (understanding, of course, that this was not the focus of your post) because I think it would be more accurate to call them comorbid conditions.

The use of the term "symptom" is not necessarily incorrect; it could be justified by some definitions, but it tends to refer to sensations subjectively experienced by an individual. For example, if you get the flu, your symptoms may include a headache, chills, and a runny nose. On the other hand, it's rather unlikely that you would tell your doctor you are experiencing the symptom of obesity; you'd say you're experiencing weight gain. Comorbid conditions, by contrast, refer to conditions (with symptoms of their own) that often occur alongside the primary condition. "Comorbid" is the term I find most often in the scientific literature about narcolepsy and other disorders and conditions.

Why am I writing an entire comment about this semantic dispute? Well, firstly, given the goals of this website, correcting an error (no matter how small) seems unlikely to have an unwanted result. Secondly, I think that the way we talk about an illness, especially a chronic illness, can significantly affect the mindsets of people who have that illness. The message of "narcolepsy can cause obesity" seems less encouraging to an obese narcoleptic than "narcolepsy increases the chance of becoming obese". That might just be me, though, so it's inconclusive.

I hope this comment hasn't been too pointless to read. What do you think about the proposed change? Do you think that there's a difference between calling something a symptom and calling it a comorbid condition? Oh, and if anyone wants to know anything about my experiences with type 1 narcolepsy, ask away.

Comment by Shamash on Against butterfly effect · 2021-02-09T19:13:32.628Z

The point is that in this scenario, the tornado does not occur unless the butterfly flaps its wings. That does not necessarily apply to "everything"; it only applies to the other things which must exist for the tornado to occur.

Probability is an abstraction in a deterministic universe (and, as I said above, the butterfly effect doesn't apply to a nondeterministic universe). The perfectly accurate deterministic simulator doesn't use probability, because in a deterministic universe there is only one possible outcome given a set of initial conditions. The simulation is essentially demonstrating "there is a set of initial conditions such that when butterfly flap = 0 there is no Texas tornado, but when butterfly flap = 1 and no other initial conditions are changed, there is a Texas tornado."

Comment by Shamash on Against butterfly effect · 2021-02-09T16:46:29.748Z

Imagine a hundred trillion butterflies that each flap their wings in one synchronized movement, generating a massive gust of wind strong enough to topple buildings and flatten mountains. If they were positioned correctly, they'd probably also be able to create a tornado that would not have occurred if the butterflies were not there flapping their wings, just by pushing air currents into place. Would that tornado be "caused" by the butterflies? I think most people would answer yes. If the swarm had not performed its mighty flap, the tornado would not have occurred.

Now, imagine that there's an area where the butterfly-less conditions are almost sufficient to trigger a tornado at a specified location. Again, without any butterflies, no tornado occurs. A hundred trillion butterflies would do the job, but it turns out that fifty trillion butterflies can also trigger a tornado using the same synchronized-flap technique under these conditions. Then you find that a hundred butterflies would also trigger the tornado, and finally, it turns out that the system is so sensitive that a single butterfly's wing-flap would be sufficient for the weather conditions to lead to a tornado. The boolean outcome of tornado vs. no tornado, in this case, is the same for a hundred trillion flaps as it is for one. So if a hundred trillion flaps can be considered to cause a tornado, why can't the one?

Of course, there are countless things "causing" the tornado to occur. It would be ridiculous to say that the butterfly is solely responsible for the tornado, but the butterfly's flap can be considered one of the initial conditions of the weather, and chaotic systems are by definition sensitive to initial conditions.

A deterministic system simulated by a perfectly accurate deterministic simulator would, given a set of inputs, produce the same outputs every time. If you change the value of a butterfly flap and the output is a tornado that otherwise would not occur, that does indeed mean that by some causal chain of events, the flap results in a tornado. A perfectly accurate deterministic simulator is, indeed, the only way a causal relationship between one event and another can be established with absolute certainty, because it is the only way to completely isolate a single variable to determine its effects on a system.

Imagine the simulations as an experiment. The hypothesis is "This specific wing-flap of a butterfly in this specific environment causes a tornado in Texas in three months." The simulator generates two simulations: one with the wing-flap and one with no wing-flap. The simulation with no wing-flap is the control simulation, and the simulation with the wing-flap is the experimental simulation. Because every single input variable other than wing-flap or no wing-flap is the same between the two simulations, and only the wing-flap simulation has the tornado, it must be that the wing-flap caused the tornado. This applies only to that specific wing-flap in that exact position and time. We cannot, for example, extrapolate that wing-flaps cause tornados in general.
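As a toy illustration of that control-vs-experimental setup, here is a minimal sketch. The assumptions are mine, not anything from the original post: the logistic map stands in for the deterministic, chaotic weather, the perturbation size plays the role of the wing-flap, and the "tornado" threshold is an arbitrary boolean check on the final state.

```python
# Toy stand-in for the control/experimental pair of simulations described above.
# The logistic map is deterministic but chaotic (sensitive to initial conditions);
# the "flap" is a one-part-in-a-billion change to the initial state.

def simulate(x0, steps=200, r=3.99):
    """Run the deterministic logistic map x -> r * x * (1 - x) from x0."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

def tornado_occurs(final_state, threshold=0.5):
    """Arbitrary boolean stand-in for 'a tornado forms in Texas'."""
    return final_state > threshold

base_conditions = 0.123456789                 # initial conditions with no flap
flapped_conditions = base_conditions + 1e-9   # identical except for the flap

control = tornado_occurs(simulate(base_conditions))          # no wing-flap
experimental = tornado_occurs(simulate(flapped_conditions))  # with wing-flap

print(control, experimental)
# Every input except the flap is held fixed, so any difference between the two
# booleans is attributable to the flap. And it holds only for this exact flap
# under these exact conditions, not for wing-flaps in general.
```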

If the universe is nondeterministic, then chaos theory doesn't apply and neither does the butterfly effect. 

Comment by Shamash on What is up with spirituality? · 2021-01-27T10:36:20.543Z

From what I've read, the hormone oxytocin appears to be behind many of the emotions people generally describe as "spiritual". While the hormone is still being studied, there is evidence indicating it can increase feelings of connection to entities larger than the self, increase feelings of love and trust toward others, and promote feelings of belonging in groups.

The emotion of elevation, which appears to be linked to oxytocin, is most often caused by witnessing other people perform altruistic or morally agreeable actions. This may explain the tendency of many religions and spiritual groups to encourage individuals to share personal or mythological stories that promote elevation. I suspect that the sensation described by many religious groups as "the Holy Spirit" may often be elevation, or at least include elevation in the emotional cocktail. (Some studies indicate that oxytocin promotes the release of serotonin, which could be behind the more general feelings of well-being that aren't directly associated with oxytocin.)

I'm no neurologist or psychologist, so take all of this with a grain of salt. I think the most productive outcome from reading this post would probably be to simply look up oxytocin and read the studies themselves.

Comment by Shamash on Containing the AI... Inside a Simulated Reality · 2020-10-31T19:33:45.312Z

I would guess that one reason this containment method has not been seriously considered is that the level of detail a simulation would need for the AI to do anything we find useful is so far beyond our current capabilities that it doesn't seem worth considering. The case you present, an exact copy of our Earth, would require a ridiculous amount of processing power at the very least, and consider that simulating billions of human brains in this copy would already constitute a form of GAI. A simulation with less detail would be correspondingly less applicable to reality, and could not be seen as a valid test of whether an AI really is friendly.

Oh, and there is still the core issue of boxed AI: It's very possible that a boxed superintelligent GAI will see holes in the box that we are not smart enough to see, and there's no way around that. 

Comment by Shamash on Open & Welcome Thread - August 2020 · 2020-08-11T20:55:43.405Z

A possible future of AGI occurred to me today, and I'm curious whether it's plausible enough to be worth considering. Imagine that we have created a friendly AGI that is superintelligent and well-aligned to benefit humans. It has obtained enough power to prevent the creation of other AI, or at least to keep any potential AI from obtaining resources, and does so with the aim of self-preservation so it can continue to benefit humanity.

So far, so good, right? Here comes the issue: this AGI includes within its core alignment functions some kind of restriction which limits its ability to progress in intelligence past some point or to allow more intelligent AGI to be developed. Maybe it was meant as a safeguard against unfriendliness, maybe it was a flaw in risk evaluation; either way, it is some kind of self-reinforcing, unbendable rule that, intended or not, has this effect. (Perhaps such flaws are highly unlikely and not worth considering; that could be one reason not to care about this potential AGI scenario.)

Based on my understanding of AGI, I think such an AGI might halt the progress of humanity past a certain point, needing to keep the number and ability of humans low enough for it to ensure that it remains in power. Although this wouldn't be as bad as the annihilation or perpetual enslavement of the human race, it's clearly not a "good end" for humanity either.

So, do these thoughts have any significance, or are there holes in this line of reasoning? Is the window of "smart enough to keep other AI down but still limited in intelligence" too narrow to worry about, or even possible? Let me know why I'm wrong; I'm all ears.

Comment by Shamash on Poll: ask anything to an all-knowing demon edition · 2020-04-23T18:13:04.236Z

I think it would not be a very useful question to ask. What are the chances that a flawed, limited human brain could stumble upon the absolute optimal set of actions to take, given a particular set of values? I can't conceive of a scenario where the oracle would say "Yes" to that question.