Posts

Non-human centric view of existence 2024-09-25T05:47:07.480Z
ZY's Shortform 2024-08-27T05:34:42.732Z

Comments

Comment by ZY (AliceZ) on Politics is the Mind-Killer · 2024-10-12T17:49:50.013Z · LW · GW

I think the title could be a bit more specific, like "involving political parties in science discussions might not be productive", or something similar. If using the word "politics", it would be crucial to define what "politics" means or refers to here. The reason I say this is that "politics" is not just about actual political parties' power dynamics; it also includes general policy making, strategies, and history that aim to help individuals in society, among many other aspects. These other things included in the word "politics" are crucial to consider when discussing many topics (I think this is a little bit similar to precedents in law).

(Otherwise, if this article is about not bringing everything down to the political-party level, I agree. I have observed that, in the US at least, many things are debated along political party lines, and if a political party debates topic A, people reverse-attribute the general social topic A to "political values or ideology", which seems false to me.)

Comment by ZY (AliceZ) on What Do We Mean By "Rationality"? · 2024-10-12T17:07:03.055Z · LW · GW

I think by winning, he meant the "art of choosing actions that lead to outcomes ranked higher in your preferences", though I don't completely agree with this word choice, which could be ambiguous or cause confusion.

A bit unrelated, but as a general comment on this: I believe people generally have unconscious preferences, and knowing/acknowledging these before weighing preferences against each other is very important, even if some preferences are short term.

Comment by ZY (AliceZ) on Open letter to young EAs · 2024-10-11T22:35:04.415Z · LW · GW

I also had similar feelings about the simplicity part, and about how theory/idealized situations and execution could be very different. I also agree on the conflict part (and, to me, many different types of conflicts). And I super strongly support the section on The humans behind the numbers.
(These thoughts still persist after taking intro to EA courses.)

EA's big overall intentions seem good to me, and I am happy/energized to see how passionate people are, compared to no altruism at all at least; but the details/execution are not quite there in my view.

Comment by ZY (AliceZ) on sarahconstantin's Shortform · 2024-10-11T00:11:25.798Z · LW · GW

I have been having some similar thoughts on the main points here for a while and thanks for this.

I guess to me what needs attention is when people do things along the lines of "benefit themselves and harm other people". That harm has a pretty strict definition, though I know we may always be able to give borderline examples. This definitely includes the abuse of power in our current society and culture, and any current risks, etc. (For example, constraining to just AI, with a content warning: https://www.iwf.org.uk/media/q4zll2ya/iwf-ai-csam-report_public-oct23v1.pdf. This is very sad to see.) On the other hand, with regard to climate change (which can also be current) or AI risks, we should probably also be concerned when corporations or developers neglect known risks or pursue science/development irresponsibly. I think it is not wrong to work on these, but I just don't believe in "do not solve the other current risks and only work on future risks."

On some comments that were saying our society is "getting better" - sure, but the baseline is a very low bar (slavery for example). There are still many, many, many examples in different societies of how things are still very systematically messed up.

Comment by ZY (AliceZ) on “She Wanted It” · 2024-10-10T18:40:12.351Z · LW · GW

This is about basic human dignity and respect for other humans, and has nothing to do with politics.

Comment by ZY (AliceZ) on What makes one a "rationalist"? · 2024-10-09T04:45:45.877Z · LW · GW

Oxford Languages (or really, just from googling) says "rational" means "based on or in accordance with reason or logic."

I think there are a lot of other types of definitions (I think LessWrong mentioned it is related to the process of finding truth). For me, first of all, it is useful to break this down into two parts: 1) observation and information analysis, and 2) decision making.

For 1): Truth, but also particularly causality finding. (Very close to the first one you bolded, and I somehow feel many of the other ones are just derived from this one. I added causality because many true observations are not really about causality.)

For 2): My controversial opinion is that everyone is probably/usually a "rationalist" - just sometimes the reasoning is conscious, and other times it is sub/unconscious. These reasonings/preferences are unique to each person. It would be dangerous, in my opinion, if someone tried to practice "rationality" based on external reasonings/preferences, or on reasonings/preferences that are only recognized by the person's conscious mind (even if a preference is short term). I think a useful practice is to 1. notice what one intuitively wants to do vs. what one thinks they should do (or the multiple options being considered), 2. ask why there is a discrepancy, 3. at least surface the unconscious reasoning, and 4. weigh things (the potential reasonings that lead to conflicting results) out.

Comment by ZY (AliceZ) on Shortform · 2024-10-09T01:20:48.043Z · LW · GW

From my perspective - would say it's 7 and 9.

For 7: One AI risk controversy is that we do not know/see an existing model that poses that risk yet. But there might be models that frontier companies such as Google are developing privately, and Hinton may have seen more there.

For 9: Expert opinions are important and generally add credibility, as the question of how/why AI risks can emerge is at root highly technical. It is important to understand the fundamentals of the learning algorithms.

Lastly, for 10: I do agree it is important to listen to multiple sides, as experts sometimes do not agree among themselves. It may be interesting to analyze the background of a speaker to understand their perspective. Hinton seems to have more background in cognitive science compared with LeCun, who seems to me more strictly computer science (but I could be wrong). I am not very sure, but my guess is these backgrounds may affect how they view problems. (I am only saying they could result in different views, not commenting on which is better or worse. This is relatively unhelpful for a person deciding who they want to align with more.)

Comment by ZY (AliceZ) on Open Thread Fall 2024 · 2024-10-07T02:40:39.684Z · LW · GW

(Like the answer on declarative vs. procedural.) Additionally, reflecting on practicing Hanon for piano (which is almost purely a finger strength/flexibility type of practice) - it might also be for physical muscle development and control.

Comment by ZY (AliceZ) on Adverse Selection by Life-Saving Charities · 2024-10-05T22:27:33.294Z · LW · GW

Agree with a lot of the things in this post, including "But implicit in that is the assumption that all DALYs are equal, or that disability or health effects are the only factors that we need to adjust for while assessing the value of a life year. However, If DALYs vary significantly in quality (as I’ll argue and GiveWell acknowledges we have substantial evidence for), then simply minimizing the cost of buying a DALY risks adverse selection. "

Had the same question/thoughts when I did the Introduction to Effective Altruism course as well. 

Comment by ZY (AliceZ) on Why I Define My Experience At the Monastic Academy As Sexual Assault · 2024-10-05T22:10:59.084Z · LW · GW

I came across this post recently, and I want to say I really appreciate you speaking up, fighting for your rights, and raising awareness. Our society does not have proper and successful education on what consent is, and even though at my university we had consent courses during the first week of school, people didn't take them seriously. Maybe they should have added a formal test, so that you cannot enter school unless you pass. This should apply to high school as well. A 2018 article has pointed out how to teach consent at all education stages: https://www.gse.harvard.edu/ideas/usable-knowledge/18/12/consent-every-age

Many sexual assaults happen between partners, and your experience was clearly non-consensual and therefore sexual assault. I am a bit hesitant to comment as I am afraid of reopening any wounds, but I also want to show support and gratitude🙏.

Comment by AliceZ on [deleted post] 2024-10-05T04:37:18.671Z

I am guessing maybe it is the definition of "alignment" that people don't agree on/are mixed on?

Some possible definitions I have seen:

  • (X risks) and/or (catastrophic risks) and/or (current safety risks)
  • Any of above + general capabilities (an example I saw is "how do you get the AI systems that we’re training to optimize the thing that we actually want to optimize" from https://arize.com/blog/openai-on-rlhf/)

And maybe some people don't think it has gotten to solving X risks yet, if they view the definition of alignment as being about X risks only.

Comment by ZY (AliceZ) on DanielFilan's Shortform Feed · 2024-10-04T19:09:56.943Z · LW · GW

My guess is:

  • AI pause: no observation of what safety issues to address, work on capabilities anyway; this may then lead to only capability improvements. (The assumption is that an AI pause means no releasing of models.)
  • RSP: observed O, shift more resources to work on mitigating O and less on capabilities; when protection P is done, publish the model, then shift back to capabilities. (Ideally.)

Comment by ZY (AliceZ) on "Slow" takeoff is a terrible term for "maybe even faster takeoff, actually" · 2024-10-01T18:14:26.205Z · LW · GW

Nice post pointing this out! Relatedly, on misused/overloaded terms - I think I have seen this getting more common recently (including overloaded terms that mean something else in the wider academic community or society; and, self-reflecting, I sometimes do this too and need to improve).

Comment by ZY (AliceZ) on Is Text Watermarking a lost cause? · 2024-10-01T18:05:07.699Z · LW · GW

I like the idea and direction of text watermarks, and more research could be done on how to do this feasibly, as adding watermarks to text seems much harder than for images.

Maybe this is already mentioned in the article and I missed it - have you done any analysis on how these methods affect the semantics/word-choice availability from the author's perspective?
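
(As a hedged aside on what "these methods" often look like: below is a minimal, hypothetical Python sketch of one published family of text-watermarking schemes - hash-seeded "green-list" token biasing - using a toy vocabulary rather than a real language model. All names here, including VOCAB, green_list, and watermark_score, are made up for illustration. It is meant only to make concrete how such a scheme restricts word choice at each step, which is the author-perspective question above; it is not necessarily the method the post studies.)

```python
import hashlib
import random

# Toy vocabulary standing in for an LLM's token set (hypothetical; a real scheme
# would operate on the model's actual tokenizer and logits).
VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "under", "mat", "rug",
         "quickly", "slowly", "happy", "tired", "and", "then", "it", "was"]

GREEN_FRACTION = 0.5  # fraction of the vocabulary marked "green" (favored) at each step


def green_list(prev_token: str) -> set:
    """Seed a PRNG with a hash of the previous token and pick the green subset of the vocab."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2 ** 32)
    rng = random.Random(seed)
    k = int(len(VOCAB) * GREEN_FRACTION)
    return set(rng.sample(VOCAB, k))


def watermark_score(tokens: list) -> float:
    """Detection side: fraction of tokens that fall in the green list induced by their predecessor."""
    if len(tokens) < 2:
        return 0.0
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:]) if tok in green_list(prev))
    return hits / (len(tokens) - 1)


if __name__ == "__main__":
    # At generation time, a watermarking scheme of this family would bias sampling toward
    # green_list(prev) - that bias is exactly the restriction on the author's word choice.
    text = ["the", "cat", "sat", "on", "the", "mat", "and", "it", "was", "happy"]
    print("green-token fraction of this text:", watermark_score(text))
```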

Comment by ZY (AliceZ) on OpenAI o1, Llama 4, and AlphaZero of LLMs · 2024-09-28T22:12:21.722Z · LW · GW

How did you translate the dataset, and what is the translation quality?

Comment by ZY (AliceZ) on How do you feel about LessWrong these days? [Open feedback thread] · 2024-09-27T15:25:27.839Z · LW · GW

I am relatively new to the community, and was excited to join and learn more about the actual methods to address AI risks, and how to think scientifically in general.

However, after using it for a while, I am a bit disappointed. I realized I probably have to filter many things here.

Good:

  • There are good discussions and previous summaries on alignment that are actually useful. There are people who work on these things, from both NGOs and industry, showing what research they are doing or what actions they have taken on safety. Similarly with bioweapons, etc.
  • I also like articles that try to identify the best thing to do at the intersection of passion, skill, and importance.
  • I like the articles that mention/promote contacting reality.

Bad:

  • Sometimes I feel the atmosphere is "edgy", and sometimes I see people arguing over relatively small things where I don't know how the conclusion will lead to actual actions. Maybe this is just the culture here, but I found it surprising how easily people call each other "wrong", although many times I felt both sides were just stating opinions. I also feel I see less "I think" or "in my opinion" to qualify claims than in a usual workplace at least. People appear very confident or sure about their own beliefs when communicating. From my understanding, people may be practicing "strong opinions, weakly held", thinking they can say something strong and change it easily - I found that works better in verbal communication among colleagues, schoolmates, or friends, where one can talk to them (a relatively small group) every day. But on a platform with a lot more people, where tracking opinion changes is hard, it might be more productive to moderate the "strong" opinion part and qualify more in the first place.
  • I do think the downvote/upvote system, which is related to how much you can comment or contribute to the site (or whether you can block a person), encourages groupthink, and you would need to identify with the sentiment of the majority (I think another answer mentioned groupthink as well).
  • I feel many articles or comments are quite personal/non-professional (communication feels different from what I encounter at work), which makes this community a bit confusing, mixing personal and professional opinions/sharing. I think it would be nice to have a professional section and a separate personal section, and also to encourage different communication rules for more efficiency; I guess this could naturally filter some articles for people who, at certain times, want to focus on different things. It could be good to organize articles better by section as well? Though there are "tags" currently.
  • This is a personal belief, but I am a bit biased toward action and hope to see more discussions on how to execute things, or at least on how actions should change based on a proposed belief change.
  • This might be something more fundamental based on personal belief vs. the beliefs of (some but not everyone on) LessWrong - to a certain extent I appreciate prioritization, but when it is too extreme I feel it is 1) counterproductive for solving the issue itself, and 2) so extreme that it discourages newcomers who also want to work on shared issues. It also feels more fear-driven than rationality-driven, which is discrediting in my opinion.
    • For 1, many areas of work are sometimes interrelated, and focusing on only one may not actually achieve the goal.
    • For 2, sometimes it just feels alarming/scary when I see people trying to say "do not let other issues we need to solve get in the way of {AI risks/some particular thing}".
  • What I am sensing (though I am still kind of new here, so I might not have dug enough through articles) is that we may lack some social science connections/backgrounds and understanding of how the world actually works, even when talking about society-related things (I forgot which specific articles gave me this feeling, maybe related to AI governance).

I think for now, I will probably continue using it, but with many, many filters.

Comment by ZY (AliceZ) on Non-human centric view of existence · 2024-09-26T16:42:41.639Z · LW · GW

I am not sure that is good reasoning, though I am also looking for reasoning/justification. The reasoning here seems to say: animals cannot speak our languages, so it is okay to assume they do not want to survive (this is assuming the existence of humans naturally conflicts with other species).

The reasoning I think I am trying to accept is that by nature we seem to want to self-preserve, and perhaps unfortunately much of the altruism we want to do has non-altruistic roots, which may be fine/unavoidable. Maybe it would be good to also consider the earth as a whole/expand our moral circles (when we can), and exhibit less human arrogance. Execution-wise this may be super hard, but thinking-wise there is value in recognizing this aspect.

Comment by ZY (AliceZ) on Non-human centric view of existence · 2024-09-25T17:22:37.718Z · LW · GW

I don't think that is only the viewpoint of the dead (it also seems very individually focused/personal rather than focused on collective species experiment/exploration). This is about thinking critically and from different perspectives for truth-finding, which is related to the definition of rationality on LessWrong (the process of seeking truth).

 I am operating on the assumption that many of us seek true altruism on this platform. I could move this to the effective altruism platform.

Comment by ZY (AliceZ) on Non-human centric view of existence · 2024-09-25T16:17:04.070Z · LW · GW

This sounds like an individual answer (which is still an option), but the question is about the collective.

The question is zooming out from humanity itself and viewing from an out-of-human kind of angle. I think it is an interesting angle, and it reminds us that many things we do may not be as altruistic as we thought.

Also, I think maybe that would mean suffering risks would need more attention.

Ultimately, my answer to this might be: morally, humans do not need to last forever, but we are self-preservation focused, and it is okay to pursue that and to practice altruism whenever we can, either individually or collectively; but when there is conflict with our preservation, how to pursue this "without significantly harming others" is tricky.

Comment by ZY (AliceZ) on Non-human centric view of existence · 2024-09-25T05:36:16.116Z · LW · GW

I do not think they meant anything AI specific, just the general existence of humanity vs. other species.

The question was not about whether humanity will live forever; the original prompt is "why must humans/humanity live/continue forever?", which is in the original question.

Do not feel the need to reply in any way; nothing here is urgent. (Not sure why you would reply while in bad shape or mention that; I initially thought it was related to the topic.)

Comment by ZY (AliceZ) on Non-human centric view of existence · 2024-09-24T02:50:21.479Z · LW · GW

I am not sure this is the point or the focus of the topic, as it is irrelevant to the question. [private]

Also curious - why are you interested in knowing how this question came up? Is that helpful to your answer to this question, and how would it change the answer? I am curious to see how your answers would change depending on the different ways this question may arise.

Comment by ZY (AliceZ) on Non-human centric view of existence · 2024-09-23T23:01:57.334Z · LW · GW

I was talking with a friend about AI risks of overpowering humanity, and then death vs. suffering.

Comment by ZY (AliceZ) on ZY's Shortform · 2024-09-14T18:28:34.699Z · LW · GW

A recent thought on AI racing - it may not necessarily lead to more intelligent models, especially at a time when the low-hanging fruit has been taken and more advanced breakthroughs need to come from longer-term exploration and research. But this does not necessarily mean that AI racing (particularly on LLMs in this context, but I think generally too) is not something to worry about. It may waste a lot of compute/resources to achieve only marginally better models. Additionally, the worst side effect of AI racing, to me, is potential negligence of safety mitigations and the lack of a safety-focused mindset/culture.

Comment by ZY (AliceZ) on Current AIs Provide Nearly No Data Relevant to AGI Alignment · 2024-09-13T03:33:43.623Z · LW · GW

If you meant current LLMs, some of the risks could be misuse of current LLMs by humans, or risks such as harmful content, harmful hallucination, privacy, memorization, bias, etc. For some other models, such as ranking/multiple ranking, I have heard some other worries about deception as well (this is only what I recall hearing, so it might be completely wrong).

Comment by ZY (AliceZ) on Current AIs Provide Nearly No Data Relevant to AGI Alignment · 2024-09-13T03:17:35.000Z · LW · GW

This aligns with my current view. I wanted to add a thought - current LLMs could still have unintended problems/misalignment around factuality, privacy, copyright, or harmful content, which should still be studied/mitigated, together with thinking about other more AGI-like models (we don't know exactly what yet, but they could exist). And an LLM (especially a fine-tuned one), if doing increasingly well on generalization ability, should still be monitored. To be prepared for the future, having a safety mindset/culture is important for all models.

Comment by ZY (AliceZ) on Zach Stein-Perlman's Shortform · 2024-09-11T16:07:40.596Z · LW · GW

I also wish to see more safety papers. My guess, from my experience, is that it might also be that really good quality research takes time, and the papers so far from them seem pretty good. Though I don't know whether they are actively withholding things on purpose, which could also be true - any insider/sources for this guess?

Comment by ZY (AliceZ) on ZY's Shortform · 2024-09-07T19:15:44.374Z · LW · GW

I think this is conditioning on one problem with one goal, but I haven’t thought about the other good collectively (more of a discussion on consequentialism).

On "best of personal ability" - I think the purpose is to distinguish what one can do personally from what one can do by engaging collaboratively/collectively, but it seems I need to think that through better, so that is a good question.

My reason for the "na" is: having intention, with no/not enough execution, where good was done anyway, is more like an accident, which is the same as no intention, no execution, and did good.

Comment by ZY (AliceZ) on ZY's Shortform · 2024-09-06T18:55:33.607Z · LW · GW

When thinking about deontology and consequentialism in application, I find it useful to rate the morality of actions based on intention, execution, and outcome. (Some cells are "na" as they are not really logical in real-world scenarios.)

In reality, it seems to me that having executed "some" intention matters most (though I am not sure how much) when doing something bad, and having executed to the best of one's ability matters most when doing something good.

It also seems useful to me to learn about applications of philosophy from law. (I am not an expert in either philosophy or law, though, so these may contain errors.)

| Intention to kill the person | Executed "some" intention | Killed the person | "Bad" level | Law |
|---|---|---|---|---|
| Yes | Yes | Yes | 10 | murder |
| Yes | Yes | No | 8-10 | as an example, attempted first-degree murder is punished by life in state prison (US, CA) |
| Yes | No | Yes | na | |
| Yes | No | No | 0-5 | no law on this (I can imagine for reasons on "it's hard to prove") but personally, assuming multiple "episodes", or just more time, this leads to murder and attempted murder later anyways; very rare a person can have this thought without executing it in reality |
| No | Yes | Yes | na | |
| No | Yes | No | na | |
| No | No | Yes | 0-5 | typically not a crime, unless something like negligence |
| No | No | No | 0 | |

| Intention to save a person (have limited time) | Executed intention to the best of ability | Saved the person | "Good" level |
|---|---|---|---|
| Yes | Yes | Yes | 10 |
| Yes | Yes | No | 10 |
| Yes | No | Yes | na |
| Yes | No | No | 0-5 |
| No | Yes | Yes | na |
| No | Yes | No | na |
| No | No | Yes | 0-5 |
| No | No | No | 0 |

| Intention to do good (have more time) | Executed intention to the best of personal ability | Did good | "Good" level |
|---|---|---|---|
| Yes | Yes | Yes | 10 |
| Yes | Yes | No | 8-10 |
| Yes | No | Yes | na |
| Yes | No | No | 0-5 |
| No | Yes | Yes | na |
| No | Yes | No | na |
| No | No | Yes | 0-5 |
| No | No | No | 0 |

Comment by ZY (AliceZ) on ZY's Shortform · 2024-09-06T16:34:38.857Z · LW · GW

(Mentioned some of these in our chat, but allow me to be repetitive)

On the first: I don't think efforts to reduce rape or discrimination need 100% abolition, but working towards that currently has huge returns at this point in history. Education has a snowball effect as well. Just because it is hard to achieve 100% does not mean there should not be efforts, nor that it is impossible at all. In fact, it is something that is rarely worked on alone; for example, one strategy might actually be to improve education generally, or reduce economic disparity, and during this process teach people how to respect other people.

On the second: We likely need a good value system to align AI to; otherwise, the only alignment the AI would know is probably just not to overpower the most powerful human. But that does not seem to be the successful outcome of "aligned AI". I think there have been a few posts recently on this as well.

Third: I have seen many people having this confusion/mix-up: rape play/BDSM is consensual, and the definition of rape is non-consensual. Rape is purely about going against the person's will. Comparing it to murder might be more apt, but in this case it is historically one group acting on another group due to biological features and power differences that people cannot easily change, though also a lot of it is men against men. In my view, it is worse than murder because it is extreme suffering, and that suffering will carry through the victims' whole lives, and many may end in suicide anyway.

Otherwise, I am glad to see you thinking about these and open to discussion.

Comment by ZY (AliceZ) on Executable philosophy as a failed totalizing meta-worldview · 2024-09-05T23:35:26.707Z · LW · GW

Reminded me of the complex systems chapter from a textbook (Center for AI Safety)

https://www.aisafetybook.com/textbook/complex-systems

Comment by ZY (AliceZ) on ZY's Shortform · 2024-09-05T06:08:37.282Z · LW · GW

Equity vs equality considerations (https://belonging.berkeley.edu/equity-vs-equality-whats-difference):

  • What caused the differences in outcome? 
    • Historical factors: should definitely apply equity. Conditioning on history is important and corrective efforts are needed.
  • Is the desired outcome a human "necessity"?
    • The definition of necessity may be tricky, or may even differ by culture. Generally in the US, if it is something like healthcare or access to education, we should move towards/apply equity.

Comment by ZY (AliceZ) on Two Neglected Problems in Human-AI Safety · 2024-09-04T16:39:01.388Z · LW · GW

I really appreciate the post; I am wondering if you have had any further thoughts on these since the post was first published? Do you think something like RLHF is now an effective enough approach?

(Could I also get some advice on why the downvote?)

Comment by ZY (AliceZ) on ZY's Shortform · 2024-09-02T15:57:20.689Z · LW · GW

Relatedly on my first reply to the person's DM:

On resource scarcity - I also find it difficult for humans to execute altruism without resources, and it may be a privilege to even think about these things (financially, educationally, health-wise, and information-wise). Would starting with something important, but that may or may not be the "most" important, help? Sometimes I find the process of finding the "most" a bit less rational than people would expect/theorize.

Bias and self-preservation are human nature, but need correction

To expand from there, that is one thing I am worried about. Our views/beliefs are by nature limited by our experiences, and if humans do not recognize or acknowledge that self-preserving nature, we will be claiming altruism but not actually doing it. It will also prevent us from establishing a really good system that incentivizes people to do so.

To give an example, there was a Manifold question (https://manifold.markets/Bayesian/would-you-rather-manifold-chooses?r=enlj) asking whether Manifold would rather "Push someone in front of a trolley to save 5 others (NO) -OR- Be the person someone else pushes in front of the trolley to save 5 others (YES)". When I saw the stats before close, it was 92% (I chose YES and lost; I would say my YES had 85% confidence). While we claim to value people's lives equally, I see self-preservation persist. Then I see that could radiate to preferring the preservation of one's family, friends, people one is more familiar with, etc.

Simple majority vs minority situation

This is similar to majority voting. When the population is 80% group A and 20% group B, assuming equal vote/power for everyone, then on issues where A and B conflict, especially under some sort of historical oppression of B by A, B's concerns would rarely be addressed. (I hope humans nowadays are better than this, and more educated, but who knows.)

Power

When power (social and economic) is added to self-preservation, without a proper social contract, this becomes even more messed up. The decision makers on these problems will always be the people with some sort of power.

This is true in reality, where the people who get to choose/plan what is most important in our society are either the powerful, or the population that outnumbers others. Worse if both. Therefore, we see less passionate caring about minority issues, even on a platform like this one that practices rationality. With power, everything can be placed below it, including other people, policies, and law.

How do we create a world with maximum freedom for people, with the constraint that people do not harm other people, regardless of whether they have power or not/regardless of whether they will get caught or not? What level of "constraint" are historically powerful groups willing to accept? I am not sure of the right balance/what is most effective yet; I have seen cases where some men were reluctant to receive educational materials on consent, claiming they already know, but they don't. This is a very, very small cost, yet there are people who are not willing to take even that. This makes me frustrated.

Regardless, I do believe in the effectiveness of awareness, continuous education, law, and the need to foster a culture to cultivate humans that truly respect other lives.

Real life examples

I was not very aware of many related women's suffering issues until recently, from the Korean pop star group case to the recent female doctor case (https://www.npr.org/sections/goats-and-soda/2024/08/26/g-s1-18366/rape-murder-doctor-india-sexual-violence-protests). I believed that in 2024 these are things humans should have been able to solve already, but maybe I was in my own bubble. And organizationally, at the country level, I believe you will have seen much news on different wars now.

Thanks for sharing the two pages! Not sure if the above is clear or not, but I tried to document all of my thoughts.

Comment by ZY (AliceZ) on ZY's Shortform · 2024-09-02T15:44:17.396Z · LW · GW

Thanks for commenting.

I don't see the need to remove human properties we cherish in order to remove/reduce murder or rape. Maybe war. Rape, especially, is never defensive. That is an extremely linear way of connecting these.

In particular, it is the claim that rape is in any way remotely related to erotic excitement that I found outrageous. That is simply wrong, and an extremely narrow and perpetrator-centered way of thinking. Check out the link in my comment if you want to learn more about myths about rape.

On the "do not care" point - my personal belief is that if one believes suffering risks should not take up resources to be worked on, not only one's own resources but other people's as well, then from a consequentialist point of view/simply practically, I do not see much difference from not caring, or the degree of caring is small/not big enough to agree to an allocation of resources/attention; and in my initial message I had already linked multiple recent cases that have happened. (The person said that working on these is a "dangerous distraction", not just for himself but for "ourselves", which seems to mean society as a whole.) Again, you do not need to emphasize the importance of AI risks by downplaying other social issues. That is not rational.

Finally, the thread post is not about the user alone; it stands somewhat alone as a reflection on the danger of not having correct awareness of or attention to these issues collectively, and how that relates to successful AI alignment as well.

Comment by ZY (AliceZ) on ZY's Shortform · 2024-09-02T15:43:03.633Z · LW · GW

You reached out to me with a DM about my initial post questioning whose beautiful world this is, regarding recent events about rape.

It is not that you do not embrace "my goals", but it seems you believe that "working on addressing rape or war would be a distraction from AI safety risks".

You do have a horrifying misinterpretation of what rape is, and you insist on it. You mentioned that "Since rape is a side effect of erotic excitement, and so removing rape would mean removing erotic excitement." This claim is outrageous. First of all, rape is about power and has limited relation to erotic excitement. This view of "erotic excitement" is very perpetrator-centric. At best you could compare it with murder, but I would argue it is worse than murder because the person enjoys humiliating another human. Removing rape does not mean removing erotic excitement. This is simply wrong.

[Edited: Initially I was soft in responding, with empathy, but now that I look back after a while, it is still a horrible view the user holds, and if people collectively believe this, there will naturally be more acceptance of the harm one human does to another.]

Comment by ZY (AliceZ) on ZY's Shortform · 2024-09-01T15:47:02.920Z · LW · GW

(This should have been attached/replied to my first quick take, as opposed to a separate quick take.)

From someone's DM to me:

"Regarding war, rape, and murder, it's one thing to make them taboo, view them with disgust, and make them illegal. It's another thing to want to see them disappear from the world entirely. They are side effects of human capacities that in other contexts we cherish, like the willingness to follow powerful leaders, to make sacrifices, to protect ourselves by fighting and punishing enemies who threaten or abuse us, and the ability to feel erotic excitement. This is why they keep happening."

I am seeing - "side effects", "rape -> erotic excitement" - by going against another human's will by definition. I found it disappointing to have to explain why this is not the case in the first place by linking https://valleycrisiscenter.org/sexual-assault/myths-about-sexual-assault/#:~:text=Reality%3A%20Rape%20is%20not%20about,him%20accountable%20for%20his%20actions. in 2024.

I am not sure if this person is 1) trying to promote "AI risks are more important than anything else" to emphasize its sole importance (based on other parts of this person's DM) by promoting the idea that other issues are not issues/are dangerous, or 2) truly believes the above about "unavoidable side effects" of the abuse of power such as rape and sees it as one-sided erotic excitement built, by definition, on another's suffering. They also commented that it is similar to having matches in homes without setting the homes on fire - which is a poor analogy, as matches are useful, but intentions to rape/intentions to harm other people offensively are not. I am not sure of this person's conscious/unconscious motivation, and not sure which one is the better motivation to hope for.

I especially hope the moral standard in this comment is not prevalent in our community - we are likely not going to have properly aligned AI if we don't care about or understand war, rape, or murder, or if we view them as distractions not worth working on with any collective resources. Not only do we need to know they are wrong; AIs do as well.

Finally, I would encourage posting these as public comments rather than DMs. The person is aware of this. Be open and transparent.

[edited]

Comment by ZY (AliceZ) on ZY's Shortform · 2024-08-28T15:10:15.292Z · LW · GW

If you look into a bit more of the history of social justice/equality problems, you will see we have actually made much progress (https://gcdd.org/images/Reports/us-social-movements-web-timeline.pdf), but not enough, as the bar was so low. These movements have also made changes in our law. Before 1879, women could not be lawyers (https://en.wikipedia.org/wiki/Timeline_of_women_lawyers_in_the_United_States). On war, I don't have much knowledge myself, so I will refrain from commenting for now. It is also my belief that we should not stop at attempts, but attempts are the first step (necessary but not sufficient), and they have pushed to real changes, as history has shown; it will just take piles and piles of work before a significant change. Just because something is very hard to do does not mean we should stop, nor that there will not be a way (just like ensuring there is humanity in the future). For example, we should not give up on helping people during war or on trying to reduce wars in the first place, and we should not give up on preventing women from being raped. In my opinion, this is in a way ensuring there is a future, as humans may very well be destroyed by other humans, or by our own mistakes. (That's also why, in the AI safety case, governance is so important: so that we consider the human piece.)

As you mentioned political parties - it is interesting to see surveys happening here; as a side track, I believe general equality problems such as "women can go to school" are not dependent on political party. And something like "police should not kill a black person randomly" should not be supported just by black people, but also by other races (I am not black).

Thanks for the background otherwise. 

Comment by ZY (AliceZ) on ZY's Shortform · 2024-08-28T14:57:43.783Z · LW · GW

I meant people on this platform initially, but it is also good to reflect on people generally, which is also where some of my concerns are.

I agree on attending NGOs, if by "you" you meant me. On awareness - most awareness-raising does not involve detailed stories or images, and those events did leave a strong impression/emotion (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2827459/#:~:text=At%20the%20very%20least%20emotions,thought%20and%20away%20from%20others.).

It is a bit sad to see that for a lot of humans, even the ones who already care about the world generally (which is already a privilege, because many may need to focus on staying alive themselves), if something is not relatable, they don't deeply care. When that is correlated with power, it is the worst.

Comment by ZY (AliceZ) on ZY's Shortform · 2024-08-27T18:46:49.979Z · LW · GW

Thanks for the thoughtful comments first of all.

What I am sensing or seeing right now is that, in order to promote AGI risks and X risks, people downplay/rank lower the importance of the people who already cannot see this beautiful world. It does not feel logical to me that, because there are always future issues that affect everyone's lives, the issues that have already put some people on the miserable side should not be addressed.

The problem here is that we do not have the same starting line for "everyone" now, and therefore future-focused progress towards saving everyone might not mean the same thing. Maybe I should draw a graph to demonstrate my point. As opposed to only considering problems that concern everyone, I think we should also focus a bit more on an inclusive finish line that is connected to the current reality of unequal starting lines. If this world continues to be this way, I also worry whether future generations would like it, or would want to be brought into it.

I understand the utilitarian intentions, but I also believe we could incorporate egalitarian views. In fact, a mindset or rules promoting equality, or something along those lines, actually helps everyone. In many situations a human will be one of those people at some point in their life in some way. Maybe a person's home suddenly becomes a war zone. Maybe they become disabled suddenly. Maybe they or a loved one experiences sexual assault. Establishing a good system to reduce and prevent these helps humans in the future as well. I would like to formalize this a bit more later.

Both views - also current vs. future views - should really join forces, as opposed to excluding each other. There are many tasks that I see as shared, such as social-good mindsets and governance.

Some background about me: I myself believe in looking into both, and in the value of looking into both. It would be dangerous to focus on only one, either way, at the expense of the other, and to gradually overlook/go backwards on things that we have started.

Comment by ZY (AliceZ) on ZY's Shortform · 2024-08-27T18:38:15.511Z · LW · GW

That's good to hear. Are there any posts you have encountered that are good and mention these issues/solutions to them?

Comment by ZY (AliceZ) on ZY's Shortform · 2024-08-27T15:40:36.555Z · LW · GW

Do you think people are firmly aware of this in the first place? I would love to hear that the answer is yes.

On solutions - I am not sure yet about a universal solution, as it is very dependent on each case. For some problems, solutions would be around raising awareness, international cooperation on educating humans who hurt other humans, pushing for law reforms, and alternative sentencing. Solutions aside, I am not sure I am seeing enough people even care about these. My power alone is not enough; that's why we need to join forces.

I am worried about how people would downvote this on this platform. I don't think worrying about the long term is bad, or that it should not be looked into, but at the same time it should not be the only thing we care about either. This worries me, as a "we should only work on long term X risks, nothing about the people now, and any comments that say otherwise are wrong" type of sentiment is what I am seeing more and more.

There is danger in overlooking current risks. Besides the obvious reason that we would be ignoring current people, from the future perspective we would be missing the connection between the future and the present, and missing the opportunity to practice applying solutions to reality. Thanks for the link to effective altruism; I am aware of the initiative/program after attending an introductory program. It feels to me that it was merging a couple of different directions, and somehow recent efforts felt mostly focused on long term X risks. For its original goal, I am also not entirely sure it considers fairness enough, despite an effort to consider scarcity. An EA concept I see some people not understanding enough is the marginal part - the marginal dollar and marginal labor, which should allow people to invest in a portfolio of things. I would welcome any recommended further readings on this.

Comment by ZY (AliceZ) on ZY's Shortform · 2024-08-27T05:34:42.843Z · LW · GW

I saw someone write somewhere that they will try their best in their life to address X risks, to make sure future humans/generations will see this beautiful world. I respect that. However, much recent news and many events make me also think about the current humans/people right now. Who is seeing this beautiful world right now? Whose beautiful world is this?

The people in war didn't see it. It is not their beautiful world. The female doctor who was brutally gang-raped while on duty didn't see it. It is not her beautiful world. Are there also enough efforts to help these people see a beautiful and fair world? Is this really a beautiful world? Does this beautiful world belong to everyone?

Comment by ZY (AliceZ) on Predictable updating about AI risk · 2024-08-21T03:33:34.682Z · LW · GW

To me, it all depends on that person's estimated probability that GPT-6 is going to be incrementally "scary smarter" (and what it would mean for that person to translate that into uncontrollability by humans), which are just the conditional probabilities and the probabilities of the conditions.
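
As a hedged illustration (my own notation, not something stated in the post or the comment above), the decomposition being gestured at can be written as:

$$P(\text{GPT-6 becomes uncontrollable}) = P(\text{uncontrollable} \mid \text{"scary smarter"}) \cdot P(\text{"scary smarter"})$$

so two readers can disagree either because they assign different probabilities to the condition, or because they translate the condition into uncontrollability differently.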

Also, in my opinion, we know that many models currently have many problems, or are known to have expected problems (some model-specific), that are results of misalignment/the way they are optimized. So even if we don't know what's going on in the future, addressing these problems is needed anyway, and will provide useful shared mitigation strategies for any problems coming up.

Comment by ZY (AliceZ) on Higher Purpose · 2024-03-01T00:52:53.051Z · LW · GW

"The external world is just a stream of victims for you to rescue." I do not see a big problem with this, as I agree with the last point that this not pure but still useful, except when are considering equality and majority/minority dynamic. Does a high purpose need to be "pure"? Are humans really completely capable of that? Does it make sense if humans are capable of that from a sociology perspective? To me, the key is to ensure a system that effectively incentivize humans to extend compassion.

I do think a lot of higher purposes come from adverse experiences, and through that adverse experience, such as rape/sexual assault, the person would understand a bit more of what the challenges are and how miserable it is, and it makes sense for that person to be personally invested in this cause and make contributions to it.

Comment by ZY (AliceZ) on Scope Insensitivity · 2024-02-02T17:48:07.521Z · LW · GW

I understand that action-wise it might be good collectively; but I also understand that for victims of certain crimes, for example, it is very hard to tell them, "hey, what you feel about the crime is not rational, please donate to something else."

Comment by ZY (AliceZ) on Without fundamental advances, misalignment and catastrophe are the default outcomes of training powerful AI · 2024-01-28T00:48:22.980Z · LW · GW