Posts

Non-human centric view of existence 2024-09-25T05:47:07.480Z
ZY's Shortform 2024-08-27T05:34:42.732Z

Comments

Comment by ZY (AliceZ) on Our Intuitions About The Criminal Justice System Are Screwed Up · 2024-12-18T18:21:23.137Z · LW · GW

how to punish fewer people in the first place

This seems hard when actual crimes (murder, violent crimes, etc.) have already been committed; it seems better to figure out why people commit the crimes, since reducing those causes in the first place is more fundamental.

Comment by ZY (AliceZ) on Our Intuitions About The Criminal Justice System Are Screwed Up · 2024-12-10T04:28:56.517Z · LW · GW

A side note - 

We don’t own slaves, women can drive, while they couldn’t in Ancient Rome, and so on.

That seems like a very low bar for being "civilized".

Comment by ZY (AliceZ) on Enemies vs Malefactors · 2024-12-08T17:21:46.919Z · LW · GW

focusing less on intent and more on patterns of harm

In a general context, though, understanding intent will help to solve the issue fundamentally. There are probably two general reasons behind harmful behaviors: 1. not knowing the behavior will cause harm, or how not to cause harm, i.e. being uneducated about it/ignorant; 2. knowing it will cause harm, and deciding to do it anyway. There may be more nuances, but these are probably the two high-level categories. Knowing the intent helps in creating strategies to address the issue - 1. more education? 2. more punishments/legal actions?

Comment by ZY (AliceZ) on Model Integrity: MAI on Value Alignment · 2024-12-07T20:51:01.975Z · LW · GW

In my opinion, theoretically, the key to having "safe" humans and "safe" models is "do no harm" under any circumstances, even when they have power. This is roughly what law is about, and what moral values should be about (in my opinion).

Comment by ZY (AliceZ) on Daniel Kokotajlo's Shortform · 2024-12-01T17:35:58.691Z · LW · GW

Yeah nice; I heard YouTube has something similar for checking videos as well.

Comment by ZY (AliceZ) on Daniel Kokotajlo's Shortform · 2024-11-27T19:35:57.237Z · LW · GW

It is interesting; I am only half a musician, but I wonder what a true musician thinks about the music generation quality generally; also this reminds me of the Silicon Valley show's music similarity tool for checking copyright issues; that might be really useful nowadays lmao

Comment by ZY (AliceZ) on Have we seen any "ReLU instead of sigmoid-type improvements" recently · 2024-11-23T06:25:11.457Z · LW · GW

On the side - could you elaborate on why you think "ReLU better than sigmoid" is a "weird trick", if that is implied by this question?

The commonly agreed reason, as I understand it, is that ReLU helps with the vanishing gradient problem (this can be seen from the activation functions' gradient curves).
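As a rough illustration of that point (a sketch of my own, assuming PyTorch; not from the original question): backprop multiplies the per-layer activation derivatives, and since sigmoid's derivative never exceeds 0.25, a deep stack of sigmoids shrinks the gradient exponentially, while ReLU passes it through unchanged on the active path.

```python
# Minimal sketch (my own illustration with PyTorch, not from the linked question):
# backprop multiplies per-layer activation derivatives, and sigmoid'(x) <= 0.25,
# so stacking sigmoids shrinks the gradient exponentially; ReLU's derivative is 1
# on the active path, so the gradient passes through unchanged.
import torch

def deep_stack(x, depth, activation):
    for _ in range(depth):
        x = activation(x)
    return x

x = torch.tensor([1.0], requires_grad=True)
deep_stack(x, depth=20, activation=torch.sigmoid).backward()
print("sigmoid, 20 layers:", x.grad.item())   # on the order of 1e-13: vanished

x = torch.tensor([1.0], requires_grad=True)
deep_stack(x, depth=20, activation=torch.relu).backward()
print("ReLU, 20 layers:", x.grad.item())      # exactly 1.0: unchanged
```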

Comment by ZY (AliceZ) on Reducing x-risk might be actively harmful · 2024-11-19T02:03:58.200Z · LW · GW

I personally agree with your reflection on suffering risks (including factory farming, systemic injustices, and wars) and the approach of donating to different cause areas. My (maybe unpopular under a "prioritize only one" type of mindset) thought is: maybe we should avoid prioritizing only a single area (especially collectively), and recognize that in reality there are always multiple issues we need to fight/solve. Personally, we could focus professionally on one issue and volunteer for/donate to another cause area, depending on our knowledge, interests, and ability; additionally, we could donate to multiple cause areas. Meanwhile, a big step is to be aware of and open our ears to the various issues we may be facing as a society, and that will (I hope) translate into multiple types of action. After all, some of these suffering risks involve human actions, and each of us doing something differently could help reduce them in both the short and long term. But there are also many things that I do not know how to best balance.

A side note - I also hope you are not too sad about "missing crucial considerations" (and I appreciate that you are trying to gather more information and learn quickly; we all should do more of this)! The key to me might be an open mind and the ability to consider different aspects of things; hopefully we will be on the path towards something "more complete". Proactively, one approach I often take is talking to people who are passionate about different areas, who are different from me, and understanding more from there. Also, I sometimes refer to https://www.un.org/en/global-issues for some ideas.

Comment by ZY (AliceZ) on Rauno's Shortform · 2024-11-18T19:42:38.023Z · LW · GW

Yeah that makes sense; the knowledge should still be there, it just needs the distribution re-shifted "back".

Comment by ZY (AliceZ) on Shortform · 2024-11-17T07:53:17.910Z · LW · GW

Haven't looked too closely at this, but my initial two thoughts:

  1. child consent is tricky.
  2. likely many are foreign children, which may or may not be in the 75 million statistic

It is good to think critically, but I think it would be beneficial to present more evidence before making the claim or conclusion

Comment by ZY (AliceZ) on Rauno's Shortform · 2024-11-16T16:51:19.559Z · LW · GW

This is very interesting, and thanks for sharing. 

  • One thing that jumps out at me is that they used an instruction format to prompt base models, which isn't typically the way to evaluate base models; it should be reformatted into a completion type of task (a rough sketch of the two framings is below, after this list). If this were redone, I wonder if the performance of the base model would also increase, which could further isolate the effect to just RLHF.
  • I wonder if this also has anything to do with the number of datasets added by RLHF (assuming a model goes through supervised/instruction finetuning first, and then RLHF), besides the algorithms themselves.
  • Another good model to test on is https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3, which, it seems, only has instruction finetuning.
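For concreteness, a minimal sketch (my own, with made-up prompts) of the two framings mentioned in the first bullet - the completion version gives the base model a pattern to continue instead of an instruction to follow:

```python
# Sketch only: hypothetical prompts illustrating instruction vs. completion framing.
# Instruction framing - assumes the model has been finetuned to follow directions:
instruction_prompt = (
    "Classify the sentiment of this review as positive or negative:\n"
    "'The food was great.'"
)

# Completion (few-shot) framing - the base model only has to continue the pattern:
completion_prompt = (
    "Review: 'Terrible service.'\nSentiment: negative\n\n"
    "Review: 'Loved every minute.'\nSentiment: positive\n\n"
    "Review: 'The food was great.'\nSentiment:"
)
```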

The author seems to say that they figured it out at the end of the article, and I am excited to see their exploration in the next post.

Comment by ZY (AliceZ) on eggsyntax's Shortform · 2024-11-13T22:08:46.022Z · LW · GW

I find it useful sometimes to think about "how to differentiate this term" when defining a term. In this case, in my mind it would be thinking about "reasoning", vs "general reasoning" vs "generalization".

  • Reasoning: narrower than general reasoning; probably your first two bullet points combined, in my opinion
  • Generalization: even more general than general reasoning (does not need to be focused on reasoning). This seems like it could be your last two bullet points, particularly the third
  • General reasoning (this is not fully thought through): now that we have talked about "reasoning" and "generalization", I see two types of definition
    • 1. A bit closer to "reasoning": your first two bullet points, plus in multiple domains/multiple ways, but not necessarily unseen domains. In other, simpler words, "reasoning in multiple domains and ways".
    • 2. A bit closer to "general" (my guess is this is closer to what you intended?): generalization ability, but focused on reasoning.

Comment by ZY (AliceZ) on Consider tabooing "I think" · 2024-11-12T04:44:07.601Z · LW · GW

In my observation (trying to avoid "I think"!), "I think" is intended to (or really should be used to) point out perspective differences (which helps lead to more accurate conclusions, and to collaborative and effective communication), rather than confidence. In the latter case of misuse, it would be good if people clarified "this term is about confidence, not perspective, in my sentence".

Comment by ZY (AliceZ) on What is malevolence? On the nature, measurement, and distribution of dark traits · 2024-11-11T18:30:53.683Z · LW · GW

True. I wonder, for average people, whether being self-aware would at least unconsciously be a partial "blocker" on the next malevolent action they might take, and whether that may evolve over time too (even if it takes a bit longer than for a mostly-good person).

Comment by ZY (AliceZ) on Lessons learned from talking to >100 academics about AI safety · 2024-11-09T16:59:39.182Z · LW · GW

I highly agree with almost all of these points, and they are very consistent with my observations. As I am still relatively new to LessWrong, one big thing I still see today (based on my experience) is concepts, definitions, and terminology that are disconnected from academic language. Sometimes terminology already exists in academia, and introducing new concepts under the same name can be confusing, especially without using the channels academics are used to. There are some terms I try to search on Google, for example, for which the only relevant results are from LessWrong or blog posts (which I still then read personally). I think this is getting better - in one of the recent conference reviews, I saw a significant increase in AI safety submissions working on x-risks.

Another point, as you mentioned, is the reverse ingestion of papers from academia; there are rich papers in interpretability, for example, and some concrete confusion I have seen from professors or people already in that field is why there feels like a lack of connection with these papers or concepts, even though they seem pretty related.

About actions - many people in my usual professional group who are concerned about AI safety risks are people concerned about, or working on, current intentional risks like misuse. Those are also real risks that have already materialized (CSAM, deepfake porn with real people's faces, privacy, potential bio/chem weapons) and need to be worked on as well. It is hard to stop working on them and transition directly to x-risks.

However, I do think it is beneficial to keep merging the academic and AI safety communities, which I see is already underway, with examples like more papers, some PhD positions in AI safety, industry positions, etc. This will increase awareness of AI safety, and as you mentioned, interest in the technical parts is shared, since they could potentially be applied to many kinds of safety, and hopefully not so much to capabilities (though the two are sometimes not separable).

Comment by ZY (AliceZ) on Safety isn’t safety without a social model (or: dispelling the myth of per se technical safety) · 2024-11-08T06:44:44.878Z · LW · GW

What would be some concrete examples/areas to work on for human flourishing? (I just saw a similar question on the definition; I wonder what some concrete areas or examples could be.)

Comment by ZY (AliceZ) on Should CA, TX, OK, and LA merge into a giant swing state, just for elections? · 2024-11-07T04:54:10.795Z · LW · GW

True; and they would only need to merge up to the point where they reach a "swing state" type of voting distribution.

Comment by ZY (AliceZ) on Should CA, TX, OK, and LA merge into a giant swing state, just for elections? · 2024-11-07T03:18:55.525Z · LW · GW

That would be interesting; on the other hand, why not just merge all the states? I guess that would be a more dramatic change, and may be harder to execute and unnecessary in this case.

Comment by ZY (AliceZ) on Non-human centric view of existence · 2024-11-04T16:41:42.832Z · LW · GW

Yes, what I meant is exactly "there is no must, but only want". But it feels like a "must" in some contexts I have seen, though I do not recall exactly where. And yeah, true, there may be some survivorship bias.

I agree it is a tragedy from the human race's perspective, but what I meant is to view this problem from a non-human perspective. For example, as a thought experiment, to an alien observing Earth, humans are just another species that rose to dominance.

(On humans preferring to be childless - this has actually already shown up in slowing birth rates in many countries, due to the cost of raising a child etc., but this is a digression on my part.)

Comment by ZY (AliceZ) on Context-dependent consequentialism · 2024-11-04T16:08:49.963Z · LW · GW

My two cents:

  1. The system has a fixed goal that it capably works towards across all contexts.
  2. The system is able to capably work towards goals, but which it does, if any, may depend on the context.

From these two above, it seems it would be good for you to define/clarify what exactly you mean by "goals". I can see two definitions: 1. goals as in a loss function or objective the algorithm is optimizing towards, 2. task-specific goals like summarizing an article or planning. There may be other kinds of goals I am unaware of, or this may be obvious elsewhere in some context I have not seen. (From the shortform in the context shared, it seems to be 1, but I have a vague feeling that people using 2 may not be aligned on this.)

For the example with dQw4w9WgXcQ in your initial operationalization, when you were wondering whether it always generates Q - it just depends on the frequency of the string in training data. A good paper on how data frequency relates to memorization rate is https://arxiv.org/pdf/2202.07646, if you were wondering about "always" (given the same context as the training data, not a different context/instruction).
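For what it's worth, the extraction-style check in that line of work can be sketched roughly like this (my own illustration, assuming a Hugging Face causal LM such as GPT-2; the prefix/suffix pair is hypothetical): greedily decode from a prefix seen in training and test whether the suspected memorized suffix comes back verbatim.

```python
# Minimal sketch of a memorization/extraction check, in the spirit of
# https://arxiv.org/pdf/2202.07646 (not their actual code).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any causal LM checkpoint would do
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prefix = "https://www.youtube.com/watch?v="   # hypothetical training-data prefix
expected_suffix = "dQw4w9WgXcQ"               # suffix we suspect is memorized

inputs = tokenizer(prefix, return_tensors="pt")
# Greedy decoding: the question is whether the *most likely* continuation
# reproduces the training-data suffix verbatim.
outputs = model.generate(**inputs, max_new_tokens=10, do_sample=False)
completion = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:])
print("memorized" if completion.startswith(expected_suffix) else "not reproduced verbatim")
```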

Comment by ZY (AliceZ) on Trading Candy · 2024-11-01T03:50:19.061Z · LW · GW

I think that is probably not a good reason to be libertarian, in my opinion? Could you also share how much older you were than your siblings? If you are not that far apart, you and your siblings came from the same starting line, and that kind of redistribution is not going to happen in real life, economically or socially, even in a non-libertarian system (in real life, where we need equity is when the starting line is not the same and cannot be changed by choice; a closer analogy might be that some kids are born with large ears, large ears are favored by society, and the large-eared kids always get more candy). If you are ages apart, with you being a lot older, it may make some limited sense for your parents to redistribute.

Comment by ZY (AliceZ) on ZY's Shortform · 2024-10-31T06:50:31.502Z · LW · GW
Comment by ZY (AliceZ) on Failures in Kindness · 2024-10-29T06:28:46.060Z · LW · GW

I am not quite sure about the writing/examples in computational kindness and responsibility offloading, but I think I get the general idea.

For computational kindness, I think it is really just a difference in how people prefer to communicate or make plans, with the example of trip planning. I, for example, personally prefer being offered people's true thoughts - whether they are okay with really anything, or not. Anything is fine as long as it is what they really think or prefer (side note: I generally think communicating real preferences is the most efficient). I do not mind planning the trip myself in the way I want. There is not really a right or wrong style. If the host offered "anything is okay" but the guest does not like planning, the guest could also simply say "Any recommendations? I like xxx or xxx generally." Communication goes both ways. The reason I think we should not say one way is better than another is that if the guest really wants to plan themselves, and the host has already planned a bunch, the guest may feel bad rejecting the planned activities. Maybe what you really want to see is that the host cares enough to put some effort into planning (just guessing)? And it seems the relationship between the two people in this example is relatively close, or requires showing some effort?

For responsibility offloading, I think some of these examples are not quite parallel situations, but I generally get the proposal: do not push other people in a pushy manner, and offer a clear option to say no, as opposed to a fake ask. In my opinion a fake ask is not true kindness - it is fake, so it is not really kind in any way. But at the same time, I've trained myself to take the question literally - okay, if you asked, then it means you expect my answer could go either way, so I will say no if I am thinking no. In the case the question is genuine - great! In the case it is not, too bad; the smoker should have been consistent with their words.

Most of these seem to be communication style differences that just require another conversation to sort out, if the two parties need to communicate frequently.

Comment by ZY (AliceZ) on Open Thread Fall 2024 · 2024-10-29T00:27:36.230Z · LW · GW

Ah thanks. Do you know why these former rationalists were "more accepting" of irrational thinking? And to be extremely clear, does "irrational" here mean not following one's preference with their actions, and not truth seeking when forming beliefs?

Comment by ZY (AliceZ) on If I have some money, whom should I donate it to in order to reduce expected P(doom) the most? · 2024-10-28T21:12:18.407Z · LW · GW

I don't understand either. If it means what it appears to mean, this is a very biased perception and not very rational (truth-seeking or causality-seeking). There should be better education systems to fix that.

Comment by ZY (AliceZ) on Open Thread Fall 2024 · 2024-10-28T07:01:57.375Z · LW · GW

On what evidence do I conclude what I think is know is correct/factual/true and how strong is that evidence? To what extent have I verified that view and just how extensively should I verify the evidence?


For this, aside from traditional paper reading from credible sources, one good approach in my opinion is to actively seek evidence/arguments from, or initiate conversations with, people who have a different perspective from mine (on both sides of the spectrum, if the conclusion space is continuous).

Comment by ZY (AliceZ) on Open Thread Fall 2024 · 2024-10-28T06:59:05.597Z · LW · GW

I am interested in learning more about this, but am not sure what "woo" means; after googling, is it right to interpret it as "unconventional beliefs" of some sort?

Comment by ZY (AliceZ) on Open Thread Fall 2024 · 2024-10-28T06:55:38.764Z · LW · GW

I personally agree with you on the importance of these problems. But I myself might also be a more general responsible/trustworthy AI person, and I care about other issues outside of AI too, so not sure about a more specific community, or what the definition is for "AI Safety" people.

For funding, I am not very familiar with this and want to ask for some clarification: by "(especially cyber- and bio-)security", do you mean security generally, or security risks caused by AI specifically?

Comment by ZY (AliceZ) on davekasten's Shortform · 2024-10-27T23:51:49.747Z · LW · GW

Does "highest status" here mean highest expertise in a domain generally agreed by people in that domain, and/or education level, and/or privileged schools, and/or from more economically powerful countries etc? It is also good to note that sometimes the "status" is dynamic, and may or may not imply anything causal with their decision making or choice on priorities.

One scenario is "higher status" might correlates with better resources to achieve those statuses, and a possibility is as a result they haven't experienced or they are not subject to many near-term harms. In other words, it is not really about the difference between "average" and "high status"'s people's intelligence, but more about what kind of world they are exposed to. 

I do think it is good to hear all different perspectives to stay curious/open-minded. 

edit: I just saw Dragon nicely listed two potential reasons, with scenario 2 mentioning something similar to my comment here. But something slightly more specific in my thinking is that these choices made by "average" and "high status" people may or may not be conscious, and may instead come from the experiences of their lives and the world they are exposed to.

Comment by ZY (AliceZ) on Originality vs. Correctness · 2024-10-26T18:44:13.540Z · LW · GW

Could you define what you mean by "correctness" in this context? I think there might be some nuances to this, in terms of what "correct" means, and under what context.

Comment by ZY (AliceZ) on Jimrandomh's Shortform · 2024-10-26T03:08:17.668Z · LW · GW

Based on the words from this post alone -

I think that would depend on the situation; in the scenario of price increases, if the business is a monopoly or has very high market power, and the increase is significant (and may even potentially cause harm), then anger would make sense.

Comment by ZY (AliceZ) on Could randomly choosing people to serve as representatives lead to better government? · 2024-10-24T22:08:26.151Z · LW · GW

Thanks! I think the term duration is interesting and creative. 

Do you think that for the short-term ones there might be pre-studies they need to do on the exact topics they need to learn? Or maybe the short-term ones could be designed for topics that can be learned and solved quickly? I am also a little worried about consistency in policy (for example, even at work, when a person on a project takes vacation and someone needs to cover for them, there are a lot of onboarding docs and prior knowledge to transfer), but I could not find a good solution just yet. I will think more about these.

Comment by ZY (AliceZ) on What is malevolence? On the nature, measurement, and distribution of dark traits · 2024-10-23T21:45:07.338Z · LW · GW

Amazingly detailed article covering malevolence, its interaction with power, and the other nuances! I have been thinking of exploring similar topics, and found this very helpful. Besides the identified research questions, some of which I highly agree with, one additional question I was wondering about is: does self-awareness of one's own malevolence factors help one to limit them? If so, how effective would that be? And how would this change when the person has power?

Comment by ZY (AliceZ) on Could randomly choosing people to serve as representatives lead to better government? · 2024-10-22T17:17:41.528Z · LW · GW

Interesting idea, and I think there is a possibility that the responsibility will make "normal people" make better choices or learn more, even though they do not know policy, etc. in the first place.

A few questions:

  • Do you think there is a situation where randomly selected people do not want to be in office/leadership and want to pursue their own passion/career, and for this reason may do a bad job? Is this mandatory?
  • What are some nuances about population and diversity? (I am not sure yet)

Comment by ZY (AliceZ) on Cipolla's Shortform · 2024-10-21T23:46:54.120Z · LW · GW

Could you maybe elaborate on "long term academic performance"?

Comment by ZY (AliceZ) on A Rocket–Interpretability Analogy · 2024-10-21T23:07:59.998Z · LW · GW

Agree with this, and wanted to add that I am also not completely sure whether mechanistic interpretability is a good "commercial bet" yet, based on my experience and understanding, with my definition of a commercial bet being something that materializes revenue, or simply something revenue generating.

One revenue-generating path I can see for LLMs is companies using interpretability to identify the data most effective for particular benchmarks, but my current understanding (correct me if I am wrong) is that it is relatively costly, for now, to first research a reliable method and then run interpretability methods on large models; additionally, researchers generally already have strong intuitions about which datasets could be useful for specific benchmarks. On the other hand, the methods would be much more useful for looking into nuanced and hard-to-tackle safety problems. In fact there have been a lot of previous efforts to use interpretability for safety mitigations.

Comment by ZY (AliceZ) on Against empathy-by-default · 2024-10-17T02:13:33.897Z · LW · GW

I would agree with most of the post; to me, humans have some general shared experiences that may activate empathy related to those experiences, but the numerous small differences in experience make it very hard to know exactly what others would think/feel, even in exactly the same situations. We can never really model another person's entire learning/experience history.

My belief/additional point I want to add/urge is that this should not be interpreted as saying empathy is not needed because we don't get it right anyway (I saw some other comments saying evolution is against empathy) - it is more about recognizing that we are not naturally good at empathy (or are less good than we thought), and thus creating mindsets/systems (such as asking for and promoting the gathering of more information about the other person) that encourage empathy consciously (when needed).

(maybe I misread, and the second point is addressing another comment)

Comment by ZY (AliceZ) on Isaac King's Shortform · 2024-10-15T17:38:15.018Z · LW · GW

I think I observe this generally a lot: "as soon as those implications do not personally benefit them", and even more so when this comes with a cost/conflict of interest.

On rationality in decision making (not the truth-seeking part about belief forming, I guess) - I thought it is more about being consistent with one's own preferences and values (if we are constraining ourselves to the LessWrong/Sequences-ish definition)? I have a hot take that:

  1. If the action space for committing to a belief is a binary choice, then when people do not commit to a belief, the degree to which they believe it is less than for those who do. If we have to make it a binary classification, then it is not really a true belief if they do not commit to it.
  2. It could be that acting on a belief is a spectrum, in which case people could, for example, eat less meat, matching their degree of belief that "eating meat is not moral".

Comment by ZY (AliceZ) on Politics is the Mind-Killer · 2024-10-12T17:49:50.013Z · LW · GW

I think the title could be a bit more specific, like "involving political parties in science discussions might not be productive", or something similar. If using the word "politics", it would be crucial to define what "politics" means or refers to here. The reason I say this is that "politics" is not just about actual political parties' power dynamics; it also includes general policy making, strategies, and history that aim to help individuals in society, and many other aspects. These other things included in the word "politics" are crucial to consider when talking about many topics (I think it is a little similar to precedents in law).

(Otherwise, if this article is about not bringing everything to the political party level, I agree. I have observed that many things, in the US at least, are debated along political party lines, and if a political party debates topic A, people reverse-attribute the general social topic A to "political values or ideology", which seems false to me.)

Comment by ZY (AliceZ) on What Do We Mean By "Rationality"? · 2024-10-12T17:07:03.055Z · LW · GW

I think by winning, he meant the "art of choosing actions that lead to outcomes ranked higher in your preferences", though I don't completely agree with the word choice of "winning", which could be ambiguous/cause confusion.

A bit unrelated, but more of a general comment on this - I believe people generally have unconscious preferences, and knowing/acknowledging these before weighing preferences is very important, even if some preferences are short term.

Comment by ZY (AliceZ) on Open letter to young EAs · 2024-10-11T22:35:04.415Z · LW · GW

I also had similar feelings about the simplicity part, and about how the theory/idealized situation and the execution could be very different. I also agree on the conflict part (and, to me, on many different types of conflicts). And I super, super strongly support the section on The humans behind the numbers.
(These thoughts still persist after taking intro to EA courses.)

I think EA's big overall intentions are good, and I am happy/energized to see how passionate people are, compared to no altruism at all at least; but the details/execution are not quite there for me.

Comment by ZY (AliceZ) on sarahconstantin's Shortform · 2024-10-11T00:11:25.798Z · LW · GW

I have been having some similar thoughts on the main points here for a while and thanks for this.

I guess to me what needs attention is when people do things along the lines of "benefit themselves and harm other people". That harm has a pretty strict definition, though I know we can always find borderline examples. This definitely includes the abuse of power in our current society and culture, and any current risks, etc. (For example, constraining to just AI, and with a content warning: https://www.iwf.org.uk/media/q4zll2ya/iwf-ai-csam-report_public-oct23v1.pdf. This is very sad to see.) On the other hand, with regards to climate change (which is also current) or AI risks, we should probably also be concerned when corporations or developers neglect known risks or pursue science/development irresponsibly. I think it is not wrong to work on these, but I just don't believe in "do not solve the other current risks and only work on future risks."

On some comments saying our society is "getting better" - sure, but the baseline is a very low bar (slavery, for example). There are still many, many, many examples across different societies of how things are still very systematically messed up.

Comment by ZY (AliceZ) on “She Wanted It” · 2024-10-10T18:40:12.351Z · LW · GW

This is about basic human dignity and respect for other humans, and has nothing to do with politics.

Comment by ZY (AliceZ) on What makes one a "rationalist"? · 2024-10-09T04:45:45.877Z · LW · GW

Oxford languages (or really just after googling) says "rational" is "based on or in accordance with reason or logic."

I think there are a lot of other types of definitions (I think lesswrong mentioned it is related to the process of finding truth). For me, first of all it is useful to break this down into two parts: 1) observation and information analysis, and 2) decision making.

For 1): Truth, but also particularly causality finding. (Very close to the first one you bolded, and I feel many of the other ones are just derived from this one. I added causality because many true observations are not really about causality.)

For 2): My controversial opinion is that everyone is probably/usually a "rationalist" - it is just that sometimes the reasoning is conscious, and other times it is sub/unconscious. These reasonings/preferences are unique to each person. It would be dangerous, in my opinion, if someone tried to practice "rationality" based on external reasonings/preferences, or on reasonings/preferences recognized only by the person's conscious mind (even if a preference is short term). I think a useful practice is to 1. notice what one intuitively wants to do vs. what one thinks they should do (or the multiple options being considered), 2. ask why there is a discrepancy, 3. at least surface the unconscious reasoning, and 4. weigh things out (the potential reasonings that lead to conflicting results, for example short-term preferences vs. long-term goals).

Comment by ZY (AliceZ) on Shortform · 2024-10-09T01:20:48.043Z · LW · GW

From my perspective - would say it's 7 and 9.

For 7: One AI risk controversy is that we do not know of/see an existing model that poses that risk yet. But there might be models that frontier companies such as Google are developing privately, and Hinton may have seen more there.

For 9: Expert opinions are important and generally add credibility, as the question of how/why AI risks can emerge is at root highly technical. It is important to understand the fundamentals of the learning algorithms. Additionally, experts might have seen more algorithms. This is important to me as I already work in this space.

Lastly, for 10: I do agree it is important to listen to multiple sides, as experts sometimes do not agree among themselves. It may be interesting to analyze a speaker's background to understand their perspective. Hinton seems to have more background in cognitive science compared with LeCun, who seems to me to be more strictly computer science (but I could be wrong). I am not very sure, but my guess is these backgrounds may affect how they view problems. (I am only saying they could result in different views, not commenting on which is better or worse; this is relatively unhelpful for a person deciding whom they want to align with more.)

Comment by ZY (AliceZ) on Open Thread Fall 2024 · 2024-10-07T02:40:39.684Z · LW · GW

(I like the answer on declarative vs. procedural.) Additionally, reflecting on practicing Hanon for piano (which is almost purely a finger strength/flexibility type of practice) - it might also be for physical muscle development and control.

Comment by ZY (AliceZ) on Adverse Selection by Life-Saving Charities · 2024-10-05T22:27:33.294Z · LW · GW

Agree with a lot of the things in this post, including "But implicit in that is the assumption that all DALYs are equal, or that disability or health effects are the only factors that we need to adjust for while assessing the value of a life year. However, If DALYs vary significantly in quality (as I’ll argue and GiveWell acknowledges we have substantial evidence for), then simply minimizing the cost of buying a DALY risks adverse selection. "

Had the same question/thoughts when I did the Introduction to Effective Altruism course as well. 

Comment by ZY (AliceZ) on Why I Define My Experience At the Monastic Academy As Sexual Assault · 2024-10-05T22:10:59.084Z · LW · GW

I came across this post recently, and really want to express appreciation for you speaking up, fighting for your rights, and raising awareness. Our society does not have proper and successful education on what consent is, and even though at my university we had consent courses during the first week of school, people didn't take them seriously. Maybe they should have added a formal test on it, so that you cannot enter school unless you pass. This should apply to high school as well. A 2018 article has pointed out how to teach consent at all education stages: https://www.gse.harvard.edu/ideas/usable-knowledge/18/12/consent-every-age

Many sexual assaults happen between partners, and your experience was clearly non-consensual and therefore sexual assault. I am a bit hesitant to comment, as I am afraid of reopening any wounds, but I also want to show support and gratitude🙏.

Comment by AliceZ on [deleted post] 2024-10-05T04:37:18.671Z

I am guessing maybe it is the definition of "alignment" that people don't agree on / are mixed on?

Some possible definitions I have seen:

  • (X risks) and/or (catastrophic risks) and/or (current safety risks)
  • Any of the above + general capabilities (an example I saw is "how do you get the AI systems that we’re training to optimize the thing that we actually want to optimize" from https://arize.com/blog/openai-on-rlhf/)

And maybe some people don't think it has gotten to solving x-risks yet, if they view the definition of alignment as being about x-risks only.

Comment by ZY (AliceZ) on DanielFilan's Shortform Feed · 2024-10-04T19:09:56.943Z · LW · GW

My guess is:

  • AI pause: no observation of what safety issue to address, so people work on capabilities anyway, which may lead to only capability improvements. (The assumption is that an AI pause means no releasing of models.)
  • RSP: observe O, shift more resources to mitigating O and fewer to capabilities; when protection P is done, publish the model, then shift back to capabilities. (Ideally.)