Comments

Comment by tutor vals on [deleted post] · 2022-10-13T17:41:50.343Z

Could you add a glossary or quick summary of what PCK or CGI stand for? It would be nice if this post had value without the reader having to read the full cited text or follow the original links for context. I'd welcome a longer explanation, reformulated in your own words, of why exactly people think they're bad at math and alignment and, ESPECIALLY, what the next step to fix that is. (Currently I feel a little clickbaited by the title, in the sense that the content doesn't seem to justify it without delving deep into the cited material. Side note: I'm probably not the intended audience of this post, as I feel I'm pretty good at math and have enjoyed it my whole life. I think the intended audience would have even less patience than I did.)

Comment by tutor vals on Estimating the Current and Future Number of AI Safety Researchers · 2022-10-13T17:33:10.162Z · LW · GW

Commenting to signal appreciation, despite this post still sitting at low karma after a while. The number of researchers currently working in AI safety is a datapoint I often bring up in conversations about the importance of going into AI safety and the potential impact one could have. So far I've been saying there are between a hundred and a thousand AI safety researchers, which it seems was correct, but the point is stronger if I can say the current best estimate I've found is around 200.
Thanks.

Comment by tutor vals on Dwarves & D.Sci: Data Fortress Evaluation & Ruleset · 2022-08-16T19:19:49.571Z · LW · GW

Just a quick comment of encouragement. I haven't played and might not play them live or comment, but I still find these scenarios really cool and enjoy reading both the write-ups and how close the players come! It's also great that you're building up the backlog, because it gives a great opportunity to try the older puzzles at my own pace. Great work! Keep it up, you and everyone playing :D

Comment by tutor vals on Culture wars in riddle format · 2022-07-20T08:36:10.715Z · LW · GW

The most direct model of the problem does lead to that result without any trickery; that seems like a concrete reason, and one you can calculate before looking at the real world.

Suppose each interview leads to a Measured Competence Score (MCS), which is Competence Score multiplied by a random variable drawn from a normal distribution. Suppose men and women have the same Competence Score, from the assumption that they do the same work, but that men go to twice as many interviews as women because they have less restrictive criteria on where to work. Finally, suppose the algorithm for setting pay is simply MCS multiplied by some constant (which is indeed not directly related to gender).
It's easy to see that a company receiving twice as many male candidates and selecting the top x% of all candidates will end up with more male hires, with higher salaries, even though competence and work done are exactly the same.
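Here's a minimal simulation of that toy model, taking the candidates'-eye view (each person's pay ends up driven by their best interview); all parameters (pool size, noise spread, pay constant) are illustrative assumptions, not taken from anywhere:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 10_000             # candidates of each gender (illustrative)
NOISE_SD = 0.3         # spread of interview noise (illustrative)
PAY_CONSTANT = 50_000  # salary = MCS * constant (illustrative)
COMPETENCE = 1.0       # same true competence for everyone, by assumption

def best_offer_mcs(n_people, n_interviews):
    # Each interview yields MCS = Competence * noise; a candidate ends up
    # at whichever company measured them highest, so their pay is set by
    # their best draw.
    noise = rng.normal(1.0, NOISE_SD, size=(n_people, n_interviews))
    return COMPETENCE * noise.max(axis=1)

men_mcs = best_offer_mcs(N, 2)    # men interview twice as often
women_mcs = best_offer_mcs(N, 1)

print("mean pay, men:  ", round(PAY_CONSTANT * men_mcs.mean()))
print("mean pay, women:", round(PAY_CONSTANT * women_mcs.mean()))
# More draws -> higher expected maximum: men's mean pay comes out roughly
# 15-20% higher with these parameters, despite identical competence.
```

The max-over-draws step is doing all the work: more interviews means more chances for the noise to break in your favour.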

A very interesting point here is to notice that a smarter employer who realises this bias exists can outcompete the market by correcting for it, for example by multiplying the MCS of women by a constant (calculated from the ratio of applicants). They will thus get more competent people at a given price point than their competitors. In this simple toy model, affirmative action works and makes the world more meritocratic (people are paid closer to the value they provide).

I also note that the important factors here are that interviews introduce variance into the measured competence score and that the number of applications per person differs by gender. A disproportion in the total number of applications per gender does not seem to matter on its own (e.g. in tech, if 10% of applications come from women and that accurately reflects the number of applicants, then there will be no average pay difference in the end; so affirmative action does not help with simple population disproportions, only with per-person application disproportions). In fact, this doesn't need to be corrected by gender at all: if applicants had to report how many interviews they were doing in total, the algorithm could correct for that directly, per person, and again reach an unbiased measurement of competence; see the sketch below.
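A sketch of that gender-blind, per-person correction, continuing the snippet above (same rng, NOISE_SD, men_mcs and women_mcs; the Monte Carlo deflation factor is my own construction, not from the post):

```python
def expected_best_noise(n_interviews, trials=200_000):
    # Expected value of the best of k noise draws: the inflation that
    # doing k interviews adds to a candidate's best measured score.
    draws = rng.normal(1.0, NOISE_SD, size=(trials, n_interviews))
    return draws.max(axis=1).mean()

# An employer who asks each candidate how many interviews they are doing
# in total can deflate the observed score by that expected inflation:
corrected_men = men_mcs / expected_best_noise(2)
corrected_women = women_mcs / expected_best_noise(1)

print("corrected mean, men:  ", corrected_men.mean())   # ~= 1.0
print("corrected mean, women:", corrected_women.mean()) # ~= 1.0
# Both recover COMPETENCE: a per-person correction removes the bias, as
# would the cruder fix of scaling women's MCS by a constant.
```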

Comment by tutor vals on Being a donor for Fecal Microbiota Transplants (FMT): Do good & earn easy money (up to 180k/y) · 2022-07-09T12:22:20.799Z · LW · GW

Right, we probably largely agree with each other. I don't dispute looking for super-donors amongst top athletes, as that way you can do a unilateral search (i.e. you find a list of top athletes and start asking). By directly asking for recommendations instead, you gain the ability to use criteria that are far more personal and less searchable, and you gain access to populations you can't reach through search. For example, if the criterion is "seems to never fall ill, recovers extremely quickly from illness or injury, highly active and motivated", you can't search for that, but I can recommend the top people in my network who meet it; you could then interview them, get their recommendations along the same criteria, and move up those links to find more and more healthy people.

I skimmed the one study on top athletes being better than lesser athletes (the one on traditional martial arts, i.e. not really martial arts but actually closer to gymnastics) and was not particularly convinced it was a good basis (partly because I don't trust a single study, and partly because the criteria for being a top athlete in an art+gymnastics competition might not be objective enough to relate strongly to gut microbiota). I would have been more reassured by a study on powerlifting showing a continuously rising correlation between weight lifted and 'gut health'.

As for the specific person I gave as an example, he'll be approaching his mid-thirties by now, so though I strongly feel he'd have been a very strong candidate at 25 (also the peak of his athletic performance), he seems less appropriate now due to age and to not practising sports as much in the last few years.

I don't want to be a dead end either: I can forward this article to folks currently in that engineering school (who'll be around 25) and see if anyone is interested enough that I could give you their contact details to continue from.

Comment by tutor vals on Being a donor for Fecal Microbiota Transplants (FMT): Do good & earn easy money (up to 180k/y) · 2022-07-09T11:29:22.106Z · LW · GW

I was also surprised by the large emphasis on top athletes, as opposed to simply athletes, and as opposed to generally very healthy people. My main objection to looking at top athletes only is that many high-performing people would waste their potential by becoming athletes, so filtering for athletes filters away many very healthy, very high-performing people.

For example, I know someone who's been high-performing all his life, in pretty much all domains (sports, socialising, technical skills, computer games...). He was top of his class, and his strong motivation and work ethic got him the highest place in the entrance exam to the best engineering school in the country (main subjects being math, physics, engineering, algorithmics). He so rarely fell ill (less than once every several years) that it was a shock for him when he did, for the two days it would last (to be precise, I'm using 'ill' in an 'ill enough to notice' sense, not just a runny nose in winter). He went on to cofound a still-successful company in a technical sector (drones).

I drew this portrait not to pitch that particular person to you, but to illustrate that there are actually a whole bunch of people with very similar profiles, and you'll find them concentrated in certain top engineering schools (there might be similar profiles in top schools of other domains, but I don't know those). Few of these people become top-level athletes (often out of preference for something else, though that population also has a higher-than-average share of top-level athletes), yet many would have the potential. As long as we're basing microbiota transplants on the assumption that "very healthy, high-performing people probably have good microbiota", it makes sense to me to test more of these people for effectiveness as donors.

Comment by tutor vals on LessWrong Has Agree/Disagree Voting On All New Comment Threads · 2022-07-06T07:28:24.401Z · LW · GW

Partially agreed on replacing 'have to be thinking about' with 'consider', i.e.:
If you're really into manipulating public opinion, you should also consider strong upvoting [...]

Disagreed on replacing the "should also" part, because it reminds you that this is only hypothetical and not actually good behaviour.

Comment by tutor vals on LessWrong Has Agree/Disagree Voting On All New Comment Threads · 2022-06-24T14:05:22.608Z · LW · GW

Giving a post's creator the option to enable/disable this secondary voting axis seems valuable. A post's creator will probably know whether their post will generally attract nuanced comments with differing opinions, or is more lightweight (e.g. "what's your favourite ice cream?") and would benefit from the lighter UI.

Comment by tutor vals on LessWrong Has Agree/Disagree Voting On All New Comment Threads · 2022-06-24T14:03:27.906Z · LW · GW

If you're really into manipulating public opinion, you should also consider strong upvoting posts you disagree with but that are weakly written, so as to present an easily defeated strawman. 

I'd say you're correct that this new addition doesn't change much about the pre-existing incentives to manipulate comment visibility, but that wasn't the point of the addition, so it's not a mark against this update.

[Edited for clarity thanks to Pattern's comment]

Comment by tutor vals on Relationship Advice Repository · 2022-06-20T18:43:07.339Z · LW · GW

Though I expected it to be a joke, I'm still happy that the first comment on this (good, btw) post is a call-out of the astrology section. I didn't bother to click the link, because I didn't imagine I'd find anything of value behind it, so I didn't get the chance to confirm it was a joke until arriving at this comment.

Comment by tutor vals on MIRI announces new "Death With Dignity" strategy · 2022-04-04T11:04:12.433Z · LW · GW

I at first also downvoted, because your first argument looks incredibly weak (this post has little to do with arguing for or against the difficulty of the alignment problem; what update are you getting on that from here?), as did the follow-up 'all we need is...', a formulation which hides problems instead of solving them.
Yet your last point does carry weight, and your stating it explicitly usefully allows everyone to address it, so I reverted to an upvote for honesty, though with a strong disagree.

To the point: I also want to avoid being in a doomist cult. I'm not a die-hard, long-term "we're doomed if we don't align AI" guy, but from my readings over the last year I am indeed getting convinced of the urgency of the problem. Am I being hoodwinked by a doomist cult with very persuasive rhetoric? Am I myself hoodwinking others when I talk about these problems and they too start transitioning to alignment work?

I answer these questions not by reasoning on 'resemblance' (i.e. how much does it look like a doomist cult?) but by going into finer detail. The implicit argument when you call [the people who endorse the top-level post] a doomist cult is that they share the properties of other doomist cults (being wrong, having bad epistemics/policy, preying on isolated/weird minds) and are thus bad. I understand having a low prior on doomist-cult look-alikes actually being right (since there is no known instance of a doomist cult predicting the end of the world and being right), but that's no reason to turn into a rock (as in https://astralcodexten.substack.com/p/heuristics-that-almost-always-work?s=r ) believing that "no doom prophecy is ever right". You can't prove that no doom prophecy is ever right, only that they're rarely right (and probably right only once).

I thus advise changing your question from "do [the people who endorse the top-level post] look like a doomist cult?" to "what level of argument and evidence would be sufficient for me to take this doomist-cult-looking group seriously?". It's not a bad thing to call doom when doom is on the way. Engage with the object-level arguments and not with your pre-cached pattern recognition of "this looks like a doom cult, so it's bad/not serious". Personally, I had qualms similar to those you're expressing, but having looked into the arguments, it feels much stronger and more real to believe "alignment is hard and by default AGI is an existential risk" than not. I hope your conversation with Ben will be productive and that I haven't only expressed points you already considered (fyi, they have already been discussed on LessWrong).

Comment by tutor vals on A fate worse than death? · 2021-12-17T03:38:12.060Z · LW · GW

This sounds like dogma specific to the culture you're currently in, not some kind of universal rule. Throughout history many humans lived in slavery (think Rome), and a non-zero percentage greatly enjoyed their lives and would definitely have preferred them to being dead. It is still an open question what causes positive or negative valence, but submission is probably not a fundamental part of it.

Comment by tutor vals on A fate worse than death? · 2021-12-17T03:30:50.084Z · LW · GW

I appreciate that you went through the effort of sharing your thoughts, and, as some commenters have noted, I also find the topic interesting. Still, you do not seem to have laid bare the assumptions that guide your models, and on examination most of your musings seem to miss essential aspects of valence as experienced in our universe. I will be examining this question through the lens of total utilitarian consequentialism, where you sum the integrals of the valences of all lives over the lifespan of the universe. Do specify if you were using another framework.
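To be explicit, the quantity I have in mind is something like this (my own notation, not from your post):

\[ U \;=\; \sum_{i \in \mathrm{lives}} \int_{b_i}^{d_i} v_i(t)\,\mathrm{d}t \]

where \(b_i\) and \(d_i\) are the birth and death times of life \(i\) and \(v_i(t)\) its signed valence at time \(t\). Only the total matters, not which life contributes it; that's the fungibility I lean on below.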

When you conclude "Bad feelings are vastly less important than saved lives.", you seem to imply that:
1) over time, our lives will always get better (or be positive), and
2) there's always enough time left in the universe for a life to contribute more good than bad.
(You could otherwise be implying that life is good in and of itself, but that seems too wrong to discuss much, and I don't expect you would value someone suffering 100 years and then dying over someone dying straight away.)
In an S-risk scenario, most lives suffer until heat death, and keeping those lives alive is worse than not, so 1 is not always true. And 2 doesn't hold in scenarios where a life is tortured for half of the universe's lifespan (supposing positive valence is symmetrical to negative valence). It is only by assuming there's always infinite time left that you could be so bold as to say keeping people alive through suffering is always worth it, and that's not the case in our universe.

More fundamentally, you don't seem to be taking into account not-yet-existing people/lives, the limits of our universe in time and accessible space, or the fungibility of accumulated valence. Suppose A lives 100 happy years, dies, and then B lives 100 happy years: there's as much experienced positive valence in the universe as if A had been around, happy, for 200 years. You call it a great shame that someone should die, but once they're dead they are not contributing negative valence, and there is room for new lives that contribute positive valence. Thus, if someone were fated to suffer for 100 years, it would be better that they die now and someone else be born and live 100 happy years, rather than keeping the original life around and making it live 200 happy years after the fact to compensate. Why should we care that the positive valence is experienced by one specific life and not another?
In our world, there are negative things associated with death, such as age-related ill health (which generally comes with negative valence) and the negative feelings of knowing someone has died (because it changes our habits; we lose something we liked), so it would cause less suffering if we solved ageing and death. But there is no specific factor in the utility function marking death itself as bad.

With these explanations of the total-utility point of view, do you agree that a large amount of suffering (for example, over half the lifespan of the universe) IS worse than death?

Comment by tutor vals on Cryonics signup guide #1: Overview · 2021-06-08T17:12:55.746Z · LW · GW

Hi. I'm seeing this post because it's curated, and I assume the same will be true of quite a few other people who'll read this article soon. Before rushing to sign up for cryonics, I'd be interested in discussion of the grievances brought up against Alcor by Michael G. Darwin here: https://www.reddit.com/r/cryonics/comments/d6s41b/can_alcor_get_any_worse/ . For reference, Michael G. Darwin (https://en.wikipedia.org/wiki/Mike_Darwin) worked at Alcor for a long while.

In the post I've linked, he quite extensively explains the faults he finds in how Alcor has handled patients in recent years. Having read it, I'm not inclined to go forward with cryonics before having good evidence that standards of care have improved, or that Mike Darwin's claims have been solidly refuted. In general, I'd appreciate strong evidence that the level of care given to most patients (and which anyone signing up should expect to receive) is 'the best we can do' and not 'just good enough that people keep paying and scandals don't break out too often'.

Are there any such discussions of these points available elsewhere? I've only looked around for a couple of hours at most, so I'm not knowledgeable on the subject. I'm mostly bringing this up so other novices at least know there's been some debate, and that there's more to look into than just what the companies offering these services say.