Open and Welcome Thread – July 2021

post by habryka (habryka4) · 2021-07-03T19:53:07.048Z · LW · GW · 20 comments

If it’s worth saying, but not worth its own post, here's a place to put it.

If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.

If you want to explore the community more, I recommend reading the Library, [? · GW] checking recent Curated posts [? · GW], seeing if there are any meetups in your area [? · GW], and checking out the Getting Started [LW · GW] section of the LessWrong FAQ [LW · GW]. If you want to orient to the content on the site, you can also check out the new Concepts section [? · GW].

The Open Thread tag is here [? · GW]. The Open Thread sequence is here [? · GW].

20 comments

Comments sorted by top scores.

comment by skot523 · 2021-08-12T23:21:19.830Z · LW(p) · GW(p)

Hey guys, I’m Sam. You may have read my post on the obesity epidemic. I studied econ, although I’d say I’m really only qualified to assert any expertise when it comes to financial markets. I’ve been on Reddit for what, 10 years, and have stumbled onto this site from time to time.

From me, you’ll probably see a lot of trying to figure out just what in the world is going on in a complex system, and then throwing my hands in the air. You may also see me going to logical extremes (“how about we model the universe as one atom going in a straight line in the void”). Both of these are habits from econ that die hard.

I have a particular focus on insightful and novel ideas (sometimes, I even have them), and I have a strong tendency to prefer what is simple and elegant as an explanation for nearly anything. You’ll probably find that I have a very detached style; this gets me in trouble in real life, but this is the only place where it seems to go over well.

comment by Jozdien · 2021-07-17T15:25:23.153Z · LW(p) · GW(p)

I need some advice.

A little context: I'm a CSE undergraduate who'll graduate next July.  I think AI Safety is what I should be working on.  There are, as far as I've seen, no opportunities for that in my country.  I don't know what path to go down in the immediate future.

Ideally, I'd begin working in Safety directly next year.  But I don't know how likely that is, given I don't have a Master's degree or a PhD; and MIRI's scaling back on new hires, as I understand it (I thought about interning after I graduate, but I’m not sure if they’ll take interns next year).

I plan to apply to Master's programs anyway, but those are also a long shot - the tuition fees are even steeper when converted to other currencies, so I don't want to apply to programs that aren't worth it (it's possible my qualifications are sufficient for some of the ones I'll apply to, but I have little context to tell).

I could work in software for a couple of years, trying to do independent research in that time, and switch over after.  This is complicated both by the fact that independent research is shockingly difficult when you can't bounce ideas off of someone who gets it (I don't really know anyone who does at the level where this is viable), and by the fact that I'll need to spend a non-trivial amount of time and effort now to get a decent chance at a great job (I really don't want to work at a job that I don't believe in and that doesn't build career capital), time that could be better spent, especially if my chances aren't good even in the end (again, I speak from my expectations given limited context).

I've been thinking seriously about this for a couple weeks from several angles (I tried to hold off on this until I had enough qualifications to make credible predictions about my chances, but now I think the bar for credible is much higher than I'd expected), and came to some answers, but also decided I needed to ask the opinion of someone who gets my motivations, and hopefully has better context for any of this than I do.  Both about the future, and the present.

Some additional info about what I think I'm good at, relative to an undergrad level, if that helps: I have a couple years of experience with frontend systems for web and mobile (although I've recently been told I should work on improving my code structure, since I learned it all on my own and have worked primarily on my own projects).  I understand ML theory (DL slightly more) to an extent (I have a preprint on cGAN image processing that I'm trying to figure out how to publish, since my university really doesn't help with this stuff; I welcome any advice on that too).  I also have some amount of experience tinkering with ML code; while I doubt it reaches the level of familiarity even a new industry ML developer would have, I'm fairly confident that I could get there without much trouble (could be wrong, correct me if this isn't your experience).

I typically try to avoid making posts of this sort, but this is kinda sorta important to me, and I feel comfortable trusting the people here to help me a little in making the right call.  So thanks for that.  And thanks in advance for any suggestions.

Replies from: Algernoq, ChristianKl
comment by Algernoq · 2021-07-23T05:57:26.905Z · LW(p) · GW(p)

Good luck man. I did a different kind of engineering, but here is some advice I wish I had heard 15 years ago:

https://www.calnewport.com/blog/2009/03/12/some-thoughts-on-grad-school/ 

Thought #6: Listen to the Married Graduate Students and Ignore the Unmarried Students Who Live in the Dorms

Students with families have perspective on life and friends outside of the university. They tend to be happy and productive and think sleeping on the futon in your office is childish. They also bathe every day. Which is a nice bonus. The students who are unmarried and living in the dorm have probably escaped, thus far, exposure to the real world in any meaningful form, and because of this they are likely to have a warped sense of personal worth and work habits, and suffer from weird guilt issues. Ignore them.

In other words, don't try to be some sort of software ronin: this is less effective than having enough balance and boundaries to maintain some relationships that aren't about your special interest. If you would rather do programming than be around people, that's OK but it's still good to do other activities with other people even if they are not "useful". What is meant by "usefulness" if not you and others enjoying what you have created? Generally speaking, if you are doing work to "save the world" rather than for cash money, you are being lied to and underpaid, and the dollar amount that you are being underpaid is the amount you value feeling like you are "saving the world".

Also, and this is not a popular opinion on this forum, I think Elon Musk has the right idea about AI Safety. This is heavily cultural, and Elon's proposal (let everyone grid-link themselves to their own all-powerful AI) is in line with culturally Protestant values, while the LW proposal (appoint an all-powerful council of elders who decree who is and is not worthy to use AI technology, based on their own research into the doctrine) is in line with culturally  Catholic values. I will never give up my heritage of freedom, my right of self-defense, my right to privacy on my own computer in my own home, and my cultural ideal of equality of all before the law and before the Creator. I look forward to healthy debate with the AI Safety Experts. The American heritage of "fair play" and civil rights is a defense against totalitarian government. The AI Safety Expert Panel would be in a position to cause the AI equivalent of the Irish Potato Famine by hoarding all the AI and distributing it in an "equitable" way that does not include my fellow Irish. The great thing about freedom is that I get to make up my own mind about what software I want to use, create, or buy; the AI Safety Expert Panel does not and will never have the right to confiscate my rightful property; and this heritage of freedom will save the AI Safety Expert Panel from accidentally becoming the dystopia that they seek to prevent.

Replies from: gjm, ChristianKl, Jozdien
comment by gjm · 2021-08-09T10:09:14.708Z · LW(p) · GW(p)

I am not convinced that "the LW proposal" is to appoint an all-powerful council of elders who decree who is and who isn't worthy to use AI technology, and in fact I don't recall ever seeing anything resembling that. (Though of course I might well have missed it.)

What I think I have seen suggested or implied is that something like that might be beneficial for the development of possibly-superhumanly-intelligent AIs, on the basis that random individuals are simply not competent to judge whether what they're doing is safe and that if it isn't the results might be catastrophic.

To whatever extent it's true that (1) humans are capable of producing superhumanly intelligent AIs and (2) superhumanly intelligent AIs are likely to have or acquire vastly superhuman power and (3) even conditional on being able to make the superhuman AIs, making them so that they don't use that power in ways we'd consider catastrophic is a Very Hard Problem (and I think it's fair to say that (1-3), or at least their possibility, is pretty central to the LW community's thinking on this), a permissively libertarian position on possibly-superhuman AI development seems uncomfortably close to a permissively libertarian position on, say, nuclear bombs.

Whether (1-3) are right, and whether a "council of elders" is the best solution if they are, are debatable. But I don't think it should be even slightly controversial that conditional on (1-3) it's unconscionably dangerous to say "everyone should try to make their own superhuman AI and no one should try to stop them, because Freedom".

The most freedom-positive society in human history is probably the United States of America. Even there, there are few people arguing that the Second Amendment confers on all the right to keep and bear nuclear warheads.

Of course, if free-for-all AI development is in fact perfectly safe (at least in the sense of being vanishingly unlikely to result in outright catastrophe) then "everyone has to be free to do it because Freedom" is a much more reasonable position. But then the key point in your argument, at least around these parts where most people endorse (1-3) and lean at least somewhat libertarian, is not "Freedom!" but "having everyone develop their own superhuman AI is unlikely to be catastrophic, because ...". Which requires an actual argument, not just a scattering of boo-words like "council of elders" and "totalitarian" and "famine" and "dystopia" and yay-words like "freedom", "privacy", "equality", "fair play", "freedom", "rightful", "freedom", and "freedom".

(I feel like I should repeat a key point from earlier: you write as if the question is who will decide who gets to own/use superhuman AIs once they exist, but so far as I know "the LW proposal" doesn't involve anything remotely like a "council of elders" for that. The point at which something of the sort might be appropriate is in the development of possibly-superhuman AIs.)

comment by ChristianKl · 2021-07-24T11:24:53.112Z · LW(p) · GW(p)

This is heavily cultural, and Elon's proposal (let everyone grid-link themselves to their own all-powerful AI) is in line with culturally Protestant values, while the LW proposal (appoint an all-powerful council of elders who decree who is and is not worthy to use AI technology, based on their own research into the doctrine) is in line with culturally  Catholic values. 

Deciding between the two approaches based on which values they align with misunderstands the problem. A good strategy depends on what's actually possible.

The idea that human/AI hybrids are competitive at acquiring resources in an environment with strong AGIs is doubtful. That means that over time all the resources and power go to the AGIs.

Replies from: Algernoq
comment by Algernoq · 2021-07-24T17:45:02.985Z · LW(p) · GW(p)

Human nature suggests that an all-powerful council-of-elders always becomes corrupt, so that approach might not be possible either.

Replies from: ChristianKl
comment by ChristianKl · 2021-07-24T17:49:47.510Z · LW(p) · GW(p)

Human nature suggests that an all-powerful council-of-elders always becomes corrupt

Human nature is relatively irrelevant to the behavior of AIs. At the same time, that's basically saying that alignment is a hard problem.

The alignment problem is one of the key AI safety problems.

comment by Jozdien · 2021-07-23T08:18:24.416Z · LW(p) · GW(p)

Thanks.

I'm not sure if you thought of it while reading my comment or if it's generally your go-to advice, but I may have accidentally given the wrong impression about how much I prioritize work over being around other people.  It's good to be actively reminded about it though for entropy reasons, so I appreciate it.

I admit that what I know about AI Safety comes from reading posts about it instead of talking with the experts about their meta-level ideas, but that doesn't sound like the impression I got.  CEV, for example, deals with the ethical mess of deciding whose values are worth including.  The discussion around it generally carried a very negative prior toward anyone having the power to decide whose values are good enough, as far as I could tell.  Elon's proposal comes with its own set of problems, a couple that stick out to me being coordination problems between multiple AGIs, and the fact that grid-linking doesn't completely solve the alignment problem because we'll still be far inferior to a good AGI.

comment by ChristianKl · 2021-07-24T14:02:15.132Z · LW(p) · GW(p)

I don't work in AI risk myself, so I'm not the ideal person to respond, but I've been in the community for quite a while, so given that nobody who actually works in the field has answered, I'll try to give my answer:

80,000 Hours has a general guide for AI risk: https://80000hours.org/articles/ai-policy-guide/ . They also publish a podcast.

One of the key points is that there's a pretty high bar for being paid to work in AI safety.

I don't want to apply to programs that aren't worth it (it's possible my qualifications are sufficient for some of the ones I'll apply to, but I have little context to tell).

The bar for doing a MIRI internship is not lower than the bar for getting into a top university. I would expect that applying for a Master's at the universities that the 80,000 Hours article lists is one of your best bets.

While those universities do have high tuition and you will likely be in debt after leaving, a computer science degree from those universities gives access to very high-paying jobs, so the debt can be worth it even if you don't end up going into AI risk.

Replies from: Jozdien
comment by Jozdien · 2021-07-24T15:40:13.660Z · LW(p) · GW(p)

Thank you.

I saw that guide a while back and it was helpful, but it helped more with "what" than "how" - although it still does "how" better than most guides.  For the most part, I'm concerned about things I'm missing that are obvious if you have the right context.  Like that, given my goals, there are better things to be prioritizing, or that I should be applying to X for achieving Y.

I've been thinking about it for a while since posting, and I think I agree with you that applying for a Master's is the best route for me.  (By the way, did you mean the universities the article mentions in the "Short-term Policy Research Options" subheading?  I didn't find any others.)

Replies from: ChristianKl
comment by ChristianKl · 2021-07-24T17:54:15.439Z · LW(p) · GW(p)

When it comes to choosing universities, there's this passage:

One could also do academic research at any university, though it helps to be somewhere with enough people working on related issues to form a critical mass. Examples of universities with this sort of critical mass include the University of Oxford, University of Cambridge, UC Berkeley, MIT, the University of Washington, and Stanford

While that passage isn't directly about where to do your Master's, those are places where there are people who can support you in learning about AI safety research.

comment by Chantiel · 2021-07-30T21:59:29.348Z · LW(p) · GW(p)

There is a situation I've thought of in which functional decision theory, according to my understanding of it, does poorly. I might just be making some sort of mistake, but I tried to be pretty careful when reading Eliezer's paper on functional decision theory, and it still seems to be a problem. I'm interested in what others think of this.

The situation is just like Newcomb's problem, except that the player is a superintelligent AI who is aware of the exact mental makeup of the predictor (the one considering placing things in the boxes) and can infer the predictor's choices with what is effectively certainty. The predictor is just a regular, non-superintelligent creature that knows the superintelligence uses functional decision theory and is superintelligent.

In this situation, there doesn't seem to be a logical connection between what the superintelligence outputs and its prediction of what the predictor does. I mean, the superintelligence can exactly infer what the predictor does without referencing its own action, so it doesn't seem like the superintelligence knowing what it itself does would really be informative.

So, suppose the superintelligence predicts the predictor puts money in both boxes, and would believe this prediction no matter what the superintelligence decides to do. In this situation, I don't see any reason for it to not take two boxes.

And the predictor reasons, "Since the AI can predict my exact output with effective certainty, eliminating a logical connection between its choice and the content of the box, the AI has no reason not to just take two boxes if they contain money. So, I predict the AI would two-box, so I won't put money in either box."

And then the superintelligent AI gets $0.

I don't think that's what a superintelligence is supposed to get. And if the AI didn't have the knowledge and power to predict the predictor's output exactly, then a logical connection could have been preserved and the AI could potentially get money. But I don't think more knowledge and more intelligence are supposed to make agents do worse.
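Here is a minimal toy sketch (in Python) of the payoff structure I have in mind, assuming the variant described above where the predictor fills both boxes or neither; the dollar amounts and the `can_simulate_predictor` flag are just illustrative assumptions, not taken from any particular formulation:

```python
BIG, SMALL = 1_000_000, 1_000

def agent_takes_one_box(can_simulate_predictor: bool) -> bool:
    # The reasoning attributed to the AI above: if it can infer the box
    # contents directly (by simulating the predictor), its choice has no
    # logical leverage over those contents, so taking both boxes dominates.
    return not can_simulate_predictor

def predictor_fills_boxes(agent_can_simulate_me: bool) -> bool:
    # The predictor's reasoning: an agent that can simulate me will two-box,
    # so I only fill the boxes if it can't.
    return agent_takes_one_box(agent_can_simulate_me)

def payoff(one_box: bool, boxes_filled: bool) -> int:
    # Newcomb-style payoffs, collapsed to this variant where the predictor
    # fills both boxes or neither.
    if not boxes_filled:
        return 0
    return BIG if one_box else BIG + SMALL

for can_simulate in (True, False):
    filled = predictor_fills_boxes(can_simulate)
    one_box = agent_takes_one_box(can_simulate)
    print(f"can simulate predictor: {can_simulate}, winnings: {payoff(one_box, filled)}")
# can simulate predictor: True, winnings: 0
# can simulate predictor: False, winnings: 1000000
```

The agent that can see through the predictor walks away with $0, while the agent that can't gets $1,000,000, which is exactly the "more knowledge and more intelligence make agents do worse" pattern described above.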

Replies from: vanessa-kosoy
comment by Vanessa Kosoy (vanessa-kosoy) · 2021-07-31T20:05:40.916Z · LW(p) · GW(p)

This is known as "agent simulates predictor". There has been plenty of discussion of this problem. I'm currently feeling too lazy to try to summarize or link all the approaches, but here [LW(p) · GW(p)] are some thoughts I had about it via my infra-Bayesian theory.

comment by Richard_Kennaway · 2021-07-29T18:44:01.398Z · LW(p) · GW(p)

"Can You Ever Be Too Smart for Your Own Good? Comparing Linear and Nonlinear Effects of Cognitive Ability on Life Outcomes", by Matt I. Brown, Jonathan Wai, Christopher F. Chabris, 2021 Mar 8. (Open preprint.)

In a word, no.

"We found no support for any downside to higher ability and no evidence for a threshold beyond which greater scores cease to be beneficial. Thus, greater cognitive ability is generally advantageous—and virtually never detrimental."

comment by Fer32dwt34r3dfsz (rodeo_flagellum) · 2021-07-23T21:41:44.538Z · LW(p) · GW(p)

I have been reading content from LW sporadically for the last several years; only recently, though, did I find myself visiting several times per day, so I have made an account given my heightened presence.

From what I can tell, I am in a fairly similar position to Jozdien, and am also looking for some advice.

I am graduating with a B.A. in Neuroscience and Mathematics this January. My current desire is to find remote work (this is important to me) that involves one or more of: [machine learning, mathematics, statistics, global priorities research]. 

In the spirit of the post The topic is not the content [LW · GW], I would like to spend my time (the order is arbitrary) doing at least some of the following: discussing research with highly motivated individuals, conducting research on machine learning theory, specifically relating to NN efficiency and learnability, writing literature reviews on cause areas, developing computational models and creating web-scraped datasets to measure the extent of a problem or the efficacy of a potential solution, and recommending courses of action (based on the assessments generated from the previously listed work).

Generally, my skill set and current desires lead me to believe that I will find advancing the capabilities of machine learning systems, quantifying and defining problems afflicting humans, and synthesizing research literature to inform action all fulfilling, and that I will be effective in working on these things as well. My first question: How should I proceed with satisfying my desires, i.e. what steps should I take to determine whether I enjoy machine learning research more than global priorities research, or vice versa?

It is my plan to attend graduate school for one of [machine learning, optimization, computer science] at some point in life (my estimate is around the age of 27-30), but I would first like to experiment with working at an EA-affiliated organization (global priorities research) or in industry doing machine learning research. I am aware that it is difficult to get a decent research position without a Master's or PhD, but I believe it is still worth trying for. I have worked on research projects in computational neuroscience/chemistry for one company and three different professors at my school, but none of these projects turned into publications. This summer, I am at a research internship and am about to submit my research on ensemble learning for splice site prediction for review in the journal Bioinformatics - I am 70% confident that this work will get published, with me as the first author. Additionally, my advisor said he'd be willing to work with me to publish a dataset of 5,000 images I've taken of various fossils from my collection. While this work is not in machine learning theory, it increases my capacity for being hired and is helping me refine my competence as a researcher/scientist.

Several weeks ago, I applied to Open Philanthropy's Research Fellow position, which is a line of work I would love doing and would likely be effective at. They will contact me with updates on or before August 4th, and I anticipate that I will not be given the several follow-up test assignments OpenPhil uses to evaluate its candidates, given that their current Research Fellows have more advanced degrees and more experience with the social sciences than I do. I have not yet applied to any organizations whose focus is machine learning, but will likely begin doing so this coming November. This brings me to my final questions: What can I do to increase my capacity for being hired by an organization whose focus is global priorities research? Also, which organizations or institutions might be a good fit for both my skills in computational modeling and machine learning and my desire to conduct global priorities research?

Any other advice is welcome, especially advice of the form "You can better prioritize / evaluate your desires by doing [x]", "You seem to have [x] problem in your style of thought / reasoning, which may be assuaged by reading [y] and then thinking about [z]", or "You should look into work on [x], you might like it given your desire to optimize/measure/model things". Thank you, live well. 

comment by Pattern · 2021-07-14T03:32:47.876Z · LW(p) · GW(p)

I wonder if Shortform posts are replacing this, in terms of use:

If it’s worth saying, but not worth its own post, here's a place to put it.
Replies from: ChristianKl
comment by ChristianKl · 2021-07-14T06:39:09.566Z · LW(p) · GW(p)

If they were replacing it, we would see nobody using the latest open threads.

comment by Yoav Ravid · 2021-07-24T05:49:47.897Z · LW(p) · GW(p)

Does anyone know of an article that expands on the idea of separating teaching institutions from assessment? I wrote a short expansion (~750 words) myself, and will probably publish it soon, but I'd like to read articles by other people too if they exist.

Replies from: Pattern, ChristianKl
comment by Pattern · 2021-07-29T16:58:21.430Z · LW(p) · GW(p)

It might come up around new stuff that attempts to serve one of those roles.*

*ETA: teaching, or assessment.

comment by ChristianKl · 2021-07-25T11:49:10.544Z · LW(p) · GW(p)

While I have no specific article, I find the German Heilpraktiker system a good example: a stable system that has existed for 70+ years and separates the two.