Seeking information relevant to deciding whether to try to become an AI researcher and, if so, how.
post by ChrisHallquist · 2012-06-11T12:23:20.875Z
I was initially going to title this post "Should I try to become an AI researcher and, if so, how?" but see Hold Off On Proposing Solutions. So instead, I'm going to ask people to give me as much relevant information as possible. The rest of this post will be a dump of what I've figured out so far, so people can read it and try to figure out what I might be missing.
If you yourself are trying to make this decision, some of what I say about myself may apply to you. Hopefully, some of the comments on this post will also be generally applicable.
Oh, and if you can think of any bias-avoiding advice that's relevant here, along the lines of holding off on proposing solutions, that would be most helpful.
Though I'm really hard to offend in general, I've made a conscious decision to operate by Crocker's Rules in this thread.
One possibility that's crossed my mind for getting involved is going back to graduate school in philosophy to study under Bostrom or Chalmers. But I really have no idea what the other possible routes for me are, and ought to know about them before making a decision.
~~~~~
Now for the big dump of background info. Feel free to skip and just respond based on what's above the squigglies.
I seem to be good at a lot of different things (not necessarily everything), but I'm especially good at math. My SAT was 800 math, 790 verbal; my GRE was 800 on both, but in both cases I studied for the test, and getting my verbal score up was much harder work. However, I know there are plenty of people who are much better at math than I am. In high school, I was one of the very few students from my city to qualify for the American Regions Math League (ARML) competition, but did not do especially well.
Going to ARML persuaded me that I was probably not quite smart enough to be a world-class anything. I entered college as a biochemistry major, with the idea that I would go to medical school and then join an organization like Doctors Without Borders to just do as much good as I could, even if I wasn't a world-class anything. I did know at the time that I was better at math than biology, but hadn't yet read Peter Unger's Living High and Letting Die, so "figure out the best way to convert math aptitude into dollars and donate what you don't need to charity" wasn't a strategy I even considered.
After getting mostly B's in biology and organic chemistry my sophomore year, I decided maybe I wasn't well-suited for medical school and began looking for something else to do with my life. To this day, I'm genuinely unsure why I didn't do so well in biology. Did the better students in my classes have some aptitude I lacked? Or was it that being really good at math made me lazy about things that are inherently time-consuming to study, like (possibly) anatomy?
I took a couple of neuroscience classes junior year, and considered a career in the subject, but eventually ended up settling on philosophy, silencing some inner doubts I had about philosophy as a field. I applied to grad school in philosophy at over a dozen programs and was accepted into exactly one: the University of Notre Dame. I accepted, which was in retrospect the first or second stupidest decision I've made in my life.
Why was it a stupid decision? To give only three of the reasons: (1) Notre Dame is a department where evangelical Christian anti-evolutionists like Alvin Plantinga are given high status; (2) it was weak in philosophy of mind, which is what I really wanted to study; (3) I was squishing what were, in retrospect, legitimate doubts about academic philosophy, because once I made the decision to go, I had to make it sound as good as possible to myself.
Why did I do it? I'm not entirely sure, and I'd like to better understand this mistake so as to not make a similar one again. Possible contributing factors: (1) I didn't want to admit to myself that I didn't know what I was doing with my life; (2) I had an irrational belief that if I said "no" I'd never get another opportunity like that again; (3) my mom and dad went straight from undergrad to graduate school in biochemistry and dental school, respectively, and I was using that as a model for what my life should look like without really questioning it; (4) Notre Dame initially waitlisted me and then, when they finally accepted me, gave me very little time to decide whether or not to accept, which probably unintentionally invoked one or two effects straight out of Cialdini.
So a couple years later, I dropped out of the program and now I'm working a not-especially-challenging, not-especially-exciting, not-especially-well-paying job while I figure out what I should do next.
My main reason for now being interested in AI is that through several years of reading LW/OB, and the formal publications of the people who are popular around here, I've become persuaded that even if specific theses endorsed by the Singularity Institute are wrong, potentially world-changing AI is close enough to be worth thinking about seriously.
It helps that it fits with interests I've had for a long time in cognitive science and philosophy of mind. I think I actually was interested in the idea of being an AI researcher some time around middle school, but by the time I was entering college I had gotten the impression that human-like AI was about as likely in the near future as FTL travel.
The other broad life-plan I'm considering is the thing I should have considered going into college: "figure out the best way to convert math aptitude into dollars and donate what you don't need to charity." One sub-option is to look into computer programming, as suggested in an HPMOR author's note a month or two ago. My dad thinks I should take some more stats and go for work as an analyst for some big eastern firm. And there are very likely options in this area that I'm missing.
~~~~~
I think that covers most of the relevant information I have. Now, what am I missing?
Comments sorted by top scores.
comment by [deleted] · 2012-06-11T13:23:01.703Z
Note that FAI research, AI research, and AGI research are three very different things.
Currently, FAI research is conducted solely by the Singularity Institute and researchers associated with them. Looking at SI's publications over the last few years, the FAI research program has more in common with philosophy and machine ethics than with programming.
More traditional AI research, which is largely conducted at universities, consists mostly of programming and math and is conducted with the intent to solve specific problems. For the most part, AI researchers aren't trying to build general intelligence of the kind discussed on LW, and a lot of AI work is split into sub-fields like machine learning, planning, natural language processing, etc. (I'm an undergraduate intern for an AI research project at a US university. Feel free to PM me with questions.)
AGI research mostly consists of stuff like OpenCog or Numenta, i.e. near-term projects that attempt to create general intelligence.
It's also worth remembering that AGI research isn't taken very seriously by some (most?) AI researchers, and the notion of FAI isn't even on their radar.
↑ comment by ChrisHallquist · 2012-06-12T01:59:41.340Z
This is useful, and suggests "learn programming" is useful preparation for both work on AI and just converting math ability into $.
One thing it seems to leave out, though, is the stuff Nick Bostrom has done on AI, which isn't strictly about FAI, though it is related. Perhaps we need to add a category of "general strategic thinking on how to navigate the coming of AI."
I should learn more about AGI projects. My initial guess is that near-term projects are hopeless, but in their "Intelligence Explosion" paper Luke and Anna express the view that a couple AGI projects have a chance of succeeding relatively soon. I should know more about that. Where to begin learning about AGI?
↑ comment by [deleted] · 2012-06-12T02:23:43.885Z
As I understand it, "learn programming" is essential for working in pretty much every sub-field of computer science.
The stuff Nick Bostrom has done is generally put into the category machine ethics. (Note that this is seen as a sub-field of philosophy rather than computer science.)
I don't know much about the AGI community beyond a few hours of Googling. My understanding is that there are a handful of AGI journals, the largest and most mainstream being this one. Then there are independent AGI projects, which are sometimes looked down upon for being hopelessly optimistic. I don't know where you should begin learning about the field--I suppose Wikipedia is as good a place as any.
↑ comment by Kaj_Sotala · 2012-06-12T16:03:16.535Z
My understanding is that there are a handful of AGI journals, the largest and most mainstream being this one.
That's the only one.
↑ comment by jacob_cannell · 2012-06-11T21:59:46.524Z
Note that FAI research, AI research, and AGI research are three very different things.
Perhaps FAI differs, but AI and AGI really just describe project scale/scope/ambition variance within the same field.
It's also worth remembering that AGI research isn't taken very seriously by some (most?) AI researchers,
Most of the high-profile AI researchers seem to point to AGI as the long-term goal, and this appears just as true today as it was in the early days. There may be a trend to downplay the AGI-Singularity associations.
↑ comment by [deleted] · 2012-06-12T02:06:19.852Z
Perhaps FAI differs, but AI and AGI really just describe project scale/scope/ambition variance within the same field.
My understanding is that AI and AGI differ in terms of who is currently carrying out the research. The term "AI research" (when contrasted with "AGI research") usually refers to narrow-AI research projects carried out by computer scientists. "AGI research" has become more associated with independent projects and the AGI community, which has differentiated itself somewhat from the AI/CS community at large. Still, there are many prominent AI researchers and computer scientists who are involved with AGI, e.g. these people. "FAI research" is something completely different--it's not a recognized field (the term only has meaning to people who have heard of the Singularity Institute), and currently it consists of philosophy-of-AI papers.
Most of the high-profile AI researchers seem to point to AGI as the long-term goal, and this appears just as true today as it was in the early days.
Yes, but they are much more pessimistic about when such intelligences will be developed than they were, say, 30 years ago, and academics no longer start AI research projects with the announcement that they will create general intelligence in a short period of time. Independent AGI projects of the sort we see today do make those kinds of grandiose announcements, which is why they attract scorn from academics.
There may be a trend to downplay the AGI-Singularity associations.
Definitely.
↑ comment by VincentYu · 2012-06-11T13:52:13.990Z
I'm an undergraduate intern for an AI research project at a US university.
Off-topic, potentially interesting to other undergrads: Do you have information/experience on the competitiveness* of summer research programs (particularly REUs or other formally organized programs) in AI (or CS in general), relative to programs in physics/astronomy and math (or other disciplines)?
(I've found that recent math REUs seem to be substantially more competitive than similar physics/astronomy programs. Obviously, this is a broad observation that should not be applied to individual programs.)
*Competitiveness as in difficulty of being accepted by a program.
comment by [deleted] · 2012-06-11T12:50:05.666Z
Another option: get finance credentials and work at a hedge fund. Finance relies a lot on math, which I infer you are pretty awesome at. If you do well, then you can make a lot of money. Beyond your living costs, donate your preferred percentage to charity and save much of the rest. Retire relatively early and use your savings to fund the philosophical projects you're interested in.
That's assuming the best-case scenario, of course. But it's a path I'm seriously looking into, and I see some similarities between our situations. Hope this is of help. Let me know if you'd like more info.
↑ comment by ChrisHallquist · 2012-06-12T02:02:16.847Z
Which leads to the question: what information can I use to decide between programming vs. hedge fundie vs. something else as the most effective way to convert math ability into $?
↑ comment by handoflixue · 2012-06-12T18:07:02.652Z
Risk (how likely you are to succeed at this career) vs. Reward (salary seems very important to you; job enjoyment is always important, since it will motivate you to succeed and push yourself).
Salary should be fairly easy to research, although at least in CS pay is going to vary dramatically depending on where you live (be sure to adjust salary for cost of living!), and how advanced you are in the field. I'd assume that's also true of accounting, but CS is the field I know and can speak to from experience.
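As a toy illustration of the cost-of-living adjustment (all salaries and index values below are made up; real indices come from sources like the BLS or commercial cost-of-living calculators), a few lines of Python are all it takes:

    # Illustrative only: made-up salaries and cost-of-living indices
    # (100 = national average).
    offers = {
        "Programmer, Midwest city": (70_000, 90),
        "Programmer, West Coast":   (95_000, 140),
        "Analyst, East Coast":      (85_000, 125),
    }

    for job, (salary, col_index) in offers.items():
        adjusted = salary / (col_index / 100)  # salary in "average-city dollars"
        print(f"{job}: nominal ${salary:,}, adjusted ${adjusted:,.0f}")

The point is just that a nominally bigger offer can turn out to be the smaller one once you normalize.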
Job enjoyment and probability of success should be fairly easy to get a feel for by just spending a few days at each activity.
I'd personally weigh enjoyment as the most important factor, simply because I've found that a happy job makes me motivated, and motivation makes me do better work, which leads to promotions, which leads to more pay. If you have a strong ability to do uninteresting tasks, then salary would probably become more important.
One last factor is cost of living based on profession: Computer science tends to let you get away with jeans and a t-shirt, there isn't really any need to display wealth. Working at a hedge fund, you might be expected to drive a nice car and wear fancy clothes if you want to succeed, which can reduce your salary. This is especially important if you're prone to such behavior already -- working in accounting might very well encourage you to tie up a lot of your salary in status symbols instead of charity (I would doubt this is the case for you, based on what you wrote, but I like to make my advice general :))
↑ comment by Viliam_Bur · 2012-06-13T10:23:33.293Z
Job enjoyment and probability of success should be fairly easy to get a feel for by just spending a few days at each activity.
Trying something is much better than just thinking or talking about it, but unfortunately, spending a few days doing it also brings some biases.
For example, many IT jobs are essentially this: "get data from database column X and display it on a web page in input field Y". When you do it for the first time, it is interesting; you learn something, and you think about how you would improve the process. When you do it for the 100th time, it becomes kind of boring, and your first year at the job is not over yet. When you do it 10 years in a row, you realize you have wasted your life.
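(To make that concrete, here is a rough Python sketch of such a task -- table, column, and field names all hypothetical:)

    import sqlite3

    # Hypothetical sketch of the "column X -> input field Y" task.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (name TEXT)")
    conn.execute("INSERT INTO customers (name) VALUES ('Alice')")

    # Get data from database column X...
    name = conn.execute("SELECT name FROM customers LIMIT 1").fetchone()[0]

    # ...and display it on a web page in input field Y.
    print(f'<input type="text" name="customer_name" value="{name}">')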
There is also a difference between designing an algorithm and writing new, elegant code -- what you do when you are a new person on a new project -- and adding new functionality or fixing a bug in a huge pile of undocumented spaghetti code written a few years ago by a person no longer working for the company. The longer you stay in the company, the more of this kind of work you get, unless you move to a management position.
I'd personally weigh enjoyment as the most important factor, simply because I've found that a happy job makes me motivated, and motivation makes me do better work, which leads to promotions, which leads to more pay.
I agree that work enjoyment is more important than the salary (assuming the salary is enough to survive decently), but unfortunately the working conditions change while the salary often stays the same -- be careful not to trade too much salary for a good first impression.
The chance of promotion depends not only on your work, but also on size and structure of the company. In a very small company, there can simply be no position to be promoted to. In a medium size company, the positions above you can consist of only management work, not programming -- if you enjoy programming, but don't enjoy management, you probably don't want this kind of promotion. On the other hand, when you are part of a larger team, your individual contribution can dissolve in the work of others; and if someone else brings the whole project to ruin, whatever good work you did becomes just a part of a failed project. Your good work is not automatically visible; to be promoted, your social skills are no less important than your technical skills.
So I agree about importance of job enjoyment, but I doubt how much it can be estimated by spending a few days doing it.
↑ comment by handoflixue · 2012-06-13T18:07:14.580Z
It's worth noting that, at least in programming, experience in one job can generally be used to request (and receive) a higher salary at your next job. Even if the current position doesn't offer promotions, you can still move up by moving out.
When you do it 10 years in a row, you realize you have wasted your life.
This is why I like programming. If I do something more than a few times, I write a program that then does it for me. I've had jobs where I was a "Full Time Employee" that worked 10-20 hours/week because I'd automated most of the job away. I mostly just provided on-site training and support for the code I'd written. I eventually moved on because sitting in a cube playing NetHack was boring, even if it was a living :)
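For instance, a toy version of that kind of automation, with hypothetical file names -- a few lines that replace a chore someone used to do by hand:

    import csv, glob

    # Toy example: merge the daily report CSVs someone used to
    # combine by hand (file names are hypothetical).
    with open("combined_report.csv", "w", newline="") as out:
        writer = csv.writer(out)
        for path in sorted(glob.glob("reports/daily_*.csv")):
            with open(path, newline="") as f:
                writer.writerows(csv.reader(f))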
↑ comment by Viliam_Bur · 2012-06-13T21:00:29.139Z
experience in one job can generally be used to request (and receive) a higher salary at your next job. Even if the current position doesn't offer promotions, you can still move up by moving out.
Just don't do it too often, or the employers may notice the pattern and start asking about it at job interviews.
↑ comment by handoflixue · 2012-06-13T21:28:32.699Z
You raise a valid point. However, programming contracts are often 3-12 months. It's not really exceptional behavior to a lot of employers :)
↑ comment by Viliam_Bur · 2012-06-14T08:52:14.262Z
This sounds like a different side of the planet. (checks user details) Yes, it is. :D
Here the cultural preference is one employment for life for average jobs, and for IT jobs I would guess 5 or 10 years per job. (For more adventurous people, there is an opportunity to do 3-12 months contracts for foreign companies, but this includes either a lot of travelling or living abroad. This is mostly done as a sole proprietorship, which means that if you happen to screw up something, say goodbye to your property. I have considered that too, but unfortunately I hate travelling.)
↑ comment by ChrisHallquist · 2012-06-13T00:50:51.059Z
Your points about enjoyment are very good ones. After all, we know that job satisfaction is more important than money for happiness (as long as you're above the poverty line).
In fact, I'm not naturally inclined to care much about money. But I know money would (1) allow me to do good through charitable giving and (2) allow me to help my (hypothetical) kids pay for college - something my dad did for me and which I'm extremely grateful for.
↑ comment by [deleted] · 2012-06-12T02:27:51.864Z
I haven't done much specific research into comp sci vs. finance vs. other options. While I'll eventually have to, I'm not yet at the point in my life where I have to choose between those options. So I can't give any firm advice rooted in experience.
But I don't see why we can't approach it like any other decision problem. Figure out what specific factors you want to maximize. You seem to have done that qualitatively, but try to refine it more and quantify it. Money is important, of course, but I infer that "the most efficient way to convert math ability into money" is a fake utility function. Working to maximize a fake utility function won't be optimal. So first off, your real values are very important information.
Once your goals are well defined, get a list of five to ten of your best options. For each option, research and construct a plan of how you would go about maximizing those values. For example, let's talk about the finance route. Look at stats from the US labor department about the professions. How much would you likely make in a year? Is that enough for what you want? Also look at the investments you'd have to make in order to work in finance. You need finance credentials (such as a CFA) and licensing from the state. The CFA is pretty hard to get, and takes three years. You'd likely also have to move to a large city. Does that gel with your preferences about where to live? The answers to all of these questions (and many more) are also important information. After you evaluate all the plans, make a choice and go for it.
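To make the evaluation step concrete, here is a minimal Python sketch of a weighted-factor comparison. The options, factors, weights, and scores are all invented; the point is only to make the trade-offs explicit, not that a weighted sum is the one right aggregation:

    # Options, factors, weights, and 0-10 scores are all invented.
    weights = {"income": 0.4, "enjoyment": 0.3,
               "charity_potential": 0.2, "location_fit": 0.1}

    options = {
        "programming": {"income": 7, "enjoyment": 8,
                        "charity_potential": 6, "location_fit": 8},
        "finance/CFA": {"income": 9, "enjoyment": 5,
                        "charity_potential": 8, "location_fit": 4},
        "grad school": {"income": 3, "enjoyment": 9,
                        "charity_potential": 4, "location_fit": 6},
    }

    for name, scores in options.items():
        total = sum(weights[k] * scores[k] for k in weights)
        print(f"{name}: {total:.1f}")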
I realize this is all basic 101 decision theory, not some real insight on my behalf. But oftentimes, when faced with a big decision, I've found it very useful to review the basics. It helps me break things down into manageable bits. If you'd care to, I'd be happy to get on Skype and chat with you in more detail. Personally, I sometimes find it easier to work through these problems verbally -- easier to build a rapport and throw ideas around.
comment by jsteinhardt · 2012-06-12T16:52:20.122Z
I speak as someone who is about to enter their first year of grad school in AI; note that I have also already spent a year doing robotics and a year doing AI research, so I am slightly less naive than the typical first-year grad student.
The first half of this comment will discuss my thoughts on the merits of AI / AGI / FAI research. The second half will discuss how to do AI research if you want to. They will be separated by a bunch of dashes (-------).
I personally chose to go into AI research for a few reasons, although I don't have a high degree of certainty that it's the right thing to do. My main reasons were a high degree of aptitude, and a belief that FAI research uninformed by the current state of the art in AI/ML is much more likely to fail (I also think that FAI is much more a social problem than a technical problem, but I'm not sure if that's relevant).
I am pretty skeptical of AGI projects in general, mostly based on an outside view and based on looking at a small number of them. Claims of near-term AI indicate a lack of appreciation for the difficulty of the problem (both engineering barriers that seem surmountable with enough man-hours, and theoretical barriers that require fundamentally new insights to get past). I don't want to say that it's impossible that there is some magical way around these, but to me it pattern-matches to amateur mathematicians proposing approaches to Fermat's Last Theorem. If you have some particular AGI project that you think is likely to work, I'm happy to look at it and make specific comments.
I am less skeptical of FAI research of the form done by e.g. Wei Dai, Vladimir Nesov, etc. I view it as being on the far theoretical end of what I see as the most interesting line of research within the AI community. I also think it's possible for such research to be conducted as essentially AI research, at one of the more philosophically inclined labs (most likely actually a computational cognitive science lab rather than an AI lab, such as those of Tom Griffiths, Noah Goodman, Josh Tenenbaum, or Todd Kemp).
-------
Okay, so say you want to be an AI researcher. How do you go about doing this? It turns out that research is less about mathematical ability (although that is certainly helpful) and more about using your time effectively (both in terms of being able to work without constant deadlines and in terms of being able to choose high-value things to work on). There are also a ton of other important skills for research, which I don't have time to go into now.
If you want to work at a top university (which I highly recommend, if you are able to), then you should probably start by learning about the field and then doing some good research that you can point to when you apply. As an undergrad, I was able to do this by working in labs at my university, which might not be an option for you. The harder way is to read recent papers until you get a good idea, check to see if anyone else has already developed that idea, and if not, develop it yourself, write a paper, and submit it (although to get it accepted, you will probably also have to write it in the right format; most notably, introductions to papers are notoriously hard to write well). It also helps to be in contact with other good researchers, and to develop your own sense of a good research program that you can write about in your research statement when you apply. These are all also skills that are very important as a grad student, so developing them now will be helpful later.
Unfortunately, I have to go now, but if I left anything out feel free to ask about it and I can clarify.
comment by Shmi (shminux) · 2012-06-11T19:47:04.450Z
Have you looked into theoretical computer science, for example of the type Scott Aaronson is doing? It is mathematical enough for your liking, yet applied enough to avoid competition with the best and the brightest. It is also quite relevant to AI research.
↑ comment by ChrisHallquist · 2012-06-12T02:13:41.600Z
No, I haven't. Tell me more (or link me to places where I can read more).
↑ comment by Shmi (shminux) · 2012-06-12T04:06:17.456Z
Scott Aaronson's web page is a good place to start. It has links to a few relevant courses he taught over the years, as well as to his research. His blog frequently touches on matters of AI as well.
comment by VincentYu · 2012-06-11T13:00:51.612Z
I understand that you are mainly interested in FAI research, but perhaps you can try out mainstream AI research (or something else; see Tetronian's comment) for a few months to see if AI research in general suits you? I mention this because I think you are likely able to find work in a nearby academic AI lab if you offer to do so at low or no wages (the standard undergrad rate where I am is $10/hr). Just contact the PI directly. Of course, the opportunity cost may be too large – I'd imagine that you'd have to permanently quit your current job to do this. (But it only costs sending a few emails to find out whether there are nearby AI groups who would take you in.)
↑ comment by ChrisHallquist · 2012-06-12T02:13:15.677Z
Ugh. My job doesn't pay that well, but $10/hr would hurt. Maybe if it were a way to build up marketable experience as a computer programmer?
comment by Jayson_Virissimo · 2012-06-12T04:34:32.453Z
Have you considered applying to a cognitive science program? It is an interdisciplinary field which overlaps with AI (which you are already considering), neuroscience (which you have a little experience with), and philosophy (which you seem to like).
↑ comment by Will_Newsome · 2012-06-12T06:30:34.408Z
(Specifically, all the cool kids do computational cognitive science.)
↑ comment by jsteinhardt · 2012-06-12T07:29:35.921Z
↑ comment by ChrisHallquist · 2012-06-12T10:10:30.392Z
Forgive me for possible cluelessness, but isn't cognitive science an umbrella term that covers psychology, neuroscience, AI, etc., and which you could therefore get into through a program in any of those things?
But I take it there are programs that are labeled "cognitive science" programs. Links to more information on them in general, how to apply to them, etc?
↑ comment by Jayson_Virissimo · 2012-06-12T11:07:10.935Z
For example, my alma mater, ASU, has a PhD program in Simulation, Modeling, and Applied Cognitive Science. There are similar programs at other schools. As far as I can tell, they allow students to take classes in all the disciplines you mentioned, but allow them to perform research that doesn't fit uniquely into their individual domains.
comment by kjmiller · 2012-06-12T03:54:08.361Z
Consider getting back into neuroscience!
AGI as a project is trying to make machines that can do what brains do. One great way to help that project is to study how the brains themselves work. Many many key ideas in AI come from ideas in neuroscience or psychology, and there are plenty of labs out there studying the brain with AI in mind.
Why am I telling you this? You claim that you'd like to be an AI researcher, but later you imply that you're new to computer programming. As mentioned in some of the comments, this is likely to present a large barrier to "pure" AI research in an academic CS department, corporation, etc. A computational psychology or neuroscience lab is likely to be much more forgiving for a newbie programmer (though you'll still have to learn). The major things that graduate programs in computational neuro look for are strong math skills, an interest in the mind/brain, and a bit of research experience. It sounds like you've got the first two covered, and the last can be gained by joining an academic lab as a research assistant on a temporary basis.
If you're considering giving academia another shot, it's worth thinking about neuro and psych (and indeed linguistics and cognitive science, or a different/better philosophy program) as well as computer science and pure AI.
Good luck!
comment by John_Maxwell (John_Maxwell_IV) · 2012-06-11T19:47:03.369Z
I'm in favor of you becoming an FAI researcher, possibly independent of SI, because you've done a decent job of disagreeing with them in the past and you seem to have relevant interests and aptitudes.
Obviously you'll have to support yourself or get someone to fund you. Getting a high income career, living frugally, and quitting after a while to do research full-time could be an interesting idea. One possibility would be to take periodic years-long "research vacations" in parts of the world with lower cost-of-living. You could buy some dogs for guaranteed companionship. Note: these are wild ideas; I have no idea if they would actually work.
The Republic of Georgia has the most lax visa laws (if you're from a developed country) of any country I looked at. You can basically stay in the country an entire year without any sort of visa, if I recall correctly. And they have a program for English teachers where they will pay for your plane ticket in and out of the country, give you housing and a stipend, etc.
comment by IlyaShpitser · 2012-06-11T18:47:51.403Z
I think the most important question here is whether you can deal with general issues related to being an academic. If your answer is "yes," the specific thing you want to study is less important, in my opinion (people often drastically change trajectories in their careers, especially after defending).
↑ comment by ChrisHallquist · 2012-06-12T02:11:32.832Z
What are those "general issues"? I feel fairly aware of the status games; is that the main thing, or are there other major things to take into account?
My current impression of academia is that the status games are everywhere, but they're worse in some places than others -- in particular, in fields where there's no general consensus about what "getting it right" looks like, since there scholars can't be rewarded for getting things right.
Philosophy is one of those disciplines. But even in philosophy, Nick Bostrom somehow manages to get away with being Nick Bostrom. Maybe the secret is just to persuade enough of the right people that what you want to do is valuable, so they'll let you do it, and not worry too much about what the rest of the philosophical community thinks of you?
↑ comment by IlyaShpitser · 2012-06-12T02:18:29.808Z
Status games are everywhere. I mostly meant it might be a good idea to talk to grad student friends, or better yet professor friends about what life is like as an academic. Being in academia is in many ways an "unusual" job.
comment by Daniel_Burfoot · 2012-06-12T01:51:18.524Z
Why not do an MSc in computer science with a focus on AI? If you decide you like research, you can continue on down that path, if not, you will have a valuable credential that you can use to get a good job. Many subfields of AI are heavily neuroscience-inspired, so that part of your undergrad coursework will come in handy.
↑ comment by ChrisHallquist · 2012-06-12T10:11:01.681Z
Links to what programs are good, how to make a good application, etc?
comment by Tuxedage · 2012-06-11T15:53:47.405Z
When you talk about "AI research", are you specifically referring to narrow AI or strong AI? Because these two fields are very different things, with very different requirements and skillsets.
If you happen to be referring to Strong AI, aka Friendly AGI, check out "So you want to be a Seed AI Programmer", assuming you haven't already seen it. http://web.archive.org/web/20101227203946/http://www.acceleratingfuture.com/wiki/So_You_Want_To_Be_A_Seed_AI_Programmer
↑ comment by timtyler · 2012-06-12T09:58:40.444Z
When you talk about "AI research", are you specifically referring to narrow AI or strong AI? Because these two fields are very different things, with very different requirements and skillsets.
Allegedly - but look at who the proponents are for why they might think that.
↑ comment by ChrisHallquist · 2012-06-12T02:31:00.896Z
Thanks for the link. Made for interesting reading, even though I'm skeptical of some of the premises of it.
↑ comment by jacob_cannell · 2012-06-11T21:55:01.610Z
Because these two fields are very different things, with very different requirements and skillsets.
Really? How so?
comment by Kaj_Sotala · 2012-06-22T12:25:08.330Z
Related:
- Ben Goertzel's Sketch of an AGI curriculum
- Pei Wang's Suggested Education for Future AGI Researchers
comment by private_messaging · 2012-06-11T19:38:53.812Z
I would suggest working on practical software, such as software that solves for the best designs of transistors (gradually generalized), and perhaps for more optimal code for doing that. An AGI will have to out-foom such software to beat it, and probably won't, as 'general == good' is just the halo effect (and software self-improvement is likely to be of very limited help for automated superior computer design). And if it won't out-foom such software, then we don't have the scenario of an AGI massively outpowering mankind, and the whole risk issue is a lot lower.
This prevents a boatload of nasty scenarios, including "the leader of the FAI team is a psychopath and actually sets himself up as dictator, legalizes rape, etc.", which should be expected to have a risk of at least 2-3% if the FAI team is to be technically successful (about 2-3% of people are psychopaths, and those folks have an edge over normals when it comes to talking people into giving them money and/or control).
↑ comment by timtyler · 2012-06-12T10:00:12.519Z
I would suggest working on practical software, such as software that solves for the best designs of transistors (gradually generalized), and perhaps for more optimal code for doing that. An AGI will have to out-foom such software to beat it, and probably won't [...]
A bizarre optimisation problem to make that claim of.