Want to work on a "strong AI" topic for my bachelor's thesis
post by kotrfa · 2014-05-14T10:28:55.354Z · LW · GW · Legacy · 16 comments
Hello,
I currently study maths, physics and programming (a general course) at CVUT in Prague (Czech Republic). I'm finishing my second year and I'm really into AI. The most interesting questions for me are:
- which formalism to use to connect epistemological questions (about knowledge, memory, ...) and cognitive science with maths, and how to formulate them
- finding the principles behind those and trying to "materialize" them into new models
- I'm also interested in philosophy-like questions about AI
My questions for you:
- Am I naive to hope that I can do anything useful and fulfilling (given the above) in this area ("strong AI")?
- Where (and for what purpose) should I try to find someone (or something) who could help me make further progress in my studies in this area?
- Do you think my enthusiasm is pointed in a good direction? Have I bitten off more than I can chew? If so, where should I go instead?
16 comments
Comments sorted by top scores.
comment by Risto_Saarelma · 2014-05-15T16:49:50.393Z · LW(p) · GW(p)
You might want to try to build yourself a T-shaped skillset on the relevant disciplines, based on MIRI's course recommendation list for example. Meaning that you'll try to master one part of the domain well enough that you could eventually do a PhD on it, and have enough awareness on the rest to be reasonably conversant about it.
My impression of that stuff is that if you want to be serious about it, most of it is quite heavy going compared to general STEM undergraduate fare. You'll probably want to be either the sort of good at math who regularly leaves "is good at math" people in the dust or be prepared to work quite hard.
Replies from: Kaj_Sotala, kotrfa
↑ comment by Kaj_Sotala · 2014-05-16T14:27:59.154Z · LW(p) · GW(p)
You'll probably want to be either the sort of good at math who regularly leaves "is good at math" people in the dust or be prepared to work quite hard.
At least if you're going by the "AGI is all about math" route. If one takes the "AGI is more about cognitive science and psychology" approach, then they don't necessarily need to be quite that good at math, though a basic competence is still an absolute must.
Replies from: kotrfa, Risto_Saarelma
↑ comment by kotrfa · 2014-05-17T11:08:19.243Z · LW(p) · GW(p)
Thank you for the answer.
Could you point me somewhere where I could find out which problems/directions you are talking about? Since I'm not such a shining mathematician, maybe I could contribute in those areas, which I find similarly interesting.
↑ comment by Risto_Saarelma · 2014-05-17T04:56:50.487Z · LW(p) · GW(p)
Have there been any significant advances in AI or AGI theory so far made by people from a cognitive science or psychology background who didn't also have very strong math or computer science skills? It's a bit worrisome that Douglas Hofstadter comes to mind as the paradigmatic example of this approach, and he seems to have achieved nothing worth writing home about during a 30+-year career. Not to mention that he did have strong enough math skills to initially do a PhD in theoretical physics.
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2014-05-17T11:05:52.682Z · LW(p) · GW(p)
That's hard to answer, given that there's no general agreement on what would count as a significant advance in AGI theory. Something like LIDA feels like it could possibly be important and useful for AGI, but also maybe not. The Global Workspace Theory behind it does seem important, though. Various other neuroscience work, like the predictive coding hypothesis of the brain, also seems plausibly important.
Replies from: Risto_Saarelma
↑ comment by Risto_Saarelma · 2014-05-17T11:58:33.720Z · LW(p) · GW(p)
So far I'd count AIXI and whatever went into building IBM Watson (incidentally, what did go into building it, is there a summary somewhere about what you'd want to study if you wanted to end up capable of working on something like that?) as reasonably significant steps. AIXI is pure compsci, and I haven't heard anything about insights from cognitive science playing a big part in getting Watson working compared to plain old math and engineering effort.
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2014-05-17T13:37:02.053Z · LW(p) · GW(p)
I'd count the predictive coding model and probably also GWT as larger steps than AIXI. I'm not sure where I'd put Watson.
incidentally, what did go into building it, is there a summary somewhere about what you'd want to study if you wanted to end up capable of working on something like that?
Here is a paper about how Watson works in general, and here's another about how it reads a clue. (Unsurprisingly, machine learning, natural language processing, and statistics skills seem relevant.)
↑ comment by kotrfa · 2014-05-17T10:59:51.134Z · LW(p) · GW(p)
Thanks for the answer.
I don't know how I could have missed MIRI's course recommendation list. It looks great. I will definitely take a closer look at it.
The second part is a bit of a disappointment for me, since I'm not that kind of student. I'm in the stronger group of mathematicians at my university, but within that group I'm average or below (they are among the best in my country).
Maybe I put too much weight on the maths part of AGI, which obviously isn't for me. And I'm not sure about doing a PhD in it right now either. Do I understand correctly that right now there are no less complicated problems in AGI, or problems for regular people? Nothing where I could develop skills that could be useful in other ways than doing a PhD and heavy research with the AGI world leaders?
Thanks
Replies from: Kaj_Sotala, Risto_Saarelma
↑ comment by Kaj_Sotala · 2014-05-17T11:07:09.160Z · LW(p) · GW(p)
Alternative AGI course recommendation lists: one by Pei Wang, another by Ben Goertzel.
↑ comment by Risto_Saarelma · 2014-05-17T12:43:03.814Z · LW(p) · GW(p)
Do I understand correctly that right now there are no less complicated problems in AGI, or problems for regular people? Nothing where I could develop skills that could be useful in other ways than doing a PhD and heavy research with the AGI world leaders?
What you're looking for is work within a research program (in the philosophy of science sense, not in the organizational sense). You get a research program when you've figured out the basic paradigms of your field and have established that you can get meaningful progress by following those. Then you can get work like "grow these microbial strains in petri dishes and document which of these chemicals kills them fastest" or "work on this approximation algorithm for a special case of graph search and see if you can shave off a fraction from the exponent in the complexity class".
The problem with AGI is that nobody really has an idea yet of the basic principles on which to build an expansive research program like that. The work is basically about just managing to be clever and knowledgeable enough in all the right ways so that you have some chance of lucking out into actually working on something that ends up making progress. It's like trying to study electromagnetism in 1820 instead of today, or classical mechanics in 1420 instead of today.
Also, since there's no consensus on what will work, the field is full of semi-crackpots like Selmer Bringsjord, so even if you do manage to get into academic research with someone credentialed and working on AGI, chances are you've found someone running an obviously dead-ended avenue of research, and you'll end up with a PhD on the phenomenological analysis of modal logics for hypercomputation in Brooksian robotics, even more useless to anyone with an actual chance of developing an AGI than you were before you got started. I'm not even sure the problem is just "there are some cranks around in academic AGI research and you should avoid them"; it might be "academic AGI research is currently considered a dead-end field, and so most of the people who end up there will be useless cranks."
If there's no entry level in AGI, the best thing to do is to try to figure out what the people who actually seem to be doing something promising in AI (AIXI, Watson, Google's self-driving cars) were doing as their entry-level disciplines. My guess is lots and lots of theoretical computer science and math.
It is an open problem, so there's no guarantee that what gets the most impressive results today will get the most impressive results 5 years from now. And we don't even know which of the current research directions will end up actually being on the right track. But whoever is making progress in AGI in 5 years is probably going to be someone who can and does understand what's going on in today's state-of-the-art.
comment by Transfuturist · 2014-05-14T13:31:49.737Z · LW(p) · GW(p)
By the first two bullet points, do you mean you want to build formal models of naturalized induction?
I've written an essay on the effects of interactive computation as an improvement for Solomonoff-like induction. (It was written in two all-nighters for an English class, so it probably still needs proofreading. It isn't well-sourced, either.) Do you mean things like that? I want to form a better formalization of naturalized induction than Solomonoff induction, one designed to be usable by space-, time-, and rate-limited agents, and interactive computation was a necessary first step. AIXI is by no means an ideal inductive agent.
I very much want to be hired for work like this.
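For readers who haven't seen it, the Solomonoff prior that this comment is contrasting with is usually written roughly as follows; this is a standard textbook formulation, not something taken from the linked essay:

```latex
% Solomonoff prior: the weight assigned to a string x is the total weight of all
% programs p that make a fixed universal prefix machine U output something beginning with x.
M(x) = \sum_{p \,:\, U(p) = x\ast} 2^{-\ell(p)}
% Here \ell(p) is the length of the program p in bits, and x\ast denotes any output
% string that has x as a prefix.
```

AIXI, mentioned above, is (roughly) the agent that chooses actions to maximize expected reward under this prior taken over computable environments; neither object is computable by a space- or time-limited agent, which is the kind of limitation the comment is pointing at.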
Replies from: kotrfa
comment by Risto_Saarelma · 2014-05-17T12:58:43.500Z · LW(p) · GW(p)
Previous thread on getting into AI research.
comment by Kaj_Sotala · 2014-05-15T07:30:06.500Z · LW(p) · GW(p)
You could look at the papers published in past AGI conferences for people/topics that seem relevant or worthwhile. E.g. AGI 2013, AGI 2012. There's also a Journal of Artificial General Intelligence whose past issues you could browse.
For something more "mainstream-friendly", there are the various computational cognitive architectures that have been developed in cognitive psychology, such as ACT-R and LIDA. Stan Franklin, one of the people behind LIDA, has also had some involvement with the AGI community.
Am I naive to hope that I can do anything useful and fulfilling (given the above) in this area ("strong AI")?
I don't know what the requirements for a Bachelor's thesis at your university are, but at mine, you could do a Bachelor's thesis that was basically just a literature review and didn't even try to produce any new information. Even if you couldn't actually contribute anything new at this point, even taking this opportunity to familiarize yourself with the existing work in the field would probably be useful for your future efforts.
Replies from: kotrfa
↑ comment by kotrfa · 2014-05-17T11:05:01.447Z · LW(p) · GW(p)
Thank you. I'm just going to go through the published papers. Great idea!
The "mainstream-friendly" stuffs are maybe the middle-path for which I'm looking for, since response from Risto_Saarelma is pretty explanatory about my possibilities.
And it is possible to do a similar kind of Bachelor's thesis here; that is not a problem. But, to be honest, I'd like to do some work which I find fulfilling, even in the tiniest amount. I'm doing the literature review in my free time.
comment by kotrfa · 2014-06-10T21:13:07.064Z · LW(p) · GW(p)
Hello.
I was searching further into my interests and I've found an opportunity to get a nice Bachelor's topic in maths/informatics/neuroscience. I was offered two topics:
- Analyse properties of correlation-matrix graphs (e.g. the small-world property; a rough sketch of what this could involve is at the end of this comment)
- Conditional mutual information (a standard definition is given below): how to detect synergy, and which values it can take when there is a restriction on the cardinality of the given variables.
Both are connected with neuroscience (e.g. the correlation matrix is computed from brain activity, the variables are the activities of different parts of the brain, etc.).
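For reference, the conditional mutual information mentioned in the second topic has a standard definition, and one common (though not the only) way to quantify synergy is via the interaction information built from it; none of this is specified in the comment itself:

```latex
% Conditional mutual information of X and Y given Z (discrete case):
I(X;Y \mid Z) = \sum_{x,y,z} p(x,y,z)\, \log \frac{p(x,y \mid z)}{p(x \mid z)\, p(y \mid z)}

% Interaction information; positive values are often read as synergy,
% negative as redundancy (sign conventions vary across the literature):
I(X;Y;Z) = I(X;Y \mid Z) - I(X;Y)
```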
Does anyone have any information or advice on this?
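As a purely illustrative sketch of the first topic, here is one way to go from (simulated) brain-activity time series to a correlation-matrix graph and the two quantities usually compared when checking for the small-world property. The random data, the threshold, and the use of numpy/networkx are assumptions made for the example, not part of the offered topic:

```python
# Hypothetical sketch: correlation-matrix graph from simulated brain-activity signals.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
signals = rng.standard_normal((32, 1000))   # 32 made-up brain regions, 1000 time points each

corr = np.corrcoef(signals)                 # 32 x 32 correlation matrix (rows = regions)
threshold = 0.05                            # arbitrary threshold chosen for the example
adjacency = (np.abs(corr) > threshold).astype(int)
np.fill_diagonal(adjacency, 0)              # drop self-correlations

G = nx.from_numpy_array(adjacency)          # unweighted graph on the 32 regions

# Restrict to the largest connected component so path lengths are defined.
if not nx.is_connected(G):
    G = G.subgraph(max(nx.connected_components(G), key=len)).copy()

# A small-world network has high clustering but short average path length
# relative to a random graph with the same number of nodes and edges.
print("average clustering:", nx.average_clustering(G))
print("average shortest path length:", nx.average_shortest_path_length(G))
```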