Your account of "proof" is not actually an alternative to the "proofs are social constructs" description, since these are addressing two different aspects of proof. You have focused on the standard mathematical model of proofs, but there is a separate sociological account of how professional mathematicians prove things.
Here is an example of the latter from Thurston's "On Proof and Progress in Mathematics."
When I started as a graduate student at Berkeley, I had trouble imagining how I could “prove” a new and interesting mathematical theorem. I didn’t really understand what a “proof” was.
By going to seminars, reading papers, and talking to other graduate students, I gradually began to catch on. Within any field, there are certain theorems and certain techniques that are generally known and generally accepted. When you write a paper, you refer to these without proof. You look at other papers in the field, and you see what facts they quote without proof, and what they cite in their bibliography. You learn from other people some idea of the proofs. Then you’re free to quote the same theorem and cite the same citations. You don’t necessarily have to read the full papers or books that are in your bibliography. Many of the things that are generally known are things for which there may be no known written source. As long as people in the field are comfortable that the idea works, it doesn’t need to have a formal written source.
At first I was highly suspicious of this process. I would doubt whether a certain idea was really established. But I found that I could ask people, and they could produce explanations and proofs, or else refer me to other people or to written sources that would give explanations and proofs. There were published theorems that were generally known to be false, or where the proofs were generally known to be incomplete. Mathematical knowledge and understanding were embedded in the minds and in the social fabric of the community of people thinking about a particular topic. This knowledge was supported by written documents, but the written documents were not really primary.
I think this pattern varies quite a bit from field to field. I was interested in geometric areas of mathematics, where it is often pretty hard to have a document that reflects well the way people actually think. In more algebraic or symbolic fields, this is not necessarily so, and I have the impression that in some areas documents are much closer to carrying the life of the field. But in any field, there is a strong social standard of validity and truth. ...
I agree with most of this, but would you mind explaining why you think neuroscience is "mostly useless"? My intuition is the opposite. Also agreed that pure mathematics seems useful.
Would you mind tabooing the word "preference" and rewriting this post? It's not clear to me that the research cited in your "crash course" post actually supports what you seem to be claiming here.
If you can come up with better images to represent Friendly AI, please let me know!
How about an image of a paper clip?
Apologies for the pedantry that follows.
Today, we know how Hebb's mechanism works at the molecular level.
This quote gives the impression that there is a unitary learning mechanism at work in the brain called "Hebbian learning," and that how it works is well understood. It is my understanding that this is not accurate.
For example, spike-timing-dependent plasticity is a Hebbian learning rule which has been postulated to underlie at least some forms of long-term potentiation and long-term depression. However, there is ongoing debate as to how accurate/useful this concept is, including one recent attempt at a re-formulation of classical STDP.
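For reference, the classical pairwise STDP rule at issue in these debates is usually written as an exponential window over the pre/post spike-timing difference. This is just the textbook form (the amplitudes and time constants are fit to data and vary across preparations), not a claim about what the brain "really" does:

```latex
% Classical pairwise STDP window (textbook form; A_+, A_-, tau_+, tau_- fit to data)
\Delta w(\Delta t) =
  \begin{cases}
    A_{+}\, e^{-\Delta t / \tau_{+}} & \text{if } \Delta t > 0 \text{ (pre before post: potentiation)} \\
    -A_{-}\, e^{\Delta t / \tau_{-}} & \text{if } \Delta t < 0 \text{ (post before pre: depression)}
  \end{cases}
\qquad \Delta t = t_{\mathrm{post}} - t_{\mathrm{pre}}
```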
With regard to molecular mechanisms, it was my understanding that even fundamental issues like whether LTP/LTD primarily involve presynaptic or postsynaptic modifications (or both) have not yet been cleared up.
I think your statement should be changed to something like "Though there are likely a variety of Hebbian learning mechanisms at work in the brain, neuroscientists are beginning to understand the few of them that have been discovered so far."
That thread is way too long, so I'm not going to read it, but I did a quick search and didn't see any discussion of what I consider the dealbreaker when weighing the evidence for or against most religions (especially any flavor of Christianity): the existence of "souls." Simply put, the "soul" hypothesis doesn't jibe with current evidence from physics, and it doesn't pay rent with regard to observations from neuroscience (or any kind of observations, for that matter). I strongly suspect that the Book of Mormon doesn't deal with evidence from neuroscience. Since the "soul" hypothesis is fairly central to Christian belief (it is the postulated mechanism by which a person is judged for "sins" committed in their life), that means you don't have to read it.
As an aside, I consider this line of reasoning to be something like "atheism for dummies" since most religions that I've seen depend on humans having something like a soul.
Isn't 12.0 something like quadruple-beta of the "Stable" version of Chrome?
I'm not entirely sure what you mean here. It's the current stable release.
OP: For the record, I'm on Chrome 13 and I haven't noticed anything like you mentioned here. The graphical glitches make me think something is up with your video card or the drivers for it, but if it's only happening for LW...I'm not sure what to tell you.
In the past year I've been involved in two major projects at SIAI. Steve Rayhawk and I were asked to review existing AGI literature and produce estimates of development timelines for AGI.
You seem to suggest that this work is incomplete, but I'm curious: is it available anywhere, or is it still a work in progress? I would be very interested in reading it, even if it's incomplete. I would even be interested in just seeing a bibliography.
What is "divulgation"? (Yes, I googled it.)
Really? You googled it and didn't see these results? You should talk to someone, I think your Google might be broken.
I'm interested in ... winning arguments ...
Ack, that won't do. It is generally detrimental to be overly concerned with winning arguments. Aside from that, though, welcome to LW!
What. That quote seems to be directly at odds with the entire idea of "Friendly AI". And of course it is, as a later version of Eliezer refuted it:
(In April 2001, Eliezer said that these comments no longer describe his opinions, found at "Friendly AI".)
I'm also not sure it makes sense to call SIAI a "closed-source" machine intelligence outfit, given that I'm pretty sure there's no code yet.
They appear to be aiming for whole brain emulation, trying to scale up previous efforts that simulated a rat neocortical column.
Here's another interim report on the longitudinal effects of CR on rhesus monkeys, this one a bit more recent (2009) than the one linked in the OP. From the abstract:
We report findings of a 20-year longitudinal adult-onset CR study in rhesus monkeys aimed at filling this critical gap in aging research. In a population of rhesus macaques maintained at the Wisconsin National Primate Research Center, moderate CR lowered the incidence of aging-related deaths. At the time point reported 50% of control fed animals survived compared with 80% survival of CR animals. Further, CR delayed the onset of age-associated pathologies. Specifically, CR reduced the incidence of diabetes, cancer, cardiovascular disease, and brain atrophy. These data demonstrate that CR slows aging in a primate species.
Have you read A Human's Guide to Words? You seem to be confused about how words work.
Looking back at your posts in this sequence so far, it seems like it's taken you four posts to say "Philosophers are confused about meta-ethics, often because they spend a lot of time disputing definitions." I guess they've been well-sourced, which is worth something. But it seems like we're still waiting on substantial new insights about metaethics, sadly.
"Save the world" has icky connotations for me. I also suspect that it's too vague for there to be much benefit to people announcing that they would like to do so. Better to discuss concrete problems, and then ask who is interested/concerned with those problems and who would like to try to work on them.
Good reminder that reversed stupidity is not intelligence.
Adding to the list: Hans Berger invented the EEG while trying to investigate telepathy, which he was convinced was real. Even fools can make important discoveries.
Won't music-theoretic analysis be basically irrelevant to a description of why some people enjoy, for instance, Merzbow?
One thing I didn't see you mention is neuroscience. My understanding is that some AGI researchers are currently taking this route; e.g. Shane Legg, mentioned in another comment, is an AGI researcher who is currently studying theoretical neuroscience with Peter Dayan. Demis Hassabis is another person interested in AGI who's taking the neuroscience route (see his talk on this subject from the most recent Singularity Summit). I'm personally interested in FAI, and I suspect that we need to study the brain to understand in more detail the nature of human preference. In terms of a career path, it's possible I'll go to graduate school at some point in the future, but my current plans are to just get a programming job and study neuroscience in my free time.
Have you given any thought to just taking the day job route? There are some problems (I've found more than a few journal articles locked behind paywalls), but there are ways of dealing with this. Furthermore, I've found that a surprising number of recent neuro articles are available through open-access journals like PNAS and Frontiers, and through other routes (Google, Google Scholar, CiteSeerX, author websites). If you're more interested in CS research, I suspect you'll have even less trouble; for some reason, recent CS papers seem to almost always be available on the internet.
What about in the case where the first punch constitutes total devastation, and there is no last punch? I.e. the creation of unfriendly AI. It would seem preferable to initiate aggression instead of adhering to "you should never throw the first punch" and subsequently dying/losing the future.
Edit: In concert with this comment here, I should make it clear that this comment is purely concerned with a hypothetical situation, and that I definitely do not advocate killing any AGI researchers.
Ahh, good point. My comment is somewhat irrelevant then with regards to this, as it seems that what you're interested in is beyond the scope of science at present.
A brief poke around in Google Scholar produced these papers, which look useful:
Alterations in Brain and Immune Function Produced by Mindfulness Meditation. Psychosomatic Medicine 65:564–570 (2003)
Mindfulness training modifies subsystems of attention. Cognitive, Affective, & Behavioral Neuroscience 7(2):109–119 (2007)
Long-term meditation is associated with increased gray matter density in the brain stem. NeuroReport 20:170–17 (2009)
Attention regulation and monitoring in meditation. Trends in Cognitive Sciences 12(4):163–169 (2008)
You think that claiming to have no understanding at all of ordinary words is getting at reality?
It's almost never sufficient, but it is often necessary to discard wrong words.
It was interesting to see the really negative comment from (presumably the real) Greg Egan:
The Yudkowsky/Bostrom strategy is to contrive probabilities for immensely unlikely scenarios, and adjust the figures until the expectation value for the benefits of working on — or donating to — their particular pet projects exceed the benefits of doing anything else. Combined with the appeal to vanity of “saving the universe”, some people apparently find this irresistible, but frankly, their attempt to prescribe what rational altruists should be doing with their time and money is just laughable, and it’s a shame you’ve given it so much air time.
Suggestion: when you read a piece of nonfiction, have a goal in mind
Agreed. See also: Chase your reading
Hmm, but it does seem like trauma triggers and the psychic-distress-via-salmon work via the same mechanism. So probably the key here is to distinguish between actual psychic stress and feigned stress used for status maneuvers. It is not, however, clear to me how to do that in general.
Another case that's interesting to consider is the Penny Arcade dickwolves controversy. The PA fellows made a comic which mentioned the word "rape", some readers got offended, and the PA guys, being thick-skinned individuals, dismissed and mocked their claims of being offended by making "dickwolves" T-shirts. Hubbub ensues.
What's most interesting about this case is that, apart from perhaps some bloggers, many of the people taking offense appear to be rape survivors for whom reading the word "rape" is traumatic (I guess? This is what I gathered, though being thick-skinned and not a rape survivor, I can't really know). I don't think it's possible to claim Machiavellian maneuverings here, given that a feminist blogger who made a dickwolves protest shirt eventually stopped selling it on account of some rape survivors saying that the shirt acted as a trigger for them.
More to the point: there is apparently a small population for whom using the word "rape" causes psychic horror. So what, are we now not allowed to ever use that word? Or can we not even allude to the act? Of course, reasonable concessions should be made (i.e. not using the word when directly in the presence of such a person), but at what point do sensitive individuals need to take it upon themselves to relocate their attention elsewhere?
I agree, yours is a more reasonable interpretation. I think I was interpreting "winds" as referring to "the winds of evidence," which is not reasonable in this context.
I do think your accusing me of "tribal affiliation signaling" was unnecessary and uncharitable: I don't consider Bush to have been a significantly worse president than any other recent president. I just happened to have run into the quote awhile back, and in my misinterpretation thought it was a good anti-rationality quote.
Edit: I did some thinking to try to figure out how I could have missed the obviously correct interpretation of Bush's words. The first hypothesis (which Constant first put forth) was that I was signaling tribal loyalties -- boo Republicans, yay Democrats. That does not make much sense, however, because I pretty solidly dislike all major political parties and the entire political theater of the U.S. Maybe I was attempting to signal loyalty to the "boo politicians" tribe, but I think there's a better explanation: a cached thought. Even though I do not currently belong to the anti-Republican tribe, I did belong to it in my high school years (i.e. during Bush's presidency), and I was most likely operating on a "Bush is stupid/irrational" cached thought.
Here's the expanded quote:
Is it hard to make decisions as President? Not really. If you know what you believe, decisions come pretty easy. If you're one of these types of people that are always trying to figure out which way the wind is blowing, decision making can be difficult. But I find that -- I know who I am. I know what I believe in, and I know where I want to lead the country. And most of the decisions come pretty easily for me, to be frank with you.
When we take into account the further context that this was spoken to elementary school children, I think the only strained reading is the one which sees this quote as reasonable. Hey kids, the only thing you need to make good decisions is to know what you already believe in! Reasoning is so much easier when you write the bottom line first.
Hmm, good point: you need to take into account background knowledge about George W. Bush (such as that he is a person who believes that God talks to him.) If we take the quote to mean:
If you know what you believe (and you have sound reasons for having the beliefs that you do), decisions come pretty easy...
then the quote is actually pretty reasonable. If, on the other hand, you take it to mean
If you know what you believe (because careful reasoning and evidence are unimportant to you), decisions come pretty easy...
then the quote is clearly arational. Also, I interpreted his disparaging of people who are "always trying to figure out which way the wind is blowing" as him essentially saying "forget the territory, I've already got a map and that's good enough for me".
Is it hard to make decisions as president? Not really. If you know what you believe, decisions come pretty easy. If you’re one of these types of people that are always trying to figure out which way the wind is blowing, decision making can be difficult.
-- George W. Bush
It's not clear to me what the disagreement is here. Which heuristic are you defending again?
If it's not published, it's not science
Response: Can we skip the pointless categorizations and evaluate whether material is valid or useful on a case-by-case basis? Clearly some material that has not been published is useful (see: this website).
If it's not published in a peer-reviewed journal, there's no reason to treat it any differently than the ramblings of the Time Cube guy.
Response: Ahh yes, anything not peer-reviewed clearly contains Time Cube-levels of crazy.
Or none of the above? I'm not sure we actually disagree on anything here.
Yeah, I've tried org-mode, but the problem isn't that it's Emacs-based (I use Emacs to write code); it's that it isn't web-based. I wanted my notes to be accessible not only from both OSes I dual-boot, but from pretty much any computer I might ever be at. I guess I could make the file accessible by putting it in a Dropbox public folder, but then there's still the issue of "what if the computer I'm on doesn't have Emacs?"
Also, the time-intensiveness of rolling my own code isn't a major drawback, as I'm trying to find a programming job at the moment and need something to add to my portfolio. :D
I'm not really familiar with the subject matter here, but I want to note that Michael Nielsen contradicts what you said (though Nielsen, as an Open Science advocate, isn't exactly an unbiased source here):
Perelman's breakthrough solving the Poincare conjecture ONLY appeared at the arXiv
The important point is that Perelman doesn't appear to have produced the paper for publication in a journal; he simply posted it to the arXiv, and it was only later (you claim) published in journals. That's quite a different picture from "if it's not published, it's not science."
This post that you have excreted has essentially zero content. You restate the core idea behind the representativeness heuristic repeatedly, and baldly assert that there are good reasons for people having the intuitions that they do, and that people are "using valuable real life skills" when they give incorrect answers to questions. No one's arguing that it hasn't been an evolutionarily useful heuristic, just that it happens to be incorrect from time to time. I cannot figure out where in your post you actually made an argument that the conjunction fallacy "doesn't exist", and I am overjoyed that you no longer have the karma to make top-level posts.
Please stop posting and read the sequences.
If you want to play that game, then it's not clear to me that the SIAI is doing "science" either, given that the focus is on existential risk due to AI (more like "philosophy" than "science") and formal friendliness (math).
I think a better interpretation of your quote is to replace the word "science" with "disseminated scholarly communication."
Perelman's proof of the Poincare conjecture was never published in an academic journal, but was merely posted on arXiv. If that's not science, then being correct is more important than being "scientific".
Relevant tweet: http://twitter.com/#!/vladimirnesov/status/34254933578481664
If you want to carry a brimming cup of coffee without spilling it, you may want to "change" your goal to instead primarily concentrate on humming.
I keep reading this over and over, trying to figure out what it means. What does humming have to do with not spilling a cup of coffee?
Pungent is a web-based note-taking app that I'm working on. I made this because I had a need for something to organize personal notes, but nothing I found was satisfactory. Right now it's essentially a less-featured clone of Workflowy, but I plan to develop it further once I figure out what direction to go in. Development is on hold for the moment while I spend some time using it and figuring out what I want it to do.
I'm also working on a research project to try to understand how human cognition works. I think FAI is really interesting + important, but I'm baffled by the decision theory approach that seems to be popular around here. Not that I have strong reasons to believe this line of inquiry should not be pursued, but every time I think about intelligent entities purely in terms of decision theory (i.e. as entities with a "utility function" that assigns values to "states of the world", taking whatever actions maximize said utility), I notice that I am confused.
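To be concrete about the picture I'm describing, here's a toy sketch in Python (entirely my own illustration; all the names and numbers are made up):

```python
# Toy expected-utility maximizer: a "utility function" over world-states,
# plus beliefs about which states each action leads to.

# Utility function: assigns a value to each state of the world.
utility = {"world_a": 10.0, "world_b": 3.0, "world_c": -5.0}

# Beliefs: for each action, a probability distribution over resulting states.
outcome_probs = {
    "action_1": {"world_a": 0.2, "world_b": 0.7, "world_c": 0.1},
    "action_2": {"world_a": 0.6, "world_b": 0.1, "world_c": 0.3},
}

def expected_utility(action):
    return sum(p * utility[state] for state, p in outcome_probs[action].items())

# The agent "acts" by picking whichever action maximizes expected utility.
best_action = max(outcome_probs, key=expected_utility)
print(best_action, expected_utility(best_action))  # action_2 4.8
```

All the interesting questions (where the states, probabilities, and utilities come from) are hidden inside those hard-coded dictionaries.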
So I'm wading through neuroscience papers at the moment. Spatial cognition and memory are well-studied phenomena that likely act as a foundation for other cognitive abilities, so they seem like as good a place to start as any. I don't have a website up yet for my findings, but here's what I've been looking at to start:
- Place cells, grid cells, and the brain's spatial representation system - decent, recent review article for spatial cognition.
- Tracking the Emergence of Conceptual Knowledge during Human Decision Making is a recent paper with some findings that seem relevant to understanding "concepts".
- The Medial Temporal Lobe is a good review of structures in the MTL, which includes structures important for spatial cognition and memory.
- I've also been looking at Jeff Hawkins' work on Hierarchical Temporal Memory because it is not simply another neural network model, but is actually proposed to be a model of the neocortex. Even if his views on intelligence are wrong or highly incomplete, his methodology seems sound: his work is biologically-grounded, but he doesn't get caught up in unnecessary details.
My goal for this project is to become less confused about Friendly AI. I'd like to set up a webpage to record my progress on this project, so I'll likely edit this post when I have a link for that.
True heroism is minutes, hours, weeks, year upon year of the quiet, precise, judicious exercise of probity and care—with no one there to see or cheer.
— David Foster Wallace, The Pale King
Yeah, more people donated to an animal shelter than to an organization working on existential risk. Makes me feel all warm and fuzzy inside. No, wait, the opposite of that.
Sorry, I could not make sense of any of this. Especially the symbolic part, but also the conversation part. And all the other parts too.
Note that this is not just my vision of how to get published in journals. It's my vision of how to do philosophy.
Your vision of how to do philosophy suspiciously conforms to how philosophy has traditionally been done, i.e. in journals. Have you read Michael Nielsen's Doing Science Online? It's written specifically about science, but I see no reason why it couldn't be applied to any kind of scholarly communication. He makes a good argument for including blog posts in scientific communication, which, at present, doesn't seem compatible with writing journal articles (is it kosher to cite blog posts?):
Many of the best blog posts contain material that could not easily be published in a conventional way: small, striking insights, or perhaps general thoughts on approach to a problem. These are the kinds of ideas that may be too small or incomplete to be published, but which often contain the seed of later progress.
You can think of blogs as a way of scaling up scientific conversation, so that conversations can become widely distributed in both time and space. Instead of just a few people listening as Terry Tao muses aloud in the hall or the seminar room about the Navier-Stokes equations, why not have a few thousand talented people listen in? Why not enable the most insightful to contribute their insights back?
I would much rather see SIAI form an open-access online journal or scholarly FAI/existential risks wiki or blog for the purposes of disseminating writings/thoughts on these topics. This likely would not reach as many philosophers as publishing in philosophy journals, but would almost certainly reach far more interested outsiders. Plus, philosophers have access to the internet, right?
Can't imagine the other commenters learned programming by jumping into Scheme or Haskell, or reading SICP, or whatever it is they're recommending :-)
Agreeing with this. I love CS theory, and I love SICP, but I learned to program by basically ignoring all that and hacking together stuff that I wanted to make. If you want to learn to program, you should probably make things first.
I think you meant to link here: http://blogs.discovermagazine.com/gnxp/2011/03/your-genes-your-rights-fdas-jeffrey-shuren-not-a-fan/
I remember reading the argument in one of the sequence articles, but I'm not sure which one. The essential idea is that any such rules just become a problem for the AI to solve, so relying on a superintelligent, recursively self-improving machine being unable to solve a problem is not a very good idea (unless the failsafe were provably unsolvable, I suppose. But then we're pitting human intelligence against superintelligence, and I, for one, wouldn't bet on the humans). The more robust approach seems to be to make the AI not want to do whatever the failsafe was designed to prevent in the first place, i.e. Friendliness.
Here are the reasons to be skeptical that I picked up from that blog post:
- The website of the Journal of Cosmology is ugly
- The figures in the paper are "annoying"
- Perhaps the claimed bacteria aren't bacteria at all, but just squiggles.
- The photos of the found bacteria aren't at the same magnification as photos of real bacteria
- It seems like the bacteria are too well-preserved for having traveled the solar system for such a long time.
- Haha, maybe next they'll find bigfoot footprints on a meteor.
I think the point is that if you're trying to convince someone to pay you to write code for them and you have no prior experience with professional programming, a solid way to convince them that you're hireable is contributing significant amounts of code to an open source project. This demonstrates that 1) you know how to write code, 2) that you can work with others and 3) that you're comfortable working with a complicated codebase (depending on the project).
I'm not certain that it's the most effective way to achieve this objective, but I can't think of a better alternative. Suggestions are welcome.
Math is not necessary for many kinds of programming. Yeah, some algorithms make occasional use of graph theory, and there certainly are areas of programming that are math-heavy (3d graphics, perhaps? Also, stuff like Google's PageRank algorithm uses linear algebra), but there are huge swaths of software development for which no (or little) math is needed. In fact, just to hammer on this point, I distinctly remember sitting in a senior-level math course and overhearing some math majors discuss how they once took an introductory programming course and found the experience confusing and unenjoyable. So yes, math and programming are quite distinct.
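Since I brought up PageRank: the linear algebra in question is just finding the dominant eigenvector of a damped link matrix by power iteration. Here's a toy sketch over a made-up four-page web (my own illustration, not Google's actual implementation):

```python
# Toy PageRank via power iteration: repeatedly apply the damped link
# matrix until the rank vector settles on its dominant eigenvector.

damping = 0.85

# links[i] = pages that page i links to (a made-up four-page web).
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
n = len(links)

ranks = [1.0 / n] * n
for _ in range(50):  # 50 iterations is plenty for a graph this small
    new_ranks = [(1.0 - damping) / n] * n
    for page, outlinks in links.items():
        share = damping * ranks[page] / len(outlinks)
        for target in outlinks:
            new_ranks[target] += share
    ranks = new_ranks

print([round(r, 3) for r in ranks])
```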
The probability I would place on you being able to make a living doing programming depends on only one factor: your willingness to spend your free time writing code. There are plenty of people with CS degrees who don't know how to program (and, amazingly, don't even know how to FizzBuzz), and it's almost certainly because they've never spent significant amounts of time actually building software. Programming is "how-to" knowledge, so if you can find a project that motivates you enough to gain significant experience, you should be set.
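(For anyone who hasn't run into it: FizzBuzz is about as simple as programming problems get, which is what makes the failure so striking. A standard solution looks like this:)

```python
# FizzBuzz: print 1..100, substituting "Fizz" for multiples of 3,
# "Buzz" for multiples of 5, and "FizzBuzz" for multiples of both.
for i in range(1, 101):
    if i % 15 == 0:
        print("FizzBuzz")
    elif i % 3 == 0:
        print("Fizz")
    elif i % 5 == 0:
        print("Buzz")
    else:
        print(i)
```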