Comments
I think you have hit upon the crux of the matter in your last paragraph: the authors are in no way trying to find the best solution. I can't speak for the authors you cite, but the questions philosophers ask are different from "What is the best answer?" They are more along the lines of "How do we generate our answers, anyway?" and "What might follow?" This may lead to an admittedly harmful lack of urgency in updating beliefs.
Because I enjoy making analogies: Science provides the map of the real world; philosophy is the cartography. An error on a map must be corrected immediately for accuracy's sake; an error in the theory of efficient map design may take a generation or two to become apparent.
Finally, you use Pearl as the champion of AI theory, but he is equally a champion of philosophy. However misguided the philosophers you cite may have been, Pearl's work goes equally far toward redeeming philosophers. I also don't think you have sufficiently addressed the cherrypicking charge: if your cited articles are strong evidence that philosophers don't consider each other's viewpoints, then every article in which philosophers do consider each other's viewpoints is at least weak evidence of the opposite.
It feels to me as though you are cherrypicking both evidence and topic. It may very well be that philosophers have a lot of work to do in the important field of AI, but this does not invalidate the process. Get rid of the term; talk about the process of refining human intelligence through means other than direct observation. The PROCESS, not the results (as in the article you cite).
Speaking of that article from Noûs, it was published in 2010. Pearl did a lot of work on counterfactuals and uncertainty dating back to the 1980s, but I would argue that "The Algorithmization of Counterfactuals" contains the direct solution you reference. That paper was published in 2011. Unless, of course, you are referring to "Causes and Explanations: A Structural-Model Approach," which was published in 2005 in the British Journal for the PHILOSOPHY of Science.
It seems to me that pop philosophy is being compared to rigorous academic science. Philosophers make a great effort to understand each other's frameworks. Controversy and disagreement abound, but exercising the mind in predicting consequences using mental models is fundamental to both scientific progress AND everyday life. You and I may disagree on our metaphysical views, but that doesn't prevent us from exploring the consequences each viewpoint predicts. Eventually, we may be able to test these beliefs. Predicting these consequences in advance helps us use resources effectively (as opposed to testing EVERY possibility scientifically). (Human) philosophy is an important precursor to science.
I'm also glad to see in other comments that the AI case has greater uncertainty than the sleeper cell case.
Having made one counterpoint and mentioned another, let me add that this was a good read and a nice post.
Well said again, and a well-considered point that ideas in minds can only move forward through time (though that is not a physical law). My initial reaction to this article was, "What about philosophy of science?" However, it seems my PoSc objections extend to other realms of philosophy as well. Thank you for leading me here.
Popper (or Popperism) predicted that falsifiable models would yield more information than non-falsifiable ones.
I don't think this is precisely testable, but it references precisely testable models. That is why I would categorize it as philosophy (of science), but not science.
Yes, I may have made an inferential leap here that was wrong or unnecessary. You and I agree very strongly that there is a distinction between Philosophy of Science and Experimental Philosophy. I wanted to draw a distinction between the kind of "street philosophy" done by Socrates and the more rigorous, mathematical Philosophy of Science. "Experiment" may not have been the most appropriate term.
I would be glad to reconsider my stance that this rationalist community privileges emotivist readings of ethics. I will begin looking into this. My reason for including this argument is the idea (from the article) that when philosophers ask questions about right and wrong or good and bad, they are really asking how people feel about these concepts.
I like your interpretation of philosophy as it pertains to ethics, aesthetics, and perhaps metaphysics. Your Socrates example, and LW in general, privileges emotivist ethics, but this is an interesting point and not a drawback. Looking at ethics as a cognitive science is not necessarily a flawed approach, but it is important to consider the potential alternative models.
Philosophy has a branch called "philosophy of science" where your dissolution falls apart. Popperian falsifiability, Kuhnian paradigm shifts, and Bayesian reasoning all fall into this domain. There is a great compendium by Curd and Cover; I recommend searching the table of contents for essays also available online. Here, philosophers experiment with the precision of testable models rather than hypotheses.
I don't mean to advocate an epiphany-driven model of discovery.
To use your Scientology example and terminology, what I am advocating is not that we find the "next big thing," but that we pursue refinement of the original, "genuinely useful material." Of course, it is much easier to advocate this than to put the work in, but that's why I'm using the open thread.
There are some legitimate issues with some of the Sequences (both resolved and unresolved). The comments represent a very nice start, but there may be some serious philosophical work to be done. There is a well of knowledge about pursuing wells of knowledge, and I would find it purposeful to refine the effective pursuit of knowledge.
What are your heuristics for telling whether posts/comments contain "high-quality opinions," or "LW mainstream"? Also, what did you think of Loosemore's recent post on fallacies in AI predictions?
I see that I used the word "growth" capriciously. I don't necessarily mean greater numbers, I mean the opposite of stagnation. Of course a call for action is easier and less effective than acting, but that's why we have open threads.
A few thoughts on Mark_Friedenbach's recent departure:
I thought it could be unpacked into two main points. (1) is that Mark is leaving the community. To Mark, or anyone who makes this decision, I think the rational response is, "good luck and best wishes." We are here for reasons, and when those reasons wane, I wouldn't begrudge anyone looking elsewhere or doing other things.
(2) is that the community is in need of growth. My interpretation of this is as follows: the Sequences are not updated, and yet they are still referenced as source material. I would be glad to read it if someone took a crack at a Sequences 2.0, or something completely different. Perhaps something with a more empirical/scientific focus (as opposed to foundational/philosophical), as Mark recommended.
The impermanence of things is an excellent reason to get really enthusiastic about them.
I think of it as "improvematism." Maybe "improvementism" would sound more serious.
"What if they kicked the mirror-maker out of town and awarded the actual worker?"
This is the question I keep asking myself. In the story as written, the village rewards the clever skilled worker over the diligent skilled worker. This might work in the short term, and the clever worker's gamble pays off for him personally as he sees increased business from increased prestige. If we consider the village (or the judges) to be actors in the game, however, they act against their own interests by disincentivizing craftsmanship in favor of craftiness. And here I am, arguing for or against a parable...
The difference being that on a football field or basketball court, there is a settled outcome of competition, and no sincere value attached to certain outcomes. An average person might prefer that their chosen sports team wins, but I think they would acknowledge that it does not make the world a better place. In politics, however, the preference that a chosen team wins is very closely tied to the view that the win is beneficial for everybody.
"... thinking through the implications of an AI that is so completely unable to handle context, that it can live with Grade A contradictions at the heart of its reasoning, leads us to a mass of unbelievable inconsistencies in the 'intelligence' of this supposed superintelligence."
This is all at once concise, understandable, and reassuring. Thank you. I still wonder whether we are broadening the defined scope of "intelligence" too far, but my wonder comes from gaps in my specific knowledge and not from gaps in your argument.
The idea that I find least entangled but still very potentially beneficial is that politics is the mind-killer. I realize it's an old sequence, and it doesn't have much traction here (since LW is ostensibly un-killed minds).
Feel free to disengage; TheAncientGeek helped me shift my paradigm correctly.
Thank you for responding and attempting to help me clear up my misunderstanding. I will need to do another deep reading, but a quick skim of the article from this point of view "clicks" a lot better for me.
You have my apologies if you thought I was attacking or pigeonholing your argument. While I lack the technical expertise to critique the technical portion of your argument, I think it could benefit from a more explicit avoidance of the fallacy mentioned above. I thought the article was very interesting, and I will certainly come back to it if I ever get to the point where I can understand your distinctions between swarm intelligence and CFAI. I understand you have been facing attacks for your position in this article, but that is not my intention. Your meticulous arguments are certainly impressive, but you do them a disservice by dismissing well-intentioned critique, especially as it applies to the structure of your argument rather than its substance.
Einstein made predictions about what the universe would look like if there were a maximum speed. Your prediction seems to be that a well-built AI will not misunderstand its goals (please assume that I read your article thoroughly and that any misunderstandings are benign). What does the universe look like if this is false?
I probably fall under category a in your disjunction. Is it truly pointless to help me overcome my misunderstanding? From the large volume of comments, it seems likely that this misunderstanding is partially caused by a gap between what you are trying to say, and what was said. Please help me bridge this gap instead of denying its existence or calling such an exercise pointless.
Stepping in as an interlocutor: while I agree that "all-powerful" is poor terminology, I think the power described here is plausible for an AGI. One feature an AGI is nearly certain to have is superhuman processing power; this allows large numbers of Monte Carlo simulations, which an AGI could use to predict human responses, especially if there is a Bayesian calibrating mechanism. An above-human ability to predict human responses is an essential component of near-perfect social engineering. I don't see this as an outrageous, magic-seeming power. Such an AGI could theoretically have the power to convince humans to adopt any desired response. I believe your paper maintains that an AGI wouldn't use this power, not that such a power is outrageous.
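To make the mechanism concrete, here is a toy sketch of what I mean by "Monte Carlo simulation with a Bayesian calibrating mechanism" (entirely my own illustration; the model, numbers, and function names are invented and far simpler than anything an AGI would use):

```python
import random

def bayesian_calibrate(alpha, beta, observed):
    """Beta-Bernoulli update: refine the belief about how likely a human is to
    respond as desired, given responses already observed (1 = desired, 0 = not)."""
    return alpha + sum(observed), beta + len(observed) - sum(observed)

def monte_carlo_predict(alpha, beta, n_sims=100_000):
    """Estimate the probability of the desired response by simulating many trials,
    drawing a plausible response rate from the calibrated belief each time."""
    hits = 0
    for _ in range(n_sims):
        p = random.betavariate(alpha, beta)   # sample a candidate response rate
        hits += random.random() < p           # simulate one human response
    return hits / n_sims

# Hypothetical: start from an uninformative prior, calibrate on eight observed
# responses, then predict the next one.
alpha, beta = 1, 1
alpha, beta = bayesian_calibrate(alpha, beta, observed=[1, 1, 0, 1, 1, 1, 0, 1])
print(f"estimated chance of desired response: {monte_carlo_predict(alpha, beta):.2f}")
```

The point is only that more compute buys more simulations and more observations buy better calibration, which together shrink predictive error; that is all the "above-human prediction" claim requires.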
My personal feeling toward this article is that it sounds suspiciously close to a "No true Scotsman" argument: "No true (designed with friendly intentions) AI would submit to these catastrophic tendencies." While your arguments are persuasive, I wonder: if a catastrophe did occur, would you dismiss it as the work of "not a true AI"? By way of disclaimer, my strengths are in philosophy and mathematics, and decidedly not computer science. I hope you have time to reply anyway.
I'm looking at the possible causal relationships between certain actions and the resulting discomfort. As I understand your argument, you believe that certain actions by one person will always result in discomfort for the other. I disagree, and I submit that the discomfort is a product of both the original action and the response to it. In other words, if someone has made you feel uncomfortable, it may be possible for you to reduce that discomfort independently of the precipitating action. Your discomfort may be due to an irrational bias. This would be a reason not to shun someone for making you feel uncomfortable.
There is a difference between analyzing an action and communicating that you are analyzing an action. To speak to your concluding example, "smiling back" and, "[going] in your head and think about whether or not that signal means that she likes you," are NOT mutually exclusive. With practice, you can do both at once. I would call this leveling up.
It seems to me that this discomfort is not a necessary product of the behavior. It may even be a cognitive bias, on the order of thinking that unconditional love is more powerful than conditional love. I submit that a rationalist should expect his or her prospective partners to "calculate their love" and not be afraid of the results.
Your statement has a nice "should" in it. The reason for people not to shun you is that their discomfort is based on a (debatably) flawed heuristic.
In many cases, discomfort is a natural part of changing one's mind. I can see, though, why romance would be an exception. Discomfort due to unrequited affections, for example, is not evidence of an impending paradigm shift. Discomfort in response to a rational calculus, however, might well indicate irrationality on the part of the person feeling it.
Thank you - I have this, and some dense Hutter yet to read.
The cognitive theory is beyond me, but the math looks interesting. I need to give this more thought, but I would submit an open question for the community: might there be a way to calculate error bounds on outputs conditioned on "world models," based on the models' predictive accuracy and/or complexity? If this were possible, it would be strong support for mathematical insight into the "meta model."
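As a half-formed illustration of the kind of calculation I have in mind (all names and numbers are invented, and the weighting scheme is only one crude possibility, not anything from the post I'm replying to): weight each world model by its past predictive accuracy penalized by its complexity, then read a rough error bound off the spread of the weighted predictions.

```python
import numpy as np

def model_weights(log_likelihoods, complexities):
    """Posterior-style weights: predictive accuracy (log-likelihood of past data)
    penalized by description length, in a crude MDL / Solomonoff-flavored way."""
    log_w = np.array(log_likelihoods) - np.log(2.0) * np.array(complexities)
    log_w -= log_w.max()              # shift for numerical stability
    w = np.exp(log_w)
    return w / w.sum()

def combined_prediction(predictions, weights):
    """Weighted mean of the models' point predictions, plus a rough error bound
    from the weighted spread across models (model disagreement, not noise)."""
    mean = np.dot(weights, predictions)
    var = np.dot(weights, (np.array(predictions) - mean) ** 2)
    return mean, 2 * np.sqrt(var)     # ~95% bound if disagreement were Gaussian

# Hypothetical example: three world models with made-up accuracies and sizes.
weights = model_weights(log_likelihoods=[-10.0, -12.0, -30.0],
                        complexities=[50, 20, 10])   # bits of description length
estimate, bound = combined_prediction(predictions=[3.1, 2.8, 7.0], weights=weights)
print(f"estimate = {estimate:.2f} ± {bound:.2f}")
```

The open question is whether a bound like this can be made rigorous rather than heuristic, that is, whether weights earned from accuracy and complexity translate into guaranteed error bounds on the combined output.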
My day-to-day life is populated with many people who do not understand the lessons in this section. Interaction with these people is paramount to achieving my own goals; I am facing a situation in which the rational choice is to communicate irrationally. Specifically, my colleagues and other associates seem to prefer "applause lights" and statements that offer no information. Therefore, attaining my personal, rationally selected goals might mean claiming irrational beliefs. I don't think this is an explicit paradox, but it is an interesting point. There is a middle ground between "other-optimizing" (pointing out these applause lights for what they are) and changing my actual beliefs to those communicated by the "applause lights," but I do not believe it is tenable, and it may represent a conflict of goals (personal success in my field vs. spreading rational thought). Perhaps it is a microcosm of the precarious balance between self-optimization and world-optimization.
Do you have a heuristic for differentiating big rushes from small rushes? I think any time you are trying to perform a task and some epsilon greater than zero of your conscious capacity is focused on the ticking clock, that represents a deficit from maximal focus. I think the deep breathing advice is good for any rush.
https://intelligence.org/files/CognitiveBiases.pdf in part 8 has a note on time pressure increasing the effect of the affect heuristic, but it doesn't quite fit with what you are talking about (fumbling for keys).
Prior to lurking here and reading the excellent posts on rationality, I had never before considered eating a tomato. I decided that I didn't like them at a young age, and never revisited the belief. In the past, I figured it was my business and it wasn't hurting anyone if I decided to avoid tomatoes. Now, I understand that it was an arbitrary preference, that the taste is non-offensive (I may grow to like them), and that they are rich in lycopene (which may be good for you, but almost certainly isn't bad). In short, I changed a belief I never before thought necessary to revisit. So far, so good.