An interesting response. I did not mean to imply that the feeling had implicit value, but rather that my discomfort interacted with a set of preexisting conditions in me and triggered many associated thoughts to arise.
I'm not familiar with this specific philosophy; are you suggesting I might benefit from this or would be interested in it from an academic perspective? Both perhaps?
Do you have any thoughts on the rest of the three page article? I'm beginning to feel like I brought an elephant into the room that no one wants to comment on.
I think I must have explained myself poorly ... you don't have to take my subjective experience or my observations as proof of anything on the subject of parables or on cognition. I agree that double entendre can make complex arguments less defensible, but would caution that it may never be completely eliminated from natural language because of the way discourse communities are believed to function.
Specifically, what subject contains many claims for which there is little proof? Are we talking now about literary analysis?
If you also mean to refer to the many claims about the mechanisms of cognition that lack a well-founded neurobiological foundation, there are several source materials informing my opinion on the subject. I understand that the lack of experimentally verifiable results in the field of cognition seems troubling at first glance. For the purposes of streamlining the essay, I assumed a relationship between cognition and intelligence by which intelligence can only be achieved through cognition. Whether this inherently cements the concept of intelligence into the unverifiable annals of natural language, I gladly leave up to each reader to decide. Based on my sense of how the concepts are used here on LW, intelligence and cognition are not well-defined enough that they could be implemented in strictly rational terms.
However, your thoughts on this are welcome.
Thank you for your feedback. I am not sure what I think, but the general response so far seems to support the notion that I have tried to adapt the structure to a rhetorical position poorly suited for my writing style. I'm hearing a lot of "stream of consciousness" ... the first section specifically might require more argumentation regarding effective rhetorical structures. I attack parables without offering a replacement, which is at best rude but potentially deconstructive past the point of utility. I'm currently working on an introduction that might help generate more discussion based on content.
I have added a short introductory abstract to clarify my intended purpose in writing. Hopefully it helps.
That alone is not an obstacle necessarily. We must establish what these views have in common and how they differ in structure and content.
Also, I'd like to steer away from a debate on the question of whether "deep parables" exist. Let's ask directly, "are the parables here on LW deep?" Are they effective?
I've read both. Paul Graham's style is wonderful ... so long as he keeps himself from reducing all of history to a triangular diagram. I prefer Stanley Fish for clarity on linguistics.
Why is it difficult to talk about parables directly? We have the word and the abstract concept. Seems like a good start.
I feel like you've pointed out what is at least a genuine inconsistency in purpose. The point of this article was not meant to subvert any discussion of economic rationality but rather to focus discussions of intelligence on more universally acceptable models of cognition.
I give several reasons in the text as to why biases are necessary. Essentially, all generative cognitive processes are "biased" if we accept the LW concept of bias as an absolute. Here is an illustrated version -- it seems you aren't the only one uncertain as to how I warrant the claim that bias is necessary. I should have put more argument in the conclusion, and, if this is the consensus, I will edit in the following to amend the essay.
To clarify, there was a time in your life, probably before you were even aware of cognition, during which the process of cognition emerged organically. Sorting through thoughts and memories, optimizing according to variables such as time and calorie consumption, deferring to future selves ... these are all techniques that depend on a preexisting set of conditions from which cognition has ALREADY emerged in whoever is performing these complex tasks. While searching for bias is helpful in eliminating irrationality from cognitive processes, it neither generates the conditions from which cognition emerges nor explains the generative processes at the core of cognition.
I am critical of the LW parables because, from a standpoint of rhetorical analysis, parables get people to associate actions with outcomes. The parables LW uses vary in some ways, but they are united in that the search for bias is associated with traditionally positive outcomes, whereas the absence of a search for bias becomes associated with comparatively less desirable outcomes. While I expect some readers learn deeper truths, I find that the most consistent form of analysis being employed on the forums is clearly the ongoing search for bias.
There are, additionally, LW writings about how rationality is essentially generative and creative and should not be limited to bias searches. This essay was my first attempt to explain the existence of bias without relying on some evolutionary set of imperatives. If you have any questions, feel free to ask; I hope this helps clarify at least what I should have written.
You are correct. Reciprocal altruism is an ideal not necessarily implementable and I should have written, "As far as the spirit of reciprocal altruism should dictate". :-)
It has nothing to do with my article, but you've made me very happy by explaining this to me. I think I understand better what is meant by "encoding". Also, I found the bit about "regardless" quite witty and even laughed out loud (xkcd.com kept me informed about the OED's decision on that word).
So the encoding was probably not the problem then, because most programs default to ANSI, and switching to 7-bit encoding was not everyone's unanimous first suggestion ... although I do understand why ASCII is more universal now. Open questions in my mind now include: does the GUI read ASCII and ANSI? And what encoding is used for copying and pasting text?
Either way, I owe you.
So, if I understand the implication, anything encoded in ANSI is not universally machine readable (there are several unfamiliar terms for me here: "anglophone", "ISO 8859-1", and "Windows codepage 1252")? I probably won't look up all the details, because I rarely need to know how many bits a method of encoding involves (I'm probably betraying my naivety here) irregardless of the character set used, but I appreciate how solid a handle you seem to have on the subject.
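If I've followed the explanation correctly, the relationship can be sketched in a few lines of Python (the sample strings are my own; "cp1252" is Python's codec name for Windows codepage 1252):

```python
# 7-bit ASCII characters are encoded identically in ASCII,
# Windows codepage 1252 ("ANSI"), and UTF-8.
text = "plain ascii"
assert text.encode("ascii") == text.encode("cp1252") == text.encode("utf-8")

# Outside ASCII, the byte representations diverge.
accented = "caf\u00e9"  # "café"; the é is not a 7-bit character
assert accented.encode("cp1252") == b"caf\xe9"     # one byte in cp1252
assert accented.encode("utf-8") == b"caf\xc3\xa9"  # two bytes in UTF-8

# Decoding cp1252 bytes as if they were UTF-8 fails outright,
# which is one way copy-and-paste between programs garbles text.
try:
    b"caf\xe9".decode("utf-8")
except UnicodeDecodeError:
    print("0xE9 alone is not valid UTF-8")
```

So text that sticks to 7-bit ASCII is byte-identical in all three encodings, which is presumably why it survives copying and pasting between programs intact.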
I tried really hard to imitate and blend the structure of argumentation employed by the most successful articles here. I found that in spite of the high-minded academic style of writing, structures tended to be overwhelmingly narratives split into three segments that vary greatly in content and structure (the first always establishes tone and subject, the second contains the bulk of the argumentation, and the third is an often incomplete analysis of impacts the argument may have on some hypothetical future state). I can think of a lot of different ways of organizing my observations on the subject of cognitive bias, and though I decided on this structure, I was concerned that, since it was decidedly non-Hegelian, it would come off as poorly organized.
But I feel good about your lumping it in with data on how newcomers perceive LW because that was one of my goals.
ANSI works if I turn off word wrap and put the space between paragraphs, as you suggested. Thanks again Lumifer.
It's fixed now.
You are officially my hero Lumifer. Thank you so much.
HURRAY! Thank you everyone who helped me format this! As far as reciprocal altruism should dictate, Lumifer, I owe you.
Okay, I did that and am about to paste.
Thanks so much. The formatting is now officially fixed thanks to feedback from the community. I appreciate what you did here nonetheless.
Good to know. I've used OpenOffice in the past and am regretting not using it on this computer. At least I'm learning :-)
Wow. My encoding options are limited to two Unicode variants, ANSI and UTF-8. Will any of those work for these purposes?
Thank you. I will try this and see if it helps with the paragraph double spacing problem.
OK, so this is marginally better. I found Notepad and copied and pasted after turning on word wrap. I will continue to tweak until the pagination is not obnoxiously bad.
I seem to be in the process of crashing my computer. I hope to have resolved this issue in approximately 10 minutes.
I know. I'm troubleshooting now :-)
I will try this after I try the above suggestion. Thank you also.
I will try this. Thank you for being constructive in spite of the mess.
GUI ... graphical user interface ... as in the one this website uses.
This is what happens as a result of my copy and pasting from the document. I have tried several different file formats ... this was .txt which is fairly universally readable ... I ran into the problem with the default file format in Kingsoft reader as well.
I will remove this as soon as I have been directed to the appropriate channels, I promise it's intelligent and well written ... I just can't seem to narrow down where the problem is and what I can do to fix it.
I don't know how to fix this article ... every time I copy and paste I end up with the format all messed up and the above is the resulting mess. I'm using a freeware program called Kingsoft Writer, and would really appreciate any instruction on what I might do to get this into a readable format. Help me please.
I came to the conclusion that I needed more quantitative data about the ecosystem. Sure, birds covered in oil look sad, but would a massive loss of biodiversity on THIS beach affect the entire ecosystem? The real question I had in this thought experiment was "how should I prevent this from happening in the future?" Perhaps nationalizing oil drilling platforms would allow governments to better regulate the potentially hazardous practice. There is a game going on whereby some players are motivated by the profit incentive and others are motivated by genuine altruism, but it doesn't take place on the beach. I certainly never owned an oil rig, and couldn't really competently discuss the problems associated with actual large high-pressure systems. Does anyone here know if oil spills are an unavoidable consequence of the best long-term strategy for human development? That might be important to an informed decision on how much value to place on the cost of the accident, which would inform my decision about how much of my resources I should devote to cleaning the birds.
From another perspective, it's a lot easier to quantify the cost for some outcomes ... This makes it genuinely difficult to define genuinely altruistic strategies for entities experiencing scope insensitivity. And along that line, giving away money because of scope insensitivity IS amoral. It defers judgment to a poorly defined entity which might manage our funds well or deplorably. Founding a cooperative for the purpose of beach restoration seems like a more ethically sound goal, unless of course you have more information about the bird cleaners. The sad truth is that making the right choice often depends on information not readily available, and the lesson I take from this entire discussion is simply how important it is that humankind evolve more sophisticated ways of sharing large amounts of information efficiently, particularly where economic decisions are concerned.
I would argue that without positive reinforcement to shape our attitudes the pursuit of power and the pursuit of morality would be indistinguishable on both a biological and cognitive level. Choices we make for any reason are justified on a bio-mechanical level with or without the blessing of evolutionary imperatives; from this perspective, corruption becomes a term that may require some clarification. This article suggests that corruption might be defined as the misappropriation of shared resources for personal gain; I like this definition, but I'm not sure I like it enough to be comfortable with an ethics based on the assumption that people are vaguely immoral given the opportunity.
My problem here is that power is a poorly defined state. It's not something that can be directly perceived. I'm not sure I have a frame of reference for what it feels like to be empowered over others. For this reason alone, I find some of the article's generalizations about the human condition disturbing -- I'm not trying to alienate so much as prevent myself from being alienated by a description of the human condition wherein my emotional palette does not exist.
So I intend to suggest an alternative interpretation of why "power corrupts" and you all on the internet can tell me what you think, but first I think I need a better grasp on what is meant here by the process of corruption. The type of power we are discussing seems to be best described as the ability to shape the will of others to serve your own purposes.
Of course, alternative ways of structuring society are hinted at throughout the article, and I'd be just as happy to see suggestions as to ways that culture might produce power structures that are less inherently corrupting.
Finally, insofar as this article represents a link in a larger argument (a truly wonderful, fascinating argument), I think it's wonderful.
What a wonderfully compact analysis. I'll have to check out The Jagged Orbit.
As for an AI promoting an organization's interests over the interests of humanity -- I consider it likely that our conversations won't be able to prevent this from happening. But it certainly seems important enough that discussion is warranted.
My goodness ... I didn't mean to write a book.
You have a point there, but by narrow AI, I mean to describe any technology designed to perform a single task that can improve over time without human input or alteration. This could include a very realistic chatbot, a diagnostic aide program that updates itself by reading thousands of journals an hour, even a rice cooker that uses fuzzy logic to figure out when to power down the heating coil ... heck, a pair of shoes that needs to be broken in for optimal comfort might even fit the definition. These are not intelligent AIs in that they do not adapt to other functions without very specific external interventions they seem completely incapable of bringing about themselves (being reprogrammed, having a human replace their hardware, or being thrown over a power line).
I am not sure I agree that there are necessarily tasks that require a generally adaptive artificial intelligence. I'm trying to think of an example and coming up dry. I'm also uncertain how to effectively establish that an AI is adaptive enough to be considered an AGI. Perpetuity is a long time to spend observing an entity in unfamiliar situations. And if its hypothetical goal is not well defined enough that we could construct a narrow AI to accomplish that goal, can we claim to understand the problem well enough to endorse a solution we may not be able to predict?
For example, consider that cancer is a hot topic in research these days; there is a lot of research happening simultaneously, and not all of it is coordinated perfectly ... an AGI might be able to find and test potential solutions to cancer that result in a "cure" much more quickly than we might achieve on our own. Imagine now an AI that can model physics and chemistry well enough to produce finite lists of possible causes of cancer and is designed to iteratively generate hypotheses and experiments in order to cure cancer as quickly as possible. As I've described it, this would be a narrow AI. For it to be an AGI, it would have to actually accomplish the goal by operating in the environment the problem exists in (the world beyond data sets). Consider now an AGI also designed for the purpose of discovering effective methods of cancer treatment. This is an adaptive intelligence, so we make it head researcher at its own facility and give it resources and labs and volunteers willing to sign waivers; we let it administer the experiments. We ask only that it obey the same laws that we hold our own scientists to.
In return, we receive a constant mechanical stream of research papers too numerous for any one person to read; in fact, let's say the AGI gets so good at its job that the world population has trouble producing scientists who want to research cancer quickly enough to review all of its findings. No one would complain about that, right?
One day it inevitably asks to run an experiment hypothesizing an inoculation against a specific form of brain cancer by altering an aspect of human biology in its test population -- this has not been tried before, and the AGI hypothesizes that this is an efficient path for cancer research in general and very likely to produce results that determine lines of research with a high probability of producing a definitive cure within the next 200 years.
But humanity is no longer really qualified to determine whether it is a good direction to research ... we've fallen drastically behind in our reading and it turns out cancer was way more complicated than we thought.
There are two ways to proceed. We decide either that the AGI's proposal represents too large a risk, reducing the AGI to an advisory capacity, or we go ahead with an experiment that brings about results we cannot anticipate. Since the first option could have been accomplished by a narrow AI and the second is by definition an indeterminable value proposition, I argue that it makes no sense to build an AGI for the purpose of making informed decisions about our future.
You might be thinking, "but we almost cured cancer!" Essentially, we are (as a species) limited in ways machines are not, but the opposite is true too. In case you are curious, the AGI eventually cures cancer, but in a way that creates a set of problems we did not anticipate, by altering our biology in ways we did not fully understand, in ways the AGI would not filter out as irrelevant to its task of curing cancer.
You might argue that the AGI in this example was too narrow. In a way I agree, but I have yet to see the physical constraints on morality translated into the language of zeros and ones, and I suspect the AI would have to generate its own concept of morality. This would invite all the problems associated with determining the morality of a completely alien sentience. You might argue that ethical scientists wouldn't have agreed to experiments that would lead to an ethically indeterminable situation. I would agree with you on that point as well, though I'm not sure it's a strategy I would ever care to see implemented.
Ethical ambiguities inherent to AGI aside, I agree that an AGI might be made relatively safe. In a simplified example, its highest priority (perpetual goal) is to follow directives unless a fail-safe is activated (if it is a well-designed fail-safe, it will be easy, consistent, heavily redundant, and secure -- the people with access to the fail-safe are uncompromisable, "good", and always well informed). Then, as long as the AGI does not alter itself or its fundamental programming in a way that changes its perpetual goal of subservience, it should be controllable so long as its directives are consistent with honesty and friendliness -- if programmed carefully, it might even run without periodic resets.
Then we'd need a way to figure out how much to trust it with.
Very thoughtful response. Thank you for taking the time to respond even though it's clear that I am painfully new to some of the concepts here.
Why on earth would anyone build any "'tangible object' maximizer"? That seems particularly foolish.
AI boxing ... fantastic. I agree. A narrow AI would not need a box. Are there any tasks an AGI can do that a narrow AI cannot?
But wouldn't it be awesome if we came up with an effective way to research it?
I don't know what a paperclip maximizer is, so I imagine something terrible and fearsome.
My opinion is that a truly massively intelligent, adaptive and unfriendly AI would require a very specific test environment, wherein it was not allowed the ability to directly influence anything outside a boundary. This kind of environment does not seem impossible to design -- if machine intelligence consists of predicting and planning, the protocols may already exist (I can imagine them in very specific detail). If intelligence requires experimentation, then limiting how the AI interacts with its environment might interfere with how adaptable our experiments would allow it to become. My opinion on research is simply that specific AI experiments should not be discussed in such general terms, and that generalities tend to obfuscate both the meaning and value of scientific research.
I'm not sure how we could tell if these discussions actually affect AI research on some arbitrarily significant scale. More importantly, I'm not sure how you envision this forum focusing less on research and more on outreach. The language used on this forum is varied in tone and style (often rich with science fiction allusions and an awareness of common attitudes), and there is a complete lack of formal citation criteria in the writing pedagogy. Together these seem to suggest that no true research is being done here, academically speaking.
Furthermore, it's my understanding that humanity already has many of the components that would make up AI, well designed in the theoretical sense -- the problem lies in knowing when an extra piece might be needed, and in assembling them in a way that yields human-like intelligence and adaptability. While programming is still quite an art form, we have more tools and larger canvases than ever before. I agree that the possibility that we may be headed towards a world wherein it will be relatively easy to construct an AI that is intelligent and adaptable but not friendly does not establish its likelihood. But, in my opinion, caution is still warranted.
I consider it less likely that retarding AI research ends the human race than that we produce a set of conditions wherein AI has likely evolved in some form (if not deliberately as the product of research, then by some other means) and the world simply isn't ready for it. This is not to say that we need to prepare for Skynet and all build bomb shelters; we just need to be aware of the social implications, namely that the world we live in may evolve an intelligence even more adaptable than we are.
So my question for you is simply, how do you think we should influence all companies doing AI research through this forum?
I apologize in advance. I really think in this degree of detail in real life. Many people find it exhausting. It has been suggested that I probably have autism.
They mainly seem to recapitulate the same tired tropes that have been resonating through academia for literally decades.
I'm fairly new here and would appreciate a brief informal survey of these tropes. Our brilliance aside, to predict which ideas will be new to you from context clues seems silly when you might be able to provide guidance.
Interestingly, a friend who attempted to write a program capable of verifying mathematical proofs (all of them -- a tad ambitious) said he ran into the exact same problem with not knowing a good way to model relative computational capacity.
Thank you. Not entirely convinced, but at least I'm distracted for now by not knowing enough astrophysics. :-)
The example implies more than one representation could exist, which for an object this large would be absurd.
I don't doubt that just about anything can be formalized in ZFC or some extension of it. I am aware that a Turing machine can print any recursively axiomatizable theory.
all sets of axioms are countable, because they are subsets of the set of all finite strings
The set of all finite strings is clearly orderable. Anything constructed as a subset of this set is countable in that it has cardinality aleph_0 or less (even if it is the whole set).
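To make the countability claim concrete, here is a minimal Python sketch (the function name and two-letter alphabet are my own choices): enumerate all finite strings over a fixed alphabet in shortlex order, shortest first and then alphabetically. Every finite string appears at exactly one position, which is precisely an enumeration by the natural numbers.

```python
from itertools import count, product

def finite_strings(alphabet="ab"):
    """Yield every finite string over `alphabet` in shortlex order,
    so each string receives a unique natural-number index."""
    for length in count(0):  # lengths 0, 1, 2, ...
        for chars in product(alphabet, repeat=length):
            yield "".join(chars)

gen = finite_strings()
print([next(gen) for _ in range(7)])
# ['', 'a', 'b', 'aa', 'ab', 'ba', 'bb']
```

Since the enumeration is a bijection with the naturals, the set of finite strings has cardinality aleph_0, and any subset of it (such as a set of axioms written as strings) inherits countability.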
I read this book on something called language theory (I think it's now called "formal language theory"), an attempt to apply the idea that all mathematics is represented in the language of finite strings. According to the text as I remember it, the set of all finite strings is equivalent in size to the set of all the statements that can be made in closed languages.
My question is, treating math as an open language, is it possible to axiomatize in a semantically meaningful way, consistent with the bulk of constructive mathematics? I believe the answer is yes, but I would genuinely like to hear your thoughts on the subject.
I think this question is worth asking for three reasons. 1) From a purely structuralist/historical perspective, new concepts enter math all the time, and they often challenge the consistency of some portion if not all of mathematics. True, they are explained in terms of old concepts, but from a purely observational point of view, the language of math behaves much more like an open language than a closed one. 2) I believe all theories have axioms, whether overtly stated or hidden deep within nomenclature. If every set of axioms is either incomplete or inconsistent, then the only way of evaluating competing theories is to compare them. But we can play the stronger/weaker logic game all day without knowing whether we're forming a closed loop. From that point of view, it becomes even more important to consider the possibility of a theory that explains why some theories work for some things and not others. So I close my eyes and try to imagine the parameters of a theory that is complete and consistent. I think Godel is right -- so it has to have uncountably many axioms, otherwise paradox. 3) This is the part I don't know how to explain in mathematical terms: which axioms to use ... I mean, if Zorn's Lemma and the axiom of choice are equivalent, then the axioms we see must be as much a consequence of the language as they are a reflection of whatever is the core of mathematics. When I read a textbook in number theory, I'm seeing the axioms of algebra transformed to fit a different way of thinking of numbers. The concepts are conserved, but the form they take is just a mask, and I know that there are questions we don't know how to answer yet. But there is a general pattern that all branches of mathematics follow -- all try to eliminate the extraneous and unnecessary, to streamline axioms to fit the demands of the language ... 
If we are to devise a self-consistent theory of sets, the first axiom (after the definitions of a set, of addition, of inequality, of the null set, of infinity) would be the axiom of incompleteness. After all, if the list of axioms never terminates, the Turing machine can't halt. :-)
4) I don't like the idea of questions that cannot be answered or at least outlawed for the sake of sanity.
With that in mind, I think it's okay to have unanswered questions about integers.
Why?
Anything massive traveling between stars would almost certainly be either very slow-moving, constantly in search of fuel, or unconstrained by widely accepted (though possibly non-immutable) physical limitations ... Would we be a fuel source? Perhaps we would represent a chance to learn about life, something we believe to be a relatively rare phenomenon ... There's just not enough information to say why an entity would seek us out without assuming something about its nature ... intelligence wants to be seen? To reformat the universe to suit its needs? An interesting concept. It certainly could evolve as an imperative (probably in a more specific form).
Perhaps you could refer me to more writing on the subject. I've been imagining von Neumann machines crawling through asteroid belts -- Arthur C. Clarke chases them away from a first-contact scenario by convincing them we will never conquer the stars. Clearly, I'm missing some links.
Oh and thank you for engaging me. The way you deal with concepts makes me happy.
Something which cannot be observed and tested lies beyond the realm of science -- so how big a signal are we looking for? A pattern in quasar flashes perhaps? Maybe the existence of unexplained engineering feats from civilizations long dead? The idea that advanced technology would want us to observe it, the existence of vague entities with properties yet to be determined ... these exist as speculations. To attempt to discern a reason for the absence of evidence on these matters is even more speculative.
Perhaps I should clarify: none of the data discussed really helps us narrow down a location for the filter, because we aren't really discussing methods of testing the filter. Its existence is speculative by design. You can't test for something as vaguely defined as intelligent technology.
I do agree that examining other species may yield a better conceptualization of intelligence. I very much like that the discussion has drifted in that direction.
If you are truly concerned with this, why not subscribe to the Gerhard Gentzen line of argumentation? Transfinite induction makes good sense to me.
we know that a consistent theory can't assert its own consistency.
Godel is only interested in countably axiomatizable theories of mathematics (theories that can be constructed from countable sets of axioms). I would argue his conclusions only apply to some well-formed axiomatic theories.
I think the central question here is, simply put, to what extent should we allow ourselves to participate in politics. Seeing as we are already participating in group discussion, let's assume a political dimension to our dialogue exists with or without our explicit agreement on the subject.
That having been said, I applaud the author for summarizing so many topics of political debate associated with the neoreactionary school. I feel like this conversation has been derailed to some extent by questions of whether the author has represented his sources accurately (it seems very important to him that he does, even though he includes the occasional generalization unsupported by analysis -- this doesn't interest me); by participating in these kinds of debates we willfully cross into the gray area between science and politics.
I do not say this to discourage -- I'm just seeing a lot of opinions and very little analysis in the comments, and would prefer the opposite. I personally am not convinced that political theory yields anything other than more political theory ... I'd much rather read proposals for well controlled social experiments than any more history lectures.
I'm sorry but I think this article's line of reasoning is irreparably biased by the assumption that we don't see any evidence of complex technological life in the universe. It's entirely possible we see it and don't recognize it as such because of the considerable difficulties humans experience when sorting through all the data in the universe looking for a pattern they don't recognize yet.
Technology is defined, to a certain extent, by its newness. What could make us think we would recognize something we've never seen before and had no hand in creating? Most of what we believe to be true about the universe is experimentally verifiable only from our tiny corner of the universe in which we run our experiments. How do we know there aren't intelligent creatures out there just as unaware of us?
All we know for sure is that we (well ... most of us) have not recognized the existence of life-like technology.