Posts

The Limits of My Rationality 2014-12-09T21:08:32.873Z

Comments

Comment by JoshuaMyer on The Limits of My Rationality · 2014-12-14T02:52:16.944Z · LW · GW

An interesting response. I did not mean to imply that the feeling had implicit value, but rather that my discomfort interacted with a set of preexisting conditions in me and triggered many associated thoughts.

I'm not familiar with this specific philosophy; are you suggesting I might benefit from this or would be interested in it from an academic perspective? Both perhaps?

Do you have any thoughts on the rest of the three page article? I'm beginning to feel like I brought an elephant into the room that no one wants to comment on.

Comment by JoshuaMyer on The Limits of My Rationality · 2014-12-10T17:45:59.219Z · LW · GW

I think I must have explained myself poorly ... you don't have to take my subjective experience or my observations as proof of anything on the subject of parables or on cognition. I agree that double entendre can make complex arguments less defensible, but would caution that it may never be completely eliminated from natural language because of the way discourse communities are believed to function.

Specifically, what subject contains many claims for which there is little proof? Are we talking now about literary analysis?

If you also mean to refer to the many claims about the mechanisms of cognition that lack a well-founded neurobiological foundation, there are several source materials informing my opinion on the subject. I understand that the lack of experimentally verifiable results in the field of cognition seems troubling at first glance. For the purposes of streamlining the essay, I assumed a relationship between cognition and intelligence by which intelligence can only be achieved through cognition. Whether this inherently cements the concept of intelligence into the unverifiable annals of natural language, I gladly leave up to each reader to decide. Based on my sense of how the concepts are used here on LW, intelligence and cognition are not so completely well-defined that they could be implemented in strictly rational terms.

However, your thoughts on this are welcome.

Comment by JoshuaMyer on The Limits of My Rationality · 2014-12-10T17:22:35.737Z · LW · GW

Thank you for your feedback. I am not sure what I think, but the general response so far seems to support the notion that I have tried to adapt the structure to a rhetorical position poorly suited for my writing style. I'm hearing a lot of "stream of consciousness" ... the first section specifically might require more argumentation regarding effective rhetorical structures. I attack parables without offering a replacement, which is at best rude but potentially deconstructive past the point of utility. I'm currently working on an introduction that might help generate more discussion based on content.

Comment by JoshuaMyer on The Limits of My Rationality · 2014-12-10T17:11:24.331Z · LW · GW

I have added a short introductory abstract to clarify my intended purpose in writing. Hopefully it helps.

Comment by JoshuaMyer on The Limits of My Rationality · 2014-12-10T16:55:49.855Z · LW · GW

That alone is not an obstacle necessarily. We must establish what these views have in common and how they differ in structure and content.

Comment by JoshuaMyer on The Limits of My Rationality · 2014-12-10T14:39:42.207Z · LW · GW

Also, I'd like to steer away from a debate on the question of whether "deep parables" exist. Let's ask directly, "are the parables here on LW deep?" Are they effective?

Comment by JoshuaMyer on The Limits of My Rationality · 2014-12-10T14:37:38.364Z · LW · GW

cool :-)

Comment by JoshuaMyer on The Limits of My Rationality · 2014-12-10T14:36:37.296Z · LW · GW

I've read both. Paul Graham's style is wonderful ... so long as he keeps himself from reducing all of history to a triangular diagram. I prefer Stanley Fish for clarity on linguistics.

Comment by JoshuaMyer on The Limits of My Rationality · 2014-12-10T14:32:09.390Z · LW · GW

Why is it difficult to talk about parables directly? We have the word and the abstract concept. Seems like a good start.

I feel like you've pointed out what is at least a genuine inconsistency in purpose. The point of this article was not meant to subvert any discussion of economic rationality but rather to focus discussions of intelligence on more universally acceptable models of cognition.

Comment by JoshuaMyer on The Limits of My Rationality · 2014-12-10T14:29:31.161Z · LW · GW

I give several reasons in the text as to why biases are necessary. Essentially, all generative cognitive processes are "biased" if we accept the LW concept of bias as an absolute. Here is an illustrated version -- it seems you aren't the only one uncertain as to how I warrant the claim that bias is necessary. I should have put more argument in the conclusion, and, if this is the consensus, I will edit in the following to amend the essay.

To clarify, there was a time in your life, before you were probably even aware of cognition, during which the process of cognition emerged organically. Sorting through thoughts and memories, optimizing according to variables such as time and calorie consumption, deferring to future selves ... these are all techniques that depend on a preexisting set of conditions from which cognition has ALREADY emerged in whoever is performing these complex tasks. While searching for bias is helpful in eliminating irrationality from cognitive processes, it neither generates the conditions from which cognition emerges nor explains the generative processes at the core of cognition.

I am critical of the LW parables because, from a standpoint of rhetorical analysis, parables get people to associate actions with outcomes. The parables LW use vary in some ways, but are united in that the search for bias is associated with traditionally positive outcomes, whereas the absence of a search for bias becomes associated with comparatively less desirable outcomes. While I expect some learn deeper truths, I find that the most consistent form of analysis being employed on the forums is clearly the ongoing search for bias.

There are, additionally, LW writings about how rationality is essentially generative and creative and should not be limited to bias searches. This essay was my first attempt to explain the existence of bias without relying on some evolutionary set of imperatives. If you have any questions feel free to ask; I hope this helps clarify at least what I should have written.

Comment by JoshuaMyer on The Limits of My Rationality · 2014-12-10T00:03:16.007Z · LW · GW

You are correct. Reciprocal altruism is an ideal not necessarily implementable and I should have written, "As far as the spirit of reciprocal altruism should dictate". :-)

Comment by JoshuaMyer on The Limits of My Rationality · 2014-12-09T23:15:00.691Z · LW · GW

It has nothing to do with my article, but you've made me very happy by explaining this to me. I think I understand better what is meant by "encoding". Also, the bit about "regardless" I found quite witty and even laughed out loud (xkcd.com kept me informed about the OED's decision on that word).

So the encoding was probably not the problem, then, since most programs default to ANSI and switching to a 7-bit encoding was not everyone's unanimous first suggestion ... although I do understand why ASCII is more universal now. Open questions in my mind now include: does the GUI read ASCII and ANSI? And what encoding is used for copying and pasting text?
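For my own reference, here is a rough sketch (purely illustrative -- the filenames are made up) of what re-saving a Windows-1252 ("ANSI") text file as UTF-8 might look like in Python:

    # Hypothetical example: convert a Windows-1252 ("ANSI") text file to UTF-8.
    with open("article.txt", "r", encoding="cp1252") as src:
        text = src.read()
    with open("article_utf8.txt", "w", encoding="utf-8") as dst:
        dst.write(text)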

Comment by JoshuaMyer on The Limits of My Rationality · 2014-12-09T22:18:53.535Z · LW · GW

Either way, I owe you.

Comment by JoshuaMyer on The Limits of My Rationality · 2014-12-09T22:18:26.317Z · LW · GW

So, if I understand the implication, anything encoded in ANSI is not universally machine readable (there are several unfamiliar terms for me here: "anglophone", "ISO 8859-1", and "Windows codepage 1252")? I probably won't look up all the details, because I rarely need to know how many bits a method of encryption involves (I'm probably betraying my naivety here) irregardless of the character set used, but I appreciate how solid a handle you seem to have on the subject.

Comment by JoshuaMyer on The Limits of My Rationality · 2014-12-09T22:10:31.482Z · LW · GW

I tried really hard to imitate and blend the structure of argumentation employed by the most successful articles here. I found that, in spite of the high-minded academic style of writing, structures tended to be overwhelmingly narratives split into three segments that vary greatly in content and structure (the first always establishes tone and subject, the second contains the bulk of the argumentation, and the third is an often incomplete analysis of impacts the argument may have on some hypothetical future state). I can think of a lot of different ways of organizing my observations on the subject of cognitive bias, and though I decided on this structure, I was concerned that, since it was decidedly non-Hegelian, it would come off as poorly organized.

But I feel good about your lumping it in with data on how newcomers perceive LW because that was one of my goals.

Comment by JoshuaMyer on The Limits of My Rationality · 2014-12-09T21:56:22.996Z · LW · GW

ANSI works if I turn off word wrap and put the space between paragraphs, as you suggested. Thanks again Lumifer.

Comment by JoshuaMyer on The Limits of My Rationality · 2014-12-09T21:54:09.342Z · LW · GW

It's fixed now.

Comment by JoshuaMyer on The Limits of My Rationality · 2014-12-09T21:51:09.907Z · LW · GW

You are officially my hero Lumifer. Thank you so much.

Comment by JoshuaMyer on The Limits of My Rationality · 2014-12-09T21:50:19.956Z · LW · GW

HURRAY! Thank you everyone who helped me format this! As far as reciprocal altruism should dictate, Lumifer, I owe you.

Comment by JoshuaMyer on The Limits of My Rationality · 2014-12-09T21:49:16.009Z · LW · GW

okay I did that and am about to paste.

Comment by JoshuaMyer on The Limits of My Rationality · 2014-12-09T21:48:56.075Z · LW · GW

Thanks so much. The formatting is now officially fixed thanks to feedback from the community. I appreciate what you did here nonetheless.

Comment by JoshuaMyer on The Limits of My Rationality · 2014-12-09T21:45:23.890Z · LW · GW

Good to know. I've used OpenOffice in the past and am regretting not using it on this computer. At least I'm learning :-)

Comment by JoshuaMyer on The Limits of My Rationality · 2014-12-09T21:43:11.126Z · LW · GW

Wow. My encoding options are limited to two Unicode variants, ANSI, and UTF-8. Will any of those work for these purposes?

Comment by JoshuaMyer on The Limits of My Rationality · 2014-12-09T21:40:11.839Z · LW · GW

Thank you. I will try this and see if it helps with the paragraph double spacing problem.

Comment by JoshuaMyer on The Limits of My Rationality · 2014-12-09T21:38:59.456Z · LW · GW

OK, so this is marginally better. Found Notepad and copied and pasted after turning on word wrap. Will continue to tweak until the pagination is not obnoxiously bad.

Comment by JoshuaMyer on The Limits of My Rationality · 2014-12-09T21:27:57.347Z · LW · GW

I seem to be in the process of crashing my computer. I hope to have resolved this issue in approximately 10 minutes.

Comment by JoshuaMyer on The Limits of My Rationality · 2014-12-09T21:21:18.318Z · LW · GW

I know. I'm troubleshooting now :-)

Comment by JoshuaMyer on The Limits of My Rationality · 2014-12-09T21:20:53.432Z · LW · GW

I will try this after I try the above suggestion. Thank you also.

Comment by JoshuaMyer on The Limits of My Rationality · 2014-12-09T21:20:23.148Z · LW · GW

I will try this. Thank you for being constructive in spite of the mess.

Comment by JoshuaMyer on The Limits of My Rationality · 2014-12-09T21:19:17.475Z · LW · GW

GUI ... graphical user interface ... as in the one this website uses.

This is what happens as a result of my copy and pasting from the document. I have tried several different file formats ... this was .txt which is fairly universally readable ... I ran into the problem with the default file format in Kingsoft reader as well.

Comment by JoshuaMyer on The Limits of My Rationality · 2014-12-09T21:14:16.063Z · LW · GW

I will remove this as soon as I have been directed to the appropriate channels. I promise it's intelligent and well written ... I just can't seem to narrow down where the problem is and what I can do to fix it.

Comment by JoshuaMyer on The Limits of My Rationality · 2014-12-09T21:12:09.455Z · LW · GW

I don't know how to fix this article ... every time I copy and paste I end up with the format all messed up and the above is the resulting mess. I'm using a freeware program called Kingsoft Writer, and would really appreciate any instruction on what I might do to get this into a readable format. Help me please.

Comment by JoshuaMyer on On Caring · 2014-10-19T21:54:19.129Z · LW · GW

I came to the conclusion that I needed more quantitative data about the ecosystem. Sure, birds covered in oil look sad, but would a massive loss of biodiversity on THIS beach affect the entire ecosystem? The real question I had in this thought experiment was "how should I prevent this from happening in the future?" Perhaps nationalizing oil drilling platforms would allow governments to better regulate the potentially hazardous practice. There is a game going on whereby some players are motivated by the profit incentive and others are motivated by genuine altruism, but it doesn't take place on the beach. I certainly never owned an oil rig, and couldn't really competently discuss the problems associated with actual large high-pressure systems. Does anyone here know if oil spills are an unavoidable consequence of the best long-term strategy for human development? That might be important to an informed decision on how much value to place on the cost of the accident, which would inform my decision about how much of my resources I should devote to cleaning the birds.

From another perspective, it's a lot easier to quantify the cost for some outcomes ... This makes it genuinely difficult to define genuinely altruistic strategies for entities experiencing scope insensitivity. And along that line, giving away money because of scope insensitivity IS amoral. It defers judgement to a poorly defined entity which might manage our funds well or deplorably. Founding a cooperative for the purpose of beach restoration seems like a more ethically sound goal, unless of course you have more information about the bird cleaners. The sad truth is that making the right choice often depends on information not readily available, and the lesson I take from this entire discussion is simply how important it is that humankind evolve more sophisticated ways of sharing large amounts of information efficiently, particularly where economic decisions are concerned.

Comment by JoshuaMyer on Why Does Power Corrupt? · 2014-09-28T01:34:02.798Z · LW · GW

I would argue that without positive reinforcement to shape our attitudes the pursuit of power and the pursuit of morality would be indistinguishable on both a biological and cognitive level. Choices we make for any reason are justified on a bio-mechanical level with or without the blessing of evolutionary imperatives; from this perspective, corruption becomes a term that may require some clarification. This article suggests that corruption might be defined as the misappropriation of shared resources for personal gain; I like this definition, but I'm not sure I like it enough to be comfortable with an ethics based on the assumption that people are vaguely immoral given the opportunity.

My problem here is that power is a poorly defined state. It's not something that can be directly perceived. I'm not sure I have a frame of reference for what it feels like to be empowered over others. For this reason alone, I find some of the article's generalizations about the human condition disturbing -- I'm not trying to alienate so much as prevent myself from being alienated by a description of the human condition wherein my emotional palette does not exist.

So I intend to suggest an alternative interpretation of why "power corrupts" and you all on the internet can tell me what you think, but first I think I need a better grasp on what is meant here by the process of corruption. The type of power we are discussing seems to be best described as the ability to shape the will of others to serve your own purposes.

Of course, alternative ways of structuring society are hinted at throughout the article, and I'd be just as happy to see suggestions as to ways that culture might produce power structures that are less inherently corrupting.

Finally, insofar as this article represents a link in a larger argument (a truly wonderful, fascinating argument), I think it's wonderful.

Comment by JoshuaMyer on LessWrong's attitude towards AI research · 2014-09-23T17:24:52.896Z · LW · GW

What a wonderfully compact analysis. I'll have to check out The Jagged Orbit.

As for an AI promoting an organization's interests over the interests of humanity -- I consider it likely that our conversations won't be able to prevent this from happening. But it certainly seems important enough that discussion is warranted.

Comment by JoshuaMyer on LessWrong's attitude towards AI research · 2014-09-22T17:51:11.444Z · LW · GW

My goodness ... I didn't mean to write a book.

Comment by JoshuaMyer on LessWrong's attitude towards AI research · 2014-09-22T17:50:43.065Z · LW · GW

You have a point there, but by narrow AI, I mean to describe any technology designed to perform a single task that can improve over time without human input or alteration. This could include a very realistic chatbot, a diagnostic aid program that updates itself by reading thousands of journals an hour, even a rice cooker that uses fuzzy logic to figure out when to power down the heating coil ... heck, a pair of shoes that needs to be broken in for optimal comfort might even fit the definition. These are not intelligent AIs in that they do not adapt to other functions without very specific external forces they seem completely incapable of bringing about on their own (being reprogrammed, a human replacing hardware, or being thrown over a power line).

I am not sure I agree that there are necessarily tasks that require a generally adaptive artificial intelligence. I'm trying to think of an example and coming up dry. I'm also uncertain how to effectively establish that an AI is adaptive enough to be considered an AGI. Perpetuity is a long time to spend observing an entity in unfamiliar situations. And if its hypothetical goal is not well defined enough that we could construct a narrow AI to accomplish that goal, can we claim to understand the problem well enough to endorse a solution we may not be able to predict?

For example, consider that cancer is a hot topic in research these days; there is a lot of research happening simultaneously and not all of it is coordinated perfectly ... an AGI might be able to find and test potential solutions to cancer that result in a "cure" much more quickly than we might achieve on our own. Imagine now an AI that can model physics and chemistry well enough to produce finite lists of possible causes of cancer and is designed to iteratively generate hypotheses and experiments in order to cure cancer as quickly as possible. As I've described it, this would be a narrow AI. For it to be an AGI, it would have to actually accomplish the goal by operating in the environment the problem exists in (the world beyond data sets). Consider now an AGI also designed for the purpose of discovering effective methods of cancer treatment. This is an adaptive intelligence, so we make it head researcher at its own facility and give it resources and labs and volunteers willing to sign waivers; we let it administrate the experiments. We ask only that it obey the same laws that we hold our own scientists to.

In return, we receive a constant mechanical stream of research papers too numerous for any one person to read it all; in fact, let's say the AGI gets so good at its job that the world population has trouble producing scientists who want to research cancer quickly enough to review all of its findings. No one would complain about that, right?

One day it inevitably asks to run an experiment hypothesizing an inoculation against a specific form of brain cancer by altering an aspect of human biology in its test population -- this has not been tried before, and the AGI hypothesizes that this is an efficient path for cancer research in general and very likely to produce results that determine lines of research with a high probability of producing a definitive cure within the next 200 years.

But humanity is no longer really qualified to determine whether it is a good direction to research ... we've fallen drastically behind in our reading and it turns out cancer was way more complicated than we thought.

There are two ways to proceed. We decide either that the AGI's proposal represents too large a risk, reducing the AGI to an advisory capacity, or we decide to go ahead with an experiment bringing about results we cannot anticipate. Since the first option could have been accomplished by a narrow AI and the second is by definition an indeterminable value proposition, I argue that it makes no sense to actually build an AGI for the purpose of making informed decisions about our future.

You might be thinking, "but we almost cured cancer!" Essentially, we are (as a species) limited in ways machines are not, but the opposite is true too. In case you are curious, the AGI eventually cures cancer, but in a way that creates a set of problems we did not anticipate by altering our biology in ways we did not fully understand, in ways the AGI would not filter out as irrelevant to its task of curing cancer.

You might argue that the AGI in this example was too narrow. In a way I agree, but I have yet to see the physical constraints on morality translated into the language of zeros and ones, and I suspect the AI would have to generate its own concept of morality. This would invite all the problems associated with determining the morality of a completely alien sentience. You might argue that ethical scientists wouldn't have agreed to experiments that would lead to an ethically indeterminable situation. I would agree with you on that point as well, though I'm not sure it's a strategy I would ever care to see implemented.

Ethical ambiguities inherent to AGI aside, I agree that an AGI might be made relatively safe. In a simplified example, its highest priority (perpetual goal) is to follow directives unless a fail-safe is activated (if it is a well designed fail-safe, it will be easy, consistent, heavily redundant, and secure -- the people with access to the fail-safe are uncompromisable, "good", and always well informed). Then, as long as the AGI does not alter itself or its fundamental programming in such a way that changes its perpetual goal of subservience, it should be controllable so long as its directives are consistent with honesty and friendliness -- if programmed carefully it might even run without periodic resets.

Then we'd need a way to figure out how much to trust it with.
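Here is a rough sketch, in Python, of the kind of priority structure I mean (purely illustrative -- the class, its names, and the trivial decision logic are made up, not a real design):

    # Toy illustration: the fail-safe outranks every directive, permanently.
    class SupervisedAGI:
        def __init__(self):
            self.failsafe_engaged = False   # the "perpetual goal" switch
            self.directives = []            # ordinary tasks, lower priority

        def engage_failsafe(self):
            # Highest priority: once engaged, no directive is acted on again.
            self.failsafe_engaged = True

        def add_directive(self, directive):
            self.directives.append(directive)

        def step(self):
            if self.failsafe_engaged:
                return "halted"             # subservience outranks the directives
            if self.directives:
                return "executing: " + self.directives.pop(0)
            return "idle"

Whether a real AGI's self-modification could be kept from ever touching that switch is, of course, the hard part.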

Comment by JoshuaMyer on LessWrong's attitude towards AI research · 2014-09-21T16:30:53.700Z · LW · GW

Very thoughtful response. Thank you for taking the time to respond even though it's clear that I am painfully new to some of the concepts here.

Why on earth would anyone build any "'tangible object' maximizer"? That seems particularly foolish.

AI boxing ... fantastic. I agree. A narrow AI would not need a box. Are there any tasks an AGI can do that a narrow AI cannot?

Comment by JoshuaMyer on LessWrong's attitude towards AI research · 2014-09-20T20:42:45.764Z · LW · GW

But wouldn't it be awesome if we came up with an effective way to research it?

Comment by JoshuaMyer on LessWrong's attitude towards AI research · 2014-09-20T20:41:33.772Z · LW · GW

I don't know what a paperclip maximizer is, so I imagine something terrible and fearsome.

My opinion is that a truly massively intelligent, adaptive and unfriendly AI would require a very specific test environment, wherein it was not allowed the ability to directly influence anything outside a boundary. This kind of environment does not seem impossible to design -- if machine intelligence consists of predicting and planning, the protocols may already exist (I can imagine them in very specific detail). If intelligence requires experimentation, then limiting how the AI interacts with its environment might interfere with how adaptable our experiments would allow it to become. My opinion on research is simply that specific AI experiments should not be discussed in such general terms, and that generalities tend to obfuscate both the meaning and value of scientific research.

I'm not sure how we could tell if these discussions actually affect AI research on some arbitrarily significant scale. More importantly, I'm not sure how you envision this forum focusing less on research and more on outreach. The language used on this forum is varied in tone and style (often rich with science fiction allusions and an awareness of common attitudes) and there is a complete lack of formal citation criteria in the writing pedagogy. Together these seem to suggest that no true research is being done here, academically speaking.

Furthermore, it's my understanding that humanity already has many of the components that would make up AI, well designed in the theoretical sense -- the problem lies in knowing when an extra piece might be needed, and in assembling them in a way that yields human-like intelligence and adaptability. While programming still is quite an art form, we have more tools and larger canvases than ever before. I agree that the possibility that we may be headed towards a world wherein it will be relatively easy to construct an AI that is intelligent and adaptable but not friendly does not establish its likelihood. But, in my opinion, caution is still warranted.

I consider it less likely that retarding AI research ends the human race than that we produce a set of conditions wherein it is likely that AI has evolved in some form (if not deliberately as the product of research, then by some other means) and the world just simply isn't ready for it. This is not to say that we need to prepare for Skynet and all build bomb shelters; we just need to be aware of the social implications should the world we live in evolve an intelligence even more adaptable than us.

So my question for you is simply, how do you think we should influence all companies doing AI research through this forum?

I apologize in advance. I really think in this degree of detail in real life. Many people find it exhausting. It has been suggested that I probably have autism.

Comment by JoshuaMyer on Everybody's talking about machine ethics · 2014-09-19T19:20:26.114Z · LW · GW

They mainly seem to recapitulate the same tired tropes that have been resonating through academia for literally decades.

I'm fairly new here and would appreciate a brief informal survey of these tropes. Our brilliance aside, to predict which ideas will be new to you from context clues seems silly when you might be able to provide guidance.

Interesting to me, a friend who attempted to write a program capable of verifying mathematical proofs (all of them -- a tad ambitious) said he ran into the exact same problem with

not knowing a good way to model relative computational capacity.

Comment by JoshuaMyer on The Octopus, the Dolphin and Us: a Great Filter tale · 2014-09-18T20:08:43.181Z · LW · GW

Thank you. Not entirely convinced, but at least I'm distracted for now by not knowing enough astrophysics. :-)

Comment by JoshuaMyer on Consistent extrapolated beliefs about math? · 2014-09-11T19:20:04.816Z · LW · GW

The example implies more than one representation could exist, which for an object this large would be absurd.

Comment by JoshuaMyer on Consistent extrapolated beliefs about math? · 2014-09-11T05:30:38.465Z · LW · GW

I don't doubt that just about anything can be formalized in ZFC or some extension of it. I am aware that a Turing machine can print any recursively axiomatizable theory.

all sets of axioms are countable, because they are subsets of the set of all finite strings

The set of all finite strings is clearly order-able. Anything constructed from subsets of this set is countable in that it has cardinality aleph_1 or less (even if it contains the set).
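To spell out the counting argument I'm leaning on here (nothing original in it): over a finite or countable alphabet \Sigma,

    |\Sigma^*| = |\bigcup_{n \ge 0} \Sigma^n| \le \aleph_0 \cdot \aleph_0 = \aleph_0,

so the finite strings can be listed by length and then lexicographically, and any particular set of axioms, being a set of finite strings, is at most countable.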

I read this book on something called language theory (I think it's now called "formal language theory"), an attempt to apply the idea that all mathematics is represented in the language of finite strings. According to the text as I remember it, the set of all finite strings is equivalent in size to the set of all the statements that can be made in closed languages.

My question is, treating math as an open language, is it possible to axiomatize in a semantically meaningful way, consistent with the bulk of constructive mathematics? I believe the answer is yes, but I would genuinely like to hear your thoughts on the subject.

I think this question is worth asking for three reasons.

1) From a purely structuralist/historical perspective, new concepts enter math all the time, and they often challenge the consistency of some portion if not all of mathematics. True, they are explained in terms of old concepts, but from a purely observational point of view, the language of math behaves much more like an open language than a closed one.

2) I believe all theories have axioms, whether overtly stated or hidden deep within nomenclature. If any set of axioms is both incomplete and inconsistent, then the only way of evaluating competing theories is to compare them. But we can play the stronger/weaker logic game all day without knowing if we're forming a closed loop. From that point of view, it becomes even more important to consider the possibility of a theory that explains why some theories work for some things and not others. So I close my eyes and try to imagine the parameters of a theory that is complete and consistent. I think Gödel is right -- so it has to have uncountably many axioms, otherwise paradox.

3) This is the part I don't know how to explain in mathematical terms. Which axioms to use ... I mean, if Zorn's Lemma and the axiom of choice are the same thing, then the axioms we see must be as much a consequence of the language as they are a reflection of whatever is the core of mathematics. When I read a textbook in number theory, I'm seeing the axioms of algebra transformed to fit a different way of thinking of numbers. The concepts are conserved, but the form they take is just a mask, and I know that there are questions we don't know how to answer yet. But there is a general pattern that all branches of mathematics follow -- all try to eliminate the extraneous and unnecessary, to streamline axioms to fit the demands of the language ... If we are to devise a self-consistent theory of sets, the first axiom (after the definition of a set, of addition, of inequality, of the null-set, of infinity) would be the axiom of incompleteness. After all, if the list of axioms never terminates, the Turing machine can't halt. :-)

4) I don't like the idea of questions that cannot be answered or at least outlawed for the sake of sanity.

With that in mind, I think it's okay to have unanswered questions about integers.

Comment by JoshuaMyer on The Octopus, the Dolphin and Us: a Great Filter tale · 2014-09-11T03:45:24.718Z · LW · GW

Why?

Anything massive traveling between stars would almost certainly be either very slow-moving, constantly in search of fuel, or unconstrained by widely accepted (though possibly non-immutable) physical limitations ... Would we be a fuel source? Perhaps we would represent a chance to learn about life, something we believe to be a relatively rare phenomenon ... There's just not enough information to say why an entity would seek us out without assuming something about its nature ... intelligence wants to be seen? To reformat the universe to suit its needs? An interesting concept. It certainly can evolve as an imperative (probably in a more specific form).

Perhaps you could refer me to more writing on the subject. I've been imagining von Neumann machines crawling through asteroid belts -- Arthur C. Clarke chases them away from a first contact scenario by convincing them we will never conquer the stars. Clearly, I'm missing some links.

Oh and thank you for engaging me. The way you deal with concepts makes me happy.

Comment by JoshuaMyer on The Octopus, the Dolphin and Us: a Great Filter tale · 2014-09-10T01:59:26.133Z · LW · GW

Something which cannot be observed and tested lies beyond the realm of science -- so how big a signal are we looking for? A pattern in quasar flashes perhaps? Maybe the existence of unexplained engineering feats from civilizations long dead? The idea that advanced technology would want us to observe it, the existence of vague entities with properties yet to be determined ... these exist as speculations. To attempt to discern a reason for the absence of evidence on these matters is even more speculative.

Perhaps I should clarify: none of the data discussed really helps us narrow down a location for the filter, because we aren't really discussing methods of testing the filter. Its existence is speculative by design. You can't test for something as vaguely defined as intelligent technology.

I do agree that examining other species may yield a better conceptualization of intelligence. I very much like that the discussion has drifted in that direction.

Comment by JoshuaMyer on Consistent extrapolated beliefs about math? · 2014-09-08T22:02:01.642Z · LW · GW

If you are truly concerned with this, why not subscribe to Gerhard Gentzen's line of argumentation? Transfinite induction makes good sense to me.
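(For context, and not claiming anything original: Gentzen proved the consistency of Peano arithmetic by transfinite induction up to the ordinal

    \varepsilon_0 = \sup\{\omega, \omega^\omega, \omega^{\omega^\omega}, \dots\},

the least ordinal \alpha satisfying \omega^\alpha = \alpha -- an induction principle that PA itself cannot prove.)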

we know that a consistent theory can't assert its own consistency.

Gödel is only interested in countably axiomatizable theories of mathematics (theories that can be constructed from countable sets of axioms). I would argue his conclusions only apply to some well-formed axiomatic theories.

Comment by JoshuaMyer on "NRx" vs. "Prog" Assumptions: Locating the Sources of Disagreement Between Neoreactionaries and Progressives (Part 1) · 2014-09-08T21:40:53.085Z · LW · GW

I think the central question here is, simply put, to what extent should we allow ourselves to participate in politics. Seeing as we are already participating in group discussion, let's assume a political dimension to our dialogue exists with or without our explicit agreement on the subject.

That having been said, I applaud the author for summarizing so many topics of political debate associated with the neoreactionary school. I feel like this conversation has been derailed to some extent by questions of whether the author has represented his sources accurately (it seems very important to him that he represent his sources accurately, even though he includes the occasional generalization unsupported by analysis -- this doesn't interest me); by participating in these kinds of debates we willfully cross into the gray area between science and politics.

I do not say this to discourage -- I'm just seeing a lot of opinions and very little analysis in the comments, and would prefer the opposite. I personally am not convinced that political theory yields anything other than more political theory ... I'd much rather read proposals for well controlled social experiments than any more history lectures.

Comment by JoshuaMyer on The Octopus, the Dolphin and Us: a Great Filter tale · 2014-09-07T23:38:40.303Z · LW · GW

I'm sorry but I think this article's line of reasoning is irreparably biased by the assumption that we don't see any evidence of complex technological life in the universe. It's entirely possible we see it and don't recognize it as such because of the considerable difficulties humans experience when sorting through all the data in the universe looking for a pattern they don't recognize yet.

Technology is defined, to a certain extent, by its newness. What could make us think we would recognize something we've never seen before and had no hand in creating? Most of what we believe to be true about the universe is experimentally verifiable only from our tiny corner of the universe in which we run our experiments. How do we know there aren't intelligent creatures out there just as unaware of us?

All we know for sure is that we (well ... most of us) have not recognized the existence of life-like technology.