If you have a point then lay it out. Set a context, make your claims and challenge mine. Expose your beliefs and accept the risks.
I lay out my claims to you because I want you to challenge them from your perspective. I will not follow your leading questions to your chosen point of philosophical ambush.
This one-line response seems largely repetitive of your others. It isn't obvious to me that you are making an effort to address my challenge to your claim that 'experience itself is certain to exist'. If you would like to address that, please do; otherwise it seems that we are done.
I believe that the answer depends on the perspective I adopt. This is the answer that makes sense from my current perspective.
If I model what I understand of your perspective within myself, I would say that of course all my learning proceeds from some form of sensory experience; other claims are nonsensical.
With another model: The brain structures related to learning depend on more than just sensory experience, they also depend on the action of our DNA, gene networks, the limits of energy availability along with many other factors.
But why does the answer have to be sensical from your perspective?
With another model: There is a process called MUP which imparts knowledge in any form to the human mind. This process is, by definition, any possible process not included in 'sensory experience' as defined by shiftedShapes. In other words, MUP is any possible process, or perspective on a process, that leads to learning beyond your claims about 'sensory experience'. Not being able to think of any examples of MUP does not disprove that MUP exists.
With another model: Blue hat.
I also believe that there are many things that we would agree on; my arguments are just an indication that I currently find certain aspects of this topic interesting to argue about--mind expanding. :)
I want to make the case, though, that experience itself is neither "certain to exist", nor "uncertain to exist". I think that "experience itself" is fundamental to Dasein, and that it therefore cannot be subject to either certainty or uncertainty.
I am happy to hold my arguments against certainty for shiftedShapes--however I will now make similar arguments against your claim that '"experience itself" is fundamental to Dasein'.
The identification of a fundamental nature of Dasein requires a perspective and so is contingent on that perspective, and presumably on the limited access that perspective has to the thing it identifies as Dasein.
I will offer a competing view. Dasein is only fundamentally 'blue hat'. It feels obviously 'blue hat' to me; without 'blue hat' it would not be Dasein; nothing else about it is essential.
Presumably neither of our claims change the actual nature of what we are attempting to refer to when we say Dasein. Dasein and our conceptions of it are concepts generated by and within... well, by and within our Dasein in some limited sense.
The problem with both of our claims is then the sense in which we are attempting to establish a description as a matter of fact. We are implying a universal perspective from which our claims can be understood to be true. Such a perspective seems inaccessible to me, so I will treat this kind of attribution as an error, perhaps as a 'not even wrong'.
So I agree that experience itself is neither "certain to exist", nor "uncertain to exist", but in the same mode I would add that "experience itself" (or "blue hat") is neither "fundamental to Dasein" nor "non-fundamental to Dasein". At least I would make this assessment when there appears to be an implied universal perspective involved.
If "experience itself" really is a fundamental element of Dasein, then we can think of it as an axiom of the human condition. Since we can only observe from within the human condition, this places the question of the existence of experience beyond proof or disproof, beyond contingency, and therefore beyond certainty or uncertainty.
If you were to say that there is a perspective from within the human condition from which "experience itself" appears to be a fundamental element of Dasein, I would not argue; it is an ontology we can work with as long as it seems useful. If you were to say that this perspective was primary, complete, unquestionable, fundamental, or certain, then I am currently tempted to question the basis of your claim, the perspective from which your claim is made, or from which it holds.
Without full access to all possible perspectives of my implementation, how would I know for certain?
I can certainly adopt a perspective that describes how all learning proceeds through my sensory experience. But the identification of this pattern from my adopted limited perspective does not actually exclude other possible perspectives.
I'm not arguing that your model of sensory experience is wrong; I actually believe it has great descriptive value. I'm arguing that it is limited by and dependent on the context from which it appears to emerge.
I am arguing against your claims of certainty, in their various forms.
The map is not the territory. The 'self-evident' nature that you identify is a map; it is an artifact of a process. That process, even though it is you in some sense, has only perspective-limited access to what it is to be you.
Within the walls identified by this process you feel justifiably confident in the existence of your experience, in its 'self-evident' nature. Yet there is no escape from the territory, which includes the as-yet-unexamined foundational substrates of your perspective.
Only one perspective is possible: one's own perspective.
But even one's own perspective is a dynamic, living and changing perspective; and quite probably it is non-unitary in some ways. We are not locked into the mind we are born with, and the experience that you identify is only a limited and conditional aspect of what goes into the making and modification of the experience of 'what you think you are'.
Thanks for your excellent response to this Argency. I am using one philosophical perspective to challenge another--which can be a bit tricky--so I hope that you will put up with any misinterpretation on my part.
This sounds to me like Kripkenstein's Error. You might just as well despair that you also need a method to verify and confirm each of those methods, and a method-confirmation confirmation method... etc, etc. ... Surely this infinite regress constitutes a reductio ad absurdum.
I'm challenging the claim that 'experience itself is certain to exist' by pointing out that an identification of existence requires a basis of identification, which at some level of evaluation comes with inherent uncertainty. I'm making an argument against the claimed certainty, and for accepting uncertainty; I'm not making an argument for 'reductio ad absurdum'.
You're arguing as though experience is outside and separate to the self...
I don't intend to give that impression so I will provide another description. When I consider my own experience I am performing an identification; I am interpreting my own condition from a particular basis. Very roughly speaking, this basis is substantially the same as the basis engaged in the 'experience' I'm identifying. The identification of 'self' and 'experience' from the perspective of this basis only captures some limited aspects of what is actually going on. The rest is left unexamined and provides a source of uncertainty to any claim that I might make. There is no avoiding dependence on perspective, even within our own minds.
It is not evident to me that this entanglement of contexts creates the necessary conditions to support a claim such as 'experience is the only thing that is certain to exist'. If anything, I would generally argue that the lack of independence between the perspectives reduces certainty--which perhaps is related to the value of the outside view.
When we say that we can't doubt our own sensations, we're tautologising. It isn't the case that we might have been able to doubt them, but on balance they seem doubtless - rather, we cannot talk of doubt or certainty being applied to our experiences, since doubt and certainty are themselves experiences.
Even tautologies require a perspective to provide them meaning. It sounds to me that you follow a particular path of evaluation which is something like this (although you might choose different words):
- I'm thinking, therefore I'm existing.
- I'm thinking about (I'm thinking, therefore I'm existing.) therefore I'm existing.
- I'm thinking about (I'm thinking about (I'm thinking, therefore I'm existing.) therefore I'm existing.) therefore I'm existing.
- ...
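The regress in the list above can be sketched as a tiny recursive string-builder (a toy illustration only; the function name and structure are mine, not anyone's actual argument):

```python
def cogito(depth: int) -> str:
    """Build the depth-th statement in the regress of
    self-confirming 'I'm thinking, therefore I'm existing'
    evaluations."""
    if depth == 0:
        return "I'm thinking, therefore I'm existing."
    return ("I'm thinking about (" + cogito(depth - 1)
            + ") therefore I'm existing.")

# Each level wraps the previous one; no finite depth ends the chain.
print(cogito(2))
```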
You recognize the pattern and reduce this to the claim, 'I can be certain that I'm existing'. The problem is that other chains of evaluation would provide different results, even 'I can't be certain that I'm existing.' This is not a good conclusion, but it is probably a fine axiom.
I have no problems with axioms. If you wish to claim as an axiom something like 'experience itself is certain to exist', then I will accept your axiom and evaluate your arguments relative to it. But if instead you claim that 'experience itself is certain to exist' is a conclusion, then I will argue as I have been: that your claim depends upon the unexamined aspects of the perspective that generated it, rendering your claim of 'certainty' inherently uncertain.
... So when we say, "every rod has a length" or "I am certain of my experiences", we're not offering our conversational partner some contingent fact, rather we are defining our terms for them.
These definitions are actually contingent upon your perspective. It is generally fair for your conversational partner to ask you to describe the basis of your definitions so he can better model your understanding of them.
Nothing can be learned or tested except through sensory experience.
This claim also requires a perspective from which it is identified. The implementation of this perspective is a source of uncertainty if left unexamined.
Thus outside verification is impossible.
There is no need to talk about outside verification. All verification is done from a perspective--it does not limit my argument to assume a 'sensory experience' interface for that perspective.
I don't see how your response supports your claim that 'experience itself is certain to exist', which is the claim that I am challenging. Would you try to clarify this for me?
If a means of transmission is only reliable to a certain limited extent then the media transmitted could approach the limits of that channel's reliability, but never surpass it.
Actually, error-free communication can be established over any channel as long as there is some level of signal (plus some other minor requirements).
But perhaps I'm misunderstanding the point you are making?
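A minimal sketch of the standard argument behind that claim, using a repetition code with majority-vote decoding (my own illustration; Shannon's noisy-channel coding theorem is the stronger result that a fixed positive rate is achievable whenever channel capacity is nonzero):

```python
from math import comb

def majority_error(p: float, n: int) -> float:
    """Probability that an n-repetition code over a channel with
    per-bit flip probability p is decoded incorrectly by majority
    vote (n odd, so there are no ties)."""
    return sum(comb(n, k) * (p ** k) * ((1 - p) ** (n - k))
               for k in range(n // 2 + 1, n + 1))

# With a 10% flip rate, repeating each bit drives the residual
# error probability toward zero.
for n in (1, 3, 11, 51):
    print(n, majority_error(0.1, n))
```

The repetition code wastes bandwidth, which is exactly the inefficiency Shannon showed is avoidable; but it is enough to show that a noisy channel does not cap the reliability of what is transmitted.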
but the experience itself is certain to exist.
From what perspective is it certain to exist? When you identify 'the experience', this identification is an explanation from a particular perspective. By your argument it is subject to uncertainty.
I only see the certainty you refer to when I adopt a perspective that assumes there is no uncertainty in its own basis. For example if you establish as an axiom that 'primary sensory experience can be confirmed to exist by the experience itself'.
Otherwise I need a method to identify 'primary sensory experience', a method to identify 'the experience' related to it, and a method to verify that the former can be confirmed to exist by the latter. These methods have their own basis of implementation, which introduces uncertainty if left unexamined.
Thanks for poking at the formicary of philosophy -- the concepts of reality, existence, justification, truth, and belief.
My primary tool for dissolving questions is to ask "From what perspective?". From what perspective do the claims hold? From what perspective are the claims made?
The descriptions of both direct and indirect realism identify the concepts of an external reality and its interpretation by human senses and mind. Manfred in his comment provides some models from this perspective.
When I ask the question "From what perspective?", I see that these descriptions are from a third-person perspective, a human perspective, and so these descriptions are from substantially the same context that our first-person experience of reality comes from. My answer to this question also came from a human perspective, and so is also from substantially the same context... and so forth in a seemingly pointless regression of justification.
From this it seems reasonable to claim that we have an anthropocentric perspective of reality, and every evaluation of our perspective on reality is also substantially anthropocentric.
(But you might say "We can build tools of math and science that provide perspectives that are independent of the human mind." To which I would respond that these tools were designed by humans, relate to reality as described from a human perspective, and produce results that are translated into the terms of human experience and understanding so that we can comprehend them.)
From this perspective neither direct nor indirect realism describes a universally objective situation; they are actually subjective anthropocentric descriptions. As such, we should be able to identify contexts where these descriptions of realism are valid, invalid, or even meaningless.
What remains is a question of pragmatics, not of truth: is one of these perspectives on realism more useful than competing perspectives for the context of current concern?
Gentlemen! Welcome to Rationality Club. The first rule of Rationality Club is: you do not talk about basilisks. The second rule of Rationality Club is: you DO NOT even allude to basilisks!
Existence is reserved for things we have access to. Possible existence implies possible access. Actual existence implies actual access. Non-existence implies no possible access.
It is certainly possible to describe things outside of all possible access. For example as mentioned above we can talk about "non-actual or nonexistent things" and "possible worlds" that we can't access because they are counterfactual or because they are a separate reality. But when we talk about things beyond all possible access, we are just making up stories, and we can say anything. For example: All non-existent things are blue, and they are also simultaneously non-blue.
This reshapes the question to "Can something exist even if we don't have access to it?"
Although I am tempted to say that it certainly seems possible, I believe that the best approach is not to make any claims about anything beyond our access.
I'm exploring some elements of the philosophy of existence (ontology) and while reading about ontological arguments I was reminded again about the description of God as the "unmoved mover".
It occurred to me that although we can't say anything meaningful about the ultimate origin of motion, we can describe the mover that is not changed by the motion from a mathematical perspective: it is called relativity -- a static description of dynamic systems.
Everything that exists does so in some definite quantity. Existence is that property of conceptual referents such that they necessarily exist in some definite quantity.
I'm confused by this mix of referring to things that exist and referring to existence as a property of conceptual referents. Are you saying that conceptual referents are the things that exist in finite and definite quantity? Or are you saying something else?
definite quantity
I see that you are claiming that existing things are bounded in some quantifiable way, but you do not seem to account for the inherent uncertainty of determining quantities.
The identification of a definite quantity requires a quantifier. Some uncertainty comes from the implementation of this quantifier; if it is incorrect then the identified quantity would be wrong. You could handle this by verifying the implementation of the quantifier, but that only pushes the uncertainty into the context of verification. To use the quantifier you must choose to halt the regression of verification and accept the remaining uncertainty.
Additional uncertainty comes from the choice of the quantifier. The quantifier used is one choice from a large and possibly infinite set of possible quantifiers. Not all of these quantifiers would provide the same answer--or even provide a "reasonable" answer, for example by replying with "hat" instead of a number like "2".
For example: I scoop up a handful of gravel from a beach. I want to count the stones in my hand. But my hand contains all kinds of stuff: rocks and dirt from the size of dust to a couple of inches across, bits of wood, shell, and other organic debris. Out of this mess, which bits of the stuff are stones? It depends on how I quantify stones: is it by volume, apparent area, mass, composition, color, texture... there are many possible measurements, and combinations of measurements. I choose one way of counting stones and get a quantity of 5, but it could have been 1 or 1000 or "blue hat" if I had made other choices.
Given this uncertainty, how can I know that only a "definite quantity" of stones exist in my hand?
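The stone-counting example can be put in code (the handful and the quantifiers are hypothetical data I made up for illustration):

```python
# Hypothetical handful of beach debris; kinds and sizes invented.
handful = [
    {"kind": "rock",  "size_mm": 20.0, "mineral": True},
    {"kind": "rock",  "size_mm": 0.1,  "mineral": True},   # dust
    {"kind": "shell", "size_mm": 8.0,  "mineral": True},
    {"kind": "shell", "size_mm": 6.0,  "mineral": True},
    {"kind": "wood",  "size_mm": 12.0, "mineral": False},
]

# Three defensible quantifiers for "stone"; each is one choice
# from a large space of possible quantifiers.
quantifiers = {
    "labeled a rock":     lambda x: x["kind"] == "rock",
    "mineral and > 2 mm": lambda x: x["mineral"] and x["size_mm"] > 2,
    "anything > 2 mm":    lambda x: x["size_mm"] > 2,
}

for name, is_stone in quantifiers.items():
    print(name, sum(1 for item in handful if is_stone(item)))
```

Same handful, three different "definite quantities" (2, 3, and 4); none of the quantifiers is wrong, they just carve the stuff differently.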
It would be great to see you here. Your profile has you in Berkeley, are you visiting Portland?
The more recent meta-analysis appears to support their initial conclusion.
If you need a place to stay in Boise I might also be able to help with that.
Let's say that ontology is the study of that which exists, epistemology the study of knowledge, phenomenology the study of appearances, and methodology the study of technique.
Thanks for the description. That would place the core of my claims as an ontology, with implications for how to approach epistemology and phenomenology.
I wouldn't call that meaning, unless you're going to explicitly say that there are meaning-qualia in your antenna-photon system. Otherwise it's just cause and effect. True meaning is an aspect of consciousness. Functionalist "meaning" is based on an analogy with meaning-driven behavior in a conscious being.
I recognize that my use of meaning is not normative. I won't defend this use because my model for it is still sloppy, but I will attempt to explain it.
The antenna-photon interaction that you refer to as cause and effect I would refer to as a change in the dynamics of the system, as described from a particular perspective.
To refer to this interaction as cause and effect requires that some aspect of the system be considered the baseline; the effect then is how the state of the system is modified by the influencing entity. Such a perspective can be adopted and might even be useful. But the perspective that I am holding is that the antenna and the photon are interacting. This is a process that modifies both systems. The "meaning" that is formed is unique to the system; it depends on the particulars of the systems and their interactions. Within the system that "meaning" exists in terms of the dynamics allowed by the nature of the system. When we describe that "meaning" we do so in the terms generated from an external perspective, but that description will only capture certain aspects of the "meaning" actually generated within the system.
How does this description compare with your concept of "meaning-qualia"?
Does your philosophy have a name? Like "functionalist perspectivism"?
I think that both functionalism and perspectivism are poor labels for what I'm attempting to describe: both philosophies pay too much attention to human consciousness, and neither is set up to explain the nature of existence generally.
For now I'm calling my philosophy the interpretive context hypothesis (ICH), at least until I discover a better name or a better model.
I can help you when you are in the Portland area. Just let me know what you need.
A really well-known one is the cycle connecting ontology and epistemology: your epistemology should imply your ontology, and your ontology must permit your epistemology. More arcane is the interplay between phenomenology, epistemology, and methodology.
I have read many of your comments and I am uncertain how to model your meanings for 'ontology', 'epistemology' and 'methodology', especially in relation to each other.
Do you have links to sources that describe these types of cycles, or are you willing to describe the cycles you are referring to--in the process establishing the relationship between these terms?
Your approach to ontology seems to combine these two cycles, with the p/e/m cycle being more fundamental. All ontological claims are said to be dependent on a cognitive context, and this justifies ontological relativism.
The term "cycles" doesn't really capture my sense of the situation. Perhaps the sense of recurrent hypergraphs is closer.
Also, I do not limit my argument only to things we describe as cognitive contexts. My argument allows for any type of context of evaluation. For example, an antenna interacting with a photon creates a context of evaluation that generates meaning in terms of the described system.
...and this justifies ontological relativism.
I think that this epistemology actually justifies something more like an ontological perspectivism, but it generalizes the context of evaluation beyond the human-centric concepts found in relativism and perspectivism. Essentially it stops privileging human consciousness as the only context of evaluation that can generate meaning. It is this core idea that separates my epistemology from most of the related work I have found in epistemology, philosophy, linguistics and semiotics.
In what you write I don't see a proof that foundations don't exist or can't be reached.
I'm glad you don't see those proofs because I can't claim either point from the implied perspective of your statement. Your statement assumes that there exists an objective perspective from which a foundation can be described. The problem with this concept is that we don't have access to any such objective perspective. We can only identify the perspective as "objective" from some perspective... which means that the identified "objective" perspective depends upon the perspective that generated the label, rendering the label subjective.
You do provide an algorithm for finding an objective description:
I see the possibility of reaching foundations, and also the possibility of countering the relativistic influence of the p/e/m perspective, simply by having a good ontological account of what the p/e/m cycle is about. From this perspective, the cycle isn't an endless merry-go-round, it's a process that you iterate in order to perfect your thinking. You chase down the implications of one ology for another, and you keep that up until you have something that is complete and consistent.
Again from this it seems that while you reject some current conclusions of science, you actually embrace scientific realism--that there is an external reality that can be completely and consistently described.
As long as you are dealing in terms of maps (descriptions), it isn't clear to me that you ever escape the language hierarchy, and therefore you are never free of Gödel's theorems. To achieve the level of completeness and consistency you strive for, it seems that you need to describe reality in terms equivalent to those it uses... which means you aren't describing it so much as generating it. If this description of a reality is complete then it is rendered in terms of itself, and only itself, which would make it a reality independent of ours, and so we would have no access to it (otherwise it would simply be a part of our reality and therefore not complete). Descriptions of reality that generate reality aren't directly accessible by the human mind; any translation of these descriptions to human-accessible terms would render the description subject to Gödel's theorems.
I see no reason to abandon cognitive optimism.
I don't want anybody to abandon the search for new and better perspectives on reality just because we don't have access to an objective perspective. But by realizing that there are no objective perspectives we can stop arguing about the "right" way of viewing all of reality and spend that time finding "good" or "useful" ways to view parts of it.
Continuing my argument.
It appears to me that you are looking for an ontology that provides a natural explanation for things like "qualia" and "consciousness" (perhaps by way of phenomenology). You would refer to this ontology as the "true ontology". You reject Platonism, "an ontology which reifies mathematical or computational abstractions", because things like "qualia" are absent.
From my perspective, your search for the "true ontology"--which privileges the phenomenological perspective of "consciousness"--is indistinguishable from the scientific realism that you reject under the name "Platonism"--which (by some accounts) privileges a materialistic or mathematical perspective of everything.
For example, using a form of your argument I could reject both of these approaches to realism because they fail to directly account for the phenomenological existence of SpongeBob SquarePants, and his wacky antics.
Much of what you have written roughly matches my perspective, so to be clear I am objecting to the following concepts and many of the conclusions you have drawn from them:
- "true ontology"
- "true epistemology"
- "Consciousness objectively exists"
I claim that variants of antirealism have more to offer than realism. References to "true" and "objective" have implied contexts from which they must be considered, and without those contexts they hold no meaning. There is nothing that we can claim to be universally true or objective that does not have this dependency (including this very claim (meta-recursively...)). Sometimes this concept is stated as "we have no direct access to reality".
So from what basis can we evaluate "reality" (whatever that is)? We clearly are evaluating reality from within our dynamic existence, some of which we refer to as consciousness. But consciousness can't be fundamental, because its identification appears to depend upon itself performing the identification; and a description of consciousness appears to be incomplete in that it does not actually generate the consciousness it describes.
Extending this concept a bit: when we go looking for the "reality" that underpins our consciousness, we have to model it in terms of our experience, which is dependent upon... well, it depends on our consciousness and its dynamic dependence on "reality". Also, these models don't appear to generate the phenomena they describe, and so it appears that circular reasoning and incompleteness are fundamental to our experience.
Because of this I suggest that we adopt an epistemology that is based on the meta-recursive dependence of descriptions on dynamic contexts. Using an existing dynamic context (such as our consciousness) we can explore reality in the terms that are accessible from within that context. We may not have complete objective access to that context, but we can explore it and form models to describe it, from inside of it.
We can also form new dynamic contexts that operate in terms of the existing context, and these newly formed inner contexts can interact with each other in terms of dynamic patterns of the terms of the existing context. From our perspective we can only interact with our child contexts in the terms of the existing context, but the inner contexts may be generating internal experiences that are very different from those existing outside of them, based on the interaction of the dynamic patterns we have defined for them.
Inverting this perspective, then perhaps our consciousness is formed from the experiences generated from the dynamic patterns formed within an exterior context, and that context is itself generated from yet another set of interacting dynamic patterns... and so on. We could attempt to identify this nested set of relationships as its own ontology... only it may not actually be so well structured. It may actually be organized more like a network of partially overlapping contexts, where some parts interact strongly and other parts interact very weakly. In any case, our ability to describe this system will depend heavily on the dynamic perspective from which we observe the related phenomenon; and our perspective is of course embedded within the system we are attempting to describe.
I am not attempting to confuse the issues by pointing out how complex this can be. I am attempting to show a few things:
- There is no absolute basis, no universal truth, no center, no bottom layer... from our perspective which is embedded in the "stuff of reality". I make no claims about anything I don't have access to.
- Any ontology or epistemology will inherently be incomplete and circularly self-dependent, from some perspective.
- The generation of meaning and existence is dependent on dynamic contexts of evaluation. When considering meaning or existence it is best to consider them in the terms of the context that is generating them.
- Some models/ontologies/epistemologies are better than others, but the label "better" is dependent on the context of evaluation and is not fundamental.
- The joints that we are attempting to carve the universe at are dependent upon the context of evaluation, and are not fundamental.
- Meaning and existence are dynamic, not static. A seemingly static model is being dynamically generated, and stops existing when that modeling stops.
- Using a model of dynamic patterns, based in terms of dynamic patterns we might be able to explain how consciousness emerges from non-conscious stuff, but this model will not be fundamental or complete, it will simply be one way to look at the Whole Sort of General Mish Mash of "reality".
To apply this to your "principle of non-vagueness": there is no reason to expect that a mapping between pairs of arbitrary perspectives--between physical and phenomenological states in this case--is necessarily precise (or even meaningful). Because they are two different ways of describing arbitrary slices of "reality", they may refer to not-entirely-overlapping parts of "reality". Certainly physical and phenomenological states are modeled and measured in very different ways, so a great deal of uncertainty/vagueness caused by that non-overlap should be expected.
And this claim:
But as I have argued, not only must the true ontology be deeper than state-machine materialism, there is no way for an AI employing computational epistemology to bootstrap to a deeper ontology.
Current software is rarely programmed by directly modeling state machines. It may be possible to map the behavior of existing systems onto state machines, but that is not the perspective generally held by the programmers, or by the dynamically running software. The same is true for current AI, so from that perspective your claim seems a bit odd to me. The view that an AI can be mapped onto a state machine is itself based on a particular perspective on the AI involved, and in fact that mapping does not discount that the AI is implemented within the same "reality" that we are. If our physical configuration (from some perspective) allows us to generate consciousness, then there is no general barrier that should prevent AI systems from achieving a similar form of consciousness.
I recognize that these descriptions may not bridge our inferential gap; in fact they may not even properly encode my intended meaning. I can see that you are searching for an epistemology that better encodes your understanding of the universe; I'm just tossing in my thoughts to see if we can generate some new perspectives.
The contexts from which you identify "state-machine materialism" and "pain" appear to be very different from each other, so it is no surprise that you find no room for "pain" within your model of "state-machine materialism".
You appear to identify this issue directly in this comment:
My position is that a world described in terms of purely physical properties or purely computational properties does not contain qualia. Such a description itself would contain no reference to qualia.
Looking for the qualia of "pain" in a state-machine model of a computer is like trying to find out what my favorite color is by using a hammer to examine the contents of my head. You are simply using the wrong interface to the system.
If you examine the compressed and encrypted bit sequence stored on a DVD as a series of 0 and 1 characters, you will not be watching the movie.
If you don't understand the Russian language, then for a novel written in Russian you will not find the subtle twists of plot compelling.
If you choose some perspectives on Searle's Chinese room thought experiment you will not see the Chinese speaker, you will only see the mechanism that generates Chinese symbols.
So things like "qualia", "pain", "consciousness", and "electrons" only exist (hold meaning) from perspectives that are capable of identifying them. From other perspectives they are non-existent (have no meaning).
If you choose a perspective on "conscious experience" that requires a specific sort of physical entity to be present, then a computer without that will never qualify as "conscious", for you. Others may disagree, perhaps pointing out aspects of its responses to them, or how some aspects of the system are functionally equivalent to the physical entity you require. So, which is the right way to identify consciousness? To figure that out you need to create a perspective from which you can identify one as right, and the other as wrong.
there is no way for an AI employing computational epistemology to bootstrap to a deeper ontology.
This strikes me as probably true, but unproven.
It seems possible for an AI to engage in a process of search within the ontological Hilbert space. It may not be efficient, but a random search should make all parts of any particular space accessible, and a random search across a Hilbert space of ontological spaces should make other types of ontological spaces accessible, and a random search across a Hilbert space containing Hilbert spaces of ontological spaces should... and on up the meta-chain. It isn't clear why such a system wouldn't have access to any ontology that is accessible by the human mind.
However, regardless of all that, it seems to me that buying has some tremendous drawbacks, for which I can't see comparable upsides under any realistic circumstances.
Before I bought my house I ran the numbers and came to the same conclusion, that home ownership would not maximize my net worth and would increase certain types of risk. As a result I see home ownership as a luxury, not as an investment. I bought my house because I wanted it as a luxury and believed I could manage the risk.
JavaScript is fine as a first language. I consider it to be a better first language than the TRS-80 BASIC I started on.
Is it better to focus on one path, avoiding contamination from others?
Learning multiple programming languages will broaden your perspective and will make you a better and more flexible programmer over time.
Is it better to explore several simultaneously, to make sure you don't miss the best parts?
If you are new and learning on your own, you should focus on one language at a time. Pick a project to work on and then pick the language you are going to use. I like to code a Mandelbrot set image generator in each language I learn.
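To make that exercise concrete, here is a minimal sketch of the kind of Mandelbrot generator I mean, in Python (one of the languages recommended below); the character ramp and grid bounds are arbitrary choices for illustration:

```python
# Minimal Mandelbrot set renderer: the "first project" exercise described above.
# Uses escape-time iteration over a grid of complex points and prints ASCII art.

def mandelbrot_char(c, max_iter=50):
    """Return a character whose density reflects how quickly c escapes."""
    z = 0j
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return " .:-=+*#"[min(i, 7)]
    return "@"  # point appears to be in the set

def render(width=60, height=24):
    rows = []
    for y in range(height):
        im = -1.2 + 2.4 * y / (height - 1)
        row = ""
        for x in range(width):
            re = -2.0 + 3.0 * x / (width - 1)
            row += mandelbrot_char(complex(re, im))
        rows.append(row)
    return "\n".join(rows)

if __name__ == "__main__":
    print(render())
```

The same small program exercises arithmetic, control flow, string handling, and I/O, which is why it makes a useful first project in an unfamiliar language.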
Which one results in converting time to dollars the most quickly?
If you make your dollars only from the finished product, then pick the language with the highest productivity for your target platform and problem domain. This will probably be a garbage-collected language with a clean syntax, a good integrated development environment, and a large available set of libraries.
Right now this will probably be Python, Java or C#.
If you make your dollars by producing lines of code for a company, then you will want to learn a language that is heavily used. There is generally a large demand for C++, C#, Java, Python, and PHP programmers. Companies in certain domains will focus on other languages like Lisp, Smalltalk and Ada.
Which one most reliably converts you to a higher value programmer over a longer period of time?
No single language will do this in the long run, but you might take temporary advantage of the current rise of Python, or the large install base of Java and C++.
For a broad basic education I suggest:
- Learn a functional language. Haskell is my first choice; Lisp is my second choice.
- Learn an object oriented language. Smalltalk has the best OO representation I have come across.
- Learn a high level imperative language. Based on growth, Python appears to currently be the best choice; Java would be my second choice.
- Learn an assembly language. Your platform of choice.
If you want to do web-related development:
- HTML, CSS, JavaScript.
- SQL and relational DB.
- XML, XSD, and XSLT.
- C#.NET, Java, Python or PHP.
If you want to do engineering related development:
- C and C++.
- Perl
- SQL
- Mathematica or Matlab
- for some domains, LabVIEW
These are well targeted critiques, and are points that must be addressed in my proposal. I will address these critiques here while not claiming that the approach I propose is immune to "bad design".
There is a high cognitive cost to learning a language.
Yes, traditional general purpose languages (GPLs) and many domain specific languages (DSLs) are hard to learn. There are a few reasons I believe this cost can be allayed by the approach I propose. The DSLs I propose are (generally) small, composable, heavily reused, and interface oriented, which is probably very different from the GPLs (and perhaps DSLs) from your experience. Also, I will describe what I call the encoding problem and map it between DSLs and GPLs to show why well-chosen DSLs should be better.
In my model there will be heavy reuse of small (or even tiny) DSLs. The DSLs can be small because they can be composed to create new DSLs (via transparent implementations, heavy use of generics, transformation, and partial specialization). Composition allows each DSL to deal with a distinct and simple concern and yet be combined with the others. Reuse is enhanced because many problem domains, regardless of their abstraction level, can be effectively modeled using common concerns: for example, functions, Boolean logic, control structures, trees, lists, and sets. Cross-cutting concerns can be handled using the approaches of aspect-oriented programming.
The small size of these commonly used DSLs and their focused concerns make them individually easy to learn. The heavy reuse provides good leveraging of knowledge across projects and across scales and types of abstraction. Learning how to program with a large number of these DSLs will probably be equivalent in effort to learning a new GPL.
In my model DSLs are best thought of as interfaces, where the interface is customized to provide an efficient and easily understood method of manipulating solutions within the problem domain. In some cases this might be text-based interfaces such as those we commonly program in now, but it could also be graphs, interactive graphics, sound, touch, or EM signals; really, any form of communication. The method and structure of communication is constrained by the interface, and is chosen to provide a useful (and low noise) perspective into the problem domain. Text-based languages often come with a large amount of syntactic noise. (Ever try template-based metaprogramming in C++? Ack!)
Different interfaces (DSLs) may provide different perspectives into the same solution space of a problem domain. For example a graph, and the data being graphed: the underlying data could be modified by interacting with either interface. The choice of interface will depend on the programmer's intention. This is also related to the concept of projectional editors, and can be enhanced with concepts like Example Centric Programming.
The encoding problem is the problem of transforming an abstract model (the solution) into code that represents it properly. If the solution is coded in a high-level DSL, then the description of the model that we create while thinking about the problem and talking to our customers might actually represent the final top-level code. In this case the cognitive cost of learning the DSL is the same as understanding the problem domain, and the cost of understanding the program is that of understanding the solution model. For well-chosen DSLs the encoding problem will be easy to solve. In the case of general purpose languages the encoding problem can add arbitrary levels of complexity. In addition to understanding the problem domain and the abstract solution model, we also have to know how these are encoded into the general purpose language. This adds a great deal of learning effort even if we already know the language, and even if we find a library that allows us to code the solution relatively directly. Perhaps worse than the learning cost is the ongoing mental effort of encoding and decoding between the abstract models and the general purpose implementation. We have to be able to understand and modify the solution through an additional layer of syntactic noise. The extra complexity, the larger code size, and the added cognitive load imposed by using general purpose languages multiplies the likelihood of bugs.
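A small sketch may make the encoding problem more concrete. The domain below (scheduling rules) and all the names in it are invented purely for illustration; this is an internal DSL embedded in Python, not the full external-DSL ecosystem I am proposing, but it shows how a well-chosen DSL lets the top-level code read almost like the domain description itself:

```python
# A toy internal DSL for calendar-rule descriptions (hypothetical domain).
# The goal: the top-level expression should encode the abstract model
# directly, so the "encoding problem" nearly disappears.
import datetime

class Rule:
    """A predicate over days, composable with & (and) and | (or)."""
    def __init__(self, test):
        self.test = test
    def __and__(self, other):
        return Rule(lambda day: self.test(day) and other.test(day))
    def __or__(self, other):
        return Rule(lambda day: self.test(day) or other.test(day))
    def matches(self, day):
        return self.test(day)

def weekday():
    return Rule(lambda day: day.weekday() < 5)  # Monday..Friday

def in_month(month):
    return Rule(lambda day: day.month == month)

# The domain statement "weekdays in June or July" encodes directly:
summer_weekdays = weekday() & (in_month(6) | in_month(7))

print(summer_weekdays.matches(datetime.date(2011, 6, 6)))  # a Monday in June
```

Written against a raw general purpose API, the same rule would be scattered through conditionals and date arithmetic; the reader would have to decode the abstract model back out of that syntactic noise.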
There is a high engineering cost to making different languages play nice together -- you need to figure out precisely what happens to types, synchronization, etc etc at the boundaries.
Boundary costs can be common and high even if you are lucky enough to get to program exclusively in a single general purpose language. Ever try to use functions from two different libraries on the same data? Image processing libraries and math libraries are notorious for custom memory representations, none of which seem to match my preferred representation of the same data. Two GUI libraries or stream I/O libraries will clobber each other's output. The costs (both development-time and run-time) to conform disparate interfaces in general purpose languages are outrageous. My proposal just moves these boundary costs to new (and perhaps unexpected) places while providing tools (DSLs for composition and transformation) that ease the effort of connecting the disparate interfaces.
I suspect that breaking programs into pieces that are defined in terms of separate languages is lousy engineering.
I've described my proposal as a perspective shift, and "interface" might be a better term than "language". To shift your perspective, consider the interfaces you have to your file system. You may have a command line interface to it, a GUI interface, and a programmatic interface (in your favorite language). You choose the appropriate interface based on the task at hand. The same is true for the interfaces I propose. You could use the file system in a complex way to perform perfectly good source code control, or you could rely on the simpler interface of a source control system. The source control system itself might simply rely on a complex structuring of the file system, but you don't really care how it works as long as it is easy to use and meets your needs. You could use CSV text files to store your data, but if you need to perform complex queries a database engine is probably a better choice.
We already break programs (stuff we do) into pieces that are defined in terms of separate languages (interfaces), and we consider this good engineering. My proposal is about how to successfully extend this type of separation of concerns to its granular and interconnected end-point.
Among other things, traditional unix shell programming has very much this flavor -- a little awk, a little sed, a little perl, all glued together with some shell. And the outcome is usually pretty gross.
Your UNIX shell programming example is well placed. It is roughly a model that matches my proposal with connected DSLs, but it is not a panacea (perhaps far from it). I will point out that the languages you mention (awk, sed, and perl) are all general purpose (Turing-complete) text-based languages, which is far from the type of DSL I am proposing. Also, the shell limits interaction between DSLs to character streams via pipes. This representation of communication rarely maps cleanly to the problem being solved, forcing the implementations to compensate. This generates a great deal of overhead in terms of cognitive effort, complexity, cost ($, development time, run-time), and in some sense a reduction of beauty in the Universe.
To highlight the difference between shell programming and the system I'm proposing, start with the shell programming model, but in addition to character streams add support for the communication of structured data, and in addition to pipes add new communication models like a directed graph communication model. Add DSLs that perform transformations on structured data, and DSLs for interactive interfaces. Now you can create sophisticated applications such as syntax-sensitive editors while programming at a level that feels like scripting or perhaps like painting; and given the composability of my DSLs, the parts of this program could be optimized and specialized (to the hardware) together to run like a single, purpose-built program.
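A minimal sketch of the "structured data instead of character streams" point, in Python; the stage names (`where`, `select`, `pipe`) are hypothetical and chosen only to echo shell filters:

```python
# Shell-style composition over structured records rather than flat text.
# Each "DSL" is a generator stage; pipes become ordinary composition, and
# no stage has to re-parse a character stream at its boundary.

def source(records):
    for r in records:
        yield r

def where(pred):
    """Like grep, but over records instead of lines."""
    def stage(stream):
        for r in stream:
            if pred(r):
                yield r
    return stage

def select(*fields):
    """Like cut, but by field name instead of column position."""
    def stage(stream):
        for r in stream:
            yield {f: r[f] for f in fields}
    return stage

def pipe(stream, *stages):
    for s in stages:
        stream = s(stream)
    return stream

people = [
    {"name": "ada", "lang": "analytical engine", "year": 1843},
    {"name": "grace", "lang": "COBOL", "year": 1959},
    {"name": "guido", "lang": "Python", "year": 1991},
]

result = list(pipe(source(people),
                   where(lambda r: r["year"] > 1900),
                   select("name", "lang")))
print(result)
```

Because the boundaries carry structure, the stages compose without the parse/serialize overhead that shell pipelines impose at every pipe.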
Thank you for the reference to STEPS; I am now evaluating this material in some detail.
I would like to discuss the differences and similarities I see between their work and my perspective; are you familiar enough with STEPS to discuss it from their point of view?
In reply to this:
Or by making a really convenient DSL factory. The only use for your "general purpose" language would be to write DSLs.
This use of a general purpose language also shows up in the current generation of language workbenches. For example JetBrains' Meta Programming System uses a Java-like base language, and Intentional Software uses a C# (like?) base language.
My claim is that this use of a base general purpose language is not necessary, and possibly not generally desirable. With an ecosystem of DSLs general purpose languages can be generated when needed, and DSLs can be generated using only other DSLs.
Visual programming is great where the visual constructs map well to the problem domain. Where it does not apply well it becomes a burden to the programmer. The same can be said about text based programming. The same can be said about programming paradigms. For example object oriented programming is great... when it maps well to the problem being solved, but for other problems it simply sucks and perhaps functional programming is a better model.
In general, programming is easy when the implementation domain (the programming language, abstract model, development environment, other tools) maps well to the problem domain. When the mapping becomes complex and obscure, programming becomes hard.
You will not find a single approach to programming that is easy for all problems, instead you will find that each approach has its limits.
My current project is to catalyze a new perspective on programming. I believe that we should be programming using an ecosystem of domain specific languages. Each language will be arbitrarily simple (easy to learn) and well targeted to representing solutions within its target problem domain. Although none of the languages are individually Turing-complete, Turing-completeness is available within the ecosystem by combining programs written in different languages together using other languages.
When I use the term language I mean it in its most general sense, along the lines of this definition "a systematic means of communicating ideas or feelings by the use of conventionalized signs, sounds, gestures, or marks having understood meanings". Perhaps a better word than language would be interface.
Programming from this perspective becomes the generation of new interfaces by composing, transforming, and specializing existing interfaces, using existing interfaces.
This perspective on programming is related to language-oriented programming, intentional programming, aspect-oriented programming, and model-driven engineering.
Quirrell storming into the trial when the majority of the audience believes him to be the one behind everything sounds quite like this story's style.
The trouble with this theory is that the arc is confirmed to last until chapter 84, and Quirrell being suddenly released from custody would be far too short of a resolution.
It is surprising that Quirrell would accidentally reveal himself as an impostor during interrogation; so, perhaps the Quirrell currently in custody is an impostor--meaning that he is not the Quirrell currently teaching at Hogwarts. If so, the impostor is there to give Quirrell time to do something else. He may be attempting to prove Hermione's innocence (even if he is to blame for the current situation), or he may also be after the Philosopher's stone.
Added entry for Portland to the wiki.
Created a Google Group LessWrong Portland.
That is a good place to meet. With no other suggestions, this should be the plan.
I'll try to be there.
Edit: I've cleared my conflict and now plan to make it.
Thank you. Very applicable to my current work.
I think your argument involves reflection somewhere. The desk calculator agrees that 2+2=4, and it's not reflective. Putting two pebbles next to two pebbles also agrees.
Agreement with statements such as 2+2=4 is not a function that desk calculators perform. It is not the function performed when you place two pebbles next to two pebbles.
Agreement is an evaluation performed by your mind from its unique position in the universe.
... this implies there is something to be converged upon.
The conclusion that convergence has occurred must be made from a context of evaluation. You make observations and derive a conclusion of convergence from them. Convergence is a state of your map, not a state of the territory.
Mathematical realism also explains my observations and operates entirely within the mathematical universe; ...
Mathematical realism appears to confuse the map for the territory -- as does scientific realism, as does physical realism.
When I refer to physical reality or existence I am only referring to a convenient level of abstraction. Space, time, electrons, arithmetic, these all are interpretations formed from different contexts of evaluation. We form networks of maps to describe our universe, but these maps are not the territory.
Gottlob Frege coined the term context principle in his Foundations of Arithmetic, 1884 (translated). He stated it as "We must never try to define the meaning of a word in isolation, but only as it is used in the context of a proposition."
I am saying that we must never try to identify meaning or existence in isolation, but only as they are formed by a context of evaluation.
When you state:
Putting two pebbles next to two pebbles also agrees.
I look for the context of evaluation that produces this result -- and I recognize that the pebbles and agreement are states formed within your mind as you interact with the universe. To believe that these states exist in the universe you are interacting with is a mind projection fallacy.
Your conclusion on sheep is a physical state in your mind, generated by physical processes. But the sheep still exist outside of your mind.
Restating my claim in terms of sheep: The identification of a sheep is a state change within a context of evaluation that implements sheep recognition. So a sheep exists in that context.
Physical reality however does not recognize sheep; it recognizes and responds to physical reality stuff. Sheep don't exist within physical reality.
"Sheep" is at a different meta-level than the chain of physical inference that led to that classification.
That "truth" in the map doesn't imply truth in the territory, I accept. That there is no truth in the territory, I vehemently reject.
"Truth" is at a different meta-level than the chain of physical inference that led to that classification. There is no requirement that "truth" is in the set of stuff that has meaning within the territory.
When you look at the statement 2+2=4 you think some form of "hey, that's true". When I look at the statement, I also think some form of "hey, that's true". We can then talk and both come to our own unique conclusion that the other person agrees with us. This process does not require a metaphysical arithmetic; it only requires a common context.
For example we both have a proximal existence within the physical universe, we have a communication channel, we both understand English, and we both understand basic arithmetic. These types of common contexts allow us to make some very practical and reasonable assumptions about what the other person means.
Common contexts allow us to agree on the consequences of arithmetic.
The short summary is that meaning/existence is formed by contexts of evaluation, and common contexts allow us to communicate. These processes explain your observations and operate entirely within the physical universe. The concept of metaphysical existence is not needed.
I am arguing against your concept "that truth exists outside of any implementation".
My claim is that "truth" can only be determined and represented within some kind of truth evaluating physical context; there is nothing about the resulting physical state that implies or requires non-physical truth.
As stated here
Our minds are not transparent windows unto veridical reality; when you look at a rock, you experience not the rock itself, but your mind's representation of the rock, reconstructed from photons bouncing off its surface.
To your question:
If that is so, then how come others tend to reach the same truth?
These others are producing physical artifacts such as writing or speech, which through some chain of physical interactions eventually trigger state changes in your brain. At a higher meta-level, you are taking multiple forms of observations, transforming them within your brain/mind and then comparing them... eventually concluding that "others tend to reach the same truth". Another mind with its own unique perspective may come to a different conclusion, such as "Fred is wearing a funny hat."
Your conclusion on truth is a physical state in your mind, generated by physical processes. The existence of a metaphysical truth is not required for you to come to that conclusion.
Therefore there is some sense in which the theorems are inherent in the (axioms + deduction rules): there is a truth about what those (axioms + deduction rules) lead to, and that truth exists outside of any implementation.
You are experiencing a mind projection fallacy.
The theorems don't exist unless an implementation produces them and once produced they only exist within a context that can represent them.
In the same way, the truth you refer to is generated by and exists within your mind. It has no existence outside of that implementation.
Relative rate of thinking. The universe may appear to be very different to very fast or slow thinkers relative to humans.
I have the same problem with the same version of chrome, including the weird graphical bugs.
But is it analogous to the halting problem?
By explaining your reasons for posting to this site you may get feedback suggesting how to better use this site to achieve your goals.
No, in the sense that it directly applies to all types of knowledge (which any epistemology applies to -- which I think is all of them, but that doesn't matter to universality).
Perhaps I don't understand some nuance of what you mean here. If you can explain it or link to something that explains this in detail I will read it.
But to respond to what I think you mean... If you have a method that can be applied to all types of knowledge, that implies it is Turing complete; it is therefore equivalent in capability to other Turing-complete systems; that also means it is susceptible to the infinite regresses you dislike in "justificationist epistemologies"... i.e. the halting problem.
Also, just because it can be applied to all types of knowledge does not mean it is the best choice for all types of knowledge, or for all types of operations on that knowledge.
I think the basic way we differ is you have despaired of philosophy getting anywhere, and you're trying to get rigor from math. But Popper saved philosophy. (And most people didn't notice.) Example:
I would not describe my perspective that way; you may have forgotten that I am a third party in this argument. I think that there is a lot of historical junk in philosophy and that it is continuing to produce a lot of junk -- Popper didn't fix this and neither will Bayesianism, it is more of a people problem -- but philosophy has also produced and is producing a lot of interesting and good ideas.
I think one way we differ is that you see a distinct difference between math and philosophy and I see a wide gradient of abstractions for manipulating information. Another is that you think that there is something special about Popper's approach that allows it to rise above all other approaches in all cases, and I think that there are many approaches and that it is best to choose the approach based on the context.
With this you could start to answer questions like "Why is X moral in the UK but not in Saudi Arabia?"
You have very limited ambitions. You're trying to focus on small questions b/c you think bigger ones like: what is moral objectively? are too hard and, since your math won't answer them, it's hopeless.
This was a response to your request for an example; you read too much into it to assume it implies anything about my ambitions.
A question like "what is moral objectively?" is easy. Nothing is "moral objectively". Meaning is created within contexts of assessment; if you want to know if something is "moral" you must consider that question with a context that will perform the classification. Not all contexts will produce the same result and not all contexts will even support a meaning for the concept of "moral".
Sorry. I have no idea who is who. Don't mind me.
No problem, I'm just pointing out that there are other perspectives out here.
The Popperian method is universal.
Sure, in the sense it is Turing complete; but that doesn't make it the most efficient approach for all cases. For example I'm not going to use it to evaluate the expression "2 + 3"; it is much more efficient for me to use the arithmetic abstraction.
But we don't know how to make it do that stuff. Epistemology should help us.
Agreed, it is one of the reasons that I am actively working on epistemology.
Aspects of coming up with moral ideas and judging which ones are good would probably be accomplished well with Bayesian methods.
Example or details?
The naive Bayes classifier can be an effective way to classify discrete input into independent classes. Certainly for some cases it could be used to classify something as "good" or "bad" based on example input.
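As a sketch of that claim, here is a toy naive Bayes classifier labeling described actions as "good" or "bad". The training data and word features are invented purely for illustration; real moral classification would of course need far richer features and far more data:

```python
# Toy naive Bayes classifier over word features, with Laplace smoothing.
# Training examples are invented for illustration only.
from collections import Counter
import math

def train(examples):
    """examples: list of (words, label). Returns label counts and per-label word counts."""
    labels = Counter(label for _, label in examples)
    word_counts = {label: Counter() for label in labels}
    for words, label in examples:
        word_counts[label].update(words)
    return labels, word_counts

def classify(words, labels, word_counts, vocab_size):
    total = sum(labels.values())
    best, best_score = None, -math.inf
    for label, count in labels.items():
        score = math.log(count / total)  # log prior
        denom = sum(word_counts[label].values()) + vocab_size
        for w in words:
            # Laplace (+1) smoothing so unseen words don't zero out the score.
            score += math.log((word_counts[label][w] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

examples = [
    (["help", "share", "donate"], "good"),
    (["share", "teach"], "good"),
    (["steal", "lie"], "bad"),
    (["lie", "cheat", "steal"], "bad"),
]
labels, word_counts = train(examples)
vocab = {w for words, _ in examples for w in words}
print(classify(["share", "help"], labels, word_counts, len(vocab)))  # -> good
```

The independence assumption is exactly the limitation noted above: for interdependent moral positions a Bayesian network, which models the dependencies explicitly, is the better fit.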
Bayesian networks can capture the meaning within interdependent sets. For example the meaning of words forms a complex network; if the meaning of a single word shifts it will probably result in changes to the meanings of related words; and in a similar way ideas on morality form connected interdependent structures.
Within a culture a particular moral position may be dependent on other moral positions, or even other aspects of the culture. For example a combination of religious beliefs and inheritance traditions might result in a belief that a husband is justified in killing an unfaithful wife. A Bayesian network trained on information across cultures might be able to identify these kinds of relationships. With this you could start to answer questions like "Why is X moral in the UK but not in Saudi Arabia?"
Yes, given moral assertions you can then analyze them. Well, sort of. You guys rely on empirical evidence. Most moral arguments don't.
First of all, you shouldn't lump me in with the Yudkowskyist Bayesians. Compared to them and to you I am in a distinct third party on epistemology.
Bayes' theorem is an abstraction. If you don't have a reasonable way to transform your problem to a form valid within that abstraction then of course you shouldn't use it. Also, if you have a problem that is solved more efficiently using another abstraction, then use that other abstraction.
This doesn't mean that Bayes' theorem is useless, it just means there are domains of reasonable usage. The same will be true for your Popperian decision making.
You can't create moral ideas in the first place, or judge which are good (without, again, assuming a moral standard that you can't evaluate).
These are just computable processes; if Bayesianism is in some sense Turing complete then it can be used to do all of this; it just might be very inefficient when compared to other approaches.
Aspects of coming up with moral ideas and judging which ones are good would probably be accomplished well with Bayesian methods. Other aspects should probably be accomplished using other methods.
To take one issue, besides predicting the physical results of your actions you also need a way to judge which results are good or bad. That is moral knowledge. I don't think Bayesianism addresses this well.
Given well defined contexts and meanings for good and bad, I don't see why Bayesianism could not be effectively applied to moral problems.
Adding a reference for this comment: Münchhausen Trilemma.
And this led me to wonder if it really is mostly about community, experiences, relationships, wanting to provide imagined "snapshots" of parties and fun for our kids as they go through these various rituals, etc.
Yes, of course that is what it is about. Due to past survival advantages these social conventions and connections are tied to our sense of security. By trying to convince her that her faith is wrong, from her perspective you threaten her safety and the safety of her children.
Fortunately you are not constrained by WWJD and can engage in some instrumental rationality.
Explicitly identify your goals and rank them. Do you want to achieve your own peace on the topic? Do you want to convince your wife that her faith is wrong? Do you want to stay in this marriage? Do you want your children to grow up as atheists? Ranking your goals is important; you may have to make short term compromises to achieve greater long term successes.
Identify behavior that will help or hurt these goals. If you want your wife to feel secure in the marriage you may have to avoid telling her why her religious beliefs are misguided. If you want to maximize your influence over your children's beliefs you may have to negotiate with your wife; if they go to church with her then perhaps they also get matching rationality training from you.
Behave purposefully; have a goal in mind when you interact with your wife and with other people. When you have a goal in mind it is easier to avoid defensive reactions and much more likely that you will achieve the desired result.
The only issue I see with TSH vs. god is that god has been defined as something that is outside time/space, omni-max, etc.
Actually, you may not be aware that mayonnaise is critical to universe creation. Since God does not contain mayonnaise the God hypothesis is less plausible than the TSH.
So you claim that existing outside space and time is necessary for the creation of the universe and I claim that mayonnaise is necessary. Do either of these claims allow us to select between the theories? I don't see how; but by adding these additional requirements we increase the complexity of the theories and reduce their relative likelihood within the set of unfalsifiable theories.
Christian apologists can make compelling arguments because in the realm of made-up-stuff there is plenty that appeals to our cognitive biases. I agree that existing outside of space and time feels like a better property of a universe creator than containing mayonnaise; but that feeling is based from our very human perspective and not from any actual knowledge about how the universe came to be the way we see it now.