Posts
Comments
Edit: I dug through OP's post history and found this thread. The thread gives better context to what /u/reguru is trying to say.
A tip: very little is gained by couching your ideas in this self-aggrandizing, condescending tone. Your over-reliance on second person is also an annoying tic, though some make it work. You don't, however.
You come off as very arrogant and immature, and very few people will bother wading through this. The few that will do it only in hopes of correcting you.
If you're at all interested in improving the quality of your writing, consider, at the very least, reading a few other top level, highly upvoted posts. They do not have these problems, and you'd be served by emulating them.
Reality is arational. Everything you do is arational.
"Reality is arational." is an easily defensible position, though it would take some work to make an idea worth entertaining out of it.
"Everything you do is arational." is flatly solipsistic and useless. You must agree that words have meaning, if only subjunctively, by your usage of them. 'Rational' means something, and it describes behavior. Behavior is goal-directed, and can be judged by how well it achieves those goals. That is what bare rationalism is. If you disagree with this, you'll need better justifications.
You aren't aware of it because you lack awareness. By becoming aware that you are unaware, you have increased your awareness.
Contradiction can be used for effect, but always err on the side of 'don't do it'. Your work is better served rigorous than poetic.
Yet still, you will always lack awareness. ... My definition of awareness is the subjective experience of separating thoughts from awareness. You can become aware of thoughts, and if an "I" thought appears, that was not you, you simply became aware of it.
Y'know, despite myself, I found this passage genuinely pleasing on an aesthetic level. It's a mess of negation and recursion and strange loops that I can only compare to the bizarre logic of time travel, or perhaps the descriptive amalgams of cosmic horror. This is not a compliment.
You seem to be equating awareness with at least four different things, three if that was supposed to be a recursive definition.
1) awareness as total self-knowledge ("you will always lack awareness") Since this is pure armchair speculation anyway, I'm sure the mere existence of quines makes "You will never reach total awareness" false as a theoretical proposition.
2) awareness as consciousness/the self ("separating thoughts from awareness")
3) awareness as noticing something ("You can become aware of thoughts,")
4) your own definition
My point is that I think that you confuse the map for the territory. Now I made the same mistake, because "the map not being the territory" is a map. In all actuality, all types of communication are, and equally untrue.
Solipsistic and useless.
The way I see it is that reality is the way it is and it is arational. Gravity does not exist. We may create a layer on top of arational reality and call it reality, while in all actuality it is a virtual reality.
Useless and solipsistic.
It is simply a human projection on top of the arational reality. Arationality is completely independent of reasoning, everything rational and irrational exists within a matrix (virtual reality) of the arational.
Do I need to say it?
It's fine to do physics, math or other science but it is still a human projection.
I think this is false. Mathematics is interesting precisely because of its non-humanity. The joy of doing mathematics is incommensurate with an imagination of the joy of doing mathematics. The missing ingredient, of course, is the unknown, of discovering something outside yourself.
To call it a human projection is to miss the entire point of performing these actions in the first place, which is curiosity, exploring the unknown.
You might think that there is no alternative to using maps (like I do here) but I am simply pointing out that you can discover arational reality without creating another map to point out its existence.
The "map/territory" dichotomy is just another map, as you yourself said. In reality, there are only atoms and the void. Self/other, subject/object are all a part of reality itself, and the delineation is only useful, never necessary.
If you want to find out for yourself, what happens when you become silent of all thoughts? Does reality disappear?
https://en.wikipedia.org/wiki/Flow_(psychology)
The point is that you can sit down, become aware of all the maps
you cannot
Because it is an illusion. The illusion that some maps are better than others when they are all the same from the perspective of the arational.
The arational has no perspective, because it is not the type of thing to have perspectives. Reality has no mind, no agency.
Reality is, however, patterned and models exploit this patterning.
Suppose one person (call her Alice) chooses to act as if some models are better than other models, while another person (call him Bob) chooses not to do this. One may object to using words like 'true' or 'accurate' to describe their approaches, but there is a certain quality the former has that the latter does not. The former may make a habit of ingesting certain objects, or performing pointless tasks for useless trinkets. The latter would object that 'hunger' and 'money' are just models and no model is better than another.
These approaches lead to certain outcomes. Again, one might not like describing one as 'true' and the other as 'false', but there is a certain pattern to be found there.
What's the point of this post? It's an invitation, you have to figure it out yourself.
While I'm sure there are many people here who enjoy puzzles, obscurantism is frowned upon.
The social contract of lesswrong is the opposite of your epigram. "What's the point of this post?" is something you have to figure out on your own. It's not our job, but yours. I don't doubt you have some insight here. I'm sure it could even be couched in a post fit for this community. But you have to do the job of filtering your thoughts, crafting your posts, and hoping against hope you didn't make an embarrassing mistake.
Finally, I apologize for the combative tone of this post. It was written out of sympathy rather than disgust or disrespect. (At the very least, notice that if being offensive were my aim, I could have done a better job of it.)
Cognitive psychologists generally make better predictions about human behavior than neuroscientists.
I grant you that; my assertion was one of type, not of degree. A predictive explanation will generally (yes, I am retracting my 'almost always' quantifier) be reductionist, but this is a very different statement than saying the most reductionist explanation will be the best.
Here it seems to me like you think about philosophy as distinct from empirical reality.
Less 'distinct' and more 'abstracted'. To put it as pithily (and oversimplified) as possible: empiricism is about what is (probably) true; philosophy is about what is (probably) necessarily true.
I could be more precise and accurate about my own thoughts here, but philosophy is one of those terms where if you ask ten different people you'll get twelve different answers. The relation between philosophy and empirical reality depends on what 'philosophy' is.
To me your post didn't feel inaccurate but confused.
I think confusion is inaccuracy at the meta level.
And besides that, I actually felt when writing that post that I was repeating 'I was confused' to the point of parody. Illusion of transparency, I suppose.
A mix of saying trivial things and throwing around terms where I don't know exactly what you mean
I'm sorry for being ambiguous, but you'll have to be more precise about what I'm being ambiguous about. I can't be clear about my terminology without knowing where I'm being unclear.
I'm not sure whether you have thought about what you mean exactly either.
I don't think it's worth debating what I meant when I don't mean it anymore.
You can also make great predictions based on the belief that the function of the heart is pumping blood, even if there are no "function-atoms" around.
It's not clear what you're saying here. If you're talking about why the heart pumps blood instead of doing something else, that requires a historical explanation, a 'why is it like this instead of like that' and presumes the heart was optimized for something, and would have been optimized for something else if something had willed it.
If this is what you're saying then yeah, the explanation will not be reductionist.
If you're saying you can predict the broad strokes of what the heart will do without reducing all the way to the level of 'function atoms' then I completely agree. The space of explanations of reality at the level of atoms is large enough that even if most of them don't even vaguely resemble reality there still isn't enough motivation or information to exhaust the search space. Incomplete reductions are fine until there's motivations for deeper explanations.
If you weren't saying either of these things, then I've misunderstood you.
Mary is presumed to have all objective knowledge and only objective knowledge. Your phrasing is ambiguous and therefore doesn't address the point.
The behavior of the neurons in her skull is an objective fact, and this is what I was referring to. Apologies for the ambiguity.
When you say Mary will know what happens when she sees red, do you mean she knows how red looks subjectively, or she knows something objective like what her behaviour will be
The latter. The former is purely experiential knowledge, and, as I have repeatedly said, is contained in a superset of verbal (what you call 'objective') knowledge but is disjoint from the set of verbal ('objective') knowledge itself. This is my box metaphor.
Is that supposed to relate to the objective/ subjective distinction somehow?
Yes. Assuming the Gödel encoding is fixed, [the metaphor is that] any and all statements of PA are experiential knowledge (an experience, in simple terms); non-Gödel statements of PA are purely experiential knowledge, the redness of red, say; and finally the Gödel statements of PA are verbal knowledge, or 'objective knowledge' in your terminology.
Despite not being Gödel statements in the encoding, the second item in the above list is still mathematical, and the redness of red is still physical.
So? The overall point is about physicalism, and to get to 'physicalism is false', all you need is the existence of subjective knowledge, not its usefulness in making predictions. So again, I don't see the relevance.
What does this knowledge do? How do we tell the difference between someone with and without these 'subjective experiences'? What definition of knowledge admits it as valid?
I think your post seems to have been a reply to me. I'm the one who still accepts physicalism. AncientGreek is the one who rejects it.
Whose idea of reductionism are you criticising? I think your post could get more useful by being more clear about the idea you want to challenge.
Hmm.
I think this is the closest I get to having a "Definition 3.4.1" in my post
...the other reductionism I mentioned, the 'big thing = small thing + small thing' one...
Essentially, the claim is that to accurately explain reality, non-reductionist explanations aren't always wrong.
The confusion, however, that I realized elsewhere in the thread, is that I conflated 'historical explanation' with 'predictive explanation'. A good predictive explanation will almost always be reductionist because, as it says on the tin, big things are made of smaller things. Good historical explanations, though, will be contra-reductionist; they'll explain a phenomenon in terms of its relation to the environment. Consider evolution: a gene seems to be explained non-reductionistically because its presence or absence is determined by its effect on the environment, i.e. whether it's fit, so the explanation of how it got there necessarily includes complex things, because they cause it.
I also get the feeling that you use bailey in a context where motte would be the right word.
Right you are. Pretty embarrassing, really.
I've edited the OP with this in mind, but it's somewhat pointless, as the thesis is no longer supported IMO.
Apart from that, I don't know what you mean with theory in "Reductionism is a philosophy, not a theory." As a result of you using a bunch of terms where I don't know exactly what you mean, it's hard to follow your argument.
Artifact of confusion; if contra-reductionism is a valid platform for explanation, then the value of reductionism isn't constative -- that is, it isn't about whether it's true or false, but something at the meta-level, rather than the object level. The antecedent is no longer believed, so now I do not believe the consequent.
The conceit I had by calling it a philosophy, or more accurately, a perspective, is essentially that you have a dataset, and then you can apply a 'reductionist' filter on it to get reductionist explanations and a 'contra-reductionist' filter to get contra explanations. This was a confusion, and it only seemed reasonable because I was treating the two types of explanation -- historical and predictive -- as somehow equivalent, which I now know to be mistaken.
P.S, I've added most of this comment to the OP so future readers know my revised opinion on the accuracy of this post. If you object to this tell me.
That's what I mean by complexity, yeah.
I don't know if I made this clear, but the point I make is independent of which high-level principles explain things, only that they are high-level. The ancestors that competed across history to produce the organism of interest are not small parts making up a big thing, unless you subscribe to a causal reductionism where you use causes instead of internal moving parts. But I don't like calling this reductionism (or even a theory, really) because it's, as I said, a species of causality, broadly construed.
[Why You Don't Think You're Beautiful](http://skepticexaminer.com/2016/05/dont-think-youre-beautiful/)
Mary's room seems to be arguing that,
[experiencing(red)] =/= [experiencing(understanding([experiencing(red)]))]
(translation: the experience of seeing red is not the experience of understanding how seeing red works)
This is true when we take those statements literally. But it's true in the same sense that a Gödel encoding of a statement in PA is not literally that statement. It is just a representation, but the representation is exactly homomorphic to its referent. Mary's representation of reality is presumed complete ex hypothesi, therefore she will understand exactly what will happen in her brain after seeing color, and that is exactly what happens.
You wouldn't call a statement of PA that isn't literally a Gödel encoding of a statement (for some fixed encoding) a non-mathematical statement. For one, because that statement has a Gödel encoding by necessity. But more importantly, even though the statement technically isn't literally a Gödel encoding, it's still mathematical regardless.
Mary knows how she will respond to learning what red is like. Mary knows how others will respond. This exhausts the space of possible predictions that could be made on behalf of this subjective knowledge, and it can be done without it.
What Mary doesn't know must be subjective, if there is something Mary doesn't know. So the eventual point is that there is more to knowledge than objective knowledge.
Tangentially to this discussion, but I don't think that is a wise way of labeling that knowledge.
Suppose Mary has enough information to predict her own behavior. Suppose she predicts she will do x. Could she not, upon deducing that fact, decide to not do x?
Mary has all objective knowledge, but certain facts about her own future behavior must escape her, because any certainty could trivially be negated.
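The trick here is ordinary diagonalization, and it can be sketched in a few lines of Python (the names are hypothetical, just to make the point concrete):

```python
# Hypothetical sketch: an agent that consults a perfect prediction of its own
# behavior and then negates it. No predictor can be right about such an agent.
def contrary_mary(prediction: bool) -> bool:
    """Given the predicted action, perform the opposite one."""
    return not prediction

# Whatever a predictor announces, the actual behavior differs from it:
for predicted in (True, False):
    assert contrary_mary(predicted) != predicted
```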
There are two things you could mean when you say 'reductionism is right'. That reality is reductionist in the "big thing = small thing + small thing" sense, or that reductionist explanations are better by fiat.
Reality is probably reductionist. I won't assign perfect certainty, but reductionist reality is simpler than magical reality.
As it currently stands, we don't have a complete theory of reality, so the only criteria by which we can judge theories are that they 1) are accurate and 2) are simple.
I am not arguing about the rightness or wrongness of reductionism. Reductionism and contra-reductionism are containers, and they contain certain classes of explanations. Contra-reductionism contains historical explanations, explaining the state of things by their interactions with outside forces, and reductionism contains predictive explanations, explaining future behavior in terms of internal forces.
At least this tells me I didn't make a silly mistake in my post. Thank you for the feedback.
As for your objections,
All models are wrong, some models are useful.
exactly captures my conceit. Reductionism is correct in the sense that it is, in some sense, closer to reality than anti- or contra-reductionism. Likely in a similar sense that machine code is closer to the reality of a physical computation than a .cpp file, though the analogy isn't exact, for reasons that should become clear.
I'm typing this on a laptop, which is an intricate amalgam of various kinds of atoms. Hypothetically, you could explain the positioning of the atoms in terms of dense quantum mechanical computations (or a more accurate physical theory, which would exist ex hypothesi), and/or we could explain it in terms of economics, computer science and the vagaries of my life. The former strictly contains more information than the latter, and subsumes the latter to the extent that it represents reality and contradicts it to the extent it's misleading.
At an objective level, then, the strictly reductionist theory wins on merit.
Reductionism functions neatly to explain reality-in-general, and even to explain certain orderly systems that submit to a reductionist analysis. If you want completeness, reductionism will give you completeness, at the limit. But sometimes, a simple explanation is nice. It'd be convenient to compress, to explain evolution in abstract terms.
The compression will be lossy, because we don't actually have access to reality's dataset. But lossy data is okay, and more okay the more casual the ends. Pop science books are very lossy, and are sufficient for delivering a certain type of entertainment. A full reprinting of a paper's collected data is about as lossless as we tend to get.
A lossless explanation is reductionist, and ceteris paribus, we ought to go with the reductionist explanation. Given a choice between a less lossy, very complex explanation and a lossy but simple explanation, you should probably go gather more data. But failing that, you should go with the one that suits your purposes. A job where every significant digit of accuracy matters chooses the first, as an example.
"I think you're wrong" is not a position.
The way you're saying this, it makes it seem like we're both in the same boat. I have no idea what position you're even holding.
I feel like I'm doing the same thing over and over and nothing different is happening, but I'll quote what I said in another place in this thread and hope I was a tiny bit clearer.
http://lesswrong.com/lw/nnc/the_ai_in_marys_room/day2
I think the distinction between 'knowing all about' and 'seeing' red is captured in my box analogy. The brain state is a box. There is another box inside it; call this 'understanding'. We call something inside the first box 'experienced'. So the paradox here is that the two distinct states [experiencing(red)] and [experiencing([understanding(red)])] are both brought under the header [knowing(red)], and this is really confusing.
Arguably it could simulate itself seeing red and replace itself with the simulation.
The big box is all knowledge, including the vague 'knowledge of experience' that people talk about in this thread. The box-inside-the-box is verbal/declarative/metaphoric/propositional/philosophical knowledge, that is anything that is fodder for communication in any way.
The metaphor is intended to highlight that people seem to conflate the small box with the big box, leading to confusion about the situation. Inside the metaphor, perhaps this would be people saying "well, maybe there are objects inside the box which aren't inside the box at all". Which makes little sense if you assume 'inside the box' has a single referent, which it does not.
Edit: I read your link, thanks for that. I can't say I got much of anything out of it, though. I haven't changed my mind, and my epistemic status regarding my own arguments hasn't changed; which is to say there is likely something subtle I'm not getting about your position, and I don't know what it is.
I have no idea what your position even is and you are making no effort to elucidate it. I had hoped this line
I don't understand what disagreement is occurring here, hopefully I've given someone enough ammunition to explain.
Was enough to clue you in to the point of my post.
I'd highly recommend this sequence to anyone reading this: http://lesswrong.com/lw/5n9/seeing_red_dissolving_marys_room_and_qualia/
The thrust of the argument, applied to this situation, is simply that 'knowledge' is used to mean two completely different things here. On one hand, we have knowledge as verbal facts and metaphoric understanding. On the other, we have averbal knowledge, that is, the superset containing both verbal and non-verbal knowledge.
To put it as plainly as possible: imagine you have a box. Inside this box there is another, smaller box. We can put a toy inside the smaller box. We can alternatively put a toy inside the larger box but outside the smaller box. These situations are not equivalent. What a paradox!
The only insight needed here is simply noting that something can be 'inside the box' without being inside the box inside the box. Since both are referred to as 'inside the box', the confusion is not surprising.
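The metaphor is simple enough to render literally in a few lines of Python (the set contents are invented for illustration):

```python
# A literal rendering of the box metaphor; the inner box is verbal knowledge,
# the outer box is all knowledge. Set contents are made up.
verbal_knowledge = {"facts about wavelengths", "facts about neurons"}   # inner box
all_knowledge = verbal_knowledge | {"the experience of seeing red"}     # outer box

toy = "the experience of seeing red"
assert toy in all_knowledge          # 'inside the box'...
assert toy not in verbal_knowledge   # ...without being inside the box inside the box
```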
It seems like a significant number of conventional aporia can be understood as confusions of levels.
I think the argument is asserting that Mary post-brain-surgery is identical to Mary post-seeing-red. There is no difference; the two Marys would both attest to having access to some ineffable quality of red-ness.
To put it bluntly, both Marys say the same things, think the same things, and generally are virtually indistinguishable. I don't understand what disagreement is occurring here, hopefully I've given someone enough ammunition to explain.
Somewhere I got the impression that ... Sarah Perry of Ribbonfarm were LWers at some point.
She was/is. Her (now dead) blog, The View From Hell, is on the lesswrong wiki list of blogs. She has another blog, at https://theviewfromhellyes.wordpress.com which updates, albeit at a glacial pace.
I'm sorry that I overestimated my achievements. Thank you for being civil.
What do you expect to happen if you feed your code a problem that has no Turing-computable solution?
I'm actually quite interested in this. For something like the busy beaver function, it just runs forever, with the output starting fuzzy and getting progressively less fuzzy, but never becoming certain.
Although I wonder about something like supertasks somehow being described in my model. You can definitely get input from arbitrarily far in the future, but you can do even crazier things if you can achieve a transfinite number of branches.
If you're still interested in this (I doubt you are; there are more important things you can do with your time, but still) you can glance at this reply I gave to taryneast describing how it checks whether a Turing machine halts. (I do have an ulterior motive in pointing you there, seeing as I want to find that one flaw I'm certain is lurking in my model somewhere.)
For some strange reason, your post wasn't picked up by my RSS feed and the little mail icon wasn't orange. Sorry to keep you waiting for a reply for so long.
The Halting Problem proof is for Turing machines. My model isn't a Turing machine; it's supposed to be more powerful.
You'll have to mathematically prove that it halts for all possible problems.
Not to sound condescending, but this is why I'm posting it on a random internet forum and not sending it to a math professor or something.
I don't think this is revolutionary, and I think there is a very good possibility there is something wrong with my model.
I'll tell you what convinced me that this is a hypercomputer, though, and I'll go ahead and say I'm not overly familiar with the Halting Problem, inasmuch as I don't understand its inner workings as well as I can parrot facts about it. I'll let more experienced people tell me if this breaks some sort of conditional.
What my model essentially does is graft a time-travel formalism onto something Turing-complete. Since the Turing-complete model of your choice is a special case of the model we just constructed, the new model is already Turing-complete. And since the formalism itself already specifies that information can travel backwards through time, what has to be proven is that an algorithm can be constructed that solves the halting problem.
With all of that, we can construct an algorithm based off of the following assumptions about time travel:
Inconsistent timelines "don't exist"*
A timeline is inconsistent if it sends back different information than it receives
If more than one timeline is consistent, then all are equally realized.
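These assumptions can be sketched as a brute-force search (a hypothetical helper, not part of my actual code): model a timeline as "the message it sends back, given the message it received"; assumption 2 says a timeline exists only if those match, and assumption 3 says we keep every match.

```python
# Hypothetical sketch of the three time-travel assumptions (not my actual code).
# A timeline is modeled by a function: received message -> message sent back.

def consistent_timelines(timeline, candidate_messages):
    """Keep exactly the messages m where the timeline that receives m also
    sends back m (assumption 2); every survivor is equally realized
    (assumption 3); everything else "doesn't exist" (assumption 1)."""
    return [m for m in candidate_messages if timeline(m) == m]

# A toy timeline that always sends back 42 is consistent only when it
# also received 42:
assert consistent_timelines(lambda m: 42, range(100)) == [42]
```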
I have no idea if you read through the ramblings I linked, but the gist was that to simulate the model, at any given timestep the model receives all possible input from the future, organized into different branches. 'Possible' is an important qualifier, because the difference between the model being exponential in the size of the memory and exponential in an arbitrary quantity constrained to be smaller than the size of the memory is whether you can tell if a given bit of memory is dependent on the future by looking at only the current state.
Next, I'll point out that because the model allows computation to be carried out between receiving and sending messages, you can use the structure of the model to do computation. An illustration:
Suppose X is a Turing machine, and you are interested in whether or not it halts.
1. Receive extra input from the future (in the form of "X will halt after n timesteps" )
- If null, goto 3 with n = 0
2. Is it properly formatted?
- If not, halt without outputting to the past. (Halt.)
3. Simulate X for exactly n timesteps
If it halts before then, output "X will halt after m timesteps", where m is the number of cycles before it halted. Halt.
If it doesn't halt after n timesteps, output "X will halt after n+1 timesteps". Halt.
I'll note this algorithm only appeared to me after writing my posts.
Here's how it works.
We can number each timeline branch based off of what it outputs to the past. If it outputs "X will halt after y timesteps" then it is machine y.
If X doesn't halt, machine y will simulate X for y timesteps and output that it will halt after y+1 timesteps.
The above point should be emphasized. Recall above that a timeline is inconsistent, and therefore "non-existent", if its input doesn't match its output. Thus, for a non-halting X, every machine y will be inconsistent, and the hypercomputer will halt immediately (things are a bit fuzzy here; I am 80% confident that if there is a problem with my model, it lies in this part).
If it halts at t=z, then y=z is the only consistent timeline. For timelines y&gt;z, they output z; for timelines y&lt;z, they output y+1, making y=z a kind of attractor.
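To illustrate, here's a classical (non-time-traveling) Python simulation of the timeline search. The machine X is stood in for by a hypothetical has-it-halted-by-step-m predicate, so this is a sketch of the scheme, not my actual CA code:

```python
# Classical simulation of the timeline search: each candidate message n is a
# timeline; a timeline is realized only if its output equals its input.

def timeline_output(halts_within, n):
    """What the timeline receiving "X will halt after n timesteps" sends back."""
    for m in range(n):                      # simulate X for n timesteps
        if halts_within(m):
            return m                        # halted early: report the true step
    # Consistent only if X halts at exactly n; otherwise claim n+1.
    return n if halts_within(n) else n + 1

def consistent_answers(halts_within, max_msg):
    """Messages that are fixed points: timelines whose output matches their input."""
    return [n for n in range(max_msg)
            if timeline_output(halts_within, n) == n]

# A toy machine that halts at step 3 has a unique consistent timeline (the
# attractor y=z), while a machine that never halts has none:
assert consistent_answers(lambda m: m >= 3, 50) == [3]
assert consistent_answers(lambda m: False, 50) == []
```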
My problems with this model are as follows:
I haven't been able to formulate the algorithm without either a) having every timeline inconsistent when the machine halts or b) the actual output being uncertain (if it says it halts at z, you know for a fact it does, but if it says it doesn't halt, then you can't be sure).
"Non-existent" timelines have causal weight.
It's probably stupid to reply to a comment from more than three years ago, but antisocial personality disorder does not imply violence. There are examples of psychopaths who were raised in good homes and grew up to become successful assholes.
I wrote a hypercomputer in 60-ish lines of Python. It's (technically) more powerful than every supercomputer in the world.
Edit: actually, I spoke too soon. I have written code which outlines a general scheme that can be modified to construct schemes in which hypercomputers could possibly be constructed (including itself). I haven't proven that my scheme allows for hypercomputation, but a similar scheme (probably) could, including itself.
Edit: I was downvoted for this, which I suppose was justified.
What my code does is simulate a modified version of CGoL (John Conway's Game of Life). It's modified so that information can (technically) flow backwards through time. It's very similar to what EY outlined in the second section of Causal Universes, except my algorithm is much simpler and faster (it'd be even faster if I hadn't done a half-assed job of coding it and had chosen a good language to write it in).
My scheme is more general than the code. I've tried explaining it on /r/cellular_automata here and here, with a passable degree of success.
The scheme itself is capable of hypercomputation with the right underlying rules. I'll write a quick demonstration, assuming you've read Causal Universes and my explanations:
In order to be capable of hypercomputation, it must be capable of regular computation. CA have already been proven to be Turing machines in disguise, so I'll take this for granted.
By the above, you should be able to construct a simulation of any Turing machine in the CA. Again, this is a fact, so I'll take it for granted.
I've already said that the algorithm involves backwards information flow (time travel by another name).
By the above, we can construct a state in the CA which simulates a given Turing machine, then pipes its output back in time to a finite time after the simulation started.
If we modify the simulation to instead just pipe back the fact that the machine produced output, and nothing else (one bit), we can know beforehand whether a Turing machine produces output.
I'd think anyone reading this is familiar with it, but this is called the Halting Problem, and I think (I could be wrong, but I am highly confident I am not) my scheme solves it.
The only real problem is that if the T-machine doesn't halt, neither will the one we constructed; but if it does halt, it will produce output after an arbitrary finite amount of time.
This does mean my scheme is more powerful than a Turing machine. For instance, it can compute the busy beaver function for a proportional number of values.
You can look at the code here, but it's messy and hacked together. I only wrote it as a proof of concept for the first /r/cellular_automata thread I linked.