Comments

Comment by siodine on Course recommendations for Friendliness researchers · 2013-01-15T16:03:00.381Z · LW · GW

In light of the following comment by jim, I think we do disagree:

Please be careful about exposing programmers to ideology; it frequently turns into politics and kills their minds. This piece in particular is a well-known mindkiller, and I have personally witnessed great minds acting very stupid because of it. The functional/imperative distinction is not a real one, and even if it were, it's less important to provability than languages' complexity, the quality of their type systems and the amount of stupid lurking in their dark corners.

And while I would normally interpret jim's nearest comment in this thread charitably (i.e., as mostly in agreement with me), it's more reasonable to interpret it in light of the quoted comment.

I think he probably doesn't, or didn't, understand the functional paradigm. If he did, I think he would know about its usefulness in concurrent or parallel programming, and consequently know that it is not just a mind-killing ideology like US political parties, but a paradigm with real advantages and disadvantages relative to other paradigms. I don't think he would have written his first comment if he really knew that. I think he's probably confusing the functional idiomatic approach/style/dialect/whatever with the functional paradigm. I mean, he says, "The majority of the difference between functional style and imperative style is in how you deal with collections." And remember, this thread was created in reference to a comment about a textbook on functional programming (not functional "style" -- maybe he's backpedaling, or, charitably, he means the paradigm).

(Also, C++ is a non-garbage-collected language. And, more importantly, I don't mean to shit on jim. I'm more worried about how many people thought that comment deserved to be at the top of the comment section in a thread about course recommendations for FAI researchers. I would have been fine ignoring it otherwise.)

Comment by siodine on Course recommendations for Friendliness researchers · 2013-01-15T03:59:27.331Z · LW · GW

Functional programming isn't an idiomatic approach to container manipulation; it's a paradigm that avoids mutable state and data. Write a GUI in Haskell using pure functions to see how different the functional approach is and what it is at its core. Or just compare a typical textbook on imperative algorithms with one on functional algorithms. Container manipulation with functions is just an idiom.
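For concreteness, here's a minimal sketch of what avoiding mutable state looks like (the Counter type and names are made up for illustration):

```haskell
-- Instead of mutating a counter in place, each update returns a new
-- value; nothing is ever overwritten.
newtype Counter = Counter Int deriving Show

increment :: Counter -> Counter
increment (Counter n) = Counter (n + 1)

-- Three updates compose as ordinary function application, and the
-- original Counter 0 is still available, unchanged.
example :: Counter
example = increment (increment (increment (Counter 0)))
```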

And sure, you can write functional code in C++, for example (which, by the way, has map, filter, fold, and so on), but you can also write OO code in C. Few people do either, and for a reason: the language makes it near impossible or, at the very least, undesirable for humans. That's close enough to the distinction being located in the language.

Comment by siodine on Farewell Aaron Swartz (1986-2013) · 2013-01-13T03:48:37.648Z · LW · GW

-

Comment by siodine on Farewell Aaron Swartz (1986-2013) · 2013-01-13T03:47:37.514Z · LW · GW

-

Comment by siodine on Course recommendations for Friendliness researchers · 2013-01-11T16:31:13.237Z · LW · GW

Laziness can muddy the waters, but it's also optional in functional programming. People using Haskell in a practical setting usually avoid it and are coming up with new language extensions to make strict evaluation the default (in records, for example).
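Roughly the kind of thing I mean (a sketch assuming GHC; BangPatterns and StrictData are the sort of extensions in question):

```haskell
{-# LANGUAGE BangPatterns, StrictData #-}

-- With StrictData, record fields default to strict rather than lazy.
data Point = Point { px :: Double, py :: Double }

-- The bang patterns force the accumulator at each step, avoiding the
-- buildup of unevaluated thunks a naively lazy fold would create.
sumStrict :: [Double] -> Double
sumStrict = go 0
  where
    go !acc []     = acc
    go !acc (v:vs) = go (acc + v) vs
```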

What you're really saying is that the causal link between the language and the underlying assembly is less obvious, which is certainly true of a very high-level language. However, if we're talking about causality within the language itself, then functional languages enforce a more transparent causal structure in the code.

You can be certain that a function in Haskell that isn't tainted by IO, for example, isn't going to involve dozens of different causal structures. An imperative call like AnimalFactory.create("dog") could involve dozens of different dependencies (e.g., through singletons or dependency injection), making the dependency graph (and causal structure) obfuscated. This lack of transparent guarantees about state and dependencies in imperative languages makes concurrent/parallel programming (and even plain code) very difficult to reason about and test.
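To make the contrast concrete, a small sketch (the function names are hypothetical):

```haskell
-- Pure: the type guarantees no I/O, no hidden state, no surprise
-- dependencies; the result depends only on the argument.
double :: [Int] -> [Int]
double = map (* 2)

-- Effectful: the IO in the type can't be hidden, so every caller sees
-- exactly where this function touches the outside world.
readAndDouble :: FilePath -> IO [Int]
readAndDouble path = do
    contents <- readFile path
    return (double (map read (lines contents)))
```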

Moreover, the concessions Haskell has made are probably temporary. Haskell is a research language, and functional solutions to problems like IO and event-driven programs have been put forward but are not yet widely accepted. And even ignoring those solutions, you still have a basic paradigm where only the top-level code is imperative in style, with everything else being functional.

And while it can be more difficult to debug functional programs, they're easier to test, and they're less prone to runtime bugs. And really, the debugging problem is one of laziness and difficult-to-use debuggers. Debugging F# with Visual Studio's debugger isn't that difficult.

(Note: when I talk about functional programming, I'm talking about a paradigm that avoids mutable state and data, rather than an idiomatic approach to container manipulation.)

Comment by siodine on Evaluating the feasibility of SI's plan · 2013-01-10T23:45:45.663Z · LW · GW

Why do you use Diigo and Pocket? They do the same thing. Also, with Evernote's Clearly you can highlight articles.

You weren't asking me, but I use Diigo to manage links to online textbooks and tutorials, shopping items, book recommendations (through Amazon), and my less important to-read list of online articles; Evernote for saving all of my important read content (and I tag everything); and Amazon's Send to Kindle extension to read longer articles (every once in a while I'll save all my clippings from my Kindle to Evernote). And then I maintain a personal wiki and collection of writings using Markdown with Evernote's import-folder function in the PC software (I could also do this with a cloud service like Google Drive).

Comment by siodine on Course recommendations for Friendliness researchers · 2013-01-10T23:22:17.553Z · LW · GW

He's stating that it will invoke arguments and distract from the thrust of the point - and guess what, he's right. Look at what you're doing, right here.

No. "It" didn't invoke this thread, jimrandomh's fatuous comment combined with it being at the top of the comment section did (I don't care that it was a criticism of functional programming). You keep failing to understand the situation and what I'm saying, and because of this I've concluded that you're a waste of my time and so I won't be responding to you further.

Comment by siodine on Course recommendations for Friendliness researchers · 2013-01-10T22:56:31.594Z · LW · GW

There are two major branches of programming: Functional and Imperative. Unfortunately, most programmers only learn imperative programming languages (like C++ or python). I say unfortunately, because these languages achieve all their power through what programmers call "side effects". The major downside for us is that this means they can't be efficiently machine checked for safety or correctness. The first self-modifying AIs will hopefully be written in functional programming languages, so learn something useful like Haskell or Scheme.

That comes from the post, not the comments (maybe you mean Louie's comment about the functional programming recommendation in the main post).

Being a standard ideology doesn't make it less of an ideology.

He's just saying it's an ideology and importing the negative connotation (of it being bad), rather than saying why or how it's an ideology and why that's bad. Now I think you're being really stupid. I don't like repeating myself.

Comment by siodine on Course recommendations for Friendliness researchers · 2013-01-10T22:34:13.766Z · LW · GW

No. Jimrandomh just says functional programming, imperative programming, etc. are "ideologies" (importing the negative connotation). Just says it kills minds. Just says it's a well-known mindkiller. Just says it's not a real distinction. Just puts it in a dichotomy between being more or less important than "languages' complexity, the quality of their type systems and the amount of stupid lurking in their dark corners." What Louie says is more reasonable given that it's a fairly standard position within academia and because it's a small part of a larger post. (I'd rather Louie had sourced what he said, though.)

Comment by siodine on Course recommendations for Friendliness researchers · 2013-01-10T22:15:41.549Z · LW · GW

This is a really stupid comment for how many upvotes it's getting. I don't mind the criticism of functional programming, but I do mind that this person is essentially saying "this is bad because I say so" and gets to the top of the comments.

Comment by siodine on Course recommendations for Friendliness researchers · 2013-01-10T22:03:25.513Z · LW · GW

the entire point of functional programming is to hide the causality of the program from the human

Why? I would say it's the opposite (and really, the causality being clear and obvious is just a corollary of referential transparency). The difficulty of reasoning about concurrent/parallel code in an imperative language, for example, is one of the largest selling points of functional programming languages like Erlang and Haskell.
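As a rough illustration of why purity helps here (a sketch using parMap from Haskell's parallel package; `expensive` is a stand-in for real work):

```haskell
import Control.Parallel.Strategies (parMap, rdeepseq)

-- Because expensive is pure, evaluating the elements in parallel can't
-- change the result: there's no shared mutable state to race on.
expensive :: Int -> Int
expensive n = sum [1 .. n]

main :: IO ()
main = print (parMap rdeepseq expensive [100000, 200000 .. 1000000])
```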

Comment by siodine on Course recommendations for Friendliness researchers · 2013-01-10T21:42:04.829Z · LW · GW

I don't think you understand functional programming. What background are you coming from?

Comment by siodine on Philosophy Needs to Trust Your Rationality Even Though It Shouldn't · 2012-11-30T16:07:12.357Z · LW · GW

From SEP:

Judith Thomson provided one of the most striking and effective thought experiments in the moral realm (see Thomson, 1971). Her example is aimed at a popular anti-abortion argument that goes something like this: The foetus is an innocent person with a right to life. Abortion results in the death of a foetus. Therefore, abortion is morally wrong. In her thought experiment we are asked to imagine a famous violinist falling into a coma. The society of music lovers determines from medical records that you and you alone can save the violinist's life by being hooked up to him for nine months. The music lovers break into your home while you are asleep and hook the unconscious (and unknowing, hence innocent) violinist to you. You may want to unhook him, but you are then faced with this argument put forward by the music lovers: The violinist is an innocent person with a right to life. Unhooking him will result in his death. Therefore, unhooking him is morally wrong.

However, the argument, even though it has the same structure as the anti-abortion argument, does not seem convincing in this case. You would be very generous to remain attached and in bed for nine months, but you are not morally obliged to do so.

The thought experiment depends on your intuitions, or your definition of moral obligations and wrongness, but the experiment doesn't make these distinctions. It just pretends that everyone has the same intuition and that the experiment should therefore remain analogous regardless (probably because Judith didn't think anyone else could have different intuitions), and so you have all these other philosophers and people arguing about these minutiae and adding further qualifications and modifications to the point that they may as well be talking about actual abortion.

Comment by siodine on Intuitions Aren't Shared That Way · 2012-11-30T15:52:15.522Z · LW · GW

I prefer to think of 'abstract' as 'not spatially extended or localized.'

I prefer to think of it as anything existing at least partly in mind, and then we can say we have an abstraction of an abstraction, or that something is more abstract (something from category theory being a pure abstraction, while something like the category "dog" is less abstract because it connects with a pattern of atoms in reality). By their nature, abstractions are also universals, but things that actually exist, like the bee hive in front of me, aren't particulars at the concrete level. The specific bee hive in my mind that I'm imagining is a particular, or the "bee hive" that I'm seeing and interpreting into a bee hive in front of me is also a particular, but the bee hive itself is just a "pattern" of atoms.

Is that a fair summary?

I think that you're stuck in noun-land while I'm in verb-land, but I don't think noun-land is concrete (it's an abstraction).

What's relevant is whether it's useful to have separate concepts of 'the practice of science' vs. 'professional science,' the former being something even laypeople can participate in by adopting certain methodological standards. I think both concepts are useful. You seem to think that only 'professional science' is a useful concept, at least in most cases. Is that a fair summary?

Framing those concepts in terms of usefulness isn't helpful, I think. I'd simply say the laypeople are doing something different unless they're contributing to our body of knowledge. In which case, science as it is requires that those laypeople interact with science as it is (journals and such).

Counterfactuals don't make sense if you think of things as they are?

No, I mean thinking of someone as being scientific doesn't make sense if you think of science as it is, because, e.g., the sixth grader at the science fair that we all call "scientific" isn't interacting with science as it is. We're taking some essential properties we pattern match in science as it is, and then we abstract them, and then we apply them by pattern matching.

I'm suggesting that many of those clustered properties, in particular many of the ones we most care about when we promote and praise things like 'science' and 'naturalism,' can occur in isolated individuals.

We can imagine an immortal human being on another planet replicating everything science has done on Earth thus far. So, yes, I think it can occur in isolated individuals, but that's only because the individual has taken on everything that science is, and not something like "carefully collecting empirical data, and carefully reasoning about its predictive and transparently ontological significance."

If I'm going to apply an abstraction of what I praise in science to individuals, it's not "being scientific" or "doing science"; it's "working with feedback." It's what programmers do, it's what engineers do, it's what mathematicians do, it's what scientists do, it's what people who effectively lose weight do, and so on. It's the kernel of thought most conducive to progress in any area.

Maybe we're just not approaching the problem at the same levels. When I ask about what the optimal way is to define our concepts, I'm trying to define them in a way that allows us to consistently ..

I think we are approaching the problem at the same level. I think I have optimally defined the concepts, and I think "behave in a way that predictably makes you better and better at doing good stuff" is what needs to be communicated and not "science: carefully collecting empirical data, and carefully reasoning about its predictive and transparently ontological significance." If we're going to add more content, then we should talk about how to effectively measure self-improvement, how to get solid feedback and so on. With that knowledge, I think a bunch of kids working together could rebuild science from the ground up.

If, in some cataclysm, all of scientific knowledge were to be destroyed, and only one sentence passed on to the next generation of creatures, what statement would contain the most information in the fewest words? I believe it is the atomic hypothesis that all things are made of atoms — little particles that move around in perpetual motion, attracting each other when they are a little distance... -- Feynman

I'd pass on how important "behave in a way that predictably makes you better and better at doing good stuff" is.

Comment by siodine on Intuitions Aren't Shared That Way · 2012-11-30T01:17:45.333Z · LW · GW

For you, if I'm understanding you right, they're professions, institutions, social groups, population-wide behaviors. Sociology is generally considered more abstract or high-level than psychology.

You're kind of understanding me. Abstractly, bee hives produce honey. Concretely, this bee hive in front of me is producing honey. Abstractly, science is the product of professions, institutions, etc. Concretely, science is the product of people on our planet doing stuff.

I'm literally trying to not talk about abstractions or concepts but science as it actually is. And of course, science as it actually is does things that we can then categorize into abstractions like feedback cycles. But when you say science is a bunch of abstractions (like I think your definitions are), then you're missing out on what it actually is.

Feedback cycles are great, but we don't need to build them into our definition of 'science' in order to praise science for happening to possess them; if we put each scientist on a separate island, their work might suffer as a result, but it's not clear to me that they would lose all ability to do anything scientific, or that we should fail to clearly distinguish the scientifically-minded desert-islander for his unusual behaviors.

This is exactly why I want to avoid defining science with abstractions. It literally does not make sense if you think of science as it is. "Scientific" imports essentialism.

Also, it's not clear in what sense mathematics has a self-improving recursive feedback cycle with reality.

Mathematics is self-improving while at the same time hinging on reality. This is tricky to explain so I might come back to it tomorrow when I'm more well rested (i.e., not drunk).

I'm not sure that's the best approach. Telling people to find a recursively self-improving method is not likely to be as effective as giving them concrete reasoning skills (like how to perform thought experiments, or how to devise empirical hypotheses, or how to multiply quantities) and then letting intelligent society-wide behaviors emerge via the marketplace of ideas (or via top-down societal structuring, if necessary). Don't fixate first and foremost on telling people about what our abstract models suggest makes science on a societal scale so effective; fixate first and foremost on making them good scientists in their daily lives, in every concrete action.

No, I think that kernel of thought (and we are speaking in the context of "fast-and-ready") is really the most important thing to convey. Speaking abstractly, even science doesn't take that kernel seriously enough. It doesn't question how it should allocate its limited resources or improve its function. This is costing millions of lives, untold suffering, and perhaps our species' continued existence. But it does employ a self-improving feedback cycle on reality, which is just enough for it to uncover reality. It needs to install a self-improving feedback cycle on itself. And then we need a self-improving feedback cycle on feedback cycles. I can't think of any abstraction more important to making progress with something.

Comment by siodine on Intuitions Aren't Shared That Way · 2012-11-30T00:25:21.540Z · LW · GW

I thought you were saying that the distinctions have become less blurred?

Yup, my bad. You caught me before my edit.

Do you think these would be useful fast-and-ready definitions for everyday promotion of scientific, philosophical, and mathematical literacy? Would you modify any of them?

I think you're reifying abstraction and doing so will introduce pitfalls when discussing them. Math, science, and philosophy are the abstracted output of their respective professions. If you take away science's competitive incentive structure or change its mechanism of output (journal articles) then you're modifying science. If you install a self-improving recursive feedback cycle with reality in philosophy, then I think you've recreated math and science within philosophy (because science is fundamentally concrete reasoning while math is abstract reasoning and philosophy carries both).

If I'm going to promote something to laypeople, it's that a mechanism of recursive self-improvement is desirable. There's plenty to unpack there, though. Like you need a measure of improvement that contacts reality.

Comment by siodine on Philosophy Needs to Trust Your Rationality Even Though It Shouldn't · 2012-11-29T23:55:06.326Z · LW · GW

I agree, but the problems remain and the arguments flourish.

Comment by siodine on Intuitions Aren't Shared That Way · 2012-11-29T23:44:30.378Z · LW · GW

I didn't say they don't overlap. I said the distinctions have become less blurred (I think because of the need for increased specialization in all intellectual endeavours as we accumulate more knowledge). I define philosophy, math, and science by their professions. That is, their university departments, their journals, their majors, their textbooks, and so on.

Hence, I think the best way to ask whether "philosophy" is a worthwhile endeavour is to ask "why should we fund philosophy departments?" A better way to ask that question is "why should we fund philosophy research and professional philosophers (as opposed to teachers of basic philosophy)?"

And while I think basic philosophy can be helpful in getting a footing in critical thinking, I also think CFAR is considerably better at teaching critical thinking.

I don't see any principled reason for why we can't all be generalists without labels. Practical reasons, yes.

Comment by siodine on Intuitions Aren't Shared That Way · 2012-11-29T22:42:37.332Z · LW · GW

Even though the wikipedia page for "meaning of life" is enormous, it all boils down to the very simple either/or statement I gave.

How do we know if something is answerable? Did a chicken just materialize 10 billion light years from Earth? We can't answer that. Is the color blue the best color? We can't answer that. We can answer questions that contact reality such that we can observe them directly or indirectly. Did a chicken just materialize in front of me? No. Is the color blue the most preferred color? I don't know, but it can be well answered through reported preferences. I don't know if these currently unanswerable questions will always be unanswerable, but given what I know I can only say that they will almost certainly remain so (because answering them is infeasible or because the question is nonsensical).

Wouldn't science need to do conceptual analysis? Not really, though it could appear that way. Philosophy has "free will"; science has "volition." Free will is a label for a continually argued concept. Volition is a label for an axiom that's been nailed in stone. Science doesn't really care about concepts; it just wants to ask questions such that it can answer them definitively.

Even though science might provide all the knowledge necessary to easily answer a question, it doesn't actually answer it, right? My answer: so what? Science doesn't answer a lot of trivial questions like what I exactly should eat for breakfast, even though the answer is perfectly obvious (healthy food as discovered by science if I want to remain healthy).

Why do we still have the hard problem of consciousness if it's answerable by science? Because the brain is hard to understand. Give it another century or so. We've barely explored the brain.

What if consciousness isn't explainable by science? When we get to that point, we'll be much better prepared to understand what direction we need to go to understand the brain. As it is now, philosophy is simply following science's breadcrumbs. There is no point in doing philosophy unless there is a reasonable expectation that it will solve a problem that couldn't more likely be solved by something else.

A scientific theory of ethics? It wouldn't have any "you ought to do X because X is good"; it would be more of the form "science says X, Y, Z are healthy for you," and then you would think, "hey, I want to be healthy, so I'm going to eat X, Y, Z." This is actually how philosophy works now. You get a whole bunch of argumentation as evidence, and then you must enact it personally through hypothetical injunctions like "if I want to maximize well-being, then I should act as a utilitarian."

Comment by siodine on Philosophy Needs to Trust Your Rationality Even Though It Shouldn't · 2012-11-29T22:08:14.121Z · LW · GW

My first thought was "every philosophical thought experiment ever," but to my surprise wikipedia says there aren't that many thought experiments in philosophy (although they are huge topics of discussion). I think the violinist experiment is uniquely bad. The floating man experiment is another good example, but very old.

Comment by siodine on Intuitions Aren't Shared That Way · 2012-11-29T21:00:27.507Z · LW · GW

I'm a bit worried that your conception of philosophy is riding on the coattails of long-past philosophy, where the distinctions between philosophy, math, and science were much more blurred than they are now. Being generous, do you have any examples from the last few decades (that I can read about)?

I'll agree with you that having some philosophical training is better than none in that it can be useful in getting a solid footing in basic critical thinking skills, but then if that's a philosophy department's purpose then it doesn't need to be funded beyond that.

Comment by siodine on Intuitions Aren't Shared That Way · 2012-11-29T20:54:39.569Z · LW · GW
  1. I'm not even remotely autistic.
  2. How is philosophy going to get us the correct conception of reality? How will we know it when it happens? (I think science will progress us to the point where philosophy can answer the question, but by then anyone could)
Comment by siodine on Intuitions Aren't Shared That Way · 2012-11-29T20:34:00.843Z · LW · GW

You're both arguing over your impressions of philosophy. I'm more inclined to agree with Lukeprog's impression unless you have some way of showing that your impression is more accurate. Like, for example, show me three papers in meta-ethics from the last year that you think highlight what is representative of that area of philosophy.

From my reading of philosophy, the most well known philosophers (who I'd assume are representative of the top 10% of the field) do keep intuitions and conceptual analysis in their toolbox. But when they bring them out of the toolbox, they dress them up so that they're not prima facie stupid (and then you get a fractal mess of philosophers publishing how the intuition is wrong where their intuition isn't, or how they shouldn't be using intuitions, or how intuitions are useful, and so on, with no resolution). If I were to take a step back and look at what philosophy accomplishes, I think I'd have to say "confusion."

You can say this is just the way things are in philosophy, but then why should we fund philosophy?

Comment by siodine on Intuitions Aren't Shared That Way · 2012-11-29T20:20:40.824Z · LW · GW

So? I can quote scientists saying all manner of stupid, bizarre, unintuitive things...but my selection of course sets up the terms of the discussion. If I choose a sampling that only confirms my existing bias against scientists, then my "quotes" are going to lead to the foregone conclusion. I don't see why "quoting" a few names is considered evidence of anything besides a pre-existing bias against philosophy.

Improving upon this: why care about what the worst of a field has to say? It's the 10% that aren't crap (Sturgeon's law) that we should care about. The best material scientists give us incremental improvements in our materials technology, and the worst write papers that are never read or do research that is never used. But what do the best philosophers of meta-ethics give us? More well examined ideas? How would you measure such a thing? How can those best philosophers know they're making progress? How can they improve the tools they use? Why should we fund philosophy departments?

Comment by siodine on 2012 Survey Results · 2012-11-29T20:09:01.881Z · LW · GW

Hopefully then someone will do a supplementary calibration test for prediction book users in the comments here or in a new post on the discussion board. (Apologies for not doing it myself)

Comment by siodine on 2012 Survey Results · 2012-11-29T19:33:39.706Z · LW · GW

How well calibrated were the prediction book users?

Comment by siodine on Intuitions Aren't Shared That Way · 2012-11-29T16:31:23.727Z · LW · GW

I've substituted problems that philosophy is actually working on (metaethics and consciousness) with one that analytic philosophy isn't (meaning of life). Meaning comes from mind. Either we create our own meaning (absurdism, existentialism, etc.) or we get meaning from a greater mind that designed us with a purpose (religion). Very simple. How could computer science or science dissolve this problem? (1) By not working on it, because it's unanswerable by the only methods by which we can be said to have answered anything, or (2) by making the problem answerable by operationalizing it or by reforming the intent of the question into another, answerable, question.

Through the process of science, we gain enough knowledge to dissolve philosophical questions or make the answer obvious and solved (even though science might not say "the meaning of life is X" but instead show that we evolved, what mind is, and how the universe likely came into being -- in which case you can answer the question yourself without any need for a philosophy department).

What instruments do you use to get feedback from reality vis a vis phenomenal consciousness and ethical values? I didn't notice any qualiometers or agathometers last time I was in a lab.

If I want to know what's happening in a brain, I have to understand the physical/biological/computational nature of the brain. If I can't do that, then I can't really explain qualia or such. You might say we can't understand qualia through its physical/biological/computational nature. Maybe, but it seems very unlikely, and if we can't understand the brain through science, then we'll have discovered something very surprising and can then move in another direction with good reason.

Comment by siodine on Intuitions Aren't Shared That Way · 2012-11-29T16:11:37.170Z · LW · GW

I absolutely loathe the way you phrased that question for a variety of reasons (and I suspect analytic philosophers would as well), so I'm going to replace "meaning of life" with something more sensible like "solve metaethics" or "solve the hard problem of consciousness." In which case, yes. I think computer science is more likely to solve metaethics and other philosophical problems because the field of philosophy isn't founded on a program and incentive structure of continual improvement through feedback from reality. Oh, and computer science works on those kinds of problems (so do other areas of science, though).

Comment by siodine on Intuitions Aren't Shared That Way · 2012-11-29T15:40:02.478Z · LW · GW

To say the problem is "rampant" is to admit to a limited knowledge of the field and the debates within it.

Well, Lukeprog certainly doesn't have a limited knowledge of philosophy. Maybe you can somehow show that the problem isn't rampant.

Comment by siodine on Intuitions Aren't Shared That Way · 2012-11-29T15:36:22.913Z · LW · GW

Defund philosophy departments to the benefit of computer science departments?

Comment by siodine on Intuitions Aren't Shared That Way · 2012-11-29T15:21:28.174Z · LW · GW

Show me three of your favorite papers from the last year in ethics or meta-ethics that highlight the kind of philosophy you think is representative of the field. (And if you've been following Lukeprog's posts for any length of time, you'd see that he's probably read more philosophy than most philosophers. His gestalt impression of the field is probably accurate.)

Comment by siodine on Open Thread, November 16–30, 2012 · 2012-11-21T16:15:28.432Z · LW · GW

Surely territory is just another name for reality?

I think you misinterpreted me. Territory is just another name for reality, but reality is just a name, and so is territory. Because names come from mind, they are maps: they can't perfectly represent whatever actually is (or, more accurately, we can't confirm our representations as perfectly representational, and we possibly can't form perfect representations). Also, by saying "actually is," I'm creating a map too -- but I hope you infer what I mean. The methods by which we as humans receive and transform our state are imperfect, and therefore uncertainty is injected into anything we do; furthermore, by talking of "reality" (as it actually is) we assume there are no limitations of human minds, or of mind design in general, that prevent us from forming what actually is within the constraints of our minds.

Indeed, what? Is there an underlying computing substrate, which is more "real" than the territory?

Essentially, my question was a syncretization of the Five Ways. I.e., at the meta-level, what causes? Some people, like Aquinas, say that such a cause entails that it has the most important properties ascribed to their God (and consequently they pattern match "what causes" to their God). I don't take that view, though. I just have a hunch that there's something there to explain, and that it probably necessitates a teleological worldview at the meta-level if it is to be explained. I don't know.

Comment by siodine on Open Thread, November 16–30, 2012 · 2012-11-20T15:26:54.853Z · LW · GW

The laws are in the map, of course (if it came from mind, it is necessarily of a map). And what we call the 'territory' is a map itself. The map/territory distinction is just a useful analogy for explaining that our models of reality aren't necessarily reality (whatever that actually is). Also, keep in mind that there are many incompatible meanings for 'reductionism'. A lot of LWers (like anonymous1) use it in a way that's not in line with EY, and EY uses it in a way that's not in line with philosophy (which is where I suspect most LWers get their definition of it from).

And if the physical laws are in the map, what represents them in the territory?

Good question. A description is sufficient for execution, but what executes the description?

Comment by siodine on Taking "correlation does not imply causation" back from the internet · 2012-10-03T13:17:23.134Z · LW · GW

http://www.michaelnielsen.org/ddi/if-correlation-doesnt-imply-causation-then-what-does/

Comment by siodine on The Useful Idea of Truth · 2012-10-02T17:52:31.609Z · LW · GW

First, our territory is a map. This comes from our having evolved at a particular physical scale, on a particular type of planet (rather than at the quantum level or the cosmological level), with a century/day/hour-scale conception of time (rather than a geological one or the opposite), as a species in which experience is shared, preserved, and consequently accumulated. Differentiating matter is of that perspective; labeling snow is of that perspective; labeling itself is of that perspective; causation, and so on.

By nature of being, we create a territory. For a map to be true (I don't like 'meaningful'), it must correspond with the relevant territory. So, we need more than a Laplacian demon to restrict beliefs to propositions that can be true; we need a demon capable of having both a perfect and an imperfect understanding of nature. It'd have to carve out all possible territories (which can conflict) from our block universe and see them from all possible perspectives, and then you would have to specify which territory you want to see corresponds with whatever map.

Comment by siodine on The Useful Idea of Truth · 2012-10-02T17:03:40.590Z · LW · GW

Input -> Black box -> Desired output. "Black box" could be replaced with "magic." How would your black box work in practice?

Comment by siodine on [Poll] Less Wrong and Mainstream Philosophy: How Different are We? · 2012-09-26T21:41:57.140Z · LW · GW

And what meaning is that?

Comment by siodine on [Poll] Less Wrong and Mainstream Philosophy: How Different are We? · 2012-09-26T20:59:37.834Z · LW · GW

For example, abstract objects could be considered to exist in the minds of people imagining them, and consequently in some neuronal pattern, which may or may not match between different individuals, but considered to not exist as something independent of the conscious minds imagining them. While this is a version of nominalism, it is not nearly as clear-cut as "abstract objects do not exist".

That would be conceptualism and is a moderate anti-realist position about universals (if you're a physicalist). Nominalism and Platonism are two poles of a continuum about realism of universals. So, you probably lean towards nominalism if you're a physicalist and conceptualist.

Comment by siodine on Less Wrong Polls in Comments · 2012-09-21T16:42:59.041Z · LW · GW

It won't prevent trolling but it will minimize its effects. As it stands, you can input numbers like 1e+19 which will seriously throw off the mean. If trolls can only give the highest or lowest reasonable bound then they're not going to have much of an effect individually and that makes going through the effort to troll less worthwhile.
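A toy illustration of the effect (hypothetical numbers; clamp stands in for whatever bounds the poll enforces):

```haskell
clamp :: Double -> Double -> Double -> Double
clamp lo hi = max lo . min hi

mean :: [Double] -> Double
mean xs = sum xs / fromIntegral (length xs)

-- mean [1, 2, 3, 1e19]                     ~ 2.5e18  (one troll dominates)
-- mean (map (clamp 0 10) [1, 2, 3, 1e19])  = 4.0     (bounded influence)
```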

Comment by siodine on New study on choice blindness in moral positions · 2012-09-21T13:36:46.969Z · LW · GW

I completely agree with you; there shouldn't be any problems discussing political examples where you're only restating a campaign's talking points rather than supporting one side or the other.

Comment by siodine on New study on choice blindness in moral positions · 2012-09-21T03:31:07.127Z · LW · GW

The problem is that we don't know how influential the blind spot is. It could just fade away after a couple minutes and a "hey, wait a minute..." But assuming it sticks:

If I were a car salesman, I would have potential customers tell me their ideal car, and then I would tell them what I want their ideal car to be as though I were simply restating what they just said.

If I were a politician, I would target identities (e.g., Latino, pro-life, low taxes, etc.) rather than individuals, because identities are made of choices and they're easier to target than individuals. The identity makes a choice, and then you assume the identity chose you. E.g., "President Obama has all but said that I'm instigating 'class warfare,' or that I don't care about business owners, or that I want to redistribute wealth. Well, Mr. Obama, I am fighting with and for the 99%; the middle class; the inner city neighborhoods that your administration has forgotten; Latinos; African-Americans. We have all had enough of the Democrats' decades-long deafness to our voice. Vote Romney." Basically, you take the opposition's reasons for not voting for you, assume those reasons apply to the opposition instead, and run the ads in the areas you want to affect.

Comment by siodine on Less Wrong Polls in Comments · 2012-09-19T16:56:27.322Z · LW · GW

Specifying a lower and upper bound on the input should be required.

Comment by siodine on The Strangest Thing An AI Could Tell You · 2012-09-19T15:20:56.907Z · LW · GW

"I built you."

Comment by siodine on Eliezer's Sequences and Mainstream Academia · 2012-09-16T16:02:58.512Z · LW · GW

Or, more meta-ly, you're not going to be very persuasive if you ignore pathos and ethos. I think this might be a common failure mode of aspiring rationalists because we feel we shouldn't have to worry about such things, but then we're living in the should-world rather than the real-world.

Comment by siodine on Checking for the Programming Gear · 2012-09-14T21:15:14.488Z · LW · GW

Yeah, I can pretty much recall 10k LOC from experience. But it's not just about having written something before; it's about the truly fundamental understanding of what is best in some area of expertise that comes from having written something (a GUI framework, for example) and improved upon it for years. After doing that, you just know what the architecture should look like, you just know how to solve all the hard problems already, and you know what to avoid doing, so really all you're doing is filling in the scaffolding with your hard-won experience.

Comment by siodine on Checking for the Programming Gear · 2012-09-14T19:26:07.493Z · LW · GW

I've done it, and it's not as impressive as it sounds. It's mostly just reciting from experience and not some savant-like act of intelligence or skill. Take those same masters into an area where they don't have experience and they won't be nearly as fast.

Actually, I think the sequences were largely a recital of experience (a post a day for a year).

Comment by siodine on The raw-experience dogma: Dissolving the “qualia” problem · 2012-09-14T17:16:23.249Z · LW · GW

Reading the comments in here, I think I understand Will Newsome's actions a lot better.

Comment by siodine on Under-acknowledged Value Differences · 2012-09-14T01:53:41.395Z · LW · GW

I am not supporting any of the assertions.

I don't think everyone wants to be more autonomous, either (subs in BDSM communities, for example).

Comment by siodine on Under-acknowledged Value Differences · 2012-09-13T04:20:25.786Z · LW · GW

What if the problem is "I want to oppress you, but I know individually being nicer would get me more of what I want, so instead I'm going to recruit allies that will help me oppress you because I think that will get me even more of what I want."

Comment by siodine on Under-acknowledged Value Differences · 2012-09-13T03:52:31.720Z · LW · GW

So, assuming you're right, I think your conclusion then is that it's more productive to work towards uncovering what would be reflective extrapolated values than it is to bargain, but that's non-obvious given how political even LWers are. But OTOH I don't think we have anything to explicitly bargain with.