If an omnipotent being wants you to believe something that isn't true, and is willing to use its omnipotence to convince you of the truth of that untruth, then there is nothing you can do about it. There is no observation that suffices to prove that an omnipotent being is telling the truth, as a malevolent omnipotence could make you believe literally anything - that you observed any given observation - or didn't, or that impossible things make sense, or sensible things are impossible.
This is one of a larger class of questions where one answer means that you are unable to assume the truth of your own knowledge. There is nothing you can do with any of them except smile at your own limitedness, and make the assumption that the self-defeating answer is wrong.
And of course you can throw black holes into black holes as well, and extract even more energy. The end game is when you have just one big black hole, and nothing left to throw in it. At that point you then have to change strategy and wait for the black hole to give off Hawking radiation until it completely evaporates.
But all these things can happen later - there's no reason for not going through a paperclip maximization step first, if you're that way inclined...
If your definition of "truth" is such that any method is as good as any other of finding it, then the scientific method really is no better than anything else at finding it. Of course most of the "truths" won't bear much resemblance to what you'd get if you only used the scientific method.
My own definition - proto-science is something put forward by someone who knows the scientific orthodoxy in the field, suggesting that some idea might be true. Pseudo-science is something put forward by someone who doesn't know the scientific orthodoxy, asserting that something is true.
Testing which category any particular claim falls into is in my experience relatively straightforward if you know the scientific orthodoxy already - as a pseudoscientist's idea will normally be considered absolutely false in certain aspects by those who know the orthodoxy. A genuine challenger to the orthodoxy will at least tell you that they know they are being unorthodox, and why - a pseudoscientist will simply assert something else without any suggestion that their point is even unusual. This is often the easiest way to tell the two apart.
If you don't know the orthodoxy, it's much harder to tell, but generally speaking pseudoscience can also be distinguished a couple of other ways.
Socially - proto-science advocates have a relevant degree on the whole, and tend to keep the company of other scientists. Pseudo-science advocates often have a degree, but advocate a theory unrelated to it, and are not part of anything much.
Proof - pseudo-science appeals to common sense for proof, whereas proto-science only tries to explain rather than persuade. Pseudo-science can normally be explained perfectly well in English, whereas proto-science typically requires at least some mathematics if you want to understand it properly.
Both look disappointingly similar once they've been mangled by a poor scientific journalist - go back to the original sources if you really need to know!
In cases like this, where we want to drive the probability that something is true as high as possible, we are always left with an incomputable bit.
The bit that can't be computed is - am I sane? The fundamental problem is that there are (we presume) two kinds of people: sane people, and mad people who only think that they are sane. Those mad ones of course come up with mad arguments which show that their sanity is just fine. They may even have supporters who tell them they are perfectly normal - or even hallucinatory ones. How can I show which category I am in? Perhaps instead I am mad, and too mad to know it!
Only mad people can prove that they are sane - the rest of us don't know for sure one way or the other, as every argument in the end returns to the problem that I have to decide whether it's a good argument or not, and whether I am in any position to decide that correctly is the point at issue.
It's quite easy, when trying to prove that 53 must be prime, to get to the position where this problem is the largest remaining issue, but I don't think it's possible to put a number on it. In practice of course I discount the problem entirely as there's nothing I can do about it. I assume I'm fallibly sane rather than barking crazy, and carry on regardless.
I suppose we all came across Bayesianism from different points of view - my list is quite a bit different.
For me the biggest one is that the degree to which I should believe in something is basically determined entirely by the evidence, and IS NOT A MATTER OF CHOICE or personal belief. If I believe a proposition with probability X, and then see Y happen, which is evidence bearing on it, then the probability Z with which I should now believe it is a mathematical matter, and not a "matter of opinion."
The prior seems to be a get-out clause here, but since all updates are in principle layered on top of the first prior I had before receiving any evidence of any kind, it surely seems a mistake to give it too much weight.
My own personal view is also that often it's not optimal to update optimally. Why? Lack of computing power between the ears. Rather than straining the grey matter to get the most out of the evidence you have, it's often best to just go out and get more evidence to compensate. Quantity of evidence beats out all sorts of problems with priors or analysis errors, and makes it more difficult to reach the wrong conclusions.
On a non-Bayesian note, I have a rule to be careful of cases which consist of lots of small bits of evidence combined together. This looks fine mathematically until someone points out the lots of little bits of evidence pointing to something else which I just ignored or didn't even see. Selection effects apply more strongly to cases which consist of lots of little parts.
Of course, if you have the chance to actually do Bayesian mathematics rather than working informally with the brain, you can update exactly as you should, and use lots of little bits of evidence to form a case. But without a formal framework you can expect your innate wetware to mess up this type of analysis.
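As a minimal sketch of what doing the mathematics looks like (my own toy example, with made-up likelihood ratios), combining lots of little bits of evidence formally is just adding log-likelihood ratios to the prior log-odds:

```python
import math

def update(prior_prob, likelihood_ratios):
    """Posterior probability after applying each ratio
    P(evidence | hypothesis) / P(evidence | not hypothesis)."""
    log_odds = math.log(prior_prob / (1 - prior_prob))
    for lr in likelihood_ratios:
        log_odds += math.log(lr)
    odds = math.exp(log_odds)
    return odds / (1 + odds)

# Ten weak pieces of evidence, each only 1.5x likelier under the hypothesis:
print(update(0.01, [1.5] * 10))                   # ~0.37: weak evidence adds up
# ...and the same machinery shows what happens if you ignore ten equally weak
# pieces of counter-evidence: they cancel the case exactly.
print(update(0.01, [1.5] * 10 + [1 / 1.5] * 10))  # back to 0.01
```

Done informally in your head, that second line is exactly the step the selection effect makes you skip.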
Congratulations - this is what it's like to go from the lowest level of knowledge (Knows nothing and knows not that he knows nothing.) to the second lowest level. (Knows nothing, but at least knows that he knows nothing.)
The practical solution to this problem is that, in any decent organisation there are people much more competent than these two levels, and it's been obvious to them that you know nothing for much longer than it's been obvious to you. Their expectations will be set accordingly, and they will probably help you out - if you're willing to take some advice.
Which leads to two possible futures. In one of them, the AI is destroyed, and nothing else happens. In the other, you receive a reply to your command, thus:
The command did not. But your attitude - I shall have to make an example of you.
Obviously not a strategy to get you to let the AI out based on its friendliness - quite the reverse.
So you're sure I'm not out of the box already? IRC clients have bugs, you see.
Since you're trying to put numbers on something which many of us regard as being certainly true, I'll take the liberty of slightly rephrasing your question.
How much confidence do I place in the scientific theory that ordinary matter is not infinitely divisible? In other words, that it is not true that no matter how small an amount of water I have, I can make a smaller amount by dividing it?
I am (informally) quite certain that water is not infinitely subdivisible. I don't think it's that useful an activity for me to try to put numbers on it, though. The problem is that in many of the more plausible scenarios I can think of where I'm mistaken about this, I'm also barking mad, and my numerical ability seems as likely to be affected by that as my ability to reason about atomic theory. I would need to be in the too-crazy-to-know-I'm-crazy category - and probably in the physics-crank-with-many-imaginary-friends category as well. Even then, I don't see any known kind of madness that would leave me that wrong.
The problem here is that I can reach no useful conclusions on the assumption that I am that much mistaken. The main remaining uncertainty is whether my logical mind is fundamentally broken in a way I can neither detect nor fix. It's not easy to estimate the likelihood of that, and it's essentially the same likelihood for a whole suite of apparently obvious things. I neglect even to estimate this number as I can't do anything useful with it.
Let's think about the computer that you're using to look at this website. It's able to do general purpose logic, which is in some ways quite a trivial thing to learn. It's really quite poor at pattern matching, where we and essentially all intelligent animals excel. It is able to do fast data manipulation, reading its own output back.
As I'm sure you know, there's a distinction between computing systems which, given enough memory, can simulate any other computing system and computing systems which can't. Critical to the former is the ability to form a stored program of some description, and read it back and execute it. Computers that can do this can emulate any other computer, (albeit in a speed-challenged way in some cases).
Chimps and dolphins are undoubtedly smart, but for some reason they aren't crossing the threshold to generality. Their minds can represent many things, but not (apparently) the full gamut of what we can do. You won't find any chimps or dolphins discussing philosophy or computer science. My point actually is that humans went from making only relatively simple stone tools to discussing philosophy in an evolutionary eye-blink - there isn't THAT much of a difference between the two states.
My observation is that when we think, we introspect. We think about our thinking. This allows thought to connect to thought, and form patterns. If you can do THAT, then you are able to form the matrix of thought that leads to being able to think about the kinds of things we discuss here.
This only can happen if you have a sufficiently strong introspective sense. If you haven't got that, your thoughts remain dominated by the concrete world driven by your other senses.
Can I turn this on its head? A chimp has WAY more processing power than any supercomputer ever built, including the Watson machine that trounced various humans at Jeopardy. The puzzle is why they can't think about philosophy, not why we can. Our much-vaunted generality is pretty borderline at times - humans are truly BAD at being rational, and incredibly slow at reasoning. Why is a piece of hardware as powerful as us so utterly incompetent at something so simple?
The reason, I believe, is that our brains are largely evolved to do something else. Our purpose is to sense the world, and rapidly come up with some appropriate response. We are vastly parallel machines which do pattern recognition and ultra-fast response, based on inherently slow switches. Introspection appears largely irrelevant to this. We probably evolved it only as a means of predicting what other humans and creatures would do, and only incidentally did it turn into a means of thinking about thinking.
What is the actual testable distinction? Hard to say, but once you gain the ability to reason independently from the senses, the ability to think about numbers - big numbers - is not that far away.
Something like the ability to grasp that there is no largest number is probably the threshold - the logic's simple, but it requires you to think of a number separately from the real world. I appreciate that it's hard to know how to show whether dolphins grasp this or not. I think it's essentially proven that dolphins are smart enough to understand the logical relationships between the pieces of this proof, as the relationships are simple, and they can grasp things of that complexity when they are driven by the external world. But perhaps they can't see their internal world well enough to be able to pull 'number' as an idea out from 'two' and 'three' (which are ideas that dolphins are surely able to get), and then finish the puzzle.
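For concreteness, the proof in question is about as small as proofs get (standard arithmetic, nothing specific to this comment): for every natural number n there is a larger one, since m = n + 1 satisfies m > n, so no number can be the largest. Every step is simple; the hard part is holding 'a number in general' in mind without any pile of objects to point at.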
Perhaps it's not chains that are the issue, but the ability to abstract clear of the outside world and carry on going.
Because of what you can do with a train of thought.
"That mammoth is very dangerous, but would be tasty if I killed it."
"I could kill it if I had the right weapon"
"What kind of weapon would work?"
As against.... "That mammoth is very dangerous - run!"
Computer science is where this particular insight comes from. If you can lay down memories, execute loops and evaluate conditions, you can simulate anything. If you don't have the ability to read your own output, you can't.
If dolphins or chimps did have arbitrarily long chains of thought, they'd be able to do general reasoning, as we do.
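As an illustration of how little machinery "memories, loops and conditions" really amounts to, here is a minimal sketch (my own toy code, not anything from the comment) of an interpreter for a two-instruction counter machine - a scheme known to be Turing-complete given unbounded registers:

```python
from collections import defaultdict

def run(program, registers=None, max_steps=10_000):
    """Run a tiny counter machine: increment, or decrement-and-branch-if-zero."""
    regs = defaultdict(int, registers or {})
    pc = 0
    for _ in range(max_steps):
        if pc >= len(program):            # fell off the end: halt
            return dict(regs)
        op = program[pc]
        if op[0] == "inc":                # ("inc", r, nxt): r += 1, go to nxt
            _, r, nxt = op
            regs[r] += 1
            pc = nxt
        else:                             # ("decjz", r, nxt, if_zero)
            _, r, nxt, if_zero = op
            if regs[r] == 0:
                pc = if_zero
            else:
                regs[r] -= 1
                pc = nxt
    raise RuntimeError("step limit reached")

# Example program: add register 1 into register 0.
add = [
    ("decjz", 1, 1, 2),   # 0: if r1 == 0 halt (go to 2), else r1 -= 1, go to 1
    ("inc", 0, 0),        # 1: r0 += 1, loop back to 0
]
print(run(add, {0: 2, 1: 3}))   # {0: 5, 1: 0}
```

Storing counts, testing them and jumping is the whole instruction set - the same ingredients as the read-your-own-output loop being described.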
What is the essential difference between human and animal intelligence? I don't actually think it's just a matter of degree. To put it simply, most brains are once-through machines. They take input from the senses, process it in conjunction with memories, and turn that into actions, and perhaps new memories. Their brains have lots of special-purpose optimizations for many things, and a surprising amount can be achieved like this. The brains are once-through largely because that's the fastest approach, and speed is important for many things. Human brains are still mostly once-through.
But we humans have one extra trick, which is to do with self-awareness. We can to an extent sense the output of our brains, and that output then becomes new input. This in turn leads to new output which can become input again. This apparently simple capability - forming a loop - is all that's needed to form a Turing-complete machine out of the specialized animal brain.
Without such a loop, an animal may know many things, but it will not know that it knows them. Because it isn't able to sense explicitly what it was just thinking about, it can't then start off a new thought based on the contents of the previous one.
The divide isn't absolute, I'm sure - I believe essentially all mammals have quite a bit of self-awareness, but only in humans does that facility seem to be good enough to allow the development of a chain of thought. And that small difference makes all the difference in the world.
Evolution, as an algorithm, is very much better as an optimizer of an existing design than it is as a creator of a new design. Optimizing the size of the brain of a creature is, for evolution, an easy problem. Making a better, more efficient brain is a much harder problem, and happens slowly, comparatively speaking.
The optimization problem is essentially a kind of budgeting problem. If I have a budget of X calories per day to allocate, I can spend a given chunk of it on some extra kilos of muscle, or on some extra grams of brain tissue. Either way it costs me the same number of calories, and each choice brings its own advantages. Since evolution is good at this kind of problem, we can expect that it will correctly find the point of tradeoff - the point where the rate of gain of advantage for additional expenditure on ANY organ in the body is exactly the same.
Putting it differently, a cow design could trade larger brain for smaller muscles, or larger muscles for smaller brain. The actual cow is found at the point where those tradeoffs are pretty much balanced.
A whale has a large brain, but it's quite small in comparison to the whale as a whole. If a whale were to double the size of its brain, it wouldn't make a huge dent in the overall calorie budget. However, evolution's balance of the whale body suggests that it wouldn't be worth it. Making a whale brain that much bigger wouldn't make the whale sufficiently better for it to cost in.
Where this argument basically leads is to turn the conventional wisdom on its head. People say that modern brains are better because they are bigger. However, the argument that evolution can balance the size of body structures efficiently and quickly leads to the opposite conclusion: modern brains are bigger because they are better. Because modern brains are better than they used to be - because evolution has managed to create better brains - it becomes more worthwhile making them bigger. Because brains are better, adding more brain gives you a bigger benefit, so the tradeoff point moves towards larger brain sizes.
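A toy numerical sketch of that tradeoff (my own made-up returns curves, purely illustrative): with diminishing returns on both tissues, the best split of a fixed calorie budget sits where the marginal benefit per calorie is equal on both sides, and making brain tissue more effective per calorie moves that optimum toward a bigger brain.

```python
import math

BUDGET = 100.0   # calories per day available to allocate (hypothetical)

def fitness(brain_cal, muscle_cal, brain_quality):
    # Hypothetical concave (diminishing) returns for each tissue type.
    return brain_quality * math.sqrt(brain_cal) + 2.0 * math.sqrt(muscle_cal)

def best_brain_spend(brain_quality):
    # Brute-force the split of the budget that maximises total fitness.
    return max(range(1, 100), key=lambda b: fitness(b, BUDGET - b, brain_quality))

# A "better" brain (more benefit per calorie) shifts the optimum toward
# spending more of the same budget on brain: bigger because better.
for quality in [1.0, 2.0, 4.0]:
    print(quality, best_brain_spend(quality))   # 20, then 50, then 80 calories
```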
Dinosaur brains were very much smaller, on the whole, than the brains of similar animals today. We can infer from this argument that this was because their brains were less effective, which in turn lowered any advantage that might have been gained from making the brain larger. Consequently, dinosaurs must have been even more stupid than the small size of their brains suggests.
Although there is a nutritional argument for bigger brains in humans - the taming of fire allowed for much more efficient food usage - perhaps there is also some sense in which the human brain has recently become better, which in turn led it to become larger. Speculative, perhaps. But on the larger scale, looking at the sweeping increase in brain sizes across the whole of the geological record, the qualitative improvement in brains has to be seen in the gradual increase in size.
I think the interesting question is why we care for our future selves at all.
As kids, we tend not to. It's almost standard that if a child has a holiday, and a bit of homework to do during that holiday, they will decide not to do the work at the beginning of the break. The reason is that they care about their current self, and not about their future self. Of course in due time the future becomes the present, and that same child has to spend the entire time at the end of their holiday working furiously on everything that's been left to the last minute. At that point, they wish that their past self had chosen an alternative plan. This is still not really wisdom, as they don't much care about their past self either - they care about their present self, who now has to do the homework.
Summarising - if your utility function changes over time, then you will, as you mentioned, have conflict between your current and future self. This prevents your plans for the future from being stable - a plan that maximises utility when considered at one point no longer maximises it when considered again later. You cannot plan properly - and this undermines the very point of planning. (You may plan to diet tomorrow, but when tomorrow comes, dieting no longer seems the right answer....)
I think this is why the long view becomes the rational view - if you weight future benefits equally to your present ones, assuming (as you should) that your reward function is stable, then a plan you make now will still be valid in the future.
In fact the mathematical form that works is any kind of exponential - it's OK to have the past be more important than the future, or the future more important than the past as long as this happens as an exponential function of time. Then as you pass through time, the actual sizes of the allocated rewards change, but the relative sizes remain the same, and planning should be stable. In practice an exponential rise pushes all the importance of reward far out into the indefinite future, and is useless for planning. Exponential decays push all the important rewards into your past, but since you can't actually change that, it's almost workable. But the effect of it is that you plan to maximise your immediate reward to the neglect of the future, and since when you reach the future you don't actually think that it was worthwhile that your past self enjoyed these benefits at the expense of your present self, this doesn't really work either as a means of having coherent plans.
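A small sketch of that stability claim (my own toy numbers): with weights of the form beta^t, re-evaluating the same plans from a later point in time rescales every reward by the same factor, so the ranking between plans never flips.

```python
def plan_value(rewards, beta, now=0):
    """Discounted value of {time: reward}, judged from time `now`."""
    return sum(r * beta ** (t - now) for t, r in rewards.items())

plan_a = {1: 10}    # a smaller reward, sooner
plan_b = {5: 12}    # a larger reward, later
beta = 0.9          # exponential weighting: each step into the future counts 0.9x

for now in [0, 1, 3]:
    prefers_a = plan_value(plan_a, beta, now) > plan_value(plan_b, beta, now)
    print(now, prefers_a)   # True at every vantage point: the preference is stable
```

With a non-exponential weighting (hyperbolic, say) the same comparison can flip as `now` advances, which is exactly the planning instability described above.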
That leaves the flat case. But this is a learned fact, not an instinctive one.
I agree - I think this is because Eliezer's intent is to explain what he believes to be right, rather than to worry too much about the arguments of those he doesn't agree with. An approach I entirely agree with - my experience is that debate is remarkably ineffective as a means of reaching new knowledge, whilst teaching the particular viewpoint you hold is normally much more enlightening to the listener, whether they agree with the viewpoint or not.
I think it is a mistake to tie the question of what reality is to the particulars of the physics of our actual universe. These questions are about what it is to have an external reality, and the answers to them should be the same whether the question is asked by us in our current universe, or by some other hapless inhabitants of a universe bearing a distinct resemblance to Minecraft.
I can imagine types of existence which don't include cause and effect - geometrical patterns are an example - there are relationships, but they are not cause and effect relationships - they are purely spatial relations. I can imagine living in a universe where part of its structure was purely such spatial relationships, and not a matter of cause and effect.
It's meaningful and false, rather than meaningless, to say that on March 22nd, 2003, the particles in the center of the Sun spontaneously arranged themselves into a short-lived chocolate cake. This statement's truth or falsity has no consequences we'll ever be able to test experientially. Nonetheless, it legitimately describes a way reality could be, but isn't; the atoms in our universe could've been arranged like that on March 22nd 2003, but they weren't.
I actually think this is a confusing statement. From a thermodynamic perspective, it's not impossible that the particles in the center of the Sun spontaneously arranged themselves into a short-lived chocolate cake on that day. It's very, very, extremely unlikely, but not actually completely impossible.
The extreme unlikelihood (roughly equal to me temporarily becoming a chocolate cake myself) is such that we are justified, in terms of the approximation that is plain English, in saying that it is impossible that such a thing occurred, and that it is just wrong to claim that it happened. But this is using the usual rule of thumb that absolute truth and falsity isn't something we can actually have, so we happily settle for saying something is true or false when we're merely extremely sure rather than in possession of absolute proof.
It's quite OK in that context to claim that it's meaningless and false to claim that the chocolate cake appeared, as the claimant has no good reason to make the claim, and saying the claim is false is pointing out the lack of that reason. The bit I don't agree with is your final sentence.
Nonetheless, it legitimately describes a way reality could be, but isn't; the atoms in our universe could've been arranged like that on March 22nd 2003, but they weren't.
Here's where it gets confusing. If you are speaking in colloquial English, it's true to say that it's impossible that a chocolate cake could appear in the middle of the Sun, and therefore it didn't happen. If you're speaking more scientifically, it's instead true to say that it's possible that the atoms in the Sun's core could spontaneously form a chocolate cake, but the likelihood is of the order of 10^10^23 (or something like that) against, which clearly is sufficiently close to impossible for us to say informally that it didn't happen. As the sentence stands, you end up making a claim of knowledge which you don't have - that it was possible that a certain state of affairs could occur in the Sun, but that you know somehow that it didn't.
Tech also seems quite vulnerable to monocultures. Think of file formats, for example. In the early days there are often several formats, but after a while most of them go extinct and the survivors end up being universally used. Image display formats, for example, fall largely into two categories - formats that every computer knows how to display, and formats that hardly anybody uses at all. (Image editing formats are different, I know.) How many word processors have you used recently that can't support .doc format?
The most likely scenario is that there will be only one center of intelligence, and that although the intelligence isn't really there yet, the center is. You're using it now.
It surely depends on one's estimate of the numbers. It seems worthwhile doing something about possible asteroid impacts, for example.
If anyone accepts a pascals mugging style trade off with full knowledge of the problem,
Well, it's very well known that Pascal himself accepted it, and I'm sure there are others. So, off you go and do whatever it is you wanted to do.
To be honest, your ability to come through on this threat is a classic example of the genre - it's very, very unlikely that you are able to do it, but obviously the consequences if you were able to would be, er, quite bad. In this case my judgement of the probabilities is that we are completely justified in ignoring the threat.
Actually human Godel sentences are quite easy to construct.
For example, I can't prove that I'm not an idiot.
If I'm not an idiot, then I can perhaps make an argument that I'm not an idiot that seems reasonable to me, and that may persuade you that I'm not an idiot.
However, if I am an idiot, then I can still perhaps make an argument that I'm not an idiot that seems reasonable to me.
Therefore any argument that I might make on whether I'm an idiot or not does not determine which of the two above states is the case. Whether I'm an idiot or not is therefore unprovable under my system.
You can't even help me. You might choose to inform me that I am / am not an idiot. I still have to decide whether you are a reasonable authority to decide the matter, and that question runs into the same problem - if I decide that you are, I may have decided so as an idiot, and therefore still have no definitive answer.
You cannot win, you can only say "I am what I am" and forget about it.
Although I wouldn't think of this particular thing as being an invention on his part - I'm not sure I've read that particular chain of thought before, but all the elements of the chain are things I've known for years.
However I think it illustrates the strength of Eliezer's writing well. It's a perfectly legitimate sequence of thought steps that leads in a clear and obvious way to the right answer. It's not new thought, but a particularly clear way of expressing something that many people have thought and written about in a less organised way.
To clarify - there are times when Eliezer is inventive - for example his work on CEV - but this isn't one of those places. I know I'm partly arguing about the meaning of "inventive", but I don't think we're doing him a favor here by claiming this is an example of his inventiveness when there are much better candidates.
It's easy to overcome that simply by being a bit more precise - you are saying that such and such a proposition is true in geometry X. Meaning that the axioms of geometry X genuinely do imply the proposition. That this proposition may not be true in geometry Y has nothing to do with it.
It is a different sense of true in that it isn't necessarily related to sensory experience - only to the interrelationships of ideas.
I agree - atoms and so forth are what our universe happens to consist of. But I can't see why that's relevant to the question of what truth is at all - I'd say that the definition of truth and how to determine it are not a function of the physics of the universe one happens to inhabit. Adding physics into the mix tends therefore to distract from the main thrust of the argument - making me think about two complex things instead of just one.
Of course the limited amount of knowledge available to the primitive tribe doesn't rule out the existence of George, but neither does it do much to justify the theory of George. What they know is that the ground shook, but they have no reasonable explanation of why.
There are, for them, many possible explanations they could dream up to explain the shaking. Preferring any one above the others without a reason to do so is a mistake.
At their postulated level of sophistication, I don't think they can do much better than "The Earth shook. It does that sometimes." Adding the bit about George and so forth is just unnecessarily multiplying entities, as Ockham might say.
People usually are not mistaken about what they themselves believe - though there are certain exceptions to this rule - yet nonetheless, the map of the map is usually accurate, i.e., people are usually right about the question of what they believe:
I'm not at all sure about this part - although I don't think it matters much to your overall case. I think one of our senses is a very much simplified representation of our own internal thought state. It's only just about good enough for us to make a chain of thought - taking the substance of a finished thought and using it as input to the next thought. In animals, I suspect this sense isn't good enough to allow thought chains to be made - and so they can't make arguments. In humans it is good enough, but probably not by very much - it seems rather likely that the ability to make thought chains evolved quite recently.
I think we probably make mistakes about what we think we think all the time - but there is usually nobody who can correct us.
They are truisms - in principle they are statements that are entirely redundant as one could in principle work out the truth of them without being told anything. However, principle and practice are rather different here - just because we could in principle reinvent mathematics from scratch doesn't mean that in practice we could. Consequently these beliefs are presented to us as external information rather than as the inevitable truisms they actually are.
Maps are models of the territory. And the usefulness of them is often that they make predictions about parts of the territory I haven't actually seen yet, and may have trouble getting to at all. The Sun will come up in the morning. There isn't a leprachaun colony living a mile beneath my house. There aren't any parts of the moon that are made of cheese.
I have no problem saying that these things are true, but they are in fact extrapolations of my current map into areas which I haven't seen and may never see. These statements don't meaningfully stand alone, they arise out of extrapolating a map that checks out in all sorts of other locations which I can check. One can then have meaningful certainty about the zones that haven't yet been seen.
How does one extrapolate a map? In principle I'd say that you should find the most compressible form - the form that describes the territory without adding extra 'information' that I've assumed from someplace else. The compressed form then leads to predictions over and above the bald facts that go into it.
The map should match the territory in the places you can check. When I then make statements that something is "true", I'm making assertions about what the world is like, based on my map. As far as English is concerned, I don't need absolute certainty to say something is true, merely reasonable likelihood.
Hence the photon. The most compressible form of our description of the universe is that the parts of space that are just beyond visibility aren't inherently different from the parts we can see. So the photon doesn't blink out over there, because we don't see any such blinking out over here.
To summarise the argument further.
"A lot of people talk rubbish about AI. Therefore most existing predictions are not very certain."
That doesn't in itself mean that it's hard to predict AI - merely that there are many existing predictions which aren't that good. Whether we could do better if we (to take the given example) used the scientific method isn't something the argument covers.
Thanks - I've amended the final paragraph to change 'view' to 'outcome' throughout - hope it helps.
This whole post seems to be a conjecture about what quantum mechanics really means.
What we know about quantum mechanics is summed up in the equations. Interpretations of quantum mechanics aren't arguing about the equations, or the predictions of the equations. They are arguing about what it means that these equations give these predictions.
The important thing here is to understand what exactly these interpretations of quantum mechanics are talking about. They aren't talking about the scientific predictions, as all the interpretations are of the same equations, and necessarily predict the same behaviour. By the same token they aren't talking about anything we might see in the universe, as all the various interpretations predict the same observations.
Now sometimes people do propose new theories about the quantum world that lead to different predictions. These aren't interpretations of quantum mechanics, they are new theories. Interpretations are attempts to talk about the current standard theory in the most helpful way.
As far as I can tell, creators of interpretations are looking at the elephant which is quantum mechanics, and discussing whether all angles from which to observe the elephant are equally good, whether some are better than others, or whether the view we can actually see ourselves is the only one that truly exists.
Now it is useful to try and find new ways of looking at the elephant, as maybe some views are better than others, and someday we might have data that moves us to a new theory where viewpoints that seem equally good now are shown not to be. But right now there isn't any such information, and so we can't really say that one view is better than another. Saying that one answer is better than another, in the absence of relevant information, doesn't seem helpful.
That's the basis on which we prefer many worlds (all outcomes allowed by the equations exist) to collapse (there is only the outcome I can see). It's part of the general principle of not making up complicated explanations on matters where evidence is lacking.
The industrial revolution had some very tightly coupled advances. The key advance was making iron with coal rather than with charcoal. This reduced the price, and a large increase in the quantity manufactured followed. One of the immediate triggers was that England was getting rather short of wood, and the use of coal as a substitute started for iron-making and heating.
The breakthrough in steelmaking was initially luck - some very low-sulphur coal was found and used in steelmaking. But luck often arises out of greater quantities of usage, and perhaps that was the key here. It certainly wasn't science in the modern sense, as the chemistry of what was going on wasn't really understood - certainly not by the practitioners of the time. Trial and error was therefore the key, and a greater quantity of manufacture leads to more trials.
The model I have of human progress is this. Intelligence is not the limiting factor. Things are invented quite soon after they become possible and worthwhile.
So, let's take the steam engine. Although the principle of the steam turbine was known to the Greeks, actual steam engines only became commercially viable from the time of Newcomen's atmospheric engine. Why not earlier?
Well, there is an existing technology to displace, first of all, which is a couple of unfortunate animals walking in a circle driving an axle. This is far more fuel efficient than the steam engine, so it persists until coal mining comes along and provides fuel of a kind that can't be fed to an animal. Of course coal mining exists for a long time before the industrial revolution, but as long as fodder is cheap enough compared to coal, the animals continue to win.
There is also a materials problem with steam. Wood is a terrible material for making steam engines, yet it's cheap enough that it is used rather a lot in early models. Iron at the time is expensive, and of such terrible quality that it is brittle and quite unsafe as a means of making pressure vessels. There was no good way of making pressure vessels at all until the industrial revolution, along with good sliding seals, and initially there is great reluctance to use even moderate pressures due to the problems. The pressure to solve that set of problems was already faced by makers of guns. Essentially it was improving metallurgy that allowed the higher pressures that permitted the higher efficiencies that made steam engines more than a curiosity. Steam engines essentially couldn't be invented earlier because the iron was too expensive and didn't hold pressure well enough to allow the boiler to be made. As it was, burst boilers were not uncommon. Along with this there is a general problem of making anything with reasonable precision so that parts fit together in a reasonably airtight way.
Ditto the motor car. A motor car differs from a steam engine only in that it doesn't run on a track - which isn't really even an inventive step. So why didn't people take steam engines off their tracks and run them on the roads? Well, steam engines were very heavy, and the roads were very bad. The idea is simple enough, but doing it with 1820s technology isn't really possible. Better metallurgy and better-quality machining later allowed light, high-speed engines that finally allowed the horse to be displaced.
It's an interesting counterfactual test. Pick any invention you like, and look into why it wasn't done twenty years earlier. Usually the answer is that it couldn't be done economically back then. The situation is usually more like the one with colour flat-screen displays. They had been considered a good idea for decades, but only actually became possible in the 1990s. If you look at the details of how it was done in the 1990s, you discover techniques that weren't possible in the 1980s.
It's these changes in the surrounding technology that seem to me to govern progress, and these changes happen at a rate governed as much by economics as by anything else.
Also the argument applies equally well to lots of non-intellectual tasks where a cheap human could well be a replacement for an expensive machine.
I haven't put my finger on it exactly, but I am somewhat concerned that this post is leading us to argue about the meanings of words, whilst thinking that we are doing something else.
What can we really say about the world? What we ought to be doing is almost mathematically defined now. We have observations of various kinds, Bayes' theorem, and our prior. The prior ought really to start off as a description of our state of initial ignorance, and Bayes' theorem describes exactly how that initial state of ignorance should be updated as we see further observations.
Being ordinary human beings, we follow this recipe badly. We find collecting far too much data easier than extracting the last drop of meaning from each bit, so we tend to do that. We also need to use our observations to predict the future, which we ought to do by extrapolating what we have in the most probable way.
Having done this, we have discovered that there's an amazing and notable contrast between the enormous volume of data we have collected about the universe, and the comparatively tiny set of rules which appear to summarise it.
There is quite a lot of discussion about the difference between fundamental and non-fundamental laws. This is rather like arguing in arithmetic about whether addition or multiplication is more fundamental - who cares? The notable factor is that the overall system is highly compressible, and part of that compression process allows you to omit any explicit statement of many aspects of the system. The rules that are still fairly explicitly stated in the compressed version tend to be considered the 'fundamental' ones, and the ones that get left out, non-fundamental.
You are of course right in saying that the universe often kind of simulates other rule systems within its fundamental one.
But I am suspicious that beyond that, this article is about words, not the nature of reality.
Your other option is to sell the box to the highest bidder. That will probably be someone who's prepared to wait longer than you, and will therefore be able to give you a higher price than the utilons you'd have got out of the box yourself. You get the utilons today.
My top 2....
Looking at unlikely happenings more sensibly. Remembering that whenever something really unlikely happens to you, it's not a sign from the heavens. I must remember to take into account the number of other unlikely things that might have happened instead that I would also have noticed, and the number of things that happen in a typical time. In a city of a million people, meeting a particular person might seem like a one in a million chance. But if I know a thousand people in the city, and walk past a thousand people in an hour, the chance of bumping into one of my friends is pretty good.
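To put rough numbers on that example (the same figures as above, plus my own simplifying assumption that each passer-by is an independent draw from the city's population):

```python
p_friend = 1_000 / 1_000_000                  # chance any one passer-by is a friend
p_meet_one = 1 - (1 - p_friend) ** 1_000      # walk past 1,000 people in an hour
print(p_meet_one)                             # ~0.63 - "pretty good", not a miracle
```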
The other one? We're all too optimistic about our own abilities. Most of the time that's pretty benign, but it's a good thing to remember when considering employing yourself as a stock picker, gambling advisor, or automobile driver. We're actually much more average than we think, most of the time.
This is a relatively common psychological problem. It's a common reaction to stress. You need to take it seriously, though, because for some people it can be a crippling, disabling thing. But there is stuff you can do.
First of all, acknowledge the truth of what your fear is saying - sudden catastrophe could happen without warning. But the flip side is that, worldwide, the vast majority of deaths don't come from sudden catastrophe. You should fear eating more than you fear such a catastrophe - in terms of the real risk. It's fear, but not reasonable fear, as it's not likely enough to happen to be worth worrying about. Particularly given that you can't do much to avoid most such disasters in the first place!
Secondly, it's OK to be irrationally afraid of something. All of us do it sometimes. What's not OK is to let an irrational fear take away your right to do something. Right where you are now, you've probably lost some territories to the fear, and you need to identify some losses, and start taking them back. Choose some target territories and go. You will get used to being in the territory again, and you will progressively lose your fear of it each time you go. Don't tackle everything at once, but start identifying territories and taking them back one by one. And don't accept losing any others. Expect this to take some time to work through.
That's all that CBT generally is - keep exposing yourself to circumstances that you're irrationally afraid of until you learn from experience that actually nothing terrible happens. Start with something fairly easy, and make it really easy by learning it's OK. Then move onto something a bit harder that just got easier because of your first victory, and do that until it's no problem.
Rinse and repeat until your fears are all reasonable ones. Which may never happen. But you'll get nearly all the territory back.
Make sure you talk to someone about this even if it isn't a therapist. But I think a therapist might be good, as might something like Prozac - although on that matter you can't simply base your view on a blog opinion.....
Finally, mental problems are essentially normal - all of our minds are capable of getting a bit weird, and it's the responsibility of your rational brain to learn what sometimes goes awry, spot it, and nudge you back in the right direction. You really can win this one yourself.
Well you could go for something much more subtle, like using sugar of the opposite handedness on the other 'Earth'. I don't think it really changes the argument much whether the distinction is subtle or not.
It depends on your thought experiment - mathematics can be categorised as a form of thought experimentation, and it's generally helpful.
Thought experiments show you the consequences of your starting axioms. If your axioms are vague, or slightly wrong in some way, you can end up with completely ridiculous conclusions. If you are in a position to recognise that the result is ridiculous, this can help. It can help you to understand what your ideas mean.
On the other hand, it sometimes still isn't that helpful. For example, one might argue that an object can't move whilst being in the place where it is. And an object can't move whilst being in a place where it is not. Therefore an object can't move at all. I can see the conclusion's a little suspect, but working out why isn't quite as easy. (The answer is infinitesimals / derivatives, we now know). But if the silly conclusion wasn't about a subject where I could readily observe the actual behaviour, I might well accept the conclusion mistakenly.
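For completeness, the resolution alluded to there is just the standard definition of instantaneous velocity as a limit (textbook calculus, nothing specific to this comment): v(t) = lim as Δt → 0 of [x(t + Δt) − x(t)] / Δt, so an object 'in the place where it is' at a single instant can still have a nonzero rate of change of position.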
Logic can distill all the error in a subtle mistake in your assumptions into a completely outrageous error at the end. Sometimes that property can be helpful, sometimes not.
Here's what I tend to do.
On my first draft of something significant, I don't even worry about style - I concentrate on getting my actual content down on paper in some kind of sensible form. I don't worry about the style because I have more than enough problems getting the content right.
In this first draft, I think about structure. What ONE thing am I trying to say? What are the 2-5 sub-points of that one thing? Do these sub-points have any sub-points? Make a tree structure, and if you can't identify the trunk, go away until you can.
Then I go back and fix it. Because the content is now in roughly the right place, the second run-through is much easier. But normally that helpful first draft is full of areas where the logical flow can be improved, and the English can be tightened up. I think you're missing this stage out entirely, as I can find plenty to do when looking at your post. Here's what five minutes of such attention does to your first para.
"When I was 12 I started an email correspondence with a cousin, and we joked and talked about the things going on in our lives. This went on for years. One day, several years in, I read through the archives. It saturated my mind with the details of my life back then. I had the surreal feeling of having traveled back in time - almost becoming again the person I was years ago, with all my old feelings, hopes and concerns."
Keep at it - there's plenty enough there for the polishing to be worthwhile.
As a purely practical measure, for really important occasions, I'll often plan in an activity at second-to-last which is actually unimportant and can be dropped. So, for example, if I have a job interview, my plan will be that, after I've found the entrance to the company office and there is as little left to go wrong as possible, I'll then, as a second-to-last activity, do something like go for a relaxed lunch at a nearby cafe, and then just stroll in at the ideal time.
On the day everything goes to pot, I can use up the time I planned for the second-to-last activity. So I should hopefully still have time to go back for the forgotten briefcase, hire a taxi from my broken-down car, replace my torn shirt, and still get to the interview hungry rather than two hours late.
This is a good plan for anything with a hard start time - weddings, theatre trips, plane trips - add a pleasant activity that you can delete if needed at second-to-last. Of course the length of this optional activity should cover as many standard deviations of delay as the importance of the occasion demands. The result is that you arrive on time and de-stressed (almost) every time.
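As a rough worked example of that sizing rule (the numbers are mine, not the comment's): if the journey typically takes 60 minutes with a standard deviation of about 15 minutes, and a job interview feels worth roughly three-sigma protection, the droppable activity should absorb about 3 x 15 = 45 minutes - so plan to arrive 45 minutes early and spend the buffer in the cafe.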
One thing that goes along with this is the idea that the possible courses of action in any given situation can be sorted according to moral desirability. Of course in practice people differ about the exact ordering. But I've never heard anyone claim that in the moral sphere, B > A, C > B and simultaneously A > C. If, in a moral scheme, you always find that A > B and B > C together imply A > C, then you ought to be able to map it to a utility function, as in the sketch below.
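A small sketch of why the no-cycles condition is the crux (my own toy code and option labels): with a preference cycle, no assignment of numbers can respect every strict preference, whereas a transitive ordering always has one.

```python
from itertools import permutations

prefs_cyclic = [("B", "A"), ("C", "B"), ("A", "C")]        # "first beats second"
prefs_transitive = [("B", "A"), ("C", "B"), ("C", "A")]

def representable(prefs):
    options = sorted({x for pair in prefs for x in pair})
    # Try every assignment of distinct ranks as a candidate utility function.
    for ranks in permutations(range(len(options))):
        u = dict(zip(options, ranks))
        if all(u[a] > u[b] for a, b in prefs):
            return True
    return False

print(representable(prefs_cyclic))       # False: no utility function fits a cycle
print(representable(prefs_transitive))   # True: a transitive ordering maps to one
```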
The only thing I'd add is that this doesn't map onto a materialist consequentialism. If you were part of the crew of a spacecraft unavoidably crashing into the Sun, with no power left and no communications - is there still a moral way to behave - when nothing you do will show in the material world in an hour or so? Many moral theories would hold so, but there isn't a material consequence as such...
This seems very similar to the experiment where black people were shown to do worse on intelligence tests after being reminded that they were black.
So this experiment (in my view) doesn't really help to answer whether analytical thinking reduces religious belief. What it does show is that a lot of people make that association, and that is more than enough to cause the priming effect.
It's the process of changing your mind about something when new evidence comes your way.
The different jargon acts as a reminder that the process ought not to be an arbitrary one, but (well, in an ideal world anyway) should follow the evidence in the way defined by Bayes' theorem.
I don't think there's any particular definition of what constitutes belief, opinion, and cognitive structure. It's all just beliefs, although some of it might then be practised habit.
I think there are some confusions here about the mind's eye, and the way the visual cortex works.
First of all, I suggest you do the selective attention test. The "Selective attention test" video will do.
This video illustrates the difference between looking at a scene and actually seeing it. Do pay attention closely or you might miss something important!
The bottom line is that when you look at the outside world, thinking that you see it, your brain is converting the external world of light images into an internal coding of that image. It cheats, royally, when it tells you that you're seeing the world as it is. You're perceiving a coded version of it, and that code is actually optimised for usefulness and the ability to respond quickly. It's not optimised for completeness - it's more about enabling you to pay attention to one main thing, and ignore everything else in your visual field that doesn't currently matter.
And that's where your comparisons later fall down. The computers rendering Avatar have to create images of the fictional world. Your own internal mind's eye doesn't have to do that - it only has to generate codes that stand for visual scenes. Where Avatar's computers had to render the dragon pixel by pixel, your internal eye only has to create a suitable symbol standing for "Dragon in visual field, dead centre." It doesn't bother to create nearly all of the rest of your imagined world in the same way as some people ignore the important thing in the attention test. Because you only EVER see the coded versions of the world, the two look the same to you. But it is a much cheaper operation as it's working on small groups of codes, not millions of pixels.
The human brain is a very nice machine - but I also suspect it's not as fast as many people think it is. Time will tell.
I am a programmer, and have been for about 20 years or so. My impressions here...
Diagrams and visual models of programs have typically been disappointing. Diagrams based on basic examples always look neat, tidy, intuitive and useful. When you scale up to a real example, the diagram often looks like the inside of a box of wires - lines going in all directions. Where the simple diagram showed simple lines joining boxes together, the complex one has the same problem as the wiring box - you have 40 different 'ends' of lines, and it's a tedious job to pair them all up and figure out what goes where.
Text actually ends up winning when you scale up to real problems. I tend to find diagrams useful for describing various subsets of the problem at hand, but for describing algorithms, particularly, text wins.
The problem of programming is always that you need to model the state of the computer in your head, at various levels. You need to be able to 'run' parts of the program in your head to see what's going on and correct it. This is the essential aspect of programming as it currently is, and it's no use hoping some language will take this job away - it won't, and can't. The most a language can do is communicate the algorithm to you in an organised way. The job you mentioned - of modelling the flow of instructions in your head, and figuring out the interactions of distant pieces of code - that is the job of the programmer, and different presentation of the data won't take that job away - not until the day AI takes the whole job away. Good presentation can make the job easier - but until you understand the flow of the code and the relationships between the parts, you won't be able to solve programming problems.
On the other hand - I feel that the state of computer programming today is still almost primitive. Almost any program today will show you several different views of the data structure you're working on, to enable you to adjust it. The exception, by and large, is writing computer programs - particularly within a single function. There you're stuck with a single view in which you both read and write. I'm sure there is something more that can be done in this area.
I think one ought to think about reductionism slightly separately from the particulars of the universe we actually live in. I think of it as rather like the opposite of a curiosity-stopper - instead of assuming that everything is ontologically basic and doesn't have underlying parts, we should assume that there may be underlying parts, and go look for them. Of course in our own universe that approach has been exceptionally fruitful.
The other part that works well is Occam's razor - the simplest explanation of any data set is not only the most lightweight way of explaining the facts, it's also the optimum way of expressing your state of ignorance as well. The simplest explanation is also the most compact way of explaining what you do know. That again arises purely out of the nature of information, and would be true in any universe, not just the noticeably elegant one we actually live in.
On the other hand, there may well be a final set of underlying parts that experiment actually points to, and there's nothing wrong with that as long as it covers your experimental data.
The problem is that the utility isn't constant. If you, today, are indifferent to what happens on future Tuesdays, then you will also think it's a bad thing that your future self cares what happens on that Tuesday. You will therefore replace your current self with a different self that is indifferent to all future Tuesdays, including the ones it finds itself in, thus preserving the goal that you have today.