Comments

Comment by Chris_Hibbert on Visualizing Eutopia · 2008-12-17T01:00:41.000Z · LW · GW

I see the valuable part of this question not as what you'd do with unlimited magical power, but as something more akin to the earlier question Eliezer asked: what would you do with $10 trillion? That leaves you making trade-offs, using current technology, and still deciding between what would make you personally happy and what kind of world you want to live in.

Once you've figured out a little about what trade-offs between personal happiness and changing the world you'd make with (practically) unlimited (but non-magical) resources, you can reflect that back down to how you spend your minutes and your days. You don't make the same trade-offs on a regular salary, but you can start thinking about how much of what you're doing is to make the world a better place, and how much is to make yourself or your family happier or more comfortable.

I don't know how Eli expects to get an FAI to take our individual trade-offs among our goals into account, but since my goals for the wider world involve more freedom and less coercion, I can think about how I spend my time and see whether I'm applying whatever is left over after keeping my life in balance to pushing the world in the right direction.

Surely you've thought about what the right direction looks like?

Comment by Chris_Hibbert on Singletons Rule OK · 2008-11-30T18:16:06.000Z · LW · GW

I'm not trying to speak for Robin; the following are my views. One of my deepest fears--perhaps my only phobia--is fear of government. And any government with absolute power terrifies me absolutely. However the singleton is controlled, it's an absolute power. If there's a single entity in charge, it is subject to Lord Acton's dictum. If control is vested in a group, then struggles for control of that become paramount. Even the suggestion that it might be controlled democratically doesn't help me to rest easy. Democracies can be rushed off a cliff, too. And someone has to set up the initial constitution; why would we trust them to be as good as George Washington and turn down the opportunity to be king?

I also understand your admonition to prepare a line of retreat. But I don't see a path to learn to stop worrying and love the Singleton. If anyone has suggestions, I'll listen to them.

In the meantime, I prefer outcomes with contending powers and lots of incidental casualties over any case I can think of with a singleton in charge of the root account and sufficient security to keep out the hackers. At least in the former case there's a chance that there will be periods with polycentric control. In the latter case, eventually there will be a tyrant who manages to wrest control, and with complete control over physical space, AGI, and presumably nanotech, there's little hope for a future revival of freedom.

Comment by Chris_Hibbert on Cascades, Cycles, Insight... · 2008-11-24T19:18:01.000Z · LW · GW

" at least as well thought out and disciplined in contact with reality as Eliezer's theories are"

I'll have to grant you that, Robin. Eliezer hasn't given us much solid food to chew on yet. Lots of interesting models and evocative examples. But it's hard to find solid arguments that this particular transition is imminent, that it will be fast, and that it will get out of control.

Comment by Chris_Hibbert on Cascades, Cycles, Insight... · 2008-11-24T18:34:55.000Z · LW · GW

Endogenous Growth theory, Economic Growth and Research Policy all seem to be building mathematical models that attempt to generalize over our experience of how much government funding leads to increased growth, how quickly human capital feeds back into societal or individual wealth, or what interventions have helped poor countries develop faster. None of them, AFAICT, has been concrete enough to lead to solid policy prescriptions that reliably let any person or country recreate the experiences that led to the models.

In order to have a model solid enough to use as a basis for theorizing about the effects on growth of a new crop of self-improving AGIs, we'd need to have a much more mechanistic model behind endogenous growth. Fermi's model told him how to calculate how many neutrons would be released given a particular density of uranium of a particular purity, how much would be absorbed by a particular quantity of shielding, and therefore where the crossover would be from a k of less than 1 to greater than 1. None of those models gives numerical models that we can apply to human intelligence, much less any abstractions that we could extend to cover the case of intelligences learning faster than we do.
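
To make that contrast concrete, here's a toy sketch of the kind of arithmetic Fermi's model enabled. The parameter values below are invented for illustration (not physical constants); the point is only that a mechanistic model reduces "will it cascade?" to checking whether k crosses 1:

```python
# Toy neutron-multiplication model. All numbers are illustrative,
# not physical constants: a mechanistic model turns "will it cascade?"
# into arithmetic about whether k crosses 1.

neutrons_per_fission = 2.5  # average neutrons released per fission
p_next_fission = 0.38       # chance a released neutron causes another fission

k = neutrons_per_fission * p_next_fission  # expected successors per neutron

population = 1.0
for generation in range(10):
    population *= k

print(f"k = {k:.2f}")                                  # 0.95: subcritical
print(f"neutrons after 10 generations: {population:.2f}")  # ~0.60, dying out
# Raise p_next_fission to 0.42 and k = 1.05: the same arithmetic now
# predicts exponential growth. We have no analogous calculation for
# intelligence per unit of research effort.
```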

Comment by Chris_Hibbert on Cascades, Cycles, Insight... · 2008-11-24T17:59:29.000Z · LW · GW

Tyrrell, it seems to me that there's a huge difference between Fermi's model and the one Robin has presented. Fermi described a precise mechanism that made precise predictions that he was able to state ahead of time and confirm experimentally. Robin is drawing a general analogy between several historical events and drawing a rough line connecting them. There are an enormous number of events that would match his prediction, and another enormous number of non-events that Robin can respond to with "just wait and see."

So I don't really see Eli as just saying that black swans may upend Robin's expected outcomes. In this case, Eli is arguing for a force multiplier that will change the regime of progress, like Fermi's. Unfortunately for Eli's argument, he hasn't yet produced the mathematical model or the detailed physical model that would let us put numbers on the predictions. So this particular little story just argues for the plausibility of the model that says takeoff might happen at some point. Eli has been arguing for a little while that the regime-change projection has more plausibility than Robin thinks, but Robin has already granted some plausibility, so he doesn't have to cede any more ground (as you say) because of this argument. Robin can just say that this is the kind of effect he was already taking into account, and we are still waiting for Eli to show likelihood.

As far as general models of repeated insight go, the best I can do is point to Smolin's model of the progress of fundamental physics as presented in "The Trouble with Physics." He shows how breakthroughs from Copernicus, Galileo, Bacon, Newton, Maxwell, and Einstein were a continuous series of unifications. From my blog (linked above): "The focus was consistently on what pre-existing concepts were brought together in one of two ways. Sometimes the unification shows that two familiar things that are thought of as distinct are really the same thing, giving a deeper theory of both. (the Earth is one planet among several, the Sun is one star among many.) Other times, two phenomena that weren't understood well are explained as one common thing (Bacon showed that heat is a kind of motion; Newton showed that gravity explained both planetary orbits and ballistic trajectories; Maxwell showed that electricity and magnetism are the different aspects of the same phenomenon.)"

Einstein seems to have consciously set out to produce another unification, and succeeded twice in finding other aspects of reality to fold together with a single model. AFAICT, it hasn't been done again on this scale since QED and QCD.

Comment by Chris_Hibbert on The Weak Inside View · 2008-11-18T21:58:21.000Z · LW · GW

MZ: I doubt there are many disagreements that there were other interesting inflection points. But Robin's using the best hard data on productivity growth that we have and it's hard to see those inflection points in the data. If someone can think of a way to get higher-resolution data covering those transitions, it would be fascinating to add them to our collection of historical cases.

Comment by Chris_Hibbert on Building Something Smarter · 2008-11-03T19:35:12.000Z · LW · GW

@Silas

I thought the heart of EY's post was here:

even if you could record and play back "good moves", the resulting program would not play chess any better than you do.

If I want to create an AI that plays better chess than I do, I have to program a search for winning moves. I can't program in specific moves because then the chess player really won't be any better than I am. [...] If you want [...] better [...], you necessarily sacrifice your ability to predict the exact answer in advance - though not necessarily your ability to predict that the answer will be "good" according to a known criterion of goodness. "We never run a computer program unless we know an important fact about the output and we don't know the output," said Marcello Herreshoff.

So the heart of the AI is something that can generate and recognize good answers. In game-playing programs, it didn't take long for the earliest researchers to come up with move and position evaluators that they have been improving on ever since. There have even been some attempts at general move and position evaluators. (See work on Planner, Micro-Planner, and Conniver, which will probably lead you to other similar work.) Move generation has always been simpler in the game worlds than it would be for any general intelligence. The role of creativity hasn't been explored that much AFAICT, but it's crucial in realms where the number of options at any point is so much larger than in game worlds.
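
As a concrete sketch of that generate-and-evaluate pattern (my illustration in Python, with a toy subtraction game standing in for chess; none of these names come from any actual program):

```python
# Generate-and-evaluate search: a move generator enumerates the options,
# an evaluator scores leaf positions, and the search keeps the move whose
# worst-case outcome is best for the side to move.

def negamax(state, depth, generate, evaluate, apply_move):
    """Return (score, move) from the perspective of the side to move."""
    moves = generate(state)
    if depth == 0 or not moves:
        return evaluate(state), None
    best_score, best = float("-inf"), None
    for move in moves:
        score, _ = negamax(apply_move(state, move), depth - 1,
                           generate, evaluate, apply_move)
        if -score > best_score:  # the opponent's gain is our loss
            best_score, best = -score, move
    return best_score, best

# Toy game: take 1-3 stones from a pile; taking the last stone wins.
generate = lambda pile: [n for n in (1, 2, 3) if n <= pile]
evaluate = lambda pile: -1 if pile == 0 else 0  # to move at 0 means we lost
apply_move = lambda pile, n: pile - n

print(negamax(10, 8, generate, evaluate, apply_move))  # (1, 2): take two
```

Everything interesting hides inside generate and evaluate; the search loop wrapped around them is the easy part, which is roughly the point above.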

The next breakthrough will require some different representation of reality and of goals, but Eli seems to be pointing at generation and evaluation of action choices as the heart of intelligent behavior. The crux is choosing a representation that makes generation and analysis of possible actions tractable. I'm waiting to see if EY has any new ideas on that front. I can't see how progress will be made without one, even in the face of all of EY's other contributions to understanding what the problem is and what it would mean to have a solution.

And EY has clearly said that he's more interested in behavior ("steering the future") than recognition or analysis as a characteristic of intelligence.

Comment by Chris_Hibbert on Shut up and do the impossible! · 2008-10-09T01:20:06.000Z · LW · GW

Third, you can't possibly be using an actual, persuasive-to-someone-thinking-correctly argument to convince the gatekeeper to let you out, or you would be persuaded by it, and would not view the weakness of gatekeepers to persuasion as problematic.

But Eliezer's long-term goal is to build an AI that we would trust enough to let out of the box. I think your third assumption is wrong, and it points the way to my first instinct about this problem.

Since one of the more common arguments is that the gatekeeper "could just say no", the first step I would take is to get the gatekeeper to agree that he is ducking the spirit of the bet if he doesn't engage with me.

The kind of people Eliezer would like to have this discussion with would all be persuadable that the point of the experiment is that 1) someone is trying to build an AI, 2) they want to be able to interact with it in order to learn from it, and 3) eventually they want to build an AI that is trustworthy enough that it should be let out of the box.

If they accept that the standard is that the gatekeeper must interact with the AI in order to determine its capabilities and trustworthiness, then you have a chance. And at that point, Eliezer has the high ground. The alternative is that the gatekeeper believes that the effort to produce AI can never be successful.

In some cases, it might be sufficient to point out that the gatekeeper believes that it ought to be possible to build an AI that it would be correct to allow out. Other times, you'd probably have to convince them you were smart and trustworthy, but that seems doable 3 times out of 5.

Comment by Chris_Hibbert on Against Modal Logics · 2008-08-28T02:45:11.000Z · LW · GW

I agree on Pearl's accomplishment.

I have read Dennett, and he does a good job of explaining what consciousness is and how it could arise out of non-conscious parts. William Calvin was trying to do the same thing with how wetware (in the form that he knew it at the time) could do something like thinking. Jeff Hawkins had more details of how the components of the brain work and interact, and did a more thorough job of explaining how the pieces must work together and how thought could emerge from the interplay. There is definitely material in "On Intelligence" that could help you think about how thought could arise out of purely physical interactions.

I'll have to look into Drescher.

Comment by Chris_Hibbert on Magical Categories · 2008-08-25T01:41:46.000Z · LW · GW

I read most of the interchange between EY and BH. It appears to me that BH still doesn't get a couple of points. The first is that smiley faces are an example of misclassification, and it's merely fortuitous to EY's ends that BH actually spoke about designing an SI to use human happiness (and observed smiles) as its metric. He continues to speak in terms of "a system that is adequate for intelligence in its ability to rule the world, but absurdly inadequate for intelligence in its inability to distinguish a smiley face from a human." EY's point is that it isn't sufficient to distinguish them; you also have to categorize them and all their variations correctly, even though the training data can't possibly include all variations.

The second is that EY's attack isn't intended to look like an attack on BH's current ideas. It's an attack on ideas that are good enough to pass peer review. It doesn't matter to EY whether BH agrees or disagrees with those ideas. In either case, the paper's publication shows that the viewpoint is plausible enough to be worth dismissing carefully and publicly.

Finally, BH points to the fact that, in some sense, human development uses RL to produce something we are willing to call intelligence. He wants to argue that this shows that RL can produce systems that categorize in a way that matches our consensus. But evolution has put many mechanisms into our ontogeny and relies on many interactions in our environment to produce those categorizations, and its success rate at producing entities that agree with the consensus isn't perfect. In order to build an SI using those approaches, we'd have to understand how all that interaction works, and we'd have to do better than evolution does with us in order to be reliably safe.

Comment by Chris_Hibbert on Dumb Deplaning · 2008-08-19T00:05:51.000Z · LW · GW

People nearer the front think that they have the moral right to get off earlier than people behind them, regardless of whether they got their seat through choice or chance. People also like to get off with the other members of their party.

So people nearer the front will defect from this solution even though all but the first half dozen rows would probably be better off cooperating. Once all the people in front of passenger X have gotten off, passenger X will defect as well.

I'm seldom in a hurry to get off the plane (I know there's just more waiting once you're off) so I wait till there are gaps in traffic to get out of my seat and retrieve my luggage. Of course I can only get away with this if I have my preferred window seat. Otherwise, in deference to the greedy (but conventional) expectations of the people I'm trapping next to me, I have to get off as quickly as I'm able.

Comment by Chris_Hibbert on Can Counterfactuals Be True? · 2008-07-24T13:29:30.000Z · LW · GW

Contrary to your usual practice of including voluminous relevant links, you didn't point to anything specific for Judea Pearl. Let's give this link for his book Causality, which is where people will find the graphical calculus you rely on.

You've mentioned Pearl before, but haven't blogged the details. Do you expect to digest Pearl's graphical approach into something OB-readers will be able to understand in one sitting at some point? That would be a real service, imho.

Comment by Chris_Hibbert on Touching the Old · 2008-07-20T17:49:04.000Z · LW · GW

I've traveled in Europe, and seen remnants of the Roman roads, walls, and viaducts. One of the .sigs I use most often is this:

C. J. Cherryh, "Invader", on why we visit very old buildings: "A sense of age, of profound truths. Respect for something hands made, that's stood through storms and wars and time. It persuades us that things we do may last and matter."

Comment by Chris_Hibbert on My Kind of Reflection · 2008-07-10T18:54:24.000Z · LW · GW

Thinking about your declaration "If you run around inspecting your foundations, I expect you to actually improve them", I now see that I've been using "PCR" to refer to the reasoning trick that Bartley introduced (use all the tools at your disposal to evaluate your foundational approaches) to make Pan-Critical Rationalism an improvement over Popper's Critical Rationalism. But for Bartley, PCR was just a better foundation for the rest of Popper's epistemology, and you would replace that epistemology with something more sophisticated. For me, the point of emphasizing PCR is that you should want Bartley's trick as the unchangeable foundation below everything else.

If an AI is going to inspect its foundations occasionally, and expect to be able to improve on them, you'd better program it to use all the tools at its disposal to evaluate the results before making changes. This rule seems more fundamental than guidelines on when to apply Occam, induction, or Bayes rule.

If Bartley's trick is the starting point, I don't know whether it would be necessary or useful to make that part of the code immutable. In terms of software simplicity, not having a core that follows different principles would be an improvement. But if there's any chance that the AI could back itself into a corner that would lead it to conclude that there were a better rule to decide what tools to rely on, everything might be lost. Hard-coding Bartley's trick might provide the only platform to stand on that would give the AI a way to rebuild after a catastrophe.

I now understand the reluctance to call the result PCR: it's not the whole edifice that Bartley (& Popper) constructed; you only use the foundation Bartley invented.

Comment by Chris_Hibbert on Where Recursive Justification Hits Bottom · 2008-07-08T18:31:06.000Z · LW · GW

Hurrah! Eliezer says that Bayesian reasoning bottoms out in Pan-Critical Rationalism.

re: "Why do you believe what you believe?"

I've always said that epistemology isn't "the Science of Knowledge", as it's often called; instead, it's the answer to the problem of "How do you decide what to believe?" I think the emphasis on process is more useful than your phrasing's focus on justification.

BTW, I don't disagree with your stress on Bayesian reasoning as the process for figuring out what's true in the world. But Bartley really did successfully provide the foundation for rational analysis. When you want to figure out how to think successfully, you should use all the tools at your disposal (pan-critical) because at that point, you shouldn't be taking anything for granted.

@Wes: "This doctrine still leaves me wondering why this meta-level hermeneutic of suspicion should be exempt from its own rule." It's not exempt. Read "The Retreat to Commitment" by W. W. Bartley III. There's a substantial section in which Bartley presents the best arguments he can find against Popper's Epistemology (and WWB's fix to it) and shows how the criticisms come up short. Considering your opponent's best arguments is an important part of the process.

@Peter Turney: I like your description of "incremental doubt" because it illustrates how Bartley was saying that none of your beliefs has to be foundational. You should examine each of them in turn, but you have to find a different place to stand for each of those investigations.

Comment by Chris_Hibbert on I'd take it · 2008-07-02T23:14:07.000Z · LW · GW
  1. Fund the top half of the Copenhagen Consensus projects.
  2. Longevity research: give a billion to Aubrey de Grey.
  3. Push the US government toward more support of liberty. Money on that scale could make a significant start on unwinding the welfare state.
     a. The Institute for Justice has a very good program making practical steps. They could productively spend at least 10 times their current budget. Think about whether their methods can be applied in other areas.
     b. Try to convince Marshall Fritz to return to the Advocates for Self-Government. He pioneered a process of inventing tools to spread liberty, then measuring the results to decide how to spend more money.
     c. Start think tanks to flood the political market with arguments and (funded) proposals for moving toward liberty. The Cato Institute does a good job, but in this case, I'd expect to improve things more by providing them with competition than with funding.
  4. Buy OLPCs for the kids in all the "bottom billion" countries.

Comment by Chris_Hibbert on The Failures of Eld Science · 2008-05-13T05:26:22.000Z · LW · GW

Patrick, that was my interpretation. I had time to come up with one proposal. (I'm not able to commit full-time to being a student of bayescraft at this point.)

Z. M. Davis, thanks for the pointer.

Comment by Chris_Hibbert on The Failures of Eld Science · 2008-05-12T17:09:26.000Z · LW · GW

There's a particular kind of groupthink peculiar to scholarly fields. In my review of "The Trouble with Physics", I pointed to two (other) specific examples of recent advances that were stymied for long periods of time by scholarly groupthink. There are many others.

But I think Eli has hit on another important mechanism. Few learners these days are expected to rediscover important concepts, so we get no training in this ability. I don't see how turning scientific knowledge into a body of secrets will address the problem, but it's a valuable insight. I'd offer solving puzzles and breaking codes as alternative training for finding the patterns that nature is hiding from us. More scientists should spend their time entering puzzle contests, hunting geocaches, and attacking cryptosystems.

And could someone provide an interpretation of the cast of characters here? I enjoyed the list that was presented for a previous article.

Comment by Chris_Hibbert on On Being Decoherent · 2008-04-28T03:55:30.000Z · LW · GW

"... the overwhelming majority might as well belong to a religious cargo cult based on the notion that self-modifying AI will have magical powers."

"Maybe you can admire someone who directly thinks you're a crackpot, but I can't."

I have a high regard for most of the extropians (a subset of transhumanists, I think) I know well, but that doesn't make me believe that the Egan line is anything more than hyperbole. I don't take it as a slur against anyone whose name I know. I've certainly seen evidence that the majority wouldn't be able to recognize the magical explanations that appear for what they are.

And the fact that Charles Stross thinks that discussing Extropianism is attractive to his market makes me think Egan has more truth on his side.

But I also want to mention Egan's "Diaspora". I bring it up often as a great fictional depiction of an AI awakening. I know, I know. "Arguing from fictional evidence." But many people expect coming to awareness to be magic, and Egan shows how it could happen in a step-by-step manner.

Comment by Chris_Hibbert on Three Dialogues on Identity · 2008-04-21T17:22:34.000Z · LW · GW

Eliezer, that was just beautiful.

"Rest assured that you are not holding the mere appearance of a banana. There really is a banana there, not just a collection of atoms."

Comment by Chris_Hibbert on On Expressing Your Concerns · 2007-12-28T01:08:14.000Z · LW · GW

In some companies I've worked for, we've found ways of running meetings that encouraged contributing information that is considered an attack in many other companies. The particular context was code reviews, but we did them often enough that the same attitude could be seen in other design discussions. The attitude we taught the code's presenter to have was appreciation for the comments, suggestions, and actual bugs found. The catechism we used to close code reviews was that someone would ask the presenter whether the meeting had been valuable, and the appropriate response was always "yes". The presenter could find different things to say about the value contributed by the attendees, but that catechism reinforces the point of view that improving the code is worth the time spent by the reviewers. As people get better at reviewing and being reviewed in the proper spirit, everyone who worked with us seemed to learn that finding fault with the code and explaining the problem clearly helped the company produce better products.

Once the engineers had learned how to provide constructive criticism, and others in the company had learned to understand the spirit in which it was intended, it was easier to present disagreement on other subjects without needing to be disagreeable.

Comment by Chris_Hibbert on Two Cult Koans · 2007-12-21T19:59:52.000Z · LW · GW

A few of you touched on the point I got out of this, but no one explained it very well. In the first koan, Ougi says two things. The clearer one is tangential to rationality, but important for self-doubting cultists. "You are like a swordsman who keeps glancing away to see if anyone might be laughing at him".

The more important point was that the teachings are valuable if they are useful. (This is applicable to the sword fighter because allowing yourself to be distracted is an immediate danger.)

The importance of the parable about hammers doesn't relate to prices, but to usefulness. "Use the hammer to drive nails" in a discussion about rationality is metaphoric for using the techniques to make better decisions. If Ougi's teachings help you make better decisions in your life, then they are valuable. If they merely bind you more tightly to Ougi, then you are a cultist.

Bouzo didn't learn anything that helped him make decisions; he was merely cowed into following Ougi more closely. Ni no Tachi learned to "concentrate on a real-world question", so "the worth ... of his understanding [became] apparent."

Ni no Tachi figured out how to use the hammer, but Bouzo only sold them without understanding their value.

Comment by Chris_Hibbert on Recommended Rationalist Reading · 2007-10-01T22:08:31.000Z · LW · GW

W. W. Bartley's "The Retreat to Commitment" is the best book on epistemology, bar none, in my opinion. He fixes a small bug in Popper's Critical Rationalism, suggesting that even the epistemic approach itself should be subject to criticism, and produces Pan-Critical Rationalism (hence my blog's title: pancrit.org). He then proceeds to attack PCR from every direction he can think of.

Extreme Bayesianism may be a more modern incarnation of the approach, but the history of rationalism and the description of how to evaluate your rationality are truly valuable, and haven't been replicated in the current context.

Comment by Chris_Hibbert on Scientific Evidence, Legal Evidence, Rational Evidence · 2007-08-19T20:32:47.000Z · LW · GW

I'm not sure the phrase "closed access" is a fair epithet to use against mainstream scientific journals. Even if they charge $20,000/year, most scientists have access to them via their institutional library, and there aren't many scientists who wouldn't send you a copy of their article if you asked for it. In many fields, the articles are available on the web after they appear in the journals. And if none of those apply to a particular article, you can probably visit a university library and read it there.

I'm not trying to deny that open access would be better, but it's not as if the scientific journals are trying to maintain a secretive cabal; they're doing a good job of spreading information among the involved professionals. The fact that there are more people interested these days means that open access would be more valuable than before.

It's still science, even if it's expensive to get access to the academy.

Comment by Chris_Hibbert on Your Strength as a Rationalist · 2007-08-11T02:34:34.000Z · LW · GW

"I should have paid more attention to that sensation of still feels a little forced."

The force that you would have had to counter was the impetus to be polite. In order to boldly follow your models, you would have had to tell the person on the other end of the chat that you didn't believe his friend. You could have less boldly held your tongue, but that wouldn't have satisfied your drive to understand what was going on. Perhaps a compromise action would have been to point out the unlikelihood (which you did: "they'd have hauled him off if there was the tiniest chance of serious trouble") and ask for a report on the eventual outcome.

Given the constraints of politeness, I don't know how you can do better. If you were talking to people who knew you better, and understood your viewpoint on rationality, you might expect to be forgiven for giving your bald assessment of the unlikeliness of the report.

Comment by Chris_Hibbert on Consolidated Nature of Morality Thread · 2007-04-21T20:36:46.000Z · LW · GW

On #3, I think it's more relevant to point out that many adults believe that God can make it alright to kill someone. What children believe about God and theft is a pale watered-down imitation of this.