That is an excellent question.
I must agree with this, although video and most writing OTHER than short essays and polemics would be mostly novel, and interesting.
If every snide, unhelpful jokey reply you post is secretly a knowing reference to something only one other person in the world can recognize, I retract every bad thing I ever said about you.
Is then the ability to explicitly (at a high, abstract level) reach down to the initial hypothesis generation and include, raise, or add hypotheses for consideration always a pathology?
I can imagine a system where extremely low probability hypotheses, by virtue of their complexity or the special evidence required, might need to be formulated or added by high-level processes. But you could simply view that as another failure of the generation system, and require that even extremely rare or novel structures of hypotheses go through channels, to avoid this kind of disturbance of natural frequencies, as it were.
It may not be a completely generic bias or fallacy, but it certainly can affect more than just human decision processes. A number of primitive systems exhibit pathologies similar to what Eliezer is describing; speech recognition systems, for example, have a huge issue almost exactly isomorphic to this. Once some interpretation of an audio wave becomes a hypothesis, it is chosen in great excess of its real probability or confidence. This is the primary weakness of rule-based voice grammars: their pre-determined possible interpretations lead to unexpected inputs being slotted into the nearest pre-existing hypothesis, rather than prompting a novel interpretation. The use of statistical grammars to push interpretations back toward their 'natural' probabilistic initial weight is an attempt to avoid this issue.
This problem is also hidden in a great many AI decision systems within the 'hypothesis generation' system, or equivalent. However elegant the ranking and updating system, if your initial list of possibilities is weak, you distort your whole decision process.
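A toy sketch of what I mean, with made-up numbers: a ranker that can only normalize over the hypotheses its generator produced has no way to say "none of the above", so a poor-fitting candidate can still end up with a confident posterior.

```python
# Toy sketch with made-up numbers: the ranker can only normalize over
# the hypotheses the generation system produced, so a missing "true"
# interpretation forces its probability mass onto the nearest candidate.

def posterior(hypotheses, likelihood, prior):
    """Normalize prior * likelihood over a fixed hypothesis list."""
    scores = {h: prior[h] * likelihood[h] for h in hypotheses}
    total = sum(scores.values())
    return {h: s / total for h, s in scores.items()}

# Suppose the audio really said "recognize speech", but the grammar
# only generated these two candidates:
hypotheses = ["wreck a nice beach", "recognize peach"]
prior = {"wreck a nice beach": 0.5, "recognize peach": 0.5}
likelihood = {"wreck a nice beach": 0.2, "recognize peach": 0.1}

p = posterior(hypotheses, likelihood, prior)
# Both candidates fit the audio poorly, yet one wins with a confident
# posterior of 2/3, because normalization can't express "none of the above".
```

However elegant the updating step is, the distortion has already happened before it runs.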
The bloodstained sweater in the original song refers to an urban legend that Mr. Rogers was a Marine Sniper in real life.
Why on earth wouldn't I consider whether or not I would play again? Am I barred from doing so?
If I know that the card game will continue to be available, and that Omega can truly double my expected utility every draw, then one of two things is true. Either it's a relatively insignificant increase of expected utility over the next few minutes it takes me to die, which is a foolish bet compared to my expected utility over the decades I conservatively have left, or Omega can somehow change the whole world in the radical fashion needed for my expected utility over those few minutes to dwarf my expected utility right now.
This paradox seems to depend on the idea that the card game is somehow excepted from the 90% likely doubling of expected utility. As I mentioned before, my expected utility certainly includes the decisions I'm likely to make, and it's easy to see that continuing to draw cards will result in my death. So, it depends on what you mean. If it's just doubling expected utility over my expected life IF I don't die in the card game, then it's a foolish decision to draw the first or any number of cards. If it's doubling expected utility in all cases, then I draw cards until I die, happily forcing Omega to make verifiable changes to the universe and myself.
Now, there are terms at which I would take the one round, IF you don't die in the card game version of the gamble, but it would probably depend on how it's implemented. I don't have a way of accessing my utility function directly, and my ability to appreciate maximizing it is indirect at best. So I would be very concerned about the way Omega plans to double my expected utility, and how I'm meant to experience it.
In practice, of course, any possible doubt that it's not Omega giving you this gamble far outweighs any possibility of such lofty returns, but the thought experiment has some interesting complexities.
I see; I misparsed the terms of the argument. I thought it was doubling my current utilons; you're positing I have a 90% chance of doubling my currently expected utility over my entire life.
The reason I bring up the terms in my utility function is that they reference concrete objects, people, time passing, and so on. So, measuring expected utility, for me, involves projecting the course of the world, and my place in it.
So, assuming I follow the suggested course of action, and keep drawing cards until I die, to fulfill the terms, Omega must either give me all the utilons before I die, or somehow compress the things I value into something that can be achieved in between drawing cards as fast as I can. This either involves massive changes to reality, which I can verify instantly, or some sort of orthogonal life I get to lead while simultaneously drawing cards, so I guess that's fine.
Otherwise, given the certainty that I will die essentially immediately, I certainly don't recognize that I'm getting a 90% chance of doubled expected utility, as my expectations certainly include whether or not I will draw a card.
I seem to have missed some context for this, I understand that once you've gone down the road of drawing the cards, you have no decision-theoretic reason to stop, but why would I ever draw the first card?
A mere doubling of my current utilons measured against a 10% chance of eliminating all possible future utilons is a sucker's bet. I haven't even hit a third of my expected lifespan given current technology, and my rate of utilon acquisition has been accelerating. Quite aside from the fact that I'm certain my utility function includes terms regarding living a long time, and experiencing certain anticipated future events.
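To put rough numbers on why it's a sucker's bet (every utilon figure below is made up purely for illustration):

```python
# Back-of-envelope sketch, with made-up utilon figures: doubling the
# utilons I've banked so far vs. a 10% chance of losing every future utilon.

def ev_of_drawing(current_utilons, expected_future_utilons, p_win=0.9):
    # Win: my current utilons double and my future is untouched.
    win = 2 * current_utilons + expected_future_utilons
    # Lose: I die; model that as zero utilons from here on.
    lose = 0.0
    return p_win * win + (1 - p_win) * lose

def ev_of_declining(current_utilons, expected_future_utilons):
    return current_utilons + expected_future_utilons

# With an accelerating acquisition rate, future utilons dwarf banked ones,
# so declining wins easily: ~918 expected utilons for drawing vs. 1010 for not.
banked, future = 10.0, 1000.0
```

The bet only looks attractive if the stake somehow includes the whole future stream, which is the other reading of the problem.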
It's useful evidence that EURISKO was doing something. There were some extremely dedicated and obsessive people involved in Traveller, back then. The idea that someone unused to starship combat design of that type could come and develop fleets that won decisively two years in a row seems very unlikely.
It might be that EURISKO acted merely as a generic simulator of strategy and design, and Lenat did all the evaluating, and no one else in the contest had access to simulations of similar utility, which would negate much of the interest in EURISKO, I think.
There are a number of DARPA and IARPA projects we pay attention to, but I'd largely agree that their approaches and basic organization make them much less worrying.
They tend towards large, bureaucratically hamstrung projects, like PAL, which the last time I looked included work and funding for teams at seven different universities, or they suffer from extreme narrow focus, like their intelligent communication initiatives, which went from being about adaptive routing via deep introspection of multimedia communication and intelligent networks, to just being software radios and error correction.
They're worth keeping an eye on mostly because they have the money to fund any number of approaches, often over long periods. But the biggest danger isn't their funded, stated goals; it's the possibility of someone going off-target and working on generic AI in the hopes of increasing their funding or scope in the next evaluation, which could be a year or more later.
I jumped the theist fence after reading a book whose intellectual force was too great to be denied outright, and too difficult to refute point by point. I hate being wrong, and feeling stupid, and the arguments from the book stayed in my thoughts for a long time.
I didn't formalize my thoughts until later, but if my atheism had a cause, it was THE CASE AGAINST GOD by George H. Smith. I was very emotionally satisfied with my religion and its community beforehand.
I use ManicTime, myself.
I'm not really interested in actual party divisions so much as I am interested in a survey of beliefs.
Affiliation seems like much less useful information, if we're going to use Aumann-like agreement processes on this survey stuff.
Yes, it might be more useful to list some wedge issues that usually divide the parties in the US.
Doesn't that make the problem worse, though?
If the feedback is the esteem of students in the field, then you're rewarding the mentor who picks his battles carefully, who can sell what happened in any encounter in a positive and understandable light. The honest mentors and 'researchers' who approach a varied population, analyze their performance without upselling, and accrete performance over time (as you'd expect with a real, generic skill) will lose out.
I found the last survey interesting because of the use of ranges and confidence measures. Are there any other examples of this that a community response would be helpful for?
What is the time-urgency, if you don't mind my asking? Other than Vassar's ascension, the Summer of Code projects, and LessWrong, I wasn't aware of anything going on at SingInst with any kind of schedule.
My first attempt at volunteering for Eliezer ended badly, for outside and personal reasons, and I haven't seriously considered it since, mostly because I didn't really understand the short-term goals of SingInst (or I didn't agree with what I did understand of them).
Also, to be honest, the last thing that I found useful (in terms of my Singularitarian goals) to come out of it was CEV, which was quite a while ago now. Are there new projects, or private projects coming to public view? Why now?
Yes, that's true. I think I was fighting a rearguard action here, trying to defend my hypothesis. I've changed my votes accordingly. Cheers to you and Yvain.
Gene transfer also resolves some very puzzling and ugly irregularities. Sometimes the beauty isn't just the theory, but its relationship to the data. If a theory's very elegant, but the data too messy, it disturbs my sense of completion.
I'm not sure that's true. Lots of people would want to know how to make the improved solar technology, because it would be immensely commercially valuable.
Also, I tend to think people's beliefs about technology, science, and the way to solve problems would change, given a large change in energy infrastructure.
People use pervasive technology or social structures as a metaphor for many things, especially new ideas. Witness how early 20th century theorists use mechanical and hydraulic metaphors in their theories of the body and brain, whereas late 20th century biologists use network, electrical, and systems metaphors that simply didn't exist before.
As a total sidenote, your choice of examples is bad. If someone solved photosynthesis in a way that output useful engineerable technologies, it would change your life, and the lives of almost everybody else.
Solar power cheap and powerful enough to run most of our technology would be a massive sea change.
I agree. Mensa and the AMA aren't actually avowedly rational, nor do they have any group goals that require the same, but they are weakly rational groups, because they contain a lot of smart people and they have institutional biases against failures of intelligence and opinion.
This keeps out certain types of dysrationalia, which is all I needed for my comparison to more vulnerable groups like the LDS and those Charismatic Protestants.
The best argument against it is that it isn't really a unique descriptor such that it can be falsified usefully.
Most posts and comments on LessWrong would work just as well if the authors were frequentist statisticians, old fashioned logical positivists, or even people who couldn't really do the math. The epistemic viewpoint doesn't actually hang off of a uniquely Bayesian procedure.
Utilitarianism is an ethical theory, put forth by John Stuart Mill. It's distinct from confusingly similar technical terms like expected utility, and is definitely not a unanimous ethical position around here.
A point further in favor, dysrationalia accumulates in groups much as the small advantages you describe do.
Mensa and AMA members may not have superpowers (to pick two weakly rationalist groups), but they also don't spend millions of dollars sponsoring attempts to locate Noah's Ark, or traces of the Jewish tribe that the LDS church believes existed in South America.
Oh, um, in case it wasn't clear, I think everybody would have their own array of negative and positive descriptors. I don't think we're that similar.
I think the 'identity' we're ascribing to the nascent community here is more complicated than any existing labels. Maybe we could build one, but I don't think there is one now.
I generally label myself contextually, in response to kinds of evaluation being made in the conversation or missive:
When I'm trying to emphasize my commitment to quantifiable, established knowledge, or highlight my rejection of a concept or school of thought I feel falls outside that, I call myself a Scientist.
When the discussion centers on reflective beliefs, conceptual methodology, or worldviews, I call myself a Rationalist.
During discussions about effectiveness, organization, and the feasibility of actions, when I want to highlight my tendency to evaluate ends and means for costs and details, I call myself an Engineer.
To highlight my consequentialist evaluation of actions in ethical and moral judgements, I call myself a Pragmatist.
When discussions get too abstract or ungrounded, without units or direct examples, I call myself a Realist to steer back to more useful territory.
Often I find myself in discussions with multiple persons, whose rhetoric seems to have retreated into armed camps, so to coax them to engage in constructive exchange and give myself an opportunity to get a look at their best data as opposed to their best arguments, I describe myself as Willing to Be Persuaded.
er, am I misparsing this?
It seems to me that if you haven't hit the ground while skydiving, you're some sort of magician, or you landed on an artificial structure and then never got off.
In a manner which matches the fortuity, if not the consequence, of Archimedes' bath and Newton's apple, the [3.6 million year old] fossil footprints were eventually noticed one evening in September 1976 by the paleontologist Andrew Hill, who fell while avoiding a ball of elephant dung hurled at him by the ecologist David Western.
~John Reader, Missing Links: The Hunt for Earliest Man
I have attempted explicit EU calculations in the past, and have had to make very troubling assumptions and unit approximations, which has limited my further experimentation.
I would be very interested in seeing concrete examples and calculation rules in plausible situations.
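For what it's worth, the shape of the calculation I attempted looks something like this; every probability and utility here is one of the troubling made-up assumptions I mentioned:

```python
# Minimal explicit-EU sketch; every number below is an assumed input,
# which is exactly where my attempts ran into trouble.
# Each action maps outcomes to (probability, utility in assumed units).
actions = {
    "take_umbrella":  {"rain": (0.3,   5.0), "dry": (0.7,  8.0)},
    "leave_umbrella": {"rain": (0.3, -10.0), "dry": (0.7, 10.0)},
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes.values())

best = max(actions, key=lambda a: expected_utility(actions[a]))
# expected_utility: take_umbrella 7.1, leave_umbrella 4.0, so "take_umbrella"
# wins -- trivial mechanics; the hard part is justifying the numbers.
```

The arithmetic is the easy part; it's assigning the probabilities and the utility units that forced the troubling assumptions.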
It depends on what you mean by wrecking. Morphine, for example, is pretty safe. You can take it in useful, increasing amounts for a long time. You just can't ever stop using it after a certain point, or your brain will collapse on itself.
This might be a consequence of the bluntness of our chemical instruments, but I don't think so. We now have much more complicated drugs that blunt and control physical withdrawal and dependence, like Subutex and so forth, but the recidivism and addiction numbers are still bad. Directly messing with your reward mechanisms just doesn't leave you a functioning brain afterward, and I doubt wireheading of any sophistication will either.
Well, that's an interesting question. If you wanted to just feel maximum happiness in something like your own mind, you could take the strongest dopamine and norepinephrine reuptake inhibitors you could find.
If you didn't care about your current state, you could get creative: opioids to get everything else out of the way, psychostimulants, deliriants. I would need to think about it; I don't think anyone has ever really worked out all the interactions. It would be easy to achieve an extremely high bliss, but some work on interactions would be required to figure out something like a theoretical maximum.
The primary thing in the way is the fact that even if you could find a way to prevent physical dependency, the subject would be hopelessly psychologically addicted, unable to function afterwards. You'd need to stably keep them there for the rest of their life expectancy, you couldn't expect them to take any actions or move in and out of it.
Depending on the implementation, I would expect wireheading to be much the same. Low levels of stimulation could potentially be controlled, but using to get maximum pleasure would permanently destroy the person. Our architecture isn't built for it.
It's fairly straightforward to max out your subjective happiness with drugs today, why wait?
- Handle: outlawpoet
- Name: Justin Corwin
- Location: Playa del Rey California
- Age: 27
- Gender: Male
- Education: autodidact
- Job: researcher/developer for Adaptive AI, internal title: AI Psychologist
- aggregator for web stuff
Working in AI, cognitive science and decision theory are of professional interest to me. This community is interesting to me mostly out of bafflement. It's not clear to me exactly what the Point of it is.
I can understand the desire for a place to talk about such things, and a gathering point for folks with similar opinions about them, but the directionality implied in the effort taken to make Less Wrong what it is escapes me. Social mechanisms like karma help weed out socially miscued or incompatible communications; they aren't well suited for settling questions of fact. The culture may be fact-based, but this certainly isn't an academic or scientific community; its mechanisms have nothing to do with data management, experiment, or documentation.
The community isn't going to make any money (unless it changes) and is unlikely to do more than give budding rationalists social feedback (mostly from other budding rationalists). It potentially is a distribution mechanism for rationalist essays from pre-existing experts, but Overcoming Bias is already that.
It's interesting content, no doubt. But that just makes me more curious about goals. The founders and participants in LessWrong don't strike me as likely to have invested so much time and effort, so much specific time and effort getting it to be the way it is, unless there were some long-term payoff. I suppose I'm following along at this point, hoping to figure that out.
Not something I was aware of, but good to know.
I wasn't aware of anything from before his career as an academic, 1982 onward. His Wikipedia article doesn't mention anything but the atom thing. But he certainly set out to be a Professor of rationality-topics.
I agree with this comment vociferously.
The upper bound isn't a terrible idea, but it would, for example, knock E.T. Jaynes out of the running as a desirable rationality instructor, as the only unrelated competent activity I can find for him is the Jaynes-Cummings model of atom-field interaction, which I have absolutely zero knowledge of.
Playa del Rey, by the beach just south of Santa Monica and West of LA proper.
One of my previous co-workers ran a San Diego chapter. He enjoyed it a great deal, but that may have been because he was in charge, and shaping the meetings and context towards what he was interested in.
Lots and lots of fairly loose speculation on topics outside their specialties, lots of puzzles and mind-games. It wasn't really very fun for me, although the gender ratio was better than I expected.
Would that mean that, on default settings, a post or comment would be invisible until someone voted for it? Should I set my filters to -1?
This raises the question of what positive attributes we can attempt to apply to this little sub-culture of aspiring rationalists. Shared goals? Collaborative action?
Some have already been implying heavily that rationality implies certain actions in the situation most of us find ourselves in, does it make sense to move forward with that?
Is success here just enabling the growth of strong rationalist individuals, who go forth and succeed in whatever they choose to do, or to shape a community, valuing rationality, which accomplishes things?
Is it possible to do some processing of posts and comments to automagically add links to the wiki for technical terms (possibly any word or phrase with its own page)?
I'm thinking of the annoying ad-word javascript that some sites do. I've always thought it would be useful to do that linking without the author needing to (but possibly being able to override it), but most wikis require you to make links manually, because of ambiguity. Given the specialist nature of this wiki, shouldn't that be less of a problem?
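Something like the following, assuming we have the wiki's page list; the term list, URL scheme, and helper name here are all hypothetical:

```python
import re

# Hypothetical page list and URL scheme, purely for illustration.
wiki_pages = {"prior probability", "utilon", "Aumann agreement"}

def autolink(text, pages=wiki_pages, base="/wiki/"):
    # Try longer terms first, so "prior probability" beats a "prior" page.
    for term in sorted(pages, key=len, reverse=True):
        url = base + term.replace(" ", "_")
        pattern = r"\b" + re.escape(term) + r"\b"
        # Link only the first occurrence, like the ad-word scripts do; a
        # real version would also need to avoid re-linking inside HTML tags.
        text = re.sub(pattern,
                      lambda m: '<a href="%s">%s</a>' % (url, m.group(0)),
                      text, count=1)
    return text
```

On a specialist wiki the term list might be small and unambiguous enough that even this naive pass works, with an author override for the odd collision.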
I always thought the Ixians and Tleilaxu (who, it should be noted, can clone unlimited copies of the most powerful mentats they could find samples of) would have done much better in a fair Dune universe.
One thing I've never seen in these threads about rationalist literature is RPG handbooks. The 2nd Edition Dungeon Master's Guide had an enormous influence on me, because it suggested that the world ran on understandable, deterministic rules, which could be applied both to explicate dramatic situations, and to predict the outcome of situations not yet seen.
One of the first things I ever did (I lacked friends to play D&D with) was to assign stats to fictional characters and make pre-existing stories I felt were unsatisfying play out in a more "realistic" manner. A better word would be internally consistent. But I felt very strongly after that point that it was logical to expect that, 9 times out of 10, the entity with the most advantages would come out on top, contrary to the manner of stories, although the dice-rolling kept total predestination at bay.
Why not just vote the topic up, and comment what you like? The score on the topic or comment will be high, even if there aren't a lot of people saying "you rock" in the comments.
Isn't that the same signal?
Having differing updating speeds for different pages is a good idea.
I favor a lot of posting and commenting, at least initially. It's not clear to me what kinds of ideas and communication are going to be promoted by this community, and I think a wide variety of possible things for readers/commenters/providers to latch onto provides the best chance of something interesting coming out of this.
As other commenters have said, I imagine people will lose enthusiasm or run out of ideas eventually anyway, and we'll settle into a steadier state of posts/comments.
I first began to separate the concept of truth-seeking from specific arguments of fact late in life, as a teenage Catholic who was given a copy of The Case Against God.
A way to see the number of comments a particular post has would be useful.