I'm coming to this party rather late, but I'd like to acknowledge that I appreciated this exchange more than just by upvoting it. Seeing in-depth explanations of other people's emotions seems like the only way to counter the Typical Mind Fallacy, but such explanations are also really hard to come by. So thanks for a very levelheaded discussion.
Going off of what others have said, I'll add another reason people might satisfice with teachers.
In my experience, people agree much more about which teachers are bad than about which are good. Many of my favorite (in the sense that I learned a lot easily) teachers were disliked by other people, but almost all of those I thought were bad were widely thought of as bad. If you're not as interested in serious learning this might be less important.
So avoiding bad teachers requires a relatively small amount of information, but finding a teacher who is not just good, but good for you, requires much more. So people reasonably only do the first part.
I thought this was an interesting critical take. Portions are certainly mind-killing, e.g. you can completely ignore everything he says about rich entrepreneurs, but overall it seemed sound. Especially the proving-too-much argument: the projections involve doing multiple revolutionary things, each of which would be a significant breakthrough on its own. The fact that Musk isn't putting money into doing any of those suggests it would not be as easy/cheap as predicted (not just in a "add a factor of 5" way, but in a "the current predictions are meaningless" way).
Also, the fact that he's proposing it for California seems strange. There are places with cheaper, flatter land where you could do a proof of concept before moving into a politically complicated, expensive, earthquake-prone state like California. I've seen Texas (Houston-Dallas-San Antonio) and Alberta (Edmonton-Calgary) proposed, both of which sound like much better locations.
If there are generally decreasing returns to measurement of a single variable, I think this is more what we would expect to see. If you've already put effort into measuring a given variable, it will have lower information value on the margin. If you add in enough costs for switching measurements, then even the optimal strategy might spend a serious amount of time/effort pursuing lower-value measurements.
Further, if they hadn't even thought of some measurements they couldn't have pursued them, so they wouldn't have suffered any declining returns.
I don't think this is the primary reason, but it may contribute, especially in conjunction with reasons from sibling comments.
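A minimal sketch of the decreasing-returns point above, assuming a simple Gaussian measurement model; the noise variance and switching cost are made-up illustrative numbers, not anything from the post:

# Sketch: under a Gaussian model, uncertainty about a quantity after n
# independent measurements shrinks like 1/(1/prior_var + n/noise_var),
# so each extra measurement of the same variable removes less uncertainty
# than the one before. All numbers are illustrative assumptions.

noise_var = 4.0     # assumed measurement noise variance
switch_cost = 0.5   # assumed one-off cost of setting up a new measurement

def posterior_var(n, prior_var=10.0):
    """Posterior variance after n independent noisy measurements of one variable."""
    return 1.0 / (1.0 / prior_var + n / noise_var)

prev = posterior_var(0)
for n in range(1, 11):
    cur = posterior_var(n)
    marginal_gain = prev - cur  # uncertainty removed by the n-th measurement
    worth_it = marginal_gain > switch_cost  # crude proxy for "better than paying to switch"
    print(f"measurement {n:2d}: marginal gain {marginal_gain:.3f}, beats switching cost: {worth_it}")
    prev = cur

The marginal gains fall off quickly, but once a switching cost is in the picture, continuing to measure the old variable can still look locally sensible for a while.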
I don't know if this is exactly what you're looking for, but the only way I've found to make philosophy of identity meaningful is to interpret it as being about values. In this reading, questions of personal identity are about what you do or should value as "yourself".
Clearly you-in-this-moment is yourself. Do you value you-in-ten-minutes the same as yourself-now? You-in-ten-years? Simulations? Etc. Then Open Individualism (based on my cursory googling) would say we should value everyone (at all times?) identically to ourselves. That's clearly descriptively false, and, at least to me, it seems highly unlikely to be any sort of "true values", so it's false.
First note: I'm not disagreeing with you so much as just giving more information.
This might buy you a few bits (and lots of high-energy physics is done this way, with powers of electronvolts as the only units). But there will still be free variables that need to be set. Wikipedia claims (with a citation to this John Baez post) that there are 26 fundamental dimensionless physical constants. These, as far as we know right now, have to be hard-coded in somewhere, maybe in units, maybe in equations, but somewhere.
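For one concrete example (mine, not from the Baez post): the fine-structure constant is already dimensionless, so no choice of units can absorb it, and its value has to be put in by hand:

\[ \alpha = \frac{e^2}{4\pi\varepsilon_0\hbar c} \approx \frac{1}{137.036} \]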
Professionals read the Methods section.
Ok, but I am not a professional in the vast majority of fields I want to find studies in. I would go so far as to say I'm a dilettante in many of them.
My strategy in situations like that is to try to get rid of all respect for the person. If to be offended you have to care, at least on some level, about what the person thinks, then demoting them from "agent" to "complicated part of the environment" should reduce your reaction to them. You don't get offended when your computer gives you weird error messages.
Now this itself would probably be offensive to the person (just about the ultimate in thinking of them as low status), so it might not work as well when you have to interact with them often enough for them to notice. But especially for infrequent and one-time interactions, I find this a good way to get through potentially offensive situations.
The problem is that we have to guarantee that the AI doesn't do something really bad while trying to stop these problems; what if it decides it really needs more resources suddenly, or needs to spy on everyone, even briefly? And it seems (to me at least) that stopping it from having bad side effects is pretty close to, if not equivalent to, Strong Friendliness.
I worry that this would bias the kind of policy responses we want. I obviously don't have a study or anything, but it seems that the framing of the War on Drugs and the War on Terrorism has encouraged too much violence. Which sounds like a better way to fight the War on Terror, negotiating in complicated local tribal politics or going in and killing some terrorists? Which is actually a better policy?
I don't know exactly how this would play out in a case where violence makes no sense (like the Cardiovascular Vampire). Maybe increased research as part of a "war effort" would work. But it seems to me that this framing would encourage simple and immediate solutions, which would be a serious drawback.
This feels like reading too much into it, but is
and each time the inner light pulsated, the assembly made a vroop-vroop-vroop sound that sounded oddly distant, muffled like it was coming from behind four solid walls, even though the spinning-conical-section thingy was only a meter or two away.
supposed to be something about the fourth wall?
How about a news show? Best watched without sound.
I think you need to start by cashing out "understand" better. Certainly no physical system can simulate itself with full resolution. But there are all sorts of things we can't simulate like this. Understanding (as the word is more commonly used) usually involves finding out which parts of the system are "important" to whatever function you're concerned with. For example, we don't have to simulate every particle in a gas because we have the gas laws. And I think most people would say that the gas laws show more understanding of thermodynamics than whatever you would get out of a complete simulation anyway.
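To make the gas example concrete: the ideal gas law

\[ PV = nRT \]

relates just pressure, volume, amount of gas, and temperature, rather than tracking the positions and momenta of ~10^23 particles, and that compression is exactly the kind of "finding the important parts" I mean by understanding.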
Now the question is whether the brain actually does have any "laws" like this. IIRC, this is a relatively open question (though I do not follow neuroscience very closely) and in principle it could go either way.
I guess I don't really understand what the purpose of the argument is. Unless we can prove things about this stack of brains, what does it get us? And how far "down" the evolutionary ladder does this argument work? Are cats omega-self-aware? Computing clusters?
Took most of it. I pressed enter accidentally after the charity questions. I would like to fill out the remainder. Is there a way I can do that without messing up the data?
Though I don't think it's that simple, because both sides are claiming that the other side is not reporting how they truly feel. One side claims that people are calling things creepy semi-arbitrarily to raise their own status, and the other claims that people are intentionally refusing to recognize creepy behavior as creepy so they don't have to stop it (or, being slightly more charitable, so they don't take a status hit for being creepy).
But all we want is an ordering of choices, and affine transformations (with a positive multiplicative constant) are order-preserving.
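Spelled out (a standard two-line check, nothing new): for u'(x) = a·u(x) + b with a > 0,

\[ u(x) > u(y) \iff a\,u(x) + b > a\,u(y) + b \iff u'(x) > u'(y), \]

and since E[u'] = a·E[u] + b, the ordering over lotteries by expected utility is preserved as well.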
I don't think this is the right place to report this, but I don't know where the right place is, and this is closest. In the title of the page for comments from the deleted account (e.g.), the name of the poster has not been redacted.
Wouldn't this be a problem for tit for tat players going up against other tit for tat players (but not knowing the strategy of their opponent)?
In the sense that there are multiple equilibria, or that there is no equilibrium for reflection?
Not necessarily. See Chalmers's reply to Hilary Putnam, who asserted something similar, especially section 6. Basically, if we require that all of the "internal" structure of the computation be the same in the isomorphism, and make a reasonable assumption about the nature of consciousness, all of the matter in the Hubble volume wouldn't be close to large enough to simulate a (human) consciousness.
I found this (scroll down for the majority of articles) graph of all links between Eliezer's articles a while ago; it could be helpful. And it's generally interesting to see all the interrelations.
The thing I got out of it was that human brain processes appear to be able to do something (assign a nonzero probability to a non-computable universe) that our current formalization of general induction cannot do, and we can't really explain why.
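For reference, my rough gloss of that formalization (Solomonoff induction): the prior weight on a string x comes entirely from programs for a universal prefix machine U that output something starting with x,

\[ M(x) = \sum_{p\,:\,U(p)=x*} 2^{-|p|}, \]

so a hypothesis that no program can generate never receives any weight, which is (roughly) why a non-computable universe is invisible to it.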
As I understand it, it is a comparative advantage argument. More rational people are likely to have a comparative advantage in making money compared to less rational people, so the utility-maximizing setup is for more rational people to make money and pay less rational people to do the day-to-day work of implementing the charitable organization. That's the basic form of the argument, at least.
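A toy version with made-up numbers: suppose the more rational person can earn $100/hour or do charity work worth $40/hour, while another person can be hired at $10/hour and produces charity work worth $30/hour. Then an hour of earning and donating funds

\[ \frac{100}{10} \times 30 = 300 \]

dollars' worth of charitable output, versus $40 from working directly. The conclusion flips with different numbers, which is why it's a comparative-advantage argument rather than a universal rule.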
You are right, I should have said something like "implementing MWI over some morality."
I don't think MWI is analogous to creating extra simultaneous copies. In MWI one maximizes the fraction of future selves experiencing good outcomes. I don't care about parallel selves, only future selves. As you say, looking back at my self-tree I see a single path, and looking forward I have expectations about future copies, but looking sideways just sounds like daydreaming, and I don't place a high marginal value on that.
There is also an opportunity cost to using statistics poorly instead of properly. This may be only an externality (the person doing the test may actually benefit more from deception), but overall the world would be better if all statistics were used correctly.
But the important (and moral) question here is "how do we count the people for utility purposes?" We also need a normative way to aggregate their utilities, and one vote per person would need to be justified separately.
I don't know game theory very well, but wouldn't this only work as long as not everyone did it? Using the car example, if these contracts were common practice, you could have one for 4000 and the dealer could have one for 5000, in which case you could not reach the Pareto optimum.
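To make the failure concrete with assumed valuations: say the buyer actually values the car at $6,000 and the dealer's reservation price is $3,500, so any price in between would be a Pareto improvement. With a buyer precommitment to pay at most $4,000 and a dealer precommitment to accept no less than $5,000,

\[ \{\,p : 5000 \le p \le 4000\,\} = \varnothing \quad\text{while}\quad \{\,p : 3500 \le p \le 6000\,\} \neq \varnothing, \]

so no trade happens and the gains from trade are lost.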
In general, doesn't this infinitely regress up meta levels? Adopting precommitments is beneficial, so everyone adopts them, then pre-precommitments are beneficial... (up to some constraint from reality like being too young, although then parents might become involved)
Is this (like some of Schelling's stuff I've read) more instrumental than pure game theory? I can see how this would work in the real world, but I'm not sure that it would work in theory. (Please feel free to correct any and all of my game theory)
I think the majority of people don't evaluate AGI incentives rationally, especially failing to fully see its possibilities, whereas this is an easy-to-imagine benefit.
Personally, pseudonymity wasn't that helpful. It's not that I didn't want to risk my good name or something, so much as that I just didn't want to be publicly wrong among intelligent people. Even if people didn't know that the comment was from me per se, they were still (hypothetically) disagreeing with my ideas and I would still know that the post was mine. For me it was more hyperbolic discounting than rational cost-benefit analysis.
As a semi-lurker, this likely would have been very helpful for me. One problem that I had is the lack of an introduction to posting. You can read everything, but it's hard to learn how to post well without practice. As others have remarked, bad posts get smacked down fairly hard, so this makes it hard for people to get practice... vicious cycle. Having this could create an area where people who are not confident enough to post to the full site could get practice and confidence.
But doesn't this make precommitting have a positive expected utility for students, so that students would precommit to whatever they thought was most likely to happen and the teacher would still expect more late papers from having this policy?
I don't know that much about the topic, but aren't viruses more efficient at many things than normal cells? Could there be opportunities for improvement in current biological systems through better understanding of viruses?
Or create (or does one exist) some thread(s) that would be a standard place for basic questions. Having somewhere always open might be useful too.
OB has threading (although it doesn't seem as good or as widely used as on LW).
This seems like both a wonderful idea, and not mutually exclusive with the original. Having this organization could potentially increase the credibility of the entire thing, get some underdog points with the general public (although I don't know how powerful this is for average people), and act as a backup plan.
It seems interesting that lately this site has been going through a "question definitions of reality" stage (the AI in a box boxes you, this series). It does seem to follow that going far enough into materialism leads back to something similar to Cartesian questions, but it's still surprising.
My technique to get time is to say "wait" about ten times or until they stop and give me time to think. This probably won't work very well for comment threads, but in real life, not letting the other person continue generally works. Probably slightly rude, but more honest and likely less logically rude, a trade-off I can often live with.
I think the first problem we have to solve is what the burden of proof is like for this discussion.
The far view says that science and reductionism have a very good record of demystifying lots of things that were thought to be unexplainable (fire, life, evolution), so the burden is on those saying the Hard Problem does not just follow from the Easy Problems. According to this, opponents of reductionism have to provide something close to a logical inconsistency with reducing consciousness. It would take a huge amount of evidence against reduction to overcome the prior in its favor coming from the far view.
The other side is that consciousness requires explaining first-person experience. This view says that the reductionists have to demonstrate why science can make this new jump from only third-person explanations.
IMHO, problems similar to the second view have been brought up against every major expansion of reductionism and science and have generally been proven wrong, so I vote that the burden of proof should be on those arguing against reductionism.
Whichever side ends up being right, it is important to first agree on what each side has to do to win, or else each side can declare victory while agreeing on the facts.
(please note that this is my first post)
I found the phrasing in terms of evidence somewhat confusing in this case. I think there is some equivocation on "rationality" here, and that is the root of the problem.
For P=NP (if it or its negation is provable), a perfect Bayesian machine will (dis)prove it eventually. This is an absolute rationality: straight rational information processing without any heuristics or biases or anything. In this sense it is "irrational" to never be able to (dis)prove P=NP.
But in the sense of "is this a worthwhile application of my bounded resources" rationality, for most people the answer is no. One can reasonably expect a human claiming to be "rational" to be able to correctly solve one-in-a-million-illness, but not to have gone (or even be able to go) through the process of solving P=NP. In terms of fulfilling one's utility function, solving P=NP given your processing power is most likely not the most fulfilling choice (except for some computer scientists).
So we can say this person is making the best trade-off between accuracy and work for P=NP, because it requires a large amount of work, but not for one-in-a-million-illness, because learning Bayes' rule is very little work.
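For comparison, the one-in-a-million-illness calculation really is cheap once you know Bayes' rule. With assumed test characteristics of 99% sensitivity and 99% specificity (illustrative numbers, not from the original problem):

\[ P(\text{ill}\mid +) = \frac{0.99 \times 10^{-6}}{0.99 \times 10^{-6} + 0.01 \times (1 - 10^{-6})} \approx 10^{-4}, \]

i.e. even a positive result leaves only about a 1-in-10,000 chance of illness. That's a few lines of arithmetic, versus the open research problem of resolving P=NP.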