Comments

Comment by dogiv on Ukraine Situation Report 2022/03/01 · 2022-03-04T04:55:34.963Z · LW · GW

The situation at Zaporizhzhia does not currently appear to be an impending disaster. The fire is (or was) in an administrative building. Fires at nuclear power plants can be serious, but the reactor buildings are concrete and would not easily catch fire from nearby shelling or other external causes.

Some click-seekers on Twitter have made comparisons to Chernobyl. That kind of explosion cannot happen accidentally at Zaporizhzhia (it's a safer power plant design with sturdy containment structures surrounding the reactors). If the Russians wanted to cause a massive radioactive cloud like Chernobyl, they would have to use their own explosives, and I think it would take a very big bomb to do it. They would have to blow the roof off the containment building first, and then somehow break open the massive steel reactor vessel and spread the contents into the air.

A Fukushima-style meltdown also does not look very plausible unless someone takes over the plant and intentionally disables safety systems. 

More info here: https://mobile.twitter.com/isabelleboemeke/status/1499594126521679872

https://mobile.twitter.com/BeCurieus/status/1499604899990052866

Comment by dogiv on Are we in an AI overhang? · 2020-08-03T14:05:42.314Z · LW · GW

Sounds like something GPT-3 would say...

Comment by dogiv on Project Proposal: Gears of Aging · 2020-05-16T02:23:03.653Z · LW · GW

Alternatively, aging (like most non-discrete phenotypes) may be omnigenic.

Comment by dogiv on UDT can learn anthropic probabilities · 2018-06-25T19:45:23.033Z · LW · GW

Thanks for posting this, it's an interesting idea.

I'm curious about your second-to-last paragraph: if our current evidence already favored SSA or SIA (for instance, if we knew that an event occurred in the past that had a small chance of creating a huge number of copies of each human, but we also know that we are not copies), wouldn't that already have been enough to update our credence in SSA or SIA? Or did you mean that there's some other category of possible observations, which is not obviously evidence one way or the other, but under this UDT framework we could still use it to make an update?

Comment by dogiv on [Paper] Global Catastrophic and Existential Risks Communication Scale, similar to Torino scale · 2018-01-19T01:36:51.541Z · LW · GW

I'm curious who the target audience for this scale is...

People who have an interest in global risks will find it simplistic--normally I would think of the use of a color scale as aimed at the general public, but in this case it may be too simple even for the curious layman. The second picture you linked, on the other hand, seems like a much more useful way to categorize risks (two dimensions, severity vs urgency).

I think this scale may have some use in communicating with policy makers who are unfamiliar with the landscape of GCRs, in particular to get them to focus on the red and orange risks that currently receive little attention. But where is the platform for that communication to happen? It seems like the key conversations are currently happening at a more technical level, in DoD, DHS, or FEMA, where a focus on interventions would be helpful. I couldn't get the whole paper, but from what you wrote above it sounds like you have some interesting ideas about ranking risks based on a combination of probability and possible interventions. If that could be formalized, I think it would make the whole idea a lot stronger. Like you say, people are reasonably skeptical about probability estimates (even order-of-magnitude ones), but if you can show that a risk's ranking isn't very sensitive to the exact probability, maybe that would help overcome the obstacle.

Comment by dogiv on [deleted post] 2017-09-25T16:41:56.420Z

Note also that non-alphanumeric symbols are hard to google. I kind of guessed it from context but couldn't confirm until I saw Kaj's comment.

Comment by dogiv on Beta - First Impressions · 2017-09-21T13:48:09.499Z · LW · GW

Separately, and more importantly, the way links are currently displayed makes it hard to tell whether a link has already been visited. Also, if you select text, you can no longer see links.

Firefox 57 on Windows 10.

Comment by dogiv on Beta - First Impressions · 2017-09-21T13:46:49.464Z · LW · GW

I am encountering some kind of error when opening the links here to rationalsphere and single conversational locus. When I open them, a box pops up that says "Complete your profile" and asks me to enter my email address (even though I used my email to log in in the first place). When I type it in and press submit, I get the error: {"id":"app.mutation_not_allowed","value":"\"usersEdit\" on _id \"BSRa9LffXLw4FKvTY\""}

Comment by dogiv on Common vs Expert Jargon · 2017-09-21T13:35:36.297Z · LW · GW

I think this is an excellent approach to jargon and I appreciate the examples you've given. There is too much tendency, I think, for experts in a field to develop whatever terminology makes their lives easiest (or even in some cases makes them "sound smart") without worrying about accessibility to newcomers.

... but maybe ideally hints at a broader ecosystem of ideas

This sounds useful, but very hard to do in practice... do you know of a case where it's successful?

Comment by dogiv on David C Denkenberger on Food Production after a Sun Obscuring Disaster · 2017-09-19T18:28:51.029Z · LW · GW

Thanks for posting!

I haven't read your book yet but I find your work pretty interesting. I hope you won't mind a naive question... you've mentioned non-sunlight-dependent foods like mushrooms and leaf tea. Is it actually possible for a human to survive on foods like this? Has anybody self-experimented with it?

By my calculation, a person who needs 1800 kcals/day would have to eat about 5 kg of mushrooms. Tea (the normal kind, anyway) doesn't look any better.
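For concreteness, a rough version of that calculation, assuming raw mushrooms provide on the order of 350 kcal per kg (nutrition tables list roughly 220-380 kcal/kg depending on species; the exact figure is my assumption, not something from your work):

    \frac{1800\ \text{kcal/day}}{\sim 350\ \text{kcal/kg}} \approx 5\ \text{kg of mushrooms per day}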

Bacteria fed on natural gas seem like a very promising food source--and one that might even be viable outside of catastrophe scenarios. Apparently it's already being used for fish feed.

Comment by dogiv on [deleted post] 2017-08-31T13:42:07.334Z

You are assuming that all rational strategies are identical and deterministic. In fact, you seem to be using "rational" as a stand-in for "identical", which reduces this scenario to the twin PD. But imagine a world where everyone makes use of the type of superrationality you are positing here--basically, everyone assumes other people are just like them. Then any one person who switches to a defection strategy would have a huge advantage. Defecting becomes the rational thing to do. Since everybody is rational, everybody switches to defecting--because this is just a standard one-shot PD. You can't get the benefits of knowing the opponent's source code unless you actually know the opponent's source code.

Comment by dogiv on [deleted post] 2017-08-30T20:54:40.183Z

The first section is more or less the standard solution to the open source prisoner's dilemma, and the same as what you would derive from a logical decision theory approach, though with different and less clear terminology than what is in the literature.

The second section, on application to human players, seems flawed to me (as does the claim that it applies to superintelligences who cannot see each other's source code). You claim the following conditions are necessary:

  1. A and B are rational

  2. A and B know each other's preferences

  3. They are each aware of 1 and 2

But in fact, your concept of predisposing oneself relies explicitly on having access to the other agent's source code (and them having access to yours). If you know the other agent does not have access to your source code, then it is perfectly rational to predispose yourself to defect, whether or not you predict that the other agent has done the same. Cooperating only makes sense if there's a logical correlation between your decision to cooperate and your opponent's decision to cooperate; both of you just being "rational" does not make your decision processes identical.

"Recurrent Decision Theory" is not a meaningful idea to develop based on this post; just read and understand the existing work on UDT/FDT and you will save yourself some trouble.

Comment by dogiv on Intrinsic properties and Eliezer's metaethics · 2017-08-30T20:00:19.782Z · LW · GW

I think many of us "rationalists" here would agree that rationality is a tool for assessing and manipulating reality. I would say much the same about morality. There's not really a dichotomy between morality being "grounded on evolved behavioral patterns" and having "a computational basis implemented somewhere in the brain and accessed through the conscious mind as an intuition". Rather, the moral intuitions we have are computed in our brains, and the form of that computation is determined both by the selection pressures of evolution and the ways that our evolved brain structures interact with our various environments.

So what is our highest priority here? It's neither Rationality nor Truth, but Morality in the broad sense--the somewhat arbitrary and largely incoherent set of states of reality that our moral intuition prefers. I say arbitrary because our moral intuition does not aim entirely at the optimization target of the evolutionary process that generated it--propagating our genes. Call that moral relativism if you want to.

Comment by dogiv on People don't have beliefs any more than they have goals: Beliefs As Body Language · 2017-07-25T21:14:02.964Z · LW · GW

I think this is an interesting and useful view, if applied judiciously. In particular, it will always tend to be most relevant for crony beliefs--beliefs that affect the belief-holder's life mainly through other people's opinions of them, like much of politics and some of religion. When it comes to close-up stuff that can cause benefit or harm directly, you will find that most people really do have a model of the world. When you ask someone whether so-and-so would make a good president, the answer is often a signal about their cultural affiliations. Ask them which is the fastest way to get to where they work, and the answer reflects what they've learned about rush-hour traffic patterns. Ask people if they believe in God, and the answer is a signal. Ask them if they believe pre-marital sex is ever acceptable, and the answer you get is a lot more practical.

It's also worth unpacking the us-vs-them terminology you employ here. Many of us may tend to be more literal than the average person (especially those who fall on the spectrum) but in my experience we are still prone to this same behavior. In most cases, there's nothing wrong with that. Understanding the difference can help us avoid trying to cooperatively world-model with people who are just expressing social beliefs, and can also help us recognize world-modeling when we see it, so that we can reduce our tendency to make snap judgements about people on the basis of the beliefs they express.

Comment by dogiv on Can anyone refute these arguments that we live on the interior of a hollow Earth? · 2017-07-24T15:22:50.253Z · LW · GW

This doesn't actually seem to match the description. They only talk about having used one laser, with two stakes, whereas your diagram requires using two lasers. Your setup would be quite difficult to achieve, since you would somehow have to get both lasers perfectly horizontal; I'm not sure a standard laser level would give you this kind of precision. In the version they describe, they level the laser by checking the height of the beam on a second stake. This seems relatively easy.

My guess is they just never did the experiment, or they lied about the result. But it would be kind of interesting to repeat it sometime.

Comment by dogiv on Self-modification as a game theory problem · 2017-06-28T20:22:55.856Z · LW · GW

Thanks, that's an interesting perspective. I think even high-level self-modification can be relatively safe with sufficient asymmetry in resources--simulated environments give a large advantage to the original, especially if the successor can be started with no memories of anything outside the simulation. Only an extreme difference in intelligence between the two would overcome that.

Of course, the problem of transmitting values to a successor without giving it any information about the world is a tricky one, since most of the values we care about are linked to reality. But maybe some values are basic enough to be grounded purely in math that applies to any circumstances.

Comment by dogiv on Self-modification as a game theory problem · 2017-06-28T19:53:08.716Z · LW · GW

If visible precommitment by B requires it to share the source code for its successor AI, then it would also be giving up any hidden information it has. Essentially both sides have to be willing to share all information with each other, creating some sort of neutral arbiter to determine which side would have won and at what cost to the other. That basically means creating a merged superintelligence is necessary just to start the bargaining process, since each side has to prove to the other that the neutral arbiter will control all relevant resources to prevent cheating.

Realistically, there will be many cases where one side thinks its hidden information is sufficient to make the cost of conflict smaller than the costs associated with bargaining, especially given the potential for cheating.

Comment by dogiv on The Use and Abuse of Witchdoctors for Life · 2017-06-26T01:37:27.322Z · LW · GW

I've read a couple of Lou Keep's essays in this series and I find his writing style very off-putting. It seems like there's a deep idea about society and social-economic structures buried in there, but it's obscured by a hodgepodge of thesis-antithesis and vague self-reference.

As best I can tell, his point is that irrational beliefs like belief in magic (specifically, protection from bullets) can be useful to a community (by encouraging everyone to resist attackers together) even though they are not beneficial to the individual (since the belief doesn't prevent death when shot). He relates this to Seeing Like A State, in that any attempt by the state to increase legibility by clarifying the benefits makes them disappear.

He further points out that political and economic policies tend to focus on measurable effects, whereas the ultimate point of governments and economies is to improve the subjective wellbeing of people (happiness, although he says that's just a stand-in for something else he doesn't feel like explaining).

Extending that, he thinks we have probably lost some key cultural traditions that were very important to the quality of people's lives, but weren't able to thrive in a modern economic setting. He doesn't give any examples of that, although he mentions marriages and funerals as examples of traditions that have survived. Still, it seems plausible.

Overall, it reminds me of Scott Alexander's essay How the West was Won, about the advance of universalist (capitalist) culture and its ability to out-compete traditional systems whether or not it actually improves people's lives. Moloch is also relevant.

It's very likely I've missed a key aspect here. If anyone knows what it is, please let me know.

Comment by dogiv on Open thread, June. 19 - June. 25, 2017 · 2017-06-23T15:50:09.921Z · LW · GW

Are you talking about a local game in NY or a correspondence thing?

Comment by dogiv on Open thread, June. 19 - June. 25, 2017 · 2017-06-23T14:29:17.293Z · LW · GW

I like the first idea. But can we really guarantee that after changing its source code to give itself maximum utility, it will stop all other actions? If it has access to its own source code, what ensures that its utility is "maximum" when it can change the limit arbitrarily? And if all possible actions have the same expected utility, an optimizer could output any solution--"no action" would be the trivial one but it's not the only one.

An AI that has achieved all of its goals might still be dangerous, since it would presumably lose all high-level executive function (its optimization behavior) but have no incentive to turn off any sub-programs that are still running.

Both proposals have the possible failure mode that the AI will discover or guess that this mechanism exists, and then it will only care about making sure it gets activated--which might mean doing bad enough things that humans are forced to open the box and shut it down.

Comment by dogiv on Open thread, June. 19 - June. 25, 2017 · 2017-06-22T14:06:42.051Z · LW · GW

It seems like the ideal leisure activities, then, should combine the social games with games against nature. Sports do this to some extent, but the "game against nature" part is mostly physical rather than intellectual.

Maybe we could improve on that. I'm envisioning some sort of combination of programming and lacrosse, where the field reconfigures itself according to the players' instructions with a 10-second delay...

But more realistically, certain sports are more strategic and intellectual than others. I've seen both tennis and fencing mentioned as sports that involve quick strategic thinking and predicting your opponent, although they lack the team element that lets you build coordination skills. Maybe some kind of group fencing would be good... or doubles tennis?

Comment by dogiv on Existential risk from AI without an intelligence explosion · 2017-06-22T13:46:09.949Z · LW · GW

AI is good at well-defined strategy games, but (so far) bad at understanding and integrating real-world constraints. I suspect that there are already significant efforts to use narrow AI to help humans with strategic planning, but that these remain secret. For an AGI to defeat that sort of human-computer combination would require considerably superhuman capabilities, which means without an intelligence explosion it would take a great deal of time and resources.

Comment by dogiv on S-risks: Why they are the worst existential risks, and how to prevent them · 2017-06-21T19:37:58.703Z · LW · GW

More like driving to the store and driving into the brick wall of the store are adjacent in design space.

Comment by dogiv on S-risks: Why they are the worst existential risks, and how to prevent them · 2017-06-21T19:35:22.998Z · LW · GW

Yes, many people intuitively feel that a universe of pleasure and a universe of pain add to a net negative. But I suspect that's just a result of experiencing (and avoiding) lots of sources of extreme pain in our lives, while sources of pleasure tend to be diffuse and relatively rare. The human experience of pleasure is conjunctive because in order to survive and reproduce you must fairly reliably avoid all types of extreme pain. But in a pleasure-maximizing environment, removing pain will be a given.

It's also true that our brains tend to adapt to pleasure over time, but that seems simple to modify once physiological constraints are removed.

Comment by dogiv on S-risks: Why they are the worst existential risks, and how to prevent them · 2017-06-21T13:18:09.015Z · LW · GW

Human disutility includes more than just pain too. Destruction of humanity (the flat plain you describe) carries a great deal of negative utility for me, even if I disappear without feeling any pain at all. There's more disutility if all life is destroyed, and more if the universe as a whole is destroyed... I don't think there's any fundamental asymmetry. Pain and pleasure are the most immediate ways of affecting value, and probably the ones that can be achieved most efficiently in computronium, so external states probably don't come into play much at all if you take a purely utilitarian view.

Comment by dogiv on Bet or update: fixing the will-to-wager assumption · 2017-06-15T20:02:29.986Z · LW · GW

I'm not sure what you mean here by risk aversion. If it's not loss aversion, and it's not due to decreasing marginal value, what is left?

Would you rather have $5 than a 50% chance of getting $4 and a 50% chance of getting $7? That, to me, sounds like the kind of risk aversion you're describing, but I can't think of a reason to want that.
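For reference, the gamble in that example has the higher expected value:

    \mathbb{E}[\text{gamble}] = 0.5 \times \$4 + 0.5 \times \$7 = \$5.50 > \$5

so preferring the certain $5 means giving up $0.50 of expected value purely to avoid variance.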

Comment by dogiv on Bet or update: fixing the will-to-wager assumption · 2017-06-15T16:22:53.164Z · LW · GW

You will not bet on just one side, you mean. You already said you'll take both bets because of the guaranteed win. But unless your credence is quite precisely 50%, you could increase your expected value over that status quo (guaranteed $1) by choosing NOT to take one of the bets. If you still take both, or if you now decide to take neither, it seems clear that loss aversion is the reason (unless the amounts are so large that decreasing marginal value has a significant effect).

Comment by dogiv on Bet or update: fixing the will-to-wager assumption · 2017-06-15T00:09:14.240Z · LW · GW

True, you're sure to make money if you take both bets. But if you think the probability is 51% on odd rather than 50%, you make a better expected value by only taking one side.

Comment by dogiv on Bet or update: fixing the will-to-wager assumption · 2017-06-14T21:17:43.422Z · LW · GW

Let's reverse this and see if it makes more sense. Say I give you a die that looks normal, but you have no evidence about whether it's fair. Then I offer you a two-sided bet: I'll bet $101 to your $100 that it comes up odd. I'll also offer $101 to your $100 that it comes up even. Assuming that transaction costs are small, you would take both bets, right?

If you had even a small reason to believe that the die was weighted towards even numbers, on the other hand, you would take one of those bets but not the other. So if you take both, you are exhibiting a probability estimate of exactly 50%, even though it is "uncertain" in the sense that it would not take much evidence to move that estimate.
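A minimal sketch of the arithmetic from the taker's side, assuming zero transaction costs (the function and variable names are mine, purely for illustration):

    # Expected value of each side of the $101-vs-$100 offer, from the perspective
    # of the person accepting the bets; p_odd is the taker's credence in "odd".
    def taker_evs(p_odd, my_stake=100.0, their_stake=101.0):
        # Bet 1: the offerer backs "odd", so I collect their $101 when the die lands even.
        ev_backing_even = (1 - p_odd) * their_stake - p_odd * my_stake
        # Bet 2: the offerer backs "even", so I collect their $101 when the die lands odd.
        ev_backing_odd = p_odd * their_stake - (1 - p_odd) * my_stake
        return ev_backing_even, ev_backing_odd

    print(taker_evs(0.50))  # ~(0.50, 0.50): both sides +EV, taking both locks in $1
    print(taker_evs(0.51))  # ~(-1.51, 2.51): dropping the unfavorable side beats the sure $1

Only at a credence of exactly 50% is it optimal to take both sides; any lean in either direction makes one of the two bets negative in expectation.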

Comment by dogiv on Stupid Questions June 2017 · 2017-06-13T20:10:24.541Z · LW · GW

Gasoline is an excellent example of this behavior. It consists of a mixture of many different non-polar hydrocarbons with varying densities, some of which would be gaseous outside of solution. It stays mixed indefinitely (assuming you don't let the volatile parts escape) because separation would require a reduction in entropy.

Comment by dogiv on Futarchy, Xrisks, and near misses · 2017-06-06T02:56:19.001Z · LW · GW

It seems like there's also an issue with risk aversion. In regular betting markets there are enough bets that you can win some and lose some, and the risks can average out. But if you bet substantially on x-risks, you will get only one low-probability payout. Even if you assume you'll actually get that one (relatively large) payout, the marginal value will be greatly decreased. To avoid that problem, people will only be willing to bet small amounts on x-risks. The people betting against them, though, would be willing to make a variety of large bets (each with low payoff) and thereby carry almost no risk.

Comment by dogiv on Naturally solved problems that are easy to verify but that would be hard to compute · 2017-04-03T14:06:24.953Z · LW · GW

I guess where we disagree is in our view of how a simulation would be imperfect. You're envisioning something much closer to a perfect simulation, where slightly incorrect boundary conditions would cause errors to propagate into the region that is perfectly simulated. I consider it more likely that if a simulation has any interference at all (such as rewinding to fix noticeable problems) it will be filled with approximations everywhere. In that case the boundary condition errors aren't so relevant. Whether we see an error would depend mainly on whether there are any (which, like I said, is equivalent to asking whether we are "in" a simulation) and whether we have any mechanism by which to detect them.

Comment by dogiv on Naturally solved problems that are easy to verify but that would be hard to compute · 2017-03-31T17:08:31.669Z · LW · GW

If it is the case that we are in a "perfect" simulation, I would consider that no different than being in a non-simulation. The concept of being "in a simulation" is useful only insofar as it predicts some future observation. Given the various multiverses that are likely to exist, any perfect simulation an agent might run is probably just duplicating a naturally-occurring mathematical object which, depending on your definitions, already "exists" in baseline reality.

The key question, then, is not whether some simulation of us exists (nearly guaranteed) but how likely we are to encounter an imperfection or interference that would differentiate the simulation from the stand-alone "perfect" universe. Once that happens, we are tied in to the world one level up and should be able to interact with it.

There's not much evidence about the likelihood of a simulation being imperfect. Maybe imperfect simulations are more common than perfect ones because they're more computationally tractable, but that's not a lot to go on.

Comment by dogiv on Elon Musk launches Neuralink, a venture to merge the human brain with AI · 2017-03-28T19:54:37.302Z · LW · GW

Does anybody think this will actually help with existential risk? I suspect the goal of "keeping up" or preventing irrelevance after the onset of AGI is pretty much a lost cause. But maybe if it makes people smarter it will help us solve the control problem in time.

Comment by dogiv on Open thread, Mar. 27 - Apr. 02, 2017 · 2017-03-28T16:00:10.609Z · LW · GW

I just tried this out for a project I'm doing at work, and I'm finding it very useful--it forces me to think about possible failure modes explicitly and then come up with specific solutions for them, which I guess I normally avoid doing.

Comment by dogiv on Act into Uncertainty · 2017-03-27T21:42:21.507Z · LW · GW

Encrypting/obscuring it does help a little bit, but doesn't eliminate the problem, so it's not just that.

Comment by dogiv on Act into Uncertainty · 2017-03-27T16:16:39.315Z · LW · GW

I agree with that... personally I have tried several times to start a private journal, and every time I basically end up failing to write down any important thoughts because I am inhibited by the mental image of how someone else might interpret what I write--even though in fact no one will read it. Subconsciously it seems much more "defensible" to write nothing at all, and therefore effectively leave my thoughts unexamined, than to commit to having thought something that might be socially unacceptable.

Comment by dogiv on Making equilibrium CDT into FDT in one+ easy step · 2017-03-24T14:26:12.754Z · LW · GW

I've been trying to understand the differences between TDT, UDT, and FDT, but they are not clearly laid out in any one place. The blog post that went along with the FDT paper sheds a little bit of light on it--it says that FDT is a generalization of UDT intended to capture the shared aspects of several different versions of UDT while leaving out the philosophical assumptions that typically go along with it.

That post also describes the key difference between TDT and UDT by saying that TDT "makes the mistake of conditioning on observations" which I think is a reference to Gary Drescher's objection that in some cases TDT would make you decide as if you can choose the output of a pre-defined mathematical operation that is not part of your decision algorithm. I am still working on understanding Wei Dai's UDT solution to that problem, but presumably FDT solves it in the same way.

Comment by dogiv on Globally better means locally worse · 2017-03-22T21:43:12.911Z · LW · GW

It does seem like a past tendency to overbuild things is the main cause. Why are the pyramids still standing five thousand years later? Because the only way they knew to build a giant building back then was to make it essentially a squat mound of solid stone. If you wanted to build a pyramid the same size today you could probably do it for 1/1000 of the cost but it would be hollow and it wouldn't last even 500 years.

Even when cars were new they couldn't be overbuilt the way buildings were in antiquity, because they still had to be able to move themselves around. Washing machines are somewhere in between, I guess. But I don't think rich people demand less durability. If anything, rich people have more capital to spend up front on a quality product and the luxury of researching which one is a good long-term investment.

Comment by dogiv on Open thread, Mar. 20 - Mar. 26, 2017 · 2017-03-22T17:06:49.910Z · LW · GW

Agreed. There are plenty of liberal views that reject certain scientific evidence for ideological reasons--I'll refrain from examples to avoid getting too political, but it's not a one-sided issue.

Comment by dogiv on Open thread, Mar. 20 - Mar. 26, 2017 · 2017-03-21T17:34:02.276Z · LW · GW

This may be partially what has happened with "science" but in reverse. Liberals used science to defend some of their policies, conservatives started attacking it, and now it has become an applause light for liberals--for example, the "March for Science" I keep hearing about on Facebook. I am concerned about this trend because the increasing politicization of science will likely result in both reduced quality of science (due to bias) and decreased public acceptance of even those scientific results that are not biased.

Comment by dogiv on Chuckling a Bit at Microsoft and the PCFG Formalism · 2017-03-20T20:43:50.392Z · LW · GW

Interesting piece. It seems like coming up with a good human-checkable way to evaluate parsing is pretty fundamental to the problem. You may have noticed already, but Ozora is the only one that didn't figure out "easily" goes with "parse".

Comment by dogiv on Open thread, Mar. 20 - Mar. 26, 2017 · 2017-03-20T18:58:18.597Z · LW · GW

The idea that friendly superintelligence would be massively useful is implicit (and often explicit) in nearly every argument in favor of AI safety efforts, certainly including EY and Bostrom. But you seem to be making the much stronger claim that we should therefore altruistically expend effort to accelerate its development. I am not convinced.

Your argument rests on the proposition that current research on AI is so specific that its contribution toward human-level AI is very small, so small that the modest efforts of EAs (compared to all the massive corporations working on narrow AI) will speed things up significantly. In support of that, you mainly discuss vision--and I will agree with you that vision is not necessary for general AI, though some form of sensory input might be. However, another major focus of corporate AI research is natural language processing, which is much more closely tied to general intelligence. It is not clear whether we could call any system generally intelligent without it.

If you accept that mainstream AI research is making some progress toward human-level AI, even though it's not the main intention, then it quickly becomes clear that EA efforts would have greater marginal benefit in working on AI safety, something that mainstream research largely rejects outright.

Comment by dogiv on Open Thread, March. 6 - March 12, 2017 · 2017-03-10T16:23:17.727Z · LW · GW

I haven't seen any feminists addressing that particular argument (most are concerned with cultural issues rather than genetic ones) but my initial sense is something like this: a successful feminist society would have 1) education and birth control easily available to all women, and 2) a roughly equal division of the burden of child-rearing between men and women. These changes will remove most of the current incentives that seem likely to cause a lower birth rate among feminists than non-feminists. Of course, it could remain true that feminists tend to be more educated, more independent, less traditional, etc--traits that might correlate with reduced desire for children. However, I suspect we already have that issue (for both men and women) entirely separately from feminism. Some highly-educated countries try to increase fertility with tax incentives and ad campaigns (Denmark, for instance) but I'm not sure how successful it is. In the end the only good solution to such Malthusian problems may be genetic engineering.

Comment by dogiv on why people romantice magic over most science. · 2017-03-09T21:13:48.966Z · LW · GW

I would argue that the closest real-world analogue is computer hacking. It is a rare ability, but it can bestow a large amount of power on an individual who puts in enough effort and skill. Like magic, it requires almost no help from anyone else. The infrastructure has to be there, but since the infrastructure isn't designed to allow hacking, having the infrastructure doesn't make the ability available to everyone who can pay (like, say, airplanes). If you look at the more fantasy-style sci-fi, science is often treated like magic--one smart scientist can do all sorts of cool stuff on their own. But it's never plausible. With hacking, that romanticization isn't nearly as far from reality.

Comment by dogiv on John Tooby on Coalitional Instincts · 2017-03-09T20:54:30.966Z · LW · GW

It seems like the key problem described here is that coalitions of rational people, when they form around scientific propositions, cause the group to become non-scientific out of desire to support the coalition. The example that springs to my mind is climate change, where there is social pressure for scientific-minded people (or even those who just approve of science) to back the rather specific policy of reducing greenhouse gas emissions rather than to probe other aspects of the problem or potential solutions and adaptations.

I wonder if we might solve problems like this by substituting some rational principle that is not subject to re-evaluation. Ultimate goals (CEV, or the like) would fit the bill in principle, but in practice, even if enough people could agree on them, I suspect they are too vague and remote to form a coalition around. The EA movement may be closer to succeeding, where the key idea is not an ultimate goal but rather the general technique of quantitatively evaluating opportunities to achieve altruistic objectives in general. Still, it's difficult to extend a coalition like that to a broader population, since most people can't easily identify with it.

Perhaps the middle ground is to start with a goal that is controversial enough to distinguish coalition members from outsiders, but too vague to form a strong coalition around--say, aggregative consequentialism or something. Then find a clear practical implication of the goal that has the necessary emotional impact. As long as the secondary goal follows easily enough from the first goal that it won't need to be re-evaluated later on, the coalition can hold together and make progress toward the original goal without much danger of becoming irrational. Can't think of a good example for the sub-goal, though.

Comment by dogiv on Welcome to Less Wrong! (9th thread, May 2016) · 2017-03-03T18:43:10.536Z · LW · GW

Hi Jared,

Your question about vegetarianism is an interesting one, and I'll give a couple of responses because I'm not sure exactly what direction you're coming from.

I think there's a strong rationalist argument in favor of limiting consumption of meat, especially red meat, on both health and environmental grounds. These issues get more mixed when you look at moderate consumption of chicken or fish. Fish especially is the best available source of healthy fats, so leaving it out entirely is a big trade-off, and the environmental impact of fishing varies a great deal by species, wild vs. farmed, and even the fishing method. Veganism gives relatively small environmental gains over vegetarianism, and is generally considered a loss in terms of health.

When you look at animal suffering, things get a lot more speculative. Clearly you can't treat a chicken's suffering the same as a human's, but how many chickens does it take to be equivalent to a human? At what point is a chicken's life not worth living? This quickly bogs down in questions of the repugnant conclusion, a standard paradox in utilitarianism. Although I have seen no thorough analysis of the topic, my sense is that:

  1. Scaling of moral value is probably more-than-linear with brain mass (that is, you are worth more than the ~300 chickens it would take to equal your gray matter; a rough check of this figure is sketched below), but I can't be much more precise than that.

  2. Most of the world's neurons are in wild invertebrates (http://reflectivedisequilibrium.blogspot.com/2013/09/how-is-brain-mass-distributed-among.html), which argues against focusing specially on domesticated vertebrates.

  3. Effort expended to reduce animal suffering is largely self-contained--that is, if you choose not to eat a chicken, you probably reduce the number of factory-farmed chickens by about one, with no longer-term effects. Effort to help humans, on the other hand, often has a difficult-to-estimate multiplier from follow-on effects. See here for more on this argument: http://globalprioritiesproject.org/2014/06/human-and-animal-interventions/
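As a rough check on the ~300 figure in point 1, assuming a human brain of about 1.35 kg and a chicken brain of about 4 g (ballpark numbers I'm supplying, not taken from the linked posts):

    \frac{1350\ \text{g (human brain)}}{4\ \text{g (chicken brain)}} \approx 340\ \text{chickens}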

The upshot is that when you make any significant investment in animal welfare, including vegetarianism and especially veganism, you should consider the opportunity costs. If it makes your life more difficult and reduces the amount of good you can do in other ways, it may not be worth it.

Personally, I used to be a pescetarian and would consider doing so again, depending on the people around me. Trying to do it in my current circumstances would cause more hassle than I think it's worth (having to ask people for separate meals, not participating in group activities, etc). If you know a lot of other vegetarians, there may be no social cost or even some social benefit. But don't assume that's the case for everyone.

Comment by dogiv on Don't Fear the Reaper: Refuting Bostrom's Superintelligence Argument · 2017-03-01T21:23:48.922Z · LW · GW

The attempt to analytically model the recalcitrance of Bayesian inference is an interesting idea, but I'm afraid it leaves out some of the key points. Reasoning is not just repeated applications of Bayes' theorem. If it were, everyone would be equally smart except for processing speed and data availability. Rather, the key element is in coming up with good approximations for P(D|H) when data and memory are severely limited. This skill relies on much more than a fast processor, including things like simple but accurate models of the rest of the world, or knowing the correct algorithms to combine various truths into logical conclusions.
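For reference, the single update step that such a model treats as the unit of reasoning is just Bayes' theorem,

    P(H \mid D) = \frac{P(D \mid H)\, P(H)}{P(D)}

and the point is that the hard part is not applying this formula but constructing a usable approximation of the likelihood P(D|H) when data and memory are severely limited.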

Some of it does fall into the category of having the correct prior beliefs, but they are hardly "accidentally gifted" -- learning the correct priors, either from experience with data or through optimization "in a box" is a critical aspect of becoming intellectually capable. So the recalcitrance of prediction, though it clearly does eventually go to infinity in the absence of new data, is not obviously high. I would add also that for your argument against the intelligence explosion to hold, the recalcitrance of prediction would have to be not just "predictably high" but would need to increase at least linearly with intelligence in the range of interest--a very different claim, and one for which you have given little support.

I do think it's likely that strictly limiting access to data would slow down an intelligence explosion. Bostrom argues that a "hardware overhang" could be exploited for a fast takeoff, but historically, advanced AI projects like AlphaGo or Watson have used state-of-the-art hardware during development, and this seems probable in the future as well. Data overhang, on the other hand, would be nearly impossible to avoid if the budding intelligence is given access to the internet, of which it can process only a small fraction in any reasonable amount of time.

Comment by dogiv on Open Thread, Feb. 20 - Feb 26, 2017 · 2017-02-21T17:34:25.719Z · LW · GW

I agree there's something to the exploration-exploitation view of people becoming more closed-minded. But don't be too quick to write it off as "people don't think carefully anymore", or simple tribalism. Some important questions really do get settled by all those late-night college debates, though often the answer is "I don't think it's possible to know this" or "It's not worth the years of effort it would take to understand at a more-than-amateur level."

People are recognizing their limitations and zeroing in on the areas where they can get the highest return on investment for their thoughts. That's a difficult thing to do when you're younger, because you don't have much to compare yourself to. If you've never met a physicist more knowledgeable than your 9th-grade science teacher, you might well think you can make big contributions to the theory of relativity in the space of a few weeks' discussion with your friends.

Similarly, when it comes to politics, the idea of considering every idea with an open mind can fall victim to the pressures of reality--some ideas are superficially appealing but actually harmful; some are nice in theory but are so far from what could reasonably be implemented that their return on investment is low. And because politics is so adversarial, many ideas that are promoted as novel and non-partisan are actually trying to sneak in a not-so-novel agenda through the back door.

Comment by dogiv on True understanding comes from passing exams · 2017-02-06T17:15:59.904Z · LW · GW

Thank you, this is clearer than it was before, and it does seem like a potentially useful technique. I see a couple of limitations:

First, it still seems that the whole plan rests on having a good selection of questions, and the mechanism for choosing them is unclear. If they are chosen by some structured method that thoroughly covers the AI's representation of the prior, the questions asked of the human are unlikely to capture the most important aspects of the update from new evidence. Most of the differences between the prior and the posterior could be insignificant from a human perspective, and so even if the human "understands" the posterior in a broad sense they will not be likely to have the answers to all of these questions. Even if they can figure out those answers correctly, it does not necessarily test whether they are aware of the differences that are most important.

Second, the requirement for the two AIs to have a common prior, and differ only by some known quantum of new evidence, seems like it might restrict the applications considerably. In simple cases you might handle this by "rolling back" a copy of the first AI to a time when it had not yet processed the new evidence, and making that the starting point for the second AI. But if the processing of the evidence occurred before some other update that you want included in the prior, then you would need some way of working backward to a state that never previously existed.