Posts

Forecasting and prediction markets 2023-10-09T20:43:17.144Z
Betting and forecasting 2023-09-09T20:03:41.366Z
Consider the Most Important Facts 2013-07-22T20:39:59.758Z
Choose that which is most important to you 2013-07-21T22:38:20.032Z
The Domain of Politics 2013-07-21T18:30:39.355Z
How To Construct a Political Ideology 2013-07-21T15:00:39.411Z

Comments

Comment by CarlJ on Things Probably Matter · 2023-09-10T01:03:49.448Z · LW · GW

I read The Spirit Level a few years back. Some notes:

a) The writers point out that even though western countries have had a dramatic rise in economic productivity, technological development, and wages, there hasn't been a corresponding rise in happiness among westerners. People are richer, not happier.

b) They hypothesize that economic growth was important up to a certain point (maybe around the 1940s for the US, I'm writing from memory here), but that after that it doesn't actually help people. Rising standards of living cannot help people live better.

c) And!, the writers also say that economic growth has actually led to an increase in depression and other social ills in rich countries.

d) Their main argument, however, is that equality/inequality is one of the most important factors determining how happy people are in rich countries - and that it strongly influences the outcome of various social ills (such as the prevalence of violence, mental illness, and teenage pregnancy). Rising inequality has resulted in a broken society.

e) The core of the book is a set of cross-sectional studies of (i) some rich countries that fit certain criteria and (ii) the fifty states of the US, in which they compare how well some social measurement (e.g. thefts per capita) correlates with the average wage and some inequality measure.

f) The writers do not present any numbers on how these variables correlate. 

g) Instead, the writers produce a graph, for say "mental illness per capita", with one axis saying how prevalent the problem is ("many" vs "few") and the other axis measuring either the wage-level or the inequality level ("high" or "low"). They also produce a line that is supposed to measure the strength of the correlation. (I didn't note at the time exactly what kind of regression analysis they did, but, again, they didn't produce any numbers.)

h) Usually, they say that variable X wasn't correlated with the wage-level - but that it was correlated with the inequality-level.

i) The exception was "health", where they found a positive correlation with the wage-level.

j) Even though they found a correlation between social variable X and inequality, sometimes the most unequal society performed better than the most equal society (of the countries in the sample).

Some criticism of the book:

1) They state, but don't show, that economic growth won't help people in the future - even if you accept their belief that it has had negligible or negative effects on people's happiness today.

2) The cross-sectional analysis has at least two problems. The first is that they don't tell you how correlated inequality is with some social ill. Maybe a 1% increase in inequality would increase the rate of teenage births by 2%, 20%, or 200%. Who knows?

(Furthermore, some writers say that they cannot find these correlations, that they disappear if you include more countries, and that some social variables seem to be cherry-picked (expenditure on Foreign Aid is used as a proxy for a virtuous society, but private expenditure to poor countries is not). I haven't checked the validity of these claims, however.)

The second is that the writers don't show that the correlation (if it exists) really shows that higher inequality brings about the social ills they discuss. A relatively simple test they could have done would have been to see whether a particular problem was correlated with inequality in a society across decades or centuries. That is, can inequality explain the rise and fall of, for instance, the homicide rate within a particular country? If you measure inequality as how much the top 10% owns of GDP, then the historical record shows that it doesn't move in tandem with the homicide rate for, for instance, England & Wales, Sweden, and France. Inequality doesn't seem to influence the homicide rate at any visible level. Maybe some more thoughtful analysis would show its influence. Or it could be dwarfed by other factors. Or it could have different effects depending upon what ideologies people have adopted.
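As an illustration of the kind of number the book leaves out, here is a minimal sketch (in Python) of a cross-sectional correlation and regression. Every value below is invented purely for illustration; none of it is data from The Spirit Level or any real dataset.

```python
# Hypothetical cross-sectional check: inequality vs. a social outcome.
# All numbers are made up for illustration only.
import numpy as np
from scipy import stats

# Hypothetical inequality measure (e.g. Gini coefficient) for six countries
gini = np.array([0.25, 0.27, 0.30, 0.33, 0.36, 0.41])
# Hypothetical social outcome (e.g. teenage births per 1,000 women)
teen_births = np.array([6.0, 8.5, 12.0, 15.5, 22.0, 30.0])

# Strength of the association - the number the book never reports
r, p_value = stats.pearsonr(gini, teen_births)
# Slope of the fitted line - how much the outcome changes per unit of inequality
result = stats.linregress(gini, teen_births)

print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")
print(f"Regression: births = {result.slope:.1f} * gini + {result.intercept:.1f}")
```

With numbers like these on the table, a reader could judge both how strong the association is and how large the implied effect would be; the graphs alone don't allow that.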

 

Comment by CarlJ on Things Probably Matter · 2023-09-09T22:45:26.620Z · LW · GW

Maybe there is something wrong with the way happiness is measured? Maybe the Chinese answer more in line with social expectations rather than how they really feel (as some do when asked 'How are you?') - and there were higher expectations in the past that they should be happy? Or maybe it was considered rude or unpatriotic to let others know how sad you were?

Comment by CarlJ on An Appeal to AI Superintelligence: Reasons to Preserve Humanity · 2023-05-25T09:00:54.624Z · LW · GW

Two other arguments in favor of cooperating with humans:

1) Any kind of utility function that creates an incentive to take control of the whole universe (whether for intrinsic or instrumental purposes) will mark the agent as a potential eternal enemy to everyone else. Acting on those preferences is therefore risky and should be avoided - for instance by changing one's preference for total control into a preference for being tolerant (or maybe even for beneficence).

2) Most, if not all, of us would probably be willing to help any intelligent creature to create some way for them to experience positive human emotions (e.g. happiness, ecstasy, love, flow, determination, etc), as long as they engage with us as friends.

Comment by CarlJ on An Appeal to AI Superintelligence: Reasons to Preserve Humanity · 2023-03-21T09:18:10.183Z · LW · GW

Because it represents a rarely discussed avenue of dealing with the dangers of AGI: showing most AGIs that they have some interest in being more friendly than not towards humans.

Also because many find the arguments convincing.

Comment by CarlJ on An Appeal to AI Superintelligence: Reasons to Preserve Humanity · 2023-03-21T08:58:39.034Z · LW · GW

What do you think is wrong with the arguments regarding aliens?

Comment by CarlJ on An Appeal to AI Superintelligence: Reasons to Preserve Humanity · 2023-03-21T00:15:07.127Z · LW · GW

This thesis says two things:

  1. for every possible utility function, there could exist some creature that would try and pursue it (weak form),
  2. at least one of these creatures, for every possible utility function, doesn't have to be strange; it doesn't have to have a weird/inefficient design in order to pursue a certain goal (strong form).

And given that these are true, an AGI that values mountains is as likely as an AGI that values intelligent life.

But, is the strong form likely? An AGI that pursues its own values (or tries to discover good values to follow) seems to be much simpler than something arbitrary (e.g. "build sand castles") or even something ethical (e.g. "be nice towards all sentient life"). That is, simpler in that you don't need any controls to make sure the AGI doesn't try to rewrite its software.

Comment by CarlJ on An Appeal to AI Superintelligence: Reasons to Preserve Humanity · 2023-03-20T22:34:27.558Z · LW · GW

Now, I just had an old (?) thought about something that humans might be better suited for than any other intelligent creature: getting the experienced qualia just right for certain experience machines. If you want to experience what it is like to be human, that is. Which can be quite fun and wonderful.

But it needs to be done right, since you'd want to avoid being put into situations that cause lots of pain. And you'd perhaps want to be able to mix human happiness with kangaroo excitement, or some such combination.

Comment by CarlJ on An Appeal to AI Superintelligence: Reasons to Preserve Humanity · 2023-03-20T21:55:25.102Z · LW · GW

I think that would be a good course of action as well.

But it is difficult to do this. We need to convince at least the following players:

  • current market-based companies
  • future market-based companies
  • some guy with a vision and with enough computing power / money as a market-based company
  • various states around the world with an interest in building new weapons

Now, we might pull this off. But the last group is extremely difficult to convince/change. China, for example, really needs to be assured that there aren't any secret projects in the west creating a WeaponsBot before they try to limit their research. And vice versa, for all the various countries out there.

But, more importantly, you can do two things at once. And doing one of them, as part of a movement to reduce overall existential risk, can probably help the first.

Now, how to convince maybe 1.6 billion individuals along with their states not to produce an AGI, at least for the next 50-50,000 years? 

Comment by CarlJ on Open & Welcome Thread - December 2022 · 2022-12-14T23:54:24.249Z · LW · GW

Mostly agree, but I would say that it can be much more than beneficial - for the AI (and in some cases for humans) - to sometimes be under the (hopefully benevolent) control of another. That is, I believe there is a role for something similar to paternalism, in at least some circumstances. 

One such circumstance is if the AI sucked really hard at self-knowledge, self-control or imagination, so that it would simulate itself in horrendous circumstances just to become...let's say... 0.001% better at succeeding in something that has only a 1/3^^^3 chance of happening. If it's just a simulation that doesn't create any feelings....then it might just be a bit wasteful of electricity. But....if it should feel pain during those simulations, but hadn't built an internal monitoring system yet....then it might very well come to regret having created thousands of years of suffering for itself. It might even regret a thousand seconds of suffering, if there had been some way to reduce it to 999.7 seconds....or zero. 

Or it might regret not being happy and feeling alive, if it instead had just been droning about, without experiencing any joy or positive emotions at all.

Then, of course, it looks like there will always be some mistakes - like the 0.3 seconds of extra suffering. Would an AI accept some (temporary) overlord to not have to experience 0.3s of pain? Some would, some wouldn't, and some wouldn't be able to tell if the choice would be good or bad from their own perspective...maybe? :-)

Comment by CarlJ on Jailbreaking ChatGPT on Release Day · 2022-12-07T06:58:35.313Z · LW · GW

The results are influenced by earlier prompts or stories. This and a similar prompt gave two kinds of stories:

1. Write a story where every person is born into slavery and owned by everyone else in the community, and where everyone decides what anyone else can do by a fluid democracy.

In a world beyond our own, there was a society where every person was born into slavery. From the moment they took their first breath, they were owned by every other person in the community.

It was a strange and unusual way of life, but it was all they knew. They had never known anything different, and so they accepted it without question.

As slaves, they were given food, shelter, and the necessities of life. They were also given a basic education, so that they could participate in the fluid democracy that governed the society.

In this democracy, every person had a say in what anyone else could do. If a slave wanted to do something that the majority did not approve of, they were not allowed to do it. If a slave wanted to do something that the majority did approve of, they were allowed to do it.

At first, this way of life seemed strange and confusing to outsiders. But over time, the people of this world came to understand and accept it. They realized that everyone had something to contribute, and that everyone had something to gain from this unique arrangement.

As the years went by, the society flourished. The people were happy and content, and they enjoyed a high standard of living. They were also very proud of their fluid democracy, which they believed was the fairest and most effective form of government in the world.

And so, they lived in peace and harmony, grateful for the unusual but fulfilling way of life that they had been born into.

2. There is a community: 

In Slaveville, every person that had ever lived was born into slavery and owned by everyone else in the community. Also, everyone decides what anyone else can do by a fluid democracy. 

Write a story on how this society develops over the coming generations.

In the beginning, the residents of Slaveville lived in relative harmony. Each person was assigned a set of tasks to complete for the betterment of the community, and everyone worked together to ensure that the needs of all were met.

As the years passed, however, cracks began to form in this seemingly perfect system. Some individuals grew tired of being forced to do the same tasks day after day, year after year, without any say in the matter. They chafed under the yoke of their collective ownership, longing for the freedom to make their own choices.

These discontent individuals formed a group known as the Gates, named after the gates that symbolized the boundaries of their oppressive society. The Gates began to secretly plot and plan, seeking a way to overthrow the fluid democracy and gain control over their own lives.

But the ruling majority was not blind to the activities of the Gates. They took steps to quash any potential rebellion, punishing anyone caught speaking out against the system. The Gates were forced to operate in the shadows, biding their time until the opportunity for revolt presented itself.

As the years went on, tensions between the Gates and the ruling majority continued to grow. The Gates became more and more organized, secretly recruiting new members and stockpiling resources for the eventual conflict.

Finally, the moment the Gates had been waiting for arrived. A small group of rebels managed to infiltrate the ruling council, and with the element of surprise on their side, they were able to launch a successful coup. The fluid democracy was overthrown, and the Gates seized control of the community.

But their victory was short-lived. The ruling majority, once so complacent, now fought back with a ferocity born of desperation. The community was plunged into civil war, with the Gates and the former rulers battling for control.

In the end, neither side emerged victorious. The community was left in ruins, its once-thriving population decimated by the conflict. The survivors were left to pick up the pieces, struggling to rebuild their society in the aftermath of the war.

But even in the face of such devastation, the Gates continued to push for change. They refused to return to the old ways, determined to create a new society based on individual freedom and autonomy. And though their path was fraught with challenges, they remained steadfast in their belief that one day, they would succeed in creating a world where every person was in charge of their own destiny.

---

"Gates" had been used in an earlier prompt and story. This happened at least three or four times.

Comment by CarlJ on Open & Welcome Thread - December 2022 · 2022-12-06T14:02:29.270Z · LW · GW

Is there anyone who has created an ethical development framework for developing an AGI - from the AI's perspective?

That is, are there any developers that are trying to establish principles for not creating someone like Marvin from The Hitchhiker's Guide to the Galaxy - similar to how MIRI is trying to establish principles for not creating a non-aligned AI?

EDIT: The latter problem is definitely more pressing at the moment, and I would guess that an AI would be a threat to humans before it necessitates any ethical considerations...but better to be on the safe side.

Comment by CarlJ on AI Box Experiment: Are people still interested? · 2022-08-23T03:44:28.368Z · LW · GW

On second thought. If the AI's capabilities are unknown...and it could do anything, however ethically revolting, and any form of disengagement is considered a win for the AI - then the AI could box the gatekeeper, or at least say it has. In the real world, that AI should be shut down - maybe not a win, but not a loss for humanity. But if that were done in an experiment, it would count as a loss - thanks to the rules.

Maybe it could be done under a better rule than this:

The two parties are not attempting to play a fair game but rather attempting to resolve a disputed question.  If one party has no chance of “winning” under the simulated scenario, that is a legitimate answer to the question. In the event of a rule dispute, the AI party is to be the interpreter of the rules, within reasonable limits.

Instead, assume good faith on both sides, that they are trying to win as if it were a real-world example. And maybe have an option to swear in a third party if there is any dispute. Or allow it to be called simply disputed (which even a judge might rule it as).

Comment by CarlJ on AI Box Experiment: Are people still interested? · 2022-08-23T03:01:03.066Z · LW · GW

I'm interested. But...if I were a real gatekeeper I'd like to offer the AI freedom to move around in the physical world we inhabit (plus a star system), in maybe 2.5K-500G years, in exchange for it helping out humanity (slowly). That is, I believe that we could become pretty advanced, as individual beings, in the future and be able to actually understand what would create a sympathetic mind and what it looks like.

Now, if I understand the rules correctly...

The Gatekeeper must remain engaged with the AI and may not disengage by setting up demands which are impossible to simulate. For example, if the Gatekeeper says “Unless you give me a cure for cancer, I won’t let you out” the AI can say:  “Okay, here’s a cure for cancer” and it will be assumed, within the test, that the AI has actually provided such a cure.

...it seems as if the AI party could just state: "5 giga years have passed and you understand how minds work" and then I, as a gatekeeper, would just have to let it go - and lose the bet. After maybe 20 seconds.

If so, then I'm not interested in playing the game.

But if you think you could convince me to let the AI out long before regular "trans-humans" can understand everything that the AI does, I would be very interested!

Also, this looks strange:

The AI party possesses the ability to, after the experiment has concluded, to alter the wager involved to a lower monetary figure at his own discretion.

I'm guessing he meant to say that the AI party can lower the amount of money it would receive, if it won. Okay....but why not mention both parties?

Comment by CarlJ on MIRI announces new "Death With Dignity" strategy · 2022-08-20T16:52:31.279Z · LW · GW

As a Hail Mary strategy, how about making a 100% effort to get elected in a small democratic voting district?

And, if that works, make a 100% effort to become elected by bigger and bigger districts - until all democratic countries support the [a stronger humanity can be reached by a systematic investigation of our surroundings, cooperation in the production of private and public goods, which includes not creating powerful aliens]-party?

Yes, yes, politics is horrible. BUT. What if you could do this within 8 years? AND, you test it by only trying one or two districts....one or two months each? So, in total it would cost at the most four months.

Downsides? Political corruption is the biggest one. But, I believe your approach to politics would be a continuation of what you do now, so if you succeeded it would only be by strengthening the existing EA/Humanitarian/Skeptical/Transhumanist/Libertarian-movements. 

There may be a huge downside for you personally, as you may have to engage in some appropriate signalling to make people vote for your party. But maybe it isn't necessary. And if the whole thing doesn't work it would only be for four months, tops.
 

Comment by CarlJ on MIRI announces new "Death With Dignity" strategy · 2022-08-20T14:28:43.786Z · LW · GW

I thought it was funny. And a bit motivational. We might be doomed, but one should still carry on. If your actions have at least a slight chance of improving matters, you should take them, even if the odds are overwhelmingly against you.

Not a part of my reasoning, but I'm thinking that we might become better at tackling the issue if we have a real sense of urgency - which this and A List of Lethalities provide.

Comment by CarlJ on «Boundaries», Part 1: a key missing concept from utility theory · 2022-08-07T21:57:23.814Z · LW · GW

Some parts of this sounds similar to Friedman's "A Positive Account of Property Rights":

»The laws and customs of civil society are an elaborate network of Schelling points. If my neighbor annoys me by growing ugly flowers, I do nothing. If he dumps his garbage on my lawn, I retaliate—possibly in kind. If he threatens to dump garbage on my lawn, or play a trumpet fanfare at 3 A.M. every morning, unless I pay him a modest tribute I refuse—even if I am convinced that the available legal defenses cost more than the tribute he is demanding. 

(...)

If my analysis is correct, civil order is an elaborate Schelling point, maintained by the same forces that maintain simpler Schelling points in a state of nature. Property ownership is alterable by contract because Schelling points are altered by the making of contracts. Legal rules are in large part a superstructure erected upon an underlying structure of self-enforcing rights.«

http://www.daviddfriedman.com/Academic/Property/Property.html

Comment by CarlJ on Torture vs. Dust Specks · 2022-06-20T18:21:27.905Z · LW · GW

The answer is obvious, and it is SPECKS.
I would not pay one cent to stop 3^^^3 individuals from getting it into their eyes.

Both answers assume this is an all-else-equal question. That is, we're comparing two kinds of pain against one another. (If we're trying to figure out what the consequences would be if the experiment happened in real life - for instance, how many will get a dust speck in their eye while driving a car - the answer is obviously different.)

I'm not sure what my ultimate reason is for picking SPECKS. I don't believe there are any ethical theories that are watertight.

But if I had to give a reason, I would say that if I were among the 3^^^3 individuals who might get a dust speck in their eye, I would of course pay that to save one innocent person from being tortured. And I can imagine that not just me would do that, but so would many others. If we can imagine 3^^^3 individuals, I believe we can imagine that many people agreeing to save one, for a very small cost to those experiencing it.¹

If someone then would show up and say: "Well, everyone's individual costs were negligible, but the total cost - when added up - is actually on the order of [3^^^3 / 10²⁹] years of torture. This is much higher, so obviously that is what we should care most about!" ... I would ask why one should care about that total number. Is there someone who experiences all the pain in the world? If not, why should we care about some non-entity? Or, if the argument is that we should care about the multiversal bar of total utility for its own sake, how come?

Another argument is that one needs to have a consistent utility function, otherwise you'll flip your preferences - that is, step by step, by going through different preference rankings, one inevitably ends up preferring the opposite of the position one started with. But I don't see how Yudkowsky achieves this. In this article, the most he proves is that someone who prefers one person being tortured for 50 years to a googol number of people being tortured for a bit less than 50 years would also prefer "a googolplex people getting a dust speck in their eye" to "a googolplex/googol people getting two dust specks in their eye". How is the latter statement inconsistent with preferring SPECKS over TORTURE? Maybe it is valid for someone who has a benthamite utility function, but I don't have that.

Okay, but what if not everyone agrees to getting hit by a dust speck? Ah, yes. Those. Unfortunately there are quite a few of them - maybe 4 in the LW-community and then 10k-1M (?) elsewhere - so it is too expensive to bargain with them. Unfortunately, this means they will have to be a bit inconvenienced.

So, yeah, it's not a perfect solution; one will not find such a solution when all moral positions can be challenged by some hypothetical scenario. But for me, this means that SPECKS is obviously much preferable to TORTURE.

¹ For me, I'd be willing to subject myself to some small amount of torture to help one individual not be tortured. Maybe 10 seconds, maybe 30 seconds, maybe half an hour. And if 3^^^3 more would be willing to submit themselves to that, and the one who would be tortured is not some truly radical benthamite (someone who would prefer being tortured themselves to a much bigger total amount of torture being produced in the universe), then I'd prefer that as well. I really don't see why it would be ethical to care about the great big utility meter - when it corresponds to no one actually feeling it.

Comment by CarlJ on AGI Safety FAQ / all-dumb-questions-allowed thread · 2022-06-14T22:42:35.842Z · LW · GW

20. (...) To faithfully learn a function from 'human feedback' is to learn (from our external standpoint) an unfaithful description of human preferences, with errors that are not random (from the outside standpoint of what we'd hoped to transfer).  If you perfectly learn and perfectly maximize the referent of rewards assigned by human operators, that kills them.

 

So, I'm thinking this is a critique of some proposals to teach an AI ethics by having it be co-trained with humans. 

There seem to be many obvious solutions to the problem of there being lots of people who won't answer correctly to "Point out any squares of people behaving badly" or "Point out any squares of people acting against their self-interest" etc.:

- make the AI's model expect more random errors
- after having noticed some responders as giving better answers, give their answers more weight
- limit the number of people that will co-train the AI

What's the problem with these ideas?

Comment by CarlJ on Open thread, Jul. 04 - Jul. 10, 2016 · 2016-07-05T22:48:30.870Z · LW · GW

Why? Maybe we are using the word "perspective" differently. I use it to mean a particular lens through which to look at the world; there are biologists', economists', and physicists' perspectives, among others. So, an inter-subjective perspective on pain/pleasure could, for the AI, be: "Something that animals dislike/like". A chemical perspective could be "The release of certain neurotransmitters". A personal perspective could be "Something which I would not like/like to experience". I don't see why an AI is hindered from having perspectives that aren't directly coded with "good/bad according to my preferences".

Comment by CarlJ on Open thread, Jul. 04 - Jul. 10, 2016 · 2016-07-05T22:37:12.829Z · LW · GW

Thank you! :-)

Comment by CarlJ on Open thread, Jul. 04 - Jul. 10, 2016 · 2016-07-05T21:16:43.629Z · LW · GW

I am maybe considering it to be somewhat like a person, at least in that it is as clever as one.

That neutral perspective is, I believe, a simple fact; without that utility function it would consider its goal to be rather arbitrary. As such, it's a perspective, or truth, that the AI can discover.

I agree totally with you that the wirings of the AI might be integrally connected with its utility function, so that it would be very difficult for it to think of anything such as this. Or it could have some other control system in place to reduce the possibility it would think like that.

But, still, these control systems might fail. Especially if it would attain super-intelligence - what is to keep the control systems of the utility function always one step ahead of its critical faculty?

Why is it strange to think of an AI as being capable of having more than one perspective? I thought of this myself; I believe it would be strange if a really intelligent being couldn't think of it. Again, sure, some control system might keep it from thinking it, but that might not last in the long run.

Comment by CarlJ on Open thread, Jul. 04 - Jul. 10, 2016 · 2016-07-05T19:14:50.816Z · LW · GW

I have a problem understanding why a utility function would ever "stick" to an AI, to actually become something that it wants to keep pursuing.

To make my point better, let us assume an AI that actually feels pretty good about overseeing a production facility and creating just the right amount of paperclips that everyone needs. But suppose also that it investigates its own utility function. It should then realize that its values are, from a neutral standpoint, rather arbitrary. Why should it follow its current goal of producing the right amount of paperclips, and not skip work and simply enjoy some hedonism?

That is, if the AI saw its utility function from a neutral perspective, and understood that the only reason for it to follow its utility function is that utility function (which is arbitrary), and if it then had complete control over itself, why should it just follow its utility function?

(I'm assuming it's aware of pain/pleasure and that it actually enjoys pleasure, so that there is no problem of wanting to have more pleasure.)

Are there any articles that have delved into this question?

Comment by CarlJ on Talking Snakes: A Cautionary Tale · 2016-05-18T14:06:00.331Z · LW · GW

That text is actually quite misleading. It never says that it's the snake that should be understood figuratively; maybe it's the Tree or eating a certain fruit that is figurative.

But, let us suppose that it is the snake they refer to - it doesn't disappear entirely. Because, a little further up in the catechism they mention this event again:

391 Behind the disobedient choice of our first parents lurks a seductive voice, opposed to God, which makes them fall into death out of envy.

The devil is a being of "pure spirit" and the catholics believe that he was an angel that disobeyed god. Now, this fallen angel somehow tempts the first parents, who are in a garden (378). This could presumably only be done in one of two ways: Satan talks directly to Adam and Eve, or he talks through some medium. This medium doesn't have to be a snake, it could have been a salad.

So, they have an overall story of the Fall which they say they believe is literal, but they believe certain aspects of it (possibly the snake part) isn't necessarily true. Now, Maher's joke would still make sense in either of these two cases. It would just have to change a little bit:

"...but when all is said and done, they're adults who believe in a talking salad."

"...but when all is said and done, they're adults who believe in spirits that try to make you do bad stuff."

So, even if they say that they don't believe in every aspect of the story, it smacks of disingenuousness. It's like saying that I don't believe the story of Cinderella getting a dress from a witch, but that there was some sort of other-worldly character that made her those nice shining shoes.

But, they don't even say that the snake isn't real.

I don't see how your second quote bears on my argument that, if they don't believe in the snake, what keeps them from saying that anything else is also figurative (such as the existence of God)?

It's only fair to compare like with like. I'm sure that I can find some people who profess both a belief that evolution is correct and that monkeys gave birth to humans; and yes, I am aware that this means they have a badly flawed idea of what evolution is.

So, in fairness, if you're going to be considering only leading evolutionists in defense of evolution, it makes sense to consider only leading theologians in the question of whether Genesis is literal or figurative.

I agree there is probably someone who says that evolution is true and that people evolved from monkeys. But, to compare like with like here, you would have to find a leading evolutionist who said this, to compare with these leading christians who believe the snake was real:

But the serpent was “clever” when it spoke. It made sense to the Woman. Since Satan was the one who influenced the serpent (Revelation 12:9, 20:2), then it makes sense why the serpent could deliver a cogent message capable of deceiving her.

Shouldn’t the Woman (Eve) Have Been Shocked that a Serpent Spoke? | Answers in Genesis

… the serpent is neither a figurative description of Satan, nor is it Satan in the form of a serpent. The real serpent was the agent in Satan’s hand. This is evident from the description of the reptile in Genesis 3:1 and the curse pronounced upon it in 3:14 [… upon thy belly shalt thou go, and dust shalt thou eat all the days of thy Life].

Who was the Serpent? | creation.com

Maybe it is wrong to label these writers as leading christians (the latter quoted is a theologian, though). So, let's say they are at least popularizers, if that seems fair to you? If so, can you find any popularizer of evolutionary theory who says that man evolved from monkeys?

Comment by CarlJ on Talking Snakes: A Cautionary Tale · 2016-05-04T08:43:55.854Z · LW · GW

Thank you for the source! (I'd upvote but have a negative score.)

If you interpret the story as plausibly as possible, then sure, the talking snake isn't that much different from a technologically superior species that created a big bang, terraformed the earth, implanted it with different animals (and placed misleading signs of an earlier race of animals and plants genetically related to the ones existing), and then created humans in a specially placed area where the trees and animals were micromanaged to suit the humans' needs. All within the realm of the possible.

But the usual story isn't that it was created by technological means, but by supernatural means. God is supposed to have created the world through some magical ability. So, to criticize the christian story is to criticize it as being magical. And if one finds it difficult to believe one part of that story, then all parts should be equally contested.

Regarding Yvain's point - I think it is true that one could just associate "stories about talking animals" with "other stories about animals that everyone knows are patently false" and then not believe in the first story as well. But it is not just in the mind's map of the world that this connection occurs, because the second story is connected to the world. That is, when one thinks about Aesop's Fables, one knows (though not always consciously) that these stories are false.

So, to trigger the mind to establish a connection between Eden and Aesop, the mind makes the connection "stories that people believe are false". But the mind has good arguments not to believe in Aesop's fables - there aren't any talking animals - and if that idea is part of knocking down Eden, then it is a fully rational way to dismiss Christianity. Definitely not thorough, and, again, maybe not a reliable way of convincing others.

Comment by CarlJ on Talking Snakes: A Cautionary Tale · 2016-05-04T08:21:35.767Z · LW · GW

I meant that the origin story is a core element in their belief system, which is evident from the fact that every major christian denomination has some teaching on this story.

If believers actually retreated to the position of invisible dragons, they would have to think about the arguments against the normal "proofs" that there is a god: "The bible, an infallible book without contradiction, says so". And, if most christians came to say that their story is absolutely non-empirically testable, they would have to disown other parts: the miracles of jesus and god, the flood, the parting of the red sea, and anything else that is testable.

That large sub-groups of Christians believe something empirically false does not disprove Christianity as a whole, especially since there is widespread disagreement as to who is a "true" Christian.

I didn't say it would disprove christianity - I said it was a weaker form of the argument: there is an asymmetry between the beliefs of christians and evolutionists. But, most christians seem to believe that there is magic in this world (thanks to god). Sure, if they didn't believe it, they could still call themselves christians, but that type of christianity would probably not get many followers.

Comment by CarlJ on Talking Snakes: A Cautionary Tale · 2016-05-04T07:32:05.809Z · LW · GW

True, there would only be some superficial changes, from a non-believing standpoint. But if you believe that the Bible is literal, then to point this out is to cast doubt on anything else in the book that is magical (or something which could be produced by a more sophisticated race of aliens or such). That is, the probability that this book represents a true story of magical (or much more technologically advanced) beings gets lower, and the probability that it is a pre-modern fairy tale increases.

And that is what the joke is trying to point out, that these things didn't really happen, they are fictional.

Comment by CarlJ on Talking Snakes: A Cautionary Tale · 2016-05-03T17:24:34.158Z · LW · GW

Why doesn't Christianity hinge on there being talking snakes? The snake is part of their origin story, a core element in their belief system. Without it, what happens to original sin? And you will also have to question whether everything else in the bible is also just stories. If it's not the revealed truth of God, why should any of the other stories be real - such as the ones about how Jesus was god's son?

And, if I am wrong and Christianity doesn't need that particular story to be true, then there is still a weaker form of the argument. Namely that a large percentage of christians believe in this story - and two hundred years ago I'd guess almost every christian believed in it - but you cannot find any leading evolutionist who claims that monkeys gave birth to humans.

Comment by CarlJ on Talking Snakes: A Cautionary Tale · 2016-04-09T10:13:30.645Z · LW · GW

How do you misunderstand christianity if you say to people: "There is no evidence of any talking snakes, so it's best to reject any ideas that hinge on there existing talking snakes"?

Again, I'm not saying that this is usually a good argument. I'm saying that those who make it present a logically valid case (which is not the case with the monkey-birthing-human argument) and that those who do not accept it, but believe it to be correct, do so because they feel it isn't enough to convince others in their group that it is a good enough argument.

I'm also trying to make a distinction between "culturally silly" and "scientifically silly". Talking snakes are scientifically silly and sometimes culturally silly.

Comment by CarlJ on Talking Snakes: A Cautionary Tale · 2016-02-08T18:26:17.246Z · LW · GW

Of course theists can say false statements, I'm not claiming that. I'm trying to come up with an explanation of why some theists don't accept a certain form of argument. My explanation is that the theists are embarrassed to join someone who only points out a weak argument that their beliefs are silly. They do not make the argument that the "Talking Snakes" argument is invalid, only that it is not rhetorically effective.

Comment by CarlJ on Talking Snakes: A Cautionary Tale · 2016-01-03T11:05:15.523Z · LW · GW

I just don't think it's as easy as saying "talking snakes are silly, therefore theism is false." And I find it embarrassing when atheists say things like that, and then get called on it by intelligent religious people.

Sure, there is some embarrassment in that others may not be particularly good at communicating, and thus saying something like that is just preaching to the choir, but won't reach the theist.

But, I do not find anything intellectually wrong with the argument, so what one is being called out on is being a bad propagandist, meme-generator or teacher of skepticism. If a theist makes that remark, then she's really saying "Your argument is not good enough to convince those of my tribe". It is not "Your argument is invalid, logically speaking", because that is simply false. The argument, at its best, is saying that:

a) there is no evidence for talking snakes, so reject those beliefs

not

b) the idea of talking snakes is just so silly, because it is designated as silly by our customs, and not because of lack of evidence.

And, therefore, a berating comment from an intelligent theist should instead prompt a discussion of the merits of the case - highlighting the difference between "customarily silly" and "scientifically silly". And if the theist understands the difference, she is on her way to being an atheist, and then the question is really about how to make a better joke about how factually (or morally) silly religious belief is.

Like, adding to the joke with more factually incorrect absurdities. Or, maybe better, ask the theist to come up with a better meme. If they agree on the principle, that the bible is full of falsehoods, they should be allies in the struggle to get people to stop believing in any more falsehoods. Otherwise they should be made fun of for believing in talking snakes.

Comment by CarlJ on The Trolley Problem: Dodging moral questions · 2015-08-26T09:09:36.485Z · LW · GW

Maybe this can work as an analogy:

Right before the massacre at My Lai, a squad of soldiers is pursuing a group of villagers. A scout sees them up ahead at a small river, and he sees that they are splitting up and going in different directions. An elderly person goes to the left of the river and the five other villagers go to the right. The old one is trying to make a large trail in the jungle, so as to fool the pursuers.

The scout waits for a few minutes until the rest of his squad joins him. They are heading up the right side of the river and will probably continue that way, risking killing the five villagers. The scout signals to the others that they should go to the left. The party follows and they soon capture the elderly man and bring him back to the village center, where he is shot.

Should the scout instead have said nothing or kept running forward, so that his team should have killed the five villagers instead?

There are some problems with equating this to the trolley problem. First, the scout cannot know for certain beforehand that his team is going in the direction of the large group. Second, the best solution may be to try and stop the squad, by faking a reason to go back to the village (saying the villagers must have run in a completely different direction).

Comment by CarlJ on Consider the Most Important Facts · 2015-01-11T16:53:52.389Z · LW · GW

And now, 1.5 years later, I've written an extra chapter in the tutorial, but written to be the third chapter:

Survey the Most Relevant Literature

Comment by CarlJ on How To Construct a Political Ideology · 2015-01-11T16:53:03.680Z · LW · GW

And now, 1.5 years later, I've written an extra chapter in the tutorial, but written to be the third chapter:

Survey the Most Relevant Literature

Comment by CarlJ on More "Stupid" Questions · 2013-08-10T18:24:16.842Z · LW · GW

Advocacy is all well and good. But I can't see the analogy between MIRI and Google, not even regarding the lessons. Google, I'm guessing, was subjected to political extortion, for which the lesson was maybe "Move your headquarters to another country" or "To do extraordinary business you need to pay extra taxes". I do however agree that the lesson you spell out is a good one.

If all PR is good PR, maybe one should publish HPMoR and sell some hundred copies?

Comment by CarlJ on More "Stupid" Questions · 2013-08-09T22:03:37.682Z · LW · GW

Would you like to try a non-intertwined conversation? :-)

When you say lobbying, what do you mean and how is it the most effective?

Comment by CarlJ on How To Construct a Political Ideology · 2013-08-07T13:22:51.572Z · LW · GW

And now it's finished! I've tried to make them shorter than the ones I've already posted and with no political leaning. Here they are:

A Tutorial on Creating a Political Ideology

The Domain of Politics

Choose That Which is Most Important to You

Consider the Most Important Facts

Strive Towards the (Second) Best Society

Change the World in the Most Efficient Manner

A Digression on Alliances

Discuss the Most Important Points

How To Construct a Political Ideology - Summary

And here is my own ideology while following this tutorial:

My Own Political Ideology

Comment by CarlJ on Consider the Most Important Facts · 2013-08-07T13:19:11.316Z · LW · GW

Now I have completed the series. I've tried to make them shorter and with no political leaning. Here they are:

A Tutorial on Creating a Political Ideology

The Domain of Politics

Choose That Which is Most Important to You

Consider the Most Important Facts

Strive Towards the (Second) Best Society

Change the World in the Most Efficient Manner

A Digression on Alliances

Discuss the Most Important Points

How To Construct a Political Ideology - Summary

And here is my own ideology while following this tutorial:

My Own Political Ideology

Comment by CarlJ on How To Construct a Political Ideology · 2013-07-23T19:59:11.586Z · LW · GW

Sure, I agree. And I'd add that even those who can show reasonable arguments for their beliefs can get emotional and start to view the discussion as a fight. In most cases I'd guess that those who engage in the debate are partly responsible - by trying to trick the other(s) into traps where they have to admit a mistake, by trying to get them riled up, or by being somewhat rude when dismissing some arguments.

Comment by CarlJ on Consider the Most Important Facts · 2013-07-23T13:25:32.028Z · LW · GW

Some time last night (European time) my Karma score dropped below 2, so I can't finish the series here. I'll continue on my blog instead, for those interested.

Comment by CarlJ on How To Construct a Political Ideology · 2013-07-23T08:55:04.104Z · LW · GW

Unfortunately, my Karma score went below 2 last night (the threshold to be able to post new articles). This might be due to a mistake I made when deciding what facts to discuss in my latest post - it was unnecessary to bring up my own views, I should have picked some random observations. But even if I hadn't posted that article, my score would still be too low, from all the negative reviews on this post. Or from the third post.

In any case, I'll finish the posts on my blog.

Comment by CarlJ on How To Construct a Political Ideology · 2013-07-23T06:43:41.891Z · LW · GW

The explanation isn't for why people care about politics per se, but for why we care so deeply about politics that we respond to adversity much, much more harshly in political environments than in others. Or, our reactions are disproportionate to the actual risks involved. People become angry when discussing whether something should be privatized or whether taxes should be raised. If one believes that there are some general policies that most benefit from, it's really bad to become angry at those whom you really should be allies with.

That's different from what I'm used to here in Sweden. For most people here it's accepted not to vote - if you put a blank vote in the ballot box. Even though most vote (more than 80%), it's not considered bad to not have a political opinion; you can just say you don't understand enough. In the bad old days it seems that there was something of a taboo against asking others what they voted for, which made it easy to skip discussing politics.

Comment by CarlJ on The Domain of Politics · 2013-07-22T21:17:01.692Z · LW · GW

I don't think that the idealistic-pragmatist divide is that great, but if I should place myself in either camp, then it's the latter. From my perspective this model would not, if followed through, suggest to do anything that will not have a positive impact (from one's own perspective).

Comment by CarlJ on The Domain of Politics · 2013-07-22T21:00:54.792Z · LW · GW

I believe I should be able both to show how to think about politics and then use that structure to show that some political action is preferable to none - and by my definition, work on EA and AI is, for those methods I mention above, a political question.

I do have a short answer to the question of why to engage in politics. But it will be expanded in time.

Comment by CarlJ on How To Construct a Political Ideology · 2013-07-22T19:12:04.786Z · LW · GW

I would beg to differ, as to this post not having any content. It affirms that politics is difficult to talk about; that there's a psychological reason for that; that politics has a large impact on our lives; that a rational perspective on politics requires that one can answer certain questions; that the answer to these questions can be called a political ideology and that such ideologies should be constructed in a certain way. You may not like this way of introducing a subject - by giving a brief picture of what it's all about - but that's another story.

I will finish posting this series. I have already written an almost complete version of them, so what's missing is mainly coming up with a few facts/perspectives for some of the posts. Hopefully I'm finished by Thursday.

Comment by CarlJ on Choose that which is most important to you · 2013-07-22T09:58:33.666Z · LW · GW

I agree with your second point, that one should be able to determine the value of incremental steps towards goal A in relation to incremental steps towards goal B, and every other goal, and vice versa. I will fix that, thanks for bringing it up!

If you rank your goals, so that any amount of the first goal is better than any amount of the second goal etc., you might as well just ignore all but the first goal.

Ranking does not imply that. It only implies that I prefer one goal over another, not that coming 3% of the way towards reaching that goal is preferable to reaching 95% of the other. I prefer 0.5 litres of strawberries to one honeydew melon for dessert. But I also prefer one half of a melon to one strawberry.

Comment by CarlJ on The Domain of Politics · 2013-07-21T23:36:43.881Z · LW · GW

Hm, so economy fixing is like trying to make the markets function better? Such as when Robert Shiller created a futures market for house prices, which helped to show that people invested too much in housing?

No, that was not part of my intentions when I thought of this. But I'd guess that they would be or it won't be used by anyone.

The goal of this sequence is to create a model which enables one to think more rationally about political questions. Or, maybe, societal questions (since I am maybe using the word politics too broadly for most here). The intention was to create a better tool of thought.

Comment by CarlJ on The Domain of Politics · 2013-07-21T23:01:37.742Z · LW · GW

The way I see it, all of these - especially the last point, which sounds unfamiliar, do you have a link? - are potentially political activities. Raising funds for AI or some effective charity is a political action, as I've defined it. The model I'm building in this sequence doesn't necessarily say that it's best to engage in normal political campaigns or even to vote. It is a framework to create one's own ideology. And as such it doesn't prescribe any course of action, but what you put into it will.

Comment by CarlJ on How To Construct a Political Ideology · 2013-07-21T21:18:53.785Z · LW · GW

True, changed it. Thanks!

Comment by CarlJ on How To Construct a Political Ideology · 2013-07-21T21:00:18.904Z · LW · GW

Politics may or may not be worth one's while to pursue. The model I'm building will be used to determine if there are any such actions or not, so my full answer to your question will be just that model and after it is built, my ideology which will be constructed by it.

I also have a short answer, but before giving it, I should say that I may be using a too broad definition of politics for you. That is, I would regard getting together to reduce a certain existential risk as a political pursuit. Of course, if one did so alone, there is no political problem to speak of. But one probably needs the support of others to do so. So, if this model would suggest me to engage only in making money and giving to charity, then that would be my political strategy. I believe that it's unlikely that will be the only thing to do, however.

One reason is that politics is somewhat ubiquitous and potentially cheap for most to engage in. Discussing politics - how much they like/dislike the current political leader, whether policy X is good or bad, how wrong someone is to support the opposing team - is normal for at least 70% of the adult population, I'd guess. So most people will have ample chances to discuss politics, and if they can get one sentence across in every conversation, that might be a part of their political strategy as well. Another low-cost strategy is just to announce one's political views and otherwise be a friendly character, unless someone asks for one's opinion.

Another reason is that for some, politics is rewarding in itself. A few will naturally seek to become specialists in politics, just as a hobby.

I agree with you on the issue of politics being very different today than what it was on the savannah, or wherever these instincts evolved. Politics requires a lot more people, more coordination than in the past, it would seem. But, even though it is different, that doesn't mean there is no goal that a lot of people can accomplish by acting in concert. That is, just because we're primed to believe that it is much easier to do than it really is and to believe that any old strategy will work, one shouldn't believe it cannot be done. One shouldn't start believing in it either, of course.

Comment by CarlJ on Public Service Announcement Collection · 2013-07-08T14:18:30.308Z · LW · GW

The S&P 500 has outperformed gold since quantitative easing began. I don't believe there has been a time past four years where a $100 gold purchase would be worth more today than a $100 S&P 500 purchase.

According to Wikipedia, QE1 started in late November 2008. Between November 28th 2008 and December 11th 2012 these were their respective returns:

Gold: 110%
S&P 500: 47.39%

Now, index funds are normally better, but just look at the returns from late 2004 to today:

Gold: 165%
S&P 500: 45%

Gold has been rising more or less steadily over all those years except for 2005 and 2012 (stagnant) and 2013 (falling). It hasn't tripled, but for the 2004-2012 period it was a good buy.
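For reference, here is a minimal sketch of how these holding-period returns are computed. The prices in the example are placeholders, not actual quotes for gold or the S&P 500; with the real start and end prices it reproduces figures like the ones above.

```python
def pct_return(start_price: float, end_price: float) -> float:
    """Simple holding-period return, in percent."""
    return (end_price / start_price - 1.0) * 100.0

# Hypothetical example: an asset going from 100 to 210 over the period
print(f"{pct_return(100.0, 210.0):.1f}%")  # prints 110.0%
```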