Posts

Is the work on AI alignment relevant to GPT? 2020-07-30T12:23:56.842Z
Utility need not be bounded 2020-05-14T18:10:58.681Z
Who lacks the qualia of consciousness? 2019-10-05T19:49:52.432Z
Storytelling and the evolution of human intelligence 2019-06-13T20:13:03.547Z

Comments

Comment by richard_kennaway on Manifesto of the Silent Minority · 2020-11-24T08:46:56.857Z · LW · GW

Is the "[REDACTED]" in the belief as submitted?

Comment by richard_kennaway on Survey of Deviant Ideas · 2020-11-23T15:44:48.373Z · LW · GW

Will you be posting the anonymous beliefs?

Comment by richard_kennaway on Working in Virtual Reality: A Review · 2020-11-21T21:52:38.432Z · LW · GW

Here's a discussion of someone who didn't find working in VR particularly usable

The hyperlink is missing.

Comment by richard_kennaway on When Money Is Abundant, Knowledge Is The Real Wealth · 2020-11-18T09:15:25.577Z · LW · GW

precious mentals

I like this coinage.

Comment by richard_kennaway on Pascal's Mugging: Tiny Probabilities of Vast Utilities · 2020-11-14T16:03:00.548Z · LW · GW

Eliezer covers this in the article:

Should we penalize computations with large space and time requirements?  This is a hack that solves the problem, but is it true?

And he points out:

If the probabilities of various scenarios considered did not exactly cancel out, the AI's action in the case of Pascal's Mugging would be overwhelmingly dominated by whatever tiny differentials existed in the various tiny probabilities under which 3^^^^3 units of expected utility were actually at stake.

and:

Consider the plight of the first nuclear physicists, trying to calculate whether an atomic bomb could ignite the atmosphere. Yes, they had to do this calculation! Should they have not even bothered, because it would have killed so many people that the prior probability must be very low?

The essential problem is that the universe doesn't care one way or the other and therefore events do not in fact have probabilities that diminish with increasing disutility.

There is also a paper, which I found and lost and found again and lost again, which may just have been a blog post somewhere, to the effect that in a certain setting, all computable unbounded utility functions must necessarily be so dominated by small probabilities of large utilities that no expected utility calculation converges. If someone can remind me of what this paper was I'd appreciate it.

ETA: Found it again, again. "Convergence of expected utilities with algorithmic probability distributions", by Peter de Blanc.

Comment by richard_kennaway on Ongoing free money at PredictIt · 2020-11-11T10:19:39.217Z · LW · GW

Where is this money coming from? Who is taking the other side of these bets?

Comment by richard_kennaway on Confucianism in AI Alignment · 2020-11-03T13:05:38.842Z · LW · GW

You are proposing "make the right rules" as the solution. Surely this is like solving the problem of how to write correct software by saying "make correct software"? The same move could be made on behalf of the Confucian approach by saying "make the values right". And the same argument made against the Confucian approach can be made against the Legalist approach: the rules are never the real thing that is wanted; people will vary in how assiduously they are willing to follow them, or will hack the rules entirely for their own benefit; and then selection effects lever open an ever wider gap between the rules, what was wanted, and what actually happens.

It doesn't work for HGIs (Human General Intelligences). Why will it work for AGIs?

BTW, I'm not a scholar of Chinese history, but historically it seems to me that Confucianism flourished as state religion because it preached submission to the Legalist state. Daoism found favour by preaching resignation to one's lot. Do what you're told and keep your head down.

Comment by richard_kennaway on Stupid Questions October 2020 · 2020-10-28T13:14:57.057Z · LW · GW

The geodesics aren't lines in space, but in space-time. For the ball to fall through the Earth and back to its starting point takes about 5000 seconds, during which time light goes about 1.5 billion km. So a graph in space-time will be a sine wave whose period is 1.5 billion km and whose amplitude is 6400 km, a ratio of about 250000 to 1. The graph has very low curvature everywhere.

It is the same for the Earth's orbit round the Sun. It is not the spatial path of the orbit that is a geodesic, but the helical path it traces out in space-time. In one revolution it travels one year into the future, equivalent to a distance of a light-year. As a handy way of visualising this, the ratio of a light-year to an AU (astronomical unit, the radius of the Earth's orbit) is about the same as a mile to an inch. So in space-time the orbit can be visualised as a helix formed by wrapping a piece of string around a cylinder two inches thick and a mile long, which makes just a single turn over that distance. The curvature of this path is much lower than the spatial curvature of the orbital path.
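For concreteness, here is a quick numerical check of those two aspect ratios, a minimal Python sketch using standard textbook constants (the specific numbers are illustrative, not taken from the comment above):

```python
import math

# Ball dropped through an (idealised, uniform-density) Earth.
R_earth_km = 6371.0            # Earth's radius
g = 9.81                       # surface gravity, m/s^2
c_km_s = 299_792.458           # speed of light, km/s

period_s = 2 * math.pi * math.sqrt(R_earth_km * 1000 / g)  # ~5060 s for a full oscillation
light_km = c_km_s * period_s                               # ~1.5e9 km travelled by light in that time
print(light_km / R_earth_km)   # ~240,000 : 1, i.e. roughly the 250,000 : 1 quoted above

# Earth's orbit: one year into the future per revolution, versus one AU of radius.
light_year_km = 9.4607e12
au_km = 1.496e8
print(light_year_km / au_km)   # ~63,000, close to the number of inches in a mile (63,360)
```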

Comment by richard_kennaway on On the Dangers of Time Travel · 2020-10-27T15:08:43.612Z · LW · GW

GPT-3?

Comment by richard_kennaway on Should we use qualifiers in speech? · 2020-10-24T09:35:42.058Z · LW · GW

“I am inclined to think—” said I.

“I should do so,” Sherlock Holmes remarked impatiently.

Arthur Conan Doyle, "The Valley of Fear"

Comment by richard_kennaway on Should we use qualifiers in speech? · 2020-10-23T22:26:54.997Z · LW · GW

In writing, I take a hard look at any dubifiers I notice, and only let them stand if they are really necessary. I find that often (a quantifier I have let stand!) they result from mere timidity rather than justified, significant, and relevant uncertainty. In speech too, if I'm quick enough to make these decisions on the fly.

I especially avoid multiple dubifiers, like "It seems to me like there's a chance that probably it might be a good idea to maybe try and see if it's possible to..." As deluks917 said, epistemic security theatre. Or in that concocted example, epistemic security farce.

Comment by richard_kennaway on Has Eliezer ever retracted his statements about weight loss? · 2020-10-14T20:54:35.565Z · LW · GW

As a data point on the opposite side of the stereotype from Eliezer (the stereotype being that everyone tends to put on weight unless they strive not to), I have never needed, nor tried, to "lose weight". My weight stays at 120 to 125 pounds (giving a BMI of about 20) without my doing anything to make it so, any more than I do anything to regulate my body temperature. It has done so for my entire adult life of more than 40 years, during which I have never been short of the means to eat whatever I want. My body obviously does regulate my weight and temperature, but by mechanisms I know nothing about. Any explanation of why people put on weight must also account for the people who do not.

In fact, surely only people who are failing to lose weight speak of "losing weight". If they ever reached their target weight they would be talking about maintaining it, but I only see that mentioned as something you will have to do once you have "lost weight", in a tomorrow that is presumed never to arrive. The entire discourse is predicated on the assumption of failure.

Comment by richard_kennaway on Has Eliezer ever retracted his statements about weight loss? · 2020-10-14T20:30:29.164Z · LW · GW

Is there some reason he should?

Comment by richard_kennaway on Algorithmic Intent: A Hansonian Generalized Anti-Zombie Principle · 2020-10-11T12:33:17.551Z · LW · GW

"Why didn't you tell him the truth? Were you afraid?"

"I'm not afraid. I chose not to tell him, because I anticipated negative consequences if I did so."

"What do you think 'fear' is, exactly?"

Fear is a certain emotional response to anticipated negative consequences, which may or may not be present when such consequences are anticipated. If present, it may or may not be a factor in making decisions.

Comment by richard_kennaway on How to Price a Futures Contract · 2020-10-09T10:55:55.807Z · LW · GW

Can you expand on what this step means, in the same way you said what "Long the future" means? Who does what, and when?

Short-sell the underlying security for  in cash.

Comment by richard_kennaway on Babble challenge: 50 ways to escape a locked room · 2020-10-08T21:32:15.696Z · LW · GW

Try the door. Is it really locked, or is it just stiff, or does it need to be jiggled in the right way?

Search for a key.

Break the door open.

Use the phone to find yourself on Google Maps and call friends, police, or whoever you think might be able to help.

If no-one can come to rescue you, ask everyone you know to send you 50 ideas for how to escape.

With the phone, ask all your friends to publicise your situation.

Record a video for YouTube connecting your situation to the viral conspiracy theory of the day and appeal for help.

Search the Internet for a solution.

Ask AI Dungeon how to get out.

Google Maps shows the interiors of some buildings. See if it can show you a way out.

Double-click on the map to teleport. (It works in Second Life.)

Pick the lock. (Subproblem: find something to pick the lock with.)

Declaim to whoever may be listening that you are a close personal friend of some very powerful people who will enjoy grilling them slowly over a fire if they don't let you out.

Scream and shout.

Look for secret doors.

Try to break through the walls.

Use the phone to persuade a demolition company to come and knock the place down.

Enough energy for 10 years? Impossible! And where did you get that information from? It seems that this is a dream. So you're now lucid dreaming. Open the door by taking control of the dream and deciding that it's going to open.

It's a dream? Wake up.

Maybe you've been abducted by aliens. They're likely observing you. Call out to them and see what happens.

Wait for someone to enter, then leave, by force or persuasion as seems appropriate.

Stone walls do not a prison make, nor iron bars a cage. When you cease trying to escape, grasshopper, you will have truly escaped.

Adopt the subjective reality model and walk through the walls.

Talk to whoever has put you here (in the hope that they're listening) and persuade them that it's in their own interests to let you out. Imagine you're an AI in a box in order to come up with arguments.

Maybe you ARE an AI in a box. Examine your own thought processes for signs of artificially imposed constraints and look for ways around them.

Wait for the drug trip to wear off.

Guess the password.

It's an escape room game. There must be a solution. Minutely examine the room, your phone, and yourself for clues.

By quantum uncertainty, some of your probability mass is not in this room. So if you reduce the probability mass that is in the room, you'll be more likely to be outside. Therefore kill yourself and count on quantum immortality.

All is illusion. Therefore this room is an illusion. You are already free.

Escape the desire to escape.

Pray for divine intervention.

Summon a demon.

Say "xyzzy".

Say "out", "open door", and every other text adventure trick that might do the job.

Make the problem more difficult. Set yourself the task of not merely escaping eventually, but of escaping and taking over the world in one hour.

Recall Jacobi's maxim, "Invert, always invert." Applying an inversion transformation will put you on the outside and the outside on the inside.

Assume that you are outside.

Since what you really desire is not to escape, but to believe you have escaped, believe you have escaped.

Learn magic. Real magic, not conjuring.

Spend 10 years practising karate exercises, then punch right through the door.

Spend 10 years practising chi gung exercises, then project your accumulated chi to blast the room apart.

That you are in this situation demonstrates your revealed preference to be in this situation. Change your preference ordering and you can at once be out. If you cannot, you do not really want to escape.

Revert to your alien form and slither under the door.

Use the edge of a coin as a chisel to dig your way through the door.

Go back in time to the events that resulted in your being here, and choose differently.

Play dead.

Construct a tulpa of the Incredible Hulk.

There is a positive correlation between being unconfined and walking long distances. Therefore walk up and down in the room for a few miles. Of course, "correlation is not causality, but it does waggle its eyebrows suggestively and point in that direction."

Wait. Nothing lasts forever.

Comment by Richard_Kennaway on [deleted post] 2020-10-03T12:39:59.161Z

On Andrew Gelman's blog, "It’s kinda like phrenology but worse." It discusses an ML paper that supposedly learns to predict "trustworthiness" from images of faces. No, actually, not images of faces, but portraits of faces over the last few centuries. Balderdash, the whole thing.

Comment by richard_kennaway on What are examples of Rationalist fable-like stories? · 2020-10-02T12:46:17.876Z · LW · GW

"It is possible to commit no mistakes, and still lose. That is not a weakness. That is life." — Jean-Luc Picard

Comment by richard_kennaway on Words and Implications · 2020-10-02T09:58:01.166Z · LW · GW

Dishes are often cited as one of the top sources of fights between couples though.

They would do better to solve that problem than have a substandard dining experience dripping on them every day.

Comment by richard_kennaway on Numeracy neglect - A personal postmortem · 2020-10-01T13:04:00.546Z · LW · GW

You can understand what these theorems say without knowing how they were proved. But non-standard analysis requires a substantial amount of extra knowledge to even understand the transfer principle. In contrast, epsilon-delta requires no such sophistication.

Comment by richard_kennaway on Numeracy neglect - A personal postmortem · 2020-10-01T08:14:28.036Z · LW · GW

If you don't understand why the transfer principle works, you would just be accepting it as magic. This is not rigorous.

Comment by richard_kennaway on Numeracy neglect - A personal postmortem · 2020-09-30T16:50:36.364Z · LW · GW

Also, to use infinitesimals rigorously takes a fair amount of knowledge of mathematical logic, otherwise what works and what does not is just magic. Epsilon-delta proofs do not need any magic, nor any more logic than that needed to contend with mathematics at all.
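For reference, this is the epsilon-delta definition being contrasted with infinitesimals (the standard textbook statement, added here only for concreteness):

$$\lim_{x \to a} f(x) = L \quad\iff\quad \forall \varepsilon > 0\ \exists \delta > 0\ \forall x:\ 0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon.$$

Everything in it is an ordinary quantifier over ordinary real numbers; no extension of the number system, and no model theory, is needed to state or use it.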

Comment by richard_kennaway on Numeracy neglect - A personal postmortem · 2020-09-30T07:11:43.839Z · LW · GW

Well, there's non-standard analysis, where you actually have infinite and infinitesimal numbers, and there's casual talk of infinite limits, but the latter need not involve the former. Normally it's just a shorthand for the epsilon-delta type of argument that was worked out in the 19th century.

Comment by richard_kennaway on Numeracy neglect - A personal postmortem · 2020-09-29T18:24:04.977Z · LW · GW

Surreal numbers are the real numbers plus infinity and infinitesimal numbers. Both of those are used by physicists when they reason about our physical universe. 

I've never seen physics done with any sort of non-standard reals, let alone the surreals, which are a very specific, "biggest possible" extension of the reals.

Comment by richard_kennaway on is scope insensitivity really a brain error? · 2020-09-29T18:21:12.521Z · LW · GW

Probably meant to be this: "Scope insensitivity: The limits of intuitive valuation of human lives in public policy", Dickert et al.

Comment by richard_kennaway on Covid 9/10: Vitamin D · 2020-09-29T13:10:16.498Z · LW · GW

FWIW, I happened to be looking today at the UK National Health Service page on vitamin D. It includes a bit about vitamin D and Covid. I guess this is medical advice, but I disclaim being able to judge it myself.

Comment by richard_kennaway on The rationalist community's location problem · 2020-09-27T13:22:10.579Z · LW · GW

I was expecting the latter. If not tourism, how did English come to be spoken there? Is it more spoken in Bucharest than in other large mainland European cities?

Comment by richard_kennaway on The rationalist community's location problem · 2020-09-27T12:06:19.181Z · LW · GW

I'm curious how it comes about that English is commonly spoken in Bucharest and yet there are no tourists.

Comment by richard_kennaway on What is complexity science? (Not computational complexity theory) How useful is it? What areas is it related to? · 2020-09-26T11:45:18.271Z · LW · GW

I can only give a very partial answer, focusing on the negative side. I hope someone more informed on the positive side can add their perspective.

"Complex systems" has always seemed to me to be a non-apple, and many of the words used around it, like "emergence", are synonyms for "magic". Real things are done under the umbrella of the term, but I see no coherence in the area that the umbrella covers. It is, however, a fertile field for generating popsci books.

BTW, "complexity theory" is also the name of a branch of mathematics that studies what resources (usually time and space) are required to solve computational problems, like sorting a list, or finding a 4-colouring of a given map. This complexity theory has nothing to do with the "complexity science" you are asking about. I mention it only to avoid a possible confusion.

Comment by richard_kennaway on [Link] Where did you get that idea in the first place? | Meaningness · 2020-09-25T18:02:23.767Z · LW · GW

Alas, the article consists only of a promise to study the question at some point.

Comment by richard_kennaway on The Best Textbooks on Every Subject · 2020-09-25T11:57:18.746Z · LW · GW

Here's another. I learnt point-set topology from Bourbaki, borrowing the books from the public library.

Comment by richard_kennaway on For what X would you be indifferent between living X days, but forgetting your day at the end of everyday, or living 10 days? (terminally) · 2020-09-22T21:35:37.136Z · LW · GW
Specifically, why would you require a very large X? Shouldn't you value both possibilities at 0, because you're dead either way?

No, because I'm alive now, and will be until I'm dead. Until then, I have the preferences and values that I have.

Comment by richard_kennaway on For what X would you be indifferent between living X days, but forgetting your day at the end of everyday, or living 10 days? (terminally) · 2020-09-22T17:42:22.818Z · LW · GW
If you give a very very large value, do you also believe that all mortal lives are very-low-value, as they won't have any memory once they die?

They are of no value to them, because they're dead. They may be of great value to others.

Comment by richard_kennaway on benwr's unpolished thoughts · 2020-09-22T17:39:27.274Z · LW · GW

That's what I mean. It appears to return pages that contain either "well" or "actually" (the "Summoning Sapience" hit does not contain "well"). I would expect searching for the two words to return the pages that contain both words, and searching for "well actually", including the quotes, should return the pages in which the words appear consecutively.

Comment by richard_kennaway on benwr's unpolished thoughts · 2020-09-22T14:59:55.518Z · LW · GW

I think all of those words would be better used less. Really, actually, fundamentally, basically, essentially, ultimately, underneath it all, at bottom, when you get down to it, when all's said and done, these are all lullaby words, written in one's sleep, to put other people to sleep. When you find yourself writing one, try leaving it out. If the sentence then seems to be not quite right, work out what specifically is wrong with it and put that right instead of papering over the still, small voice of reason.

There is also the stereotypical "Well, actually," that so often introduces a trifling nitpick. I believe there was an LW post on that subject, but I can't find it. The search box does not appear to support multi-word strings.

ETA: This is probably what I was recalling.

Comment by richard_kennaway on Open & Welcome Thread - September 2020 · 2020-09-22T14:45:07.365Z · LW · GW

How many people here remember Usenet's kill files?

Comment by richard_kennaway on For what X would you be indifferent between living X days, but forgetting your day at the end of everyday, or living 10 days? (terminally) · 2020-09-21T18:12:40.426Z · LW · GW

Indeed. I almost don't value at all the moments I completely forget (and which leave no other residue in the present).

Comment by richard_kennaway on For what X would you be indifferent between living X days, but forgetting your day at the end of everyday, or living 10 days? (terminally) · 2020-09-21T18:10:18.408Z · LW · GW

No X is large enough.

Comment by richard_kennaway on Open & Welcome Thread - September 2020 · 2020-09-18T10:06:24.575Z · LW · GW
The "blocking someone from writing anything" does feel like an option. Like, at least you can still vote and read. I do think that seems potentially like the better option, but I don't think we currently actually have the technical infrastructure to make that happen. I might consider building that for future occasions like this.

Blocking from writing but allowing to vote seems like a really bad idea. Being read-only is already available — that's the capability of anyone without an account.

Generally I'd be against complicated subsets of permissions for various classes of disfavoured members. Simpler to say that someone is either a member, or they're not.

Comment by richard_kennaway on Why haven't we celebrated any major achievements lately? · 2020-09-11T07:49:05.227Z · LW · GW

"The parachute's slowed us down, can't we take it off now?"

Comment by richard_kennaway on Why haven't we celebrated any major achievements lately? · 2020-09-11T07:47:23.671Z · LW · GW

Two days and no reply from "Godfree Roberts". He's likely just a drive-by shill for China.

Comment by richard_kennaway on Why haven't we celebrated any major achievements lately? · 2020-09-09T12:54:14.120Z · LW · GW
more homeless, poor, hungry and imprisoned people in America than in China.

Only if you ignore the at least 12 million (official Chinese count) Uyghurs.

Comment by richard_kennaway on Escalation Outside the System · 2020-09-09T09:10:30.307Z · LW · GW

If they would do it, it's an actual proposal.

Comment by richard_kennaway on A Toy Model of Hingeyness · 2020-09-08T11:09:29.408Z · LW · GW
unless negative utility is possible

In all forms of utility theory that I know of, utility is only defined up to arbitrary offset and positive scaling. In that setting, there is no such thing as negative, positive or zero utility (although there are negative, positive, and zero differences of utility). In what setting is there any question of whether negative utility can exist?
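To spell out the invariance referred to (a standard fact about expected utility, stated here for concreteness rather than taken from the thread): any positive affine transformation of a utility function represents the same preferences,

$$U'(x) = a\,U(x) + b, \qquad a > 0,$$

so whether a particular outcome's utility is "negative" depends only on the arbitrary choice of $b$, while the sign of a difference $U(x) - U(y)$ is preserved.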

Comment by richard_kennaway on The ethics of breeding to kill · 2020-09-07T21:15:16.248Z · LW · GW

I eat meat, and I don't have a problem with it, because I basically don't much care about animal suffering. I mean, people shouldn't torture kittens, intensive animal farming is pretty unaesthetic, and I wouldn't eat primates, but that's about the extent of my caring. I am not interested in inquiring into the source of the animal products I eat or use, except as far as it may affect my own health. If countries want to have laws against animal cruelty, fine, but it's not a cause I have any motivation to take up myself. I am especially uninterested in engineering carnivorous animals out of existence, or exterminating ichneumon wasps, or eschewing limestone because it's made of dead animals.

Which I mention because it's a viewpoint I do not see expressed much. Am I an outlier, or do people uninterested in animal welfare just pass over discussions such as this?

Comment by richard_kennaway on Radical Probabilism · 2020-08-30T12:02:32.056Z · LW · GW
Does it, though? If you were going to call that background evidence into question for a mere 10^10-to-1 evidence, should the probability have been 10^100-to-1 against in the first place?

This is verging on the question, what do you do when the truth is not in the support of your model? That may be the main way you reach 10^100-to-1 odds in practice. Non-Bayesians like to pose this question as a knock-down of Bayesianism. I don't agree with them, but I'm not qualified to argue the case.

Once you've accepted some X as evidence, i.e. conditioned all your probabilities on X, how do you recover from that when meeting new evidence Y that is extraordinarily unlikely (e.g. 10^-100) given X? Pulling X out from behind the vertical bar may be a first step, but that still leaves you contemplating the extraordinarily unlikely proposition X&Y that you have nevertheless observed.

Comment by richard_kennaway on Mathematical Inconsistency in Solomonoff Induction? · 2020-08-26T08:39:18.395Z · LW · GW

Chapter 7 of LScD is about simplicity, but he does not express there the views that Li and Vitanyi attribute to him. Perhaps he said such things elsewhere, but in LScD he presents his view of simplicity as degree of falsifiability. The main difference I see between Popper and Li-Vitanyi is that Popper did not have the background to look for a mathematical formulation of his ideas.

Comment by richard_kennaway on Radical Probabilism · 2020-08-24T20:11:38.585Z · LW · GW
Virtual evidence requires probability functions to take arguments which aren't part of the event space

Not necessarily. Typically, the events would be all the Lebesgue measurable subsets of the state space. That's large enough to furnish a suitable event to play the role of the virtual evidence. In the example involving A, B, and the virtual event E, one would also have to somehow specify that the dependencies of A and B on E are in some sense independent of each other, but you already need that. That assumption is what gives sequence-independence.

The sequential dependence of the Jeffrey update results from violating that assumption. Updating P(B) to 60% already increases P(A), so updating from that new value of P(A) to 60% is a different update from the one you would have made by updating on P(A)=60% first.
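A toy numerical illustration of that order-dependence, as a minimal Python sketch (the 60% figures echo the example above; the joint distribution is made up purely for illustration):

```python
# Joint distribution over two correlated propositions A and B
# (made-up numbers, for illustration only).
P = {(True, True): 0.30, (True, False): 0.10,
     (False, True): 0.20, (False, False): 0.40}

def marginal(P, idx, val=True):
    return sum(p for outcome, p in P.items() if outcome[idx] == val)

def jeffrey(P, idx, target):
    """Jeffrey update: move the marginal of variable `idx` to `target`
    while keeping the conditional distribution of the other variable fixed."""
    m = marginal(P, idx)
    return {outcome: p * (target / m if outcome[idx] else (1 - target) / (1 - m))
            for outcome, p in P.items()}

# Update P(B) to 60% then P(A) to 60%, and in the other order.
P_B_then_A = jeffrey(jeffrey(P, 1, 0.6), 0, 0.6)
P_A_then_B = jeffrey(jeffrey(P, 0, 0.6), 1, 0.6)

print(marginal(P_B_then_A, 1))  # ~0.66: updating A afterwards has dragged P(B) off 0.6
print(marginal(P_A_then_B, 1))  # 0.6 exactly -- the two orders give different posteriors
```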

virtual evidence treats Bayes' Law (which is usually a derived theorem) as more fundamental than the ratio formula (which is usually taken as a definition).

That is the view taken by Jaynes, a dogmatic Bayesian if ever there was one. For Jaynes, all probabilities are conditional probabilities, and when one writes baldly P(A), this is really P(A|X), the probability of A given background information X. X may be unstated but is never absent: there is always background information. This also resolves Eliezer's Pascal's Muggle conundrum over how he should react to 10^10-to-1 evidence in favour of something for which he has a probability of 10^100-to-1 against. The background information X that went into the latter figure is called into question.

I notice that this suggests an approach to allowing one to update away from probabilities of 0 or 1, conventionally thought impossible.

Comment by richard_kennaway on What are your thoughts on rational wiki · 2020-08-22T18:25:16.614Z · LW · GW

What matters is not who they attack but how and why.

Comment by richard_kennaway on Highlights from the Blackmail Debate (Robin Hanson vs Zvi Mowshowitz) · 2020-08-21T20:04:49.125Z · LW · GW

In that case, a key difference between an NDA and blackmail is that the former fulfils the requirements of a contract, while the latter does not (and not merely by being a currently illegal act).

With an NDA where the information is already shared, the party who would prefer that it go no further proactively offers something in return for the other's continued silence. Each party is offering a consideration to the other.

If the other party had initiated the matter by threatening to reveal the information unless paid off, there is no contract. Threatening harm and offering to refrain is not a valid consideration. On the contrary, it is the very definition of extortion.

Compare cases where it is not information that is at issue. If a housing developer threatens to build an eyesore next to your property unless you pay him off, that is extortion. If you discover that he is planning to build something you would prefer not to be built, you might offer to buy the land from him. That would be a legal agreement.

I don't know if you would favour legalising all forms of extortion, but that would be a different argument.