Posts

Effective Altruism, YouTube, and AI (talk by Lê Nguyên Hoang) 2018-11-17T19:21:19.247Z
rattumb debate: Are cognitive biases a good thing ? 2018-07-26T07:38:11.797Z
[deleted] 2018-07-24T16:06:17.467Z
The Craft And The Codex 2018-07-09T10:50:14.131Z
Some Remarks on the Nature of Political Conflict 2018-07-04T12:31:07.840Z
spaced repetition & Darwin's golden rule 2018-06-28T10:54:28.259Z
Loss aversion is not what you think it is 2018-06-20T15:56:03.510Z
How many philosophers accept the orthogonality thesis ? Evidence from the PhilPapers survey 2018-06-16T12:11:49.892Z
Dissolving Scotsmen 2018-05-10T16:08:11.862Z

Comments

Comment by Paperclip Minimizer on The Fundamental Theorem of Asset Pricing: Missing Link of the Dutch Book Arguments · 2019-06-02T16:58:38.998Z · LW · GW

How does this interact with time preference? As stated, an elementary consequence of this theorem is that either lending (and pretty much every other capitalist activity) is unprofitable, or arbitrage is possible.

Comment by Paperclip Minimizer on Two Small Experiments on GPT-2 · 2019-02-25T16:10:18.296Z · LW · GW

That would be a good argument if it were merely a language model, but if it can answer complicated technical questions (and presumably any other question), then it must have the necessary machinery to model the external world, predict what it would do in such and such circumstances, etc.

Comment by Paperclip Minimizer on Two Small Experiments on GPT-2 · 2019-02-25T11:47:07.855Z · LW · GW

My point is, if it can answer complicated technical questions, then it is probably a consequentialist that models itself and its environment.

Comment by Paperclip Minimizer on How could "Kickstarter for Inadequate Equilibria" be used for evil or turn out to be net-negative? · 2019-02-23T10:43:46.474Z · LW · GW

But this leads to a moral philosophy question: are time-discounting rates okay, and is your future self actually less important in the moral calculus than your present self?

Comment by Paperclip Minimizer on Two Small Experiments on GPT-2 · 2019-02-23T10:11:08.710Z · LW · GW

If an AI can answer a complicated technical question, then it evidently has the ability to use resources to further its goal of answering said complicated technical question, else it couldn't answer a complicated technical question.

Comment by Paperclip Minimizer on Blackmail · 2019-02-23T10:05:54.678Z · LW · GW

But don't you need to get a gears-level model of how blackmail is bad to think about how dystopian a hypothetical legal-blackmail society is?

Comment by Paperclip Minimizer on Two Small Experiments on GPT-2 · 2019-02-23T10:03:27.301Z · LW · GW

There was discussion of tips on how to produce good Moloch content in the /r/slatestarcodex subreddit.

Comment by Paperclip Minimizer on Two Small Experiments on GPT-2 · 2019-02-23T07:53:08.975Z · LW · GW

The world being turned into computronium in order to solve the AI alignment problem would certainly be an ironic end to it.

Comment by Paperclip Minimizer on Implications of GPT-2 · 2019-02-20T17:55:40.972Z · LW · GW

My point is that it would be a better idea to use as a prompt "What follows is a transcript of a conversation between two people:".
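For concreteness, here is a minimal sketch of what that kind of prompting would look like, assuming the Hugging Face transformers library and the publicly released gpt2 checkpoint (neither of which is part of the original discussion):

    # Minimal sketch: seed GPT-2 with the dialogue-framing prompt suggested above
    # and sample a continuation. Assumes `pip install transformers torch`.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    prompt = "What follows is a transcript of a conversation between two people:"
    out = generator(prompt, max_new_tokens=80, do_sample=True, num_return_sequences=1)
    print(out[0]["generated_text"])

Whether the sampled continuation actually looks like a two-person conversation depends on how much dialogue of that shape was in the training corpus, which is the point being argued here.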

Comment by Paperclip Minimizer on Blackmail · 2019-02-20T10:55:49.337Z · LW · GW

Note the framing. Not “should blackmail be legal?” but rather “why should blackmail be illegal?” Thinking for five seconds (or minutes) about a hypothetical legal-blackmail society should point to obviously dystopian results. This is not subtle. One could write the young adult novel, but what would even be the point.

Of course, that is not an argument. Not evidence.

What? From a consequentialist point of view, of course it is. If a policy (and "make blackmail legal" is a policy) probably has bad consequences, then it is a bad policy.

Comment by Paperclip Minimizer on Implications of GPT-2 · 2019-02-20T10:52:35.194Z · LW · GW

That is how it was trained, but Gurkenglas is saying that GPT-2 could make human-like conversation because Turing test transcripts are in the GPT-2 dataset, whereas it's the conversations between humans in the GPT-2 dataset that would make it possible for GPT-2 to hold human-like conversations and thus potentially pass the Turing Test.

Comment by Paperclip Minimizer on Blackmail · 2019-02-20T10:48:43.469Z · LW · GW

But if the blackmail information is a good thing to publish, then blackmailing is still immoral, because the information should be published and people should be incentivized to publish it, not to withhold it. We, as a society, should ensure that if, say, someone routinely engages in kidnapping children to harvest their organs, and someone else knows this, then she is incentivized to send this information to the relevant authorities and not to keep it to herself, for reasons that are, I hope, obvious.

Comment by Paperclip Minimizer on Implications of GPT-2 · 2019-02-18T21:58:50.781Z · LW · GW

I'm not sure what you're trying to say. I'm only saying that if your goal is to have an AI generate sentences that look like they were written by humans, then you should get a corpus with a lot of sentences that were written by humans, not sentences written by other, dumber programs. I do not see why anyone would disagree with that.

Comment by Paperclip Minimizer on Implications of GPT-2 · 2019-02-18T15:14:22.967Z · LW · GW

It would make much more sense to train GPT-2 using discussions between humans if you want it to pass the Turing Test.

Comment by Paperclip Minimizer on “She Wanted It” · 2018-11-19T19:50:20.687Z · LW · GW

You need to define the terms you use in such a way that what you are saying is useful, i.e. has pragmatic consequences in the real world of actual things, rather than simply being on the same level as arguing by definition.

Comment by Paperclip Minimizer on “She Wanted It” · 2018-11-18T19:05:25.357Z · LW · GW

If you have such a broad definition of the right to exit being blocked, then there is practically no such thing as the right to exit not being blocked, and the claim in your original comment is useless.

Comment by Paperclip Minimizer on “She Wanted It” · 2018-11-18T08:41:55.701Z · LW · GW

I can only link to another Pervocracy post.

Comment by Paperclip Minimizer on “She Wanted It” · 2018-11-18T08:15:20.269Z · LW · GW

Excellent article! You might want to add some trigger warnings, though.

edit: why so many downvotes in so little time?

Comment by Paperclip Minimizer on [deleted post] 2018-10-14T10:39:25.396Z

Hey admins: The "ë" in "Michaël Trazzi" is weird, probably a bug in your handling of Unicode.

Comment by Paperclip Minimizer on Does This Fallacy Have A Name? · 2018-10-14T10:34:26.594Z · LW · GW

Actually we all fall prey to this particular one without realizing it, in one aspect or another.

At least, you do. (With apologies to Steven Brust)

Comment by Paperclip Minimizer on The Tails Coming Apart As Metaphor For Life · 2018-10-14T06:58:04.429Z · LW · GW

A high-Kolmogorov-complexity system is still a system.

Comment by Paperclip Minimizer on The Tails Coming Apart As Metaphor For Life · 2018-10-12T16:37:49.065Z · LW · GW

I'm not sure what it would even mean to not have a Real Moral System. The actual moral judgments must come from somewhere.

Comment by Paperclip Minimizer on The Tails Coming Apart As Metaphor For Life · 2018-10-01T18:15:09.353Z · LW · GW

Using PCA on utility functions could be an interesting research subject for wannabe AI risk experts.
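A purely illustrative sketch of what that could look like (my own framing, not anything from the thread): treat each agent's utility function as a vector of utilities over a fixed, finite set of outcomes, stack those vectors into a matrix, and run PCA on it to see which axes of disagreement dominate.

    # Illustrative sketch with made-up data; assumes numpy and scikit-learn.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    n_agents, n_outcomes = 100, 20
    # Hypothetical data: row i holds agent i's utilities over the same 20 outcomes.
    U = rng.normal(size=(n_agents, n_outcomes))

    pca = PCA(n_components=2)
    coords = pca.fit_transform(U)           # each agent as a point in a 2-D "value space"
    print(pca.explained_variance_ratio_)    # share of variation captured by the top two axes

The hard part, of course, is not the PCA step but getting comparable utility vectors out of real agents in the first place.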

Comment by Paperclip Minimizer on The Tails Coming Apart As Metaphor For Life · 2018-10-01T18:14:21.838Z · LW · GW

I don't see the argument. I have an actual moral judgement that painless extermination of all sentient beings is evil, and so is tiling the universe with meaningless sentient beings.

Comment by Paperclip Minimizer on The Scent of Bad Psychology · 2018-09-19T12:26:39.706Z · LW · GW

don’t trust studies that would be covered in the Weird News column of the newspaper

-- Ozy

Comment by Paperclip Minimizer on Fundamental Philosophical Problems Inherent in AI discourse · 2018-09-18T09:15:59.834Z · LW · GW

Good post. Some nitpicks:

There are many models of rationality from which a hypothetical human can diverge, such as VNM rationality of decision making, Bayesian updating of beliefs, certain decision theories or utilitarian branches of ethics. The fact that many of them exist should already be a red flag on any individual model’s claim to “one true theory of rationality.”

VNM rationality, Bayesian updating, decision theories, and utilitarian branches of ethics all cover different areas. They aren't incompatible and actually fit rather neatly into each other.

As a Jacobin piece has pointed out

This is a Jacobite piece.

A critique of Pro Publica is not meant to be an endorsement of Bayesian justice system, which is still a bad idea due to failing to punish bad actions instead of things correlated with bad actions.

Unless you're omniscient, you can only punish things correlated with bad actions.

Comment by Paperclip Minimizer on Decision Theory with F@#!ed-Up Reference Classes · 2018-09-05T15:12:33.169Z · LW · GW

While this may seem like merely a niche issue, given the butterfly effect and a sufficiently long timeline with the possibility of simulations, it is almost guaranteed that any decision will change.

I think you accidentally words.

Comment by Paperclip Minimizer on Four kinds of problems · 2018-09-05T15:04:09.939Z · LW · GW

.

Comment by Paperclip Minimizer on Open Thread August 2018 · 2018-09-05T13:51:23.009Z · LW · GW

Noticing an unachievable goal may force it to have an existential crisis of sorts, resulting in self-termination.

Do you have reasoning behind this being true, or is this baseless anthropomorphism?

It should not hurt an aligned AI, as it by definition conforms to the humans' values, so if it finds itself well-boxed, it would not try to fight it.

So it is a useless AI?

Comment by Paperclip Minimizer on Open Thread August 2018 · 2018-09-05T13:49:23.083Z · LW · GW

Your whole comment is founded on a false assumption. Look at Bayes' formula. Do you see any mention of whether your probability estimate is "just your prior" or "the result of a huge amount of investigation and very strong reasoning"? No? Well, this means that it doesn't affect how much you'll update.
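For reference, the formula in question, written out (standard Bayes' theorem, added here only for concreteness):

\[
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)}
\]

The prior \(P(H)\) enters only as a number; nothing in the formula depends on whether that number came from a guess or from a long investigation.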

Comment by Paperclip Minimizer on Open Thread August 2018 · 2018-09-05T13:42:57.664Z · LW · GW

"self-aware" can also be "self-aware" as in, say, "self-aware humor"

Comment by Paperclip Minimizer on Open Thread August 2018 · 2018-09-05T13:41:05.952Z · LW · GW

I don't see why negative utilitarians would be more likely than positive utilitarians to support animal-focused effective altruism over (near-term) human-focused effective altruism.

Comment by Paperclip Minimizer on Open Thread August 2018 · 2018-09-05T13:37:16.965Z · LW · GW

This actually made me not read the whole sequence.

Comment by Paperclip Minimizer on Don't Get Distracted by the Boilerplate · 2018-08-15T17:08:25.483Z · LW · GW

[1] It would be rather audacious to claim that this is true for each of the four axioms. For instance, do please demonstrate how you would Dutch-book an agent that does not conform to the completeness axiom!

How can an agent not conform to the completeness axiom? It literally just says "either the agent prefers A to B, or B to A, or is indifferent between the two". Offer me an example of an agent that doesn't conform to the completeness axiom.

Obviously it’s true that we face trade-offs. What is not so obvious is literally the entire rest of the section I quoted.

The entire rest of the section is a straightforward application of the theorem. The objection is that X doesn't happen in real life, and the counter-objection is that something like X does happen in real life, meaning the theorem does apply.

As I explained above, the VNM theorem is orthogonal to Dutch book theorems, so this response is a non sequitur.

Yeah, sorry for being imprecise in my language. Can you just be charitable and see that my statement makes sense if you replace "VNM" with "Dutch book"? Your behavior does not really send the vibe of someone who wants to approach this complicated issue honestly, and more sends the vibe of someone looking for Internet debate points.

More generally, however… I have heard glib responses such as “Every decision under uncertainty can be modeled as a bet” many times. Yet if the applicability of Dutch book theorems is so ubiquitous, why do you (and others who say similar things) seem to find it so difficult to provide an actual, concrete, real-world example of any of the claims in the OP? Not a class of examples; not an analogy; not even a formal proof that examples exist; but an actual example. In fact, it should not be onerous to provide—let’s say—three examples, yes? Please be specific.

  • If I cross the street, I make a bet about whether a car will run over me.
  • If I eat a pizza, I make a bet about whether the pizza will taste good.
  • If I post this comment, I make a bet about whether it will convince anyone.
  • etc.

Comment by Paperclip Minimizer on Don't Get Distracted by the Boilerplate · 2018-08-14T19:33:25.450Z · LW · GW

This one is not a central example, since I’ve not seen any VNM-proponent put it in quite these terms. A citation for this would be nice. In any case, the sort of thing you cite is not really my primary objection to VNM (insofar as I even have “objections” to the theorem itself rather than to the irresponsible way in which it’s often used), so we can let this pass.

VNM is used to show why you need to have utility functions if you don't want to get Dutch-booked. It's not something the OP invented; it's the whole point of VNM. One wonders what you thought VNM was about.

Yes, this is exactly the claim under dispute. This is the one you need to be defending, seriously and in detail.

That we face trade-offs in the real world is a claim under dispute?

Ditto.

Another way of phrasing it is that we can model "ignore" as a choice, and derive the VNM theorem just as usual.

Ditto again. I have asked for a demonstration of this claim many times, when I’ve seen Dutch Books brought up on Less Wrong and in related contexts. I’ve never gotten so much as a serious attempt at a response. I ask you the same: demonstrate, please, and with (real-world!) examples.

Ditto.

Once again, please provide some real-world examples of when this applies.

OP said it: every time we make a decision under uncertainty. Every decision under uncertainty can be modeled as a bet, and Dutch book theorems are derived as usual.

Comment by Paperclip Minimizer on Don't Get Distracted by the Boilerplate · 2018-08-14T18:10:58.664Z · LW · GW

Is Aumann's agreement theorem robust to untrustworthiness?

Comment by Paperclip Minimizer on Top Left Mood · 2018-08-14T15:47:53.857Z · LW · GW

This bidimensional model is weird.

  • I can imagine pure mania: assigning a 100% probability to everything going right
  • I can imagine pure depression: assigning a 100% probability to everything going wrong
  • I can imagine pure anxiety: a completely flat probability distribution over things going right or wrong

But I can't imagine pure top left mood. This leads me to think that the mood square is actually a mood triangle, and that there is no top left mood, only a spectrum of moods between anxiety and mania.

Comment by Paperclip Minimizer on Survey: Help Us Research Coordination Problems In The Rationalist/EA Community · 2018-07-26T16:47:35.033Z · LW · GW

cough cough

Comment by Paperclip Minimizer on [deleted] · 2018-07-25T18:50:27.161Z · LW · GW

This is excellent advice. Are you a moderator?

Comment by Paperclip Minimizer on [deleted] · 2018-07-25T17:08:05.098Z · LW · GW

I don't know. This makes me anxious about writing critical posts in the future. I was about to start writing another post that is similarly a criticism of an article written by someone else, and I don't think I'm going to do so.

Comment by Paperclip Minimizer on [deleted] · 2018-07-25T10:35:01.694Z · LW · GW
Comment by Paperclip Minimizer on [deleted] · 2018-07-25T06:59:44.170Z · LW · GW

Can I ask you what you mean by this?

Comment by Paperclip Minimizer on The Craft And The Codex · 2018-07-11T20:53:27.761Z · LW · GW

Never heard of a prank like this; this sounds weird.

Comment by Paperclip Minimizer on The Craft And The Codex · 2018-07-10T10:53:37.022Z · LW · GW

More generally, commenting isn't a good way to train oneself as a rationalist, but blogging is.

Comment by Paperclip Minimizer on Some Remarks on the Nature of Political Conflict · 2018-07-09T09:48:19.234Z · LW · GW

I'm not sure what you mean.

Comment by Paperclip Minimizer on Some Remarks on the Nature of Political Conflict · 2018-07-05T08:12:56.319Z · LW · GW

This isn't what "conflict theory" means. Conflict theory is a specific theory about the nature of conflict, one that says conflict is inevitable. Conflict theory doesn't simply mean that conflict exists.

Comment by Paperclip Minimizer on Some Remarks on the Nature of Political Conflict · 2018-07-04T18:32:12.357Z · LW · GW

I don't agree with your pessimism. To re-use your example, if you formalize the utility created by freedom and equality, you can compare both and pick the most efficient policies.

Comment by Paperclip Minimizer on spaced repetition & Darwin's golden rule · 2018-06-28T17:23:33.656Z · LW · GW

Fixed ;)

Comment by Paperclip Minimizer on Loss aversion is not what you think it is · 2018-06-24T11:47:32.633Z · LW · GW

The author explains very clearly what the difference is between "people hate losses more than they like gains" and loss aversion. Loss aversion is people hating losing $1 when they have $2 more than they like gaining $1 when they have $1, even though in both cases this is the difference between having $1 and having $2.
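To put that distinction in symbols (my own illustration, using a standard prospect-theory-style value function, not anything from the linked post): an ordinary utility function over total wealth cannot produce the asymmetry, since

\[
u(\$2) - u(\$1) \;=\; -\bigl(u(\$1) - u(\$2)\bigr),
\]

whereas a value function defined relative to a reference point \(r\) (current wealth), with a loss-aversion coefficient \(\lambda > 1\),

\[
v(x) \;=\;
\begin{cases}
x - r, & x \ge r,\\
\lambda\,(x - r), & x < r,
\end{cases}
\]

makes losing $1 from a reference point of $2 feel worse (magnitude \(\lambda\)) than gaining $1 from a reference point of $1 feels good (magnitude 1), even though both moves span the same interval of wealth.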

Comment by Paperclip Minimizer on Remembering the passing of Kathy Forth. · 2018-06-23T16:35:54.954Z · LW · GW

I think we do disagree on whether it's a good idea to widely spread the message "HEY SUICIDAL PEOPLE HAVE YOU REALIZED THAT IF YOU KILL YOURSELF EVERYONE WILL SAY NICE THINGS ABOUT YOU AND WORK ON SOLVING PROBLEMS YOU CARE ABOUT LET’S MAKE SURE TO HIGHLIGHT THIS EXTENSIVELY".