Gurkenglas's Shortform

post by Gurkenglas · 2019-08-04T18:46:34.953Z · score: 5 (1 votes) · LW · GW · 6 comments

comment by Gurkenglas · 2019-09-04T16:10:30.928Z · score: 5 (2 votes) · LW · GW

I've found a category-theoretic model of BCI-powered reddit!

Fix a set of posts. Its subsets form a category whose morphisms are the inclusions, each mapping every element to itself. Call this category's forgetful functor to Set f. Each BCI can measure its user, for example by producing a vector of neuron activations. A brain's possible measurements form a space, and these spaces form a category. (Its morphisms would translate between brains, and each morphism would keep track of how well it preserves meaning.) Call this category's forgetful functor to Set g.
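
A minimal toy encoding of the first category, just to fix intuitions (my own sketch, with a made-up three-post set): objects are subsets of the fixed set of posts, and between two objects there is at most one morphism, the inclusion.

Posts = frozenset({"p1", "p2", "p3"})  # hypothetical fixed set of posts

def has_morphism(a: frozenset, b: frozenset) -> bool:
    """The unique inclusion A -> B exists iff A is a subset of B (both subsets of Posts)."""
    return a <= b <= Posts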

The comma category f/g has users as its objects (each a Set-function from the set of posts they've seen to their measured reactions), and each morphism relates a user to another brain that has seen more posts and reacted similarly on the posts the first user saw.
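
A minimal sketch, in Python, of how such an object and morphism could be encoded. The types, the tolerance parameter, and the "reacted similarly" test are my own illustrative choices, not anything from the post.

from typing import Dict, List

Post = str
Measurement = List[float]  # e.g. a vector of neuron activations

# An object of f/g: one user's measured reactions, keyed by the posts they've seen.
User = Dict[Post, Measurement]

def is_morphism(a: User, b: User, tolerance: float = 0.1) -> bool:
    """Is there a morphism a -> b: has b seen every post a saw, reacting similarly to it?"""
    return all(
        post in b and all(abs(x - y) <= tolerance for x, y in zip(a[post], b[post]))
        for post in a
    )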

The product in f/g tells you how to translate between a set of brains. A user could telepathically tell another what headspace they're in, so long as the other has ever demonstrated a corresponding experience. Note that a Republican sending his love for Republican posts might lead to a Democrat receiving his own hatred for Republican posts.
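
Continuing the sketch above, a hypothetical helper (my illustration, not a literal categorical product) for the translation direction: find the shared post the sender has reacted to most similarly to their current headspace, then hand over the receiver's reaction to that same post.

def translate_headspace(current: Measurement, sender: User, receiver: User) -> Measurement:
    """Send the sender's current headspace to the receiver via a shared experience."""
    shared = [post for post in sender if post in receiver]
    if not shared:
        raise ValueError("the receiver has never demonstrated a corresponding experience")
    closest = min(
        shared,
        key=lambda post: sum((x - y) ** 2 for x, y in zip(sender[post], current)),
    )
    return receiver[closest]

In this toy, the Republican/Democrat example is just the observation that receiver[closest] can carry the opposite valence from sender[closest].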

The coproduct in f/g tells you how to extrapolate expected reactions between a set of brains. A user could simply put himself into a headspace and get handed a list of posts he hasn't seen that are expected to put him into that headspace.
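
And a hypothetical helper for the extrapolation direction, again continuing the sketch above: given a target headspace, collect unseen posts that brains extending this user reacted to with roughly that headspace.

def extrapolate_posts(target: Measurement, user: User, others: List[User],
                      tolerance: float = 0.1) -> List[Post]:
    """List unseen posts expected to put the user into the target headspace."""
    suggestions = []
    for other in others:
        if not is_morphism(user, other, tolerance):
            continue  # only trust brains that extend this user's reactions
        for post, reaction in other.items():
            if post in user:
                continue  # the user has already seen this one
            if all(abs(x - y) <= tolerance for x, y in zip(target, reaction)):
                suggestions.append(post)
    return suggestions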

comment by Gurkenglas · 2019-08-04T18:54:48.278Z · score: 5 (4 votes) · LW · GW

Hearthstone has recently released Zephrys the Great, a card that looks at the public game state and gives you a choice between three cards that would be useful right now. You can see it in action here. I am impressed by the diversity of the choices it gives. An advisor AI that seems friendlier than Amazon's/YouTube's recommendation algorithms, because its secondary optimization incentive is fun, not money!

Could we get them to open-source the implementation so people could try writing different advisor AIs to use in the card's place for, say, their weekly rule-changing Tavern Brawls?
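
A sketch of what such a pluggable advisor interface might look like; the names and the dict-shaped game state are hypothetical, not anything Blizzard has published.

import random
from typing import List, Protocol

class Advisor(Protocol):
    def advise(self, public_game_state: dict) -> List[str]:
        """Return the names of up to three cards judged useful right now."""
        ...

class RandomAdvisor:
    """A stand-in advisor for testing: offer three arbitrary legal cards."""
    def advise(self, public_game_state: dict) -> List[str]:
        legal = public_game_state.get("legal_cards", [])
        return random.sample(legal, k=min(3, len(legal)))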

comment by Gurkenglas · 2019-08-04T18:46:35.281Z · score: 5 (3 votes) · LW · GW

OpenAI has a 100x profit cap for investors. Could another form of investment restriction reduce AI race incentives?

The market selects for people who are good at maximizing money and care to do so. I'd expect there are some rich people who care little whether they go bankrupt or the world is destroyed.

Such a person might expect that if OpenAI launches their latest AI draft, either the world is turned into paperclips or all investors get the maximum payoff. So he might invest all his money in OpenAI and pressure OpenAI (via shareholder swing voting or less regulated means) to launch it.

If OpenAI said that anyone may invest only up to a certain percentage of their net worth in OpenAI, such a person would be forced to retain something to protect.
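
A toy illustration of the proposed restriction (the cap and the net worth are made-up numbers): capping each investor's stake at a fraction of their net worth guarantees they keep something to protect even if the stake goes to zero.

cap_fraction = 0.10                                 # hypothetical cap: 10% of net worth
net_worth = 50_000_000                              # hypothetical investor
max_investment = cap_fraction * net_worth           # at most 5,000,000 invested
worst_case_remaining = net_worth - max_investment   # 45,000,000 still left to lose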

comment by Gurkenglas · 2019-08-24T14:36:38.073Z · score: 2 (2 votes) · LW · GW

https://arbital.com/p/cev/ : "If any hypothetical extrapolated person worries about being checked, delete that concern and extrapolate them as though they didn't have it. This is necessary to prevent the check itself from having a UDT influence on the extrapolation and the actual future."

Our altruism (and many other emotions) is, evolutionarily, just an acausal reaction to the worry that we're being simulated by other humans.

It seems like a jerk move to punish someone for being self-aware enough to replace their emotions by the decision-theoretic considerations they evolved to approximate.

And unnecessary! For if they behave nicely when checked because they worry they're being checked, they should also behave nicely when unchecked.

comment by mr-hire · 2019-08-24T17:25:28.672Z · score: 1 (2 votes) · LW · GW

I think (given my extremely limited understanding of this stuff) this is to prevent UDT agents from fooling the people simulating them by recognizing that they're in a simulation.


I.e., you want to ignore the following code:


if (inOmegasHead) {
    oneBox;    // one-box while Omega is simulating you, so the prediction comes out favorable
} else {
    twoBox;    // then two-box in the real world
}

comment by Gurkenglas · 2019-08-24T11:05:35.178Z · score: 1 (1 votes) · LW · GW

Consider a singleton overseeing ten simpletons. Its ontology is that each particle has a position. Each simpleton prefers any state in which all of their body's particles are in their body to the alternative. The singleton aggregates their preferences by letting each of them rule out 10% of the space of possibilities. This does not let them guarantee their integrity, since the states in which they lose a particle make up far more than 10% of the state space. What if the singleton instead considered changes to a single position rather than whole states? Each simpleton would rule out any change that removes a particle from their body, which fits comfortably within their 10%. Iterating the non-ruled-out changes would end up in an optimal state starting from any state. This isn't a free lunch, but we should formalize what we paid.
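
A toy simulation of the "rule out single-position changes" scheme. It is entirely my own illustration: the body layout, particle counts, and the singleton's aggregate objective are hypothetical stand-ins. Each simpleton vetoes any change that would remove a particle from their body, and the singleton keeps applying non-vetoed changes that improve its objective; from any starting state it ends with every particle back in its owner's body.

import random

N_SIMPLETONS = 10
BODY_SIZE = 10                      # simpleton i's body: positions i*10 .. i*10+9
N_POSITIONS = N_SIMPLETONS * BODY_SIZE
PARTICLES_PER_SIMPLETON = 3

# Fixed ownership, variable positions (the singleton's ontology).
owner = [i // PARTICLES_PER_SIMPLETON
         for i in range(N_SIMPLETONS * PARTICLES_PER_SIMPLETON)]
position = [random.randrange(N_POSITIONS) for _ in owner]

def in_body(pos: int, simpleton: int) -> bool:
    return simpleton * BODY_SIZE <= pos < (simpleton + 1) * BODY_SIZE

def vetoed(particle: int, new_pos: int) -> bool:
    """The owner rules out any change that removes this particle from their body."""
    o = owner[particle]
    return in_body(position[particle], o) and not in_body(new_pos, o)

def objective(positions) -> int:
    """Hypothetical aggregate goal: count particles inside their owner's body."""
    return sum(in_body(p, o) for p, o in zip(positions, owner))

# Iterate non-ruled-out single-position changes that improve the objective.
improved = True
while improved:
    improved = False
    for particle in range(len(position)):
        for new_pos in range(N_POSITIONS):
            if vetoed(particle, new_pos):
                continue
            trial = list(position)
            trial[particle] = new_pos
            if objective(trial) > objective(position):
                position = trial
                improved = True

assert objective(position) == len(position)  # optimal state reached from any start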