Posts

Comments

Comment by hungryhobo on Are ethical asymmetries from property rights? · 2018-07-23T11:51:59.827Z · score: 2 (2 votes) · LW · GW

The OP is basically the fairly standard basis of American-style libertarianism.

It doesn't particularly "defy consequentialism" any more than listing the primary precepts of utilitarian consequentialist groups defies deontology.

But I don't think the moral intuitions you list are terribly universal.

The closest parallel I can think of is someone listing contemporary American copyright law and presenting its norms as if they're some kind of universally accepted system of morals.

"but you are definitely not allowed to kill one"

Johny thousand livers is of course an exception.

Or put another way, if you say to most people,

"ok, so you're in a scenario a little bit like the films Armageddon or Deep Impact. Things have gone wrong but it's a smaller rock, and all you can do at this point is divert it or not. It's on course for New York City, ten million+ will die, and you have the choice to divert it to a sparsely populated area of the Rocky Mountains... but there's at least one person living there"

Most of the people who would normally declare that the trolley problem with 1 vs. 5 makes it unethical to throw that one person in front of the trolley... will change their view once the difference in the trade is large enough.

1 vs. 5 isn't big enough for them, but the idea of tens of millions will suddenly turn them into consequentialists.

"You are not required to save a random person"

Also, this is a very not-universal viewpoint. Show people that video of the Chinese kid being run over repeatedly while people walk past ignoring her cries, and many will declare that the passers-by who ignored the child committed a very clear moral infraction.

"Duty of care" is not popular in American libertarianism, but it and its variations are a common concept in many countries.

The deliberate failure to provide assistance in the event of an accident is a criminal offence in France.

In many countries if you become aware of a child suffering sexual abuse there are explicit duties to report.

And once you accept the fairly commonly held concept of "duty of care" (the idea that you actually do have a duty to others), the absolutist property stuff largely falls apart. It becomes entirely reasonable to require some people to give up some fraction of their property to provide care for those around them, just as it's reasonable to expect them to help an injured toddler out of the street, to help the victim of a car accident, or to let the authorities know if they find out that a kid is being raped.

"Duty" or similar "social contract" precepts, which imply that you have some positive duties purely by dint of being a human with the capacity to intervene, tend to be rejected by the American libertarian viewpoint, but they're a very, very common aspect of the moral intuitions of a large fraction of the world's population.

It's not unlimited, and it tends towards Newtonian Ethics, but moral intuitions aren't known for being perfectly fair.

Comment by hungryhobo on The Critical Rationalist View on Artificial Intelligence · 2017-12-11T11:58:08.307Z · score: 1 (1 votes) · LW · GW

Yes, our ancestors could not build a nuclear reactor; the Australian natives spent 40 thousand years without constructing a bow and arrow. Neither the Australian natives nor anyone else has built a cold fusion reactor. Running halfway doesn't mean you've won the race.

Putting ourselves in the category of "entities who can build anything" is like putting yourself in the category "people who've been on the moon" when you've never actually been to the moon but really really want to be an astronaut one day. You might even one day become an astronaut but aspirations don't put you in the category with Armstrong until you actually do the thing.

Your pet collie might dream vaguely of building cars; perhaps in 5,000,000 years its descendants might have self-selected for intelligence and we'll have collie engineers. That doesn't make it an engineer today.

Currently, by the definition in that book, humans are not universal constructors; at best we might one day be universal constructors, if we don't all get wiped out by something first. It would be nice if we became such one day. But right now we're merely closer to being universal constructors than unusually bright ravens and collies.

Feelings are not fact. Hopes are not reality.

Assuming that nothing will stop us based on a thin sliver of history is shaky extrapolation:

https://xkcd.com/605/

Comment by hungryhobo on The Critical Rationalist View on Artificial Intelligence · 2017-12-11T11:43:56.720Z · score: 0 (0 votes) · LW · GW

Adam and Eve AIs. The pair are designed such that they can automatically generate large numbers of hypotheses, design experiments that could falsify the maximum possible number of hypotheses, and then run those experiments in an automated lab.

Rather than being designed to do X with yeast, it's basically told "go look at yeast"; it then develops hypotheses about yeast biology, and it successfully re-discovered a number of elements of cell biology. Later iterations were given access to databases of already-known genetic information and discovered new information about a number of genes.

http://www.dailygalaxy.com/my_weblog/2009/04/1st-artificially-intelligent-adam-and-eve-created.html

https://www.newscientist.com/article/dn16890-robot-scientist-makes-discoveries-without-human-help/

It's a remarkable system and could be extremely useful for scientists in many sectors, but it's a 1.1 on the 1-to-10 scale where 10 is a credible paperclipper or Culture-Mind-style AI.

This AI is not a pianist robot and doesn't play chess but has broad potential applications across many areas of science.

It blows a hole in the side of the "Universal Knowledge Creator" idea, since it's a knowledge creator beyond most humans in a number of areas but is never going to be controlling a pianist robot or running a nail salon. The belief that there's some magical UKC line or category (which humans technically don't qualify for yet anyway) is based on literally nothing except feelings. There's not an ounce of logic or evidence behind it.

Comment by hungryhobo on The Critical Rationalist View on Artificial Intelligence · 2017-12-08T22:15:18.856Z · score: 5 (5 votes) · LW · GW

It's pretty common for groups of people to band together around confused beliefs.

Millions of people have incorrect beliefs about vaccines, millions more are part of new-age groups which have embraced confused and wrong beliefs about quantum physics (often related to utterly misunderstanding the term "observer" as used in physics), and millions more have banded together around incorrect beliefs about biology. Are you smarter than all of those people combined? Are you smarter than every single individual in those groups? Probably not, but...

The man who replaced me on the commission said, “That book was approved by sixty-five engineers at the Such-and-such Aircraft Company!”

I didn’t doubt that the company had some pretty good engineers, but to take sixty-five engineers is to take a wide range of ability–and to necessarily include some pretty poor guys! It was once again the problem of averaging the length of the emperor’s nose, or the ratings on a book with nothing between the covers. It would have been far better to have the company decide who their better engineers were, and to have them look at the book. I couldn’t claim that I was smarter than sixty-five other guys–but the average of sixty-five other guys, certainly!

I couldn’t get through to him, and the book was approved by the board.

— from “Surely You’re Joking, Mr. Feynman” (Adventures of a Curious Character)

Comment by hungryhobo on The Critical Rationalist View on Artificial Intelligence · 2017-12-08T22:04:11.029Z · score: 0 (0 votes) · LW · GW

This again feels like one of those definitions that creeps the second anyone points you to examples.

If someone points to an AI that can generate scientific hypotheses, design novel experiments to attempt to falsify them, and run those experiments in ways that could be applied to chemistry, cancer research, and cryonics, you'd just declare that those weren't different enough domains because they're all science, and then demand that it also be able to control pianist robots and scuba dive and run a nail salon.

Nothing to see here, everyone.

This is just yet another boring iteration of the forever-shifting goalposts of AI.

Comment by hungryhobo on The Critical Rationalist View on Artificial Intelligence · 2017-12-08T21:48:15.127Z · score: 0 (0 votes) · LW · GW

First: if I propose that humans can sing any possible song, or that humans are universal jumpers who can jump any height, the burden is not upon everyone else to prove that humans cannot, because I'm the one making the absurd proposition.

He proposes that humans are universal constructors, able to build anything. Observation: there are some things humans as they currently are cannot construct; as we currently are, we cannot actually arbitrarily order atoms any way we like to perform any task we like. The world's smartest human can no more build a von Neumann probe right now than the world's smartest border collie.

He merely makes the guess that we'll be able to do so in the future, or that we'll be able to build something that will be able to build something that will be able to, but that border collies never will. (That is based on little more than faith.)

From this he concludes we're "universal constructors", despite us quite trivially falling short of the definition of "universal constructor" he proposes.

When you start talking about "reach" you utterly cancel out all the claims made about AI in the OP. Suppose a superhuman AI with a brain the size of a planet, made of pure computation, can just barely manage to comprehend some horribly complex problem, and there's a slim chance that humans might one day be able to build AIs which might be able to build AIs which might be able to build that AI. That doesn't mean that humans have fully comprehended that thing, or could fully comprehend that thing, any more than slime mould could be said to comprehend the building of a nuclear power station because it could potentially produce offspring which produce offspring which produce offspring... [repeat many times] who could potentially design and build a nuclear power station.

His arguments are full of gaping holes. How does this not jump out at other readers?

Comment by hungryhobo on The Critical Rationalist View on Artificial Intelligence · 2017-12-08T15:15:54.828Z · score: 0 (0 votes) · LW · GW

This argument seems chosen to make it utterly unfalsifiable.

If someone provides examples of animal X solving novel problems in creative ways, you can just say "that's just the 'some flexibility' bit".

Comment by hungryhobo on The Critical Rationalist View on Artificial Intelligence · 2017-12-08T15:07:06.168Z · score: 0 (0 votes) · LW · GW

You're describing what's known as General Game Playing.

You program an AI which will play a set of games without knowing in advance what the rules of those games will be: build an AI which can accept a set of rules for a game and then teach itself to play.

This is in fact a field in AI.

Also note the recent news that AlphaGo Zero has been generalised into AlphaZero, which can handle other games and rapidly taught itself how to play chess, shogi, and Go (beating its ancestor AlphaGo Zero), hinting that they're generalising it very successfully.

Comment by hungryhobo on The Critical Rationalist View on Artificial Intelligence · 2017-12-08T14:44:12.819Z · score: 2 (2 votes) · LW · GW

...OK, so I don't get to find out the arguments unless I buy a copy of the book?

Right... looking at a pirated copy of the book, the phrase "universal knowledge creator" appears nowhere in it, nor does "knowledge creator".

But let's have a read of the chapter "Artificial Creativity".

Big long spiel about ELIZA being crap. Same generic qualia arguments as ever.

One minor gem in there for which the author deserves to be commended:

"I have settled on a simple test for judging claims, including Dennett’s, to have explained the nature of consciousness (or any other computational task): if you can’t program it, you haven’t understood it"

...

Claim that genetic algorithms and similar learning systems aren't really inventing or discovering anything because they reach local maxima, and thus the design is really just coming from the programmer. (Presumably, then, the developers of AlphaGo must be the world's best Go grandmasters.)

I see the phrase "universal constructors", where the author claims that human bodies are able to turn anything into anything. This argument appears to rest squarely on the idea that while there may be some things we actually can't do, or ideas we actually can't handle, we should one day be able to either alter ourselves or build machines (AIs?) that can handle them. Thus we are universal constructors and can do anything.

On a related note, I am in fact an office block, because while I may not actually be 12 stories tall and covered in glass, I could in theory build machines which build machines which could be used to build an office block. Thus, by this book's logic, that makes me an office block, and from this point forward in the comments we can make arguments based on the assumption that I can contain at least 75 office workers along with their desks and equipment.

The fact that we haven't actually managed to create machines that can turn anything into anything yet strangely doesn't get a look-in in the argument about why we're currently universal constructors but dolphins are not.

The author brings up the idea of things we may genuinely simply not be able to understand, and just dismisses it with literally nothing except the objection that it's claiming things could be inexplicable and hence should be dismissed. (On a related note, the president of the tautology club is the president of the tautology club.)

Summary: I'd give it a C-, but upgrade it to a C for being better than the GeoCities website selling it.

Also, the book doesn't actually address my objections.

Comment by hungryhobo on The Critical Rationalist View on Artificial Intelligence · 2017-12-07T14:28:26.960Z · score: 2 (2 votes) · LW · GW

I started this post off trying to be charitable but gradually became less so.

"This means we can create any knowledge which it is possible to create."

Is there any proof that this is true? Anything rigorous? The human mind could have some notable blind spots. For all we know there could be concepts that happen to cause normal human minds to suffer lethal epileptic fits, similar to how certain patterns of flashing light can in some people. Or simple concepts that would be incredibly inefficient to encode in a normal human mind but could be easily encoded in a mind of a similar scale with a different architecture.

"There is no such thing as a partially universal knowledge creator."

What is this based upon? Some animals can create novel tools to solve problems. Some humans can solve very simple problems but are quickly and utterly stumped beyond a certain point. Dolphins can be demonstrated to be able to form hypotheses and test them, but stop at simple hypotheses.

Is a human a couple of standard deviations below average, who refuses to entertain hypotheticals, a "universal knowledge creator"? Can the author point to any individuals on the border, or below it due to brain damage or developmental problems?

Just because a Turing machine can in theory run all computable programs, that doesn't mean that a given mind can solve all problems that that Turing machine could, just because it can understand the basics of how a Turing machine works. The programmer is not just a super-set of their programs.

"These ideas imply that AI is an all-or-none proposition."

You've not really established that very well at all. You've simply claimed it with basically no support.

Your arguments seem to be poorly grounded and poorly supported; simply stating things as if they were fact does not make them so.

"Humans do not use the computational resources of their brains to the maximum."

Interesting claim. So these ruthlessly evolved brains aren't being used fully even when our lives and the lives of our progeny are in jeopardy? Odd to evolve all that expensive excess capacity and then not use it.

"Critical Rationalism, then, says AI cannot recursively self-improve so that it acquires knowledge creation potential beyond what human beings already have. It will be able to become smarter through learning but only in the same way that humans are able to become smarter"

OK, here's a challenge: we both set up a chess AI, but I get to use the hardware that was recently used to run AlphaZero while you only get to use a 486. We both get to use the same source code. Standard tournament chess rules, with time limits.

You seem to be mentally modeling all potential AIs as basically just babies, based on literally... nothing whatsoever.

Your TCS link seems to be fluff and buzzwords irrelevant to AI.

"Some reading this will object because CR and TCS are not formal enough — there is not enough maths"

That's an overly charitable way of putting it. Backing up none of your claims and then building a gigantic edifice of argument on thin air is not great for the formal support of something.

"Not yet being able to formalize this knowledge does not reflect on its truth or rigor."

"We have no problem with ideas about the probabilities of events but it is a mistake to assign probabilities to ideas. The reason is that you have no way to know how or if an idea will be refuted in the future. Assigning a probability is to falsely claim some knowledge about that. Furthermore, an idea that is in fact false can have no objective prior probability of being true. The extent to which Bayesian systems work at all is dependent on the extent to which they deal with the objective probability of events (e.g., AlphaGo). In CR, the status of ideas is either "currently not problematic" or "currently problematic", there are no probabilities of ideas. CR is a digital epistemology. "

The space of potentially true things that are actually completely false is infinite. If you just pick ideas out of the air and don't bother with testing them and showing them to be correct, you provide about as much useful insight to those around you as the average screaming madman on the street corner preaching that the Robot Lizardmen are working with the CIA to put radio transmitters in his teeth to hide the truth about 9/11.

Proving your claims to actually be true or to have some meaningful chance of being true matters.

Comment by hungryhobo on Fables grow around missed natural experiments · 2017-11-21T15:08:34.755Z · score: 0 (0 votes) · LW · GW

It's improbable, but if they ever behave anything like dogs, not 100% impossible.

I've encountered an older dog that really, really wanted to have puppies and stole a kitten from a litter, then tried to raise it and feed it, and made no attempt to eat it.

And there appear to be real reports of domesticated dogs adopting and nursing neglected children.

https://www.thedodo.com/dog-breastfeeds-child-1336838906.html

Of course, dogs have the aggression dialed way, way down, such that they may be way, way more likely to do that.

I'd argue that a she-wolf that's recently lost its puppies instead finding some other small mammal to adopt is merely very improbable.

Nursing mammal mothers (even from fairly bloodthirsty species) can be surprisingly willing to adopt infant creatures of different species, even ones they'd usually snack upon.

http://3.bp.blogspot.com/-pgW9rYRxS_4/UJbVUbIwPzI/AAAAAAAAS_4/Z_gUmGvK6Mg/s640/92770023_large_2moZJ2WhuU.jpg

Comment by hungryhobo on Unethical Human Behavior Incentivised by Existence of AGI and Mind-Uploading · 2017-03-31T12:47:41.279Z · score: 0 (0 votes) · LW · GW

The quickest way to make me start viewing a sci-fi *topia as a dystopia is to have suicide banned in a world of (potential) immortals. To me the "right to death" is essential once immortality is possible.

Still, I get the impression that saying they'll die at some point anyway is a bit of a dodge of the challenge. After all, nothing is truly infinite. Eventually entropy will necessitate an end to any simulated hell.

Comment by hungryhobo on Unethical Human Behavior Incentivised by Existence of AGI and Mind-Uploading · 2017-03-28T15:12:48.693Z · score: 1 (1 votes) · LW · GW

This sounds like the standard argument around negative utility.

If you weight negative utility quite highly, then you could also come to the conclusion that the moral thing to do is to set to work on a virus to kill all humans as fast as possible.

You don't even need mind-uploading. If you weight suffering highly enough, then you could decide that the right thing to do is to take a trip to a refugee camp full of people who, on average, are likely to have hard, painful lives, and leave a sarin gas bomb.

Put another way: if you encountered an infant with epidermolysis bullosa would you try to kill them, even against their wishes?

Comment by hungryhobo on Open thread, Dec. 12 - Dec. 18, 2016 · 2016-12-13T13:35:31.839Z · score: 0 (0 votes) · LW · GW

"You may take ternary numeral system (base 3) and three basic instructions"

Wait, are we supposed to make up arbitrary operations for higher bases?

Comment by hungryhobo on Open thread, Dec. 12 - Dec. 18, 2016 · 2016-12-13T13:28:05.616Z · score: 5 (5 votes) · LW · GW

OK, there aren't really enough data points to do proper stats, but let's give it a go anyway.

Let's consider the possibility that the ad campaign did nothing. Some ad campaigns are actually damaging, so let's try to get an idea of how much sales vary from month to month.

Mean = 50.5, standard deviation = 6.05.

So October is about 1 and 2/3 SDs above the mean.

Sure, October is a little higher than normal but not by much.

Or put another way, imagine that the ad campaign had been put into effect in April but actually did absolutely nothing. They would have seen an increase of 15.6 million along with a new record high.
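The back-of-envelope check above can be reproduced in a few lines. The original sales figures aren't in this comment, so the numbers below are invented purely to illustrate the z-score reasoning:

```python
import statistics

# Hypothetical monthly sales figures (in millions) -- invented for
# illustration; they are NOT the data discussed in the comment.
baseline = [42.0, 55.0, 48.0, 45.0, 58.0, 50.0, 44.0, 52.0, 47.0]  # Jan-Sep
october = 57.8  # the month after the ad campaign

mean = statistics.mean(baseline)
sd = statistics.stdev(baseline)  # sample standard deviation
z = (october - mean) / sd

print(f"mean={mean:.1f} sd={sd:.2f} z={z:.2f}")
# A z-score well under ~2 means the "good" month is within ordinary
# month-to-month variation -- weak evidence that the ads did anything.
```

With these made-up numbers October sits about 1.67 SDs above the baseline mean, mirroring the "1 and 2/3 SDs" figure: high, but not obviously more than noise.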

The prior chance of ads increasing sales is high for good ad campaigns, but as countless dot-com bubble companies learned, it's entirely possible for advertising to get you absolutely nothing.

Remember that the prior is a fancy way of encoding your expectations into how you do calculations.

If you're trying to decide whether an ad campaign you've paid for actually worked, a system of assessment which involves saying "well, I believed it should work in principle, so I spent money on doing it in the first place, and now I can confirm it worked partly because I believe it should work in principle" is circular.

Comment by HungryHobo on [deleted post] 2016-12-13T12:47:54.581Z

And the point of this post is? Do you want us to critique the dark-arts rhetoric in the document? Discuss the subject of ideological purity? Tribalism?

Comment by hungryhobo on This AI Boom Will Also Bust · 2016-12-05T16:54:19.764Z · score: 1 (1 votes) · LW · GW

If the typical pattern holds:

Step one: new trick is discovered solving some problem X which couldn't be handled before.

Step two: people try to apply it to everything that the old styles didn't work on like problem Y which is sort of in the same problem class. At this stage overly enthusiastic people may over-promise. "I'm sure it will work amazingly on Y"

Step three: "Bah! These CS types never deliver, Y will always be better done by humans."

Step four: Interest and funding flees as the news stops paying attention, a few people keep chipping away at the problem and eventually slightly outperform humans on Y and try to get it to work on Z.

Step five: Someone proves mathematically that it can never solve major set of problems in Z.

Step six: Someone comes up with a new trick... GOTO 1

Comment by hungryhobo on Now is the time to eliminate mosquitoes · 2016-08-11T11:44:48.309Z · score: 0 (0 votes) · LW · GW

I remember having a similar discussion about HIV and anti-retroviral drugs.

In short, it's an easy position to take if you and the people you care about aren't currently in the firing line, and making policy choices based on assumptions about future discoveries that we can't guarantee is ethically problematic.

Comment by hungryhobo on Now is the time to eliminate mosquitoes · 2016-08-11T11:38:58.206Z · score: 4 (4 votes) · LW · GW

There are about 3,200 species of mosquito. Fewer than 200 bite humans, and perhaps a dozen are major disease vectors for humans.

We drive about 150 species extinct per day without really trying. Increasing the number of species we push to extinction by 10% for a single day would save half a million lives per year.
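A quick sanity check on that arithmetic (the figures are the rough estimates above, not precise data):

```python
# Back-of-envelope check -- all figures are rough estimates, not data.
extinctions_per_day = 150                  # approximate background rate
extra_species = extinctions_per_day // 10  # a one-day 10% increase
vector_species = 12                        # "perhaps a dozen" major vectors

# One day's 10% bump (15 species) more than covers every major vector.
print(extra_species)  # 15
```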

Comment by hungryhobo on Now is the time to eliminate mosquitoes · 2016-08-11T11:24:07.642Z · score: 1 (1 votes) · LW · GW

Oh my god those articles are stupid.

"Oxitec’s GM mosquitoes have a genetic ‘kill switch’ but no one is sure if it will work on just the GM variety or also on the bugs that interbreed with the GM ‘test’ insects. "

If only there was some way to physically scream "THAT'S THE FUCKING POINT!" at the author. The whole point is to spread the "kill switch" to the wild mosquitoes. To kill them.

The Daily Mail article appears to be referring to this:

http://www.theecologist.org/News/news_analysis/2987024/pandoras_box_how_gm_mosquitos_could_have_caused_brazils_microcephaly_diasaster.html

where people started pointing to GM mosquitoes having been released in areas where Zika has been spreading.

Never mind that the areas where mosquitoes are the biggest problem are the areas where you try mosquito control. In a similar vein, it's suspicious that most malaria deaths are in areas where bednets have previously been distributed. There can be only one conclusion: bednets cause malaria.

Comment by hungryhobo on Open thread, Jul. 11 - Jul. 17, 2016 · 2016-07-15T09:22:21.090Z · score: 1 (1 votes) · LW · GW

Fair enough. I was underwhelmed by your initial post describing it, but I agree that showing that your system can handle weird constraints in real examples is an excellent demonstration.

The record thing, to me, just happens to be a good demonstration that you're not just another little startup with some crappy scheduling software; you're actually at the top of the field in some areas.

Comment by hungryhobo on Open thread, Jul. 11 - Jul. 17, 2016 · 2016-07-13T10:26:36.438Z · score: 0 (0 votes) · LW · GW

Or possibly to find an existing company selling office/organization/planning software that's already got a big share of the market and sell them a license to the tech.

Comment by hungryhobo on Open thread, Jul. 11 - Jul. 17, 2016 · 2016-07-13T10:24:37.446Z · score: 1 (1 votes) · LW · GW

If you're the holders of some records for certain problem types, then that grabs my interest.

I'd suggest leading with that since it's a strong one.

Comment by hungryhobo on Open thread, Jul. 11 - Jul. 17, 2016 · 2016-07-12T14:46:37.087Z · score: 0 (0 votes) · LW · GW

I found it odd as well, but I think it's because it implies that the theoretical reason for that lower bound may be invalid.

There will likely turn out to be a different theoretical lower bound for some other reason, but right now we don't have that theoretical reason.

Comment by hungryhobo on Open thread, Jul. 11 - Jul. 17, 2016 · 2016-07-12T13:50:44.923Z · score: 2 (2 votes) · LW · GW

If true, this has some spectacular implications for computing (long term).

http://phys.org/news/2016-07-refutes-famous-physical.html

"Now, an experiment has settled this controversy. It clearly shows that there is no such minimum energy limit and that a logically irreversible gate can be operated with an arbitrarily small energy expenditure. Simply put, it is not true that logical reversibility implies physical irreversibility, as Landauer wrote."

Some of the limits of computation (how much you could theoretically do with a certain amount of energy) are based on what appear to have been incorrect beliefs about information processing and entropy.

It will push the research towards "zero-power" computing: the search for new information processing devices that consume less energy. This is of strategic importance for the future of the entire ICT sector that has to deal with the problem of excess heat production during computation.

It will call for a deep revision of the "reversible computing" field. In fact, one of the main motivations for its own existence (the presence of a lower energy bound) disappears.

Comment by hungryhobo on Open thread, Jul. 11 - Jul. 17, 2016 · 2016-07-12T13:05:02.655Z · score: 4 (4 votes) · LW · GW

Some interesting news: the first autonomous soft-tissue surgery. Sounds like a notable breakthrough in machine vision was involved, for distinguishing all the messy, fleshy internals of the (porcine) patient.

http://www.popularmechanics.com/science/health/a20718/first-autonomous-soft-tissue-surgery/

Comment by hungryhobo on Open thread, Jul. 11 - Jul. 17, 2016 · 2016-07-12T12:57:43.893Z · score: 0 (0 votes) · LW · GW

So, a parallel genetic-algorithm-based scheduling app with (ranked?) constraints?

In what way is it more automatic than existing similar apps?

Presumably you still need to give it a list of constraints (say, a few thousand), possibly in a spreadsheet, some soft, some hard, and it spits out a few of the top solutions, or presumably an error if the hard constraints cannot be met?

What can it do that, say, OptaPlanner can't do?
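For context, a toy sketch of the hard/soft constraint scoring that sits at the core of solvers in this space. All names, constraints, and weights below are invented for illustration; this is not any real solver's API:

```python
# Toy hard/soft constraint scoring. Hard constraints gate feasibility;
# soft constraints express weighted preferences. Everything here is
# invented for illustration.

def score(schedule, hard_constraints, soft_constraints):
    """Return a (hard, soft) penalty pair; lower is better, and any
    hard violation makes the plan infeasible regardless of soft score."""
    hard = sum(1 for c in hard_constraints if not c(schedule))
    soft = sum(w for w, c in soft_constraints if not c(schedule))
    return (hard, soft)

# Example: assign two meetings to time slots.
hard = [lambda s: s["standup"] != s["review"]]          # no double-booking
soft = [(3, lambda s: s["standup"] == "morning"),       # prefer morning standup
        (1, lambda s: s["review"] == "afternoon")]      # mildly prefer afternoon review

candidates = [
    {"standup": "morning",   "review": "afternoon"},
    {"standup": "afternoon", "review": "afternoon"},    # double-booked
]
# Tuple comparison ranks feasibility first, then soft-constraint penalty.
best = min(candidates, key=lambda s: score(s, hard, soft))
```

A real solver searches the candidate space with metaheuristics (genetic algorithms, tabu search, etc.) instead of enumerating it, but the scoring idea is the same.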

Comment by hungryhobo on Are smart contracts AI-complete? · 2016-06-27T11:54:53.983Z · score: 0 (0 votes) · LW · GW

If you're going to rely on signed data from third parties then you're still trusting third parties.

In a dozen or so lines of code you could create a system that collects signed and weighted opinions from a collection of individuals or organisations, making it simple to encode arbitration (does the delivery company say they delivered it, etc.).

You're just kicking the trust can down the road.
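A rough sketch of that "dozen or so lines" of weighted arbitration. Signature verification is stubbed out, and all party names and weights are invented; note that the weights are exactly where the third-party trust lives:

```python
# Toy weighted-opinion aggregator. Signature checking is a stub, and
# the parties/weights are invented for illustration.

def verify_signature(opinion):
    """Stand-in for real cryptographic signature verification."""
    return opinion.get("signed", False)

def arbitrate(opinions, threshold=0.5):
    """Weighted vote over verified opinions: did the delivery happen?"""
    valid = [o for o in opinions if verify_signature(o)]
    total = sum(o["weight"] for o in valid)
    yes = sum(o["weight"] for o in valid if o["delivered"])
    return total > 0 and yes / total > threshold

opinions = [
    {"party": "courier",  "weight": 2.0, "delivered": True,  "signed": True},
    {"party": "buyer",    "weight": 1.0, "delivered": False, "signed": True},
    {"party": "stranger", "weight": 1.0, "delivered": False, "signed": False},  # unsigned: ignored
]
```

Choosing who gets a signature and what weight it carries is the trust decision; the code just tallies it.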

On the other hand, it's unlikely we'll see any reasonably smart AIs built with anything less than millions of lines of code (or code and data), and flaws anywhere in them could destroy the security of the whole system.

This is not a great use for AI until we (1) actually have notable AI and (2) have proven correct the code that makes it up, which is a far larger undertaking.

Comment by hungryhobo on Are smart contracts AI-complete? · 2016-06-24T15:41:27.574Z · score: 5 (5 votes) · LW · GW

OK, for some context here: I think a lot of people are getting hung up on the words "contract" or "smart contract".

If we want to talk about it intelligently it helps to taboo the term "contract", or else people get terribly, terribly confused, as in some of the existing comments.

I'm going to say "independent program" instead of "smart contract" for clarity.

Ethereum allows for the existence of independent programs which can hold and transfer money. The rules have to be hardcoded into them when you create them. They cannot be changed once launched.

For example, if you wanted to create a prize fund for people who've factored a large number, you could create an independent program which accepts a set of numbers, decides if they're the prime factors of the target number, and if they are, transfers the prize fund to the first person to submit them.

Years later, someone factors the number, connects and gets their payment.

You might be a thousand years dead but the independent program is still in the system and has control of the funds.

Depending on how you write it, you may not even be able to retrieve the funds yourself without solving the problem.

It doesn't matter if a court orders you to pay out because they have a law declaring pi to be exactly three and someone has come forward with a kooky proof for their "factored" number. If it doesn't match the rules of the program, there's nothing you or the court can do.

If you've not given yourself control at the start it will sit there until the end of time until someone actually factors the number.

Or perhaps you set up the program badly and it will accept numbers which aren't the factors as correct, paying out to someone who hasn't factored anything.

It is not a legal contract which says "I will give money to the person who solves this problem"; it's a piece of code which, when followed, may give the money in its control to someone who connects to it, depending on the rules programmed into it.

Some funds have been set up controlled by independent programs and their "about" pages tend to say something along the lines of "you're agreeing to the code being followed, anything we write here is just our best try at explaining what the code does, here is the full source code, if you're happy with this then you're free to give control of some funds to this code or a copy of this code"

Comment by hungryhobo on Are smart contracts AI-complete? · 2016-06-24T15:10:56.011Z · score: 0 (2 votes) · LW · GW

You're still conflating the term "smart contract" and the idea of a legal contract.

That's like conflating "observer" in physics with a human staring at you, or hearing someone talk about a daemon on their server and imagining a red-skinned monster from hell perched on the hardware.

Imagine someone says

"This is a river, if you throw your money in it will end up somewhere, we call the currents a 'water contract', the only difference to a normal river is that we've got the paperwork signed such that this doesn't count as littering"

It does indeed end up somewhere and you're really really unhappy about where it ends up.

Who do you think you're going to take to court and for what contract?

Comment by hungryhobo on Are smart contracts AI-complete? · 2016-06-24T14:56:51.252Z · score: 0 (0 votes) · LW · GW

Though unfortunately for them, once launched, the particular type of smart contract in question enforces itself (since it handles the transfers itself), and rewriting it isn't really possible without destroying the entire system. So the court isn't being asked for help enforcing the contract, and a ruling asking to change it is about as enforceable as a ruling declaring the moon to be an unlicensed aircraft that must cease flight immediately. Unless you can get your hands on both parties and physically force them to make adjustments via new transactions, nothing changes.

It's complicated even more by the fact that contracts can themselves be recipients/actors within this system.

Comment by hungryhobo on Are smart contracts AI-complete? · 2016-06-24T09:48:54.143Z · score: 4 (4 votes) · LW · GW

AI is complex. Complexity means bugs. Bugs in smart contracts are exactly what you need to avoid.

What is needed most is mathematically proven code.

For certain contract types you're going to need some way of confirming that, say, physical goods have been delivered, but you gain nothing by adding AI to the mix.

Without AI you have a switch someone has to toggle, or some other signal that someone might hack. With AI you just have some other input stream that someone might tamper with. Either way you need to accept information into the system somehow, and it may not be accurate. AI does not solve the problem; it just adds complexity, which makes mistakes more likely.

When all you have is a hammer, everything looks like a nail; when all you have is AI theories, everything looks like a problem to throw AI at.

Comment by hungryhobo on rationalfiction.io - publish, discover, and discuss rational fiction · 2016-06-02T13:12:27.630Z · score: 5 (5 votes) · LW · GW

Sure, and the traditional plot line where Superman grabs a plummeting jet would actually lead to the jet tearing like tissue paper around wherever he grabbed it.

A certain level of "OK, Superman has a small physics-free bubble around him" needs to be granted if you want to do anything with Superman.

A lot of ink has been spilled by geeks trying to come up with self-consistent systems under which Superman could do what he regularly does in the stories.

Comment by hungryhobo on Iterated Gambles and Expected Utility Theory · 2016-05-27T16:30:56.057Z · score: 1 (1 votes) · LW · GW

Ya, but I don't want to buy an apartment in New York.

Again, I didn't say utility goes to zero; it just drops off dramatically. The difference between 0 and 250K is far bigger in terms of utility than the difference between 250K and 500K. You still can't buy a New York apartment, and having 500K is better than having only 250K, but in terms of how it changes your life the first increment is far more significant.
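That drop-off can be illustrated with a toy diminishing-returns utility function. Log utility is a standard textbook choice, not a claim about anyone's actual preferences, and the floor value is arbitrary:

```python
import math

def utility(wealth, floor=1_000):
    """Toy log utility; the floor just keeps log() defined at zero wealth."""
    return math.log(wealth + floor)

first_increment = utility(250_000) - utility(0)         # going from $0 to $250k
second_increment = utility(500_000) - utility(250_000)  # going from $250k to $500k

# Under log utility, the first $250k is worth several times the second $250k.
print(first_increment > 5 * second_increment)  # -> True
```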

Comment by hungryhobo on Iterated Gambles and Expected Utility Theory · 2016-05-27T10:22:46.770Z · score: 1 (1 votes) · LW · GW

Because at that point I'm tapdancing on the top of Maslow's Hierarchy of Needs, extremely financially secure with lots of reserves.

It doesn't go to zero, but it's like the difference between the utility of an extra portion of truffle dessert when I'm already stuffed vs the utility of a few bags of Plumpy'Nut when I have a starving child.

Comment by hungryhobo on Iterated Gambles and Expected Utility Theory · 2016-05-26T14:38:33.173Z · score: 0 (0 votes) · LW · GW

Using your net worth as part of the calculation doesn't feel right.

Even if my net worth is quite high much of that may be inaccessible to me short term.

If I have 100,000 in liquid cash, then 100 has lower utility to me than if I have 100,000 in something non-liquid like a house, and no cash.

Comment by hungryhobo on Iterated Gambles and Expected Utility Theory · 2016-05-26T12:41:07.354Z · score: 0 (2 votes) · LW · GW

Yep, for me 100 dollars provides a nice chunk of utility. 10,000 does not provide 100 times as much utility, and anything more than a couple hundred K provides little additional utility at all.

In theory that 1.6 billion Powerball lottery had a (barely) positive expected return (depending on how taxes work out) and thus rationalists should throw money at it, but in reality a certainty of 1 dollar is better than a 1 in a billion chance of getting 1.6 billion. (I know these numbers aren't exact.)
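A quick sanity check of that claim, using the post-2015 Powerball jackpot odds of roughly 1 in 292 million, a toy log utility function, and a made-up $50k net worth. Taxes, annuity discounting, lesser prizes, and split jackpots are all ignored:

```python
import math

ticket_price = 2.0
jackpot = 1.6e9
p_win = 1 / 292_201_338  # post-2015 Powerball jackpot odds

# Expected dollar value per ticket: barely positive in raw dollars.
ev = p_win * jackpot - ticket_price
print(ev > 0)  # -> True

# Expected *log utility* for someone with $50k to their name:
wealth = 50_000
eu_play = (p_win * math.log(wealth + jackpot)
           + (1 - p_win) * math.log(wealth - ticket_price))
eu_skip = math.log(wealth)
print(eu_play > eu_skip)  # -> False: still a bad deal for a risk-averse agent
```

So a positive expected dollar return and a negative expected utility can coexist, which is the whole point of the comment above.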

Comment by hungryhobo on Information Hazards and Community Hazards · 2016-05-19T11:05:49.128Z · score: 2 (2 votes) · LW · GW

That somewhat necessitates either the group remaining very small or discussions only happening in small subsets since in any non-tiny group there will be one or more people with issues around pretty much anything.

It also wouldn't seem to work terribly well in long-term, written discussions such as those on LW, which can run for years with random members of the public joining and leaving partway through.

So the "accessory after the fact" murder example is a very clear and explicit example of where major penalties can be inflicted on pretty much anyone by providing them with particular information which forces them either into certain actions or into danger. 50%+ of the community present are going to be subject to those hazards whether or not they even understand them.

Safe-space avoidance of triggers, on the other hand, is extremely personal: one person out of thousands can suddenly be used as a reason why the community shouldn't talk about, say, rabies, and since most LW communication is long-term and permanent there is no such thing as "while they're not in the room". The discussion remains there when they are present even if it took place while they were not.

Of course you could limit your safe spaces to verbal communication at small, personal community events where you only talk about rabies on the days when, say, Jessica isn't there, but then you have the situation where the main LW community could have a recurring and popular Rabies Symptoms Explained megathread.

At which point you don't so much have a "community hazard" as a polite avoidance of one topic with a few of your drinking buddies including one who isn't really part of the central community because they can't handle the discussion there but is part of your local hangout.

Comment by hungryhobo on Information Hazards and Community Hazards · 2016-05-18T12:57:58.366Z · score: 1 (3 votes) · LW · GW

If something that is tough for even a single member to handle counts as a "community hazard" then this is starting to sound more like safe spaces under a different name rather than what I thought you meant with the example of "accessory after the fact" murder thing.

Comment by hungryhobo on Information Hazards and Community Hazards · 2016-05-17T11:29:22.030Z · score: 4 (4 votes) · LW · GW

One meta-hazard would be that "community hazards" could end up defined far too broadly, encompassing anything that might make some people feel uncomfortable and simply become a defense for sacred values of the people assessing what should constitute "community hazards".

Or worse, that the arguments for one set of positions could get classified as "community hazards" such that, to use a mind-killing example, all the pro-life arguments get classified as "community hazards" while the pro-choice ones do not.

So it's probably best to be exceptionally conservative with what you're willing to classify as a "community hazard".

Comment by hungryhobo on Newcomb versus dust specks · 2016-05-13T13:06:35.985Z · score: 0 (0 votes) · LW · GW

Computational theory of identity, so some large number of exact copies of the same individual experiencing the same thing don't sum; they only count as one instance?

Comment by hungryhobo on Newcomb versus dust specks · 2016-05-12T10:36:33.565Z · score: 3 (3 votes) · LW · GW

This seems like a weird mishmash of other hypotheticals on the site, I'm not really seeing the point of parts of your scenario.

Comment by hungryhobo on Improving long-run civilisational robustness · 2016-05-10T16:56:24.908Z · score: 2 (2 votes) · LW · GW

I can sort of imagine a world where some extremely well-funded terrorists engineer and manufacture a few dozen really nasty diseases and release them in hundreds or thousands of locations at once (though most terrorists wouldn't, because such an attack would hurt their own side as much as or more than anyone else). That might seriously hurt society as a whole, but most of the time the backlash against terrorism seems more dangerous than the actual terrorists.

Comment by hungryhobo on Call for information, examples, case studies and analysis: votes and shareholder resolutions v.s. divestment for social and environmental outcomes · 2016-05-10T16:11:04.162Z · score: 0 (0 votes) · LW · GW

Very clearly put.

Some companies (like up until recently Apple) didn't pay much in the way of dividends but instead pumped money back into company growth to try to increase the value of their shares. I think this may have been the kind of gain gjm was thinking of where you buy hoping the value will increase rather than banking on the company handing out good dividends.

Comment by hungryhobo on Improving long-run civilisational robustness · 2016-05-10T13:19:27.066Z · score: 4 (4 votes) · LW · GW

Terrorists are a rounding error. Sure, some day they'll take out a city with a nuke, but throughout history cities have been wiped out many, many times without taking their parent civilization with them.

Comment by hungryhobo on Call for information, examples, case studies and analysis: votes and shareholder resolutions v.s. divestment for social and environmental outcomes · 2016-05-10T12:36:07.371Z · score: 0 (0 votes) · LW · GW

Sure, if you could coordinate with almost all players in a market and got them to agree to give up financial gain to achieve your goals without any defecting then it would work. Though that's a mighty big "if" in any large market.

Comment by hungryhobo on Call for information, examples, case studies and analysis: votes and shareholder resolutions v.s. divestment for social and environmental outcomes · 2016-05-10T12:32:25.147Z · score: 1 (1 votes) · LW · GW

You seem to be implicitly assuming that the only value of a stock is its potential future increase in price; for most stocks, dividends and stability are largely what set their value. Unless the divestment activists control a really, really massive fraction of the market, that's not going to matter in any way, shape, or form.

Losing actual customers as with tobacco and fossil fuels absolutely can hurt a company. Losing sales hurts, it's only divestment that's irrelevant.

Comment by hungryhobo on Call for information, examples, case studies and analysis: votes and shareholder resolutions v.s. divestment for social and environmental outcomes · 2016-05-05T13:14:19.537Z · score: 4 (4 votes) · LW · GW

Only if they do so in total secrecy.

If you're an analyst in a big trading firm, you know that, say, an oil company exists and is currently valued at a certain level by the market, taking into account the available information about its business and profits.

Later that company is targeted by divestment activists. A big university is pressured by economically illiterate students into selling all its stock in the oil company.

The analysts note this, know that the company is probably being slightly undervalued as a result, and buy up some stock.

There are actually Sin Funds that target stock of companies like tobacco, fossil fuel companies etc and invest in them on the basis that they're likely slightly undervalued due to other "moral" investors avoiding them.

Thus the only effect of divestment is to transfer a moderate amount of money from yourself to people who are slightly less ethical. It doesn't hurt the company at all.

Comment by hungryhobo on JFK was not assassinated: prior probability zero events · 2016-04-29T17:51:18.960Z · score: 2 (2 votes) · LW · GW

"Error rendering the page: Couldn't find page"

Comment by hungryhobo on JFK was not assassinated: prior probability zero events · 2016-04-29T16:14:50.751Z · score: 1 (1 votes) · LW · GW

Why must the oracle continue to believe its messages weren't read?

In the example you give, I'm guessing the reason you'd want an oracle to believe with cold certainty that its messages won't be read is to avoid it trying to influence the world with them, but that doesn't require that it continue to believe that later. As long as, when it's composing and outputting the message, it believes solidly that the message will never be read and nothing can move that belief from zero, then that's fine. That does not preclude it from accepting that its past messages were in fact read and basing its beliefs about the world on that. That knowledge, after all, cannot shift the belief that this next message, unlike all the others, will never, ever be read.

Of course that brings up the question of why an oracle would even be designed as a goal-based AI with any kind of utility function. Square peg, round hole, and all that.