Comments

Comment by _rpd on Open Thread Feb 22 - Feb 28, 2016 · 2016-02-29T20:03:33.656Z · LW · GW

Apparently being a postman in the 60s and having a good Johnny Cash impression worked out well ...

http://infamoustribune.com/dna-tests-prove-retired-postman-1300-illegimitate-children/

Comment by _rpd on If there IS alien super-inteligence in our own galaxy, then what it could be like? · 2016-02-27T21:40:47.653Z · LW · GW

Or we are an experiment (natural or artificial) that yields optimal information when unmanipulated or manipulated imperceptibly (from our point of view).

Comment by _rpd on Open Thread Feb 16 - Feb 23, 2016 · 2016-02-23T06:36:55.478Z · LW · GW

I really like this distinction. The closest I've seen is discussion of existential risk from a non-anthropocentric perspective. I suppose the neologism would be panexistential risk.

Comment by _rpd on If there was one element of statistical literacy that you could magically implant in every head, what would it be? · 2016-02-23T05:24:13.892Z · LW · GW

The desire to know error estimates and confidence levels around assertions and figures, or, better yet, full probability distributions. And a default attitude of skepticism towards assertions and figures when they are not provided.
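
To make the first part concrete, here is a minimal sketch of reporting a figure together with its error estimate (the normal approximation and the sample figures are assumptions for illustration):

```python
import math
import statistics

def mean_with_ci(samples, z=1.96):
    """Mean and half-width of an approximate 95% confidence interval."""
    m = statistics.mean(samples)
    se = statistics.stdev(samples) / math.sqrt(len(samples))
    return m, z * se

measurements = [12.1, 11.8, 12.6, 12.0, 11.9, 12.4]  # made-up figures
m, h = mean_with_ci(measurements)
print(f"{m:.2f} ± {h:.2f} (approx. 95% CI)")
```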

Comment by _rpd on [deleted post] 2016-02-19T21:39:18.213Z

Yes, until the distance exceeds the Hubble distance of the time; beyond that, the light from the spaceship will redshift out of existence as it crosses the event horizon. Wiki says that in around 2 trillion years, this will be true for light from all galaxies outside the Local Supercluster.

Comment by _rpd on [deleted post] 2016-02-19T18:59:08.527Z

Naively, the required condition is v + dH > c, where v is the velocity of the spaceship, d is the distance from the threat, and H is Hubble's constant.
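
As a back-of-the-envelope illustration (a naive sketch with assumed numbers, ignoring the relativistic complications discussed next):

```python
# Naive check of v + dH > c. All numbers are illustrative assumptions.
c  = 299_792_458        # speed of light, m/s
H  = 2.27e-18           # Hubble's constant (~70 km/s per Mpc), in 1/s
ly = 9.4607e15          # one light-year, in meters

v = 0.99 * c            # spaceship velocity
d = 5e9 * ly            # distance from the threat: 5 billion light-years

print(v + d * H > c)    # True: naively, the threat can never catch up
```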

However, when discussing distances on the order of billions of light years and velocities near the speed of light, the complications are many, not to mention an area of current research. For a more sophisticated treatment see user Pulsar's answer to this question ...

http://physics.stackexchange.com/questions/60519/can-space-expand-with-unlimited-speed/

... in particular the graph Pulsar made for the answer ...

http://i.stack.imgur.com/Uzjtg.png

... and/or the Davis and Lineweaver paper [PDF] referenced in the answer.

Comment by _rpd on [deleted post] 2016-02-18T10:36:57.117Z

this claim

Do you mean the metric expansion of space?

https://en.wikipedia.org/wiki/Metric_expansion_of_space

Because this expansion is caused by relative changes in the distance-defining metric, this expansion (and the resultant movement apart of objects) is not restricted by the speed of light upper bound of special relativity.

Comment by _rpd on Where does our community disagree about meaningful issues? · 2016-02-13T16:57:04.065Z · LW · GW

Would you support a law to stop them?

Wiki says that desomorphine has been a Schedule I controlled substance in the US since 1936, shortly after its discovery. Mere possession is illegal, much less use.

Comment by _rpd on Where does our community disagree about meaningful issues? · 2016-02-12T20:40:54.155Z · LW · GW

predict with high confidence a Republican win

Odd, since most prediction markets have a 60/40 split in favor of a Democrat winning the US presidency.

E.g., https://iemweb.biz.uiowa.edu/quotes/Pres16_Quotes.html

Sanders vs. Trump.

The polls have Sanders ahead in this particular matchup ...

http://www.realclearpolitics.com/epolls/2016/president/us/general_election_trump_vs_sanders-5565.html

Comment by _rpd on Open Thread, Feb 8 - Feb 15, 2016 · 2016-02-11T04:28:36.545Z · LW · GW

"Prediction market". The DPRs implement some sort of internal currency (which, thanks to blockchains, is fairly easy), and make bets, receiving rewards for accurate predictions.

Taking this a little further, the final prediction can be a weighted combination of the individual predictions, with the weights corresponding to historical or expected accuracy.

However, different individuals will likely specialize to be more accurate with regard to different cognitive tasks (in fact, you may wish to set up the reward economy to encourage such specialization), so the set of weights will vary by cognitive task, or, more generally, become a weighting function if you can define some sort of sensible topology for the cognitive task space.
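
A minimal sketch of that weighted combination (agent names, predictions, and accuracy scores are all hypothetical):

```python
# Combine individual probability estimates, weighting each predictor
# by its historical accuracy.
def aggregate(predictions, accuracies):
    """predictions: {agent: p}, accuracies: {agent: historical accuracy}."""
    total = sum(accuracies[a] for a in predictions)
    return sum(p * accuracies[a] / total for a, p in predictions.items())

predictions = {"dpr_1": 0.80, "dpr_2": 0.60, "dpr_3": 0.70}
accuracies  = {"dpr_1": 0.90, "dpr_2": 0.50, "dpr_3": 0.75}
print(aggregate(predictions, accuracies))  # ~0.72, the weighted consensus
```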

Comment by _rpd on Open Thread, Feb 8 - Feb 15, 2016 · 2016-02-10T13:33:13.068Z · LW · GW

AngelList says Anthony Aguirre is the founder.

Comment by _rpd on Rationality Quotes Thread February 2016 · 2016-02-09T20:05:16.695Z · LW · GW

I would say that actions that make a particular person happy can have consequences that decrease the collective happiness of some group; a tyrant or an addict might serve as examples. In answering the question "What else are you gonna do?" I'd propose at least "As long as you harm no group happiness, do what makes you happy," the Wiccan Rede "An' ye harm none, do what thou wilt" probably being too strict (it rules out being Batman, for example).

Comment by _rpd on Altruistic parenting · 2016-02-09T13:54:46.196Z · LW · GW

When someone is about to be a parent (I think this question stick more to a man than a woman, considering the empathic link that's been biologicaly created between a child and his mother) is he really asking himself: Will they worth it ?

I think the situation is very different planned vs. unplanned. For me, once the decision was made I had no second thoughts. Also, the little munchkins re-write you emotionally once they arrive <- no one told me about this, so it was actually a bit of a shock.

Comment by _rpd on Open Thread, Feb 8 - Feb 15, 2016 · 2016-02-08T19:27:06.787Z · LW · GW

Often helpline workers are people who formerly needed mental health advice themselves. At least, they'll have training on how to be helpful. I think it's very likely they'll be supportive, and unlikely that they'll be judgmental.

However, this is from a US perspective. Things may be different in other parts of the world.

Comment by _rpd on Open Thread, Feb 8 - Feb 15, 2016 · 2016-02-08T18:57:44.945Z · LW · GW

That strategy has a good chance of discouraging her from getting treatment later.

Why do you say that? Also, if she is distressed, then she may want treatment now.

Getting her to call a mental health advice line that she doesn't trust likely won't be positive.

Granted, but why won't she trust the mental health advice line? If she is distressed, she may be willing to consider help from new sources.

If she is not distressed, then CronoDAS can use the mental health advice line to get educated on the options in case she does become distressed.

Comment by _rpd on Require contributions in advance · 2016-02-08T18:51:51.813Z · LW · GW

I think "all human interaction is manipulation" is false on its face. I was putting forward Adler as a candidate for being a modern root of this meme. His teachings are still quite influential.

Comment by _rpd on Open Thread, Feb 8 - Feb 15, 2016 · 2016-02-08T18:09:31.724Z · LW · GW

If she is distressed by the symptoms, you could encourage her to contact someone that can educate her about treatment options. There may be a mental health advice line in your area that can refer her or you to free or low cost resources.

Comment by _rpd on Require contributions in advance · 2016-02-08T15:54:18.726Z · LW · GW

My understanding is that Adler thought we all start with an inferiority complex because we all start as small, weak children.

Comment by _rpd on Require contributions in advance · 2016-02-08T15:38:53.693Z · LW · GW

He was the inferiority complex guy ...

"The striving for significance, this sense of yearning, always points out to us that all psychological phenomena contain a movement that starts from a feeling of inferiority and reach upward. The theory of Individual Psychology of psychological compensation states that the stronger the feeling of inferiority, the higher the goal for personal power." (From a new translation of "Progress in Individual Psychology," [1923] a journal article by Alfred Adler, in the AAISF/ATP Archives.

... everything is about the struggle to gain power over others, which can become pathological ...

"The soul under pressure of the feeling of inferiority, of the torturing thought that the individual is small and helpless, attempts with all its might to become master over this inferiority complex. Where the feeling of inferiority is highly intensified to the degree that the child believes that he will never be able to compensate for his weakness, the danger arises that in his striving for overcompensation, will aim to overbalance the scales. The striving for power and dominance may become exaggerated and intensified until it must be called pathological. The ordinary relationships of life will never satisfy such children. Well adapted to their goal, their movements will have to have a certain grandiose gesture about them. They seek to secure their position in life with extraordinary efforts, with greater haste and impatience, with more intense impulses, without consideration of any one else. Through these exaggerated movements toward their exaggerated goal of dominance these children become more noticeable, their attacks on the lives of others necessitate that they defend their own lives. They are against the world, and the world is against them." (From "The Feeling of Inferiority and the Striving for Recognition," [1927] a journal article by Alfred Adler, in the AAISF/ATP Archives.

Comment by _rpd on Require contributions in advance · 2016-02-08T15:02:50.172Z · LW · GW

Perhaps look at https://en.wikipedia.org/wiki/Alfred_Adler ?

Comment by _rpd on Rationality Quotes Thread February 2016 · 2016-02-07T23:02:49.059Z · LW · GW

I feel like there should be some constraint on harming group happiness while you "do what makes you happy."

Comment by _rpd on The case for value learning · 2016-02-06T21:29:17.583Z · LW · GW

I take your point that theorists can appear to be concerned with problems that have very little impact. On the other hand, there are some great theoretical results and concepts that can keep us from futilely wasting our time and guide us to areas where success is more likely.

I think you're being ungenerous to Bostrom. His paper on the possibility of Oracle-type AIs is quite nuanced, and discusses many difficulties that would have to be overcome ...

http://www.nickbostrom.com/papers/oracle.pdf

Comment by _rpd on The AI That Pretends To Be Human · 2016-02-06T12:54:50.098Z · LW · GW

why would an AI become evil?

The worry isn't that the AI would suddenly become evil by some human standard, rather that the AI's goal system would be insufficiently considerate of human values. When humans build a skyscraper, they aren't deliberately being "evil" towards the ants that lived in the earth that was excavated and had concrete poured over it, the humans just don't value the communities and structures that the ants had established.

Comment by _rpd on The case for value learning · 2016-02-03T07:31:11.871Z · LW · GW

I think your criticism is a little harsh. Turing machines are impossible to implement as well, but they are still a useful theoretical concept.

Comment by _rpd on Open thread, Feb. 01 - Feb. 07, 2016 · 2016-02-03T07:26:26.237Z · LW · GW

There was quite a bit of commentary on the Jan 27 post ...

http://lesswrong.com/r/discussion/lw/n8b/link_alphago_mastering_the_ancient_game_of_go/#comments

tl;dr: reactions are mixed.

My personal reaction is that it is surprising that neural networks, even large ones fed with clever inputs and used in clever ways, could be used to boost Go play to this level. Although it has long been known that neural networks are universal function approximators, this achievement is a "no, really."

Comment by _rpd on The AI That Pretends To Be Human · 2016-02-03T06:46:03.206Z · LW · GW

Yes the AI would know what we would approve of.

Okay, to simplify, suppose the AI has a function ...

Boolean humankind_approves(Outcome o)

... that returns true when humankind would approve of a particular outcome o, and false otherwise.

At any given point, the AI needs to have a well specified utility function.

Okay, to simplify, suppose the AI has a function ...

Outcome U(Input i)

... which returns the outcome(s) (e.g., answer, plan) that optimize expected utility given the input i.

But it doesn't have any reason to care.

Assuming the AI is corrigible (I think we all agree that if the AI is not corrigible, it shouldn't be turned on), we modify its utility function to U' where

U'(i) = U(i) when humankind_approves(U(i)), and null when no outcome satisfying humankind_approves exists.

I suggest that an AI with utility function U' is a friendly AI.
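
To make the definition concrete, here is a toy sketch (every function and outcome here is an illustrative stand-in for the machinery described above, not a claim about a workable AI design):

```python
# Toy sketch of U': take the AI's outcomes in order of expected utility
# and keep the first one humankind would approve of, else return nothing.
def humankind_approves(outcome):
    return outcome != "pave Earth with computronium"   # toy predicate

def candidate_outcomes(i):
    # Stand-in for U: outcomes ranked by expected utility, best first.
    return ["pave Earth with computronium", "publish a malaria cure"]

def U_prime(i):
    for o in candidate_outcomes(i):     # walk down from the optimum ...
        if humankind_approves(o):       # ... keep the first approved one
            return o
    return None                         # no approved outcome exists

print(U_prime("end malaria"))           # -> "publish a malaria cure"
```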

It could look at the existing research

I think extrapolation from existing research is an interesting area of study, but I was attempting to evoke the surprise of a breakthrough invention. To me, the most interesting inventions are exactly those inventions that are not mundane extrapolations of existing techniques.

Comment by _rpd on The AI That Pretends To Be Human · 2016-02-03T04:50:45.420Z · LW · GW

Emulating human brains is a rather convoluted solution to any problem.

Granted. In practice, it may be possible to represent aspects of humankind in a more compact form. But the point is that if ...

The AI would be very familiar with humans and would have a good idea of our [inventive] abilities.

... then to me it seems likely that "the AI would be very familiar with humans and would have a good idea of actions that would meet human approval."

Taking your analogy ... if we can model chimp inventiveness to a useful degree, wouldn't we also be able to model which human actions would earn chimp approval and disapproval? Couldn't we build a chimp-friendly AI?

Consider a different scenario: a year ago, we asked the first AI to generate a Go playing program that could beat a professional Go player. The first AI submits AlphaGo as its solution after 1 day of processing. How does the second AI determine that AlphaGo is within or outside of human inventiveness at that time?

Comment by _rpd on The AI That Pretends To Be Human · 2016-02-03T01:51:21.318Z · LW · GW

It's easy to detect what solutions a human couldn't have invented. That's what the second AI does

I think, to make this detection, the second AI would have to maintain high resolution simulations of the world's smartest people (if not the entire population), and basically ask the simulations to collaboratively come up with their best solutions to the problem.

Supposing that is the case, the second AI can be configured to maintain high resolution simulations of the entire population, and basically ask the simulations whether they collectively approve of a particular action.

Is there a way to "detect what solutions a human couldn't have invented" that doesn't involve emulating humankind?

Comment by _rpd on The case for value learning · 2016-02-03T00:42:10.432Z · LW · GW

There is regular structure in human values that can be learned without requiring detailed knowledge of physics, anatomy, or AI programming.

While there is some regular structure to human values, I don't think you can say that the totality of human values has a completely regular structure. There are too many cases of nameless longings and generalized anxieties. Much of art is dedicated exactly to teasing out these feelings and experiences, often in counterintuitive contexts.

Can they be learned without detailed knowledge of X, Y and Z? I suppose it depends on what "detailed" means - I'll assume it means "less detailed than the required knowledge of the structure of human values." That said, the excluded set of knowledge you chose - "physics, anatomy, or AI programming" - seems really odd to me. I suppose you can poll people about their values (or use more sophisticated methods like prediction markets), but I don't see how this can yield more than "the set of human values that humans can articulate." It's something, but this seems to be a small subset of the set of human values. To characterize all dimensions of human values, I do imagine that you'll need to model human neural biophysics in detail. If successful, it will be a contribution to AI theory and practice.

Human values are so fragile that it would require a superintelligence to capture them with anything close to adequate fidelity.

To me, in this context, the term "fragile" means exactly that it is important to characterize and consider all dimensions of human values, as well as the potentially highly nonlinear relationships between those dimensions. An at-the-time invisible "blow" to an at-the-time unarticulated dimension can result in unfathomable suffering 1000 years hence. Can a human intelligence capture the totality of human values? Some of our artists seem to have glimpses of the whole, but it seems unlikely to me that a baseline human can appreciate the whole clearly.

Comment by _rpd on The AI That Pretends To Be Human · 2016-02-02T23:40:27.076Z · LW · GW

I mean, the ability to estimate the abilities of superintelligences appears to be an aspect of reliable Vingean reflection.

Comment by _rpd on The AI That Pretends To Be Human · 2016-02-02T23:04:19.875Z · LW · GW

Although we use limited proxies (e.g., IQ test questions) to estimate human intelligence.

Comment by _rpd on The AI That Pretends To Be Human · 2016-02-02T22:48:10.391Z · LW · GW

The opportunities for detecting superintelligence would definitely be rarer if the superintelligence is actively trying to conceal that status.

What about in the case where there is no attempted concealment? Or, even weaker, where the AI voluntarily submits to arbitrary tests. What tests would we use?

Presumably we would have a successful model of human intelligence by that point. It's interesting to think about what dimensions of intelligence to measure. Number of variables simultaneously optimized? Optimization speed? Ability to apply nonlinear relationships? Search speed in a high dimensional, nonlinear solution space? I guess it is more the ability to generate appropriate search spaces in the first place. Something much simpler?

Comment by _rpd on The AI That Pretends To Be Human · 2016-02-02T21:59:07.072Z · LW · GW

Whatever mechanism you use to require the AI to discard "solutions that a human couldn't invent", use that same mechanism to require the AI to discard "actions of which humankind would not approve."

I believe that the formal terminology is to add the condition to the AI's utility function.

Comment by _rpd on The AI That Pretends To Be Human · 2016-02-02T20:49:12.814Z · LW · GW

I wonder if this is true in general. Have you read a good discussion on detecting superintelligence?

Comment by _rpd on The AI That Pretends To Be Human · 2016-02-02T20:45:35.061Z · LW · GW

I think that if you are able to emulate humankind to the extent that you can determine things like "solutions that a human couldn't invent" and "what a human given a year to work on it, would produce," then you have already solved FAI, because instead you can require the AI to "only take actions of which humankind would approve."

To use AI to build FAI, don't we need a way to avoid this Catch-22?

Comment by _rpd on The AI That Pretends To Be Human · 2016-02-02T20:02:44.372Z · LW · GW

it doesn't optimize without end to create the best solution possible, it just has to meet some minimum threshold, then stop.

It's easy to ask hard questions. I think it can be argued that emulating a human is a hard problem. There doesn't seem to be a guarantee that the "minimum threshold" doesn't involve converting planetary volumes to computronium.

I think the same problem is present in trying to specify the minimum required computing power for a task prior to performing the task. It isn't obvious to me that calculating "minimum required computing power for X" is any less difficult than performing some general task X.

Comment by _rpd on Learning Mathematics in Context · 2016-01-31T17:07:28.479Z · LW · GW

provided the field is important within the context of human societal development and in engaging the material I gain a nuanced understanding of the content and a deep appreciation of how the originators created the system.

I'll suggest investigating the problem of "squaring the circle." It has its roots in the origins of mathematics, passes through geometric proofs (including the notions of formal proofs and proof from elementary axioms), was unsolved for 2000 years in the face of myriad attempts, and was proved impossible to solve using the relatively modern techniques of abstract algebra.

The linked site has references (some already mentioned in this thread) that may be helpful ...

R. Courant and H. Robbins, What Is Mathematics?, Oxford University Press, 1996.

H. Dorrie, 100 Great Problems of Elementary Mathematics, Dover Publications, NY, 1965.

W. Dunham, Journey Through Genius, Penguin Books, 1991.

M. Kac and S. M. Ulam, Mathematics and Logic, Dover Publications, NY, 1968.

including ...

R. B. Nelsen, Proofs Without Words, MAA, 1993,

which may be of special interest to you.

Comment by _rpd on [Link] AlphaGo: Mastering the ancient game of Go with Machine Learning · 2016-01-27T22:48:16.222Z · LW · GW

Yudkowsky seems to think it is significant ...

https://news.ycombinator.com/item?id=10983539

Comment by _rpd on Open thread, Jan. 25 - Jan. 31, 2016 · 2016-01-27T19:16:42.750Z · LW · GW

How did Belmopan, Brasília, Abuja and Islamabad do it?

Well, all of these were deliberate decisions to build a national capital. They overcame the bootstrap problem by being funded by a pre-existing national tax base.

dozens of new cities built just in Singapore during the past half century

Again, government funding is used to overcome the bootstrap problem. Singapore is also geographically small, and many of these "cities" would be characterized as neighborhoods if they were in the US.

Las Vegas

Well, Wikipedia says it began life as a water resupply stop for steam trains, and then got lucky by being near a major government project - Hoover Dam. Later it took advantage of regulatory differences. An eccentric billionaire seems to have played a key role.

There seem to be several towns that exist because of regulatory differences, so this seems a factor to consider - at least one eccentric billionaire seems fairly serious about "seasteading" for this reason. Historically, religious and ideological differences have founded cities, if not nations, so this is one way to push through the bootstrap phase - Salt Lake City being a relatively modern example in the US. Masdar City (zero carbon, zero waste) is an interesting example, ironically funded by oil wealth.

Comment by _rpd on Open thread, Jan. 25 - Jan. 31, 2016 · 2016-01-27T15:13:16.450Z · LW · GW

Gentrification simply means that rents go up in certain parts of the city. It doesn't have directly something to do with new investments.

In my experience, gentrification is always associated with renovation and new business investment. The Wikipedia article seems to confirm that this is not an uncommon experience.

Comment by _rpd on Open thread, Jan. 25 - Jan. 31, 2016 · 2016-01-27T12:26:45.728Z · LW · GW

I think Seattle's South Lake Union development, kickstarted by Paul Allen and Jeff Bezos, is a counterexample ...

http://crosscut.com/2015/05/why-everywhere-is-the-next-south-lake-union/

Perhaps gentrification is a more general counterexample. But you're right, most developers opt for sprawl.

Comment by _rpd on Open thread, Jan. 25 - Jan. 31, 2016 · 2016-01-27T11:41:23.746Z · LW · GW

But similar profits are available at lower risk by developing at the edges of existing infrastructure. In particular, incremental development of this kind, along with some modest lobbying, will likely yield taxpayer funded infrastructure and services.

Comment by _rpd on Open thread, Jan. 25 - Jan. 31, 2016 · 2016-01-27T01:05:21.197Z · LW · GW

High quality infrastructure and community services are expensive, but taxpayers are reluctant to relocate to the new community until the infrastructure and services exist. It's a bootstrap problem. Haven't you ever played SimCity?

Comment by _rpd on Learning Mathematics in Context · 2016-01-27T00:28:44.752Z · LW · GW

Perhaps a Mathematics for Philosophers book like this http://www.amazon.com/dp/1551119099 ?

Comment by _rpd on Open thread, Jan. 18 - Jan. 24, 2016 · 2016-01-21T18:35:25.244Z · LW · GW

We can expect lower food prices. High food prices have been an important political stressor in developing nations.

Comment by _rpd on Open thread, Jan. 18 - Jan. 24, 2016 · 2016-01-21T18:25:10.261Z · LW · GW

They mainly decided not to cut their production.

And there is a good reason for this decision. Saudi Arabia tried cutting production in the '80s to lift prices, and it was disastrous for them. Here's a blog post with nice graphs showing what happened ...

Understanding Saudi Oil Policy: The Lessons of ‘79

Comment by _rpd on [Link]: KIC 8462852, aka WTF star, "the most mysterious star in our galaxy", ETI candidate, etc. · 2016-01-16T02:53:29.482Z · LW · GW

KIC 8462852 Faded at an Average Rate of 0.165 ± 0.013 Magnitudes Per Century From 1890 To 1989

Bradley E. Schaefer (Submitted on 13 Jan 2016)

KIC 8462852 has been dimming for a century. The comet explanation is very unlikely.

Comment by _rpd on Open Thread, January 11-17, 2016 · 2016-01-12T22:55:46.359Z · LW · GW

If you are just trying to communicate risk, an analogy to a virus might be helpful in this respect. A natural virus can be thought of as code that has goals. If it harms humankind, it doesn't 'intend' to; the harm is just a side effect of achieving its goals. We might create an artificial virus with a goal that everyone recognizes as beneficial (e.g., end malaria), but that does harm due to unexpected consequences or because the artificial virus evolves, self-modifying its original goal. Note that once a virus is released into the environment, it is nontrivial to 'delete' or 'turn off'. AI will operate in an environment that is many times more complex: "mindspace".

Comment by _rpd on Your transhuman copy is of questionable value to your meat self. · 2016-01-11T18:58:11.632Z · LW · GW

A scenario not mentioned: my meat self is augmented cybernetically. The augmentations provide for improved, then greatly improved, then vast cognitive enhancements. Additionally, I gain the ability to use various robotic bodies (not necessarily androids) and perhaps other cybernetic bodies. My perceived 'locus' of consciousness/self dissociates from my original meat body. I see through whatever eyes are convenient, act through whatever hands are convenient. The death of my original meat body is a trauma, like losing an eye, but my sense of self is uninterrupted, since its locus has long since shifted to the augmentation cloud.