Comments

Comment by Aron on Wrong Tomorrow · 2009-04-02T17:11:08.000Z · LW · GW

Long Bets is an older, rather sparse variation that publicizes bets made between public figures: http://www.longbets.org/

Comment by Aron on An African Folktale · 2009-02-16T09:39:05.000Z · LW · GW

I would estimate the intended reaction to be: "Well, I don't act like these despicable characters!" and then "Oh wait - maybe I have...". To me it seems like a tale of the bad we can do when we aren't thinking about it. Or, to put it another way, the difficulty of making our behavior consistent with our morality. I see little evidence that the underlying morality is any different from the West's.

Comment by Aron on An Especially Elegant Evpsych Experiment · 2009-02-13T22:05:25.000Z · LW · GW

To spare anyone the effort: I presume it's because they begin having children, and only future children are relevant.

Comment by Aron on An Especially Elegant Evpsych Experiment · 2009-02-13T21:39:08.000Z · LW · GW

Why does the curve descend pre-adolescence? Doesn't an average 18-year-old have higher long-term reproductive potential than an 8-year-old?

Comment by Aron on The Super Happy People (3/8) · 2009-02-01T19:55:25.000Z · LW · GW

Alright, so we are headed for some variety of golden rule / mutual-defense treaty, imposed to respect each other's values simply because there is reason to believe, even if not provable, that there exists some OTHER force in the universe more powerful than the ones currently signing the treaty. This of course does not void 'friendly' attempts to modify unwanted behaviors, which, added together with a 'will to power', would likely have civilizations drifting towards a common position ultimately.

Comment by Aron on The Baby-Eating Aliens (1/8) · 2009-01-31T03:12:03.000Z · LW · GW

Great bouncing Bayesian Babyeater babies Batman!

Comment by Aron on OB Status Update · 2009-01-28T19:59:54.000Z · LW · GW

Sounds like a reasonable experiment. Nothing lasts forever. If Robin does indeed shut down, we've already lost the old OB. I suspect Eli wants a child that will actually grow up and leave the home. I predict the first sign of decay will be the upvoting of humor.

Comment by Aron on Rationality Quotes 25 · 2009-01-28T19:40:41.000Z · LW · GW

..just saw the stat correction..

Comment by Aron on Rationality Quotes 25 · 2009-01-28T19:33:17.000Z · LW · GW

Terence McKenna was fond of saying essentially the same thing as Vaksel.

"1% occurrence, and 95% accuracy, a diagnostic test would yield only a 19% probability"

Isn't it ~16.1%? (95 × 1) / ((95 × 1) + (5 × 99))
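For anyone who wants to check the arithmetic, here is a minimal sketch in Python, assuming (per the quote) a 1% base rate and 95% sensitivity and specificity:

```python
# Bayes' theorem for the diagnostic test: P(disease | positive).
base_rate = 0.01        # P(disease), the 1% occurrence
sensitivity = 0.95      # P(positive | disease)
false_positive = 0.05   # P(positive | no disease), i.e. 1 - specificity

p_positive = sensitivity * base_rate + false_positive * (1 - base_rate)
posterior = sensitivity * base_rate / p_positive
print(f"P(disease | positive) = {posterior:.3f}")  # ~0.161, i.e. ~16.1%
```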

Comment by Aron on Higher Purpose · 2009-01-23T14:47:48.000Z · LW · GW

Come buy your doohicky today, because at these prices, supplies won't last for long!

Comment by Aron on Failed Utopia #4-2 · 2009-01-21T15:01:09.000Z · LW · GW

The perfect is the enemy of the good, especially in fiction.

Comment by Aron on In Praise of Boredom · 2009-01-18T16:02:47.000Z · LW · GW

I always think of boredom as the chorus of brain agents crying out that 'whatever you are doing right now, it has not recently helped ME to achieve MY goals'. Boredom is the emotional reward circuit to keep us rotating contributions towards our various desired goals. It also applies even if we are working on a specific goal, but not making progress.

I think as we age our goals get fewer, narrower, and a bit less vocal about needing to be pleased, so boredom recedes. In particular, we accept fewer goals that are novel, which means the goals we do have tend to be more practical, with existing known methods of achieving them, such that we are more often making progress.

Comment by Aron on Getting Nearer · 2009-01-17T15:54:26.000Z · LW · GW

"it was keeping me in Near-side mode and away from Far-side thinking."

So this is following Robin's lead in implying that far-side thinking can be a permanent mode of operation. I don't think you have any choice but to operate in near-side mode if you spend a significant amount of time thinking about any given subject. Far-side mode is the equivalent of a snap judgement. Most of the post is routine from that perspective: you identify weaknesses in the performance of snap judgements, and move on to spending more time thinking on the given subject, with naturally better results.

Comment by Aron on Seduced by Imagination · 2009-01-16T08:50:14.000Z · LW · GW

My optimism about the future has always been induced from historical trends. It doesn't require the mention of AI for that, or for most of the fun topics discussed. I would define this precisely as having the justified expectation of pleasant surprise. I don't know the specifics of how the future looks, but I can generalize with some confidence that it is likely to be better than today (for people on average, if not necessarily me in particular). If you think the trend now is positive, but the result of this trend somewhere in the future is quite negative, then you have a story to tell about why. And as with all stories about the future, you are likely wrong.

Comment by Aron on Justified Expectation of Pleasant Surprises · 2009-01-15T13:37:40.000Z · LW · GW

Incidentally, "justified expectation of pleasant surprises" is exactly what I am assessing in the first few minutes of watching a movie. I am forming a judgement about the craft of the filmmakers, rather than anything particular with the plot, but whether I am in 'good hands' for the next couple hours.

Comment by Aron on Justified Expectation of Pleasant Surprises · 2009-01-15T13:25:06.000Z · LW · GW

If every game did things the same way, the fun value of that method would decline over time. This is why we have genres, and then we have deliberate hybridization of genres.

Comment by Aron on She has joined the Conspiracy · 2009-01-14T07:22:25.000Z · LW · GW

There still remains some probability that Aaron's recollection is wrong.

Comment by Aron on Building Weirdtopia · 2009-01-13T03:23:30.000Z · LW · GW

Sexual weirdtopia: it is mandated by the central processor that participants stop to ask 'are we having fun yet?' every 60 seconds, in order to allow the partners to elucidate and record the performance of the previous minute. Failure will result in the central processor rescheduling the desire impulse and scheduling some other emotional context. This is not just for training; reason stipulates that sexual performance can always be further optimized.

Comment by Aron on High Challenge · 2008-12-19T17:30:55.000Z · LW · GW

I don't know about you guys but I'm having fun just trying to keep this rock from rolling back down the hill.

Comment by Aron on Prolegomena to a Theory of Fun · 2008-12-18T00:10:38.000Z · LW · GW

Fun seems to require not-fun, in my experience with this particular body. Nevertheless, sign me up for the orgasmium (which appropriately came right after 'twice as hard')?

Comment by Aron on Visualizing Eutopia · 2008-12-16T20:31:03.000Z · LW · GW

"The default, loss of control, followed by a Null future containing little or no utility. Versus extremely precise steering through "impossible" problems to get to any sort of Good future whatsoever."

But this is just repeating the same thing over and over. 'Precise steering' in your sense has never existed historically, yet we exist in a non-null state. This is essentially what Robin extrapolates as continuing, while you postulate a breakdown of historical precedent via abstractions he considers unvetted.

In other words, 'loss of control' is begging the question in this context.

Comment by Aron on Not Taking Over the World · 2008-12-15T22:45:00.000Z · LW · GW

Don't bogart that joint, my friend.

Comment by Aron on What I Think, If Not Why · 2008-12-11T22:09:32.000Z · LW · GW

It is true that the topic is too large for casual followers (such as myself). So rather than aiming to refine any of the points personally, I wonder in what ways Robin has convinced Eli, and vice versa. Certainly, if this were a productive debate, they would be able to describe how they are coming to consensus. And from my perspective there are distinct signals that the prospects of a successful debate decline as the posts become acknowledged more for their quality as satire.

Comment by Aron on What I Think, If Not Why · 2008-12-11T20:29:39.000Z · LW · GW

"In a foom that took two years.."

The people of the future will be in a considerably better position than you to evaluate their immediate future. More importantly, they are in a position to modify their future based on that knowledge. This anticipatory reaction is what makes both of your opinions exceedingly tenuous. Everyone else who embarks on pinning down the future at least has the sense to sell books.

In light of this, the goal should be to use each other's complementary talents to find the hardest, rock-solid platform, not to sell the other a castle made of sand.

Comment by Aron on What I Think, If Not Why · 2008-12-11T18:58:40.000Z · LW · GW

And I believe that if two very smart people manage to agree on where to go for lunch they have accomplished a lot for one day.

Comment by Aron on Is That Your True Rejection? · 2008-12-06T16:23:43.000Z · LW · GW

Perhaps you are marginally ahead of your time, Eliezer, and the young individuals who will flesh out the theory are still traipsing about in diapers. In which case, being either a billionaire or a PhD makes it more likely you can become their mentor. I'd do the former if you have a choice.

Comment by Aron on Hard Takeoff · 2008-12-03T04:27:44.000Z · LW · GW

What could an AI do, yet still be unable to self-optimize? Quite a bit, it turns out: at a minimum, everything that a modern human can do, and possibly a great deal more, since we have yet to demonstrate that we can engineer intelligence. (I admit here that it may be college-level material once discovered.)

If we define the singularity as the wall beyond which the future is unpredictable, I think we can have an effective singularity without FOOM. This follows from admitting that we can have computers that are superior to us in every way without even achieving recursive self-modification. These machines would then have all the attendant advantages of limitless hardware, replicability, perfect and expansive memory, deep serial computation, rationality by design, limitless external sensors, etc.

If it is useless to predict past the singularity, and if FOOM is unlikely to occur prior to the singularity, does this make the pursuit of friendliness irrelevant? Do we have to postulate FOOM = singularity in order to justify friendliness?

Comment by Aron on Thanksgiving Prayer · 2008-11-29T19:31:44.000Z · LW · GW

While waiting for my productivity to reemerge from chaos, I stumbled upon an old interview between Ayn Rand and Tom Snyder in which she concludes with 'Thank God for America'. So there ya go.

Comment by Aron on Chaotic Inversion · 2008-11-29T19:20:09.000Z · LW · GW

meh. My last point doesn't make sense. Fixing the bias isn't equivalent to fixing your problem.

Comment by Aron on Chaotic Inversion · 2008-11-29T19:16:07.000Z · LW · GW

So it can be a mind projection fallacy even when you are ultimately reasoning about your own mind? Something needs to cancel out in the divisor. A more accurate assessment of others' mental nature may not assist you when you then tie it back into your own. You have mentioned this productivity issue a couple of times, and yet don't want solutions suggested. That could be because the solution itself is OT (identifying a bias is OK, but fixing one is OT), or because you don't think what works for others could actually work for you.

Comment by Aron on Whence Your Abstractions? · 2008-11-20T04:31:14.000Z · LW · GW

The Socrates paragraph stands out to me. It doesn't seem sporting to downplay one approach in comparison to another by creating two scenarios, one being what a five-year-old might say and the other what a college grad (or someone smart enough to go to college) might say. Can that point be illustrated without giving such an unbalanced appearance?

The problem of course (to the discussion and to the above example) is: how much do you think you know about the underlying mechanics of what you are analyzing?

Comment by Aron on The First World Takeover · 2008-11-19T21:45:35.000Z · LW · GW

We know we are in for a dramatic finale when the history of the universe is recounted as prologue. Fortunate for us that the searchable neighborhood has always held superior possibilities. And one would expect the endpoint of intelligence to be when this ceases to be the case. Perhaps there will be a sign at the end of the universe that says 'You can't get there from here. Sorry.'

Comment by Aron on The Weighted Majority Algorithm · 2008-11-15T19:40:00.000Z · LW · GW

The leaders in the Netflix Prize competition for the last couple of years have utilized ensembles of large numbers of models with a fairly straightforward integration procedure. You can only get so far with a given model, but if you randomly scramble its hyperparameters or training procedure and then average multiple runs together, you will improve your performance. The logical path forward is to derandomize this procedure and figure out how to predict, a priori, which models' predictions become more accurate and which don't. But of course, until you figure out how to do that, random is better than nothing.

As a process methodology, it seems useful to try random variations, find the ones that outperform, and THEN seek to explain why.
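To make the scramble-and-average idea concrete, here is a minimal sketch in Python; the model (a polynomial fit with a randomly chosen degree) and the toy data are made up for illustration, not taken from any actual Netflix Prize entry:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_and_predict(train_x, train_y, test_x, rng):
    """One model run with a randomly scrambled hyperparameter:
    here, the degree of a polynomial fit."""
    degree = int(rng.integers(1, 6))
    coeffs = np.polyfit(train_x, train_y, degree)
    return np.polyval(coeffs, test_x)

# Toy data: a noisy curve.
train_x = np.linspace(0, 1, 50)
train_y = np.sin(3 * train_x) + rng.normal(0, 0.1, 50)
test_x = np.linspace(0, 1, 20)

# Averaging many randomized runs usually beats a typical single run.
runs = [train_and_predict(train_x, train_y, test_x, rng) for _ in range(25)]
ensemble_prediction = np.mean(runs, axis=0)
```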

Comment by Aron on Traditional Capitalist Values · 2008-10-17T13:05:05.000Z · LW · GW

"..people refusing to contemplate the real values of the opposition as the opposition sees it..."

Popular news and punditry seem saturated with this refusal, to the point that I desire to characterize the media's real values in a wholly unbecoming and unfairly generalized manner. It would be a nice evolution of society if we introduced a 'rationality' class into the public-school curriculum, or perhaps developed a 'Bayes Scouts' with merit badges.

haha - damn - you beat me to it: http://lists.extropy.org/pipermail/extropy-chat/2008-March/042369.html

Comment by Aron on AIs and Gatekeepers Unite! · 2008-10-10T00:58:58.000Z · LW · GW

It's a good thing that Eli's out of the AI-box game. He's too old to win anymore anyway -- not as sharp. And all the things he's been studying for the last 5+ years would only interfere with getting the job done. I would have liked to have seen him in his prime!

Comment by Aron on Shut up and do the impossible! · 2008-10-09T00:56:28.000Z · LW · GW

Speaking of gatekeeper and keymaster... Does the implied 'AI in a box' dialogue remind anyone else of the cloying and earnest attempts of teenagers (usually male) to cross certain taboo boundaries?

Oh well, likely just me.

In keeping with that metaphor, however, I suspect part of the trick is to make the gatekeeper unwilling to disappoint the AI.

Comment by Aron on Intrade and the Dow Drop · 2008-10-05T01:03:00.000Z · LW · GW

It seems relevant to the above post that the market reaction to the bailout passing on Friday was decidedly negative.

Which puzzles me.

Comment by Aron on Beyond the Reach of God · 2008-10-04T21:29:05.000Z · LW · GW

Consequentialist: Is it a fair universe where the wealthy live forever and the poor die in the relative blink of an eye? It seems hard for our current society to look past that when setting public policy. This doesn't necessarily explain why there isn't more private money put to the purpose, but I think many of the intelligent and wealthy at the present time would see eternal-life quests as a millennia-long cliché of laughable selfishness, not in tune with leaving a respectable legacy.

Comment by Aron on Awww, a Zebra · 2008-10-02T10:24:23.000Z · LW · GW

Strong AI doesn't have to be the only thing that's really frikkin' hard.

Comment by Aron on Trying to Try · 2008-10-01T11:56:20.000Z · LW · GW

Apparently Luke didn't have to try for very long: http://www.cracked.com/article_16625_p2.html

We'll likely see how long someone can spend straining to lift the starship out of the swamp with no success before giving up. More zebras than Jedi masters in this near, near galaxy.

Comment by Aron on Intrade and the Dow Drop · 2008-10-01T04:50:48.000Z · LW · GW

Not overwhelmingly on-topic?

So we have a small minority of financial wizards and their supporting frameworks convincing everyone that they can take an overwhelmingly, inhumanly complex system and quantify the risk of ALL scenarios. Then, when this is exposed as hubris, the broader system exhibits cascading failures that impact direct and non-direct participants. The leaders in charge, with vast resources, could essentially flip a coin on the post-hoc solution, biased by the proximity of their next election.

So yes, MBSs aren't active agents with goals, but their caretakers with profit-maximizing motives are. Should we have better engineered the macro-system, or the mortgage-backed securities themselves?

Maybe we need a Friendly Mortgage Securitization project.

Comment by Aron on The Magnitude of His Own Folly · 2008-09-30T18:50:47.000Z · LW · GW

One really does wonder whether the topical collapse of American finance, systemic underestimation of risk, and overconfidence in being able to NEGOTIATE risk in the face of enormous complexity should figure into these conversations more than just a couple of sarcastic posts about short selling.

Comment by Aron on Friedman's "Prediction vs. Explanation" · 2008-09-29T23:33:00.000Z · LW · GW

Of course you can make an inference about the evidenced skill of the scientists. Scientist 1 was capable of picking, out of the large set of models that covered the first 10 variables, the considerably smaller set of models that also covered the second 10. He did that by reference to principles and knowledge he brought to the table about the nature of inference and the problem domain. The second scientist has not shown any of this capability. I think our prior expectation for the skill of the scientists would be irrelevant, assuming that the prior was at least equal for both of them.

Peter: "The first theorist had less data to work with, and so had less data available to insert into the theory as parameters. This is evidence that the first theory will be smaller than the second theory"

The data is not equivalent to the model parameters. A linear prediction model of [PREDICT_VALUE = CONSTANT * DATA_POINT_SEQUENCE_NUMBER] can model an infinite number of data points; adding more data points does not increase the number of model parameters. If there is a model that predicts 10 variables and subsequently predicts another 10, there is no reason to add complexity unless one prefers complexity.
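A minimal sketch of that point in Python (the constant's value is arbitrary, chosen only for illustration):

```python
# One free parameter, arbitrarily many predictions: the model's
# complexity does not grow with the amount of data it covers.
CONSTANT = 2.5  # the model's single parameter

def predict(sequence_number: int) -> float:
    return CONSTANT * sequence_number

first_ten = [predict(n) for n in range(1, 11)]    # covers the first 10 points
second_ten = [predict(n) for n in range(11, 21)]  # the next 10, same one-parameter model
```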

Comment by Aron on Friedman's "Prediction vs. Explanation" · 2008-09-29T21:47:25.000Z · LW · GW

So reviewing the other comments now I see that I am essentially in agreement with M@ (on David's blog) who posted prior to Eli. Therefore, Eli disagrees with that. Count me curious.

Comment by Aron on Friedman's "Prediction vs. Explanation" · 2008-09-29T09:55:46.000Z · LW · GW

There are an infinite number of models that can predict 10 variables, or 20 for that matter. The only plausible way for scientist A to pick a successful model out of the infinitely many possible ones is to bring prior knowledge to the table about the nature of that model and the data. This is also true for the second scientist, but only slightly less so.

Therefore, scientist A has demonstrated a higher probability of having valuable prior knowledge.

I don't think there is much more to this than that. If the two scientists have equal knowledge, there is no reason the second model need be more complicated than the first, since the first fully described the extra data revealed to the second.

If it were the same scientist with both sets of data, then you would pick the second model.

Comment by Aron on Above-Average AI Scientists · 2008-09-28T19:50:03.000Z · LW · GW

I agree there should be a strong prior belief that anyone pursuing AGI at our current level of overall human knowledge is likely quite ordinary, or at least failing to draw reasonably obvious conclusions.

Comment by Aron on The Level Above Mine · 2008-09-26T19:12:39.000Z · LW · GW

Let me give a shout out to my 1:50 peeps! I can't even summarize what EY has notably accomplished, beyond highlighting how much more likely he is to accomplish something. All I really want is for Google to stop returning pages that are obviously unhelpful to me, or for a machine to disentangle how the genetic code works, or a system that can give absolute top-notch medical advice, or something better than the bumbling jackasses [choose any] that manage to make policy in our country. Give me one of those things and you will be one in a million, baby.

Comment by Aron on The Truly Iterated Prisoner's Dilemma · 2008-09-04T20:11:09.000Z · LW · GW

It would seem you have to take away pure rationality, and add natural selection, before seeing the emergence of the decision-making standards of humanity!!

Comment by Aron on The True Prisoner's Dilemma · 2008-09-03T22:27:51.000Z · LW · GW

It's likely deliberate that prisoners were selected in the visualization to imply a relative lack of unselfish motivations.

Comment by Aron on Dreams of Friendliness · 2008-09-02T21:50:40.000Z · LW · GW

"You want to hack evolution's sphagetti code? Good luck with that. Let us know if you get FDA approval."

I think I've seen Eli make this same point. How can you be certain at this point, when we are nowhere near achieving it, that AI won't be in the same league of complexity as the spaghetti brain? I would admit that there are likely artifacts of the brain that are unnecessarily kludgy (or plain irrelevant), but not necessarily in a manner that excessively obfuscates the primary design. It's always tempting for programmers to want to throw away a huge tangled code set when they first have to start working on it, but it is almost always not the right approach.

I expect advances in understanding how to build intelligence to serve as the groundwork for hypotheses about how the brain functions, and vice versa.

On the friendliness issue, isn't the primary logical way to avoid problems to create a network of competitive systems and goals? If one system wants to tile the universe with smileys, that is almost certainly going to get in the way of the goal sets of the millions of other intelligences out there. They logically should then see value in reporting or acting upon their belief that a rival AI is making their jobs harder. I'd be surprised if humans don't have half their cognitive power devoted to anticipating and manipulating their expectations of rivals' actions.