Long Bets is an older, rather sparse variation that publicizes bets made between public figures: http://www.longbets.org/
I would estimate the intended reaction to be: "Well I don't act like these despicable characters!" and then "Oh wait - maybe I have..". To me it seems like a tale of the bad we can do when we aren't thinking about it. Or to put it another way, the difficulty of making our behavior consistent with our morality. I see little evidence that the underlying morality is any different from the West.
To spare anyone the effort: I presume it's because they begin having children, and only future children are relevant.
Why does the curve descend pre-adolescence? Doesn't an average 18 year old have higher long-term reproductive potential than an 8 year old?
Alright, so we are headed for some variety of golden rule / mutual defense treaty, imposed to respect each other's values simply because there is reason to believe, if not provable, that there exists some OTHER force in the universe more powerful than the ones currently signing the treaty. This of course does not void 'friendly' attempts to modify unwanted behaviors, which, added together with a 'will to power', would likely have civilizations drifting towards a common position ultimately.
Great bouncing Bayesian Babyeater babies Batman!
Sounds like a reasonable experiment. Nothing lasts forever. If Robin does indeed shut down, we've already lost the old OB. I suspect Eli wants a child that will actually grow up and leave the home. I predict the first sign of decay will be the upvoting of humor.
..just saw the stat correction..
Terence McKenna was fond of saying essentially the same thing as Vaksel.
"1% occurrence, and 95% accuracy, a diagnostic test would yield only a 19% probability"
Isn't it ~16.1%? (95 * 1) / ((95 * 1) + (5 * 99))
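To spell out that arithmetic (a minimal sketch, assuming a 1% base rate and a test that is 95% accurate in both directions):

# Probability of actually having the condition given a positive test.
base_rate = 0.01
sensitivity = 0.95          # P(positive | condition)
false_positive_rate = 0.05  # P(positive | no condition)

posterior = (sensitivity * base_rate) / (
    sensitivity * base_rate + false_positive_rate * (1 - base_rate)
)
print(posterior)  # ~0.161, i.e. roughly 16.1%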
Come buy your doohicky today, because at these prices, supplies won't last for long!
The perfect is the enemy of the good, especially in fiction.
I always think of boredom as the chorus of brain agents crying out that 'whatever you are doing right now, it has not recently helped ME to achieve MY goals'. Boredom is the emotional reward circuit to keep us rotating contributions towards our various desired goals. It also applies even if we are working on a specific goal, but not making progress.
I think as we age our goals get fewer, narrower, and a bit less vocal about needing to be pleased, thus boredom recedes. In particular, we accept fewer goals that are novel, which means the goals we do have tend to be more practical, with existing known methods of achieving them, such that we are more often making progress.
"it was keeping me in Near-side mode and away from Far-side thinking."
So this is following Robin's lead in implying that far-side thinking can be a permanent mode of operation. I don't think you have any choice but to operate in near-side mode if you spend a significant amount of time thinking about any given subject. Far-side mode is the equivalent of a snap judgement. Most of the post is routine from that perspective. You identify weaknesses in the performance of snap judgements, and move on to spending more time thinking on the given subject, with naturally better results.
My optimism about the future has always been induced from historical trend. It doesn't require the mention of AI for that, or for most of the fun topics discussed. I would define this precisely as having the justified expectation of pleasant surprise. I don't know the specifics of how the future looks, but can generalize with some confidence that it is likely to be better than today (for people on average, if not necessarily me in particular). If you think the trend now is positive, but the result of this trend somewhere in the future is quite negative, then you have a story to tell about why. And as with all stories about the future, you are likely wrong.
Incidentally, "justified expectation of pleasant surprises" is exactly what I am assessing in the first few minutes of watching a movie. I am forming a judgement about the craft of the filmmakers, rather than anything particular with the plot, but whether I am in 'good hands' for the next couple hours.
If every game did things the same way, the fun value of that method would decline over time. This is why we have genres, and then we have deliberate hybridization of genres.
There still remains some probability that Aaron's recollection is wrong.
sexual weirdtopia: It is mandated by the central processor that participants stop to ask 'are we having fun yet?' every 60 seconds in order to allow the partners to elucidate and record the performance of the previous minute. Failure will result in the central processor rescheduling the desire impulse, and scheduling some other emotional context. This is not just for training; reason stipulates sexual performance can always be further optimized.
I don't know about you guys but I'm having fun just trying to keep this rock from rolling back down the hill.
Fun seems to require not fun in my experience with this particular body. Nevertheless, sign me up for the orgasmium (which appropriately came right after 'twice as hard')?
"The default, loss of control, followed by a Null future containing little or no utility. Versus extremely precise steering through "impossible" problems to get to any sort of Good future whatsoever."
But this is just repeating the same thing over and over. 'Precise steering' in your sense has never existed historically, yet we exist in a non-null state. This is essentially what Robin extrapolates as continuing, while you postulate a breakdown of historical precedent via abstractions he considers unvetted.
In other words, 'loss of control' is begging the question in this context.
Don't bogart that joint, my friend.
It is true that the topic is too large for casual followers (such as myself). So rather than aiming at refining any of the points personally, I wonder in what ways Robin has convinced Eli, and vice-versa. Because certainly, if this were a productive debate, they would be able to describe how they are coming to consensus. And from my perspective there are distinct signals that the prospects of a successful debate decline as posts become acknowledged more for their quality as satire.
"In a foom that took two years.."
The people of the future will be in a considerably better position than you to evaluate their immediate future. More importantly, they are in a position to modify their future based on that knowledge. This anticipatory reaction is what makes both of your opinions exceedingly tenuous. Everyone else who embarks on pinning down the future at least has the sense to sell books.
In light of this, the goal should be to use each other's complementary talents to find the hardest, most rock-solid platform, not to sell the other a castle made of sand.
And I believe that if two very smart people manage to agree on where to go for lunch they have accomplished a lot for one day.
Perhaps you are marginally ahead of your time, Eliezer, and the young individuals that will flesh out the theory are still traipsing about in diapers. In which case, either being a billionaire or a PhD makes it more likely you can become their mentor. I'd do the former if you have a choice.
What could an AI do, yet still be unable to self-optimize? Quite a bit, it turns out: at a minimum, everything that a modern human can do, and possibly a great deal more, since we have yet to demonstrate that we can engineer intelligence. (I admit here that it may be college-level material once discovered.)
If we define the singularity as the wall beyond which is unpredictable, I think we can have an effective singularity without FOOM. This follows from admitting that we can have computers that are superior to us in every way, without even achieving recursive modification. These machines then have all the attendant advantages of limitless hardware, replicability, perfect and expansive memory, deep serial computation, rationality by design, limitless external sensors, etc.
If it is useless to predict past the singularity, and if FOOM is unlikely to occur prior to the singularity, does this make the pursuit of friendliness irrelevant? Do we have to postulate FOOM = singularity in order to justify friendliness?
While awaiting my productivity to reemerge from chaos I stumbled upon an old interview with Ayn Rand and Tom Snyder in which she concludes with 'Thank God for America'. So there ya go.
meh. My last point doesn't make sense. Fixing the bias isn't equivalent to fixing your problem.
So it can be a mind projection fallacy even when you are ultimately reasoning about your own mind? Something needs to cancel out in the divisor. A more accurate assessment of others' mental nature may not assist you when you then tie it back into your own. You have mentioned this productivity issue a couple times, and yet don't want solutions suggested. Now that could be because the solution itself is OT (identifying is ok, but fixing a bias is OT), or because you don't think what works for others could actually work for you.
The Socrates paragraph stands out to me. It doesn't seem sporting to downplay one approach in comparison to another by creating two scenarios, with one being what a five-year old might say and the other being what a college grad (or someone smart enough to go to college) might say. Can that point be illustrated without giving such an unbalanced appearance?
The problem of course (to the discussion and to the above example) is: how much do you think you know about the underlying mechanics of what you are analyzing?
We know we are in for a dramatic finale when the history of the universe is recounted as prologue. Fortunate for us that the searchable neighborhood has always held superior possibilities. And one would expect the endpoint of intelligence to be when this ceases to be the case. Perhaps there will be a sign at the end of the universe that says 'You can't get there from here. Sorry.'.
The leaders in the Netflix Prize competition for the last couple of years have utilized ensembles of large numbers of models with a fairly straightforward integration procedure. You can only get so far with a given model, but if you randomly scramble its hyperparameters or training procedure, and then average multiple runs together, you will improve your performance. The logical path forward is to derandomize this procedure and figure out how to predict, a priori, which model variations become more accurate and which don't. But of course, until you figure out how to do that, random is better than nothing.
As a process methodology, it seems useful to try random variations, find the ones that outperform and THEN seek to explain it.
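A minimal sketch of that randomize-and-average recipe, using ridge regression on made-up data purely as a stand-in (the real Netflix Prize models were far more elaborate); the hyperparameter range and data here are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: predict y from x (a stand-in for ratings data).
x = rng.normal(size=(200, 5))
true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = x @ true_w + rng.normal(scale=0.5, size=200)

def train_randomized_model(x, y, rng):
    # One 'scrambled' run: ridge regression with a random regularization
    # strength, fit on a random subsample of the data.
    alpha = 10 ** rng.uniform(-3, 1)             # random hyperparameter
    idx = rng.choice(len(y), size=len(y) // 2)   # random training subset
    xs, ys = x[idx], y[idx]
    w = np.linalg.solve(xs.T @ xs + alpha * np.eye(x.shape[1]), xs.T @ ys)
    return w

# Average the predictions of many randomized runs (the ensemble).
weights = [train_randomized_model(x, y, rng) for _ in range(20)]
ensemble_pred = np.mean([x @ w for w in weights], axis=0)
print(np.mean((ensemble_pred - y) ** 2))  # ensemble mean squared error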
"..people refusing to contemplate the real values of the opposition as the opposition sees it..."
Popular news and punditry seems saturated with this refusal, to the point that I desire to characterize the media's real values in a wholly unbecoming and unfairly generalized manner. It would be a nice evolution of society if we introduced a 'rationality' class into the public school curriculum. Or perhaps developed a 'Bayes Scouts' with merit badges.
haha - damn - you beat me to it: http://lists.extropy.org/pipermail/extropy-chat/2008-March/042369.html
It's a good thing that Eli's out of the AI-box game. He's too old to win anymore anyway -- not as sharp. And all the things he's been studying for the last 5+ years would only interfere with getting the job done. I would have liked to have seen him in his prime!
Speaking of gatekeeper and keymaster... Does the implied 'AI in a box' dialogue remind anyone else of the cloying and earnest attempts of teenagers (usually male) to cross certain taboo boundaries?
Oh well, likely just me.
In keeping with that metaphor, however, I suspect part of the trick is to make the gatekeeper unwilling to disappoint the AI.
It seems relevant to the above post that the market reaction to the bailout passing on Friday was decidedly negative.
Which puzzles me.
Consequentialist: Is it a fair universe where the wealthy live forever and the poor die in the relative blink of an eye? It seems hard for our current society to look past that when setting public policy. This doesn't necessarily explain why there isn't more private money put to the purpose, but I think many of the intelligent and wealthy at the present time would see eternal life quests as a millennia-long cliche of laughable selfishness and not in tune with leaving a respectable legacy.
Strong AI doesn't have to be the only thing that's really frikkin' hard.
Apparently Luke didn't have to try for very long: http://www.cracked.com/article_16625_p2.html
We'll likely see how long someone can spend straining to lift the starship out of the swamp with no success before giving up. More zebras than Jedi masters in this near, near galaxy.
Not overwhelmingly on-topic?
So we have a small minority of financial wizards and their supporting frameworks convincing everyone that they can take an overwhelmingly, inhumanly complex system and quantify the risk of ALL scenarios. Then when this is proven out as hubris, the broader system appears to exhibit cascading failures that impact direct and non-direct participants. The given leaders with vast resources could essentially flip a coin on the post-hoc solution, biased by the proximity of their next election.
So yes, MBS's aren't active agents with goals, but their caretakers with profit-maximizing motives are. Should we have better engineered the macro-system or the mortgage backed securities?
Maybe we need a Friendly Mortgage Securitization project.
One really does wonder whether the topical collapse of American finance, systemic underestimation of risk, and overconfidence in being able to NEGOTIATE risk in the face of enormous complexity should figure into these conversations more than just a couple of sarcastic posts about short selling.
Of course you can make an inference about the evidenced skill of the scientists. Scientist 1 was capable of picking, out of a large set of models that covered the first 10 variables, the considerably smaller set of models that also covered the second 10. He did that by reference to principles and knowledge he brought to the table about the nature of inference and the problem domain. The second scientist has not shown any of this capability. I think our prior expectation for the skill of the scientists would be irrelevant, assuming that the prior was at least equal for both of them.
Peter: "The first theorist had less data to work with, and so had less data available to insert into the theory as parameters. This is evidence that the first theory will be smaller than the second theory"
The data is not equivalent to the model parameters. A linear prediction model of [PREDICT_VALUE = CONSTANT * DATA_POINT_SEQUENCE_NUMBER] can model an infinite number of data points. Adding more data points does not increase the model parameters. If there is a model that predicts 10 variables and subsequently predicts another 10 variables, there is no reason to add complexity unless one prefers complexity.
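A toy illustration of that point (the constant 2.5 is just an assumed value for the sketch): the same one-parameter model covers the first 10 data points and the next 10 without gaining a single parameter.

# One-parameter model: predicted_value = constant * sequence_number.
CONSTANT = 2.5  # illustrative value; any fixed constant works the same way

def predict(sequence_number):
    return CONSTANT * sequence_number

first_ten = [predict(n) for n in range(1, 11)]   # "first 10 variables"
next_ten = [predict(n) for n in range(11, 21)]   # "second 10 variables"

# The model that covered the first 10 covers the next 10 with zero
# additional parameters; more data does not force a bigger model.
print(first_ten)
print(next_ten)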
So reviewing the other comments now I see that I am essentially in agreement with M@ (on David's blog) who posted prior to Eli. Therefore, Eli disagrees with that. Count me curious.
There are an infinite number of models that can predict 10 variables, or 20 for that matter. The only probable way for scientist A to pick a model out of the infinite possible ones is to bring prior knowledge to the table about the nature of that model and the data. This is also true for the second scientist, but only slightly less so.
Therefore, scientist A has demonstrated a higher probability of having valuable prior knowledge.
I don't think there is much more to this than that. If the two scientists have equal knowledge, there is no reason the second model need be more complicated than the first, since the first fully described the extra data revealed in the second case.
If it was the same scientist with both sets of data then you would pick the second model.
I agree there should be a strong prior belief that anyone pursuing AGI at our current level of overall human knowledge is likely quite ordinary, or at least failing to make reasonably obvious conclusions.
Let me give a shout out to my 1:50 peeps! I can't even summarize what EY has notably accomplished beyond highlighting how much more likely he is to accomplish something. All I really want is for Google to stop returning pages that are obviously unhelpful to me, or for a machine to disentangle how the genetic code works, or a system that can give absolute top-notch medical advice, or something better than the bumbling jackasses [choose any] that manage to make policy in our country. Give me one of those things and you will be one in a million, baby.
It would seem you have to take away pure rationality, and add natural selection, before seeing the emergence of the decision-making standards of humanity!!
It's likely deliberate that prisoners were selected in the visualization to imply a relative lack of unselfish motivations.
"You want to hack evolution's sphagetti code? Good luck with that. Let us know if you get FDA approval."
I think I've seen Eli make this same point. How can you be certain at this point, when we are nowhere near achieving it, that AI won't be in the same league of complexity as the spaghetti brain? I would admit that there are likely artifacts of the brain that are unnecessarily kludgy (or plain irrelevant), but not necessarily in a manner that excessively obfuscates the primary design. It's always tempting for programmers to want to throw away a huge tangled code set when they first have to start working on it, but it is almost always not the right approach.
I expect advances in understanding how to build intelligence to serve as the groundwork for hypotheses of how the brain functions, and vice-versa.
On the friendliness issue, isn't the primary logical way to avoid problems to create a network of competitive systems and goals? If one system wants to tile the universe with smileys, that is almost certainly going to get in the way of the goal sets of the millions of other intelligences out there. They logically then should see value in reporting or acting upon their belief that a rival AI is making their jobs harder. I'd be surprised if humans don't have half their cognitive power devoted to anticipating and manipulating their expectations of rivals' actions.