Apropos the Wikipedia article, in what way is grey goo a "transhumanist theory"?
Grey goo scenarios are relatively straightforward extrapolations of mundane technological progress and complex system dynamics with analogues in real biological systems. Subscribing to transhumanism is not a prerequisite to thinking that grey goo is a plausible region of the technological development phase space.
Your mention of Zipcar in the context of Netflix is an astute point. Zipcar has a very nice and well-developed infrastructure that would be nearly ideal for the transition. The question is whether or not Zipcar is thinking that far ahead, and I do not know the answer.
Many people do not know that even though Netflix has only been streaming video for a few years, they were very actively building their business around that transition over a decade ago, pretty much from their inception. They built out all of the elements required to take advantage of that transition long before it was technologically viable. Even though their DVD by mail business was highly successful, it was in many ways seen merely as a strategic waypoint. I think Zipcar might be well-advised to take a similar view of their business model, being prepared to strategically cannibalize it when the market is ready for driverless cars.
Everyone is over-thinking this. I used to live in Nevada, and its political process is driven by the unusual history and heuristics of the state.
The politicians do not care about technology, safety, or even being first per se. Nevada has very successfully built a political economy based on doing legislative and regulatory arbitrage against neighboring states, particularly California. If they think there is a plausible way to drive revenue by allowing things that other states do not allow, it is a surprisingly easy sell. The famous liberalism of the state, where a very atypical range of activities are legal and/or unregulated for a US state, is really just a consequence of this heuristic applied over time. If California disallows something that can generate revenue for Nevada, even if just for tourism, Nevada's instinct is to allow it as a response.
It is cheap for them: passing legislation to allow people to do something is almost free. As history shows, the state is pretty comfortable being the first to do a lot of things; it is not as prone to precautionary "what ifs" when there is an argument that the basic risks are manageable. It has worked out well for Nevada.
There are many, many examples of this. Everyone is familiar with "instant" weddings and divorces, which used to be much more difficult to do in most states, as well as gambling, prostitution, and other vices that were outlawed across the border. Nevada's economy is, in large part, based on making things legal and inexpensive.
There are also numerous boring examples, such as approving the construction of power plants along the California border when California had the power shortages but refused to approve power plants in the state; making it a tax-free and highly effective place to run Internet fulfillment centers (e.g. Amazon, B&N, etc. are all there); they managed to designate areas of their cities as international ports to bypass California; they allow Californians to do their DMV paperwork in Nevada for the registration fees (I had a Nevada driver's license with a California address for years); they will approve almost any spectacle with minimal hassle, no matter how bizarre, if it brings in tourists from out of state.
All Google had to do was convince the politicians that they could bring money into Nevada that otherwise would end up in California. It is a calculated risk but Nevada politics has always been very comfortable doing things that are politically too risky in other states. Google probably made an argument from both jobs (the development Google does needs to take place somewhere) and tourism potential. Las Vegas is very fond of people movers that make it easier to fully exploit the city (they have a privately funded monorail system after all) so it could also be sold on that basis.
In short, the only foresight or rationality at work here is driving revenue by legalizing something that other states are unlikely to allow. This is an old modus operandi for Nevada legislative activity and someone at Google probably knew this.
I have typically sought advice (and occasionally received unsolicited advice) from fashion-aware women, most of whom are happy to demonstrate their domain expertise. This has proven to be an efficient strategy that produces good results for relatively low cost. Most of the men I know that dress well rely on a similar strategy; the dearth of men who are savvy at this suggests a somewhat complex signaling game at work.
Take advantage of specialization. It is no different than when individuals solicit advice from me on a matter about which I am perceived as knowledgeable. People enjoy demonstrating their expertise.
There is no reason we cannot massively parallelize algorithms on silicon; it just requires more advanced computer science than most people use. Brains have a direct-connect topology, silicon uses a switch fabric topology. An algorithm that parallelizes on the former may look nothing like the one that parallelizes on the latter. Most computer science people never learn how to do parallelism on a switch fabric, and it is rarely taught.
Tangentially, this is why whole brain emulation on silicon is a poor way of doing things. While you can map the wetware, the algorithm implemented in the wetware probably won't parallelize on silicon due to the fundamental topological differences.
While computer science has focused almost solely on algorithms that require a directly connected network topology to scale, there are a few organizations that know how to generally implement parallelism on switch fabrics. Most people conflate their ignorance with there being some fundamental limitation; it requires a computational model that takes the topology into account.
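To make the contrast concrete, here is a toy sketch of my own (the function names are made up for illustration, and the "message count" is a deliberately crude proxy): the same global sum expressed as two different communication patterns. The arithmetic is identical; what changes is how much traffic the pattern pushes through a shared fabric.

```python
# Toy illustration: the same global sum under two communication patterns.
# On a direct-connect topology the all-to-all exchange is cheap; pushed
# through a shared switch fabric it generates O(n^2) messages, while a
# tree reduction needs only O(n).

def all_to_all_sum(values):
    """Every node sends its value to every other node; each node sums locally.
    Message count: n * (n - 1)."""
    n = len(values)
    messages = n * (n - 1)
    return sum(values), messages

def tree_reduce_sum(values):
    """Pairwise (binary tree) reduction: halves the number of active nodes
    each round. Message count: n - 1, spread over log2(n) rounds."""
    messages = 0
    vals = list(values)
    while len(vals) > 1:
        nxt = []
        for i in range(0, len(vals) - 1, 2):
            nxt.append(vals[i] + vals[i + 1])  # one message per merged pair
            messages += 1
        if len(vals) % 2:                      # odd node carries over
            nxt.append(vals[-1])
        vals = nxt
    return vals[0], messages

if __name__ == "__main__":
    data = list(range(1024))
    print(all_to_all_sum(data))   # (523776, 1047552)
    print(tree_reduce_sum(data))  # (523776, 1023)
```

Both produce the same answer; the only thing that differs is the volume of traffic the fabric has to absorb, which is the sort of consideration topology-aware algorithm design is actually about.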
However, that does not address the issue of "foom". There are other topology invariant reasons to believe it is not realistic on any kind of conventional computing substrate even if everyone was using massively parallel switch fabric algorithms.
A question that needs to be asked is where you are willing to go to find a job. San Jose? The best choices are somewhat context dependent.
Seaside's economy is based on a military post and agriculture, neither of which are conducive to an intellectually interesting job scene. There is a shortage of good computer people an hour north, so if you are looking up there and having trouble then there is probably a presentation gap. At the same time, I would not be surprised at all if you found the options in your area to be unsatisfactory.
The ASVAB is not an exemplar of careful correctness and it is not targeted at people for whom that would be beneficial. When I took it many years ago there were a few questions with glaring ambiguities and questionable assumptions; I simply picked the answer that I thought they would want me to pick if I was ignorant of the subject matter.
I maxed the test.
The test is not aimed at intelligent, educated people. It is designed to filter out people of low intelligence. I've met many people that struggled to achieve 50%, something I used to find shocking. If there are a few technical ambiguities then that is of little consequence for its intended purpose. While there are some basic occupation recommendations based on the ASVAB, it is not designed to identify the significantly above average -- quite the opposite.
Define "top 1%". Many programmers may be "top 1%" at some programming domain in some sense but they will not be "top 1%" for every programming domain. It is conceivable that there are enough specializations in software such that half of all programmers are "top 1%" at something, even if that something is neither very interesting nor very important in any kind of grand sense. It is not just by domain either, many employers value a particular characteristic within that niche e.g. speed versus thoroughness versus optimization. Most employers are filling a small niche.
The rare kind of programmer is one who is top 1% across a broad swath of domains. These programmers are rare, highly valued, and very difficult to find; for these it is probably more like 0.1% and they are more likely to select you than you them. The closer you get to a truly general "top 1%" the rarer the specimens become.
So the question becomes, are employers hiring the top 1% of programmers as an average of their skill and performance across hundreds of metrics or are they hiring the top 1% for the narrow set of skills and characteristics they value? In my experience, it is usually the latter.
Anecdotally, I hire on a slightly different criterion than either of the above. I hire people who can very quickly become top 1% in whatever domain is required; I've met candidates with little domain expertise and an extraordinary aptitude at acquiring it. My reasoning is simple: given enough time and exposure, they will become that rare generalist top 1%.
What would a survey of a cross-section of "computer experts" have looked like predicting the Internet in 2005 from 1990? The level of awareness required to make that prediction accurately is not generally found; people who did understand it well enough to make an educated guess would be modeled as outliers. The above survey is asking people to make a similar type of prediction.
An important aspect of AI predictions like the above is that it is asking people who do not understand how AI works. They are definitely experts on the history of past attempts but that does not imply the domain knowledge required to predict human-level AI. It is a bit like asking the Montgolfier brothers to predict when man would land on the moon -- experts on what has been done but not on what is required.
There are many reasoned extrapolations of technology arrival dates based on discernible trends -- think Moore's Law -- but something comparable in AI does not exist. The vast majority of AI people have no basis on which to assert that the problem, something they generally can't even define, will be solved next week or next century. The few that might know something will be buried in the noise floor. Consequently, I do not find much value in these group predictions.
Zeitgeist is not predictive except perhaps in a meta way.
A problem is that karma attempts to capture orthogonal values in a single number. Even though you can reduce those values to a single number, they still need to be captured as separate values; the Slashdot karma system is a half-assed example.
Karma seems to roughly fall into one of three buckets. The first is entertainment value e.g. a particularly witty comment that nonetheless does not add material value to the discussion. The second is informational value e.g. posting a link to particularly relevant literature of which many people are unaware. The third is argumentative value e.g. a well-reasoned and interesting perspective. All of these are captured as "karma" to some extent or another.
One objection is that this makes it difficult to filter content based on karma, which raises questions about its value. If, for example, I am primarily interested in reading hilarious witticisms and interesting layman opinions, there is no way to filter out comments that contain dry references to academic literature. Alternatively, if I lack an appropriate sense of humor, I might find the karma attributed to immaterial witticisms inexplicable.
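As a rough sketch of what "capture the values separately, reduce for display" might look like (illustrative names only, not any real site's schema):

```python
# Sketch of the idea above: keep the orthogonal components separately and
# derive the single "karma" number as a reduction, so readers can still
# filter on the components they care about.

from dataclasses import dataclass

@dataclass
class KarmaVote:
    entertainment: int = 0   # witty but immaterial
    informational: int = 0   # relevant references, data
    argumentative: int = 0   # well-reasoned perspective

    def scalar(self, weights=(1.0, 1.0, 1.0)):
        """Collapse the components into one displayable number."""
        w_e, w_i, w_a = weights
        return w_e * self.entertainment + w_i * self.informational + w_a * self.argumentative

comments = {
    "witty one-liner": KarmaVote(entertainment=12),
    "literature link": KarmaVote(informational=9, argumentative=2),
    "long argument":   KarmaVote(argumentative=7, informational=3),
}

# A reader who only wants humor and opinion filters on the components,
# something a single collapsed number cannot support.
for title, k in comments.items():
    if k.entertainment + k.argumentative >= 5:
        print(title, k.scalar())
```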
Even if a clever system was devised and ease of use was ignored, there are still issues of gaming and perverse incentives (e.g. Gibbard-Satterthwaite theorem et al). To misappropriate an old saying, "karma is a bitch".
Gamification is essentially the art of exploiting human cognitive biases so it is very meta to use gamification to teach rationality.
Chomsky? He is something of a bellwether for specious reasoning, which is a contribution of sorts. The obviously inconsistent logic of the various beliefs he holds makes his philosophy, such as it is, seem disjointed and arbitrary.
As a philosopher, he plays a "crazy uncle" character.
It is more or less what khafra stated. I'm not saying it is true in your case (hint: winking smiley) but it is very common for people to evaluate their life choices as you did without regard for the evidence. To put it another way, your statement would only be distinguishable from the ubiquitous life choice confirmation bias if you stated it had made your life much worse.
I can imagine several places worse than the Bay Area for many people (and several places better), so it is not as though your statement was not plausible on its face. :-)
This story is about rapid iteration rather than quantity. The "quantity" is the detritus of evolution created while learning to produce a perfect pot. If a machine was producing pots it would generate great quantity but the quality would not vary from one iteration to the next.
There are many stories and heuristics in engineering lore suggesting rapid iteration converges on quality faster than careful design. See also: OODA loops, the equivalent military heuristic.
That sounds an awful lot like confirmation bias. ;-)
I have always had this problem in a bad way, but the above prescription strikes me as flimsy. What is to prevent me from disabling the technical device so that I can get my pellet of rat food? What if I need to dig through a bunch of links for whatever work it is I am supposed to be doing? It does not structurally modify incentives or behaviors.
To put it another way, if it is a huge waste of time when you are supposed to be working on something else, is it ever not a waste of time?
The best solution to the problem of wasting time, for me, is something that I tripped across accidentally: leveraging social media. I found that carefully curating the feeds of "interesting things" from various sources to maximize the signal-to-noise ratio, which produces a surprisingly manageable stream, made most of my usual haunts boring. Over time, I simply lost interest in all of my usual time-wasting sites because I was extracting most of the value in concentrated form by other means before I ever wandered over to those sites. Most of what I spent my time on was wading through amusing crap to find a few nuggets, but while wading through that crap it was easy to spend time on amusements. When the incentive to wade through that morass disappeared, so did my exposure to distractions.
Aggressive social curation of my news feeds, originally done because I did not have time for the raw feed, achieved a signal to noise ratio where I lost interest in most of the time wasters. All I really did was inadvertently extract in pure form most of the value that made me expose myself to time wasters in the first place. It has been the single biggest optimization in me not wasting time in ages and all it really required was aggressive culling and tailoring for quality and uniqueness of content.
As a general comment based on my own experience, there is enormous value in studying the existing art to know precisely what science and study has actually been done -- not what people state has been done. And at least as important, learning the set of assumptions that have driven the current body of evidence.
This provides an enormous amount of context as to where you can actually attack interesting problems and make a difference. Most of my personal work has been based on following chains of reasoning that invalidated an ancient assumption that no one had revisited in decades. I wasn't clever, it was really a matter of no one asking "why?" in many years.
Some of these hobbies are not like the others. I would classify hobbies based on whether or not rationalism is an essential prerequisite for engaging in the hobby. Programming and poker make sense to me but the rationales for the rest seem to be thinner, ascribing lessons that could be ascribed to almost any activity.
The distinction, as I see it, is that both programming and poker require rationalist discipline in depth that must be internalized to be effective. I can play video games or read/watch science fiction and benefit from the entertainment value without any investment in rationalism. By contrast, the very act of programming requires a considerable amount of logic and careful reasoning to produce anything but the most trivial result. Without a significant investment in rationalist thinking, you can't participate in a constructive way, which to my mind defines a "rationalist hobby".
At a very high level, the problem is almost intrinsic; it is very difficult to stop a determined attacker given the current balance between defensive and offensive capabilities. A strong focus on hardening only makes it expensive, not impossible.
That said, most security breaches like the above are the result of incompetence, negligence, ignorance, or misplaced trust. In other words, human factors. Humans will continue to be a weak link across all of the components involved in security. There comes a point where systems are sufficiently hardened at a technical level that it is almost always easiest to attack the people that have access to them rather than the systems themselves.
For #1, having to drive, work, go to another important function, or being required to drink more later at some other function seems to be an acceptable occasional excuse but not a permanent one.
On #3, many cultures have sayings and aphorisms that share the idea that people who do not drink are not as trustworthy in various contexts. Much of it seems to follow from the idea that people are more honest when they've had a drink or two, and therefore people who do not drink are hiding their true character. The display of honesty is considered a trust-building exercise. I recall a proverb (Persian?) to the effect that people should not agree to serious matters sober that they have not discussed drunk.
On #2, if you must drink socially then drink very slowly. This can be developed to a fine art such that you are participating but in fact consuming very little alcohol. There are also drinks you can order at any bar that have low alcohol content and large volume, e.g. a redeye (tomato juice and light beer).
On the other side of that argument, a fetus does not have the higher brain function or consciousness that would allow it to experience pain. When an adult is put under general anesthesia for surgery we do not generally consider them to be "experiencing pain" even though the body is still reacting to the damage as though they were conscious. They still have brain function; they just temporarily lack the higher brain function required for the meaningful experience of pain. A very similar argument could be applied to a fetus.
There is another effective framing technique that I almost never see used that might be worth considering because I've seen it used well in the past.
Most people think evolution is a purely biological concept and it is virtually always framed in such ways. This runs headlong into the mystical beliefs many people attach to living organisms. Making evolution a property of an organism is no different than making a "soul" the property of an organism to them, and fits in the same cognitive pigeonhole. A lot of the jumbled chemistry and thermodynamic arguments follow from this as well; to that way of thinking, biology is special precisely because it can violate the laws of science.
Evolution is fundamentally a systems dynamic from mathematics. If you have a system -- any system -- with a certain set of abstract properties then there are certain required mathematical consequences. The result of 2+2 is always 4, no matter where in nature we find it. Biology is just one type of system to which this mathematics is applied; it has the prerequisite properties on the left hand side of the equation that require the system dynamic biologists call "evolution" on the right side of the equation. Mathematics asserts that evolution should exist in biology whether or not science has found evidence of it (fortunately, we have found much evidence). When evolution emerges from mathematics instead of biology, it has a sterilizing effect on the concept.
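For what it is worth, the point can be made with a toy that has no biology in it at all. The sketch below is my own illustration with arbitrary parameters: give a population heredity, variation, and differential replication, and the adaptive climb falls out of those abstract properties alone.

```python
# Minimal abstract illustration: a population with heredity, variation, and
# differential replication -- no biology anywhere -- still exhibits the
# adaptive dynamic we label "evolution".

import random

random.seed(0)
GENOME_LEN = 32

def fitness(genome):
    return sum(genome)          # arbitrary criterion: count of 1-bits

def mutate(genome, rate=0.02):
    return [b ^ (random.random() < rate) for b in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(50)]

for generation in range(200):
    # differential replication: fitter genomes leave more copies
    population.sort(key=fitness, reverse=True)
    parents = population[:25]
    population = [mutate(random.choice(parents)) for _ in range(50)]

print(max(fitness(g) for g in population))   # climbs toward 32 from ~16
```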
It turns out that very few creationists are willing to dismiss mathematics in the same way they dismiss science. Mathematics is neutral territory, it does not have a political or religious affiliation in the minds of most people, and almost everyone tacitly "believes" in it because they use it every day. The few times I have seen this strategy used -- completely divorcing evolution from science -- even the militant true believers found themselves at a loss for a counter-argument (not that it changed their minds).
Tangentially, the fact that she is arguing with a person that believes in evolution could itself be a problem that changes the dynamic.
I've often observed that most people believe in evolution in essentially the same way a creationist believes in creationism. They did not reason themselves into that position nor do they really understand evolution in any significant way; it was a position they were told all right-thinking people should believe and so that is why they do so. The charge often put forward by creationists that evolution is merely another quasi-religious belief comports with reality in many cases, unfortunately. Nominal evolutionists that are clueless about evolution and just parrot talking points can often destroy the credibility of scientific evolutionists that come later.
Understanding the qualifications of the person she is having a discussion with is helpful from a tactical standpoint -- it determines the nature of the defense of creationist ideas.
Being a veteran of many creationist arguments, I would make the point that there is no need or reason to bring religion into it nor even to talk about "losing an argument". You can ignore it entirely if you choose to and it usually keeps the defensiveness down. Also, stay away from any arguments that have well-known creationist defenses; it will force her to think about what you are saying rather than giving the opportunity to borrow what someone else has said. Keep the tone matter-of-fact so that it doesn't sound like you have an emotional investment in it -- very important. If you really want to play it clean, don't even frame it in terms of what you think or believe; just discussing chains of reasoning over reasonable facts without inserting a value judgement helps make the other party think they are reasoning to the conclusion themselves rather than you imposing your beliefs.
Most of the work is framing. If you make it completely orthogonal to what anyone believes, religious or otherwise, and turn it into an emotionally-neutral logic puzzle, you can often bypass the memetic defenses long enough to make a difference. It does not matter what you believe if we have this interesting set of scientific facts...
A related empirical data point is that we already see strong light cone effects in electronic markets. The machine decision speeds are so fast that it is not possible to usefully communicate with similarly fast machines outside of a radius of some small number of kilometers because the state of reality at one machine changes faster than it can propagate that information to another due to speed of light limitations. The diminishing ability to influence decisions as a function of distance raises questions about the relevancy of most long haul communication between AGI-like systems.
This is also related to another computer phenomenon where it is becoming cheaper to duplicate computation than to transmit the result of one computation.
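To put rough numbers on the light-cone point above (a back-of-the-envelope sketch of mine, assuming vacuum light speed; real fiber is slower, which shrinks the radius further):

```python
# Rough numbers for the light-cone point: how far away can a peer be and
# still return an answer before the local state has already moved on?

C_KM_PER_US = 0.2998          # speed of light, kilometers per microsecond

def useful_radius_km(decision_window_us):
    """Farthest distance from which a round trip (query + response) can
    complete within one decision window."""
    return C_KM_PER_US * decision_window_us / 2.0

for window_us in (1, 10, 100):
    print(f"{window_us:>4} us decision loop -> ~{useful_radius_km(window_us):.1f} km radius")
# 1 us -> ~0.1 km, 10 us -> ~1.5 km, 100 us -> ~15.0 km
```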
It is analogous to how you can implement a hyper-cube topology on a physical network in normal 3-space, which is trivial. Doing it virtually on a switch fabric is trickier.
Hyper-dimensionality is largely a human abstraction when talking about algorithms; a set of bits can be interpreted as being in however many dimensions is convenient for an algorithm at a particular point in time, which follows from fairly boring maths e.g. Morton's theorems. The general concept of topological computation is not remarkable either, it has been around since Tarski, it just is not obvious how one reduces it to useful practice.
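A small illustration of the "same bits, whatever dimensionality is convenient" point, using the standard Morton (Z-order) construction; this only shows the bit-interpretation piece, not the reduction to practice discussed below.

```python
# The dimensionality lives entirely in how the algorithm chooses to read the
# bits: a single integer and a pair of coordinates are the same data.

def morton_encode_2d(x, y, bits=16):
    """Interleave the bits of x and y into a single integer."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)
        code |= ((y >> i) & 1) << (2 * i + 1)
    return code

def morton_decode_2d(code, bits=16):
    """Recover the 2-D coordinates from the interleaved integer."""
    x = y = 0
    for i in range(bits):
        x |= ((code >> (2 * i)) & 1) << i
        y |= ((code >> (2 * i + 1)) & 1) << i
    return x, y

assert morton_decode_2d(morton_encode_2d(37, 1024)) == (37, 1024)
# Nearby (x, y) pairs tend to get nearby codes, so a flat, linearly addressed
# structure on a switch fabric can preserve much of the N-D locality.
```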
There is no literature on what a reduction to practice would even look like, but it is a bit of an open secret in the world of large-scale graph analysis that the very recent ability of a couple of companies to parallelize graph analysis is based on something like this. Graph analysis scalability is closely tied to join algorithm scalability -- a well-known hard-to-parallelize operation.
You are missing the point. There are hyper-dimensional topological solutions that can be efficiently implemented on vanilla silicon that obviate your argument. There is literature to support the conjecture even if there is not literature to support the implementation. Nonetheless, implementations are known to have recently been developed at places like IBM Research and have been publicly disclosed to exist (if not the design). (ObDisclosure: I developed much of the practical theory related to this domain -- I've seen running code at scale.) Just because the brain exists in three dimensions does not imply that it is a 3-dimensional data model any more than analogous things are implied on a computer.
It is not an abstraction, you can implement these directly on silicon. There are very old theorems that allow the implementation of hyper-dimensional topological constructs on vanilla silicon (since the 1960s), conjectured to support massive pervasive parallelism (since the 1940s), the reduction to practice just isn't obvious and no one is taught these things. These models scale well on mediocre switch fabrics if competently designed.
Basically, you are extrapolating a "we can't build algorithms on switch fabrics" bias improperly and without realizing you are doing it. State-of-the-art parallel computer science research is much more interesting than you are assuming. Ironically, the mathematics behind it is completely indifferent to dimensionality.
There is a subtle point I think you are missing. The problem is not one of processing power or even bandwidth but one of topology. Increasing the link bandwidth does not solve any problems nor does increasing the operations retired per clock cycle.
In parallel algorithms research, the main bottleneck is that traditional computer science assumes that the communication topology is a directly connected network -- like the brain -- but all real silicon systems are based on switch fabrics. For many years computer science simplified the analysis by treating these as interchangeable when they are not and the differences from an algorithm design standpoint start to become very apparent when parallelism exceeds a certain relatively low threshold.
The real limitation is that humans currently have very limited ability to design parallel algorithms from the theoretical assumption of a switch fabric. There are two ways to work around this. The first involves inventing a scalable direct-connect computing architecture (not any time soon), and the second involves developing a new body of computer science that scales on switch fabrics (currently a topic of research at a couple of places).
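A toy cost model of my own (a deliberate simplification, not a real fabric simulator, with made-up parameters) shows where the "interchangeable topologies" assumption breaks down for the same all-to-all exchange:

```python
# The same all-to-all exchange under two assumptions. Direct connect: every
# pair has its own link, so cost grows with the per-node degree. Switch
# fabric: all cross traffic shares the fabric's bisection, so cost grows
# roughly with n^2.

def all_to_all_cost_direct(n):
    # each node talks to n-1 peers over dedicated links, one step per peer
    return n - 1

def all_to_all_cost_fabric(n, bisection_links=64):
    # roughly n^2 / 2 messages must cross the bisection, serialized over
    # however many links the fabric provides across it
    return (n * n / 2) / bisection_links

for n in (64, 256, 1024, 4096):
    print(n, all_to_all_cost_direct(n), all_to_all_cost_fabric(n))
# The two models diverge quickly, which is why an algorithm tuned for one
# topology can fall over on the other past a fairly low level of parallelism.
```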
Of course, it turns out I'll be in London on that day...
Elysian Pub is a good spot. Conveniently located too, as far as I am concerned.