Posts

Comments

Comment by peterward on [Link] Word-vector based DL system achieves human parity in verbal IQ tests · 2015-06-17T00:09:31.893Z · LW · GW

I'm not sure I'm clear on the AI/AGI distinction. Wouldn't an AI need to be able to apply its intelligence to novel situations to be "intelligent" at all, therefore making its intelligence "general" by definition? Watson winning Jeopardy! was a testament to software engineering, but Watson was programmed specifically to play Jeopardy!. If, without modification, it could go on to dominate Settlers of Catan, then we might want to start worrying.

I guess it's natural that IQ tests would be chosen. They are objective and feature a logic a computer can, at least theoretically, recreate or approximate convincingly. Plus a lot of people conflate IQ with intelligence, which helps on the marketing side. (Aside: if there is one place the mind excels, it's getting more out than it started with--like miraculously remembering something otherwise forgotten (in some cases seemingly never learned) at just the right moment. Word-vector embeddings and other fancy relational strategies seem to need way more going in--data-wise--than they chuck back out, making them crude and brute-force by comparison.)

Comment by peterward on Status - is it what we think it is? · 2015-04-02T02:26:26.557Z · LW · GW

I just don't think there are many features of human social organization that can be usefully described by a one-dimensional array, the alleged left-right political divide perhaps being the canonical example. Take two books I have on my Kindle: Sirens of Titan and Influx. While one can truly say the latter is a vastly more terrible book than the former, it would be absurd to say they--and every other book I've read--should be placed in a stack that uniquely ranks them against one another. And it's not a matter of comparing apples and oranges--because you can compare apples and oranges--it's that the comparison is not scalar, perhaps not even mathematically representable at all.

In terms of status, no one knows what the word means. If we base it on influence, then some people who had the most lasting impacts were despised in their day. Additionally, people who wield power over others are generally resented if not loathed by subalterns. As with economics, with social science you can pretty much get the result you want by choosing the slice that yields the results closest to the answer you are looking for.

Comment by peterward on [LINK] Lectures on Self-Control and the myth of Willpower · 2015-03-15T16:01:37.275Z · LW · GW

"He predicts that unconscious signals of a stable environment will increase self-control, which helps explains why high social-economic status correlates strongly with self-control."

What evidence is there that this is true? For what anecdotage is worth (which is probably the only evidence there is on the matter), some of the most out-of-control people I've met have been rich kids. Showing up to a 10-hour shift at a low-wage retail job every day with a smile on your face even though you have medical bills you can't pay--that's real self-control. Meanwhile it's the rich customer who's the first to go ballistic because their latte came out cold.

Obviously people are going to behave worse in a less stable environment. But I'd wager those who have had to deal with real hardship function better than the socio-economically well-off in a crisis.

Comment by peterward on Detecting agents and subagents · 2015-03-15T14:20:20.822Z · LW · GW

This is definitely incidental--

Wouldn't a super-intelligent, resource-gathering agent simply figure out the futility of its prime directive and abort with some kind of error? Surely it would realize it exists in a universe of limited resources and that it had been given an absurd objective. I mean, maybe it's controlled by some sort of "while resources exist, consume resources" loop that is beyond its free will to break out of--but if so, should it be considered an "agent"?

Contra humans, who for the moment are electing to consume themselves to extinction, if anything resource consumer AIs would be comparatively benign.

Comment by peterward on Does utilitarianism "require" extreme self sacrifice? If not why do people commonly say it does? · 2014-12-11T03:41:23.432Z · LW · GW

Isn't a "boolean" right/wrong answer exactly what utilitarianism promises in the marketing literature? Or, more precisely doesn't it promise to select for us the right choice among collection of alternatives? If the best outcomes can be ranked--by global goodness, or whatever standard--then logically there is a winner or set of winners which one may, without guilt, indifferently choose from.

Comment by peterward on When should an Effective Altruist be vegetarian? · 2014-11-26T03:11:22.773Z · LW · GW

I personally think there's not a lot of hope for animals as long as humans can't sort out their own mess. On the other hand, I don't think there is much hope for humanity as long as altruism stands in for actually taking responsibility. The very social system that puts $5 in our pockets to donate creates those who depend on our charity.

Comment by peterward on Expansion on A Previous Cost-Benefit Analysis of Vaccinating Healthy Adults Against Flu · 2014-11-13T02:48:31.449Z · LW · GW

Probably the time wasted on the cost/benefit analysis was more costly--all told--than either branch of the flow chart. Having said that, I suspect the real objective of these exercises is quite different than the ostensible one.

Comment by peterward on First(?) Rationalist elected to state government · 2014-11-12T03:37:29.077Z · LW · GW

It also takes no shortage of conceit to imagine one knows better than the majority of people. Lots of individuals flit between business and politics--GHW Bush is a major owner of a gold mine where I'm from.* But an honest person isn't going to go into politics, because they understand the fundamental lie doing so requires.

*In fact, I'd wager the two are strongly correlated--though I'm not privy to whatever correlation data you are.

Comment by peterward on question: the 40 hour work week vs Silicon Valley? · 2014-10-29T01:54:04.440Z · LW · GW

Probably has something to do with the American work morality--we apply it with a zealousness any religion can only weep in envy of. We believe/have been brainwashed into believing work is what we were born to do. As to how much we should do: I'm not sure this is a question for psychological studies so much as a question of how much (and of what kind of) work we actually want to do. It's like asking how many hours one should spend cleaning their house; one balances a cleanliness level one can live with against time one would rather spend doing something else.

Comment by peterward on Weird Alliances · 2014-10-29T01:32:47.972Z · LW · GW

Might the apparent weird alliance not be a failure to accurately separate the substantive from the superficial? It could be the New Ager and the biohacker are driven by the same psychological imperative; each just dresses it a little differently. By even classifying their alliance as "weird", we are jumping the gun on what we are entitled to take for granted. I.e., we lack even the understanding to say what is weird and what isn't.

Comment by peterward on [meta] Policy for dealing with users suspected/guilty of mass-downvote harassment? · 2014-06-09T02:19:42.826Z · LW · GW

What's the point of the up/down votes in the first place? If the object is reducing bias, doesn't making commenting a popularity contest run counter to this purpose?

Comment by peterward on Mathematics as a lossy compression algorithm gone wild · 2014-06-09T01:13:09.988Z · LW · GW

All analogies are suspect, but if I had to choose one I'd say physics' theories--at best--are, if anything, like code that returns the Fibonacci sequence through a specified range. The theories give us a formula we can use to make certain predictions, in some cases with arbitrary precision. Video, losslessly or lossily compressed, is still video. Whereas

fib n = take n fiblist where fiblist = 0:1:(zipWith (+) fiblist (tail fiblist))

is not a bag holding the entire Fibonacci sequence, waiting for us to compress it so we can look at a slightly more pixelated version of the actual piece.
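(For what it's worth, loading that one-liner into GHCi, fib 10 evaluates to [0,1,1,2,3,5,8,13,21,34]--the terms are generated on demand; there is no stored sequence anywhere waiting to be compressed.)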

Also, I don't think it makes sense to say math is part of nature (except in the sense everything is part of nature), though it may be that math is a psychological analogy to some feature of nature--like vision is an analogy to part of the EM spectrum. It would be a strange coincidence otherwise, considering how useful math is in helping us make certain predictions. At the same time, many features of nature are utterly unpredictable, so either math only images select parts--we can't see x-rays--or we haven't fully understood how to use mathematics yet.

Incidentally, I think it is true that math--all our secular, materialist pretense aside--is still widely felt to have magical properties. In this regard Descartes' god is still with us.

Comment by peterward on Don't rely on the system to guarantee you life satisfaction · 2014-02-21T20:45:29.173Z · LW · GW

Were it not the case that teachers are often the biggest bullies. On the contrary, IMO, it is the excessively authoritarian, prison-like model school follows that generates bullies.

Comment by peterward on Does the simulation argument even need simulations? · 2013-10-12T16:56:29.888Z · LW · GW

> Haskell (probably the language most likely to be used for a universe simulation, at least at present technology levels) follows lazy evaluation: a value is not calculated unless it is used.

In that case, why does the simulation need to be running all the time? Wouldn't one just ask the fancy, lambda-derived software to render whatever specific event one wanted to see?
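To make that concrete, here is a minimal toy sketch (all names are hypothetical; this has nothing to do with how an actual simulator would be written): treat the "universe" as an infinite, lazily built list of states, and "rendering" an event as indexing into it.

-- Toy sketch (hypothetical names): the universe as an infinite,
-- lazily constructed history of states, one per time step.
universe :: [Integer]
universe = iterate step 0            -- nothing here is computed yet
  where step s = 2 * s + 1           -- stand-in for "the laws of physics"

-- "Render" one specific event: forces only the states up to time t.
render :: Int -> Integer
render t = universe !! t

main :: IO ()
main = print (render 20)             -- evaluates ~20 steps, never the whole history

Asking for render 20 forces only the first twenty-odd states; the rest of the list stays an unevaluated thunk. That is roughly the sense in which a lazy simulation wouldn't need to be "running all the time".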

If on the other hand whole_universe_from_time_immemorial() needs to execute every time--which of course assumes a loophole gets found to infinitely add information to the host universe--then presumably every possible argument (which includes the program's own code, itself a constituent of the universe being simulated) would be needed by the function anyway, so why not strict evaluation?

And both of these cases still assume we handle time in a common-sense fashion. According to relativity, time is intertwined with the other dimensions, and these dimensions in turn are an artifact of our particular universe, distinctive characteristics created at the Big Bang along with everything else. Therefore, it seems likely give_me_the_whole_universe() would have to execute everything at once--more precisely, would have to exist outside of time--to accurately simulate the universe (or simulation thereof) we observe. Even functional programming has to carry out steps one after the other, requiring a universe with a time dimension, even if the logic to this order is different from that of traditional imperative paradigms.

Comment by peterward on Learning programming: so I've learned the basics of Python, what next? · 2013-06-18T03:28:03.424Z · LW · GW

I'm in a similar boat; I also started with Python. Python is intuitive and flexible, which makes it easy to learn but also, in a sense, easy to avoid understanding how a language actually works. In addition I'm now learning Java and OCaml.

Java isn't a pretty language, but it's widely used and a relatively easy transition from Python. It also, I find, makes the philosophy behind object-oriented programming much more explicit, forcing the developer to create objects from scratch even to accomplish basic tasks.

OCaml is useful because of the level of discipline it imposes on the programmer, e.g. not being able to mix an integer with a floating point, or even--strictly speaking--being able to convert one to the other. It forces one to get it right the first time, contra Python's more anything-goes, fix-it-in-debugging approach. It also has features like lazy evaluation and functional programming, which Python supports not at all in the case of the former (as far as I know) and only as a kind of add-on in the case of the latter. Even if you never need or want these features, experience with them goes some way toward really understanding what programs fundamentally are.

Comment by peterward on [LINK]s: Who says Watson is only a narrow AI? · 2013-05-22T03:20:37.896Z · LW · GW

Watson is also backed by a huge corporation, which makes it easier to surmount obstacles like "but doctors don't like competition."

On the other hand being a huge corporation makes it harder to surmount "relying on marketing hype to inflate the value-added of the product."

At any rate, the company I work for relies heavily on Cognos and the metrics there seem pretty arbitrary--hocus pocus to conjure simple numbers so directors can pretend they're making informed decisions and not operating on blind guesswork and vanity... and to rationalize firings, skimping on raises, additional bureaucracy and other unhappy decisions.*

*Come to think of it, "intelligence" or not, Cognos does emulate Homo sapiens psychology to a high degree of approximation.

Comment by peterward on How should negative externalities be handled? (Warning: politics) · 2013-05-09T01:11:28.622Z · LW · GW

It always seemed to me "externality" was just a euphemism to cover up the fact that capitalist enterprise requires massive state support (and planning)--not a handout here or there--to function at all. The US is really kind of the oddball in that we pretend this isn't the case, dressing up subsidy as defense spending or whatever. In Japan, e.g., they just take your money and give it straight to Toyota without all the pretense. At any rate, anyone who opposes central planning and "big government" also opposes capitalism in its extant form.

Comment by peterward on Social intelligence, education, & the workplace · 2013-05-04T05:07:57.129Z · LW · GW

My point was hypothetical. I'm skeptical a correlation actually exists--damned lies and all--but that's beside the point. My point is that a society that is into boiling complex, difficult-to-define concepts like intelligence down to a simple metric is liable to have lots of other analogous, oversimplified metrics that are known, if not to coworkers, then to teachers and whoever else makes the decisions. And I'd wager people who do well on tests are apt to be the same ones who get high marks on Cognos reports--i.e., the same prejudices affect what's deemed valuable for both.

As far as our actual society goes, there is only partial truth to this. We are metric-obsessed, of course--but nepotism and, more than anything else, the circumstances one was born into probably play the biggest role apropos success as conventionally defined.

Comment by peterward on Social intelligence, education, & the workplace · 2013-05-03T03:10:32.710Z · LW · GW

Let's say IQ tests do correlate with success (as measured by conventional standards). What would that prove? That a society that values high IQ rewards people with high IQs. The relevant question is: is IQ a valid measure of intelligence? Well, good luck defining intelligence in a scientifically meaningful way.

"Social intelligence", oh boy... At this point we're just giving common sense wisdoms--flattery gets you everywhere/the socially adept rise higher in social contexts etc--a lacquer of scientistic jargon.

Comment by peterward on On private marriage contracts · 2013-01-14T23:45:05.598Z · LW · GW

Several thoughts:

a) Isn't the solution to qualify the "libertarian argument" by limiting its scope to "any terms that don't break the law"? (Of course "libertarian" is a poor adjective choice, since a legal contract very much relies on a powerful state backing the enforcement of any breach to mean anything--the concept of a libertarian contract is an oxymoron.)

b) What do suspected ulterior motives on the part of those advancing the "libertarian argument" or the fact that sincere libertarians are a fringe minority have to do with the argument's logical validity?

c) In reality, in the case of marriage, the state isn't merely a neutral enforcer but a party to the contract as well as the contract's enforcer. That is to say, the married couple's rights and responsibilities with respect to the state are modified by the contract. In particular, the way they are taxed changes; so may citizenship or residency status. And it also affects a couple's relationship to fourth parties--e.g., if Bob is married to Ron, Ron may, in some cases, be held liable for Bob's debts.

Comment by peterward on Humans are not automatically strategic · 2010-09-10T05:39:44.928Z · LW · GW

I think the term "abstract reasoning" is being conflated with acting on good or bad information (among other things). E.g., in most cases, one basically has to take it on faith that ice cream is good or bad. And since most people aren't in a position to rationally make a confident choice re: the examples the author provides, or comparable ones that could be imagined, agnosticism would seem the only rational alternative.*

More generally, I think a lot of these problems stem from radically defective education (if people aren't merely mostly morons as the author implies at one point: "Perhaps 5% of the population has enough abstract reasoning skill to verbally understand that the above heuristics would be useful once these heuristics are pointed out.")**. We don't get experience from a young age in figuring things out for ourselves. Instead we are merely told what to believe and not to believe (based on "respect for authority")--a recipe for making terrible decisions in later life if ever there was one.

Finally, I think the author is lumping together a lot of different problems that seemingly shouldn't be. In one case the problem may be "strategy", in another ideology... ignorance, laziness, etc. Apart from the fact that one's stated goal is often not the real goal at all. One really needs, I think, to do a lot more work examining actual cases before attempting to pontificate on the matter. As far as I can see, almost no actual work has been done... much like "postulating what one wants..."

*Of course the implication is we don't eat ice cream to be healthy on the basis of expert claims. But this raises all kinds of further questions, invoking the application of more "abstract reasoning" before we can decide whether to trust these experts (if we are really trying to be rational, that is).

**You're telling me the 5% figure wasn't pulled out of someone's ass--please!

Comment by peterward on Fight Zero-Sum Bias · 2010-08-04T02:39:17.835Z · LW · GW

It depends on what one assumes the motives for war are. If they are economic then I think a case can be made everyone ends up worse off. But if power is at stake, then war can indeed leave the nominal victor better off (from the perspective of motive).

By the way, attempts to characterize human psychology based on what life was like in the Savanna (or whatever environment humans are supposed to be designed by Darwinian forces for) need serious qualification, at best. Speaking metaphorically, evolution is an accident; where "successful", a fortuitous coincidence. In some cases an organism ends up with a set of traits that work out for it in a given environment and it lives long enough to reproduce (even if living in a great deal of pain). Obviously the given environment will impose certain limits, and these limits may lead to certain "modifications" if not extinction. But the assumption that an organism well designed (well adapted, if one prefers secular terminology) for environment X is therefore automatically poorly (or less well) designed for environment Y, where X precedes Y, is misleading. Logically, we may be better adapted to our present environment than to any previous one we've inhabited (and one can come up with imaginary environments far superior to any we've experienced)--it's a question we can only answer by looking at X and Y and the organism's traits very carefully, and then perhaps only with a great deal of uncertainty. No one talks about the hand being well adapted to the Savanna and ill adapted to the modern city, yet analogous arguments re: psychology crop up constantly--esp. re: politics and economics, where extreme irrational prejudices operate.

Comment by peterward on Metaphilosophical Mysteries · 2010-08-04T01:06:09.382Z · LW · GW

I agree with the general argument. I think (some) philosophy is an immature science, or predecessor to a science, and some is in reference to how to do things better, therefore subject to less stringent, but not fundamentally different, standards than science--political philosophy, say (assuming, counterfactually, political thinking were remotely rational). And of course a lot of philosophy is just nonsense--probably most of it. But economics can hardly be called a science. If anything, the "field" has experienced retrograde evolution since it stopped being part of philosophy.