Comment by miller on Open thread, May 22 - May 28, 2017 · 2017-05-25T06:20:58.454Z · score: 0 (0 votes) · LW · GW

Prediction is intelligence. Why is there not more discussion about stock picks here? Is it low status? Does everyone believe in strong forms of the efficient market hypothesis?

(edited -- curious where it goes without leading the witness)

Comment by miller on What's the most annoying part of your life/job? · 2016-11-01T04:58:00.953Z · score: 0 (0 votes) · LW · GW

Finding trustworthy reviews online. I've been travelling a lot, and hotel reviews require some intelligence to assess. E.g. someone who says 'the lady at the front desk was rude to me, and they had bed bugs'... well, that basically means they felt insulted by the person at the front desk, and the bed bug claim is probably just the worst thing they could imagine saying.

Comment by miller on What's the most annoying part of your life/job? · 2016-11-01T04:48:02.674Z · score: 0 (0 votes) · LW · GW

I want a better way to eliminate the hindrances to having productive relationships with people I would respect, if I could find them.

Comment by miller on Scientists Create AI Program That Can Predict Human Rights Trials With 79 Percent Accuracy · 2016-11-01T04:35:21.834Z · score: 0 (0 votes) · LW · GW

I'm glad that this article makes efforts to assure us that lawyers continue to have job safety. It would be horrible to lose those high paying jobs to a superior and near-free alternative.

Comment by miller on Nick Bostrom's TED talk on Superintelligence is now online · 2015-05-07T00:23:47.979Z · score: 0 (0 votes) · LW · GW

I wouldn't have been able to guess the date this speech was given. The major outline seems 10 years old.

Comment by miller on How can I spend money to improve my life? · 2014-02-03T03:10:13.681Z · score: 1 (1 votes) · LW · GW

Theft? Inferior Service?

I'm having a hard time guessing what this could be that you couldn't just look for someone with better references (or spend a bit more).

Comment by miller on RIP Doug Engelbart · 2013-07-08T06:11:00.922Z · score: 0 (0 votes) · LW · GW

My wireless mouse is driving me fucking nuts with its stuttering randomly across the screen.

Comment by miller on What jobs are safe in an automated future? · 2013-02-17T07:23:10.725Z · score: 0 (0 votes) · LW · GW

I'm surprised posts like this are not more commonly discussed around here.

Comment by miller on The Yudkowsky Ambition Scale · 2012-09-13T03:04:33.104Z · score: -8 (10 votes) · LW · GW

Will Newsome is somewhere between Eliezer and a recursively self-improving AI.

Comment by miller on What's your "rationalist arguing" origin story? · 2012-09-03T07:27:26.599Z · score: 0 (2 votes) · LW · GW

Arguing against god(s) at around 9 years of age.

Comment by miller on Luke is doing an AMA on Reddit · 2012-08-17T02:41:28.776Z · score: -1 (1 votes) · LW · GW

You could probably mash any two buzzwords together, though. How about quantum rationality?

Comment by miller on Rationality Quotes July 2012 · 2012-07-05T19:10:38.888Z · score: 4 (4 votes) · LW · GW

I wonder if he let his teammates know this at the time. They'd be unlikely to approve, and then what would he do? I'd wager this was more about creating drama around him and his team than about studying the opponent. I've done this kind of thing in online multiplayer contexts, and the feedback you receive is weighted substantially more toward your own team than the opponent's.

Comment by miller on Glenn Beck discusses the Singularity, cites SI researchers · 2012-06-13T07:59:04.225Z · score: 17 (17 votes) · LW · GW

That's on my list of things I didn't expect to see today.

Comment by miller on What are you working on? June 2012 · 2012-06-04T04:06:20.488Z · score: 4 (4 votes) · LW · GW

a set of tools for morphological disambiguation, shallow parsing, named entity recognition, sentence alignment and such

Is that made easier by the fact that in Hungarian they prefix each word with its type?

Comment by miller on This post is for sacrificing my credibility! · 2012-06-04T03:39:29.826Z · score: 2 (4 votes) · LW · GW

I'm going with this commenter being Will. What do I win?

Comment by miller on What are you working on? June 2012 · 2012-06-03T18:29:44.410Z · score: 0 (2 votes) · LW · GW

recaptures a (badly obfuscated and possibly overfit) variant of it.

How do you overfit Kepler's law?

edit: Retracted. Looking at the actual link, I see the result wasn't just obfuscated but wrong, and the manner in which it's wrong can of course overfit (which matches the results).

Comment by miller on What are you working on? June 2012 · 2012-06-03T18:25:57.671Z · score: 1 (3 votes) · LW · GW

Using a high-powered black-box technique to regress a one-dimensional continuous outcome against a one-dimensional continuous predictor seems misguided.

I don't get this. You could have a rather complicated generator for this data set. A simple regression would imply the data points are independent, but the value at time T may have (and likely has) a relation to the value at T-3. So it seems a good problem to me.
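The lagged-dependence point can be sketched in a few lines. This is a minimal illustration, not the thread's actual data or method; the lag of 3, the 0.8 coefficient, and the noise level are all made-up choices:

```python
import numpy as np

# Synthetic series where each value depends on the value 3 steps back.
# The lag of 3 and the 0.8 coefficient are illustrative assumptions.
rng = np.random.default_rng(0)
n = 500
y = np.zeros(n)
for t in range(3, n):
    y[t] = 0.8 * y[t - 3] + rng.normal(scale=0.1)

# An independence-assuming fit ignores this structure; regressing y[t]
# on the lagged value y[t-3] with ordinary least squares recovers it.
X = np.column_stack([np.ones(n - 3), y[:-3]])
coef, *_ = np.linalg.lstsq(X, y[3:], rcond=None)
intercept, slope = coef
print(round(slope, 2))  # close to the true 0.8
```

A plain regression of the value against time would miss this entirely, which is why a one-predictor black box isn't obviously misguided here.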

Comment by miller on Low Hanging Fruit in Computer Hardware · 2012-06-02T18:13:14.775Z · score: 0 (0 votes) · LW · GW

I had an $80 Logitech keyboard (the illuminated, short-stroke, notebook-style variety), and when it began to deteriorate I swapped it for a $10 Walmart special, a slightly curved Microsoft one. I had been playing around on a typing-speed site and was surprised to find that on the 5th attempt I beat my previous record with the new keyboard.

If I had a variety of keyboards at my disposal, I think it would be an interesting exercise to test them in this fashion.

Comment by miller on This post is for sacrificing my credibility! · 2012-06-02T17:59:27.419Z · score: 1 (1 votes) · LW · GW

I frequently find Will's contributions obscurantist.

The same word came to mind, and it's common to his history of interactions, so seeing it here means I ascribe it to him rather than to the logic of whatever underlying purpose he may have on this occasion.

Comment by miller on This post is for sacrificing my credibility! · 2012-06-02T17:54:53.885Z · score: 6 (8 votes) · LW · GW

If your goal is to lower your credibility, why do that in the context of talking about credibility?

Comment by miller on Depression as a defense mechanism against slavery · 2012-05-19T17:07:27.432Z · score: 11 (13 votes) · LW · GW

Rank Theory of Depression

Comment by miller on Food4Me - personalised nutrition initiative · 2012-05-16T02:04:44.211Z · score: 0 (0 votes) · LW · GW

I believe I have a few results of this nature in my 23andMe profile but, like most results there, they indicate e.g. that I might gain an extra 0.5 pounds compared to average on a high-fat diet.

I got a kick when I logged in there and it said something to the effect of 'see how your genes affect your weight', and after entering height, age, and weight it told me that my genes were responsible for 2 lb (whatever that meant).

It does also note lactose tolerance, alcohol and caffeine enzymes, coeliac disease risk, etc.

Comment by miller on Neuroimaging as alternative/supplement to cryonics? · 2012-05-13T03:43:01.970Z · score: 5 (5 votes) · LW · GW

you periodically take neuroimaging scans of your brain and save them to multiple backup locations (10^10 bits is only about 1 gigabyte)

I think I understand, but I'm lost as to why that 10^10 is showing up here. Wouldn't it be whatever the scan happens to be, rather than a reference to the compressed size of a human's unique experiences? We might plausibly have a 10^18-bit scan that is detailed in the wrong ways (like carrying 1024 bits per voxel of color-channel info :p).

eta: In case it's not clear, I can't actually help you answer the question of just how useful a scan is.

Comment by miller on Neuroimaging as alternative/supplement to cryonics? · 2012-05-13T02:49:22.395Z · score: 3 (3 votes) · LW · GW

Ok, so I'm presuming that an extremely fine-grained scan stored with naive compression is massively more than 10^14 synapse-bits. In order to store all that now at the information-theoretic minimum, don't we need some kind of incredibly awesome compression algorithm NOW that we simply don't have?
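A quick back-of-the-envelope on the numbers in this subthread. The 10^18 figure for a naive fine-grained scan is just an assumed order of magnitude from the sibling comment, not an established number:

```python
# Order-of-magnitude figures from the thread, all rough guesses.
scan_bits = 10**10          # the "about 1 gigabyte" figure quoted above
naive_scan_bits = 10**18    # assumed size of an extremely fine-grained raw scan
synapse_bits = 10**14       # rough synapse-scale information estimate

print(scan_bits / 8 / 1e9)             # 1.25 gigabytes, so "about 1 GB" checks out
print(naive_scan_bits / synapse_bits)  # 10000.0, i.e. ~10^4x compression needed
```

Under these assumptions, the gap between a naive scan and the synapse-scale estimate is around four orders of magnitude, which is the compression that would have to come from somewhere.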

Comment by miller on AGI Quotes · 2011-11-03T05:42:06.070Z · score: 1 (1 votes) · LW · GW

Whether the AI loves -- or hates, you cannot fathom, but plans it has indeed for your atoms.

Comment by miller on Machine learning and unintended consequences · 2011-09-23T06:16:44.990Z · score: 3 (5 votes) · LW · GW

this one:

http://lesswrong.com/lw/3gv/statistical_prediction_rules_outperform_expert/

When based on the same evidence, the predictions of SPRs are at least as reliable as, and are typically more reliable than, the predictions of human experts for problems of social prediction.

Hmm yes, 'same evidence'.

Comment by miller on Machine learning and unintended consequences · 2011-09-23T04:25:23.950Z · score: -1 (9 votes) · LW · GW

I'm reminded of one of your early naively breathless articles here on the value of mid-80s and prior expert systems.

Comment by miller on Dubstep and algorithmic information theory · 2011-09-11T05:50:45.163Z · score: 2 (2 votes) · LW · GW

All of your pretensions aside that's a pretty slick link.

Comment by miller on My favorite popular scientific self-help books · 2011-09-11T03:17:27.801Z · score: 3 (3 votes) · LW · GW

I read it. Luke's article here was more or less a transcription of the more interesting parts. The author essentially agreed. So you need 30 minutes. Set your pomodoro.

Comment by miller on Help Fund Lukeprog at SIAI · 2011-08-25T04:13:24.634Z · score: 6 (10 votes) · LW · GW

Because "signing" comments is not customary here, doing so signals a certain aloofness or distance from the community

No. I am very confident the intention was to signal that Luke was not being emotionally affected by the intense criticism, for the purpose of appearing to be leader material, which is quite different from aloofness from the community.

It's not a convincing signal, primarily because its idiosyncrasy highlights it for analysis, but I still think the above holds.

Comment by miller on Help Fund Lukeprog at SIAI · 2011-08-25T03:45:29.838Z · score: 5 (5 votes) · LW · GW

Can we be forgiving and assume that multiple anecdotes fail because they have a consistent bias related to how they are obtained?

Comment by miller on exists(max(performance(pay))) · 2011-07-29T22:52:32.629Z · score: 3 (3 votes) · LW · GW

So you give me a firm denial, and then you edit out the first sentence which made it clear you were referencing contemporary politics, and clean up other sloppiness. I'll just move on.

Comment by miller on exists(max(performance(pay))) · 2011-07-29T20:32:17.256Z · score: 0 (6 votes) · LW · GW

I take it you just felt like ranting.

Comment by miller on Locating emotions · 2011-07-27T22:40:51.786Z · score: 1 (1 votes) · LW · GW

Sadness apparently leaked on the floor.

Comment by miller on Should rationalists put much thought into tipping and/or voting? · 2011-07-20T01:09:19.244Z · score: 3 (3 votes) · LW · GW

Oh yeah, it's full of them. They are the kinds of things that make you say 'sure, that makes sense', 'oh, I've seen that used before', and 'man, that's douche-y'. But I suspect they are all generally true and effective.

The book is: "Roger Dawson's Secrets of Power Negotiating".

I had a friend recently tell me that their company bought a license for a platform operating system for $75k, when the initial asking price was $750k. Somewhere between those prices is a lot of value to be captured by negotiating. It makes the engineers' salaries a relative trifle.
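The arithmetic behind that comparison is stark. As a sketch, using the prices from the anecdote above (the hours spent negotiating are a pure guess on my part):

```python
# Savings per hour of negotiation, using the license anecdote's prices.
asking_price = 750_000
final_price = 75_000
hours_spent = 40  # assumed: a full work-week of back-and-forth

savings_per_hour = (asking_price - final_price) / hours_spent
print(savings_per_hour)  # 16875.0 dollars per hour
```

Even if the negotiation dragged on ten times longer, the effective rate would still dwarf a typical engineering wage.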

Comment by miller on The limits of introspection · 2011-07-17T00:08:09.477Z · score: 0 (0 votes) · LW · GW

Strikes me as a behaviorist -> cognitivist paradigm shift. Scientists just got tired of the old way (or more specifically, it simply stopped being new). That'd be my armchair guess.

edit: Someone better qualified should answer that. I'm not even sure that's behaviorism.

Comment by miller on Ego syntonic thoughts and values · 2011-07-15T05:22:43.829Z · score: 13 (13 votes) · LW · GW

I am going to take the unflattering guess that your expectations for yourself are generally unrealistic, and that you are unwilling to face this fact. This is not a conscious thing but an emotional and subconscious expectation trained into you during childhood (possibly). Quite simply, hard work and small incremental gains are beneath you. Failures on inconsequential steps are unacceptable. If only a simple solution were found, the key would turn, and an uber-awesome-individual emerge.

Many of the subjects addressed on this very website are grandiose in nature. By involving yourself, the signal is that you can participate in them: research topics from the deepest corners of academia, immortality, fate of humanity, superior general problem solving on all subjects, etc. We flock to them because we recognize they are important, and we flatter ourselves to recognize they are important, and yet our contributions to them are negligible. Sure we can admit we are horrible at decision theory or quantum mechanics or any specific item, but there must be something that can be found that will demonstrate the grand plan that features us as the hero. Even failing at quantum mechanics is better than the guy that thinks it's a magazine that sits next to Motor Trend in the garage lobby, right?

This is narcissism-spectrum stuff and very ego syntonic. It's my new hammer and everything looks like nails, so I can easily be wrong. The (unconscious) decision is ultimately between living in a false fantasy of grandeur or living in a real world where you are one of many: striving for marginal gains, and doomed to grow old and die.

The guy that wrote this is mostly describing himself, is controversial, and is substantially more screwed up than almost anyone, but tell me if it rings any bells. He goes on elsewhere to differentiate cerebral from physical varieties of narcissism. It's a word that kind of needs tabooing, but the underlying symptom is vehement subconscious guarding of a false fantasy persona. We aren't secretly Harry Potter.

I notice that you don't mention other people in any of your solutions (except #9 where you are causing an attention fueling ruckus). Why not?

Comment by miller on Should rationalists put much thought into tipping and/or voting? · 2011-07-14T04:15:32.485Z · score: 5 (5 votes) · LW · GW

I was thumbing through a negotiating book and it made the rather stunning and clearly true observation that the few minutes/hours it takes to talk someone down a few extra bucks probably pays substantially better than any wage rate you'll ever earn. Many orders of magnitude, on occasion. This is particularly true if you are leveraging the buying/selling power of a corporation, rather than just haggling over a trinket in Jamaica.

Comment by miller on Nevada Passes Law Authorizing Driverless Cars · 2011-06-28T15:47:10.855Z · score: 2 (2 votes) · LW · GW

Yeah, Hastings was fond of saying 'That's why we called it NETflix, not DVDs-by-mail'. Although I think even in the late 90s there were some weak attempts at video-on-demand over the web, so the vision wasn't nearly as far ahead as I think it would be in Zipcar's case. One major problem with the analogy is that the capital investment to replace cars is so ridiculously enormous that it's difficult to imagine one company capturing a large chunk of it.

The precise details of how driverless cars come to be used will be fascinating. Urban or rural first? Taxi replacement or ownership first? Will there be restricted areas? Who are the major players? Does it kill existing mass transit (I think so)? What will be the dominant fueling model? What will NYC do with the subway (make it a high-speed expressway for the cars, perhaps)? Will Webvan make a comeback (snicker)?

Comment by miller on The Kolmogorov complexity of a superintelligence · 2011-06-26T18:01:46.465Z · score: 1 (1 votes) · LW · GW

Interesting. How does the program determine hard questions (and their answers) without qualifying as generating them itself? That is, the part about enumerating other programs seems superfluous.

[Edit added seconds later] Ok, I see perhaps it could ask something like 'predict what's gonna happen 10 seconds from now', and then wait to see if the prediction is correct after the universe runs for that long. In that case, the parent program isn't computing the answer itself.

Comment by miller on new Bright Eyes (Conor Oberst) single 'Singularity' · 2011-06-24T00:10:35.779Z · score: 0 (0 votes) · LW · GW

Well, at least they aren't practicing screaming until exhaustion yet.

Comment by miller on Nevada Passes Law Authorizing Driverless Cars · 2011-06-24T00:04:16.370Z · score: 5 (5 votes) · LW · GW

I think driverless cars will be one of the most fantastic changes of the next 20 years. The benefits are just too many. My crazy prediction is that Zipcar will end up being a leader in deployment, making a transition akin to Netflix's DVD-to-online one, the principal advantage being possession of the right kind of customer relationship.

Comment by miller on existential-risk.org by Nick Bostrom · 2011-06-21T03:05:31.505Z · score: 7 (7 votes) · LW · GW

I'd wager most people wonder wtf .org is all about and why it's not a .com like all the others. But then again, those people are not the ones who are gonna wind up at the site. So I find it most likely you two are just imagining two different sets of 'most people'.

Comment by miller on 1-2pm is for ??? · 2011-06-16T10:04:27.263Z · score: 4 (4 votes) · LW · GW

What is 1-2pm?

It is the middle of the afternoon. You are likely to be eaten by a Graehl.

Comment by miller on Rational Humanist Music · 2011-06-13T21:16:43.410Z · score: 6 (6 votes) · LW · GW

Hmm, the Sex Pistols vis-a-vis Anarchy in the UK: 'I am an instrumental rationalist! I don't know what I want, but I know how to get it!'

Comment by miller on Resetting Gandhi-Einstein · 2011-06-13T11:23:58.906Z · score: 2 (4 votes) · LW · GW

Sure, then all we need is good regulators to ensure everyone hobbles their extremely useful AI in this manner.

Unfortunately this topic is impossible to get traction on. We are probably better off debating which political party sucks more (Hint: it starts with a consonant).

Comment by miller on Not for the Sake of Pleasure Alone · 2011-06-12T09:41:38.804Z · score: 1 (3 votes) · LW · GW

Sounds like a decent methodology to me.

Comment by miller on Not for the Sake of Pleasure Alone · 2011-06-12T09:16:47.358Z · score: 8 (8 votes) · LW · GW

This sounds to me like a word game. It depends on what the initial intention for 'pleasure' is. If you say the device gives 'maximal pleasure', meaning to point at a cloud of good-stuffs, and then you later use a more precise meaning of pleasure that is an incomplete model of the good-stuffs, you are talking about different things.

The meaningful thought experiment for me is whether I would use a box that maximized pleasure/wanting/desire/happiness/whatever-is-going-on-at-the-best-moments-of-life while completely separating me as an actor or participant from the rest of the universe as I currently know it. In that sense of the experiment, you aren't allowed to say 'no' because of how you might feel after the machine is turned on, because then the machine is by definition failing. The argument has to be made that the current pre-machine-you does not want to become the post-machine-you, even while the post-machine-you thinks the choice was obvious.

Comment by miller on General Bitcoin discussion thread (May 2011) · 2011-06-02T05:02:39.408Z · score: 3 (3 votes) · LW · GW

Underground drug market built on Bitcoin: http://www.wired.com/threatlevel/2011/06/silkroad/

Comment by miller on Rationality Quotes: June 2011 · 2011-06-01T23:52:54.132Z · score: 58 (62 votes) · LW · GW

The megalomania of the genes does not mean that benevolence and cooperation cannot evolve, any more than the law of gravity proves that flight cannot evolve. It means only that benevolence, like flight, is a special state of affairs in need of an explanation, not something that just happens.

  • Pinker, The Blank Slate