AALWA: Ask any LessWronger anything

post by Will_Newsome · 2014-01-12T02:18:04.159Z · LW · GW · Legacy · 633 comments

If you want people to ask you stuff reply to this post with a comment to that effect.

More accurately, ask any participating LessWronger anything that is in the category of questions they indicate they would answer.

If you want to talk about this post you can reply to my comment below that says "Discussion of this post goes here.", or not.

633 comments

Comments sorted by top scores.

comment by Wei Dai (Wei_Dai) · 2014-03-15T03:30:08.356Z · LW(p) · GW(p)

I've been getting an increasing number of interview requests from reporters and book writers (stemming from my connection with Bitcoin). In the interest of being lazy, instead of doing more private interviews I figure I'd create an entry here and let them ask questions publicly, so I can avoid having to answer redundant questions. I'm also open to answering any other questions of LW interest here.

In preparation for this AMA, I've updated my script for retrieving and sorting all comments and posts of a given LW user, to also allow filtering by keyword or regex. So you can go to http://www.ibiblio.org/weidai/lesswrong_user.php, enter my username "Wei_Dai", then (when the page finishes loading) enter "bitcoin" in the "filter by" box to see all of my comments/posts that mention Bitcoin.
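
For readers who want to replicate the filtering step offline, here is a minimal Python sketch of the same idea. It assumes the comments have already been retrieved into a list of dicts with hypothetical "body" and "score" fields; this is only an illustration, not the actual interface of the lesswrong_user.php script.

```python
import re

def filter_and_sort(comments, pattern):
    """Keep comments whose body matches a keyword or regex, highest score first.

    `comments` is assumed to be a list of dicts with "body" and "score" keys;
    the field names are illustrative, not the actual script's interface.
    """
    rx = re.compile(pattern, re.IGNORECASE)
    hits = [c for c in comments if rx.search(c["body"])]
    return sorted(hits, key=lambda c: c["score"], reverse=True)

# Example: everything mentioning Bitcoin, top-scored first.
example = [
    {"body": "b-money was a precursor of Bitcoin", "score": 42},
    {"body": "A comment about decision theory", "score": 17},
]
print(filter_and_sort(example, r"bitcoin"))
```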

Replies from: riceissa, Wei_Dai, gsastry, ESRogs, frizzers, Wei_Dai, frizzers, frizzers, Jayson_Virissimo, ubiubi18, Lu_Tong, 9kv, crabman, riceissa, chowfan, yannkyle, None
comment by riceissa · 2019-07-28T21:06:27.577Z · LW(p) · GW(p)

I was surprised to see, both on your website and the white paper, that you are part of Mercatoria/ICTP (although your level of involvement isn't clear based on public information). My surprise is mainly because you have a couple [LW(p) · GW(p)] of comments [LW(p) · GW(p)] on LessWrong that discuss why you have declined to join MIRI as a research associate. You have also (to my knowledge) never joined any other rationality-community or effective altruism-related organization in any capacity.

My questions are:

  1. What are the reasons you decided to join or sign on as a co-author for Mercatoria/ICTP?
  2. More generally, how do you decide which organizations to associate with? Have you considered joining other organizations, starting your own organization, or recruiting contract workers/volunteers to work on things you consider important?
Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2019-09-01T19:53:41.117Z · LW(p) · GW(p)

I seem to have missed this question when it was posted.

You have also (to my knowledge) never joined any other rationality-community or effective altruism-related organization in any capacity.

With the background that I have an independent source of income and it's costly to move my family (we're not near any major orgs) so I'd have to join in a remote capacity, I wrote down this list of pros and cons of joining an org (name redacted) that tried to recruit me recently:

Pros

  1. More access to internal discussions at X, private Google Docs, discussions at other places (due to affiliation with X), people to discuss/collaborate with.
  2. Get my ideas taken more seriously (by some) due to X affiliation
  3. Possibly make me more productive through social pressure/expectation

Cons

  1. Feeling of obligation possibly make me less productive [LW · GW]
  2. As a personal cost, social pressure to be productive feeling unpleasant
  3. Less likely to post/comment on various topics due to worry about damaging X’s reputation (a lot of X people don’t post much, maybe partly for this reason?)
  4. Get my ideas taken less seriously (by some) due to perception of bias (e.g., having a financial interest in people taking AI risk seriously)
  5. Actual bias due to having connection to X.

Also these two cons, which I just thought of now:

  1. Losing the status and other signaling effect of conspicuously donating most of my work time to x-risk reduction (such as making other people take x-risk more seriously). (I guess I could either take a zero salary or a normal salary and then donate it back, but over time people might forget or not realize that I'm doing that.)
  2. Not being able to pivot as quickly to whatever cause/topic/strategy that seems most important/tractable/neglected, as I update on new information, which seems like an important part of my comparative advantage.

What are the reasons you decided to join or sign on as a co-author for Mercatoria/ICTP?

I guess because the upside seems high and most of the above cons do not apply.

More generally, how do you decide which organizations to associate with?

Some sort of informal/intuitive cost-benefit analysis, like the above pros/cons list.

Have you considered joining other organizations, starting your own organization, or recruiting contract workers/volunteers to work on things you consider important?

I think recruiting, managing people and applying for grants are not part of my comparative advantage, so I prefer to write down my ideas and let others work on them if they agree with me that they are important. (I do worry that by "planting a flag" on some idea and then not pursuing it as vigorously as someone else who might have discovered that idea, I may be making things worse than not writing about that idea at all. So far my best guess is to keep doing what I've been doing, but I may be open to being convinced that I should do things differently.)

Replies from: Walid
comment by Wei Dai (Wei_Dai) · 2014-03-16T06:14:00.567Z · LW(p) · GW(p)

I received a PM from someone at a Portuguese newspaper who I think meant to post it publicly, so I'll respond publicly here.

You have contacted Satoshi Nakamoto. Does it seem to you to be only one person or a group of developers?

I think Satoshi is probably one person.

Does bitcoin seem cyberpunk project to you? In that case, can one expect they ever disclose identity?

Not sure what the first part of the question means. I don't expect Satoshi to voluntarily reveal his identity in the near future, but maybe he will do so eventually?

In that case, the libertarian motivation wouldn't be a risk to anyone who invest in the community? Like one this gets all formal and legal, it blow?

Don't understand this one either.

Is it important to know right now its origins? The author from the blog LikeInAMirror, who says the most probable name is Nick Szabo, argues there is a concern about risk: if Szabo/cypherpunk is the source, no risk, but it may be this bubble - a pump-and-dump scheme to enrich its original miners - or a project from the federal government to track underground transactions. What is your view on this?

I'm pretty sure it's not a pump-and-dump scheme, or a government project.

Do you also think Szabo is the most probable name?

No I don't think it's Szabo or anyone else whose name is known to me. I explained why I don't think it's Szabo to a reporter from London's Sunday Times who wrote about it in the March 2 issue. I'll try to find and quote the relevant section.

How long have you been working on your ideas of cryptocurrency? Have you used other pseudonyms online? Are you Szabo?

I worked on it from roughly 1995 to 1998. I've used pseudonyms only on rare (probably less than 10) occasions. I'm not Szabo but coincidentally we attended the same university and had the same major and graduated within a couple years of each other. Theoretically we could have seen each other on campus but I don't think we ever spoke in real life.

In your opinion, why has bitcoin succeeded?

To be honest I didn't initially expect Bitcoin to make as much impact as it has, and I'm still at a bit of a loss to explain why it has succeeded to the extent that it has. In my experience lots of promising ideas especially in the field of cryptography never get anywhere in practice. But anyway, it's probably a combination of many things. Satoshi's knowledge and skill. His choice of an essentially fixed monetary base which ensures early adopters large windfalls if Bitcoin were to become popular, and which appeals to people who distrust flexible government monetary policies. Timing of the introduction to coincide with the economic crisis. Earlier discussions of related ideas which allowed his ideas to be more readily accepted. The availability of hardware and software infrastructure for him to build upon. Probably other factors that I'm neglecting.

(Actually I'd be interested to know if anyone else has written a better explanation of Bitcoin's success. Can anyone reading this comment point me to such an explanation?)
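
To make the "essentially fixed monetary base" concrete: Bitcoin's published issuance schedule starts at a 50 BTC block subsidy and halves every 210,000 blocks, so total issuance converges to roughly 21 million coins. A minimal Python sketch of that arithmetic (ignoring the satoshi-level rounding the real protocol performs):

```python
def total_bitcoin_supply(initial_subsidy=50.0, blocks_per_halving=210_000):
    """Sum the geometric issuance schedule: a 50 BTC block subsidy halving
    every 210,000 blocks, ignoring satoshi-level rounding."""
    total, subsidy = 0.0, initial_subsidy
    while subsidy >= 1e-8:  # stop once the subsidy drops below one satoshi
        total += subsidy * blocks_per_halving
        subsidy /= 2
    return total

print(total_bitcoin_supply())  # roughly 21,000,000
```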

Finally, what do you see as the future? Wall Street has announced they will start accepting applications for bitcoin and other digital currency exchanges. How do you see this milestone?

Don't have much to say on these. Others have probably thought much more about these questions over the past months and years and are more qualified than I am to answer.

Replies from: gwern, mfreis
comment by gwern · 2014-03-16T18:08:19.784Z · LW(p) · GW(p)

No I don't think it's Szabo or anyone else whose name is known to me. I explained why I don't think it's Szabo to a reporter from London's Sunday Times who wrote about it in the March 2 issue. I'll try to find and quote the relevant section.

I had the article jailbroken recently, and the relevant parts (I hope I got it right, my version has scrambled-up text) are:

Nonetheless, the original bitcoin white paper is written in an academic style, with an index of sources at the end. I go to Wei Dai, an original cypherpunk, the proposer of a late-1990s e-currency called b-money and an early correspondent of Satoshi. When, in the first of several late-night chats, I ask him how many people would have the necessary competencies to create something like bitcoin, he tells me:

"Coming up with bitcoin required someone who, a) thought about money on a deep level, and b) learnt the tools of cryptography, c) had the idea that something like Bitcoin is possible, d) was motivated enough to develop the idea into something practical, e) was technically skilled enough to make it secure, f) had enough social skills to build and grow a community around it. The number of people who even had a), b) and c) was really small -- ie, just Nick Szabo and me -- so I'd say not many people could have done all these things."

A sudden frisson. Szabo, an American computer scientist who has also served as law professor at George Washington University, developed a system for "bit gold" between 1998 and 2005, which has been seen as a precursor to Bitcoin. Is he saying that Szabo is Satoshi? "No, I'm pretty sure it's not him." You, then? "No. When I said just Nick and me, I meant before Satoshi." So where could this person have come from? "Well, when I came up with b-money I was still in college, or just recently graduated, and Nick was at a similar age when he came up with bit gold, so I think Satoshi could be someone like that." "Someone young, with the energy for that kind of commitment?" "Yeah, someone with energy and time, and that isn't obligated to publish papers under their real name."

...I go back to Szabo's pal, Wei Dai. "Wei," I say, "the other night you said you were sure Nick Szabo wasn't Satoshi. What made you sure?" "Two reasons," he replies. "One: in Satoshi's early emails to me he was apparently unaware of Nick Szabo's ideas and talks about how bitcoin 'expands on your ideas into a complete working system' and 'it achieves nearly all the goals you set out to solve in your b-money paper'. I can't see why, if Nick was Satoshi, he would say things like that to me in private. And, two: Nick isn't known for being a C++ programmer."

Perversely, a point in Szabo's favour. But Wei forwards me the relevant emails, and it's true: Satoshi had been ignorant of Szabo's bit-gold plan until Wei mentioned it. Furthermore, a trawl through Szabo's work finds him blogging and fielding questions about bit gold on his Unenumerated blog on December 27, 2008, while Satoshi was preparing bitcoin to meet the world a week later. Why? Because Szabo didn't know about bitcoin: almost no one outside the Cryptography Mailing List did, and I can find no evidence of him ever having been there. Indeed, by 2011, the bit-gold inventor is blogging in defence of bitcoin, pointing out several improvements on the system he devised.

I actually meant to email you about this earlier, but is there any chance you could post those emails (you've made them half-public as it is, and Dustin Trammell posted his a while back) or elaborate on Nick not knowing C++?

I've been trying to defend Szabo against the accusations of being Satoshi*, but to be honest, his general secrecy has made it very hard for me to rule him out or come up with a solid defense. If, however, he doesn't even know C or C++, then that massively damages the claims he's Satoshi. (Oh, one could work around it by saying he worked with someone else who did know C/C++, but that's pretty strained and not many people seriously think Satoshi was a group.)

* on Reddit, HN, and places like http://blog.sethroberts.net/2014/03/11/nick-szabo-is-satoshi-nakamoto-the-inventor-of-bitcoin/ or https://likeinamirror.wordpress.com/2013/12/01/satoshi-nakamoto-is-probably-nick-szabo/ (my response) / http://likeinamirror.wordpress.com/2014/03/11/occams-razor-who-is-most-likely-to-be-satoshi-nakamoto/

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2014-03-17T00:02:40.933Z · LW(p) · GW(p)

I actually meant to email you about this earlier, but is there any chance you could post those emails (you've made them half-public as it is, and Dustin Trammell posted his a while back)

Sure, I have no objection to making them public myself, and I don't see anything in them that Satoshi might want to keep private, so I'll forward them to you to post on your website. (I'm too lazy to convert the emails into HTML myself.)

elaborate on Nick not knowing C++?

Sorry, you misunderstood when I said "Nick isn't known for being a C++ programmer". I didn't mean that he doesn't know C++. Given that he was a computer science major, he almost certainly does know C++ or can easily learn it. What I meant is that he is not known to have programmed much in C or C++, or known to have done any kind of programming that might have kept one's programming skills sharp enough to have implemented Bitcoin (and to do it securely to boot). If he was Satoshi I would have expected to see some evidence of his past programming efforts.

But the more important reason for me thinking Nick isn't Satoshi is the parts of Satoshi's emails to me that are quoted in the Sunday Times. Nick considers his ideas to be at least an independent invention from b-money so why would Satoshi say "expands on your ideas into a complete working system" to me, and cite b-money but not Bit Gold in his paper, if Satoshi was Nick? An additional reason that I haven't mentioned previously is that Satoshi's writings just don't read like Nick's to me.

Replies from: gwern, gwern
comment by gwern · 2014-04-01T01:26:30.363Z · LW(p) · GW(p)

so I'll forward them to you to post on your website.

Done: http://www.gwern.net/docs/2008-nakamoto

(Sorry for the delay, but a black-market was trying to blackmail me and I didn't want my writeup to go live so I was delaying everything.)

comment by gwern · 2014-03-18T01:15:38.553Z · LW(p) · GW(p)

so I'll forward them to you to post on your website.

Thanks.

I didn't mean that he doesn't know C++. Given that he was a computer science major, he almost certainly does know C++ or can easily learn it. What I meant is that he is not known to have programmed much in C or C++, or known to have done any kind of programming that might have kept one's programming skills sharp enough to have implemented Bitcoin (and to do it securely to boot). If he was Satoshi I would have expected to see some evidence of his past programming efforts.

I see. Unfortunately, this damages my defense: I can no longer say there's no evidence Szabo doesn't even know C/C++, but I have to confirm that he does. Your point about sharpness is well-taken, but the argument from silence here is very weak since Szabo hasn't posted any code ever aside from a JavaScript library, so we have no idea whether he has been keeping up with his C or not.

why would Satoshi say "expands on your ideas into a complete working system" to me, and cite b-money but not Bit Gold in his paper, if Satoshi was Nick?

Good question. I wonder if anyone ever asked Satoshi about what he thought of Bit Gold?

An additional reason that I haven't mentioned previously is that Satoshi's writings just don't read like Nick's to me.

I've seen people say the opposite! This is why I put little stock in people claiming Satoshi and $FAVORITE_CANDIDATE sound alike (especially given they're probably in the throes of confirmation bias and would read in the similarity if at all possible). Hopefully someone competent at stylometrics will at some point do an analysis.
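
For a sense of what even a crude stylometric comparison involves, here is a minimal Python sketch. It compares relative frequencies of a small set of function words using cosine similarity; the candidate snippets are stand-in placeholders, and a serious analysis would use far richer features and controls.

```python
import math
import re
from collections import Counter

# A small, fixed set of "function words"; real stylometry uses hundreds of
# features (word lengths, punctuation habits, character n-grams, etc.).
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "it",
                  "is", "was", "for", "on", "with", "as", "but", "not"]

def profile(text):
    """Relative frequencies of the function words in a text."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w in FUNCTION_WORDS)
    total = max(sum(counts.values()), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Stand-in snippets; a real comparison would load each author's full corpus.
candidate_a = "The design avoids a trusted third party, and it is robust."
candidate_b = "It was the contracts, not the coins, that mattered to them."
print(cosine(profile(candidate_a), profile(candidate_b)))
```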

Replies from: frizzers
comment by frizzers · 2014-03-21T09:12:23.833Z · LW(p) · GW(p)

I've been working hard on this in my book. (Nearly there by the way). I posted this on Like In A Mirror but put it here as well in case it doesn't get approved.

Yes, the writing styles of Szabo and Satoshi are the same.

Apart from the British spelling.

And the different punctuation habits.

And the use of British expressions like mobile phone and flat and bloody.

And Szabo’s much longer sentences.

And the fact that Szabo doesn’t make the same spelling mistakes that Satoshi does.

Ooh and the fact that Szabo’s writing has a lot more humour to it than Satoshi’s.

Szabo is one of the few people that has the breadth, depth and specificity of knowledge to achieve what Satoshi has, agreed. He is the right age, has the right background and was in the right place at the right time. He ticks a lot of the right boxes.

But confirmation bias is a dangerous thing. It blinkers.

And you need to think about the dangers your posts are creating in the life of a reclusive academic.

Satoshi is first and foremost a coder, not a writer. Szabo is a writer first and coder second. To draw any serious conclusions you need to find some examples of Szabo's C++ coding.

You also need to find some proof of Szabo's hacking (or anti-hacking) experience. Satoshi has rather a lot of this.

And you need to consider the possibility that Satoshi learnt his English on both sides of the Atlantic. And that English was not his first language. I don’t think it was.

Replies from: gwern
comment by gwern · 2014-03-21T19:03:30.362Z · LW(p) · GW(p)

Yes, the writing styles of Szabo and Satoshi are the same. Apart from the British spelling. And the different punctuation habits. And the use of British expressions like mobile phone and flat and bloody. And Szabo’s much longer sentences. And the fact that Szabo doesn’t make the same spelling mistakes that Satoshi does. Ooh and the fact that Szabo’s writing has a lot more humour to it than Satoshi’s.

Szabo has extensively studied British history for his legal and monetary theories (it's hard to miss this if you've read his essays), so I do not regard the Britishisms as a point against Szabo. It's perfectly easy to pick up Britishisms if you watch BBC programs or read The Economist or Financial Times (I do all three and as it happens, I use 'bloody' all the time in colloquial speech - a check of my IRC logs shows me using it 72 times, and at least once in my more formal writings on gwern.net, and 'mobile phone' pops up 3 or 4 times in my chat logs; yet I have spent perhaps 3 days in the UK in my life). And Satoshi is a very narrow, special-purpose pseudonymic identity which has one and only one purpose: to promote and work on Bitcoin - Bitcoin is not a very humorous subject, nor does it really lend itself to long essays (or long sentences). And I'm not sure how you could make any confident claims about spelling mistakes without having done any stylometrics, given that both Szabo and Satoshi write well and you would expect spelling mistakes to be rare by definition.

Replies from: frizzers
comment by frizzers · 2014-03-22T07:46:07.316Z · LW(p) · GW(p)

Points noted. All well made. Mine was a heated rebuttal to the Like In A Mirror post.

I could only find one spelling mistake in all Satoshi's work and a few punctuation quibbles. It's a word that is commonly spelt wrong, but that Szabo spells right. I don't want to share it here because I'm keeping it for the book.

comment by mfreis · 2014-03-16T09:10:40.548Z · LW(p) · GW(p)

Thank you so much Wei Dai.

My idea with second question was to understand if there is like an anarchist motivation around bitcoin that may have some risks in the future. I mean, if somehow when it reaches Wall Street the original developers can do anything to affect credibility.

You say you don't think it was Szabo. Have you ever tried to find out who he is? Could you share your strongest hunch and why?

Is it relevant to know who Satoshi is?

If you knew what you know today, would you have patented b-money? Do you think Bitcoin's inventors would have done the same?

Kind regards Marta

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2014-03-16T10:21:02.055Z · LW(p) · GW(p)

My idea with second question was to understand if there is like an anarchist motivation around bitcoin that may have some risks in the future.

Ok, I think I see what you're getting at. First of all, crypto-anarchy is very different from plain anarchy. We (or at least I) weren't trying to destroy government, but just create new virtual communities that aren't ruled by the threat of violence. Second I'm not sure Satoshi would even consider himself a crypto-anarchist. I think he might have been motivated more by a distrust of financial institutions and government monetary authorities and wanted to create a monetary system that didn't have to depend on such trust. All in all, I don't think there is much risk in this regard.

You say you don't think it was Szabo. Have you ever tried to find out who he is? Could you share your strongest hunch and why?

I haven't personally made any attempts to find out who he is, nor do I have any idea how. My guess is that he's not anyone who was previously active in the academic cryptography or cypherpunks communities, because otherwise he probably would have been identified by now based on his writing and coding styles.

Is it relevant to know who Satoshi is?

I think at this point it doesn't matter too much, except to satisfy people's curiosity.

If you knew what you know today, would you have patented b-money? Do you think Bitcoin's inventors would have done the same?

No, because along with a number of other reasons not to patent it, the whole point of b-money was to have a money system that governments can't control or shut down by force, so how would I be able to enforce the patent? I don't think Satoshi would have patented his ideas either, because I think he is not motivated mainly to personally make money, but to change the world and to solve an interesting technical problem. Otherwise he would have sold at least some of his mined Bitcoins in order to spend or to diversify into other investments.

Replies from: mfreis, None
comment by mfreis · 2014-03-16T11:51:11.891Z · LW(p) · GW(p)

Thank you so much Wei Dai for all the answers.

You say any other previously active member would have been identified based on their writing and coding style. That is exactly what Skye Grey says he/she is doing to match Szabo with Satoshi on the blog LikeInAMirror - he says he's 99.9% sure Szabo is Satoshi. https://likeinamirror.wordpress.com/2014/03/

Does the Dorian Nakamoto theory have any ground?

What made you think Satoshi's motivation was distrust rather than crypto-anarchy? Someone who had lost money, for instance in the Lehman Brothers bankruptcy? It was also in 2008.

Why is anonymity important to the crypto community? Just to confirm, Wei Dai is a pseudonym?

Thank you again

Replies from: Wei_Dai, gwern
comment by Wei Dai (Wei_Dai) · 2014-03-17T00:38:19.116Z · LW(p) · GW(p)

I agree with gwern's answers and will add a couple of my own.

Does the Dorian Nakamoto theory have any ground?

No, I doubt it.

Why is anonymity important to the crypto community?

  1. We think it's cool because the technology falls out of our field of research.
  2. Anonymity provides privacy and security against physical violence, and cryptographers tend to care about privacy and security.
comment by gwern · 2014-03-16T18:10:13.832Z · LW(p) · GW(p)

That is exactly what Skye Grey says he/she is doing to match Szabo with Satoshi on the blog LikeInAMirror - he says he's 99.9% sure Szabo is Satoshi. https://likeinamirror.wordpress.com/2014/03/

Grey's post is worthless. I haven't written a rebuttal to his second, but about his first post, see http://www.reddit.com/r/Bitcoin/comments/1ruluz/satoshi_nakamoto_is_probably_nick_szabo/cdr2vgu

What made you think Satoshi's motivation was distrust rather than crypto-anarchy? Someone who had lost money, for instance in the Lehman Brothers bankruptcy? It was also in 2008.

Because he said so. Haven't you done any background reading? (And how many private individuals could have lost money in Lehman Brothers anyway...)

Why is anonymity important to the crypto community?

Seriously?

Just to confirm, Wei Dai is a pseudonym?

No, it's real.

comment by [deleted] · 2015-11-01T20:52:40.055Z · LW(p) · GW(p)

Replies from: gmaxwell, Wei_Dai, VoiceOfRa
comment by gmaxwell · 2015-11-02T19:06:50.428Z · LW(p) · GW(p)

The concerns in this space go beyond personal safety, though that isn't an insignificant one. For safety, it doesn't matter what one can prove, because almost by definition anyone who is going to be dangerous is not behaving in an informed and rational way; consider the crazy person who was threatening Gwern. It's also not possible to actually prove you do not own a large number of Bitcoins -- the coins themselves are pseudonymous, and many people cannot imagine that a person would willingly part with a large amount of money (or decline to take it in the first place).

No one knows which, if any, Bitcoins are owned by the system's creator. There is a lot of speculation which is known to me to be bogus; e.g., identifying my coins as having belonged to the creator. So even if someone were to provably dispose of all their holdings, there will be people alleging other coins.

The bigger issue is that the Bitcoin system gains much of its unique value by being defined by software, by mechanical rule and not trust. In a sense, Bitcoin matters because its creator doesn't. This is a hard concept for most people, and there is a constant demand by the public to identify "the person in charge". To stand out risks being appointed Bitcoin's central banker for life, and in doing so undermine much of what Bitcoin has accomplished.

Being a "thought leader" also produces significant demands on your time which can inhibit making meaningful accomplishments.

Finally, it would be an act which couldn't be reversed.

Replies from: None, Lumifer
comment by [deleted] · 2015-11-02T21:11:23.481Z · LW(p) · GW(p)

consider the crazy person who was threatening Gwern

That's a fair point. There is some amount of personal risk intrinsic to being famous. In this specific case there is also certainly a political element involved which could shift the probabilities significantly.

It's also not possible to actually prove you do not own a large number of Bitcoins

This is also fair. I more assumed that if the most obvious large quantity were destroyed it would act to significantly dissuade rational attackers. Why not go kidnap a random early Google employee instead if you don't have significant reason to believe the inventor's wealth exceeds that scale? But yes, in any case, it's not a perfect solution.

In a sense, Bitcoin matters because its creator doesn't.

I don't see it as a required logical consequence that Bitcoin matters because the inventor is unknown. It stands on its own merit. You don't have to know or not know anything about the inventor to know if the system works.

To stand out risks being appointed Bitcoin's central banker for life, and in doing so undermine much of what Bitcoin has accomplished.

I guess you're maybe assuming there's a risk the majority would amend the protocol rules to explicitly grant the inventor this power? They could theoretically do that without their True Name being known. Or perhaps there's a more basic risk that people would weigh the inventor's opinion above all and as such the inventor and protocol would be newly subject to coercion? It doesn't seem to me like this presents a real risk to the system (although perhaps increased risk to the inventor.) I think this would assume ignorance controls a majority of the interest in the system and that it's more fragile than it appears. Please correct as necessary. I put a few words in your mouth there for the sake of advancing discussion.

Being a "thought leader" also produces significant demands on your time which can inhibit making meaningful accomplishments.

My intuition is that this may be the most significant factor from the inventor's perspective. It is certainly a valid concern.

Finally, it would be an act which couldn't be reversed.

Obviously true. Do the risks presented outweigh the potential benefits to humanity? I don't know but I think it's fair to say the identity of the creator does in fact matter-- but not necessarily to the continued functioning of Bitcoin.

comment by Lumifer · 2015-11-02T19:17:46.559Z · LW(p) · GW(p)

almost by definition anyone who is going to be dangerous is not behaving in an informed and rational way

Why do you think so?

comment by Wei Dai (Wei_Dai) · 2015-11-02T07:29:13.600Z · LW(p) · GW(p)

If the identity of the individual were confirmed it would perhaps, at a minimum, elevate their engineer/thinker status such that other ideas and pieces of work attributed to them may receive more attention (and maybe help) from many others who would perhaps not otherwise have happened upon them.

This is interesting and something I hadn't thought about. Now I'm more curious who Satoshi is and why he or she or they have decided to remain anonymous. Thanks! You might want to post your idea somewhere else too, like the Bitcoin reddit or forum, since probably not many people will get to read it here.

comment by VoiceOfRa · 2015-11-03T04:08:52.428Z · LW(p) · GW(p)

Bruce Wayne: As a man, I'm flesh and blood, I can be ignored, I can be destroyed; but as a symbol... as a symbol I can be incorruptible, I can be everlasting.

--Batman Begins

comment by gsastry · 2014-03-16T01:21:39.358Z · LW(p) · GW(p)
  1. What do you think are the most interesting philosophical problems within our grasp to be solved?
  2. Do you think that solving normative ethics won't happen until a FAI? If so, why?
  3. You argued previously that metaphilosophy and singularity strategies are fields with low hanging fruit. Do you have any examples of progress in metaphilosophy?
  4. Do you have any role models?
Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2014-03-16T05:04:06.221Z · LW(p) · GW(p)

What do you think are the most interesting philosophical problems within our grasp to be solved?

I'm not sure there are any. A big part of it is that metaphilosophy is essentially a complete blank, so we have no way of saying what counts as a correct solution to a philosophical problem, and hence no way of achieving high confidence that any particular philosophical problem has been solved, except maybe simple (and hence not very interesting) problems, where the solution is just intuitively obvious to everyone or nearly everyone. It's also been my experience that any time we seem to make real progress on some interesting philosophical problem, additional complications are revealed that we didn't foresee, which makes the problem seem even harder to solve than before the progress was made. I think we have to expect this trend to continue for a while yet.

If you instead ask what are some interesting philosophical problems that we can expect visible progress on in the near future, I'd cite decision theory and logical uncertainty, just based on how much new effort people are putting into them, and results from the recent past.

Do you think that solving normative ethics won't happen until a FAI? If so, why?

No I don't think that's necessarily true. It's possible that normative ethics, metaethics, and metaphilosophy are all solved before someone builds an FAI, especially if we can get significant intelligence enhancement to happen first. (Again, I think we need to solve metaethics and metaphilosophy first, otherwise how do we know that any proposed solution to normative ethics is actually correct?)

You argued previously that metaphilosophy and singularity strategies are fields with low hanging fruit. Do you have any examples of progress in metaphilosophy?

Unfortunately, not yet. BTW I'm not saying these are fields that definitely have low hanging fruit. I'm saying these are fields that could have low hanging fruit, based on how few people have worked in them.

Do you have any role models?

I do have some early role models. I recall wanting to be a real-life version of the fictional "Sandor Arbitration Intelligence at the Zoo" (from Vernor Vinge's novel A Fire Upon the Deep) who in the story is known for consistently writing the clearest and most insightful posts on the Net. And then there was Hal Finney who probably came closest to an actual real-life version of Sandor at the Zoo, and Tim May who besides inspiring me with his vision of cryptoanarchy was also a role model for doing early retirement from the tech industry and working on his own interests/causes.

Replies from: ESRogs, gsastry, Lumifer
comment by ESRogs · 2014-03-18T19:39:31.351Z · LW(p) · GW(p)

I recall wanting to be a real-life version of the fictional "Sandor Arbitration Intelligence at the Zoo" (from Vernor Vinge's novel A Fire Upon the Deep) who in the story is known for consistently writing the clearest and most insightful posts on the Net.

FWIW, I have always been impressed by the consistent clarity and conciseness of your LW posts. Your ratio of insights imparted to words used is very high. So, congratulations! And as an LW reader, thanks for your contributions! :)

comment by gsastry · 2014-03-24T18:53:51.682Z · LW(p) · GW(p)

Thanks. I have some followup questions :)

  1. What projects are you currently working on?/What confusing questions are you attempting to answer?
  2. Do you think that most people should be very uncertain about their values, e.g. altruism?
  3. Do you think that your views about the path to FAI are contrarian (amongst people working on FAI/AGI, e.g. you believing most of the problems are philosophical in nature)? If so, why?
  4. Where do you hang out online these days? Anywhere other than LW?

Please correct me if I've misrepresented your views.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2014-03-24T23:00:31.967Z · LW(p) · GW(p)

What projects are you currently working on?/What confusing questions are you attempting to answer?

If you go through my posts on LW, you can read most of the questions that I've been thinking about in the last few years. I don't think any of the problems that I raised have been solved so I'm still attempting to answer them. To give a general idea, these include questions in philosophy of mind, philosophy of math, decision theory, normative ethics, meta-ethics, meta-philosophy. And to give a specific example I've just been thinking about again recently: What is pain exactly (e.g., in a mathematical or algorithmic sense) and why is it bad? For example can certain simple decision algorithms be said to have pain? Is pain intrinsically bad, or just because people prefer not to be in pain?

As a side note, I don't know if it's good from a productivity perspective to jump around amongst so many different questions. It might be better to focus on just a few with the others in the back of one's mind. But now that I have so many unanswered questions that I'm all very interested in, it's hard to stay on any of them for very long. So reader beware. :)

Do you think that most people should be very uncertain about their values, e.g. altruism?

Yes, but I tend not to advertise too much that people should be less certain about their altruism, since it's hard to see how that could be good for me regardless of what my values are or ought to be. I make an exception of this for people who might be in a position to build an FAI, since if they're too confident about altruism then they're likely to be too confident about many other philosophical problems, but even then I don't stress it too much.

Do you think that your views about the path to FAI are contrarian (amongst people working on FAI/AGI, e.g. you believing most of the problems are philosophical in nature)? If so, why?

I guess there is a spectrum of concern over philosophical problems involved in building an FAI/AGI, and I'm on the far end of that spectrum. I think most people building AGI mainly want short term benefits like profits or academic fame, and do not care as much about the far reaches of time and space, in which case they'd naturally focus more on the immediate engineering issues.

Among people working on FAI, I guess they either have not thought as much about philosophical problems as I have and therefore don't have a strong sense of how difficult those problems are, or are just overconfident about their solutions. For example when I started in 1997 to think about certain seemingly minor problems about how minds that can be copied should handle probabilities (within a seemingly well-founded Bayesian philosophy), I certainly didn't foresee how difficult those problems would turn out to be. This and other similar experiences made me update my estimates of how difficult solving philosophical problems is in general.

BTW I would not describe myself as "working on FAI" since that seems to imply that I endorse the building of an FAI. I like to use "working on philosophical problems possibly relevant to FAI".

Where do you hang out online these days? Anywhere other than LW?

Pretty much just here. I do read a bunch of other blogs, but tend not to comment much elsewhere since I like having an archive of my writings for future reference, and it's too much trouble to do that if I distribute them over many different places. If I change my main online hangout in the future, I'll note that on my home page.

Replies from: NancyLebovitz, Eugine_Nier
comment by NancyLebovitz · 2014-09-11T16:30:56.986Z · LW(p) · GW(p)

What is pain exactly (e.g., in a mathematical or algorithmic sense) and why is it bad? For example can certain simple decision algorithms be said to have pain? Is pain intrinsically bad, or just because people prefer not to be in pain?

Pain isn't reliably bad, or at least some people (possibly a fairly large proportion) seek it out in some contexts. I'm including very spicy food, SMBD, deliberately reading things that make one sad and/or angry without it leading to any useful action, horror fiction, pushing one's limits for its own sake, and staying attached to losing sports teams.

I think this leads to the question of what people are trying to maximize.

comment by Eugine_Nier · 2014-03-25T03:58:40.437Z · LW(p) · GW(p)

Yes, but I tend not to advertise too much that people should be less certain about their altruism, since it's hard to see how that could be good for me regardless of what my values are or ought to be.

One issue is that an altruist has a harder time noticing if he's doing something wrong. An altruist with false beliefs is much more dangerous than an egotist with false beliefs.

comment by Lumifer · 2014-03-16T18:19:53.247Z · LW(p) · GW(p)

and Tim May

What is he doing, by the way? Wikipedia says he's still alive but he looks to be either retired or in deep cover...

comment by ESRogs · 2018-10-19T07:29:13.353Z · LW(p) · GW(p)

Is this you?

"Mercatoria uses pseudorandom assignment and locality of state to achieve arbitrary scalability and true decentralization for its payment processing and smart contracts."
Wei Dai, cofounder

https://mercatoria.io/

Replies from: ESRogs
comment by ESRogs · 2018-10-19T07:41:04.966Z · LW(p) · GW(p)

Nm, I see that it's listed on your home page in the "companies I'm involved with" section.

comment by frizzers · 2014-03-15T10:09:48.918Z · LW(p) · GW(p)

Good morning Wei,

Thank you for doing this. It seems like an excellent solution.

My name's Dominic Frisby. I'm an author from the UK, currently working on a book on Bitcoin (http://unbound.co.uk/books/bitcoin).

Here are some questions I'd like to ask.

  1. What steps, if any, did you take to coding up your b-money idea? If none, or very few, why did you go no further with it?

  2. You had some early correspondence with Satoshi. What do you think his motivation behind Bitcoin was? Was it, simply, the challenge of making something work that nobody had made work before? Was it the potential riches? Was it altruistic or political, maybe - did he want to change the world?

  3. In what ways do you think Bitcoin might change the world?

  4. How much of a bubble do you think it is?

  5. I sometimes wonder if Bitcoin was invented not so much to become the global reserve digital cash currency, but to prove to others that the technology can work. It was more gateway rather than final destination – do you have a view here?

That's more than enough to be going on with.

With kind regards

Dominic

Replies from: Wei_Dai, gwern
comment by Wei Dai (Wei_Dai) · 2014-03-15T20:34:19.057Z · LW(p) · GW(p)

1 - I didn't take any steps to code up b-money. Part of it was because b-money wasn't a complete practical design yet, but I didn't continue to work on the design because I had actually grown somewhat disillusioned with cryptoanarchy by the time I finished writing up b-money, and I didn't foresee that a system like it, once implemented, could attract so much attention and use beyond a small group of hardcore cypherpunks.

2 - It's hard for me to tell, but I'd guess that it was probably a mixture of technical challenge and wanting to change the world.

3 and 4 - Don't have much to say on these. Others have probably thought much more about these questions over the past months and years and are more qualified than I am to answer.

5 - I haven't seen any indication of this. What makes you suspect it?

Replies from: frizzers
comment by frizzers · 2014-03-16T10:10:34.124Z · LW(p) · GW(p)

Thanks Wei. Your efforts here are much appreciated and your place in heaven is assured.

In reply to your 5.

My suspicion is not based on any significant evidence. It's just a thought that emerged in my head as I've followed the story. It's a psychological thing, almost macho - people like to solve a problem that nobody else has been able to, to prove something to themselves (and others).

Also, from his comment 'we can win a major battle in the arms race and gain a new territory of freedom for several years', I infer that he didn't think it would last forever.

Anyway THANK YOU WEI for taking the time to do this.

Dominic

comment by gwern · 2014-03-16T17:56:03.380Z · LW(p) · GW(p)

I sometimes wonder if Bitcoin was invented not so much to become the global reserve digital cash currency, but to prove to others that the technology can work. It was more gateway rather than final destination

Have you read Satoshi's original emails?

Replies from: frizzers
comment by frizzers · 2014-03-17T19:50:48.035Z · LW(p) · GW(p)

about 70 million times.

Even more times than I've read the Lord of the Rings

Replies from: gwern
comment by gwern · 2014-03-17T23:05:25.002Z · LW(p) · GW(p)

I was asking a serious question.

Replies from: frizzers
comment by frizzers · 2014-03-18T08:42:23.378Z · LW(p) · GW(p)

Do you mean the ones on the cryptography mailing list or the ones to Wei Dai?

I've read them both.

Not the ones to Adam Back though

comment by Wei Dai (Wei_Dai) · 2014-03-18T18:17:57.643Z · LW(p) · GW(p)

I received this question via email earlier. Might as well answer it here as well.

In bmoney you say the PoW must have no other value. Why is that? Why wouldn't it be a good idea if it were also somehow made valuable like if perhaps protein folding could be made to fit the other required criteria?

In b-money the money creation rate is not fixed, but instead there are mechanisms that give people incentives to create the right amount of money to ensure price stability or maximize economic growth. I specified the PoW to have no other value in order to not give people an extra incentive to create money (beyond what the mechanism provides). But with Bitcoin this doesn't apply since the money creation rate is fixed. I haven't thought about this much though, so I can't say that it won't cause some other problem with Bitcoin that I'm not seeing.
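
For readers who have not seen a proof-of-work function spelled out, here is a minimal hashcash-style sketch in Python: find a nonce whose SHA-256 hash has a given number of leading zero bits. It only illustrates the mechanism being discussed; b-money and Bitcoin differ in how money creation is governed on top of it, not in this basic primitive.

```python
import hashlib

def mint(challenge: bytes, difficulty_bits: int) -> int:
    """Find a nonce such that SHA-256(challenge || nonce) starts with
    `difficulty_bits` zero bits -- a generic hashcash-style proof of work."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(challenge: bytes, nonce: int, difficulty_bits: int) -> bool:
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

nonce = mint(b"example-challenge", 16)   # ~65k hash attempts on average
assert verify(b"example-challenge", nonce, 16)
```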

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2014-03-18T20:39:29.663Z · LW(p) · GW(p)

I received another question from this same interlocutor:

Also, I understand you haven't read the original bitcoind code but do you have any guess for why the author chose to lift your SHA256 implementation from Crypto++ when the project already required openssl-0.9.8h? Is there anything odd about the OpenSSL implementation that wouldn't be immediately obvious to someone who isn't a crypto expert?

Hmm, I’m not sure. I thought it might have been the optimizations I put into my SHA256 implementation in March 2009 (due to discussions on the NIST mailing list for standardizing SHA-3, about how fast SHA-2 really is), which made it the fastest available at the time, but it looks like Bitcoin 0.1 was already released prior to that (in Jan 2009) and therefore had my old code. Maybe someone could test if the old code was still faster than OpenSSL?
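
For anyone who wants to try such a timing test, here is a rough Python sketch that measures SHA-256 throughput for the local hashlib build (normally backed by OpenSSL). Comparing against the old Crypto++ code would additionally require building that library and timing an equivalent loop in C++; this only shows the shape of the measurement.

```python
import hashlib
import time

def sha256_throughput(total_mb=64, chunk_kb=64):
    """Measure SHA-256 throughput in MB/s for the local hashlib build
    (usually OpenSSL-backed). Results are machine-dependent."""
    chunk = b"\x00" * (chunk_kb * 1024)
    iterations = (total_mb * 1024) // chunk_kb
    h = hashlib.sha256()
    start = time.perf_counter()
    for _ in range(iterations):
        h.update(chunk)
    h.digest()
    elapsed = time.perf_counter() - start
    return total_mb / elapsed

print(f"{sha256_throughput():.1f} MB/s")
```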

comment by frizzers · 2014-03-16T11:00:02.039Z · LW(p) · GW(p)

What do you make of the decision to use C++?

Do you have any opinions of the original coding beyond the 'inelegant but amazingly resilient' meme? Was there anything that stood out about it?

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2014-03-17T00:57:42.200Z · LW(p) · GW(p)

What do you make of the decision to use C++?

It seems like a pretty standard choice for anyone wanting to build such a piece of software...

Do you have any opinions of the original coding beyond the 'inelegant but amazingly resilient' meme? Was there anything that stood out about it?

No I haven't read any of it.

comment by frizzers · 2014-03-18T14:31:26.166Z · LW(p) · GW(p)

The correct pronunciation of your name.

Wei - is it pronounced as in 'way' or 'why'?

And Dai - as in 'dye' or 'day'?

Thank you.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2014-03-18T18:21:31.662Z · LW(p) · GW(p)

It's Chinese Pinyin romanization, so pronounced "way dye".

ETA: Since Pinyin is a many-to-one mapping, and as a result most Chinese articles about Bitcoin put the wrong name down for me, I'll take this opportunity to mention that my name is written logographically as 戴维.

comment by Jayson_Virissimo · 2014-03-16T06:01:35.648Z · LW(p) · GW(p)

Since the birth and early growth of Bitcoin, how has your view on the prospects for crypto-anarchy changed (if at all)? Why?

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2014-03-17T00:19:31.620Z · LW(p) · GW(p)

Since the birth and early growth of Bitcoin, how has your view on the prospects for crypto-anarchy changed (if at all)? Why?

My views haven't changed very much, since the main surprise of Bitcoin to me is that people find such a system useful for reasons other than crypto-anarchy. Crypto-anarchy still depends on the economics of online security favoring the defense over the offense, but as I mentioned in Work on Security Instead of Friendliness? that still seems to be true only in limited domains and false overall.

comment by ubiubi18 · 2019-09-20T23:30:09.334Z · LW(p) · GW(p)

Assuming the security risk of growing economic monopolization built into the DNA of proof of work (as well as proof of stake) is going to prevail in the coming years:

Do you think it is possible to create a more secure proof of democratic stake? I know that would require a not-yet-existing proof of unique identity first. So the question also implies: do you think a proof of unique identity is even possible?

P.S.: Ideas floating around the web to solve the latter challenge are, for example:

  • non-transferable proof of signature knowledge in combination with e-passports
  • web of trust
  • proof of location - simultaneously solved AI-resistant captchas
comment by Lu_Tong · 2017-12-22T21:52:56.233Z · LW(p) · GW(p)

Which philosophical views are you most certain of, and why? e.g. why do you think that multiple universes exist (and can you link or give the strongest argument for this?)

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2017-12-24T01:26:37.465Z · LW(p) · GW(p)

I talked a bit about why I think multiple universes exist in this post. Aside from what I said there, I was convinced by Tegmark's writings on the Mathematical Universe Hypothesis. I can't really think of other views that are particularly worth mentioning (or haven't been talked about already in my posts), but I can answer more questions if you have them?

Replies from: Lu_Tong
comment by Lu_Tong · 2017-12-27T03:43:36.078Z · LW(p) · GW(p)

Thanks, I'll ask a couple more. Do you think UDT is a solution to anthropics? What is your ethical view (roughly, even given large uncertainty) and what actions do you think this prescribes? How have you changed your decisions based on the knowledge that multiple universes probably exist (AKA, what is the value of that information)?

comment by 9kv · 2014-09-11T15:00:09.385Z · LW(p) · GW(p)

I'm doing a thesis paper on Bitcoin and was wondering if you, being specifically stated as one of the main influences on Bitcoin by Satoshi Nakamoto in his whitepaper references, could give me your take on how Bitcoin is today versus whatever project you imagined when you wrote "b-money". What is different? What is the same? What should change?

comment by philip_b (crabman) · 2019-08-03T11:58:14.723Z · LW(p) · GW(p)

Hi. At http://www.weidai.com/everything.html you say:

Why do we believe that both the past and the future are not completely random, but the future is more random than the past?

I don't understand what you mean saying that the future is more random than the past. Care to explain?

comment by riceissa · 2017-09-14T18:43:40.475Z · LW(p) · GW(p)

In some recent comments over at the Effective Altruism Forum you talk about anti-realism about consciousness, saying in particular "the case for accepting anti-realism as the answer to the problem of consciousness seems pretty weak, at least as explained by Brian". I am wondering if you could elaborate more on this. Does the case for anti-realism about consciousness seem weak because of your general uncertainty on questions like this? Or is it more that you find the case for anti-realism specifically weak, and you hold some contrary position?

I am especially curious since I was under the impression that many people on LessWrong hold essentially similar views.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2017-09-23T04:24:48.997Z · LW(p) · GW(p)

I do have a lot of uncertainty about many philosophical questions. Many people seem to have intuitions that are too strong or that they trust too much, and don't seem to consider that the kinds of philosophical arguments we currently have are far from watertight, and there are lots of possible philosophical ideas/positions/arguments that have yet to be explored by anyone, which eventually might overturn their current beliefs. In this case, I also have two specific reasons to be skeptical about Brian's position on consciousness.

  1. I think for something to count as a solution to the problem of consciousness, it should at minimum have a (perhaps formal) language for describing first-person subjective experiences or qualia, and some algorithm or method of predicting or explaining those experiences from a third-person description of a physical system, or at least some sort of plan for how to eventually get something like that, or an explanation of why that will never be possible. Brian's anti-realism doesn't have this, so it seems unsatisfactory to me.
  2. Relatedly, I think a solution to the problem of morality/axiology should include an explanation of why certain kinds of subjective experiences are good or valuable and others are bad or negatively valuable (and a way to generalize this to arbitrary kinds of minds and experiences), or an argument why this is impossible. Brian's moral anti-realism which goes along with his consciousness anti-realism also seems unsatisfactory in this regard.
comment by chowfan · 2018-01-27T08:26:18.322Z · LW(p) · GW(p)

Hi Wei. Do you have any comments on Ethereum, ICOs (Initial Coin Offerings), and hard forks of Bitcoin? Do you think they will solve the problem of Bitcoin's fixed monetary supply, since they have somehow brought much more "money" (or securities like stocks; I'm not sure how to classify them)?

Do you have any comments about Bitcoin's scaling fight between larger blocks and second-layer payment channels such as the Lightning Network?

comment by yannkyle · 2017-09-22T16:25:39.641Z · LW(p) · GW(p)

Hello, we are students in 11th grade from Paris, 17 years old. We're doing a project on bitcoin and cryptomoney. This project is part of the high school diploma and we were wondering if we could ask you a few questions about the subject. First, what is "bitcoin" for you and what is its use? Do you think cryptomoney could totally replace physical money and would it be better? How long have you been working on the subject and what do you stand for? Thank you.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2017-09-23T04:36:13.485Z · LW(p) · GW(p)

First, what is "bitcoin" for you and what is its use? Do you think cryptomoney could totally replace physical money and would it be better?

I'm not the best person to ask these questions.

How long have you been working on the subject and what do you stand for?

I spent a few years in the 1990s thinking about how a group of anonymous people on the Internet can pay each other with money without outside help, culminating in the publication of b-money in 1998. I haven't done much work on it since then. I don't currently have strong views on cryptocurrency per se, but these thoughts are somewhat relevant.

comment by [deleted] · 2016-01-08T03:12:42.106Z · LW(p) · GW(p)

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2016-01-15T22:38:24.815Z · LW(p) · GW(p)

I don't follow Bitcoin development very closely, basically just reading about it if a story shows up on New York Times or Wired. If you're curious as to why, see this post and this thread.

Replies from: None
comment by [deleted] · 2016-01-28T23:27:52.980Z · LW(p) · GW(p)

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2016-02-05T21:12:50.113Z · LW(p) · GW(p)

Does your link to the first thread imply that you believe securing one's bitcoin (and realizing its unique benefits) is ultimately a futile venture, especially in the presence of an adversary of advanced intelligence?

Yes, that looks likely to be the case.

To the second link, I guess you mean to imply the monetary policy of Bitcoin is ultimately flawed due to its deflationary nature?

That's part of it. If decentralized cryptocurrency is ultimately good for the world, then Bitcoin may be bad because its flawed monetary policy prevents or delays widespread adoption of cryptocurrency. But another part is that cryptocurrency and other cypherpunk/cryptoanarchist ideas may ultimately be harmful even if they are successful in their goals. For example they tend to make it harder for governments to regulate economic activity, but we may need such regulation to reduce existential risk from AI, nanotech, and other future technologies.

If one wants to push the future in a positive direction, it seems to me that there are better things to work on than Bitcoin.

Replies from: None, None, None
comment by [deleted] · 2016-05-02T08:24:13.987Z · LW(p) · GW(p)

I thought for sure you were SN. In any case, I'd still much rather hang out with you than this Australian guy.

comment by [deleted] · 2016-02-13T18:09:53.229Z · LW(p) · GW(p)

Sorry to be a bother but I had another related thought. I'm reminded of a reply you made to a post on Robin Hanson's blog:

If the price of diamonds were to plummet, people would have to invent some other way to verifiably and irreversibly expend resources. That new method might have a better side-effect than enriching DeBeers, but then again it might have a worse one.

The link to shark fin soup is interesting. Did you mean to imply you were also concerned about the possible environmental impact of Bitcoin mining? I don't recall you mentioning that concern since. Maybe you consider the verdict still out on that issue or have since found reason to be unconcerned?

I also find it a bit amusing and maybe even prescient. Here we are in 2016 (as far as we know) and China is overwhelmingly the largest producer of hashcash. The hunt also shows no immediate signs of slowing down.
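For readers unfamiliar with what "producing hashcash" involves: a proof-of-work stamp makes resource expenditure verifiable by requiring a partial hash collision that is expensive to find but cheap to check. A minimal sketch in Python, with an illustrative difficulty setting and stamp format that are assumptions for exposition, not Bitcoin's actual parameters:

    import hashlib
    import itertools

    DIFFICULTY_BITS = 16  # illustrative assumption; real systems use vastly higher difficulty

    def mint_stamp(resource):
        """Burn CPU time searching for a counter whose SHA-256 digest has
        DIFFICULTY_BITS leading zero bits -- expensive to produce."""
        target = 1 << (256 - DIFFICULTY_BITS)
        for counter in itertools.count():
            stamp = "%s:%d" % (resource, counter)
            if int(hashlib.sha256(stamp.encode()).hexdigest(), 16) < target:
                return stamp

    def verify_stamp(stamp):
        """Checking a stamp costs a single hash, however much work minting took."""
        target = 1 << (256 - DIFFICULTY_BITS)
        return int(hashlib.sha256(stamp.encode()).hexdigest(), 16) < target

    stamp = mint_stamp("example@lesswrong.com")
    assert verify_stamp(stamp)

The asymmetry between minting and verifying is what makes the expenditure "verifiable", and the electricity already spent searching for a valid stamp is what makes it "irreversible" -- which is also why the environmental question raised above is a live one.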

comment by [deleted] · 2016-02-09T18:20:11.417Z · LW(p) · GW(p)

Thanks, Wei. That really clarifies your position for me and includes a thought I hadn't previously considered but will certainly spend more time thinking about, re: decentralization risk.

If one wants to push the future in a positive direction, it seems to me that there are better things to work on than Bitcoin.

Obviously you feel it's very important to tackle the problem of FAI and I think that's a worthy pursuit. If you happen to have a mental list, mind sharing other ideas for useful things a programmer who hopes to make a positive impact could work on? It might be inspirational. Thanks again.

comment by JoshuaFox · 2014-01-12T08:59:50.222Z · LW(p) · GW(p)

I'm a Research Associate at MIRI. I became a supporter in late 2005, then contributed to research and publication in various ways. Please, AMA.

Opinions I express here and elsewhere are mine alone, not MIRI's.

To be clear, as an Associate, I am an outsider to the MIRI team (who collaborates with them in various ways).

Replies from: James_Miller, John_Maxwell_IV, Anatoly_Vorobey, Eliezer_Yudkowsky, None, XiXiDu, Apprentice
comment by James_Miller · 2014-01-12T18:37:08.961Z · LW(p) · GW(p)

When do you estimate that MIRI will start writing the code for a friendly AI?

Replies from: JoshuaFox
comment by JoshuaFox · 2014-01-12T19:06:53.778Z · LW(p) · GW(p)

Median estimate for when they'll start working on a serious code project (i.e., not just toy code to illustrate theorems) is 2017.

This will not necessarily be development of friendly AI -- maybe a component of friendly AI, maybe something else. (I have no strong estimates for what that other thing would be, but just as an example--a simulated-world sandbox).

Everything I say above (and elsewhere) is my opinion, not MIRI's. Median estimate for when they'll start working on friendly AI, if they get started with that before the Singularity, and if their direction doesn't shift away from their apparent current long-term plans to do so: 2025.

Replies from: Eliezer_Yudkowsky, Tenoke, Lumifer, Furcas, None
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2014-01-13T10:59:05.707Z · LW(p) · GW(p)

This is not a MIRI official estimate and you really should have disclaimed that.

Replies from: JoshuaFox
comment by JoshuaFox · 2014-01-13T12:33:00.796Z · LW(p) · GW(p)

OK, I will edit this one as well to say that.

comment by Tenoke · 2014-01-13T07:26:51.635Z · LW(p) · GW(p)

Median estimate for when they'll start working on friendly AI, if they get started with that before the Singularity, and if their direction doesn't shift away from their apparent current long-term plans to do so: 2025.

We're so screwed, aren't we?

Replies from: JoshuaFox
comment by JoshuaFox · 2014-01-13T08:53:11.491Z · LW(p) · GW(p)

Yes, but not because of MIRI. Along with FHI, they are doing more than anyone to improve our odds. As to whether writing code or any other strategy is the right one--I don't know, but I trust MIRI more than anyone to get that right.

Replies from: Tenoke
comment by Tenoke · 2014-01-13T09:10:53.076Z · LW(p) · GW(p)

Yes, but not because of MIRI.

Oh yes, I know that. It just says a lot that our best shot is still decades away from achieving its goal.

Along with FHI, they are doing more than anyone to improve our odds.

Which, to be fair, isn't saying much.

Replies from: Calvin
comment by Calvin · 2014-01-13T09:20:46.413Z · LW(p) · GW(p)

Seeing as we are talking about speculative dangers coming from a speculative technology that has yet to be developed, it seems pretty understandable.

I am pretty sure that, as soon as the first AGIs arrive on the market, people will start to take the possible dangers more seriously.

Replies from: Tenoke
comment by Tenoke · 2014-01-13T09:38:30.776Z · LW(p) · GW(p)

I am pretty sure that, as soon as the first AGIs arrive on the market, people will start to take the possible dangers more seriously.

And at that point it will quite likely be too late: we will be much closer to having an AGI that will foom than to having an AI that won't kill us.

Replies from: Calvin
comment by Calvin · 2014-01-13T11:08:06.301Z · LW(p) · GW(p)

I know it is a local trope that death and destruction are the apparent and necessary logical conclusion of creating an intelligent machine capable of self-improvement and goal modification, but I certainly don't share those sentiments.

How do you estimate the probability that AGIs won't take over the world (the people who constructed them may use them for that purpose, but that is a different story), and would instead be used as simple tools and advisors in the same boring, old-fashioned, and safe way that 100% of our current technology is used?

I am not explicitly saying that MIRI or FAI are pointless, or anything like that. I just want to point out that they posture as if they were saving the world from imminent destruction, while it is nowhere near certain whether said danger is real.

Replies from: Tenoke, hairyfigment
comment by Tenoke · 2014-01-13T11:19:31.735Z · LW(p) · GW(p)

How do you estimate the probability that AGIs won't take over the world (the people who constructed them may use them for that purpose, but that is a different story), and would instead be used as simple tools and advisors in the same boring, old-fashioned, and safe way that 100% of our current technology is used?

1%? I believe that it is nearly impossible to use a foomed AI in a safe manner without explicitly trying to do so. That's kind of why I am worried about the threat of any uFAI developed before it is proven that we can develop a Friendly one and without using whatever the proof entails.

Anyway,

...would instead be used as simple tools and advisors in the same boring, old-fashioned, and safe way that 100% of our current technology is used?

I wasn't aware that we use 100% of our current technology in a safe way.

comment by hairyfigment · 2014-01-14T02:44:22.998Z · LW(p) · GW(p)

You may have a different picture of current technology than I do, or you may be extrapolating different aspects. We're already letting software optimize the external world directly, with slightly worrying results. You don't get from here to strictly and consistently limited Oracle AI without someone screaming loudly about risks. In addition, Oracle AI has its own problems (tell me if the LW search function doesn't make this clear).

Some critics appear to argue that the direction of current tech will automatically produce CEV. But today's programs aim to maximize a behavior, such as disgorging money. I don't know in detail how Google filters its search results, but I suspect they want to make you feel more comfortable with links they show you, thus increasing clicks or purchases from sometimes unusually dishonest ads. They don't try to give you whatever information a smarter, better informed you would want your current self to have. Extrapolating today's Google far enough doesn't give you a Friendly AI, it gives you the making of a textbook dystopia.

comment by Lumifer · 2014-01-12T19:34:32.619Z · LW(p) · GW(p)

What are the error bars around these estimates?

Replies from: JoshuaFox
comment by JoshuaFox · 2014-01-12T19:41:51.519Z · LW(p) · GW(p)

The first estimate: 50% probability between 2015 and 2020.

The second estimate: 50% probability between 2020 and 2035. (again, taking into account all the conditioning factors).

Replies from: Lumifer
comment by Lumifer · 2014-01-13T03:25:08.599Z · LW(p) · GW(p)

Um.

2017

50% probability between 2015 and 2020.

The distribution is asymmetric for obvious reasons. The probability for 2014 is pretty close to zero. This means that there is a 50% probability that a serious code project will start after 2020.

This is inconsistent with 2017 being a median estimate.

Replies from: army1987, JoshuaFox
comment by A1987dM (army1987) · 2014-01-13T17:26:43.656Z · LW(p) · GW(p)

Unless he thinks it's very unlikely the project will start between 2017 and 2020 for some reason.

comment by JoshuaFox · 2014-01-13T09:10:34.348Z · LW(p) · GW(p)

Good point. I'll have to re-think that estimate and improve it.

comment by Furcas · 2014-01-14T22:54:31.290Z · LW(p) · GW(p)

If some rich individual were to donate 100 million USD to MIRI today, how would you revise your estimate (if at all)?

comment by [deleted] · 2014-01-15T01:39:26.702Z · LW(p) · GW(p)

Can you elaborate on the types of toy code that you (or others) have tried in terms of illustrating theorems?

Replies from: JoshuaFox
comment by JoshuaFox · 2014-01-15T08:26:27.523Z · LW(p) · GW(p)

I have not tried any.

Over the years, I have seen a few online comments about toy programs written by MIRI people, e.g., this, search for "Haskell". But I don't know anything more about these programs than those brief reports.

comment by John_Maxwell (John_Maxwell_IV) · 2014-01-12T21:52:43.290Z · LW(p) · GW(p)

I've talked to a former grad student (fiddlemath, AKA Matt Elder) who worked on formal verification, and he said current methods are not anywhere near up to the task of formally verifying an FAI. Does MIRI have a formal verification research program? Do they have any plans to build programming processes like this or this?

Replies from: JoshuaFox
comment by JoshuaFox · 2014-01-13T09:09:59.666Z · LW(p) · GW(p)

I don't know anything more about MIRI research strategy than what is publicly available, but if you look at what they are working on, it is all in the direction of formal verification.

I have spoken to experts in formal verification of chips and of other systems, and they have confirmed what you learned from fiddlemath. Formal verification is limited in its capabilities: often, you can only verify some very low-level or very specific assertions. And you have to be able to specify the assertion that you are verifying.

So, it seems that they are taking on a very difficult challenge.

comment by Anatoly_Vorobey · 2014-01-12T19:06:43.392Z · LW(p) · GW(p)

Your published dissertation sounds fascinating, but I swore off paper books. Can you share it in digital form?

Replies from: JoshuaFox
comment by JoshuaFox · 2014-01-13T07:25:40.012Z · LW(p) · GW(p)

Sure, I'll send it to you. If anyone else wants it, please contact me. I always knew that Semitic Noun Patterns would be a best seller :-)

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2014-01-13T11:01:08.122Z · LW(p) · GW(p)

(Problem solved, comment deleted.)

Replies from: gjm, JoshuaFox
comment by gjm · 2014-01-13T14:56:24.927Z · LW(p) · GW(p)

Meta: I think this was an important thing to say, and to say forcefully, but it might have been worth expending a sentence or so to say it more nicely (but still as forcefully). (I don't want to derail the thread and will say no more than this unless specifically asked.)

comment by [deleted] · 2014-01-12T20:59:34.712Z · LW(p) · GW(p)

What do you think is the likelihood of AI boxing being successful, and why? (Interested in reasons, not numbers.)

Replies from: JoshuaFox
comment by JoshuaFox · 2014-01-13T07:33:20.390Z · LW(p) · GW(p)

I don't think I have anything to say that hasn't been said better by others in MIRI and FHI, but I think that AI boxing is impossible because (1) it can convince any gatekeepers to let it out, (2) any AI is "embodied" and not separate from the outside world, if only in that its circuits pass electrons, and (3) I doubt you could convince all AGI researchers to keep their projects isolated.

Still, I think that AI boxing could be a good stopgap measure, one of a number of techniques that are ultimately ineffectual, but could still be used to slightly hold back the danger.

comment by XiXiDu · 2014-01-12T11:58:17.630Z · LW(p) · GW(p)

My question is similar to the one that Apprentice posed below. Here are my probability estimates of unfriendly and friendly AI; what are yours? And more importantly, where do you draw the line: what probability estimate would be low enough for you to drop the AI business from your consideration?

Replies from: JoshuaFox
comment by JoshuaFox · 2014-01-12T19:45:30.576Z · LW(p) · GW(p)

what probability estimate would be low enough for you to drop the AI business from your consideration?

Even a fairly low probability estimate would justify effort on an existential risk.

And I have to admit, a secondary, personal, reason for being involved is that the topic is fascinating and there are smart people here, though that of course does not shift the estimates of risk and of the possibilities of mitigating it.

comment by Apprentice · 2014-01-12T10:17:41.026Z · LW(p) · GW(p)

What probability would you assign to this statement: "UFAI will be relatively easy to create within the next 100 years. FAI is so difficult that it will be nearly impossible to create within the next 200 years."

Replies from: JoshuaFox
comment by JoshuaFox · 2014-01-12T13:57:45.457Z · LW(p) · GW(p)

I think that the estimates cannot be undertaken independently. FAI and UFAI would each pre-empt the other. So I'll rephrase a little.

I estimate the chances that some AGI (in the sense of "roughly human-level AI") will be built within the next 100 years as 85%, which is shorthand for "very high, but I know that probability estimates near 100% are often overconfident; and something unexpected can come up."

And "100 years" here is shorthand for "as far off as we can make reasonable estimates/guesses about the future of humanity"; perhaps "50 years" should be used instead.

Conditional on some AGI being built, I estimate the chances that it will be unfriendly as 80%, which is shorthand for "by default it will be unfriendly, but people are working on avoiding that and they have some small chance of succeeding; or there might be some other unexpected reason that it will turn out friendly."

Replies from: Apprentice
comment by Apprentice · 2014-01-12T15:11:24.824Z · LW(p) · GW(p)

Thank you. I didn't phrase my question very well but what I was trying to get at was whether making a friendly AGI might be, by some measurement, orders of magnitude more difficult than making a non-friendly one.

Replies from: JoshuaFox
comment by JoshuaFox · 2014-01-12T19:11:43.652Z · LW(p) · GW(p)

Yes, it is orders of magnitude more difficult. If we took a hypothetical FAI-capable team, how much less time would it take them to make a UFAI than a FAI, assuming similar levels of effort, and starting at today's knowledge levels?

One-tenth the time seems like a good estimate.

comment by James_Miller · 2014-01-12T07:00:11.795Z · LW(p) · GW(p)

Ask me anything. I'm the author of Singularity Rising.

Replies from: somervta, Anatoly_Vorobey, AlexMennen, blogospheroid, JoshuaFox, FiftyTwo
comment by somervta · 2014-01-12T07:56:19.330Z · LW(p) · GW(p)

What, if anything, do you think a lesswrong regular who's read the sequences and all/most of MIRI's non-technical publications will get out of your book?

Replies from: James_Miller
comment by James_Miller · 2014-01-12T18:14:49.843Z · LW(p) · GW(p)

Along with the views of EY (which such readers would already know) I present the singularity views of Robin Hanson and Ray Kurzweil, and discuss the intelligence enhancing potential of brain training, smart drugs, and eugenics. My thesis is that there are so many possible paths to super-human intelligence, and such incredible military and economic benefits to developing super-human intelligence, that unless we destroy our high-tech civilization we will almost certainly develop it.

comment by Anatoly_Vorobey · 2014-01-12T10:17:12.035Z · LW(p) · GW(p)

How much time did it take you to write the singularity book? How much money has it brought you?

Same question about your microeconomics textbook. Also, what motivated you to write it given that there must be about 2^512 existing ones on the market?

Replies from: James_Miller
comment by James_Miller · 2014-01-12T18:25:27.973Z · LW(p) · GW(p)

Hard to say about the time because I worked on both books while also doing other projects. I suspect I could have done the Singularity book in about 1.5 years of full time effort. I don't have a good estimate for the textbook. Alas, I have lost money on the singularity book because the advance wasn't all that big, and I had personal expenses such as hiring a research assistant and paying a publicist. The textbook had a decent advance, still I probably earned roughly minimum wage for it. Surprisingly, I've done fairly well with my first book, Game Theory at Work, in part because of translation rights. With Game Theory at Work I've probably earned several times the minimum wage. Of course, I'm a professor and part of my salary from my college is to write, and I'm not including this.

I wanted to write a free market microeconomics textbook, and there are very few of these. I was recruited to write the textbook by the people who published Game Theory at Work. Had the textbook done very well, I could have made a huge amount of money (roughly equal to my salary as a professor) indefinitely. Alas, this didn't happen but the odds of it happening were well under 50%. Since teaching microeconomics is a big part of my job as a college professor, there was a large overlap between writing the textbook and becoming a better teacher. My textbook publisher sent all of my chapters to other teachers of microeconomics to get their feedback, and so I basically got a vast amount of feedback from experts on how I teach microeconomics.

comment by AlexMennen · 2014-01-13T02:10:56.081Z · LW(p) · GW(p)

Why did you decide to run for Massachusetts State Senate in 2004? Did you ever think you had a chance of winning?

Replies from: James_Miller
comment by James_Miller · 2014-01-13T03:01:05.979Z · LW(p) · GW(p)

No. I ran as a Republican in one of the most Democratic districts in Massachusetts, my opponent was the second most powerful person in the Massachusetts State Senate, and even Republicans in my district had a high opinion of him.

Replies from: AlexMennen
comment by AlexMennen · 2014-01-13T03:18:45.441Z · LW(p) · GW(p)

Why did you run?

Replies from: James_Miller
comment by James_Miller · 2014-01-13T03:22:20.835Z · LW(p) · GW(p)

I wanted to get more involved in local Republican politics and no one was running in the district and it was suggested that I run. It turned out to be a good decision as I had a lot of fun debating my opponent and going to political events. Since winning wasn't an option, it was even mostly stress free.

Replies from: VAuroch
comment by VAuroch · 2014-01-13T08:08:24.257Z · LW(p) · GW(p)

I have a political question/proposition I have been pondering, and you, an intelligent semi-involved Massachusetts Republican, are precisely the kind of person who could answer it usefully. May I ask it to you in a private message?

Replies from: James_Miller
comment by James_Miller · 2014-01-13T15:52:54.161Z · LW(p) · GW(p)

Yes

comment by blogospheroid · 2014-01-13T07:27:24.170Z · LW(p) · GW(p)

Haven't read your book so not sure if you have already answered this.

What is your assessment of MIRI's current opinion that increasing the global economic growth rate is a source of existential risk?

How much risk is increased for what increase in growth?

Are there safe paths? (Maybe catch-up growth in India and China is safe?)

Replies from: James_Miller
comment by James_Miller · 2014-01-13T07:55:54.287Z · LW(p) · GW(p)

Greater economic growth means more money for AI research from companies and governments and if you think that AI will probably go wrong then this is a source of trouble. But there are benefits as well including increased charitable contributions for organizations that reduce existential risk and better educational systems in India and China which might produce people who end up helping MIRI. Overall, I'm not sure how this nets out.

Catch-up growth is not necessarily safe because it will increase the demand for products that use AI and so increase the amount of resources companies such as Google devote to AI.

The only safe path is someone developing a mathematically sound theory of friendly AI, but this will be easier if we get (probably via China) intelligence enhancement with eugenics.

comment by JoshuaFox · 2014-01-12T14:19:54.675Z · LW(p) · GW(p)

Did you see any shifts in opinion (even in a small audience) following on your book?

Replies from: James_Miller
comment by James_Miller · 2014-01-12T18:30:24.831Z · LW(p) · GW(p)

Not really. Someone (I forgot who) wrote that I helped them see the race to create AI as a potential existential risk. I promoted the book on numerous radio shows and I hope I convinced at least a few people to do further research and perhaps donate money to MIRI, but this is just a hope.

Replies from: JoshuaFox
comment by JoshuaFox · 2014-01-12T19:09:00.916Z · LW(p) · GW(p)

Why do you think that it is so hard to get through to people?

Not only you, but others involved in this, and myself, have all found that intelligent people will listen and even understand what you are telling them -- I probe for inferential gaps, and if they exist they are not obvious.

Yet almost no one gets on board with the MIRI/FHI program.

Why?

Replies from: James_Miller
comment by James_Miller · 2014-01-12T19:32:03.724Z · LW(p) · GW(p)

I have thought a lot about this. Possible reasons: most humans don't care about the far future or people who are not yet born; most things that seem absurd are absurd and are not worth investigating, and the singularity certainly superficially seems absurd; the vast majority is right and you and I are incorrect to worry about a singularity; it's impossible for people to imagine an intelligent AI that doesn't have human-like emotions; the Fermi paradox implies that civilizations such as ours are not going to be able to rationally think about the far future; and an ultra-AI would be a god and so is disallowed by most people's religious beliefs.

Your question is related to why so few sign up for cryonics.

Replies from: NancyLebovitz, JoshuaFox
comment by NancyLebovitz · 2014-01-12T20:35:27.206Z · LW(p) · GW(p)

I don't know about anyone else, but I find it hard to believe that provable Friendliness is possible.

On the other hand, I think high-probability Friendliness might be possible.

comment by JoshuaFox · 2014-01-12T19:43:18.588Z · LW(p) · GW(p)

I agree with you that a lot of people think that way, but I have spoken to quite a few smart people who understand all the points -- I probe to figure out if there are any major inferential gaps -- and they still don't get on the bandwagon.

Another point is simply that we cannot all devote time to all important things; they simply choose not to prioritize this.

comment by FiftyTwo · 2014-03-20T02:21:38.854Z · LW(p) · GW(p)

Do you think "The Singularity" is a useful concept, or would it be better to discuss the constituent issues separately?

Replies from: James_Miller
comment by James_Miller · 2014-03-20T19:16:37.841Z · LW(p) · GW(p)

Yes it is useful. I define the singularity as a threshold of time at which machine intelligence or increases in human intelligence radically transform society. As similar incentives and technologies are pushing us towards this, it's useful to lump them together with a single term.

comment by jsteinhardt · 2014-01-12T06:59:18.736Z · LW(p) · GW(p)

I'm a PhD student in artificial intelligence, and co-creator of the SPARC summer program. AMA.

Replies from: None, Anatoly_Vorobey, Benito, Markas, JoshuaFox
comment by [deleted] · 2014-01-12T07:15:53.736Z · LW(p) · GW(p)

What do you feel are the most pressing unsolved problems in AGI?

Do you believe AGI can "FOOM" (you may have to qualify what you interpret FOOM as)?

How viable is the scenario of someone creating an AGI in their basement, thereby changing the course of history in unpredictable ways?

Replies from: jsteinhardt
comment by jsteinhardt · 2014-01-12T22:09:27.081Z · LW(p) · GW(p)

What do you feel are the most pressing unsolved problems in AGI?

In AGI? If you mean "what problems in AI do we need to solve before we can get to the human level", then I would say:

  • Ability to solve currently intractable statistical inference problems (probably not just by scaling up computational resources, since many of these problems have exponentially large search spaces).
  • Ways to cope with domain adaptation and model mis-specification.
  • Robust and modular statistical procedures that can be fruitfully fit together.
  • Large amounts of data, in formats helpful for learning (potentially including provisions for high-throughput interaction, perhaps with a virtual environment).

To some extent this reflects my own biases, and I don't mean to say "if we solve these problems then we'll basically have AI", but I do think it will either get us much closer or else expose new challenges that are not currently apparent.

Do you believe AGI can "FOOM" (you may have to qualify what you interpret FOOM as)?

I think it is possible that a human-level AI would very quickly acquire a lot of resources / power. I am more skeptical that an AI would become qualitatively more intelligent than a human, but even if it was no more intelligent than a human but had the ability to easily copy and transmit itself, that would already make it powerful enough to be a serious threat (note that it is also quite possible that it would have many more cycles of computation per second than a biological brain).

In general I think this is one of many possible scenarios, e.g. it's also possible that sub-human AI would already have control of much of the world's resources and we would have built systems in place to deal with this fact. So I think it can be useful to imagine such a scenario but I wouldn't stake my decisions on the assumption that something like it will occur. I think this report does a decent job of elucidating the role of such narratives (not necessarily AI-related) in making projections about the future.

How viable is the scenario of someone creating an AGI in their basement, thereby changing the course of history in unpredictable ways?

Not viable.

comment by Anatoly_Vorobey · 2014-01-12T10:40:29.947Z · LW(p) · GW(p)

Do you have a handle on the size of the field? E.g. how many people, counting from PhD students and upwards, are working on AGI in the entire world? More like 100 or more like 10,000 or what's your estimate?

Replies from: jsteinhardt
comment by jsteinhardt · 2014-01-12T22:17:51.168Z · LW(p) · GW(p)

I don't personally work on AGI and I don't think the majority of "AGI progress" comes from people who label themselves as working on AGI. I think much of the progress comes from improved tools due to research and usage in machine learning and statistics. There are also of course people in these fields who are more concerned with pushing in the direction of human-level capabilities. And progress everywhere is so inter-woven that I don't even know if thinking in terms of "number of AI researchers" is the right framing. That said, I'll try to answer your question.

I'm worried that I may just be anchoring off of your two numbers, but I think 10^3 is a decent estimate. There are upwards of a thousand people at NIPS and ICML (two of the main machine learning conferences), only a fraction of those people are necessarily interested in the "human-level" AI vision, but also there are many people who are in the field who don't go to these conferences in any given year. Also many people in natural language processing and computer vision may be interested in these problems, and I recently found out that the program analysis community cares about at least some questions that 40 years ago would have been classified under AI. So the number is hard to estimate but 10^3 might be a rough order of magnitude. I expect to find more communities in the future that I either wasn't aware of or didn't think of as being AI-relevant, and who turn out to be working on problems that are important to me.

comment by Ben Pace (Benito) · 2014-01-12T08:51:51.787Z · LW(p) · GW(p)

How did you come up with the course content for SPARC?

Replies from: jsteinhardt
comment by jsteinhardt · 2014-01-13T04:49:18.317Z · LW(p) · GW(p)

We brainstormed things that we know now that we wished we had known in high school. During the first year, we just made courses out of those (also borrowing from CFAR workshops) and rolled with that, because we didn't really know what we were doing and just wanted to get something off the ground.

Over time we've asked ourselves what the common thread is in our various courses, in an attempt to develop a more coherent curriculum. Three major themes are statistics, programming, and life skills. The thing these have in common is that they are some of the key skills that extremely sharp quantitative minds need to apply their skills to a qualitative world. Of course, it will always be the case that most of the value of SPARC comes from informal discussions rather than formal lectures, and I think one of the best things about SPARC is the amount of time that we don't spend teaching.

comment by Markas · 2014-01-12T18:59:16.127Z · LW(p) · GW(p)

Could you talk about your graduate work in AI? Also, out of curiosity, did you weight possible contribution towards a positive singularity heavily in choosing your subfield/projects?

(I am trying to figure out whether it would be productive for me to become familiar with AI in mainstream academia and/or apply for PhD programs eventually.)

Replies from: jsteinhardt
comment by jsteinhardt · 2014-01-13T04:49:22.378Z · LW(p) · GW(p)

I work on computationally bounded statistical inference. Most theoretical paradigms don't have a clean way of handling computational constraints, and I think it's important to address this since the computational complexity of exact statistical inference scales extremely rapidly with model complexity. I have also recently started working on applications in program analysis, both because I think it provides a good source of computationally challenging problems, and because it seems like a domain that will force us into using models with high complexity.

Singularity considerations were a factor when choosing to work on AI, although I went into the field because AI seems like a robustly game-changing technology across a wide variety of scenarios, whether or not a singularity occurs. I certainly think that software safety is an important issue more broadly, and this partially influences my choice of problems, although I am more guided by the problems that seem technically important (and indeed, I think this is mostly the right strategy even if you care about safety to a fair degree).

Learning more about mainstream AI has greatly shaped my beliefs regarding AGI, so it's something that I would certainly recommend. Going to grad school shaped my beliefs even further, even though I had already read many AI papers prior to arriving at Stanford.

comment by JoshuaFox · 2014-01-12T19:47:26.427Z · LW(p) · GW(p)

Is there any uptake of MIRI ideas in the AI community? Of HPMOR?

Replies from: jsteinhardt, jsteinhardt, None
comment by jsteinhardt · 2014-01-13T08:08:20.143Z · LW(p) · GW(p)

I wouldn't presume to know what the field as a whole thinks, as I think views vary a lot from place to place and I've only spent serious time at a few universities. However, I can speculate based on the data I do have.

I think a sizable number (25%?) of AI graduate students I know are aware of LessWrong's existence. Also a sizable (although probably smaller) number have read at least a few chapters of HPMOR; for the latter I'm mostly going off of demographics, as I don't know that many who have told me they read HPMOR.

There is very little actual discussion of MIRI or LessWrong. From what I would gather most people silently disagree with MIRI, a few people probably silently agree. I would guess almost no one knows what MIRI is, although more would have heard of the Singularity Institute (but might confuse it with Singularity University). People do occasionally wonder whether we're going to end up killing everyone, although not for too long.

To address your comment in the grandchild, I certainly don't speak for Norvig but I would guess that "Norvig takes these [MIRI] ideas seriously" is probably false. He does talk at the Singularity Summit, but the tone when I attended his talk sounded more like "Hey, you guys just said a bunch of stuff; based on what people in AI actually do, here are the parts that seem true and here are the parts that seem false." It's also important to note that the notion of the singularity is much more widespread as a concept than MIRI in particular. "Norvig takes the singularity seriously" seems much more likely to be true to me, though again, I'm far from being in a position to make informed statements about his views.

Replies from: JoshuaFox
comment by JoshuaFox · 2014-01-13T08:40:38.864Z · LW(p) · GW(p)

Thanks. I was basing my comments about Norvig on what he says in the intro to his AI textbook, which does address UFAI risk.

Replies from: jsteinhardt
comment by jsteinhardt · 2014-01-13T08:47:05.957Z · LW(p) · GW(p)

What's the quote? You may very well have better knowledge of Norvig's opinions in particular than I do. I've only talked to him in person twice briefly, neither time about AGI, and I haven't read his book.

Replies from: JoshuaFox
comment by JoshuaFox · 2014-01-13T09:06:25.399Z · LW(p) · GW(p)

Russell and Norvig, Artificial Intelligence: A Modern Approach. Third Edition, 2010, pp. 1037 - 1040. Available here.

Replies from: None
comment by [deleted] · 2014-01-13T14:03:46.332Z · LW(p) · GW(p)

I think the key quote here is:

Arguments for and against strong AI are inconclusive. Few mainstream researchers believe that anything significant hinges on the outcome of the debate.

Replies from: jsteinhardt
comment by jsteinhardt · 2014-01-14T09:19:20.593Z · LW(p) · GW(p)

Hm...I personally find it hard to divine much about Norvig's personal views from this. It seems like a relatively straightforward factual statement about the state of the field (possibly hedging to the extent that I think the arguments in favor of strong AI being possible are relatively conclusive, i.e. >90% in favor of possibility).

Replies from: lukeprog
comment by lukeprog · 2014-01-15T00:27:08.113Z · LW(p) · GW(p)

When I spoke to Norvig at the 2012 Summit, he seemed to think getting good outcomes from AGI could indeed be pretty hard, but also that AGI was probably a few centuries away. IIRC.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2014-01-15T00:52:16.992Z · LW(p) · GW(p)

Interesting, thanks.

comment by jsteinhardt · 2014-01-13T04:30:05.004Z · LW(p) · GW(p)

Like Mark, I'm not sure I was able to parse your question, can you please clarify?

Replies from: JoshuaFox
comment by JoshuaFox · 2014-01-13T07:30:16.659Z · LW(p) · GW(p)

Right, there was a typo. I've fixed it now. I'm just wondering if MIRI-like ideas are spreading among AI researchers. We see that Norvig takes these ideas seriously.

And separately, I wonder if HPMOR is a fad in elite AI circles. I have heard that it's popular in top physics departments.

comment by [deleted] · 2014-01-12T20:57:11.495Z · LW(p) · GW(p)

What does that question mean?

Replies from: JoshuaFox
comment by JoshuaFox · 2014-01-13T07:53:24.073Z · LW(p) · GW(p)

Sorry, typo now fixed. See my response to jsteinhardt below.

comment by Will_Newsome · 2014-01-13T01:57:43.082Z · LW(p) · GW(p)

My primary interest is determining what the "best" thing to do is, especially via creating a self-improving institution (e.g., an AGI) that can do just that. My philosophical interests stem from that pragmatic desire. I think there are god-like things that interact with humans and I hope that's a good thing but I really don't know. I think LessWrong has been in Eternal September mode for awhile now so I mostly avoid it. Ask me anything, I might answer.

Replies from: Panic_Lobster, khafra, Eugine_Nier, ChristianKl, None, Jonathan_Graehl
comment by Panic_Lobster · 2014-01-13T08:04:13.513Z · LW(p) · GW(p)

Why do you believe that there are god-like beings that interact with humans? How confident are you that this is the case?

Replies from: Will_Newsome
comment by Will_Newsome · 2014-01-14T01:44:43.920Z · LW(p) · GW(p)

I believe so for reasons you wouldn't find compelling, because the gods apparently do not want there to be common knowledge of their existence, and thus do not interact with humans in a manner that provides communicable evidence. (Yes, this is exactly what a world without gods would look like to an impartial observer without firsthand incommunicable evidence. This is obviously important but it is also completely obvious so I wish people didn't harp on it so much.) People without firsthand experience live in a world that is ambiguous as to the existence or lack thereof of god-like beings, and any social evidence given to them will neither confirm nor deny their picture of the world, unless they're falling prey to confirmation bias, which of course they often do, especially theists and atheists. I think people without firsthand incommunicable evidence should be duly skeptical but should keep the existence of the supernatural (in the everyday sense of that word, not the metaphysical sense) as a live hypothesis. Assigning less than 5% probability to it is, in my view, a common but serious failure of social epistemic rationality, most likely caused by arrogance. (I think LessWrong is especially prone to this kind of arrogance; see IlyaShpitser's comments on LessWrong's rah-rah-Bayes stance to see part of what I mean.)

As for me, and as to my personal decision policy, I am ninety-something percent confident. The scenarios where I'm wrong are mostly worlds where outright complex hallucination is a normal feature of human experience that humans are for some reason blind to. I'm not talking about normal human memory biases and biases of interpretation, I'm saying some huge fraction of humans would have to have a systemic disorder on the level of anosognosia. Given that I don't know how we should even act in such a world, I'm more inclined to go with the gods hypothesis, which, while baffling, at least has some semblance of graspability.

Replies from: Furcas, TheOtherDave, Apprentice, gjm, Leonhart, jimrandomh, knb
comment by Furcas · 2014-01-14T22:38:16.953Z · LW(p) · GW(p)

Can you please describe one example of the firsthand evidence you're talking about?

Also, I honestly don't know what the everyday sense of supernatural is. I don't think most people who believe in "the supernatural" could give a clear definition of what they mean by the word. Can you give us yours?

Thanks.

Replies from: Will_Newsome
comment by Will_Newsome · 2014-01-15T02:07:58.312Z · LW(p) · GW(p)

Can you please describe one example of the firsthand evidence you're talking about?

I realize it's annoying, but I don't think I should do that.

Can you give us yours?

I give a definition of "supernatural" here. Of course, it doesn't capture all of what people use the word to mean.

Replies from: Furcas
comment by Furcas · 2014-01-15T02:22:12.156Z · LW(p) · GW(p)

I realize it's annoying, but I don't think I should do that.

Why not?

comment by TheOtherDave · 2014-01-14T18:37:39.930Z · LW(p) · GW(p)

Assigning less than 5% probability to it is, in my view, a common but serious failure of social epistemic rationality, most likely caused by arrogance.

Where does the 5% threshold come from?

Replies from: Will_Newsome
comment by Will_Newsome · 2014-01-15T02:14:45.269Z · LW(p) · GW(p)

Psychologically "5%" seems to correspond to the difference between a hypothesis you're willing to consider seriously, albeit briefly, versus a hypothesis that is perhaps worth keeping track of by name but not worth the effort required to seriously consider.

Replies from: TheOtherDave, gjm
comment by TheOtherDave · 2014-01-15T03:01:30.684Z · LW(p) · GW(p)

(nods) Fair enough.

Do you have any thoughts about why, given that the gods apparently do not want their existence to be common knowledge, they allow selected individuals such as yourself to obtain compelling evidence of their presence?

Replies from: Will_Newsome
comment by Will_Newsome · 2014-02-24T10:29:16.241Z · LW(p) · GW(p)

I don't have good thoughts about that. There may be something about sheep and goats, as a general rule but certainly not a universal law. It is possible that some are more cosmically interesting than others for some reason (perhaps a matter of their circumstances and not their character), but it seems unwise to ever think that about oneself; breaking the fourth wall is always a bold move, and the gods would seem to know their tropes. I wouldn't go that route too far without expectation of a Wrong Genre Savvy incident. Or, y'know, delusionally narcissistic schizophrenia. Ah, the power of the identity of indiscernibles. Anyhow, it is possible such evidence is not so rare, especially among sheep whose beliefs are easily explained away by other plausible causes.

comment by gjm · 2014-01-15T02:44:31.963Z · LW(p) · GW(p)

Do you think the available evidence, overall, is so finely balanced that somewhere between 5% and 95% confidence (say) is appropriate? That would be fairly surprising given how much evidence there is out there that's somewhat relevant to the question of gods. Or do you think that, even in the absence of dramatic epiphanies of one's own, we should all be way more than 95% confident of (something kinda like) theism?

I think I understand your statement about social epistemic rationality but it seems to me that a better response to the situation where you think there are many many bits of evidence for one position but lots of people hold a contrary one is to estimate your probabilities in the usual way but be aware that this is an area in which either you or many others have gone badly wrong, and therefore be especially watchful for errors in your thinking, surprising new evidence, etc.

Replies from: Will_Newsome
comment by Will_Newsome · 2014-02-24T10:38:28.118Z · LW(p) · GW(p)

No, without epiphanies you probably shouldn't be more than 95% confident, I think; with the institutions we currently have for epistemic communication, and with the polarizing nature of the subject, I don't think most people can be very confident either way. So I would say yes, I think between 5% and 95% would be appropriate, and I don't think I share your intuition that that would be fairly surprising, perhaps because I don't understand it. Take cold fusion, say, and ask a typical college student studying psychology how plausible they think it is that it has been developed or will soon be developed, et cetera. I think they should give an answer between 5% and 95% for most variations on that question. I think the supernatural is in that reference class. You have in mind a better reference class?

I agree the response you propose in your second paragraph is good. I don't remember what I was proposing instead but if it was at odds with what you're proposing then it might not be good, especially if what I recommended requires somewhat complex engineering/politics, which IIRC it did.

comment by Apprentice · 2014-01-14T14:24:32.111Z · LW(p) · GW(p)

worlds where outright complex hallucination is a normal feature of human experience

What sort of hallucinations are we talking about? I sometimes have hallucinations (auditory and visual) with sleep paralysis attacks. One close friend has vivid hallucinatory experiences (sometimes involving the Hindu gods) even outside of bed. It is low status to talk about your hallucinations so I imagine lots of people might have hallucinations without me knowing about it.

I sometimes find it difficult to tell hallucinations from normal experiences, even though my reasoning faculty is intact during sleep paralysis and even though I know perfectly well that these things happen to me. Here are two stories to illustrate.

Recently, my son was ill and sleeping fitfully, frequently waking up me and my wife. After one restless episode late in the night he had finally fallen asleep, snuggling up to my wife. I was trying to fall asleep again, when I heard footsteps outside the room. "My daughter (4 years old) must have gotten out of bed", I thought, "she'll be coming over". But this didn't happen. The footsteps continued and there was a light out in the hall. "Odd, my daughter must have turned on the light for some reason." Then through the door came an infant, floating in the air. V orpnzr greevsvrq ohg sbhaq gung V jnf cnenylmrq naq pbhyq abg zbir be fcrnx. V gevrq gb gbhpu zl jvsr naq pel bhg naq svanyyl znantrq gb rzvg n fhoqhrq fuevrx. Gura gur rkcrevrapr raqrq naq V fnj gung gur yvtugf va gur unyy jrer abg ghearq ba naq urneq ab sbbgfgrcf. "Fghcvq fyrrc cnenylfvf", V gubhtug, naq ebyyrq bire ba zl fvqr.

Here's another somewhat older incident: I was lying in bed beside my wife when I heard movement in our daughter's room. I lay still wondering whether to go fetch her - but then it appeared as if the sounds were coming closer. This was surprising since at that time my daughter didn't have the habit of coming over on her own. But something was unmistakeably coming into the room and as it entered I saw that it was a large humanoid figure with my daughter's face. V erpbvyrq va ubeebe naq yrg bhg n fuevrx. Nf zl yrsg unaq frnepurq sbe zl jvsr V sbhaq gung fur jnfa'g npghnyyl ylvat orfvqr zr - fur jnf fgnaqvat va sebag bs zr ubyqvat bhe qnhtugre. Fur'q whfg tbggra bhg bs orq gb srgpu bhe qnhtugre jvgubhg zr abgvpvat.

The two episodes play out very similarly but only one of them involved hallucinations.

I've sort of forgotten where I was going with this, but if Will would like to tell us a bit more about his experiences I would be interested.

comment by gjm · 2014-01-15T02:39:29.543Z · LW(p) · GW(p)

You are arguing, if I understand you aright, (1) that the gods don't want their existence to be widely known but (2) that encounters with the gods, dramatic enough to demand extraordinary explanations if they aren't real, are commonplace.

This seems like a curious combination of claims. Could you say a little about why you don't find their conjunction wildly implausible? (Or, if the real problem is that I've badly misunderstood you, correct my misunderstanding?)

comment by Leonhart · 2014-01-15T16:02:12.100Z · LW(p) · GW(p)

and thus do not interact with humans in a manner that provides communicable evidence

Could a future neuroscience in principle change this, or do you have a stronger notion of incommunicability?

Replies from: Will_Newsome
comment by Will_Newsome · 2014-02-24T10:03:08.553Z · LW(p) · GW(p)

It is possible the beings in question could have predicted such advances and accounted for them. But it seems some sufficiently advanced technology, whether institutional or neurological, could make the evidence "communicable". But perhaps by the time such technologies are available, there will be many more plausible excuses for spooky agents to hide behind. Such as AGIs.

comment by jimrandomh · 2014-03-14T15:07:59.471Z · LW(p) · GW(p)

Incommunicable in the anthropic sense of formally losing its evidence-value when transferred between people, in the broader sense of being encoded in memories that can't be regenerated in a trustworthy way, or in the mundane sense of feeling like evidence but lacking a plausible reduction to Bayes? And - do you think you have incommunicable evidence? (I just noticed that your last few comments dance around that without actually saying it.)

(I am capable of handling information with Special Properties but only privately and only after a multi-step narrowing down.)

Replies from: Will_Newsome
comment by Will_Newsome · 2014-03-21T00:34:21.775Z · LW(p) · GW(p)

There might be anthropic issues, I've been thinking about that more the last week. The specific question I've been asking is 'What does it mean for me and someone else to live in the same world?'. Is it possible for gods to exist in my world but not in others, in some sense, if their experience is truly ambiguous w.r.t. supernatural phenomena? From an almost postmodern heuristic perspective this seems fine, but 'the map is not the territory'. But do we truly share the same territory, or is more of their decision theoretic significance in worlds that to them look exactly like mine, but aren't mine? Are they partial counterfactual zombies in my world? They can affect me, but am I cut off from really affecting them? I like common sense but I can sort of see how common sense could lead to off-kilter conclusions. Provisionally I just approach day-to-day decisions as if I am as real to others as they are to me. Not doing so is a form of "insanity", abstract social uncleanliness.

The memories can be regenerated in a mostly trustworthy way, as far as human memory goes. (But only because I tried to be careful; I think most people who experience supernatural phenomena are not nearly so careful. But I realize that I am postulating that I have some special hard-to-test epistemic skill, which is always a warning sign. Also I have a few experiences where my memory is not very trustworthy due to having just woken up and things like that.)

The experiences I've had can be analyzed Bayesianly, but when analyzing interactions with the supposed agents involved, a Bayesian game model is more appropriate. But I suspect that it's one of many areas where a Bayesian analysis does not provide more insight than human intuitions for frequencies (which I think are really surprisingly good when not in a context of motivated cognition (I can defend this claim later with heuristics and biases citations, but maybe it's not too controversial)). But it could be done by a sufficiently experienced Bayesian modeler. (Which I'm not.)

do you think you have incommunicable evidence?

Incommunicable to some but not others. And I sort of try not to communicate the evidence to people who I think would have the interpretational framework and skills necessary to analyze it fairly, because I'm superstitious... it vaguely feels like there are things I might be expected to keep private. A gut feeling that I'd somehow be betraying something's or someone's confidence. It might be worth noting that I was somewhat superstitious long before I explicitly considered supernaturalism reasonable; of course, I think even most atheists who were raised atheist (I was raised atheist) are also superstitious in similar ways but don't recognize it as such.

Sorry for the poor writing.

Replies from: jimrandomh
comment by jimrandomh · 2014-03-22T20:17:46.885Z · LW(p) · GW(p)

The specific question I've been asking is 'What does it mean for me and someone else to live in the same world?'

As best I can tell, a full reduction of "existence" necessarily bottoms out in a mix of mathematical/logical statements about which structures are embedded in each other, and a semi-arbitrary weighting over computations. That weighting can go in two places: in a definition for the word "exist", or in a utility function. If it goes in the definition, then references to the word in the utility function become similarly arbitrary. So the notion of existence is, by necessity, a structural component of utility functions, and different agents' utility functions don't have to share that component.

The most common notion of existence around here is the Born rule (and less-formal notions that are ultimately equivalent). Everything works out in the standard way, including a shared symmetric notion of existence, if (a) you accept that there is a quantum mechanics-like construct with the Born rule, that has you embedded in it, (b) you decide that you don't care about anything which is not that construct, and (c) decide that when branches of the quantum wavefunction stop interacting with each other, your utility is a linear function of a real-valued function run over each of the parts separately.

Reject any one of these premises, and many things which are commonly taken as fundamental notions break down. (Bayes does not break down, but you need to be very careful about keeping track of what your measure is over, because several different measures that share the common name "probability" stop lining up with each other.)

But it's possible to regenerate some of this from outside the utility function. (This is good, because I partially reject (b) and totally reject (c)). If you hold a memory which is only ever held by agents that live in a particular kind of universe, then your decisions only affect that kind of universe. If you make an observation that would distinguish between two kinds of universes, then successors in each see different answers, and can go on to optimize those universes separately. So if you observe whether or not your memories seem to follow the Born rule, and that you're evolved with respect to an environment that seems to follow the Born rule, then one version of you will go on to optimize the content of universes that follow it, and another version will go on to optimize the content of universes that don't, and this will be more effective than trying to keep them tied together. Similarly for deism; if you make the observation, then you can accept that some other version of you had the observation come out the other way, and get on with optimizing your own side of the divide.

That is, if you never forget anything. If you model yourself with short and long term memory as separate, and think in TDT-like terms, then all similar agents with matching short-term memories act the same way, and it's the retrieval of an observation from long-term memory - rather than the observation itself - that splits an agent between universes. (But the act of performing an observation changes the distribution of results when agents do this long-term-memory lookup. I think this adds up to normality, eventually and in most cases. But the cases in which it doesn't seem interesting.)

comment by knb · 2014-01-14T21:02:16.634Z · LW(p) · GW(p)

As for me, and as to my personal decision policy, I am ninety-something percent confident. The scenarios where I'm wrong are mostly worlds where outright complex hallucination is a normal feature of human experience that humans are for some reason blind to. I'm not talking about normal human memory biases and biases of interpretation, I'm saying some huge fraction of humans would have to have a systemic disorder on the level of anosognosia.

Can you explain why you believe this? To me it doesn't seem like complex hallucination is that common. I know about 1% of the population is schizophrenic and hallucinates regularly, and I'm sure non-schizophrenics hallucinate occasionally, but it certainly seems to be fairly rare.

Can you describe your own experience with these gods?

ETA: To clarify, I'm saying that I don't think hallucination is common, and I also don't believe that gods are real. I don't see why there should be any tension between those beliefs.

Replies from: Will_Newsome
comment by Will_Newsome · 2014-01-15T02:11:17.950Z · LW(p) · GW(p)

I agree complex recurrent hallucination in otherwise seemingly psychologically healthy people is rare, which is why the "gods"/psi hypothesis is more compelling to me. For the hallucination hypothesis to hold it would require some kind of species-wide anosognosia or something like it.

Replies from: knb, gjm
comment by knb · 2014-01-15T02:33:12.902Z · LW(p) · GW(p)

I think you misunderstood me.... My position is: Most people don't claim to have seen gods, and gods aren't real. A small percentage of people do have these experiences, but these people are either frauds, hallucinating, or otherwise mistaken.

I don't see why you think the situation is either [everyone is hallucinating] or [gods are real]. It seems clear to me that [most people aren't hallucinating] and [gods aren't real]. Are you under the impression that most people are having direct experiences of gods or other supernatural apparitions?

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-01-15T03:35:30.392Z · LW(p) · GW(p)

So how do you explain things like this?

Replies from: knb
comment by knb · 2014-01-15T05:58:52.317Z · LW(p) · GW(p)

Same as with Bigfoot/Loch Ness Monster. People (especially children) are highly suggestible, hallucinations and optical illusions occur, hoaxes occur. People lie to fit in. These are things that are already known to be true.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-01-15T06:05:21.061Z · LW(p) · GW(p)

Well, the Miracle of the Sun was witnessed by 30,000 to 100,000 people.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-01-15T12:27:28.787Z · LW(p) · GW(p)

How many people witnessed this?

comment by gjm · 2014-01-15T02:35:09.740Z · LW(p) · GW(p)

It looks to me as if the two of you are talking past each other. I think knb means "it doesn't seem to me like things that would have to be complex hallucination if there were no gods are that common", and is kinda assuming there are in fact no gods; whereas Will means "actual complex hallucinations aren't common" and is kinda assuming that apparent manifestations of gods (or something of the sort) are common.

I second knb's request that Will give some description of his own encounters with god(s), but I expect him to be unwilling to do so with much detail. [EDITED to add: And in fact I see he's explicitly declined to do so elsewhere in the thread.]

I think hallucination is more common than many people think it is (Oliver Sacks recently wrote a book that I think makes this claim, but I haven't read it), and I am not aware of good evidence that apparent manifestations of gods dramatic enough to be called "outright complex hallucination" are common enough to require a huge fraction of people to be anosognosic if gods aren't real -- Will, if you're reading this, would you care to say more?

Replies from: Will_Newsome
comment by Will_Newsome · 2014-02-24T10:43:11.452Z · LW(p) · GW(p)

Upon further reflection it is very difficult for me to guess what percentage of people experience what evidence and of what nature and intensity. I do not feel comfortable generalizing from the experiences of people in my life, for obvious reasons and some less obvious ones. I believe this doesn't ultimately matter so much for me, personally, because what I've seen implies it is common enough and clear enough to require a perhaps-heavy explanation. But for others trying to guess at more general base rates, I think I don't have much insight to offer.

comment by khafra · 2014-01-14T13:37:02.480Z · LW(p) · GW(p)

A while back, you mentioned that people regularly confuse universal priors with coding theory. But minimum message length is considered a restatement of Occam's razor, just like Solomonoff induction; and MML is pretty coding theory-ish. Which parts of coding theory are dangerous to confuse with the universal prior, and what's the danger?

Replies from: Will_Newsome
comment by Will_Newsome · 2014-01-15T02:25:19.610Z · LW(p) · GW(p)

The difference I was getting at is that when constructing a code you're taking experiences you've already had and then assigning them weight, whereas the universal prior, being a prior, assigns weight to strings without any reference to your experiences. So when people say "the universal prior says that Maxwell's equations are simple and Zeus is complex", what they actually mean is that in their experience mathematical descriptions of natural phenomena have proved more fruitful than descriptions that involve agents; the universal prior has nothing to do with this, and invoking it is dangerous as it encourages double-counting of evidence: "this explanation is more probable because it is simpler, and I know it's simpler because it's more probable", when in fact the relationship between simplicity and probability is tautologous, not mutually reinforcing.
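A minimal sketch of the contrast, with made-up strings and a toy weighting (raw string length standing in, crudely, for program length on some reference machine):

```python
import math
from collections import Counter

# A code built from experience: code lengths come from observed frequencies,
# so a "short code" just means "this particular agent has seen it often".
observations = ["maxwell", "maxwell", "maxwell", "maxwell", "zeus"]  # made-up data
freqs = Counter(observations)
total = sum(freqs.values())
code_length = {sym: -math.log2(count / total) for sym, count in freqs.items()}
print(code_length)  # "maxwell" gets the shorter code only because we observed it more

# The universal prior, by contrast, weights hypotheses by description length under a
# fixed reference machine, before and independent of anyone's observations. Here raw
# string length is only a crude stand-in for program length on such a machine:
def toy_prior_weight(hypothesis: str) -> float:
    return 2.0 ** (-len(hypothesis))

print(toy_prior_weight("maxwell"), toy_prior_weight("zeus"))  # no experience involved
```

Note that the toy weighting says nothing about which hypothesis is "really" simpler; that depends entirely on the choice of reference machine.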

This error really bothers me, because aside from its incorrectness it uses technical mathematics in a surface way, as a blunt weapon in a verbose argument that makes people unfamiliar with the math feel like they're failing to grasp something, when in fact there is nothing there that they need to understand.

(I've swept the problem of "which prefix do I use?" under the rug because there are no AIT tools to deal with that and so if you want to talk about the problem of prefixes, you should do so separately from invoking AIT for some everyday hermeneutic problem. Generally if you're invoking AIT for some object-level hermeneutic problem you're Doing It Wrong, as has been explained most clearly by cousin_it.)

Replies from: Viliam_Bur, jimrandomh, khafra
comment by Viliam_Bur · 2014-01-15T12:31:14.362Z · LW(p) · GW(p)

So when people say "the universal prior says that Maxwell's equations are simple and Zeus is complex", what they actually mean is that in their experience mathematical descriptions of natural phenomena have proved more fruitful than descriptions that involve agents

I thought it meant that if you taboo "Zeus", the string length increases more dramatically than when you taboo "Maxwell's equations".

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-01-16T01:52:09.994Z · LW(p) · GW(p)

Except that's not the case. I can make any statement arbitrarily long by continuously forcing you to taboo the words you use.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-01-16T08:55:51.077Z · LW(p) · GW(p)

Sure, but still somehow "my grandma" is more complex than "two plus two", even if the former string has only 10 characters and the latter has 12. So now the question is whether "Zeus" is more like "my grandma" or more like "two plus two".

comment by jimrandomh · 2014-03-14T15:18:15.402Z · LW(p) · GW(p)

So when people say "the universal prior says that Maxwell's equations are simple and Zeus is complex", what they actually mean is that in their experience mathematical descriptions of natural phenomena have proved more fruitful than descriptions that involve agents; the universal prior has nothing to do with this, and invoking it is dangerous as it encourages double-counting of evidence

Attempting to work the dependence of my epistemology on my experience into my epistemology itself creates a cycle in the definitions of types, and wrecks the whole thing. I suspect that reformalizing as a fixpoint thing would fix the problem, but I suspect even more strongly that the point I'm already at would be a unique fixpoint and that I'd be wrecking its elegance for the sake of generalizing to hypothetical agents that I'm not and may never encounter. (Or that all such fixpoints can be encoded as prefixes, which I too feel like sweeping under the rug.)

comment by khafra · 2014-01-15T13:38:56.655Z · LW(p) · GW(p)

...So, where in this schema does Minimum Message Length fit? Under AIT, or coding theory? Seems like it'd be coding theory, since it relies on your current coding to describe the encoding for the data you're compressing. But everyone seems to refer to MML as the computable version of Kolmogorov Complexity; and it really does seem fairly equivalent.

It seems to me that KC/SI/AIT explicitly presents the choice of UTM as an unsolved problem, while coding theory and MML implicitly assume that you use your current coding; and that that is the part that gets people into trouble when comparing Zeus and Maxwell. Is that it?

Replies from: Will_Newsome
comment by Will_Newsome · 2014-02-24T10:53:14.079Z · LW(p) · GW(p)

It seems to me that KC/SI/AIT explicitly presents the choice of UTM as an unsolved problem, while coding theory and MML implicitly assume that you use your current coding; and that that is the part that gets people into trouble when comparing Zeus and Maxwell. Is that it?

I think more or less yes, if I understand it. And more seriously, AIT is in some ways meant not to be practical; the interesting results require setting things up so that technically the work is pushed to the "within a constant" part, which is divorced from praxis. Practical MML intuitions don't carry over into such extreme domains. That said, the same core intuitions inspire them; there are just other intuitions that emerge depending on what context you're working in or mathematizing. But this is still conjecture, 'cuz I personally haven't actually used MML on any project, even if I've read some results.

comment by Eugine_Nier · 2014-01-14T01:36:05.837Z · LW(p) · GW(p)

Where are you posting these days?

Replies from: Will_Newsome
comment by Will_Newsome · 2014-01-14T01:49:50.582Z · LW(p) · GW(p)

I mostly don't, but when I do, Twitter. @willdoingthings mostly; it's an uninhibited drunken tweeting account. I also participate on IRC in private channels. But in general I've become a lot more secretive and jaded so I post a lot less.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-01-14T07:27:59.688Z · LW(p) · GW(p)

Any particular reason? I'd certainly be interested in some of the things you have to say. Incidentally, I've also had some experiences myself that could reasonably be interpreted as supernatural and wouldn't mind comparing notes (although mine are more along the lines of having latent psychic powers and not direct encounters with other entities).

comment by ChristianKl · 2014-01-13T16:53:08.171Z · LW(p) · GW(p)

I think there are god-like things that interact with humans and I hope that's a good thing but I really don't know.

What do you mean with the term god?

Replies from: Will_Newsome
comment by Will_Newsome · 2014-01-14T02:05:08.882Z · LW(p) · GW(p)

This is hard to answer. I mean something vague. A god is a seemingly transhumanly intelligent agent. (By this I don't mean something cheap like "the economy" or "evolution", I mean the obvious thing.) As to their origins I have little idea; aliens, simulators, programs simpler than our physical universe according to a universal prior, hypercompetent human conspiracies with seemingly inhuman motivations, whatever, I'm agnostic. For what it's worth (some of) the entity or entities I've interacted with seem to want to be seen as related to or identical with one or more of the gods of popular religions, but I'm not sure. In general it's all quite ambiguous and people are extremely hasty and heavy with their interpretations. Further complicating the issue is that it seems like the gods are willing to go along with and support humans' heavy-handed interpretations and so the interpretations become self-confirming. I say "gods", but for all I know it's just one entity with very diverse effects, like an author of a book.

Replies from: bogus
comment by bogus · 2014-01-14T08:05:35.877Z · LW(p) · GW(p)

Note that many folklore traditions posit paranormal entities that are basically capricious and mischievous (though not unfriendly or malevolent in any real sense) and may try to deceive people who interact with them, for their own enjoyment. Some parapsychologists argue that _if_ psi-related phenomena exist, then this is pretty much the best model we have for them.

In your view, how likely is it that you may also be interacting with entities of this kind?

Replies from: Will_Newsome
comment by Will_Newsome · 2014-02-24T10:57:59.185Z · LW(p) · GW(p)

It seems likely that something like that is going on, but I wouldn't think of capriciousness and mischievousness as character traits, just descriptions of the observed phenomena that are agnostic regarding the nature of any agency behind them. Those caveats are too vague for me to give an answer more precise than "likely".

comment by [deleted] · 2014-01-15T06:31:59.871Z · LW(p) · GW(p)

I'm curious about your experience with memantine; I vaguely remember you tweeting about it. What was it helping you with?

If you disagree in spirit with much of the sequences, what would you recommend for new rationalists to start with instead?

Replies from: Will_Newsome
comment by Will_Newsome · 2014-02-24T10:12:05.917Z · LW(p) · GW(p)

Re memantine, it helped with overactive inhibition some, but not all that much, and it made my short term memory worse and spaced me out. Not at all like the alcohol-in-a-pill I was going for, but of course benzos are better for that anyway.

New rationalists... reminds me of New Atheism these days, for a rationalist to be new. They've missed out on x-rationalism's golden days, and the current currents are more hoi polloi and less interesting for, how should I put it, those who are "intelligent" in the 19th-century French sense. I don't really identify as a rationalist, but maybe I can be identified as one. I think perhaps it would mean reading a lot in general, e.g. in history and philosophy, and reading some core LW texts like GEB, while holding back on forming any opinions, and instead just keeping a careful account of who says what and why you or others think they said what they said. I haven't been to university but I would guess they encourage a similar attitude, at least in philosophy undergrad? I hope. Anyway I think just reading a bunch of stuff is undervalued; the most impressive rationalists according to the LW community are generally those who have read a bunch of stuff, they just have a lot of information at hand to draw from. Old books too: Wealth of Nations, Origin of Species; the origins of the modern worldview. Intelligence matters a lot, but reading a lot is equally essential.

Studying Eliezer's Technical Explanation of Technical Explanation in depth is good for Yudkowskology which is important hermeneutical knowledge if you plan on reading through all the Sequences without being overwhelmed (whether attractively or repulsively) by their particular Yudkowskyan perspective. I do think Eliezer's worth reading, by the way, it's just not the core of rationality, it's not a reliable source of epistemic norms, and it has some questionable narratives driving it that some people miss and thereby accept semi-unquestioningly. The subtext shapes the text more than is easily seen. (Of course, this also applies to those who dismiss it by assuming less credible subtext than is actually there.)

comment by Jonathan_Graehl · 2014-01-27T22:50:26.326Z · LW(p) · GW(p)

I think there are god-like things that interact with humans

Crazy people and trolls exist. Some of them are eloquent.

So why do you talk about it at all when it just makes you seem crazy to most of us?

Are you looking for confirmation or agreement in others' hallucinations? Or perhaps you suspect your kind of experiences are more common than openly expressed?

I assume I'd take seriously your crazy experiences if they were mine. Is there anything at all you can say that's of value to someone like me who just hears crazy?

Replies from: Will_Newsome
comment by Will_Newsome · 2014-03-21T01:27:42.105Z · LW(p) · GW(p)

So why do you talk about it at all when it just makes you seem crazy to most of us?

When it comes to epistemic praxis I am not a friend of the mob. I want to minimize my credibility with most of LessWrong and semi-maximize my credibility with the people I consider elite. I'm very satisfied with how successful my strategy has been.

Or perhaps you suspect your kind of experiences are more common than openly expressed?

Indeed.

I assume I'd take seriously your crazy experiences if they were mine. Is there anything at all you can say that's of value to someone like me who just hears crazy?

I am somewhat proud of the care I've taken in interpreting my experiences. I think that even if people don't think there's anything substantial in my experiences, they might still appreciate and perhaps learn from my prudence. Interpreting the supernatural is extremely difficult and basically everyone quickly goes off the rails. Insofar as there is a rational way to really engage with the contents of the subject I think my approach is, if not rational, at least rational enough to avoid many of the failure modes. But perhaps I am overly proud.

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2014-04-01T23:52:27.634Z · LW(p) · GW(p)

Thanks for answering that as if it were a sincere question (it was).

"Maybe this universe has invisible/anthropic/supernatural properties" is a fascinating line of daydreaming that seems a bit time-wasting to me, because I'm not at all confident I'd do anything healthy/useful if I started attempting to experiment. Looking at all the people who are stuck in one conventional religion or another, who (otherwise?) seem every bit as intelligent and emotionally stable as I am, I think, to the extent that you're predisposed to having any mystical experiences, that way is dangerous.

comment by Will_Newsome · 2014-01-12T02:18:22.399Z · LW(p) · GW(p)

Discussion of this post goes here.

Replies from: ephion
comment by ephion · 2014-01-12T15:54:39.453Z · LW(p) · GW(p)

I think this is a really cool post idea. LW has a well-above-average user base, and sharing knowledge and ideas publicly can be a great boon to the community as a whole.

Replies from: David_Gerard
comment by David_Gerard · 2014-01-12T23:21:32.577Z · LW(p) · GW(p)

Yes, this is a really nice open thread that seems to be working well.

comment by Alicorn · 2014-01-13T00:19:36.865Z · LW(p) · GW(p)

I have written various things, collected here, including what I think is the second most popular (or at least usually second-mentioned) rationalist fanfiction. I serve dinner to the Illuminati. AMA.

Replies from: Tripitaka, Anatoly_Vorobey, VAuroch, shminux, FiftyTwo
comment by Tripitaka · 2014-02-14T23:50:10.446Z · LW(p) · GW(p)

Some time ago you made the public offer to talk to depressed or otherwise seriously lonely people, even though you apparently really dislike phonecalls. Did anybody take you up on it? How did it go?

Replies from: Alicorn
comment by Alicorn · 2014-02-15T01:04:18.011Z · LW(p) · GW(p)

I don't think anyone sought me out on the basis of that offer, or if they did, they chose not to tell me or I forgot the details of how we met. Unrelatedly, I have friends with various loneliness and mental health statuses who I talk to (mostly online).

comment by Anatoly_Vorobey · 2014-01-13T14:55:48.050Z · LW(p) · GW(p)

Do you have a routine as a writer?

Do you get writer's block, and if yes, any favorite methods of breaking it?

How much do you rewrite your drafts?

Replies from: Alicorn
comment by Alicorn · 2014-01-13T17:26:01.636Z · LW(p) · GW(p)

I don't have a routine.

I could be described as having writer's block right now; I was devoting pretty much all my creative output to Effulgence, which ground to a screeching halt due to coauthor brain problems, and now I am metaphorically upside-down like a particularly unfortunate turtle. I have been trying various things but nothing has produced good results yet (I have written, like, one short story, but no chapters). However, I have every expectation of being able to return to Effulgence full speed ahead when my coauthor can even if I don't manage to budge my novels between now and then.

I do almost no revising after I've gotten an entire chapter down (though I will sometimes iterate a sentence a bit while it's in progress, and I will rearrange paragraphs if my beta readers suggest it while I'm writing for my test audience). I don't like revision after that; it slows me down and makes me second-guess myself and hate my output faster than I normally start to, and leaves me with questionable mental maps of what has and has not happened. I will correct typos and grammatical errors and the like when I am made aware of them. Elcenia as it currently stands is a complete reboot which I generate without directly consulting the original - I extracted a loose plot outline, massaged it into making somewhat better sense, and haven't opened the old documents since except to remind myself of how to spell things and various assignments of numerical value; I write from the plot outline and memory. With Effulgence I can't even fix typos because of the limitations of the Dreamwidth platform, so that's closer to literally no revision.

Replies from: Anatoly_Vorobey
comment by Anatoly_Vorobey · 2014-01-13T22:06:43.521Z · LW(p) · GW(p)

That's pretty interesting, thanks. More questions!

Suppose for the sake of argument that copyright problems do not exist, and you're offered the chance to publish Luminosity as a book. Would you then want to work with editors/copyeditors and change the text substantially according to their suggestions, or are you more like "this is done, feel free to fix typos but otherwise take it or leave it"?

Do you have a day job? A profession? What are they? Do you like them? (obviously feel free to ignore etc.)

I am Omega, and I intend to change humanity in such a way that some authors never really existed: their books are gone from collective memory and never influenced anyone. Because I liked Luminosity, I allow you to name up to 5 authors whom I won't even consider expunging. Who do you name? (don't waste a slot on yourself, you're safe)

Replies from: Alicorn
comment by Alicorn · 2014-01-14T02:14:58.051Z · LW(p) · GW(p)

I didn't do more than get a copyeditor to look over the text of the Elcenia books before self-publishing them. I would probably go the extra mile if we're talking published published, but my tolerance for Executive Meddling is negligible, so it'd have to be more like pointing things out that I might want to fix so I can fix them than changing things without my participation. And it would have to be more about wording, pruning or adding exposition, etc. than about macroscopic plot or character issues, because I don't know how to touch those in a complete work without doing a whole lot more work than I'm willing to or having things fall apart like wet tissue paper.

My most recent conventional employment was being the administrative manager at MetaMed, but I quit a few months ago, and now I am basically a house spouse, the "spouse" part pending till September. I'd take conventional employment if it dressed up pretty and knocked on my door with a bouquet of flowers (I have informed e.g. Louie that I exist, am unemployed, and like money) but it's not urgent. Irregularly, people will pay me to do things like write commissions (I am pretty bad about delivering in a timely manner though, I have one like half finished...) or make menus. Sometimes I get donations through my websites or somebody buys an Elcenia book.

I think I'd need to know more about how this hypothetical works. Are my personal friends and family safe too even though you've likely never heard of their writing, or do I need to expend slots on all my favorite people who happen to have written fiction (or whatever the "author" threshold is)? Is Stephenie Meyer safe (because you liked Luminosity) or is she in the line of fire and something weird happens to Luminosity if she gets got? Are huge linchpins of influence like Tolkien safe just because they'd have knock-on effects beyond their own works, or are those knock-on effects part of the point?

Replies from: Anatoly_Vorobey
comment by Anatoly_Vorobey · 2014-01-14T17:41:06.497Z · LW(p) · GW(p)

The idea is that Omega makes the world stay roughly as it is, but the individual beauty and other virtues of the books are lost. The books are replaced by something generic and drab that is still able to generate roughly the same large-scale effect due to Omega's tweaks. And everyone you know personally is exempt. So for example Tolkien may be expunged, and instead someone else wrote some epic fantasy that helped launch a genre and it had something like orcs in it, but it wasn't nearly as powerful and beautiful and everything as Tolkien was. Same for Stephenie Meyer: whatever you liked about Twilight is gone, replaced with some generic vampire love story that inexplicably became incredibly popular, and you're able to base Luminosity on it, and maybe add more of your personal imagination to offset the drabness, so large-scale effects added up to the same in your world.

Basically I'm trying, instead of asking the familiar "your top 5" or "the 5 books you'll take to an uninhabited island", to ask "which 5 books you find it most painful to contemplate being lost to the world as if they never existed, but everything else mostly stayed the same". It's an inherently self-contradictory question, I know, but maybe still worth asking.

Replies from: Alicorn
comment by Alicorn · 2014-01-14T18:52:06.541Z · LW(p) · GW(p)

Hmm. Taking this question at face value where I am only prioritizing by the individual flavor and character of the books and not their cultural significance, I'm going to say let's keep J.K. Rowling... Tamora Pierce... Sharon Shinn, Laini Taylor, John Scalzi. I was also tempted by Philip Pullman (but I think about 75% of what I'd miss is people putting daemons in arbitrary fanfiction, which it sounds like would get suitably replaced?) and Zenna Henderson (but I think losing her stories would probably be a smaller loss to me than the ones I picked).

I did this by looking at my bookshelf which has actual books on it, so if I was supposed to interpret it to include screenwriters or anything the answer is invalid.

comment by VAuroch · 2014-01-13T08:41:51.411Z · LW(p) · GW(p)

My impression of Luminosity, after reading it and before reading Radiance, was that it was essentially depicting the usefulness of luminosity more or less entirely by showing vampire-Bella completely losing her luminosity techniques/attitudes. To what degree did you intend this? Do you see it as accurate?

Also what do you think of Syzygy, seven years down the line? (Me(highschool) quite liked it. Me(2014) was very surprised to discover that it was written by someone I encountered again elsewhere.)

Replies from: Alicorn
comment by Alicorn · 2014-01-13T17:40:17.902Z · LW(p) · GW(p)

I did not intend that interpretation, and have been repeatedly surprised to find people espousing it. There is a reduction in Luminosity's didacticism over the course of the book as I got caught up in the plot, and it's possible it happens to undergo a particularly noticeable drop around when Bella turns, which people are reading this way. However, I didn't intend to show Bella's various errors as being consequences of any abandonment of her interior luminosity, however much less narration I spent on it. She has plenty of other personality flaws and resource shortages to drive her mistakes.

Oh man, Syzygy. That started closer to a decade ago, though I guess it did end around seven years ago. I don't hate it enough to break my rule that what goes up, stays up, so when I recovered the files from the unexpected cataclysm that caused the comic's end, up they went. But it's embarrassing, very noticeably amateur, both in the art and the writing. I'm still pleased with a couple of particularly nasty turns of plot, like Kulary's backstory, but they weren't presented to their best effect.

comment by shminux · 2014-01-13T04:42:08.954Z · LW(p) · GW(p)

What's the status of Effulgence? I gave up on it soon after it branched out wildly around the Milliways part, and when I checked to see what's going on, there appeared to be no updates in 6 months or so.

Anything else you've written recently that you may recommend?

Replies from: Alicorn
comment by Alicorn · 2014-01-13T07:18:08.804Z · LW(p) · GW(p)

My coauthor for Effulgence is suffering from an inability to can. It is slowly recovering (today we were able to do a not-in-Effulgence-continuity sandbox thread for a little more than thirty comments, and she's been writing an unrelated short story!) and we are continuing to make plans for what we will write when the ability to can is restored. The last new post was made in November 2013, though, so I'm not sure where you're getting "6 months or so".

I periodically update curious parties about Effulgence behind-the-scenes goings-on in this TV Tropes forum thread which was originally about Elcenia but is now about my stuff in more generality.

I have released two short stories relatively recently, though the latter (AU-fanfiction-of-sorts of Three Worlds Collide) was written back in 2012 and I just sat on it for a while. I have also been writing a series of social justice blog posts for alternate universes which have inspired some entertaining audience participation. I recommend subscribing to my general RSS feed if you are curious about my creative output.

I have less than zero idea how far you got into Effulgence when you describe yourself as dropping it "after it branched out wildly around the Milliways part". But if wild branching and Milliways were turnoffs for you I don't think you're gonna like anything after that mysterious part.

Replies from: Douglas_Knight, shminux
comment by Douglas_Knight · 2014-01-13T18:06:26.552Z · LW(p) · GW(p)

an inability to can

What does it mean "to can"?
Two uses spring to my mind: to discard material (as in "trashcan"); to declare work done (somehow from "film canister").

Replies from: Alicorn
comment by Alicorn · 2014-01-13T21:09:46.639Z · LW(p) · GW(p)

It's an internet-dialect neologism. Related to "I can't" without any subsequent verb, evolved into "I have lost the ability to can" etc.

comment by shminux · 2014-01-13T07:50:46.964Z · LW(p) · GW(p)

Thanks!

If I recall, I really liked the story as a standalone one, up until the Luminosity Bella showed up. Of course, given the name and the nature of the RP, I should have expected it.

Replies from: Alicorn
comment by Alicorn · 2014-01-13T08:11:18.185Z · LW(p) · GW(p)

Yeah, there are, um, lots of them. You can read some of their stories before they hit the "peal" as self-contained AUs, if you want - just go to the first instance of a new "symbella" in the index (except for the lower-case omega, that's a special case), and read only posts that have no other symbellas. (Some posts have no symbella and these are usually part of the same story as whatever's closest to them, it just means the relevant Bell isn't present in that particular thread.) These will sometimes cut off kind of awkwardly, of course...

Replies from: shminux
comment by shminux · 2014-01-13T08:20:26.607Z · LW(p) · GW(p)

Ah, thanks, I'll give it a try. I was confused about where the stories start.

comment by FiftyTwo · 2014-03-20T02:31:07.510Z · LW(p) · GW(p)

If you don't mind me asking, what do you do other than writing? Do you have any plans to make it a career or is it strictly recreational?

Replies from: Alicorn
comment by Alicorn · 2014-03-20T05:12:33.409Z · LW(p) · GW(p)

I don't currently have a day job, though I have in the past. I suppose you could call me a housefiancée, spousehood pending.

I'm not interested in traditional publishing, but I certainly wouldn't object if my mini-fandom exploded and started showering me with money.

comment by CAE_Jones · 2014-01-12T09:32:33.344Z · LW(p) · GW(p)

I'm an unemployed legally blind mostly white American who may have at one point been good at math and programming, who is just smart enough to get loads of spam from MIT, but not smart enough to avoid putting my foot in my mouth on LessWrong about once a month on average. I've been talking about blindness-related issues a lot over the past year mostly because I suddenly realized that they were relevant, but my aim is to solve these problems as quickly as possible so I can get back to getting better at things that actually matter. On the off chance that you have questions, feel free to AMA.

Replies from: Anatoly_Vorobey, JoshuaFox
comment by Anatoly_Vorobey · 2014-01-12T10:05:40.593Z · LW(p) · GW(p)

How blind are you, in layman terms of what you can/can't see? What's your prognosis?

Replies from: CAE_Jones
comment by CAE_Jones · 2014-01-12T13:38:22.825Z · LW(p) · GW(p)

I'm not-quite completely blind; what little vision I have tends to fluctuate between effectively nonexistent and good enough to notice vague details maybe once or twice a year. I could see better up until I was 14, but my vision was still too poor to get out of using braille and a cane (given thick glasses and enough time, I could possibly have read size 20 font; even with the much larger font used in movie subtitles, I had to pause the video and put my face against the screen to read them).

I don't know my official acuity/diagnoses (It's been a few years since I saw an eye doctor), but I appear to have started out with retinal detachment and scarring, and later developed uveitis. The latter seems to be the primary cause for the dramatic decline starting from age 14.

Replies from: JoshuaFox, Anatoly_Vorobey
comment by JoshuaFox · 2014-01-12T19:12:55.193Z · LW(p) · GW(p)

It's been a few years since I saw an eye doctor

Why is that? No healthcare policy? It seems that you have good reason to frequent an eye-doctor.

Replies from: CAE_Jones
comment by CAE_Jones · 2014-01-12T20:39:22.983Z · LW(p) · GW(p)

Most of my medical everything is handled by my parents, who are unlikely to do anything unless it is brought to their attention (though sometimes they do ask to make sure nothing's quietly going horribly wrong). My vision was awful enough when I last went, the doctor was only aware of a full-on bionic eye as a possible method for improvement, and what little vision I had left was vulnerable enough to damage/severe discomfort from the sorts of things needed to examine my eyes (holding them open and shining a light in, basically), that it's mostly stopped being worth it.

I did discover a possible treatment for my specific condition recently. I am unsure as to if it would be of much value with my vision as it currently is, but it's something I aim to look into further when I've sorted out enough of this basic life stuff.

comment by Anatoly_Vorobey · 2014-01-12T18:32:57.918Z · LW(p) · GW(p)

Are these problems likely to be correctable/improvable with medicine, but you have no money/insurance to get medical help? Or are they of a kind that basically can't be helped, and that's why you haven't been to a doctor in years? Or is it something else?

Do you use a reader program to browse the web and this site? Do you touch-type or dictate your comments?

(I realize that my questions are callous; please feel free to ignore if they're too invasive)

Replies from: CAE_Jones
comment by CAE_Jones · 2014-01-12T20:58:21.911Z · LW(p) · GW(p)

The retinal issues are unlikely to be fixable in the immediate future (though the latest developments on that front seem potentially promising). There may be a treatment for the more annoying issue, but I don't know if it's too late/what I should do to learn more, and so I'm waiting until life in general is more favorable to dig into it further. (Which I expect means I'll be putting it off until 2015, since I expect to be fairly occupied during most of 2014.)

For using the internet/computers in general, I use NonVisual Desktop Access, a free screen reader which only recently attained comparable status to Jaws for Windows, which I'd been using prior to 2011. These work well with plaintext, and have trouble with certain types of controls/labels and images and such (I had to Skype someone a screenshot to get past the CAPTCHA to register here; I was using a trial of a CAPTCHA-solving add-on at the time, but it was unable to locate the CAPTCHA on LessWrong). Since NVDA is open source, users frequently develop useful add-ons and plugins, such as a CPU usage monitor and the ability to summon a Google translation of copied text with a single keystroke. (It supposedly includes an optical character recognition feature, but I've never figured out how to use it.)

I touch-type. I'm not much of a fan of dictation, though I'm not sure why.

comment by JoshuaFox · 2014-01-12T19:12:43.622Z · LW(p) · GW(p)
  1. Why do you say "may have at one point been good at math and programming"? Aren't you still good at that? Are opportunities for people like yourself -- blind, but with those aptitudes -- available in today's world, where so much is done in front of a computer screen and adaptive technologies exist? Or do you think that in a competitive world, blindness puts you hopelessly behind sighted people?

  2. Do you think that your level of ambition and drive are lessened by your disability, increased, or does it make no difference?

  3. Does the CfAR-style philosophy of instrumental rationalism help you overcome your disability?

Replies from: CAE_Jones
comment by CAE_Jones · 2014-01-12T20:22:59.454Z · LW(p) · GW(p)
  1. Issues in my first two years of college interfered with my Math/Physics/Computer Science courses, and I never got back into those. So my skills in each have remained only what I've used most (for example, I've made some games, but the required qualifications for most programming jobs I've come across exceed what I can do without additional training). I think that, even had I not dropped the ball on those, competing with sighted programmers/scientists/mathematicians would require a decent amount of exceptionalism and/or luck. Mathematical notation is also tricky in terms of accessibility; there exist codes such as Nemeth that make math in braille relatively powerful, but on the computer side of things, graphs and LaTeX take some doing to use, which also makes trying to study anything with math online difficult (I once downloaded a web page and edited its source so I could read the equations).
  2. It's hard to say. For roughly four years after my vision went from poor to useless, I think I was still fairly driven and ambitious (I did a lot of writing, half taught myself Japanese and JavaScript, self-published a terrible science fiction novel, learned to use a music composition program whose accessibility was poor, improvised some crude techniques for making simple images, got into and graduated from the state Math and Science school, and was taking plenty of notes on numerous other things I was hoping to do sooner than later). It all went to hell when I got to college, and has gone back and forth since, but I'm not sure if any of this compares favorably/unfavorably to the average person. There may be some contributing factors to the negative aspects that go back to my vision (I can't safely get up and go running, or do the all-important eye-contact thing, as examples), but I don't think the effect in the ambition/motivation area has been majorly significant.
  3. I'm not sure what you mean, specifically? My exposure to CFAR consists primarily of LessWrong; I've been attempting to apply LW-style rationality to the situation, but the timing has made this difficult (I found LW a few months after returning home from college, at which point my options in general were reduced to "things I can do over the internet" and "things for which I would need to go through my parents", and have mostly stayed there until half a week ago. I've not been able to avoid antisocial death spirals; I find myself wanting to deal with the family members I live with less, which makes dealing with them more annoying, repeat ad hermitdom.) If the results of last week's meeting go as planned, I should have a better answer by the end of February.
Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-01-12T21:17:56.240Z · LW(p) · GW(p)

competing with sighted programmers/scientists/mathematicians would require a decent amount of exceptionalism and/or luck.

Standard economics question: have you considered accepting lower pay?

Replies from: CAE_Jones
comment by CAE_Jones · 2014-01-13T01:51:40.700Z · LW(p) · GW(p)

Standard economics question: have you considered accepting lower pay?

Yes.

comment by IlyaShpitser · 2014-01-12T07:11:25.869Z · LW(p) · GW(p)

I write about causality sometimes.

Replies from: somervta, Anatoly_Vorobey, AlexSchell, None, FiftyTwo, Eugine_Nier
comment by somervta · 2014-01-12T07:59:34.836Z · LW(p) · GW(p)

How significant/relevant is the mathematical work on causality to philosophical work/discussion? If someone was talking about causality in a philosophical setting and had never heard of the relevant math, how badly would/should that reflect on them? Does it make a difference if they've heard of it, but didn't bother to learn the math?

Replies from: IlyaShpitser
comment by IlyaShpitser · 2014-01-12T20:42:04.522Z · LW(p) · GW(p)

I am not up on my philosophical literature (trying to change this), but I think most analytic philosophers have heard of Pearl et al. by now. Not every analytic philosopher is as mathematically sophisticated as e.g. people at the CMU department. But I think that's ok!

I don't think it's a wise social move for LW to beat on philosophers.

comment by Anatoly_Vorobey · 2014-01-12T10:21:25.517Z · LW(p) · GW(p)

Which academic disciplines care about causality? (I'm guessing statistics, CS, philosophy... anything else?)

Is there anything like a mainstream agreement on how to model/establish causality? E.g. does more or less everyone agree that Pearl's book, which I haven't read, is the right approach? If not, is it possible to list the main competing approaches? Does there exist a reasonably neutral high-level summary of the field?

Replies from: IlyaShpitser
comment by IlyaShpitser · 2014-01-12T20:29:02.741Z · LW(p) · GW(p)

Which academic disciplines care about causality? (I'm guessing statistics, CS, philosophy... anything else?)

On some level any empirical science cares, because the empirical sciences all care about cause-effect relationships. In practice, the 'penetration rate' is path-dependent (that is, depends on the history of the field, personalities involved, etc.)

To add to your list, there are people in public health (epidemiology, biostatistics), social science, psychology, political science, economics/econometrics, computational bio/omics that care quite a bit. Very few philosophers (excepting the CMU gang, and a few other places) think about causal inference at the level of detail a statistician would. CS/ML do not care very much (even though Pearl is CS).

Is there anything like a mainstream agreement on how to model/establish causality? E.g. does more or less everyone agree that Pearl's book, which I haven't read, is the right approach? If not, is it possible to list the main competing approaches?

I think there is as much agreement as there can reasonably be for a concept such as causality (that is, a philosophically laden concept that's fun to argue about). People model it in lots of ways; I will try to give a rough taxonomy, and will tell you where Pearl lies.


Interventionist vs non-interventionist

Most modern causal inference folks are interventionists (including Pearl, Rubin, Robins, etc.). The 'Nicene creed' for interventionists is: (a) an intervention (forced assignment) is key for representing cause/effect, (b) interventions and conditioning are not the same thing, (c) you express interventions in terms of ordinary probabilities using the g-formula/truncated factorization/manipulated distribution (different names for the same thing). The concept of an intervention is old (goes back to Neyman (1920s), I think, possibly even earlier).
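A minimal sketch of point (c), the g-formula, on a made-up joint distribution over a binary confounder Z, binary treatment A, and binary outcome Y (all numbers invented for illustration):

```python
# P(Y=1 | do(A=a)) = sum_z P(Y=1 | A=a, Z=z) * P(Z=z), which generally differs
# from the ordinary conditional P(Y=1 | A=a) when Z confounds A and Y.

# made-up joint distribution P(Z, A, Y): (z, a, y) -> probability
joint = {
    (0, 0, 0): 0.20, (0, 0, 1): 0.20,
    (0, 1, 0): 0.05, (0, 1, 1): 0.05,
    (1, 0, 0): 0.05, (1, 0, 1): 0.05,
    (1, 1, 0): 0.10, (1, 1, 1): 0.30,
}

def p(pred):
    return sum(pr for k, pr in joint.items() if pred(*k))

def cond(pred_num, pred_den):
    return p(lambda z, a, y: pred_num(z, a, y) and pred_den(z, a, y)) / p(pred_den)

def p_do(a_val):
    """P(Y=1 | do(A=a_val)) via the g-formula, adjusting for Z."""
    total = 0.0
    for z_val in (0, 1):
        p_z = p(lambda z, a, y: z == z_val)
        p_y_given_az = cond(lambda z, a, y: y == 1,
                            lambda z, a, y: a == a_val and z == z_val)
        total += p_y_given_az * p_z
    return total

# Ordinary conditioning vs. intervention -- these generally disagree:
print(cond(lambda z, a, y: y == 1, lambda z, a, y: a == 1))  # P(Y=1 | A=1) = 0.70
print(p_do(1))                                               # P(Y=1 | do(A=1)) = 0.625
```

With these numbers the ordinary conditional is 0.70 while the interventional quantity is 0.625; that gap is exactly what point (b) is about.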

To me, non-interventionists fall into three categories: 'naive,' 'abstract', and 'indifferent.' Naive non-interventionists are not using interventions because they haven't thought about things hard enough, and will thus get things wrong. Some EDT folks are in this category. People who ask 'but why can't we just use conditional probabilities' are often in this set. Abstract non-interventionists are not using interventions because they have in mind some formalism that has interventions as a special case, and they have no particular need for the special case. I think David Lewis was in this camp. Joe Halpern might be in this set, I will ask him sometime. Indifferent non-interventionists operate in a field where there is little difference between conditioning and interventions (due to lack of interesting confounding), so there is no need to model interventions explicitly. Reinforcement learning people, and people who only work with RCT data are in this set.


Counterfactualists vs non-counterfactualists

Most modern causal inference folks are counterfactualist (including Pearl, Rubin, Robins, etc.). To a counterfactualist it is important to think about a hypothetical outcome under a hypothetical intervention. Obviously all counterfactualists are interventionist. A noted non-counterfactualist interventionist is Phil Dawid. Counterfactuals are also due to Neyman, but were revived and extended by Rubin in the 70s.


Graphical vs non-graphical

Whether you like using graphs or not. Modern causal inference is split on this point. Folks in the Rubin camp do not like graphs (for reasons that are not entirely clear -- what I heard is they find them distracting from important statistical modeling issues (??)). Folks in the Pearl/SGS/Robins/Dawid/etc. camp like graphs. You don't have to have a particular commitment to any earlier point to have an opinion on graphs (indeed lots of graphical models are not about causality at all). In the context of causality, graphs were first used by Sewall Wright for pedigree analysis (1920s). Lauritzen, Pearl, etc. gave a modern synthesis of graphical models. Spirtes/Glymour/Scheines and Pearl revived a causal interpretation of graphs in the 90s.


"Popperians" vs "non-Popperians"

Whether you restrict yourself to testable assumptions. Pearl is non-Popperian, his models make assumptions that can only be tested via a time machine or an Everett branch jumping algorithm. Rubin is also non-Popperian because of "principal stratification." People that do "mediation analysis" are generally non-Popperian. Dawid, Robins, and Richardson are Popperians -- they try to stick to testable assumptions only. I think even for Popperians, some of their assumptions must be untestable (but I think this is probably necessary for statistical inference in general). I think Dawid might claim all counterfactualists are non-Popperian in some sense.


I am "a graphical non-Popperian counterfactualist" (and thus interventionist).

Does there exist a reasonably neutral high-level summary of the field?

We are working on it.

comment by AlexSchell · 2014-01-13T16:39:47.287Z · LW(p) · GW(p)

Can you point out some cool/insightful applications of broadly Pearlian causality ideas to applied problems in, say, epidemiology or econometrics?

Replies from: IlyaShpitser
comment by IlyaShpitser · 2014-01-16T08:19:51.315Z · LW(p) · GW(p)

"Pearlian causality" is sort of like "Hawkingian physics." (Not to dismiss the amazing contributions of both Pearl and Hawking to their respective fields).


I am not sure what cool or insightful is for you. What seems cool to me is that proper analysis of causality and/or missing data (these two are related) in observational data in epidemiology is now more or less routine. The use of instrumental variables for getting causal effects is also routine in econometrics.

The very fact that people think about a causal effect as a formal mathematical thing, and then use proper techniques to get it in applied/data analysis settings seems very neat to me. This is what success of analytic philosophy ought to look like!

Replies from: AlexSchell
comment by AlexSchell · 2014-01-16T17:21:39.722Z · LW(p) · GW(p)

What you mention in your last paragraph is roughly what I had in mind when asking for examples. So I take it that IVs are a method inspired by causal graphs (or at least causal maths)? If so you've answered my question.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2014-01-16T18:37:54.322Z · LW(p) · GW(p)

IVs were first derived by either Sewall Wright or his dad (there is some disagreement on this point). I don't think they formally understood interventions in general back in 1928, but they understood causality very well in the linear model special case.

IVs can be used in more general models than linear, and the reason they work in such settings needed formal causal math to work out, yes. IVs recover interventionist causal effects.
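A minimal sketch of the linear, binary-instrument special case, with a made-up data-generating process (the true effect of A on Y is set to 2 and confounded by an unobserved U):

```python
import numpy as np

# Made-up data-generating process: Z is an instrument that moves A but only
# reaches Y through A; U confounds A and Y.
rng = np.random.default_rng(0)
n = 100_000
u = rng.normal(size=n)                        # unobserved confounder
z = rng.binomial(1, 0.5, size=n)              # instrument
a = 1.0 * z + u + rng.normal(size=n)          # treatment depends on Z and U
y = 2.0 * a + 3.0 * u + rng.normal(size=n)    # true causal effect of A on Y is 2

naive = np.polyfit(a, y, 1)[0]  # ordinary regression slope, biased by U
wald = (y[z == 1].mean() - y[z == 0].mean()) / (a[z == 1].mean() - a[z == 0].mean())
print(naive, wald)  # naive is pulled away from 2; the Wald/IV estimate is close to 2
```

The naive slope is dragged away from 2 by the confounder, while the Wald/IV ratio recovers roughly 2 because Z only affects Y through A.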

comment by [deleted] · 2014-01-13T14:07:16.691Z · LW(p) · GW(p)

Why?

Replies from: lfghjkl
comment by lfghjkl · 2014-01-15T00:15:55.865Z · LW(p) · GW(p)

It's his job.

Replies from: None
comment by [deleted] · 2014-01-15T21:38:11.267Z · LW(p) · GW(p)

Nobody gets my jokes...

comment by FiftyTwo · 2014-03-20T02:25:11.688Z · LW(p) · GW(p)

What caused your interest in the topic? What was the arc of your career leading up to that?

Replies from: IlyaShpitser
comment by IlyaShpitser · 2014-03-20T12:53:38.280Z · LW(p) · GW(p)

Thanks for your question.

I got into AI/ML and graphical models as an undergrad. I thought graphical models were very pretty, but I didn't really understand them back then very well (probably still don't..). Causal inference is the closest we have to "applied philosophy," and that was very interesting to me because I like both philosophy and mathematics (not that I am any good at either!) Also I had an opportunity to study with a preeminent person and took it.

comment by Eugine_Nier · 2014-01-12T21:10:01.618Z · LW(p) · GW(p)

Are you aware of any attempts to assign a causality(-like?) structure to mathematics?

There are certainly areas of mathematics where it seems like there is an underlying causality structure (frequently orthogonal or even inverse to the proof structure), but the probability-based definition of causality fails when all the probabilities are 0 or 1.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2014-01-12T21:12:49.483Z · LW(p) · GW(p)

There are certainly areas of mathematics where it seems like there is an underlying causality structure (frequently orthogonal or even inverse to the proof structure)

Can you give a simple example of/pointer to what you mean?

Replies from: None, Eugine_Nier
comment by [deleted] · 2014-01-14T17:57:18.251Z · LW(p) · GW(p)

I don't know if this is what Nier has in mind, but it reminds me of Cramer's random model for the primes. There is a 100 per cent chance that 758705024863 is prime, but it is very often useful to regard it as the output of a random process. Here's an example of the model in action.
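A minimal sketch of the model, with a made-up cutoff: pretend each n >= 3 is "prime" independently with probability 1/ln(n), and compare the true prime count with the model's expectation and one sampled count.

```python
import math
import random

def true_pi(N):
    """Exact prime count up to N via a simple sieve."""
    sieve = bytearray([1]) * (N + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(N ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, N + 1, i)))
    return sum(sieve)

def cramer_pi_sample(N, seed=0):
    """One draw of the model's 'prime' count up to N (2 counted deterministically)."""
    rng = random.Random(seed)
    return 1 + sum(1 for n in range(3, N + 1) if rng.random() < 1 / math.log(n))

N = 100_000  # made-up cutoff
expected = 1 + sum(1 / math.log(n) for n in range(3, N + 1))  # roughly li(N)
print(true_pi(N), cramer_pi_sample(N), round(expected))
```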

Replies from: IlyaShpitser
comment by IlyaShpitser · 2014-01-14T20:15:54.611Z · LW(p) · GW(p)

I am aware of "logical uncertainty", etc. However I think uncertainty and causality are orthogonal (some probabilistic models aren't causal, and some causal models, e.g. circuit models, have no uncertainty in them).

comment by Eugine_Nier · 2014-01-12T22:28:11.250Z · LW(p) · GW(p)

Well, in analytic number theory, for example, there are many heuristic arguments that have a causality-like flavor; however, the proofs of the statements in question are frequently unrelated to the heuristics.

Also, this is a discussion about the causal relationship between a theorem and its proof.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2014-01-12T22:47:49.936Z · LW(p) · GW(p)

I don't know much about analytic number theory, could you be more specific? I didn't follow the discussion you linked very well, because they say things like "Pearlian causality is not counterfactual", or think that there is any relationship between implication and causation. Neither is true.

comment by eurg · 2014-01-12T16:35:21.322Z · LW(p) · GW(p)

Ask me almost anything. I'm very boring, but I have recovered from depression with the help of CBT + pills, have been a lurker since back in the OB days and know the orthodoxy here quite well, started to enjoy running (real barefoot if >7 degrees Celsius) after 29 years of no physical activity, am chairman of the local hackerspace (software dev myself, soon looking for a job again), and somehow established the acceptance of a vegan lifestyle in my conservative family (farmers).

Replies from: pinyaka, Daniel_Burfoot, Anatoly_Vorobey, FiftyTwo, Gimpness
comment by pinyaka · 2014-01-12T20:37:16.771Z · LW(p) · GW(p)

What steps did you take to start enjoying running?

Replies from: eurg
comment by eurg · 2014-01-12T23:21:13.228Z · LW(p) · GW(p)

This was surprisingly simple: I got myself to want to run, started running, and patted myself on the back every time I did it.

The want part was a bit of luck: I always thought I "should" do some sports, for physical and more importantly mental health reasons, and think that being able to do stuff is better than not being able, ceteris paribus. So I was thinking about what kind of activity I might prefer.

I like my alone time (so team or pair sports are out), and I dislike spending money when I expect it to be wasted (like gym memberships, bikes, et al.). And I feel easily embarrassed and ashamed, and like to get myself at least somewhat up to speed on my own.

Running fits those side requirements. By chance I got hold of "Born to Run", and even after the first quarter of the book I thought that it would be great if I could just go out on a bad day and spend an hour free of shit, or just reach some location a few kilometers away without any prep or machines or services.

I then decided that I would start running, and that my primary goal would be to like it and to be able to do it even in old age, should I get there. With the '*' that I give myself an easy way out in case of physical pain or unexpected hatred of the activity, but not for any weasel reasons.

I didn't start running for another one and a half years, because Schweinehund, subtype Innerer (the German "inner pig-dog", i.e. one's weaker self). When my mood was getting slightly better (I was again able to do productive work), I started, with the "habit formation" mind-set. I also didn't tell anyone in the beginning. I think it helped that I already had some knowledge of how to train and run correctly, which especially in the beginning meant that I always felt like I could run further than I was "allowed" to.

And for good feedback: However it went, when I finished my training, I "said" to myself: I did good. I feel good. I feel better than before I started. I wrote every single run down on RunKeeper and Fitocracy, and always smiled at the "I'm awesome!" button of the latter one. I'm also quite sure that having at least one new personal best once a week helped. (Also, when you run barefoot, you get the "crazy badass" card for free, however slow you run. I like this.)

Once started, such a feedback loop is quite powerful. When I once barely trained for a month, I was also surprised that getting back into regular running after that down-phase was so much easier. Now, after only seven months of training, I went from doing walk/run for 15 minutes to running 75 minutes, and having no problem with a cold-start 6% incline for the first two kilometers. I'm proud. Feels good (which is quite new to me).

comment by Daniel_Burfoot · 2014-01-12T20:04:28.788Z · LW(p) · GW(p)

I'm very boring.... somehow established the acceptance of a vegan lifestyle in my conservative familiy (farmers).

That's not boring, it is impressive and admirable. Well done.

Replies from: eurg
comment by eurg · 2014-01-12T23:25:05.596Z · LW(p) · GW(p)

Thanks!

comment by Anatoly_Vorobey · 2014-01-12T17:30:43.700Z · LW(p) · GW(p)

What's your motivation for veganism?

What do you enjoy most in software development, and why are you going to be looking for a job again soon? What's your dream SW dev job?

Replies from: eurg, eurg
comment by eurg · 2014-01-12T23:51:23.406Z · LW(p) · GW(p)

What's your motivation for veganism?

Moral reasons. All else equal, I think that inflicting pain or death is bad, and that the ability to feel pain and the desire to not die are very widespread. I also think that the intensity of pain in simpler animals is still very strong (I think humans did not evolve large brains because otherwise the pain was not strong enough). I also think that our ability to manage pain slightly reduces the impact of our having the ability to suffer more strongly and with more variety. But I give, for sanity check reasons, priority to the desires of "more complex" animals, like humans.

Due to our technical ability we can now produce supplements for micronutrients which are missing or insufficiently available in plants[1], and so I see health concerns resolved. So all the pain and death that I would inflict would only be there for greater enjoyment of food. Although I love the taste of meat and animal products, the comparative enjoyment is not big enough that I would kill for it. That I can enjoy plant-based foods is partly based upon my not being afraid of using my kitchen, and having a good vegan/vegetarian self-service restaurant 100m from my apartment.

And then there are the environmental reasons, and the antibiotic use, etc. etc. They count, and might even be sufficient on their own, but I'll only investigate those in case my other concerns/reasons are invalidated.

[1] There are vegan vitamin B12, vitamin D3, EPA/DHA (omega-3), and creatine powder.

comment by eurg · 2014-01-13T00:00:59.097Z · LW(p) · GW(p)

What do you enjoy most in software development, and why are you going to be looking for a job again soon? What's your dream SW dev job?

I cannot really answer what I enjoy most; I like almost every job that comes up, with only a few exceptions. I hate repeating myself, and I hate having to do things in a ... ... ... way against my better judgement. I prefer to spend more time (as in effort and calendar time) on the architecture/design/coding parts, but I also prefer doing other stuff once in a while to being purely a lonely coder.

I will give my notice in a few hours, so I'll then search for a new job. I will have two months' time for that, though, and maybe I'll take some time off before starting at a new company. I'm ending this job because none of the money, the project, or the team is good enough to make me happy, and the job market for software developers allows searching for improved conditions.

My dream SW job would involve writing open source software which somehow tangibly improves the lives of some people (think better medical data acquisition and analysis instead of the newest photo-sharing app), working with a team where competence, respect, and friendliness are widespread, and pay that is not worse than what I got when I was still failing to drop out of college. Sadly, I do not think such a job exists, especially not for people like me (who do not have the necessary skills for anything fancy).

comment by FiftyTwo · 2014-03-20T02:37:16.568Z · LW(p) · GW(p)

I'm also working on depression with CBT and pills. I find I function well when I have structure and external obligations but revert to inaction when left to my own devices, any similar experience? Any general advice?

Replies from: eurg
comment by eurg · 2014-03-23T19:06:10.762Z · LW(p) · GW(p)

Similar experience, and not much real advice. I mostly solve it by setting up obligations for myself. However, I resort to this only for stuff that is important. Examples:

I've announced and discussed doing some boring accounting and controlling for the hackerspace, and people now expect some specific results.

On another note, instead of procrastinating about finding a better workplace, I gave my notice. Once I was out of the job, I simply had to start looking.

Finally, I do not need to be perfect. More people than I expected have the odd day or two during the workweek, and knowing this I have reset my expectations regarding my own performance to something more humane.

comment by Gimpness · 2014-01-14T11:36:24.354Z · LW(p) · GW(p)

Could you go into a little more detail by what you mean by recovered from depression and what aspects of CBT assisted the most?

Replies from: eurg
comment by eurg · 2014-02-02T17:07:07.946Z · LW(p) · GW(p)

I'm sorry to have not answered for so long, I had some busy weeks.

Depression: I'd suffered many months from a depression bad enough that I was not able to work the hours of a part-time job, let alone achieve any acceptable performance. I was using alcohol as a replacement for other, more diluted variants of H2O. This was also not the first time I had been depressed, and needless to say, such things can fuck up your life and are generally not very desirable.

I recovered as well as I think possible: I feel well. I can work. I enjoy, and can concentrate on, stuff that piques my interest. I feel secure enough to make plans spanning more than two days, and expect to be somewhere between OK and very good for the foreseeable future. By most measures, I am now better functioning and healthier (physically and emotionally) than the average person.

The sword of Damocles is that the next episode might break through my defenses so fast that I break down. Again. If I remember correctly, there is a four-in-five chance there will be one. I do not worry about that, though.

Therapy: The most useful part of my therapy was the judicious choice of some small things to work on, and the frequent feedback from an outsider. Also, never underestimate how differently a therapist approaches problems compared to a damaged brain.

On my own I would either not do anything, and hate myself for it, or try something, and hate myself for failing (again), or do something, and hate myself for spending energy on such a worthless, embarrassingly tiny task. It was primarily option one.

It took some months, but through repeated experience I came to accept slight progress as progress nevertheless, and many of the tasks I was given to do integrate very nicely into everyday activities now. I learned about saying "Well done!" to myself. I also learned about building habits, not as in 'scientist', but as applied to my own life. I also made it through some setbacks, faster and better than in the past years, so there is the chance that I actually learned something useful.

Last but not least, after a severe and sudden setback about two months ago, my therapist set me up with a psychiatrist to get some nice pills. A few days later I slept better than I had for the last ten years. Sleep is great. Everybody should get some.

comment by ahbwramc · 2014-01-14T00:18:07.816Z · LW(p) · GW(p)

I didn't think I had anything particularly interesting to offer, but then it occurred to me that I have a relatively rare medical disorder: my body doesn't produce any testosterone naturally, so I have to have it administered by injection. As a result I went through puberty over the age range of ~16-19 years old. If you're curious feel free to AMA.

(also, bonus topic that just came to mind: every year I write/direct a Christmas play featuring all of my cousins, which is performed for the rest of the family on Christmas Eve. It's been going on for over 20 years and now has its own mythology, complete with anti-Santa. It gets more elaborate every year and now features filmed scenes, with multi-day shoots. This year the villain won, Christmas was cancelled for seven years and Santa became a bartender (I have a weird family). It's...kind of awesome? If you're looking for a fun holiday tradition to start AMA)

Replies from: MugaSofer, ialdabaoth, Jonathan_Graehl
comment by MugaSofer · 2014-01-28T17:22:37.509Z · LW(p) · GW(p)

I have a relatively rare medical disorder: my body doesn't produce any testosterone naturally, so I have to have it administered by injection. As a result I went through puberty over the age range of ~16-19 years old. If you're curious feel free to AMA.

Cool.

Well, for starters, what are your thoughts on the experience? Presumably you were better-equipped to analyse the change than most.

comment by ialdabaoth · 2014-02-02T17:44:32.233Z · LW(p) · GW(p)

Interesting, I had a very similar puberty, but was never diagnosed with a disorder. What were the symptoms that led to a diagnosis?

comment by Jonathan_Graehl · 2014-01-27T22:56:43.322Z · LW(p) · GW(p)

What's your favorite amount of testosterone? Why? Would the optimum shift according to purpose?

Replies from: ahbwramc
comment by ahbwramc · 2014-01-28T00:22:39.537Z · LW(p) · GW(p)

Well, I've been on the same dose for the past 8 years (set by my original endocrinologist and carried forward by all doctors since, who've basically shrugged and said "ehh, worked so far"). Last time I had my testosterone levels checked they were on the high end of normal, which suits me fine. I have a fairly high sex drive, which you might expect, but very low aggression, which you might not - although I've always been a very passive and non-aggressive person. So I guess to answer your question, I haven't really explored different amounts. I don't particularly plan to in the future, if for no other reason than I've been on my current dose long enough to self-identify with the range of behaviours it produces.

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2014-01-28T16:37:13.979Z · LW(p) · GW(p)

Other than wanting more sex, did you notice your mind changing?

I also wonder if late puberty extends the pre-adult skill learning window (adults supposedly can't learn as much or as well).

comment by [deleted] · 2014-01-12T22:23:46.925Z · LW(p) · GW(p)

Biology/genetics graduate student here, studying the interaction of biological oscillations with each other in yeast, quite familiar with genetic engineering due to practical experience and familiar with molecular biology in general. Fire away.

Replies from: Zaine, DaFranker
comment by Zaine · 2014-01-13T19:17:29.361Z · LW(p) · GW(p)

What's the current thinking on how to prevent physiological decay over time (i.e., ageing)? Figure out a way to recover the bits of DNA cleaved in mitosis?

Replies from: None
comment by [deleted] · 2014-01-16T16:12:54.348Z · LW(p) · GW(p)

Shortening telomeres are a red herring. You need multiple generations of a mammal not having telomerase before you get premature ageing, and all the research you've heard about where they 'reversed ageing' with telomerase was putting it back into animals that had been engineered to lack it for generations. Plus lack of telomerase in most of your somatic cells is one of your big anti-cancer defenses.

Much more of a problem are things like nuclear pores never being replaced in post-mitotic cells (they're only replaced during mitosis) and slowly oxidizing and becoming leaky, extracellular matrix proteins having a finite lifetime, and all kinds of metabolic dysregulation and protein metabolism issues.

This isn't exactly my field, but there are a few interesting actual lines of research I've seen. One is an apparent reduction in protein-folding chaperone activity with age in many animals from C. elegans to humans [people LOVE C. elegans for ageing studies because they can enter very long-lived quiescent phases in their life cycle, and there are mutations with very different lifespans]. People still aren't quite sure what that means or where it comes from.

There's lots of interest in caloric restriction still, with many organisms switching between high-fertility shorter-lifespan and longer-lifespan lower-fertility states, but with actual mechanisms quite up in the air and serious questions as to whether it actually happens in primates at all.

That paper a year or two back where some people claimed to double mouse lifespans with buckyballs dissolved in olive oil has my attention. Nobody including me actually believes the results, not least because their experimental design and data storage/presentation was an absolute unmitigated mess, but their biological evidence of massive antioxidant effects from the buckyballs (numbers of oxidative molecules neutralized far in excess of the numbers of buckyballs) was interesting and possibly even true. You can bet there are a couple labs around Europe trying to replicate the results that we will hear back from in a few years. If it does actually have an effect I would expect far less of an effect in animals that are less metabolically frenetic than mice, which can metabolize a tenth of their body weight per day.

If I had to actually give advice though it would come from rather more prosaic sources than molecular biology. It would say get the hell up and moving fairly often, get sleep on a regular schedule and don't expose yourself to blue light after sunset, don't eat refined sugar, have friends you can rely on, and don't take strong medicines when you don't absolutely have to. And get cheap genetic tests when you can since they can tip you off about low-frequency high-impact things.

Replies from: Zaine
comment by Zaine · 2014-01-16T16:51:44.008Z · LW(p) · GW(p)

Intriguing, and thank you for the detailed reply. May I respond in the future should I have further queries?

Replies from: None
comment by [deleted] · 2014-01-18T21:39:33.688Z · LW(p) · GW(p)

Sure, why not. I might be able (in a less busy time) to dig up that protein chaperone research too, somebody came to the university I'm at to give a talk on it a month or two ago.

comment by DaFranker · 2014-01-13T13:37:56.661Z · LW(p) · GW(p)

How stable is gene-to-protein translation in a relatively identical medium? I.e. if we abstract away all the issues with RNA and somehow neutralize any interfering products from elsewhere, will a gene sequence always produce the same protein, and always produce it, whenever encountered at a specific place? Or is there something deeper where changes to the logic in some other, unrelated part of the DNA could directly affect the way this gene is expressed (i.e. not through its protein interfering with this one)?

Or maybe I don't understand enough to even formulate the right question here. Or perhaps this subject simply hasn't been researched and analyzed enough to give an answer to the above yet?

If the answer is simple, are there any known ratios and reliability rates?

There's no particular hidden question; I'm not asking about designer babies or gengineered foodstuffs or anything like that. I'm academically curious about the fundamentals of DNA and genetic expression (and any comparison between this and programming, which I understand better, would be very nice), but hopelessly out of my depth and under-informed, to the point where I can't even understand research papers or the ones they cite or the ones that those cite, and the only things I understand properly are by-order-of-historical-discovery-style textbooks (like traditional physics textbooks) that teach things that were obsolete long before my parents were born.

Replies from: None
comment by [deleted] · 2014-01-14T17:48:16.324Z · LW(p) · GW(p)

The dreaded answer: "Well, it depends..."

The genetic code - the relationship between base triplets in the reading frame of a messenger RNA and amino acids that come out of the ribosome that RNA gets threaded through – is at least as ancient as the most recent common ancestor of all life and is almost universal. There are living systems that use slightly different codons though – animal and fungal mitochondria, for example, have a varied lot of substitutions, and ciliate microbes have one substitution as well. If you were to move things back and forth between those systems, you would need to change things or else there would be problems.
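
Since you asked for a programming comparison: the code itself really is just a lookup table, and translation is just reading that table three bases at a time. A toy sketch (the table is truncated to a handful of codons and the sequence is made up, so this is illustrative only):

    # Toy illustration only: translate one reading frame of an mRNA into protein.
    # The real standard table has 64 codons; this truncated dict is just for flavor.
    CODON_TABLE = {
        "AUG": "Met",  # also the usual start codon
        "UUU": "Phe", "UUC": "Phe",
        "GGU": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",
        "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
    }

    def translate(mrna):
        """Read base triplets from the first AUG until a stop codon."""
        start = mrna.find("AUG")
        if start == -1:
            return []
        protein = []
        for i in range(start, len(mrna) - 2, 3):
            aa = CODON_TABLE.get(mrna[i:i + 3], "???")  # codon missing from this toy table
            if aa == "STOP":
                break
            protein.append(aa)
        return protein

    print(translate("GGCAUGUUUGGAGGCUAAUUU"))  # ['Met', 'Phe', 'Gly', 'Gly']

The mitochondrial and ciliate variants mentioned above amount to swapping a few entries in that table, which is why moving genes between those systems without compensating causes problems.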

If you avoid or compensate for those weird systems, you can move reading frames wherever you want and they will produce the same primary protein sequence. The interesting part is getting that sequence to be made and making sure it works in its new context.

At the protein level, some proteins require the proper context or cofactors or small molecules to fold properly. For example, a protein that depends on disulfide bonds to hold itself in the correct shape will never fold properly if it is expressed inside a bacterium or in the cytosol of a eukaryotic cell – it has to contain the destination-tag that causes it to be secreted into the membrane-bound spaces of the ER compartment that are kept as an oxidizing environment where such bonds can form.

At the translation level, eukaryotes and eubacteria have developed divergent methods for bringing the RNA and ribosome together, and keeping the RNA stable. Eukaryotes automatically add a chemically modified 'cap' to the front end of all RNAs they make, and require the presence of a particular sequence after the reading frame that allows everything after that point to get chopped off and replaced with a bunch of As (which the ubiquitous RNA-destroying enzymes mostly ignore). Proteins coat the poly-A, interact with cap-binding proteins, and this complex massively increases the rate at which the capped end gets fed into the ribosome. Eubacteria on the other hand do neither of these things, and a ribosome will bind at any point on the RNA where a particular ribosome-binding sequence appears, allowing them to have multiple reading frames in the same RNA molecule under identical genetic control. If you don't put in all the proper elements you will still get protein, but less than you could have.

There are also introns to consider. Eukaryotes cut these intervening sequences within the reading frames out, but different eukaryotes have slightly different machinery for recognizing them, so what is properly spliced out in one organism might not be in another. And if you put one into a bacterium it won't be spliced at all. We solve this by only moving around the processed reading frame, with no introns, any time we are moving things between different systems. (Though it turns out that the presence of introns actually increases the rate of export of the RNA from the nucleus to the rest of the cell for translation, because export is coupled to splicing; it's not necessary, it just speeds it up.)

Everything said so far is actually quite easy to deal with: you just need to make sure that your favorite reading frame has the right basic elements around it for its new context. The big thing is making sure that your gene is actually expressed in its new context. You need to put in the right promoter elements upstream that bind the proteins necessary to both tag the DNA as to-be-read and anchor an RNA polymerase to actually make the transcript. In my lab we mostly just use existing promoters from other genes in yeast, but one of my labmates has actually used synthetic promoters with artificial activators around the stripped-down core of a natural promoter element to make a synthetic genetic oscillator. In animal cells people love using viral promoters because they are very strong and smaller than normal animal promoters, which can get rather large and are sometimes fragmented (especially into the introns), but normal promoters can be used too. The milieu of the cell will dictate if a certain promoter element is recognized and expressed – if the cell is making the right proteins that bind to and activate it, etc. We actually use that sometimes, putting a reading frame in front of, say, one of the GAL gene promoters that only turn on when you feed yeast galactose. There's lots of post-transcriptional regulation of RNA stability too.

You also have to be careful about where you put things relative to other genes and other chromosomal features. If you put a transgene too close to a eukaryotic centromere (attachment point for fibers that pull chromosomes apart during cell division) it will not be expressed because the centromere condenses and silences DNA around it for quite a distance. If you stick two small promoter elements driving genes right next to each other, they can interact and wind up affecting each other's expression (I've been having problems with this in my yeast). If you stick two genes very close to each other (second promoter right after the first reading frame) in series reading in the same direction on the DNA, unless you add a good 'terminator' element between them that makes the RNA polymerase fall off before it reads over the promoter of the second gene, the second gene's expression can be somewhat suppressed because reading through its promoter keeps knocking off the proteins necessary to launch another polymerase down it.

On top of all these things, there are all kinds of dirty tricks that are rare but existent, like a gene in yeast where the 3-base reading frame suddenly stutters one base over halfway through the gene due to a 'pseudoknot' structure the RNA folds up into combined with a very rare codon that takes a long time to translate, letting the RNA slip one base over within the ribosome to a more common faster-translating codon before it actually reads the slow one. An individual gene probably doesn't use such a dirty trick but they are around and if one exists you can bet some virus somewhere uses it – they do horrifying things with their nucleic acids to pack in overlapping genes or genes that make different things in different circumstances.

edit: that 'dirty trick' is sort of a special case of a wider-seen thing, where the relative concentration of different tRNA adapter molecules that constitute the actual mechanics of the genetic code affects how quickly different proteins are translated. Even in different organisms with the same code, two synonymous codons might have very different levels of the tRNA adapters in the cell and one could be translated a lot faster than the other in one organism. Sometimes we codon-optimize genes for particular organisms so that the gene is more efficient, but that can get expensive and sometimes it has bad side effects: our lab did that with the firefly luciferase gene that makes a luminescent protein, and it turned out that when it was translated extremely fast parts of the protein that normally folded independently one at a time interacted and folded together, screwing up its function.
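
Crude codon optimization is, at its simplest, just a substitution pass over the reading frame: re-encode each amino acid with whatever synonymous codon the target organism uses most. A toy sketch with invented usage frequencies (real tools juggle more constraints than this):

    # Crude codon optimization: re-encode a protein using each amino acid's most
    # frequent synonymous codon in the target organism. These frequencies are invented.
    TARGET_USAGE = {
        "Met": {"AUG": 1.00},
        "Phe": {"UUU": 0.41, "UUC": 0.59},
        "Gly": {"GGU": 0.16, "GGC": 0.34, "GGA": 0.25, "GGG": 0.25},
    }

    def optimize(protein):
        codons = [max(TARGET_USAGE[aa], key=TARGET_USAGE[aa].get) for aa in protein]
        return "".join(codons)

    print(optimize(["Met", "Phe", "Gly", "Gly"]))  # AUGUUCGGCGGC

And that blunt "always pick the fastest codon" approach is exactly the kind of thing that gave us the luciferase misfolding surprise described above.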

In the end, what we always wind up doing is simplifying things. If you need to move things between very different contexts you strip a gene down to its uninterrupted reading frame and move that around with promoter elements and translation-enhancing elements appropriate to its new context. You make sure you have nice terminators and a little space between things. And try to insert things into known locations that you know work rather than randomly. Artificial regulation of a gene rather than using a natural promoter often leads to coarsely controlled expression because it hasn't been optimized with all the subtle tricks, but you can almost always get them to work.

There are always surprises though. Otherwise it wouldn't be research.

Replies from: DaFranker
comment by DaFranker · 2014-01-21T14:08:31.874Z · LW(p) · GW(p)

That was an awesome breakdown of things, thank you!

I've learned way more from this than from all my previous reading, even without counting what I learned about what I didn't know I didn't know, and other meta-knowledge.

Replies from: None, None
comment by [deleted] · 2014-02-15T05:08:08.626Z · LW(p) · GW(p)

Any time. Feel free to message with other questions too.

comment by [deleted] · 2014-03-24T15:05:23.188Z · LW(p) · GW(p)

Just for fun, here are a couple of good-enough animations of various eukaryotic systems. They show nothing of the constant jiggling back and forth of the molecules and make everything look far too directed, but they give an idea of many of the things going on.

https://www.youtube.com/watch?v=yqESR7E4b_8

comment by jefftk (jkaufman) · 2014-01-13T02:35:20.717Z · LW(p) · GW(p)

I'm a programmer at Google in Boston doing earning to give, I blog about all sorts of things, and I play mandolin in a dance band. Ask me anything.

Replies from: jobe_smith, AlexSchell
comment by jobe_smith · 2014-01-15T19:09:22.047Z · LW(p) · GW(p)
  1. What are you working on at google?

  2. How much do you earn?

  3. How much do you give, and to where?

Replies from: jkaufman
comment by jefftk (jkaufman) · 2014-01-15T20:31:06.016Z · LW(p) · GW(p)

What are you working on at google?

ngx_pagespeed and mod_pagespeed. They are open source modules for nginx and apache that rewrite web pages on the fly to make them load faster.
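
To give a feel for what "rewrite web pages on the fly" means, here is a crude toy analogue in Python of one kind of rewrite (collapsing whitespace between tags so fewer bytes go over the wire). This is just a sketch for intuition, not how the actual modules are implemented; they apply many different optimizations and are far more careful about things like <pre> blocks and scripts.

    import re

    def collapse_whitespace(html):
        """Toy analogue of one kind of on-the-fly rewrite: shrink runs of
        whitespace between tags so fewer bytes go over the wire."""
        return re.sub(r">\s+<", "> <", html)

    page = "<ul>\n    <li>a</li>\n    <li>b</li>\n</ul>"
    print(collapse_whitespace(page))  # <ul> <li>a</li> <li>b</li> </ul>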

How much do you earn?

$195k/year, all things considered. (That's my total compensation over the last 19 months, annualized. Full details: http://www.jefftk.com/money)

How much do you give, and to where?

Last year Julia and I gave a total of $98,950 to GiveWell's top charities and the Centre for Effective Altruism. (Full details: http://www.jefftk.com/donations)

comment by AlexSchell · 2014-01-13T17:05:00.703Z · LW(p) · GW(p)

Did you ever get down to trying fumaric acid? How does it compare to citric and malic acids?

Replies from: jkaufman
comment by jefftk (jkaufman) · 2014-01-13T18:45:23.604Z · LW(p) · GW(p)

I've added an update to that post: http://www.jefftk.com/p/citric-acid

I ended up ordering malic and fumaric acids as well. I like the malic acid a lot, but the fumaric acid is really hard to taste. Not being soluble in water it just sits on the tongue being slightly sour. I probably just haven't found the right use for it yet.

Replies from: Leonhart, Vaniver, AlexSchell
comment by Leonhart · 2014-01-13T22:58:04.754Z · LW(p) · GW(p)

THANK YOU WHY DID I NEVER THINK OF DOING THAT THIS IS GOING TO MAKE ALL JAM EDIBLE FOREVER

Replies from: jkaufman
comment by jefftk (jkaufman) · 2014-01-14T02:10:27.047Z · LW(p) · GW(p)

Adding citric acid to overly sweet jam is indeed wonderful.

comment by Vaniver · 2014-01-15T16:47:00.644Z · LW(p) · GW(p)

The best part of sour patch kids was the white powder left over at the bottom of the wrapper.

I once had a one-pound bag of Sour Skittles, and after eating all of them, consumed the entirety of the white powder left over in the bag at once. Simply thinking about that experience is sufficient to produce a huge burst of saliva.

Replies from: jkaufman
comment by jefftk (jkaufman) · 2014-01-15T20:25:36.602Z · LW(p) · GW(p)

That powder is mostly citric acid mixed with sugar. Mmm.

comment by AlexSchell · 2014-01-13T23:53:03.486Z · LW(p) · GW(p)

Thanks! Will not order then.

Replies from: jkaufman
comment by jefftk (jkaufman) · 2014-01-14T02:11:12.236Z · LW(p) · GW(p)

If you're ever in Boston I'm happy to give you some to play with.

Replies from: AlexSchell
comment by AlexSchell · 2014-01-14T04:16:56.030Z · LW(p) · GW(p)

Uncertain how soon I will be able take you up on this, but thanks!

comment by Dr_Manhattan · 2014-01-13T02:23:14.033Z · LW(p) · GW(p)

I like the idea.

Here we go, things that might be interesting to people to ask about:

  • born in Kharkov, Ukraine, 1975, Jewish mother, Russian father

  • went to a great physics/math school there (for one year before moving to US), was rather average for that school but loved it. Scored 9th in the city's math contest for my age group largely due to getting lucky with geometry problems - I used to have a knack for them

  • moved to US

  • ended up in a religious high school in Seattle because I was used to having lots of Jewish friends from the math school

  • Became an orthodox Jew in high school

  • Went to a rabbinical seminary in New York

  • After 19 years, an accumulation of doubts regarding some theological issues, the Haitian disaster, and a lot of help from LW, I quit religion

  • Mostly worked as a programmer for startups with the exception of Bloomberg, which was a big company; going back to startups (1st day at Palantir tomorrow)

  • self-taught enough machine learning/NLP to be useful as a specialist in this area

  • Married with 3 boys, the older one is a high-functioning autistic

  • Am pretty sure AI issues are important to worry about. MIRI and CFAR supporter

Replies from: Anatoly_Vorobey, MugaSofer
comment by Anatoly_Vorobey · 2014-01-13T14:47:48.071Z · LW(p) · GW(p)

How did your family handle your deconversion? Do you continue with the religious Jewish style of everyday life?

Do your kids speak Russian at all/fluently? If not, are you at all unhappy about that? What about Hebrew?

If you're comfortable discussing the HFA kid: at what age was he diagnosed? What kind of therapy did you consider/reject/apply? What are the most visible differences from neurotypical norm now?

Replies from: Dr_Manhattan
comment by Dr_Manhattan · 2014-01-13T21:50:19.630Z · LW(p) · GW(p)

Hi Anatoly,

Initially it was a shock to my wife, but I took things very slowly as far as dropping practices. This helped a lot, and basically I do whatever I want now (3.5 years later). I also transferred my kids out of yeshiva and into a good public school. My wife remains nominally religious; it might take another 10 years :)

My kids don't speak Russian - my wife is American-born. I prefer English myself, so I'm not "unhappy" about them not speaking Russian in particular, although I'd prefer them to be bilingual in general. They read a bit of Hebrew.

I'm happy to discuss my HFA kid via PM.

Replies from: Anatoly_Vorobey, jazmt
comment by Anatoly_Vorobey · 2014-01-14T01:21:06.966Z · LW(p) · GW(p)

So glad to hear you got your kids out of yeshiva. Way to go!

Did you meet your wife via shidduch or more traditionally? If you ever did shidduch: I'm curious if in the orthodox circles in the US a Baal Teshuva faces a tougher challenge in shidduch than someone who grew up in a frum family. This is very much the case in Israel. Here I've heard tales of severe discrimination and essentially second-class status.

What's the attitude in orthodox circles towards Conservative/Reform Jews? (not the official one, but the "on the street" sort of thing, if it exists...). Is there any dialogue between the branches at all? (As you probably know, Conservative/Reform barely exist in Israel).

Replies from: Dr_Manhattan
comment by Dr_Manhattan · 2014-01-14T01:34:41.434Z · LW(p) · GW(p)

Met my wife through a Shidduch, though the Shadchan was my friend and both of us were BTs, so it wasn't quite Fiddler on the Roof. The BT thing made my transition out easier, now my in-laws love me even more :).

I attended a modern and strangely rationalist Yeshiva - they really attempted to reconcile Torah with modern science a la Maimonides. I just concluded you can't pull that off in the end. The attitude to Conservatives there was "well, they're wrong, but let's not make this personal", mostly treating them as "tinok shenishba" (roughly: not culpable for their nonobservance because of how they were raised). The guy who started it was mostly a nice guy, and he used most of the allowed vitriol to attack the stupidity and superstition of the right. I can't speak for other yeshivot or sects from personal experience, but I imagine this was somewhat unusual.

Funny - my biological father's last name was Vorobyev. I guess that makes us cousins :-p

comment by Yaakov T (jazmt) · 2014-02-06T05:18:27.129Z · LW(p) · GW(p)

Is your wife still teaching your kids religion? How do you work out conflicts with your wife over religious issues (I assume she insists on a kosher kitchen, wants the kids to learn Jewish values, etc.)?

comment by MugaSofer · 2014-01-28T17:26:12.721Z · LW(p) · GW(p)

self-taught enough machine learning/NLP to be useful as a specialist in this area

Speaking as a nonexpert, I'm curious what similarities, parallels, and overlap you see between these two fields.

Replies from: Dr_Manhattan
comment by Dr_Manhattan · 2014-01-29T00:55:34.431Z · LW(p) · GW(p)

Modern NLP (Natural Language Processing) uses statistical methods quite a bit - http://nlp.stanford.edu/fsnlp/
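
To give the flavor, a toy sketch (invented three-sentence corpus, no smoothing): just counting bigrams already gives you a crude probabilistic model of which word comes next, which is the basic move behind a lot of classic statistical NLP.

    from collections import Counter, defaultdict

    # Tiny invented corpus; real systems use millions of sentences plus smoothing.
    corpus = [
        "the cat sat on the mat",
        "the dog sat on the log",
        "the cat chased the dog",
    ]

    bigrams = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            bigrams[prev][nxt] += 1

    def p_next(prev, nxt):
        """Maximum-likelihood estimate of P(next word | previous word)."""
        total = sum(bigrams[prev].values())
        return bigrams[prev][nxt] / total if total else 0.0

    print(p_next("the", "cat"))  # 2/6, about 0.33, in this toy corpus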

comment by NancyLebovitz · 2014-01-12T09:26:43.899Z · LW(p) · GW(p)

Ask me anything. Like Vulture, I reserve the right to not answer.

Replies from: Anatoly_Vorobey
comment by Anatoly_Vorobey · 2014-01-12T10:08:12.765Z · LW(p) · GW(p)

Is your button business really functioning, do you get a nontrivial number of orders? What do your buttons look like and why isn't there a single picture of one on your website?

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-01-12T10:53:10.347Z · LW(p) · GW(p)

It's still functioning to some extent-- I'll be at Arisia next weekend. As far as I can tell, I'm neglecting the website because of depression and inertia.

comment by [deleted] · 2014-01-14T02:59:03.674Z · LW(p) · GW(p)

I understand ancient Greek philosophy really well. In case that has come up. I'm a PhD student in philosophy, and I'd be happy to talk about that as well.

Replies from: blacktrance, Douglas_Knight, Apprentice
comment by blacktrance · 2014-01-14T18:08:29.360Z · LW(p) · GW(p)

What do you think of Epicurus? What do you think of Epicurean ethics?

comment by Douglas_Knight · 2014-01-14T13:48:27.318Z · LW(p) · GW(p)

Do you have a sense of how the proportion of philosophy varied with place and time, both the proportion written and the proportion surviving? My impression is that there was a lot more philosophy in Athens than in Alexandria.

Replies from: None
comment by [deleted] · 2014-01-14T15:37:43.011Z · LW(p) · GW(p)

I'm not sure I entirely understand the question. I'll try to give a history in three stages:

1) Roughly, the earliest stages of philosophy were mathematics, and attempts at reductive, systematic accounts of the natural world. This was going on pretty broadly, and only by virtue of some surviving doxographers do we have the impression that Greece was at the forefront of this practice (I'm thinking of the pre-Socratic Greek philosophers, like Thales and Anaxagoras and Pythagoras). It was everywhere, and the Greeks weren't particularly good at it. This got started with the Babylonians (very little survives), and when the Assyrian empire conquered Babylon (only to be culturally subjugated to it), they spread this practice throughout the Mediterranean and near-east. Genesis 1 is a good example of a text along these lines.

2) After the collapse of the Assyrians, locals on the frontiers of the former empire (like Greece and Israel) reasserted some intellectual control, often in the form of skeptical criticisms or radically new methodologies (like Parmenides' very important arguments against the possibility of change, or the Pythagorean claim that everything is number). Socrates engaged in a version of this by eschewing questions of the cosmos and focusing on ethics and politics as independent topics. Then came Plato and Aristotle, who between them got the western intellectual tradition going. I won't go into how, for brevity's sake.

3) After Plato and Aristotle, a flurry of philosophical activity overwhelmed the Mediterranean (including and especially Alexandria), largely because of the conquests of Alexander and the active spread of Greek culture (a rehash of the thing with the Assyrians). This period is a lot like ours now: widespread interest in science, mathematics, ethics, political theory, etc. Many, many people were devoted to these things, and they produced more work in a given year during this period than everything that had come before combined. But as a result of the sheer volume of this work, and as a result of the fact that it was built on the shoulders of Plato and Aristotle, very little of it really stands out. As a result, a lot was lost.

Replies from: Douglas_Knight, Eugine_Nier
comment by Douglas_Knight · 2014-01-14T16:14:44.805Z · LW(p) · GW(p)

Before I expand on my question, let me ask what I really should have asked before: is there a place I can look up what survives, with a rough classification; or better, what is believed to have existed?

You seem to include all non-fiction in philosophy. Fine by me, but I just want to make it explicit.

What I meant by proportion was the balance between fiction and non-fiction. I don't think I've heard of any Hellenistic fiction. Was it rarer than classical fiction? Was it less often preserved? Again because it was derivative?  But maybe we should distinguish science from philosophy. My understanding is that Hellenistic science was an awful lot better than classical science. Hipparchus was not lost because he was derivative of Aristotle, but, apparently, because Ptolemy was judged to supersede him, or at least be an adequate summary.

comment by Eugine_Nier · 2014-01-15T01:53:13.933Z · LW(p) · GW(p)

Well, with respect to mathematics at least one difference between the Greeks and everybody else, is that the Greeks provided proofs of the non-obvious results.

Replies from: None
comment by [deleted] · 2014-01-15T03:05:11.049Z · LW(p) · GW(p)

Yes, though that really got started with Euclid, who post-dates Aristotle. It's with Plato and Aristotle that the Greeks really set themselves apart. I don't think we'd be reading any of the rest of it if it weren't for them.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-01-15T03:30:35.173Z · LW(p) · GW(p)

Euclid is merely the first whose work has survived to the modern day. If tradition is to be believed, Thales and Pythagoras provided proofs of non-intuitive results from intuitive one. Furthermore, Hippocrates of Chios wrote a systematic treatment starting with axioms. All three predated Plato.

Replies from: None
comment by [deleted] · 2014-01-15T04:06:42.408Z · LW(p) · GW(p)

That's a good point about Hippocrates, I'd forgotten about him. Do you have a source handy on Thales and Pythagoras? I don't doubt it, it's just a gap I should fill. So far as I remember, a proof that the square root of two is irrational came out of the Pythagorean school, but that's all I can think of. I hadn't heard anything like that about Thales.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-01-15T05:33:13.525Z · LW(p) · GW(p)

I linked to the relevant Wikipedia articles in my comment.

Replies from: None
comment by [deleted] · 2014-01-15T15:30:33.015Z · LW(p) · GW(p)

Ah, but note the 'history' section of the Thales article. It rather supports my picture, if it supports anything at all.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-01-16T01:55:23.158Z · LW(p) · GW(p)

Why? If you mean that Thales learned the result from the Babylonians, the point is that he appears to have been the first to bother proving it.

comment by Apprentice · 2014-01-14T12:50:41.891Z · LW(p) · GW(p)

Do you feel overworked and desperate as a PhD student, or is it basically fun? Have you published any articles yet, or are you planning to? What are your career plans?

Replies from: None
comment by [deleted] · 2014-01-14T15:13:41.275Z · LW(p) · GW(p)

I feel overworked, desperate, and very happy.

The desperation: This is a very hard field to work in, psychologically, because there's no reliable process for producing valuable work (this might be true generally, but I get the sense that in the sciences it's easier to get moving in a worthwhile direction). It's not rare that I doubt that anything I'm writing is valuable work. Since I'm at the (early) dissertation stage, these kinds of big picture worries play an important daily role.

The overwork: This is exacerbated by the fact that I have a family. I have much more to do than I can do, and I often have to cut something important. I grade papers on a 3 min per page clock, and that almost feels unethical. I just recently got a new dissertation advisor who wants to see work every two weeks.

The happy: I have a family! It makes this whole thing much, much easier. Most of my problem with being a grad student in the before time was terrible loneliness. Some people do well under those conditions, but I didn't. Also, I do philosophy, which is like happiness distilled. When everyone is uploaded, and science is complete, and a billion years or so have gotten all the problems and needs and video games and recreational space travel out of our system, we'll all settle into that activity that makes life most worth living: talking about the most serious things in the most serious way with our friends. That's philosophy, and I'm very happy to be able to do it even if I don't get a job out of it.

I haven't published anything, but someone recently footnoted me in an important journal. Small victories. I have a paper I'd like to publish, but it's a back-burner project. As to my career, I will take literally anything they can give me, so long as I can be around my family (my wife is a philosopher too, so we need to both get jobs somewhere close). Odds are long on this, so my work has to be good.

Replies from: Apprentice
comment by Apprentice · 2014-01-14T16:41:28.071Z · LW(p) · GW(p)

This is a very hard field to work in, psychologically, because there's no reliable process for producing valuable work (this might be true generally, but I get the sense that in the sciences it's easier to get moving in a worthwhile direction).

I think you're right that philosophy is particularly difficult in this respect. In many fields you can always go out, gather some data and use relatively standard methodologies to analyze your data and produce publishable work from it. This is certainly true in linguistics (go out and record some conversations or whatever) and philology (there are always more texts to edit, more stemmas to draw etc.). I get the impression that this is also more or less possible in sociology, psychology, biology and many other fields. But for pure philosophy, you can't do much in the way of gathering novel data.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-01-15T01:46:19.810Z · LW(p) · GW(p)

Interestingly, my field, mathematics, is similar to philosophy, probably for the same reason.

comment by Vulture · 2014-01-12T05:00:38.271Z · LW(p) · GW(p)

If anyone's interested (ha!), then sure, go ahead, ask me anything. (Of course I reserve the right not to answer if I think it would compromise my real-world identity, etc.)

N.B. I predict at ~75% that this thread will take off (i.e. get more than about 20 comments) iff Eliezer or another public figure decides to participate.

Replies from: Will_Newsome, James_Miller
comment by Will_Newsome · 2014-01-13T02:58:30.696Z · LW(p) · GW(p)

N.B. I predict at ~75% that this thread will take off (i.e. get more than about 20 comments) iff Eliezer or another public figure decides to participate.

For what it's worth I posted this with my main account and not with a sockpuppet precisely to ensure the exclusion of Eliezer.

comment by James_Miller · 2014-01-12T18:45:33.157Z · LW(p) · GW(p)

Why are you hiding your real identity? Don't you fear that in a few years programs, available to the general public, will be able to match writing patterns and identify you?

Replies from: Vulture, VAuroch
comment by Vulture · 2014-01-14T02:46:16.358Z · LW(p) · GW(p)

I see it more as introducing a trivial inconvenience which keeps people I know in real life generally away from my (often frank) online postings. In some sense it's just psychological, since by nature I am a very reticent person and it makes me feel like I can jot out opinions and get feedback without having to agonize over it. (That's also why I'm not necessarily comfortable directly listing out personal details which could probably be inferred/collected from what I write.)

Replies from: Alsadius
comment by Alsadius · 2014-01-16T07:33:16.221Z · LW(p) · GW(p)

FWIW, this is the same as my rationale. It is theoretically possible to trace Alsadius back to me-the-human, since I'm sure I've given enough identifying details to narrow down the pool of candidates to one given perfect information, but it is sufficiently difficult that I doubt anyone will actually bother.

comment by VAuroch · 2014-01-13T08:44:59.644Z · LW(p) · GW(p)

As someone who feels the same way, forestalling that possibility/making it take effort to identify me is somewhat worth it. And there's a substantial possibility that it won't take that long from development of programs-which-can-recognize to development of programs-that-can-hide.

comment by knb · 2014-01-14T23:29:55.912Z · LW(p) · GW(p)

Feel free to ask me (almost) anything. I'm not very interesting, but here are some possible conversation starters.

  1. I'm a licensed substance abuse counselor and a small business owner (I can't give away too many specifics about the business without making my identity easy to find, sorry about this.)
  2. I'm a transhumanist, but mostly pessimistic about the future.
  3. I support Seasteading-like movements (although I have several practical issues with the Thiel/Friedman Seasteading Institute).
  4. I'm an ex-liberal and ex-libertarian. I was involved in the anti-war movement for several years as a teenager (2003-2009). I've read a lot of "neoreactionary" writings and find their political philosophy unconvincing.
Replies from: Moss_Piglet, NancyLebovitz, FiftyTwo, A-Lurker
comment by Moss_Piglet · 2014-01-15T00:04:08.089Z · LW(p) · GW(p)

Maybe you can give some common misconceptions about how people recover from / don't recover from their addictions? That's the sort of topic you tend to hear a lot of noise about which makes it tough to tell the good information from the bad.

Do you have any thoughts on wireheading?

Have you tried any 19th/20th century reactionary authors? Everyone should read Nietzsche anyway, and his work is really interesting if a little dense. His conception of master/slave morality and nihilism is a much more coherent explanation for how history has turned out than the Cathedral, not to mention that the superman (I always translate it as "posthuman" in my head), as something beyond good and evil, is interesting from a transhumanist perspective.

Replies from: knb, Anatoly_Vorobey
comment by knb · 2014-01-15T02:07:38.355Z · LW(p) · GW(p)

Maybe you can give some common misconceptions about how people recover from / don't recover from their addictions? That's the sort of topic you tend to hear a lot of noise about which makes it tough to tell the good information from the bad.

I'm not sure if these are misconceptions, but here are some general thoughts on recovery:

  1. Neural genetics probably matters a lot. I don't know what to do with this, but I think neuroscience and genetics will produce huge breakthroughs in treatment of addiction in the next 20 years. People like me will probably be on the sidelines for this big change.
  2. People who feel coerced into entering counseling will almost certainly relapse, and they'll relapse faster and harder compared to people who enter willingly. However...
  3. ...this doesn't make coercion totally pointless--counselors can plant the seeds of a sincere recovery attempt, and give clients the mental tools to recognize their patterns.
  4. People who willingly enter counseling still usually relapse, multiple times. The people who keep coming back after a relapse stand a much better chance of getting to a high level of functioning. People who reenter therapy every time they relapse will usually succeed eventually. (I realize this is almost a tautology.)
  5. Clients with other diagnosed disorders are much less likely to fully recover.

Do you have any thoughts on wireheading?

Wireheading is somewhat fuzzy as a term.... The extreme form (being converted into "Orgasmium") seems like it would be unappealing to practically everyone who isn't suicidally depressed (and even for them it would presumably not be the best option in a transhuman utopia in which wireheading is possible.)

I think a modest version of wireheading (changing a person's brain to raise their happiness set point) will be necessary if we want to bring everyone up to an acceptable level of happiness.

Have you tried any 19th/20th century reactionary authors?

I've read a lot of excerpts and quotes, but not many full books. I read a large part of one of Carlyle's books and one late 19th Century travelogue of the United States which Moldbug approvingly linked to. (I've read a fair amount of Nietzsche's work, but I think calling him a reactionary is a bit like calling the Marquis de Sade a "libertarian.")

comment by Anatoly_Vorobey · 2014-01-15T00:25:18.298Z · LW(p) · GW(p)

The one concept from Nietzsche I see everywhere around me in the world is ressentiment. I think much of the master-slave morality stuff was too specific and now feels dated 130 years later, but ressentiment is the important core that's still true and going to stay with us for a while; it's like a powerful drug that won't let humanity go. Ideological convictions and interactions, myths and movements, all tied up with ressentiment or even entirely based on it. And you're right, I would have everyone read Nietzsche - not for practical advice or predictions, but to be able, hopefully, to understand and detect this illness in others and especially oneself.

Replies from: Moss_Piglet
comment by Moss_Piglet · 2014-01-15T00:59:49.315Z · LW(p) · GW(p)

It's funny to me that you would say that, because the way I read it was mainly that slave morality is built on resentment whereas master morality is built on self-improvement. The impulse to flee suffering or to inflict it (even on oneself) is the difference between the lamb and the eagle, and thus between the common and the aristocratic virtues. I wouldn't have thought to separate the two ideas.

But again, this is one of the reasons why he ought to be read more: two people reading it come away with five different opinions on it.

comment by NancyLebovitz · 2014-01-15T04:04:25.745Z · LW(p) · GW(p)

Why are you pessimistic about the future?

What are your practical issues about the Seasteading Institute? My major issue is that even if everything else works, governments are unlikely to tolerate real challenges to their authority.

What political theories, if any, do you find plausible?

Replies from: knb
comment by knb · 2014-01-16T00:17:43.029Z · LW(p) · GW(p)

Why are you pessimistic about the future?

I worry about a regression to the historical mean (Malthusian conditions, many people starving at the margins) and existential risk. I think extinction or return to Malthusian conditions (including Robin Hanson's hardscrabble emulation future) are the default result and I'm pessimistic about the potential of groups like MIRI.

What are your practical issues about the Seasteading Institute?

As I see it, the main problem with SI is their over-commitment to small-size seastead designs because of their commitment to the principle of "dynamic geography." The cost of small-seastead designs (in complexity, coordination problems, additional infrastructure) will be huge.

I don't think dynamic geography is what makes seasteading valuable as a concept. The ability to create new country projects by itself is the most important aspect. I think large seastead designs (or even land-building) would be more cost-effective and a better overall direction.

My major issue is that even if everything else works, governments are unlikely to tolerate real challenges to their authority.

I've always thought the risk from existing governments isn't that big. I don't think governments will consider seasteading to be a challenge until/unless governments are losing significant revenues from people defecting to seasteads. By default, governments don't seem to care very much about things that take place outside of their borders. Governments aren't very agent-y about considering things that are good for the long term interests of the government.

Seasteads would likely cost existing governments mainly by attracting revenue-producing citizens away from them and into seasteads, and it will take a long time before that becomes a noticeable problem. Most people who move to seasteads will still retain the citizenship of their home country (at least in the beginning), and for the US that means you must keep paying some taxes. Other than the US, there aren't a lot of countries that have the ability to shut down a sea colony in blue water. By the time the loss of revenue becomes institutionally noticeable, the seasteads are likely to be too big to easily shut down (i.e. it would require a long-term deployment and would involve a lot of news footage of crying families being forced onto transport ships).

What political theories, if any, do you find plausible?

I like the overall meta-political ethos of seasteading. I think any good political philosophy should start with accepting that there are different kinds of people and they prefer different types of governments/social arrangements. You could call this "meta-libertarianism" or "political pluralism."

comment by FiftyTwo · 2014-03-20T02:42:10.004Z · LW(p) · GW(p)

What are warning signs someone should look out for (in themselves) in avoiding addiction?

comment by A-Lurker · 2014-01-16T10:01:19.840Z · LW(p) · GW(p)

My take on drug abuse is that it isn't primarily the drugs themselves that are the problem, but the user. That is to say, the drugs have powerful and harmful effects, but the buck ultimately stops with the user who chooses to imbibe them. As physically addictive as some drugs can be, not everyone will (a) be addicted if they try it once, or (b) actually want to use the drug to begin with. It's the people who are depressed, self-harming, etc., who have drug problems.

I think my point can be easily confused, so I'll give an analogy: a magnetic sea mine is terribly destructive and can blow me to pieces (swap for drugs), but being a human of flesh and blood (swap for a healthy life and psychology), there will be no magnetic attraction and we won't be drawn towards each other. On the other hand, if I were a steel ship (depressed, etc.), the mine would be drawn to me and devastation would be the result.

To recap in one sentence: the mainstream point of view seems to be that drugs are like a virus which can affect anyone and are the problem in themselves, whereas I see the users as the "problem" and the drugs as one (of many) destructive outcomes of this. My question is basically: do you agree with the above?

comment by TheOtherDave · 2014-01-14T18:40:21.286Z · LW(p) · GW(p)

Some LW-folks have in the past asked me questions about my stroke and recovery when it came up, and seemed interested in my answers, so it might be useful to offer to answer such questions here. Have at it! (You can ask me about other things if you want, too.)

comment by Axel · 2014-01-12T18:29:03.160Z · LW(p) · GW(p)

I'm a 24-year-old guy looking for a job, with a great interest in science and game design. I read a lot of LW but rarely feel comfortable posting. I wished there were an LW meetup group in Belgium, and when nobody seemed to want to take the initiative I set one up myself. I didn't expect anyone to show up, but now, two years later, it's still going. Ask me anything you want, but I reserve the right not to answer.

Replies from: Vivificient
comment by Vivificient · 2014-01-13T06:36:55.553Z · LW(p) · GW(p)

How hard did you find it to be to organize/run a meetup? How did that compare to what you expected?

Replies from: Axel
comment by Axel · 2014-01-13T12:53:50.898Z · LW(p) · GW(p)

How hard it is depends on what kind of meetup you're running; in my case it's very easy. The Brussels group is more of a social gathering. We start off with a topic for the day but go on wild tangents, play board games, and generally just have fun. The only things I ever needed to do as an organizer were: pick a topic for the meetup, post the meetup on the site, arrive on time, make new members feel welcome, and manage the mailing list. When I started out I honestly didn't have any expectations of how hard it would be; I had no idea how the meetups would turn out and had decided to just run with whatever happened.

Once the meetup had a core group of regulars, some of them offered to help and I could delegate the stuff I'm not very good at (like the meetup posts on LW and coming up with topics). These days the only things I feel I have to do are put in an extra effort to involve new members and keep the atmosphere friendly (which, in two years of meetups, has only once been a problem; LW'ers are generally great people), and those are things I would do anyway.

I know there are other meetups where the organizer has more responsibility. For example, if you have a system where every month another person gives a short presentation, you have to manage that as well. For larger groups (Brussels rarely has more than 4 people) an official moderator-type person might be handy to make sure quieter people get a chance to speak up. There is no one "right" way to run a meetup: see why people enjoy coming to yours and try to make that part as awesome as you can. Just keep an open mind about trying new things every now and then.

In short, how hard it is to run a meetup depends on the type (social, exercise-focused, presentations, etc.). In my case, it's very easy, especially since I have others helping me out. If you're thinking of starting one yourself, don't worry too much about what type you want it to be; just see how the first few meetings go and it'll sort itself out from there.

comment by ephion · 2014-01-12T15:46:24.063Z · LW(p) · GW(p)

I'm heavily interested in instrumental rationality -- that is, optimizing my life by 1) increasing my enjoyment per moment, 2) increasing the quantity of moments, and 3) decreasing the cost per moment.

I've taught myself a decent amount and improved my life with: personal finance, nutrition, exercise, interpersonal communication, basic item maintenance, music recording and production, sexuality and relationships, and cooking.

If you're interested in possible ways of improving your life, I might have direct experience to help, and I can probably point you in the right direction if not. Feel free to ask me anything!

Replies from: FiftyTwo, jobe_smith, Markas
comment by FiftyTwo · 2014-03-20T02:40:20.137Z · LW(p) · GW(p)

Do you think you had a high starting conscientiousness level or did you have to develop it?

What do you mean about increasing enjoyment of moments? I guess some sort of mindfulness?

Can you expand on sexuality and relationships?

What techniques do you have for determining goals as opposed to fulfilling them? E.g. if I have no particular sense of what I want how would I determine it?

comment by jobe_smith · 2014-01-15T19:53:55.325Z · LW(p) · GW(p)

Have you become exceptionally good at anything, and if so what and how?

Replies from: ephion
comment by ephion · 2014-01-16T18:33:44.528Z · LW(p) · GW(p)

Improving skills is about deliberate practice, objective analysis (either by yourself or a teacher), and evaluating and fixing your weaknesses. I've been able to improve every skill I've tried with this method.

I consider myself exceptionally good at creating metal music (playing guitar, vocals, recording/mixing/production), and I'm getting pretty good at weight lifting. I am beginning to develop the skill of computer programming, which I expect to take to that level.

For most non-career and non-pleasure skills, I generally stop at the point of diminishing returns. I've learned to cook for myself better than most restaurants, but I don't care to invest the time and energy to become a real artist with it.

comment by Markas · 2014-01-12T19:01:11.852Z · LW(p) · GW(p)

Do you use any quantified-self tools for this? If so, could you elaborate on your data tracking/analysis processes?

Replies from: ephion
comment by ephion · 2014-01-12T20:40:20.422Z · LW(p) · GW(p)

Yes, but incompletely. I'll track things precisely until a habit is established, at which point I stop tracking everything and check in every once in a while to make sure I'm still on track. Some things I keep track of consistently, such as my budget, weight lifting numbers, bodyweight, etc.

The process is different for different things. I usually start with a Google Drive spreadsheet, and then experiment with other more specific apps if they're better than spreadsheets (they rarely are). If you have any more specific questions, I'd be glad to answer them.

comment by XiXiDu · 2014-01-12T14:24:27.194Z · LW(p) · GW(p)

You can ask me anything.

Replies from: Apprentice, Locaha
comment by Apprentice · 2014-01-12T16:13:42.069Z · LW(p) · GW(p)

Okay, I'll bite. Do you think any part of what MIRI does is at all useful?

Replies from: XiXiDu
comment by XiXiDu · 2014-01-12T16:57:08.305Z · LW(p) · GW(p)

Do you think any part of what MIRI does is at all useful?

It now seems like a somewhat valuable research organisation / think tank. Valuable because they now seem to output technical research that is receiving attention outside of this community. I also expect that they will force certain people to rethink their work in a positive way and raise awareness of existential risks. But there are enough caveats that I am not confident about this assessment (see below).

I never disagreed with the basic idea that research related to existential risk is underfunded. The issue is that MIRI's position is extreme.

Consider the following positions, some hypothetical and some actually held, that people take with respect to AI risk, in ascending order of perceived importance:

  1. Someone should actively think about the issue in their spare time.

  2. It wouldn’t be a waste of money if someone was paid to think about the issue.

  3. It would be good to have a periodic conference to evaluate the issue and reassess the risk every year.

  4. There should be a study group whose sole purpose is to think about the issue. All relevant researchers should be made aware of the issue.

  5. Relevant researchers should be actively cautious and think about the issue.

  6. There should be an academic task force that actively tries to tackle the issue.

  7. There should be an active effort to raise money to finance an academic task force to solve the issue.

  8. The general public should be made aware of the issue to gain public support.

  9. The issue is of utmost importance. Everyone should consider contributing money to a group trying to solve the issue.

  10. Relevant researchers that continue to work in their field, irrespective of any warnings, are actively endangering humanity.

  11. This is crunch time. This is crunch time for the entire human species. And it’s crunch time not just for us, it’s crunch time for the intergalactic civilization whose existence depends on us. Everyone should contribute all but their minimal living expenses in support of the issue.

Personally, most of the time, I alternate between position 3 and 4.

Some people associated with MIRI take positions that are even more extreme than position 11 and go as far as banning the discussion of outlandish thought experiments related to AI. I believe that to be crazy.

Extensive and baseless fear-mongering might very well cause MIRI's value to be overall negative.

Replies from: MugaSofer
comment by MugaSofer · 2014-01-28T17:40:20.861Z · LW(p) · GW(p)

Upvoted solely for the handy scale.

comment by Locaha · 2014-01-12T18:32:31.592Z · LW(p) · GW(p)

How should I fight a basilisk?

Replies from: XiXiDu
comment by XiXiDu · 2014-01-12T18:54:49.392Z · LW(p) · GW(p)

How should I fight a basilisk?

Every basilisk is different. My current personal basilisk pertains to measuring my blood pressure. I have recently been hospitalized as a result of dangerously high blood pressure (220/120 mmHg). Since I left the hospital I have been advised to measure my blood pressure.

The problem I have is that measuring causes panic about the expected result, which increases the blood pressure. Then if the result turns out to be very high, as expected, the panic increases and the next measurement turns out even higher.

Should I stop measuring my blood pressure because the knowledge hurts me or should I measure anyway because knowing it means that I know when it reaches a dangerous level and thus requires me to visit the hospital?

Replies from: Lumifer, NancyLebovitz
comment by Lumifer · 2014-01-12T19:28:36.189Z · LW(p) · GW(p)

The problem I have is that measuring causes panic about the expected result, which increases the blood pressure. Then if the result turns out to be very high, as expected, the panic increases and the next measurement turns out even higher.

Measure every hour. Or every ten minutes. Your hormonal system can't sustain the panic state for long, plus seeing high values and realizing that you are not dead yet will desensitize you to these high values.

Replies from: fubarobfusco
comment by fubarobfusco · 2014-01-12T20:16:00.342Z · LW(p) · GW(p)

As someone who's had both high blood pressure and excessive worrying — I second this advice.

comment by NancyLebovitz · 2014-01-12T19:20:57.029Z · LW(p) · GW(p)

Do you do any sort of meditation?

Replies from: XiXiDu
comment by XiXiDu · 2014-01-12T19:39:00.003Z · LW(p) · GW(p)

Do you do any sort of meditation?

No. Do you have any recommendations on what to read/try? Given the side effects of anxiety disorder medications such as pregabalin, meditation was one of the alternatives I thought about besides marijuana.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-01-12T20:48:25.123Z · LW(p) · GW(p)

I have a bunch of recommendations, but I'm no expert.

Generic advice: sit or stand with your back straight and unsupported. If sitting, your knees should be below your hips. This means straight chair (soles of feet on the ground), cross-legged on a cushion, or full lotus.

Pay attention to something low-stress. Your breath (possibly just the feeling of it going in and out of your nostrils), a candle flame, your heart beat (if low stress), counting from one to four and back again.

20 minutes is commonly recommended, but I don't think it's crazy to work up from 5 or 10 minutes if 20 is intolerable.

Meditation isn't easy. One of the useful parts of the training is gently putting your attention back where you want it when you notice you're thinking about something else. It may help to have a few simple categories like thought, memory, imagination, sensation to just label thoughts as they go by.

I recommend The Way of Energy by Lam Kam Chuen -- it's an introduction to Daoist meditation (mostly standing). I'm not going to say it's the best ever (I haven't investigated the field), but it's got a good reputation and I've gotten good results from it.

There. Now that I've said some things, I predict that other meditators will come in with more advice.

Replies from: NancyLebovitz, EvelynM
comment by NancyLebovitz · 2014-01-14T00:49:35.666Z · LW(p) · GW(p)

One more thing: Only do 70% as much as you think you can. I think this applies to meditation as well as (non-emergency) physical activities. It improves the odds that you won't make yourself sick of it.

Looks like I was wrong about getting replies.

comment by EvelynM · 2014-01-14T23:29:27.248Z · LW(p) · GW(p)

That advice is reasonable. The hospital/doctor may be able to refer you to a local Mindfulness-Based Stress Reduction course. Many people find the social support of meditating in a group helpful.

I hope you make a speedy recovery to full health, XiXiDu.

comment by Apprentice · 2014-01-12T09:48:28.098Z · LW(p) · GW(p)

You can ask me things if you like. At Reddit, some of the most successful AMAs are when people are asked about their occupation. I have a PhD in linguistics/philology and currently work in academia. We could talk about academic culture in the humanities if someone is interested in that.

Replies from: Anatoly_Vorobey
comment by Anatoly_Vorobey · 2014-01-12T10:27:02.268Z · LW(p) · GW(p)

Can you talk about your specific field in linguistics/philology? What it is, what are the main challenges?

Do you have a stake/an opinion in the debates about the Chomskian strain in syntax/linguistics in general?

Replies from: Apprentice
comment by Apprentice · 2014-01-12T12:40:48.094Z · LW(p) · GW(p)

Can you talk about your specific field in linguistics/philology?

I've mucked about here and there including in language classification (did those two extinct tribes speak related languages?), stemmatics (what is the relationship between all those manuscripts containing the same text?), non-traditional authorship attribution (who wrote this crap anyway?) and phonology (how and why do the sounds of a word "change" when it is inflected?). To preserve some anonymity (though I am not famous) I'd rather not get too specific.

what are the main challenges?

There are lots of little problems I'm interested in for their own sake, but perhaps the meta-problems are of more interest here. Those would include getting people to accept that we can actually solve problems and that we should try our best to do so. Many scholars seem to have this fatalistic view of the humanities as doomed to walk in circles and never really settle anything. And for good reason - if someone manages to establish "p" then all the nice speculation based on assuming "not p" is worthless. But many would prefer to be as free as possible to speculate about as much as possible.

Do you have a stake/an opinion in the debates about the Chomskian strain in syntax/linguistics in general?

Yes. I think the Chomskyan approach is based on a fundamentally mistaken view of cognition, akin to "good old fashioned artificial intelligence". I hope to write a top-level post on this at some point. But I'll say this for Chomsky: He's not a walk-around-in-circles obscurantist. He's a resolutely-march-ahead kind of guy. A lot of the marching was in the wrong direction, but still, I respect that.

Replies from: Douglas_Knight, Emily, VAuroch, Anatoly_Vorobey
comment by Douglas_Knight · 2014-01-13T19:31:13.273Z · LW(p) · GW(p)

non-traditional authorship attribution

Is that really the standard term? You know that the LW party line is that it's a bad term, like selling non-apples. Google suggests to me that it is not the most popular term. The link below replaces "non-traditional" with "modern," which isn't an improvement on this dimension.

Also, my first parsing was that "non-traditional" modified "authorship." This is actually a reasonable use of the prefix "non," since having a strong prior on the author makes a big difference (sociologically, if not technically). How bout that Marlowe?

Replies from: Apprentice
comment by Apprentice · 2014-01-13T21:38:34.072Z · LW(p) · GW(p)

You're right, it's a horrible term. For one thing, the methods involved are pretty well-established by now. I just use it by habit. As for that old Marlowe/Shakespeare hubbub, here's a recent study which finds their style similar but definitely not identical.

Replies from: Douglas_Knight, Douglas_Knight
comment by Douglas_Knight · 2014-01-13T21:46:56.445Z · LW(p) · GW(p)

Does anyone use a better term? "Statistical author attribution" seems like an obvious term, but google tells me that no one has ever used it.

comment by Douglas_Knight · 2014-01-13T21:44:57.789Z · LW(p) · GW(p)

Have you read the study you link? People who have read it tell me that the conclusions drawn do not match the body of the paper.

Replies from: Apprentice
comment by Apprentice · 2014-01-13T22:03:14.887Z · LW(p) · GW(p)

I skimmed it and nothing seemed obviously wrong. If you're interested, you could try for yourself. If you download Marlowe's corpus, Shakespeare's corpus and stylo you can get a feel for how this works in a couple of hours.

comment by Emily · 2014-01-13T15:02:07.477Z · LW(p) · GW(p)

Would love to read your post on the Chomskian approach, please do write it!

comment by VAuroch · 2014-01-13T08:17:50.167Z · LW(p) · GW(p)

I would be extremely interested in your post on Chomsky. I almost but not quite majored in linguistics in America, which meant that I got the basic Chomskyan introduction but never got to the arguments against it. I am vaguely familiar with the probabilistic-learning models (enough to get why Chomsky's proof that they can't work fails), but not enough to get what predictions they make etc.

comment by Anatoly_Vorobey · 2014-01-12T18:08:39.179Z · LW(p) · GW(p)

That's quite a broad field to plow! I'll keep asking questions, feel free to ignore those that are too specific/boring.

I've always wanted to know more about how authorship attribution is done; is this, found with a quick search, a reasonable survey of current state of the art, or perhaps you'd recommend something else to read?

Are your fields, and humanities in general, trying to move towards open publishing of academic papers, the way STEM fields have been trying to? As someone w/o a university affiliation, I'm intensely frustrated every time I follow an interesting citation to a JSTOR/Muse page.

Do you plan to stay in academia or leave, and if the latter, for what kind of job?

I think you should write that post about the Chomskyan approach.

Replies from: Apprentice
comment by Apprentice · 2014-01-13T00:05:11.199Z · LW(p) · GW(p)

I've always wanted to know more about how authorship attribution is done; is this, found with a quick search, a reasonable survey of current state of the art, or perhaps you'd recommend something else to read?

The Stamatatos survey you linked to will do fine. The basic story is "back in the day this stuff was really hard but some people tried anyway, then in 1964 Mosteller and Wallace published a landmark paper showing that you really could do impressive stuff, then along came computers and now we have a boatload of different algorithms, most of which work just great". The funny thing about stylometry is that it is hard to get wrong. Count up anything you like (frequent words, infrequent words, character n-grams, whatever) and use any distance measurement you like and odds are you'll get usable results. If you want to play around with this for yourself you can install stylo and turn it loose on a corpus of your choice. Gwern's little experiment is also a good read.
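
To make the "count anything, any distance" point concrete, here's a toy sketch in Python -- my own illustration, far cruder than what stylo actually does (stylo uses much larger feature sets and measures like Burrows' Delta):

    # Toy authorship attribution: compare relative frequencies of common
    # function words and pick the stylistically closest known author.
    from collections import Counter

    FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is", "it", "not"]

    def profile(text):
        """Relative frequency of each function word in a text."""
        words = text.lower().split()
        counts = Counter(words)
        total = len(words) or 1
        return [counts[w] / total for w in FUNCTION_WORDS]

    def distance(p, q):
        """Manhattan distance between two frequency profiles."""
        return sum(abs(a - b) for a, b in zip(p, q))

    def attribute(disputed_text, known_texts):
        """Return the author (a key of known_texts) whose sample is closest in style."""
        target = profile(disputed_text)
        return min(known_texts, key=lambda author: distance(profile(known_texts[author]), target))

With real corpora you'd use hundreds of features and a less naive distance, but even something this crude tends to produce usable results.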

My involvement with stylometry has not been to tweak the algorithms (they work just fine) but to apply them in some particular cases and to try to convince my fellow scholars that technological wizardry really can tell them things worth knowing.

Are your fields, and humanities in general, trying to move towards open publishing of academic papers, the way STEM fields have been trying to?

Yes. Essentially every scholar I know is in favor of this. As far as I can see, it will happen and is happening.

Do you plan to stay in academia or leave, and it the latter, for what kind of job?

I worked as an engineer for a few years but found I wasn't that into it and really missed school. So I went back and I'd like to stay.

comment by fubarobfusco · 2014-01-12T20:05:46.147Z · LW(p) · GW(p)

Sure, what the heck. Ask me stuff.

Professional stuff: I work in tech, but I've never worked as a developer — I have fifteen years of experience as a sysadmin and site reliability engineer. I seem to be unusually good at troubleshooting systems problems — which leaves me in the somewhat unfortunate position of being most satisfied with my job when all the shit is fucked up, which does not happen often. I've used about a dozen computer languages; these days I code mostly in Python and Go; for fun I occasionally try to learn more Haskell. I've occasionally tried teaching programming to novices, which is one incredible lesson in illusion of transparency, maybe even better than playing Zendo. I've also conducted around 200 technical interviews.

Personal stuff: I like cooking, but I don't stress about diet; I have the good fortune to prefer salad over dessert. I do container gardening. I've studied nine or ten (human) languages, but alas am only fluent in English; of those I've studied, the one I'd recommend as the most interesting is ASL. I'm polyamorous and in a settled long-term relationship. I get along pretty well with feminists — and think the stereotypes about feminists are as ridiculous as the stereotypes about libertarians. My Political Compass score floats around (1, –8) in the "weird libertarian" end of the pool. I play board games; I should probably play more Go, but am more likely to play more Magic. I was briefly a Less Wrong meetup organizer.

Replies from: None, DaFranker, lmm, John_Maxwell_IV
comment by [deleted] · 2014-01-13T06:40:02.084Z · LW(p) · GW(p)

What's the best programming language to learn in order to get a job? Or a good job, if the two answers would differ.

(Open question; it's too bad there isn't an "ask everyone who works in tech" thread or somesuch. For background, I used to know Java, as well as BASIC and bits of assembly, but a series of unfortunate chance events distracted me from programming about five years ago and I haven't done any since.)

Replies from: fubarobfusco, NancyLebovitz
comment by fubarobfusco · 2014-01-13T08:49:07.185Z · LW(p) · GW(p)

What's the best programming language to learn in order to get a job?

Eh, depends on what sort of job.

In my line of work, Python or maybe Ruby — they're both widely used by major employers, and particularly for automation tools.

But Java if you want to write for business computing; C# if you want to write for Windows; Objective-C if you want to write for the Mac or iGizmos; PHP if you want Great Cthulhu to rise from his tomb at R'lyeh. And Perl, Python, or Ruby and a smattering of shellscript if you want to do systems stuff.

Replies from: gjm
comment by gjm · 2014-01-13T12:32:09.237Z · LW(p) · GW(p)

Also C for a lot of embedded-systems things, and C++ ditto (and also for a fair amount of applications and a whole lot of what you might call scientific computing: computer vision, financial simulations, games engines, etc. -- but C++ is another Great Cthulhu Language).

Also, even if your only real interest is in getting a good job, it is very worthwhile learning more languages, preferably highly varied ones. The ideas that are natural or even necessary in one language may be useful to have in your mental toolbox when working in another. Consider, e.g., (1) some variety of assembly language to get a better idea of what the machine is actually doing, (2) a functional language like Haskell to show you a very different style of software design, (3) Common Lisp for its unusual (but good) approaches to OO and exception handling and to show you what a really powerful macro system looks like, (4) some languages with very different execution models -- Prolog (unification and backtrack-based searching), Forth or PostScript (stack machine), Mathematica (pattern-matching), etc.

Warning: the more different languages you are familiar with, the more you will notice the annoying limitations of each particular language.

comment by NancyLebovitz · 2014-01-14T00:54:00.342Z · LW(p) · GW(p)

it's too bad there isn't an "ask everyone who works in tech" thread or somesuch

You could start one.

comment by DaFranker · 2014-01-13T13:51:13.368Z · LW(p) · GW(p)

I've occasionally tried teaching programming to novices, which is one incredible lesson in illusion of transparency, maybe even better than playing Zendo.

How typical do you think your experience has been in this regard? IME, teaching programming to complete novices has been cruise-control stuff and one of the relatively few things where I know exactly what's going on and where I'm going within minutes of starting.

For context: I've had success in teaching a complete novice with a vague memory of high-school-math usage of variables how to go from that to writing his own VB6 scripts to automate simple tasks: retrieving and sending data to fields on a screen using predetermined native functions in the scripting engine (which I taught him how to search for and learn to use from the available and comprehensive reference files). This was on maybe my third or fourth attempt at doing so.

What I actually want to know is how typical my experience is, and whether or not there's value in analyzing what I did in order to share it. I suspect I may have a relatively rare mental footing, perspective and interaction of skillsets in regards to this, but I may be wrong and/or this may be more common than I think, invalidating it as evidence for the former.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-01-14T00:57:24.344Z · LW(p) · GW(p)

I think it would be a very good idea to analyse what you're doing, and probably valuable to have some transcripts of sessions -- what you think you're doing may not be what you actually do.

Do you teach in person? By phone? I'm wondering how much you use subtle clues to find out what your student is thinking.

Replies from: DaFranker
comment by DaFranker · 2014-01-21T13:37:38.357Z · LW(p) · GW(p)

Usually, in person (either as a tag-team or "I'll be right over here, call me when you're stumped" approach; I've experimentally confirmed that behind-the-shoulder teaching has horrible success rates, at least for this subject), though a few times by chat / IM while passing the code back and forth (or better yet, having one of those rare setups where it's live-synch'ed).

TL;DR: Look at examples of wildly successful teaching recipes, take cues from them and from LW techniques and personal experience at learning, fiddle a little with it all, and bam, you've got a plan for teaching someone to program! Now you just need pedagogical ability.

My general approach is to feel out what dumb basics they know by looking at it as if we were inventing programming piecemeal, naturally with my genius insight letting us work out most of the kinks on the spot. I also go straight for my list of Things I Wish Someone Would Have Told Me Sooner, the list of Things That Should Be In Every Single So-Called "Beginner's Tutorial To Programming" Ever, and the list of Kindergarten Concepts You Need To Know To Create Computer Programs -- written versions pending.

For instance, every "Beginner's Tutorial to Programming" I've ever seen fails to mention early enough that all this code and fancy stuff they're showing is nice and all, but to actually have meaningful user interactions and outputs from your program to other things (like the user's screen, such as making windows appear and putting text and buttons in them!) you have to learn to find the right APIs, the right handles and calls to make. I've yet to see a single tutorial, guide, textbook, handbook, "crash course" or anything that isn't trial-and-error or a human looking at what you did that actually teaches how to do that. So this is among the first things I hammer into them -

"You want to display a popup with yes/no buttons? Open up the Reference here, search for "prompt", "popup", "window", "input" or anything else that seems related, and swim around until you find something that looks like it does what you're doing, copy the examples given as much as possible in your own code, making changes only to things you've already mastered, and try it!"

...somewhat like this, though that's only for illustration. In a real setting, I'd be double-checking every step of the way there that they remember and understand what I told them about Developer References earlier on, that their face doesn't scrunch up at any of the terms I suggest for their search, that they can follow the visual display / UI of this particular reference I'm showing them (I'm glaring at you, javadoc! You're horribly cruel to newbies.) and find their way around it after a bit of poking around, and so on.
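
In Python, for example, that reference hunt might bottom out in something like the snippet below -- tkinter here is just a stand-in for whatever scripting engine and reference the learner actually has in front of them:

    # The end result of the reference hunt: a yes/no popup, copied from the
    # documentation example and lightly adapted.
    import tkinter as tk
    from tkinter import messagebox

    root = tk.Tk()
    root.withdraw()  # hide the empty main window; we only want the popup

    answer = messagebox.askyesno("Confirm", "Do you want to continue?")
    print("They clicked yes" if answer else "They clicked no")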

Obviously, that's nowhere near the first things to tackle, though. Most tutorials devote approximately twelve words to the entire idea of variables, which is rather ridiculous when contrasted with the fact that most people barely remember their math classes from high school, and never had the need or chance to wrap their head around the concept of variables as it stands in programming. It takes real work just to make sure a newbie can wrap their mind comfortably around the idea that a variable won't have a set value (I pointedly ignore constants at that point, because it's utterly, completely unnecessary and utterly confusing to mention them until they have an actual need for them, which is way way way waaaaaaaay later - they can just straight-up leave raw values right in the source code until then), that a variable will probably change as the program works, that it won't change on its own but that, since programs get big and you can't be sure nothing else will ever change it, you should always assume it could change somewhere else, etc. etc. etc. There are so many concepts that already-programmers and geeks and math-savvy people just gloss right over that obviously those not part of those elites aren't going to understand a thing when you start playing guerrilla warfare on their brain with return values, mutable vs immutable, variable data types, privates and scopes, classes vs instances, statics, and all that good stuff.

Buuut I'm rambling here. I suppose I just approach this as a philosophical "blend" between facilitating a child's wonder-induced discovery of the world and its possibilities, and a drill sergeant teaching raw recruits which fingers to bend how in what order and at what speed to best tie their army boot shoelaces and YOU THERE, DON'T FOLD IT LIKE THAT! DO YOU WANT YOUR FINGERS TO SLIP AND DROP THE LACE AND GIVE YOUR ENEMY TIME TO COME UP BEHIND YOU? START OVER!

Of course, it might be my perspective that's different. I was forewarned both by my trudging, crawly, slow learning of programming and by others about the difficulty of teaching programming, and as silly as it might sound, I have a lot more experience than the average expert self-taught wiz programmer at learning how to program, since I took such a sinuous, intermittent, unassisted and uncrunched road through it.

Anecdotally, I think I've re-learned what classes and objects were (after forgetting it from stopping my self-teaching for months) at least eight times. So I have at least eight different, internal, fully-modeled experiences of the whole process of learning those things and figuring out what I'm missing and so on, without anyone ever telling me what I was doing or thinking wrong, to draw from as I try to imagine all the things that might be packed and obfuscated in all the abstracts and concepts in there.

comment by lmm · 2014-01-13T12:34:47.384Z · LW(p) · GW(p)

Do you have a view on Scala?

Replies from: fubarobfusco
comment by fubarobfusco · 2014-01-13T15:01:27.663Z · LW(p) · GW(p)

Never tried it.

comment by John_Maxwell (John_Maxwell_IV) · 2014-01-12T22:05:22.392Z · LW(p) · GW(p)

I seem to be unusually good at troubleshooting systems problems

How'd you get to be this way?

Replies from: fubarobfusco
comment by fubarobfusco · 2014-01-13T00:15:03.209Z · LW(p) · GW(p)

I'm not sure, but one of the techniques that seems most salient to me is breadth-first search. Partly this is to hold off on proposing solutions: take just a little bit longer to look at the problem and gather data before generating hypotheses. The second part is to find cheap tests to disprove your hypotheses instead of going farther down the path an early hypothesis leads you down. Folks who use depth-first search, building up a large tree of hypotheses first or going down a long path of possible tests and fixes, seem more likely to get stuck.

I also really like troubleshooting out loud with colleagues who aren't afraid to contradict each other. Generating lots of hypotheses and quickly disconfirming most of them can quickly narrow down on the problem. "Okay, maybe the cause is a bad data push. But if that were so, it would be on all the servers, not just the ones in New York, because the data push logs say the push succeeded everywhere. But the problem's just in New York. So it's not the data push."

Replies from: John_Maxwell_IV
comment by Thomas · 2014-01-12T11:36:37.978Z · LW(p) · GW(p)

I am asking everybody here.

Do you have a plan of your own, to ignite the Singularity, the Intelligence explosion, or whatever you want to call it?

If so, when?

How?

Replies from: lmm, Alsadius, eurg
comment by lmm · 2014-01-12T12:24:02.447Z · LW(p) · GW(p)

I have a plan. Posts here have convinced me that the singularity will most likely be a lose condition for most people. So I'll only activate my plan if I think other actors are getting close.

Replies from: MugaSofer, None, Apprentice
comment by MugaSofer · 2014-01-28T17:43:21.083Z · LW(p) · GW(p)

becomes wildly curious

Since you posted above that you're participating in the AMA, can you give some details of this plan? (Assuming step one isn't "tell people about this plan", in which case please don't end the world just because you precommitted to answering questions.)

Replies from: lmm
comment by lmm · 2014-01-30T12:25:15.443Z · LW(p) · GW(p)

I think sharing concrete details would be a bad idea, but it's not like I've come up with any clever trick. I'll do it the same way I'd do anything else - buy what I can, make what I can't. I am (rightly or not) very confident in my programming abilities.

comment by [deleted] · 2014-01-15T01:53:18.659Z · LW(p) · GW(p)

This post reminds me of Denethor saying the Ring was only to be used in utmost emergency, at the bitter end.

comment by Apprentice · 2014-01-12T15:31:35.626Z · LW(p) · GW(p)

Insert pun on the phrase 'ignite the Singularity'.

comment by Alsadius · 2014-01-16T07:58:15.248Z · LW(p) · GW(p)

No. I have no particular skills in that field, and it's the sort of thing that's plagued by optimism. Besides, it's far too big a task for any one person - it'll be lit off by whole industries working for decades, not by one person turning on Skynet.

comment by eurg · 2014-01-12T16:48:57.804Z · LW(p) · GW(p)

No, not by myself. I wouldn't have the skillset for it, anyway. So I only try to introduce people to things like MIRI, to improve the chances that future discussions might not stop dead in fatalistic and nihilistic clichés. Effective altruism is an angle where I try to get a sense of whether a worthwhile elaboration is possible, as steering the arguments is somewhat easier when not starting with the most crazy stuff first.

comment by RomeoStevens · 2014-01-12T08:37:52.391Z · LW(p) · GW(p)

I believe that the things I do at any given time are reasonable for me to do, AMA.

Replies from: Alsadius, None
comment by Alsadius · 2014-01-16T07:43:28.091Z · LW(p) · GW(p)

How often do you use "It seemed like a good idea at the time!" as a defence unironically?

comment by [deleted] · 2014-01-15T03:51:10.219Z · LW(p) · GW(p)

Do you mean that you evaluate the utility function for working out what things to spend time on? Have you assigned arbitrary numbers to the outcomes or is it an estimate?

Replies from: RomeoStevens
comment by RomeoStevens · 2014-01-16T19:31:54.665Z · LW(p) · GW(p)

I estimate time value of various things often, yes.

comment by Kaj_Sotala · 2014-01-12T05:39:16.211Z · LW(p) · GW(p)

Sure, you can ask me anything.

Replies from: ShardPhoenix, James_Miller, None
comment by ShardPhoenix · 2014-01-12T08:19:36.326Z · LW(p) · GW(p)

IIRC you are interested in educational games, any new thoughts in that area?

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2014-01-12T11:39:32.083Z · LW(p) · GW(p)

Depends on what you mean by new: I elaborated on some of my core ideas about the field in the blog posts Why edugames don't have to suck, Videogames will revolutionize school (not necessarily the way you think), and also touched upon their role in society in Doing Good in the Addiction Economy. My thoughts have gotten somewhat more precise, but off-hand I can't think of any major recent insights that I wouldn't have mentioned in those posts.

On the topic of the educational game that I'm doing for my Master's Thesis, I'm making slow but sure progress.

Replies from: ShardPhoenix
comment by ShardPhoenix · 2014-01-12T11:47:21.230Z · LW(p) · GW(p)

Yes, I had read those posts before which is why I knew you were involved in the field. Good luck with your thesis - I think games have huge potential in education, but it will be difficult because educational games are aiming at a smaller target than normal ones.

comment by James_Miller · 2014-01-12T18:53:33.910Z · LW(p) · GW(p)

I have an idea for a video game that can teach microeconomics. It would create a persistent low-graphics world similar to what's in the game Travian and would require no artificial intelligence. Unfortunately, I can't program beyond the level of what they teach in codecademy. Do you have suggestions for people I could contact to get financial support for my game? I'm the author of a microeconomics textbook and so I think I have a credible background for this project.

Replies from: Kaj_Sotala, Lumifer
comment by Kaj_Sotala · 2014-01-12T19:47:25.145Z · LW(p) · GW(p)

Hmm. I haven't really looked into any actual funding agencies or the "getting money for this" side at this point, so I don't know much about that, but I can think of some researchers who might either have an interest in collaborating, or who could know more direct sources of funding. Two groups that come to mind who might be worth contacting in this regard are GAPS and Institute of Play. I'll let you know if I think of any others. (If you do contact them, I'd be curious to hear about the response.)

comment by Lumifer · 2014-01-12T19:29:47.210Z · LW(p) · GW(p)

What is the intended audience for this game? Why, do you think, people will play it?

Replies from: James_Miller
comment by James_Miller · 2014-01-12T19:35:03.214Z · LW(p) · GW(p)

Students taking introductory or intermediate microeconomics. Instructors would require their students to play.

Replies from: Lumifer
comment by Lumifer · 2014-01-13T03:28:39.254Z · LW(p) · GW(p)

Ah, so this is purely non-commercial, a course teaching aid, basically.

Can't you rope some grad students into doing this?

Replies from: James_Miller
comment by James_Miller · 2014-01-13T03:31:14.706Z · LW(p) · GW(p)

I would love to make money off of it, and have a revenue model but I would also be willing to do it for free.

My school doesn't have econ grad students. Also, it wouldn't be a good career move for a grad student who wanted to become a professor to devote lots of time to this.

Replies from: Lumifer
comment by Lumifer · 2014-01-13T03:57:19.759Z · LW(p) · GW(p)

So the target market is economics departments at other colleges/universities? You are talking essentially about a piece of education software sold to institutions, not to end users/players.

In this case, I think, you'll have to make a business case for the proposition. I am not sure enough people will find this idea fun enough to contribute their time for free.

Another point: do you really have to develop a new game from scratch? Doing a mod of an existing game or engine is likely to be vastly simpler and cheaper.

comment by [deleted] · 2014-01-24T18:48:11.633Z · LW(p) · GW(p)

Why are you utilitarian?

Inspired by this.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2014-01-26T06:36:23.754Z · LW(p) · GW(p)

At heart, utilitarianism feels like what you get when you ask yourself, "would I rather see few people hurt than many, many people happy rather than few, and how important do I think that to be", answer "I'd rather see few people hurt, rather see many people happy, and this is important", and then apply that systematically. Or if you just imagine yourself as having one miserable or fantastic experience, and then ask yourself what it would be like to have that experience many times over, or whether the impact of that experience is at all diminished just because it happens to many different people. Basically, utilitarianism feels like applied empathy.

Replies from: None, Gunnar_Zarncke
comment by [deleted] · 2014-02-10T16:22:43.694Z · LW(p) · GW(p)

So, if someone lacks empathy, utilitarianism is senseless to them?

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2014-02-11T18:05:47.775Z · LW(p) · GW(p)

Well, the particular rationale that I gave might be. Possibly they might find it sensible for some other reason.

comment by Gunnar_Zarncke · 2014-02-11T22:55:12.994Z · LW(p) · GW(p)

Indeed. "Utilitarianism feels like what you get when you ask" this, let your empathy take over, and think it through to its 'logical conclusion'.

The problem I have with this kind of reasoning is that it leads into extremes that don't match up with your other values. Oh, it might not look like a conflict. But I sometimes get the impression that this is because the doubt is compartmentalized away, because empathy is such a positively valued emotion and not following it feels wrong.

I have to admit that, not being a utilitarian myself, I don't have a clear-cut answer for how to rationally act on my empathy either. The problem with complex value functions is that there are no simple answers, and utilitarianism suspiciously looks like another simplistic answer to a complex problem.

comment by ITakeBets · 2014-02-03T01:53:18.115Z · LW(p) · GW(p)

I'm a 30-year-old first-year medical student on a full tuition scholarship. I was a super-forecaster in the Good Judgment Project. I plan to donate a kidney in June. I'm a married polyamorous woman.

Replies from: niceguyanon, arundelo
comment by niceguyanon · 2014-02-12T21:03:00.308Z · LW(p) · GW(p)

Before participating in the Good Judgment Project did you think you were a particularly good forecaster?

Do you believe you have an entrepreneurial edge because of your ability, if you were to pursue it?

Have you used your abilities to hack your life for the better?

comment by arundelo · 2014-02-09T20:32:00.755Z · LW(p) · GW(p)

I realize I could research this myself -- at least enough to ask a more informed version of this question -- but I've been procrastinating that since when I first read your comment, so:

Could you talk about your decision to donate the kidney and what your judgments of the tradeoffs were? (I assume, since you didn't mention otherwise, that this donation is not to a friend or family member.)

comment by philh · 2014-01-13T14:31:00.861Z · LW(p) · GW(p)

Why not.

I attended CFAR's May 2013 workshop. I was the main organizer of the London LW group during approximately Nov 2012-April 2013, and am still an occasional organizer of it. I have an undergraduate MMath. My day job is software: I'm the only full-time programmer on a team at Universal Pictures which is attempting to model the box office. AMAA.

comment by Daniel_Burfoot · 2014-01-13T05:08:23.840Z · LW(p) · GW(p)

I wrote a book about a new philosophy of empirical science based on large scale lossless data compression. I use the word "comperical" to express the idea of using the compression principle to guide an empirical inquiry. Though I developed the philosophy while thinking about computer vision (in particular the chronic, disastrous problems of evaluation in that field), I realized that it could also be applied to text. The resulting research program, which I call comperical linguistics, is something of a hybrid of linguistics and natural language processing, but (I believe) on much firmer methodological ground than either. I am now carrying out research in this area, AMA.
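
To give a toy flavour of the compression principle (my simplest possible illustration, not anything from the book): a model that predicts a corpus well assigns it a short codelength, so codelength can serve as the figure of merit when comparing theories of the data.

    # Toy "compression as evaluation": score a unigram model by the number of
    # bits it needs to encode a corpus (Shannon codelength, -log2 p per word).
    import math
    from collections import Counter

    def codelength_bits(corpus_words, model_counts):
        """Bits needed to encode corpus_words under an add-one-smoothed unigram model."""
        vocab = set(corpus_words) | set(model_counts)
        total = sum(model_counts.values()) + len(vocab)
        return sum(-math.log2((model_counts.get(w, 0) + 1) / total) for w in corpus_words)

    corpus = "the cat sat on the mat and the dog sat on the log".split()
    good_model = Counter(corpus)                                   # trained on similar text
    bad_model = Counter("completely unrelated vocabulary here".split())

    print(codelength_bits(corpus, good_model))  # fewer bits: the model captures the structure
    print(codelength_bits(corpus, bad_model))   # more bits: the model misses the structure

The research program is of course about far larger corpora and far better models, but this is the shape of the evaluation.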

Replies from: ESRogs
comment by ESRogs · 2014-01-14T01:31:25.227Z · LW(p) · GW(p)

How do you expect this work to influence the fields of computer vision, NLP, etc. -- would it inspire new techniques?

Replies from: Daniel_Burfoot
comment by Daniel_Burfoot · 2014-01-14T05:11:13.569Z · LW(p) · GW(p)

First, I want people in computer vision and NLP to actually look at the data sets their algorithms apply to. Ask a physicist to tell you some facts about physical reality, and they will rattle off a lengthy list of concepts, like conservation of energy, isotropy of spacetime, Ohm's law, etc. etc. Ask a vision scientist to tell you some things about visual reality, and my guess is they won't have much to say. Sure, a vision scientist can talk a lot about algorithms, machine learning techniques, feature sets, and other computational tools, but they can't tell you much about what's actually in the images. The same problem is true of NLP people to a lesser degree; they can talk about parsing algorithms and optimization procedures for finding MaxEnt parameters, but they can't tell you much about the actual structure of text.

So, yes, I expect the approach to produce new techniques, but not because it supplies some kind of new mathematical framework. It suggests a new set of questions.

comment by David_Gerard · 2014-01-12T10:23:12.786Z · LW(p) · GW(p)

I am not interesting, but I've been here a few years.

Replies from: Anatoly_Vorobey, Apprentice
comment by Anatoly_Vorobey · 2014-01-12T10:45:14.502Z · LW(p) · GW(p)

Are there interesting reasons that some LW regulars feel disdain for RationalWiki, besides RW's unflattering opinion of LW/EY? Can you steelman that disdain into a short description of what's wrong with RW, from their point of view? (I'm asking as someone basically unfamiliar with RW).

Replies from: David_Gerard, Eugine_Nier
comment by David_Gerard · 2014-01-12T18:52:01.882Z · LW(p) · GW(p)

I think the main reason is that basically nobody in the wider world talks about LW, and RW is the only place that talks about LW even that much. And RW can't reasonably be called very interested in LW either (though many RW regulars find LW annoying when it comes to their attention). Also, we use the word "rational", which LW thinks of as its own - I think that's a big factor.

From my own perspective: RW has many problems. The name is a historical accident (and SkepticWiki.com/org is in the hands of a domainer). Mostly it hasn't enough people who can actually write. It's literally not run by anyone (same way Wikipedia isn't), so is not going to be fixed other than organically. Its good stuff is excellent and informative, but a lot of it isn't quite fit for referring outside fresh readers to.

It surprises me how popular it is (as in, I keep tripping over people using a particular page they like - Alexa 21,000 worldwide, 8800 US - and Snopes uses us a bit) - it turns out there's demand for something that can set out "no, actually, that's BS and here's why, point for point". Raising the sanity waterline does in fact also involve dredging the swamps and cleaning up toxic waste spills. Every time we have a fundraiser it finishes ridiculously quickly ('cos our expenses are literally a couple thousand dollars a year). We have readers who just love us.

On balance, though, I do think RW makes the world a better place rather than a worse one. (Or, of course, I wouldn't bother.)

FWIW, there's a current active discussion on What RW Is For, which I expect not to go anywhere much.

I'm not sure I could reasonably steelman LW opposition to RW as if either were a monolith and there were no crossover (which simply isn't the case). I will note that RW is piss-insignificant, and if you're spending any time whatsoever worrying what RW thinks of LW then you're wasting precious seconds.

(The discussion of RW on LW actually came up on the LW and RW Facebook groups this morning too.)

comment by Eugine_Nier · 2014-01-12T21:35:07.730Z · LW(p) · GW(p)

Because RW sucks at actually being rational. Rather, they seem to have confused being "rational" with supporting whatever they perceive to be the official scientific position. LW, by contrast, has a number of contrarian positions, most notably cryonics and the Singularity, where it is widely believed here that the mainstream position is likely wrong and that the mainstream's argument for it is just silly.

Replies from: ArisKatsaris, David_Gerard
comment by ArisKatsaris · 2014-01-14T11:05:09.215Z · LW(p) · GW(p)

I'm downvoting you not because I disagree, but rather because the question was addressed to David, not you.

comment by David_Gerard · 2014-01-13T21:44:15.684Z · LW(p) · GW(p)

It is worth noting that Eugene's main concern is that RW has no patience with "race realism", as its proponents call it.

comment by Apprentice · 2014-01-12T10:48:54.023Z · LW(p) · GW(p)

Back when you joined Wikipedia, in 2004, many articles on relatively basic subjects were quite deficient and easily improved by people with modest skills and knowledge. This enabled the cohort that joined then to learn a lot and gradually grow into better editors. This seems much more difficult today. Is this a problem and is there any way to fix it? Has something similar happened with LessWrong, where the whole thing was exciting and easy for beginners some years ago but is "boring and opaque" to beginners now?

Replies from: David_Gerard
comment by David_Gerard · 2014-01-12T21:20:14.768Z · LW(p) · GW(p)

My answer may be a bit generic :-)

Re: Wikipedia - This is pretty well-trodden ground, in terms of (a) people coming up with explanations (b) having little evidence as to which of them hold. There's all manner of obvious systemic problems with Wikipedia (maybe the easy stuff's been written, the community is frequently toxic, the community is particularly harsh to newbies, etc) but the odd thing is that the decline in editing observed since 2007 has also held for wikis that are much younger than English Wikipedia - which suggests an outside effect. We're hoping the Visual Editor helps, once it works well enough (at present it's at about the stage of quality I'd have expected; I can assure you that everyone involved fully understands that the Google+-like attempt to push everyone into using it was an utter disaster on almost every level). The Wikimedia Foundation is seriously interested in getting people involved, insofar as it can make that happen.

As for LessWrong ... it's interesting reading through every post on the site (not just the Sequences) from the beginning in chronological order - because then you get the comments. You can see some of the effect you describe. Basically, no-one had read the whole thing yet, 'cos it was just being written.

I'm not sure it was easier for beginners at all. Remember there was only "main" for the longest time - and it was very scary to write for (and still is). Right now you can write stuff in discussion, or in various open threads in discussion.

Replies from: Apprentice
comment by Apprentice · 2014-01-12T22:28:46.650Z · LW(p) · GW(p)

Thank you. You brought up considerations I hadn't considered.

comment by joaolkf · 2014-01-12T12:53:05.227Z · LW(p) · GW(p)

I'll answer anything that does not negatively affect my academic career or violate anyone's privacy but mine (I never felt like I had one). I waive my right not to answer anything else that could be useful to anyone. I'm finishing a master's on the ethics of human enhancement in Brazil, and have just submitted an application for a doctorate at Oxford on moral enhancement.

comment by [deleted] · 2014-01-12T06:50:19.434Z · LW(p) · GW(p)

I don't think I'm known around here, but sure why not. Ask me anything.

comment by drethelin · 2014-01-12T04:33:29.774Z · LW(p) · GW(p)

Why did you make this post, Will? Wait, I guess you didn't comment here volunteering to answer questions.

Anyway I guess I can answer questions but I'm pretty lazy and not very educated so ask at your own risk.

Replies from: Will_Newsome
comment by Will_Newsome · 2014-01-12T04:43:06.757Z · LW(p) · GW(p)

You're asking me why? I did it 'cause I was bored.

I'll probably jump in if others do, otherwise it's too narcissistic as the creator of the post.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2014-01-12T07:12:20.598Z · LW(p) · GW(p)

Will have you ever had an encounter with the divine?

Replies from: Leonhart, Will_Newsome
comment by Leonhart · 2014-01-13T22:38:11.060Z · LW(p) · GW(p)

I upvoted you because I misread it as "Will you ever had" and thought you were making a joke about eternity, but now I suspect you just forgot the comma after "Will".

Keep the upvote, though, I want to know too.

comment by Will_Newsome · 2014-01-13T01:09:55.426Z · LW(p) · GW(p)

Fo sho.

Replies from: bbleeker
comment by Sabiola (bbleeker) · 2014-01-13T19:17:07.902Z · LW(p) · GW(p)

What happened?

Replies from: Will_Newsome
comment by Will_Newsome · 2014-01-14T01:50:52.687Z · LW(p) · GW(p)

See here for my explanation of why I'd rather not answer that.

Replies from: gjm
comment by gjm · 2014-01-15T02:52:03.011Z · LW(p) · GW(p)

I looked there and didn't see any explanation of why you'd rather not answer that. What did I miss?

Replies from: bbleeker
comment by Sabiola (bbleeker) · 2014-01-16T09:48:25.323Z · LW(p) · GW(p)

I imagine it's because

the gods apparently do not want there to be common knowledge of their existence

Right?

Replies from: gjm
comment by gjm · 2014-01-16T11:34:23.938Z · LW(p) · GW(p)

Might be. But I don't see how that would make it wrong for Will to describe his experiences, without also making it wrong for him to say he's had them and is very convinced by them.

I mean, it could. The gods would need to think that the level of evidence present in the world without any comment from Will is too low, and the level of evidence present with a description of Will's experiences is too high. It would be quite a coincidence, wouldn't it?, for the optimum level of evidence to fit into so narrow a region?

comment by JonahS (JonahSinick) · 2014-01-16T20:55:44.408Z · LW(p) · GW(p)

My biography is here http://jonahsinick.com/about-me/.

Replies from: niceguyanon
comment by niceguyanon · 2014-02-12T21:10:47.724Z · LW(p) · GW(p)

What is the counterargument to EA critics who say that if you take EA to its logical conclusion, your life will suck? If I donate 50% of my income, I probably could donate 55%, then 65%; eventually, to be consistent, you'd have to donate 100%, because as an American I could probably dumpster-dive for food and live in a box and still have a better life than someone out there.

What is the happy medium that is consistent and justified?

Replies from: JonahSinick
comment by JonahS (JonahSinick) · 2014-02-12T22:24:59.542Z · LW(p) · GW(p)

This has been written about by Julia Wise at Giving Gladly, and others.

Two relevant considerations are:

  • Major self-sacrifice tends to be unsustainable, leading to burnout.
  • If an EA makes him or herself miserable, he or she is likely to repel bystanders, reducing other people's interest in being EAs.

Giving What We Can has set donating 10% of one's income as a threshold for membership. There's a historical precedent of this level of giving being sustainable for many people, coming from tithing practices in religion.

As for higher percentages: roughly speaking, it seems that marginal returns diminish very rapidly beyond $100k/year, so that one can give everything beyond that without substantially sacrificing quality of life. There are reasons why keeping more can help: for example, to save extra money against the contingency that one is unemployed, or to be able to take care of many children. But I think that the level of sacrifice involved would be acceptable for many people. If one is living in an area with low cost of living, or doesn't want children, one can often live on a lot less than $100k/year without sacrificing quality of life.

comment by [deleted] · 2014-01-12T20:01:38.752Z · LW(p) · GW(p)

Self-deprecating observations about my knowledge and interestingness apply, etc., but I have been reading this site for a while. So, on the off chance: sure, why not, ask me anything.

comment by edanm · 2014-01-12T13:05:41.380Z · LW(p) · GW(p)

Sure. I run a Software Dev Shop called Purple Bit, based in Tel Aviv. We specialise in building Python/Angular.js webapps, and have done consulting for a bunch of different companies, from startups to large businesses.

I'm very interested in business, especially Startups and Product Development. Many of my closest friends are running startups, I used to run a startup, and I work with and advise various startups, both technically and business-wise.

AMA, although I won't/can't necessarily answer everything.

Replies from: None, jobe_smith
comment by [deleted] · 2014-01-15T03:48:36.641Z · LW(p) · GW(p)

In terms of custom software, what do you see as the next big thing that business will want? More specifically do you get the feeling that more people are wanting to move away from cloud services to locally managed applications?

Replies from: edanm
comment by edanm · 2014-01-17T11:10:12.884Z · LW(p) · GW(p)

This really depends on the field. My experiences are probably only relevant to about 1% of software projects out there - there's a lot of software in the world.

That said, in terms of Cloud vs. Local - definitely not. Most large (and small!) companies we've worked with use AWS. We also highly recommend Heroku/AWS to all our customers as the easiest and least expensive way to get started on building a custom application.

Of course, there are a lot of places where cloud still doesn't make sense. We have one client who has custom software deployed in hospitals, where all of the infrastructure is of course local to their site, not in any kind of cloud. But for the majority of people who don't have such a use case, everyone understands that cloud makes everything easier.

comment by jobe_smith · 2014-01-15T20:12:17.681Z · LW(p) · GW(p)

can you explain your basic business model? Also, what is the hardest part of your business and/or the biggest barrier to entry?

Replies from: edanm
comment by edanm · 2014-01-17T11:27:48.088Z · LW(p) · GW(p)

So, we're what's called a "Professional Services" firm. This term is usually used when talking about e.g. Accountants, Lawyers, etc, but is just as relevant for a Software Consultancy. I'll go a little into the idea behind professional services firms in general, then get back to talking about us in particular.

There are many, many different types of Professional Services firms, but the basic business model is usually the same - you're selling your time for money, and people pay because of your expertise and experience in the field.

But here's where large firms make their real money: the firm gets projects based on the expertise and experience of the "managing partners", and then a combination of the managing partners and juniors performs the actual work. For example, a law office will win a contract because of the experience and expertise of its "Name Partners", and it'll charge, let's say, $500 an hour for an hour of Partner time. But it'll also charge $450 an hour for an Associate lawyer. The firm pays huge salaries to the name partners, so it's basically not making any profit there. But it pays tiny salaries to the Associates, for a large profit.

This is called "leverage". This is how a professional services firm grows and makes a profit - leveraging the skills and reputation of key, highly paid employees to sell the work of lower-paid employees.
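
To put rough numbers on that (these figures are made up for illustration -- not our rates or any real firm's):

    # Toy leverage arithmetic with made-up hourly figures.
    partner_rate, partner_cost = 500, 450        # billed per hour vs. roughly paid per hour
    associate_rate, associate_cost = 450, 100

    margin_per_partner_hour = partner_rate - partner_cost        # $50
    margin_per_associate_hour = associate_rate - associate_cost  # $350

    # One partner keeping four associates busy:
    print(margin_per_partner_hour + 4 * margin_per_associate_hour)  # $1450 of profit per hour

The partner's own hours barely make money; nearly all the profit comes from the associates' hours, which is why firms grow by adding leverage.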

Most Professional Services firms can be placed on a sliding scale as to how much expertise vs. leverage they have. An example of a highly skilled "firm": a team of brain surgeons. They're basically paid amazingly well, and have minimal leverage. An example of a consultancy with a lot of leverage: a company that builds websites for restaurants. Building a website for a restaurant is 90% repetitive work that can be given to junior employees, with senior employees focused on finding work and growing the reputation of the business.

So where do we fit in all this? In our case, as a rather small firm, we're mostly on the "expertise" side of the equation. We have a few people in the company, and we're all very experienced Software Devs. Companies hire us for consulting based on our experience, and for development because we get things done quickly and well.

Of course, the founders of our firm (myself and 2 partners) are much more experienced than most of our employees, and as we grow, that gap will continue to grow as we take on more junior programmers who need more training, but are underappreciated by the market and just need someone to give them an opportunity and teach them the ropes.

So there's an answer about the basic business model of a Professional Services firm in general. I didn't go into any of the specifics of a Software shop in particular, but there's a lot to say about that as well, e.g., there are hundreds of niches of Software dev shops - are you targeting large companies or small? Startups? Tech-savvy customers? People who want software projects? Putting people on-site at a customer's company? Each one of these niches is very, very different, and it's a fascinating topic for me at least, since 2 years ago, before starting this company, I would never have realised how different all these niches truly are, or even that they exist.

comment by lmm · 2014-01-12T12:25:43.217Z · LW(p) · GW(p)

Sure, ask me if you want. Programmer/anime fan/LW reader and commenter.

Replies from: blacktrance, None
comment by blacktrance · 2014-01-12T21:19:40.177Z · LW(p) · GW(p)

What's your favorite anime, and why?

Replies from: lmm
comment by lmm · 2014-01-13T08:44:55.555Z · LW(p) · GW(p)

Wandering Son (Hōrō Musuko)

Personal reasons: the story's relevant to my own and in a genre I don't normally pay much attention to, which might be why it stands out over other possible candidates (e.g. Puella Magi Madoka☆Magica). Also, by choosing an artsy show that tackles a serious dramatic subject, full of tragedy (and qbrfa'g erfbyir rirelguvat arngyl at the end), I sound more intellectual.

Pseudo-objective reasons: I feel it accurately captures the feelings of childhood and growing up. I particularly liked the portrayal of the sibling relationship, where you hate each other on a level that's superficial but no less genuine for that, but will stand by each other when you discover things the other really cares about. The conclusion also felt very true-to-life. I liked the visual style; the character designs are much more realistic than the animé norm (and for viewers who find it hard to tell them apart, serve as a demonstration of the valid reasons for the animé norm), and the whole setting and story feels like something you could do in live action. But at the same time this would be completely impossible to produce in live action, for a different reason than normal (child actors and ethical issues), so it shows off the ability of animé to do what other media can't. The slightly washed-out, watercolour visual style is distinctive, even among animé - but it's like that for a reason, the uncertain, blurry visuals aligning perfectly with the emotions this series is trying to convey. Likewise the light, childish-sounding soundtrack is distinctive - but it's not just style for the sake of style, it fits with the show as a whole.

Practical notes: I prefer the 11-episode (rather than 12-episode) release. I've avoided describing the premise because it's an episode 1 spoiler; if you think you'd like the show from this description I recommend watching it (or at least watching episode 1) rather than seeking out more information.

Replies from: Nectanebo
comment by Nectanebo · 2014-01-17T10:02:50.786Z · LW(p) · GW(p)

Many believe that the anime is a poor adaptation of the manga, or at the very least that the manga is the best medium the story is told in. What do you think about the subject?

Replies from: lmm
comment by lmm · 2014-01-17T19:50:29.222Z · LW(p) · GW(p)

I don't generally get on with manga as a medium. I tried to read this particular one and gave up after about three chapters. So depending on your perspective either I can't compare the two, or I found the anime to be much, much better.

comment by [deleted] · 2014-01-12T21:10:03.517Z · LW(p) · GW(p)

Are you that lmm?

Replies from: lmm
comment by lmm · 2014-01-13T08:20:54.127Z · LW(p) · GW(p)

Yes

comment by ChristianKl · 2014-01-12T11:43:00.030Z · LW(p) · GW(p)

In case anyone has questions for me, I'm happy to answer.

Replies from: whales
comment by whales · 2014-01-12T20:06:16.746Z · LW(p) · GW(p)

What is the philosophy behind your prolific commenting?

Replies from: ChristianKl
comment by ChristianKl · 2014-01-12T21:22:34.743Z · LW(p) · GW(p)

In general, online commenting is something I do out of habit. It has a higher return on time than completely passive media consumption such as watching TV, but it's not something I'd file under time spent for maximum returns.

I generally think that the shift to mass consumption of content via TV/radio in the 20th century was bad for the general discourse of ideas in society. Active engagement helps learning.

I also prefer it over chatting in venues such as IRC, because it provides deeper engagement with ideas and leaves more of a footprint. Created content is findable afterwards.

LessWrong is also a choice to keep me intellectually grounded. These days I spend plenty of time thinking in mental frameworks that are not based on reductionist materialism. I do see value in being pretty flexible about changing the map I use to navigate the world, and I don't want to lose access to the intellectual way of thinking.

In total, however, I spend more time than optimal on LW and frequently use it to procrastinate on some other task.

comment by Anatoly_Vorobey · 2014-01-12T10:10:10.458Z · LW(p) · GW(p)

I work as a software engineer, married with two kids, live in Israel and blog mostly in Russian. AMA.

Replies from: Locaha, A-Lurker
comment by Locaha · 2014-01-12T18:26:21.806Z · LW(p) · GW(p)

Why do you even waste time on lj-russians? The level of the discourse is lagging roughly two hundred years behind the western world.

Replies from: Anatoly_Vorobey
comment by Anatoly_Vorobey · 2014-01-13T11:24:34.766Z · LW(p) · GW(p)

The quality of discourse in Russian LJ depends almost entirely on your immediate circle of readers. Incredible stupidity and mendacity happily coexist with fantastic blogs and interesting debates. The number and density of the latter has gone down over the years, but then again, so has blogging as a phenomenon.

It comes down to this: the main reason I blog on LJ in Russian is that I still have lots and lots of readers there who are smarter and more knowledgeable than me in the many different areas I'm interested in. There's no single place I can blog or write in English that would give me as much, and as useful, feedback (and that certainly includes LW).

comment by A-Lurker · 2014-01-16T10:08:09.533Z · LW(p) · GW(p)

Do you believe that by living in Israel you are de facto green-lighting its history and current course of action (such as settlements, etc)? If not, can you explain what you believe your involvement/non-involvement entails? [edit: I think this question might have come off sounding thorny when it's not supposed to be- especially given the charged emotions and such on the conflict there. I just want some perspective on what it's personally like for you to 'live in the middle' of such a well known conflict]

Replies from: A-Lurker
comment by A-Lurker · 2014-01-17T04:24:31.439Z · LW(p) · GW(p)

Why would someone down vote me without commenting as to why? Why would my question warrant a down vote anyway?

Replies from: asr, Lumifer
comment by asr · 2014-01-17T04:26:57.794Z · LW(p) · GW(p)

Downvotes without comments are routine. I didn't vote, but I suspect the downvoter felt that a discussion of Middle-East politics was likely to follow from the question, and likely to be unpleasant or heated.

Replies from: A-Lurker
comment by A-Lurker · 2014-01-17T04:55:23.371Z · LW(p) · GW(p)

That would be an assumption and entirely irrational. I am not going to be unpleasant nor engage in a lengthy debate about anything- least of all expanding the topic to other middle east politics.

I simply wanted to know what it's like to live in such a controversial topic. Where does he find himself in it (as in, does he feel like it's in another world, or maybe it's a daily experience?). I really don't know if the average person there feels like they are part of what is happening or if it is something they see in the news like everyone else in the world and feel disconnected from.

I'm an Australian- and I wouldn't have a problem if someone asked me the same line of questioning based around say the current (anti) refugee policy or even the white invasion and genocide of the indigenous people.

Replies from: TheOtherDave, pragmatist
comment by TheOtherDave · 2014-01-17T05:24:11.133Z · LW(p) · GW(p)

Yes, it's an assumption.
An irrational assumption? No, not especially. In the absence of special information about you, it's rational enough to assume you are a typical commenter on this site. If they observe evidence of your exceptionality, a rational observer updates based on that information.

Replies from: A-Lurker
comment by A-Lurker · 2014-01-17T06:08:37.653Z · LW(p) · GW(p)

Hmm, I see your point- but if what they did was called 'rational' then there has to be another word for the part where they made the mistake. The mistake was they came to so much of a conclusion about something that they acted on it. They were wrong. They caused negative utility. It negatively affected the world and also their understanding of it. What is that called?

Replies from: Lumifer
comment by Lumifer · 2014-01-17T06:13:15.790Z · LW(p) · GW(p)

They were wrong. They caused negative utility. It negatively affected the world

All that about a single downvote..? X-D

I recommend growing thicker skin, quickly.

Replies from: A-Lurker
comment by A-Lurker · 2014-01-17T06:17:54.907Z · LW(p) · GW(p)

lol not negative utility to me- to him! It hasn't hurt my feelings or made me feel like a victim; I'm talking about how someone has misinterpreted something and acted it out on the world. Even at that, it was such a minor incident that I'm not talking about this in terms of damage done. What I'm really saying is: why is someone acting irrationally on a rationality website?

Replies from: Lumifer
comment by Lumifer · 2014-01-17T06:24:15.019Z · LW(p) · GW(p)

why is someone acting irrationally on a rationality website?

The obvious answer is that people here are humans and not Vulcans. But I don't see the irrationality you are talking about. Rationality doesn't specify values or goals. You know nothing about the person who downvoted you or the reasons he did it. Given this, your accusation of irrationality seems... hasty. Even irrational, one might say :-)

comment by pragmatist · 2014-01-17T05:01:07.885Z · LW(p) · GW(p)

The way people vote on politically contentious topics on this site is very far from some rational ideal. Politics is the mind-killer and all that. I don't think it's changing any time soon, so I'd recommend just getting used to it.

Replies from: A-Lurker
comment by A-Lurker · 2014-01-17T05:11:42.799Z · LW(p) · GW(p)

But why do people just accept the status quo?? Politics doesn't kill my mind. I know how to not 'cheer for my team' and to think about topics in a balanced way. I expect people to act irrationally on the comment section of the news website I read- but why are people not rising above it on this website of all places?

Get used to it? It's hard to accept that it's rare and unexpected for people to talk about a topic rationally. I don't see why people find it so hard- especially when they've apparently read articles highlighting common problems and where they come from.

Replies from: TheOtherDave, Lumifer
comment by TheOtherDave · 2014-01-17T05:20:35.871Z · LW(p) · GW(p)

When we find it hard to think that things are as they are, and we find it hard to see why things are as they are, that's often a good time to pay close attention to the behavior of the system. Often this has better results than expecting the behavior to be different and complaining when it isn't... though admittedly, sometimes complaining has good results.

Or do you have a third alternative in mind?

Replies from: A-Lurker
comment by A-Lurker · 2014-01-17T06:13:14.609Z · LW(p) · GW(p)

To be honest, I guess my comment was just a complaint with no expected result. It really had no point other than some kind of emotional release.

comment by Lumifer · 2014-01-17T05:15:39.316Z · LW(p) · GW(p)

Politics doesn't kill my mind.

Heh. Are you quite sure of that? :-)

Replies from: A-Lurker
comment by A-Lurker · 2014-01-17T06:04:41.578Z · LW(p) · GW(p)

lol ok yes as I typed that I had to ask myself that exact same question- since it's such a bold thing to say and exactly what someone with a problem might say.

I could explain why I am sure, but I'm not sure anyone is interested in that explanation. I've got a ask me a question comment on here so I guess if anyone is interested- they can ask :-)

comment by Lumifer · 2014-01-17T05:06:24.563Z · LW(p) · GW(p)

Why would someone down vote me without commenting as to why?

That's standard operating procedure around here. Most up- and down-votes are given without comment.

Why would my question warrant a down vote anyway?

Your question implies that Israel's "history and current course of action" are bad/shameful/immoral/etc.

comment by Viliam_Bur · 2014-01-12T09:55:32.529Z · LW(p) · GW(p)

Here I am.

Replies from: None, joaolkf
comment by [deleted] · 2014-01-12T19:43:41.804Z · LW(p) · GW(p)

Why do you live in Slovakia?

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-01-12T20:32:57.744Z · LW(p) · GW(p)

I was born here, and I never lived anywhere else (longer than two weeks). I dislike travelling, and I feel uncomfortable speaking another language (it has a cognitive cost, so I feel I sound more stupid than I would in my language). Generally, I dislike changes -- I should probably work on that, but this is where I am now.

I could also provide some rationalization... uhh, I have friends here, I am familiar with how the society works here, maybe I prefer being a fish in a smaller pond -- okay the last one is probably honest, too.

Replies from: army1987
comment by A1987dM (army1987) · 2014-01-13T17:15:13.339Z · LW(p) · GW(p)

I feel uncomfortable speaking another language (it has a cognitive cost, so I feel I sound more stupid than I would in my language).

Speaking in a language I'm not fluent in (and in a cultural context I'm not familiar with) makes me feel like an idiot savant, because it destroys my social skills while keeping my abstract reasoning/mental arithmetic skills intact.

comment by joaolkf · 2014-01-12T13:36:54.237Z · LW(p) · GW(p)

Is it difficult being too smart and concerned about the right things where you live/lived? If yes, how do/did you deal with it?

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-01-12T15:59:50.664Z · LW(p) · GW(p)

Well, it is sometimes difficult to be me, but I'm not sure how much of that is caused by being smart, how much by lack of some skills, and how much is simply the standard difficulty of human life. :D

Seems to me that most people around me don't care about truth or rationality. Usually they just don't comment on things outside of their kitchens, unless they are parroting some opinion from a newspaper or TV. That's actually the less annoying part; I am not disappointed because I didn't expect more from them. More annoying are people who try to appear smart and do so basically by optimizing for signalling: they repeat every conspiracy theory, share on facebook every "amazing" story without bothering to google for hoaxes or just use some basic common sense. When I am at Mensa and listen to people discussing some latest conspiracy theory, I feel like I might strangle them. Especially when they start throwing around some fully general arguments, such as: You can't actually know anything. They use their intelligence to defeat themselves. Also, I hate religion. That's a poison of the mind; an emotional electric fence in a mind that otherwise might have a chance to become sane. -- But I suspect all countries are like this, in general. And I am lucky to live in one where people won't try to hurt me just because I say something blasphemous. Still, as is obvious from this paragraph, I feel greatly frustrated about the sanity waterline here.

Okay, specifically for Slovakia: This country used to be mostly Catholic, then it was Communist for a few decades, now it's going back to Catholicism again. During communism, the Catholics were pretty successful in recruiting many contrarians to their ranks; they pretty much told them that the search for truth is the search for God, and they associated atheism with communism (which wasn't difficult at all, since Communists used it as an applause light). I was frustrated by seeing people around me look for the truth in the supernatural and dismiss reality almost as propaganda. Then there was a higher level of contrarians who also dismissed the local religion, and instead embraced Buddhism or whatever. Believing in "mere reality" does not work as a signal for intelligence here.

I actually don't have a good way of dealing with it. For a time I was alone. For a time I was friendly with religious people, politely participating in their rituals, believing none of that, but enjoying the company of smart contrarians. Once or twice I tried to find some reason in Mensa, and was always horribly disappointed.

As a child, I was a member of the mathematical club; elementary-school students who loved math and did the mathematical olympiad. That was the best part of my life; smart activities, and no bullshit. But as we grew older, the club dissolved. -- Skip almost two frustrating decades and I found LessWrong. And I was like: "Smart and sane people again!" and "Oh shit, why do they have to be on the other side of the planet?" And since then I have been trying to build a local rationalist movement, progressing very, very slowly.

One thing that keeps me sane is my current girlfriend, who also reads LessWrong, and attended a CFAR minicamp with me. But she is not as enthusiastic about it as I am; and she seems to prefer good relationships with other people to being right. Maybe I am just a horrible person unable to deal with people, but the thing is I am unable to unsee the bullshit; when someone speaks bullshit, it's like a painful shrieking sound in my ears, I just can't ignore it; I can keep quiet but it still feels unpleasant.

I suspect most rational people around me cope by focusing their energy on their favorite project, and ignoring the insanity of the rest of the world. (But I may be wrong at modelling other people.) They probably can be rational in their work, and social in the rest of their lives. Maybe they are happy like that. Maybe they just don't know they could expect more (if I didn't have the unique experience of the mathematical club and of LW, probably neither would I). So this year I am trying to make a list of smart and sane people around me, get in contact with each of them, invite them to a local LW meetup, and give them a copy of my translation of the Sequences. -- I am not sure how much I should push LW; maybe having a club of smart and sane people would be enough by itself. For me, LW is simply one level more meta: before LW I approximately knew what was and what wasn't rational, but I didn't have any arguments to win a debate. It was like a matter of feeling: this seems like a correct way to approach truth, and this feels like a way to madness. I just had the general idea that reality is out there, and that the proper way to grasp it is to adjust my map to the territory, not the other way round. (Because that's what worked for me in mathematics.) -- Maybe my role here is to bring the local smart people together. But maybe I am just projecting my desires onto them, and they are actually quite happy as they are now. This will be resolved experimentally.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-01-12T17:21:36.343Z · LW(p) · GW(p)

I took a look at Mensa sometime in the 80s in the US, mostly through their publications. I was very underwhelmed-- they had a very bad habit of coming up with a set of plausible-sounding definitions and basing an argument on them.

I went to an event, and I could get at least as good a conversation at a science fiction convention.

On the other hand, one of my friends, an intelligent person, was very fond of DC area Mensa, and it doesn't surprise me if there's a lot of local variation. I also know another very smart person who's also very fond of Mensa. Perhaps it's not a coincidence that she also lives in the DC area.

If the best company you've found was a math club, perhaps you should be looking for mathematicians and/or math clubs.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-01-12T18:22:13.200Z · LW(p) · GW(p)

I suspect that local Mensas are different. But I also think that none of them even approaches the LW level. Maybe it's a question of size -- if you have say 100 Mensans in one city, 10 of them can be rational and have a nice talk together, aside from the rest of the group. If you only have 10 Mensans in one city, you are out of luck there.

The mathematical club I was in as a child was one of a kind, and the lady who led it doesn't do this anymore. She has her own children now, and she works as a coordinator of correspondence competitions, which is not the same thing as having a club. Unfortunately, there was no long-term plan... If I could somehow restart this thing, I would try something like Scouts do (okay, I don't know many details about Scouts, but this is my impression); I would encourage some members to become new leaders, so that the whole thing does not fall apart when the main person no longer has time; I would try to make a self-reproducing system.

There is an interesting background to that mathematical club. It started with a Czech elementary-school teacher of mathematics, Vít Hejný, who taught himself some of Piaget's psychology from books and, based on this + his knowledge of math + some experimenting in education, developed his own method of teaching mathematics. He later taught it to a group of interested students; one of them was the lady who organized my club. But until recently, there was no book explaining the concepts. And even with the book, this man was a psychology autodidact, so he invented a lot of unusual words to describe the concepts he used, so it would be difficult to read for someone without first-hand experience. And most of the psychologists wouldn't grok the mathematical aspect of the thing, because it is a theory of "how people think when they think about mathematical problems". So I am afraid the whole art will be forgotten. (Perhaps unless someone translates his book to English, substituting his neologisms with the proper psychological terminology, if there are exact equivalents.)

Also, that mathematical club had some "kalokagathia" aspects; we did a lot of sport, or logical debates. That's not the same thing as mathematicians working alone, or math students spending their free time on facebook. Sometimes I think the math (on the olympiad level) simply worked as a filter for high-quality people-- selecting both for intelligence and a desire to become stronger. I am not aware of any math club existing in my city, but people doing math competitions could be the proper group. I just need to make them meet at one place.

comment by blacktrance · 2014-01-12T05:41:23.462Z · LW(p) · GW(p)

In the unlikely event that anyone is interested, sure, ask me anything.

Edit: Ethics are a particular interest of mine.

Replies from: Tuxedage, Luke_A_Somers
comment by Tuxedage · 2014-01-12T08:28:13.672Z · LW(p) · GW(p)

Would you rather fight one horse sized duck, or a hundred duck sized horses?

Replies from: blacktrance, Moss_Piglet
comment by blacktrance · 2014-01-12T17:43:43.432Z · LW(p) · GW(p)

Depends on the situation. Do I have to kill whatever I'm fighting, or do I just have to defend myself? If it's the former, the horse-sized duck, because duck-sized horses would be too good at running away and hiding. If it's the latter, then the duck-sized horses, because they'd be easier to scatter.

comment by Moss_Piglet · 2014-01-12T15:47:53.996Z · LW(p) · GW(p)

Is this a fist-fight or can blacktrance use weapons?

comment by Luke_A_Somers · 2014-01-12T07:16:23.852Z · LW(p) · GW(p)

Any topics of interest? Same goes for other 'whatever's

Replies from: blacktrance
comment by blacktrance · 2014-01-12T08:08:24.684Z · LW(p) · GW(p)

Ethics, I suppose. Most of my other interests are either probably too mindkilling for LW or are written about in the Sequences already, more clearly than I could write about them.

Replies from: VAuroch
comment by VAuroch · 2014-01-13T08:25:37.134Z · LW(p) · GW(p)

What are your Sequence-superseded interests? Would you please name three points from anywhere within them where your opinion differs (even if minorly) from EY (or the author of the most relevant sequence if different)?

Replies from: blacktrance
comment by blacktrance · 2014-01-14T02:42:57.451Z · LW(p) · GW(p)

My sequence-superseded interests include the nature of free will, self-improvement (in the sense of luminosity, not productivity), and general interest in rational thinking.

Three areas where I disagree with the Sequences:

  • Fake Selfishness. EY mistakenly treats "selfishness" as something like wealth maximization, or at least something that excludes caring about others. Selfishness means acting in one's self-interest. There are three major philosophical views as to what people's interests are: hedonism (pleasure), preference satisfaction, and objective-list (i.e. if a person has the things on this list, their interests are being fulfilled). Wealth maximization is only a plausible manifestation of self-interest for a person with very atypical preferences or for an unusual list. There is no reason why egoism would automatically exclude caring about others - in fact, caring about others often makes people happy, and fulfills their preferences. As for the assumption in the sentence "Shouldn't you be trying to persuade me to be more altruistic, so you can exploit me?", that ignores virtue ethical egoism, as in the Epicurean tradition - that is, exploiting people (in the sense in which exploitation is bad) is not conducive to happiness, and that being honest, just, benevolent, etc, is actually in one's self-interest.

  • Not for the Sake of Happiness Alone. EY fails to apply reductionism to human values. He says, "I care about terminal values X, Y, and Z", but when it comes down to it, people would really like pleasure more than anything else, and the distinction between wanting and liking is irrelevant. To indulge in a bit of psychologizing, I think that trying to depict multiple values as irreducible comes from an aversion to wireheading - because if you conclude that all values reduce to happiness/pleasure, you must also conclude that wireheading is the ideal state. But I don't share this aversion - wireheading is the ideal state.

  • Because of the above, I disagree with basically the entirety of the Fun Theory sequence. It seems to be an attempt to reconcile Transhumanism as Simplified Humanism with not wanting to wirehead, and the two really aren't reconcilable - and Transhumanism as Simplified Humanism is correct.

Replies from: TheOtherDave, VAuroch
comment by TheOtherDave · 2014-01-14T18:26:26.993Z · LW(p) · GW(p)

Is there anyone in the world whose well-being you care strongly about?

Replies from: blacktrance
comment by blacktrance · 2014-01-14T19:48:23.030Z · LW(p) · GW(p)

Yes, myself and others, though the well-being of others is an instrumental value.

Replies from: TheOtherDave
comment by TheOtherDave · 2014-01-14T20:20:37.236Z · LW(p) · GW(p)

To confirm: you're the only person whose well-being you care about "terminally"?

Replies from: blacktrance
comment by blacktrance · 2014-01-14T20:28:13.032Z · LW(p) · GW(p)

Yes.

Replies from: TheOtherDave
comment by TheOtherDave · 2014-01-14T20:44:34.675Z · LW(p) · GW(p)

(nods) OK. Accepting that claim as true, I agree that you should endorse wireheading.

(Also that you should endorse having everyone in the world suffer for the rest of their lives after your death, in exchange for you getting a tuna fish sandwich right now, because hey, a tuna fish sandwich is better than nothing.)

Do you believe that nobody else in the world "terminally" cares about the well-being of others?

Replies from: blacktrance
comment by blacktrance · 2014-01-14T20:55:13.060Z · LW(p) · GW(p)

you should endorse having everyone in the world suffer for the rest of their lives after your death, in exchange for you getting a tuna fish sandwich right now, because hey, a tuna fish sandwich is better than nothing

No, because I care (instrumentally) about the well-being of others in the future as well, and knowing that they'll be tortured, especially because of me, would reduce my happiness now by significantly more than a tuna sandwich would increase it.

Do you believe that nobody else in the world "terminally" cares about the well-being of others?

That's a difficult question to answer because of the difficulties surrounding what it means for someone to care. People's current values can change in response to introspection or empirical information - and not just instrumental values, but seemingly terminal values as well. This makes me question whether their seemingly terminal values were actually their terminal values to begin with. Certainly, people believe that they terminally care about the well-being of others, and if believing that you care qualifies as actually caring, then yes, they do care. But I don't think that someone who'd experience ideal wireheading would like anything else more.

Replies from: TheOtherDave
comment by TheOtherDave · 2014-01-14T21:12:34.379Z · LW(p) · GW(p)

I care (instrumentally) about the well-being of others in the future

What is the terminal goal which the well-being of people after your death achieves?

knowing that they'll be tortured

Oh, sure, you shouldn't endorse knowing about it. But it would be best, by your lights, if I set things up that way in order to give you a tuna-fish sandwich, and kept you in ignorance. And you should agree to that in principle... right?

This makes me question whether their seemingly terminal values were actually their terminal values to begin with.

(nods) In the face of that uncertainty, how confident are you that your seemingly terminal values are actually your terminal values?

I don't think that someone who'd experience ideal wireheading would like anything else more.

(nods) I'm inclined to agree.

Replies from: blacktrance
comment by blacktrance · 2014-01-14T21:29:56.961Z · LW(p) · GW(p)

What is the terminal goal which the well-being of people after your death achieves?

Knowing that the people I care about will have a good life after I'm gone contributes to my current happiness.

But it would be best, by your lights, if I set things up that way in order to give you a tuna-fish sandwich, and kept you in ignorance. And you should agree to that in principle... right?

No, because I also care about having true beliefs. I cannot endorse being tricked.

In the face of that uncertainty, how confident are you that your seemingly terminal values are actually your terminal values?

Given the amount of introspection I've done, having discussed this with others, etc, I'm very highly confident that my seemingly terminal values actually are my terminal values.

Replies from: TheOtherDave
comment by TheOtherDave · 2014-01-14T21:56:15.352Z · LW(p) · GW(p)

No, because I also care about having true beliefs. I cannot endorse being tricked.

No trickery involved. There's simply a fact about the world of which you're unaware. There's a vast number of such facts; what's one more?

Replies from: blacktrance
comment by blacktrance · 2014-01-14T23:24:50.497Z · LW(p) · GW(p)

I mean, I can't endorse myself as being better off not knowing something rather than knowing it.

Replies from: TheOtherDave
comment by TheOtherDave · 2014-01-15T01:17:08.800Z · LW(p) · GW(p)

Even if not-knowing that thing makes you happier?

Replies from: blacktrance
comment by blacktrance · 2014-01-15T01:49:46.923Z · LW(p) · GW(p)

I can face reality.

Replies from: TheOtherDave
comment by TheOtherDave · 2014-01-15T01:59:09.574Z · LW(p) · GW(p)

I'm not asking whether you can. I'm asking whether you endorse knowing things that you would be happier not-knowing.

Replies from: blacktrance
comment by blacktrance · 2014-01-15T03:41:56.008Z · LW(p) · GW(p)

If something would affect me if I knew about it, I would prefer to know about it so I can do something about it if I can. I wouldn't genuinely care about the people I care about if I would rather not know about their suffering.

Replies from: TheOtherDave
comment by TheOtherDave · 2014-01-15T04:42:15.719Z · LW(p) · GW(p)

I see.

So I'm curious: given a choice between pressing button A, which wireheads you for the rest of your life, and button B, which prevents the people you care about from suffering for the rest of their lives, do you know enough to pick a button? If not, what else would you need to know?

Replies from: blacktrance
comment by blacktrance · 2014-01-15T04:52:30.661Z · LW(p) · GW(p)

Given that (ideal) wireheading would be the thing that I would like the most, it follows that I would prefer to wirehead. I admit that this is a counterintuitive conclusion, but I've found that all ethical systems are counterintuitive in some ways.

I'm assuming that wireheading would also prevent me from feeling bad about my choice.

Replies from: pragmatist, TheOtherDave
comment by pragmatist · 2014-01-15T05:02:04.327Z · LW(p) · GW(p)

I've found that all ethical systems are counterintuitive in some ways

Might this not be an argument against systematizing ethics? What data do we have other than our (and others') moral intuitions? If no ethical systems can fully capture these intuitions, maybe ethical systematization is a mistake.

Do you think there is some positive argument for systematization that overrides this concern?

To lay my cards on the table, I'm fairly convinced by moral particularism.

Replies from: blacktrance
comment by blacktrance · 2014-01-15T05:34:33.134Z · LW(p) · GW(p)

Not systematizing ethics runs into the same problem - it's counterintuitive, because it seems that ethics should be possible to systematize, that there's a principle behind why some things are right and others are wrong. Also, it means there's no good way to determine what should be done in a new situation, or to evaluate whether what is being currently done is right or wrong.

comment by TheOtherDave · 2014-01-15T04:59:15.140Z · LW(p) · GW(p)

If it also prevented you from knowing about your choice, would that change anything?

Replies from: blacktrance
comment by blacktrance · 2014-01-15T05:46:47.726Z · LW(p) · GW(p)

Could you explain the situation? How would wireheading prevent me from knowing about my choice?

Replies from: TheOtherDave
comment by TheOtherDave · 2014-01-15T15:04:37.613Z · LW(p) · GW(p)

For example: given a choice between pressing button A, which wireheads you for the rest of your life and removes your memory of having been offered the choice, and button B, which prevents the people you care about from suffering for the rest of their lives, do you know enough to pick a button? If not, what else would you need to know?

Replies from: blacktrance
comment by blacktrance · 2014-01-15T15:35:45.116Z · LW(p) · GW(p)

That's an interesting paradox and it reminds me of Newcomb's Problem. For this, it would be necessary for me to know the expected value of valuing people as I do and of wireheading (given the probability that I'd get to wirehead). Given that I don't expect to be offered to wirehead, I should follow a strategy of valuing people as I currently do.

Replies from: TheOtherDave
comment by TheOtherDave · 2014-01-15T16:04:48.804Z · LW(p) · GW(p)

Um, OK. Thanks for clarifying your position.

comment by VAuroch · 2014-01-14T06:06:53.525Z · LW(p) · GW(p)

What objection do you have to the argument "Most humans object to wireheading in principle, therefore wireheading is not the ideal state"?

Because it seems that a state most people would not choose voluntarily is not ideal.

Replies from: blacktrance
comment by blacktrance · 2014-01-14T06:25:41.622Z · LW(p) · GW(p)

Many humans have objected to many things in the past that are now widely accepted, so I don't find that to be a convincing objection.

Many people wouldn't choose it voluntarily, but some drug addicts wouldn't voluntarily choose to quit, either. Even if they didn't want it, they'd like it, and liking is what ultimately matters.

comment by saintthor · 2022-06-10T16:54:38.979Z · LW(p) · GW(p)

I heard something about b-money and Bitcoin. I'd like to hear your views on the technical aspects of a real decentralized cryptocurrency. I hope it can fulfill all the promises.

See the article at 

https://docs.google.com/document/d/1-qQBlSFazNRsoJXikJtIDDjdbjw3nuECXWANqyw0Cwc/edit#heading=h.popn98lucyeh

or https://saintthor.medium.com/

If you can read Chinese, the Chinese version is most recommended at http://guideep.atwebpages.com/acc.html

thanks.

comment by amirsadr · 2022-05-25T19:46:13.981Z · LW(p) · GW(p)

I have been trying to find the origin of the term "b-money". What does the b refer to in Wei Dai's post (http://www.weidai.com/bmoney.txt)? The term "b-money" appears first in the Appendix title. Does "b" denote the second method (with "a-money" referring to the first method)? Or was the term b-money already in use referring to digital money, b[it]-money?

Any info appreciated.

comment by ubiubi18 · 2019-09-20T23:29:18.849Z · LW(p) · GW(p)

Assuming the security risk of growing economic monopolization built into the DNA of proof of work (as well as proof of stake) is going to prevail in the coming years:

Do you think it is possible to create a more secure proof of democratic stake? I know that would require a proof of unique identity, which does not yet exist. So the question also implies: Do you think a proof of unique identity is even possible?

P.S.: Ideas floating around the web to solve the latter challenge are, for example:

  • non-transferable proof of signature knowledge in combination with e-passports
  • web of trust
  • proof of location - simultaneously solved AI-resistant captchas
comment by polymathwannabe · 2014-01-28T20:42:15.140Z · LW(p) · GW(p)

I'm a 31-year-old Colombian guy who writes SF in Spanish. I'm a lactovegetarian teetotaler who sympathizes with Theravada Buddhism. My current job is as chief editor at a small publishing house that produces medical literature. My estimate of the existence of one other LWer near my current location (the 8-million-inhabitant city of Bogotá) is 0.01% per every ten kilometers in the radius of search for the first 2500 kilometers of radius (after that distance you hit the U.S., which invalidates this formula). My mother was an angrily devout Catholic and my father was a hopelessly gullible Rosicrucian. Ask me anything not based on stereotypes about Colombians.

Replies from: komponisto
comment by komponisto · 2014-02-20T21:45:38.211Z · LW(p) · GW(p)

I'm a 31-year-old Colombian guy who writes SF in Spanish....My estimate of the existence of one other LWer near my current location (the 8-million-inhabitant city of Bogotá) is 0.01% per every ten kilometers in the radius of search for the first 2500 kilometers of radius

By my Google-aided calculations (interpreting "0.01% per every ten kilometers in the radius of search" as "0.0001 expected LWers per 314 km^2"), that implies that you think there's about a 14% chance that there's a LWer in Colombia besides yourself.

Can I conclude from this that you're the same person as the (most recent) Spanish translator of HPMoR?

Replies from: polymathwannabe
comment by polymathwannabe · 2014-02-20T22:04:03.494Z · LW(p) · GW(p)

I'm not.

However, I happen to be a Youtopia volunteer, currently working on my own Spanish translation of HPMoR:

https://www.fanfiction.net/s/9971807

I was aware of the translation you cite, but I can't remember having noticed that the translator was Colombian too. I guess that forces me to update my estimation.

Also, I should meet the guy.

Replies from: komponisto
comment by komponisto · 2014-02-20T23:59:10.582Z · LW(p) · GW(p)

However, I happen to be a Youtopia volunteer, currently working on my own Spanish translation of HPMoR:

https://www.fanfiction.net/s/9971807

Neat! You should get Eliezer to include it in the list.

(By the way, I should say that having multiple translations of the same text is very valuable data for language-learners such as myself -- so let me make an appeal to would-be translators out there not to be discouraged by the existence of another translation in your language, whether complete or not.)

I was aware of the translation you cite, but I can't remember having noticed that the translator was Colombian too.

(I concluded that he was because he had a Colombian Creative Commons license for his blog. EDIT: Also, I just noticed that he gives his specific location: Roldanillo, Valle del Cauca, Colombia.)

Replies from: polymathwannabe
comment by polymathwannabe · 2014-02-21T01:12:46.813Z · LW(p) · GW(p)

I concluded that he was because he had a Colombian Creative Commons license for his blog.

To me, that's a nice perspective on people's data-gathering methods. I knew he was Colombian when I saw the flag on his Fanfiction.net profile.

comment by polymathwannabe · 2014-01-28T15:57:39.736Z · LW(p) · GW(p)

I've gotten accustomed to hearing cryonics being described here as the obvious thing to do at the end of your natural life, the underlying assumption apparently being that you'd be hopelessly dumb if you didn't jump at the chance of getting a tremendous potential benefit at a comparatively negligible cost.

So, I have a calibration question for male LWers who come from Jewish families: What is your opinion on foreskin restoration surgery?

Replies from: Nisan
comment by Nisan · 2014-02-15T02:45:37.040Z · LW(p) · GW(p)

That's just stretching skin out, right? It wouldn't increase innervation, so it doesn't seem that valuable if one likes sexual pleasure but isn't particularly attached to the idea of having a foreskin.

I could be wrong. If there's a surgery that can add erogenous tissue to one's body, why stop at the foreskin? This would be highly munchkinable.

comment by DanArmak · 2014-01-18T12:14:14.066Z · LW(p) · GW(p)

Ask me anything.

comment by A-Lurker · 2014-01-16T10:20:27.267Z · LW(p) · GW(p)

I'm an Australian male with strong views on Socialism. I have an interest in modern history and keeping up with international news.

Replies from: asr, blacktrance, Eugine_Nier
comment by asr · 2014-01-17T05:40:51.510Z · LW(p) · GW(p)

What do you mean when you talk about socialism?

comment by blacktrance · 2014-01-16T15:58:04.968Z · LW(p) · GW(p)

Are your strong views in favor of socialism or against it?

Replies from: A-Lurker
comment by A-Lurker · 2014-01-17T00:48:27.732Z · LW(p) · GW(p)

I am strongly for socialism. This comes from two main points of view: 1) I think the ethical thing to do is to work together and help others as opposed to 'every man for himself'. 2) I think that 'teamwork' achieves more, and thus it's not just about what is moral but what actually works better. One way to think of it is that we can either all buy a fire hose and a ladder- or we could pool the money together to pay for a professional team with a truck to service the town.

Replies from: blacktrance, Lumifer
comment by blacktrance · 2014-01-17T02:41:19.197Z · LW(p) · GW(p)

Why do you think capitalism (free markets + private property) is "every man for himself"?

Do you think capitalism and cooperation are opposed? If so, why?

Replies from: A-Lurker
comment by A-Lurker · 2014-01-17T04:02:35.748Z · LW(p) · GW(p)

Why do I think free markets and private property is "every man for himself"?

1) Human nature. Most people can't see past their own nose. In fact some people have such a massive problem finding empathy for other people that they act, for one example, racist and intolerant to other people. To put it simply, I think humankind has demonstrated how selfish and cruel it can be when left unrestrained. To have an entirely free market and everything privately owned would be to set free, and even amplify, all of the nasty things inside people. Just as without laws we will have (more) people hurting each other in society, so too do we need to regulate how people economically interact with each other.

2) Capitalism has nothing to do with morality. Let me give you this hypothetical example: a company can either make $10 a lolly selling type A or it can make $1 selling type B. The 'problem' is that type A is known by the company (but not the public) to be poisonous. This poison will hurt the people taking it but will not hurt the company's profits- as in they won't die too soon or stop buying it for any reason. The only harmful effect is felt by the customer and not the company. Thinking purely from a capitalist point of view, with no other concepts available (such as morality, etc), what should the company do? Sell the poison of course because it's more profitable. In fact most logical and profitable decisions by the nature of the universe are dubious like this. There are even weird situations in the world where someone may be the head of a company- but think it's 'evil'. They may think the company does horrible things and hurts the world, but they themselves are 'just doing their job'. In their mind they tell themselves they wouldn't personally do such things but also acknowledge that it's how the business runs at its most profitable and successful level. As bad as people are, some companies are even worse than those who lead them because it makes business sense to be horrible while it makes social sense for the individuals to hold their personal selves to different standards.

3) Capitalism isn't a sharing thing, so there is nothing left except 'every man for himself'. If people aren't sharing- what are they doing? Think about this entirely hypothetical scenario: There are a total of 5 houses in the world and there are 5 people. All 5 are owned by 1 person and the other 4 have to pay rent. Since there are no other options for these 4 other than living in 1 of these homes, the owner can charge as much rent as they want- as long as it doesn't exceed what the people can pay. What is the capitalist thing to do? To make a maximum profit. In effect these 4 could end up in a situation where they go to work every day simply to be able to afford to eat enough food and sleep in a house to be alive for the next day's work. Capitalism alone has no remedy for this- in fact it would see no need for a remedy at all because it wouldn't see the problem with it. The only way to not be operating from an 'every man for himself' system is to share- but to share would be to not operate capitalism to its full extent or to actually go against it in some ways.

Do I think capitalism is opposed to cooperation?

To put my answer very simply- yes I do think unrestrained capitalism is opposed to cooperation. There is no immediate and personal money to be made by giving some away to another person in a less fortunate position. There is also no money to be gained by a company treating its workers fairly. To be most successful, a company has to wage war against its enemies, use its employees, and prey on its customers. All of these things are on the opposite end of the spectrum from cooperation.

Replies from: blacktrance, Eugine_Nier, Eugine_Nier
comment by blacktrance · 2014-01-17T05:07:38.461Z · LW(p) · GW(p)

As a libertarian, I don't think you and I mean the same things by "capitalism". Could you explain what you mean by "capitalism", and "unrestrained capitalism"?

Replies from: Lumifer, A-Lurker
comment by Lumifer · 2014-01-17T05:13:11.973Z · LW(p) · GW(p)

Given this post it's pretty clear that A-Lurker calls an exceptionally stupid and shortsighted version of egoism "capitalism". I don't know why.

comment by A-Lurker · 2014-01-17T06:28:40.046Z · LW(p) · GW(p)

What I'm talking about when I say that is private ownership and enterprise. When I say unrestrained that means no laws or regulation. For example there are regulations which make companies write the ingredients on food product labels.

Replies from: blacktrance
comment by blacktrance · 2014-01-17T06:34:52.741Z · LW(p) · GW(p)

No laws or regulation? I hope you know that most people who advocate for capitalism aren't anarchists, and those of them who are believe in free-market laws. So there's no one who's in favor of "no laws or regulation".

Replies from: A-Lurker
comment by A-Lurker · 2014-01-17T07:12:25.656Z · LW(p) · GW(p)

Yes I do know that. I nearly mentioned that but didn't. There is of course a wide range of regulation beliefs. Some people do advocate for very little. You are right though, no one does call for no laws or regulation. From that, some people can also learn that the ideas I have are not new or alien, but are actually just an extension or application of the ideas already in place.

comment by Eugine_Nier · 2014-01-17T06:52:27.892Z · LW(p) · GW(p)

Human nature. Most people can't see past their own nose. In fact some people have such a massive problem finding empathy for other people that they act, for one example, racist and intolerant to other people. To put it simply, I think humankind has demonstrated how selfish and cruel it can be when left unrestrained. To have an entirely free market and everything privately owned would be to set free, and even amplify, all of the nasty things inside people. Just as without laws we will have (more) people hurting each other in society, so too do we need to regulate how people economically interact with each other.

And yet you believe the process of taking a government job magically cures people of all these problems?

Replies from: A-Lurker
comment by A-Lurker · 2014-01-17T07:02:06.357Z · LW(p) · GW(p)

No not by magic and it doesn't fix every single problem. But just look at one example if you want to understand my point of view; before fire fighters were socialised, there existed a time in the US where people had to pay private companies or have their house burn down. Socialism didn't magically cure anything but simply removed some of the opportunity for bad things to happen. Can you tell me how your point refutes the fire brigade example?

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-01-17T07:08:56.197Z · LW(p) · GW(p)

But just look at one example if you want to understand my point of view; before fire fighters were socialised, there existed a time in the US where people had to pay private companies or have their house burn down.

Now they have to pay (higher) taxes or be arrested for tax evasion. What's your point?

Replies from: A-Lurker, army1987
comment by A-Lurker · 2014-01-17T07:33:18.110Z · LW(p) · GW(p)

My point is that fires are put out because they are fires and no fire brigades watch a house burn down anymore. You think it means nothing?

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-01-17T07:44:08.932Z · LW(p) · GW(p)

Under the old system people had two choices:

1) Pay a private fire company.

2) Take the risk their house will burn down.

The new system is equivalent to the old except people can only make choice (1) and the private fire company is now a public fire department.

Your claim appears to be that the new system is an improvement even though people have strictly fewer choices.

Replies from: A-Lurker
comment by A-Lurker · 2014-01-17T07:59:44.137Z · LW(p) · GW(p)

The difference is the new system doesn't let houses burn

comment by A1987dM (army1987) · 2014-01-17T17:44:06.477Z · LW(p) · GW(p)

Doesn't it matter whether the amount they have to pay now is the same as the amount they had to pay then? Also, a house burning down increases the risk of other houses around it catching fire.

Dunno about fire brigades, but in the case of health care the data says that socialism is cheaper for better results.

comment by Eugine_Nier · 2014-01-17T07:32:54.246Z · LW(p) · GW(p)

Let me give you this hypothetical example: a company can either make $10 a lolly selling type A or it can make $1 selling type B. The 'problem' is that type A is known by the company (but not the public) to be poisonous.

If someone finds out that they're poisonous he has the option of buying from a different company. By way of contrast, if all lollies were manufactured by the "department of lollies" and the head of the department decided to sell the poison lollies to meet budget constraints, my only recourse is to not consume lollies.

Notice that the private company can engage in this kind of behavior only if they are sure the defect will never be found out. By contrast, the government department has no reason not to produce products with glaring defects; after all, it's not like people can switch to a competing product. Furthermore, the salary of the department head likely isn't even affected by how many people buy the products produced, so he is perfectly happy to waste public resources producing defective products no one wants.

Replies from: army1987, A-Lurker
comment by A1987dM (army1987) · 2014-01-18T10:20:52.263Z · LW(p) · GW(p)

If someone finds out that they're poisonous

Yeah, sure.

comment by A-Lurker · 2014-01-17T07:48:26.934Z · LW(p) · GW(p)

I agree those are issues. That's why I said I think the government has no place making twirly drinking straws- the private market does it better. When we talk about fire departments though, I think the issue still should be addressed, but it doesn't outright kill the concept. It's a negative factor which needs to be mitigated, but I believe it's possible.

comment by Lumifer · 2014-01-17T01:28:24.100Z · LW(p) · GW(p)

I am strongly for socialism.

How do you define "socialism"? Examples would be helpful.

Replies from: A-Lurker
comment by A-Lurker · 2014-01-17T02:57:42.548Z · LW(p) · GW(p)

To me socialism is not an exact system but a concept. In that way, it can be a bit vague, but the general principle is that the resources of a society are best used with a coordinated effort to pool them together, as opposed to spending in an uncoordinated and selfish way.

Whereas some people think that socialism is a system to rival or replace capitalism, my idea of socialism works in tandem with capitalism. To begin with, a lot of industry is best left for private enterprise to deal with. There is nothing to gain from the government owning a twirly drinking straw company or being responsible for coming up with such ideas. Having said this though, these private enterprises provide for the socialist system by paying tax, as do the individual workers. Then there are the industries which are best put in control of the government. This is defined by the fundamental importance they have to society. Governance itself is one example. Other easy examples are roads and infrastructure, police, and fire departments. I think most people would agree that the things I have mentioned are best maintained with collective funding and government control. Where my opinion gets more controversial with some people is that I think socialism should cover health, education, power and public transportation.

Some people think that socialism is something alien and untested in the world- other than through the murderous regimes of Stalin, Mao, etc. This is not true at all. I'll point out this fact while also giving you the examples of the 'socialism' I'm talking about.

The US has a strong anti-socialist base but they have possibly the biggest socialist program in the world. I say possibly because I'm too lazy to check the fact- but it's fairly safe to assume that the world's largest armed forces (US armed forces), which spends about as much as the next 10 biggest spenders in the world, is one of the biggest socialist programs in the world. It's socialist because the money for it is raised by taxing the population. Rather than everyone having to be in a militia and own a gun or some other crazy system, money from the society is pooled together and used in a coordinated fashion.

Another example is the fire department. At some times and places in the world there once existed private fire brigades. When a fire happened, these private crews would arrive at the scene but if the home owner wasn't one of their paying customers- they let the house burn. While this private enterprise system could be replaced by some other type of private model, socialism fills the position very effectively instead. Again, money from society is pooled together and spent in a coordinated way, providing a better service in both effectiveness and social morality.

Replies from: Lumifer, Eugine_Nier
comment by Lumifer · 2014-01-17T03:26:27.354Z · LW(p) · GW(p)

So, "socialism" means to you government ownership and control, right?

Replies from: A-Lurker
comment by A-Lurker · 2014-01-17T04:14:28.147Z · LW(p) · GW(p)

No, not really. Like I said, I think it can play a role alongside and in conjunction with capitalism/private ownership. Even if the government didn't own any companies or whatnot, socialism can still exist in the form of taxation and social spending. It's more about the regulation and distribution of a society's wealth. Once the state starts owning and controlling everything, that's when I would start to call it 'communism' or something along those lines. I am not for this total control and ownership concept, as I think capitalism does play a role in innovation and economic growth. To be communist would be to destroy all the benefits of capitalism.

Replies from: Lumifer
comment by Lumifer · 2014-01-17T05:00:38.288Z · LW(p) · GW(p)

Like I said, I think it can play a role alongside and in conjunction with capitalism/private ownership.

I did not say "complete and total government ownership and control". As you yourself point out, in contemporary societies the government owns and controls a lot. For example, the army, as you said.

Under your definition, is there anything government-controlled that you would not call "socialist"? And in reverse, do you think there is anything socialist that is not connected to the government?

comment by Eugine_Nier · 2014-01-17T03:26:20.619Z · LW(p) · GW(p)

I think most people would agree that the things I have mentioned are best maintained with collective funding and government control.

Do you realize that it's possible to have one without the other?

There is nothing to gain from the government owning a twirly drinking straw company or being responsible for coming up with such ideas. (...) Other easy examples are roads and infrastructure, police, and fire departments. I think most people would agree that the things I have mentioned are best maintained with collective funding and government control. Where my opinion gets more controversial with some people is that I think socialism should cover health, education, power and public transportation.

What criterion are you using to make this distinction?

Replies from: A-Lurker, army1987
comment by A-Lurker · 2014-01-17T04:44:50.643Z · LW(p) · GW(p)

To have one without the other? You mean publicly funded fire brigades that are managed by a private company? Yeah, I can see that. On the other hand, though, I see a lot of problems with a privately run police force. For example, if the chief of police were making a profit from fighting crime, why would he not expand his business by creating more crime to fight?

What criterion do I use to say the government shouldn't make twirly straws but should collect tax for (and possibly run) fire brigades? The nature of the service and how fundamental it is to society. Strong consideration should also be given to the negative effects that personal interests can create. If the only drinking straw company decided it was going to make gold straws, poor people wouldn't get any, but that wouldn't be such a big deal. On the other hand, if fire brigades were run for profit and had private interests, poor people's houses would burn to the ground with fire crews doing nothing but maybe toasting a marshmallow over the flames. Even worse, when business is quiet, a fire station might light some fires.

This may sound a bit vague, but like I said, I think it's a concept and not an actual system. The concept I subscribe to is that the backbone of society should be funded and maintained by the government. In some cases this maintenance can be subcontracted out to private companies rather than micromanaged, but not always (not for police, for example). Going any further than these fundamental social services is most likely going too far and will have too much of a stifling effect on the economy.

Replies from: Lumifer, Eugine_Nier
comment by Lumifer · 2014-01-17T05:03:28.825Z · LW(p) · GW(p)

For example, if the chief of police were making a profit from fighting crime, why would he not expand his business by creating more crime to fight?

Funny that you mention that. The US police basically work on this model, and yet they are government-controlled...

comment by Eugine_Nier · 2014-01-17T06:47:26.090Z · LW(p) · GW(p)

On the other hand, though, I see a lot of problems with a privately run police force. For example, if the chief of police were making a profit from fighting crime, why would he not expand his business by creating more crime to fight?

Only if you pay him per criminal caught, as opposed to making him part of an insurance company that is responsible for reimbursing people victimized by crime.

The nature of the service and how fundamental it is to society.

Food is fundamental to society, should all food production be government controlled?

If the only drinking straw company decided it was going to make gold straws, poor people wouldn't get any, but that wouldn't be such a big deal.

If the only drinking straw company decided it was going to make gold straws, another company would get into the straw making business and start making affordable straws.

Replies from: A-Lurker
comment by A-Lurker · 2014-01-17T08:12:25.873Z · LW(p) · GW(p)

Food is important, and it is supported with taxpayer money by some governments for that very reason. I think government action on it should be considered. Of course, no changes should be made if the system isn't broken, and if changes are made they should be for the better or not at all. I'm not advocating socialism just for the sake of being socialist. When private is better, it's better.

About the straws, you fully missed the point. What I'm saying is that no matter how badly someone screwed up the straw industry, it wouldn't be a serious blow to society. By talking about supply and demand you are changing the subject.

Replies from: army1987
comment by A1987dM (army1987) · 2014-01-18T10:16:22.736Z · LW(p) · GW(p)

Actually I am under the impression that the main effects of agricultural subsidies are to make food cheaper for people in the First World who are already eating (more than) enough, while making competition for Third World farmers much harder.

Replies from: satt
comment by satt · 2014-01-18T17:55:33.852Z · LW(p) · GW(p)

I've seen a relatively upbeat spin on that phenomenon, although I'm not sure how seriously to take all of that article's empirical claims.

comment by A1987dM (army1987) · 2014-01-17T17:51:26.905Z · LW(p) · GW(p)

What criterion are you using to make this distinction?

Drinking straws are rivalrous and excludable. Defence isn't. Roads only become rivalrous when the traffic is congested, and while in principle they're excludable, in practice the cost of operating toll booths is sometimes a huge fraction of the tolls, so to a zeroth approximation they're a transfer from drivers to toll booth operators, with the time spent by the latter as a deadweight loss.

comment by Eugine_Nier · 2014-01-17T07:13:07.142Z · LW(p) · GW(p)

Do you know any economic theory? For example, are you familiar with the concept of supply and demand?

Replies from: army1987
comment by A1987dM (army1987) · 2014-01-17T17:41:15.247Z · LW(p) · GW(p)

Are you familiar with the concepts of externalities, coordination problems, imperfect information, irrationality, etc.?

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-01-18T09:47:34.323Z · LW(p) · GW(p)

Yes, I am. My issue was that A-Lurker appeared to be unaware of both supply and demand and the concepts you listed, judging by the way he attempts to defend socialism.

comment by Alsadius · 2014-01-16T07:31:03.436Z · LW(p) · GW(p)

Sure, what the hell. I'm a financial advisor by trade, so ask me questions in that field if you want expert-type answers, but being opinionated and argumentative is my hobby, so ask me anything.

Replies from: moridinamael
comment by moridinamael · 2014-01-24T20:40:13.284Z · LW(p) · GW(p)

How should I, a normal person, invest my earnings?

Replies from: Alsadius
comment by Alsadius · 2014-01-24T23:32:49.966Z · LW(p) · GW(p)

My rules, in rough order of priority:

1) Take advantage of any employer matching and any tax shelters. These are the only places in all of finance where you will ever find a sure source of free money.

2) Build a reserve fund of liquid and stable assets, sufficient for 3-6 months' unavoidable expenses. Nothing exotic here - bank accounts, money market funds, or conservative mutual funds. This is for covering you if you lose your job, if your furnace breaks and needs replacing in January, or the like.

3) Invest a substantial proportion of your income, like 10-15%, unless you're truly destitute. If you're not, there are people who are living just fine on 85-90% of your current income, so live like them. You can count a proper pension against that number, but this should be over and above government pension taxes. Most national pension schemes are primed for detonation not long after the Boomers retire, and for the same reason I expect significant reductions in growth rates over the long term. You're not going to be able to do what someone who retired in 1997 could do - save a big chunk after the kids graduated school and wind up with a giant nest egg from 20%+ growth every year; we're going to have to work for our retirement. It's much easier to get used to living on 90% of your income now than it is to have to live on 40% of it when you're too old to work. And this should go without saying, but you actually need to live within your means on the other 85-90% - don't go earning $50k, spending $50k, and saving $7k on top of that, or you'll wind up in Consumer Debt Hell.

4) Get disability insurance unless either your income is so low that it could be replaced by government benefits if you were unable to work, or you have good coverage from work (if you have mediocre coverage, top-up plans are fairly cheap). If you have dependants (including a partner), get term life insurance to replace your contributions to the household as well, plus a proper will and power of attorney. Other forms of insurance (critical illness, long-term care, whole life, child life, etc.) are luxuries, but those two are both very important.

5) Match your assets to your liabilities. If you're saving for a house next year, keep it in low-risk investments, and try to minimize up-front costs (or back-end costs) while worrying less about ongoing costs. If you're saving for retirement in 2057, invest in riskier assets, and worry less about short-term fees but more about higher ongoing costs.

6) Try to diversify your portfolio as much as you reasonably can - index funds are a classic choice for doing so cheaply, but they're not perfect. Remember that "your portfolio" doesn't just include things where you get a quarterly report in the mail; it also includes things like your house and your career - don't be one of those Enron employees who lost both their job and their pension when the company tanked. Likewise, if you rent an apartment then a real estate investment may be much more suitable for you than it would be for a homeowner who already has the bulk of his net worth tied up in real estate. Remember also that one stock index is not full diversification - the world is bigger than the S&P 500. Get some of your money overseas, and don't be afraid to put a bit into things like small-cap, emerging markets, commercial real estate, and commodities.

7) Know your risk tolerance, and invest accordingly. If you're the sort of person who will pull their money out when its value tanks, you shouldn't be in anything volatile - people like that lost their shirts in 2008-09, while the more placid investors made their money back pretty easily. If you can't handle a 30% drop, find an investment that won't lose 30%. Fortunately, this is pretty easy right now, because we have a very nice stress test of investment performance that's recent enough that it still shows up on performance charts. If you're a conservative investor, take a look at your investment's performance in 2008, and make sure you could handle seeing that on your next quarterly statement without panicking.

8) Don't be afraid to find an expert to help you. Some people treat finance as a hobby and do just fine, but if you're not one of those, you can sometimes be stunned by how much money you've left on the table. As an example, a guy at my office spoke the other day to a business owner with two teenage kids. He told him to bring the kids on as employees at $10k/year, to get money out of the corporation virtually tax-free instead of taking it as personal salary and paying taxes at his (very high) rate. With one sentence he saved him $9k per year that the owner didn't even know was on the table. I may be biased here, but I think I'm worth what I get paid, and most of my clients agree.

Obviously, these rules are not absolutes. For example, both employer matching and tax shelters often come with restrictions that may make them undesirable. But if you follow those rules, you'll be doing personal finance better than 90% of people.
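A minimal sketch of the compounding behind rule 3, in Python. Every number here (the income, the 5% annual return, the 30-year horizon, the 4% withdrawal rate) is an illustrative assumption of mine, not something from the advice above:

```python
# Rough projection of how a steady savings rate compounds over a career.
# All inputs are illustrative assumptions, not recommendations.

def project_nest_egg(gross_income, savings_rate, annual_return, years):
    """Future value of saving `savings_rate` of income each year,
    with contributions made at year end and returns compounding annually."""
    balance = 0.0
    for _ in range(years):
        balance = balance * (1 + annual_return) + gross_income * savings_rate
    return balance

if __name__ == "__main__":
    income = 50_000  # hypothetical gross income
    for rate in (0.10, 0.15):
        egg = project_nest_egg(income, rate, annual_return=0.05, years=30)
        # 4% is a common rule-of-thumb withdrawal rate, used here only for scale.
        print(f"Saving {rate:.0%} for 30 years at 5%: "
              f"~${egg:,.0f}, supporting ~${egg * 0.04:,.0f}/year at a 4% draw")
```

Running it with different rates and horizons makes it easy to see how much the gap between a 10% and a 15% savings rate widens over a full career.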

Replies from: moridinamael
comment by moridinamael · 2014-01-28T16:25:22.490Z · LW(p) · GW(p)

Follow-up question. If one uses a spreadsheet and basic math, one concludes that one should pay off debt before starting to save, because debt is usually at a higher interest rate than savings. However, one frequently hears that one should not in fact do this; rather, that one should always be saving. It's frustrating, because I can show you exactly how much money I'm burning in interest payments by following this advice. Can you justify or possibly refute this oft-repeated wisdom?

Replies from: Alsadius
comment by Alsadius · 2014-01-28T20:21:45.900Z · LW(p) · GW(p)

There are a couple of circumstances where having both loans and savings makes sense. One is the emergency fund - if you have a $3000 problem with your car, you can't pay your mechanic with your lack of a mortgage; you need cash. Having to go get a loan at that point isn't really practical. If you have revolving credit (credit card/line of credit) then you can use that, but a traditional loan that goes away when it's paid off is no good.

The second case is leverage loans. When you borrow for business purposes, the interest can be used as a tax deduction (in most countries; consult local tax advice before trying to do this). Depending on how you invest, you can get tax-advantaged returns in the form of dividends and/or capital gains, whereas the loan interest is a tax deduction at ordinary income tax rates. In Canada, capital gains are taxed at half the normal rates, so if you borrow at 4% and earn 6% in capital gains, you're getting a deduction of 4% of the loan but only paying tax on 3%, so you pocket a net tax deduction of 1% of the loan size, on top of earning a spread of 2% (a small worked sketch of this arithmetic follows below). All this without putting any of your own money in. That said, this is a higher-risk strategy, because if you lose money you still need to pay back the loan, so it's not for everyone.

The third reason is psychological. Some people believe that you should make a habit of saving. Depending on who you are personally, it may be a decent tactic.
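To make the leverage-loan arithmetic from the second case concrete, here is a small sketch that re-does the percentages above on an assumed loan size; the loan amount and the 40% marginal tax rate are my own placeholders, not figures from the comment:

```python
# Worked version of the leverage-loan example: borrow at 4%, earn 6% in
# capital gains, deduct the interest, and pay tax on only half the gain (Canada).
# Loan size and the 40% marginal rate are illustrative assumptions.

loan = 100_000          # hypothetical loan size
borrow_rate = 0.04
gain_rate = 0.06
inclusion = 0.5         # fraction of capital gains that is taxable in Canada
marginal_rate = 0.40    # assumed marginal tax rate

interest = loan * borrow_rate            # deductible: 4% of the loan
gain = loan * gain_rate                  # 6% of the loan
taxable_gain = gain * inclusion          # only 3% of the loan is taxed
net_deduction = interest - taxable_gain  # net 1% of the loan off taxable income
pre_tax_spread = gain - interest         # 2% of the loan earned before tax

print(f"Deductible interest: ${interest:,.0f}")
print(f"Taxable gain:        ${taxable_gain:,.0f}")
print(f"Net deduction:       ${net_deduction:,.0f} "
      f"(worth ~${net_deduction * marginal_rate:,.0f} at a {marginal_rate:.0%} rate)")
print(f"Pre-tax spread:      ${pre_tax_spread:,.0f}")
```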

comment by jobe_smith · 2014-01-16T01:26:44.884Z · LW(p) · GW(p)

I've worked in high frequency trading in Chicago as a trader and developer for 11.5 years. I am an expert on that stuff. AMA.

Replies from: niceguyanon
comment by niceguyanon · 2014-02-11T20:29:18.961Z · LW(p) · GW(p)
  • How much of your firms profits are from providing a market (giving liquidity) vs actually taking an outright position in the market?

  • Are there new strategies being developed constantly, or are there just tweaks to an overall proprietary algorithm?

Replies from: jobe_smith
comment by jobe_smith · 2014-02-12T12:41:40.077Z · LW(p) · GW(p)
  • More than 100% of my profits are from market making. Overall, I lose money on my positions. For the firm as a whole, position trading might be slightly profitable.

  • I have a basic strategy that works, and I run a couple of variants on that strategy on a decent number of products. I am always trying to tweak the strategy to make it better, and to add more products to trade. I also put some effort into developing new ideas. Most of the time, new ideas are a waste of time. There just aren't that many fundamentally different strategies that work and that provide the kind of risk/return profile that works in my industry. I know it is a cliche that you learn more from failure than from success, but in developing trading strategies I think the opposite is true. You can spend forever trying things that don't work. It's much more valuable to understand and refine an idea that basically works.

Replies from: niceguyanon
comment by niceguyanon · 2014-02-12T16:13:59.928Z · LW(p) · GW(p)

Thanks for the response. This is exactly why I tell everyone who thinks they should dabble in trading to stop it. The regular person who thinks they can beat the stock market for alpha has huge odds stacked against them.

Real professionals that work at true proprietary trading firms:

  • Are not paying stupid amounts of money on retail commission.

  • Have direct access to exchanges via having a seat at the exchange, no middle broker.

  • Are using true HFT (not just automated trading) with colocated servers at these exchanges.

  • Market making for tiny, tiny spreads - it's how most trading outfits make their money, not positional trades (real data is hard to come by, but anecdotal evidence on this is abundant).

Since you are the expert, is my assessment mostly accurate?

Replies from: jobe_smith, Lumifer
comment by jobe_smith · 2014-02-12T20:12:06.047Z · LW(p) · GW(p)

I think you are correct about what prop trading firms do, but I am not so pessimistic about the prognosis for retail investors. I don't think retail investors can compete with professional prop traders at what they do, but I think that they can do better than just sticking their money in index funds, at least on a risk adjusted basis.

comment by Lumifer · 2014-02-12T16:28:23.511Z · LW(p) · GW(p)

is my assessment mostly accurate?

No, it is not.

Prop trading -- understood as extracting money out of financial markets -- is very diverse. Some people care about milliseconds and build their own microwave links between New York (actually, New Jersey :-D) and Chicago. They make millions of trades per day and make a tiny fraction of a cent on each trade. Some people care about long-term value, trade a few times a year and hold their positions for years. Some people do arbitrage. Some people do distressed investing. Some people do convertibles, or M&A, or IPOs. Some people make macro bets. Some people do something else in the markets. All of them are "real professionals".

Replies from: niceguyanon
comment by niceguyanon · 2014-02-12T17:33:43.584Z · LW(p) · GW(p)

The type of prop firms you are describing are really just back office pools or trading arcades. Say I am a broker-dealer and I have a business that attracts chumps to deposit money with me; I allow them to trade through me, and all the traders share expenses like office space and other services. The burnout rate is really high. Notice that this supposed "prop firm" is making most of its money on commissions from its own traders, not from actually realizing gains from their trading. Contrast this to a division in an investment bank that hires actual employees as programmers and quants, whose profits are determined by trading. I'm not saying all prop firms are just back office pools, but I know for a fact that a lot of them are. So maybe we just have different meanings for what a prop trader and a prop firm are.

http://traderfeed.blogspot.com/2008/07/proprietary-trading-firms-arcades-and.html

Since this is an AMA, let's just ask jobe. Do you trade your own capital or 100% firm capital? Jobe may very well be working at a prop shop like I described, and I'm not putting him down if he is. It's just a known fact that most prop shops in Chicago or New York are of the scam-ish type.

Contrast Bright Trading, the most well-known and probably the biggest "prop firm" in Chicago, with Jane Street. The former is nothing more than a glorified back office under the banner of proprietary trading, where traders put up their own money; the latter is my definition of a REAL prop firm.

http://www.elitetrader.com/vb/showthread.php?t=276210

Again no offense to jobe if he works for the former type of prop firm.

Replies from: jobe_smith, Lumifer, Lumifer
comment by jobe_smith · 2014-02-12T20:22:20.737Z · LW(p) · GW(p)

Do you trade your own capital or 100% firm capital?

I trade 100% firm capital, not my own. I've heard of bright and places like that but there are lots of real prop trading firms, that actually make their money from trading. Here are some I can think of off the top of my head:

  • Getco
  • Virtu
  • DRW
  • Allston
  • Ronin
  • HTG
  • Chopper
  • Sun
  • Optiver
  • Tower Research
  • Teza
  • Wolverine
  • Marquette Partners
  • Jump
  • Eagle 7
  • Peak 6

etc.

comment by Lumifer · 2014-02-12T18:44:07.275Z · LW(p) · GW(p)

The type of prop firms you are describing are really just back office pools or trading arcades.

Actually, no, the type of prop firms I am describing are typically small (in terms of personnel, not in terms of AUM) hedge funds which run some manager's own money and some outside money.

Facilities for day traders are a different thing entirely and I don't talk about them here.

Replies from: niceguyanon
comment by niceguyanon · 2014-02-12T18:52:26.836Z · LW(p) · GW(p)

I guess we are talking about two different things then; I have never heard anyone use the term "prop firm" in your sense. That doesn't make it wrong, though.

Edit: If you are describing a small hedge fund that manages the manager's money and outside money, then it's just a hedge fund any way you cut it. Proprietary means only the firm's capital is used, and for no clients.

Replies from: Lumifer
comment by Lumifer · 2014-02-12T19:02:33.882Z · LW(p) · GW(p)

then it's just a hedge fund any way you cut it

And how does that matter for the original point under discussion -- whether someone outside of an investment bank or a big hedge fund family (e.g. Blackrock) can successfully extract money out of financial markets?

If someone actually has a working strategy, he typically doesn't just trade it, he starts a small hedge fund.

Replies from: niceguyanon
comment by niceguyanon · 2014-02-12T19:35:40.112Z · LW(p) · GW(p)

Because the original point under discussion was contrasting prop trading in the context of a real prop firm vs an outsider engaging in prop trading, and the advantages afforded to the former.

Anyway, I already know your position regarding whether an outsider can successfully extract money out of the financial markets and you know mine.

Out of curiosity, what do you do, if you don't mind me asking? I'm asking because you do know a lot about this topic, even though we disagree on some things.

comment by Lumifer · 2014-02-12T17:49:44.544Z · LW(p) · GW(p)

An "average joe" in the US has an IQ a bit below 100 and does not have a decent chance at a great many things in life.

Now, whether a high-IQ guy has a decent chance is a different question, and an interesting one, too.

Replies from: niceguyanon
comment by niceguyanon · 2014-02-12T18:37:50.370Z · LW(p) · GW(p)

Sorry I was in the middle of editing what I wanted to say and you responded too quickly.

So what you responded to may have changed.

comment by Kawoomba · 2014-01-15T10:33:08.774Z · LW(p) · GW(p)

I am K. Woomba. I'll answer any question so long as it contains an even number of "a"'s XOR is a question I decide to answer. Also, please no questions about why I skipped work today.

(Nice to see some less active old-timers active in this thread again.)

Replies from: pragmatist, FiftyTwo
comment by pragmatist · 2014-01-15T11:04:33.135Z · LW(p) · GW(p)

Do you consider zero to be even?

(If yes, I hope you don't decide to respond to this question.)

Replies from: Kawoomba
comment by Kawoomba · 2014-01-16T07:20:56.304Z · LW(p) · GW(p)

I decided against responding to such a silly trap, so because zero is even and the truth condition thus fulfilled, I decided to answer the question, so because I decided to respond to the question and zero is even, I decided against responding to such a silly trap, so -s--s-..... -...

[You'll be our first line of defense against uFAI, smithereening it with a simple question.]

comment by FiftyTwo · 2014-03-20T01:04:35.131Z · LW(p) · GW(p)

Why are you using an arbitrary rule?

Replies from: Kawoomba
comment by Kawoomba · 2014-03-20T09:32:38.304Z · LW(p) · GW(p)

Isn't everyone?

comment by kpreid · 2014-01-15T02:18:23.111Z · LW(p) · GW(p)

You can ask me something. I don't promise to answer. If you've never heard of me and want to ask me something anyway, here's some hooks:

  • I have many opinions on how humans interact with computers and how computers interact with computers; i.e. user interface design, programming language design, networking, and security.

  • I consider myself to have an akrasia problem but am reasonably successful* in life despite it, for causes which appear to me to be luck or other people's low standards.

  • Web site, blog, GitHub

* To be more precise, I have money (but many unfinished goals which I don't see how to throw money at). Though putting it in those words suggests some ideas…

Replies from: None
comment by [deleted] · 2014-01-15T03:59:26.821Z · LW(p) · GW(p)

How does your IRC Teddybot software work to help anyone solve problems - is this like the old Eliza program where questions are paraphrased back to the user?

Replies from: kpreid
comment by kpreid · 2014-01-15T15:23:37.808Z · LW(p) · GW(p)

I only implemented it to the specification; I suggest you take up that question with the designer. If I had to answer myself (it's been a few years since that project), I would say that it is like the "cardboard programmer" or "rubber duck": its value is in giving you something to address your one-sided conversation to. It does just a little bit more than the rubber duck.

I can say that unlike Eliza it doesn't use any of the content of incoming messages at all, except to distinguish when it is addressed from when it is not, and whether there is a question mark.

(Thanks for reminding me that I never published the (trivial) source code; I should fix that.)
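For concreteness, here is a minimal sketch, in Python, of a bot with the behaviour described: it never looks at message content except to check whether it is addressed and whether there is a question mark. This is my own guess at how such a bot might look, not the actual (unpublished) teddybot code.

```python
# Toy "rubber duck" chat bot in the spirit described above: it never inspects
# the content of a message, only whether it is addressed and whether the
# message contains a question mark. A sketch, not the real implementation.

import random
from typing import Optional

NICK = "teddybot"

QUESTION_REPLIES = ["Hmm, what do you think?", "Good question - say more?"]
STATEMENT_REPLIES = ["I see.", "Go on.", "Mm-hmm."]

def reply(message: str) -> Optional[str]:
    """Return a canned response if the bot is addressed, else None."""
    if not message.lower().startswith(NICK + ":"):
        return None  # not addressed: stay silent
    if "?" in message:
        return random.choice(QUESTION_REPLIES)
    return random.choice(STATEMENT_REPLIES)

if __name__ == "__main__":
    print(reply("teddybot: why won't my parser terminate?"))
    print(reply("teddybot: I think the grammar is left-recursive."))
    print(reply("unrelated chatter"))  # -> None
```

The point, as with the rubber duck, is that almost all of the value comes from the user articulating their problem; the bot only has to be present and minimally responsive.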

comment by shminux · 2014-01-13T04:31:22.717Z · LW(p) · GW(p)

.

comment by Gunnar_Zarncke · 2014-01-12T21:41:11.156Z · LW(p) · GW(p)

Ask me about parenting.

Replies from: fubarobfusco, None
comment by fubarobfusco · 2014-01-13T01:16:25.561Z · LW(p) · GW(p)

Montessori education: Good idea? Bad idea? Fish?

Replies from: oooo
comment by oooo · 2014-01-13T02:38:34.072Z · LW(p) · GW(p)

A North American non-Montessori educator (a daycare director) said that Montessori is different in various parts of the world. I did not do more research into this, and obviously such a comment could easily be biased and seen as having an agenda. However, based on this comment alone, I'm also interested in whether you (Gunnar_Zarncke) thought about putting your children through (European) Montessori.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2014-01-13T09:13:38.639Z · LW(p) · GW(p)

I considered Montessori education but not in depth as it wasn't really an option locally.

I think that the qualification/experience/dedication of the teachers/educators matters more than the specific concept, and you should really visit the kindergarten/school during a normal day. My wife sat in on classes at local schools during her teacher training and recommended specific schools and teachers (and that does matter: the parallel class of my oldest has a teacher with a prejudice against boys). But even as a non-teacher you should be able to at least have a look and chat.

The Montessori teaching material is very good though and that is obviously independent of teachers.

For preschool I recommend forest kindergarten. http://en.wikipedia.org/wiki/Forest_kindergarten (the picture is realistic) All our children visit(ed) forest kindergarten, but you have to consider that certain pre-school topics may get less attention (they don't need to; our kindergarten was awarded for its curriculum and is strong in e.g. math topics). Again, it depends on the institution and people.

comment by [deleted] · 2014-01-12T23:36:52.774Z · LW(p) · GW(p)

How do you instill discipline (e.g. don't be mean to your sister, wash your hands after the potty, no jumping on the couch, etc.) without being authoritarian and while maintaining a positive self-image?

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2014-01-13T10:12:47.794Z · LW(p) · GW(p)

That is a complex question and admits of no general answers, I fear. It depends a lot on your temperament and that of the children. I could recommend a few books, but maybe some personal experience will give you some ideas:

don't be mean to your sister,

This is really a continuous issue between my two oldest, as the 10-year-old always wants to win and outsmart and take advantage of the younger ones. He is not mean but can be quite rough; kind of insensitive, but it is balanced insofar as he can take it too. And the 7-year-old is smart enough to notice when he is taken advantage of and just doesn't let his older brother win. He is more balanced but tends to explode or retreat, and he gets his revenge by using low-profile needling.

They can play together happily for hours, but only until some competition topic comes up, which it does sooner or later. And that may be. I have tried without end to talk with them about their conflict and to find rules they could both agree on, and they did (like both going to their room, agreeing to apologize, allowance reduction for harsh language, ...), which didn't really change anything. They continue and either just accept the consequences or get into fights with me over my enforcing of our agreed consequences.

What is the lesson? Separate them, keep a close eye on competitive situations, or accept their conflict as an inevitable part of sibling rivalry.

I understand that this is basically normal behavior; I am rumored not to have been much different with my younger brother, and my wife is said to have had fist and kick fights with her brother.

Nonetheless, our adult values (here especially my strong emotional reaction when others are hurt) cause significant stress. It just isn't clear enough whether the truth is that nothing can/should be done, or that you can do the right thing if you just think and try enough.

wash your hands after the potty,

As an example of hygiene, I guess. It helps if you talk about and explain the biology. About viruses tricking the cells in your body and being in your snot and excrement. About bacteria eating sugar and peeing on your teeth. About bacteria eating your cells and your immune system tagging and eating the bacteria. Bacteria eating meat that has no running immune system and multiplying. Doing the repeated multiplication.

But don't overdo it. They should still happily play in the dirt (actually most small children literally eat dirt when not actively kept from it).

Also consider http://en.wikipedia.org/wiki/Hygiene_hypothesis

no jumping on the couch

But they like to.

We had a strong rule against that too. We had an old sofa in the basement - a rampage room - where they could.

But consider Christopher Alexander's advice that the children's play area is (or should be) the contiguous area connecting the children's rooms and the outside. If that area goes through the living room, then the sofa is prey at least temporarily and you are fighting windmills. When my wife moved out I gave in and now just restrain overuse by the older ones. The furniture is modular and the cover can be washed.

comment by itaibn0 · 2014-01-12T17:36:36.554Z · LW(p) · GW(p)

I'm in grade 12, and know math at about a graduate level. AMA.

Replies from: whales, Anatoly_Vorobey
comment by whales · 2014-01-12T19:43:35.167Z · LW(p) · GW(p)

To what extent did you study math on your own initiative? (What kind of support did you have from parents, teachers, and institutions? Were resources readily available, or did you have to work to seek them out?)

Replies from: itaibn0
comment by itaibn0 · 2014-01-14T00:05:44.655Z · LW(p) · GW(p)

I had a lot of support from my parents, who are both mathematicians. They taught me when I was little, and then gave me textbooks soon after I learned to read. On the other hand, I learn from the textbooks entirely on my own and often the books are on subjects I requested. Also, a lot of what I know I learned on the internet, and that's entirely self-directed.

Teachers and schools typically weren't much help because my knowledge was usually above their level (I didn't skip any grades and was even a grade behind in elementary school). However, recently I started taking classes at a university (at my parents' initiative), so that's changing.

I don't consider myself to be a conscientious person. When you phrase it as "did you have to work to seek them out" I am inclined to answer 'no' because I usually don't consider learning math as work.

comment by Anatoly_Vorobey · 2014-01-12T18:17:41.080Z · LW(p) · GW(p)

What are the top 5 fiction books you've read in your life?

comment by chowfan · 2018-01-27T08:03:11.476Z · LW(p) · GW(p)

Hi Wei. Do you have any comments on Ethereum, ICOs (Initial Coin Offerings), and hard forks of Bitcoin? Do you think they will solve the problem of Bitcoin's fixed monetary supply, since they have somehow brought much more "money" (or securities like stock; I'm not sure how to classify them)?

Do you have any comments about Bitcoin's scaling fight between larger blocks and second-layer payment channels such as the Lightning Network?

comment by [deleted] · 2015-11-03T01:34:48.707Z · LW(p) · GW(p)

I'd like to answer questions about how AI might represent itself as a LW user, or vice-versa.

comment by Punoxysm · 2014-03-07T03:29:02.878Z · LW(p) · GW(p)

Ask me anything. Until recently I was a machine learning and text mining graduate student. Now I work on data visualization in a corporate setting (but I can't talk a ton about that).

Replies from: asr
comment by asr · 2014-03-07T05:48:57.855Z · LW(p) · GW(p)

How do you feel about moving from research to industry? Did you leave before or after getting your degree, and do you ever wish you had left earlier/later?

Replies from: Punoxysm
comment by Punoxysm · 2014-03-07T06:08:07.280Z · LW(p) · GW(p)

I left before completing my degree, but I retain the option of returning. The lifestyle and culture of academia, the chance to do research, and the possibility of entering a research career are the advantages of academia over industry. At the time I left, none of those were as compelling to me as the career I've entered.

I'm glad I gave academia a shot, and I left not too long after I had figured out my priorities weren't well-satisfied by staying in grad school.

I am sorry if that's not very helpful. It was just a natural move for me at the time.

comment by asr · 2014-01-14T01:14:44.889Z · LW(p) · GW(p)

I'm a computer science researcher, working in systems and software engineering research. I'm particularly qualified to talk about the experience of academic computer science, but AMA.

Replies from: Anatoly_Vorobey
comment by Anatoly_Vorobey · 2014-01-14T19:17:37.286Z · LW(p) · GW(p)

The way I understand it, academic CS has reached a state of affairs where peer-reviewed journal publications (almost) don't matter, and conferences are the primary vehicles for communicating research and acquiring status. Do you have an opinion on this state of affairs? Is it harmful? Beneficial? Why did CS converge on it, and not, say, math/physics?

To an outsider like me, it just seems so weird.

Replies from: asr
comment by asr · 2014-01-14T20:10:03.467Z · LW(p) · GW(p)

Yes, in general we prefer conferences to journals. It's a little more complicated than "journals don't matter." In each subfield of CS (systems, AI, graphics, etc), there are several conferences and several journals. Generally, the best conferences and even the second-tier conferences, are more prestigious than even the top-tier journals. The best journals are better than low-tier conferences. And peer-reviewed glossy magazines, like Communications of the ACM or IEEE Computer, are high-visibility and well respected.

Bear in mind that the conference and journal models aren't so different in their final output. In each area of CS that I'm familiar with, the papers that appear in "Proceedings of the ACM Symposium on Quintessence" or whatever the conference is called are written in the same style and have comparable length etc to the papers at the ACM Transactions on Quintessence [journal]. So the real difference is in the reviewing process, not in the writing style.

Neither the journal nor the conference model is great. The problem with journals is that they're incredibly slow -- for one of my articles, it took a year and a half from submission to publication. They're also annoyingly bureaucratic -- they have fussy typesetting and layout rules that generally don't improve the quality of the final work.

The impression people have is that it's rare for a paper to be rejected outright -- mostly even a relatively weak paper gets "here are the major flaws, fix and resubmit." And then it becomes siege warfare where the author fixes a few flaws and sends it back for re-review, and at some point either the author gets fed up and quits, or the reviewers get fed up and say "okay fine, publish it."

The problem with conferences is that they have to make an up-or-down decision on each submitted paper. The timeline means there's no way for them to say "this paper has serious but correctable flaws and we will publish it if you fix it." You can always resubmit to a different conference, but it sometimes happens that conference A says "too much X", and conference B says "not enough X" -- there's no continuity in reviewers. (In contrast, journal reviewers stay with the paper from submission through the "revise and resubmit" cycle.) The advantage of conferences is that the author gets an answer quickly and the paper, if published, gets attention quickly.

There's been some motion towards hybrid publication systems. There's a highly respected conference on Very Large Databases (VLDB). For the last few years, the model has been that you submit your paper to the VLDB Journal. The journal has a fixed reviewing cycle of a few months, fixing the slow-review problem. Authors of published papers get to give a talk at the VLDB conference, fixing the visibility problem. OOPSLA, a prominent programming language conference, has switched to two-round reviewing, which helps fix the "up or down and no continuity" problem.

As to how this situation got going -- part of it is that it's path-dependent and feeds on itself. Once journals become low-status, people stop reading them. In science, visibility is the coin of the realm, and a conference talk at the leading venue is therefore much better than a journal article nobody reads. And once journals are low-status, the leading people don't want to review for them, and so you get lower-quality reviews. Conversely, once a conference becomes highly selective and prestigious, people send papers there because they want the stamp of approval of "the best people send their papers to ACM-Quintessence and only the top 20% get in."

I don't have a great sense of why CS has this model and no other field does. I have one guess: Most papers are rejected for being boring / incremental, not for being wrong or having weak evidence for an interesting claim. Mostly computers are easy to experiment on. The machines are pretty deterministic, it's easy to share code and data, and most things we care about aren't very expensive.

The conference model is pretty good for deciding if something is interesting. Whereas a journal has an editor and a couple reviewers, the conference model is to assemble a program committee of 15-30 people, and they discuss and vote on each paper. So that gets you a bigger sample of expert opinion on whether something is interesting and valuable.

My impression is that in physics, experiments are easier to get wrong and more important to replicate, and therefore the reviewers are there to check the details, not just give a high-level analysis of whether the paper solves an interesting and important problem with a solution with nontrivial elements that can be reused elsewhere.

I don't love the CS publication model, but my sense is that nobody likes the publication process. Pretty much all the jokes and complaints about peer review in physics or biology feel relevant to me. So I think it's just a sign of a healthy scientific field that having a highly-respected publication means having it scrutinized by skeptical experts who don't pull their punches.

comment by jaime2000 · 2014-01-12T20:33:16.545Z · LW(p) · GW(p)

I sometimes speak English fluently, possess a high school diploma, and live in the great United States of America. If you ask a question, I may answer.

comment by shminux · 2014-01-12T08:45:40.434Z · LW(p) · GW(p)

AALWA Requests go here.

comment by ThrustVectoring · 2014-01-12T05:27:07.082Z · LW(p) · GW(p)

I've read the sequences and have a pretty solid grip on what the LW orthodox position is on epistemology and a number of other issues - anyone need some clarification on any points?

Replies from: Benito, pianoforte611
comment by Ben Pace (Benito) · 2014-01-12T09:01:16.693Z · LW(p) · GW(p)

Could you summarise the point of/ the conclusions of the posts about second order logic and Gödel's theorems in the Epistemology Sequence? I didn't understand them, but I'd like to know where they were heading at least.

Replies from: ThrustVectoring
comment by ThrustVectoring · 2014-01-12T14:01:00.053Z · LW(p) · GW(p)

I don't quite have the mathematical background and sophistication to grok those posts as well, but I did get their purpose - to hook mathematicians into thinking about the open problems that Eliezer and MIRI have identified as being relevant.

comment by pianoforte611 · 2014-01-12T14:11:48.238Z · LW(p) · GW(p)

I'm guessing you think free will is a trivial problem; what about consciousness? That still baffles me.

Replies from: ThrustVectoring
comment by ThrustVectoring · 2014-01-12T14:21:55.301Z · LW(p) · GW(p)

The most apt description I've found is something along the lines of "consciousness is what information-processing feels like from the inside."

It's not just about what a brain does, because a simulated brain would still be conscious, despite not being made of neurons. It's about certain kinds of patterns of thought (not the physical neural action, but thought as in operations performed on data). Human brains have it, insects don't, and anything in between is something for actual specialists to discuss. But what it is - the pattern of data processing - isn't all that mysterious.

Replies from: pianoforte611, Locaha
comment by pianoforte611 · 2014-01-12T21:21:02.782Z · LW(p) · GW(p)

Okay but why does information processing feel like anything at all? There are cognitive processes that are information processing but you are not conscious of them.

comment by Locaha · 2014-01-12T18:30:43.644Z · LW(p) · GW(p)

Human brains have it

How do you know?

Replies from: ThrustVectoring, gjm, Eugine_Nier
comment by ThrustVectoring · 2014-01-13T00:44:35.368Z · LW(p) · GW(p)

I find it awfully suspicious that the vast majority of humans talk about experiencing consciousness. It'd be very strange if they were doing so for no reason, so I think that the human brain has some kind of pattern of thought that causes talking about consciousness.

For brevity, I call that-kind-of-thinking-that-causes-people-to-talk-about-consciousness "consciousness".

Replies from: Locaha
comment by Locaha · 2014-01-13T06:35:46.903Z · LW(p) · GW(p)

The definition "it has it if it talks about it" is problematic. You can make a very simple machine that talks about experiencing consciousness.

Replies from: ThrustVectoring, gjm
comment by ThrustVectoring · 2014-01-13T15:31:01.991Z · LW(p) · GW(p)

You can make a very simple machine that talks about experiencing consciousness.

And that simple machine does so because it was made to do so by people experiencing consciousness.

Replies from: Locaha
comment by Locaha · 2014-01-13T18:33:37.518Z · LW(p) · GW(p)

And that simple machine does so because it was made to do so by people experiencing consciousness.

How do you know?

Replies from: ThrustVectoring
comment by ThrustVectoring · 2014-01-13T20:21:18.858Z · LW(p) · GW(p)

I find it awfully suspicious that the vast majority of humans talk about experiencing consciousness. It'd be very strange if they were doing so for no reason.

comment by gjm · 2014-01-13T15:17:40.395Z · LW(p) · GW(p)

And if interaction with such machines is the only ground you have for thinking that anything experiences consciousness, I think it would be reasonable to say that "consciousness" is whatever it is that makes those machines talk that way.

In practice, much of our notion of "consciousness" comes from observing our own mental workings, and I think we each have pretty good evidence that other people function quite similarly to ourselves, all of which makes that scenario unlikely to be the one we're actually in.

comment by gjm · 2014-01-13T15:16:02.263Z · LW(p) · GW(p)

How does anyone learn what the term "consciousness" applies to? So far as I can tell, it's universally by observing human beings (who are, so far as anyone can tell, implemented almost entirely in human brains) and most specifically themselves. So it seems that if "consciousness" refers to anything at all, it refers to something human brains -- or at least human beings -- have. (I would say the same thing about "intelligence" and "humanity" and "personhood".)

I suppose it's just barely possible that, e.g., someone might find good evidence that many human beings are actually some kind of puppets controlled from outside the Matrix. In that case we might want to say that some human brains have consciousness but not all. This seems improbable enough -- it seems on a par with discovering that we're in a simulation where the electrical conductivity of copper emerges naturally from the underlying laws, while the electrical conductivity of iron is hacked in case by case by experimenters who are deliberately misleading us about what the laws are -- that I feel perfectly comfortable ignoring the possibility until some actual evidence comes along.

comment by Eugine_Nier · 2014-01-14T01:14:51.954Z · LW(p) · GW(p)

I know I'm conscious because I experience it. As for everyone else, really I'm generalizing from one example.

Replies from: Locaha
comment by Locaha · 2014-01-14T07:45:26.557Z · LW(p) · GW(p)

I know I'm conscious because I experience it.

So do I, but it doesn't help me to assess the consciousness of others.

Replies from: Alsadius, Eugine_Nier
comment by Alsadius · 2014-01-16T07:35:33.753Z · LW(p) · GW(p)

Occam's Razor. All these people seem similar to me in so many ways, they're probably similar in this way too, especially if they all say that they are.

Replies from: Locaha
comment by Locaha · 2014-01-16T07:47:06.082Z · LW(p) · GW(p)

The little box that claims it experiences consciousness (just like you do) is also similar to you. How do you decide what is similar enough and what is not?

Replies from: Alsadius
comment by Alsadius · 2014-01-16T08:07:30.303Z · LW(p) · GW(p)

We live in a world effectively devoid of borderline cases. Humans are clearly close enough, since they all act like they're thinking in basically similar fashions, and other species are clearly not. I will have to reconsider this when we encounter non-human intelligences, but for now I have zero data on those, and thus cannot form a meaningful opinion.

Replies from: Locaha
comment by Locaha · 2014-01-16T08:37:17.543Z · LW(p) · GW(p)

I suggest you taboo the word "clearly". For example, it is not at all clear to me that a 6-month-old infant experiences consciousness as I do. But if the infant does, then surely an adult chimpanzee does too?

See where it's going?

comment by Eugine_Nier · 2014-01-17T03:40:32.739Z · LW(p) · GW(p)

Well, it is possible to make an argument based on the Self-Sampling Assumption that only people who share the rare inherent trait X with me are conscious.

Replies from: Locaha
comment by Locaha · 2014-01-17T08:30:00.716Z · LW(p) · GW(p)

Is it a sort of trait the talking box can't possibly have?