Posts

Seeking Advice About Career Paths for Non-USA Citizen 2016-09-28T00:07:38.417Z
Parallelizing Rationality: How Should Rationalists Think in Groups? 2012-12-17T04:08:27.854Z

Comments

Comment by almkglor on Seeking Advice About Career Paths for Non-USA Citizen · 2016-09-30T15:23:14.530Z · LW · GW

Somebody suggested that people in LessWrong may be interested in my resume, and may be able to hire, so I updated my website on github.io to include my resume.

https://amkg.github.io/alan-resume.pdf

Comment by almkglor on Seeking Advice About Career Paths for Non-USA Citizen · 2016-09-30T12:30:17.485Z · LW · GW

Re: underestimating tech salaries, thanks for the corrections; I may have discounted similar information before because even the senior software developers I know personally here earn less than $30,000/yr, and "start at $100,000/yr" sounded much too good to be true (in retrospect this was obviously a bad heuristic, and I will now strive to do better). Checking the salaries of relatives who migrated to the USA should have corrected this.

re: moving to a 1st-world country as a goal, this is my wife's goal more than mine (FWIW it's a common goal for a sizable fraction of Filipinos, though I haven't researched exactly how large, which should indicate just how lousy the Philippines is). I personally feel that I should strive to make the Philippines better, and initially thought that staying here would be the best method, but I probably need to reconsider that, which is why I'm considering working abroad, whether permanently or temporarily. I worry about my values decaying if I leave the Philippines (i.e., would Gandhi take a pill that has a 1% chance of making him indifferent to India?), but maybe I just need a credible way of maintaining the values of my future self.

re: freelancing, yes, that was my analysis. My wife and I talked several months ago with a couple whose husband had successfully transitioned to a freelance software job here in the Philippines; exact numbers never got mentioned, but it was obvious they were comfortably well off. So I guessed that a freelancer might get 50% to 80% of what a regular USA jobholder would get, and used my (flawed!) understanding of USA salaries to evaluate this. So maybe I should recompute this after all... Freelancing looks like a better option than I previously thought.

As for my family's land, I'll have to check; it's possible it doesn't have Internet or electricity, haha (Internet access in the Philippines is expensive, and my understanding is that its rates are among the most expensive in the world). FWIW my wife, children, and I live at my wife's uncle's place, since the building is rented out as residential units and my wife's current job is managing it. Internet is paid for by my wife's uncle (who emigrated to the USA), since they communicate by Facebook and Viber, so strictly speaking I don't need to be at my own family's land as long as my wife keeps her job.

re: resume, I have a pdf copy. I was going to say that I don't have a website to put it up on, but then I remembered that I do have amkg.github.io, which means I really really really need to be a lot more aware of my options and resources, because seriously, a REAL PROGRAMMER (TM) without a website? Okay, I'll put it up there after I dredge up the instructions for updating that site.

(side note: NetHack is good rationalist training, because a lot of deaths there are, in retrospect, pretty stupid: when you get "Do you want your possessions identified?" you find out you had very valuable items you forgot to use, because you didn't stop to think through your real options and take a good long look at your available resources... I need to treat real life more like NetHack, hahaha)

re: cryonics, I remember researching that maybe a decade ago and deciding that the total cost was too much for my salary at the time (and I'd have to contend with the possibility of relatives preventing me from being cryonically preserved anyway); I can't remember where I put the computations, though, sigh. Come to think of it, I haven't recomputed for my current conditions (I've been assuming the cost a decade later would be higher than the cost then, cancelling out my increased purchasing power), which I obviously should do (damn cached thoughts), at least for my children if not for my wife and me... It's amazing how stupid a brain can be; I should have rethought this earlier.

re: CFAR, yes, that's my impression so far. Libraries in the Philippines are few and far between, but there are other ways to get the information (e.g. this website). I'd still like to attend one at some point in the future if only to see if they've gotten better, but obviously that has to come after I'm the smiling agent sitting on top of a heap of utilons.

Comment by almkglor on Seeking Advice About Career Paths for Non-USA Citizen · 2016-09-28T21:58:20.452Z · LW · GW

Hmm, yes, my wife is suggesting Singapore too (she has relatives there, although I'd prefer not to impose). I've also suggested Canada, but my wife wants somewhere "nearby", so maybe I'll consider Singapore, Taiwan, and Australia more.

Re: geopolitical situation of China, I hope you're right ^^.

Comment by almkglor on Seeking Advice About Career Paths for Non-USA Citizen · 2016-09-28T21:52:09.495Z · LW · GW

Thanks for the reply, I'll consider your advice more!

re: English, I'm a fluent writer, but my spoken English is sometimes halting (it's not like I can go back and edit my vocal utterances, unlike "written" English on a computer). re: Scheme, I'm not so sure a Schemer would say I "contributed" to the Scheme language with SRFI-110 (there's significant resistance against indent-based syntaxes), but I know a few implementations have picked up SRFI-105 (Guile at least, and I think a few others).

Comment by almkglor on Rationality Quotes from people associated with LessWrong · 2014-09-07T10:31:41.049Z · LW · GW

Jonvon, there is only one human superpower. It makes us what we are. It is our ability to think. Rationality trains this superpower, like martial arts trains a human body. It is not that some people are born with the power and others are not. Everyone has a brain. Not everyone tries to train it. Not everyone realizes that intelligence is the only superpower they will ever have, and so they seek other magics, spells and sorceries, as if any magic wand could ever be as powerful or as precious or as significant as a brain.

Eliezer Yudkowsky

Comment by almkglor on Parallelizing Rationality: How Should Rationalists Think in Groups? · 2012-12-21T08:46:05.405Z · LW · GW

I prefer "disputation arena" because "group thinking" is too close to "groupthinking".

Is there a better term for "techniques for discussing things so that lots of thinking people can give their input and get a single coherent set of probabilities for what are the best possible choices for action" other than "disputation arena" or "group thinking technique"?

I do want to be precise, and "disputation arena" sounded kewl, but whatever.

Comment by almkglor on Parallelizing Rationality: How Should Rationalists Think in Groups? · 2012-12-21T08:42:40.016Z · LW · GW

Okay, so that's a sub-goal that I didn't think about. I will think about this a little more.

Still, assuming that group exists and needs to do some thinking together, I think techniques like Delphi are fine.

Anyway, I had assumed that LW's community was more cohesive and more willing to cooperate in group thinking exercises (this is what I was thinking when I said "This makes it not only desirable to find ways to effectively get groups of rationalists to think together, but also increasingly necessary."), but apparently it's not as cohesive as I thought.

Comment by almkglor on [SEQ RERUN] The Bad Guy Bias · 2012-12-21T08:15:04.364Z · LW · GW

I suppose that works for pre-scientific, pre-rational thinking: back when you couldn't do a thing about nature, but you could do a thing about that schmuck looking at you funny.

However, now, as humanity's power grows, we can actually do something about nature: we can learn to predict earthquakes, build structures strong enough against calamity, vaccinate against pestilence, etc etc.

So the bias, I suppose, arises from evolution being too slow for human progress.

Comment by almkglor on Wanted: Rationalist Pushback (link) · 2012-12-21T03:32:01.744Z · LW · GW

On 18 December 2012 09:13:14PM, user "aronwall" replied "yes" to the question "So you're saying that if the evidence goes against you, you are going to stop being a Christian and self-identify as atheist (note that we do not capitalize that word)?". This comment is to ensure that user "aronwall" shall not be able to disavow this reply; please ignore it otherwise.

Comment by almkglor on Parallelizing Rationality: How Should Rationalists Think in Groups? · 2012-12-21T03:25:34.984Z · LW · GW

shrug it's best practice at a particular time and place, but is it the best practice at all times and places?

I'll grant that the procedure "tell all participants: 'hold off on proposing solutions'" is a good procedure in general, but is it the best procedure under all circumstances? How about enforcing the "hold off" part, rather than just saying it to participants? (cf. NGT's silent idea generation)

Comment by almkglor on Parallelizing Rationality: How Should Rationalists Think in Groups? · 2012-12-21T03:20:56.108Z · LW · GW

You did write a long post on different systems for discussion and you did ignore it in that post.

I thought it would be unnecessary, since people here would already know the local status quo and it would be repetitive to reiterate what is already known. I'll try to see if I can come up with some description of it, then, and edit the article to include it. I'm a little busy at the moment; Christmas is important in this country.

Within your list you didn't discuss systems that have shown to work in the real world to solve the kind of issues that you want to solve.

Huh? These are techniques that have been studied, with papers backing them (at least according to some very basic searches through Google). I have no idea how good those papers are, but maybe you do. Can you show some study specifically showing that Delphi works worse than typical internet forums?

take an online community like Wikipedia as an example.

Again, since LW also has a Wiki, I thought it would be superfluous to add it to the article too. I'll find time to update it then.

If you however want to solve those kinds of problems in your country than you have to choose. One way would be to get the IWF to promote some Good Government program in your country in a top-down way. The other way involves finding supporters in your own country.

For both strategies I doubt that the LessWrong public is the right audience. Join/found some Liquid Feedback based political party in your country.

Thank you for this information.

One of the most effective calls for support to highly intelligent nerds was probably Julian Assange's call that among other things involved him telling the audience that they won't get Christmas presents when they don't cooperate. Julian Assange didn't try to organise some vote to get consensus.

Okay.

Comment by almkglor on Wanted: Rationalist Pushback (link) · 2012-12-18T04:46:34.997Z · LW · GW

The fact that I have an opinion about where the evidence as a whole leads does not prima facie make me impossible to argue with.

So you're saying that if the evidence goes against you, you are going to stop being a Christian and self-identify as atheist (note that we do not capitalize that word)?

Comment by almkglor on Parallelizing Rationality: How Should Rationalists Think in Groups? · 2012-12-18T01:43:47.799Z · LW · GW

I don't think you understand what I mean with the word highly formalized in this context. LessWrong also has a bunch of rules. Those rules are however made in a way where they don't constrain the way one can use LessWrong as much as the rules of Delphi constrain its participants.

Okay, what exactly do you mean by "highly formalized"?

Constraints on behavior are not necessarily bad, in much the same way that there are more things in heaven and earth than are dreamt of in our philosophy: constraining things to a subset that can be shown to work can help. So I don't really see "current LW has more freedom!!" as a significant advantage - because it might have more freedom to err. Of course, the probability of that being true is low - but can we at least try to show that?

After all, LW code is derived from Reddit. Of course, the online system is just part of the overarching system, and the system as a whole (including current community members) is different (there are more stringent rules for acceptance into the community here than on Reddit), but it might do well to consider that things may be made better.

At the very least, we need to consider what other systems are available, and specifically de-emphasize the local status quo, since we might not be thinking perfectly rationally about it.

No, if you propose an alternative it makes sense to explain how it would improve the status quo. Ignoring the status quo that provides a system that actually works in practice is a bad idea.

I said "de-emphasize", not ignore. What I mean by "de-emphasize" is, acknowledge its existence, but treat it as an idea you have already thought about, i.e. keep it on hand and don't forget about it, but don't keep thinking about it at the expense of other, external ideas. In any case, I thought that it would be unnecessary to have to discuss the local status quo, since I would assume that members already know it.

Should I discuss the current status quo? I am not a regular member, despite reading OB before and LW for years, so I don't feel qualified to get into its details. I mostly read the sequences and hardly look at the discussion section, or even the comments on articles. So my knowledge of LW's informal rules is minimal, to say the least. Can you describe the status quo for me?

At the moment there is no working Delphi system that allows rationalists to discuss solutions for handling insane governments. The cases where Delphi was used successfully are cases where it got implemented top-down. Whether the same approach works in an online community is up for discussion. I don't know of a single case where such a system got enough users to work.

So should we, at this point, completely discard Delphi methods? How about NGT?

I suspect that it's possible to modify LW's polls to add some kind of Real-Time Delphi Method, as I mentioned in the article: (1) allow members to change their chosen options, (2) require members to give a short justification for their chosen option, and (3) show members randomized samples of justifications from other members. We could even have a flag that specifies normal forum polls or Delphi-style polls. But if the cost of making this modification is higher than the expected probability of that kind of Delphi being successful, times the expected utility of that kind of Delphi method in general over the rest of LW's lifetime, then fine: let's not do it.
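For concreteness, here is a minimal sketch of those three requirements as they might look in code. Everything here is hypothetical (the class and method names are mine, not anything from LW's actual codebase):

```python
import random

class DelphiPoll:
    """Delphi-style poll: votes are revisable, every vote needs a short
    justification, and voters see a random sample of others' reasons."""

    def __init__(self, options, sample_size=3):
        self.options = options
        self.sample_size = sample_size
        self.votes = {}  # member -> (option, justification)

    def vote(self, member, option, justification):
        if option not in self.options:
            raise ValueError("unknown option: %s" % option)
        if not justification.strip():
            # requirement (2): a short justification is mandatory
            raise ValueError("a justification is required")
        # requirement (1): members may change their chosen option at any time
        self.votes[member] = (option, justification)

    def sample_justifications(self, member):
        # requirement (3): a randomized sample of *other* members' reasons
        others = [j for m, (_, j) in self.votes.items() if m != member]
        return random.sample(others, min(self.sample_size, len(others)))

    def tally(self):
        counts = {o: 0 for o in self.options}
        for option, _ in self.votes.values():
            counts[option] += 1
        return counts
```

The flag distinguishing normal polls from Delphi-style polls would then just be whether a poll's votes go through something like this or through a plain one-shot tally.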

If you think otherwise, please illustrate how you would tackle the issue you brought forward in your post with Prediction Markets. How to tackle it with Delphi would also be interesting.

I don't know how to tackle it with Prediction Markets other than by futarchy: first vote on what measurements are to be used, then run a prediction market about whether particular policy decisions will improve or reduce those measurements. Insane governments are more sane if they have less corruption, better bureaucratic efficiency blah blah - we may need to vote on that. Then we need to propose actual policy decisions and predict if they will lead to less corruption etc. or not. Unfortunately, I don't understand enough of futarchy yet to make a proper judgment about it - it's currently a mostly black box to me. I'm disturbed that futarchy_discuss appears to be defunct - I'm not sure if it's because prediction markets have turned out to fail badly, or what.
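To make that futarchy recipe concrete, here is a toy sketch of just the final decision rule, assuming the hard parts (agreeing on the welfare measures, and running honest conditional markets) are already solved. The function name and the example prices are entirely hypothetical:

```python
def futarchy_decide(policies, market_price):
    """Pick the policy that the market judges most likely to improve
    the agreed-on welfare measure (e.g. "corruption decreases").

    market_price(policy) should return the market's estimate of
    P(measure improves | policy is adopted)."""
    return max(policies, key=market_price)

# Hypothetical conditional-market prices for "corruption decreases":
prices = {"status quo": 0.40, "anti-corruption law": 0.72, "efficiency reform": 0.55}
print(futarchy_decide(prices, prices.get))  # prints "anti-corruption law"
```

Real futarchy also needs settlement rules for the markets on policies that never get adopted, which is part of why it remains a black box to me.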

Assuming those same measures can be agreed upon - less corruption, better bureaucratic efficiency - then I suppose a Delphi Method can be made with "what policies should reduce corruption blah blah? How can we impose those policies from below? What feasible actions can we use to get those policies accepted?" as the questions.

(if you think that my definition of "insane government" isn't very good, please understand that I live in a shitty little third-world country where the most troubling problems of the government is corruption and inefficiency, not whether or not the government should raise taxes)

I'm also not clear about why we need to find consensus on "insane governments, insane societies, insane individuals, and the singularity".

Because I think lack of consensus is one reason why our kind can't cooperate.

Can we at least try to pull together on this one?

Comment by almkglor on Parallelizing Rationality: How Should Rationalists Think in Groups? · 2012-12-17T20:55:43.083Z · LW · GW

I'm worried about the bits that are internal to a person, where people just have some common failure modes when trying to solve problems.

shrugs Well, seatbelts don't stop accidents, but they do reduce the damage of getting into one. While the disputation arenas do not directly prevent such internal failure modes, they do help prevent an internal failure mode in a key, influential person from spreading to the rest of the group. Yes, hold off on proposing solutions (don't drink and drive). But also put in some extra railing and padding, so that someone else's mistake does not necessarily push you into error either (seatbelts).

Comment by almkglor on Parallelizing Rationality: How Should Rationalists Think in Groups? · 2012-12-17T20:50:00.350Z · LW · GW

LessWrong is one way of implementing groups of rationalists thinking together. One might say that it provides a centripetal phase: the discussion forums. But what centrifugal phase exists that prevents groupthink? Yes, we have "hold off on proposing solutions" - but remember that no current rationalist is perfect, and LW may grow soon (indeed, spreading rationality may require growing LW).

Also remember that people - including LessWrong members - tend to favor status quos, and given a chance, people tend to defend status quos to the death.

At the very least, we need to consider what other systems are available, and specifically de-emphasize the local status quo, since we might not be thinking perfectly rationally about it.

It's not highly formalized but that makes it a lot more flexible.

The Turing machine is highly formalized and is the most flexible possible computational machine. I get "false dichotomy" signals from this statement.

If you say you want groups of rationalists to solve problems together, which problems are you thinking about? What sort of problems do you want to solve?

insane governments, insane societies, insane individuals, and the singularity, in that rough order of priority.

Comment by almkglor on Parallelizing Rationality: How Should Rationalists Think in Groups? · 2012-12-17T11:40:33.349Z · LW · GW

Because the article about it specifically mentions that this is the failure mode to avoid:

Norman R. F. Maier noted that when a group faces a problem, the natural tendency of its members is to propose possible solutions as they begin to discuss the problem. Consequently, the group interaction focuses on the merits and problems of the proposed solutions, people become emotionally attached to the ones they have suggested, and superior solutions are not suggested. Maier enacted an edict to enhance group problem solving: "Do not propose solutions until the problem has been discussed as thoroughly as possible without suggesting any."

So "hold off on proposing solutions" is just one possible solution. Deciding to take that solution immediately, without considering other options (such as NGT's approach) is precisely falling into that same trap.

In short, hold off on proposing the solution of "hold off on proposing solutions". v(^.^)v


edit:

Consider that under NGT, you are given 10 to 15 minutes to think of solutions before anyone gets to propose any solutions. That strikes me as longer than a typical "hold off".

Comment by almkglor on Parallelizing Rationality: How Should Rationalists Think in Groups? · 2012-12-17T09:13:53.211Z · LW · GW

"Hold off on proposing solutions" is an important technique because the Human brain is lazy, and once it thinks of one solution, it will not try to look for another.

I'd say that the interface between the "centrifugal phase" and the "centripetal phase" implicitly reduces the explicit need to protect ideation using "hold off on proposing solutions" - sure, you can present the solution you thought about in the "centrifugal phase" immediately, but the solution gets pushed into the meat grinder of whatever "centripetal phase" there is, as it must compete against other solutions. Ideally, none of the solutions presented at the start of the centripetal phase will be designated as the "best" solution (hopefully, given the anonymizing effects of Delphi and the self-consistency pushed on you by writing your ideas in the NGT (nominal group technique)).

Even in brainstorming sessions, "hold off on proposing solutions" is needed only if the initial ideas presented are given undue weight compared to later ones. Delphi mixes the initial ideas in with the others: ideally, your summarizer is given the experts' answer sheets in random order, and in the real-time online form that's why the group's qualitative answers are shown in randomized order. Ideally, in an NGT, the facilitator steers everyone away from discussing one idea at the expense of the rest (it is noted there with an IMPORTANT scare tag, after all). For prediction markets, you don't discuss ideas anyway, so the issue doesn't even arise.
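That randomization step (handing the summarizer the answer sheets in random order, stripped of authorship) is simple to mechanize; a sketch under my own naming, not taken from any particular Delphi implementation:

```python
import random

def sheets_for_summarizer(answer_sheets, seed=None):
    """Given (author, text) answer sheets, return the texts anonymized
    and in random order, so the summarizer can neither give the
    first-submitted ideas undue weight nor be swayed by authorship."""
    rng = random.Random(seed)  # optional seed for reproducibility
    shuffled = list(answer_sheets)
    rng.shuffle(shuffled)
    # strip authorship before handing the sheets over
    return [text for _, text in shuffled]
```

The same function covers the real-time online form: just call it per-viewer with a fresh seed so each participant sees their own random ordering.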

Comment by almkglor on Rationality Quotes December 2012 · 2012-12-17T02:11:07.437Z · LW · GW

Stripped to its essentials, every decision in life amounts to choosing which lottery ticket to buy. . . . Most organisms don't buy lottery tickets, but they all choose between gambles every time their bodies can move in more than one way. They should be willing to 'pay' for information---in tissue, energy, and time---if the cost is lower than the expected payoff in food, safety, mating opportunities, and other resources, all ultimately valuated in the expected number of surviving offspring. In multicellular animals the information is gathered and translated into profitable decisions by the nervous system.

Steven Pinker

Comment by almkglor on The Useful Idea of Truth · 2012-12-15T01:02:39.305Z · LW · GW

I proffer the following quotes rather than an entire article (I think the major problem with post-modernism isn't irrationality, but verbosity. JUST LOOK AT YOURSELF):

"For the sake of sanity, use ET CETERA: When you say 'Mary is a good girl!' be aware that Mary is much more than 'good'. Mary is 'good', nice, kind, et cetera, meaning she also has other characteristics." - A.E. Van Vogt, World of Null-A

"For the sake of sanity, use QUOTATIONS: For instance 'conscious' and 'unconscious' mind are useful descriptive terms, but it has yet to be proved that the terms themselves accurately reflect the 'process' level of events. They are maps of a territory about which we can possibly never have exact information. Since Null-A training is for the individuals, the important thing is to be conscious of the 'multiordinal' -that is the many valued- meaning of the words one hears or speaks." - A.E. Van Vogt, World of Null-A

Comment by almkglor on The Useful Idea of Truth · 2012-12-15T00:55:23.314Z · LW · GW

How about an expanded version: if we could be a timeless spaceless perfect observer of the universe(s), what evidence would we expect to see?

Comment by almkglor on Welcome to Less Wrong! (2012) · 2012-12-14T09:31:50.742Z · LW · GW

Although it might be good to be aware that you shouldn't remove a weapon from your mental arsenal just because it's labeled "dark arts". Sure, you should be one heck of a lot more reluctant to use them, but if you need to shut up and do the impossible really really badly, do so - just be aware that the consequences tend to be worse if you use them.

After all, the label "dark art" is itself an application of a Dark Art to persuade, deceive, or otherwise manipulate you against using those techniques. But of course this was not done lightly.

Comment by almkglor on Do I really not believe in God? Do you? · 2012-12-11T06:34:35.537Z · LW · GW

I'm not sure about others, but while I initially felt that way ("Thank... who?") whenever something like that happened, careful thought-screening and imagining situations (i.e. simulation) helped weed it out. I'd be surprised if something like that slipped out of me these days, unless it's really, really nasty.

Comment by almkglor on Rationality Quotes December 2012 · 2012-12-07T09:29:17.977Z · LW · GW

"It's frightening to think that you might not know something, but more frightening to think that, by and large, the world is run by people who have faith that they know exactly what is going on." - Amos Tversky

Comment by almkglor on Rationality Quotes December 2012 · 2012-12-07T09:26:55.385Z · LW · GW

"Speed is what distinguishes intelligence. No bird discovers how to fly: evolution used a trillion bird-years to 'discover' that - where merely hundreds of person-years sufficed." - Marvin Minsky

Comment by almkglor on What are you working on? December 2012 · 2012-12-04T09:22:01.061Z · LW · GW

I just finished my NaNoWriMo novel, Judge on a Boat (latest revision kept here), last month, and this month I'm going through the process of fixing it up and improving it. I described it on LessWrong yesterday.

Why this project? Well, I've been lurking on Less Wrong (and before that, Overcoming Bias) for years, and yet I recently realized that I've not been very rational in actual practice. So I decided to write a novel about rationality and moral philosophy, just to make sure that I managed to actually understand the topics well enough to put them in my own words. Hopefully the attempt to explain them to a lay audience will help my own understanding.

I'd like to get some help from others in the LW community, since I suspect the novel is not very well written, and I need ideas on how to improve it. Why should anyone help me? Well: the two best recent works of rationalist fiction that I know of are Alicorn's Luminosity and EY's HPMoR. I am nowhere near their level (for one, their characters are not flat). The only advantage I have is that my novel has (under current law, anyway) a slightly higher chance of being published, since it is original and won't get sued into oblivion if published - unless J.K. Rowling suddenly has an aneurysm and gives the copyright to the public domain, or everyone suddenly listens to rms and starts repealing copyright laws internationally.

My goals are... a bit iffy. I imagine publishing this in actual, real-world, physical book form, because those are easier to give as gifts and might help raise the sanity waterline (badly needed in my family; at least they read books). But at the current level of quality, I suspect I have about a snowball's chance of passing unscathed through the sun.

Alternatively: how about an open-source novel? I could release it under CC-BY-SA and try to actively recruit people to help improve it, to leverage the community, but that would probably make it difficult to publish physically, since legally speaking (IANAL) that would require contacting all the copyright holders. Maybe a fiduciary agreement a la FSF-Europe, but I know of no big, trustworthy entity that would act as a fiduciary for fiction.

Comment by almkglor on What science needs · 2012-12-04T09:02:44.754Z · LW · GW

What do you think about David Brin's "disputation arenas?"

Maybe we could get a group of scientists to try out some form of disputation arena (Delphi Method for example) and see if they can be more effectively managed that way?

Comment by almkglor on Rationalist Fiction · 2012-12-03T04:15:54.213Z · LW · GW

Hello Less Wrong,

My first comment ever. I have been lurking on Less Wrong for several years already (and on Overcoming Bias before there was even a Less Wrong site), and have been mostly cyber-stalking EY ever since I caught wind of his AI-Box exploits.

This year, 2012, on a whim, I joined NaNoWriMo (National Novel Writing Month) in November, and started writing a novel I had been idly thinking of, "Judge on a Boat". The setting: humanity manages to grow up a little without blowing itself up, rationality techniques are taught regularly (a certain minimum level of knowledge of these techniques is required of all citizens), practical mind simulations and artificial intelligence are still far off (but being actively worked on, somewhere way, way off in the background of the novel), and experts in morality and ethical systems, called "Judges", are given the respect they deserve.

The premise is that a trainee Judge, Nicole Angel, visiting Earth for her final examinations (she's from Mars Lagrange Point 1), gets marooned on a lifeboat with a small group of people. She is then forced to act as a full Judge (despite not actually passing the exams yet) for the people in the boat.

The other premise is that a new Judge, Emmanuel Verrens, is reading about Nicole Angel's adventures in novel form, under the guidance of high-ranking Judge David Adams. Emmanuel's thinking is remarkably similar to hers, despite her being a fictional character -

The novel was intended to be more about moral philosophy than strictly rationality, but as I was using Less Wrong as an ideas pump, it ended up being more about rationality, really. (^^)v

Anyway, if anyone is interested in the early draft text, see this.