Posts

A brief summary of effective study methods 2014-04-28T12:40:57.117Z

Comments

Comment by Arran_Stirton on Learning by Doing · 2015-03-24T11:12:01.560Z · LW · GW

I see learning as very dependency based. Ie. there are a bunch of concepts you have to know. Think of them as nodes. These nodes have dependencies. As in you have to know A, B and C before you can learn D.

Spot on. This is a big problem in mathematics education; prior to university a lot of teaching is done without paying heed to the fundamental concepts. For example - here in the UK - calculus is taught well before limits (in fact limits aren't taught until students get to university).
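To make the "nodes with dependencies" picture concrete, here's a minimal Python sketch (the concept names are made up for illustration) of how a prerequisite graph determines a valid teaching order:

```python
from graphlib import TopologicalSorter

# Hypothetical prerequisite graph: each concept maps to the concepts that
# should be learned before it (the names here are illustrative only).
prerequisites = {
    "limits": set(),
    "derivatives": {"limits"},
    "integrals": {"limits"},
    "applied calculus": {"derivatives", "integrals"},
}

# Any topological ordering of the graph is a teaching order that never
# presents a concept before its dependencies.
print(list(TopologicalSorter(prerequisites).static_order()))
# e.g. ['limits', 'derivatives', 'integrals', 'applied calculus']
```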

Teaching is all about crossing the inferential distance between the student's current knowledge and the idea being taught. It's my impression that most people who say "you just have to practice" say so because they don't know how to cross that gap. You see this often with professors who don't know how to teach their own subjects because they've forgotten what it was like not knowing how to calculate the expectation of a perturbed Hamiltonian. I suspect that in some cases the knowledge isn't truly a part of them, so they don't know how to generate it without already knowing it.

Projects are a good way to help students retain information (the testing effect) and also train appropriate recall. Experts in a field are usually experts because they can look at a problem and see where they should be applying their knowledge - a skill that can only be effectively trained by 'real world' problems. In my experience teaching A-level math students, the best students are usually the ones that can apply concepts they've learned in non-obvious situations.

You might find this article I wrote on studying interesting.

Comment by Arran_Stirton on [deleted post] 2015-03-18T01:35:50.513Z

Thanks. I've edited the comment to reflect this better.

Comment by Arran_Stirton on [deleted post] 2015-03-17T05:34:07.127Z

Since I wanted a lot of things to be weighted when determining the search order, I considered just hiding all the complexity 'under the hood'.

The way I view it, search rankings are a tool like any other. In my own experience in academic research I've always found that clearly defined search rankings are more useful to me than generic rankings; if you know how the tool works, it's easier to use correctly. That said, there's probably still a place for a complex algorithm alongside other search tools, it just shouldn't be the only search tool.

But if people don't know what they are voting on they might be less inclined to vote at all.

Well I think it's more a matter of efficiently extracting information from users. Consider the LessWrong karma system: while it serves its purpose of filtering out spam, it's a very noisy indicator of anything other than 'people thought this comment should get karma'. This is because some users think that we should vote things up or down based on different criteria, such as: do I agree with this comment?; did this comment contain valuable information for me?; was this an amusing comment?; was this comment well reasoned?; and so on.

By clearly defining the voting criteria, you're not just making users more likely to vote, you're also extracting information from them more efficiently. From a user perspective this can be really useful: knowing whether a particular rating reflects the popularity or the importance of a project, they can then choose whether to pay attention to that metric or ignore it.
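As a rough illustration of what "clearly defined voting criteria" could look like as data (this is not any existing system, and the field names are made up), separating the criteria might look something like this:

```python
from dataclasses import dataclass

# Hypothetical multi-criterion vote record: each criterion is recorded
# separately instead of being collapsed into one karma click.
@dataclass
class Vote:
    agree: bool          # do I agree with this comment?
    informative: bool    # did it contain valuable information for me?
    amusing: bool        # was it amusing?
    well_reasoned: bool  # was it well reasoned?

votes = [
    Vote(agree=True, informative=True, amusing=False, well_reasoned=True),
    Vote(agree=False, informative=True, amusing=False, well_reasoned=True),
]

# Each criterion can be aggregated and shown on its own, so readers know
# exactly what a given score is measuring.
print(sum(v.informative for v in votes))  # 2
```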

Comment by Arran_Stirton on [deleted post] 2015-03-16T14:40:18.904Z

In case it helps, here's a rough list of the thoughts that have come to mind:

  • Simplicity is usually best with voting systems. It may be worth looking at a Reddit-style up/down system for popularity. With importance you probably want high/mid/low. If you track the 'importance profile' of a user, you could use that to promote projects to their attention that other users with similar profiles find important (see the sketch after this list). Also, in all these rankings it should be clear to the user exactly what metric is being used.

  • Make use of the wisdom of crowds by getting users to evaluate projects/tasks/comments for things like difficulty, relevance, utility, marginal utility - along the lines of this xkcd comic.

  • It seems to me that a good open-source management tool should direct attention to the right places. Having inbuilt problem flags that users can activate to have the problem brought to the attention of someone who can solve it seems like a good idea.

  • Skill matching. Have detailed skill profiles for users and have required skills flagged up for tasks.

  • Could try breaking projects up into a set of tasks, sub-tasks and next actions, à la Getting Things Done.

  • Duolingo provides free language courses. They plan to make this financially viable by crowd sourcing translations from their students. Perhaps a similar thing could be implemented - maybe by getting university students involved.

  • Gamification across a broad range of possible tasks. Give points for things like participation, research, providing information. While rewarding programmers for coding is good, we should seek to reward anything that lowers the activation energy of a task for someone else.

  • Keep a portfolio of work that each user has completed, in a format that is easy for them to access, customize, print out, and reference in job applications.

  • Encourage networking between users with similar skills, areas of interest and the like. This would provide a benefit to being part of the community.

  • You could have a Patreon-like pledging system where people pledge a small amount to projects they consider important. When the project reaches a milestone the contributors then get rewarded with a portion of the pledge.
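Regarding the 'importance profile' idea in the first bullet, here's a minimal sketch of one way it could work, using cosine similarity between users' ratings (all names and figures are hypothetical):

```python
import math

# Hypothetical 'importance profiles': for each user, how important they rated
# each project (0 = low, 1 = mid, 2 = high). Names and numbers are made up.
profiles = {
    "alice": {"proj_a": 2, "proj_b": 0, "proj_c": 1},
    "bob":   {"proj_a": 2, "proj_b": 1, "proj_c": 1},
}

def profile_similarity(p, q):
    """Cosine similarity between two importance profiles."""
    keys = set(p) | set(q)
    dot = sum(p.get(k, 0) * q.get(k, 0) for k in keys)
    norm = math.sqrt(sum(v * v for v in p.values())) * math.sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

# Projects rated highly by users with similar profiles are candidates to
# promote to someone's attention.
print(round(profile_similarity(profiles["alice"], profiles["bob"]), 3))
```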

Comment by Arran_Stirton on Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 120 · 2015-03-12T23:51:11.919Z · LW · GW

I'm struck with Dumbledore's ruthlessness

Actually I think he was just following his own advice:

While survives any remnant of our kind, that piece is yet in play, though the stars should die in heaven. [...] Know the value of all your other pieces, and play to win.

All things considered I think it was the most compassionate choice he could have made.

Comment by Arran_Stirton on Making a Rationality-promoting blog post more effective and shareable · 2015-02-23T02:32:01.230Z · LW · GW

No problem, some other things that come to mind are:

  • It's best to start the articles with a 'hook' paragraph rather than an image, particularly when the image only makes sense to the reader if they know what the article is about.
  • Caption your images always and forever.
  • This has been said before, but the title should make sense to an uninitiated reader. Furthermore, to make it more shareable, the title should set up an expectation of what the article is going to tell them. An alternative in this case could be: "What do people really think of you?"; or, if you restructure the article, something like "X truths about what people think of you."
  • For popular outreach the inferential distance has to be as low as you can make it; if you can explain something instead of linking to it, do that.
  • Take a look at the most-shared websites (Upworthy, BuzzFeed and the like); you can learn a lot from their methodology.

Comment by Arran_Stirton on Making a Rationality-promoting blog post more effective and shareable · 2015-02-22T23:36:48.855Z · LW · GW

(As a rule, using non-standard formatting when posting to LessWrong is a bad idea.)

There are some improvements you can make to increase cognitive ease, such as lowering your long-word count, avoiding jargon, and using fewer sentences per paragraph. I'd recommend running parts of your post (one paragraph at a time is best) through a clarity calculator to get a better idea of where you can improve.
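This isn't the calculator linked above, but as a rough sketch of the kind of statistics such tools report (the long-word threshold here is an arbitrary choice):

```python
import re

def rough_clarity_stats(text):
    """Crude stand-in for a clarity calculator: average words per sentence and
    the share of long words (7+ letters). The thresholds are arbitrary."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    long_words = [w for w in words if len(w) >= 7]
    return {
        "words_per_sentence": len(words) / max(len(sentences), 1),
        "long_word_ratio": len(long_words) / max(len(words), 1),
    }

print(rough_clarity_stats("Short sentences help. Polysyllabic terminology hinders comprehension."))
```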

You may also want to look into the concept of inferential distance.

Comment by Arran_Stirton on 4 Common Prediction Failures and How to Fix Them [LINK] · 2015-02-06T04:01:27.114Z · LW · GW

Nice article, have a karma!

There's a lot of information there; I'd suggest perhaps using this article as the basis for a four-part series, one on each area. The content is non-obvious, so having the extra space to break the inferential distance down into small steps, so that the conclusions are intuitive to non-rationalists, would be useful.

(As an aside I suspect that writing for the CFAR blog is right now reasonably high impact for the time investment. Personally I found CFAR's apparent radio-silence since September unnerving and it's possible that it was part of the reason the matching fundraiser struggled. Despite Anna's excellent post on CFAR's progress the lack of activity may have caused people to feel as though CFAR was stagnating and thus be less inclined to part with their money on a System 1 level.)

Comment by Arran_Stirton on CFAR fundraiser far from filled; 4 days remaining · 2015-01-28T13:56:23.690Z · LW · GW

Donated $180.

I was planning on donating this money, my yearly 'charity donation' budget (it's meager - I'm an undergraduate), to a typical EA charity such as the Against Malaria Foundation; a cash transaction for the utilons, warm fuzzies and general EA cred. However the above has forced me to reconsider this course of action in light of the following:

  • The possibility CFAR may not receive sufficient future funding. CFAR expenditure last year was $510k (ignoring non-staff workshop costs that are offset by workshop revenue) and their current balance is something around $130k. Without knowing the details, a similarly sized operation this year might therefore require something like $380k in donations (a ballpark guesstimate, don't quote me on that). The winter matching fundraiser has the potential to fund $240k of that, so a significant undershoot would put the organization in a precarious position.

  • A world that has access to a well-written rationality curriculum over the next decade has a significant advantage over one that doesn't. I already accept that 80,000 Hours is a high-impact organization, and they also work by acting as an impact multiplier for individuals. Given that rationality is an exceptionally good impact multiplier, I must accept that CFAR existing is much better than it not existing.

  • While donations to a sufficiently-funded CFAR are most likely much lower utility than donations to AMF, donations to ensure CFAR's continued existence are exceptionally high utility. For comparison (as great as AMF is), diverting all donations from Wikipedia to AMF would be a terrible idea, as would overfunding Wikipedia itself. The world gets a large amount of utility out of the existence of at least one Wikipedia, but not a great deal of marginal utility from an overfunded Wikipedia. By my judgement the same applies to CFAR.

  • CFAR isn't a typical EA cause. This means that if I don't donate to keep AMF going, another EA will. However, if I don't donate to keep CFAR going, there's a reasonable chance that no one else will. In other words, my donations to CFAR aren't replaceable.

  • To put my utilons where my mouth is: it looks like the funding gap for CFAR is something like ~$400k a year. GiveWell reckons that you can save a life for $5k by donating to the right charity. So CFAR costs the equivalent of 80 lives a year to run, which raises the question: do I think CFAR will save more than 80 lives in the next year? The answer to that might be no, even though CFAR seems to be instigating high-impact good. But if I ask myself whether CFAR's work over the next decade will save more than 800 lives, the answer becomes a definite yes.
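For what it's worth, here's the same back-of-the-envelope arithmetic laid out explicitly (every figure is a rough guess quoted from the bullets above, not an official number):

```python
# Restating the back-of-the-envelope arithmetic above.
expenditure = 510_000                     # last year's spending, offset workshop costs excluded
balance = 130_000                         # approximate current balance
donations_needed = expenditure - balance  # ~ $380k for a similarly sized year

funding_gap = 400_000                     # ballpark yearly funding gap
cost_per_life = 5_000                     # GiveWell-style cost to save a life
lives_per_year = funding_gap / cost_per_life
print(donations_needed, lives_per_year, lives_per_year * 10)  # 380000 80.0 800.0
```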

Comment by Arran_Stirton on CFAR in 2014: Continuing to climb out of the startup pit, heading toward a full prototype · 2014-12-29T14:34:44.912Z · LW · GW

As far as I understand it, CFAR's current focus is research and developing their rationality curriculum. The workshops exist to facilitate their research, they're a good way to test which bits of rationality work and determine the best way to teach them.

In this model, broad improvements in very fundamental, schoolchild-level rationality education and the alleviation of poverty and time poverty are much stronger prospects for improving the world

In response to the question "Are you trying to make rationality part of primary and secondary school curricula?" the CFAR FAQ notes that:

We’d love to include decisionmaking training in early school curricula. It would be more high-impact than most other core pieces of the curriculum, both in terms of helping students’ own futures, and making them responsible citizens of the USA and the world.

So I'm fairly sure they agree with you on the importance of making broad improvements to education. It's also worth noting that effective altruists are among their list of clients, so you could count that as an effort toward alleviating poverty if you're feeling charitable.

However they go on to say:

At the moment, we don’t have the resources or political capital to change public school curricula, so it’s not a part of our near-term plans.

Additionally, for them to change public-school curricula they have to first develop a rationality curriculum, precisely what they're doing at the moment - building a 'minimum strategic product'. Giving "semi-advanced cognitive self-improvement workshops to the Silicon Valley elite" is just a convenient way to test this stuff.

You might argue for giving the rationality workshops to "people who have not even heard of the basics" but there are a few problems with that. Firstly, the number of people CFAR can teach in the short term is a tiny percentage of the population, nowhere near enough to have a significant impact on society (unless those people are high-impact people, but then they've probably already heard of the basics). Then there's the fact that rationality just isn't viewed as useful in the eyes of the general public, so most people won't care about learning to become more rational. Also, teaching the basics of rationality in a way that sticks is quite difficult.

Mind, if what you're really trying to do is propagandize the kind of worldview that leads to taking MIRI seriously, you rather ought to come out and say that.

I don't think CFAR is aiming to propagandize any worldview; they're about developing rationality education, not getting people to adopt any particular set of beliefs (other than perhaps those directly related to understanding how the brain works). I'm curious about why you think they might be (intentionally or unintentionally) doing so.

Comment by Arran_Stirton on Has LessWrong Ever Backfired On You? · 2014-12-26T05:34:06.327Z · LW · GW

Are you sure precommitment is a useful strategy here? Generally the use of precommitments is only worthwhile when the other actors behave in a rational manner (in the strictly economic sense), consider your precommitment credible, and are not willing to pay the cost of you following through on your precommitment.

While I'm in no position to comment on how rational your parents are, it's likely that the cost of you being upset with them is a price they're willing to pay for what they may conceptualize as "keeping you safe", "good parenting" or whatever their claimed good intentions were. As a result no amount of precommitment will let you win that situation, and we all know that rationalists should win.

The optimal solution is probably the one where your parents no longer feel that they should listen to your phone calls or use physical coercion in the first place. I couldn't say exactly how you go about achieving this without knowing more about your parents' intentions. However you should be able to figure out what their goal was and explain to them how they can achieve it without using force or eavesdropping on you.

Comment by Arran_Stirton on Truth: It's Not That Great · 2014-05-14T17:36:33.240Z · LW · GW

I think we’re using different definitions of virtue. Whereas I’m using the definition of virtue as a good or useful quality of a thing, you’re taking it to mean a behavior showing high moral standards. I don’t think anyone would argue that the 12 virtues of rationality are moral, but it is still a reasonable use of English to describe them as virtues.

Just to be clear: The argument I am asserting is that ChrisHallquist is not in any way suggesting that we should rename rationality as effective altruism.

I hope this makes my previous comment clearer :)

Comment by Arran_Stirton on Truth: It's Not That Great · 2014-05-11T17:22:48.203Z · LW · GW

This is an example of why I suspect "effective altruism" may be better branding for a movement than "rationalism".

I'm fairly certain ChrisHallquist isn't suggesting we re-brand rationality 'effective altruism', otherwise I'd agree with you.

As far as I can tell he was talking about the kinds of virtues people associate with those brands (notably 'being effective' for EA and 'truth-seeking' for rationalism) and suggesting that the branding of EA is better because the virtue associated with it is always virtuous when it comes to actually doing things, whereas truth-seeking leads to (as he says) analysis paralysis.

Comment by Arran_Stirton on A brief summary of effective study methods · 2014-05-01T10:26:00.259Z · LW · GW

TLDR: I managed to fix my terrible sleep pattern by creating the right habits.

I've been there, up until a month ago actually.

I've tried a whole slew of things to fix my sleeping pattern over the past couple of years. F.lux, conservative use of melatonin, and cutting down on caffeine all helped but none of them really fixed the problem.

What I found was that I'd often stay up late in order to get more done, and it would feel like I was getting more done (where in actual fact I was just gaining more hours now in exchange for losing more hours in the future). Alongside this my pattern was so hectic that any attempt to sleep at a "normal" time was thwarted by a lack of tiredness; I could use melatonin to 'reset' this, but it'd rarely stay that way.

The first thing that helped was sitting down and working out, hour by hour, how much time I actually have in a week; this prevented me from thinking I could gain more time by staying up later. The second thing was forming good habits around my sleep. Habits typically follow a trigger-routine-reward pattern and require fairly quick feedback. As a result, building a habit where the routine is sleeping for eight hours is quite hard.

Instead I appended two patterns either side of the time I wished to sleep, the first with the goal of making it easier for me to sleep, and the second with the goal of making it easier for me to get up.

The pre-sleep pattern followed:

Cue: 'Hey it's 10:30pm'

Routine: Turning off technology -> Reading -> Meditation

Reward: Mug of hot chocolate

While the post-sleep pattern followed:

Cue: Alarm goes off.

Routine: Get out of bed.

Reward: Breakfast.

Since doing this I've been awake at 8 am every morning with little trouble, and the existence of those habits has made it easy to add other habits into my routine. Breakfast, for example, is now a cue to go out running on days when I don't have lectures (this is very surprising for me; I've received several comments along the lines of "Who are you and what have you done with the real you" since I began doing this).

I hope you find this useful.

Comment by Arran_Stirton on A brief summary of effective study methods · 2014-04-29T09:37:19.427Z · LW · GW

Fixed it. I don't think I've ever consciously registered that adsorb != absorb, so thanks for that.

Comment by Arran_Stirton on A brief summary of effective study methods · 2014-04-28T23:07:11.243Z · LW · GW

Good point, thanks, fixed it.

Comment by Arran_Stirton on A brief summary of effective study methods · 2014-04-27T23:11:16.812Z · LW · GW

Thanks for the pointers; I'll make the changes you've proposed and move it to main at some point over the next day.

Look up one or two sequences or other posts for which this could be a follow-up.

I'm having trouble finding an appropriate post, did you have a particular one in mind?

Comment by Arran_Stirton on Stupid Questions Thread - January 2014 · 2014-01-14T21:26:32.455Z · LW · GW

As far as I can tell killing/not-killing a person isn't the same as not-making/making a person. I think this becomes more apparent if you consider the universe as timeless.

This is the thought experiment that comes to mind. It's worth noting that all that follows depends heavily on how one calculates things.

Comparing the universes where we choose to make Jon to the one where we choose not to:

  • Universe A: Jon made; Jon lives a fulfilling life with global net utility of 2u.
  • Universe A': Jon not-made; Jon doesn't exist in this universe so the amount of utility he has is undefined.

Comparing the universes where we choose to kill an already made Jon to the one where we choose not to:

  • Universe B: Jon not killed; Jon lives a fulfilling life with global net utility of 2u.
  • Universe B': Jon killed; Jon's life is cut short, his life has a global net utility of u.

The marginal utility for Jon in Universe B vs B' is easy to calculate: (2u - u) gives a total marginal utility (i.e. gain in utility) of u from choosing not to kill Jon over killing him.

However the marginal utility for Jon in Universe A vs A' is undefined (in the same sense 1/0 is undefined). As Jon doesn't exist in universe A' it is impossible to assign a value to Utility_Jon_A'; as a result our marginal utility (Utility_Jon_A - Utility_Jon_A') is equal to (2u - [an undefined value]). As such our marginal utility lost or gained by choosing between universes A and A' is undefined.

It follows from this that the marginal utility between any universe and A' is undefined. In other words our rules for deciding which universe is better for Jon break down in this case.
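A minimal sketch of the comparison above (with u arbitrarily set to 1.0 for concreteness), treating the undefined case explicitly:

```python
from typing import Optional

# Jon's utility in each universe; None marks 'undefined', since Jon doesn't
# exist in A' and no value can be assigned there.
u = 1.0
utility = {"A": 2 * u, "A'": None, "B": 2 * u, "B'": u}

def marginal_utility(x: Optional[float], y: Optional[float]) -> Optional[float]:
    """Difference in Jon's utility between two universes, or None when either
    side is undefined and the comparison breaks down."""
    if x is None or y is None:
        return None
    return x - y

print(marginal_utility(utility["B"], utility["B'"]))  # 1.0 (= u): not killing Jon beats killing him
print(marginal_utility(utility["A"], utility["A'"]))  # None: making vs not-making Jon is undefined
```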

I myself (probably) don't have a preference for creating universes where I exist over ones where I don't. However I'm sure that I don't want this current existence of me to terminate.

So personally I choose to maximise the utility of people who already exist over creating more people.

Eliezer explains here why bringing people into existence isn't all that great even if someone existing over not existing has a defined(and positive) marginal utility.

Comment by Arran_Stirton on Boring Advice Repository · 2013-03-30T07:40:30.677Z · LW · GW

As far as I know there's no single sure-fire way of making sure that asking them won't put them in a position where refusal will bring them negative utility (for example, their utility function could penalize refusing requests as a matter of course). However, general strategies could include:

  • Not asking in-front of others, particularly members of their social group. (Thus refusal won't impact upon their reputation.)

  • Conditioning the request on it being convenient for them (i.e. using phrasing such as "If you've got some free time would you mind...")

  • Don't give the impression that their help is make or break for your goals (i.e. don't say "As you're the only person I know who can do [such&such], could you do [so&so] for me?")

  • If possible do something nice for them in return, it need not be full reciprocation but it's much harder to resent someone who gave you tea and biscuits, even if you were doing a favor for them at the time.

Of course there's no substitute for good judgement.

Comment by Arran_Stirton on Boring Advice Repository · 2013-03-27T08:57:16.204Z · LW · GW

Preemptive Solution: Leave a line of retreat, make sure that there is little/no cost for them if they choose to refuse; thus reducing the likelihood that they will help you out of compulsion.

Comment by Arran_Stirton on Suggest alternate names for the "Singularity Institute" · 2012-06-19T17:01:28.337Z · LW · GW

It’s quite likely you can solve the problem of people mis-associating SI with “accelerating change” without having to change names.

The AI researcher saw the word 'Singularity' and, apparently without reading our concise summary, sent back a critique of Ray Kurzweil's "accelerating change" technology curves.

What if the AI researcher read (or more likely, skimmed) the concise summary before responding to the potential supporter? At least this line in the first paragraph, “artificial intelligence beyond some threshold level would snowball, creating a cascade of self-improvements,” doesn’t necessarily make it obvious enough that SI isn’t about “accelerating change”. (In fact, it sounds a lot like an accelerating-change-type idea.)

In my opinion at least, you need to get any potential supporter/critic to make the association between the name “Singularity Institute” and what SI actually does (/its goals) as soon as possible. While changing the name could do that, “Singularity Institute” has many useful aesthetic qualities that a replacement name probably won’t have.

On the other hand, doing something like adding a clear tag-line about what SI does (e.g. “Pioneering safe-AI research”) to the header would be a relatively cheap and effective solution. Perhaps rewriting the concise summary to discuss the dangers of a smarter-than-human AI before postulating the possibility of an intelligence explosion would also be effective, seeing as a smarter-than-human AI would need to be friendly, intelligence explosion or no.

Comment by Arran_Stirton on Stupid Questions Open Thread Round 2 · 2012-06-06T09:43:15.705Z · LW · GW

Interesting, thanks!

Have you come across any analysis that establishes cryonics as something that prevents information-theoretic death?

Comment by Arran_Stirton on A digitized belief network? · 2012-06-01T20:31:51.868Z · LW · GW

Sounds like a plan. Really what you want to do is contact everyone who's shown interest in helping you (including myself) in order to collude with them via email and then hold a discussion about how to move on at a scheduled time in an irc channel or somesuch.

Comment by Arran_Stirton on A digitized belief network? · 2012-05-25T14:16:46.893Z · LW · GW

At the moment I’m using yEd to create a dependency map of the Sequences, which is roughly equivalent to creating what I guess you could call an inferential network. Since embarking on this project I’ve discovered just how useful having a structured visual map can be, particularly for things like finding the weak points of various conclusions, establishing how important a particular idea is to the validity of the entire body of writing, and using how low a post is on the hierarchy as a heuristic for establishing the inferential distance to the concepts it contains.
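As a sketch of the "how low a post is on the hierarchy" heuristic (the post names below are placeholders, not the real dependency map of the Sequences):

```python
from functools import lru_cache

# Hypothetical fragment of a post-dependency map; each post maps to the
# posts it builds on.
depends_on = {
    "post_a": [],
    "post_b": ["post_a"],
    "post_c": ["post_a"],
    "post_d": ["post_b", "post_c"],
}

@lru_cache(maxsize=None)
def depth(post):
    """Length of the longest prerequisite chain beneath a post - the
    'how low in the hierarchy' heuristic for inferential distance."""
    parents = depends_on[post]
    return 0 if not parents else 1 + max(depth(p) for p in parents)

print({p: depth(p) for p in depends_on})
# {'post_a': 0, 'post_b': 1, 'post_c': 1, 'post_d': 2}
```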

So I’m thinking that the main use of a belief-network mapping tool might not necessarily be in allowing updates to propagate through a personal network, but in creating networks representing bodies of public knowledge - for example, the standard model of physics. As you can imagine this would be immensely useful for both research and education. For research such a network would point to the places where (for example) the standard model is weak, and for education it would lay out the order in which concepts should be taught so that students can form an accurate internal working model without getting confused.

TL;DR: Yes, I’d love to help you design and build such a tool.

Comment by Arran_Stirton on Irrational hardware vs. rational software · 2012-05-22T14:57:09.344Z · LW · GW

Heh, well I've got dyslexia so every now and then I'll end up reading things as different to what they actually say. It's more my dyslexia than your wording. XD

It seems like I failed at being concise and clear :P

Hmm, I wonder if being concise is all it's cracked up to be. Concise messages usually have lower information content, so they're actually less useful for narrowing down an idea's location in idea-space. Thanks, I'm looking into effective communication at the moment and probably wouldn't have realized the downside to being concise if you hadn't said that.

Comment by Arran_Stirton on Irrational hardware vs. rational software · 2012-05-22T13:16:59.510Z · LW · GW

Ah, I misread your comment, my apologies. I'll retract my question.

Comment by Arran_Stirton on Irrational hardware vs. rational software · 2012-05-22T09:18:04.381Z · LW · GW

Can you evidence that?

Comment by Arran_Stirton on Neuroimaging as alternative/supplement to cryonics? · 2012-05-18T02:53:21.796Z · LW · GW

I agree, yet none of that changes the fact that conditions in the womb have a large impact on brain development. Hence information about the conditions in the womb is required to generate a specific new-born’s brain. Sure when an adult takes a stimulating drug there's not a large data flow, but when the brain is actually forming drugs can fundamentally alter its final structure.

Comment by Arran_Stirton on Singularity Ruined by Lawyers · 2012-05-17T20:50:01.031Z · LW · GW

I feel like I should be dedicating part of my resources to reducing the likelihood of something like that ever happening.

Comment by Arran_Stirton on Neuroimaging as alternative/supplement to cryonics? · 2012-05-15T17:14:22.821Z · LW · GW

So are you saying that the conditions in the womb don't have any effect on brain structure?

By the way, try to aim higher than DH3 because it's hard for me to understand what exactly you're disagreeing with if you don't provide me with a counterargument. Sorry for the bother.

Comment by Arran_Stirton on Neuroimaging as alternative/supplement to cryonics? · 2012-05-15T14:34:05.697Z · LW · GW

A newborn’s brain can be specified by about 10^9 bits of genetic information

While the brain of a new-born baby can be generated by 10^9 bits of genetic information, it’s not true that this is enough to suitably specify a particular new-born’s brain. This is because of the large impact that conditions in the womb have on brain development (e.g. drugs & alcohol) and the limited extent to which brain structure is inherited.

However it’s quite likely that specific sections of the genome contribute to brain development, meaning that your lower bound for how much information it takes to generate a new-born’s brain is (*probably!) much lower than 10^9 bits. Though this still won’t be enough information to specify a particular new-born’s brain, just enough to considerably narrow-down the region of brain-structure-space that the new-born’s brain can occupy.

*Don't take my word for this – I don't know nearly enough to substantiate that claim.

Also, on a tentative note, it might be worth comparing scans of a brain before and after it's been cryogenically preserved in order to see if it's possible to tell the difference (and subsequently if the data from the pre-freezing brain can be approximated from the post freezing brain data).

Comment by Arran_Stirton on Logical fallacies poster, a LessWrong adaptation. · 2012-05-10T06:11:57.050Z · LW · GW

From reading the other comments, this poster makes a three-way top level distinction.

Along similar-ish lines, it might be possible to use the hierarchy to score the quality of an argument. Essentially you'd assign a score from -4 to +4 for DH0 to DH7, then score an argument based on its content. Although once you know it contains (say) a DH4 argument, you wouldn't keep on adding points for more DH4 arguments (otherwise an argument that was purely lots of DH4 statements would get a higher score than one DH7 statement).
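A minimal sketch of that scoring rule (the exact level-to-score mapping is one arbitrary choice among several; -4 to +4 doesn't divide evenly over eight levels, so zero is skipped here):

```python
# One possible mapping of disagreement-hierarchy levels to scores.
DH_SCORES = {
    "DH0": -4, "DH1": -3, "DH2": -2, "DH3": -1,
    "DH4": 1, "DH5": 2, "DH6": 3, "DH7": 4,
}

def argument_score(levels):
    """Score an argument from the DH levels of its statements, counting each
    level at most once so repetition at one level doesn't stack."""
    return sum(DH_SCORES[level] for level in set(levels))

print(argument_score(["DH4", "DH4", "DH4"]))  # 1: many DH4 statements count once
print(argument_score(["DH7"]))                # 4: a single DH7 rebuttal scores higher
```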

It depends.

Usually you don't make use of a fallacy; it's more that you unknowingly commit one. If that's the case then the bias is in both you and the person who is persuaded by it.

On the other hand if you intentionally use a fallacy in order to persuade someone, you're a) dabbling in the dark arts and b) not actually biased (as you're not convinced by your own fallacy). However if you succeed in persuading someone with this method then that person would be the one with bias.

Comment by Arran_Stirton on Betrand Russell's Ten Commandments · 2012-05-10T04:14:13.941Z · LW · GW

Not sure that quite counts as an opinion, but what the hey. Close enough.

The particular quote should be amended to something like:

Do not fear to be eccentric in opinion, for eccentricity is a poor measure of falsehood.

Comment by Arran_Stirton on Logical fallacies poster, a LessWrong adaptation. · 2012-05-08T09:48:12.129Z · LW · GW

At least in my ideal poster-space this hierarchy would be included. Considering that knowing about biases can hurt people, having a general theme of how to resolve arguments better rather than "here are some fallacies, avoid them/point them out" can't hurt.

Comment by Arran_Stirton on Betrand Russell's Ten Commandments · 2012-05-08T00:45:54.503Z · LW · GW

However, within a good deal of western media (at least in England), xenophobic ideals are portrayed as belonging to the "far right", and essentially eccentric. Whereas back in the day racism/nationalism was normal, and to not conform to that was considered eccentric.

Also I was looking for something a little less general than just xenophobia, a lot of opinions fall under that category.

Comment by Arran_Stirton on Betrand Russell's Ten Commandments · 2012-05-08T00:32:01.775Z · LW · GW

every opinon had once to be at least a bit eccentric by definition.

I'm having trouble parsing that, could you re-phrase?

Also it's not so much about what I'm defining "opinion" to be, but rather about what the quote means when it says "opinion". If we're going to say that the quote is wrong, we should at least aim to attack what the quote is intended to mean, rather than what we can interpret it to mean.

Comment by Arran_Stirton on Betrand Russell's Ten Commandments · 2012-05-07T02:28:27.509Z · LW · GW

(e.g. "clouds lead to rain", "fire is hot", et cetera).

I suspect the kind of opinion the quote is talking about is as defined here: a belief or judgment that rests on grounds insufficient to produce complete certainty. Neither “Fire is hot” nor “Clouds lead to rain” counts as an example of this, as most people have a fair amount of evidence on hand to back those beliefs up.

In light of this, could you please provide alternative examples of conventional opinions that were also held in the past?

Comment by Arran_Stirton on No Value · 2012-05-06T03:21:57.147Z · LW · GW

I wouldn't worry too much about it if I were you. Around your age I experienced the same kind of lack-of-feeling; a year or so later so did a friend of mine. However it passed for both of us. Retrospectively I suspect a great deal of the problem was that neither I nor my friend had invested time or effort into anything. Try working on something or executing a plot; also read http://lesswrong.com/lw/bq0/be_happier/ .

Also, reading good books works.

Comment by Arran_Stirton on Stupid Questions Open Thread Round 2 · 2012-04-22T20:17:27.128Z · LW · GW

Thanks!

I'm starting to suspect that my dream of finding an impartial analysis of cryonics is doomed to be forever unfulfilled...

Comment by Arran_Stirton on Stupid Questions Open Thread Round 2 · 2012-04-22T19:31:13.926Z · LW · GW

So the by-law bans anyone sympathetic to cryonics?

Comment by Arran_Stirton on Stupid Questions Open Thread Round 2 · 2012-04-22T18:49:39.651Z · LW · GW

So, who exactly do you expect to be doing this analysis?

No idea. Particularly if all cryobiologists are so committed to discrediting cryonics that they'll ignore/distort the relevant science. I'm not sure how banning cryonicists* from the cryobiology association is a bad thing though. Personally I think organisations like the American Psychiatric Association should follow suit and ban all those with financial ties to pharmaceutical companies.

I just want to know how far cryonics needs to go in preventing information-theoretic death in order to allow people to be "brought back to life" and to what extent current cryonics can fulfil that criterion.

* This is assuming that by cryonicists you mean people who work for cryonics institutes or people who support cryonics without having an academic background in cryobiology.

Comment by Arran_Stirton on Stupid Questions Open Thread Round 2 · 2012-04-21T23:00:47.242Z · LW · GW

Is there anywhere I can find a decent analysis of the effectiveness and feasibility of our current methods of cryonic preservation?

(one that doesn't originate with a cryonics institute)

Comment by Arran_Stirton on The Singularity Institute STILL needs remote researchers (may apply again; writing skill not required) · 2012-04-09T04:04:04.648Z · LW · GW

Your application form link is broken!

Comment by Arran_Stirton on Rationally Irrational · 2012-04-07T19:11:53.571Z · LW · GW

Of course I want to talk to you, debates are always interesting.

If you assert this:

saying rationality is a matter of degrees is basically what a paradox does.

and then this:

A paradox is the assertion that there can be multiple equally valid truth's to a situation.

That sounds a lot like the Fallacy of Grey, even if you meant to say something different. Using the word paradox implies that the "multiple equally valid truths" are contradictory in nature; if so, you'd end up with the Fallacy of Grey through the Principle of Explosion.

But regardless, you can't just say "It's a paradox." and leave it at that. Feeling that it's a paradox, no matter what paradigm you're using, shows that you don't have the actual answer. Take antinomy for example, specifically Kant's second antinomy concerning Atomism. It's not actually a paradox; it was just that at the time we had an incomplete theory of how the universe works. Now we know that the universe isn't constructed of individual particles.

You might find this and this useful further reading.

I'm interested in what you see to be the distinction between linear and non-linear rationality, I'm unfamiliar with applying the concept of linearity to rationality.

Something to keep in mind is that the "rationality" you see here is very different from traditional rationality, although we use the same name. In fact a lot of what you'll come across here won't be found anywhere else, which is why reading a good deal of the sequences is so important. Reading HPMoR is fairly equivalent, though.

I haven't down-voted you simply because I can see where you're coming from, you might be wrong or miscommunicating in certain respects, but you're not being stupid.

Part of the problem is that there's a huge inferential gap between you and most of the people here; as you say, you don't know much mathematics and you're not versed in the word of Eliezer. Similarly the folks here have not (necessarily) studied the social sciences intently, nor are they (necessarily) versed in the words of Weber, Rorty, Dewey or Kant.

Winning, in the way we use it, means taking the best possible course of action. It's distinctly different from the notion of winning a game. If losing a game is the best thing you can do, then that is winning. The reason the attempt failed is that you didn't understand what we meant by winning, and proceeded to say something that was untrue under the definition used on LW.

So yes, I'd like to hear about what a paradox means in your field of study. However you must realise that if you haven't read the sequences, and you don't know the math, there is a lot this community knows that you are ignorant of. By the same token, there is a lot that you know that the community is ignorant of. Neither thing is a bad thing, but that doesn't mean you shouldn't try to remedy it.

Importantly, don't try and mix your knowledge and LW-rationality until you fully understand LW-rationality. No offence meant.

Comment by Arran_Stirton on Rationally Irrational · 2012-04-07T06:08:19.199Z · LW · GW

no, no, No, NO, NO!

That is not what a paradox does. More importantly, saying rationality is a matter of degrees is nothing like saying that there are multiple equally valid truths to a situation.

It's called the Fallacy of Grey, read about it.

Comment by Arran_Stirton on Rationally Irrational · 2012-04-07T05:44:46.303Z · LW · GW

If you're going to use the word rationality, use its definition as given here. Defining rationality as accuracy just leads to confusion and ultimately bad karma.

As for a universal tool for some task? (i.e. updating on your belief) Well you really should take a look at Bayes' theorem before you claim that there is no such thing.

Comment by Arran_Stirton on Rationally Irrational · 2012-04-07T05:30:52.809Z · LW · GW

1.) You should read up on what it really means to have "true rationality". Here's the thing: we don't omit the possibility that the individual will never understand what "true rationality" is; in fact Bayes' Theorem shows that it's impossible to assign a probability of 1.0 to any theory of anything (never mind rationality). You can't argue with math.

2.) Yes, all of your goals should be approached with caution, just like all of your plans. We're not perfectly rational beings; that's why we try to become stronger. However, we approach things with due caution. If something is our best course of action given the amount of information we have, we should take it.

Also remember, you're allowed to plan for more than one eventuality; that's why we use probabilities and Bayes’ theorem in order to work out what eventualities we should plan for.

Comment by Arran_Stirton on Generalizing From One Example · 2012-04-07T03:18:43.513Z · LW · GW

How exactly did he convey to the actors the difference between arrogance and confidence?

Comment by Arran_Stirton on binomial variance problem · 2012-04-07T02:28:08.882Z · LW · GW

[Actually you can't be dickish/clever that way: The problem isn't underspecified as the goal is to do the best you can with the information you've got. You've got no information/evidence regarding the distribution between classes so your best bet is to treat it as random. From there you can use Bayes theorem, blah blah, etc. etc....]

Comment by Arran_Stirton on [deleted post] 2012-04-05T08:08:28.206Z

How would you justify that?