Posts

Meetup : Melbourne, practical rationality 2013-07-21T19:46:50.427Z
Book: AKA Shakespeare (an extended Bayesian investigation) 2013-06-28T16:34:40.933Z
Meetup : Melbourne, practical rationality 2013-05-26T19:07:11.861Z
Meetup : Melbourne, practical rationality 2013-04-22T01:34:19.534Z
Meetup : Melbourne, practical rationality 2013-03-24T14:20:33.697Z
Meetup : Melbourne, practical rationality 2013-02-18T23:00:44.458Z
Meetup : Melbourne, practical rationality 2013-01-18T01:23:19.205Z
Meetup : Melbourne, practical rationality 2012-12-28T17:08:46.814Z
Meetup : Melbourne, practical rationality 2012-11-21T00:44:16.229Z
Meetup : Melbourne, practical rationality 2012-10-24T23:11:21.338Z
Meetup : Melbourne, practical rationality 2012-09-24T05:24:08.248Z
Meetup : Melbourne, practical rationality 2012-08-27T05:52:31.628Z
Issue 301 shipped: Show parent comments on /comments 2012-08-03T06:14:18.577Z
Meetup : Melbourne, practical rationality 2012-08-02T17:15:04.020Z
Meetup : Melbourne, practical rationality 2012-08-02T17:14:10.036Z
New "Best" comment sorting system 2012-07-02T11:08:02.521Z
Meetup : Melbourne, practical rationality 2012-05-21T00:50:46.140Z
Meetup : Melbourne, practical rationality 2012-04-24T01:34:07.475Z
LessWrong downtime 2012-03-26, and site speed 2012-04-03T04:15:09.856Z
New front page 2012-03-30T01:08:43.154Z
Meetup : Melbourne, practical rationality 2012-03-23T00:45:31.889Z
What if the front page… 2012-03-14T06:47:26.971Z
Meetup : Melbourne practical rationality meetup 2012-02-17T18:43:07.320Z
Meetup : Melbourne practical rationality meetup 2012-01-23T11:40:47.432Z
Meetup : Melbourne practical rationality meetup 2011-12-22T22:47:12.203Z
Meetup : Melbourne practical rationality meetup 2011-11-17T23:43:33.753Z
Meetup : Melbourne practical rationality meetup 2011-10-21T06:04:37.594Z
Meetup : Melbourne, practical rationality, Friday 7th October, 7pm 2011-09-26T11:26:40.298Z
Meetup : Melbourne, Ben's house 2011-08-24T11:25:31.217Z
Call for volunteers: clean up the LW issue tracker 2011-07-12T18:15:36.579Z
Melbourne meetup discussion: Contrarian positions 2011-07-07T19:40:27.440Z
Meetup : Melbourne's first Friday of the month meetup 2011-07-04T07:43:09.199Z
Recent site changes, Mon 4th July 2011-07-04T07:25:40.646Z
Recent site changes 2011-06-24T03:50:40.802Z
Melbourne Meetup: Friday 1st July 7pm 2011-06-20T18:17:59.335Z
Official Less Wrong Redesign: Nearly there 2011-05-24T07:20:23.418Z
Melbourne Meetup: Friday 3rd June, 7pm 2011-05-23T04:00:25.142Z
Official Less Wrong Redesign: Special pages 2011-05-20T07:19:05.875Z
Official Less Wrong Redesign: View defaults for new users 2011-05-04T02:21:50.156Z
Melbourne Meetup: Friday 6th May, 6pm 2011-04-27T23:59:54.062Z
Meta: How should LW account deletion work? 2011-04-08T02:41:49.558Z
How are critical thinking skills acquired? Five perspectives 2010-10-22T02:29:18.779Z
We have a new discussion area 2010-09-27T07:50:15.554Z
Less Wrong: Open Thread, September 2010 2010-09-01T01:40:49.411Z
LessWrong downtime 2010-05-11, and other recent outages and instability 2010-05-22T01:33:08.403Z
The persuasive power of false confessions 2009-12-11T01:54:23.739Z
Bad reasons for a rationalist to lose 2009-05-18T22:57:40.761Z
Kling, Probability, and Economics 2009-03-30T05:15:24.400Z

Comments

Comment by matt on Common knowledge about Leverage Research 1.0 · 2021-10-11T11:00:58.179Z · LW · GW

I'm trying to apply the ITT to your position, and I'm pretty sure I'm failing (and for the avoidance of doubt I believe that you are generally very well informed and capable, and are engaging here in good faith, so I anticipate that the failing is mine, not yours). I hope that you can help me better understand your position:

My background assumptions (not stated or endorsed by you):
Conditional on a contribution (a post, a comment) being all of (a) subject to a reasonably clear interpretation (for the reader alone, if that is the only value the reader is optimising for, or otherwise for some (weighted?) significant portion of the reader community), (b) with content that is relevant and important to a question that the reader considers important (most usually the question under discussion), and (c) substantially true, with that truth evident from the content as presented (to the reader alone, or to the reader community), then…

My agreement with the value that I think you're chasing:
… I agree that there is at least an important value at stake here, and the reader upvoting a contribution that meets those conditions may serve that important value.

Further elaboration of my background assumptions:
If (a) (clear interpretation) is missing, then the reader won't know there's value there to reward, or must (should?) at least weigh that value against the harms (which I think are clear) of the reader or others misinterpreting the data offered.
If (b) (content is relevant) is missing, then… perhaps you like rewarding random facts? I didn't eat breakfast this morning. This is clear and true, but I really don't expect to be rewarded for sharing it.
If (c) (evident truth) is missing, then either (not evident) you don't know whether to reward the contribution or not, or (not true) surely the value is negative?

My statement of my confusion:
Now, you didn't state these three conditions, so you obviously get to reject my claim of their importance… yet I've pretty roundly convinced myself that they're important, and that (absent some very clever but probably nit-picky edge case, which I've been around Lesswrong long enough to know is quite likely to show up) you're likely to agree (other readers should note just how wildly I'm inferring here, and if Vladimir_Nesov doesn't respond please don't assume that they actually implied any of this). You also report that you upvoted orthonormal's comment (I infer orthonormal's comment instead of RyanCarey's, because you quoted "30 points of karma", which didn't apply to RyanCarey's comment). So I'm trying to work out what interpretation you took from orthonormal's comment (and the clearest interpretation I managed to find is the one I detailed in my earlier comment: that orthonormal based their opinion overwhelmingly on their first impression and didn't update on subsequent data), whether you think the comment shared relevant data (did you think orthonormal's first impression was valuable data pertaining to whether Leverage and Geoff were bad? did you think the data relevant to some other valuable thing you were tracking, that might not be what other readers would take from the comment?), and whether you think that orthonormal's data was self-evidently true (do you have other reason to believe that orthonormal's first impressions are spectacular? did you see some other flaw in the reasoning in my earlier comment?).

So, I'm confused. What were you rewarding with your upvote? Were you rewarding (orthonormal's) behaviour that you expect will be useful to you but misleading for others, or rewarding behaviour that you expect would be useful on balance to your comment's readers (if so, what and how)? If my model is just so wildly wrong that none of these questions make sense to answer, can you help me understand where I fell over?


(To the inevitable commenter who would, absent this addition, jump in and tell me that I clearly don't know what an ITT is: I know that what I have written here is not what it looks like to try to pass an ITT — I did try, internally, to see whether I could convince myself that I could pass Vladimir_Nesov's ITT, and it was clear to me that I could not. This is me identifying where I failed — highlighting my confusion — not trying to show you what I did.)

Edit 6hrs after posting: formatting only (I keep expecting Github Flavoured Markdown, instead of vanilla Markdown).

Comment by matt on Common knowledge about Leverage Research 1.0 · 2021-10-08T04:08:17.369Z · LW · GW

I've read this comment several times, and it seems open to interpretation whether RyanCarey is mocking orthonormal for presenting weak evidence by presenting further obviously weak evidence, or whether RyanCarey is presenting weak evidence believing it to be strong.

Just to lean on the scales a little here, towards readers taking from these two comments (Ryan's and orthonormal's) what I think could (should?) be taken from them…

An available interpretation of orthonormal's comment is that orthonormal:

  1. had a first impression of Geoff that was negative,
  2. then backed that first impression so hard that they "[hurt their] previously good friendships with two increasingly-Leverage-enmeshed people" (which seems to imply: backed that first impression against the contrary opinions of two friends who were in a position to gather increasingly overwhelmingly more information by being in a position to closely observe Geoff and his practices),
  3. while telling people of their first impression "for the entire time since" (for which, absent other information about orthonormal, it is an available interpretation that orthonormal engaged in what could be inferred to be hostile gossip based on very little information and in the face of an increasing amount of evidence (from their two friends) that their first impression was false (assuming that orthonormal's friends were themselves reasonable people)).
  4. (In this later comment) orthonormal then reports interacting with Geoff "a few times since 2012" (and reports specific memory of one conversation, I infer with someone other than Geoff, about orthonormal’s distrust of Leverage) (for which it is an available interpretation that orthonormal gathered much less information than their "Leverage-enmeshed" friends would have gathered over the same period, stuck to their first impression, and continued to engage in hostile gossip).

Those who know orthonormal may know that this interpretation is unreasonable given their knowledge of orthonormal, or out of character given other information about orthonormal, or may know orthonormal's first impressions to be unusually (spectacularly?) accurate (I think that I often have a pretty good early read on folks I meet, but having as much confidence in my early reads as orthonormal appears, from this comment, to have in theirs would seem to require pretty spectacular evidence), or etc., and I hope that readers will use whatever information they have available to draw their own conclusions; but I note that the information presented in orthonormal's comment, taken on its own, seems much more damning of orthonormal than of Geoff.
(And I note that orthonormal has accumulated >15k karma on this site… which… I don't quite know how to square with this comment, but it seems to me might cause a reasonable person to assume that orthonormal is better than what I have suggested might be inferred from their comment… or, noting that at the time I write this orthonormal's comment has accumulated 30 points of karma for what seems to me… unimpressive as presented?… that there may be something going on in the way this community allocates karma to comments that do not seem to me to be very good.)

Then, RyanCarey's comment specifically uses "deadpan", a term strongly associated with intentionally expressionless comedy, to describe Geoff saying something that sounds like what a reasonable person might infer was intentional comedy if said by another reasonable person. So… the reasonable inference, only from what RyanCarey has said, seems to me to be that Geoff was making a deadpan joke.

I think I met Geoff at the same 2012 CFAR workshop that orthonormal did, and I have spent at least hundreds of hours since in direct conversation with Geoff, and in direct conversation with Geoff's close associates. It seems worth saying that I have what seems to me to be overwhelmingly more direct eyewitness evidence (than orthonormal reports in their comment) that Geoff does not seem to me to be someone who wants to be a cult leader. I note further that several comments have been published to this thread by people I know to have had even closer contact over the years with Geoff than I have, and those comments seem to me to be reporting that Geoff does not seem to them to be someone who wants to be a cult leader. I wonder whether orthonormal has other evidence, or whether orthonormal will take this opportunity to reduce their confidence in their first impression… or whether orthonormal will continue to be spectacularly confident that they've been right all along.
And given my close contact with Geoff, I note that it seems only a little out of character for him, in the face of the very persistent accusations he has fielded (on evidence that seems to me to be of similar quality to the evidence orthonormal presents here) that he is, or is felt to be, tending towards or reminiscent of a cult leader, to deliver a deadpan joke “that he would like to be starting a cult if he wasn't running Leverage”. RyanCarey doesn't report their confidence in the accuracy of their memory of this conversation, but given what I know, and what RyanCarey and orthonormal report only in these comments, I invite readers to be both unimpressed and unconvinced by this presentation of evidence that Geoff is a "cult-leader-wannabe".

(I want to note that while readers may react negatively to me characterising orthonormal’s behaviour as “hostile gossip”, I am in the process of drafting a more comprehensive discussion of the OP and other comments here, in which I will try to make a clear case that my use of that term is justified. If you are, based on the information you currently have, highly confident that I am being inappropriately rude in my responses here, to a post that I will attempt to demonstrate is exceedingly rude, exceedingly poorly researched and exceedingly misleading, then you are, of course, welcome to downvote this comment. If you do, I invite you to share feedback for me, so that I can better learn the standards and practices that have evolved on this site since my team first launched it.)
(If your criticism is that I did not take the time to write a shorter letter… then I’ll take those downvotes on the chin. 😁)

Instead of however we might characterise the activity we’re all engaging in here, I wonder whether we might ask Geoff directly? @Geoff_Anders, with my explicit apology for this situation, and the recognition that (given the quality of discourse exhibited here), it would be quite reasonable for you to ignore this and continue carrying on with your life, would you care to comment?

(A disclosure that some readers may conclude is evidence of collusion or conspiracy, and others might conclude is merely the bare minimum amount of research required before accusing someone of activities such as those this post (doesn't actually denotationally accuse Geoff of, but very obviously connotationally) accuses Geoff of: In the time between the OP being posted and this comment, I have communicated with Geoff and several ex-Leverage staff and contributors.)

Comment by matt on Design 2 · 2018-03-11T01:47:52.121Z · LW · GW

Schedule a script to nuke your history every X minutes?

Comment by matt on Group rationality diary, 8/20/12 · 2017-04-28T01:56:41.843Z · LW · GW

Dropbox broke old public links with no way I could see of preventing the link rot (https://www.dropbox.com/help/files-folders/public-folder). See https://drive.google.com/drive/folders/0BxlExVbPZSCRdUNaZUFId05YY1k for all of my audio tracks.

Comment by matt on Open thread, Sep. 21 - Sep. 27, 2015 · 2015-11-24T09:53:44.679Z · LW · GW

Anyone else having trouble with keyboard input on Lesswrong? (Arrow keys and page up & down work for me on OSX Chrome, Firefox & Safari.)

Comment by matt on Solving sleep: just a toe-dipping · 2015-07-10T01:23:09.159Z · LW · GW

I've been polyphasic on Everyman 3 since about March 2011 (Jan and Feb were spent unsuccessfully trying to make Uberman work). According to my aging Zeo I get approximately the same REM and SWS as I did on 7.4hrs of monophasic sleep before I adapted. Nearly all of the SWS is in my 3hr core. On Uberman I never achieved enough SWS in my naps to get me through. The adaptation was ridiculously hard - both for how very unpleasant it was and for having to get through it while sleep deprived.

Comment by matt on How can I spend money to improve my life? · 2014-03-05T10:30:53.442Z · LW · GW

See http://lesswrong.com/lw/awm/how_to_avoid_dying_in_a_car_crash/62jy

Comment by matt on Group rationality diary, 8/20/12 · 2014-01-18T19:53:18.795Z · LW · GW

If that ever worked, it looks like Dropbox is no longer indexing:

Comment by matt on Post ridiculous munchkin ideas! · 2013-06-28T16:51:26.555Z · LW · GW

Actually, I would suggest not focusing your attention on evolutionary anthropology while you're supposed to be piloting a multi-ton vehicle at high speeds.

When you're driving a daily commute your mind is going to wander unless you have extraordinary focus control / mindfulness training. It's not obvious to me that it's more dangerous to have it directed to evolutionary anthropology than to what you're going to do when you get home (or wherever else it wandered).

Comment by matt on A Ketogenic Diet as an Effective Cancer Treatment? · 2013-06-28T16:19:56.630Z · LW · GW

people with late stage cancers often have enough trouble eating as is (a large fraction actually die of starvation), and getting them to eat anything is an accomplishment. So at that level, for a lot of post-metastasis patients, this will be happening naturally anyways.

Starvation does not equal ketosis. If cancer patients are suffering from nausea and lack of motivation to eat anything, they and their carers may not select the high-fat, low-carbohydrate foods that would promote and sustain ketosis, and may instead choose simple, easy-to-digest carbohydrates and sugary treats.

(Your comment upvoted.)

Comment by matt on Maximizing Your Donations via a Job · 2013-05-07T22:52:25.053Z · LW · GW

At TrikeApps our job ads say "Choose an appropriate file format for your resume – we’ll draw conclusions about you from the tools you use". Anyone who expects us to prefer a proprietary file format over LaTeX or PDF is probably applying to the wrong place :)

Comment by matt on Explicit and tacit rationality · 2013-05-07T07:09:45.359Z · LW · GW

They're bold enough to punch through unendorsed aversions, they're not afraid to make fools of themselves, they don't procrastinate, they actually try stuff out, and they push on without getting easily discouraged.

For what it's worth, I'm a pretty successful entrepreneur and I'd say this more like:

They manage on the whole to punch through many of their unendorsed aversions (at least the big ones that look like they're getting in the way), they're just as afraid to make fools of themselves as you are but they have ways of making themselves act anyway most of the time, they keep their procrastination under control and manage to spend most of their time working, they actually try stuff out, and they have ways to push through their discouragement when it strikes.

(Your version scans better.)

I'm commenting mostly against a characterisation of this stuff being easy for successful entrepreneurs. If you try something entrepreneurial and find that it's hard, that's not very useful information, and it doesn't mean that you're not one of the elect and should give up. It's bloody hard for many successful people, but you can keep working on your own systems until they work (if you try to just keep working I think you'll fail - go meta and work both on what's not working, to make it work better, and on what is working, to get more of it).

Comment by matt on The Singularity Wars · 2013-02-14T21:41:19.454Z · LW · GW

singularity.org, recently acquired after a rather confused and tacky domain-squatter abandoned it

I would not have described the previous owner of singularity.org as a domain squatter. He provided a small amount of relevant info and linked to other relevant organizations. SI/MIRI made more of the domain than he had, but that hardly earns him membership in that pestilential club.

He sold the domain rather than abandoning it, and behaved honestly and reasonably throughout the transaction.

Comment by matt on The Singularity Wars · 2013-02-14T21:29:34.496Z · LW · GW

And http://leverageresearch.org/

Comment by matt on Meetup : Melbourne Social Meetup · 2013-01-16T04:09:13.345Z · LW · GW

I expect to attend.

Comment by matt on Meetup : Melbourne, practical rationality · 2013-01-03T00:39:03.170Z · LW · GW

Judging by https://groups.google.com/forum/?fromgroups=#!topic/melbourne-less-wrong/2dFTXTJRHZY and posts here (thanks Maelin) it's going to be a quiet one. I'll bring a couple of games in case we don't get critical mass for a raging storm of rational self improvement.

Comment by matt on Recommendations for good audio books? · 2012-12-06T23:59:04.104Z · LW · GW

If you're doing anything else, you may also want to speed up the playback and shift down the pitch. To achieve that you may use a tool like Audacity (open source, many platforms, Effect > Change Tempo…) or SoundStretch. I use this to automate podcast shifting on my mac.
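A minimal sketch of that kind of automation (illustrative only: the folder names and shift amounts are placeholders, and SoundStretch only reads WAV, so compressed podcasts would need converting first):

```python
# Illustrative sketch only -- batch tempo/pitch shifting with the SoundStretch
# command-line tool. Folder names and shift amounts are placeholders.
import subprocess
from pathlib import Path

def shift(in_wav: Path, out_wav: Path, tempo_pct: int = 30, pitch_semitones: int = -2) -> None:
    """Speed playback up by tempo_pct percent and shift pitch by pitch_semitones semitones."""
    subprocess.run(
        ["soundstretch", str(in_wav), str(out_wav),
         f"-tempo={tempo_pct}", f"-pitch={pitch_semitones}"],
        check=True,
    )

if __name__ == "__main__":
    out_dir = Path("shifted")
    out_dir.mkdir(exist_ok=True)
    for wav in Path("podcasts").glob("*.wav"):
        shift(wav, out_dir / wav.name)
```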

Comment by matt on Recommendations for good audio books? · 2012-12-04T10:37:20.333Z · LW · GW

(duplicate comment removed)

Comment by matt on Open Thread, November 1-15, 2012 · 2012-11-13T22:01:52.622Z · LW · GW

Hmm… you're not being moderated. I'll follow up on possible causes by PM.

Comment by matt on Meetup : Melbourne social meetup · 2012-11-13T09:54:38.370Z · LW · GW

There is, of course: http://hpmor.com/
Eliezer recommends The World of Null-A (which I've not yet read) and Eliezer and I recommend David's Sling.
Eliezer recommends Lawrence Watt-Evans's fiction. I merely point it out (it's not particularly well written or engaging, but it is nice to watch a protagonist be completely derailed from a quest into setting up a business because he sees an opportunity).

Comment by matt on Meetup : Melbourne, practical rationality · 2012-10-24T23:12:17.379Z · LW · GW

We're planning on discussing ways CfAR (and anyone else) might measure practical rationality to provide feedback for their training.

Comment by matt on Group rationality diary, 5/21/12 · 2012-10-05T01:43:00.310Z · LW · GW

All of the materials from the July minicamp are available at https://github.com/CfAR/core-materials … for those with access to that private repository. The modules are all in Markdown format and the project includes build scripts that make HTML and PDF "books" that select some or all of the material. The formatting needs some work, and the project needs an owner.
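The repo's actual build scripts aren't reproduced here; a minimal sketch of that kind of Markdown-to-book step, assuming pandoc is installed (module paths and output names are placeholders):

```python
# Illustrative sketch only -- not the repo's actual build script. Concatenates a
# chosen subset of Markdown modules into standalone HTML and PDF "books" via pandoc.
import subprocess
from pathlib import Path

MODULES = sorted(Path("modules").glob("*.md"))  # or any hand-picked subset

for fmt in ("html", "pdf"):  # PDF output additionally needs a LaTeX engine installed
    subprocess.run(
        ["pandoc", "--standalone", "--toc", "-o", f"book.{fmt}", *map(str, MODULES)],
        check=True,
    )
```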

I think the CfAR brass are happy for me to give access to alumni of past minicamps, but I'll need to confirm that before I add anyone. If you're interested in having access to the materials, please contact me with your GitHub username and I'll seek permission to give you access.

Rationale: GitHub and Markdown are geekier than a wiki would be, but many of us are geeks, and having a build script to generate actually usable materials makes it easier to treat this repository as a master, rather than having available the shortcut of just using the MS Word document you used last time and you'll come back and update the repo as soon as you have time and ohh, look at that shiny thing over there… … ooo! Other shiny thing… (repo not updated, master materials scattered across many hard drives and not available to meetups).

Practical reality so far: I think CfAR instructors have made further modifications to their old sources for the materials, and since August no one but Trike employees has made any contributions to the repo.

I think this can work if someone drives it, and Trike is available to help where we can.

(cross-posted to the meetup organisers' Google group)

Comment by matt on Preventing discussion from being watered down by an "endless September" user influx. · 2012-09-03T18:26:32.245Z · LW · GW

It doesn't seem hard to implement naively (sketched below):

Discussion threads would truncate, for new users, at new-user comments (experienced users' comments on new-user comments would be invisible to new users).
Our caching gets more complicated.
Many candidate tests for "experienced" seem obvious, but some might be very easy to game (funny comments on HPMOR posts qualify you).
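A sketch of that naive rule (illustrative only; the Comment structure and the karma threshold standing in for "experienced" are placeholders):

```python
# Illustrative sketch of the naive truncation rule above -- the Comment structure
# and the karma threshold standing in for "experienced" are placeholders.
from dataclasses import dataclass, field

KARMA_THRESHOLD = 100  # placeholder test for "experienced"; easy to game, as noted

@dataclass
class Comment:
    author_karma: int
    children: list["Comment"] = field(default_factory=list)

def is_experienced(karma: int) -> bool:
    return karma >= KARMA_THRESHOLD

def visible_thread(comment: Comment, viewer_is_new: bool) -> Comment:
    """Return the thread as a given viewer sees it: for new-user viewers, replies
    under new-user comments (including experienced users' replies) are dropped."""
    if viewer_is_new and not is_experienced(comment.author_karma):
        return Comment(comment.author_karma, [])  # keep the comment, hide its subtree
    return Comment(comment.author_karma,
                   [visible_thread(c, viewer_is_new) for c in comment.children])
```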

Comment by matt on [META] Karma for last 30 days? · 2012-08-31T18:50:18.113Z · LW · GW

Sorry people - I should have posted when we did this. Leaving y'all in the dark was unkind.

Comment by matt on Dealing with trolling and the signal to noise ratio · 2012-08-31T18:46:22.389Z · LW · GW

Downvoted for putting more than one suggestion in a single comment.

Punish me for this anti-social act if you must, but as one of the dudes who tries to act after reading these suggestions (and tries hard to discount his own opinion and be guided by the community) this practice makes it much harder for me to judge community support for ideas. Does your comment having a score of 10 suggest 2.5 points per suggestion? ~10 points per suggestion? 15 points each for 3 of your suggestions and -35 for one of them (and which one is the -35?)?

Can we please adopt a community norm of atomicity in suggestions?

Comment by matt on Group rationality diary, 8/20/12 · 2012-08-27T05:47:31.022Z · LW · GW

And, your body repartitions your sleep on a polyphasic schedule. My sleep really isn't like yours any more. See the bar charts waaaay down the page here: http://trypolyphasic.com/forum/post/8455/#p8455

Comment by matt on Group rationality diary, 8/20/12 · 2012-08-26T21:58:40.225Z · LW · GW

We get about as much REM and SWS (deep sleep) as monophasic sleepers - about 90mins each per 24hrs. This is one hypothesis to explain why so many people (me included) have so much trouble adapting to the original Uberman schedule (which, properly adapted, gives you 50+ mins each).

Comment by matt on Group rationality diary, 8/20/12 · 2012-08-26T01:09:12.220Z · LW · GW

I think 10hrs awake, especially while adapting, is going to be very tough. I think you want to aim for awake periods of 4 to 6.5 hours. I know that that requires a nap during normal working hours - as I said in my minicamp unconference presentation (unconference: polyphasic sleep isn't endorsed by CfAR) I think you're going to have to try talking to your employer about it, or sneaking off during a break.

Duplicate http://bit.ly/poly-schedule-tool and play with the times in blue for my advice - the blue cells will turn red if I think what you're attempting is going to be hard to make work.
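The spreadsheet itself isn't reproduced here, but the check behind the blue/red cells is roughly this (a sketch using the 4 to 6.5 hour guideline above; the example times are placeholders, not advice):

```python
# Sketch of the check behind the schedule tool's blue/red cells: flag any awake
# gap outside the 4 to 6.5 hour range suggested above. Example times are placeholders.
from datetime import datetime, timedelta

def awake_gaps(sleep_blocks):
    """sleep_blocks: list of (start, end) datetimes for one day, sorted by start."""
    return [nxt_start - end for (_, end), (nxt_start, _) in zip(sleep_blocks, sleep_blocks[1:])]

def check(sleep_blocks, lo=timedelta(hours=4), hi=timedelta(hours=6.5)):
    return ["ok" if lo <= gap <= hi else "hard" for gap in awake_gaps(sleep_blocks)]

day = datetime(2012, 8, 26)
everyman3 = [  # a 3hr core plus three 20min naps; the exact times are placeholders
    (day.replace(hour=1), day.replace(hour=4)),
    (day.replace(hour=9), day.replace(hour=9, minute=20)),
    (day.replace(hour=14), day.replace(hour=14, minute=20)),
    (day.replace(hour=19), day.replace(hour=19, minute=20)),
]
print(check(everyman3))  # ['ok', 'ok', 'ok']
```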

Comment by matt on Group rationality diary, 8/20/12 · 2012-08-22T01:30:08.686Z · LW · GW

Yah - Wozniak is fairly well known in the polyphasic community for having very strongly held views that are directly contradicted by the experience of polyphasic sleepers. See for example http://www.puredoxyk.com/index.php/2006/11/01/an-attack-on-polyphasic-sleep/.

I did not gather objective evidence of the differences in my cognition before and after polyphasic sleep, but any differences are small enough that they're invisible to me and those who live with me.

Comment by matt on Group rationality diary, 8/20/12 · 2012-08-21T22:28:36.569Z · LW · GW

(Note that there are a few LWers attempting or contemplating polyphasic sleep right now. If you are considering it seriously we'd love your participation in a data collection effort on before and after cognitive performance.)

Polyphasic Sleep
How to have 19-22hrs of fun every day

https://dl.dropbox.com/u/107056/Minicamp2012/PolyphasicSleep/index.html
https://dl.dropbox.com/u/107056/Minicamp2012/PolyphasicSleep.zip

which includes, at slides 114 and 115:

Theory:
http://trypolyphasic.com/forum/post/8455/#p8455
http://trypolyphasic.com/forum/topic/876/uber-and-everyman-theory-analysis/
Experience:
http://www.stevepavlina.com/blog/2005/10/polyphasic-sleep/ ... and see links at bottom, particularly...
http://www.stevepavlina.com/blog/2005/11/polyphasic-sleep-log-days-25-30-final-update/
Note that Steve's experience of the flexibility of his near-uberman schedule doesn't match other reports. I think this flexibility may be available after stabilisation, but comes at a high cost before.
http://www.stevepavlina.com/blog/2006/01/polyphasic-sleep-update-day-90/
Steve's report of euphoric mood is fairly common on the Uberman schedule, and much less common on schedules that include regular core sleep.
http://www.stevepavlina.com/blog/2006/02/polyphasic-sleep-20/ Some experiments in flexibility
http://www.stevepavlina.com/blog/2006/04/polyphasic-sleep-the-return-to-monophasic/ Why did he stop?
http://trypolyphasic.com/forum/forum/17/adaptation/
And https://groups.google.com/d/topic/polyphasic/FTWKW0pvKZ0/discussion

and finally

My sleep tracks (which include masking sound, including walla, to drown out distracting conversation):

My schedule calculator: http://bit.ly/poly-schedule-tool

Comment by matt on Group rationality diary, 8/20/12 · 2012-08-21T22:11:22.483Z · LW · GW

I'd point out that being a polyphasic sleeper is a major confound here

Agreed.

… we all know that sleep is necessary for learning & long-term memory formation...

With some sleep phases more important than others. High-quality evidence is thin on the ground here, but what is available says I'm getting a normal amount of REM and slow wave sleep, and nearly none of the other phases. Wikipedia (and other sources I've found) suggests that those are the sleep phases important in memory formation. (Note that some studies listed on that wiki page have found napping to improve memory - my schedule gives me REM naps during the day (which is right at the top of the list of my super powers).)
[Lots of speculation here ↑. Available data below.]

Incidentally, do you do spaced repetition? I and Wozniak would be interested in your statistics/database if you started it before the polyphasic sleeping.

Before polyphasic sleeping I didn't have enough time to do spaced repetition :)
[That was the available data - sorry about that.]

There are moves afoot to organise the several July minicampers who plan to try a polyphasic schedule to gather before and after data. Do you want an introduction to the organisers of that effort?

Comment by matt on Group rationality diary, 8/20/12 · 2012-08-21T12:42:47.868Z · LW · GW

I tried Bacopa, found in some studies to improve learning and memory. It made me very sleepy on the day following taking it:

10 Aug  Bacopa  Good
11 Aug  Bacopa  Lethargic all day
12 Aug  Bacopa  Lethargic all day
13 Aug  -       Lethargic all day
14 Aug  -       Good
15 Aug  Bacopa  Good
16 Aug  Bacopa  Morning lethargy, clearing after 3hrs
17 Aug  Bacopa  Lethargic all day
18 Aug  Bacopa  Lethargic all morning
19 Aug  Bacopa  Lethargic, less than other days
20 Aug  -       Good

I've stopped.

Important: I'm a polyphasic sleeper: 3hr core, 3x 20min naps, stable for 18 months.

(http://www.australianvitamins.com/products/view/1808)

[Edited to increase visibility of polyphasic sleep.]

Comment by matt on LessWrong could grow a lot, but we're doing it wrong. · 2012-08-21T07:14:38.432Z · LW · GW

Erm… that's security by obscurity in the same way that Wikipedia relies on security by obscurity, right?

Comment by matt on Meetup : Melbourne social meetup · 2012-08-14T06:28:16.243Z · LW · GW

I'll be there.

Comment by matt on Meetup : Melbourne, practical rationality · 2012-08-02T19:26:23.543Z · LW · GW

Ouch - downvoted, presumably because it's a dup. For the record, I raised http://code.google.com/p/lesswrong/issues/detail?id=327 and worked with John to fix the cause of the dup.
Deep down, under the annoying double post character defect, I try to be a good person.

Comment by matt on New "Best" comment sorting system · 2012-07-02T23:00:39.609Z · LW · GW

Bayesian reformulations welcome.

Comment by matt on New "Best" comment sorting system · 2012-07-02T22:59:52.484Z · LW · GW

Work done by John Simon, and integrated by Wes.

Comment by matt on New "Best" comment sorting system · 2012-07-02T22:56:57.185Z · LW · GW

I think "Popular" adds weight to recent comments. This seems to be a much worse way of achieving what "Best" shoots for.
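For context: LW runs on the Reddit codebase, where the "Best" sort is generally implemented as the lower bound of the Wilson score confidence interval on the upvote proportion, so recency plays no part. A minimal sketch of that idea (treat the exact formula as an assumption; it isn't quoted in this thread):

```python
# Sketch of the Wilson score lower bound commonly used for "Best" comment sorting
# in the Reddit codebase. Treat the exact formula as an assumption, not a quote
# of what LW shipped.
from math import sqrt

def wilson_lower_bound(ups: int, downs: int, z: float = 1.96) -> float:
    n = ups + downs
    if n == 0:
        return 0.0
    phat = ups / n
    return ((phat + z * z / (2 * n)
             - z * sqrt((phat * (1 - phat) + z * z / (4 * n)) / n))
            / (1 + z * z / n))

# Unlike "Popular", this ignores recency: a well-established 9-up/1-down comment
# outranks a brand-new 1-up/0-down comment no matter when either was posted.
assert wilson_lower_bound(9, 1) > wilson_lower_bound(1, 0)
```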

Comment by matt on Web of Trust lists Singularity.org as having a bad reputation · 2012-06-24T22:12:22.831Z · LW · GW

The page expressly says "Supplement your rating by leaving a comment. Comments provide more information, but do not affect the reputation."

If you click "Rate this website" you can rate each scale as you wish. Surely some users choosing different values on the scales is a much simpler explanation than that the site programmers built in a more complicated rating system and then lied about it?!

Comment by matt on Web of Trust lists Singularity.org as having a bad reputation · 2012-06-24T19:51:31.317Z · LW · GW

Can this be true? I don't know how to check it; googling "link:singularity.org" reveals nothing (but the functionality of "link:" seems broken or something; I'd be glad if someone could explain me how it works).

Google Webmaster Tools isn't helping here either: [screenshot of Webmaster Tools]

(Webmaster Tools Links to Your Site shows "No data available")

Comment by matt on Web of Trust lists Singularity.org as having a bad reputation · 2012-06-24T19:48:23.664Z · LW · GW

Your comments don't count, your ratings do: [screenshot of the WOT page showing the relevant controls and explanatory text]

(look for the green "Rate this website" link above right of the rating graphic)

Comment by matt on Web of Trust lists Singularity.org as having a bad reputation · 2012-06-24T19:45:17.316Z · LW · GW

See https://support.google.com/webmasters/bin/answer.py?hl=en&answer=55281

To find a sampling of links to any site, you can perform a Google search using the link: operator. For instance, [link:www.google.com] will list a selection of the web pages that have links pointing to the Google home page. …

See a much larger sampling of links to a verified site:

  1. On the Webmaster Tools Home page, click the site you want.
  2. On the left-hand menu, click Traffic, and then click Links to Your Site.
Comment by matt on Web of Trust lists Singularity.org as having a bad reputation · 2012-06-23T18:41:04.435Z · LW · GW

But the important part is this: Someone from SIAI should follow the link "Click here if you own this site", verify the site ownership, and request a review.

Done (with a stretch at the "someone from SIAI" part). Comment above.

Comment by matt on Web of Trust lists Singularity.org as having a bad reputation · 2012-06-23T18:21:02.175Z · LW · GW

I've registered at WoT and requested a reevaluation. We're also making a couple of changes the WoT reevaluation request process seems to suggest are important (like more prominently linking the site's privacy policy).

http://www.mywot.com/en/forum/24394-singularity-org
http://www.mywot.com/en/scorecard/singularity.org

Comment by matt on The Power of Reinforcement · 2012-06-21T19:09:46.052Z · LW · GW

I want to design a reinforcement schedule in one of our apps. Can anyone link me to some specific guidelines on how to optimise this?

(Reinforce exactly what % of successes (30%? 26%? 8%)? Reinforce performances in the top 10% of past performances (or the top 12%, or the top 8%)? How does time factor in (if the user hasn't used the app for a week, should I push a reinforcer forward)?)
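A minimal sketch of the kind of hook being asked about (every constant below is a placeholder standing in for one of the open questions, not a recommendation):

```python
# Sketch of a variable-ratio reinforcement hook. Every constant is a placeholder
# standing in for one of the open questions above, not a recommendation.
import random
from datetime import datetime, timedelta

REINFORCE_FRACTION = 0.30           # "what % of successes?" -- placeholder
TOP_PERFORMANCE_FRACTION = 0.10     # "top 10% of past performances?" -- placeholder
ABSENCE_GAP = timedelta(weeks=1)    # "push a reinforcer forward after a week away?"

def should_reinforce(success: bool, score: float, past_scores: list,
                     last_seen: datetime, now: datetime) -> bool:
    if not success:
        return False
    if now - last_seen >= ABSENCE_GAP:
        return True   # returning user: bring a reinforcer forward
    if past_scores:
        cutoff = sorted(past_scores)[int(len(past_scores) * (1 - TOP_PERFORMANCE_FRACTION))]
        if score >= cutoff:
            return True   # personal-best territory
    return random.random() < REINFORCE_FRACTION   # variable-ratio baseline
```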

Comment by matt on Marketplace Transactions Open Thread · 2012-06-19T04:03:56.861Z · LW · GW

I'm looking for someone to help me, on a paid basis, with statistical analysis. I have problems like the following:

1. When to inspect?
I have 10k documents per month streaming to office staff for data entry in offices scattered around the world. I have trained staff at HQ doing inspections of the data entry performed by the office staff, detecting errors and updating fields in which they detected errors. I will soon have random re-checking by HQ inspectors of entries already checked by other HQ staff.
The HQ staff currently detect errors on ~15% of documents (between nearly none and ~6% errors on particular fields on documents). I don't yet have a good estimate of how many of those events are false positives and how many errors are not detected at all. Users show learning (we detect fewer errors from users who have entered data on more documents) that continues over their first 2000 or so documents (where I start running out of data). Required: I need to decide when a document can skip secondary inspection. I need to decide when users (HQ or practice users) don't understand something and need training (their error rate seems high for the difficulty of data entry on that field). When I change the user interface I need to decide whether I helped or hurt, and I need future error prediction (after I changed the data entry environment) to recover quickly.

2. What works?
We have a number of businesses that sell stuff, and we often change how that's done and how we promote (promotions, press placements (that I can work to get), changes in price, changes in product, changes in business websites, training for our sales people, etc.). I'd like to learn more than I am from the things we change, so that I can focus our efforts where they work best. There is a huge amount of noise in this data.

Proposals should be sent to jobs@trikeapps.com, should reference this comment, and should include answers to the following two questions (and please don't post your answers to the questions on this site):

  1. In my first example job above, across 200 users the average error rate in their first 10 documents was 12% (that is, of the set of 2000 documents made from the first 10 documents entered by each of 200 users, 12% contained at least one error). Across so few documents from each user (only 10) there is only a small indication that the error rate on the 10th document is lower than the error rate on the first document (learning might be occurring, but isn't large across 10 documents). A new user has entered 9 documents without any errors. What is the probability that they will error on their next document?

  2. What question should I ask in this place to work out who will be good at doing this work? What question will effectively separate those who understand how to answer questions like this with data from those who don't understand the relevant techniques?

Comment by matt on [Cryonics News] Australian cryonics startup: Stasis Systems Australia update · 2012-06-17T04:53:59.783Z · LW · GW

I pushed all the way through. I'm signed up with Alcor, but feel very much as you do about how hard signup was, and how unlikely it is that Alcor will survive very long. I know only one other Australian who tried to sign up, and he also gave up in frustration.

(I've tried to volunteer my time and efforts to Alcor, and they can't organise enough to accept my help.)

Comment by matt on Proposal: Show up and down votes separately · 2012-06-11T22:28:26.936Z · LW · GW

There's some weight behind this proposal. Consider modifying the Anti-Kibitzer (http://lesswrong.com/lw/1s/lesswrong_antikibitzer_hides_comment_authors_and/1hvk) to do what you want (or adding a ticket to request same - http://code.google.com/p/lesswrong/issues/list).

Comment by matt on [deleted post] 2012-06-05T04:22:37.456Z

Test comment:

  • this
  • that
    Next paragraph
Comment by matt on How can we get more and better LW contrarians? · 2012-04-25T20:31:03.329Z · LW · GW

It's too bad that automatic wiki editing privileges don't come with a certain level of karma

Hmmm... you know that wouldn't be too hard to arrange. Keeping the passwords in sync after a change to one account would be much more work, but might be ignorable.
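A purely hypothetical sketch of the easy half, granting wiki edit rights above a karma threshold; get_karma and grant_wiki_edit are stand-ins, not real LW or wiki API calls, and the harder password-sync problem is left alone:

```python
# Purely hypothetical sketch -- get_karma and grant_wiki_edit are stand-in stubs,
# not real LessWrong or wiki API calls. Run periodically; one-way only, so the
# harder password-sync problem mentioned above is untouched.
KARMA_FOR_WIKI_EDIT = 100  # placeholder threshold

def sync_wiki_privileges(usernames, get_karma, grant_wiki_edit):
    for name in usernames:
        if get_karma(name) >= KARMA_FOR_WIKI_EDIT:
            grant_wiki_edit(name)
```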