Comment by saidachmiz on Discourse Norms: Moderators Must Not Bully · 2019-06-17T22:08:55.093Z · score: 17 (7 votes) · LW · GW

I agree with this classification, but want to note something that sometimes happens and is incorrectly seen as an example of the latter scenario (i.e., of “small errors mistaken for big ones”).

I am talking about a case like this:

Alice: Blah blah. For example, so-and-so.

Bob: Now, hang on there, Alice; so-and-so is actually not an example of blah blah (and possibly so-and-so does not even exist / so-and-so does not happen the way you say / etc.)!

Alice: Yes, sure, fine. That’s just an example, though, don’t nitpick.

Now, at this point Alice often simply ignores anything else Bob says, or gets frustrated and angry and stops reading the comments, etc., but if the conversation continues, what Bob says (or might properly say) would be—

Bob: But wait—what do you mean, “just” an example? It was the only example in your post! And now that you have (apparently) agreed with me that your purported example is actually not an example of your thesis, your post is left with no examples! That is a very serious flaw; you are now presenting a thesis with no empirical support or case studies whatsoever. And I note that you haven’t made any attempt, in your response to me, to replace the defeated example with others, which is odd, since you claim the thing you describe is commonplace… You say I’m nitpicking, yet as far as I can tell, I have inflicted a serious blow on the whole edifice of your essay!

(I once summarized this as: “that [example] was not an example of the thing you mention; but also and relatedly, maybe the thing you mention doesn’t really exist?”)

And in my experience, this never results in Alice seriously re-examining her thesis, because “dismantling an example” is, somehow, automatically classified as “irrelevant nitpicking”, even when that classification is completely nonsensical because without examples, the entire piece of writing is just empty noise.

Comment by saidachmiz on Discourse Norms: Moderators Must Not Bully · 2019-06-17T04:20:20.649Z · score: 7 (3 votes) · LW · GW

But I don’t think it’s particularly weird, if you are running a private space, to say “in this space it is not acceptable to openly self-identify as a Nazi.” (This is in part because I think it’s generally essential for private spaces to have pretty strong leeway to define their culture pretty arbitrarily)

[Emphasis mine]

Note that the OP is explicitly and specifically about public (a.k.a. “civic”) spaces.

Questions of what is, and what is not, appropriate for private spaces, are thus not applicable.

Comment by saidachmiz on The Univariate Fallacy · 2019-06-16T22:31:45.433Z · score: 6 (3 votes) · LW · GW

I’ve been doing this thing where I prefer to use “plain” Unicode where possible

I entirely sympathize with this preference!

Unfortunately, proper rendering of Unicode depends on whether the requisite characters are present in the fallback fonts provided by a user’s OS/client combination (which varies unpredictably). This means that the more exotic code points cannot be relied upon to render with acceptable consistency.

Now, that having been said, and availability and proper rendering aside, I cannot endorse your use of such code points as U+2081 SUBSCRIPT ONE. Such typographic features as subscripts ought properly to be encoded via OpenType metadata[1], not via Unicode (and indeed I consider the existence of these code points to be a necessary evil at best, and possibly just a bad idea). In the case where OpenType metadata editing[2] is not available, the proper approach is either LaTeX, or “low-tech” approximations such as brackets.


  1. Which, in turn, ought to be generated programmatically from, e.g., HTML markup (or even higher-level markup languages like Markdown or wiki markup), rather than inserted manually. This is because the output generation code must be able to decide whether to use OpenType metadata or whether to instead use lower-level approaches like the HTML+CSS layout system, etc., depending on the capabilities of the output medium in any given case. ↩︎

  2. That is, the editing of the requisite markup that will generate the proper OpenType metadata; see previous footnote. ↩︎
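For concreteness, here is a minimal sketch of the sort of markup-level encoding I have in mind, assuming an HTML+CSS output medium (the element and style rule are illustrative, not any particular site’s markup; browser support for `font-variant-position` varies):

```html
<!-- Semantic markup: the "1" is marked as a subscript,
     not hard-coded as the code point U+2081. -->
<p>The first variable is <var>x</var><sub>1</sub>.</p>

<style>
  /* Where the font supports the OpenType 'subs' feature, use a true
     subscript glyph; otherwise the browser falls back to its usual
     shrink-and-lower layout approximation. */
  sub {
    font-variant-position: sub;
  }
</style>
```

The point being that the source encodes the *intent* (“this is a subscript”), and the output layer decides how to realize it.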

Comment by saidachmiz on Discourse Norms: Moderators Must Not Bully · 2019-06-16T18:36:46.209Z · score: 9 (5 votes) · LW · GW

Either you’re not referring to actual Nazis, or your entire “Nazis and the like” argument is nonsense, because (to my knowledge) all those who were members of the NSDAP are dead.

So what do you really mean?

Comment by saidachmiz on The Univariate Fallacy · 2019-06-15T23:19:26.653Z · score: 6 (3 votes) · LW · GW

FYI, one of the symbols in this post is not rendering properly. It appears to be U+20D7 COMBINING RIGHT ARROW ABOVE (appearing right after the ‘x’ characters) but, at least on this machine (Mac OS 10.11.6, Chrome 74.0.3729.131), it renders as a box:

Screenshot of character incorrectly rendered as a box

It is probably a good idea to use LaTeX to encode such symbols.
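For example (a minimal sketch using the site’s math support; the subscripted variant is just for illustration):

```latex
% Instead of the character sequence "x" + U+20D7 (whose rendering depends
% on the reader's fallback fonts), encode the arrow in LaTeX:
$\vec{x}$
% With a subscript, if needed:
$\vec{x}_i$
% The math renderer draws the arrow itself, so it displays consistently
% across browsers and operating systems.
```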

UPDATE: It does work properly in Firefox 67.0.2 (on the same machine):

Screenshot of character rendered correctly as right arrow above

Comment by saidachmiz on Recommendation Features on LessWrong · 2019-06-15T04:11:42.362Z · score: 4 (2 votes) · LW · GW

Ah! Yes, I understand now, and entirely agree!

Comment by saidachmiz on Spiracular's Shortform Feed · 2019-06-15T03:54:23.704Z · score: 6 (3 votes) · LW · GW

I see. Well, fair enough. Would it be possible to add (or perhaps simply encourage authors to add) some sort of note to this effect to shortform feeds, if only as a reminder?

(As an aside, I don’t think I quite grasp how you’re using the term “visibility” here. With that clause removed, what you’re saying seems straightforward enough, but that part makes me doubt my understanding.)

Comment by saidachmiz on Recommendation Features on LessWrong · 2019-06-15T03:50:24.186Z · score: 2 (1 votes) · LW · GW

It would be good if there was some system that would allow other users that are not moderators to be able to inform other users about the updated epistemic content of a post.

I see, yes. Well, I agree that such a system would be good to have, but I am not convinced that it would be better for what I have in mind than using the recommendation system you’ve built for this. After all, three-quarters of the work here is precisely in bringing the old posts in question to the attention of users; relying on users in the first place, to accomplish that, seems to be an ineffective plan—whereas using the automated recommendation engine is perfect. (Still, the user-originated system you allude to would, I think, be a good supplement.)

I think there is still a loss of ownership that people would feel when we add big moderator notes to the top of their posts, even if clearly signaled as moderator-added content, that I think would feel quite violating to many authors, though I might be wrong here.

Well, that seems to me to be a matter of designing the UI/styling for clear separation, which is an eminently tractable problem. (Or do you disagree, with either clause?) There is, after all, all sorts of metadata and navigation UI and so on around a post, which is not generated by the author (directly or at all); have the layout and styling and such of these “moderator notes” clearly associate them with this metadata/navigation, and I think (unless I am misunderstanding you) that your concern is thereby addressed.

Comment by saidachmiz on Spiracular's Shortform Feed · 2019-06-15T03:30:55.665Z · score: 2 (1 votes) · LW · GW

Giving offense wasn’t my intent, by any means!

Certainly it’s your right to discontinue the conversation if you find it unproductive. But I find that I’m confused; what was your goal in posting these things publicly, if not to invite discussion?

Do you simply prefer that people not engage with these “shortform feed” entries? (It may be useful to note that in the top-level post, if so. Is there some sort of accepted norm for these things?)

Comment by saidachmiz on Recommendation Features on LessWrong · 2019-06-15T03:24:09.174Z · score: 2 (1 votes) · LW · GW

Why not insert a note at the top of the post?

Make it stand out, visually, like put it in a “moderator note” box or whatever, and you’re good to go…

Ideally I would want a way for things like this to happen organically driven by user activity instead of moderator intervention

I confess I don’t really know what you mean by this.

Comment by saidachmiz on Recommendation Features on LessWrong · 2019-06-15T02:04:15.292Z · score: 6 (4 votes) · LW · GW

Currently, a post can appear in this section if … it has a score of at least 50 …

Is this adjusted by post date? Posts from before the relaunch are going to have much less karma, on average (and as user karma grows and the karma weight of upvotes grows with it, average karma will increase further). A post from last month with 50 karma, and a post from 2010 with 50 karma, are really not comparable…

We manually exclude posts if they aged poorly in a way that wouldn’t be captured by votes at the time—for example … reporting of studies that later failed to replicate

I wish you wouldn’t!

It seems to me that it would be extremely valuable to include posts like this in the recommendations—but annotate them with a note that the research in question hasn’t replicated. This would, I think, have an excellent pedagogic effect! To see how popular, how highly-upvoted, a study could be, while turning out later to have been bunk—think of the usefulness as a series of naturalistic rationality case studies! (Likewise useful would be to examine the comment threads of these old posts; did any of the commentariat suspect anything amiss? If so, what heuristics did they use? Did certain people consistently get it right, and if so, how? etc.) The new recommendation engine could do great good, in this way…

Comment by saidachmiz on Spiracular's Shortform Feed · 2019-06-15T01:35:53.174Z · score: 3 (2 votes) · LW · GW

This certainly seems reasonable.

Comment by saidachmiz on Spiracular's Shortform Feed · 2019-06-15T01:35:29.031Z · score: 4 (3 votes) · LW · GW

If someone set up a deal where they extract some money and kidnap 2 far-away people in exchange for letting 1 nearby person go, someone with physical-distance-discounting might keep making this deal, and the only thing the kidnappers would need to use to exploit it is a truck. If view through a camera is enough to abridge the physical distance, it’s even easier to exploit.

I’ve two things to say to this.

First, the moral view that you imply here seems to me to be an awful caricature. As I say in my other comment, I should be very curious to see some real-world examples of people espousing, and defending, this sort of view. To me it seems tremendously implausible, like you’ve terribly misunderstood the views of the people you’re disagreeing with. (Of course, it’s possible I am wrong; see below. However, even if such views exist—after all, it is possible to find examples of people espousing almost any view, no matter how extreme or insane—do you really suggest that they are at all common?!)

Second… any moral argument that must invoke such implausible scenarios as this sort of “repeatedly kidnap and then transport people, forcing the mark to keep paying money over and over, a slave to his own comically, robotically rigid moral views” story in order to make its point is, I think, to be automatically discounted in plausibility. Yes, if things happened in this way, and if someone were to react in this way, that would be terrible, but of course nothing like this could ever take place, for a whole host of reasons. What are the real-world justifications of your view?

… I’ve also heard of some really awful real-world cases, especially if phone calls or video count as abridging distance (for a lot of people, it seems to). The ease and severity of exploitation of it definitely contributes to why, in the modern world, I don’t just call it unintuitive, I call it straight-up broken.

Now this is interesting! Could you cite some such cases? I think it would be quite instructive to examine some case studies!

Comment by saidachmiz on Spiracular's Shortform Feed · 2019-06-15T01:25:30.994Z · score: 2 (1 votes) · LW · GW

I think you’re inferring some things that aren’t there. I’m not claiming an agent-neutral morality. I’m claiming that “physical proximity,” in particular, being a major factor of moral worth in-and-of-itself never really made sense to me, and always seemed a bit cringey.

I see. It seems to me that the more literally you interpret “physical proximity”, the more improbable it is to find people who consider it “a major factor of moral worth”.

Is your experience different? Do you really find that people think that literal physical proximity matters morally? Not cultural proximity, not geopolitical proximity, not proximity in communication-space or proximity in interaction-space, not even geographical proximity—but quite literal Euclidean distance in spacetime? If so, then I would be very curious to see an example of someone espousing such a view—and even more curious to see an example of someone explicitly defending it!

Whereas if you begin to take the concept less literally (following something like the progression I implied above), then it is increasingly difficult to see why it would be “cringey” to consider it a “major factor” in moral considerations. If you disagree with that, then—my question stands: why?

When I limit myself to looking at charity and not alliance-formation, all types of proximity-encouraging motives get drowned out by the sheer size of the difference in magnitude-of-need and the drastically-increased buying power of first-world money in parts of the third world. I think that’s a pretty common feeling among EAs.

Yes, perhaps that is so, but (as you correctly note) this has to do with proximity as a purely instrumental factor in how to implement your values. It does not do much to address the matter of proximity as a factor in what your values are (that is: who, and what, you value, and how much).

Comment by saidachmiz on Discourse Norms: Moderators Must Not Bully · 2019-06-15T00:40:16.262Z · score: 25 (14 votes) · LW · GW

This obviously doesn’t apply to Nazis and the like, which should IMO be banned outright.

You understand, of course, that these four words are doing all of the work in your post, yes?

Comment by saidachmiz on Spiracular's Shortform Feed · 2019-06-14T09:11:49.805Z · score: 2 (1 votes) · LW · GW

The standard forms are obviously super-broken (there’s a lot of good reasons why EA partially builds itself as a strong reaction against that; a lot of us cringe at “local is better” charity speak unless it gets tied into “capacity building”).

Could you say more about this? What do you consider “obviously super-broken” about (if I understand you correctly) moralities that are not agent-neutral and equal-consideration? Why does “local is better” make you cringe?

Comment by saidachmiz on Coercive Formats · 2019-06-12T22:09:49.424Z · score: 6 (3 votes) · LW · GW

But I think I agree with the upthread point that open world games often aren’t as open as you’d like.

For sure. (An example of the phenomenon: S.T.A.L.K.E.R.: Shadow of Chernobyl, which has procedurally generated ‘sidequests’ which reward resources but have absolutely no relationship to the plot whatsoever, and also a single, almost perfectly linear set of plot missions. It’s a great game, and it’s certainly “open world” in the sense that you can (mostly) go wherever you like, whenever you like, but nevertheless it’s a plot railroad, period.)

But whenever someone makes a generalization and says that they’ve never seen counterexamples, and I know that there are counterexamples, then I think it’s critically important to (a) recall and make salient their existence (lest we mentally elide the generalization into a universalization), and (b) consider what features of the counterexamples allow them to be such—and what the pattern of those features tells us about the general trend.

Comment by saidachmiz on Coercive Formats · 2019-06-12T20:36:57.283Z · score: 3 (2 votes) · LW · GW

“open world” in games mostly refers to shams. In every instance I’ve seen, the choice is between “whatever forwards the plot” (no choice) and “something random” (false choice).

Escape Velocity

Comment by saidachmiz on How much does neatly eating matter? What about other food manners? · 2019-06-12T00:51:49.947Z · score: 5 (2 votes) · LW · GW

Sure, that too. It’s a set of complementary points:

  1. Sandwich eating skill is not important, and…
  2. Sandwich eating skill trades off against important things, which means…
  3. If you’ve optimized for it, you must have neglected important things, which means that…
  4. Unusually high sandwich eating skill is a signal of sub-optimal important skills, which suggests that…
  5. You should avoid having unusually high sandwich eating skill, lest you send an undesirable signal.

Comment by saidachmiz on How much does neatly eating matter? What about other food manners? · 2019-06-11T20:33:31.130Z · score: 6 (3 votes) · LW · GW

But the author of the linked post clearly says that the “sandwich eating skill” worked against these executives. Their firm was passed over—and the strong suggestion is that their impeccable table manners were a not-insignificant part of the reason! The post is about the lack of importance of “sandwich eating skill”!

Comment by saidachmiz on Coercive Formats · 2019-06-10T19:30:33.801Z · score: 10 (3 votes) · LW · GW

While they may make it easy to create different views of information chunks, what’s the benefit of such pages if no other users can find them? Having an official, well put together* page hierarchy which starts at the homepage and includes all pages is pretty valuable.

I concur, but again, this is not a problem with wiki technology any more than it would be a problem with book technology if I were to publish a textbook without a table of contents or an index.

An analogy: suppose I say that a knife is uniquely good at cutting things (compared to other tools like hammers, chisels, etc.), and you protest that you have only ever seen knives used to smash things, whereas it’s hammers that you’ve seen used to cut things (with the claw side of a claw hammer, say). That would hardly be a sensible reply, yes? It’s simply that you’re not using knives (or seeing them used) correctly!

In short, you’re saying that if the power of a tool isn’t actually used, then it doesn’t do any good. I agree entirely! The answer is to go ahead and use it, not to discard the tool; knives, books, and wikis are, in fact, quite powerful, even if some foolish people use them to smash things, publish ones without indexes, or fail to create publicly visible index pages.

We ought to learn from the folly of others—not be discouraged by it.

Comment by saidachmiz on Coercive Formats · 2019-06-10T19:23:59.277Z · score: 2 (1 votes) · LW · GW

I’d say something wikis miss is not having posts/articles contain a list of pages which link to them. (If not in the sense of not having the tech, then in not making it obvious: UI.)

FYI, this is an artifact of the specific wiki software that you’re likely familiar with (namely, MediaWiki, on which Wikipedia is built). Other, better wiki platforms have easily accessible lists of backlinks (see “Backlinks” at the top right).

Comment by saidachmiz on A Plausible Entropic Decision Procedure for Many Worlds Living, Round 2 · 2019-06-10T04:47:19.552Z · score: 2 (1 votes) · LW · GW

Meta: this should probably be a link post. (Also, why not cross-post the whole text?)

Comment by saidachmiz on Coercive Formats · 2019-06-10T03:37:02.270Z · score: 2 (1 votes) · LW · GW

It seems to me that a graph-theoretic perspective would be fruitful to take, here…

Comment by saidachmiz on Coercive Formats · 2019-06-09T20:04:44.353Z · score: 3 (2 votes) · LW · GW

Basically, my point (which, I think, you have now understood, so I am stating it explicitly for the public benefit) is this:

The problems you outline are not downsides of wikis, the technology—they are downsides of Wikipedia, the project (and some, but not all, other wikis that are run in a similar manner). In fact wikis, the technology, are not uniquely bad, but uniquely good at solving these problems (since they so easily enable the creation of arbitrarily many different views on any given set of information-chunks, or any subset thereof)!

Comment by saidachmiz on Coercive Formats · 2019-06-09T20:00:17.169Z · score: 3 (2 votes) · LW · GW

It’s linear. There’s a clear path through it.

It’s linear because I created views on the pages which present them in a linear order—which is my point. The pages are also hyperlinked together in a chaotic manner, as any other wiki is; and of course you can search it, which ditto.

You read it by reading, scrolling down, clicking (to go down a level), and when you’ve read that level you go back up and continue reading.

(You can also use the next-page / previous-page navigation buttons, which is even more linear.)

Which anyone can create an account on, edit, and make new posts/articles? The fact that it looks like a book, rather than a ghastly mess led me to believe otherwise.

The Sequence posts themselves are not publicly editable, for obvious reasons. The Talk pages (see ‘Talk’ link in top left corner) are publicly editable—with no account creation necessary. You can’t create new pages—but that’s only because I’ve got the permissions set that way. A change of configuration—a moment’s work—and that is enabled, too.

Comment by saidachmiz on Coercive Formats · 2019-06-09T05:50:48.323Z · score: 4 (2 votes) · LW · GW

Oh, and by the way—

One of the downsides to wikis (or parts of wikis) is, what if you wanted to read them? Is there a good order? Usually*, no. … *I’m not aware of any counter-examples.

ReadTheSequences.com is, in fact, a wiki.

Comment by saidachmiz on Coercive Formats · 2019-06-09T05:49:07.467Z · score: 4 (2 votes) · LW · GW

One of the downsides to wikis (or parts of wikis) is, what if you wanted to read them? Is there a good order? Usually*, no.

You are conflating multiple different issues here, which must be examined separately or else any conclusions you reach will make no sense.

The first issue is that of content type. Many pages on Wikipedia, and on many other wikis, are simply not the kind of thing that you would “read”. They might be lists, or disambiguation pages, or category pages, or reference pages, or summaries of other pages, or meta pages, or media galleries, or blog-type updates, or “latest [whatever]” pages that communicate the status of something, or pages designed to be transcluded as components into other pages, or pages designed to be disassembled by transclusion and viewed elsewhere, or pages that implement some dynamic functionality, or data pages, or logs, or “Talk” pages, or profile pages, etc., etc., etc. Asking “what order should I read these pages in” is completely nonsensical when it comes to pages of any of these types.

The second issue is that of grouping. In what order should you read the following set of Wikipedia pages:

This, too, is a nonsense question. You could read them in any order you like, because they’re on completely disparate, unrelated topics. There just is not any kind of ordering that you can impose on them and say “there, now these three pages form a natural progression; certainly you shouldn’t read the last one first…”.

And this problem isn’t unique to Wikipedia. Even topical wikis often have pages that span a far wider range of subjects than would make sense to arrange into any kind of ordering.

And the third issue is that of views on information. Let us suppose that Wikipedia contains some subset of pages which, together, constitute the contents of a good cookbook (a bunch of recipe pages, some pages about cooking techniques, etc.). But these pages aren’t arranged in any kind of order…

… but what is stopping you from putting them in order? You could create a list of pages, which, if read in order, would make up the cookbook (or whatever). You could even make such a list… as a wiki page. Via transclusion, or such tools as wiki trails, you can assemble the list of pages into a single page, or into a structure which behaves just like an ebook would (or a web book, like ReadTheSequences.com or Butterick’s Practical Typography). And—importantly—each of those pages would still be a wiki page; it would still be browsable in the usual way, could be included in other “books”, etc.!

So, you see, to ask “in what order should I read this wiki” is simply to fundamentally misunderstand the nature of wikis—as well as their tremendous power…

Comment by saidachmiz on Coercive Formats · 2019-06-09T05:25:02.365Z · score: 5 (3 votes) · LW · GW

I skimmed the linked document, and didn’t find a clear description of what these terms mean. Maybe I missed it. Could you (or someone) summarize? (Or, at least, give a page reference for where in the PDF the terms are defined?)

Comment by saidachmiz on Arbital scrape · 2019-06-08T07:45:43.284Z · score: 3 (2 votes) · LW · GW

Wait… really? I thought the notifications were just for top-level comments?? Is that false…?

Comment by saidachmiz on Arbital scrape · 2019-06-08T06:42:12.016Z · score: 3 (2 votes) · LW · GW

You may want to post a comment saying this as a top-level reply, so that emmab will be notified of it, yes?

Comment by saidachmiz on Drowning children are rare · 2019-06-07T01:22:50.909Z · score: 13 (4 votes) · LW · GW

The question of “why do so many things turn into aesthetic identity movements” is an interesting and important one, and, through study of this (and related) questions, it seems quite tractable to have a much better shot at creating something that produces long-term value, than by not studying those questions.

I agree that studying this is quite important. (If, of course, such an endeavor is entered into with the understanding that everyone around the investigators, and indeed the investigators themselves, have an interest in subverting the investigation. The level of epistemic vigilance required for the task is very unusually high.)

It is not obvious to me that further attempts at successfully building the object-level structure (or even defining the object-level structure) are warranted, prior to having substantially advanced our knowledge on the topic of the above question. (It seems like you may already agree with me, on this; I am not sure if I’m interpreting your comment correctly.)

Comment by saidachmiz on Drowning children are rare · 2019-06-07T00:54:36.731Z · score: 16 (4 votes) · LW · GW

I agree with your analysis of the situation, but I wonder whether it’s possible to replace EA with anything that won’t turn into exactly the same thing. After all, the EA movement is the result of some people noticing that much of existing charity is like this, and saying “we should replace that with something very, very different”…

Comment by saidachmiz on Arbital scrape · 2019-06-07T00:35:27.344Z · score: 24 (10 votes) · LW · GW

This is now hosted at https://www.obormot.net/arbital/ .

Comment by saidachmiz on Arbital scrape · 2019-06-07T00:24:46.865Z · score: 5 (2 votes) · LW · GW

If there’s interest let me know as I may tidy up and open source my code.

Please do!

Comment by saidachmiz on Major Update on Cost Disease · 2019-06-05T18:29:19.563Z · score: 4 (2 votes) · LW · GW

Your images don’t work because you’re not actually linking to images—you’re linking to pages on Imgur.

Your first link should be: https://i.imgur.com/HGvfMLx.png

And your second link should be: https://i.imgur.com/FOMPCw2.png
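(If the post is written in the Markdown editor, the corresponding image syntax would be something like the following; the alt texts here are just placeholders:)

```markdown
![first figure](https://i.imgur.com/HGvfMLx.png)

![second figure](https://i.imgur.com/FOMPCw2.png)
```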

Comment by saidachmiz on Can movement from Conflict to Mistake theorist be facilitated effectively? · 2019-06-03T22:09:18.243Z · score: 14 (8 votes) · LW · GW

Can a technique be created to facilitate this transition?

The first part of any such “technique” would have to be some demonstration that “mistake theory” is more correct than “conflict theory”—or, indeed, that either of them, or even the dichotomy itself, is a sensible and accurate way of describing the world. Are you in possession of such a demonstration? If so, I would be interested in seeing it!

Comment by saidachmiz on Dark Side Epistemology · 2019-06-02T07:27:04.868Z · score: 4 (2 votes) · LW · GW

When you have reached the point where you’re considering whether your opponents are literally zombies without any subjective consciousness… could it be time to consider whether your own thinking has gone wrong somewhere?

Comment by saidachmiz on The Fundamental Theorem of Asset Pricing: Missing Link of the Dutch Book Arguments · 2019-06-02T00:55:51.719Z · score: 8 (6 votes) · LW · GW

There are fairly elementary arguments that, in the absence of uncertainty, any preferences not described by a utility function are problematic—this is the circular preferences argument.

No, it is not the circular preferences argument!

Arguments against circularity of preferences—that is, against violations of the axiom of transitivity—are all well and good. But (in the VNM formalism) preferences cannot be described by a utility function if they violate any of the axioms—not just transitivity! Transitive preferences can fail to be describable by a utility function!

I wrote a comment about this on the post of Eliezer’s which you linked. It would really be very nice if we did not perpetuate misconceptions after they’ve been pointed out.

Comment by saidachmiz on Site Guide: Personal Blogposts vs Frontpage Posts · 2019-06-01T02:22:47.626Z · score: 11 (4 votes) · LW · GW

[1] We will remove material of the following types:

To a very limited degree, material that seriously threatens LessWrong’s long-term values, mission and culture.

Could you say more about what is meant by this?

Comment by saidachmiz on Site Guide: Personal Blogposts vs Frontpage Posts · 2019-06-01T02:21:59.819Z · score: 2 (1 votes) · LW · GW

Typo/etc. thread:

All of your posts and comments are visible under your user page which you can be treated as your personal blog hosted LessWrong

Bit of an editing snafu here, I think?

Comment by saidachmiz on Editor Mini-Guide · 2019-05-31T22:36:23.175Z · score: 4 (2 votes) · LW · GW

My apologies; I sometimes lose track of what sorts of technical knowledge are common among the Less Wrong crowd.

“Inline” and “block” are the two types (layout-wise) of elements in HTML (and many other systems). More or less, this means:

  • An “inline” element appears in the flow of text (a hyperlink, for example, is an inline element)
  • A “block” element is like its own paragraph (i.e., it’s like a block of text)

All of this is not specific to Less Wrong’s editor, or to any editor or website or anything; it’s just how HTML works.

So an inline image will be inserted right into the middle of a paragraph. A block image will be, basically, its own paragraph.

In a Markdown editor (whether LW’s or GW’s), whether an image is inline or block depends on whether the image syntax is in the middle of some other text, or whether it’s on a line by itself.
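A minimal Markdown sketch of the difference (the URLs are placeholders, not real images):

```markdown
Some text with an inline image ![tiny icon](https://example.com/icon.png) in the middle of a sentence.

![A block image](https://example.com/figure.png)

The second image is on a line by itself, so it is laid out as a block, i.e. as its own paragraph.
```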

In Less Wrong’s draft.js editor… well, see this comment by habryka.

I hope that helps. Feel free to ask me to explain further if anything’s not clear!

Comment by saidachmiz on Editor Mini-Guide · 2019-05-31T20:43:41.625Z · score: 2 (1 votes) · LW · GW

Note that on GreaterWrong, you can always edit any comment in Markdown (because there’s just the one editor); and also, if you click the ‘$’ button on the editing toolbar, it’ll insert the markup for a LaTeX formula for you.

Comment by saidachmiz on Editor Mini-Guide · 2019-05-31T20:41:50.917Z · score: 4 (2 votes) · LW · GW

The difference is that the first image is a block-layout image, and the second is inline.

Comment by saidachmiz on Drowning children are rare · 2019-05-31T20:19:53.773Z · score: 6 (5 votes) · LW · GW

I’m afraid I completely disagree, and in fact find this view somewhat ridiculous.

“Giving explicitly higher weighting to the importance of people and causes located near to oneself” (the other clause in that sentence strikes me as tendentious and inaccurate…) is not, in fact, complex. It is a perfectly ordinary—and perfectly sensible—way of thinking about, and valuing, the world. That doing good in contexts distant from oneself (both in physical and in social/culture space) is quite difficult (the problems you allude to are indeed very severe, and absolutely do not warrant a casual dismissal) merely turns the aforementioned perspective from “perfectly sensible” to “more sensible than any other view, absent some quite unusual extenuating circumstances or some quite unusual values”.

Now, it is true that there is a sort of “valley of bad moral philosophy”, where if you go in a certain philosophical direction, you will end up abandoning good sense, and embracing various forms of “globalist” perspectives on altruism (including the usual array of utilitarian views), until you reach a sufficient level of philosophical sophistication to realize the mistakes you were making. (Obviously, many people never make it out of the valley at all—or at least they haven’t yet…) So in that sense, it requires ‘more than a “small amount of thinking”’ to get to a “localist” view. But… another alternative is to simply not make the mistakes in question in the first place.

Finally, it is a historical and terminological distortion (and a most unfortunate one) to take “effectiveness” (in the context of discussions of charity/philanthropy) to mean only effectiveness relative to a moral value. There is nothing at all philosophically inconsistent in selecting a goal (on the basis, presumably, of your values), and then evaluating effectiveness relative to that goal. There is a good deal of thinking, and of research, to be done in service of discovering what sort of charitable activity most effectively serves a given goal; should someone who thinks and researches thus, and engages in charitable work or giving on the basis of the conclusions reached, be described as “giv[ing] money locally to things that feel good, without reflecting much”? That seems nonsensical to me…

Comment by saidachmiz on Drowning children are rare · 2019-05-29T15:46:14.560Z · score: 6 (6 votes) · LW · GW

For most people, there are going to be something like three choices:

  • do nothing (other than, like, having fun hobbies and bolstering your own career for your own gain)

  • give money locally to things that feel good, without reflecting much,

  • spend some small amount of thinking about where to give on the global scale

Why is it impossible to give money locally, yet spend some small amount of thinking about where/how to do so? Is effectiveness incompatible with philanthrolocalism…?

Comment by saidachmiz on What is required to run a psychology study? · 2019-05-29T08:22:58.354Z · score: 3 (2 votes) · LW · GW

There are, like, hundreds of tools to do this -both finding people, and nailing the questions. Google Survey samples currently best across US (specifically, it had predicted the 2016 election results successfully).

Could you list some good ones (other than Google Surveys)?

Comment by saidachmiz on What is required to run a psychology study? · 2019-05-29T08:21:46.350Z · score: 4 (2 votes) · LW · GW

Have you done this? If so, what were the questions, what were the answers, and are they published anywhere?

Comment by saidachmiz on Evidence for Connection Theory · 2019-05-28T19:50:19.970Z · score: 13 (6 votes) · LW · GW

This document appears to be from 2011. Does anyone know whether Leverage Research still endorses this? Are they still working on this “Connection Theory”? (What are they up to, in general…?)

Comment by saidachmiz on Comment section from 05/19/2019 · 2019-05-25T22:53:40.589Z · score: 6 (3 votes) · LW · GW

Note for any GreaterWrong users who might have a similar question:

When viewing a post, you’ll see an icon under the post name, at the left. It indicates what kind of post it is, e.g.:

Screenshot of a personal blog post

Screenshot of a frontpage post

Screenshot of a curated post

Screenshot of a post in Meta

Screenshot of an Alignment Forum post

(In order, those are: personal, frontpage, curated, Meta, Alignment Forum.)

What is this new (?) Less Wrong feature? (“hidden related question”) · 2019-05-15T23:51:16.319Z · score: 13 (4 votes)

History of LessWrong: Some Data Graphics · 2018-11-16T07:07:15.501Z · score: 71 (23 votes)

New GreaterWrong feature: image zoom + image slideshows · 2018-11-04T07:34:44.907Z · score: 39 (9 votes)

New GreaterWrong feature: anti-kibitzer (hides post/comment author names and karma values) · 2018-10-19T21:03:22.649Z · score: 47 (14 votes)

Separate comments feeds for different post listings views? · 2018-10-02T16:07:22.942Z · score: 14 (6 votes)

GreaterWrong—new theme and many enhancements · 2018-10-01T07:22:01.788Z · score: 38 (9 votes)

Archiving link posts? · 2018-09-08T05:45:53.349Z · score: 56 (19 votes)

Shared interests vs. collective interests · 2018-05-28T22:06:50.911Z · score: 21 (11 votes)

GreaterWrong—even more new features & enhancements · 2018-05-28T05:08:31.236Z · score: 64 (14 votes)

Everything I ever needed to know, I learned from World of Warcraft: Incentives and rewards · 2018-05-07T06:44:47.775Z · score: 33 (12 votes)

Everything I ever needed to know, I learned from World of Warcraft: Goodhart’s law · 2018-05-03T16:33:50.002Z · score: 81 (21 votes)

GreaterWrong—more new features & enhancements · 2018-04-07T20:41:14.357Z · score: 23 (6 votes)

GreaterWrong—several new features & enhancements · 2018-03-27T02:36:59.741Z · score: 44 (10 votes)

Key lime pie and the methods of rationality · 2018-03-22T06:25:35.193Z · score: 59 (16 votes)

A new, better way to read the Sequences · 2017-06-04T05:10:09.886Z · score: 19 (17 votes)

Cargo Cult Language · 2012-02-05T21:32:56.631Z · score: 1 (32 votes)