Comments
“A Space Odyssey” is not watchable without discussing historical context
… why? I’ve watched this movie, and I… don’t think I’m aware of any special “historical context” that was relevant to it. (Or, at any rate, I don’t know what you mean by this.) It seemed to work out fine…
The main problem with your approach is not that it is counterintuitive (although it is, and more so than ours!), but that there is no way to return to “auto” mode via the site’s UI![1] Having clicked the mode selector, how do I go back to “no, just use my browser preference”? A two-state selector with a hidden, ephemeral third state, which cannot be retrieved once abandoned, is, I’m afraid, the worst approach…
You can go into your browser’s dev tools and delete the localStorage item, or clear all your saved data via the browser’s preferences. (Well, on desktop, anyway; on mobile—who knows? Not the former, at least, and how many mobile users even know about the latter? And the latter is anyhow an undesirable method!) ↩︎
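The missing third state could be restored with a tri-state selector. A minimal sketch, assuming a hypothetical `mode` key in localStorage and a three-step cycle (neither of which is the site’s actual scheme):

```javascript
// Tri-state dark-mode selector: "auto" -> "light" -> "dark" -> "auto".
// Returning to "auto" deletes the stored key, so the user can always get
// back to "just use my browser preference" without opening dev tools.
// (Key name and cycle order are illustrative assumptions.)
const CYCLE = ["auto", "light", "dark"];

function currentMode(storage) {
  // Absence of the key *is* the "auto" state.
  return storage.getItem("mode") ?? "auto";
}

function cycleMode(storage) {
  const next = CYCLE[(CYCLE.indexOf(currentMode(storage)) + 1) % CYCLE.length];
  if (next === "auto") {
    storage.removeItem("mode"); // the ephemeral state becomes retrievable
  } else {
    storage.setItem("mode", next);
  }
  return next;
}
```

In a browser one would pass `window.localStorage`; any object with `getItem`/`setItem`/`removeItem` works for testing.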
Do you think a 3-state dark mode selector is better than a 1-state (where “auto” is the only state)? My website is 1-state, on the assumption that auto will work for almost everyone and it lets me skip the UI clutter of having a lighting toggle that most people won’t use.
Gwern discusses this on his “Design Graveyard” page:
Auto-dark mode: a good idea but “readers are why we can’t have nice things”.
OSes/browsers have defined a ‘global dark mode’ toggle the reader can set if they want dark mode everywhere, and this is available to a web page; if you are implementing a dark mode for your website, it then seems natural to just make it a feature and turn on iff the toggle is on. There is no need for complicated UI-cluttering widgets with complicated implementations. And yet—if you do do that, readers will regularly complain about the website acting bizarre or being dark in the daytime, having apparently forgotten that they enabled it (or never understood what that setting meant).
A widget is necessary to give readers control, although even there it can be screwed up: many websites settle for a simple negation switch of the global toggle, but if you do that, someone who sets dark mode at day will be exposed to blinding white at night… Our widget works better than that. Mostly.
Is it possible that someday dark-mode will become so widespread, and users so educated, that we could quietly drop the widget? Yes, even by 2023 dark-mode had become quite popular, and I suspect that an auto-dark-mode would cause much less confusion in 2024 or 2025. However, we are stuck with the widget—once we had a widget, the temptation to stick in more controls (for reader-mode and then disabling/enabling popups) was impossible to resist, and who knows, it may yet accrete more features (site-wide fulltext search?), rendering removal impossible.
(The site-wide fulltext search feature has since been added, of course.)
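The failure mode Gwern describes—a bare negation of the OS toggle—can be made concrete with a sketch. These are pure functions with the OS preference passed in as a boolean; the names are mine, not Gwern’s implementation:

```javascript
// "Negation switch": the widget stores only "flip whatever the OS says".
// If you flip to dark during the day (OS: light), then at night the OS
// switches to dark, the stored flip still applies, and you get blinding
// white.
function negationSwitchTheme(osPrefersDark, flipped) {
  return osPrefersDark !== flipped ? "dark" : "light";
}

// Tri-state widget: an explicit choice overrides the OS; "auto" defers
// to it (in a browser, via matchMedia("(prefers-color-scheme: dark)")).
function triStateTheme(osPrefersDark, choice /* "auto" | "light" | "dark" */) {
  return choice === "auto" ? (osPrefersDark ? "dark" : "light") : choice;
}
```

With the negation switch, a daytime flip to dark (`osPrefersDark = false, flipped = true`) yields dark; at night the same stored flip yields light—exactly the “blinding white at night” problem. The tri-state version stays dark in both cases.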
Not bad at all! Needs some work on the details and some bug fixes, but—really not bad! The dropcaps, in particular, are well done; and the overall theme is elegant.
I’m just going to link the comment I wrote the last time you mentioned that Rethink Priorities report. That report continues to be of very little use in supporting such arguments as you present here.
I in fact don’t use Google very much these days, and don’t particularly recommend that anyone else do so, either.
(If by “google” you meant “search engines in general”, then that’s a bit different, of course. But then, the analogy here would be to something like “carefully select which LLM products you use, try to minimize their use, avoid the popular ones, and otherwise take all possible steps to ensure that LLMs affect what you see and do as little as possible”.)
The most important thing is “There is a small number of individuals who are paying attention, who you can argue with, and if you don’t like what they’re doing, I encourage you to write blogposts or comments complaining about it. And if your arguments make sense to me/us, we might change our mind. If they don’t make sense, but there seems to be some consensus that the arguments are true, we might lose the Mandate of Heaven or something.”
There’s not, like, anything necessarily wrong with this, on its own terms, but… this is definitely not what “being held accountable” is.
It happening at all already constitutes “going wrong”.
This particular sort of comment doesn’t particularly move me.
All this really means is that you’ll just do with this whatever you feel like doing. Which, again, is not necessarily “wrong”, and really it’s the default scenario for, like… websites, in general… I just really would like to emphasize that “being held accountable” has approximately nothing to do with anything that you’re describing.
As far as the specifics go… well, the bad effect here is that instead of the site being a way for me to read the ideas and commentary of people whose thoughts and writings I find interesting, it becomes just another purveyor of AI “extruded writing product”. I really don’t know why I’d want more of that than there already is, all over the internet. I mean… it’s a bad thing. Pretty straightforwardly. If you don’t think so then I don’t know what to tell you.
All I can say is that this sort of thing drastically reduces my interest in participating here. But then, my participation level has already been fairly low for a while, so… maybe that doesn’t matter very much, either. On the other hand, I don’t think that I’m the only one who has this opinion of LLM outputs.
Do you not use LLMs daily?
Not even once.
In general, I think Gwern’s suggested LLM policy seems roughly right to me.
First of all, even taking what Gwern says there at face value, how many of the posts here that are written “with AI involvement” would you say actually are checked, edited, etc., in the rigorous way which Gwern describes? Realistically?
Secondly, when Gwern says that he is “fine with use of AI in general to make us better writers and thinkers” and that he is “still excited about this”, you should understand that he is talking about stuff like this and this, and not about stuff like “instead of thinking about things, refining my ideas, and writing them down, I just asked a LLM to write a post for me”.
Approximately zero percent of the people who read Gwern’s comment will think of the former sort of idea (it takes a Gwern to think of such things, and those are in very limited supply), rather than the latter.
The policy of “encourage the use of AI for writing posts/comments here, and provide tools to easily generate more AI-written crap” doesn’t lead to more of the sort of thing that Gwern describes at the above links. It leads to a deluge of un-checked crap.
I welcome being held accountable for this going wrong in various ways.
It happening at all already constitutes “going wrong”.
Also: by what means can you be “held accountable”?
If this is true, then it’s a damning indictment of Less Wrong and the authors who post here, and is an excellent reason not to read anything written here.
Well, let’s see. Calibri is a humanist sans; Gill Sans is technically also humanist, but rather more geometric in design. Geometric sans fonts tend to be less readable when used for body text.
Gill Sans has a lower x-height than Calibri. That (obviously) is the cause of all the “the new font looks smaller” comments.
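The size mismatch can be quantified: to make two fonts look the same size, scale the font-size by the ratio of their x-heights (this is the idea behind the CSS font-size-adjust property). A sketch with illustrative values—roughly 0.466 em is a commonly cited figure for Calibri, while the Gill Sans number below is my assumption, not a measurement:

```javascript
// Apparent size is governed by x-height, not nominal font-size.
// To swap font B in for font A at the same apparent size:
//   sizeB = sizeA * (xHeightA / xHeightB)
// x-heights are fractions of an em; the Gill Sans figure is an
// illustrative guess, not a measured value.
const X_HEIGHT = { calibri: 0.466, gillSans: 0.43 };

function matchedFontSize(sizePx, from, to) {
  return sizePx * (X_HEIGHT[from] / X_HEIGHT[to]);
}
```

With these (assumed) numbers, `matchedFontSize(16, "calibri", "gillSans")` comes out a bit over 17px—i.e., Gill Sans would have to be set larger to look as big as Calibri at 16px, which is why the swap reads as “the new font looks smaller”.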
(A side-by-side comparison of the fonts, for anyone curious, although note that this is Gill Sans MT Pro, not Gill Sans Nova, so the weight [i.e., stroke thickness] will be a bit different than the version that LW now uses.)
Now, as far as font rendering goes… I just looked at the site on my Windows box (adjusting the font stack CSS value to see Gill Sans Nova again, since I see you guys tweaked it to give Calibri priority)… yikes. Yeah, that’s not rendering well at all. Definitely more blurry than Calibri. Maybe something to do with the hinting, I don’t know. (Not really surprising, since Calibri was designed from the beginning to look good on Windows.) And I’ve got a hi-DPI monitor on my Windows machine…
Interestingly, the older version of Gill Sans (seen in the demo on my wiki, linked above) doesn’t have this problem; it renders crisply on Windows. (Note that this is not the flawed, broken-kerning version of the font that comes with Macs!)
I also notice that the comment font size is set to… 15.08px. Seems weird? Bumping it up to 16px improves things a bit, although it’s still not amazing.
If you can switch to the older (but not broken) version of Gill Sans, that’d be my recommendation.
If you can’t… then one option might be to check out one of the many similar fonts to see if perhaps one of them renders better on Windows while still having matching metrics.
I am confident the average user experience would become worse if you just replaced the comment font with the body font
Yeah, I agree with that, but that’s because the post body font wasn’t chosen with suitability for comments in mind. If you pick, to begin with, a font that works for both, then it’ll work for both.
… of course, if you don’t think that any of the GW themes’ fonts work for both, then never mind, I guess. (But, uh, frankly I find that to be a strange view. But no accounting for taste, etc., so I certainly can’t say it’s wrong, exactly.)
You definitely would not want the comment font be the same as the post font.
This… seems straightforwardly false? Every one of GreaterWrong’s eight themes uses a single font for both posts and comments, and it doesn’t cause any problems. (And it’s a different font for each theme!)
What is a “deontic mesh”? I am not familiar with this term; do you have a link that explains it?
The bullet-biting here is just “‘real numbers’ are fake”. That makes most of the questions you cite moot.
“Real numbers don’t exist” seems like a good solution to me.
So it’s like… the negation of the diagonal supposedly is there, but… not at any specific place?
Why should this be a problem? On this view, there is no “the diagonal”; there are only diagonals of particular tables for particular values of n, which each have their own negations.
history clearly teaches us that civilizations and states collapse (on timescales of centuries) and the way to bet is that ours will as well, but it’s kind of insane hubris to think that this can be prevented;
It seems like it makes some difference whether our civilization collapses the way that the Roman Empire collapsed, the way that the British Empire collapsed, or the way that the Soviet Union collapsed. “We must prevent our civilization from ever collapsing” is clearly an implausible goal, but “we should ensure that a successor structure exists and is not much worse than what we have now” seems rather more reasonable, no?
I concur with @johnswentworth’s comment; I read approximately as far as he did, and came to the same conclusion. I would also like to see the “~1.5 good and important points” listed!
Doesn’t rule consequentialism (as opposed to act consequentialism) solve all of these problems (and also all[1] other problems that people sometimes bring up as alleged “arguments against consequentialism”)?
Approximately all. ↩︎
Would you mind posting that information here? I am also interested (as, I’m sure, are others).
Meta: OP and some replies occasionally misspell the example billionaire’s surname as “Arnalt”; it’s actually “Arnault”, with a ‘u’.
Uh-huh, and what about the people who aren’t front-end developers, either, but only “advocates”, “experts” (but not the kind that write code), etc.?
To help with projects like “an open-source screen reader”, it is not necessary to be able to write C++ (or whatever) code. You can also:
- file well-written and well-documented bug reports, including testing with various setups, detailed replication steps, etc.
- survey alternate software options, cataloguing which of them correctly handle the relevant test cases, and how
- find people who do have the relevant expertise and may be willing to contribute code, and connect them with the maintainers
- contribute funding to the project and/or help to convince other people to contribute funding
- other (i.e., “reach out to the maintainer(s) to ask them what would help get the bug fixed, then do that”)
If even one out of every ten accessibility advocates/experts/etc. did these things, then all these bugs would’ve been fixed years ago.
I agree that allocation is hard and in particular that if regulations go overboard with trying to ensure that there will always be more handicapped spots than there are people who need them, there’s a point at which adding spots becomes net negative.
Indeed. The difficult question, of course, is: what exactly constitutes “going overboard”, here? How often is it acceptable for a handicapped person to need a reserved parking spot, but not be able to get one (because they’re all full)? Whatever answer we come up with, I sure don’t envy the politician who has to defend that answer to the public!
But also, how would we come up with an answer? (Would we have to go all the way to fully general utilitarianism, where we calculate how many utils are lost by the average disabled person who has to park in a regular spot, and how many utils are lost by the average non-disabled person who has to park slightly further away due to the presence of empty reserved spots? How would we account for the effect of the presence and number of reserved spots on people’s behavior?)
How do these decisions actually get made? Like, in real life—how is it determined that there shall be this many handicapped spots in a shopping center parking lot?
In other words—you write:
Third, because if you’re a decision-maker of any kind, recognizing a handicapped parking situation means you have the opportunity to be conscious about allocation choices, or look for ways to make allocation smarter and more flexible.
Do you know of any resources that go into detail on this? Are there such?
Am I missing someplace where my post dismisses the issues you’re talking about?
Not explicitly, no.
I would characterize the difference in our views (as I understand your views) as having primarily to do with expectations about the distribution of outcomes w.r.t. whether any given accommodation will be positive-sum, zero-sum, or negative-sum (and the details of how the benefits and harms will be distributed).
If one believes that the distribution is skewed heavily toward positive-sum outcomes, and zero-sum or negative-sum outcomes are rare or even essentially of negligible incidence, then the emphasis and focus of your post basically makes sense; in such a case, overlooking opportunities to provide accommodations is the primary way in which we end up with less value than we might have done.
If one believes that the distribution contains a substantial component of zero-sum or negative-sum outcomes (and, especially, if one believes that there are common categories of situations wherein a negative-sum outcome may be the default), then the emphasis and focus of your post is essentially mis-aimed, and the lack of discussion of costs, of harms, etc., is a substantial oversight in any treatment of the topic.
That said, I of course agree with the basic thesis which you express in the post’s title, and which you develop in the post, i.e. that not everything is a curb cut effect and that there are different dynamics that arise from different sorts of accommodations. You can think of my top-level comment in this thread as additive, so to speak—addressing a lacuna, rather than directly challenging any specific claim in your post. (My other top-level comment does directly challenge some of your claims, of course. But that’s a different subtopic.)
If I understand you correctly, you are describing the same sort of thing as I mentioned in the footnote to this comment, yes?
Yes, that may be part of it. I suspect, however, that in this case it is a slightly different (though somewhat related) dynamic that’s mostly responsible.
“Accessibility advocate” is qualification which leads naturally to “accessibility expert”; and there is a certain amount of demand for such people (e.g., as consultants on projects which are required by regulations to be “accessible”, or which otherwise benefit from being able to claim to be “accessible”). Such people have an incentive to establish their credentials and their credibility by talking about what web developers must do in order to make their websites accessible, to frequently mention accessibility in Hacker News discussions, to write blog posts about accessibility best practices, etc.
They do not have any incentive whatever to help to fix bugs in screen reader programs. What would that do for them? The better such programs work, the less work there is for these people to do, the less there is to talk about on the subject of how to make your website accessible (“do nothing special, because screen readers work very well and will simply handle your website properly without you having to do anything or think about the problem at all” hardly constitutes special expertise…), the less demand there is for them on the job market…
None of this helps actual vision-impaired users, of course. It’s a classic principal-agent problem.
I think you have misunderstood my claims and my point.
The links I have posted were to demonstrate the fact that screen readers having a problem with soft hyphens is a real thing that really happens. (You seemed to be skeptical of this.)
That developers are sometimes told to not use soft hyphens, on account of this issue, is something for which I have and need no links, because, as I said initially, this is something which I, personally, have been told, by self-described accessibility advocates and/or disabled users, in discussions of actual websites which I have worked on. (You could disbelieve me on this, I suppose…)
And whether this specific advice/request/demand happens often is inconsequential. It is one example of a class of such things, which collectively one ends up hearing quite a bit, if one does serious web development work these days. The title attribute example was another. I could also have mentioned the deeply confusing and bizarre ARIA attributes.
Again: any specific such issue comes up only occasionally. But if I were to try to build a website such that screen readers have no problems with it, I would have to deal with many such issues—most of which could be fixed much more easily by the developers of the screen reader software… but aren’t. And the attitude of most accessibility advocates I’ve encountered has been that I should indeed take that (“build a website such that screen readers have no problems with it”) as my goal.
Come now; you can do better than that.
A search for screen reader "soft hyphen" easily finds this:
https://github.com/nvaccess/nvda/issues/9343
A search for "screen reader" "soft hyphen" easily finds these:
https://www.reddit.com/r/accessibility/comments/lku7kq/comment/go5kkwy/
https://lists.apache.org/thread/8bjr2lxhy3jj4vqrqzdp98hlndbt3sol
No, the lack of screen reader support for soft hyphens is a real thing, with actual user complaints behind it. Besides, that guidelines page doesn’t mention title attributes either; those are only very general guidelines, lacking details.
As far as ignoring some advice—sure. I ignore all of it, personally.
If an accommodation makes life worse for non-users then it’s at best what I’d call a handicapped parking effect, meaning that designers have to make a hard tradeoff.
Right. The thing is (and this is what I was getting at), it seems to me that disability accommodations are often argued for on the basis of the “curb cut effect” concept, but in fact such accommodations turn out to be handicapped parking effects—at best! It seems to me, in fact, that disability accommodations quite often make life tangibly worse for many more people than those whose lives they improve.
(By the way, here’s something which I find to be… interesting, let’s say. It’s often claimed that curb cut effects are ubiquitous. Yet if you ask for three examples of such things, people tend to have trouble producing them. One’s a freebie: actual curb cuts. Two, also easy, there’s the standard-issue second example: closed captions (although I am not entirely convinced that they’re strictly positive-or-neutral either, but never mind that). But what’s the third? After some straining, you might get something lame like “high contrast on websites” (what websites…?) or “accessibility features in games” (what features…?). At that point the well of examples runs dry.)
It’s also possible that the people working on your bridge just didn’t think about it or didn’t try very hard, in which case it’s not any kind of cleverly-named effect, it’s just bad design.
Sure they didn’t. Why should they? It’s not like anyone is building the thing out of a purely altruistic desire to help disabled people. Someone somewhere passed a law, someone else in another place wrote some regulations, a third person somewhere else wrote some funding proposal, a budget was approved, jobs were created, political capital was made, etc., etc.
But that’s how it almost always is. Almost nobody ever really thinks about it or tries very hard. This entire domain is absolutely jam-packed with principal-agent problems. That’s the whole problem.
One thing that you largely ignore in this post is the cost of creating such accommodations.
I will give a couple of examples. The first concerns a “curb cut” scenario; the second is about a “Braille signage” scenario.
Making public spaces uglier on the public’s dime
Not far from me is a highway, which has a residential neighborhood on one side of it and a waterfront promenade on the other. In several places there are pedestrian bridges that cross the highway. One of these bridges (which doubles as a ramp onto the highway, in one direction) is currently being rebuilt; the project nears completion (indeed the bridge is already usable, as the remaining work is mostly to do with railings and such), so it is now possible to see, and judge, what the completed construction will be like.
Now, prior to this project, this was a perfectly functional bridge, which was not in any way damaged, decrepit, crumbling, failing, dangerous, or even unsightly. There was nothing wrong with the bridge whatsoever—except that it wasn’t wheelchair-accessible. Hence, the rebuilding.
The new bridge is much less convenient for non-disabled pedestrians (one must walk thrice as far to get from one side to the other, due to the lengthy sloped ramps which the new design uses). It is more dangerous to pedestrians of all kinds, due to the incorporation of a bizarre roundabout in the design of the new ramp. It is much uglier and more obtrusive; it takes more of the promenade away from greenery. The bridge couldn’t be used while it was being rebuilt, of course (the project has taken considerable time, as such things do). And, of course, the rebuilding project is taxpayer-funded.
As far as I can tell, this is a case of me paying (via taxes) for my life to be made strictly worse than it was before.
Helping users who seem strangely uninterested in solving their problems
Web designers/developers routinely hear that we should make our websites accessible to users of screen readers. The specific things that must be done to accomplish this are sometimes reasonable (add alt attributes to images)… but often aren’t.
For instance, I have been told that using soft hyphens as hyphenation hints is bad, because it causes screen readers to get confused and pronounce all the words incorrectly. Alright. Well, why is that my problem? If a screen reader does this, that sounds like a bug in the screen reader program. So the users of that program should talk to the developers of said program; or, if that does not help, switch to a different screen reader. (There seem to be quite a few options!)
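The bug in question is easy to characterize: a soft hyphen is just the invisible character U+00AD sitting in the text, and stripping it before speech synthesis is trivial. A sketch of both sides of the transaction—the hint the web developer inserts, and the normalization a screen reader could do (function names are mine, purely illustrative):

```javascript
// A soft hyphen (U+00AD) is an invisible hyphenation hint. A screen
// reader that feeds raw text to its synthesizer will mangle hinted
// words; normalizing first is a one-liner—which is why this reads as
// a screen-reader bug rather than a web-developer problem.
const SOFT_HYPHEN = "\u00AD";

function addHyphenationHints(word, breakPoints) {
  // e.g. addHyphenationHints("accessibility", [2, 5]) marks "ac|ces|sibility"
  let out = "";
  let prev = 0;
  for (const i of breakPoints) {
    out += word.slice(prev, i) + SOFT_HYPHEN;
    prev = i;
  }
  return out + word.slice(prev);
}

function normalizeForSpeech(text) {
  return text.replaceAll(SOFT_HYPHEN, "");
}
```

Round-tripping a hinted word through `normalizeForSpeech` recovers the original exactly, so there is no information-loss excuse for mispronouncing it.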
Similarly, I’ve been told that using the title attribute (on links, say) is bad, because screen readers will read out the value of said attribute, which is usually undesirable. Again: why is this the web dev’s problem? Fix the screen reader, or use a better one!
And yet “accessibility advocates” seem much more interested in hectoring web developers about all the myriad inconvenient, time-consuming, headache-inducing ways in which we must cater to the strange (and strangely persistent—some of these supposed limitations of screen readers have been around for decades, it seems, despite the plenitude of offerings, of which a good number are even free software licensed and can presumably be patched, forked, etc.!) peculiarities of screen readers than they are in… fixing the screen readers.
“Every web developer must remember to do all of the following long list of specific things—many of which take time and development resources, and substantively restrict your options for implementing certain features or solving certain problems—in order to support users of screen readers” is a demand for a very large number of people to contribute unpaid work (and to keep doing so, indefinitely) to solve a problem which could be solved much more easily (with a solution that needs to be implemented just once) by a much smaller number of precisely the people who are making the demand.
This is clearly a negative-sum solution.
That doesn’t mean it’s zero-sum: The existence of a handicapped parking spot that I can’t use might cost me an extra 20 seconds of walking, but save an extra five minutes of painful limping for the person who uses it.
This does not demonstrate that handicapped parking spots aren’t zero-sum (or, indeed, even that they’re not negative-sum). Merely comparing the advantage to one handicapped person of parking in a reserved spot, and the disadvantage to one non-handicapped person of having one less spot (in an optimal location) available, is not enough; you must multiply both quantities by the number of instances affected (respectively, the number of occasions on which a handicapped person uses one of the reserved spots, and the number of occasions on which a non-handicapped person uses one of the regular spots), and compare those quantities.
It is very, very easy for an accommodation like this to end up being negative-sum.
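The aggregate comparison is just two products—per-instance effect times number of instances—set against each other. A sketch with entirely made-up numbers, purely to show the structure of the calculation:

```javascript
// Net value of the reserved spots =
//   (benefit per disabled use x number of uses)
// - (cost per affected non-disabled driver x number of occasions).
// All numbers used with this are invented for illustration; the point
// is only that the per-instance asymmetry (minutes vs. seconds) can be
// swamped by the asymmetry in counts.
function netBenefitSeconds({ disabledUses, secondsSavedPerUse,
                             nonDisabledOccasions, secondsLostPerOccasion }) {
  return disabledUses * secondsSavedPerUse
       - nonDisabledOccasions * secondsLostPerOccasion;
}
```

For example (invented figures): 10 disabled uses per day saving 5 minutes (300 s) each, against 300 drivers losing 20 s each, gives 3000 − 6000 = −3000 seconds per day—negative-sum despite the per-instance comparison strongly favoring the disabled user.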
Or for past me, on a couple of occasions when I’ve been injured
To be able to use a handicapped parking space, it is not sufficient to be injured; one must also apply for a handicapped parking permit, which is not a trivial process.
I have, several times in the past, been injured in such a way that I would have benefited from being able to use a handicapped parking spot. On zero of those occasions was it even remotely practical to apply for, and receive, a permit that would enable me to do so. Because of this, while handicapped parking spaces could have helped me on a number of occasions, they have actually helped me never.
Zulip, Discord and Slack are all options as well
However, these are all very bad for searchability, archiving, multimedia content, and creation of permanent content of any sort.
In general, bot issues are one of the top reasons why websites that accept user submissions either need to have a strict manual review phase, or be continuously updated with defenses.
Indeed. And what you’ll generally find is that mature, widely-used platforms tend to have many and varied tools for dealing with this sort of thing, whereas if you build custom software, you end up having to handle many more edge cases, attack types, etc., than you’d expected (because it’s very hard to think of all such possibilities in advance), and the project just balloons massively due to this.
(For example, Simple Machines Forum—which runs Data Secrets Lox, and which I, on the whole, do not recommend—has all sorts of options for gating user registration behind verification emails / manual moderator approval / captcha / verification questions / etc.; it has moderation tools, including settings that let you enforce per-post approval, on a per-subforum basis; it has a karma system; it has built-in GDPR compliance features; and all of this before you consider all the optional modifications that are available… and SMF is not even one of the better platforms in this category! How much development work would it take a small team to get a discussion forum platform to this state? How much work would it take even to just build the core functionality plus the moderation/security/anti-spam tools…?)
I don’t agree with most of this.
I agree with this part:
I would advise against setting up the software for yourself (unless this is the type of thing you also do for a job)
Yes, if you are not a “tech person” / programmer / engineer of some sort / otherwise have experience with software, you should not set this sort of thing up yourself. You should find/hire someone to do it for you. That is not difficult.
I disagree with the rest of what you say.
Choosing a free solution that is well-maintained is better than rolling your own. A standardized solution plus standardized exploits plus standardized mitigations to those exploits is better than a custom solution.
Basically, remember the situation when one person practically took down Less Wrong, and it had to be reprogrammed from scratch, because updating the original Reddit codebase would be too much work? A similar thing can happen when you use a free solution, and defending against it can turn out to be too much work.
First of all, as I recall, that wasn’t an “exploit” in the usual “software vulnerability” sense. Perhaps someone from the LW team who was around back then can better describe the details, but as I understand it, it was a design flaw in the “if someone does this bad thing, we have no good tools to catch them and/or prevent someone from doing it” sense. There is no reason whatsoever why a custom solution can’t have arbitrarily many such design flaws, and such an “exploit” in no way relies on having access to the source code or… anything like that.
And—again, to my recollection—old Less Wrong was never “hacked”.
But more importantly, the reason why any of this was a problem at all is that old LW used the old Reddit codebase—that is, one which had been deprecated and was no longer maintained. Indeed, it is a bad idea to choose such a platform, if you do not have a dedicated engineer to service it! This is why you should choose something popular and well-maintained.
For example, I linked MyBB in my earlier comment. It is updated regularly, and the developers clearly take security very seriously. I don’t know how much money you’d have to spend to get this degree of protection in a custom solution, but it sure ain’t a small number.
When you speak of standardized exploits to standardized solutions, I expect that you have Wordpress in mind, which is infamous for its exploitability (although I am unsure to what extent that reputation is still accurate; it may be an outdated characterization). But most web forums (which, note, Wordpress is not) get hacked approximately never. Ones based on well-designed, well-maintained, popular software like MyBB, even less so.
I also disagree with the advice to “use some cheap and simple solution that can (and will) be thrown away later”. In my experience, such platform choices tend to be quite “sticky”, and migration is often painful, expensive, and time-consuming. That is not to say that you should never migrate to a custom solution (although I am very skeptical about OP’s use case requiring anything more advanced than a good PHP bulletin board)… but even if you expect that you’ll want to migrate, it is far better to migrate from a basically working site which merely lacks some features you want, or has some annoying limitations, etc., than to migrate from a site which has broken or been hacked or otherwise exploded.
The fact is that a decent PHP-bulletin-board-type platform already is “a cheap and simple solution”. (Which can, of course, be thrown away later, but doesn’t have to be.) Trying to go even cheaper is setting yourself up for pain later on.
That’s true, but I’m not aware of one that does this combo and is good (uses a good forum software, is reliable, etc.). Are you?
MyBB (or similar) with a custom theme.
- Aesthetic: lots of themes available, and making your own seems easy.
- Inexpensive: can’t beat “free” for the software, and cheap hosting that supports PHP+MySQL is plentiful.
- Private: trivial to set up basically arbitrary access controls, as with any half-decent forum software.
- Easily set up: standard PHP+MySQL stuff.
(I strongly anti-recommend Discourse as a forum platform.)
Re-construction of Pathfinder game mechanics in setting
(Done poorly)
Thanks!
I agree that a link to a more substantive writeup would be very good… it’s hard to know what to make of the claim that “Pianists with a long professional experience show a statistically significant preference for the aurally tuned grand”, given that there were only 8 such pianists and 2 pianos (one tuned one way, one tuned the other way).
… also, this information comes to us from the website of this “entropy piano tuner”, which seems… well, I’d like to see another source, at least.
(Apparently, the creators of this “EPT” are themselves physicists affiliated with the University of Würzburg, which certainly explains how/why they got the University of Music Würzburg involved in this test.)
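To make the sample-size worry concrete, here is a back-of-the-envelope exact binomial test (a hypothetical calculation of my own; the study’s actual statistical method is not described here). With only 8 pianists choosing between 2 pianos, even a lopsided 7-to-1 preference fails to reach the conventional 0.05 significance level:

```python
from math import comb

def binom_two_sided_p(k: int, n: int, p: float = 0.5) -> float:
    """Two-sided exact binomial p-value: the probability, under chance
    rate p, of any outcome at least as extreme (i.e., at least as
    improbable) as observing k successes out of n trials."""
    pmf = lambda i: comb(n, i) * p**i * (1 - p)**(n - i)
    observed = pmf(k)
    # Sum the probabilities of all outcomes no more likely than the observed one.
    return sum(pmf(i) for i in range(n + 1) if pmf(i) <= observed + 1e-12)

# 7 of 8 pianists preferring the aurally tuned piano: not significant.
print(binom_two_sided_p(7, 8))  # → 0.0703125
# Even a unanimous 8 of 8 only barely clears the bar.
print(binom_two_sided_p(8, 8))  # → 0.0078125
```

So unless the preference was nearly unanimous, “statistically significant” is doing a lot of work here, and with 2 pianos the piano itself (rather than the tuning method) is a confound in any case.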
Have you (or has anyone) ever done double-blind listening tests to determine whether in fact anyone can tell the difference in such cases?
The problem with economics, however, is that while it’s got theories, they are, by and large, not theories about humans.
The discipline which was, at least, intended to provide the theoretical grounding for psychology as a whole was evolutionary psychology. The best summary of the motivation for, and conceptual basis of, evo-psych is the following, written by the great cognitive psychologist Roger Shepard in his paper “The Perceptual Organization of Colors: An Adaptation to Regularities of the Terrestrial World?” (1992; this paper was included as a chapter in The Adapted Mind, probably the most important text in evo-psych):
STRUCTURE IN HUMAN PERCEPTION AND COGNITION IN GENERAL
For over a century, psychological researchers have been probing the structures and processes of perception, memory, and thought that mediate the behaviors of humans and other animals. Typically, this probing has taken the form of behavioral experiments suggested by evidence from one or more of three sources: (a) introspections into one’s own experience and inner processes, (b) information gleaned about the anatomy or physiology of the underlying physical mechanisms, and (c) results obtained from previous behavioral studies. More recently, in seeking to understand not only the nature but also the origins of psychological principles, some of us have been turning to a fourth source for guidance—namely, to the ecological properties of the world in which we have evolved and to the advantages to be realized by individuals who have genetically internalized representations of those properties.
Taken by themselves, findings based on introspective, behavioral, and physiological evidence alike, however well established and mutually consistent they may be, remain as little more than “brute facts” about the human or animal subjects studied. What such findings reveal might be merely arbitrary or ad hoc properties of the particular collection of terrestrial species investigated. Even our own perceptual and cognitive capabilities, as much as our own bodily sizes and shapes, may be the products of a history of more or less accidental circumstances peculiar to just one among uncounted evolutionary lines. Certainly, these capabilities do not appear to be wholly dictated by what is physically possible.
The following are just a few of the most easily stated and well known of our perceptual/cognitive limitations, as these have been demonstrated under highly controlled but nonnaturalistic laboratory conditions:
- Although a physical measuring instrument can reliably identify a vast number of absolute levels of a stimulus, we reliably identify only about seven (Miller, 1956).
- Although a physical recording instrument can register a vast number of dimensions of variation of the spectral composition of light, the colors we experience vary, as I have already noted, along only three independent dimensions (Helmholtz, 1856–1866; Young, 1807).
- Although the red and violet spectral colors differ the most widely in physical wavelength, these colors appear more similar to each other than either does to the green of an intermediate wavelength (leading, as noted, to Newton’s color circle).
- Although a camera can record and indefinitely preserve an entire scene in a millisecond blink of a shutter, the “iconic” image that our visual system retains from a single brief exposure decays in less than a second and, during this time, we are able to encode only about four or five items for more permanent storage (Sperling, 1960).
- Although a computer can store an essentially unlimited number of unrelated items for subsequent retrieval, following a single presentation, we can reliably recall a list of no more than about seven items (Miller, 1956).
- Although a computer could detect correlations between events separated by any specified time interval and in either order of occurrence, in virtually all animals with nervous systems, classical conditioning generally requires that the conditioned stimulus last for a short time and either be simultaneous with the unconditioned stimulus or precede it by no more than a few seconds (Pavlov, 1927, 1928).
- Although a computer can swiftly and errorlessly carry out indefinitely protracted sequences of abstract logical operations, we are subject to systematic errors in performing the simplest types of logical inferences (e.g., Tversky & Kahneman, 1974; Wason & Johnson-Laird, 1972; Woodworth & Sells, 1935)—at least when these inferences are not of the kind that were essential to the fitness of our hunter-gatherer ancestors during the Pleistocene era (Cosmides, 1989).
Our performance in a natural setting is, however, a very different matter. There, our perceptual and cognitive capabilities vastly exceed the capabilities of even the most advanced artificial systems. We readily parse complex and changing visual scenes and auditory streams into spatially localized external objects and sound sources. We classify those objects and sources into natural kinds despite appreciable variation in the individual instances and their contexts, positions, or conditions of illumination. We infer the likely ensuing behaviors of such natural objects—including the recognition of animals and anticipation of their approach or retreat, the recognition of faces and interpretation of their expressions, and the identification of voices and interpretation of their meanings. We recode and transfer, from one individual to another, information about arbitrary or possible states of affairs by means of a finite set of symbols (phonemes or corresponding written characters). And we plan for future courses of action and devise creative solutions to an open class of real-world problems.
To the extent that psychological science fails to identify nonarbitrary reasons or sources for these perceptual/cognitive limitations and for these perceptual/cognitive capabilities, this science will remain a merely descriptive science of this or that particular terrestrial species. This is true even if we are able to show that these limitations and capabilities are consequences of the structures of underlying neurophysiological mechanisms. Those neurophysiological structures can themselves be deemed nonarbitrary only to the extent that they can be seen to derive from some ultimately nonarbitrary source.
Where, then, should we look for such a nonarbitrary source? The answer can only be, “In the world.” All niches capable of supporting the evolution and maintenance of intelligent life, though differing in numerous details, share some general—perhaps even universal—properties. It is to these properties that we must look for the ultimate, nonarbitrary sources of the regularities that we find in perception/cognition as well as in its underlying neurophysiological substrate.
Some of the properties that I have in mind here are the following (see Shepard, 1987a, 1987b, 1988, 1989): Space is three-dimensional, locally Euclidean, and endowed with a gravitationally conferred unique upward direction. Time is one-dimensional and endowed with a thermodynamically conferred unique forward direction. Periods of relative warmth and light (owing to the conservation of angular momentum of planetary rotation) regularly alternate with periods of relative coolness and darkness. And objects having an important consequence are of a particular natural kind and therefore correspond to a generally compact connected region in the space of possible objects—however much those objects may vary in their sensible properties (of size, shape, color, odor, motion, and so on).
Among the genes arising through random mutations, then, natural selection must have favored genes not only on the basis of how well they propagated under the special circumstances peculiar to the ecological niche currently occupied, but also, as I have argued previously (e.g., Shepard, 1987a), even more consistently in the long run, according to how well they propagate under the general circumstances common to all ecological niches. For, as an evolutionary line branches into each new niche, the selective pressures on gene propagation that are guaranteed to remain unchanged are just those pressures that are common to all niches.
(Shepard then goes on to describe the deep questions which underlie his own work on color perception, one of which the rest of the paper is dedicated to examining and answering. I highly recommend reading the whole thing.)
Sure. Now, as far as I understand it, whether the extrapolated volition of humanity will even cohere is an open question (on any given extrapolation method; we set aside the technical question of selecting or constructing such a method).
So Eli Tyre’s claim seems to be something like: on [ all relevant / the most likely / otherwise appropriately selected ] extrapolation methods, (a) humanity’s EV will cohere, (b) it will turn out to endorse the specific things described (dismantling of all governments, removing the supply of factory farmed meat, dictating how people should raise their children).
Right?
And… you claim that the CEV of existing humans will want those things?
You don’t think that most humans would be opposed to having an AI dismantle their government, deprive them of affordable meat, and dictate how they can raise their children?
Er… yes, I am indeed familiar with that usage of the term “Friendly”. (I’ve been reading Less Wrong since before it was Less Wrong, you know; I read the Sequences as they were being posted.) My comment was intended precisely to invoke that “semi-technical term of art”; I was not referring to “friendliness” in the colloquial sense. (That is, in fact, why I used the capitalized term.)
Please consider the grandparent comment in light of the above.
Doesn’t this very answer show that an AI such as you describe would not be reasonably describable as “Friendly”, and that consequently any AI worthy of the term “Friendly” would not do any of the things you describe? (This is certainly my answer to your question!)
It also seems to strongly imply that mind uploading into some kind of classical artificial machine is possible, since it’s unlikely that all or even most of the classical properties of the brain are essential.
Could you say more about this? Why is this unlikely?
One man’s modus ponens is another man’s modus tollens. I agree that the LW-style decision theory posts encourage this type of thinking, and you seem to infer that the high-quality reasoning in the decision theory posts implies that they give good intuitions about the philosophy of identity.
I draw the opposite conclusion from this: the fact that the decision theory posts seem to work on the basis of a computationalist theory of identity makes me think worse of the decision-theory posts.
Strongly seconding this.
I see, yeah, that would explain it.