If I look at 200 comments pages, doesn't that require the server to process my request and send me the comments page 200 times?
As for finding your comments regardless of the thread they are on, that is already a feature of Reddit's platform - click on your username, then click "comments" to get to the LW implementation of that feature.
Regardless, that isn't what you were describing earlier. It would not put extra load on the server to have jQuery transform this thread, which has all the comments, to show only your comments on the thread. It's a client-side task. That's what you originally said was not feasible.
All this talk has actually made me consider writing an addon that makes Slashdot look clean and inline like LW, Reddit, YCombinator, etc.
I admit I didn't think it all the way through. If your goal isn't ultimately data collection, you would make a browser addon and use JavaScript injection (JavaScript being the frontend scripting language browsers use to render web pages). I replied to another person with loose technical details, but you could create a browser addon where you push a button in the top right corner of your browser, type a username, and have it transform the page to show nothing but posts by that user, by leveraging the page's own frontend scripting.
So there's a user-friendly way to transform your browser's rendering without APIs, clunky web scrapers or excess server load. It's basically the same principle that adblockers work on.
There's no extra load on the server; you're just parsing what the page already had to send you. If your goal is just to see the web page and not data collection, it's a different solution but also feasible.
What you can do is create a simple browser plugin that injects jQuery into the page to collect all the comments by a given username. I'll go into technical details a bit: inject your own copy of jQuery into the page (one you know always uses the same code, in case LessWrong changes their version of jQuery). Then use jQuery selectors to anchor to all the user's comments, using a technique similar to the one I described for the scraper. Then transform the page so it consists of nothing but the anchored comments you acquired via jQuery.
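A minimal sketch of what the injected script might look like, assuming jQuery has already been injected as described. The `.comment` and `.author` selectors are my assumptions, not LW's actual markup; inspect the real DOM (F12) and adjust:

```javascript
// Hide every comment not written by the target user.
// Purely client-side: nothing new is requested from the server.
function showOnlyUser(username) {
  $('.comment').each(function () {
    var author = $(this).find('.author').first().text().trim();
    if (author !== username) {
      $(this).hide();
    }
  });
}

showOnlyUser('ExampleUser'); // hypothetical username
```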
You could make this a real addon where you push a button in the top right of your Chrome browser, type a username, and then see nothing but the posts by that user on the given page.
Same principle as Adblock Plus or other browser addons.
But consider the following problem: Find and display all comments by me that are children of this post, and only those comments, using only browser UI elements, i.e. not the LW-specific page widgets. You cannot -- and I'd be pretty surprised if you could make a browser extension that could do it without resorting to the API, skipping the previous elements in the chain above. For that matter, if you can do it with the existing page widgets, I'd love to know how.
If you mean parse the document object model for your comments without using an external API, it would probably take me about a day, because I'm rusty with WatiN (the tool I used to use for web scraping when that was my job a couple of years ago). About four hours of that would be setting up an environment; if I were up to speed, maybe a couple of hours to work out the script. Not even close to hard compared to the crap I used to have to scrape, and I'm definitely not the best web scraper; I'm a non-amateur novice, basically. The basic process is this: anchor to a certain node type that is the child of another node with certain attributes and properties, search all the matching nodes for your username, then extract, from each matched node, the content of the child nodes that contain your post.
WatiN: http://watin.org/
Selenium: http://www.seleniumhq.org/
These are among the most popular browser-automation tools in the .NET ecosystem.
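Since WatiN is .NET-only, here is the same three-step process (anchor, match on username, extract) sketched in plain browser JavaScript you could run from the console instead. Every selector and class name here is an assumption for illustration, not LW's actual markup:

```javascript
// Scraper-style pass over the DOM: anchor to nodes of a known type under a
// parent with known attributes, match on the username, extract the bodies.
function scrapeCommentsBy(username) {
  var results = [];
  document.querySelectorAll('div.comment').forEach(function (node) {
    var author = node.querySelector('a.author');   // author child node
    if (author && author.textContent.trim() === username) {
      var body = node.querySelector('div.body');   // comment text node
      results.push(body ? body.textContent.trim() : '');
    }
  });
  return results;
}

console.log(scrapeCommentsBy('ExampleUser')); // hypothetical username
```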
As someone who has the ability to control how content is displayed to me (tip: hit F12 in Google Chrome), I disagree with the statement that a web browser is not a client. It is absolutely a client, and if I were sufficiently motivated I could view this page in any number of ways. So can you. Easy examples you can do with no knowledge are to disable the CSS, disable JS, etc.
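For example, here's a snippet you can paste into the F12 console to strip all styling from the page your browser already downloaded; the server is never contacted again:

```javascript
// Remove every external stylesheet and inline <style> block from the page.
// The content was already sent to you; this only changes how it renders.
document.querySelectorAll('link[rel="stylesheet"], style')
  .forEach(function (el) { el.remove(); });
```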
You can self-teach. I guess it depends on your confidence with knives, but watch videos of how to do knife work, and don't go totally overboard trying to chop as fast as a professional chef, as fingers are valuable. Do the motion the way they do it, but slowly enough to be sure you will not hurt yourself. As you gain practice, you may feel comfortable naturally speeding up.
As for cooking and baking: look up a recipe on the internet and do exactly what the recipe says. Don't know what a step means or how to do it? Google it, watch videos, try to follow the directions as precisely as possible, and see if the result is any good. If the recipe is good and you follow the directions, you'll get something good. Cooking, and especially baking, is like science: follow the directions and you can get close to the desired outcome.
If you're kind of a natural, you can learn to spot problems with recipes before you make them, or improvise your own flavors and make them better; if you're not, that's OK. There are a lot of techniques you can learn, but dipping your toes into cooking is not that hard, and a non-professional can make excellent meals; it just takes more time. If you find a big passion for it, there's a whole world of resources out there about how to do things :)
Everyone should have a good chef's knife and know how to use it. Victorinox sells $50 polymer-handled stainless steel knives that are as good as some $200 knives I've owned (probably other companies do too; check Amazon). A single good chef's knife can cut almost everything you would need to cut on any culinary adventure. Keep it clean and extremely sharp by using a sharpening steel or, if you're not up to learning how, a knife-sharpening tool, though those will slowly degrade the blade.
Buy the knife, learn the knife, it's amazingly easy to prepare your food with a single excellent knife rather than a slapchop or a bunch of specialized weird tools for cutting various things. There's a reason the professionals do it this way. Have a great time preparing your meals with much less difficulty, and thank me later.
Another serious problem is that the students must make the necessary assumption that the rule is simple. In the context of school, "simple" generally means "most trivial to figure out".
This is a necessary assumption because there could be rules that would not be possible to determine by guessing. For example, you'd have to spend the lifetime of the universe guessing triplets to correctly identify that the rule is "Ascending integers except sequences containing the 22nd Busy Beaver number", and then you still wouldn't know if there's some other rider.
If it were said, "It will require several more guesses to figure out the rule, but not more than a couple dozen, and the sequences you have don't fully pin down what the rule is", the exercise would be a lot more sane. At worst, the only mistake the students made was assuming the rule was supposed to be trivially simple, which is like asking them to be mind readers: I'm thinking of a problem; on a scale of 1-10, please guess how difficult it is to solve.
The way you're summarizing the "disease" study mangles what was described in the abstract, even though the abstract makes your own point. I haven't checked the rest. I went digging for the abstract:
Participants assessed the riskiness of 11 well-known causes of death. Each participant was presented with an estimation of the number of deaths in the population due to that particular cause. The estimates were obtained from a previous study of naive participants' intuitive estimations. For instance, based on the result of the previous study, the number of deaths due to cancer was presented as: ‘2,414 out of 10,000’, ‘1,286 out of 10,000’, ‘24.14 out of 100’ and ‘12.86 out of 100’. The estimates of deaths were presented in analogous ways for the remaining ten causes of death. It was hypothesized that the judged degree of riskiness is affected by the number of deaths, irrespective of the total possible number (such as 10,000 or 100). Results from Experiment 1 were consistent with this prediction. Participants rated cancer as riskier when it was described as ‘kills 1,286 out of 10,000 people’ than as ‘kills 24.14 out of 100 people’, and similar results were observed regarding the remaining 10 causes of death. Experiment 2 replicated this trend. Implications for risk communications are discussed. © 1997 John Wiley & Sons, Ltd.
The way you described it --
Then how about this? Yamagishi (1997) showed that subjects judged a disease as more dangerous when it was described as killing 1,286 people out of every 10,000, versus a disease that was 24.14% likely to be fatal. Apparently the mental image of a thousand dead bodies is much more alarming, compared to a single person who's more likely to survive than not.
Crucially, your verbiage as-is gives Group A a 12% total-population mortality rate and Group B a 24% case fatality rate, and those are incommensurable. (I'm assuming you meant that the information was presented to two separate groups, which may itself be too generous.) The original study very explicitly specifies population mortality for both figures, i.e., "24.14 out of 100" deaths in the whole population for cancer, and never expresses it as a percentage, which primes differently for some people.
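To make the mismatch concrete, here is the arithmetic under my reading of your two framings:

$$\frac{1{,}286}{10{,}000} = 12.86\% \text{ of the whole population killed} \quad \text{vs.} \quad 24.14\% \text{ of those who contract the disease killed}$$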
If you got that past all of us, I think it shows there are chinks in our armor as well. I wouldn't deny that the affect heuristic is real, but the way you present the information doesn't pass my smell test.