Phase I clinical trials exist for this purpose. The objective of Phase I trials is to establish safety, dosage, and side-effects of drugs in human subjects, and to observe their proposed mechanism of action if relevant.
It's rare for Phase II and III trials (which have clinically-relevant endpoints) to be carried out on healthy subjects. Part of this is ethical considerations, but also clinical trials are extremely expensive to carry out, and there's not much payoff in learning whether your drug has some specific effect on healthy subjects.
Here is some colourful language for you: Dominic Cummings makes my memetic immune system want to vomit.
Part of it is because he sets off my Malcolm-Gladwell-o-Meter, but mostly it’s because he’s trying so hard to appear more knowledgeable and well-educated than he actually is. He surrounds himself with the trappings of expertise he obviously doesn’t have. Case in point: this “paper” is clearly a blog post which he converted to PDF via MS Word because he thinks that makes it look more credible.
The effect for me is a bit like receiving an email from a Nigerian prince, asking for your help in getting millions of dollars out of the country. My response is approximately the same.
How do you select (or deselect) the root set?
I've had some luck in open threads on SSC for stuff I would previously have directed to LW, but it's much noisier, and is a far cry from a fully-featured discussion forum.
The links are a new feature since I was last here, and I can't say I'm overwhelmed by them, tbh.
I haven’t posted in LW in over a year, because the ratio of interesting-discussion to parochial-weirdness had skewed way too far in the parochial-weirdness direction. There still isn’t a good substitute for LW out there, though. Now it seems there’s some renewed interest in using LW for its original purpose, so I thought I’d wander back, sheepishly raise my hand and see if anyone else is in a similar position.
I’m presumably not the only one to visit the site for the first time in ages because of new, interesting content, so it’s reasonable to assume a bunch of other former LW-users are reading this. What would it take for you to come back and start using the site again?
Similarly, I've read Austin's How to Do Things With Words. He's not winning any awards for his prose style, but he has a comprehensible project which he goes about in a rigorous, methodical way.
Subject: Written style and composition
Recommendation: Rhetorical Grammar: Grammatical Choices, Rhetorical Effects, by Martha Kolln and Loretta Gray
Reason: After reading Pinker's The Sense of Style, I wanted a meatier syllabus in the mechanics of writing well. My follow-up reading was Rhetorical Grammar and Joseph Williams' Style: Ten Lessons in Clarity and Grace.
I would actually recommend reading all three. Rhetorical Grammar is the most textbook-y of the recommendations, and The Sense of Style is more like a weighty, popular book on the subject, with Ten Lessons being more of an extended exposition/workbook on (you will be unsurprised to learn) ten broad principles of clear writing. All three books have similar messages and convergent positions on the subject matter. Rhetorical Grammar wins out for being the book I imagine one would learn most from.
Or a host for a beautiful parasitic wasp?
LW's strongest, most dedicated writers all seem to have moved on to other projects or venues, as has the better part of its commentariat.
In some ways, this is a good thing. There is now, for example, a wider rationalist blogosphere, including interesting people who were previously put off by idiosyncrasies of Less Wrong. In other ways, it's less good; LW is no longer a focal point for this sort of material. I'm not sure if such a focal point exists any more.
I'm in.
Not that much of a rags-to-riches story, I'm afraid. My parents both have working class backgrounds, and neither went to university, but my upbringing would probably get coded as lower-middle class. I had one attempt at university already, fifteen years ago, but dropped out after one year of an Astrophysics degree. Also my jobs for those six years were mid-range software dev/tech professional type jobs. It's not like I've been shovelling coal or anything.
Some of the things I was thinking about class in relation to that comment were on this sort of topic. I dropped out of my first degree in part because I was a feckless 19-year-old who didn't know any better, but also in part because I didn't have any academic role models and all the education I'd received up until that point had lulled me into a false sense of security. On a fundamental level, I didn't know how to study at a university, and there wasn't anyone in a position to show me how.
Your talking about class-based memetic toxicity rang some bells along these lines. Education has a bunch of "soft skills" that parents can pass on to their kids, and presumably stuff like relationships, money management, interpersonal conflict resolution, etc. have similar sets of soft skills which you simply won't learn unless they're in your environment.
Also, this is going to sound like a bit of a non-sequitur, but I'd been thinking about Game of Thrones, and what feudal lords must look like to serfs. If you're well-fed, well-groomed and well-educated in a malnourished, dirty and illiterate world, you're not only going to look like a qualitatively superior sort of person to Pete the Peon, but you will operationally outperform him in a number of ways. I wonder to what extent this sort of pattern is prevalent in contemporary Western class systems.
I finished the part-time Bachelor's degree in Economics and Maths I've been working on in my spare time for the past six years, alongside a full-time job. I got the result of my (particularly brutal, touch-and-go) final exam this afternoon, and have landed first-class honours. I'll be quitting my job in September and starting a full-time Masters in Computational Statistics and Machine Learning.
For the country data example, every instance of a country name is prepended with a small icon (for development purposes this is currently an obnoxious red X, but I plan to replace this with a neutral-coloured globe or something), and the name itself is wrapped in some custom style (currently boldface, but could be anything). Clicking on the icon places a container with the relevant data on the page, offset to the same location as the icon (giving the illusion of the icon "expanding" to show the data). Clicking on the icon again, or away from the container, removes it.
In terms of extensibility, all the data is in a local JSON file, and the format of the data container is an HTML template that might eventually live in the same file. I'm also planning on having local image assets (maps and flags). This could all be swapped out for anything, or even obtained from a web service.
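To make the annotation step concrete, here's a minimal sketch of how a content script might wrap country names in marked-up spans. This isn't the actual extension code: the `countryData` object, the `annotate` function and the placeholder figures are all hypothetical stand-ins for the local JSON file and templating described above.

```javascript
// Stand-in for the local JSON data file; figures are placeholders.
const countryData = {
  "France": { gdpPerCapita: 40000, hdi: 0.9 },
  "Japan":  { gdpPerCapita: 34000, hdi: 0.92 },
};

// Build one regex matching any known country name, longest names first,
// so multi-word names would win over any substring of themselves.
const names = Object.keys(countryData).sort((a, b) => b.length - a.length);
const pattern = new RegExp(`\\b(${names.join("|")})\\b`, "g");

// Wrap each occurrence in a styled span plus an icon; a click handler
// (not shown) would read data-country to populate the data container.
function annotate(text) {
  return text.replace(pattern, (name) =>
    `<span class="country" data-country="${name}">` +
    `<img class="country-icon" src="globe.png" alt="">${name}</span>`
  );
}

console.log(annotate("France exports wine; Japan exports cars."));
```

In a real content script this would run over text nodes in the DOM rather than a raw string, but the lookup-and-wrap logic is the same.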
I'm playing around with writing a Chrome extension that identifies countries of the world in the browser and marks them up with expandable, at-a-glance summary data for that country, like GDP per capita, composite index scores (HDI, MPI, etc.), literacy rate, principal exports and so on. I find myself regularly looking this up on Wikipedia anyway, and figured I'd remove the inconvenience of doing so.
This example probably isn't that useful for everyone, but it got me wondering what other sets of things could be marked up in the browser in this way. Another example that occurred to me was legislature voting records, where a similar plugin would provide easy visibility of how elected representatives voted on legislation. Again, not useful for everyone, but I could imagine political junkies getting some use out of it.
Such a set of mark-uppable entities would have to be either identifiable by format (like an ISBN) where the data could be fetched from a remote source, or a finite list of a few hundred items (like countries), where the data could be stored locally. What kinds of things would you like this sort of visibility on in the browser? Is there a set of entities you find yourself tiresomely looking up data for over and over again?
(Partly inspired by the Dictionary of Numbers)
Much of what we teach teenagers about human biology is very recently-acquired knowledge, historically speaking. Modern knowledge about the circulatory system, aerobic and anaerobic respiration, vitamin deficiencies, etc. is very far away from the 13th Century, but has practical implications that can still be implemented, like "train your troops at altitude and give your sailors citrus fruit".
A lot of contemporary ideas about workflows and division of labour are fairly recent developments as well (there were no assembly lines in the 13th Century), but have been internalised by citizens of the 21st Century.
Who should I talk to in a group? I have a bunch of existing "social senses" for navigating this, but they're not very reliable. If a clear You-Should-Talk-To-This-Person sense went off whenever I encountered someone appropriate, that would be nice.
I've read The Design of Everyday Things. You don't need to read The Psychology of..., as it's the same book, renamed for marketing reasons.
Completely off-topic, but do you have a policy for when you emphasise with italics and when you emphasise with bold?
I don't know how common this is, but with a dual-monitor setup I tend to have one in landscape and one in portrait. The portrait monitor is good for things like documents, or other "long" windows like log files and barfy terminal output. The landscape monitor is good for everything that's designed to operate in that aspect ratio (like web stuff).
More generally, there's usually something I'm reading and something I'm working on, and I'll read from one monitor, while working on whatever is in the other.
At work I make use of four Gnome workspaces: one which has distracting things like email and project management gubbins; one active work-dev workspace; one self-development-dev workspace; and one where I stick all the applications and terminals that I don't actively need to look at, but won't run minimised/headlessly for one reason or another.
This is all kinds of useful. Thanks!
You can learn an astonishing amount about web development without ever having to think about how it'll look to another human being. In a professional context, I know enough to realise when I should hand it over to a specialist, but I won't always have that luxury.
How are we operationalising "best" here? The purpose of textbooks is to efficiently impart material. Popular books have a wide variety of purposes (to inform, inspire, entertain, polemicise, etc.), so by what standard are we holding one popular book to be superior to another?
I endorse this interpretation.
Do you love it to the tune of $20?
I've dabbled with ggplot, but I've put it on hold for the immediate future in favour of getting to grips with D3. I'll be getting all the R I can handle next year.
I did not know about the book, but it's available to view from various sources. If I get time I'll give it a look-in and report back.
A lot of online communities pay lip service to the idea that their experiences aren't universal, but Less Wrong seems to be one of the few places that takes that idea seriously.
I'm looking for some "next book" recommendations on typography and graphically displaying quantitative data.
I want to present quantitative arguments and technical concepts in an attractive manner via the web. I'm an experienced web developer about to embark on a Masters in computational statistics, so the "technical" side is covered. I'm solid enough on this to be able to direct my own development and pick what to study next.
I'm less hot on the graphical/design side. As part of my stats-heavy undergrad degree, I've had what I presume to be a fairly standard "don't use 3D pie charts" intro to quantitative data visualisation. I'm also reasonably well-introduced to web design fundamentals (colour spaces, visual composition, page layouts, etc.). That's where I'm starting out from.
I've read Butterick's Practical Typography, which I found quite informative and interesting. I'd now like a second resource on typography, ideally geared towards web usage.
I've also read Edward Tufte's Visual Display of Quantitative Information, which was also quite informative, but felt a bit dated. I can see why it's considered a classic, but I'd like to read something on a similar topic, only written this century, and maybe with a more technological focus.
Please offer me specific recommendations addressing the two above areas (typography and data visualisation), or if you're sufficiently advanced, please coherently extrapolate my volition and suggest how I can more broadly level up in this cluster of skills.
I'm pretty sure flirting works more or less the same in most of the Western world. As a general strategy for gauging interest with plausible deniability, I imagine it's universal.
Why?
1) It's largely pointless in terms of one's behaviour and psychological well-being. If you have an all-consuming infatuation and you're not acting on it, the reason for not acting probably isn't because some test statistic hasn't crossed a predesignated threshold.
2) The whole sentiment of "I will calculate your love for me" is attached to a cluster of non-attractive features that probably get binned as "creepy". No, this isn't right. No, this isn't fair. But it is the case.
3) The notion of a "prior" on other people being attracted to you is essentially asking "how attractive am I?" This is information that can't be deduced by observing other people's romantic behaviour, any more than you can measure your own height by reading about other people's height.
Your attractiveness is not some inherent frequency by which people think you're attractive: it's made up of all the attributes and behaviours that people like about you. Maybe you should figure out what those things are and how to make them shine more, rather than trying to guess the odds on any given person finding you attractive.
I like to think I was tilling a rich, inner garden.
My claim is that your model is far too simple to model the complexities of human attraction.
Let's use your example of pulling red and blue balls from an urn. Consider an urn with ten blue balls and five red balls. In a "classical" universe, you would expect to draw a red ball from this urn one time in three. A simple probabilistic model works here.
In a "romantic" universe, the individual balls don't have colours yet. They're in an indeterminate state. They may have tendencies towards being red or blue, but if you go to the urn and say "based on previous observations of people pulling balls out of this urn, the ball I'm about to pull out should be red one third of the time", they will almost always be blue. Lots of different things you might do when sampling a ball from the urn might change its colour.
In such a universe, it would be very hard to model coloured balls in an urn. As far as people being attracted to other people are concerned, we live in such a universe.
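For what it's worth, the "classical" version of the urn really is this easy to model: a few lines of simulation (hypothetical code, just illustrating the one-in-three figure) recover the expected frequency.

```javascript
// Classical urn from the example: ten blue balls, five red.
const urn = [...Array(10).fill("blue"), ...Array(5).fill("red")];

// Draw with replacement n times and return the fraction of red draws.
function redFraction(n) {
  let red = 0;
  for (let i = 0; i < n; i++) {
    const ball = urn[Math.floor(Math.random() * urn.length)];
    if (ball === "red") red++;
  }
  return red / n;
}

console.log(redFraction(100000)); // close to 1/3 in the classical universe
```

The "romantic" universe is precisely the one where no such simulation is available, because the sampling procedure changes the colour of the ball.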
There are various things wrong with this reasoning, but I don't think you're getting my general point: this entire approach is misguided and it will not lead you to good outcomes.
A twelve-year-old sixes_and_sevens had the 1988 print of Psychology: The Essential Science and The Definitive Book of Body Language. He was not a hit with the ladies.
It's important to note that the base rate of people finding other people attractive is different from the base rate of other people finding you attractive. You're way more interested in the second question than the first, but no amount of polling people on the internet can answer it for you.
It's a bit like learning to juggle. You can't learn to juggle just by reading books and imagining how balls get thrown and how fast they fall. To learn how to do it, you've got to throw some balls up in the air. You've got to figure out how your body and brain deal with throwing and catching, and then you've got to learn how to control it. To begin with, you're going to drop a lot of balls, but that's not the worst thing in the world that can happen.
This post may get downvoted, as I suspect it's of low value and low interest to a lot of readers. You shouldn't take this personally.
For what it's worth, I admire your approach, though it's based on incorrect assumptions. Trying to calculate whether someone is attracted to you will not end well. Researching psychology for romantic reasons will probably also not end well.
People solve this problem by making bigger and bigger signals at each other, until either one side stops making the bigger signals or until the signals are so big you can't ignore them (also known as "flirting"). If this sounds hard and unreliable, that's because it is. It takes a lot of practice to get good at this. You would be best advised to practice talking to people while trying to figure out how they feel about the conversation instead of carrying out this sort of research.
I found this response very insightful. It ties in with a variety of other things I've been thinking about recently, and has given me a great deal of food for thought. Thank you for sharing it, and you have my sympathies regarding your sister.
I really don't know what we're actually disagreeing about here, so I'm going to tap out. Have a nice evening.
(If it's not evening where you are yet, then have a tolerable rest of the day, and then have a nice evening)
Well if we're talking about that version of "me", why not talk about the version of "me" who's a member of the International Dog-Kicking Association? For any given virtue you can posit some social context where that virtue is or is not desirable. I'm not sure what that accomplishes.
I think we're talking past each other here. I'm not talking about how to cooperate with anybody, or how to cooperate in a value-hostile social environment. I'm talking about how I can cooperate with people I want to cooperate with.
I don't claim that not kicking dogs is a universal moral imperative. I claim that having some internal feature that dissuades you from kicking dogs means I will like and trust you more, and be more inclined to cooperate with you in a variety of social circumstances. This is not because I like dogs, but because that feature probably has some bearing on how you treat humans, and I am a human, and so are all the people I like.
I obviously can't directly inspect the landscape of your internal features to see if "don't needlessly hurt things" is in there, but if I see you kicking a dog, I'm going to infer that it's not.
Also, on the broader subject of fundamental attribution error, in some cases there are fundamental attributes. If I see someone exhibiting sadistic tendencies (outside of a controlled consensual environment), I don't care how bad a day they're having. Unless I can at all avoid it, I don't want them on my team.
I think it's a case of a lot of things, but fundamental attribution error isn't one of them.
It's funny you should mention kicking dogs, as I think animal cruelty (and cruelty in general) is an example of one of the strongest rationales for virtue ethics. I don't attach a lot of moral weight to dogs, but if I witnessed someone kicking a dog, especially if they thought they weren't being witnessed, that gives me insight into what sort of person they are. They are displaying characteristics I do not favour.
People would be more inclined to trust and deal with me if I display pro-social characteristics they favour (and don't display characteristics they disfavour). There are a couple of approaches to me taking advantage of this:
1) I could conspicuously display pro-social characteristics when I believe I'm under scrutiny and it's not too costly to do so.
2) I could make myself the sort of person who is pro-social and does pro-social things, even when it's costly or unobserved.
For sure, option 2 is more expensive than option 1, but it has the advantage of being more straightforward to maintain, and when costly opportunities to signal my pro-social virtues come along, I will take them, while those option 1 people will welch out.
If I kick a single dog in private, this erodes the basis of having taken option 2. If anyone sees me kicking a dog in private, this will undermine their trust in me. As such, I should try as much as is reasonably possible to be the sort of person who doesn't kick dogs.
I was envisioning some sort of context-system, in part for the reason you describe and in part because people probably have specific learning needs, and at any given time they'd probably be focusing on a specific context.
Also I reiterate what I've said to other commenters: likening it to Anki flashcards was probably misguided on my part. I'm not talking about generating a bunch of static flashcards, but about presenting a user with a dynamically-generated statement for them to parse. The interface would be reminiscent of something like Anki, but it would probably never show you the same statement twice.
I agree that auto-generated exercises would be a superior utility, but that seems like a much trickier proposition.
Also, for clarification, this wouldn't be used for memorising notation, but for training fluency in it. My use of Anki as a comparison might have been misguided.
I may not have presented this well in the original comment. This wouldn't be generating random static cards to put into an Anki deck, but a separate system which dynamically presents expressions made up of known components, and tracks those components instead of specific cards. It seems plausible to restrict these expressions to those composed of notation you've already encountered.
In fact, this could work to its advantage. It also seems plausible to determine which components are bottlenecks, and therefore which concepts are the most effective point of intervention for the person studying. If the user hasn't learned, say, hat-and-tilde notation for estimators, and introducing that notation would result in a greater order of available expressions than the next most bottleneck-y piece of notation, it could prompt the user with "hey, this is hat-and-tilde notation for estimators, and it's stopping you from reading a bunch of stuff". It could then direct them to some appropriate material on the subject.
An idea: auto-generated anki-style flashcards for mathematical notation.
Let's say you struggle reading set builder notation. This system would prompt you with progressively more complicated set builder expressions to parse, keeping track of what you find easy or difficult, and providing tooltips/highlighting for each individual term in the expression. If it were an anki card, the B-side would be how you'd read the expression out in natural language. This wouldn't be a substitute for learning how to use set builder notation, but it would give you a lot of practice in reading it.
There's an easy version of this you could cobble together in an afternoon which has a bunch of randomly-populated templates it renders with MathJax or something. There's a more sophisticated extended project which uses generative grammars, gamified progress visibility and spaced-repetition algorithms.
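To give a flavour of the "easy version", here's a sketch of a template-based generator. Everything here is hypothetical (the component names, the `generateCard` function, the template sets): it just shows how each randomly-populated expression could come tagged with the notational components it exercises, so the system tracks per-component difficulty rather than per-card difficulty.

```javascript
// Template pools for generating set-builder expressions as LaTeX
// strings, intended for rendering with something like MathJax.
const variables = ["x", "y", "n"];
const sets = ["\\mathbb{N}", "\\mathbb{Z}", "\\mathbb{R}"];
const predicates = ["VAR > 0", "VAR^2 < 10", "2 \\mid VAR"];

const pick = (xs) => xs[Math.floor(Math.random() * xs.length)];

// Each card records which notational components it uses, so success or
// failure at parsing it updates those components, not the card itself.
function generateCard() {
  const v = pick(variables);
  const set = pick(sets);
  const pred = pick(predicates).replace(/VAR/g, v);
  return {
    latex: `\\{ ${v} \\in ${set} : ${pred} \\}`,
    components: [
      "set-builder",
      "element-of",
      pred.includes("\\mid") ? "divides" : "inequality",
    ],
  };
}

console.log(generateCard().latex);
```

The more sophisticated version would replace the flat templates with a generative grammar, and restrict the component pools to notation the user has already encountered.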
I've been thinking about putting something like this together, but realistically I don't have the time or the complete skill-set to do it justice, and it would never get finished. Having read this thread about having difficulty in reading mathematical notation, I'm convinced a lot of other people might benefit from it.
ETA: it was probably misguided of me to liken this to Anki decks. I'm not talking about generating a bunch of static flashcards to be used with an existing system like Anki, but something separate that generates dynamic examples of what you're trying to learn, against which you'd record your success at parsing each example in a way similar to Anki. There are, of course, all sorts of problems with memorising specific examples of mathematical notation with an Anki deck, which respondents have prudently picked up on.
Not that I particularly care about this, but my original point was that concern over AI is topical right now, and the film in question seemed to make small but deliberate effort to tap into that topicality, beyond simply having an AI as the villain. I wasn't claiming that Avengers: Age of Ultron had invented an amazing new fictional concept of antagonistic intelligent machines.
I get that you can do this in principle, but in the specific case of the Allais Paradox (and going off the Wikipedia setup and terminology), if someone prefers options 1B and 2A, what specific sequence of trades do you offer them? It seems like you'd give them 1A, then go 1A -> 1B -> (some transformation of 1B formally equivalent to 2B) -> 2A -> (some transformation of 2A formally equivalent to 1A') -> 1B' ->... in perpetuity, but what are the "(some transformation of [X] formally equivalent to [Y])" in this case?
If someone reports inconsistent preferences in the Allais paradox, they're violating the axiom of independence and are vulnerable to a Dutch Book. How would you actually do that? What combination of bets should they accept that would yield a guaranteed loss for them?
There is repeated and explicit dialogue reference in the film to the scary and unknown nature of AI. It is put forward as something novel that shouldn't be meddled with. This is not necessary given the setting, which could easily support sentient robots without having to draw attention to the fact that they're a case of artificial intelligence, and artificial intelligence is scary and new. Hence bandwagon jumpage.