Open thread, September 16-22, 2013
post by Metus · 2013-09-16T05:18:34.009Z · LW · GW · Legacy · 141 comments
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Comments sorted by top scores.
comment by Nisan · 2013-09-16T22:04:36.049Z · LW(p) · GW(p)
What's a fun job for someone with strong technical skills?
I just graduated with a PhD in pure math (algebraic topology); I've done 50 Project Euler problems; and I know Java and Python, although I've never coded anything that anyone else uses. I'm looking for work and making a list of nonacademic job titles that involve solving interesting problems, and would appreciate suggestions. So far I'm looking at:
- Data scientist / analytics
- Software engineer
↑ comment by Metus · 2013-09-17T16:19:47.464Z · LW(p) · GW(p)
Actuary. I have been told this is very close to analytics.
Replies from: Adele_L
comment by [deleted] · 2013-09-16T13:28:11.961Z · LW(p) · GW(p)
http://sub.garrytan.com/its-not-the-morphine-its-the-size-of-the-cage-rat-park-experiment-upturns-conventional-wisdom-about-addiction is an article about a change in perspective about how rats act when given access to a morphine drip.
Basic concept: When given a larger cage with more space and potential things and other rats to interact with, rats are much less likely to only use a morphine drip, as compared to when they are given a small standard lab cage.
Edit per NancyLebovitz: This is evidence that offers a different perspective on the experiments I had heard about, and it seemed worth sharing. It is not novel, though: apparently the research was done in the late '70s and published in 1980. See the Wikipedia article: http://en.wikipedia.org/wiki/Rat_Park
Replies from: NancyLebovitz, None
↑ comment by NancyLebovitz · 2013-09-16T13:57:37.765Z · LW(p) · GW(p)
I agree that the information is important, but the "rat park" research was done in the '70s. It's not novel, and I suggest it's something people didn't want to hear.
I wonder why addiction is common among celebrities-- they aren't living in a deprived environment.
Replies from: printing-spoon, erratio, None
↑ comment by printing-spoon · 2013-09-17T01:43:55.291Z · LW(p) · GW(p)
I wonder why addiction is common among celebrities
Are you sure this is true?
Replies from: Desrtopa
↑ comment by Desrtopa · 2013-09-17T02:34:21.015Z · LW(p) · GW(p)
I'm guessing you had this in mind already, but to clarify anyway, there's a pretty major availability bias since anything celebrities are involved in is much more likely to be reported on, leading to a proliferation of news stories about celebrities with addiction problems.
On the other hand, though, celebrities are a lot more likely than most people to simply be given drugs for free, since drug dealers can make extra money if their customers are enticed by the prospect of being able to do drugs with celebrities. And of course that's aside from the fact that the drug dealers themselves can be enticed by the star power and want to work their way into celebrities' circles.
Replies from: NancyLebovitz
↑ comment by NancyLebovitz · 2013-09-17T11:39:49.680Z · LW(p) · GW(p)
No real statistics, just claims.
This article uses the model that a fair number of people are just vulnerable to addiction (about 1 in 12), and celebrity doesn't affect the risk except that celebrities have more access to drugs.
Replies from: NancyLebovitz
↑ comment by NancyLebovitz · 2013-09-17T13:05:22.973Z · LW(p) · GW(p)
Second thought: What's implied is that either humans are less resistant to addiction than rats, or there's something about civilization in general which makes people less resistant to addiction.
Replies from: Viliam_Bur
↑ comment by Viliam_Bur · 2013-09-17T13:30:12.715Z · LW(p) · GW(p)
More addictive things are produced for humans, and it is easier for humans to get them.
The human mind can create the "small cage" effect even without physical constraints. Sometimes people feel 'trapped' metaphorically.
↑ comment by [deleted] · 2013-09-16T15:30:01.373Z · LW(p) · GW(p)
Oops. Upon review, I fell victim to a classic blunder. "Someone shared something on Facebook that I have not heard of before? It must be novel. I should share it with other people because I was unaware of it and it caused me to update my worldview."
Thanks. I'll edit the original post to reflect this.
↑ comment by [deleted] · 2013-09-22T12:35:50.768Z · LW(p) · GW(p)
You call this a "different perspective", but the perspective you're linking to is the only one I'd heard before. I thought Rat Park was the conventional wisdom. So I was initially confused about what the new, different perspective was.
Replies from: None
↑ comment by [deleted] · 2013-09-23T13:02:22.688Z · LW(p) · GW(p)
My previous information was basically just "Morphine=Addicted rats." Which was really, really out of date and simplistic.
Rat Park's idea that "the size/interactivity of the cage significantly changes addiction rates" makes sense, but I was unaware of it until recently.
So if Rat Park was the conventional wisdom, I was behind the conventional wisdom and was just catching up to it when I posted.
comment by Rob Bensinger (RobbBB) · 2013-09-16T09:11:49.686Z · LW(p) · GW(p)
Following up on a post I made last month, I've put up A Non-Technical Introduction to AI Risk, collecting the most engaging and accessible very short introductions to the dangers of intelligence explosion I've seen. I've written up a few new paragraphs to better situate the links, and removed meta information that might make it unsuitable for distribution outside LW. Suggestions for further improvements are welcome!
Replies from: None
↑ comment by [deleted] · 2013-09-17T12:18:56.767Z · LW(p) · GW(p)
That is a good, readable summary of the main issues. A minor, purely aesthetic suggestion: the underlined red hyperlinks look like misspellings at first glance.
Replies from: RobbBB
↑ comment by Rob Bensinger (RobbBB) · 2013-09-17T12:31:02.630Z · LW(p) · GW(p)
Thanks! Unfortunately, I'm not sure how to get rid of those without upgrading my Wordpress or switching themes. The links are an ugly orange by default, and changing them to blue apparently leaves the underline orange.
Replies from: None
↑ comment by [deleted] · 2013-09-22T02:51:24.630Z · LW(p) · GW(p)
For what it's worth, the outer elements have a CSS "color" attribute of (255, 114, 0) (orange), while the inner elements have a CSS color of (0, 0, 128) (blue). The former color attribute is set in a CSS file; the latter color attribute is set in the HTML itself.
comment by TRManderson · 2013-09-16T06:55:31.872Z · LW(p) · GW(p)
Does the average LW user actually maintain a list of probabilities for their beliefs? Or is Bayesian probabilistic reasoning just some gold standard that no-one here actually does? If the former, what kinds of stuff do you have on your list?
Replies from: Jayson_Virissimo, ArisKatsaris, mare-of-night, linkhyrule5, Armok_GoB
↑ comment by Jayson_Virissimo · 2013-09-16T07:59:56.802Z · LW(p) · GW(p)
Does the average LW user actually maintain a list of probabilities for their beliefs?
Or is Bayesian probabilistic reasoning just some gold standard that no-one here actually does?
It isn't really possible, since in many cases it isn't even computable, let alone feasible for currently existing human brains. Approximations are the best we can do, but I still consider it the best available epistemological framework, for reasons similar to those given by Jaynes.
If the former, what kinds of stuff do you have on your list?
↑ comment by ArisKatsaris · 2013-09-16T07:18:45.517Z · LW(p) · GW(p)
Does the average LW user actually maintain a list of probabilities for their beliefs? Or is Bayesian probabilistic reasoning just some gold standard that no-one here actually does?
People's brains can barely manage to multiply three-digit numbers together, so no human can do "Bayesian probabilistic reasoning". So for humans it's at best "the latter, while using various practical tips to approximate the benefits of the former" (e.g. being willing to express your certainty in a belief numerically when such a number is asked of you in a discussion).
↑ comment by mare-of-night · 2013-09-17T12:08:29.994Z · LW(p) · GW(p)
What ArisKatsaris said is accurate - given our hardware, it wouldn't actually be a good thing to keep track of explicit probabilities for everything.
I try to put numbers on things if I have to make an important decision, and I have enough time to sit down and sketch it out. The last time I did that, I combined it with drawing graphs, and found I was actually using the drawings more - now I wonder if that's a more intuitive way to handle it. (The way I visualize probabilities is splitting a bar up into segments, with the length of the segments in proportion to the length of the whole bar indicating the probability.)
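A minimal Python sketch of that bar idea (the function name and the text rendering are invented here, just to illustrate the proportional-segment scheme):

```python
# One segment per outcome; segment length is proportional to that
# outcome's share of the total probability.
def probability_bar(outcomes, width=40):
    total = sum(outcomes.values())
    bar = ""
    for name, p in outcomes.items():
        segment = round(width * p / total)  # length proportional to p
        bar += "[" + name[0] * segment + "]"
    return bar

print(probability_bar({"rain": 0.3, "dry": 0.7}))
# prints a bar with 12 'r' characters and 28 'd' characters
```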
One of my friends does keep explicit probabilities on unknowns that have a big effect on his life. I'm not sure what all he uses them for. Sometimes it gets... interesting, when I know his value for an unknown that will also affect one of my decisions, and I know he has access to more information than I do, but I'm not sure whether I trust his calibration. I'm still not really sure what the correct way to handle this is.
↑ comment by linkhyrule5 · 2013-09-29T23:59:38.870Z · LW(p) · GW(p)
It's a gold standard - true Bayesian reasoning is actually pretty much impossible in practice. But you can get a lot of mileage off of the simple approximation: "What's my current belief, how unlikely is this evidence, oh hey I should/shouldn't change my mind now."
Putting numbers on things forces you to be more objective about the evidence, and also lets you catch things like "Wait, this evidence is pretty good - it's got an odds ratio of a hundred to one - but my prior should be so low that I still shouldn't believe it."
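A minimal sketch of that posterior-vs-prior arithmetic in Python (the one-in-a-million prior is just an invented example):

```python
# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
def update_odds(prior_odds, likelihood_ratio):
    return prior_odds * likelihood_ratio

def odds_to_probability(odds):
    return odds / (1 + odds)

prior_odds = 1 / 1_000_000                # very low prior: 1:1,000,000
posterior = update_odds(prior_odds, 100)  # "pretty good" evidence, 100:1 odds ratio
print(odds_to_probability(posterior))     # ~0.0001: still shouldn't believe it
```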
↑ comment by Armok_GoB · 2013-09-29T22:42:37.898Z · LW(p) · GW(p)
With actual symbols and specific numbers? No. But I do visualize approximate graphs over probability distributions over configuration spaces and stuff like that, and I tend to use the related but simpler theorems in Fermi calculations.
comment by maia · 2013-09-16T17:07:47.057Z · LW(p) · GW(p)
So I found this research a while ago saying, essentially, that willpower is only limited if you believe it is - subjects who believed their willpower was abundant were able to power through tasks without an extra glucose boost.
I was excited because this seemed different from the views I saw on LessWrong, and I thought based on what I'd seen people posting and commenting that this might warrant a big update for some people here. Without searching the site, I posted about it, and then was embarrassed to find out that it had been posted here a couple of years earlier...
What puzzles me, though, is that people here still seem to talk about ego depletion as if it's the only model of "willpower" there is. Is it that not everyone has seen that study, or is it that people don't take it seriously compared to the other research? I'm curious.
Replies from: None, Armok_GoB
↑ comment by [deleted] · 2013-09-16T19:59:43.359Z · LW(p) · GW(p)
There's been a replication of that (I'm assuming you're talking about the 2010 paper by Job, Dweck and Walton). I haven't looked at it in detail. The abstract says that the original result was replicated but you can still observe ego-depletion in people who believe in unlimited willpower, you just have to give them a more exhausting task.
Replies from: Mestroyer, Ritalin, maia
↑ comment by Ritalin · 2013-09-16T20:04:11.298Z · LW(p) · GW(p)
So the false belief somehow affects reality, but not enough to make itself actually true?
Replies from: shminux
↑ comment by Shmi (shminux) · 2013-09-16T20:33:38.518Z · LW(p) · GW(p)
What's the difference between "reality" and "actually true"?
Replies from: gwern, RobbBB
↑ comment by Rob Bensinger (RobbBB) · 2013-09-17T21:11:43.167Z · LW(p) · GW(p)
"X is true" means "X is a map, and X corresponds to some territory Y". "X is real" means "X is territory."
The relevant contrast, though, is between 'affects' and 'makes itself'. We could rephrase Ritalin: 'The inaccurate map changes the territory (in a way that results in its improved accuracy), but not enough to make itself (fully) accurate.'
comment by RomeoStevens · 2013-09-16T05:55:08.024Z · LW(p) · GW(p)
I recently made a big update in my model of how much influence one can have on one's longevity. I had thought that genetics accounted for the vast majority of variance, but it turns out the real number is something like 20-30%. This necessitates more effort thinking about optimizing lifestyle factors. Does anyone know of a good attempt at a quantified analysis of how lifestyle factors affect lifespan? Most of the resources I find make vague qualitative claims; as such, it's hard to compare between different classes of risk.
Replies from: NancyLebovitz, twanvl
↑ comment by NancyLebovitz · 2013-09-16T16:46:23.435Z · LW(p) · GW(p)
My impression is that unusually high longevity is strongly influenced by genes, but that still might leave open the possibility that lifestyle makes a big difference in the midrange.
↑ comment by twanvl · 2013-09-16T16:07:41.159Z · LW(p) · GW(p)
but it turns out the real number is something like 20-30%.
Citation needed
Replies from: gwern, RomeoStevens
↑ comment by gwern · 2013-09-16T16:28:52.171Z · LW(p) · GW(p)
Punch genetics heritability longevity
into Google Scholar; first hit says:
The heritability of longevity was estimated to be 0.26 for males and 0.23 for females.
Replies from: jsteinhardt
↑ comment by jsteinhardt · 2013-09-18T05:39:38.728Z · LW(p) · GW(p)
Does this imply that the other 75% is due to life choices? This isn't obvious to me.
Replies from: wedrifid
↑ comment by wedrifid · 2013-09-18T05:46:05.212Z · LW(p) · GW(p)
Does this imply that the other 75% is due to life choices? This isn't obvious to me.
No, that is not what heritability means. The other 75% is the myriad of other influences: environment, chaotic chance, and life choices.
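For concreteness, heritability is a ratio of variances; a sketch under the simple additive model, with 0.25 standing in for the figures quoted above:

```latex
\operatorname{Var}(P) = \operatorname{Var}(G) + \operatorname{Var}(E), \qquad
h^2 = \frac{\operatorname{Var}(G)}{\operatorname{Var}(P)} \approx 0.25
```

The Var(E) term lumps together everything non-genetic, so the number itself says nothing about how much of that remainder is under an individual's control.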
comment by Metus · 2013-09-16T05:25:11.596Z · LW(p) · GW(p)
Is there much value in doing psychological tests at regular intervals, to catch any mental problem in its early stages even if one is not acutely aware of a problem?
comment by Metus · 2013-09-16T05:21:42.665Z · LW(p) · GW(p)
Intellectual hygiene.
I am slowly coming to terms with the limits of my knowledge. Tertium non datur is something I should not apply outside of formal systems; I should always think "or I could be wrong in a way I do not realize yet." In all my beliefs I should explicitly plant the seed of their destruction: if this event occurs, I should stop believing this, or at least seriously doubt it.
Replies from: shminux
↑ comment by Shmi (shminux) · 2013-09-16T06:33:08.134Z · LW(p) · GW(p)
Examples?
Replies from: Metus
↑ comment by Metus · 2013-09-16T06:37:21.732Z · LW(p) · GW(p)
For which of the two? An example of the first is to think He will either buy the car, or leave, or I take a course of action I have not yet foreseen, where the action could be something malevolent, or something could happen that renders my plans irrelevant. An example of the second is to think I believe people are motivated by money. If I see a sizeable group of people living in voluntary poverty I should stop believing this.
Replies from: TRManderson
↑ comment by TRManderson · 2013-09-16T07:04:52.889Z · LW(p) · GW(p)
That's not quite the law of the excluded middle. In your first example, leaving isn't the negation of buying the car but is just another possibility. Tertium non datur would be He will either buy the car or he will not buy the car. It applies outside formal systems, but the possibilities outside a formal system are rarely negations of one another. If I'm wrong, can someone tell me?
Still, planting the "seed of destruction" definitely seems like a good idea, although I'd advise caution in specifying only one event where that would happen. This idea is basically ensuring beliefs are falsifiable.
comment by hesperidia · 2013-09-17T19:55:34.634Z · LW(p) · GW(p)
A few years ago, in my introductory psych class in college, the instructor was running through possible explanations for consciousness. He got to Roger Penrose's theory that quantum computations in the microtubules are where consciousness comes from (replacing one black box with another, oh joy). I burst out laughing, loudly, because it was just so absurd that someone would seriously propose that, and that other scientists would even give such an explanation the time of day.
The instructor stopped midsentence, and looked at me. So did 200-odd other students.
I kept laughing.
In hindsight, I think the instructor expected more solemnity.
Replies from: Mitchell_Porter, knb
↑ comment by Mitchell_Porter · 2013-09-18T20:50:56.966Z · LW(p) · GW(p)
Would you care to explain why it's absurd? :-)
Replies from: None
↑ comment by [deleted] · 2013-09-19T03:52:29.141Z · LW(p) · GW(p)
Because nothing in neural activity or structure even suggests that macroscopic quantum states have anything to do with it. You don't need to invoke anything more exotic than normal cellular protein and electrochemistry to get very interesting behavior.
Penrose is grasping at straws, trying to make his area of study applicable to something he considers capital-M Mysterious, with (apparently, to those who actually work with it) little understanding of the actual biology. It's a non sequitur, as if he were suggesting that resonant vibrations in the steel girders of skyscrapers in Manhattan were what let the people there trade stocks.
Replies from: shminux
↑ comment by Shmi (shminux) · 2013-09-19T15:15:07.260Z · LW(p) · GW(p)
You don't need to invoke anything more exotic than normal cellular protein and electrochemistry to get very interesting behavior.
True, but not consciousness. While I agree that Penrose's model is a wild unsubstantiated speculation, until we have a demonstration of algorithmic consciousness without any quantum effects, his approach deserves a thoughtful critique, not a hearty laugh.
Replies from: fubarobfusco
↑ comment by fubarobfusco · 2013-09-20T01:02:10.243Z · LW(p) · GW(p)
Thing is, it's no more clear how quantum fluctuations give rise to subjective experience than how chemistry gives rise to subjective experience. So why claim that it's in the quantum instead of in the chemicals?
Replies from: somervta, TheOtherDave, shminux
↑ comment by somervta · 2013-09-22T06:28:38.824Z · LW(p) · GW(p)
Because he thinks that humans are capable of some form of hypercomputation (he bases this mainly on some Gödelian arguments), and that quantum gravitational effects are what allow it.
Replies from: Douglas_Knight, None
↑ comment by Douglas_Knight · 2013-09-23T04:25:42.433Z · LW(p) · GW(p)
Quantum gravity doesn't help with hypercomputation, which doesn't help with Goedel, which doesn't help with consciousness. The most plausible part is that quantum gravity allows hypercomputation, but no one but Penrose believes that.
↑ comment by [deleted] · 2013-09-24T08:20:06.295Z · LW(p) · GW(p)
I still don't understand the assertion that humans actually think with logic that is vulnerable to Gödelian stuff. Why should we blow up at the Gödel incompleteness theorem at all?
Replies from: somervta
↑ comment by somervta · 2013-09-24T11:54:01.509Z · LW(p) · GW(p)
If we are a TM computation (which is the standard reductionist explanation), we are vulnerable to the halting problem (which he also argues we can solve), and if we are a formal system of some kind (also standard, although maybe not quite so commonly said), Gödel etc. applies.
(I was using Gödelian in the broader sense, which includes Halting-esque problems.)
Replies from: None
↑ comment by [deleted] · 2013-09-25T20:09:34.370Z · LW(p) · GW(p)
I would argue strenuously against the idea that we resemble a formal system at all. Our cells act like a network of noisy differential equations that, with enough training, can shape some of its outputs to resemble those of mathematically defined systems - AKA, what you do once you have learned math.
We also aren't Turing machines. Not in the sense that we aren't Turing complete or capable of running the steps that a Turing machine would do, but in the sense that we, again, are an electrochemical system that does a lot of things natively without resorting to much in the way of Turing-style computation. A network grows that becomes able to do some task.
We are not stuck in the formal system or the computation; we approximate it via learned behavior, and when we hit a wall in the formal system or the computation we stop and say 'well, that doesn't work'. That doesn't mean we transcend the issues; it means that we go do something else.
↑ comment by TheOtherDave · 2013-09-23T13:15:12.886Z · LW(p) · GW(p)
Because we are more confused, collectively, about quantum fluctuations than we are about chemistry. And we're also confused about the causes of subjective experience. So "quantum explains consciousness" feels more compelling than "chemistry explains consciousness". See also: god of the gaps.
↑ comment by Shmi (shminux) · 2013-09-20T01:38:12.269Z · LW(p) · GW(p)
I agree, and I would bet a priori 10:1 that chemistry is enough, no quantum required, but until and unless it's experimentally confirmed/simulated, other ideas are worth considering.
Replies from: army1987
↑ comment by A1987dM (army1987) · 2013-09-21T08:37:35.761Z · LW(p) · GW(p)
That sounds like privileging the hypothesis to me.
↑ comment by knb · 2013-09-18T04:59:20.584Z · LW(p) · GW(p)
You should be embarrassed by this story. Behaving this way comes across as very smug and disrespectful because it is disruptive and wastes the time of hundreds of people.
Replies from: hesperidia, Viliam_Bur, wedrifid, ChristianKl
↑ comment by hesperidia · 2013-09-18T07:08:53.875Z · LW(p) · GW(p)
I'm honestly not embarrassed by this story because it's "smug and disrespectful", I'm embarrassed because the more I stare at it the more it looks like a LWy applause light (which I had not originally intended).
Replies from: knb, Fronken, None
↑ comment by Viliam_Bur · 2013-09-20T07:10:26.787Z · LW(p) · GW(p)
Behaving like this in a classroom is probably not a good way to communicate knowledge to one's classmates or to the instructor. (Although sometimes the first signal of disrespect communicates an important fact.)
But if the instructor presented the quantum mysteriousness hypothesis as one worth considering (as opposed to: "you know, here is a silly idea some people happen to believe"), then the instructor was wasting the time of hundreds of people. (What's next? Horoscopes as a serious hypothesis explaining human traits?)
↑ comment by wedrifid · 2013-09-18T05:39:07.587Z · LW(p) · GW(p)
You should be embarrassed by this story.
He 'should' feel embarrassment if it interfered with his social goals in the context. All things considered it most likely did not (assuming he did not immediately signal humiliation and submission, which it appears he didn't). He 'should' laugh at your attempt to shame him and treat the parent as he would any other social attack by a (socially distant and non-threatening) rival.
Behaving this way comes across as very smug and disrespectful because it is disruptive and wastes the time of hundreds of people.
Your causal explanation is incorrect: it is a justification, not a cause. Signalling implications other than disruption and time-wasting account for the smug and disrespectful perception.
Replies from: knb, 9eB1
↑ comment by knb · 2013-09-18T09:51:26.247Z · LW(p) · GW(p)
He 'should' feel embarrassment if it interfered with his social goals in the context.
Right, assuming he doesn't care about the fact that hundreds of his peers now think he's the kind of person who bursts into loud, inappropriate laughter apropos of nothing. (i.e. assuming he isn't human.)
Replies from: wedrifid, somervta
↑ comment by wedrifid · 2013-09-18T11:41:15.545Z · LW(p) · GW(p)
Right, assuming he doesn't care about the fact that hundreds of his peers now think he's the kind of person who bursts into loud, inappropriate laughter apropos of nothing. (i.e. assuming he isn't human.)
My model of the expected consequences of the signal given differs from yours. That kind of attention probably does more good than harm, again assuming that the description of the scene is not too dishonest. It'd certainly raise his expected chance of getting laid (which serves as something of a decent measure of relevant social consequences in that environment.)
Incidentally, completely absurd nonsense does not qualify as 'nothing' for the purpose of evaluating humor potential. Nerds tend to love that. Any 'inappropriateness' is a matter of social affiliation. That is, those who consider it inappropriate do so because they believe that the person laughing does not have enough social status to be permitted to show disrespect to someone to whom the authority figure assigns high status, regardless of the merit of the positions described.
Replies from: army1987, knb
↑ comment by A1987dM (army1987) · 2013-09-20T23:51:32.189Z · LW(p) · GW(p)
getting laid ... serves as something of a decent measure of relevant social consequences in that environment
In the very short term maybe, but in the longer term not pissing professors off is also useful.
completely absurd nonsense
I don't think Penrose's hypothesis is so obviously-to-everybody absurd (for any value of “everybody” that includes freshmen) that you can just laugh it off expecting no inferential distances. (You made a similar point about something else here.)
Replies from: wedrifid
↑ comment by wedrifid · 2013-09-21T01:37:40.090Z · LW(p) · GW(p)
In the very short term maybe, but in the longer term not pissing professors off is also useful.
Sometimes. I was assuming that in a first-year philosophy subject the class sizes are huge, largely anonymous, not often directly graded by the lecturer, and a mix of students from a large number of different majors. This may differ between countries or even between universities.
As a rule of thumb, I found that a social relationship with the professor was relevant in later-year subjects with smaller class sizes, more specialised subject matter, and a greater chance of repeat exposure to the same professor. For example, I got research assistant work and a scholarship for my postgrad studies by impressing my AI lecturer. Such considerations were largely irrelevant for first-year generic subjects, where I could safely consider myself to be a Student No. with legs.
I don't think Penrose's hypothesis is so obviously-to-everybody absurd (for any value of “everybody” that includes freshmen) that you can just laugh it off expecting no inferential distances.
You are right that the inferential distance will make most students not get the humour or understand the implied reasoning. I expect that even then the behaviour described (laughing with genuine amusement at something and showing no shame when given attention) would be a net positive. Even a large subset of the peers who find it obnoxious or annoying will also intuitively consider the individual to be somewhat higher status (or 'more powerful' or 'more significant', take your pick of terminology) even if they don't necessarily approve of them.
Replies from: army1987
↑ comment by A1987dM (army1987) · 2013-09-21T08:20:48.254Z · LW(p) · GW(p)
I was assuming that in a first-year philosophy subject the class sizes are huge, largely anonymous, not often directly graded by the lecturer, and a mix of students from a large number of different majors.
[re-reads thread, and notices the OP mentioned there were more than 200 students in the classroom] Good point.
Even a large subset of the peers who find it obnoxious or annoying will also intuitively consider the individual to be somewhat higher status (or 'more powerful' or 'more significant', take your pick of terminology) even if they don't necessarily approve of them.
That kind of status is structural power, not social power in Yvain's terminology, and I guess there are more people in the world who wish to sleep with Rebecca Black than with Donald Trump. [googles for Rebecca Black (barely knew she was a singer) and realizes she's not the best example for the point; but still] And probably there's also a large chunk of people who would just think the student is a dork with little ability to abide by social customs. But yeah, I guess the total chance for them to get laid would go up -- high-variance strategies and all that.
↑ comment by knb · 2013-09-21T01:22:32.265Z · LW(p) · GW(p)
That kind of attention probably does more good than harm, again assuming that the description of the scene is not too dishonest. It'd certainly raise his expected chance of getting laid
This is PUA nonsense.
Replies from: None, wedrifid
↑ comment by [deleted] · 2013-09-21T02:50:33.068Z · LW(p) · GW(p)
This is PUA
So?
nonsense
Why do you think it did not raise his chance of getting laid?
Replies from: knb, wedrifid
↑ comment by knb · 2013-09-21T04:34:54.543Z · LW(p) · GW(p)
I've done some classroom teaching, and I've seen how other students react to students who behave similarly (eye rolling, snickering, etc.). I've also seen this from the student side: people like to heap scorn on students who act like this (when they aren't around).
To be clear, I'm not saying everything PUAs say is nonsense. They've said so much that by sheer random chance some of it is probably good. But most PUA stuff is terrible armchair theorizing by internet people who seem very angry at women.
ETA: It's interesting how much of a perspective change classroom teaching gives you. In a typical classroom, students can't easily see the faces of most of their peers, and their peers reveal a lot because of this.
Replies from: army1987, wedrifid
↑ comment by A1987dM (army1987) · 2013-09-21T08:27:27.131Z · LW(p) · GW(p)
I've done some classroom teaching, and I've seen how other students react to students who behave similarly (eye rolling, snickering, etc.). I've also seen this from the student side: people like to heap scorn on students who act like this (when they aren't around).
It depends on, among other things, how much the students like the lecturer and what kind of subject is being taught (I gather that honesty is valued more, and politeness less, in the hard sciences than in humanities).
To be clear, I'm not saying everything PUAs say is nonsense. They've said so much that by sheer random chance some of it is probably good. But most PUA stuff is terrible armchair theorizing by internet people who seem very angry at women.
PUA isn't the only thing that Sturgeon's Law applies to, though.
↑ comment by wedrifid · 2013-09-21T15:26:08.649Z · LW(p) · GW(p)
I've done some classroom teaching, and I've seen how other students react to students who behave similarly (eye rolling, snickering, etc.). I've also seen this from the student side: people like to heap scorn on students who act like this (when they aren't around).
My experience classroom teaching suggests two things:
- Hesperidia's cocky laughter is not the sort of thing that makes students heap scorn on other students, except perhaps the most sycophantic teacher's pets, or sometimes among cliques of less secure rivals who want to reassure each other.
- The behaviours knb is equating are not the same thing. They have different social meanings and different expected results. While for knb the most salient factor may be that each of those behaviours signals a lack of respect for authority, not all things that potentially lower the status of the teacher are equal or equivalent. Amused laughter that is not stifled by attention is not the same thing as eye-rolling.
↑ comment by wedrifid · 2013-09-21T03:39:45.657Z · LW(p) · GW(p)
I agree with your implicature and wonder whether we have correctly resolved the ambiguity in 'nonsense'. It could mean either "It is not the case that this would raise his chance of getting laid" or "It is not the case that chance of getting laid is sufficiently correlated with social status as to be at all relevant as a measure thereof". I honestly don't know which is the more charitable reading, because I consider them approximately equally wrong.
As an aside, my motive for throwing in 'chance of getting laid' was that 'status' is often considered too ephemeral or abstract, and I wanted to put things in terms that are clearly falsifiable. It also helps distinguish between different kinds of status and the different overlapping social hierarchies. The action in question is (obviously?) more usefully targeted at the "peer group" hierarchy than the "academia prestige" hierarchy. If you intend to become a grad student in that university's philosophy department, silence is preferred to cocky laughter. If you intend to just complete the subject and continue study in some other area while achieving social goals with peers (including getting high-quality partners for future group work), then the cocky laughter will be more useful than silence.
Replies from: army1987
↑ comment by A1987dM (army1987) · 2013-09-21T08:47:55.586Z · LW(p) · GW(p)
"It is not the case that chance of getting laid is sufficiently correlated with social status as to be at all relevant as a measure thereof"
Is social status the only thing you care about when in a classroom?
And “sufficiently correlated” isn't good enough, per Goodhart's law. You can improve your chances of getting laid even more by getting drunk in a night club in a major city, and you can bring them close to 1 by paying a prostitute.
Replies from: wedrifid
↑ comment by wedrifid · 2013-09-21T11:25:05.874Z · LW(p) · GW(p)
Is social status the only thing you care about when in a classroom?
It's a minor concern, often ranking below getting rest, my immediate sense of boredom, or the audiobook I'm listening to. I'm certainly neither a model student (with respect to things like lecture attendance and engagement, as opposed to grades) nor a particularly dedicated status optimiser.
I think you must have interpreted my words differently than I intended them. I would not expect that reply if the meaning had come across clearly but I am not quite sure where the confusion is.
And “sufficiently correlated” isn't good enough, per Goodhart's law. You can improve your chances of getting laid even more by getting drunk in a night club in a major city, and you can bring them close to 1 by paying a prostitute.
I think there must be some miscommunication here. There is a difference between considering a metric to be somewhat useful as a means of evaluating something and outright replacing one's preferences with a lost purpose. I had thought we were talking about the first of these. The quote you made includes 'at all relevant' (a low standard) and in the context was merely a rejection of the claim 'nonsense'.
Replies from: army1987
↑ comment by A1987dM (army1987) · 2013-09-21T11:45:10.984Z · LW(p) · GW(p)
I think you must have interpreted my words differently than I intended them. I would not expect that reply if the meaning had come across clearly but I am not quite sure where the confusion is.
So, you said:
He 'should' feel embarrassment if it interfered with his social goals in the context. All things considered it most likely did not (assuming he did not immediately signal humiliation and submission, which it appears he didn't).
ISTM this doesn't follow unless you assume he had no goals other than social ones that his burst of laughter could have interfered with; am I missing something?
I think there must be some miscommunication here. There is a difference between considering a metric to be somewhat useful as a means of evaluating something and outright replacing one's preferences with a lost purpose. I had thought we were talking about the first of these. The quote you made includes 'at all relevant' (a low standard) and in the context was merely a rejection of the claim 'nonsense'.
OK, I see it now.
Replies from: wedrifid
↑ comment by wedrifid · 2013-09-21T12:30:53.925Z · LW(p) · GW(p)
ISTM this doesn't follow unless you assume he had no goals other than social ones that his burst of laughter could have interfered with; am I missing something?
Ahh, pardon me. I was replying at that time to the statement "You should be embarrassed by this story.", where embarrassment is something I would describe as an emotional response to realising that you made a social blunder. It occurs to me now that I could have better conveyed my intended meaning if I included the other words inside my quotation marks like:
He "should feel embarrassment" if the if interfered with his social goals in the context.
Thank you for explaining. I was quite confused about what wasn't working in that communication.
↑ comment by wedrifid · 2013-09-21T03:00:00.086Z · LW(p) · GW(p)
This is PUA nonsense.
The 'nonsense' part of your claim is false. The 'PUA' title is (alas) not something I have earned (opportunity costs) but I do expect this is something that a PUA may also say if the subject came up.
By way of contrast I consider this to be naive moralizing mixed with bullshit. Explanation:
There is a claim about what hesperidia 'should' do. That means one of:
- Hesperidia's actions are not optimal for achieving his goals. You are presenting a different strategy which would achieve those goals better and he would be well served to adopt them.
- Hesperidia's actions are not optimal for achieving your goals. You would prefer it if he stopped optimising for his preferences and instead did what you prefer.
- As above but with one or more of the various extra layers of indirection around 'good for the tribe', 'in accordance with norms that exist' and 'the listener's preferences are also served by my should, they can consider me an ally'.
It happens that the first meaning would be false. When it comes to the latter meanings, the question is not 'Is this claim about strategy true?' but instead 'Does knb have the right to exert dominance and control over hesperidia on this particular issue with these terms?'. My answer to that is 'No'.
I prefer it when social advice of this kind is better optimised for the recipient, not the convenience of the advice giver. When the 'should' is not about advice at all but instead setting and enforcing norms then I insist that the injunction should, in fact, benefit the tribe. In this case the tribe isn't the beneficiary. We would be better off if the nonsense the professor was citing could be laughed at rather than treated with deference. The tribe isn't the beneficiary, the existing power structure is. I oppose your intervention.
(Nothing personal, I am replying mostly because I am curious about the theory, not because I think the issue is dramatically important.)
↑ comment by somervta · 2013-09-18T10:48:22.140Z · LW(p) · GW(p)
Right, assuming he doesn't care about the fact that hundreds of his peers now think he's the kind of person who bursts into loud, inappropriate laughter apropos of nothing. (i.e. assuming he isn't human.)
Ignoring that that is not what happened (and that he probably explained the laughter to anyone there that he actually cared about, like friends), you are entirely too eager to designate someone who lacks this property as 'not human'.
↑ comment by 9eB1 · 2013-09-22T00:47:42.430Z · LW(p) · GW(p)
This sort of utilitarian thinking, focused entirely on one's own goals without considering the goals of others, is what leads people to believe that they should cheat on all of their tests as much as they want. If tests in school are only for signalling and the knowledge is unimportant, then you should do as little work as possible to maximize your test scores, including buying essays, looking over shoulders, paying others to take tests for you, the whole works.
Edit: I am not saying I totally disagree with this sort of thinking. I would describe myself presently as on the fence over whether one should just go ahead and be a sociopath in favor of utilitarian goals. It makes me a little bit uncomfortable, but on the other hand it seems to be the logical result. Many people bring in other considerations to try to bring it back to moral "normalcy" but they generally strike me as ad hoc and not very convincing.
↑ comment by ChristianKl · 2013-09-20T14:11:32.968Z · LW(p) · GW(p)
At least it woke up everyone who was sleeping in the lecture.
comment by [deleted] · 2013-09-18T20:45:03.365Z · LW(p) · GW(p)
"Hey Scott," I said. The technician was a familiar face, since I used the booths twice each day.
"Hey David," he replied. "Chicago Six?"
"Yup."
I walked into the booth, a room of sorts resembling an extremely small elevator, and the doors shut behind me. There was a flash of light, and I stepped out of the booth again--only to find that I was still at Scott's station in San Francisco.
"Shucks," said Scott. "The link went down, so the system sent you back here. So just wait a moment... oh shit. Chicago got their copy of you right before the link went down, so now there's one of you in Chicago, too."
"Well, uh... two heads are better than one, I guess?" I said.
"Yeah, here's what we do in this situation," said Scott, ignoring me. "We don't want two copies of you running around, so generally we just destroy the unwanted copy."
"Yeah... I guess that sounds like the way to go," I said.
"So yeah, just get back in the booth and we'll destroy this copy of you."
I stepped back into the booth again, and the doors closed. There was a fla--
Meanwhile, I was still walking to my office in Chicago, unaware that anything unusual had happened.
Replies from: drethelin, Oscar_Cunningham, shminux
↑ comment by drethelin · 2013-09-19T18:17:24.501Z · LW(p) · GW(p)
There are a lot of versions of this but very few stories that take advantage of the ability to cheaply copy someone dozens of times.
Replies from: 9eB1, None
↑ comment by 9eB1 · 2013-09-22T00:31:41.840Z · LW(p) · GW(p)
I recently read the source book for the Eclipse Phase pen and paper RPG, and in the flavor text it has the following description, describing the criminal faction "Pax Familiae":
PAX FAMILAE
Major Stations: Ambelina (Venus)
Though similar to the Night Cartel in that Pax Familae holds legal offices and outposts in several habitats while working underground in others, the difference between the two syndicates couldn’t be bigger. The entire Pax Familae organization goes back to one person, Claudia Ambelina, the syndicate’s founder and matriarch. Relying excessively on cloning and forking technologies, each individual member of the syndicate is a descendant or variant of Claudia. Biomorphs [any body that a mind can be put in] are cloned from Claudia’s original genetics or sexually produced offspring (thanks to sex-switching biomods), while egos [generally speaking, minds] are forks. All members are utterly loyal to Claudia (since they all are Claudia) and show their family affiliation with pride and arrogance. Individually, each remains slightly but notably different, though all are calculating and ambitious. Regular reassimilation of forks and XP updates are used to keep each variant aware of each of the other’s activities—once you’ve met one version of Claudia, the others will know you.
Needless to say, Eclipse Phase seems pretty awesome.
Replies from: gwern
↑ comment by Oscar_Cunningham · 2013-09-19T09:18:48.907Z · LW(p) · GW(p)
My main worry would be that my copy hadn't actually got to Chicago. I'd want to make damn sure of that before I let the original be killed.
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2013-09-19T15:35:38.807Z · LW(p) · GW(p)
I suspect that if I were sufficiently uninvested in my continuing existence to be willing to terminate it on being assured that a similar-enough person lives in Chicago (which I can easily imagine being), I wouldn't require enormous confidence in that person's existence... a quick phone call would suffice.
↑ comment by Shmi (shminux) · 2013-09-19T15:03:09.956Z · LW(p) · GW(p)
Strikes me as a perfectly reasonable approach, except the check would be done quickly and automatically, not leaving room for human decisions.
Replies from: Armok_GoB
comment by RolfAndreassen · 2013-09-17T22:43:28.722Z · LW(p) · GW(p)
So... it turns out some people actually do believe that there are fundamentally mental quantities not reducible to physics, and that these quantities explain the behaviour of living things. I confess I'm a bit surprised. I had the impression that everyone these days agreed that physics actually does describe the motion of all the atoms, including those in living brains. But no, believers in the ghost in the machine walk among us, and claim that the motions of living things cannot be predicted even in principle using physics. Something to bear in mind when discussing simulations; obviously such a man will never be convinced that the upload is the person no matter how close the simulation, even unto individual atoms.
Replies from: knb
↑ comment by knb · 2013-09-18T05:07:42.177Z · LW(p) · GW(p)
I'm mystified that you thought everyone in the world is a materialist-reductionist. What on earth would make you believe that?
Replies from: RolfAndreassen
↑ comment by RolfAndreassen · 2013-09-18T17:58:43.300Z · LW(p) · GW(p)
The typical mind fallacy, obviously!
But no, what surprised me was that people would seriously assert that "physics does not apply", and then turn around and say "no law of physics is broken".
Replies from: shminux
↑ comment by Shmi (shminux) · 2013-09-19T15:08:50.106Z · LW(p) · GW(p)
What's so surprising about extrapolating "different laws in different jurisdictions" to "different laws in different magisteria"? Consider the mental model where physics is not "fundamental". Then it follows that "physics does not apply" (to a different magisterium) is logically distinct from "laws of physics are broken" (in the same magisterium).
comment by gwern · 2013-09-20T21:37:22.442Z · LW(p) · GW(p)
I thought this was interesting: perhaps the first use I've read of odds in a psychology paper. From Sprenger et al 2013:
8.1. A Bayesian analysis of WM training effectiveness
To our knowledge, our study is the first to include a Bayesian analysis of working memory training, which we view as particularly well suited for evaluating its effectiveness. For example, we suspect that at least some of the existing studies reporting positive transfer of WM training will fail the Bayesian “sniff test.” Indeed, even for studies that have faithfully observed statistically significant effects of training it is instructive to evaluate these findings in light of one's subjective prior probabilities. For illustrative purposes, suppose a pessimist adopts prior odds of 10:1 against the effectiveness of WM training, citing the plethora of historical evidence that cognitive abilities are stable. In contrast, suppose an optimist adopts a prior odds of 1:10 in favor of the effectiveness of WM training. How might these two individuals change their beliefs in light of the available evidence?
Chein and Morrison (2010, Table 2) report significant one-tailed t-tests on the gain scores for both Stroop (t(40) = 1.80) and reading comprehension (t(38) = 1.80). The corresponding BFs = 1.06 and BF = 1.067, respectively, using the JZS prior. These BFs are interpreted as providing equivalent support for the null and the alternative—that is, the BF indicates that the data are equally supportive of both the alternative and null hypotheses. The t-tests for fluid IQ (t(40) = 0.24) and reasoning (t(40) = 1.39) were both non-significant, and have corresponding BFs of 4.37 and 1.92 in favor of the null hypothesis. The average BF across all four tasks is 2.10 in favor of the null. Turning to the experiments reported above, across all measures of fluid abilities in Experiment 1, the average BF at post-test is 2.59 in favor of the null, and this includes operation span and symmetry span which arguably reflects stimulus specific training effects. Similarly, the average BF of the untrained assessment tasks in Experiment 2 across all three training groups is 4.18, again in favor of the null. Multiplying these BFs with the priors gives us the posterior odds ratios. For the pessimist, the posterior odds against the effectiveness of WM is over 227:1 (10 ∗ 2.10 ∗ 2.59 ∗ 4.18). This corresponds to a posterior probability p(null is true|data) = 227 / 228 = 0.996. But, even for the optimist, the posterior odds favors the null at a ratio of 2.27:1 (0.1 ∗ 2.10 ∗ 2.59 ∗ 4.18 = 2.27), with a posterior probability p(null is true|data) = 2.27 / 3.27 = 0.694. In other words, based on the result of Chein and Morrison (2010) and the experiments reported herein, even the optimist should express some skepticism in the hypothesis that WM-training is effective.3
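The arithmetic in that last paragraph is easy to check; a minimal Python sketch using the averaged Bayes factors quoted above:

```python
# Posterior odds for the null = prior odds * product of Bayes factors
# (here BFs greater than 1 favor the null hypothesis).
bayes_factors = [2.10, 2.59, 4.18]  # averaged BFs quoted from Sprenger et al. 2013

def posterior_for(prior_odds_null):
    odds = prior_odds_null
    for bf in bayes_factors:
        odds *= bf
    return odds, odds / (1 + odds)  # (posterior odds, p(null is true | data))

print(posterior_for(10.0))  # pessimist: ~(227.35, 0.996)
print(posterior_for(0.1))   # optimist:  ~(2.27, 0.69)
```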
comment by Slackson · 2013-09-18T01:27:03.563Z · LW(p) · GW(p)
Can blackmail kinds of information be compared to things like NashX or Mutually Assured Destruction usefully?
Most of my friends have information on me which I wouldn't want to get out, and vice versa. This means we can do favours for each other that pay off asynchronously, or trust each other with other things that seem less valuable than that information. Building a friendship seems to be based on gradually getting this information on each other, without either of us having significantly more on the other.
I don't think this is particularly original, but it seems a pretty elegant idea and might have some clues for blackmail resolution.
comment by [deleted] · 2013-09-18T02:04:22.753Z · LW(p) · GW(p)
If you want to do something, at least one of the following must be true:
1. The task is simple.
2. Someone else has taught you how to do it.
3. You have a lot of experience performing similar tasks.
4. As you're trying to perform the task, you receive lots of feedback about how you're doing.
5. You've performed an extremely thorough analysis of the task which accounts for all possibilities.
If a task is complicated (1 is false), then it consists of many sub-tasks, all of which are possible points of failure. In order to succeed at every sub-task, either you must be able to correct failures after they show up (4 is true), or you must be able to avoid all failures before encountering any of them. In order to avoid all failures before encountering any of them, you must already know how to perform the task, and the only ways to obtain this knowledge are through experience (3), through being taught (2), and through analysis (5).
Except I'm not sure there aren't other ways to obtain the relevant knowledge. If you want to build a house, one option is to try building lots of houses until finally you're experienced enough that you can build good houses. Another option is to have someone else who already knows how to build a house teach you. Another is to think carefully about how to build a house, coming up with an exhaustive list of every way you could possibly fail to build a house, and inventing a technique that you're sure will avoid all of those failure modes. Are there any other ways to learn to build a house, besides experience, being taught, and analysis? Pretty sure there aren't.
Replies from: None
↑ comment by [deleted] · 2013-09-18T03:58:21.259Z · LW(p) · GW(p)
I would change 2. to be something like: Someone else has taught you how to do it, or you have instructions on how to do it.
and include
- You have unlimited time and resources so you can 'brute force' it (try all random combinations until the task is complete)
↑ comment by jsteinhardt · 2013-09-18T05:44:59.938Z · LW(p) · GW(p)
You have unlimited time and resources so you can 'brute force' it (try all random combinations until the task is complete)
While technically true, I find this to be a confusing way to think... if it would take you 2^100000 operations to brute force, is this really any different from it being impossible?
Replies from: None
↑ comment by [deleted] · 2013-09-18T06:14:33.795Z · LW(p) · GW(p)
That would depend on the type of task: for computational tasks, a series of planners and solvers do many 'jobs' without knowing what they are doing, just minimising a function repeatedly until the right result appears.
Replies from: jsteinhardt
↑ comment by jsteinhardt · 2013-09-18T08:13:49.555Z · LW(p) · GW(p)
They typically aren't literally trying all combinations though (or if they are, the space of configurations is not too large). In this sense, then, the algorithm does know what it is doing, because it is narrowing down an exponentially large search space to a manageable size.
comment by luminosity · 2013-09-17T13:08:26.041Z · LW(p) · GW(p)
Is there much known about how to recall information you've memorised at the right time / in the right context? I can memorise pieces of knowledge just fine with Anki, and if someone asks me a question about that piece of information I can tell them the answer no problem. However, recalling in the right situation that a piece of information exists and using it -- that I'm finding much more of a challenge. I've been trying to find information on instilling information in such a way as to recall it in the right context for the last few days, but none of the avenues of inquiry I've searched have yielded anything on the level I'm wanting. Most articles I've found are talking about specific good habits, or memory, rather than their mechanisms and how to engage them.
Replies from: Viliam_Bur
↑ comment by Viliam_Bur · 2013-09-17T13:40:33.522Z · LW(p) · GW(p)
I would try imagining being in the given situation, and then doing the thing. Then hopefully in the real situation the information would jump into my mind.
To do it Anki-style, perhaps the question card could contain a specific instruction to imagine something. So the pattern is not just "read the question, say answer, verify answer", but "read the question, imagine the situation, say answer, imagine the answer, verify answer", or something like this.
Without imagining the situation, I believe the connection will not be made in real time. Unless...
Maybe there is another way. Install a generic habit of asking "what things am I supposed to remember in situation X?" for some specific values of X. Then you have two parts. The first part is to use imagination to teach yourself to ask this question in situation X. The second part is to prepare the lists for each situation, and memorize them using Anki. The advantage is that if you change a list later, you don't have to retrain the whole habit.
Note: I never tried any of this.
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2013-09-17T14:47:22.819Z · LW(p) · GW(p)
Only somewhat relatedly... something I found useful when recovering from brain damage was developing the habit of:
a) telling myself explicitly, out loud, what I was about to do, why I was about to do it, and what I needed to do next, and
b) when I found myself suddenly lost and confused, explicitly asking myself, out loud, what I was doing, why I was doing it, what I needed to do next.
I found that the explicit verbal scaffolding often helped me remember things that the more implicit mechanisms that were damaged by the injury (I had a lot of deficits to attention, short-term memory, that sort of thing) could no longer do.
It also got me a lot of strange looks, which I somewhat perversely came to appreciate.
comment by Thomas · 2013-09-16T12:35:49.797Z · LW(p) · GW(p)
I have sorted the 50 US states in such a way that the total Levenshtein string distance between adjacent names is minimal:
Massachusetts, Mississippi, Missouri, Wisconsin, Washington, Michigan, Maryland, Pennsylvania, Rhode Island, Louisiana, Indiana, Montana, Kentucky, Connecticut, Minnesota, Tennessee, New Jersey, New Mexico, New Hampshire, New York, Delaware, Hawaii, Iowa, Utah, Idaho, Ohio, Maine, Wyoming, Vermont, Oregon, Arizona, Arkansas, Kansas, Texas, Nevada, Nebraska, Alaska, Alabama, Oklahoma, Illinois, California, Colorado, Florida, Georgia, Virginia, West Virginia, South Carolina, North Carolina, North Dakota, South Dakota
http://protokol2020.wordpress.com/2013/09/13/order-by-string-proximity/
I don't know, was it done before?
Replies from: RolfAndreassen
↑ comment by RolfAndreassen · 2013-09-16T17:52:26.038Z · LW(p) · GW(p)
Did you order this by a greedy-local algorithm that always takes the next state minimising the difference with the current one; or by a global minimisation of the total difference of all pairs? Presumably the latter is unique but the former changes the order depending on the starting state.
Replies from: Douglas_Knight, Thomas
↑ comment by Douglas_Knight · 2013-09-16T19:22:13.607Z · LW(p) · GW(p)
This is a traveling salesman problem, so it is unlikely that Thomas used an algorithm that guarantees optimality. If I understand your proposed greedy algorithm correctly, the distances at the beginning would be shorter than the distances at the end, which I do not observe in his list. A greedy heuristic that would not produce that effect would be to consider the state to be a bunch of lists and at every step concatenate the two lists whose endpoints are closest. This is a metric TSP, so the Christofides algorithm is no more than 1.5x optimal.
Replies from: Thomas↑ comment by Thomas · 2013-09-16T19:40:43.907Z · LW(p) · GW(p)
It is a global minimization. It takes 261 insert/delete operations in total, going from Massachusetts through to South Dakota.
I got many different solutions with 261 insert/delete operations, and some with 262 or more, but none with 260 or less.
It's a challenge to anybody interested to do better. I am not sure it's possible.
Replies from: RolfAndreassen↑ comment by RolfAndreassen · 2013-09-17T03:30:00.609Z · LW(p) · GW(p)
Not clear what the number of operations has to do with it; isn't the challenge to find a smaller total Levenshtein difference?
Incidentally, does it make a difference if you consider the end of the string to wrap around to the beginning?
Replies from: Thomas↑ comment by Thomas · 2013-09-17T05:29:02.309Z · LW(p) · GW(p)
The Levenshtein difference IS the number of insert/delete operations necessary to transform string A into string B.
Wrapping around, a circular list, is another option, yes.
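For concreteness, a small Python sketch of the distance under that convention (illustrative only): with inserts and deletes alone, a substitution effectively costs 2, and the distance reduces to the longest common subsequence (LCS).

```python
# Insert/delete-only edit distance. Every character outside the LCS must be
# deleted from one string or inserted into the other, so
# indel_distance(a, b) = len(a) + len(b) - 2 * lcs_length(a, b).

def lcs_length(a, b):
    """Length of the longest common subsequence of a and b."""
    prev = [0] * (len(b) + 1)
    for ca in a:
        cur = [0]
        for j, cb in enumerate(b, 1):
            cur.append(prev[j - 1] + 1 if ca == cb
                       else max(prev[j], cur[j - 1]))
        prev = cur
    return prev[-1]

def indel_distance(a, b):
    return len(a) + len(b) - 2 * lcs_length(a, b)

# Two adjacent states from the list above:
print(indel_distance("Massachusetts", "Mississippi"))
```

(Note that the usual textbook Levenshtein distance also allows substitutions at cost 1; the insert/delete-only convention is presumably the one behind the 261 total.)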
Replies from: RolfAndreassen↑ comment by RolfAndreassen · 2013-09-17T14:16:50.681Z · LW(p) · GW(p)
Ah! Well then, I learned something today, I can go to bed. :)
comment by Metus · 2013-09-16T05:19:18.813Z · LW(p) · GW(p)
I hope people do not mind me creating these. I live in a timezone earlier than the American ones, and I do a periodical thread on another forum anyway, so I am in the zone.
Replies from: David_Gerard↑ comment by David_Gerard · 2013-09-16T16:59:39.595Z · LW(p) · GW(p)
I always did them on UTC. I believe the servers are in Melbourne, so it counts as long as it's Monday in UTC+11 ;-)
comment by Scott Garrabrant · 2013-09-19T20:03:12.658Z · LW(p) · GW(p)
Are there resources for someone who is considering running a free local rationality workshop? If not, does anyone have any good ideas for things that could be done in a weekly hour-long workshop? I was surprised that there weren't any free resources from CFAR for exactly this.
Replies from: palladias, Viliam_Bur↑ comment by palladias · 2013-09-20T22:34:07.512Z · LW(p) · GW(p)
The How to Run a Successful LessWrong Meetup booklet probably has some helpful crossover ideas.
↑ comment by Viliam_Bur · 2013-09-20T08:03:11.410Z · LW(p) · GW(p)
A wiki page would be helpful.
The first idea is to play "rationalist taboo". Prepare pieces of paper with random words, and tell people to split into pairs, choose a random word, and explain it to their partner. This should only require a short explanation that it is forbidden to use not just linguistically related words, but also synonyms and some other cheap tricks (such as saying the name of a famous person associated with the idea). -- Then you could encourage people to use "be specific" on each other in real life. (Perhaps make it a game, where each person has to use it 3 times during the rest of the meetup.)
You could have them use CFAR's calibration game, and then try making some estimates together ("will it rain tomorrow?"), and perhaps make a prediction market. While making the estimates together, you could try to explore some biases, like the conjunction fallacy (first ask them to estimate something complex, then to estimate the individual components, then review the estimate of the complex thing)... I am not sure about that part. Or you could ask people to give 90% confidence intervals for... the mass of the Moon, the number of people in Bolivia, etc. (things easy to find on Wikipedia)... first silently on paper, then telling you the results so you can write them on the blackboard (the hypothesis is that way more than 10% of the intervals will be wrong).
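If you want to score the interval exercise on the spot, here is a trivial sketch (the numbers are hypothetical, just for illustration):

```python
# Score the 90%-confidence-interval exercise: what fraction of the collected
# (low, high) intervals miss the true value? Well-calibrated 90% intervals
# should miss about 10% of the time.

def miss_rate(intervals, true_value):
    misses = sum(not (low <= true_value <= high) for low, high in intervals)
    return misses / len(intervals)

# Example: guesses for the mass of the Moon, in units of 10^22 kg
# (the true value is about 7.35).
guesses = [(5, 8), (1, 3), (7, 7.5), (2, 20), (6, 7)]
print(miss_rate(guesses, 7.35))  # 0.4 here, i.e. 40% of the intervals missed
```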
You could do an experiment on anchoring/priming, for example giving each person a die and a questionnaire where the first question is "roll the die, multiply the result by 10, add 15, and write that as your first answer" and the second is "what percentage of the countries in Africa are members of the UN? write it as your second answer"; then collect the results and write all the estimates on the blackboard in columns grouped by the first answer (as in "people who had 25 as the first answer provided the following values: ...; people who had 35 provided these values: ..."). People are not allowed to talk while filling out the questionnaire. Another example of priming could be asking one group "what year did the first world war start?" and then "when was the steam engine invented?" (emphasising that if you don't know, you should make your best guess), and asking another group "when was the first crusade?" and then "when was the steam engine invented?" (the hypothesis is that the first group will give later estimates for the steam engine than the second group).
comment by Omid · 2013-09-19T08:05:19.495Z · LW(p) · GW(p)
Are people more productive using laptops or desktops?
Replies from: ChristianKl, Armok_GoB, Dahlen, kalium↑ comment by ChristianKl · 2013-09-20T13:56:08.807Z · LW(p) · GW(p)
In my own experience, working 2 hours directly on a laptop means that my back tenses up. That doesn't happen with the desktop setup that I use.
Having the keyboard directly next to the monitor just results in bad posture. Over the long run I wouldn't expose myself to it even if my back weren't as sensitive as it is.
↑ comment by Armok_GoB · 2013-09-29T23:16:58.862Z · LW(p) · GW(p)
This problem is underspecified; consider:
- A laptop on your kitchen table
- A desktop ergonomically identical to a laptop on your kitchen table
- A laptop in your lap in a library on a university campus, surrounded by people to ask for advice
- A desktop with multiple screens at adjustable heights, a super-ergonomic seating solution unlikely to be available wherever you wanted to use the laptop, and a superior pointing device that can't be moved around easily
Basically, a shitty laptop setup and a shitty desktop setup are nearly identical, but they can take advantage of very different types of upgrades:
- A laptop can be brought to different environments that enable productivity on different things, and can also be used at times you'd otherwise just be waiting.
- A desktop can be upgraded to be much more powerful, and can be hooked up to superior (expensive and bulky) input and output devices.
Either way, ergonomics matter greatly and are easy to get wrong. A desktop has some powerful advantages in setting up a good ergonomic environment, but since that environment is stationary, you can't get the benefits of both it and the laptop at once. On the other hand, some of the environments the laptop can be moved to might include a better ergonomic setup than you could afford yourself.
↑ comment by Dahlen · 2013-09-19T12:40:26.779Z · LW(p) · GW(p)
Can't answer your question with a statistic, but in my humble experience, the smaller the device, the easier it feels for me to disconnect from it. I find it more demanding to use a desktop since I have to sit in the same place, in the same position, and the time needed to turn it on/off and put it in standby mode is much greater in comparison to, say, a smartphone.
↑ comment by kalium · 2013-09-21T05:06:04.038Z · LW(p) · GW(p)
Laptops can be brought into more distracting environments, and as a result of this I've developed a strong habit of wasting time on my laptop. I have no such habit with my desktop, and therefore when I sit down at my desktop I am reasonably productive.
comment by David_Gerard · 2013-09-18T20:54:57.083Z · LW(p) · GW(p)
There's a Culture fanfic thread on this month's Media Thread. I compiled a list of what little there is.
comment by [deleted] · 2013-09-17T11:38:31.169Z · LW(p) · GW(p)
I am interested in how, in the early stages of developing an AI, we might map our perception of the human world (language) to the AI’s view of the world (likely pure maths). There have been previous discussions such as AI ontology crises: an informal typology, but it has been said to be dangerous to attempt to map the entire world down to values.
If we use an Upper Ontology and expand it slightly (so as not to become too restrictive or potentially conflicting) to cover a Friendly AI's concepts, this would assist in giving a human view of the current state of the AI's perception of the world.
Are there any existing ontologies on machine intelligence, and is this something worth exploring now to test on paper?
comment by Shmi (shminux) · 2013-09-19T14:48:13.249Z · LW(p) · GW(p)
If anyone got that microeconomics vs macroeconomics comic strip, feel free to explain... Possibly related: inefficient hot dogs.
Replies from: Lumifer, 9eB1↑ comment by Lumifer · 2013-09-19T17:08:47.908Z · LW(p) · GW(p)
Not sure I understand it well either, but that never stopped me before :-D
I think the upper-left quadrant of "describes well / never happens" is the domain of toy theories and toy problems. Microeconomics likely landed there because it tends to go "Imagine a frictionless marketplace with two perfectly rational and omniscient agents..."
The lower-right quadrant of "describes badly / happens all the time" is the domain of reality economics. It's a mess and nobody understands it well, but yes, it happens all the time. Macroeconomics was probably placed there because, while it has its share of toy theories, it does concern itself with empirical studies of what actually happens in reality when interest rates go up or down, money supply fluctuates, FX rates are fixed or left to float, etc.
↑ comment by 9eB1 · 2013-09-22T00:42:47.847Z · LW(p) · GW(p)
Traditional microeconomics makes stronger assumptions about the economic actors (that they are utility-maximizing, have perfect information, operate in competitive markets with many participants, etc.), and based on those assumptions it is accurate in describing what happens mathematically. Macroeconomics doesn't make as many assumptions because it's based on the observed behavior of market participants in aggregate (GDP is just the sum of its four components, wages can be shown to be sticky in the downward direction, and such), but macroeconomists are wrong or surprised all the time about the path of GDP and unemployment.
Note that I don't necessarily agree with this characterization, but that's what he's going for.
comment by Pentashagon · 2013-09-18T22:54:02.915Z · LW(p) · GW(p)
I am still confused about aspects of the torture vs specks problem. I'll grant for this comment that I would be willing to choose torture for 1 person for 50 years to avoid a dust speck in the eye of 3^^^3 people. Numerically I'll just assign -3^^^3 utilons to specks and -10^12 utilons to torture. Where confusion sets in is if I consider the possibility of a third form of disutility between the two extremes, for example paper cuts.
Suppose that 1 papercut is -100 utilons and 50 years of torture is -10^12 utilons, so that the expected utility of 10^10 papercuts and of the torture is the same*. However, my personal preference would be to choose the 10^10 papercuts over 50 years of torture. Similarly, if a broken bone is worth -10^4 utilons, I would rather that the same 10^10 people got a papercut instead of only 10^8 people having a broken bone. The best case would be if I could avoid 3^^^3 specks in exchange for somewhat fewer than 3^^^3 just-barely-more-irritating specks, instead of torturing, breaking, or cutting anyone.
Therefore, maximizing average or total expected utility doesn't seem to capture all my preferences. I think I can best describe it as choosing the maximum of the minimum individual utilities while still maximizing total or average utility. As such I am inclined to choose specks over torture, probably as a result of trying to find a more palatable tradeoff with broken bones or papercuts or slightly-more-annoying specks. In real life there are usually compromises, unlike in hypotheticals. Still, I wonder if it might be acceptable (or even moral) to accept only 99% of the maximum possible utility if it allows significant maximin-ing of some otherwise very negative individual utilities.
*assume a universal population of 4^^^4 individuals and the roles are randomly selected so that utility isn't affected by changing the total number of individuals.
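Spelling out the arithmetic behind those assumed values, the three harms are calibrated to the same total disutility:

```latex
\begin{align*}
10^{10}\ \text{papercuts} \times (-100\ \text{utilons}) &= -10^{12}\ \text{utilons} \\
10^{8}\ \text{broken bones} \times (-10^{4}\ \text{utilons}) &= -10^{12}\ \text{utilons} \\
1\ \text{torture} \times (-10^{12}\ \text{utilons}) &= -10^{12}\ \text{utilons}
\end{align*}
```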
Replies from: drethelin, Armok_GoB↑ comment by drethelin · 2013-09-19T18:03:08.335Z · LW(p) · GW(p)
I think this is one of the legitimate conclusions you can draw from torture vs dust specks. It's not that your intuition is necessarily wrong (though it may be), but that a simple multiplicative rule may NOT accurately describe your utility function. You can't choose torture based on simple addition, but that doesn't necessarily mean choosing torture isn't what you should do given your UF.
Replies from: Pentashagon↑ comment by Pentashagon · 2013-09-20T16:56:10.434Z · LW(p) · GW(p)
I don't think it's the specifics of the multiplicative accumulation of individual utilities that matters; just imagine that, however I calculate the utility of torture and papercuts, there is some lottery where I am VNM-indifferent between 10^10 papercuts and torture for 50 years. 10^10 + 1 papercuts would be too much and I would opt for torture; 50 years + 1 second of torture would be too much and I would opt for papercuts. However, given the VNM-indifferent choice, I would still have a preference for papercuts over torture, because it maximizes the minimum individual utility while still maximizing overall utility. (-10^12 utility is the minimum individual utility when choosing torture, -100 utility is the minimum individual utility when choosing papercuts, total utility is -10^12 either way, and average utility is -10^12 / (10^10 + 1) either way, so I am fairly certain the latter two are indifferent between the choices. If I've just made a math error, pointing it out would help alleviate my confusion.)
To me, at least, it seems like this preference is not captured by utilitarianism using VNM-utility. I think it's almost possible to explain it in terms of negative utilitarianism but I don't actually try to minimize overall harm, just minimize the greatest individual harm while keeping total or average utility maximized (or sufficiently close to maximal).
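Spelling out the average-utility computation over the 10^10 + 1 people involved (the potential papercut recipients plus the one potential torture victim):

```latex
\begin{align*}
\text{torture:} \quad \frac{1 \cdot (-10^{12}) + 10^{10} \cdot 0}{10^{10} + 1}
  &= \frac{-10^{12}}{10^{10} + 1}, \\
\text{papercuts:} \quad \frac{10^{10} \cdot (-100) + 1 \cdot 0}{10^{10} + 1}
  &= \frac{-10^{12}}{10^{10} + 1}.
\end{align*}
```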
↑ comment by Armok_GoB · 2013-09-29T23:33:53.998Z · LW(p) · GW(p)
Obviously you're going to get wrong specific answers if you're just pulling exponents out of thin air. The torture vs. specks example works because the answer would be the same whether specks were worth the same as a year of torture, or 10^-10 as much, or 10^-1000 as much.
Getting approximate utilities is tricky; general practice is to come up with two situations you're intuitively indifferent between, where one involves a small event for certain, and the other involves a dice throw and then a big event with a certain probability depending on it. Only AFTER you've come up with this kind of preference do you put numbers on anything, although often you'll find this unnecessary, as just thinking about it like this resolves your confusion.
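For instance, a toy version of that procedure (the probability here is made up for illustration): suppose that after reflection you're indifferent between one papercut for certain and a 10^-10 chance of 50 years of torture. Setting U(nothing) = 0, the indifference condition is

```latex
U(\text{papercut}) = p \cdot U(\text{torture}) + (1 - p) \cdot U(\text{nothing}),
\qquad p = 10^{-10}
```

which gives U(papercut) = 10^-10 * U(torture); only at that point do the exponents mean anything.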