Posts
Comments
Two separate links, perhaps?
RSS for the comment page can do that. Same for recent comments on a post. Still, actual HTML would be nice.
You can append "?context=100" to the comment permalink.
Just some thoughts before I start my sleep interval :)
Plugins are great, especially because each can request individual permissions. That way, users don't get scared away by permission requests. Some example code here.
Widget: yes, 1x1 [start|end|track|new_event|happy] button would probably be best. One can arrange those as they see fit.
Ordinal values: perhaps just an autocomplete option for event labels.
As for analytics, perhaps draw selected intervals above each other with selected tracks plotted over them (each with its own scale), and vertically write the labels of (selected?) events. This may be messy, but it can be done later and elsewhere (e.g. Google Charts, if they can be combined). Also, setting up icons for events/ordinals would be nice.
Thanks! I was looking for something like this after reading the Luminosity sequence again. Haven't found any on the Android Market.
Feature requests:
- You could make it respond to specific intents to create (and let others create) separately-installed plugins (e.g. a plugin with location permission to automatically track where you are, one with internet permission to track various karma etc.).
- Tracking ordinal values (e.g. happy, anxious, happy+anxious...)
- A widget
- Data export
- More analytical tools, perhaps something to compare tracks/events.
You could possibly even monetize this by keeping the app free and offering an awesome analytic service online (I'd pay $10 for 10 instances of auto-generated full analysis report).
Nice. The same applies to extracting interfaces in programming (e.g. IComparable).
Yeah, I expected someone to point out a paper where this has been done (online Wikipedia references don't have it and I couldn't find the papers Ermer cited).
The paper presents good evidence in favor of its hypothesis, but I am more interested in whether ordinary people really do logic better in social contexts, as opposed to other real-world tasks.
As for the test:
- Made four cards out of paper and drew a lightning bolt, a light bulb, a crossed-out lightning bolt and a crossed-out light bulb. The backs of the cards were blank.
- Presented the cards as houses - one side specifies if lights are on, other specifies if there is electricity.
- Told them that "if lights are on, there must be electricity in the house" and individually asked which house(s) they must check (flip) to see if any of them are impossible.
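For reference, the correct answer to my version can be sketched in code (a hypothetical encoding of the cards, not anything from the paper):

```javascript
// Each card shows one face; the other side is hidden.
// P = "the lights are on", Q = "there is electricity".
// Rule: if P then Q. A card can falsify the rule only if it is P-and-not-Q,
// so exactly the cards showing P or showing not-Q must be flipped.
const cards = [
  { name: "lightning bolt",        shows: "Q" },     // electricity on
  { name: "light bulb",            shows: "P" },     // lights on
  { name: "crossed-out lightning", shows: "not-Q" }, // no electricity
  { name: "crossed-out bulb",      shows: "not-P" }, // lights off
];

const mustFlip = cards
  .filter(c => c.shows === "P" || c.shows === "not-Q")
  .map(c => c.name);
// mustFlip: ["light bulb", "crossed-out lightning"]
```

So the light bulb and the crossed-out lightning are the houses to check; the other two can't violate the rule no matter what's on the back.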
This isn't a good test. I'd much rather go for something more primal, such as "If you don't eat, you will die".
when framed in terms of social interactions, people's performance dramatically improves
From the Wikipedia article, after invoking evolutionary psychology and social interaction to explain the improvement:
Alternatively, it could just mean that there are some linguistic contexts in which people tend to interpret "if" as a material conditional, and other linguistic contexts in which its most common vernacular meaning is different.
It shouldn't be hard to present the test as a real world example that doesn't involve social interaction (e.g. "If lights are on, there is electricity in the house").
/me goes off to test this on a couple of linguistics students
Result: One correct and one incorrect answer.
Here is some javascript to help follow LW comments. It only works if your browser supports offline storage. You can check that here.
To use it, follow the pastebin link, select all that text and make a bookmark out of it. Then, when reading a LW page, just click the bookmark. Unread comments will be highlighted, and you can jump to next unread comment by clicking on that new thing in the top left corner. The script looks up every (new) comment on the page and stores its ID in the local database.
Edit: to be more specific, all comments are marked as read as soon as the script is run. I could come up with a version that only marks them as read once you click that thing in upper left corner. Let me know if you're using it or if you'd like anything changed/added.
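The core mechanism is roughly this (a simplified sketch, not the actual pastebin script; a plain object stands in for the browser's localStorage, and the real bookmarklet reads comment IDs from the DOM rather than taking them as a parameter):

```javascript
// Remember which comment IDs have been seen; flag the rest as unread.
const storage = {}; // stand-in for window.localStorage

function markUnread(commentIds) {
  const seen = new Set(JSON.parse(storage["lw-read"] || "[]"));
  const unread = commentIds.filter(id => !seen.has(id));
  // Everything on the page gets marked as read as soon as the script runs.
  storage["lw-read"] = JSON.stringify([...new Set([...seen, ...commentIds])]);
  return unread;
}
```

On the first visit every comment comes back as unread; on a later visit only the IDs not yet in storage do.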
re: old ideas
I can't really figure out what he means by that. His example with dangerous doses of artificial sweeteners seems to be about asking the wrong question. It seems logical that no amount of data can get you the right answer if you don't ask the right question (or set of questions).
He goes on about mutilating datasets, which seems a sin to me, with gigabytes of storage on my PC. When the medium of storage is paper, data gets mutilated. Consider a doctor writing up an anamnesis: the patient talks on and on, but only what the doctor considers relevant is written down. That seems like a perfect example of a mutilated dataset, and of what Jaynes was talking about: if the doctor has the wrong model in mind while collecting data, (s)he is more likely to miss important information.
I heard that the people at CERN don't let a bit go unstored. But are there variables not measured at all, due to our existing models of the universe?
Will participate (online only, living in Serbia). Additional back-and-forth on IRC seems like a good idea.
This pretty much convinced me that the fine variances of sexiness have much more to do with memes than genes. It shouldn't be hard to test if it is the case with cuteness as well: just find a culture that hasn't been exposed to Disney/Pixar films.
The harmless surprise hypothesis fits my data pretty well. But are you sure repetition-based humor isn't just conditioning people to laugh at a certain thing (catch-phrase or a situation)?
On the other hand, butt-of-a-joke hypothesis also sounds plausible.
There is an option in the bCisive application, under the "spaces" tab, to turn on guest access. It should supply you with a URL you can include in your post here. Without that option turned on, we would have to register and you would have to invite each of us to view the argument map.
So: "spaces" -> "cryonics" -> "manage" -> turn on guest access
Anyone know how to quote this URL properly using the [ ]( ) markup?
\ before )
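For anyone else hitting this: a throwaway helper to illustrate the escaping (hypothetical, just the one-line fix wrapped in a function):

```javascript
// Build a [text](url) markdown link, backslash-escaping ")" in the URL
// so the parser doesn't mistake it for the end of the link.
function mdLink(text, url) {
  return "[" + text + "](" + url.replace(/\)/g, "\\)") + ")";
}

// mdLink("wiki", "http://example.com/Foo_(bar)")
//   -> "[wiki](http://example.com/Foo_(bar\))"
```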
Possibly related: I have a bet going with a reddit-acquaintance; basically, I gave him an upvote, and if x turns out to be true, he donates $1000 to SIAI.
If members of this community have an accurate, well calibrated map, making bets could be a cost-effective way to pump money into SIAI or other non-profits/charities (which signals caring as well as integrity).
Is such a thing in the realm of Dark Arts?
Signaling may play a significant role in this.
I too would generally regard observations of black ravens as being weak evidence that all ravens are black.
Weak evidence, but evidence nonetheless. I read the essay again, and it appears that what the author means is that there exists a case where observing a black raven is not evidence that all ravens are black; the case he specified is one where the raven is picked from a population already known to be consisting of black ravens only. In some sense, he is correct. Then again, this is not a new observation.
He does present a case where observing a red herring constitutes weak probabilistic evidence that all ravens are black.
So, my disagreement comes from my misinterpretation of the word "may".
- Red herrings may (and black ravens may not) constitute evidence that all ravens are black.
Most of his other points rely on loose definitions, IMO ("rational", "justified", "selfish", "cat"), but this one seems plainly wrong to me, as he seems to attach the same meaning to the word "evidence" as LW does (although not that formal).
I'm not saying philosophers do not contribute to problem-solving, far from it. It may be that he is wrong and this is not "at least as well-established as most scientific results" in philosophy. It may also be that a significant number of philosophers disregard (or have no knowledge of) Bayesian inference.
A main form of insight is a hypothesis that one hadn't previously entertained, but should be assigned a non-negligible prior probability.
I think of this as P(hypothesis H is true | H is represented in my mind) > P(H is true | H is not represented in my mind), largely because someone likely did some calculations to hypothesise H (no matter how silly H may seem, e.g. "goddidit", it's better than a random generator, with few exceptions).
So, in a way, I consider the act of insight as evidence (likelihood ratio > 1) for the insight itself (the hypothesis).
Sweet, but according to the wiki the lightsaber doesn't include full Bayesian reasoning, only the special case where the likelihood ratio of evidence is zero.
One could argue that you can reach the lightsaber using the Bayesian blade, but not vice versa.
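To make the distinction concrete: in odds form, a full Bayesian update multiplies prior odds by the likelihood ratio, and the lightsaber only covers the case where that ratio is zero. A toy sketch (my own framing, not from the wiki):

```javascript
// Full Bayesian update in odds form:
// posterior odds = prior odds * likelihood ratio.
const update = (priorOdds, likelihoodRatio) => priorOdds * likelihoodRatio;
const oddsToProb = odds => odds / (1 + odds);

// General case: evidence with likelihood ratio 3 shifts 1:1 odds to 3:1.
// Special (lightsaber) case: likelihood ratio 0 eliminates the hypothesis
// outright, regardless of the prior.
```

The general rule reproduces the ratio-zero case, but nothing in the ratio-zero case tells you how to handle merely improbable evidence, which is the "not vice versa" part.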
when, if ever, does an insight count as evidence?
I suspect you use the term "insight" to describe something that I would classify as a hypothesis rather than observation (evidence is a particular kind of observation, yes?).
Consider Pythagoras' theorem and an agent without any knowledge of it. If you provide the agent with the length of the legs of a right-angled triangle and ask for the length of the hypotenuse, it will use some other algorithm/heuristic to reach an answer (probably draw and measure a similar triangle).
Now you suggest the theorem to the agent. This suggestion is in itself evidence for the theorem, if for no other reason than that P(hypothesis H | H is mentioned) > P(H | H is not mentioned). Once H steals some of the probability from competing hypotheses, the agent looks for more evidence and updates its map.
Was his first answer "rational"? I believe it was rational enough. I also think it is a type error to compare hypotheses and evidence.
If you define "rational" as applying the best heuristic you have, you still need a heuristic for choosing a heuristic to use (i.e. read wikipedia, ask expert, become expert, and so on). If you define it as achieving maximum utility, well, then it's pretty subjective (but can still be measured). I'd go for the latter.
P.S. Can Occam's razor (or any formal presentation of it) be classified as a hypothesis? Evidence for it could be any observation of a simpler hypothesis turning out to be a better one, and similarly for evidence against. If so, then you needn't dual-wield the sword of Bayes and Occam's razor; all you need is one big Bayesian blade.
Thank you, and thank you for the link; didn't occur to me to check for such a topic.
Male, 26; Belgrade, Serbia. Graduate student of software engineering. Been lurking here for a few months, reading sequences and new stuff through RSS. Found the site through reddit, likely.
Self-diagnosed (just now) with impostor syndrome. Learned a lot from reading this site. Now registered an account to facilitate learning (by interaction), and out of desire to contribute back to the community (not likely to happen by insightful posts, so I'll check out the source code).
If you are envisioning some sort of approximation of Bayesian reasoning, perhaps one dealing with an ordinal set of probabilities, a framework that is useful in everyday circumstances, I would love to see that suggested, tested and evolving.
It would have to encompass a heuristic for determining the importance of observations, as well as their reliability and general procedures for updating beliefs based on those observations (paired with their reliability).
Was such a thing discussed on LW?