Comments
Are there any updates on when the Sequences e-book versions are going to be released? I'm planning a reread of some of the core material and might wait if the release is imminent.
I think that's one issue with protests. Many people gather with ill-defined goals that are only tangentially related to what most would agree is the actual problem. The "actual problem" for Occupy relates to unequal distribution of wealth, and the "actual problem" for the recent police brutality protests relates to systemic bias in the criminal justice system. I'm not sure whether there actually is this sort of systemic bias, nor am I sure of the implicit claim that "things have gotten worse."
So, what do protests actually achieve, and is that effective in making things better? They do seem to raise some level of awareness, in the sense that more eyeballs are on the issue for a short period of time. It's unclear to me that that's effective, though, especially since it's a double-edged sword: raising awareness about the issue also makes the negative externalities (like rioting and looting) more likely to be picked up and emphasized by the media.
That's an interesting schedule. Do you find it easier to fast during the day, vs the commonly recommended "don't eat anything after 6pm until 1pm the next day"?
I have a question about a seemingly complex social issue, so I'm interested if anyone has any insights.
Do protests actually work? Are, e.g., the Ferguson/police brutality protests a good way of attacking the problem? They seem to me to have a high cost, to deflect from the actual problem, and to lack the sustained effort from people who care that would be needed to push through to actual social change in the U.S.
Look at SNPs corresponding to methylation defects, and run a self experiment on any interventions that drop out of that.
Some off-the-cuff thoughts:
Can you imagine an intelligent agent that is not rational? And vice versa, can you imagine a rational agent that is not intelligent?
AIXI is "rational" (I believe it's shown to be vNM-rational in the literature). Is "instrumental rationality" a superset of this definition?
In the case of human rationality and human intelligence, part of it seems to be a question of scale. E.g., IQ tests seem to measure low-level pattern matching, while "rationality" in Stanovich's sense refers more to a larger-scale, self-reflective corrective process. (I'd conjecture that there are a lot of low-level self-reflective corrective processes occurring in an IQ test as well.)
What's the current status of this? I'm looking to get started on the course list and would love a study partner.
There is this paper, http://commonsenseatheism.com/wp-content/uploads/2014/04/Hintze-Problem-class-dominance-in-predictive-dilemmas.pdf which was an honors thesis.
More discussion relevant to the state of UDT and TDT in this comment: http://lesswrong.com/lw/k3m/open_thread_2127_april_2014/au6e
Thanks for your enticing comment!
I understand your first point, but my math knowledge is not up to par to really understand point #2, and point #3 just makes me want to learn category theory. BTW, I also posted this question on the philosophy stackexchange: http://philosophy.stackexchange.com/questions/14689/how-does-abstraction-generalization-in-mathematics-fit-into-inductive-reasoning.
Do you have any recommendations of what to study to understand category theory and more about the foundations of math? (Logic, type theory, computability & logic, model theory seem like contenders here)
So there's a MIRIxMountainView, but would it be redundant to have a MIRIxEastBay/SF? It seems like the MIRIx label is meant to be bestowed even on low-key research efforts, and considering the hacker-culture/rationality communities there, there may be interest in this.
I have a question about the nature of generalization and abstraction. Human reasoning is commonly split into two categories: deductive and inductive reasoning. Are all instances of generalization examples of inductive reasoning? If so, does this mean that a deep enough understanding of inductive reasoning would let you create broadly "better" abstractions?
For example, generalizing the integers to the rationals satisfies a couple of needs: the theoretical need to remove previous restrictions on the operations of subtraction and division, and (AFAIK) the practical need of representing measurable quantities. This generalization doesn't seem to fit the examples given at http://en.wikipedia.org/wiki/Inductive_reasoning at first glance, and I was hoping someone could give me some nuggets of insight about this. Or, can someone point out what evidence leads to this inductive conclusion/generalization?
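For concreteness, here's a minimal sketch (my own illustration, and only one standard way of doing it) of the integers-to-rationals generalization written out as a formal construction, which is part of why it's hard to classify as a straightforwardly "inductive" step:

```latex
% Q built from Z as equivalence classes of pairs (a, b) with b nonzero,
% where (a, b) plays the role of the fraction a/b.
\[
  \mathbb{Q} \;=\; \bigl(\mathbb{Z} \times (\mathbb{Z} \setminus \{0\})\bigr) / \sim,
  \qquad (a, b) \sim (c, d) \;\Longleftrightarrow\; ad = bc .
\]
% Addition and multiplication are defined on representatives:
\[
  (a, b) + (c, d) = (ad + bc,\; bd), \qquad (a, b) \cdot (c, d) = (ac,\; bd).
\]
% The map n \mapsto (n, 1) embeds Z in Q and preserves its arithmetic,
% while division by any nonzero element is now always defined.
\]
```

On this view the "generalization" looks less like an inductive inference from instances and more like a construction deliberately chosen to satisfy the constraints mentioned above (total division, representing ratios of quantities).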
Related -- here are some attempts to formalize and understand analogy from a category theoretic perspective:
http://link.springer.com/article/10.1023/A:1018963029743
http://pages.bangor.ac.uk/~mas010/pdffiles/Analogy-and-Comparison.pdf
Is there a way to tag a user in a comment such that the user will receive a notification that s/he's been tagged?
Before I embark on this seemingly Sisyphean endeavor, has anyone attempted to measure "philosophical progress"? No philosophical problem I know of is fully solved, and no general methods are known which reliably give true answers to philosophical problems. Despite this, we have definitely made progress: e.g., we can chart human progress on the problem of Induction, of which an extremely rough sketch looks like Epicurus --> Occam --> Hume --> Bayes --> Solomonoff, or something. I don't really know, but there seem to be issues with Solomonoff's formalization of Induction.
I'm thinking of "philosophy" as something like "pre-mathematics / making progress on confusing questions for which no reliable methods yet exist to give truthy answers / forming a concept of something and formalizing it". Also, it's not clear to me that "philosophy" exists independently of the techniques it has spawned historically, but there are some problems for which the label "philosophical problem" seems appropriate, e.g. "how do uncertainties work in a universe where infinite copies of you exist?" and, well, all of moral philosophy, etc.
Do you have anything quick to add about what you mean by "Eliezer-level philosophical ability"?
I would love to see these as posts. (I really enjoyed your posts on the CFAR list about human ethics).
What does "The instrumental lens" hint at?
Everyone's posting evidence for this, which is great (LW is awesome), but I'm also interested in any rebuttals along the lines of "I expected it to hugely change my social life, but it didn't really."
In particular, for me:
- I found out about CFAR from LW and attended a CFAR workshop
- I've attended a couple of meetups in the bay area
- I found out about 80000 hours, GiveWell, MIRI, and effective altruism in general, which has been a large force in my life
- I've met many interesting people working on many interesting things in spheres that I care about
Declaring pseudo-Crocker's rules...
Not long after I found out about LW, I expected to, e.g., move into a rationalist community, immerse myself in the memespace, etc. But there's a distinct qualitative difference between how I feel hanging out with friends I've met through more prosaic circles (house parties, friends of friends, college, etc.) and how I feel hanging out with people at the meetups I've been to, and even at the CFAR workshop. I find it hard to really connect with most people I've met through LW in a way that gives me the fuzzywuzzies, even though many of us share similar values and are working towards similar goals.
Yes, my friends are stoners, entrepreneurs, weirdos, normals, hot people, people-probably-more-concerned-with-social-status-than-LWers, whatever. Some of them know about LW and are familiar with rationality concepts. But I just have a really fun time with them, and I haven't had that in my experiences so far with LW people. I suspect (at the risk of sounding insulting) that there's a difference in social acumen and sense of humor or something. I honestly found some of my social experiences with LWers kind of alienating.
Please note I'm not drawing a hard-and-fast line here (and obviously there's a selection effect), but I'm curious whether anyone else has had the same experience.
I'm not sure that he doesn't have "natural" skill or talent. I can't find the link now, but I remember reading that he has an extremely high IQ (or something something eidetic memory something something?).
Recurring motifs in his standup comedy routines are how much smarter he is than everyone else, etc. (anecdata).
I highly recommend the book Concepts, Techniques, and Models of Computer Programming (http://www.amazon.com/Concepts-Techniques-Models-Computer-Programming/dp/0262220695), which is the closest thing I've seen to distilling programming to its essence. It's language-agnostic in the sense that you start with a small "kernel language" and build it up, incorporating different concepts as needed.
The squats and lunges will exercise your back and core. I also add supermans for the mid-back.
Alum here... glad to hear! You should do that :)
I've been doing the "7 min scientific workout" every morning for the past month and I've seen great results. http://well.blogs.nytimes.com/2013/05/09/the-scientific-7-minute-workout/
Does anyone have any recommended "didactic fiction"? Here are a couple of examples:
1) Lauren Ipsum (http://www.amazon.com/Lauren-Ipsum-Carlos-Bueno/dp/1461178185)
2) HPMoR
I donated some money on Dec. 13, and I'm not sure if the matching was active at that time. Anyone know?
Are there any updates on when this will be released?
Found a proof of this article at: http://sapir.psych.wisc.edu/papers/lupyan_brainsAlgorithms_proof.pdf
The difficulties of executing simple algorithms: Why brains make mistakes computers don't
Here's the first track from the new release Psychic by Darkside: http://www.youtube.com/watch?v=d8NaWT0WvEE
The entire album feels like lost memories, highly recommended.
Narratives and goals: Narrative structure increases goal priming. Laham, Simon M.; Kashima, Yoshihisa http://psycnet.apa.org/journals/zsp/44/5/303/