I don't think it's fair to say that "nobody understood induction in any kind of rigorous way until about 1968." The linked paper argues that Solomonoff prediction does not justify Occam's razor, but rather that it gives us a specific inductive assumption. And such inductive assumptions had previously been rigorously studied by Carnap among others.
But even if we grant that assumption, I don't see why we should find it surprising that science made progress without having a rigorous understanding of induction. In general, successfully engaging in some activity doesn't require having a rigorous understanding of that activity, and making inductive inferences is something that comes very naturally to human beings.
Moreover, it seems that algorithmic information theory has (at best) had extremely limited impact on actual scientific practice in the decades since the field was born. So even if it does constitute the first rigorous understanding of induction, the lesson seems to be that scientific progress does not require such an understanding.
Non-cognitivism strictly speaking doesn't imply the orthogonality thesis. For instance, one could consistently hold that increased intelligence leads to a convergence of the relevant non-cognitive attitudes. Admittedly, such a position appears implausible, which might explain the fact (if it is a fact) that non-cognitivists are more prone to accept the orthogonality thesis.
I don't think Sweden is significantly more transhumanist than several other western European countries. The fact that two influential transhumanists (Bostrom and Sandberg) are Swedish could be due to chance. Once they became known, they may have attracted a disproportionate number of Swedes to adopt similar views, but that number is still trivial compared to the population as a whole. In fact, it could be that the general egalitarian sentiment makes Swedes less likely to accept certain transhumanist positions (even though that sentiment is arguably weaker today than it was a few decades ago).
You can prove everything from a contradiction, but you can't prove everything from a false premise. I take it that you mean that we can derive a contradiction from the assumption of moral realism. That may be true (although I'd hesitate to call either moral realism or free will logically impossible), but I doubt many arguments from moral realism to other claims (e.g. the denial of the orthogonality thesis) rely on the derivation of a contradiction as an intermediate step.
If moral realism is simply the view that some positive moral claims are true, without further metaphysical or conceptual commitments, then I can't see how it could be at odds with the orthogonality thesis. In itself, that view doesn't entail anything about the relation between intelligence levels and goals.
On the other hand, the conjunction of moral realism, motivational judgment internalism (i.e. the view that moral judgments necessarily motivate), and the assumption that a sufficiently intelligent agent would grasp at least some moral truths is at odds with the orthogonality thesis. Other combinations of views may yield similar results.
I'm not familiar with his writings on the foundations of quantum mechanics, but in addition to his work on causality, the three volumes on measurement he co-authored have also been hugely influential. His intellectual autobiography (pdf) might be worth a look.
Well, I hope you're in Oxford soon again, João! :)
Patrick Suppes to the left?
Some might find it more convenient to set this up as a Google Form.
Just came across the book Behavior Modification in Applied Settings, which I don't think has been mentioned on Less Wrong previously. I haven't had a chance to read it yet, but it looks like it could be useful for those of us interested in boosting productivity and personal effectiveness.
See my reply to diegocaleiro.
Not sure whether I do think otherwise. But if Luke had written "smarter-than-human machine intelligence" instead, I probably wouldn't have reacted. In comparison, "machine superintelligence singleton" is much more specific, indicating both (i) that the machine intelligence will be vastly smarter than us, and (ii) that multipolar outcomes are very unlikely. Though perhaps there are very convincing arguments for both of these claims.
a machine superintelligence singleton is largely inevitable
So do you think that while we can't be very confident about when AI will be created, we can still be quite confident that it will be created?
Here.
There's a Swedish word for this, "problemformuleringsprivilegiet," which roughly translates as "the privilege to formulate the problem."
Indeed, my point was rather that if Scanian is included, so should ten or so other accents as well.
Being from southern Sweden myself, I was also quite amused to see that Scanian – which is really just an accent – is marked as a separate language.
Here.
A few points:
- This year, spring has been much colder in most European countries than it typically is.
- FHI folks are not very representative: the fact that many of them spend late nights and weekends at the office isn't particularly strong evidence that other folks in the UK and in countries with a similar climate do the same.
Indeed, even this quote is way below 140 characters :-)
By the way, you're off by a year: the February 2013 thread is here.
This was a fun read. Reminds me of Terry Bisson's "They're Made Out of Meat."
Thanks, Brian. I know this is your position, I'm wondering if it's benthamite's as well.
Knowing that you've abandoned moral realism, how would you respond to someone making an analogous argument about preferences or duties? For instance, "When a preference of mine is frustrated, I come to see this as a state of affairs that ought not to exist," or "When someone violates a duty, I come to see this as a state of affairs that ought not to exist." Granted, the acquaintance may not be as direct as in the case of intense suffering. But is that enough to single out pleasure and suffering?
I find the title a bit confusing. To me it seems a better one would be "Outline of Possible Sources of Knowledge of Values." Or am I misunderstanding you?
I am curious about the qualifier "pre-1980." Do you think later work in these disciplines is noticeably better?
"What Would AIXI Do With Infinite Computing Power and a Halting Oracle?"
Is this problem well-posed? Doesn't the answer depend completely on the reward function?
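For reference, here is (roughly, in my own rendering, which may differ in notation from the post) Hutter's expectimax definition of AIXI, where $m$ is the horizon, $\ell(q)$ the length of program $q$, and $U$ a universal monotone Turing machine:

$$a_k = \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big(r_k + \cdots + r_m\big) \sum_{q\,:\,U(q,\,a_1 \ldots a_m)\,=\,o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

The rewards $r_k, \ldots, r_m$ enter the maximized quantity directly, so what AIXI "would do" seems underdetermined until the reward (and observation) channel is specified.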
The folly of mistaking a paradox for a discovery, a metaphor for a proof, a torrent of verbiage for a spring of capital truths, and oneself for an oracle, is inborn in us.
Paul Valéry
You could estimate the amount of time spent procrastinating. If you're at a computer, RescueTime or similar software might help you do that. You could also try to count how often you feel like procrastinating, and how often you actually do procrastinate. Of course, this might be tricky to do accurately.
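As a minimal sketch of what I mean by counting (a hypothetical Python script of my own, not an existing tool; the filename and the "urge"/"lapse" labels are made up):

```python
# tally.py -- hypothetical helper for logging procrastination events.
# Usage: python tally.py urge    (felt like procrastinating)
#        python tally.py lapse   (actually procrastinated)
#        python tally.py         (print a summary of the log)
import csv
import sys
from datetime import datetime

LOG = "procrastination_log.csv"  # assumed location; change as needed

def log_event(kind):
    """Append a timestamped event ('urge' or 'lapse') to the log."""
    with open(LOG, "a", newline="") as f:
        csv.writer(f).writerow([datetime.now().isoformat(), kind])

def summarize():
    """Count how many urges were logged vs. how many were acted on."""
    counts = {"urge": 0, "lapse": 0}
    try:
        with open(LOG, newline="") as f:
            for ts, kind in csv.reader(f):
                counts[kind] = counts.get(kind, 0) + 1
    except FileNotFoundError:
        pass  # no log yet
    print(counts)

if __name__ == "__main__":
    if len(sys.argv) > 1:
        log_event(sys.argv[1])
    else:
        summarize()
```

Of course, this only works if you remember to log each event, which runs into the same accuracy problem.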
Have you tried Beeminder for logging progress?
Among all hypotheses consistent with the observations, the simplest is the most likely.
I think this statement of Occam's razor is slightly misleading. The principle says that you should prefer the simplest hypothesis, but doesn't say why. As seen in the SEP entry on simplicity, there have been several different proposed justifications.
Also, if I understand Solomonoff induction correctly, the reason for preferring simpler hypotheses is not that they are a priori more likely to be true, but rather that using Solomonoff's universal prior guarantees a finite bound on the number of prediction errors you make over an infinite string.
Assuming that simpler hypotheses are more likely to be true looks like wishful thinking. But the fact that the number of prediction errors will be bounded seems like a good justification of Occam's razor.
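To spell out the bound I have in mind (a rough sketch of the deterministic-sequence case, as I understand it from Solomonoff and Hutter): the universal prior satisfies

$$M(x_{1:n}) \;=\; \sum_{p\,:\,U(p)\,=\,x_{1:n}*} 2^{-\ell(p)} \;\ge\; 2^{-K(x)}$$

for any computable sequence $x$, since the shortest program for $x$ is one term in the mixture. Using the chain rule and $-\ln z \ge 1 - z$,

$$\sum_{t=1}^{n} \big(1 - M(x_t \mid x_{<t})\big) \;\le\; -\ln M(x_{1:n}) \;\le\; K(x)\ln 2 .$$

Each outright error (the predictor assigning the true next bit probability at most 1/2) contributes at least 1/2 to the left-hand side, so the total number of errors is at most $2K(x)\ln 2$, a finite bound independent of $n$.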
Sorry, I didn't realize you had to create an account there. I've now uploaded the file to Rapidshare here.
Here.
I live in Lund, but will hopefully be able to join you!
Here is the published version, if you still need it.
This seems like a great way of moving forward. I would certainly enter.
What do you estimate a paper written in this way would cost, in total?
This recent edited volume might be of interest.
Enoch (2005) argues that idealization is problematic for subjectivist theories:
The reading of the watch tracks the time—which is independent of it—only when all goes well, the perceptual impression tracks relative height—which is independent of this perception—only when all goes well. So there is reason to make sure—by idealizing—that all does go well. But had we taken the other Euthyphronic alternative regarding these matters things would have been very different. Had the time depended on the reading of my watch, had the reading of my watch made certain time-facts true, there would have been no reason (not this reason, anyway) to “idealize” my watch and see to it that the batteries are fully charged. In such a case, whatever the reading would be, that would be the right reading, because that this is the reading would make it right.
The natural rationale for idealization, the one exemplified by the time and relative-height examples, thus only applies to cases where the relevant procedure or response is thought of as tracking a truth independent of it. This does not necessarily rule out extensional equivalences between normative truths and our relevant responses. One may, for instance, hold a view that is an instance of “tracking internalism,” according to which, necessarily, one cannot have a (normative) reason without being motivated accordingly, not because motivations are part and parcel of (normative) reasons, but rather because our motivations necessarily track the independent truths about (normative) reasons. But typical idealizers do not think of their view in this way; they do not think of the relevant response as (necessarily) tracking an independent order of normative facts. As emphasized above, they think of the relevant response as constituting the relevant normative fact.
I'm not sure how relevant this objection is for CEV, though.
In some instances, I use citations for pointing to relevant studies, without intending to imply that this is settled science. But I now realize that it does carry that implication, and that the wording of the sentence is particularly unfortunate. I have updated the first and other footnotes to take this into account.
By "thinks is fine", I didn't mean some arbitrary personal standard, but precisely the kind of epistemic abilities that you mention.
I understand your revision and thank you for pointing it out, so I can keep trying harder.
Oops, looks like I didn't do my proof-reading carefully enough. Thanks for spotting that.
I also got a vague feeling they weren't identical. Perhaps I should mention that in the original post.
Thanks for the pointer!
Sorry about that. I've now added all the PDFs I found. At the moment I'm unable to host the ones that are still missing, but it might be worth investing in.
Oops, looks like I accidentally cited Peters 1978 when I meant to cite a paper that article pointed me to. Fixed now.
I have read at least abstracts of all cited articles, which the authors of the paper you link to seem to think is fine:
we adopt a much more generous view of a “reader” of a cited paper, as someone who at the very least consulted a trusted source (e.g., the original paper or heavily-used and authenticated databases) in putting together the citation list.
Thank you very much!
Thanks for this. I have now included links to all fulltexts I found online. If you or anyone else manage to find the ones I'm still lacking, please point me to them and I'll update the post again.
There are a couple of similar-sounding footnotes in the preface and the first chapter, but I'm unable to find this particular one.
Unfortunately, the Kripke footnote appears to be a joke only.
By "well known", I suppose I just meant listed among the 503 tools here.
I use Eternity to keep track of time use, and Lemon to keep track of expenses. Judging by my interactions with the Quantified Self community, neither app seems too well known.
For those interested, the CMU philosophy department organizes an annual summer school in logic and formal epistemology.