Will there be a way to merge accounts?
When was the last data migration from LW 1.0? I'm getting an "Invalid email" message, even though I have a linked email here.
For me it just returns "invalid email", though I can see my email in http://lesswrong.com/prefs/update/.
Agreed. Six links daily seems like way too much.
Regarding your last point: is a hellish world preferable to an empty one?
A related point: http://thoughtinfection.com/2014/04/19/capitalism-is-a-paperclip-maximizer/
Does anyone know if and where I can find "IB Mathematics Standard Level Course Book: Oxford IB Diploma Programme" (I need this one specifically)?
https://global.oup.com/education/product/9780198390114/?region=uk
Thanks! This will be helpful.
I don't have time to evaluate which view is less wrong.
Still, I was somewhat surprised when I saw your first comment.
Is this what you have in mind?
Sugar does not cause hyperactivity in children.[230][231] Double-blind trials have shown no difference in behavior between children given sugar-full or sugar-free diets, even in studies specifically looking at children with attention-deficit/hyperactivity disorder or those considered sensitive to sugar.[232]
Sugar alone makes it more difficult to concentrate for many people, as well as having many other deleterious effects.
What do you mean?
Sorry for the pause, internet problems at my place.
Anyway, it seems you're right. Technically, it might be more plausible for AI to be coded faster (higher variance), even though I think it will take more time than emulation (on average).
I agree.
Why does this make it more plausible that a person can sit down and invent a human-level artificial intelligence than that they can sit down and invent the technical means to produce brain emulations?
Why do we assume that all that is needed for AI is a clever insight, not the insight-equivalent of a long engineering time and commitment of resources?
How is theoretical progress different from engineering progress?
Is the following an example of valid inference?
We haven't solved many related (and seemingly easier) (sub)problems, so the Riemann Hypothesis is unlikely to be proven in the next couple of years.
In principle, it is also conceivable (though not probable) that someone will sit down and build a brain emulation machine.
Hello! My name is Christopher Galias and I'm currently studying mathematics in Warsaw.
I figured that using a reading group would be helpful in combating procrastination. Thank you for doing this.
This is the part of this section I find least convincing.
To be clear, you are saying that a thing will seem frivolous if it does have a relevant franchise, but hasn't happened in real life?
Yes, that was my (tentative) claim.
We would need to know whether the examples were seen as frivolous after they came into being, but before the technology started being used.
Can't we use a hierarchy of ordinal numbers and a different ordinal sum (e.g. maybe something of Conway's) in our utility calculations?
That is, lying would be infinitely bad, but lying ten times would be infinitely worse.
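The tiered-utility idea above can be sketched in a few lines. This is a minimal illustration under the simplest assumption (not anything Conway-specific): utility is a pair (lies told, mundane utility), compared lexicographically so that fewer lies always dominates any finite mundane payoff. The class name and fields are hypothetical, chosen just for this example.

```python
# A minimal sketch of lexicographic ("ordinal-tiered") utilities.
# Assumption: utility is a pair (lies_told, mundane_utility), compared
# lexicographically, with fewer lies strictly preferred at any mundane cost.

from functools import total_ordering

@total_ordering
class TieredUtility:
    def __init__(self, lies_told, mundane_utility):
        self.lies = lies_told
        self.mundane = mundane_utility

    def __eq__(self, other):
        return (self.lies, self.mundane) == (other.lies, other.mundane)

    def __lt__(self, other):
        # The "lies" tier dominates: more lies is worse no matter how
        # large the mundane payoff.
        if self.lies != other.lies:
            return self.lies > other.lies  # more lies -> lower utility
        return self.mundane < other.mundane

one_lie = TieredUtility(lies_told=1, mundane_utility=10**9)
no_lies = TieredUtility(lies_told=0, mundane_utility=0)
ten_lies = TieredUtility(lies_told=10, mundane_utility=10**9)

assert one_lie < no_lies   # one lie is worse than any honest outcome
assert ten_lies < one_lie  # lying ten times is worse still
```

This captures "lying is infinitely bad, and lying ten times is infinitely worse" without literal infinities: the first coordinate acts as a higher ordinal tier.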
OK, but war happens in real life. For most people, the only time they hear of AI is in Terminator-like movies.
I'd rather compare it to some other technological topic, but which doesn't have a relevant franchise in popular culture.
As a possible failure of rationality (curiosity?) on my part, this week's topic doesn't really seem that interesting.
What topic are you comparing it with?
When you specify that, I think the relevant question is: does the topic have an equivalent of a Terminator franchise?
No need to apologize - thank you for your summary and questions.
Though it may not be central to Bostrom's case for AI risk, I do think economics is a good source of evidence about these things, and economic history is good to be familiar with for assessing such arguments.
No disagreement here.
I'm just trying to make sure I understand - I remember being confused about the Flynn effect and about what Katja asked above.
How does the Flynn effect affect our belief in the hypothesis of accumulation?
It is possible, then, that exposure to complex visual media has produced genuine increases in a significant form of intelligence. This hypothetical form of intelligence might be called "visual analysis." Tests such as Raven's may show the largest Flynn gains because they measure visual analysis rather directly; tests of learned content may show the smallest gains because they do not measure visual analysis at all.
Do you think this is a sensible view?
The terms that I singled out while reading were: Backpropagation, Bayesian network, Maximum likelihood, Reinforcement learning.
You could start at a time better suited for Europe.
I was under the impression (after reading the sections) that the argument hinges much less on (economic) growth than might be gleaned from the summary here.
There's a small chance I might be there - if not, see you next time!
I would be interested, but I'd prefer the day before or so.
Thank you for giving links to papers.
Making lists.
Too bad I missed this.
Somewhat relevant: http://golem.ph.utexas.edu/category/2007/05/linear_algebra_done_right.html
I've also seen this book described as "one of those texts that feels like a piece of category theory even though it’s not actually about categories", which is high praise.
The cost here might be someone implementing a technical solution.
Are minor nuisances never worth solving?
I understand. Nevertheless, discussion so far hasn't gotten anywhere. Perhaps downvoting meetup threads would put some pressure on people involved in meetups to resolve the matter.
As of now, I haven't downvoted any meetup-related thread.
Is it OK for me to downvote meetup threads if I don't want to see them?
Thanks for the piece of counter-data!
I might look into the book, but the naming convention is a big turnoff.
I already mentioned what Halmos' stance was. What I'm more interested in is how it is possible to work without examples.
That seems somewhat surprising coming from Gowers.
No, of course not, but it still might make sense to wonder why that is so.
Whereas I can (somewhat) make sense of thinking with examples, it seems hard to describe what exactly it means to think with general abstract concepts.
Can you provide some more background? What is a morphism of computations?
On the other hand, allowing any invertible function to be a morphism doesn't seem strict enough. For one thing, we can put any reversible computation in 1-1 correspondence with a program that merely stores a copy of the first program's initial state and ticks off the natural numbers.
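The objection can be made concrete with a toy sketch. Under the assumption that a "computation" is just a step function on states, the trivial counter program below (all names here are hypothetical, invented for illustration) is itself invertible, yet it tracks any reversible computation step for step while computing nothing about it:

```python
# Toy illustration: a trivial program whose state is (copy of the original
# program's initial state, tick count). It is perfectly invertible, yet it
# can be put in 1-1 correspondence with the steps of ANY reversible
# computation - which suggests "invertible" alone is too weak a criterion
# for being a morphism of computations.

def trivial_step(state):
    """Keep the stored initial state untouched; tick off natural numbers."""
    initial_copy, tick = state
    return (initial_copy, tick + 1)

def trivial_unstep(state):
    """Inverse step, showing the trivial program is itself reversible."""
    initial_copy, tick = state
    return (initial_copy, tick - 1)

# Start from some initial state of a real reversible computation...
s0 = "whatever the real program starts with"
state = (s0, 0)

# ...and after n steps the trivial program's state is just (s0, n).
for _ in range(5):
    state = trivial_step(state)

assert state == (s0, 5)
assert trivial_unstep(trivial_step(state)) == state  # invertibility
```

The point is that the bijection between the two programs' step-indexed states preserves invertibility but none of the structure we would want a morphism of computations to respect.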
I don't understand why this is a counterexample.
What fanfics should I read (perhaps as a HPMOR substitute)?
Reading Model Theory was the first time in my life that I read a chapter of a textbook and it made absolutely no sense. In fact, it took about three passes per chapter before it made sense.
I find this experience common, and I'm sure most working mathematicians (as opposed to mere students) would confirm it. One of the most important things is not getting discouraged in the face of total incomprehensibility.
Burns Depression Checklist?
That doesn't seem relevant, since Krav Maga specifically teaches you things like targeting the throat (or the groin).
Luke has said it will be in a different ebook.