Comment by Fhyve on If you can see the box, you can open the box · 2015-03-02T09:06:18.345Z · LW · GW

Burning cats is another good example. Can you feel how much fun it is to burn cats? Some people used to have all sorts of fun by burning cats. And this example is harder to justify with the wrong sort of bad models than either burning witches or torturing heretics.

Edit: Well, just scrolled down to where you talk about torturing animals. Beat me to it I guess...

Comment by Fhyve on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 113 · 2015-03-02T08:55:54.505Z · LW · GW

He doesn't need to stall for time to transfigure. He could have already been doing it over the last two chapters.

Comment by Fhyve on Low Hanging fruit for buying a better life · 2015-01-08T19:59:52.390Z · LW · GW

I have one of these. Can confirm: pretty good relative to other similarly priced knives I've tried, and even better than a high-quality knife of the same age when neither had been properly maintained.

Comment by Fhyve on Low Hanging fruit for buying a better life · 2015-01-08T19:56:16.541Z · LW · GW

In the spirit of this thread, take a typing class. I find that taking classes is an effective way to get over motivation blocks, if that's what is preventing you from learning touch typing.

Comment by Fhyve on Open thread, Oct. 27 - Nov. 2, 2014 · 2014-10-28T07:18:47.453Z · LW · GW

I'm a math undergrad, and I definitely spend more time in the second sort of style. I find that my intuition is rather reliable, so maybe that's why I'm so successful at math. This might be getting at the "two cultures of mathematics", where I am definitely on the theory-builder/algebraist side. I study category theory and other abstract nonsense, and I am rather bad (relative to my peers) at Putnam-style problems.

Comment by Fhyve on A Pragmatic Epistemology · 2014-08-08T23:49:33.054Z · LW · GW

The difference is that saying there is a territory is also a model. The way I would rephrase map/territory into this language is "the model is not the data."

Comment by Fhyve on A Pragmatic Epistemology · 2014-08-08T23:46:58.316Z · LW · GW

This is the best place to apply effort for my goals, because I think that there might be some problems underlying MIRI's epistemology and philosophy of math that are causing confusion in some of their papers.

Comment by Fhyve on A Pragmatic Epistemology · 2014-08-05T18:22:12.849Z · LW · GW

That it hasn't been radically triumphant isn't strong evidence against its world-beating potential, though. Pragmatism is weird and confusing; perhaps it just hasn't been exposited or argued for clearly and convincingly enough. Perhaps it historically has been rejected for cultural reasons ("we're doing physicalism so nyah"). I think there is value in clearly presenting it to the LW/MIRI crowd. There are unresolved problems with a naturalistic philosophy that should be pointed out, and it seems that pragmatism solves them.

As for originality, I'm not sure how to think about this. Pretty much everything has already been thought of, but it is hard to read enough of the literature to be familiar with all of it. So how do you write? Acknowledge that there probably is some similar exposition, but we don't know where it is? What if you've come up with most of these ideas yourself? What if every fragment of your idea has been thought of, but it has never been put together in this particular way (which I suspect is going to be the case with us)? The only reason to disclaim originality is so as not to seem arrogant to people like you who've read these arguments before.

Do you have direct, object-level criticisms of our version of pragmatism? Because that would be great. We've been having a hard time finding ones that we haven't already fixed, and it seems really unlikely that there aren't any. (I've been working on this with the OP.)

Comment by Fhyve on A Pragmatic Epistemology · 2014-08-05T18:02:25.111Z · LW · GW

The computable algorithm isn't a meta-model though. It's just you in a different substrate. It's not something the agent can run to figure out what to do, because it necessarily takes more computing power. And there is nothing preventing such a pragmatic agent from having a universe-model that is computable, considering finding a computable algorithm approximating itself, and copying that algorithm over and over.

Comment by Fhyve on Me and M&Ms · 2014-08-04T21:31:48.367Z · LW · GW

Intervals and ratios are going to be essentially the same thing for conventional pomodoros. They are some time on, some time off, repeat. It might be weird to have variable pomodoros since the break is for mental fatigue, not reward. Perhaps some mechanism to reward you with an M&M at some time randomly in the second half of your pomodoros?
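The random-reward idea at the end can be sketched in a few lines. This is just an illustrative sketch of a variable-interval schedule (the function name and 25-minute default are my assumptions, not anything specified above): draw the reward time uniformly from the second half of the pomodoro, so the reward stays unpredictable without interrupting the early focus period.

```python
import random

POMODORO_MINUTES = 25  # conventional pomodoro length (assumed)

def schedule_reward(pomodoro_minutes=POMODORO_MINUTES):
    # Pick a reward time uniformly at random within the second half
    # of the pomodoro: a variable-interval schedule, so the M&M can't
    # be anticipated, while the first half stays distraction-free.
    return random.uniform(pomodoro_minutes / 2, pomodoro_minutes)

reward_at = schedule_reward()
print(f"M&M dispenses at minute {reward_at:.1f} of this pomodoro")
```

A real version would pair this with a timer that fires at `reward_at`, but the scheduling step is the only part the comment proposes.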

Comment by Fhyve on Connection Theory Has Less Than No Evidence · 2014-08-04T01:44:14.257Z · LW · GW

The most charitable take I can form is similar to Scott's on MBTI (http://slatestarcodex.com/2014/05/27/on-types-of-typologies/). It might not be validated by science, but it provides a description language with a high degree of granularity for something that most people don't have a good description language for. So under this interpretation, it is more of a theory in the social-science sense: a lens through which to look at human motivation, behaviour, etc. This probably differs from, and is a much weaker claim than, what people at Leverage would make.

I don't know how I feel about the allegations at the end. It seems that, connection theory aside, Leverage is doing good work, and having more money is generally better. I would neither endorse nor criticize their use of it, but since I don't want those tactics used by arbitrary people, I'd fall on the side of criticizing. I would also recommend that the aforementioned creator not be so open about his ulterior motives and some other things he has mentioned in the past. All in all, Connection Theory is not what Leverage is selling it as.

Edit: I just commented on the theory side of it. As for the therapy side (or however they are framing the actual actions side): a therapy doesn't need its underlying theory to be correct in order to be effective. I am rather confident that actually doing the connection theory exercises will be fairly beneficial, though actually doing a lot of things coming from psychology will probably be fairly beneficial. And other than the hole in your wallet, talking to the aforementioned creator probably is too.

Comment by Fhyve on Connection Theory Has Less Than No Evidence · 2014-08-04T01:25:16.561Z · LW · GW

I'd say Nick Bostrom (a respected professor at Oxford) writing Superintelligence (and otherwise working on the project), this (https://twitter.com/elonmusk/status/495759307346952192), and some high-profile research associates and workshop attendees (Max Tegmark, John Baez, quite a number of Google engineers) give FAI much more legitimacy than connection theory.

Comment by Fhyve on Connection Theory Has Less Than No Evidence · 2014-08-04T01:19:13.360Z · LW · GW

If you want a more precise date for whatever reason, it was right at the end of the July 2013 workshop, which was July 19-23. There were a number of Leverage folk who had just started the experiment there.

Comment by Fhyve on Bragging Thread, August 2014 · 2014-08-04T00:48:59.523Z · LW · GW

I'm currently interning at MIRI; I had a short technical conversation with Eliezer, a multi-hour conversation with Michael Vassar, and other people seem to be taking me as somewhat of an authority on AI topics.

Comment by Fhyve on Irrationality Game III · 2014-03-14T04:31:47.588Z · LW · GW

I agree. I want to comment on some of the downvoted posts, but I don't want to pay the karma.

Comment by Fhyve on Irrationality Game III · 2014-03-14T04:21:08.001Z · LW · GW

Irrationality Game:

Politics (in particular, large governments such as the US, China, and Russia) are a major threat to the development of friendly AI. Conditional on FAI progress having stopped, I give a 60% chance that it was because of government interference, rather than existential risk or some other problem.

Comment by Fhyve on A Fervent Defense of Frequentist Statistics · 2014-02-22T21:07:37.781Z · LW · GW

Bayes is epistemological background not a toolbox of algorithms.

I disagree: I think you are lumping two things together that don't necessarily belong together. There is Bayesian epistemology, which is philosophy, describing in principle how we should reason, and there is Bayesian statistics, something that certain career statisticians use in their day to day work. I'd say that frequentism does fairly poorly as an epistemology, but it seems like it can be pretty useful in statistics if used "right". It's nice to have nice principles underlying your statistics, but sometimes ad hoc methods and experience and intuition just work.

Comment by Fhyve on Rationality & Low-IQ People · 2014-02-03T08:03:48.985Z · LW · GW

Depending on the IQ test, I don't think your overall score will go down much if you don't do well on a subsection or two. This is low confidence, and based on only one data point though. I have scores ranging from 102 to 136 and my total score somehow comes out to be 141.

Comment by Fhyve on The dangers of zero and one · 2013-11-25T08:19:50.179Z · LW · GW

That only means you are good at arithmetic. Can you prove, say, that there are no perfect squares of the form

3^p + 19(p-1)

where p is prime?
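As a quick numerical sanity check (not a proof, and only over small primes), a short script can confirm that no counterexample is lurking nearby. The helper names here are my own:

```python
import math

def is_square(n):
    # Exact integer square test via math.isqrt (handles big ints).
    r = math.isqrt(n)
    return r * r == n

def primes_up_to(n):
    # Sieve of Eratosthenes.
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, math.isqrt(n) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, flag in enumerate(sieve) if flag]

# Primes p for which 3^p + 19(p-1) is a perfect square:
counterexamples = [p for p in primes_up_to(200) if is_square(3**p + 19 * (p - 1))]
print(counterexamples)  # → []
```

Of course, an empty list up to 200 says nothing about all primes; the point of the challenge is that the actual proof needs number theory, not arithmetic.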

Comment by Fhyve on Quantum versus logical bombs · 2013-11-25T07:30:02.274Z · LW · GW

The spaceship "exists" (I don't really like using exists in this context because it is confusing) in the sense that in the futures where someone figures out how to break the speed of light, I know I can interact with the spaceship. What is the probability that I can break the speed of light in the future?

Then for Many Worlds, what is the probability that I will be able to interact with one of the Other Worlds?

I would not care more about things if I gain information that I can influence them, unless I also gain information that they can influence me. If I gain credence in Many Worlds, then I only care about Other Worlds to the extent that it might be more likely for them to influence my world.

Comment by Fhyve on What do we already have right? · 2013-11-25T06:56:45.408Z · LW · GW

I disagree with "common sense." In my experience, when questioning people about what they mean by common sense, I find that they usually mean "general principles that seem obviously correct to me." And that doesn't even guarantee that they are correct.

Comment by Fhyve on MIRI course list study pairs · 2013-11-16T00:22:47.434Z · LW · GW

I've got Categories for the Working Mathematician by Mac Lane; I will be going through this because I will be giving some talks on category theory to the math club here at my university. I pretty much don't have any logic and I want logic. I have Enderton's A Mathematical Introduction to Logic, which is OK, though I think I want to find a new book. I also have Probability: The Logic of Science that I want to work through. I also want to go through MIRI papers. I am a math undergrad.

I would like to be a part of a study pair or a study group. There seems to be enough people that we can group together. I would like to learn from people, and teach people what I know (mostly pure math: category theory/abstract algebra/algebraic topology and basic calculus/real analysis).

Comment by Fhyve on Harry Potter and the Methods of Rationality discussion thread, part 26, chapter 97 · 2013-08-19T20:36:35.619Z · LW · GW

aka Baba Yaga

Comment by Fhyve on Engaging Intellectual Elites at Less Wrong · 2013-08-17T07:43:20.243Z · LW · GW

+1 because of the first point. Right now we are using this catch-all Reddit style "discussion" forum to encompass absolutely everything and it is a mess.

Comment by Fhyve on More "Stupid" Questions · 2013-08-02T05:08:15.669Z · LW · GW

How about 3^...(3^^^3 up arrows)...^3?
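The shorthand here leans on Knuth's up-arrow notation, which has a simple recursive definition; a minimal Python sketch (function name is my own, and it is only feasible for tiny inputs, since the values explode hyper-exponentially):

```python
def arrow(a, n, b):
    # Computes a ↑^n b in Knuth's up-arrow notation:
    #   a ↑ b        = a**b
    #   a ↑^n b      = a ↑^(n-1) (a ↑^n (b-1)),  with a ↑^n 0 = 1
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return arrow(a, n - 1, arrow(a, n, b - 1))

print(arrow(3, 1, 3))  # 3^3 → 27
print(arrow(2, 2, 3))  # 2↑↑3 = 2^(2^2) → 16
```

The number in the comment, 3 ↑^(3↑↑↑3) 3, is of course astronomically beyond anything this function could ever evaluate; the sketch only pins down what the notation means.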

Comment by Fhyve on Boring Advice Repository · 2013-07-30T17:41:41.069Z · LW · GW

You might want to make the habit a bit shorter than that so that it is easier to practice and repeat a lot.

Comment by Fhyve on Rationality Quotes from people associated with LessWrong · 2013-07-30T16:45:49.748Z · LW · GW

This is more to address the common thought process "this person disagrees with me, therefore they are an idiot!"

Even if they aren't very smart, it is better to frame them as someone who isn't very smart than to reach for the directly derogatory term "idiot."

Comment by Fhyve on Rationality Quotes from people associated with LessWrong · 2013-07-30T04:40:27.548Z · LW · GW

"How do you not have arguments with idiots? Don't frame the people you argue with as idiots!"

-- Cat Lavigne at the July 2013 CFAR workshop

Comment by Fhyve on Open thread, July 29-August 4, 2013 · 2013-07-30T04:38:04.677Z · LW · GW

Does anyone know of a good textbook on public relations (PR), or a good resource/summary of the state of the field? I think it would be interesting to know about this, especially with regards to school clubs, meetups, and online rationality advocacy.

Comment by Fhyve on Inferential credit history · 2013-07-28T23:57:40.974Z · LW · GW

Okay, that's reasonable. But can we talk about the content of the post itself? I don't think this is really the most important part of the post, or that the top comment should be about it.

Comment by Fhyve on Writing Style and the Typical Mind Fallacy · 2013-07-15T21:48:28.967Z · LW · GW

I prefer your style (rather, I really dislike Eliezer's style). Possible data points: I read a lot of math: math blogs, math texts, math papers, and I have poor reading comprehension and reading speed. I don't have a particularly short or long attention span, and I don't really read much science or philosophy. I didn't get a whole lot of epiphanies from the sequences, though it did have a strong influence on how I think (ie. my updates weren't felt as epiphanies).

I like the structure of your writing. I like to build my mental categories from the top down, and structured writing helps me put things in mental buckets. For quite a while after reading the sequences, the whole idea of rationality was a big muddle of concepts and I had a hard time thinking about it as a whole. I had to think it over and do all the categorization by myself, which was a lot of work, and I don't think I benefited enough from having to do that to justify the exercise.

Comment by Fhyve on "Stupid" questions thread · 2013-07-13T19:57:44.855Z · LW · GW

In transparent box Newcomb's problem, in order to get the $1M, do you have to (precommit to) one box even if you see that there is nothing in box A?

Comment by Fhyve on [META] Open threads (and repository threads) are underutilized. · 2013-07-13T03:05:57.600Z · LW · GW

Why can't we implement subreddits here? Seems like it would be super useful, for this and for other problems like the fact that philosophy, AGI, life extension/transhumanism, and rationality all get mixed into the same discussion section.

Comment by Fhyve on Optimizing Workouts for Intellectual Performance · 2013-07-07T05:38:34.590Z · LW · GW

Have you looked at rhodiola and L-theanine? They tend to counter some of the negative effects of more intense nootropics.

Comment by Fhyve on Open Thread, July 1-15, 2013 · 2013-07-06T06:29:31.420Z · LW · GW

I am mostly talking about epistemic rationality, not instrumental rationality. With that in mind, I wouldn't consider anyone from a hundred years ago or earlier to be up to my epistemic standards because they simply did not have access to the requisite information, ie. cognitive science and Bayesian epistemology. There are people that figured it out in certain domains (like figuring out that the labels in your mind are not the actual things that they represent), but those people are very exceptional and I doubt that I will meet people that are capable of the pioneering, original work that these exceptional people did.

What I want are people who know about cognitive biases, understand why they are very important, and have actively tried to reduce the effects of those biases on themselves. I want people who explicitly understand the map and territory distinction. I want people who are aware of truth-seeking versus status arguments. I want people who don't step on philosophical landmines and don't get mindkilled. I would not expect someone to have all of these without having at least read some of Lesswrong or the above material. They might have collected some of these beliefs and mental algorithms on their own, but it is highly unlikely that they came across all of them.

Is that too much to ask? Are my standards too high? I hope not.

Comment by Fhyve on Open Thread, July 1-15, 2013 · 2013-07-03T19:51:25.810Z · LW · GW

Pretty much someone who has read the Lesswrong sequences. Otherwise, someone who is unusually well read in the right places (cognitive science, especially biases; books like Good and Real and Causality), and demonstrates that they have actually internalized those ideas and their implications.

Comment by Fhyve on An attempt at a short no-prerequisite test for programming inclination · 2013-07-03T10:52:13.772Z · LW · GW

This might be a more enjoyable test (warning, game and time sink): http://armorgames.com/play/6061/light-bot-20

Comment by Fhyve on Open Thread, July 1-15, 2013 · 2013-07-03T10:06:45.200Z · LW · GW

To be honest, unless they have exceptional mathematical ability or are already rationalists, I will consider them to be mooks. Of course, I won't make that apparent; it is rather hard to make friends that way. Acknowledging that you are smart is a very negative signal, so I try to be humble, which can be awkward in situations like when only two out of 13 people pass a math course that you are in, and you got an A- and the other guy got a C-.

And by the way, rationality, not rationalism.

Comment by Fhyve on What are you working on? July 2013 · 2013-07-03T09:38:21.604Z · LW · GW

Tutorials/texts that I know of are Software Foundations, Andrej Bauer's tutorial, and this HoTT-Coq tutorial. It looks like installing the HoTT library is a huge pain in the arse though, so I think I'll stick with vanilla Coq until either I get one of my CS friends to install it for me, or they make a more user-friendly install.

Edit: also this

Comment by Fhyve on What are you working on? July 2013 · 2013-07-03T07:07:51.056Z · LW · GW

Why Haskell and not Coq or Agda? That's where all the HoTT stuff is being done anyways.

Comment by Fhyve on Open Thread, July 1-15, 2013 · 2013-07-03T06:13:52.474Z · LW · GW

How do you upgrade people into rationalists? In particular, I want to upgrade some younger math-inclined people into rationalists (peers at university). My current strategy is:

  • incidentally name drop my local rationalist meetup group, (ie. "I am going to a rationalist's meetup on Sunday")

  • link to lesswrong articles whenever relevant (rarely)

  • be awesome and claim that I am awesome because I am a rationalist (which neglects a bunch of other factors for why I am so awesome)

  • when asked, motivate rationality by indicating a whole bunch of cognitive biases, and how we don't naturally have principles of correct reasoning, we just do what intuitively seems right

This is quite passive (other than name dropping and article linking) and mostly requires them to ask me about it first. I want something more proactive that is not straight-up linking to Lesswrong, because the first thing they go to is The Simple Truth and they immediately get turned off by it (The Simple Truth shouldn't be the first post in the first sequence that you are recommended to read on Lesswrong). This has happened a number of times.

Comment by Fhyve on Harry Potter and the Methods of Rationality discussion thread, part 20, chapter 90 · 2013-07-03T05:56:18.052Z · LW · GW

"I need access to the restricted section, I don't want another one of my friends to die"

I would suspect that an argument along those lines would be much more likely to succeed if Quirrell hadn't given his instructions.

Comment by Fhyve on Bad Concepts Repository · 2013-07-02T22:30:05.474Z · LW · GW

I have read around and I still can't really tell what Westergaardian theory is. I can see how harmony fails as a framework (it doesn't work very well for a lot of music I have tried to analyze), so I think there is a good chance that Westergaard is (more) right. However, all I have gathered is that there are these things called lines, and that there exist rules (I have not actually found a list or description of such rules) for manipulating them. I am not sure how this is different from counterpoint. I don't want to go and read a textbook to figure this out; I would rather read ~5-10 pages of exposition and big-picture summary.

Comment by Fhyve on Harry Potter and the Methods of Rationality discussion thread, part 20, chapter 90 · 2013-07-02T22:22:28.653Z · LW · GW

Just telling everyone to keep Harry away from it improves the security.

Comment by Fhyve on What are you working on? July 2013 · 2013-07-02T21:01:24.006Z · LW · GW

In that link, is that the 3 dimensional analog of living on a 2D plane with a hole in it, and when you enter the hole, you flip to the other side of the plane? (Or, take a torus, cut along the circle farthest from the center, and extend the new edges out to infinity?)

Comment by Fhyve on Harry Potter and the Methods of Rationality discussion thread, part 20, chapter 90 · 2013-07-02T20:28:13.128Z · LW · GW

And mentioned numerous times.

Comment by Fhyve on Bad Concepts Repository · 2013-07-02T05:03:01.319Z · LW · GW

Nitpick: I would consider the Weierstrass function a different sort of pathology than non-standard models or Banach-Tarski - a practical pathology rather than a conceptual pathology. The Weierstrass function is just a fractal. It never smooths out no matter how much you zoom in.
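The "never smooths out" behaviour is easy to see numerically. Below is an illustrative sketch (parameter choices a = 0.5, b = 13 are mine; they satisfy Weierstrass's original sufficient condition ab > 1 + 3π/2 for nowhere-differentiability): partial sums of the series stay bounded, yet the difference quotients at a point keep growing in magnitude as the step shrinks, instead of settling toward a derivative.

```python
import math

def weierstrass(x, a=0.5, b=13, terms=30):
    # Partial sum of W(x) = sum_n a^n * cos(b^n * pi * x).
    # With 0 < a < 1, b an odd integer, and a*b > 1 + 3*pi/2,
    # the limit is continuous everywhere, differentiable nowhere.
    return sum(a**n * math.cos(b**n * math.pi * x) for n in range(terms))

# Difference quotients at x = 0 blow up as h shrinks -- the graph
# keeps wiggling at every scale rather than flattening into a tangent:
for h in (1e-1, 1e-2, 1e-3):
    print(h, (weierstrass(h) - weierstrass(0)) / h)
```

(One caveat: any finite partial sum is actually smooth, so for h far below b^-terms the quotients would eventually converge; the fractal regime shown here holds as long as b^terms · h is large.)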

Comment by Fhyve on Bad Concepts Repository · 2013-07-02T04:42:17.672Z · LW · GW

I think any correct use of "need" is either implicitly or explicitly a phrase of the form "I need X (in order to do Y)".

Comment by Fhyve on Harry Potter and the Methods of Rationality discussion thread, part 20, chapter 90 · 2013-07-02T03:20:51.068Z · LW · GW

Why does he think of beefing up the restricted section's security only after his conversation with Harry? What did he learn?

I also don't see bringing Harry's parents to Hogwarts as being terribly predictable.

Comment by Fhyve on Harry Potter and the Methods of Rationality discussion thread, part 19, chapter 88-89 · 2013-07-01T00:03:14.965Z · LW · GW

There is no way Harry would get expelled. He is at Hogwarts for his protection - to be close to Dumbledore - not so that he can go to school.