I guess these are the few sentences which do this, e.g. "I thought it sounded stupid/tired/misleading/obvious", and the smarter the audience, the better this works.
I'll have to go back and reread the first paragraph, and look again at the second paragraph ("Hey guys, I just looked at this, I'm curious what LW's takeaways are, and why"), which is the only part that feels familiar to me, apart from the last paragraph. Do you have a good explanation for the "other posts are terrible, I'll just go and read the second one" paragraph? Perhaps not, but my model of you is such that I trust you, and the second paragraph alone isn't enough.
Please try to read your post in full, and provide concrete examples and solutions. Thanks for your time; I'm glad you wrote each one.
(Also, I just realized there are more than four of us. I don't have the space to do much else there, but I could use a few people if you're interested in helping.)
I've done this a number of times, even though I have several posts on many topics.
To clarify, the first reason I write most of my posts is to see what others think of the topic as a rationality-related subject. The second reason is to see what the discussion is already covering in detail, and to learn more about the topic in depth.
I think you meant "explied postrationality."
Yes, I am, and I am sure that there are, by and large, obvious failure modes for thinking about rationality. However, it's not obvious that a post like this is useful, i.e., epistemically useful in a way a reader could act on.
This is a very good post.
Another important example:
But it’s possible to find hidden problems inside the problem, and doing so is quite challenging.
What if your intuition comes from computer science, machine learning, or game theory, and you can exploit it? If you’re working on something like general intelligence or general problem solving, how do you get started?
When I see problems with a search algorithm, my intuition sends the message that something is wrong. All of my feelings are about how the work gets done and how it usually goes wrong.
I am a huge fan of the SSC comments, and of that style on a significant portion of LW, but I have a hard time keeping up with them and I worry that I am not following them closely enough.
The whole point of the therapy thing is that you don't know how to describe the real world.
But there's a lot of evidence that it is a useful model, and I have a big, strong intuition that it is a useful thing, so it isn't really an example of something that "gives you away." (You still have to interpret the evidence to see what it's like.)
[EDIT: Some commenters pointed to "The Secret of Pica," which I should have read as an appropriate description of the field; see here.]
I'm interested in people's independent opinions, especially their opinions expressed here before I've received any feedback.
Please reply to my comment below saying I am aware of no such thing as psychotherapy.
Consider the following research while learning about psychotherapy. It is interesting, though I do not have access to the full scientific data on the topic being studied. The treatment studied is also highly addictive and has fairly high attrition rates.
Most people would not rate psychotherapy as something that pays off "over the long run." Some would say it is dangerous, especially for people who are disabled or in a negatively altered state; most people would agree that it is not. But as you read the literature, there is a qualitative difference between a treatment that worked and one that did not.
I know that I'm biased against the former, but this sentence is so politically loaded that I can only hope you will pardon it.
I think that your "solution" is the right one, though I don't think there's any strong reason to believe that yet.
"It's going to be a disaster," you say. "And it's always a disaster."
My personal take on the math of game theory is that most games are really, really simple to play. It's easy to imagine that a player has a huge advantage and thus needs more knowledge than a team of AI researchers to exploit it.
But as you write, that's not something you'd expect if the games really were that simple to play. Precisely because they are a big challenge to play and to solve, we should expect that a substantial number of games have proven good enough to actually play (you can find out how good you are, or how far to trust what the AI researchers write).
In fact, besides playing any game you choose, you may get the chance to design your own game. I imagine that's not so helpful if you're mindlessly trying to think in words, but it is if you want a game that will actually prove something.
But I'd also offer the chance to write a computer game built on prediction markets. I can write the game, or I can write an email to the game designer, proposing solutions or deriving a solution from the rules.
I'm sure it wasn't the most important game, but it's the first example I took a lot of experience away from. I wasn't originally going to write this comment, so instead I'm going to write a simpler game.
I will publish the full logs for anyone who wants them.
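Since the comment talks about a game built on prediction markets, here is a minimal sketch of what the core of such a game could look like. The choice of a logarithmic market scoring rule, and all names and numbers, are my assumptions for illustration, not anything from the comment.

```python
import math

# Toy prediction-market core: players buy shares in outcomes,
# and prices move with the outstanding shares (LMSR market maker).

B = 10.0                        # liquidity parameter
shares = {"yes": 0.0, "no": 0.0}

def cost(s):
    """LMSR cost function C(q) = B * ln(sum_i exp(q_i / B))."""
    return B * math.log(sum(math.exp(q / B) for q in s.values()))

def price(s, outcome):
    """Current price of an outcome, i.e. its implied probability."""
    total = sum(math.exp(q / B) for q in s.values())
    return math.exp(s[outcome] / B) / total

def buy(s, outcome, amount):
    """Buy `amount` shares of `outcome`; return what the player is charged."""
    before = cost(s)
    s[outcome] += amount
    return cost(s) - before

print("p(yes) =", round(price(shares, "yes"), 3))      # 0.5 before any trades
paid = buy(shares, "yes", 5.0)
print("paid", round(paid, 3), "-> p(yes) =", round(price(shares, "yes"), 3))
```

A game loop on top of this would just alternate player trades with eventual resolution of the question, scoring players by their final holdings.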
That doesn't mean your view can't be correct; it may be as true as you claim. The claim is just that it's difficult to determine whether there's actually a fact of the matter about how to deal with quantum mechanics.
If there weren't, then you would be far wrong. If there were, then you and I might still hold different opinions. What I would propose is a way to frame our disagreement about what 'true' means: we should be neither too confident nor too skeptical about other people's points on the theory, since either extreme leads to overly harsh criticism, or makes us look like the kind of fool who hasn't accepted the point yet.
I think the correct way to approach this problem is to ask how confident we are that the point being made is the right one. It seems obvious to me that we have little idea about the nature of the dispute. If we disagree, then I'll go first.
If a question is really important and it comes down to people saying "I think X", then it ought to come down to the following:
"I think X is true, and therefore Y is true. If we disagree, then you don't think X is true, and therefore you don't think Y is true."
In this case, if we held the same position but had a different conversation (as with Mr. Lee's comment at the end of the chapter), our disagreement could be resolved by debating the point directly (we could debate the details of the argument if we still disagree).
In other words, we all agree that we should be confident we have considered the point, but it's better to accept that we're making a concession. The point is that we shouldn't be confident in an argument that we wouldn't expect to work.
In all cases, this is the point the discussion often seems to be getting at.
This may seem like a pretty simple argument, and it is. The point is that there are many situations where you and some of your friends agree that the question should be resolved, and agree that the answer should be fairly obvious, and yet the disagreement turns out to be more complicated than that.
I read somewhere that there's a norm in academia that it should never be controversial for a student to
For example, you don't mention that your own score is 3/4 of your own. Since you don't get extra points for a similar point (which it is), you have to be a single person or even a group of like-minded people, and your percentage of your resources is 10%, while your ratio is 9/6 of your own.
I wonder if it would be better to ignore your own metrics (and thus treat your measure as something more complicated, but still a much higher bar):
- You don't need to report a score of only 10%
- You don't have to estimate the total resources you've contributed (help, money, etc.)
- You don't need to estimate the total amount of money you're spending on your metrics
- You don't need to reuse the same kinds of answers you're responding to
It's kind of like the tiniest part of my definition of futility.
I like the idea of using a "high level" section of a post, but it's hard to do any better than writing a bunch of summaries. That's the part that confuses me.
There's a lot to explain here, but I hope some of it can be discussed together. For example, I didn't like the term "high level" when I tried to engage with the post's use of the concept. I think "high level" is doing more work than "high-level" as a phrase; it's easier to follow if you define the higher-level concepts more clearly. For my purposes, "high level" is just the term I meant to use to communicate that.
And now I've tried to make "high-level" refer to things within the high-level concept itself: you need to know what "high-level" means to you, and what "low-level" means by contrast, so that you can understand it better once you define the term as a synonym for "high level".
(I'm starting to think I'm going to call it the "yitalistic level". But then why do I call it the "high level"? I find that definition hard to pin down.)
(I don't care if these terms have been used by people like me before, but someone probably ought to have noticed by now.)
Do you know if anyone has done this? I'm pretty sure your comment was well received, and it seems right to me. By contrast, gjm's post and mine (related to Eliezer's post on the issue of how much to trust, which is, in fact, interesting) seem to be basically the same.
If I want to say something about my own subjective experience, I could write that paragraph from a story I've been told, and say "Hey, I don't have to believe any more", and then leave it at that.
I'm not a fan of the first one. That is, my subjective experience (as opposed to the story I was told) does not have any relevance to my real experience of that scene, so I can't say for certain which one in particular is the right one.
I also have an important factual issue with having a similar scene (to an outsider) in which a different person can't help but intervene, which I do find confusing; and in that case, if my real feelings about the scene are only somewhat similar to the feelings the scene portrays, the scene will come across as very awkward.
So if someone can help me with this stuff, I can't ask to be arrested for letting anyone out on the street, for providing any evidence that they're "trying to pretend".
(I'm also assuming that the scene has to be generated by some kind of random generator, or by some technique that doesn't draw on anything in the original text.)
I want to see the X and Y that you are describing, but I don't feel confident I can make sense of them. So the question to ask, the one to which my brain replies "You're just as likely to get this wrong as right," seems to me a really important one.
For me, the fact that my post is currently here means something: there are people who are working on it. I want to encourage them to keep working on it, so I need to keep up with them.
My own, LessWrong-ish reaction is the one I'd have a problem with. My first reaction is "of course it helps, but...", which isn't enough to justify this post. If it doesn't fit my goals and my motivation is insufficient, I need to change that.
(Note: I'm not saying you have to take these posts seriously or otherwise deal with them. I'm saying "you may not like my post, but I would prefer that you take it seriously", because the only reason I ask is so that I don't have to keep asking.)
I've noticed a very interesting paper similar to this, one I've been working on but have only posted as a blog post.
It shows up in the sidebar and at the top; it's a first draft, it's very well written, and the rest is non-obvious to newcomers.
I am having trouble understanding why one would think I would want to be happy for an arbitrary number of people to live with me.
First of all, there's one specific failure mode this might be relevant to: it's easy to assume you know how happy those people are. I'm not going to try as hard as I can to be happy by being a good person, nor can I really justify that to myself.
Suppose I am sitting around in bed with my friends, who have no emotional response to certain stimuli or desires. I am also waiting for a sound teacher's phone number, a restaurant with an unknown family, and the class as a whole. We are waiting on the bus to get somewhere, and the sound teacher decides to put the "real" car behind it by giving us a dollar amount and a fraction of it. I have the feeling later that there is some $10 in that money, but later that $10 is just an outright trick to get me back.
But I don't even know what it is that I am feeling. It's something I've been experiencing for quite a while, and I do feel bad about it, but I don't know why. I don't even know how to describe it to my friends, let alone others, so I can't really offer a particular answer. It's hard enough for me to use the label "happy" in that sentence, and harder still to describe the feelings that make "sad" rather than "happy" the right word. I do know that these words are loaded with negative connotations, and the thing that makes "happy" trigger all those negative connotations is that they seem inherently negative to me.
So if you're going to try to learn to speak Spanish (or French) and so on, you really need to know the basics of the language (to speak it properly) and to have been practicing for years.
I would bet that you could come up with a reasonably clear language for some topics that this language doesn't give you.
(And if a language only gives you bad-sounding phrasing, don't spend a lot of effort trying to be clear; you'll just be frustrated, unless you've reached the level where it's fine to use it correctly...)
Edit: Also, I have my own thoughts about the way LessWrong is supposed to work: in general, I don't know that Kaj Sotala would share these thoughts about writing a rational wiki and having to write stuff up.
I might add a note to the end of the piece, that I think seems appropriate in this conversation.
I've been doing this a bit since last week, and I'll let it stand. I think I know why it works and how to make it better, but it's only half-verbal. I can imagine some sort of mental move to shift my schedule through the night and get myself back on track. I can also just imagine accepting the sleep deprivation: if I go sleep-free, I can make friends and play video games, or whatever. But I do get sleep deprived, so if I stay up I can stay in my current place, or work somewhere else, or stay there longer.
So it's not just that I try to change the sound or emotional content of what's going on by default, but also that I get a feeling of "I'm really not sure," a sense that I'm not getting the whole picture. In effect, my brain is reacting to the thing and I get no joy from it. It doesn't seem to be an emotion I like much; it feels like... euphoria, but hollow. That's actually quite frightening: it feels as though I'm a character rather than a person.
In fact, I can't believe I would experience this at all, because my brain will produce something that feels as though it might make a big difference; there seems to be some kind of compulsion to do something to motivate myself, so I'd simply fail to get it done, rather than getting it done as a result.
The whole thing just sort of feels like euphoria, and then there's the sense that I'm not actually going to get that feeling until I do something to motivate myself... and that feeling isn't actually motivating me at all; my imagination is just spending so much effort visualizing the world I'll have to deal with.
As best I can manage, I can imagine the part of me that feels the emotion of "I want to do X," but it comes through very lossily whenever X comes up (which isn't as bad as pretending that X is a moral obligation). The point is that the feeling isn't actually motivating me the way a feeling is supposed to.
And this is
Thanks for the feedback. To confirm: if meditation is not directly applicable to me, even if it is far more effective for others, I do not have a reason to feel particularly guilty about it. I feel I would like meditation if more than 60% of my experience with it matched how it was described, so it would be a good exercise to spend time practicing with that knowledge.
Another reason for not seeking it out is that I expect it to make me happy only when I am not already experiencing it. But I also want to have it. I have a lot of emotions: if I have a clear feeling that I am happy, I feel happy; if I have a clear feeling that I am unhappy, I feel unhappy as well; and if I can feel that unhappiness even more clearly, I end up feeling happier.
It could also help me change my state of mind in ways that I don't expect to work out. It might just be that I do not actually feel happy, but it also helps that neither of those feelings is causing me to experience negative emotion.
I also feel that there's too much material on this website trying to justify that sort of thing as "bad feelings." I am not sure what you mean by that phrase. For example, you might mean "That thing is a bad experience in itself, and I have nothing negative to add to it."
I don't have a problem with the suggestion that you could be having negative emotions, but I would also like to learn a little about how they arise, what they are, and what you generally encourage, since your emotional states are a result of that. I think there are a few different criteria you could use to try to overcome negative emotions:
- How emotionally powerful is the emotion/sadness factor? If you really want to make yourself feel that it's bad, you might have to be willing to "trick" your emotions out of turning into negative emotion. The thing is, you cannot get through without negative emotions, especially if you start feeling guilty about them.
- How emotionally powerful is the emotion/sadness factor? If the emotion/sadness factor is way below zero, then it might be
I don't think that lack of sleep is any more a problem than it is a problem with sleep.
I don't see the problem. Your mental experience is that it's hard to get to sleep, especially if you have to memorize your concept and use it in different situations.
I think your ability to sleep is much more a problem than it is a problem with sleep.
You have not even taken a CFAR unit and started using it for yourself.
I've been thinking about the "Muehlhauser as a Cool McGuffin" hypothesis for a couple of years.
In The Case of Boiling Boiling, Eliezer suggests using an "intelligence" (I hope it's clear what you meant by the term) to predict how long it should take to slow down an Earth ship while the ship is still flying.
For example, let's say the task of finding a safe, low-profile place to build the ship is extremely difficult, and we all have to be sure that we won't be embarrassed (say, we'd be in a room full of empty suits and we'd still be able to feel comforted and safe).
I've found that we are able to put about 200 hours into research work, and we all feel it would be too much time (or too much of some other activity) to do the research at an engineering level; but if we work out the solution and find it useful, I think we'd have significantly improved our research.
In the case of my own PhD thesis, there was no need for a team of humans (beyond myself), and you can decide very quickly whether to run a team of humans at all.
The problem is that, as the brain gets amplified, it always goes into a superintelligent mode in which it is utterly worthless. The solution is to remove all the reward associated with having the problem.
I'm not sure that's practical. The only way to do many of these tasks is to take a step we're eventually going to have to solve anyway, and if that happens the agent will be out of its depth, because it will only be able to attempt the task in a superintelligent mode.
You're going to need a team of trained reasoners to build the kind of automation you advocate. I think it's a good idea; I'm just not sure I know of anything better than this, and I'm not sure it will work. In any case, tasks for which we currently have only limited tools are good places to start (like writing), as is finding the sorts of people who can't do much of these tasks and the sorts of people who will be able to do them. I think the first step to solving this is the kind of automation I've recently been referring to when I say "AI is unlikely to succeed":
- A problem is a good guess at the general problem (and the general problem will probably occur to a reasonably competent agent).
- A solution is usually a good guess at the general solution (as long as it doesn't take a bunch of work).
- A solution exists (e.g. it solves the non-trivial problem in the first paragraph), and an AI system can solve it even if we have a more effective means (like writing it ourselves) but no existing software for that task can solve it for us.
I can't see any reason this can't be done in the current world (I've never tried to do it with the right tools, which would likely mean lower-quality automation than we could eventually have), and it seems like there is a lot of room for good solutions out there, so my intuition is that the first step would be to make them available for use. In the limit (which is a couple of years from now, if anything), this will be a much easier problem for AI to solve than it is for us, because the problem facing AI systems run in the current world is quite different.
(I'm not sure how to define this: if you don't like
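To make the bullets above concrete, here is a minimal sketch of the kind of check I have in mind before trusting a piece of automation with a task: compare it against a trusted reference on sampled instances. All names and the threshold are hypothetical choices for illustration, not anything from the comment.

```python
import random

def trust_automation(candidate, reference, instances, sample_size=20, threshold=0.95):
    """Return True if `candidate` agrees with `reference` often enough on a sample.

    candidate, reference: functions mapping a task instance to a solution.
    instances: a pool of task instances we already know how to handle.
    """
    sample = random.sample(instances, min(sample_size, len(instances)))
    agreement = sum(candidate(x) == reference(x) for x in sample) / len(sample)
    return agreement >= threshold

# Hypothetical usage: automating a string-normalization task we can already do by hand.
instances = ["  Foo ", "BAR", "baz  "]
reference = lambda s: s.strip().lower()
candidate = lambda s: s.lower().strip()
print(trust_automation(candidate, reference, instances, sample_size=3))  # True
```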
It would be nice if you could use this sort of software to build a good curriculum for the teaching profession. But instead of writing courses on how to teach skills, you need to write a list of the "attitudes" behind the teaching skills the rationalist community is known for. In particular, you need to measure how good your teaching process is, and how long it takes to master each skill.
It also doesn't have to be "my curriculum works" per se; it can just be "this is how you train the skills." You can start a sequence of posts with your own questions, and if the questions don't seem like fun, you can at least start somewhere.
- I think the general setup should be "all posts" at least, since it's straightforward to look at a post separately from a list of each concept.
- I think I have a specific concept for something I'm trying to say, but I don't know how to describe it.
- I think I've run into the problem of having to explicitly list several things in order to get the kind of answer that leads to more answers. If I don't feel like doing that by the time you get to the next one, it would be useful to have my own concept.
- That's what actually makes the post a better concept.
- The main problem is getting your concepts to work together, and that's not how things really work. I don't think it helps to have the concept rest on an underlying one: (a) you can't easily work together on a new concept, and (b) even if you could, you have to already have the concept in your area of expertise to build it into a new one.
So I'm hoping that it doesn't sound too insane to list a concept and then ask to be told how to use it, without which the concepts are useless.
Some thoughts on the latter point, which I touch on in a few places:
- It may be that this concept is already a large part of your thinking, but it doesn't have to be a big part of it.
- It may be that this insight isn't always useful in other kinds of contexts. I'm not sure it holds for everything context-dependent, but it seems like a useful concept that's already built into my brain.
- I'm not sure what to make of the "if anyone had a concept and I just don't keep track of it, it's not safe to ignore it" distinction.
Overall, this seems to have been more generally useful, and I'm now aware that keeping several threads of thought going seems easier and more natural to many people in some contexts. Is this explanation the thing to remember, though? I don't think it is. I also hope the rest of you find it
I see nothing in these that would say they're all false (or, what's more, more false than not).
There's no reason to expect that they're all false.
- "the person who has trial-level evidence is the one who is most likely to win the case". I do think the legal system is more clear now that a trial is just a "test" with a very large number of small number of cases. It has a lot of important differences from a trial where the victim fails before having a chance of being guilty, in comparison to a trial where if they succeed they should have a chance of being guilty, even though they are being paid by the jury.
I find this bit much more distracting than the previous two, which strike me as rather good. The worst part is the third section, which leaves people with a "hey, what's going on?" feeling and a lack of obvious structure.
This is why I find the discussion of AI safety interesting.
I think the main problems with the MIRI and FHI threads are somewhat different from each other.
There are multiple levels of accuracy. At most one level is clear.
One level is a set of observations; the other is a set of observations from which you may develop a useful model.
It is generally the case that the difference is only somewhat sharp at the first level. That's not true for the other levels; there it seems hard or impossible to draw.
One level of accuracy is the level at which you should develop a useful model; the more accurate you are at that level, the more useful the model will be.
One level is easy to figure out. The other is a set of observations you can only derive from the first.
The second level, the one you are more accurate about, is the type of model you might develop.
There are two degrees of accuracy here. One is the basic idea that one can build a universal learning machine without solving an open problem of mathematics (although it might turn out to be possible even if it's hard).
One level is a set of observations from which you can form a useful model; the other is a set of measurements.
One level is a specific process for generating or implementing a mathematical problem. The first level is a very useful sort of process, so to become more capable at it (e.g. by drawing up models) it probably has to be harder to "see" in the higher mathematics.
One level of accuracy is how well you can apply a mathematical problem to a model.
One might have to create a lot of models before one can start to form a good model of the model itself.
A level is how sure you can be of a given thing, or about something.
A level is what it would take to create and control a (very limited) quantity of this.
Some possible levels are in between.
One or more levels may be easy to observe, but it's important to get clear about them and use the information you have.
In your example, I can't see the connection between the observations and the process of generating the model.
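As I read the distinction between the "observation" level and the "model" level, a tiny illustration might help; the data and the choice of a least-squares line below are hypothetical, just to show one level being derived from the other.

```python
# Level 1: raw observations (hypothetical numbers, purely for illustration).
observations = [(0.0, 1.1), (1.0, 2.9), (2.0, 5.2), (3.0, 6.8)]

# Level 2: a model derived from those observations,
# here an ordinary least-squares line y ~ a*x + b.
n = len(observations)
mean_x = sum(x for x, _ in observations) / n
mean_y = sum(y for _, y in observations) / n
a = (sum((x - mean_x) * (y - mean_y) for x, y in observations)
     / sum((x - mean_x) ** 2 for x, _ in observations))
b = mean_y - a * mean_x

# The model supports predictions that the bare observations do not.
print(f"model: y ~ {a:.2f}*x + {b:.2f}; prediction at x=4: {a*4 + b:.2f}")
```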
It is easy to think of that as a "utility function," but it doesn't mean that utility functions are always zero. So we could have utility functions that make people behave like perfect utility maximizers.
The question around scope insensitivity might play out (to us) as something like an agent's utility function flattening out, with the only real input being the state of the world. However, the "limited utility function" framing seems to absorb that, so we can never really say anything negative about utility functions. In fact, the "limited utility function" doesn't really exist as a distinct thing (it's possible, but not universal for every purpose we can consider).
I'm not sure that this is true, but it seems like in many situations having a limited utility function can make people behave less ethically; still, I don't think one has to worry much about this particular scenario.
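As a toy illustration of what a "limited" (bounded) utility function could look like, and of why it produces scope-insensitive behavior without ever being literally zero (my example, not the commenter's):

$$u(n) = U_{\max}\left(1 - e^{-n/c}\right), \qquad u'(n) = \frac{U_{\max}}{c}\, e^{-n/c}$$

With $c = 1000$, the gain from helping the first 1000 people, $u(1000) - u(0) \approx 0.63\, U_{\max}$, dwarfs the gain from the next 1000, $u(2000) - u(1000) \approx 0.23\, U_{\max}$, so the agent looks scope-insensitive even though its utility keeps increasing.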
This is a good post, but it's not something that would save a person. Is it just that utility functions are always zero?
It might be worth looking into this: I don't think it makes sense to rely on the inside view of the utility function, and even if that view is right, it's worth examining the underlying assumptions.
I think those questions are interesting to argue about, but I'm not sure how to resolve the ones that might result in a bad outcome.
I think "humans as a model of the environment" is a very common framing, and I like the terminology, but I worry that the examples given are just straw. What should really be done is to establish a good set of terms (starting with a name), use a good definition, and decide which terms should come first, before trying to judge what is "really" going on.
I think people should be able to use existing terms more broadly. I just think it makes sense to talk about utilities over possible worlds and why we should want to have common words about them, so I'd be interested to better understand what they mean.
If you're interested in this post, see http://philpapers.org/surveys/results.pl.Abstract .
If you're interested in how people work and what sort of advantages might be real, I'd be especially interested in seeing a variety of explanations for why utility functions aren't the way they would be under similar circumstances.
- I don't feel like I have a great sense of how my preferences should be treated. Perhaps I would like to be more like a monster.
It doesn't even have to be a complete code for your explanation to be correct.
This is not a code, and it's not an AI; that's a big thing to claim as a general theorem. It's not a universal AI either, and I'm not sure what that would mean or how to define it. There's a lot that doesn't quite fit into my model of what the computer is doing, and that worries me. Still, it's probably a more likely explanation than many other things I can think of, so if I can't do better than that, I can't do better.
In general, we have good reason to expect human minds to be in the same universe as AI. If you say a universal AI is not able to design a better universal AI, then you are also ruling out many other things that could be reached. You're saying most things can be done faster than human minds do them in general, which would be an impressive fact.
There are lots of examples of this type of reasoning; some have come up recently on Less Wrong. The people in the comments seemed like they should know what they're talking about, yet they said that AI is a kind of magical stuff, and that it can therefore be used to make things happen by taking away its designer's power, as an application of Occam's razor. That's a very different sort of thing from an actual AI or machine; it just isn't what you want to reason with, and there is very little of it.
This is an interesting point about models of AI. It would be easy to come up with an answer to the question that is not very useful, or even one that would be hard to evaluate.
If the answer is "not using it", then there is a very high probability that the answer will be "use it" (the answer is not very useful). Any question is either inherently confusing, or is something we don't have a satisfactory answer to, or it's something that we don't have a satisfactory answer to. It's not a trivial problem; but it's an easy one.
Note that the point of your answer is not to try to understand what the world is like, or what we know.
Why aren't you looking for a specific example? You might find you can use it, or that it isn't specific enough, but you should be trying harder to
- Please provide feedback via the form when reading comments on this post. The form is hosted by, and submitted to, LessWrong 2.0. If you have any doubts about your ability to submit content, please do come by. If you'd like to learn more about the community, look it up under its new (and odd-sounding) name, or write a short comment under the same name.
- No propositions that people are allowed to ask unless they come with a lot of money.
- Yes/No propositions.
- No propositions that people are allowed to ask or expect to make.
- No propositions that will not be asked, will not be translated, or will not be publicly visible.
- I’m not as smart as Eliezer, and I’m not very good at verbalizing my arguments concisely.
- What the heck do you think you could do with the non-standard writing/contextual pieces you’d like to do? (I can write for length, but I’m not smart enough to write for length well, and I don’t feel confident in your argument.)
- Writing for length is a lot more valuable than regular prose, and I don’t feel confident that I could write that much, though I do think my writing skills have improved.
- On the margin, it’s easy to write fast, readable prose off the cuff, whereas it’s much more valuable to write in a style that’s intuitive or rigorous and doesn’t require long preliminary reading.
Anyway, I’m not sure there is much that could be done to improve writing quality in this way, besides improving my writing skills. I have some ideas, though, enough to move toward this possibility. (But I'll leave that to my personal point of view.)
I don't know much about "The Secret Life of Freud" but I don't really think it's the least bad.
On the other hand, I don't know much about it, but I do know it's better than the worst of mainstream philosophy's many bad ideas. So, given that, it seems like it could be a useful tool for some purposes:
The secret identity of the Freudian subject is that he has made himself out of these stories, and a number of other people have done the same. The most common version I remember reading about Freud is that he is cast as both a Good Guy and an Evil Guy.
Here, I'm talking about the mental image of the Good Guy or the Bad Guy, not a psychological picture of what his psychology, or "my brain," actually looks like. I think it's useful to consider that many people may be interested in stories about someone who has done these sorts of things, and that these are the kinds of stories that constitute the psychological pain in the experience.
I don't think that this would make a lot of sense to me.
This is a nice post. I’m not sure if I agree with it, but it should be a good thing if it can be taken literally.
The real problem is that it may be an example of how your mind can respond to someone who (without any context provided) makes a wrong argument, or of how that turns you off from considering a deeply held proposition.
I think it’s possible this is a large problem (in particular, a major one, because you really are unable to distinguish between the truth and the argument in the first place), but it’s also plausible that it’s an even bigger problem than that.
- The more I think about it, the more I think I can make that distinction (both about myself and about the person I’m talking to).
- The more I apply it (to some people, for instance), the more I become able to see the truth.
- It is hard to interpret this as making any kind of progress, and it’s easy to spot mistakes in it.
- It might not be too hard, but it’s also probably counterproductive.
- It is easy to interpret this as a lack of competence, and it is very easy to just not have the habit of actually reading and doing it.
- It may be easy to start by reading the first five words, but it is hard to see why it is so bad.
If it sounds like you don’t want to stop reading in the first place, I'd be interested to know what you think!
There's a way to put this in a sentence like this:
We have now established that a monist approach to overcoming a bias is good.
So, this sentence has been read on LW (a link is at http://lesswrong.com/r/discussion/lw/jb/the_rationality_contribution/):
I consider myself to be one of the most intelligent people in the world.
Let me say it again, this is one of those things you're advocating for.
I have never seen a clear avenue for finding the truth here, and I don't find this very convincing. The phrase "acoustic vibrations" seems like a term of art, but that doesn't mean I have anything to add to the argument, unless you mean to refer to an ongoing auditory experience before proceeding. The best way I can put it is that your brain was designed to detect this, not to assess the actual auditory experience.
The best way I can tell you is that your brain was designed to detect this, not assess the actual experience.
Briefly, though, my favorite quote of yours:
"There are many cases where auditory experiences are so unpleasant that we don't notice them or listen to them. Like visual scenes, if we are to consider the problem in our own mental peculiarities, we simply cannot proceed there."
I agree, especially with this, but there seems to be an advantage to the idea that people can perceive certain kinds of events in a way that unifies them into concrete, easily recognizable experiences. That is, if the conscious mind can recognize objects in the light of sound, then you can, without being deaf, imagine hearing somebody else speak in a tone of shock and outrage that doesn't sound right. All of this, the way we understand speech, comes down to understanding the listener's reaction without it being spelled out. But for most people, that isn't enough. We can recognize most of the discomfort ourselves easily, especially if we're doing something weird. And yet this ability to recognize experiences, like spotting them with a trump card, is an essential part of language education, filling
Eliezer,
This is indeed interesting and informative; I can't see anything else on that thread except the title. How does Eliezer connect this "thing" with that "thing" when he says that it's a "boring idea"?
You've covered a lot of the things in my writing, and I enjoyed this. Thanks for what you've done.
In short, I would like to offer a concrete example to help flesh out my argument. What follows is that example and a rough outline of how I model the issues I have with the idea of an AI society, and which paths might be possible to take.
- What is "AI"?
In the context of this discussion, an AI is a system over which humans have limited control. While this might be the most instrumentally useful way to frame AI, the humans are not the AI, and yet they are most likely to be involved in the AI's emerging system of values; that is also the most likely path to human control. The fundamental idea of value learning is to train the AI to be as useful as possible to the user, by predicting what will give the user the best sense of what to value. The AI can then be used to maximize the user's sense of what the service is intended to accomplish, as well as to improve the value learning itself. This further reduces the "instability" risks of value learning, because instead of running into the usual inverse-reinforcement-learning issues, we could learn from humans how to make the controlling AI effective. But the main goal of AI systems is not merely to be "safe." Many AI systems have internal reward structures and goals based on particular functions rather than on some abstract metric that must be learned and implemented, such as "the values of an AI system" or "the values of the user" (I will discuss the latter in my first article).
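A minimal sketch of the value-learning idea as described here, with all names and numbers being illustrative assumptions rather than anything from the comment: fit a crude model of what the user values from human feedback, then have the system act on that model.

```python
# Toy value-learning sketch (illustrative only): average human ratings per
# outcome to get a crude "what the user values" model, then act on it.

human_ratings = [("tea", 0.8), ("coffee", 0.6), ("nothing", 0.1),
                 ("tea", 0.9), ("coffee", 0.5)]   # stand-in human feedback

totals, counts = {}, {}
for outcome, rating in human_ratings:
    totals[outcome] = totals.get(outcome, 0.0) + rating
    counts[outcome] = counts.get(outcome, 0) + 1

# "Value learning" step: estimate the value of each outcome from feedback.
learned_value = {o: totals[o] / counts[o] for o in totals}

# "Control" step: propose whichever action the model predicts the user values most.
best_action = max(learned_value, key=learned_value.get)
print(learned_value, "->", best_action)
```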
- What is "machine learning"?
In brief, machine learning agents learn in large part by distilling different tasks through various approximate methods. The formal concepts of machine learning are defined in machine learning's own terms: systems learn how to interpret inputs and transitions. This is particularly true of reinforcement learning systems, which do not have an explicit understanding of what they are doing. We cannot assume that all their behavior is based on explicit models of the world, and the humans involved are often not even aware of how the behavior is produced.
- Why are so many AI researchers working on AI safety?
I can think of several reasons:
- In the domain of machine learning, learning is a mixture of procedural and algorithmic knowledge. When humans have lots of procedural knowledge, it shouldn't be important to
Thanks for writing this!
The point doesn't show up until 4:30, even though the book is aimed very specifically at convincing a significant fraction of the population that cryonics is plausible for humanity.
For those that don't understand, see here.
In the first chapter, you basically make the case that the scientific method is wrong, or at least that this is not a strawman. The rest of what I've read is the most mainstream-seeming and obvious way in which the scientific method might indeed be wrong.
In the second chapter, you basically present the science in a book that is primarily about the ability of human minds to generalize from one another, based on:
- The basic Bayes-related questions of personal identity, i.e., how much continuity should be enough to have a psychological effect?
- How much should one's society prioritise the possibility of being in this position?
In particular, it doesn't fit in the Bostrom model of personal identity.
It's not entirely clear that writing about the relationship between personal identity and mental identity is exactly the sort of information-theoretic question that could lead us to a useful answer, or to the kind of information that would help with the situation you will find yourself in in the future.
You've probably seen this phrasing and the objections about science, and I think you've taken them too far. Yes, it's hard to argue about the degree of overlap with the scientific method, and yes, the two are relevant. But if it's going to work in such extreme cases for a long time, then there should be an additional thing called "substrategic knowledge".
One of the things that I think is really important is to figure out how to think about personal identity under the "internal locus of control". Here's my attempt to begin that.
- The "internal locus of control" seems like it would be quite a different subject in this context, I think, from where I've been heading here.
- If this doesn't work, then there could be some fundamental difference between myself and a rationalist.
A few of my observations:
- I've been a slow reader for a while now. I was probably under-remembering a lot about LW when I was a teenager, so I didn't really get anything.
- I was
I think the most common response to "community" should have been to point to LessWrong and its founding Sequences. We wanted to create a place where rationalists can discuss the Art and the Science, at least this year.
A place to discuss an important topic which might otherwise not be discussed is CFAR.
To paraphrase two of the core themes on this site:
- In humans, the world is an incredible fabric of complex, entangled, self-reinforcing processes. (Many of us can be made aware of this fact, though usually it isn't necessary.)
- Rather than trying to collect information from each person, we use a series of simpler, more useful shared models, based on our conversations and experiences.
- One of the CFAR concepts is the "agent-agent distinction," in which an agent also tries to understand its own goals and limitations. One of the main motivations for the new Center for Applied Rationality is to build that kind of understanding of one's own motivations, and these are also attempts to make generally intelligent AI agents reflect humanity's goals.
- CFAR has an overarching mission of raising the sanity waterline. That is, it is attempting to create people who can benefit from thinking clearly and help each other reach their goals while also being more effective. As a nonprofit, CFAR is close to being a place where we can help people overcome their irrational biases as best they can.
- CFAR is building a whole new rationality curriculum that will hopefully help people become more effective.
We are reviving this in November, and again the November after. As with the January 2008 Singularity Summits, we are tweaking the curriculum and the organization of CFAR alumni. The new thinking-tools workshop will give us specific ways to apply the principles of rationality to the behavior of different groups or individuals, as opposed to mere human "capital" and organizational stuff. In past years, we've moved from "organizational inadequacy" posts to "common denominator" posts and then to "organizational capital" posts, where I'd like there to be funding for doing high-impact good. Emphasizing and organizing this allows us to step outside of the academic and organizational space that would normally be reserved for highly technical people.
In a more practical sense, the oxen-back infrastructure in Berkeley already exists, but we’
"Makes sense, and humans don't have any other simple agents. We have them out in the wild, we have them out in the wild, we don't have them out in the wild . .
This comes from a post that makes reference to a real life case that doesn't use the word "emotion."
It seems like a bot to me; are there signs of humanity you can point to?
What is my prior? Is it enough to say that a bot is a bot, or only that it behaves like a bot? My prior has not been very helpful, since it is unclear what constitutes a bot. For instance, if something is not a bot, it can still seem like what a bot is, or like something that merely imitates a bot.
My intuition is that a bot is either a bot, or something bot-like that has only the properties of real humans. A bot (e.g. an automated bot) is a bot no matter what else that means.
The reason we have a bot (e.g. an automated bot) is not that it is easy to play in real life; it's that the bot, being a bot, does not want to do the same things. I think it would be useful to have a bot that is "a bot", not merely "an autom" but actually "totally" one, that does not actually want to do the same thing and is allowed to do whatever it would like in real life.
One of the most interesting things, given that I have not heard of this before, is that it is easy to set up an automated bot that does not want to do things, even apart from the fact that it is a bot. A bot could learn everything, but only if it were more intelligent and more of a maximizer than a bot that merely uses its full knowledge. So in the first case, it could be an intelligent bot, or an adversarial algorithmic bot, or some other sort of "bot that does everything." (This seems like a very simple example to work through!)
How much could there be? (I have no idea how much would be enough.) I expect most people will follow these criteria as far as they can, but it depends on what the criteria are. The average person has a fair degree of willpower, but if you have trouble getting anything out of it, you need much more time to work on it than you would otherwise.
I've put together the best and most important parts of the best LessWrong posts (though I don't have good names for them) and organized them. I have three main ways to organize them; the following are the links: Ducing Novelty, The Ultimate Source, and A Bug Hunt.
I
LessWrong Wiki
Rationality is great, but I still want to write posts for this community. The LessWrong Wiki is great, and it can also be a very nice place to get help, since it does a good job of shaping the Sequences. (The wiki uses one item by Eliezer per entry, which really sets the tone both of the entry and of the posts in the comments, without any of it becoming a better idea.)
(A big thanks to Oliver Habryka and Oliver Li for doing this work)
II
I write these summaries myself, but I'd like to do more work on them, so you can guess what I say there and what I do in them. I don't want to be the voice of contrarianism, but I'd greatly appreciate it if people used my summaries to criticize and debate (both for the sake of my own soul, and to help me extend my communication beyond the usual suspects), and also just for the fun of it. (The latter is a very useful motivation, I think.)
I hope to be able to write clear and concise summaries fairly quickly, and I've got enough material to put them together. It shouldn't take me a hundred pages to write something subjectively simple and productive; I've learned the basics, I have a fairly broad background in a variety of topics, and I'd love to write it all down.
- The following comments are from the last LW thread:
(1) I'm not sure if you meant it that way, but it seems to me that there are two important points I want to cover here:
- There's a big difference between a "somewhat good" and a "somewhat bad" state. The latter might be better described as an exact combination, but I don't see a clean distinction between "somewhat good" and "almost never good."
- This is not a big difference.
But I'm not sure whether you meant to say "almost always good" or "almost never bad"; neither reading works for me.
I think this would be a big issue with it, as it seems like you'd be conflating "fairness" the concept with the question "is this fair?" Okay, we know that!
- There's a part of this I really don't buy. I actually don't think there's a big difference between "fairness" and "is this fair." It's an attempt to cover as much as I can, because there are sometimes big differences between them. If we don't distinguish them, the question being posed is not "should I update?", but rather "is this fair?"
Also, it could just be a coincidence that when the answer is "yes", people are confused about what is fair.