The boundaries of relevance are something to think about. A lot of places outside LW host discussions. Political topics were a thing back then, but now apparently people mention them in Open Threads, and the most frequent commenters are still posting elsewhere. EA emerged, and with good coordination. However, this does not mean we should rule out possible changes in these dynamics.
Somehow, LW/MIRI can't disentangle research from weirdness. Vassar is one of the people who, when giving public interviews, end up leaving this impression.
I bet that if companies cut the number of meetings in half, the productivity gain would be enough to bring a lot of workers down to a 40h week.
The economic implications of reading LW should somehow be included in the census. Human capital is something the rationality cluster has in abundance. Imagine people being paid for the insights they post here.
I suspect people actually do have defined goals, but are not specific enough about actions.
This anti-academic sentiment is something I associate with LessWrong, mostly because people can find programming jobs without necessarily having a degree.
Apparently you don't need an argument to be a nationalist. I guess this is just System 1 at work.
Seems like a good test for reactivating LW's dynamics.
Learn math too, to understand data structures, graphs, algorithms, and all the basic CS material.
Both positive and negative black swans. Additionally: randomness and regression to the mean.
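As a minimal sketch of regression to the mean (an illustrative simulation of my own, not from the thread): if each score is stable skill plus transient luck, the top scorers of one round drift back toward the average in the next.

```python
import random

# Illustrative sketch: score = stable skill + transient luck.
# The top performers of round one were partly lucky, so their
# round-two scores regress toward the mean.
random.seed(0)

skills = [random.gauss(0, 1) for _ in range(10_000)]
round1 = [s + random.gauss(0, 1) for s in skills]
round2 = [s + random.gauss(0, 1) for s in skills]

# Indices of the top 1% in round one.
top = sorted(range(len(round1)), key=lambda i: round1[i], reverse=True)[:100]

mean_r1 = sum(round1[i] for i in top) / len(top)
mean_r2 = sum(round2[i] for i in top) / len(top)
print(f"top-1% mean, round 1: {mean_r1:.2f}")  # high: skill plus lucky noise
print(f"top-1% mean, round 2: {mean_r2:.2f}")  # lower: the luck doesn't repeat
```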
In the em scenario, rich people would be the first ems. I don't know how broad this is, but Robin expects a small group of people with lots of copies.
This is an academic habit, but it's vulnerable to group bias. Normally, you don't send drafts to experts who strongly disagree with your claims, but to close friends who want to read what you write.
I've seen only one math post. What kinds of topics do you plan to write about?
Students are often quite capable of applying economic analysis to emotionally neutral products such as apples or video games, but then fail to apply the same reasoning to emotionally charged goods to which similar analyses would seem to apply. I make a special effort to introduce concepts with the neutral examples, but then to challenge students to wonder why emotionally charged goods should be treated differently.
-- R. Hanson
The Poverty of Historicism is another such book. BTW, the overall approach is the same: making theories restrictive enough.
I think people who blog normally expose inconclusive thoughts or drafts, not complete solutions. Or they want to teach more people and build a community. In the academic format, this is not so easy.
Benatar's asymmetry between life and death makes B the best option. But since his argument is hard to accept, A is better, whatever human values the AI implements.
Interested too. It will be particularly useful if it covers topics that are not easy to find in other sources: giving, x-risk, disruptive technologies, cryonics.
For some reason, (old) LessWrongers end up gravitating toward reactionary themes. I wonder why, and whether it is just signaling or something serious.
Most decisions about large-scale interventions could be settled in economic terms. However, MIRI's recent strategic shift toward math questions tells us something about the difficulty of the heuristic approach: the amount of resources needed to extract very noisy information about the factors behind a given scenario seems very costly.
To him (Kuhn), it isn't evidence that maintains the status quo of old paradigms, but persuasion: old hands making remarks about the virtues of their theory. Newcomers to academia have to convince a good number of people to make a new theory relevant.
The risk of losing friends makes people rationalize their behavior to become more similar to a group, convincing themselves of some identity or optimizing toward the habits of their group's average member. Additionally, contrarian thinking signals status too.
Discussions of moral uncertainty normally take the consensus ethical theories (deontology, consequentialism) and then propose some kind of solution, like intertheoretic comparison. The decomposition posed in the post creates more problems to solve, like the correct description of values. I assume the author would take some kind of consequentialism as the correct theory, which would settle the third uncertainty. Feel free to respond if that's not the case.
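For concreteness, here is the usual intertheoretic-comparison proposal as I understand it (my gloss, not the post's): score each action by its credence-weighted value across theories and pick the maximizer.

```latex
% Maximize Expected Choiceworthiness (MEC), sketched:
% p(T_i)      -- credence in moral theory T_i
% V_{T_i}(a)  -- how much theory T_i values action a
% The hard assumption is that the V_{T_i} are comparable across theories.
EC(a) = \sum_i p(T_i)\, V_{T_i}(a), \qquad a^\ast = \operatorname*{arg\,max}_a EC(a)
```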
The main response, I assume, is that friendly agents have not yet been invented, or that the ideas exposed in this post are new. The theoretical background may overlap with other sciences, but the main goal (FAI) needs more than that, I suppose.
We have goals, but they are not consistent over time. The worry about artificial agents (with more power) is that these values, if badly implemented, would create losses we could not accept, like extinction.
After reading How to Measure Anything, I'm suspicious of people who say something could not be measured or defined, even in principle. And one of the book's most useful parts is how vague notions can be tabooed or replaced.
This probably has something to do with Eliezer's profile on OkCupid, and with him being the most prominent rationalist.
So, the whole point is LWers making fun of people who make fun of everyone who disagrees with them (blue/green) on a particular point. Konk should put this in the text.
That's a real issue: posts presuppose too much. Words are hidden inferences, but most newbies don't know where to begin or whether this is worth a try. For example, this sequence treats causality as a topic for understanding the universe, but people need to know a lot before they can eat the cake (probability, mathematical logic, some Pearl, and the Sequences).
A tendency to credit some of life's outcomes to randomness, like Taleb does, could help. Maybe you should blame the impossibility of perfect prediction, or the fact that cars kill dozens of people and yet you insist on driving.
Weatherson asks what could be done in other departments. Answer: all the formal methods (logic, decision theory, game theory), plus empirical work like x-phi. Besides, I don't want to shut down philosophy departments, but I would be happy if they moved toward something like CMU plus cogsci.
The big problems facing science are management problems. We don't know how to identify important areas of study, or people who can do good science, or good and important results.
Bostrom "Predictions from philosophy" makes similar advice, but not specific to scientists. In both cases the solution is focus cognitive resources on strategic analysis, I suppose. However, is really dificult to implement this on a large scale without hurting egos.
What sort of people do you have in mind? The generalization apparently covers academic philosophers in their current state, but not past figures. Sure, someone without a strong science background will miss the point, focusing on the words. But arguing "by definition" is not something done exclusively by philosophers.
Some pessimists expect linear growth in sophistication, with a few insights along the way, but not sequential insights from one single group. Safe AI designs are harder than unsafe ones. Normally, hard problems are solved after the easy ones, and by different people. If FAI is a hard-rather-than-easy project, the results will appear after the unsafe AGIs. However, this might not be the case if AI research changes.
The most astonishingly incredible coincidence imaginable would be the complete absence of all coincidences.
-- John Allen Paulos (from Beyond Numeracy)
If future people are so awesomely superior to us -- at the time of revival -- it's because they share some values with "me/us now" or have managed to create a successful experiment, like a simulation, to study this curious species.
I wonder whether the choice "moldbuggery" (in the survey) was made after serious thought, or for lack of a better word.
In the early stages it is not easy to focus directly on organization X or Y, mostly because a good number of researchers are working on projects that could end in an AGI expert in numerous specific domains. Furthermore, large-scale coordination is important too, even if it's not a top priority. Slowing down one project or funding another is a guided intervention that could buy some time while the technical problems remain unsolved.
Done. I hope this data helps LW/CFAR.
The Intelligence Explosion site, for an overall bibliography of AI, with links to the principal papers.
It seems that no one is working on papers about the convergence of values. On a scale of difficulty, the math problems seem to be the priority, but disagreement over values and preferences imposes a constraint on the implementation. More specifically, on the "programmer writing code with black boxes" part.
Even in the ems/Hanson scenario, having children will still be a value. In the timeline, eugenics comes first, but it won't extinguish these psychological traits. They could become a religion, for example.
Our evolutionary adaptation is to recognize faces quickly, but remembering a name doesn't carry the same weight in group formation. The extra effort to remember names could signal some kind of alliance, which explains why I can remember some names but far more faces.
It would be useful if someone created a rationality training curriculum that could be taught in elementary school. Normally, critical-thinking classes are taught to undergraduates. But critical thinking makes people disbelieve weird things, and I suppose it lacks an "instrumental rationality module".
The singularity has diverse connotations; it's good to specify which memes you have in mind.
I see mathematics as being mostly about future physics laws that have yet to be discovered. Math without empirical confirmation is harder to connect to the world, but normally it's only a matter of time before an application is found.
Unjustified assertions would be more productive if simply not made. Making a fuss about idiots creates unnecessary noise. However, we could take this dismissiveness as information about the status of the post.
Publishing in Noûs means consuming enough philosophy papers to signal competence in formal methods and causal graphs. People here won't pay that cost. Better to contact an already-professional philosopher with the formal background and teach him a class on reductionism.
But this brings a tradeoff: how much do you sacrifice to project security and confidence? I suppose there are people who tell the truth even in situations where that attitude will cause complications.