Posts

[Cross-post] Is the Fermi Paradox due to the Flaw of Averages? 2023-01-18T19:22:01.944Z
Why are probabilities represented as real numbers instead of rational numbers? 2022-10-27T11:23:39.300Z

Comments

Comment by Yaakov T (jazmt) on Reframing Superintelligence: Comprehensive AI Services as General Intelligence · 2023-09-20T12:28:47.899Z · LW · GW

https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem

https://www.lesswrong.com/tag/vnm-theorem 

Comment by Yaakov T (jazmt) on Catastrophic Risks from AI #6: Discussion and FAQ · 2023-06-29T06:11:38.118Z · LW · GW

Hi, I am working on Rob Miles' Stampy project (https://aisafety.info/), which is creating a centralized resource for answering questions about AI safety and alignment. Would we be able to incorporate your list of frequently asked questions and answers into our system (perhaps with some modification)? I think they are really nice answers to some of the basic questions, and they would be useful for people curious about the topic to see.

Comment by Yaakov T (jazmt) on What are some of the best introductions/breakdowns of AI existential risk for those unfamiliar? · 2023-05-29T18:55:13.045Z · LW · GW

Have you seen the Stampy project (https://aisafety.info/)? It is currently a work in progress. There are also some examples of it here: https://www.lesswrong.com/posts/EELddDmBknLyjwgbu/stampy-s-ai-safety-info-new-distillations-2

Comment by Yaakov T (jazmt) on All AGI Safety questions welcome (especially basic ones) [May 2023] · 2023-05-28T12:30:22.469Z · LW · GW

@drocta @Cookiecarver We started writing up an answer to this question for Stampy. If you have any suggestions to make it better, I would really appreciate it. Are there important factors we are leaving out? Something that sounds off? We would be happy for any feedback you have, either here or on the document itself: https://docs.google.com/document/d/1tbubYvI0CJ1M8ude-tEouI4mzEI5NOVrGvFlMboRUaw/edit#

Comment by Yaakov T (jazmt) on [Intro to brain-like-AGI safety] 9. Takeaways from neuro 2/2: On AGI motivation · 2023-04-16T14:44:37.943Z · LW · GW

But in that kind of situation, wouldn't those people also pick A over B for the same reason?

Comment by Yaakov T (jazmt) on Shoulder Advisors 101 · 2023-01-27T10:41:41.804Z · LW · GW

I really liked this post since it took something I did intuitively and haphazardly and gave it a handle by providing the terms to start practicing it intentionally. This had at least two benefits:

First, it allowed me to use this technique in a much wider set of circumstances, and to improve the voices that I already have. Identifying the phenomenon allowed it to move from a knack which showed up by luck to a skill.

Second, it allowed me to communicate the experience more easily to others and open up the possibility for them to use it as well. Unlike many LessWrong posts, I found that the technique in this post spoke to a bunch of people outside the LessWrong community. For example, one friend who liked this idea tried applying it to developing an Elijah the Prophet figure that he could interact with.

Comment by Yaakov T (jazmt) on Why are probabilities represented as real numbers instead of rational numbers? · 2022-10-30T18:12:11.004Z · LW · GW

Cool. So in principle we could just as well use the rationals from the standpoint of scientific inference. But we use the reals because it makes the math easier. Thank you.

Comment by Yaakov T (jazmt) on Why are probabilities represented as real numbers instead of rational numbers? · 2022-10-30T18:09:53.235Z · LW · GW

Thank you.

I am a little confused. I was working with a definition of continuity mentioned here https://mathworld.wolfram.com/RationalNumber.html : "It is always possible to find another rational number between any two members of the set of rationals. Therefore, rather counterintuitively, the rational numbers are a continuous set, but at the same time countable." 

I understand that the rationals aren't complete, and my question is why this matters for scientific inference. In other words, are we using the reals only because they make the math easier, or is there a concrete example of inference that completeness helps with?

Specifically, since the context Jaynes is interested in is designing a (hypothetical) robot's brain, and to achieve that we need to associate degrees of plausibility with a physical state, I don't see why that entails the completeness property you mentioned. In fact, we mostly use digital rather than analog computers, which represent the reals by rational approximations. What does such a system of reasoning lack?
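To make the completeness point concrete, here is a standard example (my own illustration, not part of the original exchange). Newton's iteration for the square root of 2 stays inside the rationals at every step:

\[
x_0 = 1, \qquad x_{n+1} = \frac{x_n}{2} + \frac{1}{x_n}
\]

Every term is rational (1, 3/2, 17/12, 577/408, ...) and the sequence is Cauchy in \(\mathbb{Q}\), yet its limit \(\sqrt{2}\) is not rational. So the rationals are dense (there is another rational between any two), but not complete (a Cauchy sequence of rationals need not converge to a rational). Completeness is what guarantees that limit constructions, such as defining a probability as a limiting frequency, always yield an answer inside the number system.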

Comment by Yaakov T (jazmt) on Alignment's phlogiston · 2022-08-19T09:48:39.256Z · LW · GW

You might find the book Is Water H2O? by Hasok Chang, 2012 useful. It was mentioned by Adam Shimi in this post https://www.lesswrong.com/posts/wi3upQibefMcFs5to/levels-of-pluralism

Comment by Yaakov T (jazmt) on Open & Welcome Thread - May 2022 · 2022-05-22T17:30:33.184Z · LW · GW

It also reminds me of Richard Feynman not wanting a position at the Institute for Advanced Study.

"I don't believe I can really do without teaching. The reason is, I have to have something so that when I don't have any ideas and I'm not getting anywhere I can say to myself, "At least I'm living; at least I'm doing something; I am making some contribution" -- it's just psychological.

When I was at Princeton in the 1940s I could see what happened to those great minds at the Institute for Advanced Study, who had been specially selected for their tremendous brains and were now given this opportunity to sit in this lovely house by the woods there, with no classes to teach, with no obligations whatsoever. These poor bastards could now sit and think clearly all by themselves, OK? So they don't get any ideas for a while: They have every opportunity to do something, and they are not getting any ideas. I believe that in a situation like this a kind of guilt or depression worms inside of you, and you begin to worry about not getting any ideas. And nothing happens. Still no ideas come.

Nothing happens because there's not enough real activity and challenge: You're not in contact with the experimental guys. You don't have to think how to answer questions from the students. Nothing!

In any thinking process there are moments when everything is going good and you've got wonderful ideas. Teaching is an interruption, and so it's the greatest pain in the neck in the world. And then there are the longer periods of time when not much is coming to you. You're not getting any ideas, and if you're doing nothing at all, it drives you nuts! You can't even say "I'm teaching my class."

If you're teaching a class, you can think about the elementary things that you know very well. These things are kind of fun and delightful. It doesn't do any harm to think them over again. Is there a better way to present them? The elementary things are easy to think about; if you can't think of a new thought, no harm done; what you thought about it before is good enough for the class. If you do think of something new, you're rather pleased that you have a new way of looking at it.

The questions of the students are often the source of new research. They often ask profound questions that I've thought about at times and then given up on, so to speak, for a while. It wouldn't do me any harm to think about them again and see if I can go any further now. The students may not be able to see the thing I want to answer, or the subtleties I want to think about, but they remind me of a problem by asking questions in the neighborhood of that problem. It's not so easy to remind yourself of these things.

So I find that teaching and the students keep life going, and I would never accept any position in which somebody has invented a happy situation for me where I don't have to teach. Never."

— Richard Feynman, Surely You're Joking, Mr. Feynman!

Comment by Yaakov T (jazmt) on The case for becoming a black-box investigator of language models · 2022-05-13T10:49:10.201Z · LW · GW

Do you suspect that black-box knowledge will be transferable between different models, or that the findings will be idiosyncratic to each system? 

Comment by Yaakov T (jazmt) on Rationality Quotes August 2014 · 2014-08-17T19:47:16.698Z · LW · GW

According to this website (http://ravallirepublic.com/news/opinion/viewpoint/article_876e97ba-1aff-11e2-9a10-0019bb2963f4.html), it is part of 'Aphorisms for Leo Baeck' (which I think is printed in 'Ideas and Opinions', but I don't have access to the book right now to check).

Comment by Yaakov T (jazmt) on Rationality Quotes June 2014 · 2014-06-03T11:57:30.476Z · LW · GW

Probably not, but why are you certain?

Comment by Yaakov T (jazmt) on A defense of Senexism (Deathism) · 2014-02-26T02:39:07.706Z · LW · GW

That doesn't strike me as how psychology works, since in the real world people often repeatedly make the same mistakes. It also seems that even if your proposal would work, it doesn't address the original issue, since you are assuming that the person has a clear idea of his goals and only needs time to pursue them, whereas I think the bigger issue which aging encourages is reorienting one's values.

I appreciate your taking the time to address my question, but it seems to me that this conversation isn't really making progress, so I will probably not respond to future comments on this thread. Thank you.

Comment by Yaakov T (jazmt) on A defense of Senexism (Deathism) · 2014-02-25T16:42:11.677Z · LW · GW

I would have to look around to see if there is non-anecdotal evidence, but anecdotally ~40 is when I have heard people start mentioning it.

I don't think your proposal would work, since I don't think the time factor is the biggest issue. How often do people make big plans for summer vacation and not actually carry them out? They probably wouldn't say "I'll put it off for thirty years", but rather repeatedly say "I'll put it off till tomorrow".

Comment by Yaakov T (jazmt) on A defense of Senexism (Deathism) · 2014-02-18T02:02:44.299Z · LW · GW

Yes, and that was the meaning of my initial comment. It is a concern in today's world, where resources are limited enough that not everyone would be able to make use of such a technology. The country that has it (or the subset of people within one country who have it) will be motivated to defend the resources necessary to use it. This isn't an argument against such research in a world without any scarcity, but that isn't our world.

I am still not sure whether it is likely to be more beneficial than harmful for heavily emotional and biased humans like us.

Comment by Yaakov T (jazmt) on White Lies · 2014-02-18T01:56:00.900Z · LW · GW

Thank you for all of your clarifications, I think I now understand how you are viewing morality.

Comment by Yaakov T (jazmt) on A defense of Senexism (Deathism) · 2014-02-17T03:47:08.871Z · LW · GW

Maybe, but on the other hand there is inequity aversion: http://en.wikipedia.org/wiki/Inequity_aversion

Also, there is the possibility of fighting over the resources to use that technology (either within society or without). Do you disagree with the general idea that without greater rationality extreme longevity will not necessarily be beneficial, or do you only disagree with the example?

Comment by Yaakov T (jazmt) on White Lies · 2014-02-17T03:36:27.615Z · LW · GW

Why don't you view the consequentialist imperative to always seek maximum utility as a deontological rule? If it isn't deontological, where does it come from?

Comment by Yaakov T (jazmt) on A defense of Senexism (Deathism) · 2014-02-17T03:31:48.736Z · LW · GW

"You keep using the words "we" and "our", but "we" don't have lifespans; individual humans do." Of course, but "we" is common shorthand for decisions which are made at the level of society, even though that is a collection of individual decisions (e.g. should we build a bridge, or should we legalize marijuana). Do you think that using standard english expressions is problematic? (I agree that both the question of benefit for the self and benefit for others is important and think the issue of cognitive biases is relevant to both of them)

I just looked at your comment, and I agree with that argument, but that hasn't been my impression of the view of many on this site (and it clearly isn't the view of researchers like de Grey); however, I am relatively new here and may be mistaken about that. Thank you for clarifying.

Comment by Yaakov T (jazmt) on A defense of Senexism (Deathism) · 2014-02-17T01:45:43.781Z · LW · GW

Thank you, but that post doesn't seem to answer my question, since it doesn't take up how death interacts with our cognitive biases. I agree that if we were perfectly rational beings immortality would be great; however, I don't see how that implies that, given our current state, the choice to live forever (or a really long time) would be in our best interest.

Similarly, I don't see how that argument indicates that we should develop longevity technologies before we solve the problems of human irrationality and evil. For example, would a technology for living 150 years cause more benefit, or would it cause wars over who gets to use it?

Comment by Yaakov T (jazmt) on A defense of Senexism (Deathism) · 2014-02-17T01:39:58.079Z · LW · GW

We are all arrogant to some degree or another; knowledge of our mortality helps keep it in check. What would the world look like with an unrestrained god complex?

Taking 10 years off after 30 years doesn't seem to solve the psychological issue. In today's world, as we get older we start noticing the weakness of our bodies, which pushes us to act, since "if not now, when?"

Unless we solve the various cognitive biases we suffer from, extreme longevity seems like a mixed blessing at best, and it seems to me that it would cause more problems than it solves.

I agree that these arguments don't decide the issue, but the counterargument of letting people choose doesn't seem effective to me. Also, arguments about how we would be superbeings who are totally rational may be applicable to some post-human existence, but they do not help the argument that longevity research should be pursued today (since, e.g., there would likely be wars over who gets to use it, which might kill even more people; as we see in the world today, the problem with world hunger and disease is not primarily one of lack of technological or economic ability but rather one of sociopolitical institutions).

Comment by Yaakov T (jazmt) on White Lies · 2014-02-17T01:18:18.572Z · LW · GW

Thank you, I think I understand this now.

To make sure I understand you correctly, are these correct conclusions from what you have said?

a. It is permitted (i.e. ethical) to lie to yourself (though probably not prudent).

b. It is permitted (i.e. ethical) to act in a way which will force you to tell a lie tomorrow.

c. It is forbidden (i.e. unethical) to lie now to avoid lying tomorrow (no matter how many times or how significant the lie in the future).

d. The differences between the systems will only express themselves in unusual corner cases, but the underlying conceptual structure is very different.

I still don't understand your view of utilitarian consequentialism: if 'maximizing utility' isn't a deontological obligation emanating from personhood or the like, where does it come from?

Comment by Yaakov T (jazmt) on A defense of Senexism (Deathism) · 2014-02-16T19:04:35.608Z · LW · GW

To those who think that death should be a choice: what about the benefits of knowing that we are mortal, which death by choice doesn't allow for? E.g., mortality serves as a counterforce to arrogance and as a push to act now, and, as we age, to start reevaluating our priorities. In other words, the benefits while we live of knowing that we are mortal may outweigh the benefit of immortality. I suspect these concerns have been dealt with on this site, so if they have, feel free to link me to an appropriate post instead of writing a new response.

Comment by Yaakov T (jazmt) on White Lies · 2014-02-16T18:25:21.765Z · LW · GW

Yes, I read that post. (Thank you for putting in all this time clarifying your view.)

I don't think you understood my question, since "The third thing says you must not lie unless there is a compensatory amount of something else encouraging you to lie" is not viewing 'not lying' as a terminal value but rather as an instrumental one. A terminal value would mean that lying is bad not because of what it will lead to (as you explain in that post). But if that is the case, must I act in a situation so as not to be forced to lie? For example, let's say you made a promise to someone not to get fired in your first week at work, and if the boss knows that you cheered for a certain team he will fire you. Would you say that you shouldn't watch that game, since you would be forced either to lie to the boss or to break your promise of keeping your job? (Please fix any loopholes you notice, since this is only meant for illustration.)

If so, it seems like the consequentialist utilitarian is saying that there is a deontological obligation to maximize utility, and therefore you must act to maximize that, whereas you are arguing that there are other deontological values, though you would agree that you should be prudent in achieving your deontological obligations. (We can put virtue ethics to the side if you want, but won't your deontological commitments dictate which virtues you must have, for example honesty, or even courage, so as to act in line with your deontological obligations?)

Comment by Yaakov T (jazmt) on Rationality Quotes February 2014 · 2014-02-16T01:10:29.573Z · LW · GW

Why isn't saying "I don't know" a reasonable approach to the issue when one's knowledge is too vague to be of any use (and could only be made useful if the case were a bizarre thought experiment)? Just because one could theoretically bet on something doesn't mean one is in a position to bet. (For example: I don't know how to cure a disease, so I will go to the doctor; I don't know what that person's name is (even though I know it isn't "Xpchtl Vaaaaaarax"), so I should ask someone; I don't know how life began; I don't know how many apples are on the tree outside (even though I know it isn't 100 million).)

Comment by Yaakov T (jazmt) on White Lies · 2014-02-16T00:45:58.184Z · LW · GW

Let's take truth-telling as an example. What is the difference between saying that there is an obligation to tell the truth, that honesty is a virtue, or that telling the truth is a terminal value which we must maximize in a consequentialist-type equation? Won't the different frameworks be mutually supportive, since obligation will create a terminal value, virtue ethics will show how to incorporate it into your personality, and consequentialism will say that we must be prudent in attaining it? Similarly, prudence is a virtue which we must be consequentialist to attain and which is useful in living up to our deontological obligations. And justice is a virtue which emanates from the obligations not to steal and not to harm other people, and therefore we must consider the consequences of our actions so that we don't end up in a situation where we will act unjustly.

I think I am misunderstanding something in your position, since it seems to me that you don't disagree with consequentialism about the need to calculate, but rather about what the terminal values are (with utilitarianism saying utility is the only terminal value, and you saying that there are numerous ones, such as not lying, not stealing, not being destructive, etc.).

By obligations which emerge from a person's personhood and which are not waivable, I mean that they emerge from the self and not in relation to another's rights, and therefore cannot be waived. To take an example (which I know you do not consider an obligation, but which will serve to illustrate the class, since many people hold this belief): a person has an obligation to live out their life as a result of their personhood, and therefore is not allowed to commit suicide, since that would be unjust to the self (or nature, or God, or whatever).

Comment by Yaakov T (jazmt) on White Lies · 2014-02-14T00:22:02.430Z · LW · GW

"No. There's morality, and then there's all the many things that are not morality."

Is this only a linguistic argument about what to call morality, with, e.g., virtue ethics claiming that all areas of life are part of morality (since ethics is about human excellence), and your claim being that ethics only has to do with obligations and rights? Is there a reason you prefer to limit the domain of morality? Is there a concept you think gets lost when all of life is included in ethics (as in virtue ethics or utilitarianism)?

Also, could you clarify the idea of obligations: are there any obligations which don't emanate from the rights of another person? Are there any obligations which emerge inherently from a person's humanity and are therefore not waivable?

Comment by Yaakov T (jazmt) on AALWA: Ask any LessWronger anything · 2014-02-06T05:18:27.129Z · LW · GW

Is your wife still teaching your kids religion? How do you work out conflicts with your wife over religious issues? (I assume she insists on a kosher kitchen, wants the kids to learn Jewish values, etc.)

Comment by Yaakov T (jazmt) on Very Basic Model Theory · 2014-02-04T04:36:38.584Z · LW · GW

Which of the three would you recommend? Does anyone know why MIRI recommends Chang and Keisler if it is somewhat outdated?

Comment by Yaakov T (jazmt) on On saving the world · 2014-01-31T19:35:55.385Z · LW · GW

me too

Comment by Yaakov T (jazmt) on Rationalists Are Less Credulous But Better At Taking Ideas Seriously · 2014-01-23T01:19:27.905Z · LW · GW

For ordinary investors, won't there still be an issue of buying these funds at the right time, so as not to buy when the market is unusually high?

Comment by Yaakov T (jazmt) on Rationalists Are Less Credulous But Better At Taking Ideas Seriously · 2014-01-23T01:11:57.629Z · LW · GW

Thank you, I basically use this method now and am glad to have it corroborated by an expert.

Comment by Yaakov T (jazmt) on Rationalists Are Less Credulous But Better At Taking Ideas Seriously · 2014-01-22T04:12:16.020Z · LW · GW

What method of backing up data do you recommend for a computer with windows? How often do you recommend doing it?

Comment by Yaakov T (jazmt) on Book Review: How Learning Works · 2014-01-22T03:47:18.908Z · LW · GW

A rubric is a tool for assessment. It identifies criteria for evaluating work by specifying the categories of achievement and the measurements of levels of achievement in each category. This seems like a basic summary with examples: http://learnweb.harvard.edu/alps/thinking/docs/rubricar.htm

Comment by Yaakov T (jazmt) on Rationality Quotes January 2014 · 2014-01-20T02:56:07.278Z · LW · GW

Train your tongue to say "I don't know", lest you be brought to falsehood -Babylonian Talmud

Comment by Yaakov T (jazmt) on Book Review: How Learning Works · 2014-01-20T02:20:38.327Z · LW · GW

Are you familiar with Doug Lemov's "Teach Like a Champion"? If so, how does it compare with "How Learning Works"?

Comment by Yaakov T (jazmt) on Dark Arts of Rationality · 2014-01-19T02:07:09.861Z · LW · GW

My point is that we can't help but think of ourselves as having free will, whatever the ontological reality of free will actually is.

Comment by Yaakov T (jazmt) on Dark Arts of Rationality · 2014-01-16T19:28:10.084Z · LW · GW

It seems impossible to choose whether to think of ourselves as having free will unless we have already implicitly assumed that we have free will. More generally, the entire pursuit of acting more rationally is built on the implicit premise that we have the ability to choose how to act and what to believe.

Comment by Yaakov T (jazmt) on Dark Arts of Rationality · 2014-01-16T05:16:29.715Z · LW · GW

Nice post. However, it might be better to characterize the first two classes as beliefs which are true because of the belief, rather than as false beliefs (which is important so as not to unconsciously weaken our attachment to truth). For example, in your case of believing that water will help you feel better, the reason you believe it is that it is actually true by virtue of the belief; similarly, when the would-be rock star enjoys making music for its own sake, the belief that making music is fun is now true.

Comment by Yaakov T (jazmt) on Building Phenomenological Bridges · 2013-12-23T01:14:53.615Z · LW · GW

typo: There seems to be an extra 'Y' in column 4 of the first image (it should be CYYY instead of CYYYY)

Comment by Yaakov T (jazmt) on Beautiful Probability · 2013-12-19T01:45:52.241Z · LW · GW

Thanks, I understood the mathematical point but was wondering if there is any practical significance, since it seems that in the real world we cannot make such an assumption, and that we should trust the results of the two researchers differently (since the first researcher likely published no matter what, whereas the second probably only published the experiments which came out favorably, even if he didn't publish false information). What is the practical import of this idea? In the real world, with all of people's biases, shouldn't we distinguish between the two researchers as a general heuristic for good research standards?

(If this is addressed in a different post on this site, feel free to point me there, since I have not read the majority of the site.)
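For reference, the mathematical point being alluded to can be made concrete with the standard stopping-rule example (my sketch, not a quotation from the post). Suppose researcher A fixed n = 12 trials in advance and observed 9 successes, while researcher B planned to keep sampling until the 3rd failure and happened to stop after the same 12 trials. Their likelihoods are

\[
L_A(\theta) = \binom{12}{9}\,\theta^9 (1-\theta)^3, \qquad
L_B(\theta) = \binom{11}{9}\,\theta^9 (1-\theta)^3
\]

Both are proportional to \(\theta^9 (1-\theta)^3\), so for any prior the posterior over \(\theta\) is identical: the researcher's private stopping plan drops out of the Bayesian update. The worry raised above is the separate, real-world point that a researcher's plans also influence whether a result gets published at all, which is evidence about data we never get to see.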

Comment by Yaakov T (jazmt) on Beautiful Probability · 2013-12-17T02:54:18.407Z · LW · GW

Does the publication of the result tell you the same thing, since the fact that it was published is a result of the plans?

Comment by Yaakov T (jazmt) on 2013 Less Wrong Census/Survey · 2013-11-26T16:20:08.337Z · LW · GW

By 'their decision' do you mean the decision to cooperate or defect? If so, would you predict that people would not offer to donate if there were no choice involved (e.g. if all participants in the survey automatically received one entry)?

It does not seem like this is what people are describing, e.g. http://lesswrong.com/lw/j4y/2013_less_wrong_censussurvey/a3xl http://lesswrong.com/lw/j4y/2013_less_wrong_censussurvey/a2zz and http://lesswrong.com/lw/j4y/2013_less_wrong_censussurvey/a36h

Comment by Yaakov T (jazmt) on 2013 Less Wrong Census/Survey · 2013-11-26T16:07:09.737Z · LW · GW

For a discussion of the meaning of supernatural see here: http://onlinelibrary.wiley.com/doi/10.1525/eth.1977.5.1.02a00040/pdf

Comment by Yaakov T (jazmt) on 2013 Less Wrong Census/Survey · 2013-11-26T01:15:06.758Z · LW · GW

I noticed a bunch of people saying that they will donate the money if they win. I find that a surprisingly irrational sentiment for LessWrong. Unless I am missing something, people are ignoring the principle of the fungibility of money. The more rational thing to do would be to commit to donating $60 whether or not you win. (If your current wealth level is a factor in your decision, such that you would only donate at the higher wealth level that comes with the prize, then this can be modified to: donate, whether or not you win, if you receive a windfall of $60 from any source (your grandmother gives a generous birthday present, your coworker takes you out to lunch every day this week, you find money in the street, you get a surprisingly large bonus at work, your stocks increase more than expected, etc.).)
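A toy formalization of the fungibility point (my own framing, just to make the argument explicit): if your initial wealth is \(W_0\), your inflows for the period are \(I_1, \dots, I_k\), and you donate \(D\), then

\[
W_{\text{final}} = W_0 + \sum_{i=1}^{k} I_i - D
\]

Final wealth depends only on the total of the inflows, not on which one of them carries the label "prize". A policy of "donate $60 iff I win" makes the donation turn on the label of one particular $60 inflow; if the decision is really driven by wealth, the consistent policy conditions on total windfalls, whatever their source.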

Comment by Yaakov T (jazmt) on 2013 Less Wrong Census/Survey · 2013-11-24T05:18:39.018Z · LW · GW

I took the survey.

Thank you for putting this together. Some of the questions were unclear to me. For example:

- Does living with family mean my parents, or my spouse and children? (I guessed the former, but was unsure.)

- For the politics question, there should be an option for not identifying with any label (or, if that would lead to everyone not wanting to be labeled, an option for disinterest in politics could be an alternative).

- Should an atheist who practices a religion (e.g. Buddhism) skip the question on religion?

- P(aliens): this question leaves out the time dimension, which seems important to establishing a probability for aliens. E.g., if aliens live 5 billion light years away, are we asked the probability that there were aliens there 5 billion years ago, such that we could receive a message from them now, or whether there are aliens now, whom we will not be able to discover for another few billion years?

- P(supernatural): it's not clear what counts as a supernatural event. E.g., God is included, even though most would not define God as an event, nor as occurring since the beginning of the universe (since if God created the universe he is either nontemporal or prior to the universe).

- For the CFAR questions, I wasn't sure what qualified as a "plausible-seeming technique or approach for being more rational / more productive / happier / having better social relationships / having more accurate beliefs / etc." Does it have to be a brand-new technique, or does a modification of one already known count? Is it asking about generic techniques, or even domain-specific ones? Also, most techniques I try are not ones I hear about but ones I come up with on my own; I don't know if others here are similar. All of the change questions also seemed poorly defined and unclear.

Comment by Yaakov T (jazmt) on Rationality Quotes September 2013 · 2013-09-08T02:16:58.865Z · LW · GW

The original is superior in a number of ways (to any translation I have seen, but I suspect it is superior to all translations, since much is of necessity lost in translation generally). But is there a specific aspect you are wondering about, so that I could address your question more particularly?

Comment by Yaakov T (jazmt) on Rationality Quotes September 2013 · 2013-09-02T15:19:43.019Z · LW · GW

Thanks, but I prefer reading in the original Hebrew to reading in translation.

Comment by Yaakov T (jazmt) on Rationality Quotes September 2013 · 2013-09-02T02:43:38.493Z · LW · GW

It seems like Proverbs has a lot of important content for gaining rationality; perhaps it should be added to our reading lists.