Welcome to Less Wrong! (8th thread, July 2015)
post by Sarunas · 2015-07-22T16:49:46.765Z · LW · GW · Legacy · 273 comments
A few notes about the site mechanics
A few notes about the community
If English is not your first language, don't let that make you afraid to post or comment. You can get English help on Discussion- or Main-level posts by sending a PM to one of the following users (use the "send message" link on the upper right of their user page). Either put the text of the post in the PM, or just say that you'd like English help and you'll get a response with an email address.
* Normal_Anomaly
* Randaly
* shokwave
* Barry Cotter
A note for theists: you will find the Less Wrong community to be predominantly atheist, though not completely so, and most of us are genuinely respectful of religious people who keep the usual community norms. It's worth saying that we might think religion is off-topic in some places where you think it's on-topic, so be thoughtful about where and how you start explicitly talking about it; some of us are happy to talk about religion, some of us aren't interested. Bear in mind that many of us really, truly have given full consideration to theistic claims and found them to be false, so starting with the most common arguments is pretty likely just to annoy people. Anyhow, it's absolutely OK to mention that you're religious in your welcome post and to invite a discussion there.
A list of some posts that are pretty awesome
I recommend the major sequences to everybody, but I realize how daunting they look at first. So for purposes of immediate gratification, the following posts are particularly interesting/illuminating/provocative and don't require any previous reading:
- The Worst Argument in the World
- That Alien Message
- How to Convince Me that 2 + 2 = 3
- Lawful Uncertainty
- Your Intuitions are Not Magic
- The Planning Fallacy
- The Apologist and the Revolutionary
- Scope Insensitivity
- The Allais Paradox (with two followups)
- We Change Our Minds Less Often Than We Think
- The Least Convenient Possible World
- The Third Alternative
- The Domain of Your Utility Function
- Newcomb's Problem and Regret of Rationality
- The True Prisoner's Dilemma
- The Tragedy of Group Selectionism
- Policy Debates Should Not Appear One-Sided
More suggestions are welcome! Or just check out the top-rated posts from the history of Less Wrong. Most posts at +50 or more are well worth your time.
Welcome to Less Wrong, and we look forward to hearing from you throughout the site!
Once a post gets over 500 comments, the site stops showing them all by default. If this post has 500 comments and you have 20 karma, please do start the next welcome post; a new post is a good perennial way to encourage newcomers and lurkers to introduce themselves. (Step-by-step, foolproof instructions here; takes <180 seconds.)
If there's anything I should add or update on this post (especially broken links), please send me a private message—I may not notice a comment on the post.
Finally, a big thank you to everyone that helped write this post via its predecessors!
273 comments
Comments sorted by top scores.
comment by boatner · 2015-10-08T15:08:47.225Z · LW(p) · GW(p)
Howdy All!
I’m a post-middle-aged, impressively moustachioed dude from Texas, now living in Wisconsin. I moved up here recently, following the work, and now have a fine job in a surprising career path. See, I recently took a couple of degrees in Mathematics (which I capitalize out of love, grammar be damned!) hoping to be a teacher for the rest of my time. It turns out that was not such a good move for me, and I was fortunate to receive an offer to get back into private-sector IT. I am now happily managing UNIX systems for a biggish software company here in the tundra.
I’ve been consuming the sequences and lurking in the forum (and, newly, the Slack chatrooms) for several weeks. I have no recollection of how I found the site; StumbleUpon would be my first guess, though the xkcd forum is nearly as likely. As I read through the LW site I am struck by the quality of discourse, which is high even among those who disagree.
I am motivated to fill in some gaps in my own thinking on various issues of interest and importance. With the exception of my atheism, I don’t have many strongly held opinions (though at times I do seem to lean quite a ways over on some of them).
So, how did I become a rationalist? Well. Hmmm. I got pulled into a youth cult in high school. At a rally (or whatever) I was implored by a zealot on stage to “seek the truth”. I realized in hindsight he probably meant something other than that, like: “listen to me and read the bible and there’s your source of truth”. But I took him at his word. I looked at other religions and started taking philosophy courses. I talked to people who held beliefs different from my own. I dug in and studied issues of morality, politics, aesthetics, and more. Gradually I started to realize that I didn’t believe any of the ideas pushed at me by organized religion. I remember questioning what it means to “believe” and concluding that I simply don’t believe in any of the gods other people claim exist.
At one point, back in my late teens, I was a bible-thumping (literally and figuratively), charismatic, evangelical prophet of christ. A few years later I was openly secular, having still not fully grokked the scope of the words “atheist” and “agnostic”.
These days I am still openly secular, and when I get to know you, I’ll let on that I’m a gnostic atheist, and perfectly happy to Taboo both words (as I understand that phrase), preferably over good dark beer on tap and a basket of deep-fried cheese curds.
I am hesitant to admit that one of my principal interests lately is politics. While I support (and adore) the idea that politics is the mind-killer, I can’t shake the notion that we, the folks who strive to be less wrong, should be involved in the larger discussion. If there’s a subset of human endeavor that really needs an IV drip of less wrongness, it’s politics.
Now I’ve found this part of the webs, I am fair sure I’ll continue to spend more time here than I ought.
Replies from: polymathwannabe↑ comment by polymathwannabe · 2015-10-08T22:48:34.250Z · LW(p) · GW(p)
Be welcome, sir.
comment by Salokin · 2015-07-24T23:22:53.608Z · LW(p) · GW(p)
Hello from Spain! I first found out about LW after reading a post about Newcomb's problem and the basilisk last summer. A week after that I found HPMOR, and I've been reading and lurking for this whole year. It's been amazing to see that there are other people interested in ideas like transhumanism who are trying to become systematically better.
I decided to post here for the first time because I recently attended a CFAR workshop and realized that I could actually help in building a better community. I'm currently translating RAZ to Spanish and hope to create a rationality community in Madrid.
Some other things about me:
- I'm currently studying Physics at Cambridge but I'm thinking of going into applied Maths and probably into computer science. (I'm very interested in AI risk)
- I'm trying to find the best way to build healthy relationships and communities of people that help each other be better. (After my experience at CFAR I felt like I'm missing something amazing by not being in an environment like the Bay Area and want to recreate that.)
And that's it! You're all amazing for being part of something like this, hope we can make it even better all together! :)
Replies from: mrexpresso, None↑ comment by mrexpresso · 2015-07-25T00:27:56.196Z · LW(p) · GW(p)
Welcome to LW! Just a question: when do you think you will finish your translation?
Replies from: Salokin↑ comment by Salokin · 2015-07-26T22:06:26.028Z · LW(p) · GW(p)
Thank you! :) I'm planning on finishing the first book (The Map and the Territory) by October, but it will probably take longer, as I'm not very consistent with my work. The first sequence (Predictably Wrong) should be finished this week if I keep my current pace. I'm publishing it here: https://cognonomicon.wordpress.com/ (everything is in Spanish). I'd appreciate any comments, and if you think that someone you know would benefit from reading rationality in Spanish, it would be great if you shared it ^^
Replies from: Pancho_Iba↑ comment by Pancho_Iba · 2015-07-28T14:14:28.574Z · LW(p) · GW(p)
I'd gladly read and criticize your translations if you want me to, but it will have to wait until after my topology exam next week. If you want me to do it, please remind me to do so ten days from now or so, since I will most probably forget about it.
comment by Pancho_Iba · 2015-07-24T01:53:34.528Z · LW(p) · GW(p)
Regards from Argentina,
Great post. I had started reading through this site randomly while I got more and more into HPMOR, which a friend recommended, and having a little list of posts to start with will most probably prove helpful.
I would like to mention that the thing about this community I found the most astonishing was a comment that read something like "Edit: After reading some responses I've changed my mind and this comment no longer represents my beliefs." I did not even know it was possible for a human being to be so grateful and humble upon being proven wrong. And humility is something I most definitely need to learn, and I suspect I will be able to do so here. In fact, I already did, for I acknowledged the fact that someone outside my field (pure math, until recently) has something to teach me. Yes, I am (was?) THAT arrogant at a deep level, but here and now I just feel like a child, craving to learn the art of rationality.
Thank you all for what this site constitutes!
Replies from: Viliam↑ comment by Viliam · 2015-07-28T08:30:01.807Z · LW(p) · GW(p)
To me it feels easier to admit mistakes in an environment which does not punish admitting mistakes by loss of status. Where people cooperate to find the truth, instead of competing for image of infallibility.
Just saying that how one reacts to being shown errors is partially a function of their personality, but also partially a function of their environment. Changing the environment can help, although sometimes bad habits remain.
Replies from: Pancho_Iba↑ comment by Pancho_Iba · 2015-07-28T14:04:32.183Z · LW(p) · GW(p)
I quite agree, but now I'm wondering how I could change my own environment (not by replacing it, but by changing people's reactions). It seems the responsibility to do so lies upon my shoulders, since I am the one who intends to live differently. Do you believe it'd be right to attempt to change people's reactions (if I knew a way), or should I acknowledge the possibility that they are just happy the way they are, and just let them be?
Replies from: Viliam, CCC↑ comment by Viliam · 2015-07-28T15:00:47.293Z · LW(p) · GW(p)
should I acknowledge the possibility that they are just happy the way they are
They probably are. Also, even if hypothetically becoming super-rational would be an improvement for everyone, your ability to change them is limited, and it's uncertain whether the degree of change you could realistically achieve would be an improvement.
Unless you have superior manipulation skills, I believe it is extremely difficult to change people, if they don't want to. You push; they welcome the challenge and push hard in the opposite direction. Unfortunately, defending your own opinion, however stupid it is, is a favorite hobby of too many otherwise intelligent people. It could be a very frustrating experience for you, and an enjoyment for them.
At least my experiments in this area seem hugely negative. If people don't want to be rational, you are just giving them more clever arguments they can use in debates.
I hate to admit it, but "people never change" seems to be a very good heuristic, even if it is not literally true. (I hate it because of the outside view it provides for my own attempts at self-improvement. That's why I usually say "people never change unless they want to", but the problem is, wanting to change, and declaring that you want to change, are two different things.)
Also, I noticed that when you are trying to change, many people around you get anxious and try to bring you back to the "old you". If you want to change your own behavior, it is easier with completely new people, who don't know the "old you", and accept your new behavior as your standard one.
Replies from: Pancho_Iba↑ comment by Pancho_Iba · 2015-07-29T14:42:32.335Z · LW(p) · GW(p)
I know it would be hard, most likely nearly impossible, to change people without a very good idea very well executed, but perhaps a tiny possibility is reason enough to attempt it nonetheless. I wish to take your advice on trying to change myself among new people, so I ask whether you have any suggestion for a particular environment in which to try.
Replies from: Viliam↑ comment by CCC · 2015-07-28T14:25:22.118Z · LW(p) · GW(p)
now I'm wondering how I could change my own environment (not by replacing it, but by changing people's reactions)
People try to do that all the time. One of the best ways is to simply ask other people to change their reactions, and explain why - some people will listen (especially if you point out how the new environment will benefit them as well) while others won't. (Mind you, even the ones that listen will probably be slow to change their reactions... habits are not easily broken)
I'd also suggest, at the same time, changing your reactions to match your preferred environment; give everyone around you an example to follow.
If you have a position of authority (e.g. a university lecturer in a classroom) you could even use that authority to mandate how students are allowed to react - again, it would help to point out how the ability to change your mind is helpful to the students.
Do you believe it'd be right to attempt to change people's reactions
I think that it can be right to attempt to change peoples' reactions, if that change is to their benefit and the means employed to effect the change are ethical (i.e. ask them to change, don't put a gun to their head and force them to change).
Replies from: Pancho_Iba↑ comment by Pancho_Iba · 2015-07-29T14:55:01.644Z · LW(p) · GW(p)
Just asking seems a little too plain to work, but I do know some very few people who would listen. The thing is that, by doing so, they are somewhat already reacting rationally. Now I'm thinking maybe I should gather a couple of those people and someone who is less inclined to change his mind, and try to "convert" him by providing an environment in which it is OK to be mistaken and good to be corrected... Then I just repeat this process inductively until we take over the world, don't I?
If you have a position of authority (e.g. a university lecturer in a classroom) you could even use that authority to mandate how students are allowed to react...
I don't have it, but I will have it soon enough and see how it goes.
Replies from: CCC↑ comment by CCC · 2015-07-30T08:46:42.146Z · LW(p) · GW(p)
Just asking seems a little too plain to work
If the simplest solution works, then, well, it works. And if it doesn't... I don't really see any negative consequences of failure.
Now I'm thinking maybe I should gather a couple of those people and someone who is less inclined to change his mind, and try to "convert" him by providing an environment in which it is OK to be mistaken and good to be corrected... Then I just repeat this process inductively until we take over the world, don't I?
It'll work for some people, not for others. You could try, I guess, but people change slowly so it could take a while.
I think that trying to force it could have ethical problems. But inviting someone to have a chat with you and your friends shouldn't have any such problems.
I don't have it, but I will have it soon enough and see how it goes.
Good luck!
comment by jordansparks · 2015-07-31T14:21:20.312Z · LW(p) · GW(p)
Hi, my name is Jordan Sparks, and I'm the Executive Director of Oregon Cryonics. I work very hard every day to improve cryonics technology and to attract potential cryonics clinicians.
comment by Yaacov · 2015-07-26T04:57:04.564Z · LW(p) · GW(p)
Hi LW! My name is Yaacov, I've been lurking here for maybe 6 months but I've only recently created an account. I'm interested in minimizing human existential risk, effective altruism, and rationalism. I'm just starting a computer science degree at UCLA, so I don't know much about the topic now but I'll learn more quickly.
Specific questions:
What can I do to reduce existential risk, especially that posed by AI? I don't have an income as of yet. What are the best investments I can make now in my future ability to reduce existential risk?
Replies from: endoself, Squark, Viliam↑ comment by endoself · 2015-07-27T21:48:37.899Z · LW(p) · GW(p)
Hi Yaacov!
The most active MIRIx group is at UCLA. Scott Garrabrant would be happy to talk to you if you are considering research aimed at reducing x-risk. Alternatively, some generic advice for improving your future abilities is to talk to interesting people, try to do hard things, and learn about things that people with similar goals do not know about.
↑ comment by Squark · 2015-07-27T19:02:44.273Z · LW(p) · GW(p)
Hi Yaacov, welcome!
I guess that you can reduce X-risk by financing the relevant organizations, contributing to research, doing outreach or some combination of the three. You should probably decide which of these paths you expect to follow and plan accordingly.
↑ comment by Viliam · 2015-07-28T07:59:54.032Z · LW(p) · GW(p)
If you choose the path of trying to make a lot of money and supporting the organizations who do the research, 80,000 Hours can help.
If you choose to contribute by doing the research, you can start by reading what's already done.
comment by chalime · 2015-07-25T06:19:30.557Z · LW(p) · GW(p)
Hello LW!
Been lurking for about three years now- it’s time to at least introduce myself. Plus, I want to share a little about my current situation (work problems), and get some feedback on that. I’ll try and give a balanced take, but remember I’m talking about myself here…
First, for background, I’m 23, graduated about a year and a half ago with degrees in finance, accounting, and economics (I can sit still and take tests), and I also played basketball in college (one thing I can definitively say I’m good at is dribbling a basketball).
Brief Intellectual Journey
I didn’t care much about anything besides sports until I got to college. Freshman year, I took a micro class and found it interesting, so I went online and discovered Marginal Revolution. I’ve been addicted to the internet ever since.
It started with the George Mason econ guys (Kling, Caplan, Roberts—that’s my bias), then I got interested in the psychology behind our beliefs and our actions (greatest hits being The Righteous Mind (Haidt), Thinking Fast and Slow (Kahneman), Mark Manson’s blog, Paul Graham’s blog). Somewhere during that time I stumbled across Lesswrong, SSC, HPMOR, and the rest of the rationality blogosphere, and it’s all just amazing. I love it but the downside is that I probably spend too much time reading instead of doing something more challenging.
The Big Three (EA, job/career, religion)
Right now, these three are overwhelming everything else, and I want to talk about them. First the easy one, religion. I am not religious, and that fact has caused me significant strife. I’ve lost an important relationship, become less close with my family (I’m in the closet- can’t bring myself to tell my mom), and generally feel kind of isolated because everyone I know seems to be religious and I struggle to look past that Important difference of opinion.
EA
I admire the EA movement and everyone involved. My base belief is that I do not need a lot of money to live on, and there are many people/causes that could make better use of the extra than me. I do have a high degree of uncertainty on what the best cause is, but I’ve simply been deferring those questions to GiveWell and I’m ok with that arrangement. So that’s the vision, but what about the execution?
Not great. While I did donate a pretty significant amount (for me) at the beginning of this year, I’ve stopped sending any money. The current problem is the uncertainty around where my income is going to come from in the future, as well as the overall unenjoyable experiences that all of my office type jobs have been to date. Those experiences make me want to save as much as possible, so I can be free to spend time how I want.
Let’s talk about my current job, and how utterly crazy it is. I don’t have anything lined up to do after this, but I don’t know how much longer I can hang on- it’s that bad. I try to stay upbeat about it but I know it’s only a matter of time (in my mind as of now, if I’m still working there in one month, I have failed).
I work at a small, boutique wealth management firm, and I have many objections to how this business works. It’s pretty simple- in my opinion, the incentive structure (how we are paid) is in direct conflict with actually giving good advice. And no one knows what they are paying us. And we don’t give good advice because that is harder to sell. And it doesn’t matter because there is money everywhere. And the industry is changing and we are not. And the work environment is mildly toxic. Ok, let me explain all of this more clearly.
Fees/Revenue- This is our emphasis- all team meetings come back to discussions of revenue. This part of our business is very easy to understand. We are paid based on our Assets Under Management (average fee is just over 1%- that means a $1,000 account would pay us $10/year, a $1,000,000 account would pay us $10,000/year). We are also paid commissions for selling insurance and annuity contracts.
My objections:
- Logic of AUM model means we literally value people based on the size of their investable assets.
- The 1%+ fee is too high. The dollar amount for our larger clients can get truly absurd, even though we do the same stuff for them as everyone else. There is starting to be more attention to fees as competition and technology change the game, but we are still getting by the old way because we can (for now).
- % fee obscures the true amount that they are paying us. Also, the fee is deducted automatically from their account, similar to payroll taxes, so they don’t even notice when it goes. Sorry, I can’t help but focus on the fee because it matters; over the long time periods we are talking about, it adds up to hundreds of thousands of dollars (see the sketch after this list). And this is for something that basically amounts to a commodity service- investment management- and we won’t even do that as well as we could (I am an advocate for indexing- but we have to make things complicated and put portfolios together with 20+ holdings and an active/passive mix of funds).
- We push insurance (permanent life insurance specifically) and annuity contracts more than we should because of the commissions.
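A minimal sketch of the fee-drag arithmetic (all numbers hypothetical, not figures from any real firm; the $1M portfolio, 6% gross return, and flat 1% AUM fee are round illustrative assumptions):

```python
# Hypothetical illustration of how a percentage-of-assets fee compounds.
# Assumes a $1,000,000 portfolio, 6% gross annual return, 1% annual AUM fee.

def final_balance(principal, gross_return, annual_fee, years):
    """Grow a portfolio, deducting a percentage-of-assets fee each year."""
    balance = principal
    for _ in range(years):
        balance *= 1 + gross_return  # market growth
        balance *= 1 - annual_fee    # fee comes straight out of the account
    return balance

principal, gross, years = 1_000_000, 0.06, 30

with_fee = final_balance(principal, gross, 0.01, years)
no_fee = final_balance(principal, gross, 0.00, years)

print(f"With 1% AUM fee:  ${with_fee:,.0f}")   # ~$4.25M
print(f"Without the fee:  ${no_fee:,.0f}")     # ~$5.74M
print(f"Fee drag over {years} years: ${no_fee - with_fee:,.0f}")
```

On these assumptions the fee costs the client well over a million dollars across a 30-year horizon, which is why a small-looking annual percentage understates what clients actually pay.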
I would like to simply give financial advice and charge a clear, transparent fee for those services, but because that wouldn’t be as profitable (do you have any other theories?), we get this mess of a system.
This is getting long, so I’ll wrap up with some quick-hit ‘culture’ objections:
- I am in the office from 7:30 am – 6:00 pm at a minimum every day. That is soul-crushing when I have zero confidence that we are doing a Good Thing. I don’t have time to do much else.
- I have to wear a tie every day (not ideal)
- Charisma and delivery matter more than actually giving good advice.
- I make retirement plans for people. Most of these people project their income ‘needs’ in retirement to be something ridiculous like $20k/month throughout their thirty year retirement. I know, I know- I try to detach and I realize that this is their money and I have nothing to do with it and not to get overly moral with it, BUT I struggle. I struggle to care enough to actually dig in as much as I could, because 1. You’re very rich, it doesn’t matter and 2. The discussion on fees above taints everything.
- My bosses will say one thing in this meeting and the opposite in the next- it’s whatever the client wants to hear- whatever will get the deal done. They are completely malleable to what the situation requires. Kind of amazing to watch, but mostly sad/upsetting.
Qualifiers
Overall, we are not robbing them. We do provide value in that we are giving people some structure and guidance (most people need this- the behavioral aspect can make a huge impact). They came to us and agreed to the terms, so what the heck. But it just gets stupid when you could say: instead of engaging with us, make these two clicks and save $20,000/year. That is wasteful, and I do not like waste.
Agree/Disagree? Am I crazy? Feedback is welcome.
Replies from: Fluttershy, Vaniver, Viliam↑ comment by Fluttershy · 2015-07-25T07:20:00.125Z · LW(p) · GW(p)
Hi chalime,
Welcome to LW!
There are many of us here who share your views on the financial services industry, and index funds with low expense ratios have been strongly recommended in nearly all of the financial advice threads posted on LW. I once went to a career information session hosted by a boutique wealth management firm myself, and ended up not even sending them my resume, for similar reasons regarding my personal fit with the field and the value of the services provided by advisers.
The 80,000 Hours blog has historically mentioned that the good done by donating a small part of one's income to excellent charities likely outweighs any harm done by a career in the financial services industry. However, if working for a wealth management company doesn't feel like a good fit to you, you certainly shouldn't feel morally obligated to stay with them for earning-to-give reasons!
Replies from: chalime↑ comment by chalime · 2015-07-26T15:34:11.074Z · LW(p) · GW(p)
Thanks for the reply, Fluttershy-
Yes, I’ll be honest, my mind is made up. There is no way I can continue to do this every day- it’s just not sustainable.
It’s a little scary because this is already my second job since graduating, and even if I think I have good reasons for leaving, that stuff is not easy to explain.
↑ comment by Vaniver · 2015-07-28T13:27:21.497Z · LW(p) · GW(p)
Welcome!
So, presumably you're familiar with companies like Vanguard, Wealthfront, and Betterment, which are much more customer-aligned than the rest of the financial services industry. But part of that is spending much less attention on individual clients--and, consequently, employing considerably fewer people, and different sorts of people. (I would expect that Wealthfront needs more web programmers than economists, for example.) You might consider applying at those places, but my suspicion is you'll end up in another field entirely.
Replies from: chalime↑ comment by chalime · 2015-07-29T04:32:43.868Z · LW(p) · GW(p)
Yep, I've actually already applied to all three of those places. Vanguard would be my first choice of the three because I could do more outside of focusing strictly on investments, and actually have an advisor type relationship with people. You're right though in that I do have hesitations about being in this industry at all, because:
- I am too anti-fee (e.g. why pay a fee on an IRA account at Wealthfront/Betterment? Yes, it’s better than what most people would do on their own, but it’s still not optimal… I go back and forth on this one, because I do put a high value on the simplicity of it).
- The business is based on meeting with lots of people and selling to them, and the people I would get along with the best are probably doing this stuff themselves
- There’s tension between what this would be focused on (manage money effectively, accumulate wealth) vs. my desire to be more EA and act on the knowledge that I have enough, and many others do not.
I haven’t heard back from any of the applications, so it’s a moot point right now.
↑ comment by Viliam · 2015-07-28T08:42:31.964Z · LW(p) · GW(p)
Maybe someone on LW could recommend you a better job. Either here (but you would have to tell us at least what country you are from) or at a local meetup.
Replies from: chalime↑ comment by chalime · 2015-08-14T18:10:55.138Z · LW(p) · GW(p)
Well, I pulled the trigger yesterday. While it felt great to actually speak my mind and have a real discussion regarding all of these issues (it was actually pretty amazing- no yelling or anyone getting upset- there was actual discourse), I will now be jobless in a month, and I really don’t know the answer to what’s next.
I’m debating between staying in my current area which would be a finance/accounting/operations type of role or just scrapping that whole path and try to go the programming route (close to zero expertise as of now). I’ve spent a lot of time working towards different credentials (CPA being the main one) so it’s hard to walk away from that even though I don’t think I’m learning anything all that useful.
I’ve never met anyone from these communities (Lesswrong/EA), but I spend a lot of time here, so yeah I would definitely be open to talking with anyone here about general strategy (I’ve read all of 80000 hours) or specific opportunities if someone stumbles across this and has an idea. I will use more conventional methods as well, but I wanted to at least put this out there.
Replies from: Viliam, Lumifer↑ comment by Viliam · 2015-08-14T18:53:36.305Z · LW(p) · GW(p)
You may want to post your question in an Open Thread. Maybe it would be more strategic to skip the current one, which already contains over 200 comments, and wait for the new one to appear on Wednesday the 19th, so more people will see it. Better than here, in a thread that started three weeks ago.
I know almost nothing about the situation in "finance/accounting/operations type of role". I have mostly been a programmer, so now my availability bias screams at me that "everyone is a programmer, and everything outside of IT is super rare", which is obviously nonsense. If there is a website in your country with job offers, perhaps you could try to imagine that you already have 3 years of experience, and look at how many opportunities there are for each option and how well they pay.
My experience with programming in Java was that about 50% of the jobs available are programming for some kind of financial institution. (But this may be irrelevant for you; I am describing Eastern Europe.) The companies usually need an analyst to talk with the customer and explain their needs to the programmers. If you have a good financial background, this could be the right job for you.
Programming could be risky, because it's not for everyone. You should probably try it first in your free time. (Hint: If you don't like programming in your free time, then the job probably is not the right one for you.) Also, after a few years the programmers usually hit the salary ceiling, and want to switch to managers or analysts. (Again, in Eastern Europe; I don't know how universal this is.) If you could start as an analyst, you would be already ahead of me in the IT career, and I am almost twice your age with about 20 years of programming experience.
I have a friend who works in IT and makes more money than I do despite being a worse programmer, because he is a specialist: in his case it is finance and databases; also he is willing to travel to a customer in a different country whenever necessary. So the lesson is -- don't throw your specialization away just because you want to go to IT; instead try finding a place where they will value your specialization.
Also, tell us where you live, so the people living near you can contact you. Networking: it's what brings the good jobs (as opposed to random jobs).
comment by [deleted] · 2015-08-15T08:11:41.549Z · LW(p) · GW(p)
Hi everyone.
I'm about to start my second year of college in Utah. My intent is to major in math and/or computer science, although more generally I'm interested in many of the subjects that LessWrongers seem to gravitate towards (philosophy, physics, psychology, economics, etc.)
I first noticed something that Eliezer Yudkowsky posted on Facebook several months ago, and have since been quietly exploring the rationality-sphere and surrounding digital territories (although I'm no longer on FB). Joining LessWrong seemed like the obvious next step given the time I had spent on adjacent sites. I'm here solely out of curiosity and philosophical interest.
Thanks to Sarunas and predecessors for the welcome page, and the LW community more generally. I look forward to being a part of it.
Replies from: Stephen_Cole, Vladimir_Nesov↑ comment by Stephen_Cole · 2015-08-15T15:43:21.215Z · LW(p) · GW(p)
Exciting! If I were in your place I would look at the growing field of causal inference, which lives at the interface of statistics, computer science, epidemiology, and economics: the books by Hernan and Robins (Causal Inference) and Pearl (Causality), as well as the Journal of Causal Inference, edited by Judea Pearl and Maya Petersen.
Replies from: None↑ comment by Vladimir_Nesov · 2015-08-15T19:13:38.685Z · LW(p) · GW(p)
I'm here solely out of curiosity and philosophical interest.
And if you did in fact have a secret agenda, you wouldn't reveal it.
Replies from: None
comment by Della · 2015-08-12T00:36:06.506Z · LW(p) · GW(p)
Hello! I'm Alex, from Maryland, but I go to college in Ithaca, NY, where I am working on my math major/computer science minor. Way back when, a few of my friends kept talking about how great HPMOR was, so I started reading and I loved it. It is one of my all-time favorite stories. As I was reading it, I was very interested by all the ways Harry knew how to think right, and then one of my friends recommended the sequences and I read them all! Except for metaethics and quantum stuff.
I really enjoyed the sequences. They changed how I think. I managed to climb out of the agnostic trap of "you can neither prove nor disprove the existence of a deity". I plan on becoming even more rational. I've heard CFAR is a good resource.
I had been reading the posts on the main page for a while when I saw the most recent census and felt guilty about taking it without an account, so I made one but haven't used it until now. I didn't feel right commenting in other places when I hadn't introduced myself, but I am finally done putting it off!
comment by anna_macdonald · 2015-10-17T00:44:09.019Z · LW(p) · GW(p)
Hi LWers.
My brothers got me into HPMOR, I started reading a couple sequences, switched over to reading the full Rationality: AI to Zombies, and recently finished that. The last few days, I've been browsing around LW semi-randomly, reading posts about starting to apply the concepts and about fighting akrasia.
I'm guessing I'm atypical for an LW reader: I'm a stay-at-home mom. Any others of those on here?
Replies from: Alicorn, Gram_Stone, Vaniver↑ comment by Gram_Stone · 2015-10-17T02:03:59.645Z · LW(p) · GW(p)
There are definitely a lot of parents on LessWrong. I'm sure there are at least a few stay-at-home moms.
In fact, 18.4% of the participants in the 2014 LW Survey have children, and 0.5% (8 people) describe themselves as 'homemakers.'
Replies from: anna_macdonald↑ comment by anna_macdonald · 2015-10-17T02:49:29.334Z · LW(p) · GW(p)
Thanks for the link! I made a (brief, low effort) attempt to find that post earlier, but only came across the census surveys, not the results.
Heck, there's even one survey respondent who has more kids than I do. Cool beans.
↑ comment by Vaniver · 2015-10-17T06:42:56.250Z · LW(p) · GW(p)
Welcome!
How many kids, and how old are they?
Replies from: anna_macdonald↑ comment by anna_macdonald · 2015-10-20T22:54:52.861Z · LW(p) · GW(p)
6... 7 if you count my adult step-daughter (who I didn't really help raise). Ages 12, 11, 9, 7, 5, and 7-months.
Replies from: Vaniver↑ comment by Vaniver · 2015-10-21T14:57:35.870Z · LW(p) · GW(p)
Impressive! Both of my parents came from huge households (7 and 8), but I had the more typical upbringing with only one sibling, who was only slightly older.
Replies from: anna_macdonald↑ comment by anna_macdonald · 2015-10-21T16:10:06.860Z · LW(p) · GW(p)
My mom was one of 11, my dad one of 4; I am one of 7 myself. It definitely makes having a big family feel more natural.
comment by csandon · 2015-08-29T05:38:58.531Z · LW(p) · GW(p)
Hi, I am a graduate student who is working on getting a PhD in math. My journey here started when I took a moral philosophy course as an undergrad that made me think about what I should do. I decided that I should do my best to improve the world, and I eventually decided that existential risk mitigation was the highest-priority improvement. Researching that led me here; I lurked for a few years, and now I have finally made an account.
I am hoping to get some insight here as to whether it would be most effective for me to work on the AI friendliness problem, donate money, or something else. I am also interested in learning how to manage routine aspects of my life better.
comment by riparianx · 2015-08-26T20:06:09.763Z · LW(p) · GW(p)
Hi, I'm Alexandra. I'm turning 18 tomorrow, and I'm slowly coming to the conclusion that I have GOT to be more rigorous in my self-improvement if I'm going to manage to reach my ambitions.
I'm not quite a new member- I've lurked a lot, and even made a post a while back that got a decent number of comments and karma.
I discovered Less Wrong through HPMOR. It was the first time I'd read a story with genuinely intelligent characters, and the things in it resonated a lot with me. This was a couple of years ago. I've spent a lot of time here and on the various other sites the rationalist community likes.
I'm mostly posting this now because I'd like to get more involved. I recently read an article that said the best way to increase competency at a subject is to join a community revolving around the subject. I live in OKC, where I've never even HEARD of another student of rationality. The closest I've gotten is introducing my boyfriend to HPMOR.
I'm a biology student at a community college near my living space. I'm very good at biology, english, philosophy, etc. I'm really, REALLY bad at chemistry/physics and math. I've done some basic research into what makes a person suck at mathematical things, but it's been frustratingly low on insights. Most of the time, it's resulted in "you need to practice! you need to learn mathematical thinking!" which is objectively true, but practically, a little more detail in what to do about it would be nice. Practice hasn't really seemed to help too much beyond working problems. Give me an equation and variables and I can do the math. But I can't EXPLAIN anything, or apply it to non-obvious problems involving it. This is seriously getting in the way of both my biology studies and my study of rationality. I took general chemistry 1 twice to get a low B. I'm in the first two weeks of general chemistry 2 and it takes ages to get what seems like basic concepts. When I discovered I magically had a B in College Algebra, I suspected the professor curved the grade without telling us. I withdrew from precalc after three weeks because I realized I couldn't cope.
I'm hoping to get into contact with some of the more mathematically inclined people here who are willing to help. I considered emailing a few of the higher-profile contributors to the community, but frankly, they're intimidating and the idea is very scary to my inner caveman worrying about being kicked out of the tribe.
I have some pretty lofty goals for my future research- I want to go into genetically modified organisms, and try to improve nutrition and caloric intake in parts of the world where that sort of thing is difficult to get. Reducing scarcity in our society seems like a good start to a general boost in the "goodness" of the world. But there is absolutely no way I can succeed at this if I can't get a good handle on math and chemistry. My skill at the lower levels of biology is only going to carry me so far.
I've probably rambled enough, so thanks if you took the time to read. If, for some strange reason, you feel a pull towards helping a struggling student get a grasp on abstract thinking, I urge you to give into the temptation because oh god I need the help.
Replies from: CCC, None, ChristianKl↑ comment by CCC · 2015-10-06T10:28:38.130Z · LW(p) · GW(p)
Hi, Alexandra!
I'm really, REALLY bad at chemistry/physics and math. ... Practice hasn't really seemed to help too much beyond working problems. Give me an equation and variables and I can do the math. But I can't EXPLAIN anything, or apply it to non-obvious problems involving it.
Okay... I am one of those people who is really good at math. Of course, I cannot be certain, but I suspect that the trouble here might be that you failed to grasp some essential point way, way back at the early stages of your mathematical education.
So, let's see how you handle a non-obvious problem. In answering this question, I'd like you to show me, as far as possible, your entire reasoning process, start to finish; the more information you can give, the more helpful my further responses can be.
The question is as follows: John is on his way to an important meeting; he has to be there at noon. Before leaving home, he has calculated what his average speed has to be to arrive at his meeting on time. When he is exactly half-way to his destination, he calculates his average speed so far, and to his dismay he finds that it is half the value that it needs to be.
How fast does John need to travel on the second half of his journey in order to reach his destination on time?
↑ comment by [deleted] · 2015-10-06T04:51:24.787Z · LW(p) · GW(p)
Hello, Alexandra.
I also struggle with the math thing. My secret to success is practicing until I'm miserable, but these things also help:
Read layman books about mathematical history, theory, and research. It ignites enthusiasm. I recommend James Gleick's book Chaos, and his book The Information. He has a talent for weaving compelling narratives around the science.
Learn a little bit of programming. While coding is frustrating in its own right, I find that it forces me to think mathematically. I can't leave steps out. I'm learning Python right now, and it's a good introductory language (I'm told).
Explain it to your cat. I'm only mostly kidding. I've found that tutoring lower-level math has helped my skills in calculus and statistics. Learning to walk through the problems in a coherent way, so that a moody sixth-grader can understand it, is tremendously helpful.
I'd love to work together on exploring mathematical concepts. If you'd like to collaborate, hit me up sometime.
Also: if you like HPMOR, you should read Luminosity. It is a rationality-driven version of Twilight that's actually really good.
Replies from: riparianx↑ comment by riparianx · 2015-10-18T05:21:23.560Z · LW(p) · GW(p)
I will do that. I think I may actually have a copy of Chaos lying around. I've actually read (most of) Luminosity- I lost my place in the story at one point due to computer issues and never got back to it.
I tried Codecademy once, didn't find it that interesting. I don't think it used Python, though. I'll check it out. Programming is in general very useful.
If I can find someone to tutor, I'll try that. It certainly can't hurt. Thank you!
↑ comment by ChristianKl · 2015-10-06T17:31:54.127Z · LW(p) · GW(p)
When I discovered I magically had a B in College Algebra, I suspected the professor curved the grade without telling us.
Given that you are female, it's likely that there are identity issues involved that make you worse at math than you would be otherwise. If you get a B, take it as empirical evidence that your belief that you are inherently bad at math might be wrong.
Replies from: riparianx↑ comment by riparianx · 2015-10-18T05:16:51.309Z · LW(p) · GW(p)
While I agree that society tends to dissuade women from math, it doesn't really work in my specific subset. I grew up with more female math-related role models than male. (Mom was chemistry major, dad majored in education partially because he sucked at math.) And the B is a massive outlier- it takes a lot of work for me to keep a C, usually. But thank you for the input.
comment by Vamair0 · 2015-09-19T09:47:30.552Z · LW(p) · GW(p)
Hello. My name is Andrey; I'm a C++ programmer from Russia. I've been lurking here for about three years. Like many others, I found this site via a link from HPMOR. The biggest reasons for joining in the first place were that I believe the community is right about a lot of important things, and that the comments are of a quality that's difficult to find elsewhere on the Net. I've already finished reading the Sequences, and right now I'm interested in ethics and believe I've got a few ideas to discuss.
For my origin story as a rationalist: as it often happens, it all started with a crisis of faith. Actually, the second one. The first was a turn from Christianity to a complicated New Age paradigm I'll maybe explain later. The second was prompted by the question of why I believe some of the things I believe. While I used to think there was a lot of evidence for the supernatural, I started trying to verify those claims and also read religious apologetics to evaluate the best arguments they have. Yup, they were bad. The world doesn't look like there exists a powerful interventionist deity. (And even if the miracles they say are happening right now are true miracles, all of them are better explained by not-at-all omnipotent or omniscient, slightly magical fairies.) This, coupled with my interest in physics and biology, made me think there are problems that are both huge and don't get the attention they deserve. Like, y'know, death, or catastrophic changes. And all we've got are some resources, some understanding of how things actually are, and a limited ability to cooperate with each other.
I'm looking forward to discussing stuff with people here.
Replies from: None↑ comment by [deleted] · 2015-10-06T04:42:40.704Z · LW(p) · GW(p)
Hi there Andrey!
I am also a former apologist (aspiring, anyway - teenage girls aren't taken very seriously by theologians). I clung to my faith so hard. It's amazing how much evidence there is against the classical notion of the supernatural. It's a snowball effect. Every piece stripped away another aspect of my fundamentalism, until I was a socially-liberal Christian. Then, an agnostic theist. Then, an agnostic atheist.
I'm also looking forward to getting involved with the community. The high standards for conversation here are intimidating, but it's exciting, too.
comment by PAM606 · 2015-08-11T18:10:41.664Z · LW(p) · GW(p)
Well since I'm procrastinating on important things I might as well use this time to introduce myself. Structured procrastination for the win!
Hello everyone. I have been poking around on Less Wrong, Slate Star Codex, and related places for around three to four years now, but mostly lurking. I have gradually become more and more taken with the risks of artificial intelligence orders of magnitude smarter than us Homo sapiens. In that respect, I'm glad that the topic of superintelligent AI has taken off into the mainstream media and academia. EY isn't the lonely crank with no real academic affiliation, a nerdy Cassandra of his time, spewing nonsense on the internet anymore. From what I gather, status games are so cliche here that they're not cool, but with endorsements by people like Hawking and Gates, people can't easily dismiss these ideas anymore.

I feel like this is a massively good thing, because with these ideas up in the air, so to speak, even intelligent AI researchers who disagree on these topics will probably not accidentally build an AI that turns us all into paper clips to maximize happiness. That is not to say that there don't exist numerous other failure pathways. Maybe someday notions such as I. J. Good's idea of an intelligence-improving feedback loop will make their way into standard AI textbooks. You don't have to join the LW subcommunity to understand the risks, and neither do you have to read through the Sequences and all that. IMO, the greatest good Less Wrong has done for the world so far is to propagate and legitimize these concerns. I'm aware of the other key ideas in the memespace of Less Wrong (rationality and all that), but it's hard enough to get the general public and other academics and researchers to take superintelligent AI seriously as an existential risk without adding all sorts of other ideas outside their inference bubble.
Intellectually, my background is in physics (currently studying, along with the requisite math you pick up from physics). I have been reading philosophy for a ridiculously long time (around seven years now), although as a part-time hobby. Probably like most people here, I have an incurable addiction to the internet. I also read a lot, in varied intellectual fields, including plenty of fiction, anything from Milton to YA books. Science fiction and fantasy are probably responsible for why I find transhumanist notions so easy to swallow: read enough Peter F. Hamilton and Greg Egan and things like living forever and superintelligent machines are downright tame in comparison. I like every academic subject (gender studies doesn't count): neuroscience, economics, computer science... you name it. Even "fluffy" stuff like sociology and psychology and literature. I am doomed to be caught between the two cultures (C. P. Snow).
As to the stuff regarding rationality and cognitive biases: while the scientific evidence wasn't in until fairly recently, Hume anticipated all of it centuries ago. Now, I know Less Wrong isn't very impressed with a priori armchair philosophizing without a scrap of evidence, but I have to disagree, on account of correct theories being much easier to build off empirical data; deducing the correct theory of a natural phenomenon without any experimental data is much, much harder. Hume had a huge possibility space, while modern psychologists and cognitive scientists have a much smaller one. Let's not forget Hume's most famous quote: "If we take in our hand any volume; of divinity or school metaphysics, for instance; let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames: for it can contain nothing but sophistry and illusion." I honestly can't say I was surprised by the framework presented in the Sequences like most people were, but it sure is nice to find a community that thinks along the same lines I do! The tactics for applying these ideas so I can overcome these biases were very nice and welcome. My favorite aspect of LW has to be that people have an agreed framework for discussing things, so that in theory we can come to agreement. Debating is one of my favorite things to do, and frankly most people are not worth arguing with; it's a waste of time.
I'm interested in contributing to the study of friendly AI and have some ideas regarding it, so I might post here in the future about stuff I'm thinking about. Please, please feel free to criticize such posts to your heart's content. I appreciate feedback much more than I care about slights or insults, so feel free to be rude. My ideas are probably old or wrong anyway; I haven't had time to look through all the literature presented here or elsewhere.
Lastly, I should mention I have been active in the Less Wrong IRC room. If you want to find me, I'm there. Also, if lukeprog sees this: I really liked the literature summaries you post sometimes. They've been a huge help and saved me a ton of time in my own exploration of the scientific literature.
comment by dglukhov · 2016-12-28T21:29:51.756Z · LW(p) · GW(p)
Hello all,
I found this site from a link in the comments section of an SCP Foundation post, which in turn linked to one of Eliezer's stranger allegorical pieces about the dangers of runaway AI getting the best of us. I've been hooked since.
Thanks to this site, I'm relearning university physics through Feynman, have plans to pick up a couple textbooks from the recommended list, and plan on taking the opportunity to meet some hopefully intellectually stimulating people in person if any of the meetups you guys seem to regularly have manage to ever make it closer to the general Massachusetts area.
I recently graduated with a B.S. in Chemistry, with the now-odd realization that I haven't really learned much during my time at university. I hope participating here will help fill that void.
Furthermore, if I'm lucky, I might get to contribute to the plethora of useful discussions that seem to populate this site. If I'm even luckier, those contributions will be positive. Let's just hope I learn fast enough to make sure luck isn't the deciding factor for such an outcome.
I am also curious as to the level of regular activity this site receives, perhaps a link to some statistics? Any reply would be greatly appreciated.
Also, I don't know if this is really relevant here, but I'd like to mention that I have a weird dream of someday inventing direct mental communication between people that doesn't involve the use of language, or at the very least help such a project along if any exist. I don't know if anybody will care for such news, or even if this is a realistic goal to strive for considering the multitude of other priorities I have in life, but hey, it is what it is. Supposedly, meeting such a goal would at least require some optimization of my own ability to think clearly and correctly. Yet another reason to come here, no doubt.
Well, here goes nothing! Hi guys!
Replies from: Raemon
comment by [deleted] · 2015-10-06T04:33:26.295Z · LW(p) · GW(p)
Hello!
I became interested in psychology at a young age, and irritated everyone around me by reading (and refusing to shut up about) the entire psych section of my local library. I had a difficult time at that age separating the "woo" from actual science, and am disappointed that I focused more on "trivia learned" and "books read" than actual retention. At any rate, I have a pretty good contextual knowledge of psychology, even if my specific knowledge is shaky. I put this knowledge to good use for seven years while I worked with developmentally delayed children.
I discovered Less Wrong in 2011/2009/2007/I actually have three distinct memories of discovering it at different times, but was turned off by the trend of atheism. I know how ridiculous that is for an aspiring rationalist, to reject evidence because it's uncomfortable. The "quiet strain" was too much, and I found the community exclusive and hard to break into. This site was not responsible for the disintegration of my faith, but it was another nudge in that direction. I don't know how to quantify my beliefs anymore; I think the God/No-God dichotomy is irrelevant. I'm perfectly willing to accept evidence of a superintelligent, superpowerful being. But assigning characteristics like "supernatural" is wrong. If such a creature exists, it's merely something we don't understand yet.
I am a lifelong fan of Harry Potter, so I've been keeping up with HPMOR off and on. I've decided to involve myself in this community now, because developing connections with rational people is a priority now. There are so many people having interesting, rational conversations, and I'd like to meet them. I'd like to participate in the public eye, as egotistical as that may sound. The concepts of rationality are getting mainstream attention, and those public-forum debates will be more and more crucial. I intend to be involved.
EDIT: I used the phrase "at any rate" too often
comment by BiasedBayes · 2015-09-13T15:51:11.168Z · LW(p) · GW(p)
Hello all!
I'm a medical student and a researcher. My interests are consciousness, the computational theory of mind, evolutionary psychology, and medical decision making. I bought Eliezer's book and found my way here because of it.
I want to thank Eliezer for writing the book; it's the best writing I have read this year. Thank you.
Replies from: hyporational↑ comment by hyporational · 2015-09-14T02:51:47.772Z · LW(p) · GW(p)
Welcome! I'm an MD and haven't yet figured out why there are so few of us here, given the importance of rationality for medical decision making. It's interesting that at least in my country there is zero training in cognitive biases in the curriculum.
Replies from: Anders_H↑ comment by Anders_H · 2015-09-14T04:16:31.133Z · LW(p) · GW(p)
I have the Irish equivalent of an MD; "Medical Bachelor, Bachelor of Surgery, Bachelor of the Art of Obstetrics". This unwieldy degree puts me in fairly decent company on Less Wrong.
I may be generalizing from a sample of one, but my impression is that medicine selects out rationalists for the following reasons:
(1) The human body is an incompletely understood, highly complex system; the consequences of manipulating any of its components generally cannot be predicted from an understanding of the overall system. Medicine therefore necessarily has to rely heavily on memorization (at least until we get algorithms that take care of the memorization).
(2) A large component of successful practice of medicine is the ability to play the socially expected part of a doctor.
(3) From a financial perspective, medical school is a junk investment after you consider the opportunity costs. Consider the years in training, the number of hours worked, the high stakes and high pressure, the possibility of being sued etc. For mainstream society, this idea sounds almost contrarian, so rationalists may be more likely to recognize it.
--
My story may be relevant here: I was a middling medical student; I did well in those of the pre-clinical courses that did not rely too heavily on memorization, but barely scraped by in many of the clinical rotations. I never had any real passion for medicine, and this was certainly reflected in my performance.
When I worked as an intern physician, I realized that my map of the human body was insufficiently detailed to confidently make clinical decisions; I still wonder whether my classmates were better at absorbing knowledge that I had missed out on, or if they are just better at exuding confidence under uncertainty.
I now work in a very subspecialized area of medical research that is better aligned with rational thinking; I essentially try to apply modern ideas about causal inference to comparative effectiveness research and medical decision making. I was genuinely surprised to find that I could perform at the top level at Harvard, substantially outperforming people who were in a different league from me in terms of their performance in medical school. I am not sure whether this says something about the importance of being genuinely motivated, or if it is a matter of different cognitive personalities.
In retrospect, I am happy with where this path has taken me, but I can't help but wonder if there was a shorter path to get here. If I could talk to my 18-year old self, I certainly would have told him to stay far away from medicine.
Replies from: EHeller, BiasedBayes, hyporational
↑ comment by EHeller · 2015-09-14T05:43:44.190Z · LW(p) · GW(p)
I don't think medicine is a junk investment when you consider the opportunity cost, at least in the US.
Consider my sister, a fairly median medical school graduate in the US. After 4 years of medical school (plus her undergrad) she graduated with 150k in debt (at 6% or so). She then did a residency for 3 years making 50k a year, give or take. After that she became an attending with a starting salary of $220k. At younger than 30, she was in the top 4% of salaries in the US.
The opportunity cost is maybe ~$45k × 4 years = $180k, plus direct costs of $150k or so, for roughly $330k "lost to training." Against that, however, you have 35+ years of making $100k a year more than some alternative version of yourself who didn't do medical school. Depending on investment and loan decisions, by 5 years out you've recouped your investment.
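A rough sketch of that break-even arithmetic in Python (a back-of-the-envelope model only: it uses the approximate figures above and deliberately ignores taxes, raises, loan interest, and investment returns):

```python
# Back-of-the-envelope break-even for the medical-school figures above.
# Taxes, raises, loan interest, and investment returns are ignored.
training_cost = 150_000 + 4 * 45_000   # debt plus 4 years of ~$45k forgone salary
residency_gap = 3 * (50_000 - 45_000)  # residency (~$50k) vs. the ~$45k alternative
premium = 100_000                      # assumed post-residency earnings premium per year

net = -training_cost + residency_gap
years = 0
while net < 0:
    years += 1
    net += premium

print(f"Break-even after ~{years} attending years")  # ~4 years under these assumptions
```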
Now, if you don't like medicine and hate the work, you've probably damned yourself to doing it anyway. Paying back that much loan is going to be tough working in any other job. But that is a different story than opportunity cost.
↑ comment by BiasedBayes · 2015-09-14T12:15:30.827Z · LW(p) · GW(p)
Thanks, hyporational! It is exactly the same here. Cognitive biases, heuristics, and even Bayes' theorem (normative decision making) are not really taught here.
Also, I once argued against a pseudoscientific treatment (for mental illness), and my arguments were completely ignored by 200 people because of argumentum ad hominem and attribute substitution (who looks like he is right vs. looking at the actual arguments). Most people don't know what a good argument is or how to think about the probability of a statement.
Interesting points, Anders_H. I have to think about those a little bit.
Replies from: hyporational
↑ comment by hyporational · 2015-09-14T16:39:41.905Z · LW(p) · GW(p)
We were taught Bayes in the form of predictive values, but this was pretty cursory. Challenging the medical professors' competence publicly isn't a smart move careerwise, unless they happen to be exceptionally rational and principled, unfortunately. There's a time to shut up and multiply, and a time to bend to the will of the elders :)
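For anyone who hasn't seen the connection spelled out: a predictive value is just Bayes' theorem applied to a test's sensitivity, specificity, and the prevalence of the condition. A minimal sketch in Python (the test numbers are made up for illustration):

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value via Bayes' theorem:
    P(disease | positive) = P(positive | disease) * P(disease) / P(positive)."""
    p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
    return sensitivity * prevalence / p_positive

# Even a decent test for a rare condition yields mostly false positives:
print(ppv(sensitivity=0.9, specificity=0.95, prevalence=0.01))  # ~0.15
```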
Replies from: Lumifer, BiasedBayes
↑ comment by Lumifer · 2015-09-14T16:47:01.613Z · LW(p) · GW(p)
Challenging the medical professors' competence publicly isn't a smart move careerwise
Reminds me of:
One day when I was a junior medical student, a very important Boston surgeon visited the school and delivered a great treatise on a large number of patients who had undergone successful operations for vascular reconstruction.
At the end of the lecture, a young student at the back of the room timidly asked, “Do you have any controls?” Well, the great surgeon drew himself up to his full height, hit the desk, and said, “Do you mean did I not operate on half the patients?” The hall grew very quiet then. The voice at the back of the room very hesitantly replied, “Yes, that’s what I had in mind.” Then the visitor’s fist really came down as he thundered, “Of course not. That would have doomed half of them to their death.”
God, it was quiet then, and one could scarcely hear the small voice ask, “Which half?”
↑ comment by BiasedBayes · 2015-09-14T17:36:35.949Z · LW(p) · GW(p)
Yep :) You are definitely right careerwise. The problem for me was the 200 other people who would absorb a completely wrong idea of how the mind works if I didn't say anything. Primum non nocere.
But yeah, this was 4 years ago anyway... just wanted to mention it as an anecdote of bad general reasoning and biases :)
↑ comment by hyporational · 2015-09-14T16:13:19.539Z · LW(p) · GW(p)
Huh. My experience is somewhat similar to yours in the sense that I never was a big fan of memorization, and I'm glad that I could outsource some parts of the process to Anki. I also seem to outperform my peers in complex situations where ready-made decision algorithms are not available, and outperformed them in the few courses in med school that were not heavy on memorization. The complex situations obviously don't benefit from Bayes too much, but they benefit from understanding the relevant cognitive biases.
The medical degree is a financial jackpot here in Finland, since I was actually paid to study, and I landed in one of the top 3 best-paying professions in the country straight out of med school. Money attracts every type, and the selection process doesn't especially favor rationalists, who happen to be rare. It just baffles me how the need for rationality doesn't become self-evident for med students in the process of becoming a doctor, not to mention after that.
Replies from: Lumifer
↑ comment by Lumifer · 2015-09-14T16:32:09.024Z · LW(p) · GW(p)
how the need for rationality doesn't become self-evident for med students in the process of becoming a doctor,
Is it just a matter of terminology? I would guess that all med students will agree that they should be able to make a correct diagnosis (where correct = corresponding to the underlying reality) and then prescribe appropriate treatment (where appropriate = effective in achieving goals set for this patient).
Replies from: hyporational
↑ comment by hyporational · 2015-09-14T16:44:57.962Z · LW(p) · GW(p)
Whatever the terminology, they should make the connection between the process of decision making and the science of decision making, which they don't seem to do. Medicine is like this isolated bubble where every insight must come from the medical community itself.
I found Overcoming Bias and became a rationalist during med school. Finding the blog was purely accidental, although I had recognized the need to understand my own thinking, so I'm not sure what form this need would have taken under slightly different circumstances.
comment by JosephRogero · 2016-02-15T02:06:33.212Z · LW(p) · GW(p)
Hello from Houston, Texas! I've been following LessWrong for several years now, slowly working my way through the Sequences. I'm an aspiring fantasy/sci-fi writer, martial artist, and outdoorsman and I am overjoyed to be a part of the LW community. It's hard for me to say exactly when I first 'clicked' on rationality, but the Tsuyoku Naritai post certainly struck a chord for me.
A few months ago, I attended a LessWrong meetup in Austin. I enjoyed the meetup immensely, not least because it also happened to be a Petrov Day celebration. I'd like to attend LW meetups more frequently, but I live in Spring (north Houston) and the Austin meetup is a 3+ hour drive for me.
So, I've decided to start a Houston meetup group. According to some (admittedly old) statistics, the number of visitors to LessWrong from the Houston area is over 9000, and I think this is more than enough to create an enjoyable meetup group.
Our first meetup will be Saturday, February 20 at the Black Walnut Cafe in the Woodlands, TX. It will start at 1:00PM and go until 4:00PM (or later, if enough people show up and are interested in staying).
If you're interested, please reply below so I know who to expect!
Replies from: None
↑ comment by [deleted] · 2016-02-15T15:17:01.316Z · LW(p) · GW(p)
Hi, and welcome!
I'm hoping to start a Meetup group sometime this spring or summer. If you're amenable to it, I may bug you afterwards and see how your meetup went.
Replies from: JosephRogero↑ comment by JosephRogero · 2016-02-16T00:40:24.548Z · LW(p) · GW(p)
Gladly! Of course, if you're interested, you are also welcome to attend this one.
Replies from: None
comment by curtisrussell · 2015-11-04T21:39:23.777Z · LW(p) · GW(p)
Hello everyone! I came to Less Wrong as a lurker something like two years ago (perhaps more; my grasp on time is... fragile at best), binged through all of HPMOR that was up then, and waited with bated breath for the rest. After a long time spent lurking, reading the blogs and then the e-book, I decided I wanted to do more than aimlessly wander through readings and sequences.
So here I am! I posted to the lounge on reddit, and now I'm posting here. The essence of why I'm posting now is simple: I want to start down a road towards aiding in the work towards FAI. I graduated a year and a half ago, and I want to start learning in a directed and purposeful way. So I'm here to ask for advice on where and how to get started, outside of standard higher education.
Replies from: John_Maxwell_IV, ChristianKl
↑ comment by John_Maxwell (John_Maxwell_IV) · 2015-11-20T04:29:03.696Z · LW(p) · GW(p)
Welcome! MIRI created a research guide for people interested in helping with FAI.
↑ comment by ChristianKl · 2015-11-20T07:43:30.378Z · LW(p) · GW(p)
I graduated a year and a half ago,
In what discipline?
comment by EngineerofScience · 2015-07-24T22:07:27.884Z · LW(p) · GW(p)
I joined LessWrong because my friends suggested it to me. I really like all the articles and the fact that the comments on the articles are useful and don't have lots of bad language. This really surprised me.
comment by cameroncowan · 2015-07-24T03:38:30.594Z · LW(p) · GW(p)
I think I've caused enough kerfuffles around here that many people know me, but I'm Cameron. I've been on the site almost a year, I think. BA and MA in Political Science. I have a regular interest in philosophy, and I found out about the site from a disparaging article on Slate.com. I'm one of the weird spiritual people on here practicing western esoterica. In the past I've worked in media and PR. Currently, I'm a novelist in Tacoma, WA, USA and host of The Cameron Cowan Show, every Monday and Friday on YouTube (fresh shows in August!). For more information, clips, and All The News You Need To Know In 10 Minutes or Less (and why you should care about it), see me at CameronCowan.net! Thanks for reading!
comment by htimsxela · 2015-08-21T19:57:27.809Z · LW(p) · GW(p)
Hello LW,
My name is Alex, and while I first discovered LW 2-3 years ago, I have only visited the site sporadically since then. I have always found the discussion here intriguing and insightful, but never found myself motivated enough to dedicate time to joining the community (until now!).
I'm a 26 year old Canadian with an undergraduate degree majoring in chemistry and minoring in philosophy (with a healthy dose of physics on the side). I have always been very analytical and process-driven, and I have used that to fuel my creativity and develop a more thorough understanding of the world we find ourselves a part of. I have been self-employed since graduating, with the eventual goal of returning to school for a graduate degree.
In my undergrad, my strengths and interests were in synthetic/materials chemistry, as well as organic chemistry. I spent time working for a research group that specialized (largely) in group 14 nano-material chemistry, which I enjoyed immensely. The areas of philosophy I concentrated on were philosophy of science, computing & AI, theory of mind, and existentialism. In short, I avoided the 'historical overview' philosophy courses in favour of those which were more relevant to the rapidly changing technological world (not to say philosophers of times past are uninteresting or have no current relevance, but I think the LW audience will empathize).
I expect that my contributions here will, in some sense, help me parse out what I would like to dedicate my future institutional studies to. I value knowledge and truth, as well as academic integrity and humility. I am put off by individuals who are unquestioning or unable to reason logically and effectively, so I hope I will find a good home here. The toolbox of logical reasoning has allowed mankind to build itself up out of the primordial muck, and it seems that mastery of these tools is essential for continued advancement (and perhaps even survival). So, in addition to the above, I hope that my time here will allow me to continue honing my own tools of logic.
comment by phl43 · 2017-01-26T21:15:24.368Z · LW(p) · GW(p)
Hi everyone,
I'm a PhD candidate at Cornell, where I work on logic and philosophy of science. I learned about Less Wrong from Slate Star Codex, and someone I used to date also told me she really liked it. I recently started a blog where I plan to post my thoughts about random topics: http://necpluribusimpar.net. For instance, I wrote a post (http://necpluribusimpar.net/slavery-and-capitalism/) against the widely held but false belief that much of US wealth derives from slavery and that without slavery the industrial revolution wouldn't have happened, as well as another (http://necpluribusimpar.net/election-models-not-predict-trumps-victory/) in which I explain how election models work and why they didn't predict Trump's victory. I think members of Less Wrong will find my blog interesting, or at least that's what I hope. I welcome any criticisms, suggestions, etc.
Philippe
comment by DryHeap · 2016-11-21T15:35:10.071Z · LW(p) · GW(p)
Hello all,
South Carolinian uni student. Been lurking here for some time. Once my desire to give input came to a boil, I decided to go ahead and make an account. Mathematics, CompSci, and various forms of Biology are my intensive studies.
Less intense hobbies include music theory, politics, game theory, and cultural studies. I'm more of a 'genetics is the seed, culture is the flower' kind of guy.
The art of manipulation is fascinating to me; sometimes, when one knows one's audience, one must make non-rational appeals to persuade them. This is why I rarely consider any political movement to be ignorant when it makes certain non-rational claims; it is the sweet art of manipulation blossoming. Whether or not a certain political movement is using these tactics for a beneficial endgame is up for debate, but I nevertheless refrain from calling the heads of political philosophies 'stupid'. (Note: the followers may be useful idiots)
Very nice forum. I appreciate the culture here, and these dialogues rank with Plato.
Replies from: Viliam
↑ comment by Viliam · 2016-11-23T09:02:39.979Z · LW(p) · GW(p)
Welcome!
Note: the followers may be useful idiots
I partially agree, but I believe there is usually no clear dividing line between "those who know, and use irrational claims strategically" and "the followers who drink the kool-aid".
First, peer pressure is a thing. Even if you consciously invent a lie, when everyone in your social group keeps repeating it, it will create an enormous emotional pressure on you to rationalize "well, my intention was to invent a lie, but it seems like I accidentally stumbled upon an important piece of truth". Or more simply, you start believing that the strong version of X is the lie you invented, but some weaker variant of X is actually true.
Second, unless there is formal conspiracy coordination among the alpha lizardmen, it is possible that leader A will create and spread a lie X without explaining to leader B what happened, and leader B will create and spread a lie Y without explaining to leader A what happened, so in the end both of them are manipulators and sheep at the same time.
Replies from: DryHeap, entirelyuseless
↑ comment by DryHeap · 2016-11-29T19:50:24.710Z · LW(p) · GW(p)
Very good point. On a similar note: we often don't consider whether we have empirically tested what we, ourselves, believe to be true. Most often, we have not. I'd wager that we are all 'useful idiots' of a sort.
Replies from: niceguyanon
↑ comment by niceguyanon · 2016-11-30T15:49:24.791Z · LW(p) · GW(p)
we are all 'useful idiots' of a sort.
It's sheep all the way up!
Replies from: Lumifer
↑ comment by entirelyuseless · 2016-11-23T15:14:11.896Z · LW(p) · GW(p)
"Or more simply, you start believing that the strong version of X is the lie you invented, but some weaker variant of X is actually true."
That's true, but in most cases it is in fact the case that some weaker variant is true, and this explains why you were able to convince people of the lie.
That said, this process is not in general a good way to discover the truth.
Replies from: Viliam
↑ comment by Viliam · 2016-11-23T18:12:52.300Z · LW(p) · GW(p)
I would still expect a shift towards the group beliefs; e.g. if the actual value of some x is 5, and the enemy tribe believes it's 0, and you strategically convince your tribe that it is 10... you may find yourself slowly updating towards 6, 7, or 8... even if you keep remembering that 10 was a lie.
Anyway, as long as we both agree that this is not a good way to discover truth, the specific details are less important.
Replies from: entirelyuseless
↑ comment by entirelyuseless · 2016-11-24T02:40:29.149Z · LW(p) · GW(p)
I agree with that, and that is one reason why it is not a good method.
comment by aaq · 2016-01-29T20:08:59.269Z · LW(p) · GW(p)
Hello from Boston. I've been reading LW since some point this summer. I like it a lot.
I'm an engineering student and willing to learn whatever it takes for me to tackle world problems like poverty, hunger and transmissible diseases. But for now I'm focusing my efforts on my degree.
comment by Marko · 2015-10-22T20:38:31.763Z · LW(p) · GW(p)
Hello LessWrong!
I'm Marko, a mathematician from Germany. I like nerding around with epistemology, decision theory, statistics and the like. I've spent a few wonderful years with the Viennese rationality community and got to meet lots of other interesting and fun LessWrongians at the European Community Weekend this year. Now I'm in Zürich and want to build a similar group there.
Thanks for giving me so much food for thought!
Replies from: Gram_Stone
↑ comment by Gram_Stone · 2015-10-22T23:03:45.147Z · LW(p) · GW(p)
Welcome, Marko!
comment by ThePrussian · 2015-07-29T12:13:48.864Z · LW(p) · GW(p)
Hi everyone.
I've already posted a couple of pieces - probably should have visited this page first, especially before posting my last piece. Well, such is life.
I headed over to LessWrong because I was/am a bit burned out by the high-octane conversations that go on online. I've disagreed with some things I've read here, but never wanted to beat my head - or someone else's - against a wall. So, I'm here to learn. I like the sequences and have picked up some good points already - especially about replacing the symbol with the substance.
Question - what's the etiquette about linking stuff from one's own blog? I'm not trying to do self-promotion here, but there are one or two ideas I've developed elsewhere & would find it useful to refer to them.
Replies from: Viliam, Username
comment by TheOnlyAu · 2016-06-02T15:15:55.914Z · LW(p) · GW(p)
Hi LW Users,
I apologise in advance for not having more to say initially, but I created an account on this website for one reason: I have one proposition/idea to put forth in the discussion section.
I would prefer to wait until I have twenty karma so that I can post the proposition/idea there. I hope that your curiosity has been sparked enough; otherwise, let me know.
Thanks so much for reading :)
Replies from: gjm
↑ comment by gjm · 2016-06-02T17:22:33.940Z · LW(p) · GW(p)
Welcome. You will only accumulate karma by having people upvote your comments, so if your goal is as you describe then I'm afraid you'll have to participate in other ways too before you get to show us your idea. (Of course you could put it in a comment in the Open Thread or something if you can't wait.)
Replies from: TheOnlyAu
↑ comment by TheOnlyAu · 2016-06-05T04:17:21.731Z · LW(p) · GW(p)
Where should I be commenting then? Right here? And where is the open thread? Thank you so much for your help and I look forward to it.
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2016-06-05T06:11:12.499Z · LW(p) · GW(p)
The current open thread is here:
http://lesswrong.com/r/discussion/lw/nns/open_thread_may_30_june_5_2016/
A new one will be started soon.
comment by selador · 2015-08-10T16:52:52.295Z · LW(p) · GW(p)
Hi LW,
I got interested in rationality through the books Irrationality and, later, Thinking, Fast and Slow (with some others I can't remember in between). Somehow I found HPMOR, which I loved, and through that, found this. Other influences have included growing up with quite strongly religious parents (first win for the power of the question "but why do you believe that?"; first loss for thinking that because something was obvious to me, I could snap my fingers and make it obvious to others).
What I'm doing: I'm in my twenties, working in the energy sector because I started following global warming and resource shortages when I was 16, thought it was an important area, and decided to go work in it. The things I have learnt from an engineering degree, a bit more life, and LW mean that I don't necessarily still believe it is THE area of importance, but as an area, I'm happy enough in it for now. My job basically involves lots of programming, modelling and data handling anyway, so that is fun! I get to encounter my biases in the work environment occasionally/regularly, as I have to try and work out how much confidence to have in the data available and in my various theories. For my job at least, I do find attempting to debias useful at a day-to-day level, if not as useful as being a significantly better programmer would be.
So far on Less Wrong I have read about half the sequences, of which the most resonant for me was the one on cached thoughts. Whilst simple, it drew together a bunch of other points I'd learnt, and felt, really clearly, like how I think. I'm reading the sequences from a link on here that I've put on my kindle. This is pretty good, but I don't know how much shorter the A-Z version is. I do skip occasional bits. I feel that a little graph of the sequences, similar to the very simple one on http://hyperphysics.phy-astr.gsu.edu/hbase/hframe.html, would help newbies navigate through.
Anyway, I'm continuing to enjoy reading posts on here, but hope to start contributing by: a) continuing to try to help others learn (at least the basics of) this stuff, b) maybe setting up a meetup in Bristol, UK, and c) posting some thoughts to the hive mind if and when I have some worth sharing.
comment by ShaneC · 2015-08-03T03:50:34.852Z · LW(p) · GW(p)
Hello from Canada! I study computer science and philosophy at the University of Waterloo. Above anything, I love mathematics. The certainty that comes from a mathematical proof is amazing, and it fuels my current position about epistemology (see below). My favourite courses in mathematics so far have been the introductory course about proofs and a course about formal logic (the axioms of first-order logic, deduction rules, etc.). Philosophy has always been very interesting to me: I've taken courses about epistemology, ethics, and the philosophy of language; I am also currently taking a course about political philosophy, and am reading Nietzsche on the side. I also love to debate. Although I don't practice Christianity anymore, I loved debating about religion with my friends.
I have come to Less Wrong to talk about my epistemological views. It is a form of skepticism. I view (i.e. define) truth exclusively as the outcome of some rational system. I reject all claims unless they are given in terms of a rational system by which they can be deduced. Even when such a system is given, I would call the claim true only in the context of the rational system at hand and not (necessarily) under any other system.
For example, "2 + 2 = 4" is true when we are using the conventional meanings for 2, 4, +, and =, along with a deductive system that takes expressions such as "2 + 2 = 4" and spits out true or false. On the contrary, "2 + 2 = 4" is false when we use the usual definitions of 2 and 4 and = but + being defined for x and y and the (regular) sum of x and y minus one. This is an illustration of the truth of a claim only making sense once it has precise meaning, axioms that are assumed to be true, and some system of deduction.
When a toddler sees the blue sky and asks his mother why the sky is blue and she responds with something about the scattering of light, he has a choice: either he accepts that scattering implies blueness, or he can ask again: "Why?" She might reply with something about molecules, etc... Eventually, the toddler seems to have two choices: either he must accept that the axioms of the scientific method are true just because, or reject the whole thing for not being justified all the way through.
My view on epistemology is distinct from the above options. It wouldn't reject the whole system (useless; no knowledge) or truly believe in the axioms of the scientific method (naive; they could be wrong). It would appreciate the intrinsic nature of the ideas; that the scattering of light can imply that the sky is blue. It would view rational systems as tools that can be used and then put away, rather than things that have to be carried around your whole life.
What do you think about this? Can you suggest any related readings?
Replies from: Sarunas
↑ comment by Sarunas · 2015-08-07T07:25:40.018Z · LW(p) · GW(p)
This sounds similar to the coherence theory of truth.
comment by cwl · 2016-03-13T09:48:07.072Z · LW(p) · GW(p)
Hello LW,
My name is Colton. I'm a 22 year old electrical engineering student from Missouri who found Less Wrong about a year ago through Slate Star Codex and binged most of the sequences.
I have been interested in the study of bias and how to avoid it since I read the book Predictably Irrational a few years back. I also consider myself quite academic for an engineer, with a good deal of physics, math, and computer science theory under my belt.
comment by crmflynn · 2015-11-02T02:30:20.628Z · LW(p) · GW(p)
I have been lurking around LW for a little over a year. I found it indirectly through the Simulation Argument > Bostrom > AI > MIRI > LW. I am a graduate of Yale Law School, and have an undergraduate degree in Economics and International Studies focusing on NGO work. I also read a lot, but in something of a wandering path that I realize can and should be improved upon with the help, resources, and advice of LW.
I have spent the last few years living and working in developing countries around the world in various public interest roles, trying to find opportunities to do high-impact work. This was based around a vague and undertheorized consequentialism that has been pretty substantially rethought after finding FHI/MIRI/EA/LW etc. Without knowing about the larger effective altruism movement (aside from vague familiarity with Singer, QALY cost effectiveness comparisons between NGOs, etc.) I had been trying to do something like effective altruism on my own. I had some success with this, but a lot of it was just the luck of being in the right place at the right time. I think that this stuff is important enough that I should be approaching it more systematically and strategically than I had been. In particular, I am spending a lot of time moving my altruism away from just the concrete present and into thinking about “astronomical waste” and the potential importance of securing the future for humanity. This is sort of difficult, as I have a lot of experiential “availability” from working on the ground in poor countries which pulls on my biases, especially when faced with a lot of abstraction as the only counterweight. However, as stated, I feel this is too important to do incorrectly, even if it means taming intuitions and the easily available answer.
I have also been spending a lot of time recently thinking about the second disjunct of the simulation argument. Unless I am making a fundamental mistake, it seems as though the second disjunct, by bringing in human decision making (or our coherent extrapolated volition, etc.) into the process, sort of indirectly entangles the probable metaphysical reality of our world with our own decision making. This is true as a sort of unfolding of evidence if you are a two-boxer, but it is potentially sort-of-causally true if you are a one-boxer. Meaning if we clear the existential hurdle, this is seemingly the next thing between us and the likely truth of being in a simulation. I actually have a very short write-up on this which I will post in the discussion area when I have sufficient karma (2 points, so probably soon…) I also have much longer notes on a lot of related stuff which I might turn into posts in the future if, after my first short post, this is interesting to anyone.
I am a bit shy online, so I might not post much, but I am trying to get bolder as part of a self-improvement scheme, so we will see how it goes. Either way, I will be reading.
Thank you LW for existing, and providing such rigorous and engaging content, for free, as a community.
comment by masters02 · 2015-09-24T08:10:17.020Z · LW(p) · GW(p)
Hello all!
I'm a graduated International Relations student from London. I took a year off after graduation to learn how to manage my finances and invest in the stock market. Because of that, I came across my life hero, Charlie Munger, the vice-chairman of Berkshire Hathaway. He is a machine of rationality and one of the wisest men (if not the wisest) alive. He wrote an essay called "The Psychology of Human Misjudgment" (http://law.indiana.edu/instruction/profession/doc/16_1.pdf), which I implore all rationality-seekers to devour. This essay changed my life, and I have never looked back.
Charlie said that we all have a moral obligation to be rational. So, here I am :)
Replies from: Vaniver
↑ comment by Vaniver · 2015-09-24T16:58:45.396Z · LW(p) · GW(p)
Welcome!
One of my primary pieces of exposure to Munger is Peter Bevelin's book, Seeking Wisdom from Darwin to Munger, which I think you might enjoy--as I recall, it draws from the same Heuristics and Biases literature as many other things (like Munger's essay) but has enough examples that don't show up in the more standard works (Thinking and Deciding, Thinking Fast and Slow, etc.) to be worthwhile on its own.
Replies from: masters02
comment by AleksTK · 2015-09-09T07:43:31.194Z · LW(p) · GW(p)
Hello LW,
I'm an aspiring rationalist from a community called PsychonautWiki. Our intent is to study and catalog all manner of altered states of consciousness in a legitimate and scientific manner. I am very interested in AGI and hope to understand the architecture and design choices of current major AGI projects.
I'll probably start a discussion for you guys tomorrow.
Aleks
Replies from: Viliam
↑ comment by Viliam · 2015-09-09T09:04:31.466Z · LW(p) · GW(p)
Hi Aleks!
Have you read "Mysticism and Pattern-Matching" at Slate Star Codex? What is your opinion?
Replies from: AleksTK
↑ comment by AleksTK · 2015-10-18T03:26:35.710Z · LW(p) · GW(p)
Just read it. Fascinating.
https://psychonautwiki.org/wiki/Geometry
You might want to look into level 8B and 8A geometry.
comment by PeterCoin · 2015-08-16T21:22:33.456Z · LW(p) · GW(p)
Hey y'all, I come here both as a friend and with an agenda. I'm scary.
See I have a crazy pet theory... (and yes it's a TOE, fancy that!)
...and I'd love to give it a small home on the Internet. Here?
I'd like to share it with you because this community seems to be the proper blend of open-minded and skeptical, which is what the damn thing needs.
Anyways, I've lurked for quite a while, and you guys have been great at opening my mind to a lot of things. I figure this might be good enough and crazy enough to give something back.
As a personal note, I'm currently an engineer who is wondering if he should go back to school to become an academic. When I was a college student at a big faceless university, I was too awkward, clueless, and erratic to navigate the system in a way that got me attention, so I grabbed my degree and ran.
BTW, I'm not one of those foaming-at-the-mouth mofos who will debate endlessly and fruitlessly in an attacking manner toward anyone who dares criticize his crackpot theory. I'm more like "man, why does this idea have to be so damn compelling, better get it out on the web". I've also posted it extremely little thus far; I do not intend to spray it all over the internet.
Replies from: Jiro
↑ comment by Jiro · 2015-08-17T15:10:06.221Z · LW(p) · GW(p)
BTW, I'm not one of those foaming-at-the-mouth mofos who will debate endlessly and fruitlessly in an attacking manner toward anyone who dares criticize his crackpot theory.
The response to your theory, though, will depend on whether it's one of those. And the response to "should I tell you my new theory" will depend on the fact that such theories have some probability of being one of those. Ultimately, you have to tell us the theory to know how we'll react.
Replies from: PeterCoin
↑ comment by PeterCoin · 2015-08-18T00:57:11.047Z · LW(p) · GW(p)
Well for better or for worse! Here it is!
http://lesswrong.com/r/discussion/lw/mms/fragile_universe_hypothesis_and_the_continual/
comment by bozj · 2016-11-18T06:34:00.869Z · LW(p) · GW(p)
Hello all,
I am just another lurker here. Most of the time, I can be found in the LW Slack group. I think I should have introduced myself earlier. I have zero karma, so I am unable to post anything at all. It would be better for me to explore how the LW website works first.
-Best
comment by Beau · 2016-04-19T17:29:00.790Z · LW(p) · GW(p)
Hi. I'm Bernardo, a business student from Brazil. I came across Less Wrong through an answer to a thread on Quora (https://www.quora.com/How-would-you-estimate-the-number-of-restaurants-in-London). It got me interested in Fermi estimates, and I'm surfing Less Wrong to read about them.
I'd love to translate those articles on Fermi Estimates to Portuguese to add to the translated pages list. How do I do that?
Replies from: Christiano
↑ comment by Christiano · 2016-11-14T02:06:51.957Z · LW(p) · GW(p)
Hello Bernardo, I'm Christiano, from Brazil too! Nice to see a Brazilian here! Did you manage to translate the article? I can help you with English-Portuguese revision or even with the translation itself.
comment by RibbonGraph · 2016-04-12T12:01:14.717Z · LW(p) · GW(p)
Hi friends,
I'm Chris :D I've been lurking on and off for a few months now (after hearing about LW from some of my friends at uni, reading some SlateStarCodex, and devouring HPMOR in less than a week) and have decided it's about time to take the plunge into the scary world of commenting. (It's a bit scary being a somewhat smart person among people who are much, much smarter.)
My academic background: growing up in my family meant I picked up a lot of random stuff, but at uni I have been studying pure mathematics and a bit (pun intended) of computer science.
What motivates me: I'm very passionate about Raising the Sanity Waterline. If I learn - for the first time - something which I think is important, I get this sudden panic of "Why have I only learned this now?! Everyone should know this!". And I get very excited when I'm helping other people learn stuff I've learned.
Longer version of background: My parents have worked as Protestant Christian theological educators (i.e. training pastors and church leaders) in the Middle East since before I was born. They have always been very keen on learning as a lifelong project (a lot of my dad's work is applying evidence-based teaching research to theological education). So - somewhat like Harry Potter in HPMOR - our house has always been full of books. To add to that, I was privileged to get to meet a lot of people from very different worlds: from my Muslim close friends at school to some of my parents' supporters in the US who have never gone far from their home state. This meant I encountered drastically different worldviews and cultural approaches to thinking, and often found it frustrating how poorly people understood each other. Thanks to my parents' influence, I also unconsciously gravitated towards people who were interested in how the world works.
Since leaving for Australia at 18 for study, I have spent much of my university life learning about things other than my specialisation, both from smart friends and from the internet. So this has meant I have changed my mind about quite a few things already.
I look forward to changing my mind about many more things, and learning completely new things!
Replies from: gjm
comment by beatricesargin · 2016-03-22T16:22:56.383Z · LW(p) · GW(p)
I'm a creative writer and a virtual assistant, and I have been a freelancer for 2 years now. Coming from a creative educational environment, I'd like to express an interest in becoming more rational. I found Less Wrong through Intentional Insights.
Replies from: Sarginlove, Gleb_Tsipursky
↑ comment by Sarginlove · 2016-03-22T16:47:47.797Z · LW(p) · GW(p)
Yeah, thanks. I also believe I could become more rational by becoming a rational thinker.
Replies from: beatricesargin
↑ comment by beatricesargin · 2016-03-22T16:52:51.215Z · LW(p) · GW(p)
Thanks, I also believe that becoming rational can help me achieve all of my objectives and long-term goals.
↑ comment by Gleb_Tsipursky · 2016-03-24T00:59:54.832Z · LW(p) · GW(p)
Glad you're joining LW, Beatrice! Nice to see another volunteer and part-time contractor for Intentional Insights join LW :-)
For the rest of LW folks, I want to clarify that Beatrice volunteers at Intentional Insights for about 30 hours, and gets paid as a virtual assistant to help manage our social media for about 10 hours. She decided to volunteer so much of her time because of her desire to improve her thinking and grow more rational. She's been improving through InIn content, and so I am encouraging her to engage with LW.
comment by InhalingExhaler · 2016-01-31T18:36:58.668Z · LW(p) · GW(p)
Hello.
I found LessWrong after reading HPMoR. I think I woke up as a rationalist when I realised that in my everyday reasoning I always judged from the bottom line, not considering any third alternatives, and started to think about what to do about that. I am currently trying to stop my mind from always aimlessly and uselessly wandering from one topic to another. I registered on LessWrong after I started to question why I believe rationality works, and ran into a problem, and thought I could get some help here. The problem is expressed in the following text (I am ready to move it from the welcome board to any other suitable one if needed):
John was reading a book called “Rationality: From AI to Zombies” and thought: “Well, I am advised to doubt my beliefs, as some of them may turn out to be wrong”. So, it occurred to John to try to doubt the following statement: “Extraordinary claim requires extraordinary evidence”. But that was impossible to doubt, as this statement was a straightforward implication of the theorem X of probability theory, which John, as a mathematician, knew to be correct. After a while a wild thought ran through his mind: “What if every time a person looks at the proof of the theorem X, the Dark Lords of the Matrix alter the perception of this person to make the proof look correct, but actually there is a mistake in it, and the theorem is actually incorrect?” But John didn’t even consider that idea seriously, because such an extraordinary claim would definitely require extraordinary evidence.
Fifteen minutes later, John spontaneously considered the following hypothetical situation: He visualized a religious person, Jane, who is reading a book called “Rationality: From AI to Zombies”. After reading for some time, Jane thinks that she should try to doubt her belief in Zeus. But it is definitely an impossible action, as existence of Zeus is confirmed in the Sacred Book of Lightning, which, as Jane knows, contains only Ultimate and Absolute Truth. After a while a wild thought runs through her mind: “What if the Sacred Book of Lightning actually consists of lies?” But Jane doesn’t even consider the idea seriously, because the Book is surely written by Zeus himself, who doesn’t ever lie.
From this hypothetical situation John concluded that if he couldn’t doubt B because he believed A, and couldn’t doubt A because he believed B, he’d better try to doubt A and B simultaneously, as he would be cheating otherwise. So, he attempted to simultaneously doubt the facts that “Extraordinary claim requires extraordinary evidence” and that “Theorem X is proved correctly”.
As he attempted to do it, and succeeded, he spent some more time considering Jane’s position before settling his doubt. Jane justifies her set of beliefs by Faith. Faith is certainly an implication of her beliefs (the ones about reliability of the Sacred Book), and Faith certainly belongs to the meta-level of her thinking, affecting her ideas about existence of Zeus located at the object level.
So, John generalized that if he had some meta-level process controlling his thoughts and this process was implied by the very thought he was currently doubting, it would be wise to suspend the process for the time of doubting, because not following this rule could make him fail to lose some beliefs which, from the outside perspective, looked as ridiculous as Jane’s religion. John searched through the meta-level controlling his thoughts. He was horrified to realize that Bayesian reasoning itself fitted the criteria: it was definitely organizing his thought process, and its correctness was implied by the theorem X he was currently doubting. So he was sitting, with his belief unsettled and with no ideas of how to settle it correctly. After all, even if he made up any idea, how could he know that it wasn’t the worst idea ever, intentionally given to him by the Dark Lords of the Matrix? He didn’t allow himself to disregard this nonsense with “Extraordinary claim requires extraordinary evidence” – otherwise he would fail at doubting this very statement and there would be no point in this whole crisis of faith which he deliberately inflicted on himself…
Jane, in whose imagination the whole story took place, yawned and closed a book called “Rationality: From AI to Zombies”, lying in front of her. If learning rationality was going to make her doubt herself out of rationality, why would she even bother to try that? She was comfortable with her belief in Zeus, and the only theory which could point out her mistakes apparently ended up in self-annihilation. Or, shortly, who would believe anyone saying “We have evidence that considering evidence leads you to truth, therefore it is true that considering evidence leads you to truth”?
Replies from: Richard_Kennaway, CCC
↑ comment by Richard_Kennaway · 2016-02-01T11:58:42.222Z · LW(p) · GW(p)
Welcome to Less Wrong!
My short answer to the conundrum is that if the first thing your tool does is destroy itself, the tool is defective. That doesn't make "rationality" defective any more than crashing your first attempt at building a car implies that "The Car" is defective.
Designing foundations for human intelligence is rather like designing foundations for artificial (general) intelligence in this respect. (I don't know if you've looked at The Sequences yet, but it has a lot of material on the common fallacies the latter enterprise has often fallen into, fallacies that apply to everyday thinking as well.) That people, on the whole, do not go crazy — at least, not as crazy as the tool that blows itself up as soon as you turn it on — is a proof by example that not going crazy is possible. If your hypothetical system of thought immediately goes crazy, the design is wrong. The idea is to do better at thinking than the general run of what we can see around us. Again, we have a proof by example that this is possible: some people do think better than the general run.
Replies from: InhalingExhaler
↑ comment by InhalingExhaler · 2016-02-01T17:04:52.160Z · LW(p) · GW(p)
Well, it sounds right. But which mistake in rationality was made in the described situation, and how can it be fixed? My first idea was that there are things we shouldn't doubt... But that is kind of dogmatic and feels wrong. So should it maybe be like "Before doubting X, think of what you will become if you succeed, and take it into consideration before actually trying to doubt X"? But this still implies "There are cases when you shouldn't doubt", which is still suspicious and doesn't sound "rational". I mean, it doesn't sound like making the map reflect the territory.
Replies from: Richard_Kennaway
↑ comment by Richard_Kennaway · 2016-02-01T22:37:12.254Z · LW(p) · GW(p)
It's like repairing the foundations of a building. You can't uproot all of them, but you can uproot any of them, as long as you take care that the building doesn't fall down during renovations.
↑ comment by CCC · 2016-02-01T11:29:47.481Z · LW(p) · GW(p)
After a while a wild thought ran through his mind: “What if every time a person looks at the proof of the theorem X, the Dark Lords of the Matrix alter the perception of this person to make the proof look correct, but actually there is a mistake in it, and the theorem is actually incorrect?”
As soon as the Dark Matrix Lords can (and do) directly edit your perceptions, you've lost. (Unless they're complete idiots about it) They'll simply ensure that you cannot perceive any inconsistencies in the world, and then there's no way to tell whether or not your perceptions are, in fact, being edited.
The best thing you could do is find a different proof and hope that the Dark Lord's perception-altering abilities only ever affected a single proof.
John searched through the meta-level controlling his thoughts. He was horrified to realize that Bayesian reasoning itself fitted the criteria: it was definitely organizing his thought process, and its correctness was implied by the theorem X he was currently doubting. So he was sitting, with his belief unsettled and with no ideas of how to settle it correctly. After all, even if he made up any idea, how could he know that it wasn’t the worst idea ever intentionally given to him by the Dark Lords of the Matrix?
At this point, John has to ask himself - why? Why does it matter what is true and what is not? Is there a simple and straightforward test for truth?
As it turns out, there is. A true theory, in the absence of an antagonist who deliberately messes with things, will allow you to make accurate predictions about the world. I assume that John cares about making accurate predictions, because making accurate predictions is a prerequisite to being able to put any sort of plan in motion.
Therefore, what I think John should do is come up with a number of alternative ideas on how to predict probabilities - as many as he wants - and test them against Bayesian reasoning. Whichever allows him to make the most accurate predictions will be the most correct method. (John should also take care not to bias his trials in favour of situations - like tossing a coin 100 times - in which Bayesian reasoning might be particularly good as opposed to other methods)
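A toy version of such a test, as a sketch in Python (the coin's bias, the rival method, and the squared-error scoring rule are all assumptions chosen for illustration):

```python
import random

random.seed(0)
true_bias = 0.7  # the coin's actual heads probability, unknown to both methods
flips = [random.random() < true_bias for _ in range(1000)]

heads = tails = 0
bayes_error = rival_error = 0.0
for flip in flips:
    # Bayesian method: posterior mean under a uniform Beta(1, 1) prior
    bayes_guess = (heads + 1) / (heads + tails + 2)
    rival_guess = 0.5  # a rival method that never updates on evidence
    bayes_error += (bayes_guess - flip) ** 2
    rival_error += (rival_guess - flip) ** 2
    heads += flip
    tails += not flip

# The method with the lower cumulative error made the more accurate predictions.
print(f"Bayes: {bayes_error:.1f}  rival: {rival_error:.1f}")
```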
comment by MartinWade · 2016-01-06T17:43:44.102Z · LW(p) · GW(p)
Salutations! I've been reading Less Wrong for three or four years now without registering - ever since stumbling across a supremely accessible explanation of Bayes Theorem - and suddenly felt I might have something to add. I feel significantly more cynical than most of the posters here, but endeavor to keep my pessimism grounded.
My parents raised me rationalist (not merely atheist), encouraging an environment where questions were always more important than answers and everyone was willing to admit that "I don't know." I spent the requisite few years in my adolescence imagining I knew everything, but that delusion passed. Then I dropped out of three colleges - on scholarships, making those hard lessons less expensive than they might have been - and today I run the inter-branch delivery department of a medium-sized county library system.
I've got a stubborn fascination with philosophical materialism and behavioral neuroscience, with a recent focus on the linguistic nature of consciousness. I think that language in general - and narrative in particular - is a compression algorithm for transmitting complex ideas like "We ought to go to the store." My linguistic memory map of what you already know means I don't have to tell you which store, or why, or how.
Such maps are made of stories, and that means they require a protagonist. I've come to believe that consciousness is the systematized experience of being that protagonist, molded by evolution to make communication faster and easier. That makes consciousness the character the brain plays when it needs to work with other brains, and the set of mental tools with which narrative memories are compressed for later storytelling.
At any rate, I'm here to continue to have all of my perspectives challenged, and with this account I suppose I can also start challenging perspectives.
Replies from: ChristianKl
↑ comment by ChristianKl · 2016-01-06T19:09:37.160Z · LW(p) · GW(p)
How about writing an article where you explain why you hold that belief? What would reality look like if the belief were wrong? What sorts of predictions can be made with it?
comment by KevinGrant_duplicate0.2409764628391713 · 2015-11-28T14:07:21.710Z · LW(p) · GW(p)
Hi,
I'm a middle-aged computer scientist/philosopher, who specialized in artificial intelligence and machine learning back in the stone age when I was getting my degrees. Since then I've done a bit of work in probabilistic simulations and biologically inspired methods of problem solving, mostly for industry. I've recently finished writing a book about politics, although God knows if I'll ever sell a copy. Now I'm into a bit of everything. Politics. Economics.
I came here looking for input into a conlang project that I'm working on. Basically it involves the old Sapir-Whorf/Eprime/Loglan dream of creating a language that's better suited for rational cognition than English, and I'm looking for linguistic mechanisms that might aid in this and that need to be built in from the bottom up (since surface mechanisms can be added later). I already know of the three conlangs mentioned above, although I don't speak them, so I'm looking for ideas that aren't contained therein, or that if they are might have been missed by a person without a deep knowledge of the languages. I did a search of the archives here and saw some discussion around this general topic, but nothing of immediate use, although I could easily have missed something.
All ideas welcome.
Replies from: ChristianKl
↑ comment by ChristianKl · 2015-11-28T19:00:25.360Z · LW(p) · GW(p)
I do have a conlang draft. A few thoughts based on my conlang thinking:
Loglan/Lojban is a language where math was an afterthought. That's likely a mistake. If you look at a concept like grandfather, using the word "grand" doesn't make much sense. I think it's better to say something like father-one for grandfather, father-two for great-grandfather. The same way, the boss of your boss should be boss-one. Having a grammar in which relationships can be expressed well is very valuable.
I think that Loglan's attempt to build on the existing roots of widely spoken languages is flawed because it allows less freedom in organizing the language effectively. It would be good to have a lot of concepts with 3 letters instead of 5.
In my language draft I started to take concepts from graph theory for naming relationships (the structure of the words matters, but the actual words are provisional):
bei: node in the same graph
cai: parent node
doi: child node
beiq: relative
caiq: parent
doiq: son/daughter
beiß: person employed in the same company
caiß: boss (person with authority to order)
doiß: direct report (person who can be ordered)
Once you understand that structure and learn the new word "fuiq" for sibling, you can guess that a direct coworker is called fuiß. And nodes in a graph that share the same parent node are "fui".
I like grouping concepts this way, where I can go from parent to son/daughter simply by going one forward in the alphabet and replacing "c" with "d" and "a" with "o" ("i" gets skipped because the word ends in "i").
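A minimal sketch of that derivation rule in Python, taking the letter replacement literally (the words are the provisional ones from the list above):

```python
def child_word(parent_word: str) -> str:
    """Derive the 'child' word from the 'parent' word by the rule above:
    replace 'c' with 'd' and 'a' with 'o'."""
    return parent_word.replace("c", "d").replace("a", "o")

print(child_word("cai"))   # doi   (child node, from parent node)
print(child_word("caiq"))  # doiq  (son/daughter, from parent)
print(child_word("caiß"))  # doiß  (direct report, from boss)
```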
I did use a similar principle for naming numbers:
ba 0
ce 1
di 2
fo 3
gu 4
ha 5
je 6
For the numbers I also gave adding a "q" a meaning: it turns the number into base 16. Base-16 numbers are quite useful later if you want to make an expression like north-east. At the moment pilots use phrases based on the clock to navigate: "There's a bird at 2 o'clock." It's much better to bake numbers more centrally into the language.
In case you haven't seen it, http://selpahi.de/ToaqAlphaPrimer.html is a nice draft for a new language. I like how the language makes every sentence end in an evidential. In it, I think he makes the mistake of not using capital letters but non-ASCII characters instead.
I think that it's great that his language doesn't follow the Lojban place system but uses prepositions like a normal language.
Replies from: KevinGrant_duplicate0.2409764628391713, KevinGrant_duplicate0.2409764628391713
↑ comment by KevinGrant_duplicate0.2409764628391713 · 2015-11-29T09:27:46.328Z · LW(p) · GW(p)
Also, the topic is now up and running in the regular "discussion" area.
↑ comment by KevinGrant_duplicate0.2409764628391713 · 2015-11-29T02:50:41.380Z · LW(p) · GW(p)
It sounds like you were trying to construct an a-priori conlang, in which the meaning of any word could be determined from its spelling, because the spelling is sufficient to give the word exact coordinates on a concept graph of some sort. I thought about this approach some time ago, but was never able to find a non-arbitrary concept graph to use, or a system of word formation that didn't create overly long or unpronounceable words.
I was originally thinking about including non-ascii characters, but eventually compromised on retaining English capitals instead. The biggest problem that any conlang faces is getting people to use it, and anything that makes that more difficult, such as requiring changes to the standard American keyboard, needs to be avoided unless it's absolutely necessary.
comment by Eigengrau · 2015-10-06T23:06:14.856Z · LW(p) · GW(p)
Hello LW! Long time lurker here. Got here from HPMOR a few years ago now. This is one of my favourite places on the internet due to its high sanity waterline and I thought I'd sign up so I could participate here and there (plus I finally came up with a username I like!). I've got a B.Sc. in math with a concentration in psychology (apparently that is a thing you can get, I didn't know either) and my other passions are music, film, humor, and being right all the time ;)
Thanks to LW and the rest of the rationality blogosphere, I've added effective altruism to my life goals. I've been wondering lately how we might shift the cultural norm from "boy I sure hope I have a big house and drive a fancy sports car by the time I'm 30" to "boy I sure hope I'm donating lots of money to worthy charities by the time I'm 30".
comment by varialus · 2015-10-04T00:38:30.300Z · LW(p) · GW(p)
Hi! I'm interested in curing death, or at least contributing to the cure. I'm an ok computer programmer and I'm preparing to go to school this spring to work on a bachelor's degree in Biomedical Engineering with a minor in Cognitive Science. I'd like to make friends with someone who is also at the early planning stages of pursuing a similar degree, and yeah, I do realize just how specific those requirements are, but it doesn't hurt to keep an eye out just in case. I'm in a fairly good place in my life to pursue my education, but I don't yet know how it's going to go. If you're in a good place to go to school, but are scared or need some help deciding that it's what you want to do, instead of stressing or worrying about it, how about we work on it together? I'm currently reviewing a number of educational topics, primarily through Khan Academy.
I discovered the joys of cognitive science while reading Harry Potter and the Methods of Rationality. I've always fancied myself a fairly rational person, but I've not yet studied it formally.
comment by Laszlo · 2015-08-23T13:00:36.901Z · LW(p) · GW(p)
Hello!
I first heard about LW through a SomethingAwful thread. Not the most auspicious of introductions, but when I read some of your material on my own instead of receiving it through the sneerfilter, I found myself interested. Futurology and cognitive biases are two topics that are near and dear to my heart, and I hope to pick up some new ideas and perhaps even new ways of thinking here. I've also had some thoughts about Friendly AI which I haven't seen discussed yet, and I'm excited to see what holes you guys can poke in my theories!
comment by rmoehn · 2016-07-14T02:18:50.452Z · LW(p) · GW(p)
Hi! I signed up to LessWrong because I have the following question.
I care about the current and future state of humanity, so I think it's good to work on existential or global catastrophic risk. Since I studied computer science at university until last year, I decided to work on AI safety. Currently I'm a research student at Kagoshima University doing exactly that. Before April this year I had only a little experience with AI or ML, so I'm slowly digging through books and articles in order to be able to do research.
I'm living off my savings. My research student time will end in March 2017 and my savings will run out some time after that. Nevertheless, I want to continue AI safety research, or at least work on X or GC risk.
I see three ways of doing this:
- Continue full-time research and get paid/funded by someone.
- Continue research part-time and work the other part of the time in order to get money. This work would most likely be programming (since I like it and am good at it). I would prefer work that helps humanity effectively.
- Work full-time on something that helps humanity effectively.
Oh, and I need to be location-independent or based in Kagoshima.
I know http://futureoflife.org/job-postings/, but all of the job postings fail me in two ways: they're not location-independent, and they require more or different experience than I have.
Can anyone here help me? If yes, I would be happy to provide more information about myself.
(Note that I think I'm not in a precarious situation, because I would be able to get a remote software development job fairly easily. Just not in AI safety or X or GC risk.)
comment by Menilik · 2016-04-03T22:39:11.070Z · LW(p) · GW(p)
Hello from NZ. So basically, I'm here to promote my... Jokes! I came across this website through a Wait But Why article I was doing research on (cryonics). The comments here are next-level awesome: people share ideas, and I feel like the moderators aren't ruled by one discourse or another. So yeah, I decided to jump on in and check it out.
I enjoy Science, Learning, Entrepreneur stuff, and better ways of looking at the world.
Replies from: MakoYass
↑ comment by mako yass (MakoYass) · 2016-05-01T03:41:17.923Z · LW(p) · GW(p)
Menilik Dyer! I thought it might be you! We met at a Mum's Garage thing (I was the one wearing no shoes and a lot of grey). So cool to see you here. Welcome to the mouth of this bottomless rabbithole that is modern analytical futurism. I'd hazard you already have some sense of how deep it goes.
If anyone's reading this: Menilik is a badass. He once successfully built a business by picking a random market sector he knew nothing about and asking people on the ground what they might need.
comment by [deleted] · 2016-02-10T16:05:38.467Z · LW(p) · GW(p)
Hi. I live in Umeå, Sweden. I have been aware of Less Wrong for some time now, first through HPMoR, and more lately through reading posts that my friend has recommended to me. I just recently decided I want to join the discussion as well, so I created this user to be able to comment.
I find it very useful to distinguish between what I call "debate" and "discussion":
"debate" = everyone involved is trying to win, where "win" usually means convincing the audience.
"discussion" = everyone involved is trying to learn the truth.
Less Wrong is obviously a place for discussion, but even in a discussion I find the above vocabulary useful. However, I don't know whether this distinction reflects common usage of these words. What words are commonly used for these concepts on LW?
I am currently thinking about The Worst Argument in the World. But I want to read some more before I decide if I have something relevant to contribute.
And I disagree with Yudkowsky's version of timeless physics. I might say something about that if I can just find a way to formulate what I want to say. (It is not a language problem. It is more that I first have to explain stuff about gauge and symmetries, and how sometimes you should not get rid of redundant variables just because you can.)
I am currently writing a thesis in Loop Quantum Cosmology. It is about alternative ideas about the beginning of the universe. It is really cool, but not as cool as it sounds. For several reasons I will probably not stay in this field. After my defence I don't know what to do. If someone has a job offer or a career suggestion, let me know.
Replies from: Vaniver, None, Gram_Stone
↑ comment by Vaniver · 2016-02-10T20:44:53.256Z · LW(p) · GW(p)
Welcome!
After my defence I don't know what to do. If someone has a job offer or a career suggestion, let me know.
How much programming have you done so far? In my experience physicists tend to make the transition to programming fairly well because they have lots of experience with modeling / reasoning from first principles / mathematical thinking.
Replies from: None
↑ comment by [deleted] · 2016-04-10T01:44:30.770Z · LW(p) · GW(p)
I might post something soon, only I am confused by all the formatting.
Is there somewhere I can try it out without actually posting?
I would like to try out what LaTeX code it is possible to include. I looked at the LaTeX-to-HTML for Less Wrong app, but it seems to only pick up expressions enclosed by single $, which is very limiting. Is this the only type of LaTeX code possible in the Less Wrong formatting environment, or is it just a limitation of the app?
Replies from: None
↑ comment by Gram_Stone · 2016-02-10T17:11:15.931Z · LW(p) · GW(p)
I find it very useful to distinguish between what I call "debate" and "discussion": "debate" = everyone involved is trying to win, where "win" usually means convincing the audience. "discussion" = everyone involved is trying to learn the truth. Less Wrong is obviously a place for discussion, but even in a discussion I find the above vocabulary useful. However, I don't know whether this distinction reflects common usage of these words. What words are commonly used for these concepts on LW?
The article that I find most similar to this idea is The Scales of Justice, the Notebook of Rationality. You might call debate 'defending a side' or 'counting points', as opposed to seeking the truth.
comment by [deleted] · 2016-02-04T23:27:39.759Z · LW(p) · GW(p)
Hey LW. I found this site about an hour ago while browsing Quora (I know, I know) and the concept is really appealing to me. Currently I'm studying for my undergrad degree in Neuroscience, not sure exactly what direction I want to take it in afterwards. Artificial neural networks and AI in general are intriguing to me. Being able to actually explain/understand concepts like consciousness and perception of reality in a material sense is sort of my (possibly idealistic) goal. Empiricism is very dear to me, but I think in order to fully explore any idea you can't pit it against rationalism--if that's even a thing that people still do. It's likely that I'll do more lurking than anything else on here, but I'm looking forward to it anyways!
Replies from: None
comment by acrmartins · 2015-08-29T09:36:26.031Z · LW(p) · GW(p)
Hi. Just leaving a few comments about me and about what I have been doing in terms of research that people here may find interesting. I joined just a couple of days ago, so I am not so sure about styles; this seems to be the proper place for a first post, and I am guessing the format and contents are free.
While I was once a normal theoretical physicist, I was always interested in the question of why we believe in some theories; I think that for a while I felt that we were not doing everything right. As I went through my professional life, I had to start interacting with people from different areas, and that meant a need to learn Statistics. Oddly, I taught myself Bayesian methods before I even knew there was something called hypothesis tests.
Today, my research involves parts of Opinion Dynamics (I am still a theoretical physicist there, somehow), and I have been making more and more use of results from human cognition experiments to understand a few things, as well as a Bayesian framework to generate my models. I have also been doing a small amount of research on evolutionary models. But my real main interest at the moment can easily be seen in a paper that I just put online at the ArXiv preprint site. Indeed, while I already knew this site and found it interesting, time limits meant I never really planned to write anything here. So, the reason I actually joined the site now is that I think you will find the whole discussion in the paper quite interesting. I do think that my main conclusion there about human reasoning and its consequences is so obvious that it always amazes me how deep our instincts must run for it to have remained hidden.
There is a series of biases and effects that happen when we decide to support an idea, and those biases make us basically unable to change our minds or, in other words, to learn. In the paper I inspect the concept of choosing an idea to support in light of what we know about rationality. I conduct a small simulation experiment with different models suggesting that our desire to support only one idea is behind extremist points of view, and I finally discuss the consequences of it all for scientific practice. There is a book planned, with many more details and aimed at the layperson; the first draft is complete, but it will still take a while before the book is out. The article is in drier prose, of course.
Anyway, while I am still submitting it for publication, the preprint is available at
http://arxiv.org/abs/1508.05169
The name of the article is "Thou shalt not take sides: Cognition, Logic and the need for changing how we believe". I do think you people here will have a lot of fun with it.
Best, André
Replies from: BiasedBayes
↑ comment by BiasedBayes · 2015-09-24T12:36:24.939Z · LW(p) · GW(p)
Thanks for the link! Very nice publication!
comment by Sarunas · 2015-07-22T18:13:45.628Z · LW(p) · GW(p)
META. LessWrong Welcome threads have changed very little since late 2011. Should something be updated?
Replies from: Vaniver, None, Viliam, Vaniver
↑ comment by Vaniver · 2015-07-22T23:28:55.046Z · LW(p) · GW(p)
This link shows you all new posts in both Main and Discussion, by title and vote count and so on, and is my preferred landing page for LW. I don't think there are any obvious links to it, and this thread seems like a fine place to mention it.
Replies from: Sarunas
↑ comment by [deleted] · 2015-07-23T09:14:34.098Z · LW(p) · GW(p)
What about the list of users who offered to provide English assistance? If this is a useful service to members, it may be worth revisiting, as most of the listed members seem to be inactive (at least judging from post/comment history): Randaly has returned to posting recently, but shokwave hasn't posted in more than a year, and Barry Cotter's and Normal_Anomaly's last posts were in April.
Replies from: Sarunas
↑ comment by Viliam · 2015-07-28T08:44:21.484Z · LW(p) · GW(p)
At the end of the "SEQUENCES:" paragraph you could add: They are also available in book form.
Replies from: Sarunas
comment by petermac222 · 2016-11-16T19:35:39.500Z · LW(p) · GW(p)
Hello,
Browsing the web I found this site. I think it will be fun to indulge a bit and read more.
I'm retired, living on a sailboat and enjoying life. At this time I can't think of any topic of interest in the context of discussions, but I like the reading and I'm sure I'll jump in somewhere to contribute more down the road.
Peter
Replies from: Lumifer, Gyrodiot
↑ comment by Lumifer · 2016-11-16T21:43:45.037Z · LW(p) · GW(p)
Welcome :-) You don't live on a Macgregor 22, do you?
Replies from: petermac222
↑ comment by petermac222 · 2016-11-17T18:53:56.832Z · LW(p) · GW(p)
I live on a Nantucket Island 38. Just big enough to be roomy, and just small enough to sail about by myself. I'm just getting into the living on it part. Had the boat 4+yrs but only moved in full time this past July. Hope to start traveling on it more in 2017, targeting the Pacific Northwest for my first trips, but we'll see, I don't actually have a hard schedule, just rolling along at my own pace.
comment by jstncrri · 2016-11-14T04:58:33.433Z · LW(p) · GW(p)
Hey kids. I'm a young Canadian philosophy student trying to diversify my understanding of the world-as-it-is. I'm pressing my way through Rationality: A-Z, but while doing university, progress can be slow. I've been visiting the site frequently for a few months, but typically feel too uninformed to comment. I appreciate the (surprising) lack of bias and the openness to critical thinking here, which I've found mysteriously absent from my social, business and academic circles. I've gone through the process of being contrarian, then being a 'communist' (then reading Camus), then being lost in a world where it's difficult to find thinking happening at all. I come here to remind myself how to be (hah) less wrong, and to see what cool things other intelligent folks are working on. If anyone has links to blogs or sources that are interesting to someone trying to learn about...everything, I'm always looking for more networks to look to for information. Also, what does a philosophy student who doesn't want to fall prey to the philosophy tropes do? (I already work at a pizza place.)
Replies from: hairyfigment
↑ comment by hairyfigment · 2016-11-15T01:17:28.158Z · LW(p) · GW(p)
I don't know if this is a good answer to your last question, but you could ask what "philosophy" might look like today if Aristotle had never tutored the Emperor of the known world. I tend to think it wouldn't exist - as an umbrella category - nor should it.
Replies from: jstncrri
↑ comment by jstncrri · 2016-11-15T05:23:37.336Z · LW(p) · GW(p)
I see it more as the underlying theory of theory, an aspect of all things. I chose to study it with different intentions, but now I'm just capitalizing on my ability to understand theory to learn the theories important to as many different disciplines as possible. I read somewhere that philosophers have a responsibility to learn as much science as they can if they want to be relevant. I'm trying.
comment by Rachelle11 · 2016-08-25T07:31:04.712Z · LW(p) · GW(p)
Rachelle is an academic consultant at a community college who specializes in helping students with their academic problems, college stress, and such. She also works part-time for an online dissertation help service at dissertation corp. She's also a hobbyist blogger and loves to do guest blogging on education or college-life related topics.
comment by Arthur Milchior (Arthur-Milchior) · 2016-07-14T04:01:13.728Z · LW(p) · GW(p)
Hello from Paris, France.
Like many of you, I first discovered all of this through HPMOR (actually, its French translation). I then read Rationality: From AI to Zombies in its entirety (because, honestly, reading things in order is SO MUCH easier than having 20 tabs open with 20 links I followed on the previous pages). I thought I would finish reading this blog, or at least the Sequences, before posting, and then realized that might imply I would never post.
I have a doctorate in fundamental computer science, I'm an amateur writer (in French only), and I'm an LGBT activist who goes into schools to speak about LGBTphobia and sexism (119 classes and counting).
I can't tell right now exactly why I like the idea of rationality so much. I guess it is unrelated to the fact that I recently wrote the article https://en.wikipedia.org/wiki/Rational_set . It's probably more related to the fact that I love the idea of being a robot, at least being like I thought a robot was before I knew that robots are programmed by humans. I can rationalize it by hoping that rational methods will help me be more efficient in fighting LGBTphobia (and probably more efficient at doing research and publishing, or at writing more...), even if, to tell the truth, I'm not yet convinced that studying rationality is a rational action for attaining those goals. On the other hand, even if rationality may not be the BEST tool ever for attaining those goals, I'm more confident in the advice I find here than in the advice of a random self-help book from a supermarket shelf, because I assume some people actually did research before giving this advice.
comment by Starglow · 2015-10-16T23:03:01.155Z · LW(p) · GW(p)
Hi! I've been lurking around here for a while; I'm quite the beginner and will be further lurking rather than contributing. A few months ago, I found and played a nifty little game that asked you to make guesses about statistics and set confidence intervals; it was mostly about updating probabilities based on new information, and it ultimately required you to collect information to decide whether a certain savant was more likely in his cave or at the pub. I've been wanting to have another look at it, but I have been entirely unable to find it again.
Could anyone point me to it? I'm fairly certain it was from this website. Thanks for the help, and keep up the interesting posts!
EDIT: http://cassandraxia.com/projs/advbiases/ in case anyone else is looking for it.
comment by alexander_poddiakov · 2015-10-03T07:56:35.900Z · LW(p) · GW(p)
Hello! I am from the Department of Psychology at the Higher School of Economics. I study problem solving, systems thinking, and help and counteraction in social interactions. Both rationality and irrationality are important here.
Web: http://www.hse.ru/en/staff/apoddiakov#sci, http://papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=426114
comment by hoofnail · 2015-09-10T19:09:04.737Z · LW(p) · GW(p)
Hi. I have only ever browsed one thread on this website before. I used to like arguing a lot, but I lost my fervor when I felt like the validity of my arguments and my ability to defend myself in argument didn't and don't matter to most. It makes me sad. I only want to make everyone happy and able to cope with their pain, but everyone rejects me.
I don't have much of a personality beyond my liking logic a lot. All I know is logic, even if most people disagree with me. I am saddened by the fact that I feel my life only truly began in my late teens when I randomly came across the knowledge I needed to gain the opinions I have today. I never want anyone else to experience the sadness I have again. I want to change the world, via argument. Hello.
I once had a chance to make friends like me, but I threw that chance away, because the day that opportunity fell into my lap was the day I formally lost my faith in humanity, and lost my fervor to change the world....
comment by ArisC · 2017-01-22T07:48:25.007Z · LW(p) · GW(p)
Hello from Beijing.
I found out about Less Wrong from Slate Star Codex. I also read HPMOR last year, but hadn't realised there was a connection between that and Less Wrong.
I am posting here because I have been thinking about morality. I get into a lot of debates that all boil down to the fact that people hold a very firm belief in a particular moral principle, to the extent that they would be happy to force others to live in accordance with that principle, without evaluating whether this principle is subjective or rational.
In response to this, I have come up with a framework for evaluating moral theories, and I would like to hear the rationalist community's feedback. Briefly, what I propose is that a moral theory needs to meet three criteria: a) the ethical principles that comprise it must not be internally contradictory; b) its ethical principles must be non-arbitrary as far as possible (so, "be good to other people just because" is not good enough); and c) if the theory's principles are taken to their logical conclusion, they must not lead to a society that the theory's proponents themselves would consider dystopian.
I would like to hear people's thoughts on this - if you think it's intriguing, I am happy to submit an article to expand on my rationale for proposing this framework.
Best, Aris
Replies from: onlytheseekerfinds
↑ comment by onlytheseekerfinds · 2017-01-22T12:23:12.813Z · LW(p) · GW(p)
It seems like (a) and (c) are easily granted, but what's your definition of "non-arbitrary", and how should we determine if that definition is itself a non-arbitrary one?
This topic is one I enjoy thinking about so thank you for your post :)
Replies from: ArisC
↑ comment by ArisC · 2017-01-22T14:15:39.076Z · LW(p) · GW(p)
Thanks for your comment!
My definition of non-arbitrary would be, can we derive your principle from facts on which everyone agrees? I can propose two such principles: a) liberty - in the absence of moral absolutes, the only thing you can say is live and let live, as to do otherwise is to presuppose the existence of some kind of moral authority; or b) survival of the fittest - there is no moral truth, and even liberty is arbitrary - why should I respect someone else's liberty? If I am stronger, I should feel free to take what I can.
That said, I think there could also be an argument for some sort of virtue ethics - e.g. you could argue that perhaps there is absolute truth, and there are certain virtues that will help us discover it. But you'd need to be smarter than me to make a convincing argument in this line of thought.
comment by [deleted] · 2016-12-13T09:22:07.915Z · LW(p) · GW(p)
Hi all,
I'm a 3rd year CS student at MIT interested in working with computer graphics in the future. I have way too many things I'm interested in doing (I pretty much can't find any field/hobby that I think would be completely boring after I put enough effort into finding out more about it), but things I'm actually involved in and maybe a little good at are art (digital/traditional, and some 3D) and music (singing mostly nowadays). In terms of how I spend my free time I love games and reading, and generally spending too much time on the internet.
I've always been a rationalist, I think (though not a very good one), but I only became an altruistic rationalist recently, after going through a pretty rough year. Not sure exactly why the combination of things that happened last year changed me so much, but I went from caring mostly about being right to caring about being a Good Person.
Anyway how I ended up here is procrastination from working/studying for finals. Staying up late tends to get me overly excited about the infinite amount of interesting content on the web. I'm not a very extroverted person so finding a community like this is really great.
What I'm mainly hoping to get out of LW is to improve how I live and think, to help me do better on my path from Rationalist to Altruistic Rationalist and hopefully later to Altruistic Rationalist Who Actually Does Things That Help People. I've realized I like talking a lot about my ideals but don't do enough to really apply them to my life. I'm also somewhat interested in the question of how art (or well humanities/entertainment...anything less concrete than being a doctor) helps the world. On the one hand I feel like maybe I should aspire to do something more "concrete" than what I'd probably do with my work in computer graphics (try to reform the government? cure some disease? make advancements in our understanding of physics?), but on the other hand I know that the humanities can be extremely powerful, and though I haven't completely thought this out, maybe life with only cures and efficient/good governments and a good understanding of physics without any stories or love of beauty isn't the best life for humans. I'm also debating whether being altruistic means you should sacrifice some of your own quest for self-actualization to help other people, or if you should focus even more on it since that is what you'd want others to do (not sure if I'm really wording this well here...).
Given the time though, I better stop here. I look forward to learning from you all!
comment by Christiano · 2016-11-14T02:24:04.246Z · LW(p) · GW(p)
Hello Less Wrong community! I study Statistics at the Federal University of Rio de Janeiro in Brazil. I am oriented by the Bayesian probability philosophy because our Department of Statistical Methods is focused on Bayesian Statistics. I found this website during my studies of Bayesian philosophy and error. In Rio de Janeiro we don't have a rationality community yet, but in São Paulo there are meetings organized every month on Meetup. I am very excited to spend my time in this community developing and debating the philosophy of error!
comment by JohnReese · 2016-11-08T01:58:47.277Z · LW(p) · GW(p)
Hiya! I am currently a postdoc in the neurosciences, with a computational focus, dealing with the uncertainties and vicissitudes attendant upon one still plodding along the path to "nowhere close to tenure-track". My core research interests include decision making, self-control/self-regulation, goal-directed behaviour, RL in the brain, etc. I am quite interested in AI research, especially FAI, and while I am aware of the broad picture on AI risk, I would describe myself as an optimist. On the social side of things, I am interested in understanding why people believe the things they do (insofar as I am not trying to figure this out as I dangle from the tree...), and my approach has always been one of asking open-ended questions to refine my model of "where someone is coming from"; this helps me have civil discussions with people whose views would be incompatible with mine. I am truly glad that civil discourse and collective truth-seeking are community norms here...one of my biggest pet peeves is that this is what "science" should be about, as an enterprise, but in modern academia one seldom feels as though one is part of such a community. Those who disagree, or have had much better times in academia, are welcome to disagree. When I am not thinking about computational models, AI, ethics, or whatnot, I pretend to hoard crumpets, drink lots of tea and coffee, and make trips to and from the DC Universe (the one that existed prior to Flashpoint). I discovered Scott Aaronson's fantastic blog a year ago, which was followed by trips to SSC - and this is how I found LW. Love all 3 and now glad to join LW.
Oh, for some reason I am unable to see the button for voting on posts/comments...is there a Karma threshold to be crossed before one can vote?
comment by Secret_Tunnel · 2016-11-04T00:02:47.800Z · LW(p) · GW(p)
Hey everybody! My name's Trent, and I'm a computer science student and hobbyist game developer who's been following LessWrong for a while. Finished reading the sequences about a year ago (after blazing through HPMOR and loving it) and have lurked here (and on Weird Sun Twitter...!) since then. Figured I'd make an account and get more involved in the community; reading stuff here makes me more motivated in my studies, and it's pretty entertaining either way!
I'd love to be one of the first people on Mars. Not sure how realistic that goal is or what steps I should even take to make it happen beyond saving $500,000 for a supposed SpaceX ticket and mastering a useful skill (coding!), but it's something to shoot for!
Looking forward to reading the linked posts, I haven't seen a lot of them! Also, is this the newest Welcome thread? It's over a year old...!
Replies from: CCC↑ comment by CCC · 2016-11-04T12:01:36.273Z · LW(p) · GW(p)
Hi, Trent!
I'd love to be one of the first people on Mars. Not sure how realistic that goal is or what steps I should even take to make it happen beyond saving $500,000 for a supposed SpaceX ticket and mastering a useful skill (coding!), but it's something to shoot for!
Have you heard of the Mars One project?
Replies from: Secret_Tunnel↑ comment by Secret_Tunnel · 2016-11-04T23:04:29.492Z · LW(p) · GW(p)
I have! Wish I'd gotten in on the initial astronaut selection, haha. Still, my money is on SpaceX beating them to the punch.
comment by Arielgenesis · 2016-07-24T15:50:51.836Z · LW(p) · GW(p)
We'd love to know who you are, what you're doing: I was a high school teacher. Now I'm back at school for Honours and hopefully a PhD in science (computational modelling) in Australia. I'm Chinese-Indonesian (my grammar and spelling are a mess) and I'm a theist (leaning toward Reformed Christianity).
what you value: Whatever is valuable.
how you came to identify as an aspiring rationalist or how you found us: My friend, who is now a sister in the Franciscan order of the Roman Catholic Church, recommended Harry Potter and the Methods of Rationality to me.
I think the theist community needs better, more rational arguments for its beliefs. I think the easiest way is to test them against rational people. I hope this is the right place.
I am interested in making rationality more accessible to the general public.
I am also interested in developing an ideal, universal curriculum. And I think rationality should be an integral part of it.
comment by Sarginlove · 2016-03-22T16:42:45.725Z · LW(p) · GW(p)
I am Sargin Rukevwe Oghneneruona, from Nigeria, a student studying Business Administration and Management at Delta State Polytechnic, Otefe. I am a rational person and this has helped me a lot; I really love engaging in activities that could make me a more rational thinker and improve my knowledge about being rational. I found out about Less Wrong by reading articles on http://intentionalinsights.org/ written by Intentional Insights personnel, which have helped me a lot to build my strength and knowledge for achieving goals and becoming more successful in life. I believe becoming a member of lesswrong.com will also help me become a more rational thinker.
Replies from: Gleb_Tsipursky
↑ comment by Gleb_Tsipursky · 2016-03-23T23:55:14.447Z · LW(p) · GW(p)
Glad you're joining LW, Sargin! Nice to see another volunteer and part-time contractor for Intentional Insights join LW :-)
I want to add that Sargin volunteers at Intentional Insights for about 25 hours, and gets paid as a virtual assistant to help manage our social media for about 15 hours. He decided to volunteer so much of his time because of his desire to improve his thinking and grow more rational. He's been improving through InIn content, and so I am encouraging him to engage with LW.
Replies from: None
↑ comment by [deleted] · 2016-03-24T11:20:50.050Z · LW(p) · GW(p)
It would help if you, or they, or both of you wrote about what exactly was improved, and why you think they even ought to engage with LW, which is, after all, hardly the only place to be rational in.
Replies from: Gleb_Tsipursky
↑ comment by Gleb_Tsipursky · 2016-03-24T15:09:34.594Z · LW(p) · GW(p)
Good point about specifics on improvement, thanks! I'll encourage them to describe their improvements in the future.
Regarding LW: Intentional Insights content is a broad-version introduction to LW-style rationality. After getting that introduction, we aim to send people that are ready for more complex materials to ClearerThinking, CFAR, and LW.
comment by pranali · 2016-08-05T05:17:22.071Z · LW(p) · GW(p)
Hi! I am new and don't know where exactly to ask this question, so I'm asking here...
How do you vote on articles and comments? I can't figure out how!
(I hope I'm not overlooking some obvious button and about to be embarrassed.)
Replies from: Elo
comment by teddy-ak17 · 2016-07-18T12:03:04.044Z · LW(p) · GW(p)
Hello from a lot of places! :) I'm Chinese (Shanghai), studying in Brighton, England, and living in Vienna, Austria (moving to Prague, Czech Republic soon). How I discovered LW is not a very long story.
I have a great interest in artificial intelligence. I was reading James Barrat's 'Our Final Invention' and he mentioned the AI-box experiment, which got me excited (because just that morning I had been reading an article about the Turing Test and how unreliable it is at measuring intelligence in machines; might the AI-box experiment be a better test in the future?). Before he elaborated on the story in chapter 3 (as I found out later), I googled the experiment, which led me to Yudkowsky's website. I read through the thread on the experiment (some really interesting conversations between Yudkowsky and the challengers; I still disagree with James Higgins, who dismissed most of the questions raised by Yudkowsky and gave ambiguous answers). Then I was curious about another link Google gave me, which led to a publicised log of an AI-box experiment. It brought me here, and I had a look around. A whole website about rationality. It's like I found a gold mine.
I am currently studying A-Levels: maths, further maths, computer science and physics. I want to study computer science with artificial intelligence at university. My goal is computer science and philosophy at Oxford.
Anyway, in the future I wish to contribute more on this website. I believe that sharing thoughts is the best way to expand our knowledge, better than reading books. I am writing an EPQ on the safety of ASI, so hopefully I can get some inspiration from the LW community. My approach to life is simple: question everything, so please bear with my questions. :) A rational world should be everyone's goal. With the development of AGI/ASI, I hope our world will be a better place, and rationality is the key. I am so happy I found this place and I hope I can help make a difference.
comment by dimensionx · 2016-07-06T12:11:24.003Z · LW(p) · GW(p)
1
Replies from: Lumifer
↑ comment by Lumifer · 2016-07-06T19:39:19.258Z · LW(p) · GW(p)
in the economic environment
Do you mean financial markets?
Replies from: dimensionx
↑ comment by dimensionx · 2016-07-06T20:23:49.459Z · LW(p) · GW(p)
1
Replies from: Lumifer
↑ comment by Lumifer · 2016-07-06T20:34:20.424Z · LW(p) · GW(p)
So what kind of metrics are you interested in forecasting? Macroeconomic ones (GDP, inflation, etc.)? Industry-specific things? Interest rates?
Replies from: dimensionx
comment by Alia1d · 2016-05-09T06:29:04.811Z · LW(p) · GW(p)
I’ve found the Welcome thread!
Hi, I’m Alia and I live with my husband in San Jose, California. I found this site via SlateStarCodex, and having read Rationality: From AI to Zombies, I think this is a fascinating and useful set of concepts, and that using this type of reasoning more often is something to aspire to. I want to do more Bayesian calculations so I get more of a feel for them.
I’m also a fundamentalist* Christian. I’m perfectly ready to discuss and defend these beliefs, but I won't always bring them up in threads. I’m not trying to deceive or trick anyone; I just don’t want to derail a thread that is actually about something else. I do think it’s possible to be both a rationalist and a Christian and to stay reasonably intellectually consistent.
*(a note on why I choose the label fundamentalist. Not long after American Christians split into mainline and fundamentalist groups, the fundamentalists got a bunch of bad press focused on certain anti-intellectual sub-groups. The other fundamentalists dealt with this by splitting off and re-branding themselves as evangelical. I’m not anti-intellectual and am generally in the group that would self-identify as evangelical, but I’m choosing to stick with the fundamentalist label for three reasons. 1) I don’t think changing the label or re-branding is a good way to deal with negative affect attached to a word. At best it avoids the issue rather than solving the problem. 2) I don’t believe in disavowing people because they are unpopular with third parties. While I disagree with the anti-intellectuals on some things, the agreement on the common core beliefs that led to the fundamentalist label in the first place is still there. 3) I think the fundamentalist label provides more clarity. The evangelicals worked hard, and successfully, to avoid getting over-identified with any sub-group or coincidental characteristic. But as a result the label evangelical stayed vague: individuals and groups that are more in the mainline tradition sometimes call themselves, or get called, evangelical. On the other hand, opponents who wanted to hang on to the negative affect kept calling anything from the original fundamentalist tradition ‘fundamentalist.’ So I think fundamentalist will convey the most accurate idea of where I’m coming from theologically.)
Replies from: gjm, ChristianKl, johnlawrenceaspden
↑ comment by gjm · 2016-05-09T11:35:12.838Z · LW(p) · GW(p)
Welcome! I applaud your decision to embrace hostile terminology. I don't think you should feel any obligation to bring up your religious beliefs all the time.
If you're interested in the interactions between unashamedly traditionalist religion and rationalism, you might want to drop into the ongoing discussion of talking snakes. Most of it lately, though, has been discussion between people who agree that the story in question is almost certainly hopelessly wrong and disagree about exactly which bits of it offer most evidence against the religion(s) it's a part of, which you might find merely annoying...
[EDITED to add: Aha, I see you've already found that. My apologies for not having noticed that you were already participating actively there.]
Just out of curiosity (and you should feel free not to answer), how "typically fundamentalist" are your positions? E.g., are you a young-earth creationist, do you believe that a large fraction of the human race is likely to spend eternity in torment, do you believe in "verbal plenary inspiration" of the Christian scriptures, etc.?
(Meta-note that in a better world would be unnecessary: it happens that one disgruntled LessWronger has taken to downvoting almost everything I post, sometimes several times by means of sockpuppets. I mention this only so that if you see this comment sitting there with a negative score you don't take it to mean that the LW community generally disapproves of my welcoming you or disagrees with what I said above.)
Replies from: Alia1d
↑ comment by Alia1d · 2016-05-09T19:35:14.179Z · LW(p) · GW(p)
Fairly typically fundamentalist, I believe in young earth creationism with a roughly estimated confidence level of 70%, a large fraction of the human race destined for eternal torment at about 85% and verbal plenary inspiration at about 90%.
I'm a little more theologically engaged than average, but (as is typical in my circles) that means I'm more theologically conservative, not less.
Replies from: gjm, gjm
↑ comment by gjm · 2016-05-09T21:24:36.012Z · LW(p) · GW(p)
Are those figures derived from any sort of numerical evidence-weighing process, or are they quantifications of gut feelings? (I do not intend either of those as a value judgement. Different kinds of probability estimate are appropriate on different occasions.)
Replies from: Alia1d
↑ comment by Alia1d · 2016-05-10T00:20:14.586Z · LW(p) · GW(p)
These are more gut feelings. I had already considered a lot of evidence for and against these before I found out about Bayesian updating, so the bottom line was really already written. If I tried to do a numerically rigorous calculation now, I would just end up double-counting evidence. This is just an 'if I had to make a hundred statements of this type that I was this confident about, how often would I be right?' guess.
Replies from: gjm
↑ comment by ChristianKl · 2016-07-06T19:07:14.964Z · LW(p) · GW(p)
Do you believe that both Black and White people who live currently descend from one individual that lived around 6000 years ago?
↑ comment by johnlawrenceaspden · 2016-05-09T13:22:48.238Z · LW(p) · GW(p)
Welcome Alia! You sure sound like one of us. Hope you like it here.
comment by Germaine · 2016-05-06T14:23:06.064Z · LW(p) · GW(p)
Hi from San Diego, California. I'm an attorney with academic training in molecular biology (BS, MS, PhD). I have an intense interest in politics, specifically the cognitive biology/social science of politics. I'm currently reading The Rationalizing Voter by Lodge and Taber. I have read both of Tetlock's books, Haidt's Righteous Mind, Kahneman's Thinking, Fast and Slow, Thaler's Nudge, Achen and Bartels' Democracy for Realists, and a few others. I also took a college-level MOOC on cognitive biology and attendant analytic techniques (fMRI, etc.) and one on the biology of decision making in economics.
Based on what I have taught myself over the last 6-7 years, I came up with a new "objective" political ideology or set of morals that I thought could be used to at least modestly displace or supplement standard "subjective" ideologies including liberalism, conservatism, capitalism, socialism, Christianity, anarchy, racism, nationalism and so on. The point of this was an attempt to build an intellectual framework that could help to at least partially rationalize politics, which I see as mostly incoherent/irrational from my "objective" public-interest oriented point of view.
I have tried to explain myself to lay audiences (I'm currently a moderator at Harlen's Place, a politics site on Disqus https://disqus.com/home/channel/harlansplace/ ), but have failed. I confess that I'm becoming discouraged about the possibility of applying cognitive and social science to even slightly rationalize politics. What both Haidt and Lodge/Taber have to say makes me think that what I am trying is futile. I have tried to contact about 50-60 academics, including Tetlock, Haidt, Bartels and Taber, but none have responded with any substance (one got very annoyed and chewed me out for wasting his time; http://www.overcomingbias.com/ ) - most don't respond at all. I get that -- everyone is busy and crackpots with new ideas are a dime a thousand.
Anyway, I stumbled across this site this morning while looking for some online content about the affect heuristic. I thought I would introduce myself and try to fit in, if I'm up to the standards here. My interest is in trying to open a dialog with one or more people who know this science better than I do, so that I can get some feedback on whether what I am trying to do is a waste of time. As a novice, I suspect that I misunderstand the science and overestimate the limits of human rationality in politics in a society that lives under the US constitution (free speech).
My blog is here: http://dispol.blogspot.com/
Replies from: ChristianKl, Lumifer
↑ comment by ChristianKl · 2016-05-06T15:28:21.451Z · LW(p) · GW(p)
First impressions from skim reading the blog:
Objective politics, defined as unbiased fact and reason in service to the public interest is described and defended. Biology-based objectivity, the last political frontier.
That points, for me, in the direction of objectivism, with all its problems. There are good reasons to be quite suspicious when someone claims that they don't have an ideology and their views are simply "objective".
What we need to do as a country is obvious.
Saying something like that without bringing forward a specific proposal suggests political ignorance to me.
Book reivew: Democracy for Realists
The blog isn't spell-checked.
Replies from: Germaine
↑ comment by Germaine · 2016-05-08T15:55:15.578Z · LW(p) · GW(p)
I have been arguing and debating politics online for over 7 years now and I am quite used to how people speak to each other. There is nothing at all politically ignorant in my comment. When I say something is obvious, it has to be taken in the context of the entire post. It's easy to cherry-pick and criticize by the well-known and popular practice of out-of-context distortion of a snippet of content from a bigger context. I have seen that tactic dozens of times and I reject it. It's a cheap shot and nothing more. You can do better. Bring it on.
My blog and all of my other online content speaks directly to the American people in their own language. I do not address academics in academic language. I have tried academic language with the general public and it doesn't work. Here's a news flash: There is an astonishing number of average adult Americans who have little or no trust in most any kind of science, social and cognitive science included. As soon as one resorts to the language of science, or even mentions something as "technical" as "cognitive science", red flags go up in many people and their minds automatically switch to conscious rationalization mode. My guess is that anti-science attitude applies to about 40-60% of adult Americans if my online experience is a reasonably accurate indicator. (my personal experience database is based on roughly 600-1,000 people -- no, I am not so stupid as to think that is definitive, it's just my personal experience)
I am trying to foster the spread of the idea that maybe, just maybe, politics might be rationalized at least enough to make some detectable difference for the better in the real world. My world is firmly based in messy, chaotic online retail politics, not any pristine, controlled laboratory or academic lecture room environment.
Political ignorance is in the eye of the beholder. You see it in me and I see it in you.
By the way, reread the blog post you criticize as making no specific proposal. There is a specific proposal there: based on the social science, remove fuel 1 from the two-fuel fire needed to spark a terrorist into being. How did you miss it? Did you read what I said, or did your eye simply float down to the offending phrase and that triggered your unconscious, irrational attack response?
I do appreciate your comment on the review of Achen and Bartels' book. If your whining about spelling errors is the best shot you have, then I am satisfied that I understand the book well enough to use it to leverage my arguments when I cross swords with non-science, real people in the real world. I have no interest in basing my politics on a misunderstanding of areas of science outside my formal academic training. I need to be as accurate and honest as I can so that people can't dismiss my arguments for rationality as based in ignorance, stupidity and/or mendacity. That's another cheap-shot tactic I come across with some regularity. The only defense against that attack is to be correct.
Shall we continue our dance, or is this OK for you?
Replies from: ChristianKl
↑ comment by ChristianKl · 2016-05-09T11:35:01.816Z · LW(p) · GW(p)
I have been arguing and debating politics online for over 7 years now and I am quite used to how people speak to each other.
That's the problem. Most relevant political discussions that have real-world effects don't happen online. Knowing how to debate politics online and actually knowing how political processes work are two different things.
By the way, reread the blog post you criticize as making no specific proposal. There is a specific proposal there: based on the social science, remove fuel 1 from the two-fuel fire needed to spark a terrorist into being.
That's no specific proposal. The fact that you think it is suggests that you haven't talked seriously to people who make public policy but only to people on the internet who are as far removed from political processes as you are.
It's like people who are outside of mathematical academia writing proofs for important mathematical problems. They usually think that their proofs are correct because they aren't specific enough about them to see the problems that exist with them.
If your whining about spelling errors is the best shot you have,
I read one post and gave my impression of it. The spelling errors reduce the likelihood that reading other posts would be valuable, so I stopped at that point. If you are actually interested in spreading your ideas, that's valuable information for you.
↑ comment by Lumifer · 2016-05-06T14:40:12.548Z · LW(p) · GW(p)
Is a short summary of your ideology or set of morals available somewhere on the 'net?
Replies from: Germaine
↑ comment by Germaine · 2016-05-08T16:08:04.318Z · LW(p) · GW(p)
I have tried for short summaries, but it hasn't worked. Very short summary: A "rational" ideology can be based on three morals (or core ideological principles): (1) fidelity to "unbiased" facts and (2) "unbiased" logic (or maybe "common sense" is the better term), both of which are focused on (3) service to an "objectively" defined conception of the public interest.
Maybe the best online attempts to explain this are these two items:
an article I wrote for IVN: http://ivn.us/2015/08/21/opinion-america-needs-move-past-flawed-two-party-ideology/
my blog post that tries to explain what an "objective" public interest definition can be and why it is important to be broad, i.e., so as to not impose fact- and logic-distorting ideological limits on how people see issues in politics: http://dispol.blogspot.com/2015/12/serving-public-interest.html
I confess, I am struggling to articulate the concepts, at least to a lay audience and maybe to everyone. That's why I was really jazzed to come across Less Wrong -- maybe some folks here will understand what I am trying to convey. I was under the impression that I was alone in my brand of politics and thinking.
Replies from: Lumifer, Gram_Stone
↑ comment by Lumifer · 2016-05-09T00:56:49.238Z · LW(p) · GW(p)
(1) fidelity to "unbiased" facts and (2) "unbiased" logic (or maybe "common sense" is the better term)
These are not particularly contentious, given that they both can be rephrased as "let's be really honest". However...
service to an "objectively" defined conception of the public interest
is somewhat more problematic. I assume we are speaking normatively, not descriptively, by the way, since real politics is nothing like that.
Off the top of my head, there are two big issues here. One is the notion of the "public interest" and how do you deal with aggregating the very diverse desires of the public into a single "public interest" and how do you resolve conflicts between incompatible desires.
The other one is what makes it "objective", even with the quotes. People have preferences (or values), some of them are pretty universal (e.g. the biologically hardwired ones), but some are not. Are you saying that some values should be uplifted into the "objective" realm, while others should be cast down into the "deviant" pit? Are there "right" values and "wrong" values?
Replies from: Germaine
↑ comment by Germaine · 2016-05-09T02:17:28.256Z · LW(p) · GW(p)
I'm done with this weird shit arrogant, academic web site. Fuck all of you academic idiots. Your impact on the 2016 November elections: Zero. Your efforts will have zero impact on the Donald's election. Only the wisdom of American common sense can save us. LW is fucking useless. :)
Replies from: CCC, Lumifer
↑ comment by CCC · 2016-05-09T09:17:31.141Z · LW(p) · GW(p)
Elections aren't everything.
Yes, I know that I, personally, have had (and will have) absolutely zero effect on the American 2016 November elections. I am fully aware that I, personally, will have absolutely zero impact on Donald Trump's candidacy, and everything that goes into that. And I am perfectly fine with that, for a single, simple, and straightforward reason; I am not American, I live in a different country entirely. I have a (very tiny) impact on a completely different set of elections, dealing with a completely different set of politicians and political problems.
And that has absolutely nothing to do with why I am here.
I've taken a (very) brief look over your blog. And I don't think I have much to say about it - it is very America-centric, in that you're not talking about an ideal political system nearly as much as you're talking about how the American system differs from an ideal political system.
Having said that, you might want to take a look over this article - it seems to cover a lot of the same ground as you're talking about. (Then note the date on that article; if you really want to change American politics, this is probably the wrong place to be doing it. If you really want to change the mind of the average American, then you need to somehow talk to the average American - I only have an outsider's view of America, but I understand that TV ads and televised political debates are the best way to do that).
Good luck!
↑ comment by Lumifer · 2016-05-09T05:41:04.800Z · LW(p) · GW(p)
I'm done with this weird shit arrogant, academic web site. Fuck all of you academic idiots. Your impact on the 2016 November elections: Zero. Your efforts will have zero impact on the Donald's election. Only the wisdom of American common sense can save us. LW is fucking useless. :)
Oh, dear. Somebody had a meltdown and a hissy fit.
Y'know, in some respects LW is like 4chan. Specifically, it's not your personal army.
You seem to have taken a break from bashing your face into a brick wall. Get back to it, the bricks are waiting.
↑ comment by Gram_Stone · 2016-05-08T23:06:34.768Z · LW(p) · GW(p)
I read your article on IVN, so this is mostly a response to that.
I do think that it would be great if people thought about politics in a scientifico-rational way. And it isn't great that you really only have two options in the United States if you want to join a coalition that will actually have some effect. It's true that having two sets of positions that cannot be mismatched without signaling disloyalty results in a false-dichotomous sort of thinking. But it seems important to think about why things are in this state in the first place. Political parties can't be all bad; they must serve some function.
Think about labor unions and business leaders. Employees have some recourse if they dislike their boss. They can demand better conditions or pay, and they can also quit and go to another company. But we know that when employees do this, it usually doesn't work. They usually get fired and replaced instead. The reason is that if an employer loses one employee out of one hundred, then they will be operating at 99% productivity, while the employee that quit will be operating at 0% productivity for some time. Labor unions solve the coordination problem.
Likewise, the use of a political party is that it offers bargaining power. Any scientifico-rational political platform will have to solve such a coordination problem, and they will have to use a different solution from the historical one: ideology. That's not easy. Which is not to say that it's not worth trying.
So, it's not enough that citizens be able to reveal their demand for goods and services from the government, or other centers of power; it's also necessary that officials have incentives to provide the quality and quantity of goods and services demanded. In democracy this is obtained through the voting mechanism, among other things. A politician will have a strong incentive to commit an action that obtains many votes, but barely any incentive to commit an action that will obtain few votes, even if they have detailed information about what policies would result in the greatest increase in the public interest in the long run, and even if the action that obtains the most votes is not the policy that maximizes public interest in the long run. They would not be threatened by the loss of a few rational votes, or swayed by the gain of a few rational votes, any more than the boss would be threatened by the loss of one employee.
It seems difficult to me to fix something like this from the inside. I think a competitive, external government would be an easier solution. Seasteading is an example of an idea along these lines. I don't believe that private and public institutions are awfully different in their functions, we often see organizations on each side of the boundary performing similar functions at different times, even if some are more likely to be delegated to one than the other, and it seems to me that among national governments there is a deplorable lack of competition. In the market, the price mechanism provides both a way for consumers to reveal their demand, and a way to incentivize suppliers to supply the quality and quantity of goods and services demanded. If a firm is inefficient, then it goes out of business. However, public institutions are different, in that there often is no price mechanism in the traditional sense. If your government sucks, you mostly cannot choose to pay taxes to a different one. Exit costs are very high as a citizen of most countries. And the existing international community has monopolized the process of state foundation. You need territory to be sovereign, but all territory has been claimed by pre-existing states, except for Marie Byrd Land in Antarctica, which the U.S. and Russia reserve the right to make a claim to, and the condominium in Antarctica does not permit sovereignty way down there a la the Antarctic Treaty System. The only other option is the high sea. Scott Alexander's Archipelago and Atomic Communitarianism is related to this.
I wonder if you've thought about stuff like that. I don't think that our poor political situation is only a matter of individuals having bad epistemology.
comment by JohnC2015_duplicate0.34499772964045405 · 2016-03-24T03:22:52.284Z · LW(p) · GW(p)
Hi Less Wrong,
I am John Chavez from the Philippines. I'm a part-time teacher in a community college, teaching computer hardware servicing and maintenance to out-of-school youths.
As I place much value on helping others in my community and reaching out to people who need help, I came to know about Intentional Insights on Facebook, which led me here to Less Wrong. I have been here for a while, reading several published articles. There are a lot of articles here that I really love, although I must admit there are a few that I found confusing and disagree with.
Hence, I am introducing myself to you to formally start my quest of learning more about being a rationalist.
I hope this will be enough for you to welcome me into your community. It will be humbling to know your thoughts :)
Thanks!
Replies from: SquirrelInHell, Gleb_Tsipursky↑ comment by SquirrelInHell · 2016-03-24T07:38:34.344Z · LW(p) · GW(p)
Hi John! From what you have described, I think it might be a better experience for you to start with more structured reading, which is (at the moment) best provided by Eliezer's "Rationality: From AI to Zombies". You can download it for free if you follow the link. It may seem long, but it's well worth the read.
Replies from: JohnC2015_duplicate0.34499772964045405↑ comment by JohnC2015_duplicate0.34499772964045405 · 2016-03-24T10:16:35.297Z · LW(p) · GW(p)
Cool! Thank you. I will definitely read it. :)
↑ comment by Gleb_Tsipursky · 2016-03-24T15:11:58.498Z · LW(p) · GW(p)
Glad you're joining LW, John! Nice to see another volunteer and part-time contractor for Intentional Insights join LW :-) It's definitely a nice place to develop rationality, and don't be put off by the occasional roughness of the commentary here.
For the rest of LW folks, I want to clarify that John volunteers at Intentional Insights for about 45 hours, and gets paid as a virtual assistant to do various administrative tasks for about 20 hours.
Replies from: ChristianKl↑ comment by ChristianKl · 2016-03-24T19:31:46.929Z · LW(p) · GW(p)
What exactly do you have them do?
Replies from: Gleb_Tsipursky↑ comment by Gleb_Tsipursky · 2016-03-24T19:35:17.993Z · LW(p) · GW(p)
They work on a variety of tasks, such as website management, image creation, and managing social media channels such as Delicious, StumbleUpon, Twitter, Facebook, Google+, etc. Here's an image of the organizational Trello showing some of the things they do (Trello is a platform for organizing teams). We also have a couple more people who do other stuff, such as YouTube editing, Pinterest, etc.
Replies from: ChristianKl↑ comment by ChristianKl · 2016-03-25T00:20:24.960Z · LW(p) · GW(p)
That doesn't really tell me what "managing social media channels" means. Does managing Twitter mean that the person registers a Twitter page, follows random people, and reposts InIn articles?
Does it basically mean that the people are supposed to post links at various places?
Replies from: Gleb_Tsipursky↑ comment by Gleb_Tsipursky · 2016-03-25T03:33:52.870Z · LW(p) · GW(p)
Managing Twitter means several things.
Regarding content, the person finds appropriate things to post on Twitter, which we do about 4 times a day. This includes both InIn and non-InIn materials that we curate for our audience; most of what we post (about 2/3) is not InIn content. Curating the non-InIn material involves reading the article and determining whether our audience would find it appropriate. Then the person writes up Tweets with appropriate hashtags for each piece and puts them into a spreadsheet. The tweets then get read over by two other people for grammar/spelling/fit. Finally, they are scheduled through Hootsuite, a social media scheduling app.
Regarding managing Twitter itself, this involves managing the Twitter audience of the channel, including questions, comments, etc. (we have over 10K followers on Twitter). It also involves retweeting interesting tweets and other Twitter-oriented activities.
This takes place for a number of social media channels. Here's an example of a weekly social media plan for Hootsuite, if you're curious. This includes Twitter, FB, LinkedIn, and Google+.
This doesn't include Pinterest, Instagram, StumbleUpon, or Delicious, since Hootsuite doesn't handle those.
Replies from: ChristianKl, ChristianKl↑ comment by ChristianKl · 2016-03-25T16:43:14.329Z · LW(p) · GW(p)
The latter involves reading the article and determining whether our audience would find it appropriate.
Who's your target audience when you think that a Nigerian can make a good decision about whether your target audience would find an article appropriate?
Replies from: Gleb_Tsipursky↑ comment by Gleb_Tsipursky · 2016-03-26T16:25:03.851Z · LW(p) · GW(p)
What are you implying about Nigerians here?
Replies from: ChristianKl↑ comment by ChristianKl · 2016-03-27T03:38:42.804Z · LW(p) · GW(p)
That they are culturally different from Western people. They may well know what's culturally appropriate to post when trying to reach a Nigerian audience, but Western culture is a bit different in lots of respects. The posts those people made on LW look like they were not written by typical Westerners, but either by people who wrote them because they are paid to do so or by people who operate under different cultural norms.
Replies from: Gleb_Tsipursky↑ comment by Gleb_Tsipursky · 2016-03-27T19:48:23.892Z · LW(p) · GW(p)
As I think I mentioned before, Intentional Insights tries to reach a global audience, and after the US, our top three countries are non-western. So it's highly valuable for us to have non-western volunteers/contractors who can figure out what would be salient to a diverse international audience.
Replies from: ChristianKl↑ comment by ChristianKl · 2016-03-27T20:47:12.461Z · LW(p) · GW(p)
Do you have other data about your impact in those countries besides passive reading numbers? Do you have links to receptions of InIn content by non-western audiences, besides those people you paid?
Replies from: Gleb_Tsipursky↑ comment by Gleb_Tsipursky · 2016-03-28T02:07:08.910Z · LW(p) · GW(p)
Links are hard, since most things I have are people writing to me. However, here is one relevant link. After finding out about our content, a prominent Indian secular humanist association invited me to do a guest blog for them. I was happy to oblige.
↑ comment by ChristianKl · 2016-03-25T16:45:40.395Z · LW(p) · GW(p)
(we have over 10K followers on Twitter).
How many of those are paid and how many organic?
Replies from: Gleb_Tsipursky↑ comment by Gleb_Tsipursky · 2016-03-26T16:22:24.228Z · LW(p) · GW(p)
Five are paid as virtual assistants, but they are not paid to follow Twitter. There wouldn't be a point to having paid followers, because the goal is to distribute content widely.
There are plenty of people who, after reading our widely shared articles, then choose to engage with our social media.
comment by Foo · 2016-12-08T22:55:15.187Z · LW(p) · GW(p)
Hello Less Wrong!
My name is Bryan Faucher. I'm a 27-year-old from Edmonton (Canada) in the middle of the slow process of immigrating to Limerick (Ireland), where my wife has taken a contract with the University. I've been working in education for the past five years, but I'm looking to pursue a master's in mathematical modeling next year rather than attempting to fight for the right to work in a crowded industry as a non-citizen.
I've been aware of LW for something like six years, having been introduced by an old roommate's SO by way of HPMOR. In that time I've read through the sequences and a great deal of what I suppose could be called the "supplementary content" available on the site, but never found a reason to dive in to the discussion. I don't remember exactly when I created this account, but it was nice to have it waiting for me when I needed it!
I'm joining in now because I was very much grabbed by Sarah Constantin's "A Return to Discussion". I've been a member of a mid-sized discussion forum for over a decade, where I now volunteer my time as an administrator. We've done OK - better than most - in terms of maintaining activity in the face of the web's movement away from forums and bulletin boards, but the tone of our conversations has certainly changed: in many ways sliding through the grooves which Sarah seems to be describing. My purview as admin includes the "serious" discussion section of the forum, and I feel I'm fighting a losing battle year over year to maintain "nerd space" in the face of cynical irony and the widespread fear of engagement.
I'm hoping to be inspired by the changes the LW community has set out to make; to learn from what goes right here and, in some small way, to contribute to an effort I think is an important one. Intellectually, I don't have a hope in hell of keeping up with the local heavy hitters, but I can bring a lot of, ya know... grit.
Anyway, thanks for reading. I hope this was a fair place to post this. A new newbie thread seems to be wanting, unless I missed something, and I suppose if nothing else I can rack up enough karma in the next few days to create one. See you around!
comment by WikiLogicOrg · 2016-05-14T10:31:27.295Z · LW(p) · GW(p)
Hello!
I am new to this site, but judging from HPMOR and some articles I read here, I think I have come to the right place for some help.
I am working on the early stages of a project called WikiLogic which has many aims. Here are some that may interest LW readers specifically:
-Make skills such as logical thinking, argument construction and fallacy recognition accessible to the general public
-Provide a community created database of every argument ever made along with their issues and any existing solutions
-Highlight the dependencies between different fields in academic circles
The project requires knowledge of Bayes networks, linguistics, and many more fields that I have little experience of, although I am always learning. This is why I am looking to you guys to review the idea and let me know your thoughts. At this stage, unfiltered advice on any aspect of the project is welcome.
The general idea along with a short video can be found on the front page of the main site:
http://www.wikilogicfoundation.org/
Feel free to explore the site and wiki to get a better feel of what I am trying to do. Please forgive poorly written or unfinished parts of the site. It is early days and it seems unproductive to finish before I get feedback that may change its course...
Replies from: Regex↑ comment by Regex · 2016-05-14T21:52:05.075Z · LW(p) · GW(p)
Welcome!
I've seen these sorts of argument maps before.
https://wiki.lesswrong.com/wiki/Debate_tools http://en.arguman.org/
It seems there is some overlap with your list here.
Generally, what I've noticed about them is that they focus very hard on things like fallacies. One problem here is that some people are simply better debaters even though their ideas may be unsound. Because they can better follow the strict argument structure, they 'win' debates but actually remain incorrect.
For example: http://commonsenseatheism.com/?p=1437 He uses mostly the same arguments debate after debate and so has a supreme advantage over his opponents. He picks apart the responses, knowing full well all of the problems with typical responses. There isn't really any discussion going on anymore. It is an exercise in saying things exactly the right way without invoking a list of problem patterns. See: http://lesswrong.com/lw/ik/one_argument_against_an_army/
Now, this should be slightly less of an issue since everyone can see what everyone's arguments are, and we should expect highly skilled people on both sides of just about every issue. That said, the standard for actually solid evidence and arguments becomes rather ridiculous. It is significantly easier to find some niggling problem with your opponent's argument than to actually address its core issues.
I suppose I'm trying to describe the effects of the 'fallacy fallacy.'
Thus a significant portion of manpower is spent on wording the argument precisely right instead of dealing with the underlying facts. You'll also have to deal with the fact that if a majority of people believe something, the sheer amount of manpower they can spend on shoring up their own arguments and poking holes in their opponents' will make it difficult for minority views to look like they hold water.
What are we to do with equally credible citations that say opposing things?
'Every argument ever made' is a huge goal. Especially with the necessary standards people hold arguments to. Are you sure you've got something close to the right kind of format to deal with that? How many such formats have you tried? Why are you thinking of using this one over those? Has this resulted in your beliefs actually changing at any point? Has this actually improved the quality of arguments? Have you tried testing them with totally random people off of the street versus nerds versus academics? Is it actually fun to do it this way?
From what I have seen so far, I'll predict there will be a lack of manpower, and that you'll end up with a bunch of arguments marked full of holes in perpetual states of half-completion. Because making solid arguments is hard, there will be very few of them. I suspect arguments about which citations are legitimate will become very heavily recursive, especially on issues where academia's ideological slants come into play.
I've thought up perhaps four or five similar systems myself, but I haven't actually gone out and tested any of them for effectiveness at coming to correct conclusions about the world. It is easy to generate a way of organizing information, but it needs to be thoroughly tested for effectiveness before it is actually implemented.
In this case, effectiveness would mean:
- production of solid arguments in important areas
- be fun to play
- maybe actually change someone's mind every now and then
- low-difficulty of use/simple to navigate
A word tabooing feature would be helpful: http://lesswrong.com/lw/np/disputing_definitions/ (The entire Map and Territory, How to Actually Change Your Mind, and A Human's Guide To Words sequences would be things I'd consider vital information for making such a site)
It may be useful for users to see their positions on particular topics change over time. What do they agree with now and before? What changed their mind?
I hope that helped spark some thoughts. Good luck!
Replies from: WikiLogicOrg↑ comment by WikiLogicOrg · 2016-05-17T20:12:57.137Z · LW(p) · GW(p)
Thanks for an excellent, in-depth reply!
Brilliant resource! Thanks for pointing it out.
You bring up a few worries, although I think you also realize how I plan to deal with them. (Whether I am successful or not is another matter!)
One problem here is that some people are simply better debaters even though their ideas may be unsound
One part of this project is to make some positive aspects of debating skill easy for newbies to pick up using the site. Charisma and confidence are worthless in a written format, and even powerful prose is diluted to simple facts and reasoning in this particular medium.
It is significantly easier to find some niggling problem with your opponents argument than to actually address its core issues
In my mind, if a niggling issue can break an argument then it was crucial and not merely 'niggling'. If the argument employed it but did not rely on it, then losing it won't change its status. Being aware of issues like the 'fallacy fallacy' is useful in time-limited oral debates, but in this format it's OK to attack a bad argument on an otherwise well supported theory. The usual problem is that this lets one's bias come into play and makes the opponent feel the whole argument is weak, but that is easily avoided when the node remains glowing green to signify it is still 'true'.
manpower is spent on wording and putting the argument precisely exactly right instead of dealing with the underlying facts
Is this so bad? We are used to being frugal with a resource like manpower because it has traditionally been limited, but I believe you can overcome that with the worldwide reach offered by the internet. People will only concentrate on what they are passionate about, which means the most contentious arguments will also get the most attention to detail. Most people accept gravity, so it won't get or need as much attention. In the future, if a new prominent school of thought forms to attack it, then it may require a revisit from those looking to defend it.
[limited manpower] ...will make it difficult for minority views to look like they hold water
I think the opposite is true. In most other formats, such as a forum, a lone comment can easily be drowned out. Here there will simply be two different ideas. More people working on one will help, of course, but they cannot conjure good arguments from nothing. We also have to have faith (the good kind) in people here and assume that they will be willing to remove bad arguments even if they support the overall idea, and that they will be willing to add to and help grow an opposing argument if they can see the valid points for it.
What are we to do with equally credible citations that say opposing things?
I have lots of design issues noted in the wiki, but it needs a bit of a cleanup, so I will give a brief answer here instead of linking you to that mess! ;) If two ideas are expressed that contradict each other, a community member should link them with a 'contradiction' tag, and they both become 'false'. This draws attention to the issue and promotes further inquiry (another benefit of WL). If it's key to an argument and there are no other experiments, then it shows what we need to fund to get our answers. If future studies result in continued contradiction, we need to go to the next level down and argue about the nature of the experiments and why x is better than y. If there is no disagreement about the methodology but the results still contradict, perhaps the phenomenon is not well enough understood yet, and we are right to keep both claims false to prevent their use in backing other statements.
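A hedged sketch of how that contradiction rule could be represented (hypothetical Python; the class and function names are my own illustration of the description above, not an existing WikiLogic implementation):

    # Hypothetical sketch of the WikiLogic contradiction rule: linking two
    # claims with a 'contradiction' tag marks both as 'false' (disputed)
    # until the disagreement is resolved at a lower level.

    class Claim:
        def __init__(self, text):
            self.text = text
            self.status = True        # 'glowing green' until challenged
            self.contradicts = set()  # claims linked by a contradiction tag

    def tag_contradiction(a, b):
        """Link two claims as contradictory; both lose 'true' status."""
        a.contradicts.add(b)
        b.contradicts.add(a)
        a.status = b.status = False   # draws attention and promotes inquiry

    study_x = Claim("Experiment X found effect E.")
    study_y = Claim("Experiment Y found no effect E.")
    tag_contradiction(study_x, study_y)
    print(study_x.status, study_y.status)  # False False: neither claim can
                                           # back other statements for now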
'Every argument ever made' is a huge goal.
Perhaps I'm exaggerating slightly... but only slightly! I think a connected knowledge base is important, and I dream of a future where coming up with a new idea and adding it to the human knowledge pool is as natural as breathing. But as there are probably an infinite number of arguments to be made and mankind is so very finite, I have recognized my design must handle the inevitable gaps. It's easy to see how, if WL becomes popular and then is made mandatory for transparent democracies, fair legal systems, and reputable academies, among many other areas, it will be easy to keep up to date. But the challenge, as you point out, will be in getting it that far!
Are you sure you've got something close to the right kind of format to deal with that? How many such formats have you tried? Why are you thinking of using this one over those?
Not 100% sure what you mean here; can you suggest an example of an alternate format to clarify?
Has this resulted in your beliefs actually changing at any point? Has this actually improved the quality of arguments?
As it does not exist yet, I cannot say, but thinking rationally and trying to map and scrutinize ideas as WL will has changed me massively. When I was first exposed to critical thinking, I struggled to update my 'high level' ideas to reflect massive changes in my basic beliefs. I was also keen to revisit all my past assumptions and re-examine their foundations. Attempting to solve these issues was what first made me conceive of a tool like WL. So WL is the solution I have come up with to all the problems with critical thinking in today's world as I understand them. You mention changing minds a couple of times; although this is of course highly desirable, I want to narrow my scope to making ideas available. I am sure this will result in other perks, but it won't be my focus yet.
Have you tried testing them with totally random people off of the street versus nerds versus academics?
No, good idea! I am still playing with the 'rules', which has been my main procrastination excuse so far, but I will need to do this. I have a GitHub page with a very basic web demo that should be ready soon too.
it needs to be thoroughly tested for effectiveness before it is actually implemented
Absolutely agree, and the first experiment is to see what people with relevant areas of expertise think of the idea, so thank you for participating!
P.S. I want to address some more of your points, but this has taken me a while to write, so I will leave that for a second comment another day.
comment by avwenceslao · 2016-03-23T08:29:39.880Z · LW(p) · GW(p)
Hi LW! My name is Alex, a salesperson by profession. I found Less Wrong through Intentional Insights and have been here for a couple of months now. I'd like to express my interest in becoming more rational.
Replies from: Gleb_Tsipursky↑ comment by Gleb_Tsipursky · 2016-03-24T01:00:27.491Z · LW(p) · GW(p)
Nice to see you on LW, Alex!
I want to add for LW folks that Alex volunteers at Intentional Insights for about 25 hours, and gets paid as a virtual assistant to help manage our social media for about 15 hours. He decided to volunteer so much of his time because of his desire to improve his thinking and grow more rational. He's been improving through InIn content, and so I am encouraging him to engage with LW.
comment by mind_bomber · 2015-08-18T06:00:22.853Z · LW(p) · GW(p)
Hello everyone,
/u/mind_bomber here from https://www.reddit.com/r/Futurology.
I've been a moderator there for over two years now and watched the community grow from several thousand futurists to over 3.5 million subscribers. As a moderator I've had the pleasure of working with Peter Diamandis, David Brin, Kevin Kelly, and others on several AMAs. I also curate the glossary and post videos, documentaries, talks, and keynotes to the site.
I hope to participate in this community, and the Less Wrong community is exactly the type of people I would like to see over at https://www.reddit.com/r/Futurology. So if you have a chance, please stop by and tell me what you think.
Cheers,
/u/mind_bomber
Replies from: gjm, None, Lumifer↑ comment by gjm · 2015-08-18T17:36:16.063Z · LW(p) · GW(p)
Are you sure you have enough copies of that link there? There are only four, and two of your paragraphs don't have one.
(If you're trying for some SEO thing, please note that links from LW comments get rel="nofollow" on them and therefore don't provide extra googlejuice. I wouldn't be at all surprised to find that Google gives less weight to a link when it sees several instances of it in rapid succession, because that's a thing spammers do.)
↑ comment by [deleted] · 2015-08-22T02:42:22.775Z · LW(p) · GW(p)
Offense intended: your subreddit mainly consists of hype trains; please do not advertise it.
Replies from: mind_bomber↑ comment by mind_bomber · 2015-08-22T04:57:56.869Z · LW(p) · GW(p)
This is not an advertisement!
comment by Maxlove · 2017-04-12T14:34:19.834Z · LW(p) · GW(p)
I discovered LessWrong a year or two ago, after reading about it on RationalWiki while searching online for a community of autodidacts (a Quora user recommended this place), and after googling Alfred Korzybski's phrase 'the map is not the territory'. I've been reading LessWrong intensively since my first page and first article. I thought I had developed a unique mindset, but was impressed to see that this community had discovered a lot of the same ideas and so many more. I left high school specifically so I could make sense of everything and figure out what I should do, if there's anything I should do. I now intend to take an online adult high school advanced functions course so I may study computer science in university. Besides computer science, I want to study neuroscience. I want to be a malware analyst and a neuroscientist. (Maybe one day I will work as a sort of sci-fi brain malware exorcist :P) But my actual skills lie in a formal understanding of persuasion, disagreement resolution, rhetoric, dialectics, communication, justification theory, and philosophy of argument. There should be an umbrella term for all this stuff; these fields are all closely related conceptually.
Other notable characteristics:
I also have a deep appreciation for worldbuilding, symbolism, personality psychology (which I have some serious opinions about), and cellular automata.
I made my own Game of Life ruleset using Golly.
I'd tell you my Big 5 and MBTI results, but I hate those models. So instead, I'll just tell you that I'm argumentative but diplomatic and gently critical. I am bursting with self-esteem. I am enthusiastic because there is no excuse for permanently giving up, not because it will be easy to safely achieve perfection. I am always stressing the importance of consequentialism and always defending deontological-seeming choices on consequentialist grounds, just as any consequence-concerned consequentialist-identifier would. I have no idea what Hogwarts house I belong to.
I use all the major cultural hubs of the web, the most popular sites on which one may socialize, and those sites which are especially facilitative of internet friendships.
I have internet friends of multitudinous backgrounds and persuasions, and I would welcome more.
I love going on long walks, in the day and in the night. My longest was ten hours! The empty suburban streets of the night can induce reflection and feelings of liberation like few other things can. I've had this habit since I was just 14.
I am compiling insights on task and resource management, aka insights on beating akrasia.
Ray Kurzweil saved me from suicide. I'm no longer as optimistic about the future as when I started reading his stuff, but I see there is a life-worthy chance for humanity to get better.
I would rather have many friends who are supposed to be my enemies than many enemies who are supposed to be my friends.
The bullet point immediately above this one is also my motto.
comment by rodomonte · 2016-12-02T17:05:00.233Z · LW(p) · GW(p)
Hello. I found this site because Wei Dai uses it, but I find it a copy of reddit, hackernews, etc. Honestly, I just want to make a proposal here, since I stopped believing in human intelligence a long time ago; I only believe in a sort of "social physics" that constantly builds new facts and organisations. Anyway.
I will not give you any single label for myself, sorry. But I'm sure good money could arise from a good social network system, and since the current ones lack any intelligence (.org XD), maybe you could make the case with some changes; I value this at more than 30% probability, and so I'm writing this little text now. I hope that from this group of minds the first non-idiotic money could arise.
comment by [deleted] · 2016-10-05T17:44:14.977Z · LW(p) · GW(p)
Hello everyone,
I'm a PhD student in social psychology focusing my time mainly on applied statistics and quantitative methods for the study of brain and behavior. My research focuses on the way that people's goals influence the way they reason and form judgments, but I've also dabbled a bit in self-regulation/self-control.
Perhaps my attraction to this community is based on the fact that I feel my field is an unfriendly environment for the free exploration of novel or uncommon ideas. Specifically, I suspect that many of the models of human decision-making being put forth by our field overestimate the tendency for biases/heuristics to lead to errors or poor judgments. For example, few (if any) of my colleagues are aware that our stereotypes of other groups tend to be highly accurate, and that this effect is one of the largest in all of social psychology. It appears that, in many cases, our biases tend to improve accuracy and decision-making quality. However, to utter phrases like "stereotype accuracy" around most social psychologists is to invite suspicion about one's underlying motives. I'm here not because I want to talk about stereotype accuracy in particular, but because I'd like to be able to consider such an idea without the threat of damaging my reputation and career.
I also like thinking about AI and how an (accurate) understanding of human reasoning in information-starved contexts could help us design AI responsibly, but that's just whipped cream.
Replies from: Lumifer↑ comment by Lumifer · 2016-10-05T17:57:09.530Z · LW(p) · GW(p)
the fact that I feel that my field is an unfriendly environment for the free exploration of novel or uncommon ideas ... "stereotype accuracy"
Since you are going to spend a lifetime working in this field, you... may have problems.
Replies from: None↑ comment by [deleted] · 2016-10-05T18:12:59.991Z · LW(p) · GW(p)
I'm unlikely to remain in academia after getting the degree. While I was coming to terms with the problems I'd face in academia, I was delighted to learn that there's a non-trivial demand in private industry for people who know how to quantify psychological constructs in a way that produces actionable information.
Replies from: Lumifer↑ comment by Lumifer · 2016-10-05T18:52:08.541Z · LW(p) · GW(p)
Oh, good.
You're a bit late, though; LessWrong is mostly a graveyard now. A lot of people from here moved over to Scott Alexander's blog, which is highly recommended.
Replies from: None↑ comment by [deleted] · 2016-10-05T20:40:55.002Z · LW(p) · GW(p)
I was wondering about that. In what sense is this place a graveyard?
It's too bad really. I love Scott's blog, but I've been looking for something with a format more like LW.
Replies from: Lumifer, ChristianKl↑ comment by Lumifer · 2016-10-05T21:06:03.021Z · LW(p) · GW(p)
In what sense is this place a graveyard?
A quiet place from which most souls have departed, but which references a lot of accomplishments in days past.
I, too, think SSC's comment format is unfortunate, but it's up to Scott to do something about it. In fact, I think he treats it as a feature to avoid comment overload.
↑ comment by ChristianKl · 2016-10-05T20:58:10.286Z · LW(p) · GW(p)
I was wondering about that. In what sense is this place a graveyard?
LW used to get a lot more traffic, but don't let that stop you from contributing. How about writing up a longer post on your thesis about stereotype accuracy?
Replies from: None↑ comment by [deleted] · 2016-10-07T19:24:43.866Z · LW(p) · GW(p)
That specific thesis is mostly just an example. Much of what I would say would be paraphrasing the work of someone else (Lee Jussim, mainly) and explaining its relevance to this community. I could do this if people thought it would be productive, but it's just one of many topics that I think are misunderstood on a large scale.
My more general interest is in the less-known fact that many of our hardwired biases and heuristics (e.g., negativity bias) were designed by natural selection to improve accuracy based on goal-relevant criteria. It also seems that the biases formed in response to the environment (e.g., much of the content comprising a stereotype) track reality to a surprising degree. Imagine a marksman who practices shooting at the same firing range every day, and this range generally has a side-wind of the same direction and intensity. The marksman can manually adjust for this by placing his reticle upwind of the target, but he could also adjust his scope's reticle so that he can aim for the bullseye and account for the wind at the same time. Once the adjustment is made to the scope, he may have a "biased" tool, but his shots are still centered on the bullseye (on average), and the only online calculations needed to account for the wind on a shot-by-shot basis are minute. What if the marksman moves to another range? Well, in time, he will see his shots wildly missing and make the proper adjustments. This is probably not a novel analogy, but the surprising thing to me is that social psychology tends to frame any "reticle adjustment" as a bias against which we must fight, without testing its performance in the contexts under which the adjustment was made. It's not that biases and heuristics don't cause problems; it's that we have a much poorer understanding of when they cause problems than our field claims.
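The marksman point is easy to check with a toy simulation (a minimal sketch; the wind and noise values are made up purely for illustration):

    # Toy simulation of the 'adjusted reticle' analogy: a tool biased to
    # match a systematic feature of the environment is MORE accurate there.
    import random

    random.seed(0)
    WIND = 3.0   # constant sideways push at the home range (made-up units)
    NOISE = 1.0  # random shot-to-shot variation

    def shot(reticle_offset):
        """Horizontal landing point of one shot; 0.0 is the bullseye."""
        return WIND + reticle_offset + random.gauss(0, NOISE)

    def mean_abs_error(reticle_offset, n=10000):
        return sum(abs(shot(reticle_offset)) for _ in range(n)) / n

    print(mean_abs_error(0.0))    # 'unbiased' scope: off by ~3 units on average
    print(mean_abs_error(-WIND))  # scope 'biased' against the wind: ~0.8 units

Move the marksman to a range with different wind (change WIND without changing the offset) and the "biased" scope starts missing badly, which is the second half of the analogy.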
This general idea applies to stereotypes, but also:
- Negativity Bias
- Attribution errors (including the FAE)
- Availability heuristic
- Clustering bias and other illusory correlation-type biases
- Base rate neglect
- Confirmation Bias (this claim might get me in trouble here... haha)
- etc.
↑ comment by ChristianKl · 2016-10-07T19:50:18.152Z · LW(p) · GW(p)
In these spheres, people generally understand that heuristics optimize for something. Frequently people think they optimize for some ancestral environment that's quite unlike the world we are living in at the moment. I think that's a question where a well-written post would be very useful.
This is probably not a novel analogy, but the surprising thing to me is that social psychology tends to frame any "reticle adjustment" as a bias against which we must fight without testing its performance in the contexts under which the adjustment was made.
I would think that many sociologists would say that many people who are racist and look down on Blacks are racist because they don't interact much with Blacks. If the adjustment was made at a time when the person was at an all-White school, the interesting question isn't whether the adjustment performs well within the context of the all-White school, but whether it also performs well for decisions made later outside of that homogeneous environment.
Replies from: None↑ comment by [deleted] · 2016-10-07T23:23:24.841Z · LW(p) · GW(p)
It was poor wording on my part when I wrote "the contexts under which the adjustment was made". The spirit of my point is much better captured by the word "applied" (vs. "made"). That is, it looks like a balanced reading of the stereotype literature shows that people are quite good in their judgments of when to apply a stereotype. My point is therefore a bit more extreme than it might have appeared.
I would think that many sociologists would say that many people who are racist and look down on Blacks are racists because they don't interact much with Blacks.
I agree with this and would add that such perceptions of superiority could be amplified by other members of the community reinforcing those judgments.
If the adjustment was made during a time where the person was at an all-White school, the interesting question isn't whether the adjustment performs well within the context of the all-White school but whether it also performs well at decisions made later outside of that heterogeneous environment.
To get a little deeper into this topic, I should mention that our stereotypes are conditional; therefore, much of the performance of a stereotype depends on applying it in the proper contexts. The studies looking at when people apply stereotypes tend to show that they are used as a last resort, under conditions in which almost no other information about the target is available. We're surprisingly good at knowing when a stereotype is applicable and seem to have little trouble spontaneously eschewing them when other, more diagnostic information is available.
My off-the-cuff hypothesis about students from an all-white school would be that they would show racial preferences when, say, only shown a picture of a black person. However, ask these students to provide judgments after a 5-minute conversation with a black person or after reviewing a resume (i.e., after giving them loads and loads of information) and race effects will become nearly or entirely undetectable. I don't know of any studies looking at this exactly and urge you to take my hypothesis with a grain of salt, but my larger point is this: You might be surprised.
Replies from: ChristianKl, hairyfigment↑ comment by ChristianKl · 2016-10-08T18:25:52.329Z · LW(p) · GW(p)
From memory, without Googling the studies, I remember there are studies testing whether having a "Black name" on a resume changes response rates, and it does.
There are also those studies suggesting that blinding of piano players' gender is required to remove a gender bias.
Do you have another read on the literature?
↑ comment by hairyfigment · 2016-10-08T00:05:08.049Z · LW(p) · GW(p)
So, I'm pretty sure we know that humans have a bias against anyone sufficiently different, and that this evolved before humanity as such. We certainly know that humans will try to rationalize their biases. We also have a great deal of evidence for past failures of scientific racism, which has set my prior for the next such theory very low.
Replies from: None↑ comment by [deleted] · 2016-10-08T01:05:08.840Z · LW(p) · GW(p)
We also have a great deal of evidence for past failures of scientific racism, which has set my prior for the next such theory very low.
I'm not sure what you mean here. How are you defining scientific racism and how is it relevant to what we're talking about?
As a general query to other readers: Is it bad form to just ignore comments like this? I'm apt to think it unwise to try to talk about this topic here if it is just going to invoke Godwin's Law.
Replies from: ChristianKl↑ comment by ChristianKl · 2016-10-08T18:22:55.823Z · LW(p) · GW(p)
As a general query to other readers: Is it bad form to just ignore comments like this? I'm apt to think it unwise to try to talk about this topic here if it is just going to invoke Godwin's Law.
In general, you can ignore comments when you don't think a productive discussion will follow.
LW by its nature has people who argue a wide array of positions, and in a case like this you will get some criticism. Don't let that turn you off LW or take it as a suggestion that your views are unwelcome here.
comment by Prometheus · 2016-08-11T00:41:49.219Z · LW(p) · GW(p)
(Somehow I posted this in the wrong place the first time, so I'm posting it here now.) Hi, I first discovered this site a few years ago but never really participated in it. Looking back, it appears I only commented once or twice, saying something condescending about morality. Recently, I rediscovered the site because I started noticing updates from a Facebook group (no longer) affiliated with it. What's funny is I only realized I had an account when I tried to register under the exact same username. I've started reading the sequences and am interested in participating in the discussions. I've thought intensely about certain topics since I was young, but I didn't really apply a scientific (or rationalist) approach until my junior year of college, when I joined an atheist community at my school. Many times, I see different sides to an issue. This isn't to say I stay on the fence about everything, but I understand most situations are complicated, with at least some conflicting ideals.
comment by alexander_poddiakov · 2015-10-03T07:55:31.027Z · LW(p) · GW(p)
Hello! I am from the Department of Psychology at the Higher School of Economics (Moscow, Russia). I study problem solving, systems thinking, and help and counteraction in social interactions. Both rationality and irrationality are important here.
Web: http://www.hse.ru/en/staff/apoddiakov#sci, SSRN: http://papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=426114
comment by [deleted] · 2015-12-20T13:34:16.015Z · LW(p) · GW(p)
Hello everybody. Stefano Libey writing; Libey is my lovely nom de plume. I always wish to comprehend how reality works, what reality is or exists as, and my peculiar stand in it, and I quite normally use rationality in order to do so. I believe that I, as a piece of the Intellect, cannot do otherwise. To achieve my understanding I am accustomed to handling certain instruments, one of which is etymology. I believe that the etymon can unveil important, and strikingly illuminating, traits of a word, which I consider to be a vector of behaviour and accomplishment. By saying so, I mean that a word, far from being just a sound production and a mere semantic item, is a concrete conduct, an entity both ambiguous and precise: a fact resonating throughout reality.
Amongst the topics I love to enter into are the following. – The notion of democracy, which I etymologically break up so as to find out the truth it conceals. Democracy is an exceedingly interesting notion because it is generally, and uncritically, embraced as an irrefutably beautiful and unrenounceable idea. – The notion of religion, which I divest of its standardized meaning, so that it may be possible for me to see whether religion hides dangerously somewhere other than the quarters it is usually associated with. – Aesthetics, in the philosophical sense of a focus on appearance (Germ. 'der Schein', 'die Erscheinung'), the latter being reckoned as a distinction from reality. I do not subscribe to this dichotomy, and in order to illustrate my position I am developing a particular philosophical framework which attempts to reconcile concepts that are normally viewed as opposites.
Many other themes and meditation grounds mesmerize me. Any concept whatsoever, once it has triggered some curiosity, can take its own relevant place in rationality by means of being deemed as partaking in a causalistic concatenation.
I would love to use the LessWrong platform to set forth my perspective on ratio and the seminal thoughts of my theory of the Intellect. Reflecting upon democracy is my way to start scratching the surface of LessWrong down to the core.