Comments

Comment by jotto999 on Is Clickbait Destroying Our General Intelligence? · 2018-11-23T21:55:09.551Z · score: 4 (3 votes) · LW · GW

I'm concerned about the way you're operationalizing this (as in, insufficiently). Could you make this into a concrete prediction about the world? What sort of question about this could we post on Metaculus that wouldn't resolve ambiguously?

Comment by jotto999 on Expressive Vocabulary · 2018-07-10T22:41:51.804Z · score: -9 (5 votes) · LW · GW

A major (attempted and ongoing) transition in my conversational approach is to try and focus on the prediction at hand, as explicitly as possible, as often as possible. Especially when there is a disagreement. It might help avoid this kind of pedantic behavior (if I can remind myself to stick to the predictions at hand).

Speaking of being pedantic, I don't like some of the writing style you used here. I know, an old complaint about this site; here it is anyway:

Use your own words, ask your own questions, but don't enforce an inadequate prescriptivism with feigned incomprehension while your interlocutor only wants you to pass the peas.

I was fine with most of this sentence, but "interlocutor" was the straw that broke the camel's back for me. Why not say "the other person" or something? It would have given the same information, with a lower barrier to entry and less mental energy required. You want as many potential readers as possible to get it, right? I predict it would've gone over better worded another way. Try to write like Paul Graham.

Comment by jotto999 on Frequently Asked Questions for Central Banks Undershooting Their Inflation Target · 2017-11-26T16:09:25.770Z · score: 7 (2 votes) · LW · GW

I'm not an economist, but isn't it unambiguously better to send everyone small and adjustable amounts of money than to buy a bunch of bonds or other financial assets? My reasoning is as follows:

  1. Inequality is increasing, which increases violence and destabilizes the social order. It might even have other negative effects, such as hampering economic growth. The current method of QE exacerbates inequality, since it mostly helps people who already own assets. A direct payment to everyone would instead reduce inequality. FYI, I don't think equality is a universal virtue or anything, but why increase inequality unnecessarily when it seems to make societal outcomes worse?
  2. I suspect you could accomplish more inflation per unit of money created by sending it directly to people to be spent. With the current method there is hardly any velocity, and the created units mostly just sit around pooling up in the financial sector. Again, there's nothing universally preferable to me about getting that inflation with fewer rather than more dollars, but I guess it might save a bit of...something, not sure, I'm just giving layperson babble right now. What's astonishing is how little inflation has happened after the astronomical amounts of dollars added to the "circulation"; why not give it to people who are likely to actually do stuff with it, instead of having it mostly just sit in investment portfolios?
  3. The whole "jobs" and work ethic thing has been historically very important for the functioning of society, and you would just starve if you didn't work dawn to dusk in 1850. But who thinks 40 hours a week is still a preferable use of people's time? Flipping burgers or writing TPS reports 40 hours a week is pretty frigging restrictive in what you can invent or create, and it gets less necessary every decade from here on out. I keep hearing the claim that we can't afford a basic income, and here we are funding various questionable programs and wars, giving questionable subsidies to various firms, and printing colossal amounts of money. We watch it pool up without causing much inflation, and wonder what could possibly be done to stimulate the economy. I don't see the dilemma. Give it to people.

Comment by jotto999 on Dark Side Epistemology · 2015-11-20T15:32:39.673Z · score: 0 (0 votes) · LW · GW

What kind of thing do you mean by "occasionally a little misguided"? Are you referring to something bad about it because humans (and all our mental frailties) were using it, or to something bad that would happen no matter what kind of creature tried to use it, even ones with ways around human-like mental frailties?

(I see this comment is from 7 years ago, and I will understand completely if no response comes.)

Comment by jotto999 on What Bayesianism taught me · 2013-08-16T02:17:50.279Z · score: 3 (3 votes) · LW · GW

How about religion? There are many religions and they can't all be right (many claim to be the only true one), yet people tend to just believe whichever one they happen to have been raised in. They are believing in these massive cosmic arrangements and belief structures...by accident of where they were raised. And I always have to tell them this.

I expected my assertion that the median citizen needs to be told these things to be obviously true; that's why I didn't even bother giving evidence. Why is this necessary?

Comment by jotto999 on What Bayesianism taught me · 2013-08-15T23:54:08.854Z · score: 1 (3 votes) · LW · GW

Bayesianism boils down to “don’t be so sure of your beliefs; be less sure when you see contradictory evidence.” Now that is just common sense.

...Common? Maybe in successful circles of academia.

Why does anyone need to be told this?

What a bizarre question. I find it difficult to believe that this person has any experience with the average/median citizen.

Comment by jotto999 on Eliezer Yudkowsky Facts · 2013-08-12T13:19:51.756Z · score: 9 (9 votes) · LW · GW

I don't always think clearly, but when I do, I prefer Yudkowsky.

Stay friendly, AIs.

Comment by jotto999 on A digitized belief network? · 2013-07-15T18:03:44.882Z · score: 0 (0 votes) · LW · GW

I'm not sure where to ask this, so I'll just toss it here.

Do you know this site? http://www.music-map.com/ I'm interested in something that would work similarly, but for a different purpose.

Instead of visitors putting in a few favorite bands, you make an account and select degrees of agreement with various pre-made contentious issues. The account is so you can update your views and change the data points you contributed. So for example, there would be one for "Humans evolved by natural selection", and there would be a selection of confidence levels you could pick to represent your agreement or disagreement.

You then get a bunch of people to do this, and use algorithms similar to that music site's, so that you end up with a kind of crowd belief map with the different statements of belief clustering based on statistical closeness. So the selection for "Humans evolved by natural selection: Strongly Agree" would be on the map somewhere, probably nearer a democrat-ish cluster, and probably farther from an "intelligent design"-ish cluster of agreement statements.

So you'd end up with things like a conspiracy theory-ish cluster, which would probably have "UFOs have been on Earth: Agree" somewhere near or inside it. I would find it fascinating to look at this sort of visual representation for where these statements of belief would appear on a belief landscape, especially after thousands of people have participated and with lots of different issues to weigh in on.

If the sample size was big enough, you might even use it as a rough first-draft confidence of a particular statement you haven't researched yet. Sometimes I just wish I could short-sell a conspiracy-theory belief cluster index fund, or an ID one. And I might get a heads-up on things to look into, say for example the belief statements that ended up nearer to "Many worlds interpretation: Agree".
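
To make the idea a bit more concrete, here's a minimal sketch of the kind of clustering I have in mind. The statement labels and agreement scores below are invented toy data (coded -2 for strong disagreement up to +2 for strong agreement), and the projection is just a rough PCA-style embedding, not whatever algorithm music-map.com actually uses:

```python
import numpy as np

statements = [
    "Humans evolved by natural selection",
    "UFOs have been on Earth",
    "Many worlds interpretation is correct",
]

# Rows = users, columns = statements; toy agreement scores from -2 (strongly
# disagree) to +2 (strongly agree). A real map would need thousands of users.
agreement = np.array([
    [ 2, -2,  1],
    [ 2, -1,  2],
    [-2,  2, -1],
    [-1,  2, -2],
    [ 2, -2,  0],
], dtype=float)

# Statements endorsed by the same people end up highly correlated.
corr = np.corrcoef(agreement, rowvar=False)

# Turn correlation into a distance and project to two dimensions with a crude
# PCA-style embedding, so that correlated statements land near each other.
dist = 1.0 - corr
centered = dist - dist.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
coords = centered @ vt[:2].T

for label, (x, y) in zip(statements, coords):
    print(f"{label}: ({x:+.2f}, {y:+.2f})")
```

With thousands of real users instead of five toy rows, statements that tend to be endorsed by the same people would land near each other on the map, which is the clustering effect I'm describing above.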

Comment by jotto999 on Have no heroes, and no villains · 2013-07-07T15:10:35.763Z · score: 2 (2 votes) · LW · GW

Ah! Well I had no cluifiability until you posted, thanks.

Comment by jotto999 on Have no heroes, and no villains · 2013-07-07T12:48:27.904Z · score: 1 (1 votes) · LW · GW

What does cluifiability mean? It is neither in the dictionary, nor recognized by Google.

Comment by jotto999 on The Best Textbooks on Every Subject · 2013-06-20T15:20:41.841Z · score: 0 (0 votes) · LW · GW

Okay, I'm going to take your word for it! So I just got The Great Conversation, Sixth Edition in the mail and it looks very good. But if I want to know more about Gottlob Frege or the philosophy of language or analysis, and I'm a layperson who needs something accessible, where should I go for that? Should I just get Meaning and Argument?

Comment by jotto999 on How many of you doing Khan Academy? · 2013-04-06T23:42:51.484Z · score: 1 (1 votes) · LW · GW

I failed math in grade 9. So far I'm at 202/414 tasks. Currently chewing on "linear equations", tastes like redemption.

My progress has been VERY slow. Once in a while I hit a task that I ace in the first stack, but mostly it's a grind. Like, I've been at "almost halfway" for months because they keep adding new units fast enough to keep pace with me. I'll have way more time for it when I'm no longer studying forex.

When CERN was talking about their 5-sigma result, I had recently mastered the "inferential statistics" bunch, and being able to know what '5-sigma' meant was a huge confidence boost. It makes my life feel less shameful and more like just another casualty of environmental factors.
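
For anyone curious, here's roughly what "5-sigma" corresponds to, assuming the usual one-sided Gaussian tail convention (my layperson's sketch, not CERN's exact procedure):

```python
# A quick sketch of what a "5-sigma" result corresponds to under the usual
# one-sided Gaussian tail convention.
from scipy.stats import norm

p = norm.sf(5)  # probability of a chance fluctuation at least 5 standard deviations out
print(f"one-sided tail probability at 5 sigma: {p:.1e}")  # roughly 2.9e-07
```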

Here's my profile, https://www.khanacademy.org/profile/Jotto999/

I am aware that doing KA is not nearly as good as having a teacher, but using up a slot at a university for me would be a stupid business decision, so I'll keep plugging away at it. Also, math is irrelevant to my day-to-day life and as long as I can master all of KA before the age of 30, I'll be more than satisfied.

Comment by jotto999 on Rationality Quotes March 2013 · 2013-03-15T21:23:14.164Z · score: 0 (0 votes) · LW · GW

I find this to be like saying to someone with cancer "Don't bother with treatment, you aren't dead yet". A bucket list is for plans and actions, not attributes inherent to existing in the first place.

Other commenters have said that it is more about things you may not have done without having them on the bucket list as a reminder or incentive. In that case, we can reasonably expect Gates meant putting effort into avoiding death, not "I was immortal, but now feel like trying to win the Hardcore Mode Bucket List challenge."

Comment by jotto999 on Rationality Quotes January 2013 · 2013-01-06T23:12:49.685Z · score: 6 (8 votes) · LW · GW

I don't know the circumstances, but I would have tried to make eye contact and just blatantly stare at them for minutes straight, maybe even hamming it up with a look of slightly unhinged interest. They would have become more uncomfortable and might have started worrying that a stranger was eavesdropping on them, making them want to be more discreet, depending on their disposition. I've actually tried this before, and it seems to sometimes work if they can see you staring at them. Give a subtle, slight grin, like you might be sexually turned on. If you won't see them again, it's worth a try.

Comment by jotto999 on Leave a Line of Retreat · 2012-11-27T11:33:06.200Z · score: 3 (3 votes) · LW · GW

I'm not sure what "no rationality" would mean. Evolutionarily relevant kinds of rationality can still be expected, like a preference for fertile mates and a fear of spiders, snakes, and heights; and if we're still talking about something at all similar to Homo sapiens, language and cultural learning and such, which require some amount of rationality to use.

I wonder if you might be imagining rationality in essentialist terms, as an attribute you could universally switch off, but in reality there is no such off switch that is compatible with having decision-making agents.

Comment by jotto999 on New Singularity.org · 2012-06-23T12:38:05.367Z · score: 0 (0 votes) · LW · GW

I see. I rated it highly to try and counter it. Perhaps if a few other LWers did this it would shift the rating sufficiently. And as for me, I will not assume reliability in user-rated systems.

Comment by jotto999 on New Singularity.org · 2012-06-22T19:47:02.544Z · score: 2 (2 votes) · LW · GW

Web of Trust, a browser extension built around community ratings of website security and trustworthiness, is warning me that singularity.org has untrustworthy attributes. I don't find it particularly likely that singularity.org is trying something malicious, but whatever the circumstances have been, I would like to know why this has occurred, or at least to point it out. It could be a false positive on WoT's part, or something else (I know almost nothing about web security).

If it is simply a case of WoT not being thorough enough in how it weighs ratings and avoids false positives, then perhaps someone could recommend something else? I like the idea of WoT and would like to either help improve it or find a better service. EDIT: It has more ratings now and WoT no longer warns about it.

Comment by jotto999 on Avoid inflationary use of terms · 2012-06-02T17:26:32.555Z · score: 1 (1 votes) · LW · GW

Perhaps it was a loss due to not paying attention, but I think a rapid correction in response to criticism is worth some amount of karma, even on something trivial. I've misunderstood loads of things and felt stupid every single time, and I do believe it helps to see someone else publicly brush the dirt off here and there. For me, one of the hallmarks of lousy dialogue is that 100% of participants seem to either believe they have a 0% error rate, or be unable to brush it off publicly for emotional reasons.

Beware my opinion, though: it is possible that watching you do something against my views and then retract and agree with them simply makes me feel good. I am sorry to say it is difficult for me to tell whether that is playing a significant role or whether I am some sort of dialogue connoisseur. Perhaps more likely it is some finicky combination. Or, even more insidiously, I could be pointing this out to try and seem more aware than I am. Oh, how bias is like a six-headed, head-regenerating dragon.

Comment by jotto999 on Conservation of Expected Evidence · 2012-03-05T03:51:02.963Z · score: 1 (1 votes) · LW · GW

I agree - it can be especially ambiguous if you're also used to the economics context of normative, meaning "how subjectively desirable something is".

Comment by jotto999 on Tell Your Rationalist Origin Story · 2012-02-29T02:50:22.098Z · score: 3 (3 votes) · LW · GW

Hello, Less Wrong.

With no particular or unusual intellect (nothing I could objectively point to aside from an IQ test in elementary school, which scored somewhere around 115-125), and with low school grades, I found myself as a teenager who took issue with religion. I suppose my journey in becoming rational started when I decided I was an atheist. I was finding various flaws in religion, and enjoying material put out by Richard Dawkins and Christopher Hitchens. I consider that the starting point because it was when I realized that humans are inherently terrible at understanding reality, and that merely not succumbing to wrong beliefs is something the vast majority of people fail at, let alone actually understanding reality to even the vague degree our brains can comprehend. I would describe this point as "when I started thinking", or at least trying to.

My interest in being studious grew over time. The next milestone related to politics. I was a very typical bleeding-heart liberal throughout my teenage years, holding such simplistic convictions as "corporations are bad!" and "pictures of oil-soaked penguins mean we should hold back industry" and "we might as well socialize most industries!". Eventually I began studying economics, which moved me from liberal to libertarian. I had so many irrational beliefs about policy and society that it's a bit shameful to think back on them. I now frequently speak against Keynesianism, and am finally beginning to understand the subtle but huge negatives of government intervention.

But I'm not sure my journey as a rationalist was even in an uptrend. I was just absorbing material other people put out, and wasn't really able to make good decisions for myself. I was just cynical and suspicious of commonly held views.

I flipped through Less Wrong, came across Eliezer's article "Cynical about cynicism", and then I realized I was...full of it. I thought I was being rational, but now I realize I was being childish and angsty. In fact, I wonder if that should be part of the sequences; I know many people who would benefit from it, many of them environmentalists or atheists (or both). It was the article that made me realize I have so, so much work to do before I can consider myself rational.

Which brings me here, now. I am working my way through the sequences, occasionally re-reading earlier ones to try and learn them as well as I can. I am highly fortunate to be here: I can escape my irrational past and hopefully have something similar to Yudkowsky's Bayesian enlightenment. I feel as if in many ways I am starting over, and...it feels very, very good.

Comment by jotto999 on Quantified Health Prize results announced · 2012-02-24T03:13:24.149Z · score: 0 (0 votes) · LW · GW

These documents are interesting, particularly #1 and #2. My opinion has not changed on whether one should supplement (except in the case of a specific deficiency caused by some sort of malabsorption). My expectation is that someone eating a diet that includes a variety of vegetables, especially dark leafy greens, and also seafood that is low in mercury, will be healthier than someone who eats none of those but supplements, ceteris paribus, even if the supplement doses were optimal. I'm not convinced doing both is typically necessary.

I believe supplements are still a long way off from being a reliable way to improve one's diet, and to become so I expect they will require more sophisticated measures like genetic and blood tests. You can get tested for deficiencies right now, and you should, to get them corrected; but I would lean towards eating more of a food or foods rich in that nutrient, except in cases where someone has a compromised ability to absorb it and cannot get enough from food.

As has been said before, there are a boatload of factors that we know exist and affect this subject, but that we aren't yet able to account for. Eating the foods I described (and some others) does appear reliable.

Comment by jotto999 on Bayesian Judo · 2012-02-14T21:16:22.448Z · score: 0 (0 votes) · LW · GW

Hmm! I found that actually quite helpful. The therapist didn't even voice any apparent disagreement; he coaxed the man into making his reasoning explicit. That would greatly reduce the proportion of the conversation spent in an adversarial state. I noticed that it also put the emphasis of the discussion on the epistemology of the subject, which seems like the best way for someone to learn why they are wrong, as opposed to a more example-specific "You're wrong because X".

Thank you for that link. Would it be useful for me to watch other videos of a therapist disagreeing with a delusional patient? It seems like the ideal kind of behaviour to try and emulate. This is going to take me lots of practice, but I'm eager to get it.

Thank you for your help and advice!

Comment by jotto999 on Bayesian Judo · 2012-02-13T17:29:25.626Z · score: 0 (0 votes) · LW · GW

You're right, I see now that the effect on audiences does not relate much to the one-on-one, so I should have kept a clear distinction. Thank you for pointing this out.

I believe this obvious mistake shows that I shouldn't comment on the sequences as I work my way through them; rather, it is better if I only start commenting after I have become familiar with them all. I am not yet ready to make comments that are relevant and coherent, and the very last thing I want to do is pollute the comment section. I am so glad about the opportunity for growth this site offers; thanks very much to all.

Comment by jotto999 on Bayesian Judo · 2012-02-13T17:04:09.786Z · score: 1 (1 votes) · LW · GW

Interesting. Do we have any good information on the attributes of discussions or debates that are most likely to educate the other person when they disagree? In hindsight this has been a large shortcoming of mine: I've debated for years now but never invested much in trying to optimize my approach with people.

Something I've noticed: when someone takes the "conquer the debate" adversarial approach, a typical-minded audience appears more likely to be interested and to side with the "winner" than if the person takes a much more reserved and cooperative approach with equally well-supported arguments. Maybe the first works well for typical audiences and the second for above-typical ones? Or maybe it doesn't matter, if we can foster the second in "typical" minds. Given my uncertainty, it seems highly unlikely that my approach with people is optimal.

Do you have any tips for someone interested in making a mental habit out of cooperative discussion as opposed to being adversarial? I find it very difficult; I'm an aggressive and vigorous person. Maybe if I could see a video of someone using the better approach, I could try to emulate them.

Comment by jotto999 on Help! Name suggestions needed for Rationality-Inst! · 2012-02-12T23:19:38.923Z · score: 0 (0 votes) · LW · GW

I like how Center for Applied Rationality sounds, though it might be too long. Or maybe that isn't a problem, and suddenly the number of times I type the word CAR would increase.

How about Colligate Institute? Though maybe Colligate is too obscure a word (Google Chrome's spell-checker has it underlined in red).

Comment by jotto999 on Bayesian Judo · 2012-02-12T21:44:56.683Z · score: 2 (6 votes) · LW · GW

Before I say anything I would like to mention that this is my first post on LW, and being only part way through the sequences I am hesitant to comment yet, but I am curious about your type of position.

What I find peculiar about your position is that Yudkowsky did not, as he presents it here, start the argument. The other person did, asserting that "only God can make a soul", implying that Yudkowsky's profession is impossible or nonsensical. Vocalizing any assertion, in my opinion, should be viewed as a two-way street, inviting potential criticism. In this particular example the assertion was on a subject that the man knew would be of far greater interest to Yudkowsky than, say, whether or not the punch being served had mango juice in it.

I'd like to know what you think Yudkowsky should have done in the situation. Do you expect him not to give his own opinion, given the other person's challenge? Or was it instead something in particular about the way he did it? Isn't arguing inevitable, with all we can do being to try to build better dialogue quality? (That has been my conclusion for the last few years.) Either way, I don't see the hubris you seem to. My usual complaint about discussions is that they are not well informed enough, and that people tend to say things that are too vague to be useful, or outright unsupported. I rarely see a discussion and think "Well, the root problem here is that they are too arrogant", so I'd like to know what your reasoning is.

It may be relevant that in real life I am known by some as "aggressive" and "argumentative". You probably could have inferred that from my position, but I'd like to keep everything about it as transparent as possible.

Thank you for your time.

Comment by jotto999 on Dying Outside · 2010-04-25T22:08:31.561Z · score: 1 (1 votes) · LW · GW

This is very inspiring for me! It makes me appreciate having such a mobile and agile body.

Have you seen Aubrey de Grey's TED talk? Or looked up organ printing, or other life-extension technologies speculated to be available within ten or twenty years?

I'm not entirely sure how they could be applied to ALS patients, but they certainly would offer a chance of not just living longer, but maybe someday regaining some function.

By choosing death, you will be forfeiting any chance of being helped by these potential new technologies. By choosing life, if you can just live long enough, you might see the days of indefinite lifespan.

Either way though, your story is very uplifting, and I hope you do live long enough to see indefinite lifespan. I hope everyone does. :)