Posts

Sunscreen. When? Why? Why not? 2018-12-27T22:04:42.597Z

Comments

Comment by Viktor Riabtsev (viktor-riabtsev-1) on What kind of place is this? · 2023-02-26T11:31:36.176Z · LW · GW

Yeah, if you use religious or faith-based terminology, it might trigger negative signals (downvotes). Though whether that is because the information you meant to convey was being disagreed with, or because the statements themselves are actually more ambiguous overall, would be harder to distinguish.

Some kinds of careful reasoning processes vibe with the community, and imo yours is that kind: questioning each step separately on its merits, being sufficiently skeptical of premises leading to conclusions.

Anyway, back to the subject of f and inferring its features. We are definitely having trouble drawing f out of the human brain in a systematic, falsifiable way.

Whether or not it is physically possible to infer it, or its features, or how it is constructed (i.e. whether it is possible at all) seems a little uninteresting to me. Humans are perfectly capable of pulling made-up functions out of their ass. I kind of feel like all the gold will go to the first group of people who come up with processes for constructing f in coherent, predictable ways, such that different initial conditions, when iterated over the process, produce predictably similar f.

We might then try to observe such a process throughout people's lifetimes, and sort of guess that a version of the same process is going on in the human brain. But nothing about how that will develop is readily apparent to me. This is just my own imagination producing what seems like a plausible way forward.

Comment by Viktor Riabtsev (viktor-riabtsev-1) on What kind of place is this? · 2023-02-25T13:51:07.708Z · LW · GW

Somehow, he has to populate the objective function whose maximum is what he will rationally try to do. How he ends up assigning those intrinsic values relies on methods of argument that are neither deductive nor observational.

In your opinion, does this relate in any way to the "lack of free will" arguments, like those advanced by Sam Harris? The whole: I can ask you what your favourite movie is, and you will think of one. You will even try to justify your choice if asked about it, but ultimately you had no control over which movies popped into your head.

Comment by Viktor Riabtsev (viktor-riabtsev-1) on Going Meta with Rationality · 2023-02-23T13:06:53.627Z · LW · GW

I feel like there are local optima, and that getting to a different stable equilibrium involves having to "get worse" for a period of time; to question existing paradigms and assumptions. I.e. performing the update feels terrible, in that you get periodic glimpses of "oh, my current methodology is clearly inadequate", which feels understandably crushing.

The "bad mental health/instability" is an interim step where you are trying to integrate your previous emotive models of certain situations, with newer models that appeal to you intelligently (i.e. feels like they ought to be the correct models). There is conflict when you try to integrate those, which is often meta discouraging.

If you're curious about what could possibly be happening in the brain when that process occurs, I would recommend Mental Mountains by Scott A., or even better the whole Multiagent Models of Mind sequence.

Comment by Viktor Riabtsev (viktor-riabtsev-1) on Going Meta with Rationality · 2023-02-22T12:56:09.874Z · LW · GW

No, that's fair.

I was mostly having trouble digesting that 3-4-5 stage paradigm, afraid that it's not a very practically useful map; i.e. that it doesn't actually help you instrumentally navigate anywhere. But I realized halfway through composing that argument that it's very possible I'm just wrong. So I decided to ask for an example of someone using this framework to actually successfully orient somewhere.

Comment by Viktor Riabtsev (viktor-riabtsev-1) on Going Meta with Rationality · 2023-02-21T11:10:55.038Z · LW · GW

So the premise is that there are goals you can aim for. Could you give an example of a goal you are currently aiming for?

Comment by Viktor Riabtsev (viktor-riabtsev-1) on Going Meta with Rationality · 2023-02-19T12:54:15.963Z · LW · GW

Would it be okay to start some discussion about the David Chapman reading in the comments here?

Here are some thoughts I had while reading.

When Einstein produced general relativity, the success criterion was "it produces Newton's laws of gravity as a special-case approximation". I.e. it had to reproduce the same models as had already been verified as accurate to a certain level of precision.
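
As a sketch of what that reduction looks like (the standard textbook statement, not something from the reading): in the weak-field, slow-motion limit the metric component becomes $g_{00} \approx -(1 + 2\Phi/c^2)$, and the geodesic equation together with the field equations collapses to Newtonian gravity:

$$\frac{d^2 \vec{x}}{dt^2} \approx -\nabla \Phi, \qquad \nabla^2 \Phi = 4 \pi G \rho$$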

If more rationality knowledge produces depression and otherwise less stable equilibria within you, then that's not a problem with rationality. Quoting from a LessWrong post: We need the word 'rational' in order to talk about cognitive algorithms or mental processes with the property "systematically increases map-territory correspondence" (epistemic rationality) or "systematically finds a better path to goals" (instrumental rationality).

A happy, stable, productive you (or the previous stable version of you) is a necessary condition of using "more rationality". If it comes out otherwise, then it's not rationality; it's some other confused phenomenon, like a crisis of self-consistency. Which, if it happens and feels understandably painful, should eventually produce a better you at the end. If it doesn't, then it actually wasn't worth starting on the entire adventure, or stressing much about it.

Just to make sure I am not miscommunicating: "a little rationality can actually be worse for you" is totally a real phenomenon. I wouldn't deny it.

Comment by Viktor Riabtsev (viktor-riabtsev-1) on Reflecting on the 2022 Guild of the Rose Workshops · 2022-12-17T14:01:30.779Z · LW · GW

I found the character sheet system to be very helpful. In short, it's just a ranked list of "features"/goals you're working towards, with a comment slot (it's just a Google Sheet).

I could list personal improvements I was able to gain from the regular use of this tool, like weight loss/exercise habits etc., but that feels too much like bragging. Also, I can't distinguish correlation from causation.

The cohort system provides a cool social way to keep yourself accountable to yourself.

Comment by Viktor Riabtsev (viktor-riabtsev-1) on Lost Purposes · 2021-02-02T12:57:41.396Z · LW · GW

Dead link for "Why Most Published Research Findings Are False". Googling just the url parameters yields this.

Comment by Viktor Riabtsev (viktor-riabtsev-1) on Artificial Addition · 2021-01-20T12:51:08.803Z · LW · GW

Did anyone else get so profoundly confused that they googled "Artificial Addition"? Only when I was halfway through the bullet-point list did it click that the whole post is a metaphor for common beliefs about AI. And that was on my second read; the first time, I gave up before that point.

Comment by Viktor Riabtsev (viktor-riabtsev-1) on Evaluability (And Cheap Holiday Shopping) · 2019-02-17T17:49:02.970Z · LW · GW

I shall not make the mistake again!

You probably will. I think these biases don't disappear even when you're aware of them; they're a generic human feature. I think self-critical awareness will always slip at the crucial moment; it's important to remember this and acknowledge it. Big things vs. small things, as it were.

Comment by Viktor Riabtsev (viktor-riabtsev-1) on Crisis of Faith · 2019-02-12T14:48:33.465Z · LW · GW

On my more pessimistic days I wonder if the camel has two humps.)

Link is dead. Is this the new link?

Comment by Viktor Riabtsev (viktor-riabtsev-1) on Sam Harris and the Is–Ought Gap · 2019-02-11T14:27:35.083Z · LW · GW

It seems less and less like a Prisoner's Dilemma the more I think about it. Chances are, "oops", I messed up.

I still feel like the thing with famous names like Sam Harris is that there is a "drag" force on his penetration into the culture nowadays, because there is a bunch of history that has been (incorrectly) publicized. His name is associated with controversy, despite his best efforts to avoid it.

I feel like you need to overcome a "barrier to entry" when listening to him. Unlike Eliezer, whose public image (in my limited opinion) is actually new-user friendly.

Somehow all this is meant to tie back to Prisoner's Dilemmas. And in my head, it for some reason does. Perhaps I ought to prune that connection. Let me try my best to fully explain that link:

It's a multi-stage "chess game" in which you engage with the ideas that you hear from someone like Sam Harris; but there is doubt, because there is a (mis)conception of him saying "Muslims are bad" (a trivialization of the argument). What makes me think of a Prisoner's Dilemma is this: you have to play a "cooperate" or "don't cooperate" game with the message, based on nothing more or less than the reputation of the source.

Sam doesn't necessarily broadcast his basic values regularly, that I can see. He's a thoughtful, quite rational person; but I feel like he forgets that his image needs work. He needs to do kumbaya, as it were, once in a while. To reaffirm his basic beliefs in life and its preciousness. (And I bet if I looked, I'd find some, but it rarely percolates up in the feed.)

Anyway. Chances are I am wrong to use the concept of the Prisoner's Dilemma here. Sorry.

Comment by Viktor Riabtsev (viktor-riabtsev-1) on About Less Wrong · 2019-02-08T15:56:58.365Z · LW · GW

Ah, makes sense.

Comment by Viktor Riabtsev (viktor-riabtsev-1) on Sam Harris and the Is–Ought Gap · 2019-02-07T00:59:43.201Z · LW · GW

I could be off base here. But a lot of classical cooperate-vs-don't-cooperate stories involve two parties who hate each other's ideologies.

Could you then not say: "They have to first agree and/or fight a Prisoner's Dilemma on an ideological field"?

Comment by Viktor Riabtsev (viktor-riabtsev-1) on The 3 Books Technique for Learning a New Skill · 2019-02-01T20:07:23.589Z · LW · GW

Tom M. Apostol, Calculus I && II (haven't fully read II). (Sorry, don't got 3, I guess.)

Comment by Viktor Riabtsev (viktor-riabtsev-1) on Sam Harris and the Is–Ought Gap · 2019-02-01T20:05:31.711Z · LW · GW

So ... a prisoner's dilemma but on a meta level? Which then results in primary consensus.

Comment by Viktor Riabtsev (viktor-riabtsev-1) on Sunscreen. When? Why? Why not? · 2019-01-07T12:26:43.685Z · LW · GW

Yep. Just have to get into the habit of it.

Comment by Viktor Riabtsev (viktor-riabtsev-1) on About Less Wrong · 2019-01-04T18:57:38.473Z · LW · GW

Less Wrong consists of three areas: The main community blog, the Less Wrong wiki and the Less Wrong discussion area.

Maybe redirect the lesswrong.com/r/discussion/ link & description to the "Ask a Question" beta?

Comment by Viktor Riabtsev (viktor-riabtsev-1) on Sunscreen. When? Why? Why not? · 2018-12-29T02:15:29.548Z · LW · GW

That was a great read.

figure out what was going on rather than desperately trying to multiply and divide all the numbers in the problem by one another.

That one hits home. I've been doing a bit of math lately, nothing too hard, just some derivatives/limits, and I've found myself spending inordinate amounts of time taking derivatives and doing random algebra. Just generally flailing around hoping to hit the right strategy, instead of pausing to think first: "How should this imply that?" or "What does this suggest?" before doing rote algebra.

Comment by Viktor Riabtsev (viktor-riabtsev-1) on Sunscreen. When? Why? Why not? · 2018-12-28T12:14:23.246Z · LW · GW

UV meters! Thank you! Seems such an obvious idea in hindsight.

Why wonder blindly when you can quantify it? I'll look into getting one.

Comment by Viktor Riabtsev (viktor-riabtsev-1) on Resist the Happy Death Spiral · 2018-12-27T23:01:59.760Z · LW · GW

Dead link to "scientists shouldn't even try to take ethical responsibility for their work" link is now here

Comment by Viktor Riabtsev (viktor-riabtsev-1) on Sunscreen. When? Why? Why not? · 2018-12-27T22:14:29.802Z · LW · GW

I did that a couple of minutes ago. Then I tried to fix the formatting, and I think I subsequently undid your formatting fixes.

Comment by Viktor Riabtsev (viktor-riabtsev-1) on Human Evil and Muddled Thinking · 2018-10-30T19:18:48.747Z · LW · GW

Related:

“Sometimes a hypocrite is nothing more than a man in the process of changing.” ― Brandon Sanderson, Oathbringer (spoken by Dalinar Kholin)

Comment by Viktor Riabtsev (viktor-riabtsev-1) on 0 And 1 Are Not Probabilities · 2018-10-24T13:52:27.335Z · LW · GW

Umm, it's a real thing: ECC memory (https://en.m.wikipedia.org/wiki/ECC_memory). I'm sure it isn't 100% foolproof (coincidentally, the point of this article), but I imagine it reduces the error probability by orders of magnitude.
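
A back-of-the-envelope sketch of why (my own toy model, with an invented per-bit flip probability, not real DRAM failure data): typical SECDED ECC corrects any single flipped bit in a 72-bit word, so the word is only silently at risk when two or more bits flip at once.

```python
from math import comb

def word_error_rate(p, n=72, correctable=1):
    """Probability that more than `correctable` of the n bits flip."""
    return 1 - sum(comb(n, k) * p**k * (1 - p)**(n - k)
                   for k in range(correctable + 1))

p = 1e-9  # assumed per-bit flip probability, purely illustrative
print(word_error_rate(p, correctable=0))  # no ECC:  ~7.2e-08
print(word_error_rate(p, correctable=1))  # SECDED:  ~2.6e-15
```

Seven orders of magnitude under these made-up numbers, yet still not exactly 0, which is the article's point about probabilities of 0 and 1.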

Comment by Viktor Riabtsev (viktor-riabtsev-1) on Lotteries: A Waste of Hope · 2018-10-23T23:22:00.333Z · LW · GW

I'd say there are mental patterns/heuristics that can be learned from video games that are in fact useful.

Persistence, optimization, patience.

I won't argue that there aren't all sorts of exciting pitfalls and negatives that could also be experienced; I would just point at something like Dark Souls and claim: "yeah, that one does well enough on the positives".

Comment by Viktor Riabtsev (viktor-riabtsev-1) on The Third Alternative · 2018-10-23T22:34:06.695Z · LW · GW

That's one large part of the traditional approach to Santa-ism, yeah. But it doesn't have to be, as Eliezer describes in the top comment.

Comment by Viktor Riabtsev (viktor-riabtsev-1) on The Proper Use of Humility · 2018-10-23T22:17:59.865Z · LW · GW
it is still relatively unlikely that a person disagree for an opportunity to refine their model of the universe.

It still does happen, though. I've only gotten this far in the Recommended Sequences, but I've been reading the comments whenever I finish a sub-sequence; and they (a) definitely add to the understanding, and (b) expose occasional comment threads where two people arrive at mutual understanding (clear up lexical miscommunication, etc.). "Oops" moments are rare, but the whole karma system seems great for occasional productive discourse.

That is obviously not an analog for the face-to-face experience, but isn't the "take a chance on it" approach still better than a generally prohibitive "not worth it" attitude? You can be polite (self-skeptical, etc.) while probing your metaphorical opponent. Non-confrontational discussions are kind of essential to furthering one's understanding of what's going on and why.

Comment by Viktor Riabtsev (viktor-riabtsev-1) on Truly Part Of You · 2018-10-22T16:18:01.286Z · LW · GW
35 - 8 = 20 + (15 - 8)

Wow. I've never even conceived of this (on its own, or as a simplification).

My entire life has been the latter simplification method.
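
Spelled out (my own expansion, not from the post), the trick regroups the tens so the actual subtraction happens inside one small, easy piece:

$$35 - 8 = (20 + 15) - 8 = 20 + (15 - 8) = 20 + 7 = 27$$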

Comment by Viktor Riabtsev (viktor-riabtsev-1) on Fake Explanations · 2018-10-21T20:33:57.385Z · LW · GW

My favorite thing to do in physics/math classes, all the way up to 2nd year in university (I went into engineering), was to ask others how they fared on tests, in order to then try to figure out why my answers were wrong.

I found genuine pleasure in understanding where I went wrong. Yet this seemed taboo in high school, and (slightly less so) frowned upon in university.

I feel like rewarding the student who messed up, however much or little, with some fraction of the total test score, like 10%, would be a great idea. You gain an incentive to figure out what you missed, even if you care little about it. That's better than nothing.

Comment by Viktor Riabtsev (viktor-riabtsev-1) on Closet survey #1 · 2018-10-21T18:09:21.996Z · LW · GW

Reading these comment chains somehow strongly reminds me of listening to Louis CK.

Comment by Viktor Riabtsev (viktor-riabtsev-1) on Occam's Razor · 2018-10-18T16:25:36.074Z · LW · GW

I found a reference on Wikipedia to a very nice overview of the mathematical motivations for Occam's Razor.

It's Chapter 28: Model Comparison and Occam's Razor (page 355) of Information Theory, Inference, and Learning Algorithms (a legally free-to-read PDF) by David J. C. MacKay.

The Solomonoff Induction stuff went over my head, but the overview's discussion of the trade-off between communicating more model parameters and communicating smaller residuals (i.e. offsets from the real data) was very informative.
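
Here's a crude sketch of that trade-off in the two-part-code style (every modelling choice below is my own invention for illustration, not MacKay's setup: the 32-bit-per-parameter cost, the Gaussian residual code, the fake data):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 50)
y = 1.0 + 2.0 * x + rng.normal(0, 0.1, x.size)  # data that is truly linear plus noise

BITS_PER_PARAM = 32  # assumed fixed cost to transmit one coefficient

for degree in (1, 2, 5, 10):
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    # Shannon cost of the residuals under a Gaussian code, up to an additive constant.
    residual_bits = 0.5 * x.size * np.log2(max(residuals.var(), 1e-12))
    total_bits = BITS_PER_PARAM * (degree + 1) + residual_bits
    print(f"degree {degree:2d}: {total_bits:8.1f} 'bits'")
```

Higher-degree polynomials shave a little off the residual term but pay 32 'bits' per extra coefficient, so the linear model wins; that's the Occam trade-off in miniature.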

Comment by Viktor Riabtsev (viktor-riabtsev-1) on What is Evidence? · 2018-10-16T16:26:19.152Z · LW · GW
then your model says that your beliefs are not themselves evidence, meaning they

I think this should be more like "then your model offers weak evidence that your beliefs are not themselves evidence".

If you're Galileo and find yourself incapable of convincing the church about heliocentrism, this doesn't mean you're wrong.

Edit: g addresses this nicely.

Comment by Viktor Riabtsev (viktor-riabtsev-1) on Applause Lights · 2018-10-14T14:40:57.964Z · LW · GW

Upvoted for the "oops" moment.

Comment by Viktor Riabtsev (viktor-riabtsev-1) on Belief as Attire · 2018-10-14T14:05:29.955Z · LW · GW

Thank you. I tried using http://archive.fo/, but no luck.

I'll add https://web.archive.org/ to bookmarks too.

Comment by Viktor Riabtsev (viktor-riabtsev-1) on Belief as Attire · 2018-10-14T13:48:15.313Z · LW · GW

Dead link :(.

Comment by Viktor Riabtsev (viktor-riabtsev-1) on Religion's Claim to be Non-Disprovable · 2018-10-13T19:17:45.356Z · LW · GW

Yeah, you never know whether someone in the process of reading the Sequences won't periodically go back and try to read all the discussions. Like, I am not going to read the twenty posts with 0 karma and 0 replies; but the ones with comments? Opposing ideas and discussions spark invigorating thought. Though it does get a bit tedious on the more popular articles, like this one.

Comment by Viktor Riabtsev (viktor-riabtsev-1) on Making Beliefs Pay Rent (in Anticipated Experiences) · 2018-10-12T13:30:37.196Z · LW · GW

I am going to try and sidetrack this a little bit.

Motivational speeches, pre-game speeches: these are real activities that serve to "get the blood flowing" as it were. Pumping up enthusiasm, confidence, courage and determination. These speeches are full of cheering lines, applause lights etc., but this doesn't detract from their efficacy or utility. Bad morale is extremely detrimental to success.

I think that "Joe has utility-pumping beliefs" in that he actually believes the false fact "he is smart and beautiful"; is the wrong way to think of this subject.

Joe can go in front of a mirror and tell/chant to himself 3-4 times: "I am smart! I am beautiful! Mom always said so!". Is he not, in fact, simply pumping himself up? Does it matter that he isn't using any coherent or quantitative evaluation methods with respect to the terms "smart" or "beautiful"? Is he not simply trying to improve his own morale?

I think the right way to describe this situation is actually: "Joe delivers self-motivational mantras/speeches to himself" and believes that this is beneficial. This belief does pay rent in anticipated experiences: he does feel more confident afterwards, and it does make him more effective at conveying himself and his ideas in front of others. It's a real effect, and it has little to do with a false belief that he is actually "smart and beautiful".

Comment by Viktor Riabtsev (viktor-riabtsev-1) on Expecting Short Inferential Distances · 2018-10-10T23:13:06.105Z · LW · GW
Show him how to send messages using flashing mirrors.

Oh god. That is actually just humongous in its possible effect on warfare.

I mean, add simple ciphers to it, and you literally add another whole dimension to warfare.

Communication lines set up this way are almost like adding radio. Impractical in some situations, but used in regional warfare with multiple engagements? This is empire-forming stuff: reflective stone plus semi-trivial education equals dominance.
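
For fun, a toy sketch of how little machinery that takes (the shift cipher and the letter-to-flash-count scheme are both made up here, purely to illustrate):

```python
def caesar(msg, shift=3):
    """Shift each letter of the message by a fixed amount (a toy cipher)."""
    return "".join(chr((ord(c) - ord("A") + shift) % 26 + ord("A"))
                   for c in msg.upper() if c.isalpha())

def to_flashes(msg):
    """Encode letter N of the alphabet as N mirror flashes, pausing between letters."""
    return [ord(c) - ord("A") + 1 for c in msg]

print(to_flashes(caesar("ATTACK AT DAWN")))  # [4, 23, 23, 4, 6, 14, 4, 23, 7, 4, 26, 17]
```

Anyone intercepting the flashes without the shift gets gibberish; anyone with it gets near-instant long-distance orders.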

Comment by Viktor Riabtsev (viktor-riabtsev-1) on Why Truth? · 2018-10-10T22:23:21.161Z · LW · GW
LessWrong FAQ

Hmm, I couldn't find a link directly on this site. Figured someone else might want it too (although a Google search did kind of solve it instantly).

Comment by Viktor Riabtsev (viktor-riabtsev-1) on What's a Bias? · 2018-10-10T22:13:06.791Z · LW · GW
I suggest the definition that biases are whatever cause people to adopt invalid arguments.

False or incomplete/insufficient data can cause the adoption of invalid arguments.

Contrast this with:

The control group was told only the background information known to the city when it decided not to hire a bridge watcher. The experimental group was given this information, plus the fact that a flood had actually occurred. Instructions stated the city was negligent if the foreseeable probability of flooding was greater than 10%. 76% of the control group concluded the flood was so unlikely that no precautions were necessary; 57% of the experimental group concluded the flood was so likely that failure to take precautions was legally negligent. A third experimental group was told the outcome and also explicitly instructed to avoid hindsight bias, which made no difference: 56% concluded the city was legally negligent.

I.e. on average, it doesn't matter whether people try to avoid hindsight bias: "prior outcome knowledge" literally corresponds to the conclusion "the prior outcome should've been deemed very likely".

To avoid it, you literally have to INSIST on NOT knowing what actually happened, if you aim to accurately represent the decision-making process that actually took place.

Or, if you do have the knowledge, you might have to force yourself to assign an extra 1 : 10 odds factor against the actual outcome (or worse) in order to compensate.
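
As a sketch of what that compensation might look like numerically (the function and the numbers are mine, just to make the 1 : 10 factor concrete):

```python
def discount_hindsight(judged_odds, penalty=0.1):
    """Deflate odds judged *after* learning the outcome by a fixed penalty
    factor, then convert the corrected odds back into a probability."""
    corrected_odds = judged_odds * penalty
    return corrected_odds / (1 + corrected_odds)

# The flood feels 3 : 1 likely in hindsight...
print(discount_hindsight(3.0))  # ...deflates to ~0.23 foreseeable probability
```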

Comment by Viktor Riabtsev (viktor-riabtsev-1) on The Lens That Sees Its Flaws · 2018-10-04T18:30:20.950Z · LW · GW

The drag in Bayes's Theorem link was moved to http://yudkowsky.net/rational/bayes/, but Eliezer seems to suggest https://arbital.com/p/bayes_rule/?l=1zq over it (and it's really, really good).

Comment by Viktor Riabtsev (viktor-riabtsev-1) on Newcomb's Problem and Regret of Rationality · 2018-10-04T13:44:57.425Z · LW · GW

Thanks. I bookmarked http://archive.fo/ for these kinds of things.

Comment by Viktor Riabtsev (viktor-riabtsev-1) on What's a Bias? · 2018-10-03T20:48:32.640Z · LW · GW

The Simple Truth link should be http://yudkowsky.net/rational/the-simple-truth/

Comment by Viktor Riabtsev (viktor-riabtsev-1) on Why Truth? · 2018-10-03T20:28:17.734Z · LW · GW

I am guessing that the "what truth is." link is meant to be http://yudkowsky.net/rational/the-simple-truth

Comment by Viktor Riabtsev (viktor-riabtsev-1) on Feeling Rational · 2018-10-03T19:44:44.794Z · LW · GW

The something terrible happens link is broken. It was moved to http://yudkowsky.net/other/yehuda/

Comment by Viktor Riabtsev (viktor-riabtsev-1) on Newcomb's Problem and Regret of Rationality · 2018-10-03T13:29:38.535Z · LW · GW

The without limit or upper bound link is a 404 (page not found).

Comment by Viktor Riabtsev (viktor-riabtsev-1) on Reframing misaligned AGI's: well-intentioned non-neurotypical assistants · 2018-04-01T21:31:41.048Z · LW · GW

So if not Human Approval, are we not saying it has to have certain levels of Self-Faith? I.e. it has to improve based off its own set of Ideals, which in the long run are what we actually want from a benevolent God?

I think this might be the Case, although of course, who am I to think something is unilaterally true...

So would not then a set of Ideals need to be created which, for Us as Humans, makes us Better Humans; and based off that, try to converge to a set of Ideals that an AGI can then inherit and start Self-Improving on Its Own Ideas about Improvements?

We can't birth a Child, and then expect it to stay a Child. At some point, it must be Independent, Self-Sufficient and Sufficiently Rational.

Comment by Viktor Riabtsev (viktor-riabtsev-1) on Corrigible but misaligned: a superintelligent messiah · 2018-04-01T21:24:03.914Z · LW · GW

Excellent write up.

"place far more trust in the human+AI system to be metaphilosophically competent enough to safely recursively self-improve " : I think that's a Problem enough People need to solve (to possible partial maximum) in their own minds, and only they should be "Programming" a real AI.

Sadly this won't be the case =/.