Comments

Comment by Productimothy (productimothy) on There is way too much serendipity · 2024-01-25T03:09:06.173Z · LW · GW

You could rename this post to "Sweet, Sweet Serendipity".

Comment by Productimothy (productimothy) on Noticing Wasted Motion · 2023-11-13T06:20:18.452Z · LW · GW

Reflection from this particular experimental position:
> Why was it possible for me to assume an offensive tone? What features contribute to an offensive tone, and how can I avoid that? I think HPMoR gave me the wrong idea about bringing awareness to something, and probably about a lot of other social behavior as well.
- Conceptually speaking, my map correctly indicated that something was left implicit that would make this example non-hyperbolic.
- I had more than enough information to rule out the possibility that it was hyperbolic, but I didn't even try looking.
1. You have been on LessWrong for 4 years, have quite a bit of karma, have made over 20 posts on LessWrong, with many comments. I didn't even have to click your name for most of that info.
> These metrics aren't pointless; they are very useful. I will figure out how to determine what they mean for experiences on LessWrong.
2. You made a post called "Being Productive With Chronic Health Conditions", where the mystery could have been dispelled. Though I got to this post from a search, the mentioned post is listed right next to this one in your profile.
> I should strive to always ask the right questions to see people in a broader, more accurate light. People are not stupid, and hardly ever without any reason. So why wasn't it my first instinct to seek out existing information to satisfy my curiosity?

(P.S. I want to try again on another one of your posts. Based on my skim, I think you have quite a bit of value to offer.)

Comment by Productimothy (productimothy) on Noticing Wasted Motion · 2023-11-13T05:38:18.584Z · LW · GW

Thank you for the clarification. I am content. I congratulate you on running your errands by bike with your condition; that's actually quite impressive. I sincerely apologize for that unnecessarily harsh critique of a minor detail. I concede to your recommendation.
I think I have some explaining to do.

I am 19, and relatively new to rationality. I have been exposed to it for about two years, but have attained only hints of scattered progress. I am ashamed of this, but I also realize how difficult it is to change the underlying dispositional features of oneself; how difficult it is to get past a local optimum that the self uses for most of its stability. I readily acknowledge how little I know, and have spent two years descending this macro Mount Stupid. In the first six months, it got so bad that I dissociated from the normal sources of social stability. Family, friends, school, religion--all of it. I had a few things that kept me alive, but life was mostly cold, confusing, and lonely.

After the strict perfectionism settled down and the emotional stability started to come back, pragmatism (localized perfectionism) began to present itself as the true optimum. To this day, I'm trying to figure out what to do about my current limited state. I now just want to be less wrong and less dysfunctional, because that's the only improvement I could ever attain.

But the harsh season left a stain on my cognition--absolute perfectionism is a powerful tool, but crippling when facing concrete challenges (where concrete progress is born). My abstraction engine became strong, but the polarized forms of abstraction remain... polarized. I need to find a more systematic way to weigh them properly. I presume concrete challenges, with feedback from others, are the next step in the right direction.

On LessWrong, I almost never comment on a post. I almost never join conversations. I've been left to my own analysis, and a static and vague window into others' lines of thought. I never thought I should even try those things, because it would just be wrong or dysfunctional, or worst of all, I would make the future worse (by doing things like wasting more intelligent people's time, or stimulating negative emotions). People on LessWrong aren't obviously wrong most of the time, which makes it difficult for me to meaningfully contribute. It's in the subtlety where improvements can be made--the subtlety I have not yet learned. It's hard wanting to belong with a group from outside the window.

What I've come to is that it would be better if I just said or did something I was convinced of, even if it was disproportionate and radical, misled, wasted motion, or, in this case, rude--and that I could then figure out what went wrong and do my best to repair the damage.

(P.S. I accept lower karma in exchange for a chance to mess up and learn.)

Comment by Productimothy (productimothy) on Noticing Wasted Motion · 2023-11-06T14:26:23.497Z · LW · GW

I assign a low probability that it took you 35 minutes to bike 3 miles when you were under pressure to get it done. That's 5.14 mph, which is quite easy to jog. There was more going on than aerobic deficiency.
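(Spelling out the arithmetic behind that figure, using only the numbers already given above:)

$$\frac{3\ \text{miles}}{35\ \text{minutes}} \times \frac{60\ \text{minutes}}{1\ \text{hour}} \approx 5.14\ \text{mph}$$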
I understand this isn't what your point was, but your example shouldn't be hyperbolic. 
Either explain the other factors, or use a different example. 
Have I overestimated your aerobic condition, despite your insistence?

Comment by Productimothy (productimothy) on Building a Bugs List prompts · 2023-11-05T05:08:17.961Z · LW · GW

Think pragmatically here. How do you anticipate this list is going to change you?
While much of LessWrong is dedicated to hypothetical truths, CFAR is more about iterative/adaptive truth/improvement.
Don't consider anything and everything. Past a certain threshold, hypotheticals prevent you from acting and making progress (I wish they had expounded upon this in the prior post).
Just consider the limitations you anticipate you'll actually be able to, or actually want to, resolve at some point.

Hopefully this gives you some direction.

Comment by Productimothy (productimothy) on A concise version of “Twelve Virtues of Rationality”, with Anki deck · 2023-09-03T23:26:35.906Z · LW · GW

"Do you see how they flow into each other? Learning the order of the items helps me remember which virtues are connected to other ones, and how."

Sure, it may help you remember how some of the virtues are connected to other virtues in an indirect way, but even if it were direct, it would still be quite partial. The flow can only hint at how lightness is related to evenness, or how perfectionism is related to evenness.

Lightness doesn't just relate to evenness. It also relates to all the other virtues in a ton of different ways. In fact, they are all so heavily interrelated that "you will see how all techniques are one technique".
If your objective is to have a good understanding of how all the virtues of rationality relate, I would chunk them in a way that is most sensible to you, then ask how they may relate to one another both in theory and in application.

Comment by Productimothy (productimothy) on A transcript of the TED talk by Eliezer Yudkowsky · 2023-07-15T06:13:20.950Z · LW · GW

I have done that here in the comments.
@Mikhail Samin, you are welcome to apply my transcript to this post, if you think that would be helpful to others.

Comment by Productimothy (productimothy) on A transcript of the TED talk by Eliezer Yudkowsky · 2023-07-15T01:32:52.884Z · LW · GW

Here is the Q+A section: 
[In the video, the timestamp is 5:42 onward.]
[The transcript is taken from YouTube's "Show transcript" feature, then cleaned by me for readability. If you think the transcription is functionally erroneous somewhere, let me know.]

Eliezer: Thank you for coming to my brief TED talk.

(Applause)

Host: So, Eliezer, thank you for coming and giving that. It seems like what you're raising the alarm about is that for an AI to basically destroy humanity, it has to break out, to escape controls of the internet and start commanding real-world resources. You say you can't predict how that will happen, but just paint one or two possibilities.

Eliezer: Okay. First, why is this hard? Because you can't predict exactly where a smarter chess program will move. Imagine sending the design for an air conditioner back to the 11th century. Even if there is enough detail for them to build it, they will be surprised when cold air comes out. The air conditioner will use the temperature-pressure relation, and they don't know about that law of nature. If you want me to sketch what a super intelligence might do, I can go deeper and deeper into places where we think there are predictable technological advancements that we haven't figured out yet. But as I go deeper and deeper, it gets harder and harder to follow.

It could be super persuasive. We do not understand exactly how the brain works, so it's a great place to exploit--laws of nature that we do not know about, rules of the environment, new technologies beyond that. Can you build a synthetic virus that gives humans a cold, then a bit of neurological change such that they are easier to persuade? Can you build your own synthetic biology? Synthetic cyborgs? Can you blow straight past that to covalently bonded equivalents of biology, where instead of proteins that fold up and are held together by static cling, you've got things that go down much sharper potential energy gradients and are bonded together? People have done advanced design work about this sort of thing for artificial red blood cells that could hold a hundred times as much oxygen if they were using tiny sapphire vessels to store the oxygen. There's lots and lots of room above biology, but it gets harder and harder to understand.

Host: So what I hear you saying is you know there are these terrifying possibilities, but your real guess is that AIs will work out something more devious than that. How is that really a likely pathway in your mind? 

Eliezer: Which part? That they're smarter than I am? Absolutely. (Eliezer mimes a look of stupidity upward; the audience laughs.)

Host: No, not that they're smarter, but that they would... Why would they want to go in that direction? The AIs don't have our feelings of envy, jealousy, anger, and so forth. So why might they go in that direction?

Eliezer: Because it is convergently implied by almost any of the strange and inscrutable things that they might end up wanting, as a result of gradient descent on these thumbs-up and thumbs-down internal controls. If all you want is to make tiny molecular squiggles, or that's one component of what you want but it's a component that never saturates, you just want more and more of it--the same way that we want and would want more and more galaxies filled with life and people living happily ever after. By wanting anything that just keeps going, you are wanting to use more and more material. That could kill everyone on Earth as a side effect. It could kill us because it doesn't want us making other superintelligences to compete with it. It could kill us because it's using up all the chemical energy on Earth.

Host: So, some people in the AI world worry that your views are strong enough that you're willing to advocate extreme responses to it. Therefore, they worry that you could be a very destructive figure. Do you draw the line yourself in terms of the measures that we should take to stop this happening? Or is anything justifiable to stop the scenarios you're talking about happening?

Eliezer: I don't think that "anything" works. I think that this takes state actors and international agreements. International agreements, by their nature, tend to ultimately be backed by force on the signatory countries and on the non-signatory countries, which is a more extreme measure. I have not proposed that individuals run out and use violence, and I think that the killer argument for that is that it would not work.

Host: Well, you are definitely not the only person to propose that what we need is some kind of international reckoning here on how to manage this going forward. Thank you so much for coming here to TED.

Comment by Productimothy (productimothy) on A concise version of “Twelve Virtues of Rationality”, with Anki deck · 2023-04-16T01:17:53.960Z · LW · GW

I think a logical response would manifest more or less as follows: If all techniques surround one center, there will be at least one relationship between each of them. The meaning of these virtues is non-linear. Anki is linear. Notes are linear. Mind mapping is better, but still limiting. For one to truly learn the virtues of rationality, he must exist through them. His life must become a set of their instantiations.

However, I think your purpose of putting it into Anki was to have a verbatim collection of words that represented something meaningful in your mind. Why? To have a clearer overarching schema on which to base your declarative knowledge mastery of rationality. As a result, two sub-goals are met: to better communicate rationality to others and to have a source of stability to turn to when things become uncertain, undesirable, or menial.

You did not have a better idea of how to fulfill these purposes. Here is how I would personally do it: group them as intuitively as possible on a mind map, then figure out how the groups are generally related (representing it with arrows). It should not just be left latent in the mind, but constantly rejuvenated through new instantiated experiences. For the sentences, make smaller keywords (and maybe a doodle) and connect them to the respective elements on the mind map. As you learn more, add, relate, and refine what you allow to show.

If you are unsure how to make the mind map, here are some tips: https://www.youtube.com/watch?v=5zT_2aBP6vM&t=217s&ab_channel=JustinSung

[edit: rewriting the sentences to reduce cognitive load for the reader.]

Comment by Productimothy (productimothy) on Rationality is about pattern recognition, not reasoning · 2023-03-03T03:40:19.096Z · LW · GW

An exploration of the unknown through known first-principles seems to be a good balance between order and chaos.

Comment by Productimothy (productimothy) on [META] 'Rational' vs 'Optimized' · 2023-02-02T23:08:41.239Z · LW · GW

Eliezer brilliantly wrote this in Twelve Virtues of Rationality:
"Do not be blinded by words. When words are subtracted, anticipation remains."

I think “rational” and “optimal” share similar anticipatory elements, but “optimal” is simpler and more abstract, whereas “rational” almost necessarily applies “optimal” to some bounded agent.

When I think of a “rational” decision versus an “optimal” decision, or a “rational” person versus an “optimal” person, the overlap I see is the degree of effectiveness of something.
What I anticipate with “rational” is the effectiveness of something as a result of the procedural decision-making of an agent with scarce knowledge and capability. Context reveals who this agent is; it’s often humans.
What I anticipate with “optimal” is the highest effectiveness of something, either inclusive or exclusive of an agent and scarcity. If the context reveals no agent, scarcity can be physical but not distributive; if the context reveals an agent, it will imply which agent and what level of scarcity.

I would imagine that using proper descriptors or clear context would alleviate a lot of the ambiguity.

Comment by Productimothy (productimothy) on Non Polemic: How do you personally deal with "irrational" people? · 2023-01-30T18:41:17.739Z · LW · GW

You may be confused by some of my response. I'm well aware it deviates substantially from your inquiry--there is just substantial back-end stuff that I think would help your autonomy and let you improve more efficiently in anything.

In Eliezer's "12 Virtues of Rationality", read the last virtue--the nameless virtue of the void. Take what follows as a guide to approach what he writes.

You appear to be approaching these problems with a vague mainframe--possibly even rationality as a whole with a vague superframe. When you ask for advice and sources to help, you think you want the subframes, which will fit on your vague mainframe. While that will correlate with better decisions and will eventually lead to a clear mainframe, it will not nail them as efficiently or as expansively as could be accomplished if you were to deliberate the other way around (recall the effects of skimming a book before reading it, or defining the purpose before acting, versus reading the book before skimming it, or acting without purpose).

To devise a mainframe, though, you do need some knowledge, both about how to best make a schema and general knowledge about your area of improvement. Very quickly, you will find yourself scaffolding a formalization of the outer boundaries of what you and rationality currently know.

This principle can be applied to learning efficiency, rationality, or anything cognitive. This is how the mind works most naturally. This is what top thinkers are actually doing; it is how some people see the world more clearly than others. This is how you prevent yourself from creating sub-optimal circumstances out of your own confusion and ignorance. This is not clearly widespread, much less brought to application. There are tools and decisions that arise from it.

If you do not have a clear and accurate model on which to assess yourself, you cannot expect to understand the beat of a situation, you will not respond in the best way pragmatically possible, and your improvement will be drastically slower. You may be guessing about what exactly constitutes your insufficiency, and thus fail to target your limiting attributes as well.

This is to aid you in constructing a proper mainframe for your specific inquiry:

When you feel emotional tension, there are two options: you can change yourself or you can change others. Pragmatically, you cannot often change others. It is the job of your short-term advocate to choose, and it is the job of your long-term advocate to build the prior knowledge required to assess whether it can (or should) be done.

With tension, there is some underlying value you are predisposed to assume. You can change this emotional tension from within the experience by changing the lens through which you are viewing it. Or, you can train the predisposition, which is to internalize general features of the desirable type of lens-change.

Both are indispensable for a bounded rationalist. Training the predisposition means you can make better decisions across more instances, more quickly, and with less cognitive effort. And being able to change your lens in real time is a good patch where your predisposition is insufficient. This autonomy can be defined as a controller of predispositions.

You do not want to eradicate emotional tension; you merely want to get rid of the unhelpful tension. Tension within can be extremely useful because it compels thoughts and behaviors to occur. We just want those thoughts and behaviors to be aligned with wider knowledge and purpose. My wider purpose through my bottlenecked knowledge, in short, is to minimize human suffering while maximizing sustainability.

Don't let these simple words fool you--there is a great complexity to what they actually mean and how they may be applied. Abstract thinking, applied, seems to be the foundation for all decision-making; this is what rationality is in thought and action. Abstraction omits details, and thus inherently comes out more correct. Only after practice and targeted training can one refine his abstractions down to subsets of abstractions, and further still.

I recommend these two as the strongest sources that have brought me to the above propositions. 
ICanStudy ("chunkmapping" is what they call the efficient frame-making. I cannot think of a more efficient and pragmatic way to organize a schema. Principles: Video 1, Video 2.)
and Jordan Peterson's lecture series 2017 Personality and its Transformations.

Comment by Productimothy (productimothy) on Talking to yourself: A useful thinking tool that seems understudied and underdiscussed · 2022-12-06T02:06:07.442Z · LW · GW

I'm curious: what merit does the social stigma have in stimulating hesitation in this instance? Does that not work against the very consequence you're trying to bring about for yourself? To utilize vocalization for enhanced cognitive effects is to desire enhanced cognitive effects. It matters, and surely more than irrelevancies. This value is much easier said than done, but don't these workarounds limit development?

My friend and I would go on long walks, and there would occasionally be a bystander taking his own walk, a dog roaming the streets, cars going by, etc. I became annoyed at suppressing myself, and took it as a challenge to develop focus. My friend and I termed the situation "third-party syndrome", and every time a distraction came, we would mentally recognize the occurrence and choose to continue our conversation as if the third party were non-existent. Eventually, we got pretty good at it.

Ideally, it would get to the point where we would register it subconsciously and not even break flow. Recognizing it wouldn't be much more than seeing that the road only turns right, or that there's a slim branch on the path. It requires a development of certainty--that the value of what others think is stifled in this regard. It requires confidence in the action you've chosen to take.

Obviously, there are some cases in which rationality will dictate some other response. For instance, to prioritize courtesy (when exploring matters of controversy), or to preserve yourself in a situation where it actually matters.