Posts

Which personalities do we find intolerable? 2022-07-21T15:56:23.837Z
What question would you like to collaborate on? 2021-06-02T19:31:11.374Z
Functional Trade-offs 2021-05-19T01:06:08.518Z
A Wiki for Questions 2021-05-11T02:31:03.124Z
What questions should we ask ourselves when trying to improve something? 2021-05-06T19:03:06.638Z

Comments

Comment by weathersystems on Which personalities do we find intolerable? · 2022-08-08T21:37:02.710Z · LW · GW

Heh, I got the same feeling from the Dutch people I met. My ex-wife once did a corporate training thing where they were learning about the power of "yes and" in improv and in working with others. She and one other European person (from Switzerland maybe?) were both kinda upset about it and decided to turn their improv into a "no but" version.

Ya I definitely took agreeableness == good as just an obvious fact until that relationship.

Comment by weathersystems on BBC Future covers progress studies · 2022-08-06T23:23:12.892Z · LW · GW

This isn't as strong of an argument as I once thought


What is the "this" you're referring to? As far as I can tell I haven't presented an argument.

Comment by weathersystems on Hiring Programmers in Academia · 2022-07-25T03:29:02.366Z · LW · GW

Do you have a link to the job posting?

Comment by weathersystems on What's the goal in life? · 2022-06-20T23:42:37.876Z · LW · GW

I would say it feels like my brain's built in values are mostly a big subgoal stomp, of mutually contradictory, inconsistent, and changeable values. [...]

it feels like my brain has this longing to find a small, principled, consistent set of terminal values that I could use to make decisions instead. 


Here's a Slate Star Codex piece on our best guess at how our motivational system works: https://slatestarcodex.com/2018/02/07/guyenet-on-motivation/. It's essentially just a bunch of small, mostly independent modules all fighting for control of the body to act according to what they want.

I don't think there's any way out of having "mutually contradictory, inconsistent, and changeable values." We just gotta negotiate between these as best we can.

There are at least a couple problems with trying to come up with a "small, principled, consistent set of terminal values" you could use to make decisions.

  1. You're never gonna be able to do it in a way that covers all edge cases.
  2. Even if you were able to come up with the "right" system, you wouldn't actually be able to follow it, because our actual motivational systems aren't simple rule-following systems. You're gonna want what you want, even if your predetermined system says to do otherwise.
  3. You don't really get to decide what your terminal values are.  I mean you can fudge it a bit, but you certainly don't have complete control over them (and thank god).

Negotiating between competing values isn't something you can smooth over with a few rules. Instead it requires some degree of finesse and moment to moment awareness. 

Do you play any board games? In chess there are a lot of what we can call "values": better to keep your king safe, control the center, don't double your pawns, etc. But there's no "small, principled, consistent set of" rules you can use to negotiate between these. It's always gotta be felt out in each new situation.

And life is much messier and more complex than something like chess. 

 

It sounds like both of you may have gone through the exercise of find terminal goals that work for you.

I "found terminal goals" in the sense that I tried to figure out what were the main things I wanted in life. I came up with some sort of list (which will probably change in the future). It's a short list, but definitely not principled or consistent :D. Occasionally it does help to keep me focused on what matters to me. If I find myself spending a lot of time doing stuff that doesn't go in one of those directions, I try to put myself more on track.

If you want I can try to figure out how I got there. But it seems like you're more concerned with the deciding-between-competing-values thing.

Inclusive genetic fitness seems like it may be a reasonable terminal goal to replace the subgoal stomp.

Ya definitely don't do that. If you did that you'd just spend all your time donating sperm or something.

Comment by weathersystems on What's the goal in life? · 2022-06-19T00:21:18.657Z · LW · GW

While these sound good, the rationale for why these are good goals is usually pretty hand wavy (or maybe I just don't understand it).

 

At some point you've just got to start with some values. You can't "justify" all of them. And there's no "research" that could tell you which values to start with.

Luckily, you already have some core values.

The goals you should pursue are the ones that help you realize those values. 

 

but there are a ton of important questions where I don't even know what the goal is

You seem to think that finding the "right" goals is just like learning any mundane fact about the world. People can't tell you what to want in life like they can explain math to you. It's just something you have to feel out for yourself.

Let me know if I'm misreading you.

Comment by weathersystems on Agent level parallelism · 2022-06-18T23:54:28.799Z · LW · GW

Maybe a dumb question. What's an EM researcher? Google search didn't do me any good.

Comment by weathersystems on BBC Future covers progress studies · 2022-06-17T00:31:32.187Z · LW · GW

What do you think about the vulnerable world hypothesis? Bostrom defines the vulnerable world hypothesis as: 

If technological development continues then a set of capabilities will at some point be attained that make the devastation of civilization extremely likely, unless civilization sufficiently exits the semi-anarchic default condition.

(There's a good collection of links about the VWH on the EA forum). And he defines "semi-anarchic default condition" as having 3 features:

1. Limited capacity for preventive policing. States do not have sufficiently reliable means of real-time surveillance and interception to make it virtually impossible for any individual or small group within their territory to carry out illegal actions – particularly actions that are very strongly disfavored by > 99 per cent of the population. 

2. Limited capacity for global governance. There is no reliable mechanism for solving global coordination problems and protecting global commons – particularly in high-stakes situations where vital national security interests are involved. 

3. Diverse motivations. There is a wide and recognizably human distribution of motives represented by a large population of actors (at both the individual and state level) – in particular, there are many actors motivated, to a substantial degree, by perceived self-interest (e.g. money, power, status, comfort and convenience) and there are some actors (‘the apocalyptic residual’) who would act in ways that destroy civilization even at high cost to themselves.

To me, the idea that we're in a vulnerable world is the strongest challenge to the value of technological progress. If we are in a vulnerable world, the time we have left before civilizational devastation is partly determined by our rate of "progress."

Bostrom doesn't give us his probability estimate that the hypothesis is true. But to me it seems quite likely that at some point we'll invent the technology that will screw us over (if we haven't already). AI and engineered pandemics are the scariest potential examples for me.

Do you disagree with me about the probability of us being in a vulnerable world? Do you think we can somehow avoid discovering the civilization-destroying tech while only finding the beneficial stuff?

Or do you think we are in a vulnerable world, but that we can exit the "semi-anarchic default condition?" Bostrom's suggestions (like having complete surveillance combined with a police state) for exiting the semi-anarchic default condition seem quite terrifying.

If you've written or spoken about this somewhere else, feel free to just point me there.

Comment by weathersystems on How much does cybersecurity reduce AI risk? · 2022-06-15T21:42:12.862Z · LW · GW

You may be interested in this 80000 hours podcast: Nova DasSarma on why information security may be critical to the safe development of AI systems

Comment by weathersystems on Who said something like "The fact that putting 2 apples next to 2 other apples leads to there being 4 apples there has nothing to do with the fact that 2 + 2 = 4"? · 2022-06-15T14:54:21.931Z · LW · GW

Do you have anything else you remember about the statement? Where you heard it, when you heard it, etc.?

Comment by weathersystems on A claim that Google's LaMDA is sentient · 2022-06-15T01:42:43.427Z · LW · GW

I'm not so sure I get your meaning. Is your knowledge of the taste of salt based on communication?

Usually people make precisely the opposite claim. That no amount of communication can teach you what something subjectively feels like if you haven't had the experience yourself.

I do find it difficult to describe "subjective experience" to people who don't quickly get the idea. This is better than anything I could write: https://plato.stanford.edu/entries/qualia/. 

Comment by weathersystems on Blake Richards on Why he is Skeptical of Existential Risk from AI · 2022-06-15T01:25:28.019Z · LW · GW

The quotes above are not the complete conversation. In the section of the discussion about AGI, Blake says:

Blake: Because the set of all possible tasks will include some really bizarre stuff that we certainly don’t need our AI systems to do. And in that case, we can ask, “Well, might there be a system that is good at all the sorts of tasks that we might want it to do?” Here, we don’t have a mathematical proof, but again, I suspect Yann’s intuition is similar to mine, which is that you could have systems that are good at a remarkably wide range of things, but it’s not going to cover everything you could possibly hope to do with AI or want to do with AI.

Blake: At some point, you’re going to have to decide where your system is actually going to place its bets as it were. And that can be as general as say a human being. So we could, of course, obviously humans are a proof of concept that way. We know that an intelligence with a level of generality equivalent to humans is possible and maybe it’s even possible to have an intelligence that is even more general than humans to some extent. I wouldn’t discount it as a possibility, but I don’t think you’re ever going to have something that can truly do anything you want, whether it be protein folding, predictions, managing traffic, manufacturing new materials, and also having a conversation with you about your grand’s latest visit that can’t be… There is going to be no system that does all of that for you.

I don't think he's making the mistake you're pointing to.  Looks like he's willing to allow for AI with at least as much generality as humans.

And he doesn't seem too committed to one definition of generality. Instead he talks about different types/levels of generality.

Comment by weathersystems on A claim that Google's LaMDA is sentient · 2022-06-13T22:40:07.403Z · LW · GW

Why would self-awareness be an indication of sentience? 

By sentience, do you mean having subjective experience? (That's how I read you)

I just don't see any necessary connection at all between self-awareness and subjective experience. Sometimes they go together, but I see no reason why they couldn't come apart. 

Comment by weathersystems on Operationalizing two tasks in Gary Marcus’s AGI challenge · 2022-06-12T16:46:46.016Z · LW · GW

Gary Musk decided

Comment by weathersystems on What's The Best Place to Look When You Have A Question About x? · 2022-05-29T23:17:50.153Z · LW · GW
  • https://github.com/search for when stackoverflow fails me. Sometimes when I'm trying to figure out how to use some library with not great documentation, there are good examples in other people's code that aren't yet on stackoverflow.
  • product reviews on reddit (google search something like "light phone review site:reddit.com")

Comment by weathersystems on Are smart people's personal experiences biased against general intelligence? · 2022-04-21T23:56:05.893Z · LW · GW

https://en.wikipedia.org/wiki/Berkson%27s_paradox

I also liked this numberphile video about it: Link

Comment by weathersystems on My Terrible Experience with Terror · 2022-04-21T23:31:04.581Z · LW · GW

Ah. Ya that makes sense. It sounds like it's not so much about what to do in the moment of panic as what to focus on throughout your day-to-day life. Let yourself be interested in and pay attention to things other than the fact that you feel bad all the time. Don't let your pain be your main/only focus.

Comment by weathersystems on When is positive self-talk not worth it due to self-delusion? · 2022-04-21T21:13:08.683Z · LW · GW

I read it as an analogy to a programming stack trace, but with motivations. Oftentimes you're motivated to do A in order to get B in order to get C, where one thing is desired only as a means to get something else. Presumably these chains of desire bottom out in some terminal desires: things that are desired for their own sake, not because of some other thing they get you.

So one example could be, "I want to get a job, in order to get money, in order to be able to feed myself." 
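As a toy sketch of the stack-trace analogy (the goals and function names here are my own made-up illustration, not anything from the original post):

```python
# A "motivation stack trace": each goal is wanted only as a means
# to the next one, and the chain bottoms out in a terminal desire.
motivation_chain = ["get a job", "get money", "feed myself"]

def trace(chain):
    """Render the chain in 'A in order to B in order to C' form."""
    return " in order to ".join(chain)

def terminal_desire(chain):
    """The last link: the thing desired for its own sake."""
    return chain[-1]

print(trace(motivation_chain))
# -> get a job in order to get money in order to feed myself
print(terminal_desire(motivation_chain))
# -> feed myself
```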

I'm not sure if that's what they meant. I'm often kind of skeptical of that sort of psychologizing though. It's not that it can't be done, but that our reasons for having motivations are often invisible to ourselves. My guess is that when people try to explain their own actions/motivations in this way, they're largely just making up a plausible story.

Comment by weathersystems on My Terrible Experience with Terror · 2022-04-21T20:48:06.126Z · LW · GW

Thanks for writing this. As someone who went through something very similar, I largely agree with what you wrote here.

To make the "accept the panic" bit a bit more concrete: following someone's advice, when I'd start to panic, I'd sit down and imagine I was strapped to the chair. I'd imagine my feelings were a giant wave washing over me, and that I couldn't avoid them, because I was strapped to the chair. The wave wouldn't kill me though, just feel uncomfortable. I'd repeat that in my head: "this is uncomfortable but not dangerous. this is uncomfortable but not dangerous..." Turns out that if you don't try to avoid the bad feelings, they don't last as long. My understanding is that by just sitting and taking it without flinching, you're teaching your brain that panic is not something to be feared, which reduces the attacks' intensity and frequency.

Before doing that I felt terrible for about an hour. With that technique it was reduced to about 15 minutes, then I quickly (in a week or two) stopped having panic attacks.

I'm not sure I understand how "Three, distract yourself." fits with accepting panic though. I know for me, distracting myself was a way of not accepting. Of trying not to feel bad.

Comment by weathersystems on Explorative Questions towards Argumentation + GitHub · 2021-07-23T20:17:19.234Z · LW · GW

There are a few things that sound similar to what you're talking about. The first is the process of writing an RFC: https://github.com/inasafe/inasafe/wiki/How-to-write-an-RFC. Wikipedia must also have to do many of the things you describe, so looking into how they reach consensus may be interesting for you. And there are attempts to have more of a direct-democracy style of governance in the US, with certain procedures you may want to look into: https://www.newyorker.com/news/the-future-of-democracy/politics-without-politicians

I do like the idea of templates for certain types of discussion. That's why I wrote this: https://www.lesswrong.com/posts/xE7F4b34pfTMThYMX/what-questions-should-we-ask-ourselves-when-trying-to. 

 

So, depending on the topic and which appropriate template is chosen the chances of success are 'almost' guaranteed because the underlining logic is agreed upon and already proven. 

It's not easy for me to understand how strong of a claim you're making here, because you say "depending on the topic" and "almost." It still feels too strong to me. I'd say most of the time, at best, templates for discussion would just be helpful. Especially if people have different values and beliefs about the world, disagreements are very difficult to settle.

I suppose questions in mathematics or something where you can prove an answer is correct may be a type of exception. Check out the polymath project if you haven't seen it already for an example of people collaborating on trying to solve (math) problems.

I have a lot of similar ideas to the ones you've presented in this post, so if you'd like to discuss these things anytime, feel free to send me a dm.
 

Comment by weathersystems on Explorative Questions towards Argumentation + GitHub · 2021-07-14T17:29:18.322Z · LW · GW

I'm still not clear on what exactly you're wanting to do with Github. 

  • Can you give an example use case for your project?
  • What do you see the "templates" doing in this project?
Comment by weathersystems on How to make more people interested in rationality? · 2021-07-11T16:33:05.469Z · LW · GW

What do you mean by "reach out to people"? Usually that just means contact them. But here you seem to mean something different.

Comment by weathersystems on What question would you like to collaborate on? · 2021-06-04T18:25:41.625Z · LW · GW

Thanks. The "drawing what you see" vs "drawing what you think" distinction combined with the images helped me understand the idea better.

This seems somewhat related to what Scott Alexander called "concept-shaped holes." So you're saying that some people have a "concept of how to draw what you see"-shaped hole, and that Edwards has some techniques for helping you fill that gap.

Are you specifically looking for conceptual shifts that would allow you to do something better? Or is just being able to understand something you previously didn't understand enough? Like if someone didn't "get" jazz and there were some way to help them appreciate it, would that count?

Comment by weathersystems on [deleted post] 2021-06-04T17:47:20.098Z

Thanks for writing up your thoughts here. I hope you won't mind a little push-back.

There's a premise underlying much of your thought that I don't think is true.

But as the world of Social Studies consists of the interactions of persons, places, and things, they are subject to the Laws of Physics, and so the tenants of Physics must apply.

I don't really see how the laws of physics apply to social interactions. To me it sounds like you're mixing up different levels of description without any reason.

Yes, at bottom we're all made up of physical stuff that physics describes. But that doesn't really mean that the laws of physics are particularly useful when trying to explain human scale phenomena like why people get hungry, or angry, or why people have a hard time coordinating, or (more to your point), why people sometimes believe the wrong things. The fields of psychology, evolutionary biology, sociology among others seem like they'd be more relevant than physics. The different fields of knowledge exist for a good reason.

Comment by weathersystems on [deleted post] 2021-06-04T17:17:12.136Z

I think some question in this area would work well for this collaboration I'm proposing: https://www.lesswrong.com/posts/oqSMn6WEXPdDEvyyt/what-question-would-you-like-to-collaborate-on

If you add a question there and it gets picked I'd be happy to work on this with you.

Comment by weathersystems on What question would you like to collaborate on? · 2021-06-04T16:46:10.976Z · LW · GW

Ya I thought it was worth a try. Looks like exactly one person is putting forward a question so far. Do you have any questions you'd be interested in working on?

Comment by weathersystems on What question would you like to collaborate on? · 2021-06-04T16:43:30.139Z · LW · GW

Thanks for being the first person to submit a question! 

It turns people who have "no drawing talent" into people who can easily draw anything they see, not by strenuous exercise, but by a conceptual shift that can be achieved in a few hours.


Did that work for you, or do you know of any evidence that that's the case? I'm skeptical that a few hours can allow anyone to "draw anything they see" but would be happy to change my mind on that. I guess you didn't say how well they'd be able to draw after just a few hours of "conceptual shift." But I read you as saying anyone can draw very well after just a little effort.

I guess I'm not really understanding the question. Is the question something like:

"What are some small shifts people can make in their mental model of some skill that would have a very large impact in the skill level of the person making that mental shift?"

Comment by weathersystems on Which activities do you prefer to better recover productivity? · 2021-06-02T21:45:33.113Z · LW · GW
  • going for a walk
  • taking a long bath or shower
  • going to the gym
  • taking a nap if I'm tired
Comment by weathersystems on What question would you like to collaborate on? · 2021-06-02T19:34:33.908Z · LW · GW

I'm a bit worried that my question will be picked and then I'll be the only one working on it. So to give this thing a better chance of at least two people collaborating, I'm not submitting a question.

Comment by weathersystems on A Wiki for Questions · 2021-06-02T00:32:59.002Z · LW · GW

Thanks. I'd heard of wikispore, but not wikifunctions. That looks cool.

Comment by weathersystems on The Case for Extreme Vaccine Effectiveness · 2021-05-24T01:40:51.116Z · LW · GW

"I wrote first wrote"

Thanks for the post!

Comment by weathersystems on How refined is your art of note-taking? · 2021-05-20T01:39:26.168Z · LW · GW

A really easy way to set up your own wiki is to use a GitHub repo. You can make it private if you don't want people to see it. If you use markdown with the .md file extension, GitHub will render the pages nicely and will even make links between pages work.

do you ever go back to old free form notes and find yourself unable to reconstruct what you originally meant?

I don't think I've ever had that problem.

Or find the task of wading through your old free form notes unpleasant, since they're not polished?

I think it's fun. I've never found it unpleasant. And if it's on a computer you can always use the search function for topics you're interested in pursuing further.

Comment by weathersystems on How refined is your art of note-taking? · 2021-05-20T01:06:03.391Z · LW · GW

Also make sure to check out the other posts with the note taking tag if you haven't seen them already: https://www.lesswrong.com/tag/note-taking

Comment by weathersystems on How refined is your art of note-taking? · 2021-05-20T00:20:38.619Z · LW · GW

I like using a wiki for notes. Something like this: http://evergreennotes.com/. There are a lot of ways to set up a wiki.
 

1) How consistently do you take notes when you're reading up on a new skill or subject?

I take notes for things that I want to eventually write something about, so for most things I don't end up taking notes.
 

2) Do you regularly refer back to old notes?

Sure. Especially keeping track of relevant sources is super useful for future me.
 

3) Do you approach note-taking differently for different subjects or purposes?

For notes that I don't want people to see because they involve private information, I just use a repository with some files and folders on my computer. For anything that I'd be ok with people reading, I use this wiki: openquestions.wiki.

4) Have you adopted a specific note-taking method and used it consistently for more than a few months?

Yep.

I'm not sure if the method has a name. For my personal notes I have a folder for free-form babble-type thoughts. Each filename is just the date that I'm writing. Then I later go through, find anything related to some topic I want to do more work on, and copy-paste the good bits into files separated by topic.

Some of those notes in my private repo end up being something I'd like to share with friends, so I post 'em to my wiki.
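The harvesting step could be sketched roughly like this (the directory layout, file names, and topic here are hypothetical; my actual process is manual copy-pasting):

```python
from pathlib import Path

def harvest(notes_dir, topic):
    """Collect paragraphs mentioning `topic` from dated free-form note files."""
    hits = []
    for note in sorted(Path(notes_dir).glob("*.md")):  # e.g. 2021-05-19.md
        for para in note.read_text().split("\n\n"):
            if topic.lower() in para.lower():
                hits.append(para.strip())
    return "\n\n".join(hits)

# Hypothetical usage: pull everything about wikis into one topic file.
# Path("topics/wikis.md").write_text(harvest("babble", "wiki"))
```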

5) What role does note-taking play for you? Is it a way to focus your attention? To make extracts from the text for easier reference later? To comprehend the material better through the act of making notes?

The free form writing helps me to get down my thoughts quickly for possible future reference. Sometimes I go back to what I wrote 5 months ago and find some gems.

Sometimes when I'm researching a topic, just copy pasting links and relevant text has been useful.

Taking notes helps me keep track of fragmentary ideas for future processing and helps me do the processing.

Comment by weathersystems on What is the strongest argument you know for antirealism? · 2021-05-12T15:16:25.662Z · LW · GW

If you're just looking for the arguments, this is what you're looking for:
https://plato.stanford.edu/entries/moral-anti-realism

How do you know that disinterested (not game-theoretic or instrumental) altruism is irrational / doesn't make any sense?

What is "disinterested altruism"? And why do you think it's connected to moral anti-realism?

Comment by weathersystems on A Wiki for Questions · 2021-05-11T23:37:08.436Z · LW · GW

Thx. I'll check it out.

Comment by weathersystems on A Wiki for Questions · 2021-05-11T22:14:42.599Z · LW · GW

I agree. My two questions with regards to that are:
 

  1. Would they accept this as a sister project? The last time they took on a sister project was something like 10 years ago (iirc)
  2. Would it be better placed as its own Wikimedia project or could it be merged with Wikiversity?
Comment by weathersystems on A Wiki for Questions · 2021-05-11T22:05:52.541Z · LW · GW

StackExchange only flags duplicates, that's true, but the reason is so that search is more efficient, not less. The duplicate serves as a signpost pointing to the canonical question.


Ya I get that. But why keep all the answers and stuff from the duplicates? My idea with the question wiki was to keep the duplicate question page (because maybe it's worded a bit differently and would show up differently in searches), have a pointer to the canonical question, and remove the rest of the content on that page, combining it with the canonical question page.

Also, StackExchange does indeed allow edits to answers by people other than the original poster. Those with less than a certain amount of reputation can only propose an edit and someone else has to approve it, and those who have a higher level of reputation can edit any answer and have the edit immediately go into effect.

Huh. That's new to me. Thanks for the info. That may affect my view on the need for the question wiki. I'll have to think about it. Maybe I gotta take a closer look at stackexchange.

Comment by weathersystems on A Wiki for Questions · 2021-05-11T20:44:44.492Z · LW · GW

Ya I think you're basically right here. Which is why I'm not really hoping to "grow large enough to be comparable to Stack Exchange and still remain good." In fact even growing large enough and being sucky seems very hard.

My goal is just to make something that's useful to individuals. I figure if I get use out of the thing when working alone, maybe other people would too.

Comment by weathersystems on A Wiki for Questions · 2021-05-11T18:28:15.244Z · LW · GW

I'm not sure I'm getting your question.

I think MediaWiki (the software that runs both Wikipedia and this question wiki) only allows text by default. But there's no reason why the pages can't just link to relevant sources. And in fact probably some questions should be answered with just one link to the relevant Wikipedia page.

Ideally pages should synthesize relevant sources but I think just listing sources is better than nothing.

Comment by weathersystems on Challenge: know everything that the best go bot knows about go · 2021-05-11T06:33:40.443Z · LW · GW

Sure. But the question is can you know everything it knows and not be as good as it? That is, does understanding the go bot in your sense imply that you could play an even game against it?

Comment by weathersystems on A Wiki for Questions · 2021-05-11T06:31:44.399Z · LW · GW

Ah ya, I see what you're saying, and that's definitely right. The most common kind of question-asker online just wants to get their question in front of as many of the most qualified people as possible, and that's it. Unless/until the site has a large user base, that won't really be possible on the wiki.

Still, I think as long as the thing is useful to some people it may be able to grow. But it may be useful to organize my thoughts better on exactly what the value is for single users.

One example that comes to mind is the polymath project. They found it useful to start a wiki to organize their projects. If anyone else wants to come along and do a similar thing, they can just use this wiki instead of making their own.

Comment by weathersystems on A Wiki for Questions · 2021-05-11T06:10:47.168Z · LW · GW

By "network effect" do you mean this? I take the network effect to be a problem here only if the wiki requires a large amount of people to be useful. 

My hope is that the wiki should be useful even for a very small number of people. For example, I get use out of it myself just as a place to put some notes that I want to show to people and as a way of organizing my own questions.

Comment by weathersystems on Challenge: know everything that the best go bot knows about go · 2021-05-11T06:01:50.511Z · LW · GW

I'm a bit confused. What's the difference between "knowing everything that the best go bot knows" and "being able to play an even game against a go bot"? I think they're basically the same. It seems to me that you can't know everything the go bot knows without being able to beat any professional go player.

Or am I missing something?

Comment by weathersystems on Open and Welcome Thread - May 2021 · 2021-05-11T02:56:36.446Z · LW · GW

Hi y'all.

Recently I've become very interested in open research. A friend of mine gave me the tip to check out lesswrong. 

I found that lesswrong has been interested in trying to support collaborative open research (one, two, three) for a few years at least. That was the original idea behind lesswrong.com/questions. Recently Ruby explained some of their problems getting this sort of thing going with the previous approach and sketched a feature he's calling "Research Agendas." I think something like his Research Agendas seems quite useful. 

So that's what brought me here. But I've had a lot of fun reading through old top rated posts.

I just made my first post about a question centered wiki I've been working on. I guess it's a sort of self promotion, so I hope that's ok. I felt that it's the sort of thing that people here may be interested in. I'm also very interested to hear critiques of the argument I put forward in that post.

Comment by weathersystems on What questions should we ask ourselves when trying to improve something? · 2021-05-07T00:25:14.409Z · LW · GW

I added in a few more of the questions from the template that seem relevant, including the one about possible difficulties. I think what's there covers your trade-off.

Comment by weathersystems on What questions should we ask ourselves when trying to improve something? · 2021-05-07T00:08:40.515Z · LW · GW

I was thinking that the template would be something where you could just keep the sections that seem relevant and delete the rest. 

But I guess even that would start to get annoying if the thing was super long. That's a good consideration to keep in mind.

Comment by weathersystems on What are the greatest near-future risks or dangers to you as an individual? · 2021-05-06T21:36:17.961Z · LW · GW

What factors do you expect have the highest likelihood of severely compromising your own quality and/or duration of life, within the next 1, 5, or 10 years?

A family member dying.

Contracting a serious disease, or becoming severely injured from an accident. 

Some incident (medical or otherwise) using up the rest of my savings and putting me in financial instability.

How do these risks change your behavior compared to how you expect you'd act if they were less relevant to you?

I basically never think about these risks. I guess the money one I do a bit. I use far less money than I could to protect against potentially very costly events.
 

If those greatest personal risks are not things you categorize as existential risks to all of humanity, how do you divide your risk mitigation efforts between the personal-and-near-term and the global-and-long-term ones?

I rarely think in terms of risk mitigation. Maybe I should more.

Comment by weathersystems on What questions should we ask ourselves when trying to improve something? · 2021-05-06T21:27:50.400Z · LW · GW

I added "Given these problems, why are people still tolerating the status quo (if they are)?" to the template. Does that capture your idea well enough?

Comment by weathersystems on What questions should we ask ourselves when trying to improve something? · 2021-05-06T21:24:24.771Z · LW · GW

You have spelled "stakeholders" as "steak-holders", which is charming but may reduce credibility in some circumstances.

Heh. Funny mistake. Thanks.

A suggested improvement to the template: When examining the status quo, also ask "for what related problems does the status quo have a built-in solution?".

I want to make sure I understand your point here. Is the idea that sometimes we see that a system isn't solving some problem well enough, and so try to fix it. But we don't take into account the fact that the system isn't just trying to solve that problem, but other problems as well. And maybe our "fix" may be an improvement to the system in regards to the problem we're interested in, but hurt the system in regards to the other problems the system is trying to solve? (god that was long winded)

If that's the idea, I think I was trying to capture the same type of thing with "What are the strengths of the status quo (that we want to try and keep)?" But maybe I can improve the wording to make that more clear? Or do you still think you're making a separate point?

The template might benefit from a section asking what preconceptions or stereotypes surround the topic.

I like this. Not sure where to include it. From your description it seems like it should be either a top level question, under "What are the possible difficulties in making improvements in this area?", or under "What is the status quo?".

Comment by weathersystems on Is January AI a real thing? · 2021-03-26T04:49:30.188Z · LW · GW

Maybe it would help if you shared what you've been able to find out so far?