Comment by Selueen (selueen) on Less Wrong needs footnotes · 2021-06-22T12:33:41.988Z · LW · GW

Gwern makes a good case for the use of sidenotes and offers a few existing technical solutions.
I like how he uses them on his website, and I wonder why LW does not want to follow his example.
Are there any known problems with the idea or the existing implementations that I'm missing?

Comment by Selueen (selueen) on How can there be a godless moral world ? · 2021-06-21T14:53:45.852Z · LW · GW

"People think killing is bad" is one of the many reasons to think that "killing is bad". Other reasons might include "people die if they are killed", "I don't want to get killed", "I don't want my loved ones to get killed", "I don't want to get traumatized by killing", "I don't want to be traumatized by witnessing murder", and so on and so forth.
Lots of reasons to dislike murder. And we usually see a dislike of murder developing naturally and independently in various cultures around the world. Sometimes it's only extended to people within a group, but it is invariably there.
If we need God for that principle, how is that possible?

Or let's look at this from a slightly different perspective.
The sixth commandment states "thou shalt not kill". It's simple and strong: all killing is bad.

But do you really think all murder is always morally indefensible?
I don't know your position on any topic, so it's hard for me to guess. 
But you would probably agree that someone who killed by accident is not as evil as a serial killer. I'd expect you to feel more pity than resentment towards that person, even if by law he ends up in prison.
It's even harder if you get attacked and end up killing your attacker in self-defense. In some countries you will get jailed for that, but not in others. People generally support the defending side here, even in countries where it almost always ends with a prison sentence.
Speaking of law, what about capital punishment? It's controversial, sure, but it used to be much more normal before morals became essentially secular.
And speaking of controversies, it gets even harder in cases of euthanasia and abortion. These are hard moral topics, and I'm not sure the simple answer offered by religion holds up here either, considering all the other exceptions.
Or what about war? Soldiers do kill, but you will have to look really hard to find a religious figure denouncing soldiers who fight on their side.
None of this fits into the simple framework outlined by the "thou shalt not kill" commandment, does it?

And that's killing we are talking about. I too feel on a gut level that it's bad; I want to live in a world without killing, but the world we live in is much more complicated.
It's usually even harder when we talk about problems where, uhm, it's not about people getting killed. It's easy to agree that killing is bad (until I show up with a controversial list of exceptions), but some other norms might not be quite as intuitive.


Comment by Selueen (selueen) on Biotech to make humans afraid of AI · 2021-06-19T04:24:32.040Z · LW · GW

If the capability is there, the world has to deal with it, whoever first uses it. If the project is somewhat "use once, then burn all the notes", then it wouldn't make it much easier for anyone else to follow in their footsteps.

That's true if the capability is there already.
If the capability is maybe, possibly there, but requires a lot of research to confirm the possibility and even more to get it going, I'd suggest that we might deal with it by assessing the risks and not going down that route.
I mean, that's precisely what this community seems to think about GoF research - how is this case different?

Why do you think that this is easy to do and bad. There are currently a small number of people warning about AI. There is some scary media stories, but not enough to really do much. 

What I was really trying to say is that if you have sufficient knowledge and resources to launch a proper media campaign, it might be easy to overshoot your goal if that goal is scaring people.
Why do I think that's the case?
Because modern media excels at being scary. And any story that gains traction can snowball out of control really quickly.
And if it snowballs, most people are not going to hear or read your version of the arguments.
They would get a distorted, misunderstood and misrepresented version presented by journalists.
That is a risk.

Yes the same tech could be used for horrible brainwashy purposes, but hopefully we can avoid giving the tech to people who would use it like that.

And how do you ensure that this tech does not get into the wrong hands?
There are so, so many ways this can go wrong. What if your tech (or just the necessary research) gets stolen? What if you are secretly hoping to use it for some other purpose? What if someone else on the team does that?
Or, more realistically, do you think that the moment the CIA decides your plan is workable, they won't disappear you? That would be entirely consistent with their history and their goals.
I don't think you are so naive as to think you'd be able to hide that kind of research from them for long. I mean, you did not ask your questions in private.
And of course, there are other parties that would be willing to go to any lengths to get that tech; the CIA would not be alone in that.

I feel like the risks here are much higher than the potential benefits.


Comment by Selueen (selueen) on Biotech to make humans afraid of AI · 2021-06-18T09:47:31.876Z · LW · GW

My (admittedly limited) knowledge of psychology and neuroscience suggests that this is not currently possible. Thankfully.

I feel like if you start seriously considering things whose implications are themselves almost as bad as AI ruin in order to address potential AI ruin, you've taken a wrong turn somewhere.
If you can create a virus or something of the sort that makes people genuinely afraid of some vague abstract thing, you can make them scared of anything at all. Do I really need to spell out how that could be abused?
On the other hand, do you really need to go that far?
Launch a media campaign and you can get most of the same results without making the world much more dystopian than it already is.
The main risk here is that it's easy to scare people so much that all of the research gets shut down. And I expect that's the reason there's not much scare about it in the media yet. As far as I remember, that's why most researchers in the field were at first reluctant to admit there's any risk at all.

Comment by Selueen (selueen) on How do you establish a comfort zone in your studies? · 2021-06-09T18:45:56.986Z · LW · GW

I understand your point, and I for the most part agree. It is important to understand the basics.
What I was trying to say is: if you did not get the basics on your first attempt to learn them, maybe try to approach them differently.
Look for a different textbook, ask someone who is not your current teacher, maybe look for a popular explanation (if you are completely lost) or for a more technical one (if the original was not detailed enough), etc.
Try to learn the basics, but switch approaches if you are stuck.
I feel like it might help with motivation too, as it should be more exciting than plain repetition.

Comment by Selueen (selueen) on How do you establish a comfort zone in your studies? · 2021-06-09T17:57:15.731Z · LW · GW

It might be inefficient for pure memorization, but maybe it can help you form more accurate maps, which is more valuable in itself.
But is it the best way to help you form higher-level concepts and practice a more zoomed-out perspective? Is it the best way to understand things rather than just memorize them? I'm not sure.

I suspect it's better to look for other approaches - practical applications of newly acquired knowledge, ways to test your understanding, trying to see if you understand all the implications, maybe looking for alternative explanations, or different representations of those explanations.

I know quite a few examples of people, often much smarter than me, who struggled with the conventional way of explaining some concept, only to get it instantly once they saw an alternative explanation.
Unfortunately, I do not have a good psychological explanation for this. I was only taught bad ones (as in, practically disproved by now) when I studied psychology at university. Another reason to avoid putting too much weight on memorization, I guess.

Comment by Selueen (selueen) on Networks of Trust vs Markets · 2021-06-01T07:23:12.395Z · LW · GW

I see a few problems with trust networks that are not generally present in the markets.
I'm glad that your experience was mostly positive, but I'm aware of many examples where things are more tricky.
Part of it comes from two very different but common attitudes towards transactions between friends and family. Some people think that all work should be paid, always. Others expect and provide free help.
These positions are clearly incompatible and predictably lead to conflicts, especially when people don't communicate their position clearly. They often think their position is obviously right and don't even consider the alternative until the conflict arises.
Another problem is that in transactions with friends or relatives there's often pressure to work informally. Which is a risk - if they fuck it up, you can't even sue them. And it's not that unlikely that they do - you probably did not select them based on their responsibility or their expertise in whatever field they work in. So you might lose all the resources AND hurt your relationship on top of it.

It's not that these problems are completely unavoidable, but people do get burned.
Some personal examples, so that I don't just parrot stories I've read on the internet.

Extremely bad example.
One of my relatives was the technical director and de facto co-owner of a local ISP. De jure he was nobody; he never got around to doing all the paperwork - partly because he trusted his "friends", partly because there were some complicated issues, partly because he is rather lazy. Years and years of no consequences, until they decided to sell the company. Guess whose opinion was not considered and who got nothing out of the deal. I know it's extreme, and it's not only trust-related, but these things do happen.

Somewhat good example.
Back when I was working at an e-shop, our courier got sick. We could not realistically hire a replacement in time, and outsourcing would be extremely expensive. My boss asked our sysadmin to help the company out. I remember him discussing that decision with someone: "I know he would not decline, he is a nice guy, but he is too shy to name a fair price, and he would be disappointed if he does not get paid fairly". He ended up paying him slightly more per hour than he paid our actual courier.

To conclude, my main points are: 

  • Don't rely purely on trust if the stakes are high
  • Critically evaluate your reasons to trust the person
  • Clearly communicate your expectations and listen to theirs
  • Consider all the risks, not just the material ones

Comment by Selueen (selueen) on Aphantasia · 2021-05-29T05:40:41.270Z · LW · GW

I have not tried the square test before, and it's weird. On my first attempt I just completely failed. I've certainly seen enough squares in my life to imagine them, but it just did not happen. Then I imagined drawing that square - not the tactile sensation of it, but just the process of going from A to B to C to A - but that only gets me the 3rd type of square. I can push it to the 4th with additional effort, but I can't seem to get past that just yet. So it's far from red.
The shape is certainly easier for me to imagine than the color; colors tend to be really bleak.
It reminds me of another classic example, where they ask you to imagine an apple.
On my very first attempt I found that difficult for some reason, but after a while I had no trouble imagining any apple I want - green, red, yellow, mixed color, stem with or without a leaf, no stem, partially eaten, cut in half, partially rotten, with a worm inside it, etc.
But then again, I have a lot more experience paying attention to apples than to abstract red squares, even if I do see squares way more often. Maybe that adds to the effect. Or maybe all the possible transformations of shape distract me enough from color that I fail to notice how poor my imagination of it really is.


Comment by Selueen (selueen) on Aphantasia · 2021-05-28T14:59:45.572Z · LW · GW

When I first learned about aphantasia, I thought it described me - I don't naturally visualize when I read. But on closer inspection, I found out that I can visualize if I put some effort into it. The images might not be terribly vivid, but they are recognizable enough.
So technically I don't have aphantasia, but my experience is pretty close, and it's all kinda confusing. For most of my life, I did not even realize that this was not normal.

I was always a fast reader because of that - you can save time and mental resources by not visualizing, so that's an upside. As for downsides, I can't imagine them, haha.

Comment by Selueen (selueen) on Why Prefetch Is Broken · 2021-05-28T12:55:52.549Z · LW · GW

Thanks for your answer!

Yeah, additional requests definitely defeat the point.

I suppose any other attempt to solve this on the browser side makes no sense either, because of the same safety concerns that caused the problem in the first place?
In which case it looks like your solution is the only reasonable way to go.

Comment by Selueen (selueen) on Why Prefetch Is Broken · 2021-05-28T10:01:40.394Z · LW · GW

Where should you store it in your cache? Well, it depends what the user is going to do. If they are going to click on a link to b.test/index.html, then when they need the HTML they will be visiting b.test and so you want to store it as b.test:b.test/index.html. On the other hand, if it's going to load in an iframe, the user will still be on a.test and so you want to store it as a.test:b.test/index.html. You just don't know. Just guess?

The guess is a risky one: if you store it under the wrong key then you'll have to fetch the same resource again just to store it under the right key. Users will see double fetching.

Is there an option for browsers to fetch the resource once, but store it under both possible keys?
What would be the downside of that?
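To make sure I'm describing the same thing you are, here's a rough sketch of what I have in mind - purely conceptual, with `PartitionedCache`, `prefetch`, and the site names being my own hypothetical illustrations, not anything browsers actually expose:

```python
# Toy model of a partitioned HTTP cache: entries are keyed by
# (top-level site, resource URL) rather than by URL alone.
# A prefetched cross-site resource is fetched ONCE and stored under
# both candidate partition keys, since at prefetch time we don't yet
# know whether the user will navigate to b.test (top-level) or load
# it in an iframe while staying on a.test.

from urllib.parse import urlparse

class PartitionedCache:
    def __init__(self):
        self._store = {}  # (partition, url) -> response body

    def put(self, partition, url, body):
        self._store[(partition, url)] = body

    def get(self, partition, url):
        return self._store.get((partition, url))

def prefetch(cache, current_site, url, fetch_fn):
    """Fetch `url` once and store it under both plausible keys."""
    body = fetch_fn(url)                  # exactly one network request
    target_site = urlparse(url).hostname  # e.g. "b.test"
    cache.put(target_site, url, body)     # hit if the user navigates to b.test
    cache.put(current_site, url, body)    # hit if b.test loads in an iframe on a.test
    return body
```

The obvious cost I can see is that whichever entry goes unused wastes cache space, and I'd guess browser vendors might also worry that double-keying partially undoes the isolation that partitioning was introduced for - but I don't know whether that's the actual objection.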

What I really want to ask is: is there a way to fix the problem without changing the specification?
Would it be too technically difficult? Too costly?

Changing the specification is surely an elegant solution, but then you need everyone to learn about the changes and implement them in their work, and I feel like that process is always slow and painful, and the HTML specs are so complicated already. And then there are so many websites that are already developed but no longer properly maintained.

I'm not a web developer and my understanding of this problem is very surface-level - I apologize if my questions sound stupid to you.

Comment by Selueen (selueen) on What's your visual experience when reading technical material? · 2021-05-28T09:25:51.986Z · LW · GW

When I first learned about the phenomenon, I saw discussions by professional artists, designers, architects, animators and the like, who managed to work in these fields despite their aphantasia. It's been a while and I'm not able to find the links, and it was not a formal study to begin with, but it's so counterintuitive that I wanted to share anyway.

Comment by Selueen (selueen) on Is nuclear war indeed unlikely? · 2021-05-24T14:58:43.121Z · LW · GW

What about the first probability - the probability of emergence of Plutonia? There are many options, some are more likely, some are less. In my opinion, Russia is seriously likely to turn into Plutonia in the next decade, and it was going in that direction last 20 years. The alternative would be a democratic transformation, and, looking at similar cases, I would estimate the chance less than 50%.

How would a "democratic transformation" solve that? Do you think the current Russian government is the only reason things are getting more tense on that front?
Have you considered the Russian perspective on the issue?
For example, there's at least one country that has made quite a history of invading other countries for made-up reasons. That same country happens to spend almost half of Russia's entire GDP on its military every year.
What would happen to Russia in a world without nuclear weapons?

Comment by Selueen (selueen) on Signalling lack of familiarity with outsiders or outside knowledge, to raise status among your in-group peers? · 2021-05-24T10:26:38.195Z · LW · GW

It's not about knowledge. 

With all the truth-seeking that goes on around here, it's easy to forget that knowledge is not the ultimate value agreed upon by everyone and used as the foundation for every other value. Not even close, and for good reasons.

Knowledge is not the goal behind most things we do. Yes, studying and research happen. Lots of other, not knowledge-focused activities happen as well.

Knowledge is a byproduct of practically any activity. Whatever you do, it's hard to avoid gaining some knowledge in the process. And that is precisely why "I'll have more knowledge" is not a good argument to justify any activity. Picture the worst immoral action you can think of. If you performed it, you would be more knowledgeable about the world (probably in more ways than you can currently imagine).

But, reversed familiarity with outgroup stuff isn't familiarity with ingroup stuff, or vice versa, similar to how reversed stupidity is not intelligence

"Reversed stupidity is not intelligence" is an idea that needed to be stated because it is counterintuitive. We naturally tend to look for two matching opposite ideas that can be mapped onto a "good-bad" intensity scale. Pick any pair of antonyms for a simple example, or any pair of virtues and sins for an example more relevant to this particular discussion. Because whatever is associated with the outgroup feels like sin, and whatever is associated with the ingroup feels like virtue.

But what is it about?

It's mainly about associations.

The outgroup is bad. There are beliefs and behaviors associated with the outgroup.
Therefore these beliefs and behaviors are bad. If I show any of these beliefs and behaviors, people might think I'm bad.

Yes, this is flawed logic, but it's not something people reason about logically in the first place. That's just how we normally feel.
And it's important to remember that there can be consequences for signalling, or for a lack of signalling.
And people tend to care about those consequences way more than they care about abstract knowledge.

But of course your examples of it being harmful are valid

Some examples are cases where people in groups renowned for or labelled for some trait (e.g., jocks, supermodels) are stereotyped as "unintellectual" as part of their identity and thus hide or cut off their intellectual interests. 

The opposite is also true: I've known some people who seriously neglected their health because they associated exercise with not-so-bright folks. Notice how it has the same process behind it, but it's not related to knowledge the way your example was. And I do agree that this is not a good basis for decision-making.
It's just that it's not always quite as simple as in these examples.

But should it be that way?

Maybe not. Maybe some people can transcend all the arbitrary norms, avoid forming their own arbitrary norms, find a way to interact with everyone else without signalling, and not end up practically exiled - but I'm not hopeful.

It's fairly difficult to do in practice, and I have yet to see examples of people succeeding at it long-term.