Comments
I don't have a biology background or anything, but a popular theory is that what we're seeing in the data (where different strains can be observed to develop mutations at the same spot) can be explained by convergent evolution, which may mean the variants are running out of new adaptations and converging on local maxima.
Relevant Scientific American article | Relevant bioRxiv preprint
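To make the "converging into local maxima" picture concrete, here's a minimal toy sketch (not a biological model; the fitness function and all numbers are invented): several independent lineages greedily hill-climb the same rugged fitness landscape, and unrelated lineages tend to end up with the same "mutation" at a favored site.

```python
import random

# Toy illustration (not a biological model): several independent lineages
# greedily hill-climb the same fitness landscape. Because uphill moves are
# limited, unrelated lineages often fix the same beneficial mutation -- the
# optimization analogue of convergent evolution.

random.seed(0)
L = 20  # genome length (bit string)

def fitness(genome):
    # Arbitrary invented fitness: a big reward for a "spike" mutation at
    # site 7, plus a weak background preference for 1s.
    return sum(genome) + (10 if genome[7] == 1 else 0)

def evolve(genome, steps=200):
    for _ in range(steps):
        site = random.randrange(L)
        mutant = genome.copy()
        mutant[site] ^= 1                        # flip one site
        if fitness(mutant) >= fitness(genome):   # keep neutral/beneficial changes
            genome = mutant
    return genome

lineages = [[random.randint(0, 1) for _ in range(L)] for _ in range(5)]
final = [evolve(g) for g in lineages]
print("site-7 mutation fixed in each lineage:", [g[7] for g in final])
```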
I'm slowly working on a frontpage post generally arguing that Critical Theory has value.
Hey I'm open if you want to co-write something!
I agree with your assessment that concepts from fields like Critical Theory aren't discussed enough in the aspiring-rationalist context. I think most aspiring rationalists would be interested in these alternative maps if they weren't presented as totally disconnected from our original maps.
Respectfully disagree: I don't think enforcing something like this helps facilitate personal blogposts on LessWrong. A better alternative would be to create a formal styling guide and implement a formatter that strips emojis and the like from the title string when posts are promoted to frontpage (or even in the "recent posts" list, if you want that); otherwise I don't think limiting authors' editorial choices helps the case for building community blogs.
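For concreteness, here's a rough sketch of the kind of title formatter I have in mind (the function name is hypothetical and real emoji handling is messier than this, since modifiers and joiners complicate things; this just drops characters by Unicode category):

```python
import unicodedata

def strip_decorations(title: str) -> str:
    """Drop emoji/symbol characters from a post title, keeping letters,
    digits, punctuation, and whitespace."""
    kept = []
    for ch in title:
        cat = unicodedata.category(ch)
        # 'So' = other symbols (covers most emoji), 'Sk' = modifier symbols
        # (e.g. skin-tone modifiers), 'Co'/'Cn' = private use / unassigned.
        if cat in {"So", "Sk", "Co", "Cn"}:
            continue
        kept.append(ch)
    return " ".join("".join(kept).split())  # collapse leftover whitespace

print(strip_decorations("🚀🔥 My hot take on AI timelines 🔥🚀"))
# -> "My hot take on AI timelines"
```

A formatter like this could run only at promotion time, so the author's original styling stays intact on their personal blog page.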
Of course it should be possible to invite LessWrong readers to debate politics outside the website, using any rules you choose.
This is a good point - or like how the SSC Substack has a more political comment section for people who want to debate politics within the aspiring-rationalist community.
Isn't there something to be said for recognising members of other tribes and not trying to convert or kill them?
Yeah, I mean, what else can we do? I don't think many people want to convert or eliminate people from other tribes; I'm talking more about standalone arguments here: how can we give the maximum benefit of the doubt to arguments from (presumably) people of the other tribe?
However, to be honest, I don't think we should be trying to settle debates on this side
Wholeheartedly agree. For one, I don't think debates among the major political camps today can be "settled" - maybe people here tend to be more open and we can settle some debates to some extent, but I don't see much value in that beyond practicing rationality skills.
especially since being apolitical is increasingly being slammed as political in and of itself.
I happen to be one of the people who believe this is true - but that doesn't mean we can't be apolitical anywhere: we do have the entirety of the internet at our disposal, so just go to Twitter or somewhere else for that stuff. (I'm also willing to have my mind changed on this one.)
I love the dashboard idea!
I think the energy metric can give us good intuition visualization-wise, since it's less arbitrary and many parts of consumption/progress can be seen as components of overall energy use.
If I could add something to the graph, I'd include some additional breakdowns of the energy metric:
1. We need to figure out how to measure efficiency and represent it somehow. This is squarely in the domain of epistemic uncertainty, because massive shifts in centralization/geopolitics over the last couple of decades have brought significant changes that are reflected in energy usage.
2. We could add a personal/industrial division to diagnose which parts are responsible.
3. In terms of sustainability, we could add something like a renewable/non-renewable breakdown to see where we're going.
I think it would also be useful, from a "progress measuring" perspective, to see what percentage of energy is used in transportation, manufacturing, services, computing, etc., in addition to the other parts of the dashboard you outlined (a rough sketch of these breakdowns is below).
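Here's a quick sketch of how those breakdowns could be wired together for the dashboard. All of the numbers, sector names, and splits below are made up purely for illustration; a real version would pull from actual energy statistics.

```python
# Hypothetical, made-up figures purely to illustrate the proposed breakdowns
# (share by end use, renewable vs. non-renewable, household vs. industrial).
energy_twh = {
    # sector: (total TWh, renewable TWh, household share 0-1)
    "transportation": (30_000, 1_500, 0.40),
    "manufacturing":  (25_000, 5_000, 0.05),
    "services":       (12_000, 3_600, 0.10),
    "computing":      (2_000,    800, 0.30),
}

total = sum(t for t, _, _ in energy_twh.values())
for sector, (t, renewable, household) in energy_twh.items():
    print(f"{sector:15s} {t / total:5.1%} of total | "
          f"{renewable / t:5.1%} renewable | "
          f"{household:5.1%} household / {1 - household:5.1%} industrial")
```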
I agree, they both prey on our is-ought bias.
Interesting to see this discussed in a framework about attribution.
If you're willing to engage in a little thought experiment, what levels of responsibility would you consider in this scenario:
Alice was invited to Bob's birthday party. Bob's parents prepared the party and a birthday cake, but they didn't know Alice has a severe peanut allergy. During the party Alice ate the birthday cake, which contained peanuts, and was hospitalized for a couple of months.
In this scenario I don't think Bob's parents are responsible - because as you said in a previous post, one person cannot be expected to be responsible for what's going on in another's body.
But what about this alternative scenario:
Bob's parents bought the birthday cake from a bakery which (if we're living in a developing country where something like the FDA doesn't exist) didn't label its nutrition and allergen information; everything else stays the same.
In this case I'd consider the bakery to be legally and morally responsible: since they're serving a potentially unlimited number of customers, failure to account for such important facts should not be excused by pleading ignorance.
Like allergies, depression can cause otherwise insignificant remarks/criticisms to be more harmful to a patient than to otherwise healthy people, since depressed people engage in more negative thinking about themselves than healthy people do. I'm not a medical professional, so please correct me if I'm wrong - I'm only extrapolating from my personal experience here.
My point is that since internet comments are directed at an unlimited audience, we should exercise some caution in our words when speaking publicly, even if they are only potentially harmful to others, intentionally or not.
(Also, I downvoted the parent comment since it uses unnecessary politics and tribalism as a way to avoid conversation, which isn't something we should encourage as a community.)
I'm certain the comment you're replying to was talking about YouTube Premium (formerly YouTube Red). There's a pretty good moral case for using an adblocker plus YouTube Premium: you're still compensating the creator (they actually receive more income per Premium view than from regular ad revenue) and YouTube itself, without consenting to online tracking and targeted ads.
I'd say the opposite is also pretty prevalent, especially in rhetorical statements that (intentionally or not) mislead their audience.
New sequence idea: bridging humanities lingo to the aspiring-rationalist community.
Observation: much of the current humanities lingo (e.g. that of critical theory, postmodernism, contemporary feminism) gets underrepresented or misrepresented in the LessWrong community. To verify: search for the terms above and see.
Observation: some people outside the LessWrong community (such as philosophers on r/askphilosophy) view this place in a bad light because they think we aren't taking the previous debates in philosophy on the same problems seriously.
Common knowledge: more (and more diverse) discussion almost always leads to better models of reality.
Common heuristic: the use of different languages with different contexts shapes how we view the same topic.
Hypothesis: a metaphorical bridge connecting the humanities' language (and context) to ours can help us make sense of a lot of points and discourse (often left implicit, which is characteristic of humanities subjects).
Effective debate notes: I've read the main points of every first-level comment in this thread and the author's clarification.
Epistemic status: this argument is mostly about values. I hope we can all agree on the facts I mention here, while also considering this alternative framework, which I believe is a "better map of reality".
I disagree with your conclusion because I disagree with your model for classifying tech frontiers. In the body of this article and most comments, people seem to accept your division of technology into six parts. Here's why I think this model might not be very useful: I believe it assumes every kind of innovation is "fundamentally" or "canonically" the same.
- Specifically, it ignores different areas' differing "transferability" or "interconnectivity" with other fields. For example, innovations in manufacturing/agriculture/energy/transportation/medicine generally cannot be transferred to one another directly, while innovations in information can be transferred to other fields easily. Google Scholar "machine learning" plus any of the five fields and you should be able to find plenty of literature reviews on them.
- It doesn't account for how much different fields matter to us on a "meta" level. One defining characteristic of humans is the use of complex language and theory of mind. None of the other five fields in the original framework contribute directly to language and its use, while information technology by definition includes the enhanced efficiency and accessibility of a wide variety of discussion and knowledge-sharing. Judging by the first half of the article, you might agree with this point. However, the impact of this aspect of information technology can be much bigger than that of other fields in the original classification, including:
- Simply more potential for progress: if you had told scholars 50 years ago, working in their narrowly subdivided fields, that they would one day have more preprints to read than they could get through without sleeping or eating, they'd probably ask "what's a preprint?" before even saying "no way!". Internet-based tools are constantly increasing the accessibility, and thus the quantity, of research, while simultaneously increasing its speed by streamlining the research process from knowledge acquisition to reproducibility to peer review. Results may take a while to propagate from the science world to the tech world, but this level of meta-scientific change has been unprecedented since the invention of the printing press (and I know you're not talking about the scientific frontier here, so this is just an example rather than a point).
- From nation states to human civilization: sure, technologies like social media divide us, and that's a huge problem we should think hard about, but instant communication across the globe and very good machine translation services are already transforming a large part of the human population into a group with shared values and fundamentally compatible ways of thinking (I'm roughly referring to Enlightenment-era thinking without coining a specific name for it; people outside the Western tradition don't call it that, but that doesn't mean they don't think this way), and it should be obvious that this brings technological progress.
- Economically, bits > atoms. That's meant metaphorically; what I'm trying to convey is that bits are ideas. Ideas can be copied from person to person, or from machine to machine, many times faster than physical objects, and at (mostly) zero cost. This makes even a tiny innovation in IT matter as much as huge innovations in traditional industries: an entire field can spend countless hours on more efficient ways to predict protein structures, but a machine learning model (AlphaFold 2) can match lab performance at much lower cost and with much better scalability (I know far less about biology than I do about machine learning, so please correct me if I'm wrong!).
- Cost and scalability are the central point here - one of the most important innovations in my book is the public cloud industry led by Amazon Web Services. Hiring more research assistants has diseconomies of scale: coordination, management, and communication (in a PHYSICAL environment! be sure not to have any OSHA violations - and lawyer up! - and make sure to secure your lab equipment - maybe add a guard - etc.) all become harder the more people you hire. However, if you're just adding another 100 GPUs to your infrastructure, you don't need "middle management" for them - maybe their software counterpart, but that's much cheaper, and much of it is open source (see the toy cost sketch right after this list).
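Here's a toy back-of-the-envelope model of that scaling difference. Every number is invented; the point is only the shape of the curves (people costs pick up a roughly quadratic coordination term via pairwise communication channels, while hardware scales roughly linearly), not real prices.

```python
# Toy model with invented numbers: compare the cost of scaling up research
# assistants (coordination overhead grows roughly with pairwise channels,
# n*(n-1)/2) versus adding GPUs (roughly linear in n).

def people_cost(n, salary=60_000, channel_cost=500):
    # salaries scale linearly; coordination/management overhead superlinearly
    return n * salary + (n * (n - 1) // 2) * channel_cost

def gpu_cost(n, price=15_000, ops_per_gpu=1_000):
    # hardware plus a small, roughly constant per-unit operations overhead
    return n * (price + ops_per_gpu)

for n in (10, 100, 1_000):
    print(f"n={n:5d}  people: ${people_cost(n):>13,}   gpus: ${gpu_cost(n):>13,}")
```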
So my main point is that the information revolution should really be counted as a printing-press-level innovation, and comparing it to electricity or the steam engine misses a lot of important fundamental differences of IT - unique characteristics that are already manifesting themselves everywhere. So here's my alternative framework for the original categories (roughly):
- All important technological innovation categories (impact* from least to most)
  - Helping us enhance reality
    - Manufacturing & construction
      - concrete, civil engineering, skyscrapers
    - Energy
      - non-renewable energy, fission, renewable energy, fusion
    - Transportation
      - highways, containerization, international shipping, self-driving cars
  - Helping us enhance ourselves (but physically)
    - Agriculture (not as important technology-wise, since we can already meet all the needs we have; it's just a matter of redistribution)
      - genetically modified crops, genetically engineered crops
    - Transportation (partly)
      - subways, intercontinental flights, self-driving cars
    - Medicine/Bio-*
      - polio vaccines, CRISPR, COVID-19 vaccines, immortality
  - Helping us enhance ourselves (but conceptually)
    - Information
      - alphabets, printing press, internet
Prescriptively (though this is more "predicting the future"), my belief is that although in recent years our focus has shifted from heavy-industrial innovations to more "meta" and indirect ones (including non-technical ones like communication theory), the latter have more potential than the former, for the reasons above.
*: Since no innovation comes alone, our value functions for the importance and impact of an innovation should include not only immediate impacts but also potential ones that may take longer to fully manifest but that we can already see coming.
Edit 1: add epistemic status