Posts

Good AI alignment online class? 2021-10-11T00:48:19.529Z
Alexander's Shortform 2021-09-27T03:46:16.719Z
Explanations as Hard to Vary Assertions 2021-09-24T11:33:13.735Z

Comments

Comment by Alexander (alexander-1) on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-17T04:25:56.114Z · LW · GW

I read this post with great interest because it touches on some crucial and sensitive themes. I sought a lesson we could learn from this situation, and your comment captured such a lesson well.

This is reminiscent of the message of the Dune trilogy. Frank Herbert warns about society's tendencies to "give over every decision-making capacity" to a charismatic leader. Herbert said in 1979:

The bottom line of the Dune trilogy is: beware of heroes. Much better rely on your own judgment, and your own mistakes.

Comment by Alexander (alexander-1) on Open & Welcome Thread October 2021 · 2021-10-14T19:51:13.039Z · LW · GW

Hello Stephie, I set my goodreads profile to private a while back because of spam. I understand your concerns. I assure you that I do not write hate speech in my reviews or engage in any other kind of misconduct. Thanks for reaching out!

Comment by Alexander (alexander-1) on Alexander's Shortform · 2021-10-13T01:03:34.119Z · LW · GW

Fascinating question, Carmex. I am interested in the following space configurations:

  1. Conservation: when a lifeform dies, its constituents should not disappear from the system but should dissipate back into the background space.
  2. Chaos: the background space should not be empty. It should have some level of background chaos mimicking our physical environment.

I'd imagine that you'd have to encode a kind of variational free energy minimisation to enable robustness against chaos.
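To make this concrete, here is a minimal sketch of both properties — my own toy formulation in Python, not Lenia's actual update rule: a diffusion step that conserves total "matter" exactly, plus a zero-mean noise term that keeps the background chaotic without creating or destroying anything.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
grid = rng.random((N, N))  # "matter" density field

def step(grid, diffusion=0.2, chaos=0.01):
    # Conservation: diffusion redistributes mass among neighbouring
    # cells; the total is unchanged because each cell gives away
    # exactly what its neighbours receive.
    neighbours = (np.roll(grid, 1, 0) + np.roll(grid, -1, 0) +
                  np.roll(grid, 1, 1) + np.roll(grid, -1, 1))
    grid = (1 - diffusion) * grid + diffusion * neighbours / 4

    # Chaos: a perturbation with its mean subtracted sums to zero,
    # so the background stays noisy but nothing leaks in or out.
    noise = rng.normal(0.0, chaos, grid.shape)
    return grid + noise - noise.mean()

total_before = grid.sum()
for _ in range(100):
    grid = step(grid)
print(total_before, grid.sum())  # equal up to floating-point error
```

Under this scheme, a dead lifeform would simply diffuse back into the background field, which is exactly the conservation property above.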

I might play around with the simulation on my local machine when I get the chance.

Comment by Alexander (alexander-1) on Alexander's Shortform · 2021-10-11T23:33:24.202Z · LW · GW

I just came across Lenia, which is a modernisation of Conway's Game of Life. There is a video by Neat AI explaining and showcasing Lenia. Pretty cool!

Comment by Alexander (alexander-1) on The Neglected Virtue of Scholarship · 2021-10-11T10:47:21.867Z · LW · GW

This post reminded me of this quote from Bertrand Russell's epic polemic A History of Western Philosophy:

It is noteworthy that modern Platonists, almost without exception, are ignorant of mathematics, in spite of the immense importance that Plato attached to arithmetic and geometry, and the immense influence that they had on his philosophy. This is an example of the evils of specialization: a man must not write on Plato unless he has spent so much of his youth on Greek as to have had no time for the things that Plato thought important.

Comment by Alexander (alexander-1) on Apprenticeship Online · 2021-10-11T02:20:37.363Z · LW · GW

Excellent points. With the proper legal structure, it is possible to make work more open.

Have you come across Joseph Henrich's books on cultural evolution by any chance? He talks extensively about cultural learning. His books convinced me that cultural learning sets humanity apart from other animals. He cites plenty of empirical research showing that human babies outshine other primate babies primarily in their ability to learn from others.

I work in the software industry (safe to assume you do, too, given you follow Andy Matuschak?). My company has something called "shadowing": you sit in on meetings with someone more senior and watch them do their work. It is hugely underutilised in my experience, and I think that is primarily an incentive misalignment problem. I suspect that the more senior members would feel burdened by facilitating shadowing for juniors.

The recent book "Software Engineering at Google" by Hyrum Wright dedicates a significant portion to talking about mentorship and giving juniors room to grow. Giving juniors menial work and not putting thoughtful effort into developing them is a big mistake many companies make.

Comment by Alexander (alexander-1) on Apprenticeship Online · 2021-10-11T00:26:37.599Z · LW · GW

I would love to watch a livestream of a top AI researcher doing their job. I wish someone from MIRI would do that. It would be awesome to get a feel for what AI alignment research is actually like in practice.

Comment by Alexander (alexander-1) on Apprenticeship Online · 2021-10-09T21:47:00.534Z · LW · GW

Relevant to the question of how we can make it scalable for novices to enter workspaces: Stephen Wolfram has released livestreams on YouTube of his days at work.

https://youtu.be/XSO4my8mTs8

Given that most of Wolfram's work is open source, he can record it and put it out there. However, most workers and executives wouldn't be able to do that as easily, given red tape and NDAs.

Comment by Alexander (alexander-1) on Alexander's Shortform · 2021-10-07T22:19:29.028Z · LW · GW

This is not an answer to my question but a follow-up elaboration.

This quote by Jonathan Rauch from The Constitution of Knowledge attempts to address this problem:

Francis Bacon and his followers said that scientific inquiry is characterized by experimentation; logical positivists, that it is characterized by verification; Karl Popper and his followers, by falsification. All of them were right some of the time, but not always. The better generalization, perhaps the only one broad enough to capture most of what reality-based inquirers do, is that liberal science is characterized by orderly, decentralized, and impersonal social adjudication. Can the marketplace of persuasion reach some sort of stable conclusion about a proposition, or tackle it in an organized, consensual way? If so, the proposition is grist for the reality-based community, whether or not a clear consensus is reached.

However, I don't find it satisfying. Rauch focuses on persuasion and ignores explanatory power. It reminds me of this claim from The Enigma of Reason:

Whereas reason is commonly viewed as a superior means to think better on one’s own, we argue that it is mainly used in our interactions with others. We produce reasons in order to justify our thoughts and actions to others and to produce arguments to convince others to think and act as we suggest.

I will stake a strong claim: lasting persuasion is the byproduct of good explanations. Assertions that achieve better map-territory convergence or are more effective at achieving goals tend to be more persuasive in the long run. Galileo's claim that the Earth moved around the Sun was not persuasive in his day. Still, it has achieved lasting persuasion because it is a map that reflects the territory more accurately than preceding theories.

It might very well be the case that the competing theories of rationality all boil down to Bayesian optimality, i.e., generating hypotheses and updating the map based on evidence. However, not everyone is satisfied with that theory. I keep seeing the argument that rationality is subjective because there isn't a single theory, and therefore convergence on a shared understanding of reality is impossible.

A parliamentary model, with delegates for the competing theories apportioned by some metric (e.g. track record of predictive accuracy?), explicitly asserts that rationality is not dogmatic; rationality is not contingent on the existence of a single, ultimate theory. This way, the aforementioned arguments against rationality dissolve in their own contradictions.
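As a toy illustration of what I mean — all theory names and numbers below are hypothetical — such a parliament could apportion seats by track record and settle propositions by weighted vote:

```python
# Hypothetical track records: fraction of past predictions each
# competing theory of rationality got right.
track_record = {"theory_A": 0.9, "theory_B": 0.6, "theory_C": 0.3}

def delegates(track_record, seats=100):
    # Seats are apportioned in proportion to predictive accuracy
    # (rounding may gain or lose a seat in general).
    total = sum(track_record.values())
    return {t: round(seats * r / total) for t, r in track_record.items()}

def verdict(stances, seats_by_theory):
    # stances maps each theory to its position on some proposition.
    ayes = sum(seats_by_theory[t] for t, s in stances.items() if s)
    return ayes > sum(seats_by_theory.values()) / 2

seats = delegates(track_record)
print(seats)  # {'theory_A': 50, 'theory_B': 33, 'theory_C': 17}
print(verdict({"theory_A": True, "theory_B": False, "theory_C": True}, seats))
# True: the proposition passes even though one theory dissents.
```

No single theory needs to be the ultimate one; the parliament produces verdicts regardless, which is the anti-dogmatic point above.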

Comment by Alexander (alexander-1) on Alexander's Shortform · 2021-10-07T00:29:19.478Z · LW · GW

Thank you for the thoughtful response, Vladimir.

I should have worded that last sentence differently. I agree with you that the way I phrased it sounds like I have already written my conclusion at the bottom of my sheet of paper.

I am interested in a solution to the problem. There exist several theories of epistemology and decision theory, and we do not know which is "right." Would a parliamentary approach solve this problem?

Comment by Alexander (alexander-1) on Alexander's Shortform · 2021-10-06T09:04:09.424Z · LW · GW

A common criticism of rationality I come across rests upon the absence of a single, ultimate theory of rationality.

Their claim: the various theories of rationality offer differing assertions about reality and, thus, differing predictions of experiences.

Their conclusion: convergence on objective truth is impossible, and rationality is subjective (which I think is a false conclusion to draw).

I think that this problem is analogous to Moral Uncertainty. What is the solution to this problem? Does a parliamentary model similar to that proposed by Bostrom and Ord make sense here? I am sure this problem has been discussed on LessWrong or elsewhere. Please direct me to where I can learn more about this!

I would like to improve my argument against the aforementioned conclusion and to understand this problem better.

Comment by Alexander (alexander-1) on Open & Welcome Thread October 2021 · 2021-10-06T08:03:15.480Z · LW · GW

I am considering using Goodreads to manage my bookshelves electronically. For reviews, though, I plan to post links to my LessWrong reviews to avoid spending time formatting text for both editors. Formatting text for Goodreads is rather effortful.

I have found the reviews and the discussions on Goodreads to be, on average, more concerned with persuasion than explanation.

Additionally, Goodreads would benefit significantly from a more effective voting system. You can only upvote, so people with a large following tend to dominate, regardless of the veracity or eloquence of what they write.

Comment by Alexander (alexander-1) on A review of Steven Pinker's new book on rationality · 2021-10-06T02:57:58.969Z · LW · GW

Funny how the top-rated review of this book on Goodreads ignores everything Pinker says about cognitive biases and probabilistic reasoning and offers "There are no objective facts; such things are self-contradictory" as a strawman rebuttal. If that statement were true, it would itself be a contradiction.

I find it astonishing that people continue to conflate "rationality" with "objective facts" when the modern meaning of rationality acknowledges that the map is not the territory.

Comment by Alexander (alexander-1) on Open & Welcome Thread October 2021 · 2021-10-05T08:47:03.577Z · LW · GW

This is my Goodreads profile (removed link for privacy given this is the public internet). You are welcome to add me as a friend if you use Goodreads.

I am considering posting book reviews on LessWrong instead of Goodreads because I love the software quality here, especially the WYSIWYG editor. Goodreads is still stuck on an HTML editor from 1993. However, given the high epistemic standards on LessWrong, I will be slower to post here. I never expect anyone on Goodreads to ask me for a source, but here I had better be rigorous and prudent with what I say, which is a good thing!

Comment by Alexander (alexander-1) on Open & Welcome Thread October 2021 · 2021-10-05T06:59:28.029Z · LW · GW

Hello,

My name is Alexander, and I live and work as a software engineer in Australia. I studied the subtle art of computation at university and graduated some years ago. I don't know the demographics of LessWrong, but I don't imagine myself unique around here.

I am fascinated by the elegance of computation. It is stunning that we can create computers to instantiate abstract objects and their relations using physical objects and their motions and interactions.

I have been reading LessWrong for years but only recently decided to start posting and contributing towards the communal effort. I am thoroughly impressed by the high standards maintained here, both in the civility and integrity of discussions and in the quality of the software. I've only posted twice and have learnt something valuable both times.

My gateway into Rationality has primarily been through reading books. I became somewhat active on Goodreads some years ago and started posting book reviews as a fun way to engage the community and practice critical thinking and idea generation. I quickly gravitated towards Rationality books and binge-read several of them. Rationality and Science books have been formative in shaping my worldview.

Learning the art of Rationality has had a positive impact on me. I cannot prove a causal link, but it probably exists. Several of my friends have commented that conversations with me have brought them clarity and optimism in recent years. A few of them were influenced enough to start frequenting LessWrong and reading the sequences.

I found Rationality: A-Z to be written in a profound and forceful yet balanced and humane way, but most importantly, brilliantly witty. I found this quote from Church vs Taskforce awe-inspiring:

If you're explicitly setting out to build community—then right after a move is when someone most lacks community, when they most need your help. It's also an opportunity for the band to grow.

Based on my personal experience, LessWrong is doing a remarkable job building out a community around Rationality. LessWrong seems very aware of the pitfalls that can afflict this type of community.

Over on Goodreads, a common criticism I see of Rationality and Effective Altruism is a fear of cultishness (with the less legitimate critics claiming that Rationality is impossible because Hegel said the nature of reality is 'contradiction'). Such critics tend to be wary of these communities' tendency to reinforce their own biases and apply motivated skepticism to outsider ideas. However, for what it's worth, that is not what I see around here. As Eliezer elucidates in Cultish Countercultishness, it takes an unwavering effort to resist the temptation towards cultishness. I hope to see this resistance continue!

Comment by Alexander (alexander-1) on Alexander's Shortform · 2021-10-01T03:46:29.565Z · LW · GW

You make excellent points. The growth of knowledge is ultimately a process of creativity alternating with criticism, and I agree with you that idea generation is underappreciated. Outlandish ideas are met with ridicule most of the time.

This passage from Quantum Computing Since Democritus by Scott Aaronson captures this so well:

[I have changed my attitudes towards] the arguments of John Searle and Roger Penrose against “strong artificial intelligence.” I still think Searle and Penrose are wrong on crucial points, Searle more so than Penrose. But on rereading my 2006 arguments for why they were wrong, I found myself wincing at the semi-flippant tone, at my eagerness to laugh at these celebrated scholars tying themselves into logical pretzels in quixotic, obviously doomed attempts to defend human specialness. In effect, I was lazily relying on the fact that everyone in the room already agreed with me – that to these (mostly) physics and computer science graduate students, it was simply self-evident that the human brain is nothing other than a “hot, wet Turing machine,” and weird that I would even waste the class's time with such a settled question. Since then, I think I’ve come to a better appreciation of the immense difficulty of these issues – and in particular, of the need to offer arguments that engage people with different philosophical starting-points than one's own.

I think we need to strike a balance between the veracity of ideas and tolerance of their outlandishness. This topic has always fascinated me, but I don't know of a concrete criterion for effective hypothesis generation. The simplicity criterion of Occam's Razor is okay, but it is not the be-all and end-all.

Comment by Alexander (alexander-1) on Alexander's Shortform · 2021-09-30T02:54:22.079Z · LW · GW

It surely is an incentive structure problem. However, I am uncertain about the extent to which incentive structures can be "designed". They seem to come about as a result of thousands of years of culture-gene coevolution.

Peer review has a similar incentive misalignment. Why would you spend a month reviewing someone else's paper when you could write your own instead? Scott Aaronson made this point during one of his AMAs, but he didn't attempt to offer a solution.

Comment by Alexander (alexander-1) on Explanations as Hard to Vary Assertions · 2021-09-29T05:34:31.941Z · LW · GW

Incidentally, Popper also thought that you couldn't falsify a theory unless we have a non-ad hoc alternative that explains the data better.


This is so interesting. Do you know where I can read more about this? Conjectures and Refutations?

Comment by Alexander (alexander-1) on Explanations as Hard to Vary Assertions · 2021-09-29T01:47:53.887Z · LW · GW

Good points. There were several chapters in Rationality: A-Z dedicated to this. According to Max Tegmark's speculations, all mathematically possible universes exist, and we happen to be in one described by a simple Standard Model. I suspect that the question of why simple explanations are so effective in this universe is unanswerable, but it is still fun to speculate about.

Good points about the lack of emphasis on hypothesis-formation within the Bayesian paradigm. Eliezer talks about this a little in Do Scientists Already Know This Stuff?

Sir Roger Penrose—a world-class physicist—still thinks that consciousness is caused by quantum gravity. I expect that no one ever warned him against mysterious answers to mysterious questions—only told him his hypotheses needed to be falsifiable and have empirical consequences.

I long for a deeper treatment on hypothesis-formation. Any good books on that?

Comment by Alexander (alexander-1) on Explanations as Hard to Vary Assertions · 2021-09-28T23:49:37.217Z · LW · GW

In a literal sense, Eliezer said, "The roots of knowledge are in observation." If we presented this statement, in isolation, to Deutsch, he would vehemently disagree and tell us, "No, we interpret observations through explanatory theories." However, I don't think Eliezer and Deutsch disagree here. Both agree that there is a map and a territory and that the map comprises models, i.e., explanatory theories.

Comment by Alexander (alexander-1) on Explanations as Hard to Vary Assertions · 2021-09-28T23:31:51.565Z · LW · GW

I agree with you here. I made a mistake but on the bright side, I learnt a lot about the generalised form of Bayes' theorem which applies to all possible hypotheses. This was also how Eliezer explained this relationship between the posterior and the numerator in Decoherence is Falsifiable and Testable. I was trying to simplify the relationship between Bayes' theorem and Deutsch's criterion for good explanations for the sake of the post but I oversimplified too much.

I still think that Bayes' theorem and Deutsch's criterion for good explanations are compatible and that, in a practical sense, one can be explained in terms of the other, but using the generalised form of Bayes' theorem is necessary.
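For reference, the generalised form I have in mind normalises over the whole hypothesis space rather than over a single theory and its negation:

$$P(H_i \mid E) = \frac{P(E \mid H_i)\, P(H_i)}{\sum_j P(E \mid H_j)\, P(H_j)}$$

so the probability mass that leaves a refuted hypothesis gets redistributed across all rival explanations, not dumped onto a single negation.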

I updated my post to explain that this part is slightly incorrect.

Comment by Alexander (alexander-1) on Explanations as Hard to Vary Assertions · 2021-09-28T10:49:05.550Z · LW · GW

I tend to agree. It isn't easy to generalise what constitutes a successful explanation, especially as one goes higher up the layers of abstraction (as you've put it) or further out into the more infeasibly testable realm.

What do you think is an elegant way to define the phenomenon of explanation that is more general than "hard-to-vary assertions about reality"?

Comment by Alexander (alexander-1) on Alexander's Shortform · 2021-09-27T03:46:17.015Z · LW · GW

Is bias within academia ever actually avoidable?

Let us take the example of Daniel Dennett vs David Chalmers. Dennett calls philosophical zombies an "embarrassment," while Chalmers continues to double down on his conclusion that consciousness cannot be explained in purely physical terms. If Chalmers conceded and switched teams, he would become "just another philosopher," while Dennett would achieve an academic victory.

As an aspiring world-class philosopher, you have little incentive to adopt the dominant view, because if you do, you will become just another ordinary philosopher. By adopting a radically different stance, you establish an entirely new "school" and place yourself at its helm. Meanwhile, it would be considerably more effortful to reach the helm of the more well-established schools, e.g. physicalism and compatibilism.

Thus, motivated skepticism and motivated reasoning seem to me to be completely unavoidable in academia.

Comment by Alexander (alexander-1) on Explanations as Hard to Vary Assertions · 2021-09-26T21:00:44.881Z · LW · GW

Source (emphasis added by me):

Large ground based telescopes can make images as sharp as or sharper than the Hubble Space Telescope, but only if atmospheric blurring is corrected. Previously, the deformable mirrors available to do this were small, flat, and relatively inflexible. They could be used only as part of complex instruments attached to conventional telescopes.

But in this new work, one of the two mirrors that make up the telescope optics is used to make the correction directly. The new secondary mirror makes the entire correction with no other optics required, making for a more efficient and cleaner system.

Like other secondary mirrors, this one is made of glass over 2 feet in diameter and is a steeply curved dome shape. But under the surface, it is like no other. The glass is less than 2 millimeters thick (less than eight-hundredths of an inch). It literally floats in a magnetic field and changes shape in milliseconds, virtually real-time. Electro-magnetically gripped by 336 computer-controlled "actuators" that tweak it into place, nanometer by nanometer, the adaptive secondary mirror focuses star light as steadily as if Earth had no atmosphere. Astronomers can study precisely sharpened objects rather than blurry blobs of twinkling light.

Comment by Alexander (alexander-1) on Explanations as Hard to Vary Assertions · 2021-09-26T10:05:13.644Z · LW · GW

I agree; it is more a critique of Deutsch as a person than of the book. I still think it is a good book overall.

Comment by Alexander (alexander-1) on Explanations as Hard to Vary Assertions · 2021-09-26T06:11:07.563Z · LW · GW

This is a fascinating critique of David Deutsch and The Beginning of Infinity by one of his former colleagues.

It is ironic that Deutsch sees himself as an expert on counter-dogma, yet he is dogmatic about his convictions. Cultish Countercultishness springs to mind.

Comment by Alexander (alexander-1) on Explanations as Hard to Vary Assertions · 2021-09-26T04:04:09.664Z · LW · GW

Wow, this is honestly baffling. It sounds as if Deutsch doesn't know about the generalised form of Bayes' theorem (I'm sure he does know, which makes me feel worse).

You make an excellent point. Bayes' theorem can be applied to all possible hypotheses, not just T and ~T.

If a top physicist can be this biased, then I cannot be surprised by anything anymore.

Thank you very much for your response, Yoav Ravid.

Comment by Alexander (alexander-1) on Explanations as Hard to Vary Assertions · 2021-09-26T01:34:32.534Z · LW · GW

Thank you for pointing this out, by the way.  This is an important nuance. I just read this: Simple refutation of the ‘Bayesian’ philosophy of science.

By ‘Bayesian’ philosophy of science I mean the position that (1) the objective of science is, or should be, to increase our ‘credence’ for true theories, and that (2) the credences held by a rational thinker obey the probability calculus. However, if T is an explanatory theory (e.g. ‘the sun is powered by nuclear fusion’), then its negation ~T  (‘the sun is not powered by nuclear fusion’) is not an explanation at all. Therefore, suppose (implausibly, for the sake of argument) that one could quantify ‘the property that science strives to maximise’. If T had an amount q of that, then ~T would have none at all, not 1-q  as the probability calculus would require if q were a probability.

Also, the conjunction (T₁ & T₂) of two mutually inconsistent explanatory theories T₁ and T₂ (such as quantum theory and relativity) is provably false, and therefore has zero probability. Yet it embodies some understanding of the world and is definitely better than nothing.

Furthermore if we expect, with Popper, that all our best theories of fundamental physics are going to be superseded eventually, and we therefore believe their negations, it is still those false theories, not their true negations, that constitute all our deepest knowledge of physics.

What science really seeks to ‘maximise’ (or rather, create) is explanatory power.

And I am now really confused and conflicted. I would love it if someone could enlighten me on how Deutsch's definition of explanation (hard-to-vary assertions about reality) and Bayesian probability conflict with each other. I am missing something very subtle here.

For context, I am aware of Popper and falsification, but wouldn't a theory eventually become practically falsified within Bayesian updating if there is enough evidence against it?
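As a toy example of what I mean by "practically falsified" — the likelihood ratio here is made up for illustration — if each observation is, say, half as likely under T as under ~T, the posterior odds on T halve with every datum:

```python
p_T = 0.5               # prior credence in theory T
likelihood_ratio = 0.5  # P(datum | T) / P(datum | ~T), assumed per datum

for _ in range(20):
    odds = (p_T / (1 - p_T)) * likelihood_ratio  # Bayes' rule in odds form
    p_T = odds / (1 + odds)

print(p_T)  # ~9.5e-07: T is never assigned exactly zero, but it is
            # driven low enough to count as falsified for all
            # practical purposes
```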

Comment by Alexander (alexander-1) on Explanations as Hard to Vary Assertions · 2021-09-24T22:30:02.117Z · LW · GW

Oh yes, I didn't mention the differences between the worldview presented in Rationality: A-Z and that of David Deutsch.

For example, Deutsch is strongly opposed to the dogmatic nature of Empiricism, which is the sixth virtue of rationality in the LessWrong worldview. My take is that Deutsch believes explanatory theories are more foundational to our understanding of reality than our experiences or observations. He asserts that we interpret our experiences and observations of reality through explanatory theories. He further asserts that experiences and observations are not the sources of our theories: Einstein came up with Relativity without direct observational data; he didn't derive it from the perihelion precession of Mercury. Instead, experiences and observations are what we use to judge between competing explanatory theories.

I don't feel too strongly either way at this point in my journey. I think Deutsch makes a good point, but so does Eliezer. I will probably start to feel more strongly about this in one direction or the other as I study more science.

Comment by Alexander (alexander-1) on Where do (did?) stable, cooperative institutions come from? · 2021-08-30T05:14:06.532Z · LW · GW

Henrich's "The Secret of Our Success" also contains very relevant insights. It is a book about culture-gene co-evolution. He dedicates a significant portion of the book to cultural institutions.

Henrich spends the final chapters of the book talking about social institutions. He argues that social norms are especially strong and enduring when they hook into our innate psychology. For example, social norms for fairness toward foreigners will be much harder to spread and sustain than those that demand mothers care for their children.

Henrich further argues that the imposition of new formal institutions—imported from elsewhere—on other populations often creates mismatches. For example, when the USA imported and imposed state-of-the-art democratic institutions from the West in Iraq following the fall of Saddam Hussein, the expectation was that the people of Iraq would suddenly change their social norms and adapt to these new institutions, but that was not what happened.

Henrich thinks that we are bad at designing effective institutions and hopes that we will get better at this as we gain a deeper understanding of human nature and culture. Henrich thinks that until we gain this deeper understanding, our best hope is to learn from the processes of evolution by experimenting with different types of institutions and seeing what works best.

Here is Scott Alexander's review: https://slatestarcodex.com/2019/06/04/book-review-the-secret-of-our-success/