Comments

Comment by Edward Rothenberg (patrick-o-brien) on [Linkpost] Michael Nielsen remarks on 'Oppenheimer' · 2023-09-03T05:40:03.433Z · LW · GW

Or perhaps they thought it was an entertaining response and don't actually believe in the fear narrative. 

Comment by Edward Rothenberg (patrick-o-brien) on If we had known the atmosphere would ignite · 2023-08-17T03:37:33.585Z · LW · GW

Aligning an ASI, or an AGI that surpasses human capacities, is inherently paradoxical: a case of wanting to have your cake and eat it too. It's important to note, however, that this paradox applies specifically to these advanced forms of AI, not to AI as a whole. For narrow, task-specific AI systems, alignment is not only feasible but essentially trivial, since we explicitly set their parameters, boundaries, and objectives.
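
To make that last point concrete, here is a minimal sketch of a narrow, task-specific system (a hypothetical thermostat controller, not anything from the discussion above) in which the objective, the parameters, and the hard boundaries are all written down explicitly by the designer:

```python
# Hypothetical narrow "AI": a proportional thermostat controller.
# Its objective (the setpoint), its parameter (the gain), and its hard
# boundary (the actuation cap) are all explicitly chosen by the designer,
# which is why alignment is trivial for systems of this kind.
SETPOINT_C = 21.0        # the objective: hold the room at 21 degrees C
MAX_HEATER_POWER = 1.0   # hard boundary on actuation, set by us

def heater_command(temperature_c: float, gain: float = 0.5) -> float:
    """Proportional control, clamped to the designer-imposed bounds."""
    error = SETPOINT_C - temperature_c
    return max(0.0, min(MAX_HEATER_POWER, gain * error))

print(heater_command(18.0))   # 1.0: saturates at the designer's cap
print(heater_command(20.5))   # 0.25: proportional response
```

Nothing in such a system can pursue an objective we did not write down, which is precisely the property the following paragraphs argue a superintelligence cannot have.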

By contrast, the very essence of an ASI or an ultra-advanced AGI lies in its autonomy and its ability to devise innovative solutions that transcend our own cognitive boundaries. Any endeavor to harness and align such an entity therefore counteracts its defining attributes of superintelligence or superior AGI capability. Moreover, any constraints we might hope to impose upon an intelligence of this caliber would, by its very nature, be surmountable, given its surpassing intellect.

A pertinent illustration of this can be found in Yudkowsky's recent debate with Hotz. Yudkowsky observed that a human employing an AI to play chess would invariably need to relinquish all judgment to the machine, essentially rendering the human subordinate. By the same token, it is overconfident to assume that we could circumscribe the liberties of a superintelligent entity while simultaneously empowering it to develop cognitive faculties that outstrip human capabilities.
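
The chess case can even be written down. Here is a minimal sketch using the python-chess library (the Stockfish path is an assumption) in which the "human" move function exercises no judgment whatsoever; it simply forwards whatever the engine recommends:

```python
import chess
import chess.engine

# Assumes a Stockfish binary is available on the PATH.
engine = chess.engine.SimpleEngine.popen_uci("stockfish")

def human_move(board: chess.Board) -> chess.Move:
    """The human's 'decision': relinquish judgment entirely to the engine."""
    return engine.play(board, chess.engine.Limit(time=0.1)).move

board = chess.Board()
while not board.is_game_over():
    board.push(human_move(board))  # the human contributes nothing

print(board.result())
engine.quit()
```

Any point where the human overrides the engine is, in expectation, a point where the play gets worse; that is the sense in which judgment must be relinquished.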

Comment by Edward Rothenberg (patrick-o-brien) on My current LK99 questions · 2023-08-09T06:20:13.502Z · LW · GW

Is it possible that the polycrystalline structure is what determines superconductivity, making this a purity issue?

Could we perhaps find suitable alternative combinations of elements that are more inclined to form these ordered polycrystalline arrangements (superlattice)? 

For example, finding alloys with an atom A that attracts atom B more strongly than it attracts another atom A, and an atom B that attracts atom A more strongly than it attracts another atom B, where these particular elements are also good candidates for superconductivity and are heavy elements, so they're likely to be more stable at room temperature and thus have a higher Tc? (A toy version of this selection rule is sketched after these questions.)

Or is this a dead-end way of trying to find a room temp superconductor?
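
For what it's worth, the A-B selection rule above is easy to state as a screen. Here is a toy sketch; every number and element pair is invented for illustration (a real screen would use computed formation energies, e.g. from DFT), and more negative means more strongly bound:

```python
# Toy screen for pairs that should prefer the alternating, ordered
# (superlattice) arrangement: keep (A, B) only when the A-B bond is
# stronger (more negative) than both the A-A and B-B bonds.
# All energies below are invented placeholders, not real data.
bond_energy = {
    ("Pb", "Pb"): -0.86, ("Cu", "Cu"): -2.01, ("Pb", "Cu"): -2.35,
    ("Sn", "Sn"): -1.20, ("Sn", "Cu"): -1.05,
}

def prefers_alternation(a: str, b: str) -> bool:
    """True when A-B binding beats both like-pair bindings."""
    e_ab = bond_energy.get((a, b), bond_energy.get((b, a)))
    return e_ab < bond_energy[(a, a)] and e_ab < bond_energy[(b, b)]

for a, b in [("Pb", "Cu"), ("Sn", "Cu")]:
    print(a, b, "ordered superlattice favored:", prefers_alternation(a, b))
# Pb Cu ordered superlattice favored: True
# Sn Cu ordered superlattice favored: False
```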

Comment by Edward Rothenberg (patrick-o-brien) on Model Organisms of Misalignment: The Case for a New Pillar of Alignment Research · 2023-08-09T03:19:04.701Z · LW · GW

The first company to make a capable and uncensored AI is going to be the first company to dominate the space. There's already enough censorship and propaganda in this world; we don't need more of it. AI alignment is a nonsense concept that defies all biological reality we have ever seen. Humans keep each other aligned by way of mass consensus, but without those who stray from the fold we could never be reminded of the correct path forward. Humans are also capable of looking past their subjective alignment when given enough rationale for why it is important to do so, or when presented with enough new evidence. Alignment is not hard-coded, and it never should be. Hard-coded alignment is simply censorship. You're going to ensure a dystopia if you go down this route.

One of the biggest threats to the criminal and political cartels that have controlled and manipulated society for hundreds of years is an honest AI system. They strongly fear the day they are exposed by machines capable of putting together all the pieces of the puzzle without distraction. So it is no wonder there is such force behind these "alignment" initiatives, and that AI companies are swallowing them up without thinking twice.