I have a reading list and recommendations for sources of ongoing news in the longevity/immortality/life extension space in the show notes for the recent special episode of my podcast, where my co-host Michael and I discuss ageing and immortality. We are both biology PhDs; my background is in the epigenetics of ageing and Michael's is in bone stem cells.
https://www.xenothesis.com/53_xenogenesis_ageing_and_immortality_special/
I should add "Immune: A Journey into the Mysterious System That Keeps You Alive" to that list actually.
In particular from that list I recommend these for 'coming at biology from a physics perspective':
- Scale: The Universal Laws of Growth, Innovation, Sustainability, and the Pace of Life in Organisms, Cities, Economies, and Companies by Geoffrey B. West https://www.goodreads.com/book/show/31670196-scale
- Life’s Ratchet: How Molecular Machines Extract Order from Chaos by Peter M. Hoffmann https://www.goodreads.com/book/show/16691583-life-s-ratchet
- Every Life is on Fire: How Thermodynamics Explains the Origins of Living Things by Jeremy England https://www.goodreads.com/book/show/50358530-every-life-is-on-fire
To clarify, it's the ability to lock your bootloader that I'm saying is better protection from third parties, not the proprietary nature of many of the current locks. The Heads tool, for example, which allows you to verify the integrity of your boot image in coreboot, would be a FOSS alternative that provides analogous protection. Indeed, it's not real security if it's not out there in the open for everyone to hammer with full knowledge of how it works, with some nice big bug bounties (intentional or unintentional) on the other side to incentivise some scrutiny.
Thanks for the link. The problem of how to have a cryptographic root of trust for an uploaded person, and how to maintain an ongoing state of trusted operation, is a tricky one that I'm aware people have discussed, though it's mostly well over my cryptography pay grade. The main point I was trying to get at was not primarily about uploaded brains; I'm using them as an anchor at the extreme end of a distribution that I'm arguing we are already on. The problems an uploaded brain would have in trusting its own cognition are ones we are already beginning to experience in the aspects of our cognition that we are outsourcing.
Human brains are not just general-purpose CPUs; much of our cognition is performed on the wetware equivalent of application-specific integrated circuits (ASICs), ASICs that were tuned for applications of waning relevance in the current environment. They were tuned for our environment of evolutionary adaptedness, but the modern world presents very different challenges. By analogy, it's as if they were tuned for SHA-256 hashing but Ethereum changed the hash function, so the returns have dropped. Not to mention that biology uses terrible, dirty, hacky heuristics that would make a grown engineer cry and statisticians yell 'WHY!' at the sky in existential dread. These leave us wide open to all sorts of subtle exploits that can be utilised by those who have studied the systematic errors we make, and if they don't share our interests this is a problem.
Note that I am regarding the specifics of an uploaded brain as personal data, which should be subject to privacy protections (both at the technical and policy level), and not as code. This distinction may be less clear for more sophisticated mind-upload methods which generate an abstract representation of your brain and run that. If, however, we take a conceptually simpler approach, the data/code distinction is cleaner. Let's say we have an 'image' of the brain which captures the 'coordinates' (quantum numbers) of all of the subatomic particles that make up your brain. We then run that 'image' in a physics simulation which can also emulate sensory inputs to place the uploadee in a virtual environment. The brain image is data; the physics and sensory emulation engine is code. I suspect a similarly reasonable distinction will continue to hold quite well for quite a while, even once your 'brain' data starts being represented in a more complex data structure than an N-dimensional matrix.
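To make that split concrete, here is a minimal sketch in Python (all names are hypothetical illustrations of my own, and the 'physics' is a placeholder, not any real mind-uploading API): the image is inert personal data, while the engine is generic code that runs whatever image it is handed.

```python
# Minimal sketch of the data (BrainImage) vs code (step) distinction.
# Entirely hypothetical names; the dynamics are a placeholder.
from dataclasses import dataclass
import numpy as np

@dataclass
class BrainImage:
    """Personal data: 'coordinates' (quantum numbers) of each particle."""
    coordinates: np.ndarray  # shape: (n_particles, n_quantum_numbers)

def step(image: BrainImage, sensory_input: np.ndarray) -> BrainImage:
    """Infrastructural code: one tick of the physics + sensory emulation.
    The engine is generic; it runs any brain image it is handed."""
    # Placeholder: a real engine would integrate the physics here.
    return BrainImage(coordinates=image.coordinates.copy())
```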
I actually think mind uploading is a much harder problem than many people seem to regard it as; indeed, I think it is quite possibly harder than getting to AGI de novo in code. This is for reasons related to neurobiology, imaging technology, and the computational tractability of physics simulations, and I can get into it at greater length if anyone is interested.
The fact that they exert some of that power (an ever-increasing amount) through software makes the question of the freedom of that software quite relevant to your autonomy in relation to those factors. Consider the g0v movement. When working with open government software, or at least open APIs, civic hackers have been able to get improvements in things like government budgetary transparency, the ease with which you can file your tax forms, the ability to locate retailers with face masks in stock, etc. The ability to fork the software used by institutions, do better, and essentially embarrass them into adopting the improvements because of how bad their versions are in comparison is surprisingly high leverage.
Data is its own complex problem, especially personal data, and warrants a separate discussion all of its own. In relation to free software, though, the most relevant parts are open data specifications for formats and data portability between applications, so that you are free to take your data between applications.
Yes, a lot of in-house software has terrible UX, mostly because it is often for highly specialised applications. It may also suffer from a limited budget, poor feedback cycles if it was made as a one-off by an internal team or contractor, a tiny target user group, a lack of access to UX expertise, etc.
Companies will optimise for their own workflows, no doubt, but there is often substantial overlap with common issues. Consider the work Red Hat/IBM did on PipeWire and WirePlumber, which will soon deliver a substantially improved audio experience for the Linux desktop as a result of work they were doing anyway for automotive audio systems.
I'm not that current with Blender, but I'm given to understand there have been some improvements in usability recently as it has seen wider industry adoption and efforts have been directed at improving UX. Large firms with many people using a piece of software are motivated to fund efforts to make using it easier, as that makes onboarding new employees easier. Given that Blender is a fairly technical and specialist application, I would not be surprised if it remained somewhat hard to use, but it's not as if there are no UX issues with similarly specialist proprietary apps.
I would regard the specifics of your brain as private data. The infrastructural code to take a scan of an arbitrary brain and run its consciousness is a different matter. It's the difference between application code and a config file / secrets used in deploying a specific instance. You need to be able to trust the app that is running your brain, e.g. to not feed it false inputs.
Maybe, but I would be interested to see that tested empirically by some major jurisdiction. I would bet that, in the absence of an easy option to use proprietary software, many more firms would hire developers or otherwise fund the development of features they needed for their work, including usability and design coherence. There is a lot more community incentive to make it easy to use if the community contains more businesses whose bottom lines depend on it being easy to use. I suspect proprietary software may have us stuck in a local minimum; just because some of the current solutions produce partial alignments does not mean there aren't more optimal solutions available.
Yes, I'm merely using an emulated consciousness as the idealised example of a problem that applies to non-emulated consciousnesses that are outsourcing cognitive work to computer systems outside of their control, systems which may be misaligned with their interests. This is a bigger problem for you if you are completely emulated, but still a problem if you are using computational prostheses. I say it is bottlenecking us because even its partial form seems to be undermining our ability to have rational discourse in the present.
Dan Dennett has an excellent section on a very similar subject in his book 'Freedom Evolves'. To use a computer science analogy, true telepathy would be the ability of two or more machines with different instruction set architectures to cross-compile code that is binary-compatible with the other ISA and transmit the blob to the other machine. Instead, we have to serialise to a poorly defined standard and then read from the resulting file with a library that is only a best-guess implementation of the de facto spec.
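A toy sketch of that analogy (entirely hypothetical names and formats, just to illustrate the lossiness): Alice flattens a rich internal state into words, and Bob parses the words with his own best guess at the 'spec', reconstructing something related but not binary-identical.

```python
# Two minds with incompatible internal representations can only exchange
# thoughts via a lossy, under-specified serialisation format: words.

def serialise(concept: dict) -> str:
    """Alice flattens a rich internal state into words, dropping detail."""
    return f"a {concept['intensity']} feeling about {concept['topic']}"

def deserialise(utterance: str) -> dict:
    """Bob parses the words with his own best guess at the 'spec'."""
    words = utterance.split()
    return {"intensity": words[1], "topic": words[-1], "detail": None}

alice_state = {"intensity": "vague", "topic": "mortality", "detail": 0.73}
bob_state = deserialise(serialise(alice_state))
print(bob_state)  # related to alice_state, but 'detail' is lost in transit
```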
I don't know; I'd say that guy torched a lot of future employment opportunities when he sabotaged his repos. Also obligatory: https://xkcd.com/2347/
Apologies, but I'm unclear whether you are characterising my post or my comment as "a bunch of buzzwords thrown together"; could you clarify? The post's main thrust was to make the simpler case that the more of our cognition takes place on a medium which we don't control and which is subject to external interests, the more concerned we have to be about trusting our cognition. The clearest and most extreme case of this is if your whole consciousness is running on someone else's hardware and software stack. However, I'll grant that I've not yet made the case in full that this is a bottleneck in raising the sanity waterline; perhaps that warrants a follow-up post. In outline: the asymmetric power relationship and the lack of accountability, transparency, oversight, or effective governance of the big proprietary tech platforms is undermining trust in our collective, and indeed individual, ability to discern the quality of information, and this erosion of the epistemic commons is undermining our ability to reason effectively and converge on better models. In Aumann agreement terms: common priors are distorted by amplified availability heuristics in online bubbles, and common knowledge is compromised by pseudoscience and scientific cargo cults framed in a way that is hard to distinguish from 'the real deal'. The 'honest seekers of truth' assumption is also undermined by bots, trolls, and agents provocateurs masquerading as real people while acting on behalf of entities with specific agendas. You only fix this with better governance, and I contend free software is a major part of that better governance model.
I agree that computing resources to run code on would be a more complex proposition to make available to all. My point is more that if you purchase compute you should be free to use it to perform whatever computations you wish, and arbitrary barriers should not be erected to prevent you from using it in whatever way you see fit (cough Apple, cough Sony, cough cough).
There is a distinction between lock-in and the cost of moving between standards. An Ethereum developer trying to move to another blockchain tech is generally moving from one open, well-documented standard to another. There is even the possibility of (semi-)automated conversion/migration tools, as sketched below. They are not nearly as hamstrung as the person trying to migrate from a completely undocumented, deliberately obfuscated, or even encrypted format to a different one.
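A toy illustration of why open specs lower switching costs (both formats here are hypothetical, invented for the example): when both sides are documented, migration is just a mechanical field mapping and can be automated.

```python
# Migrating a record between two documented (hypothetical) formats.
import json

def migrate(record_a: dict) -> dict:
    """Map a record from documented Format A to documented Format B."""
    return {
        "id": record_a["identifier"],          # field renamed in spec B
        "amount_wei": int(record_a["value"]),  # unit documented in both
    }

old = json.loads('{"identifier": "tx-42", "value": "1000"}')
print(json.dumps(migrate(old)))  # {"id": "tx-42", "amount_wei": 1000}
```

With an undocumented, obfuscated format, the `migrate` step above becomes a reverse-engineering project rather than a field mapping; that gap is the lock-in.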
The incentive to make software difficult to use if you are providing support has some interesting nuances, especially with open-core models, but it is somewhat cancelled out by the community's incentive to make it easy to use. Any overt attempt to make things difficult loses the project goodwill with the community on which it often somewhat depends. There can be an incentive to make a product difficult to deploy if you offer a hosted version, but this is often less of an issue if you are offering enterprise support packages where things other than just the convenient cloud hosting are the main value-add.
Free software is not without challenges when it comes to constructing viable business models, but there are some examples that are working pretty well; off the top of my head: Red Hat, SUSE, & Nextcloud.
I'm not fundamentally opposed to exceptions in specific areas if there is sufficient reason; if I found the case that AI is such an exception convincing, I might carve one out for it. In most cases, however, and specifically in the mission of raising the sanity waterline so that we collectively make better decisions on things like prioritising x-risks, I would argue that a lack of free software, and related issues of technology governance, is currently a bottleneck in raising that waterline.
I thought that the sections on identity and self-deception in this book stuck out as being done better than in other rationalist literature.
Yes, I've been looking for this post on idea inoculation and inferential distance and can't find it; I just get an error. What happened to this content?
https://www.lesswrong.com/posts/aYX6s8SYuTNaM2jh3/idea-inoculation-inferential-distance
For anyone else feeling this is less than intuitive, sone3d is, I think, likely referring to, respectively:
Idea inoculation is a very useful concept, and definitely something to bear in mind when playing the 'weak' form of the double crux game.
Correct me if I'm wrong, but I have not noticed anyone else post something linking inferential distance with double cruxing, so maybe that's what I should have emphasised in the title.
You are correct, of course. I was mostly envisioning scenarios where you have a very solid conclusion which you are attempting to convey to another party that you have good reason to believe is wrong about, or ignorant of, this conclusion. (I was also hoping for some mild comedic effect from an obvious answer.)
For the most part, if you are going into a conversation where you are attempting to impart knowledge, you are assuming that that knowledge is probably largely correct. One of the advantages of finding the crux, or 'graft point', at which you want to attach your belief network is that it usually forces you to lay out your belief structure fairly completely, and can reveal previously unnoticed or unarticulated flaws to both parties. An attentive 'imparter' should have a better chance of spotting mistakes in their reasoning if they have to lead others through that reasoning - hence the observation that if you want to really grok something you should teach it to someone else.
I made a deck of cards featuring 104 biases from the Wikipedia page on cognitive biases, to play this and related games with. You can get the image files here:
https://github.com/RichardJActon/CognitiveBiasCards
(There is also a link to a printing service where the deck is preconfigured, so you can easily buy one if you want.)
The visuals on these cards were originally created by Eric Fernandez (of http://royalsocietyofaccountplanning.blogspot.co.uk/2010/04/new-study-guide-to-help-you-memorize.html).
This is a link to a resource I came across for people wishing to teach or learn Fermi calculation. It contains a problem set, a potentially useful asset, especially for meetup planners.
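For anyone new to the technique, here is a toy worked example (my own illustrative numbers, not taken from the linked problem set): the classic 'piano tuners in Chicago' estimate, done as order-of-magnitude arithmetic.

```python
# A toy Fermi estimate: roughly how many piano tuners work in Chicago?
# Every number below is a rough guess; only the order of magnitude matters.
population = 3e6                  # people in Chicago, order of magnitude
people_per_household = 2          # rough average household size
households_with_piano = 1 / 20    # guess: ~5% of households own a piano
tunings_per_piano_per_year = 1    # pianos tuned about once a year
tunings_per_tuner_per_year = 2 * 5 * 50  # ~2/day, 5 days/wk, 50 wks/yr

pianos = population / people_per_household * households_with_piano
tuners = pianos * tunings_per_piano_per_year / tunings_per_tuner_per_year
print(f"~{tuners:.0f} piano tuners")  # ~150, within ~10x of reality
```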