Comments
This very closely mirrors a large chunk of a learning framework that I've formalized and written the draft of a book about. I would like to perhaps interview you about this process, how it has impacted your life, and some other things that weren't included in the scope of this article. If you're interested, please let me know; I can be easily found at LinkedIn.com/in/zoid...
As far as the exhaustion part, I feel you. I have developed severe back problems from sitting in my computer chair for 20 hours straight deconstructing my latest obsession. I'll tell you what, though: it made me a hell of a writer. I was looking through my inbox and realized Grammarly had been sending me these "stats" emails I had assumed were spam and ignored. I started looking at them and was flabbergasted by how much I was outputting. In my strongest week, I output over 1,000,000 words with 99% precision (something like 640 corrections across the million words) and a vocabulary in the top 1% of Grammarly users (20,000+ unique words). There was another week with 450k, and several in the 100k range, although my weekly average is more like 30k over the entire year. Perhaps someone knows an employee at Grammarly; do I win something? LOL...
Well, if you believe no objective morality exists, you're a moral nihilist, which is distinct from a moral relativist: relativism allows morality to exist, but only in the context of a culture or an individual.
Regarding the images: you requested a baby peacock, not a peachick. Technically, it's not incorrect, though it is a bit absurd to imagine a fully feathered baby.
On the issue of offending others: it's not your responsibility to self-censor in order to shield people from emotional reactions. In evaluating the validity of ideas, emotional responses shouldn’t be the focus. If someone is offended, they may need to develop a thicker skin.
The more significant concern, in my view, is the ego associated with intellectualism. When dealing with complex systems unlike anything we've encountered before, systems with the potential to surpass human intelligence, it's crucial not to assume we aren't being deceived. As these systems advance, we must remain vigilant and avoid blindly trusting the information we receive.
As for control, I'm skeptical it’s even possible. Intelligence and control seem to have an inverse relationship. The more intelligent a system becomes, the less we are able to manage or predict its behavior.