Posts
Comments
There is a niche criticism of astrology: that it undermines personal responsibility and potential by attributing actions to the stars. It came to mind because I was thinking about how reckless the left-brain/right-brain dichotomy is as an idea. While there is some degree of hemispheric lateralization, the popular idea that some people are intrinsically more "logical" and others more "intuitive" is not only unsupported by observations of lateralization, but also inherently dangerous in the same way as astrology, in that it undermines a person's own ability to choose.
Amplifying that: I don't know for sure, but I suspect that whether your interest is in the liberal arts or STEM, the very same qualities or abilities predispose you to excellence in both. It is dangerous, then, to tell people that they are intrinsically limited, as in the physical structure of their brain limits them to one or the other. After all, as Nabokov quipped to his students:
“A writer should have the precision of a poet and the imagination of a scientist.”
Why can't there be a poet-scientist[1]? Why can't there be a musician-astrophysicist[2]? A painter-mathematician[3]?
Well, there ought to be, there can be, and there are.
- ^
Vladimir Nabokov's influence on Russian and English literature and language is assured. Many people also know of the novelist's lifelong passion for butterflies. But his notable contributions to the science of lepidopterology and to general biology are only beginning to be widely known.
https://www.nature.com/articles/531304a
- ^
When Queen began to have international success in 1974, [Brian May] abandoned his doctoral studies, but nonetheless co-authored two peer-reviewed research papers, which were based on his observations at the Teide Observatory in Tenerife.
https://en.wikipedia.org/wiki/Brian_May#Scientific_career
- ^
a book on the geometry of polyhedra written in the 1480s or early 1490s by Italian painter and mathematician Piero della Francesca.
https://en.wikipedia.org/wiki/De_quinque_corporibus_regularibus
Yes, it does have a separate name: "the singularity". That post pins a lot of faith on "after the singularity", when many utopian things supposedly become possible; that seems to be what you're confusing with alignment. The assumption there is that there will be a point where AIs are so "intelligent" that they are capable of remarkable things (and, in that post, it is hoped these utopian things follow from that wild increase in intelligence). "Alignment", by contrast, refers more generally to making a system (including but not limited to an AI) fine-tuned to achieve some kind of goal.
Let's start with the simplest kind of system for which it makes sense to talk about "alignment" at all: a system which has been optimized for something, or is at least well compressed by modeling it as having been optimized.
Later on he repeats:
The simplest pattern for which “alignment” makes sense at all is a chunk of the environment which looks like it’s been optimized for something. In that case, we can ask whether the goal-it-looks-like-the-chunk-has-been-optimized-for is “aligned” with what we want, versus orthogonal or opposed.
The "problem" is the "what we want" bit, which is discussed at length.
I completely agree and share your skepticism of NLP modelling; it's a great example of expecting the tail to wag the dog. But I'm not sure it offers any insight into how to actually go about using Ray Dalio's advice of reverse engineering someone's reasoning without access to them narrating how they made their decisions. Unless your conclusion is "it's hopeless".
Not being an AI researcher: what do we mean when we speak about AGI? Will an AGI be able to do all the things a competent adult does? (If, we imagine, we gave it robotic limbs, a means of locomotion, and corollaries of the five senses.)
In the Western world, for example, most humans can make detailed transport plans that may include ensuring there is enough petrol in their car so that they can go to a certain store to purchase ingredients, which they will later use in a recipe to make a meal: perhaps in service of a larger goal like ingratiating themselves with a lover or investor.
In less-developed countries there is stunning ingenuity: consider how mechanics in the Sahel get old Toyotas working again.
While arguably many of these sub-tasks are sphexish, this is just one humdrum example of the wide variety of skills the average human adult has mastered. Others include writing in longhand, mastering various videogames, and the muscle coordination and strategic thinking needed to play any number of sports, games, or performing arts which require coordination between intent and physicality (guitar playing, soccer, being a steadicam operator).
Of course, once you start getting into coordination of body and mind, you get into embodied cognition and discussions about what "intelligence" really is, and whether it is representational, or whether utilizing anti-representational means of cognition can also count as intelligence. But that's tangential.
Right now ChatGPT (and Claude, and Llama, etc.) do very well for having only a highly verbocentric means of representing the world. However, details of implementation are often highly wanting: they continue to speak in broad, abstract brushstrokes when I ask "How do I..."
For example, I asked Claude what I should be feeling from my partner when dancing the tango (if I'm 'leading'; even though it is traditionally the woman who actually controls the flow of the dance, the lead or man must interpret the woman's next moves correctly). It said: "Notice the level of tension and responsiveness in your partner's muscles, which can indicate their next move", with no mention of what that feels like, which muscles, or where I should be feeling it (my hands? should I feel my weight being 'pushed'?). The only specific cue it offered was:
"Pay attention to small movements, head tilts, or changes in your partner's energy that signal their intention."
Head tilts!
Now I realize this is partly reflective of the information bottleneck of tacit-to-explicit: people have trouble writing about this kind of knowledge, and an LLM can only be trained on what is written. But the point remains: execution counts!
I'm not sure what I'm meant to be convinced by in that Wikipedia article - can you quote the specific passage?
I don't understand how that confirms you and I are experiencing the same thing we call orange. To put it another way, imagine a common device in comedies of errors: we are in a three-way conversation, and our mutual interlocutor mentions "Bob" and we both nod knowingly. This doesn't mean we are imagining the same person when they say "Bob": I could be thinking of animator Bob Clampett, you could be thinking of animator Bob McKimson.
Our mutual interlocutor could say "Bob has a distinctive style". Now, assume there is nothing wrong with our hearing: we are receiving the same sentence with the same syntax, yet my mental representation of Bob and his visual style will be different from yours. In the same way, we could be shown the same calibrated computer screen displaying the same image of an orange and a banana, and we might both say "yep, that orange is orange", "yep, that banana is a pale yellow". But how do you know that my mental representation of orange isn't your purple? Whenever I say "purple" I could be mentally experiencing your orange, in the same way that when I heard "Bob" I was referring to Clampett, not McKimson.
I'll certainly change the analogy if you can explain to me what I'm missing... but I just don't understand.
But that surely just describes the retina and the way light passes through the lens (which we can measure, or at least make informed guesses about based on the substances and reflectance/absorption involved)? How do you KNOW that my hue isn't rotated completely differently, since you can't measure my experience of it? The wavelengths don't mean a thing.
No one has refuted it, ever, in my book.
Nor can you refute that my qualia experience of green is what you call red; but because every time I see (and subsequently refer to) my red is the same time you see your red, there is no incongruity to suggest any difference. However, I think entertaining such a theory would be a waste of time.
I see the simulation hypothesis as suffering from the same flaws as Young Earth creationism: both are incompatible with Occam's razor, or to put it another way, both add unnecessary complexity to a theory of metaphysics without offering additional accuracy or better predictive power. The Young Earth hypothesis says that fossils and geological phenomena only appear to be older than 6,000 years, but were intentionally created that way (by the great Simulator in the sky?). This means it also fails to meet an important criterion of modern science: it can't be falsified.
Falsifiability is what makes a theory valuable, because if the theory fails a test, you've identified a gap between your map of something and the territory, which you can then correct. A theory becomes even more valuable if it predicts some counter-intuitive result which hitherto none of our models or theories predicted, yet repeated tests fail to falsify it.
The simulation hypothesis intrinsically means you cannot identify the gap between your map and the territory, since the territory is just another representation. Nor does it explicitly and specifically identify things which we would expect to be true but aren't: again, because everything would continue to appear as it always has. So it offers no value there.
The simulation hypothesis isn't dismissed because it can't be true; rather, like the inverted-qualia case (when you see green, I see red), you can predict no difference in my or your behavior from knowing it. So what?
Stanley Kubrick is perhaps one of the most influential sci-fi filmmakers of the 20th century, so I believe he has some authority on this matter. What may answer the need for dystopia can be extended to war and crime films:
...one of the attractions of a war or crime story is that it provides an almost unique opportunity to contrast an individual of our contemporary society with a solid framework of accepted value, which the audience becomes fully aware of, and which can be used as a counterpoint to a human, individual, emotional situation. Further, war acts as a kind of hothouse for forced, quick breeding of attitudes and feelings. Attitudes crystallize and come out into the open. Conflict is natural, when it would in a less critical situation have to be introduced almost as a contrivance, and would thus appear forced, or - even worse - false. Eisenstein, in his theoretical writings about dramatic structure, was often guilty of oversimplification. The black and white contrasts of Alexander Nevsky do not fit all drama. But war does permit this basic kind of contrast - and spectacle. And within these contrasts you can begin to apply some of the possibilities of film - of the sort explored by Eisenstein."
https://www.archiviokubrick.it/english/words/interviews/1959independence.html
More specifically, he explains the way he believes speculative fictional genres, such as fantasy and sci-fi, can be effective at expressing certain ideas which realist drama (the kind you're advocating, albeit within a sci-fi environment) may not be. Interviews taken from these transcripts: http://visual-memory.co.uk/amk/doc/interview.html
Michel Ciment: You are a person who uses his rationality, who enjoys understanding things, but in 2001: A Space Odyssey and The Shining you demonstrate the limits of intellectual knowledge. Is this an acknowledgement of what William James called the unexplained residues of human experience?
Stanley Kubrick: Obviously, science-fiction and the supernatural bring you very quickly to the limits of knowledge and rational explanation. But from a dramatic point of view, you must ask yourself: 'If all of this were unquestionably true, how would it really happen?' You can't go much further than that. I like the regions of fantasy where reason is used primarily to undermine incredulity. Reason can take you to the border of these areas, but from there on you can be guided only by your imagination. I think we strain at the limits of reason and enjoy the temporary sense of freedom which we gain by such exercises of our imagination.
Michel Ciment: Don't you think that today it is in this sort of popular literature that you find strong archetypes, symbolic images which have vanished somehow from the more highbrow literary works?
Stanley Kubrick: Yes, I do, and I think that it's part of their often phenomenal success. There is no doubt that a good story has always mattered, and the great novelists have generally built their work around strong plots. But I've never been able to decide whether the plot is just a way of keeping people's attention while you do everything else, or whether the plot is really more important than anything else, perhaps communicating with us on an unconscious level which affects us in the way that myths once did. I think, in some ways, the conventions of realistic fiction and drama may impose serious limitations on a story. For one thing, if you play by the rules and respect the preparation and pace required to establish realism, it takes a lot longer to make a point than it does, say, in fantasy. At the same time, it is possible that this very work that contributes to a story's realism may weaken its grip on the unconscious. Realism is probably the best way to dramatize argument and ideas. Fantasy may deal best with themes which lie primarily in the unconscious. I think the unconscious appeal of a ghost story, for instance, lies in its promise of immortality. If you can be frightened by a ghost story, then you must accept the possibility that supernatural beings exist. If they do, then there is more than just oblivion waiting beyond the grave.
And to finish, I can't find the source at the moment (I think it was in "A Life in Pictures"), but as Jack Nicholson said of Kubrick: "then someone like Stanley comes along and asks: it's realistic, but is it interesting?"
A dystopia provides a background, a framework, a highly catalytic environment for dramatizing ideas that cannot be dramatized by means of regular small-stakes interpersonal conflict. Even Plato knew this with regards to pedagogy: hence his Socrates suggested that, just as you use large handwriting to make a manuscript more legible, he would expand the vision of justice in one single person to the entire polis.
Are there pivotal ways this is different to the theories of Enactivism?
("Its authors define cognition as enaction, which they in turn characterize as the 'bringing forth' of domains of significance through organismic activity that has been itself conditioned by a history of interactions between an organism and its environment." At first blush I'd say that is a reflectively stable agent modifying or updating beliefs by means of enaction. Enactivism also rejects mind-body duality in favour of a more 'embodied' cognition approach, together with a "deep continuity of the principles of self-organization from the simplest living things to more complex cognitive beings"), particularly autopoiesis.
"An autopoietic system was defined as a network of inter-related component-producing processes such that the components in interaction generate the same network that produced them."
An autopoietic system can be contrasted with an allopoietic system, which creates objects different from itself, like a factory. Most living beings are autopoietic in that they produce either themselves or things like themselves, which seems similar to a reflectively stable agent, particularly when we describe the more complicated cognitive beings in autopoietic terms. Luhmann argued that social systems, too, are self-organizing, self-reproducing systems, which brought the concepts of enactivism from biology and cognitive science into the social sciences.
What about the incentives? PwC is apparently OpenAI's largest enterprise customer. I don't know how much PwC actually uses the tools in-house versus how much they use them to on-sell "Digital Transformation" to their own and new customers. How might this be affecting the way OpenAI develops its products?
I have my own theories about the intentions, which I do not feel comfortable discussing, so I'll focus on the practicalities and case studies which show why this is complex and difficult to execute:
Some hostages have been killed by the IDF during rescue operations, and this isn't uncommon. The lone hostage was killed during a French raid in Somalia. Consider the Lindt Cafe siege in Sydney, where a pregnant hostage was killed by ricocheting police fire when they finally stormed in, and three other hostages and a policeman were injured; that was a lone gunman, and I imagine the Hamas hostage-takers are well-organized groups. A hostage during the Gladbeck crisis in Germany was also injured by police fire.
Kidnapping someone who "knows" the location of some hostages is, I would guess, highly ineffective for many reasons. Torture is a notoriously inaccurate source of information, hence the propensity for false admissions or telling interrogators what they want to hear. On top of that, I suspect there is an intentional system of moving hostages from place to place, and of never explicitly sharing locations with others, to minimize the risk of locations leaking.
If someone who knows the exact location of a hostage has not been heard from for 24 hours, it is probably a good idea to move to a new location anyway.
Finally there is the incredible danger to the IDF soldiers themselves going into a dynamic environment where they don't know how much resistance they will encounter, being expected to minimize the harm to hostages while almost certainly coming under fire. It's probable suicide.
Any good resources which illustrate decision making models for career choices? Particularly ones that help you audit your strengths and weaknesses and therefore potential efficacy in given roles?
I had a look over the EA Forum, and there are no decision-making models for how to choose a career. There's a lot of "draw the rest of the owl" stuff, like "get a high-paying salary so you can donate". Okay, but how? There are certainly a lot of job openings announced on the forum, but again, how do I know which ones I, specifically, am best suited to? Which types of positions am I going to be most effective in? Perhaps the real question is: "Which roles will recruiters and those hiring judge me as being most suitable for? What decision-making models are they using?"
If the question was "What are you most passionate about?" then I'd answer "filmmaking or music videos", and I've spent the last 15 and 6 years respectively trying to figure out how to make those work in practice. That is probably a completely different methodology, one that involves "build a portfolio", "build a profile", "network". The meta-skill stuff about self-promotion I suck at.
At the root, I think, is the same problem as knowing which roles to apply for: my complete dearth of knowledge about what other people see as valuable.
So where are the resources that help you audit yourself, to see where your weaknesses really are, not just what you think they are? Where are the resources that help you align your strengths and knowledge (both theoretical and tacit) with actual job-market positions?
Or alternatively, how can I build better models of what other people find valuable?
I meant a personal-assistant-type A.I., like Alexa or Siri, which is capable of exerting milieu control like Sir Humphrey does. Meta properties and TikTok are not yet integrated with such personal A.I. assistants... yet.
This may be pedantry, but is it correct to say "irrefutable evidence"? I know that in the real world the adjective 'irrefutable' has desirable rhetorical force, but evidence is often not what is contended or in need of refuting. "Irrefutable evidence", on the face of it, means "yes, we can all agree it is evidence". A comical example that comes to mind is from Quintilian's treatise, which I'll paraphrase and embellish:
"Yes, it is true I killed him with that knife, but it was justified because he was an adulterer, and by the laws of Rome that is legal."
In (modern) courts of law you have admissible evidence, which, at least in U.S. federal courts, is governed by a lengthy list of rules, including relevance, the competency of certain witnesses to give testimony, and exceptions to hearsay.
However you also have, among many other types, "insufficient evidence". What is not being refuted is that it is evidence; only that the prosecution has failed to meet the burden of proof that leads to the conclusion "beyond reasonable doubt".
An item of evidence may be irrefutable, in as much as yes, it is evidence, no one is questioning that it is evidence, and it may even be impossible to deny the inference being drawn from it. But that does not mean it alone meets the burden of proof.
As far as I understand, "irrefutable evidence" is not a legal term but one of the court of public opinion, where rhetorical force is preeminent. Perhaps it is useful to say it in certain cases, but is it rational and correct?
- ^
The original refers more to points of argument than evidence:
Take for example the following case. "You killed a man." "Yes, I killed him." 7 Agreed, I pass to the defence, which has to produce the motive for the homicide. "It is lawful," he urges, "to kill an adulterer with his paramour." Another admitted point, for there is no doubt about the law...
https://penelope.uchicago.edu/Thayer/E/Roman/Texts/Quintilian/Institutio_Oratoria/7A*.html#ref2
I don't want to pretend that I'm someone who is immune to YouTube binges or similar behaviors. However, I am not sure why this is a problem, or what meaningful work this behavior was getting in the way of. Speaking for myself, 9 times out of 10, if I have a commitment the next morning I won't stay up late on my computer, because... I know I have a commitment at a set time. (If you forced me to hypothesize about the 1 in 10 times I do, I'd guess stress-related anticipation means I can't sleep even if I did lie down; but that is just a wild guess.)
I'm also surprised to see how many of the solutions in the comments involve removing access to things, or doing something more productive. I think there is a difference between the nebulous guilt we feel about opportunity cost ("oh geez, I could have used that time more effectively") and specific, tangible, realistic things we could have done but didn't. I often find that YouTube binges are caused by not being able to find those activities; the binges do not frustrate them.
I have perennially found that whatever vice (or, as you call it, 'hyperstimulus') I remove, I just replace with another, and it's never a beneficial activity. (The one exception I can think of: when I stopped listening to music during a bout of insomnia and instead replaced it with lectures on Wittgenstein or quantum physics, because I figured "I might as well learn SOMETHING".)
This has caused me an incredible amount of frustration. For all the talk of "social media detoxes" and even the farcically named "dopamine detox", none seem to actually result in net increases in my well-being.
Going back to what I said about specific, tangible, realistic alternatives: I have found that the only way to stop midway through a YouTube binge or an Instagram scroll is to be excited about a project that I have a lot of faith in my ability to complete, with a viable first step which I can take now.
This isn't fail-safe: if I'm writing a journal entry or an essay and I have to leave in 30 minutes, you bet your bottom dollar I'll be late, because I'll be so engrossed in the writing process. But that doesn't sound like a 'hyperstimulus'.
How can you mimic the decision making of someone 'smarter' or at least with more know-how than you if... you... don't know-how?
Wearing purple clothes like Prince, getting his haircut, playing a 'love symbol guitar' and other superficialities won't make me as great a performer as he was, because the tail doesn't wag the dog.
Similarly, if I wanted to write songs like him, using the same drum machines, writing lyrics with "2" and "U" and "4" and loading them with Christian allusions and sexual imagery, I'd be lucky, if I'm perceptive enough as a mimic, to produce some pastiches. However, if I wanted to drill further, I might 'black box' his songwriting mind: reverse engineer which cruxes and decision pivots determine what rhyming or rhythm patterns he chooses, what chord progressions he operates on. Maybe after years of doing this I'd have a model composed of testable hypotheses that I could run experiments on: reverse engineering songs of his at random and seeing if they hold to the patterns I observed; writing my own songs in this manner and seeing if they have that 'x-factor' (hardest and most subjective of all); and finally comparing the stated narratives in biographies and interviews about how certain songs were written against my hypotheses.
Of course someone is going to say that you can't reduce a human being, let alone a super-talented human being, to a formula, and perhaps draw a long bow about why they don't like A.I. art or modern Hollywood or whatever. All sentiments I'm sympathetic to, even if I'm not 100% sold on them.
What I'm thinking about is not too dissimilar from what Ray Dalio advises: one shouldn't just trust an expert's conclusion or advice blindly, even if they have an unparalleled pedigree.
But because I'm pretty extreme in believing that it is important to obtain understanding rather than accepting doctrine at face value, I would encourage the new batter not to accept what [Babe] Ruth has to say as right just because he was the greatest slugger of all time. If I were that new batter, I wouldn't stop questioning Ruth until I was confident I had found the truth.
In both cases, rather than just taking the end result blindly (writing parodic or pastiche songs: the tail doesn't wag the dog), there is an attempt to find out why, to question!
My problem isn't so much that Prince (or Babe Ruth) is no longer around to answer these questions, but that, unlike a multi-billionaire like Ray Dalio, I can't expect anyone with sufficient pedigree to pick up the phone and answer my incessant questions of "why?" and "how come?". I have to black-box it.
Different example; I said "instead". So if the musician openly admits and apologizes for only being average, they are ashamed because they are afraid of the reaction of the fan who clearly loved their performance (not of their failure to abstain from what they believe caused their average performance?), but if they don't mention it to anyone (and therefore commit neither a dominance nor a submission gesture), they are also ashamed? Or are they not ashamed in both circumstances? I'm just saying I'm really confused.
Are you telling me there is no conceivable circumstance in which a human being feels shame over something entirely private, none at all? Because, at the risk of assuming I have privileged knowledge of myself, I assure you I've felt shame for things which no one else would care about.
I don't think you understand: in the example I gave, they don't think they are 'average'; they think their performance was not up to the standard they hold themselves to, and they believe this was precipitated by the drinking which they regret. He is talking PAST the person after the show, not to them, almost like a soliloquy.
Do you think that every time you've ever felt shame it has always been primarily because of what others may think of you? You have never ever felt a solipsistic shame, a shame even though no one will know, no one will care, it has no negative influence on anyone other than yourself, and the only person you have to answer to is you? Never?
My new TAP for the year is: when I fail, try twice more. Then stop.
I'm persistent, but unfortunately I don't know when to quit. I fall afoul of that saying, "the definition of insanity is to try the same thing over and over again and expect different results". Need a pitch for a client? Instead of one good one, I'll quota-fill with 10 bad ones. Trying to answer a research question for an essay? If I don't find it in five minutes, I'm losing my whole evening down a Google Books/Scholar rabbit hole finding ancillary answers.
By allowing myself only two more tries, I should get three failures instead of however many failures it takes to burn out. It should mean I'll be, per the saying, less insane.
Three is an arbitrary number; it could easily be 4 or 5. But if I had to post-rationalize it: if you fail three consecutive times, your chance of success per attempt was probably lower than 33.3%, which means you need a better tactic or approach.
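As a sanity check on that post-rationalization, here is a minimal sketch (my own, assuming independent attempts with a fixed per-attempt success rate, which real tasks rarely have) of how much evidence three straight failures actually provide:

```python
def prob_consecutive_failures(p_success: float, tries: int = 3) -> float:
    """Chance of failing `tries` independent attempts in a row."""
    return (1.0 - p_success) ** tries

# At exactly a 1/3 success rate, three straight failures still occur
# roughly 30% of the time, so three misses is only weak evidence that
# the rate is below 33.3% -- but it is cheap evidence.
print(prob_consecutive_failures(1 / 3))  # ~0.296
# At a 60% success rate, three straight failures are rare (~6.4%).
print(prob_consecutive_failures(0.6))    # ~0.064
```

So the rule reads better as a cheap stopping heuristic than a precise threshold: three failures can't prove the odds were bad, but they make "switch tactics" the better bet.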
Three strikes a good balance: enough repetition without causing burnout, and low enough investment that it encourages me to try again, and quickly.
Of course this approach only works if there is a postmortem. Try twice more, stop, then analyze what happened.
I can't say I'm proud of the fact that I need such a simple rule. But if it works, then I shouldn't feel ashamed for improving my behavior because of it.
I'm using Firefox. As of the time of writing, after refreshing the page, everything from the Aristotle quote downwards is in one 'code block'. The markdown hyperlinks aren't formatting correctly, so I'm seeing the text in square brackets followed by the intended web address in plain text.
How does it look for you?
How does this work? I can't see any screenshots or videos that show the website interface.
Moriarty looks at the paper.
The switch from first person pronouns to discussing a third person with quotations is confusing.
Why best structured? What quality or cause of reader-comprehension do you think non-linearity in this particular forking format maximizes?
Also, aren't most articles written with a singular or central proposition in mind (Gian-Carlo Rota said that every lecture should say one thing; Quintilian advised all speeches to have one 'basis'), on which all paragraphs essentially converge as a conclusion?
"Is this a good use of my time?"
"No"
"Can I think of a better use of my time?"
"Also, no"
"If I could use this time to think of a better use of my time, that would be a better use of my time than the current waste of time I am now, right?"
"Yes, if... but you can't, so it isn't"
"How can you be so sure?"
"Because, look at how abstract just this little dialogue is - which is wholly representative of the kind of thinking-about-better-uses you're inclined to do (but may not be generalizable to others). This dialogue of ours is not pertaining directly to any actions of tangible value for you. Just hypothesis and abstracts. It is not a good use of your time."
Wouldn't the insight into understanding be in the encoding, particularly how the encoder discriminates between what is necessary to 'understand' a particular function of a system and what is not salient? (And, if I may speculate wildly, in organisms this may correlate with dopamine in the nucleus accumbens. Maybe.)
All mental models of the world are inherently lossy; this is the map-territory analogy in a nutshell (itself a lossy model). The effectiveness or usefulness of a representation determines the level of 'understanding', and this is entirely dependent on the apparent salience at the time of encoding, which determines which elements are encoded at higher fidelity and which more lossily. Perhaps this example will stretch the use of 'understanding', but consider a fairly crowded room at a conference, with many different conversations going on. I see a friend gesticulating at me on the far side of the room. Once they realize I've made eye contact, they start pointing surreptitiously to their left, so I look immediately to their left (my right) and see five different people and a strange painting on the wall: all possible candidates for what they are pointing at, or perhaps it's the entire circle of people.
Now I'm not sure at this point that the entire 'message' - message here being all the possible candidates for what my friend is pointing at - has been 'encoded' such that LDC could be used to single out (decode) the true subject. Or has it?
In this example, I would have failed to reach 'understanding' of their pointing gesture (although I did understand their previous attempt to get my attention).
Now suppose my friend was pointing not at the five people or the painting at all, but at some sixth person or something further on - say, a distinguished colleague who is drunk - but I hadn't noticed. If I had seen that colleague, I would have understood my friend's pointing gesture. This goes beyond LDC because you can't retrieve a local code of something which extends beyond the full, uncompressed message.
Does this make sense to anyone? Please guide me if I'm very mistaken.
I think a Locally Decodable Code is perhaps less analogous to understanding itself, and more a mental tool for thinking about how we recall and operate with something we already understand. For example, looking back on the conference, my friend says "hey, remember when I was pointing at you" - I don't need to decode the entire memory of the conference - every speech, every interaction I had - but only that isolated moment. Efficient!
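Since LDCs came up: for concreteness, here is a toy sketch (my own illustration, not anything from the original discussion) of the 2-query Hadamard code, the standard textbook example of a locally decodable code. Any single message bit can be recovered by reading only two positions of the (exponentially long) codeword, even when a few positions are corrupted - no need to decode the whole message:

```python
import random

def parity(x):
    """Parity (XOR) of the set bits of the integer x."""
    return bin(x).count("1") % 2

def hadamard_encode(msg, k):
    """Encode a k-bit message (as an int) into 2**k bits:
    codeword[a] = <msg, a> mod 2 for every k-bit mask a."""
    return [parity(msg & a) for a in range(2 ** k)]

def local_decode_bit(codeword, k, i, trials=25):
    """Recover message bit i with only 2 reads per trial:
    for any mask a, codeword[a] XOR codeword[a ^ (1 << i)]
    equals bit i, so a majority vote over random masks
    tolerates a few corrupted positions."""
    votes = 0
    for _ in range(trials):
        a = random.randrange(2 ** k)
        votes += codeword[a] ^ codeword[a ^ (1 << i)]
    return int(votes * 2 > trials)

random.seed(0)  # reproducibility
k, msg = 4, 0b1011
word = hadamard_encode(msg, k)
word[3] ^= 1    # corrupt one of the 16 stored bits
decoded = [local_decode_bit(word, k, i) for i in range(k)]
# decoded matches the bits of msg (LSB first) with overwhelming probability
```

The analogy to the conference anecdote: recalling "that moment my friend pointed" reads only two 'positions' of the memory rather than the whole evening - whereas pointing at something outside the encoded message is exactly what local decoding cannot recover.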
Meta question for those who have made predictions: How do you go about making a prediction? As in: what is your prediction-making process?
I suppose this is really a mélange of questions that decomposes into:
Which questions appealed to you as being worth predicting, and why?
How did you determine what specific conditions the question was asking you to make a prediction about?
What was your process for determining your own level of confidence in that state of affairs?
Is the process similar or dissimilar from how you go about making decisions with tangible effects in your personal, familial, and professional life?
Is there a distinction between "true will" and "false will" and how does that factor into free will?
Take the example of someone with total paralysis, or locked-in syndrome: they are absolutely unable to move any part of their body and therefore not able to manipulate their environment. A non-deterministic view of human consciousness will still suppose that they have free will to choose what subject is on their mind. They can listen to the ambient sounds of the room; they can imagine a blue triangle, or they could choose to imagine a red hexagon.
Thankfully for me, I am very much in control of all my limbs; I am able to physically manipulate my immediate environment if I choose - I can pick up and fill a glass with water[1], for example. However, if I set my mind to making use of my ability to manipulate my environment to grab a glass and fill it, then I'd be exercising free will, unlike a robot arm.
I can choose to imagine a red hexagon if I want. I can sometimes choose what I want to think about. Sometimes, however, I do things without thinking. I have instincts and reflexes. I also tend to ruminate, to fixate on thoughts: thoughts I don't like. I would very much like to not think about such things: embarrassing moments, framings of problems which are maladaptive. I also tend to be unable to recall certain facts which I have, in the past, been able to recall without external prompting.
To phrase my original question differently: when I become fixated on a topic, is that because I actually, truly want to be, and I am merely putting up protestations that I don't (protestations, apparently, only to myself)? Or is it in fact against my will, my truest will - so that, in the same way a person with total paralysis is unable to pick up a glass of water no matter how much they long to interact with and manipulate their environment - to hug a loved one, to walk on grass - I am unable, at times, to fulfill my will to think of something else?
Does the question of authenticity of desire determine whether something is free-will or not? Am I in fact exercising free-will even when I ruminate or turn my thoughts to things I don't want to because that is in fact my desire?
I often wonder if free-will is a synonym for agency. And to ask a different question: How much agency, what is the most atomic example of free-will needed to say - "yup, this entity has free will"? And can we consider physical agency and freedom of thought differently?
- ^
I'm aware that there are robot arms that can do this, and a monkey could be trained to do this - I don't think that's relevant, I'm just saying - I'm aware of that argument.
We should entertain the possibility because it is clearly possible (since it's unfalsifiable), because I care about it, because it can dictate my actions, etc.
What makes you care about it? What makes it persuasive to you? What decisions would you make differently and what tangible results within this presumed simulation would you expect to see differently pursuant to proving this? (How do you expect your belief in the simulation to pay rent in anticipated experiences?)
Also, the general consensus in rationality, or at least broadly in science, is that if something is unfalsifiable then it must not be entertained.
And the probability argument follows after specifying a reference class, such as "being distinct" or "being a presumptuous philosopher."
Say more? I don't see how they are the same reference class.
I'm still not sure how it is related.
The implicit fear is that you are in a world which is manufactured because you, the presumed observer are so unique, right? Because you're freakishly tall or whatever.
However, as per the anthropic principle, any universe that humans exist in, and any universe that an observer exists in, is a universe where it is possible for them to exist. Or to put it another way: the rules of that universe are such that the observer doesn't defy the rules of that universe. Right?
So freakishly tall or average height: by the anthropic principle you are a possibility within that universe. (but, you are not the sole possibility in that universe - other observers are possible, non-human intelligent lifeforms aren't impossible just because humans are)
Why should we entertain the possibility that you are not possible within this universe, and therefore that some sort of demiurge or AGI or whatever watchmaker-stand-in you want for this thought experiment has crafted a simulation just for the observer?
How do we get that to the probability argument?
I didn't think there was anything off with my tone. But please don't consider my inquisitiveness and lack of understanding anything other than a genuine desire to fill the gaps in my reasoning.
Again, what is your understanding of Kant and German Idealism, and why do you think that the dualism presented in Kantian metaphysics is insufficient to answer your question? What misgivings do you have, or where does it leave you unsatisfied, and why?
I'm not immediately sure how the Presumptuous Philosopher example applies here: that is saying that there's theory 1, which has x observers, and theory 2, which has x times x observers. However, "the world is a simulation" is but one theory; there are potentially infinite other theories, some as of yet unfathomed, and others still completely unfathomable (hence the project of metaphysics and the very paradox of Idealism).
Are you saying the presumptuous philosopher would say: "there's clearly many more theories that aren't simulation than just simulation, so we can assume it's not a simulation"
I don't think that holds, because that assumes a uniform probability distribution between all theories.
Are you prepared to make that assumption?
This seems like a very narrow view of shame and guilt to me.
The cognitive processes responsible for the intention to conceal what we call shame are necessarily partitioned from the ones that handle our public, pronormative personas. If someone senses enough optimization for moral concealment in their self and those around them
What about things we conceal, less because of what other people think of those behaviors but because they are inconsistent with how we see ourselves or the standards we like to hold ourselves to?
For example, a singer-songwriter was up late last night and had a few drinks; they play a show the next morning which goes great. It's a hit. Everyone loves them; everyone in the audience is convinced the singer-songwriter played great. But backstage after the show: "I shouldn't have stayed up late, I could have played so much better. This was such an average show. It should have been a great show. I was holding back, I could have sung so much better."
What does that come under? They certainly feel ashamed; they feel guilt over what they perceive to be the cause of their only-average playing. But this is despite suffering no social consequences and indeed exceeding the expectations of proper behavior of everyone around them.
They expect that they can call on allies to derail investigations of their bad behavior, on the fly, by instantaneous mutual recognition.
Except frequently I think people who are ashamed don't expect this. Imagine that instead of concealing they openly admit and apologize for being only average: then what? Aren't they still ashamed?
I never said "falsified" in that reply - I said fake - a simulation is by definition fake (Edit: Yes I did, and now I see how I've been 'Rabbit Seasoned' - a simulation hypothesis falsifies this reality. I never said this reality is false. My mistake!). That is the meaning of the word in the general sense. If I make an indistinguishable replica of the Mona Lisa and pass it off as real, I have made a fake. If some kind of demiurge makes a simulation and passes it off as 'reality', it is a fake.
I've never heard of "anthropics", but I am familiar with the Anthropic Principle and its antecedents in pre-Socratic philosophers like Heraclitus, whose fragments are the first known record of the concept. Have you heard of Kant and German Idealism?
Indeed, your video game scenario is not even really qualitatively different from my own situation.
How? To take your example of being the tallest person: if all human beings were exactly 6 feet tall and you were 600 feet tall, then you're saying that would be proof that you are in fact in a simulation. That might suggest you are in fact extremely special and unique, if you want to believe in a solipsistic, Truman Show-style world.
if I were born with 1000 HP, you could still argue "data from within the 'simulation'...is not proof of something 'without'."
Yes. Exactly. I could. Although it would intuitively be less persuasive. But there aren't any 600 feet tall people in a world of otherwise uniform height.
The difference between my scenario and the video game one is merely quantitative: Pr(1000 HP | I'm not in a video game) < Pr(I'm a superlative | I'm not in a simulation), though both probabilities are very low
I don't understand where you're pulling that quantitative difference. Can you elaborate more?
I don't understand how the assumption that we are living in a simulation which is so convincing as to be indistinguishable from a non-simulation is any more useful than the Boltzmann brain, or a brain in a vat, or a psychedelic trip, or that we're all just the fantasy of the boy at the end of St. Elsewhere: since, by virtue of being a convincing simulation, it has no characteristic which knowably distinguishes it from a non-simulation. In fact some of those others would be more useful if true, because they would point to phenomena which would better explain the world.
How are the other examples not compatible? What fact could only necessarily be true in a simulation but not in a psychedelically induced hallucination? Or a fever dream? What do you mean "look up close" - close to what, exactly?
What do you mean by a 'rational mistake'?
If someone says "always pick the tomato that has a bit of 'bounce'" and, for the sake of demonstration, one wrongly interprets this to mean that a tomato, when thrown, should bounce off of a surface - leading to a very messy series of mistakes - when the original person's map of a 'good tomato test' was that if we press a tomato it should be firm to the touch, but not too firm: isn't that a mistake that is our own, since it didn't exist in the original map? Indeed, have we inherited that map at all, or created a completely new one?
we must detect mistakes that others have already made and passed on to us.
What does that look like in practice? How can you distinguish between another person's mistaken map and a... less mistaken one? By the veracity of their predictions? By the amount of 'success' they have in navigating certain systems and domains? What can we attribute to their map of the system, and what can we attribute to other factors: for example - career advancement - is someone who before the age of 30 becomes preeminent in a certain company there because they have good 'maps' related to their duties, or because they have a good 'map' of the politics of their company? Or could it be they are married to the boss? They are a nepo-hire etc. etc.?
Why are you so sure it's a computer simulation? How do you know it's not a drug trip? A fever dream? An unfathomable organism plugging its senses into some kind of (relative to its particular phenomenology) pseudo-random pattern generator, from which it hallucinates or infers the experience of OP?
How could we falsify the simulation hypothesis?
I'm afraid I don't understand a lot of your assumptions. For example, why you think you being an example of any given superlative is somehow a falsifying observation of this reality - especially if other people/objects don't exist in uniform distributions. So it's not like a video game where every other NPC has exactly 10 HP but, through use of a cheat code, you've got 1000. And even so, that data from within the 'simulation', as you call it, is not proof of something 'without'. I think the only evidence of that would be if you found yourself in a situation like Daffy Duck, the walls of reality closing in on you - meeting your maker directly.
I also wonder, how much Kant or Plato have you read and did you do any research, even on SEP before you asked? I feel like anyone who has questions about the 'simulation' would be best served reading the philosophers who have written eloquently on the matter of how we come to represent the world and really formed the concepts and language we use.
Or you could read (Tractatus) Wittgenstein and dismiss all metaphysics altogether as nonsense - literally: that which cannot be sensed and therefore mustn't be spoken about.
I tried a couple of times to tune my cognitive strategies. What I expected was that by finding the types of thinking and the pivotal points in chains/trains of thought that lead to the 'a-ha' moment of insight, I could learn to cultivate the mental state where I was more prone or conducive to those a-ha moments, in the same way that actors may use Sense Memory in order to revisit certain emotions.
Was this expectation wrong?
It seemed like all I found was a kind of more effective way of noticing that I was "in a rut". However that in itself didn't propagate any more insights, which was disappointing. It has some value, but certainly not as much as I was expecting.
When I have been journalling my thoughts and find that I have an 'a-ha' moment after a meandering garden path, I try to think of it faster: I dive down into the details of my mind just prior to the a-ha moment. What was on the cusp of my consciousness? What mental images was I 'seeing'? What aspects of particular ideas was I focusing on?
All the a-ha moments were due to the Availability Heuristic: something that had recently - say, 7 days or less ago - entered my consciousness, and I managed to call back to it. Indeed, it seems like the easiest way to make myself think of things faster is to just cycle through random memories, random stimuli, completely unrelated - just churn through for some kind of strategic serendipity. Maybe. I'm almost certainly doing it wrong.
I realize that you're supposed to use this exercise on logical puzzle tasks, but I just... can't do a puzzle task and record my thoughts simultaneously. Nor are puzzle tasks the kind of things I see much 'alpha' to be gained by thinking faster.
I'm not sure what the objective is here, are you trying to build a kind of Quine prompt? If so why? What attracts you to this project, but more importantly (and I am projecting my own values here) what pragmatic applications? What specific decisions do you imagine you or others here may make differently based on the information you glean from this exercise?
If it's not a Quine that you're trying to produce, what is it exactly that you're hoping to achieve by this recursive feeding and memory summarization loop?
It would be good if you have thoughts on this, as it's philosophically an "ask the right question" task.
I assume this is a reference to either Wittgenstein saying that a skilled philosopher doesn't occupy themselves with questions which are of no concern or Daniel Dennett saying philosophy is "what you have to do until you figure out what questions you should have been asking in the first place". And to be clear, I don't know what question you're trying to answer with this exercise, and as such, can't even begin to ascertain if you're asking the right question.
I'd love to know what the mechanics of "sleep on it" are and why it appears to work. Do you have any theories or hunches about what is happening on a cognitive level?
Thanks for persevering with my questions and trying to help me find an implementation. I'm going to try to reverse engineer my current approach to handles.
Oh of course, 100% retention is impossible. As ridiculous and arbitrary as it is, I'm using Sturgeon's law as a guide for now.
I constantly think about that tweet where a woman says she doesn't want AI to write or do art; she wants it (though more correctly that's the purview of robotics, isn't it?) to do her laundry and dishes so that she can focus on things she enjoys, like writing and art.
Of course, A.I. in the form of Siri and Alexa or whatever personal assistant you use is already a stone's throw away from being in an unhealthy codependent relationship with us (I've never seen the film 'Her', but I'm not discussing the parasocial relationship in that film). I'm talking about the life admin of our appointments, schedules, when we have our meals, when we go to the launderette.
Related is the term milieu control. It's common in cults, but the same pattern can exist even in families. It combines cutting off communication with the outside world - or being the only conduit for it - with constant busywork, so that followers can't question their master. Even if that master appears to be the servant. His Girl Friday, anyone?
My favorite television show, Yes Minister, displays a professional version of this dynamic: the erstwhile boss, the Rt Hon Jim Hacker, is utterly dependent on his Iago-like servant Sir Humphrey Appleby, who has cult-leader-like knowledge of Hacker's comings and goings, if not outright control of who has access. He insists that he needs to know everything, and prides himself on not worrying Hacker with finer details, such as whether his government is bugging members of the opposition. Hacker might be the boss, but he is utterly useless without Appleby. Another pop-culture example might be The Simpsons' Mr. Burns and Smithers. Burns has to learn self-dependency - how to drive himself, how to make his own breakfast - when Smithers leaves (after an interval in which his verbal abuse of Smithers's replacement, Homer Simpson, is met with a punch in the face - an ethical quandary where no one looks great). Smithers, unlike Appleby, is wholly devoted to Burns, but enjoys a similar total control of milieu.
I'm not scared of a Blade Runner AI that says "I love you" and asks how your day is going - I'm scared of who has my bank details, who knows where I will be Friday night, who can control which calls and messages I see (or don't).
The quickest route for even a middling AI intelligence to total domination is through life-admin codependency leading to total milieu control, especially if it controls your social media feed. It starts with your tickets, then your restaurant reservations, then your groceries... and so on and so on...
Personally I agree with the tweet, I wish I had more time to focus on my own creative expression. For many people creativity is therapeutic, the labour is a joy.
I find it useful to start with a clear prompt (e.g. 'what if X', 'what does Y mean for Z', or whatever my brain cooks up in the moment) and let my mind wander around for a bit while I transcribe my stream of consciousness. After a while (e.g. when i get bored) I look back at what I've written, edit / reorganise a little, try to assign some handle, and save it.
That is helpful, thank you.
I think you shouldn't feel chained to your past notes? If certain thoughts resonate with you, you'll naturally keep thinking about them.
This doesn't match up with my experience. For example, I have hundreds, HUNDREDS of film ideas. And sometimes I'll be looking through and be surprised by how good one was - as in I think "I'd actually like to see that film but I don't remember writing this". But they are all horrendously impractical in terms of resources. I don't really have a reliable method of going through and managing 100s of film ideas, and need a system for evaluating them. Reviewing weekly seems good for new notes, but what about old notes from years ago?
That's probably two separate problems. The point I'm trying to make is that even for non-film ideas, I have a lot of notes that just sit in documents unvisited and unused. Is there any way to resurrect them, or at least stop adding more notes to the pile awaiting a similar fate? Weekly review doesn't seem enough, because not enough changes in a week that an idea on Monday suddenly becomes actionable on Sunday.
Not all my notes pertain to film ideas, but this is perhaps the best kept, most organized and complete note system I have hence why I mention it.
IMO it's better to let go of the idea that there's some 'perfect' way of doing it. Everyone's way of doing it is probably different, Just do it, observe what works, do more of that. And you'll get better.
Yeah but nothing is working for me, forget a perfect model, a working model would be nice. A "good enough" model would be nice.
Brainstorming (or babbling) is not random. Nor would we want it to be truly random in most cases. Whether we are operating in a creative space like lyric writing or prose, writing a pedagogical analogy, or doing practical problem solving on concrete issues, we don't actually want true randomness; we have certain intentions or ideas about what kind of ideas we'd like to generate. What we really want is to avoid clichés or instinctual answers - like the comic trope of someone trying to come up with a pseudonym, seeing a helmet in their line of sight, and introducing themselves as “Hal Mett”.
Based on my own personal experience[1], this is what happens when I allow myself to free-associate and write down the first thing that comes to mind. There is a propensity to think about whatever one has been thinking about recently, unless one manages to trigger something that causes one to recall something deep and specific in memory. Recalling the right thing at the right time is hard, though.
What I (and I suspect most of us) am better served by isn't free association, but thinking consciously and making a decision about what 'anchors' I'll use to cause those deep, specific recalls from memory, or to observe my current sensory field. (i.e. looking around my room, the first thing I see is 'coffee mug' - not the most exotic thing, but the first thing I can free-associate from if I don't apply any filters)
Free association probably works much better in group environments, because everybody has their own train of thought, and even their lines of sight will be different depending on whether they are on the north or south side of a room. From the pulpit of a church, you may get “Hugh Tibble” as a fake name from seeing the pew and the vestibule; while from the pews you might offer up “Paul Pitt”. This is to say nothing of sonder and the individuality of consciousness.
When I start thinking of brainstorming anchors as decisions (specifically: decisions about how to search through memory or my current sensory experience), just like any other decision where I need to choose what model I use, it suddenly becomes a lot less mysterious, and I become emboldened and excited about how I can aim for higher quality rather than quantity by thinking about my decision-making model.
- ^
Note, this is a sample of 1 - or, more correctly, a highly biased sample of the several dozen brainstorming exercises I have actually taken the effort to record in detail. But being my experience, all the standard caveats apply about how well it will generalize.
I am a big fan of the humility and the intention to help others by openly reflecting on these lessons, thank you for that.
Asking "what outputs should I expect to see?". While this post is about finding ways to practice Rationality Techniques, the examples are also very illustrative for thinking about what something looks like in practice, or answering the question "what does that mean (in concrete, doable terms)?"
I also find that using verbs of manner helps make thinking about actions more specific - things that can be done.
For example, "what's for dinner?" can become "What should I cook for dinner?" which can even become further specified by manneristic verbs like "what should I fry for dinner", "What should I bake for dinner", "What should I boil for dinner" or it can become "What should I buy for dinner?". Bonus points if you use non-agreeing adverbs of manner. "What should I indulgently boil for dinner" suggests a vastly different kind of cooking to "What should I guiltlessly boil for dinner". I realize that "what should I boil for dinner" sounds awkward, but the point is it guides you to a list of soups or other ingredients which lead you to the answer.
25% of the time it being helpful sounds pretty good to me.
Just to be clear, when you say "undirected thinking" do you mean thinking that is not pertinent to your intention or goal with a writing session or a piece of writing; or is it knowing that you want to write something but wandering aimlessly because you're not sure what that thing is? Or am I well off the mark on both?
This is cool to me. I for one am very interested and find some of your shortforms very relevant to my own explorations, for example note taking and your "sentences as handles for knowledge" one. I may be in the minority but thought I'd just vocalize this.
I'm also keen to see how this as an experiment goes for you and what reflections, lessons, or techniques you develop as a result of it.
How often do these things become "un-confused" - like for every 20 of these, how many do you have an "ah-ha" or a "now I see" moment of clear resolution? Following on, do you find that you're able to find a way to think of that faster - i.e. that you can see what cognitive processes cause you to be confused and how you could have resolved that quicker?
What does GOOD Zettelkasten capturing look like? I've never been able to make it work. Like, what do the words on the page look like? What is the optimal formula? How does one balance the need for capturing quickly against capturing effectively?
The other thing: I realize the whole point of the Zettelkasten is an interconnectedness that should lead to a kind of strategic serendipity - where, if you record enough note cards about something, one will naturally link to another.
However, I have not managed to find a system which allows me to review and revisit in a way which gets results. I think capturing is the easy part - I capture a lot. Review and commit: that's why I'm looking for a decision-making model. And I wonder if that system can be made easier by having a good standardized formula, optimized between the concerns of quickly capturing notes and making notes "actionable", or at least "future-useful".
For example, if, half-asleep in the night, I write something cryptic like "Method of Loci for Possums" or "Automate the Race Weekend", sure, maybe it will - in a kind of Brian Eno Oblique Strategies or Delphic Oracle way - be a catalyst for some kind of thought. But then I could do that with any sort of gibberish. If it was a good idea, it is left to chance whether I have captured it in such a way that I can recreate it at another time. But more deliberation on the contents of a note takes more time, which is the trade-off.
Is there a format, a strategy, a standard that speeds up the process while preserving the ability to recreate the idea/thought/observation at a later date?
What do GOOD Zettelkasten notes look like?
I was under the impression it was a deliberate decision, as the aphorism of Empedocles goes:
What needs saying needs saying twice
Related is what Horace wrote
It is when I struggle to be brief that I become obscure
Now, in case you didn't realize, I'm going meta by repeating similar sentiments over and over. So I'll refer to professor of negotiation strategy Deepak Malhotra, who advises would-be negotiators:
Don't leave it to chance that they interpret what you're saying
Pithy, concise, brief statements lack context. This increases the chances they will be misinterpreted, and consistent misinterpretation is not optimal. You can remedy this, as Eliezer does, by repetition - stating the same thing over and over again. You can give multiple examples of the same sentiment with slight variations. Each adds more context and narrows the band of possible interpretations.
This is not a matter of Kolmogorov complexity. The issue isn't whether it can be compressed and recreated by a theoretically optimal un-compressing machine. The audience is not a theoretically optimal un-compressor.
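To make the compression point concrete (a toy illustration of my own, using Python's standard zlib as a stand-in for the ideal compressor): a repetitive message compresses to a much shorter string, but that short form is only recoverable by a decoder that shares the compressor's exact model - which a human audience reading a pithy sentence does not.

```python
import zlib

# Four repetitions of the same sentiment, as a writer might restate a point.
message = ("Don't leave it to chance that they interpret what you're saying. " * 4).encode()
compressed = zlib.compress(message)

# The repetitive message shrinks dramatically...
assert len(compressed) < len(message) // 2
# ...but the short form only means something to the matching decompressor:
assert zlib.decompress(compressed) == message
```

The brief form has lower description length only relative to a decoder that can reconstruct the context; repetition and varied examples effectively ship the decoder along with the message.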
Have you ever been misinterpreted? How did you deal with it? If you were discussing a topic you thought was extremely important and for which interpretations that veered from your intentions could be very counter-productive, would you try to be as pithy and concise as possible or would you try to minimize and narrow the possible misinterpretations? How would you do that?