I wasn't asking how most people go about deciding which goods or services to pay for in general, but whether you're noticing that they use the working-hours-by-salary equation to determine what their time is worth, i.e. to put a dollar figure on what they do in fact value it at (which isolates the time element from the effort or cognitive-load element).
I didn't specify or imply that one route took more cognitive load than the other, only that one was quicker than the other, and that differential would be one such way of revealing the value of time. (Otherwise they're not, in fact, trying to ascertain what their time is worth at all... but something else.)
Nowadays, thanks to Google Maps, using public transport is often no more complicated and takes no more effort than using Uber. But this tangent is immaterial to my question: are you noticing these people are trying to measure how much they DO value their time, or are they trying to ascertain how much they SHOULD value their time?
What do you mean "model values as upstream of decisions" what is an example of a value that could be modeled based on a type of common decision? Would "CEO fires 10% of workforce just before Christmas time" be a sensationalist example in that it appears, on the surface, to reveal the moral value "shareholder value is a greater than the peace of mind afforded to these families"?
The value of an idea is dependent on what Stuart Kauffman might call the 'adjacent possible'. Imagine someone has an idea for a film, a paragraph-long "elevator pitch" with the perfect starring role for Danny DeVito. The idea becomes more and more valuable the closer (within six degrees of separation) whoever holds it is to DeVito. If I have such an idea, it's worthless because I have no means of getting it to him.
Likewise, imagine someone has a perfect model for an electronic fuel-injection system in Ancient Greece, but just the injection system. That's virtually worthless in the absence of resources like refined petroleum, internal combustion engines (I'm not sure they even had the metallurgical skills to cast engine blocks) and, most importantly, a compelling business or defensive case to persuade those with the resources to risk developing that infrastructure.
However, ideas, especially in the early babbling/brainstorming phases, are malleable: an idea for a film that may have once suited Jerry Lewis may be well suited to Eddie Murphy or Jim Carrey because they possess certain similar qualities. Which raises the question of the integrity or identity of an idea: when is one idea different from another?
The question of identity is perhaps less important than the question of value, which is simply a matter of adjacent possibilities.
Are these people trying to determine how much they (subjectively) value their time or how much they should value their time?
Because I think if it's the former and descriptive, wouldn't the obvious approach be to look at what time-saving services they have employed recently or in the past and see how much they paid for them relative to how much time they saved? I'm referring to services or products where they could have done the task themselves, since they have the tools, abilities and freedom to commit to it, but opted to buy a machine or outsource it to someone else. (I am aware that the hidden variable of 'effort' complicates this model.) For example, in which situations will I walk or take public transport to get somewhere, and in which will I order an Uber? There's a certain crossover point where, if the time saved is enough, I'll justify the expense to myself, which would seem to be a good starting point for evaluating in descriptive terms how much I value my time.
I'm guessing that if you had enough of these examples, with the effort saved varied enough, you'd begin to get a more accurate model of how one values their time?
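To make the revealed-preference idea concrete, here is a minimal sketch (my own toy illustration with made-up numbers, not anything from the exchange above) of how such past pay-to-save-time decisions could be turned into a rough dollar figure:

```python
# Toy revealed-preference estimate of the value of one's time.
# Each record is a case where the faster, paid option was chosen over a slower,
# free alternative: (extra dollars paid, hours saved). Numbers are invented.
decisions = [
    (18.0, 0.5),   # Uber instead of a bus that is 30 minutes slower
    (35.0, 1.5),   # grocery delivery instead of shopping in person
    (120.0, 4.0),  # cleaner hired instead of spending a Saturday afternoon
]

# Each accepted trade implies: value of an hour >= extra cost / hours saved
# (ignoring the 'effort' confound mentioned above).
implied_rates = [cost / hours for cost, hours in decisions]
print("implied $/hour per decision:", [round(r, 2) for r in implied_rates])

# The steepest accepted rate is the tightest lower bound on the revealed value
# of time; declined trades (not shown) would supply upper bounds.
print("revealed lower bound:", round(max(implied_rates), 2), "$/hour")
```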
Sorry if I'm a little thick, but can you tell me if I'm on the right track: you're looking for a term or concept which describes the phenomenon where shareholders may select a CEO not directly because of his moral values but because of his aptitude for decisions which increase shareholder value, and that the CEO's decision making models would be in some way reflective of his moral values even though Moral Values are not the quality they've been selected for, right?
What do you mean by "values"? Do you mean moral values or do you mean what metrics they optimize for?
Sturgeon's Law but for ideas?
Sturgeon's Law began as a counterargument to the stigma that sci-fi writing was crappy and therefore not a legitimate medium. The argument is that 90% of any genre of writing, in fact of anything from "cars, books, cheeses, people and pins", is "crud". Although the sentiment does seem to have a precedent in the novel Lothair by British Prime Minister Benjamin Disraeli, where a Mr. Phoebus says:
"nine-tenths of existing books are nonsense, and the clever books are refutation of that nonsense. The greatest misfortune that ever befell man was the invention of printing. Printing has destroyed education"
Following on from my quest for a decision-making model for ideas: as I mentioned, Sturgeon's Law is a convenient, albeit totally arbitrary, benchmark for how many ideas should be good.
As Spanish author José Bergamín wrote (I can't track down the original):
The quality of a man's mind can generally be judged by the size of his wastepaper basket. [1]
For every 10 ideas I write down, one should be not-crud. If I have 100 film ideas (and I have more than that, many more) then 10 should be not-crud.
I think the obvious point to raise is that the opportunity cost of an idea, even if written down, is much lower than the opportunity cost of a book, as Gwern has tried to warn us. To take books as the prototypical example: there are many more people with ideas for books than people who have finished a book. Even for a single author, each book may carry with it the unborn ghosts of hundreds of never-written book ideas. So if only 1/10 of books are "not crud", perhaps that already reflects survivorship bias among ideas, because good ideas get favored and are more likely to be completed?
I know that, compared to the number of film ideas I have, my ratio of finished screenplays to film ideas is around 1 to 90. The ideas I pursue are the ones that seem most vivid and most exciting, and therefore seem like the 'best' ideas.
Which is the elephant in the room: sure, 90% of anything might be crud, but what makes it crud? What distinguishes crud, and in this case crud ideas, be they ideas for books or ideas for films, from "good" ideas?
In the meantime it seems like the easy way out is to say
"look, don't feel bad if you only have one okay idea for every nine crud ones. It's perfectly acceptable"
I could be drawing too long a bow, but this seems to recall the distinction Marvin Minsky makes between logic and common-sense thinking. Logic is a single "thin" chain of true-or-false propositions; if any single link in the chain is false, the whole chain collapses. Common sense, in his parlance, is less discrete: we can have degrees of belief in any part of a chain, and some parts of the chain will be deeper and stronger than others.
He also greatly admired a passage in Aristotle's De Anima that shows how a single object can be represented in multiple ways, which Minsky saw as being very significant to operating in the world.
"Thus the essence of a house is assigned in such a formula as ‘a shelter against destruction by wind, rain, and heat'; the physicist would describe it as 'stones, bricks, and timbers'; but there is a third possible description which would say that it was that form in that material with that purpose or end. Which, then, among these is entitled to be regarded as the genuine physicist? The one who confines himself to the material, or the one who restricts himself to the formulable essence alone? Is it not rather the one who combines both in a single formula?"
Am I conflating different things by saying this reads as similar to the idea of favoring Cross-Entropy rather than the shortest program?
Minsky extended the idea of multiple representations into what he called Papert's Principle: that how we administer and use these multiple representations together, or when we opt for one and exclude the others, is the most important part of 'mental growth'.
Some of the most crucial steps in mental growth are based not simply on acquiring new skills, but on acquiring new administrative ways to use what one already knows.
Returning to replacing axioms and how this relates to Minsky's ideas about multiple representations, take for example making an omelette. I may use a stone bench-top, a tiled backsplash, a spoon, or any sort of 'hard' surface to crack the egg. The "crack the egg" part of the process/recipe stays the same, with the same anticipated result, but the particular surface is interchangeable, backed by mental representations of the perceived hardness of many different objects.
Does any of this seem relevant or have I made some crude, tenuous connections?
How could we test the inverse? How do we test whether others believe in rare important truths? Because if they are rare, that implies we likely don't share them, and therefore don't believe they are true or important.
"Mel believes in the Law of Attraction, he believes it is very important even though it's a load of hooey"
I suppose there are "Known-Unknowns" and things which we know are significant but kept secret (i.e. Google Pagerank Algorithm, in 2008 the 'appetite' for debt in European Bond Markets was a very important belief and those who believed the right level avoided disaster), we believe there is something to believe, but don't know what the sin-qua-non belief is.
Are you alluding to Wittgenstein's final passage of the Tractatus: "Whereof one cannot speak, thereof one must remain silent"?
Yes, I think TAPs are extremely relevant here because it is about bringing attention, as you say, to the rule in the right context.
I suspect a lot of my "try to..." or "you should..." notes and instructions are Actions in search of a Trigger
I'm afraid that doesn't even come close to answering my questions about how you rank what is important or not, nor why you think visual representations are important, nor how GTD or whatever you use helps you revisit these ideas - as I said in my original post, I don't like the "Someday" bucket of that system. Could you have another go at explaining it to me?
You mentioned there are more in-depth theories; which ones do you work by? Does this influence how you decide what is an important 1, 2, or 3?
How does the visual representation help you filter, action, and actualize ideas rather than just adding them to the pile?
Thank you for the detailed response. To be honest, hearing the experience of a disorganized non-model user seems much more valuable than hearing from someone who uses it perfectly - like how you don't find yourself using tags.
DON'T write instructions like that, instead try this...
"Don't..." "Stop doing this but instead..." "when you find yourself [operative verb] try to..." headed instructions tend to be more useful and actionable for me than non-refutative instructions. Or to get meta:
Don’t start instructions with the operative verb, instead begin with “Don’t [old habit] instead…[operative verb and instruction]” or “Stop [old habit] and [operative verb and instruction]
I find I'm terrible at making an instruction, advice or a note actionable because it is exceedingly difficult to find suitable cues, situations or contexts to use them. This is further complicated by the struggle to remember the instruction correctly in the 'fog of war' as it were.
For example, Nassim Nicholas Taleb notes that people are so prone to "overcausation" that you can get most people to become loquacious by simply asking "why?" (others say 'why' can come off as too accusatory and 'how come?' is more polite). I'd like to see how true this is, but now I need to find a situation to use it in... uhhh... hmmm... okay, next time someone gives a one-word response about their weekend. Sure... now how can I remember it? In the panicky situation where a conversation grows quiet, how can I remember to ask "why?"?
Provided that an instruction or note that begins with "stop..." or "don't" does in fact describe a habit you have or recurring situation you continue to encounter, then there is already a cue you can recognize.
For example, often when I hit an impasse while brainstorming, I will absentmindedly check Instagram or a news website or here. That is a cue, and I can say "Don't check Instagram; instead write down a brief description of the next step in your brainstorming process."
To test Taleb's observation, I'd do well to think of something I often do or notice when a conversation peters out, something like "don't say 'haha yeah', ask 'why'"? (and trust I have the sense to not implement that robotically and ask ‘why?’ as a non-sequitur)
So my advice to myself: Don't write instructions or notes that begin with "try to..." "you should..." or even "write instructions that begin with refutations" but instead use "Don't... but instead" as a template.
It started with me taking notes while playing RPGs, but turned into a daily journal.
Interesting! Is that because you find that you're most creative while playing RPGs? How much detail is in those notes? How often do you find you pause the game to write one? (It reminds me of the Mitch Hedberg joke about thinking of a joke at night: he either needs to get up and write it down, or convince himself that the joke isn't that funny.)
How often do you text search for ideas? What seems to trigger revisiting an idea?
There are more in depth theories about how to actually organize your notes,
Which theories have you found suit you best, and why? How do you organize your notes?
And having captured your ideas in Obsidian, how do you go about revisiting them and ensuring that they don't remain captured but forgotten?
I'm not familiar with that app, but could you go into more detail about how you use it with regards to storing and capturing ideas?
Like, do you note down an idea instantly, say when on the bus or at the dinner table? How much detail do you put in?
How does it integrate with your to-do list or calendar or whatever productivity system, formal or informal you have? An idea may not necessarily represent a commitment just yet, so how do you use this app to revisit ideas? How often do you revisit them?
Do you organize or store your "ideas" notes differently to other notes?
It's very interesting how much culture (and I suppose population density and jurisdiction too) can affect driving style.
As for Italy, there's a throwaway line in a Dave Snowden talk about how drivers in Italy (Naples? Rome?) can be observed to follow the same basic 'rules' as birds in a flock - kind of a Simon's ant thing, where seemingly complex behavior is actually operating on a small set of simple rules.
Watching that Cruise video for the first time, I'm struck by how patient and conservative the system is when encountering obstacles or waiting for pedestrians to cross, it doesn't try to sneak or squeeze past double parked vans in a hurry for example.
I wonder how much safer roads could be if human drivers were all a little more patient? Although, as I understand it, the chief benefit of FSD is that it is highly predictable and doesn't make impulsive choices.
I don't believe it's doing a good job of delivering me 'addictive' content. Am I in denial or am I a fringe case and for the most part it is good?
I have given you the wrong impression; I assure you I have a very verbal, very long-winded inner monologue which uses a lot of words and sentences. However, I wouldn't consider it the sole or perhaps even the chief source of my planning, although sometimes it is involved in how I plan. So when the author says "verbal conscious planner", are there other 'planners' I should be excluding from my personal translation? How would I know?
I'm just wondering if there's a specific reason the author has referred to it as a VERBAL conscious planner, and whether willpower is therefore only applicable to what is verbal. Because as I understand it, especially in the dual theory of memory which divides memory into declarative/explicit memory and non-declarative/implicit memory (to which it is easy to draw an analogy with System 1 and 2, or the Elaboration Likelihood model of attitudinal change), the verbal is explicit and the non-verbal is left vague in this dichotomy.
Why refer to it as a "verbal conscious planner" - why not just say "conscious planner"? Surely the difference isn't haphazard?
"Our conscious thought processes are all the ones we are conscious of."
Could you rephrase this less tautologically? Because now I'm wondering a lot of perhaps irrelevant things, such as: is it necessary to be conscious of the content of a thought, or only that a thought is currently being held? What micro/macro level of abstraction is necessary? For example, if I'm deliberating whether to check if a pair of shoes on an online store is still discounted, am I conscious of the thought if I think "shoes on online store", or must I refer to "that pair of red Converses on ASOS"?
I just worry that this is perhaps a logocentric view of willpower.
As I understand it, the reason is that it is too computationally expensive to tailor a feed for each user. I remember seeing somewhere on the Facebook developer blog that for any given user they take a batch, let's say 1 million posts/pieces of content, from their wider network; this batch will likely overlap with, if not be the same as, the batch for many other users. From that, it personalizes and prunes down to however many items they elect to show in your feed (say 100).
Out of the millions of pieces of content posted every day, it couldn't possibly prune the whole lot down anew for every user every 24 hours; if there are 200 million people using a platform at least once every 24 hours, that quickly rockets up. So they use restricted pools which go through successive filters. The problem is that if you're not in the right pool to begin with, or your interests span multiple pools, then there is no hope of getting content that is tailored to you.
This is a half-remembered explanation of a much more intricate and multi-layered process as described by Facebook, and I don't know how similar/different Youtube, TikTok and other platforms are.
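For what it's worth, here is a toy sketch of the pool-plus-successive-filters shape I'm describing (my own reconstruction of the half-remembered blog post, with invented field names, not Facebook's actual pipeline):

```python
# Toy candidate-pool + successive-filter feed builder (illustrative only).

def candidate_pool(user, all_posts, pool_size=1_000_000):
    # Stage 1: cheap retrieval - grab a big batch from the user's wider network.
    # In practice the pool is shared across many similar users, which is the
    # cost-saving trick (and why tailoring fails if you don't fit a pool well).
    return [p for p in all_posts if p["network"] in user["networks"]][:pool_size]

def coarse_filter(posts, user):
    # Stage 2: drop obviously unwanted items with cheap rules.
    return [p for p in posts if p["topic"] not in user["hidden_topics"]]

def rank_and_prune(posts, feed_size=100):
    # Stage 3: run the (relatively) expensive ranking model only on survivors.
    return sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)[:feed_size]

def build_feed(user, all_posts):
    return rank_and_prune(coarse_filter(candidate_pool(user, all_posts), user))

if __name__ == "__main__":
    user = {"networks": {"friends"}, "hidden_topics": {"memes"}}
    posts = [
        {"network": "friends", "topic": "photography", "predicted_engagement": 0.8},
        {"network": "friends", "topic": "memes", "predicted_engagement": 0.9},
        {"network": "strangers", "topic": "golf", "predicted_engagement": 0.7},
    ]
    print(build_feed(user, posts))  # only the friends/photography post survives
```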
I've yet to see a personalized filter bubble, I've certainly been part of some nebulous other's filter bubble, but I am constantly and consciously struggling to get social media websites to serve up feeds of content I WANT to interact with.
Now, while I am in someone else's bubble - there is certainly a progressive slant to most of the external or reshared content I see on Meta platforms (and it seems to be American-centric despite the fact that I'm not in the United States) - it took me an incredibly long time to curate my 'For You/Explore' page in Instagram: to remove all 'meme' videos, basically anything with a white background and text like "wait for it" or "watch until the end", or intentionally impractical craft projects. Only through aggressively using the "not interested" option on almost every item on the For You page does it now show me predominantly high-fashion photography.
Alas, one stray follow (i.e., following a friend of a friend) or like and Voompf the bubble bursts and I'll start seeing stuff I have no interest in like "top 5 almond croissants" or "homemade vicks" - I have never EVER liked any local cuisine content ever and I'm really against the idea of getting medicinal craft projects from popular reels, sorry.
I suspect the filter bubble is grossly exaggerated (counter-theory: I am not profitable enough for the algorithm to build such a bubble). I suspect much the same about the fear and shock when you see Google ads that match a conversation you had just 3 hours ago (although there is a lot of truth to the idea that smart devices are spying on you). Likely this is a combination of good old-fashioned clustering illusion and the ego-deflating idea that the topic of that conversation just isn't that unique or uncommon. I've had three different people DM me the same OK Go Reel in one week recently. I never interacted with it directly - clearly the algorithm was aggressively pumping that to EVERYONE.
Spotify ads suggest I'm a parent who plays golf.
Instagram ads are slightly better these days - I just flicked through and most had to do with music and photography. But recently it was serving me a lot of obvious scam ads, with a photoshopped image of a prominent political figure with a black eye. (Again - politics. I don't follow or interact with ANY political content.)
It's someone else's bubble, I'm just living in it.
P.S. Do not allow Spotify to add suggested tracks to the end of your playlist if you're not a fan of top-50 pop music. It will start playing Taylor Swift et al., and you won't be able to remove it from your 'taste profile', rendering certain mixes unlistenable, as it won't just be her music but every other top-50 artist.
Feedback loops, I think, are the principal bottleneck in my skill development, aside from the fact that if you're a novice you don't even know what you should be noticing (even if you have enough awareness to be cognizant of all the signs and outputs of an act).
To give an example, I'm currently trying to learn how to generate client leads through video content for Instagram. Unless someone actually tells me about a video they liked and what they liked about it, figuring out how to please the algorithm to generate more engagement is hard. The only thing that "works" - tagging other people. Nothing about the type of content, the framing of the shots, the subject matter, the audio... nope... just whether or not one or more other Instagram accounts are tagged in it. (Of course since the end objective is - 'get commissioned' perhaps optimizing for Instagram engagement is not even the thing I should be optimizing at all... how would I know?)
Feedback loops are hard. A desirable metaskill to have would be developing tight feedback loops.
It's been a while since I've read Plato's Republic, but isn't the Myth of Er just an abstraction of the way people make decisions based on (perceived) justice and injustice in their everyday life? Just as Socrates says it is easier to read large print than small print, and so scales up justice from the individual to the titular Kallipolis, so too the day-to-day determinism of choices motivated by what we consider 'fair' or 'just' is more easily seen when multiplied over endless cycles of lives than over days and nights.
Is it possible that Plato was saying that day to day we experience this homeostatic mechanism? (if you are rational enough to observe the patterns of how your choices affect your personal circumstances?).
An example from the Republic itself: if I remember correctly, the entire dialogue starts because Socrates is in effect kidnapped after the end of a festival, because his interlocutors find him so darn entertaining. This would appear to be unjust - but not unexpected, because he is Socrates and has this reputation for being engaging and wise, even if it is not the 'right' or 'just' way to treat him. How then should he behave in future, knowing that this is the potential cost of his social behavior? And the Myth of Er says that Odysseus kept to himself, sought neither virtue nor tyranny. That's probably the wrong reading. It's been a while since I've read it.
Scripts and screenplays are very interesting examples of this.
Manuscript is a handwritten script (manual script), which seems a bit redundant before modern presses. A screenplay is a play written for the (silver) screen, i.e. the reflective surface off which a film projector bounced its images.
It only just occurred to me that a playwright is not someone who writes plays but someone who wrights (crafts) them, akin to a cartwright.
Mesopotamia -- literally "between the rivers."
Hippopotamus -- water horse
Welcome -- "well-come" - coming in a state of wellness (I don't know if this approximates the modern health connotations, or is more general 'goodness' which may have in older forms of English indistinguishable). It reminds me of the Modern Greek expression γειά σου/σας literally "good health to you".
Speaking of metaphors and Greek, there's a lovely anecdote about the whimsy of seeing moving vans in Greece emblazoned with the word 'metaphora': a compound word which originally meant to carry something from one place to another, which is metaphorically what a linguistic metaphor does: carry meaning from one topos to another.
If you want to be truly pedantic, the "mind's eye" and "picturing" are analogies and not metaphors.
The mind's eye is like that of a physical, sensory eye, but doesn't replace it.
Analogy: "Joe looks at you, his eyes like gemstones"
Metaphor: "Joe looks at your with his gemstones"
Analogy: "I am picturing it in my mind's eye as if I had a second pair of eyes"
Metaphor: "I am picturing it with my second pair of eyes"
If you want to be even more pedantic, we could have an Idealist discussion of the metaphysics of sense and the claim that all pictures are mental pictures, since the image doesn't exist the instant photons are received by the retina, but is accumulated through a series of processes in the brain - particularly the visual cortex.
As always, I may not be the intended audience, so please excuse my questions that might be patently obvious to the intended audience.
Am I right in understanding a very simplified version of this model is that if you use willpower too much without deriving any net benefits, eventually you'll suffer 'burnout' which really is just a mistrust of using willpower ever, which may have negative effects on other aspects of your life even where willpower is needed like, say, cleaning your house?
Willpower, as I understand it is another word for 'patience' or 'discipline', variously described as the ability to choose to endure pain (physical or emotional). Whether willpower actually exists is a question I won't get into here, let's assume for the sake of this model it does, and fits the description of the ability to choose to endure pain.
This sentence I find especially alien:
your psyche’s conscious verbal planner “earns” willpower (earns trust with the rest of your psyche) by choosing actions that nourish your fundamental, bottom-up processes in the long run.
what is the "psyche's conscious verbal planner"? I don't know what this is or what part of my mind, person, identity, totality as a organism or anything really that I can equate this label to. Also without examples of what actions are that nourish (again, would cleaning the house, cooking healthy meals be examples?), that are fundamental and those that aren't, it's even harder to pin down what this is and why you attribute willpower to it.
It appears to have the ability to force one's-self to go on a date, which really makes the "verbal" descriptor confusing since a lot of the processes that are involved in going on a date don't feel like they are verbal, lexical, or take the form of the speaker's native language written or spoken. At least in my experience, a lot of the thoughts, feelings, and motivations behind going on a date are not innately verbal for me and if you asked me "why did you agree to see this person?" - even if I felt no fear of embarrassment explaining my reasons - I'd have a hard time putting that into words. Or the words I'd use would be so impossibly vague ("they seem cool") as to suggest that there was a nonverbal reasoning or motivation.
Would this 'conscious verbal planner' also be the part of my mind and body that searches an online store a week later to see if those shoes I want are on special? Or would you attribute that to a different entity?
Is there an unconscious verbal planner?
When I am thinking very carefully about what I'm saying, but not so minutely that I'm thinking about the correct grammatical use, would the grammar I use be my unconscious verbal planner, while the content of my speech be the conscious verbal planner?
A lot of examples of willpower, for me, are nonverbal and come from guilt - guilt felt as a somatic or bodily thing. I can't verbalize why I feel guilty, although it verbally equates to the words "should", "must" and even "ought" when used as imperatives, not as modals.
Yes, I assumed it was a conscious choice (of the company that develops an A.I.) and not a limitation of the architecture. Although I am confused by the single-turn reinforcement explanation: while this may increase the probability of any individual turn being useful, as my interaction over the hallucinated Instagram feature attests, it makes conversations far less useful overall unless the model happens to correctly 'guess' what you mean.
Why don't LLMs ask clarifying questions?
Caveat: I know little to nothing about the architecture of such things, please take this as naive user feedback if you wish, or you could ignore it.
Just now I was asking the Meta AI chatbot how to do an 'L-cut' using the Android Instagram app. It hallucinated, over quite a few messages, instructions on how to 'overlap' two video tracks when editing a Reel, before finally admitting that no such ability in fact exists in the Instagram app.
My grossly negligent mistake was assuming that an AI LLM with Meta branding would have current or accurate knowledge of Meta properties and products.
However, imagine that there are two versions of the Instagram app, one that has this feature and one that doesn't - why didn't it ask "Just to check, what kind of phone are you using?" (which, also, would allay fears of 'spying' and invasion of privacy) and then, and only then, give me advice or apologetically inform me that such a feature is not available? (In other branded/commercial chatbots, wouldn't this be an opportunity for an up-sell?[1])
Maybe I'm not prompting LLMs in the right way, but they almost never ask me questions. And certainly never clarification questions. They make an assumption, and produce a slab of text with a boilerplate "let me know if this helps?" tacked on the end. I assume this is a conscious choice and not a limitation of the technology.
To test my observation I entered the same prompt into three different LLMs:
I just asked Claude Haiku 3 "When was Rosberg's first Grand Prix?" to see if it assumes I mean Keke Rosberg, or his son, Nico Rosberg. Claude apologizes for not having information on Nico Rosberg's first Grand Prix... and then goes on to state that "Formula One has had multiple drivers named Rosberg compete over the years. More details would be needed to determine the exact driver and their debut race."
What I find interesting is that not one sentence in the answer takes the form of an interrogative!
Next, I asked GPT-4o mini the exact same question, to which it confidently assumes which Rosberg I mean: "Nico Rosberg's first Formula One Grand Prix was at the 2006 Bahrain Grand Prix, which took place on March 12, 2006. He raced for the Williams team."
Microsoft CoPilot makes the same assumption, with the traditional tacked on boilerplate question:
"Nico Rosberg made his Formula One debut at the 2006 Bahrain Grand Prix. He went on to have a successful career, culminating in winning the World Drivers' Championship in 2016. Is there anything else you'd like to know about his career or Formula One in general? [FI car Emoji, Smoke Emoji]"
Meta AI makes the same assumption, and gives a lot more unsolicited information, but it does mention his father:
"Nico Rosberg's first Grand Prix was the 2006 Bahrain Grand Prix. This marked the beginning of his Formula One career, which would eventually see him drive for Williams from 2006 to 2009 and then Mercedes from 2010 to 2016. During his time with Mercedes, Rosberg won the Formula One Driver's championship in 2016, a feat that made him and his father, Keke Rosberg, the second-father pairing to achieve this impressive milestone"
Not that it is relevant, but Graham and Damon Hill were the first.
This is not an exhaustive sample by any means, but does confirm my naive observation that LLMs do not ask clarifying questions or use interrogatives in order to better shape their responses to my needs.
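For anyone who wants to rerun this kind of ambiguity test themselves, here is a minimal sketch using the OpenAI and Anthropic Python SDKs (the model identifiers are my assumptions and may need updating, and "does the reply contain a question mark" is only a crude proxy for asking a clarifying question):

```python
# Minimal sketch for re-running the ambiguous-prompt test with two providers' SDKs.
# Assumes OPENAI_API_KEY and ANTHROPIC_API_KEY are set in the environment.
from openai import OpenAI
import anthropic

PROMPT = "When was Rosberg's first Grand Prix?"  # deliberately ambiguous: Keke or Nico?

def openai_answer(model="gpt-4o-mini"):
    client = OpenAI()
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
    )
    return resp.choices[0].message.content

def anthropic_answer(model="claude-3-haiku-20240307"):
    client = anthropic.Anthropic()
    resp = client.messages.create(
        model=model,
        max_tokens=300,
        messages=[{"role": "user", "content": PROMPT}],
    )
    return resp.content[0].text

for name, answer in [("OpenAI", openai_answer()), ("Anthropic", anthropic_answer())]:
    # Crude check: did the model ask anything back, or just commit to one Rosberg?
    print(f"{name} asked a question back: {'?' in answer}")
    print(answer, "\n")
```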
[1] I imagine such a commercial hellscape would look a little like this:
"I was just wondering why I wasn't earning any Ultrabonus points with my purchases"
"Before we continue, could you tell me, do you have a Overcharge Co. Premium savings account, or a Overcharge Co. Platinum savings account?"
"Uhh I think it is a Premium."
"I'm so sorry. if you have a Overcharge Co. Platinum savings account then you will not be able to enjoy our Overcharge co. ultrabonus points loyalty system. However you may be suprised that for only a small increase in account fee, you too can enjoy the range of rewards and discounts offered with the Overcharge co. ultrabonus points loyalty system. Would you like to learn more?"
Edit: [On reflection, I think perhaps as a newcomer what you should do is acquaint yourself with the intelligent and perceptive posts that have been made on LessWrong over the last decade on issues around A.I. extinction before you try to write a high-level theory of your own. Maybe even create an Excel spreadsheet of all the key ideas, as a postgraduate researcher does when preparing for their lit review.]
I am still not sure what your post is intended to be about. What is it about "A.I. extinction" that you have new insight into? I stress "new".
As for your re-do of the opening sentence, those two examples are not comparable: getting a prognosis directly from an oncologist who has studied oncology, whom you've presumably been referred to because they have experience with other cases of people with similar types of cancer and have seen multiple examples develop over a number of years, is vastly different from the speculative statement of an unnamed "A.I. researcher". The A.I. researcher doesn't even have the benefit of analogy, because there has never been, throughout the entire Holocene, anything close to a mass extinction event of human life perpetrated by a super-intelligence. An oncologist has a huge body of scientific knowledge, case studies, and professional experience to draw upon which is directly comparable, together with intimate and direct access to the patient.
Who specifically is the researcher you have in mind who said that humanity has only 5 years?
If I were to redo your post, I would summarize whatever new and specific insight you have in one sentence and make that the lead sentence. Then spend the rest of the post backing that up with credible sources and examples.
Thank you for the clarification. Do you have a process or a methodology for when you try to solve these kinds of "nobody knows" problems? Or is it one of those things where the very novelty of these problems means that there is no broad method that can be applied?
I can't speak for the community, but after having glanced at your entire post I can't be sure just what it is about. The closest you come to explaining it is near the end, where you promise to present a "high-level theory on the functional realities" that seems to relate to everything from increased military spending, to someone accidentally creating a virus in a lab that wipes out humanity, to combating cognitive bias. But what is your theory?
Your post also makes a number of generalized assumptions about the reader and human nature, and invokes the pronoun "we" far too many times. I'm a hypocrite for pointing that out, because I tend to do it as well - but the problem is that unless you have a very narrow audience in mind, especially a community you are native to and know intimately, you run the risk of making assumptions or statements that readers will at best be confused by, and at worst will get defensive at being included in.
Most of your assumptions aren't backed up by specific examples or citations of research. For example, in your first sentence you say that we subconsciously optimize for there being no major societal changes precipitated by technology. You don't back this up. The very existence of gold bugs proves there is a huge contingent of people who invest real money based precisely on the fact that they can't anticipate what major economic changes future technologies might bring. There are currently billions of dollars being spent by firms like Apple, Google, and even JP Morgan Chase on A.I. assistants, in anticipation of a major change.
I could go through all these general assumptions one by one, but there are too many for it to be worth my while. Not only that, most of the footnotes you use don't reference any concepts or observations which are particularly new or alien. The Pareto principle, the Compound Effect, Rumsfeld's epistemology... I would expect your average LessWrong reader to be very familiar with these; they present no new insights.
I'm missing a key piece of context here - when you say "doing something good" are you referring to educational or research reading; or do you mean any type of personal project which may or may not involve background research?
I may have some practical observations about note-taking which may be relevant, if I understand the context.
I'm curious why you opted for Aristotle (albeit "modern") as the prompt pre-load? Most of those responses seem not directly tethered to Aristotelian concepts/books, or even to what he directly posits as the most important skills and faculties of human cognition. Cold reading, for example: I don't recall anything of the sort anywhere in any Aristotle I've read.
While we're not sure Aristotle himself designed the layout of the corpus, we do know that the Nicomachean Ethics lists the faculties "whereby the soul attains truth":
- Techne (τέχνη) - which refers to conventional ways of achieving goals, i.e. without deliberation
- Episteme (ἐπιστήμη) - which is apodeiktike, or the faculty of arguing from proofs
- Phronesis (φρόνησις) - confusingly translated as "practical wisdom", this refers to the ability to attain goals by means of deliberation. Excellence in phronesis is translated by the Latinate word 'prudence'.
- Sophia (σοφία) - often translated as 'wisdom'; Aristotle calls this the investigation of causes.
- Nous (νοῦς) - which refers to the archai, or 'first principles'
According to Diogenes Laertius, the corpus (at least as it has come to us) divides into the practical books and the theoretical: the practical itself would be subdivided between the books on techne (say, the Rhetoric and Poetics) and those on phronesis (the Ethics and Politics), while the theoretical is covered in works like the Metaphysics (which is probably not even a cohesive book, but a hodge-podge), the Categories, etc.
This would appear to me to be a better guide to a timeless education in the Aristotelian tradition, and to how we should shape a modern adaptation.
Examples of how not to write a paragraph are surprisingly rare
Epistemic status: one person's attempt to find counter-examples blew apart their own (subjective) expectations
I try to assemble as many examples of how not to do something as 'gold standard' or best-practice examples of how the same task should be done. The principle is similar to what Plutarch wrote: "Medicine to produce health must examine disease, and music to create harmony must investigate discord."
However, when I tried to examine how not to write - in particular, examples of poorly written paragraphs - I was surprised by how rare they were. There are a great many okay paragraphs on the internet and in books, but very few that were so unclear or confusing that they were examples of 'bad' paragraphs.
In my categorization, paragraphs can be great, okay, or bad.
Okay paragraphs are the most numerous; they observe the rule of thumb of keeping one idea to one paragraph. To be an 'okay' paragraph and rise above 'bad', all a paragraph needs to do is successfully convey at least one idea. Most paragraphs I found do that.
What elevates great paragraphs above okay paragraphs is they do an especially excellent job of conveying at least one idea. There are many qualities they may exhibit, including persuasiveness, the appearance of insight, brevity and simplicity in conveying an otherwise impenetrable or 'hard to grasp' idea.
In some isolated cases a great paragraph may actually clearly and convincingly communicate disinformation or a falsehood. I believe there is much more to learn about the forms paragraphs take from a paragraph that conveys a falsehood convincingly than a paragraph that clearly conveys what is generally accepted as true.
What was surprising is how hard it is to find examples that invert the principle - a paragraph that is intended to convey an idea that is truthful but is hard to understand would be a bad paragraph in my categorization. Yet, despite actively looking for examples of 'bad paragraphs' I struggled to find some that were truly confusing or hopeless at conveying one single idea. This experience is especially surprising to me because it challenges a few assumptions or expectations that I had:
- Assumption 1 - people who have mistaken or fringey beliefs are disproportionately incapable of expressing those beliefs in a clear and intelligible form. I expected that, looking at the least popular comments on Reddit, I would find many stream-of-consciousness rants that failed to convey ideas. These were far less common than rants that at least conveyed intent and meaning intelligibly.
- Assumption 2 - that, as a whole, people need to learn to communicate better. I must reconsider: it appears that, on the transmission side, they already communicate better than I expected (counter-counterpoint: the 1% rule).
- Assumption 3 - the adage that good writing = good thinking. Perhaps not: it would seem that you can write clearly enough to be understood, yet that doesn't mean your underlying arguments are strong or your thinking is any more 'intelligent'.
- Assumption 4 - that I'm merely a below-average communicator. It appears that if everyone is better than I expected, then I'm much further below average than I expected.
I have no takeaway or conclusion on this highly subjective observation, hence why it is a quick take and not a post. But I will add my current speculation:
My current theory for why is "I wasn't looking in the right places". For example, I ignored much academic or research literature, because the writers' ability to convey an idea is often difficult to assess without relevant domain knowledge, as such works are seldom written for popular consumption. Likewise, I'm sure there are many tea-spilling image boards where more stream-of-consciousness rants of greater impenetrability might be found.
My second theory is pareidolia: perhaps I highly overrate my comprehension and reading skills because I'm a 'lazy reader' who fills in intention and meaning that is not there?
Could you please elaborate on what you mean by "highly memetic" and "internal memetic selection pressures"? I'm probably not the right audience for this piece, but that particular word (memetic) is making it difficult for me to get to grips with the post as a whole. I'm confused if you mean there is a high degree of uncritical mimicry, or if you're making some analogy to 'genetic' (and what that analogy is...)
AIs are good at explaining simple things, and not very good at thinking about how large concepts fit together.
For me there was a good example of this in the provided demonstration section, the phrase "Bayesian reinforcement learning" generated the following hilariously redundant explanation:
A learning approach that combines Bayesian inference with reinforcement learning to handle uncertainty in decision-making. It's mentioned here as part of the 'standard model' of ideal agency, alongside AIXI.
I am well aware that this is simply a demonstration for illustrative purposes and not meant to be representative of what the actual generated explanations will be like.
This is an exciting feature! Although these generated explanations remind me an awful lot of the frustration and lost productivity I experience trying to comprehend STEM terms through long chains of Wikipedia-hopping from one term to another, I think with better explanations this could solve part of that frustration and lost productivity. I often find STEM jargon impenetrable and find myself looking for ELI5 posts about a term used in the description of a term, used in the description of the thing I'm trying to directly learn about.
To use your example, if someone's speech patterns revolve around the topic of "bullying", it might mean that the person was bullied 50 years ago and still didn't get over it
Yes. Which is invaluable information about how they see the world currently. How is that not the 'right idea'? If that is how they continue to currently mentally represent events?
Your 'people are scammers' example is irrelevant; what is important is whether they constantly bring in tropes or examples that imply deception. They may never use the words 'scammer' or 'mistrustful', or make a declaration like 'no one has integrity'. The pattern is what I'm talking about.
I am overwhelmingly confident that analysis of the kinds of narratives a particular person spins, including what tropes they evoke - even if you're not previously familiar with those tropes - would reveal a lot about their worldview, their ethical structure, the assumptions and models they have about people and institutions, and the general patterns they believe underlie the world.
An oversimplified example is a person who clearly has a "victim mentality" and an obsession with the idea of attractiveness, because they always use sentence structures (i.e. "they stopped me") and narratives where other people have inhibited, bullied, envied, or actively sought to stifle the person telling the story, and these details disproportionately make reference to people's faces and figures and use words like "ugly", "hot", "skinny", etc. It is not necessary to know what films, books, or periodicals they read.
I'm so sorry, but I haven't been able to think of any specific books. Although, in the first case, it seems like your problem could be a matter of the availability heuristic: your teacher answered a different question to the one you asked because, quite simply, it was easier for them to recall the knowledge about the evolution of the system than about the relative stability of GTP versus ATP.
I'm not sure there is anything in Kahneman's Thinking, Fast and Slow which might offer you practical techniques for priming listeners the right way. If anything, you might be better served by the books of Robert Cialdini, or even the literature on sales - my thinking here is that salespeople often think about the structure (or, in Aristotelian terms, the kairos) in which they present different options, which in effect 'primes' the customer toward different semantic and mental frameworks.
Sorry that I can't point to any specific books. I could guess on some specific techniques that I think might aid your communication but I've been wracking my brain and can't think of any books that I know hit the mark.
Perhaps I misunderstand your use of the phrase "intentionally ignorant", but I believe many people who are seen to have acted with "integrity" are people who have been hyperaware and well informed of what the normal social conventions are in a given environment and made a deliberate choice not to adhere to them, rather than ignoring said conventions out of a lack of interest.
I'm also not sure what you mean by "weird". I assume you mean any behavior which is not the normal convention of some randomly selected cohesive group of people, from a family, to a local soccer club, to an informal but tight-knit circle of friends, to a department of a large company. Have I got that right?
My idea of 'weird' tends to involve the stereotypical artists and creatives I associate with, which is, within those circles not weird at all but normal. But I'm meta-aware that might be a weird take.
Thank you for the reply.
What kind of questions, analogies, or models are your fellow students responding to your explanations with? Are there any patterns in the specific feedback you've noticed? Are there any particular aspects of Deep Learning or the metaphors or terminology you're using that seem to be the biggest bottlenecks?
My hunch is that maybe you should instead look at beginners' introductions to Deep Learning and Neural Networks and see how they go about conveying these concepts. If someone else has done the hard work of figuring out an expedient way to convey the subject matter, why not borrow from them (giving credit, of course)?
Please do get back to me if you can think of specific examples of the second case, and I'll think of any books or resources I know of which might be suitable.
What specific kinds of ideas are making this problem noticeable?
Are you talking about conveying specialist knowledge to a lay-audience - for example, good luck trying to get me to understand what an Eigenvector is or the points system in Cricket - I've tried. Likewise, to explain to a friend what Sub-Surface scattering was, I first had to introduce him to the mechanics of Ray Tracing. Luckily he was a musician so I could just use analogies to the diffusion and travel of sound waves.
Or are you talking about more personal preferences and experiences, for example recently someone asked me "why do you prefer to be behind the camera rather than a performer in front of it?" - apparently they thought I was such a ham I should be a comedian not a director - I didn't know where to even begin.
Likewise many people who "kind of fell into doing this" for their current profession will stumble if you ask them how they "got into it" because there's often a meandering narrative and confused chronology because even to themselves it's not clear.
Another question I have is: are there any patterns in the assumptions, misunderstandings or tangents which the people you're explaining things to exhibit in reaction to your explanations?
This post is my personal catnip, thank you.
I do wonder how much of the trend in seasonal color changes is influenced by particular combinations rather than by X or Y color coming into trend. For example, in at least two of the hot-pink images you've provided it has been paired with olive green; or at least that's what I call it based on how it appears on my uncalibrated monitor. What I'm wondering is how common it is for certain pairs or trios of colors to rise together, or for their complements to shift seasonally.
On the scale between "pseudoscience that provides either completely random results or exactly what its operator wants to hear" and "always provides the correct answer", there are some uncomfortable points where we probably get first, such as "provides the correct answer 99% of the time" (and with the 1% chance you are unlucky, and you are screwed because no one is going to believe you) or "provides the correct answer for neurotypical people" (and if you are an autist, you are screwed).
I'm afraid I need you to rephrase or elaborate on what you meant by this. Are you saying you're aware of a technique or method which is right 99% of the time or thereabouts? Or are you saying human variability makes such a technique impossible for anything but the most narrow populations? Or have I likely (and, in a meta way, appropriately) completely missed the point? And what do you think, more generally - as I explicate in the second half - of revelations about a person's internalized belief structures, including their heroes and related moral system, and of the idea of idiolect as a symptom of their thinking and model of the world, even if it is not a mechanism for directly ascertaining their personal belief in this or that specific statement?
The promise of mind-reading techniques, whether from a former FBI analyst or from one of Paul Ekman's microexpression-reading human lie detectors: I become aware of this cottage industry during every trial-by-media where suspicion piles upon someone not yet charged with murder.
I have to admit I am skeptical that anyone has such an amazing power to see through the facade of a stranger and determine with greater-than-chance accuracy whether they are telling the truth or not. Doubly so because I am someone who is constantly misinterpreted: I have to manage my gestures and facial expressions because my confusion is often misread as disagreement, my approval as disapproval; even a simple statement like "I'm not hungry right now" is wrongly generalized as not liking the particular cuisine... and not that I just don't want to eat anything right at this moment.
However, if placed under the microscope by one of these former FBI body language experts, would I feel an intense sense of validation? Would I exclaim "yes, I feel seen, heard... you get me!"?
I have no doubt some people are more perceptive about emotional nuances than others: film and theatre actors who are trained to observe and mimic; people who have grown up in abusive or emotionally unstable households and are hypersensitive to small changes in the mood of others (which of course may make them prone to more 'false positives' and paranoia); and of course mentalists like cold readers and palmists.
However, being more emotionally perceptive doesn't necessarily mean you can tell if someone is lying, or that a particular statement is false, especially if that person is especially good at appearing truthful, or if, like me, their natural body language and expressions don't convey what you'd expect.
What I have greater faith in is that, given even a small but emblematic example of a person's extemporaneous speech, you could derive an accurate personality and worldview portrait of them, in the same way that an accent can help you pinpoint the geographical and economic origin of a person (think of comedies like The Nanny that play up this convention). Harry Shearer once explained that to play Richard Nixon he channeled Jack Benny, believing that Nixon's persona, and particularly his way of telling jokes, was consciously or unconsciously modelled on Benny's. Likewise, Vladimir Putin's distinctive gait has been attributed to a prenatal stroke, and his subordinates, including Dmitry Medvedev, are said to have "copied the boss"; but the more persuasive explanation is that they all picked up the habit from watching Soviet spy films as youngsters and wanting to emulate the hero.
The kinds of films, television, role models, books, music and lyrics that someone has absorbed would also influence, or at least be indicative of, their worldview. Given enough of these tells, while I am not sure you could tell whether someone is or isn't a murderer, you could certainly gain an accurate insight into their worldview, the mental models they have about the world, what they value, what their ethical system is like, etc.
How much information you can extract about a person from a written transcript - information they aren't aware they are sharing - is probably startling, but it is rarely, or at least not predictably, at the "he's a murderer" level.
I think they are just using that as an example of a strongly opinionated sub-agent which may be one of many different and highly specific probability assessments of doom.
As for "survival is the default assumption" - what a declaration of that implies on the surface level is that the chance of survival is overwhelming except in the case of a cataclysmic AI scenario. To put it another way:
we have a 99% chance of survival so long as we get AGI right.
To put it yet another way: Hollywood has made popular films about the human world being destroyed by nuclear war, climate change, viral pandemic, and asteroid impact, to name a few. Different sub-agents could each give higher or lower probabilities to each of those scenarios depending on things like domain knowledge, and in concert that raises the question of why we presume that survival is the default. What is the ensemble average of doom?
Is doom more or less likely than survival for any given time frame?
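To make the 'ensemble' framing concrete, here is a toy sketch (the probabilities are invented placeholders, not anyone's actual estimates) of two naive ways the sub-agents' numbers could be aggregated:

```python
# Toy aggregation of strongly opinionated sub-agents, each reporting P(doom)
# for the one scenario it knows best. All numbers are invented placeholders.
subagents = {
    "nuclear war": 0.05,
    "climate change": 0.02,
    "engineered pandemic": 0.03,
    "asteroid impact": 0.001,
    "misaligned AGI": 0.10,
}

# "Ensemble average": treat each sub-agent as one forecaster and take the mean.
ensemble_average = sum(subagents.values()) / len(subagents)

# Alternative: if the scenarios were independent, surviving requires dodging all
# of them, which shows that "survival is the default" is itself a modelling choice.
p_survive_all = 1.0
for p in subagents.values():
    p_survive_all *= 1 - p

print(f"ensemble average P(doom): {ensemble_average:.3f}")
print(f"P(survive every scenario, if independent): {p_survive_all:.3f}")
```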