The clean lines make me think you didn't use hypergeometric calculations. If I have 2 extrovert friends, on any given day 0 (25%), 1 (50%), or 2 (25%) of them will want to hang out. If I want to hang out on day N, there is a 25% chance I fail to.
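A minimal sketch of that arithmetic, assuming (as the example above does) that each of the two friends independently wants to hang out on a given day with probability 0.5:

```python
from itertools import product

# Two extrovert friends, each independently free on a given day with probability 0.5
# (the assumption in the example above).
p = 0.5
counts = {0: 0.0, 1: 0.0, 2: 0.0}
for a, b in product([0, 1], repeat=2):
    counts[a + b] += (p if a else 1 - p) * (p if b else 1 - p)

print(counts)                          # {0: 0.25, 1: 0.5, 2: 0.25}
print("P(nobody free) =", counts[0])   # the 25% chance of failing to hang out on day N
```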
Virtually no-one differentiates between a 4 and a 5 on these kinds of surveys. Either they are the kind of person who "always" puts 5, or they "never" do.
With Rat. Adjacent or other overthinkers, you can give more specific anchors (eg 5 = this was the best pairing). Or you can have specific unscaled questions (ie:
- I would go to another room to avoid this person at a party
- I do not want to see this person again
- Whatever
- If someone else did the work to plan it, I would show up and spend time with this person again
- I will schedule time to see this person again.
Airline tickets are a bad example because they are priced dynamically. So if more people find/exploit the current pricing structure, the airline will (and does) shift the pricing slightly until it remains profitable.
+1 for substituting brain processes. High-g neurodivergents of all flavors tend to run apps in the "wrong" parts of their brain to do things that neurotypicals do automatically. Low-g neurodivergents just fail at the tasks.
Related content: https://www.shamusyoung.com/twentysidedtale/?p=168
Why not both? It's a minority of people who have the ability and inclination to learn how to conform to a different milieu than their natural state.
CK, as used here, seems more transactional and situation-specific. Emotional Labor usually refers to a pattern over time, including things like checking for unknown unknowns and "making sure X gets done." Both ideas are playing in similar space.
Bonus points in a dating context: by being specific and authentic you drive away people who won't be compatible. In the egg example, even if the second party knows nothing about the topic, they can continue the conversation with "I can barely boil water, so I always take a frozen meal in to work" or "I don't like eggs, but I keep pb&j at my desk" or just swipe left and move on to the next match.
Follow up question: is this a permanent gain or temporary optimization (eg without further intervention, what scores would the subject get in 6 months?)
We know for sure that eating well and getting a good night's sleep dramatically improves performance on a wide array of mental tasks. It's not a stretch to think other interventions could boost short term performance even higher.
For further study: Did the observed increase represent a repeatable gain, or an optimization? Within-subject studies show a full SD variation between test sessions for many subjects, so I would predict that "a set of interventions" could produce a "best possible score" for an individual but hit rapid diminishing returns.
Communication bandwidth: if you find that you’re struggling to understand what the person is saying or get on the same page as them, this is a bad sign about your ability to discuss nuanced topics in the future if you work together.
Just pulling this quote out to highlight the most critical bit. Everything else is about distinguishing between BS and ability to remember, understand, and communicate details of an event (note: this is a skill not often found at the 100 IQ level). That second thing isn't necessarily a job requirement for all positions (eg sales, entry level positions), but being comfortable talking with your direct reports is always critical.
The described "next image" bot doesn't have goals like that, though. Can you take the pre-trained bot and give it a drive to "make houses" and have it do that? When all the local wood is used up, will it know to move elsewhere, or plant trees?
If you have to give it a task, is it really an agent? Is there some other word for "system that comes up with its own tasks to do"?
Note that you have reduced the raw quantity of dust specks by "a lot" with that framing. The heat death of the universe is in "only" 10^106 years, so that would be no more than 2^(10^106) people (if we somehow double every year), compared to 3^^^3 = 3^^(3^27), which is 3^(10^(a number too big to write down)).
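For a sense of scale, here is a rough sketch of the comparison in power-tower terms; the loop just counts how many times you can take log base 3 before dropping below 1:

```python
import math

# How tall a tower of 3s does 2^(10^106) need? Count repeated log-base-3 applications.
def log3(v):
    return math.log(v) / math.log(3)

x = (10 ** 106) * (math.log(2) / math.log(3))  # this is log3(2^(10^106)), one log taken
height = 1
while x >= 1:
    x = log3(x)
    height += 1

print(height)  # ~5: 2^(10^106) fits under a power tower of 3s only about 5 levels high
# By contrast, 3^^^3 = 3^^(3^27) is a tower of 3s roughly 7.6 trillion levels high.
```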
200 years ago was 1824. So compare it to buying land, company stocks (the London and NY stock exchanges were well established by then), or government bonds.
Narrator: gold has been a poor bet for 90% of the last 200 years.
(Don't quote me on that, but it is true that gold was a good bet for about 10 years in recent memory, and a bad bet for most post-industrial time)
I can't tie up cash in any sort of escrow, but I'd take that bet on a handshake.
Mr. Pero got fewer votes than either major party candidate. Not a ringing endorsement. And I didn't say the chances were quite low, I said they were zero*. Which is at least 5 orders of magnitude difference from "quite low" so I don't think we agree about his chances.
*technically odds can't be zero, but I consider anything less likely than "we are in a simulation that is subject to intervention from outside" to be zero for all decision making purposes.
There is an actual 0% chance that anyone other than the Democratic or Republican nominee (or their replacement in the event of death etc.) becomes president. Voting for/supporting any other candidate has, historically, done nothing to support that candidate's platform in the short or long term. If you find both options without merit, you should vote for your preferred enemy:
- Who will be most receptive to your message, either in a compromise or an argument. And/or
- So sorry about your number 1 issue, neither party cares. What's your number 2 issue, maybe there is a difference there?
Do you have a link to the study validating that the LLM responses actually match the responses given by humans in that category?
Note one weakness of this technique. An LLM is going to provide what the average generic written account would be. But messages are intended for a specific audience, sometimes a specific person, and that audience is never "generic median internet writer." Beware WEIRDness. And note that visual/audio cues are 50-90% of communication, and 0% of LLM experience.
How does buying "none of the above" work as you add more entries? If someone buys NOTA today, and the winning entry is #13, does everyone who bought NOTA before it was posted also win?
Agree that closer to reality would be one advisor, who has a secret goal, and player A just has to muddle through against an equal-skill bot while deciding how much advice to take. And playing like 10 games in a row, so results can be evaluated against the baseline EV of 5 wins.
Plausible goals to decide randomly between (a toy harness sketch follows the list):
- Player wins
- Player loses
- Game is a draw
- Player loses their Queen (ie opponent still has their queen after all immediate trades and forcing moves are completed)
- Player loses on time
- Player wins, delivering checkmate with a bishop or knight move
- Maximum number of promotions (for both sides combined)
- Player wins after having a board with only pawns
- Etc...
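A minimal sketch of that setup, with made-up goal names and a placeholder where the actual games would run; nothing here beyond the goal list and the 10-game format comes from the original proposal:

```python
import random

# The advisor secretly draws one goal; the player then plays 10 games against an
# equal-skill bot while deciding how much advice to take. 5 wins is the even baseline.
SECRET_GOALS = [
    "player wins",
    "player loses",
    "draw",
    "player loses their queen",
    "player loses on time",
    "player wins via bishop/knight mate",
    "maximize promotions (both sides)",
    "player wins with only pawns left",
]

def run_match(num_games: int = 10) -> dict:
    goal = random.choice(SECRET_GOALS)  # hidden from the player
    wins = 0
    for _ in range(num_games):
        # placeholder: a real harness would play player-plus-advice vs. the bot here
        wins += random.random() < 0.5
    return {"secret_goal": goal, "player_wins": wins, "baseline": num_games // 2}

print(run_match())
```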
Arguing against A doesn't support Not A, but arguing against Not Not A is arguing against A (while still not arguing in favor of Not A) - albeit less strongly than arguing against A directly. No back translation is needed, because arguments are made up of actual facts and logic chains. We abstract it to "not A" but even in pure Mathematics, there is some "thing" that is actually being argued (eg, my grass example).
Arguing at a meta level can be thought of as putting the object level debate on hold and starting a new debate about the rules that do/should govern the object level domain.
Alice: grass is green -> grass isn't not green
Bob: the grass is teal -> the grass is provably teal
Alice: your spectrometer is miscalibrated -> your spectrometer isn't not miscalibrated.
...
I'm having trouble with the statement {...and has some argument against C'}. The point of the double negative translation is that any argument against not not A is necessarily an argument against A (even though some arguments against A would not apply to not not A). And the same applies to the other translation - Alice is steelmanning Bob's argument, so there shouldn't be any drift of topic.
Additionally and separately, the law "X takes effect at time t" will be opposed by the interests that oppose X, regardless of the value of t.
Consider a scale that runs from "authentic real life" to "Lotus Eater box." At any point along that scale, you can experience euphoria. At the Lotus Eater end, it is automatic. At the real-life end, it is incidental. "Games" fall towards the Lotus Eater end of the spectrum, not as far as slot machines, but further from real life than exercise or eating chocolate. Modern game design is about exploiting what is known about what brains like to guide the players through the (mental) paths necessary to generate happy chems. They call it "being Fun," but that's just their medium-level Map.
Some respected designers (including Mark Rosewater) would say that being compatible with real life is disqualifying for a thing to be a "Game." You can apply game design principles to real life stuff (lessons/repetitive tasks/etc.) to make it more Fun. One thing that makes Games a particularly good source of Fun, however, is the safety provided by being independent of real life. With no "real" consequence to losing, brains are more relaxed. A similar effect is what makes horror movies Fun - the viewer's brain is put through stimulus to generate chemicals, without overwhelming the system the way a real danger can.
90% of games are designed to be fun. Meaning the point is to stimulate your brain to produce feel-good chemicals. No greater meaning, or secret goal. To do this, they have goals, rules, and other features, but the core loop is very simple:
- I want to get a dopamine hit, therefore
- I open up a game, and
- The game provides a structure that I follow, subordinating my "real life" to the artificial goals and laws of the game
- Profit!
I don't think the assumption of equal transaction costs holds. If I want to fill in some potholes on my street, I can go door to door and ask for donations - which costs me time but has minimal and well understood costs to the other contributors. If I have to add "explain this new thing" and "keep track of escrow funds" and "cycling back and telling everyone how the project funding is going, and making them re-decide if/how much to contribute" that is a whole bunch of extra costs.
Also, if the public good is not quantized (eg, I could fix anywhere from 1-10 of the potholes, and do it well or slapdash), then the producer can make a decision about quality after seeing the final funding total, rather than having to specify everything in advance. For example, your website needs, say, 40 hours of work to be a minimally viable product, but could continue to benefit from work up to 400 hours. How many hours is the (suspiciously specific) $629 buying?
A normal Kickstarter is already impossible to price correctly - 99% either deliberately underprice to ensure "success" (the "preorders" model), or accidentally underprice and cost the founders a ton of unpaid overtime (the gitp model), or overprice and don't get funded.
A clarification:
Consider the premises (with scare quotes indicating technical jargon):
- "Acting in Bad Faith" is Baysean evidence that a person is "Evil"
- "Evil" people should be shunned
The original poster here is questioning statement 1, presenting evidence that "good" people act in bad faith too often for it to be evidence of "evil."
However, I believe the original poster is using a broader definition of "Acting in Bad Faith" than the people who support premise 1.
That definition, concisely, would be "engaging in behavior that is recognized in context as moving towards a particular goal, without having that goal." Contrast this with the OP quote: bad faith is when someone's apparent reasons for doing something aren't the same as the real reasons. The person's apparent reasons don't matter; what matters is the socially determined values associated with specific behaviors, as in the Wikipedia examples. While some behavior (eg, a conversation) can have multiple goals, some special things (waving a white flag, arguing in court, and now in the 21st century that includes arguing outside of court) have specific expected goals (respectively: allowing a person to withdraw from a fight without dying, presenting factual evidence that favors one side of a conflict, and persuading others of your viewpoint). When an actor fails to hold those generally understood goals, that disrupts the social contract and "we" call it "Acting in Bad Faith."
A strange game.
The only winning move is not to play.
Just don't use the term "conspiracy theory" to describe a theory about a conspiracy. Popular culture has driven "false" into the definition of that term, and wishful appeals to bare text don't make that connection go away. It hurts that some terms are limited in usability, but the burden of communication falls on the writer.
Looking at actual neo-Nazi and white supremacist pages/forums shows quite extensive usage of 14 & 88 symbology, and explicit explanations of the same, so your first point is factually inaccurate.
The term "conspiracy theory" comes pre-loaded with the connotation of "false" and you cannot use those words to describe a situation where multiple people have actually agreed to do something.
The innocent explanation is that the SS got back to him just before some sort of 90 day deadline, and he did the math. In which case the tweet could have been made out of ignorance, like flashing an "OK" sign in the "White Power" orientation. It's not easy to keep up with all the dog whistles out there.
Still political malpractice to not track and avoid those signals, though. If you "accidentally" have a rainbow in the background of a campaign photo, that counts as aligning with the LGBTQ+ crowd - same thing with putting "88" in a campaign post & Nazis.
So, the tweet aligns his campaign with the Nazis, but might have done it accidentally.
Obligatory xkcd: https://xkcd.com/2347/
Neurotypicals have weaker preferences regarding textures and other sensory inputs. By and large, they would not write, read, or expect others to be interested in a blow-by-blow of aesthetics. Also, at a meta level, the very act of writing down specifics about a thing is not neurotypical. Contrast this post with the equivalent presentation in a mainstream magazine. The same content would be covered via pictures, feeling words, and generalities, with specific products listed in a footnote or caption, if at all. Or consider what your neurotypical friend's Facebook post about a renovation/new house/etc. looks like. The emphasis is typically on the people, as in "we just bought a house. I love the wide open floor plan, and the big windows looking out over the yard make me so happy," in contrast to "we find that the residents are happier and more productive with 1000W of light instead of the typical 200."
#don'texplainthejoke
So, hashtag "tell me the Rationalist community is neurodivergent without telling me they are neuro-divergent"?
The real answer is that you should minimize the risk that you walk away and leave the door open for hours, and open it zero times whenever possible. The relative heat loss from 1 vs many separate openings is not significantly different from each other, but it is much more than 0, and the tail risk of "all the food gets warm and spoils" should dominate the decision.
I don't think your model is correct. Opening the fridge causes the accumulated cold air to fall out over a period of a few (maybe 4-7?) seconds, after which it doesn't really matter how long you leave it open, as the air is all room temp. The stuff will slowly take heat from the room temp air, at a rate of about 1 degree/minute. Once the door is closed, it takes a few minutes (again, IDK how long) to get the air back to 40F, and then however long to extract the heat from the stuff. If you are choosing between "stand there with it open" and "take something out, use it, and put it back within a few minutes" there is no appreciable difference in the air temp inside the fridge for those two options - in both cases things will return to temp some minutes after the last closing. You can empirically test how long it takes to re-cool the air simply by getting a fridge thermometer and seeing how the temperature varies with different wait times. Or just see how long before the escaping air "feels cold" again.
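A toy version of that model, assuming the round numbers above (food starts at 40F, room at 70F, food warms about 1 F/minute while the door is open); the function and parameters are illustrative, not measured:

```python
# Toy model: once the cold air has spilled out, the food warms toward room temperature
# at roughly 1 F per minute (the figure assumed in the comment above).
def food_temp_f(minutes_door_open, start=40.0, room=70.0, warm_rate_per_min=1.0):
    """Approximate food temperature after the door has been open for a while."""
    return min(room, start + warm_rate_per_min * minutes_door_open)

for minutes in (0.5, 2, 5, 10):
    print(f"{minutes:4} min open -> {food_temp_f(minutes):.1f} F")
```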
Re: happiness, it's that meme graph:
Dumb: low expectations, low results, is happy.
Top: can self-modify expectations to match reality, is happy.
Muddled middle: takes expectations from environment, can't achieve them, is unhappy.
The definition of Nash equilibrium is that you assume all other players will stay with their strategy. If, as in this case, that assumption does not hold then you have (I guess) an "unstable" equilibrium.
The other thing that could happen is silent deviations, where some players aren't doing "punish any defection from 99" - they are just doing "play 99" to avoid punishments. The one brave soul doesn't know how many of each there are, but can find out when they suddenly go for 30.
It's not. The original Nash construction is that player N picks a strategy that maximizes their utility, assuming all other players get to know what N picked, and then pick a strategy that maximizes their own utility given that. Minimax as a goal is only valid for atomic game actions, not complex strategies - specifically because of this "trap."
There is a more fundamental objection: why would a set of 1s and 0s represent (given periodic repetition in 1/3 of the message, so dividing it into groups of 3 makes sense) specifically 3 frequencies of light and not (see the sketch after this list):
- Sound (hat tip Project Hail Mary)
- An arrangement of points in 3d space
- Actually 6 or 9 "bytes" to define each "point"
- Or the absolute intensity or scale of the information (hat tip Monty Python tiny aliens)
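A minimal sketch of that ambiguity, using a made-up bit string: the same message chunks cleanly under several group sizes, and nothing in the bits themselves says which interpretation (light, sound, coordinates, intensity) was intended:

```python
# The same bit string, chunked three different ways; the grouping and the meaning of
# each group are assumptions the decoder brings, not facts contained in the message.
bits = "101001110010110001011100"  # 24 bits, made up for illustration

def chunk(s, n):
    return [s[i:i + n] for i in range(0, len(s), n)]

print(chunk(bits, 3))  # triples: three light frequencies per point?
print(chunk(bits, 6))  # 6-bit groups: sound samples, or coordinates in 3D space?
print(chunk(bits, 8))  # bytes: intensities at some unknown absolute scale?
```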
I think the key facility of an agent vs a calculator is the capability to create new short term goals and actions. A calculator (or water, or bacteria) can only execute the "programming" that was present when it was created. An agent can generate possible actions based on its environment, including options that might not even have existed when it was created.
I think even these first rough concepts have a distinction between beliefs and values. Even if the values are "hard coded" from the training period and the manual goal entry.
Being able to generate short term goals and execute them, and see if you are getting closer to your long term goals, is basically all any human does. It's a matter of scale, not kind, between me and a dolphin and AgentGPT.
In summary: Creating an agent was apparently already a solved problem, just missing a robust method of generating ideas/plans that are even vaguely possible.
Star Trek (and other sci-fi) continues to be surprisingly prescient, and "Computer, create an adversary capable of outwitting Data" creating an agent AI is actually completely realistic for 24th century technology.
Our only hopes are:
- The accumulated knowledge of humanity is sufficient to create AIs with an equivalent of IQ of 200, but not 2000.
- Governments step in and ban things.
- Adversarial action keeps things from going pear shaped (winning against nature is much easier than winning against other agents - just ask any physicist who tried to beat the stock market)
- Chimps still have it pretty good, at least by their own standards, even though we took over the world.
Consider a fictional king with two advisors. One predicts the next heavy rainfall will occur in 3 weeks time. The second predicts no heavy rain for the next 3 years. If your farmers need to plant crops 1 week before heavy rain, and you are equally confident in both advisors, what should you do?
This is a classic decision-making problem that involves balancing two conflicting pieces of information. If we assume that the predictions of both advisors are equally reliable, then the best course of action is to take a middle-ground approach that minimizes the risks associated with each prediction.
In this case, one advisor predicts heavy rainfall in 3 weeks, while the other predicts no heavy rain for the next 3 years. To balance these conflicting predictions, the king should consider planting crops in two separate phases.
First, he should plant a small portion of the crops immediately, to ensure that they are in the ground before any potential heavy rain. This will minimize the risk of missing the opportunity to plant crops before the predicted rainfall in 3 weeks.
Next, the king should wait for the predicted rainfall to occur or not occur, as per the advisors' predictions. If heavy rain does occur in 3 weeks, then the remainder of the crops should be planted immediately after the rain stops. If heavy rain does not occur in 3 weeks, then the remainder of the crops should be planted gradually over the next few months, until the next heavy rainfall is predicted to occur.
By adopting this approach, the king can minimize the risks associated with both predictions, while ensuring that his farmers have the best chance of growing healthy crops.
ChatGPT Feb 13 Version. Free Research Preview.
The devil, as they say, is in the details. But worst case scenario is to flip a coin - don't be Buridan's Ass and starve to death because you can't decide which equidistant pile of food to eat.
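To make the "details" point concrete, here is a toy expected-value calculation; every payoff number below is invented for illustration, and the ranking flips depending on what you assume:

```python
# Toy EV comparison for the king's problem. All payoffs are made up: planting right
# before heavy rain returns 1.0, planting with no rain coming returns 0.2 (failed crop),
# and holding the seed back preserves 0.5 of its value.
p_rain_in_3_weeks = 0.5  # "equally confident in both advisors"

ev_plant_everything = p_rain_in_3_weeks * 1.0 + (1 - p_rain_in_3_weeks) * 0.2
ev_plant_nothing    = 0.5
ev_plant_half       = 0.5 * ev_plant_everything + 0.5 * ev_plant_nothing  # the hedge

print(ev_plant_everything, ev_plant_nothing, ev_plant_half)  # 0.6 0.5 0.55
# With these numbers, planting everything wins. Raise the hold-back value to 0.8 (the
# seed keeps fine for next season) and planting nothing wins instead: the ranking
# lives entirely in the assumed details.
```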