Comments
I don't think the assumption of equal transaction costs holds. If I want to fill in some potholes on my street, I can go door to door and ask for donations - which costs me time but has minimal and well-understood costs for the other contributors. If I have to add "explain this new mechanism," "keep track of escrow funds," and "cycle back, tell everyone how the project funding is going, and have them re-decide if/how much to contribute," that is a whole bunch of extra cost.
Also, if the public good is not all-or-nothing (e.g., I could fix anywhere from 1 to 10 of the potholes, and do it well or slapdash), then the producer can make a decision about quality after seeing the final funding total, rather than having to specify everything in advance. For example, your website needs, say, 40 hours of work to be a minimally viable product, but could continue to benefit from work up to 400 hours. How many hours is the (suspiciously specific) $629 buying?
A normal Kickstarter is already impossible to price correctly - 99% either deliberately underprice to ensure "success" (the "preorders" model), accidentally underprice and cost the founders a ton of unpaid overtime (the gitp model), or overprice and don't get funded.
A clarification:
Consider the premises (with scare quotes indicating technical jargon):
- "Acting in Bad Faith" is Baysean evidence that a person is "Evil"
- "Evil" people should be shunned
The original poster here is questioning premise 1, presenting evidence that "good" people act in bad faith too often for it to be evidence of "evil."
However, I believe the original poster is using a broader definition of "Acting in Bad Faith" than the people who support premise 1.
That definition, concisely, would be "engaging in behavior that is recognized in context as moving towards a particular goal, without actually holding that goal." Contrast this with the OP's quote: bad faith is when someone's apparent reasons for doing something aren't the same as the real reasons. The person's apparent reasons don't matter; what matters are the socially determined values associated with specific behaviors, as in the Wikipedia examples. While some behavior (e.g., a conversation) can have multiple goals, some special acts (waving a white flag, arguing in court, and now in the 21st century, arguing outside of court) have specific expected goals (respectively: allowing a person to withdraw from a fight without dying, presenting factual evidence that favors one side of a conflict, and persuading others of your viewpoint). When an actor fails to hold those generally understood goals, that disrupts the social contract, and "we" call it "Acting in Bad Faith."
A strange game.
The only winning move is not to play.
Just don't use the term "conspiracy theory" to describe a theory about a conspiracy. Popular culture has driven "false" into the definition of that term, and wishful appeals to the bare text don't make that connection go away. It hurts that some terms are limited in usability, but the burden of communication falls on the writer.
Looking at actual neo-Nazi and white supremacist pages/forums shows quite extensive usage of 14 & 88 symbolism, and explicit explanations of the same, so your first point is factually inaccurate.
The term "conspiracy theory" comes pre-loaded with the connotation of "false" and you cannot use those words to describe a situation where multiple people have actually agreed to do something.
The innocent explanation is that the SS got back to him just before some sort of 90 day deadline, and he did the math. In which case the tweet could have been made out of ignorance, like flashing an "OK" sign in the "White Power" orientation. It's not easy to keep up with all the dog whistles out there.
Still political malpractice not to track and avoid those signals, though. If you "accidentally" have a rainbow in the background of a campaign photo, that counts as aligning with the LGBTQ+ crowd - same thing with putting "88" in a campaign post and Nazis.
So, the tweet aligns his campaign with the Nazis, but might have done it accidentally.
Obligatory xkcd: https://xkcd.com/2347/
Neurotypicals have weaker preferences regarding textures and other sensory inputs. By and large, they would not write, read, or expect others to be interested in a blow-by-blow of aesthetics. Also, at a meta level, the very act of writing down specifics about a thing is not neurotypical. Contrast this post with the equivalent presentation in a mainstream magazine. The same content would be covered via pictures, feeling words, and generalities, with specific products listed in a footnote or caption, if at all. Or consider what your neurotypical friend's Facebook post about a renovation/new house/etc. would emphasize: typically it's the people, as in "we just bought a house. I love the wide open floor plan, and the big windows looking out over the yard make me so happy," in contrast to "we find that the residents are happier and more productive with 1000W of light instead of the typical 200."
#don'texplainthejoke
So, hashtag "tell me the Rationalist community is neurodivergent without telling me they are neuro-divergent"?
The real answer is that you should minimize the risk that you walk away and leave the door open for hours, and open it zero times whenever possible. The relative heat loss from 1 vs. many separate openings is not significantly different, but either is much more than 0, and the tail risk of "all the food gets warm and spoils" should dominate the decision.
I don't think your model is correct. Opening the fridge causes the accumulated cold air to fall out over a period of a few (maybe 4-7?) seconds, after which it doesn't really matter how long you leave it open, as the air is all room temp. The stuff will slowly take heat from the room-temp air, at a rate of about 1 degree/minute. Once the door is closed, it takes a few minutes (again, IDK how long) to get the air back to 40F, and then however long to extract the heat from the stuff. If you are choosing between "stand there with it open" and "take something out, use it, and put it back within a few minutes," there is no appreciable difference in the air temp inside the fridge for those two options - in both cases things will return to temp some minutes after the last closing. You can empirically test how long it takes to re-cool the air simply by getting a fridge thermometer and seeing how the temperature varies with different wait times. Or just see how long before the escaping air "feels cold" again.
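For concreteness, here is a minimal sketch of that model in Python (the warming rate and re-cool time are my own illustrative assumptions, not measurements):

```python
# Minimal sketch of the simplified model above: once the cold air falls out,
# the contents warm at roughly a fixed rate until the air is cold again, and
# the air only starts re-cooling once the door stays closed.
WARM_RATE_F_PER_MIN = 1.0   # assumed warming of contents in room-temp air
RECOOL_MIN = 3.0            # assumed minutes for the air to get back to 40F after the final close

def contents_warming(task_minutes: float) -> float:
    """Degrees F gained by the contents over a task of the given length.

    Under this model it doesn't matter whether the door stays open the whole
    time or is closed and reopened within the task window: the air inside is
    warm for the task duration plus one re-cool period either way.
    """
    return WARM_RATE_F_PER_MIN * (task_minutes + RECOOL_MIN)

# "Stand there with it open for 3 minutes" vs.
# "take something out, use it for 3 minutes, put it back":
print(contents_warming(3.0))  # ~6F of warming, the same for both options
```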
Re: happiness, it's that meme graph:
- Dumb: low expectations, low results, is happy
- Top: can self-modify expectations to match reality, is happy
- Muddled middle: takes expectations from environment, can't achieve them, is unhappy
The definition of Nash equilibrium is that you assume all other players will stay with their strategy. If, as in this case, that assumption does not hold, then you have (I guess) an "unstable" equilibrium.
The other thing that could happen is silent deviations, where some players aren't doing "punish any defection from 99" - they are just doing "play 99" to avoid punishments. The one brave soul doesn't know how many of each there are, but can find out when they suddenly go for 30.
It's not. The original Nash construction is that player N picks a strategy that maximizes their utility, assuming all other players get to know what N picked and then pick a strategy that maximizes their own utility given that. Minimax as a goal is only valid for atomic game actions, not complex strategies - specifically because of this "trap."
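A toy check of that definition (my own example, not taken from this thread): a strategy profile is a Nash equilibrium iff no player can improve their payoff by deviating while everyone else keeps their strategy fixed.

```python
# Prisoner's-dilemma-style payoffs: payoffs[(row_action, col_action)] = (row_payoff, col_payoff)
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
actions = ["cooperate", "defect"]

def is_nash(row_action: str, col_action: str) -> bool:
    row_payoff, col_payoff = payoffs[(row_action, col_action)]
    # Could the row player do better by unilaterally switching?
    if any(payoffs[(a, col_action)][0] > row_payoff for a in actions):
        return False
    # Could the column player do better by unilaterally switching?
    if any(payoffs[(row_action, a)][1] > col_payoff for a in actions):
        return False
    return True

print(is_nash("defect", "defect"))        # True: the unique Nash equilibrium here
print(is_nash("cooperate", "cooperate"))  # False: each player gains by defecting alone
```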
There is a more fundamental objection: why would a set of 1s and 0s represent (given periodic repetition in 1/3 of the message, so dividing it into groups of 3 makes sense) specifically 3 frequencies of light and not (see the sketch after this list):
- Sound (hat tip Project Hail Mary)
- An arrangement of points in 3D space
- Actually 6 or 9 "bytes" to define each "point"
- Or the absolute intensity or scale of the information (hat tip Monty Python tiny aliens)
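A minimal sketch of the ambiguity: the same values, grouped into triples, read two different ways. The message and the 4-bit word size are invented for illustration.

```python
message = "0010 0101 1111 0001 1000 0011".split()
values = [int(word, 2) for word in message]                          # [2, 5, 15, 1, 8, 3]
triples = [tuple(values[i:i + 3]) for i in range(0, len(values), 3)]

# Interpretation 1: each triple is a color (three light intensities).
colors = [{"red": r, "green": g, "blue": b} for r, g, b in triples]

# Interpretation 2: each triple is a point in 3D space.
points = [{"x": x, "y": y, "z": z} for x, y, z in triples]

print(colors)  # [{'red': 2, 'green': 5, 'blue': 15}, {'red': 1, 'green': 8, 'blue': 3}]
print(points)  # [{'x': 2, 'y': 5, 'z': 15}, {'x': 1, 'y': 8, 'z': 3}]
# Nothing in the bits themselves says which reading (if either) is intended.
```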
I think the key facility of an agent vs. a calculator is the capability to create new short-term goals and actions. A calculator (or water, or bacteria) can only execute the "programming" that was present when it was created. An agent can generate possible actions based on its environment, including options that might not even have existed when it was created.
I think even these first rough concepts have a distinction between beliefs and values. Even if the values are "hard coded" from the training period and the manual goal entry.
Being able to generate short-term goals and execute them, and see if you are getting closer to your long-term goals, is basically all any human does. It's a matter of scale, not kind, between me and a dolphin and AgentGPT.
In summary: Creating an agent was apparently already a solved problem, just missing a robust method of generating ideas/plans that are even vaguely possible.
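A minimal sketch of that loop (every name below is a placeholder of mine, not any real framework's API):

```python
import random

def propose_subgoals(long_term_goal, environment):
    """Placeholder: brainstorm candidate short-term goals from the current environment."""
    return [f"{long_term_goal}: candidate step {i}" for i in range(3)]

def execute(subgoal, environment):
    """Placeholder: carry out one short-term goal and return the updated environment."""
    return {"progress": environment["progress"] + random.random()}

def progress(environment):
    """Placeholder: estimate how close we are to the long-term goal (0 to 1)."""
    return min(environment["progress"] / 10.0, 1.0)

environment = {"progress": 0.0}
while progress(environment) < 1.0:
    subgoal = random.choice(propose_subgoals("some long-term goal", environment))
    environment = execute(subgoal, environment)
    # The "agent-ness" is in this loop: generate options, act, check progress.
```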
Star Trek (and other sci-fi) continues to be surprisingly prescient, and "Computer, create an adversary capable of outwitting Data" creating an agent AI is actually completely realistic for 24th-century technology.
Our only hopes are:
- The accumulated knowledge of humanity is sufficient to create AIs with an equivalent of IQ of 200, but not 2000.
- Governments step in and ban things.
- Adversarial action keeps things from going pear-shaped (winning against nature is much easier than winning against other agents - just ask any physicist who has tried to beat the stock market)
- Chimps still have it pretty good, at least by their own standards, even though we took over the world.
Consider a fictional king with two advisors. One predicts the next heavy rainfall will occur in 3 weeks' time. The second predicts no heavy rain for the next 3 years. If your farmers need to plant crops 1 week before heavy rain, and you are equally confident in both advisors, what should you do?
This is a classic decision-making problem that involves balancing two conflicting pieces of information. If we assume that the predictions of both advisors are equally reliable, then the best course of action is to take a middle-ground approach that minimizes the risks associated with each prediction.
In this case, one advisor predicts heavy rainfall in 3 weeks, while the other predicts no heavy rain for the next 3 years. To balance these conflicting predictions, the king should consider planting crops in two separate phases.
First, he should plant a small portion of the crops immediately, to ensure that they are in the ground before any potential heavy rain. This will minimize the risk of missing the opportunity to plant crops before the predicted rainfall in 3 weeks.
Next, the king should wait for the predicted rainfall to occur or not occur, as per the advisors' predictions. If heavy rain does occur in 3 weeks, then the remainder of the crops should be planted immediately after the rain stops. If heavy rain does not occur in 3 weeks, then the remainder of the crops should be planted gradually over the next few months, until the next heavy rainfall is predicted to occur.
By adopting this approach, the king can minimize the risks associated with both predictions, while ensuring that his farmers have the best chance of growing healthy crops.
The devil, as they say, is in the details. But worst case scenario is to flip a coin - don't be Buridan's Ass and starve to death because you can't decide which equidistant pile of food to eat.
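For what it's worth, the 50/50 version is easy to compute once you pin down those details; the payoffs below are made up for illustration, only the equal weighting comes from the setup:

```python
p_rain_in_3_weeks = 0.5          # equal confidence in the two advisors

payoff_plant_and_rain = 100      # assumed: crop succeeds
payoff_plant_no_rain = -20       # assumed: seed and labor wasted
payoff_no_plant = 0

ev_plant = (p_rain_in_3_weeks * payoff_plant_and_rain
            + (1 - p_rain_in_3_weeks) * payoff_plant_no_rain)

print(ev_plant, payoff_no_plant)  # 40.0 vs 0 -> plant, under these assumed payoffs
```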
Making choices between domains in pursuit of abstract goals:
Say I have an agent with the goal of "win $ in online poker" and read/write access to the internet. Obviously that agent will simulate millions of games, and play thousands of hands online, to learn more about poker and get better. What I don't expect to ever see (without explicit coding by a human) is that "win $ at poker" AI looking up instructional YouTube videos to learn from human experts, or telling its handlers to set up additional hardware for it, or writing child AI programs with different strategies and having them play against each other, or trading crypto during a poker game because that is another way to "win $," or even coding and launching a new poker-playing website. I would barely expect it to find new sites where it could play and be able to join those sites.
A better headline would be "I created a market on whether, in 2 months, I will believe that IQ tests measure what I believe to be intelligence." Not a particularly good market question.
What we saw when the I-15 corridor was expanded (southern California, from Riverside to San Diego, inland) was that over time people were willing to live further from work, because the commute was "short enough," but as more people did that it got crowded again. So total vehicle-miles increased without increasing the number of vehicle trips, since each trip was longer.
Highlighting the point in the Q&A: If you are having fun in HS or college, you don't need to leave. Put that extra energy that could be going towards graduating early into a side project (learn plumbing, coding, carpentry, auto maintenance, socializing, networking, YouTubing, dating, writing, or anything else that will have long-term value regardless of what your career happens to be).
I'm a big fan of "take community college courses, and have them count for HS credit and towards your associate's/bachelor's," if your HS allows it.
Have you tried playing with two (or 3 or 4) sides considered "open" - allowing groups to live if they touch those sides (abstracting away a larger board, to teach or practice tactical moves)?
"Baby sign" is just a dozen or so concepts like "more", "help", "food", "cold" etc. The main benefit is that the baby can learn to control thier hands before they learn to control thier vocal chords.
I'll just note here that "ability to automate the validation" is only possible when we already know the answer. Since the automated loom, computers have been a device for doing the same thing, over and over, very fast.
Let us introduce a third vocabulary word: Asset. An Asset is something that is consumed to provide Value, like cash in a mattress, food in a refrigerator, or the portion of a house or car that is depreciating as it is used. One of the miracles of the modern age is the ability of banks to turn Assets into Wealth many times over. It's a bit of technology built on societal trust. In the stock market example, it isn't double-counting, just different perspectives. Stock shares are a claim on the company, so the Google code is included in the Wealth of Google, and stock ownership is counted in the Wealth of the individual owners, but it's like saying "there are 20 legs in my dining room, and there are 4 legs on the table" - it's an error of logic to add the 20 to the 4.
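One way to read "turn Assets into Wealth many times over" is the textbook money-multiplier story; the 10% reserve ratio below is an assumption for illustration, not a claim about any particular banking system:

```python
initial_deposit = 100.0
reserve_ratio = 0.10      # assumed fraction the bank must hold back

total_lending = 0.0
deposit = initial_deposit
for _ in range(200):                # iterate until re-deposits are negligible
    loanable = deposit * (1 - reserve_ratio)
    total_lending += loanable
    deposit = loanable              # loans get spent and re-deposited elsewhere

print(round(total_lending))         # ~900: the original 100 ends up backing ~900 of loans
```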
I distinguish between Wealth and Value as concepts. Lots of things provide Value (a croissant, a free library app, refactoring code, project management), but Wealth is specifically things that provide ongoing value when used, without being used up. For example, a code base provides ongoing value to its owners, and a refactored code base provides more ongoing value, so the refactoring increases Wealth. Living near a beach or other nice place is a form of Wealth. Money in the bank, or in stocks, that is generating enough income to outpace inflation is Wealth. Strong relationships are Wealth. Useful knowledge is Wealth. In summary, Wealth is anything that generates (not "is converted to") Value over time.
It turns out publication bias is one heck of a drug. Every success of automation was touted, and every failure quietly tucked away, until one day the successes started getting smaller and less significant. We still see improvements around the edges of capability, but the big rocks, like making choices between domains in pursuit of abstract goals, remain elusive.
Having read the linked piece, I think it may be more a case of common cause than of learning a new skill. People who are good at deciphering one complex system are going to be good at deciphering other complex systems. And people with experience doing that are going to be better than those without. "Seeing the meta" is just a way to ID people who have learned how to learn systems.
Depending on what level of competition and scope you are looking for, here are some suggestions:
For a tiny group (dozens of players), see https://forums.sirlingames.com/
For a larger (thousands), but still moderately easy to learn, https://storybookbrawl.com/
For actual global competition (millions, good luck): https://magic.wizards.com/en/mtgarena or https://hearthstone.blizzard.com/en-us
It's not just a low-IQ human, though. I know some low-IQ people, and their primary deficiency is the ability to apply information to new situations. Where "new situation" might just be the same situation as before, but on a different day, or with a slightly different setup (e.g., a task requires A, B1, B2, and C - usually all 4 things need to be done, but sometimes B1 is already done and you just need to do A, B2, and C. Medium- or high-IQ folx would notice and just do the 3 things, though medium might also just do B1 anyway. Low-IQ folx tend to do A, and then not know what to do next because it doesn't match the pattern.)
The one test I ran with ChatGPT was asking three questions (with a prompt that you are giving instructions to a silly dad who is going to do exactly what you say, but misinterpret things if they aren't specific):
Explain how to make a peanut butter sandwich (gave a good result to this common choice)
Explain how to add 237 and 465 (word salad with errors every time it attempted)
Explain how to start a load of laundry (was completely unable to give specific steps - it only had general platitudes like "place the sorted laundry into the washing machine." Nowhere did it specify to only place one sort of sorted laundry in at a time, nor did it ever include "open the washing machine" in the list.)
You can also just speed-walk: quickly take full-size strides, but always keep at least one foot on the ground - this keeps your torso at the same elevation for the whole journey and eliminates the bouncing (and, added bonus, it doesn't look like you're running).
You could also say "no" if:
- You don't have "goals in life"
- Your parents are dead
- You don't care what your parents (or anyone else) think (a fairly common feeling among Autism Spectrum folx)
- You are focused on one or two important things (goal: get a promotion / get an A in this class / etc.), and nebulous "make my parents proud" things aren't as important.
- You interpret the question as referring to both or all your parents, but one or more of the previously mentioned reasons apply to some of your parents, so while you might want to make "my mom" proud, that doesn't apply to "my dad" or "my stepmom" and therefore you don't consider "my parents" a unified entity.
So, basically this comic: https://www.smbc-comics.com/comic/2011-07-13
Also, what that other user said: the opportunity cost is only the next best thing you could have done, not all the alternatives.
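A tiny illustration of that point (the numbers are made up):

```python
chosen = {"the thing you actually did": 80}
alternatives = {"go to the movies": 50, "take a nap": 30, "mow the lawn": 20}

opportunity_cost = max(alternatives.values())       # 50: only the next-best option
not_opportunity_cost = sum(alternatives.values())   # 100: the wrong way to count it

print(opportunity_cost, not_opportunity_cost)
```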
A 1:10 management ratio is a very bad assumption as things get large. First-level supervisors in retail or manufacturing often have 30-100 direct reports, regardless of how many levels of management exist above them (with 2-6 direct reports each).
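A rough feel for what that does to the shape of the org chart (the headcount and spans below are assumed for illustration, not taken from any real company):

```python
def layers(total_employees: int, front_line_span: int, upper_span: int) -> int:
    """Management layers: first-level supervisors plus the levels needed above them."""
    count = -(-total_employees // front_line_span)   # ceiling division: number of supervisors
    levels = 1
    while count > 1:
        count = -(-count // upper_span)              # each upper level manages `upper_span` people
        levels += 1
    return levels

print(layers(100_000, front_line_span=10, upper_span=10))  # 5 layers under the uniform 1:10 assumption
print(layers(100_000, front_line_span=50, upper_span=4))   # 7 layers with wide front lines, narrow upper spans
```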
Also, modern supply chains mean that medium to big companies often have entire additional companies "reporting into" the management structure.
There are some mainstream-ish ideas out there, like Servant Leadership or Agile, that try to work around this problem by disentangling power and prestige. If the lower-level managers are organizationally "in charge" of what actually gets done (leaving the higher levels just in charge of who to hire/fire), that breaks up some of the feedback.
Another thing to do is ensure that Management is not the only (and hopefully not even the best) way to gain prestige in the company. Include the top widget makers in the high level corporate meetings. They might not care about strategic direction, but if the CEO's golf foursome includes the senior production employees in the rotation alongside upper management (and the compensation packages are similar between them), that can help.
The physical world is also acting continuously based on inputs it receives from people, and we don't say "The Earth" is an intelligence.
I think the biggest piece of an actual GI that is missing from text extenders is agency. Responding to prompts and answering questions is one thing, but deciding what to do/write about next isn't even a theoretical part of their functionality.
See also https://www.sirlin.net/articles/designing-yomi for more on this phenomenon
So, in summary, current-gen AI continues to be better at reaching small goals (win this game, write a paragraph) than the average human attempting those goals, but still lacks the ability to switch between disparate activities, and doesn't reach the level of a human expert once the playing field gets more complicated than Go or heads-up poker?
If you are buying "a place to live" that can get complicated.
But buying "a building to rent out" isn't any more fraught than buying a car (or a boat, which is a more expensive vehicle; or a private jet for an even bigger "thing").
Guessing that it involved a human checking the website and/or sending an email to find out if the author had used a translation tool
Maybe, but a big player selling without explanation can also cause a panic.
There are also reputation effects with either choice.
Both examples are in "Cut time" [2/2] - so only 2 beats to a measure.
A middle-ground version of this happens (in the US) over the summer, when almost all kids are out of school for 8 weeks between June 15 and Aug 15 (plus or minus), so families that can often take their long vacations during that time.
On my 10-person team, that led to the entire 8 weeks having 1-4 people off.
I don't know, but it would depend greatly on your kettle design.