Posts

whestler's Shortform 2024-09-23T11:45:35.419Z

Comments

Comment by whestler on Are we dropping the ball on Recommendation AIs? · 2024-10-24T12:29:40.806Z · LW · GW

I've been thinking about this in the back of my mind for a while now. I think it lines up with points Cory Doctorow has made in talks about enshittification. 

I'd like to see recommendation algorithms which are user-editable and preferably platform-agnostic, to allow low switching costs: a situation where people can build their own social media platform and install a recommendation algorithm which works for them, pulling in posts from the users they follow across platforms. I've heard that the fediverse is trying to do something like this, but I've not been able to get engaged with it yet.
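To make "user-editable" concrete, here's a toy sketch of what I have in mind (the names and structure are purely illustrative, not any existing fediverse API): the platform's fixed job shrinks to supplying candidate posts, and the user supplies, and can freely edit, the scoring rule.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, List

@dataclass
class Post:
    author: str
    topic: str
    age_hours: float

# The platform's only fixed job: rank posts by whatever rule the user plugs in.
def rank(posts: Iterable[Post], score: Callable[[Post], float]) -> List[Post]:
    return sorted(posts, key=score, reverse=True)

# One user's hand-written preferences: boost followed authors, decay old posts.
followed = {"alice", "bob"}

def my_score(post: Post) -> float:
    follow_boost = 2.0 if post.author in followed else 1.0
    return follow_boost / (1.0 + post.age_hours)

feed = [Post("alice", "ai", 5.0), Post("carol", "cats", 1.0), Post("bob", "bikes", 30.0)]
for post in rank(feed, my_score):
    print(post.author, post.topic)
```

Swapping `my_score` for a different function is the whole point: the ranking logic lives with the user, not the platform, so switching platforms doesn't mean losing your feed preferences.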

It's cool to see efforts like Tournesol, though it's a shame they don't have a mobile extension yet.

Comment by whestler on There is a globe in your LLM · 2024-10-09T11:31:16.763Z · LW · GW

This is fascinating, and is further evidence to me that LLMs contain models of reality.
I get frustrated with people who say LLMs "just" predict the next token, or that they simply copy and paste bits of text from their training data. This argument skips over the fact that in order to accurately predict the next token, it's necessary to compress the training data down to something which looks a lot like a mostly accurate model of the world. In other words, if you have a large set of data entangled with reality, then the simplest model which predicts that data looks like reality.

This model of reality can be used to infer things which aren't explicitly in the training data, like distances between places which are never mentioned together.

Comment by whestler on whestler's Shortform · 2024-09-25T09:38:51.932Z · LW · GW

I'm not sure if this is the right place to post, but where can I find details on the Petrov day event/website feature?

I don't want to sign up to participate if (for example) I am not going to be available during the time of the event, but I get selected to play a role.

Maybe the lack of information is intentional?

Comment by whestler on shortplav · 2024-09-23T16:28:29.587Z · LW · GW

(Apologies in advance for the wall of text; don't feel you need to respond. I wrote it out and then almost didn't post it.)

To clarify, I wouldn't expect stagnant or decreasing salaries to be the norm. I just wanted to say that there are circumstances where I expect this to be the case. Specifically, if I am an employee living paycheck to paycheck (as many are), then I can't afford any time unemployed.

As a result, if my employer is able to squeeze me in this situation, I might agree to a lower wage out of necessity.

The problem with your proposed system is that it essentially encourages employees to selectively squeeze themselves: if they're in a situation where they can't afford to lose their job, this will lower what they ask for in a negotiation, and what they receive, even if the employer is offering the same r-max to all employees. This has little to do with their relative skills as an employee and everything to do with their financial situation and responsibilities outside work.

Here's an example. I'm not sure why I wrote it, but here it is:

Brenda and Karl work at a gas station supermarket. They both work the same job, on the checkout, with some shelf stocking as needed.

Brenda is a single mom paying for childcare for her 2-year-old, and the rest of her earnings go on rent, food, and fuel for her beat-up car (it's a miracle it's still running). She works at the gas station 4 days a week, 9-7.

Karl takes on shifts 2 nights a week; it helps pay his way through college and he enjoys the extra money. His parents give him enough that he could probably survive without the job entirely, and a period of unemployment certainly would not be a big problem for him.

Brenda and Karl both get paid $15/hr for their work, but they know that the new "payPlav system (TM)" is being introduced by management, and they have a pay negotiation coming up.

Management asks them to read the rules of the new system carefully and submit their r-min. They say that if r-max < r-min, the employee will stay on their existing salary.

Brenda sets her r-min at $15.50. She could do with a significant pay bump, but she doesn't want to lose out on the pay increase entirely, since she's only holding it together at $15.

Karl sets a bolder r-min of $16.50. He works hard at the job and thinks he deserves more, but it's not a big deal if he misses out and stays at $15.

Management sets r-max at $17.

Brenda gets $16.25.

Karl gets $16.75.

I don't think this is fair. It's a clear case where the system rewards the employees who care less and need the money less.

Here's another scenario, the same as the above, but management says that r-max < r-min will mean termination of the contract.

Brenda sets her r-min at $13.50. She simply can't lose this job; it would ruin her.

Karl sets his r-min at $16.

Management sets r-max at $15.50.

Under the same midpoint rule, Brenda ends up on $14.50 (a pay cut from her current $15), while Karl's contract is terminated. It's more extreme, to be sure, and maybe a little unrealistic.
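For concreteness: the payouts above are consistent with a simple split-the-difference rule, salary = (r-min + r-max) / 2 whenever r-max >= r-min. Here's a minimal sketch of both scenarios (the midpoint rule is my inference from the example numbers, not a stated part of the proposal):

```python
def payplav(r_min: float, r_max: float, current: float, terminate_on_miss: bool = False):
    """Toy model: employee submits r_min, employer submits r_max.

    If the ranges overlap, pay the midpoint (inferred from the example payouts).
    Otherwise, either keep the current salary or terminate, depending on the rule.
    Returns the new hourly rate, or None for termination.
    """
    if r_max >= r_min:
        return (r_min + r_max) / 2
    return None if terminate_on_miss else current

# Scenario 1: a miss falls back to the current salary.
print(payplav(15.50, 17.00, 15.00))        # Brenda -> 16.25
print(payplav(16.50, 17.00, 15.00))        # Karl   -> 16.75

# Scenario 2: a miss terminates the contract.
print(payplav(13.50, 15.50, 15.00, True))  # Brenda -> 14.5 (a pay cut)
print(payplav(16.00, 15.50, 15.00, True))  # Karl   -> None (terminated)
```

The squeeze falls straight out of the inputs: the more an employee fears r-max < r-min, the lower their r-min, and so the lower the midpoint they're paid.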


Many workers employed on zero-hours contracts end up in this situation: since the employer can lower wages with impunity and the workers don't have many other options, they get squeezed for profit. Sometimes unscrupulous employers do this selectively, based on which employees can least afford to stop working. This results in the most impoverished employees losing out.

Comment by whestler on whestler's Shortform · 2024-09-23T11:45:35.680Z · LW · GW

I feel that human intelligence is not the gold standard of general intelligence; rather, I've begun thinking of it as the *minimum viable general intelligence*.
In evolutionary timescales, virtually no time has elapsed since hominids began trading, using complex symbolic thinking, making art, hunting large animals, etc., and here we are, a blip later, in high technology. The moment we reached minimum viable general intelligence, we started accelerating to dominate our environment on a global scale, despite gains in intelligence that are actually relatively meagre within that time: evolution acts over much longer timescales and can't keep pace with an environment we're modifying at an ever-increasing rate.
Moravec's paradox suggests we are in fact highly adapted to the task of interacting with the physical world, as basically all animals are, with some half-baked logical thinking systems tacked on to this base.

Comment by whestler on shortplav · 2024-09-09T11:09:56.538Z · LW · GW

The employee is incentivised to put their r-min as close as they can to their prediction of the employer's r-max, and how far they creep into the margin for error on that prediction will depend on how much they want or need the job. I don't think the r-min rate for new hires will change in a predictable way over time, since it depends on both the employee's prediction of their worth to the employer and how much they need the job.

For salary negotiation where the employee already has a contract, I would expect employees to set r-min at their current salary or a little above. 
This prediction is fully dependent on the consequences of r-max < r-min, though. If r-max < r-min results in immediate termination of the contract, then you might see wages stagnate or even decrease, depending on the employer's understanding of the employee's situation. In general, I dislike this situation, since it incentivises employers to exploit workers who can't afford a break in employment, squeezing them onto worse pay when they think they can get away with it. It encourages mind games as well: if the employer says "I'm thinking about setting r-max a little below your current salary", they may convince the employee to lower their r-min, and then even if the employer sets a reasonable r-max a little above the employee's current salary, the employee may lose out.

Comment by whestler on Are there any naturally occurring heat pumps? · 2024-08-29T12:39:38.539Z · LW · GW

When a whale dives after having taken a breath at the surface, it experiences higher pressure, and as a consequence the air in its lungs is compressed and should get a little warmer. This warmth will diffuse to the rest of the whale and the whale's surroundings over time, and when it returns to the surface the air in its lungs will cool again. I suppose this isn't really a continuous pump, more of a single action which involves pressure and temperature.

Any animal which is capable of altering its own internal pressure for an extended period of time should technically qualify, since pressurising an internal cavity will make the gas or liquid within hotter (and this heat will eventually radiate to the animal's surroundings). The animal can then cool down by reducing its internal pressure. This effect might be negligible for the low pressure differences produced by most animals, but should still be present.

Bivalves use their powerful bodies to suction themselves to a surface, and sea cucumbers can change their internal pressure to become rigid or flexible. You might have some luck there?

Theoretically, humans should be able to do a very small amount of heat-pumping by taking a large breath of air and then compressing it as much as possible using the diaphragm and chest muscles. This should cause the air to heat up a little (though I doubt it would be noticeable).
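As a rough back-of-the-envelope (my own sketch, assuming ideal-gas, perfectly adiabatic compression with gamma = 1.4 for air; a real dive is slow and the heat is carried away continuously, so the actual warming is far smaller):

```python
# Ideal adiabatic compression: T2 = T1 * (P2 / P1) ** ((gamma - 1) / gamma)
GAMMA = 1.4   # heat capacity ratio of air
T1 = 310.0    # lung air near body temperature, in kelvin (~37 C)

for depth_m in (10, 50, 100):
    pressure_ratio = 1 + depth_m / 10  # roughly +1 atm per 10 m of seawater
    T2 = T1 * pressure_ratio ** ((GAMMA - 1) / GAMMA)
    print(f"{depth_m:>3} m: {pressure_ratio:.0f}x pressure -> +{T2 - T1:.0f} K (adiabatic limit)")
```

Even the 10 m figure (+68 K or so) is a no-heat-loss ceiling rather than a prediction; the point is just that the sign and the mechanism match the compression stroke of a heat pump.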

Comment by whestler on Sherlockian Abduction Master List · 2024-08-12T14:48:19.499Z · LW · GW

Rolled pants leg up to the ankle on the right-hand side, but not the left: this is a fairly clear sign that someone is a cyclist, and has probably arrived recently.

They do it to avoid getting bike oil from the chain on the cuff of the pants, and to avoid the cuff getting caught in the gearing. Bicycles pretty much always have the crank gears on the right-hand side.

Comment by whestler on What is AI Safety’s line of retreat? · 2024-08-09T11:52:57.791Z · LW · GW

It doesn't seem particularly likely to me: I don't notice a strong correlation between intelligence and empathy in my daily life. Perhaps there are a few more intelligent people who are unusually kind, but that may just be the people I like to hang out with, or a result of more privilege and less abuse growing up, leading to better education and also higher levels of empathy. Certainly less smart people may be kind or cruel, and I don't see a pattern in it.

Regardless, I would expect genetically engineered humans to still have the same circuits which handle empathy and caring, and I'd expect them to be a lot safer than an AGI, perhaps even a bit safer than a regular human, since their more accurate models of the world make them less likely to cause damage through misconceptions or human error.

If you're worried about more intelligent people considering themselves a new species, and then not caring about humans, there's some evidence against this in that more intelligent people are more likely to choose vegetarianism, which would indicate that they're more empathetic toward other species.

Comment by whestler on It's time for a self-reproducing machine · 2024-08-09T10:53:05.849Z · LW · GW

If I did not see a section in your bio about being an engineer who has worked in multiple relevant areas, I would dismiss this post as a fantasy from someone who does not appreciate how hard building stuff is; a "big picture guy" who does not realise that imagining the robot is dramatically easier than designing and building one which works. 

Given that you know you are not the first person to imagine this kind of machine, or even the first with a rough plan to build one, why do you think that your plan has a greater chance of success than other individuals or groups which have tried before you? Is there something specific that you bring to the table that means you will avoid the challenges or be more suited to tackle them?

I think you might do better starting out by creating a machine which can assemble a copy of itself from pre-built, off-the-shelf parts. A robot arm with a camera attachment, capable of recognising the pre-made parts of itself and fitting them together autonomously, would be very challenging to make and would be a good proof of concept for the larger project.

If you have this system working, then the next step would be the same machine but including a 3D printer bed (which it is also capable of assembling) or a small-scale milling machine to build a few of the parts, continuing to add more and more manufacturing capabilities so that you have to supply fewer parts with each iteration of the design. I remember assembling my 3D printer a few years ago, and there were quite a lot of steps which would be major practical challenges for a robot of a similar size to the printer, even just in assembling the pre-made parts.

Comment by whestler on [Thought Experiment] Given a button to terminate all humanity, would you press it? · 2024-08-02T14:45:31.651Z · LW · GW

The only sensible reason I can imagine to push the button: a belief that doom from AI or something else is inevitable, and it would be better to remove all of humanity now and gamble on the possibility of a different intelligent form of life evolving in a few millennia.

Comment by whestler on tlevin's Shortform · 2024-07-31T14:20:33.906Z · LW · GW

Unfortunately different people have different levels of hearing ability, so you're not setting the conversation size at the same level for all participants. If you set the volume too high, you may well be excluding some people from the space entirely.

I think that people mostly put music on in these settings as a way to avoid awkward silences and to create the impression that the room is more active than it is, whilst people are arriving. If this is true, then it serves no great purpose once people have arrived and are engaged in conversation.

Another important consideration is sound-damping. I've been in venues where there's no music playing and the conversations are happening between 3-5 people, but everyone is shouting to be heard above the crowd, and it's incredibly difficult for someone with hearing damage to participate at all. This is primarily a result of hard, echoey walls and very few soft furnishings.

I think there's something to be said for having different areas with different noise levels, allowing people to choose what they're comfortable with, and observing where they go.

Comment by whestler on Age changes what you care about · 2024-07-31T11:32:40.189Z · LW · GW

I'm in the same boat. I'm not that worried about my own life, in the general scheme of things. I fully expect I'll die, and probably earlier than I would in a world without AI development. What really cuts me up is the idea that there will be no future to speak of, that all my efforts won't contribute to something, some small influence on other people enjoying their lives at a later time. A place people feel happy and safe and fulfilled.

If I had a credible offer to guarantee that future in exchange for my life, I think I'd take it.
(I'm currently healthy, with more than half my life left to live, assuming average life expectancy.)

Sometimes I try to take comfort in many-worlds, that there exist different timelines where humanity manages to regulate AI or align it with human values (whatever those are). Given that I have no capacity to influence those timelines though, it doesn't feel like they are meaningfully there.

Comment by whestler on Universal Basic Income and Poverty · 2024-07-29T11:54:16.557Z · LW · GW

"But housing prices over all of the US won't rise by the amount of UBI".

If UBI were offered across the whole US, I would expect housing prices to rise by the amount of the UBI.

If UBI is restricted to SF, then moving out of SF to take advantage of lower rents would not make sense, since you would also be giving up the UBI payments of equivalent value to do so. 

(Edit): If you disagree, I'd appreciate it if you can explain, or link me to some resources where I can learn more. I'm aware that my economic model is probably simplistic and I'm interested in improving it.

Comment by whestler on Failures in Kindness · 2024-07-23T16:23:31.246Z · LW · GW

Your money-donating example is a difficult one. Ideally, you'd anticipate this sort of thing ahead of time and intentionally create an environment where it's ok to say "no".

The facilitator could say something like: "this is intended as an exercise in group decision making, if you want to donate some of your own money as well to make this something you're more invested in, you are welcome to do that, but it's not something I expect everyone to be doing. We will welcome your input even if you're not putting money into the exercise this time." They could even say "I'm not adding anything myself today" to reinforce the message, and provide an ally, as in Asch's conformity experiments.

I find that most of these situations could be defused by forward planning and expectation-setting, though admittedly this is a mental load on the person who does the forward planning. Over time, though, it becomes more natural, and a person can build up conversational habits which follow these principles.

Comment by whestler on Yoshua Bengio: Reasoning through arguments against taking AI safety seriously · 2024-07-15T10:30:40.826Z · LW · GW

I initially thought there must be some simple reason that publishing the DNA sequence is not a dangerous thing to do, like: "given that you would need a world-class lab, and maybe even techniques which haven't been invented yet, to get it to work, it's not dangerous to publish".

According to this article from 2002, synthesising smallpox would be tricky, but within the reach of a terrorist organisation. Other viruses may be easier. 

“Scientifically, the results are not surprising or astounding in any way,” says virologist Vincent Racaniello of Columbia University. “The point here, of course, is that the DNA can be synthesized from the [genetic] sequence, and this could be done by any third-rate terrorist.”

Apparently, large organisations like the NIH are foolhardy enough to publish dangerous data like this. I wonder if there's some other justification, like "the data was already public, in such a way that it could not be removed".

Comment by whestler on Brief notes on the Wikipedia game · 2024-07-15T09:38:06.984Z · LW · GW

This was interesting. I tried the Industrial Revolution one. 

I initially thought it was strange that the textile industry came first (my history is patchy at best). I remembered that industrial looms were an important invention, but it seemed to me that something earlier in the production chain should be bigger, like coal extraction, rail, steam engines, or agriculture. I also noticed that electricity was not significant until after the industrial revolution. I think my error sensors were overactive, though: I flagged a lot of stuff as false and only some of it was. Here's the summary:


Correctly spotted: Electricity too early, Telephone too early, lack of much mention of steam power + coal.

Incorrectly spotted: Textile industry seemed too dominant. Did not know about/expect there would be a recession in the 1830s.
 

Comment by whestler on Sherrinford's Shortform · 2024-07-10T15:10:48.301Z · LW · GW

I think it's very likely we'll see more situations like this (and more ambiguous situations than this). I recall a story of an early Turing test experiment using hand-coded scripts sometime in the 2000s, where one of the most convincing chatbot contestants was one which said something like:

"Does not compute, Beep boop! :)" 

pretending to be a human pretending to be a robot for a joke.

Comment by whestler on Noah Birnbaum's Shortform · 2024-07-10T13:38:03.505Z · LW · GW

Non-paywall link: http://web.archive.org/web/20240709012837/https://www.nytimes.com/2024/07/04/technology/openai-hack.html

Comment by whestler on Sherrinford's Shortform · 2024-07-10T13:32:11.827Z · LW · GW

I had a look, and no, I read it as a bot. I think if it were a human writing a witty response, they would likely have: 

a) used the format to poke fun at the other user (Toby)

b) made the last lines rhyme.

Also, I wanted to check further so I looked up the account and it's suspended. https://x.com/AnnetteMas80550
Not definitive proof, but certainly evidence in that direction.

Comment by whestler on Daniel Kokotajlo's Shortform · 2024-05-24T13:17:41.673Z · LW · GW

For some reason this is just hilarious to me. I can't help but anthropomorphise Golden Gate Claude and imagine someone who is just really excited about the Golden Gate Bridge and can't stop talking about it, or has been paid a lot of money to unrepentantly shill for a very specific tourist attraction.

Comment by whestler on Quick Thoughts on Our First Sampling Run · 2024-05-23T10:33:36.143Z · LW · GW

From experience doing something similar, you may find you actually get better participation rates if you give away doughnuts or canned drinks or something, for the following reasons:

  • People are more familiar with the idea of a product give-away. 
  • The physical things are visible and draw attention.
  • The reward is more tangible/exciting than straight money (especially if you are considering lower values like $1 or $2).

In terms of benefits to you:

Less paperwork/liability for you than giving cash to strangers, and cheaper, as you've mentioned.

Comment by whestler on Questions are usually too cheap · 2024-05-14T15:01:48.253Z · LW · GW

Questions are not a problem; obligation to answer is a problem.

I think if any interaction becomes cheap enough, it can be a problem.

Let's say I want to respond to ~5 to 10 high-effort questions (questions where the askers have done background research and spent some time checking their wording so it's easy to understand). If I receive 8 high-effort questions and 4 low-effort questions, then that's fine: it's not hard to read them all and determine which ones I want to respond to.

But what if I receive 10 high-effort questions and 1000 low-effort questions? Then the low-effort questions are imposing a significant cost on me, purely because I have to spend effort filtering them out to reach the ones I want to respond to.

My desire to participate in answering questions, coupled with an incredibly cheap question-asking process, is sufficient to impose high costs on me (if I set up some kind of automated spam filter, this is also a cost, and leads to the kind of spam filter/automated email arms race that we currently see, with each automated system trying to outsmart the other).

Comment by whestler on Can we build a better Public Doublecrux? · 2024-05-13T14:59:42.437Z · LW · GW

I think it might be a good idea to classify a "successful" double crux as one where both participants agree on the truth of the matter at the end, or at least have shifted their world-views to be significantly more coherent.

It seems like the main obstacles to successful double crux are emotional (pride, embarrassment), and associations with debates, which threaten to turn the format into a dominance contest.

It might help to start with a public, joint announcement by both participants that they intend to work together to discover the truth, recognising that their currently differing models mean that at least one of them has the opportunity to grow in their understanding of the world and become a stronger rationalist, and that they are committed to helping each other become stronger in the art.

Alternatively you could have the participants do the double crux in their own time, and in private (though recorded). If the double crux succeeds, then post it, and major kudos to the participants. If it fails, then simply post the fact that the crux failed but don't post the content. If this format is used regularly, eventually it may become clear which participants consistently succeed in their double crux attempts, and which don't, and they can build reputation that way, rather than trying to "win" a debate.

Comment by whestler on Did Bengio and Tegmark lose a debate about AI x-risk against LeCun and Mitchell? · 2024-05-08T13:24:23.103Z · LW · GW

I wasn't able to find the full video on the site you linked, but I found it here, if anyone else has the same issue: 

Comment by whestler on The Best Tacit Knowledge Videos on Every Subject · 2024-04-16T17:00:15.937Z · LW · GW

Domain: PCB Design, Electronics
Link: https://www.youtube.com/watch?v=ySuUZEjARPY
Person: Rick Hartley
Background: Has worked in electronics since the 60s, senior principal engineer at L-3 Avionics Systems, principal of RHartley Enterprises
Why: Rick Hartley is capable of explaining electrical concepts intuitively and linking them directly to circuit design. He uses a lot of stories and visual examples to describe what's happening in a circuit. I'm not sure it counts as Tacit Knowledge since this is lecture format, but it includes a bunch of things that you might not know you don't know coming into the field. I never "got" how electrical circuits really work before watching this video, despite having been a hobbyist for years.

Comment by whestler on Open Thread Spring 2024 · 2024-04-10T09:52:50.230Z · LW · GW

In terms of my usage of the site, I think you made the right call. I liked the feature when listening but I wanted to get rid of it afterwards and found it frustrating that it was stuck there. Perhaps something hidden on a settings page would be appropriate, but I don't think it's needed as a default part of the site right now.

Comment by whestler on LessWrong's (first) album: I Have Been A Good Bing · 2024-04-09T10:20:19.747Z · LW · GW

I'm glad you like it! I was listening to it for a while before I started reading lesswrong and AI risk content, and then one day I was listening to "Monster" and started paying attention to the lyrics and realised it was on the same topic. 

Comment by whestler on LessWrong's (first) album: I Have Been A Good Bing · 2024-04-03T15:47:19.917Z · LW · GW

It isn't quite the same, but the musician "Big Data" has made some fantastic songs about AI risk.

Comment by whestler on The other side of the tidal wave · 2024-04-03T14:15:33.729Z · LW · GW

I realise this is a few months old, but personally my vision for utopia looks something like the Culture in Iain M. Banks's Culture novels. There's a high degree of individual autonomy, and people create their own societies organically according to their needs and values. They still have interpersonal struggles and personal danger (if that's the life they want to lead), but in general if they are uncomfortable with their situation they have the option to change it. AI agents are common, but most are limited to approximately human level or below. Some superhuman AIs exist, but they are normally involved in larger civilisational manoeuvring rather than the nitty-gritty of individual human lives. I recommend reading it.

Caveats-

1: Yes, this is a fictional example, so I'm definitely in danger of generalising from fictional evidence. I mostly think about it as a broad template, or cluster of attributes, that society might potentially be able to achieve.

2: I don't think this level of "good" AI is likely.

Comment by whestler on "No-one in my org puts money in their pension" · 2024-03-11T16:12:17.451Z · LW · GW

I had a similar emotional response to seeing these same events play out. The difference for me is that I'm not particularly smart or qualified, so I have an (even) smaller hope of influencing AI outcomes, plus I don't know anyone in real life who shares my concerns. They take me seriously, but aren't particularly worried about AI doom. It's difficult to live in a world where the people around you act like there's no danger, assuming that their lives will follow a similar trajectory to their parents'. I often find myself slipping into the same mode of thought.