Posts

Ethics and prospects of AI related jobs? 2024-05-11T09:31:04.190Z
Good Bings copy, great Bings steal 2024-04-21T09:52:46.658Z
The predictive power of dissipative adaptation 2023-12-17T14:01:31.568Z
It's OK to be biased towards humans 2023-11-11T11:59:16.568Z
Measure of complexity allowed by the laws of the universe and relative theory? 2023-09-07T12:21:03.882Z
Learning as you play: anthropic shadow in deadly games 2023-08-12T07:34:42.261Z
Ethodynamics of Omelas 2023-06-10T16:24:16.215Z
One bit of observation can unlock many of optimization - but at what cost? 2023-04-29T10:53:03.969Z
Ideas for studies on AGI risk 2023-04-20T18:17:53.017Z
Goals of model vs. goals of simulacra? 2023-04-12T13:02:59.907Z
The benevolence of the butcher 2023-04-08T16:29:04.589Z
AGI deployment as an act of aggression 2023-04-05T06:39:44.853Z
Job Board (28 March 2033) 2023-03-28T22:44:41.568Z

Comments

Comment by dr_s on London Rationalish meetup - Lincoln's Inn Fields · 2024-08-21T17:56:53.872Z · LW · GW

Yeah, I found it pretty soon after.

Comment by dr_s on London Rationalish meetup - Lincoln's Inn Fields · 2024-08-18T13:33:28.401Z · LW · GW

Is anyone actually around? I can't find the spot.

Comment by dr_s on How unusual is the fact that there is no AI monopoly? · 2024-08-18T06:06:46.514Z · LW · GW

I think your model only applies to some famous cases, but ignores others. Who invented computers? Who invented television networks? Who invented the internet?

Lots of things have inventors and patents only for specific chunks of them, or specific versions, but are as a whole too big to be encompassed. They're not necessarily very well defined technologies, but systems and concepts that can be implemented in many different ways. In these fields, focusing on patents is likely to be a losing strategy anyway: you'll simply stand still to protect your one increasingly obsolete good idea, like Homer Simpson in front of his sugar, while everyone else runs circles around you with their legally distinct versions of the same thing that they keep iterating and improving on. I think AI and even LLMs fall under this category. It's specifically quite hard to patent algorithms - and a good thing too, or it would really have a chilling effect on the whole field. I think you can only patent a specific implementation of them, but that's very limited; you can't patent the concept of a self-attention layer, for example, as that's just math. And that kind of thing is all it takes to build your own spin on an LLM anyway.

Comment by dr_s on How unusual is the fact that there is no AI monopoly? · 2024-08-18T05:59:10.510Z · LW · GW

Omnicide I can get behind, but patent infringement would be a bridge too far!

Comment by dr_s on I would have shit in that alley, too · 2024-06-24T20:40:39.719Z · LW · GW

I think in general it's mostly 1); obviously "infinite perfect bathroom availability everywhere" isn't a realistic goal, so this is about striking a compromise that is nevertheless more practical than the current situation. For things like these, honestly, I am disinclined to trust private enterprise too much - especially if left completely unregulated - but I am willing to concede that it's not my main concern. Obviously I wouldn't want the sidewalk to be entirely crowded out by competing paid chemical toilets though; that solves one problem but creates another.

Since the discussion here started around homelessness, and homeless people obviously wouldn't be able to pay for private bathrooms (especially if these did the obvious thing for convenience and forwent coins in favour of some kind of subscription service, payment via app, or such), I think the best solution would be free public bathrooms, and I think they would "pay for themselves" in terms of gains in comfort and cleanliness for the people living in the neighborhood. They should be funded locally of course. Absent that though, sure, I think removing some barriers to private suppliers of paid-for bathroom services would still be better than this.

Comment by dr_s on Some Experiments I'd Like Someone To Try With An Amnestic · 2024-06-24T06:57:08.734Z · LW · GW

My wife was put on benzodiazepines not long ago for a wisdom tooth extraction, same as the author of that post. She did manifest some of the same behaviours (e.g. asking the same thing repeatedly). But your plan to make people in those conditions take an IQ test has a flaw: she was also obviously high as balls. No way her cognitive abilities weren't cut down to like half of the usual. Not sure if this is a side effect of the loss of short term memory or a different effect of the sedatives, but yeah, this would absolutely impact an experiment IMO.

Comment by dr_s on I would have shit in that alley, too · 2024-06-23T22:14:45.298Z · LW · GW

No, sorry, it's not that I didn't find it clear, but I thought it was kind of an irrelevant aside - it's obviously true (though IMO going to a barista and passing a bill while whispering "you didn't see anything" might not necessarily work that well either), but my original comment was about the absurdity of the lack of systemic solutions, so saying there are individual ones doesn't really address the main claim.

Comment by dr_s on I would have shit in that alley, too · 2024-06-23T21:45:27.591Z · LW · GW

We're discussing whether this is a systemic problem, not whether there are possible individual solutions. We can come up with solutions just fine; in fact, most of the time you can just waltz in, go to the bathroom, and no one will notice. But "everyone pays bribes to the barista to go to the bathroom" makes absolutely no sense as a universal rule compared to "we finally acknowledge this is an issue and thus incorporate it squarely in our ordinary services instead of making up weird and unnecessary work-arounds".

Comment by dr_s on I would have shit in that alley, too · 2024-06-23T20:58:48.824Z · LW · GW

Tipping the barista is not really sticking to the rules of the business, though. It's bribing the watchman to turn a blind eye, and the watchman must take the bribe (and deem it worth the risks).

Comment by dr_s on I would have shit in that alley, too · 2024-06-23T13:36:49.758Z · LW · GW

Which is probably why there were apparently >50,000 pay bathrooms in the USA before some activists got them outlawed

Oh, I didn't know this story. Seems like a prime example of "be careful what economic incentives you're setting up". All that banning paid toilets has done is... fewer toilets, not more free toilets.

Though I wonder if now you could run a public toilet merely by plastering it with ads.

Comment by dr_s on I would have shit in that alley, too · 2024-06-23T13:33:28.025Z · LW · GW

Why is it better to pay an explicit bathroom-providing business than to pay a cafe (in the form of buying a cup of coffee)? It strikes me as a distinction without real difference, but maybe I'm confused.

Economically speaking, if to acquire good A (which I need) I also have to acquire good B (which I don't need and is more expensive), thus paying more than I would pay for good A alone, using up resources and labor I didn't need and that were surely better employed elsewhere, that seems to me like a huge market inefficiency.

Imagine this happening with anything else. "I want a USB cable." "Oh we don't sell USB cables on their own, that would be ridiculous. But we do include them as part of the chargers in smartphones, so if you want a USB cable, you can buy a smartphone." Would that make sense?

Comment by dr_s on Martin Sustrik's Shortform · 2024-06-23T06:58:25.177Z · LW · GW

Honestly, if the proportions of those roles were true to real life, I would simply never take the lottery - that's a near-certainty of ending up a peasant. I guess they must have made things a bit more friendly.

Comment by dr_s on I would have shit in that alley, too · 2024-06-23T06:54:45.991Z · LW · GW

I explained my reasoning here. Also note that most people who have demand for using the bathroom are not penniless homeless people.

Here is my reasoning. On one hand, going to the bathroom, sometimes in unpredictable circumstances, is an obvious universal necessity. It is all the more pressing for people with certain conditions that make it harder for them to control themselves for long. So it's important that bathrooms are available, quickly accessible, and distributed reasonably well everywhere. I would also argue it's important that they have no barrier to access, because sometimes time is critical when using them. In certain train stations I've seen bathrooms that can only be used by paying a small fee, which often meant you needed to have and find precise amounts of change to go. Absolutely impractical stuff for bathrooms.

On the other, obviously maintaining bathrooms is expensive as it requires labour. You don't want your bathrooms to be completely fouled on the regular, or worse, damaged, and if they happen to be, you need money to fix them. So bathrooms aren't literally "free".

Now one possible solution would be to have "public bathroom" as a business. Nowadays you could allow entrance with a credit card (note that this doesn't solve the homeless thing, but it addresses most people's need). But IMO this isn't a particularly high value business, and on its own certainly not a good use of valuable city centre land - which goes directly against the fact that bathrooms need to be concentrated where the most people are. So this never really happens.

Another solution is to have bathrooms as part of private businesses doing other stuff (serving food/drinks) and have them charge for their use. Which is how it works now. The inadequacy lies in how for some reason these businesses charge you indirectly by asking you to buy something. This is inefficient in a number of ways: it forces you to buy something you don't really want, paying more than you would otherwise, and the provider probably still doesn't get as much as they could if they just charged a bathroom fee, since they also need the labour and ingredients to make the coffee or whatever. So why are things like this? I'm not sure - I think part of it may be that they don't just want money, they want a filter that will discourage people from using the bathroom too much, to avoid having too many bathroom goers. If that's the case, that's bad, because it means some needs will remain unfulfilled (and some people might forgo going out for too long entirely rather than risk being left without options). Part of it may be that they just identify their business as cafes and would find it deleterious to their image to explicitly provide a bathroom service. But that's a silly hangup, and one we should overcome if it causes this much trouble. Consider also that the way things are now, it's pretty hard for the cafes to enforce their rules anyway, and lots of people will just use the bathroom without asking or buying anything. Everyone loses.

Or you could simply build and maintain public bathrooms with tax money. There are solutions to the land value problem (e.g. build them as provisional structures on the sidewalk) and this removes all issues and quite a lot of unpleasantness. You could probably fund them with even just some of the sales and property tax income from the neighbourhood, and the payers would in practice see returns on this. Alternatively, you could publicly subsidize private businesses offering their bathrooms for free. Though I reckon that real public bathrooms would be better for the homeless issue, since businesses probably don't want those people in their august establishments.

Comment by dr_s on I would have shit in that alley, too · 2024-06-21T04:14:22.554Z · LW · GW

I suspect the argument that it is ridiculous comes from an intuition that the need to go to the bathroom is such a human universal that we are all accustomed to, and the knowledge that having to hold in your urine is seriously unpleasant is so universal, that it becomes a matter of basic consideration for your fellow human beings to provide them with the ability to access the bathroom in an establishment when they clearly need to.

This, and how completely unrelated specifically the "buy a coffee" thing is. It makes no sense that to satisfy need A I have to do unrelated thing B. The private version of the solution would be bathrooms I can pay to use, and those happen sometimes, but they're not a particularly common business model so I guess maybe the economics don't work out to it being a good use of capital or land.

Comment by dr_s on Safety isn’t safety without a social model (or: dispelling the myth of per se technical safety) · 2024-06-20T08:02:07.285Z · LW · GW

Technical AI safety and/or alignment advances are intrinsically safe and helpful to humanity, irrespective of the state of humanity.

 

I think this statement is weakly true, insofar as almost no misuse by humans could possibly be worse than what a completely out-of-control ASI would do. Technical safety is a necessary but not sufficient condition for beneficial AI. That said, it's also absolutely true that it's not nearly enough. Most scenarios with controllable AI still end with humanity nearly extinct IMO, with only a few people lording their AI over everyone else. Preventing that is not a merely technical challenge.

Comment by dr_s on The thing I don't understand about AGI · 2024-06-19T17:03:24.132Z · LW · GW

The impossibility of traveling faster than the speed of light was a lot less obvious in 1961.

I would argue that's questionable - relativity was understood very well in 1961, and all the physicists would have been able to roll out the obvious theoretical objections. But obviously the difficulties of approaching the speed of light (via e.g. ramscoop engine, solar sail, nuclear propulsion, etc.) are another story.

Was Concorde “inherently a bad idea”? No, but “inherently” is doing the work here. It lost money and didn’t lead anywhere, which is the criteria on which such an engineering project must be judged. It didn’t matter how glorious, beautiful or innovative it was. It’s a pyramid that was built even though it wasn’t efficient.

I guess my point is that there are objective limits and then there are cultural ones. We do most things only for the sake of making money, but as far as human cultures go we are perhaps more the exception than the rule. And in the end individuals often do the opposite - they make money to do things, things they like that play to their personal values but don't necessarily turn out a profit all the time. A different culture could have concluded that the Concorde was a success because it was awesome, and we should do more of that. In such a culture in fact the Concorde might even have been a financial success, because people would have been more willing to pay more money to witness it first hand. Since here the argument involves more the inherent limits of technology and/or science, I'd say we should be careful to separate out cultural effects. Self-sustaining Mars colonies, for example, are probably a pipe dream with current technology. But the only reason why we don't have a Moon base yet is that we don't give enough of a shit. If we cared to build one, we probably could have by now.

Comment by dr_s on Getting 50% (SoTA) on ARC-AGI with GPT-4o · 2024-06-19T15:40:20.547Z · LW · GW

I'm honestly always amazed by just how much money some people in these parts seem to have. That's a huge sum to spend on an LLM experiment. It would be pretty large even for a research group - to burn that in just 6 days!

Comment by dr_s on The thing I don't understand about AGI · 2024-06-19T06:38:42.810Z · LW · GW

TBF, was Concorde inherently "a bad idea"? Technologies have a theoretical limit and a practical one. There are deep reasons why we simply couldn't get anywhere near the speed of light by 1982 no matter how much money we poured into it, but Concorde seems more a case of "it can be done, but it's too expensive to keep safe enough, and most people won't pay such exorbitant tickets just to shave a few hours off their transatlantic trip". I don't think we can imagine such things happening with AGI, partly because its economic returns are obvious and far greater, partly because many who are racing to it have more than just economic incentives to do so - some have an almost religious fervour. Pyramids can be built even if they're not efficient.

Comment by dr_s on The thing I don't understand about AGI · 2024-06-19T06:31:31.995Z · LW · GW

I think in practice we don't know for sure - that's part of the problem - but there are various reasons to think this might be possible with vastly less complexity than the human brain. First, the task is vastly less complex than what the human brain does. The human brain does not handle only conscious rational thought; it does a bunch of other things that mean it's still firing on all cylinders even when you're unconscious. Second, lots of artificial versions of natural organs are vastly less complex than their inspiration. Cameras are vastly less complex than eyes. Plane wings are vastly less complex than bird wings. And yet these things outperform their natural counterparts. To me the essence of the reason for this is that evolution deals in compromises. It can never design just a camera. The camera must be made of organic materials, it must be self-organising and self-repairing, it must be compatible with everything else, and it must be achievable via a set of small mutations that are each as viable as or more viable than the previous one. It's all stumbling around in the dark until you hit something that works under the many, many constraints of the problem. Meanwhile, artificial intelligent design on our part is a lot more deliberate and a lot less constrained. The AI itself doesn't need to do anything more than be an AI - we'll provide the infrastructure, and we'll throw money at it to keep it viable until it doesn't need it any more, because we foresee the future and can invest in it. That's more than evolution can do, and it's a significant advantage that can compensate for a lot of complexity.

Comment by dr_s on Getting 50% (SoTA) on ARC-AGI with GPT-4o · 2024-06-19T06:22:56.652Z · LW · GW

How much of that is API costs? Seems like the most part, unless you're considering a truly exorbitant salary.

Comment by dr_s on I would have shit in that alley, too · 2024-06-19T06:10:06.922Z · LW · GW

The bathroom thing sucks in general. We honestly just need more public bathrooms, or subsidies paid to venues to keep their bathrooms fully public. I understand most businesses won't risk having to deal with the potential mess of having anyone use their bathroom, but it's ridiculous even for those who do have the money that you're supposed to buy a coffee or something to take a leak (and then in practice you can often sneak by anyway).

Comment by dr_s on in defense of Linus Pauling · 2024-06-06T10:59:40.398Z · LW · GW

Seems a restrictive definition of "utility function". It can have the weather as one of its inputs. It can have state (because really, that only means its input is not just the present but the whole past trajectory).

"Function" is an incredibly broad mathematical term.

Comment by dr_s on in defense of Linus Pauling · 2024-06-05T06:52:14.635Z · LW · GW

About the post linked, mostly agree, but I don't see the need to move away from utility maximisation as a framework. We just have a piss poor description of the utility function. "I enjoy being like that rich and successful dude" is a value.

Comment by dr_s on Just admit that you’ve zoned out · 2024-06-05T06:07:57.829Z · LW · GW

What's a solution to this problem?

Abolish the conference talk, turn everything into a giant poster session, possibly with scheduled explanations. Or use the unconference format, and everyone only talks with a table's worth of people at a time, possibly doing multiple rounds if there's interest.

Academic conferences as they work now are baaaad. No wonder people complained about them going remote for COVID - everything of value happens chatting over coffee and/or in front of posters. No one gives a shit about or gains anything from the average talk, given by some tired and inexperienced PhD student who doesn't know how to communicate well, thinks they have to jam their talk with overly technical language to be more impressive, and possibly has bad English to make things even harder to follow to boot. Absolute snoozefest, with almost no reach outside of the very narrow group of hyper-specialists already studying the same topic.

Comment by dr_s on MIRI 2024 Communications Strategy · 2024-06-03T09:24:46.616Z · LW · GW

(I also think AIs will probably be conscious in a way that's morally important, in case that matters to you.)

I don't think that's either a given or something we can ever know for sure. "Handing off" the world to robots and AIs that for all we know might be perfect P-zombies doesn't feel like a good idea.

Comment by dr_s on MIRI 2024 Communications Strategy · 2024-06-03T09:22:31.675Z · LW · GW

I don't think the Gulf Stream can collapse as long as the Earth spins, I guess you mean the AMOC?

Comment by dr_s on MIRI 2024 Communications Strategy · 2024-06-03T09:20:53.639Z · LW · GW

I think the core concerns remain, and more importantly, other rather doom-y scenarios have opened up involving AI systems more similar to the ones we have, which aren't the straight-up singleton ASI foom. The problem here is IMO not "this specific doom scenario will become a thing" but "we don't have anything resembling a GOOD vision of the future with this tech that we are nevertheless developing at breakneck pace". Meanwhile, the number of possible dystopian or apocalyptic scenarios is enormous. Part of this is "what if we lose control of the AIs" (singleton or multipolar), part of it is "what if we fail to structure our society around having AIs" (loss of control, mass wireheading, and a lot of other scenarios I'm not sure how to name). The only positive vision the "optimists" on this have to offer is "don't worry, it'll be fine, this clearly revolutionary and never-seen-before technology that puts in question our very role in the world will play out the same way every invention ever did". And that's not terribly convincing.

Comment by dr_s on MIRI 2024 Communications Strategy · 2024-06-03T09:10:28.353Z · LW · GW

We think audiences are numb to politics as usual. They know when they’re being manipulated. We have opted out of the political theater, the kayfabe, with all its posing and posturing. We are direct and blunt and honest, and we come across as exactly what we are.

 

This is IMO a great point, and true in general. I think "the meta" is sort of shifting and it's the guys who try too hard to come off as diplomatic who are often behind the curve. This has good and bad sides (sometimes it means that political extremism wins out over common sense simply because it's screechy and transgressive), but overall I think you got the pulse right on it.

Comment by dr_s on How do you shut down an escaped model? · 2024-06-03T08:52:34.209Z · LW · GW

I honestly don't think shutting it down on AWS would be the hard part, if it's clearly identifiable. To sum it up:

  • if it's doing anything illegal (like hacking or engaging in insider trading) for a quick buck, it can be obviously taken down;
  • if it's doing anything that can be reasonably construed as a threat to US national security, then it better be taken down, or else.

That leaves us with a rogue ARA that is regardless entirely on the straight and narrow, playing the good kid and acting essentially like a perfectly honest company, making money legally, which is then entirely defensible for Amazon to not shut down despite the complaints. And even still, it's not like Amazon couldn't shut it down entirely at its whim if they had reason to. If they thought it's bad publicity (and hosting a totally-not-suspicious autonomous AI that might or might not be scheming to take over the world seems like terrible publicity), they can shut it down. If it causes their relationship to other companies (like the social media the AI is probably flooding with ToS-violating botnets right now) to sour, they can shut it down. See for example how app stores and many websites are essentially purging everything remotely lewd because payment processors don't want to be seen supporting that stuff, and every business is downstream of payment processors. You don't have to convince Amazon that AI is dangerous, you have to convince VISA and Mastercard, and the rest will follow suit.

If everything else fails, and if the US government doesn't yet feel threatened enough to go "screw it" and roll in the SWAT teams anyway, there's always the option of legal loopholes. For example, if the AI was trained on copyrighted material (which it almost certainly was), you can probably invoke anti-piracy laws. I would need a legal expert to pitch in, but I can imagine you might not even need to win such a lawsuit - you might manage to get the servers put under seizure just by raising it at all.

IMO dangerous ARAs would need to be some degree of sneaky, using backups in consumer hardware and/or collaborators. Completely loner agents operating off AWS or similar services would have a clear single point of failure.

Comment by dr_s on We might be dropping the ball on Autonomous Replication and Adaptation. · 2024-06-02T12:53:54.761Z · LW · GW

The problem is that as usual people will worry that the NatSec guys are using the threat to try to slip us the pill of additional surveillance and censorship for political purposes - and they probably won't be entirely wrong. We keep undermining our civilizational toolset by using extreme measures for trivial partisan stuff and that reduces trust.

Comment by dr_s on We might be dropping the ball on Autonomous Replication and Adaptation. · 2024-06-02T12:50:27.445Z · LW · GW

I honestly don't think ARA immediately and necessarily leads to overall loss of control. It would in a world that also had widespread robotics. What it would potentially be, however, is a cataclysmic event for the Internet and the digital world, possibly on par with a major solar flare, which is bad enough. Destruction of trust, cryptography, banking system belly up, IoT devices and basically all systems possibly compromised. We'd look at old computers that had been disconnected from the Internet from before the event the way we do at pre-nuclear steel. That's in itself bad and dangerous enough to worry about, and far more plausible than outright extinction scenarios, which require additional steps.

Comment by dr_s on The Pearly Gates · 2024-06-02T12:33:58.954Z · LW · GW

Yeah I think the idea is "I get the point you moron, now stop speaking so loud or the game's up."

Comment by dr_s on robo's Shortform · 2024-06-01T21:34:00.262Z · LW · GW

It's not that people won't talk about spherical policies in a vacuum, it's that the actual next step of "how does this translate into actual politics" is forbidding. Which is kind of understandable, given that we're probably not very peopley persons, so to speak, inclined to high decoupling, and politics can objectively get very stupid.

In fact my worst worry about this idea isn't that there wouldn't be consensus, it's how it would end up polarising once it's mainstream enough. Remember how COVID started as a broad "Let's keep each other safe" reaction and then immediately collapsed into idiocy as soon as worrying about pesky viruses became coded as something for liberal pansies? I expect with AI something similar might happen, not sure in what direction either (there's a certain anti-AI sentiment building up on the far left but ironically it denies entirely the existence of X-risks as a right wing delusion concocted to hype up AI more). Depending on how those chips fall, actual political action might require all sorts of compromises with annoying bedfellows.

Comment by dr_s on Hardshipification · 2024-05-29T06:51:25.083Z · LW · GW

I mean, if a mere acquaintance told me something like that I don't know what I'd say, but it wouldn't be an offer to "talk about it" right away - I wouldn't feel like I'd enjoy talking about it with a near stranger, so I'd expect the same applies to them. It's one of those prefab reactions that don't really hold much water upon close scrutiny.

Comment by dr_s on Hardshipification · 2024-05-29T06:48:22.848Z · LW · GW

I find that rather adorable

In principle it is, but I think people do need some self awareness to distinguish between "I wish to help" and "I wish to feel like a person who's helping". The former requires focusing more genuinely on the other, rather than going off a standard societal script. Otherwise, if your desire to help ends up merely forcing the supposedly "helped" person to entertain you, after a while you'll effectively be perceived as a nuisance, good intentions or not.

Comment by dr_s on Hardshipification · 2024-05-29T06:45:09.070Z · LW · GW

Hard agree. People might be traumatised by many things, but you don't really want to convince them they should be traumatised, or define their identity about trauma (and then possibly insist that if they swear up and down they aren't that just means they're really repressing or not admitting - this has happened to me). That only increases the suffering! If they're not traumatised, great - they dodged a bullet! It doesn't mean that e.g. sex assault is less bad - the same way shooting someone isn't any less bad just because you happened to miss their vital organs (ok, so actually the funny thing is I guess that attempted murder is punished less than actual murder... but morally speaking, I'd say how good a shot you are has no relevance).

Comment by dr_s on How to get nerds fascinated about mysterious chronic illness research? · 2024-05-28T06:36:58.984Z · LW · GW

The thing is, it's hard to come up with ways to package the problem. I've tried doing small data science efforts for lesser chronic problems on myself and my wife, recording the kind of biometric indicators that were likely to correlate with our issues (e.g. food diaries vs symptoms) and it's still almost impossible to suss out meaningful correlations unless it's something as basic as "eating food X causes you immediate excruciating pain". In a non laboratory setting, controlling environmental conditions is impossible. Actual rigorous datasets, if they exist at all, are mostly privacy protected. Relevant diagnostic parameters are often incredibly expensive and complex to acquire, and possibly gatekept. The knowledge aspect is almost secondary IMO (after all, in the end, lots of recommendations your doctor will give you are still little more than empirical fixes someone came up with by analysing the data, mechanistic explanations don't go very far when dealing with biology). But even the data science, which would be doable by curious individuals, is forbidding. Even entire fields of actual, legitimate academia are swamped in this sea of noisy correlations and statistical hallucinations (looking at you, nutrition science). Add to that the risk of causing harm to people even if well meaning, and the ethical and legal implications of that, and I can see why this wouldn't take off. SMTM's citizen research on obesity seems the closest I can think of, and I've heard plenty of criticism of it and its actual rigour.

Comment by dr_s on Applying refusal-vector ablation to a Llama 3 70B agent · 2024-05-23T06:27:19.151Z · LW · GW

It doesn't change much, it still applies anyway because when talking about hypothetical really powerful models, ideally we'd want them to follow very strong principles regardless of who asks. E.g. if an AI was in charge of a military obviously it wouldn't be open, but it shouldn't accept orders to commit war crimes even from a general or a president.

Comment by dr_s on Stephen Fowler's Shortform · 2024-05-19T22:35:40.398Z · LW · GW

I'm not sure if those are precisely the terms of the charter, but that's beside the point. It is still "private" in the sense that there is a small group of private citizens who own the thing and decide what it should do, with no political accountability to anyone else. As for the "non-profit" part, we've seen what happens to that as soon as it's in the way.

Comment by dr_s on Stephen Fowler's Shortform · 2024-05-19T13:29:29.106Z · LW · GW

Aren't these different things? Private yes, for profit no. It was private because it's not like it was run by the US government.

Comment by dr_s on Stephen Fowler's Shortform · 2024-05-19T13:23:46.033Z · LW · GW

I think there's a solid case for anyone who supported funding OpenAI being considered at best well-intentioned but very naive. The idea that we should align and develop superintelligence but, like, good, has always been a blind spot in this community - an obviously flawed but attractive goal, because it dodged the painful choice between extinction risk and abandoning hopes of personally witnessing the singularity, or at least a post-scarcity world. This is also a case where people's politics probably affected them, because plenty of others would be instinctively distrustful of corporation-driven solutions to anything - it's something of a Godzilla Strategy, after all, since aligning corporations is also an unsolved problem - but those with an above-average level of trust in free markets weren't so averse.

Such people don't necessarily have conflicts of interest (though some may, and that's another story) but they at least need to drop the fantasy land stuff and accept harsh reality on this before being of any use.

Comment by dr_s on D&D.Sci (Easy Mode): On The Construction Of Impossible Structures · 2024-05-18T06:52:39.409Z · LW · GW

I admit it cheats the spirit of the challenge a bit, but in practice, I guess it's the round amount that makes me suspect it might be intentional. But it's true there doesn't seem to be a broader materials-related pattern, so it may just be as you say.

Comment by dr_s on D&D.Sci (Easy Mode): On The Construction Of Impossible Structures · 2024-05-17T18:49:08.947Z · LW · GW

I find a pattern: buildings using Dreams together with either Wood or Silver have an 80% chance of being Impossible when made by a Self-Taught architect. But honestly this seems irrelevant, since the other two types of background are a 100% guarantee, so they're better value for money anyway.
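The kind of check behind this is just conditional frequencies per (background, materials) group. A minimal sketch, with invented records and a hypothetical "Academy" background standing in for the other backgrounds (the real D&D.Sci dataset has its own columns and values):

```python
# Hypothetical sketch: fraction of buildings marked Impossible per
# (architect background, materials) group. Records are invented.
from collections import defaultdict

buildings = [
    {"background": "Self-Taught", "materials": ("Dreams", "Wood"),   "impossible": True},
    {"background": "Self-Taught", "materials": ("Dreams", "Wood"),   "impossible": False},
    {"background": "Academy",     "materials": ("Dreams", "Silver"), "impossible": True},
    {"background": "Academy",     "materials": ("Stone",),           "impossible": False},
]

counts = defaultdict(lambda: [0, 0])  # (background, materials) -> [impossible, total]
for b in buildings:
    key = (b["background"], b["materials"])
    counts[key][0] += b["impossible"]
    counts[key][1] += 1

for (bg, mats), (imp, tot) in counts.items():
    print(bg, "+".join(mats), f"{imp}/{tot} = {imp / tot:.0%}")
```

On the real data you'd run this over every background/material combination and look for the groups whose rate stands out.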

Comment by dr_s on introduction to cancer vaccines · 2024-05-15T12:54:40.056Z · LW · GW

Fair; it depends how hard that is to do, though. I assumed inserting a target gene would be easier than triggering death in a cell that has probably hopelessly broken its apoptosis mechanism.

Comment by dr_s on introduction to cancer vaccines · 2024-05-15T08:30:07.824Z · LW · GW

Question: would it be possible to use retroviruses to selectively target cancer cells, insert a gene that expresses a target protein, and then do monoclonal antibody treatment against that protein? Would the cancer's accelerated metabolism make this any good?

Comment by dr_s on Against Student Debt Cancellation From All Sides of the Political Compass · 2024-05-14T20:06:35.462Z · LW · GW

Albeit with wilder swings, current 80-year-olds in the US lived and worked through some of the years of highest GDP growth ever. That's not necessarily reproducible. In addition, one's net worth isn't just a linear function of the integral of GDP throughout their life. For example, being able to buy a house early is a big boost, because now you have capital that appreciates, possibly faster than the interest on your debt accrues. Meanwhile, if you have to rent, your money disappears down a black hole. Guess what's a big difference between Boomers and Gen Z.
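A back-of-envelope illustration of the buy-early effect: if the asset appreciates faster than the debt accrues interest, the buyer ends up far ahead of a renter with comparable outflows. Every number here (price, rates, rent, the non-amortizing loan) is invented for the arithmetic, not a claim about any real market.

```python
# Toy buy-vs-rent comparison over 30 years; all parameters are invented.
price         = 200_000   # house price
appreciation  = 0.05      # annual house appreciation
mortgage_rate = 0.03      # annual interest on the debt
rent          = 12_000    # annual rent, assumed flat for simplicity
years         = 30

# Buyer: holds an appreciating asset, pays simple interest on a
# non-amortizing loan (a deliberate simplification).
house_value   = price * (1 + appreciation) ** years
interest_paid = price * mortgage_rate * years
buyer_net     = house_value - price - interest_paid  # equity gained minus interest

# Renter: rent is pure outflow, no asset at the end.
renter_net = -rent * years

print(f"buyer:  {buyer_net:+,.0f}")
print(f"renter: {renter_net:+,.0f}")
```

The gap comes from compounding on the asset side versus linear outflows on the rent side, which is the asymmetry the comment points at.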

Comment by dr_s on Against Student Debt Cancellation From All Sides of the Political Compass · 2024-05-14T10:59:04.192Z · LW · GW

Only if you believe this is a natural, stationary progression. In practice, it very likely is not, and current 20-year-olds won't be as rich as current 80-year-olds merely by managing to survive another 60 years.

Comment by dr_s on Against Student Debt Cancellation From All Sides of the Political Compass · 2024-05-14T09:21:28.528Z · LW · GW

I'm skeptical a humanities education doesn't show up in earnings.

The question is more about whether a humanities degree does. It may be that the humanities "genius" is not something you can successfully catch in a bottle. After all, the most successful authors don't usually come out of a special Author College. An employer might theoretically appreciate the talent without thinking it significantly correlates with any one degree. And on the other hand, someone like Steve Jobs certainly had quite a bit of this knack - design and branding require artistic sensibility - yet he's mainly seen as a STEM figure.

If it's boredom, better to subsidize the YouTubers, podcasters, and TikTokers than the colleges

The problem with this is that there absolutely are plenty of humanities studies that require time, impartiality, and rigour, and that sort of format has all the wrong incentives for them. I think in many ways the subdivision is somewhat artificial. History or philology, for example, are, much like the natural sciences, digging towards one truth that theoretically exists but is accessible only through indirect evidence. They're not creative, artistic, or particularly subjective pursuits. "Human sciences" would be a more appropriate name for them.

Comment by dr_s on Against Student Debt Cancellation From All Sides of the Political Compass · 2024-05-14T06:21:58.092Z · LW · GW

I think debt cancellation would make sense as a sort of amnesty if it came together with some kind of reform aimed at preventing the situation from repeating in the future, whatever that may be. Otherwise, it's just a one-off with the downsides you mention.

The problem is that fundamentally the argument is that humanities studies have positive externalities that aren't reflected in the salaries of their graduates. I don't dismiss this argument, though I think with the humanities a lot of the value is provided by the very top percentile (e.g. a handful of very capable historians will write books that will be read by millions; most others will contribute very little unless they teach). In that sense there may be a need to subsidize humanities degrees, but that might be best done in the long run with things like fully paid bursaries for deserving candidates. There's also a problem of evaluation, because if you push such an argument you must accept some political accountability, and right now the humanities are often terrible at making a case for themselves. Every discussion about this I see tends to degenerate into "you can't appreciate our sophisticated knowledge, you bumpkins, but somehow studying humanities makes you a Better Person, so just accept it and thank us for our existence", which isn't terribly persuasive. And at the very least, that the experts in the subjects most closely associated with rhetoric and the understanding of human nature are so awful at persuasion is in itself concerning.

Comment by dr_s on Linch's Shortform · 2024-05-13T16:58:26.690Z · LW · GW

You're right, but while those heuristics of "better safe than sorry" might be too conservative for some fields, they're pretty spot on for powerful AGI, where the dangers of failure vastly outstrip opportunity costs.