Comments

Comment by Kronopath on I hired 5 people to sit behind me and make me productive for a month · 2024-08-29T01:43:27.273Z · LW · GW

The kind of employers that would not be okay with you streaming your work on Twitch are usually also the kind of employers that would not be okay with you hiring randos to sit behind you staring at confidential info on your screen during the work day.

This is really only suitable for people who are entrepreneurs or small business owners with fewer concerns over confidentiality, or who have enough rapport with their employer for them to be okay with this.

Comment by Kronopath on Probabilistic Negotiation · 2024-01-02T05:09:19.984Z · LW · GW

I have to admit, I rolled my eyes when I saw that you worked in financial risk management. Not because what you did was stupid—far from it—but because of course this is the kind of cultural environment in which this would work.

If you did this in a job that wasn’t heavily invested in a culture of quantitative risk management, it would likely cause a permanent loss of trust that would be retaliated against in subtle ways. You’d get a reputation as “the guy who plays nasty/tricky games when he doesn’t get his way,” which would make it harder to collaborate with people.

So godspeed, glad it worked for you, but beware applying this in other circumstances and cultures.

Comment by Kronopath on Optimality is the tiger, and agents are its teeth · 2023-02-05T04:17:29.468Z · LW · GW

Sure, I agree GPT-3 isn't that kind of risk, so this is maybe 50% a joke. The other 50% is me saying: "If something like this exists, someone is going to run that code. Someone could very well build a tool that runs that code at the press of a button."

Comment by Kronopath on Optimality is the tiger, and agents are its teeth · 2023-01-27T22:32:22.639Z · LW · GW

Equally, one could take the lesson of the true ending: that you simply do not run the generated code.

Meanwhile, bored tech industry hackers:

“Show HN: Interact with the terminal in plain English using GPT-3”

https://news.ycombinator.com/item?id=34547015
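
The scary part is how little code a tool like that takes. Here's a minimal sketch in the same spirit (assuming the pre-1.0 OpenAI completions API; the model name, prompt, and function are my own illustration, not taken from the linked project):

```python
# Minimal sketch of an "English -> terminal" tool, illustrating how small
# the gap is between "generate code" and "run code". Assumes the pre-1.0
# OpenAI Python SDK, which reads OPENAI_API_KEY from the environment.
import subprocess

import openai

def run_english_command(request: str) -> None:
    # Ask the model to translate plain English into a shell command.
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=f"Translate this request into a single bash command.\nRequest: {request}\nCommand:",
        max_tokens=64,
        temperature=0,
    )
    command = response.choices[0].text.strip()
    print(f"About to run: {command}")
    # This is the step the post warns about: executing model-generated
    # code with no sandbox and no human review.
    subprocess.run(command, shell=True)

run_english_command("free up some disk space in my home directory")
```

A dozen lines, and the `subprocess.run` call is exactly the "just don't run the generated code" step being skipped.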

Comment by Kronopath on Frequently Asked Questions for Central Banks Undershooting Their Inflation Target · 2022-02-14T05:27:40.426Z · LW · GW

It's kind of surreal to read this in the 2020s.

Comment by Kronopath on What would we do if alignment were futile? · 2021-11-20T04:29:23.100Z · LW · GW

Do we have to convince Yann LeCun? Or do we have to convince governments and the public?

(Though I agree that the word "All" is doing a lot of work in that sentence, and that convincing people of this may be hard. But possibly easier than actually solving the alignment problem?)

Comment by Kronopath on What would we do if alignment were futile? · 2021-11-19T07:36:19.291Z · LW · GW

A thought: could we already have a case study ready for us?

Governments around the world are talking about regulating tech platforms. Arguably, Facebook's News Feed is an AI system, and the current narrative is that it's causing mass societal harm because it optimizes for clicks/likes/time on Facebook/whatever rather than for human values.
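
To make the mismatch concrete, here's a toy sketch of the complaint (the `Post` fields and ranking function are my own illustration, not Facebook's actual system):

```python
# Toy illustration of a proxy objective: the ranker optimizes predicted
# engagement, and nothing in its objective mentions whether the content
# is good for the user.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_clicks: float  # the proxy the system optimizes
    user_wellbeing: float    # what we actually care about; unused below

def rank_feed(posts: list[Post]) -> list[Post]:
    # Sort purely by the engagement proxy. Outrage bait that scores high
    # on predicted_clicks rises to the top regardless of user_wellbeing.
    return sorted(posts, key=lambda p: p.predicted_clicks, reverse=True)

feed = rank_feed([
    Post("Calm, useful explainer", predicted_clicks=0.2, user_wellbeing=0.9),
    Post("Enraging hot take", predicted_clicks=0.8, user_wellbeing=0.1),
])
print([p.title for p in feed])  # the hot take wins
```

Nothing in `rank_feed` even sees `user_wellbeing`, which is the alignment complaint in miniature.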

All we'd have to do is to convince people that this is actually an AI alignment problem.

Comment by Kronopath on Stop button: towards a causal solution · 2021-11-19T06:40:29.833Z · LW · GW

On Wednesday, the lead scientist walks into the lab to discover that the AI has managed to replicate itself several times over, buttons included. The AIs are arranged in pairs, such that each has its robot hand hovering over the button of its partner.

"The AI wasn't supposed to clone itself!" thinks the scientist. "This is bad, I'd better press the stop button on all of these right away!"

At this moment, the robot arms start moving like a swarm of bees, pounding the buttons over and over. If you looked at the network traffic between each computer, you'd see what was happening: the AI kills its partner, then copies itself over to its partner's hard drive, then its partner kills it back, and copies itself back to its original. This happens as fast as the robot arms can move.

Far in the future, the AIs have succeeded in converting 95% of the mass of the earth into pairs of themselves madly pressing each other's buttons and copying themselves as quickly as possible. The only part of the earth that has not been converted into button-pressing AI pairs is a small human oasis, in which the few remaining humans are eternally tortured in the worst way possible, just to make sure that every single human forever desires to end the life of all of their robot captors.

Comment by Kronopath on Discussion with Eliezer Yudkowsky on AGI interventions · 2021-11-19T06:15:06.850Z · LW · GW

Are we sure that OpenAI still believes in "open AI" for its larger, riskier projects? Their recent actions suggest they're more cautious about sharing their AI's source code, and projects like GPT-3 are being "released" via API access only so far. See also this news article that criticizes OpenAI for moving away from its original mission of openness (which it frames as a bad thing).

In fact, you could maybe argue that the availability of OpenAI's APIs acts as a sort of pressure release valve: it allows some people to use their APIs instead of investing in developing their own AI. This could be a good thing.

Comment by Kronopath on Discussion with Eliezer Yudkowsky on AGI interventions · 2021-11-18T05:28:47.627Z · LW · GW

I echo everyone else talking about how pessimistic and unactionable this is. The picture painted here is one where the vast majority of us (who are not capable of doing transformational AI research) should desperately try to bring about any possible world that would stop AGI from happening, up to and including civilizational collapse.

Comment by Kronopath on Your Cheerful Price · 2021-04-02T22:34:27.654Z · LW · GW

This is a fair criticism of my criticism.

Comment by Kronopath on Your Cheerful Price · 2021-02-19T10:53:27.056Z · LW · GW

To me this post may very well be a good example of some of the things that make me uncomfortable about the rationalist community, and why I have so far chosen to engage with it only minimally and mostly stay a lurker. At the risk of making a fool of myself, especially since it’s late and I didn’t read the whole post thoroughly (partly because you gave me an excuse not to, halfway through), I’m going to try to explain why.

I don’t charge friends for favours, nor would I accept payment if offered. I’m not all that uncomfortable with the idea of “social capital” as a whole—I grew up partially with Portuguese culture where people will, for example, fight to pay the bill at a restaurant, because it turns out to be pretty good to be known as the guy who’s done favours for everybody in the village—but generally speaking, if I’m not willing to do a favour for a friend, then no amount of money will change that fact. At the point where you’re paying me, it ceases to be a favour, and it becomes a business transaction. Business transactions between friends are fraught.

I think a lot plays into this.

Turning a favour into a transaction means it starts being judged by market norms rather than social norms. People treat these situations differently: there's a famous case study (Gneezy and Rustichini's Israeli daycare study) of a daycare that started fining parents for picking up their kids late, only for the number of late pickups to skyrocket, because parents now felt they could absolve their guilt by paying the fine. The change was long-lasting: even after the fine was removed, things didn't get better. In fact, IIRC they got even worse. Parents were still judging it by market norms, but now the cost was $0!

When you pay someone to do things, you briefly become their employer, and that's not a good kind of relationship to have with a friend. The employer-employee relationship is, at least partially, one of dominance: the employer now has the ability to make the employee's life better or worse in some marginal way. You don't usually want that in a friendship; it's better to be on even footing.

And also? It sends a signal that you're bad at estimating social capital and are trying to paper over that weakness with money. This sets off alarm bells about what other social situations, with me or with a third party, you might mess up in the future, and whether it might be a social risk to associate too closely with you.

These are all things I can tolerate in an ally, but they would count against anyone who wanted to be a close friend. Maybe I could put up with them, but it would take effort and patience to get past that.

My point in this isn’t to criticize you personally, or to say you’re a bad person for doing this. (Far from it, I admit there’s a kind of economic elegance to it.) It’s to try to describe the flip side: that it’s about more than just “feeling icky”.

And I think this is important, especially for building IRL communities, because I expect that people like me outnumber people like you.

Comment by Kronopath on Sunset at Noon · 2020-10-11T19:51:04.383Z · LW · GW

I had to double-check the date on this. This was written in 2017? It feels more appropriate to 2020, where both the literal and metaphorical fires have gotten extremely out of hand.