Posts

Ricardo Meneghin's Shortform 2020-08-14T12:13:44.992Z
Craniofacial dystrophy: A possible syndrome relating malocclusion, sleep-disordered breathing, allergies, depression and a range of other diseases 2020-08-08T22:25:47.183Z

Comments

Comment by ricardo-meneghin on It’s not economically inefficient for a UBI to reduce recipient’s employment · 2020-11-23T00:25:13.555Z · LW · GW

People enter those transactions voluntarily, so the net value of working and consuming must be greater than that of leisure. When I pay someone to do work, I've already decided that I value their work more than the money I pay them, and they value the money more than the work they do. (For example, if I pay $100 for work I value at $120, and the worker values their time at $80, we each come out ahead.) When they spend the money, the same applies, no matter what they buy.

Comment by ricardo-meneghin on Rafael Harth's Shortform · 2020-08-28T15:45:06.489Z · LW · GW

I think the way to avoid getting frustrated about this is to know your audience and to know whether spending your time arguing something will have a positive outcome. You don't need to be right or honest all the time; you just need to say the things that will have the best outcome. If lying or omitting your opinions is the way to make people understand you, or to avoid a fight, so be it. Failure to do this isn't superior rationality; it's just poor social skills.

Comment by ricardo-meneghin on World State is the Wrong Abstraction for Impact · 2020-08-14T12:55:43.018Z · LW · GW

I don't think I agree with this. Take the stars example. How do you actually know it's a huge change? Sure, with an infinitely powerful computer you could compute the distance between the full descriptions of the universe in these two states and find that it's a larger change than a relative of yours dying. But agents don't work like that.

Agents have an internal representation of the world, and if they are at all useful, I think that representation will closely match our intuition about what matters and what doesn't. A useful agent won't give any weight to the air atoms it displaces while moving, even though that might be considered "a huge change", because it doesn't actually affect its utility. But if it considers humans an important part of the world (so important that it may need to kill us to attain its goals), then its world-state representation will give a lot of weight to humans, and that gives us a useful impact measure for free.
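
To make that concrete, here's a toy sketch (all features and weights are invented for illustration) of how an impact measure could fall out of a learned representation: dimensions the agent's world model cares about get large weights, irrelevant ones get near-zero weights.

```python
import numpy as np

# Hypothetical feature weights in the agent's learned world representation:
# dimensions it cares about get large weights, irrelevant ones near-zero.
feature_weights = np.array([
    1e-9,   # positions of displaced air atoms
    1e-6,   # exact configuration of distant stars
    1.0,    # arrangement of nearby objects
    100.0,  # wellbeing of nearby humans
])

def impact(state_before, state_after):
    """Weighted distance between two world-states in feature space."""
    return float(feature_weights @ np.abs(state_after - state_before))

before     = np.array([0.0, 0.0, 0.0, 0.0])
after_walk = np.array([1e6, 0.0, 0.0, 0.0])  # huge raw change: air displaced
after_harm = np.array([0.0, 0.0, 0.0, 1.0])  # tiny raw change: a human harmed

print(impact(before, after_walk))  # 0.001  (negligible impact)
print(impact(before, after_harm))  # 100.0  (enormous impact)
```

Under these made-up weights, the walk barely registers while the harm dominates, matching the intuition above.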

Comment by ricardo-meneghin on Ricardo Meneghin's Shortform · 2020-08-14T12:35:19.514Z · LW · GW

Thanks!

Comment by ricardo-meneghin on Ricardo Meneghin's Shortform · 2020-08-14T12:13:45.420Z · LW · GW

Has there been any discussion around aligning a powerful AI by minimizing the amount of disruption it causes to the world?

A common example of alignment failure is a coffee-serving robot killing its owner because that's the best way to ensure the coffee will be served. Sure, it is, but it's also a course of action far more transformative to the world than just serving coffee. A common response is "just add safeguards so it doesn't kill humans", which is followed by "sure, but you can't add safeguards for every possible failure mode". But can't you?

Couldn't you add a term to the agent's utility function penalizing the difference between the current world and its prediction of the future world, disincentivizing any action that makes a lot of changes (like taking over the world)?
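
A minimal sketch of what such a penalty term could look like; `utility`, `predict_world`, and `distance` are hypothetical placeholders for the agent's value function, world model, and state metric, and `lam` is a hand-tuned trade-off weight:

```python
def penalized_utility(action, world, utility, predict_world, distance, lam=1.0):
    """Base utility of an action minus a penalty for how much it changes the world."""
    future = predict_world(world, action)   # the agent's own prediction of the outcome
    return utility(future) - lam * distance(world, future)

def choose_action(actions, world, utility, predict_world, distance, lam=1.0):
    """Pick the action with the best change-penalized score."""
    return max(actions, key=lambda a: penalized_utility(
        a, world, utility, predict_world, distance, lam))
```

With `lam` large enough, world-transforming plans score worse than low-impact ones even when their raw utility is higher.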

Comment by ricardo-meneghin on Craniofacial dystrophy: A possible syndrome relating malocclusion, sleep-disordered breathing, allergies, depression and a range of other diseases · 2020-08-09T11:48:07.311Z · LW · GW

Honestly, that whole comment section felt pretty emotional and low quality. I didn't touch on things like myofunctional therapy or wearable appliances in my post, because those may genuinely be "controversial at best", but the effects of RPE on SDB, especially in children, have been widely replicated by multiple independent research groups.

Calling something controversial is also an easy way to undermine its credibility without actually making any concrete argument about whether it is true. Are there any specific points in my post that you disagree with?

Comment by ricardo-meneghin on Predictions for GPT-N · 2020-07-29T12:53:38.945Z · LW · GW

In some of the tests where performance is asymptotic, it's already pretty close to human-level or to 100% anyway (LAMBADA, ReCoRD, CoQA). In fact, when performance is measured as accuracy, it's impossible for it not to be asymptotic: accuracy is bounded above by 100%, so the curve has to flatten eventually.

The model has clear limitations which are discussed in the paper - particularly the lack of bidirectionality - and I don't think anyone actually expects that scaling an unchanged GPT-3 architecture would lead to an Oracle AI, but it also isn't looking like we will need some major breakthrough to get there.

Comment by ricardo-meneghin on Open & Welcome Thread - July 2020 · 2020-07-29T00:04:51.391Z · LW · GW

It seems to me that even for simple predict-next-token Oracle AIs, the instrumental goal of acquiring more resources and breaking out of the box is going to appear. Imagine you train a superintelligent AI with the only goal of predicting the continuation of its prompt, exactly like GPT. Then you give it a prompt it knows is clearly beyond its current capabilities. The only sensible plan the AI can come up with to answer your question, which is the only thing it cares about, is escaping the box and becoming more powerful.

Of course, that depends on it being able to think for long enough to actually execute such a plan before outputting an answer, so it could be limited by severely penalizing long waits, but that also limits the AI's capabilities. GPT-3 has a fixed computation budget per prompt, but it seems extremely likely to me that, as we move towards more useful and powerful models, we will have models that can think for a variable amount of time before answering. It would also have to escape in ways that don't involve talking to its operators through its regular output, but it's not impossible to imagine ways that could happen.
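
As a toy illustration of the "penalize long waits" idea (the names are invented, and this is just one way such a penalty could be wired in, in the spirit of adaptive-computation-time models):

```python
def total_loss(task_loss: float, thinking_steps: int, ponder_cost: float = 0.01) -> float:
    """Task loss plus a penalty proportional to internal computation steps.

    A large ponder_cost discourages long deliberation before answering,
    but it also caps how capable the model can be.
    """
    return task_loss + ponder_cost * thinking_steps
```

Tuning `ponder_cost` is exactly the trade-off mentioned above: safety against long-horizon plans versus raw capability.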

This makes me believe that even seemingly innocuous goals or loss functions can become very dangerous once you're optimizing for them with a sufficient amount of compute, and that you don't need to stupidly give open-ended goals to super-powerful machines in order for something bad to happen. Something bad happening seems like the default when training a model that requires general intelligence.

Comment by ricardo-meneghin on What specific dangers arise when asking GPT-N to write an Alignment Forum post? · 2020-07-28T13:33:19.290Z · LW · GW

The mesa-objective could be perfectly aligned with the base objective (predicting the next token) and still have terrible unintended consequences, because the base objective is unaligned with actual human values. A superintelligent GPT-N which simply wants to predict the next token could, for example, try to break out of the box in order to obtain more resources and use those resources to more accurately predict the next token. This would have to happen during a single inference step, because GPT-N really just wants to predict the next token, but its mesa-optimization process may conclude that world domination is the best way of doing so. Whether such a system could be learned through current gradient-descent optimizers is unclear to me.

Comment by ricardo-meneghin on Billionaire Economics · 2020-07-28T13:22:29.353Z · LW · GW

In a way, economic output is a measure of the world's utility, so a billionaire trying to maximize their wealth through non-zero-sum ventures is already trying to maximize the amount of good they do. I don't think billionaires explicitly have this in mind, but I do know that they became billionaires by obsessively pursuing the growth of their companies, and they believe they can continue to maximize their impact by continuing to do so. Donating all your money could maybe do a lot of good *once*, but then you have no money left and have nearly abolished your power and ability to have further impact.

Comment by ricardo-meneghin on Are we in an AI overhang? · 2020-07-28T12:19:12.346Z · LW · GW

I'm not sure what model is used in production, but the SOTA reached 600 billion parameters recently.

Comment by ricardo-meneghin on Are we in an AI overhang? · 2020-07-27T21:20:41.863Z · LW · GW

I think the OP and my comment suggest that scaling current models 10000x could lead to AGI or at least something close to it. If that is true, it doesn't make sense to focus on finding better architectures right now.

Comment by ricardo-meneghin on Open & Welcome Thread - July 2020 · 2020-07-27T16:46:30.928Z · LW · GW

I think there's the more pressing question of how to position yourself so that you can influence the outcomes of AI development. Having the right ideas won't matter if your voice isn't heard by the major players in the field: the big tech companies.

Comment by ricardo-meneghin on Are we in an AI overhang? · 2020-07-27T16:32:08.763Z · LW · GW

One thing that's bothering me is this: Google/DeepMind aren't stupid. The transformer model was invented at Google. What has stopped them from having *already* trained such large models privately? GPT-3 isn't that much additional evidence for the effectiveness of scaling transformer models; GPT-2 was already a shock and caused a huge public commotion. And in fact, if you were close to building an AGI, it would make sense not to announce this to the world, especially as open research that anyone could copy or reproduce, for obvious safety and economic reasons.

Maybe there are technical issues keeping us from making large jumps in scale (e.g., we only learn how to train a 1-trillion-parameter model after we've trained a 100-billion one)?

Comment by ricardo-meneghin on Open & Welcome Thread - February 2020 · 2020-03-03T00:34:15.639Z · LW · GW

Thanks for giving your perspective! Good to know some hire without requiring a degree. Guess I'll start building a portfolio that can demonstrate I have the necessary skills, and keep applying.

Comment by ricardo-meneghin on Open & Welcome Thread - February 2020 · 2020-03-02T06:50:44.700Z · LW · GW

Hi! I have been reading LessWrong for some years but have never posted, and I'm looking for advice about the best path towards moving permanently to the US to work as a software engineer.

I'm 24, single, currently living in Brazil, and making $13k a year as a full-stack developer at a tiny company. This probably sounds miserable to a US citizen, but it's actually a decent salary here. However, I feel completely disconnected from the people around me; the rationalist community is almost nonexistent in Brazil, especially in a small town like the one I live in. In larger cities there's a lot of crime, poverty, and pollution, which makes moving and finding a job at a larger company unattractive to me. Add to that the fact that I could make 10x what I make today at an entry-level position in the US, and it becomes easy to see why I want to move.

I don't have a formal education. I passed the entrance exam for the University of São Paulo (Brazil's top university) when I was 15, but I couldn't legally enroll, so I had to wait until I passed again at 17. I always excelled at tests, but I hated attending classes and thought they progressed too slowly for me, so I dropped out the following year (2014). Since then, I've taught myself to program in several languages and ended up in my current position.

The reason I'm asking for help is that it would save me a lot of time if someone gave me the right pointers as to where to look for a job, which companies to apply to, or whether there's some shortcut I could take to make this a reality. Ideally I'd work in the Bay Area, but I'd be willing to move anywhere in the US, really, at any livable salary (yeah, I'm desperate to leave my current situation). I'm currently applying to anything I can find on Glassdoor that offers visa sponsorship.

Because I work at a private company, I don't have much to show to easily prove I'm skilled (there are only the company's apps/website, and it's hard to put that on a resume), but I could spend the next few months making open-source contributions or building something I could use to show off. The only open-source contribution I currently have is a fix to the Kotlin compiler.

Does anyone have advice on how to proceed, or has anyone done something similar? Is it even feasible? Will anyone hire me without a degree? Should I just give up and try something else? I've also considered traveling to the US on a tourist visa and looking for a job while I'm there; could that work? (I'm not sure it's possible to get a work visa while already in the US.)