LessWrong 2.0 Reader
“I’m only a year old rationalist”
you write really eloquently for your age! and being in uni! wow. I was still learning to walk. kids are so precocious these days ⸮
keltan on some thoughts on LessOnline
“Whiteboards everywhere” and my non-ironic favourite band are debuting songs!!!
But, I’m only a year old rationalist and I live in Australia on a uni student budget. Still… I’m considering flying out. It would be pretty incredible to run some abstract improv workshops with other truth seeking nerds. I think I need to sit down and calculate.
Is this the type of event that a first year rationalist could attend and get value from/be welcome at? What is the likelihood that it will run again next year? Is there a prediction market for that?
the-gears-to-ascension on Raemon's Shortform
a ui on your user page where you get to pick a four letter shortening of your name and a color. the shortening is displayed as
t g
t a
in a tiny color-of-your-choice box. when picking your name, each time you pick a hue and saturation in the color picker (use a standard one, don't build a color picker), it does a query (debounced - I hope you have a standard way to debounce in react elements) for other people on the site who have that initialism, and shows you their colors in a list, along with an indicator min(color_distance(you.color, them.color) for them in other_users).
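The debouncing the comment hopes for doesn't need anything React-specific underneath; as a rough sketch (the helper name and delay are illustrative, not an existing site API), a plain debounce that a component could wrap the initialism query in might look like:

```typescript
// Minimal debounce helper: of any burst of calls, only the last one
// fires, after `delayMs` of quiet. A React component would wrap the
// query callback with this (or an equivalent useEffect-based hook)
// before hitting the server on every color-picker change.
function debounce<A extends unknown[]>(
  fn: (...args: A) => void,
  delayMs: number
): (...args: A) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delayMs);
  };
}
```

In a React component the returned function would typically be memoized (e.g. with `useMemo`) so the timer survives re-renders.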
the color distance indicator could be something like the one from here, which would need transliterating into javascript:
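As a sketch of that transliteration (assuming 0–255 integer channels; the weights come from the C snippet quoted below, everything else here is illustrative):

```typescript
interface RGB { r: number; g: number; b: number; }

// "Redmean" colour distance, transliterated from the C snippet quoted
// below. Channel values are assumed to be 0-255 integers.
function colourDistance(e1: RGB, e2: RGB): number {
  const rmean = Math.floor((e1.r + e2.r) / 2); // C integer division
  const r = e1.r - e2.r;
  const g = e1.g - e2.g;
  const b = e1.b - e2.b;
  // Intermediate products stay below 2^31, so >> 8 keeps the same
  // integer semantics as the C original.
  return Math.sqrt(
    (((512 + rmean) * r * r) >> 8) +
      4 * g * g +
      (((767 - rmean) * b * b) >> 8)
  );
}
```

The min-over-other-users indicator from the comment would then just be `Math.min(...others.map(u => colourDistance(mine, u)))`.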
This formula has results that are very close to L*u*v* (with the modified lightness curve) and, more importantly, it is a more stable algorithm: it does not have a range of colours where it suddenly gives far from optimal results. The weights of the formula could be optimized further, but again, the selection of the closest colour is subjective. My goal was to find a reasonable compromise.

    #include <math.h>

    typedef struct { unsigned char r, g, b; } RGB;

    double ColourDistance(RGB e1, RGB e2)
    {
        long rmean = ((long)e1.r + (long)e2.r) / 2;
        long r = (long)e1.r - (long)e2.r;
        long g = (long)e1.g - (long)e2.g;
        long b = (long)e1.b - (long)e2.b;
        return sqrt((((512 + rmean) * r * r) >> 8) + 4 * g * g + (((767 - rmean) * b * b) >> 8));
    }

keltan on Observations on Teaching for Four Weeks
That’s a great question! I’ve been teaching arts classes for a youth charity for 5 years now. Ages range from 5-18. I myself am 23.
I’d say this has happened twice? I’m counting a one-off lesson with some 16–18 year olds a few years ago, and a series of weeks in which I had extremely little control over some 8–10 year olds. In that case I was able to control individuals if they had my full attention, but would ‘lose’ them when I focused on the next kid.
Your question caused me to think about why these things may have happened, though I’m curious to hear what you think before I spill my guts.
mathieuroy on Let's split the cake, lengthwise, upwise and slantwise
thanks, it worked! https://web.archive.org/web/20150412211654/http://reducing-suffering.org/wp-content/uploads/2015/02/wild-animals_2015-02-28.pdf
mathieuroy on Mati_Roy's Shortform
i want a better conceptual understanding of what "fundamental values" means, and how to disentangle that from beliefs (e.g. in an LLM). like, is there a meaningful way we can say that a "cat classifier" is valuing classifying cats even though it sometimes fails?
nathan-helm-burger on Please stop publishing ideas/insights/research about AI
A bit of a rant, yes, but some good thoughts here.
I agree that unenforceable regulation can be a bad thing. On the other hand, it can also work in some limited ways. For example, the international agreements against heritable human genetic engineering seem to have held up fairly well. But I think that requires supporting facts about the world to be true: it needs to not be obviously highly profitable to defectors, it needs to be relatively inaccessible to most people (requiring specialized tech and knowledge), and it needs to fit with our collective intuitions (bio-engineering humans seems kinda icky to a lot of people).
The trouble is, all of these things fail to help us with the problem of dangerous AI! As you point out, many bitcoin miners have enough GPUs to be dangerous if we get even a couple more orders of magnitude of algorithmic efficiency improvement. So it's accessible. AI and AGI offer many tempting ways to acquire power and money in society. So it's immediately and incrementally profitable. People aren't as widely instinctively outraged by AI experiments as by bio-engineering experiments. So it's not intuitively repulsive.
So yes, this seems to me to be very much a situation in which we should not place any trust in unenforceable regulation.
I also agree that we probably do need some sort of organization which enforces the necessary protections (detection and destruction) against rogue AI.
And it does seem potentially like a lot of human satisfaction could be bought in the near future with a focus on making sure everyone in the world gets a reasonable minimum amount of satisfaction from their physical and social environments as you describe here:
Usually, the median person is interested in: jobs, a full fridge, rituals, culture, the spread of their opinion leader's information, dopamine, political and other random and inherited values, life, continuation of life, and the like. Provide a universal way of obtaining this and just monitor it calmly.
As Connor Leahy has said, we should be able to build sufficiently powerful tool-AI to not need to build AGI! Stop while we still have control! Use the wealth to buy off those who would try anyway. Also, build an enforcement agency to stop runaway AI or AI misuse.
I don't know how we get there from here though.
Also, the offense-dominant weapons development landscape is looking really grim, and I don't see how to easily patch that.
On the other hand, I don't think we buy ourselves any chance of victory by trying to gag ourselves for fear of speeding up AGI development. It's coming soon regardless of what we do! The race is short now; we need to act fast!
I don't buy the arguments that our discussions here will make a significant impact in the timing of the arrival of AGI. That seems like hubris to me, to imagine we have such substantial effects, just from our discussions.
Code? Yes, code can be dangerous and shouldn't be published if so.
Sufficiently detailed technical descriptions of potential advancements? Yeah, I can see that being dangerous.
Unsubstantiated commentary about a published paper being interesting and potentially having both capabilities and alignment value? I am unconvinced that such discussions meaningfully impact the experiments being undertaken in AI labs.
the-gears-to-ascension on My hour of memoryless lucidity
geez, that's certainly a list of chemicals. I wonder what the ratios were - my intuition finds it less surprising for you to be less impaired if no one of them is a particularly high dose.
martinkunev on Examples of Highly Counterfactual Discoveries?
I have previously used special relativity as an example of the opposite. It seems to me that the Michelson–Morley experiment laid the groundwork, and all alternatives were more or less rejected by the time special relativity was formulated. This could be hindsight bias, though.
If Nobel Prizes are any indicator, then the photoelectric effect is probably more counterfactually impactful than special relativity.
migueldev on CLR's recent work on multi-agent systems
safe Pareto improvement (SPI)
This URL is broken.