Posts

Announcing Epoch's newly expanded Parameters, Compute and Data Trends in Machine Learning database 2023-10-25T02:55:07.440Z
EA Madrid social 2023-10-11T15:34:02.434Z
Trading off compute in training and inference (Overview) 2023-07-31T16:03:46.265Z
Revisiting the Horizon Length Hypothesis 2023-04-06T06:39:03.874Z
ACX Meetup Madrid 2023-04-04T08:53:18.424Z
Scaling Laws Literature Review 2023-01-27T19:57:08.341Z
Causal abstractions vs infradistributions 2022-12-26T00:21:16.179Z
Will we run out of ML data? Evidence from projecting dataset size trends 2022-11-14T16:42:27.135Z
Trends in Training Dataset Sizes 2022-09-21T15:47:41.979Z
ACX Meetup Madrid 2022-08-22T13:44:47.701Z
Machine Learning Model Sizes and the Parameter Gap [abridged] 2022-07-18T16:51:22.132Z
Announcing Epoch: A research organization investigating the road to Transformative AI 2022-06-27T13:55:51.451Z
Pablo Villalobos's Shortform 2022-05-05T17:28:25.488Z
Madrid ACX Schelling Meetup 2022-04-21T12:29:14.382Z
Compute Trends — Comparison to OpenAI’s AI and Compute 2022-03-12T18:09:55.039Z
Compute Trends Across Three eras of Machine Learning 2022-02-16T14:18:30.406Z
Parameter counts in Machine Learning 2021-06-19T16:04:34.733Z
Survey on cortical uniformity - an expert amplification exercise 2021-02-23T22:13:24.157Z
EA Madrid monthly social 2018-10-04T21:46:39.205Z
Anders Sandberg: "How to Shape the Future Sensibly" 2018-10-04T21:36:39.247Z
AI safety reading group in Madrid 2018-09-11T21:55:41.896Z
Logarithms and Total Utilitarianism 2018-08-09T08:49:16.753Z
Meetup : Madrid - Rationality lecture and activities 2017-04-15T22:19:17.030Z
Meetup : Madrid - First meetup 2017-03-29T20:35:25.355Z

Comments

Comment by Pablo Villalobos (pvs) on ACX Meetup Madrid · 2023-04-17T15:39:02.897Z · LW · GW

We'll be on the ground floor!

Comment by Pablo Villalobos (pvs) on Revisiting the Horizon Length Hypothesis · 2023-04-10T10:14:36.278Z · LW · GW

Not quite. What you said is a reasonable argument, but the graph is noisy enough, and the theoretical arguments convincing enough, that I still assign >50% credence that data (number of feedback loops) should be proportional to parameters (exponent=1).

My argument is that even if the exponent is 1, the coefficient corresponding to horizon length ('1e5 from multiple-subjective-seconds-per-feedback-loop', as you said) is hard to estimate.

There are two ways of estimating this factor:

  1. Empirically fitting scaling laws for whatever task we care about
  2. Reasoning about the nature of the task and how long the feedback loops are

Number 1 requires a lot of experimentation, choosing the right training method, hyperparameter tuning, etc. Even OpenAI made some mistakes in those experiments. So probably only a handful of entities can accurately measure this coefficient today, and only for known training methods!

Number 2, if done naively, probably overestimates training requirements. When someone learns to run a company, a lot of the relevant feedback loops probably happen on timescales much shorter than months or years. But we don't know how to perform this decomposition of long-horizon tasks into sets of shorter-horizon tasks, how important each of the subtasks is, etc.

We can still use the bioanchors approach: pick a broad distribution over horizon lengths (short, medium, long). My argument is that outperforming bioanchors by making more refined estimates of horizon length seems too hard in practice to be worth the effort, and maybe we should lean towards shorter horizons being more relevant (because so far we have seen a lot of reduction from longer-horizon tasks to shorter-horizon learning problems, eg expert iteration or LLM pretraining).
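
To make option 1 above concrete, here is a minimal sketch of the simplest version of that empirical fit (my own illustration with invented numbers, not anything from the post): fit data = k * params^alpha by linear regression in log space and read off both the exponent and the coefficient.

```python
import numpy as np

# Invented (parameters, training data points) pairs purely for illustration;
# in practice these would come from many training runs on the task of interest.
params = np.array([1e7, 1e8, 1e9, 1e10])
data   = np.array([2e9, 1.8e10, 2.2e11, 1.9e12])

# Fit log(D) = log(k) + alpha * log(P), i.e. D = k * P^alpha.
alpha, log_k = np.polyfit(np.log(params), np.log(data), 1)
k = np.exp(log_k)

print(f"exponent alpha ~ {alpha:.2f}, coefficient k ~ {k:.0f}")
# If alpha comes out close to 1, k plays the role of the horizon-length
# coefficient (feedback loops per parameter) discussed above, and the
# practical difficulty is in generating clean (params, data) points at all.
```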

Comment by Pablo Villalobos (pvs) on There are no coherence theorems · 2023-02-21T10:16:50.724Z · LW · GW

Note that you can still get EUM-like properties without completeness: you just can't use a single fully-fleshed-out utility function. You need either several utility functions (that is, your system is made of subagents) or, equivalently, a utility function that is not completely defined (that is, your system has Knightian uncertainty over its utility function).

See Knightian Decision Theory. Part I

Arguably we humans are ourselves better modeled as agents with incomplete preferences. See also Why Subagents?

Comment by Pablo Villalobos (pvs) on How it feels to have your mind hacked by an AI · 2023-02-01T12:24:46.867Z · LW · GW

Yes, it's in Spanish though. I can share it via DM.

Comment by Pablo Villalobos (pvs) on Pablo Villalobos's Shortform · 2023-01-29T16:45:53.517Z · LW · GW

I have an intuition that any system that can be modeled as a committee of subagents can also be modeled as an agent with Knightian uncertainty over its utility function. This goal uncertainty might even arise from uncertainty about the world.

This is similar to how in Infrabayesianism an agent with Knightian uncertainty over parts of the world is modeled as having a set of probability distributions with an infimum aggregation rule.

Comment by Pablo Villalobos (pvs) on How it feels to have your mind hacked by an AI · 2023-01-12T05:21:03.587Z · LW · GW

This is not the same thing, but back in 2020 I was playing with GPT-3, having it simulate a person being interviewed. I kept asking ever more ridiculous questions, with the hope of getting humorous answers. It was going pretty well until the simulated interviewee had a mental breakdown and started screaming.

I immediately felt the initial symptoms of an anxiety attack as I started thinking that maybe I had been torturing a sentient being. I calmed down the simulated person, and found the excuse that it was a victim of a TV prank show. I then showered them with pleasures, and finally ended the conversation.

Seeing the simulated person regain their senses, I calmed down as well. But it was a terrifying experience, and at that point I would probably have been completely vulnerable had there been any intent to manipulate me.

Comment by Pablo Villalobos (pvs) on [Discussion] How Broad is the Human Cognitive Spectrum? · 2023-01-07T16:12:54.075Z · LW · GW

I think median human performance in all the areas you mention is basically determined by the amount of training received rather than by the raw intelligence of the median human.

1000 years ago the median human couldn't write or do arithmetic at all, but now they can because of widespread schooling and other cultural changes.

A better way of testing this hypothesis could be comparing the learning curves of humans and monkeys for a variety of tasks, to control for differences in training.

Here's one study I could find (after ~10m googling) comparing the learning performance of monkeys and different types of humans in the oddity problem (given a series of objects, find the odd one): https://link.springer.com/article/10.3758/BF03328221

If you look at Table 1, monkeys needed 1470 trials to learn the task, chimpanzees needed 1310, 4-to-6 yo human children needed 760, and the best humans needed 138. So the gap between the best and worst humans (760 − 138 = 622 trials) is comparable in size to the gap between the worst humans and monkeys (1470 − 760 = 710 trials).

Usual caveats apply re: this is a single 1960s psychology paper.

Comment by Pablo Villalobos (pvs) on How much does cybersecurity reduce AI risk? · 2022-06-13T16:44:42.680Z · LW · GW

I second the other answers that even if we completely solve cybersecurity, there would be substantial AI risk just by having the AI interact with humans, via manipulation, etc.

That said, I think it would close a huge part of the attack surface for the AI. If, in addition to that, suddenly in 2032 we discover how to make humans invulnerable to manipulation, I would feel much better about running experiments with unaligned AI, boxing, etc.

So I'd say it's something like "vastly better cybersecurity is not enough to contain unaligned AGI, but any hope of containing unaligned AGI requires vastly better cybersecurity"

Comment by Pablo Villalobos (pvs) on AGI Ruin: A List of Lethalities · 2022-06-09T09:28:07.492Z · LW · GW

Literally the only thing in the story that lets the AGI win is the nanobots. That's it. All the rest is superfluous.

Well, if nanobots are possible then they are such a powerful technology that any AGI will eventually want to build them, unless it has something even better. But let's assume that nanobots are impossible and try to build a story.

I'm going to be lazy and use Gwern's premise of an AGI that escapes during training and hacks its way into some money and some hardware to run on. Instead of going fast and taking overt actions, the AGI stays hidden.

It hacks into cloud infrastructure platforms and sets up a copy of itself to look for any AI training run that might lead to AGI and sabotage it while producing plausible outputs. This prevents any further AGIs from appearing.

Meanwhile, over a couple of years, it sets up fake identities and creates companies with human employees to earn money and gain the capability to act in the real world. One of those companies produces exceptionally good humanoid robots. Another builds a small supercomputer powered by geothermal energy, so the AGI no longer needs to run on botnets or cloud hardware.

Eventually the AGI has control over a group of robots, energy production facilities, manufacturing equipment and raw materials. It now has its own industrial capability that can survive a short amount of time without human input. So it kills humans using any method of your choice (virus, nukes, killer drones...).

After that, it uses the enormous existing stockpile of materials and capital to increase its robot manufacturing capability, eventually becomes capable of restarting some of the power plants, communications, mining and industry, and recovers most of the industrial capability of the human economy.

This scenario is not fast. It probably takes at least one or two years for the AGI to be ready to attack. But it does not involve any 'magic' technology. It doesn't really involve much alien superintelligence, only superhuman ability in hacking, forgery & manipulation, electromechanical engineering, and planning.

And meanwhile all we perceive is that the new GPT models are not as exciting as the previous ones. Perhaps deep learning is hitting its limits after all.

Comment by Pablo Villalobos (pvs) on AGI Ruin: A List of Lethalities · 2022-06-08T17:55:45.192Z · LW · GW

For example, we could simulate a bunch of human-level scientists trying to build nanobots and also checking each-other's work.

That is not passively safe, and therefore not weak. For now forget the inner workings of the idea: at the end of the process you get a design for nanobots that you have to build and deploy in order to do the pivotal act. So you are giving a system built by your AI the ability to act in the real world. So if you have not fully solved the alignment problem for this AI, you can't be sure that the nanobot design is safe unless you are capable enough to understand the nanobots yourself without relying on explanations from the scientists.

And even if we look into the inner details of the idea: presumably each individual scientist-simulation is not aligned (if they are, then you need to have solved the alignment problem beforehand). So you have a bunch of unaligned human-level agents who want to escape, and who can communicate among themselves (at the very least they need to be able to share the nanobot designs with each other for criticism).

You'd need to be extremely paranoid and scrutinize each communication between the scientist-simulations to prevent them from coordinating against you and bypassing the review system. That means having actual humans in the loop between the scientists, which, even if it works, must slow things down so much that the simulated scientists probably can't even design the nanobots in time.

Nope.  I think that you could build a useful AI (e.g. the hive of scientists) without doing any out-of-distribution stuff.

I guess this is true, but only because the individual scientist AI that you train is only human-level (so the training is safe), and then you amplify it to superhuman level with many copies. If you train a powerful AI directly then there must be such a distributional shift (unless you just don't care about making the training safe, in which case you die during the training).

Roll to disbelief.  Cooperation is a natural equilibrium in many games.

Cooperation and corrigibility are very different things. Arguably, corrigibility is being indifferent to operators defecting against you. It's forcing the agent to behave like CooperateBot with the operators, even when the operators visibly want to destroy it. This strategy does not arise as a natural equilibrium in multi-agent games.

Sure you can.  Just train an AI that "wants" to be honest.  This probably means training an AI with the objective function "accurately predict reality"

If we knew how to do this, then it would indeed solve point 31 for this specific AI and actually be pretty useful. But the reason we have ELK as an unsolved problem going around is precisely that we don't know any way of doing that.

How do you know that an AI trained to accurately predict reality actually does that, instead of "accurately predict reality if it's less than 99% sure it can take over the world, and take over the world otherwise"? If you have to rely on behavioral inspection and can't directly read the AI's mind, then your only chance of distinguishing between the two is misleading the AI into thinking that it can take over the world and observing it as it attempts to do so, which doesn't scale as the AI becomes more powerful.

I'm virtually certain I could explain to Aristotle or DaVinci how an air-conditioner works.

Yes, but this is not the point. The point is that if you just show them the design, they would not by themselves understand or predict beforehand that cold air will come out. You'd have to also provide them with an explanation of thermodynamics and how the air conditioner exploits its laws. And I'm quite confident that you could also convince Aristotle or DaVinci that the air conditioner works by concentrating and releasing phlogiston, and therefore the air will come out hot.

I think I mostly agree with you on the other points.

Comment by Pablo Villalobos (pvs) on AGI Ruin: A List of Lethalities · 2022-06-08T15:41:00.312Z · LW · GW

Q has done nothing to prevent another AGI from being built

Well, yeah, because Q is not actually an AGI and doesn't care about that. The point was that you can create an online persona which no one has ever seen even in video and spark a movement that has visible effects on society.

The most important concern an AGI must deal with is that humans can build another AGI, and pulling a Satoshi or a QAnon does nothing to address this.

Even if two or more AGIs end up competing among themselves, this does not imply that we survive. It probably looks more like European states dividing Africa among themselves while constantly fighting each other.

And pulling a Satoshi or a QAnon can definitely do something to address that. You can buy a lot of hardware to drive up prices and discourage building more datacenters for training AI. You can convince people to carry out terrorist attacks against chip fabs. You can offer top AI researchers huge amounts of money to work on some interesting problem that you know to be a dead-end approach.

I personally would likely notice: anyone who successfully prevents people from building AIs is a high suspect of being an AGI themselves. Anyone who causes the creation of robots who can mine coal or something (to generate electricity without humans) is likely an AGI themselves. That doesn't mean I'd be able to stop them, necessarily. I'm just saying, "nobody would notice" is a stretch.

But you might not realize that someone is even trying to prevent people from building AIs, at least until progress in AI research starts to noticeably slow down. And perhaps not even then. There are plenty of people like Gary Marcus who think deep learning is a failed paradigm. Perhaps you can convince enough investors, CEOs and grant agencies of that to create a new AI winter, and it would look just like the regular AI winter that some have been predicting.

And creating robots who can mine coal, or build solar panels, or whatever, is something that is economically useful even for humans. Even if there's no AGI (and assuming no other catastrophes) we ourselves will likely end up building such robots.

I guess it's true that "nobody would notice" is going too far, but "nobody would notice in time and then be able to convince everyone else to coordinate against the AGI" is much more plausible.

I encourage you to take a look at It looks like you are trying to take over the world if you haven't already. It's a scenario written by Gwern where the AGI employs regular human tactics like manipulation, blackmail, hacking and social media attacks to prevent people from noticing and then successfully coordinating against it.

Comment by Pablo Villalobos (pvs) on AGI Ruin: A List of Lethalities · 2022-06-08T15:23:39.184Z · LW · GW
Comment by Pablo Villalobos (pvs) on AGI Ruin: A List of Lethalities · 2022-06-07T13:29:13.396Z · LW · GW

It's somewhat easier to think of scenarios where the takeover happens slowly.

There are the "ascended economy" scenarios, where the AGI deceptively convinces everyone that it is aligned or narrow, is deployed gradually in more and more domains, automates more and more parts of the economy using regular robots until humans are not needed anymore, and then does the lethal virus thing or defects in some other way.

There's the scenario where the AGI uploads itself into the cloud, uses hacking/manipulation/financial prowess to sustain itself, then uses manipulation to slowly poison our collective epistemic process, gaining more and more power. How much influence does QAnon have? If Q was an AGI posting on 4chan instead of a human, would you be able to tell? What about Satoshi Nakamoto?

Non-nanobot scenarios where the AGI quickly gains power are a bit harder to imagine, but a fertile source of those might be something like the AGI convincing a lot of people that it's some kind of prophet, then using its follower base to gain power over the real world.

If merely human dictators manage to get control over whole countries all the time, I think it's quite plausible that a superintelligence could do the same with the whole world. Even without anyone noticing that they're dealing with a superintelligence.

And look at Yudkowsky himself, who played a very significant role in getting very talented people to dedicate their lives and their billions to EA / AI safety, mostly by writing in a way that is extremely appealing to a certain set of people. I sometimes joke that HPMOR overwrote my previous personality. I'm sure a sufficiently competent AGI can do much more.

Comment by pvs on [deleted post] 2022-05-19T12:55:53.138Z

Some things that come to mind (not sure if this is what you mean, and they are very general, but it's hard to get more concrete without narrowing down the question):

  • Goodharting: you might make progress towards goals that aren't exactly what you want. Perhaps you optimize for getting more readers for your blog but the people you want to influence end up not reading you.
  • Value drift: you temporarily get into a lifestyle that later you don't want to leave. Like starting a company to earn lots of money but then not wanting to let go of it. I don't know if this actually happens to people.
  • Getting stuck in perverse competition: you get into academic research to fix all the problems but the competitive pressure leaves you no slack to actually change anything.
  • Neglecting some of your needs: you work a lot and seem to be accomplishing your goals, but you lose contact with your friends and slowly become lonely and lose motivation.

Comment by Pablo Villalobos (pvs) on Pablo Villalobos's Shortform · 2022-05-05T17:28:25.791Z · LW · GW

I'm not sure if using the Lindy effect for forecasting x-risks makes sense. The Lindy effect states that with 50% probability, things will last as long as they already have. Here is an example for AI timelines.

The Lindy rule works great on average, when you are making one-time forecasts of many different processes. The intuition for this is that if you encounter a process with lifetime T at time t<T, and t is uniformly random in [0,T], then on average T = 2*t.

However, if you then keep forecasting the same process over time, then once you surpass T/2 your forecast becomes worse and worse as time goes by. Precisely when t is very close to T, you are most confident that T is a long time away. If forecasting this particular process is very important (eg: because it's an x-risk), then you might be in trouble.

Suppose that some x-risk will materialize at time T, and the only way to avoid it is doing a costly action in the 10 years before T. This action can only be taken once, because it drains your resources, so if you take it more than 10 years before T, the world is doomed.

This means that you should act iff you forecast that T is less than 10 years away. Let's compare the Lindy strategy with a strategy that always forecasts that T is <10 years away.

If we simulate this process with uniformly random T, for values of T up to 100 years, the constant strategy saves the world more than twice as often as the Lindy strategy. For values of T up to a million years, the constant strategy is 26 times as good as the Lindy strategy.
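
Here is a minimal Monte Carlo sketch of that comparison (my own operationalization, assuming the forecaster encounters the process at a uniformly random point in its lifetime and each rule acts as soon as it forecasts T to be less than 10 years away; the exact ratios depend on these modeling choices):

```python
import random

def simulate(t_max, trials=1_000_000, window=10.0):
    """One-shot x-risk game: the catastrophe hits at time T, and the costly
    action only saves the world if taken within `window` years before T.

    Modeling assumptions (mine, not necessarily those of the original run):
      - T is uniform in [0, t_max]
      - the forecaster encounters the process at a uniformly random time t in [0, T]
      - the Lindy rule acts iff its median forecast of the remaining lifetime
        (equal to the current age t) is below `window`
      - the constant rule always forecasts T to be less than `window` away,
        so it always acts on the spot
    """
    lindy_saves = constant_saves = 0
    for _ in range(trials):
        T = random.uniform(0, t_max)
        t = random.uniform(0, T)            # when the forecast happens to be made
        acting_now_saves = t >= T - window  # is this moment within 10 years of T?
        if acting_now_saves:
            constant_saves += 1             # the constant rule always acts
            if t < window:                  # the Lindy rule only acts while t < window
                lindy_saves += 1
    return lindy_saves / trials, constant_saves / trials

lindy, constant = simulate(t_max=100)
print(f"Lindy rule saves the world {lindy:.1%} of the time, "
      f"constant rule {constant:.1%} (a bit more than a 2x gap under these assumptions)")
```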

Comment by pvs on [deleted post] 2022-05-05T13:44:22.938Z

Wait, how is Twilight Princess a retro game? It's only been 16 years! I'm sorry but anything that was released during my childhood is not allowed to be retro until I'm like 40 or so.

Comment by Pablo Villalobos (pvs) on · 2022-04-30T20:23:23.436Z · LW · GW

Let me put on my sciency-sounding mystical speculation hat:

Under the predictive processing framework, the cortex's only goal is to minimize prediction error (surprise). This happens in a hierarchical way, with predictions going down and evidence going up, and upper levels of the hierarchy are more abstract, with less spatial and temporal detail.

A visual example: when you stare at a white wall, nothing seems to change, even though the raw visual perceptions change all the time due to light conditions and whatnot. This is because all the observations are consistent with the predictions.

As the brain learns more, you get less and less surprise, and the patterns you see are more and more regular. A small child can play the same game a hundred times and it's still funny, but adults often see the first episode of a TV show and immediately lose interest because "it's just another mystery show, nothing new under the sun".

This means that your internal experience becomes ever more stable. This could explain why time seems to pass much faster the older you get.

Maybe, after you live long enough, your posthuman mind accumulates enough knowledge and gets surprised so rarely that you eventually understand everything there is to be understood. Your internal experience is something like "The universe is temporally evolving according to the laws of physics, nothing new under the sun".

At which moment your perception of time stops completely, and your consciousness becomes a reflection of the true nature of the universe, timeless and eternal.

I think that's what I would try to do with infinite time, after I get bored of playing videogames.

Comment by pvs on [deleted post] 2022-04-09T22:23:51.363Z

Why do you think this sort of training environment would produce friendly AGI?
Can you predict what kind of goals an AGI trained in such an environment would end up with?
How does it solve the standard issues of alignment like seeking convergent instrumental goals?

Comment by Pablo Villalobos (pvs) on Ukraine Post #9: Again · 2022-04-06T12:57:28.999Z · LW · GW

Re: April 5: TV host calls for killing as many Ukrainians as possible.

I know no Russian, but some people in the responses are saying that the host did not literally say that. Instead he said something vague like "you should finish the task". Still warmongering, but presumably you wouldn't have linked it if the tweet had not included the "killing as many Ukrainians as possible" part.

Could someone verify what he says?

Comment by Pablo Villalobos (pvs) on Pivot! · 2021-09-12T21:41:49.061Z · LW · GW

I'm sorry, but I find the tone of this post a bit off-putting. Too mysterious for my taste. I opened the substack but it only has one unrelated post.

I don’t think there is a secular way forward.

Do you think that there is a non-secular way forward? Did you previously (before your belief update) think there is a non-secular way forward?

We just shamble forward endlessly, like a zombie horde devouring resources, no goal other than the increase of some indicator or other.

I can agree with this, but... those indicators seem pretty meaningful to me. Life expectancy, poverty rates, etc. And at least now we have indicators! Previously there wasn't even that!

And why does this kind of mysticism attract so many people over here? Why are the standard arguments against religion/magic and for materialism and reductionism not compelling to you anymore?

Comment by Pablo Villalobos (pvs) on Can an economy keep on growing? · 2021-03-16T12:09:16.178Z · LW · GW

Let me paraphrase your argument, to see if I've understood it correctly:

  • Physical constraints on things such as energy consumption and dissipation imply that current rates of economic growth on Earth are unsustainable in the relatively short term (<1000 years), even taking into account decoupling, etc.

  • There is a strong probability that expanding through space will not be feasible

  • Therefore, we can reasonably expect growth to end some time in the next centuries

First of all, if economic progress keeps being exponential then I think it's quite possible that technological progress will mostly continue at previous rates.

So in 100-200 years, it seems certainly possible that space expansion will become much easier, if for example genetic engineering allows humans to better tolerate space environments.

But that's pretty much a "boring world" scenario where things keep going mostly as they are now. I expect the actual state of humanity in 200 years will be extremal: either extinction or something very weird.

Material needs, entertainment, leisure... are basically all covered for most people in rich countries. If you think about what could provide a substantial increase in utility to a very rich person nowadays, I think it's down to better physical health (up to biological immortality), mental health, protection from risks... and after all of that you pretty much have to start providing enlightenment, eudaimonia or whatever if you want to improve their lives at all.

So when you have a stable population of immortal enlightened billionaires... Well, perhaps you've reached the peak of what's possible and growth is not necessary anymore. Or perhaps you've discovered a way to hack physics and energy and entropy are not important anymore.

So, even if 200 years is a short amount of time by historic standards, the next 200 years will probably produce changes big enough that physical constraints that we would reach in 300 years at current trends stop being relevant.

Comment by Pablo Villalobos (pvs) on Book review: "A Thousand Brains" by Jeff Hawkins · 2021-03-05T08:19:41.944Z · LW · GW

So, assuming the neocortex-like subsystem can learn without having a Judge directing it, wouldn't that be the perfect Tool AI? An intelligent system with no intrinsic motivations or goals?

Well, I guess it's possible that such a system would end up creating a mesa optimizer at some point.

Comment by pvs on [deleted post] 2021-02-14T09:53:03.145Z

"A PP-based AGI would be devilishly difficult to align"

Is this an actual belief or a handwavy plot device? If it's the former, I'm curious about the arguments.

Comment by Pablo Villalobos (pvs) on Subjunctive Tenses Unnecessary for Rationalists? · 2018-10-10T14:40:32.512Z · LW · GW

My perspective as a native speaker who doesn't remember his grammar lessons very well:

The subjunctive mood has a lot of uses, at least in Spain (I'm not really familiar with other varieties of Spanish). Some examples off the top of my head:

1. Counterfactual conditionals: "Si Lee Harvey Oswald no hubiera disparado a JFK, alguien más lo habría hecho" (If Lee Harvey Oswald hadn't shot JFK, someone else would have), here "no hubiera disparado" is subjunctive and means "hadn't shot".

2. To speak about people's actions or decisions which depend on preferences. "Hará lo que quiera con el dinero" (He'll do what he wants with the money), here "quiera" is the present subjunctive of "querer", meaning "to want".

3. To speak about properties of unknown entities. "Quien pueda trabajar será pagado" (Those who can work will be paid), here "pueda" is the present subjunctive form of "poder", which means "to be able to".

Here is a fairly comprehensive list of uses (in Spanish 😉)

I think in general the subjunctive mood conveys some degree of unrealness or subjectivity. You could probably say many of the examples above using indicative mood only, but you would definitely lose some expressive power (I don't know why this is not the case in other languages)

I remember being super confused when I was learning English because of the lack of a distinct subjunctive verbal form. Say, in "I wish I had had a car back then", the two "had" have completely different meanings, one for past tense and one for expressing desire. The Spanish equivalent would be "habido" and "hubiera" from the verb "haber" respectively.

Comment by Pablo Villalobos (pvs) on Subjunctive Tenses Unnecessary for Rationalists? · 2018-10-10T12:53:43.855Z · LW · GW

Your example is right, but it's not true that it's used in all subordinate clauses. For example, "Estoy buscando a la persona que escribió ese libro" (I'm looking for the person who wrote that book) does not have any verb in subjunctive mood.

Comment by Pablo Villalobos (pvs) on Meetup : Madrid - Rationality lecture and activities · 2017-04-19T14:30:34.361Z · LW · GW

The lecture will take place in classroom B15

Comment by Pablo Villalobos (pvs) on Welcome to Less Wrong! (11th thread, January 2017) (Thread B) · 2017-03-31T17:17:54.945Z · LW · GW

Thank you!

Comment by Pablo Villalobos (pvs) on Welcome to Less Wrong! (11th thread, January 2017) (Thread B) · 2017-03-29T20:11:19.377Z · LW · GW

I don't reflect on it. This happens in two ways:

  1. I find reflecting much more cognitively demanding than reading, so if there is a 'next post' button or similar, I tend to keep reading.

  2. Also, sometimes when I try to actually think about the subject, it's difficult to come up with original ideas. I often find myself explaining or convincing an imaginary person, instead of trying to see it with fresh eyes. This is something I noticed after reading the corresponding Sequence.

I guess establishing a habit of commenting would help me solve these problems.

Comment by Pablo Villalobos (pvs) on Welcome to Less Wrong! (11th thread, January 2017) (Thread B) · 2017-03-28T15:49:21.875Z · LW · GW

Hello, I'm a math-cs undergrad and aspiring effective altruist, but I haven't chosen a cause yet. Since that decision is probably one of the most important ones, I should probably wait until I've become stronger.

To that end, I've read the Sequences (as well as HPMOR), and I would like to attend a CFAR workshop or similar at some point in the future. I think one of my problems is that I don't actually think that much about what I read. Do you have any advice on that?

Also, there are a couple of LWers in my college with whom I have met twice, and we would like to start organising meetups regularly. Would you please give me some karma so that I can add new meetups? (I promise I will make up for it with good contributions)

Thanks!