Comment by tag on Book review: Rethinking Consciousness · 2020-01-17T15:48:35.436Z · score: 1 (1 votes) · LW · GW

Well, I wasn't nitpicking you. Friedenbach was asserting locality+determinism. You are asserting locality+nondeterminism, which is OK.

Comment by tag on Book review: Rethinking Consciousness · 2020-01-17T15:44:28.784Z · score: 1 (1 votes) · LW · GW

I am strongly disinclined to believe (as I think David Chalmers has suggested) that there’s a notion of p-zombies, in which an unconscious system could have exactly the same thoughts and behaviors as a conscious one, even including writing books about the philosophy of consciousness, for reasons described here and elsewhere.

Again: Chalmers doesn't think p-zombies are actually possible.

If I believe (1), it seems to follow that I should endorse the claim “if we have a complete explanation of the meta-problem of consciousness, then there is nothing left to explain regarding the hard problem of consciousness”.

That doesn't follow from (1). It would follow from the claim that everyone is a zombie, because then there would be nothing to consciousness except false claims to be conscious. However, if you take the view that reports of consciousness are caused by consciousness per se, then consciousness per se exists and needs to be explained separately from reports and behaviour.

Comment by tag on Book review: Rethinking Consciousness · 2020-01-17T13:42:37.753Z · score: 1 (1 votes) · LW · GW

Postulating hard emergence requires a non-local postulate.

That is not obvious.

Comment by tag on Book review: Rethinking Consciousness · 2020-01-17T13:40:51.910Z · score: 1 (1 votes) · LW · GW

Taking (2) to its logical conclusion seems to imply that we live in a deterministic block universe,

That was not implied by (2) as stated, and isn't implied by physics in general. Both the block universe and determinism are open questions (and not equivalent to each other).

One of the chief problems here is that physics, so far as we can tell, is entirely local.

[emph. added]

Nope. What is specifically ruled out by tests of Bell's inequalities is the conjunction of locality and determinism. The one thing we know is that the two things you just asserted are not both true. What we don't know is which is false.

Comment by tag on Red Flags for Rationalization · 2020-01-14T14:32:13.529Z · score: 7 (3 votes) · LW · GW

Becoming defensive and frustrated and retreating to vague language when asked for more specifics.

Subvariety: downvoting without replying.

Comment by tag on Realism about rationality · 2020-01-14T13:58:27.428Z · score: 1 (1 votes) · LW · GW

You can figure out whether an algorithm halts or not without being accidentally stuck in an infinite loop.

In special cases, not in the general case.

Comment by tag on Book review: Rethinking Consciousness · 2020-01-12T18:23:03.407Z · score: 1 (1 votes) · LW · GW

I really just meant the more general point that “consciousness doesn’t exist (if consciousness is defined as X)” is the same statement as “consciousness does not mean X, but rather Y”

If you stipulate that consciousness means Y consciousness, not X consciousness, you haven't proven anything about X consciousness.

If I stipulate that when I say "duck", I mean mallards, I imply nothing about the existential status of muscovys or teals. In order to figure out what is real, you have to look, not juggle definitions.

If you have an infallible way of establishing what really exists, one that in some way bypasses language, and a normative rule that every term must have a real-world referent, then you might be in a position to say what a word really means.

Otherwise, language is just custom.

I’m interested in whether you can say more about how exactly you define consciousness such that illusionism is not consciousness. (As I mentioned, I’m not sure I’ll disagree with your definition!)

Illusionism is not consciousness because it is a theory of consciousness.

Illusionism explicitly does not explain consciousness as typically defined, but instead switches the topic to third person reports of consciousness.


I think that if attention schema theory can explain every thought and feeling I have about consciousness (as in my silly example conversation in the “meta-problem of consciousness” section), then there’s nothing left to explain

Explaining consciousness as part of the hard problem of consciousness is different to explaining-away consciousness (or explaining reports of consciousness) as part of the meta problem of consciousness.


There are two ways of not knowing the correct explanation of something: the way where no one has any idea, and the way where everyone has an idea... but no one knows which explanation is right because they are explaining different things in different ways.

Having an explanation is only useful in the first situation. Otherwise, the whole problem is the difference between "an explanation" and "the explanation".

Comment by tag on Book review: Rethinking Consciousness · 2020-01-12T12:21:44.063Z · score: 0 (3 votes) · LW · GW

“Why do I think reality exists?” is already answerable. You can list a number of reasons why you hold this belief.

There are also reasons for believing in non-illusory forms of free will and consciousness. If that argument is sufficient to establish realism in some cases, it is sufficient in all cases.

You are not supposed to dissolve the new question, only reformulate the original one in a way that it becomes answerable.

Supposed by whom? EY gives some instructions in the imperative voice, but that's not how logic works.

His argument is that if free will is possibly an illusion then it is an illusion. If valid, this argument would also show that consciousness and material reality are definitely illusions.

So it disproves too much.

But there is a valid form of the argument where you argue against the reality of X in addition to arguing for the possible illusory nature of X.

There are no “unique exceptions”, we are algorithms,

That's much more conjectural than most of the claims made here.

Comment by tag on Book review: Rethinking Consciousness · 2020-01-12T11:35:02.267Z · score: 1 (3 votes) · LW · GW

"Illusionist" is in principle a one place predicate like "realist" and "sceptic". You can be a realist about X and a sceptic about Y. In practice, it tends to mean illusionism about qualia.

Comment by tag on Underappreciated points about utility functions (of both sorts) · 2020-01-11T17:50:32.634Z · score: 1 (1 votes) · LW · GW

You are proving that if preferences are well-defined, they also need to be consistent.

What does it feel like from the inside to have badly defined preferences? Presumably it feels like sometimes being unable to make decisions, which you report is the case.

You can't prove that preferences are consistent without first proving they are well defined.

Comment by tag on Book review: Rethinking Consciousness · 2020-01-11T15:36:30.297Z · score: -1 (2 votes) · LW · GW

If you say that free will and consciousness are by definition non-physical, then of course naturalist explanations explain them away.

Object level reply: I don't. Most contemporary philosophers don't. If you see that sort of thing it is almost certainly a straw man.

Meta level reply: And naturally idealists reject any notion of matter except as a bundle of sensation. Just because something is normal and natural does not mean it is normatively correct. It is normal and natural to be tribal, biased and otherwise irrational. Immunity to evidence is a Bad Thing from the point of view of rationality.

But you can also choose to define the terms to encompass what you think is really going on

You can if you really know, but confusing assumptions and knowledge is another Bad Thing. We know that atoms can be split, so redefining the atom as a divisible unit of matter rests on knowledge; redefining a term to match a mere assumption does not.

I’m definitely signed up for compatibilism on free will and have been for many years

Explaining compatibilist free will is automatically explaining away libertarian free will. So what is the case against libertarian free will? It isn't false because of naturalism, since it isn't supernatural by definition -- and because naturalism needs to be defeasible to mean anything. EY dismisses libertarian free will out of hand. That is not knowledge.

but I don’t yet feel 100% comfortable calling Graziano’s ideas “consciousness” (as he does), or if I do call it that, I’m not sure which of my intuitions and associations about “consciousness” are still applicable.

What would it take for it to be false? If the answer is "nothing", then you are looking at suppression of evidence.

Comment by tag on Book review: Rethinking Consciousness · 2020-01-11T13:16:26.975Z · score: 1 (3 votes) · LW · GW

Except that EY is not an illusionist about consciousness! When considering free will, he assumes right off the bat that it can't possibly be real, and has to be explained away instead. But in the generalised anti zombie principle, he goes in the opposite direction, insisting that reports of consciousness are always caused by consciousness. [*]

So there is no unique candidate for being an illusion. Anything can be. Some people think consciousness is all, and matter is an illusion.

Leading to the anti-Aumann principle: two parties will never agree if they are allowed to dismiss each other's evidence out of hand.

[*] Make no mistake, asserting illusionism about consciousness is asserting you yourself are a zombie.

Comment by tag on George's Shortform · 2020-01-11T12:14:27.510Z · score: 2 (4 votes) · LW · GW

And the thing I said that isn't factually correct is...

Comment by tag on George's Shortform · 2020-01-10T13:08:39.471Z · score: -4 (4 votes) · LW · GW

Regarding MIRI/SIAI/Yudkowsky, I think you are considerably overestimating the extent to which the early AI safety movement took any notice of research. Early MIRI obsessed about stuff like AIXI, that AI researchers didn't care about, and based a lot of their nightmare scenarios on "genie" style reasoning derived from fairy tales.

Comment by tag on Dominic Cummings: "we’re hiring data scientists, project managers, policy experts, assorted weirdos" · 2020-01-09T00:41:31.619Z · score: 1 (1 votes) · LW · GW

an entity who feels their rights are violated.

If I feel that I have a right to a swimming pool, does your failure to buy me a swimming pool mean that a right has been violated?

Comment by tag on Dominic Cummings: "we’re hiring data scientists, project managers, policy experts, assorted weirdos" · 2020-01-09T00:26:49.430Z · score: -1 (1 votes) · LW · GW

“For whom” doesn’t matter.

It matters because your ethical/decision theory will give different results depending on whose utilities you are taking into account.

If I take an action, the world that results as a consequence has an entity who feels their rights are violated. When I sum over the utility of that world, that rights violation is a negative term, if I’m the kind of person cares about people’s rights (which I am, but is a separate issue).

It's the heart of the issue. If you don't care about their rights, but they do, then you will violate their rights.

If there is some objective notion of the negative utility that comes from a rights violation, you will violate their rights unless your personal UF happens to be exactly aligned with the objective value.

For “the ends don’t justify the means” to mean something, it implies that there is something of intrinsic negative morality in the actions I take, even if the results are identical

You can't calculate what the ultimate results are. You have to use heuristics. That's why there is a real paradox about the trolley problem. The calculation, which is necessarily local, says that killing the fat man saves lives; the heuristic says "don't kill people".

Comment by tag on Dominic Cummings: "we’re hiring data scientists, project managers, policy experts, assorted weirdos" · 2020-01-08T18:37:44.947Z · score: 1 (1 votes) · LW · GW

Rights violations are among the consequences

Consequences for whom? If I violate your rights, that's not a consequence for me. That's one of the ways in which ethical utilitarianism separates from personal decision theory.

Comment by tag on Dominic Cummings: "we’re hiring data scientists, project managers, policy experts, assorted weirdos" · 2020-01-08T18:33:10.869Z · score: 1 (1 votes) · LW · GW

Utilitarianism uses a version of global utility that is based on summing individual utilities.

If you could show that some notion of rights emerges from summation of individual utility, that would be a remarkable result, effectively resolving the Trolley problem.

OTOH, there is a loose sense in which rules have some kind of distributed utility, but if that is not based on summation of individual utilities, you are talking about something that isn't utilitarianism, as usually defined.

Comment by tag on Dominic Cummings: "we’re hiring data scientists, project managers, policy experts, assorted weirdos" · 2020-01-08T17:59:08.520Z · score: 2 (2 votes) · LW · GW

.. under the right condition..

There's your problem. We don't say that two things are the same if they happen to coincide under exceptional circumstances, we say they are the same if they coincide under every possible circumstance.

Ethical utilitarianism and utility-based decision theory don't coincide when someone is only a little more altruistic than a sociopath. Utilitarianism is notorious for being very demanding, so having a personal UF that coincides with the aggregate used by utilitarianism requires Gandhi-level altruism, and is therefore improbable.

Likewise, decision theory can imply a CC equilibrium, but does not do so in every case.

Comment by tag on Dominic Cummings: "we’re hiring data scientists, project managers, policy experts, assorted weirdos" · 2020-01-07T17:54:25.532Z · score: 1 (1 votes) · LW · GW

It sounds like "decision theoretic utilitarianism" was something invented here.

I think hybrid approaches to ethics have more to offer than purist approaches, and also that it assists communication to label them as such.


Actually, it's worse than that. As Smiffnoy correctly states, maximising your personal utility without regard to anybody else isn't an ethical theory at all, so it continues the confusion to label it as such.

Comment by tag on Dominic Cummings: "we’re hiring data scientists, project managers, policy experts, assorted weirdos" · 2020-01-07T09:30:04.120Z · score: 0 (2 votes) · LW · GW

Maybe utilitarianism is wrong. If means involve rights violations, maybe they are not justified by their consequences.

Comment by tag on [Book Review] The Trouble with Physics · 2020-01-07T09:19:28.348Z · score: 1 (1 votes) · LW · GW

GR and QM are valid each in their own domain

GR and QM give correct predictions in their own domains. They also have different ontological implications, which may or may not be a problem depending on what you expect to get out of physics.

Comment by tag on Dominic Cummings: "we’re hiring data scientists, project managers, policy experts, assorted weirdos" · 2020-01-07T08:18:18.425Z · score: 1 (1 votes) · LW · GW

Isn’t it a commonly held belief here that the ability to achieve goals (rationality) is orthogonal from the content of those goals (morality)?

That would imply that means are always morally neutral, which is not the case.

Comment by tag on The Universe Doesn't Have to Play Nice · 2020-01-06T18:27:46.395Z · score: 12 (3 votes) · LW · GW

Many people believe qualia don’t exist because we wouldn’t be able to learn about them empirically. But it seems spurious to assume nothing exists outside of our lightcone just because we can’t observe it.

"Qualia" is not a synonym for "non physical thingy".

We have subjective evidence for qualia, otherwise the question would never have arisen.

Disagreements over consciousness—if non-materialist qualia existed, then we wouldn’t be able to know about them empirically

We wouldn't be able to know about them using objective, third person empiricism. Whether third person empiricism is the only kind is part of the wider problem.

Comment by tag on Debunking Fallacies in the Theory of AI Motivation · 2020-01-02T14:39:56.741Z · score: 1 (1 votes) · LW · GW

most AI designs see screaming humans as no more important or special than pissing rats.

No AI design that we currently have can even conceive of humans. They're in a don't know state, not a don't care state. They are safe because they are too dumb to be dangerous. Danger is a combination of high intelligence and misalignment.

Or you might be talking about abstract, theoretical AGI and ASI. It is true that most possible ASI designs don't care about humans, but that is not a useful observation, because AI design is not taking a random potshot into design space. AI designers don't want AIs that do random stuff: they are always trying to solve some sort of control or alignment problem in parallel with achieving intelligence. Since danger is a combination of high intelligence and misalignment, dangerous ASI would require efforts at creating intelligence to suddenly outstrip efforts at aligning it. The key word being "suddenly". If progress continues to be incremental, there is not much to worry about.

It might not like this situation and might plot to change it.

Or it might not care.

Comment by tag on Don't Double-Crux With Suicide Rock · 2020-01-02T00:07:37.930Z · score: 4 (2 votes) · LW · GW

there’s only one reality

But there's no agreement about what constitutes evidence of something being real, so even agreement about fact is going to be extremely difficult.

Comment by tag on Don't Double-Crux With Suicide Rock · 2020-01-02T00:02:36.859Z · score: 3 (2 votes) · LW · GW

Yep. "Mutually trusting" would be better than "honest".

Comment by tag on Since figuring out human values is hard, what about, say, monkey values? · 2020-01-01T23:53:45.462Z · score: 1 (1 votes) · LW · GW

Depending on how you define EU maximisation, everything is doing it, nothing is doing it, or any of many points in between.

Comment by TAG on [deleted post] 2019-12-30T13:46:40.439Z

The obvious answer is, they got things[1] basically correct[2]

Which is to say, they had a wide range of opinions, some of which have stood the test of time. Putting it that way, they are not obviously better than modern philosophers.

Comment by tag on Defining "Antimeme" · 2019-12-28T12:51:25.265Z · score: 0 (2 votes) · LW · GW

An antimeme is a meme with the following three characteristics: Learning it threatens the egos and identities of adherents to the mainstream of a culture

Antimemes are often a culture-specific phenomenon

If antimemes are culture-specific, they can be subculture-specific.

In your previous version, it was carefully explained that LISP isn't that great. Yet you are still enthusing about it. Consider the possibility that you are the one who is clinging to beliefs for reasons of ego. Maybe "LISP isn't that great" is an antimeme relative to the LISP subculture.

Comment by TAG on [deleted post] 2019-12-27T13:07:01.518Z

I actually think that 2020 could be the year of the Linux desktop

Linux has had the advantages it has for twenty years now, so why now?

Comment by tag on Is Causality in the Map or the Territory? · 2019-12-21T14:03:31.113Z · score: 1 (1 votes) · LW · GW

Considering what may happen in a similar setup in the future

... Prompted by what did or didn't work in the past.

Comment by tag on Is Causality in the Map or the Territory? · 2019-12-19T08:54:06.916Z · score: 1 (1 votes) · LW · GW

Instead of asking “what would the system do if we did X?” ask “what will the system do in a similar setup

Methodologically, these are identical. The model or equation you are using to explore counterfactual or future states does not know or care whether the input conditions actually occurred. So your actual objection is about the metaphysics. On the assumption that the universe is deterministic, the counterfactual could not have occurred. (The assumption that the past is fixed is not enough.) You feel uncomfortable about dealing with falsehoods, even though it is possible, and sometimes useful.

Comment by tag on Is Causality in the Map or the Territory? · 2019-12-19T08:28:50.791Z · score: 1 (1 votes) · LW · GW

Let's think of a use for it. For instance, if an outcome depends on a decision you made, considering what would have happened if you had made a different decision can help refine your decision-making processes.

Comment by tag on Is Causality in the Map or the Territory? · 2019-12-18T14:38:04.219Z · score: 3 (2 votes) · LW · GW

Underdetermination by the territory, the basic physics, is one thing. But the flipside is that we often want to identify causes in order to solve human-level problems, and that can help us to focus on "the" (in context) cause.

Everything has multiple causes. All fires are caused, among other things, by oxygen, but we rarely consider oxygen to be "the" cause of a fire. When we pick out something as "the" cause, we are usually looking for something that varies and something that we can control. All fires require oxygen, but oxygen is not a variable enough factor compared to dropped matches and inflammable materials.

Context matters as well. A coroner could find that the cause of Mr Smith's death was ingestion of arsenic, while the judge finds that it was Mrs Smith. It would be inappropriate to put the arsenic on trial and punish it, because it is not a moral agent, but it is a causal factor nonetheless.

Although there is definitely a lot to causality that is relevant to human interests and therefore on the map, it should not be concluded that there is not also a form of causality in the territory. That would be a version of the fallacy that says that since probability and counterfactuals can be found in low-resolution maps, they are therefore not in the territory.

Comment by tag on When would an agent do something different as a result of believing the many worlds theory? · 2019-12-17T11:13:49.515Z · score: 1 (1 votes) · LW · GW

Theories, imaginary ideas.

Comment by tag on When would an agent do something different as a result of believing the many worlds theory? · 2019-12-16T18:10:36.789Z · score: 1 (1 votes) · LW · GW

Yes, but compatibilism doesn't suggest that you choose between different actions or between different decision theories.

Comment by tag on Toon Alfrink's sketchpad · 2019-12-16T18:07:28.027Z · score: 1 (1 votes) · LW · GW

What is missed is the way it seems from the inside, as I pointed out originally. I don't have to put my head into an fMRI to know that I am conscious.

Comment by tag on Many Turing Machines · 2019-12-16T13:10:53.433Z · score: 1 (1 votes) · LW · GW

MWI is more than one theory, because everything is more than one thing[*].

If you defined MWI as just the evolution of the SWE (as required by the simplicity theory), then calculating a bunch of non-interacting states is getting it wrong.

If you start with the idea that MWI is a bunch of non-interacting observers observing different things, then the MTM might get it right. The problem is that no-one knows how to get the second kind of MWI out of the maths. That is where things like the basis problem come in.


There is an approach based on coherent superpositions, and a version based on decoherence. These are incompatible opposites.

  1. Worlds are superpositions, so they exist at small scales, they can continue to interact with each other after "splitting", and they can be erased. These coherent superposed states are the kind of "world" we have direct evidence for, although they seem to lack many of the properties required for a fully fledged many-worlds theory, hence the scare quotes. Call these Small Worlds.

  2. Worlds are large, in fact universe-like. They are causally and informationally isolated from each other. This approach is often based on quantum decoherence. Call these Big Worlds.

Comment by tag on When would an agent do something different as a result of believing the many worlds theory? · 2019-12-16T12:58:22.236Z · score: 0 (2 votes) · LW · GW

That amounts to saying that if the conjunction of MWI and utilitarianism is correct, we would or should behave as though it isn't. That is a major departure from typical rationalism (eg the Litany of Tarski).

Comment by tag on When would an agent do something different as a result of believing the many worlds theory? · 2019-12-16T12:54:55.297Z · score: 1 (1 votes) · LW · GW

Everything is possible, but not everything has the same measure (is equally likely). Killing someone in 10% of “worlds” is worse than killing them in 1% of “worlds”.

Apart from the other problem: MWI is deterministic, so you can't alter the percentages by any kind of free will, despite what people keep asserting.

Does “having a 10% probability of killing someone, and actually killing them” make you a worse person that “having a 10% probability of killing someone, but not killing them”?

Actually killing them is certainly worse. We place moral weight on actions as well as character.

Comment by tag on Toon Alfrink's sketchpad · 2019-12-15T11:40:45.596Z · score: 1 (1 votes) · LW · GW

It's not intended to be a complete definition of consciousness, just a nudge away from behaviourism.

From my perspective, once you’ve explained the structure, dynamics, and the behavior, you’ve explained everything.

From my point of view, that's missing the central point quite badly.

Chalmers argues in Chapter 3 of The Conscious Mind that zombies are logically possible

And elsewhere that they are metaphysically impossible.

Comment by tag on When would an agent do something different as a result of believing the many worlds theory? · 2019-12-15T11:22:51.492Z · score: 4 (3 votes) · LW · GW

Things like determinism and many worlds may not affect fine-grained decision making, but they can profoundly impact what decision making, choice, volition, agency and moral responsibility are. It is widely accepted that determinism affects freedom of choice, excluding some notions of free will. It is less often noticed that many worlds affects moral responsibility, because it removes refraining: if there is the slightest possibility that you would kill someone, then there is a world where you killed someone. You can't refrain from doing anything that it is possible for you to do.

Comment by tag on Toon Alfrink's sketchpad · 2019-12-15T10:32:09.440Z · score: 1 (1 votes) · LW · GW

Not to be pedantic, but what else could consciousness possibly be, except for a way of describing the behavior of some object at a high level of abstraction?

It could be something that is primarily apparent to the person that has it.

If consciousness was not a behavior, but instead was some intrinsic property of a system, then you run into the exact same argument that David Chalmers uses to argue that philosophical zombies are conceivable.

That runs together two claims: that consciousness is not behaviour, and that it is independent of physics. You don't have to accept the second claim in order to accept the first.

And it remains the case that Chalmers doesn't think zombies are really possible.

I think it’s still relevant, because any evolutionary reason for consciousness must necessarily show up in observational behavior, or else there is no benefit and we have a mystery.

"Primarily accessible to the person that has it" does not mean "no behavioural consequences".

Comment by tag on Many Turing Machines · 2019-12-12T15:11:09.938Z · score: 1 (1 votes) · LW · GW

The fact that superposed states do interact significantly on the small scale is important, because it's the basis for believing there could be worlds in the first place. The MTM model is completely non-interacting, so it misrepresents the physics.

Comment by tag on Meta-Honesty: Firming Up Honesty Around Its Edge-Cases · 2019-12-12T13:27:42.666Z · score: 1 (1 votes) · LW · GW

say this is a decision theory problem and that the “shooting oneself in the foot” metaphor is choosing the wrong counterfactual. Rather, we should think of the people bastardizing and abusing concepts as the ones shooting us in the feet!

You still end up with a shot foot. People tend to confuse solving problems with apportioning blame -- I call it the "guns don't kill people" fallacy.

Comment by tag on Antimemes · 2019-12-11T22:49:22.176Z · score: 1 (1 votes) · LW · GW

You shouldn't do this in C.

I was being a bit tongue in cheek about the macro thing. You can apply macros, in the sense of a preprocessor expanding text, to anything.

Defmacro isn't unique, but not just because of preprocessors. Less vaunted languages such as Forth can also define fundamental new keywords.

The uniqueness claim, apart from being false, is an unnecessary restriction. There are ways of achieving the things that LISP can achieve that don't have its downsides.

Comment by tag on Antimemes · 2019-12-11T22:40:48.278Z · score: 1 (1 votes) · LW · GW

You're right, I'm rusty.

Comment by tag on Antimemes · 2019-12-11T14:42:58.348Z · score: 3 (2 votes) · LW · GW

For example, Python’s if and not are macros. You can’t write your own ifnot macro to abbreviate if and not because you’re not allowed to write your own macros.

Here's how you do it with the C preprocessor.

#define ifnot(x) if(!(x))

You can apply this trick to any language with a textual representation, since preprocessing is a separate stage.

Comment by tag on Antimemes · 2019-12-11T12:09:21.145Z · score: 1 (1 votes) · LW · GW

So lots of people are taking up LISP, and abandoning it as soon as they hit the dreaded defmacro? But lots of people aren't taking up LISP. There are lots of reasons for not studying old, little-used languages. And it's not as if programmers are low-openness people who hate fundamentally new paradigms -- OO would never have spread if they were. For me, LISP pattern matches to "cult following" rather than "great thing that the sheeple can't understand".