Harmonic Wave Resonance 2020-05-31T04:10:58.143Z · score: 7 (4 votes)
lsusr's Shortform 2020-05-31T03:06:18.382Z · score: 6 (1 votes)
Fractal Harmonic Waves 2020-05-31T01:31:35.842Z · score: 8 (6 votes)
Orthogonality 2020-05-21T03:17:36.365Z · score: 20 (7 votes)
Small Data 2020-05-14T04:29:52.455Z · score: 20 (7 votes)
Re: Some Heroes 2020-05-12T09:30:56.012Z · score: 36 (16 votes)
Plans for COVID-19? 2020-05-08T04:37:22.086Z · score: 8 (4 votes)
I do not like math 2020-04-29T11:27:17.214Z · score: 31 (26 votes)
Forbidden Technology 2020-04-25T07:20:17.829Z · score: 22 (8 votes)
The Inefficient Market Hypothesis 2020-04-24T07:33:08.252Z · score: 53 (28 votes)
The History of Color in Chinese 2020-04-12T04:05:49.885Z · score: 29 (13 votes)
The Past is Non-Deterministic 2020-04-11T05:42:02.512Z · score: 15 (6 votes)
What Surprised Me About Entrepreneurship 2020-04-05T09:18:16.411Z · score: 68 (34 votes)
3 Interview-like Algorithm Questions for Programmers 2020-03-25T09:05:10.875Z · score: 3 (11 votes)
Importing masks from China 2020-03-19T04:43:12.133Z · score: 22 (4 votes)
What I'm doing to fight Coronavirus 2020-03-09T21:11:39.626Z · score: 80 (25 votes)
4 Kinds of Learning 2020-03-08T02:25:30.412Z · score: 10 (7 votes)
How do you do hyperparameter searches in ML? 2020-01-13T03:45:46.837Z · score: 9 (4 votes)
[Personal Experiment] Training YouTube's Algorithm 2020-01-09T09:04:17.459Z · score: 14 (6 votes)
Machine Learning Can't Handle Long-Term Time-Series Data 2020-01-05T03:43:15.981Z · score: 2 (21 votes)
[Book Review] The Trouble with Physics 2020-01-05T01:47:26.368Z · score: 25 (15 votes)
Defining "Antimeme" 2019-12-26T09:35:11.906Z · score: 14 (8 votes)
How to Talk About Antimemes 2019-12-22T11:57:27.828Z · score: 13 (6 votes)
The Arrows of Time 2019-12-21T11:42:16.894Z · score: 5 (3 votes)
[Personal Experiment] One Year without Junk Media 2019-12-14T08:26:05.318Z · score: 39 (15 votes)
Confabulation 2019-12-08T10:18:48.986Z · score: 45 (19 votes)
Connectome-Specific Harmonic Waves 2019-12-05T00:23:53.864Z · score: 20 (11 votes)
Symbiotic Wars 2019-12-04T00:06:08.777Z · score: 23 (11 votes)
Antimemes 2019-11-26T05:58:28.954Z · score: 28 (22 votes)
[Personal Experiment] Counterbalancing Risk-Aversion 2019-11-15T08:34:03.460Z · score: 28 (14 votes)
Indescribable 2019-11-10T13:31:45.298Z · score: 12 (10 votes)
Self-Keeping Secrets 2019-11-10T07:59:15.119Z · score: 43 (21 votes)
The Technique Taboo 2019-10-30T11:22:47.184Z · score: 45 (34 votes)
Prospecting for Conceptual Holes 2019-10-30T08:34:52.769Z · score: 51 (26 votes)
Mediums Overpower Messages 2019-10-20T05:46:19.339Z · score: 37 (15 votes)
Invisible Choices, Made by Default 2019-10-20T02:09:02.992Z · score: 36 (23 votes)
Integrating the Lindy Effect 2019-09-07T17:38:27.348Z · score: 15 (9 votes)
Zeno walks into a bar 2019-08-04T07:00:27.114Z · score: 25 (13 votes)


Comment by lsusr on lc's Shortform · 2020-06-02T21:06:20.378Z · score: 2 (1 votes) · LW · GW

The mandatory sign-up is a major obstacle to new users. I'm not going to create an account on a website until it has already proven value to me.

Comment by lsusr on Pessimism over AGI/ASI causing psychological distress? · 2020-06-02T20:47:52.161Z · score: 13 (3 votes) · LW · GW

What if an AGI arms race leads to war & the Chinese (or Russians) win? Could they assign the AGI the goal of causing suffering as a way to 'punish' westerners (or to follow through with some type of blackmail)?

Suppose we flipped the flags around. The USA is the world leader in AI. The USA has a record of pursuing punitive action against defeated foes. It has a history of torture, genocide, concentration camps and bombing (nuclear + incendiary) civilian populations. Are you worried that the United States might win an AGI arms race and use it to torture Chinese and Russian civilians?

Comment by lsusr on Building brain-inspired AGI is infinitely easier than understanding the brain · 2020-06-02T18:17:44.074Z · score: 2 (1 votes) · LW · GW

The human neocortical algorithm probably wouldn't work very well if it were applied in a brain 100x smaller because, by its very nature, it requires massive amounts of parallel compute to work.

Human beings have larger brains than most animal species on Earth. It seems to me that if large brains weren't very important to evolving language and composite tool use then insects would already have these abilities.

Comment by lsusr on Harmonic Wave Resonance · 2020-06-01T21:46:27.669Z · score: 2 (1 votes) · LW · GW

By tuning the parameters of the function that defines success. In your words, "[b]y changing the function alone".

Comment by lsusr on GPT-3: a disappointing paper · 2020-06-01T16:42:43.300Z · score: 2 (1 votes) · LW · GW

It's hard to go much lower than 10% uncertainty on anything like this without specialized domain knowledge. I'm in a different position. I'm CTO of an AI startup I founded so I get a little bit of an advantage from our private technologies.

If I had to restrict myself to public knowledge then I'd look for a good predictive processing algorithm and then plug it into the harmonic wave theory of neuroscience. Admittedly, this stretches the meaning of "existing architectures".

Comment by lsusr on Mediums Overpower Messages · 2020-05-31T19:23:49.850Z · score: 2 (1 votes) · LW · GW

It does that too.

Comment by lsusr on The Darwin Game · 2020-05-31T18:39:03.073Z · score: 2 (1 votes) · LW · GW

This is one of the first series I read on Less Wrong and I enjoyed it very much. I found it a great introduction to this forum.

Comment by lsusr on GPT-3: a disappointing paper · 2020-05-31T18:14:16.482Z · score: 2 (1 votes) · LW · GW

What existing architectures would you bet on, if you had to?

Comment by lsusr on Should we stop using the term 'Rationalist'? · 2020-05-31T10:25:02.745Z · score: 3 (2 votes) · LW · GW

I didn't notice the faux-posh style until you pointed it out. Thank you for bringing that to my attention.

Comment by lsusr on GPT-3: a disappointing paper · 2020-05-31T09:47:27.659Z · score: 14 (6 votes) · LW · GW

Your previous posts have been well-received, with averages of > karma per vote. This post GPT-3: a disappointing paper has received < karma per vote, due to a large quantity of downvotes. Your post is well-written, well-reasoned and civil. I don't think GPT-3: a disappointing paper deserves downvotes.

I wouldn't have noticed the downvotes if my own most heavily downvoted post hadn't also cruxed around bottom-up few-shot learning.

I hope the feedback you received on this article doesn't discourage you from continuing to write about the limits of today's AI. Merely scaling up existing architectures won't get us from AI to AGI. The world needs people on the lookout for the next paradigm shift.

Comment by lsusr on Connectome-Specific Harmonic Waves · 2020-05-31T05:45:45.193Z · score: 3 (2 votes) · LW · GW

I very much did enjoy this! Thank you for the link.

There are lots of good ideas in your article. I think I actually…uh…did read your article, many months ago, and then forgot that I did so and that your articles…er…um…contributed significantly to inspiring this series.

You might like my new article on resonance. It builds off your three paragraphs about guitar strings. I swear I wrote it before re-reading your statement "Resonance in chaotic systems is inherently unstable". If I did read that particular line several months ago I definitely forgot about it.

[C]an we use it to build something cool?

I'm trying to build an AGI. These insights have been very helpful so far.

Comment by lsusr on Connectome-Specific Harmonic Waves · 2020-05-31T03:29:31.820Z · score: 3 (2 votes) · LW · GW

Thank you for putting my comment on your blog. It's very flattering.

I would now add this roughly implies a continuum of CSHWs, with scale-free functional roles:

One of the most important implications of CSHWs is what you call their "scale-free functional roles" and what I call their fractal "scale invariance". Terms like RSHW, CSHW and SSHW are just markers for arbitrary scales, like "gamma rays" and "infra-red" on the electromagnetic spectrum. I just finished an article attaching equations to this idea.

Comment by lsusr on lsusr's Shortform · 2020-05-31T03:06:18.753Z · score: 6 (3 votes) · LW · GW

[Book Review] Surfing Uncertainty

Surfing Uncertainty is about predictive coding, the theory in neuroscience that each part of your brain attempts to predict its own inputs. Predictive coding has lots of potential consequences. It could resolve the problem of top-down vs bottom-up processing. It cleanly unifies lots of ideas in psychology. It even has implications for the continuum with autism on one end and schizophrenia on the other.

The most promising thing about predictive coding is how it could provide a mathematical formulation for how the human brain works. Mathematical formulations are great because they let you do things like falsify hypotheses and simulate things on computers. But while Surfing Uncertainty goes into many of the potential implications of predictive coding, the author never hammers out exactly what "prediction error" means in quantifiable material terms on the neuronal level.

This book is a reiteration of the scientific consensus[1]. Judging by the total absence of mathematical equations on the Wikipedia page for predictive coding, I suspect the book never defines "prediction error" in mathematically precise terms because no such definition exists. There is no scientific consensus.

Perhaps I was disappointed with this book because my expectations were too high. If we could write equations for how the human brain performs predictive processing then we would be significantly closer to building an AGI than where we are right now.

  1. The book contains 47 pages of scientific citations. ↩︎
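
For what it's worth, the closest thing to a quantitative definition of "prediction error" that I know of comes from the Rao–Ballard line of work, where the error is simply the residual between a layer's top-down prediction and its bottom-up input. The toy loop below is entirely my own illustration of that candidate formalization, not a definition the book commits to:

```python
# Toy predictive-coding loop: a higher layer predicts the input of a
# lower layer, and the "prediction error" is simply the residual.
# This is one candidate formalization (in the spirit of Rao & Ballard 1999),
# not the consensus definition the book review says is missing.

inputs = [0.9, 1.1, 1.0, 0.95, 1.05]  # stream of bottom-up signals
prediction = 0.0
learning_rate = 0.5

errors = []
for x in inputs:
    error = x - prediction               # prediction error: input minus prediction
    prediction += learning_rate * error  # update the model to reduce future error
    errors.append(abs(error))

print(errors)  # the error shrinks as the predictions improve
```

The unresolved question the review raises is whether this residual (or anything like it) maps onto a measurable quantity at the neuronal level.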

Comment by lsusr on Updated Hierarchy of Disagreement · 2020-05-28T19:16:38.654Z · score: 3 (2 votes) · LW · GW

I like how the top level is floating above the rest of the pyramid. This version of the hierarchy could be symbolized with the Eye of Providence.

Comment by lsusr on Small Data · 2020-05-25T18:06:33.713Z · score: 3 (2 votes) · LW · GW

NN also do not reduce/idealize/simplify, explicitly generalize and then run the results as hypothesis forks. Or use priors to run checks (BS rejection / specificity). We do.

This is very important. I plan to follow up with another post about the necessary role of hypothesis amplification in AGI.

Edit: Done.

Comment by lsusr on Small Data · 2020-05-25T18:03:58.280Z · score: 2 (1 votes) · LW · GW

NNs are a big data approach, tuned by gradient descent. Because NNs are a big data approach, every update is necessarily small (in the mathematical sense of first-order approximations). When updates are small like this, averaging is fine. Especially considering how most neural networks use sigmoid activation functions.

While this averaging approach can't solve small data problems, it is perfectly suited to today's NN applications, where things tend to be well-contained and without fat tails. This approach works fine within the traditional problem domain of neural networks.
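
A minimal sketch of why averaging is benign here (the one-parameter model and the data are my own toy example, not from the original comment): for a differentiable loss, the mean of the per-example gradients equals the gradient of the mean loss, so a small first-order step computed from averaged gradients is the same step you would get from the averaged loss.

```python
# Toy example: one-parameter model y = w * x with squared loss.
# The gradient of the mean loss equals the mean of the per-example
# gradients, which is why averaging updates is safe when steps are small.

data = [(1.0, 2.0), (2.0, 3.9), (3.0, 6.1)]  # (x, y) pairs
w = 0.0

def grad_single(w, x, y):
    """Gradient of (w*x - y)^2 with respect to w."""
    return 2.0 * (w * x - y) * x

mean_of_grads = sum(grad_single(w, x, y) for x, y in data) / len(data)

def mean_loss_grad(w):
    """Gradient of the mean loss, computed directly."""
    return sum(2.0 * (w * x - y) * x for x, y in data) / len(data)

print(mean_of_grads, mean_loss_grad(w))  # identical, by linearity of the derivative
```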

Comment by lsusr on What are objects that have made your life better? · 2020-05-21T21:39:49.870Z · score: 2 (1 votes) · LW · GW

Does the pen require custom ink cartridges or does it accept standard ones?

Comment by lsusr on What are objects that have made your life better? · 2020-05-21T21:38:25.272Z · score: 2 (5 votes) · LW · GW

Uninstallable product placement sparks hate whenever I see it. Custom ROMs for my phone (LineageOS and Resurrection Remix) dramatically improve my life over commercial phone operating systems because custom ROMs avoid contaminating my attention with things I don't want.

  1. There are no uninstallable applications serving as advertisements.
  2. They do not try to sell my attention such as via Google News.

In addition, it is easy to get root access to these phones. That makes it easy to integrate them with my Linux laptop.

Comment by lsusr on Is there a way to replicate photosynthesis artificially? · 2020-05-20T22:27:22.388Z · score: 7 (3 votes) · LW · GW

Chlorophyll enters an excited state when the right frequency of light interacts with it. Photovoltaics operate on a similar principle, though it would be a stretch to say they were designed "in the image" of chlorophyll.

Photovoltaics harness sunlight more cheaply and efficiently than manufactured chlorophyll. Once the sunlight is harnessed, you can do whatever you want with it such as sequestering carbon dioxide.

The only advantage chlorophyll-based sunlight harvesting has over photovoltaics is that plants are easy to grow. Manufacturing chlorophyll in a laboratory would combine the inefficiency of plants with the cost of inorganic manufacturing. It's the worst of both worlds.

On the other hand, chlorophyll is used as a method of carbon sequestration. This is called "growing forests".

Comment by lsusr on Craving, suffering, and predictive processing (three characteristics series) · 2020-05-16T06:12:44.597Z · score: -5 (3 votes) · LW · GW

Once we have deemed that wanting to pursue pleasure and happiness are wireheading-like impulses, why stop ourselves from saying that wanting to impact the world is a wireheading-like impulse?

Is there a way you can rephrase this question without using the word "wirehead"? When discussing meditation, the word "wirehead" can have two very different meanings. Usually, "wirehead" refers to the gross failure mode of heavy meditation where a practitioner anesthetizes him/herself into a potato. Kaj_Sotala has used the word "wirehead" to refer to a specific, subtle consequence of taṇhā.

Why isn't a desire to avoid death craving?

A desire to avoid death is craving. (In fact, death is one of the Four Sights.) The actions of postponing death are not craving. Only the desire to avoid death is.

You clearly speak as if going to a dentist when you have a tooth ache is the right thing to do, but why?

Because you have a toothache and your teeth will rot if you don't go to a dentist.

Once you distance your 'self' from pain, why not distance yourself from your rotting teeth?

Penetrating taṇhā is the opposite of distancing. It's about accepting the world right now as it is. If your teeth are rotting right this instant then you should accept that your teeth are rotting right this instant. Such is the Litany of Tarski.

If my teeth are rotting,

then I desire to believe my teeth are rotting;

If my teeth are not rotting,

then I desire to believe my teeth are not rotting;

Let me not become attached to beliefs I may not want.

The thing you distance yourself from isn't the pain, it's your self. Kaj_Sotala's post is about taṇhā, one of the Three Characteristics of Existence. Another Characteristic of Existence is anattā or non-self. I hope this becomes clearer once Kaj_Sotala gets to anattā in this series.

All my intuitions about how to act are based on this flawed sense of self. And from what you are outlining, I don't see how any intuition about the right way to act can possibly remain once we lose this flawed sense of self.

It is possible to do something without craving it. For example, consider relaxing on a tropical beach and reaching over to drink a mango smoothie. Now, consider the instant you are mid-sip, sucking through the straw while the flavor washes over your mouth. In that instant, you act without craving.

The same goes for when you are engrossed in fun conversation with close friends and family.

There's a general discomfort I have with this series of posts that I'm not able to fully articulate, but the above questions seem related.


Comment by lsusr on Small Data · 2020-05-15T04:04:45.655Z · score: 2 (1 votes) · LW · GW


Comment by lsusr on Small Data · 2020-05-14T19:10:24.194Z · score: 4 (2 votes) · LW · GW

I think in modern machine learning people are experimenting with various inductive biases and various ad-hoc fixes or techniques which help correct for all kinds of biases.

The conclusion of my post is that these fixes and techniques are ad-hoc because they are written by the programmer, not by the ML system itself. In other words, the creation of ad-hoc fixes and techniques is not automated.

Comment by lsusr on Small Data · 2020-05-14T18:48:10.422Z · score: 2 (1 votes) · LW · GW

In your example with a non-converging sequence, I think you have a typo - there should be rather than

Fixed. Thank you for the correction.

Comment by lsusr on Machine Learning Can't Handle Long-Term Time-Series Data · 2020-05-14T17:16:23.045Z · score: 2 (1 votes) · LW · GW

Which one of their songs has a repeated chorus? I could not identify one in the Elvis Presley rock song or the Katy Perry pop song.

Comment by lsusr on A game designed to beat AI? · 2020-05-14T04:19:12.017Z · score: 2 (1 votes) · LW · GW

In addition to your ideas, I would add long tails and chaotic systems. It's hard to train an AI on 1,000,000 datapoints when the value function of the 1,000,001st datapoint could outweigh all the previous cumulative results.

To generalize this mathematically, a board game ought to have a value function that never converges no matter how many times you play the game.
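
One concrete way to get such a never-converging value function, assuming we model each playthrough's payoff as a draw from a fat-tailed distribution (the Cauchy distribution here is my illustration, not part of the original comment), is to pick a distribution whose sample mean does not obey the law of large numbers:

```python
import math
import random

random.seed(0)

def cauchy_payoff():
    """Draw a payoff from a standard Cauchy distribution (fat tails, undefined mean)."""
    return math.tan(math.pi * (random.random() - 0.5))

# The running average of Cauchy draws never settles down, no matter how
# many games are played: a single extreme game can outweigh all the
# previous cumulative results.
total = 0.0
averages = []
for n in range(1, 100_001):
    total += cauchy_payoff()
    if n % 20_000 == 0:
        averages.append(total / n)

print(averages)  # the running averages keep jumping around instead of converging
```

A game whose payoffs behaved like this would deny a learner any stable estimate of a move's value.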

Comment by lsusr on A game designed to beat AI? · 2020-05-14T03:47:35.292Z · score: 2 (1 votes) · LW · GW

Machine learning is bad at situations where it is provided with limited training data. Therefore I would design a game with frequently-changing rules. In particular, I would create an expansion set for Betrayal at House on the Hill.

Betrayal at House on the Hill is about exploring a haunted house. The gimmick is you do not know the rules of the game before you begin playing. There are many different rules the haunted house might obey.

Humans could beat machines for a long time if the following two errata were applied:

  1. An intelligence may not have access to information ahead of time about the expansion packs' rules. (It is fair play for the AI's programmers to have access to the base ruleset but not the special iteration-specific rulesets.)
  2. An intelligence only gets points for winning the first time it encounters a particular ruleset.

An AI would need to read the specialized rules on-the-spot and then understand the semantics well enough to devise a strategy. Then the computer would have to execute this strategy correctly on its first try. No software in existence today can do anything like this.

Not only could humans crush machines at this board game, today's best machine learning software cannot even play this game (follow the rules) without its programmers' reading the complete rulebook ahead of time, which is cheating.

As for goal #2, Betrayal at House on the Hill is my favorite board game.

Comment by lsusr on A game designed to beat AI? · 2020-05-14T03:30:59.428Z · score: 2 (1 votes) · LW · GW

You could achieve a similarly lopsided result faster with the in-person Turing test, albeit at the expense of goal #2.


Each player guesses whether the opponent is human or machine. Each player gets one point for a correct guess and one point for convincing the opponent that he/she/it is a human.

To beat humans at this, we would need the following developments:

  1. A robotic body that can perform the same functions as human biology.
  2. A robotic body aesthetically indistinguishable from a human body.
  3. An artificial mind that can convincingly simulate human behavior.

Comment by lsusr on SlateStarCodex 2020 Predictions: Buy, Sell, Hold · 2020-05-01T19:21:05.431Z · score: 1 (1 votes) · LW · GW

Good luck! Not gonna jinx it.

That's nice of you.

Comment by lsusr on I do not like math · 2020-04-29T18:44:39.832Z · score: 5 (3 votes) · LW · GW

To me, mathematics is like lifting weights. I don't like lifting weights. I do it anyway, halfheartedly and with poor form, because I need to move furniture sometimes.

I am extrinsically motivated to do math. So why didn't I major in applied mathematics? I worried "applied mathematics" would be watered down. Besides, I want my professors to be rigorous. Reading and writing proofs of known facts is a higher level of rigor than matters to me most of the time. On the other hand, writing proofs for new theorems is lots of fun.

Math is a logic of words founded on absolute truth. I believe in neither words nor absolute truth. I can play by the rules of logic, but deep down I think probabilistically.

Comment by lsusr on Prospecting for Conceptual Holes · 2020-04-29T09:36:39.058Z · score: 2 (2 votes) · LW · GW

English is the most important, most useful language to know. It is the language of business, science and technology. It is the closest thing humanity has ever had to a universal language.

Chinese is the second most important language in the world to know. It is, in a different sense, the closest thing humanity has ever had to a universal written language. Here are some ideas you might get from learning Chinese.

  • Your ideas of "center of the world", "civilization" and "humanity" shift from Europe and the Americas to East Asia. This is not the same as a global internationalist perspective. China becomes the hive of humanity everything else revolves around, like how the Earth is the center of the universe.
  • Non-phonetic writing systems make sense. You realize how little living literary tradition survives in languages written using alphabets because texts become unreadable in mere centuries as the spoken language evolves. You develop an appreciation for calligraphy. You can read more Japanese than most 外人.
  • You develop an intuitive feel for the clan system, a semi-artificial method of extending family connections.
  • You realize Western castles were tiny and crude.
  • You think in longer historical time horizons.
  • You take it for granted that the natural state of things (historically-speaking) is for China to be the richest and most powerful nation on Earth.
  • You discover lots of business opportunities.

What am I missing by not speaking Portuguese?

Comment by lsusr on The Inefficient Market Hypothesis · 2020-04-26T18:36:30.996Z · score: 2 (2 votes) · LW · GW

Don't worry about people stealing your ideas. If your ideas are any good, you'll have to ram them down people's throats.

―Howard Aiken, A Computer Science Reader : Selections from Abacus by Eric A. Weiss, (p. 404), 1988.

Actually, startup ideas are not million dollar ideas, and here's an experiment you can try to prove it: just try to sell one. Nothing evolves faster than markets. The fact that there's no market for startup ideas suggests there's no demand. Which means, in the narrow sense of the word, that startup ideas are worthless.

―Paul Graham, Ideas for Startups

Comment by lsusr on The Inefficient Market Hypothesis · 2020-04-26T18:16:44.816Z · score: 1 (1 votes) · LW · GW

Upon looking into this it appears he may only have patented one illusion himself, patent number 9358477. Others, like his famous flying illusion 5354238 appear to have been invented by other people. You can search US patents here.

Comment by lsusr on Prospecting for Conceptual Holes · 2020-04-26T17:58:50.350Z · score: 2 (2 votes) · LW · GW

No. Only Chinese.

Comment by lsusr on Forbidden Technology · 2020-04-25T23:57:32.076Z · score: 1 (1 votes) · LW · GW

What startup do you work on?

Comment by lsusr on 3 Interview-like Algorithm Questions for Programmers · 2020-04-24T21:18:31.364Z · score: 1 (1 votes) · LW · GW

No limit.

Comment by lsusr on The Inefficient Market Hypothesis · 2020-04-24T10:15:19.481Z · score: 2 (2 votes) · LW · GW

I'm using "alpha" as the term is used in quantitative finance. This includes non-zero-sum games. For example, if you were the first person to diversify across continents then you could lower your own risk profile (and therefore increase risk-adjusted returns) without increasing risk for anyone else.

Distributing the value extracted from a gold mine is zero-sum. Prospecting for gold is non-zero sum.
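
A back-of-the-envelope illustration of the diversification point (the return numbers are my own assumptions, not from the original comment): splitting capital across two independent assets with the same expected return roughly halves the variance, improving risk-adjusted returns without making anyone else worse off.

```python
import random

random.seed(42)

def portfolio_variance(weights, n_trials=100_000):
    """Sample variance of portfolio returns for two independent assets,
    each with mean return 5% and standard deviation 10%."""
    returns = []
    for _ in range(n_trials):
        a = random.gauss(0.05, 0.10)
        b = random.gauss(0.05, 0.10)
        returns.append(weights[0] * a + weights[1] * b)
    mean = sum(returns) / n_trials
    return sum((r - mean) ** 2 for r in returns) / n_trials

concentrated = portfolio_variance([1.0, 0.0])  # everything in one asset
diversified = portfolio_variance([0.5, 0.5])   # split across both

print(concentrated, diversified)  # the diversified variance is roughly half
```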

Comment by lsusr on What Surprised Me About Entrepreneurship · 2020-04-22T08:31:05.640Z · score: 6 (2 votes) · LW · GW

I liked Chapters 1 and 2 of On Lisp. After that, I felt like it degenerated into a design patterns book. The design patterns Paul Graham needed 27 years ago aren't the design patterns I need right now. I prefer Practical Common Lisp as a textbook. Ironically, Practical Common Lisp is extremely impractical in 2020, but I feel it demonstrates high-level Lisp programming better through its use of extremely dense code.

I've never read Let Over Lambda. Judging by the table of contents, it looks like an exceptionally good book on how to write a macro but—once again—not when to write a macro.

Instead of diving back into your Lisp textbooks, I recommend this advice from Paul Graham's Rarely-Asked Questions:

How can I become really good at Lisp programming?

Write an application big enough that you can make the lower levels into a language layer. Embedded languages (or as they now seem to be called, DSLs) are the essence of Lisp hacking.

Comment by lsusr on Fractal Harmonic Waves · 2020-04-20T07:12:19.175Z · score: 1 (1 votes) · LW · GW

Whoops! Thank you. That should be "information content", not "data-rate". The original post was also missing an important section on attenuation. I've corrected both errors.

Comment by lsusr on College advice for people who are exactly like me · 2020-04-13T05:29:14.982Z · score: 4 (3 votes) · LW · GW

This is great advice for some people and good advice to reverse for many others. I like the related posts on your blog too. They all work well together.

The unrelated post on startup options is especially cool too.

Comment by lsusr on The Past is Non-Deterministic · 2020-04-11T19:58:03.036Z · score: 0 (2 votes) · LW · GW

I suspect that entropy is more fundamental than time. This is my second post related to Loschmidt's paradox. The first one is here.

Comment by lsusr on The Past is Non-Deterministic · 2020-04-11T19:48:21.965Z · score: 1 (1 votes) · LW · GW

That's more like the Born-rule-as-interpreted-by-the-Copenhagen-interpretation.


What do you mean by "non-deterministic"? The standard (single universe indeterministic) view is that past events occurred with a probability less than 1, i.e. they did not occur inevitably or necessarily. That is coupled with the idea that there is only one past state, and it can be assigned a probability of 1 for the purpose of calculating the probability of subsequent events.

Yes. This is what I mean by "non-deterministic".

...the collapse postulate is not time-symmetrical.

I think the time-symmetry of the collapse postulate is the crux of our disagreement. In Chapter 4 of Principles of Quantum Mechanics, Second Edition by R. Shankar, the collapse postulate is stated as follows.

III. If the particle is in a state $|\psi\rangle$, measurement of the variable (corresponding to) $\Omega$ will yield one of the eigenvalues $\omega$ with the probability $P(\omega) \propto |\langle\omega|\psi\rangle|^2$. The state of the system will change from $|\psi\rangle$ to $|\omega\rangle$ as a result of the measurement.

According to Chapter 11.5 Time Reversal Symmetry, time-reversal is performed by complex conjugation, $|\psi\rangle \to |\psi^*\rangle$. What happens if we plug this into postulate III?

If we can show that $|\langle\omega^*|\psi^*\rangle|^2 = |\langle\omega|\psi\rangle|^2$ then the Born rule is time-symmetric.
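
As a quick sanity check (assuming time reversal acts on states by complex conjugation, per the Shankar chapter cited above), conjugating both states leaves the Born probability unchanged, because conjugation flips the phase of the inner product but not its modulus:

```python
import random

random.seed(1)

def rand_state(n):
    """A random normalized complex vector, standing in for a quantum state."""
    v = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]
    norm = sum(abs(z) ** 2 for z in v) ** 0.5
    return [z / norm for z in v]

def inner(bra, ket):
    """<bra|ket> = sum_i conj(bra_i) * ket_i."""
    return sum(b.conjugate() * k for b, k in zip(bra, ket))

psi = rand_state(4)
omega = rand_state(4)

p_forward = abs(inner(omega, psi)) ** 2
p_reversed = abs(inner([z.conjugate() for z in omega],
                       [z.conjugate() for z in psi])) ** 2

print(p_forward, p_reversed)  # equal: |<w*|psi*>| = |conj(<w|psi>)| = |<w|psi>|
```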

Comment by lsusr on The Value Is In the Tails · 2020-04-10T18:22:48.385Z · score: 1 (1 votes) · LW · GW

This reminds me of a related phrase used in strategy. "Chaos favors the underdog."

Comment by lsusr on How to evaluate (50%) predictions · 2020-04-10T18:21:31.969Z · score: 3 (2 votes) · LW · GW

Was this inspired by Scott Alexander's 2019 Predictions: Calibration Results?

Comment by lsusr on What Surprised Me About Entrepreneurship · 2020-04-10T02:27:50.407Z · score: 3 (2 votes) · LW · GW

This is encouraging to hear. When I talk about this stuff to ML engineers, some instantly get it, especially when they come from a functional programming background. Others don't and it feels like there's a wall between me and them.

I think I can replicate a lot of this in Python, even if it's a little clunky. It's just easier to start in Hy and then write a wrapper to port it to Python.

Comment by lsusr on What Surprised Me About Entrepreneurship · 2020-04-09T22:20:38.093Z · score: 4 (3 votes) · LW · GW

Here's an example of something difficult to do in Python. lazy, stateless and minimize are custom macros.

    (stateless x float)
    (stateless y (* x x))
    (minimize y) ; nothing has been calculated yet
    (print y) ; 0.0 ― this is where the first calculation occurs
    (print y) ; 0.0 ― the second evaluation of y just reads from the cache
    (print x) ; 0.0 ― this is read from the cache too

The stateless macro caches results locally, backs up everything to a remote server in a background process and reads from the remote cache whenever possible.
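
A rough Python approximation of the local-caching part, for readers who want the flavor without Hy (the `Lazy` class and its names are my own invention; the remote backup and Bayesian-optimization machinery described above are omitted):

```python
class Lazy:
    """A thunk that computes its value once, then serves it from a cache.

    This only sketches the local-caching behavior of the stateless macro;
    the remote backup in a background process is omitted.
    """
    def __init__(self, compute):
        self._compute = compute
        self._cached = False
        self._value = None

    def get(self):
        if not self._cached:
            self._value = self._compute()  # first read triggers the calculation
            self._cached = True
        return self._value

x = Lazy(lambda: 0.0)
y = Lazy(lambda: x.get() * x.get())  # nothing has been calculated yet

print(y.get())  # 0.0 ― this is where the first calculation occurs
print(y.get())  # 0.0 ― the second evaluation of y just reads from the cache
print(x.get())  # 0.0 ― this is read from the cache too
```

The macro version is more ergonomic because `x` and `y` read like ordinary variables instead of requiring explicit `.get()` calls, which is part of the argument for Lisp here.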

And I would like to know where you learned that sort of meta-programming.

Any decent Lisp book will cover how to write a macro. The real challenge is knowing what to write, not how to write it.

I know of no good books on this subject. In my experience, you have to understand what it's like to use many different software paradigms and how they are implemented. Then you can just steal their most relevant features as you need them. This particular system took inspiration from Haskell, R and applied mathematics. Under the hood, it makes heavy use of syntax trees, hash-based lookups, lazy evaluation and Bayesian optimization.

How to practice meta-programming commercially is an even harder question. Most companies don't use a meta-enough language like Lisp and those which do may not need meta-software at all. The only place I can think of where this has net positive commercial value would be a tiny startup working on a very hard problem. Small data comes to mind, but not much else.

Comment by lsusr on What Surprised Me About Entrepreneurship · 2020-04-06T07:53:50.085Z · score: 7 (5 votes) · LW · GW

The global dog population is estimated at 900 million. There are two species of wild wolves: red wolves and grey wolves. Red wolves are critically endangered.

It's hard to find exact numbers on grey wolf populations in 2020. According to Wikipedia, grey wolf populations were estimated to be 300 thousand in 2003.

Comment by lsusr on Which books provide a good overview of modern human prehistory? · 2020-04-03T20:24:25.629Z · score: 4 (3 votes) · LW · GW

I too was disappointed by the lack of rigor in Yuval Noah Harari's Sapiens. I felt Paleontology: A Brief History of Life by Ian Tattersall covered the same material in fewer pages to a higher level of rigor. However, Paleontology might not be the book you're looking for because it covers pre-human natural history too.

I get a lot of quality information from modern ethnographies of hunter-gatherers such as Nisa: The Life and Words of a !Kung Woman by Marjorie Shostak and The World until Yesterday: What Can We Learn from Traditional Societies? by Jared Diamond.

Comment by lsusr on 3 Interview-like Algorithm Questions for Programmers · 2020-03-28T03:21:21.856Z · score: 1 (1 votes) · LW · GW

Yes. They move independently of which character it is.

Comment by lsusr on 3 Interview-like Algorithm Questions for Programmers · 2020-03-26T03:16:03.566Z · score: 5 (3 votes) · LW · GW

This question isn't about integers "given no assumptions". It's about the int primitive in Java.

Comment by lsusr on 3 Interview-like Algorithm Questions for Programmers · 2020-03-26T02:15:40.487Z · score: 1 (1 votes) · LW · GW

If you happen to know the answer already then the question is ruined. In this way, every algorithm puzzle in the world can be ruined by trivia. For an algorithm question to be interesting, I hope the reader doesn't already know the answer and has to figure it out her/himself. So in questions #1 and #3, I'm hoping the reader doesn't know the relevant trivia and will instead independently derive the answers without looking them up.

I'm not trying to ask "Do you know how Poland cracked the Enigma?" I'm trying to ask "Can you figure out how Poland cracked the Enigma?"

I don't grade these questions. These questions are for fun and self-improvement. Though I could imagine a timed written test with dozens of questions like this where the testee gets one point for each correct answer and loses one point for each incorrect answer. A sufficiently large number of questions might help counteract the individual variance.