Comments

Comment by eternal_neophyte on "The Bitter Lesson", an article about compute vs human knowledge in AI · 2019-06-25T08:56:23.594Z · LW · GW
it's not a very obvious example

I honestly regret that I didn't make it as clear as I possibly could the first time around, but expressing original, partially developed ideas is not the same thing as reciting facts about well-understood concepts that have been explained and re-explained many times. Flippancy is needlessly hostile.

there are some problems to which search is inapplicable, owing to the lack of a well-defined search space

If not wholly inapplicable, then not performant, yes. Though the problem isn't that the search-space is not defined at all, but that the definitions which are easiest to give are also the least helpful ( to return to the previous example, in the Platonic realm there exists a brainf*ck program that implements an optimal map from symptoms to diagnoses - good luck finding it ). As the original author points out, there's a tradeoff between knowledge and the need for brute force. It may be that you can have an agent synthesize knowledge by consolidating the results of a brute-force search into a formal representation which it can then use to tune or reformulate the search-space previously given to fit some particular purpose; but this is quite a level of sophistication above pure brute force.

Edit:

this is not an issue with search-based optimization techniques; it's simply a consequence of the fact that you're dealing with an ill-posed problem

If the problems of literature or philosophy were not in some sense "ill-posed", they would also be dead subjects. The "general" part in AGI would seem to imply some capacity for dealing with vague, partially defined ideas in useful ways.

Comment by eternal_neophyte on "The Bitter Lesson", an article about compute vs human knowledge in AI · 2019-06-24T21:11:02.753Z · LW · GW
for more abstract domains, it's harder to define a criterion (or set of criteria) that we want our optimizer to satisfy

Yes.

But there's a significant difference between choosing an objective function and "defining your search space" (whatever that means), and the latter concept doesn't have much use as far as I can see.

If you don't know what it means, how do you know that it's significantly different from choosing an "objective function", and why do you feel comfortable making a judgment about whether or not the concept is useful?

In any case, to define a search space is to provide a spanning set of production rules which allow you to derive all elements in the target set. For example, Peano arithmetic provides a spanning set of rules for arithmetic computations, and hence defines ( in one particular way ) the set of computations a search algorithm can search through in order to find arithmetic derivations satisfying whatever property you're interested in. Similarly, the rules of chess define the search-space for valid board-state sequences in games of chess. For neural networks, it could mean defining a set of topologies, or a set of composition rules for layering networks together; and in a looser sense a loss function induces a "search space" on network weights, insofar as it practically excludes certain regions of the error surface from the region of space any training run is ever likely to explore.
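To make that concrete, here's a minimal sketch ( the grammar and all the names are my own invention, purely illustrative ): a search space given as production rules, plus a brute-force search through the derivations they span.

```python
from collections import deque

# A toy spanning set of production rules: every element of the search
# space ( arithmetic expressions over 1, 2, 3 ) is derivable from these.
RULES = {
    "E": [("E", "+", "E"), ("E", "*", "E"), ("N",)],
    "N": [("1",), ("2",), ("3",)],
}

def expand(seq):
    """Rewrite the first nonterminal in seq in every possible way."""
    for i, sym in enumerate(seq):
        if sym in RULES:
            for rule in RULES[sym]:
                yield seq[:i] + rule + seq[i + 1:]
            return  # only the first nonterminal gets rewritten

def search(goal, max_nodes=100_000):
    """Brute-force breadth-first search through the derivations."""
    frontier = deque([("E",)])
    for _ in range(max_nodes):
        if not frontier:
            return None
        seq = frontier.popleft()
        children = list(expand(seq))
        if not children:  # fully derived: no nonterminals remain
            if eval("".join(seq)) == goal:  # eval used purely for brevity
                return "".join(seq)
        else:
            frontier.extend(children)

print(search(7))  # finds e.g. "1+2*3"
```

Swap the rules and you swap the search space; the brute-force procedure itself never changes - which is the sense in which the knowledge lives in the rules rather than in the search.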

Comment by eternal_neophyte on "The Bitter Lesson", an article about compute vs human knowledge in AI · 2019-06-24T20:17:02.341Z · LW · GW

So is brainf*ck, and like NNs, brainf*ck programs are simple in the sense of being trivial to enumerate and hence search through. Defining a search space for a complex domain is equivalent to defining a subspace of brainf*ck programs or NNs, which could have, and probably does have, a highly convoluted, warped separating surface. In the context of deep learning, your ability to approximate that surface is limited by your ability to encode it as a loss function.
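As a sketch of the "trivial to enumerate" point ( illustrative only, nothing here is optimized ): listing every brainf*ck program in length order takes a few lines, and the explosion of the space is exactly why blind search is hopeless.

```python
from itertools import count, product

BF_OPS = "+-<>[].,"  # the eight brainf*ck instructions

def all_bf_programs():
    """Yield every brainf*ck program, shortest first. Enumeration is
    trivial; blind search is hopeless: there are 8**20, roughly 10**18,
    programs of length 20 alone."""
    for n in count(1):
        for ops in product(BF_OPS, repeat=n):
            yield "".join(ops)

programs = all_bf_programs()
print([next(programs) for _ in range(10)])  # '+', '-', '<', ...
```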

Comment by eternal_neophyte on "The Bitter Lesson", an article about compute vs human knowledge in AI · 2019-06-24T14:17:32.092Z · LW · GW

It only makes sense to talk about "search" in the context of a *search space*; and all extant search algorithms / learning methods involve searching through a comparatively simple space of structures, such as the space of weights of a deep neural network or the space of board-states in Go and chess. Defining these spaces is pretty trivial. As we move on to attack more complex domains, such as abstract mathematics, philosophy, or procedurally generated music and literature that stands comparison with the best products of human genius, the problem of even *defining* the search space in which you intend to leverage search-based techniques becomes massively involved.

Comment by eternal_neophyte on [Slashdot] We're Not Living in a Computer Simulation, New Research Shows · 2017-10-03T19:07:30.477Z · LW · GW

The strength of the claim being made, and the fact that whoever wrote Slashdot's summary never examines ways in which it could be false, both invite skepticism.

I'm of the opinion that we are in base reality regardless, though. The reason is that the incentive for running a simulation is to observe the behavior of the system being simulated. If you have some vertical stack of simulations, all simulating intelligent agents in a virtual world, and most of these simulations are simulating basically the same thing, that makes simulation very costly, because the 0th-level simulators won't learn anything from a simulation run by the simulants that they wouldn't learn from the "base-level" simulation. They would have an incentive to develop ways to starve non-useful simulant activity of computing resources.

Comment by eternal_neophyte on [deleted post] 2017-08-30T11:00:13.773Z

I liked Nietzsche's framing of the question in terms of eternal recurrence better. Strangely, I would forgo eternal recurrence but would choose the second option in your scenario ( since if it turns out to be a mistake, the cost will be limited ).

Comment by eternal_neophyte on How I [Kaj] found & fixed the root problem behind my depression and anxiety after 20+ years · 2017-07-26T19:52:04.573Z · LW · GW

The connection between neuroses and memories was something that made me think a lot. I've been trying to provoke myself into some kind of "transformation" for about 10 years, with some limited successes and a lot of failures for want of insight. Information like this is really valuable, so thank you for sharing your experience.

Comment by eternal_neophyte on What Are The Chances of Actually Achieving FAI? · 2017-07-26T08:45:40.403Z · LW · GW

Given that world GDP growth continues for at least another century, 100%. :)

Comment by eternal_neophyte on Is Altruism Selfish? · 2017-07-24T09:09:40.942Z · LW · GW

It is impossible for one to act on another's utility function (without first incorporating it into their own utility function).

This seems tautological, and trivially so. Whatever utility function you act on becomes, by virtue of that fact, "your" utility function.

Comment by eternal_neophyte on Can anyone refute these arguments that we live on the interior of a hollow Earth? · 2017-07-23T22:34:35.915Z · LW · GW

these laws are exactly the outside world

That is my view precisely. One way out is to assert that there is at least one mind responsible for providing the percepts available to other minds; from its perspective nothing is unknown, and it serves the function of the "outside world".

Comment by eternal_neophyte on Can anyone refute these arguments that we live on the interior of a hollow Earth? · 2017-07-23T18:47:04.532Z · LW · GW

The panpsychism argument is probably the most compelling one among all of these. The problem with it is that if percepts are the basic substance of the universe, how come we have experiences that we cannot predict? It implies our future experiences are determined by something outside of our own minds.

Comment by eternal_neophyte on Can anyone refute these arguments that we live on the interior of a hollow Earth? · 2017-07-22T08:12:21.095Z · LW · GW

I don't know a whole lot about physics or the other subjects he talks about. It just seems very well-argued to me.

These two facts are related.

Comment by eternal_neophyte on Can anyone refute these arguments that we live on the interior of a hollow Earth? · 2017-07-21T18:05:04.710Z · LW · GW

Those are a lot of links to sift through though - can you give an example of just one? :)

Comment by eternal_neophyte on Can anyone refute these arguments that we live on the interior of a hollow Earth? · 2017-07-21T17:17:10.537Z · LW · GW

Let's assume all the arguments linked are in fact sound. The first obvious question is: does he offer anything that resembles a falsifiability condition? If not, then since his is supposed to be a scientific, material hypothesis, he doesn't present anything remarkable or particularly difficult to dispatch.

Comment by eternal_neophyte on a different perspecive on physics · 2017-06-28T20:42:52.584Z · LW · GW

A section of three-dimensional space can be modelled as a cubic grid with nodes where the edges intersect, up to some limited resolution for a cube of finite volume ( and I suppose the same holds true with more than three dimensions ). It sounds as if you're proposing this graph basically be flattened: you take the complete graph on a regular polygon of n^3 vertices, map the nodes of your cube onto the polygon, and then delete every edge of the complete graph that doesn't correspond to an edge present in the cube.
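To pin down my reading, here's a minimal sketch ( the n = 3 choice and the names are mine ) of the edge set I think you're describing - the edges the flattened polygon would retain out of its complete graph.

```python
from itertools import product

def cubic_grid(n):
    """Nodes and edges of an n*n*n grid graph: nodes are integer
    coordinates, edges join nodes differing by 1 along a single axis."""
    nodes = list(product(range(n), repeat=3))
    edges = set()
    for a in nodes:
        for axis in range(3):
            b = list(a)
            b[axis] += 1
            if b[axis] < n:
                edges.add((a, tuple(b)))
    return nodes, edges

nodes, edges = cubic_grid(3)
# The complete graph on these 27 nodes has 27*26/2 = 351 edges;
# the cube keeps only 54 of them.
print(len(nodes), len(edges))
```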

I have further questions, but they hinge on whether or not I've understood you correctly. Is the above a fair summary so far?

Comment by eternal_neophyte on How I'd Introduce LessWrong to an Outsider · 2017-05-03T17:58:54.402Z · LW · GW

Hate to have to say this, but directly addressing a concern is a form of social confirmation that the concern deserves to be addressed, and thus that it's based in something real. Imagine a Scientologist offering to explain to you why Scientology isn't a cult.

Of the people I know of who are outright hostile to LW, most are hostile because of basilisks and polyamory and other things that make LW both an easy and a fun target for derision. And we can't exactly say that those things don't exist.

Comment by eternal_neophyte on How I'd Introduce LessWrong to an Outsider · 2017-05-03T17:36:29.684Z · LW · GW

Thank you for being gracious about accepting the criticism.

Comment by eternal_neophyte on From Jonathan Haidt: A site that lets you "experience different viewpoints" · 2017-05-03T17:34:11.338Z · LW · GW

While I feel I ought, technically speaking, to applaud any effort to boost the tolerance of heterodox opinions in universities, my heart would not be in it. I think the issue is that many of the most vicious "political types" are the ones with the weakest knowledge of the history and provenance of their own ideas. How many ultra-feminists have ever so much as opened "The Feminine Mystique"? It's not even talked about or referenced in the discussions of feminism I've come across. How many "Marxists" ever struggled as far as the end of the first chapter of Das Kapital?

It's puzzling to think about how you could persuade someone to be more open-minded about the beliefs of others when they're hardly even serious about their own.

Comment by eternal_neophyte on How I'd Introduce LessWrong to an Outsider · 2017-05-03T10:33:28.803Z · LW · GW

"actually, X" is never a good way to sell anything. Scientists are quite prone to this kind of speech which from their perspective is fully justified ( because they've exhaustively studied a certain topic ) - but what the average person hears is the "you don't know what you're talking about" half of the implication which makes them deaf to the "I do know what I'm talking about" half. If you just place the fruits of rationality on display; anyone with a brain will be able to recognize them for what they are and they'll adjust their judgements accordingly.

Here's an interesting exercise - find anyone in the business of persuasion ( a lawyer, a salesman, a con artist ) and see how often you hear them say things like "no, actually..." ( or how often you hear them not saying these things ).

Comment by eternal_neophyte on How I'd Introduce LessWrong to an Outsider · 2017-05-03T10:17:35.022Z · LW · GW

He's really, really smart.

This is the kind of phrasing that usually costs more to say than you can purchase with it. Anyone who is themselves really, really smart is going to raise their hackles at this kind of talk, and is moreover going to want strong evidence ( and since a smart person would independently form the same judgement about Yudkowsky, if it is correct, you can safely just supply the evidence without the attached value judgement ).

Fiction authors have a fairly robust rule of thumb: show, don't tell. Especially don't tell me what judgement to form. I'd tack on this: don't negotiate. Haggling with a person over their impressions of a group of other people, with suggestions like "it's still possible that the techniques may be useful to you, right?", immediately inspires suspicion in anyone with any sort of disposition to scepticism. Bartering "may"s simultaneously creates the impression of personal uncertainty and inability to demonstrate, while coupling it to the obvious fact that this person wants me to form a certain judgement.

If I were to introduce a stranger to LessWrong, I'd straightforwardly tell them what it is: it's where people attracted to STEM go to debate and discuss mostly STEM-related ( and generally academic ) topics, with a heavy bias towards topics in the twilight zone between sci-fi and feasible scientific reality; also with a marked tendency to employ a set of tools and techniques of thought derived from studying cognitive science, and an associated tendency to frame discussions in the language associated with those tools.

Comment by eternal_neophyte on [Stub] Extortion and Pascal's wager · 2017-04-28T19:39:30.182Z · LW · GW

It's not smoking-gun obvious to me that this second formulation is what the pre-Pauline Christians believed in. Jesus's divinity certainly wasn't settled even after Paul. Consider for example the Arian "heresy".

Comment by eternal_neophyte on [Stub] Extortion and Pascal's wager · 2017-04-28T18:03:51.629Z · LW · GW

Paul isn't going to cut it

Paul might cut it if you're Thomas Jefferson: https://en.wikipedia.org/wiki/Jefferson_Bible "Paul was the first corrupter of the doctrines of Jesus."

Comment by eternal_neophyte on [Stub] Extortion and Pascal's wager · 2017-04-28T15:39:55.697Z · LW · GW

God says "j/k, just kidding"

Either God, Jesus or St. Paul - that all depends entirely on which sect you ask.

Comment by eternal_neophyte on [Stub] Extortion and Pascal's wager · 2017-04-28T11:08:28.248Z · LW · GW

therefore God optimizes this world for Leviathan

?

Comment by eternal_neophyte on [Stub] Extortion and Pascal's wager · 2017-04-27T22:28:08.842Z · LW · GW

My own reading of Job was not that God's goodness is undeniable; it's that God really needs nothing from us and is entirely indifferent to human beings choosing to damn themselves or not, in contradiction to "your God is a jealous God".

If you have sinned, what do you accomplish against him? And if your transgressions are multiplied, what do you do to him? If you are righteous, what do you give to him? Or what does he receive from your hand? Your wickedness concerns a man like yourself, and your righteousness a son of man.

This seems to me like the most sane piece of theological reasoning I've found in any religious text whatsoever: casting God as an entirely amotivational agent ( which is strangely in contradiction to the premise of the story of Job ).

Comment by eternal_neophyte on [Stub] Extortion and Pascal's wager · 2017-04-27T06:52:05.309Z · LW · GW

They usually don't have any way to leverage their models to increase the cost of not buying their product or service, though; so such a situation is still missing at least one criterion.

There is a complication involved, since it's possible to increase the cost to others of not doing business with you in "fair" ways. E.g. the invention of the fax machine reduced effective demand for message boys to run between office buildings, hence increasing their cost and the operating costs of anyone who refused to buy a fax machine.

Though I don't believe any company long held a monopoly on the fax market, if a company did establish such a monopoly in order to control prices, that again might be construed as extortion.

Comment by eternal_neophyte on [Stub] Extortion and Pascal's wager · 2017-04-26T20:44:53.406Z · LW · GW

This is the first time I've heard of this dilemma (so this post is really just thinking aloud). It seems to me that trade usually doesn't require agents to engage in deep modeling of each other's behaviour. If I go down to the market place and offer the man at the stall £5 for a pair of shoes, and he declines and I walk away - the furthest thing from my mind is trying to step through my model of human behaviour to figure out how to persuade him to accept. I had a simple model - to wit that the £5 was sufficient incentive to effect the trade - and when that model turned out false I just abandoned negotiations without trying to calculate the incentive effects of doing so.

This isn't to say that everything involving deep modeling of human behaviour is necessarily an instance of extortion, though the converse would seem to hold ( every act of extortion involves some higher-order modeling between extorter and victim ). However, extortion usually involves the extorter trying to increase the cost of select outcomes above what they would be had the extorter not explicitly acted to increase them, which is why deep modeling of the victim is required. Unless my costly, deep model of my trading partner is paying rent to me ( with respect to a given episode of negotiation ) in some way other than by allowing me to increase the cost of a certain set of outcomes to him, I am probably engaging in extortion.

If I walk away from a market stall with the intent of provoking the seller into lowering his price, I'm not increasing the cost of any outcome to him. The cost of me walking away is a constant. So in this case my model of his behaviour is not aimed at increasing the cost of any outcome to him - I'm effectively just placing a bet. If I threaten to break his legs if he refuses the sale, that's placing a bet on a rigged game.

Comment by eternal_neophyte on Open thread, Apr. 17 - Apr. 23, 2017 · 2017-04-26T19:20:05.718Z · LW · GW

I may have misunderstood your argument, Thomas. Are you saying that because it's possible to construct a paradox ( in this case Yablo's paradox ) using an infinitude, the concept of infinity is itself paradoxical?

Couldn't you make a similar argument about finite systems, such as, say:

A: B is false, B: A is false

There are only two sentences here. Is the number two therefore paradoxical? I apologize if it sounds like I'm trying to parody your argument - I really would like to learn that I've misunderstood it, and in what way.

Comment by eternal_neophyte on Open thread, Apr. 17 - Apr. 23, 2017 · 2017-04-26T19:08:21.862Z · LW · GW

"you can't do infinite number of steps in a finite time"

Well, can you? If some finite period must elapse when a finite distance is covered, and an infinite distance is greater than any finite distance, then the time elapsed in crossing an infinite segment must be greater than the time that elapses in crossing any finite segment, and thus also infinite.

I suppose you can also assume that you can cross a finite segment without a finite period of time elapsing - but then what's to prevent any finite segment of arbitrary length being crossed instantaneously?

Comment by eternal_neophyte on Open thread, Apr. 17 - Apr. 23, 2017 · 2017-04-26T19:02:15.728Z · LW · GW

What "A & ~A" did he prove?

Comment by eternal_neophyte on Neuralink and the Brain’s Magical Future · 2017-04-26T06:58:36.202Z · LW · GW

societies of brain augments that are all working together

Even whether this presupposition holds is questionable. Mutual distrust and the associated risk might make cooperative development an exceptional scenario rather than the default one.

Comment by eternal_neophyte on Neuralink and the Brain’s Magical Future · 2017-04-25T18:57:51.568Z · LW · GW

The key ingredient for a MAD situation, as far as I can see, is some technology with high destructive potential distributed among multiple agents who cannot trust each other. To reduce my whole argument to its cartoon outlines: serious brain augmentation seems about as good an idea as handing everyone their own nuclear arsenal.

Comment by eternal_neophyte on Open thread, Mar. 20 - Mar. 26, 2017 · 2017-04-25T12:40:38.063Z · LW · GW

The first two would suggest I'm a subject-matter expert

Why? Are the two or three most vocal critics of evolution also experts? Does the fact that newspapers quote Michio Kaku or Bill Nye on the dangers of global warming make them climatology experts?

Comment by eternal_neophyte on Neuralink and the Brain’s Magical Future · 2017-04-24T22:15:53.277Z · LW · GW

The more intelligence augmentation is equitably spread the more likely that there will be less consequence free power over others.

That is not apparent to me, though. It seems like it would lead to a MAD-style situation where no agent is able to take any action that might be construed as malign without being punished. Every agent would have to be suspicious of the motives of every other agent, since advanced agents may do a very good job of hiding their own malintent, making any coordinated development very difficult. Some agents might reason that it is better to risk a chance of destruction for the chance of forming a singleton.

It seems to me very hard to reason about the behaviour of advanced agents without ultimately resorting to mathematics ( e.g. situations involving mutual-policing should be formalizable in game-theoretic terms ).
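For instance, one minimal way to formalize the mutual-policing situation is as a 2x2 game ( the payoff numbers here are invented purely for illustration ): with these payoffs, striking first dominates and mutual destruction is the only equilibrium - which is exactly the worry.

```python
# Two augmented agents each choose to Police (P) or Strike first (S).
# Entries are (row player, column player) payoffs; the numbers are invented.
PAYOFFS = {
    ("P", "P"): (2, 2),    # mutual policing: stable coexistence
    ("P", "S"): (-10, 3),  # struck while policing
    ("S", "P"): (3, -10),  # successful first strike / singleton
    ("S", "S"): (-5, -5),  # mutual destruction
}

def nash_equilibria(payoffs, moves=("P", "S")):
    """Profiles where neither player gains by deviating unilaterally."""
    return [
        (r, c)
        for r in moves
        for c in moves
        if all(payoffs[(r, c)][0] >= payoffs[(a, c)][0] for a in moves)
        and all(payoffs[(r, c)][1] >= payoffs[(r, a)][1] for a in moves)
    ]

print(nash_equilibria(PAYOFFS))  # [('S', 'S')]: striking first dominates
```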

Comment by eternal_neophyte on Neuralink and the Brain’s Magical Future · 2017-04-24T20:48:37.168Z · LW · GW

Privately manufactured bombs are common enough to be a problem - and there is a very plausible threat of life imprisonment ( or possibly execution ) for anyone who engages in such behaviour. That an augmented brain with the inclination to do something analogous would be effectively punishable is open to doubt - they may well find ways of either evading the law or of raising the cost of any attempted punishment to a prohibitive level.

I'd say it's more useful to think of power in terms of things you can do with a reasonable chance of getting away with it rather than just things you can do. Looking at the former class of things - there are many things that people do that are harmful to others that they do nevertheless because they can get away with it easily: littering, lying, petty theft, deliberately encouraging pathological interpersonal relationship dynamics, going on the internet and getting into an argument and trying to bully the other guy into feeling stupid... ( no hint intended to be dropped here, just for clarity's sake ).
Many, in my estimation probably most, human beings do in fact have at least some consequence-free power over others and do choose to abuse that minute level of power.

Comment by eternal_neophyte on Neuralink and the Brain’s Magical Future · 2017-04-24T20:06:28.971Z · LW · GW

even with increasing power

At the individual level? By what metric?

these do not seem the correct things for maths to be trying to tackle

Is that a result of mathematics or of philosophy? :P

Comment by eternal_neophyte on Neuralink and the Brain’s Magical Future · 2017-04-24T18:26:47.877Z · LW · GW

For this tactic to be effectual, it requires that a society of augmented human brains will converge on a pattern of aggregate behaviours that maximizes some idea of humanity's collective values, or at least doesn't optimize anything counter to such an idea. If the degree to which human values can vary between _un_augmented brains reflects some difference between them that would be infeasible to change, then it's not likely that a society of augmented minds would be any more coordinated in values than a society of unaugmented ones.

In one sense I do believe a designed AI is better: the theorems a human being devises can stand or fall independently of the man who devised them. The risk varies inversely with our ability to follow trustworthy inference procedures in reasoning about AI designs. With brain augmentation, the risk varies inversely with our aggregate ability to resist the temptation of power. Humanity has produced many examples of great mathematicians. Trustworthy but powerful men are rarer.

Comment by eternal_neophyte on Holy Ghost in the Cloud (review article about christian transhumanism) · 2017-04-24T13:09:22.171Z · LW · GW

Most scientists are not extropian in any sense - so if they have been "prepping the party", it was not deliberate. Are you considering scientists and religious folk to be disjoint sets?

Comment by eternal_neophyte on Neuralink and the Brain’s Magical Future · 2017-04-24T08:41:47.990Z · LW · GW

Perhaps Elon doesn't believe we are I/O bound, but that he is I/O bound. ;]

There's a more serious problem which I've not seen most of the Neuralink-related articles talk about* - which is that layering intelligence augmentations around an overclocked baboon brain will probably actually increase the risk of a non-friendly takeoff.

* haven't read the linked article through yet

Comment by eternal_neophyte on LessWrong and Miri mentioned in major German newspaper's article on Neoreactionaries · 2017-04-20T21:41:51.454Z · LW · GW

by Wikipedia

Well, by people who edit there and may be hostile to either rationalists, NRXers or both. Luckily, most people I've talked to online will react with bafflement or bemusement if Wikipedia is cited as a source for anything - so people are, in my experience, pretty well inoculated against the appeal-to-authority trap that Wikipedia creates.

Comment by eternal_neophyte on April '17 I Care About Thread · 2017-04-18T19:43:58.496Z · LW · GW

That's more of a function of the way you code up the running processes

Well not necessarily, depending on what kind of transforms you can apply to the source before feeding it to the interpreter, and the degree of fuss you're willing to put up with in terms of defining global functions with special names to handle resurrection of state and so on.

Python wasn't picked specifically because it's ideal for doing this kind of thing but just because it's easy for hacking prototypes together and useful for many things. At the risk of overstating my progress - some of the things that seemed to me like they would be the most difficult do now work at some level.

want to run your code inside a debugger

I want the option to do that if I want to. There's no reason it has to be done that way if the added overhead seems excessive. It's also a case of being able to specify the degree to which it's running in a debugger ( in other words, the level of resolution of the logs ).

Comment by eternal_neophyte on April '17 I Care About Thread · 2017-04-18T17:59:51.047Z · LW · GW

If I may take a stab at this: it's probably a combination of 1) Costs a lot 2) Benefit isn't expected for many decades 3) No guarantee that it would work

Anyone taking a heuristic approach to reasoning about whether to sign up for cryonics, rather than a probabilistic one ( which isn't irrational if you have no way of estimating the probabilities involved ), could therefore easily evaluate it as not worth doing.

Comment by eternal_neophyte on April '17 I Care About Thread · 2017-04-18T17:53:41.808Z · LW · GW

Edit: lighttable is also very close to what I would consider a good operating environment.

Comment by eternal_neophyte on April '17 I Care About Thread · 2017-04-18T17:51:57.360Z · LW · GW

For one thing I need to be able to run it on a server without x-windows on it; so I need to be able to change code on my own machine, have a script upload it to the remote server, and update the running code without halting any running processes. I also need the input source code to be transformed so that every variable assignment, function call or generator call is wrapped in a logging function which can be switched on or off, and for the output of the logs to be viewable by something basically resembling an Excel spreadsheet, where rows and columns can be filtered according to the source and nature of the logging message; so I can examine the operational trace of a complex running program to find the source of a bug without having to manually write logging statements and try/except blocks throughout the whole system. I don't know to what extent Jupyter's feature-set intersects with what I need, but when I checked it out it seemed to be basically browser-based.
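For illustration, a minimal sketch of the kind of source transform I have in mind ( `_log_call` is a made-up name; a real version would also hook assignments and generator calls, and would include a switch to turn logging off ):

```python
import ast

class CallLogger(ast.NodeTransformer):
    """Rewrite every call f(x) into _log_call(f, x), so a runtime
    switch can record or ignore it without touching the source."""
    def visit_Call(self, node):
        self.generic_visit(node)  # wrap nested calls first
        return ast.Call(
            func=ast.Name(id="_log_call", ctx=ast.Load()),
            args=[node.func] + node.args,
            keywords=node.keywords,
        )

def _log_call(fn, *args, **kwargs):
    print(f"TRACE {getattr(fn, '__name__', fn)} args={args} kwargs={kwargs}")
    return fn(*args, **kwargs)

source = "print(abs(-3))"
tree = ast.fix_missing_locations(CallLogger().visit(ast.parse(source)))
exec(compile(tree, "<transformed>", "exec"))
# TRACE abs args=(-3,) kwargs={}
# TRACE print args=(3,) kwargs={}
# 3
```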

"Something like a Smalltalk environment" - yes. Pharo looks a lot like what I would want and I have toyed with it slightly.

Comment by eternal_neophyte on April '17 I Care About Thread · 2017-04-18T16:24:26.248Z · LW · GW

I'm focusing on something highly specific right now: a dirty, hack-riddled attempt at turning python into a usable live-programming environment. This is a far cry from my general interest in building an OS ( or "operating environment" ) which effectively has a reflective understanding of its own internals, programming language and operations.

This splits into many sub-problems; the one I'm primarily fascinated by is how to construct a programming language that a computer can not only execute but understand the semantics of, in various dimensions and at varying conceptual levels ( hence creating the potential for manipulating the machine using abstract concepts that are not necessarily rigorously defined beforehand ).

Comment by eternal_neophyte on April '17 I Care About Thread · 2017-04-18T16:08:15.933Z · LW · GW

Can we post even if what we care about is secret? :)

I care about finding ways to turn a desktop computer into a better auxiliary organ of the brain.

Comment by eternal_neophyte on Plan-Bot: A Simple Planning Tool · 2017-04-17T22:26:06.524Z · LW · GW

No problem, good luck with all you do.

allowing users to immediately save data onto their own devices

Aye. Not that I'd recommend doing it that way but I was basically just curious to see if JS could manage it.

dynamically change what the user sees

If you store information about the schedule they've set up in a cookie then yes - but I imagine it would be a lot of info for a cookie. If you intend to let users create or edit a schedule, close the tab and then come back to it later, you'll probably want to implement that using backend server stuff ( sessions, server-side files, etc. ).
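For what it's worth, a minimal sketch of the sessions route ( flask; the route and key names are invented ):

```python
from flask import Flask, jsonify, request, session

app = Flask(__name__)
app.secret_key = "change-me"  # required for signed session cookies

@app.route("/schedule", methods=["GET", "POST"])
def schedule():
    if request.method == "POST":
        # stash whatever schedule JSON the client sends in the session
        session["schedule"] = request.get_json()
        return "", 204
    return jsonify(session.get("schedule", {}))

if __name__ == "__main__":
    app.run()
```

One caveat: flask's default session is itself a signed cookie stored on the client, so for a really big schedule you'd still want server-side storage behind it.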

If you already know JavaScript then you may want to check out NodeJS for that rather than python+flask, since you'll have less to pick up.

I'll stop here because I'm afraid my thinking out loud about how I might do this could send you chasing wild geese.

Comment by eternal_neophyte on How French intellectuals ruined the West - Postmodernism and its impact, explained · 2017-04-17T20:46:22.247Z · LW · GW

Ah, I confused myself because I thought you were referring to the neo-right French Identitarian youth movement: https://www.generation-identitaire.com/

Comment by eternal_neophyte on Plan-Bot: A Simple Planning Tool · 2017-04-17T20:28:57.608Z · LW · GW

If you have a decent grasp of python then https://blog.miguelgrinberg.com/post/the-flask-mega-tutorial-part-i-hello-world is a very good resource.

This is the book that got me started with python: http://www.diveintopython3.net/

If you end up going down the Python road and your project grows to the point where you feel you would like help, I'd be very interested in contributing to projects of this kind.

tailored to this mini-project

Possibly this: http://exploreflask.com/en/latest/static.html

Though I've done a bit of googling and it's apparent that you can serve dynamically generated data directly through javascript without resorting to any back-end stuff: http://stackoverflow.com/questions/3665115/create-a-file-in-memory-for-user-to-download-not-through-server

Comment by eternal_neophyte on How French intellectuals ruined the West - Postmodernism and its impact, explained · 2017-04-17T20:02:21.716Z · LW · GW

a lot of the tools of criticism of institutions and ideologies are largely the same between the two

Do you have a specific example of that?