Open thread, Jan. 12 - Jan. 18, 2015

post by Gondolinian · 2015-01-12T00:39:20.888Z · LW · GW · Legacy · 156 comments

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Previous Open Thread

Next Open Thread


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

156 comments

Comments sorted by top scores.

comment by gwern · 2015-01-16T01:31:59.522Z · LW(p) · GW(p)

Image recognition, courtesy of the deep learning revolution & Moore's Law for GPUs, seems near reaching human parity. The latest paper is "Deep Image: Scaling up Image Recognition", Wu et al 2015 (Baidu):

We present a state-of-the-art image recognition system, Deep Image, developed using end-to-end deep learning. The key components are a custom-built supercomputer dedicated to deep learning, a highly optimized parallel algorithm using new strategies for data partitioning and communication, larger deep neural network models, novel data augmentation approaches, and usage of multi-scale high-resolution images. On one of the most challenging computer vision benchmarks, the ImageNet classification challenge, our system has achieved the best result to date, with a top-5 error rate of 5.98% - a relative 10.2% improvement over the previous best result.

...The result is the custom-built supercomputer, which we call Minwa. It is comprised of 36 server nodes, each with 2 six-core Intel Xeon E5-2620 processors. Each server contains 4 Nvidia Tesla K40m GPUs and one FDR InfiniBand (56Gb/s) which is a high-performance low-latency interconnection and supports RDMA. The peak single precision floating point performance of each GPU is 4.29TFlops and each GPU has 12GB of memory. Thanks to the GPUDirect RDMA, the InfiniBand network interface can access the remote GPU memory without involvement from the CPU. All the server nodes are connected to the InfiniBand switch. Figure 1 shows the system architecture. The system runs Linux with CUDA 6.0 and MPI MVAPICH2, which also enables GPUDirect RDMA. In total, Minwa has 6.9TB host memory, 1.7TB device memory, and about 0.6PFlops theoretical single precision peak performance...We are now capable of building very large deep neural networks up to hundreds of billions of parameters thanks to dedicated supercomputers such as Minwa.

...As shown in Table 3, the accuracy has been optimized a lot during the last three years. The best result of ILSVRC 2014, top-5 error rate of 6.66%, is not far from human recognition performance of 5.1% [18]. Our work marks yet another exciting milestone with the top-5 error rate of 5.98%, not just setting the new record but also closing the gap between computers and humans by almost half.

For another comparison, Table 3 on pg9 shows past performance. In 2012, the best performer reached 16.42%; 2013 knocked it down to 11.74%; and 2014 to 6.66%, or 5.98% depending on how much of a stickler you want to be - leaving a gap of roughly 0.9 percentage points to the 5.1% human rate.

EDIT: Google may have already beaten 5.98% with a 5.5% error rate (and thus halved the remaining difference to ~0.4%), according to a commenter on HN, "smhx":

googlenet already has 5.5%, they published it at a bay area meetup, but did not officially publish the numbers yet!

Replies from: William_S, gwern, JoshuaZ, gwern
comment by William_S · 2015-01-18T16:12:40.465Z · LW(p) · GW(p)

On the other hand... Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images

From the abstract:

... A recent study revealed that changing an image (e.g. of a lion) in a way imperceptible to humans can cause a DNN to label the image as something else entirely (e.g. mislabeling a lion a library). Here we show a related result: it is easy to produce images that are completely unrecognizable to humans, but that state-of-the-art DNNs believe to be recognizable objects with 99.99% confidence (e.g. labeling with certainty that white noise static is a lion). Specifically, we take convolutional neural networks trained to perform well on either the ImageNet or MNIST datasets and then find images with evolutionary algorithms or gradient ascent that DNNs label with high confidence as belonging to each dataset class. It is possible to produce images totally unrecognizable to human eyes that DNNs believe with near certainty are familiar objects. Our results shed light on interesting differences between human vision and current DNNs, and raise questions about the generality of DNN computer vision.
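(A concrete illustration of the "gradient ascent" route the abstract mentions - this is only a sketch, not the authors' code: start from noise and repeatedly nudge the input toward whatever raises one class's score. The PyTorch/torchvision model, the class index, and the hyperparameters below are stand-ins chosen for illustration.)

    # Sketch of a "fooling image": maximize one class's score by gradient
    # ascent on the input itself, starting from random noise.
    # torchvision's AlexNet is a stand-in classifier; input normalization is
    # omitted for brevity, and the class index is hypothetical.
    import torch
    import torchvision.models as models

    model = models.alexnet(pretrained=True).eval()
    target_class = 291                                    # hypothetical ImageNet index
    x = torch.rand(1, 3, 224, 224, requires_grad=True)    # start from noise

    optimizer = torch.optim.SGD([x], lr=0.1)
    for step in range(200):
        optimizer.zero_grad()
        logits = model(x)
        loss = -logits[0, target_class]    # maximize the target logit
        loss.backward()
        optimizer.step()
        x.data.clamp_(0.0, 1.0)            # keep pixels in a valid range

    confidence = torch.softmax(model(x), dim=1)[0, target_class]
    print(f"target-class confidence: {confidence.item():.4f}")

The same kind of loop, constrained to stay very close to a real photograph, gives the "imperceptible perturbation" adversarial examples mentioned in the first sentence of the abstract.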

Replies from: gwern, JoshuaZ
comment by gwern · 2015-01-18T17:58:02.801Z · LW(p) · GW(p)

I'm not sure what those or earlier results mean, practically speaking. And the increased use of data augmentation may mean that the newer neural networks don't show that behavior, pace those papers showing it's useful to add the adversarial examples to the training sets.

comment by JoshuaZ · 2015-02-01T21:13:53.441Z · LW(p) · GW(p)

It seems like the workaround for that is to fuzz the images slightly before feeding them to the neural net?

Replies from: gwern
comment by gwern · 2015-02-01T22:12:01.309Z · LW(p) · GW(p)

'Fuzzing' and other forms of modification (I think the general term is 'data augmentation', and there can be quite a few different ways to modify images to increase your sample size - the paper I discuss in the grandparent spends two pages or so listing all the methods it uses) aren't a fix.

In this case, they say they are using AlexNet which already does some data augmentation (pg5-6).

Further, if you treat the adversarial examples as another data augmentation trick and train the networks with the old examples, you can still generate more adversarial examples.
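(For readers unfamiliar with the term, here is what "data augmentation" typically looks like in this setting - random crops, flips, and color jitter applied to every training image. The torchvision transforms below are just an illustration, not the pipeline of any paper discussed here.)

    # Minimal data-augmentation sketch: each epoch the network sees a slightly
    # different version of every image, which improves generalization but, as
    # noted above, does not by itself remove the fooling-image failure mode.
    from torchvision import transforms

    augment = transforms.Compose([
        transforms.RandomResizedCrop(224),                  # random crop + rescale
        transforms.RandomHorizontalFlip(),                  # mirror half the time
        transforms.ColorJitter(brightness=0.2, contrast=0.2),
        transforms.ToTensor(),
    ])
    # usage (hypothetical path):
    # from PIL import Image
    # augmented = augment(Image.open("lion.jpg"))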

Replies from: JoshuaZ
comment by JoshuaZ · 2015-02-01T22:16:22.015Z · LW(p) · GW(p)

Huh. That's surprising. So what are humans doing differently? Are we doing anything differently? Should we wonder if someone given total knowledge of my optical processing could show me a picture that I was convinced was a lion even though it was essentially random?

Replies from: gwern, ShardPhoenix
comment by gwern · 2015-02-01T23:52:02.679Z · LW(p) · GW(p)

Those rather are the questions, aren't they? My thought when the original paper showed up on HN was that we can't do anything remotely similar to constructing adversarial examples for a human visual cortex, and we already know of a lot of visual illusions (I'm particularly thinking of the Magic Eye autostereograms)... "Perhaps there are thoughts we cannot think".

Hard to see how we could test it without solving AI, though.

Replies from: JoshuaZ
comment by JoshuaZ · 2015-02-02T00:02:55.051Z · LW(p) · GW(p)

I don't think we'd need to solve AI to test this. If we could get a detailed enough understanding of how the visual cortex functions, it might be doable. Alternatively, we could try it on a very basic uploaded mouse or similar creature. On the other hand, if we can upload mice then we're pretty close to uploading people, and if we can upload people we've got AI.

comment by ShardPhoenix · 2015-04-02T03:49:09.711Z · LW(p) · GW(p)

I'm not sure if NNs already do this, but perhaps using augmentation on the runtime input might help? Similar to how humans can look at things in different lights or at different angles if needed.

comment by gwern · 2015-05-15T01:55:56.482Z · LW(p) · GW(p)

To update: the latest version of the Baidu paper now claims to have gone from the 5.98% above to 4.58%.

EDIT: on 2 June, a notification (Reddit discussion) was posted; apparently the Baidu team made far more than the usual number of submissions to test how their neural network was performing on the held-out ImageNet sample. This is problematic because it means that some amount of their performance gain is probably due to overfitting (tweak a setting, submit, see if performance improves, repeat). The Google team is not accused of doing this, so probably the true state-of-the-art error rate is somewhere between the 3rd Baidu version and the last Google rate.

comment by JoshuaZ · 2015-01-16T01:37:35.105Z · LW(p) · GW(p)

That is shocking and somewhat disturbing.

comment by gwern · 2015-02-09T19:11:14.226Z · LW(p) · GW(p)

Human performance on image-recognition surpassed by MSR? "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification", He et al 2015 (Reddit; emphasis added):

Rectified activation units (rectifiers) are essential for state-of-the-art neural networks. In this work, we study rectifier neural networks for image classification from two aspects. First, we propose a Parametric Rectified Linear Unit (PReLU) that generalizes the traditional rectified unit. PReLU improves model fitting with nearly zero extra computational cost and little overfitting risk. Second, we derive a robust initialization method that particularly considers the rectifier nonlinearities. This method enables us to train extremely deep rectified models directly from scratch and to investigate deeper or wider network architectures. Based on our PReLU networks (PReLU-nets), we achieve 4.94% top-5 test error on the ImageNet 2012 classification dataset. This is a 26% relative improvement over the ILSVRC 2014 winner (GoogLeNet, 6.66%). To our knowledge, our result is the first to surpass human-level performance (5.1%, Russakovsky et al.) on this visual recognition challenge.

(Surprised it wasn't a Baidu team who won.) I suppose now we'll need even harder problem sets for deep learning... Maybe video? Doesn't seem like a lot of work on that yet compared to static image recognition.
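(The PReLU itself is a one-line change to the usual rectifier: f(y) = y for y > 0 and f(y) = a*y otherwise, with the slope a learned per channel. A minimal sketch, assuming PyTorch purely as the implementation vehicle; the 0.25 initialization is the one reported in the paper.)

    import torch
    import torch.nn as nn

    class PReLUSketch(nn.Module):
        """Parametric ReLU: the negative slope `a` is a learned parameter."""
        def __init__(self, num_channels: int, init_slope: float = 0.25):
            super().__init__()
            self.a = nn.Parameter(torch.full((num_channels,), init_slope))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            a = self.a.view(1, -1, 1, 1)           # broadcast over N, H, W
            return torch.where(x > 0, x, a * x)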

Replies from: gwern
comment by gwern · 2015-02-14T02:50:49.842Z · LW(p) · GW(p)

The record has apparently been broken again: "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift" (HN, Reddit), Ioffe & Szegedy 2015:

Training Deep Neural Networks is complicated by the fact that the distribution of each layer’s inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9% top-5 validation error (and 4.8% test error), exceeding the accuracy of human raters.

...The current reported best result on the ImageNet Large Scale Visual Recognition Competition is reached by the Deep Image ensemble of traditional models Wu et al. (2015). Here we report a top-5 validation error of 4.9% (and 4.82% on the test set), which improves upon the previous best result despite using 15X fewer parameters and lower resolution receptive field. Our system exceeds the estimated accuracy of human raters according to Russakovsky et al. (2014).
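(The normalization step itself is simple. A minimal training-time sketch of the transform described in the abstract, with learned scale gamma and shift beta; at test time the batch statistics are replaced by running averages.)

    import torch

    def batch_norm_train(x: torch.Tensor, gamma: torch.Tensor, beta: torch.Tensor,
                         eps: float = 1e-5) -> torch.Tensor:
        # x: (batch, features); gamma, beta: (features,)
        mean = x.mean(dim=0)
        var = x.var(dim=0, unbiased=False)
        x_hat = (x - mean) / torch.sqrt(var + eps)   # zero mean, unit variance per feature
        return gamma * x_hat + beta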

On the human-level accuracy rate:

... About ~3% is an optimistic estimate without my "silly errors".

...I don't at all intend this post to somehow take away from any of the recent results: I'm very impressed with how quickly multiple groups have improved from 6.6% down to ~5% and now also below! I did not expect to see such rapid progress. It seems that we're now surpassing a dedicated human labeler. And imo, when we are down to 3%, we'd be matching the performance of a hypothetical super-dedicated fine-grained expert human ensemble of labelers.

comment by JoshuaZ · 2015-01-12T02:43:42.260Z · LW(p) · GW(p)

People often talk about clusters of ideas. A common context here is the various contrarian clusters. But ideas can often cluster for historical reasons, with no coherent connection between them. That's well known. What may be less well known is that there are examples where one idea in a cluster is discredited and, as a result, other, correct ideas in the same cluster fall into disrepute. I recently encountered an example while reading Cobb and Goldwhite's "Creations of Fire", a history of chemistry.

In the early 1800s Berthollet had hypothesized (with a fair bit of experimental evidence) that one could make the same compound with different ratios of substances. He also hypothesized what would later come to be known as the law of mass action. When the first claim was shown to be wrong, the law of mass action was also rejected, and would not become accepted again for about 50 years.

The upshot seems to be that we should be careful not to reject ideas just because they come from the same source as other ideas to which we've assigned low probabilities.

comment by Dorikka · 2015-01-12T17:11:55.377Z · LW(p) · GW(p)

At one point there was a significant amount of discussion regarding Modafinil - this seems to have died down in the past year or so. I'm curious whether any significant updating has occurred since then (based either on research or experience).

Replies from: gwern
comment by gwern · 2015-01-16T02:05:20.466Z · LW(p) · GW(p)

As far as I know, no important news or research has come out about modafinil. Things have been quiet lately, even on the black-markets.

comment by Kaj_Sotala · 2015-01-12T13:32:37.051Z · LW(p) · GW(p)

I'm taking a seminar course on a computational approach to emotions: there's a very interesting selection of papers linked from that course page that I figure people on LW might be interested in.

The professor's previous similar course covered consciousness, and also had a selection of very interesting papers.

comment by Adam Zerner (adamzerner) · 2015-01-12T02:47:34.613Z · LW(p) · GW(p)

Random thought: revealing something personal about yourself is a very powerful "dark art". People will feel strong pressure to reciprocate.

I confess that I've sort of used it before. I.e. if I want to get information out of someone, I might reveal something personal about myself (I'm comfortable talking about a lot of things, so often times it really isn't even that personal).

I can't recall ever having had bad intentions though. I recall using it to get a friend to open up about something that I think would be beneficial for them, but that is difficult for them to do.

Replies from: ilzolende, Alsadius, Dahlen, emr
comment by ilzolende · 2015-01-12T03:36:59.873Z · LW(p) · GW(p)

The real trick to both use and deflect this is to have some piece of information about yourself that sounds very personal but that you would be fine sharing with everyone. I use autism disclosure this way, not only for this purpose, but also so that when people who I have met try to think of examples of autism, they don't just think of fictional evidence.

Also, this shows up in HPMoR's chapter 7, titled Reciprocation.

comment by Alsadius · 2015-01-13T16:56:08.359Z · LW(p) · GW(p)

It's not just pressure to reciprocate - revealing something very personal is an extremely strong signal of honesty. (Edit: And also confidence)

And while I didn't do this intentionally per se, I do remember the first conversation I had with my girlfriend involved me telling her about the time I failed out of my program in university. That worked out pretty well, I'd say.

comment by Dahlen · 2015-01-14T23:32:16.752Z · LW(p) · GW(p)

I think that depends on whether the personal detail in question helps or hinders bonding.

There are many personal things people (strangers mostly) could tell me about themselves that would put me off rather than get me to reciprocate, and probably I've awakened such reactions in other people in the past as well.

Confess something embarrassing or awkward enough, and wave your success goodbye -- just when you thought you were improving social skills by consciously applying social strategies...

Tip: an unflattering but ordinary and relatable experience is best for this. Internet meme images and funny pics are full of those.

comment by emr · 2015-01-14T03:06:46.082Z · LW(p) · GW(p)

Whenever you discover a social "dark art", look for a countermeasure.

Of course, in most cases this isn't a "dark art" at all: It can just be a signal that you're okay talking about X or moving the conversation in the direction of X, without explicitly requesting to talk about X, because an explicit request would require an explicit refusal in the case where they truly didn't want to talk about X. Whereas if you use the ambiguous signal, you're giving them the option of an ambiguous refusal (often by reciprocating with a superficially equal but actually trivial "yeah me too" disclosure). I think this holds for the case of "difficult" issues between friends, as well as things like flirting (ambiguous introduction of a sexual topic), and moving to informal topics from a formal context.

comment by emr · 2015-01-14T04:16:11.827Z · LW(p) · GW(p)

Languages create selection effects that influence our perceptions of other nations.

Most notably, the prevalence of English as a second language means that more people outside of the Anglo-sphere have access to a wide range of Anglo-sphere media and conversation partners, whereas countries that mostly speak English will have a more filtered selection of international sources. For example, there are more people in Slovakia who can read major US newspapers than people in the US who can read major Slovakian newspapers.

A second class of effects occurs on the scale of individuals. Second-language use may stem from a direct ancestral link, as in the case of immigration. Second-language use is sometimes related to higher levels of education. Finally, individual interest can influence the choice of acquired second languages.

I'm curious how this model seems to people living in non-English countries.

As a monolingual, it does seem clear that I'm getting a very filtered sampling of the residents of foreign countries, even relative to all the normal filtering that happens in communication. I frequently catch myself thinking "How can country X be so dysfunctional? All the people I've ever met from X are highly-skilled immigrants and people who choose to hang out on the same English-language science and philosophy forums as me!". The dysfunction of an English-speaking country never puzzles me, since I've met far too many of the residents :)

Replies from: Viliam_Bur
comment by Viliam_Bur · 2015-01-14T11:52:10.511Z · LW(p) · GW(p)

Interesting. So the educational filter should make people in Slovakia appear smarter to Americans (if they notice this country at all) simply because the worst stupidity won't get translated, and the lowest-class people will not travel to the USA. You will not be regularly exposed to things like this.

On the other hand, this effect is probably much smaller than noise created by random American journalists or bloggers writing made-up stuff about Slovakia, or depictions of "Slovakia" in movies (example here, or shortly here). If for whatever reason a popular writer would decide that Slovakia is e.g. inhabited by vampires, there is pretty much nothing we could do about it.

All the people I've every met from X are highly-skilled immigrants

Maybe the right question to ask yourself when you meet a smart immigrant is: "Why did they have to leave their country?" Probably not polite to ask them, but you should assume there was a reason. And if the answer seems to be "poverty", well, poverty is usually caused by something, so unless the country is just one huge empty desert, there are other things wrong there, too.

Replies from: DanielLC
comment by DanielLC · 2015-01-16T07:48:16.461Z · LW(p) · GW(p)

And if the answer seems to be "poverty", well, poverty is usually caused by something, so unless the country is just one huge empty desert, there are other things wrong there, too.

It's also clearly not caused by laziness in this case.

comment by Capla · 2015-01-12T05:43:29.964Z · LW(p) · GW(p)

Since no one answered on the stupid questions thread:

Why did LessWrong split off from Overcoming Bias?

Does anyone know?

Replies from: ZankerH, Richard_Kennaway
comment by ZankerH · 2015-01-12T05:59:13.255Z · LW(p) · GW(p)

Avoiding trivial inconveniences that effectively discourage wider participation?

I was reminded of this recently by Eliezer's Less Wrong Progress Report. He mentioned how surprised he was that so many people were posting so much stuff on Less Wrong, when very few people had ever taken advantage of Overcoming Bias' policy of accepting contributions if you emailed them to a moderator and the moderator approved. Apparently all us folk brimming with ideas for posts didn't want to deal with the aggravation.

Replies from: Vulture
comment by Vulture · 2015-01-13T19:53:35.333Z · LW(p) · GW(p)

If that effect came as a surprise, it couldn't have been the reason for the split.

comment by Richard_Kennaway · 2015-01-12T08:15:01.971Z · LW(p) · GW(p)

My impression from the outset was that Eliezer and Robin were posting very different sorts of stuff, not having much to do with each other. It was two blogs shoehorned into one. The question for me is not why did they split, but why were they ever together?

Replies from: Gunnar_Zarncke, Capla
comment by Gunnar_Zarncke · 2015-01-12T20:14:34.052Z · LW(p) · GW(p)

Hm. That sounds like something of an 'insider bias' where you see differences that are less obvious to the casual reader.

comment by Capla · 2015-01-12T19:42:20.897Z · LW(p) · GW(p)

So what was the thought process? What led them to go from one arrangement to the other?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2015-01-13T08:37:36.636Z · LW(p) · GW(p)

I don't think either of them ever said. It was a bit like when a band splits up because of "artistic differences". :)

Replies from: Capla
comment by Capla · 2015-01-13T16:24:07.015Z · LW(p) · GW(p)

Really? Was there "juicy drama"? Did Robin ask Eliezer to leave?

That being the case, why was it decided to use the Reddit codebase? Why the Main and the Discussion? If it was just that Robin and Eliezer had differing thrusts, why didn't Eliezer just start his own personal blog?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2015-01-13T17:46:40.941Z · LW(p) · GW(p)

I am as much in the dark as you about these details. I've been reading since the beginning, but I've never been involved in the internal affairs (and probably wouldn't be commenting if I had been).

On the question of a discussion forum vs. a blog, Eliezer's intentions for LW were always for it to be not only a method of "raising the sanity waterline", but also a method of recruiting people to the cause of rationality, and specifically people capable of working on AGI. Hence a format more suited for carrying on discussions than a blog with comments on articles entirely written by the owner and other people the owner has granted special dispensation to. Perhaps the fact that OB was and still is precisely the latter made it less suitable for what Eliezer wanted to do with it.

Replies from: Capla
comment by Capla · 2015-01-14T04:52:49.505Z · LW(p) · GW(p)

I've been reading since the beginning

How did you find OB?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2015-01-14T07:38:23.824Z · LW(p) · GW(p)

I don't remember. I wasn't previously aware of Eliezer or Robin.

Replies from: gjm
comment by gjm · 2015-01-14T14:29:35.697Z · LW(p) · GW(p)

This tells you nothing about Richard's history, but here's one datapoint from another old-timer: I think I first encountered OB via Tyler Cowen's Marginal Revolution blog, and in particular this post from 2007-08-25. (I remember being struck by Tyler's suggestion that OB, despite its name, was actually exemplifying bias in a good way.) Weak evidence in favour of this being right is that my first comment seems to have been in 2007-09 and that this one in 2007-10 says I've been reading "for a month or thereabouts".

[EDITED to add: yes, different username.]

comment by somnicule · 2015-01-12T14:35:08.496Z · LW(p) · GW(p)

I've recently been diagnosed with ADHD (predominantly inattentive). Does anyone here share this, and if so, what resources or books on the topic would you recommend?

comment by babblefish · 2015-01-18T04:14:24.819Z · LW(p) · GW(p)

Hey...

I'm new here. Hi.

I was recently re-reading the original blogs (e-reader form and all that), and noticed a comment by Eliezer something to the effect of "Someone should really write 'The simple mathematics of everything' ".

I would like to write that thing.

I'm currently starting my PhD in mathematics, with several relevant side interests (physics, computing, evolutionary biology, story telling), and the intention of teaching/lecturing one day.

Now... If someone's already got this project sorted out (it has been a few years), great... however I notice that the wiki originally started for it is looking a little sad (diffusion of responsibility, perhaps).

So... if the project HAS NOT been sorted out yet, then I'd be interested in taking a crack at it: It'll be good writing/teaching practice for me, and give me an excuse to read up on the subjects I HAVEN'T got yet, and it'll hopefully end up being a useful resource for other people by the time I'm finished (and hopefully even when I'm under way)

I was hoping I could get a few questions answered while I'm here:

1) Has "the simple mathematics of everything" already been taken care of? If so, where?

2) Does anyone know what wiki/blog formats might be useful (and free, maybe?) and ABLE TO SUPPORT EQUATIONS?

3) Any other comments/advice/whatever?

Cheers, Babblefish.

Replies from: Viliam_Bur, shminux, IlyaShpitser, PrimeMover, ilzolende
comment by Viliam_Bur · 2015-01-20T08:37:22.961Z · LW(p) · GW(p)

Any other comments/advice/whatever?

I think I have noticed a frequent failure pattern when people try writing about complicated stuff. It goes like this:

  • Article #1: in which I describe the wide range of stuff I plan to handle in this series of articles
  • Article #2: introduction
  • Article #3: even more introduction, since the introduction from the previous article didn't seem enough
  • Article #4: reaction to some comments in the previous articles
  • Article #5: explaining some misunderstandings in comments in the previous articles
  • Article #6 ...I am already burned out, so this never gets written

Instead, this is what seems like a successful pattern:

  • Article #1: if this is the only article I will write, what part of the stuff could I explain
  • Article #2: if this is the only article I will write for the audience of article #1, what else could I explain
  • Article #3: if this is the only article I will write for the audience of articles #1+2, what else could I explain...

Seems to me that Eliezer followed the latter pattern when writing Sequences. There is no part saying "this will make sense to you only after you read the following chapters I haven't written yet". But there are parts that link heavily to the previous articles when they advance concepts already explained. The outline can be posted after the articles were written, like this.

I understand the temptation of posting the outline first, but that's a huge promise you shouldn't make unless you are really confident you can fulfill it. Before answering this, read about the planning fallacy, etc. On the other hand, with incremental writing you have complete freedom, and you can also stop at any moment without regrets. Even if you know you are going to write about A, B, C, and you feel pretty certain you can do it, I would still recommend starting with A1 instead of introduction.

Replies from: Vaniver
comment by Vaniver · 2015-01-20T18:54:50.341Z · LW(p) · GW(p)

Seems to me that Eliezer followed the latter pattern when writing Sequences.

I'm not sure that Eliezer outlined the posts in order- he did mention at some point wanting to explain X, but realizing that in order to explain X he needed to explain W, and in order to explain W...

I understand the temptation of posting the outline first, but that's a huge promise you shouldn't make unless you are really confident you can fulfill it.

Agreed. One of the ways I've worked around this is to not post the start of a sequence until it's mostly done (I have the second post to this sequence fully finished, and the third post ~2/3rds finished). I'm not sure I'd recommend it- if you find the shame of leaving something unfinished motivating, it's probably better to post the early stuff early. (I let that particular sequence sit for months without editing it.)

comment by Shmi (shminux) · 2015-01-18T23:58:42.273Z · LW(p) · GW(p)

Good luck. 'The simple mathematics of everything' is not an easy task. Maybe not even doable. But it's a noble goal.

comment by IlyaShpitser · 2015-01-19T00:12:46.049Z · LW(p) · GW(p)

Any other comments/advice/whatever?

Since you asked, my advice is to not work on ill-posed problems. More concretely, ask your advisor for advice on developing a good nose for problems to work on. Where are you starting?

comment by PrimeMover · 2015-01-18T20:57:23.053Z · LW(p) · GW(p)

I started writing one of those back in 2005 when my MMath finished. After writing over 1000 pages of loosely-packed LaTeX I discovered ProofWiki which had only just started up. Been writing for it ever since. But I still have that original LaTeX and can at a pinch generate the PDF again (although it's seriously iffy in places).

In the meantime, if you want to join ProofWiki (google it) and can handle the iron-rigid rules for contribution, you'd be more than welcome.

Replies from: babblefish
comment by babblefish · 2015-01-18T23:09:11.194Z · LW(p) · GW(p)

When you say "Started writing one of those" Do you mean a blog in general, or a "simple mathematics of everything" in particular? 1000 pages is a pretty decent contribution. What happened to all those pages?

I've encountered ProofWiki before - it's certainly a useful resource, but perhaps not precisely what I am working towards.

comment by ilzolende · 2015-01-18T06:21:26.938Z · LW(p) · GW(p)

Welcome to this website! It's common for new users to introduce themselves on the Welcome thread.

Unfortunately, while there's already a wiki, it only has 3 pages.

Replies from: babblefish
comment by babblefish · 2015-01-18T23:10:53.837Z · LW(p) · GW(p)

Welcome Thread- thanks, will go visit.

And yes, I did find that wiki, noticed it was sad and decided... that while the wiki format is nice, I'm not sure if it's precisely what is needed here.

comment by Username · 2015-01-14T04:14:12.296Z · LW(p) · GW(p)

(request for guidance from software engineers)

I'm a recent grad who's spent the last six years formally studying mathematics and informally learning programming. I have experience writing code for school projects and I did a brief but very successful math-related internship that involved coding. I was a high-performing student in mathematics and I always thought I was good at coding too, especially back in high school when I did programming contests and impressive-for-my-age personal projects.

A couple months ago I decided to look for a full-time programming job and got hired fairly soon, but since then it's been a disaster. I'm at a fast-moving startup where I need to learn a whole constellation of codebase components, languages, technologies, and third-party libraries/frameworks but I'm given no dedicated time to do so. I was immediately assigned a list of bugs to fix and without context and understanding of the relevant background knowledge I frantically debug/google/ask for help until somehow I discover the subtle cause of the bug. Three times already I've gotten pressure about my performance, and things aren't necessarily looking up. Other new hires from various backgrounds seem to be doing just fine. All this despite my being a good coder and a smart person even by LW standards. I did well in the job interview.

When I was studying and working in academia, I found that the best way to be productive at something (say, graph theory research) is to gradually transition from learning background to producing output. Thoroughly learning background in an area is an investment with great returns since it gives me context and a "top-down" view that allows me to quickly answer questions, place new knowledge into an already dense semantic web, and easily gauge the difficulty of a task. I could attempt to go into more details but the core is: Based on my experience, "hitting the ground running" by prioritizing quick output and only learning background knowledge as necessary per task is inefficient and I'm bad at it.

At the moment my only strong technology skills are the understanding of the syntax and semantics of a couple of programming languages.

Am I at the wrong company? Am I in the wrong profession -- should I go back to academia, spend four years getting a PhD, and work in more mathy positions? Thanks!

Replies from: Bugmaster, NancyLebovitz, ShardPhoenix, John_Maxwell_IV, Viliam_Bur, Risto_Saarelma, shminux
comment by Bugmaster · 2015-01-16T01:38:34.155Z · LW(p) · GW(p)

I would say it's a combination of being at the wrong company, and our education system being inadequate to the task.

There are many skills that are required in order to write complex software. You need to know how to organize your code in a maintainable and comprehensible way (Design Patterns, build/package systems, abstraction layers, even simple stuff like UML). You need to know how to find bugs in one's own code as well as in code written by other people (using debuggers, reading stack traces, writing logs, applying basic deductive reasoning). When you get stuck, you need to know how to get help efficiently (reading documentation, understanding the jargon, knowing exactly which questions to ask, knowing whom to ask them to).

None of these skills are considered "sexy"; and, in fact, most scientists and mathematicians that I've worked with in the past don't even recognize them as skills at all. Their attitude usually is, "don't bother me with your bureaucratic design pattern bullshit, I wrote a 3000-line method that calculates an MDS plot and it works, what more do you want". But the problem is that, without such skills, you will never be able to create anything more than a quick one-off script that performs one specific calculation and then quits.

My advice would be as follows.

Firstly, figure out what you actually want to do. Do you want to invent algorithms for other people to implement, or do you want to write software yourself? There's nothing wrong with either choice, but you need to consciously make the choice to begin with.

Secondly, if you do want to learn software engineering, find some people at your company who are already experienced software engineers. Ask them for a list of books or online tutorials to read (most likely, they'll recommend the Design Patterns book, so you might as well start with that). After reading (or, let's be realistic here, skimming) the books, ask them to sit down with you for a couple of hours in order to review your code -- even, and especially, the code that actually works. Listen to their input, and refactor your code according to their recommendations. When you have a bug, make sure you've tried everything you could think of, and then ask them to sit down with you and walk you through the steps of diagnosing it.

Thirdly, if there are no such people at your current company, or if they flat-out refuse to help you... then find a better company :-(

comment by NancyLebovitz · 2015-01-14T16:04:41.333Z · LW(p) · GW(p)

This is very much from the outside, but how sure are you that the other new hires are doing just fine? Could they (or some of them) be struggling like you are?

Replies from: Username
comment by Username · 2015-01-14T18:49:23.231Z · LW(p) · GW(p)

Thanks for the response. It's hard to say exactly, but I can see their work logs and I hear them getting congratulated from time to time.

comment by ShardPhoenix · 2015-01-14T05:32:17.206Z · LW(p) · GW(p)

When I started as a programmer I joined a graduate program at a big company. I was also fortunate enough that one of the consultants working there was able to act as an informal mentor in how things are done in "real world" programming (including dealing with all those technologies, frameworks, etc). You might find it easier to get up to speed with a bigger, slower-moving company with a more long-term view than a frantic startup.

comment by John_Maxwell (John_Maxwell_IV) · 2015-01-17T01:28:18.723Z · LW(p) · GW(p)

One framing that might be useful: Part of being a professional software engineer is learning new things constantly, whether it's new languages, new frameworks, new codebases, etc. But this itself is a skill that can be learned and practiced. In addition to having the attitude of relentless resourcefulness, there are many small tricks that can be picked up: for example, grepping for a bit of text from the UI to quickly find the code that defines it, using Google's site: operator to make it easier to do targeted documentation searches, using your language's debugger to solve bugs, having an editor with lots of plugins installed that make you more efficient, etc. This sentence: "I was immediately assigned a list of bugs to fix and without context and understanding of the relevant background knowledge I frantically debug/google/ask for help until somehow I discover the subtle cause of the bug" sounds like a pretty good description of what I found it was like to work as a software engineer at a startup, minus the word "frantic". I spent a lot of time learning just enough about the code I was working with to solve the problem I needed to, searching on google for just enough documentation to accomplish what I needed to accomplish, etc. Even the best software developers are doing keyword searches on Google and their codebase constantly as part of their development flow; if you find yourself doing this you should not consider it indicative of a problem.
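(A small illustration of the "grep for UI text" trick mentioned above - search the codebase for the string the interface shows, to find where it is defined. The paths and search string are hypothetical; in practice a plain grep/ripgrep command line does the same job.)

    import pathlib

    def grep(root: str, needle: str) -> None:
        """Print every file:line under `root` containing `needle`."""
        for path in pathlib.Path(root).rglob("*"):
            if not path.is_file():
                continue
            try:
                text = path.read_text(errors="ignore")
            except OSError:
                continue
            for lineno, line in enumerate(text.splitlines(), 1):
                if needle in line:
                    print(f"{path}:{lineno}: {line.strip()}")

    grep("src", "Your order has shipped")   # both arguments are made up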

Based on what you describe as your programming background, it sounds like you don't have much experience with this modality of software development. Probably lots of other recent hires have experience as interns or teaching themselves stuff for independent projects, which helped them learn the skill of just-in-time knowledge acquisition. I might try working at a less demanding company just so you could feel less stressed out and give yourself the opportunity to gradually ramp up to this development style. If you work at a bigger, slower-moving company that nevertheless is using fairly up-to-date technologies and you spend 15 minutes every morning working to improve your tools & efficiency, my guess is you'll be in a much better place after a year or so.

comment by Viliam_Bur · 2015-01-20T08:55:40.889Z · LW(p) · GW(p)

Seems to me that "good job" is a 2-place word -- which working environments makes which people productive? I have experiences very similar to what you described (dozen new components, zero time to learn, expected high productivity immediately, other people seem to cope well and receive performance bonuses). But I also have experiences of high productivity in business situations, where I received performance bonuses. Sometimes both experiences in the same company, just in a different project or with a different boss.

My "natural" approach is to do some learning first: I try to make sure that I understand the specification, that it is complete and unambiguous. For example I will first sketch the dialogs on paper and ask my boss or customer whether this is what they meant (because redrawing a sketch is much easier, faster, cheaper, and less frustrating than changing the code of an already developed and tested program). I think about all the additional work which was not mentioned but may be logically necessary, and I ask about it in advance, offering the least complicated solution. ("You want to have a list of users with passwords. What happens if someone forgets their password? Perhaps there could be an administrator who can change passwords, and as an almost free bonus, they could also block and unblock users. Or maybe users could provide their e-mails, and then change their passwords using e-mail verification, but that would be more complicated, and some users will call your support anyway, so unless you have thousands of users I believe this is not necessary, and can still be included later.") I also research the framework I am supposed to use, and make a few simple prototypes with the functionality I expect to need in the project. (Thus, if the framework has some serious problems, I am likely to find out in advance, when there is still a chance to use a different technology, or at least to avoid the problematic parts of the framework. The fact that I am not under pressure yet helps me notice more details and context.) Then, when I am ready (which is never perfect, but that's not the point), I implement the solution, step by step. At each moment I know where I am in the project and how much needs to be done yet. (This knowledge usually also makes my boss happy, assuming they believe my analysis.) I do a part, and I test it. When I am ready, there are usually very few bugs. So the time "wasted" analysing the problem and doing the prototypes is later offset by less bug fixing and less remakes. And there is generally less stress.

As an example of my work (probably the only example available freely online), I did a cycle route map for Trnava region in one month; and that included learning the Google Maps framework which I have never used before. A short tour: Choosing "Zoznam cyklotrás" displays a list of cycle routes; clicking on one of them displays it on the map. The "Zobrazenie cesty k bodom mimo trasy" checkbox allows you to click anywhere on the map to display the shortest path from that place to the route. There is also a possibility to find a cycle route according to your criteria, display "places of interest" within given distance from the route; your results can be printed or exported to PDF. This is the part visible to the user. The administrator part allows importing the cycle routes from GPS log files, editing and annotating the route points (e.g. "crossing a road", "crossing a railway"); and importing the "places of interest" from an Excel file. -- I believe I was rather productive here, but of course I am open to feedback. (I am not working for that company anymore, so I can't use the feedback to improve the product.)

However, most managers insist on a completely different approach to work, which other programmers seem to handle somehow (some enjoy it, others complain but cope anyway), but it makes me suffer and less productive. I would describe it as chaotic, short-sighted, penny-wise and pound-foolish, treating experts as replaceable, and accumulating the technical debt until the whole thing collapses under its own weight. Learning the new technology or debating the details of the project is considered an unproductive waste of time; the important thing is keeping the programmers busy (even if it means doing something merely to tear it down later). Maybe it's because the managers cannot recognize productivity, but can recognize silence and typing. And I really hate the idea that people are completely replaceable and should be randomly switched between different parts of the project or even random projects; in real life it often means the code is sloppy and undocumented, and there is no time or incentive to fix it, because if you spend your time refactoring the code, it makes you seem less productive, and the benefits will be enjoyed by someone else when you are moved to the next piece of spaghetti code. (Somehow collective code ownership is the part of agile programming most accepted by self-labeled enlightened managers. The other parts like unit testing or pair programming are obviously just a waste of time. I have to ask Robin Hanson whether it is a coincidence that the managers embrace exactly the one part which allows them to reduce the perceived status of expert programmers.) Well, to be fair, sometimes there are also external constraints, such as the customer insisting on doing things the stupid way; but I believe most of this comes from inside of the companies.

Most of my productive work was done when I worked on a project alone; and once when I was a leader of a small project. My approach as the leader was asking people what are their individual strengths and what part of the project would they prefer to do, and dividing the work accordingly. Then I paid attention to have clean APIs between the team members, good documentation of the data, and sketches of the dialogs. And then we just coded, and sometimes debated. Three programmers, myself included, the other two were part-time working students, for me it was the first Java EE project done from the scratch and I didn't have experience with many parts. Yet we completed it in three months, and received bonuses.

comment by Risto_Saarelma · 2015-01-15T05:02:32.106Z · LW(p) · GW(p)

Programming with other people, working with large codebases and working with multiple libraries and frameworks are basically all software engineering realities that education gives minimal training for. If you can view the job as a learning experience, it's probably a good one even though it is frustrating on multiple levels right now. If you're scrupulous about needing to pull the same weight as the other members and are thinking about switching jobs because of this, you could just talk about it to your manager. They might concede that yeah, you're not a good fit, and then you can go find a less miserable place to work in, or say that they think you're actually doing fine, in which case you can go back to considering the learning experience perspective.

Can't advise on taking the time off to do a PhD instead, but I don't think you should give up on programming just yet. Like others said, there are many companies, and bigger companies have more resources to spend on training and mentoring. Also, the current mode of mixing together a bunch of recently developed frameworks that nobody really understands and rushing to market with a minimum viable product chock-full of technical debt is probably just an artifact of the web as an application platform still being a reasonably new thing, with people rushing to figure out all the simple things they can do with it. If the technology stabilizes, there's going to be more opportunity for mastering long-lived technologies.

On the other hand, becoming a specific technology expert in programming is a gamble. Technologies just plain die sometimes. Math domain expertise is probably a lot more durable, but it's probably also trickier to get a nice math job than it is to get a nice programming job.

comment by Shmi (shminux) · 2015-01-14T05:49:52.246Z · LW(p) · GW(p)

Looks like a wrong company. Try looking for something more structured. It's good to learn the proper requirements/design/coding/testing techniques before throwing them out the window.

comment by RowanE · 2015-01-12T14:53:34.081Z · LW(p) · GW(p)

In the vein of asking personal questions of Less Wrong, I need career advice. Or advice on finding useful career advice.

I'm an undergraduate student, my course is "Mathematics & Theoretical Physics", BSc, but I'm already convinced I don't want to try to be a career scientist. Long-term, my career goals are to retire early (I've felt comfortable enough on what I live on as a student that the MrMoneyMustache approach seems eminently doable), with the actual terminal values involved being enjoyment and lack of stress, so becoming a quant also seems like a bad choice what with having to get a PhD first. Teaching just sounds horrible to me.

What this leaves me with is the much broader range of careers that are either mathematical or sciencey enough that I could use the degree for them, or the jobs and graduate programs that just ask for a degree and don't care what kind. I have too many choices, every particular one I look at seems okay but not great, I have no idea how to even begin narrowing them down or ordering them.

Replies from: Lumifer, Halfwitz, None, None, None
comment by Lumifer · 2015-01-12T16:17:12.038Z · LW(p) · GW(p)

Which marketable skills do you have or would be willing to acquire?

Replies from: RowanE, polymathwannabe
comment by RowanE · 2015-01-12T19:05:52.299Z · LW(p) · GW(p)

The concept of a "marketable skill" as it's been given to me in most career advice I've seen seems to refer to a personal virtue that you make a flimsy claim to possessing to make it more likely you'll get the job. I prefer to just think in terms of qualifications, because it doesn't put me in a spiral of "I can't just lie about it, I don't have any of these virtues they say to say you have, I'll never get a job". But at least in terms of actual skills, apart from those I'm presumably working on through the degree I'm also learning Japanese in my spare time, have been learning for a bit over a year and at the current rate would take I think 2-3 years to reach JLPT1 level.

Replies from: Lumifer
comment by Lumifer · 2015-01-12T19:16:31.977Z · LW(p) · GW(p)

The concept of a "marketable skill" as it's been given to me in most career advice I've seen seems to refer to a personal virtue

By a "marketable skill" I mean the capability to do something that other people are willing to pay you money for. Not a virtue, not a degree, not even a qualification (what matters is not whether you are qualified to do it, but whether you can do it).

In crude terms, if you want other people to pay you money, what would they pay money for?

Replies from: RowanE
comment by RowanE · 2015-01-12T20:41:25.147Z · LW(p) · GW(p)

I don't think I currently have any skills I could be paid money to do? I expect in most entry-level positions or graduate programs I could apply, I would be doing things that I don't yet know how to do that I would either be given on-the-job training for or just have to figure out as I go along. What sort of marketable skills might one have, as an undergraduate student without previous work experience, that I should be trying to think of?

Replies from: Lumifer
comment by Lumifer · 2015-01-12T20:50:54.903Z · LW(p) · GW(p)

I don't think I currently have any skills I could be paid money to do?

That seems to be a problem. I think you should fix it.

If you can't come up with a convincing answer as to why an employer should hire you, chances are the employer won't bother to think one up for you.

What sort of marketable skills might one have, as an undergraduate student without previous work experience, that I should be trying to think of?

That's basically the question of which job should you get post-college :-) There is a large variety of possible skills -- from accounting to website creation.

Replies from: RowanE
comment by RowanE · 2015-01-12T21:23:54.738Z · LW(p) · GW(p)

This graduate scheme at Aldi, which I would be way out of my depth with and I mostly remember because it's absurdly well-paid for an entry level graduate position, $62,000 in 'murican-money, doesn't ask for anything that I would actually think of as a skill that you could be paid money to do. You need a 2:1 degree, a driver's license, and a certain package of personal virtues and personality traits. There are a lot of things like that for graduates, and it's mostly those things that I'm looking at, with the issue being a lot of choice and difficulty identifying which ones are better than others.

Replies from: Lumifer
comment by Lumifer · 2015-01-12T21:39:14.366Z · LW(p) · GW(p)

doesn't ask for anything that I would actually think of as a skill that you could be paid money to do

Managing people and logistics is a very desirable and highly-paid skill.

Replies from: RowanE
comment by RowanE · 2015-01-12T22:13:47.268Z · LW(p) · GW(p)

That's a skill you learn while you're on the scheme, the applicants don't need to have the skill already, they need to have the personality traits and qualities that would enable them to quickly learn how to be managers. A qualified, experienced manager, someone who could list "managing people and logistics" among the things they can do that people might pay them to do, would not be an appropriate applicant for the scheme and could probably find better management positions that weren't entry-level.

comment by polymathwannabe · 2015-01-13T15:14:18.220Z · LW(p) · GW(p)

I'm saving to take the national examination to become a certified translator.

comment by Halfwitz · 2015-01-12T16:16:12.937Z · LW(p) · GW(p)

If you’re looking for a useful major, computer science is the obvious choice. I also think statistics majors are undersupplied, though only anecdotal data there. I know a few stats majors (none overly clever) that have done far more with the degree than I would have guessed as an undergraduate. But this could have changed since, markets being anti-inductive. If your goal is effective egoism, you’re probably not in the best major. Probably the best way to go about your goal is to follow the advice of effective altruists and then donate all the money to your future self, via a Vanguard fund. If this sounds too evil, paying a small tithe, 1%, would more than make up for this at a manageable cost.

Replies from: RowanE
comment by RowanE · 2015-01-12T18:03:14.669Z · LW(p) · GW(p)

I'm not really considering a change in major as on the table, for various reasons, mostly personal. I'm more thinking of what career to try for given the degree I'm on track for and that I've rejected the obvious choices for that degree.

The difference with the "effective egoist" approach is the diminishing marginal value of money - altruists want to earn as much as they can over the course of their lives, whereas I want to earn a set amount in as little time as possible, and might want to earn more if I'm making lots of money quickly or without stress. That's the main reason the "get PhD, become quant" track is ruled out - the "teaching sounds horrible" aside was referring to actually becoming a teacher, which is a common suggestion for what to do with a physics degree when ruling out science; I wasn't actually considering how bad teaching undergrads would be.

And there's not really a "too evil" for me; my response to the ethical obligation to donate to efficient charity is to notice that I don't feel guilty even though the logic seems perfectly sound, say "well I guess I'm already an unrepentant murderer, and therefore evil", and then functionally be an egoist while still using utilitarianism for actual moral questions.

Replies from: RomeoStevens
comment by RomeoStevens · 2015-01-12T22:01:20.762Z · LW(p) · GW(p)

If they want to live forever, the effective egoist still has linear utility WRT money until radical life extension and friendly AI research run out of room for more funding.

Replies from: RowanE
comment by RowanE · 2015-01-12T22:29:31.675Z · LW(p) · GW(p)

If radical life extension eliminates biological ageing and thereby increases life expectancies by 1,000 years, scrounging together enough money to increase the chance it's accomplished in my lifetime by 0.1% is worth 1 year of life to me. That would take a phenomenal amount of money, and if I have to spend even two years working to get that money when I could otherwise support myself on passive income, I've taken a loss.
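(The arithmetic behind that trade-off, spelled out with the comment's own hypothetical numbers:)

    added_years = 1000       # assumed gain in life expectancy if aging is eliminated
    prob_increase = 0.001    # a 0.1% higher chance it happens within my lifetime
    expected_gain = added_years * prob_increase
    print(expected_gain)     # 1.0 - one expected year of life per 0.1% of probability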

Replies from: RomeoStevens
comment by RomeoStevens · 2015-01-12T23:57:22.379Z · LW(p) · GW(p)

The point is to live until the functional immortality date.

Replies from: RowanE
comment by RowanE · 2015-01-13T09:43:36.888Z · LW(p) · GW(p)

Well, yes, that's why I didn't compare it to other interventions I could make and say they're much better investments, because the obvious response would be to do both, and why I described the amount of life extension funding in terms that still make sense with reaching the immortality deadline in mind. Increasing the chance you live forever with personal donations to the relevant research groups has a very low expected value per amount of money spent.

comment by [deleted] · 2015-01-12T16:14:13.055Z · LW(p) · GW(p)

Hey, Math PhD candidate here (graduating this May).

Long-term, my career goals are to retire early (I've felt comfortable enough on what I live on as a student that the MrMoneyMustache approach seems eminently doable)...

with the actual terminal values involved being enjoyment and lack of stress....

These are my goals, as well.

Teaching just sounds horrible to me.

It is pretty horrible. My university has a relatively teaching-heavy TA assignment, and it was kind of soul-crushing.

becoming a quant also seems like a bad choice what with having to get a PhD first.

Graduate school could serve the role of a holding pattern for you to figure out what it is you actually want to do. I think it's possible to become a quant or an actuary with a MSF or other master's degree. However, I don't recommend going into debt for graduate school, and as far as I know most graduate schools don't fund master's students.

There's a somewhat sneaky trick: apply to a PhD program, obtain a TA or RA position, and then, once the requirements of the Master's program you actually want are done, transfer into that Master's program and graduate with it.

Of course, that all probably requires some degree of teaching, and afterwards you need to find a job that pays enough to balance out the opportunity cost of living on a TA's salary for ~2 years.

The people I know who retired or are scheduled to retire the quickest do white-collar jobs in manufacturing or energy at very large corporations.

The people I know who do the least stressful jobs work either part-time in retail or have tenure of one kind or another. One is an epic-level computer programmer who gets so many job offers that he's able to choose the least restrictive.

Me, personally? After I defend I'm going to work for a small research lab.

Replies from: Lumifer, alienist
comment by Lumifer · 2015-01-12T16:33:08.942Z · LW(p) · GW(p)

The people I know who retired or are scheduled to retire the quickest

Cops.

These are my goals, as well.

So, this looks to be a common aspiration, but it strikes me as woefully underspecified :-) A lot of retired people spend their day extending minor tasks to take a lot of time and spend the rest of it staring into the idiot box.

Are all y'all quite sure you have enough internal motivation to do interesting, challenging things without any external stimuli? What will prevent you from vegging out and being utterly bored for the rest of your life?

Oh, and a practical question (for the US people) -- once you retire at, say, 40, what are you going to use for health insurance and does your retirement planning cover the medical costs?

Replies from: RowanE, shullak7
comment by RowanE · 2015-01-12T19:18:38.629Z · LW(p) · GW(p)

A life of just everyday minor tasks plus internet/videogames seems perfectly adequate, and I don't understand why the emotional response would be "boredom" rather than "contentment", except for the fact that television is vastly inferior to an internet-connected gaming PC.

I'd probably prefer doing "interesting, challenging things" to just vegging out (which should surely be enough motivation in itself, unless you're specifically talking about work-like projects and assuming those are necessary for happiness), but if I have a motivation failure and spend all my time doing inconsequential things at home, that's hardly going to be such a bad outcome that going to work would be preferable.

Replies from: Lumifer
comment by Lumifer · 2015-01-12T19:21:23.467Z · LW(p) · GW(p)

A life of just everyday minor tasks plus internet/videogames seems perfectly adequate

Ah. OK, then.

comment by shullak7 · 2015-01-13T17:24:55.362Z · LW(p) · GW(p)

The people I know who retired or are scheduled to retire the quickest
Cops.

Also military. Defined pension benefits and health care (such as it is) for the rest of your life. Of course, you must be in the military for 20+ years, which I'm guessing is not what the OP is looking for based on his/her other comments. :-)

Oh, and a practical question (for the US people) -- once you retire at, say, 40, what are you going to use for health insurance and does your retirement planning cover the medical costs?

I experienced this to some extent (a long story I won't go into here). For a while, we paid for a high-deductible plan on the state exchange since we were both relatively healthy and mainly looking to not be bankrupted should we experience a medical emergency or suddenly fall ill. Unfortunately (or fortunately, depending on how you look at it), our other income was just high enough that we didn't qualify for federal subsidies, so we were paying over $400 per month for a bare-bones plan for my husband and me. Doable, but not ideal... definitely something people need to plan and budget for when considering early retirement.

comment by alienist · 2015-01-20T09:09:45.943Z · LW(p) · GW(p)

Graduate school could serve the role of a holding pattern for you to figure out what it is you actually want to do. I think it's possible to become a quant or an actuary with a MSF or other master's degree.

Will they hire you with a Master's in mathematics? Nearly everyone knows that a Master's in math means "I quit or failed out of my PhD program", which generally doesn't reflect well on you.

comment by [deleted] · 2015-01-13T22:54:51.918Z · LW(p) · GW(p)

Short answer: business

Long answer: The high-paying, in-demand jobs mostly fall into four categories right now: business, technology, engineering, and health care. Health care would be the toughest switch for you from where you are right now, as you'd nearly have to get a 2nd major to get into a grad program there. Engineering would probably require graduate school since your degree isn't in engineering, and I'm not sure how easy it is for a non-engineering major to go that route. That leaves business and technology, and my rough guess from your description is that you would prefer business to technology. You would most likely be working in finance, accounting, or data analysis. A lot of this is just doing basic work with Excel spreadsheets all day long. Those are the types of jobs I would recommend looking into.

comment by [deleted] · 2015-01-12T18:54:08.509Z · LW(p) · GW(p)

Hey Rowan,

The way to figure this out is to work backwards. Find people who have the ideal day you want, with your strengths and skills, then work backwards to deconstruct their careers.

Use that to come up with a list of potential careers, then talk to people in that career (find them using LinkedIn) to answer a few questions:

  1. Is the demand for this career going up or down?
  2. What are the biggest surprises I should watch out for?
  3. What does a typical day look like? Would I enjoy it?
  4. What would my biggest wins in college be in terms of skills, network, credibility, and projects that would allow me to quickly land a job when I get out?

I put up a video on this process here: https://www.youtube.com/watch?v=u6sXNR7kL-c&list=UUCi-drAVuy8g4N8TfODHgUQ

I'd also be happy to chat with you about any further questions you may have: http://selfmaderenegade.net/lets-chat/

comment by sediment · 2015-01-12T09:25:20.233Z · LW(p) · GW(p)

Reposting this because I posted it at the very end of the last open thread and hence, I think, missed the window for it to get much attention:

I'm vegetarian and currently ordering some dietary supplements to help, erm, supplement any possible deficits in my diet. For now, I'm getting B12, iron, and creatine. Two questions:

  • Are there any important ones that I've missed? (Other things I've heard mentioned but of whose importance and effectiveness I'm not sure: zinc, taurine, carnitine, carnosine. Convince me!)
  • Of the ones I've mentioned, how much should I be taking? In particular, all the information I could find on creatine was for bodybuilders trying to develop muscle mass. I did manage to find that the average daily turnover/usage of creatine for an adult male (which I happen to be) is ~2 grams/day - is this how much I should be taking?
Replies from: harshhpareek, Fluttershy, Adele_L, Baughn, Dorikka, ausgezeichnet, None
comment by harshhpareek · 2015-01-12T19:43:34.496Z · LW(p) · GW(p)

I'm a vegetarian and I looked into this stuff a while back. The Examine.com page What beneficial compounds are primarily found in animal products? is a useful reference with sources and includes the ones you wrote above. An older page with some references is this one.

I currently supplement with a multivitamin (this one -- Hair, Skin and Nails), creatine, and occasionally Coenzyme Q-10 and choline. You didn't mention the last two, but I have subjectively felt they increase alertness. I (hopefully) get my Omega-3/6 fatty acids from cooking oil. I had a basic panel done and found I was deficient in calcium (probably due to my specific diet, but it is worth mentioning) and B12. So I supplement calcium too.

I do regular exercise (usually bodyweight and dumbbells), and I had disappointing results without whey protein and creatine supplementation. Excessive amounts of creatine (look up "loading") are recommended for bodybuilders, but 5g/day is recommended for vegetarians. See gwern's review and the examine.com review. The examine.com review mentions that the fear of this compound is irrational and recommends 5g a day for everyone, pointing out that creatine would have been labeled a vitamin if it weren't produced in the body. (Excessive creatine causes stomach upsets, but I wasn't able to find the dose at which this happens, and I've never experienced it myself.)

I also take a fiber supplement, Metamucil. This one isn't vegetarian-specific, but I highly recommend it.

Replies from: Lumifer, sediment
comment by Lumifer · 2015-01-12T19:51:07.969Z · LW(p) · GW(p)

I (hopefully) get my Omega-3/6 fatty acids from cooking oil.

From cooking oil you get too much Omega-6 and not enough Omega-3.

Replies from: harshhpareek
comment by harshhpareek · 2015-01-12T20:06:25.025Z · LW(p) · GW(p)

I haven't put sufficient effort into identifying healthy cooking oils. I currently use Crisco's Blended Oil supplemented with Omega-3. The question is if it is supplemented in the right amount, and that information is not provided.

Animal fats are low in Omega-6 but I think the Omega-3:6 ratio is a problem for meat-eaters too.

comment by sediment · 2015-01-13T18:17:02.499Z · LW(p) · GW(p)

I'm a vegetarian and I looked into this stuff a while back. The Examine.com page What beneficial compounds are primarily found in animal products? is a useful reference with sources and includes the ones you wrote above. An older page with some references is this one.

Thanks, this looks good. The sort of thing I was after.

I had a basic panel done

I've never heard this expression! I wonder whether that's just transatlantic terminology variation. Will look into whether I can get this on the NHS.

Excessive amounts of creatine (look up "loading") are recommended for bodybuilders, but 5g/day is recommended for vegetarians. See gwern's review and the examine.com review. The examine.com review mentions that the fear of this compound is irrational and recommends 5g a day for everyone, pointing out that creatine would have been labeled a vitamin if it weren't produced in the body.

Perfect; thanks.

comment by Fluttershy · 2015-01-12T21:32:57.935Z · LW(p) · GW(p)

I have been vegetarian for three years, and haven't taken any supplements consistently throughout that period of time. The last time I had a blood panel done, I didn't have any mineral deficiencies, at least. I am by no means against taking supplements, but my impression is that they aren't fully necessary for vegetarians who have a well-balanced diet.

I did take B12 for a few months when I was experimenting with reducing my intake of eggs and milk, though I eventually decided that I really liked eggs and milk, and consequently stopped taking B12. I've recently started taking CoQ10 because RomeoStevens advocated doing so here.

In the past couple of years, I have considered becoming flexitarian (i.e. 98% vegetarian) or pescatarian, mostly for convenience and health reasons, respectively, though I've elected to stay vegetarian for now. This is partly because I'm used to being vegetarian, partly because I've accidentally built vegetarianism into my self-identity, and partly because of the normal reasons people give for being vegetarian (health, environmental, and compassion-towards-animals type reasons).

Added 6/29/2015: Apparently, I haven't been getting enough fiber for at least the last couple of months, but that is due to me being lazy about my diet, rather than any shortcoming of vegetarianism.

Replies from: RomeoStevens
comment by RomeoStevens · 2015-01-12T21:56:49.563Z · LW(p) · GW(p)

You might consider the vegetarian case for eating bivalves. It's a way of getting the benefits of pescetarianism with fewer moral uncertainty issues.

Replies from: sediment
comment by sediment · 2015-01-13T18:04:38.200Z · LW(p) · GW(p)

Yes, as of a few months ago when I researched the issue, I am OK with eating bivalves. I just haven't gotten around to doing so yet.

comment by Adele_L · 2015-01-12T19:39:59.264Z · LW(p) · GW(p)

Vitamin K2. Vitamin K1 is produced by plants, and K2 is produced by animals and bacteria. They have very different functions in the human body, and you need them both. Supplements and fortified food are almost always K1, unless you look for K2 specifically.

Vitamin K2 is necessary for some proteins which modulate calcium in your body. Supplementing it has been found to protect both against osteoporosis and heart/artery calcification.

comment by Baughn · 2015-01-12T11:28:05.289Z · LW(p) · GW(p)
  • You should ask a dietician, not us.
  • There are many other vegetarians; this seems like it should be a solved problem.
Replies from: sediment, EStokes
comment by sediment · 2015-01-13T18:09:08.999Z · LW(p) · GW(p)
  • You should ask a dietician, not us.

I know plenty of LW people are interested in nutrition; it's within the realms of possibility that one of them might know enough about what I'm asking to be able to give me a quick summary of what I'm after. As for asking a dietician, I've never met one and wouldn't know how to go about getting hold of one to ask. (I'm also not totally sure I'd trust J. Random Dietician to have a good understanding of things like what counts as good evidence for or against a proposition. Nutrition is a field in which it's notoriously difficult to prove anything.)

  • There are many other vegetarians; this seems like it should be a solved problem.

Well, erm, yes, that's why I'm asking about it. (I don't go around making posts asking for proofs that P=NP, for example.)

comment by EStokes · 2015-01-12T17:17:44.049Z · LW(p) · GW(p)

I disagree about asking a dietician and not LW.

Replies from: JoshuaZ
comment by JoshuaZ · 2015-01-12T20:29:07.904Z · LW(p) · GW(p)

Can you expand on your reasoning?

Replies from: EStokes, None
comment by EStokes · 2015-01-16T14:55:37.172Z · LW(p) · GW(p)

FrameBenignly's comment reflects my opinion well

comment by [deleted] · 2015-01-13T22:37:06.827Z · LW(p) · GW(p)

A dietician can get licensed with just a bachelor's degree in nutrition. A well-informed layman will often have more informed views on the issue. Also, communities like this will select against bad information. However, a fitness forum that also has a commitment to rejecting errors will have even better answers as they will specialize in this area.

comment by Dorikka · 2015-01-12T17:17:55.328Z · LW(p) · GW(p)

Might be useful to enter your typical intake on cron-o-meter and check for deficiencies. If I had to guess, you might be low on choline, but you shouldn't supplement based on my wild guess. :)

comment by ausgezeichnet · 2015-01-14T00:57:58.778Z · LW(p) · GW(p)

To piggyback on this:

I'm currently a vegetarian and have been for the past three years, before which the only meat I consumed was poultry and fish. I've been reading a lot about the cognitive benefits of consuming fish (in particular, the EPA/DHA fatty acids); unless I'm mistaken (please tell me if I am), EPA and DHA cannot be obtained from vegetables alone. ALA can be obtained from seaweed, and while our bodies convert ALA into EPA, we do it very slowly and inefficiently, and ALA wouldn't give us any DHA.

I looked into fish oil pills. Apparently pills contain much less EPA/DHA than fish meat does, and it's more cost-effective to eat fish (depending on which species, of course)... and based on other research, I'd expect that our body would extract more fatty acids from a fillet than from a pill with the same quantity of acids.

I still have a visceral (moral?) opposition to eating fish and supporting horrendous fishing practices, and I worry about where fish I might be eating would come from. If it's coming from the equivalent of a factory farm, then I don't want to eat it. On that point, I've read many articles suggesting that extracting fish oil harms certain species of fish.

Ideally there would be a vegetarian, eco-friendly, and health-friendly source of EPA/DHA. Is there?

In the meanwhile, I will try fish again and see if it has any noticeable effect on me. I'll continue to investigate whether vegetarian or eco-friendly sources of EPA/DHA exist, especially if I notice any positive effects from eating fish.

And, the undermining question: does not having any EPA/DHA really matter? (I think it does, since it apparently boosts cognitive function, and I want my brain to operate at its maximum potential; but maybe I'm wrong.)

Replies from: Falacer
comment by Falacer · 2015-01-14T17:35:18.880Z · LW(p) · GW(p)

I'm in the same boat as you with regards to whether EPA/DHA has a bigger effect than ALA, but I was convinced enough to try to find some when I became vegetarian last year.

If you google "algal dha together" you'll find what I'm taking - meeting your criteria of vegetarian (vegan), eco-friendly and health-friendly (with aforementioned uncertainty)

ALA can also be found in flaxseed, soy/tofu, walnut and pumpkin, so you needn't stick to seaweed if you only want ALA.

comment by [deleted] · 2015-01-13T22:41:25.610Z · LW(p) · GW(p)

I once did a 3-day analysis of all foods consumed, and found I was within optimal limits on just about everything. I was high on salt and low on manganese. It's quite possible to get everything you need using a vegetarian diet, and your particular needs will be unique to you.

comment by advancedatheist · 2015-01-12T01:09:26.773Z · LW(p) · GW(p)

Sebastian Seung’s Quest to Map the Human Brain By GARETH COOK JAN. 8, 2015

http://www.nytimes.com/2015/01/11/magazine/sebastian-seungs-quest-to-map-the-human-brain.html?ref=magazine&_r=1

Q&A with Zoltan Istvan, Transhumanist Party candidate for the US President

http://youtu.be/Xk4olY4qIjg

Replies from: None, ilzolende, knb
comment by [deleted] · 2015-01-13T04:21:47.888Z · LW(p) · GW(p)

Sebastian Seung’s Quest to Map the Human Brain By GARETH COOK JAN. 8, 2015

http://www.nytimes.com/2015/01/11/magazine/sebastian-seungs-quest-to-map-the-human-brain.html?ref=magazine&_r=1

I recently attended a biology conference where, among many other things, I got to see a talk by Dr. Jeff Lichtman of Harvard University on brain connectomics research.

It's very interesting stuff. He has produced a set of custom equipment that can scan brain tissue (well, any tissue, but he's interested in brain tissue) at 5x5x30 nm resolution. His super-duper, one-of-a-kind electron microscope can at this point scan about 0.3 cubic millimeters in 5 weeks, if I'm not mistaken. It spits out a dataset in the fractional-petabytes range. He's had one such dataset for a full 3-4 years but is encountering major problems with analysis - tracing cells and fibers over their full path is a very difficult problem. Automatic cell-tracer programs are good enough over the number of slices that makes up a cell body, but utterly fail at reliably identifying things like synaptic vesicles and at tracing fibers over their full lengths. To the point that most of the good data he showed us has been manually annotated by graduate students and undergrads working in his laboratory. Hence the above link's mention of gamifying the task to try to crowdsource it.

Interestingly, he described his equipment as a 'tissue observatory'. He thinks that neuroscience should take a page from astronomy and just see what the heck is out there. He thinks they are trying to make detailed hypotheses about function and structure and everything else on far far too little data right now and we need a lot more data on the actual structures before we can be confident about much about them other than the mere fact that they correspond to function. To the point that during his talk, when showing the 30 micron wide completely-perfectly-annotated chunk of his first dataset that he has published much analysis on the thousands of synapses of (something like 0.01% of his first raw dataset he has had for years) he showed the exploded figure of hundreds of cellular fibers and the table of dozens of parameters for thousands of synapses and said "And here it is... incredibly beautiful and so far totally useless." His point being he wants to annotate more of it, and use it as a base to make actual informed inferences and hypotheses about connection formation and network structure. He also notes that his datasets cannot tell apart different neurotransmitter producing cells, gene expression, or the presence of different kinds of proteins pre or post synaptically.

He (knowingly, for humor and exasperation's sake) overstates the case there about 'uselessness' even at current levels of annotation - you can focus in on pieces of the dataset and annotate them for your own purposes. Someone else at the conference presented work in which they took his dataset (which is, after all, revolutionary in terms of the sheer amount of fine 3-dimensional data it has on the structure of so many different cell types in their normal living context) and resolved longstanding questions about the topology of certain intracellular structures. He already has interesting statistical information about the connections between the cells in his perfectly-annotated segment of data, showing that if two cells are connected they tend to be connected in multiple places. He also appears to have found cell types in his data he had no idea existed and still doesn't know what they are, and noted that the big spine-based synapses that have been well studied so far represented less than a third of the synapses in the perfectly-annotated chunk. There are apparently other people lining up to use his equipment too, and if I recall correctly he said someone is hoping to do an entire fruit-fly brain, much like was mentioned in the above link.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2015-01-13T15:20:01.087Z · LW(p) · GW(p)

He thinks that neuroscience should take a page from astronomy and just see what the heck is out there. He thinks they are trying to make detailed hypotheses about function and structure and everything else on far far too little data right now and we need a lot more data on the actual structures before we can be confident about much about them other than the mere fact that they correspond to function.

Upvoted especially for this.

comment by ilzolende · 2015-01-12T05:00:05.480Z · LW(p) · GW(p)

I checked out the Transhumanist Party site, and they didn't have a list of stuff Zoltan Istvan would do if elected, not even many applause lights. They also clearly haven't hired a web designer. They don't have a voting guide for different ballot measures. Finally, I'm tempted to vote for a 3rd-party candidate as my congressional representative, and they seem to only have a presidential candidate listed. I don't think Istvan has any plans for what he would do as President, and he doesn't seem to want to be elected.

Replies from: knb, Alsadius
comment by knb · 2015-01-13T03:45:21.646Z · LW(p) · GW(p)

The Transhumanist Party website is clearly the same template as Istvan's personal website. He is trying to sell a book, so my guess is the Transhumanist Party is a publicity stunt to sell his book.

Replies from: ilzolende
comment by ilzolende · 2015-01-13T07:37:18.430Z · LW(p) · GW(p)

Thanks! It did smell like a publicity stunt, but I wasn't sure what it was trying to promote, since it wasn't promoting policy changes or some other political goal very well. I'm not sure having a presidential campaign that obviously isn't trying to get anyone elected is the best way to sell books, though.

Replies from: knb
comment by knb · 2015-01-13T08:19:00.908Z · LW(p) · GW(p)

I'm not sure having a presidential campaign that obviously isn't trying to get anyone elected is the best way to sell books, though.

I have a gut feeling that a lot of long-shot campaigns are more about publicity/book sales/speaking fees than a genuine desire to be elected.

To be fair to Istvan, I don't think his motive is primarily financial, since he is giving away a free Kindle version of his book.

Replies from: ilzolende, advancedatheist
comment by ilzolende · 2015-01-13T20:40:31.071Z · LW(p) · GW(p)

In that case, maybe the goal is not to sell books, but rather to publicize them and the ideologies they contain.

Replies from: emr
comment by emr · 2015-01-14T03:15:45.930Z · LW(p) · GW(p)

Ron Paul and Ralph Nader (many-time presidential candidates with no chance of winning) are concrete examples of this in the US political system.

Both have done decently with respect to speaking fees and personal fame, but of course so do "genuine" candidates!

comment by advancedatheist · 2015-01-13T16:58:06.056Z · LW(p) · GW(p)

Istvan doesn't seem to hurt for money if he can afford to live in the Bay Area, and he has a vineyard in Argentina.

comment by Alsadius · 2015-01-13T16:52:43.507Z · LW(p) · GW(p)

I don't think Istvan has any plans for what he would do as President, and he doesn't seem to want to be elected.

Why would he bother unless he's a Republican or Democrat?

comment by knb · 2015-01-13T08:06:42.890Z · LW(p) · GW(p)

I just downloaded the free Kindle version of Istvan's book, and it seems he's advocating a fusion of Objectivism/egoism and Transhumanism. Transhumanism and objectivism would seem to go together very naturally from a philosophical perspective, yet it seems to me that the great majority of transhumanists are left-liberals.

Replies from: RedErin
comment by RedErin · 2015-01-13T18:55:17.782Z · LW(p) · GW(p)

I watched the Joe Rogan interview with him where he disavowed his book's political leanings. I'm a left-liberal who used to hate him because of his book, but after watching that interview I like him.

https://www.youtube.com/watch?v=9grWo5ZofmA

comment by Furcas · 2015-01-17T17:24:19.409Z · LW(p) · GW(p)

Edge.org 2015 question: WHAT DO YOU THINK ABOUT MACHINES THAT THINK?

There are answers by lots of famous or interesting scientists and philosophers, including Max Tegmark, Nick Bostrom, and Eliezer.

Replies from: iarwain1
comment by iarwain1 · 2015-01-18T02:02:14.497Z · LW(p) · GW(p)

What I find most interesting about the responses is how many of them state an opinion on the Superintelligence danger issue either without responding at all to Bostrom's arguments, or based on counter-arguments that completely miss Bostrom's points. And this after the question explicitly cited Bostrom's work.

comment by cata · 2015-01-15T04:50:33.365Z · LW(p) · GW(p)

I'm a programmer with a fair amount of reasonably diverse experience, e.g. C, C#, F#, Python, Racket, Clojure and I'm just now trying to learn how to write good Java. I think I understand most of the language, but I don't understand how to like it yet. Most Java programmers seem to basically not believe in many of the ways I have learned to write good software (e.g. be precise and concise, carefully encapsulate state, make small reusable modular parts which are usually pure functions, REPL-driven development, etc. etc.) or they apply them in ways that seem unfortunate to me. However, I feel foolish jumping to the popular conclusion that they are bad and wrong.

I would really like a book -- or heck, just a blog post -- which is like "Java for Functional Programmers" that bridges the gap for me and talks about how idiomatic Java differs from the style I normally consider good and readable and credibly steelmans the Java way. Most of my coworkers either don't like the Java style, only know the Java style, or just don't care very much about this kind of aesthetic stuff, so none of them have been very good at explaining to me how to think about it.

Does this book exist?

Replies from: Viliam_Bur, Daniel_Burfoot
comment by Viliam_Bur · 2015-01-15T10:21:17.785Z · LW(p) · GW(p)

I don't know Java books, but I would like to react to this part anyway:

Most Java programmers seem to basically not believe in many of the ways I have learned to write good software (e.g. be precise and concise, carefully encapsulate state, make small reusable modular parts which are usually pure functions, REPL-driven development, etc. etc.) or they apply them in ways that seem unfortunate to me.

There are many more bad programmers than good programmers, so any language that is sufficiently widely used is necessarily a language mostly used by bad programmers. (Also, if the programming language is Turing-complete, you can reinvent any historical bad programming practice in it.) On the other hand, there are often genuine mistakes in the language design, or in the standard libraries. So here is my opinion on which is which in Java:

precise and concise -- sorry, no can do. Using proper formatting, merely declaring a read-only integer property of a class will cost you five lines, not including whitespace (1 line for the declaration, 1 line for the assignment in the constructor, 3 lines for the read accessor). (EDIT: Removed some obsolete info.)
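For concreteness, a minimal sketch of that five-line count (class and field names are invented; the constructor and class skeleton are assumed to exist anyway, so only the commented lines are counted):

class Account {
    private final int balance;           // 1: declaration

    Account(int balance) {
        this.balance = balance;          // 2: assignment in the constructor
    }

    public int getBalance() {            // 3-5: the three-line read accessor
        return balance;
    }
}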

carefully encapsulate state -- that's what the "private" and "public" keywords are for. I don't quite understand what could be the problem here (other than bad programmers not using these keywords; or the verbosity).

make small reusable modular parts which are usually pure functions -- this is not how Java is typically used, but it can be done. It has the garbage collector. It has immutable types; and for the mutable ones, you could create an immutable wrapper class (yes, a lot of writing again). So you can write a module that takes immutable values as inputs and returns them as outputs, which is more or less what you want. The only problem is that "immutability" is not recognized by the language; you only know that a class is immutable by reading the documentation or looking at the source code; you cannot have the compiler check it for you.
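A minimal sketch of such a wrapper, assuming you want a read-only view of a mutable List (the class name is invented, not from any standard library):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

final class ImmutableNames {
    private final List<String> names;

    ImmutableNames(List<String> source) {
        // Defensive copy plus unmodifiable view, so later changes to
        // 'source' cannot leak in and callers cannot mutate the contents.
        this.names = Collections.unmodifiableList(new ArrayList<>(source));
    }

    public String get(int index) { return names.get(index); }
    public int size() { return names.size(); }
}

Note that, as said above, nothing tells the compiler this class is immutable; it is immutable only by convention.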

REPL-driven development -- it could be technically possible to make an interactive functional shell, and maybe someone already did it. But that's definitely not how Java is typically used. A slightly more traditional solution, although not exactly what you want, would be to use the Groovy language for the interactive shell. (Groovy is more or less a "scripting Java". Very similar to Java, with minor differences; can directly call functions from the Java program it is included in.) The traditional solution is to do unit testing with JUnit.

As a beginner, avoid Java EE like hell. That is the really ugly part. Stay with Java SE until the Stockholm syndrome kicks in and you develop feelings for Java, or until you decide you do not want to go this way.

Feel free to give me a short example in other programming language or pseudocode, and I will try to write it in Java in a functional-ish style.

Replies from: Daniel_Burfoot, cata
comment by Daniel_Burfoot · 2015-01-15T15:04:23.398Z · LW(p) · GW(p)

Without lambda syntax (yes, it is promised to be included in the next version of Java...)

Lambda syntax is definitely present in the currently available version of Java. I use it on a daily basis.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2015-01-16T11:30:13.559Z · LW(p) · GW(p)

Oops. That version has been out for almost a year. I missed it because we do not use it at work.

Embarrassing to find this out after pretending to be a Java expert. Does not add much credibility. :D

Replies from: Daniel_Burfoot
comment by Daniel_Burfoot · 2015-01-17T20:30:00.037Z · LW(p) · GW(p)

No worries, you obviously know what you're talking about in general. I just wanted to make sure false impressions don't spread.

comment by cata · 2015-01-15T19:37:58.901Z · LW(p) · GW(p)

I might try Groovy for the REPL stuff -- I was trying Clojure before, but I ran into problems getting the dependencies and everything loaded into the REPL (I work on a big project that uses Gradle as a build system, and Clojure doesn't usually use Gradle).

carefully encapsulate state -- that's what the "private" and "public" keywords are for. I don't quite understand what could be the problem here (other than bad programmers not using these keywords; or the verbosity).

One pattern I have in mind here: if I have some algorithm to perform that has some intermediate state, I will break it down into functions, pass the state from function to function as necessary, and wind up composing five or six functions to get from the start to the end. Java programmers often seem to instead make a class with the five or six functions as methods, set all the state as fields on the class (which the methods then initialize, mutate, and consume), and call the methods in the right order to get from the start to the end. That is a lot harder for me to read, because unless there is also great documentation I have to read very closely to understand the contract of each individual method.
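As a rough sketch of the first style (all names invented; assumes Java 8 for Files.readAllLines), with each intermediate result passed explicitly rather than stored in fields:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

final class TotalPipeline {
    static List<String> readLines(Path file) throws IOException {
        return Files.readAllLines(file);                   // step 1: raw lines
    }

    static List<Integer> parse(List<String> lines) {
        List<Integer> numbers = new ArrayList<>();
        for (String line : lines) {
            numbers.add(Integer.parseInt(line.trim()));    // step 2: parsed values
        }
        return numbers;
    }

    static int total(List<Integer> numbers) {
        int sum = 0;
        for (int n : numbers) {
            sum += n;                                      // step 3: fold to one result
        }
        return sum;
    }

    public static void main(String[] args) throws IOException {
        // Compose the steps; the contract of each is just its signature.
        System.out.println(total(parse(readLines(Paths.get(args[0])))));
    }
}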

(I'm also incidentally confused about when in Java people like to use a dependency injection tool like Guice and when people like to pass dependencies explicitly. I don't think I understand the motivation for the tool yet.)

Replies from: Viliam_Bur
comment by Viliam_Bur · 2015-01-16T11:50:08.732Z · LW(p) · GW(p)

Java programmers are usually familiar with procedural programming, not functional. The older ones are probably former C/C++ programmers, so they mostly write C/C++ code using Java syntax. That probably includes most textbook authors.

Nothing in Java prevents you from having intermediate states, and composing the functions. You just have to specify the data type for each intermediate state, which may require creating a new class (which is a lot of typing), or typing something like Pair<A, List<B>>, so yeah, there are inconveniences.

As a crazy creative solution, I could imagine wrapping the "class with the five or six functions" into multiple interfaces. Something like this:

Old version:

class Something<T1, T2, T3, R> {
    void step1(T1 param1) { ... }
    void step2(T2 param2) { ... }
    void step3(T3 param3) { ... }
    R getResult() { return ...; }
}

New version:

interface AfterStep2<T3, R> {
    R step3(T3 param3);
}

interface AfterStep1<T2, T3, R> {
    AfterStep2<T3, R> step2(T2 param2);
}

class Something<T1, T2, T3, R> implements AfterStep1<T2, T3, R>, AfterStep2<T3, R> {
    AfterStep1<T2, T3, R> step1(T1 param1) { ...; return this; }
    AfterStep2<T3, R> step2(T2 param2) { ...; return this; }
    R step3(T3 param3) { ...; return ...; }
}

This would force users to write things like:

R result = something.step1(x1).step2(x2).step3(x3);

I also admit my colleagues would kill me after doing this, and the jury would probably free them.

comment by Daniel_Burfoot · 2015-01-15T15:15:35.364Z · LW(p) · GW(p)

(e.g. be precise and concise, carefully encapsulate state, make small reusable modular parts which are usually pure functions, REPL-driven development, etc. etc.)

I am a Java programmer, and I believe in those principles, with some caveats:

  • Java is verbose. But within the constraints of the language, you should still be as concise as possible.
  • Encapsulation and reusable modular design is a central goal of the language and OO design in general. I think Java achieves the goal to a significant degree.
  • Instead of using a REPL, you do edit/compile/run loops. So you get two layers of feedback, one from the compiler and the other from the program itself.
  • Even though Java doesn't emphasize functional concepts, you can still use those concepts in Java. For example, you can easily make objects immutable just by supplying only a constructor and no mutator methods (I use this trick regularly).
  • Java 8 is really a big step forward: we can now use default interface methods (i.e. mixins) and lambda syntax with collection operations.
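To illustrate that last point, a minimal sketch (class name and data invented) of lambda syntax used with the Java 8 stream operations on collections:

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

class LambdaSketch {
    public static void main(String[] args) {
        List<String> words = Arrays.asList("alpha", "beta", "gamma", "delta");
        List<Integer> lengths = words.stream()
                .filter(w -> w.length() > 4)      // lambda predicate
                .map(String::length)              // method reference
                .collect(Collectors.toList());
        System.out.println(lengths);              // prints [5, 5, 5]
    }
}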

I don't understand how to like it yet

My feeling towards Java is just that it's a very reliable old workhorse. It does what I want it to do, consistently, without many major screwups. In this sense it compares very strongly to other technology tools like MySQL (what, an ALTER TABLE is a full table copy? What if the table is very large?) and even Unix (why can't I do some variant of ls piped through cut to get just the file sizes of all the files in a directory?)

Replies from: jkaufman
comment by jefftk (jkaufman) · 2015-01-18T00:18:25.428Z · LW(p) · GW(p)

why can't I do some variant of ls piped through cut to get just the file sizes of all the files in a directory?

Nerd sniped. After some fiddling, the problem with ls | cut is that cut in delimiter mode treats multiple spaces in a row as multiple delimiters. You could put cut in bytes or character mode instead, but then you have the problem that ls uses "as much as necessary" spacing, which means that if the largest file in your directory needs one more digit to represent then ls will push everything to the right one more digit.

If you want to handle ls output then awk would be easier, because it collapses multiple successive delimiters [1] but normally I'd just use du [2]. Though I have a vague memory that du and ls -l define file size differently.

(This doesn't counter your point at all -- unix tools are kind of a mess -- but I was curious.)

[1] ls -l | awk '{print $5}'
[2] du -hs *

Replies from: Douglas_Knight
comment by Douglas_Knight · 2015-01-28T20:46:46.409Z · LW(p) · GW(p)

Your vague memory is probably that ls -l gives file size, while du gives "disk usage" - the number of blocks used. On my computer the blocksize is 4k, so du only reports multiples of this size. (In particular, the default behavior is to report in units of the historical blocksize, so it only reports multiples of 8.)

A huge difference that I doubt you'd forget is how they define the size of directories - just the metadata vs. recursively. But that means du is expensive. I use it all the time, but not everywhere.

comment by Capla · 2015-01-12T22:58:34.633Z · LW(p) · GW(p)

I'm going to CFAR, this week. I'll have pretty much a full day before the workshop, where I have nothing planned. Are there any cool rationalist things I should see or do in SF? Or even non-rationalist, but worthwhile things?

Are there any "landmarks" where I can just drop by (or maybe call ahead first)? I don't suppose MIRI welcomes merely -curious tourists /Pilgrims.

Replies from: ilzolende
comment by ilzolende · 2015-01-13T00:20:21.076Z · LW(p) · GW(p)

I'd suggest the Exploratorium science museum, but this may be a case of Typical Mind Fallacy. There are lots of museums in SF, so you should be able to find one targeting your interests.

If you're physically able to, you should walk across the Golden Gate Bridge and visit the visitor center.

If you know any employees, you may be able to get tours of different organizations. I was last in SF with a college professor who was able to get tours from his former students. There's not really much cost to just asking MIRI for permission to visit, even if you don't know anyone there.

comment by tog · 2015-01-12T18:20:02.825Z · LW(p) · GW(p)

Random unsolicited advice:

Here’s a self-improvement tip that I’ve come up with and found helpful. It works particularly well with bad habits, which are hard to fix using other self-improvement techniques as they’re often unconscious. To take one example, it’s helped improve my posture significantly.

1) List your bad habits. This is a valuable exercise in its own right! Examples might include bad posture (or, more concretely, crossing your legs), mumbling, vehicular manslaughter, or something you often forget to do.

2) Get in the habit of noticing when they occur, even if it’s after the fact. You can regularly check whether they have occurred at a time that’s convenient for you, such as at lunch or in the evening. Ideally, though, you should try to notice them soon after they occur, for reasons that will become clear.

3) Come up with a punishment. The point of this is not to create an incentive not to lapse (you could experiment with that, but I’m not sure whether it will work, as bad habits are rarely consciously chosen). Instead, it’s to train yourself by Pavlovian conditioning - training "system one", in Daniel Kahneman’s terms. Examples of punishments would be literally slapping yourself on the wrist, pinching yourself, or costing your HabitRPG character health points (see https://habitrpg.com/ ).

Replies from: RomeoStevens, emr, ilzolende, tog
comment by RomeoStevens · 2015-01-12T21:59:26.851Z · LW(p) · GW(p)

Positive reinforcement works better than negative. If noticing is followed by a punishment, you are disincentivizing yourself from noticing. This is bad because noticing is its own superpower. Instead, maybe try congratulating yourself for noticing, and then replacing the negative habit with some other reward. Eating too many gummy bears in the short term is probably worth it to repair bad habits in the long term, for instance.

comment by emr · 2015-01-14T02:30:17.347Z · LW(p) · GW(p)

How long have you been using this?

Replies from: tog
comment by tog · 2015-01-15T10:08:49.464Z · LW(p) · GW(p)

Just under a year, and I've been using it for posture (a really tough habit to break, at least for me), so I have a good bit of data.

Replies from: bbleeker
comment by Sabiola (bbleeker) · 2015-01-16T13:09:18.113Z · LW(p) · GW(p)

If the problem is bad posture while sitting at the computer, you could try removing your chair's back and armrests. Once I learned how to sit right (with the Alexander Technique), I discovered that back and armrests are like magnets for my body, and they also make it quite easy to sit in a bad posture for a long time before noticing. Without that support, though, a bad posture becomes uncomfortable much faster, and I soon notice and straighten up.

comment by ilzolende · 2015-01-13T03:00:39.545Z · LW(p) · GW(p)

On the subject of self-improvement and self-control: My big tip for achieving goals is to set goals you actually want to achieve, not goals that you want people to think you want to achieve. For example, if you want to sit with your legs crossed, not only in the immediate term but also upon weighing the advantages and disadvantages of doing so, you're not likely to succeed in trying to make yourself sit differently.

For example, my goal in anger management is not "always stay calm, even when I stand to personally gain by being angry." My goal is "avoid being more angry than I want to be." Thinking things like "it is okay to be angry, but I don't want to experience desires to do things that I think are immoral, so I should calm down to the point of not wanting to punch anyone" has been far more effective than thinking things like "you're not allowed to be angry right now, calm down!".

comment by tog · 2015-01-15T10:15:59.989Z · LW(p) · GW(p)

Amusing product you could use with this - the Pavlok, which gives you electric shocks ( http://pavlok.com/ )

There was also a Kickstarter device that drew your blood as a penalty, but they banned it.

comment by ilzolende · 2015-01-12T02:00:39.336Z · LW(p) · GW(p)

Death by Robot by Robin Marantz Henig

Part of the hidden ethicist agenda to reveal everyone's systems of morality via discussion of self-driving cars.

comment by Omid · 2015-01-16T03:44:38.326Z · LW(p) · GW(p)

Is there any way to block distracting software on my computer? There are a blue million apps that will block websites, but I can't find any that will stop me from playing games I've installed. Ideally, I'd like some software that lets me play my games, but only after a 10-minute wait. But for now I'd settle for anything that can restrict my access to games without uninstalling them entirely.

Replies from: Risto_Saarelma, jaime2000, None, iarwain1
comment by Risto_Saarelma · 2015-01-16T06:14:55.513Z · LW(p) · GW(p)

Dual boot to Linux for working and to Windows for the games?

comment by [deleted] · 2015-01-26T19:26:45.173Z · LW(p) · GW(p)

If you have somebody who can hold on to your admin password, Windows' parental controls could help.

comment by iarwain1 · 2015-01-18T00:49:24.674Z · LW(p) · GW(p)

Password Door, or Microsoft Family Safety (for Windows, available through the parental controls; I think you need to have someone else administer it, though).

comment by Error · 2015-01-16T00:54:20.586Z · LW(p) · GW(p)

I'm looking for an old post of Eliezer's. If I remember the post correctly, he was commenting that a lot of the negative reaction to evopsych might come from having first encountered it in the hands of dumb internet commentators, instead of in the book he was recommending.

I don't remember the title he referenced, and the search function is failing me. Can anyone point me in the right direction?

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2015-01-16T01:30:44.591Z · LW(p) · GW(p)

I can't find the exact post you're talking about, but the book involved was probably The Adapted Mind, since Eliezer often praises it in terms like those.

Replies from: Error
comment by Error · 2015-01-16T02:19:49.614Z · LW(p) · GW(p)

Thanks. Having the title was enough to find the post. I turned out to be looking in the wrong place. The comment was on SSC, not LW -- which I find amusing given that you were the one to respond.

The book is irritatingly expensive. I might read The Moral Animal instead. Both seem to be widely recommended around here. Searching on the former consistently turns up the latter as well, often in the same breath.

Motivation: I noticed that I can't distinguish between just-so stories and genuine evopsych insights. I think seeing a known example of it being done right might help fix that.

Replies from: ahbwramc, Alejandro1
comment by ahbwramc · 2015-01-16T15:37:30.651Z · LW(p) · GW(p)

The post was Polyamory is Boring btw, in case anyone else is curious.

comment by Alejandro1 · 2015-03-02T00:07:24.804Z · LW(p) · GW(p)

He actually said it beforehand in LW as well. Link.

comment by Barry_Cotter · 2015-01-13T14:14:07.423Z · LW(p) · GW(p)

I have heard that in economics and possibly other social sciences Ph.D. students can staple together three journal articles, call it a dissertation and get awarded their doctorate. But I've recently read "Publication, Publication" by Gary King, which I interpret as saying a very bright and hardworking undergraduate can write a quantitative political science article in the space of a semester, while carrying a normal class load.

This is confusing. Granted, Dr. King teaches at Harvard, so all his students are smart, and it's two students writing one paper, but this still seems insane. I'm guessing a full course load is around 6 classes a term, and people are expected to write a journal article (or a close approximation thereof) in a semester, when three of them will suffice for a Ph.D. - a degree that many very, very smart people fail out of.

Where am I confused? Is research not that hard, is a stapler thesis a myth, or are these class projects not strictly comparable to real papers?

http://gking.harvard.edu/classes/advanced-quantitative-political-methodology-government-2001-government-1002-and-e-2001

Abstract: I show herein how to write a publishable paper by beginning with the replication of a published article. This strategy seems to work well for class projects in producing papers that ultimately get published, helping to professionalize students into the discipline, and teaching them the scientific norms of the free exchange of academic information. I begin by briefly revisiting the prominent debate on replication our discipline had a decade ago and some of the progress made in data sharing since.

Citation: King, Gary. 2006. Publication, Publication, PS: Political Science and Politics 39: 119–125. Copy at http://j.mp/iTXtrg

Replies from: JoshuaZ, None, badger, Alsadius
comment by JoshuaZ · 2015-01-13T21:11:20.142Z · LW(p) · GW(p)

I don't know about the social sciences, but the situation in math isn't that far off. The short answer is that the papers done by undergraduates are real papers, but papers of the type and quality that would be stapleable into a thesis are different (higher quality, more important results) from the sort done in undergraduate research.

comment by [deleted] · 2015-01-14T13:48:39.283Z · LW(p) · GW(p)

There is usually more to a "PhD by publication" than just publishing any 3 articles and then submitting them for the degree.

A nice 2011 article in Times Higher Education describes what the process actually requires, at least in the UK. Most importantly, coherence: the articles must be on related themes, and additional supporting documentation on the order of 10k words is usually required to convert the independent publications into a coherent package that very often resembles a conventional thesis.

It's also informative to look over the recent discussion on the Thesis Whisperer blog - lots of comments from people in various disciplines about the realities of publication-based theses... and usually they describe them as more work than a conventional thesis.

For published papers like the one described by Gary King, it may be hard to put together a combination of them that meets an institution's criteria for a PhD by publication. It's not just the coherence part: there is usually also a requirement that a PhD make a novel contribution to the field -- and that is hard to justify with strictly replication-based approaches.

However, if the work follows King's suggestion to replicate and then make minimal changes ("make one improvement, or the smallest number of improvements possible to produce new results, and show the results so that we can attribute specific changes in substantive conclusions to particular methodological changes" - King p.120), a series of such publications on closely related themes starts to look a lot like a conventional PhD - although getting a paper through peer review is still quite a challenge. King's paper (and supplemental comments) can also be a useful guide for researchers outside academia who want to get published.

comment by badger · 2015-01-15T18:40:18.089Z · LW(p) · GW(p)

From an economics perspective, the stapler dissertation is real. The majority of the time, the three papers haven't been published.

It's also possible to publish empirical work produced in a few months. The issue is where that article is likely to be published. There's a clear hierarchy of journals, and a low ranked publication could hurt more than it helps. Dissertation committees have very different standards depending on the student's ambition to go into academia. If the committee has to write letters of rec to other professors, it takes a lot more work to be sufficiently novel and interesting. If someone goes into industry, almost any three papers will suffice.

I've seen people leave because they couldn't pass coursework or because they felt burnt out, but the degree almost always comes conditional on writing something and having well-calibrated ambitions.

comment by Alsadius · 2015-01-13T17:07:45.291Z · LW(p) · GW(p)

I've never been a grad student, so this is pure supposition, but...

I suspect that if you went into a PhD program and tried to hand in a thesis six months later, the response that you'd get from on high is "Ha ha, very funny. Come back in three years", and that this response would happen whether or not you produced something that's actually good enough to be a proper thesis. Profs know how long a doctorate is "supposed to" take, and doing it in a tenth of that time will set off alarm bells for them.

comment by Mollie · 2015-01-17T20:37:49.276Z · LW(p) · GW(p)

Is there a better search term than "self-modification," or a better place to look other than LW, for self-modification ideas/experiments, of the "when system 1 and system 2 are in conflict, listen to system 2" type? Any comments like "This particular thing worked for me and here's a link to it" are welcome.

comment by NancyLebovitz · 2015-01-13T15:28:24.809Z · LW(p) · GW(p)

Filing with the minimum of trivial impediments

This is a system designed especially for people who suffer from depression-- one of the symptoms is difficulty with making decisions, so the idea is to require as few decisions as possible-- for example, just file the envelope full of stuff from your bank instead of sorting out the advertising.

There's also minimization of the demands on memory-- for example, writing payments on bills.

The piece that really struck me was the recommendation of having a place for the stuff you're going to file, instead of letting it get scattered and lost.

comment by polymathwannabe · 2015-01-14T03:16:03.655Z · LW(p) · GW(p)

Forget Skynet: How Person Of Interest Depicts A Realistic A.I. Uprising

Replies from: MrMind
comment by MrMind · 2015-01-14T08:40:36.395Z · LW(p) · GW(p)

I'm in the middle of the third season. Are there spoilers in the link?

Replies from: polymathwannabe
comment by polymathwannabe · 2015-01-14T13:35:48.660Z · LW(p) · GW(p)

Many.

comment by Gram_Stone · 2015-01-19T00:13:05.686Z · LW(p) · GW(p)

I've never studied any branch of ethics, maybe stumbling across something on Wikipedia now and then. Would I be out of my depth reading a metaethics textbook without having read books about the other branches of ethics? It also looks like logic must play a significant role in metaethics given its purpose, so in that regard I should say that I'm going through Lepore's Meaning and Argument right now.