For the past few months we've had three developers (Eric Rogstad, Oliver Habryka, and Harmanas Chopra) working on LW 2.0. I haven't talked about it publicly much because I didn't want to make promises that I wasn't confident could be kept. (Especially since each attempt to generate enthusiasm about a revitalization that isn't followed by a revitalization erodes the ability to generate enthusiasm in the future.)
We're far enough along that the end is in sight; we're starting alpha testing, and I'm going to start posting a status update in the Open Thread each Monday to keep people informed of how it's going.
New research out of the Stanford / Facebook AI labs: They train an LSTM-based system to construct logical programs that are then used to compose a modular system of CNNs that answers a given question about a scene.
This is very important for the following reasons:
As a breakthrough in AI performance, it beats previous benchmarks by a significant margin.
It is capable of learning to generate new programs even when trained on only a small fraction (< 4%) of possible programs.
Their strongly-supervised variant achieves super-human performance on all tasks within the CLEVR dataset.
It is much less of a black box than typical deep-learning systems: The LSTM creates an interpretable program, which allows us to understand the method by which the system tries to answer the question.
It is capable of generalizing to questions made up by humans, not found in its training data.
This is really exciting, and I'm glad we're moving further in the direction of "neural networks being used to construct interpretable programs."
"The crux of the approach is the use of a ‘cycle-consistency loss’. This loss ensures that the network can perform the forward translation followed by the reverse translation with minimal loss. That is, the network must learn not only how to translate the original image; it also needs to learn the inverse (or reverse) translation."
I have a neat idea for a smartphone app, but I would like to know if something similar exists before trying to create it.
It would be used to measure various things in one's life without having to fiddle with spreadsheets. You could create documents of different types, each type measuring something different. Data would be added via simple interfaces that fill in most of the necessary information. Reminders based on time, location and other factors could be set up to prompt for data entry. The gathered data would then be displayed using various graphs and could be exported.
The cool thing is that it would be super simple to reliably measure most things on a phone in a way that's much simpler than keeping a spreadsheet. For example: you want to measure how often you see a seagull. You'd create a frequency-measuring document, entitle it "Seagull sightings", and each time you open it, there'd be a big button for you to press indicating that you just saw a seagull. Pressing the button would automatically record the time and date, perhaps the location, when this happened. Additional fields could be added, like the size of the seagull, which would be prompted and logged with each press. With a spreadsheet, you'd have to enter the date yourself, and the interface isn't nearly as convenient.
Another example: you're curious as to how long you sleep and how you feel in the morning. You'd set up an interval-measuring document with a 1-10 integer field for sleep quality and reminders tied into your alarm app or the time you usually wake up. Each morning you'd enter hours slept and rate how good you feel. After a while you could look at pretty graphs and mine for correlations.
A third example: you can emulate the experience sampling method for yourself. You would have your phone remind you to take the survey at specific times in the day, whereupon you'd be presented with sliders, checkboxes, text fields and other fields of your choosing.
This could be taken further in a useful way by adding a crowd sourcing aspect. Document-templates could be shared in a sort of template marketplace. The data of everyone using a certain template would accumulate in one place, making for a much larger sample size.
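The core of the idea is simple enough to sketch. Here is a minimal illustration in Python, assuming hypothetical names (`Document`, `Entry`, `record`) invented here just to make the data model concrete; a real app would persist entries and render them in a UI rather than keep them in memory:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Entry:
    timestamp: datetime   # filled in automatically on each tap
    fields: dict          # optional extras, e.g. {"size": "large"}

@dataclass
class Document:
    title: str
    entries: list = field(default_factory=list)

    def record(self, **fields):
        # One tap = one entry. The timestamp is captured for you,
        # which is the main convenience over a spreadsheet.
        self.entries.append(Entry(datetime.now(), fields))

    def count(self) -> int:
        return len(self.entries)

# "Seagull sightings": each button press logs one timestamped sighting.
doc = Document("Seagull sightings")
doc.record(size="large")
doc.record()
print(doc.count())  # 2
```

The template-marketplace idea would then amount to sharing the `Document` schema (title plus field definitions) while each user accumulates their own `entries`.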
So it's 1 click to begin, 1 click for each choice, 1 click to save.
With a single choice, that's three clicks.
If you have a Google form with a checklist of 10 items and you answer half of them with "yes" (tick them), you have 7 clicks (counting opening the form and sending the form) instead of 30 (or maybe 25 if you play with predefined values).
If you know an app that does this better, I am looking.
The main argument I'm making here is that there's no app out there that really solves this problem well and there's room for Sandi to create something better than the present options.
Thanks! I didn't know this was such a developed concept already and that there are so many people trying to measure stuff about themselves. Pretty cool. I'll check out Quantified Self and what's linked.
Gleeo Time Tracker lets you define categories, and then use one click to start or stop tracking the category. You can edit the records and include more specific descriptions in them. You can export all data to spreadsheet. I use it to track my daily time, on very general level -- how much I sleep, meditate, etc.
(Note: When you start integrating with other apps, there are almost unlimited options. You may want to make some kind of plugin system, write a few plugins yourself, and let other users write their own. Otherwise people will bother you endlessly to integrate with their favorite app.)
The point is, this kind of problem is a wheel that every starting coder feels the need to reinvent. How much innovation is there in linking an on-screen UI element like a button with a predefined SQL query? (Eh, don't answer that; I'm sure there is a patent for it :-/)
Sure, you may want a personalized app that is set up just right for you, but in that case just pick the right framework and assemble your app out of Lego blocks. You don't need to build your own plastic injection moulding machinery.
Data entry is a different problem than just having a database.
I installed Memento; let me start by listing how it screws up:
① The multiple-choice field starts by showing me an empty drop-down menu. I have to click on it to make it expand, and after selecting my choices I have to click "OK" to get back to my form. That's clearly two clicks too many.
② Automatic time tracking is even worse than Google Forms. The field that tracks the time is shown to the user displaying "current time". There's no reason why it can't do the time tracking in the background, and if I fill out 5 entries there's no reason why it can't give me 5 timestamps. I would also want those timestamps with milliseconds, and Memento apparently thinks that nobody needs milliseconds.
③ It doesn't do notifications. For a use case like morning tracking it's good to have a notification: I press "on", tap the notification, and I'm right at my form, with no need to unlock the phone to enter data.
④ Many of the buttons are just too small. Yes, I can click on small buttons, but it's not as fast, and if I want to build the habit of tracking something for QS purposes, convenience matters a great deal.
TapLog is very nifty, it's simply that it would be even better with a somewhat extended feature set.
Here's one use case: I want to log my skin picking and skin care routine (morning/evening).
The first is easy. I just add a button to my home screen that increments by one every time I click it (which is every time I touch my face with my fingers). After a while I can plot number of picks each day, or month, or cumulative, etc. It's very nice.
Logging my skin care routine is more difficult, since TapLog does not support lists. (A single entry can only hold a quantity, and/or text input [with an optional prompt], and/or a GPS position.)
What I would like is for TapLog to let me predefine a list of items (shave, cleanse, moisturizer) then give me a push notification in the morning and/or evening requesting me to check off each item.
(If you use something like Wunderlist with a daily repeat of the list, it is very fragile. If you miss a couple of days you have to reset the date for the reminder, because there's no way for unfinished lists to simply disappear unless you actually check them off. And in Wunderlist there's no way to analyze your list data to see how well you did last month, etc.)
TapLog is designed for entering one piece of data at a time.
If you have a checklist with 10 items and on average 5 are "yes", you have to do 10 clicks. Basically "click 1 yes", "back", "click 2 yes", "back", "click 3 yes", "back", "click 4 yes", "back", and "click 5 yes", "back".
If you have a Google form, it only takes half as many clicks.
Besides pure click counting it's also nice to see the checklist of 10 items together before clicking send to make sure that everything is right.
Google Forms is very nice but it's not optimized for the smart-phone form factor.
It's not even optimized for gathering as much data as possible: it doesn't give me a timestamp for every single data entry, only one timestamp when the form is finished. It also needs internet. There was a while when I purposely deactivated my router in the first hours of the day to reduce distractions, and that prevented me from doing my morning tracking with Google Forms.
I think this is the wrong way to look at this problem. You can easily build an app for detecting seagulls by their sound, but that app isn't easily customizable, and you can't easily throw it at different problems.
There's a reason why most of us still use paper from time to time. Multiple local rationalists I know use paper notebooks. That's because paper is very flexible. For all its problems, it still outperforms digital tools in many circumstances because it's so adaptable.
If you say "X asked and I answered", that commonly means you claim your answer has something to do with the question. If you understand that your answer has nothing to do with the question, there's no need to point out that the person asked.
Why do you care if you're running in a simulation? Well, if you're a piece of malware, like a computer virus or Internet worm, you care whether you're being analyzed by an anti-malware researcher. If you're running in a researcher's lab instead of actually infecting target systems, you'd want to shut down so the researcher can't observe your real behavior and find out how to exterminate you. However, this means that if you can be fooled into believing you're running in a simulation when you're not, you can be made to shut down in the wild.
The WannaCrypt worm contained logic that basically said, "Look for the following properties in the Internet. If you observe them, that means you're not running in the real Internet; you're running in a simulation." But the researcher was able to cause those properties to become true in the real Internet, thereby convincing the live malware that was infesting the actual Internet to believe it was in a simulation and shut down.
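That kill-switch pattern can be sketched in a few lines. This is illustrative Python, not WannaCrypt's actual code (which checked whether a specific unregistered domain resolved); the domain name here is a placeholder using the reserved `.invalid` TLD:

```python
import socket

# Placeholder sentinel. In the wild, a domain like this should never resolve.
SENTINEL_DOMAIN = "some-nonsense-sentinel.invalid"

def looks_like_simulation() -> bool:
    """Return True if the environment shows a property that 'real' internet lacks."""
    try:
        # Analysis sandboxes often answer every DNS query, so a successful
        # lookup of a bogus name suggests we're being observed. But a
        # researcher can also make this property true in the real world
        # (by registering the domain), flipping the switch on live copies.
        socket.gethostbyname(SENTINEL_DOMAIN)
        return True
    except socket.gaierror:
        return False

if looks_like_simulation():
    raise SystemExit("environment looks simulated; shutting down")
```

The asymmetry the comment describes lives in that docstring: the agent can't distinguish "the property holds because I'm in a sandbox" from "the property holds because someone made it hold in reality."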
Anti-analysis or anti-debugging features, which attempt to ask "Am I running in a simulation?", are not a new thing in malware, or in other programs that attempt to extract value from humans — such as copy-protection routines. But they do make malware an interesting example of a type of agent for which the simulation hypothesis matters, and where mistaken beliefs about whether you're in a simulation can have devastating effects on your ability to function.
Why do you think a piece of software has the same goals as its creator? My conscious planning mind doesn't have the same goals as evolution.
Current software doesn't even have goals, it has behaviors. Ascribing desires and decision-making to it leads to incorrect beliefs. AIs will have goals, but they'll be influenced and shaped by their creators rather than being fully specified.
no one can ever argue against 'security', so you always win if you bring it up
Doesn't work for me. I am the guy saying "we should not be doing X, because when you google for X, the first three results are all telling you that you definitely shouldn't be doing X", and everyone else is "dude, you already spent the whole day trying to solve this issue, just do it the easy way and move on to the other urgent high-priority tasks".
Probably depends on the type of company, i.e., what the trade-off is between "doing the project faster" and "covering your ass" for your superiors. If they have little to lose by being late, but can potentially get sued for ignoring a security issue, then yes, this is really scary.
A possible solution is to tell the developer to just do it as fast as possible, but still in a perfectly secure way. Have daily meetups asking him ironically whether he is still working on that one simple task. But also make him sign a document saying you can deduct his yearly salary if he knowingly ignores a security issue. -- Now he has an incentive to shut up about the security issues (to avoid giving proof that he knew about them).
I mean, not seriously, but I've done 2 decades in the industry, at a total of 5 companies, and I see it everywhere.
Dev A: We should do this with a cloud based whatver.
Dev B: No, no, we should stick with our desktop app.
Dev A (triumphantly): No, no, putting everything on the cloud is BEST PRAKTUS!!!!
Dev B: (in desperation, transgressing...) What about....security?
Bosses (Double gasp)
Dev A: (disbelief) You wouldn't....
Dev B: A's mad scheme exposes us to the viruses and also the worms.
Bosses: We agree with B!
Dev A: You realize, of course, this means war.
Dev B: I'm just saying that we could try 'not' encoding every string in pig latin, as most people would be able to decrypt this with minimal effort and it is massively increasing our translation budgets
Dev A: So you are in favor of making our software less secure?
Dev B: hahahah, no, of course not. That was just a test. I'm a double red belt qualified expert in Security Singing from every App academy. I was just making sure that you were too.
There are elements and leanings toward this combative view of security in a whole lot of companies, both in IT departments and in software-focused corporations. I haven't seen even a small fraction of such places (only maybe a few hundred directly and indirectly), but it seems rare that it gets to strategic levels (aka cold war with each side hesitant to change the status quo) - most places are aware of the tradeoffs and able to make risk-estimate-based decisions. It helps a LOT to have developers do the initial risk and attack value estimates.
I'll agree about the emergency/patch deployment process being the one to focus on. There's something akin to Gresham's law in ops methodology - bad process drives out good.
heh. Consultants are the people who couldn't meet our hiring bar, so we pay them twice as much to avoid any long-term responsibility for outcomes. They are useful at making sure our devs have asked the right questions and considered the right options. But the actual analysis and decision starts and ends on the team (and management) that's going to actually run the system and deal with the consequences.
Not everywhere, and not as completely sane as I'm stating it - there's a lot of truth in Dilbert. But if it's too bad where you are, go elsewhere. There are good software teams and they're hiring.
More importantly, the overall software dev market is such that you can change 3-4 times in one year without really limiting your prospects, as long as you can explain what you're looking for and why you think the next one is it. You probably can't do that two years in a row, but trying a new job isn't a life sentence, it's an exploration.
Do you find it demotivating to do mathematics that's assigned to you in school, compared to doing mathematics on your own? I'm currently having difficulty getting myself to do mathematics that's assigned to me.
It works similarly for me with programming. I love programming, except when I have a programming task assigned, and must provide reports of how long it took me to solve it, and must debate whether what I am doing now is the highest priority or whether I should be doing something else instead (such as googling for existing solutions for this or similar problems)...
Okay... so this draws on a couple of things which can be confusing: 1) perspective projections, and 2) mapping spheres onto 2D planes.
Usually when we think of a field of vision we imagine some projection that maps the 3D world in front of us to some 2D rectangle image. And that's all fine and well. We don't expect the lines in the image to conserve the angles they had in 3D.
I think what the author of the post is saying is that if you use a cylindrical projection that wraps around 360 degrees horizontally, then the lines will appear parallel when you unwrap it. But there's nothing wrong with this. If it seems like it would be a contradiction, because the lines cross each other at right angles in 3D - it's because in a z-aligned cylindrical projection, the point where the lines cross will be on one of the singularities that sit on each pole. And if the cylindrical projection is not z-aligned, the lines won't be parallel, and will cross each other at some angle.
I guess you can also think of this as two projections. There is the two lines on the floor, which are projected up onto the bird's panoramic view (a sphere), and then the sphere is projected onto a z-aligned cylinder, and then the cylinder is unwrapped to give us our 2D image with the two lines parallel.
Like how, if you projected two perpendicular lines up onto the bottom of this globe, they might align with, say, 0°/180° and 90°/270°, but they would appear parallel on the output cylindrical projection.
This is assuming that by "perspective" we mean something like "projection onto a sphere". Then the lines become great semicircles and it's true that they are parallel at the horizon, at least in the sense that the great circle representing the horizon meets them each at a right angle.
Yes, they do. In the distance, all directions seem parallel.
Except that I don't deal with that many directions at once. I never see a bird flying to the West near the horizon and a bird flying to the East near the horizon at the same time. A bird does see that at once. It sees how they fly apart from each other and fly parallel at the same time. It is counterintuitive for me, but not for the bird, I guess.
I can however, see two birds flying away from me, one to the North, other to the West, both far away. They become smaller and smaller, but the apparent distance between them remains practically unchanged.
I quickly rationalize this as an interesting illusion, at the most.
If you surround the bird B with a ten-meter-radius sphere and map each point A on the ground to the intersection between the line segment AB and the sphere, the x and y axis map to a total of four curves along the lower half of the sphere, all of which are, in fact, parallel at the equator.
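This claim is easy to check numerically. A Python/NumPy sketch following the setup above (bird 10 m up, 10 m sphere): each half-axis maps to a curve of constant azimuth in the bird's panoramic (azimuth, elevation) chart, and the elevation climbs toward 0 (the equator) with distance -- four vertical, hence parallel, curves:

```python
import numpy as np

B = np.array([0.0, 0.0, 10.0])  # the bird, 10 m above the axes' intersection
R = 10.0                        # radius of the sphere around the bird

def project(a):
    """Map ground point a to panoramic (azimuth, elevation) on the bird's sphere."""
    d = a - B
    p = B + R * d / np.linalg.norm(d)          # intersection of segment AB with the sphere
    az = np.degrees(np.arctan2(p[1] - B[1], p[0] - B[0]))
    el = np.degrees(np.arcsin((p[2] - B[2]) / R))  # 0 = the sphere's equator
    return az, el

# Sample the positive x-axis and positive y-axis at growing distances.
for t in [10.0, 100.0, 1000.0, 10000.0]:
    x_az, x_el = project(np.array([t, 0.0, 0.0]))
    y_az, y_el = project(np.array([0.0, t, 0.0]))
    # Azimuths stay pinned at 0° and 90°; elevations rise toward 0°.
    print(f"t={t:7.0f}  x-axis: (az={x_az:5.1f}, el={x_el:6.2f})"
          f"  y-axis: (az={y_az:5.1f}, el={y_el:6.2f})")
```

Since each curve has constant azimuth, the four of them never converge in the unwrapped panoramic chart; they just approach the equator from below, which is exactly the "parallel at the horizon" effect.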
Most of reality maps to near the equator, therefore the bird's eye would evolve to have most receptors near the equator and most of its visual cortex would focus there. (Assuming that things don't become more important to the bird as they grow nearer :P)
Short enough to just post here rather than linking:
Imagine that you are an intelligent bird with a 360-degree, panoramic view, flying over a plane equipped with orthogonal x and y axes, clearly visible and – what a coincidence – intersecting just 10 meters beneath you.
I argue that, due to the well-known phenomenon of geometrical perspective, you see in the distance the line which goes North as parallel to the line which goes West. In fact, every direction seems parallel to all three other directions.
Is that right, and why is it right? How could this be?
Is there an unstated assumption that the panoramic view is accomplished by mapping to a human-evolved ~135 degree field of view? I don't think this would happen in a brain evolved and trained on panoramic eyes/sensors. It doesn't happen in reality, where panoramic views exist everywhere and are generally accessed by turning our heads.
Closer objects must appear bigger and this kind of perspective is inescapable for us, cameras or birds.
From this, the apparent parallelism of two lines going out from a single point follows. How then does a creature with 360-degree vision handle this? When the straight road going to the North is parallel to another straight road going to the West, which is parallel to yet another straight road going to the South -- at least at some distance, and from there to the horizon.
Parallel lines appear to intersect according to perspective. But, the more distant parts of the lines are the parts that appear to intersect. Here, where the lines actually do intersect, the more distant parts are away from the intersection. If these are ideal lines such that one could not gauge distance, and one is only looking downward, such as a projection onto a plane, then they are visually indistinguishable from parallel lines. Whether that's the same thing as them appearing to be parallel may be ... a matter of perspective. But, since this is a bird with 360 degree view, it can see that the lines do not also extend above the bird as parallel lines would, so they do not appear parallel to it.