Posts

Review of "Learning Normativity: A Research Agenda" 2021-06-06T13:33:28.371Z
Review of "Fun with +12 OOMs of Compute" 2021-03-28T14:55:36.984Z
Learning from counterfactuals 2020-11-25T23:07:43.935Z
Mapping Out Alignment 2020-08-15T01:02:31.489Z
Resources for AI Alignment Cartography 2020-04-04T14:20:10.851Z
SSC Meetups Everywhere: Toulouse 2019-09-10T19:17:34.732Z
Layers of Expertise and the Curse of Curiosity 2019-02-12T23:41:45.980Z
Willpower duality 2017-01-20T09:56:50.441Z
Open thread, Oct. 31 - Nov. 6, 2016 2016-10-31T21:24:05.923Z

Comments

Comment by Gyrodiot on [Review] Edge of Tomorrow (2014) · 2021-09-08T17:24:58.523Z · LW · GW

If I'm correct and you're talking about

you might want to add spoiler tags.

Comment by Gyrodiot on Looking Deeper at Deconfusion · 2021-06-16T00:58:23.287Z · LW · GW

I'm taking the liberty of pointing to Adam's DBLP page.

Comment by Gyrodiot on Why We Launched LessWrong.SubStack · 2021-04-01T10:56:13.969Z · LW · GW

All my hopes for this new subscription model! The use of NFTs for posts will, without a doubt, ensure that quality writing remains forever in the Blockchain (it's like the Cloud, but with better structure). Typos included.

Is there a plan to invest in old posts' NFTs that will be minted from the archive? I figure Habryka already holds them all, and selling vintage Sequences NFTs to the highest bidder could be a nice addition to LessWrong's finances (imagine the added value of having a complete set of posts!).

Also, in the event that this model doesn't pan out, will the exclusive posts be released for free? It would be an excruciating loss for the community to have those insights sealed off.

Comment by Gyrodiot on Babble Challenge: 50 Ways to Overcome Impostor Syndrome · 2021-03-20T13:06:28.275Z · LW · GW

My familiarity with the topic gives me enough confidence to join this challenge!

  1. Write down your own criticism so it no longer feels fresh
  2. Have your criticism read aloud to you by someone else
  3. Argue back to this criticism
  4. Write down your counter-arguments so they stick
  5. Document your own progress
  6. Get testimonials and references even when you don't "need" them
  7. Praise the competence of other people without adding self-deprecation
  8. Same as above but in their vicinity so they'll feel compelled to praise you back
  9. Teach the basics of your field to newcomers
  10. Teach the basics of your field to experts from other fields
  11. Write down the basics of your field, for yourself
  12. Ask someone else to make your beverage of choice
  13. Ask them to tell you "you deserve it" when they're giving it to you
  14. If your instinct is to reply "no I don't", consider swapping the roles
  15. Drink your beverage, because it feels nice
  16. Build stuff that cannot possibly be built by chance alone
  17. Stare outside the window, wondering if anybody cares about you
  18. Consider a world where everyone is as insecure as you
  19. Ask friends about their insecurities
  20. Consider you're too stupid to drink a glass of water, then drink some water
  21. Meditate on the difference between map and territory
  22. Write instructions for the non-impostor version of you
  23. Write instructions for whoever replaces you when people find out you're an impostor
  24. Validate those instructions with other experts, passing them off as project planning
  25. Follow the instructions to keep the masquerade on
  26. Refine the instructions since they're "obviously" not perfect
  27. Publish the whole thing here, get loads of karma
  28. Document everything you don't know for reference
  29. Publish the thing as a list of open problems
  30. Harshly criticize other people's work to see how they take it
  31. Make amends by letting them criticize you
  32. Use all this bitterness to create a legendary academic rivalry
  33. Consider "impostor" as a cheap rhetorical attack that doesn't hold up
  34. Become very good at explaining why other people are better than you
  35. Publish the whole thing as in-depth reporting of the life of scientists
  36. Focus on your deadline, time doesn't care if you're an impostor or not
  37. Make yourself lunch, balance on one foot, solve a sudoku puzzle
  38. Meditate on the fact you actually can do several complex things well
  39. Consider that competence is not about knowing exactly how one does things
  40. Have motivational pictures near you and argue how they don't apply to you
  41. Consider the absurdity of arguing with pictures
  42. Do interesting things instead, not because you have to, but to evade the absurdity
  43. Practice the "I have no idea what I'm doing, but no one does" stance
  44. Ask people why they think they know how they do things
  45. If they start experiencing impostor syndrome as well, support them
  46. Join a club of impostors, to learn from better impostors than you
  47. Write an apology letter to everyone you think you've duped
  48. Simulate the outrage of anyone reading this letter
  49. Cut ties with everyone who would actually treat you badly after reading
  50. Sleep well, eat well, exercise, brush your teeth, take care of yourself

Comment by Gyrodiot on Google’s Ethical AI team and AI Safety · 2021-02-20T23:25:05.599Z · LW · GW

"I hope this makes the case at least somewhat that these events are important, even if you don’t care at all about the specific politics involved."

I would argue that the specific politics inherent in these events are exactly why I don't want to approach them. From the outside, the mix of corporate politics, reputation management, culture war (even the boring part), all of which belong in the giant near-opaque system that is Google, is a distraction from the underlying (indeed important) AI governance problems.

For that particular series of events, I already got all the governance-relevant information I needed from the paper that apparently made the dominoes fall. I don't want my attention to get caught in the whirlwind. It was too messy (and still is, months later). It's too shiny. It's not tractable for me. It would be an opportunity cost. So I take a deep breath and avert my eyes.

Comment by Gyrodiot on Suggestions of posts on the AF to review · 2021-02-16T21:27:46.887Z · LW · GW

My gratitude for the already posted suggestions (keep them coming!) - I'm looking forward to working on the reviews. My personal motivation resonates a lot with the "help people navigate the field" part; in-depth reviews are a precious resource for this task.

Comment by Gyrodiot on some random parenting ideas · 2021-02-13T21:11:48.910Z · LW · GW

This is one of the rare times I can in good faith use the prefix "as a parent...", so thank you for the opportunity.

So, as a parent, lots of good ideas here. Some I couldn't implement in time, some are very dependent on living conditions (finding space for the trampoline is a bit difficult at the moment), some are nice reminders (swamp water, bad indeed), some are too early (because they can't read yet)...

... but most importantly, some that genuinely blindsided me, because I found myself agreeing with them, and they were outside my thought process! Mainly the one-Brilliant-problem-a-day idea, and the let-them-eat-more-cookies one.

I appreciate, in particular, the breadth of the ideas. Thanks for sharing; even if you don't practice what you preach, you'll be able to get feedback.

Comment by Gyrodiot on Last day of voting for the 2019 review! · 2021-01-25T23:51:23.624Z · LW · GW

After several nudges (which I'm grateful for, in hindsight), my votes are in.

Comment by Gyrodiot on Luna Lovegood and the Chamber of Secrets - Part 1 · 2020-11-26T10:49:04.274Z · LW · GW

This is very nice. I subscribed for the upcoming parts (there will be more, I suppose?)

Comment by Gyrodiot on Learning from counterfactuals · 2020-11-26T10:37:38.293Z · LW · GW

I think not mixing up the referents is the hard part. One can properly learn from fictional territory when they can clearly see in which ways it's a good representation of reality, and where it's not.

I may learn from an action movie the value of grit and what it feels like to have principles, but I wouldn't trust it on gun safety or CPR.

It's not common for fiction to be fully self-consistent while preserving drama. Acceptable breaks from reality will happen, and sure, sometimes you may have a hard SF universe where the alternate reality is very lawful and the plot arises from the logical consequences of these laws (often happens in rationalfic), but more often than not things happen "because it serves the plot".

My point is, yes, I agree, one should be confused only by a lack of self-consistency, fiction or not. Yet, given the vast amount of fiction set in something close to real Earth, by the time you're skilled enough to tell apart what's transferable and what isn't, you've already done most of the learning.

That's not counting the meta-skill of detecting inconsistencies, which is indeed extremely useful, fiction or not, though I'm still unclear where exactly one learns it.

Comment by Gyrodiot on Why those who care about catastrophic and existential risk should care about autonomous weapons · 2020-11-12T14:59:26.645Z · LW · GW

Thank you for this clear and well-argued piece.

From my reading, I consider three main features of AWSs in order to evaluate the risk they present:

  • arms race avoidance: I agree that the proliferation of AWSs is a good test bed for international coordination on safety, which extends to the widespread implementation of safe powerful AI systems in general. I'd say this extends to AGI, where we would need all (or at least the first, or only some, depending on takeoff speeds) such deployed systems to conform to safety standards.
  • leverage: I agree that AWSs would have much greater damage/casualties per cost, or per human operator. I have a question regarding persistent autonomous weapons which, much like landmines, do not require human operators at all once deployed: what, in that case, would be the limiting component of their operation? Ammo, energy supply?
  • value alignment: the relevance of this AI safety problem to the discussion would depend, in my opinion, on what exactly is included in the OODA loop of AWSs. Would weapon systems be able to act in ways that enable their continued operation without frequent human input? Would they have other ways than weapons to influence their environment? If they don't, is the worst-case damage they can do capped at the destruction capabilities they have at launch?

I would be interested in a further investigation of the risks brought by various kinds of autonomy, expected time between human command and impact, etc.

Comment by Gyrodiot on What are Examples of Great Distillers? · 2020-11-12T14:25:34.317Z · LW · GW

To clarify the question, would a good distiller be one (or more) of:

  • a good textbook writer? or state-of-the-art review writer?
  • a good blog post writer on a particular academic topic?
  • a good science communicator or teacher, through books, videos, tweets, whatever?

Based on the level of articles in Distill I wouldn't expect producers of introductory material to fit your definition, but if advanced material counts, I'd nominate Adrian Colyer for Computer Science (I'll put this in a proper answer with extra names based on your reply).

Comment by Gyrodiot on The (Unofficial) Less Wrong Comment Challenge · 2020-11-11T20:15:35.926Z · LW · GW

I was indeed wondering about it as I just read your first comment :D

For extra convenience you could even comment again with your alt account (wait, which is the main? Which is the alt? Does it matter?)

Comment by Gyrodiot on The (Unofficial) Less Wrong Comment Challenge · 2020-11-11T20:12:39.809Z · LW · GW

The original comment seems to have been edited to a sharper statement (thanks, D0TheMath), I hope it's enough to clear up things.

I agree this qualifier pattern is harmful in the context of collective action problems, where mutual trust and commitment have to be more firmly established. I don't believe we're in that context, hence my comment.

Comment by Gyrodiot on The (Unofficial) Less Wrong Comment Challenge · 2020-11-11T19:24:49.522Z · LW · GW

I interpret the quoted statement as "I am willing to make an effort that I don't usually do, by commenting more, based on your assessment of the importance of giving feedback", assuming good faith.

There's an uncertainty, of course, as to whether it will actually turn out to be important. "I can try" suggests they will try even if they don't know, and we won't know whether they will succeed until they try.

Yes, you can interpret the statement in a way that is uncharitable to their goodwill, but that is not, in my opinion, conducive to healthy comment sections in general.

Comment by Gyrodiot on The (Unofficial) Less Wrong Comment Challenge · 2020-11-11T14:26:05.987Z · LW · GW

We discussed the topic of feedback with Adam. I approve of this challenge and will attempt to comment on at least half of all new posts from today to the end of November, eventually renewing the commitment if it works well.

I've been meaning to get out of mostly-lurking mode for months now, and this is as good of an opportunity as it gets.

I also want to mention the effect of "this comment could be a post", which can help people "upgrade" from commenting to shortform or longform, if they feel (like me) that there's some quality bar to clear before feeling comfortable posting more and getting their ideas out there (hello, self-confidence issues).

You won't get feedback if you don't post somewhere anyway, and that could start with comments!

Comment by Gyrodiot on Why I’m Writing A Book · 2020-11-11T11:05:51.077Z · LW · GW

I have to admit I've read some of your essays, found them very interesting, and yet found the prospect of diving into the rest daunting enough to put the idea somewhere on my to-read pile.

I applaud your book writing and will gladly read the final version, as I'll perceive it as a more coherent chunk of content to go through, instead of a collection of posts, even if the quality of the writing is high for both. The medium itself, to me, has its importance.

It's also easier to recommend « this excellent book by Samo Burja » than « this excellent collection of 10/20/50+ pieces by Samo Burja ».

(Awkward sidenote: I wish I could enthusiastically say I will read your draft and give you feedback, but I can't promise much on that front, my apologies)

Comment by Gyrodiot on The Wiki is Dead, Long Live the Wiki! [help wanted] · 2020-09-14T18:57:02.769Z · LW · GW

Thank you for the import.

Once again, the Progress Bar shall advance. It will probably be slower this time. No matter: I shall contribute.

Comment by Gyrodiot on Tagging Progress at 100%! (Party & Celebratory Talk w/ Jason Crawford, Habryka on Sun, Aug 30th, 12pm PDT) · 2020-08-22T17:02:50.691Z · LW · GW

Tagging is indeed awesome! That was a big milestone. Thanks, everyone.

I'd be interested in another Progress Bar for all posts with 30 (50?) or more comments. In the old days they may not have reached 25 karma but were very active.

Lots of things to do.

Comment by Gyrodiot on Tagging Open Call / Discussion Thread · 2020-08-22T16:21:08.427Z · LW · GW

I had the dubious honor to tag it! Thanks, everyone, for your work!

Comment by Gyrodiot on Tagging Open Call / Discussion Thread · 2020-08-19T07:44:05.073Z · LW · GW

And my touchpad!

Comment by Gyrodiot on 10 Fun Questions for LessWrongers · 2020-08-18T17:10:57.979Z · LW · GW

I have taken the survey as well. I have the same remark as Dagon here; #7, #8 and #10 made me chuckle, as it's a clever way to ask for feedback.

I admit I wouldn't have dared (or even thought about) giving feedback to the team unless specifically prompted by #10. I may have needed that little push... but if it had been an e-mail with "We need your feedback", I wouldn't have answered!

I am confused and I will now reflect on what prompts me to participate.

Comment by Gyrodiot on Solving Key Alignment Problems Group · 2020-08-08T17:32:01.288Z · LW · GW

The FLI map probably refers to The Landscape of AI Safety and Beneficence Research, also in my list but credited to its main author, Richard Mallah.

Comment by Gyrodiot on The Manual Economy · 2020-08-08T13:41:39.881Z · LW · GW

I enjoyed this. I'd read more of it!

Also I want trickle-down economics as actual liquid flow.

Comment by Gyrodiot on [deleted post] 2020-08-08T08:22:37.176Z

Meta: the August Open & Welcome Thread already exists here.

Comment by Gyrodiot on I'm looking for research looking at the influence of fiction on changing elite/public behaviors and opinions · 2020-08-07T14:26:15.546Z · LW · GW

(Apologies for the short answer; I'll expand it if I find more extensive resources.)

My go-to keyword for this would be storytelling in science communication, through which I found the following:

For more decision-maker-oriented literature:

From a cursory reading, this is focused on short fictional stories that concisely illustrate a point, as a way to drive insight more easily. Something very targeted.

I'll have to dig more into the effects of longer works of fiction; I don't have quick references about this at the moment.

Comment by Gyrodiot on Tags Discussion/Talk Thread · 2020-08-07T09:39:54.046Z · LW · GW

Tagging everything makes sense to me as well, and, yes, the first installments should be relevance-boosted.

I perceive the consensus to have shifted in favor of the mass-tagging, which will begin soon. I'll report back.

Edit: all of HPMOR, 3WC, TBC have been tagged, and the tag description has been reupdated. Please boost the first chapters, and standalone pieces!

Comment by Gyrodiot on Tags Discussion/Talk Thread · 2020-08-06T14:44:05.382Z · LW · GW

Ah, and now there's Updated Beliefs (examples of), which is less about personal growth in rationality skill, and more about evolution of personal beliefs and the updating process.

Slightly different!

Comment by Gyrodiot on Tags Discussion/Talk Thread · 2020-08-06T14:30:18.998Z · LW · GW

Roger that! Later chapters of Three Worlds Collide and The Bayesian Conspiracy have just been untagged.

I also updated the tag description to reflect the norm (in bold and near the top so it appears on the tag hover text, if I understand correctly the meta-norm about such disclaimers).

Edit: the tag is still there for 3WC c1, I didn't have enough power to remove it.

Comment by Gyrodiot on Open & Welcome Thread - August 2020 · 2020-08-06T08:24:34.765Z · LW · GW

Meta: I suggest that the link to the Open Thread tag be this one, sorted by new.

Comment by Gyrodiot on Tags Discussion/Talk Thread · 2020-08-06T08:09:42.264Z · LW · GW

Should all HPMOR posts be tagged with the Fiction tag? Only the very first chapter is, currently, which makes sense. Conversely, all chapters of Three Worlds Collide are tagged with it. Which convention shall prevail?

(sidenote: I'm volunteering to mass-tag HPMOR if it's greenlighted)

Comment by Gyrodiot on Solving Key Alignment Problems Group · 2020-08-05T10:53:51.538Z · LW · GW

Excellent initiative. I'm interested and will PM you. I am working on several similar projects, one of them being described in this post.

Comment by Gyrodiot on Tags Discussion/Talk Thread · 2020-07-31T12:25:28.116Z · LW · GW

I created the Growth Stories tag, but that may have been a mistake, since the Postmortem & Retrospectives tag already exists. Apologies!

Comment by Gyrodiot on Tagging Open Call / Discussion Thread · 2020-07-28T22:06:49.797Z · LW · GW

Thanks for the project, and the FAQ. I shall contribute.

Is there a way to retrieve the old tags from LW 1.0? I remember they were used to index Open Threads, for instance. I can't remember the details but that could be a good way to jumpstart some tags.

Comment by Gyrodiot on Open & Welcome Thread - July 2020 · 2020-07-28T15:15:22.165Z · LW · GW

Ah, I remembered that there were weekly Open Threads back in the day, and Stupid Questions, and others... so I went ahead and tagged as many as I could. There are now 369 tagged posts and I'm too tired to continue digging for lonely OTs posted by users who didn't post regularly.

Comment by Gyrodiot on Open & Welcome Thread - July 2020 · 2020-07-27T11:51:07.253Z · LW · GW

Side question: through the Add Posts functionality of the tag page, I'm also finding not-general open threads, and I tagged one by mistake (this one). Should they be included? Do they belong to another tag?

My former hobby as a Wikipedia maintainer is leaking...

Comment by Gyrodiot on Open & Welcome Thread - July 2020 · 2020-07-27T11:45:55.536Z · LW · GW

Done, at least for the posts in the sequence. Tag autocomplete was a blessing.

Comment by Gyrodiot on Blog Post Day II Retrospective · 2020-06-11T12:46:32.345Z · LW · GW

Thanks for the reminder! I bookmarked that post as soon as I saw it and have already started writing!

Comment by Gyrodiot on Resources for AI Alignment Cartography · 2020-04-10T14:36:49.479Z · LW · GW

It was in the references that initially didn't make the cut. After further thought, it's indeed worth adding. I referenced the Distill article AI Safety Needs Social Scientists, which spends more time on the motivating arguments, and linked to the paper in the note.

Thanks for your feedback!

Comment by Gyrodiot on Resources for AI Alignment Cartography · 2020-04-06T15:07:55.712Z · LW · GW

We should indeed! I just sent you an email.

Comment by Gyrodiot on Resources for AI Alignment Cartography · 2020-04-05T11:25:59.485Z · LW · GW

Is Paul's map the one in Current Work in AI Alignment? I think Rohin also used it in his online-EAG 2020 presentation. For Rohin's map, are you referring to Ben Cottier's Clarifying some key hypotheses in AI alignment, to which Rohin made major contributions? I'll be referring to those two in the rest of my answer.

I want to make more explicit the relationships between the premises and outcomes included in the diagrams. The goal of my work is to make those kinds of questions easier to answer:

  • Are scenarios X and Y mutually exclusive? If they are, is the split sharp (is there a premise P which prevents X if true, and prevents Y if false)?
  • What are the premises behind the work on a specific problem? Which events or results would make this work irrelevant?
  • Does it make sense to "partially solve" problem P? Are there efforts which won't make any difference until something specific happens?

I find it hard to answer those questions with the diagrams, since (from my understanding) they have other goals entirely. Paul's map shows how current research questions relate to each other, with closer elements in the tree sharing more concepts and techniques. Ben & Rohin's map shows which questions are controversial, which debates feed into others, and which very broad scenarios/agendas are relevant to them.

You can answer the questions listed above by integrating the diagram with the post details, and following references... but it isn't convenient. I want to make it easier to discover and engage with that knowledge.

The main difference between my (future) work and the diagrams would be to enable the user to explore one specific scenario/research question at a time. For example, in Paul's talk, that would mean starting from « iterated amplification » and repeatedly asking « why ? » as you go up the tree. I want the user to find out what happens if one of the premises doesn't hold: is the work still useful? If we want to maintain the premise, what are the load-bearing sub-premises?

I expect a lot of the structure in the diagrams will be mirrored in the end result anyway, as it should, since it's the same knowledge. I hope to distill it in a different way.

Comment by Gyrodiot on Blog Post Day II Retrospective · 2020-03-31T20:00:31.806Z · LW · GW

I'm making my interest public! I missed the deadline this time, which is evidence for me to start writing earlier.

I would support having another one this month.

Comment by Gyrodiot on My current framework for thinking about AGI timelines · 2020-03-30T15:33:05.835Z · LW · GW

Looking forward to it as well. From the table of contents, I gather that your framework will draw heavily from neuroscience and insights from biological intelligence?

Comment by Gyrodiot on [deleted post] 2018-05-07T09:07:38.659Z

I sometimes do a brief presentation of rationality to acquaintances, and I often stress the importance of being able to change your mind. Often, in the Sequences, this is illustrated by thought experiments, which sound a bit contrived when taken out of context, or by wide-ranging choices, which sound too remote and dramatic for explanation purposes.

I don't encounter enough examples of day-to-day application of instrumental rationality, the experience of changing your mind, rather than the knowledge of how to do it. Your post has short glimpses of it, and I would very much enjoy reading a more in-depth description of these experiences. You seem to notice them, which is a skill I find very valuable.

On a more personal note, your post nudges me towards "write more things down", as I should track when I do change my mind. In other words, follow more of the rationality checklist advice. I'm too often frustrated by my lack of noticing stuff. So, thanks for this nudge!

Comment by Gyrodiot on [deleted post] 2018-05-04T09:03:14.798Z

Thanks for your clarification. Even though we can't rederive Intergalactic Segways from unknown strange aliens, could we derive information about those same strange aliens, by looking at the Segways? I'm reminded of some SF stories about this, and our own work figuring out prehistorical technology...

Comment by Gyrodiot on [deleted post] 2018-04-29T19:49:55.137Z

Thanks again for this piece. I'll follow your daily posts and comment on them regularly!

I have a few clarification questions for you:

  • if an AGI could quasi-perfectly simulate a human brain, with human knowledge encoded inside, would your utility function be satisfied?
  • is the goal of understanding all there is to the utility function? What would the AGI do, once able to model precisely the way humans encode knowledge? If the AGI has the keys to the observable universe, what does it do with it?

Comment by Gyrodiot on [deleted post] 2018-04-29T09:06:29.679Z

Thanks for your post. Your argumentation is well-written and clear (to me).

I am confused by the title, and the conclusion. You argue that a Segway is a strange concept, that an ASI may not be capable of reaching by itself through exploration. I agree that the space of possible concepts that the ASI can understand is far greater than the space of concepts that the ASI will compute/simulate/instantiate.

However, you compare this to one-shot learning. If an ASI sees a Segway, a single time, would it be able to infer what it does, what it's for, how to build it, etc.? I think so! The purpose of one-shot learning models is to provide a context, a structure, that can be augmented with a new concept based on a single example. This is far simpler than coming up with said new concept from scratch.

See, on efficient use of sensory data, That Alien Message.

I interpret your post as « no, an ASI shouldn't build the telescope, because it's a waste of resources and it wouldn't even need it » but I'm not sure this was the message you wanted to send.

Comment by Gyrodiot on European Community Weekend 2018 Announcement · 2018-02-05T17:39:54.566Z · LW · GW

I'll be there. As I said in the sister post on LW1.0:

The community weekend of 2017 was one of my best memories from the past year. Varied and interesting activities, a broad range of topics, tons of fascinating discussions with people from diverse backgrounds. The organizers are super friendly.

One very, very important point is that people there cooperate by default. Communication is easy, contribution is easy, getting help is easy, feedback is easy, learning is easy. Great times and productivity. And lots of fun!

Entirely worth it.

Comment by Gyrodiot on European Community Weekend 2018 Announcement · 2018-01-25T14:14:51.374Z · LW · GW

The Community Weekend of 2017 was one of the highlights of my past year. I strongly recommend it.

Excellent discussions, very friendly organizers, awesome activities.

Signed up!

Comment by Gyrodiot on [deleted post] 2018-01-19T13:23:24.694Z

Hi! Was this a test post?