Part Three: Coding for Publishing

At this point I had all my writing in StackEdit, and I got rid of all my previously added style hints since I could now leverage Markdown syntax directly. StackEdit can automatically sync with Google Drive, so I also had a way of fetching the files I was working on for offline processing. I finally had a reliable foundation for building a pipeline. This is where coding came into play in full force.

I wanted to be able to create multiple outputs from my Markdown source files. I was already working on more in-depth sections of the book, which I started to consider offering as premium content. I wanted to create an ebook out of the material, but I also wanted to display the same material on my Jekyll website. This already implied four variations of the source text: online-free, online-paid, offline-free, offline-paid. Another difference between the online and offline outputs was that I could use video (gif) files in the online version, while the same visuals needed to be static images in the offline version.
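Conceptually, these variations form a small build matrix. Sketched as a config object (the names and flags here are my own, for illustration; they are not from the original scripts):

```javascript
'use strict';

// The four output variations as a build-target matrix.
const targets = [
  { name: 'online-free',  online: true,  paid: false },
  { name: 'online-paid',  online: true,  paid: true  },
  { name: 'offline-free', online: false, paid: false },
  { name: 'offline-paid', online: false, paid: true  },
];

// Online targets can embed gifs; offline targets need static images.
targets.forEach((t) => {
  console.log(`${t.name}: ${t.online ? 'gif allowed' : 'static images only'}`);
});
```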

Now I was starting to understand how programming could help with my process. I had a single authoritative source and multiple output formats that needed to be managed, and this started to look more and more like an automation problem.

I started to build Node.js scripts to establish my workflow. I was already syncing my Google Drive to a folder on my disk, so I wrote a script that would fetch the desired files from that folder and place them in a staging area.

```javascript
'use strict';

const fs = require('fs');

const SOURCE_DIR = '/Users/username/Google Drive';
const TARGET_DIR = './staging';

// remove the target content synchronously
let targetContent = fs.readdirSync(TARGET_DIR);
targetContent.forEach((file) => {
  let targetFile = `${TARGET_DIR}/${file}`;
  let isFile = fs.lstatSync(targetFile).isFile();
  if (isFile) {
    fs.unlinkSync(targetFile);
  }
});

// copy the relevant source files over, appending a .md extension
fs.readdir(SOURCE_DIR, (err, files) => {
  if (err) throw err;
  files.forEach((file) => {
    if (file.startsWith('p5js-') && !file.endsWith('.gdoc')) {
      fs.createReadStream(`${SOURCE_DIR}/${file}`)
        .pipe(fs.createWriteStream(`${TARGET_DIR}/${file}.md`));
    }
  });
});
```

I was hosting the images referenced inside the Markdown files on Imgur. I built yet another script that would download these images into the staging folder, since I needed local copies of the images to compile an ebook.

Then I created another script that would pre-process the files in the staging area and move them to a folder determined by the target destination. To be able to render different content between the online and offline versions, as well as the paid and free versions, I decided to use the Nunjucks templating language. This solved two main use cases for me:

Using Nunjucks, I was able to conditionally render content based on the targeted output format. For example, a gif in my document might be represented like this:

```
{% if online %}
![01-01](http://i.imgur.com/<id>)
{% else %}
![01-01](images/01-01.jpg)
{% endif %}
```

With this if-else statement in place, I can set a variable inside my preprocessing script to decide which branch to render, and pass this data to Nunjucks when rendering the template.

```javascript
const nj = require('nunjucks');

const data = { online: true };
let renderedContent = nj.renderString(content, data);
```

Nunjucks also allowed me to create variables. I could use a variable like `{{ item }}` in my Markdown text which can render to a value of `book` or `course` depending on the destination I am targeting. (I ended up creating an interactive course using the book material where I needed to refer to the material as a ‘course’, more on that a bit later).

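Conceptually, the variable substitution works like this (a toy regex stand-in for the real Nunjucks renderer, just to show the idea):

```javascript
'use strict';

// Toy stand-in for Nunjucks variable substitution: replaces {{ name }}
// with the value supplied for the current build target.
function renderVariables(template, data) {
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, (whole, name) =>
    name in data ? String(data[name]) : whole
  );
}

const source = 'Welcome to the first chapter of this {{ item }}.';

console.log(renderVariables(source, { item: 'book' }));
// → Welcome to the first chapter of this book.
console.log(renderVariables(source, { item: 'course' }));
// → Welcome to the first chapter of this course.
```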

Using the pre-processing script, I was also able to manipulate the front matter of the original Markdown files using a Node library called front-matter. This was needed because one of my target output formats was a .md file for Jekyll, and I wanted to automatically add some additional front matter attributes for Jekyll to parse.
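A minimal sketch of that step, here using plain string handling in place of the front-matter library, with `layout: post` as a made-up example attribute:

```javascript
'use strict';

// Split a Markdown string into its YAML front matter and body.
function splitFrontMatter(markdown) {
  const match = markdown.match(/^---\n([\s\S]*?)\n---\n([\s\S]*)$/);
  if (!match) return { attributes: '', body: markdown };
  return { attributes: match[1], body: match[2] };
}

// Re-assemble the file with extra attributes appended for Jekyll.
function addJekyllAttributes(markdown, extra) {
  const { attributes, body } = splitFrontMatter(markdown);
  const extraLines = Object.entries(extra)
    .map(([key, value]) => `${key}: ${value}`)
    .join('\n');
  return `---\n${attributes}\n${extraLines}\n---\n${body}`;
}

const source = '---\ntitle: Chapter One\n---\nSome chapter text.';
const output = addJekyllAttributes(source, { layout: 'post' });
console.log(output);
```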

This all might sound terribly over-engineered and unnecessary, and there is a chance that it is. But one thing I was happy about with this process was that, even though I yielded to the developer inclination of tooling up in the face of the slightest problem, I didn't get overly obsessed with building generic, scalable solutions. All the code I wrote is actually pretty ugly and embarrassing to share here, but the point is to move fast and build automated solutions in a way that doesn't waste your time. The primary objective is not building systems to deliver your content; it is delivering the content, however possible. You should be doing things that don't scale.

Learning Five: Definitely read Paul Graham's essay on doing things that don't scale. Things that are not efficient can give you massive leverage in the short run. If you get bogged down by concerns about scalability when you are only starting out, you might miss out on opportunities for growth and on sources of motivation, like the sense of delivering value to people, and that can impede or even ruin your progress.

One technical decision I regret is using Promises for my file system operations. I think I was trying to prove to myself that I was comfortable using them, but they were complete overkill for my circumstances since I didn't have any performance concerns. The excessive usage of Promises started to take a mental toll, just when I wanted to move fast, as they are not as straightforward as synchronous operations would have been. Game developer Jonathan Blow has a great talk on optimizing for cognitive load when developing personal projects. Granted, this is not at the scale of anything he works on, but if you are building something that just needs to work, you should keep it as simple and usable as possible. Don't try to be smart, because most days you will be dumber than your smartest self.

I also created a post-processing script that I would run for specific purposes. For example, when sending the document to a copy editor (I worked with a freelancer on Fiverr), I didn't need the code snippets in there, as they were ballooning my word count and hence affecting the pricing. Having an automated workflow allowed me to remove them easily. There was also an instance where I faced the opposite problem and needed only the code snippets. This was again solved quickly by using the post-processing script to selectively remove the target elements (anything that is not a code snippet) from the files.
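I don't have the original post-processing code, but both directions of this trick can be sketched with a pair of regex helpers (the function names are mine):

```javascript
'use strict';

// Remove fenced code blocks, e.g. before sending a chapter to a
// copy editor who charges by word count.
function stripCodeBlocks(markdown) {
  return markdown.replace(/`{3}[\s\S]*?`{3}\n?/g, '');
}

// The opposite: keep only the fenced code blocks.
function keepOnlyCodeBlocks(markdown) {
  const blocks = markdown.match(/`{3}[\s\S]*?`{3}/g);
  return blocks ? blocks.join('\n\n') : '';
}

const fence = '`'.repeat(3);
const chapter = `Some prose.\n\n${fence}js\nconsole.log("hi");\n${fence}\n\nMore prose.`;

console.log(stripCodeBlocks(chapter));    // prose only
console.log(keepOnlyCodeBlocks(chapter)); // code only
```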

I ended up publishing my work on three different platforms. I first published the book on Leanpub. Having already established a pipeline, it was very easy to integrate my work with Leanpub and use their GitHub integration. After publishing on Leanpub, I exported an unbranded version of the book using their tools and placed it on the Amazon Kindle Store. Zapier has an amazing blog post on self-publishing platforms; it is a must-read for anyone interested in this space.

The most amazing discovery for me was coming across Educative.io through a post on Hacker News. Educative.io is an online course creation platform where you can build interactive courses using blocks, which let you easily embed executable code snippets (among many other things) inside a document. This allows you to create documents similar to Jupyter notebooks. Transferring my source text to their platform was a breeze, as it uses the Markdown format as well.

I am not claiming that my workflow was perfect. One big shortcoming was that gathering online feedback on my text from other people was pretty hard. For that use case, working in Google Docs would have been much more useful. But unfortunately, Google Docs doesn’t offer a great way of working with Markdown files.

Also, using Nunjucks templating in your source text introduces a bit of overhead, as you can't just copy-paste text; you need to process (compile) it first. But considering the efficiencies gained, I find this a reasonable tradeoff.

This summarizes my journey. If there is a final lesson to be derived here, I think it is not to be overly obsessed with tools and best practices from the get-go, and to just start creating things. It is the content and the product that really matter; all other concerns are secondary, at least initially. Don't let yourself get slowed down by choices. You probably don't know enough at the beginning to inform your decisions, so it is important to get started and iteratively adjust according to your emerging needs.