Saturday, January 9, 2016

Free SSL For Any WordPress Website

If you have an e-commerce website, then SSL is mandatory for safely processing credit cards. But even if you aren’t processing payments, you should still seriously consider secure HTTP (or HTTPS), especially now that I’m going to show you how to set it up quickly, for free. Let’s get started.

What Is SSL And Why Should I Care?

In short, SSL is the “S” in HTTPS. It adds a layer of encryption to HTTP that ensures that the recipient is actually who they claim to be and that only authorized recipients can decrypt the message to see its contents.
Sensitive information such as credit-card numbers — basically, anything private — should always be served via HTTPS. However, there is an increasing trend towards serving all content via HTTPS, as we’re seeing on news websites, blogs, search engines and the websites of most mainstream brands. So, even if your website isn’t processing payments, there are good reasons to consider HTTPS, a few of which are listed here:
  • Credibility
    Even non-technical audiences associate the little green padlock in the browser’s address bar with trust and reliability.
  • Password protection
    Perhaps your website only hosts kitten videos. But if users are logging into your website via Wi-Fi with a password that they also use for online banking, then you are potentially facilitating a serious security breach by broadcasting those credentials publicly.
  • Future-proofing
    Many websites are still served via HTTP, but there is an undeniable trend towards HTTPS, and this will only increase as users become increasingly educated about web security. Be on the right side of history.
  • SEO
    Google officially announced that HTTPS is used as a ranking signal. In other words, Google is rewarding HTTPS websites by boosting their rankings in search results.
A common argument against HTTPS is that it reduces performance. True, the process of encrypting and decrypting does cost additional milliseconds, but in most situations it is negligible, as evidenced by the fact that performance-conscious companies such as Google and Facebook serve all of their content via HTTPS. And, true, HTTPS can exacerbate existing performance problems, like many CSS files being served individually, but this is mitigated by following basic best practices for performance. And with the adoption of HTTP/2, the performance cost of HTTPS is even lower. The bottom line is that the reduction in performance is a meaningful deterrent only if your website is either hyperoptimized or so underperforming that every millisecond matters.

How To Set Up HTTPS For Free

The first step to setting up HTTPS for free is to sign up for a cloud DNS service. If you have no idea what DNS is, I recommend that you take a minute to learn before proceeding. The delightful How DNS Works does a great job of breaking it down into a quippy cartoon. Otherwise, simply know that DNS is the system whereby human-readable domain names get linked to the numeric IP addresses that computers understand. You have many options, but I’m a fan of CloudFlare because it’s really fast to set up, the dashboard is intuitive, and a free plan is available with many powerful features.

Setting Up CloudFlare

After registering for a CloudFlare account, you’ll be walked through an easy wizard to configure your first website, which will conclude with instructions on how to log into your domain registrar and point the nameservers to CloudFlare. The change will take some time to propagate, but when it’s complete, CloudFlare will be hosting your website’s DNS records. Next, turn on CloudFlare’s “flexible SSL” feature.
Choosing the “flexible SSL” setting is important because it doesn’t require you to buy and install your own SSL certificate on your website’s server.

As you can see, CloudFlare is acting as the middleman to secure traffic between your website and the client. If this were a static HTML website, you would now be able to connect to it via HTTPS. WordPress, however, requires additional configuration in order to work with the modified protocol.

Reconfiguring WordPress From HTTP To HTTPS

You will first need to update the “WordPress Address” and “Site Address” settings in the dashboard, under “Settings” → “General.” When you do this, you will have to log into the dashboard again.

Proceed cautiously. If you update these settings prematurely, you risk locking yourself out. For example, if the website isn’t yet properly configured for HTTPS and the settings are updated, you could cause a redirect loop that breaks the website and prevents you from accessing the dashboard.
At this point, you should be able to visit the home page of the website via HTTPS. However, page links will still point to the HTTP URLs. WordPress stores links to pages and images as absolute URLs, meaning that the full URL, including the protocol, is saved in the database. To ensure that the entire website is consistently served via HTTPS (without triggering warnings about mixed content), you will need to update your legacy content.

Updating Legacy Content

On a small website with only a few pages, the quickest option might be simply to manually update the URLs by editing existing pages in the admin interface. If the website is large or has a highly active blog, then manual editing likely isn’t practical. If your host provides phpMyAdmin or some other interface to run MySQL queries, you could do this pretty easily with a few MySQL queries in the SQL tab. Alternatively, you could follow The Customize Windows’ instructions to do it from the command line.
At the risk of stating the obvious, replace example.com in the following queries with your actual domain. Also, if you’ve customized WordPress’ table prefix, replace wp_ with the relevant prefix.
First, update the URLs of the posts and pages.

UPDATE wp_posts SET guid = replace(guid, 'http://example.com', 'https://example.com');
[UPDATE: As discussed in the comments, the guid field should not be edited.]
Update the wp_postmeta table, too.
UPDATE wp_postmeta SET meta_value = replace(meta_value, 'http://example.com', 'https://example.com');
Finally, update the actual contents of posts or pages. This will update any backlinks to HTTPS.
UPDATE wp_posts SET post_content = REPLACE(post_content, 'http://example.com', 'https://example.com');
After running these queries, you will want to refresh your permalinks by going to “Settings” → “Permalinks.” Simply change the setting back to the default, and then set it back to whatever setting you were originally using.
Now, you should be able to click the menus and links throughout the website, and the protocol should remain HTTPS.

Troubleshooting Mixed-Content Warnings

Depending on the theme and plugins in use, you might get a warning in the address bar stating that certain resources are not being served securely. If the errors are associated with assets added by your own custom theme or plugin, make sure to properly enqueue JavaScript and CSS files and not to hardcode URLs that begin with HTTP. Most browsers will let you expand the warning to show the specific requests that are causing the error. You could also try a free plugin such as SSL Insecure Content Fixer, which will attempt to correct third-party plugins that have failed to do this.
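Most browsers will also list the offending requests in the developer console. If you prefer to script the check, here is a small illustrative helper (the function name and sample URLs are mine, not part of any plugin) that filters a list of requested URLs down to the insecure ones:

```javascript
// Hypothetical helper: return the URLs that would trigger a
// mixed-content warning because they are still served over plain HTTP.
function findInsecureResources(urls) {
  return urls.filter(function (url) {
    return url.indexOf("http://") === 0;
  });
}

console.log(findInsecureResources([
  "https://example.com/style.css",
  "http://example.com/legacy-plugin.js"
]));
```

In a browser console, you could feed it the page’s real requests via performance.getEntriesByType("resource").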
By this point, you should see the green padlock in the URL bar when visiting your website. If you aren’t using an e-commerce plugin such as WooCommerce or WP eCommerce, you’re done! If you are, there is an important last step.

Getting Flexible SSL To Work With E-Commerce Plugins

WordPress has a core function named is_ssl() that plugins rely on to determine whether traffic is encrypted with SSL. With the method above alone, this function will return false because the encryption exists only between CloudFlare and the client. The traffic that PHP interacts with is unencrypted, so the superglobal that is_ssl() checks (i.e. $_SERVER['HTTPS']) is not useful here. For our purpose, the relevant variable is $_SERVER['HTTP_X_FORWARDED_PROTO'], which, at the time of writing, WordPress does not recognize. The request to change this is long-standing, but it has yet to be resolved.
Fortunately, a free plugin, CloudFlare Flexible SSL, will fix this for you immediately. Simply install the plugin and activate it. Remember that this technique does not add any more security. Traffic between CloudFlare and your website’s server is still unencrypted and, therefore, still vulnerable to sniffing.

Flexible SSL Is Not Full SSL

CloudFlare’s “Universal SSL” initiative is an interesting attempt to make the Internet more secure, but it is not without controversy. The primary concern is that flexible SSL does not encrypt the second half of the traffic’s journey (to your server), yet the browser currently still shows the same green padlock that we have come to associate with complete SSL. CloudFlare offers the following justification on its blog:
Having cutting-edge encryption may not seem important to a small blog, but it is critical to advancing the encrypted-by-default future of the Internet. Every byte, however seemingly mundane, that flows encrypted across the Internet makes it more difficult for those who wish to intercept, throttle, or censor the web. In other words, ensuring your personal blog is available over HTTPS makes it more likely that a human rights organization or social media service or independent journalist will be accessible around the world. Together we can do great things.
For better or worse, flexible SSL is here, and the Internet will have to adapt. In the meantime, the burden is on website owners to be educated and to make responsible decisions.

Redirecting HTTP Requests To HTTPS

Enabling a website to run on HTTPS does not ensure that requests will actually use the protocol. If your website has been around for a while, users might have already bookmarked it with HTTP. You can redirect all HTTP requests to the new protocol by adding the following snippet to the top of the .htaccess file in the root of your website. If the file does not exist, you can safely add it.
<IfModule mod_rewrite.c>
    RewriteEngine On
    RewriteCond %{HTTP:X-Forwarded-Proto} !https
    RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]
</IfModule>
If an .htaccess file already exists, be careful not to change anything between the # BEGIN WordPress and # END WordPress lines in that file. Those lines are managed by WordPress, and whenever the permalinks get refreshed, the contents in that section get overwritten.


By upgrading your website to HTTPS, you have improved your website, protected users and participated in the advancement of the Internet. And it didn’t cost you anything!

Friday, January 8, 2016

Improving UX For Color-Blind Users

According to Colour Blind Awareness, 4.5% of the population is color-blind. If your audience is mostly male, this increases to 8%. Designing for color-blind people can easily be forgotten because most designers aren’t color-blind. In this article I provide 13 tips to improve the experience for color-blind people – something which can often benefit people with normal vision too.

What Is Color Blindness?

There are many types of color blindness but it comes down to not seeing color clearly, getting colors mixed up, or not being able to differentiate between certain colors.
These problems can also be exacerbated by the environments in which people use websites. This could include low-quality monitors, bad lighting, screen glare, tiny mobile screens and sitting far away from a huge television screen.
Relying solely on color for readability and affordance makes a website difficult to use, which ultimately affects readership and sales.
While the following tips aren’t exhaustive, they do cover the majority of problems color-blind people experience when using websites.

1. Text Readability

To ensure text is readable it should pass accessibility guidelines based on the combination of text color, background color and text size as follows:
“WCAG 2.0 level AA requires a contrast ratio of 4.5:1 for normal text and 3:1 for large text (14 point and bold or larger, or 18 point or larger).”
WebAim color contrast checker

2. Text Overlaid On Background Images 

Text overlaid on imagery is tricky because some or all of the image may not have sufficient contrast in relation to the text.
One common fix is to place a semi-transparent overlay between the image and the text. Alternatively, you can style the text itself to have a solid background color or a drop shadow, or anything else that matches your brand guidelines.

3. Color Filters, Pickers And Swatches

The screenshot below shows the color filter on Amazon as seen by someone with and without protanopia (red–green color blindness). Without descriptive text it is impossible to differentiate between many of the options available.
Amazon shows descriptive text when the user hovers, but hover isn’t available on mobile.
Gap solves this problem by adding a text label beside each color.
This happens to be beneficial for people with normal vision too. For example, black and navy are difficult colors to differentiate on screen. A text label takes the guesswork out of it.

4. Photographs Without Useful Descriptions

The screenshot below shows a SuperDry T-shirt for sale on its website. It is described as “Leaf Jaspe,” which is ambiguous, as leaves come in an assortment of colors (green, yellow, brown, etc.).
Jaspe (or rather “jaspé”) means randomly mottled or variegated, so using this in addition to the specific color would be useful: “Gray Green Leaf Jaspe.”

5. Links

Links should be easy to spot without relying on color. The screenshot below simulates the vision of somebody with achromatopsia (who can’t see color) viewing the UK Government Digital Service (GDS) website. Many of the links are hard to see. For example, did you notice that “GDS team, User research” (located under the heading) are links?
To find a link, users are left having to hover with their mouse waiting for the cursor to change to a pointer. On mobile, they are left to tap on text hoping it will make a page request.
The links above with icons are easier to see. For those without, it would be a good idea to add an underline, which is exactly what GDS does within the body of its articles.

6. Color Combinations

In the physical world you can’t always control which colors appear next to one another: a red apple may have dropped and nestled itself into some green grass. However, we can control the colors we use to design our website. The following color combinations should be avoided where possible:
  • green/red
  • green/brown
  • blue/purple
  • green/blue
  • light green/yellow
  • blue/grey
  • green/grey
  • green/black

7. Form Placeholders

Using a placeholder without a label is problematic because placeholder text usually lacks sufficient contrast. Apple has this problem with its registration form.
Increasing the contrast is not advisable because it will then be hard to tell the difference between placeholder text and user input.
It’s better to use labels – a good practice anyway – with sufficient contrast.

8. Primary Buttons

Often, primary buttons use color alone to present themselves as such, and Argos does just this on its login screen, which relies on color to emphasize the primary button.
Instead, consider using size, placement, boldness, contrast, borders, icons and anything else that will help – within the bounds of your brand guidelines. As an example, Kidly uses size, color and iconography.

9. Alert Messaging

Success and error messages are often colored green and red respectively. Most color-blind people don’t suffer from achromatism, and so will naturally associate different colors with different messages. However, using prefix text such as “Success” or, my preference, an icon makes it quick and easy to read.

10. Required Form Fields

Denoting required fields with color is a problem because some people may not be able to see the differences.
Instead, you could consider:
  • Marking required fields with an asterisk.
  • Even better, marking required fields with “required.”
  • Where possible, remove optional fields altogether.

11. Graphs

Color is often used to signify different segments of a graph.
Using patterns and, where possible, placing text within each segment makes graphs easy to understand. When text doesn’t fit – as is often the case with a small pie chart segment – using a key will suffice.

12. Zoom 

One accessibility feature of browsers is that they let users zoom in as much as they need. This improves readability, which is especially helpful on a mobile device.
Unfortunately, zoom can be disabled using the Viewport Meta Tag, which is problematic. For example, text size may be too small to read in relation to the color contrast—but zooming in effectively increases the font size, making it easier to read. So don’t disable zoom on your website.
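For reference, a viewport meta tag can provide responsive behavior without blocking zoom; the problem only appears when flags such as user-scalable=no (or a restrictive maximum-scale) are added:

```html
<!-- Responsive viewport that still lets users pinch-zoom. -->
<meta name="viewport" content="width=device-width, initial-scale=1">

<!-- Avoid this: user-scalable=no blocks the zoom accessibility feature. -->
<meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=no">
```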

13. Relative Font Size Link

Similarly to the previous point, browsers provide the ability to increase the text size (instead of zooming the entire page), in order to improve readability. However, some browsers disable this functionality when the font size is specified in absolute units such as pixels. Using a relative unit, such as ems (for example, font-size: 1.25em rather than 20px), ensures that all browsers afford this capability.


There are lots of tools available to help you design for color-blind people:
  1. Check My Colours: if you have an existing website, you can just enter a URL and receive feedback of what needs to be improved.
  2. WebAim’s color contrast checker: provide two colors to see if they pass accessibility guidelines.
  3. I Want To See Like The Color Blind: apply color blindness filters to your web page right within Chrome.
  4. Color Oracle: a color blindness simulator for Windows, Mac and Linux, showing you what people with common color vision impairments will see.


The tips in this article are not exhaustive, and they are not necessarily applicable to every situation. However, they do cover the majority of problems color-blind people experience when using websites.
It’s more important to take away the principles, so that you can integrate them into your own design process. Ultimately, websites aren’t just meant to look good – they are meant to be easy to use for everyone, including people who are color-blind.

Thursday, January 7, 2016

How To Harness The Machines: Being Productive With Task Runners

Task runners are the heroes (or villains, depending on your point of view) that quietly toil behind most web and mobile applications. Task runners provide value through the automation of numerous development tasks such as concatenating files, spinning up development servers and compiling code. In this article, we’ll cover Grunt, Gulp, Webpack and npm scripts. We’ll also provide some examples of each one to get you started. Near the end, I’ll throw out some easy wins and tips for integrating ideas from this post into your application.
There is a sentiment that task runners, and JavaScript advances in general, are over-complicating the front-end landscape. I agree that spending the entire day tweaking build scripts isn’t always the best use of your time, but task runners have some benefits when used properly and in moderation. That’s our goal in this article, to quickly cover the basics of the most popular task runners and to provide solid examples to kickstart your imagination regarding how these tools can fit in your workflow.

A Note on the Command Line

Task runners and build tools are primarily command-line tools. Throughout this article, I’ll assume a decent level of experience and competence in working with the command line. If you understand how to use common commands like cd, ls, cp and mv, then you should be all right as we go through the various examples. Let’s kick things off with the granddaddy of them all: Grunt.


Grunt

Grunt was the first popular JavaScript-based task runner. I’ve been using Grunt in some form since 2012. The basic idea behind Grunt is that you use a special JavaScript file, Gruntfile.js, to configure various plugins to accomplish tasks. It has a vast ecosystem of plugins and is a very mature and stable tool. Grunt has a fantastic web directory that indexes the majority of plugins (about 5,500 currently). The simple genius of Grunt is its combination of JavaScript and the idea of a common configuration file (like a makefile), which has allowed many more developers to contribute to and use Grunt in their projects. It also means that Grunt can be placed under the same version control system as the rest of the project.
Grunt is battle-tested and stable. Around the time of writing, version 1.0.0 was released, which is a huge accomplishment for the Grunt team. Because Grunt largely configures various plugins to work together, it can get tangled (i.e. messy and confusing to modify) pretty quickly. However, with a little care and organization (breaking tasks into logical files!), you can get it to do wonders for any project.
In the rare case that a plugin isn’t available to accomplish the task you need, Grunt provides documentation on how to write your own plugin. All you need to know to create your own plugin is JavaScript and the Grunt API. You’ll almost never have to create your own plugin, so let’s look at how to use Grunt with a pretty popular and useful plugin!

An Example

Let’s look at how Grunt actually works. Running grunt in the command line will trigger the Grunt command-line program that looks for Gruntfile.js in the root of the directory. The Gruntfile.js contains the configuration that controls what Grunt will do. In this sense, Gruntfile.js can be seen as a kind of cookbook that the cook (i.e. Grunt, the program) follows; and, like any good cookbook, Gruntfile.js will contain many recipes (i.e. tasks).
We’re going to put Grunt through the paces by using the Grunticon plugin to generate icons for a hypothetical web app. Grunticon takes in a directory of SVGs and spits out several assets:
  • a CSS file with the SVGs base-64-encoded as background images;
  • a CSS file with PNG versions of the SVGs base-64-encoded as background images;
  • a CSS file that references an individual PNG file for each icon.
The three different files represent the various capabilities of browsers and mobile devices. Modern devices will receive the high-resolution SVGs as a single request (i.e. a single CSS file). Browsers that don’t handle SVGs but handle base-64-encoded assets will get the base-64 PNG style sheet. Finally, any browsers that can’t handle those two scenarios will get the “traditional” style sheet that references PNGs. All this from a single directory of SVGs!
The configuration of this task looks like this:
module.exports = function(grunt) {

  grunt.config("grunticon", {
    icons: {
      files: [{
        expand: true,
        cwd: 'grunticon/source',
        src: ["*.svg", "*.png"],
        dest: 'dist/grunticon'
      }],
      options: {
        colors: {
          "blue": "blue"
        }
      }
    }
  });

  grunt.loadNpmTasks('grunt-grunticon');
};
Let’s walk through the various steps here:
  1. You must have Grunt installed globally.
  2. Create the Gruntfile.js file in the root of the project. It’s best to also install Grunt as an npm dependency in your package.json file along with Grunticon via npm i grunt grunt-grunticon --save-dev.
  3. Create a directory of SVGs and a destination directory (where the built assets will go).
  4. Place a small script in the head of your HTML, which will determine what icons to load.
Here is what your directory should look like before you run the Grunticon task:
|-- Gruntfile.js
|-- grunticon
|   `-- source
|       `-- logo.svg
`-- package.json

Once those things are installed and created, you can copy the code snippet above into Gruntfile.js. You should then be able to run grunt grunticon from the command line and see your task execute.
The snippet above does a few things:
  • adds a new config object to Grunt named grunticon;
  • fills out the various options and parameters for Grunticon in the icons object;
  • finally, pulls in the Grunticon plugin via loadNpmTasks.
Here is what your directory should look like post-Grunticon:
|-- Gruntfile.js
|-- dist
|   `-- grunticon
|       |-- grunticon.loader.js
|       |-- icons.data.png.css
|       |-- icons.data.svg.css
|       |-- icons.fallback.css
|       |-- png
|       |   `-- logo.png
|       `-- preview.html
|-- grunticon
|   `-- source
|       `-- logo.svg
`-- package.json

There you go — finished! In a few lines of configuration and a couple of package installations, we’ve automated the generation of our icon assets! Hopefully, this begins to illustrate the power of task runners: reliability, efficiency and portability.

Gulp: LEGO Blocks For Your Build System

Gulp emerged sometime after Grunt and aspired to be a build tool that wasn’t all configuration but actual code. The idea behind code over configuration is that code is much more expressive and flexible than the modification of endless config files. The hurdle with Gulp is that it requires more technical knowledge than Grunt. You will need to be familiar with the Node.js streaming API and be comfortable writing basic JavaScript.
Gulp’s use of Node.js streams is the main reason it’s faster than Grunt. Using streams means that, instead of using the file system as the “database” for file transformations, Gulp uses in-memory transformations. For more information on streams, check out the Node.js streams API documentation, along with the stream handbook.

An Example

As in the Grunt section, we’re going to put Gulp through the paces with a straightforward example: concatenating our JavaScript modules into a single app file.
Running Gulp is the same as running Grunt. The gulp command-line program will look for the cookbook of recipes (i.e. Gulpfile.js) in the directory in which it’s run.
Limiting the number of requests each page makes is considered a web performance best practice (especially on mobile). Yet collaborating with other developers is much easier if functionality is split into multiple files. Enter task runners. We can use Gulp to combine the multiple files of JavaScript for our application so that mobile clients have to load a single file, instead of many.
Gulp has the same massive ecosystem of plugins as Grunt. So, to make this task easy, we’re going to lean on the gulp-concat plugin. Let’s say our project’s structure looks like this:
|-- dist
|   `-- app.js
|-- gulpfile.js
|-- package.json
`-- src
    |-- bar.js
    `-- foo.js
Two JavaScript files are in our src directory, and we want to combine them into one file, app.js, in our dist/ directory. We can use the following Gulp task to accomplish this.
var gulp = require('gulp');
var concat = require('gulp-concat');

gulp.task('default', function() {
  return gulp.src('./src/*.js')
    .pipe(concat('app.js'))
    .pipe(gulp.dest('./dist'));
});
The important bits are in the gulp.task callback. There, we use the gulp.src API to get all of the files that end with .js in our src directory. The gulp.src API returns a stream of those files, which we can then pass (via the pipe API) to the gulp-concat plugin. The plugin then concatenates all of the files in the stream and passes the result on to the gulp.dest function, which simply writes the input it receives to disk.
You can see how Gulp uses streams to give us “building blocks” or “chains” for our tasks. A typical Gulp workflow looks like this:
  1. Get all files of a certain type.
  2. Pass those files to a plugin (concat!), or do some transformation.
  3. Pass those transformed files to another block (in our case, the dest block, which ends our chain).
As in the Grunt example, simply running gulp from the root of our project directory will trigger the default task defined in the Gulpfile.js file. This task concatenates our files and lets us get on with developing our app or website.


Webpack

The newest addition to the JavaScript task runner club is Webpack. Webpack bills itself as a “module bundler,” which means it can dynamically build a bundle of JavaScript code from multiple separate files using module patterns such as the CommonJS pattern. Webpack also has plugins, which it calls loaders.
Webpack is still fairly young and has rather dense and confusing documentation. Therefore, I’d recommend Pete Hunt’s Webpack repository as a great starting point before diving into the official documentation. I also wouldn’t recommend Webpack if you are new to task runners or don’t feel proficient in JavaScript. Those issues aside, it’s still a more specific tool than the general broadness of Grunt and Gulp. Many people use Webpack alongside Grunt or Gulp for this very reason, letting Webpack excel at bundling modules and letting Grunt or Gulp handle more generic tasks.
Webpack ultimately lets us write Node.js-style code for the browser, a great win for productivity and making a clean separation of concerns in our code via modules. Let’s use Webpack to achieve the same result as we did with the Gulp example, combining multiple JavaScript files into one app file.
Webpack is often used with Babel to transpile ES6 code to ES5. Transpiling code from ES6 to ES5 lets developers use the emerging ES6 standard while serving up ES5 to browsers or environments that don’t fully support ES6 yet. However, in this example, we’ll focus on building a simple bundle of our two files from the Gulp example. To begin, we need to install Webpack and create a config file, webpack.config.js. Here’s what our file looks like:
module.exports = {
    entry: "./src/foo.js",
    output: {
        filename: "app.js",
        path: "./dist"
    }
};
In this example, we’re pointing Webpack to our src/foo.js file to begin its work of walking our dependency graph. We’ve also updated our foo.js file to look like this:
var bar = require("./bar");

var foo = function() {
  // use the functionality exported by the bar module
  bar();
};

module.exports = foo;
And we’ve updated our bar.js file to look like this:
var bar = function() {
  // do this module’s work here
};

module.exports = bar;
This is a very basic CommonJS example. You’ll notice that these files now “export” a function. Essentially, CommonJS and Webpack allow us to begin organizing our code into self-contained modules that can be imported and exported throughout our application. Webpack is smart enough to follow the require calls and module.exports assignments and to bundle everything into one file, dist/app.js. We no longer need to maintain a concatenation task, and we simply need to adhere to a structure for our code instead. Much better!


Webpack is akin to Gulp in that “It’s just JavaScript.” It can be extended to do other task runner tasks via its loader system. For instance, you can use css-loader and sass-loader to compile Sass into CSS and even to use the Sass in your JavaScript by overloading the require CommonJS pattern! However, I typically advocate for using Webpack solely to build JavaScript modules and for using another more general-purpose approach for task running (for example, Webpack and npm scripts or Webpack and Gulp to handle everything else).
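As a sketch of that idea – using webpack 1.x loader syntax, which was current at the time of writing, and assuming style-loader, css-loader and sass-loader are installed (this config is my illustration, not from the article) – Sass support could be wired into the earlier config like this:

```javascript
var config = {
    entry: "./src/foo.js",
    output: {
        filename: "app.js",
        path: "./dist"
    },
    module: {
        loaders: [
            // Files ending in .scss are piped through sass → css → style.
            { test: /\.scss$/, loaders: ["style", "css", "sass"] }
        ]
    }
};

module.exports = config;
```

With this in place, a require("./styles/main.scss") in your JavaScript pulls the compiled CSS into the bundle.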

npm Scripts

npm scripts are the latest hipster craze, and for good reason. As we’ve seen with all of these tools, the number of dependencies they might introduce to a project could eventually spin out of control. The first post I saw advocating for npm scripts as the entry point for a build process was by James Halliday. His post perfectly sums up the ignored power of npm scripts (emphasis mine):
There are some fancy tools for doing build automation on JavaScript projects that I’ve never felt the appeal of because the lesser-known npm run command has been perfectly adequate for everything I’ve needed to do while maintaining a very tiny configuration footprint.
Did you catch that last bit at the end? The primary appeal of npm scripts is that they have a “very tiny configuration footprint.” This is one of the main reasons why npm scripts have started to catch on (almost four years later, sadly). With Grunt, Gulp and even Webpack, one eventually begins to drown in plugins that wrap binaries and double the number of dependencies in a project.
Keith Cirkel has the go-to tutorial on using npm to replace Grunt or Gulp. He provides the blueprint for how to fully leverage the power of npm scripts, and he’s introduced an essential plugin, Parallel Shell (and a host of others just like it).

An Example

In our section about Grunt, we took the popular module Grunticon and created SVG icons (with PNG fallbacks) in a Grunt task. This used to be the one pain point with npm scripts for me. For a while, I would keep Grunt installed for projects just to use Grunticon. I would literally “shell out” to Grunt in my npm task to achieve task-runner inception (or, as we started calling it at work, a build-tool turducken). Thankfully, The Filament Group, the fantastic group behind Grunticon, released a standalone (i.e. Grunt-free) version of their tool, Grunticon-Lib. So, let’s use it to create some icons with npm scripts!
This example is a little more advanced than a typical npm script task. A typical npm script task is a call to a command-line tool, with the appropriate flags or config file. Here’s a more typical task that compiles our Sass to CSS:
"sass": "node-sass src/scss/ -o dist/css",
See how it’s just one line with various options? No task file needed, no build tool to spin up — just npm run sass from the command line, and your Sass is now CSS. One really nice feature of npm scripts is how you can chain script tasks together. For instance, say we want to run some task before our Sass task runs. We would create a new script entry like this:
"presass": "echo 'before sass'",
That’s right: npm understands the pre- prefix. It also understands the post- prefix. Any script entry with the same name as another script entry with a pre- or post- prefix will run before or after that entry.
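Put together, the scripts block from the examples above might read like this (the echo commands are stand-ins for real work):

```json
{
  "scripts": {
    "presass": "echo 'runs before sass'",
    "sass": "node-sass src/scss/ -o dist/css",
    "postsass": "echo 'runs after sass'"
  }
}
```

Running npm run sass now executes all three entries in order: presass, then sass, then postsass.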
Converting our icons will require an actual Node.js file. It’s not too serious, though. Just create a tasks directory, and create a new file named grunticon.js or icons.js or whatever makes sense to those working on the project. Once the file is created, we can write some JavaScript to fire off our Grunticon process.
Note: All of these examples use ES6, so we’re going to use babel-node to run our task. You can easily use ES5 and Node.js, if that’s more comfortable.
import icons from "grunticon-lib";
import globby from "globby";

let files = globby.sync('src/icons/*');
let options = {
  colors: {
    "blue": "blue"
  }
};

let icon = new icons(files, 'dist/icons', options);
icon.process();

Let’s get into the code and figure out what’s going on.
  1. We import (i.e. require) two libraries, grunticon-lib and globby. Globby is one of my favorite tools, and it makes working with files and globs so easy. Globby enhances Node’s glob module (for example, selecting all JavaScript files via ./*.js) with Promise support. In this case, we’re using it to get all files in the src/icons directory.
  2. Once we do that, we set a few options in an options object and then call Grunticon-Lib with three arguments:
    • the icon files,
    • the destination,
    • the options.
    The library takes over and chews away on those icons and eventually creates the SVGs and PNG versions in the directory we want.
  3. We’re almost done. Remember that this is in a separate file, and we need to add a “hook” to call this file from our npm script, like this: "icons": "babel-node tasks/icons.js".
  4. Now we can run npm run icons, and our icons will get created every time.
npm scripts offer a similar level of power and flexibility as other task runners, without the plugin debt.

Breakdown Of Task Runners Covered Here

Tool          Pros
Grunt         No real programming knowledge needed
Gulp          Configure tasks with actual JavaScript and streams
Webpack       Best in class at module bundling
npm scripts   Direct interaction with command-line tools

Some Easy Wins

All of these examples and task runners might seem overwhelming, so let’s break it down. First, I hope you don’t take away from this article that whatever task runner or build system you are currently using needs to be instantly replaced with one mentioned here. Replacing important systems like this shouldn’t be done without much consideration. Here’s my advice for upgrading an existing system: Do it incrementally.

Wrapper Scripts!

One incremental approach is to look at writing a few “wrapper” npm scripts around your existing task runners to provide a common vocabulary for build steps that is outside of the actual task runner used. A wrapper script could be as simple as this:
  "scripts": {
    "start": "gulp"
  }
Many projects utilize the start and test npm script blocks to help new developers get acclimated quickly. A wrapper script does introduce another layer of abstraction to your build chain, yet I think it’s worth it to be able to standardize around the npm primitives (e.g. test). The npm commands have better longevity than any individual tool.
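A slightly fuller wrapper might standardize a few npm primitives over project-specific tools (gulp and mocha here are only examples of what could sit underneath):

```json
{
  "scripts": {
    "start": "gulp",
    "test": "mocha test/",
    "build": "gulp build"
  }
}
```

New developers can then run npm start and npm test on any project without first learning its particular toolchain.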

Sprinkle in a Little Webpack 

If you or your team are feeling the pain of maintaining a brittle “bundle order” for your JavaScript, or you’re looking to upgrade to ES6, consider this an opportunity to introduce Webpack to your existing task-running system. Webpack is great in that you can use as much or as little of it as you want and yet still derive value from it. Start just by having it bundle your application code, and then add babel-loader to the mix. Webpack has such a depth of features that it’ll be able to accommodate just about any additions or new features for quite some time.

Easily Use PostCSS With npm Scripts 

PostCSS is a great collection of plugins that transform and enhance CSS once it’s written and preprocessed. In other words, it’s a post-processor. It’s easy enough to leverage PostCSS using npm scripts. Say we have a Sass script like in our previous example:
"sass": "node-sass src/scss/ -o dist/css",
We can use npm script’s lifecycle keywords to add a script to run automatically after the Sass task:
"postsass": "postcss --use autoprefixer -c postcss.config.json dist/css/*.css -d dist/css",
This script will run every time the Sass script is run. The postcss-cli package is great, because you can specify configuration in a separate file. Notice that in this example, we add another script entry to accomplish a new task; this is a common pattern when using npm scripts. You can create a workflow that accomplishes all of the various tasks your app needs.
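For illustration, the postcss.config.json referenced by the -c flag could hold the plugin options, keyed by plugin name. The browser list below is an arbitrary example; consult the postcss-cli documentation for the exact schema your version expects.

```json
{
  "autoprefixer": {
    "browsers": ["last 2 versions"]
  }
}
```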


Task runners can solve real problems. I’ve used task runners to compile different builds of a JavaScript application, depending on whether the target was production or local development. I’ve also used task runners to compile Handlebars templates, to deploy a website to production and to automatically add vendor prefixes that are missed in my Sass. These are not trivial tasks, but once they are wrapped up in a task runner, they become effortless.
Task runners are constantly evolving and changing. I’ve tried to cover the most used ones in the current zeitgeist. However, there are others that I haven’t even mentioned, such as Broccoli, Brunch and Harp. Remember that these are just tools: Use them only if they solve a particular problem, not because everyone else is using them. Happy task running!

Wednesday, January 6, 2016

How to Install Octopress

In this tutorial I’m going to show you how I’ve installed Octopress. Installing Octopress was a fun experience, although it can be tricky, especially for those without prior knowledge of Ruby tools such as gems and rake.
As you probably know, Octopress is a blogging framework for hackers, and it’s very customizable; you can do almost anything you want with it.
The documentation for Octopress is actually pretty readable and easy to follow. You can go ahead and read it and see if you can follow along. Just come back here if you think you can’t, and I’ll make things easier for you. I promise.

Tools you’re going to need

Download and install the first two tools (Ruby and RubyGems). It’s not that difficult. The third one (the Ruby DevKit) is a little harder in my opinion, so I’m going to walk you through it.

Installing the Ruby Dev Kit

Extract the zip file anywhere, then open up a new terminal inside the root directory where its contents have been extracted.
Execute this command:
ruby dk.rb init
Edit the config.yml file that it has generated. On the last line specify the directory where Ruby is installed:
- D:/web_files/github/yari/ruby-1.9.3-p194-i386-mingw32
Execute the following command(still inside the ruby dev kit folder):
ruby dk.rb install
Once that’s done, you can go ahead and follow the Octopress setup guide.
The Octopress docs will pretty much guide you through what to do next, so it’s actually pretty easy and straightforward.
Just go through the guides in order. The following steps assume that you have already installed Git on your computer. If you haven’t done that yet, just follow this guide from GitHub on setting up Git. If you’re on Windows, it’s actually a lot easier, because all you have to do is download the GitHub installer for Windows and install it just like you install most programs on Windows (click Next until you hit Finish).
If you’re having trouble you can seek help at Stackoverflow(don’t forget to use the octopress tag), Twitter(they’re pretty fast at responding) or Octopress Github Page and post an issue.
Good luck!

Tuesday, January 5, 2016

Idiotic Client Feedback

LOL Client Emails That Designers Hate To Receive

We have compiled a funny list of client emails that designers absolutely dread receiving. Most of them are plain old client stupidity, but some (like the payment-related ones) are just pure evil. Check out the compilation below.

They’re all cringe-worthy, but Nos. 5, 7 and 10 take the cake. Have more to add to this list? What’s the worst or funniest client email you’ve ever received? Share this post and your views in the comments below.

Saturday, January 2, 2016

Shopify – Add Customer Tag on user registration

Within your user registration form, located in customers/register.liquid, find this line:
{% form 'create_customer' %}
{{ form.errors | default_errors }}
and below it add a hidden input that sets the customer’s tags, for example:
<input type="hidden" name="customer[tags]" value="Wholesale" />
where “Wholesale” is the tag you would like to add to the user.
If you want to build a custom registration page for wholesale customers, create a new page template and copy and paste the form into it.

Friday, January 1, 2016

How It Feels to Learn JavaScript in 2016

Hey, I got this new web project, but to be honest I haven’t coded much web in a few years and I’ve heard the landscape changed a bit. You’re the most up-to-date web dev around here, right?
-The actual term is Front End engineer, but yeah, I’m the right guy. I do web in 2016. Visualisations, music players, flying drones that play football, you name it. I just came back from JsConf and ReactConf, so I know the latest technologies to create web apps.
Cool. I need to create a page that displays the latest activity from the users, so I just need to get the data from the REST endpoint and display it in some sort of filterable table, and update it if anything changes in the server. I was thinking maybe using jQuery to fetch and display the data?
-Oh my god no, no one uses jQuery anymore. You should try learning React, it’s 2016.
Oh, OK. What’s React?
-It’s a super cool library made by some guys at Facebook, it really brings control and performance to your application, by allowing you to handle any view changes very easily.
That sounds neat. Can I use React to display data from the server?
-Yeah, but first you need to add React and React DOM as a library in your webpage.
Wait, why two libraries?
-So one is the actual library and the second one is for manipulating the DOM, which you can now describe in JSX.
JSX? What is JSX?
-JSX is just a JavaScript syntax extension that looks pretty much like XML. It’s kind of another way to describe the DOM, think of it as a better HTML.
What’s wrong with HTML?
-It’s 2016. No one codes HTML directly anymore.
Right. Anyway, if I add these two libraries then I can use React?
-Not quite. You need to add Babel, and then you are able to use React.
Another library? What’s Babel?
-Oh, Babel is a transpiler that allows you to target specific versions of JavaScript, while you code in any version of JavaScript. You don’t HAVE to include Babel to use ReactJS, but unless you do, you are stuck with using ES5, and let’s be real, it’s 2016, you should be coding in ES2016+ like the rest of the cool kids do.
ES5? ES2016+? I’m getting lost over here. What’s ES5 and ES2016+?
-ES5 stands for ECMAScript 5. It’s the edition that most people target, since it has been implemented by most browsers nowadays.
ECMAScript?
-Yes, you know, the scripting standard JavaScript was based on in 1999 after its initial release in 1995, back when JavaScript was named LiveScript and only ran in Netscape Navigator. That was very messy back then, but thankfully now things are very clear and we have, like, 7 editions of this implementation.
7 editions. For real. And ES5 and ES2016+ are?
-The fifth and seventh edition respectively.
Wait, what happened with the sixth?
-You mean ES6? Yeah, I mean, each edition is a superset of the previous one, so if you are using ES2016+, you are using all the features of the previous versions.
Right. And why use ES2016+ over ES6 then?
-Well, you COULD use ES6, but to use cool features like async and await, you need to use ES2016+. Otherwise you are stuck with ES6 generators with coroutines to block asynchronous calls for proper control flow.
I have no idea what you just said, and all these names are confusing. Look, I’m just loading a bunch of data from a server, I used to be able to just include jQuery from a CDN and just get the data with AJAX calls, why can’t I just do that?
-It’s 2016 man, no one uses jQuery anymore, it ends up in a bunch of spaghetti code. Everyone knows that.
Right. So my alternative is to load three libraries to fetch data and display a HTML table.
-Well, you include those three libraries but bundle them up with a module manager to load only one file.
I see. And what’s a module manager?
-The definition depends on the environment, but in the web we usually mean anything that supports AMD or CommonJS modules.
Riiight. And AMD and CommonJS are…?
-Definitions. They are ways to describe how multiple JavaScript libraries and classes should interact. You know, exports and require? You can write multiple JavaScript files defining the AMD or CommonJS API and use something like Browserify to bundle them up.
OK, that makes sense… I think. What is Browserify?
-It’s a tool that allows you to bundle CommonJS described dependencies to files that can be run in the browser. It was created because most people publish those dependencies in the npm registry.
npm registry?
-It’s a very big public repository where smart people put code and dependencies as modules.
Like a CDN?
-Not really. It’s more like a centralised database where anyone can publish and download libraries, so you can use them locally for development and then upload them to a CDN if you want to.
Oh, like Bower!
-Yes, but it’s 2016 now, no one uses Bower anymore.
Oh, I see… so I need to download the libraries from npm then?
-Yes. So for instance, if you want to use React, you download the React module and import it in your code. You can do that for almost every popular JavaScript library.
Oh, like Angular!
-Angular is so 2015. But yes. Angular would be there, alongside VueJS or RxJS and other cool 2016 libraries. Want to learn about those?
Let’s stick with React, I’m already learning too many things now. So, if I need to use React I fetch it from this npm and then use this Browserify thing?
That seems overly complicated to just grab a bunch of dependencies and tie them together.
-It is, that’s why you use a task manager like Grunt or Gulp or Broccoli to automate running Browserify. Heck, you can even use Mimosa.
Grunt? Gulp? Broccoli? Mimosa? The heck are we talking about now?
-Task managers. But they are not cool anymore. We used them in like, 2015, then we used Makefiles, but now we wrap everything with Webpack.
Makefiles? I thought that was mostly used on C or C++ projects.
-Yeah, but apparently in the web we love making things complicated and then going back to the basics. We do that every year or so, just wait for it, we are going to do assembly in the web in a year or two.
Sigh. You mentioned something called Webpack?
-It’s another module manager for the browser while being kind of a task runner as well. It’s like a better version of Browserify.
Oh, Ok. Why is it better?
-Well, maybe not better, it’s just more opinionated on how your dependencies should be tied. Webpack allows you to use different module managers, and not only CommonJS ones, so for instance native ES6 supported modules.
I’m extremely confused by this whole CommonJS/ES6 thing.
-Everyone is, but you shouldn’t care anymore with SystemJS.
Jesus christ, another noun-js. Ok, and what is this SystemJS?
-Well, unlike Browserify and Webpack 1.x, SystemJS is a dynamic module loader that allows you to tie multiple modules in multiple files instead of bundling them in one big file.
Wait, but I thought we wanted to build our libraries in one big file and load that!
-Yes, but because HTTP/2 is coming, now multiple HTTP requests are actually better.
Wait, so can’t we just add the three original libraries for React??
-Not really. I mean, you could add them as external scripts from a CDN, but you would still need to include Babel then.
Sigh. And that is bad right?
-Yes, you would be including the entire babel-core, and it wouldn’t be efficient for production. On production you need to perform a series of pre-tasks to get your project ready that make the ritual to summon Satan look like a boiled eggs recipe. You need to minify assets, uglify them, inline css above the fold, defer scripts, as well as-
I got it, I got it. So if you wouldn’t include the libraries directly in a CDN, how would you do it?
-I would transpile it from Typescript using a Webpack + SystemJS + Babel combo.
Typescript? I thought we were coding in JavaScript!
-Typescript IS JavaScript, or better put, a superset of JavaScript, more specifically JavaScript on version ES6. You know, that sixth version we talked about before?
I thought ES2016+ was already a superset of ES6! WHY we need now this thing called Typescript?
-Oh, because it allows us to use JavaScript as a typed language, and reduce run-time errors. It’s 2016, you should be adding some types to your JavaScript code.
And Typescript obviously does that.
-Flow as well, although it only checks for typing while Typescript is a superset of JavaScript which needs to be compiled.
Sigh… and Flow is?
-It’s a static type checker made by some guys at Facebook. They coded it in OCaml, because functional programming is awesome.
OCaml? Functional programming?
-It’s what the cool kids use nowadays man, you know, 2016? Functional programming? High order functions? Currying? Pure functions?
I have no idea what you just said.
-No one does at the beginning. Look, you just need to know that functional programming is better than OOP and that’s what we should be using in 2016.
Wait, I learned OOP in college, I thought that was good?
-So was Java before being bought by Oracle. I mean, OOP was good back in the days, and it still has its uses today, but now everyone is realising modifying states is equivalent to kicking babies, so now everyone is moving to immutable objects and functional programming. Haskell guys have been calling it for years, -and don’t get me started with the Elm guys- but luckily in the web now we have libraries like Ramda that allow us to use functional programming in plain JavaScript.
Are you just dropping names for the sake of it? What the hell is Ramnda?
-No. Ramda. Like Lambda. You know, that David Chambers’ library?
David who?
-David Chambers. Cool guy. Plays a mean Coup game. One of the contributors for Ramda. You should also check Erik Meijer if you are serious about learning functional programming.
And Erik Meijer is…?
-Functional programming guy as well. Awesome guy. He has a bunch of presentations where he trashes Agile while using this weird coloured shirt. You should also check some of the stuff from Tj, Jash Kenas, Sindre Sorhus, Paul Irish, Addy Osmani-
Ok. I’m going to stop you there. All that is good and fine, but I think all that is just so complicated and unnecessary for just fetching data and displaying it. I’m pretty sure I don’t need to know these people or learn all those things to create a table with dynamic data. Let’s get back to React. How can I fetch the data from the server with React?
-Well, you actually don’t fetch the data with React, you just display the data with React.
Oh, damn me. So what do you use to fetch the data?
-You use Fetch to fetch the data from the server.
I’m sorry? You use Fetch to fetch the data? Whoever is naming those things needs a thesaurus.
-I know, right? Fetch is the name of the native implementation for performing XMLHttpRequests against a server.
Oh, so AJAX.
-AJAX is just the use of XMLHttpRequests. But sure. Fetch allows you to do AJAX based on promises, which you can then resolve to avoid callback hell.
Callback hell?
-Yeah. Every time you perform an asynchronous request against the server, you need to wait for its response, which then makes you add a function within a function, which is called the callback pyramid from hell.
Oh, Ok. And this promise thing solves it?
-Indeed. By manipulating your callbacks through promises, you can write easier to understand code, mock and test them, as well as perform simultaneous requests at once and wait until all of them are loaded.
And that can be done with Fetch?
-Yes, but only if your user uses an evergreen browser, otherwise you need to include a Fetch polyfill or use Request, Bluebird or Axios.
How many libraries do I need to know, for god’s sake? How many of them are there?
-It’s JavaScript. There has to be thousands of libraries that all do the same thing. We know libraries, in fact, we have the best libraries. Our libraries are huuuge, and sometimes we include pictures of Guy Fieri in them.
Did you just say Guy Fieri? Let’s get this over with. What do these Bluebird, Request and Axios libraries do?
-They are libraries to perform XMLHttpRequests that return promises.
Didn’t jQuery’s AJAX method start to return promises as well?
-We don’t use the “J” word in 2016 anymore. Just use Fetch, and polyfill it when it’s not in a browser or use Bluebird, Request or Axios instead. Then manage the promise with await within an async function and boom, you have proper control flow.
It’s the third time you mention await but I have no idea what it is.
-Await allows you to block an asynchronous call, allowing you to have better control over when the data is being fetched and overall increasing code readability. It’s awesome; you just need to make sure you add the stage-3 preset in Babel, or use the syntax-async-functions and transform-async-to-generator plugins.
This is insane.
-No, insane is the fact you need to precompile Typescript code and then transpile it with Babel to use await.
Wat? It’s not included in Typescript?
-It will in the next version, but as of version 1.7 it only targets ES6, so if you want to use await in the browser, first you need to compile your Typescript code targeting ES6 and then Babel that shit up to target ES5.
At this point I don’t know what to say.
-Look, it’s easy. Code everything in Typescript. All modules that use Fetch compile them to target ES6, transpile them with Babel on a stage-3 preset, and load them with SystemJS. If you don’t have Fetch, polyfill it, or use Bluebird, Request or Axios, and handle all your promises with await.
We have very different definitions of easy. So, with that ritual I’ve finally fetched the data, and now I can display it with React, right?
-Is your application going to handle any state changes?
Err, I don’t think so. I just need to display the data.
-Oh, thank god. Otherwise I would have had to explain Flux to you, and implementations like Flummox, Alt and Fluxible. Although, to be honest, you should be using Redux.
I’m going to just fly over those names. Again, I just need to display data.
-Oh, if you are just displaying the data, you didn’t need React to begin with. You would have been fine with a templating engine.
Are you kidding me? Do you think this is funny? Is that how you treat your loved ones?
-I was just explaining what you could use.
Stop. Just stop.
-I mean, even if it’s just using templating engine, I would still use a Typescript + SystemJS + Babel combo if I were you.
I need to display data on a page, not perform Sub Zero’s original MK fatality. Just tell me what templating engine to use and I’ll take it from there.
-There are a lot. Which one are you familiar with?
Ugh, can’t remember the name. It was a long time ago.
-jTemplates? jQote? PURE?
Err, doesn’t ring a bell. Another one?
-Transparency? JSRender? MarkupJS? KnockoutJS? That one had two-way binding.
Another one?
-PlatesJS? jQuery-tmpl? Handlebars? Some people still use it.
Maybe. Are there similar to that last one?
-Mustache, underscore? I think now even lodash has one to be honest, but those are kind of 2014.
Err.. maybe it was newer.
-Jade? DustJS?
No…
-DotJS? EJS?
No…
-Nunjucks? ECT?
No…
-Mah, no one likes Coffeescript syntax anyway. Jade?
No, you already said Jade.
-I meant Pug. I meant Jade. I mean, Jade is now Pug.
Sigh. No. Can’t remember. Which one would you use?
-Probably just ES6 native template strings.
Let me guess. And that requires ES6.
Which, depending on what browser I’m using needs Babel.
Which, if I want to include without adding the entire core library, I need to load it as a module from npm.
Which, requires Browserify, or Webpack, or most likely that other thing called SystemJS.
Which, unless it’s Webpack, ideally should be managed by a task runner.
But, since I should be using functional programming and typed languages I first need to pre-compile Typescript or add this Flow thingy.
And then send that to Babel if I want to use await.
So I can then use Fetch, promises, and control flow and all that magic.
-Just don’t forget to polyfill Fetch if it’s not supported, Safari still can’t handle it.
You know what. I think we are done here. Actually, I think I’m done. I’m done with the web, I’m done with JavaScript altogether.
-That’s fine, in a few years we all are going to be coding in Elm or WebAssembly.
I’m just going to move back to the backend. I just can’t handle these many changes and versions and editions and compilers and transpilers. The JavaScript community is insane if it thinks anyone can keep up with this.
-I hear you. You should try the Python community then.
-Ever heard of Python 3?