Using Axe & React-axe to audit your web application’s accessibility

Web accessibility is one of the key and most often missed parts of web development. If you are building a website for a larger, general audience, you have to make sure your pages comply with web accessibility standards (the best known being WCAG).

Making sure your website is accessible is no small task. There are obvious steps you can take, but the real accessibility issues are hard to understand and pinpoint until a real user with a disability visits your site using a screen reader or another assistive tool. To do it right, most companies work with audit firms experienced in testing sites with variations of these tools, covering as many real accessibility scenarios as they can. But getting your site audited for accessibility will also cost you.

Web accessibility compliance is also a mandate for websites serving certain industries (like government, insurance, banking…), where you are obliged to keep your site accessible at all times. For most consumer sites it is not a requirement, but rather work that makes your site and brand more inclusive of all users.

I want to talk about a browser extension (and a React library) that programmatically detects the obvious, easy-to-find issues, so you can address them quickly and cover the majority of basic accessibility problems on your pages as a quick win.

Axe

Axe is both a product and a service: it offers professional solutions as well as free browser extensions (Chrome, Firefox, and MS Edge) that are very easy to install and activate, letting you immediately see your page’s accessibility compatibility and issues.

Visit https://www.deque.com/axe/ to learn more and install the browser extension. The extension is pretty straightforward to use: it runs on the page you have open and shows each issue, an explanation of what the issue is, and how to solve it to make your page more accessible.

React-axe

There is also a React npm package that you can activate in your development environment, which audits the final rendered DOM tree similarly to the browser extension.

https://www.npmjs.com/package/@axe-core/react
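Wiring it up takes only a few lines in your app’s entry point. Here is a minimal sketch: the `enableAxeAudit` helper name and the browser/environment guard are my own choices, while the `axe(React, ReactDOM, 1000)` call is the package’s documented entry point (1000 ms is the debounce before audits re-run after a render):

```javascript
// Sketch: enable @axe-core/react audits in development builds only.
function enableAxeAudit(React, ReactDOM) {
  // Only audit dev builds, and only in a real browser (axe needs a DOM).
  const shouldAudit =
    process.env.NODE_ENV !== 'production' && typeof window !== 'undefined';
  if (shouldAudit) {
    const axe = require('@axe-core/react'); // npm i --save-dev @axe-core/react
    axe(React, ReactDOM, 1000); // re-audit 1s after each render; logs to console
  }
  return shouldAudit;
}

// In your entry file, before rendering your app:
// enableAxeAudit(require('react'), require('react-dom'));
```

Findings show up in the browser devtools console as you navigate your app, so you catch issues while developing rather than in an audit later.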

Google Sheets + Zapier is a perfect gateway for quick integrations when bootstrapping a new tool/service

When bootstrapping a new product, regardless of the platform and stack used on the back end and front end, the moment comes very quickly when you need to integrate with 3rd-party platforms to create continuity in the product’s user experience across different solutions.

As a good example, let’s say you bootstrapped a small SaaS product that helps users calculate their taxes. Your product is not only the software you created but the whole experience, from customer support to documentation and educational materials, and perhaps some marketing experience when acquiring and onboarding new users. So right off the bat, we will need a customer support solution and a marketing tool, and perhaps a CRM-ish tool to use as our “customer master” database, where we will want to channel everything as much as we can.

But when someone signs up, your back end only creates their user account; their customer support record, CRM record, and marketing tracking are not connected. Most likely, these will live in separate services like Intercom, Zendesk, Mailchimp, etc., alongside your own back end/database where your users’ initial records are created and their engagement with your core product happens.

I have planned and done these integrations many times over in different products and worked with many 3rd-party services. Some niche solutions I had to integrate don’t have proper APIs or capabilities. Setting those exceptions aside, most tools have integrations with well-known platforms like Salesforce, Facebook Ads, IFTTT, and Slack. And as a common and growing theme, most tools also integrate with Zapier, which is the main event I want to get to.

Eventually, I found myself evaluating Zapier integrations between these platforms to cover most of the use cases we would otherwise spend days building as one-off integrations. When the available triggers and actions cover what we are trying to do, I now suggest my clients and the rest of my team create Zapier-focused integrations.

There is an even easier way. A big majority of people working in the process/product/team management space use spreadsheets on a daily basis, and either Excel or Google Sheets covers most of those use cases. I evangelize Google Sheets because of its real-time collaboration and ease of access. It’s free, and with so many people having Google accounts, it is very universal.

I have done direct Google Sheets integrations many times in the past. But recently I have come to like the concept of using a Google Sheet as a source that other services can commonly consume for integration purposes. Since it’s a living document, it’s very easy to make changes on it, or to listen to changes happening on it (made by humans or APIs). This makes it an amazing candidate to use with Zapier as a “source” of data. Zapier becomes the magic glue here, a universal adapter to anything else we want to connect to. With thousands of services available in Zapier, it is a meeting ground for moving the data we provide through a Google Sheet to anywhere else.

I should say this will be limited by each service’s capabilities and the actions/triggers available on the Zapier platform. But most SaaS solutions invest enough effort and time to make their Zapier integrations rich enough to serve the most common use cases. It won’t cover 100% of needs, but it will certainly eliminate a lot of basic integrations like Slack and email notifications or marketing tool triggers (e.g., follow-up campaigns).

This is not a code-less solution

When going down this route, the biggest work and challenge will be integrating the Google Sheets API: connecting your account (through the OAuth process), storing your credentials on your server, and creating the server → gsheet integration that sends your back-end changes to a Google Sheets document. It’s not the easiest API to integrate with, but it’s well documented, very mature, and has endless examples in the community (GitHub). Best of all, this one integration opens up many more without any further integration work. Even in the most basic products, we find ourselves building Slack and email deliveries into MVP versions; investing the same effort in Google Sheets will easily justify itself later.
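As a sketch of that server → Sheets direction, assuming you already have an authorized Sheets API client: the `SHEET_ID` environment variable, the `Sheet1!A:D` range, and the id/email/name/signup-time column layout are all made-up examples, while `spreadsheets.values.append` is the Sheets v4 API call for adding rows.

```javascript
// Sketch: push each new sign-up into a Google Sheet so Zapier can react to it.

// Pure helper: flatten a user record into the sheet's row layout.
function userToRow(user) {
  return [
    user.id,
    user.email,
    user.name || '',
    new Date(user.signedUpAt).toISOString(),
  ];
}

// Network part: append the row to the worksheet Zapier is watching.
async function appendSignup(auth, user) {
  const { google } = require('googleapis'); // npm i googleapis
  const sheets = google.sheets({ version: 'v4', auth });
  await sheets.spreadsheets.values.append({
    spreadsheetId: process.env.SHEET_ID,
    range: 'Sheet1!A:D',
    valueInputOption: 'RAW',
    requestBody: { values: [userToRow(user)] },
  });
}
```

Call `appendSignup` right after your back end creates the user account; the “New Spreadsheet Row” trigger described below picks it up from there.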

Trade offs

One big trade-off is that your users’ PII is transported to and stored in a Google Sheet (which will be private) and then sent to Zapier. If you are especially cautious or have to comply with certain privacy regulations, managing this traffic may need to be done more carefully, or may be completely unfeasible for your product. But the majority of products I have built do not need that rigorous an audit and compliance process, so this solution has worked for me many times.

Example

I want to show a sample integration that sets up a Google Sheet as a trigger and a Slack notification as an action. Hopefully this sparks some imagination and helps you understand where this can go.

Set up Google Sheet changes as “trigger”

Create a new zap, or edit an existing one to change the “trigger” service. Select Google Sheets.
In the first step, you will be asked to select the Google account linked to your Zapier account. If you haven’t linked one yet, or want to connect a different account than the one currently linked, you can do it in this step.

After selecting the account, Zapier will ask which event you want this zap to listen for. Generally, we will be injecting a new row into a sheet in one of our documents, so we select “New Spreadsheet Row” as the event. But as you can see, you can select other events, like a spreadsheet row being updated or a new worksheet being created in a document.

Now you will need to select which document and which worksheet to listen to. Zapier will show document and sheet selection dropdowns here.

As the final step, you will be able (and pretty much have to) to test your trigger, which pulls a sample row from your sheet. Make sure you enter values in your columns so this sample data can be used to set up your subsequent actions; Zapier will show these sample values when you create actions that use them.

Set up Slack as “action” to send a message to a channel

Now we’ll use this trigger in any service we want. We can also create multiple actions, where one zap sends an email and a Slack notification and creates a new Intercom customer record at the same time.

For this example, in the “action” section we will select Slack service when asked.

First, we will select the type of “action” we want to perform: “Send Channel Message”. You can also pick other actions, like sending a direct message.

Then, similar to Google sheet initial steps, we will first select the slack account we want to use.

And finally, among a lot of options, we will set up the sender name, avatar, and other details, but most importantly, the channel we want the message to be sent to and the message content itself:

Zapier makes it pretty intuitive and simple to construct smart content areas like this one. You can type a static message as well as insert actual data (variables) from your source; in this example, our source is the Google Sheets document. When you construct a message with dynamic parts, you will see a searchable dropdown for finding the column value you want to insert.

Once everything is done, you will be able to finish this step, and you will be asked to test the action you just set up. And all done! Don’t forget to turn the zap “on”.

This is just the simplest example I could use. There are many more use cases where this integration can push changes/data into the thousands of services available in Zapier.

Happy Zaps!

Create a website from Notion document collection with custom subdomain via Fruition in 10 minutes


You guys know how much I love thinking, talking, and geeking about ways to smartly preserve our knowledge in form of written communication. I talked a lot about written communication and making our digital documents smarter before.

Although Notion is not my primary knowledge base or note-taking tool, I really love certain aspects of how Notion functions. Probably its best feature among other tools is that you can publish any document publicly, and the output it creates is very minimal and clean. The second best thing about Notion is its support for almost any type of web-embed code; with this, you can embed practically any interactive widget or content piece in your documents. In effect, Notion makes the documents you publish online pretty much fully-fledged web pages.

This made Notion an attraction point for a lot of people creating microsites and sub-sites, and for plain content written in Notion and used in existing websites.

Today, I want to talk about Notion enthusiast Stephen’s guide and mini-project, which let us set up a custom domain/subdomain for our Notion documents.

Stephen’s code runs on serverless Cloudflare Workers. This enables a few customizations, like a dark/light mode toggle on your page, as well as nice URLs (slugs) when you set up your Notion document with the worker code. It’s a pretty simple, almost no-code solution. In fact, you don’t have to worry about the code: Stephen created a mini UI that lets you customize your configuration while setting it up. It takes about 5-10 minutes to set up, but it’s worth it.

Give it a try

Check out the step by step tutorial on the project site: https://fruitionsite.com/

You can also jump right into the video tutorial:

I love that this method lets you spin up a super-fast website that you can continuously edit/update from Notion, and it costs nothing.

Get Google Sheets document content as JSON without Google API oAuth

Use an existing Google Sheets document, or create a new one and add your content. Once you are done:

example google sheet document

Click the “Share” button in the top right corner:

google sheet share dialog

Link sharing is not enabled by default.

In the “Get Link” section, click “Change to anyone with the link”

google sheet share dialog with public link

 Copy the link and click Done to close this pop-up.

Now, as the second step on the Google Sheets side, we have to publish the document to the web. To do that, click “File > Publish to the web” in the menu.

google sheet file menu

Then publish your document in the publish pop-up:

google sheet publish to web dialog

We’re done on the Google Sheets side. Now we’ll get the document ID from the URL we copied and reformat it into the JSON output URL.

https://docs.google.com/spreadsheets/d/1yxaigtzh48EV8sJioJIZJz_HovKQ6OJH6BLq1alA4GI/edit?usp=sharing

Get the document id from the link. For the example sheet I created (the link above), the document id is:

1yxaigtzh48EV8sJioJIZJz_HovKQ6OJH6BLq1alA4GI

Now, let’s construct our JSON url using the document id:

https://spreadsheets.google.com/feeds/cells/_YOUR_SHEET_ID_/_SHEET_NUMBER_/public/full?alt=json

As you can see, aside from the document ID, you need to provide a sheet index: the number of the sheet tab you want returned as a JSON object, entered into the URL template above.

My sample document has just one tab, so the tab index is “1”. The final URL for this example is:

https://spreadsheets.google.com/feeds/cells/1yxaigtzh48EV8sJioJIZJz_HovKQ6OJH6BLq1alA4GI/1/public/full?alt=json

Now you can access the content of the sheet as a flattened object.

{
  "version": "1.0",
  "encoding": "UTF-8",
  "feed": {
    "xmlns": "http://www.w3.org/2005/Atom",
    "xmlns$openSearch": "http://a9.com/-/spec/opensearchrss/1.0/",
    "xmlns$batch": "http://schemas.google.com/gdata/batch",
    "xmlns$gs": "http://schemas.google.com/spreadsheets/2006",
    "id": {
      "$t": "https://spreadsheets.google.com/feeds/cells/1yxaigtzh48EV8sJioJIZJz_HovKQ6OJH6BLq1alA4GI/1/public/full"
    },
    "updated": {
      "$t": "2021-04-28T16:11:58.672Z"
    },
    "category": [
      {
        "scheme": "http://schemas.google.com/spreadsheets/2006",
        "term": "http://schemas.google.com/spreadsheets/2006#cell"
      }
    ],
    "title": {
      "type": "text",
      "$t": "Sheet1"
    },
    "link": [
      {
        "rel": "alternate",
        "type": "application/atom+xml",
        "href": "https://docs.google.com/spreadsheets/d/1yxaigtzh48EV8sJioJIZJz_HovKQ6OJH6BLq1alA4GI/pubhtml"
      },
      {
        "rel": "http://schemas.google.com/g/2005#feed",
        "type": "application/atom+xml",
        "href": "https://spreadsheets.google.com/feeds/cells/1yxaigtzh48EV8sJioJIZJz_HovKQ6OJH6BLq1alA4GI/1/public/full"
      },
      {
        "rel": "http://schemas.google.com/g/2005#post",
        "type": "application/atom+xml",
        "href": "https://spreadsheets.google.com/feeds/cells/1yxaigtzh48EV8sJioJIZJz_HovKQ6OJH6BLq1alA4GI/1/public/full"
      },
      {
        "rel": "http://schemas.google.com/g/2005#batch",
        "type": "application/atom+xml",
        "href": "https://spreadsheets.google.com/feeds/cells/1yxaigtzh48EV8sJioJIZJz_HovKQ6OJH6BLq1alA4GI/1/public/full/batch"
      },
      {
        "rel": "self",
        "type": "application/atom+xml",
        "href": "https://spreadsheets.google.com/feeds/cells/1yxaigtzh48EV8sJioJIZJz_HovKQ6OJH6BLq1alA4GI/1/public/full?alt=json"
      }
    ],
    "author": [
      {
        "name": {
          "$t": "..."
        },
        "email": {
          "$t": "..."
        }
      }
    ],
    "openSearch$totalResults": {
      "$t": "4"
    },
    "openSearch$startIndex": {
      "$t": "1"
    },
    "gs$rowCount": {
      "$t": "1000"
    },
    "gs$colCount": {
      "$t": "26"
    },
    "entry": [
      {
        "id": {
          "$t": "https://spreadsheets.google.com/feeds/cells/1yxaigtzh48EV8sJioJIZJz_HovKQ6OJH6BLq1alA4GI/1/public/full/R1C1"
        },
        "updated": {
          "$t": "2021-04-28T16:11:58.672Z"
        },
        "category": [
          {
            "scheme": "http://schemas.google.com/spreadsheets/2006",
            "term": "http://schemas.google.com/spreadsheets/2006#cell"
          }
        ],
        "title": {
          "type": "text",
          "$t": "A1"
        },
        "content": {
          "type": "text",
          "$t": "A1-Test"
        },
        "link": [
          {
            "rel": "self",
            "type": "application/atom+xml",
            "href": "https://spreadsheets.google.com/feeds/cells/1yxaigtzh48EV8sJioJIZJz_HovKQ6OJH6BLq1alA4GI/1/public/full/R1C1"
          }
        ],
        "gs$cell": {
          "row": "1",
          "col": "1",
          "inputValue": "A1-Test",
          "$t": "A1-Test"
        }
      },
      {
        "id": {
          "$t": "https://spreadsheets.google.com/feeds/cells/1yxaigtzh48EV8sJioJIZJz_HovKQ6OJH6BLq1alA4GI/1/public/full/R1C2"
        },
        "updated": {
          "$t": "2021-04-28T16:11:58.672Z"
        },
        "category": [
          {
            "scheme": "http://schemas.google.com/spreadsheets/2006",
            "term": "http://schemas.google.com/spreadsheets/2006#cell"
          }
        ],
        "title": {
          "type": "text",
          "$t": "B1"
        },
        "content": {
          "type": "text",
          "$t": "B1-Test"
        },
        "link": [
          {
            "rel": "self",
            "type": "application/atom+xml",
            "href": "https://spreadsheets.google.com/feeds/cells/1yxaigtzh48EV8sJioJIZJz_HovKQ6OJH6BLq1alA4GI/1/public/full/R1C2"
          }
        ],
        "gs$cell": {
          "row": "1",
          "col": "2",
          "inputValue": "B1-Test",
          "$t": "B1-Test"
        }
      },
      {
        "id": {
          "$t": "https://spreadsheets.google.com/feeds/cells/1yxaigtzh48EV8sJioJIZJz_HovKQ6OJH6BLq1alA4GI/1/public/full/R2C1"
        },
        "updated": {
          "$t": "2021-04-28T16:11:58.672Z"
        },
        "category": [
          {
            "scheme": "http://schemas.google.com/spreadsheets/2006",
            "term": "http://schemas.google.com/spreadsheets/2006#cell"
          }
        ],
        "title": {
          "type": "text",
          "$t": "A2"
        },
        "content": {
          "type": "text",
          "$t": "A2-Test"
        },
        "link": [
          {
            "rel": "self",
            "type": "application/atom+xml",
            "href": "https://spreadsheets.google.com/feeds/cells/1yxaigtzh48EV8sJioJIZJz_HovKQ6OJH6BLq1alA4GI/1/public/full/R2C1"
          }
        ],
        "gs$cell": {
          "row": "2",
          "col": "1",
          "inputValue": "A2-Test",
          "$t": "A2-Test"
        }
      },
      {
        "id": {
          "$t": "https://spreadsheets.google.com/feeds/cells/1yxaigtzh48EV8sJioJIZJz_HovKQ6OJH6BLq1alA4GI/1/public/full/R2C2"
        },
        "updated": {
          "$t": "2021-04-28T16:11:58.672Z"
        },
        "category": [
          {
            "scheme": "http://schemas.google.com/spreadsheets/2006",
            "term": "http://schemas.google.com/spreadsheets/2006#cell"
          }
        ],
        "title": {
          "type": "text",
          "$t": "B2"
        },
        "content": {
          "type": "text",
          "$t": "B2-Test"
        },
        "link": [
          {
            "rel": "self",
            "type": "application/atom+xml",
            "href": "https://spreadsheets.google.com/feeds/cells/1yxaigtzh48EV8sJioJIZJz_HovKQ6OJH6BLq1alA4GI/1/public/full/R2C2"
          }
        ],
        "gs$cell": {
          "row": "2",
          "col": "2",
          "inputValue": "B2-Test",
          "$t": "B2-Test"
        }
      }
    ]
  }
}

You can construct a matrix in your integration using the following JavaScript path into the large JSON response:

feed.entry[].gs$cell

This sub-object contains the row, the column, and the text value of each cell. If a cell has a formula, you can also get the raw entry of the cell (inputValue) from this object.
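A minimal sketch of turning that feed into a 2-D matrix; `entriesToMatrix` and `fetchSheetMatrix` are hypothetical helper names, and the URL template matches the one above (the fetch part assumes a runtime with a global `fetch`, i.e. a browser or Node 18+):

```javascript
// Sketch: rebuild a row/column matrix from the flattened cell feed.
function entriesToMatrix(feed) {
  const matrix = [];
  for (const entry of feed.entry || []) {
    const cell = entry.gs$cell; // { row, col, inputValue, $t }
    const row = Number(cell.row) - 1; // the feed is 1-indexed
    const col = Number(cell.col) - 1;
    if (!matrix[row]) matrix[row] = [];
    matrix[row][col] = cell.$t; // displayed/computed value
  }
  return matrix;
}

// Fetch side, using the same URL template as above.
async function fetchSheetMatrix(sheetId, sheetIndex = 1) {
  const url = `https://spreadsheets.google.com/feeds/cells/${sheetId}/${sheetIndex}/public/full?alt=json`;
  const res = await fetch(url);
  const json = await res.json();
  return entriesToMatrix(json.feed);
}
```

For the sample document above, this yields a 2×2 matrix of the four test cells.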

Note: When you hit your constructed JSON URL, if you get the error below, make sure you published your document publicly.

google sheets not published document error

Monitoring your microservice stack with simple ping health checks using Healthchecks.io for free

When planning, designing, and implementing the infrastructure of a product, the most common back-end pattern we follow is a distributed model built on microservices. But this model comes with a cost: monitoring, alerting, and maintenance. Each microservice can live in its own container with its own dependencies, and we often end up with too many moving parts. Even a very simple app can have always-running services, scheduled jobs, and workers distributed across many different infrastructure elements. And the rise of containerized applications and tasks lets us build our products from even more micro-scale pieces of code running on serverless infrastructure.

There are many solutions for monitoring and alerting when a service stops running properly, but it’s often difficult to centralize the monitoring.

There is a very simple yet effective approach I like: seeding a minimal integration into most of my microservice components, whether they are servers, scripts doing a cleanup, or backups. It’s very similar to ping checks, but a bit more simplified and universal. Ping checks generally require parts of your code to be publicly available and/or to serve a status over HTTP/TCP/UDP, which may not be possible depending on how your component runs.

In this post, I’d like to focus on open-source software that you can easily set up and run as your own instance for free. They also provide a SaaS version that you can get started with for free or at a low cost.


The principle of this service is very simple. Essentially, you create a “check”, and the software expects a ping from your component at regular intervals. You can set what these intervals are, group/organize your checks, and set up alerts in case a check fails to report (ping) within a grace period you can adjust.


The service can integrate with many mainstream ops-support/escalation services like PagerDuty and Opsgenie, or simpler channels like Slack, email, or SMS via Twilio for basic notifications.

Check out the service, their open-source software page, and their documentation here: healthchecks.io

Hosting your own private npm packages with self-hosted npm registry using Verdaccio

At Nomad Interactive, we are relatively new to contributing our own packages to npm. Initially, we packaged things that are common throughout our projects, like ESLint configurations or some generators we created for ourselves, as private repositories, without thinking too much about versioning or supporting multiple versions of, let’s say, ESLint or React.js. Then it became clear that we had to version our packages properly. So we started researching how we could easily spin up a private registry of our own and distribute our packages privately to authenticated users.

We found Verdaccio, an open-source and lightweight npm registry. We were able to spin up an instance on our Docker swarm in minutes and start pushing the initial versions of our private packages.

https://verdaccio.org/

If you need a private registry for your team/company, Verdaccio is definitely a great, quick, and easy solution to get started with.
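For reference, here is roughly what getting started looks like; the Docker image name and port 4873 are Verdaccio’s defaults, and the localhost registry URL is just an example:

```shell
# Sketch: run a throwaway Verdaccio instance with its official Docker image.
docker run -d --name verdaccio -p 4873:4873 verdaccio/verdaccio

# Point npm at your registry, create a user, and publish a package to it:
npm adduser --registry http://localhost:4873
npm publish --registry http://localhost:4873
```

You can also set the registry per scope (e.g. for your company’s `@scope`) so public packages keep resolving from npmjs.org.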

Eventually, though, we switched our mindset from “why private packages anyway?” to “let’s share everything we do with the world.” We questioned whether we even needed a private package at all, and the answer was no: nothing we do is a government secret, or any secret at all. So we converted them to public packages and started pushing them to npm directly. The caveat is that we haven’t put any real open-source software management thinking behind these packages, so if we receive a pull request, we will most likely neglect it (even though I’ll be very appreciative of it).

One thing I have to say: a private npm registry occasionally became painful to get working properly in our CI/CD pipelines due to authentication. Verdaccio doesn’t support registry tokens, or tokenized access separate from users, so you have to create a dedicated user for your registry. And if you are in a licensed environment like gitlab.com or GitHub, you may be forced to pay for a user that only CI/CD will use. Not a big deal, but a side note we had to deal with…

How to use Genymotion for Android testing (simulator/emulator) on macOS or Windows

If you are in mobile development, particularly Android development, you have probably heard of or used Genymotion’s emulator.

Genymotion is a company (and product) that runs Android OS in a virtual machine to simulate the Android runtime with the basic apps the bare OS comes with. By default, it doesn’t have Google services and APIs installed or enabled, but they can be installed on top of the bare Android OS image that you pick and download from Genymotion’s repository.

These days, the official Android emulator is not bad at all, but back in the day it was terribly slow and made certain things difficult. It was also designed solely for Android app developers comfortable supplementing the emulator from the command line. In those days, Genymotion was the superior simulator, and for those reasons it was everybody’s first choice for running Android apps during development.

Aside from Android app development, I extensively used Genymotion to test my mobile web work in native Chrome for Android and a few other browsers. Today there are many cloud services we can use to test our web work on real devices across many OS/browser version variations, so this use case is no longer as relevant. But I still like the idea of an easy-to-configure Android environment I can run when I need it, and for this purpose I suggest Genymotion as one of the best solutions out there. They have changed their licensing model a lot, so I don’t know the latest, but I was (and still am) able to use Genymotion freely for personal use cases (which covers all of mine: personal projects and such).

Genymotion player 3.0

For some time, I also experimented with Genymotion to run a custom-size tablet with Google services, APIs, and Google Play installed, so I could use mobile apps that have a nicer user experience than their desktop or browser counterparts. If you don’t mind spending a lot of RAM, this can be an interesting option for running mobile apps on your desktop. It’s not super intuitive to perform mobile gestures with a mouse or trackpad, but one can get used to it very quickly.

https://www.genymotion.com/

Using Vercel (formerly Zeit/now.sh) for super-fast deployments

I always love services that shorten a developer’s time from zero code to a live URL for a web application. The large majority of web applications I write, mostly for hobby topics, are very simple, not too complicated or crowded. So, being experimental by nature, I create a lot of small apps, most from scratch or from plain boilerplate code I created for myself for these experimental ideas.

Generally, when my code is ready to show to someone, I waste a lot of time on the boring steps of preparing the app to be visible somewhere. Thanks to services like Heroku, this is much less of a headache now; readers who follow me know I’ve shown my love for Heroku in multiple articles previously 🙂

I want to talk about another service that brings me joy by cutting a lot of the annoying time between bootstrapping an idea and making it live and ready for prime time. Now.sh is a delightful service I discovered when I needed static hosting for a React.js web app that I wanted to put up very quickly. For some apps, I don’t even want to create a Heroku git repository; I want an even faster turnaround between writing and publishing.

Now.sh lets my web apps go live with a single command. Now.sh is actually the name of the CLI tool for the parent service, “Vercel”. To install their command-line tool, simply install the “vercel” package globally from npm:

npm i -g vercel

Then first create your account on vercel and run 

vercel login

to log in to your account. Now you are good to go.

One command deployments for non-git projects – or automatic deployments for every commit

In your project folder, simply run

vercel

This command will automatically deploy your code (the files in the current folder) to Vercel under an automatically generated subdomain on “.now.sh”. If you want, you can attach your application to a custom domain of your own for free: https://vercel.com/docs/custom-domains

Instant API

Vercel/Now.sh also provides AWS Lambda-style “serverless” architecture. What I love about this model is that Vercel lets you write an API endpoint in JavaScript in a very, very simple way. You just create a JavaScript module file exporting a function with two arguments, req and res, closely mimicking Express’s req and res objects. So it’s familiar, and even simpler than creating an Express application and wiring it to a router. You simply create an “api” folder and add “hello.js”, which gives you the …deployment-url…/api/hello endpoint.

Here is an “echo” endpoint that returns what’s sent to it:

// api/echo.js — respond with whatever the request carried
module.exports = (req, res) => {
  res.json({
    body: req.body,
    query: req.query,
    cookies: req.cookies
  })
}

Save this as echo.js under the api folder, and you have yourself an endpoint 🙂
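Because the handler is just a plain function, you can even exercise it locally without the Vercel runtime. A small sketch: `mockRes` is a hypothetical test helper of mine that implements only the `res.json` call the echo handler uses.

```javascript
// Sketch: the echo handler from above, exercised with mock req/res objects.
const echo = (req, res) => {
  res.json({
    body: req.body,
    query: req.query,
    cookies: req.cookies,
  });
};

// Minimal stand-in for Vercel's res: captures whatever json() receives.
function mockRes() {
  const out = {};
  return { out, json: (payload) => Object.assign(out, payload) };
}

const res = mockRes();
echo({ body: null, query: { name: 'ada' }, cookies: {} }, res);
// res.out now holds the echoed payload
```

This makes serverless functions easy to unit-test before you ever deploy them.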

As simple as it looks, there are more advanced topics on this in vercel’s documentation: https://vercel.com/docs/serverless-functions/introduction

Github integration

Another great feature I really like is the direct and seamless integration with GitHub repositories. In the past, I experimented with very portable development environments, such as coding on an iPad. I find zero-configuration, zero-dependency development environments very attractive. In recent months, there was an occasion where I had to live on my iPad for 10 days and needed a quick way to code and deploy a web-based application with a few back-end capabilities, nothing complex. It was very hard and time-consuming to construct a remote development environment and continuously work on SSH-based remote platforms instead of the native platform I was used to.

Thankfully, Now.sh’s GitHub integration removed the deployment and build steps from my local environment. I still had to push very frequently to a remote git repository (GitHub) to make sure I wasn’t breaking my working app while moving along on the feature I was working on. But it had zero dependency on my local environment: I used my favorite editor, pushed code to GitHub, and the rest was taken care of by Now.sh. I really enjoyed its GitHub bot, which very responsively posted updates to commit and PR logs as comments, and I also got deployment changes in my Slack channels. So it was pretty instantaneous to make changes and have them live somewhere I could share with my team. More on their GitHub integration here: https://vercel.com/github

Final note

Vercel (formerly Zeit/Now.sh) is a great service both for bootstrapping and for making your app scale. They are also very transparent and open about their tooling, which makes moving out easy, so there is little fear of “lock-in”.

There are also a half dozen other beautiful features that I'm not covering in this article, worth checking out: https://vercel.com/

How Figma changed how we collaborate on our UX and UI designs


At Nomad Interactive, we have done a big design-toolkit migration twice in the past: from the old Adobe Photoshop/Illustrator era to Sketch, both times utilizing other tools in our collaborative process, like InVision and Zeplin. About a year ago, we made a similar transition to Figma.

Figma is a browser-based, real-time collaborative design tool. Being vector-based makes it very efficient in total document size. Vector objects are much more descriptive for the elements we create on artboards, which allows further extensibility via plugins. Also, a browser-based engine makes it friendly to web technologies, like a JavaScript-based API similar to what is already common with Sketch.

One tool to rule them all

We used Sketch to create our digital designs for years, plus InVision for presentation purposes. We had a particular process: export our designs, place them in Dropbox, then upload them to InVision with the same or a similar project list and configuration. For designer-to-developer handover, we started to use a beautiful startup called Zeplin. But InVision knocked them off pretty quickly, so we put some of our focus into adopting InVision's "Inspect" feature. Not long after, we moved to Figma, which replaced all three of these tools without any hesitation or question in our minds. When we gave Figma a try on a single project, it became clear very quickly that we didn't have to jump between tools for different purposes. Then we switched over.

Collaborate – Seriously

The best of what makes Figma different from other design tools is its real-time collaboration features: being able to see all viewers' and editors' cursors, and seeing design changes in real time. This is a paradigm shift similar to the one between a static-file-focused Word and its online, real-time collaborative alternatives, Google Docs or Quip. It makes the creation process much more like a whiteboard session if utilized well. We started to do collaborative design sessions on the same project with multiple designers, product managers/owners, and project managers. Not everybody designs, but everyone can actively collaborate on the design process, providing direction to the designers on the large artboards.

Not all good

This process change made the low-fi UX thinking process much more visible. It lets non-designers be more active participants in the earlier parts of the design process. It also results in wireframes being done very close to the actual designs (we're mostly talking about digital product design, like mobile application UIs or e-commerce sites). The danger is mixing these two phases of the design process, which are generally better kept separate for the sake of putting the mind on the right concerns at the right time.

We generally dedicate the wireframing period to baking the digital product's functionality-focused discussions and iterations, and the UI/creative design period to concerns about look and feel: colors, typography, animations, creating emotions, once we know how the product is meant to work. Figma's collaborative design features bring these two worlds closer together. There is a danger of mixing them up, so all of a sudden you will be hearing about button colors instead of what the button should say or do when interacted with.

A weird need: designing on mobile platforms (namely the iPad Pro)

I have a weird desire to make super-portable devices like the iPad my go-to device to carry around (I already do my "thinking"-oriented tasks on the iPad, like writing this article). But I have a burning desire for the iPad to be able to handle more complex tasks, like writing code (not just writing, but compiling, or having the runtimes for scripting languages; not yet). Or doing more complex design work, at least at the Sketch/Figma level. I'm not asking to be able to do render-heavy design tasks like Photoshop does. That is also not what I need or do 99% of the time.

I've got to say, Figma is the closest in that game, if this is a practical or real future need. We know Sketch's developers have said they will not port their macOS app to mobile platforms. And Adobe is taking a different (probably nicer, native) path, but a long one, to get their suite of applications onto mobile platforms; we're already done with Adobe products anyway. On the other hand, Figma practically runs without any issue in mobile Safari, though with a huge lack of touch and mobile interaction support. There are some attempts to make it better (e.g. the Figurative app). But there is still a road ahead before Figma is fully iPadOS compatible. I'm sure the Figma team is already working on this, and I hope that day comes sooner.

http://figma.com/

Using Airtable through its API programmatically, as (almost) remote database

I recently talked a lot about the importance of collaborative and smarter documentation tools that improve your personal and professional workflows. Airtable stands apart from its competitors in an interesting use case I found in one of my hobby projects, where, all of a sudden, I started using Airtable as a remote database tool.

Airtable is a very nice, mobile-friendly document management tool built on a spreadsheet-style base. You can model your data structure in any tabular form, and you can create different views of your data (a calendar view, a filtered table view, a kanban view, and so on).


What makes Airtable special for me is its API. Their API is very easy to get started with and access, because you get data-specific API documentation after you log in: it shows examples built from your own data right there in the dynamic documentation.


The Airtable API essentially makes Airtable usable as a remote database for small or personal projects. Managing your data happens in Airtable's clean and nice web or mobile interface, and you consume your data in any format you like on any platform.

If you need read-only access, implementing the Airtable API can be a matter of minutes, since the documentation gives access to your data very quickly. You only need to convert the curl request to your favorite platform's HTTP request. If you need a JavaScript version, it also produces Node.js example code that you can drop in to start using your data.


Write access is not very different from read-only. Your data model will be well documented in the dynamic API documentation for your table. You only need to start constructing your API requests and make the call.
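As a sketch of what those read and write calls look like in JavaScript (the base ID appXXXXXXXX, the Tasks table name, and the field names are placeholders for your own; the endpoint shape follows Airtable's REST API):

```javascript
const API_KEY = process.env.AIRTABLE_API_KEY; // from your Airtable account
const TABLE_URL = 'https://api.airtable.com/v0/appXXXXXXXX/Tasks'; // placeholder base id + table

// Read: list the records in the table
async function listRecords() {
  const res = await fetch(TABLE_URL, {
    headers: { Authorization: 'Bearer ' + API_KEY },
  });
  const data = await res.json();
  return data.records; // [{ id, fields: { ... } }, ...]
}

// Write: create a new record with the given fields
async function createRecord(fields) {
  const res = await fetch(TABLE_URL, {
    method: 'POST',
    headers: {
      Authorization: 'Bearer ' + API_KEY,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ records: [{ fields: fields }] }),
  });
  return res.json();
}
```

The exact request and response shapes for your table are spelled out in your account's auto-generated documentation.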

If you haven’t created an Airtable account and played with it, definitely do so at https://airtable.com/, and check out their auto-generated documentation at https://airtable.com/api (after you log in to your account).

Why every developer needs to know google sheets & excel programming

I’ve recently talked about different cloud documentation services in Smart(Er) Documents – Quip, Notion, Airtable, Coda Or Good Old GDocs & GSheets, and about my take on Smarting up Google Docs.

Let’s dive right into the few key reasons why every developer should know Google Apps Script and get familiar working with GAS in Google Sheets and Docs.

1) Provide your technical output (data) on common ground (a tool that is known by pretty much any computer user, or intuitive to learn if they don't know it)

I strongly believe in any tool that allows developers and non-developer roles on a team to communicate complicated information effectively. Generally these are data sets eventually used for a form of storytelling (website/blog content, an internal or external product performance report, a financial or behavioral analysis, etc.). When it's a developer-only crew, we always find 100 different ways to express what we want to show to the world, in the form of scripts utilizing whatever libraries and tools come to hand. But when it comes to handing over our output (whether it's a SQL result in CSV format, or a dynamic data set in a service), we as developers are constrained, and the party we're handing our work to starts their work with a lot of constraints too. They have to learn whatever data format we give them. This also makes the collaboration one-directional, starting from the developer and ending up with the non-developer role working with that data (marketing people, product management, or executive roles).

The real trouble starts when we have to repeat the same work over and over, because we can only export the desired data model from whatever tool we're using, generally as static CSV/Excel exports if it's tabular data. If you are doing the same or similar work multiple times, it's worth thinking of ways to automate the process. Let's consider simple family expense management in Excel/Google Sheets (because everybody knows, or can easily learn, spreadsheet tools). In this hypothetical scenario, let's say there is a database or API that provides your credit card statements (there are actually services for extracting this information from your bank, but for security reasons it gets very complicated at the authentication layer). As a developer, you can start the story from "extracting the data", and your output will almost always store the extracted data somewhere, most likely a database.

You may or may not be the "analyzing" person in the family, and even if you are, you may need to review your analysis with the rest of the family and get their own take on the data set for truly collaborative understanding and iteration.

Google Apps Script has many ways to connect to 3rd-party APIs to pull data and, if done well, run these operations automatically (refreshing the data set) or manually via UI interactions, essentially allowing the developer's role to be completely automated in a "smart" Google Sheet or Doc.

Once this is set up right, it can be copied and extended easily. It goes back to the traditional "macro-enabled Excel templates" approach, but it really works out well for everybody. And Google's docs products are truly real-time collaborative tools: you can literally open laptops on a conference call and collaboratively edit and work on the same information to bring it to an understandable level, with simple aggregate analyses, breakdowns, or even charts, without having to reinvent the wheel and code all of this up from scratch.

2) Mock up / bootstrap a lot of ideas in a simpler form without needing to code a lot. Perhaps these can even serve long-term use cases

As developers, more than half of what we do is automation. We code so we don't repeat ourselves. This is just a human desire: to invest our brains in avoiding time-consuming, less intellectual work. Our brains are wired that way from the time we're born; they always seek shortcuts. Developers are the hackers who get these shortcuts discovered and implemented in the digital realm.

Whether you work alone or with a team, you will always have tasks that take multiple steps to produce an output. To avoid repeating yourself, you will want to create tools that help you achieve the same output with fewer steps. We always use the buzzword "one-click" to reflect the magical robotic process when we refer to an "easy" tool. Regardless of whether the steps become one or many, there is always room for improvement in work we do every day.

Developers tend to jump into their own comfortable environments to create scripts that do things for them. I don't know any developer who doesn't have scripts for resizing images or orchestrating common tasks, so whenever we need to do the same operation, we just push the "one-click" button and see the output. Most of the time we invest a lot of time building these scripts, regardless of how proficient we are in the languages and environments we know. We also always have issues sharing this work with others on the team, or have to invest more time converting these scripts to be usable by teammates or publicly by anyone on the internet.

Google's docs products provide a great canvas along with a lot of limitations that positively shape what we create: whatever we build in Google Docs, Sheets, or Slides has to be in a familiar format that can be used by other developers or non-developer team members.

Just to acknowledge it: Google Apps Script has real limitations, generally if you are doing resource-heavy tasks on the cloud servers or in the browser. But it is also super easy to start in a very basic form, so you don't have to worry about a lot of cosmetics or look and feel.

3) Spreadsheet tools are essentially databases

Every developer has to know relational data modeling and how to work with data of any size and structure. At the core of relational databases, you work with tables and the relationships between them.

Google Sheets and Excel are great tools that can be used extensively by developers and non-developers together, because they are both visual tools and structured data by nature.

The reason both tools have programming interfaces is that it's one of their core use cases. I also think the companies behind them were pushed to build these programming features because they're probably among the most requested.

All in all, utilizing a spreadsheet file as a database (reading structured information from it, injecting new rows, or querying and updating rows) is one of the most common use cases from a programming perspective. So whether you like it or not, you will eventually work on a product whose users already are, or will be, using spreadsheet files alongside your product. You will at least have to learn how to process these files (import, ingest) and be able to expose your product's data in these formats (export features).

The best way to see how common this need is, is to look at the most popular automation tools on the internet, like IFTTT and Zapier: one of the first integrations they built was Google Sheets.

I personally have many Google Sheets files that are fed by IFTTT, or by workers I run on my servers that export activities daily/weekly/monthly or on a trigger. Anytime I want to see what's going on and analyze my activities (personal things like my driving history, leaving/entering the NYC area, or my credit card expenses), I can easily build a pivot table to see the trend, or group by categories or other angles to look at the same data with different questions. I can't even describe how many variations of this scenario exist for business purposes. Analytics by itself covers a long list of things you (or a non-developer team member) can and will want to do with data sets like these.

Wrapping Up

The last reason in the list is probably the strongest from a professional-skill perspective: you kind of have to know your way around both static Excel/CSV files and the Google Sheets API, from its authentication steps to the actual endpoints and methods for working with spreadsheets.

But #1 and #2 are more important if you are (or want to be) a resourceful problem solver. Every developer is, at the end of the day, a non-stop problem solver, and having Google Sheets in a developer's toolbox is a must.

Just launched my newest product on Product Hunt: Sheet2Cal – organize calendar events right from your Google Sheets doc.

Sheet2Cal helps you sync your event data from Google Sheets to your favorite calendar app. Plan and collaborate on your events (social media/editorial content, wedding and travel planning) freely in Google Sheets, and expose them easily as a calendar subscription using Sheet2Cal.

Sheet2Cal creates a complete calendar event using the information in any Google Sheets doc you want. It helps you schedule and update your calendar events (such as social media, wedding, and travel plans) right from your Google Sheets doc.

Back Story

At Nomad Interactive, we love to create tools that make our lives easier, and we share them with the world if we think they can make the same impact for others! Sheet2Cal was born out of such a need.

When uploading a shot to Dribbble, we did our content planning (title, description, etc.) in a Google Sheets document for team review and collaboration. In addition, we created a calendar event with similar information as a reminder to ourselves of when to publish each post on our shared calendars.

During the review process, we had to mirror every change we made in the planning document to the relevant calendar entry. To remove this unnecessary step, we developed Sheet2Cal and started doing our entire calendar organization from our main Google Sheets document, from creating an event to updating it. Then we realized this would work for many different subjects and decided to share it with everyone for free!

Now you can use Sheet2Cal as you wish for your own similar needs! If you have any problems or questions, we as a team would love to help!

To show your support, you can visit the Product Hunt page and give us a 💬 shout or an 👍 upvote!

You can also visit the project page here: https://sheet2cal.com/

Smarting up google docs and sheets

In this post, I’d like to talk about the growing experience of using Google Docs and Google Sheets for more complex needs and functions.

It’s been a recent theme: I’ve been talking about ways to use collaborative cloud services for documentation, whether for personal or business/team communication. I’ve also talked about going beyond plain written documents to smarter, more complex forms of information in the Smart(Er) Documents – Quip, Notion, Airtable, Coda Or Good Old GDocs & GSheets post.

I like Google Docs from an availability perspective: it's available without a complicated pricing structure, and Google keeps Docs open to personal Google accounts, so most people can access Google Docs even if they don't have company accounts (not using G Suite). I also like that Google Docs is visually familiar to all of us from Microsoft Office and other office suites (OpenOffice, LibreOffice). This may also make Google's docs products look outdated compared to more modern collaborative editing tools like Notion and Quip.

All of these services have powerful APIs, but probably none as robust as Google Docs'. Aside from its powerful API, Google Docs also has a secret weapon that I'd like to introduce lightly in this post: Google Apps Script. This is a very wide and under-discussed topic online that gives Google's docs tools a huge edge. I may focus on sub-topics of Google Apps Script down the road.

Google Apps Script

Google Apps Script is a scripting/automation feature of the larger Google product family, including Gmail, Google Calendar, Google Drive, and a few other important Google products you may already be using.

Google Apps Script runs in your browser (or on your mobile device) within the Google tool you're using. Scripts can register additional UI elements in the tools you use (such as a new menu item), watch/listen for changes on the documents you create (events like a row being updated in Google Sheets), or even map parts of your script to elements in the document content you are creating (buttons, dropdowns, checkboxes, etc.).

So far, nothing other tools won't do. Microsoft Excel and Word have had their famous macros for roughly 20 years or more, and almost all alternative software has some form of automation with similar capabilities. The real power of Google Apps Script comes when you combine its documents with Google's online/cloud tools like Google Drive, Google Maps, or Gmail. This makes documents interactive with other services, similar in spirit to using those services' APIs. One of the big things that makes me feel warmer toward Google's docs tools is having real-time collaboration in your docs; it makes the collaborative writing/editing experience superb. We often write content in real time as a team, and I find the conversational aspect of the collaborative work priceless.

Google Apps Script is based on JavaScript, so you write JavaScript snippets to define your custom functionality, with a large list of Google service APIs ready to use.

My favorite use cases for Google Apps Script

Pull a dataset from our internal services or public sources dynamically into Google Sheets

We use this most often with the analytics services we use internally. These sheets are generally reports we create once but often update with the latest version of the data.

Google Sheets already has IMPORTDATA and IMPORTXML functions that pull CSV- or XML-formatted data easily. But often we use a service we haven't built that doesn't expose its data as CSV or XML; often it's a REST API returning JSON. You can use a helper like https://github.com/bradjasper/ImportJSON, or create your own processor in Google Apps Script to pull the data and shape it the way you want. We often do the latter.
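A custom processor along those lines can be surprisingly small. This sketch assumes a hypothetical JSON endpoint and a sheet named "Data"; UrlFetchApp and SpreadsheetApp are Apps Script's built-in services:

```javascript
// Pull a JSON array from a REST API and write it into a sheet.
function importJsonToSheet() {
  var response = UrlFetchApp.fetch('https://api.example.com/report'); // hypothetical endpoint
  var rows = JSON.parse(response.getContentText()); // e.g. [{ name: ..., value: ... }]
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Data');
  sheet.clearContents();
  sheet.appendRow(['Name', 'Value']); // header row
  rows.forEach(function (row) {
    sheet.appendRow([row.name, row.value]);
  });
}
```

Hooked to a time-driven trigger or a custom menu item, this keeps the report refreshed without anyone exporting CSVs by hand.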

Add custom functions to Google Sheets

We use this a lot to create custom functions (generally pulling data from other cloud tools we use), like getting Trello card details (title, status, assignee), e.g. =TRELLO("eio3u48d"), or calling other public services, like getting the weather forecast for a zip code: =WEATHER("11222")
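A custom function is just a plain Apps Script function whose return value fills the cell. This WEATHER sketch is hypothetical (the API URL and response fields are made up); a real version would call whatever weather service you have access to:

```javascript
// Usage in a cell: =WEATHER("11222")
function WEATHER(zipCode) {
  var response = UrlFetchApp.fetch('https://api.example.com/forecast?zip=' + zipCode); // made-up API
  var data = JSON.parse(response.getContentText());
  return data.summary; // the returned value is what the cell displays
}
```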

Send emails from Google Docs or Sheets using your Gmail account

This goes into the automation of your workflow. As I mentioned above, with Google Apps Script you can map UI elements (menus, or a button-like element in the document content) to a custom trigger in your script that does something for you. We sometimes create sheets or docs containing form-like formats, or, in the Google Sheets scenario, an action to be taken by the user for each row of data. For this example's sake, think of a contact list with name, email, and thank-you-note columns. We use Google Apps Script to create a button-like action item in a column we define (say, the column next to the thank-you note) labeled "Send Thank You Note". With Google Apps Script, we can register this column to accept clicks and trigger a function. The function can then pull the clicked row number and that row's values for the name, email, and thank-you note. Then, with a few lines of code, we can use the Gmail service API (without complicated SDK installations and, more importantly, without dealing with authentication) to send an email to the recipient with the content we want (in this case, the thank-you-note column as the email body). This is a huge convenience compared to building out this capability in a service or custom code from scratch.
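A hedged sketch of that flow, assuming name, email, and note live in columns 1-3 of the clicked row; GmailApp is the built-in Gmail service, so no SDK or auth setup is needed:

```javascript
// Triggered from a button-like cell: read the active row and email its contents.
function sendThankYouNote() {
  var sheet = SpreadsheetApp.getActiveSheet();
  var row = sheet.getActiveCell().getRow();
  var name = sheet.getRange(row, 1).getValue();  // column layout is an assumption
  var email = sheet.getRange(row, 2).getValue();
  var note = sheet.getRange(row, 3).getValue();
  GmailApp.sendEmail(email, 'Thank you, ' + name + '!', note);
}
```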

Push Google Sheets data to your calendar and update it accordingly

Another great use case is pushing timeline-based planning to Google Calendar and keeping it updated. We do this in a similar fashion to the previous scenario, but utilizing the Google Calendar service in Google Apps Script.
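A sketch of that, assuming each row holds a title, a start, and an end date in the first three columns (CalendarApp is the built-in Calendar service):

```javascript
// Push every data row of the active sheet to the default calendar.
function syncSheetToCalendar() {
  var rows = SpreadsheetApp.getActiveSheet().getDataRange().getValues();
  var calendar = CalendarApp.getDefaultCalendar();
  rows.slice(1).forEach(function (row) { // skip the header row
    calendar.createEvent(row[0], new Date(row[1]), new Date(row[2]));
  });
}
```

Keeping events updated (rather than duplicated) takes a bit more bookkeeping, e.g. storing each created event's ID back into a sheet column.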

Wrapping up

There are many more interesting use cases for Google Apps Script, and many community-created and maintained lists and directories of great Google Apps Script examples and resources.

https://github.com/contributorpw/google-apps-script-awesome-list is a good start. The GitHub topic search is also worth a look: https://github.com/search?q=topic%3Agoogle-apps-script

The official documentation for Google Apps Script is here: https://developers.google.com/apps-script

Launch of my latest product Screenshot Tracker

🚨 I’ve just launched a new product (also on 😸 Product Hunt): 📸 Screenshot Tracker, a desktop tool that helps capture full web page screenshots in different resolutions at once.

Screenshot Tracker helps you to collect:

  • 🖼 Full-size screenshots for your web pages
  • 🎳 Multiple URLs at once
  • 📱 In multiple resolutions (desktop, tablet portrait, landscape, and mobile)
  • 🔁 Repeat/Retake screenshots

with one click!

Screenshot Tracker is great for designers and developers to test their work while iterating on a web page design and development.

And it’s 100% free & open source. PRs welcome!

Project Homepage: https://screenshot-tracker.nomadinteractive.co/

Also, give me your support on the Product Hunt page

Working with Memcache on your NodeJS app on Heroku

I wrote about Heroku in the past, showing my love for this amazing service. Heroku helps developers start writing and deploying apps much more easily than any other service. This allows developers to try things out and bootstrap their ideas in a matter of minutes. I talked more about Heroku in this article: https://mfyz.com/using-heroku-for-a-quick-development-environment/

Heroku Marketplace

I also mentioned before that Heroku has a 3rd-party add-on marketplace. There are all types of application infrastructure and cloud services, from fully hosted database services or CDNs to transactional email delivery services. The marketplace is filled with countless gems that are worth a weekend of exploration, and they all work with Heroku within a few clicks or commands.

Why cache? Why Memcache?

One of them, almost every distributed cloud application's primary need, is memory-based fast caching. There are different ways and models to store and use data from memory-based caches, but the most common is key-value storage with a TTL (time to live), which makes the stored data short-lived and regularly cleaned up.

The primary reason to use a memory-based cache is to optimize the performance of an application. In general terms, you cache frequently used data to cut time spent on database queries and disk reads, as long as you have reasonable confidence that the data you are caching does not change frequently within the short time it lives in the cache. Even when the data can change, there are simple ways to refresh the cache on "update" events to make sure the cached data is always valid while still cutting expensive operation time from the code flow.

As I mentioned, there are many different memory-based storage engines, protocols, and services. One of the simplest yet most effective and widely used engines is Memcache. It is an open-source tool that can be easily installed and hosted on any server; it runs as a daemon and simply opens a port to accept storage requests and queries. Memcache is designed to store and serve all of its data from the server's physical memory, so it is very fast and responsive when combined with a high-throughput network between the Memcache and application servers.

Heroku also supports multiple providers for Memcache and other memory-based storage engines. One particular service, Memcachier, is very seamless to create an instance of and set up on a Heroku application.

Memcachier

The Memcachier instance is free and can be added without any further account details on an external site, but it comes with very limited resources: only 25 MB of total cache size and a 10-concurrent-connection limit. Even though this number is small, for a Node.js application the connections will mostly be long-lived, so it will be more than enough if you are not planning to have many environments connecting at once. It is certainly enough to have a quick Memcache server to get things up and running. For more details about the Memcachier service and its pricing model, here is the Heroku elements page explaining it all: https://elements.heroku.com/addons/memcachier

To add this add-on to your application, run the following command in your project folder:

$ heroku addons:create memcachier:dev

This will add the Memcachier credentials to your Heroku config as environment variables. Copy the variables to your local .env file to be able to connect from your local development application.

$ heroku config
MEMCACHIER_SERVERS    => mcX.ec2.memcachier.com
MEMCACHIER_USERNAME   => bobslob
MEMCACHIER_PASSWORD   => l0nGr4ndoMstr1Ngo5strang3CHaR4cteRS...

To use Memcache in a Node.js app, we will use an npm package called memjs. To install it, run:

npm install memjs

The usage of the package is pretty straightforward. Here is a short example, quoted from Memcachier's Heroku getting-started guide:

var memjs = require('memjs')

var mc = memjs.Client.create(process.env.MEMCACHIER_SERVERS, {
  failover: true,  // default: false
  timeout: 1,      // default: 0.5 (seconds)
  keepAlive: true  // default: false
})

mc.set('hello', 'memcachier', {expires:0}, function(err, val) {
  if(err != null) {
    console.log('Error setting value: ' + err)
  }
})

mc.get('hello', function(err, val) {
  if(err != null) {
    console.log('Error getting value: ' + err)
  }
  else {
    console.log(val.toString('utf8'))
  }
})

Usage is as simple as calling the asynchronous get and set functions to store and retrieve a value by key.

I created a more visible version of this example with a relatively long, time-consuming mathematical calculation: an expensive calculation (or slow data retrieval, like a slow database query) is cached in Memcache, so it is calculated first, and on consecutive requests it is served from Memcache instead of being re-calculated. In my example, I used the calculation of a prime number, which takes time.
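The core of that example is the cache-aside pattern, which can be sketched independently of memjs (any client with async get/set works; with memjs you would wrap mc.get/mc.set in promises the same way):

```javascript
// Check the cache first; compute only on a miss, then store with a TTL.
async function cachedResult(cache, key, expensiveFn, ttlSeconds) {
  const hit = await cache.get(key);
  if (hit !== undefined && hit !== null) {
    return hit; // served from cache, skipping the expensive work
  }
  const value = await expensiveFn();
  await cache.set(key, value, { expires: ttlSeconds });
  return value;
}
```

On the first call the expensive function runs; consecutive calls within the TTL are answered from the cache.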

https://github.com/mfyz/heroku-memcachier-example

Developing and Deploying Nodejs (Express) apps on Heroku

Heroku is an amazing platform for getting quick development up and running on smart virtual instances. There is no hassle getting additional services you may need to build a quick-and-dirty app from the ground up. I've already written about how to use Heroku as a quick development environment: https://mfyz.com/using-heroku-for-a-quick-development-environment/

This short article is specifically about developing and deploying Node.js and Express apps on Heroku. There is actually not much difference between deploying a Node.js app and a PHP application or one in another language. For a Node.js application, the Heroku CLI tool automatically detects the application type, and its entry point, from the package.json file.

For the Express-related parts, just go ahead and see the very simple example I put up on GitHub in the past:
http://github.com/mfyz/heroku-nodejs-express-helloworld

Another, more detailed Express example that uses the Pug template engine for its layouts and views:
https://github.com/mfyz/express-pug-boilerplate

Aside from the application itself, there are a few key points I found helpful when creating and deploying Node.js apps.

Use environment variables

Using environment variables is the best way to set configuration details for your application.
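On Heroku these come from config vars (set with heroku config:set); locally you can load a .env file with a package like dotenv. A typical pattern with local fallbacks (the variable names here are just examples):

```javascript
// Read configuration from the environment, with local development defaults.
const config = {
  port: process.env.PORT || 3000, // Heroku injects PORT at runtime
  databaseUrl: process.env.DATABASE_URL || 'postgres://localhost/dev',
};
```

Your Express app would then call app.listen(config.port), and the same code runs unchanged in both environments.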

Setting which node.js version your app will use

It's as simple as adding an "engines" object in package.json, with your Node.js version defined in the "node" property inside it, like:

"engines": {"node": "12.13.0"}

The same applies to npm and yarn versions, which can be defined within the engines object as well.

Use prebuild and postbuild steps for anything extra your application build needs

By default, Heroku builds your application on every deployment. This is not very meaningful for pure Node.js applications, but your app may need a build, like a gulp, grunt, or webpack build. For this, Heroku reads the "build" npm script if it exists in package.json. Aside from this, Heroku always installs dependencies with npm install as a minimum build step. If you need additional steps before or after the build, you can define them as npm scripts named heroku-prebuild and heroku-postbuild.
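For instance, a package.json using these hooks might look like the sketch below (the script bodies and the webpack build are illustrative):

```json
{
  "scripts": {
    "build": "webpack --mode production",
    "heroku-prebuild": "echo 'runs before dependency installation'",
    "heroku-postbuild": "npm run build"
  }
}
```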

Utilize heroku add-ons

Remember, Heroku comes with tons of 3rd party services, many of which have free packages that are enough to try things out and start coding quickly. One of my favorites is Heroku’s internal database service, which provides a postgresql database with a single command:

heroku addons:create heroku-postgresql
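Once the add-on is created, Heroku exposes the connection string as the DATABASE_URL environment variable. As a sketch, Node’s built-in URL class can split it into parts for any postgres client (the fallback string below is a made-up example for local testing):

```javascript
// Parse the postgres connection string Heroku provides in DATABASE_URL
const dbUrl = new URL(process.env.DATABASE_URL || 'postgres://user:pass@host:5432/dbname')

const config = {
  host: dbUrl.hostname,
  port: Number(dbUrl.port),
  user: dbUrl.username,
  password: dbUrl.password,
  database: dbUrl.pathname.slice(1) // strip the leading "/"
}

console.log(config.host, config.port, config.database)
```

Most postgres clients (like the pg npm package) can also consume the connection string directly, so parsing is only needed if your client wants discrete fields.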

Wrapping up

All in all, heroku is a great cloud platform that allows developers to kick off ideas, starting with simple code that can grow into complex distributed applications very easily. In my opinion, it should be among the go-to tools in every engineer’s arsenal.

Importance of changelogs

It’s very important to keep a changelog and be disciplined about maintaining it as your product progresses. This applies to both product management and software development/management processes.

I want to mention a great resource that describes changelogs and is almost accepted as a standard for them in the open source community:

https://keepachangelog.com

What is a changelog?

A changelog is a file that contains a curated, chronologically ordered list of notable changes for each version of a project.

Why keep a changelog?

To make it easier for users and contributors to see precisely what notable changes have been made between each release (or version) of the project.

Who needs a changelog?

People do. Whether consumers or developers, the end-users of software are human beings who care about what’s in the software. When the software changes, people want to know why and how.
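The convention suggested by keepachangelog.com groups entries under a version heading and date, with change-type subheadings. A minimal sketch (the versions and entries are made up for illustration):

```markdown
# Changelog

## [Unreleased]

## [1.1.0] - 2019-02-15
### Added
- Turkish translation

### Fixed
- Crash when the preferences file is missing

## [1.0.0] - 2019-01-01
### Added
- Initial release
```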

An example Changelog (from Stretchly)

https://gist.github.com/mfyz/da277a38ba7d11bf1c6258a63d12dce6

Github renders this markdown changelog beautifully: https://github.com/hovancik/stretchly/blob/master/CHANGELOG.md

Single JavaScript file node/express/Instagram authentication (OAuth) and get user photos

Get Ready

Register your Instagram app in their developer portal and obtain client id and client secret keys. To do that, follow the steps below:

  • Go to IG Developer page: https://www.instagram.com/developer/
  • Click “Register your application” button (if you are not logged in, you will be asked to log in on this step)
  • Fill all fields. The only field you need to pay attention to is the “valid redirect URLs”. This is where your app is publicly hosted. Below, we will create a URL on the application to capture Instagram authentication after the user goes to the Instagram page for permission dialog, then comes back to this URL. It’ll be something like https://yoursite.com/instagram/callback
  • Once you register the app, the page will display client id and secret. Keep this ready on the next steps.

Code it up

Let’s set up a plain node.js and express application.

First, install the required packages:

npm install --save express path instagram-node cookie-parser

index.js (or server.js)

const express = require('express')
const path = require('path')
const ig = require('instagram-node').instagram()
const cookieParser = require('cookie-parser')

// Set the config values below inline or from environment variables
const PORT = process.env.PORT || 8110
const IG_CLIENT_ID = process.env.IG_CLIENT_ID
const IG_CLIENT_SECRET = process.env.IG_CLIENT_SECRET

ig.use({
  client_id: IG_CLIENT_ID,
  client_secret: IG_CLIENT_SECRET
})

const igRedirectUri = process.env.IG_REDIRECT_URI

const app = express()
app.use(express.static(path.join(__dirname, 'public')))
app.use(cookieParser())

app.get('/', (req, res) => {
  res.send('<a href="/instagram/authorize">Login with Instagram</a>');
})

app.get('/instagram/authorize', (req, res) => {
  res.redirect(ig.get_authorization_url(igRedirectUri, {
    scope: ['public_content', 'likes']
  }))
})

app.get('/instagram/callback', (req, res) => {
  ig.authorize_user(req.query.code, igRedirectUri, (err, result) => {
    if (err) return res.send(err)
    // ideally, store the token in a db and create a
    // browser session id to use the token from the db,
    // but for this quick demo we'll use the cookie store.
    // the method below is not secure at all
    res.cookie('igAccessToken', result.access_token)
    res.redirect('/instagram/photos')
  })
})

app.get('/instagram/photos', (req, res) => {
  try {
    // use ig token from db (that is linked to the browser session id)
    // or for our demo version, get it from cookies
    const accessToken = req.cookies.igAccessToken

    ig.use({ access_token: accessToken })

    const userId = accessToken.split('.')[0] // ig user id, like: 1633560409
    ig.user_media_recent(userId, (err, result, pagination, remaining, limit) => {
      if(err) return res.render('error')
      // send photo array to a view or for our demo build html
      // (list of ig photos) and return to the browser
      let html = '';
      result.forEach((photo) => {
        html += '<a href="' + photo.link + '">' +
          '<img src="' + photo.images.low_resolution.url + '"/></a><br/>'
      })
      res.send(html);
    })
  }
  catch (e) {
    res.render('error')
  }
})

app.listen(PORT, () => console.log(`App listening on port ${PORT}!`))

Deploy and run

Either locally or after you place it on your server, run:

node index.js

Tip 1: use “forever” on your server to keep this application running permanently.

Tip 2: For experimental purposes, you can run this app locally and use a tunneling tool like “ngrok” to expose your local port to the public with a quick domain assignment. Ngrok will provide a URL (a random subdomain on their domain); you have to update the IG developer app’s settings to add this domain as a valid redirect URL, otherwise, after this app redirects the user to Instagram for authentication, it will give an error.

Get the real thing

The code above was a quick and dirty version. I put a slightly more proper express application version on Github. It uses pug for its views and has proper layout/content block separation as well.

https://github.com/mfyz/express-passport-instagram-example