Simple Gitlab CI/CD Deployment via SSH+RSYNC

Setting up a project that runs on a web server (consider a traditional server like an AWS EC2 instance) requires you to deploy your code and configure the application. Doing this once may not be a big task, but doing it continuously is. It quickly gets impractical, especially if it’s a project that you work on and maintain actively.

Setting up a good way to deploy your application is one of the key characteristics of a successful development setup. Ideally, your project should have an automated way to deploy and roll back changes.

It makes a lot of sense to use version control systems as the base of deployments. VCS systems are about how code changes are tracked by individual developers, come together, and merge back to the main branches. It fits perfectly to have the capability to tie deployments to these changes too.

VCS services like GitHub and GitLab now come with powerful CI/CD pipelines that support these use cases almost out of the box.

There are many ways to achieve what I’m going to describe in this post, but I take this as my bare-minimum, plain and simple way to deploy code and perform simple tasks to restart my application automatically as part of my code workflow.

We will be using SSH and rsync to update your code remotely, pushing the changed/added/deleted files to your target folder, and then restart your application if needed.

In a PHP-based project, updating the files would be enough because Apache runs the scripts on every single request, unless you are using a caching module – which even comes with an automatic cache refresh when a file changes.

If you are deploying a NodeJS (or similar) app that needs to be restarted, we’ll use a remote SSH command to perform the restart from your CI/CD pipeline.

Let’s jump right into the .gitlab-ci.yml example, and I will point out the key areas in this template.

image: node

stages:
  - deploy

variables:
  npm_config_cache: "$CI_PROJECT_DIR/.npm"

cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .npm

production_deployment:
  stage: deploy
  image: alpine
  only:
    - master
  before_script:
    - apk update
    - apk add openssh git curl rsync
    - git checkout -B "$CI_BUILD_REF_NAME" "$CI_BUILD_REF"
  variables:
    DOCKER_DRIVER: overlay
    SERVER_NAME: "my-server-name"
    CONNECTION_STR: "[email protected]"
    ENVIRONMENT: "production"
    PROJECT_NAME: "myproject"
    SLACK_CI_CHANNEL: "#ci-myproject"
    RSYNC_EXCLUDES: "--exclude 'storage' --exclude '.env' --exclude 'node_modules' --exclude 'keys' --exclude '.git' --exclude '.yarn-cache'"
    RSYNC_BEFORE_HOOK: "mkdir -p $DEPLOY_PATH && rsync"
    DEPLOY_PATH: "/srv/data/deploy/${PROJECT_NAME}/production"
    SERVE_PATH: "/srv/data/www/${PROJECT_NAME}/production"
    PRIVATE_KEY: $SSH_PRIVATE_KEY_DEPLOYER
  script:
    - mkdir -p ~/.ssh
    - 'which ssh-agent || ( apk add --update openssh )'
    - eval "$(ssh-agent -s)"
    - echo "${PRIVATE_KEY}" | tr -d ' ' | base64 -d | ssh-add -
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
    - ssh "$CONNECTION_STR" "mkdir -p $SERVE_PATH $DEPLOY_PATH;";
    - rsync -avzqR --rsync-path="$RSYNC_BEFORE_HOOK" $RSYNC_EXCLUDES --delete -e 'ssh' ./ "$CONNECTION_STR:$DEPLOY_PATH";
    - ssh "$CONNECTION_STR" "cd $DEPLOY_PATH; rsync -avqR --delete ${RSYNC_EXCLUDES} ./ ${SERVE_PATH}";
    - ssh "$CONNECTION_STR" "cd ${SERVE_PATH}; npm install --production";
    - ssh "$CONNECTION_STR" "if forever list | grep 'production/server_run.js'; then forever stop ${SERVE_PATH}/server_run.js; fi; forever start --workingDir ${SERVE_PATH} ${SERVE_PATH}/server_run.js"
    - 'cd $CI_PROJECT_DIR && sh ./scripts/notify_slack.sh "${SLACK_CI_CHANNEL}" ":rocket: Build on \`$ENVIRONMENT\` \`$CI_BUILD_REF_NAME\` deployed to $SERVER_NAME! :white_check_mark: Commit \`$(git log -1 --oneline)\` See <https://gitlab.com/myproject/$(basename $PWD)/commit/$CI_BUILD_REF>"'
  environment:
    name: production
    url: http://myproject.com

Essentially we need to do:

  1. Upload (or update) the files in the server
  2. Restart the application (if needed)

You get a deployment log like this:

Running with gitlab-runner 15.4.0~beta.5.gdefc7017 (defc7017)
  on green-4.shared.runners-manager.gitlab.com/default ntHFEtyX
section_start:1664673660:prepare_executor
Preparing the "docker+machine" executor
Using Docker executor with image alpine ...
Pulling docker image alpine ...
Using docker image sha256:9c6f0724472873bb50a2ae67a9e7adcb57673a183cea8b06eb778dca859181b5 for alpine with digest alpine@sha256:bc41182d7ef5ffc53a40b044e725193bc10142a1243f395ee852a8d9730fc2ad ...
section_end:1664673666:prepare_executor
section_start:1664673666:prepare_script
Preparing environment
Running on runner-nthfetyx-project-17714851-concurrent-0 via runner-nthfetyx-shared-1664673617-f4952085...
section_end:1664673667:prepare_script
section_start:1664673667:get_sources
Getting source from Git repository
$ eval "$CI_PRE_CLONE_SCRIPT"
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/amazingproject/website/.git/
Created fresh repository.
Checking out 7ab562d5 as staging...

Skipping Git submodules setup
section_end:1664673681:get_sources
section_start:1664673681:step_script
Executing "step_script" stage of the job script
Using docker image sha256:9c6f0724472873bb50a2ae67a9e7adcb57673a183cea8b06eb778dca859181b5 for alpine with digest alpine@sha256:bc41182d7ef5ffc53a40b044e725193bc10142a1243f395ee852a8d9730fc2ad ...
$ apk update && apk add git curl rsync openssh openssh-client python3
fetch https://dl-cdn.alpinelinux.org/alpine/v3.16/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.16/community/x86_64/APKINDEX.tar.gz
v3.16.2-221-ge7097e0782 [https://dl-cdn.alpinelinux.org/alpine/v3.16/main]
v3.16.2-229-g1f881aca9b [https://dl-cdn.alpinelinux.org/alpine/v3.16/community]
OK: 17033 distinct packages available
(1/33) Installing ca-certificates (20220614-r0)
.
.
.
(33/33) Installing rsync (3.2.5-r0)
Executing busybox-1.35.0-r17.trigger
Executing ca-certificates-20220614-r0.trigger
OK: 78 MiB in 47 packages
$ git clone https://github.com/MestreLion/git-tools.git && git-tools/git-restore-mtime
Cloning into 'git-tools'...
12,931 files to be processed in work dir
Statistics:
         0.57 seconds
       13,151 log lines processed
           59 commits evaluated
        2,204 directories updated
       12,931 files updated
$ which ssh-agent || ( apk add --update openssh )
/usr/bin/ssh-agent
$ eval "$(ssh-agent -s)"
Agent pid 54
$ echo "${PRIVATE_KEY}" | tr -d ' ' | base64 -d | ssh-add -
Identity added: (stdin) ((stdin))
$ mkdir -p ~/.ssh
$ [[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
$ ssh "$CONNECTION_STR" "mkdir -p $DEPLOY_PATH;";
Warning: Permanently added '199.192.23.254' (ED25519) to the list of known hosts.
$ echo "--------> Copy latest codebase to remote"
--------> Copy latest codebase to remote
$ eval "rsync -avzqR --rsync-path='$RSYNC_BEFORE_HOOK' $RSYNC_EXCLUDES --delete -e 'ssh' ./ '$CONNECTION_STR:$DEPLOY_PATH'"
$ ssh "$CONNECTION_STR" "find $DEPLOY_PATH -type d \( -path $DEPLOY_PATH/assets/uploads -o -path $DEPLOY_PATH/application/logs \) -prune -o -exec chmod og-w {} \;"

$ cd $CI_PROJECT_DIR && sh ./scripts/notify_slack.sh "${SLACK_CI_CHANNEL}" ":rocket: Build on \`$ENVIRONMENT\` \`$CI_BUILD_REF_NAME\` deployed to $SERVER_NAME! :white_check_mark: Commit \`$(git log -1 --oneline)\` See <https://gitlab.com/amazingproject/website/commit/$CI_BUILD_REF>"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   427    0     2  100   425     15   3218 --:--:-- --:--:-- --:--:--  3259
oksection_end:1664673757:step_script
section_start:1664673757:cleanup_file_variables
Cleaning up project directory and file based variables
section_end:1664673758:cleanup_file_variables
Job succeeded

It runs fast, is almost universal and applicable to any type of codebase, and is extendable. If you need to restart your application, either with a process manager or a full daemon restart, you can add a new command using the same ssh lines we use to remote-execute commands on the server.

Create and use a limited-permission deployer user for better security

A good rule of thumb is to set up a “deployer” user on the server, give it the smallest possible set of permissions, and grant it write access to the target folders so these commands run properly. There is even a way to give sudo rights for specific commands if you really need to execute something with root permissions, without having a fully sudo-enabled user account.
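
As a rough sketch of what that can look like on the server (assuming a Debian/Ubuntu box; the user name, paths, and service name are placeholders for illustration):

# create a deployer user that can only log in with an SSH key
sudo adduser --disabled-password deployer

# give it write access only to the deploy/serve folders used in the pipeline
sudo mkdir -p /srv/data/deploy/myproject /srv/data/www/myproject
sudo chown -R deployer:deployer /srv/data/deploy/myproject /srv/data/www/myproject

# optionally allow a single privileged command via sudoers (e.g. in /etc/sudoers.d/deployer):
# deployer ALL=(ALL) NOPASSWD: /usr/bin/systemctl restart myapp.service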

Even simpler deployment

Maybe rsync is more than you need. Maybe all you need is to clone the repo on your server once, and on each deployment run “git pull”. You can simplify this script to get rid of all the rsync parts and only keep a remote SSH command that runs the pull.
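
A minimal sketch of what the script section could be reduced to in that case (branch name and paths are assumptions, and it presumes the repo was already cloned on the server once):

  script:
    - ssh "$CONNECTION_STR" "cd $SERVE_PATH && git pull origin master"
    - ssh "$CONNECTION_STR" "cd $SERVE_PATH && npm install --production"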

Remove unused CSS with PurgeCSS

When building a web app, we often use our go-to CSS framework (Bootstrap, Tailwind CSS…) that comes with a lot of useful stuff that normalizes and speeds up our UI building process. Frameworks also come with a lot of baggage, a lot of it. Most of our UIs are not super complex, and we don’t use the majority of the CSS framework we include. Even when we build and implement our own Design System from scratch, we will always have unused CSS in any given project or application.

PurgeCSS is a great way to optimize your final output to only contain the CSS you actually need and use. In my simple apps where I’ve implemented PurgeCSS, I’ve seen a 70-90% decrease in final CSS size and a significant decrease in render time.

PurgeCSS works with most JavaScript bundlers and web build tools. It also comes with its own CLI tool. My go-to use case is its seamless integration with Tailwind CSS in NextJS builds. Here is a nice guide and the example GitHub repo I put together when I was playing with this.
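
For reference, a minimal sketch of the standalone CLI usage (the input/output paths are placeholders; if you use a bundler, its PurgeCSS plugin is usually the better fit):

npx purgecss --css dist/styles.css --content "dist/**/*.html" "dist/**/*.js" --output dist/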

Check out PurgeCSS

At least do this to secure your WordPress site as a quick win: use Wordfence

Security is often overlooked by a lot of WordPress site owners

If you have, or are building, a WordPress site, you are in the market for quick wins. When it comes to securing your website, there is an endless number of considerations you have to make.

If you are not a web security person, the steps for securing your site may be an afterthought, and you may miss a lot of obvious things about your WordPress website’s security.

Good security doesn’t come in a couple of clicks or in a few mins, but…

There is a simple and easy, no-cost action item you can take. Install Wordfence (or a similar security plugin) and enable at least the basic security settings for your WordPress site.

https://wordpress.org/plugins/wordfence/

Wordfence is an all-in-one sort of plugin that comes with a lot of security features, and it can be configured pretty quickly without much tech or web security knowledge. Their recommended configuration is almost always safe to activate, and it activates in a few clicks.

Wordfence also gives you a high-level overview of your site’s activities in its dashboard.

Things you should do with Wordfence

Enable Multi-factor Authentication (MFA) for your WordPress admin logins

One of the best things you can do for a WordPress site is to secure its admin access. And the best way to do that is to enable two-factor (multi-factor) authentication. You can use an authenticator app for this, but the setting has to be enabled for every WordPress admin user. Make sure all admin users have 2FA enabled.

WordPress doesn’t come with a 2FA/MFA capability. Wordfence is one of the easiest ways to add 2FA/MFA to your WordPress logins.

Enable rate limiting and automatic blocking

Wordfence enables rate limiting in its firewall settings by default. This also allows Wordfence to block repeated failed login attempts, which are often an attacker trying to gain access to the WordPress admin.

Block countries you don’t have any users from

Especially high-scammer/spammer countries.

Keep your WordPress plugins up to date.

Wordfence will warn you if there are any dangerous/vulnerable plugins that stay outdated.

Wordfence email alerts will be enabled by default, keep it that way

Make sure you at least get a weekly digest to stay on top of your Wordfence activity and alerts.

If you have multiple WordPress sites, use Wordfence Central to manage the Wordfence installations across sites

Check out Wordfence: https://www.wordfence.com/

Note: This is not a paid or affiliate post. I just like Wordfence and recommend every WordPress site have it set up if security is not on your radar when creating a WordPress site.

Easy WordPress backups using Updraft

Why backups?

If you are running WordPress to host anything serious, you have probably been asked about, talked about, or wanted a backup solution. Regardless of whether it’s a serious site or a hobby project, you will be putting time and effort into building it and continuing to create content. In this day and age, it’s unthinkable not to have a backup of your site.

There are much more complex backup solutions out there, but most likely, you are using a somewhat managed hosting solution. Most hosting companies have their own out-of-the-box backup options you can explore. But if you want a free, low-cost, or more flexible option, keep reading 🙂

I’m going to dive into my recommendation of the Updraft plugin first, then talk about a few key points I pay attention to and how Updraft handles them.

Updraft makes it easy

Updraft is a full backup solution for WordPress sites. It has a lot of controls for configuring your backups: different schedules, what to back up, where to store it, how to notify admins, and more.

Learn more about Updraft

Aside from these generic topics, there are three larger, key topics I want to talk about that matter most to me when I consider backup solutions.

What and when to backup?

The most common topic to think about when designing backup strategies is what you want to back up and with what frequency. Updraft comes with a two-tier schedule we really like. We set one of them to take full backups every week and retain the last 2 copies of this “full backup”. These backups are basically our full “restore” points.

Then we set the second-tier schedule to run every day and take database and /uploads folder backups. On some sites where few new uploads happen and we have a huge historical media library, we skip the uploads, only take database backups daily, and retain daily backups for 14 days. This means we can restore a full backup from the beginning of the week (that includes plugins and all other stuff) and then cherry-pick a specific day in the last 14 days if we want to roll back to a particular date.

Back up, restore, and migrate easily

The reason you want backups is, at the end of the day, to be able to restore them easily in case you lose your server. Stuff happens, right?

When that happens, it matters how easily and quickly you can restore your backup to a fully functional state. For a WordPress site, that means everything: your posts, assets (media library), installed plugins, and plugin configurations as well. We’re basically looking for a full site restore.

One of the reasons I really like Updraft is because it handles full backup restore with a few clicks.

As part of the “restoration” process, Updraft doesn’t care if you are restoring the backup files to the exact same domain or a new one. This actually makes Updraft a great “migration” or “cloning” tool. My team has utilized Updraft to clone production sites and create staging or test copies easily. There are more specialized solutions for this, but Updraft already handles these needs, so one plugin does many things for us.

When migrating to a new domain, Updraft detects if the target WordPress URL is different from the source the backup was taken from, and asks if you want to update the URL references (there are tons in the WordPress DB) with the new URL. This takes care of another pain point when moving sites between different domains.

A usual workflow is to work on the dev or staging version of a website then when ready, clone this to the production version. Updraft makes this process really easy for us.

On the server – off-site?

Another key topic is where the backups will be stored. By default, any backup solution will create backup files on the same server. It’s convenient to have backups on the same server because it’s less configuration and faster… But that’s not why we’re taking backups in the first place. The “worst-case” scenario we’re preparing ourselves for is losing access to our server altogether. That’s why it’s very important for a backup strategy to store the backups off-site (out of the server), or even in a completely separate region, or further still, in a different cloud provider. What if a whole AWS region goes out? (Unlikely, but it happens.)

Updraft supports many cloud storage providers and protocols. I used AWS S3, Google Drive, Dropbox, and (S)FTP targets with UpdraftPlus, and their configurations are really easy.

Beware: most of these integrations require the paid “Plus” version, which is an annual subscription where you pay for plugin updates. In my opinion, it’s very affordable given the amount of manual work it removes from the engineer’s hands. We happily pay for an agency license and use it on all of our WordPress sites.

Bonus: Automatic backups before updates

For a regular, not tech-savvy user, there is a feature of Updraft that I really like: it shows a pop-up every time the WordPress admin tries to perform an update in the WordPress dashboard.

The pop-up basically asks if the user wants to take a backup before proceeding with the updates. One of the most common reasons a WordPress site breaks is a plugin update. Plugin authors may not be careful with backward compatibility, and/or they may miss a buggy use case that you rely on. When the update happens, your site’s behavior may be impacted, or even its page rendering, which will basically break your pages and is more serious. When this happens, the backup Updraft took before performing the updates becomes a reference point you can restore from if needed.

Of course, this option is configurable and Updraft makes it prominent for WordPress admin to change this behavior:

Learn more about Updraft

Disclaimer: This post contains affiliate links that will help support this site. Even though it looks like a promoted post, I genuinely love, use, and recommend Updraft’s free and paid plugins/services. I would have published the exact same post even if I wasn’t using affiliate links.

Analyze and Optimize Webpack Bundle Size and Contents

When creating web applications with popular frameworks like react.js, angular.js, or similar, you will most likely be utilizing a bundler/packager tool like webpack to bundle your application source code into combined JavaScript bundle files that can be loaded and executed by browsers. With package managers handling most dependencies, it’s easy to lose track of what’s included in your final application bundle file. We may be using an easy-to-use library that streamlines our workflow, but it may come with a cost.

In this article, we will talk about ways to analyze and understand what goes into your bundle code and to be more aware, when picking libraries, of their effect on the final JS bundle size. You will:

  • Realize what’s really inside your bundle
  • Find out which modules make up most of its size
  • Find modules that got there by mistake
  • Consider alternatives to optimize your bundle size

There are a few popular tools we can use with minimal effort to analyze our bundle, visually.

Webpack Bundle Analyzer

https://www.npmjs.com/package/webpack-bundle-analyzer

Install:

npm install --save-dev webpack-bundle-analyzer

Run your webpack to create stats.json:

webpack --json > stats.json

Start bundle analyzer:

npx webpack-bundle-analyzer stats.json

The bundle analyzer will start as a web application on its default port. Visit http://127.0.0.1:8888 to open the bundle analyzer web UI. You can analyze your bundle and the packages/dependencies you use, determine the costly dependencies, and think about strategies to optimize them or find lightweight alternatives:
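
If you prefer not to generate stats.json manually, the package can also run as a webpack plugin. A minimal sketch, assuming a standard webpack.config.js:

const { BundleAnalyzerPlugin } = require('webpack-bundle-analyzer');

module.exports = {
  // ...the rest of your existing webpack configuration
  plugins: [
    // opens the interactive treemap report in your browser after each build
    new BundleAnalyzerPlugin(),
  ],
};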

https://cloud.githubusercontent.com/assets/302213/20628702/93f72404-b338-11e6-92d4-9a365550a701.gif

Alternatively, https://github.com/danvk/source-map-explorer works in much the same way as the webpack bundle analyzer.

Webpack Visualizer

Webpack Visualizer is another tool we can use to create a visual representation of your final bundle. You can use the Webpack Visualizer in two ways.

1) Using their web tool

The easiest way is to use their web tool to just upload your stats.json file. https://chrisbateman.github.io/webpack-visualizer/

2) Create stats.html using the webpack plugin

Webpack Visualizer also runs as a webpack plugin that generates a stats.html file containing an interactive visualization of your bundle contents. This method is useful if you want to include the creation of stats.html as part of your CI/CD, which is helpful for teams that want to continuously analyze every commit and keep the generated stats.html among their build artifacts.
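
A minimal sketch of that plugin setup (the package name and filename option follow the project's README; treat the details as assumptions and double-check against the repo below):

const Visualizer = require('webpack-visualizer-plugin');

module.exports = {
  // ...the rest of your existing webpack configuration
  plugins: [
    // writes an interactive stats.html next to your build output
    new Visualizer({ filename: './stats.html' }),
  ],
};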

https://github.com/chrisbateman/webpack-visualizer

https://d33wubrfki0l68.cloudfront.net/46973c85ea0252b441aac5ffa56cf1a17849b0c8/0afb3/images/webpack_visualizer.png

Bundlephobia

Aside from analyzing your bundle, you can also proactively be conscious of every dependency before adding it to your bundle. Bundlephobia helps you search packages and know what you are getting into – the cost of the package in your application. Bundlephobia also highlights the user/browser impact of the package when used in your application, like download and render timings. There is also a historical view of the changes between versions of the package you are analyzing.

https://bundlephobia.com/

Bundlesize

There are different ways to monitor your bundle. One of the simplest is to make sure the changes you just made in your code didn’t bloat your bundle. Bundlesize is a simple tool that can easily be added as a step in your CI pipeline to keep your bundle size in check. Bundlesize will fail if you exceed the maximum bundle size set for your project and will also highlight the delta from the primary branch of your code. This helps developers see their commit’s impact on the bundle size.
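
A minimal sketch of the setup (the bundle path and the size budget are assumptions): add a "bundlesize" section to package.json and run the check as a CI step.

{
  "bundlesize": [
    {
      "path": "./dist/bundle.js",
      "maxSize": "100 kB"
    }
  ]
}

Then run npx bundlesize in your CI pipeline; the step fails if the file exceeds the budget.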

https://github.com/siddharthkp/bundlesize

Using WP-Rocket to speed up your WordPress site

Despite its pain points for developers, it is hard to ignore WordPress’s popularity and flexibility; it helps a lot of people with entry and mid-level website building. WordPress is a very powerful website builder tool, but it has an extremely fragmented plugin marketplace, resulting in a lot of websites being powered by a mashup of plugins with different styles of coding/implementation and opinions from different developers. When not done carefully, this results in low-performing websites. So optimizing WordPress-powered websites is a real effort. But regardless of the plugins used in the back-end, there is a very common optimization step that applies to the vast majority of WordPress sites.

Almost all WordPress sites generate static content at the end of the day, yet most of these sites don’t have proper caching, and no CDN is set up in front of them. When these two are combined, they can make a night-and-day difference in site performance. With recent Google updates, this now directly affects your site’s ranking in search engines as well as its social visibility (e.g., Facebook prefers higher-performing sites in its news feed algorithm). Let’s expand on these two methods and what they mean.

Caching strategy for your website

Caching is one of the key elements of performant software. There are many ways to apply caching in software, but essentially, caching means your servers remember the calculations they did before. There may be some expensive calculations that need to happen for a software function to do its job, but once calculated, with the right caching strategy, the result can be stored and recalled at a much faster speed.

Caching is especially important for websites that receive a lot of traffic. WordPress is essentially your website rendering engine. And once a page is rendered, there are ways to eliminate the repetitive need to keep re-rendering the same page when you get a lot of visitors to it. There are different ways to cache things in WordPress, but at the end of the day, what we care about is that the rendered page is cached completely and, if possible, that the server serves the pre-rendered page without needing to re-render it again.

Get blazing fast with WP-Rocket

We have experimented with many WordPress optimization plugins. One plugin that stands out far ahead of the competition and is worth highlighting in this article is WP-Rocket. WP-Rocket comes with a simple configuration that combines, compresses, and optimizes your WordPress pages and provides caching that will increase your page load speed. The change will be very visible, since WP-Rocket converts your WordPress page output to static, minimized files that are served to your users’ browsers very fast. Without proper caching, your WordPress page renders every time a user reads your content, and your traffic is directly dependent on your server power. WP-Rocket takes the strain off your server with intelligent caching so that the majority of your traffic only incurs static file serving, which happens very quickly. Another benefit of using WP-Rocket is that it optimizes your responses with better cache headers, which can be combined with services like Cloudflare to accelerate traffic served from the cache – in most cases not even from your server, but from servers at the edge network provided by services like Cloudflare.

You don’t need the paid version of WP-Rocket and the free version comes with a lot of features that will make a visible difference with just a few clicks. Similarly, Cloudflare’s core service is also a free and great start for a brand new WordPress site.

Learn more and get started with WP Rocket

For further optimization, you can set up and configure a CDN on top of WP-Rocket, alongside or instead of Cloudflare, to get additional performance benefits.

Content Delivery Networks

Rocket CDN (by WP-Rocket)

If you go down this road, WP-Rocket comes with its own CDN offering that is flat-rate priced and provides unlimited bandwidth and edge storage. Enabling this option is the easiest integration with WP-Rocket.

Learn more about Rocket CDN

Cloudflare Auto Optimize

Cloudflare comes with its own WordPress content delivery option, beyond the DNS proxy-based caching that already serves your traffic with optimized network caching if you are using Cloudflare’s primary service offering. It is an affordable, flat-rate service that works pretty much out of the box, with zero configuration for WordPress sites. All you need to do is install their WordPress plugin, log in with your Cloudflare account, and subscribe to the service.

Learn more: https://www.cloudflare.com/automatic-platform-optimization/wordpress/

Cheap CDN Bunny.net

If you want more control, there are many traditional CDN solutions. I suggest bunny.net, a simple and very affordable one we used in the past. Bunny has one of the most cost-competitive offerings with great performance.

They offer an official WordPress plugin for easy integration documented here.

Disclaimer: WP-Rocket links above are affiliate links that help support this blog. But regardless, wp-rocket is a great plugin and service I used many times before paying for their premium plugin and services. I’d write them even if they didn’t have an affiliate program.

Using rclone & cron jobs for a simple server backup solution

https://rclone.org/ is a command-line file/folder sync tool that connects with many cloud storage providers like AWS S3, FTP, Google Drive, Dropbox…

It’s configured once; then, with simple commands, it allows two-way syncing between different cloud providers or local file systems.

This makes it the perfect and simplest backup solution on your personal server to take backups and sync them to multiple cloud providers.

I have the following command in a cron job that runs once a day, along with a few other scripts that prepare the backups.

rclone sync -v /data/backups mfyz-gdrive:mfyz-server-backups-rclone

This command syncs my backups folder (contents) to a folder in my Google Drive.
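
For reference, the crontab entry for a daily run could look like this (the time, script path, and log path are assumptions; my backup-preparation steps and the rclone sync live in one script):

0 3 * * * /data/backups/run-backups.sh >> /var/log/backup-sync.log 2>&1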

Backups

The way you take your backups is up to you. You could even directly sync your application folders, like the Apache httpdocs folder, but that’s too many files that may update too frequently. Instead, you can tar/gzip your folders or take database dumps before running rclone for your backup solution.

I have the following simple backup script on my server that takes my WordPress site’s snapshot daily; then rclone syncs it up to my Google Drive.

cd /data/backups
now=$(date +"%Y%m%d")
echo "--> backing up mfyz.com"
echo "files backup..."
tar cpf backup-mfyz-$now-files.tar --exclude=Files --exclude=wp-content/uploads --exclude=wp-content/cache --exclude=tr/wp-content/uploads --exclude=tr/wp-content/cache --exclude=.git ../www/mfyz
echo "database backup..."
sudo mysqldump mfyz_wp | gzip > backup-mfyz-$now-db.sql.gz
echo "done"

Monitoring

In my daily cron tasks, after running rclone, I also have a health check to make sure my backups are taken correctly. A ping service monitors that my daily tasks ran successfully. I’ve written about an open-source health check/ping service you can use or self-host here: https://mfyz.com/monitoring-your-microservice-stack-with-simple-ping-health-checks-using-helathchecks-io-for-free/
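
The ping itself is just one line at the end of the cron script; a minimal sketch (the check URL is a placeholder, the service gives you a unique URL per check):

# report success to the health check service only if everything above succeeded
curl -fsS --retry 3 https://hc-ping.com/your-check-uuid > /dev/null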

Using Axe & React-axe to audit your web application’s accessibility

Web accessibility is one of the key and often missed parts of web development. If you are building a website for a larger, general audience, you have to make sure your pages comply with web accessibility standards (most notably WCAG).

Making sure your website is accessible is no small task. There are obvious steps you have to take but the real accessibility issues are not easy to understand and pinpoint before a real user with a disability visits your site using tools like screen readers or other accessibility aiding tools. To do it right, most companies work with audit companies that are experienced in testing your site using variations of these tools to cover as many real accessibility scenarios as they can. But getting your site audited for accessibility is also going to cost you.

Web accessibility compliance is also a mandate for websites serving certain industries (like government, insurance, banking…), where you are obliged to keep your site accessible at all times. For most consumer sites this is not a requirement, but it is work that makes your site/brand more inclusive of all users.

I want to talk about a browser extension (and a React library) that helps you programmatically detect the obvious, easy-to-find issues you can address quickly, covering the majority of the basic accessibility issues on your pages as a quick win.

Axe

Axe is software and a service that has professional solutions as well as free browser extensions (Chrome, Firefox, and MS Edge) that are very easy to install and activate, letting you start seeing your page’s accessibility compliance and issues.

Visit https://www.deque.com/axe/ to learn more and install the browser extension. The extension is pretty straightforward to use: it runs on a page you open and shows the issues, explanations of what each issue is, and how to solve it to make your page more accessible.

React-axe

There is also a React npm package that you can activate in your development environment to audit the final rendered DOM tree, similar to the Chrome extension.

https://www.npmjs.com/package/@axe-core/react
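
A minimal sketch of wiring it up, following the package's documented usage (the 1000 value is the debounce, in milliseconds, between audits; only load it outside production):

const React = require('react');
const ReactDOM = require('react-dom');

if (process.env.NODE_ENV !== 'production') {
  // audits the rendered DOM and logs accessibility findings to the browser console
  const axe = require('@axe-core/react');
  axe(React, ReactDOM, 1000);
}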

Google Sheets + Zapier is a perfect gateway for quick integrations when bootstrapping a new tool/service

When bootstrapping a new product, regardless of the platform and solution used in the back-end and front-end, the time comes very quickly when you will need to integrate with 3rd party platforms to create continuity of the product’s user experience across different solutions.

As a good example of this, let’s say you bootstrapped a small SaaS product that helps users calculate their taxes. Your product is not only the software solution you created, but the whole experience, from customer support to documentation or educational materials, and perhaps some marketing experience when acquiring and onboarding your new users. So right off the bat, we will need a customer support solution and a marketing tool, perhaps a CRM-ish tool to use as our “customer master” database, and we will want to channel everything there as much as we can.

But when someone signs up, your back-end only creates their user account, and their customer support record, CRM record, or marketing tracking is not connected. Most likely, these will be separate services like Intercom, Zendesk, Mailchimp, etc. And obviously, your own backend/database where your user’s initial records are created and their engagement with your core product happens.

I have planned and done these integrations many times over in different products and worked with many 3rd party services. Some niche solutions I had to integrate don’t have proper APIs or capabilities. Setting these exceptions aside, most tools have integrations with well-known platforms like Salesforce, Facebook Ads, IFTTT, and Slack. And as a common and growing theme, most tools also have an integration with Zapier, which is the main event I want to come to.

Eventually, I found myself evaluating Zapier integrations between these platforms to cover most of the use cases we often spend days building as one-off integrations. When the triggers and actions cover what we are trying to do, I started suggesting that my clients and the rest of my team create Zapier-focused integrations.

There is an easier way. A big majority of people working in the process/product/team management space use sheets on a daily basis. Either Excel or Google Sheets covers the big majority of use cases. I evangelize Google Sheets just because of its real-time collaboration and ease-of-access capabilities. It’s free, and with a large majority of people having Google accounts, it’s very universal.

I have done direct Google Sheets integrations many times in the past. But recently I like the concept of using a Google Sheet as a source that can be commonly used by other services for integration purposes. Since it’s a living document, it’s very easy to make changes to a document or listen to changes happening on it (by humans or APIs). This makes it an amazing candidate to use with Zapier as a “source” of data. It makes Zapier the magic glue here, serving as a universal adapter to anything else we want to connect to. Having thousands of services available in Zapier makes it a meeting ground for moving the data we provide through a Google Sheet to anywhere else.

I need to say this will be limited by each service’s capabilities and the available actions/triggers in the Zapier platform. But most SaaS solutions invest enough effort and time to make their Zapier integrations rich enough to serve the most common use cases. It won’t cover 100% of needs, but it will certainly eliminate a lot of basic integrations like Slack and email notifications or marketing tool triggers (e.g., follow-up campaigns).

This is not a code-less solution

When going down this route, the biggest work and challenge will be integrating the Google Sheets API: connecting your account (through the OAuth process), storing your credentials on your server, and creating the server → sheet integration to send your back-end changes to a Google Sheets document. It’s not the easiest API to integrate with, but it’s well documented, very mature, and has endless examples in the community (GitHub). And best of all, this one integration opens up so many others without needing further integration work. Even in the most basic products, we find ourselves doing Slack and email deliveries in MVP versions. Investing the same effort in Google Sheets will easily justify itself later.
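
As a minimal sketch of the server → sheet direction, assuming a Node back-end, the official googleapis package, and a service-account credential instead of the full OAuth flow (the spreadsheet ID, range, and key file path are placeholders):

const { google } = require('googleapis');

async function appendSignupRow(email, plan) {
  // authenticate with a service account that has edit access to the sheet
  const auth = new google.auth.GoogleAuth({
    keyFile: 'service-account.json',
    scopes: ['https://www.googleapis.com/auth/spreadsheets'],
  });
  const sheets = google.sheets({ version: 'v4', auth });

  // append one row; Zapier can pick it up via the "New Spreadsheet Row" trigger
  await sheets.spreadsheets.values.append({
    spreadsheetId: 'YOUR_SPREADSHEET_ID',
    range: 'Sheet1!A1',
    valueInputOption: 'USER_ENTERED',
    requestBody: { values: [[new Date().toISOString(), email, plan]] },
  });
}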

Trade offs

One big trade-off is that your users’ PII data is transported to and stored in a Google Sheet (which will be private) and then sent to Zapier. If you are super paranoid or have to comply with certain privacy regulations, this traffic may need to be managed more carefully, or this approach may be completely unfeasible for your product. But the majority of products I build do not need that rigorous audit and compliance, so this solution has worked for me many times.

Example

I want to show a sample integration that sets up a Google Sheet as a trigger and a Slack notification as an action. Hopefully, this sparks some imagination and helps you understand where this can go.

Set up Google Sheet changes as “trigger”

Create a new zap or edit an existing one to change the “trigger” service. Select Google Sheets.
In the first step, you will be asked to select the Google account linked to your Zapier account. If you haven’t done this yet, or want to connect a different account than the one you currently have, you can do it in this step.

After selecting the account, Zapier will ask you to select what event you want to set this zap to listen to. Generally, we will inject a new row into a sheet in one of the documents. So we select “New Spreadsheet Row” as the event to listen to, but as you can see, you can select other events like updating a spreadsheet row or new worksheet creation in a document.

Now you will need to select which document and which worksheet to listen to. Zapier will show document and sheet selection dropdowns here.

As the final step, you will be able (and kind of have to) to test your trigger, which will pull a sample row from your sheet. Make sure you enter values into your columns so you can use this sample data to set up further actions in Zapier. Zapier will show these sample values when you create actions that use them.

Set up Slack as “action” to send a message to a channel

Now, we’ll use this trigger in any service we want. We can also create multiple actions, where you send an email, a Slack notification, and create a new Intercom customer record at the same time in one zap.

For this example, in the “action” section we will select Slack service when asked.

First, we will select the type of “action” we want to perform. We will select “Send Channel Message”. You can select other actions, like sending a direct message, instead.

Then, similar to the initial Google Sheets steps, we will first select the Slack account we want to use.

And finally, with seeing a lot of options, we will set up the sender name, avatar, and other details, but most importantly, the channel we want the message to be sent to and the message content itself:

Zapier makes it pretty intuitive and simple to construct smart content areas like this one. You can both type a static message and insert actual data (variables) from your source. In this example, our source is the Google Sheets document, so you will see a searchable dropdown to find the column value you want to insert when constructing a message with dynamic parts.

Once everything is done, you will be able to finish this step and will be forced to test the action you just set up. And all done! Don’t forget to turn the zap “on”.

This is just the simplest example I can use. There are many use cases where this integration can push changes/data into the thousands of services available in Zapier.

Happy Zaps!

Create a website from Notion document collection with custom subdomain via Fruition in 10 minutes


You guys know how much I love thinking, talking, and geeking about ways to smartly preserve our knowledge in form of written communication. I talked a lot about written communication and making our digital documents smarter before.

Although Notion is not my primary knowledge base or note-taking tool, I really love certain aspects of how Notion functions. Probably its best feature among other tools is how you can publish any document publicly. The output it creates is very minimal and clean. The second-best thing about Notion is its support for almost any type of web embed code. With this, you can practically embed any interactive widget or content piece in your documents. Very much like a web page, Notion makes the documents you publish online pretty much fully fledged web pages.

This made Notion an attraction point for a lot of people to create microsites, sub-sites, and a lot of plain content to be written in notion and used in existing websites.

Today, I want to talk about Notion enthusiast Stephen’s guide and mini-project that allows us to set up a custom domain/subdomain for our Notion documents.

Stephen’s code runs on serverless “Cloudflare Workers”. This allows a few customizations, like a dark/light mode toggle on your page as well as nice URLs (slugs), when you set up your Notion document with the worker code. It’s a pretty simple, almost no-code solution. In fact, you don’t have to worry about the code; Stephen created a mini UI to let you customize your configuration while setting it up. It takes about 5-10 minutes to set up, but it’s worth it.

Give it a try

Check out the step by step tutorial on the project site: https://fruitionsite.com/

You can also jump right into the video tutorial:

I love that this method allows you to spin up a super-fast website that you can continuously edit/update from Notion, and it costs nothing.

Get Google Sheets document content as JSON without Google API oAuth

Use an existing Google Sheets document or create a new one, and once you are done with your content:

example google sheet document

Click the “Share” button in the top right corner:

google sheet share dialog

By default, “link sharing” is not enabled.

In the “Get Link” section, click “Change to anyone with the link”

google sheet share dialog with public link

 Copy the link and click Done to close this pop-up.

Now, as the second step on the Google Sheets side, we have to publish the document to the web. To do that, click the “File > Publish to the web” option from the menu.

google sheet file menu

 And publish your document in the publish pop-up:

google sheet publish to web dialog

We’re done on the Google Sheets side. Now we’ll get the document id from the URL we copied and reformat it for the JSON output.

https://docs.google.com/spreadsheets/d/1yxaigtzh48EV8sJioJIZJz_HovKQ6OJH6BLq1alA4GI/edit?usp=sharing

Get the document id from the link. For the example sheet I created (the link above), the document id is:

1yxaigtzh48EV8sJioJIZJz_HovKQ6OJH6BLq1alA4GI

Now, let’s construct our JSON url using the document id:

https://spreadsheets.google.com/feeds/cells/_YOUR_SHEET_ID_/_SHEET_NUMBER_/public/full?alt=json

As you can see, aside from the document id, you need to define the sheet index. The sheet tab you want to get as a JSON object needs to be entered as a number in the URL template above.

My sample document had just one tab so the tab index will be “1”. The final url for this example will be:

https://spreadsheets.google.com/feeds/cells/1yxaigtzh48EV8sJioJIZJz_HovKQ6OJH6BLq1alA4GI/1/public/full?alt=json

Now you can access the content of the sheet as a flattened object.

{
  "version": "1.0",
  "encoding": "UTF-8",
  "feed": {
    "xmlns": "http://www.w3.org/2005/Atom",
    "xmlns$openSearch": "http://a9.com/-/spec/opensearchrss/1.0/",
    "xmlns$batch": "http://schemas.google.com/gdata/batch",
    "xmlns$gs": "http://schemas.google.com/spreadsheets/2006",
    "id": {
      "$t": "https://spreadsheets.google.com/feeds/cells/1yxaigtzh48EV8sJioJIZJz_HovKQ6OJH6BLq1alA4GI/1/public/full"
    },
    "updated": {
      "$t": "2021-04-28T16:11:58.672Z"
    },
    "category": [
      {
        "scheme": "http://schemas.google.com/spreadsheets/2006",
        "term": "http://schemas.google.com/spreadsheets/2006#cell"
      }
    ],
    "title": {
      "type": "text",
      "$t": "Sheet1"
    },
    "link": [
      {
        "rel": "alternate",
        "type": "application/atom+xml",
        "href": "https://docs.google.com/spreadsheets/d/1yxaigtzh48EV8sJioJIZJz_HovKQ6OJH6BLq1alA4GI/pubhtml"
      },
      {
        "rel": "http://schemas.google.com/g/2005#feed",
        "type": "application/atom+xml",
        "href": "https://spreadsheets.google.com/feeds/cells/1yxaigtzh48EV8sJioJIZJz_HovKQ6OJH6BLq1alA4GI/1/public/full"
      },
      {
        "rel": "http://schemas.google.com/g/2005#post",
        "type": "application/atom+xml",
        "href": "https://spreadsheets.google.com/feeds/cells/1yxaigtzh48EV8sJioJIZJz_HovKQ6OJH6BLq1alA4GI/1/public/full"
      },
      {
        "rel": "http://schemas.google.com/g/2005#batch",
        "type": "application/atom+xml",
        "href": "https://spreadsheets.google.com/feeds/cells/1yxaigtzh48EV8sJioJIZJz_HovKQ6OJH6BLq1alA4GI/1/public/full/batch"
      },
      {
        "rel": "self",
        "type": "application/atom+xml",
        "href": "https://spreadsheets.google.com/feeds/cells/1yxaigtzh48EV8sJioJIZJz_HovKQ6OJH6BLq1alA4GI/1/public/full?alt=json"
      }
    ],
    "author": [
      {
        "name": {
          "$t": "..."
        },
        "email": {
          "$t": "..."
        }
      }
    ],
    "openSearch$totalResults": {
      "$t": "4"
    },
    "openSearch$startIndex": {
      "$t": "1"
    },
    "gs$rowCount": {
      "$t": "1000"
    },
    "gs$colCount": {
      "$t": "26"
    },
    "entry": [
      {
        "id": {
          "$t": "https://spreadsheets.google.com/feeds/cells/1yxaigtzh48EV8sJioJIZJz_HovKQ6OJH6BLq1alA4GI/1/public/full/R1C1"
        },
        "updated": {
          "$t": "2021-04-28T16:11:58.672Z"
        },
        "category": [
          {
            "scheme": "http://schemas.google.com/spreadsheets/2006",
            "term": "http://schemas.google.com/spreadsheets/2006#cell"
          }
        ],
        "title": {
          "type": "text",
          "$t": "A1"
        },
        "content": {
          "type": "text",
          "$t": "A1-Test"
        },
        "link": [
          {
            "rel": "self",
            "type": "application/atom+xml",
            "href": "https://spreadsheets.google.com/feeds/cells/1yxaigtzh48EV8sJioJIZJz_HovKQ6OJH6BLq1alA4GI/1/public/full/R1C1"
          }
        ],
        "gs$cell": {
          "row": "1",
          "col": "1",
          "inputValue": "A1-Test",
          "$t": "A1-Test"
        }
      },
      {
        "id": {
          "$t": "https://spreadsheets.google.com/feeds/cells/1yxaigtzh48EV8sJioJIZJz_HovKQ6OJH6BLq1alA4GI/1/public/full/R1C2"
        },
        "updated": {
          "$t": "2021-04-28T16:11:58.672Z"
        },
        "category": [
          {
            "scheme": "http://schemas.google.com/spreadsheets/2006",
            "term": "http://schemas.google.com/spreadsheets/2006#cell"
          }
        ],
        "title": {
          "type": "text",
          "$t": "B1"
        },
        "content": {
          "type": "text",
          "$t": "B1-Test"
        },
        "link": [
          {
            "rel": "self",
            "type": "application/atom+xml",
            "href": "https://spreadsheets.google.com/feeds/cells/1yxaigtzh48EV8sJioJIZJz_HovKQ6OJH6BLq1alA4GI/1/public/full/R1C2"
          }
        ],
        "gs$cell": {
          "row": "1",
          "col": "2",
          "inputValue": "B1-Test",
          "$t": "B1-Test"
        }
      },
      {
        "id": {
          "$t": "https://spreadsheets.google.com/feeds/cells/1yxaigtzh48EV8sJioJIZJz_HovKQ6OJH6BLq1alA4GI/1/public/full/R2C1"
        },
        "updated": {
          "$t": "2021-04-28T16:11:58.672Z"
        },
        "category": [
          {
            "scheme": "http://schemas.google.com/spreadsheets/2006",
            "term": "http://schemas.google.com/spreadsheets/2006#cell"
          }
        ],
        "title": {
          "type": "text",
          "$t": "A2"
        },
        "content": {
          "type": "text",
          "$t": "A2-Test"
        },
        "link": [
          {
            "rel": "self",
            "type": "application/atom+xml",
            "href": "https://spreadsheets.google.com/feeds/cells/1yxaigtzh48EV8sJioJIZJz_HovKQ6OJH6BLq1alA4GI/1/public/full/R2C1"
          }
        ],
        "gs$cell": {
          "row": "2",
          "col": "1",
          "inputValue": "A2-Test",
          "$t": "A2-Test"
        }
      },
      {
        "id": {
          "$t": "https://spreadsheets.google.com/feeds/cells/1yxaigtzh48EV8sJioJIZJz_HovKQ6OJH6BLq1alA4GI/1/public/full/R2C2"
        },
        "updated": {
          "$t": "2021-04-28T16:11:58.672Z"
        },
        "category": [
          {
            "scheme": "http://schemas.google.com/spreadsheets/2006",
            "term": "http://schemas.google.com/spreadsheets/2006#cell"
          }
        ],
        "title": {
          "type": "text",
          "$t": "B2"
        },
        "content": {
          "type": "text",
          "$t": "B2-Test"
        },
        "link": [
          {
            "rel": "self",
            "type": "application/atom+xml",
            "href": "https://spreadsheets.google.com/feeds/cells/1yxaigtzh48EV8sJioJIZJz_HovKQ6OJH6BLq1alA4GI/1/public/full/R2C2"
          }
        ],
        "gs$cell": {
          "row": "2",
          "col": "2",
          "inputValue": "B2-Test",
          "$t": "B2-Test"
        }
      }
    ]
  }
}

You can construct a matrix in your integration using the following JavaScript path into the large JSON response:

feed.entry[].gs$cell

This sub-object contains the row, column, and text value of the cell. If you have formulas, you can also get the raw entry of the cell in this object.
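
A minimal sketch of turning that feed into a two-dimensional array (using the URL template above; runs in the browser or any environment with fetch):

const url = 'https://spreadsheets.google.com/feeds/cells/_YOUR_SHEET_ID_/1/public/full?alt=json';

fetch(url)
  .then((res) => res.json())
  .then((data) => {
    const matrix = [];
    data.feed.entry.forEach((entry) => {
      const cell = entry.gs$cell;
      const row = Number(cell.row) - 1;
      const col = Number(cell.col) - 1;
      if (!matrix[row]) matrix[row] = [];
      // $t is the rendered text value; inputValue holds the raw entry/formula
      matrix[row][col] = cell.$t;
    });
    console.log(matrix); // e.g. [['A1-Test', 'B1-Test'], ['A2-Test', 'B2-Test']]
  });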

Note: When you hit your constructed JSON URL, if you get the error below, make sure you have published your document publicly.

google sheets not published document error

Monitoring your microservice stack with simple ping health checks using Healthchecks.io for free

When planning, designing, and implementing the infrastructure of a product, the most common pattern we follow from the back-end perspective is to build the back-end in a distributed model using microservices. But there is a cost that comes with this model: monitoring, alerting, and maintenance. Each microservice can live in its own container with its own dependencies. We often build too many moving parts with the microservices approach: even a very simple app can have always-running services, scheduled jobs, and workers distributed across many different infrastructure elements. And the rise of containerized applications and tasks has allowed us to build our products with even more micro-scale code pieces running on serverless infrastructures.

There are many solutions to monitoring and alerting if a service is properly running but it’s often difficult to centralize the monitoring.

There is a very simple yet effective approach I like, and I seed this minimal integration into most of my microservice components: servers, cron jobs, a script doing a cleanup or a backup. It’s very similar to ping checks but a little more simplified and universal. Ping checks generally require the “parts” of your code to be publicly available and/or serve a status over HTTP/TCP/UDP, which may not be possible depending on how your “component” runs.

In this post, I’d like to focus on open-source software that you can easily set up and run as your own instance for free. They also provide a SaaS version of the software that you can get started with for free or at a low cost.


The principle of this service is very simple. Essentially, you create a “check”, and the software expects a ping from your component at regular intervals. You can set what these intervals are, group/organize your checks, and set up alerts in case a check fails to report (ping) within a grace period you can adjust.


The service can integrate with many mainstream ops-support/escalation services like PagerDuty and Opsgenie, or simpler channels like Slack, email, or SMS via Twilio for basic notifications.

Check out the service, their open-source software page, and their documentation here: healthchecks.io

Hosting your own private npm packages with self-hosted npm registry using Verdaccio

At Nomad Interactive, we are relatively new to contributing to npm with our own packages. We initially packaged certain things that are common throughout our projects – like eslint configurations, or some generators we created for ourselves – as private repositories, without thinking too much about versioning or supporting multiple versions of, let’s say, eslint or react.js. Then it became clear that we had to version our packages properly. So we started our research to see how we could easily spin up a private registry of our own and distribute our packages privately to authenticated users.

We found Verdaccio, an open-source and lightweight npm registry. We were able to spin up an instance very easily on our Docker Swarm in minutes and start pushing private packages with our initial versions.

https://verdaccio.org/

If you need a private registry for your team/company, Verdaccio is definitely a great quick and easy solution to get started with.
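
A minimal sketch of getting an instance up and publishing to it (4873 is Verdaccio's default port; the host name and the lack of persistent volumes are simplifications for illustration):

# run the registry
docker run -d --name verdaccio -p 4873:4873 verdaccio/verdaccio

# create a user on your registry, then publish your package to it
npm adduser --registry http://localhost:4873
npm publish --registry http://localhost:4873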

Eventually, though, we switched our mindset from “why private packages anyway?” to “let’s share everything we do with the world.” So we questioned whether we even needed private packages at all, and the answer was no. Nothing we do is a government secret, or any secret at all. So we converted them to public packages and started pushing them to npm directly. The issue is, we definitely don’t have any real open-source software management thought put behind any of those packages, so if we receive a pull request, we’ll most likely neglect it (even though I’ll be very appreciative of it).

One thing I’ve got to say is that a private npm registry occasionally became painful to get working properly in our CI/CD pipelines due to authentication. Verdaccio doesn’t have support for registry tokens, or tokenized access separate from users, so you have to create a user for your registry, and that will probably need to be a git user. And if you are in a licensed environment like gitlab.com or GitHub, you may be forced to pay for a user that only CI/CD will use. Not a big deal, but a side note we had to deal with…

How to use Genymotion for Android testing (simulator/emulator) on macOS or Windows

If you are in mobile development, particularly Android development, you have probably heard of or used Genymotion’s simulator.

Genymotion is a company (and product) that runs Android OS on a virtual machine to simulate the Android runtime with the basic apps the naked OS comes with. By default, it doesn’t have Google services and APIs installed or enabled, but they can be installed on top of the naked Android OS images you can pick and download easily from Genymotion’s repository.

These days, the official Android emulator is not bad at all, but back in the day, it was terribly slow and it was difficult to do certain things. It was also designed solely for Android app developers, who can do things from the command line to supplement the emulator. In those days, Genymotion was a superior simulator and everybody’s first choice for running Android apps while developing, for those reasons.

Aside from Android app development, I extensively used Genymotion to do my mobile web tests in native Chrome for Android and a few other browsers. Today there are many cloud services we can use to test our web work on so many real devices and OS/browser version variations that this use case is no longer valid. But I still like the idea of an easy-to-configure-and-run Android environment when I need one, and for this purpose, I suggest Genymotion as one of the best solutions out there. They have changed their licensing model a lot, so I don’t know what the latest is, but I was (and still am) able to use Genymotion freely for personal use cases (which all of my use cases are – personal projects and such).


For some time, I also used Genymotion experimentally to run a custom-sized tablet with Google services, APIs, and Google Play installed, so I could run the apps I use on mobile platforms that have a nicer user experience than their desktop or browser counterparts. If you don't mind spending a lot of RAM, this can be an interesting way to run mobile apps on your desktop. Navigating mobile gestures with a mouse and trackpad is not super intuitive, but one can get used to it very quickly.

https://www.genymotion.com/

Using Vercel (formerly Zeit/now.sh) for super-fast deployments

I have always loved services that shorten the time from zero code to a live URL for a web application. The large majority of web applications I write – mostly for hobby topics – are very simple, not too complicated or crowded. Being experimental by nature, I create a lot of small apps, most from scratch or from plain boilerplate code I created for myself in the past for these experimental ideas.

Generally, when my code is ready to show to someone, I waste a lot of time on boring steps to prepare the app and make it visible somewhere. Thanks to services like Heroku, this is much less of a headache now. Readers who follow me know I have shown my love for Heroku in multiple articles before 🙂

I want to mention another service that brings me joy by cutting out a lot of that annoying time when I bootstrap an idea and want it live and ready for prime time. Now.sh is a delightful service I discovered when I needed static hosting for a react.js web app I wrote and wanted to put up very quickly. For some apps I don't even want to create a Heroku git repository, or I want an even faster turnaround on my steps to write and publish.

Now.sh lets my web apps go live with a single command. Now.sh was actually the name of the CLI tool for the parent service "Vercel", and the CLI has since been renamed to vercel as well. To install their command line tool, simply install the "vercel" package globally from npm:

npm i -g vercel

First, create your account on vercel, then run

vercel login

to log in to your account. Now you are good to go.

One command deployments for non-git projects – or automatic deployments for every commit

In your project folder, simply run the

vercel

command, which will automatically deploy your code (the files in the current folder) to vercel under an automatically generated subdomain of ".now.sh". If you want, you can attach your application to a custom domain of your own for free: https://vercel.com/docs/custom-domains

Instant API

Vercel/Now.sh also provides an AWS-Lambda-style "serverless" architecture. What I love about this model is that Vercel allows you to write an API endpoint in javascript in a very, very simple way. You just create a javascript module file exporting a function with 2 arguments, req and res, closely mimicking express's req and res objects. So it's familiar, and even simpler than creating an express application and linking it to a router. You simply create an "api" folder and add "hello.js", which gives you the …deployment-url…/api/hello endpoint.

Here is an “echo” endpoint that returns what’s sent to it:

// api/echo.js: respond with whatever the request sent us
module.exports = (req, res) => {
  res.json({
    body: req.body,       // parsed request body
    query: req.query,     // query string parameters
    cookies: req.cookies  // request cookies
  })
}

Save this as echo.js under the api folder in your project and you have yourself an endpoint 🙂
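Once deployed, you can poke at it with any HTTP client. For example, against a hypothetical deployment URL:

curl -X POST "https://my-app.now.sh/api/echo?foo=bar" \
  -H "Content-Type: application/json" \
  -d '{"hello": "world"}'

The response echoes back the parsed body, the query string, and any cookies that were sent.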

As simple as it looks, there are more advanced topics on this in vercel’s documentation: https://vercel.com/docs/serverless-functions/introduction

Github integration

Another great feature I really like is the direct and seamless integration with github repositories. I have experimented with very portable development environments, such as coding on an iPad, in the past. I find zero-configuration, zero-dependency development models and environments very attractive. There was an occasion in recent months when I had to live on my iPad for 10 days and needed a quick way to code up and deploy a web-based application with a few back-end capabilities. Nothing complex. It was very hard and time-consuming to construct a remote development environment and continuously work on ssh-based remote platforms instead of the native platform I was on.

Thankfully, now.sh's github integration took the "deployment" and build steps off of my local environment. I still had to push very frequently to a remote git repository (github) to make sure I wasn't continuously breaking my working app while moving along on the feature I was working on. But it had zero dependency on my local environment: I used my favorite editor, pushed code to github, and the rest was taken care of by now.sh. I really enjoyed its github bot, which very responsively posted updates to commit and PR logs as comments. I was also getting deployment updates in my slack channels. So it was pretty instantaneous to make changes and have them live somewhere I could share with my team. More on their github integration here: https://vercel.com/github

Final note

Vercel (formerly Now.sh or Zeit) is a great service for both bootstrapping your app and making it scale. They are also very transparent and open about their tooling, which makes moving out easy, so there is little fear of "lock-in".

There are also a half dozen other beautiful features that I'm not covering in this article, worth checking out: https://vercel.com/

How Figma changed how we collaborate on our UX and UI designs


At Nomad Interactive, we have done big design toolkit migrations twice in the past. The first was from the old Adobe Photoshop/Illustrator era to Sketch, both eras relying on other tools in our collaborative process, like Invision and Zeplin. About a year ago, we made a similar transition/migration to Figma.

Figma is a browser-based, real-time collaborative design tool. Being vector-based makes it very efficient in total document size. Vector objects are much more descriptive for the elements we create in the artboards, which allows further extensibility via plugins. Also, a browser-based engine makes it friendly to web technologies, such as the javascript-based plugin API that Sketch users are already familiar with.

One tool to rule them all

We used Sketch to create our digital designs for years, plus Invision for presentation purposes. We had a particular process: export our designs, place them in Dropbox, then upload them to Invision with the same or similar project list and configuration. For designer-to-developer handover, we started to use the beautiful startup Zeplin. But Invision knocked them off pretty quickly, so we put some of our focus into adopting Invision's "Inspect" feature. Not long after, we moved to Figma, which replaced all three of these tools without any hesitation or question in our minds. When we gave Figma a try on a single project, it became clear very quickly that we didn't have to jump between tools for different purposes. Then we switched over.

Collaborate – Seriously

What makes Figma most different from other design tools is its real-time collaboration features: being able to see all viewers' and editors' cursors and watch design changes happen in real time. It's a paradigm shift similar to the one between a static-file-focused "Word" and its online, real-time collaborative alternatives, google docs or quip. It makes the "creation" process much more like a whiteboard session if utilized well. We started doing collaborative design sessions on the same project with multiple designers, product managers/owners, and project managers. Not everybody designs, but they can actively collaborate on the design process, giving direction to the designers on the large artboards.

Not all good

This process change made the low-fi UX thinking process much more visible, and it lets non-designers be more active participants in the earlier parts of the design process. It also results in wireframes being done very close to the actual designs (we're mostly talking about digital product designs – like mobile application UIs or e-commerce sites). The danger is mixing these two phases of the design process, which are generally better kept separate for the sake of putting the mind on the right concerns at the right time.

We generally dedicate the wireframing period to baking the digital product's functionality-focused discussions and iterations, and the UI/creative design period to look and feel, colors, typography, animations, and creating emotion, after we know how the product is meant to work. Figma's collaborative design features bring these two worlds closer together. The danger is mixing them up, so that all of a sudden you're hearing about button color instead of what the button should say or do when interacted with.

A weird need: designing on mobile platforms (namely iPad Pro)

I have a weird need to make a super-portable device like the iPad my go-to device to carry around (I already do my "thinking"-oriented tasks on the iPad – like writing this article). But I have a burning desire to see the iPad handle more complex tasks, like writing code (not just writing, but compiling, or having the runtimes for scripting languages – not there yet). Or doing more complex design work – at least at the Sketch/Figma level. I'm not asking to be able to do render-heavy design tasks like Photoshop does; that's not what I need or do 99% of the time.

I gotta say, Figma is the closest in that game, if this is a practical or real future need. We know the Sketch developers have said they are not going to port their macOS app to a mobile platform. Adobe is taking a different (probably nicer, native) path, but a long one, to get their suite of applications onto mobile platforms – and we're already done with Adobe products anyway. Figma, on the other hand, practically runs without any issue in mobile Safari, but with a huge lack of touch and mobile interaction support. There are some attempts to make it better (e.g., the Figurative app), but there is still some road ahead before Figma is fully iPadOS compatible. I'm sure the Figma team is already working on this, and I hope that day comes sooner.

http://figma.com/

Using Airtable through its API programmatically, as an (almost) remote database

I have recently talked a lot about the importance of collaborative, smarter documentation that improves your personal and professional workflow. Airtable stands apart from its competitors in an interesting use case I stumbled into: in one of my hobby projects, I suddenly found myself using Airtable as a remote database tool.

Airtable is a very nice, mobile-friendly document management tool built around "spreadsheet"-style bases. You can create your data structure in any table model, and you can create different views of your data (calendar view, filtered table view, kanban view…).


What makes Airtable special for me is its API. It is very easy to get started with, because you get documentation specific to your own data after you log in; the dynamic documentation shows your actual data right there in the API examples.


The Airtable API essentially lets Airtable be used as a remote database for small or personal projects. Managing your data stays in Airtable's clean and nice web or mobile interface, while you consume your data in any format you like on any platform.

If you only need read access, implementing the Airtable API can be a matter of minutes, since the documentation gives you access to your data very quickly. You only need to convert the curl request into your favorite platform's HTTP request. If you need a javascript version, it also produces NodeJS example code that you can drop in and start using with your data.
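As a rough sketch of what a read looks like in NodeJS (the base ID, table name, and environment variable are placeholders; the endpoint shape follows Airtable's REST API):

const fetch = require('node-fetch')

const AIRTABLE_API_KEY = process.env.AIRTABLE_API_KEY  // from your Airtable account page
const BASE_ID = 'appXXXXXXXXXXXXXX'                     // placeholder base id
const TABLE = 'Tasks'                                   // placeholder table name

// List the first 10 records of the table
async function listRecords() {
  const res = await fetch(`https://api.airtable.com/v0/${BASE_ID}/${TABLE}?maxRecords=10`, {
    headers: { Authorization: `Bearer ${AIRTABLE_API_KEY}` }
  })
  const data = await res.json()
  return data.records  // each record comes back as { id, fields, createdTime }
}

listRecords().then(records => console.log(records))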


Write access is not very different from read-only either. Your data model will be well documented in the dynamic API documentation for your table; you only need to start constructing your API requests and make the call…
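Creating a record is just a POST to the same endpoint with the fields you want (this sketch reuses the constants from the read example above; the field names are placeholders for whatever your table actually defines):

// Create one record with the given fields
async function createRecord(fields) {
  const res = await fetch(`https://api.airtable.com/v0/${BASE_ID}/${TABLE}`, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${AIRTABLE_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({ records: [{ fields }] })
  })
  return res.json()
}

createRecord({ Name: 'Write the blog post', Status: 'Todo' })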

If you haven't created an Airtable account and played with it, definitely do so: https://airtable.com/ and check out their auto-generated documentation here: https://airtable.com/api (after you log in to your account).

Why every developer needs to know google sheets & excel programming

I've recently talked about different cloud documentation services in Smart(Er) Documents – Quip, Notion, Airtable, Coda Or Good Old GDocs & GSheets, and about my take on Smarting up Google Docs.

Let's dive right into a few key reasons why every developer should know Google Apps Script and get familiar with working with GAS in google sheets and docs.

1) Provide your technical output (data) on common ground (a tool that pretty much any computer user knows, or can intuitively learn if they don't)

I strongly believe in any tool that allows developers and non-dev roles in a team to effectively communicate complicated information, generally data sets that will eventually be used for a form of storytelling (website/blog content, an internal or external product performance report, financial or behavioral analysis, etc.). Among developers only, we always find 100 different ways to express what we want to show to the world, in the form of scripts utilizing whatever libraries and tools come to hand. But when it comes to handing over our output (whether it's a SQL result in CSV format, or a dynamic data set in a service), we as developers are constrained, and so is the party receiving our work, who start their own work with a lot of constraints. They also have to learn whatever data format we give them. This makes the collaboration one-directional, starting from the developer and ending up with the non-dev role working with that data (marketing people, product management, or executive roles).

The real trouble starts when we have to repeat the same work over and over. We can export the desired data from whatever tool we're using, generally as static CSV/Excel exports if it's tabular data, but if you are doing the same or similar work multiple times, it's worth thinking about ways to automate the process. Consider simple family expense management in excel/google sheets (because everybody knows, or can easily learn, how to work with sheet tools). In this hypothetical scenario, let's say there is a database or API that provides your credit card statements (there are actually services for extracting this information from your bank – but for security reasons, the authentication layer gets very complicated). As the developer, you can start the story from "extracting the data", and your output will almost always store the extracted data somewhere – most likely a database.

You may or may not be the "analyzing" person in the family, and even if you are, you may need to review your analysis with the rest of the family and get their own take on the dataset for a truly collaborative understanding and iteration.

Google Apps Script has many ways to connect to 3rd party APIs to pull data and, if done well, to run these operations either automatically (refreshing the data set) or manually through UI interactions, so that the developer's part can essentially be automated away inside a "smart google sheet" or doc.
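As a small sketch of that idea (the API URL, sheet name, and field names are hypothetical), a function like this can be attached to a time-driven trigger or a custom menu item in Apps Script:

// Pulls transactions from a (hypothetical) API and appends them to the "Expenses" sheet
function refreshTransactions() {
  var response = UrlFetchApp.fetch('https://api.example.com/statements?month=latest');
  var transactions = JSON.parse(response.getContentText());

  var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Expenses');
  transactions.forEach(function (tx) {
    sheet.appendRow([tx.date, tx.merchant, tx.amount, tx.category]);
  });
}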

Once this is set up right, it can be copied and extended easily. It goes back to the traditional "macro-enabled excel template" approach, but it really works out well for everybody. And google docs products are true real-time collaborative tools: you can literally open your computers on a conference call, collaboratively edit and work on the same information, and bring it to an understandable level with simple aggregate analysis, breakdowns, or even charts, without having to reinvent the wheel and code all of this up from scratch.

2) Mock up / bootstrap a lot of ideas in simpler form without needing to code a lot – perhaps even for long-term use cases

As developers, more than half of what we do is automation. We code so we don't repeat ourselves. It's a basic human desire to invest our brains in avoiding time-consuming, less intellectual work; our brains are wired that way from the time we are born, always seeking shortcuts. Developers are the hackers who get these shortcuts discovered and implemented in the digital realm.

Whether you are working alone or with a team, there will always be tasks that take multiple steps to produce an output. To avoid repeating yourself, you will want to create tools for yourself that help you achieve the same output with fewer steps. We always reach for the buzzword "one-click" to describe that magical, robotic process when we talk about an "easy" tool. Whether the steps become one or several, there is always room for improvement in the work we do every day.

Developers tend to jump into their own comfortable environment to create scripts that do things for themselves. I don't know any developer who doesn't have scripts that help resize images or orchestrate their common tasks, so that whenever we need to do the same operation, we just push the "one-click" button and see the output. Most of the time we invest a lot of time building these scripts, even when we are super proficient in the languages and environments we know. And we always have trouble sharing this work with others on the team, or have to invest more time converting these scripts so they can be used by teammates or publicly by anyone on the internet.

Google docs products give us a great canvas along with a lot of limitations, and those limits positively shape what we build: whatever we create in google docs, sheets, or slides has to stay in a familiar format that can be used by other developers or non-dev team members.

To be fair, google apps script has real limitations, generally when you are doing resource-heavy tasks on the cloud servers or in the browser. But it is also super easy to start in a very basic form, so you don't have to worry about a lot of cosmetics or look and feel.

3) Spreadsheet tools are essentially databases

Every developer has to know relational database modeling and how to work with data of any size and structure. At the core of relational databases, you work with tables and the relationships between them.

Google Sheets and Excel are great tools that developers and non-developers can use extensively together, because a spreadsheet is both a visual tool and structured data by nature.

The reason both tools have programming interfaces is that it's one of their core use cases. I also think the companies behind them were pushed to add these programming features because they are probably among the most requested.

All in all, using a spreadsheet file as a database – reading structured information from it, injecting new rows, querying and updating rows – is one of the most common use cases from the programming perspective. So whether you like it or not, you will eventually run into work where the "users" of your product already are, or will be, using spreadsheet files alongside your product. You will at least have to learn how to process these files (import, ingest) and be able to expose your product's data in these formats (export features).
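Outside of Apps Script, the same "spreadsheet as a database" idea works from your own backend too. Here is a minimal sketch using the googleapis npm package, assuming a service account credentials file and a hypothetical spreadsheet ID and sheet layout:

const { google } = require('googleapis')

async function main() {
  // Authenticate with a service account that has been given access to the spreadsheet
  const auth = new google.auth.GoogleAuth({
    keyFile: 'service-account.json',  // hypothetical credentials file
    scopes: ['https://www.googleapis.com/auth/spreadsheets']
  })
  const sheets = google.sheets({ version: 'v4', auth })
  const spreadsheetId = 'your-spreadsheet-id'  // hypothetical id

  // Read rows, roughly like a SELECT
  const { data } = await sheets.spreadsheets.values.get({
    spreadsheetId,
    range: 'Expenses!A2:D'
  })
  console.log(data.values)

  // Append a row, roughly like an INSERT
  await sheets.spreadsheets.values.append({
    spreadsheetId,
    range: 'Expenses!A:D',
    valueInputOption: 'USER_ENTERED',
    requestBody: { values: [['2021-01-15', 'Grocery store', 42.5, 'Food']] }
  })
}

main().catch(console.error)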

The best way to see how common this need is, is to look at the most popular automation tools on the internet, like IFTTT and Zapier: one of the first integrations they built was google sheets.

I personally have many google sheets files that are fed by IFTTT or by workers I run on my servers, exporting activities daily/weekly/monthly or on a trigger. Anytime I want to see what's going on and analyze my activities (personal ones like my driving history, leaving/entering the NYC area, or my credit card expenses), I can easily build a pivot table to see the trend, or group the data by categories and other angles to ask the same data different questions. I can't even begin to describe how many variations of this scenario exist for business purposes. Analytics alone covers a long list of things you (or a non-dev team member) can and will want to do with data sets like these.

Wrapping Up

The last reason in the list is probably the strongest from a professional skill perspective: you kinda have to know your way around both static excel/csv files and the google APIs for google sheets, from the authentication steps to the actual endpoints/methods in the google sheets api for working with spreadsheets.

But #1 and #2 are more important if you are (or want to be) a resourceful problem solver. Every developer is, at the end of the day, a non-stop problem solver, and having Google Sheets in the toolbox is a must.

Just launched my newest product on Product Hunt: Sheet2Cal – Organize calendar events right from your Google Sheets doc.

Sheet2Cal helps you sync your event data from Google Sheets to your favorite calendar app. Plan and collaborate on your events (social media/editorial content, wedding & travel planning) freely in Google Sheets and export them easily to a calendar subscription using Sheet2Cal.

Sheet2Cal creates a complete calendar event using the information in whichever Google Sheets doc you want. It helps you schedule and update your calendar events (such as social media, wedding & travel planning) right from your Google Sheets doc.

Back Story

At Nomad Interactive, we love to create tools that make our lives easier, and we share them with the world if we think they can make the same impact for others! Sheet2Cal was born out of such a need.

When uploading a shot to Dribbble, we were doing our content planning, such as title and description, in a Google Sheets document for team review and collaboration. In addition, we were also creating a calendar event with similar information on our shared calendars, as a reminder of when to publish each post.

During the review process, every change we made to the post document had to be repeated in the relevant calendar entry. To eliminate this unnecessary step, we developed Sheet2Cal and started doing our entire calendar organization from our main Google Sheets document, from creating an event to updating it. Then we realized this would work for many different subjects and decided to share it with everyone for free!

Now you can use Sheet2Cal as you wish for similar needs of your own! If you run into any problems or have anything to ask, we as a team would love to help!

To show your support, you can visit the Product Hunt page and give us a 💬 shout or an 👍 upvote!

You can also visit the project page here: https://sheet2cal.com/