Wordfence is an all-in-one security plugin that comes with a lot of security features and can be configured pretty quickly without much technical or web security knowledge. Its recommended configuration is almost always safe to activate, and it activates in a few clicks.
Wordfence also gives you a high-level overview of your site’s activity in its dashboard:
Things you should do with Wordfence
Enable Multi-factor Authentication (MFA) for your WordPress admin logins
One of the best things you can do for a WordPress site is to secure its admin access, and the best way to do that is to enable two-factor (multi-factor) authentication. You can use an authenticator app for this, but the setting has to be enabled for every WordPress admin user, so make sure all admin users have 2FA enabled.
WordPress doesn’t come with 2FA/MFA capability out of the box. Wordfence is one of the easiest ways to add 2FA/MFA to your WordPress logins.
Enable rate limiting and automatic blocking
Wordfence enables rate limiting in its firewall settings by default. This also allows Wordfence to block repeated failed login attempts, which are often an attacker trying to gain access to the WordPress admin.
Block countries you don’t have any users from
This applies especially to countries known for high scammer/spammer activity.
Keep your WordPress plugins up to date.
Wordfence will warn you if any outdated plugins are dangerous or have known vulnerabilities.
Wordfence email alerts are enabled by default; keep it that way
Make sure you at least get a weekly digest to stay on top of your Wordfence activity and alerts.
If you have multiple WordPress sites, use Wordfence Central to manage the Wordfence installations across sites
If you are running WordPress to host anything serious, you have probably been asked about, discussed, or wanted a backup solution. Serious site or hobby project, you put time and effort into building it and will keep creating content. In this day and age, it’s unthinkable not to have a backup of your site.
There are much more complex backup solutions out there, but most likely you are using a somewhat managed hosting solution, and most hosting companies have their own out-of-the-box backup offerings. You can explore those options. But if you want a cost-free, low-cost, or more flexible option, keep reading 🙂
I’m going to dive into my recommendation, the Updraft plugin, first, then talk about a few key points I pay attention to and how Updraft handles them.
Updraft makes it easy
Updraft is a full backup solution for WordPress sites. It has a lot of controls: you can configure backups on different schedules, choose what to back up and where to store it, decide how to notify admins, and more.
Aside from these generic settings, there are three larger key topics I want to talk about; they are what matter most to me when I evaluate backup solutions.
What and when to back up?
The most common thing to think about when designing a backup strategy is what you want to back up and with what frequency. Updraft comes with a two-tier schedule we really like. We set one tier to take a full backup every week and retain the last two copies; these “full backups” are basically our full restore points.
Then we set the second-tier schedule to run daily and back up the database and the /uploads folder. On sites where few new uploads happen, or where we have a huge historical media library, we skip the uploads, take database-only backups daily, and retain them for 14 days. This means we can restore a full backup from the beginning of the week (which includes plugins and everything else) and then cherry-pick a specific day from the last 14 if we want to roll back to a particular date.
Back up, restore, and migrate easily
The reason you want backups is, at the end of the day, to be able to restore easily if you lose your server. Stuff happens, right?
When that happens, it matters how easily and quickly you can restore your backup to a fully functional state. For a WordPress site, that means everything: your posts, your assets (media library), your installed plugins, and their configurations as well. We’re basically looking for a full-site restore.
One of the reasons I really like Updraft is that it handles a full backup restore in a few clicks.
As part of the restoration process, Updraft doesn’t care whether you are restoring the backup files to the exact same domain or a new one. This actually makes Updraft a great migration or cloning tool. My team has used Updraft to clone production sites into staging or test copies easily. There are more specialized solutions for this, but Updraft already handles these needs, so one plugin does many things for us.
When migrating to a new domain, Updraft detects that the target WordPress URL differs from the source the backup was taken from, and asks whether you want to update the URL references (there are tons of them in the WordPress database) to the new URL. This is usually another pain point when moving sites between domains.
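If you ever need to do that URL rewriting step by hand, WP-CLI’s search-replace command handles it, including serialized values in the database. A hedged sketch (the domains are examples, and the guard simply skips the command where WP-CLI isn’t installed):

```shell
# Example source and target URLs -- replace with your real domains.
OLD_URL="https://staging.example.com"
NEW_URL="https://www.example.com"

# --dry-run reports what would change without writing anything;
# --skip-columns=guid leaves WordPress GUIDs untouched, as recommended.
if command -v wp >/dev/null 2>&1; then
  wp search-replace "$OLD_URL" "$NEW_URL" --skip-columns=guid --dry-run
fi
```

Drop the --dry-run flag once the reported changes look right.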
A usual workflow is to work on the dev or staging version of a website and then, when ready, clone it to production. Updraft makes this process really easy for us.
On the server – off-site?
Another key topic is where the backups are stored. By default, any backup solution creates backup files on the same server. Keeping backups on the same server is convenient: less configuration, faster transfers. But that’s not why we’re taking backups in the first place. The worst-case scenario we’re preparing for is losing access to our server altogether. That’s why it’s very important for a backup strategy to keep the backup storage off-site (out of the server), or even in a completely separate region, or further still, with a different cloud provider. What if a whole AWS region goes out? (Unlikely, but it happens.)
Updraft supports many cloud storage providers and protocols. I have used AWS S3, Google Drive, Dropbox, and (S)FTP targets with Updraft Plus, and their configurations are really easy.
Beware: most of these integrations require the paid “Plus” version, an annual subscription that you pay to keep receiving plugin updates. In my opinion, it’s very affordable given the amount of manual work it removes from the engineer’s hands. We happily pay for an agency license and use it on all of our WordPress sites.
Bonus: Automatic backups before updates
For a regular, not especially tech-savvy user, there is an Updraft feature I really like: it shows a pop-up every time a WordPress admin tries to perform an update in the WordPress dashboard.
The pop-up basically asks whether the user wants to take a backup before proceeding with the updates. One of the most common ways a WordPress site breaks is a plugin update: plugin authors may not be careful with backward compatibility, or they may miss a buggy edge case in the way you use their plugin. When the update happens, your site’s behavior may be impacted, or pages may even stop rendering, which is more serious. Updraft’s pre-update backup gives you a reference point to restore from in case it’s needed.
Of course, this option is configurable and Updraft makes it prominent for WordPress admin to change this behavior:
Disclaimer: This post contains affiliate links that help support this site. Even though it may look like a promoted post, I genuinely love, use, and recommend Updraft’s free and paid plugins/services. I would have published the exact same post even if I weren’t using affiliate links.
In this article, we will talk about ways to analyze and understand what goes into your bundle, and how to be more aware, when picking libraries, of their effect on the final JS bundle size. You will:
Realize what’s really inside your bundle
Find out which modules make up most of its size
Find modules that got there by mistake
Consider alternatives to optimize your bundle size
There are a few popular tools we can use with minimal effort to analyze our bundle, visually.
The bundle analyzer starts as a web application on its default port. Visit http://127.0.0.1:8888 to open the bundle analyzer web UI. There you can analyze your bundle and the packages/dependencies you use, determine the costly dependencies, and think about strategies to optimize them or find lightweight alternatives:
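For a webpack project, a typical one-off run looks like the commands below. The exact flags can vary by webpack version, so treat this as a sketch:

```shell
# Install the analyzer as a dev dependency
npm install --save-dev webpack-bundle-analyzer

# Emit a stats file from your existing webpack build...
npx webpack --profile --json > stats.json

# ...then serve the interactive treemap on http://127.0.0.1:8888
npx webpack-bundle-analyzer stats.json
```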
Webpack Visualizer runs as a webpack plugin that generates a stats.html file containing an interactive visualization of your bundle contents. This method is useful if you want to include the creation of stats.html in your CI/CD, which is helpful for teams that want to analyze every commit continuously and keep the generated stats.html among the build artifacts of their CI/CD solution.
Aside from analyzing your bundle after the fact, you can also be proactively conscious of every dependency before adding it to your bundle. Bundlephobia lets you search packages and know what you are getting into: the cost a package adds to your application. Bundlephobia also highlights the user/browser impact of the package, like download and render timings, and shows the historical change in cost between versions of the package you are analyzing.
There are different ways to monitor your bundle. One of the simplest is to make sure the changes you just made didn’t bloat it. Bundlesize is a simple tool, easily added as a step in your CI pipeline, that keeps your bundle size in check. Bundlesize fails if you exceed the maximum bundle size set for your project and also highlights the delta from the primary branch of your code. This helps developers see each commit’s impact on the bundle size.
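As a sketch of how that fits together (the file path and the 100 kB budget below are assumptions to tune for your project):

```shell
# Install bundlesize as a dev dependency
npm install --save-dev bundlesize

# Then declare a budget in package.json and wire it into CI:
#
#   "bundlesize": [
#     { "path": "./dist/app.*.js", "maxSize": "100 kB" }
#   ],
#   "scripts": { "ci:size": "bundlesize" }
#
# A CI step running `npm run ci:size` exits non-zero when over budget.
```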
Despite its pain points for developers, it is hard to ignore WordPress’s popularity and the flexibility with which it helps a lot of people build entry- and mid-level websites. WordPress is a very powerful website builder, but it has an extremely fragmented plugin marketplace, which leaves a lot of site owners with websites powered by a mashup of plugins with different coding styles, implementations, and opinions from different developers. When not done carefully, this can result in low-performing websites, so optimizing WordPress-powered websites is a real effort. But regardless of the plugins used in the back end, there is a very common optimization step that applies to the vast majority of WordPress sites.
Almost all WordPress sites serve static content at the end of the day, and most of them have neither proper caching nor a CDN set up in front of them. Combined, these two can make a night-and-day difference in site performance. With recent Google updates, this now directly affects your site’s ranking in search engines as well as its social visibility (e.g., Facebook preferring higher-performing sites in its news feed algorithm). Let’s expand on these two methods and what they mean.
Caching strategy for your website
Caching is one of the key elements of performant software. There are many ways to apply caching, but essentially, caching means having your servers remember calculations they have done before. A software function may need some expensive calculation to do its job; with the right caching strategy, that calculation can be done once, stored, and recalled at a much faster speed.
Caching is especially important for websites that receive a lot of traffic. WordPress is essentially your website’s rendering engine, and once a page is rendered, there are ways to eliminate the need to keep re-rendering the same page for every visitor. There are different ways to cache things in WordPress, but at the end of the day, we only care that the rendered page is cached completely and, if possible, that the server serves the pre-rendered page without rendering it again.
Get blazing fast with WP-Rocket
We have experimented with many WordPress optimization plugins. One plugin stands out far ahead of the competition and is worth featuring in this article: WP-Rocket. WP-Rocket comes with a simple configuration that combines, compresses, and optimizes your WordPress pages and provides caching that increases your page load speed. The change will be very visible, since WP-Rocket converts your WordPress page output into static, minified files that are served to your users’ browsers very quickly. Without proper caching, your WordPress pages render on every visit, and the traffic you can handle depends directly on your server’s power. WP-Rocket takes the strain off your server with intelligent caching, so the majority of your traffic incurs only static file serving, which happens very quickly. Another benefit of WP-Rocket is that it optimizes your responses with better cache headers, which can be combined with services like Cloudflare to accelerate cached traffic, in most cases served not from your server but from servers at the edge network provided by services like Cloudflare.
You don’t need the paid version of WP-Rocket; the free version comes with a lot of features that make a visible difference in just a few clicks. Similarly, Cloudflare’s core service is free and a great start for a brand-new WordPress site.
For further optimization, you can set up and configure a CDN on top of WP-Rocket, alongside or instead of Cloudflare, for additional performance benefits.
Content Delivery Networks
Rocket CDN (by WP-Rocket)
If you go down that road, WP-Rocket comes with its own CDN offering, flat-rate priced with unlimited bandwidth and edge storage. Since it integrates directly with WP-Rocket, it is the easiest option to enable.
Cloudflare comes with its own WordPress content delivery option, even though it already does DNS proxy-based caching to serve your traffic with optimized network caching if you are using Cloudflare’s primary service offering. Beyond that, Cloudflare offers an affordable, flat-rate service that works pretty much out of the box, with zero configuration, for WordPress sites. All you need to do is install their WordPress plugin, log in with your Cloudflare account, and subscribe to the service.
If you want more control, there are many traditional CDN solutions. I suggest bunny.net, a simple and very affordable option we have used in the past. Bunny has one of the most cost-competitive offerings with great performance.
They offer an official WordPress plugin for easy integration documented here.
Disclaimer: The WP-Rocket links above are affiliate links that help support this blog. Regardless, WP-Rocket is a great plugin and service that I used many times before paying for their premium plugin and services. I’d write about them even if they didn’t have an affiliate program.
This command syncs my backups folder’s contents to a folder in my Google Drive.
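The command itself looks roughly like this (the “gdrive” remote name and folder paths are assumptions; set up your own remote first with `rclone config`):

```shell
# Mirror the local backups folder into a Google Drive folder.
# "gdrive" is whatever name you gave the remote during `rclone config`.
rclone sync ~/backups gdrive:wordpress-backups
```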
The way you take your backups is up to you. You could even sync your application folders directly, like the Apache httpdocs folder, but that’s too many files that may change too frequently. Instead, you can tar and gzip your folders and take database dumps before running rclone as your backup solution.
I have the following simple backup script on my server: it takes a snapshot of my WordPress site daily, then rclone syncs it up to my Google Drive.
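A minimal sketch of what such a daily snapshot script can look like is below. The paths, the database name, and the “gdrive” remote are assumptions to adjust for your own server; the mysqldump and rclone steps are guarded so the script degrades gracefully where those tools aren’t present:

```shell
#!/bin/sh
# Daily WordPress snapshot: archive the site, dump the DB, sync off-site.
SITE_DIR="${SITE_DIR:-$HOME/public_html}"
BACKUP_DIR="${BACKUP_DIR:-$HOME/backups}"
STAMP="$(date +%Y-%m-%d)"

mkdir -p "$BACKUP_DIR" "$SITE_DIR"

# Archive the WordPress directory (themes, plugins, uploads, wp-config.php)
tar -czf "$BACKUP_DIR/site-$STAMP.tar.gz" \
  -C "$(dirname "$SITE_DIR")" "$(basename "$SITE_DIR")"

# Dump the database too, if mysqldump is available
# (credentials are expected in ~/.my.cnf so they stay out of the script)
if command -v mysqldump >/dev/null 2>&1; then
  mysqldump wordpress | gzip > "$BACKUP_DIR/db-$STAMP.sql.gz"
fi

# Drop snapshots older than 14 days so the folder doesn't grow forever
find "$BACKUP_DIR" -type f -mtime +14 -delete

# Finally, sync the whole backups folder to the "gdrive" remote
if command -v rclone >/dev/null 2>&1; then
  rclone sync "$BACKUP_DIR" gdrive:wordpress-backups
fi
```

Run it from cron (e.g., `0 3 * * *`) to get the daily cadence.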
Web accessibility is one of the key and often missed parts of web development. If you are building a website for a larger, general audience, you have to make sure your pages comply with web accessibility standards (the best known being WCAG).
Making sure your website is accessible is no small task. There are obvious steps you can take, but the real accessibility issues are not easy to understand and pinpoint until a real user with a disability visits your site using a screen reader or other assistive tools. To do it right, most companies work with audit firms experienced in testing sites with variations of these tools to cover as many real accessibility scenarios as they can. But getting your site audited for accessibility is also going to cost you.
Web accessibility compliance is also a mandate for websites serving certain industries (like government, insurance, banking…), where you are obliged to keep your site accessible at all times. For most consumer sites it is not a requirement, but it is work that makes your site and brand more inclusive of all users.
I want to talk about a browser extension (and a React library) that helps you detect the obvious, programmatically detectable issues, which you can address quickly to cover the majority of basic accessibility problems on your pages as a quick win.
Axe is a product with both professional solutions and free browser extensions (for Chrome, Firefox, and MS Edge) that are very easy to install and activate, letting you start seeing your page’s accessibility compliance and issues right away.
Visit https://www.deque.com/axe/ to learn more and install the browser extension. The extension is pretty straightforward to use: it runs on the page you have open and shows the issues, an explanation of each, and how to solve it to make your page more accessible.
There is also a React npm package you can activate in your development environment that audits the final rendered DOM tree, similar to the Chrome extension.
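A sketch of how that package is typically wired in. I’m assuming the @axe-core/react package here (the successor of the older react-axe); confirm the name and API against the current docs:

```shell
# Install the React integration as a dev dependency:
npm install --save-dev @axe-core/react

# Then, in your app's entry file, enable it for development builds only,
# with JavaScript along these lines:
#
#   if (process.env.NODE_ENV !== 'production') {
#     const axe = require('@axe-core/react');
#     axe(React, ReactDOM, 1000); // re-audit 1000 ms after DOM changes
#   }
```

Findings are then printed to the browser devtools console as you navigate your app.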
When bootstrapping a new product, regardless of the platforms and solutions used in the back end and front end, the time comes very quickly when you need to integrate with third-party platforms to create continuity of the product’s user experience across different solutions.
As a good example, let’s say you bootstrapped a small SaaS product that helps users calculate their taxes. Your product is not only the software solution you created but the whole experience, from customer support to documentation and educational materials, perhaps with some marketing experience when acquiring and onboarding your new users. So right off the bat, we will need a customer support solution and a marketing tool, and perhaps a CRM-ish tool to use as our “customer master” database, channeling everything there as much as we can.
But when someone signs up, your back end only creates their user account; their customer support record, CRM record, and marketing tracking are not connected. Most likely, these will be separate services like Intercom, Zendesk, Mailchimp, etc., alongside your own back end/database, where your user’s initial record is created and their engagement with your core product happens.
I have planned and done these integrations many times over in different products and worked with many third-party services. Some niche solutions I had to integrate don’t have proper APIs or capabilities. Setting those exceptions aside, most tools have integrations with well-known platforms like Salesforce, Facebook Ads, IFTTT, and Slack. And, as a common and growing theme, most tools also integrate with Zapier, which is the main event I want to get to.
Eventually, I found myself evaluating Zapier integrations between these platforms to cover most of the use cases we often spent days building one by one. Where the available triggers and actions cover what we are trying to do, I have started suggesting that my clients and the rest of my team create Zapier-focused integrations.
There is an easier way. A big majority of people working in the process/product/team management space use spreadsheets on a daily basis; either Excel or Google Sheets covers the big majority of use cases. I evangelize Google Sheets because of its real-time collaboration and ease-of-access capabilities. It’s free, and a large majority of people have Google accounts, making it very universal.
I have done direct Google Sheets integrations many times in the past. But recently I have come to like the concept of using a Google Sheet as a source that other services can commonly consume for integration purposes. Since it’s a living document, it’s very easy to make changes to it, or to listen for changes happening to it (made by humans or APIs). This makes it an amazing candidate for use with Zapier as a “source” of data. Zapier becomes the magic glue here, a universal adapter to anything else we want to connect to. With thousands of services available in Zapier, it is a meeting ground for moving the data we provide through a Google Sheet to anywhere else.
I should say this will be limited by each service’s capabilities and the actions/triggers available on the Zapier platform. But most SaaS solutions invest enough effort and time to make their Zapier integrations rich enough to serve the most common use cases. It won’t cover 100% of needs, but it will certainly eliminate a lot of basic integrations like Slack, email notifications, and marketing tool triggers (e.g., follow-up campaigns).
This is not a code-less solution
When going down this route, the biggest work and challenge is integrating with the Google Sheets API: connecting your account (through the OAuth process), storing your credentials on your server, and creating the server → gsheet integration that sends your back-end changes to a Google Sheets document. It’s not the easiest API to integrate with, but it’s well documented, very mature, and has endless examples in the community (GitHub). Best of all, this one integration opens up so many others without further work. Even in the most basic products, we find ourselves doing Slack and email deliveries in MVP versions; investing the same effort in Google Sheets will easily justify itself later.
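As a hedged sketch of the server → sheet direction, here is what appending a row looks like with plain curl against the Sheets API v4 append endpoint. The spreadsheet ID, range, and row values are placeholders, and the OAuth token is assumed to come from whatever OAuth flow you implemented:

```shell
# Row data to append -- the columns are whatever your sheet expects.
SHEET_ID="your-spreadsheet-id"
RANGE="Sheet1!A1"
BODY='{"values": [["2024-01-15", "newuser@example.com", "signed_up"]]}'

# Only fire the request when an access token is available.
if [ -n "$GOOGLE_OAUTH_TOKEN" ]; then
  curl -s -X POST \
    "https://sheets.googleapis.com/v4/spreadsheets/$SHEET_ID/values/$RANGE:append?valueInputOption=USER_ENTERED" \
    -H "Authorization: Bearer $GOOGLE_OAUTH_TOKEN" \
    -H "Content-Type: application/json" \
    -d "$BODY"
fi
```

Each appended row then becomes a “New Spreadsheet Row” event that Zapier can pick up.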
One big trade-off is that your users’ PII gets transported to and stored in a Google Sheet (which will be private) and then sent to Zapier. If you are super privacy-sensitive or have to comply with certain privacy regulations, managing this traffic may need to be done more carefully, or the approach may be completely unfeasible for your product. But the majority of products I have built do not need that rigorous an audit and compliance process, so this solution has worked for me many times.
I want to show a sample integration that sets up a Google Sheet as a trigger and a Slack notification as an action. Hopefully this sparks some imagination and helps you understand where this can go.
Set up Google Sheet changes as “trigger”
Create a new zap, or edit an existing one, to change the “trigger” service, and select Google Sheets. In the first step, you will be asked to select the Google account linked to your Zapier account. If you haven’t connected one yet, or want to connect a different account than the one you currently have, you can do that in this step.
After selecting the account, Zapier will ask you what event you want this zap to listen for. Generally, we inject a new row into a sheet in one of our documents, so we select “New Spreadsheet Row” as the event, but as you can see, you can select other events, like an updated spreadsheet row or a new worksheet created in a document.
Now you will need to select which document and which worksheet to listen to. Zapier shows document and sheet selection dropdowns here.
As the final step, you will be able to (and pretty much have to) test your trigger, which pulls a sample row from your sheet. Make sure you enter values into your columns so this sample data can be used when setting up further actions; Zapier will show these sample values when you create actions that use them.
Set up Slack as “action” to send a message to a channel
Now we’ll use this trigger with any service we want. We can also create multiple actions, so one zap can send an email, post a Slack notification, and create a new Intercom customer record at the same time.
For this example, in the “action” section, we will select the Slack service when asked.
First, we will select the type of “action” we want to perform: “Send Channel Message.” You can pick other actions, like sending a direct message.
Then, similar to the initial Google Sheets steps, we first select the Slack account we want to use.
And finally, among a lot of options, we will set up the sender name, avatar, and other details, but most importantly the channel we want the message sent to and the message content itself:
Zapier makes it pretty intuitive and simple to construct smart content areas like this one. You can type a static message as well as insert the actual data (variables) from your source, which in this example is the Google Sheets document. When you construct a message with dynamic parts, a searchable dropdown lets you find the column value you want to insert.
Once everything is done, you will be able to finish this step and will be required to test the action you just set up. And all done! Don’t forget to turn the zap “on.”
This is just the simplest example I could use. There are many use cases where this integration can push changes/data into the thousands of services available in Zapier.
You all know how much I love thinking, talking, and geeking out about ways to smartly preserve our knowledge in the form of written communication. I have talked a lot about written communication and making our digital documents smarter before.
Although Notion is not my primary knowledge base or note-taking tool, I really love certain aspects of how Notion functions. Probably its best feature among these tools is that you can publish any document publicly, and the output it creates is very minimal and clean. The second best thing about Notion is its support for almost any type of web-embed code. With this, you can embed practically any interactive widget or content piece in your documents. Notion makes the documents you publish online pretty much fully fledged web pages.
This has made Notion an attraction point for a lot of people creating microsites and sub-sites, and for plain content written in Notion and used on existing websites.
Today, I want to talk about a guide and mini-project by Notion enthusiast Stephen that allows us to set up a custom domain/subdomain for our Notion documents.
Stephen’s code runs on serverless Cloudflare Workers. It allows a few customizations, like a dark/light mode toggle on your page and nice URLs (slugs), once you set up your Notion document with the worker code. It’s a pretty simple, almost no-code solution. In fact, you don’t have to worry about the code at all: Stephen created a mini UI that lets you customize your configuration while setting it up. It takes about 5-10 minutes, but it’s worth it.
When planning, designing, and implementing a product’s infrastructure, the most common back-end pattern we follow is to build the back end as a distributed model using microservices. But a cost comes with this model: monitoring, alerting, and maintenance. Each microservice may live in its own container with its own dependencies, and we often end up with many moving parts in the microservices approach. Even a very simple app can have always-running services, scheduled jobs, and workers distributed across many different infrastructure elements. And the rise of containerized applications and tasks has allowed us to build our products from even more micro-scale pieces of code running on serverless infrastructure.
There are many solutions for monitoring and alerting on whether a service is running properly, but it’s often difficult to centralize the monitoring.
There is a very simple yet effective approach I like, and I seed this very minimal integration into most of my microservice components: servers, pings, a script doing a cleanup or a backup. It’s very similar to ping checks, but a bit more simplified and universal. Ping checks generally require “parts” of your code to be publicly available and/or serve a status over HTTP/TCP/UDP, which may not be possible depending on where your “component” runs.
In this post, I’d like to focus on the open-source software, which you can easily set up and run as your own instance for free. They also provide a SaaS version of the software that you can get started with for free or at a low cost.
The principle of this service is very simple. Essentially, you create a “check,” and the software expects a ping from your component at regular intervals. You can set what these intervals are, group and organize your checks, and set up alerts for when a check fails to report (ping) within a grace period you can adjust.
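The reporting side is nothing more than hitting the check’s unique ping URL after your job runs. A minimal sketch, assuming the hosted hc-ping.com endpoint and a placeholder check UUID:

```shell
# Wrap any command so its outcome is reported to a healthchecks.io check.
# Pinging the bare URL signals success; appending /fail signals failure.
hc_run() {
  url="https://hc-ping.com/$1"
  shift
  if "$@"; then
    curl -fsS -m 10 --retry 3 "$url" > /dev/null || true
  else
    curl -fsS -m 10 --retry 3 "$url/fail" > /dev/null || true
    return 1
  fi
}

# Usage (the UUID is a placeholder -- copy yours from the check's page):
#   hc_run "your-check-uuid" /usr/local/bin/nightly-backup.sh
```

If the wrapped job stops running entirely, no ping arrives, the grace period lapses, and the alert fires; that is what makes this pattern work for cron jobs and workers that can’t serve a status endpoint.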
The service integrates with many mainstream ops support/escalation services like PagerDuty and Opsgenie, as well as simpler channels like Slack, email, or SMS via Twilio for basic notifications.
Check out the service, their open-source software page, and their documentation here: healthchecks.io
At Nomad Interactive, we are relatively new to contributing to npm with our own packages. Our initial approach to packaging things that are common throughout our projects, like ESLint configurations or some generators we created for ourselves, was private repositories, without thinking too much about versioning or about supporting multiple versions of, say, ESLint or React. Then it became clear that we had to version our packages properly, so we started researching how to easily spin up a private registry of our own and distribute our packages only to authenticated users.
We found Verdaccio, an open-source and lightweight npm registry. We were able to spin up an instance on our Docker Swarm in minutes and start pushing private packages with our initial versions.
If you need a private registry for your team or company, Verdaccio is definitely a great, quick, and easy solution to get started with.
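Getting a throwaway instance up really is a couple of commands with the official Docker image (the port below is Verdaccio’s default):

```shell
# Start Verdaccio on its default port 4873
docker run -d --name verdaccio -p 4873:4873 verdaccio/verdaccio

# Create a user on the private registry, then publish against it
npm adduser --registry http://localhost:4873
npm publish --registry http://localhost:4873
```

For a team setup you would put this behind TLS and persist its storage volume, but the default configuration is enough to evaluate it.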
We have since switched our mindset from “why private packages anyway?” to “let’s share everything we do with the world.” We questioned whether we even needed private packages at all, and the answer was no. Nothing we do is a government secret, or any secret at all. So we converted them to public packages and started pushing them to npm directly. The issue is, we definitely haven’t put any real open-source management thought behind any of these packages, so if we receive a pull request, we will most likely neglect it (even though I’ll be very appreciative of it).
One thing I have to say is that a private npm registry occasionally became painful to get working properly on our CI/CD pipelines due to authentication. Verdaccio doesn’t support registry tokens, or tokenized access separate from users, so you have to create a dedicated user for your registry, and that will probably need to be a Git user. And if you are in a licensed environment like gitlab.com or GitHub, you may be forced to pay for a user that only CI/CD will use. Not a big deal, but a side note we had to deal with…
If you are in mobile development, particularly Android development, you have probably heard of or used Genymotion’s emulator.
Genymotion is a company (and product) that runs Android OS in a virtual machine to emulate the Android runtime with some basic apps the bare OS comes with. By default, it doesn’t have Google services and APIs installed or enabled, but they can be installed on top of the bare Android OS image, which you can pick and download easily from Genymotion’s repository.
These days, the official android emulator is not bad at all, but back in the day, it was terribly slow and difficult to do certain things. It was also designed solely for Android app developers who can do stuff from the command line to supplement android emulator. In those days genymotion was a superior simulator that was everybody’s first choice of running android apps while developing because of those reasons.
Aside from Android app development, I extensively used Genymotion to do my mobile web tests in native Chrome for Android and a few other browsers. Today there are many cloud services we can use to test our web work on so many real devices and OS/browser version variations that this use case is no longer valid. But I still like the idea of an easy-to-configure-and-run Android environment when I need one, and for this purpose, I consider Genymotion one of the best solutions out there. They changed their licensing model a lot, so I don't know what the latest is, but I was able to use Genymotion freely (and still do) for personal use cases (which all of my use cases are – personal projects and such).
For some time, I also used Genymotion experimentally to run a custom-size tablet, with Google services and APIs and Google Play installed, so I was able to install the apps I use on mobile platforms that have a nicer user experience than their desktop or browser counterparts. If you don't mind spending a lot of RAM, this can be an interesting option for running mobile apps on your desktop. It's not super intuitive to perform mobile gestures with a mouse and trackpad, but one can get used to it very quickly.
I always love services that allow developers to shorten the time from zero code to a live URL for a web application. The large majority of web applications I write – mostly for hobby topics – are very simple, not too complicated or crowded. So, by my experimental nature, I create a lot of small apps, most from scratch or using plain boilerplate code I created for myself in the past for these experimental ideas.
Generally, when my code is ready to show to someone, I waste a lot of time on boring steps to prepare the app and make it visible somewhere. Thanks to services like heroku, this is way less of a headache now. Readers who follow me know I've shown my love for heroku in multiple articles previously 🙂
I want to introduce another service that brings me joy by cutting a lot of that annoying time between bootstrapping an idea and making it live and ready for prime time. Now.sh is a delightful service I discovered when I needed static hosting for a react.js web app that I wanted to put up very quickly. There are some apps where I don't even want to create a heroku git repository, or I want an even faster turnaround on my steps to write and publish an app.
Now.sh allows my web apps to go live with a single command line. Now.sh is actually the name of the CLI tool for the parent service "Vercel". To install their command line tool, simply install the package globally from npm:
npm i -g vercel
Then create your account on vercel and run "vercel login" to log in to your account. Now you are good to go.
One command deployments for non-git projects – or automatic deployments for every commit
In your project folder, simply run the "vercel" command, which will automatically deploy your code (the files in the current folder) to vercel with an automatically generated subdomain under ".now.sh". If you want, you can attach your application to a custom domain of your own for free: https://vercel.com/docs/custom-domains
Here is an “echo” endpoint that returns what’s sent to it:
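As a sketch of what such an endpoint can look like on Vercel's Node.js runtime (the file path api/echo.js and the exact response shape here are my own illustration):

```javascript
// api/echo.js — a minimal sketch of an "echo" serverless function:
// it replies with whatever it received.
function echo(req, res) {
  res.json({
    method: req.method,     // HTTP verb used by the caller
    query: req.query,       // parsed query-string parameters
    body: req.body || null, // parsed request body, if any
  });
}

module.exports = echo; // Vercel invokes the file's exported function
```

After deploying the folder, requesting /api/echo?msg=hi should return a JSON document containing that query parameter back.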
Another great feature I really like is the direct and seamless integration with github repositories. I experimented with very portable development environments, such as coding on an iPad, in the past. I find zero-configuration and zero-dependency development models/environments very attractive. There was an occasion in recent months when I had to live on my iPad for 10 days and needed some quick way to code up and deploy a web-based application with a few back-end capabilities. Nothing complex. It was very hard and time-consuming to construct a remote development environment and continuously work on ssh-based remote platforms instead of the native platform I was using.
Thankfully, now.sh's github integration took the "deployment" and build steps off of my local environment. I still had to do very frequent pushes to a remote git repository (github) to make sure I was continuously not breaking my working app while moving along on the feature I was working on. But still, it had zero dependency on my local environment: I was using my favorite editor and was able to push code to github, and the rest was taken care of by now.sh. I really enjoyed its github bot, which was very responsively posting updates to commit and PR logs as comments. I was also getting deployment notifications on my slack channels. So it was pretty instantaneous to make changes and make them live somewhere I could share with my team. More on their github integration here: https://vercel.com/github
Vercel (formerly Zeit, maker of Now.sh) is a great service for both bootstrapping your app and making it scale. They are also very transparent and open about their tooling, which makes moving out easy. So there is little fear of "lock-in", to a degree.
There are also a half dozen other beautiful features that I'm not covering in this article, worth checking out: https://vercel.com/
At Nomad Interactive, we've done a big design toolkit migration twice in the past: from the old Adobe Photoshop/Illustrator era to Sketch, both times utilizing other tools in our collaborative process, like Invision and Zeplin. About a year ago, we made a similar transition/migration to Figma.
One tool to rule them all
We used sketch to create our digital designs for years, plus invision for presentation purposes. We had a particular process to export our designs, place them in dropbox, then upload them to invision with the same/similar project list and configuration. On the other hand, for designer-to-developer handover, we started to use the beautiful startup Zeplin. But Invision knocked them off pretty quickly, so we put some of our focus into adopting Invision's "Inspect" feature. Not long after, we moved to Figma, which replaced all 3 of these tools without any hesitation or question in our minds. When we gave Figma a try on a single project, it became clear very quickly that we didn't have to jump between tools for different purposes. Then we switched over.
Collaborate – Seriously
The best thing that makes Figma different from other design tools is its real-time collaboration features: being able to see all viewers' and editors' cursors, and seeing design changes in real time. This is similar to the paradigm shift between the static-file-focused "Word" and its online, real-time collaborative alternatives, google docs or quip. It makes the "creation" process much more like a whiteboard session if utilized well. We started to do collaborative design sessions on the same project with multiple designers, product managers/owners, and project managers. Not everybody designs, but they can actively collaborate in the design process, providing direction to the designers on the large artboards.
Not all good
This process change made the low-fi UX thinking process much more visible. It lets non-designers be more active participants in the earlier parts of the design process. It also results in wireframes being done very close to the actual designs (we're mostly talking about digital product designs – like mobile application UIs or e-commerce sites). The danger is mixing these two phases of the design process, which are generally better kept separate for the sake of putting the mind in the right concerns at the right time.
We generally dedicate the wireframing period to baking the digital product's functionality-focused discussions and iterations, and the UI/creative design period to being more concerned with look and feel, colors, typography, animations – creating emotions – after we know how the product is meant to work. Figma's collaborative design features bring these two worlds closer together. There is a danger of mixing them up, so that all of a sudden you will be hearing about button color instead of what the button should say or do when interacted with.
A weird need: designing on mobile platforms (namely iPad Pro)
I have a weird need to make super-portable devices like the iPad my go-to devices to carry around (I already do my "thinking"-oriented tasks on the iPad – like writing this article). But I have a burning desire to see the iPad handle more complex tasks, like writing code (not just writing, but compiling, or having the runtimes for scripting languages – not yet), or doing more complex design work – at least at the sketch/Figma level. I'm not asking to be able to do render-heavy design tasks like photoshop does. That is also not what I need or do 99% of the time.
I gotta say, Figma is the closest in that game, if this is a practical or real future need. We know Sketch's developers have said they are not going to port their macOS app to the mobile platform. And Adobe is taking a different (probably nicer – native) path, but a long one, to get their suite of applications onto mobile platforms – but we're already done with Adobe products. On the other hand, Figma practically runs without any issue in mobile safari, but with a huge lack of touch and mobile interaction support. There are some attempts to make it better (i.e. the Figurative app). But there is still some road ahead before Figma is fully iPadOS compatible. I'm sure the Figma team is already working on this, and I hope that day comes sooner.
I recently talked a lot about the importance of collaborative and smarter documentation that will improve your personal and professional workflow. Airtable is certainly different from its competitors, in an interesting use case I found myself in: on one of my hobby projects, I ended up using Airtable as a remote database tool all of a sudden.
Airtable is a very nice, mobile-friendly document management tool in a "spreadsheet"-style base. You can create your data structure in any data table model, and you can create different views of your data (a calendar view, a filtered table view, a kanban view…).
What makes Airtable special for me is its API. Their API is so easy to get started with and access, because you get your data-specific API documentation after you log in. It shows your data in the API examples right there in the dynamic documentation.
The Airtable API essentially lets Airtable be used as a remote database for small or personal projects. Managing your data happens in Airtable's clean and nice web or mobile interface, and you consume your data in any format you like on any platform.
Write access is also not very different from read-only. Your data model will be well documented in the dynamic API documentation for your table. You only need to start constructing your API requests and make the call…
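To illustrate, here is a minimal sketch of a read call against the Airtable REST API. The base id, table name, and API key are placeholders – your real values, and the exact shape of your records, appear in your own dynamic documentation:

```javascript
// Build the endpoint URL for a table. The base id and table name
// come from your base's dynamic API documentation.
function recordsUrl(baseId, tableName) {
  return "https://api.airtable.com/v0/" + baseId + "/" + encodeURIComponent(tableName);
}

// Fetch one page of records and return just their fields.
async function listRecords(baseId, tableName, apiKey) {
  const res = await fetch(recordsUrl(baseId, tableName), {
    headers: { Authorization: "Bearer " + apiKey },
  });
  const data = await res.json();
  return data.records.map((r) => r.fields); // each record's fields as a plain object
}
```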
Let's dive right into a few key reasons why every developer should know google apps script and get familiar with working with GAS in google sheets and docs.
1) Provide your technical output (data) on common ground (a tool that is known by pretty much any computer user, or intuitive to learn if they don't know it)
I highly believe in any tool that allows developers and non-dev roles in a team to effectively communicate complicated information – generally data sets to be eventually used for a form of storytelling (website/blog content, an internal or external product performance report, financial or behavioral analysis, etc.). When it comes to a developer-only crew, we always find 100 different ways to express what we want to show to the world, in the form of scripts utilizing whatever libraries and tools come to hand. But when it comes to handing over our output (whether it's a SQL output in CSV format, or a dynamic data set in a service), we as developers are constrained, and the party we're handing our work over to starts their work with a lot of constraints too. They also have to learn whatever data format we give them. This makes the collaboration one-directional, starting from the developer and ending up with the non-dev role working with that data (marketing people, product management, or executive roles).
The real trouble starts when we have to repeat the same work over and over, because we export the desired data model from whatever tool we're using, generally as static CSV/Excel exports if it's tabular data. If you are doing the same or similar work multiple times, it's only beneficial to think of ways to automate the process. Let's think of simple family expense management in excel/google sheets (because everybody knows, or easily learns, how to work with sheet tools). In this hypothetical scenario, let's say there is a database or API that provides your credit card statements (there are actually services for extracting this information from your bank – but for security reasons, it gets very complicated at the authentication layer). As a developer, you can start the story from "extracting the data", and your output will almost always store the extracted data in some place – most likely a database.
You may or may not be the "analyzing" person in the family, and even if you are, you may need to review your analysis with the rest of the family members and get their own take on the dataset for a real collaborative understanding and iteration.
Google Apps Script has many ways to connect to 3rd party APIs to pull data and, if done well, run these operations automatically (refreshing the data set) or manually (with UI interactions) – essentially allowing the developer's role to be completely automated in a "smart google sheet" or doc.
Once this is set up right, it can be copied and extended easily. It goes back to the traditional "macro-enabled excel templates" approach, but it really works out well for everybody. And google docs products are true real-time collaborative tools: you can literally open your computers on a conference call, collaboratively edit, and work on the same information to bring it to an understandable level with simple aggregate analysis, breakdowns, or even charts, without having to reinvent the wheel and code all of this up from scratch.
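The expense-statement idea above can be sketched roughly like this in Google Apps Script. The statements URL, the JSON field names, and the "Expenses" sheet name are all hypothetical:

```javascript
// Pure helper: turn an array of statement objects into sheet rows,
// with a header row on top. Field names here are hypothetical.
function statementsToRows(statements) {
  const rows = statements.map((s) => [s.date, s.merchant, s.amount]);
  return [["Date", "Merchant", "Amount"]].concat(rows);
}

// Fetch the statements and rewrite the "Expenses" sheet with fresh data.
// This can run manually, from a custom menu, or on a time-driven trigger.
function refreshExpenses() {
  const json = UrlFetchApp.fetch("https://example.com/api/statements").getContentText();
  const rows = statementsToRows(JSON.parse(json));
  const sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("Expenses");
  sheet.clearContents();
  sheet.getRange(1, 1, rows.length, rows[0].length).setValues(rows);
}
```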
2) Mock up / bootstrap a lot of ideas in simpler form without needing to code a lot. Perhaps these can even serve long-term use cases
As developers, more than half of what we do is automation. We code so we don't repeat ourselves. This is just a human desire: to invest our brains in avoiding time-consuming, less intellectual work. Our brains are wired that way from the time we're born; they always seek shortcuts. Developers are the hackers who get these shortcuts discovered and implemented in the digital realm.
Whether you are working alone or with others (a team), you will always have outputs that take multiple steps to produce while you work. To not repeat yourself, you will want to create tools for yourself that help you achieve the same output with fewer steps. We always use the buzzword "one-click" to reflect the magical robotic process when we refer to an "easy" tool. Regardless of whether the steps become one or many, there is always room for improvement in work we do every day.
Developers tend to jump into their own comfortable environments to create scripts that do things for themselves. I don't know any developer who doesn't have scripts that help with resizing images or orchestrating their common tasks, so whenever we need to do the same operation, we just push the "one-click" button and see the output. Most of the time we invest a lot of time building these scripts, regardless of how proficient we are in the languages and environments we know. We will also always have issues sharing this work with others on the team, or have to invest more time converting these scripts to be usable by others on the team, or publicly by anyone on the internet.
Google docs products give us a great canvas, along with a lot of limitations that positively shape what we create, ensuring that what we build in google docs or sheets or slides is in a familiar format that can be used by other developers or non-dev team members.
Just to acknowledge it: google apps script has real limitations, generally when you are doing resource-heavy tasks on the cloud servers or in the browser. But it is also super easy to start in a very basic form factor, so you don't have to worry about a lot of cosmetics or look and feel.
3) Spreadsheet tools are essentially databases
Every developer has to know relational database modeling and how to work with data of any size and structure. At the core of relational databases, you work with tables and the relationships between them.
Google spreadsheet and Excel are great tools that can be used extensively by developers and non-developers together, because they are both visual tools and structured data by nature.
The reason both tools have programming interfaces is that it's one of the core use cases for these tools. I also think the companies behind them are compelled to offer these programming features because they're probably among the most requested.
All in all, utilizing a spreadsheet file as a database – reading structured information from it, injecting new rows, or querying and updating rows – is one of the most common use cases from a programming perspective. So whether you like it or not, you will eventually come to a project where the "users" of your product are already using, or will be using, spreadsheet files alongside your product. So you will at least have to learn how to process these files (import, ingest) and be able to expose your product's details (data) in these formats (export features).
The best evidence that this is a very common need is to look at the most popular automation tools on the internet, like IFTTT and Zapier: one of the first integrations they built was google sheets.
I personally have many google sheets files that are fed by IFTTT, or by workers I run on my servers that export activities daily/weekly/monthly or by trigger. Anytime I want to see what's going on and analyze my activities (personal activities like my driving history, leaving/entering the NYC area, or my credit card expenses), I can easily do a pivot table to see the trend, or group by categories or other angles to look at the same data with different questions. I can't even describe how many different versions of a similar scenario there can be for business purposes. Analytics by itself will cover a long list of things you (or a non-dev team member) can and will want to do with data sets like these.
The last reason on the list is probably the strongest from a professional skill perspective: you kinda have to know your way around both static excel/csv files and the google APIs for google sheets, from the authentication steps to the actual endpoints/methods in the google sheets api for working with spreadsheets.
But #1 and #2 are more important if you are (or want to be) a resourceful problem solver. Every developer is, at the end of the day, a non-stop problem solver, and having google sheets in a developer's toolbox is a must.
Sheet2Cal helps you sync your event data from Google Sheets to your favorite calendar app. Plan and collaborate on your events (social media/editorial content, wedding & travel planning) freely in Google Sheets and export them easily to a calendar subscription using Sheet2Cal.
Sheet2Cal creates a complete calendar event by using the information in any Google Sheets doc you want. It helps you schedule and update your calendar events (such as social media, wedding & travel planning) right from your Google Sheets doc.
At Nomad Interactive, we love to create various tools to make our lives easier. And we share them with the world if we think they can make the same impact for others! Sheet2Cal was born from such a need.
When uploading a shot to Dribbble, we were doing our content planning – title, description, and so on – in a Google Sheets document for team review and collaboration. In addition, we were also creating a calendar event with similar information as a reminder to ourselves of when to publish each post on our shared calendars.
During the review process, we had to mirror every change we made in the post document to the relevant calendar entry. To remove this unnecessary step, we developed Sheet2Cal and started doing our entire calendar organization from our main Google Sheets document, from creating an event to updating it. Then we realized this would work for many different subjects and decided to share it with everyone for free!
Now you can use Sheet2Cal as you wish for your similar needs! If you have any problems or anything you want to ask, we as a team would love to help!
To show your support, you can visit the Product Hunt page and give us a 💬 shout or an 👍 upvote!
In this post, I'd like to talk about my growing experience with Google Docs and Google Sheets, using them for more complex needs and functions.
It's been a recent theme of mine to talk about ways to use collaborative cloud services for documentation purposes, whether for personal or business/team communication. I've also talked about going beyond plain written documents to smarter, more complex forms of information in my Smart(Er) Documents – Quip, Notion, Airtable, Coda Or Good Old GDocs & GSheets post.
I like Google docs from the availability perspective: it's available without a complicated pricing structure, and Google keeps google docs open to personal google accounts, so most people can access google docs even if they don't have company accounts (not using GSuite). I also like that google docs is very familiar in its visual form, which all of us are used to from Microsoft Office or other office software suites (Open, Libre). This may also make the google docs products look outdated compared to more modern collaborative editing tools like Notion and Quip.
All of these services have powerful APIs, but probably not as robust as Google Docs'. Google Docs also has a secret weapon, aside from its powerful API, that I'd like to introduce lightly in this post: Google Apps Script. This is a very wide and under-discussed topic online that gives the Google Docs tools a huge edge. I may focus on it down the road with posts on sub-topics of Google Apps Script.
Google Apps Script
Google Apps Script is a scripting/automation feature of the larger Google product family, including Gmail, google calendar, google drive, and a few other important google products you may already be using.
Google apps script runs in your browser (or mobile device) within the google tool you're using. Scripts can register additional UI elements in the tools you use (e.g. a new menu item), watch/listen for changes in the documents you create (events like "a row is updated" in google sheets), or even map parts of your script to elements in the document content you are creating (such as buttons, dropdowns, checkboxes, etc.).
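For instance, registering a custom menu item in a sheet looks roughly like this (the menu label and the function it calls are my own illustration):

```javascript
// onOpen is a reserved trigger name in Apps Script: it runs every time
// the document is opened, and here adds a custom menu to the toolbar.
function onOpen() {
  SpreadsheetApp.getUi()
    .createMenu("My Tools")
    .addItem("Refresh data", "refreshData") // menu label → function to call
    .addToUi();
}

// The function invoked by the menu item above (a placeholder action).
function refreshData() {
  SpreadsheetApp.getActiveSpreadsheet().toast("Refreshing…");
}
```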
So far, nothing new that other tools can't do. Microsoft Excel and Word have had their famous Macros available for ~20 years or more, and almost all alternative software has some form of automation allowing similar capabilities. The real power of Google Apps Script comes when you combine its documents with Google's online/cloud tools, like google drive, google maps, or Gmail. This makes the documents interactive with other services, similar in spirit to using those services' APIs. One of the big things that makes me feel warmer toward the google docs tools is having real-time collaboration in your docs, which makes the collaborative writing/editing experience superb. My team and I often write content together in real time, and I find the conversational aspect of the collaborative work priceless.
My favorite use cases of using Google Apps Script
Pull a dataset from our internal services or public sources dynamically to google sheets
We use this most often with the analytics services we use internally. These sheets are generally reports we create once but often update with the latest version of the data.
Google sheets already has IMPORTDATA and IMPORTXML functions that pull CSV- or XML-formatted data easily. But often we use a service we haven't built that doesn't expose its data as CSV or XML – often it's a REST API returning JSON. You can use a helper function like https://github.com/bradjasper/ImportJSON or create your own custom processor in google apps script to pull the data and shape it the way you want. We often do the latter.
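A custom processor of that kind can be quite small. A hedged sketch, assuming the endpoint returns a flat array of objects (the URL and function names are placeholders):

```javascript
// Pure helper: flatten an array of flat objects into a 2D array
// (header row + one row per object) that Sheets can render.
function jsonToRows(items) {
  const headers = Object.keys(items[0] || {});
  return [headers].concat(items.map((item) => headers.map((h) => item[h])));
}

// Usable from a cell as =IMPORT_JSON_ROWS("https://example.com/api/data")
function IMPORT_JSON_ROWS(url) {
  const text = UrlFetchApp.fetch(url).getContentText();
  return jsonToRows(JSON.parse(text));
}
```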
Add custom functions to google sheets
We use this a lot to create custom functions (generally pulling data from other cloud tools we use), like getting Trello card details (title, status, assignee), i.e. =TRELLO("eio3u48d"), or hitting public services, like getting the weather forecast for a zip code: =WEATHER("11222")
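The mechanics are simple: any top-level function saved in the sheet's script editor becomes callable from a cell by name. A trivial, self-contained sketch (a real =TRELLO or =WEATHER would wrap a UrlFetchApp call to the service's API instead):

```javascript
// After saving this in the script editor, typing =FAHRENHEIT(20)
// into any cell renders 68 — no registration step needed.
function FAHRENHEIT(celsius) {
  return celsius * 9 / 5 + 32;
}
```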
Send emails from google docs or sheets using your Gmail account
This goes into the automation of your workflow. As I mentioned above, with google apps script you can map UI elements (menus, or a button-like element in the document content) to a custom trigger in your script that does something for you. We sometimes create sheets or docs containing form-like formats or, in the google sheets scenario, an action to be taken by the user for each row of data. For this example's sake, think of a contact list with name, email, and thank-you-note columns. We use google apps script to create a button-like action item in a column we define (let's say the column next to the thank-you note) with a label like "Send Thank You Note". With google apps script, we can register this column to accept clicks and trigger a google apps script function. The function can then pull the clicked row number and the values in that row for the name, email, and thank-you note. Then, with a few lines of code, we can utilize the gmail service api (without needing to do complicated SDK installations and – more importantly – deal with authentication) to send an email to the recipient with the content we want (in this case, using the thank-you-note column as the email content). This is a huge convenience compared to building out this capability in a service or custom code from scratch.
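A minimal sketch of the function behind that "Send Thank You Note" action. The column layout (A=name, B=email, C=note) and the subject line are my assumptions:

```javascript
// Pure helper: build the subject line for a recipient.
function thankYouSubject(name) {
  return "Thank you, " + name + "!";
}

// Reads the currently selected row and emails the note in column C
// to the address in column B, via the built-in GmailApp service.
function sendThankYouNote() {
  const sheet = SpreadsheetApp.getActiveSheet();
  const row = sheet.getActiveCell().getRow();
  const [name, email, note] = sheet.getRange(row, 1, 1, 3).getValues()[0];
  GmailApp.sendEmail(email, thankYouSubject(name), note);
}
```

Note that GmailApp sends from the account running the script, so no API keys or OAuth setup is needed beyond the permission prompt Apps Script shows on first run.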
Put google sheet data on your calendar and update it accordingly
Another great use case is pushing timeline-based planning to google calendar and updating it accordingly. We do this in a similar fashion to the previous scenario, but utilizing the google calendar service in google apps script.
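A rough sketch of that flow, assuming the sheet has title / start / end columns (a hypothetical layout) and a header row:

```javascript
// Pure helper: turn a sheet row into an event descriptor.
function rowToEvent(row) {
  return { title: row[0], start: new Date(row[1]), end: new Date(row[2]) };
}

// Push every data row (skipping the header) onto the default calendar.
function syncRowsToCalendar() {
  const calendar = CalendarApp.getDefaultCalendar();
  const rows = SpreadsheetApp.getActiveSheet().getDataRange().getValues();
  rows.slice(1).forEach((row) => {
    const e = rowToEvent(row);
    calendar.createEvent(e.title, e.start, e.end);
  });
}
```

Updating (rather than duplicating) events takes a bit more bookkeeping, e.g. storing each created event's id back into a sheet column so later runs can look it up.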
There are many more interesting use cases for google apps script, and there are many community-created/maintained lists and directories of great google apps script examples and resources.