Smarting up Google Docs and Sheets

In this post, I’d like to talk about growing your experience with Google Docs and Google Sheets to use them for more complex needs and functions.

A recent theme here has been ways to use collaborative cloud services for documentation purposes, whether for personal use or business/team communication. I’ve also talked about going beyond plain written documents to smarter, more complex forms of information in my Smart(er) Documents – Quip, Notion, Airtable, Coda or Good Old GDocs & GSheets post.

I like Google Docs from an availability perspective: it’s available without a complicated pricing structure, and Google keeps Docs open to personal Google accounts, so most people can access it even if they don’t have a company (G Suite) account. I also like that Google Docs is visually familiar, close to what all of us are used to from Microsoft Office or other office suites (OpenOffice, LibreOffice). Granted, this can also make Google’s products look outdated compared to more modern collaborative editing tools like Notion and Quip.

All of these services have powerful APIs, but probably none as robust as Google’s. Aside from its powerful API, Google Docs also has a secret weapon that I’d like to introduce lightly in this post: Google Apps Script. It is a very wide and under-discussed topic online, and it gives Google’s tools a huge edge. I may focus on it by covering Google Apps Script sub-topics down the road.

Google Apps Script

Google Apps Script is a scripting/automation feature that spans Google’s larger products, including Gmail, Google Calendar, Google Drive, and a few other important products you may already be using.

Google Apps Script runs in your browser (or on your mobile device) within the Google tool you’re using. Scripts can register additional UI elements in the tools you use (such as a new menu item), watch/listen for changes on the documents you create (events like a row being updated in Google Sheets), or even map script functions to elements in the document content you are creating (buttons, dropdowns, checkboxes, etc…).
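As a minimal sketch of what this looks like in a Sheets-bound script (the menu label and function names here are just placeholders):

```js
// Simple trigger: runs automatically when the spreadsheet opens
// and registers a custom menu that calls a function in this script.
function onOpen() {
  SpreadsheetApp.getUi()
    .createMenu('My Tools') // hypothetical menu name
    .addItem('Say Hello', 'sayHello') // menu label -> handler function
    .addToUi();
}

function sayHello() {
  SpreadsheetApp.getUi().alert('Hello from Apps Script!');
}

// Simple trigger: runs on every edit and receives an event object
// describing the edited range.
function onEdit(e) {
  Logger.log('Edited range: ' + e.range.getA1Notation());
}
```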

So far, nothing that other tools can’t do. Microsoft Excel and Word have had their famous macros for around 20 years or more, and almost all alternative software has some form of automation with similar capabilities. The real power of Google Apps Script comes when you combine its documents with Google’s online/cloud tools like Google Drive, Google Maps, or Gmail. This makes documents interactive with other services, similar in spirit to using those services’ APIs. Another big thing that warms me to Google’s tools is real-time collaboration in your docs, which makes the collaborative writing/editing experience superb. My team and I often write content together in real time, and I find the conversational aspect of the collaborative work priceless.

Google Apps Script is based on JavaScript, so you write JavaScript snippets to define your custom functionality, with a large list of Google service APIs ready to use.

My favorite use cases for Google Apps Script

Pull a dataset from internal services or public sources dynamically into Google Sheets

We use this most often with our internal analytics services. These sheets are generally reports we create once but update often with the latest version of the data.

Google Sheets already has IMPORTDATA and IMPORTXML functions that easily pull CSV- or XML-formatted data. But often we use a service we haven’t built that doesn’t expose its data as CSV or XML; frequently it’s a REST API that returns JSON. You can use a helper like https://github.com/bradjasper/ImportJSON or create a custom processor in Google Apps Script to pull the data and shape it the way you want. We often do the latter.
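A minimal sketch of such a custom processor, written as an Apps Script custom function (the endpoint URL and JSON field names are hypothetical):

```js
// Used in a cell as: =IMPORT_USERS("https://example.com/api/users")
// Fetches a JSON array and returns a 2D array, which Sheets
// spreads across rows and columns starting at the formula's cell.
function IMPORT_USERS(url) {
  var response = UrlFetchApp.fetch(url);
  var users = JSON.parse(response.getContentText());
  var rows = [['Name', 'Email']]; // header row
  users.forEach(function (user) {
    rows.push([user.name, user.email]);
  });
  return rows;
}
```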

Add custom functions to Google Sheets

We use this a lot to create custom functions (generally pulling data from other cloud tools we use), like getting Trello card details (title, status, assignee), e.g. =TRELLO(eio3u48d), or calling public services, like getting the weather forecast for a zip code: =WEATHER(11222).

Send emails from Google Docs or Sheets using your Gmail account

This goes into workflow automation. As I mentioned above, with Google Apps Script you can map UI elements (menus, or button-like elements in the document content) to custom triggers in your script that do something for you. We sometimes create sheets or docs containing form-like formats or, in the Google Sheets scenario, an action to be taken by the user for each row of data. For this example’s sake, think of a contact list with name, email, and thank-you-note columns. We use Google Apps Script to create a button-like action item in a column we define (let’s say the one next to the thank-you note) with a label like “Send Thank You Note”. With Google Apps Script, we can register this column to accept clicks and trigger a script function. The function can then pull the clicked row number and that row’s values for the name, email, and thank-you note. Then, with a few lines of code, we can utilize the Gmail service API (without needing complicated SDK installations or, more importantly, dealing with authentication) to send an email to the recipient with the content we want (in this case, the thank-you-note column as the email body). This is a huge convenience compared to building this capability in a service or custom code from scratch.
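A rough sketch of the script side of this flow, assuming the columns described above and leaving out the click-mapping wiring:

```js
// Sends the thank-you note for the currently selected row.
// Assumes columns: A = name, B = email, C = thank-you note.
function sendThankYouNote() {
  var sheet = SpreadsheetApp.getActiveSheet();
  var row = sheet.getActiveCell().getRow();
  var values = sheet.getRange(row, 1, 1, 3).getValues()[0];
  var name = values[0];
  var email = values[1];
  var note = values[2];

  // GmailApp sends from the account running the script;
  // no SDK installation or separate authentication dance needed.
  GmailApp.sendEmail(email, 'Thank you, ' + name + '!', note);

  // Mark the row as done in column D.
  sheet.getRange(row, 4).setValue('Sent ' + new Date());
}
```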

Push Google Sheets data to your calendar and keep it updated

Another great use case is pushing timeline-based planning to Google Calendar and keeping it updated. We do this in a similar fashion to the previous scenario, but utilizing the Google Calendar service in Google Apps Script.
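A minimal sketch, assuming one row per event with title, start, and end columns (a real version would also store the created event IDs back in the sheet so re-runs update events instead of duplicating them):

```js
// Creates calendar events from sheet rows.
// Assumes columns: A = event title, B = start date, C = end date.
function pushPlanToCalendar() {
  var sheet = SpreadsheetApp.getActiveSheet();
  var rows = sheet.getDataRange().getValues();
  var calendar = CalendarApp.getDefaultCalendar();

  rows.slice(1).forEach(function (row) { // skip the header row
    calendar.createEvent(row[0], new Date(row[1]), new Date(row[2]));
  });
}
```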

Wrapping up

There are many more interesting use cases for Google Apps Script. There are also many community-created and -maintained lists and directories of great Google Apps Script examples and resources.

https://github.com/contributorpw/google-apps-script-awesome-list is a good start. The GitHub topic search is also worth a look: https://github.com/search?q=topic%3Agoogle-apps-script

The official Google Apps Script documentation is here: https://developers.google.com/apps-script

Launch of my latest product Screenshot Tracker

🚨 I’ve just launched a new product (also on 😸 Product Hunt): 📸 Screenshot Tracker, a desktop app that helps capture full web page screenshots in different resolutions at once.

Screenshot Tracker helps you to collect:

  • 🖼 Full-size screenshots for your web pages
  • 🎳 Multiple URLs at once
  • 📱 In multiple resolutions (desktop, tablet portrait, landscape, and mobile)
  • 🔁 Repeat/Retake screenshots

with one click!

Screenshot Tracker is great for designers and developers to test their work while iterating on a web page’s design and development.

And it’s 100% free & open source. PRs welcome!

Project Homepage: https://screenshot-tracker.nomadinteractive.co/

Also, give me your support on the Product Hunt page.

Working with Memcache on your NodeJS app on Heroku

I wrote about Heroku in the past, showing my love for this amazing service. Heroku helps developers start writing and deploying apps much more easily than any other service, which lets developers try things out and bootstrap their ideas in a matter of minutes. I talked more about Heroku in this article: https://mfyz.com/using-heroku-for-a-quick-development-environment/

Heroku Marketplace

I also mentioned before that Heroku has a 3rd-party add-on marketplace. It covers all types of application infrastructure and cloud services, from fully hosted database services and CDNs to transactional email delivery. The marketplace is filled with countless gems, and it’s worth a weekend just to explore these great services, all of which work with Heroku within a few clicks/commands.

Why cache? Why Memcache?

One of them, a primary need of almost every distributed cloud application, is fast memory-based caching. There are different ways and models to store and use data from memory-based caches, but the most common is key-value storage with a TTL (time to live), which keeps the stored data short-lived and regularly cleaned up.

The primary reason to use a memory-based cache is to optimize the performance of an application. In general terms, you cache frequently used data to cut time spent on database queries and disk reads, as long as you have reasonable confidence that the cached data won’t change within the short time it lives in the cache. Even when the data can change, there are simple ways to refresh the cache on “update” events to make sure the cached data is always valid while still cutting expensive operation time in the code flow.

As I mentioned, there are many different memory-based storage engines, protocols, and services. One of the simplest yet most effective and widely used engines is Memcache. It is an open-source tool that can be easily installed and hosted on any server; it runs as a daemon and simply listens on an open port to accept storage requests and queries. Memcache is designed to store and serve all of its data from the server’s physical memory, so it is very fast and responsive when combined with a high-throughput network between the Memcache and application servers.

Heroku also supports multiple providers for Memcache and other memory-based storage engines. One particular service, MemCachier, makes it very seamless to create an instance and set it up on a Heroku application.

MemCachier

The MemCachier instance is free and can be added without any further account details on an external site, but it comes with very limited resources: only 25 MB of total cache size and a 10-concurrent-connection limit. Even though that number is small, connections from a node.js application will mostly be long-lived, so it will be more than enough if you are not planning to have many environments connecting at once. It is certainly enough for a quick Memcache server to build stuff up and running. For more details about the MemCachier service and its pricing model, here is the Heroku elements page explaining it all: https://elements.heroku.com/addons/memcachier

To add this add-on to your application, run the following command in your project folder:
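At the time of writing, the free plan is called dev, so the command looks like:

```sh
heroku addons:create memcachier:dev
```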

This will add the MemCachier credentials to your Heroku config as environment variables. Copy the variables to your local .env file to be able to connect from your local development application.
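The add-on sets MEMCACHIER_SERVERS, MEMCACHIER_USERNAME, and MEMCACHIER_PASSWORD; you can list them with:

```sh
heroku config | grep MEMCACHIER
```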

To use Memcache in a node.js app, we will use an npm package called “memjs”. To install it, run:
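```sh
npm install memjs
```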

The usage of the package is pretty straightforward. Here is a short example, adapted from MemCachier’s Heroku getting-started guide:
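This is a condensed sketch of that guide’s example, using memjs’ promise API:

```js
const memjs = require('memjs');

// With no explicit arguments, Client.create() falls back to the
// MEMCACHIER_* environment variables set by the add-on.
const mc = memjs.Client.create(process.env.MEMCACHIER_SERVERS, {
  username: process.env.MEMCACHIER_USERNAME,
  password: process.env.MEMCACHIER_PASSWORD,
});

async function demo() {
  // Store a value with a 60-second TTL.
  await mc.set('greeting', 'hello world', { expires: 60 });

  // Read it back; `value` is a Buffer, or null on a cache miss.
  const { value } = await mc.get('greeting');
  console.log(value ? value.toString() : 'cache miss');
}

demo().catch(console.error);
```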

Usage is as simple as calling the get and set async functions to set and get a value by key.

I created a more visible version of this example with a relatively long, time-consuming mathematical calculation: an expensive computation (or data retrieval, like a slow database query) is cached in Memcache, so it’s calculated on the first request and then served from Memcache on consecutive requests instead of being re-calculated. In my example, I used a prime-number calculation, which takes time.

https://github.com/mfyz/heroku-memcachier-example

Smart(er) documents – Quip, Notion, Airtable, Coda or good old GDocs&GSheets

I’ve talked about the importance of written communication before. I strongly believe that written communication is the best and purest way to accumulate and share knowledge. Most importantly, it allows all of us to communicate on our own terms/time/speed, enabling asynchronous communication.

This is a key concept for eliminating unnecessary meetings and making sure everybody’s time is utilized well. It’s also a key requirement for scaling whatever knowledge transfer is needed between peers at work.

I also mentioned multiple times that I use Quip personally and for my team communication and management. A lot of great tools came before or after Quip; a notable up-and-coming one is Notion. Regardless of the tool itself, our need to “document” has grown beyond just writing.

Writing is the storytelling part of documentation, and it’s necessary. Any tool that helps us write better, faster, and with fewer errors (e.g. Grammarly) is good. But as an engineer, I have a hunger for more. Because of my education background (statistics), I have been thinking a lot about displaying information in different ways and making it interactive. I also operate in data-filled environments where there is always a need to “simplify” information into readable, easier-to-digest formats. So I always look for ways to present data, or a plain timeline of events, in a more creative and fun way.

I see two very common ways of documenting things.

1) A story, or instructions on some topic: how-tos, technical documentation, etc… These documents are generally static. What I mean by that is, we generally just read these documents; there is not much interactivity or dynamic outcome we expect from them. Although, even if a document displays just a few numbers, we may want to treat it as a report that gets updated with more recent versions. The story outline stays the same, but the mentioned numbers, dates, or other info can change over time.

2) Complex information, like technical data shown in tables and charts. This is information we generally want to look at from multiple angles. Take an expense table showing each expense’s category, date, amount, and more: sometimes we want to see only certain dates or categories, and sometimes we ask, “what are the total expenses per category?”… A similar approach applies to charts and other smarter elements, but essentially they all come from static information displayed in a static way. For this type of information, we choose tools like Excel and Google Sheets, which already include a lot of formulas and chart-creation tools to help us reach the conclusions we want.

The differences between these tools are subtle, but with enough optimization they can make a big difference in how we work day to day. In a lot of cases, if we create and edit these documents frequently enough, we want to automate the process.

Now, after talking about the reasoning, what are the sample scenarios where we may want more from the traditional tools we use?

Traditionally, we use Word, Google Docs, and Quip-like tools to create story-heavy documents with text formatting, images, and other elements. And we use Excel, Google Sheets, and Airtable-like tools for spreadsheets, showing tables or data that we can analyze easily.

What about other types of repetitive data, or ways to better understand things like calendar-based information (like a marketing calendar), or the same simple bullet-point list but with more context, as a to-do list presented in a more visual way that also shows its progression? Eventually, we are all talking about tabular-like data with multiple attributes, displayed differently when it comes time to read/consume it.

Here are a few tools I really like that are worth checking out in this matter:

Quip

Quip is a very plain and clean tool. It does not have super-smart features, but it has enough to be one of the easiest to learn and most portable, and it has been around long enough to be very reliable while slowly becoming more powerful. The features I like and use often in Quip are:

  • Spreadsheets embedded in regular docs
  • Project tracker
  • Calendar
  • Kanban board

All of these components are very plain and bare-bones in Quip. It’s ideal for quick drafting when documenting project plans and other things, but the components are not advanced enough to export and utilize with other platforms. So it falls short when I need more capability from them in my documents – or at least linking with existing systems so we can display information in our documents (like monthly planning).

https://quip.com/

Notion

Notion is the new kid on the block, and it is filled with a lot of advanced views and custom “data” modeling (they call it a “database”). You can create a database of anything and display it in a lot of different views: calendar view, board view, gallery view, list/table view…

With good design, you can plan and manage a lot of things in Notion. In some respects, it can easily become a company/team knowledge base as well as a task/project management tool.

I loved Notion except for one hard blocker: the mobile experience on iPad with a keyboard. I had to change a lot of the common-sense navigation and editing gestures I use pretty much everywhere in order to edit content properly in Notion.

Another, more minor issue is that the pricing is way steeper than the tools we currently use for a small team. The free quota fills up very, very quickly for a team like ours that produces a lot of written documentation.

https://www.notion.so/

Airtable

If you think with a spreadsheet mindset, you’ll love Airtable. To me, Airtable is actually a database engine. I find it extremely API-friendly: if you want to write code that feeds data into tables with views and such, Airtable is perfect. I’ll write about using Airtable as a light db via its API in a separate article later.

Airtable has smarter table management and can also display the same table data in different views, like calendar and board views.

https://airtable.com/

Coda

I kind of liked certain aspects of Coda, but I didn’t like the UX that much, and I found its mobile experience a big deal-breaker. Still, its approach is unique, and it’s promising if the makers can catch the wave against the other tools out there.

https://coda.io/

Traditional Tools Can Do That Too

Some of this stuff has been in traditional tools for a long time but is not utilized well, or it is an advanced topic within its own environment, requiring technical knowledge or a steep learning curve (like Excel macros). The only exception I still use and encourage teams to utilize is Google Docs’ Apps Script. We use Google Sheets for exported data and for importing data into our microservices. We also use plain documents for planning projects, content, and other things.

One of the common things we did actually turned into a team product. Check it out: https://sheet2cal.com/

Living in the cloud as a developer

Photo by Rodion Kutsaev on Unsplash

As tech users, we live much more in the cloud than we think; we have stuff all over the place. As we move into a new decade, we have passed the concept of local storage on our devices. The local storage on our devices now mostly serves as a large hard cache of our stuff. Our things consist of photos (most of which we now produce on our mobile devices), documents we write, and data from other apps, backed up by OSes like iOS and Android and already protected by automated backups to our cloud accounts.

So the digital work environment has become very portable. And as hardware advances, our need for powerful machines is moving to streaming services, even in hardware-intensive cases like gaming…

As developers, most of us don’t require very powerful machines to work with. There are exceptions, like people whose development crosses into gaming, 3D, video, or production… But most engineers already work in teams that heavily utilize cloud services, connected in a harmonious way to build our code remotely.

So our local development environments are now more about comfort than requirement. I have personally faced, and forced myself into, the “on the go” situation, experimenting multiple times with living on my iPad as a primary machine. It showed me that all I need is a comfortable portable machine that can handle basic remote connectivity (like SSH, RDP…) and a few tools for navigating our codebase and running our local development setup. Even the local development setup can live on a remote instance, accessed with tools that make it feel like you are working on your local machine.

All of this requires a high level of connectivity. But more often than not, the tools we work with require connectivity anyway. Regardless of whether the hardware we carry is a high-performing machine or not, our tools and workflows revolve more around connectivity than around the speed or capabilities of our local environment.

I have also started to see a lot of web-based tools that come very close to, or in some cases indistinguishably mimic, the native tools we use, like the web-based IDEs now being integrated into GitHub/GitLab that provide the comfort of a customized setup.

Judging by the majority of engineering teams that utilize continuous development pipelines, these pipelines dictate how connected and collaborative our workflow has to be. This is the main reason every new addition or tweak to the generally accepted development journey is now connected and in the cloud.

Developing and Deploying Nodejs (Express) apps on Heroku

Heroku is an amazing platform for getting quick development up and running on smart virtual instances. There is no hassle in getting the additional services you may need to build a quick-and-dirty app from the ground up. I’ve already written about how to use Heroku for a quick development environment: https://mfyz.com/using-heroku-for-a-quick-development-environment/

This short article is specifically about developing and deploying node.js and Express apps on Heroku. There is actually not much difference between deploying a node.js app and a PHP application or one in another language. Heroku automatically detects a node.js application from its package.json file, along with its entry point.

For the Express-related parts, just go ahead and see the very simple example I put up on GitHub in the past:
http://github.com/mfyz/heroku-nodejs-express-helloworld

Another, more detailed Express example that uses the pug template engine for its layouts and views:
https://github.com/mfyz/express-pug-boilerplate

Aside from the application itself, there are a few key points I have found helpful when creating and deploying node.js apps.

Use environment variables

Using environment variables is the best way to set configuration details for your application.
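For example, Heroku injects the port your app must bind to as an environment variable, and any config you set with heroku config:set shows up the same way. A minimal sketch:

```js
// server.js — read configuration from environment variables.
const express = require('express');
const app = express();

// Heroku provides PORT; fall back to 3000 for local development.
const port = process.env.PORT || 3000;

app.get('/', (req, res) => res.send('Hello!'));

app.listen(port, () => console.log(`Listening on ${port}`));
```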

Setting which node.js version your app will use

It is as simple as adding an “engines” object in package.json and defining your node.js version in the “node” property inside it, like:
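```json
{
  "engines": {
    "node": "12.x"
  }
}
```

(The version number here is just an illustration; pin whatever your app is tested on.)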

The same applies to npm and yarn versions, which can also be defined within the engines object.

Use prebuild and postbuild steps to prepare your application build

By default, Heroku will build your application on every deployment. This is not very meaningful for pure node.js applications, but your app may need a build step, like gulp, grunt, or webpack builds. For this, Heroku will run the “build” npm script if it exists in package.json. Aside from this, Heroku will always install dependencies with npm install as a minimum build step. If you need additional steps before or after the build, you can define them as npm scripts named heroku-prebuild and heroku-postbuild.
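For example (the script bodies here are placeholders):

```json
{
  "scripts": {
    "heroku-prebuild": "echo Preparing build...",
    "build": "webpack --mode production",
    "heroku-postbuild": "echo Cleaning up after build..."
  }
}
```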

Utilize Heroku add-ons

Remember, Heroku comes with tons of 3rd-party services, many of which have free packages that are enough to try things out and start coding quickly. One of my favorites is Heroku’s own database service, which provides a PostgreSQL database with a single command:
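At the time of writing, the free tier is the hobby-dev plan:

```sh
heroku addons:create heroku-postgresql:hobby-dev
```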

Wrapping up

All in all, Heroku is a great cloud platform that allows developers to kick off ideas, starting with simple code that can grow into complex distributed applications very easily. In my opinion, it should be among the go-to tools in every engineer’s arsenal.

Importance of changelogs

It’s very important to keep a changelog and be disciplined about maintaining it as your product progresses. This applies to both product management and software development/management processes.

I want to mention a great resource that describes changelogs and is almost accepted as a standard for them in the open-source community:

https://keepachangelog.com

What is a changelog?

A changelog is a file that contains a curated, chronologically ordered list of notable changes for each version of a project.

Why keep a changelog?

To make it easier for users and contributors to see precisely what notable changes have been made between each release (or version) of the project.

Who needs a changelog?

People do. Whether consumers or developers, the end-users of software are human beings who care about what’s in the software. When the software changes, people want to know why and how.
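As a quick illustration of the format keepachangelog.com suggests (the version numbers and entries below are made up):

```markdown
## [1.1.0] - 2020-03-15
### Added
- Dark mode toggle in settings.

### Fixed
- Crash when opening an empty document.

## [1.0.0] - 2020-01-02
### Added
- Initial release.
```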

An example Changelog (from Stretchly)

GitHub renders this markdown changelog beautifully: https://github.com/hovancik/stretchly/blob/master/CHANGELOG.md

Using Cloudinary for image cloud storage with image transformations in your NodeJS express app on Heroku

Here we are with another article about the development aspects of photo/image management (storage, serving, retrieval). Right before this article, I wrote about Authenticating and Getting Instagram Photos in a NodeJS/Express application. This story is about manually storing and handling the upload, download, and serving of static photos/images using a CDN service called Cloudinary.

Content should be separate from the application

We (web/back-end/front-end developers) build apps and sites in many different ways (different platforms, languages, stacks). One very common, old-school pattern is organizing everything on the site in the same bucket: when we code and deploy a site, its HTML, CSS, back-end code, images, videos, fonts, etc… all live in the same place. Now we have distributed deployment systems with multiple instances of our application on different web servers, which pushed us into a soft transition of keeping common, changeable files (like upload folders) in block storage such as S3 or Azure Blob… But that still doesn’t do full justice to the principle that both the static and dynamic content of an application/website should be completely separated from the application code. This is not a new practice, but it’s one that can be missed very easily.

It’s so easy to leave an image used in a blog post inside the codebase (which is wrong). Static content that will not be used to render your page on the back-end ideally doesn’t belong where you store your application code. It also shouldn’t be served (or requested) from the same servers that are (and should only be) responsible for rendering and serving your pages. Tiring your web server with serving images or compiled CSS is not optimal, and it hurts your site’s performance when everything comes from the same source. Distributed sources let the browser manage parallel downloads of your site’s resources faster. No-brainer!

We won’t get into the techniques of how to separate these different things into different places in this article; we’ll talk about images specifically.

Images need a big brother

This was a novelty in the past: we wanted multiple sizes of an image/photo so we could economically request the right size on different pages – for example, get the 100px-wide thumbnail on the page where we show photos in a grid, show the 500px-wide version in the lightbox, and link out to the original photo from the “download” button. Makes total sense, right? Strategizing how to have the different versions ready on your server (or CDN) is another matter. There are many self-hosted solutions you can use, or you can code it up yourself. But from the user-experience (admin/editor) standpoint, nobody wants to do this manually, or even wait for the server to resize and prepare these versions when uploading a single photo to a CMS/app/back-end. To me, it should be a one-way road: I only want to upload an image and wait only as long as the file transfer from my device to the server takes. Done!

What is Cloudinary and should I use it?

Cloudinary is that big brother, storage and server together: a smart CDN for images and videos. It has a pretty decent free package that will be enough for almost all personal, experimental, and small projects. If you have decent traffic, you may think about paying or optimizing your solution with Cloudinary.

Cloudinary hosts and serves images for you. It will be your storage bucket, and it also has many out-of-the-box solutions for well-known CMSs like WordPress. I like the API/SDK route: they have SDKs and a well-designed API for almost all platforms. We’ll play with NodeJS below.

Cloudinary’s compelling magic is that it can do a huge variety of transformations on your images on the fly (and it caches them). Basic things like color filters, crop, resize, rotate, etc… But the real thing is face recognition: you can create square avatars by intelligently telling Cloudinary to center the face in a circle-cropped avatar, with a transparent PNG background and a 2px border around it. All of it happens via URL parameters. True magic. I haven’t dug into the video side of this, but I’ve read a bunch of smart things about the streaming side, which makes Cloudinary worth considering as a one-stop shop for visual static assets.

Add the Cloudinary service to your Heroku application

Adding a service to a Heroku application is very easy and mostly done on the command line. To create a new Cloudinary service as an add-on to your application, run the following in your application folder:
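At the time of writing, the free plan is called starter:

```sh
heroku addons:create cloudinary:starter
```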

This command will create a new Cloudinary account linked to your Heroku account and add the Cloudinary credentials to your Heroku config as environment variables. You can see and copy the variables to your local .env file with:
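```sh
heroku config:get CLOUDINARY_URL
```

The SDKs read this single CLOUDINARY_URL variable to configure themselves automatically.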

Using it in a nodejs/express app

Install the SDK first:
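For the sketch below I’ll also pull in multer for multipart upload handling (that middleware choice is mine, not necessarily what the original example uses):

```sh
npm install cloudinary express multer
```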

server.js
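A minimal sketch of an upload endpoint (route and folder names are placeholders):

```js
// server.js — minimal Express app that uploads an image to Cloudinary.
// The Cloudinary SDK configures itself from CLOUDINARY_URL in the environment.
const express = require('express');
const multer = require('multer');
const cloudinary = require('cloudinary').v2;

const app = express();
const upload = multer({ dest: '/tmp/uploads/' }); // temp storage for incoming files

app.post('/upload', upload.single('image'), async (req, res) => {
  try {
    // Push the temp file to Cloudinary; the response includes the CDN URL.
    const result = await cloudinary.uploader.upload(req.file.path, {
      folder: 'demo', // hypothetical target folder
    });
    res.json({ url: result.secure_url });
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

app.listen(process.env.PORT || 3000);
```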

See this example on github: https://github.com/mfyz/heroku-cloudinary-uploads-example

WordPress and other sites

Cloudinary has SDKs and official plugins for well-accepted platforms like WordPress. Check out their official documentation about the different libraries and plugins.


You can also use my invitation link to give me extra free credits: https://cloudinary.com/invites/lpov9zyyucivvxsnalc5/cdlhm6z9q63gdufko1kj

Single JavaScript file node/express Instagram authentication (OAuth) and getting user photos

Get Ready

Register your Instagram app in their developer portal and obtain the client id and client secret keys. To do that, follow the steps below:

  • Go to IG Developer page: https://www.instagram.com/developer/
  • Click “Register your application” button (if you are not logged in, you will be asked to log in on this step)
  • Fill in all fields. The only field you need to pay attention to is “valid redirect URLs”. This is where your app is publicly hosted. Below, we will create a route in the application to capture the Instagram authentication after the user goes through the Instagram permission dialog and comes back to this URL. It’ll be something like https://yoursite.com/instagram/callback
  • Once you register the app, the page will display the client id and secret. Keep these ready for the next steps.

Code it up

Let’s set up a plain node.js and Express application.

First, install the required packages:
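For this sketch, that’s Express plus an HTTP client for the token exchange (I’m using axios here; the repository linked at the end may use a different one):

```sh
npm install express axios
```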

index.js (or server.js)
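A condensed sketch of that single file, using the legacy Instagram API endpoints this article was written against (Instagram has since replaced this API, so treat the URLs as historical):

```js
// index.js — single-file Express app: Instagram OAuth + recent photos.
const express = require('express');
const axios = require('axios');

const app = express();

const CLIENT_ID = process.env.IG_CLIENT_ID;       // from the IG developer portal
const CLIENT_SECRET = process.env.IG_CLIENT_SECRET;
const REDIRECT_URI = process.env.IG_REDIRECT_URI; // e.g. https://yoursite.com/instagram/callback

// Step 1: send the user to Instagram's permission dialog.
app.get('/instagram/login', (req, res) => {
  const params = new URLSearchParams({
    client_id: CLIENT_ID,
    redirect_uri: REDIRECT_URI,
    response_type: 'code',
  });
  res.redirect('https://api.instagram.com/oauth/authorize/?' + params);
});

// Step 2: Instagram redirects back here with ?code=...; exchange it for a token.
app.get('/instagram/callback', async (req, res) => {
  try {
    const tokenRes = await axios.post(
      'https://api.instagram.com/oauth/access_token',
      new URLSearchParams({
        client_id: CLIENT_ID,
        client_secret: CLIENT_SECRET,
        grant_type: 'authorization_code',
        redirect_uri: REDIRECT_URI,
        code: req.query.code,
      })
    );

    // Step 3: fetch the user's recent photos with the access token.
    const mediaRes = await axios.get(
      'https://api.instagram.com/v1/users/self/media/recent/',
      { params: { access_token: tokenRes.data.access_token } }
    );
    res.json(mediaRes.data);
  } catch (err) {
    res.status(500).send('Instagram auth failed: ' + err.message);
  }
});

app.listen(process.env.PORT || 3000);
```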

Deploy and run

Either locally or after you place it on your server, run:
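```sh
node index.js
```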

Tip 1: use “forever” on your server to run this application permanently.

Tip 2: For experimental purposes, you can run this app locally and use a tunneling tool like “ngrok” to expose your local port to the public with a quick domain assignment. Ngrok will provide a URL (a random subdomain on their domain); you have to update the IG developer app’s settings to add this domain as a valid redirect URL, otherwise, after this app redirects the user to Instagram for authentication, it will give an error.

Get the real thing

The code above is a quick-and-dirty version. I put a slightly more proper Express application version on GitHub. It uses pug for its views and has proper layout/content block separation as well.

https://github.com/mfyz/express-passport-instagram-example