Why every developer needs to know Google Sheets & Excel programming

I’ve recently talked about different cloud documentation services in Smart(er) Documents – Quip, Notion, Airtable, Coda or Good Old GDocs & GSheets, as well as my take on smarting up Google Docs.

Let’s dive right into a few key reasons why every developer should know Google Apps Script and get comfortable working with GAS in Google Sheets and Docs.

1) Provide your technical output (data) on common ground (a tool that pretty much any computer user already knows, or can intuitively learn if they don’t)

I strongly believe in any tool that allows developers and non-dev roles in a team to communicate complicated information effectively – generally data sets that eventually get used for some form of storytelling (website/blog content, an internal or external product performance report, a financial or behavioral analysis, etc.). Among developers only, we always find a hundred different ways to express what we want to show to the world in the form of scripts, using whatever libraries and tools are at hand. But when it comes to handing over our output (whether it’s a SQL result in CSV format or a dynamic data set in a service), both we as developers and the party receiving our work start with a lot of constraints. They also have to learn whatever data format we give them. This makes the collaboration one-directional, starting from the developer and ending with the non-dev role working with that data (marketing people, product management or executive roles).

The real trouble starts when we have to repeat the same work over and over, because we can usually only export the desired data model from whatever tool we’re using as static CSV/Excel exports if it’s tabular data. If you are doing the same or similar work multiple times, it pays to think about ways to automate the process. Consider a simple family expense tracker in Excel/Google Sheets (because everybody knows, or can easily learn, how to work with spreadsheet tools). In this hypothetical scenario, let’s say there is a database or API that provides your credit card statements (there are actually services that extract this information from your bank – but for security reasons, the authentication layer gets very complicated). As a developer you can start the story from “extracting the data”, and your output will almost always store the extracted data somewhere – most likely a database.

You may or may not be the “analyzing” person in the family, and even if you are, you may need to review your analysis with the rest of the family and get their own take on the dataset for truly collaborative understanding and iteration.

Google Apps Script has many ways to connect to 3rd-party APIs to pull data and, done well, can run these operations automatically (refreshing the data set on a schedule) or manually through UI interactions – essentially letting the developer’s role be completely automated inside a “smart” Google Sheet or Doc.
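As a rough illustration, here is a minimal Apps Script sketch of that idea – the endpoint, sheet name and field names are placeholders for whatever service you would actually pull from:

    // Pull JSON from a (hypothetical) statements API and refresh a sheet
    function refreshStatements() {
      var response = UrlFetchApp.fetch('https://api.example.com/statements'); // placeholder endpoint
      var rows = JSON.parse(response.getContentText());

      var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Statements');
      sheet.clearContents();
      sheet.appendRow(['Date', 'Merchant', 'Amount']); // header row

      rows.forEach(function (r) {
        sheet.appendRow([r.date, r.merchant, r.amount]);
      });
    }

    // Optional: refresh once a day instead of (or in addition to) a manual UI action
    function installDailyTrigger() {
      ScriptApp.newTrigger('refreshStatements').timeBased().everyDays(1).create();
    }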

Once this is set up right, it can be copied and extended easily. It goes back to the traditional “macro-enabled Excel template” approach, but it really works out well for everybody. And the Google Docs products are true real-time collaborative tools: you can literally open laptops on a conference call, edit together and work on the same information, bringing it to an understandable level with simple aggregate analysis, breakdowns or even charts, without having to reinvent the wheel and code all of this from scratch.

2) Mock up / bootstrap a lot of ideas in a simpler form without needing to code much – perhaps even for long-term use cases

As developers, more than half of what we do is automation. We code so we don’t repeat ourselves. It’s a basic human desire to invest our brainpower in avoiding time-consuming, less intellectual work. Our brains are wired that way from the time we are born: they always look for shortcuts. Developers are the hackers who discover these shortcuts and implement them in the digital realm.

Whether you work alone or with a team, there will always be tasks that take multiple steps to produce an output. To avoid repeating yourself, you will want to build tools that help you achieve the same output with fewer steps. We often use the buzzword “one-click” to describe that magical, robot-like process when we talk about an “easy” tool. Whether the steps shrink to one or just to a few, there is always room for improvement in the work we do every day.

Developers tend to jump into their own comfortable environment and write scripts to do things for themselves. I don’t know any developer who doesn’t have scripts that resize images or orchestrate common tasks, so whenever we need to do the same operation, we just push the “one-click” button and see the output. Most of the time we invest a lot of effort in building these scripts, no matter how proficient we are with the languages and environment we know. We also always have trouble sharing this work with others on the team, or we invest even more time converting these scripts so they can be used by teammates or publicly by anyone on the internet.

Google’s Docs products give us a great canvas, and their constraints positively shape what we build, making sure that whatever we create in Google Docs, Sheets or Slides stays in a familiar format that other developers and non-dev team members can use.

To be clear, Google Apps Script has real limitations, especially if you are running resource-heavy tasks on Google’s servers or in the browser. But it is also super easy to start in a very basic form, so you don’t have to worry much about cosmetics or look and feel.

3) Spreadsheet tools are essentially databases

Every developer has to know relational database modeling and how to work with data of any size and structure. At the core of relational databases, you work with tables and the relationships between them.

Google Sheets and Excel are great tools that developers and non-developers can use extensively together, because a spreadsheet is both a visual tool and structured data by nature.

The reason both tools have programming interfaces is that it’s one of their core use cases. I also think the companies behind them were pushed into offering these programming features because it’s probably one of the most requested capabilities.

All in all, using a spreadsheet file as a database – reading structured information from it as well as inserting new rows or querying and updating existing ones – is one of the most common use cases from a programming perspective. So whether you like it or not, you will eventually land on work where the “users” of your product already use (or will use) spreadsheet files alongside your product. At minimum you will have to learn how to process these files (import, ingest) and how to expose your product’s data in these formats (export features).
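To make that concrete, here is a small Apps Script sketch that treats a sheet like a table – the sheet name, columns and values are just examples:

    function sheetAsDatabase() {
      var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Expenses');

      // "SELECT *": first row is the header, the rest are records
      var values = sheet.getDataRange().getValues();
      var records = values.slice(1);

      // "INSERT": append a new row at the bottom
      sheet.appendRow([new Date(), 'Groceries', 42.50]);

      // "UPDATE": change the amount in the second data row (row 3, column 3)
      sheet.getRange(3, 3).setValue(45.00);

      return records;
    }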

The best way to see how common this need is: look at the most popular automation tools on the internet, like IFTTT and Zapier. One of the first integrations they built was Google Sheets.

I personally have many Google Sheets files that are fed by IFTTT or by workers I run on my servers, exporting activities daily/weekly/monthly or on a trigger. Any time I want to see what’s going on and analyze my activities (personal ones like my driving history, when I leave or enter the NYC area, or my credit card expenses), I can easily build a pivot table to see the trend, or group things by category or other angles to ask different questions of the same data. I can’t even begin to describe how many variations of this scenario exist for business purposes. Analytics alone covers a long list of things you (or a non-dev team member) can and will want to do with data sets like these.

Wrapping Up

The last reason on the list is probably the strongest from a professional skills perspective: you pretty much have to know your way around both static Excel/CSV files and the Google Sheets API – from its authentication steps to the actual endpoints/methods for working with spreadsheets.

But #1 and #2 are more important if you are (or want to be) a resourceful problem solver. Every developer is, at the end of the day, a non-stop problem solver, and having Google Sheets in the toolbox is a must.

Just launched my newest product on Product Hunt: Sheet2Cal – organize calendar events right from your Google Sheets doc.

Sheet2Cal helps you sync your event data from Google Sheets to your favorite calendar app. Plan and collaborate on your events (social media/editorial content, wedding & travel planning) freely in Google Sheets and export them easily to a calendar subscription using Sheet2Cal.

Sheet2Cal creates a complete calendar event using the information in any Google Sheets doc you choose. It helps you schedule and update your calendar events (such as social media, wedding & travel planning) right from your Google Sheets doc.

Back Story

At Nomad Interactive, we love creating tools that make our lives easier – and we share them with the world when we think they can have the same impact for others! Sheet2Cal was born out of such a need.

When uploading a shot to Dribbble, we did our content planning – title, description and so on – in a Google Sheets document for team review and collaboration. In addition, we created a calendar event with similar information on our shared calendars as a reminder of when to publish each post.

During the review process, every change we made in the post document had to be repeated in the relevant calendar entry. To remove this unnecessary step, we developed Sheet2Cal and started doing our entire calendar organization from our main Google Sheets document, from creating an event to updating it. Then we realized this would work for many different subjects and decided to share it with everyone for free!

Now you can use Sheet2Cal for your own similar needs! If you run into any problems or have anything you want to ask, we would love to help as a team!

To show your support, you can visit the Product Hunt page and give us a 💬 shout or an 👍 upvote!

You can also visit the project page here: https://sheet2cal.com/

Smarting up Google Docs and Sheets

In this post I’d like to talk about growing your experience with Google Docs and Google Sheets and using them for more complex needs and functions.

A recent theme here has been ways to use collaborative cloud services for documentation, whether for personal or business/team communication. I’ve also talked about going beyond plain written documents to smarter, more complex forms of information in the Smart(er) Documents – Quip, Notion, Airtable, Coda or Good Old GDocs & GSheets post.

I like Google Docs from an availability perspective: there is no complicated pricing structure, and Google keeps it open to personal Google accounts, so most people can access Google Docs even if they don’t have a company account (i.e., they’re not using GSuite). I also like that Google Docs is visually familiar – the form we’re all used to from Microsoft Office or other office suites (OpenOffice, LibreOffice). That same familiarity may also make the Google Docs products look outdated compared to more modern collaborative editing tools like Notion or Quip.

All of these services have powerful APIs, but probably none as robust as Google Docs’. Aside from its powerful API, Google Docs also has a secret weapon I’d like to introduce lightly in this post: Google Apps Script. It’s a very broad and under-discussed topic online, and it gives the Google Docs tools a huge edge. I may focus on sub-topics of Google Apps Script down the road.

Google Apps Script

Google Apps Script is a scripting/automation feature of the larger family of Google products, including Gmail, Google Calendar, Google Drive and a few other important Google products you may already be using.

Google Apps Script runs in the cloud alongside the Google tool you’re using, and you write and trigger it right from your browser (or mobile device). Scripts can register additional UI elements in the tools you use (such as a new menu item), watch/listen for changes in the documents you create (events like a row being updated in Google Sheets), or even map parts of your script to elements in the document content you are creating (buttons, dropdowns, checkboxes, etc.).
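A tiny sketch of both ideas – a custom menu item and an edit listener (the menu label and the function it calls are placeholders):

    // onOpen and onEdit are "simple triggers" that Apps Script calls automatically
    function onOpen() {
      SpreadsheetApp.getUi()
        .createMenu('My Tools')
        .addItem('Refresh data', 'refreshData') // 'refreshData' would be a function you define
        .addToUi();
    }

    function onEdit(e) {
      // e.range points at the edited cell(s)
      e.range.setNote('Edited ' + e.range.getA1Notation() + ' at ' + new Date());
    }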

So far, nothing other tools can’t do. Microsoft Excel and Word have had their famous macros for roughly 20 years or more, and almost every alternative suite has some form of automation with similar capabilities. The real power of Google Apps Script comes when you combine Google’s online/cloud services – Google Drive, Google Maps, Gmail – with your documents. It makes documents interactive with other services, much like using those services’ APIs. Another big thing that warms me to the Google Docs tools is real-time collaboration, which makes the collaborative writing/editing experience superb. My team and I often write content together in real time, and I find the conversational aspect of that collaboration priceless.

Google Apps Script is based on JavaScript, so you write JavaScript snippets to define your custom functionality, with a long list of Google services APIs ready to use.

My favorite use cases of using Google Apps Script

Pull a dataset from our internal services or public sources dynamically into Google Sheets

We use this most often with the analytics services we run internally. These sheets are generally reports we create once but frequently refresh with the latest version of the data.

Google Sheets already has IMPORTDATA and IMPORTXML functions that pull CSV- or XML-formatted data easily. But often we use a service we haven’t built that doesn’t expose its data as CSV or XML – usually a REST API that returns JSON. You can use a helper like https://github.com/bradjasper/ImportJSON, or write a custom processor in Google Apps Script to pull the data and shape it the way you want. We often do the latter.
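A bare-bones version of such a custom processor might look like this – the endpoint and field names are placeholders, and the function returns a 2D array so it can be used as a formula, e.g. =IMPORT_MY_API("users"):

    function IMPORT_MY_API(resource) {
      var response = UrlFetchApp.fetch('https://api.example.com/' + resource); // placeholder API
      var items = JSON.parse(response.getContentText());

      var rows = [['id', 'name', 'created_at']]; // header row
      items.forEach(function (item) {
        rows.push([item.id, item.name, item.created_at]);
      });
      return rows; // a 2D array spills into the sheet, much like IMPORTDATA
    }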

Add custom functions to Google Sheets

We use this a lot to create custom functions (generally pulling data from other cloud tools we use), like getting Trello card details (title, status, assignee), e.g. =TRELLO(eio3u48d), or calling other public services, like getting the weather forecast for a zip code with =WEATHER(11222).
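For instance, a =WEATHER() custom function could be sketched like this – the weather API URL and response shape are made up for illustration, and a Trello version would follow the same pattern with Trello’s REST API and an API key/token:

    function WEATHER(zip) {
      var url = 'https://api.example-weather.com/forecast?zip=' + zip; // placeholder endpoint
      var data = JSON.parse(UrlFetchApp.fetch(url).getContentText());
      return data.today.summary + ', ' + data.today.tempF + '°F';
    }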

Send emails from Google Docs or Sheets using your Gmail account

This goes into workflow automation. As I mentioned above, with Google Apps Script you can map UI elements (menus, or button-like elements in the document content) to a custom trigger in your script that does something for you. We sometimes create sheets or docs with form-like formats, or – in the Google Sheets scenario – an action to be taken by the user for each row of data. For this example’s sake, think of a contact list with name, email and thank-you-note columns. We use Google Apps Script to create a button-like action item in a column we define (say, the column next to the thank-you note) with a label like “Send Thank You Note”. With Apps Script, we can register this column to accept clicks and trigger an Apps Script function. The function can then read the clicked row number and the values in that row for the name, email and thank-you note. Then, with a few lines of code, we can use the Gmail service API (without complicated SDK installations and – more importantly – without dealing with authentication) to send an email to the recipient with the content we want (in this case, the thank-you-note column as the email body). This is a huge convenience compared to building the same capability in a service or with custom code from scratch.
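Here is a simplified sketch of that flow. It assumes Name, Email and Thank You Note are columns 1–3 and writes a “sent” marker into column 4; you would wire sendThankYouNote to a custom menu item (the onOpen pattern shown earlier) or to a drawing/button in the sheet:

    function sendThankYouNote() {
      var sheet = SpreadsheetApp.getActiveSheet();
      var row = sheet.getActiveRange().getRow();                 // the row the user clicked on
      var values = sheet.getRange(row, 1, 1, 3).getValues()[0];  // Name, Email, Thank You Note

      var name = values[0];
      var email = values[1];
      var note = values[2];

      GmailApp.sendEmail(email, 'Thank you, ' + name + '!', note);
      sheet.getRange(row, 4).setValue('Sent ' + new Date());     // mark the row as done
    }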

Push Google Sheets data to your calendar and keep it updated

Another great use case is pushing timeline-based planning to Google Calendar and keeping it updated. We do this in a similar fashion to the previous scenario, but using the Google Calendar service in Google Apps Script.
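A stripped-down sketch of that sync – it assumes a 'Plan' sheet with Title, Start, End and Description columns and a calendar ID you own; a real version would also store each created event’s ID back in the sheet so later runs update events instead of re-creating them:

    function syncSheetToCalendar() {
      var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Plan');
      var calendar = CalendarApp.getCalendarById('your-calendar-id@group.calendar.google.com');

      var rows = sheet.getDataRange().getValues().slice(1); // skip the header row
      rows.forEach(function (r) {
        var title = r[0];
        var start = new Date(r[1]);
        var end = new Date(r[2]);
        calendar.createEvent(title, start, end, { description: r[3] });
      });
    }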

Wrapping up

There are many more interesting use cases for Google Apps Script, and there are many community-created and maintained lists and directories of great Apps Script examples and resources.

https://github.com/contributorpw/google-apps-script-awesome-list is a good start. The GitHub topic search is also worth a look: https://github.com/search?q=topic%3Agoogle-apps-script

The official documentation of google apps script is here: https://developers.google.com/apps-script

Launch of my latest product Screenshot Tracker

🚨 I’ve just launched a new product (also on 😸 Product Hunt): 📸 Screenshot Tracker, a desktop app that helps capture full web page screenshots at different resolutions at once.

Screenshot Tracker helps you to collect:

  • 🖼 Full-size screenshots for your web pages
  • 🎳 Multiple URLs at once
  • 📱 In multiple resolutions (desktop, tablet portrait, landscape, and mobile)
  • 🔁 Repeat/Retake screenshots

with one click!

Screenshot Tracker is great for designers and developers who want to check their work while iterating on a web page’s design and development.

And it’s 100% free & open source. PRs welcome!

Project Homepage: https://screenshot-tracker.nomadinteractive.co/

Also, show your support on the Product Hunt page.

Working with Memcache in your Node.js app on Heroku

I’ve written about Heroku in the past, showing my love for this amazing service. Heroku lets developers start writing and deploying apps far more easily than any other service, which makes it possible to try things out and bootstrap ideas in a matter of minutes. I talked more about Heroku in this article: https://mfyz.com/using-heroku-for-a-quick-development-environment/

Heroku Marketplace

I’ve also mentioned before that Heroku has a 3rd-party add-on marketplace. It covers all kinds of application infrastructure and cloud services, from fully hosted databases and CDNs to transactional email delivery. The marketplace is filled with countless gems that are worth a weekend of exploration – great services that work with Heroku within a few clicks or commands.

Why cache? Why Memcache?

One of them – a primary need for almost every distributed cloud application – is fast, memory-based caching. There are different ways and models to store and use data in memory-based caches, but the most common is key-value storage with a TTL (time to live), which keeps the stored data short-lived and cleaned up regularly.

The primary reason to use a memory-based cache is to optimize application performance. In general terms, you cache frequently used data to cut down on database queries and disk reads, as long as you are reasonably confident that the data you are caching doesn’t change often while it lives in the cache. Even when the data can change, there are simple ways to refresh the cache on “update” events to make sure the cached data is always valid, still cutting expensive operations out of the code flow.

As I mentioned, there are many different memory-based storage engines, protocols and services. One of the simplest yet most effective and widely used is Memcache. It is an open-source tool that can be easily installed and hosted on any server, running as a daemon and accepting storage requests and queries on an open port. Memcache is designed to store and serve all of its data from the server’s physical memory, so it is very fast and responsive when combined with a high-throughput network between the Memcache and application servers.

Heroku also supports multiple providers for Memcache and other memory-based storage engines. One particular service, MemCachier, makes it very seamless to create an instance and set it up on a Heroku application.

Memcachier

The free MemCachier instance can be added without any further account details on an external site, but it comes with very limited resources: only 25 MB of total cache size and a 10-concurrent-connection limit. Even though that number is small, a Node.js application’s connections are mostly long-lived, so it will be more than enough if you are not planning to have many environments connecting at once – certainly enough to get a quick Memcache server up and running to build things on. For more details about the MemCachier service and its pricing model, the Heroku Elements page explains it all: https://elements.heroku.com/addons/memcachier

To add this add-on to your application, run the following command in your project folder:
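    # adds the MemCachier add-on on the free "dev" plan (plan names may change over time)
    heroku addons:create memcachier:dev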

This will add the MemCachier connection details to your Heroku config as environment variables (MEMCACHIER_SERVERS, MEMCACHIER_USERNAME, MEMCACHIER_PASSWORD). Copy them into your local .env file to be able to connect from your local development environment.

To use Memcache in a Node.js app, we will use an npm package called “memjs”. To install it, run:
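    npm install memjs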

Usage of the package is pretty straightforward; here is a short example adapted from MemCachier’s Heroku getting-started guide:
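    // Create a client from the environment variables the add-on sets, then get/set values
    const memjs = require('memjs');

    const mc = memjs.Client.create(process.env.MEMCACHIER_SERVERS, {
      username: process.env.MEMCACHIER_USERNAME,
      password: process.env.MEMCACHIER_PASSWORD,
      failover: true,  // default: false
      timeout: 1,      // default: 0.5 (seconds)
      keepAlive: true  // default: false
    });

    mc.set('hello', 'memcachier', { expires: 600 }, function (err) {
      if (err) return console.error(err);

      mc.get('hello', function (err, val) {
        if (err) return console.error(err);
        console.log(val.toString()); // values come back as Buffers -> "memcachier"
      });
    });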

It is as simple as calling the asynchronous get and set functions to store and retrieve a value by key.

I created a more visible version of this example with a relatively long, time-consuming mathematical calculation: an expensive calculation (or data retrieval, like a slow database query) is cached in Memcache, so it is calculated on the first request and served from Memcache on consecutive requests instead of being recalculated. In my example, I used a prime number calculation, which takes time.

https://github.com/mfyz/heroku-memcachier-example
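The gist of it, simplified (the nthPrime helper and the route shape here are illustrative, not the exact code in the repo):

    const express = require('express');
    const memjs = require('memjs');

    const app = express();
    const mc = memjs.Client.create(process.env.MEMCACHIER_SERVERS, {
      username: process.env.MEMCACHIER_USERNAME,
      password: process.env.MEMCACHIER_PASSWORD
    });

    // Deliberately slow helper, standing in for an expensive query or calculation
    function nthPrime(n) {
      let count = 0;
      for (let i = 2; ; i++) {
        let isPrime = true;
        for (let j = 2; j * j <= i; j++) {
          if (i % j === 0) { isPrime = false; break; }
        }
        if (isPrime && ++count === n) return i;
      }
    }

    app.get('/prime/:n', (req, res) => {
      const key = 'prime_' + req.params.n;
      mc.get(key, (err, val) => {
        if (!err && val) return res.send('from cache: ' + val.toString());

        const result = String(nthPrime(parseInt(req.params.n, 10)));
        mc.set(key, result, { expires: 600 }, () => res.send('calculated: ' + result));
      });
    });

    app.listen(process.env.PORT || 3000);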

Smart(er) documents – Quip, Notion, Airtable, Coda or good old GDocs&GSheets

I’ve talked about the importance of written communication before. I strongly believe that written communication is the best and purest way to accumulate and share knowledge. Most importantly, it allows all of us to communicate on our own terms/time/speed, enabling asynchronous communication.

This is key to eliminating unnecessary meetings and making sure everybody’s time is used well. It’s also a key requirement for scaling whatever knowledge transfer is needed between peers at work.

I’ve also mentioned multiple times that I use Quip personally and for my team’s communication and management. A lot of great tools came before and after Quip – notably the up-and-coming Notion. Regardless of the tool itself, our need to “document” has grown beyond just writing.

Writing is the storytelling part of documentation, and it’s necessary. Any tool that helps us write better, faster and with fewer errors (e.g. Grammarly) is good. But as an engineer, I hunger for more. Because of my educational background (statistics), I think a lot about displaying information in different ways and making it interactive. I also operate mostly in data-filled environments where there is a constant need to “simplify” information into readable, easier-to-digest formats. So I am always looking for more creative and fun ways to present data, or even a plain timeline of events.

I see two very common ways of documenting things.

1) A story, or instructions on some topic: how-tos, technical documentation, etc. These documents are generally static. What I mean is that we generally just read them; we don’t expect much interactivity or dynamic output from them. Although, if a document displays a few numbers, we may want to treat it as a report that should update with more recent versions – the story outline stays the same, but the numbers, dates or other mentioned info can change over time.

2) Complex information, like technical data shown in tables and charts. This is the kind of information we usually want to look at from multiple angles – for example an expense table showing each expense’s category, date, amount and more. Sometimes we only want to see certain dates or categories; sometimes we ask “what are the total expenses per category?”. A similar approach applies to charts and other smarter elements, but essentially they all come from static information displayed in a static way. For this type of information, we reach for tools like Excel and Google Sheets, which already include plenty of formulas and chart-creation tools to help us reach the conclusions we want.

The differences between these tools are subtle, but with enough optimization they can make a big difference in how we work day to day. In many cases, if we create and edit these documents frequently enough, we want to automate the process.

Now, having covered the reasoning, what are some sample scenarios where we may want more than the traditional tools give us?

Traditionally, we use Word, Google Docs and Quip-like tools to create story-heavy documents with text formatting, images and other elements, and we use Excel, Google Sheets and Airtable-like tools for spreadsheets, showing tables of data we can analyze easily.

What about other types of repetitive data, or ways to better understand things like calendar-based information (a marketing calendar, say), or the same simple bullet-point list given more context as a to-do list, displayed more visually and showing its progress? In the end, we are all talking about table-like data with multiple attributes, just displayed differently when it comes time to read and consume it.

Here are a few tools I really like that are worth checking out in this space:

Quip

Quip is a very plain and clean tool. It doesn’t have super smart features, but it has enough of them, it’s one of the easiest to learn and most portable, and it has been around long enough to be very reliable while slowly becoming more powerful. The features I like and use often in Quip are:

  • Spreadsheets embedded in regular docs
  • Project tracker
  • Calendar
  • Kanban board

All of these components are very plain and bare-bones in Quip. They’re ideal for quick drafting when documenting project plans and other things, but they are not advanced enough to export and use with other platforms. So Quip falls short when I need more from these components in my documents – or at least the ability to link with existing systems so we can display their information in our documents (like monthly planning).

https://quip.com/

Notion

Notion is the new kid on the block, filled with advanced views and custom “data” modeling (they call it a “database”). You can create a database of anything and display it in many different views: calendar, board, gallery, list/table…

With good design, you can plan and manage a lot of things in Notion. In some respects, it can easily become a company/team knowledge base as well as a task/project management tool.

I loved Notion, except for one hard blocker for me: the mobile experience on an iPad with a keyboard. I had to change a lot of common-sense navigation and editing gestures that I use pretty much everywhere else in order to edit content properly in Notion.

The other, more minor issue is that the pricing is far steeper than the tools we currently use for a small team. The free quota fills up very quickly for a team producing as much written documentation as ours.

https://www.notion.so/

Airtable

If you think with a spreadsheet mindset, you’ll love Airtable. To me, Airtable is really a database engine. I find it extremely API-friendly: if you want to write code that feeds data into tables with views and more, Airtable is perfect. I’ll write about using Airtable as a light database via its API in a separate article later.

Airtable has smarter table management and can also display the same table data in different views, like calendar and board views.

https://airtable.com/

Coda

I kind of liked certain aspects of Coda, but I didn’t like the UX that much. I found its mobile experience a big deal-breaker, but its approach is unique and promising if the makers can catch the wave against the other tools out there.

https://coda.io/

Traditional Tools Can Do That Too

Some of this has existed in traditional tools for a long time, but it either isn’t used well or belongs to the more advanced corners of those environments, requiring technical knowledge or a steep learning curve (like Excel macros). The one exception I still use and encourage teams to adopt is Google Docs’ Apps Script. We use Google Sheets for exported data and for importing data into our microservices, and we use plain documents for planning projects, content and other things.

One of the common things we built actually came out as a team product. Check it out: https://sheet2cal.com/

Living in the cloud as a developer

Photo by Rodion Kutsaev on Unsplash

As tech users, we live much more in the cloud than we think – our stuff is all over the place. As we move into a new decade, we have moved past the concept of local storage on our devices; local storage now mostly serves as a large cache of our stuff. Our things consist of photos (most of which we now take on our mobile devices), documents we write, and data in other apps backed up by operating systems like iOS and Android, already protected by automated backups to our cloud accounts.

So the digital work environment has become very portable. And as hardware advances, our need for powerful machines is shifting to streaming services, even in hardware-intensive cases like gaming…

As developers, most of us don’t require very powerful machines to work on. There are exceptions – people whose development crosses into gaming, 3D, video or production – but most engineers already work in teams that heavily use cloud services, connected harmoniously to build our code remotely.

So our local development environments are now more about comfort than a requirement. I have personally forced myself into the “on the go” situation multiple times, experimenting with living on my iPad as a primary machine. All I really need is a comfortable portable machine that handles basic remote connectivity (SSH, RDP…) and has the tools we use to navigate our codebase and run our local development setup. Even the local development setup can live on a remote instance, accessed with tools that make it feel like you are working on your local machine.

All of this requires a high level of connectivity. But more often than not, the tools we work with require connectivity anyway. Regardless of whether the hardware we carry is a high-performing machine or not, our tools and workflow revolve more around connectivity than around the speed or capabilities of our local environment.

I have also started to see a lot of web-based tools that come very close to – or in some cases indistinguishably mimic – the native tools we use, like the web-based IDEs now being integrated into GitHub/GitLab that provide the comfort of a customized setup.

Judging by the majority of engineering teams that use continuous delivery pipelines, these pipelines dictate how connected and collaborative our workflow has to be. This is the main reason every new addition or tweak to the generally accepted development journey is now connected and in the cloud.

Developing and deploying Node.js (Express) apps on Heroku

Heroku is an amazing platform for getting development up and running quickly on smart virtual instances. There is no hassle in getting the additional services you may need to build a quick-and-dirty app from the ground up. I’ve already written about how to use Heroku as a quick development environment: https://mfyz.com/using-heroku-for-a-quick-development-environment/

This short article is specifically about developing and deploying Node.js and Express apps on Heroku. There is actually not much difference between deploying a Node.js app and a PHP application or one in another language: Heroku automatically detects a Node.js application from its package.json file and determines its entry point from there.

For the Express-related parts, go ahead and look at the very simple example I put up on GitHub in the past:
http://github.com/mfyz/heroku-nodejs-express-helloworld

Another, more detailed Express example that uses the Pug template engine for its layouts and views:
https://github.com/mfyz/express-pug-boilerplate

Aside from the application itself, there are a few key points I found helpful when creating and deploying Node.js apps.

Use environment variables

Using environment variables is the best way to set configuration details for your application.
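For example, a value can be set on Heroku from the CLI and read from process.env in your code (the variable name here is just an example; keep the same line in your local .env file for development):

    # on Heroku
    heroku config:set API_TOKEN=your-token-here

    // in your Node.js code
    const apiToken = process.env.API_TOKEN;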

Setting which Node.js version your app will use

It’s as simple as adding an “engines” object to package.json and defining your Node.js version in its “node” property, like the following (the version number here is just an example):
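    "engines": {
      "node": "14.x"
    }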

The same applies to npm and Yarn versions, which can be defined within the engines object as well.

Use prebuild and postbuild steps for any additional work your application build needs

By default, Heroku builds your application on every deployment. This doesn’t mean much for a pure Node.js application, but your app may need a build step – gulp, grunt or webpack builds, for instance. For this, Heroku will run the “build” npm script if it exists in package.json. Beyond that, Heroku always installs dependencies with npm install as a minimum build step. If you need additional steps before or after the build, you can define them as npm scripts named heroku-prebuild and heroku-postbuild.
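For example, a scripts section in package.json might look like this – the actual commands are placeholders for whatever your build needs, and Heroku’s Node.js buildpack docs describe the exact order these run in:

    "scripts": {
      "heroku-prebuild": "echo Preparing build",
      "build": "webpack --mode production",
      "heroku-postbuild": "echo Finishing build"
    }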

Utilize Heroku add-ons

Remember, Heroku comes with tons of 3rd-party services, many of which have free plans that are enough to try things out and start coding quickly. One of my favorites is Heroku’s own database service, which provides a PostgreSQL database with a single command:
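    # "hobby-dev" was the free plan name at the time of writing; plan names change over time
    heroku addons:create heroku-postgresql:hobby-dev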

Wrapping up

All in all, Heroku is a great cloud platform that lets developers kick off ideas, starting with simple code and growing into complex distributed applications very easily. In my opinion, it should be among the go-to tools in every engineer’s arsenal.

Importance of changelogs

It’s very important to keep a changelog and be disciplined about maintaining it as your product progresses. This applies to both product management and software development/management processes.

I want to mention a great resource that describes changelogs and is almost accepted as a standard for them in the open-source community:

https://keepachangelog.com

What is a changelog?

A changelog is a file that contains a curated, chronologically ordered list of notable changes for each version of a project.

Why keep a changelog?

To make it easier for users and contributors to see precisely what notable changes have been made between each release (or version) of the project.

Who needs a changelog?

People do. Whether consumers or developers, the end-users of software are human beings who care about what’s in the software. When the software changes, people want to know why and how.

An example Changelog (from Stretchly)

GitHub renders this Markdown changelog beautifully: https://github.com/hovancik/stretchly/blob/master/CHANGELOG.md