I want to talk about a common practice among developers who work on multiple machines or on (many) servers. As developers, we spend a healthy amount of our time in the shell, working with command line tools. If you fit this description, it is important to configure your shell to your own liking.
Then comes the question of how to keep this configuration in sync between devices or servers. I will talk about a common method called dotfiles for storing, pushing, and pulling changes to these files (or collections of them). You may be familiar with this method, but I'll talk about my own setup after giving some context. I hope you find bits and pieces of helpful tricks I implemented in my own version. You can also search for "dotfiles" on GitHub and explore thousands of other developers' flavors of personal configuration; most developers make these configuration files public. You can find my dotfiles repo here: https://github.com/mfyz/dotfiles
Most of the command line apps we use save their configuration in the user's home folder, either as a single configuration file or as a folder with a dot prefix, which makes the files/folders hidden by default in file browsers and shell commands.
This makes the configuration of each tool extremely portable. Copying this file between machines will give you an exact replica of your tool configuration everywhere. Well, almost always: some tools may require additional activation steps, like installing vim plugins or placing multiple files.
How to collect them in one place?
The main trick of this method is to keep the original files under a single folder, generally ~/.dotfiles in your home directory. Then you link the files to their final locations. Instead of manually linking them, we'll be using a utility for this.
Then this folder can be managed with git and pushed to a remote repo. As I mentioned, a lot of developers make their dotfiles configurations public, so you can push your dotfiles folder to GitHub privately or publicly. If you push publicly, you need to make sure there are no secure tokens included in your dotfiles. Read the Security section below, where I talk about separating your secrets into a similar dotfiles-secrets repo, kept private, with the same replication process between machines.
How to restore them to the right place in a new machine?
We'll be using a utility called GNU Stow. Stow is a simple command that links files under a folder into your home directory. For example, if you have one or more configuration files for your zsh setup, like a .zshrc file, you put them in a folder (it can have any name) like "zsh". Then running the command stow zsh from inside your dotfiles folder will link the contents of the zsh folder into the home directory.
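To make the linking concrete, here is a small sketch of what stow effectively does. It simulates the layout with plain symlinks in a temp directory (the folder and file names are just examples, and the `ln -s` stands in for what stow would do):

```shell
# Simulate a ~/.dotfiles/zsh layout inside a temp dir (paths are illustrative)
DEMO="$(mktemp -d)"
mkdir -p "$DEMO/home" "$DEMO/dotfiles/zsh"
echo 'alias ll="ls -la"' > "$DEMO/dotfiles/zsh/.zshrc"

# Running `stow zsh` inside the dotfiles folder would create this symlink:
ln -s "$DEMO/dotfiles/zsh/.zshrc" "$DEMO/home/.zshrc"

# The "home" copy is just a pointer back to the repo-managed file
readlink "$DEMO/home/.zshrc"
```

The nice side effect: editing ~/.zshrc after stowing actually edits the file inside ~/.dotfiles, which is what makes the repo the single source of truth.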
You can collect each app's configuration in a separate folder under your .dotfiles folder, then run stow for each folder. Or automate it with something like:
    for d in $(ls -d */ | cut -f1 -d '/' | grep -v '^_'); do
        (stow "$d")
    done
Furthermore, you can automate the whole installation:
clone the GitHub repo, so you have a local copy
install GNU Stow using the operating system's package manager
walk all directories and run the stow command, which links the files into the home folder
The final installation script will look like this:
    #!/usr/bin/env bash

    # Detect the OS (or bail out if stow can't be auto-installed)
    if [[ -e /etc/debian_version ]]; then
        OS='debian'
    elif [[ "$OSTYPE" == "darwin"* ]]; then
        OS='macos'
    elif ! command -v stow >/dev/null 2>&1; then
        echo "Please install stow manually then try again."
        exit 1
    fi

    # Install GNU Stow with the OS package manager
    if [[ "$OS" = 'debian' ]]; then
        sudo apt-get install -y stow
    elif [[ "$OS" = 'macos' ]]; then
        brew install stow
    fi

    # Clone the repo and stow each folder into the home directory
    git clone https://github.com/mfyz/dotfiles.git ~/.dotfiles
    cd ~/.dotfiles || exit
    for d in $(ls -d */ | cut -f1 -d '/' | grep -v '^_'); do
        (stow "$d")
    done
    echo 'Congrats, you are done. Enjoy!'
Securing your secrets
It's very important NOT to push your secrets/tokens, which generally make their way into rc files like .zshrc or .bashrc. Instead, we move all token/secret export commands to a separate file and repository that you can push privately and sync between machines with a similar method.
I’ve been using the name dotfiles-secrets for my repo and the name of the script.
The way it works: put a folder (named dotfiles-secrets, in my case) in your home directory, place all token/secret exports in a shell file inside it, and push that to a separate private repo. Then manually git clone that repo into your home folder on each machine.
In my regular .dotfiles repo, my shell rc file (I use zsh, so it is the .zshrc file for me) has the following block, which checks for the secrets file and sources it if the file exists.
    SECRETS_FILE=~/.dotfiles-secrets/secrets.sh   # adjust to your own folder/file name
    if test -f "$SECRETS_FILE"; then
        source "$SECRETS_FILE"
    fi
This way, we gracefully avoid errors if you only want to clone the configuration files without your secrets.
Before I upgraded to the new M1 machine, I was having a lot of speed issues with my previous Intel MacBook Pro. A month before giving up and buying a new M1 MacBook Air, I suspected that the macOS installation and the apps/tools I had installed over many years were perhaps the source of the speed issues. I backed everything up and did a clean re-install on the Intel MacBook Pro. The end result: I didn't see the speed lift I expected, and I finally gave in and bought the M1 MacBook Air, which has been amazing as far as the speed I wanted from my computer.
I wanted to write about perfecting my process of setting up a brand new machine (or a reinstalled OS) as quickly as I can, while restoring the settings of the critical tools I use often.
Here is the todo list I created for myself for this whole operation:
Make Lists of Applications
Make List of homebrew packages installed
Make List of npm packages installed
Backup application configurations
Make List of git repositories
Push all repositories
Push .dotfiles and .dotfiles-secrets
Backup SSH keys (also onto 1password)
Backup all .env files
Backup to external SSD
Application configuration folders
Home & Library folders (tar)
Backup ~/.config, ~/.aws, and similar credentials files/folders
I use an external SSD disk to store my backups, and I take regular backups with a script I created. It's as simple as plugging in the SSD and running a bash script; I'm usually done in under 5 minutes.
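As a sketch of what such a backup script can look like (the folder list is an example and the destination defaults to a temp dir so it is safe to run anywhere; point DEST at your actual SSD mount, e.g. /Volumes/BackupSSD):

```shell
#!/usr/bin/env sh
# Hypothetical destination -- replace with your SSD's mount point
DEST="${DEST:-$(mktemp -d)}"
STAMP="$(date +%Y-%m-%d)"
mkdir -p "$DEST/$STAMP"

# Archive the usual credential/config folders; the list is illustrative
for dir in .ssh .config .aws; do
    if [ -d "$HOME/$dir" ]; then
        tar czf "$DEST/$STAMP/${dir#.}.tar.gz" -C "$HOME" "$dir"
    fi
done
echo "Backup written to $DEST/$STAMP"
```

Dating the target folder keeps each run side by side, so the SSD doubles as a small history of snapshots.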
Before installing any new app, ask: can you live without the tools you were using before? Can you find open-source alternatives, or better paid alternatives?
A clean install is actually a great opportunity to re-evaluate your relationship with the tools/apps you use on a daily basis. Breaking digital habits is hard, especially when you have everything set up already. But it's probably the best time to weigh how much value each app brings to your day-to-day against the cost of the system resources it uses. I asked myself that question and tentatively came up with a list of alternatives to the apps/tools I was using. A surprising number of them had open-source alternatives, so I could support the open-source community and avoid paying for, or worrying about, proprietary licenses.
Day to day tools
Installing Usual Tools
Using homebrew + cask to install the usual tools. Here is the partial list of apps I installed by running a single command:
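The command boils down to one `brew install --cask` line. The app names below are placeholders, not my exact list; a handy pattern is to keep the list in a version-controlled file and build the command from it (the sketch echoes the command so you can review it before running):

```shell
# casks.txt -- a placeholder app list, swap in your own
printf '%s\n' alfred iterm2 visual-studio-code slack > casks.txt

# Build the one-shot install command; run the echoed line when ready (requires Homebrew)
echo "brew install --cask $(xargs < casks.txt)"
```

Keeping casks.txt in your dotfiles repo means the app list travels with the rest of your configuration.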
Keep in mind that this will take a while to download and install all the apps. This is also a great way to document the apps you use so they can be re-installed in one shot.
I use Alfred extensively. I have tons of keyword shortcuts for web URLs that launch chrome with a few key taps in Alfred search; some are query-based searches. I also have a ton of scripts and keyboard shortcuts configured as workflows. I even store my code snippets in Alfred, so they are kind of universal across apps and computers. All mac apps' configuration is backed up and restored in a generic way: the settings of each application reside under the ~/Library/Application Support base folder, where each app has its own folder with data/settings structured in its own way. I generally back up these folders in my regular backups, and then simply restore the folder of the application whose settings I want to copy from my old mac.
Bash, aliases, ZSH, dotfiles
I manage my shell environment configuration, and the configuration of the tools I use in the shell, using a practice called dotfiles. What that means is that a lot of people share their shell configuration files publicly on GitHub, with the help of a few open-source tools that can easily link the configuration files into the right places.
On new machines, I clone the repo and run the install script manually. The script itself is pretty plain and simple: it clones the repo into the .dotfiles folder in the home directory, loops through the folders, and runs the "stow" command, which links the files under each folder to the related shell application's configuration paths. Stow creates symbolic links of your files in the target locations, so you continue editing your ~/.dotfiles files and keep pushing your changes to GitHub (or any other VCS) over time.
Dotfiles can include any file, actually, but here are the famous ones that help me stand up a brand new shell environment in a couple of steps. I often use this approach to configure the shell environment on a server I've been given SSH access to and will be working on frequently.
shell configuration files (for go-to tools like vim, tmux…)
I also include a few cheatsheets of keyboard shortcuts or commands for tools like vim for myself.
Running the install script doesn't get everything 100% set up, and for certain environments I don't want to install some of the tools. So for those, I have a few more scripts that do the setup work for me.
I always manually back up and restore my ~/.ssh folder where I have both ssh configurations as well as all the keys I store on my machine. Most of the keys are backed up to 1password (more than half of them are shared with the team).
[/private]/etc/hosts file content
I use MAMP Pro to manage my LAMP stack apps and the apache/mysql installation, so the hosts file is generally populated automatically. But when moving to a new machine, I always keep a copy of the hosts file so I can easily refer to the hostnames I was working with and manually restore the ones I want.
Same as Alfred configuration backup and restore.
Almost all of my CLI tools come from either homebrew or npm. Homebrew is a popular package manager for macOS; a macOS without homebrew is basically a naked tree. I install almost every developer tool through homebrew. After setting up homebrew, the second thing is to set up nvm (Node Version Manager) and, with its help, install a recent/latest node and npm on the machine. A lot of the CLI tools I use are basically open-source npm packages, which makes installing them super easy. Before I did a fresh install on my Intel Mac, I documented all the global packages, picked the ones I really use frequently, and that became one long command I ran to install all of the tools. Once it's documented well, installing the packages becomes very easy.
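That "one long command" is just `npm install -g` with the package list appended. The package names below are illustrative, not my actual list; the sketch prints the command so you can review it before running it for real:

```shell
# Placeholder global npm packages -- swap in the ones you actually use
NPM_GLOBALS="serve npm-check-updates tldr"

# Print the one-liner for review; drop the echo to run it for real
echo "npm install -g $NPM_GLOBALS"
```

Because the list lives in a variable (or a file in your dotfiles), re-running it on every new machine, or after every node upgrade via nvm, is a single step.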
Not so fast on M1! 🙂 I faced various issues on M1, not only with nodejs but all around. I didn't buy my M1 MacBook Air right away; I waited some time to transition because I knew it would be painful while a lot of tools didn't fully support M1 or, worse, had a lot of issues. My experience was not bad, but pretty much every single thing I wanted to install needed some workaround to get it installed right on M1. You can find yourself searching for what other M1 owners faced with these packages. It may be a dependency you almost don't care about, but you have to deal with it in order to get a tool you need that depends on it.
I try to keep my development set up as plain as I can. Also, the nature of my job allows me to be picky and also be able to consolidate my stack. In the last 4-5 years, I was able to get away with the following when I needed to set up my development environment:
Web / Back-end
LAMP (using MAMP Pro)
XCode + Android Studio
Ruby / Fastlane
Mobile Studios – XCode + Android Studio
Xcode is pretty seamless to install. Also, you will be asked to install the Xcode developer tools very quickly when you install things like homebrew, so that's all covered.
I don't actively develop Android apps, but I do test, and I have some code that I still contribute to, so I have to install the Android development environment. I have to say Google has made Android Studio tons better: you can now set up an Android development environment very, very quickly, comparable with Xcode. What takes time on the Android side is picking and downloading the right SDKs, APIs, and device emulators, and often each project uses its own Gradle and build-tools versions.
After the base tools, setting up iOS Simulator and Android Emulator configuration is something to keep in mind.
I often pick a mobile project for both iOS and Android and try to get them to build and run, to make sure everything is ready for my local mobile development work.
I also separately get a react native project up and running. Once Xcode and Android Studio are set up, it's generally easy to get a react native project running quickly. Although I had to deal with some M1-specific issues, it was very easy to tune things up and make sure they work well on M1.
I also noticed that both the iOS Simulator and Android emulators run really fast on the M1 chip. As for build times, Xcode is visibly faster; Android is no different by default, but with a few tweaks I was able to get it twice as fast compared to an Intel machine. It was surprising to see that speed on the M1 side.
Getting VSCode restored was relatively easy but not straightforward. I use the 3rd-party Settings Sync extension. As of writing this article, VSCode still does not fully support settings sync natively (it's there, but in a beta/experimental state, and it doesn't do as good a job as the 3rd-party extension). The extension worked well to restore all my settings, although it failed to install the extensions I was using.
Lucky me, I had exported the list of extensions I was using, and I was able to manually install them in 15-20 minutes.
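The export/reinstall loop is simple with the `code` CLI. The extension IDs below are placeholders, and the install calls are echoed rather than executed so the sketch runs anywhere:

```shell
# The real export on the old machine would be:
#   code --list-extensions > vscode-extensions.txt
# Placeholder IDs here so the sketch runs without the `code` CLI:
printf '%s\n' ms-python.python esbenp.prettier-vscode > vscode-extensions.txt

# Replay the list on the new machine (drop the echo to actually install)
while read -r ext; do
    echo "code --install-extension $ext"
done < vscode-extensions.txt
```

Exporting that text file as part of your regular backups means the 15-20 minutes of manual clicking becomes one command.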
I also use Sublime Text for simpler note-taking and quick editing. The backup and restore process is the same as for other generic macOS app settings; see the Alfred section above for details on how.
I use TablePlus as my database management tool on desktop and iPad. It can export and import all connections (except the SSH keys for connections behind SSH tunnels).
Make sure you back up all of your .env files while backing up your old mac. This is probably the most important part of restoring your local development environment. Documenting and pulling your code from the VCS is the easy part; the .env files you will either restore from backup or re-create from scratch, and constructing the right credentials again takes extra effort if you don't organize your credentials in a tool like 1password.
I use Figma, Sketch, Balsamiq, Adobe Photoshop, and Pixelmator. All of these have to be installed manually, since some of them require multi-step license activation.
Automatic Dark Mode switch on apps
This is becoming more and more a natively supported capability for most apps but a few of them still don’t automatically switch and/or have a way to set separate themes for their light and dark appearances.
Alfred: I use an Alfred workflow that does the automatic theme switch. To install it: npm install --global alfred-dark-mode
I hope this gives you a good reference point if you are considering a full/clean install, or if you bought a new mac and are looking for ways to quickly set things up. I also have most of these steps documented, and I refer to them occasionally.
If you are in mobile development, particularly Android development, you have probably heard of or used Genymotion's emulator.
Genymotion is a company (and product) that runs Android OS in a virtual machine to simulate the Android runtime, with the basic apps the naked OS comes with. By default it doesn't have Google services and APIs installed or enabled, but they can be installed on top of the naked Android OS images you pick and download from Genymotion's repository.
These days, the official android emulator is not bad at all, but back in the day it was terribly slow and made certain things difficult. It was also designed solely for Android app developers who could supplement the emulator from the command line. In those days, Genymotion was the superior option and everybody's first choice for running android apps while developing, for exactly those reasons.
Aside from Android app development, I extensively used Genymotion to run my mobile web tests in native Chrome for Android and a few other browsers. Today there are many cloud services that let us test our web work on many real devices and OS/browser version variations, so this use case is no longer valid. But I still like the idea of an easy-to-configure Android environment I can run when I need it, and for that purpose I'd suggest Genymotion as one of the best solutions out there. They have changed their licensing model a lot, so I don't know the latest, but I was (and still am) able to use Genymotion freely for personal use cases (which all of mine are: personal projects and such).
For some time, I also experimented with Genymotion running a custom-size tablet: I installed Google services, APIs, and Google Play, so I was able to install the mobile apps that have a nicer user experience than their desktop or browser counterparts. If you don't mind spending a lot of RAM, this can be an interesting option for running mobile apps on your desktop. It's not super intuitive to perform mobile gestures with a mouse and trackpad, but one gets used to it very quickly.
I used Windows from version 3.1 until the pre-Vista years, the early 2000s. Then I switched fully to being a Linux person for the years in between, before switching to mac around 2007. Since then, I have been an Apple fanboy: owning, using, and geeking out about Apple hardware and software. But coming from other OSes, I'm not like the people who started their computer journey with easy-to-use Apple devices; I know there is more out there for having more "control" and "customization" over your digital everyday space. I also worked as a custom computer/hardware builder for years while converting my hobby into my profession, so I was never too distant from the "what's in a computer" question.
iPad Pro experiments
After years of living comfortably in the Apple ecosystem, I started experimenting with the extreme portability of a powerful device like the iPad Pro (see https://mfyz.com/digital-nomads). Seeing its limitations, there was always a geeky desire to own/create a super-portable work environment I could carry at all times. I achieved this to a degree in my experiments with various devices over the last decade; the closest I got was the iPad Pro, with some outside help.
At the end of the day, it's still not a fully fledged operating system that responds to what I need. I talked about this in my recent articles, along with my attempts to find alternative solutions that work with the iPad.
Microsoft doing things right lately
I also mentioned in my previous writings that I occasionally find myself browsing and configuring Microsoft's Surface Pro. I really believe Microsoft has started doing a lot of things right in recent years, as far as company strategy and its investments, especially in the open-source community. Now I see they are doing some good things on the hardware front too. I still find the quality no match for Apple hardware, and I definitely see a lack of craftsmanship across the brands producing hardware designed to run Windows; among them, Microsoft's own hardware definitely stands out. Of course, at a price. But if you use your computer exclusively for work, and your work requires a capable computer, then money is not a problem; it's an investment.
I really like the Surface family of devices. Both the Surface Book and Surface Pro are nicely designed and well built, and some configurations are really powerful machines with the best portability/mobility factor.
An alternative path for me: Give Windows10 a try for 10 days
I found myself occasionally bidding on pre-owned Surface Pro devices on eBay. But I never bid too high, partly because I still hesitated over how comfortable I would be living on Windows 10. I wondered about that question more than I wanted to pay for a pricey experiment of getting a Surface Pro to see for myself. Instead, I bought an external SSD that I really like, got a Windows 10 Pro license for dirt cheap, and hit the road installing Windows 10 on the external SSD. Because the SSD is crazy fast through the thunderbolt port on my macbook pro, performance felt as if I had installed Windows 10 on the macbook pro's internal drive. This gave me great comfort: I could restart the mac, unplug the SSD, and go back to my macOS life if I decided Windows 10 was not for me. So how did the experiment go?
Migrating my work life from macOS to Windows 10 was very easy
I thought I would have to re-adapt to almost all the apps I use every day. The result was different: I was already spending most of my time on team coordination, meaning the communication tools we use were the primary ones I had to check for the same precision I get on macOS. Almost all of them are browser- or electron-based apps like Slack, Trello, Jira… so almost zero difference on this side. The only bummer is that there is no great email client like the many you have on macOS. Outlook is probably "the best" email client on Windows, and even with Outlook there are so many holes to fill. I've been using Spark on macOS for many years, and I was super excited to see they are working on a Windows version, although there's nothing on the horizon yet; it may be years until that happens.
The development environment was much better and faster than I expected. I really loved the idea of the Windows Subsystem for Linux (WSL), which lets almost any distribution be installed and run in a virtual machine managed by the Windows OS itself. Brilliant.
So you have pretty much an ubuntu subsystem running on Windows almost without any issue. It went great: I set up my zsh scripts, aliases, nodejs, python, and other packages super fast. Until I realized that some apps, like Visual Studio Code started from the command line, run separate nodejs threads inside WSL that may not be 100% optimized for working with the local filesystem. Microsoft is continuously working to improve this, and the vscode team (Microsoft again) has some remedies in vscode to overcome the integration pain points. But I hit a weird high-CPU-usage issue that was discussed online; it looks closed/solved in github issues, yet still receives comments from people like me reporting that it persists. All in all, a great development setup, with small shortcomings that can be addressed or adapted to easily.
I also found that most of the tools I use were either open-source and pretty much cross-platform already, or commercial but with cross-platform client apps (like TablePlus as a database client).
Continuity is the big thing lacking on the Windows platform when you use other devices, not just iOS but Android too. There is almost no connectivity between your mobile device (phone/tablet) and your desktop OS. Apple started this, and over the last few OS versions they have perfected it to a level that we don't notice until we lose it. For example, I got used to receiving my SMSes on my mac (not iMessage, the actual SMS I get from the bank), so I only need my computer to check the message and copy-paste the OTP I received from paypal when logging in on my mac. It's subtle, but it became a very important micro-feature that my mac and iPhone communicate with each other smartly. There are other things similar to this.
But I went back to macOS after 10 days
Why? Because I had to rewrite a lot of other things under the hood, like my keymaps and the many shortcuts I learned, optimized, and perfected over the years. I also didn't want to invest time in researching and re-learning new apps and new ways to do the same things I've been doing for the last 10+ years, like sending an email in a few keystrokes.
I'm feeling less adventurous and more conformist about my work setup. I don't want to spend my precious time learning the basics or re-adapting. But I'm OK spending hours improving my efficiency at doing X, no matter what it is.
I can survive – I can buy a surface pro now
My primary work/life station will remain the Apple ecosystem. But I now know it's not as difficult as I assumed to find the same or very similar tools and live happily on Windows, even after spending a decade exclusively in the Apple ecosystem. I believe Surface devices will remain the best-designed portable devices until Apple gives up its resistance to a hybrid OS that runs desktop-class apps on iPads, or makes MacBook Pros more like 2-in-1 style devices.
Sometimes we need to take a screenshot of long content, usually from scrolling applications. The most common example is a full-length web page screenshot. There are chrome extensions for taking full-length website screenshots, but there is no easy way to capture other apps, like native desktop apps or email content in mail clients.
XNip Screen Capture Tool
We can use the Xnip screen capture tool, which has all the common screen capture features plus one we can use for long content: "Scrolling screenshot".
It's freeware with an upgrade ($2/yr subscription), but it works perfectly for this purpose without the upgrade (it leaves a watermark that can be cropped easily if needed).
I was very skeptical about styluses for many years, especially on a screen. The closest thing was Wacom's Cintiq, and that was a giant monitor. The reason styluses weren't compelling is that they were laggy and never felt great to draw with on screen.
Even with the Apple Pencil, it is still not great, but very, very close. I have owned pretty much every iPad model and version from the very first release (I got a limited edition back then), and I've used the Apple Pencil since its first debut. I wasn't convinced, and I also rejected the new gadget because it wasn't the Steve Jobs way. But I can totally understand its value from an educational standpoint (not art, at least not yet). The first release wasn't necessarily laggy, but you could definitely feel the tech side of it. There is only one thing you look for in a pencil: the feeling has to be real, because you're dragging a pointer on virtual paper.
With the latest iPad Pro 10" version, with the screen refresh rate going much faster, you can definitely feel it's very, very close. With the software support, and not having to be paranoid about your hand touching the screen while writing or drawing, it gets very close to the real thing. Unfortunately, I still see many drawing and note-taking apps with the lag they had from the old 60Hz screens; I don't know why that is.
Notability is the one app I feel very comfortable drawing and creating in. My use cases are still mostly ideation, super-low-fidelity wireframing, and some planning work. I love that the output feels like a truly non-techy creation even though it is in fact digital.
Sometimes we underestimate the value of not having boundaries when we think; even the requirement of typing our ideas or words on a keyboard is a restriction on our minds. I started this back-transformation by using moleskines more and more instead of taking notes on my phone.
I like the Notability app because it's a very simple, plain app with a few extra features I like. You can link your dropbox or google drive accounts to back up all notes in PDF format, which becomes very handy when you want to draw something and then open it on your computer very quickly.
Recently I have been having performance issues with a few personal sites I host on my small Digital Ocean instance. I was using a 2-vCPU, 2 GB RAM instance costing me $10/month, plus a 30 GB disk for an extra $3/month. To be honest, this is a very cheap personal server for hosting a few sites, running minimal software tests, and serving as a good proxy/VPN when I need one.
I have installed New Relic, Datadog, and Pingdom and linked them to my personal slack account to have everything monitored and reported seamlessly. I've been getting CPU and occasional memory alerts, mostly because of my wife's blogs, which get much higher traffic than my sites. Even accounting for that traffic, the performance I was supposed to get from that server wasn't as consistent as I wanted.
A new kid on the block
A few weeks ago, I started hunting for cheap SSD VPSes again, and I saw many more players in the game, including Amazon Web Services' Digital Ocean killer. I compared many services, wanting something more stable and reliable, and found a very new, smaller company called Scaleway, which utilizes raspberry-pi-like ARM-based hardware at scale for small-to-medium VPS services. Although that CPU architecture is foreign to me as a "server", I got curious. One of Scaleway's differentiators is that, thanks to the low initial and maintenance cost of the hardware, equivalent instances cost roughly 5 times less. This means I can get 3-4 times more powerful machines for the same minimal monthly cost I'm spending at Digital Ocean or similar services. Scaleway is definitely pushing hard in the $5/mo SSD VPS game.
The main question was, and remains, "how reliable is this new French startup for a permanent VPS need?". This is obviously a risk, but for me it's an easy one to take, because over many years I have mastered setting up and migrating web and database servers, both on bare-minimum servers and on cloud services like AWS or Azure.
So I took the risk and moved to Scaleway in 2 hours, onto a 2x-size instance for pretty much half of what I was paying, around $5/month. It's been 2.5 weeks, and I've seen no hiccups or any issues so far.