I have a strong set of views about how to work with computers. I try not to impose my way on others, and I can recognize many circumstances where my way of doing things would definitely not be optimal. But when I see how much time is spent in the long run fiddling with badly integrated tools that produce all kinds of headaches down the road, I do start to wonder why others don't do it my way. So here are my general ideas on how I try to use my computer, how I pick tools, and how I make them interoperate with each other.

Talking machines

I like language and the structure and abstraction inherent in it. I think that language is an important counterpart to our visual brain and how we perceive the world. A computer is incapable of perceiving the world in any way other than through the abstractions that we teach it, and the only way we can teach a computer these abstract concepts is through language. These languages are mostly programming languages, and indeed we (as in humanity) have figured out how to teach computers to draw elaborate images and recognize things in a picture; however, these concepts first had to be written down in a language and then fed to this mechanical apparatus. What I want to say with this is that computers by their very nature only speak languages and can't do anything else.

Let's take a look at history to make things a little more tangible. In the early days computers were data processors. They took some pattern of bits and converted it into a different pattern of bits (they still do this, by the way, you just can't see it as well as you once could). So we had four distinct steps:

  1. feed the computer the instructions that tell it how to transform a given bit pattern into a different bit pattern (read in the program)
  2. feed the computer the bit pattern that is to be transformed (this is the input)
  3. start the transformation procedure (the program executes)
  4. have the computer output the transformed bit pattern to a storage medium (the output is written somewhere)

In the early days the program and input were separate decks of cardboard cards with patterns of holes punched into them. The main means of output was either a printer, used to generate human-readable output (like tables, bills, or invoices), or a card punch that produced more punched cards which could be fed into the machine at a later time. Computers had neither the speed nor the storage nor the means to produce real graphical output that was not some sort of abstract view of something (technical drawings could be generated, but photorealistic images would need an increase in performance by a factor of 100,000 to 1,000,000 and a whole new generation of display technology).

The only way of getting a computer to do something was to write a program in machine language (the set of instructions that a given computer can perform, as they are burnt into the circuits that make up the CPU). This was cumbersome and usually took many attempts to get right, as these languages are very specific and the programmer has to take care of all the nitty-gritty details like memory management. So essentially the first thing that was done was to write tools to help with the task of programming computers. Eventually programs were written that we nowadays call compilers: they take an input (in the end a bit pattern, as text can be encoded as a bit pattern) and translate it into machine language. This input is a set of instructions written in a programming language that the compiler can understand. Again we use language to encode abstract concepts, only this time the compiler does much of the nitty-gritty repetitive work for us, eliminating many sources of errors in an instant.

Programming in a programming language is still pretty error-prone, especially when the language offers only a few abstractions (like, for example, C), but it is a good bit better than writing machine language or assembly. Writing a program was still costly, especially considering that a write-compile-test cycle could take about a day to complete, as many computers had to not only compile the programmer's program but also accomplish data processing tasks for the company or institution that owned the machine (at this point a computer filled at least a room, or a whole floor of a building if it was a larger machine, and was operated by up to 10 people).

It was realized early on that using a computer interactively would decrease the time needed to accomplish programming tasks, and with computers getting cheaper and more capable (these new computers were only the size of a bookshelf) it was not long until the first interactive programs were created.

Now the programmer had the option of either writing a program that would be compiled to machine language and then executed, or writing a script that would be interpreted and executed by a program called an interpreter. Even though this does not seem like much of a difference, it marks a dramatic paradigm shift. The interpreter could not only be fed entire scripts to process; the programmer could also sit down at a machine, write a new program line by line, and have it executed essentially while writing it. This made it possible to spot logical errors early and also changed the way humans could interact with a computer. It was no longer necessary to provide a program and data and have the computer produce an output; instead one could interact with the computer in a way that more resembles a conversation. This also suits the human mind, as we process conversational communication very well. This shift in interaction needed a special piece of hardware: the terminal.

A terminal is essentially a keyboard and an electric typewriter that are connected to the computer in some way or other. The cool thing about a terminal is that not only the user can operate the typewriter but the computer can too (terminals were the main way of interacting with computers that had an interpreter running).

Talking to a computer now starts to resemble a modern text chat application, with the difference that the user issues commands and the computer returns the result of the operation as an answer, similar to a friend's answer to a question asked via the chat app. The computer also reports all problems encountered while trying to execute an instruction. The programmer can then issue a corrected instruction, decreasing the time it takes to develop a working program. Before that, a computer was given a program and some data and generated a report (mostly a formatted printout of the processed data) and sometimes also returned an altered version of the data on punch cards (the machine-readable format) for further processing.

The Shell, interactive computer programs and the mainframe era

As I said earlier, computers were large machines that occupied floors of buildings and were operated by specially trained staff. This changed rather quickly. Initially, the second most important programs after the compiler were those that helped the operators increase the throughput of these machines (the lease on such a machine was roughly the equivalent of a Boeing 747, so every second of operating time could be measured in dollars spent). These programs evolved from simple helper tools into a system that eventually outright replaced the operators. This program is called an operating system, and you will rarely ever find a computer that does not run one (even the computer in your washing machine runs an OS, though probably not Windows; that depends on your washing machine, and I bet there are some that run Android). A major feature of operating systems was the capability of switching between different tasks so quickly as to make it seem to the user that each program was running continuously.

This high-speed switching (together with many other abstractions that an OS provides) enabled the computer to talk to not just one but multiple terminals at the same time. Computers could now be used by different people simultaneously while it seemed like every user had the machine to themselves; the OS took care of the necessary organisation to make that possible.

Terminals had also evolved away from the ASR33, which still used a typewriter and a keyboard (and a paper tape punch), to something that used a TV of the time to display the output of the computer as well as the input of the user, negating the need for the large scrolls of paper used by the typewriters. Programs and large amounts of data were still essentially large piles of cards that were loaded by faster and faster card readers and produced by larger and larger card punches.

The other thing that started to change around this time was the possibility of using large amounts of persistent electronic storage. A punch card deck and electronic storage had similar capacities, but the latter was accessible to the computer in a matter of seconds instead of the minutes it took to load a new card deck into working memory. This gave rise to the filesystem, where users could store electronic versions of their documents instead of printing them or otherwise necessitating machine-readable output in the form of punched cards.

Along with interpreters (which are meant to help write programs), another type of interactive program became the main way a person interacted with a computer: the shell. A shell is essentially a weird sort of interpreter. It lives in the world of files and commands. Files were a relatively new concept at the time (see the paragraph above), and commands were now in themselves complex programs that could accomplish complicated data transformations, in contrast to the relatively simple statements that an interpreter could natively understand. Besides using files to store data, people quickly started storing programs as files on the computer. If these programs were placed in a specific location in the filesystem, the shell could turn them into commands (this is what installing commonly refers to).
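The idea that "installing" mostly means putting a program file where the shell can find it can be sketched in a few lines. All directory and command names here are made up for illustration:

```shell
# Create a throwaway directory that acts as our "install" location.
mkdir -p /tmp/demo-bin

# Write a tiny program (here a shell script) into it and mark it executable.
cat > /tmp/demo-bin/greet <<'EOF'
#!/bin/sh
echo "hello from a freshly installed command"
EOF
chmod +x /tmp/demo-bin/greet

# Putting the directory on PATH is what turns the file into a command the
# shell can invoke by name -- the essence of what "installing" refers to.
PATH="/tmp/demo-bin:$PATH"
greet
```

Real package managers do more (dependencies, man pages, permissions), but the shell-facing result is the same: an executable file in a directory listed in PATH.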

So now multiple people could perform data transformation tasks in parallel on a machine while some batch jobs also executed, greatly increasing the utilisation of the expensive mainframe hardware.

The GUI rears its head

Using CRT (TV) displays also made it possible to draw many more characters in a given time period than a typewriter ever could. They could be drawn so quickly, in fact, that it became feasible to redraw an entire screen at each and every keystroke. As a consequence, the first form-like programs came into being, which formatted their printout in a specific way and redrew the screen every time they updated the form.

A common piece of terminal-GUI software, the Midnight Commander (image by Didier Mission, licensed under the GPL)

This technique essentially took the graphical formatting used to generate reports and regenerated a report for every keystroke (this required relatively large amounts of computing power, but as computers were getting a lot more powerful every year, that was not much of a real problem). This was, in a sense, the first graphical user interface, and such interfaces are still in wide use today (even though the "normal" user has probably never seen them).

Over time, programs and hardware were adapted so that only the changed parts of the screen needed to be redrawn to generate the new screen, saving resources. This meant, however, that the typewriters of old were now utterly unusable, because sending a typewriter back up the page to overwrite an already existing character was pretty much impossible.

Color screens

Eventually TVs started to have color displays, and with color displays the mode changed from vector graphics to raster graphics. Black-and-white TVs used an electron beam to illuminate a screen, tracing many horizontal lines while varying the intensity of the beam. The scan pattern was hard-wired into the electronics and synced with the signal coming from the TV antenna, producing a picture on the screen. The first terminals used a somewhat similar technology, with the difference that the position of the electron beam on the screen could be controlled by the computer and used to trace an arbitrary path across the screen. It was also possible to switch the electron beam on and off very quickly, so that only the lines that should be visible were drawn. These displays are called vector displays because the trace essentially follows a path of 2D vectors on the screen.

A large format vector display from Tektronix (image from vintagetex.org)

The new type of display that came along with color television was the raster display. Color televisions are rather more complicated than their black-and-white counterparts, owing to the desire for a TV signal that could display a color image on a color set and a black-and-white image on a B/W set (most people did not have a color television and would not purchase one for a very long time due to its still very high cost). The main difference of interest for this part of the story is that, in order to produce color, the screen is divided into segments containing three subregions, each coated with one of three chemicals that, when hit with an electron beam of sufficient energy, start to glow either red, blue, or green. This meant that a color television had distinct regions that it could light up, and the resolution was subsequently limited to the size of those regions. The regions were arranged in a grid, and that is where the raster part of raster image comes from. This quantisation was also fairly welcome, as computers work well with quantized and therefore enumerable things.

Personal computers

By the time color televisions and the accompanying display technology had become common enough to use in a professional context, the personal computer had already come into being. Thus the corporate terminal stayed monochrome until it was superseded by a full-fledged computer connected to a screen.

So all in all, two things were happening on screens at this time. The first was form-based textual input and output on a screen, similar to what Midnight Commander looks like. The other was using high-resolution vector terminals to show complex technical drawings and limited 3D renderings. The latter used minicomputers and mainframes as the actual computer, while the terminal was a simple keyboard-display combo with nothing more than simple communication hardware to connect the equipment to the mainframe.

Then there was the shift to personal computers, which would eventually morph into laptops. Personal computers had a far larger user base than the "professional" users, who were normally content with working in the rather sparse environment afforded by most terminals (I'm sure there were very capable machines to display the highly complex technical drawings that do exist in many different areas). Suddenly people wanted to play games and do other graphically demanding things, necessitating the development of powerful and affordable display controllers. As most normal users already had a TV or even a color TV at the time, a color-capable TV output was a favorable feature. From there, many different methods were developed to show game elements on screen in more and more realistic fashion.

Apple and the GUI

The Lisa computer (image by Marcin Wichary, "Apple Lisa 2 with Profile HD.jpg", CC BY 2.0, https://commons.wikimedia.org/w/index.php?curid=24055148)

The big breakthrough for the GUI was Apple's Lisa computer and the subsequent Macintosh machines. They were the first to use things that looked like virtual buttons, together with a pointing device (the mouse), for navigating the computer. This was really useful for the normal user, who no longer had to memorize commands and their options but was guided through the different functions by pop-up and context-sensitive menus. Nowadays the metaphors that were revolutionary at the time are all too commonplace, to the point where we distinguish operating system releases by the look and feel of their accompanying display and menu style and not by the actual features of the OS. People have also been fully abstracted away from how computers work. Only very few people that use computers today know how to program the machines they are using, never mind understanding the underlying concepts a computer is built upon.

That is effectively the tradeoff you accept when using a GUI. On the one hand, it has features that guide a user through a task the software was designed for (be it manipulating files in a filesystem, as the OS does, or writing printable documents or spreadsheets), avoiding lengthy manuals or expensive training by affording explorability and other visual cues that we perceive naturally. These GUI tools also display the result of the commanded operation. This needs a tight integration of all aspects of the tool, increasing usability on the one hand but decreasing universality on the other. If such tools are to be used in novel ways (i.e. be hacked), the reduction of the user interface to a well-defined set of possibilities all too often makes this impossible. The tight integration also makes it necessary to store otherwise useless metadata about the state of the tool in the output files, so that they can be loaded by the tool at a later time without breaking the visualisation, again reducing interoperability.

The last nail in the coffin, in my opinion, is the fact that many GUI programs have a look and feel that depends heavily on the program. There are many different ways of writing GUI applications (different libraries a programmer can use) as well as many different design philosophies, and so many approaches have been tried by different programs, leading to an unsatisfyingly non-uniform look across a range of applications. As I value consistency and the familiarity inherent in it, this is a rather important annoyance to me.

My middle way

What is undoubtedly true is that some sort of graphical structuring is really helpful. Just having a sequence of characters thrown at the screen is definitely not helpful. Overly sophisticated visuals like partial transparency and wobbling windows, as found on KDE's Plasma desktop, however, just add unnecessary clutter to an already pretty full screen and draw attention away from the thing I actually want to spend time on (this is my personal view, and I don't want to diminish the effort or the reasons other people have and have had to build and use these features; I'm just saying they are not for me).

I strive towards reducing a task to its minimal complexity and using the optimal tool for the job. Optimal is of course specific to me, so I'll go into a bit more detail about what I mean by optimal.

  1. One important feature for me is configurability. I want the tool to fit me and not the other way round. Some tools embed a specific way of getting something done in their structure; others are more like a toolbox that has tools for specific tasks (like reading or answering an email) and lets me (the user) decide how to string them together to do what I want to do.

  2. The second feature kind of goes together with the first one, and that is composability. If a program is made up of individual functions that can be strung together into a workflow within the program, it probably is not too difficult to just output the result of whatever I was doing and let other programs continue working on that output. This is probably the most aggravating non-feature that GUI programs tend towards. Only a few programs actually work this way, letting the user export files or partially processed files to the filesystem so that a different tool can continue the work. The more professional tools do that, but most consumer programs want to be a kind of one-stop shop that outputs a file the user would never want to change again. Such a program consequently has to include every feature under the sun, spreading developer time fairly thinly over many features and making each one a sub-par experience. This could have been avoided if the tool had focused on one job and done it well, while playing ball with as many other programs in the same field as possible and delegating to those that are better at other things.

  3. The last big thing for me is automatability. Frankly, I am not there yet. I am still in the phase of defining the things I want to do and getting good at the primary task, so that I can figure out where I could improve my workflow and which tool to use then. So automating common tasks is not yet a huge issue for me. Sometimes it would be fairly nice to be able to automate certain things, like uploading this website, and I may start doing that soon, as I am posting fairly frequently here and I am proud of the articles I put up. Automating things mostly comes down to a scripting language of some sort and is also closely related to configurability and composability.
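To make the website-upload example concrete, here is a minimal sketch of what such an automation script could look like. All paths are placeholders I made up for illustration, and the copy is local so the sketch is self-contained; a real deployment would replace the `cp` with `rsync` or `scp` to a remote host:

```shell
#!/bin/sh
# Hypothetical deploy script; every path here is a placeholder.
set -eu   # abort on the first error or on use of an unset variable

SRC=/tmp/site-src    # where the generated site would live
DEST=/tmp/site-dst   # stand-in for the web server's document root

# Fake a generated site so the sketch runs on its own.
mkdir -p "$SRC" "$DEST"
printf '<h1>hello</h1>\n' > "$SRC/index.html"

# Copy everything over; a real script would push to user@host:/path instead.
cp -R "$SRC"/. "$DEST"/
echo "site uploaded"
```

Once a chore lives in a script like this, "uploading the website" becomes a single command, which is exactly the kind of automatability the third point is about.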

Tying it together, where the shell shines

The above description is exactly why I live the shell lifestyle. If a tool has properties 1 and 2, then the third can be taken over by the shell, under the condition that the program has a command line interface exposing the parts I need to integrate it into a larger workflow that can be composed and automated using shell scripts. The benefit of a (well designed) shell is that it provides a somewhat consistent way to use all the software on the computer in a coordinated manner to produce a desired output. That means that many programs can focus on a specific thing and let other tools do other jobs, negating the need for "one-stop shops". This is what I mean when I talk about getting productive with the computer instead of being productive with a program. GUI programs effectively turn this general-purpose machine into a special-purpose machine. That can of course be hugely beneficial (like making it possible to think in concepts that are really alien to a computer but a very good fit for the task at hand; CAD systems are in my mind such an application), while on the other hand forcing one to give up on the notion of using different tools for different tasks and using the computer system as a whole to do the work.
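The "let other tools do other jobs" idea is easiest to see in a pipeline, where each program performs one small transformation and the shell wires them together. The sample text below is made up:

```shell
# Count the most common words in a sample file.  Each program in the
# pipe does exactly one job; the shell composes them into a workflow.
printf 'the cat sat on the mat the end\n' > /tmp/sample.txt

tr ' ' '\n' < /tmp/sample.txt |  # split: one word per line
  sort |                         # group identical words together
  uniq -c |                      # count each group
  sort -rn |                     # most frequent first
  head -n 3                      # keep the top three
```

None of these programs knows anything about word frequencies; the capability emerges from the composition, and any stage can be swapped out without touching the others.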

Not all is well in the shell

There are a few notable exceptions to the shell-only lifestyle, and these (among other things) are the reason I install a GUI and associated programs even if I don't use it for much more than displaying terminals. The most significant one is the browser, closely followed by the PDF reader and various programs for drawing graphics, like Inkscape and the pixel-art editor GrafX2. From time to time I'll use VLC to play videos, but that's about it. The shell itself is also fairly bad at utilizing screen space, which is why it is still really useful to run a terminal emulator inside a GUI.

The current setup

As one might expect, I am using the i3-compatible Wayland compositor sway as a window manager. For terminal emulation I use kitty and run fish inside of that. Most of my work is done in neovim as a text editor, even though I have heard many good things about emacs (apart from the fact that it seems to be an operating system ;)). My emails are currently read with neomutt, and configuring it well is a bit of a pain point for me at the moment, which is why I am planning to write an article about that as soon as I figure out how to configure it myself. I use pass as a password manager and am trying out taskwarrior and timewarrior as a task-/time-management system. The browser is Firefox with a vim plugin, the PDF reader is zathura (which also has vim-like bindings), and of course my fish is configured to act vim-like.
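For the curious: making fish act vim-like is, at its core, a single call in its config file. This is a minimal sketch, not my full configuration:

```fish
# ~/.config/fish/config.fish (fragment)
# fish_vi_key_bindings ships with fish and switches the command line to
# vi-style modal editing; fish_default_key_bindings switches back.
fish_vi_key_bindings
```

That one line is a nice example of the configurability point above: the shell exposes its key-binding machinery as an ordinary function the user can call.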

The nice thing about all of this is that I barely ever need to touch the mouse, which eliminates much hand movement when trying to be productive, speeding up common tasks by a few fractions of a second each. The other nice side effect of being very keyboard-centric is that I can get the most out of my terribly loud and beautiful IBM Model M keyboard, which is a joy to type on, especially when, like me, you are capable of touch typing for the most part.

Closing remarks

I hope I have been able to share my way of doing things with you in a way that gives you some insight into why and how I do this. I know that what I do is the best way for me and by no means represents the only way to get work done on a computer. If you like wobbling windows and graphical effects and you get done what you want to get done, then that's just as valid an approach. I think, however, that it is important to think about what you are doing and (if it takes a significant enough amount of time) about how you want to do things. You may of course have different priorities and be constrained by company policy and other external factors outside of your control. But maybe, just maybe, this insight into my way of doing things has given you a new way to look at a computer system: not as a collection of individual programs but as a possibly integrated system that automates all the boring steps for you, so you can spend more time doing what you actually want to do in life.