everything wrong with free software

 "obedience breeds foolishness"

### how-small-utilities-work

*originally posted:* apr 2022

this wont be highly technical, but it may have technical terms that are unfamiliar to some people new to coding. if you encounter one of these, let curiosity be your guide, but dont worry about whether the term is extremely important to the rest of this-- its not.

a small utility doesnt have to be text-based, but it usually will be-- because usually, text-based programs are easier to write.

a graphical program, even a relatively small and simple one, normally doesnt use the framebuffer. the framebuffer is a way of displaying graphics without a windowing system like x11 or wayland. its a wonderful thing to have, but you wont find many developers using it. perhaps it will make a comeback.

in practical terms, this means that a graphical program tends to use:

1. a windowing system, which you are probably already (but not necessarily always) running
2. some kind of "toolkit" that means you dont have to call every windowing system feature directly (optional)
3. some library that lets you access that toolkit with "bindings" to the language you write your programs in, sometimes with its own obtuse syntax (and resulting code that you have to write and maintain)

or it skips the toolkit, and sometimes the native (thin) bindings are even more tedious.

you can avoid the need for much of this if you simply make your program text-based. but occasionally it will be worth the extra bother to at least have a minimally graphical program.

apart from this, there is a reasonable and relatively simple compromise that follows the concept of small utilities, called a dialog program. of course if youre using a dialog program from a very heavy environment like gnome or qt (probably xfce as well), then it will probably let you down eventually-- not because of the size of the program, but because of the near impossibility of forking the environment.
if projects like mate or trinity continue to be maintained, and a good dialog program is available for them, maybe we can use those.

when choosing libraries and toolkits and other components, its good to consider who maintains them, where (microsoft github? not great!) and how difficult those components are to maintain. of course if youre just throwing together a quick tool and you dont plan to spend a long time maintaining it, you can choose less sustainable tools to make it with. but often tools we throw together quickly get used for a long time, so a habit of using less sustainable tools is unfortunate.

a dialog program, which can be written for any toolkit that you could have written a "full-fledged" graphical program with, will typically let you access simple graphical features from a script or even a compiled program using trivial program calls (you can even access dialog programs from a shell script). sometimes the dialog will just be a "popup" that relays a message and lets you click "ok", or sometimes "ok" or "cancel", or maybe the dialog program is sophisticated enough to let you have a handful of gui options, including but not limited to a "widget" with text you can scroll or maybe even edit.

and of course, you could write your own dialog program and then call it from many different programs. this would save you from having to write a "full-fledged" gui program every time you want to do a simple task. but since these are the exceptions, and text-based programs are typically the easiest to write and simplest in terms of features, thats more in line with what we are talking about. i did want to mention that there are graphical options at least.

on the other hand, dialog programs are often very limited and even disappointing (or tedious) to use. instead of a dialog program, a text program can simply do this:

```
which option do you want? (a, b, c)
enter your selection here:
```

that isnt glamourous, but it can be done with one or two lines of code.
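heres a minimal sketch of what those one or two lines might look like in practice as a shell script (the filename menu.sh and the "you picked" reply are just examples, not anything standard):

```sh
#!/bin/sh
# print the options, then read one line of input back
printf 'which option do you want? (a, b, c)\nenter your selection here: '
read choice
echo "you picked: $choice"
```

if you save that as menu.sh, running it with `sh menu.sh` will show the prompt and wait for you to type a letter and press enter.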
nonetheless its good to know, in a situation where keyboard access is limited (or a touchscreen is being used), that dialog programs are an option. (some dialog programs are text-based, and mostly save you the trouble of "drawing" a rectangle or implementing an input routine that needs you to press "enter", but that doesnt make them useless).

most programs work by simply putting text on the screen. a very straightforward way to do this in a script is with the echo command:

```
echo put this text on the screen
```

thats a very small "program" or "shell script" that explains what it does. what it actually puts on the screen is:

```
put this text on the screen
```

and then getting a reply can be as simple as adding the read command:

```
echo "what is your name?"
read n
echo "hi, $n"
```

this is how you would write a simple "shell script" to demonstrate input and output, and running that script requires a "shell" like bash or ksh. but it doesnt have to be a shell script, its just faster to do (SOME) simple tasks with one. for more features, you can use a language like python-- of course, people who are used to writing shell scripts might turn their nose up at a larger environment like python. but that doesnt mean that small, simple tools cant be written in python; it means youll need a larger environment (than ksh) to run them.

incidentally, bash has so many features that its behaviour is not as predictable as that of ksh. i spent years learning bash and still find it relatively tedious. i didnt "learn" ksh, but i transferred my skills from bash to ksh, and i find i spend less time "fighting" with ksh to get it to do what i expect of it. for this reason, i think bash is a terrible (but common) first shell, and that ksh is better for people new to the shell. i think i would have written more shell scripts with less hassle and learned scripting faster, had i started with ksh instead.
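either way, the three-line greeting script above behaves the same under bash or ksh. one way to try it without typing the name interactively is to pipe a name into its stdin (the filename greet.sh is just an example):

```sh
# save the three-line greeting script to a file...
cat > greet.sh <<'EOF'
echo "what is your name?"
read n
echo "hi, $n"
EOF

# ...then feed it a name on stdin instead of typing one:
printf 'alice\n' | sh greet.sh
# prints "what is your name?" followed by "hi, alice"
```

this also hints at whats coming next: because the script reads stdin, other programs can talk to it, not just your keyboard.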
im sure bash fans will scoff at that, but im still annoyed they foisted such a tedious shell on me. (the extra bash features ARE useful sometimes, but in the past year ive barely touched bash except to make a couple existing bash scripts work).

but our example could be written in python, it could be written in c-- it doesnt have to be a script at all. what it would probably do either way is have "stdin" and "stdout", which are two common interfaces to a program-- especially (but not only) text-based programs.

getting back to graphical programs for a minute, consider the old "workflow" of some older programs, particularly programs from the 90s or as recently as ten or fifteen years ago:

1. you open a program
2. you do some work
3. you save the file, or select all and copy
4. you load the file in another program, or paste into another program
5. you do work in that other program, and then maybe open a third...

this isnt always avoidable and it isnt always terrible, but sometimes we dont need to do all this. for some types of data and some tasks this is worth the trouble, but if we lumped all the programs into one, we would be at great risk of running into the worst problems outlined here:

=> https://wrongwithfreesw.neocities.org/the-importance-of-small-utilities.html

and for text programs especially, this isnt necessary. instead, you can write very small programs that do only a few things (preferably one) and then run those together using a "command pipeline". for example, instead of running a bunch of programs and then saving the output to a file, i can simply pipe the output of those programs to a text editor like jed. lets see a simple example:

```
find / -type d | jed
```

the first part simply lists all the folders on the system (the ones accessible to the user, or all the folders if the user is root), and the second part "pipes" (the | symbol is called a "pipe") that output to the second program, called "jed". jed is a text editor that accepts piped input.
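and if you only wanted to count those folders rather than edit the list, you could pipe to a different program instead-- wc, which counts lines (among other things). this is the whole point of pipes: swap the last program and the job changes:

```sh
# count the directories under the current directory;
# find prints one line per directory, and wc -l counts the lines
find . -type d | wc -l
```

the same output of find, sent somewhere else, does a completely different job-- no changes to find required.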
it also lets you run shell commands from inside the editor. because we piped the output (stdout) of find into the input (stdin) of jed, we dont even have to stop to think of a filename. we could have said:

```
find / -type d > file.txt
jed file.txt
```

and thats alright, but the pipe saved us the intermediary step. theres also a problem called a "race condition" that you can have with a multi-tasking operating system using "temporary files" this way, which pipelines avoid.

so the pipe directly connects the stdout (standard output) of one program to the stdin (standard input) of another, instead of a series of opening programs and cutting and pasting. THIS IS GREAT FOR AUTOMATING TASKS. but theres another cool thing it does.

in the shell, most "commands" are actually separate programs. there are certainly times where its an advantage to have commands "built in" to a language, such as python or awk. but with the shell, most commands can have the file associated with them located by simply typing:

```
which find
```

to "find" where the "find" command is-- or which programname to find where "programname" is; simply substitute "programname" with the name of the program you want to find.

a lot of scripts are created by stringing commands together in a pipeline, among other techniques. for example:

```
find . -type f | grep "hello" | fold -sw 10 | jed
```

is a "line" made with 4 programs, which finds files, then searches that output for text containing "hello" (files with "hello" in the name), then wraps that output to 10 characters wide, and pipes the result to jed. once again, we could do it this way:

```
find . -type f > file1.txt
grep "hello" file1.txt > file2.txt
fold -sw 10 file2.txt > file3.txt
jed file3.txt
```

but what a mess that is. not to mention we are risking race conditions, and naming and creating files we dont even want to keep (if you want to keep them, this is one way to do it).
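if you genuinely do need a temporary file, the usual way to sidestep the naming problem is the mktemp command, which creates a file with a unique name and prints that name back to you. a sketch of how that looks:

```sh
# mktemp creates a uniquely-named temporary file and prints its path,
# so two scripts running at once never collide on the same filename
tmpfile=$(mktemp)

find . -type f > "$tmpfile"   # save the file list to the temp file
wc -l < "$tmpfile"            # count how many files we found

rm "$tmpfile"                 # clean up when done
```

this doesnt fix every race condition, but it does fix the "two scripts both writing to file1.txt" problem from the example above. pipes are still simpler when you dont need the file afterwards.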
race conditions on temp files happen when two programs are running at the same time and both want to work on the same temp file. the issue here isnt find, grep, fold and jed-- which run one after the other-- but the possibility of another user or another process doing the same thing at a similar time. for example, you might put the above code in a script, then run that script while running it again for a different reason or task, but with the same files. what happens next is not always predictable-- this is a race condition. but if you use pipes instead of temp files, you can run several instances of that script at once.

if youre confident that neither you, nor another person with access to the same files at the same time, nor another scheduled process is using those temp files at the same time, then of course you can still use them. its simply not a good way to write a script, because of the problems it can create. pipelines have their own limitations as well.

whats cool though is that you can create scripts that combine:

1. programs that are included with your operating system, like the find command
2. programs that you installed later, but didnt write, like jed
3. programs you wrote, including shell scripts

and each one can do a simple task before passing that text to the next program. this also means that instead of writing complex programs, you can often simply create a new command, then add it to your system and use it wherever you want (just like any other command). this can help you reduce a great deal of your work into the chaining together of simple, relatively easy-to-maintain programs.

=> https://wrongwithfreesw.neocities.org