Processes are a core component of POSIX operating systems. A process is simply a running instance of a program. So if we open a text file with nano, that starts a new instance, aka a process. We can open different files with nano at the same time because even though they use the same program, each is a separate instance.
Processes have info attached to them, like a unique id, permissions, etc.
top is a tool that lets you see what processes are running, like Windows' Task Manager or Apple's Activity Monitor. Run it by typing top, then enter. It will then take up the full screen and show you the processes that are running and the resources they're consuming, like memory or CPU.
The top section says the number of tasks and the number running, sleeping, stopped etc, along with the total/available/in use amounts of CPU and memory.
Each row is a process. The first column is the process ID (PID). On the right you can see the command for each one. init is the first process that runs; it's responsible for bringing the entire system up, and everything else runs off of it.
Occasionally you'll see top itself appear near the top of the list, as it uses a bit of CPU.
For help, press ?.
You can sort using F or O.
CPU is starred, indicating that we’re currently sorting by that. To switch, type the letter of the option you want, then hit enter.
To exit, just hit q.
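top normally takes over the whole screen, but on most Linux systems the procps version of top also has a non-interactive batch mode, which is handy in scripts. A quick sketch you can try:

```shell
# -b runs top in batch (non-interactive) mode, -n 1 prints one snapshot;
# head keeps just the summary area at the top of the output
top -b -n 1 | head -n 5
```

The first line shows the uptime and load averages, and the next few show the task and memory totals, just like the interactive view.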
ps is another tool; it tells us the processes running in our current session. bash is the program that's interpreting our keystrokes and formatting the output on the screen. ps is what we're running to see the other processes, and since ps is itself a process, it makes sense that it shows up here too.
If you add aux to ps, it will show you all the processes on the machine: a shows processes for all users, u uses a user-oriented output format, and x includes processes that aren't attached to a terminal. This isn't an interactive program like top; it simply prints them all to the console. It's used to find out what's running, and the PID for a specific task.
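A quick sketch of ps aux in action:

```shell
ps aux | head -n 3   # the header row plus the first two processes
ps aux | wc -l       # a rough count of processes (plus one header line)
```

The header row names each column: USER, PID, %CPU, %MEM, the command, and so on.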
To do that, we’ll open a new tab and run top. Now, we see top in the list when we run ps aux.
The TTY column (pts/0, pts/1, etc.) tells us which terminal tab each one is running under.
If you want to find something specific in there, write ps aux, then a space, then the | (pipe) character. This lets us combine programs: we take the output of ps aux, which lists the processes, and run it through another tool called grep, which lets us filter for lines matching a pattern.
Notice that the grep process itself shows up in the list as well, because its own command line contains the pattern we searched for.
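One common trick to avoid that self-match is to wrap one character of the pattern in brackets. A sketch, using a long sleep as the process to find:

```shell
sleep 300 &                   # a long-running process to search for
pid=$!                        # $! holds the PID of the last background job
ps aux | grep "sleep 300"     # usually matches the grep itself too
ps aux | grep "[s]leep 300"   # grep's own command line shows [s]leep,
                              # which the pattern doesn't match literally
kill "$pid"                   # clean up the sleep
```

The bracket expression still matches the real sleep process, but not the grep searching for it.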
Pausing and Resuming
When you run some programs, they take up the entire window, hiding the command line. But you can still do multiple things at the same time. A job is a process that you own, started from your console window.
We wrote nano demo.txt, which runs nano and opens up a new file.
How do you get back to the command line without closing nano or opening a new tab? We can pause this job. To do that, press ctrl+z.
First it tells us we can return to nano with fg. Then it lists our jobs. The first column is the job number. Note that this is NOT the process ID, but the job number. Then it gives us the state, which right now is Stopped, aka paused. Finally, the name of the job.
To return, use the fg (foreground) command, which brings you back to the most recently stopped program.
The jobs command prints a list of the jobs in this session. Notice that it says (wd: ~) to tell us the working directory a job started from, since we've since moved to a different one.
If we open up top, we can pause that as well, and you can see now it has a job number of 2.
If we type fg now it will take us to top, since it was the most recently paused job. If we wanted to go to the demo.txt job, simply type fg then the job number, 1. Notice the plus and minus next to the job numbers. The plus marks the job that will be opened when you type fg with no arguments, i.e. the most recently paused one.
If we were to close nano, top’s job number would still be 2. They don’t change if you close something else.
One last trick. If you type an & after top and run it, notice it doesn't take over the screen. If we run jobs, we can see that it was started and put straight into the background for us.
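In a script (a non-interactive shell) there's no ctrl+z, but & and the jobs builtin work the same way. A small sketch:

```shell
sleep 2 &            # & starts the command already in the background
jobs > joblist.txt   # jobs lists this shell's job table; here we save it
cat joblist.txt      # something like: [1]+  Running    sleep 2 &
wait                 # block until all background jobs have finished
```

wait is the script-friendly cousin of fg: it doesn't bring the job to the foreground, it just pauses the script until the background work is done.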
Sometimes a process goes out of control and you can’t quit it with normal means. Be careful, some of the methods here can shut down your machine or cause data corruption.
Normally when you exit a program it stops running but there may be cases where it’s still running and you don’t have direct access to it, as it’s not running from your terminal. A good example is when you’re running servers. You’ll need to first find the process ID using the techniques listed above.
Stopping them is done using signals, which are messages sent to a process by the OS. The TERM (terminate) signal requests that the process terminate after doing any cleanup. This is the usual way to ask something to shut down.
We've used signals before: ctrl+z pauses a process by sending it a signal. We can also use ctrl+c to send the interrupt (INT) signal to the current process, which is a consistent way to tell the program you're running that you want it to stop.
So what if what you want to close isn’t in your current terminal? Let’s open a new tab and run top in it. Then, we’ll run jobs in the first tab. Since top is not in this session, it doesn’t show up, as jobs are per session.
So how would we kill this? First we’d find the PID, using the ps aux | grep “top” method from before, which will return the PID for us. Remember to ignore the self referencing grep one. To kill it, we then use the kill command. Make sure to be careful with this, as if you type the wrong PID you could kill a system process by accident.
Based on your permissions, you may only be able to kill processes you own, but if you’re using sudo you can kill anything, so be even more careful.
So, we kill it, run ps aux again, and see that it's no longer running.
By default, kill sends the TERM signal to the process. But sometimes a process gets into a state where it isn't even paying attention to the signals it receives. In that case we can pass an option to the kill command. By default you're effectively sending "kill -TERM". The last resort if that doesn't work is "kill -KILL" (also written "kill -SIGKILL"). Signals have numbers too, so you may also see it as "kill -9". Unlike TERM, which asks the process to clean up and exit, KILL ends it immediately. Avoid this when you can, and only use it when the TERM signal isn't working.
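Here's a small sketch of the polite path, again using a throwaway sleep process as the target:

```shell
sleep 300 &
pid=$!
kill -TERM "$pid"    # same as plain kill: request termination
wait "$pid" || true  # reap it; the exit status reflects the signal
kill -0 "$pid" 2>/dev/null && echo "still running" || echo "gone"   # → gone
# a truly stuck process that ignores TERM would need: kill -KILL "$pid"
```

kill -0 sends no signal at all; it just checks whether the PID still exists, which makes it a handy way to confirm the process is really gone.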
But when we go to check on the process in tab 2, we see something weird: the command line prompt is sitting in the middle of top's old output. That's because the screen was never cleared, and the old text will be overwritten as we type.
This shows how kill is different. The normal terminate signal would clear out what was there before and place the command line appropriately. kill didn’t give it that chance.
The -STOP signal pauses it, but causes a similar issue where the command line just pops up.
Still, you can clear that, run jobs and see that it’s exactly as if you pressed ctrl+z.
So, kill is more about sending signals than closing a process every time.
Environment and Redirection
Environment variables are like variables in any programming language. In the console, they're written in all CAPS and contain strings. Some that the system expects are HOME, your home directory; PATH, a list of directories to search for commands; and PS1, which defines the format of your command-line prompt.
To see what your environment variables are, use the command env. It will print all of them out for you. They're all capitalized, and are set equal to a value. A lot of these are set by config files on the system, but we should recognize others, like the ones mentioned above. The PS1 value has a \u and a \w, then a $. These are special characters that let us insert dynamic info into our prompt: \u is replaced with the user's username, and \w with the current working directory, which allows our prompt to change as we traverse directories.
LS_COLORS is the set of colors used in the console.
The echo command prints out the arguments that it’s given. Writing a $ before a variable name will return the value of that variable. So writing echo $HOME will print HOME’s value.
Writing cd $HOME would take us to /home/treehouse.
To change a variable, type its name, then an equals sign, then the new value in quotes (so we can write a string with spaces and special characters). Notice how the prompt has changed once we’ve done that. This is because the command line is using the value of PS1 for what it displays. We can change this to anything really.
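A sketch of setting and reading variables; GREETING is a made-up name, and the PS1 change only shows up as a new prompt in an interactive shell:

```shell
GREETING="hello from the shell"   # set a variable (hypothetical name)
echo "$GREETING"                  # $ before the name reads its value
PS1="\u in \w $ "                 # prompt: username, space, working dir
```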
bash is run automatically when you start a new session. If we type the bash command, it starts a new instance for us, which ignores the change we made to PS1 in the previous instance. If we exit this instance, it takes us back to the original one.
To create a new variable, simply write one (in all caps) and set it equal to something. Let's do that. But what if we open a new instance of bash and echo the variable we made prior? We get nothing. This is because a variable will, by default, stay in its own session. The new bash is a child of the original one, but the variable is not passed down.
To pass it down, use exporting. Simply write export before the variable you set, and it will be available to child instances.
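A sketch of the difference; here we use sh -c to spawn a child shell and see what it inherits:

```shell
LOCAL_ONLY="not exported"            # plain variable: stays in this shell
export SHARED="passed to children"   # export: copied into child processes
sh -c 'echo "child sees: [$SHARED] [$LOCAL_ONLY]"'
# → child sees: [passed to children] []
```

The child prints the exported value but an empty string for the plain variable, because only exported variables become part of a child process's environment.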
Variables are useful for controlling programs. The ls command uses the LS_COLORS variable to determine how the results it prints out are colored.
The PATH variable is important. It's a list of directories separated by colons: the directories to search when we run an executable. For example, echo is a command, and it's a file that exists somewhere on our computer. To find out where, use the command which.
So, we could type /bin/echo to use echo, but no one wants to use absolute paths, which is why we have PATH. If we type a command that isn't a full path, the shell searches the PATH directories in order to find it.
Sometimes we install executables that aren't in these directories, but we want them available in our path. To add to it, type export PATH=, then the new directory, then a colon, then $PATH, since that contains everything else anyway. This puts the new directory at the front.
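A sketch, using a made-up ~/mytools directory:

```shell
mkdir -p "$HOME/mytools"            # hypothetical directory for our scripts
export PATH="$HOME/mytools:$PATH"   # prepend it, keeping everything else
echo "$PATH" | cut -d: -f1          # the first entry is now our directory
```

Because the shell searches PATH left to right, anything in ~/mytools now shadows a command of the same name elsewhere.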
If we start a new bash instance, it will be there, but we don't want to do it this way, because we'll lose the change when the computer restarts. Instead it should go in a startup script; in this case that's .bashrc in our home directory, which we can edit with nano.
Here you can add it and save it. Usually you do this because a program you’re installing has asked you to.
Find and Grep
find and grep are useful for finding things. find locates files based on their names. You type find, then the directory to base its search on; the current directory is represented by a dot. This searches that directory and all of its child directories. Then -name, then the name of the file in quotes (the quotes aren't always required, but it's good practice for strings with spaces or special characters). The results are listed one per line, giving the path to each file found.
If you want to search your entire system, which may take a long time, use / for the directory, which is the root directory. Here we get a bunch of "permission denied" messages, which are errors, because there are a lot of files and directories our treehouse user has no access to.
To specify multiple directories to search from, just separate them by spaces.
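A sketch with some throwaway files (the demo directory is made up):

```shell
mkdir -p demo/sub                                # a directory with a child
touch demo/notes.txt demo/sub/notes.txt demo/other.txt
find demo -name "notes.txt"                      # searches demo recursively
```

This prints demo/notes.txt and demo/sub/notes.txt, one per line. To search several start points, just list them all before the options, e.g. find demo /tmp -name "notes.txt".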
How about searching for something inside a file? You'd use grep, which lets you search inside files for a pattern. It stands for Global Regular Expression Print. Every time it finds the pattern, it prints the line it appears on. You type grep, then the pattern you're searching for, then the file(s) you want to search through.
You can also have it print out the line number it found it on by adding -n after grep.
By default grep is case sensitive, but you can turn this off with -i.
To look for lines WITHOUT a pattern, use -v. Here we look for lines that don’t contain the letter e.
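A sketch of all three flags on a throwaway file:

```shell
printf 'Apple\nbanana\ncherry\n' > fruit.txt
grep -n "an" fruit.txt      # → 2:banana   (prefixed with the line number)
grep -i "apple" fruit.txt   # → Apple      (case-insensitive match)
grep -v "e" fruit.txt       # → banana     (the only line with no e)
```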
To read the grep or find manual, type man grep or man find.
Pipes and Redirection
When you run a program and create a process there’s a standard way of inputting and outputting: standard in and standard out.
The out is usually text printed out in the console, and the in is the keyboard. You can change these around though. The out can be a file, so the output text is stored there. Or, the in can be a file. The in’s and out’s are always plain text, which lets you make the output of one process the input of another, which is called piping.
You can have a bunch of small programs that each do one thing, then put them all together to make something more powerful.
Let's do grep again. If we give it a pattern but no file name and hit enter, you'll notice the command prompt doesn't look normal. This is because grep is waiting for data on its standard input. If you type hello world and hit enter, it highlights the lines where it finds the pattern. After that it's still receiving input, so if you type just hi, you won't get anything back. Standard in is what we write; standard out is what it prints back.
It will continue to wait for input from us until it sees the special end-of-file marker, which you can send with ctrl+d.
Now, like we've done before, we can use a file as our standard input. Here we'll do it with redirection, represented by a less-than sign (<). After it, you write the name of the file that becomes the new standard input. This isn't much different than without it, because grep also accepts file names as arguments, but some programs only accept input via standard in, and this is how you'd feed them a file.
To change the output from the terminal to a file, use a greater-than symbol (>) to redirect the output. Here, I forgot to redirect standard input too, so I had to type it in, then close the input with ctrl+d. Now there's a new file called hello.grep, which contains the output.
If we also redirect standard in, we don't have to type anything, as the input this time is the hello.txt file.
Notice that it overwrote the original copy. To append rather than overwrite, simply use two greater than signs rather than one.
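A sketch tying the redirections together (hello.txt here just holds a couple of throwaway lines):

```shell
printf 'hello world\nsomething else\n' > hello.txt
grep "hello" < hello.txt > hello.grep   # stdin from a file, stdout to a file
cat hello.grep                          # → hello world
echo "hello again" >> hello.grep        # >> appends instead of overwriting
wc -l < hello.grep                      # → 2
```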
Remember the find search for the sudoers file from before, with all the permission denied messages? Those appear because a process has two output streams. One is what you asked for, like the results of the search; the second carries extra info, like errors. The second is called standard error, which also prints to the terminal by default. You can redirect it using 2>.
If we want to see the errors, we can now just open the log.
But what if we don't care about the errors? We can redirect them to a special file that discards anything written to it: /dev/null.
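A sketch, using a deliberately bad path so ls produces an error:

```shell
ls /no/such/dir 2> errors.log || true   # the error lands in the log file
cat errors.log                          # ...No such file or directory
ls /no/such/dir 2> /dev/null || true    # or throw the errors away entirely
```

The || true just stops the failed ls from aborting a script; it has nothing to do with the redirection itself.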
Now for piping. ps aux tells us all the processes running, and there are a lot of them. If we combine it with grep using |, we can use grep to search: ps aux provides the output, which becomes the input of grep.
The sort command sorts the lines of its standard input and sends it to its output.
We can then send the sorts output to a file like before using a redirect.
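A sketch of sort in a pipeline and a redirect (the fruit list is made up):

```shell
printf 'pear\napple\nbanana\n' | sort               # apple, banana, pear
printf 'pear\napple\nbanana\n' | sort > sorted.txt  # same, into a file
head -n 1 sorted.txt                                # → apple
```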
Building Software from Source
On Linux, a package manager lets you install, update, and remove software with simple commands. Sometimes a package, or the version you want of it, isn't available, though, so you can download the source code and install it yourself.
Today we'll be installing SQLite. First we'll install some tools to let us do that; you only need to do this once per machine. We'll need sudo, then apt-get, which is part of the package manager system, then update, a command that updates the package manager database on our computer, so when we go to install our build tools we'll get the latest versions.
It will then fetch a bunch of URLs, which are the databases that hold the packages available for our Ubuntu system.
We now do sudo apt-get install, which will install a package by name, which here is build-essential. Enter Y to proceed.
To confirm if they were installed, use which make. Remember which tells you where a program is on the system. make is a program used for building things. The build-essential package included make, so since it’s here we know it was successful.
Now to actually install SQLite. It's offered as source code in a zip or a tar.gz file; the latter is like a zip file, but more common for archives on Unix systems. We'll grab its link address, then use the curl program, which makes requests over the internet; the -O flag saves the response to a file on our machine.
It saved it to our current directory, which we can confirm with ls. We can make a directory for it using mkdir, and move it there with mv.
Now to untar the file, which is like unzipping one. The flags to extract it are -xvf: x is extract, v is verbose output, and f points to the file we want to extract.
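A sketch you can try without downloading anything, by building a tiny archive first (the myproj files are made up):

```shell
mkdir -p myproj && echo "hello" > myproj/file.txt   # something to archive
tar -czf myproj.tar.gz myproj   # c=create, z=gzip, f=archive file name
rm -r myproj                    # remove the original to prove extraction
tar -xvf myproj.tar.gz          # x=extract, v=verbose, f=archive file name
cat myproj/file.txt             # → hello
```

GNU tar detects the gzip compression on extract, so -xvf works on a .tar.gz without an extra flag.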
After extracting, there's now a directory next to our tar file. Let's enter it and see the files inside.
First we need to run configure. We'll type ./configure, because the file is inside our current directory. This inspects our system and prepares some more files that will be used to actually build the program.
It's created a special file called a Makefile, which specifies how to build the program.
To execute it, use the program called make from before when we installed build-essential. Make sure you’re in the same directory as your make file.
Now we've built it, but not yet installed it. We need to run a different make task: make install. It's already built the program, so it won't recompile everything; it just moves what it built to somewhere accessible in our PATH. Some of those locations may not be writable by our current user, so we'll use sudo as well.
To confirm that it installed, we'll go home, then use which to see if it's there, and even run it.
To summarize what we did: downloaded the source archive with curl -O, untarred it with tar -xvf, ran ./configure to prepare the build, ran make to compile, and ran sudo make install to install it.
Introduction to Package Managers
Package managers make installing software way easier, and make updates and uninstalls easier as well. Ubuntu uses apt (the Advanced Packaging Tool) as its package manager; the different flavors of Linux have a variety of these, and OS X has a popular one called Homebrew.
First, run sudo apt-get update like before. Now, let's install git, a version control system. If we try to run git before installing it, we get an error because it's not installed yet; the error actually tells you how to install it.
Another way to find out how to install it is apt-cache with the search command, which lets us search packages by a pattern.
There are a lot of results, each with a name and a brief description, that you can use and look through to find the correct one.
It gives you some info on what it will install, and it understands what tools this install may depend on, so it lists them under the “extra packages will be installed” part. Press Y to continue.
Now, we can check for it using which git like before, and we can run it just by typing git.
To upgrade your software, write sudo apt-get upgrade. You typically want to run this after you've done an apt-get update; if you haven't, your system won't be aware of any new package versions. Near the bottom it tells you that 172 packages will be upgraded, as well as how much will be downloaded and installed. Hit Y to run it.
To uninstall a program, write sudo apt-get remove, then the name of the package. Remember the two extra packages that were installed because git needed them? The message tells you they won't be removed unless you use apt-get autoremove instead.