Operating Systems Using Unix
EXPERIMENT NO. 01
(I) Objective: To learn some basic UNIX commands to do system-level programming.
(II) Software Required: UNIX Operating System
(III) Commands: Folder/Directory Commands and Options.
(IV) Folder/Directory Commands and Options: -
(VIII) Pattern Matching: -
ls --- lists your files
ls -l --- lists your files in 'long format', which contains lots of useful information, e.g. the exact size of the file, who owns the file and who has the right to look at it, and when it was last modified.
ls -a --- lists all files, including the ones whose filenames begin in a dot, which you do not always want to see. There are many more options, for example to list files by size, by date, recursively etc.
more filename --- shows the first part of a file, just as much as will fit on one screen. Just hit the space bar to see more, or q to quit. You can use /pattern to search for a pattern.
mv fname1 fname2 --- moves a file (i.e. gives it a different name, or moves it into a different directory (see below)).
cp fname1 fname2 --- copies a file.
rm filename --- removes a file. It is wise to use the option rm -i, which will ask you for confirmation before actually deleting anything. You can make this your default by making an alias in your .cshrc file.
diff fname1 fname2 --- compares files, and shows where they differ.
wc fname --- tells you how many lines, words, and characters there are in a file.
chmod opt fname --- lets you change the read, write, and execute permissions on your files. The default is that only you can look at them and change them, but you may sometimes want to change these permissions. For example, chmod o+r filename will make the file readable for everyone, and chmod o-r filename will make it unreadable for others again. Note that for someone to be able to actually look at the file, the directories it is in need to be at least executable.
b. Directories: Directories, like folders on a Macintosh, are used to group files together in a hierarchical structure.
mkdir dirname --- make a new directory.
cd dirname --- change directory. You basically 'go' to another directory, and you will see the files in that directory when you do 'ls'. You always start out in your 'home directory', and you can get back there by typing 'cd' without arguments. 'cd ..' will get you one level up from your current position. You don't have to walk along step by step - you can make big leaps or avoid walking around by specifying pathnames.
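For example (the directory names below are just illustrative):

    cd projects/src     # go down two levels in one step
    cd ..               # back up one level
    cd                  # return to your home directory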
ff --- find files anywhere on the system. This can be extremely useful if you've forgotten in which directory you put a file, but do remember the name. In fact, if you use ff -p you don't even need the full name, just the beginning. This can also be useful for finding other things on the system, e.g. documentation.
grep string fname(s) --- looks for the string in the files. This can be useful for a lot of purposes, e.g. finding the right file among many, figuring out which is the right version of something, and even doing serious corpus work. grep comes in several varieties (grep, egrep, and fgrep).
w --- Tells you who's logged in, and what they're doing. Especially useful: the 'idle' part.
This allows you to see whether they're actually sitting there typing away at their keyboards right at the moment.
who
--- Tells you who's logged on, and where they're coming from. Useful if you're looking for someone who's actually physically in the same building as you, or in some other particular location.
finger username --- gives you lots of information about that user, e.g. when they last read their mail and whether they're logged in. Often people put other practical information, such as phone numbers and addresses, in a file called .plan. This information is also displayed by 'finger'.
last -1 username --- tells you when the user last logged on and off and from where. Without any options, last will give you a list of everyone's logins.
talk username --- lets you have a (typed) conversation with another user.
write username --- lets you exchange one-line messages with another user.
elm --- lets you send e-mail messages to people around the world (and, of course, read them). It's not the only mailer you can use, but the one we recommend.
About your (electronic) self:
whoami --- returns your username. Sounds useless, but isn't. You may need to find out who it is who forgot to log out somewhere, and make sure *you* have logged out.
passwd --- lets you change your password, which you should do regularly.
ps -u yourusername --- lists your processes. Contains lots of information about them, including the process ID, which you will need if you have to kill a process.
kill PID --- kills (ends) the processes with the ID you gave. This works only for your own
processes, of course. Get the ID by using ps. If the process doesn't 'die' properly, use the option -9. But attempt without that option first, because it doesn't give the process a chance to finish possibly important business before dying. You may need to kill processes for example if your modem connection was interrupted and you didn't get logged out properly, which sometimes happens.
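For example, a typical sequence might look like this (the PID 2345 is just illustrative):

    ps -u yourusername     # find the process ID (PID) of the stuck process
    kill 2345              # ask process 2345 to terminate cleanly
    kill -9 2345           # only if it refuses to die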
quota -v --- show what your disk quota is (i.e. how much space you have to store files), how much you're actually using, and in case you've exceeded your quota (which you'll be given an automatic warning about by the system) how much time you have left to sort them out (by deleting some files).
du filename --- shows the disk usage of the files and directories in filename (without argument the current directory is used). du -s gives only a total.
date --- show date and time
history --- list of previously executed commands
man --- show online documentation by program name
w, who --- who is on the system and what they are doing
whoami --- who is logged onto this terminal
f. File management
cat --- combine files
cp --- copy files
ls --- list files in a directory and their attributes
mv --- change file name or directory location
rm --- remove files
ln --- create another link (name) to a file
chmod --- set file permissions
g. Display contents of files
cat --- copy files to display device
more --- show text file on display terminal with paging control
head --- show first few lines of a file(s)
tail --- show last few lines of a file; or reverse line order
vi --- full-featured screen editor for modifying text files
pico --- simple screen editor for modifying text files
grep --- display lines that match a pattern
lpr --- send file to printer
diff --- compare two files and show differences
cmp --- compare two binary files and report if different
comm --- compare two files; show common or unique lines
wc --- count characters, words, and lines in a file
h. Directories
mkdir --- create new directory
rmdir --- remove empty directory (you must remove files first)
mv --- change name of directory
pwd --- show current directory
i. Disks
df --- summarize free space on disk filesystems
du --- show disk space used by files or directories
j. Controlling program execution for C-shell
& --- run job in background
^c --- kill job in foreground
^z --- suspend job in foreground
fg --- restart suspended job in foreground
bg --- run suspended job in background
; --- delimit commands on same line
() --- group commands on same line
! --- re-run earlier commands from history list
jobs --- list current jobs
ps --- show process information
kill --- kill background job or previous process
nice --- run program at lower priority
at --- run program at a later time
crontab --- run program at specified intervals
limit --- see or set resource limits for programs
alias --- create alias name for program (normally used in .login file)
sh, csh --- execute command file
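For example (longjob is a hypothetical command):

    longjob &      # start longjob in the background
    jobs           # list your current jobs
    fg %1          # bring job number 1 back into the foreground
                   # press ^z (Ctrl-Z) to suspend the foreground job again
    bg %1          # let the suspended job continue in the background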
k. Controlling program input/output for C-shell
| --- pipe output to input
> --- redirect output to a storage file
< --- redirect input from a storage file
>> --- append redirected output to a storage file
tee --- copy input to both file and next program in pipe
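For example (the file names are illustrative):

    ls -l | grep Aug > aug_files.txt      # pipe ls through grep, redirect the result to a file
    sort < aug_files.txt >> sorted.txt    # read input from one file, append output to another
    who | tee users.txt | wc -l           # tee saves a copy while passing data down the pipe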
sed --- programmable text editor for data streams
vi --- full-featured editor for character terminals
pico --- very simple text editor
j. Printing (BSD based)
lpr --- send file to print queue
lpq --- examine status of files in print queue
lprm --- remove a file from print queue
enscript --- convert text files to PostScript format for printing
(XI) Starting and Ending: -
Try this code and see what it does. Note that the values can be anything at all:
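Neither of the two scripts being discussed is reproduced in this handout; presumably the first was a simple numeric for loop along these lines:

    #!/bin/sh
    for i in 1 2 3 4 5
    do
      echo "Looping ... number $i"
    done

and the second, the one referred to by "the values can be anything at all", mixed arbitrary words with a wildcard:

    #!/bin/sh
    for i in hello 1 * 2 goodbye
    do
      echo "Looping ... i is set to $i"
    done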
A loop like this is well worth trying. Make sure that you understand what is happening here. Try it without the * and grasp the idea, then re-read the Wildcards section and try it again with the * in place. Try it also in different directories, and with the * surrounded by double quotes, and try it preceded by a backslash (\*). In case you don't have access to a shell at the moment (it is very useful to have a shell to hand whilst reading this tutorial), the results of the above two scripts are:
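For the simple numeric loop, the output would be:

    Looping ... number 1
    Looping ... number 2
    Looping ... number 3
    Looping ... number 4
    Looping ... number 5

The output of the second script depends on which files are in the current directory, because the shell expands the * into their names before the loop runs.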
So, as you can see, for simply loops through whatever input it is given, until it runs out of input.
While Loops: while loops can be much more fun! (depending on your idea of fun, and how often you get out of the house...)
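A sketch of such a loop (it keeps echoing back whatever you type until you type bye):

    #!/bin/sh
    INPUT_STRING=hello
    while [ "$INPUT_STRING" != "bye" ]
    do
      echo "Please type something in (bye to quit)"
      read INPUT_STRING
      echo "You typed: $INPUT_STRING"
    done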
Test: Test is used by virtually every shell script written. It may not seem that way, because test is not often called directly. test is more frequently called as [. [ is a symbolic link to test, just to make shell programs more readable. It is also normally a shell builtin (which means that the shell itself will interpret [ as meaning test, even if your Unix environment is set up differently):
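For example, on a typical system you might see something like this (the exact output varies between shells):

    $ type [
    [ is a shell builtin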
This means that '[' is actually a program, just like ls and other programs, so it must be surrounded by spaces:
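The snippet the next sentence refers to is not shown in the handout; presumably it was along these lines (the variable foo is illustrative; note the missing space after the opening bracket):

    if [$foo = "bar" ]    # wrong: no space between [ and $foo
    then
      echo "foo is bar"
    fi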
will not work; it is interpreted as if test$foo = "bar" ], which is a ']' without a beginning '['. Put spaces around all your operators.
Test is a simple but powerful comparison utility. For full details, run man test on your system, but here are some usages and typical examples. Test is most often invoked indirectly via the if and while statements. It is also the reason you will come into difficulties if you create a program called test and try to run it, as this shell builtin will be called instead of your program! The syntax for if...then...else... is:
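The general form is as follows (the tests and commands are placeholders):

    if [ ... ]
    then
      # code to run if the test succeeds
    else
      # code to run if the test fails
    fi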
Note that fi is if backwards! This is used again later with case and esac. Also, be aware of the syntax - the "if [ ... ]" and the "then" commands must be on different lines. Alternatively, the semicolon ";" can separate them:
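For example, using the semicolon form together with elif (the conditions something and something_else are placeholders):

    if [ something ]; then
      echo "Something"
    elif [ something_else ]; then
      echo "Something else"
    else
      echo "None of the above"
    fi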
This will echo "Something" if the [ something ] test succeeds, otherwise it will test [ something_else ], and echo "Something else" if that succeeds. If all else fails, it will echo "None of the above". Try the following code snippet; before running it, set the variable X to various values (try -1, 0, 1, hello, bye, etc). You can do this as follows:
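Setting and exporting the variable might look like this (test.sh is a hypothetical name for the script sketched below):

    $ X=5
    $ export X
    $ ./test.sh

A sketch of the script itself, using the numeric, string, and file tests that the following paragraphs discuss (both if syntaxes and the backslash line continuation are used deliberately, as the text refers to them):

    #!/bin/sh
    if [ "$X" -lt "0" ]
    then
      echo "X is less than zero"
    fi
    if [ "$X" -gt "0" ]; then
      echo "X is more than zero"
    fi
    [ "$X" -le "0" ] && \
      echo "X is less than or equal to zero"
    [ "$X" -ge "0" ] && \
      echo "X is more than or equal to zero"
    [ "$X" = "0" ] && \
      echo "X is the string or number 0"
    [ "$X" = "hello" ] && \
      echo "X is the string hello"
    [ -n "$X" ] && \
      echo "X is of nonzero length"
    [ -f "$X" ] && \
      echo "X is the path of a real file"
    [ -x "$X" ] && \
      echo "X is the path of an executable file"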
Then try it again, with $X as the name of an existing file, such as /etc/hosts.
Note that we can use the semicolon (;) to join two lines together. This is often done to save a bit of space in simple if statements. The backslash simply tells the shell that this is not the end of the line, but that the two (or more) lines should be treated as one. This is useful for readability. It is customary to indent the following line. As we see from these examples, test can perform many tests on numbers, strings, and filenames. Thanks to Aaron for pointing out that -a, -e (both meaning "file exists"), -S (file is a Socket), -nt (file is newer than), -ot (file is older than), -ef (paths refer to the same file) and -O (file is owned by the user) are not available in the traditional Bourne shell (e.g. /bin/sh on Solaris, AIX, HPUX, etc). There is a simpler way of writing if statements: the && and || operators run code if the result is true or false, respectively.
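For example (the variable X and the file name are illustrative):

    [ "$X" -ne 0 ] && echo "X is not zero"
    [ -f "$X" ] || echo "$X is not a file"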
This syntax is possible because there is a file (or shell-builtin) called [ which is linked to test. Be careful using this construct, though, as overuse can lead to very hard-to-read code. The if...then...else... structure is much more readable. Use of the [...] construct is recommended for while loops and trivial sanity checks with which you do not want to overly distract the reader.
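If X is set to hello and the script then reaches an integer comparison such as [ "$X" -lt "0" ], test complains with an error along the lines of (the exact wording varies between shells):

    [: hello: integer expression expected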
This is because the -lt, -gt, -le, -ge comparisons are only designed for integers, and do not work on strings. The string comparisons, such as != will happily treat "5" as a string, but there is no sensible way of treating "Hello" as an integer, so the integer comparisons complain. If you want your shell script to behave more gracefully, you will have to check the contents of the variable before you test it - maybe something like this:
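A sketch of such a check, using grep to reject anything that is not a digit:

    echo $X | grep "[^0-9]" > /dev/null 2>&1
    if [ "$?" -eq "0" ]; then
      # grep found something that is not a digit, so X is not a number
      echo "Sorry, I wanted a number, not \"$X\"."
      exit 1
    fi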
In this way you can echo a more meaningful message to the user, and exit gracefully. The $? variable is explained in Variables - Part II, and grep is a complicated beast, so here goes: grep [0-9] finds lines of text which contain digits (0-9) and possibly other characters, so the caret (^) in grep [^0-9] finds only those lines which don't consist only of numbers. We can then take the opposite (by acting on failure, not success). Okay? The >/dev/null 2>&1 directs any output or errors to the special "null" device, instead of going to the user's screen. Many thanks to Paul Schermerhorn for correcting me - this page used to claim that grep -v [0-9] would work, but this is clearly far too simplistic. We can use test in while loops as follows:
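A sketch of the kind of loop described (it echoes back whatever you type until you enter an empty line):

    #!/bin/sh
    X=0
    while [ -n "$X" ]
    do
      echo "Enter some text (RETURN to quit)"
      read X
      if [ -n "$X" ]; then
        echo "You said: $X"
      fi
    done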
Note also that I've used two different syntaxes for if statements on this page. These are:
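That is, both of the following forms (shown here only as skeletons):

    if [ ... ]
    then
      # ...
    fi

    if [ ... ]; then
      # ...
    fi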
You must have a break between the if statement and the then construct. This can be a semicolon or a newline.
Case: The case statement saves going through a whole set of if .. then .. else statements. Its syntax is really quite simple:
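A sketch of the kind of interactive script the following paragraphs describe (it keeps reading input until you type bye):

    #!/bin/sh
    echo "Please talk to me ..."
    while :
    do
      read INPUT_STRING
      case $INPUT_STRING in
        hello)
          echo "Hello yourself!"
          ;;
        bye)
          echo "See you again!"
          break
          ;;
        *)
          echo "Sorry, I don't understand"
          ;;
      esac
    done
    echo "That's all folks!"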
Okay, so it's not the best conversationalist in the world; it's only an example! Try running it and check how it works...
The syntax is quite simple: The case line itself is always of the same format, and it means that we are testing the value of the variable INPUT_STRING. The options we understand are then listed and followed by a right bracket, as hello) and bye). This means that if INPUT_STRING matches hello then that section of code is executed, up to the double semicolon. If INPUT_STRING matches bye then the goodbye message is printed and the loop exits. Note that if we wanted to exit the script completely then we would use the command exit instead of break. The third option here, the *), is the default catch-all condition; it is not required, but is often useful for debugging purposes even if we think we know what values the test variable will have. The whole case statement is ended with esac (case backwards!) then we end the while loop with a done. That's about as complicated as case conditions get, but they can be a very useful and powerful tool. They are often used to parse the parameters passed to a shell script, amongst other uses.
That's not all, though - these fancy brackets have another, much more powerful use. We can deal with issues of variables being undefined or null (in the shell, there's not much difference between undefined and null).
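A sketch of what this looks like in practice (the variable myname and the default John Doe are illustrative): if myname may be unset or empty, then

    echo "Hello, ${myname:-John Doe}"

prints the value of myname if it has one, and the default text otherwise; the related form ${myname:=John Doe} additionally assigns the default to myname.

The next paragraph describes a user-creation script that is not reproduced in this handout. The following is a sketch consistent with its description (a function add_a_user, echo-prefixed useradd and passwd commands, and the call add_a_user bob letmein Bob Holness); the original listing may have differed:

    #!/bin/sh
    # Sketch: adding a user with a function

    add_a_user()
    {
      USER=$1
      PASSWORD=$2
      shift; shift
      # Everything remaining on the command line is the user's comments/real name
      COMMENTS=$@
      echo "Adding user $USER ..."
      # The echo prefixes mean the commands are only printed, not actually run
      echo useradd -c "$COMMENTS" $USER
      echo passwd $USER $PASSWORD
      echo "Added user $USER ($COMMENTS) with password $PASSWORD"
    }

    echo "Start of script..."
    add_a_user bob letmein Bob Holness
    echo "End of script..."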
Line 4 identifies itself as a function declaration by ending in (). This is followed by {, and everything following to the matching } is taken to be the code of that function. This code is not executed until the function is called. Functions are read in, but basically ignored until they are actually called. Note that for this example the useradd and passwd commands have been prefixed with echo - this is a useful debugging technique to check that the right commands would be executed. It also means that you can run the script without being root or adding dodgy user accounts to your system! We have been used to the idea that a shell script is executed sequentially. This is not so with functions. In this case, the function add_a_user is read in and checked for syntax, but not executed until it is explicitly called. Execution starts with the echo statement "Start of script...". The next line, add_a_user bob letmein Bob Holness is recognized as a function call so the add_a_user function is entered and starts executing with certain additions to the environment:
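Presumably along these lines:

    $1 = bob
    $2 = letmein
    $3 = Bob
    $4 = Holness
    $@ = bob letmein Bob Holness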
So within that function, $1 is set to bob, regardless of what $1 may be set to outside of the function. So if we want to refer to the "original" $1 inside the function, we have to assign a name to it - such as: A=$1 before we call the function. Then, within the function, we can refer to $A. We use the shift command again to get the $3 and onwards parameters into $@. The function then adds the user and sets their password. It echoes a comment to that effect, and returns control to the next line of the main code.
Scope of Variables:
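The script being discussed below is not shown in the handout; a sketch consistent with the discussion (a function myfunc which is passed 1 2 3 and which sets a variable x) would be:

    #!/bin/sh
    myfunc()
    {
      echo "I was called as : $@"
      x=2
    }

    # Main script starts here
    echo "Script was called with $@"
    x=1
    echo "x is $x"
    myfunc 1 2 3
    echo "x is $x"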
The $@ parameters are changed within the function to reflect how the function was called. The variable x, however, is effectively a global variable - myfunc changed it, and that change is still effective when control returns to the main script. A function will be called in a sub-shell if its output is piped somewhere else - that is, "myfunc 1 2 3 | tee out.log" will still say "x is 1" the second time around. This is because a new shell process is called to pipe myfunc(). This can make debugging very frustrating; Astrid had a script which suddenly failed when the "| tee" was added, and it is not immediately obvious why this must be. The tee has to be started up before the function to the left of the pipe; with the simple example of "ls | grep foo", then grep has to be started first, with its stdin then tied to the stdout of ls once ls starts. In the shell script, the shell has already been started before we even knew we were going to pipe through tee, so the operating system has to start tee, then start a new shell to call myfunc(). This is frustrating, but well worth being aware of. Functions cannot change the values they have been called with, either - this must be done by changing the variables themselves, not the parameters as passed to the script. An example shows this more clearly:
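Again the example itself is not reproduced; a sketch matching the description below (a function which is passed $a and $b but changes the variable a itself) is:

    #!/bin/sh
    myfunc()
    {
      echo "\$1 is $1, \$2 is $2"
      # We cannot change $1 itself, but we can change the variable a:
      a="Goodbye Cruel"
    }

    a=Hello
    b=World
    myfunc $a $b       # prints: $1 is Hello, $2 is World
    echo "$a $b"       # prints: Goodbye Cruel World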
This rather cynical function changes $a, so the message "Hello World" becomes "Goodbye Cruel World".
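The files described in the next paragraph are not included in the handout; a sketch of the pattern (the names common.lib, function2.sh, and function3.sh come from the text, the contents are illustrative) is:

    ### common.lib (shared variables and functions):
    STD_MSG="About to do something..."
    show_message()
    {
      echo "Important: $1"
    }

    ### function2.sh (a script that uses the library):
    #!/bin/sh
    . ./common.lib          # read the library into the current shell
    echo "$STD_MSG"
    show_message "Hello World"

    ### function3.sh (another script reusing the same library):
    #!/bin/sh
    . ./common.lib
    show_message "Goodbye"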
Here we see two user shell scripts, function2.sh and function3.sh, each sourcing the common library file common.lib, and using variables and functions declared in that file. This is nothing too earth-shattering, just an example of how code reuse can be done in shell programming.
    /* Program to illustrate Inter Process Communication using a pipe */
    /* (the original listing was cut off; the parent/child exchange below is a minimal completion) */
    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>
    #include <stdlib.h>
    int main()
    {
        int pfd[2];
        char buf[32];
        if (pipe(pfd) < 0) {            /* create the pipe */
            perror("Pipe Error");
            exit(1);
        }
        if (!fork()) {                  /* child: write a message into the pipe */
            close(pfd[0]);
            write(pfd[1], "Hello from child", 17);
            exit(0);
        }
        close(pfd[1]);                  /* parent: read the message back */
        read(pfd[0], buf, sizeof(buf));
        printf("Parent received: %s\n", buf);
        return 0;
    }
Conclusion: -
Objective:
Learn how to compile and link C programs in UNIX and write programs that use UNIX system calls.
cc -o run [file].c : Compiles source [file].c, using the standard C compiler, and produces an executable named run.
cc -c [file].c : Compiles source [file].c, using the standard C compiler `scc2.0', and produces an object file named [file].o.
A system call is just what its name implies -- a request for the operating system to do something on behalf of the user's program. The system calls are functions used in the kernel itself. To the programmer, the system call appears as a normal C function call. However, since a system call executes code in the kernel, there must be a mechanism to change the mode of a process from user mode to kernel mode. The C compiler uses a predefined library of functions (the C library) that have the names of the system calls. The library functions typically invoke an instruction that changes the process execution mode to kernel mode and causes the kernel to start executing code for system calls. The instruction that causes the mode change is often referred to as an "operating system trap", which is a software-generated interrupt. The library routines execute in user mode, but the system call interface is a special case of an interrupt handler.
The library functions pass the kernel a unique number per system call in a machine-dependent way -- either as a parameter to the operating system trap, in a particular register, or on the stack -- and the kernel thus determines the specific system call the user is invoking. In handling the operating system trap, the kernel looks up the system call number in a table to find the address of the appropriate kernel routine that is the entry point for the system call and to find the number of parameters the system call expects. The kernel calculates the (user) address of the first parameter to the system call by adding (or subtracting, depending on the direction of stack growth) an offset to the user stack pointer, corresponding to the number of parameters to the system call. Finally, it copies the user parameters to the "u area" and calls the appropriate system call routine.
After executing the code for the system call, the kernel determines whether there was an error. If so, it adjusts register locations in the saved user register context, typically setting the "carry" bit for the PS (processor status) register and copying the error number into the register 0 location. If there were no errors in the execution of the system call, the kernel clears the "carry" bit in the PS register and copies the appropriate return values from the system call into the locations for registers 0 and 1 in the saved user register context. When the kernel returns from the operating system trap to user mode, it returns to the library instruction after the trap instruction. The library interprets the return values from the kernel and returns a value to the user program.
UNIX system calls are used to manage the file system, control processes, and provide inter-process communication. The UNIX system interface consists of about 80 system calls (as UNIX evolves this number will increase). The following table lists about 40 of the more important system calls: