Saturday, February 27, 2010

Hard & Soft Mount

This is UNIX terminology for what the client does when it can't talk to an NFS server. If you mount a file system without specifying hard or soft, the default is a hard mount. Hard mounts are preferable because of the stateless nature of NFS: if a client sends an I/O request to the server (such as an ls -la) and the server gets rebooted, the client will wait until the server comes back online. This preserves data transfers in the event of a server failure. The disadvantage is that even a simple request can hang indefinitely. A soft mount, by contrast, will return an error and fail. This eliminates the wait, but can cause problems with data transfers.

Hard mount
-- If the NFS file system is hard mounted, the NFS daemons will try repeatedly to contact the server. The retries never time out, can affect system performance, and cannot be interrupted (unless the mount uses the intr option).
Soft mount
-- If the NFS file system is soft mounted, NFS will try repeatedly to contact the server until one of the following occurs:
  • A connection is established
  • The NFS retry threshold is met
  • The nfstimeout value is reached
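
For example, the behaviour is chosen with mount options on the client. A minimal sketch, where "nfsserver:/export" is a placeholder for your server and export path:

# mount -t nfs -o hard,intr nfsserver:/export /mnt/nfs       [hard mount; intr lets a hung request be killed]
# mount -t nfs -o soft,timeo=100,retrans=3 nfsserver:/export /mnt/nfs       [soft mount; fails after retrans=3 retries with a 10-second (timeo=100) timeout]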

NIS setup

http://bradthemad.org/tech/notes/redhat_nis_setup.php

Everything On NFS

The Network File System (NFS) is used to distribute filesystems over a network. The server exports the filesystem and the client imports it. There are now two ways to run an NFS server. The traditional method is to run the user-space NFS daemon. The newer option is the kernel-based kernel-nfsd, which was introduced with kernel version 2.2. SuSE supports both methods. The user-space daemon has built a reputation for reliability, but has limitations in terms of speed. The kernel-based NFS daemon is faster, but not as well tested as the older one.


Step By Step Configure NFS server and NFS client:
====================================

NFS server setup:
------------------------------

# rpm -Uhv nfs-utils-0.3.1-13.i386.rpm

-------
#vi /etc/exports
/home   192.168.0.0/24(rw,no_root_squash,anonuid=500,anongid=100,sync)

-------

verify:
-----------
# /usr/sbin/exportfs -a
# showmount -ad
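
Also make sure the NFS services are actually running before testing from a client. On a Red Hat-style system the usual init-script commands are (an assumption about the distribution; adjust for yours):

# service portmap start
# service nfs start
# chkconfig nfs on       [start NFS automatically at boot]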


NFS client setup:
----------------------------

# mount -t nfs compaq:/export /mnt/nfs

-------
#vi /etc/fstab
hostA:/export /mnt/nfs nfs soft 0 0
-------
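
A hard-mount variant of the same entry, with the commonly recommended read/write block sizes, might look like this (a sketch reusing the hostA:/export path from above):

-------
hostA:/export /mnt/nfs nfs hard,intr,rsize=8192,wsize=8192 0 0
-------

Run "mount -a" afterwards to mount everything listed in /etc/fstab and confirm the entry works.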


Automounting NFS using NIS maps:
================================

#rpm -Uhv autofs-3.1.7-28.i386.rpm

------------
#vi /etc/auto.master
/nfs /etc/auto.home --timeout 60
-------------

---------------
#vi /etc/auto.home
home -rw,hard,intr,nolock compaq:/home
--------------

--------------
#vi /var/yp/Makefile
all: passwd group hosts rpc services netid protocols mail \
shadow auto.home \
--------------
cd /var/yp
make

Verify:
----------
# ypcat auto.home
-rw,hard,intr,nolock compaq:/home
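
To test the automounter end to end, restart autofs and access the key; the NFS mount should appear on demand (a sketch assuming the maps above and a Red Hat-style init script):

# service autofs restart
# cd /nfs/home        [triggers the automount of compaq:/home]
# df -h /nfs/home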





Important Commands:
=================

*   mount -t nfs

*   rpcinfo     [information about the RPC service that is running on a system]
*   showmount -a (on the server)      [Lists clients and the directories they have mounted.]
*   showmount -e [Displays a list of exported directories.]
*   netstat -s       [Protocol statistics, e.g. IP fragmentation and socket buffer drops.]
*   nfsstat -cr
*   nfsstat [ -cmnrsz ]
*   pstack          [Displays a stack trace for a process.]

      /usr/bin/pgrep nfsd

      /usr/bin/pstack PID

      /usr/sbin/dtrace -Fs       
      startsrc -s biod
      /usr/sbin/exportfs -v
      /usr/bin/rusers  [Remote Users]

*   tracepath
*   snoop       [This command is often used to watch for packets on the network.]
*   truss        [You can use this command to check if a process is hung.]
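
A typical quick health check from a client combines a few of these commands (a sketch; "nfsserver" is a placeholder hostname):

# rpcinfo -p nfsserver | grep -E 'nfs|mountd'    [are nfs and mountd registered with the portmapper?]
# showmount -e nfsserver                         [what is the server exporting?]
# nfsstat -cr                                    [client-side RPC statistics; watch for retransmissions and timeouts]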




Directories & Files:
===============
/etc/exports                      [Lists the file systems exported to NFS clients and their export options.]
/var/lib/nfs/rmtab (/etc/rmtab)   [Contains information about the current state of all exported directories.]
/etc/xtab                         [Lists currently exported directories.]
/etc/fstab                        [Lists file systems, including NFS mounts, to mount at boot.]


Process & Daemons:
=======================

Daemon                    Description
--------------------      ---------------------
*    nfsd ->        The NFS daemon which services requests from the NFS clients.
*    mountd ->      The NFS mount daemon which carries out the requests that nfsd passes on to it.
*    rpcbind ->     This daemon allows NFS clients to discover which port the NFS server is using.

*    rpc.mountd — The running process that receives the mount request from an NFS client and checks to see if it matches with a currently exported file system.
 
*    rpc.nfsd — The process that implements the user-space components of the NFS service. It works with the Linux kernel to meet the dynamic demands of NFS clients, such as providing additional server threads for NFS clients to use.

*    rpc.lockd — A daemon that is not necessary with modern kernels. NFS file locking is now done by the kernel. It is included with the nfs-utils package for users of older kernels that do not include this functionality by default.

*    rpc.statd — Implements the Network Status Monitor (NSM) RPC protocol. This provides reboot notification when an NFS server is restarted without being gracefully brought down.
 
*    rpc.rquotad — An RPC server that provides user quota information for remote users.
 



NFS:
====

http://docstore.mik.ua/orelly/networking_2ndEd/nfs/
http://docs.sun.com/app/docs/doc/819-1634/rfsrefer-45?a=view
http://docs.sun.com/app/docs/doc/819-1634/rfsadmin-215?a=view
http://www.redhat.com/docs/manuals/linux/RHL-9-Manual/ref-guide/ch-nfs.html
http://www.linux.org/docs/ldp/howto/NFS-HOWTO/server.html



NFS Performance Tuning:
====================

http://www.ncsa.illinois.edu/UserInfo/Resources/Hardware/IBMp690/IBM/usr/share/man/info/en_US/a_doc_lib/aixbman/prftungd/2365ca3.htm
http://tldp.org/HOWTO/NFS-HOWTO/performance.html


Commands :
==========

http://docs.sun.com/app/docs/doc/819-1634/rfsrefer-37?a=view
http://publib.boulder.ibm.com/infocenter/powersys/v3r1m5/index.jsp?topic=/iphcg/showmount.htm
http://www.regatta.cs.msu.su/doc/usr/share/man/info/ru_RU/a_doc_lib/cmds/aixcmds5/aixcmds502.htm#ToC


Configuration:
===========
http://www.labtestproject.com/linnet/index.html
http://highervisibilitywebsites.com/step-step-set-nfs-share
http://www.freebsd.org/doc/handbook/network-nfs.html
http://www.troubleshooters.com/linux/nfs.htm

 

Thursday, February 25, 2010

Setup a transparent proxy with Squid in three easy steps

Steps to Compile a Kernel

First download and install the kernel source on your machine, then move to that directory. In Red Hat Linux the source is usually installed in a directory named linux-2.x.xx.xxx in /usr/src, and a soft link to that directory is created with the name linux-2.4 (assuming it is the source of the Linux 2.4 kernel). So the kernel source is installed in the /usr/src/linux-2.4 directory.
Now move to this directory.
# cd /usr/src/linux-2.4
The next step is creating the configuration file (.config). This can be done in three ways.
# make config - brings up a command-line console mode interface.
# make menuconfig - brings up an ncurses-based, menu-driven text interface.
# make xconfig - brings up a user-friendly X-based graphical interface.
You may use any one of the above three commands. Depending on which command you run, you will get the relevant interface, through which you can configure your kernel before it is compiled. For example, you can select the proper processor type in the configuration dialog. You can also decide whether to build a piece of functionality directly into the kernel or load it as a module when the kernel needs it. This optimises the kernel for your computer and helps decrease its size. The end result is that your Linux machine starts much faster.
After you have made the changes to the settings, you have to save and exit. All the changes you made are then saved in the configuration file at /usr/src/linux-2.4/.config.
Now the next step is to make the dependencies. For that execute the following commands.
# make dep
# make clean
The first of these commands builds the tree of interdependencies in the kernel sources. These dependencies may have been affected by the options you have chosen in the configure step. The 'make clean' command purges any now-unwanted files left from previous builds of the kernel.
The next step is the actual compilation of the kernel. Here you can opt to create a compressed image by executing the command
# make bzImage
Or, if you opt for the older zImage format (which only works for smaller kernels), you can execute the command
# make zImage
The time taken for compilation of the kernel depends on the speed of your machine. On my machine (Celeron 333MHz) it took around 15-20 mins.
After the compilation is complete, you can find your newly compiled kernel here : /usr/src/linux-2.4/arch/i386/boot/bzImage .
If you have enabled loadable modules support in the kernel during configuring, then you have to now execute the commands
# make modules
# make modules_install
Loadable modules are installed in the /lib/modules directory.
Now to start using your newly compiled kernel, you have to copy the kernel into the /boot directory.
# cp /usr/src/linux-2.4/arch/i386/boot/bzImage /boot/bzImage-mykernel-sept2004
Now open your boot loader configuration file (I used LILO) in your favourite editor and insert the following lines to boot Linux using your new kernel.
# vi /etc/lilo.conf
# Inside the lilo.conf file
image=/boot/bzImage-mykernel-sept2004
label=myker
root=/dev/hda3
read-only
My root is located at /dev/hda3 which I found out using the command
# df /
Now don't forget to execute the 'lilo' command to update the changes in the boot loader. Reboot your machine and in the lilo prompt, select 'myker' and press enter to start loading your newly compiled kernel.
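
After the reboot, a quick way to confirm that the newly compiled kernel is the one actually running (the exact version string depends on the source tree you built):

# uname -r
# cat /proc/version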

Links:
 http://bobcares.com/blog/?p=162
 http://linuxhelp.blogspot.com/2004/08/steps-to-compile-kernel.html#axzz0gcGYpoAR

Suspend and resume any program using SIGSTOP UNIX

If you've ever been running a program that requires a lot of CPU or hits the disk heavily, and then wanted to be able to use your computer for something else for a few minutes, this is the hint for you.

Most UNIX people know they can use Control-Z and the bg and fg commands to control whether or not their programs are running. What many often don't know is that you can do the same thing using signals. For instance, let's say I am doing a long build in Project Builder, but I need to use my computer for a few minutes at full speed to do something else. Here's how to accomplish that:
  1. Find the process ID of the program you want to suspend using either the ps wwwaux command from the shell or via Process Viewer (in /Applications -> Utilities):
    /Users/sam:> ps auxwww | grep Project
    sam        814   0.0  0.6   114984   5900  ??  S     4:24PM
      0:01.56  /Developer/Applications/Project Builder.app/
      Contents/MacOS/Project Builder -psn_0_5636097
    Here the id is 814 (line breaks were added above for narrower display width).

  2. Use the kill command and send it a SIGSTOP signal:
    /Users/sam:> kill -SIGSTOP 814
    The program will now stop doing whatever it was doing and you can then do a quick render or whatever it was that needed the whole machine.
To resume your program right back where it was, just use the kill command and send it a SIGCONT signal:
/Users/sam:> kill -SIGCONT 814
It's as easy as that. I'm sure some enterprising individual could make a graphical program that does this for you, but I'm a UNIX user at heart.
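
The whole suspend/resume cycle can also be scripted with pgrep, so you don't have to hunt through ps output. A sketch, where 'Project Builder' is the example process from above and -f matches against the full command line:

pid=$(pgrep -f "Project Builder")
kill -STOP $pid        # suspend the process
kill -CONT $pid        # resume it later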




Links:
http://www.macosxhints.com/article.php?story=20030915193440334


Linux Software RAID Management

raidtools has been the standard software RAID management package for Linux since the inception of the software RAID driver. Over the years, raidtools has proven cumbersome to use, mostly because it relies on a configuration file (/etc/raidtab) that is difficult to maintain, and partly because its features are limited. In August 2001, Neil Brown, a software engineer at the University of New South Wales and a kernel developer, released an alternative. His mdadm (multiple devices admin) package provides a simple, yet robust way to manage software arrays. mdadm is now at version 1.0.1 and has proved quite stable over its first year of development. There has been much positive response on the linux-raid mailing list, and mdadm is likely to become widespread in the future. This article assumes that you have at least some familiarity with software RAID on Linux and that you have had some exposure to the raidtools package.
Installation


Download the most recent mdadm tarball and issue make install to compile and install mdadm and its documentation. In addition to the binary, some manual pages and example files are also installed.

# tar xvzf ./mdadm-1.0.1.tgz

# cd mdadm-1.0.1

# make install

Alternatively, you can download and install the package file found under the RPM directory at the same URL (http://www.cse.unsw.edu.au/~neilb/source/mdadm/).

# rpm -ihv mdadm-1.0.1-1.i386.rpm

mdadm has five major modes of operation. The first two modes, Create and Assemble, are used to configure and activate arrays. Manage mode is used to manipulate devices in an active array. Follow or Monitor mode allows administrators to configure event notification and actions for arrays. Build mode is used when working with legacy arrays that use an old version of the md driver. I will not cover build mode in this article. The remaining options are used for various housekeeping tasks and are not attached to a specific mode of operation, although the mdadm documentation calls these options Misc mode.
Creating an Array

Create (mdadm --create) mode is used to create a new array. In this example I use mdadm to create a RAID-0 at /dev/md0 made up of /dev/sdb1 and /dev/sdc1:

# mdadm --create --verbose /dev/md0 --level=0 \
    --raid-devices=2 /dev/sdb1 /dev/sdc1

mdadm: chunk size defaults to 64K
mdadm: array /dev/md0 started.

The --level option specifies which type of RAID to create in the same way that raidtools uses the raid-level configuration line. Valid choices are 0,1,4 and 5 for RAID-0, RAID-1, RAID-4, RAID-5 respectively. Linear (--level=linear) is also a valid choice for linear mode. The --raid-devices option works the same as the nr-raid-disks option when using /etc/raidtab and raidtools.

In general, mdadm commands take the format:

mdadm [mode] <raiddevice> [options] <component-devices>

Each of mdadm's options also has a short form that is less descriptive but shorter to type. For example, the following command uses the short form of each option but is identical to the example I showed above.

# mdadm -Cv /dev/md0 -l0 -n2 -c128 /dev/sdb1 /dev/sdc1

-C selects Create mode, and I have also included the -v option here to turn on verbose output. -l and -n specify the RAID level and number of member disks. Users of raidtools and /etc/raidtab can see how much easier it is to create arrays using mdadm. You can change the default chunk size (64KB) using the --chunk or -c option. In the previous example I changed the chunk size to 128KB. mdadm also supports shell expansions, so you don't have to type in the device name for every component disk if you are creating a large array. In this example, I'll create a RAID-5 with five member disks and a chunk size of 128KB:

# mdadm -Cv /dev/md0 -l5 -n5 -c128 /dev/sd{a,b,c,d,e}1

mdadm: layout defaults to left-symmetric

mdadm: array /dev/md0 started.

This example creates an array at /dev/md0 using SCSI disk partitions /dev/sda1, /dev/sdb1, /dev/sdc1, /dev/sdd1, and /dev/sde1. Notice that I have also set the chunk size to 128 KB using the -c128 option. When creating a RAID-5, mdadm will automatically choose the left-symmetric parity algorithm, which is the best choice.

Use the --stop or -S option to stop a running array:

# mdadm -S /dev/md0

/etc/mdadm.conf

/etc/mdadm.conf is mdadm's primary configuration file. Unlike /etc/raidtab, mdadm does not rely on /etc/mdadm.conf to create or manage arrays. Rather, mdadm.conf is simply an extra way of keeping track of software RAIDs. Using a configuration file with mdadm is useful, but not required. Having one means you can quickly manage arrays without spending extra time figuring out what the array properties are and where disks belong. For example, if an array wasn't running and there was no mdadm.conf file describing it, then the system administrator would need to spend time examining individual disks to determine array properties and member disks.

Unlike the configuration file for raidtools, mdadm.conf is concise and simply lists disks and arrays. The configuration file can contain two types of lines, each starting with either the DEVICE or ARRAY keyword. Whitespace separates the keyword from the configuration information. DEVICE lines specify a list of devices that are potential member disks. ARRAY lines specify device entries for arrays as well as identifier information. This information can include lists of one or more UUIDs, md device minor numbers, or a listing of member devices.

A simple mdadm.conf file might look like this:

DEVICE /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

ARRAY /dev/md0 devices=/dev/sda1,/dev/sdb1

ARRAY /dev/md1 devices=/dev/sdc1,/dev/sdd1

In general, it's best to create an /etc/mdadm.conf file after you have created an array and update the file when new arrays are created. Without an /etc/mdadm.conf file you'd need to specify more detailed information about an array on the command line in order to activate it. That means you'd have to remember which devices belonged to which arrays, and that could easily become a hassle on systems with a lot of disks. mdadm even provides an easy way to generate ARRAY lines. The output is a single long line, but I have broken it here to fit the page:

# mdadm --detail --scan

ARRAY /dev/md0 level=raid0 num-devices=2

UUID=410a299e:4cdd535e:169d3df4:48b7144a

If there were multiple arrays running on the system, then mdadm would generate an array line for each one. So after you're done building arrays you could redirect the output of mdadm --detail --scan to /etc/mdadm.conf. Just make sure that you manually create a DEVICE entry as well. Using the example I've provided above we might have an /etc/mdadm.conf that looks like:

DEVICE /dev/sdb1 /dev/sdc1

ARRAY /dev/md0 level=raid0 num-devices=2

UUID=410a299e:4cdd535e:169d3df4:48b7144a
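
Putting that together, one way to generate the file once the arrays are built might be (a sketch; the DEVICE line must list your actual member disks):

# echo 'DEVICE /dev/sdb1 /dev/sdc1' > /etc/mdadm.conf

# mdadm --detail --scan >> /etc/mdadm.conf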

Starting an Array

Assemble mode is used to start an array that already exists. If you created an /etc/mdadm.conf you can automatically start an array listed there with the following command:

# mdadm -As /dev/md0

mdadm: /dev/md0 has been started with 2 drives.



The -A option denotes assemble mode. You can also use --assemble. The -s or --scan option tells mdadm to look in /etc/mdadm.conf for information about arrays and devices. If you want to start every array listed in /etc/mdadm.conf, don't specify an md device on the command line.
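
Once an array is running, its state can be checked at any time. A quick sketch:

# cat /proc/mdstat

# mdadm --detail /dev/md0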

If you didn't create an /etc/mdadm.conf file, you will need to specify additional information on the command line in order to start an array. For example, this command attempts to start /dev/md0 using the devices listed on the command line:

# mdadm -A /dev/md0 /dev/sdb1 /dev/sdc1

Since using mdadm -A in this way assumes you have some prior knowledge about how arrays are arranged, it might not be useful on systems that have arrays that were created by someone else. So you may wish to examine some devices to gain a better picture of how arrays should be assembled. The examine option (-E or --examine) allows you to print the md superblock (if present) from a block device that could be an array component.

# mdadm -E /dev/sdc1

/dev/sdc1:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 84788b68:1bb79088:9a73ebcc:2ab430da
  Creation Time : Mon Sep 23 16:02:33 2002
     Raid Level : raid0
    Device Size : 17920384 (17.09 GiB 18.40 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0

    Update Time : Mon Sep 23 16:14:52 2002
          State : clean, no-errors
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 8ab5e437 - correct
         Events : 0.10
     Chunk Size : 128K

      Number   Major   Minor   RaidDevice   State
this     1       8       33        1        active sync   /dev/sdc1
   0     0       8       17        0        active sync   /dev/sdb1
   1     1       8       33        1        active sync   /dev/sdc1
   2     2       8       49        2        active sync   /dev/sdd1
   3     3       8       65        3        active sync   /dev/sde1


mdadm's examine option displays quite a bit of useful information about component disks. In this case we can tell that /dev/sdc1 belongs to a RAID-0 made up of a total of four member disks. What I want to specifically point out is the line of output that contains the UUID. A UUID is a 128-bit number that is guaranteed to be reasonably unique on both the local system and across other systems. It is randomly generated using system hardware and timestamps as part of its seed. UUIDs are commonly used by many programs to uniquely tag devices. See the uuidgen and libuuid manual pages for more information.

When an array is created, the md driver generates a UUID for the array and stores it in the md superblock. You can use the UUID as criteria for array assembly. In the next example I am going to activate the array to which /dev/sdc1 belongs using its UUID.

# mdadm -Av /dev/md0 --uuid=84788b68:1bb79088:9a73ebcc:2ab430da /dev/sd*

This command scans every SCSI disk (/dev/sd*) to see if it's a member of the array with the UUID 84788b68:1bb79088:9a73ebcc:2ab430da and then starts the array, assuming it found each component device. mdadm will produce a lot of output each time it tries to scan a device that does not exist. You can safely ignore such warnings.
Managing Arrays

Using Manage mode you can add and remove disks to a running array. This is useful for removing failed disks, adding spare disks, or adding replacement disks. Manage mode can also be used to mark a member disk as failed. Manage mode replicates the functions of raidtools programs such as raidsetfaulty, raidhotremove, and raidhotadd.

For example, to add a disk to an active array, replicating the raidhotadd command:

# mdadm /dev/md0 --add /dev/sdc1

Or, to remove /dev/sdc1 from /dev/md0 try:

# mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1

Notice that I first mark /dev/sdc1 as failed and then remove it. This is the same as using the raidsetfaulty and raidhotremove commands with raidtools. It's fine to combine add, fail, and remove options on a single command line as long as they make sense in terms of array management; you still have to fail a disk before removing it, for example.
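
For instance, failing and removing a bad disk and adding its replacement can be combined into one invocation (a sketch; /dev/sdd1 stands in for the replacement disk):

# mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1 --add /dev/sdd1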
Monitoring Arrays

Follow, or Monitor, mode provides some of mdadm's best and most unique features. Using Follow/Monitor mode you can daemonize mdadm and configure it to send email alerts to system administrators when arrays encounter errors or fail. You can also use Follow mode to arbitrarily execute commands when a disk fails. For example, you might want to try removing and reinserting a failed disk in an attempt to correct a non-fatal failure without user intervention.

The following command will monitor /dev/md0 (polling every 300 seconds) for critical events. When a fatal error occurs, mdadm will send an email to sysadmin. You can tailor the polling interval and email address to meet your needs.

# mdadm --monitor --mail=sysadmin --delay=300 /dev/md0

When using monitor mode, mdadm will not exit, so you might want to wrap it with nohup and an ampersand:

# nohup mdadm --monitor --mail=sysadmin --delay=300 /dev/md0 &

Follow/Monitor mode also allows arrays to share spare disks, a feature that has been lacking in Linux software RAID since its inception. That means you only need to provide one spare disk for a group of arrays or for all arrays. It also means that system administrators don't have to manually intervene to shuffle around spare disks when arrays fail. Previously this functionality was available only using hardware RAID. When Follow/Monitor mode is invoked, it polls arrays at regular intervals. When a disk failure is detected on an array without a spare disk, mdadm will remove an available spare disk from another array and insert it into the array with the failed disk. To facilitate this process, each ARRAY line in /etc/mdadm.conf needs to have a spare-group defined.

DEVICE /dev/sd*

ARRAY /dev/md0 level=raid1 num-devices=3 spare-group=database
   UUID=410a299e:4cdd535e:169d3df4:48b7144a

ARRAY /dev/md1 level=raid1 num-devices=2 spare-group=database
   UUID=59b6e564:739d4d28:ae0aa308:71147fe7

In this example, both /dev/md0 and /dev/md1 are part of the spare group database. Just assume that /dev/md0 is a two-disk RAID-1 with a single spare disk. If mdadm is running in monitor mode (as I showed earlier), and a disk in /dev/md1 fails, mdadm will remove the spare disk from /dev/md0 and insert it into /dev/md1.
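
With spare-group entries in place, a single monitoring instance can watch every array listed in /etc/mdadm.conf (a sketch; the mail address is a placeholder):

# nohup mdadm --monitor --scan --mail=sysadmin --delay=300 &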



Links:
---------

http://archive.networknewz.com/2003/0113.html

http://tldp.org/HOWTO/Software-RAID-HOWTO-5.html

http://unthought.net/Software-RAID.HOWTO/Software-RAID.HOWTO-5.html

Thursday, February 18, 2010

Perl version upgrade on cPanel

You can ensure that each installed module gets carried over to
the updated Perl build with the use of the "autobundle" CPAN feature.

You can create a bundle of the currently installed modules
by executing the following while logged in via SSH as root:

perl -MCPAN -e 'autobundle'

Once completed, you should see the following output before getting
returned to the shell:

'Wrote bundle file /home/.cpan/Bundle/Snapshot_2007_08_16_00.pm'

Once you've made note of this file name, you can proceed with the update.

On Linux-based systems, you should be able to update Perl using
the installer provided at layer1.cpanel.net:

cd /root
wget http://layer1.cpanel.net/perl588installer.tar.gz
tar -zxf perl588installer.tar.gz
cd perl588installer
./install -optimize-memory

On FreeBSD based systems, you will need to install Perl from ports.

This will take a few minutes, so take a coffee break and check
the status when you return. Once the update has completed, you
can install all previously installed modules from the CPAN bundle
by executing the following (with the bundle name adjusted to the
name of the bundle generated earlier):

perl -MCPAN -e 'install Bundle::Snapshot_2007_08_16_00'

This should install each of the modules present in the bundle,
assuming there are no issues during the installation (dependencies,
network, etc).

Once this has completed, execute the following to ensure that all modules
required by cPanel are installed, and restart cPanel:

/usr/local/cpanel/bin/checkperlmodules
/usr/local/cpanel/startup
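
As a final sanity check, you may want to confirm the new Perl version and that a previously installed module still loads (CGI is just an example module here):

perl -v | head -2
perl -MCGI -e 'print "CGI loads OK\n"'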

Sunday, February 14, 2010

Restoring RPM and YUM on RedHat after an accidental yum remove rpm

You can restore rpm using the following procedure:

Download the OS-specific rpm package file to your local machine using wget.

Suppose for Red Hat I download the file "rpm-4.1.1-1.7x.i386.rpm" from the link:

http://ftp.freshrpms.net/pub/freshrpms/redhat/testing/7.3/rpm-4.1.1/

Now make a folder, say "RPM_test", on your local machine. Extract the rpm into that folder using the following command.

rpm2cpio rpm-4.1.1-1.7x.i386.rpm | cpio -idmv
It will create the following folders inside your RPM_test directory.

------------------
kanchan rpm > ls -al
drwx------ 2 kanchan games 512 Feb 13 20:08 bin
drwx------ 5 kanchan games 512 Feb 13 20:08 etc
drwx------ 5 kanchan games 512 Feb 13 20:08 usr
drwx------ 4 kanchan games 512 Feb 13 20:08 var
kanchan rpm >
-----------------

Now copy the RPM_test folder to the server using the scp or rsync command.
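
For example (a sketch; "server" is a placeholder for the machine whose rpm was removed):

scp -r RPM_test root@server:/root/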

On the server, look inside the "RPM_test/bin" directory; you will find the following binary.

-------------------------------------------------
-rwxr-xr-x 1 kanchan games 2179291 Jul 23 2003 rpm
-------------------------------------------------

Use this binary to restore your original rpm package, like so:

=================
./bin/rpm -vUh --nodeps --force rpm-4.1.1-1.7x.i386.rpm
=================

Your work is done here.
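
To confirm that the restored rpm works, a quick check might be:

=================
rpm --version
rpm -qa | wc -l
=================

The second command should still report your installed packages, showing that the package database is intact.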


If you want to install YUM, just install all the files at the following link (this is for Red Hat; for other versions, use the corresponding repository):

-------------
http://ftp.freshrpms.net/pub/freshrpms/redhat/testing/7.3/rpm-4.1.1/
-------------

You can use the following script to download it in a temporary directory :

---------------
#!/bin/bash

for file in \
librpm404-4.0.5-1.7x.i386.rpm \
librpm404-4.0.5-1.7x.src.rpm \
librpm404-devel-4.0.5-1.7x.i386.rpm \
popt-1.7.1-1.7x.i386.rpm \
rpm-4.1.1-1.7x.src.rpm \
rpm-build-4.1.1-1.7x.i386.rpm \
rpm-devel-4.1.1-1.7x.i386.rpm \
rpm-python-4.1.1-1.7x.i386.rpm \
rpm404-python-4.0.5-1.7x.i386.rpm \
yum-2.0.3-0.rh7.rh.fr.i386.rpm \
yum-2.0.3-0.rh7.rh.fr.src.rpm
do
wget http://ftp.freshrpms.net/pub/freshrpms/redhat/testing/7.3/rpm-4.1.1/$file;
done

---------------
Then install it using the following command:

==============
rpm -vUh --nodeps --force *.rpm
==============

You are done here.

For other OS distributions, please see the links:

----------------------------
http://www.lazyhacker.com/blog/index.php/2009/12/28/restoring-rpm-and-yum-on-fedora-after-an-accidental-yum-remove-rpm
http://www.cyberciti.biz/tips/how-to-extract-an-rpm-package-without-installing-it.html


http://www.fedoralegacy.org/docs/yum-rh8.php
http://wiki.centos.org/HowTos/PackageManagement/YumOnRHEL
http://eric.lubow.org/2008/misc/adding-yum-to-centos-5/
http://www.electrictoolbox.com/install-yum-with-rpm-on-centos/
http://upstre.am/2009/04/30/installing-yum-on-centos-53/
http://kb.in2net.net/questions/118/Installing+Yum
----------------------------
