Wednesday, 30 December 2009

List all Users and Groups in Domain


Among the Windows Support Tools we can find LDIFDE.exe, a tool for bulk import and export of Active Directory objects. You can use LDIFDE to import new user records into the directory, or to export specific information on specific users into a text file. LDIFDE defaults to export mode (reading from the directory); when you add the -i option it writes changes into the directory. If you want to export only specific details, such as the user name, title and login name for all the users in a specific OU (Organizational Unit), you can run the following command:

ldifde -f C:\ldif\ExportUsers.ldf -s SERVERNAME -d "OU=YourOUname,dc=YourDomainName,dc=com" -p subtree -r "(objectClass=User)" -l "cn,givenName,Title,SamAccountName"
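Since LDIFDE defaults to export, the -i import mode deserves a quick illustration. A minimal sketch of an import file follows; the OU, domain and user attributes are placeholders, not values from a real directory:

```
dn: CN=Jane Doe,OU=YourOUname,DC=YourDomainName,DC=com
changetype: add
objectClass: user
cn: Jane Doe
givenName: Jane
sAMAccountName: jdoe
```

You would then load it with something like: ldifde -i -f C:\ldif\NewUsers.ldf -s SERVERNAME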

Enabling Multiple Remote Desktop Sessions in Windows XP Professional and Media Center Edition 2005

If you have ever used a real remote computing system like Citrix, then you have probably been craving multiple Remote Desktop sessions since you first fired up Windows XP Professional and/or Media Center Edition. Here is a HACK (translated: USE AT YOUR OWN RISK) to enable multiple Remote Desktop sessions on your XP Pro or MCE 2005 box:

NOTE: You will need to be familiar with the Windows operating system and, more specifically, the Windows Registry. If you have no experience with the registry, then I would recommend you find someone who does, or leave this alone. I do not make any kind of warranty that this will work for you or your friends. This is provided for entertainment purposes only. Don’t call me if your computer stops working. Got it?

Print these directions so that you have them to work from.
Restart your computer in Safe Mode - Follow this link to learn how to restart Windows XP in Safe Mode
Turn off/disable Remote Desktop Connection (RDC) and Terminal Services
Right click My Computer
Select Properties
Click on the Remote tab at the top of the window
UNCHECK the box next to "Allow users to connect remotely to this computer"
Click OK
Go to Start -> Control Panel -> Administrative Tools -> Services
Find Terminal Services in the list
Right click on Terminal Services and click Properties
In the Startup Type box, select Disabled
Click OK to close the window
Next you will replace the current version of the Terminal Services DLL (termsrv.dll) with an unrestricted version from a previous release of Terminal Services.
Here is a copy of the Terminal Services DLL - Save it to your Desktop or other suitable location
Using a file manager like Windows Explorer open C:\Windows\system32\dllcache
Rename the file termsrv.dll to termsrv_dll.bak or whatever you would like.
Copy the downloaded termsrv.dll file (the one you just downloaded from the web) to C:\Windows\system32\dllcache
Open the C:\Windows\system32 folder
Delete the file termsrv.dll in C:\Windows\system32
Now we can edit the Windows Registry to enable more than one RDP connection. Go to Start -> Run and type regedit - Hopefully you knew that already
Go to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server\Licensing Core
Add a DWORD value named EnableConcurrentSessions and give it a value of 1
Close the Registry Editor window
Go to Start -> Run and type gpedit.msc to run the Group Policy Editor
Browse to Computer Configuration -> Administrative Templates -> Windows Components -> Terminal Services and double click Limit number of connections
Select the Enabled button and enter the number of connections you would like to enable (at least 2)
Restart Windows
Right click My Computer and select Properties.
Click on the Remote tab at the top of the window
CHECK the box next to "Allow users to connect remotely to this computer"
Click OK
Go to Start -> Control Panel ->Administrative Tools -> Services. Select Terminal Services from the list and double click it or right-click -> Properties. Set the Startup Type to Manual.

Restart Windows/Computer
You should be good to go
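If you are comfortable with .reg files, the registry step above can also be captured in one (a sketch; the same USE AT YOUR OWN RISK caveat applies):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server\Licensing Core]
"EnableConcurrentSessions"=dword:00000001
```

Double-click the file to merge it instead of editing the value by hand.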

Friday, 11 December 2009

Bulk delete from Postfix queue

To delete a message from the Postfix queue, I normally find out the message ID first with the "postqueue -p" (or simply "mailq") command. Once the message ID is known, I simply issue the following command to delete that particular message (assuming the message ID is BA4491827DE):

# postsuper -d BA4491827DE

If there is only one message to delete, I can live with that. However, when there is a bunch of messages (e.g. from a particular domain) you need to delete from the queue, the above method is simply too much of a hassle (well, unless you want to delete *everything*, which would be "postsuper -d ALL"). Postfix does not have a function for doing that. Luckily, a search on Google yielded a Perl script that does exactly what I want: removing message(s) from the queue based on a keyword. Here is the content of that Perl script, called "delete-from-mailq":


#!/usr/bin/perl

$REGEXP = shift || die "no email address given (regexp-style, e.g. bl.*\@example\.com)!\n";

@data = qx(/usr/sbin/postqueue -p);
for (@data) {
    if (/^(\w+)(\*|\!)?\s/) {
        $queue_id = $1;
    }
    if ($queue_id) {
        if (/$REGEXP/i) {
            $Q{$queue_id} = 1;
            $queue_id = "";
        }
    }
}

open(POSTSUPER, "|postsuper -d -") || die "couldn't open postsuper";

foreach (keys %Q) {
    print POSTSUPER "$_\n";
}
close(POSTSUPER);
Save the above script to a file, say "delete-queue", in your home directory, and make it executable:

# chmod 755 delete-queue


Delete all queued messages from or to a given domain (here using the placeholder domain "example.com"):

./delete-queue example\.com

Delete all queued messages to a specific address (again a placeholder):

./delete-queue user@example\.com

Delete all queued messages that begin with the word "bush" in the e-mail address:

./delete-queue ^bush

Delete all queued messages that contain the word "biz" in the e-mail address:

./delete-queue biz

That's it.

Thursday, 10 December 2009

Self-Signed IIS SSL Certificates using OpenSSL


This tutorial assumes that you have a Linux box with OpenSSL installed, and that you want to create a self-signed certificate for IIS 5.0/6.0.

Set up your CA (you only have to do this once)

Create a private key

openssl genrsa -des3 -out CA.key 1024

(You’ll need to supply a passphrase. DON’T FORGET THIS!!)

Set this to read-only for root for security

chmod 400 CA.key

Create the CA certificate

openssl req -new -key CA.key -x509 -days 1095 -out CA.crt

(Provide appropriate responses to the prompts…for Common Name, you might want to use something like “OurCompany CA”)

Set the certificate to read-only for root for security

chmod 400 CA.crt

Obtain a CSR

Open the Internet Manager
Select the site for which you want to create a key
Right-click and choose Properties
Select the “Directory Security” tab
Click the “Server Certificate” button
Follow the prompts to create a CSR
Save your CSR, then transfer it to the Linux box for further processing.
(For the following steps, we’ll refer to your CSR as “new.csr”)
Sign the CSR

Sign the CSR (all of this on one line)

openssl x509 -req -days 365 -in new.csr -CA CA.crt
-CAkey CA.key -CAcreateserial -out new.crt

Transfer the new.crt file back to the IIS box
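If you want to rehearse the whole CA-and-signing flow before touching IIS, the steps above can be run end to end non-interactively. This is only a sketch: -subj skips the prompts, the CA key has no passphrase (don't do that for a real CA), and www.example.com stands in for your site's Common Name:

```shell
# create the CA (no passphrase here, unlike the real procedure above)
openssl genrsa -out CA.key 2048
openssl req -new -x509 -key CA.key -days 1095 -subj "/CN=OurCompany CA" -out CA.crt

# stand-in for the CSR that IIS would generate
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=www.example.com" -out new.csr

# sign the CSR with the CA
openssl x509 -req -days 365 -in new.csr -CA CA.crt -CAkey CA.key \
    -CAcreateserial -out new.crt

# confirm the new certificate chains up to the CA
openssl verify -CAfile CA.crt new.crt
```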
Install self-signed certificate

Open the Internet Manager
Select the site to install the key
Right-click and choose properties
Select the “Directory Security” tab
Click the “Server Certificate” button
Specify that you want to complete the pending request
Select the .crt file that you just transferred
That’s it!

PS: If you have problems with the certification path ("does not chain up to a
trusted root certificate" in the System Log), the following seems to fix it:

1. Internet Information Services -> select the site -> right click -> Properties
2. Directory Security -tab -> Secure communications -frame -> Edit…
3. Select “Enable certificate trust list”, click New… -> Next
4. Add from file -> use CA.crt -> Next
5. Write something to the Name and Description, if you like. -> Next -> Finish

Tuesday, 17 November 2009

Tripwire generating too big report files

Tripwire was generating big report files on one of our boxes, almost 12MB, compared to 60KB for the others.
I found that the problem was that many changes had been applied to this server, with files copied/moved,
so the tripwire DB holding the changes grew a lot.
To update the tripwire DB, the command to run is:
tripwire --update -Z low

This command will compare your database against your current file system and then launch an editor so that you can choose to make changes to your database.

If you try this command but get an error message about a missing report file, the reason is most likely that the last check was not run immediately prior to the update. The report file in the /var/lib/tripwire/report directory is named by hostname, then date (yyyymmdd) then time. If you have recently run a check and want the update to proceed using your most recent report file, then use the -r option and provide the report filename that you want the update to use.
tripwire --update -Z low --twrfile host-yyyymmdd-tttttt.twr

If it asks for a passphrase, you'll have to set one up first, unless you already know it:

tripwire --local-passphrase mypassword

Then run the update again, now that you know the passphrase.

Converting Ext2 Filesystems to Ext3

This is one of those tips that come in handy when you upgrade, or when you have a partition that is already formatted and holding data and you realize it is ext2! There is no need to re-format it or erase the data; just convert it!

When you look at the partition type, 83 is used for a Linux partition, no matter whether it is ext2 or ext3.

The Ext3 filesystem is an Ext2 filesystem with a journal file and some filesystem driver additions making the filesystem journalized.

Converting from Ext2 to Ext3
The conversion procedure is simple enough. Imagine /dev/sdb1 mounted as /data; the procedure would be as follows:

Log in as root
Make sure /etc/fstab has /dev/sdb1 mounted to /data as ext2, read write
umount /dev/sdb1
If you can't unmount it, then remount it read only (mount -o remount,ro /dev/sdb1)
tune2fs -j /dev/sdb1
Edit /etc/fstab, and for /dev/sdb1, change ext2 to ext3
mount /dev/sdb1 /data, or mount -a
Check if the partition was correctly mounted:
mount | grep /dev/sdb1
If it's not shown as ext3, reboot (shutdown -r now)
if still not, troubleshoot ...
Otherwise, you're done.
A few explanations are in order.
The tune2fs command creates the journal file, which is kept in a special inode on the device (by default). You then must change the /etc/fstab entry to reflect that it's a journalling filesystem, and then mount it.
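If you want to see the mechanism without touching a real partition, the same tune2fs -j call can be tried on a scratch image file first (no root needed; the file name here is arbitrary):

```shell
# build a small ext2 filesystem inside a plain file
dd if=/dev/zero of=demo.img bs=1M count=16 2>/dev/null
mke2fs -F -q demo.img

# add the journal, turning it into ext3
tune2fs -j demo.img

# the has_journal feature flag confirms the conversion
dumpe2fs -h demo.img 2>/dev/null | grep -i "features"
```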

You can check a full article describing other procedures in:

Monday, 16 November 2009

Accurate Date and Time in Linux using NTP: step-by-step instructions

First, a little bit of theory. Read it all, so you will understand how time works and why the setup needs several steps.

Why do We Need a Precise Clock?
If our computer never connects to other computers (or other devices that use a clock), the precision of its clock is not critical in itself; it depends on the needs of the user. However, programs that use the network in some way depend on a precise date and time. Some examples where you may need a precise clock:

Software that deals with transactions

Commercial applications (e.g. eBay)

Mail and messaging-related client and servers

Websites that use cookies

Distributed web applications

Web services

Distributed component-based applications such as J2EE, .NET, etc.

Advanced modern and parallel filesystems, such as AFS, DFS, GFS, GPFS, etc.

And of course, to use the computer to adjust our wristwatches.

Computer Global Date and Time Concept
To determine the current time for some region of the planet, a computer needs exactly these two pieces of information:

Correct UTC (universal time as in Greenwich, but not GMT) time

Region's current Time Zone

For computers, there is also the hardware clock, which is used as a base by the OS to set its time.

The OS date and time (we'll use only "date" or "time" from now on) is set at boot by some script that reads the hardware clock, makes Time Zone calculations (there is no time zone data stored in the BIOS) and sets the OS time. After this synchronization, the BIOS
and OS times are independent of each other, so after a while they may differ by some seconds. Which one is correct?
If you don't make any special configuration, neither of them.

We'll discuss here how to make them both globally 100% accurate.

Time Zones
Time Zones are a geographical division of the world globe into slices of 15° each, starting at Greenwich, in England, created to help people know what time it is in another part of the world.

Nowadays it is much more a political division than a geographical one, because sometimes people need to have the same time as other people in not-so-far locations. And for energy-saving reasons, we today have Daylight Saving Time, which is also a Time
Zone variation.

Time Zones are usually defined by your country's government or some astronomical institute, and are represented by 3 or 4 letters.

Use the to know what time it is now at any part of the globe.

Daylight Savings Time
For energy-saving reasons, governments created Daylight Saving Time. Our clocks are moved forward one hour, and this makes our days look longer. In fact, what really happens is only a Time Zone change. The primitive time (UTC) is still, and will always be, the same.

Time Zone Mechanism on Linux
Linux systems use the GLIBC dynamic Time Zones, based on /etc/localtime. This file is a link to (or a copy of) a zone information file, usually located under the /usr/share/zoneinfo directory.
To make it effective, you only have to link (or copy) the zoneinfo file to /etc/localtime. In some distributions there is a higher-level (and preferred) way to set the Time Zone, described later.

After making /etc/localtime point to the correct zoneinfo file, you are already under that zone's rules and DST changes are automatic -- you don't have to change the time manually.

Accurate Global Time Synchronization
Having accurate time on all your systems is as important as having a solid network security strategy (achieved by much more than simple firewall boxes). It is one of the primary components of a system administration based on good practices, which leads to organization and security. Especially when administering distributed applications, web services, or even a distributed security monitoring tool, accurate time is a must.

NTP: The Network Time Protocol
We won't discuss the protocol here, but rather how this wonderful invention, added to the pervasiveness of the Internet, can be useful to us. You can find more about it at

Once your system is properly set up, NTP will manage to keep its time accurate, making very small adjustments so as not to impact the running applications.

People can get the exact time using hardware based on the oscillation frequency of atoms. There is also a method based on GPS (Global Positioning System). The first is more accurate, but the second is pretty good too. Atomic clocks require very special and
expensive equipment, but their maintainers (usually universities and research labs) connect them to computers that run an NTP daemon, and some of them are connected to the Internet, which finally lets us access them for free. And this is how we'll
synchronize our systems.

Building a Simple Time Synchronization Architecture
You will need:

1) A direct or indirect (through a firewall) connection to the Internet, to synchronize our servers with a public accurate NTP server.

2) Choose some NTP servers. You can use the public server, or choose some from the stratum 2 public time servers on the NTP website. If you don't have Internet access, your WAN administrator (he must be a clever guy) can provide you with some internal addresses.

3) Have the NTP package installed on all systems you want to synchronize. You can find RPMs on your favorite Linux distribution CD, or do a search on

Local Relay Servers for NTP

If you have several machines to synchronize, do not make them all access the remote NTP servers you chose. Only 2 of your server farm's machines need to access the remote NTP servers; the other machines will sync with these 2. We will call them the Relay Servers.

Your Relay Servers can be any machines already available on your network. NTP consumes little memory and CPU; you don't need a dedicated machine for it.

The Correct Settings for Your Linux Box
For any OS installation, you must know your Time Zone. This is expressed in terms of a city, a state or a country. You must also decide how to set BIOS time, and we may follow two strategies here:

Linux Only Machine
In this case you should set BIOS time to UTC time. DST changes will be dynamically managed by Time Zone configurations.

Dual Boot Linux and MS Windows Machine
Windows handles time in a more primitive way than Linux. For Windows, BIOS time is always your local time, so DST changes are more aggressive because they directly change the hardware clock. And since both Linux and Windows initially get and set the time from the hardware, when they live together Linux must handle it the same way. So set the BIOS time to your local time.

Step 1, Setting the Zone Info file:

Your time zone is defined by /etc/localtime

If you are unsure whether the file you have is the correct one, you can check if it is the same as your time zone's:
diff -b /usr/share/zoneinfo/America/Vancouver /etc/localtime
If it isn't, you will have to remove it and create the link:
#rm /etc/localtime
#ln -s /usr/share/zoneinfo/America/Vancouver /etc/localtime
Alternatively, instead of creating the symbolic link, you can just copy the file:
#rm /etc/localtime
#cp /usr/share/zoneinfo/America/Vancouver /etc/localtime

Step 2, Setting the Time Zone:

On Red Hat Linux and derived systems, you can set the hardware clock strategy and Time Zone using the timeconfig command,
which shows a user-friendly dialog to select your time zone and applies the changes.

You can also use it non-interactively:
#timeconfig "America/Vancouver" # set HC to localtime, and TZ to America/Vancouver
#timeconfig --utc "America/Vancouver" # set HC to UTC, and TZ to America/Vancouver

In either case, this utility changes the /etc/sysconfig/clock file, which is read at boot time.
You can also edit it by hand; this is how it looks:

#cat /etc/sysconfig/clock
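The file typically contains something like this; the zone and the UTC flag, of course, reflect whatever you selected, so treat this as an example rather than your exact content:

```
ZONE="America/Vancouver"
UTC=true
ARC=false
```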

Step 3, Setting the Hardware Clock

I encourage you to set your hardware clock only after understanding how to get accurate time.

The hwclock command reads and sets the hardware clock, based on several options you give to it, documented in its man page.
But you don't have to use it if you have a modern Linux distribution. After defining your hardware clock strategy and Time Zone, you can use the high-level setclock command to correctly set your hardware clock.
You don't need to pass any parameters, because setclock intelligently calls hwclock to set the BIOS clock based on your OS's current date and time. So you should always use the setclock command.

#hwclock --systohc --utc # set HC with UTC time based on OS current time (I personally use this one)

You can also use any of these options:

#setclock # The easy way to set HC
#hwclock # reads HC
#hwclock --systohc # set HC with local time based on OS current time
#hwclock --set --date "21 Oct 2004 21:17" # set HC with the time specified in the string

Since the OS time is independent from the hardware clock, any BIOS change we make will take effect at the next boot.

Another option for changing the HC is rebooting and going into your computer's BIOS screens, but as you can see, there is no need to do that in Linux!

Step 4, Configure NTP protocol

Of course, as a prerequisite, you need to install the ntp package.

The only file to configure is /etc/ntp.conf. It doesn't matter whether you are configuring a client or a server (like a local relay server serving your local network); the server only needs an additional keyword.

Change the following parameters in /etc/ntp.conf:

To make it a local relay server:

1) First we specify the servers you're interested in:

server # A stratum 1 server at
server # A stratum 2 server at

2) Restrict the type of access you allow these servers. In this example the servers are not allowed to modify the run-time configuration or query your Linux NTP server.

restrict mask nomodify notrap noquery
restrict mask nomodify notrap noquery

The mask statement is really a subnet mask limiting access to the single IP address of the remote NTP servers.

3) As this server is also going to provide time for other computers, such as PCs, other Linux servers and networking devices, you'll have to define the networks from which this server will accept NTP synchronization requests. You do so with a
modified restrict statement, removing the noquery keyword to allow the network to query your NTP server. The syntax is:

restrict mask nomodify notrap

In this case the mask statement has been expanded to cover all the possible IP addresses on the local network.

4) We also want to make sure that localhost (the universal IP address used to refer to a Linux server itself) has full access without any restricting keywords:


5) Save the file and restart NTP for these settings to take effect.

6) To get NTP configured to start at boot, use the line:

# chkconfig ntpd on

You can now configure other Linux hosts on your network to synchronize with this new master NTP server in a similar fashion.
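Putting steps 1-6 together, a relay server's /etc/ntp.conf comes down to a handful of lines. Everything here is a placeholder (the pool host names and the 192.0.2.x/192.168.1.x addresses are examples); substitute the servers and local network you actually use:

```
# the public servers you chose (placeholders)
server 0.pool.ntp.org
server 1.pool.ntp.org

# those servers may not modify or query us (placeholder IPs)
restrict 192.0.2.10 mask 255.255.255.255 nomodify notrap noquery
restrict 192.0.2.11 mask 255.255.255.255 nomodify notrap noquery

# the local network may query this relay
restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap

# localhost has full access
restrict 127.0.0.1
```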
To make it a client accessing your brand new local relay server:

1) First we specify the servers you're interested in. You can use a host name or an IP address; a host name is recommended, because if you later have to change the servers' IP addresses, you won't have to change anything on your client computers:

server # Your first local ntp server
server # Your second local ntp server

2) Restrict the type of access you allow these servers. In this example the servers are not allowed to modify the run-time configuration or query your Linux NTP server.

restrict mask nomodify notrap noquery
restrict mask nomodify notrap noquery

The mask statement is really a subnet mask limiting access to the single IP address of the remote NTP servers.

3) As this server is NOT going to provide time for other computers, there is no need to allow others to query it, so include the noquery keyword:

restrict mask nomodify notrap noquery

In this case the mask statement has been expanded to cover all the possible IP addresses on the local network.

4) We also want to make sure that localhost (the universal IP address used to refer to a Linux server itself) has full access without any restricting keywords:


5) Save the file and restart NTP for these settings to take effect.

6) To get NTP configured to start at boot, use the line:

# chkconfig ntpd on
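A client's /etc/ntp.conf is even shorter; ntp1/ntp2 and their addresses below are placeholders for your two relay servers:

```
# our two local relay servers (placeholder names)
server ntp1.example.com
server ntp2.example.com

# the relays may not modify or query this client (placeholder IPs)
restrict 192.168.1.10 mask 255.255.255.255 nomodify notrap noquery
restrict 192.168.1.11 mask 255.255.255.255 nomodify notrap noquery

# localhost has full access
restrict 127.0.0.1
```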

Step 5: First sync:

It's good practice to synchronize manually the first time, because if the time difference is too big, it will not be corrected automatically.

For your local ntp server:

#service ntpd stop
#service ntpd start

For the rest of your servers/guests ntp clients:

#service ntpd stop
#service ntpd start

To check the sync status, you can use:

#ntpq -p

Enjoy, and have A GOOD TIME !!!

Friday, 11 September 2009

Control Windows services of a remote computer using the command line

You can control the Windows services of a remote computer using just the command line.
There are some options out there, but the one I use is psservice. This is a command-line utility, part of PsTools, that allows you to control services. (See a short description of PsTools at the bottom.)
The syntax is very simple:

psservice \\HOST -u USERNAME -p PASSWORD COMMAND SERVICENAME

HOST - is the hostname or ip address of the host to control (local or remote)
USERNAME - a user with permissions to control services of the HOST
PASSWORD - the password of the user
COMMAND - the action to take with the service. The most commonly used are: query, stop, start, restart
SERVICENAME - the name of the service to control

For example, to restart IIS service of a remote server:

psservice.exe \\192.x.x.x -u user -p ******* restart W3SVC

If you receive an error like:

Unable to access Service Control Manager on \\192.x.x.x:
Access is denied.

or:

Unable to connect to \\192.x.x.x

It's because PsTools relies on access to the admin$ share. A way to open it is:

net use \\HOST\admin$ PASSWORD /USER:USERNAME

HOST - is the hostname or ip address of the host
PASSWORD - the password of the user
USERNAME - a local or domain user with permissions to map a folder in the host
if USERNAME is a local user, type it as HOST\USERNAME,
if USERNAME is a domain user, type it as DOMAIN\USERNAME

For example:

net use \\192.x.x.x\admin$ ****** /USER:DOMAIN\USERNAME

You can remove the connection using:

net use \\192.x.x.x\admin$ /DELETE

With this as a base, you can create a batch file to restart services on a remote host with just a double-click.
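For example, something along these lines; the host, credentials and service name are the placeholders from above, and keeping a real password in a batch file is obviously a risk to weigh:

```
@echo off
rem restart IIS on the remote host; all values are placeholders
psservice.exe \\192.x.x.x -u user -p ******* restart W3SVC
pause
```

Save it as, say, restart-w3svc.bat (any name works) and double-click it.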


PsTools description

PsTools is a set of command-line utilities that allow you to manage local and remote systems.

All of the utilities in the PsTools suite work on Windows NT, Windows 2000 and Windows XP. The PsTools download package includes an HTML help file with complete usage information for all the tools.

The tools included in the PsTools suite are:

· PsExec - execute processes remotely
· PsFile - shows files opened remotely
· PsGetSid - display the SID of a computer or a user
· PsKill - kill processes by name or process ID
· PsInfo - list information about a system
· PsList - list detailed information about processes
· PsLoggedOn - see who's logged on locally and via resource sharing (full source is included)
· PsLogList - dump event log records
· PsService - view and control services
· PsShutdown - shuts down and optionally reboots a computer
· PsSuspend - suspends processes
· PsUptime - shows you how long a system has been running since its last reboot (PsUptime's functionality has been incorporated into PsInfo)

Friday, 29 May 2009

Centos 5.1 Chrooting SFTP using SCPonly

Prerequisites:

GCC is installed.

OpenSSH is installed.

Download scponly from: and extract it to /tmp

Configure Your Installation

Navigate into the directory in /tmp where you extracted scponly. Configure with the below command:

./configure --enable-chrooted-binary

Build & Install The Binaries

make
make install

This will install your manpage and scponly binary/binaries.

Edit /etc/shells using vi and add the chrooted scponly shell, so that the file contains the line:

/usr/local/sbin/scponlyc

If you do not want to use scponly in a chrooted fashion, then use the following instead of scponlyc:

/usr/local/bin/scponly


Set up the jail with the following command which invokes a helper script:

make jail

The output will look similar to below:

/usr/bin/install -c -d /usr/local/bin

/usr/bin/install -c -d /usr/local/man/man8

/usr/bin/install -c -d /usr/local/etc/scponly

/usr/bin/install -c -o 0 -g 0 scponly /usr/local/bin/scponly

/usr/bin/install -c -o 0 -g 0 -m 0644 scponly.8 /usr/local/man/man8/scponly.8

/usr/bin/install -c -o 0 -g 0 -m 0644 debuglevel /usr/local/etc/scponly/debuglevel

if test "xscponlyc" != "x"; then \

/usr/bin/install -c -d /usr/local/sbin; \

rm -f /usr/local/sbin/scponlyc; \

cp scponly scponlyc; \

/usr/bin/install -c -o 0 -g 0 -m 4755 scponlyc /usr/local/sbin/scponlyc; \


chmod u+x ./


Next we need to set the home directory for this scponly user.

please note that the user's home directory MUST NOT be writeable

by the scponly user. this is important so that the scponly user

cannot subvert the .ssh configuration parameters.

for this reason, a writeable subdirectory will be created that

the scponly user can write into.

Username to install [scponly]scponly

home directory you wish to set for this user [/home/scponly]

name of the writeable subdirectory [incoming]files

useradd: warning: the home directory already exists.

Not copying any file from skel directory into it.

creating /home/scponly/files directory for uploading files

Your platform (Linux) does not have a platform specific setup script.

This install script will attempt a best guess.

If you perform customizations, please consider sending me your changes.

Look to the templates in build_extras/arch.

- joe at sublimation dot org

please set the password for scponly:

Changing password for user scponly.

New UNIX password:

Retype new UNIX password:

passwd: all authentication tokens updated successfully.

if you experience a warning with winscp regarding groups, please install

the provided hacked out fake groups program into your chroot, like so:

cp groups /home/scponly/bin/groups

Note: I ran the command mentioned at the end.

cp groups /home/scponly/bin/groups

Note that this is not the end all for setting up chrooted scponly!

During "make jail", for example, I used /home/scponly/ as my chroot main path. The following are the final steps I took to get scponly working.

Edit /home/scponly/etc/ and replace its content with:



Type ldconfig -r /home/scponly/

Copy /lib/* to /home/scponly/lib/:

cp /lib/* /home/scponly/lib/

Copy /etc/group to /home/scponly/etc/:

cp /etc/group /home/scponly/etc/

Create the folder /home/scponly/etc/selinux

mkdir /home/scponly/etc/selinux

Create a file named config there and insert the following content into this file:

vi /home/scponly/etc/selinux/config

SELINUX=disabled

Create the folder:

mkdir /home/scponly/dev

Create the null device in chroot:

mknod /home/scponly/dev/null c 1 3

Change permissions on the null device:

chmod 666 /home/scponly/dev/null

Monday, 6 April 2009

Rescan dynamically the scsi bus (applicable to CX Clariion SAN infrastructure)


I've been working for a while with a Dell - Clariion CX-300, and the best way to add newly attached LUNs was always to reboot the server.
However, that procedure is not always acceptable if you're in a hurry or just want to run some tests.
I found the procedure described below on an outdated website, but it worked very well in my case.

I also recommend using the script with the options -lwc. Run it with --help to see the description of each option.

The original link is:

After initialization ends, the server doesn't see the new devices :-( I tried a script from that should dynamically rescan the bus, but with no success.

$ /root/
Host adapter 1 (qla2xxx) found.
Host adapter 2 (qla2xxx) found.
Scanning for device 1 0 0 0 ...
OLD: Host: scsi1 Channel: 00 Id: 00 Lun: 00
Vendor: DGC Model: LUNZ Rev: 0208
Type: Direct-Access ANSI SCSI revision: 04
Scanning for device 2 0 0 0 ...
OLD: Host: scsi2 Channel: 00 Id: 00 Lun: 00
Vendor: DGC Model: LUNZ Rev: 0208
Type: Direct-Access ANSI SCSI revision: 04
0 new device(s) found.
0 device(s) removed.

So I stopped PowerPath and unloaded the qla modules in order to restart the whole thing.
$ /etc/init.d/PowerPath stop
Stopping PowerPath: done
$ lsmod | grep qla
qla6312 119233 0
qla2xxx 165733 1 qla6312
scsi_transport_fc 12225 1 qla2xxx
scsi_mod 116941 5 sg,qla2xxx,scsi_transport_fc,megaraid_mbox,sd_mod
[root@pasargades /opt/Navisphere/bin]
$ modprobe -r qla6312 qla2xxx
[root@pasargades /opt/Navisphere/bin]
$ lsmod | grep qla

then reload the whole thing:

$ modprobe qla2xxx qla6312
[root@pasargades /opt/Navisphere/bin]
$ /etc/init.d/PowerPath start
Starting PowerPath: done

Then it works; the kernel does see the new devices:

$ cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 06 Lun: 00
Vendor: PE/PV Model: 1x2 SCSI BP Rev: 1.0
Type: Processor ANSI SCSI revision: 02
Host: scsi0 Channel: 01 Id: 00 Lun: 00
Vendor: MegaRAID Model: LD 0 RAID1 69G Rev: 521S
Type: Direct-Access ANSI SCSI revision: 02
Host: scsi3 Channel: 00 Id: 00 Lun: 00
Vendor: DGC Model: RAID 5 Rev: 0208
Type: Direct-Access ANSI SCSI revision: 04
Host: scsi3 Channel: 00 Id: 00 Lun: 01
Vendor: DGC Model: RAID 5 Rev: 0208
Type: Direct-Access ANSI SCSI revision: 04
Host: scsi4 Channel: 00 Id: 00 Lun: 00
Vendor: DGC Model: RAID 5 Rev: 0208
Type: Direct-Access ANSI SCSI revision: 04
Host: scsi4 Channel: 00 Id: 00 Lun: 01
Vendor: DGC Model: RAID 5 Rev: 0208
Type: Direct-Access ANSI SCSI revision: 04
[root@pasargades /opt/Navisphere/bin]
$ fdisk -l

Disk /dev/sda: 73.2 GB, 73274490880 bytes
255 heads, 63 sectors/track, 8908 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 1 4 32098+ de Dell Utility
/dev/sda2 * 5 583 4650817+ 83 Linux
/dev/sda3 584 1220 5116702+ 83 Linux
/dev/sda4 1221 8908 61753860 5 Extended
/dev/sda5 1221 3770 20482843+ 83 Linux
/dev/sda6 3771 5682 15358108+ 83 Linux
/dev/sda7 5683 6192 4096543+ 82 Linux swap

Disk /dev/sdb: 676.4 GB, 676457349120 bytes
255 heads, 63 sectors/track, 82241 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn't contain a valid partition table

Disk /dev/sdc: 1395.8 GB, 1395864371200 bytes
255 heads, 63 sectors/track, 169704 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdc doesn't contain a valid partition table

Disk /dev/sdd: 676.4 GB, 676457349120 bytes
255 heads, 63 sectors/track, 82241 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdd doesn't contain a valid partition table

Disk /dev/sde: 1395.8 GB, 1395864371200 bytes
255 heads, 63 sectors/track, 169704 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sde doesn't contain a valid partition table

Disk /dev/emcpowera: 676.4 GB, 676457349120 bytes
255 heads, 63 sectors/track, 82241 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/emcpowera doesn't contain a valid partition table

Disk /dev/emcpowerb: 1395.8 GB, 1395864371200 bytes
255 heads, 63 sectors/track, 169704 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/emcpowerb doesn't contain a valid partition table

Note: We can see that fdisk sees both raw paths ( /dev/sdb and /dev/sdd ) to the same device, which PowerPath finally presents as /dev/emcpowera . All disk commands (fdisk etc ...) should now use that device in order to benefit from PowerPath (load balancing and failover on our dual-attached FC).
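To act on the multipathed device rather than one of its raw paths, the usual disk commands are simply pointed at /dev/emcpowera. A hedged sketch follows; it only prints the steps (they need root and the actual array), and the partition number, filesystem type and mount point are made-up examples:

```shell
#!/bin/sh
# Print (not execute) the typical sequence for putting the PowerPath
# pseudo-device into service. The device name comes from the fdisk
# output above; partition, filesystem and mount point are hypothetical.
dev=/dev/emcpowera

emcpower_steps() {
    echo "fdisk $dev              # partition the multipathed device"
    echo "mkfs.ext3 ${dev}1       # filesystem on the first partition"
    echo "mount ${dev}1 /data     # mount it (add to /etc/fstab to persist)"
}

emcpower_steps
```

Running the same commands against /dev/sdb or /dev/sdd would work, but would bypass the failover and load balancing PowerPath provides.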

The 'rescan' script now shows:

$ /root/
Host adapter 3 (qla2xxx) found.
Host adapter 4 (qla2xxx) found.
Scanning for device 3 0 0 0 ...
OLD: Host: scsi3 Channel: 00 Id: 00 Lun: 00
Vendor: DGC Model: RAID 5 Rev: 0208
Type: Direct-Access ANSI SCSI revision: 04
Scanning for device 4 0 0 0 ...
OLD: Host: scsi4 Channel: 00 Id: 00 Lun: 00
Vendor: DGC Model: RAID 5 Rev: 0208
Type: Direct-Access ANSI SCSI revision: 04
0 new device(s) found.
0 device(s) removed.
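The whole stop/unload/reload dance above can be collected into one sketch. It only prints the commands, since they need root and briefly take the FC paths down; the module names match the lsmod output shown earlier:

```shell
#!/bin/sh
# Print the reload sequence used above to make the kernel see newly
# presented LUNs: stop PowerPath, unload and reload the QLogic HBA
# modules, restart PowerPath, then check /proc/scsi/scsi.
fc_reload_steps() {
    echo "/etc/init.d/PowerPath stop"
    echo "modprobe -r qla6312 qla2xxx"
    echo "modprobe qla2xxx qla6312"
    echo "/etc/init.d/PowerPath start"
    echo "cat /proc/scsi/scsi"
}

fc_reload_steps
```

Pipe the output to sh (as root, during a maintenance window) to actually run it.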

Monday, 23 March 2009

How to run an application as a daemon? Example with the nsca Nagios utility

Recently I needed to make a server application run as a Unix daemon and be able to start, stop and restart it on demand. The application in question, the nsca utility that sends passive check results to Nagios, didn't come with any startup/shutdown scripts. It runs in the foreground, without even detaching from the console (unless you append & to the command line, e.g. # nsca -c nsca.cfg & ).
I did the same thing some time ago to make another utility work as a daemon, and today I want to share the info.
The idea is to write an init script that starts the process, stores its PID, and "daemonizes" the forked process.

Let’s assume that the application we want to run is /usr/local/nagios/nsca

First, we need to create a script in /etc/init.d/. Let us name the file /etc/init.d/nsca

The file contents would look something like this:

#!/bin/bash
# /etc/rc.d/init.d/nsca
# Control script to start/stop the nsca utility as a daemon
# chkconfig: 345 99 01
# description: NSCA passive alerts writer for Nagios
# Author: Edwin Salvador (
# Changelog:
# 2009-03-23
# - First version of the script
# processname: nsca
# Source function library.
. /etc/init.d/functions

test -x /usr/local/nagios/nsca || exit 0

RETVAL=0
prog="NSCA passive alerts writer for Nagios"

start() {
    echo -n $"Starting $prog: "
    daemon /usr/local/nagios/nsca -c /usr/local/nagios/nsca.cfg
    RETVAL=$?
    echo
    [ $RETVAL -eq 0 ] && touch /var/lock/subsys/nsca
    return $RETVAL
}

stop() {
    echo -n $"Stopping $prog: "
    killproc /usr/local/nagios/nsca
    RETVAL=$?
    echo
    [ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/nsca
    return $RETVAL
}

# See how we were called.
case "$1" in
  start)
    start
    ;;
  stop)
    stop
    ;;
  restart)
    stop
    start
    ;;
  condrestart)
    if [ -f /var/lock/subsys/nsca ]; then
        stop
        start
    fi
    ;;
  status)
    status /usr/local/nagios/nsca
    RETVAL=$?
    ;;
  *)
    echo $"Usage: $0 {condrestart|start|stop|restart|status}"
    exit 1
    ;;
esac

exit $RETVAL

This is a pretty standard service start/stop/restart file. The small script takes care of the PID and works like any other standard service; you don't need to remove any PID file manually, nor even kill the process yourself.

It can also be easily modified to daemonize any other program / application / script under Linux!

Remember to assign user and group "root", and chmod 755.
Use a different, unique name for each new startup script you create.
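The ownership/permission step above can be sketched as follows; it runs against a scratch file standing in for /etc/init.d/nsca, so it is safe to try unprivileged:

```shell
#!/bin/sh
# Demonstrate the required mode on a scratch file standing in for
# /etc/init.d/nsca (on the real file you would also chown root:root,
# which requires root).
tmp=$(mktemp)
chmod 755 "$tmp"

# Show the resulting permission bits (expect 755)
stat -c '%a' "$tmp"
```

On the real box: cp the script into /etc/init.d/, chown root:root, chmod 755.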

You're all set. Just use it as any usual service you run:

# service nsca start
# service nsca stop
# service nsca restart

If you need it to start automatically on system boot-up:

# chkconfig --add nsca
# chkconfig nsca on

Enjoy it !

Thursday, 12 March 2009

Sudo: allow a normal user to run commands as root under Linux / UNIX operating systems

I would like to run few commands such as stop or start web server as a root user. How do I allow a normal user to run these commands as root?

You need to use the sudo command, which is used to execute a command as another user. It allows a permitted user to run a command as the superuser or as another user, as specified in the /etc/sudoers file (the config that defines who can run what). In other words, sudo lets users carry out tasks on a Linux system as another user.

sudo is more secure than the su command. By default it logs sudo usage, the command and its arguments, to /var/log/secure (Red Hat / Fedora / CentOS Linux) or /var/log/auth.log (Ubuntu / Debian Linux).

If the invoking user is root or if the target user is the same as the invoking user, no password is required. Otherwise, sudo requires that users authenticate themselves with a password by default (NOTE: in the default configuration this is the user's password, not the root password). Once a user has been authenticated, a timestamp is updated and the user may then use sudo without a password for a short period of time (15 minutes unless overridden in sudoers).

/etc/sudoers Syntax
The general syntax used by the /etc/sudoers file is:

USER HOSTNAME=COMMAND

USER: Name of the normal user.
HOSTNAME: Where the command is allowed to run. It is the hostname of the system where this rule applies; sudo is designed so you can use one sudoers file on all of your systems, and this field lets you set per-host rules.
COMMAND: A simple filename allows the user to run the command with any arguments he/she wishes. However, you may also specify command line arguments (including wildcards). Alternately, you can specify "" to indicate that the command may only be run without command line arguments.
How do I use sudo?
For example, suppose you want to give user rokcy access to the halt/shutdown command and let him restart the Apache web server.
1) Login as root user

2) Use the visudo command to edit the config file:
# visudo
3) Append the following lines to file:
rokcy localhost=/sbin/halt
rokcy dbserver=/etc/init.d/apache-perl restart
4) Save the file and exit to shell prompt.
5) Now rokcy user can restart apache server by typing the following command:
$ sudo /etc/init.d/apache-perl restart

Restarting apache-perl 1.3 web server...
The sudo command logs the attempt to /var/log/secure or /var/log/auth.log:
# tail -f /var/log/auth.log


May 13 08:37:43 debian sudo: rokcy : TTY=pts/4 ; PWD=/home/rokcy ; USER=root

If rokcy wants to shut down the computer, he needs to type:
$ sudo /sbin/halt

Password:
Before running a command with sudo, users usually supply their password. Once authenticated, and if the /etc/sudoers configuration file permits the user access, the command is run. sudo logs each command run, and in some cases it has completely supplanted the superuser login for administrative tasks.

More examples
a) Specify multiple commands for user jadmin:
jadmin ALL=/sbin/halt, /bin/kill, /etc/init.d/httpd
b) Allow user jadmin to run /sbin/halt without any password i.e. as root without authenticating himself:
jadmin ALL= NOPASSWD: /sbin/halt
c) Allow user charvi to run any command from /usr/bin directory on the system devl02:
charvi devl02 = /usr/bin/*
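As a quick sanity check, the example rules can be dropped into a scratch file and grepped back; on the real system, visudo -c (or sudo -l as the target user) is the proper verification. The scratch filename here is hypothetical:

```shell
#!/bin/sh
# Write the example sudoers rules to a scratch file and grep them back.
# On a real box you would edit with visudo and verify with 'visudo -c'.
f=$(mktemp)
cat > "$f" <<'EOF'
jadmin ALL=/sbin/halt, /bin/kill, /etc/init.d/httpd
jadmin ALL= NOPASSWD: /sbin/halt
charvi devl02 = /usr/bin/*
EOF

# Count the rules granted to jadmin (expect 2)
grep -c '^jadmin' "$f"
```

Never edit /etc/sudoers directly; visudo locks the file and checks the syntax before saving.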


HowTo SSH/SCP without a password

HowTo SSH/SCP without a password.

This small HowTo will explain how to setup key-based authentication for password-less SSH and SCP usage.

This HowTo assumes the reader has some basic knowledge of ssh and a terminal, and is using an operating system that implements SSH. If you're using a Windows OS and want to use SSH, try PuTTY. For PuTTY, see key-based auth with PuTTY.

In the examples that follow please substitute 'servername' , 'ipaddress' and 'username' with the proper information for your setup. I have included a list of weblinks for the words in italic at the end of this document.

Step 1. Verify that you can connect normally (using a password) to the server you intend to setup keys for:

#### Examples ####

user@homebox ~ $ ssh username@'servername'

# Or:

user@homebox ~ $ ssh username@'ipaddress'

# If your username is the same on both the client ('homebox') and the server ('servername'):

user@homebox ~ $ ssh 'servername'

# Or:

user@homebox ~ $ ssh 'ipaddress'

# If this is your first time connecting to 'servername' (or 'ipaddress'), upon establishing a connection with the
# server you'll be asked if you want to add the server's fingerprint to the known_hosts file on your computer.
# Press 'enter' to add the fingerprint.

Step 2. Now that you're connected to the server and verified that you have everything you need for access (hopefully), disconnect by typing 'exit' .

#### Examples ####

user@servername ~ $ exit

# You should be back at:

user@homebox ~ $

Step 3. The next step is to copy a unique key generated on your 'homebox' to the server you are connecting to. First, before you generate a new key, check to see if you already have one:

#### Example ####

user@homebox ~ $ ls -l ~/.ssh
total 20
-rwx--xr-x 1 user user 601 Feb 2 01:58 authorized_keys
-rwx--xr-x 1 user user 668 Jan 1 19:26 id_dsa
-rwx--xr-x 1 user user 599 Jan 1 19:26 id_dsa.pub
-rwx--xr-x 1 user user 6257 Feb 2 21:04 known_hosts

# The file we need to copy to the server is named id_dsa.pub. As you can see above, the file exists. You may or may not have other files in ~/.ssh as I do. If the key doesn't exist, however, you can make one as follows:

#### Example ####

user@homebox ~ $ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/user/.ssh/id_dsa): # Press 'enter' here
Enter passphrase (empty for no passphrase): # Press 'enter' here
Enter same passphrase again: # Press 'enter' here
Your identification has been saved in /home/user/.ssh/id_dsa.
Your public key has been saved in /home/user/.ssh/id_dsa.pub.
The key fingerprint is:
6f:c3:cb:50:e6:e9:90:f0:0f:68:d2:10:56:eb:1d:91 user@host

# Entering a passphrase during the key generation process would require you to enter it each time you SSH/SCP to the server, which defeats the purpose of this document.

Step 4. Regardless of whether you had a key ready to go or had to generate a new one, the next step is the same. Now you're ready to copy the key to the server. Do so like this:

#### Example ####

user@homebox ~ $ ssh-copy-id -i ~/.ssh/id_dsa.pub user@'servername' (or 'ipaddress')

# If you are asked whether or not you wish to continue, say yes.

Step 5. Now it's time to test the setup. To do that, try to ssh to the server:

#### Example ####

user@homebox ~ $ ssh 'servername' (or 'ipaddress')

# You should log in to the remote host without being asked for a password.

Step 6. You can now SSH or SCP to the remote host without having to enter a password at each connection. To make sure your private key stays secure from prying eyes, do the following to change permissions and restrict access to ~/.ssh on 'homebox' and also on 'servername':

#### Example ####

user@homebox ~ $ chmod 600 ~/.ssh/id_dsa ~/.ssh/id_dsa.pub

# Verify the permissions on the files:

#### Example ####

user@homebox ~ $ ls -l ~/.ssh
-rw------- 1 user user 668 Feb 4 19:26 id_dsa
-rw------- 1 user user 599 Feb 4 19:26 id_dsa.pub
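The same lockdown is worth applying to the ~/.ssh directory itself (sshd refuses keys that sit in group- or world-accessible locations when StrictModes is on). A sketch exercised on a scratch directory so it is safe to run:

```shell
#!/bin/sh
# Tighten permissions as in Step 6, demonstrated on a scratch directory
# standing in for ~/.ssh (run the same chmods on the real one).
d=$(mktemp -d)
touch "$d/id_dsa" "$d/id_dsa.pub" "$d/authorized_keys"

chmod 700 "$d"                 # the directory: owner-only
chmod 600 "$d"/*               # keys and authorized_keys: owner read/write

stat -c '%a %n' "$d" "$d/id_dsa"
```

On the real machine that is simply: chmod 700 ~/.ssh; chmod 600 ~/.ssh/*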


1. OpenSSH

2. known_hosts

3. fingerprint

Nice post!

I've noticed that I don't have the command ssh-copy-id on my OS X machine (I didn't even know one existed!). To achieve the same effect I usually do the following:
user@homebox ~ $ scp ~/.ssh/id_dsa.pub user@'servername':.ssh/authorized_keys

This is assuming you've already created a .ssh directory on your server 'servername' (just ssh in as normal and `mkdir .ssh`). This also assumes that you don't already have an `authorized_keys` file in the .ssh directory on your server. If you do, just copy (scp) the key to a temporary file in your server's home directory and append it:

user@homebox ~ $ scp .ssh/id_dsa.pub user@servername:homebox_dsa.pub
user@homebox ~ $ ssh user@servername
user@servername ~ $ cat homebox_dsa.pub >> .ssh/authorized_keys
user@servername ~ $ rm homebox_dsa.pub

If you've got it, the ssh-copy-id way is clearly a lot easier!

~ Mark

Hi Mark. Thanks for adding that bit. I don't have access to a Mac (new one anyway) so that's very nice to know.


Seth, I liked this post a lot, but felt the formatting and wording can be improved. I've made a few changes to the introduction.

(I wish I had used my name for my username now!)


I found an elegant way of creating a new, or adding to an existing, authorized_keys file with a single command:

ssh user@servername "echo `cat ~/.ssh/id_dsa.pub` >> ~/.ssh/authorized_keys"

I think it *is* good practice to use passphrases with ssh keys. You can use ssh-agent on Linux, or SSH Agent or SSHKeychain on Mac OS X, so you don't have to type your passphrase every time you access a remote host. Also, you can forward your keys using 'ssh -A' if you need to hop onto some host in the middle.

-- Igor
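Igor's agent workflow can be sketched like this; it only prints the commands, since starting an agent and loading a passphrase-protected key is interactive. The key path assumes the id_dsa generated earlier:

```shell
#!/bin/sh
# Print the ssh-agent workflow for passphrase-protected keys:
# start the agent, add the key once, then hop with agent forwarding.
agent_steps() {
    echo 'eval "$(ssh-agent -s)"     # start the agent in this shell'
    echo 'ssh-add ~/.ssh/id_dsa      # type the passphrase once'
    echo 'ssh -A user@servername     # forward the agent while hopping'
}

agent_steps
```

With the agent loaded, every subsequent ssh/scp in that shell uses the cached key.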

Wednesday, 11 February 2009

Two ways to copy files from a remote computer securely

Run either of the two following commands from the destination computer, after changing to the destination directory.

rsync -avz -e ssh root@192.x.x.x:/s01/backup/oradata/databkup/* .

scp root@192.x.x.x:/s01/backup/oradata/databkup/* .
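After either command finishes, it is reassuring to compare checksums on both ends. This sketch just prints the two commands to run (paths taken from the examples above; run the first on the source host, the second in the destination directory):

```shell
#!/bin/sh
# Print a checksum comparison for the copied files: one command per
# end of the transfer, to be diffed by eye afterwards.
verify_steps() {
    echo "ssh root@192.x.x.x 'cd /s01/backup/oradata/databkup && md5sum *'"
    echo "md5sum *"
}

verify_steps
```

rsync with -c would also recompare checksums on a second run, at the cost of reading every file again.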

After cloning a Linux box running a VNC server, xterm cannot run

After cloning a Linux box running a VNC server, a terminal connection could not start xterm. (I cloned a virtual server into another virtual server, on VMware ESX 3.5.)
The log file (located at /home/username/.vnc/hostname:1.log) was as follows:

Wed Feb 11 11:27:53 2009
Client: Server default pixel format depth 16 (16bpp) little-endian rgb565
Client: Client pixel format depth 6 (8bpp) rgb222
xterm: Error 32, errno 2: No such file or directory
Reason: get_pty: not enough ptys

I think the two steps that solved it were:

1) Create a new .Xauthority file.
Logged in as username:
Delete the .Xauthority file, located at /home/username
Create a new .Xauthority file by issuing the command: $ mkxauth -c

2) Use MAKEDEV to create the pty and pts devices.
Logged in as root:

cd /dev
./MAKEDEV pty

Steps found at:


host:/# xterm
xterm: Error 32, errno 2: No such file or directory
Reason: get_pty: not enough ptys
try running MAKEDEV pty in /dev to make the devices you need.


cannot start xterm on NetBSD-4.0


Subject: Re: cannot start xterm on NetBSD-4.0
From: Aleksey Cheusov
Date: Thu, 10 Jul 2008 00:46:55 +0300


>> - After manual running the following commands
>> cd /dev
>> ./MAKEDEV ptm
>> mkdir pts
>> mount pts

> You can add this to MAKEDEV under "init)":
> makedev ptm
> mkdir -m 0755 /dev/pts

I've added this code to /etc/rc.local (because /dev is on MFS)
and everything works fine now while booting.
But this is strange ;-( Before HDD failure everything worked fine
without this code.

>> xterm seems to work but says
>> utmp_update: Cannot update utmp entry: Resource temporarily unavailable
>> utmp_update: Cannot update utmp entry: Undefined error: 0

> Have you searched for "Cannot update utmp entry"? Same problem, same solution?
Thank you. I've found it :-) I really forgot -U option of

Best regards, Aleksey Cheusov.

I also ran, as root:

DISPLAY=:0.0
export DISPLAY

Monday, 9 February 2009

Change name of server, after install SQL Server 2005

If you change the name of a server / computer after installing SQL Server 2005 (it also happened to me with SQL Server 7.0 and SQL Server 2000), some of the programs that access the database will have problems, because the default instance was registered under the old name.
A long time ago, I had to back up my databases, uninstall SQL Server, change the server name, reinstall SQL Server, and restore the databases...
But the solution is actually very simple: change the name of the server and, after restarting it, launch SQL Server Management Studio (Enterprise Manager or Query Analyzer if using SQL Server 2000), then execute the following queries:

1) select @@servername
It will show you the current server name used by SQL Server

2) sp_dropserver OLDNAME
It will erase this parameter

3) sp_addserver NEWNAME, local
It will configure the SQL server parameter with the new name

4) Restart SQL server services

5) select @@servername
It will show you the new server name used by SQL Server. Try it at least twice.

You're all done.
It works for the default instance.
If you need to read further, go to

Thursday, 5 February 2009

VMware virtual machines grayed out in Virtual Center

It has happened to me twice already, with no apparent cause. Fortunately, the resolution is very straightforward: you just need to restart the management agents on the ESX server.

Here is the link to VMware site:

Here are the contents, so you can follow the procedure easily:

Restarting the Management agents on ESX Server 3.x

To restart the management agents on ESX Server 3.x:
  1. Login to your ESX Server as root from either an SSH session or directly from the console of the server.
  2. Type service mgmt-vmware restart .

    Caution: Ensure Automatic Startup/Shutdown of virtual machines is disabled before running this command or you risk rebooting the virtual machines. For more information, see Restarting hostd (mgmt-vmware) on ESX Server Hosts Restarts Hosted Virtual Machines Where Virtual Machine Startup/Shutdown is Enabled (1003312).
  3. Press Enter.
  4. Type service vmware-vpxa restart .
  5. Press Enter.
  6. Type logout and press Enter to disconnect from the ESX Server.
If this process is successful, it appears as:
[root@server]# service mgmt-vmware restart
Stopping VMware ESX Server Management services:
VMware ESX Server Host Agent Watchdog [ OK ]
VMware ESX Server Host Agent [ OK ]
Starting VMware ESX Server Management services:
VMware ESX Server Host Agent (background) [ OK ]
Availability report startup (background) [ OK ]
[root@server]# service vmware-vpxa restart
Stopping vmware-vpxa: [ OK ]
Starting vmware-vpxa: [ OK ]

Restarting the Management agents on ESX Server 3i

To restart the management agents on ESX Server 3i:
  1. Connect to the console of your ESX Server.
  2. Press F2 to customize the system.
  3. Login as root .
  4. Using the Up/Down arrows navigate to Restart Management Agents.
  5. Press Enter.
  6. Press F11 to restart the services.
  7. When the service has been restarted, press Enter.
  8. Press Esc to logout of the system.

Command to delete user password under Linux

Type the following command to delete a user password:

# passwd --delete username


Or:

# passwd -d username

The above command deletes a user's password (makes it empty). This is a quick way to disable a password for an account: it sets the named account passwordless, and the user will not be able to log in.

It is also a good idea to set the user's shell to nologin to avoid security-related problems:

# usermod -s /sbin/nologin username

For example, to delete the password for user johnc, type:

# passwd -d johnc
# usermod -s /sbin/nologin johnc

Thursday, 15 January 2009

Create a NFS share for VM ISO files with Windows 2003 Server R2

If your ESX servers are not connected to network storage, or if you do not have enough available space on your SAN to dedicate a subfolder of a VMFS volume to ISO files, then you can use an NFS network share to centrally store these images. Creating the NFS share can be done with many server operating systems, but did you know that Windows Server 2003 R2 has native NFS support? The following instructions, found on a site with many "how to" VMware tips for ESX, describe creating a Windows 2003 R2 NFS share:
On the Windows 2003 Server, make sure "Microsoft Services for NFS" is installed. If not, you need to add it under Add/Remove Programs, Windows Components, Other Network File and Print Services
Next, go to the folder you want to share, right-click on it and select Properties
Click on the NFS Sharing tab and select "Share this Folder"
Enter a Share Name, check "Anonymous Access" and make sure the UID and GID are both -2
In VirtualCenter, select your ESX server and click the “Configuration” tab and then select “Storage”
Click on “Add Storage” and select “Network File System” as the storage type
Enter the Windows Server name, the folder (share) name and a descriptive Datastore Name
Once it finishes the configuration you can now map your VM’s CD-ROM devices to this new VMFS volume
Repeat steps 5 through 8 for each of your ESX servers to make the same ISO files available to all ESX hosts.
These instructions assume that you have already configured the VMkernel port group on a vSwitch for each ESX host. For instructions and information about configuring the VMKernel for NAS/NFS storage check the Storage Chapter of the ESX Server 3 Configuration Guide.
Of course, you can use the NFS share for more than just ISO file storage too. This is a good repository for patches and scripts that need to be used on all hosts. NFS also makes a good target for VM image backups too. Use some imagination and install the free VMware server on your 2003 R2 box and you have a low budget DR platform. Oh yeah, I shouldn’t forget to mention you can even run ESX VMs from NFS!
Important Notes:
ESX version 3.x only supports NFS version 3 over TCP/IP.
Best practice for TCP/IP storage is to use a dedicated subnet. This will usually require creating separate Service Console and VMKernel port groups on a dedicated vSwitch.
On the Windows 2003 R2 server be sure to configure the shared folder so that the both the share and the file permissions allow everyone and anonymous full control. You can make the share read only when adding the storage in ESX.
Be sure to remember to punch a hole in the ESX firewall for NFS. On the Configuration tab, go to the Security Profile settings and add the NFS Client so it appears in the allowed outbound connections.
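For reference, steps 5 through 8 and the firewall note also have service-console CLI equivalents on ESX 3.x. The sketch below only prints them; the server name, share path and datastore label are made-up examples, and the exact flags should be double-checked against esxcfg-nas -h on your host:

```shell
#!/bin/sh
# Print the ESX 3.x service-console equivalents of the VirtualCenter
# steps: open the firewall for the NFS client, then add the datastore.
# Hostname, share path and datastore label are hypothetical examples.
esx_nfs_steps() {
    echo "esxcfg-firewall -e nfsClient                  # allow outbound NFS"
    echo "esxcfg-nas -a -o win2003srv -s /ISO iso_store # add NFS datastore"
    echo "esxcfg-nas -l                                 # list NAS datastores"
}

esx_nfs_steps
```

Run them as root on each ESX host's service console; VirtualCenter shows the same result under Configuration, Storage.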

Don't forget the note about Windows 2003 security: "On the Windows 2003 R2 server be sure to configure the shared folder so that both the share and the file permissions allow everyone and anonymous full control. You can make the share read only when adding the storage in ESX." Then you will not see this error: Error during the configuration of the host: Cannot open volume: /vmfs/volumes/********-******** It took me a while to check that, and now I will spank myself around.

HOW TO: Share Windows Folders by Using Server for NFS

UNIX uses the Network File System (NFS) protocol to share files and folders on the network. You can use the Server for NFS component in Windows Services for UNIX to share Windows file system resources to UNIX and Linux clients by using NFS, which includes full support for NFS v3. You can use Server for NFS to make interoperability and migration in a mixed environment easier. If you are using Windows, you can use either Windows Explorer or the Windows Nfsshare.exe command-line utility to share files to UNIX clients.

Share Windows Folders by Using Server for NFS

You can use Server for NFS to make Windows resources available to UNIX and Linux clients by using the NFS protocol. You can use either Windows Explorer or the Nfsshare.exe command line utility to share the folder.

To share a folder by using Nfsshare.exe:

  1. Log on to the Windows-based server by using an administrative level account.
  2. Click Start, click Run, type cmd, and then click OK.
  3. Type the following command, and then press ENTER to share a folder to NFS clients and to allow anonymous access:

    nfsshare -o anon=yes share_name=drive:path
  4. Type the following command, and then press ENTER to delete an NFS share:

    nfsshare share_name /delete
  5. Type: nfsshare /?, and then press ENTER to display the parameters that you can use with Nfsshare.
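Filled in with a concrete (made-up) share name and path, the commands from steps 3 and 4 would look like this; the sketch only prints them, since Nfsshare.exe exists only on the Windows server:

```shell
#!/bin/sh
# Print example invocations of Nfsshare.exe with a hypothetical
# share name (ISO) and path (D:\ISO), per steps 3 and 4 above.
nfsshare_examples() {
    echo 'nfsshare -o anon=yes ISO=D:\ISO   # share D:\ISO as "ISO", anonymous'
    echo 'nfsshare ISO /delete              # remove the share again'
}

nfsshare_examples
```

The anon=yes option matches the "Anonymous Access" checkbox used in the ESX datastore recipe earlier in this post.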

To share a folder by using Windows Explorer:

  1. Log on to the Windows-based server by using an administrative level account.
  2. Start Windows Explorer.
  3. Right-click the folder that you want to share, and then click Sharing.
  4. Click the NFS Sharing tab, and then click Share this folder.
  5. Configure the appropriate settings, and then click OK.
NOTE: Microsoft recommends that you install at least one User Name Mapping service on your network to map UNIX and Windows user names to each other. Please see our KB article about the User Name Mapping service in the REFERENCES section.

P2V: How To Make a Physical Linux Box Into a Virtual Machine


Over the last four days, I’ve been exploring how to convert physical
Linux boxes into virtual machines. VMWare has a tool
for doing P2V conversions, as they’re called, but as far as I can
tell it only works for Windows physical machines and for converting
various flavors of virtual machines into others.

I’ve had a Linux machine that I’ve used in my CS462 (Large Distributed
Systems) class for years. The Linux distro has been updated over
the years, but the box is an old 266MHz Pentium with 512Mb of RAM.
Overall, it’s done surprisingly well—a testament to the small
footprint of Linux. Still, I decided it was time for an upgrade.

Why Go Virtual

In an effort to simplify my life, I’m trying to cut down on the
number of physical boxes I administer, so I decided I wanted the new
version of my class server to be running on a virtual machine. This offers several advantages:

  • Fewer physical boxes to manage

  • Easier to move to faster hardware when needed

  • Less noise and heat

I could have just rebuilt the whole machine from scratch on a new
virtual machine, but that takes a lot of time and the old build isn’t
that out of date (one year) and works fine. So, I set out to
discover how to transfer a physical machine to a virtual machine.
The instructions below give a few details specific to VMWare and OS
X, but if you happen to use Parallels (or Windows), the vast majority
of what I did is applicable and where it’s not, figuring it out isn’t
hard. I’ve tried to leave clues and I’m open to questions.

Note: I’ve used
this same process to transfer a VMWare virtual image to run on
Parallels. There are probably easier ways, but this technique works
fine for that purpose as well—it doesn’t matter if the source
machine is physical or virtual.

The Process

The first step is to make an image of the source machine. I
recommend g4l, Ghost for Linux. There are some detailed
on g4l available, but the basics are:

  • Download the g4l bootable ISO and put it on a CD.

  • Boot it on the source machine.

  • Select the latest version from the resulting menu and start it up
    (you have to type g4l at the prompt).

  • Select raw transfered over network and configure the IP address
    and the username/password for the FTP server you want the image
    transferred to.

  • Give the new image a name.

  • Select “backup” and sit back and watch it work.

Note that if you have more than one hard drive on the
source machine, you’ll have to do each separately. I found that
separately imaging
each partition on each drive worked best. One tip: there
are three compression options. Lzop works, in this application,
nearly as well as GZip or BZip but with much less CPU load. Compression
helps not only with storing the images, but also with transferring
them around on the ‘Net, so you’ll probably want some kind of
compression.

The next step is to create a virtual machine and put the images on
its drive(s). Create a virtual machine in VMWare as you normally
would, selecting the right options for the source OS. When you get
to the screen that asks “Startup virtual machine and load OS” (or
something like that), uncheck the box and you should be able to
change the machine options.

The first thing you need to do with the new VM is create the right
number and size of hard drives—and partitions on those drives—to
match the partition images you’re going to restore.

For transfering single image machines to VMWare, just using the
default drive, appropriately sized, worked fine. For more than one
drive image, however, I found that making the drive type (SCSI/IDE)
match the type on the source was the easiest thing to do. Note that
VMWare won’t let you make the main drive an IDE drive by default.
You can always delete it and create a new drive that’s an IDE drive
if you need to.

The second thing you need to do with the new VM is set the machine to
boot from the CD ROM since we’ve got to start up g4l on the
target machine.

On VMWare, you can enter the BIOS by pressing F2 while the virtual
machine is loading. This isn’t as easy as it sounds since it starts
quick. Once you’re there, however, it’s a pretty standard BIOS setup
and changing the boot order is straightforward. On Parallels this
is easier since the boot order is an option you can change in the
VM’s settings.

If you’re creating partitions on the drives, you’ll need to boot from
a ISO image for the appropriate Linux distro and create the
partitions using the partition wizard, parted, or some other
tool—whatever you’d normally do.

Next boot the VM from the g4l ISO image on your computer or
the physical CD you made. If you have trouble, be sure the virtual
CDROM is connected and powered on when the virtual machine is
started. Start g4l and configure it the same way you did
before, but this time, you’ll select “restore” from the options.
g4l should start putting the images from the source machine
onto the target. If you have more than one hard drive or partition
image, you’ll have to restore each to a separate drive or
partition—as appropriate—on the virtual machine.

When doing a raw transfer, you need to make the drives the
same size as the machine you’re moving the image from (I’ve found
that larger works OK, but smaller doesn’t). If the drives aren’t big
enough to support the entire image, you’ll get “short reads” and not
everything will be transfered. Note that you won’t get much
complaint from g4l.

The virtual drives should theoretically only take as much space as
they need, but it turns out that since you’re doing a raw transfer,
you’ll fill them up with “space.” This is one of those instances
where copying a sparse data structure results in one that isn’t.
This results in awfully large disks—make sure you’ve got plenty of
scratch disk space for this operation. More on large disks later.

Repairing and Booting the New Machine

[Screenshot: Linux panics if the init RAM disk is not updated]

Once the images are copied, you have to make them usable. If you
just try to boot from them, you’ll likely see something like the
screenshot shown on the right: a short message followed by a kernel
panic. Before you can use the new machine, you have to do a little
repair work on the old images.

  • Get an emergency boot CD ISO for your flavor of Linux and boot
    the new virtual machine from it. Often you can just boot from the
    installation image and then enter a rescue mode. For example for
    Redhat, you can type “linux rescue” at the boot prompt and get into
    recovery mode.

  • It will search for Linux partitions and should find any you’ve
    restored to the machine. You’ll have the option to mount these. Do
    so.

  • Now, use the chroot command to change the root of the
    file system to the root partition. Mount any of the other partitions
    that you need (e.g. /boot).

  • Run kudzu to find any new devices and get rid of old ones.

  • Use mkinitrd to
    create a new init RAM disk. This command should work:

    /sbin/mkinitrd -v -f /boot/initrd-2.2.12-20.img 2.2.12-20

    Of course, you’ll have to substitute the right initrd name
    (look in /boot) and use the right version (look in /lib/modules).

If you get an error message about not being able to find the right
modules, be sure that the last argument to mkinitrd matches
what you see in /lib/modules exactly.
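
Since the mismatch is usually just a typo, one defensive trick is to derive the version string from the initrd file name itself. A sketch, using the example names from above:

```shell
# Derive the mkinitrd version argument from the initrd filename so
# the two always match (a mismatch causes the missing-modules error).
initrd=/boot/initrd-2.2.12-20.img
kver=${initrd#/boot/initrd-}    # strip the leading path and prefix
kver=${kver%.img}               # strip the .img suffix
echo "$kver"                    # -> 2.2.12-20
# then run: /sbin/mkinitrd -v -f "$initrd" "$kver"
```

Double-check that the resulting string matches a directory name under /lib/modules before running mkinitrd.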

Now, you should be able to boot the machine. With any luck, it
should work.

Disk Size Issues

When you restore the image, your new sparse disk will grow to the
size of the image, even if the image is only partially full of real
data. For example, my Linux box had a 6Gb drive (I told you it was
ancient) that contained the root partition and a 100 Gb drive that
I’d partitioned into two pieces: one 40Gb partition mounted as
/home and a 60Gb partition mounted as /web. After
restoring the images for these three partitions, I ended up with a 6Gb
file and a 107Gb file representing the virtual disks. This despite the
fact that only 8Gb of the 107Gb actually contained any data.

Clearly, you don’t want 107Gb files hanging around if they can be
smaller. One option is to do a file copy rather than an image. This
would work fine for the /home and /web partitions
in my case, but wouldn’t have worked for the root partition—I wanted
an image for that. If you’ve just got one big partition, then you
can’t use the file transfer option and still have exactly the
same machine.

Fortunately there’s a relatively painless way of reducing the size of
the disk to just what’s needed (thanks to Christian
for the technique).

The first step is to zero out all the free space on each partition of
the drive you want to shrink. This, in effect, marks the free
space. You can do that easily with this command:

cat /dev/zero > zero.fill;sync;sleep 1;sync;rm -f zero.fill

After this runs, you’ll get an error that says
“cat: write error: No space left on device”. That’s
normal—you just filled the drive with one BIG file full of zeros,
made sure it was flushed to the disk, and then deleted it.

Next you can use the VMware-supplied disk management tool to do the
actual shrinking. With VMware Workstation, you use
vmware-vdiskmanager, but the version of this program that
ships with Fusion doesn’t support the shrink option. Note that this
and other support programs are in

/Library/Application Support/VMware\ Fusion/

on OS X.

Fortunately, in OS X at least, there’s another
program, called diskTool in


that does support the shrink option (-k 1). Running
this command

diskTool -k 1 Luwak-IDE_0-1.vmdk

on my large disk reduced it from 107Gb to 8Gb!

A few notes: Apparently you have to perform the shrink option on the
disks for a machine before any snapshots have been taken.
Also, be sure to run the zero fill operation in each partition on the
disk. The shrinking option takes a little time, but it’s well worth
it. I haven’t tried this in Parallels, but I suspect the disk
compaction option would work. If someone tries it, let me know.


So, after a lot of experimentation, some playing around, and a lot of
long operations on large files, I have a virtual machine that’s a
fairly accurate reproduction of the physical machine that it came
from. I’ll be testing it over the next few days to make sure it’s
working as it should.
On reflection, I needn’t have been so faithful to the structure on
the physical machine. I could have created the right number of
partitions on one drive rather than creating multiple drives. After
all, the new drive can be as big as I like. Maybe I’ll do that next
and see how things go…

Posted by windley on August 20, 2007 7:38 AM

Creating and formatting swap partitions

You can have
several swap partitions. [Older Linux kernels limit the size of each swap
partition to approximately 124 MB, but Linux kernels 2.2.x and up
do not have this restriction.] Here are the steps to create and enable
a swap partition:

- Create the partition of the proper size using fdisk (partition
type 82, "Linux swap").

- Format the partition checking for bad blocks, for example:

mkswap -c /dev/hda4

Substitute your own partition name for /dev/hda4. Since I did not
specify the partition size, it will be detected automatically.

- Enable the swap, for example:

swapon /dev/hda4

To have the swap enabled automatically at bootup, you have to include
the appropriate entry into the file /etc/fstab, for example:

/dev/hda4 swap swap defaults 0 0

If you ever need to disable the swap, you can do it with (as root):

swapoff /dev/hda4

Swap partitions

Swap is an extension of the physical memory of the computer. Most likely, you
created a swap partition during the initial RedHat setup. You can
verify the amount of swap space available on your system using:

cat /proc/meminfo
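
If you only want the swap figures, you can grep the same file (the SwapTotal and SwapFree field names are standard in the Linux procfs; values are in kB):

```shell
# Show just the swap lines from /proc/meminfo
grep '^Swap' /proc/meminfo
```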

The general recommendation is that one should have: at least 4 MB
swap space, at least 32 MB total (physical+swap) memory for a system running
command-line-only, at least 64 MB of total (physical+swap) memory for
a system running the X Window System, and swap space at least 1.5 times the amount
of the physical memory on the system.

If this is too complicated, you might want to have a swap twice as large
as your physical (silicon) memory, but not less than 64 MB.
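
The simpler rule works out like this in shell arithmetic (the function name and RAM figures are made up for illustration):

```shell
# Swap sizing per the simpler rule: twice physical RAM, but never
# less than 64 MB. Argument and result are both in MB.
size_swap() {
    swap=$(( $1 * 2 ))
    [ "$swap" -lt 64 ] && swap=64
    echo "$swap"
}

size_swap 128   # 256 MB of swap for 128 MB of RAM
size_swap 16    # 64 MB: the floor kicks in for small machines
```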

After Converting Physical RHEL4 System to a Virtual Machine, System Cannot See Hard Disks and Kernel Panics

VMware official document: KB Article 1002402, updated Sep. 12, 2008.

VMware Converter
After using VMware Converter to convert a physical RHEL4 host into a virtual machine running on ESX Server 3.0.x, the RHEL4 guest operating system fails to boot. The following error message is returned:

No volume groups found

Followed by:

Kernel panic - not syncing: Attempted to kill init!

Note: This issue might also apply to other Linux distributions.


The issue occurs because the initial ramdisk image does not include the drivers or modules for the LSI Logic virtual SCSI adapter in an ESX Server 3.0.x virtual machine. These modules are not in the initial ramdisk image because the image was originally created on a system that does not use this hardware. To fix this issue, you must replace the existing initial ramdisk image with a new one that includes the proper drivers.

Note: Before you begin modifying the guest operating system, ensure the SCSI host adapter for the virtual machine is set to LSI Logic. For more information, see Changing the type of SCSI controller used in a Hosted virtual machine (1216).

Here are the steps required to do this:

  1. Remember to make a snapshot of your virtual machine before starting, and create a backup copy of any files to be edited.

  2. Because the RHEL4 installation in the virtual machine is not currently bootable, boot the virtual machine from the first RHEL4 installation disk.

  3. At the first prompt, type

    linux rescue

    and press Enter to boot into rescue mode.

  4. Enter the following command to change root to the mounted RHEL installation:

    chroot /mnt/sysimage

  5. If the physical host was IDE-based, check the following files for any cases of /dev/hda, and replace with /dev/sda:




  6. Ensure that grub is installed properly with the following command:


  7. Edit the file /etc/modules.conf and remove anything it contains. This should be an empty file. If the file does not exist, that is OK (you do not need to create it).

  8. Edit the file /etc/modprobe.conf, remove all existing lines, and replace them with the following 3 lines:

    alias eth0 pcnet32

    alias scsi_hostadapter mptbase

    alias scsi_hostadapter1 mptscsih

  9. Determine the full path to the initial ramdisk image you are going to rebuild. The initial ramdisk will be located in /boot. List the directory:

    ls /boot

    You see a file with a name similar to initrd-2.6.9-42.EL.img. In this case, the full path to this file is /boot/initrd-2.6.9-42.EL.img.

  10. Determine the kernel version to use for rebuilding the initial ramdisk image. Each installed kernel has its own folder in /lib/modules. List the directory:

    ls /lib/modules

    You see a folder with a name similar to 2.6.9-42.EL.

  11. Rebuild the ramdisk with the following command, replacing the path to the initial ramdisk image and the kernel version with the ones you determined in the previous two steps. (If there were multiple options, choose the newest ones, or check /etc/grub.conf to see which version is in use. Be sure the version number in the initial ramdisk image path matches the kernel version.)

    mkinitrd -v -f /boot/initrd-2.6.9-42.EL.img 2.6.9-42.EL

    Explanation of this command:

    1. mkinitrd: Make initial ramdisk

    2. -v: Be verbose

    3. -f: Force overwrite if file already exists (you want to replace the existing file)

    4. /boot/initrd-2.6.9-42.EL.img: Path to the file to write (which is already pointed to by /etc/grub.conf)

    5. 2.6.9-42.EL: Kernel version to use, which tells mkinitrd where to find the modules to include.

  12. Reboot.

    Important: After you have booted the system successfully and determined it is working as expected, remember to delete the snapshot you created in step 1.
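
Step 5 above can be sketched as a small shell function. The KB article’s list of files to check did not survive in this copy; /etc/fstab and the grub configuration are the usual places IDE device names appear, so treat the example target as an assumption:

```shell
# Hypothetical helper for step 5: rewrite IDE device names as SCSI
# ones in a given file, keeping a backup first (per step 1's advice).
ide_to_scsi() {
    cp "$1" "$1.bak"                      # backup before editing
    sed -i 's|/dev/hda|/dev/sda|g' "$1"   # e.g. /dev/hda1 -> /dev/sda1
}

# usage (candidate file is an assumption): ide_to_scsi /etc/fstab
```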

Product Versions
VMware Converter 3.0.x
Guest OS: RHEL 4
Last Modified Date: 09-12-2008
ID: 1002402