Thursday, 15 January 2009

Create a NFS share for VM ISO files with Windows 2003 Server R2

http://vmetc.com/2008/02/19/create-a-nfs-share-for-vm-iso-files-with-windows-2003-server-r2/

If your ESX servers are not connected to network storage or if you do not have enough available space on your SAN to dedicate a sub folder of a VMFS volume for ISO files, then you can use a NFS network share to centrally store these images. Creating the NFS share can be done with many server operating systems, but did you know that Windows Server 2003 R2 has native NFS?
VMware-land.com has many “how to” VMware Tips for ESX, and the following are the instructions found there for creating a Windows 2003 R2 NFS share:
  1. On the Windows 2003 server, make sure “Microsoft Services for NFS” is installed. If not, you need to add it under Add/Remove Programs, Windows Components, Other Network File and Print Services.
  2. Next, go to the folder you want to share, right-click on it, and select Properties.
  3. Click on the NFS Sharing tab and select “Share this Folder”.
  4. Enter a Share Name, check “Anonymous Access”, and make sure the UID and GID are both -2.
  5. In VirtualCenter, select your ESX server, click the “Configuration” tab, and then select “Storage”.
  6. Click on “Add Storage” and select “Network File System” as the storage type.
  7. Enter the Windows server name, the folder (share) name, and a descriptive Datastore Name.
  8. Once it finishes the configuration, you can map your VMs’ CD-ROM devices to this new NFS datastore.

Repeat steps 5 through 8 for each of your ESX servers to make the same ISO files available to all ESX hosts.
These instructions assume that you have already configured the VMkernel port group on a vSwitch for each ESX host. For instructions and information about configuring the VMKernel for NAS/NFS storage check the Storage Chapter of the ESX Server 3 Configuration Guide.
Of course, you can use the NFS share for more than just ISO file storage too. This is a good repository for patches and scripts that need to be used on all hosts. NFS also makes a good target for VM image backups too. Use some imagination and install the free VMware server on your 2003 R2 box and you have a low budget DR platform. Oh yeah, I shouldn’t forget to mention you can even run ESX VMs from NFS!
Important Notes:
ESX version 3.x only supports NFS version 3 over TCP/IP.
Best practice for TCP/IP storage is to use a dedicated subnet. This will usually require creating separate Service Console and VMKernel port groups on a dedicated vSwitch.
On the Windows 2003 R2 server, be sure to configure the shared folder so that both the share and the file permissions allow everyone and anonymous full control. You can make the share read-only when adding the storage in ESX.
Be sure to remember to punch a hole in the ESX firewall for NFS. On the Configuration tab, go to the Security Profile settings and add the NFS Client so it appears in the allowed outbound connections.


Important:
Don’t forget the Windows 2003 security note above: “On the Windows 2003 R2 server be sure to configure the shared folder so that both the share and the file permissions allow everyone and anonymous full control. You can make the share read only when adding the storage in ESX.” Otherwise you will see this error: “Error during the configuration of the host: Cannot open volume: /vmfs/volumes/********-********”. It took me a while to track that down, and now I will kick myself for it.

HOW TO: Share Windows Folders by Using Server for NFS

http://support.microsoft.com/kb/324089


UNIX uses the Network File System (NFS) protocol to share files and folders on the network. You can use the Server for NFS component in Windows Services for UNIX to share Windows file system resources to UNIX and Linux clients by using NFS, which includes full support for NFS v3. You can use Server for NFS to make interoperability and migration in a mixed environment easier. If you are using Windows, you can use either Windows Explorer or the Windows Nfsshare.exe command-line utility to share files to UNIX clients.

Share Windows Folders by Using Server for NFS


You can use Server for NFS to make Windows resources available to UNIX and Linux clients by using the NFS protocol. You can use either Windows Explorer or the Nfsshare.exe command line utility to share the folder.



To share a folder by using Nfsshare.exe:

  1. Log on to the Windows-based server by using an administrative level account.
  2. Click Start, click Run, type cmd, and then click OK.
  3. Type the following command, and then press ENTER to share a folder to NFS clients and to allow anonymous access:




    nfsshare -o anon=yes share_name=drive:path
  4. Type the following command, and then press ENTER to delete an NFS share:




    nfsshare share_name /delete
  5. Type: nfsshare /?, and then press ENTER to display the parameters that you can use with Nfsshare.


To share a folder by using Windows Explorer:

  1. Log on to the Windows-based server by using an administrative level account.
  2. Start Windows Explorer.
  3. Right-click the folder that you want to share, and then click Sharing.
  4. Click the NFS Sharing tab, and then click Share this folder.
  5. Configure the appropriate settings, and then click OK.

NOTE: Microsoft recommends that you install at least one User Name Mapping service on your network to map UNIX and Windows user names to each other. See the Microsoft KB article about the User Name Mapping service for more information.

P2V: How To Make a Physical Linux Box Into a Virtual Machine













Over the last four days, I’ve been exploring how to convert physical
Linux boxes into virtual machines. VMWare has a tool
for doing P2V conversions, as they’re called, but as far as I can
tell it only works for Windows physical machines and for converting
various flavors of virtual machines into others.




I’ve had a Linux machine that I’ve used in my CS462 (Large Distributed
Systems)
class for years. The Linux distro has been updated over
the years, but the box is an old 266MHz Pentium with 512Mb of RAM.
Overall, it’s done surprisingly well—a testament to the small
footprint of Linux. Still, I decided it was time for an upgrade.



Why Go Virtual



In an effort to simplify my life, I’m trying to cut down on the
number of physical boxes I administer, so I decided I wanted the new
version of my class server to be running on a virtual machine. This offers several
advantages:



  • Fewer physical boxes to manage


  • Easier to move to faster hardware when needed


  • Less noise and heat




I could have just rebuilt the whole machine from scratch on a new
virtual machine, but that takes a lot of time and the old build isn’t
that out of date (one year) and works fine. So, I set out to
discover how to transfer a physical machine to a virtual machine.
The instructions below give a few details specific to VMWare and OS
X, but if you happen to use Parallels (or Windows), the vast majority
of what I did is applicable and where it’s not, figuring it out isn’t
hard. I’ve tried to leave clues and I’m open to questions.




Note: I’ve used
this same process to transfer a VMWare virtual image to run on
Parallels. There are probably easier ways, but this technique works
fine for that purpose as well—it doesn’t matter if the source
machine is physical or virtual.



The Process



The first step is to make an image of the source machine. I
recommend g4l, Ghost for Linux. There are some detailed
instructions
on g4l available, but the basics are:



  • Download the g4l bootable ISO and put it on a CD.


  • Boot it on the source machine.


  • Select the latest version from the resulting menu and start it up
    (you have to type g4l at the prompt).


  • Select raw, transferred over the network, and configure the IP address
    and the username/password for the FTP server you want the image
    transferred to.


  • Give the new image a name.


  • Select “backup” and sit back and watch it work.




Note that if you have more than one hard drive on the
source machine, you’ll have to do each separately. I found that
separately imaging
each partition on each drive worked best. One tip: there
are three compression options. Lzop works, in this application,
nearly as well as GZip or BZip but with much less CPU load. Compression
helps not only with storing the images, but also with transferring
them around on the ‘Net, so you’ll probably want some kind of
compression.
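Conceptually, what g4l does in this mode is a raw read of the device piped through a compressor before the result is shipped to the FTP server. A minimal local sketch of that image-and-compress step, using an ordinary file as a stand-in for the partition and gzip for the compression (a real run would read something like /dev/hda1):

```shell
# Stand-in "partition": a file of random data instead of a real block device.
dd if=/dev/urandom of=disk.raw bs=1024 count=64 2>/dev/null

# Image and compress in one pass, as g4l does before sending over the network.
dd if=disk.raw bs=1024 2>/dev/null | gzip -c > disk.img.gz

# Verify the round trip: decompress and compare byte-for-byte with the source.
gunzip -c disk.img.gz | cmp - disk.raw && echo "round trip OK"
```

The same pipeline works with lzop or bzip2 in place of gzip; only the CPU cost and compression ratio change.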





The next step is to create a virtual machine and put the images on
its drive(s). Create a virtual machine in VMWare as you normally
would, selecting the right options for the source OS. When you get
to the screen that asks “Startup virtual machine and load OS” (or
something like that), uncheck the box and you should be able to
change the machine options.




The first thing you need to do with the new VM is create the right
number and size of hard drives—and partitions on those drives—to
match the partition images you’re going to restore.




For transfering single image machines to VMWare, just using the
default drive, appropriately sized, worked fine. For more than one
drive image, however, I found that making the drive type (SCSI/IDE)
match the type on the source was the easiest thing to do. Note that
VMWare won’t let you make the main drive an IDE drive by default.
You can always delete it and create a new drive that’s an IDE drive
if you need to.





The second thing you need to do with the new VM is set the machine to
boot from the CD ROM since we’ve got to start up g4l on the
target machine.




On VMWare, you can enter the BIOS by pressing F2 while the virtual
machine is loading. This isn’t as easy as it sounds since it starts
quick. Once you’re there, however, it’s a pretty standard BIOS setup
and changing the boot order is straightforward. On Parallels this
is easier since the boot order is an option you can change in the
VM’s settings.




If you’re creating partitions on the drives, you’ll need to boot from
a ISO image for the appropriate Linux distro and create the
partitions using the partition wiazrd, parted, or some other
tool—whatever you’d normally do.




Next boot the VM from the g4l ISO image on your computer or
the physical CD you made. If you have trouble, be sure the virtual
CDROM is connected and powered on when the virtual machine is
started. Start g4l and configure it the same way you did
before, but this time, you’ll select “restore” from the options.
g4l should start putting the images from the source machine
onto the target. If you have more than one hard drive or partition
image, you’ll have to restore each to a separate drive or
partition—as appropriate—on the virtual machine.




When doing a raw transfer, you need to make the drives the
same size as the machine you’re moving the image from (I’ve found
that larger works OK, but smaller doesn’t). If the drives aren’t big
enough to hold the entire image, you’ll get “short reads” and not
everything will be transferred. Note that you won’t get much
complaint from g4l.




The virtual drives should theoretically only take as much space as
they need, but it turns out that since you’re doing a raw transfer,
you’ll fill them up with “space.” This is one of those instances
where copying a sparse data structure results in one that isn’t.
This results in awfully large disks—make sure you’ve got plenty of
scratch disk space for this operation. More on large disks later.



Repairing and Booting the New Machine


(Screenshot: Linux panics if the init RAM disk is not updated)


Once the images are copied, you have to make them usable. If you
just try to boot from them, you’ll likely see something like the
screenshot shown on the right: a short message followed by a kernel
panic. Before you can use the new machine, you have to do a little
repair work on the old images.



  • Get an emergency boot CD ISO for your flavor of Linux and boot
    the new virtual machine from it. Often you can just boot from the
    installation image and then enter a rescue mode. For example for
    Redhat, you can type “linux rescue” at the boot prompt and get into
    recovery mode.


  • It will search for Linux partitions and should find any you’ve
    restored to the machine. You’ll have the option to mount these. Do
    so.


  • Now, use the chroot command to change the root of the
    file system to the root partition. Mount any of the other partitions
    that you need (e.g. /boot).


  • Run kudzu to find any new devices and get rid of old
    ones.


  • Use mkinitrd to
    create a new init RAM disk. This command should work:

    /sbin/mkinitrd -v -f /boot/initrd-2.2.12-20.img 2.2.12-20

    Of course, you’ll have to substitute the right initrd name
    (look in /boot) and use the right version (look in
    /lib/modules).




If you get an error message about not being able to find the right
modules, be sure that the last argument to mkinitrd matches
what you see in /lib/modules exactly.
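A quick way to avoid that mismatch is to check for the directory before running mkinitrd. A sketch of the check, using a scratch directory to stand in for /lib/modules (on the real system you would set MODDIR=/lib/modules; the 2.2.12-20 version string is just the example from above):

```shell
# Scratch stand-in for /lib/modules; substitute the real path on your system.
MODDIR=$(mktemp -d)
mkdir "$MODDIR/2.2.12-20"          # pretend this kernel is installed

KVER=2.2.12-20                     # the version you plan to pass to mkinitrd
if [ -d "$MODDIR/$KVER" ]; then
    echo "modules present for $KVER; safe to run mkinitrd"
else
    echo "no $MODDIR/$KVER; fix the version argument first" >&2
fi
```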




Now, you should be able to boot the machine. With any luck, it
should work.



Disk Size Issues




When you restore the image, your new sparse disk will grow to the
size of the image, even if the image is only partially full of real
data. For example, my Linux box had a 6Gb drive (I told you it was
ancient) that contained the root partition and a 100 Gb drive that
I’d partitioned into two pieces: one 40Gb partition mounted as
/home and a 60Gb partition mounted as /web. After
restoring the images for these three partitions, I ended up with a 6Gb
file and a 107Gb file representing the virtual disks. This despite the fact
that only 8Gb of the 107Gb actually contained any data.




Clearly, you don’t want 107Gb files hanging around if they can be
smaller. One option is to do a file copy rather than an image. This
would work fine for the /home and /web partitions
in my case, but wouldn’t have worked for the root partition—I wanted
an image for that. If you’ve just got one big partition, then you
can’t use the file transfer option and still have exactly the
same machine.





Fortunately there’s a relatively painless way of reducing the size of
the disk to just what’s needed (thanks to Christian
Mohn
for the technique).




The first step is to zero out all the free space on each partition of
the drive you want to shrink. This, in effect, marks the free
space. You can do that easily with this command:




cat /dev/zero > zero.fill;sync;sleep 1;sync;rm -f zero.fill



After this runs, you’ll get an error that says
“cat: write error: No space left on device”. That’s
normal—you just filled the drive with one BIG file full of zeros,
made sure it was flushed to the disk, and then deleted it.
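The sparseness point can be seen directly with two small files: one created by seeking past the data (sparse) and one actually filled with zero bytes, as a raw restore does. A sketch (stat’s %b prints allocated blocks):

```shell
# A sparse 1 MB file: apparent size 1 MB, but almost no blocks allocated.
dd if=/dev/zero of=sparse.img bs=1 count=0 seek=1048576 2>/dev/null

# A "raw restored" 1 MB file: every zero byte is physically written to disk.
dd if=/dev/zero of=full.img bs=1024 count=1024 2>/dev/null

echo "sparse: $(stat -c %b sparse.img) blocks, full: $(stat -c %b full.img) blocks"
```

The raw restore behaves like the second file, which is why the zero-fill-and-shrink pass below is needed to get the space back.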




Next you can use the VMWare supplied disk management tool to do the
actual shrinking. For VMWare Workstation Manager, you use
vmware-vdiskmanager, but the version of this program that
ships with Fusion doesn’t support the shrink option. Note that this,
and other support programs, are in



/Library/Application Support/VMware\ Fusion/


on OS X.




Fortunately, in OS X at least, there’s another
program, called diskTool in



/Applications/VMware Fusion.app/Contents/MacOS/



that does support the shrink option (-k1). Running
this command




diskTool -k 1 Luwak-IDE_0-1.vmdk



on my large disk reduced it from 107Gb to 8Gb!





A few notes: Apparently you have to perform the shrink option on the
disks for a machine before any snapshots have been taken.
Also, be sure to run the zero fill operation in each partition on the
disk. The shrinking option takes a little time, but it’s well worth
it. I haven’t tried this in Parallels, but I suspect the disk
compaction option would work. If someone tries it, let me know.



Conclusion




So, after a lot of experimentation, some playing around, and a lot of
long operations on large files, I have a virtual machine that’s a
fairly accurate reproduction of the physical machine that it came
from. I’ll be testing it over the next few days to make sure it’s
usable.




On reflection, I needn’t have been so faithful to the structure on
the physical machine. I could have created the right number of
partitions on one drive rather than creating multiple drives. After
all, the new drive can be as big as I like. Maybe I’ll do that next
and see how things go…












Posted by windley on August 20, 2007 7:38 AM


Creating and formatting swap partitions

You can have
several swap partitions. [Older Linux kernels limited the size of each swap
partition to approximately 124 MB, but kernels 2.2.x and up
do not have this restriction.] Here are the steps to create and enable
a swap partition:

- Create the partition of the proper size using fdisk (partition
type 82, "Linux swap").

- Format the partition checking for bad blocks, for example:

mkswap -c /dev/hda4

You have to substitute /dev/hda4 with your partition name. Since I did
not specify the partition size, it will be automatically detected.

- Enable the swap, for example:

swapon /dev/hda4

To have the swap enabled automatically at bootup, you have to include
the appropriate entry into the file /etc/fstab, for example:


/dev/hda4 swap swap defaults 0 0

If you ever need to disable the swap, you can do it with (as root):

swapoff /dev/hda4



Swap partitions



Swap is an extension of the physical memory of the computer. Most likely, you
created a swap partition during the initial RedHat setup. You can
verify the amount of swap space available on your system using:

cat /proc/meminfo

The general recommendation is that one should have: at least 4 MB
swap space, at least 32 MB total (physical+swap) memory for a system running
command-line-only, at least 64 MB of total (physical+swap) memory for
a system running X-windows, and swap space at least 1.5 times the amount
of the physical memory on the system.

If this is too complicated, you might want to have a swap twice as large
as your physical (silicon) memory, but not less than 64 MB.
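The 1.5x rule is easy to compute from the MemTotal line of /proc/meminfo. A sketch, run here against a sample line for a 512 MB machine rather than the live file:

```shell
# Sample /proc/meminfo line; on a real system read the file directly, e.g.:
#   awk '/MemTotal/ {print int($2 * 1.5 / 1024)}' /proc/meminfo
meminfo="MemTotal:       524288 kB"

# MemTotal is in kB; multiply by 1.5 and convert to MB.
swap_mb=$(echo "$meminfo" | awk '/MemTotal/ {print int($2 * 1.5 / 1024)}')
echo "Recommended swap: ${swap_mb} MB"
```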

After Converting Physical RHEL4 System to a Virtual Machine, System Cannot See Hard Disks and Kernel Panics

vmware official document. KB Article 1002402 Updated Sep. 12, 2008

http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&externalId=1002402&sliceId=2&docTypeID=DT_KB_1_1&dialogID=14730726&stateId=0%200%204678181


Products
VMware Converter
Details
After using VMware Converter to convert a physical RHEL4 host into a virtual machine running on ESX Server 3.0.x, the RHEL4 guest operating system fails to boot. The following error message is returned:

No volume groups found


Followed by:


Kernel panic - not syncing: Attempted to kill init!


Note: This issue might also apply to other Linux distributions.

Solution

The issue occurs because the initial ramdisk image does not include the drivers or modules for the LSILogic virtual SCSI adapter in an ESX Server 3.0.x virtual machine. These modules are not in the initial ramdisk image because the image was originally created on a system that does not use this hardware. To fix this issue, you must replace the existing initial ramdisk image with a new one that includes the proper drivers.


Note: Before you begin modifying the guest operating system, ensure the SCSI host adapter for the virtual machine is set to LSI Logic. For more information, see Changing the type of SCSI controller used in a Hosted virtual machine (1216).


Here are the steps required to do this:



  1. Remember to make a snapshot of your virtual machine before starting, and create a backup copy of any files to be edited.



  2. Because the RHEL4 installation in the virtual machine is not currently bootable, boot the virtual machine from the first RHEL4 installation disk.



  3. At the first prompt, type

    linux rescue

    and press Enter to boot into rescue mode.



  4. Enter the following command to change root to the mounted RHEL installation:

    chroot /mnt/sysimage



  5. If the physical host was IDE-based, check the following files for any cases of /dev/hda, and replace with /dev/sda:

    /etc/fstab

    /boot/grub/device.map

    /boot/grub/grub.conf




  6. Ensure that grub is installed properly with the following command:

    grub-install



  7. Edit the file /etc/modules.conf and remove anything it contains. This should be an empty file. If the file does not exist, that is OK (you do not need to create it).



  8. Edit the file /etc/modprobe.conf, remove all existing lines, and replace them with the following 3 lines:

    alias eth0 pcnet32

    alias scsi_hostadapter mptbase

    alias scsi_hostadapter1 mptscsih




  9. Determine the full path to the initial ramdisk image you are going to rebuild. The initial ramdisk will be located in /boot. List the directory:

    ls /boot

    You see a file with a name similar to initrd-2.6.9-42.EL.img. In this case, the full path to this file is /boot/initrd-2.6.9-42.EL.img.



  10. Determine the kernel version to use for rebuilding the initial ramdisk image. Each installed kernel has its own folder in /lib/modules. List the directory:

    ls /lib/modules

    You see a folder with a name similar to 2.6.9-42.EL.



  11. Rebuild the ramdisk with the following command (replacing the path to the initial ramdisk image, and the kernel version with the ones you determined in the previous two steps. If there were multiple options, choose the newest ones, or check /etc/grub.conf to see which version is in use. Be sure the version number in the initial ramdisk image path matches the kernel version.):

    mkinitrd -v -f /boot/initrd-2.6.9-42.EL.img 2.6.9-42.EL

    Explanation of this command:




    1. mkinitrd: Make initial ramdisk


    2. -v: Be verbose


    3. -f: Force overwrite if file already exists (you want to replace the existing file)


    4. /boot/initrd-2.6.9-42.EL.img: Path to the file to write (which is already pointed to by /etc/grub.conf)


    5. 2.6.9-42.EL: Kernel version to use, which tells mkinitrd where to find the modules to include.



  12. Reboot.

    Important: After you have booted the system successfully and determined it is working as expected, remember to delete the snapshot you created in step 1.
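Steps 5, 7, and 8 can also be scripted instead of edited by hand. A sketch, written against a scratch directory so it can be dry-run safely; inside the rescue chroot you would operate on the real /etc and /boot paths instead, and the fstab line here is purely illustrative:

```shell
ROOT=$(mktemp -d)                  # scratch stand-in for the chroot's /
mkdir -p "$ROOT/etc" "$ROOT/boot/grub"

# Hypothetical fstab content, for illustration only.
echo '/dev/hda1  /  ext3  defaults  1 1' > "$ROOT/etc/fstab"

# Step 5: swap /dev/hda for /dev/sda in the files that reference the disk.
for f in etc/fstab boot/grub/device.map boot/grub/grub.conf; do
    [ -f "$ROOT/$f" ] && sed -i 's|/dev/hda|/dev/sda|g' "$ROOT/$f"
done

# Step 7: empty out /etc/modules.conf.
: > "$ROOT/etc/modules.conf"

# Step 8: replace /etc/modprobe.conf with the three required aliases.
cat > "$ROOT/etc/modprobe.conf" <<'EOF'
alias eth0 pcnet32
alias scsi_hostadapter mptbase
alias scsi_hostadapter1 mptscsih
EOF

cat "$ROOT/etc/fstab"
```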

Product Versions
VMware Converter 3.0.x
Keywords
guest OS; RHEL 4
Last Modified Date: 09-12-2008
ID: 1002402

Wednesday, 14 January 2009

Why doesn't mindi work with RHAS 2.1, RHEL 3, Fedora Core 2 or RedHat 9 ?

It seems that the version of tar used at that time is picky about symlinks, and exits with an error when it finds .. in paths, which is the case for gawk. A workaround is to issue:
# ln -sf /bin/gawk /usr/bin

Same may happen with smbmount
# ln -sf /usr/sbin/smbmount /usr/bin

and loadkeys:
# ln -sf /bin/loadkeys /usr/bin

Another possibility is to upgrade your tar to version 1.15 instead.

Install mondo archive on a 64 bit Linux

If you have all of the dependencies downloaded and installed (afio, mindi, mkisofs, gzip) but you still receive the following message when trying to install Mondo:

libnewt.so.0.52 is needed by mondo-2.0.8-1.fc5.i386

It complains that newt is missing, but it is installed:

rpm -ivh newt-0.52.2-6.rpm
package newt-0.52.2-6 is already installed

SOLUTION:
=========
Dependencies between i386 packages and x86_64 packages are tricky.
Rebuild an x86_64 package from the src.rpm:
rpmbuild --rebuild mondo-2.2.7-1.rhel3.src.rpm (the x86_64 version)
When you checked earlier:
package newt-0.52.2-6 is already installed (this refers to the i386 version)

After installing the dependencies, install mondo:

rpm -Uvh /usr/src/redhat/RPMS/x86_64/mondo-2.2.7-1.rhel3.x86_64.rpm
rpm -Va mindi mondo mindi-busybox

RHEL 3 network card DHCP not working on VMWare ESX 3.5

If you run Redhat Enterprise Linux 3 (RHEL 3) in a virtual machine using VMWare ESX 3.5, network interfaces will work if you assign a static IP address to each of them, but if you use DHCP, it will not work, showing a message like:

no link present. Check Cable?

Solution:
======
This is a known issue for VMWare, published on Knowledgebase Article 977:

Edit (for eth0):
/etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0
check_link_down() {
return 1;
}
BOOTPROTO=dhcp

Then execute: # ifup eth0

Repeat the process for every NIC that needs DHCP.
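When several NICs need the workaround, the same edit is easy to script. A sketch writing to a scratch file so it can be dry-run; on the real VM you would point CFG at /etc/sysconfig/network-scripts/ifcfg-ethN for each interface:

```shell
CFG=$(mktemp)                       # stand-in for ifcfg-eth0

# Write the KB 977 workaround: overriding check_link_down makes the
# initscripts ignore the (falsely reported) missing link and run DHCP anyway.
cat > "$CFG" <<'EOF'
DEVICE=eth0
check_link_down() {
return 1;
}
BOOTPROTO=dhcp
EOF

grep BOOTPROTO "$CFG"
```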

Friday, 2 January 2009

Activation of SSH on VMWare ESXi server

How can I SSH to an ESXi hosts?

By default this isn’t possible, but there’s a way to get it working; just do the following:

- Go to the ESXi console and press alt+F1
- Type: unsupported
- Enter the root password
- At the prompt type “vi /etc/inetd.conf”
- Look for the line that starts with “#ssh” (you can search with pressing “/”)
- Remove the “#” (press the “x” if the cursor is on the character)
- Save “/etc/inetd.conf” by typing “:wq!”
- Restart the management service “/sbin/services.sh restart”

Done!