Tag: CentOS

  • HOWTO: Upgrade from Subversion 1.4 to 1.6 on CentOS 5

    How to upgrade the packages and existing repositories from Subversion 1.4 to 1.6.6 on CentOS 5.

    # File: Subversion_1.6_Upgrade.notes
    # Auth: burly
    # Date: 12/01/2009
    # Refs: http://svnbook.red-bean.com/nightly/en/index.html
#       http://dev.antoinesolutions.com/subversion
    # Desc: Upgrading from subversion 1.4 to 1.6.6 on CentOS 5
#       NOTE: These instructions are actually fairly generic
#       with regard to the version of SVN you are upgrading
    #       from/to. At the time of writing, it just happened
    #       to be 1.4 -> 1.6.6
    
    # Backup each repository
svnadmin dump /srv/svn/<repo> > /backup/svn/<repo>_20091201_rXXXX.dump
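
# A minimal sketch for dumping every repository under
# /srv/svn in one pass (assumes one directory per repo;
# svnlook youngest supplies the revision for the filename):
for repo in /srv/svn/*; do
    name=$(basename "$repo")
    rev=$(svnlook youngest "$repo")
    svnadmin dump "$repo" > "/backup/svn/${name}_20091201_r${rev}.dump"
done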
    
    # Backup any hooks or configuration files in 
# /srv/svn/<repo>/hooks and /srv/svn/<repo>/conf
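# e.g. (paths are illustrative; adjust to your backup layout):
cp -a /srv/svn/<repo>/hooks /backup/svn/<repo>_hooks_20091201
cp -a /srv/svn/<repo>/conf  /backup/svn/<repo>_conf_20091201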
    
    # Setup yum to allow the package to come in from
    # RPMforge (must setup RPMforge repo first).
vim /etc/yum.repos.d/CentOS-Base.repo
    
    # Add the following line at the end of each section
# in the CentOS-Base.repo
    exclude=subversion mod_dav_svn
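
# For example, the [base] section would end up looking
# something like this (illustrative, not verbatim):
# [base]
# name=CentOS-$releasever - Base
# mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
# gpgcheck=1
# exclude=subversion mod_dav_svn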
    
    # Restart the yum update daemon
    service yum-updatesd restart
    
    # Upgrade subversion
    yum upgrade subversion
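
# Sanity-check that the new version is in place (optional,
# not part of the original notes):
svn --version | head -n 1
svnadmin --version | head -n 1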
    
    # For each repository
    #    delete the existing repo
    rm -rf /srv/svn/<repo>
    
    # Create a new repo
    svnadmin create /srv/svn/<repo> --fs-type fsfs
    
    # Import the data
svnadmin load /srv/svn/<repo> < /backup/svn/<repo>_20091201_rXXXX.dump
    
    # Restore any hooks or configuration files in 
    # /srv/svn/<repo>/hooks and /srv/svn/<repo>/conf
    
    # If you are using Trac, you'll need to resync the repo
    trac-admin /srv/trac/<repo> resync
    
  • HOWTO: Migrate an Existing RAID Array to a New Array

    How to migrate from an existing software RAID 1 array to a new RAID 1 array on CentOS 5.5

    # File: Migrate_to_new_RAID_Array_on_CentOS_5.5.notes
    # Auth: burly
    # Date: 11/20/2010
    # Refs: 
# Desc: How to migrate from one RAID 1 array to a new one
    #       on CentOS 5.5
    
    # I booted from a Knoppix CD to do this. In retrospect,
    # I should have used a CentOS LiveCD because the
    # tooling, versions, and layout of Knoppix are different 
    # which caused some issues. Also, because my OS is x86-64
    # but Knoppix is x86, I could not chroot into my system 
# environment, which is ultimately required to create the
    # initrd files.
    
    # Boot from the Knoppix CD and drop to a shell
    
    # Start up the existing RAID Array (one of the 2 drives
    # from the existing RAID 1 array was on sdc for me)
    mdadm --examine --scan /dev/sdc1 >> /etc/mdadm/mdadm.conf
    mdadm --examine --scan /dev/sdc2 >> /etc/mdadm/mdadm.conf
mdadm --examine --scan /dev/sdc3 >> /etc/mdadm/mdadm.conf
    /etc/init.d/mdadm start
    /etc/init.d/mdadm-raid start
    
    # Partition first SATA drive in whatever partition numbers
    # and sizes you want. Make sure all partitions that 
# will be in a RAID array use ID type "fd" for RAID
    # autodetect and type "82" for swap. Make sure /boot
    # is marked with the bootable flag
    fdisk /dev/sda
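
# An illustrative layout (sizes are examples, not from the
# original notes):
#   /dev/sda1  ~200MB  type fd  bootable  -> /boot (md0)
#   /dev/sda2    ~4GB  type 82            -> swap
#   /dev/sda3    rest  type fd            -> LVM PV (md1)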
     
    # Repeat for the other disks OR if you are using the
    # identical setup on each, you can use sfdisk to 
    # simplify your life.
    sfdisk -d /dev/sda | sfdisk /dev/sdb
    
    # Create the new boot array
    # NOTE: If you don't use metadata 0.90 (but instead 
    #       1.0 or 1.1) you'll run into problems with grub.
    #       In RAID 1, with metadata 0.90, you can mount
    #       the fs on the partition without starting RAID.
    #       With newer versions of metadata the superblock
    #       for RAID gets written at the beginning of the 
    #       partition where the filesystem superblock
    #       normally would go. This results in the inability
    #       to mount the filesystem without first starting
    #       RAID. In the case of your boot partition, this 
    #       results in the inability to setup grub and thus boot.
    mdadm --create --verbose --metadata=0.90 /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    
    # Copy everything over for /boot
    mkdir /mnt/oldBoot
    mkdir /mnt/newBoot
    mkfs.ext3 /dev/md0
# Mount the old boot partition read-only (the array assembled
# from sdc1; substitute the device name mdadm gave it), then
# mount the new array
mount -o ro /dev/<old_boot_array> /mnt/oldBoot
mount /dev/md0 /mnt/newBoot
    cd /mnt/oldBoot
    find . -mount -print0 | cpio -0dump /mnt/newBoot
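# Note: "-0dump" is not a typo -- it bundles -0 (read
# null-delimited names), -d (create directories), -u (replace
# unconditionally), -m (preserve mtimes), and -p (pass-through)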
    
    # Make the new swap
    mkswap /dev/sda2
    mkswap /dev/sdb2
    
    # Create the new array for LVM. I used metadata
    # 0.90 again for consistency AND because I believe
    # the version of mdadm in CentOS won't handle newer
    # versions of it
    mdadm --create --verbose --metadata=0.90 /dev/md1 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
    
    # Setup LVM2
    pvcreate /dev/md1
    vgcreate vg /dev/md1
    lvcreate -L8G -nroot vg
    lvcreate -L10G -nhome vg
    lvcreate -L250G -nvm vg
    
    # Format the filesystems.
# NOTE: I reduced the reserved space to 1% (default is 5%)
#       for the VM LV to save some space; on large,
#       non-root partitions you don't need all that
#       reserved space.
    mkfs.ext3 /dev/vg/root
    mkfs.ext3 /dev/vg/home
    mkfs.ext3 -m 1 /dev/vg/vm
    
    
    # Copy everything over for /
    mkdir /mnt/oldRoot
    mkdir /mnt/newRoot
mount -o ro /dev/vgOS/lvRoot /mnt/oldRoot
    mount /dev/vg/root /mnt/newRoot
    cd /mnt/oldRoot
    find . -mount -print0 | cpio -0dump /mnt/newRoot
    
    # Copy everything over for /home
    mkdir /mnt/oldHome
    mkdir /mnt/newHome
mount -o ro /dev/vgOS/lvHome /mnt/oldHome
    mount /dev/vg/home /mnt/newHome
    cd /mnt/oldHome
    find . -mount -print0 | cpio -0dump /mnt/newHome
    
# Copy everything over for the VM LV
    mkdir /mnt/oldVM
    mkdir /mnt/newVM
mount -o ro /dev/vgOS/lvVM /mnt/oldVM
    mount /dev/vg/vm /mnt/newVM
    cd /mnt/oldVM
    find . -mount -print0 | cpio -0dump /mnt/newVM
    
    # Remove any existing/stale lines in the mdadm.conf file
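# (the /etc/mdadm.conf copied onto the new root still lists
# the old arrays, e.g.)
vim /mnt/newRoot/etc/mdadm.conf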
    
    # Setup the mdadm config on the new /
    mdadm -Esb /dev/sda1 >> /mnt/newRoot/etc/mdadm.conf
mdadm -Esb /dev/sda3 >> /mnt/newRoot/etc/mdadm.conf
    
    # Update fstab on the new machine to use the new 
    # mount points (e.g. if you changed VolumeGroup or 
    # LogicalVolume names)
    vim /mnt/newRoot/etc/fstab
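
# For example (illustrative; match your own device and LV names):
# /dev/vg/root   /       ext3  defaults  1 1
# /dev/md0       /boot   ext3  defaults  1 2
# /dev/vg/home   /home   ext3  defaults  1 2
# /dev/vg/vm     /vm     ext3  defaults  1 2
# /dev/sda2      swap    swap  defaults  0 0
# /dev/sdb2      swap    swap  defaults  0 0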
    
    # REBOOT TO A CENTOS LIVECD (if you weren't already on one)
    
    # First we chroot
    mkdir /mnt/sysimage
    mount /dev/vg/root /mnt/sysimage
    mount /dev/vg/home /mnt/sysimage/home
    mount /dev/md0 /mnt/sysimage/boot
    mount --bind /dev /mnt/sysimage/dev
    mount -t proc none /mnt/sysimage/proc
mount -t sysfs none /mnt/sysimage/sys
    chroot /mnt/sysimage
    
    # Make a new initrd to boot from
    cd /boot
    mv initrd-2.6.18-194.26.1.el5.img initrd-2.6.18-194.26.1.el5.img.bak
    mkinitrd initrd-2.6.18-194.26.1.el5.img  2.6.18-194.26.1.el5
    
    # Setup grub on both of the drives
    grub
root (hd0,0)
setup (hd0)
root (hd1,0)
setup (hd1)
    quit
    
    # Reboot!
    
  • HOWTO: VMWare Server on CentOS 5.4

    I have a habit of creating .notes files whenever I’m doing system admin type work. I’ve collected a number of these over the years and I refer back to them fairly regularly whether I’m doing something similar or just looking for a specific command. I’ll be placing a bunch of these up here for easier access for me as well as public consumption in case anyone else finds them useful. They will be posted pretty much unedited, so they won’t be in the same “format” as I’ve used in the past, but hopefully they are sufficiently legible :-).

Installation and configuration of VMware Server 2.x on CentOS 5.4 and 5.5. These instructions should mostly work on 5.0-5.6; note, however, that the glibc workaround is only necessary on 5.4 and 5.5. VMware Server is no longer supported by VMware, but I continue to use it until I can upgrade my hardware to be ESXi compatible.

    # File: HOWTO_VMwareServer_on_CentOS_5.4.notes
    # Auth: burly
    # Date: 02/28/2010
    # Refs: http://www.cyberciti.biz/tips/vmware-on-centos5-rhel5-64-bit-version.html
    #       http://sanbarrow.com/vmx/vmx-config-ini.html
    #       http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=844
    #       http://pubs.vmware.com/vi301/resmgmt/wwhelp/wwhimpl/common/html/wwhelp.htm?context=resmgmt&file=vc_advanced_mgmt.11.32.html
    # Desc: Installation of VMware Server 2.0.2 on CentOS 5.4 x86-64
    
    # Download VMware Server 2.x
    
    # Install dependencies
    yum install gcc gcc-c++ kernel-headers kernel-devel libXtst-devel libXrender-devel xinetd
    
    # Install VMware server
    rpm -ivh VMware-server-2.x.x-XXXXX.<arch>.rpm
    
    # Configure VMware server
    vmware-config.pl
    
    # Answer the series of questions. My answers are below:
    Networking: yes
    Network Type: Bridge
    Network Name: Bridged
  (vmnet0 is bridged to eth0)
    NAT: no
    Host-only: no
remote connections port: 902
    http connections: 8222
    https connections: 8333
    Different Admin: yes
    Admin user: <my user account>
VM File Location: /vmware/vms
    VMware VIX API files: Default locations
    
    # ##########################################################
# Deal with the hostd/glibc compatibility issues of VMware
# Server 2.x with CentOS 5.4 - 5.5 (no issues with CentOS 5.3
# and earlier, or with CentOS 5.6). VMware had not addressed
# this as of VMware Server 2.0.2-203138
    
    # Get the necessary glibc file from 5.3
    mkdir ~/vmwareglibc
    cd ~/vmwareglibc
    wget http://vault.centos.org/5.3/os/x86_64/CentOS/glibc-2.5-34.x86_64.rpm
    rpm2cpio glibc-2.5-34.x86_64.rpm | cpio -ivd
    
    # Stop the vmware service and kill any instances of hostd
    service vmware stop
    killall vmware-hostd
    
    # Move the libc file 
    mkdir /usr/lib/vmware/lib/libc.so.6
    mv lib64/libc-2.5.so /usr/lib/vmware/lib/libc.so.6/libc.so.6
    
    # Edit the VMware hostd process script
    vim /usr/sbin/vmware-hostd
    
    # At line 372, before the program is called, insert an
    # empty line and the following
    export LD_LIBRARY_PATH=/usr/lib/vmware/lib/libc.so.6:$LD_LIBRARY_PATH
    
    # Start the vmware service
    service vmware start
    
    # Set the service to run on startup
    chkconfig vmware on
    
    # -----------------------------------------------------------------------------
    #                           Optional Performance Tunings
    # -----------------------------------------------------------------------------
    
    # -------------------------------------
    #    Server-wide Host VMware Settings
    # -------------------------------------
    
    # The following changes are made in /etc/vmware/config
    
# Fit memory into RAM whenever possible, rather than
# ballooning and shrinking it as needed.
    prefvmx.useRecommendedLockedMemSize="TRUE"
    prefvmx.minVmMemPct = "100"
    
# By default, VMware will back the guest's main memory with
# a file the size of the guest's nominal RAM in the working
# directory (next to the vmdk). If we turn this off, then on
# Linux the memory-backing file will be created in the
# temporary directory, while on Windows it will be backed by
# the host's swap file. On Linux hosts, if we turn off named
# file backing AND use a shared-memory filesystem in RAM for
# the temporary directory, we avoid the disk completely
# unless we are out of RAM on the host system.
    mainMem.useNamedFile = "FALSE"
    tmpDirectory = "/dev/shm"
    
# The following changes are made in /etc/sysctl.conf
# Tuning the kernel's memory-overcommit behavior
# (vm.overcommit_memory) and only using swap when physical
# memory has been exhausted (vm.swappiness) helps overall
# performance. dev.rtc.max-user-freq controls how fast a
# virtual machine can set its tick rate. The vm.dirty options
# tune how the VM subsystem commits I/O operations to disk;
# you may not want to tune these values if you do not have a
# stable power source.
    # http://peterkieser.com/technical/vmware-server-issues/
    vm.swappiness = 0
    vm.overcommit_memory = 1
    vm.dirty_background_ratio = 5
    vm.dirty_ratio = 10
    vm.dirty_expire_centisecs = 1000
    dev.rtc.max-user-freq = 1024
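
# Apply the sysctl changes without a reboot (standard sysctl
# usage, not from the original notes):
sysctl -p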
    
    
    # -------------------------------------
    #            Host OS Settings
    # -------------------------------------
    
# In order for the VMware configuration to work properly
    # with shared memory, you'll need to increase the default
    # shared memory size for tmpfs to match the amount of
    # memory in your system. This can be done by
    # editing /etc/fstab
    tmpfs                   /dev/shm                tmpfs   size=8G                    0 0
    
    # In order for the tmpfs changes to take effect, 
    # remount the tmpfs
    mount -o remount /dev/shm
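
# Verify the new tmpfs size took effect (optional):
df -h /dev/shm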
    
    # The following changes are made in /etc/rc.d/rc.local
    
# Read-ahead on the hard drive should be set to an optimal
# value; I have found values between 16384 and 32768 work
# well.
    # http://peterkieser.com/technical/vmware-server-issues/
    blockdev --setra 32768 /dev/md1
    
    # The following items are added as boot-time options
    # to the kernel for the host. To enable these values,
    # add them to /boot/grub/menu.lst at the end of the
    # kernel line.
    
    # On the host operating system, consider using deadline 
    # I/O scheduler (enabled by adding elevator=deadline to
    # kernel boot parameters), and noop I/O scheduler in
    # the guest if it is running Linux 2.6; using the noop 
    # scheduler enables the host operating system to better 
    # optimize I/O resource usage between different virtual machines.
    # http://peterkieser.com/technical/vmware-server-issues/
    elevator=deadline
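
# e.g., the kernel line in menu.lst might end up looking like
# this (illustrative kernel version and root device):
# kernel /vmlinuz-2.6.18-194.el5 ro root=/dev/vg/root elevator=deadline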
    
    # -------------------------------------
    #            Per VM Settings
    # -------------------------------------
    
    # The following changes are made to the guest's vmx file
    
    # If we have enough RAM for all the guests to have their
    # memory in physical RAM all the time, then we can avoid 
# the ballooning (growing/shrinking) to save CPU cycles.
    # Note this will force the VMware hypervisor to swap
    # rather than balloon if it's in need of memory. 
    # Swapping is less desirable than ballooning.
    sched.mem.maxmemctl = 0
    
    # Disable memory sharing for the VM. This prevents the
    # hypervisor from scanning the memory pages for places
    # to de-dup memory across VMs and save space. This scanning
# doesn't come free, however, and if we have enough physical
    # RAM to support all of our VMs, then we don't really need
    # the savings.
    sched.mem.pshare.enable = "FALSE"
    mem.ShareScanTotal = 0
    mem.ShareScanVM = 0
    mem.ShareScanThreshold = 4096
    
    
    # The VMware clock synchronization features are a bit
# problematic. If the guest clock gets behind, then VMware
    # will catch it up by trying to issue all of the missed
    # ticks until it is caught up. However, if the guest gets
    # ahead, then the VMware clock will not bring it back. So,
    # I am going to use ntp on the guest machines. If you have
    # a large number of guests, it's best to setup a local ntpd
    # server to offload some of the traffic from the root pools.
    tools.syncTime = "FALSE"
    
    # When I reboot the host, I want to gracefully stop each
    # VM instead of just powering it off:
    autostop = "softpoweroff"
    
    # -------------------------------------
    #            Guest OS Settings
    # -------------------------------------
    
# The following items are added as boot-time options to
# the kernel for the guest. To enable these values, add
# them to the guest's /boot/grub/menu.lst at the end of the
# kernel line.

# As noted above for the host, use the noop I/O scheduler in
# a Linux 2.6 guest (elevator=noop) so that the host operating
# system can better optimize I/O resource usage between
# different virtual machines.
# http://peterkieser.com/technical/vmware-server-issues/
    elevator=noop
    
    # The following kernel boot parameters will help performance 
# and stability using Linux 2.6 as a guest. ACPI/APIC support
# must be enabled if you plan on using SMP virtualization in
# the guest. Setting the clock to PIT has been shown to give
# better timekeeping than other clock sources; your mileage
# may vary.
    # Setting elevator to noop will enable the host operating 
    # system to better schedule I/O as it has an overview of the
    # whole system as opposed to just one virtual machine.
    # http://peterkieser.com/technical/vmware-server-issues/
    
    # The current (March 3, 2010) guidance from VMware is that 
# clocksource is no longer required in CentOS 5.4. Use this
# guide to determine what timekeeping settings you need
# for your Guest OS:
    # http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1006427
    
    # CentOS 5.4 x86_64 Guest
    divider=10 elevator=noop
    
  • Installing Mercurial and Hosting Repositories with CentOS

    In the previous post I discussed some things to consider when publishing mercurial repositories. Here I explain the steps I took to setup and configure Mercurial repository publishing via SSH and HTTPS on CentOS 5.5 x64.

    Prerequisites

    These are the prereqs for my setup. Most of these instructions will probably work on other distros and web servers, but the exact commands and details will vary.

    • CentOS 5.5
    • Apache 2.x (Installed and Configured)
    • root or sudo access on the server
    • Internet Connection during installation (yes, some of us do development on offline networks)

    Download and Install Packages

    The mercurial packages that are available directly out of the CentOS repositories and RPMForge for CentOS 5.5 were too old for my liking. I downloaded the latest RPMs (at the time of writing) directly. Update 2011/03/30: Mercurial 1.8.x is now in RPMForge for EL5 so you can get it directly via yum.

    sudo yum install python-docutils python-setuptools mercurial mercurial-hgk mercurial-ssh
    
I ran into a dependency resolution issue because I have the EPEL repo enabled with a higher priority than RPMForge. I added the following line to the [epel] entry in /etc/yum.repos.d/epel.repo to force yum to look for mercurial elsewhere so it pulled from RPMForge.
    exclude=mercurial,mercurial-hgk,mercurial-ssh
    

    Create an hg Group

I find it useful to create a group for all of the version control users on the server because I want to support both HTTPS and SSH access. If you are using HTTPS access only, then you don't need a group here since it will all be done via apache. I used hg here, but you could use anything you want, like vcs or versioncontrol.

    sudo groupadd hg
    # Add all committers to the hg group using usermod
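# e.g., for each committer (<username> is a placeholder):
sudo usermod -a -G hg <username>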
    

    Create and Configure a Repo

    Note that I go through a few extra hoops here to allow multiple-committers since I want to support both SSH and HTTPS access.

    # Create your new Hg repo (or copy the existing one) in /srv/hg
sudo hg init /srv/hg/<reponame>
    
    # Make the repo writable by the group for a multiple-committers environment
cd /srv/hg/<reponame>
    sudo chgrp hg .hg .hg/* .hg/store/*
    sudo chmod g+w .hg .hg/* .hg/store/*
    sudo chmod g+s .hg .hg/* .hg/store/data
    
    # Give ownership of Hg repos to the apache user
    sudo chown -R apache /srv/hg/
    
    # Setup your .hgignore file to handle files you don't want under version control
    
    # Add your project files to the repo
    hg add
    
    # Commit your project
    hg commit
    

    Setup HgWeb

There is great documentation on how to do this on the Mercurial Wiki, but here are the steps I used.

    # Setup Hgweb for HTTP access using mod_python
    sudo mkdir /etc/hgweb
    sudo mkdir /var/hg
    
    sudo vim /var/hg/hgwebdir.py
    ########## BEGIN COPY BELOW THIS LINE ###########
    #!/usr/bin/env python
    #
    import cgitb
    cgitb.enable()
    
    from mercurial.hgweb.hgwebdir_mod import hgwebdir
    from mercurial.hgweb.request import wsgiapplication
    import mercurial.hgweb.wsgicgi as wsgicgi
    
    def make_web_app():
        return hgwebdir("/etc/hgweb/hgwebdir.conf")
    
    def start(environ, start_response):
    toto = wsgiapplication(make_web_app)
    return toto(environ, start_response)
    ############ END COPY ABOVE THIS LINE ############
    
    sudo vim /etc/hgweb/hgwebdir.conf
    ######### COPY BELOW THIS LINE #############
    [collections]
    /srv/hg = /srv/hg
    
    [web]
    style = gitweb
    allow_archive = bz2 gz zip
    contact = Your Name, your.email@address.com
    allow_push = *
    push_ssl = true
    ######## END COPY ABOVE THIS LINE ###########
    

    Setup modpython_gateway

We need a dynamic landing page to handle access to the collection of all repos available on the box, rather than setting up a static landing page for each and every repo we publish. I'm using the ever popular modpython_gateway script for this.

    wget http://www.aminus.net/browser/modpython_gateway.py?format=raw
    sudo mv modpython_gateway.py\?format\=raw /var/hg/modpython_gateway.py
    
    # IMPORTANT! Only use the -c flag for the FIRST person you add, drop it for every add after that
    sudo htdigest -c /etc/hgweb/users.htdigest "Zach's Mercurial Repository" burly
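
# Subsequent users are added WITHOUT the -c flag (which would
# recreate the file and wipe existing users), e.g.:
sudo htdigest /etc/hgweb/users.htdigest "Zach's Mercurial Repository" <username>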
    
    sudo vim /etc/httpd/conf.d/hgweb.conf
######### BEGIN COPY BELOW THIS LINE ########
<Location /hg>
        PythonPath "sys.path + ['/var/hg']"
        SetHandler mod_python
        PythonHandler modpython_gateway::handler
        PythonOption wsgi.application hgwebdir::start

        AuthType Digest
        AuthName "Zach's Mercurial Repository"
        AuthDigestProvider file
        AuthUserFile "/etc/hgweb/users.htdigest"

        Require valid-user

        # Redirect all non-SSL traffic automagically
        RewriteEngine On
        RewriteCond %{HTTPS} off
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
</Location>
####### END COPY ABOVE THIS LINE ##########
    

    Configure Apache

We need to enable WSGI handling in apache and make sure we have all the right file permissions. If you don't have a CA cert, you'll need to set one up if you want to use SSL. Here is how to do that on CentOS.

    sudo vim /etc/httpd/conf/httpd.conf
    # Add the Following Lines in their Respective Locations in the Conf
    ####### BEGIN COPY BELOW THIS LINE ########
    LoadModule wsgi_module modules/mod_wsgi.so
    AddHandler wsgi-script .wsgi
    ####### END COPY ABOVE THIS LINE #########
    
    sudo chown -R root.root /etc/hgweb
    sudo chown apache.apache /etc/hgweb/users.htdigest
    sudo chmod 400 /etc/hgweb/users.htdigest
    sudo chown -R apache.apache /var/hg
    
    sudo service httpd restart
    

    Use your Repo!

You should now be able to view your repositories by pointing your browser at http://yermachinenameorip/hg. You should be prompted for the username and password created in your htdigest file from earlier (i.e. not your shell account credentials). Note that due to the rewrite rule in our hgweb.conf file, you should automatically be redirected to the SSL version of the site.

You should now be able to clone your repo over HTTPS with your htdigest credentials:

hg clone https://<username>@yermachinenameorip/hg/reponame

or over SSH with your shell account credentials:

hg clone ssh://<username>@yermachinenameorip//srv/hg/reponame

    Maintenance

    #--------------------------
    # Create a new project
    #--------------------------
    hg init /srv/hg/<reponame>
    
    # Make the repo writable by the group for a multiple-committers environment
    cd /srv/hg/<reponame>
    sudo chgrp hg .hg .hg/* .hg/store/*
    sudo chmod g+w .hg .hg/* .hg/store/*
    sudo chmod g+s .hg .hg/* .hg/store/data
    
# Give ownership of Hg repos to the apache user
    sudo chown -R apache /srv/hg/<reponame>
    
    hg add
    hg commit
    
    #--------------------------
    # Add a new user
    #--------------------------
useradd -G hg <username>
passwd <username>

# Set the digest password for the user (no -c flag, since
# the file already exists)
htdigest /etc/hgweb/users.htdigest "Zach's Mercurial Repository" <username>
    

    Additional Resources