Sunday, February 22, 2009

Configuring ESX 3.5 with Openfiler 2.3

About me: I'm a VCP and I like to test things.

Objectives:



  • Install and configure Openfiler to achieve the following goals

  • Management and production traffic on network 192.168.1.x

  • iSCSI interface and iSCSI traffic on network 10.10.10.x

  • Isolate management and iSCSI traffic

  • Some number of LUNs available for the ESX hosts

  • Configure ESX to use the LUNs on the isolated 10.10.10.x network


Pre-Requisites



  • Computer with RAID 5 and 3 Ethernet controllers for Openfiler

  • ESX host with at least 6 Ethernet controllers; local disks for the ESX OS are fine. I prefer 8 NICs in mine when doing a DMZ, for as much physical isolation as possible, though it can be done with VLANs instead.

  • 2 switches for further isolation; this could also be done with VLANs.

  • ESX licensing that supports use of iSCSI-based storage


Setting up network



  • Take one switch and plug in 4 of your ESX NICs and one SAN NIC (production network)

  • Take the other switch and plug in 2 of your ESX NICs and 2 of your SAN NICs (iSCSI network)

  • Have a workstation on the production network so you can reach Openfiler over HTTPS and connect the Virtual Infrastructure client to the ESX host for configuration purposes.


Installing Openfiler - the documentation varies, even on Openfiler's own website.



  • The only thing that is really important is that you partition manually and leave free space before finalizing the partitions.

  • Basically, the problem is that if you follow this document http://www.openfiler.com/learn/how-to/graphical-installation, your root partition takes up all the space, leaving you unable to create volumes within Openfiler. Maybe you wanted that, but I didn't.

  • This document is better: http://wwwold.openfiler.com/docs/install/#id2674208. It leaves you disk space if you have one giant RAID array that you want to use.

  • Follow the second document, and at this point configure only your management NIC, which is on 192.168.1.x.

  • Remove your install disc and reboot.

  • Log in to Openfiler via HTTPS.
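To make the free-space point concrete, a manual partition plan for one big RAID 5 array might look like the sketch below. The sizes and device names are illustrative, not taken from either document; the one thing that matters is the unallocated remainder.

```
/dev/sda1   /boot   100 MB   ext3
/dev/sda2   swap      2 GB
/dev/sda3   /         8 GB   ext3
remainder:  left unallocated - Openfiler turns this free space into
            physical volumes and volume groups from the web GUI later
```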


Now the configuration. You'll have to ask yourself a few questions before you get started so that you set things up correctly, though you can always blow the volumes away and start over as long as you're not already using them. Questions like: how many volumes do I want to create, how many LUNs, will all my LUNs fit into the volume, and so on. Just some disk-space sizing questions. Since we're really just using this for testing, you can follow the process once or twice, then adjust your goals and start over. So, configuring Openfiler.

Use remaining free space to create physical storage



  • Log in to the management console via HTTPS

  • Click on Volumes

  • On the right, click on Block Devices

  • Click on the Edit Disk link on the left

  • Scroll down to the bottom to see how much disk space you can use. Let's make this easy for now and just use all the space (it does this by default). Use primary mode with the physical volume partition type.

  • Select Create

  • It now shows you a pretty pie chart of your disk and its usage.


Create your volume group



  • In the right pane, select Volume Groups

  • Scroll down to Create a new volume group

  • I named mine vmstorage

  • Check the box to select the disk you just created

  • Click Add volume group
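Under the hood, Openfiler's web GUI is driving standard LVM2. If you have shell access to the filer, you can sanity-check what the GUI just did; this is optional and assumes you're comfortable at the Openfiler console.

```shell
# List physical volumes - the partition created in the previous
# step should show up here
pvdisplay

# Show the new volume group - 'vmstorage' should appear with the
# size you expect
vgdisplay vmstorage
```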


Configure iSCSI NICs and bond them



  • Click on the System button at the top

  • Click on the Network Setup button on the right

  • Scroll down and find Network Interface Configuration

  • Locate the 2 NICs that aren't assigned and click Configure to the right of them

  • Configure them to be on the 10.10.10.x network

  • Once both are configured, click Create bonded interface below the network interface configuration

  • Select your two cards on the 10.10.10.x network and select Continue

  • Configure the IP and subnet mask; leave the bond options at their defaults. Review them if you want clarification on their purposes.

  • Reboot Openfiler

  • Go back to the System button and then Network Setup on the right

  • Scroll down to Network Interface Configuration and verify that the 10.10.10.x NICs are bonded. If not, work it out; if so, continue. (This isn't imperative; it's just a failover mechanism. You can also load balance instead, depending on the load your ESX host will put on it. I just can't stop doing things all the way.)
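If you'd rather verify the bond from the shell than trust the GUI page, the Linux bonding driver reports its state under /proc. The interface name bond0 is an assumption; use whatever name Openfiler assigned.

```shell
# Show bonding mode, link status, and the enslaved NICs
cat /proc/net/bonding/bond0

# Confirm the bond holds the 10.10.10.x address
ifconfig bond0
```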


Add a Volume



  • This allows you to split up your volume group into different LUNs. There are multiple ways to do this. I just create one big volume that takes all of the storage in the volume group.

  • Click on the volumes button on the top

  • Click on add volume button on the right

  • Verify that the vmstorage volume group is listed

  • Name it, describe it, allocate space to it

  • The file system type needs to be set to iSCSI

  • Select create

  • It takes you back to the same page and lists the volume you just created.
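For the curious, the Add Volume step is roughly an lvcreate under the covers. A sketch of the equivalent commands, assuming the vmstorage group from earlier and a hypothetical volume name vmlun0:

```shell
# Carve one logical volume out of all remaining space in the VG
lvcreate -l 100%FREE -n vmlun0 vmstorage

# Verify the new volume exists and check its size
lvdisplay /dev/vmstorage/vmlun0
```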


Network Access Configuration



  • This part is where you say which hosts have access to the storage

  • Click on the System tab at the top

  • Scroll down to Network Access Configuration

  • I named mine ESXACCESS

  • Specify the network of the ESX host's iSCSI interface, not its specific IP: 10.10.10.0 with mask 255.255.255.0

  • Specify the subnet mask

  • Specify the share type

  • If you have multiple ESX hosts for HA or VMotion, you can add them as well.

  • Click Update

  • Now if you scroll down you can see the 10.10.10.x network, which can later be given access to your LUN.


Enable the iSCSI target server



  • Click on the Services tab at the top

  • Locate iSCSI target server and enable it


Configure iSCSI targets



  • Click on the Volumes tab at the top

  • Click on iSCSI Targets on the right

  • Verify that the Target Configuration tab on the top left is selected

  • You'll notice under Target IQN that the name is auto-populated; it is best to leave this alone

  • Just hit the Add button under Add new iSCSI target

  • Now click on the LUN Mapping tab if you're not already there

  • Find the LUN we created earlier and hit the Map button to the right of its description

  • Now click on the Network ACL grey tab at the top

  • Find the network access entry we created earlier and change it to Allow

  • Click on Update - this takes the LUN we created and shares it only to the ESX hosts on the 10.10.10.x network.

  • I haven't used CHAP yet because these boxes are physically separated, but it is extra security if needed. I won't cover configuring CHAP on the ESX host either.
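Openfiler 2.3 implements its targets with iSCSI Enterprise Target (ietd), so the GUI steps above boil down to a couple of plain-text config entries. This is a sketch from memory, not something you need to edit by hand; the IQN suffix and device path are made up, and your file locations may differ.

```
# /etc/ietd.conf - one target with the LUN mapped to it
Target iqn.2006-01.com.openfiler:tsn.example
    Lun 0 Path=/dev/vmstorage/vmlun0,Type=blockio

# /etc/initiators.allow - the network ACL set to "allow" in the GUI
iqn.2006-01.com.openfiler:tsn.example 10.10.10.0/24
```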


Openfiler wrap-up.

OK, basically we just carved some storage off of the hard disk and set it up as a LUN that can be targeted by the ESX host. We're now ready to configure the ESX host to use the LUN.

Configuring the ESX host to use the Openfiler LUN



  • Use the Virtual Infrastructure client

  • Connect to your ESX host's VirtualCenter, or to a single host if it's licensed for iSCSI

  • Figure out which 2 NICs attached to your ESX host you're going to use for iSCSI and plug them into the same switch as the 2 NICs for the SAN iSCSI traffic. To determine which card is which, you can do it the VMware way by knowing PCI IDs and so on, or the easy way: go to the Configuration tab, then Network Adapters, disconnect a cable, and see which one goes down. Once you have the two you want, get them into the other switch.

  • We'll be working mostly from the Configuration tab in the VI client.

  • So - I'm assuming you already have one vswitch set up with 2 NICs attached for failover and a service console port on it that you use for management. Or maybe even a second vswitch only for service console management.

  • Basically, we're going to create an additional vswitch for VMotion and storage. To do that:


Add the VMotion network - remember, we're working in the Configuration tab



  • Click on the Networking button on the left

  • Click on Add Networking on the top right

  • Select VMkernel and hit Next

  • Select the two NICs that are in the switch with the SAN iSCSI NICs

  • Label it whatever you want; VMotion is intuitive though

  • Check the box to enable VMotion

  • Give it the IP we talked about earlier in the network access configuration for Openfiler, on 10.10.10.x

  • Add the subnet mask

  • Hit Next

  • Hit Finish

  • You'll probably get a warning about a default gateway not being there; you don't need one

  • Write down the name of this vswitch, probably vSwitch1
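If you prefer the ESX 3.5 service console to clicking through the wizard, the same VMkernel setup can be sketched with the esxcfg tools. The vswitch name, vmnic numbers, and IP below are assumptions for this lab; substitute your own.

```shell
# Create the vswitch and attach the two iSCSI-side uplinks
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1

# Add a VMotion port group and a VMkernel interface on the iSCSI network
esxcfg-vswitch -A VMotion vSwitch1
esxcfg-vmknic -a -i 10.10.10.20 -n 255.255.255.0 VMotion
```

You'd still flip the enable-VMotion checkbox on the port group from the VI client.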


Configure the iSCSI software initiator



  • Set the firewall to allow iSCSI traffic

  • Click on the Storage Adapters button on the upper left

  • Select your iSCSI software adapter, something like vmhba33

  • Click on the Properties button to the right, just a little lower

  • On the General tab, select Configure

  • Set the status to Enabled

  • Leave everything else and hit OK

  • It's going to tell you that you'll need a service console port on this network. Select Yes.

  • The configuration wizard pops up automatically

  • Select Service Console port

  • Hit Next

  • Set this service console to use the same vswitch as the VMotion network

  • Select Next

  • Leave the name unless you need to change it

  • Give it an IP on the 10.10.10.x network

  • Hit Next

  • Hit Finish

  • This takes you back to your iSCSI software initiator configuration window

  • Select the Dynamic Discovery tab

  • Click Add

  • Add the IP of the bonded NICs on your Openfiler

  • Hit OK; it may take a bit to log in

  • You can also do your CHAP config here, or static discovery (not covered)

  • Hit OK to finish out this window

  • It is going to prompt for a rescan; select Yes.

  • Now you should have your LUN listed at the bottom.
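The same initiator setup has a service console equivalent on ESX 3.5. A sketch, with the adapter name and target IP as assumptions (check the Storage Adapters screen for your actual vmhba number):

```shell
# Open the firewall for the software iSCSI client and enable the initiator
esxcfg-firewall -e swISCSIClient
esxcfg-swiscsi -e

# Point dynamic discovery at the Openfiler bonded interface, then rescan
vmkiscsi-tool -D -a 10.10.10.1 vmhba33
esxcfg-rescan vmhba33
```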


Add the storage to your datastore



  • Click on the Storage button on the left

  • Click on Add Storage on the top right

  • Choose Disk/LUN

  • Hit Next

  • Select the LUN with the IQN identifier of the Openfiler SAN

  • Click Next

  • Review the disk layout

  • Click Next

  • Name it something like SAN1

  • Click Next

  • Set the maximum file size. I usually choose 256GB at home.

  • Verify and Finish
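A note on that maximum file size prompt: on VMFS3 it is really a block size choice, and the maximum file size is the block size in MB times 256 (so the 256GB pick above corresponds to a 1MB block). The mapping is easy to sanity-check:

```shell
# VMFS3: max file size (GB) = block size (MB) * 256
for bs in 1 2 4 8; do
  echo "${bs}MB block -> $((bs * 256))GB max file"
done
```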


Now feel free to add VMs to the SAN.


You now have the SAN and ESX iSCSI traffic isolated to a network that is only accessible if someone comes along and plugs into the switch. Isolating is good not only for security but also for keeping network traffic on your production LAN down.

I tested this SAN with Microsoft's iSCSI initiator as well, and it works just fine. I did the same thing: 2 NICs in the box, one for the production LAN and one for the iSCSI LAN. I don't think this is as secure as an ESX environment, but it will work.

Monday, February 16, 2009

ESX Deployment Appliance

This one is here mostly so I remember that it exists. Since I don't just do one deployment here or there - I'm always deploying - it is nice to know that these types of things exist. What is it? Well:
EDA is an appliance dedicated to deploying ESX servers fast and easy. It has a script builder to quickly create %post scripts.

Get it and make your deployment process easier.
http://www.vmware.com/appliances/directory/1216

ESX Best Practices

So, you want to deploy ESX? Itching to buy that new hardware? Thinking about all the room and electricity you'll be saving? Such a great thing, going green and feeding your inner geekness! There's always a huge list of considerations before starting a project, but we just want to deploy, right? So, to at least get you started in roughly the right direction, you should go over every best-practices guide you can find: ESX host partitions, partition alignment, performance enhancements, networking performance, VMFS best practices, and so on.

Read them all completely before starting your deployment and you'll be well on your way to having a very nice infrastructure that will last a very long time. This is also a great confidence booster whenever you select that migrate option or want to verify DRS. You'll rest easy knowing that you followed best practices.

Partition Align
Best Practices for ESX Host Partitions
VI3 Performance Enhancements
Best Practices for VMWare ESX Server 3
Networking Performance
VMFS best practices

Common Issues in VMware Infrastructure

What if one were so daring as to think of all the common issues facing ESX administrators on a day to day basis and then compile it into a list that links to KB articles for resolution? Well, then we'd have this vmware wolf character that can't stop putting things out there for the community. I'd like to publicly say thanks!

Common Licensing Issues:
http://www.vmwarewolf.com/common-licensing-issues-in-vmware-infrastructure/

Common Fault Issues:
http://www.vmwarewolf.com/common-fault-issues-in-vmware-infrastructure/

Common Network Issues:
http://www.vmwarewolf.com/common-network-issues-in-vmware-infrastructure/

Common System Management Issues:
http://www.vmwarewolf.com/common-system-management-issues-in-vmware-infrastructure/

Learning ESX

Found some nice little videos so you can learn about the great new features coming in what will be vSphere (ESX 4).

Host profiles - http://download3.vmware.com/vdcos/demos/Hostprofiles_Linked_VC_800x600.html
Vnetwork distributed switch - http://download3.vmware.com/vdcos/demos/Hostprofiles_Linked_VC_800x600.html
Fault tolerance - http://download3.vmware.com/vdcos/demos/Hostprofiles_Linked_VC_800x600.html

ESX/VI3 Backup Applications

Wondering what backup solutions would work for you? Do a comparison at the following link.
http://vmprofessional.com/index.php?content=esx3backups

It is a bit dated and products have been updated since then; specifically, Veeam 3 is about to be out and will cover ESX and ESXi. There are multiple things to consider: full image backups or not, VM or physical, agentless or with an agent, etc.

VirtualCenter LogFiles

Recently I noticed that my VirtualCenter server, which is virtualized, was running out of space on C:\. I thought, hmm, what is this? So I looked into it and found that the VirtualCenter log files were building up and starting to take up some space. Since I had also installed Update Manager on that same virtual machine, it had a 25GB D:\ that was only using about 18GB of space, so I decided to use the D:\ for log files. How did I do it? I searched Google and came up with the following, which is from VMWARE WOLF.

Edit “vpxd.cfg”. It’s located here: %AllUsersProfile%\Application Data\VMware\VMware VirtualCenter\.
Add the following lines in the “<config>” section and change the path accordingly:
<log>
  <directory>c:\VC_Logs</directory>
</log>


Props:

http://www.vmwarewolf.com/which-virtual-center-log-file/
ESX Hardware Troubleshooting -
Think you're having a hardware issue, but just can't seem to prove it to the hardware tech over the phone? I've noticed that hardware techs usually ask what OS you're running, and when you say ESX or Hyper-V they're usually at a loss as to how to successfully determine what is causing the issue. And since they're the hardware vendor, they usually look for the least expensive way out, which is usually: sorry, but it appears to be an OS misconfiguration.

The answer to this is some sort of Linux live CD. Boot to it and run diagnostics or test things out. I didn't think of this on my own - I rarely think of things on my own; we've got Google for things like that. So I looked into my magic Google ball and it found me the following link
http://www.vmwareinfo.com/2008/12/great-vmware-troubleshooting-tip.html
- because I just wasn't well-rounded enough to think of it on my own.
ESX to Hyper-V -
So you have your ESX or VMware Server set up and you've decided you want to test out Hyper-V or Microsoft Virtual Server. You find that you really don't want to recreate your test virtual machines. Well, how about just converting them over instead? Sure would be great if there was a tool for that, right?
Convert your VMDK files to VHD
http://vmtoolkit.com/files/folders/converters/entry8.aspx

Well, now you've decided that you didn't correctly size the disks on your newly created VHD. Hmm - I know in ESX I could resize a disk using the Enterprise Converter, but how would I do that on my VHD file?
http://vmtoolkit.com/files/folders/converters/entry87.aspx

I haven't used the aforementioned tools, but that doesn't mean they aren't great for you to try. If you've found this post useful or would like to comment on it, feel free to post a follow-up comment.