Sunday, February 22, 2009

Configuring ESX 3.5 with Openfiler 2.3

About me: I'm a VCP and I like to test things.

Objectives:



  • Install and configure Openfiler to achieve the following goals

  • Management and production traffic on network 192.168.1.x

  • iSCSI interface and iSCSI traffic on network 10.10.10.x

  • Isolate management and iSCSI traffic

  • Some number of LUNs available to the ESX hosts

  • Configure ESX to use the LUNs on the isolated network 10.10.10.x


Pre-Requisites



  • Computer with RAID 5 and 3 Ethernet controllers for Openfiler

  • ESX host with at least 6 Ethernet controllers; local disks for the ESX OS are fine. I prefer 8 Ethernet ports in mine when doing a DMZ, for as much physical isolation as possible; however, it can be done with VLANs instead.

  • 2 switches for further isolation; could be VLANs instead.

  • ESX licensing that supports use of iSCSI-based storage


Setting up the network



  • Take one switch and plug in 4 of your ESX NICs and one SAN NIC (production network)

  • Take the other switch and plug in 2 of your ESX NICs and 2 of your SAN NICs (iSCSI network)

  • Put a workstation on the production network so you can reach Openfiler over HTTPS and connect the Virtual Infrastructure Client to the ESX host for configuration.


Installing Openfiler - The documentation seems to differ, even on Openfiler's website.



  • The only thing that is really important is that you partition manually and leave free space before finalizing the partitions.

  • Basically, the problem is that if you follow this document http://www.openfiler.com/learn/how-to/graphical-installation, your root partition takes up all the space, leaving you nothing to create volumes with inside Openfiler. Maybe you wanted that, but I didn't.

  • This document is better http://wwwold.openfiler.com/docs/install/#id2674208 as it leaves you free disk space if you have one giant RAID array that you want to use.

  • Follow the second document and configure only your management NIC at this point, which is on 192.168.1.x

  • Remove your install disc and reboot

  • Log in to Openfiler via HTTPS


Now the configuration – you'll have to ask yourself a few questions before you get started so that you set it up correctly, but you can always blow things back out as long as you're not already using them. Questions like: how many volumes do I want to create, how many LUNs, will all my LUNs fit into the volume, and so on – just disk-space sizing questions. Since we're really just using this for testing for now, you can follow the process once or twice, then adjust your goals and start over. So, configuring Openfiler.

Use the remaining free space to create physical storage



  • Log in to the management console via HTTPS

  • Click on Volumes

  • On the right, click on Block Devices

  • Click on the Edit Disk link on the left

  • Scroll down to the bottom to see how much disk space you can use. Let's make this easy for now and just use all the space (it does this by default). Use primary mode with the physical volume partition type.

  • Select Create

  • It now gives you a pretty pie chart of your disk and its usage.


Create your volume group



  • In the right pane, select Volume Groups

  • Scroll down to Create a New Volume Group

  • I named mine vmstorage

  • Check the box to select the physical volume you just created

  • Click Add Volume Group
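
Under the hood, Openfiler's web UI is driving standard Linux LVM. A rough CLI equivalent of the last two sections (the partition name /dev/sda4 is an assumption – adjust for your own disk layout) would be:

```shell
# Mark the new partition as an LVM physical volume
# (the UI did this when you created the "physical volume" partition type)
pvcreate /dev/sda4

# Create the volume group the UI calls "vmstorage" on that physical volume
vgcreate vmstorage /dev/sda4

# Confirm the group exists and see its free space
vgdisplay vmstorage
```

You shouldn't need to touch the console for this; it's just useful to know what the pie chart and the volume group page actually represent.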


Configure the iSCSI NICs and bond them



  • Click on the System button on the top

  • Click on the Network Setup button on the right

  • Scroll down and find Network Interface Configuration

  • Locate the 2 NICs that aren't assigned and select Configure to the right of them

  • Configure them to be on the 10.10.10.x network

  • Once both are configured, below Network Interface Configuration click on Create Bonded Interface

  • Select your two cards on the 10.10.10.x network and select Continue

  • Configure the IP and subnet mask – leave the bond options at the defaults. Review them if you want clarification on their purposes.

  • Reboot Openfiler

  • Go back to the System button and then Network Setup on the right

  • Scroll down to Network Interface Configuration and verify that the 10.10.10.x NICs are bonded. If not, work it out; if so, continue. (This isn't imperative, it's just a failover mechanism. You can also load balance instead, depending on the load your ESX host will put on it. I just can't stop doing things all the way.)
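
For reference, what the UI writes is a standard Linux bonding setup. A sketch of the resulting interface configs (file names, device names, and the IP are assumptions – Openfiler may lay them out slightly differently):

```shell
# /etc/sysconfig/network-scripts/ifcfg-bond0 (sketch)
#   DEVICE=bond0
#   IPADDR=10.10.10.5
#   NETMASK=255.255.255.0
#   ONBOOT=yes
#   BONDING_OPTS="mode=active-backup miimon=100"   # failover, not load balancing

# Each slave NIC (e.g. eth1, eth2) points at the bond:
#   DEVICE=eth1
#   MASTER=bond0
#   SLAVE=yes
#   ONBOOT=yes

# After the reboot, verify which slave is active and that both links are up:
cat /proc/net/bonding/bond0
```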


Add a Volume



  • This allows you to split your volume group into different LUNs. There are multiple ways to do this; I just create one big volume that takes all of the storage in the volume group.

  • Click on the Volumes button on the top

  • Click on the Add Volume button on the right

  • Verify that the vmstorage volume group is listed

  • Name it, describe it, allocate space to it

  • The filesystem type needs to be set to iSCSI

  • Select Create

  • It takes you back to the same page and lists the volume you just created.
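
Again, this is just LVM underneath. A CLI sketch of the same step (the volume name lun1 is an assumption; name it whatever you used in the UI):

```shell
# Carve one logical volume out of all the free space in the vmstorage group;
# with the iSCSI volume type, Openfiler exports the block device raw (no filesystem)
lvcreate -l 100%FREE -n lun1 vmstorage

# This is the device that will eventually be presented to ESX as a LUN
ls -l /dev/vmstorage/lun1
```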


Network Access Configuration



  • This part is where you say which hosts have access to the storage

  • Click on the System tab on the top

  • Scroll down to Network Access Configuration

  • I name mine ESXACCESS

  • Specify the network of the ESX host's iSCSI interfaces, not a specific IP: 10.10.10.0

  • Specify the subnet mask: 255.255.255.0

  • Specify the share type

  • If you have multiple ESX hosts for HA or VMotion, you can add them as well.

  • Click Update

  • Now if you scroll down, you can see that the 10.10.10.x network can later be given access to your LUN.
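
The access entry above is just a network/mask pair, so any initiator whose address falls inside 10.10.10.0/24 will be allowed. A quick illustration of which addresses that covers (the two host IPs are examples):

```python
from ipaddress import ip_address, ip_network

# The network access entry we created in Openfiler
esx_access = ip_network("10.10.10.0/255.255.255.0")

# The ESX host's iSCSI-side address is inside the allowed network...
print(ip_address("10.10.10.20") in esx_access)   # True

# ...but a management-network address is not
print(ip_address("192.168.1.20") in esx_access)  # False
```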


Enable the iSCSI target server



  • Click on the Services tab on the top

  • Locate the iSCSI target server service and enable it

Configure iSCSI targets



  • Click on the Volumes tab on the top

  • Click on iSCSI Targets on the right

  • Verify that the Target Configuration tab on the top left is selected

  • You'll notice under Target IQN that the name is auto-populated; it is best to leave this alone

  • Just hit the Add button under Add New iSCSI Target

  • Now click on the LUN Mapping tab if you're not already there

  • Find the LUN we created earlier and hit the Map button to the right of its description

  • Now click on the Network ACL grey tab on the top

  • Find the network access entry we created earlier and change it to Allow

  • Click on Update – this just took the LUN we created and restricted it to the ESX hosts on the 10.10.10.x network.

  • I haven't used CHAP yet because these boxes are physically separated, but it is extra security if needed. I won't cover configuring CHAP on the ESX host either.
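
If you're curious what the web UI produced: Openfiler 2.3 uses iSCSI Enterprise Target, and the target, LUN mapping, and ACL land in configs roughly like the sketch below (the IQN suffix and path are illustrative – let the UI manage these rather than hand-editing):

```shell
# /etc/ietd.conf (sketch) – one target, one LUN backed by our logical volume
#   Target iqn.2006-01.com.openfiler:tsn.abcdef123456
#       Lun 0 Path=/dev/vmstorage/lun1,Type=blockio

# /etc/initiators.allow (sketch) – the network ACL entry we set to Allow
#   iqn.2006-01.com.openfiler:tsn.abcdef123456 10.10.10.0/255.255.255.0
```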


Openfiler wrap up.

OK, basically we just pulled some storage off of the hard disk and set it up as a LUN that can be targeted by the ESX host. We're now ready to configure the ESX host to use the LUN.

Configuring the ESX host to use the Openfiler LUN



  • Use the Virtual Infrastructure Client

  • Connect to your VirtualCenter server, or to a single host directly, as long as it's licensed for iSCSI

  • Figure out which 2 NICs attached to your ESX host you're going to use for iSCSI and plug them into the same switch that the 2 SAN iSCSI NICs are on. To determine which card is which, you can use the VMware way (knowing PCI IDs, etc.) or the easy way: go to the Configuration tab, then Network Adapters, disconnect a cable and see which one goes down. Once you have the two you want, get them into the other switch.

  • We'll be working mostly from the Configuration tab in the VI Client.

  • So – I'm assuming you already have one vSwitch set up with 2 NICs attached for failover and a service console port on it that you use for management, or maybe even a second vSwitch just for service console management.

  • Basically we're going to create an additional vSwitch for VMotion and storage – to do that:


Add the VMotion network – remember, we're working in the Configuration tab



  • Click on the Networking button on the left

  • Click on Add Networking on the top right

  • Select VMkernel and hit Next

  • Select the two NICs that are in the switch with the SAN iSCSI NICs

  • Label it whatever you want – VMotion is intuitive, though

  • Check the box to enable VMotion

  • Give it an IP on the 10.10.10.x network we allowed earlier in Openfiler's network access configuration

  • Add the subnet mask

  • Hit Next

  • Hit Finish

  • You'll probably get a warning about a default gateway not being there; you don't need one here

  • Write down the name of this vSwitch, probably vSwitch1
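
If you prefer the service console, the same vSwitch and VMkernel port can be sketched with the esxcfg tools (the vmnic numbers and IP are assumptions for this lab):

```shell
# Create the second vSwitch and attach the two iSCSI-facing uplinks
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1

# Add a port group and a VMkernel NIC on the iSCSI network for VMotion
esxcfg-vswitch -A VMotion vSwitch1
esxcfg-vmknic -a -i 10.10.10.21 -n 255.255.255.0 VMotion

# Verify what was created
esxcfg-vswitch -l
esxcfg-vmknic -l
```

(The VMotion-enabled checkbox itself is a VI Client setting; the commands above just build the underlying networking.)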


Configure the iSCSI software initiator



  • Set the firewall to allow iSCSI traffic

  • Click on the Storage Adapters button on the upper left

  • Select your iSCSI software adapter, something like vmhba33

  • Click on the Properties link to the right, just a little lower

  • On the General tab, select Configure

  • Set the status to Enabled

  • Leave everything else and hit OK

  • It's going to tell you that you'll need a service console port on this network. Select Yes.

  • The configuration wizard pops up automatically

  • Select Service Console port

  • Hit Next

  • Set this service console to use the same vSwitch as the VMotion network

  • Select Next

  • Leave the name unless you need to change it

  • Give it an IP on the 10.10.10.x network

  • Hit Next

  • Hit Finish

  • This takes you back to your iSCSI software initiator configuration window

  • Select the Dynamic Discovery tab

  • Click Add

  • Add the IP of the bonded interface on your Openfiler

  • Hit OK; it may take a bit to log in

  • You can also do your CHAP config here, or static discovery (not covered)

  • Hit OK to finish out this window

  • It is going to prompt for a rescan – select Yes

  • Now you should have your LUN listed on the bottom.
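
The firewall and initiator steps above also have service-console equivalents (the Openfiler IP and the vmhba33 adapter name are this lab's assumptions – check yours with esxcfg-scsidevs or the VI Client):

```shell
# Open the outbound iSCSI client port (tcp/3260) in the ESX firewall
esxcfg-firewall -e swISCSIClient

# Enable the software iSCSI initiator
esxcfg-swiscsi -e

# Point dynamic discovery at the Openfiler bonded interface, then rescan
vmkiscsi-tool -D -a 10.10.10.5 vmhba33
esxcfg-rescan vmhba33
```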


Add the storage to your datastore



  • Click on the Storage button on the left

  • Click on Add Storage on the top right

  • Choose Disk/LUN

  • Hit Next

  • Select the LUN whose IQN identifies the Openfiler SAN

  • Click Next

  • Review the disk layout

  • Click Next

  • Name it something like SAN1

  • Click Next

  • Set the maximum file size. I usually do 256GB for home

  • Verify and finish
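
That maximum file size choice is really a VMFS3 block size choice: the largest single file (i.e., virtual disk) a datastore can hold is 256 times its block size, so the 256GB option formats with 1MB blocks. A quick illustration:

```python
# VMFS3 maximum file size is 256x the block size chosen at format time
def max_file_size_gb(block_size_mb):
    """Largest single file (e.g., a .vmdk) the datastore can hold, in GB."""
    return block_size_mb * 256

for block_mb in (1, 2, 4, 8):
    print(f"{block_mb}MB block size -> {max_file_size_gb(block_mb)}GB max file")
# 1MB -> 256GB, 2MB -> 512GB, 4MB -> 1024GB, 8MB -> 2048GB
```

The block size can't be changed without reformatting the datastore, so pick it based on the largest .vmdk you ever expect to create there.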


Now feel free to add VMs to the SAN.


You now have the SAN and ESX iSCSI traffic isolated to a network that is only accessible if someone comes along and plugs into the switch. Isolating is good not only for security but also for keeping traffic off your production LAN.

I tested this SAN with Microsoft's iSCSI initiator as well, and it works just fine. I did the same thing: 2 NICs on the box, one for the production LAN and one for the iSCSI LAN. I don't think it's as secure as an ESX environment, but it will work.
