VMFS Properties
- Block Storage
- High-performance file system format optimized for storing virtual machines.
- The current version is VMFS 5. VMFS 5 is only readable and writable by ESXi 5 hosts; however, ESXi 5 hosts can still read and write datastores formatted with the previous version, VMFS 3.
- Space can be increased while VMs are running on the datastore.
- Designed for concurrent access from multiple physical machines and enforces the appropriate access controls on VM files.
- When a datastore is formatted with VMFS 5, it uses a GUID partition table (GPT), which allows datastores to be up to 64 TB in size.
- VMFS provides specific locking mechanisms (on disk locking) that allow multiple hosts to access the VMs on a shared storage environment.
- Contains metadata, which includes all mapping information for files on the datastore. This metadata is updated each time you perform a datastore or virtual machine management operation such as creating or growing a virtual disk, powering a VM on or off, or creating a template.
- The locking mechanism prevents multiple hosts from concurrently writing or updating the metadata.
- There are two types of locking mechanisms: SCSI reservations, which lock the entire LUN from other hosts and are used with storage devices that do not support hardware acceleration, and Atomic Test and Set (ATS), which locks per disk sector and is used with storage devices that do support hardware acceleration.
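To see whether a given device supports hardware acceleration (and therefore ATS locking), you can query its VAAI status from the ESXi shell. A minimal sketch; the device identifier below is a placeholder:

```shell
# Show VAAI primitive support for a device. Replace the
# identifier with a real one from `esxcli storage core device list`.
esxcli storage core device vaai status get -d naa.xxxxxxxxxxxxxxxx
# The output includes an "ATS Status" line; "supported" means
# VMFS will use ATS locking instead of SCSI reservations.
```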
NFS Properties
- The underlying file system is dictated by the NFS server.
- Shared storage capabilities supported on an NFS volume include vMotion, VMware DRS and VMware HA, ISO images, and snapshots.
- The maximum size of an NFS datastore depends on the maximum size supported by the NFS server; ESXi does not impose any limits on NFS datastore size.
- If the NFS server does not offer internationalization support, do not use non-ASCII characters to name your datastores or VMs, as you may experience unpredictable failures.
VMFS 5 Features
- Up to 256 VMFS datastores can be attached to a host, with each datastore having a maximum size of 64 TB.
- Can have up to 32 extents per datastore
- Extents can be greater than 2 TB in size
- Increased resource limits (file descriptors)
- Standardized on a 1 MB block size, with support for virtual disks up to 2 TB.
- Support for disks greater than 2 TB when utilizing physical-mode RDMs.
- Default use of hardware assisted locking (ATS) on devices that support hardware acceleration.
- Ability to reclaim unused space on thin provisioned arrays utilizing VAAI
- Online upgrade
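A quick way to confirm which VMFS version each mounted datastore is using is the ESXi shell; `vmkfstools -P` on a volume path also reports the file-system version, block size, and capacity. The datastore name below is a placeholder:

```shell
# List all mounted file systems; the "Type" column shows
# VMFS-3 vs VMFS-5 for each datastore.
esxcli storage filesystem list

# Show detailed attributes for one datastore (placeholder name).
vmkfstools -P /vmfs/volumes/datastore1
```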
Creating a VMFS Datastore
- Under the Storage section of a host's Configuration tab, click 'Add Storage'.
- Select the Disk/LUN storage type (either VMFS or NFS); in this case, VMFS.
- Select the desired device and file-system version.
- Select whether to use all available partitions (erasing everything) or only the free space. If the disk you are adding is blank, the entire space is simply presented as one option.
- Give the datastore a name and click 'Finish'
- After you have finished, if the host is a member of a cluster, a rescan operation is performed on all hosts in the cluster and the datastore is added to the other hosts as well.
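If the new datastore does not show up on the other hosts, the same rescan can be triggered manually from each host's ESXi shell. A sketch:

```shell
# Rescan all storage adapters for new devices and VMFS volumes.
esxcli storage core adapter rescan --all

# Rescan for new/updated VMFS volumes specifically.
vmkfstools -V
```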
Renaming a Datastore
- Pretty simple: right-click and rename. The new name is reflected across all hosts that have access to the datastore.
Deleting a Datastore
- Be sure to remove (or migrate) all VMs from the datastore first.
- Right-click the datastore and select 'Delete'.
- Just a note: once the datastore is deleted it is gone from all hosts and the data is destroyed.
Unmounting a Datastore
- Right-click the datastore and select 'Unmount'.
- If the datastore is shared, you will have to select which hosts you would like to unmount this datastore from.
- Confirm and you're done
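The equivalent operation is available per host from the ESXi shell; the volume label below is a placeholder:

```shell
# Unmount a VMFS datastore by its volume label.
esxcli storage filesystem unmount -l datastore1

# Verify: the volume should now be listed as unmounted.
esxcli storage filesystem list
```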
Creating an NFS Datastore
- In the Storage section of the host's Configuration tab, click 'Add Storage'.
- Select NFS as the storage type.
- Enter the NFS server name or IP, the mount point, and a desired name for the datastore.
- If the volume has been exported as read only, be sure to select Mount NFS Read only.
- Confirm and Done!
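The same mount can be done from the ESXi shell. A sketch; the server name, export path, and datastore name are placeholders:

```shell
# Mount an NFS export as a datastore (placeholder values).
esxcli storage nfs add -H nfs01.example.com -s /export/vmstore -v NFS-DS01
# Add --readonly if the volume was exported read-only.

# List mounted NFS datastores to confirm.
esxcli storage nfs list
```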
Growing a Datastore
- Click the datastore, then click 'Properties'.
- Click 'Increase' and then select your option to either add a new extent or grow an existing one.
- From here the process is similar to creating a new datastore: select whether to destroy current data or use available free space, set a capacity, and click 'Finish'.
Upgrading a Datastore to VMFS 5
- Verify that the host has at least 2 MB of free blocks and 1 free file descriptor.
- Select the VMFS datastore from the Storage section of the Hosts Configuration Tab.
- Click 'Upgrade to VMFS5.' – Done!
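The upgrade can also be performed from the ESXi shell. A sketch; the volume label is a placeholder, and note the upgrade is online but one-way:

```shell
# Upgrade a VMFS-3 volume to VMFS-5 in place (non-reversible).
esxcli storage vmfs upgrade -l datastore1
```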
Datastore Maintenance Mode Requirements
- Storage DRS must be enabled on the datastore cluster that contains the datastore.
- No CD-ROM/ISO images can be stored on the datastore.
- There must be at least two datastores in the cluster.
Use Cases for Multiple Datastores
- Separate spindles – having different spindles helps provide better performance. Having multiple VMs, especially I/O-intensive VMs, sitting on one big datastore may cause latency and performance issues.
- Separate RAID groups – for certain applications, such as SQL Server, you may want a different RAID configuration for the disks that hold the logs than for the disks that hold the actual databases.
- Redundancy – if you are doing any sort of replication, you certainly want your replicated VMs to sit on different disks than your production VMs, in case you have a failure on your production storage.
- Tiered storage – you may have a use case for storage formatted on different arrays laid out as Tier 1, Tier 2, etc.
Path Selection Policies
Most Recently Used (MRU)
- The host selects the path that it used most recently.
- When this path becomes unavailable, it will select an alternate path.
- When the initial path comes back online, nothing happens; the host continues to use the alternate path.
- Default policy for most active/passive arrays.
Fixed
- The host uses the designated preferred path if configured; otherwise it selects the first available working path.
- If the path becomes unavailable, the host will select an alternate path
- When the preferred path comes back online, the host reverts to that path. (Note: this only works if you set the preferred path. If you allow the host to select its own preferred path, it continues to use the alternate path and does not revert to the original.)
- Default policy for most active/active arrays.
Round Robin
- Uses an automatic path selection algorithm, rotating through all active paths when connected to an active/passive array and through all paths when connected to an active/active array.
- Provides more load balancing across paths.
- Default for a number of different arrays.
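You can inspect and change the path selection policy per device from the ESXi shell; the device identifier below is a placeholder:

```shell
# Show the current path selection policy (PSP) for each device.
esxcli storage nmp device list

# Set a device to Round Robin; the other built-in policies are
# VMW_PSP_MRU and VMW_PSP_FIXED.
esxcli storage nmp device set -d naa.xxxxxxxxxxxxxxxx -P VMW_PSP_RR
```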