Tuesday, November 15, 2011

VMware’s vFabric; Enabling Application Development for the Cloud

vFabric aligns with VMware's cloud strategy, specifically around PaaS. There are generally three types of cloud architectures recognized in the industry: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). VMware is the only vendor with clearly defined, ready-for-sale products in all three categories.

vFabric is targeted at development of a new breed of applications, namely those that demand cloud scale. With traditional application development, planning for a burst in demand and the subsequent contraction is not only difficult to resource but also problematic when it comes to licensing the infrastructure. Essentially, the competitors VMware is targeting license on maximum utilization, which makes bursting both expensive and inflexible.

The development is all Java-based, specifically on the SpringSource platform, which VMware believes offers significant flexibility over competing platforms from Oracle and IBM (WebSphere).

The basic makeup of application development has undergone an evolution; for example, even the most powerful databases are considered bottlenecks when developing cloud-scale applications. Instead, applications use multiple individual database objects that live in memory and run on a lightweight JeOS (Just enough OS), because these can be duplicated and scaled more quickly.

In VMware's vFabric, the SQL database objects (SQLFire) are an extension of GemFire. In addition, vFabric includes production-grade, tuned versions of the Apache web server and Tomcat, along with RabbitMQ for messaging. For visibility, Hyperic provides transparent performance management.
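To give a feel for the messaging piece, here is a minimal sketch of publishing a message to a RabbitMQ broker using the standard Python pika client; the host and queue names are placeholders, and a vFabric-packaged broker would be addressed the same way as stock RabbitMQ.

    import pika

    # Connect to a RabbitMQ broker (placeholder host name).
    connection = pika.BlockingConnection(
        pika.ConnectionParameters(host="rabbitmq.example.local"))
    channel = connection.channel()

    # Declare a durable queue so messages survive a broker restart.
    channel.queue_declare(queue="orders", durable=True)

    # Publish to the default exchange, routed by queue name, as a persistent message.
    channel.basic_publish(
        exchange="",
        routing_key="orders",
        body="order 1001 created",
        properties=pika.BasicProperties(delivery_mode=2))

    connection.close()

A consumer on another VM simply declares the same queue and subscribes, which is what makes the messaging tier easy to scale out independently of the web and data tiers.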

The value that VMware is promoting is enabled by licensing vFabric in a way that allows the combination of components to be changed and reconfigured quickly as requirements change. This flexibility, provided largely through infrastructure virtualization, allows development to happen faster using a new set of building blocks tuned for on-demand, mobile-facing applications, which in turn lets customers get to market ahead of their competitors.

In addition to vFabric, which can be deployed in-house on private cloud infrastructure, VMware offers Cloud Foundry, which is provided as a hosted PaaS service. VMware cites a number of customer use cases, largely applications developed for the financial industry that require real-time information and have no tolerance for delay.

Monday, November 7, 2011

Multiple Different Hypervisors; Should You?

From a services perspective, I have had the experience of delivering different vendors' hypervisors into a single customer environment. The original justification used by the customer was to reduce costs. What quickly became apparent, though, was that the upfront cost benefit did not come anywhere near covering the increased operational costs and complexity experienced post deployment. In addition, neither vendor's management platform was considered adequate by the customer: VMware's manages only its own hypervisor, while Microsoft's requires the integration of multiple solutions. Although Microsoft has gotten better, VMware is extremely good at managing its own hypervisors. There was also strong push-back from the operational team once they experienced managing a mixed hypervisor environment without a single targeted toolset that covered both.


I suspect that the split in hypervisors happens as much for political reasons as for real cost reduction. In the history of Virtual Desktop Infrastructure (VDI), the idea of having a server virtualization team manage desktops has typically been an under-considered decision by IT teams. The server virtualization team is eager to approach the new technology but less eager to adopt desktop support. This is the wedge that drives a second hypervisor into the mix: IT teams force server and client-side virtualization onto separate platforms to ensure ownership by one group or the other.


The second pressure, I believe, is cost justification. I recall being at a financial firm when they introduced IBM blades alongside HP blades to reduce costs. The capital cost savings were offset by the operational overhead of supporting the two platforms. As the support cost was a soft cost and not visible in a single bucket, the executive 'cost cutting' decision carried the day and we became a dual-platform environment. I believe that from an accounting perspective the organization did benefit from having two vendors in play when it came to renewal, but the overhead was definitely felt by support.


When introducing an additional virtualization vendor, you must consider what the true benefit of running a mixed hypervisor environment really is:




  1. Cost: capital or operational?

  2. Should you have different hypervisors for virtual desktops and servers?

  3. What is the benefit of avoiding vendor lock-in?


What are the roadblocks you will likely face in implementing a mixed hypervisor environment?



  1. Current maturity of toolsets

  2. Costs and benefits of having not just two hypervisors but perhaps an additional management platform as well

  3. Do your virtualization vendors provide a single point of management, or is another software product required?

There can be many benefits to running a mixed mode environment. It is important to consider the decision carefully to avoid adding complexity without any return on the investment.

Wednesday, August 24, 2011

vSphere 5: virtual networking revisited

One of the less touted features of vSphere 5 is the improvement in virtual networking. Visibility into virtual networking has traditionally been an enigma in virtual environments. This improved significantly with the partnership with Cisco and the introduction of the distributed and managed switch options in the vSphere 4 environment.

With vSphere 5, VMware has started to support standard discovery protocols (such as LLDP) to allow more interaction between the physical and virtual switching environments. In addition, it is now possible to tag and prioritize traffic in the virtual environment through QoS support.

VMware did not stop there in beefing up the transparency of the networking stack. You can enable NetFlow on a distributed switch and send the flow data to a collector. This provides the opportunity to understand traffic flow and bandwidth utilization between VMs on the same host, between VMs on different hosts, and between VMs and the physical environment.
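As a rough illustration, the sketch below points NetFlow on a distributed switch at a collector using the pyVmomi Python SDK. The switch name, collector address, and credentials are placeholders, and the data-object names and fields are from my reading of the vSphere API, so treat this as a sketch to verify against the SDK documentation rather than a recipe.

    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Connect to vCenter (placeholder host and credentials).
    si = SmartConnect(host="vcenter.example.local",
                      user="administrator", pwd="secret")
    content = si.RetrieveContent()

    # Find the distributed switch by name.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.DistributedVirtualSwitch], True)
    dvs = next(d for d in view.view if d.name == "dvSwitch01")
    view.Destroy()

    # Build a reconfigure spec that sets the NetFlow (IPFIX) collector.
    ipfix = vim.dvs.VmwareDistributedVirtualSwitch.IpfixConfig()
    ipfix.collectorIpAddress = "10.0.0.50"   # placeholder collector address
    ipfix.activeFlowTimeout = 60             # seconds
    ipfix.idleFlowTimeout = 15
    ipfix.samplingRate = 0                   # 0 = examine every packet
    ipfix.internalFlowsOnly = False          # include VM-to-physical flows

    spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
    spec.configVersion = dvs.config.configVersion
    spec.ipfixConfig = ipfix

    dvs.ReconfigureDvs_Task(spec)
    Disconnect(si)

Note that this only defines the collector at the switch level; flow monitoring still has to be enabled on the port groups whose traffic you want exported.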

Why is this significant? Many customers are now dealing with internal multi-tenancy issues in which virtual clusters are often the demarcation points between business units that are reluctant to 'share' resources. The visibility and prioritization allow the IT team to make policy-based decisions to logically separate the environment and then demonstrate that separation through granular reporting, all the way down to traffic flow. This, in turn, allows them to collapse clusters of 'parked' resources that could be better utilized.

This is the same problem that cloud providers often struggle with: it is difficult to provide visibility into traffic flow down to the VM level, which in turn leads to problems deriving SLAs that can be measured in real terms. Interestingly, these features and the pain points they address are not well represented in VMware's promotional material. I find this unusual, as they clearly distinguish the vSphere 5 platform from its competitors.

If you have not looked at these features yet, it is a good idea to give them a thorough review. As your environment scales, these technologies become increasingly important in ensuring you have end-to-end visibility and manageability of your virtual environment.





Monday, August 8, 2011

VDI and storage; can't find one without the other

During a recent design engagement building a scalable VDI environment, storage quickly became a core consideration. Those of you who have designed and operated large VDI environments know that storage I/O can easily cause performance issues if not properly planned for. There were a few interesting things we learned as we worked with different SAN vendors to ensure their solution addressed some of the unique characteristics of our VDI workload.

What is unusual about virtual desktop environments is that they present two very different disk I/O conditions: operational I/O and burst I/O. Burst I/O is more common in VDI environments because operational requirements necessitate large-scale reboots of desktop operating systems that are not typical in virtual server environments. Operational I/O can also be problematic if activities such as virus scanning are synchronized by time rather than randomized to reduce the performance hit on the VMs.

Some storage vendors have a very utilitarian view of storage services; they do not view virtual desktop workloads as any more unique than other virtual workloads. The limitation with SAN vendors who do not differentiate between server and desktop virtualization is that, to guarantee good throughput, you may have to consider their enterprise-class storage systems. That can be a tough sell when the arrays are hosting desktop operating systems.

Other storage vendors provide mid-tier solutions and add Solid State Drives (SSDs) to deal with burst I/O. While better, this still requires you to adjust your design so that high-I/O requirements are segregated onto volumes made up of SSDs. This leads to a very static design in which you may or may not make good use of the high-performance drives.

Most recently we have seen storage vendors start to build mid-tier storage systems with some of the features of enterprise-class systems, such as Dynamic Tiering. Dynamic Tiering is the ability to move hot, in-demand data to high-performance drives so that the SAN delivers strong performance. This can typically be done on the fly or scheduled to happen periodically during the day. These solutions are ideal for virtual desktop environments because they deliver the features without the premium of enterprise-class storage systems. EMC has clearly targeted the VNX line to provide features that make it well suited to virtual workloads. Of course, companies such as NetApp have been using Performance Acceleration Module (PAM) cards for years to deal with burst I/O. Whichever solution you select, here are a few general considerations for putting together your design.

1. Each SAN vendor has very different numbers when estimating I/O for virtual server and virtual desktop workloads. It is best to have your own reference numbers based on internal testing, and to use them to confirm that the estimates provided meet your requirements.

2. Burst I/O and operational I/O are treated distinctly by most storage vendors. For example, if you estimate that your environment may generate 15,000 burst IOPS and require 4 TB of storage, the vendor may suggest 6 SSDs (6 x 2,500 IOPS each = 15,000 burst IOPS, excluding RAID considerations) plus SAS drives to meet your capacity and operational requirements.

3. Ensure that your virtual desktop design incorporates the SAN environment. A good design should provide consistent performance over the lifetime of the solution (typically 3 years). This is not possible if you build a great VDI design that does not set specific requirements for storage; while your VDI environment may run great during the first year, you may see high SAN utilization lead to problems over time.

4. Separate your expected read and write I/Os, and multiply the writes by 4 to allow for the I/O penalty on writes (a factor of 4 corresponds to a RAID 5 write penalty). For example, if you expect 2,000 reads and 2,000 writes, multiply the writes by 4 for a total of 10,000 expected I/Os (2,000 read I/Os + 8,000 write I/Os). A short sketch of this and the burst calculation from point 2 follows this list.
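Here is a small sketch of the sizing math from points 2 and 4; the IOPS-per-drive and workload figures are the illustrative numbers used above, not vendor specifications.

    # Illustrative VDI storage sizing math (figures match the examples above).

    # Point 4: apply a write penalty to the expected write I/O.
    read_iops = 2000
    write_iops = 2000
    write_penalty = 4                        # e.g. RAID 5 write penalty
    effective_iops = read_iops + write_iops * write_penalty
    print("Effective IOPS:", effective_iops)                # 10000

    # Point 2: size SSDs for burst I/O (RAID overhead excluded for simplicity).
    burst_iops_required = 15000
    iops_per_ssd = 2500                      # illustrative per-drive figure
    ssds_needed = -(-burst_iops_required // iops_per_ssd)   # ceiling division
    print("SSDs needed for burst:", ssds_needed)            # 6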









Thursday, August 4, 2011

Cloud Design; the return of Hive Architecture

I have been interested in the notion of what I like to call “Hive Architecture” for a few years. It is the practice of designing software services that are the sum of their parts, built from virtual machines. As in a beehive, each member has a simple function, but collectively they form the complex system necessary for survival. Designing IT infrastructure in a similar manner makes you more cloud friendly.

The design concept is not new; clustering has been around for years, and we saw early virtualization-based versions of it in projects such as LiquidVM (BEA/Oracle) and JeOS (the Just Enough OS initiative).

At one point VMware was a strong proponent of the concept. I remember sitting in a VMware keynote session in which the speaker explained that the time of the multi-function, general-purpose OS was over. Well, that didn't happen, and with the strong adoption of Windows Server 2008 R2 it seems the general-purpose OS will be around a while longer.

There are lessons, however, in the design principles, especially when we look at how VMware is enabling cloud adoption. With vCloud Director, VMware is very much betting on organizations adopting clouds that look and feel very much like their internal virtualization infrastructure. So how do the concept of Hive Architecture and VMware's vision come together?

In today's virtualization environments, the focus has been on automation: simplifying management of the virtualization stack, reducing the operational overhead of managing clusters of VMs running traditional operating systems, and leveraging virtualization much more heavily in the supporting network and storage infrastructure. In addition, the vCloud and vShield product lines allow virtual infrastructure to be stretched securely between separate locations. How, then, does Hive Architecture add value to what today's operational environments look like?

The concept is rather simple: when we implement multi-tiered applications with database, web, load-balancing, and network components in our virtual infrastructure, the application should be deployed as a single 'hive', avoiding the sharing of services between unrelated VMs even if that goes against our grain. This may not always be possible due to licensing costs; consider the conundrum of running a dedicated SQL instance per application versus the simplicity of consolidating databases on a centralized service, and the operational overhead of each approach.

So why consider it? Take the example of a company that has an internal datacenter running VMware, has stretched it to take advantage of virtual infrastructure at a cloud provider, and is now considering what makes sense to put at arm's length. Here Hive Architecture makes sense: the entire business application is made up of a logically collected group of VMs (the hive, if you will), so the IT organization does not have to go through a large decoupling of shared services to take advantage of the cloud opportunity.

The idea occurred to me several years ago when I worked with an organization that was isolating and virtualizing services by business application for QA; it was a significant challenge to map out all of the involved software, servers, and supporting infrastructure. It also struck me while watching a presentation by Intel about cloud adoption, in which they detailed the amount of organization required to make use of cloud computing. It was no small effort. While Hive Architecture is not the be-all and end-all, it is an important consideration in simplifying the move to the cloud.

Wednesday, February 2, 2011

Virtualizing Provisioning Server (PVS)?

I have deployed environments in which the PVS servers were physical and environments where they were virtual, so of course either is a valid approach. It got me thinking, however: when might a physical PVS server be more appropriate than a virtual one?

Virtual Desktop Infrastructure (VDI) creates hotspots on your SAN infrastructure if you are not careful. This is quite easy to do if you focus too much on space and not enough on throughput. For example, you may not need many drives to provide space for your virtual desktops, because a single drive has high storage capacity. This is especially true if you are using Provisioning Server.

Keep in mind, though, that in environments that scale, Citrix recommends storing the PVS write cache on drives local to the desktop. With virtual desktops, this places the caching I/O on the LUN storing the virtual machines. Each drive in a SAN can only provide so many IOPS (Input/Output Operations Per Second), so although the number of drives may meet the capacity requirement, it may be inadequate for the throughput. Storage virtualization deals with this in some respects by aggregating the underlying spindles, but it limits visibility.
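To illustrate the capacity-versus-throughput trap, here is a small sketch comparing the drive count needed for space with the drive count needed for IOPS; all of the figures are hypothetical examples rather than vendor numbers.

    # Hypothetical example: sizing a LUN for virtual desktops by capacity alone
    # versus by throughput (all numbers are illustrative).

    desktops = 500
    gb_per_desktop = 10          # space per desktop (write cache, deltas, etc.)
    iops_per_desktop = 15        # steady-state IOPS per desktop

    drive_capacity_gb = 600      # a single large spindle
    drive_iops = 150             # what one such spindle can sustain

    def ceil_div(a, b):
        return -(-a // b)

    drives_for_space = ceil_div(desktops * gb_per_desktop, drive_capacity_gb)
    drives_for_iops = ceil_div(desktops * iops_per_desktop, drive_iops)

    print("Drives needed for capacity:  ", drives_for_space)   # 9
    print("Drives needed for throughput:", drives_for_iops)    # 50

With example numbers like these, a handful of large drives satisfies the space requirement while the throughput requirement demands many times more spindles, which is exactly how the hotspot gets created.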

If you have not properly planned and designed your environment, “you may be putting too many eggs (or VMs) in your (LUN) basket”.

In environments designed for scale, separating the image-streaming PVS servers from the virtual cluster will provide better scalability and a better VM-to-PVS-server ratio. If you are designing your environment to be modular, you may want the PVS servers segregated so that images can be delivered to multiple virtual clusters.

Even though a physical PVS server will likely still store its vDisks on a SAN-attached volume, deploying a physical server with a separate SAN-attached LUN gives you greater visibility and greater flexibility if you need to move it.

All of this can be configured in virtual hardware (using pass-through LUNs); however, it stands to reason that if you are going to scale the PVS server to thousands of desktops, you may not want its network, I/O, and storage paths shared with the virtual desktop infrastructure. In addition, if the PVS server is physical, you have the option of redirecting the PVS cache to solid state drives in the server.

While either approach is valid, the decision on when the PVS server should be physical comes down to scale. This is not absolute, though, as we know that virtualization can provide better linear scaling than an application installed natively on a server. It is also a question of control and visibility into the load generated on the PVS server. The last factor is assuring performance by not forcing the PVS server to share network and storage paths with the virtual desktop instances.