Friday, December 10, 2010

A Layer 2 Cloud

I attended an interesting datacenter design course which opened my eyes to what might be possible with cloud computing. It centred on redesigning the datacenter and allowing Layer 2 network traffic further up the network stack in order to build a datacenter based on virtualization: essentially designing a datacenter for virtualization rather than incorporating virtualization into a datacenter.

A typical datacenter has three network tiers: access, edge and core switching and routing. This ensures that traffic engineering is applied before our data is sent over long-haul networks. Layer 2 traffic is often restricted to access switches and unlikely to proliferate to the edge or core tiers. Core virtualization technologies, however, are Layer 2; vMotion, for example.

A datacenter designed for virtualization allows Layer 2 much further up the traditional network stack and also flattens the old three-tier system using embedded virtualization in the switches and routers. Consider multiple physical switches being combined to form a single logical switch, or multiple virtual routers inside a single physical router. The traditional tiered network architecture is leveled in favour of flexibility, speed and the ability to deliver core virtualization technologies across broader distances.

A lot of the difficulty in integrating Cloud services has to do with a lack of standards at the demarcation points where the Cloud provider's service starts and the internal IT organization's ends. This leads to the mentality of breaking off a portion of your IT environment for the cloud and tethering it back to your organization in some shape or form. But consider if the Cloud provider essentially provided a Layer 2 service that allowed you to create a virtual switch spanning from one end of the country to the other. Now the ability to create storage fabrics and do long-distance vMotion becomes available at a fraction of the cost of architecting the environment internally. A Layer 2 service allows the network engineering to remain under the control of the internal IT organization with less complexity.

The cloud becomes a virtual private circuit that links to either public or private virtual infrastructure. While all this exists currently, it is complex, expensive and not designed with virtualization in mind. Is this that far away? Large players have been developing and acquiring technology to deliver a complete model of x86 virtualization and integrated networking services: Juniper has recently acquired Altor Networks, VMware is pouring development and resources into the vShield product line, and Cisco continues to expand its commitment to embedded virtualization.

While the focus is still on virtualizing traditional networking tiers, the blending of network services and x86 virtualization is happening at a dizzying pace. An integrated model may enable a simpler way to incorporate Cloud services without the inherent security concerns that limit adoption. It will allow IT organizations to extend their datacenters in a much more seamless way using existing standards rather than the confusing array of options now available.

Friday, November 12, 2010

Setting Up Citrix Profile Manager

In a VDI environment Citrix Profile Manager is installed on the virtual desktop instance. The installation is simple and straightforward; it is a matter of clicking Next through the default screens.


There are two ways of controlling Profile Manager: through Active Directory or through local INI files. The local INI files are not recommended for production, so we will run through the configuration using the Active Directory template.

You need to ensure you have set up a share location with the correct permissions for the profiles. Microsoft recommends the following:

NTFS permissions for the roaming profile parent folder (user account: minimum permissions required):

- Creator Owner: Full Control, Subfolders and Files Only
- Administrator: Full Control (Microsoft actually recommends none, but it simplifies things if you give admins Full Control)
- Security group of users needing to put data on the share: List Folder/Read Data, Create Folders/Append Data, This Folder Only
- Everyone: No permissions
- Local System: Full Control, This Folder, Subfolders and Files

Share-level (SMB) permissions for the roaming profile share (user account: minimum permissions required):

- Everyone: No permissions
- Security group of users needing to put data on the share: Full Control

Once you have the share set up you are ready to import the ADM template. By default it is copied into the installation directory of Citrix Profile Manager (within the virtual desktop instance). From your AD server run the Group Policy Management Console (note: this console is not installed by default but can be downloaded from Microsoft). Browse to your XenDesktop OU and create a new GPO.


Right-click the new GPO and select Edit to bring up the Group Policy Object Editor. Browse to Administrative Templates under Computer Configuration.


Click Add/Remove Templates and browse to the XenDesktop ADM template.

Once you have imported the template you can browse to Classic Administrative Templates\Citrix\Profile Management. At a minimum, to get everything working you will need to configure the following:

1. Enable Profile Management

2. Processed Groups (Your XenDesktop AD Group)

3. Process logons of admins (optional)

4. Path to user store (your profile directory)

There are many other settings you can use to tweak how profiles behave in your environment, but this base set of steps should get you up and running; a quick checklist sketch follows below.
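
Out of interest, here is a minimal Python sketch of that checklist, just to keep the four baseline settings in one place before you link the GPO; the group name and user-store path are hypothetical placeholders, not values from this setup.

```python
# Minimal checklist sketch (not Citrix tooling): the four baseline Profile
# Management settings, with hypothetical example values.
REQUIRED_SETTINGS = {
    "Enable Profile Management": True,
    "Processed groups": ["DOMAIN\\XenDesktop-Users"],           # your XenDesktop AD group
    "Process logons of local administrators": False,            # optional
    "Path to user store": r"\\fileserver\profiles\%USERNAME%",  # your profile directory
}

def missing_settings(settings):
    """Return the names of any settings that are unset or empty."""
    return [name for name, value in settings.items() if value in (None, "", [])]

if __name__ == "__main__":
    gaps = missing_settings(REQUIRED_SETTINGS)
    print("Baseline complete" if not gaps else f"Still to configure: {gaps}")
```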

Friday, September 17, 2010

IT and Consumerization

At one time IT products were invented for business use first and redeveloped for consumers later. Now products are being designed for users first. One just has to look at the incredible growth of Apple to recognize this trend. Apple's market cap now exceeds Microsoft's, so it is easy to forget that, relatively speaking, they do not own a large share of the market: just 7% of the PC market and 18% of the smartphone market according to industry analysts. Yet they are now one of the biggest software companies around.

The economics of IT are shifting, with Gartner predicting that by 2012, 20% of businesses will not own IT assets. In addition, users are becoming increasingly demanding and oblivious to traditional IT problems like where data is stored or how it is secured. It is also interesting that services like Hotmail, which have no SLAs, tend to provide better uptime than some internal IT shops because they are designed to serve demand from a vast number of users.

Personal computing has also arrived at work and is unlikely to be stopped. IT must now come to grips with how to integrate these new devices, how to deal with user applications and, more importantly, how to impose some level of control.

There is evidence that these trends are not all bad for an organization. For example, Bring Your Own Computer programs generally let businesses increase employee satisfaction while reducing support costs. Employees tend to be more productive as they have a single, potentially more mobile device, leading to a better work-life balance.

Why bother looking at consumerization and its impact? A generation of employees is entering the workforce that has grown up with consumerization and expects IT to work in a similar fashion. This will become an increasingly competitive factor as the economy heats up and companies work to attract and retain the best, brightest and most technically savvy employees.





- Posted using BlogPress from my iPad

Wednesday, September 15, 2010

VMworld Revisited; VMware View 4.5 & vSphere 4.1

Thanks all for attending our event. As promised I have uploaded the content for easy reference and viewing.

NetApp for Your Virtual Infrastructure

This information is based on a presentation given by Rowena Samuel, the Canadian Technical Partner Manager at NetApp while at our VMworld revisited event.

We are all aware that virtualization creates storage challenges. Centralized storage is inherently more expensive than direct-attached storage. What is often overlooked is the cost of floor space for storage related to server or desktop virtualization. Storage is also impacted by the increased consumption rate in virtual environments and by new response-time expectations.

NetApp addresses cost using deduplication and storage efficiency. When NetApp talks about storage efficiency they are talking about server offload: using Wintel servers as servers, and storage devices for storage and backup.

To NetApp, mobility is about being able to mirror data between data centers; in addition, integration with VMware virtualization products like SRM is very important to NetApp's strategy.

NetApp believes unified storage is much more than multi-protocol support. It is also about unified storage controllers that enable customers to move from low-end to high-end storage. NetApp also supports other vendors' storage solutions by adding NetApp controllers in front of existing arrays.
Unification also means one management tool and one storage OS to support.

NetApp integrates "tier-less" storage, or high-speed caching hardware called Flash Cache. NetApp provides many degrees of efficiency, such as using Flash Cache in front of SATA to accelerate cheaper disk solutions.

NetApp Snapshot technology is highly robust and plugs into VMware management tools. For example, you can snapshot entire datastores but recover individual VMs using NetApp technology. NetApp believes strongly in the value of deduplication in virtualization environments; they offer a guarantee of a 50% reduction in storage used when NetApp dedupe is integrated into virtual infrastructure (of course, conditions apply).

NetApp offers storage for $50 per VDI instance; however, you will have to check their white papers to determine what combination of product and configuration is required to hit this price point.

Flash Cache prevents boot storms (performance issues on storage caused by multiple VMs starting at the same time) by reading from cache rather than the disk subsystem. It is dedupe-aware, so if 100 images are requested only one copy loads and can be used to serve all the requests. This can offer significant performance gains and cost savings in VDI environments.
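
As a rough mental model of dedupe-aware caching (an illustrative sketch only, not NetApp's implementation): blocks are stored once and keyed by a content fingerprint, so a hundred clones booting from the same image generate a single disk read.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    return hashlib.sha1(data).hexdigest()

# Deduplicated backing store: 100 cloned desktops all reference one boot block.
golden_block = b"common boot-image block shared by cloned desktops"
unique_blocks = {fingerprint(golden_block): golden_block}     # stored once on disk
desktops = [fingerprint(golden_block) for _ in range(100)]    # each VM's boot-block key

cache, disk_reads = {}, 0
for key in desktops:                     # simulated boot storm
    if key not in cache:
        cache[key] = unique_blocks[key]  # only the first boot touches the disk
        disk_reads += 1
print(f"100 boots served with {disk_reads} disk read(s)")     # -> 1
```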

NetApp is very excited about vCloud Director. They were heavily involved in the testing stage prior to product launch. NetApp sees vCloud Director as the first offering in the ITaaS (IT as a Service) space.

The key features of vCloud Director are support for virtual data centers and true multi-tenant environments. In addition, infrastructure service catalogs can be created to allow users to browse and select VMs, or a collection of VMs representing an application or service. vCloud Director fully integrates with VMware Orchestrator and provides an extra level of abstraction on virtual infrastructure to merge public and private clouds.

NetApp believes they can enhance multi-tenant environments because they can create virtual storage devices on top of a single physical storage device. While customers may not require these features now, these technologies will be increasingly important as we migrate to the cloud. The full value message can be found on YouTube at http://tinyurl.com/36wx9cw

NetApp sees the journey towards the cloud as a roadmap for all customers. To prepare, customers will need to integrate their silos of virtualization and standardize to ensure they are "cloud ready".










- Posted using BlogPress from my iPad

Friday, September 10, 2010

Dazzle: Old School is New School

You know, I always find it interesting when a new concept breathes new life into one that has been around for a while. I am delivering the VDI message as the interest level is extremely high and it is a good solution in many cases. As someone who has been in the industry for a while, I was delivering presentation virtualization when it was just called server-based computing. Over the last few months a few customers have mentioned that they consider Terminal Services (TS) a legacy delivery method. The great criticism has always been that it changes the user experience by not delivering a full desktop (I do recognize you can deliver a shared server desktop, but in general it is not recommended).

What is interesting is that on an iPad this is its greatest strength. There is a lot of development being done by all VDI vendors to provide an iPad client to deliver the desktop anywhere. Some of them have delivered and a few are in alpha or beta. But after using sysadmin-type tools on the iPad, I am wondering if going straight to the app provides a better experience than going first to the desktop and then to the app.

As any TS administrator can tell you, giving up the desktop in favor of thin clients in a traditional Citrix or TS environment was a bit of an uphill battle. But does the iPad close this last mile of hard-fought-over end user space? Clearly Citrix has hedged its bets by providing an Apple-esque method of consuming applications called Dazzle. VMware mentioned a similar on-demand application model but details were light, and Microsoft has significantly improved Terminal Services in the 2008 release. What may not serve the vendors as well is the rebranding around desktop-centric virtualization. As we look to transition from IT shops to service-driven organizations, the server-based computing/tablet model has become compelling thanks in large part to the iPad. And unlike in past IT end user turf wars, the demand for the new device is coming from the users themselves.



- Posted using BlogPress from my iPad

Thursday, September 9, 2010

Cloud Ready: How Standard are your standards?

I was attending an Intel presentation last week on their cloud strategy, and what struck me was the amount of internal alignment that had to be completed in order for them to take advantage of cloud-based services. Most organizations are not the size of Intel, which runs 100,000 servers across 95 datacenters, but there are lessons to be learned from their cloud readiness preparation that are applicable to organizations of all sizes. The first area of focus was compute consistency, which, when we look at consuming Infrastructure as a Service (IaaS), relates to ensuring standard sizing of a virtual machine for an application workload. In our virtualization practice we recommend that customers standardize on a set of configurations for high, medium and low workloads if they do not include performance measuring as an internal process. This ensures consistency when transferring on-premise workloads to IaaS providers. For companies running large virtualization shops this can be daunting if no consistent standards existed in the first place. Ironically, Amazon just announced micro instances on EC2 for smaller application workloads, which highlights the need for this type of categorization and standard: http://aws.amazon.com/ec2/ .
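
To make the sizing idea concrete, here is a small sketch of what a high/medium/low standard might look like; the tiers and numbers are my own illustrative values, not Intel's or Amazon's.

```python
# Hypothetical standard VM sizes for low/medium/high workloads (illustrative only).
STANDARD_SIZES = {
    "low":    {"vcpu": 1, "ram_gb": 2},
    "medium": {"vcpu": 2, "ram_gb": 4},
    "high":   {"vcpu": 4, "ram_gb": 16},
}

def classify(peak_cpu_cores: float, peak_ram_gb: float) -> str:
    """Map a measured or estimated workload onto the smallest standard size that fits."""
    for name in ("low", "medium", "high"):
        size = STANDARD_SIZES[name]
        if peak_cpu_cores <= size["vcpu"] and peak_ram_gb <= size["ram_gb"]:
            return name
    return "high"  # anything larger gets the top tier, or a design review

print(classify(peak_cpu_cores=1.5, peak_ram_gb=3))  # -> medium
```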

The second area of focus was consistent service standards. This is somewhat intuitive but may not be at the top of the list for customers considering consuming resources from the cloud. It stands to reason that if you are going to move compute resources to the cloud, your services need to be well defined and consistent so they can either be matched by the provider or seamlessly maintained by your internal IT organization irrespective of geography. Really the message is rather simple but important: ensure your IT house is in order so consistency can be maintained when you take advantage of cloud-based services.



- Posted using BlogPress from my iPad

Wednesday, September 8, 2010

Cloud Bleeding: Losing Corporate Data to the Cloud

One interesting phenomenon impacting IT environments is the DIY ("Do It Yourself") type of application designed to work over standard ports, allowing remote access to information. Increasingly these DIY applications are showing up inside our environments. Applications like LogMeIn or Dropbox, to name a few, allow a lightweight applet to load on a desktop, in some cases without privileged access. These "applets" enable users to stand up their own remote-access-type services without IT. While they have been around for a while, their use is on the increase as users get acclimatized to installing "applets" similar to the App Store or Facebook model. In addition, a new generation of online storage and synchronization tools designed for end users is readily available. Demand is also increasing as people transfer data from their desktops to devices like the iPad to enable mobility.

The age of "DIY IT" is upon us and we need to move quickly to respond to the trend. How then do we balance the requirement for IT self service, the Bring Your Own Computer (BYOC) trend and complete mobility against compliance and regulation requirements. Luckily solutions such as VDI are now offering flexible delivery of virtualized applications and desktop access from anywhere to help. In addition security standards are being introduced to cloud providers along with vendor security certification programs to ensure data is protected and that the hosting facility can be trusted. However, while these aim to meet users changing consumption requirements they do not form a comprehensive enough solution to prevent an underground exodus of corporate data. Additional layers of security such as digital rights management and a strong policy governing end users responsibilities will be required to secure corporate information.

It will be a fine balance between enabling users with technology and protecting company information at the same time. All the pieces, however, have reached a level of maturity to address most of these issues. The onus will be on the IT team to work with their partners to bring these components together to provide security and reduce the risks without restricting mobility or end user flexibility. As this trend is likely already impacting our environments, the time to start planning is now.

- Posted using BlogPress from my iPad

Tuesday, September 7, 2010

VMworld 2010 Reporting: VMware's Acquisitions

During the keynote Stephen Herrod made a few announcements regarding acquisitions; however, there was little detail. The eCommerce Times published a conference report with a little more information. You can read more here: VMware Buys Parts for Its 'Virtual Giant'.

Friday, September 3, 2010

VMworld 2010 Reporting: The morning after the night before

So the virtualization event of the year has come and gone, and with it several new product announcements specifically designed to address the scale, security and management of cloud-based infrastructure.

As always, the event was an awesome opportunity to interact with people from all areas of the industry. After much confusion regarding some of the acquisitions, VMware is putting forth a vision of the future. What was interesting was the shift in focus to applications. The comment "it's all about the applications" was reinforced at the keynotes and in some of the forward-looking breakout sessions. This sounds decidedly similar to Citrix's mantra, but is it really? VMware went to great pains to distinguish between legacy and future application development platforms. Clearly they see Citrix as an enabler of legacy applications but not as a platform or application framework. This is what SpringSource is all about: it is a hosted development platform, much like Microsoft Azure.

The new generation of applications will live in the cloud with a small client-side plugin that provides the user interface; the model that Apple has pioneered with the App Store. Citrix has moved quickly to emulate this through Dazzle, but they are focused on delivery rather than delivering a development platform for customers.

There is evidence to strengthen this view, as Gartner recently released statistics suggesting that 50% of the applications businesses consume are coming from the cloud (think salesforce.com). This integration will create new challenges for IT teams as they seek to ensure standards and compliance are maintained on third-party infrastructure into which they have no visibility. In addition, they will be forced to move quickly, as users who have become accustomed to the App Store model are likely to pose a real risk of private internal information bleeding out to the Internet (i.e. if IT does not provide similar services, why not use something like Dropbox on my desktop or smartphone?).

The hardware vendors are also moving quickly to take advantage of the rush to the cloud. Several announced turnkey unified server, storage and networking hardware that can be purchased as a single unit or block. This will have an impact on integrators as they transition from providing component-based services, like VMware product deployment, to becoming integrated service providers. Service providers will have to become a one-stop shop for the design, deployment and delivery of the entire infrastructure stack. While those of us who have been doing virtualization for a while have largely made these adjustments, because virtualization already tightly integrates the infrastructure components, there are still a few things that are not yet clear. In addition, the demand for buying large blocks of infrastructure will inevitably lead to partnering and acquisitions between the software, hardware and service providers to shore up any gaps in their capabilities or their ability to compete in certain markets.

It will be interesting to see if traditional infrastructure services will need to expand to incorporate development as an additional capability. VMware has already encouraged partners to start engaging customers in conversations around development. With VMware's stronger focus on a new generation or platform for development I wonder how long it will be before application development and virtual infrastructure become the same conversation.

I was speaking with one of my colleagues and he could not get over the amount of product being introduced to optimize all the different forms of virtualization (end user, storage, etc.). The irony was not lost on us that a technology introduced to simplify management has brought such a vast array of complexity into the marketplace. Even as problems are addressed, the level of expertise required has increased significantly. This is apparent in the vShield and Nexus products, targeted at many of the deficiencies in virtual infrastructure but requiring a different level of understanding of the network and security stack.

VMware decided to eat their own dog food a little and offered all labs from the cloud through several cloud providers. The number of labs they were able to deliver using this strategy, and the number of virtual machines they deployed and then refreshed over the four-day conference, was staggering. Clearly they have developed their own case study for the power of cloud computing, even if the lifespan of the VMs was typically under 90 minutes.


- Posted using BlogPress from my iPad

Wednesday, September 1, 2010

VMworld 2010 Reporting: View Security

There are three elements to the View architecture:
- centralized desktops
- end user devices
- View broker

View currently supports AD, Novell, RSA SecurID and smart cards as authentication methods. In addition, there are third parties that support additional methods of authentication for View.

Two methods can be used to establish a connection; direct to desktop and tunneled through the View server over https. In addition the tunnel can be moved to a View Security server to offload the overhead from the View server to the proxy (Security server). Typically a Security server is deployed in a DMZ and is appropriate for remote access scenarios.

PCoIP is not supported in View 4 or 4.5 through the Security Server proxy; only RDP is. VMware is working with Teradici to get this working, but recommends using a VPN if you are serving PCoIP externally.

With 4.5 you have delegated role-based access control. Certificate management and revocation have also been added.

Administrators can now be associated with a role, with associated permissions, and then assigned to folders within the View hierarchy of resources. Some of the roles include inventory management and global administration. In addition, custom roles can be added with specific permissions. For example, you can divide your View architecture into folders that represent geographic regions and then add regional administrator roles.
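
Conceptually, the delegated model reduces to a mapping of administrators and roles onto folders; the sketch below uses invented regions, role names and permissions just to illustrate the scoping.

```python
# Toy model of delegated administration (names and permissions are invented).
ROLES = {
    "Inventory Administrator": {"manage_desktops"},
    "Global Administrator":    {"manage_desktops", "manage_config", "manage_certs"},
}
ASSIGNMENTS = [
    {"admin": "alice", "role": "Inventory Administrator", "folder": "/EMEA"},
    {"admin": "bob",   "role": "Global Administrator",    "folder": "/"},
]

def allowed(admin: str, permission: str, folder: str) -> bool:
    """True if some assignment grants the permission on this folder or a parent."""
    for a in ASSIGNMENTS:
        scope = a["folder"].rstrip("/") + "/"
        in_scope = folder == a["folder"] or folder.startswith(scope)
        if a["admin"] == admin and in_scope and permission in ROLES[a["role"]]:
            return True
    return False

print(allowed("alice", "manage_desktops", "/EMEA/Paris"))  # True
print(allowed("alice", "manage_certs", "/EMEA/Paris"))     # False
```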

Best practices for securing a View deployment:

- Use the vSphere hardening guidelines
- Harden the virtual desktops
- Review your refresh intervals - i.e. clean desktop on logoff
- Change the default certificates on the View components
- Disable unused ciphers for SSL encryption
- Consider disabling the USB port for external access

VMware recommends integrating the vShield products into your VDI environments. One interesting thing about vShield Edge is that it can provide load balancing to the View servers. vShield App can be used for zoning desktops, and vShield Endpoint can offload antivirus protection.





- Posted using BlogPress from my iPad

VMworld 2010 Reporting: Cloud Intel Supersession

The Internet is only 45 years old, and there are now 1.8 billion users accessing it; by 2015 that number is expected to be 4 billion. This presentation focuses on what the Internet means to Intel. Each smart device typically has four separate radios accessing the Internet, and Intel is working to move that number to five or six using chip technology.

So how is the cloud taking shape? There are two different layers: cloud computing, or consumption, and cloud architecture, which is typically shared, dynamic and virtual in nature. Intel will build microprocessors that expose more and more instrumentation for partners like VMware to take advantage of.

When we look at private vs. public cloud there are a number of concerns, and providers will need to assure their customers these have been addressed: specifically interoperability, security and standards. Intel is making acquisitions to deliver on their vision of the cloud. Intel believes the cloud should be simplified, efficient and secure.

Intel runs 100,000 servers across 95 datacenters, broken down into four verticals (design, office, manufacturing and enterprise) and growing at 45% a year. Storage utilization is 18 PB. Virtualization is key to meeting this internal demand. In addition, a proactive server refresh introduces new CPU processing capacity: four servers can be replaced with one Westmere-based server. The network is also being optimized so that the path to storage takes the lion's share of the bandwidth.

A key component is global data center metrics and monitoring to guide how money should be spent to increase efficiency and reduce compute costs. In addition rationalizing the number of applications that Intel supports is part of this strategy.

Intel is piloting cubicle cluster computing: building a virtual rack from desktops distributed across office cubicles. Intel studies have shown that cost is lower because natural air flow in office locations is more efficient at cooling than pushing air through a dense rack-based datacenter. They put the desktops in a virtual cluster and serve demand from the collection of desktops to create a local datacenter experience for a branch location.

Intel has worked on pushing standards across all datacenters to ensure compute power can be pushed to any datacenter irrespective of physical location. This has led to 82% utilization in the design environment. Now that they have consistent standards they can look to the public cloud to ensure information can be shared. The public cloud is currently being used for sales and marketing, but is expected to play a bigger role in the future.

In addition service delivery standards must be consistent across the entire organization to enable private and public delivery from cloud infrastructure. Intel has a new point of view "device independent mobility and client aware computing" to focus their service delivery standards.






- Posted using BlogPress from my iPad

VMworld 2010 Reporting: VMware's Vision of Storage

This presentation is a vision of how VMware believes storage will evolve over the next 3 - 5 years.

Future claims

- physical storage will achieve linear scalability and will be managed trivially
- storage will be a single pool allowing vms to move between data centers.

Current problems and challenges with storage

- Planning and sizing over time
- Optimizing HW value
- Cost of managing scale
- Application SLA enforcement
- Multi-tenancy
- Cross datacenter movement

IT's challenge: how can I take a large environment, run it with different SLAs and scale it? How can it be run with zero-touch management? Two solutions have traditionally been available: scale-out (separate storage tiers) or server and storage combinations (i.e. virtual storage appliances).

In the future, storage needs to become VMDK-aware and to understand VM-based policies. It also needs to support key encryption to deal with multi-tenancy and security.

Enablers for these transitions are multi-site global identities for the VMs and long-distance vMotion for site-to-site load balancing.

Some considerations for desktop and storage architectures include new problems, like the anti-virus storms brought about by VDI. Challenges include cost and scale (a 100,000-VM deployment should be as easy as the deployment of the 10th VM). A stateful experience must also be delivered to the users.

Storage is moving to cloud based application centered management. Traditional storage solutions prevented applications from scaling. This was helped by scale-out or clustered applications but this involved development coding with architecture in mind and restricted application designs.

VMware believes the storage of the future will be a blob store (hmmm, not too sure about this concept). Essentially it will lack structure, the complete opposite of the RDBMS model, and instead will be a series of datastore services, likely made up of a collection of infrastructures both internal and external to an organization. The benefit of this architecture is low OpEx, as there is no structure to manage.



- Posted using BlogPress from my iPad

VMworld 2010 Reporting: Cisco Nexus 1000V

Cisco has shipped over 1 million virtual Ethernet ports to date. The switch is built on Cisco NX-OS and is compatible with all switching platforms. The infrastructure is made up of a Virtual Supervisor Module and Virtual Ethernet Modules, based on the Nexus 7000 framework. Cisco is looking to extend its framework using a Virtual Service Node. The VSM has a virtual appliance form factor.

Customers have been asking for networking services at the kernel level vs. the guest OS level to improve performance. Cisco has started down this path by introducing virtual service domains. Virtual service domains define a logical group of vms protected by a virtual appliance.

This year Cisco is introducing a new architecture: vPath. Network packets are redirected to virtual service nodes to enforce policies, for example pushing communications through a firewall. Virtual service nodes can support multiple ESX hosts, eliminating the appliance-per-host model. These redirect policies are cached on the Nexus 1000V to reduce the network overhead. Cisco's first implementation of this architecture is the Virtual Security Gateway, a firewall architecture that can be deployed in an active/standby configuration. To manage a combined Virtual Security Gateway and Nexus 1000V architecture, a new administration point is available: the Virtual Network Management Center. You can now manage multiple zones, network and security, so that policy can be enforced across both. This environment allows you to set up different SLAs for network bandwidth consumption on a group of VMs. In addition, port mirroring is supported to enable traffic analyzers; this enables troubleshooting in a multi-tenant environment without exposing everyone's traffic, and lets you get very granular when you pipe out network traffic in a cloud environment.
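
A conceptual sketch of the vPath idea (my own toy model, not Cisco code): the first packet of a flow is steered to the service node, and its verdict is cached on the virtual switch so the rest of the flow stays on the fast path.

```python
# Conceptual sketch of vPath-style flow offload (illustrative, not Cisco code).
def firewall_service_node(flow):
    """Stand-in policy engine: permit web traffic, deny everything else."""
    src, dst, proto, dport = flow
    return "permit" if dport in (80, 443) else "deny"

flow_cache = {}  # (src, dst, proto, dport) -> verdict cached on the virtual switch

def switch_forward(flow):
    if flow not in flow_cache:                    # slow path: redirect to service node
        flow_cache[flow] = firewall_service_node(flow)
    return flow_cache[flow]                       # fast path for subsequent packets

flow = ("10.0.0.5", "10.0.1.9", "tcp", 443)
print([switch_forward(flow) for _ in range(3)])   # one policy lookup, three forwards
```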

Cisco has been working with several partners to extend vMotion across long distances. Cisco refers to this development as Overlay Transport Virtualization (OTV).

Nexus 1000V Myths

- Nexus switching is based on proprietary Cisco standards. No, it is based on open standards.

- It only works with Nexus switching. No, it works with any Ethernet switch.






- Posted using BlogPress from my iPad

VMworld 2010 Reporting: vCloud Director

The goal of this session is to review the complete VMware stack used to provision a business service. The session introduces the vBlock, which is the ability to buy turnkey infrastructure. It can be purchased as a low-end to high-end solution and bundles vSphere, vCloud Director, Cisco blades and networking with CLARiiON storage and a SAN switch. The idea is to provide predictable facilities, performance and fault tolerance. Additional vBlocks can be purchased to add scale and capacity. The basis of the vBlock model is to buy a service platform and not concentrate on the components.

The provisioning piece of the vBlock is handled by EMC Ionix Unified Infrastructure Manager. A series of templates is provided for configuring network, compute and storage resources; for example, the ESX build can be pushed along with the storage and networking environments. The state information for the blades is stored in a service profile so that the blades themselves are stateless. Once associated, a blade takes on the appropriate identity.

Using vCloud Director with vCenter Chargeback and vShield for security, you can provision virtual datacenters (vDCs). To provision, you start with an organization to which you associate vDCs. The vDCs can then be subdivided into service catalogs (collections of VMs, templates and ISOs that deliver applications). In addition, you have organizational networks which tie the vDC to the physical network layer.
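
To picture the hierarchy described above, here is a rough data-structure sketch; the organization, vDC, catalog and network names are invented.

```python
# Rough sketch of the provisioning hierarchy (all names invented).
org = {
    "name": "AcmeCorp",
    "vdcs": [
        {
            "name": "Gold-vDC",
            "catalogs": [
                {"name": "Web Stack",
                 "items": ["web-template", "db-template", "tools.iso"]},
            ],
        },
    ],
    "org_networks": [
        {"name": "acme-external", "vdc": "Gold-vDC", "backed_by": "physical-vlan-120"},
    ],
}

def deploy(org, vdc_name, item):
    """Deploy a catalog item into one of the organization's vDCs."""
    vdc = next(v for v in org["vdcs"] if v["name"] == vdc_name)
    assert any(item in c["items"] for c in vdc["catalogs"]), "item not in catalog"
    return f"Deployed {item} into {vdc['name']} for {org['name']}"

print(deploy(org, "Gold-vDC", "web-template"))
```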




- Posted using BlogPress from my iPad

Tuesday, August 31, 2010

VMworld 2010 Reporting: SRM Futures: Host Based Replication

Note: In the next major release fail back will be supported. During fail back only the vms that were failed over initially will be failed back.

Site Recovery Manager of Today

- Simplified and automated testing and failover.

Site Recovery Manager of the future

Host Based Replication (HBR)
HBR provides replication between different storage vendors and local storage. Replication is managed as a property of the virtual machine or of a larger group, and is handled at the ESX layer. You can opt to replicate all disks or a subset of disks for a VM. HBR allows you to "sneakernet" an initial copy in order to jump-start replication. You set the RPO (Recovery Point Objective) on a per-VM basis, down to 15 minutes. HBR is based on replication of delta disks between sites.
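
A quick back-of-the-envelope check (my own arithmetic, not from the session): to hold a given RPO, the changed data produced during one interval has to cross the replication link within that interval.

```python
# Back-of-the-envelope RPO check (my arithmetic, not VMware's).
def min_link_mbps(change_rate_gb_per_hour: float, rpo_minutes: float) -> float:
    changed_gb = change_rate_gb_per_hour * (rpo_minutes / 60.0)  # data per interval
    seconds = rpo_minutes * 60.0
    return changed_gb * 8 * 1024 / seconds                       # GB -> megabits

# Example: a VM changing 2 GB/hour with a 15-minute RPO
print(f"{min_link_mbps(2, 15):.1f} Mbps sustained, before any compression")  # ~4.6
```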

ESX watches the lower-level SCSI traffic and sends the changes. FT-enabled VMs will not be supported when the product ships. The product is expected to ship next year.

There is no log management of applications inside the VMs.

The framework for HBR involves an HBR agent installed on the ESX host at the primary site. A group snapshot of all disks associated with a VM is taken to ensure crash consistency across the entire VM. The second component is an HBR Management Server (HMS), a Linux-based VM deployed at the replica site. The VM is controlled through the SRM tab, so there is no need to go to the HMS interface. Multiple HBR servers can be deployed for scale, but one will be typical for most customers, with one HMS per vCenter. The HMS has a database (internal or external, TBD) to keep track of the linkages. Many-to-one site replication scenarios are supported. The first release supports powered-on VMs; replication stops on powered-off VMs. Physical RDMs are not supported for HBR because ESX needs to see the SCSI transactions.




- Posted using BlogPress from my iPad

VMworld 2010 Reporting: Keynote

The cloud is a collective of computing resources.

Rick Jackson; Chief Marketing Officer

There are 17,021 attendees at VMworld.

This year's event is being serviced by a hybrid cloud, generating 4,000 VMs per hour; by the end of the week this number will be in the 100,000 range. In addition, Rick announced the VMUG program has been formalized and now has a board of directors. Attendees were encouraged to join a local chapter.

The tag line for this year's event is virtual roads, real clouds. As customers adopt virtualization there are three distinct phases: IT production (cost savings); business production (unprecedented reliability); and phase 3, SaaS or IT as a Service. VMware is the ideal solution for SaaS because they provide an open framework by supporting industry standards, i.e. OVF and vAPI.

IT as a Service = optimizing IT production for business consumption. Customers must move from phase 1&2 to phase 3.

Paul Maritz, CEO

Paul reviewed the phases; IDC reported the turning point, with VMs out-shipping physical servers. This year 10 million VMs will be deployed. Paul thanked the audience for this collective achievement, then segued into theory. More traditional operating systems are being deployed without visibility into the actual hardware. The innovation of the future will be done at the virtualization stack. The requirements are automation to decrease OpEx, and integrated security. These are the themes for the innovation that will occur.

Another factor is how the virtualization stack will be paid for. On whose books will this expenditure sit? Movement between private and service provider clouds requires work and adherence to a common set of standards. It must allow movement to and from a public or private cloud.

But are old apps on new infrastructure enough? Customers cannot be stuck with a monolithic application that cannot be upgraded or developed properly. The industry has responded by delivering new open frameworks and tools (SpringSource, HTML5) for application development.

This will change the traditional operating system, as a general-purpose operating system has too much overhead for what it is required to do. It is now just a piece of a larger system.

The other change is the integration of SaaS apps into the business infrastructure. They are coming in "uninvited" and IT will need to figure it out. Also there is a proliferation of new OSes and hardware profiles and once again IT will need to make sense of it all. This introduces a new requirement for innovation in the end user environment. The reality is IT cannot keep pace with these changes.

This new area of innovation will lead to a new stack in the end user environment. This is a change that will impact all of us and is inevitable with or without VMware according to Paul.

This introduces Stephen Herrod's product announcements. He starts with a review of virtual infrastructure, referring to VI as the "virtual giant". The virtual giant has these properties:

- Open
- Automation
- Elastic resources
- Efficient pooling of resources

vSphere provides incredible scale. There was a strong focus on vMotion in the 4.1 release, allowing more VMs to move with fewer resources; the issue this technology addresses is scale. The other features of 4.1 are storage and network I/O controls: think shares based on network and storage properties (I will cover this in depth in a different post). The vStorage API offloads some capabilities back to the storage hardware.
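
To illustrate the shares idea (the numbers are mine, not VMware's): under contention, capacity is divided in proportion to each VM's shares.

```python
# Toy illustration of shares-based I/O control (values are illustrative).
def allocate(capacity_mbps: float, shares: dict) -> dict:
    total = sum(shares.values())
    return {vm: round(capacity_mbps * s / total, 1) for vm, s in shares.items()}

print(allocate(1000, {"prod-db": 2000, "prod-web": 1000, "test": 500}))
# {'prod-db': 571.4, 'prod-web': 285.7, 'test': 142.9}
```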

Stephen then starts a review of vCenter: capacity, configuration, disaster recovery and compliance features. VMware has acquired Integrien, which provides management through proactive analytics: predicting what is going to happen based on metrics.

Stephen contrasts his IT use at home, which allows quick consumption of new apps, with the pace of IT at work. IT must now consider how business consumes services. VMware presents their App Store vision. The infrastructure is made up of virtual datacenters (vDCs). A vDC allows the business to move a service center to a third-party hosting center. A new product was announced based on project Redwood: vCloud Director.

There is also an element of security, which leads to new product announcements:

VMware vShield EndPoint
VMware vShield Edge
VMware vShield App

To assure compliance from a security perspective VMware also introduced a new certification program "vCloud Datacenter service".

The demo focused on a portal presenting a personal service catalog to a user. Next the management interface is introduced: you can manage multiple vCenters that represent provider vDCs, and connect public and private clouds through vShield. The user is unaware of where these resources are running.

Next point is how applications will be written in the near future. VMware introduces vFabric. vmForce was highlighted as a co-development between Salesforce and VMware. Hyperic was mentioned as the management system between the application stack and the virtualization stack.

End user computing and View 4.5 were discussed. Offline mode and new reference material on how VDI reduces acquisition costs were referred to. Project Horizon was introduced. Single sign-on was demoed, and an acquisition was announced to bring this capability into VMware's portfolio.

VMware View Client for the iPad was demo'd. It showed both VDI and applications integrated into the device with a very Apple look and feel.












- Posted using BlogPress from my iPad

VMworld 2010 Reporting: Security and the Cloud

The session introduced the tenant-in-control concept: how can the cloud provider assure customers their assets are safe?

Issues for IaaS (infrastructure as a service)

Hyperjacking - installing a rogue hypervisor to take complete control. Examples are Blue Pill/SubVirt experiments.

White paper on cloud attacks: http://cseweb.ucsd.edu/~hovav/dist/cloudsec.pdf

Regulatory requirements for cloud and virtualization are being actively developed, for example by NIST.

Rick Brunner talked about booting from a secure chain of trust rooted in hardware. The concept is to establish and validate both hardware and software. The new generation of CPUs (Intel Westmere) ships with Trusted Execution Technology (TXT). TXT takes secure measurements of all software and stores them in the Trusted Platform Module (TPM), making the system tamper-proof to prevent attacks. The TPM provides secure storage on the physical server.

vSphere ESXi supports TXT (it is not supported in classic ESX). vSphere sends the TPM measurements to vCenter, and vCenter allows applications to take advantage of this through an API. vCenter is the control point: can I move VMs to this hardware, is it trusted?

All this is good, but it is not a sufficient means of ensuring security and compliance. Customers should follow the vSphere hardening guidelines in addition to considering TXT.

This leads into a presentation from RSA, the security division of EMC.

In the 'demo', RSA enVision is used to query vCenter to ensure compliance. EnVision sends the information through the Advanced Data Management Layer to the RSA Archer eGRC platform.

The use case for this technology is more complex than just firewalling VMs. The use case presented is "ensuring FISMA VMs are executing on US-tagged resources". TXT is enabled in the BIOS of the hardware and a geotag is written to the TPM on the host. You enable tboot under the advanced properties of the vSphere host to ensure a trusted boot is performed.

A policy is applied at the cluster level and inherited by the virtual machine. The demo showed the customized version of RSA Archer. You can look at your FISMA compliance chart to determine the level of compliance across your Virtual Infrastructure. You can also look at compliance over a period of time.

As a cloud provider you can tier based on security offerings. For example a Gold standard complies with FISMA.

This is a solution that integrates VMware, Intel and RSA to solve the security problems associated with utilizing cloud resources.









- Posted using BlogPress from my iPad

Monday, August 30, 2010

VMworld 2010 Reporting: The Future Direction of Networking Virtualization


Howie Xu, R & D Director, Virtualization and Cloud Platform

This presentation is visionary in nature, with no commitment to product delivery. Howie noted a trend of more and more networking professionals attending VMworld. Additional trends are also impacting networking:

- Virtualization and mobility
- Convergence in platforms between servers, storage and networking
- Cloud economics
VMware sees the cloud as a way of doing business, not a destination. Cloud involves increased efficiency and flexibility.

In VMware's own platform they have progressed from a managed virtual switch to a distributed switch, with a distributed "virtual network" envisioned for the future. The properties of a virtual network are access to anything, anywhere, anytime and at any scale. The cloud should also not be a second-class citizen with respect to networking; it must offer an equivalent quality of service. Anytime is about closing the gap between deploying a virtual server and the time robust networking services are applied to the virtual machine. Any scale is about scaling up, down, horizontally and vertically, economically.

You need therefore to decouple the workload from a static networking configuration. Today network managers struggle to adapt to a much more dynamic environment. It is unlikely that IT groups can build technical teams in the current market to deal with this additional level of complexity. So how? The only solution is to liberate IT resources from the drudgery of networking support to enable them to become more strategic.

Coordination of L2-L7 services is currently human resource intensive. The network is also not very transparent. This problem has existed for a while but the demand for cloud economics is making it a bottleneck to flexibility.

VMware's customers want the network to become transparent. This leads into the concept of the virtual chassis, or vChassis. Think of a typical blade enclosure that includes modules and plug-ins for storage and L2-L7 networking services.

vChassis

VMware provides a platform and allows their third-party partners to plug into it, similar in concept to the integration of third-party network switches into a blade enclosure. There are three planes: data, management and control.

In order to provide this, networking must extend its capabilities to enable instantaneous service provisioning, visibility and policy enforcement, elasticity and scalability, and multi-tenancy.

Think about plugging in a distributed traffic shaper through a control plane that extends across the entire virtual infrastructure to provide custom data plans on a per-VM basis. Sitting above this would be a policy-based management solution.

The vChassis can do 10 Gb line rate to a VM using a small part of the CPU, but it needs to be tied to a control point to manage this capability.

Networking technology was designed for a static environment. L2 has to be scalable, flexible, and include multi-tenancy.

VMware is working closely with their partners, but it is not easy, as things like backwards compatibility have to be considered.

The value of this development is to allow third parties to develop on, certify against and sell to VMware customers. VMware takes advantage of this themselves through vShield. This will enable a new generation of cloud-enabled services.

The foundation of this currently is the distributed network switch and the vNetwork API; the future is the vChassis and the virtual network.




VMworld 2010 Reporting: Future of End User Computing

Mobile devices are pulling apart the traditional desktop; for example, many apps are already running from the public cloud (think salesforce.com). The key to enabling users is to deliver a better, always-on, customizable user experience. How then can today's IT deliver consumer-level simplicity with enterprise-level compliance and security? So what to do? Doing nothing will cost you on many fronts, so we have no choice but to deliver. According to VMware, you need to embrace cloud computing and address these end user requirements. VMware sees this as a three-stage journey: modernizing Windows, unifying application management (cloud and local apps) and then collaborating in the cloud.

ThinApp is key to this modernization because it is an app virtualization and migration tool not just an app virtualization tool (applicable in Windows 7 migrations).

PCoIP is also key; a real strength of PCoIP is that it is UDP-based. Fundamentally, PCoIP does not impose overhead through packet retransmits like TCP.

VDI is more complex, as the OS is constantly changing depending on what the user is doing. In addition, local mode in View 4.5 was highlighted, with its type-2 solution (Windows on Windows, and Mac on Mac soon), as a Bring Your Own Computer (BYOC) solution; it does not require the reload that a client-side hypervisor requires. View 4.5 supports tiered storage APIs. A new product for offloading anti-virus in VDI was also announced (vShield Endpoint).

- Future trends: diversity (hardware platforms, OSes and apps)
- HTML5 cache, zero-touch clients, native video and GPS capabilities
- Application frameworks: apps are being constructed through web frameworks (i.e. Google Earth, SaaS)
- Corporate IT is 50%/50% (salesforce.com)
- Connectivity is ubiquitous

It's about the applications, which leads the presenter into talking about Zimbra. People see it as email, but it is a SaaS framework.

New product announcement: ThinApp Factory, a potential Dazzle competitor. Codename Project Horizon is based on ThinApp and policy-based delivery of applications to users.

A native PCoIP client for the iPad was discussed but not demoed; it will be shown during one of the keynotes.

What is collaboration in the cloud? Blending hosted and non-hosted apps into a single endpoint, auto-managed and delivered to the users.

VMware's value add is that they are not building a SaaS that assumes the backend is a proprietary datacenter (i.e. Google). Their technology allows the integration of several data centers behind a common framework.








Wednesday, May 5, 2010

WAN Optimization

Ensuring you have a good methodology for promoting WAN Optimization changes to your production network is an absolute must to move from tactical to strategic use of the technology. While the details may vary depending on the vendor solution that is deployed, having a clearly defined process is critical.

The majority of our customers deploy their WAN Optimization solutions inline rather than logically inline or with hardcoded path statements (often referred to as out-of-path deployments). As such, the potential for unplanned network outages is always a risk when promoting net-new optimization configurations to your appliances.

I recommend setting up a lab that emulates production as closely as possible. With smaller appliances designed for branch optimization now cheaper than ever, it does not have to cost a fortune to build a lab that represents your production environment. In addition, much of the network hardware that used to have to be physical can be virtualized to reduce the cost and size of your lab.

WAN Optimization technology has the potential to benefit many transport protocols, applications, data streams and even server consolidation projects within an organization. There is a tendency, however, to isolate this technology within the networking team. While this makes sense from an operations perspective, it does nothing to empower other parts of the IT team to take advantage of the capabilities of WAN Optimization. For example, Riverbed supports encrypted MAPI, but without the awareness of the Exchange team it is unlikely to be a factor in the design of the email system.

Part of the process should use the lab to engage your Subject Matter Experts (SME’s) to demonstrate the benefit of WAN Optimization as it relates to their technology (File Services, Storage, Messaging, Web Services, etc.). In addition you can reduce the overall risks in promoting changes to production by having them validate the tests that you are using to prove the solution. As with any technology you wish to promote within your organization, the more people who recognize the benefits, the more likely that it will factor into new designs, consolidations or enhancements to existing technology.

As we move more and more to providing service-driven architecture, it is a great idea to capitalize on technology that improves the overall end user experience, like WAN Optimization. It is even more important to ensure that your IT team is familiar with the technology so it can benefit the entire organization.

Wednesday, April 28, 2010

DPM & Hyper-V

Data Protection Manager (DPM) is the extension of Microsoft Backup, designed for the enterprise.  It is based on volume shadow copies which create a ‘point in time’ snapshot of a Microsoft Volume.  Microsoft provides the ability to take a full copy and then perpetual scheduled snapshots across a server and desktop environment.  With DPM you can schedule and manage this activity centrally. 

Shadow copies are dependent on a Volume Shadow Copy writer in order to 'quiesce' the read/write activity to the volume and ensure data is consistent. Additional shadow copy writers have been written by Microsoft for applications such as SQL, SharePoint, etc. Because the shadow copy writers are application-aware, they ensure the application's data is crash recoverable. The difference between crash consistent and crash recoverable is whether or not the data in transit is consistent with the data on the volume; if the snapshot is crash recoverable, the data is consistent.

When a snapshot is requested at the Hyper-V host level to take a point-in-time copy of a running virtual machine, the snapshot is crash consistent. When it is initiated by the operating system within the virtual machine using the application-specific shadow copy writer, the data is crash recoverable. This is why agents are typically run at both the Hyper-V host level and within the virtual machines.

According to Microsoft, the estimates for a single DPM 2010 server are as follows (a quick capacity check against these numbers is sketched after the list):

  • 100 Servers
  • 1000 Desktops
  • 2000 SQL databases
  • 40 TB Exchange Datastores
  • Managing 80 TB of disk space
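
As a quick sanity check of a planned deployment against those per-server estimates (the planned figures below are hypothetical):

```python
import math

# Per-server estimates from the list above; the planned figures are hypothetical.
DPM_2010_LIMITS = {"servers": 100, "desktops": 1000, "sql_databases": 2000,
                   "exchange_tb": 40, "managed_disk_tb": 80}
planned = {"servers": 60, "desktops": 1200, "sql_databases": 300,
           "exchange_tb": 10, "managed_disk_tb": 35}

dpm_servers_needed = max(math.ceil(planned[k] / DPM_2010_LIMITS[k]) for k in planned)
print(f"Plan for at least {dpm_servers_needed} DPM server(s)")  # -> 2 (desktop count)
```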

Friday, April 23, 2010

Microsoft System Center Desktop Error Monitoring (DEM)

 

Enabled through error reporting on Windows desktops; the idea is that all the information that you could forward to Microsoft (watson.microsoft.com) is available to the internal IT team.  DEM is available under MDOP which is provided as part of Software Assurance (SA).  DEM facilitates low-cost monitoring of application crashes.  DEM is enabled through GPO and does not require an agent. 

According to Microsoft studies 90% of users reboot if they get an application crash vs. calling the help desk.  This translates to lost time, possible lost data and still the underlying problem remains unresolved.  DEM allows you to collect data and bounce it off Microsoft's Knowledge base. 

The system requirements are a management server, reporting server and SQL database.  Overall utilization is light as the purpose of these components is to collect not query the information.  Through GPO you redirect the errors from watson.microsoft.com to your internal DEM server. 

One of the interesting things you can do is turn off the user prompt that asks the user to send the data. DEM is built on the same framework as System Center Operations Manager; however, it is rights-restricted to 'agentless' desktop error monitoring only. Because it is essentially OpsMgr, you can also alert on crossed thresholds. DEM categorizes all the error messages automatically to allow you to easily check the version information of the applications and their associated DLLs. You can also create a custom error message response on the alert, or collect additional information like dumping information from a file. You can report on the number of application crashes across the organization. You can then take these batched dumps and send them to Microsoft, who will query them against the knowledge base and respond with a link if it is a known issue.

Bonus Tip: Do not configure DEM to send full memory dumps from the desktop as there is a significant increase in the amount of data traversing the network.

It’s about the Service not the Servers


The role of the IT administrator is evolving from proactive support (although some of us have very reactive environments) to service automation. Microsoft's framework for enabling IT administrators to implement service models is System Center, Service Manager and Opalis. Opalis is a workflow engine that Microsoft recently acquired, and it allows you to automate System Center components. The new challenge for IT administrators is to be able to create a "Service Definition". A service definition includes all tiers of a business application, the optimal performance or baseline of the business application, and the knowledge to remediate problems if they occur. So what does this mean in the context of System Center? Customizing management packs in Operations Manager, designing workflow logic in Opalis and defining remediation events in Configuration Manager to react to changes in the environment. This adds significant complexity to the existing IT environment; however, it is clearly a strong focus in the software industry, as vendors all build on existing workflow frameworks (i.e. VMware Lifecycle Manager, Citrix Workflow Studio). Clearly this will require a reorganization of IT teams to bring application knowledge, infrastructure, network expertise and process understanding closer together to map all the pieces and processes surrounding a business application. Once modeled or mapped, the logic can be defined and the appropriate processes can be automated. Having worked with customers in traditional or silo'd IT environments to complete this business application mapping, I can say it is very time consuming, as often the information rests with individuals in an organization and not in well-documented business processes. Even though this will be challenging, internal IT teams cannot afford not to evolve, as the software vendors continue to push for this level of automation and Cloud services continue to mature.
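
One way to picture a service definition in code (an illustrative structure of my own, not a System Center schema): every tier of the business application, its baseline, and a remediation hook.

```python
# Illustrative "Service Definition" structure (not a System Center schema).
service_definition = {
    "service": "Order Entry",
    "tiers": [
        {"name": "web",  "servers": ["web01", "web02"], "baseline": {"response_ms": 200}},
        {"name": "app",  "servers": ["app01"],          "baseline": {"cpu_pct": 60}},
        {"name": "data", "servers": ["sql01"],          "baseline": {"io_latency_ms": 15}},
    ],
    "remediation": {
        "response_ms":   "recycle the web tier via a Configuration Manager task",
        "io_latency_ms": "open an incident and run the storage diagnostics workflow",
    },
}

def breaches(tier: dict, metrics: dict) -> list:
    """Return baseline metrics this tier is currently exceeding."""
    return [k for k, limit in tier["baseline"].items() if metrics.get(k, 0) > limit]

print(breaches(service_definition["tiers"][0], {"response_ms": 450}))  # ['response_ms']
```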

Wednesday, April 21, 2010

Microsoft Management Summit 2010

Notes on the keynote by Brad Anderson, Corporate Vice President, Microsoft Corporation

Windows 7 has been the fastest selling operating system in history, according to Microsoft. This has led to a renewed focus on providing a comprehensive desktop management strategy to Microsoft’s customers.

Configuration Manager 2007 R3 delivers power management, a.k.a. “the greening of IT”. Configuration Manager now has visibility into power consumption metrics. Once tracked, the configurations can be adjusted and a report generated to show power conservation gains. Clients are simply enabled so that kilowatt-hours can be tracked. Through the management console you can enable power management policies which adjust power behavior on the client side. In addition, Configuration Manager allows you to put in a cost associated with the power consumption. Once you enforce policies you can report on CO2 savings across the entire organization. Brad makes the point that these reports can be used to save money and to market a green message to customers. The beta of this technology is available today.
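
The math behind these reports is simple enough to sketch out; all of the figures below (per-desktop savings, electricity rate, emission factor) are assumptions for illustration, not Microsoft's numbers:

    # Back-of-the-envelope power savings calculation, similar in spirit to the
    # Configuration Manager R3 reports. All figures are assumed examples.
    desktops = 5000
    watts_saved_per_desktop = 30     # assumed average reduction from sleep policies
    hours_per_year = 8760
    cost_per_kwh = 0.10              # assumed electricity rate in dollars
    kg_co2_per_kwh = 0.5             # assumed grid emission factor

    kwh_saved = desktops * watts_saved_per_desktop * hours_per_year / 1000
    print(f"Energy saved: {kwh_saved:,.0f} kWh/year")
    print(f"Cost saved:   ${kwh_saved * cost_per_kwh:,.0f}/year")
    print(f"CO2 avoided:  {kwh_saved * kg_co2_per_kwh / 1000:,.1f} tonnes/year")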

Brad makes the point that you do not want two separate management solutions for your distributed desktops and your centralized VDI solution. He hammers home the point that everything should converge under a single management framework (if I did not know better I’d say this was a thinly veiled criticism of VMware’s platform).

Citrix was named by Microsoft as the leader in the VDI space, and Brad mentioned that Microsoft and Citrix meet quarterly to ensure their VDI strategies align. At this point Bill Anderson is brought out to show an integrated view of Configuration Manager and XenApp (oddly not a XenDesktop/Hyper-V/Configuration Manager demonstration). The goal is to increase the level of automation between Configuration Manager and XenApp. During the demo an application is deployed and published from Configuration Manager to the XenApp server environment. The demo then switches to a remote desktop running Dazzle, where the application is launched.

Interestingly, Brad challenges the audience to have lunch with their internal Citrix administrators. The next part of the message focuses on VDI and the partnership with Citrix. In Windows Server 2008 R2 SP1, RemoteFX and Dynamic Memory are delivered to optimize the platform for VDI. Dynamic Memory is Microsoft’s version of memory oversubscription: memory is now allocated as a range vs. a static amount. RemoteFX is built on the Calista technology acquired by Microsoft and is designed to deliver rich multimedia.

The next demonstration focused on Microsoft Forefront, Microsoft’s security framework, which plugs into System Center. The beta allows endpoint protection to be enabled and configured through Configuration Manager. Interestingly, the install of the Forefront agent uninstalls other security products (now I know why the antivirus vendors made such a fuss about Windows 7). Forefront provides complete reporting on how compliant the environment is from a security perspective.

Brad then started talking about Desktop as a Service (DaaS) and how every desktop strategy should incorporate the ability to move desktops to the cloud at some point. Windows Intune was re-announced at the keynote as the go-to-market strategy for System Center Online. It will not provide feature parity with System Center in its initial release (please refer to my Intune post for additional details).

Microsoft’s next announcement focused on commoditizing the compliance industry, which is a hefty challenge indeed. Microsoft’s strategy is to translate industry compliance rules into a set of policies and templates that can be applied across an IT environment and then reported on. It was at this point we watched a demo in which Service Manager took the PCI compliance standard, which Microsoft had translated into an audit of the systems that had to comply and a series of configurations to ensure compliance. As anyone who has worked in compliance knows, translating compliance requirements into actionable IT tasks that can be applied across an organization is generally a very labor intensive process. Of course, by applying a set of policies Microsoft is tackling compliance from a systems perspective, but this can be a large piece of the compliance requirement.

Microsoft is providing betas of many of these products now, but general release is slated for 2011.

MED-V; Enabling OS Migrations?


I have always wondered about MED-V and the use cases for it. It has always seemed like a product that missed its window in time. In the early years of virtualization deployments, VMware Workstation was used to contain legacy Windows OSes to enable desktop migrations. I seem to recall an old case study where Merrill Lynch used this strategy successfully. Microsoft positions MED-V as a key tool for migrating from Windows XP to Windows 7.

MED-V is a management layer that sits on top of a distributed Virtual PC environment. It is very reminiscent of the original VMware ACE architecture and, to be quite frank, that product struggled to find opportunities as well. Why not just use App-V, XP Mode in Windows 7, Terminal Services or VDI? Well, Microsoft’s argument is that if you have a number of legacy applications that are not OS compatible and you have a large number of desktops, MED-V is a good choice. While this makes sense if I have desktops with a reasonable amount of CPU and memory, it seems to be a technology where the stars have to completely align for it to prove itself.

MED-V can apply policies to the distributed virtual instances to make the image read-only or read/write. The naming of the VM, its network profile (IP, DNS) and its resource utilization (memory) can all be centrally managed, and the applications in the guest VM can be blended into the Start menu of the host desktop. Microsoft recommends Configuration Manager to push the MED-V images, as the default deployment mechanism is not as robust.
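
Conceptually the centrally managed settings boil down to a small policy document per workspace; something along these lines (the structure and field names are purely illustrative, not the actual MED-V policy format):

    # Illustrative sketch of the kinds of settings MED-V manages centrally.
    # The structure and field names are assumptions, not the real MED-V schema.
    medv_workspace_policy = {
        "image_mode": "read-only",            # or "read-write" (revertible vs. persistent)
        "vm_name_pattern": "MEDV-{username}",
        "network": {
            "mode": "bridged",
            "dns_suffix": "corp.example.com",
        },
        "memory_mb": 1024,
        "published_applications": [           # blended into the host Start menu
            "Legacy Finance App",
            "IE6-only Intranet Tool",
        ],
    }

    for app in medv_workspace_policy["published_applications"]:
        print(f"Publish '{app}' from the guest VM to the host Start menu")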

Here is my take on the ideal situation for MED-V: a Windows 7 deployment on new desktop hardware, with ten or more legacy Windows XP applications that are incompatible with Windows 7, in an enterprise environment where Configuration Manager is deployed.

Windows Intune; Online Services for PC Management


Intune builds on a strategy of taking Microsoft products and cloud-enabling them.  Intune is for customers who have not deployed System Center onsite and is only licensed for desktop management at this time.  Intune allows you to avoid cost and complexity by NOT implementing on-premises management.  The target audience for Microsoft Intune is the mid-market customer. 

Intune is available through Microsoft Business Online Services.  If you subscribe to the service you will be able to deploy the latest version of Microsoft's desktop operating system to help standardize the user environment.  Subscribers will also get access to MDOP and all its tools (i.e. the Diagnostics and Recovery Toolset for image and password recovery).  It is recommended that you configure the following as part of the initial enrollment:

  • Product update classifications
  • Auto-approval rules for patching
  • The agent policy
  • Alerts and notifications

Communication is secured through certificates: one for the initial setup and then one per desktop for ongoing management.  Intune has been tested on Windows 7, Vista and XP running the latest service pack.  For alerting, the SCOM management agents are used.  These management packs have been tweaked to be less chatty across the WAN. 

The console has been intentionally simplified to provide a fairly straightforward operational view.  The team that developed Intune worked internally on Windows Server Update Services (WSUS), so similar capabilities, concepts and simplicity in setup are apparent when browsing the interface.  The console was designed with “surfability” in mind. 

Intune will track license compliance and alert on license issues.  You can import licensing agreement information to cross reference license compliance.  The team expects to have asset tracking in the final release so that hardware and software inventory is available. 

In the initial release there is no concept of delegation; all users are essentially administrators.  Desktop policies are available and can be configured and deployed to the desktops.  The policies are limited in the first release but focus has been put on the most critical settings.  The application of policies has been intentionally simplified through the use of templates and wizards.  You have the flexibility of enforcing local or domain policies depending on whether the desktops participate in AD.

One of the interesting features is the ability to remote control machines through the integration of Microsoft Easy Assist.  This is end-user driven in the initial service offering, meaning the user initiates the request.  Due to the integration of System Center monitoring, you can configure notification rules to send an alert or message for things like an Easy Assist request.

Although Microsoft was intentionally vague about the road map for Intune it is clear that the service is being actively developed to bring new features to market quickly.  Demand for the initial Beta preview was so strong that Microsoft closed signup on the day the service was announced.

Citrix Essentials with Site Recovery

Citrix has continued to develop the feature set of Citrix Essentials to enhance the Hyper-V platform.  What is interesting is that they have announced Citrix Essentials Express, which is restricted to two host servers.  Included with Citrix Essentials Express is Site Recovery, which provides some interesting DR alternatives for businesses in the SMB space.  Because Citrix Essentials uses its StorageLink feature to provide visibility into the SAN layer to enable Site Recovery, the limitation is the number of SAN vendors providing StorageLink support.  Citrix has set a strategy of continuing to focus on reasonable DR automation alternatives for the SMB market, so expect the capabilities and vendor support to continue to evolve over time.

Tuesday, April 20, 2010

Microsoft’s Management Summit 2010

This year Microsoft invited me to attend the 2010 Microsoft Management Summit.  As we have noticed stronger interest in Microsoft technology amongst our customers since the release of Windows 7 late last year, I was delighted with the opportunity to go and review Microsoft’s virtualization and management strategies. 

The keynote by Bob Muglia (President, Server & Tools Business) started with a restatement of the core principles of Dynamic IT, which were laid out by Microsoft in 2003:

  • Service-enabled
  • Process-led, Model-driven
  • Unified and Virtualized
  • User-focused

Bob noted that many of the products in the System Center Suite have matured so that the reality of Dynamic IT can now be delivered.  Bob also drew a strong comparison between the principles of Dynamic IT and the requirements for Cloud Computing. 

The point was made that software development has largely been based on software models that originated with the developers within an organization.  With increased scale, the maturity of virtualization, and the need to properly stage code into production, Microsoft discovered that the IT organization had a stronger influence over the software model than the developers.  This background was used to introduce several recent or new integration points between System Center and Service Map, Visual Studio and its Lab Management feature, and Hyper-V.  Through this integration the demo focused on deploying a new software model consisting of several tiers (web, database etc.), visually represented in Service Map, onto a staging environment consisting of Hyper-V virtual machines.  Lab Management from Visual Studio 2010 was used to develop and validate test plans.  When an error occurred you had the option of taking a screen shot or capturing the state of the VMs making up the software model and emailing them to the development team.  Once the code was “corrected”, the final configuration was deployed using Opalis orchestration, which reminded me of VMware’s Stage Manager but seems to provide the flexibility of LifeCycle Manager or Citrix’s Workflow Studio. 

The keynote then laid out Microsoft’s message around Cloud computing and the lessons learned from the deployment of Azure and Bing.  These lessons are being used to fine-tune the next generation of software to be ‘cloud ready’.  Some references were made to software made up of multiple virtual containers that could scale up and down on demand.  This sounds much like the BEA Liquid VM development that was being done before Oracle acquired the company.

It was at this point that we got a sneak peek at the next release of SCVMM.  One thing I picked up was that XenServer is integrated into the management console.  Templates have been extended to include multiple virtual machines as a single application architecture.  SCVMM now integrates with the WSUS server for patching.  The library in SCVMM has been expanded to include App-V application packages, which allows templates to include both VMs and virtual applications.  This also simplifies scaling out additional VMs to meet demand, as applications are streamed into new application servers vs. natively installed or scripted into the images.

One interesting thing that was also demonstrated was the ability of System Center to monitor VMs in the Cloud, i.e. off-premises monitoring.  This was provided through a management pack for Windows Azure; it would seem that if it’s a Microsoft cloud, it will have a management pack.

This makes Microsoft’s foray into virtual lab automation interesting, as it is tightly integrated with Visual Studio and the hosted development platform, Azure.

Monday, February 1, 2010

XenDesktop and VMware View Versions


XenDesktop

Citrix makes XenDesktop available in four editions: Express, VDI Edition, Enterprise and Platinum. As XenServer has no cost associated with it, the hypervisor can be deployed with any edition. VMware View 4 comes in two editions: Enterprise and Premier. As we often talk to customers about the specific technical details of each platform, I thought it would be interesting to look at pricing and where the vendors have drawn the line between versions of their software.

Just a note of caution about this post: I am going to use list prices in Canadian dollars to make some observations. Please keep in mind that the information is more useful when you consider relative price (i.e. whether a vendor has priced its edition above the competition) rather than the actual price. The list price is of course subject to the exchange rate and to other competitive factors in the marketplace.

Citrix bundles XenApp with the Enterprise and Platinum editions of XenDesktop. VDI Edition and Enterprise share the same features with the exception of XenApp, which is not included in the VDI Edition but is included in Enterprise.

The features shared by both VDI Edition and Enterprise are:

  1. StorageLink
    • The technology that allows XenServer or Hyper-V to communicate directly with storage appliances
  2. Provisioning Services
    • The image management technology originally acquired from Ardence
  3. Profile Management
    • The profile management utility acquired from Sepago
  4. Workflow Studio
    • The process automation tool
  5. EasyCall
    • A telephone connection broker
  6. XenDesktop Connection License
  7. Merchandising Server
    • The new client deployment appliance from Citrix
  8. Access Gateway Standard
    • The SSL gateway's standard connection license

In the Platinum Edition of XenDesktop, Citrix bundles licenses from other product lines. Although this does provide additional value to the customer, the caveat is that the required backend appliance is not provided. In the past this made the bundling valuable to customers who had implemented, were planning to implement, or wanted a hedge against future deployments of other Citrix solutions. What has made the bundling more attractive is the release of virtual appliances by Citrix in some product lines. The bundling in Platinum includes:

  1. EdgeSight
    • The end node monitoring solution originally acquired from Reflectent
  2. Branch Repeater
    • The WAN acceleration appliance originally acquired from Orbital Data
  3. Access Gateway Universal License
    • The SSL gateway's universal connection license

VMware View

VMware View is available for purchase in two editions: Enterprise and Premier. Both editions share the core components necessary for building a VMware VDI environment. These components include:

        • vSphere
        • vCenter
        • VMware View Manager
                • The fourth-generation product developed from the acquisition of Propero
        • The PCoIP display protocol

The Premier edition also includes Composer (the linked clone technology), ThinApp (the application virtualization technology acquired from Thinstall) and the offline virtual desktop feature (currently listed as experimental in VMware View 4).

The price comparison for both platforms, from lowest to highest cost item, is as follows (note: prices are provided for comparison only):

  • XenDesktop Express: $0.00
  • Citrix XenDesktop VDI Edition: $110.00
  • VMware View Enterprise: $160.00
  • Citrix XenDesktop Enterprise: $230.00
  • VMware View Premier Edition: $260.00
  • Citrix XenDesktop Platinum Edition: $370.00

One of the important considerations when costing out the solution is the licensing model. As in a server based computing environment, licensing by user, device or concurrency can make a dramatic difference to the bottom line. VMware View 4.0 is licensed by concurrent user connection; it is enforced through the EULA (End User License Agreement) and not restricted by session at the connection broker. Citrix XenDesktop 4.0 offers licensing by user, device or concurrent connection.
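
A quick worked example shows why the licensing model can matter as much as the list price; the user count, concurrency ratio and per-license price below are assumptions, so plug in your own numbers:

    # Illustrative comparison of per-user/device vs. concurrent licensing.
    # The price, user count and concurrency ratio are assumed for the example.
    named_users = 1000
    concurrency_ratio = 0.6        # assume 60% of users connected at peak
    price_per_license = 230.0      # assumed list price per license

    per_user_cost = named_users * price_per_license
    concurrent_cost = named_users * concurrency_ratio * price_per_license

    print(f"Per-user/device licensing: ${per_user_cost:,.0f}")
    print(f"Concurrent licensing:      ${concurrent_cost:,.0f}")
    print(f"Difference:                ${per_user_cost - concurrent_cost:,.0f}")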

Tuesday, January 26, 2010

Thin Client or Desktop Appliances

An often under-considered component in a VDI deployment project is the thin client device or “desktop appliance”. Reducing the total cost of ownership in a virtual desktop environment often depends on removing the thick client device and replacing it with a desktop appliance. While the operational requirements of a desktop appliance are reduced, they still need to be considered and planned for as part of the deployment strategy. Desktop appliances come with an integrated operating system that may be Windows or Linux based. In addition, they may have image management solutions that need to be deployed, although for a proof of concept or a limited-scale environment imaging can usually be done by unlocking the device and shuttling the image around on a USB device.

One of the common problems with desktop appliances is that the integrated version of the desktop agent shipped on the device is typically not current enough to provide all the features of the VDI solution. In addition, the desktop agent supplied by companies such as VMware or Citrix may have additional requirements, such as Windows compatibility, that need to be considered before selecting a specific embedded OS for the desktop appliance. Desktop agents may not have feature parity between Linux and Windows, or may limit support to Windows derivatives only. Those desktop appliances that do provide feature parity and are Linux based often do so by using vendor-developed software. As these agents and features are not directly supplied by the VDI software vendor, they should be thoroughly tested. One thing that helps when selecting a desktop appliance is whether it has already been certified as working with the VDI vendor's solution (i.e. XenDesktop or VMware View certified). Enough time should be allowed in the deployment plan to understand how to manage the desktop appliance and how to apply upgrades to the embedded image. It is also useful to have surplus units available for ongoing operational support such as image testing or agent upgrades.

An interesting alternative developed for VDI is the no-software desktop appliance or thin client. These devices reduce the management overhead by running only firmware on the desktop appliance and moving the management to a centralized administration console (e.g. the PanoCube, although others are appearing on the market). While reducing operational overhead, these devices tend to be very vendor-biased and restrict the customer to certain platforms only. The other potential drawback is the possible physical replacement of the device for any major revision to the product line or feature set. These devices are designed for VDI only, so if the environment requires a blend of server based computing and VDI, a standard desktop appliance may be better suited. If matched to the right requirement these devices can substantially reduce the burden of management, so they are well worth considering.

Wednesday, January 20, 2010

Is VDI the right way to go?

I am going to combine a couple of thoughts here and add a little blue-sky thinking.  One thing I have noticed from dealing with various organizations at different levels of virtual desktop maturity is that there still seem to be a few barriers to 100% adoption across the entire organization.  I am generalizing, as things are not the same for every customer.  The real TCO for VDI is not substantially reduced until the PCs are replaced by thin clients (or desktop appliances), and that tends to be the sticking point for some.  Sometimes, as much as IT would love to move users to a lower support cost desktop alternative, the users or the business are reluctant to go.  This can be for various reasons, such as protectionism from the desktop support teams, people's general reluctance to change, or a misunderstanding of the technology being deployed, to cite a few.  In situations like this VDI tends to be used for second-desktop requirements and remote access. 

VDI provides the opportunity to manage the corporate image while at the same time providing very flexible options for delivering it to the user locally or remotely.  Although it is not exactly a consolidated environment (I am setting aside technologies like View Composer, Provisioning Server and storage virtual cloning for a moment), it is a centralized, distributed environment of desktops.  I have had the opportunity to look at a slightly different option recently and wanted to share some thoughts.  I have been reviewing Microsoft’s DirectAccess technology, which is a new feature of Windows 7 and Windows Server 2008 R2.  It goes along with my own thinking that technology should not change anything about the way the user works or plays; it should just do its job seamlessly. 

Now this approach from Microsoft is designed for the IPv6 world, although it will run over IPv4.  The fundamental opportunity that IPv6 promises is that everything is globally addressable.  What this means is that potentially all devices have unique addresses, unlike today where we use NAT to extend the lifespan of IPv4 networks.  Traditionally we use VPNs to connect devices remotely, which often adds overhead and delays to the login process.  Additionally, they are often dependent on user interaction to start them up.  DirectAccess automatically establishes a bi-directional connection from remotely located client computers to the corporate network using IPsec and IPv6.  It uses certificates to establish a tunnel to the DirectAccess server, where the traffic can be decrypted and forwarded to your internal network.  If you have deployed IPv6 and Windows Server 2008 internally the connection can be securely transported all the way to your application servers.  Access control is used to allow or restrict access.  The promise of this technology is that it allows you to extend your corporate network without changing the user experience or sacrificing how the desktop is managed.  It also makes your corporate network perimeter much more dynamic.  Essentially it allows you to overlay your corporate network in a secure fashion over private and public networks. 

Now make no mistake, this solution from Microsoft does presume that the end user device is a laptop and that it has been deployed and managed by IT services.  The reason I thought about the relationship between VDI and DirectAccess is that customers often deploy VDI for remote access to avoid a full VPN solution.  With DirectAccess integrated into Windows Server 2008 R2 and Windows 7, Microsoft has provided another option that might be a good fit in certain situations.

Monday, January 18, 2010

Application Encapsulation or Application Virtualization

One of the problems in distributed desktop environments is application lifecycle management: the testing, deploying, upgrading and removing of applications that are no longer needed. In addition, installing applications into a standard desktop image increases the number of images that need to be maintained. With every unique application workload a separate image is developed so that different users or business groups have the appropriate applications. This leads to desktops being segregated based on the types of applications; e.g. finance uses a finance image, marketing uses a marketing image, and so on. While manageable from a desktop perspective, it can lead to operational overhead in building, managing and maintaining the number of standard images.

In addition, as application incompatibilities are discovered, desktop images become locked to a specific build with static application and operating system versions.  In a terminal server environment this caused servers to be siloed based on application compatibility; on desktops it leads to a long refresh cycle. Application encapsulation, or application virtualization, was originally developed to solve these problems in terminal server environments, but it has been ported to the desktop space to deal with the same issues.

Application encapsulation is a form of virtualization that isolates the application in a separate set of files that have read access to the underlying operating system but only limited or redirected writes. Citrix XenApp application streaming leverages the Microsoft CAB file format (Microsoft’s native compressed archive format) for its encapsulated packages. VMware acquired a company called Thinstall, whose product (now VMware ThinApp) encapsulates the application into a single EXE or MSI. Once applications are repackaged for application virtualization they can be removed from the desktop image and either run from a file share as an EXE (VMware) or streamed to the desktop using RTSP (Real Time Streaming Protocol) (Citrix) to run from a cached location. By abstracting the applications from the image, the number of images that need to be maintained is reduced. In addition, depending on the software, applications can be delivered to users based on file or AD (Active Directory) permissions. The big benefit of implementing application encapsulation is that applications can be tied to users vs. the more traditional approach of installing them into a desktop image. It is common for organizations to over-license software by installing it on every desktop instead of just for the required users, simply to ease license compliance. Abstracting the applications from the desktop through virtualization allows the image to be truly universal, as a single image can be applied to all users.
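
To make the idea of redirected writes concrete, here is a toy sketch of the mechanism; real products hook the file system and registry with filter drivers at a much lower level, and the paths and logic here are purely illustrative:

    # Toy illustration of write redirection: reads fall through to the real
    # file system, writes land in a per-application sandbox. Real application
    # virtualization products do this with file system/registry filter drivers.
    import os
    import shutil

    SANDBOX = os.path.expanduser("~/.appsandbox/legacyapp")

    def sandbox_path(path):
        # Map an absolute path into the per-application sandbox.
        return os.path.join(SANDBOX, path.lstrip("/\\").replace(":", ""))

    def open_virtual(path, mode="r"):
        redirected = sandbox_path(path)
        if any(flag in mode for flag in ("w", "a", "+")):
            # Writes are redirected: copy-on-write into the sandbox.
            os.makedirs(os.path.dirname(redirected), exist_ok=True)
            if os.path.exists(path) and not os.path.exists(redirected):
                shutil.copy2(path, redirected)
            return open(redirected, mode)
        # Reads prefer the sandbox copy if one exists, otherwise the real file.
        return open(redirected if os.path.exists(redirected) else path, mode)

    # The application "thinks" it writes to a system location, but the change
    # actually lands under the user's sandbox directory.
    with open_virtual("/tmp/legacyapp.ini", "w") as fh:
        fh.write("last_run=today\n")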

Unless you have the same application workload for every business unit, you should consider application encapsulation or “application virtualization” to reduce the operational overhead of managing applications in a VDI environment. Encapsulation eliminates application interoperability problems and reduces the effort of deploying new applications. Because the applications are pre-packaged, the application configurations are centrally managed, lowering application support costs. While these technologies are available without desktop virtualization, they are more problematic to implement, as it is difficult to maintain a consistent desktop OS baseline in a physical environment even if user changes are restricted. Because of the consistent representation of physical hardware within a virtual machine, a standard desktop baseline is much easier to enforce in a VDI environment.

VDI presents the opportunity to effectively reduce the administrative burden of applications through the integration of application encapsulation. These solutions have been bundled by the vendors in a way that allows customers to easily incorporate this technology.  Keep in mind that when managing a VDI project you should allow ample time for the testing and integration of application virtualization technology.  The heavy lifting in deploying application virtualization is the repackaging of applications. 

