Thursday, February 13, 2014

Cloud Hybrid Service (vCHS): Advanced Networking & Security, Chris Colotti

The goal of this presentation is to understand the building blocks of vCHS and the networking requirements to build a Hybrid Cloud. vCHS is available as a Dedicated Cloud, which is physically isolated, or a Virtual Private Cloud, which is logically isolated.

vCHS is built on vSphere and vCenter, vCloud Director (vCD) and vCloud Networking and Security (vCNS) at this time. NSX is not part of the infrastructure yet, but as NSX adds functionality it is definitely something VMware is looking at closely. When you buy vCHS you get one external network protected by an Edge Gateway (EG). By default a routed network and an isolated network are created for you. VMware Cloud Service Providers running vCD will also have these services and features available.

The Edge Gateway has one interface facing out and nine internal, so you have nine possible routable IP spaces. The EG is deployed in HA mode. All networks are segmented based on VXLAN. EGs come out of the provider resource pool, not out of the tenant pool. Typically customers will create a DMZ and an Application network as well as one for Test and Development.

As part of the EG you can create a VPN connection between the customer datacenter and the cloud. In addition VMware now offers a dedicated network option. The VPN uses IPsec and allows you to build complex interconnected Cloud architectures; each connection however is a single tunnel. This allows you to run cross-cloud functional services like Active Directory (AD) for example.

All firewall rules are configured at the gateway. By default all traffic is denied. Right now the vCHS operational team has access to the firewall logs, but they are available to customers upon request. VMware is looking at ways to provide direct access to these logs.

You can configure Source NAT and Destination NAT rules on the EG. In addition you can configure load balancing by defining Virtual IPs and server pools. The load balancing rules allow you to run health checks to monitor things like ports on the servers in the pools.
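To make the VIP/pool/health-check model concrete, here is a toy sketch in Python. It is purely illustrative of the concepts (a VIP mapping to a pool whose members are health checked before traffic is sent); it is not the actual Edge Gateway configuration or API.

```python
import itertools

class ServerPool:
    """Toy model of a load-balancer pool with per-member health checks."""

    def __init__(self, members, health_check):
        # health_check(member) -> bool; in practice this would be e.g.
        # a TCP connect to the service port on the pool member.
        self.members = members
        self.health_check = health_check
        self._rr = itertools.cycle(range(len(members)))

    def healthy_members(self):
        return [m for m in self.members if self.health_check(m)]

    def pick(self):
        """Round-robin over members, skipping any that fail the health check."""
        for _ in range(len(self.members)):
            m = self.members[next(self._rr)]
            if self.health_check(m):
                return m
        raise RuntimeError("no healthy members in pool")

# A VIP simply maps a front-end address to a back-end pool.
# Addresses below are made up for the example.
vip = {"address": "203.0.113.10:80",
       "pool": ServerPool(["10.0.0.11:8080", "10.0.0.12:8080"],
                          health_check=lambda m: m != "10.0.0.12:8080")}

print(vip["pool"].pick())  # only the healthy member is ever returned
```

The point of the sketch is the failure behavior: a member that fails its health check is silently skipped, so clients of the VIP never see it.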

It is possible to drop 3rd party appliances between isolated networks for additional network services, such as an F5 virtual appliance. VMware is also providing examples of split designs using common services like SharePoint and Exchange. You cannot replace the external facing EG with a 3rd party appliance, but you can configure the EG to pass traffic through to get these scenarios working.

You can use stretched networks to extend a Layer 2 network between the customer datacenter and the cloud. Keep in mind though that all network traffic traverses the VPN, as routing is done by the on premise network gateway. One reason to use stretched networks is applications that are tied to MAC or IP addresses. Also note that a vApp container can only contain 128 VMs.

To do a stretched network you need an Edge Gateway on premise and in the Cloud, each with two active interfaces. A single EG is required per segment you want to stretch, so you cannot use the additional interfaces. This is not recommended for a segment with a large number of VMs due to the amount of traffic going back to the router on premise.

The other option is DirectConnect, which allows you to put in a private line to connect an on premise segment to a Cloud based network. There are actually two versions available: DirectConnect and Cross Connect. If the customer is in an existing datacenter running vCHS, they can cross connect from their on premise cage to vCHS. DirectConnect is set up when the customer is not in the same datacenter.

These options (VPN, DirectConnect and Cross Connect) allow the customer to pick and choose the best method to connect to vCHS. In vCHS there are five different Role Based Access Control levels defined: Account Administrator, Virtualization Infrastructure Administrator, Network Administrator, Read-only Administrator and Subscription Administrator. These provide a flexible security policy.

- Posted using BlogPress from my iPad

Wednesday, February 12, 2014

General Session, Pat Gelsinger (CEO, VMware)

Pat believes we are going through a major tectonic shift in the IT industry. Pat explains that in times of disruptive innovation you need to obey these guiding principles:

1) Bet on the right side of the technology
2) Do the right thing for the customer
3) Deliver solutions in cooperation with the partner ecosystem

Pat reviews VMware's three strategic priorities for 2014: Software Defined Datacenter (SDDC), End User Computing (EUC) and vCloud Hybrid Service (vCHS). In order to do this VMware must make SDDC easy to adopt; right now the value is there but implementation could be easier.

Pat explains that VMware is learning to be both a Software Vendor and a Cloud Services Provider. Adapting to this new reality helps them understand how to deliver the SDDC, while enabling it to extend to the Hybrid Cloud. This must be done without sacrificing enterprise requirements like security and policy control. In addition in the End User Computing space VMware must unify Mobile Device Management and End User Computing to incorporate and deliver Cloud Mobility.

While this is a huge challenge for VMware it also represents a huge opportunity. To do this VMware has hired some of the best software engineers in the world. VMware intends to be a leader in the development and adoption of these technologies and its success to date demonstrates this.


General Session: Presented by Sanjay Poonen, Executive VP & GM, End-User Computing

We are entering a world where mobile and Cloud will be key. Everything is interconnected and mobile. Apps are becoming more cloud centric. Data is being stored on Cloud based Object stores.

VMware's vision is based on a foundation of virtualized infrastructure and hybrid cloud computing. A layer of management and automation is required to manage these as a single entity. Once these layers are established you can build a software defined datacenter.

VMware wants to make your end user experience like Netflix, where you start watching at home, travel, and continue watching on any device in any context, including machines (the Tesla automobile is provided as an example).

The challenge with EUC is that it is made up of a bunch of point products. VMware is committed to being best of breed in all these components but also to being completely unified. VMware makes a commitment to extend VDI to add application publishing. Once complete they will add the remaining components of the software defined datacenter (SDDC) to unify VMware's EUC product offering. VMware believes their desktop strategy is accelerating faster than the competition.

Sanjay announces an ongoing initiative with Google, and Caesar Sengupta, VP of Chrome at Google, is introduced. Caesar mentions 58% of the Fortune 500 are paying Google customers consuming enterprise services. Caesar talks about the Chromebook and how the product grew to 21% of the embedded operating system market in tablets and laptops in an incredibly short period of time. It would appear that the initiative is based on Google Chromebooks supporting VMware Horizon View at some point.

Sanjay introduces John Marshall, the CEO of AirWatch. AirWatch was able to scale to match the demand for Mobile Device Management (MDM) and has been the leader in Gartner's MDM quadrant since inception. AirWatch started with managing the device but with a focus on enterprise grade: they provide application deployment, content management, email connectivity and integrated browsing options on any device.

The world is going mobile and each device will have its own ecosystem. This is why AirWatch supports every mobile device and every mobile platform. AirWatch is based on central policy management which is pushed down to the device. If the device is lost the business content can be removed to protect corporate assets.

VMware believes the combination of SDDC, EUC and MDM will make their solutions and value incredibly compelling to their customers.


Tuesday, February 11, 2014

vCloud Hybrid Service and Recovery as a Service: Presented by Chris Colotti

As VMware's Recovery as a Service (RaaS) is based on vCloud Hybrid Service (vCHS), you need to understand how resources can be purchased:

1) As a Dedicated Cloud (DC) resource which provides physically isolated reserved resources
2) As a Virtual Private Cloud (VPC) which is software based and is defined by resource allocation. RaaS is offered as an extension of the VPC model.

Recovery as a Service will not be based on Site Recovery Manager; it will be based on vSphere Replication. Recall that vSphere Replication is done per VM, is asynchronous, and replicates at the VMDK level. The vSphere Replication used for RaaS is not the same as the vSphere Replication shipped with vSphere, as additional capabilities have been added; VMware plans to merge these at some point.

One of the differences between the vSphere Replication (VR) that is shipped with vSphere and VR for RaaS is that each RaaS VR supports 500 VMs, and an additional encryption module is provided to secure transfers. To set up VR for RaaS a customer downloads the OVF, pairs the components with vCenter and configures each VM for RaaS. VMware has made it very simple for a customer to enable.

At GA, Recovery as a Service will be available in all five global vCHS regions according to VMware. In its initial release this solution is not designed for the large enterprise; it is targeted at the SMB and midsize enterprise space.

RaaS is based on the Virtual Private Cloud consumption model; the minimum size required for the RaaS VPC is 20 GB vRAM, 10 GHz of CPU and 1 TB of storage. 10 Mbps of network throughput is required along with 2 public IPs. It is subscription based and elastic, so additional capacity can be added as required. In addition you get two failover tests per year, with the option to buy additional tests.

RaaS requires a dedicated VPC, so an existing vCHS VPC that is currently running VMs cannot be used. VMware does not allow you to run the supporting infrastructure in the RaaS VPC, as its VMs are not powered on until a recovery is initiated. To provide the supporting infrastructure for the RaaS VPC (AD, DNS, etc.) you can add an additional VPC, use a current one not dedicated to RaaS, or, if you already reside in a datacenter delivering vCHS, use a "direct connect" option from your current infrastructure to your RaaS VPC space.

After a recovery has been initiated, to fail back you need to power off the VM in the RaaS VPC. You then need to delete the original on premise VM in vSphere. The process then does a vCloud Connector copy from vCHS back to on premise vSphere. You then reconfigure the VM and power it on. Once it is powered up you can restart replication to the RaaS VPC. After a recovery you can use the reseed option to select the powered off VM in vCHS and avoid the initial full file transfer.
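The failback sequence above is strictly ordered, which makes it easy to express as a simple scripted workflow. The sketch below is only illustrative: the step names are placeholders I have invented to mirror the prose, not actual vCHS or vCloud Connector API calls.

```python
# A minimal sketch of the failback sequence described above. The step names
# are illustrative placeholders, not real vCHS or vCloud Connector calls.

FAILBACK_STEPS = [
    "power_off_recovered_vm_in_raas_vpc",
    "delete_original_vm_on_premise",
    "vcloud_connector_copy_to_on_premise",
    "reconfigure_vm",
    "power_on_vm",
    "restart_replication_to_raas_vpc",
]

def run_failback(execute):
    """Run each failback step in order; `execute` performs one named step."""
    completed = []
    for step in FAILBACK_STEPS:
        execute(step)          # real automation would invoke tooling here
        completed.append(step)
    return completed

# Dry run: just log the steps instead of touching real infrastructure.
log = []
run_failback(log.append)
print(log)
```

Encoding the order in one list makes the dependency explicit: replication cannot be restarted until the VM is back on premise and powered on.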


Desktop as a Service: Horizon View and Desktone

Typically providing desktops in the Cloud is not the first focus for Cloud Service Providers; it is generally server side applications. Before looking at Cloud based desktops, there are a few questions that you need to ask yourself:

1) Do you want to offer applications or full desktops?

When providing a full desktop, be aware that consumer expectations change, as does the operational overhead. You need to consider storage of user data as well as operational patching, for example.

2) What is the demarcation point between what is managed and provided?

What sort of services will you build around the desktop, what applications will you provide, what are the rules that govern the use of the desktop?

Some of the technical challenges with View are its dependency on vCenter, as it does not understand VMware's Cloud orchestration layer. As VMware recommends vCloud Director (vCD) for multi-tenancy, the interaction between the View layer and vCloud Director needs to be considered. Both View and vCD want to "own" resources. In integrating View with vCD you have three approaches:

The first option is deploying View outside the vCD infrastructure. This approach allows you to build a shared service, however it creates difficulties in supporting true multi-tenancy.

The second option is to deploy a View vApp per organization. While this respects the multi-tenancy of vCD, it requires much more View infrastructure. In addition any changes made to resourcing at the vCD level will not be transparent to View.

The third option is to use the View Direct Connect plugin to deploy View desktops from a catalog within vCD. However, this removes a lot of the benefit that the View Connection broker provides to a virtual desktop environment.

Desktop licensing needs careful consideration as there is no Microsoft Service Provider Licensing Agreement (SPLA) for desktop OSes. You can implement Bring Your Own License (BYOL), but you need to watch Microsoft compliance issues around what hardware the desktops can be deployed to. Another option is to use Windows Server OSes as desktops, which gets you around the compliance issues but may add additional considerations.

Unlike View, Desktone is vCloud Director aware and does not depend on vCenter. Desktone built the platform on APIs, so it was easy to support vCenter as well as vCD natively. With Desktone you can deliver applications or full desktops using what is referred to as multi-mode, which leverages Windows Remote Desktop Services (RDS). A combination of Desktone and View is recommended by VMware for service providers getting into the Desktop as a Service (DaaS) market.

Desktone allows you to create a Service Center which has multi-tenant visibility. View complements Desktone by providing Windows knowledge of the desktops, allowing you to manage profiles, apply sysprep and introduce persona management, for example. Desktone provides the Cloud layer, multi-mode and multi-datacenter functionality to enable a service provider to deliver a full service offering.


Building the Data Center of the Future: Ben Fathi, Chief Technology Officer

Ben explains that it is his first time onstage for VMware. Prior to VMware he worked at MIPS Technologies, where he was a Unix kernel engineer and worked on supercomputers. Ben went on to work at Microsoft and managed server and storage clustering, among other things. After that Ben managed security at Microsoft as well as the management of Windows 7 and Hyper-V. Ben left Microsoft for Cisco, where he ran all their protocol teams. At VMware he has been managing vSphere development and now finds himself the CTO. The change for him is that rather than building OSes for servers he is now leading the building of OSes for the datacenter.

There are 3 imperatives for the IT infrastructure:

1) Virtualization needs to extend to ALL of it; including networking and storage
2) IT management needs to give way to automation
3) Compatible hybrid cloud becomes ubiquitous

Two years ago VMware announced the software defined datacenter, sending the industry in a new direction. Ben believes that in 2014 SDDC will hit the tipping point. VMware is seeing really strong customer momentum; Symantec, Subaru and Dow Jones are listed among the early adopters of SDDC.

But what does SDDC mean? It is made up of compute, network and storage. Ben mentions that even though 85% of applications are running virtualized, there are still some that are not. Compute continues to evolve to reduce latency and add extensions for Hadoop. Telcos and the Hadoop communities are investing in virtualization to run applications that have traditionally been physical.

Last year VMware announced NSX, the network virtualization platform. NSX is built to run on any hypervisor, with any application and with any cloud provider's toolset. VMware believes NSX will revolutionize networking as vSphere has done for operating systems.

The current challenges with networking are that provisioning is slow, hardware dependent and operationally intensive. NSX takes advantage of the virtual switches in hypervisors, creates a flat layer 2 network and programs it. This allows you to set policies that control the network programmatically.
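The essence of "controlling the network programmatically" is declaring the desired connectivity as data and compiling it into enforced rules, with everything else denied by default. The sketch below illustrates that idea only; the policy schema and rule format are invented for the example and are not the NSX API.

```python
# A toy illustration of policy-driven networking in the spirit described
# above. The schema is invented for the example; the real NSX API differs.

desired_policy = [
    {"segment": "web", "allow_from": ["internet"], "ports": [80, 443]},
    {"segment": "app", "allow_from": ["web"],      "ports": [8080]},
    {"segment": "db",  "allow_from": ["app"],      "ports": [5432]},
]

def compile_rules(policy):
    """Expand the declarative policy into explicit allow rules.
    Anything not produced here is implicitly denied (default deny)."""
    rules = []
    for entry in policy:
        for src in entry["allow_from"]:
            for port in entry["ports"]:
                rules.append((src, entry["segment"], port))
    return rules

def is_allowed(rules, src, dst, port):
    return (src, dst, port) in rules

rules = compile_rules(desired_policy)
print(is_allowed(rules, "web", "app", 8080))      # True
print(is_allowed(rules, "internet", "db", 5432))  # False
```

Because the policy is plain data, changing connectivity is an edit-and-recompile operation rather than a per-device reconfiguration, which is the operational win the session describes.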

In 2012 the number of virtual ports in the datacenter exceeded the number of physical ports. Ben explains that three of the top five investment banks are deploying NSX, as are leading global telcos.

Ben switches gears to storage. As you know, storage is one of the biggest capex and opex costs in the datacenter. The storage market is in the midst of disruption: server flash and storage prices are falling, and you have an abundance of CPU cycles in servers making new approaches possible.

VMware has been delivering storage innovation for years through vMotion, VAAI, Storage DRS and now vSAN. So what is software defined storage? In the new model there are really three different types of storage: Hypervisor Converged (vSAN), a SAN/NAS pool (traditional storage) and an Object Storage Pool or Cloud Storage.

vSAN is a distributed object store implemented directly in the hypervisor kernel. It allows you to apply policy based management to storage. It is flash accelerated, with great performance and lower TCO. Ben says it is also brain dead simple to use: you simply turn it on in vCenter.

Ben explains that they have tested 915K IOPS in a 16 node cluster with less than 10% CPU overhead. VMware had 10,800 participants in their public beta. VMware is ready to release in Q1 with a 16 node configuration. In addition, beta participants get 20% off their first purchase of the product.

Ben moves to opportunities in the Hybrid Cloud market. Gartner estimates that the Infrastructure as a Service market was 9 billion dollars in 2013 and will grow to 31 billion in 2017. Ben thinks that the most important thing with a Hybrid Cloud is that it is compatible with the enterprise toolset. Ben explains that vSphere supports over 120 different types of operating systems as well as enterprise applications which are difficult to run in a public cloud.

The challenge with public clouds is that they are proprietary platforms, do not support enterprise applications and have potential security and compliance issues. VMware's value proposition with their Hybrid Cloud services is that they have the same core components and management tools as the enterprise. In 2014 VMware is going to add Desktop as a Service, Disaster Recovery Services and Database as a Service, as well as Mobile Services through AirWatch.

There are really 5 starting points for customers moving to the cloud:

1) Development and testing
2) Prepackaged software; i.e. Exchange, SharePoint etc.
3) Disaster Recovery
4) New Applications using services like Cloud Foundry

Ben challenges the crowd to embrace the new reality.


Partner Exchange, General Session: Dave O'Callaghan and Carl Eschenbach

The new world makes it difficult to understand what drives IT decisions. People want to use their own devices without compromise; how does enterprise IT fit into this new world? How do we evolve in a world that is changing so fast? It's time to make history again, and to rethink and master the new reality: that is the challenge to the partner community from VMware.

Dave O'Callaghan, the Senior Vice President of the partner community, takes the stage and welcomes the crowd to PEX 2014. Dave talks about VMware's total revenue from 2007 to 2013 moving from 1 to 5 billion dollars; 85% of that revenue has come through the partner network.

Dave explains that success is about constantly aligning to the new reality. Dave draws the conclusion that the new reality is aligning our businesses to delivering a software defined datacenter. This will require practice, focus and training to meet the challenges of the new reality.

Dave introduces Carl Eschenbach, President and Chief Operating Officer at VMware. Carl explains that there are 4000 partners in attendance this year. Carl explains that our challenge is to become masters of the new software defined enterprise, and VMware will provide the tools to help partners do so. VMware made 5.21 billion dollars in revenue in 2013, delivering 17% growth.

Carl reaffirms VMware's commitment to the channel as it has been key to their success. VMware has invested $300 million in partner programs to provide incentives to the partner community. VMware's renewals business is at an all time high.

Carl explains that they have invested heavily in bringing in the right executive team: Sanjay Poonen (End User Computing), Ben Fathi (Chief Technology Officer), Robin Matlock (Chief Marketing Officer), Tony Scott (Chief Information Officer), and Sanjay Mirchandani, GM of Asia Pacific and Japan.

VMware had 234 new software releases last year, including major launches of the VMware Horizon Suite and vCHS, and the NSX and vSAN betas among others. In addition VMware acquired Desktone (Desktop as a Service) and Virsto, the storage hypervisor (vSAN), and announced the acquisition of AirWatch.

VMware's three priorities for 2014 are End-User Computing, Software Defined Datacenter and Hybrid Cloud. This potential software product market is estimated to be 50 billion dollars before services.

So what's next? The Software-Defined Enterprise. Carl takes us back in history from mainframe to client-server to mobile-cloud. The fundamental challenge in all these transformations has been relatively flat IT budgets. VMware sees more friction as the consumer demands more while budgets continue to remain flat. VMware believes they are in a unique position to address this. Why? Because they have done this already with virtualization: by saving money, they liberated a percentage of spending that can be spent on innovation.

The way VMware will deliver on this is the software defined enterprise. The foundations of the software defined enterprise are:

1) Applications. However, they are only as reliable as the infrastructure they run on. This stability is provided by introducing virtualization across all traditional physical datacenter infrastructure: server, network and software defined storage. It also must extend to the Hybrid Cloud through interfaces like vCloud Automation Center (vCAC). This is the software defined datacenter.

2) End User Computing. In addition we need to give users access to the software defined datacenter through innovations in the virtual workspace, while ensuring security, compliance and control. The icing on the cake is AirWatch for mobility management.

VMware's mandate is Any App, Any Place, Any Time with No Compromise. VMware expects the services revenue around these opportunities to be 50 billion dollars, for a combined total of software, licensing and services of 100 billion dollars.


Monday, February 10, 2014

The value of vSAN

VMware believes vSAN is a very disruptive technology that does not require you to re-architect your environment to integrate it. There are several trends that necessitate virtual storage adoption:

1) The amount of data we are storing
2) The complexity of storage today

vSAN is a very simple product to deploy. Installation involves answering a few questions to get it up and running, with no zoning or LUN creation required. VMware sees three strong use cases for vSAN: Virtual Desktop, Test and Development, and Disaster Recovery.

VMware expects people to adopt vSAN organically. For example, customers will buy vSAN for a development cluster initially, but as it proves itself it will evolve for use in other environments. VMware is targeting vSAN at mid-tier storage performance requirements, as opposed to all workloads. vSAN will coexist with physical SAN environments in the enterprise.

VMware sees storage as the final piece of the complete Software Defined Datacenter. The challenge for VMware is: will their customers see them as a storage vendor? VMware sees a large shift in the performance of the server platform, from server flash to multi-core CPUs, delivering an enterprise grade hardware platform. In addition storage is becoming less specialized as VMs aggregate workloads on common storage platforms.

VMware believes the hypervisor is in a unique position to understand both workload performance and storage requirements as it is directly in the IO path. Although most people understand the virtualization story with VMware, the company has also been innovative in storage technology and management; vMotion, Storage DRS and Storage IO Control for example.

VMware sees three critical areas in Software Defined Storage; the virtual data plane or the aggregating of storage pools, virtual data services such as data protection and performance and finally the policy-driven control plane which allows policy based automation and orchestration. All these layers are necessary to make up Software Defined Storage.

vSAN will ship as a Virtual SAN Ready Node, which comes direct from the hardware vendors, as well as a Do It Yourself (DIY) option in which you deploy the hardware and apply the software. In a 16 node cluster VMware has benchmarked 1 million IOPS as part of their testing.

vSAN provides enterprise grade storage performance from server based storage. vSAN makes use of host based Hard Disk Drives (HDDs) and Solid State Drives (SSDs) installed in the server, which are presented as the vSAN datastore. This means that technologies such as vMotion are fully supported on vSAN. It does not present LUNs however, so Raw Device Mappings (RDMs) are not supported on the architecture.

vSAN will work with any servers and RAID controllers on the Hardware Compatibility List (HCL) and can make use of SAS, SATA and SSD drives. VMware recommends 10 GbE connections between servers although it will run on 1 GbE.

vSAN writes to the SSD cache and then destages to disk. You can scale out vSAN by adding additional servers with additional HDDs and SSDs. It requires VMware vSphere 5.5, and VMware recommends that all servers in the cluster are configured identically.
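The write-cache-then-destage behavior can be pictured with a small sketch: writes are acknowledged once they land in a fast cache tier and are flushed to the capacity tier later. The class below is purely illustrative (the threshold and data structures are invented for the example, not vSAN internals).

```python
# Toy sketch of write caching with destaging: writes land in a fast tier
# and are flushed to the capacity tier in batches. Not vSAN internals.

class CachedDatastore:
    def __init__(self, destage_threshold=4):
        self.cache = {}      # pending writes (block -> data), the "SSD" tier
        self.disk = {}       # destaged data, the "HDD" capacity tier
        self.destage_threshold = destage_threshold

    def write(self, block, data):
        self.cache[block] = data             # acknowledged once in cache
        if len(self.cache) >= self.destage_threshold:
            self.destage()

    def destage(self):
        self.disk.update(self.cache)         # flush cached writes to disk
        self.cache.clear()

    def read(self, block):
        # The cache holds the newest copy; fall back to disk otherwise.
        return self.cache.get(block, self.disk.get(block))

ds = CachedDatastore()
for i in range(5):
    ds.write(i, f"data-{i}")
print(len(ds.cache), len(ds.disk))  # oldest writes destaged, newest cached
```

The design point the sketch captures is that write latency is governed by the fast tier, while the slow tier absorbs batched, sequentialized flushes.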

The ability to assess use cases for vSAN has been built into the VMware Infrastructure Planner (VIP). VMware has announced that the GA release of vSAN will be in Q1 of this year.


VMware Horizon Mirage: Endpoint Protection and Disaster Recovery; Presented by John Dodge, Stephane Asselin and Shlomo Wygodny

Disaster Recovery is a very important capability of VMware Horizon Mirage. Endpoints change naturally and by default Mirage synchronizes those changes back to the Mirage cluster in the datacenter.

Rumour has it that during the final stages of the agreement with VMware, the owner of Wanova lost his laptop. Normally this would have been catastrophic given the circumstances. In a perfect example of eating their own dog food, Wanova was able to restore his laptop with everything intact in less than an hour using the Mirage System Recovery option.

Mirage System Recovery is smart enough to bring down the key pieces to get the user up and running quickly while the rest of the data continues to trickle down. This minimal configuration is referred to as the "minimum working set" required to get the system functional.

With Mirage the System Recovery scenario looks as follows:

1) Install Mirage (that is if the IT group has not supplied a laptop with Mirage installed)
2) Assign Central Virtual Desktop Image and have the Mirage agent pull down the pieces.
3) The desktop reboots to the new working set to complete the process

In addition Mirage can also be used to initiate a Desktop Repair. To provide an example let's look at the recovery of files.

1) User installs an app that wipes My Documents
2) The system administrator restores a snapshot from the central console; no troubleshooting required. (Note: only the files that have changed are sent, as a comparison is always done first to identify the deltas before sending the files.)

There are three options for repairing endpoints in Mirage:

1) Restore Snapshot - repair using good files and settings from Snapshot

2) Enforce Base Layer - the Mirage agent rolls back any changes to the standard base layer within the Central Virtual Desktop.

3) A bare metal option, which allows a small Windows 7 image containing the Mirage agent to be booted from a USB drive or PXE booted from the network to pull down the assigned Central Virtual Desktop (CVD) from the datacenter to the endpoint.

In some cases the Disaster Recovery value that Mirage brings is core to a customer's decision to adopt the technology.


VMware Horizon Mirage Design, John Dodge, Stephane Asselin and Shlomo Wygodny

The world is rapidly evolving; John cites several recent milestones:

1) 2010 was the year non-Windows applications exceeded Windows applications, driven by the smartphone and tablet market.

2) There are 250 million Dropbox users worldwide.

3) 52% of employees carry more than one device.

VMware's new End-User Computing Vision is: "Software Defined Workspace at the Speed of Life"

VMware's strategy is to plan for the convergence of traditional Windows management and delivery, Windows on mobile, and mobile management and delivery in general. In addition EUC is extending to the machine space; for example, the Tesla is a smart phone you can drive. John provides the example of Tesla changing the software in order to disable a certain feature related to suspension.

John mentions that AirWatch was VMware's largest acquisition to date. VMware now finds itself in the leading quadrant of mobile.

The conversation shifts to Mirage. Mirage provides several capabilities and operates on the notion of layers. Fundamentally there is an IT management layer involving the Base, Application and Driver layers. The other layer can be thought of as the user layer: the machine identity, user profile and non-IT-installed applications. Typically the IT management layer gets pushed, and the user layer gets pulled.

Typically in the datacenter you have a cluster of stateless Mirage servers that are load balanced. Mirage supports NAS and DASD, however a production implementation requires NAS.

All the administrative work happens through the Mirage console. Mirage is flexible and allows delivery of applications through a Mirage layer, SCCM or App-V for example.

Mirage Server runs on Windows Server 2008 R2 and requires a database.

The Mirage client is lightweight and can be silently installed. Mirage makes use of VSS and requires 10 GB locally: 5 GB for the install plus additional space to download an initial image, for example. Mirage will throttle transfers based on how active the user is on the endpoint. In the next release you will be able to turn this throttling off, however you should be aware that doing so will impact user activity.

1) There is a maximum of 1500 endpoints per server, physical or virtual
2) Max 20,000 Central Virtual Desktops (CVDs) per Mirage cluster
3) Deploy N+1 Mirage servers to avoid single points of failure
4) VMware recommends two Gigabit Ethernet connections on the server

Replication in a typical environment is estimated at 15 kbps per endpoint, or approximately 150 MB per 24 hour period. This will vary considerably from client to client.
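That estimate is easy to sanity check: 15 kbps sustained over a full day works out to roughly 160 MB, in line with the ~150 MB figure above.

```python
# Sanity check of the replication estimate above: a sustained 15 kbps
# per endpoint, accumulated over a 24 hour period.

kbps = 15                       # kilobits per second per endpoint
seconds_per_day = 24 * 60 * 60  # 86,400 s

bits_per_day = kbps * 1000 * seconds_per_day
megabytes_per_day = bits_per_day / 8 / 1_000_000

print(round(megabytes_per_day))  # ≈ 162 MB, close to the ~150 MB estimate
```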

Behind the scenes, Mirage storage is a Single Instance Store in which:

1) Mirage stores CVDs; 1000 per volume is VMware's guideline
2) Files and binary chunks are de-duplicated depending on file type (Mirage is smart enough to understand database files, for example)
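The single-instance idea is that identical chunks are stored once and files merely reference them. The sketch below illustrates that generically; fixed-size chunking and SHA-256 keys are my illustrative choices, not Mirage's actual algorithm.

```python
import hashlib

# Generic sketch of single-instance (de-duplicated) chunk storage, in the
# spirit of the store described above. Not Mirage's actual algorithm.

CHUNK_SIZE = 4096

class SingleInstanceStore:
    def __init__(self):
        self.chunks = {}   # digest -> chunk bytes, stored exactly once
        self.files = {}    # filename -> ordered list of chunk digests

    def put(self, name, data):
        digests = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # dedupe identical chunks
            digests.append(digest)
        self.files[name] = digests

    def get(self, name):
        return b"".join(self.chunks[d] for d in self.files[name])

store = SingleInstanceStore()
store.put("a.bin", b"x" * 8192)   # two identical 4 KB chunks
store.put("b.bin", b"x" * 4096)   # same chunk again, already stored
print(len(store.chunks))          # 1 unique chunk backs both files
```

With thousands of near-identical CVDs per volume, this kind of dedup is what makes the 1000-per-volume guideline feasible.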

Each Mirage server has a local cache which caches endpoint synchronization data. Typically there is one per Mirage server, and 100 GB of space is recommended. This cache is dedicated per server, however it only benefits data that is uploaded.

On an end user PC the following layers can fall under Mirage management:

1) User Personalization Layer
2) Machine Identity Layer
3) Mirage Application layer
4) Base Layer
5) Driver Library

All this layering is done using native Windows APIs.

When deploying Mirage, the first step in the process is to build a reference machine. To do this you add an agent, create a good base image and then centralize it. You put everything you expect to be in a single base layer; traditionally this is the most static part of the desktop image. This image is then replicated up to the datacenter.

Once we have this image we create a Base Layer by applying Base Layer rules; once the rules are applied it is considered a CVD. We can then assign or deploy the base layer to our endpoints. The easiest way to distribute this base layer is to have clean endpoints.

To distribute the base layer you create Mirage groups. You can create a dynamic group by adding rules based on naming conventions, for example "vdi". Every endpoint that boots with a vanilla OS, the Mirage agent and that string in its name will join this collection and receive the CVD.
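Dynamic group membership of this kind is essentially a pattern match over endpoint names. The sketch below shows the idea in miniature; the rule format is invented for illustration and is not Mirage's actual rule syntax.

```python
import re

# Toy sketch of dynamic group membership based on a naming convention,
# as described above. The rule format is invented for the example.

def dynamic_group(endpoints, name_pattern):
    """Return the endpoints whose names match the naming convention."""
    rule = re.compile(name_pattern, re.IGNORECASE)
    return [ep for ep in endpoints if rule.search(ep)]

endpoints = ["VDI-SALES-001", "vdi-eng-002", "LAPTOP-HR-003"]
print(dynamic_group(endpoints, r"vdi"))  # only the VDI-named endpoints
```

Because membership is evaluated against the name at boot, a freshly imaged machine joins the right collection, and receives the right CVD, without any per-machine assignment.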

Before Mirage sends any data, the endpoint is analyzed to ensure only the delta is sent. Mirage intelligently merges base layer changes into the endpoint using native Windows APIs; to the OS it feels like an application installation. These changes can be stacked to schedule the reboot, for example on a monthly basis.
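The "compare first, send only the delta" step can be pictured as a manifest comparison: each side summarizes its files as content hashes, and only files that are new or whose hashes differ get transferred. This is a generic illustration, not Mirage's actual protocol.

```python
import hashlib

# Generic sketch of delta detection via manifest comparison, in the spirit
# of the analysis step described above. Not Mirage's actual protocol.

def manifest(files):
    """Map each file name to a hash of its contents."""
    return {name: hashlib.sha256(data).hexdigest() for name, data in files.items()}

def delta(server_files, endpoint_manifest):
    """Return only the files the endpoint is missing or has stale copies of."""
    server_manifest = manifest(server_files)
    return {name: server_files[name]
            for name, digest in server_manifest.items()
            if endpoint_manifest.get(name) != digest}

server = {"app.exe": b"v2", "config.ini": b"same", "readme.txt": b"new"}
endpoint = manifest({"app.exe": b"v1", "config.ini": b"same"})
print(sorted(delta(server, endpoint)))  # only app.exe and readme.txt are sent
```

Note that only hashes cross the wire during the comparison; file contents move only for the files in the delta, which is what keeps the per-endpoint replication traffic so low.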

Most endpoints today have disk encryption. As Mirage works from within the OS, Mirage does not notice the encryption unless a Windows XP to 7 upgrade is attempted. In some cases you may have to decrypt before the upgrade, however Mirage has worked with many mainstream encryption vendors to provide full support, in addition to Microsoft endpoint encryption software.

Creating application layers is very flexible: you can combine applications in a single layer or have one application layer per application. If you create a single combined layer then you are testing all the interdependencies together. Mirage application layers do not provide application virtualization, so to the OS the applications are natively installed. You can combine ThinApp and Mirage layers to get the best of both worlds.

A driver library is used to enable a single base layer that can be applied to multiple physical devices. Drivers are combined with the base layer to deal with different vendor desktops: HP, Dell etc.

One common use case for Mirage is a Windows XP to 7 migration. The first phase of this process is to push the base layer components down to the XP endpoint. Before the migration is done, an optional pre-deployment snapshot is taken to provide a fallback point; this snapshot is stored on the Mirage server. Once that is done, a reboot is performed, which is referred to as a "pivot": the XP files are swapped out for the Windows 7 OS files, and the desktop is joined to the domain to complete the migration. The benefit of this approach is that the migration is done in place and the end user impact is minimized. While times will vary, a typical migration can take from 30 to 50 minutes.

Great information today from @VirtualStef and the team
