Tuesday, September 15, 2015

Safely Transitioning Users to the Digital Age

The IT industry is going through a huge change. Key trends driving this change are cloud, mobility, social networks, and big data and analytics. The Economist recently reported that by 2020, 80% of adults will have a mobile device with the power of today’s supercomputers.

This drive towards mobility is creating an explosion of devices within our environments. In addition, these new mobile platforms are forcing development of Cloud-first, “as-a-Service” applications. These applications are designed for volume and velocity and are fundamentally different from our legacy enterprise applications.

These new Cloud-first applications provide analytics by default, driving big data usage. As the ability to measure user interaction on mobile devices grows, the increasing volume and number of data points provides more information and greater intelligence.

This drive towards mobile and increased digitization is creating both friction and pressure for our traditional IT organizations as they struggle to manage risk and compliance while enabling freedom and choice for users. Not responding to these demands is not an option, as our lines of business actively look to Cloud options to deliver lower costs and greater mobility with or without our blessing.

From an IT perspective, this leaves us caught between two worlds with limited flexibility and fairly static budgets. From an end-user perspective, we have large, high-touch enterprise desktop environments that create tremendous operational demands on our time and resources, leaving little left over for proactively planning for Cloud and mobility.

In developing a strategy to break this cycle, it is important to drive new operational efficiencies into the management of desktops by redefining our desktops as software. This can be accomplished today by introducing virtualization at the desktop, application and user metadata layers, which allows us to move to a more non-persistent, cached desktop. A cache-based desktop can provide a similar user experience to a traditional physical desktop at much less operational cost and overhead.

Making an investment in an end-user virtualization software stack should include tools to both virtualize a desktop environment and deliver enterprise mobility and policy management. Only in this way can we drive down operational overhead, plan for mobility and future-proof our desktop and mobility strategies.

Thursday, September 10, 2015

#VMware #vForum Shawn Rosemarin and Greg Davidson Welcome Address & VMworld 2015 in 10 minutes

Greg welcomes the audience and encourages everyone to pick a technology they are not familiar with, make some contacts, enjoy the show and take advantage of the hands-on labs. Greg introduces Shawn, who is about to do VMworld in 15 minutes.

Shawn mentions that it is all about the application; at the bottom of the stack is the infrastructure. This infrastructure is extended to One Cloud (public, private or managed). This One Cloud model is used to enable applications to be delivered to any device.

Software-Defined Data Center and Hyper-Converged: Virtual SAN has been upgraded to 6.1, and EVO RACK has been rebranded EVO SDDC, currently available from Dell, Quantum and VCE.

VMware introduced the Unified Hybrid Cloud Platform, which connects an enterprise SDDC with VMware vCloud Air. In addition, vCloud Air SQL, Object Storage and DR were all announced last week. Project “Skyscraper” is cross-cloud vMotion and content sync.

In addition, VMware announced vRealize Operations 6.1 with Intelligent Workload Balancing (managing and moving workloads to where they should live). Hyperic has been collapsed into vROps so that OS monitoring is done from a single appliance. Also look for Log Insight and vROps to continue to move closer together.

Site Recovery Manager 6.1 was announced with closer ties to NSX so that the network fails over with the workload. In addition, SRM Air was announced so that vCloud Air can be a target for on-premises SRM deployments.

In addition, the VMware Photon Platform was announced; this is VMware’s platform for container support.

Shawn reiterates the industry’s assessment that VMware is leading in End User Computing “EUC”.


NSX 6.2 was also announced at VMworld and Shawn mentions that they have many customers in production with this technology and encourages attendees to go to the NSX guided lab session.

#VMware #vForum Keynote: Brave New IT – Accelerating Business Innovation in a Liquid World: Paul Strong Global CTO

Paul Strong connects all of VMware’s R&D with VMware as a company, as well as its partners and customers.


Technology has ceased to be about optimizing software and hardware; rather, it is about doing things differently. For VMware it is important to understand how the world is changing so that we ensure we are going in the right direction to add value to our partners and customers.

Paul begins to discuss trends and brings up Cloud. What we have discovered about Cloud is that it is not just about technology. The reason Cloud is so compelling is that it is about a consumption model: self-service, instant provisioning, pay-per-use, cost efficiency and elasticity. What people are addicted to is the experience. Cloud is really transforming things in four dimensions:

    • It allows people to consume technology as a commodity. People can defocus on the plumbing and focus on the activity that drives their businesses.
    • Cloud democratizes innovation which creates a very different world as technology is available to the masses.
    • Cloud has reset expectations around IT. I can get “IT as a Service” so I want the same from my IT organization. This is really about an experience that users are searching for.
    • The technology that enables the Cloud is changing the way we think of a system. “The Cloud is now the System”

Paul moves to his thoughts on trends. The first trend is around applications and services: from mainframe to monolithic to distributed to microservices, reusable components that are inherently small and designed to work together.

The second trend is around infrastructure; we have moved from a discrete set of resources within an enterprise to a fabric that extends all the way out to your mobile devices. Both trends have really influenced each other; this fabric is a natural environment for microservice development.

Although evolution happens all the time, what has changed is where we are on the trajectory, which has turned our assumptions on their head. For example, in the past if you wanted scale you would build a scaled-out architecture as part of the build. With Cloud we build highly reliable systems out of a great number of inherently unreliable systems. In addition, we can scale out applications without a large monolithic architecture supporting them. Paul cites the example of large databases that were built based on the number of customers and transactions.

Paul calls out the assumptions around security. We used to build walls and moats (the analogy of a castle is used); the trouble with this approach is that they were always surmountable in the end. This is how we have viewed security over the years. NSX enables a different approach, similar to mechanized warfare in the 20th century: security is built into the mobile units; tanks can disperse and move around separately, for example.

Paul argues that we were lazy in the past, creating client-server applications that required input to deliver value. In the new world, rich mobile content anticipates what you may want to do, buy or be interested in. These capabilities are allowing companies to redefine and retarget their business almost in real time using analytical data.

The enabler for this shift is the standardization of the datacenter through virtualization and containerization. Diversity in the datacenter is your enemy, as it makes development extremely hard and drives operational complexity. In addition, x86 has become the de facto platform for compute and data. This standardization is an enabler for automation. This is now occurring in the network as well with Software-Defined Networking “SDN”, which allows us to run routers, firewalls and networking at the hypervisor level. In addition, we have moved from discrete building blocks to converged and now hyper-converged infrastructure.

Paul describes virtualization as magic because it separates applications from infrastructure. What we see happening now is virtualization happening higher up the stack from the VM level with containers and technologies like Docker and VMware’s Cloud Native Application initiative.

This separation enables automation; for example if you automate a container you automate all the applications you can put in it. And that is the whole purpose of the Software Defined Datacenter “SDDC”.

This drives real choice as to what compute resources you use, while you retain a singular approach to managing and viewing resources. This makes your lives easier; VMware calls this “One Cloud”. One Cloud is about having one model to manage, which leads to simplicity.

This turns on the faucet of innovation as it allows people to try many ideas cheaply to determine what will be successful and drive business. There is no barrier to getting technology. This is highly, highly disruptive. This is the democratization of technology.

Technology is now being used to undermine the assumptions upon which established players are founded (e.g. taxi cabs vs. Uber). This allows you to pull the rug out from under existing industries. Paul provides the example of the Encyclopaedia Britannica, first under competition from Microsoft Encarta and then collapsing completely with the introduction of Wikipedia. 2012 was the last year you could buy a printed Encyclopaedia Britannica.

These mobile devices will be delivering zettabytes of data, creating a real intimacy as we understand the relationships between things. This drives what Paul calls “Systems of Intimacy”, as analytics are used to greater effect to maximize the value of every transaction in real time. This opens the door to simulation, which allows you to predict more accurately and optimize your product without the cost. Paul mentions that automobile manufacturers are moving from physical crash tests to virtual crash tests using simulations.

The road of IT is shifting from manual to automated, from analog to digital. This enables new use cases. IT gets curated: what gets curated is non-differentiating IT. If it does not differentiate, then curate it; if it does differentiate, then invest in it.

VMware’s role is to provide these technologies to its customers and partners.

Monday, September 7, 2015

#vmworld VMworld 2015 Wrap Up

What an amazing week; so much great content and so much fun meeting and connecting with everyone. Over the last few years I have had a chance to release our Horizon book with my good friend Stephane (@VirtualStef) and present at the show. This year I was able to blog the show which gave me a completely different perspective. VMware did a great job allowing us to ask questions of all the executives including Pat Gelsinger, Carl Eschenbach and Sanjay Poonen.

It was also great to introduce our upcoming project with Pearson LiveLessons that will focus on VMware’s SDDC at the show (stay tuned for additional details as we near the release date).

Support from the community and the attendees was fantastic for the site virtualguru.org. Your interest and support had us trending around 4-5,000 hits a day. Please keep the comments coming so we can continue to tailor our content to the things you are interested in.

Interestingly, although VMware was clearly proud of the improvements in marquee products like vSphere and Horizon, the messaging was squarely on Cloud and mobility. Keynotes focused almost exclusively on the products sitting in the Cloud (SDDC and cross-cloud vMotion) and mobility (AirWatch) space.

I had to attend the breakouts to find out about proactive HA, in which vRealize Operations is used to predict capacity and rebalance workloads across a federated vSphere and vCloud Air environment. In addition, creating zero-trust environments using NSX and Horizon is equally relevant in today's hyper-security-sensitive environments but was absent from the keynotes.

While I understand why VMware is pushing the Cloud focus, I think a little thunder around what’s been accomplished in the latest 6.X versions would have galvanized the IT and vSphere administrator audience a little better.

In the open discussions we had with Pat and Carl, it was clear that ensuring the focus is right is a bit of a fine balancing act. Given how we are all evaluating how Cloud fits into our strategies moving forward, it's nice to hear frank opinions that this balance is not easy even for VMware. It was great meeting the other vExperts (@vExperts), and many thanks to the folks that took time to say hello and comment on the posts; looking forward to seeing you all again next year.

Thursday, September 3, 2015

#vmworld Horizon and NSX Reference Architecture with Tristan Todd and Kausum Kumar

Horizon 6 with NSX dramatically simplifies networking and security. For example, a good majority of customers using Horizon and NSX have one primary rule: do not let desktops talk to each other.

NSX provides firewalling and security, load balancing, logical switching and routing in software. A distributed virtual firewall attached to each View desktop is a common use case, and it does not compromise the flexibility or mobility of virtual desktops. In addition, we can protect the Horizon infrastructure, desktop pools and user access control using NSX.

One great capability is firewalling based on user ID. Imagine firewall rules that apply no matter what desktop the user logs into. One new item on the Horizon roadmap is “Access Point”, a virtual-appliance-based Security Server.

There is a fling (a free tool from VMware Labs that is not officially supported by VMware) coming out that allows you to inject the service groups from within Horizon into NSX. This allows you to apply security natively to Horizon service groups within NSX’s distributed firewall interface.

In addition, you can use NSX load balancing for rudimentary load balancing of your Horizon Connection Servers. The integration of NSX and Horizon creates new design opportunities. In a traditional Horizon environment you typically create pools by team or function. With App Volumes and NSX integration you come closer to a one-pool model: the idea is that eventually you would have one pool and use App Volumes and firewall rules to segment users rather than segmenting logically by pool. While the Horizon development team is almost there, it is still very early days for this “one pool” type of architecture.

With NSX and agentless AV in VDI, it is possible to quarantine a View desktop when a security flag is tripped. This does not require the user to log on or off; the policy is applied dynamically, taking the vulnerable View desktop off the network.
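
As a rough illustration of how that quarantine flow can be wired up outside the UI, here is a minimal Python sketch that applies an NSX security tag to a flagged desktop so a dynamic security group (and its restrictive distributed firewall policy) picks it up. The NSX Manager address, tag ID, credentials and VM ID are placeholders, and the REST endpoint shown is recalled from the NSX for vSphere API, so verify it against your API guide before relying on it.

```python
# Sketch: quarantine a View desktop by attaching an NSX security tag, so a dynamic
# Security Group whose membership criterion matches the tag pulls the VM into a
# restrictive distributed-firewall policy. All names below are placeholders.
import requests

NSX_MANAGER = "nsx-mgr.example.com"     # hypothetical NSX Manager
QUARANTINE_TAG_ID = "securitytag-10"    # e.g. a tag named "AntiVirus.Quarantine"
AUTH = ("admin", "changeme")            # placeholder credentials

def quarantine_vm(vm_moid):
    """Attach the quarantine security tag to a VM (identified by its vCenter MoRef ID)."""
    url = (f"https://{NSX_MANAGER}/api/2.0/services/securitytags/tag/"
           f"{QUARANTINE_TAG_ID}/vm/{vm_moid}")
    # Endpoint and HTTP verb are assumptions based on the NSX-v API docs; verify first.
    resp = requests.put(url, auth=AUTH, verify=False)  # lab sketch; validate certs in production
    resp.raise_for_status()
    return resp.status_code

# Example: an AV or IDS integration would call this when it trips a flag on a desktop.
# quarantine_vm("vm-1234")
```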

#vmworld #neuroscience David Eaglemen (@davideagleman) Director of the Laboratory for Perception and Action

David explains that we are made of very small stuff in a very big universe and that we have difficulty comprehending scale in the world we perceive. Take colors, for example: we only perceive about 10 percent of the spectrum. We can’t see the rest because our biology filters out other parts of it, such as radio waves. If you look across the animal kingdom, different species sample different parts of the spectrum to understand their slice of the world. We call this filtration of the world around us the Umwelt.

David asks what it is like to be blind. He asks the audience to imagine being a bloodhound, which understands its whole world through smell; as the bloodhound, imagine what it would be like to have an impoverished human nose. The point is that if we don’t have any awareness of something, we don’t perceive it.

As a neuroscientist, David is interested in whether we can expand our Umwelt. Implants have been used for a while, both retinal and cochlear (ear) implants. When they were first introduced, doctors did not believe they would work because they do not interpret the world in the same way as our biological parts. Amazingly, they worked just fine because the brain was able to interpret these signals and assign meaning to them.

David believes that these are just examples of peripheral devices that the brain figures out how to use. We see examples of this in the highly specialized peripheral devices in nature such as heat pits in snakes. As they are all peripherals, David figures that we should be able to add additional peripherals.

In David’s lab they are working on sensory substitution for the deaf. The idea is that sound in the world is translated into a set of signals that allow someone to understand it. David shows a vest that receives sound via Bluetooth and translates it into vibration patterns. In tests, a subject wears the vest over a number of days and is then able to interpret words even though they are completely deaf. The technology is also very cheap compared to implants and surgery.

David goes on to explain that they are testing the vest to see if a wearer can add senses beyond our native ones by gaining a direct perception of data transmitted to the vest. For more information on David’s research, see https://www.ted.com/speakers/david_eagleman

#vmworld VMworld 2015: Greg Gage (@phineasgreg) Founder of Backyard Brains

Greg explains that it takes a great deal of education to understand neuroscience, even though many people will suffer neurological diseases which have no cures. Inside the brain we have about 80 billion neurons that communicate using electrical currents. Greg demonstrates his invention, the SpikerBox, using cockroaches. Greg anesthetizes a cockroach and removes a limb (don’t worry, cockroaches can grow back limbs). The reason Greg uses cockroaches is that their neurons are similar to those in humans. Greg attaches the limb to the SpikerBox, a.k.a. the cockroach beatbox, and is able to see the neurons firing through it.

Greg then attaches electrodes to a volunteer and demonstrates how they can listen to neurons firing inside the body. The audience is also able to see the neurons react when the volunteer flexes his arm. Greg then takes two volunteers and demonstrates how, when connected through the SpikerBox, one volunteer can control the hand movements of the other.

http://ed.ted.com/lessons/the-cockroach-beatbox

#vmworld VMworld 2015: Dr. Fei-Fei Li on Computer Vision and General Learning (@drfeifei)

Ray O’Farrell (@ray_ofarrell) takes the stage and mentions that the Thursday session focuses on innovation. With that, Ray introduces Dr. Fei-Fei Li (@drfeifei), who directs the Stanford Artificial Intelligence Lab. Dr. Li contrasts the capabilities of a three-year-old child, who recognizes the world around them, with our current compute devices. While a child can comprehend things around them, even our most powerful computers lack these basic skills. While computers can take incredibly detailed pictures, they cannot “see” the world around them.

Dr. Fei-Fei has led the Computer Vision and General Learning Lab at Stanford. The goal is for computers to see an object and understand what is happening around it. To do this they have to train the computer through a series of images to understand what an object is.

Dr. Fei-Fei’s team used images from the internet through a project called ImageNet, which categorized billions of images. It took nearly 50,000 people working across 167 countries. They then opened up the images through the www.image-net.org project. With all this big data, they could leverage convolutional neural network algorithms to recognize objects.

Using this algorithm, they were able to learn things by analyzing Google Street View images. The algorithm was able to see patterns and define objects in cluttered images. The next step is to connect words and phrases based on snippets of the image to form sentences such as “the cat is lying on the bed”.

Think of a world where doctors can have a tireless set of eyes to monitor patients, where robots can search disaster zones, and where machines can travel to and explore new worlds.

Wednesday, September 2, 2015

#vmworld #EUC5733 VMware Horizon 6 Cloud Pod Architecture Best Practices and Futures

In this session we will look at Cloud Pod Architecture “CPA” as it relates to Horizon View. As VDI matures, adoption is leading to scaled-out deployments across multiple locations with complex designs.

Initially, customers wanted single-site HA for virtual desktop environments. As customers grew in certain verticals like healthcare, multi-site solutions were introduced, such as the “AlwaysOn Desktop” model.

Early multi-site configurations required heavy orchestration of management, user and virtual desktop metadata synchronization. With CPA we move to a federated model where pods are aware of other pods in other sites. A single-pod architecture can scale to 10,000 seats; to scale higher you need CPA. Between two pods, an intercommunication protocol is responsible for syncing information between sites.

The benefit of CPA is that you can manage policies globally because of global ADLDS replication. It is important that the inter-pod network is reliable to ensure information stays in sync. VMware has tested 2 sites, 4 pods and 20,000 concurrent sessions within its labs.

CPA takes the original Horizon pod architecture and moves it to a federated pod architecture. To build this, you have to understand your mission-critical applications so you know how much capacity you truly need to deliver in the event of a failure. With CPA there is the concept of a Global Entitlement versus an entitlement within a pod. The user is automatically directed to a local pod instance; if this is not available, it looks to a site instance or, lastly, any desktop that exists in the federation.

To configure it, you enable CPA on one pod and join the remaining pods. After the pods are federated, you configure global entitlements, entitling users to both local and remote pools within the federated pods. A user is always served a virtual desktop from the local pool first; if it is not available, the remote site is used.

The latest version of Horizon essentially wraps a GUI around the lmvutil utility that was introduced in View 6.0. With CPA you now see the remote pods show up in the dashboard along with their status. CPA does not do anything for the user metadata, but it will replicate the entitlement information between sites. In addition to CPA you need a load balancer/geo-DNS to redirect client sessions properly.
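
Since the GUI is a wrapper around lmvutil, the same CPA setup can be scripted. The sketch below shells out to lmvutil from Python; the flag names are recalled from the View 6.x documentation and the install path, hosts and credentials are placeholders, so check everything against lmvutil --help on your Connection Server.

```python
# Rough sketch of scripting the CPA workflow the GUI wraps, by calling lmvutil on a
# Connection Server. Flags, path and credentials below are assumptions to be verified.
import subprocess

LMVUTIL = r"C:\Program Files\VMware\VMware View\Server\tools\bin\lmvutil.cmd"  # assumed path
AUTH = ["--authAs", "admin", "--authDomain", "EXAMPLE", "--authPassword", "changeme"]

def lmvutil(*args):
    """Run a single lmvutil command and raise if it fails."""
    subprocess.run([LMVUTIL, *AUTH, *args], check=True)

# 1. Initialize the federation on the first pod.
lmvutil("--initialize")

# 2. From each additional pod, join the federation via a Connection Server in the first pod.
# lmvutil("--joinFederation", "--joinServer", "pod1-cs1.example.com",
#         "--userName", "EXAMPLE\\admin", "--password", "changeme")

# 3. Create a global entitlement, then associate pools and users with it.
# lmvutil("--createGlobalEntitlement", "--entitlementName", "GE-Desktops",
#         "--isFloating", "--scope", "ANY")
# lmvutil("--addPoolAssociation", "--entitlementName", "GE-Desktops", "--poolId", "Win7-Pool")
# lmvutil("--addUserEntitlement", "--entitlementName", "GE-Desktops", "--userName", "EXAMPLE\\jsmith")
```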

You can deploy multiple pods in a single site in a scale-up model; in the event that one pod fails, the user is directed to the second pod. Another valid architecture is for roaming users: as the entitlement always prefers local, the user will receive a desktop from the local site even when travelling between New York and London. Another use case is disaster recovery: in the event that the local pod failed, the user would receive a desktop from the pod in the DR datacenter.

Inter-pod communication uses VIPA “View Inter-Pod API” over port 8472, which is TLS protected. Global ADLDS replication happens over port 22389.
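
A quick way to sanity-check that a remote pod is reachable on those two ports is a simple TCP connect test. The hostname below is a placeholder, and an open port only proves connectivity, not that replication is healthy.

```python
# Check, from one Connection Server's perspective, that the CPA ports called out above
# are reachable on a Connection Server in a remote pod.
import socket

CPA_PORTS = {8472: "VIPA (View Inter-Pod API)", 22389: "Global ADLDS replication"}

def check_remote_pod(host, timeout=3):
    results = {}
    for port, purpose in CPA_PORTS.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                results[purpose] = "reachable"
        except OSError as exc:
            results[purpose] = f"unreachable ({exc})"
    return results

print(check_remote_pod("pod2-cs1.example.com"))  # hypothetical remote Connection Server
```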

On the roadmap are Global Application Entitlements, which are CPA for RDS applications. They require Agent 3.5 or higher and currently work over HTML Access/Blast. In addition, VMware is working on how much higher it can raise scalability.

#VMworld Lakeside Software’s SysTrack Cloud Edition announced at show

I sat down with Tal Klein (@virtualtal) to discuss Lakeside’s (@lakesidesoft) SysTrack Cloud Edition. The big news is SysTrack Cloud Edition, which is built on vCloud Air. This solution brings all the power of SysTrack to customers in a Software-as-a-Service model.

Tal mentions that they have also been working very hard to allow customers access to all the analytics that SysTrack collects. “You know user experience is like the canary in the coalmine” says Tal. “A small problem with user experience can be an early indicator of general service problems on a broader scale”.

Lakeside has developed a method to anonymize customer data to remove any security sensitive information. In doing so Lakeside has enabled customers to run comparison analytics against their industry peers to understand how well they are managing their environment.

This is a unique capability that allows customers to understand, in a meaningful way, whether they are providing a good user experience. SysTrack has always been able to tell you how you are doing within your organization, but the external comparison allows customers to understand how they are doing on a much broader scale. Lakeside calls this feature the SysTrack Community. This feature will eventually lead to more awareness of how customers are faring with issues like risk and compliance. SysTrack Community is fully integrated with SysTrack 7.2 and is shipping now.

#vmworld Atlantis says HyperScale This! (#hyperscalethis!)

Atlantis (@AtlantisSDS) has been pushing innovation aggressively since the introduction of Atlantis USX just a few short years ago. They have doubled down on this strategy with Atlantis HyperScale, which you can tell the team is happy about.


The engineering group continues to move quickly and announced several key new features at the show. With the introduction of Atlantis USX 3.0, the solution now supports:

  • Replication (volume to volume)
  • Datastore-level snapshots
  • Full support for VMware vVols on any storage device

A great feature for customers is “Atlantis Insight”, which combines call-home, auto-update and service history built right into the software. Thanks to Mark Nijmeijer (@MarkNijmeijerCA) for the great overview at the booth, and congratulations to Chetan (@chetan_), Ruben (@rspruijt) and the team for a great release!

#vmworld #EUC5762 End-to-End Security with AirWatch NSX and Intelligent Networking

Mobile Device Management “MDM” is the ability to secure any endpoint and understand what is connected to your network. A baseline capability of MDM is ensuring a device is not compromised and corporate data is secure. AirWatch has an automated compliance engine that provides periodic checks to make sure no corporate policies have been violated. For example if Android has a vulnerability, you can ensure that the device is patched before allowing access and dynamically remove access if it is not.

AirWatch provides a tunnel that enables access to enterprise applications in a very secure and specific manner. Unlike traditional VPNs that provide general access, the AirWatch tunnel is an app-specific VPN tunnel. Because AirWatch is securing the entry point, it is easy for the policy engine to revoke or grant access to corporate data.

With the integration of NSX you have the ability to provide micro-segmentation inside the datacenter. Micro-segmentation allows you to containerize and secure, through policy, any object within your virtual environment. When you combine AirWatch’s policy-based security engine with NSX’s micro-segmentation capability, you create a zero-trust environment.

The demonstration video of AirWatch and NSX integration can be found at the following URL https://youtu.be/ftYr4UKzlhQ

#vmworld #EUC4825 Horizon for Linux Technical Deep Dive

Why Linux virtual desktops? They are generally less costly than Windows desktops, as you do not require a VDA license for connectivity. When Linux desktops are required, you can protect and centralize them using Horizon. In addition, Linux is often used for graphics rendering. Horizon for Linux supports Ubuntu, Red Hat Enterprise Linux, CentOS and NeoKylin (a Chinese distro).

You need Horizon 6.1.1 with vSphere 5.5 Update 2, the Horizon Client 3.4 or higher, and Linux desktops running the latest Linux Horizon Agent. In addition, the Linux desktops require:

  • Java Runtime version 7 Update 75 (76 for Ubuntu) for the Agent (Note: this is bundled with the latest install)
  • DNS resolution must work throughout the environment
  • The Linux desktop must be running VMware tools
  • The desktop must be joined to the Active Directory

Audio and HTML Access are supported, as well as 3D graphics (vDGA, vGPU). There is no concept of linked clones (View Composer) at this point in time. Deployment works with a vCenter template or clone, and automated provisioning is supported through the vCenter template deployment mechanism.

In addition you can use PowerCLI to create a Linux template, clone desktops, deploy the Agent and add the desktops to the pool. Linux Virtual desktops are natively supported in Horizon so no additional licensing is required.
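
For those not using PowerCLI, roughly the same clone-from-template step can be sketched with pyVmomi. The vCenter, credentials, template, folder and naming details below are placeholders, and Agent installation and pool registration are left out.

```python
# Minimal pyVmomi sketch of cloning Linux desktops from a template (the session referenced
# PowerCLI for this; this is an alternative sketch, not the documented procedure).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_obj(content, vimtype, name):
    """Return the first managed object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.DestroyView()

ctx = ssl._create_unverified_context()   # lab only; use proper certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

template = find_obj(content, vim.VirtualMachine, "ubuntu1404-horizon-template")  # hypothetical
folder = find_obj(content, vim.Folder, "Linux-Desktops")                         # hypothetical
pool = find_obj(content, vim.ResourcePool, "Resources")

spec = vim.vm.CloneSpec(location=vim.vm.RelocateSpec(pool=pool), powerOn=True, template=False)
for i in range(1, 4):                     # clone three desktops as an example
    template.Clone(folder=folder, name=f"lnx-desktop-{i:02d}", spec=spec)

Disconnect(si)
```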

Linux desktops do not use PCoIP but rather use Secure WebSockets to remotely display the desktop through the Blast Secure Gateway (used for HTML 5 access). Session Shadowing is supported however.

Tuesday, September 1, 2015

#vmworld DRS Advancements in vSphere 6, Advanced Concepts and Future Directions

Most customers are using DRS in fully automated mode with affinity rules. Less than half are using resource pools, although 99.8% are using maintenance mode. This discussion focuses on the specifics of how DRS works.

A variety of stats and metrics are considered during Initial Placement (IP) and Load Balancing (LB). A few are key, such as reserved CPU and memory, along with VM performance statistics such as active memory and CPU. All of these stats are taken into account to ensure the VM has sufficient resources.

In addition, the constraints within the cluster are considered: HA and the admission control policies, affinity and anti-affinity rules, the number of concurrent vMotions and how long it would take to vMotion the VM, datastore connectivity, the vCPU-to-pCPU ratio, existing shares, and agent or special VMs (e.g. vShield Edge or Fault Tolerant (FT) VMs).

For every move DRS makes, a cost-versus-benefit analysis is done. The general idea is that the benefit to the VM must be higher than the overall negative impact of moving it. The last consideration is the threshold setting configured by the vSphere administrator. VMware recommends not changing the default aggressiveness setting for most environments (it is set to 3 by default).

In vSphere 6.0 you can now specify a network bandwidth reservation on VMs as one of the metrics. This will invoke DRS if a pNIC is saturated or fails.

In addition, vSphere 6 introduced Cross-vCenter vMotion (xvMotion) placement, in which a unified host and datastore recommendation is made by running a combined DRS and Storage DRS (SDRS) algorithm. All the same constraints are respected in a cross-vCenter migration. Affinity and anti-affinity rules are migrated along with the VM to preserve them; this is referred to as a rule migration.

vSphere 6 increased the cluster scale to 64 hosts and 8,000 VMs running DRS and HA. In general, operational throughput has been increased by 66%: VMs power on quicker, clone faster, vMotion quicker and transition to host maintenance mode faster on vSphere 6.

vSphere Update Manager uses DRS extensively to facilitate an upgrade. In addition, many other components of the SDDC leverage either DRS directly or the DRS algorithms.

If you want uniform distribution across all hosts, you can set either of these advanced options (a scripted example follows the note below):

  • LimitVMsPerESXHost
  • LimitVMsPerESXHostPercent

Note: these options will not cause DRS to violate its algorithm for ensuring capacity and resources for the VM.
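
For reference, here is a hedged pyVmomi sketch of applying the DRS settings discussed above (fully automated mode, the default migration threshold of 3, and one of the per-host limit options). The vCenter name, cluster name and limit value are placeholders; verify the option names against your vSphere version.

```python
# Sketch of reconfiguring a cluster's DRS settings with pyVmomi.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab only
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "Prod-Cluster-01")  # hypothetical cluster
view.DestroyView()

drs_config = vim.cluster.DrsConfigInfo(
    enabled=True,
    defaultVmBehavior="fullyAutomated",   # fully automated is the best practice noted above
    vmotionRate=3,                        # default aggressiveness; VMware recommends leaving it
    option=[vim.OptionValue(key="LimitVMsPerESXHost", value="20")])  # example limit value

spec = vim.cluster.ConfigSpecEx(drsConfig=drs_config)
task = cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
print("Reconfigure task submitted:", task)

Disconnect(si)
```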

There are some best practice guidelines that VMware recommends:

  • Full connectivity to all storage pools for all hosts
  • Set BIOS power management on the host to “OS control” (note: OS control is a min BIOS setting; High Performance is a max but provides no power savings)
  • Make sure the power mode on ESXi is set to balanced
  • Fully automated is considered a best practice
  • Don’t dilute resource pool shares by powering on too many VMs within them when you create them
  • Do not set CPU affinity, as it pegs the VM to that core rather than guaranteeing any resources.

In the future, DRS will support proactive High Availability. Proactive HA will trigger based on hardware health metrics. For example, if a host is partially degraded, DRS will quarantine it; this means DRS will opportunistically evacuate VMs and will not use the host as a migration target. If the host is fully degraded, the VMs will be proactively evacuated. Like DRS, there will be an aggressiveness setting to allow you to throttle the reaction of DRS with proactive HA. With tighter integration with NSX, flow IDs can be used to co-locate chatty VMs; this is not easy to do at significant scale, but with NSX the information is already available.

With integration into vRealize Operations, DRS will use a predictive demand algorithm to allow the environment to adjust based on expected demand. VMware is already running Hybrid DRS, which allows DRS to seamlessly burst into vCloud Air. This will be available in future releases of the solution as well.

#vmworld Architecting Site Recovery Manager 6.1

SRM 6.1 delivers policy-driven protection groups. The difference is that rather than explicitly adding the VM to a protection group, you simply select the storage volume and the VM is automatically protected. The same applies whether the VM is deployed to or Storage vMotioned onto the storage volume.

If you create more protection groups in your SRM deployment, you have more granular flexibility for testing failover. Creating fewer protection groups is less complex but provides less flexibility. The right combination will vary by customer.

SRM supports active-passive, active-active (production in one site, development in another), bi-directional failover (production in both sites, with each one serving as a failover target for the other) and multi-site (think remote branch to central site) topologies. In the past there was no way to leverage stretched storage with SRM; in SRM 6.1 you can now use stretched storage. Failover differs in this model as it can now be orchestrated through a cross-vCenter vMotion (latency is typically 5-10 ms, or 50 to 100 km, in this model).

SRM is a paired topology, so with a multi-site topology you need an SRM server in the central datacenter for each remote site. You can also consolidate several remote sites to a central single-vCenter SRM model before failing over to the central site. Keep in mind that each VM can only be replicated once, so multi-hop scenarios are not natively supported in SRM. It is recommended that you do not make these topologies any more complicated than you need to.

Recovery Time Objective “RTO” is a very important measurement when designing your DR strategy. RTO is the time between when the disaster occurs and when the system is fully recovered. IP customization (changing IPs during the recovery) actually takes a number of steps and a fair bit of time. One way around this is to use technologies like OTV or NSX to enable stretched layer 2 networks between datacenters so the network IPs stay unchanged. With the integration of NSX and SRM 6.1 you have the concept of a universal logical switch, which enables the switching to be automatically mapped between sites.

Other things you can consider for a lower RTO in an SRM architecture are:

  • Fewer larger NFS datastores (a large NFS datastore can take up to 10 seconds to mount)
  • Fewer Protection Groups
  • Don’t replicate VM swap files, put them on non-replicated datastores (weigh this against the overall complexity)
  • Fewer Recovery Plans

Recommended VM Considerations

  • Install VM tools in all VMs
  • Suspend VMs on Recovery. Although this can increase your RTO, it frees up your resources at your recovery site (works best with an active-active model; a production and development failover site)
  • Power off VMs (the consideration is similar to suspending)

Recovery Site

  • Ensure that the vCenter is sized properly; it works hard during recovery situations
  • If you have an active-active model you may need more hosts as you potentially double the workload during failover

VMware has a few best practices for implementing SRM, such as being clear with the business by providing a menu of SLAs.

#vmworld VMware Virtual SAN – Architecture Deep Dive

The VMware Software-Defined Storage vision is app-centric and provides policy-driven automation. VMware Virtual SAN is a hyper-converged architecture that can leverage flash and provides scalability through a distributed architecture. VMware’s product goal was to empower the vCenter administrator to manage their own storage without needing deep storage skill sets. It is also important that it works seamlessly with VMware’s entire product portfolio and vSphere features.

This vision is reflected in the architecture decisions that were made in developing Virtual SAN. It was decided to use a hyper-converged architecture integrated into the host hypervisor. Because it is a distributed architecture, there is no single potential bottleneck in the framework.

Within Virtual SAN you have the concept of disk groups. Each host can have up to 5 disk groups and up to 40 drives. There are two configurations: a hybrid tier (flash plus spinning disk) and an all-flash tier (note: a different algorithm is used for the hybrid and all-flash tiers, so you do not want to create a hybrid tier and then manually swap the disks to flash). In a hybrid tier there is 1 caching device per disk group and 1-7 spinning disks (HDDs) per disk group. You can let the system autodiscover the drives during setup or select specific disks.

Virtual SAN uses the flash device to deliver the performance you need. The flash is split between a 70% read cache and a 30% write-back buffer. Virtual SAN 6.1 (6.2 is in beta testing) now supports stretched clusters, which allows a cluster to span datacenters within a metropolitan network (there is a latency tolerance in this model).
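
To make the numbers above concrete, here is a tiny worked example of the hybrid cache split and the disk group limits mentioned earlier (1 caching device plus 1-7 HDDs per disk group, up to 5 disk groups per host); the 400 GB flash size is just an illustration.

```python
# Worked example of the hybrid disk group rules described above.
def hybrid_cache_split(flash_gb):
    """Return (read_cache_gb, write_buffer_gb) for a hybrid disk group's flash device."""
    return flash_gb * 0.70, flash_gb * 0.30

def validate_hybrid_host(disk_groups):
    """disk_groups: list of (cache_devices, capacity_hdds) tuples for one host."""
    if len(disk_groups) > 5:
        return False, "no more than 5 disk groups per host"
    for cache, hdds in disk_groups:
        if cache != 1 or not 1 <= hdds <= 7:
            return False, "each hybrid disk group needs 1 caching device and 1-7 HDDs"
    return True, "ok"

read_gb, write_gb = hybrid_cache_split(400)      # e.g. a 400 GB SSD
print(f"read cache: {read_gb:.0f} GB, write buffer: {write_gb:.0f} GB")
print(validate_hybrid_host([(1, 7), (1, 5)]))    # two disk groups on this host
```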

Virtual SAN is tightly integrated into the hypervisor. The storage algorithms are built right into the hypervisor. A demo is displayed showing how vCenter is aware that Virtual SAN data needs to be considered when putting the host in maintenance mode. You have three options: ensure accessibility, full data migration or no data migration. Ensure accessibility is designed for short term maintenance, full data migration for replacing the host.

Virtual SAN is object-based storage: each VM consists of a number of objects which are individually distributed. Virtual SAN is a highly scalable solution; as you add hybrid disk groups you increase performance (almost doubling it, according to VMware’s internal testing).

VMware is doing a lot of work with tier-one applications like Oracle 11g RAC workloads to measure performance. Results show high performance that is very predictable from a scalability perspective.

Virtual SAN 6.0 also introduced vsanSparse snapshots and clones, which allow greater snapshot depth (up to 32 snapshots per object). The important point about vsanSparse is that it has a negligible performance impact. Virtual SAN provides a plugin that surfaces all the metrics and measurements within the vSphere Web Client, allowing you to see exactly what is happening with Virtual SAN cluster performance. This visibility has been extended to vRealize Operations so that you have visibility across multiple Virtual SAN clusters.

From a roadmap perspective, deduplication is coming, as well as end-to-end protection for data over the network and at rest through software-based checksums.

#vmworld How to Drive Intelligent Provisioning and Operation of VMs Using vRealize Automation and Operations

A title written by engineers ;-). VMware’s approach to the Cloud Management Platform “CMP” is an integrated set of tools for managing private and public clouds. This contrasts with other CMP solutions, which tend to provide a distinct set of tools just for managing third-party clouds. VMware’s approach is to automate the SDDC at scale, enable collaboration and self-service, and deliver continuous remediation and optimization. This presentation focuses on SDDC automation as well as remediation.

The vRealize Automation “vRA” management pack is introduced. It requires vRealize Operations and Automation 6.X or higher and management pack 1.0. From native vRealize Automation you can see basic information around utilization; you do not, however, see true health information. You can see this from vRealize Operations “vROps” Manager by installing the pack, which enables vRealize Operations to talk to Automation and pull in all the objects. You can now see Automation objects such as business groups, blueprints and tenants from within vRealize Operations.

One useful benefit is seeing the usage patterns for the reservations created in vRA. Several dashboards are provided with the pack that now appear in the vROps console, including a reservation and a tenant view.

There are several things to be aware of when integrating the pack: you need the sysadmin username and password from vRA as well as a SuperUser account. The SuperUser account is not a native account and needs elevated access to all tenants and fabrics. One of the gotchas is ensuring the vRA self-signed certificate is replaced with a certificate that supports a cipher level higher than TLS 1.0.
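
A small, self-contained way to check what the vRA endpoint is actually negotiating (useful when chasing that TLS 1.0 gotcha) is to ask Python’s ssl module. The hostname below is a placeholder, and certificate validation is disabled only because the appliance may still carry a self-signed certificate.

```python
# Report the TLS protocol version negotiated with an endpoint.
import socket
import ssl

def negotiated_tls_version(host, port=443, timeout=5):
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE   # tolerate a self-signed certificate for this check only
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()      # e.g. "TLSv1.2"

print(negotiated_tls_version("vra.example.com"))  # hypothetical vRA appliance
```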

#vmworld Keynote with Pat

Pat takes the stage and mentions that it is his 6th VMworld and 4th as CEO. By 2019, half the world's population will be online, though that connectivity is not expected to be uniformly distributed across the world. Pat explains the ubiquity of IT, how it is used to provide connectivity from the stratosphere down to medical procedures within the human body. This is having huge economic impacts; internet growth is impacting global GDP growth.

Pat introduces 5 major concepts:

  • Asymmetry in Business or an imbalance in power between the incumbents and upstarts. Upstarts have nothing to lose and can embrace new models. What has changed is that mobile-cloud technology has provided an unlimited set of resources to these new startups allowing them to reach global markets. This is changing industries.

Pat cites the thinking around 3D printing, which may allow North America to repatriate manufacturing as it becomes more about science and technology while reducing costs. Pat then cites the current utilization of your typical automobile, around 4%; with the shift to shared autonomous cars, this utilization is expected to increase to 75%. The bottom line is that you need to innovate like a startup and deliver like an enterprise.

  • The professional era of Cloud is one that is enabled by the Unified Hybrid Cloud. This is enabled by hybrid applications that are aware that they will span private and public domains. Pat describes a post-Snowden era which has changed the thinking around Cloud: it has made a home-grown Cloud a necessity. The idea that “I will use Cloud, but it will reside within my borders or boundaries” has become a key requirement.

The bottom line in the professional era of Cloud is that the Unified Hybrid Cloud is the future.

  • Security, or the ability of security to protect people. The security industry has been messy, yet never before have we had this much innovation in security. Security spend is growing quickly while IT budgets are declining. Virtualization provides the Rosetta Stone for security because it sits in the middle of everything (apps, data and users). It allows us to deliver trusted services on untrusted devices.

Security is enabled through a virtualization security architecture that allows you to integrate it rather than bolt it on.

  • Automate everything. With the emergence of AI we will move from being reactive to proactive with technology. This proactive technology era will be enabled by big data, analytics and real-time application development at scale. It will enable us to automate everything.

Imperative five is taking risks.

  • Taking risk becomes the lowest risk: no longer is being an incumbent a guarantee of success. In this new world, taking risks will become the least risky option.

Pat closes with a promise to the audience that in this time of fundamental change VMware will be there to help you weather and navigate this new world.

#vmworld VMworld 2015 General Keynote NSX 6.2 @martin_casado

Martin introduces several challenges in delivering applications, especially mobile applications. Applications have evolved into a network, while infrastructure has evolved into a software platform. Martin begins to explain network virtualization and NSX. NSX 6.2 was announced yesterday and brings with it a broad range of new services. VMware NSX 6.2 adds better integration with physical infrastructure. The use cases for NSX driving adoption are Security, Automation and Application Continuity (Application High Availability).

A demo is introduced which showcases a development cloud. An overview is provided of a three-tier application which is provisioned and reports a problem. The demo cuts over to the vRealize Operations Manager console, which shows the integration of NSX and the new release of vRealize Operations. Because of the integration of NSX with the physical networking devices, the status of the Cisco switches is available in the dashboard as well, and the problem is quickly resolved (note: not sure if this is leveraging the now-integrated native Hyperic proxy monitoring of network devices or native NSX 6.2 features).

#vmworld VMworld 2015 Keynote Day 2 @spoonen

Sanjay Poonen takes the stage and shows The Economist “Planet of the Phones” cover. The message is 80% of the world will have advanced computing power in their pockets. “It will be the remote control for your lives”. At the core of this movement two things need to come together; consumer simplicity and enterprise security. VMware believes they are well suited to deliver this to the market.

The challenge of “Any Application to Any Device” is delivered through VMware Workspace Suite. VMware announced the Workspace suite last year. Sanjay mentioned that VMware was the first to introduce the term Workspace several years ago. This year they introduced Identity Manager. But how do all these products connect?

The source of the connection is the hypervisor with virtual storage and networking. Along with management, these roll up into the SDDC. But this has to integrate with the End User Computing platform. VMware’s solution ties virtual desktop, mobile and content collaboration strongly together.

In each of these areas VMware is leading innovation. Today VMware is the leapfrog leader in this space. Mobile is the new desktop and it is a category that will grow exponentially. As VMware considers Identity Management they see the need for simplicity without the complexity seen from other vendors. Identity Management is built on the Horizon Workspace framework.

Jim Alkove from Microsoft is introduced and Jim explains that Windows 10 is geared for enterprise mobility management. Microsoft sees tremendous opportunity to make Windows 10 the simplest OS to manage without compromising security.

Sanjay cues up the demo, in which AirWatch seamlessly manages Windows 10. Compliance can be enforced in real time using the AirWatch policy engine. In the demo, AirWatch is used to orchestrate App Volumes to deliver applications (Project A2 (“A-squared”) is in Tech Preview). The demo then shows AirWatch auto-configuring a host of mobile applications, followed by the integration of Horizon View (VDI) into AirWatch. Finally, the demo switches to the integration of AirWatch and NSX, tying the AirWatch VPN and the AirWatch and NSX policy engines together.