Friday, September 29, 2017

Microsoft Ignite 2017: High Availability for your Azure VMs

The idea with Cloud is that each layer is responsible for its own availability, and by combining these loosely coupled layers you get higher overall availability. Availability should be a consideration before you begin deployment. For example, thinking about maintenance, and how and when you would take a planned outage, informs your design. You should anticipate the types of failures you can experience so you can mitigate them in your architecture.

There is an emerging concept on hyper-scale cloud platforms called grey failures. A grey failure is when your VMs, workloads or applications are not down but are not getting the resources they need.

“Grey Failure: The Achilles Heel of Cloud Scale Systems”

Monitoring and Alerting should be enabled for any VMs running in IaaS. When you open a ticket in Azure, any known issues are surfaced as part of the support request process. This is Azure’s automated analytics engine providing guidance before you even submit the ticket.

Backup and DR plans should be applied to your VMs. Azure allows you to create granular retention policies. When you recover VMs you can restore over the existing VM or create a new one. For DR you can leverage ASR to replicate VMs to another region. ASR is not multi-target, however, so you cannot replicate VMs from the enterprise to an Azure region and then on to a different region in one step. It would be two distinct steps: first replicate and fail over the VM to Azure, and then set up replication between the two regions.
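As a rough illustration of the backup piece, here is a minimal sketch of enabling Azure Backup on an IaaS VM with the AzureRM PowerShell module of that era; the vault, policy, VM and resource group names are placeholders, not anything from the session.

```powershell
# Sketch: enable Azure Backup for an IaaS VM (AzureRM module; names are placeholders)
$vault = Get-AzureRmRecoveryServicesVault -Name "contoso-vault"
Set-AzureRmRecoveryServicesVaultContext -Vault $vault

# Pick a retention policy already defined in the vault
$policy = Get-AzureRmRecoveryServicesBackupProtectionPolicy -Name "DefaultPolicy"

# Protect the VM with that policy
Enable-AzureRmRecoveryServicesBackupProtection -Policy $policy `
    -Name "app-vm-01" -ResourceGroupName "app-rg"
```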

Maintenance: Microsoft now provides a local endpoint with a simple REST API that provides information on upcoming maintenance events. These can be surfaced within the VM so you can trigger a soft shutdown of your virtual instance. For example, if you have a single VM (outside an availability set) and the host is being patched, the VM can complete a graceful shutdown.
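A minimal sketch of polling that endpoint from inside the VM, assuming the Scheduled Events metadata service; the api-version shown may differ from the one that was current at the time.

```powershell
# Sketch: query the in-VM Scheduled Events metadata endpoint for upcoming maintenance
$uri = "http://169.254.169.254/metadata/scheduledevents?api-version=2017-11-01"
$events = Invoke-RestMethod -Uri $uri -Headers @{ Metadata = "true" } -Method Get

# Each event describes the type (Freeze, Reboot, Redeploy) and the affected resources,
# giving you a window in which to drain or gracefully stop your workload
$events.Events | Select-Object EventId, EventType, Resources, NotBefore
```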

Azure uses VM-preserving technology when it performs underlying host maintenance. For updates that do not require a reboot of the host, the VM is frozen for a few seconds while the host is updated. For most applications this is seamless; if it is impactful, you can use the REST API to react to the event.

Microsoft batches host reboot requirements so that they are applied at once rather than periodically throughout the year, improving platform availability. You are proactively notified 30 days ahead of these events. One notification is sent per subscription to the administrator, and the customer can add additional recipients.

An Availability Set is a logical grouping of VMs within a datacenter that allows Azure to understand how your application is built so it can provide redundancy and availability. Microsoft recommends that two or more VMs be created within an availability set. To get the 99.95% SLA you need to deploy your VMs in an availability set. Availability Sets provide fault isolation for your compute.

An Availability Set with Managed Disks is called a Managed Availability Set. With a Managed Availability Set you get fault isolation for both compute and storage. Essentially it ensures that the VMs' managed disks are not placed on the same storage.
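A sketch of creating such an availability set with the AzureRM module; names and domain counts are illustrative, and this assumes a recent AzureRM.Compute version where the -Sku parameter accepts "Aligned" (earlier builds exposed a -Managed switch instead).

```powershell
# Sketch: create a managed ("Aligned") availability set, then reference it when creating VMs
New-AzureRmAvailabilitySet -ResourceGroupName "app-rg" `
    -Name "app-avset" `
    -Location "eastus2" `
    -PlatformFaultDomainCount 2 `
    -PlatformUpdateDomainCount 5 `
    -Sku Aligned   # "Aligned" = managed availability set: managed disks land on separate storage fault domains
```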

Microsoft Ignite 2017: Tips & Tricks with Azure Resource Manager with @rjmax

The AzureRM vision is to capture everything you might do or envision in the cloud. This extends from infrastructure and configuration to governance and security.

Azure is seeing about 200 ARM templates deployed per second. The session will focus on some of the template enhancements and how Microsoft is more closely integrating identity management and delivering new features.

You now have the ability to create ARM deployments across subscriptions (service providers pay attention!). You can also deploy across resource groups. The two declarations within the ARM template that enable this are listed below (a sketch of their use follows the list):

“resourceGroup”

“subscriptionId”
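A rough sketch of how those two properties appear on a nested deployment resource; the subscription ID, resource group names and the empty inner template are placeholders, and the exact apiVersion may differ from what was shown in the session.

```powershell
# Sketch: a nested "Microsoft.Resources/deployments" resource targeting another
# resource group / subscription (IDs and names are placeholders)
$template = @'
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Resources/deployments",
      "apiVersion": "2017-05-10",
      "name": "crossScopeDeployment",
      "resourceGroup": "target-rg",
      "subscriptionId": "00000000-0000-0000-0000-000000000000",
      "properties": {
        "mode": "Incremental",
        "template": {
          "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
          "contentVersion": "1.0.0.0",
          "resources": []
        }
      }
    }
  ]
}
'@

$template | Out-File .\cross-scope.json
New-AzureRmResourceGroupDeployment -ResourceGroupName "source-rg" -TemplateFile .\cross-scope.json
```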

You may be wondering how to share your templates, increase their reliability, and support them after deployment.

Managed Applications

Managed Applications simplify template sharing. Managed applications can be shared or sold; they are meant to be simple to deploy; they are contained so they cannot be broken; and they can be connected. Connected means you define what level of access you need to the application after it has been deployed for ongoing management and support.

For additional details on Managed applications please see https://docs.microsoft.com/en-us/azure/azure-resource-manager/managed-application-overview .

Managed applications are available in West US and West US Central but will be global by the end of the year. When you define a managed application through the Azure portal you determine if it is locked or unlocked. If it is locked you need to define who is authorized to write to it.


By default Managed Applications are deployed within your subscription. From within the access pane of the Managed Application you can share it with other users and subscriptions. Delivering Managed Applications to the Azure Marketplace is in Public Preview at this moment.

Managed Identity

With Managed Identity you can now create virtual machines with a service principal provided by Azure Active Directory. This allows the VM to obtain a token for service access, avoiding passwords and credentials in code. To learn more have a look here

https://docs.microsoft.com/en-us/azure/active-directory/msi-overview 
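A sketch of how a VM with Managed Service Identity could request a token at that time; this assumes the 2017-era MSI VM extension, which exposed a local token endpoint on port 50342 (the mechanism has since moved to the instance metadata service).

```powershell
# Sketch: request an ARM token from the local MSI endpoint inside the VM (2017-era MSI extension)
$response = Invoke-RestMethod -Method POST `
    -Uri "http://localhost:50342/oauth2/token" `
    -Headers @{ Metadata = "true" } `
    -Body @{ resource = "https://management.azure.com/" }

# The bearer token can then be used against the ARM REST API with no stored credentials
$headers = @{ Authorization = "Bearer $($response.access_token)" }
Invoke-RestMethod -Uri "https://management.azure.com/subscriptions?api-version=2016-06-01" -Headers $headers
```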

ARM Templates & Event Grid

You can use Event Grid to collect all ARM events and requests, which can be pushed to an endpoint or listener. To learn more about Event Grid read here

https://buildazure.com/2017/08/24/what-is-azure-event-grid/
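A hedged sketch of wiring this up with the AzureRM.EventGrid module of the time: creating an event subscription at the Azure subscription scope pushes ARM resource events to a webhook listener. The endpoint URL and subscription name are placeholders.

```powershell
# Sketch: subscribe a webhook listener to ARM events for the current Azure subscription
# (assumes the AzureRM.EventGrid module; the endpoint URL is a placeholder)
New-AzureRmEventGridSubscription `
    -EventSubscriptionName "arm-activity-listener" `
    -Endpoint "https://contoso-listener.azurewebsites.net/api/updates"
```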

Resource Policies

You can use Resource Policies to do Location Ringfencing. Location Ringfencing allows you to define a policy to ensure your data does not leave a certain location.


You can also restrict which VM classes people can use, for example to prevent your developers from deploying extremely expensive classes of VMs.
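A sketch of what a location ring-fence policy could look like via PowerShell; the region list, names and subscription ID are placeholders, and a VM-class restriction follows the same pattern.

```powershell
# Sketch: a "location ring-fence" policy that denies resources outside approved regions,
# assigned at subscription scope (region list and IDs are placeholders)
$rule = @'
{
  "if": {
    "not": { "field": "location", "in": [ "canadacentral", "canadaeast" ] }
  },
  "then": { "effect": "deny" }
}
'@

$definition = New-AzureRmPolicyDefinition -Name "allowed-locations" `
    -DisplayName "Restrict resources to approved regions" -Policy $rule

New-AzureRmPolicyAssignment -Name "allowed-locations-assignment" `
    -PolicyDefinition $definition `
    -Scope "/subscriptions/00000000-0000-0000-0000-000000000000"

# An "allowed VM sizes" policy follows the same pattern, testing the
# "Microsoft.Compute/virtualMachines/sku.name" field against an approved list.
```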

Policies can be used to limit access from the full set of marketplace images to just a few. You can find many starting-point policies on GitHub

https://github.com/azure/azure-policy-samples

Azure Policies are in Preview and additional information can be found here:

https://azure.microsoft.com/en-us/services/azure-policy/

Microsoft Ignite 2017: How to get Office 365 to the next level with Brjann Brekkan

It is important that customers are configured with a single identity or tenant. You should look at identity as the control plane, or the single source of truth. Azure Active Directory “AD” has grown 30% year-over-year to 12.8 million customers. In addition there are now 272,000 apps in Azure AD. Ironically, the most used application in Azure AD is Google Apps: customers are using Azure AD to authenticate Google services.

Azure AD is included with O365 so there is no additional cost. Identity in O365 consists of three different types of users:

  1. Cloud Identity: accounts live in O365 only
  2. Synchronized Identity: accounts sync with a local AD Server
  3. Federated Identity: certificate-based authentication backed by an on-premises deployment of AD Federation Services.

The Identity can be managed using several methods.

Password Hash Sync ensures you have the same password on-premises as in the cloud. The con to hash sync is that disabled accounts or user edits are not reflected until the sync cycle completes. In hash sync the hashes on-premises are not identical to those in the cloud, but the passwords are the same.

Pass-through Authentication: you still have the same password, but passwords remain on-premises. A Pass-through Authentication “PTA” agent is installed on your enterprise AD server. The PTA agent handles the queue of requests from Azure AD and sends the validations back once authenticated.

Seamless Single Sign-On works with both Password Hash Sync and Pass-through Authentication. This is done with no additional requirement onsite. SSO is enabled during the installation of AD Connect.

You do not need more than one Azure AD if you have more than one AD on-premises. One Azure AD can support hundreds of unique domain names. You can also mix cloud-only accounts and on-premises synchronized accounts. You can use PowerShell and the Graph API instead of AD Connect to synchronize and manage users and groups, but it is much more difficult. AD Connect is required for Hybrid Exchange, however.

There are six general use cases for Azure AD:

  1. Employee Secure Access to Applications
  2. To leverage Dynamic Groups for automated application deployment. Dynamic groups allow move, join and leave workflow processes
  3. To federate access for Business-to-Business communication and collaboration (included in Azure AD, 1 license enables 5 collaborations)
  4. Advanced threat and Identity protection. This is enabled through conditional access based on device compliance.
  5. To abide by governance and compliance industry regulations. Access Review is in public preview; it identifies accounts that have not accessed the system for a while and prompts the administrator to review them.
  6. To leverage O365 as an application development platform

With Azure AD Premium you get AD Connect Health, dynamic group membership, and Multi-Factor Authentication for all objects, which can be applied when needed rather than always on. In addition, there is a better overall end-user experience.
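For the dynamic group piece, here is a minimal sketch assuming the AzureAD Preview PowerShell module of the time; the display name, mail nickname and membership rule are purely illustrative.

```powershell
# Sketch: create a dynamic Azure AD group whose membership follows an attribute rule
# (assumes the AzureAD Preview module; names and rule are illustrative)
New-AzureADMSGroup -DisplayName "Sales Staff" `
    -MailEnabled $false -MailNickname "salesstaff" -SecurityEnabled $true `
    -GroupTypes "DynamicMembership" `
    -MembershipRule '(user.department -eq "Sales")' `
    -MembershipRuleProcessingState "On"
```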

Thursday, September 28, 2017

Microsoft Ignite 2017: Protect Azure IaaS Deployments using Microsoft Security Center with Sarah Gender & Adwait Joshi

Security is no longer a barrier to adopting cloud. It is, however, a shared responsibility between the cloud provider and the tenant. It is important that the tenant understands this principle so they properly secure their resources.


Securing IaaS is not just about securing VMs but also networking and services like storage. It is also about securing multiple clouds, as many customers have a multi-cloud strategy. Things like malware protection still need to be applied in the cloud, but how they are applied is different. Key challenges specific to cloud are:

  • Visibility and Control
  • Management Complexity (a mix of IaaS, PaaS and SaaS components)
  • Rapidly Evolving Threats (you need a solution optimized for cloud as things are more dynamic)

Microsoft ensures that Azure is built on a Secure Foundation by enforcing physical, infrastructure and operational security. Microsoft provides the controls but the customer or tenant is responsible for Identity & Access, Information Protection, Threat Protection and Security Management.

10 Ways Azure Security Center helps protect IaaS deployments

1 Monitor security state of cloud resources

  • Security Center automatically discovers and monitors Azure resources
  • You can secure Enterprise and 3rd party clouds like AWS from Security Center

Security Center is built into the Azure portal so no additional access is required. If you select the Data Collection policy you can automatically push the monitoring agent; this is the same agent used by Operations Management Suite. When you set up Data Collection you can set the level of logging required.
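For illustration, here is a sketch of pushing that same monitoring agent manually as a VM extension (Security Center's Data Collection does this for you); the workspace ID/key, VM and resource group names are placeholders.

```powershell
# Sketch: manually push the monitoring (OMS/MMA) agent to a Windows VM as an extension
# (workspace ID and key are placeholders)
Set-AzureRmVMExtension -ResourceGroupName "app-rg" -VMName "app-vm-01" `
    -Name "MicrosoftMonitoringAgent" `
    -Publisher "Microsoft.EnterpriseCloud.Monitoring" `
    -ExtensionType "MicrosoftMonitoringAgent" `
    -TypeHandlerVersion "1.0" `
    -Location "eastus2" `
    -Settings @{ workspaceId = "<workspace-id>" } `
    -ProtectedSettings @{ workspaceKey = "<workspace-key>" }
```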


Security Center comes with a policy engine that allows you to tune policies per subscription. For example, you can define one policy posture for production and another for dev and test.

2 Ensure VMs are configured in a certain way

  • You can see system update status, antimalware protection (Azure has one that is built in for free), and OS and web configuration assessment (e.g. IIS assessment against best-practice conformance)
  • It will allow you to fix vulnerabilities quickly

3 Encrypt disks and data 

4 Control Network Traffic

5 Use NSGs and add additional firewalls

6 Collect Security Data

  • Analyze and search security logs from many sources
  • Security Center allows you to integrate 3rd-party products like Qualys for vulnerability assessment of other applications and compliance issues. Security Center monitors IaaS VMs and some PaaS components like web apps.

Security Center provides a new dashboard for failed logon attempts on your VMs. The most common attack on cloud VMs is the RDP brute-force attack. To avoid this you can use Just-in-Time access so that port 3389 is only open for a window of time and only from certain IPs. These requests are all audited and logged.

Another attack vector is malware. Application whitelisting lets you allow known-good behaviour rather than trying to block the bad. Unfortunately it has historically been arduous to apply.

7 Block malware and unwanted applications

Security Center uses an adaptive algorithm to understand what applications are running and develop a set of whitelists. Once you are happy with the lists you can move to enforcement.

8 Use advanced analytics to detect threats quickly.

Security Center looks at VMs and network activity and leverages Microsoft’s global threat intelligence to detect threats quickly. This leverages machine learning to understand what is statistically normal activity in order to identify abnormal behavior.

9 Quickly assess the scope and impact of the attack

This is a new feature that graphically displays all the related components that were involved in an attack.


10 Automate threat response

Azure uses Logic Apps to automate responses, which allows you to trigger workflows from an alert to enable conditional actions. In addition there is a new map that identifies known malicious IPs by region with related threat intelligence.

The basic policy tier for Security Center is free, so there is no reason not to have more visibility into what is vulnerable in your environment.

For more information check out

http://azure.microsoft.com/en-us/services/security-center

Wednesday, September 27, 2017

Microsoft Ignite 2017: New advancements in Intune Management

Microsoft 365 is bringing the best of Microsoft together. One of the key things Satya Nadella did when he took over was to put customers front and center. Microsoft has invested in partner and customer programs to help accelerate the adoption of Intune.
There are three versions of Intune: 
  1. Intune for Enterprises
  2. Intune for Education
  3. Intune for SMBs (In Public Preview)
One of the biggest innovations Microsoft completed was moving Intune to Azure. There is a new portal for Intune available within Azure that provides an overview of Device compliance.
To set up Intune, the first thing you do is define a device profile. Microsoft supports a range of platforms such as Android, iOS, macOS and Windows. Once you have a device profile there are dozens of configurations you can apply.

Once you define the profile you assign it to Azure AD Groups. You can either include or exclude users. So you can create a baseline for all users and exclude your executive group to provide them an elevated set of features.

As it lives in the Azure Portal you can click on Azure Active Directory and see the same set of policies. Within the policy you can set access controls that are conditional. For example “you get corporate email only if you are compliant and patched”. Intune checks the state of the device and compliance and then grants access. The compliance overview portal is available in Intune from within Azure.
Microsoft has dramatically simplified the ability to add apps. From within Intune’s portal you can browse the iOS App Store and add applications directly in the interface. In addition to granting access to apps you can apply app protection policies. For example you can enforce that the user is on a minimum app version, and block or warn the user if they are in violation of this policy.
The demo shows an enrolled iPad attempting to use a down-level version of Word, which displays a warning when the user launches it. You can provide conditional access that allows a grace period for remediating certain types of non-compliant states.

Many top 500 companies leverage Jamf today (https://www.jamf.com) for Apple management. Jamf is the standard for Apple mobile device management. Whether you're a small business, school or growing enterprise environment, Jamf can meet you where you're at and help you scale.

Intune can now be used in conjunction with Jamf. With this partnership you can use both Jamf and Intune together. Macs enroll in Jamf Pro, and Jamf sends the macOS device inventory to Intune to determine compliance. If Intune determines the device is compliant, access is allowed. If not, Intune and Jamf present options to the user to help them resolve issues and re-check compliance.
Another feature built into conditional access is restricting access to services based on the location of the user. Microsoft has also enhanced Mobile Threat Protection and extended geo-fencing (in tech preview).

For geo-fencing you define known geo-locations. If the user roams outside of those locations the password gets locked. Similarly for Mobile Threat Protection, you define trusted locations and create rules to determine what happens if access is requested from a trusted or non-trusted location.

Microsoft Ignite 2017: Azure Security and Management for hybrid environments Jeremy Winter Director, Security and Management

Jeremy Winter is going to do a deep dive on Azure’s management and security announcements. Everyone is at a different stage when it comes to cloud. This digital transformation is having a pretty big ground-level impact. Software is everywhere and it is changing the way we think about our business.

Digital transformation requires alignment across teams. This includes Developers, Operational teams and the notion of a Custodian who looks after all the components. It requires cross-team collaboration, automated execution, proactive governance and cost transparency. This is not a perfect science, it is a journey Microsoft is learning, tweaking and adjusting as they go. Operations is going to become more software based as we start to integrate scripting and programmatic languages.

Microsoft’s bet is that management and security should be part of the platform. Microsoft doesn't think you should have to take enterprise tooling and bring it with you to the cloud; you should expect management and security to be native capabilities. Microsoft thinks about this as a full stack of cloud capabilities.


Security is integral; Protect is really about Site Recovery; Monitoring is about visibility into what is going on; Configuration is about automating what you are doing. Microsoft believes it has a good baseline for all these components in Azure and is now focused on Governance.

Many frameworks try to put a layer of abstraction between the developer and the platform. Microsoft’s strategy is different: allow the activity to go directly to the platform but protect it via policy. This is a different approach and is something Microsoft is piloting with Microsoft IT. Intercontinental Hotels is used as a reference case for digital transformation.



Strategies for successful management in the cloud:

1. Where to start “Secure and Well-managed”

You need to secure your cloud resources through Azure Security Center, protect your data via backup and replication, and monitor your cloud health. Ignite announced PowerShell in Azure Cloud Shell as well as monitoring and management built right into the Azure portal.

Jeremy shows Azure monitoring a Linux VM. The Linux VM has management native in the Azure portal. In the portal you can see the inventory of what is inside the VM; you no longer have to remote in, it is done programmatically.

In the demo the installed version of Java is shown on the VM, and we can now use Change Tracking to determine why the version of Java appears to have changed. This is important from an audit perspective as you have to be able to identify changes. You can see the complete activity log of the guest.

Also new is update management. You can see all the missing updates on individual or multiple computers. You can go ahead and schedule patching by defining your own maintenance window. In the future Microsoft will add pre and post activities. You also have the ability to use Azure Management for non-Azure based VMs.

For disaster recovery you are able to replicate from one region to another. You could already use this from the enterprise to Azure, and now also region-to-region. Backup is now exposed as a property of the virtual machine, which you simply enable and assign a backup policy to.

With the new Azure Cloud Shell you have the ability to run PowerShell right inline. There are 3449 commands that have been ported over so far.
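For illustration only, here is the kind of one-liner you could run from Cloud Shell's PowerShell; it uses a standard AzureRM cmdlet and makes no claim about which specific commands were in the ported set.

```powershell
# Sketch: quick inventory from Azure Cloud Shell (PowerShell) - list VMs with their power state
Get-AzureRmVM -Status |
    Select-Object Name, ResourceGroupName, Location, PowerState |
    Sort-Object ResourceGroupName, Name
```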

2.  Governance for the cloud

azure.com/policy is in tech preview. Jeremy switches to the demo for Azure policies. You now have the ability to create policy and see non-compliant and compliant VMs. The example shows a sample policy to ensure that all VMs do not have public IPs. With the policy you are able to quickly audit the environment. You can also enforce these policies. It is based on JSON so you can edit the policies directly. Other use cases include things like auditing for unencrypted VMs.

3.  Security Management and threat detection

Microsoft is providing unified visibility, adaptive threat detection and intelligent response. Azure Security Center is fully hybrid; you can apply it to enterprise and Azure workloads. Jeremy switches to Azure Security Center, which provides an overview of your entire environment's security posture.

Using the portal you can scan and get Microsoft's Security recommendations. Within the portal you can use ‘Just in Time Access’. This allows the developer to request that a port be open but it is enabled for a window of time. Security Center Analysis allows you to whitelist ports and audit what has changed.

Microsoft can track and group Alerts through Security Center. Now you have a new option through continuous investigation to see visually what security incident has been logged. It takes the tracking and pushes it through the analytics engine. It allows you to visually follow the investigation thread to determine the root cause of the security exploit. Azure Log Analytics is the engine that drives these tool sets.

Azure Log Analytics now has an Advanced Analytics component that provides an advanced query language that you can leverage across the entire environment. It will be fully deployed across Azure by December.

4.  Get integrated analytics and monitoring

For this you need to start from the app using Application Insights, then bring in network visibility as well as an infrastructure perspective. There is a new version of Application Insights. Jeremy shows Azure Monitor, which was launched last year at Ignite. Azure Monitor is the place for end-to-end monitoring of your Azure datacenters.

The demo shows the ability to drill in on VM performance and leverage machine learning to pinpoint the deviations. The demo shows slow response time for the ‘contoso demo site’: everything is working, but slowly. You can see the dependency view of every process on the machine and everything it is talking to. Quickly you are able to see that the problem has nothing to do with the website but is actually a problem downstream.

Microsoft is able to do this because it has a team of data scientists baking analytics directly into the Azure platform through the Microsoft Smarts Team.

5. Migrate Workloads to the Cloud.

Microsoft announced a new capability for Azure Migration. You can now discover applications on your virtual machines and group them. From directly within the Azure portal you can determine which apps are ready for migration to Azure. In addition it will recommend the types of Migration tools that you can use to complete the migrations. This is in limited preview.

Microsoft Ignite 2017: Modernizing ETL with Azure Data Lake with @MikeDoesBigData @microsoft.com

Mike has done extensive work on the U-SQL language and framework. The session will focus on modern Data Warehouse architectures as well as introducing Azure Data Lake.
A traditional Data Warehouse has Data sources, Extract-Transform-Load “ETL”, Data warehouses and BI and analytics as foundational components.
Today many of the data sources are increasing in data volume and the current solutions do not scale. In addition you are getting data that is non-relational from things like devices, web sensors and social channels.
A Data Lake allows you to store data as-is; it is essentially a very large, scalable file system. From there you can do analysis using Hadoop, Spark and R. A Data Lake is really designed for the questions you don’t know, while a Data Warehouse is designed for the questions you do.
Azure Data Lake consists of a highly scalable storage area called the ADL Store. It is exposed through an HDFS-compatible REST API, which allows analytic solutions to sit on top and operate at scale.
Cloudera and Hortonworks are available from the Azure Marketplace. Microsoft's version of Hadoop is HDInsight. With HDInsight you pay for the cluster whether you use it or not.
Data Lake Analytics is a batch workload analytics engine. It is designed to do Analytics at Very Large Scale. Azure ADL Analytics allows you to pay for the resources you are running vs. spinning up the entire cluster with HDInsight.
You need to understand the Big Data pipeline and data flow in Azure. You go from ingestion to the Data Lake Store. From there you move it into the visualization layer. In Azure you can move data through the Azure Data Factory. You can also ingest through the Azure Event Hub.
Azure Data Factory is designed to move data from a variety of data stores to Azure Data Lake. For example you can take data out of AWS Redshift and move it to Azure Data Lake Store. Additional information can be found here:
U-SQL is the language framework that provides the scale-out capabilities. It scales out your custom code in .NET, Python and R over your Data Lake. It is called U because it unifies SQL seamlessly across structured and unstructured data.
Microsoft suggests that you query the data where it lives. U-SQL allows you to query and read/write data not just from your Azure Data Lake but also from storage blobs, Azure SQL in VMs, Azure SQL Database and Azure SQL Data Warehouse.
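A minimal sketch of submitting a U-SQL job to a Data Lake Analytics account, assuming the AzureRM.DataLakeAnalytics module; the account name, file paths and schema are placeholders.

```powershell
# Sketch: submit a minimal U-SQL extract/output job to a Data Lake Analytics account
# (account name and paths are placeholders)
$script = @'
@searchlog =
    EXTRACT UserId int, Start DateTime, Region string, Query string
    FROM "/input/SearchLog.tsv"
    USING Extractors.Tsv();

OUTPUT @searchlog
TO "/output/SearchLog_out.csv"
USING Outputters.Csv();
'@

Submit-AzureRmDataLakeAnalyticsJob -Account "contosoadla" `
    -Name "SearchLogSample" -Script $script -DegreeOfParallelism 1
```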
There are a few built-in cognitive functions that are available to you. You can install this code in your Azure Data Lake to add cognitive capabilities to your queries.

Tuesday, September 26, 2017

Microsoft Ignite 2017: Windows Server Storage Spaces and Azure File Sync

Microsoft’s strategy is about addressing storage costs and management complexity through the use of:

  1. World-class infrastructure on commodity hardware
  2. Finding smarter ways to store data
  3. Using active storage tiering
  4. Offloading the complexity to Microsoft

How does Storage Spaces work? You attach the storage directly to your nodes and then connect the nodes together in a cluster. You then create a storage volume across the cluster.

When you create the volume you select the resiliency. You have the following three options:

  1. Mirror: fast but uses a lot of storage
  2. Parity: slower but uses less storage
  3. Mirror-accelerated parity: creates volumes that use both mirroring and parity, so it is fast but conserves space as well

Storage Spaces Direct is a great option for running file servers as VMs. This allows you to isolate file server VMs by running them on a Storage Spaces Direct volume. In Windows Server 2016 you also have the ability to introduce Storage QoS on Storage Spaces to deal with noisy neighbors. It allows you to predefine QoS storage policies to prioritize storage performance for some workloads.
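A sketch of the Windows Server 2016 cmdlets involved: enabling Storage Spaces Direct on an existing cluster, creating a mirrored volume, and defining a Storage QoS policy for a priority workload. Pool, volume and policy names, sizes and IOPS figures are illustrative.

```powershell
# Sketch: enable Storage Spaces Direct on a cluster, create a mirrored volume,
# and define a Storage QoS policy (names and sizes are illustrative)
Enable-ClusterStorageSpacesDirect

New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "FileServerVol01" `
    -FileSystem CSVFS_ReFS -ResiliencySettingName Mirror -Size 2TB

New-StorageQosPolicy -Name "GoldTier" -MinimumIops 500 -MaximumIops 10000
```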

You also have the ability to dedup. Dedup works by moving unique chunks to a dedup chunk store and replacing the original blocks with references to the unique chunks. Ideal use cases for Microsoft Dedup are general-purpose file servers, VDI and backup.

You may apply Dedup to a SQL Server and Hyper-V but it depends on how much demand there is on the system. High Random I/O workloads are not ideal for Dedup. Dedup is only supported on NTFS on Windows Server 2016. It will support ReFS on Windows Server 1709 which is the next release.
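For the file server case, a minimal sketch of enabling Data Deduplication and kicking off an optimization job; the drive letter is a placeholder and the Deduplication role service must be installed.

```powershell
# Sketch: enable Data Deduplication on a general-purpose file server volume and run optimization
Enable-DedupVolume -Volume "E:" -UsageType Default
Start-DedupJob -Volume "E:" -Type Optimization
```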

Microsoft has introduced Azure File Sync. With Azure File Sync you are able to centralize your file services. You can use your on-premises file servers to cache files for faster local performance. It is a true file share, so its services use standard SMB and NFS.

Shifting your focus from on-premises file services allows you to take advantage of cloud based backup and DR. Azure File Sync has a fast DR recovery option to get you back up and running in minutes.

Azure File Sync requires Windows Server 2012 or Windows Server 2016 and installs a service that tracks file usage. It also pairs the server with an Azure file share. Files that are not touched over time are migrated to Azure File services.

To recover you simply deploy a clean server and reinstall the service. The namespace is recovered right away so the service is available quickly. When the users request the files a priority restore is performed from Azure based storage. Azure File Sync allows your branch file server to have a fixed storage profile as older files move to the cloud.

With this technology you can introduce follow-the-sun scenarios where work on one file server is synced through an Azure File Share to a different region so it is available there.

On the roadmap is cloud-to-cloud sync which allows the Azure File Shares to sync through the Azure backbone to different regions. When you have cloud-to-cloud sync the moment the branch server cannot connect to its primary Azure File Share it will go to the next closest.

Azure File Sync is now publicly available in five “5” Azure regions.

Monday, September 25, 2017

Microsoft Ignite 2017: Getting Started with IoT; a hands on Primer

As part of the session we are supplied an MXChip IoT Developer Kit. This provides a physical IoT device enabling us to mock up IoT scenarios. The device we are leveraging is Arduino-compatible. It comes with a myriad of sensors and interfaces, including temperature sensing and a small display screen. The device retails for approximately $30–40 USD and is a great way to get started learning IoT.


When considering IoT one needs to not just connect the device but understand what you want to achieve with the data. Understanding what you want to do with the data allows you to define the backend components for storage, analytics or both. For example, if you are ingesting data that you want to analyze you may leverage Azure Stream Analytics. For less complex scenarios you may define an App Service and use Functions.

Microsoft’s preferred development suite is Visual Studio Code. Visual Studio Code includes extensions for Arduino. The process to get up and running is a little involved but there are lots of samples to get you started at https://aka.ms/azureiotgetstarted.

One of the more innovative ways to use the device was demonstrated in the session. The speaker created “The Microsoft Cognitive Bot” by combining the physical sensors on the device with “LUIS” in the Azure cloud. LUIS is the Language Understanding Intelligent Service, the underlying technology that Microsoft Cortana is built on. The speaker talks to the MXChip sensor asking about the weather and the conference, with LUIS responding.

The session starts with an introduction to what a basic framework of an IoT solution looks like, as shown in the picture below. On the left of the frame are the devices. Devices can connect to the Azure IoT Hub directly provided the traffic is secure and they can reach the internet. For devices that either do not connect directly to the internet or cannot communicate securely you can introduce a field gateway.

Field gateways can be used for other things as well, such as translating data. In cases where you need high responsiveness, you may also analyze the data on a field gateway so that there is less latency between the analysis and the response. Often when dealing with IoT there are both hot and cold data streams: hot being data that requires low latency between analysis and response, vs. cold, which may not have a time sensitivity.

[Image: basic framework of an IoT solution]

An ingestion point requires a D2C or “Device-to-Cloud” endpoint. In addition to D2C you have the other traffic flow, a C2D or Cloud-to-Device endpoint; C2D traffic tends to be queued. There are two other types of endpoints you can define: a Method endpoint, which is instantaneous and depends on the device being connected, and a Twin endpoint. With Twin endpoints you can inject a property, wait for the device to report its current state, and then synchronize it with your desired state.
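A sketch of standing up the hub that exposes those device-to-cloud and cloud-to-device endpoints, assuming the AzureRM.IotHub module; resource group, hub name, SKU and location are placeholders.

```powershell
# Sketch: create the IoT Hub that provides the D2C/C2D endpoints described above
# (assumes the AzureRM.IotHub module; names, SKU and location are placeholders)
New-AzureRmIotHub -ResourceGroupName "iot-rg" -Name "contoso-iot-hub" `
    -SkuName "S1" -Units 1 -Location "eastus"
```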

We then had an opportunity to sit down with IoT experts like Brett from Microsoft. Okay, I know he does not look happy in this picture but we had a really great session. We developed an architecture to look at long-term traffic patterns as well as analyze abnormal speeding patterns in real time for more responsive actions. “Sorry Speeders ; – )”.


The session turned pretty hands-on and we had to get our devices communicating with an Azure-based IoT hub we created. We then set up communications back to the device to review both ingress and egress traffic. In addition we configured Azure Table storage and integrated some cool visualization using Power BI. Okay, got to admit I totally geeked out when I first configured a functional IoT service and then started to do some analysis. It is easy to see how IoT will fundamentally change our abilities in the very near future. Great first day at Microsoft Ignite.

Thursday, September 21, 2017

ZertoCon Toronto’s 2017 Keynote Session

Ross DiStefano the Eastern Canada Sales Manager at Zerto introduces Rob Strechay @RealStrech the Vice President of Products at Zerto. Rob mentions that Zerto was the first to bring hypervisor replication to market. Zerto has about 700 employees and is based out of Boston and Israel. With approximately 5000 customers, Zerto provides round the clock support for replication for tens of thousands of VMs. Almost all of the service providers in Gartner’s magic quadrant are leveraging Zerto software for their DR-as-a-Service offerings.

Zerto’s focus is on reducing your Disaster Recovery “DR” and migration complexity and costs. Zerto would like to be agnostic as to where the replication sources and targets are located. Today at ZertoCon Toronto, the intention is to focus on Zerto’s multi-cloud strategy.

Most customers are looking at multiple options across hyper-scale cloud, managed service providers and enterprise cloud strategies. Zerto’s strategy is to be a unifying partner for this diverse set of partners and services. This usually starts with a transformation project such as adopting a new virtualization strategy, implementing new software or embracing hybrid, private or public cloud strategies.

451’s research shows that C-level conversations about cloud are focused around net-new initiatives, moving workloads to cloud, adding capacity to the business or the introduction of new services. Rob then transitions to what’s new with Zerto Virtual Replication. What Zerto has found is that people are looking to stop owning IT infrastructure that is not core to their business and focus on the business data and applications that are. To do this they need managed services and hyper-scale cloud partners.
Mission critical workloads are running in Public Cloud today. With Zerto 5.0 the company introduced the Mobile App, One-to-Many replication, replication to Azure and the 30-Day Journal. Zerto 5.5 was announced in August with replication from Azure, advancements in AWS recovery performance and Zerto Analytics & Mobile enhancements.

With 5.5 Zerto goes to Azure and back. A big part of this feature involved working with Microsoft’s APIs to convert VMs back from Azure. This meshes well with Microsoft’s strategy of enabling customers to scale up and down. Coming soon is region-to-region replication within Azure.

With the AWS enhancements, Zerto worked with Amazon to review why the existing default import limitations were so slow. In doing so they learned how to improve and enhance the replication so that it runs six “6” times faster. AWS import is still there, but now zerto-import or ‘zimport’ is used to support larger volumes while the native AWS import does the OS volume. You can also add a software component to the VM to further enhance the import and receive that six-fold improvement.

Zerto Analytics and Zerto Mobile provide cross-device, cross-platform information delivered as a SaaS offering. Right now the focus is on monitoring so you can understand how prepared you are for any contingency within or between datacenters. These analytics are near real-time. As Zerto Analytics has been built on cloud technologies, it will follow a continuous release cycle. One new feature is RPO history, which shows how effectively you have been meeting your SLAs.

The next release is targeted for the mid-February timeframe and will deliver the same replication out of Amazon as well as the Azure inter-region replication. They are moving towards six “6” month product releases on a regular basis with a targeted set of features.

H2 2018 and beyond they are looking at KVM support, Container migrations, VMware on AWS and Google Cloud support. Zerto is looking to be the any-to-any migration company as an overall strategy.

Dmitri Li, Zerto’s System Engineer in Canada, takes the stage and mentions that we now live in a world that operates 24/7. It is important to define DR not as a function of IT but as a way to understand what is critical to the business. For this it is important to focus on a Business Impact Analysis so you can properly tier applications by RTO/RPO.

You also need to ensure your DR strategy is cost effective and does not violate your governance and compliance requirements. When you lay out this plan it needs to be something you can execute and test in a simple way to validate it works.

Another important change besides round the clock operations is that we are protecting against a different set of threats today than we were in the past. Cybercrime is on the rise. With Ransomware, 70% of businesses pay to try and get their data back. The problem with paying is that once you do, you put yourself on the VIP list for repeat attacks.

Zerto was recognized as the ransomware security product of the year even though they are not a security product. Zerto addresses this using journaling for point-in-time recovery. You can recover a file, a folder, a VM or your entire site to the moments before the attack.

It is important to also look at Cloud as a potential target for your DR strategy. Building another datacenter can be cost prohibitive so hyper-scale or managed service partners like Long View Systems can be better choices.

Friday, September 8, 2017

VMworld 2017: Futures with Scott Davis EVP of Product Engineering at Embotics

Had a great discussion with Scott Davis the EVP of Product Engineering and CTO of Embotics at VMworld 2017. Scott was kind enough to share some of the future looking innovations they are working hard on at Embotics.

"Clearly the industry is moving from having virtualization or IaaS centric cross-cloud management platforms to more of an application-centric container and microservices focus. We really see customers getting serious about running containers because of the application portability, microservices synergy, development flexibility and seamless production scale-out that is possible. At the same time they are reducing the operational overhead and interacting more programmatically with the environment.

When we look at the work VMware is doing with Pivotal Container Service we believe this is the right direction, but we think that the key is really enhanced automation for DevOps. One of the challenges that was pointed out is that while customers are successfully deploying Kubernetes systems for their container development, production operation can be a struggle. Often the environment gets locked in stasis because the IT team is wary of upgrades in a live environment.

At Embotics we are all about automation. With our vCommander product we have a lot of intelligence that we can use to build a sophisticated level of iterative automation. So let's take that challenge and think about what would be needed to execute a low-risk DevOps migration. You would probably want to deploy the new Kubernetes version and test it against your existing set of containers. This should be sandboxed to eliminate the risk to production, validated, and then the upgrade should be fully automated.”

Scott proceeds to demonstrate a beta version of Embotics Cloud Management Platform “CMP 2.0” automating this exact set of steps across a Kubernetes environment and then rolling the changes forward to update the production environment.

“I think fundamentally we can deliver true DevOps, speeding up release cycles, delivering higher quality and providing a better end user experience. In addition we can automatically pull source code out of platforms like Jenkins, spin up instances, regression test and validate. The test instances that are successful can be vaporized, while preserving the ones that are not so that the issues can be remediated.

We are rolling this out in a set of continuous software releases to our product so that as customers are integrating containers, the Embotics 'CMP' is extended to meet these new use cases.

We realize that as we collect a number of data points spanning user preference, IT-specified compliance rules and vCommander environment knowledge, across both enterprise and hyper-scale targets like Azure and AWS, we can assist our customers with intelligent placement suggestions.”

Scott switches to a demo in which the recommended cloud target is ranked by the number of stars in the beta interface.

“We are building it in a way that allows the customer to adjust the parameters and their relative importance, so if PCI compliance is more important they can adjust a slider in the interface and our ranking system adjusts to the new priority. Things like cost and compliance can be set to be either relative or mandatory to tune the intelligent placement according to what the customer views as important."

Clearly Embotics is making some innovative moves to incorporate a lot of flexibility in their CMP platform. Looking forward to seeing these releases in the product line with cross cloud intelligence for containers and placement.

VMworld 2017: Interview with Zerto’s Don Wales VP of Global Cloud Sales

It is a pleasure to be here with Zerto's, Don Wales the Vice President of Global Cloud Sales at VMworld 2017. Don this is a big show for Zerto, can you tell me about some of the announcements you are showcasing here today?

"Sure Paul, we are extremely excited about our Zerto 5.5 release. With this release we have introduced an exciting number of new capabilities. You know we have many customers looking at Public and Hybrid Cloud Strategies and at Zerto we want them to be able to leverage these new destinations but do so in a way that is simple and straightforward.  Our major announcements here are our support for Fail In and Out of Azure, Increase AWS capabilities, Streamlined and Automated Upgradability, significant API enhancements and BI analytics.  All these are designed for a much better end-user experience.

One piece of feedback that we are proud of is when customers tell us that Zerto does exactly what they need it to do without a heavy engineering cost for installation. You know Paul when you think about taking DR to a Cloud Platform like Azure it can be very complex. We have made it both simple and bi-directional. You can fail into and out of Azure with all the capabilities that our customers expect from Zerto like live failover, 30-day journal retention and Journal level file restore.

We also recognize that Azure is not the only cloud target our customers want to use. We have improved the recovery times to Amazon Web Services; in our testing we have seen a 6x improvement in recovery to AWS. Zerto has also extended our support to AWS regions in Canada, Ohio, London and Mumbai.

All this as well as continuing to enhance the capabilities that our traditional Cloud Service Providers need to make their customers experience simple yet powerful."

Don, with your increased number of supported Cloud targets and regions how do you ensure your customers have visibility on what's going on?

“Paul, we have a SaaS product called Zerto Analytics that gives our customers complete visibility on-premises and in public clouds. It does historical reporting across all Zerto DR and recovery sites. It is a significant step forward in providing the kind of business intelligence that customers need as their environments grow and expand.”

Don, these innovations are great; it looks like Zerto is going to have a great show. Let me ask you: when Don's not helping customers with their critical problems, what do you do to unwind?

“It’s all about the family Paul. There is nothing I like better than relaxing with the family at home, and being with my wife and twin daughters.  One of our favorite things is to spend time at our beach house where our extended family gathers.  It’s a great chance to relax and get ready for my next adventure.” 

Many thanks for the time Don, it is great to see all innovations released here at VMworld 2017

Friday, September 1, 2017

VMworld 2017: Interview with Crystal Harrison @MrsPivot3

I had the pleasure of spending a few moments with Crystal Harrison, Pivot3’s dynamic VP of product strategy “@MrsPivot3”.

Crystal, I know Pivot3 from the innovative work you have been doing in security and surveillance. How has the interest been from customers in the move to datacenter and cloud?

“You know with the next generation release of Acuity, the industry’s first priority-aware hyper converged infrastructure “HCI” the demand has been incredible. While we started with 80% of the product being applied to security use cases we are now seeing a distribution of approx. 60% applied to datacenter and cloud with 40% deriving from our security practice. This is not due to any lack of demand on the security side it is just the demand on our cloud and datacenter focus has taken off with Acuity.”

We are pushing the boundaries with our HCI offering as we are leveraging NVM Express “NVMe” to capitalize on the low latency and internal parallelism of flash-based storage devices. All this is wrapped in an intuitive management interface controlled by policy.”

How do you deal with tiering within the storage components of Acuity?

Initially the policies manage where the data or instance lives in the system. We have the ability to dynamically reallocate resources in real time as needed. Say, for example, you have a critical business application that is starting to demand additional resources; we can recapture resources from lower-priority, policy-assigned workloads on the fly. This protects your sensitive workloads and ensures they always have what they need.

How has the demand been from Cloud Service providers?

They love it. We have many flexible models including pay-by-the-drip metered and measured cost models. In addition the policy engine gives them the ability to set and charge for a set of performance-based tiers for storage and compute. Iron Mountain is one of our big cloud provider customers. What is really unique is that because we have lightweight management overhead and patented erasure coding, you can write to just about every terabyte that you buy, which is an important value to our customers and service providers.

Crystal, it really sounds like Pivot3 has built a high value innovative solution. Getting away from technology, what does Crystal do to relax when she is not helping customers adopt HCI?

My unwind is the gym. After a long day a good workout helps me reset for the next.

Crystal, it has been a pleasure, great to see Pivot3 having a great show here at VMworld 2017.