Friday, December 15, 2017

Enable edge intelligence with Azure IoT Edge

Based on a presentation by Terry Mandin @TerryMandin
Microsoft is simplifying IoT through the use of the Azure IoT Suite and the following components:
  1. Azure IoT Hub – the gateway that allows you to ingest data
  2. Azure Time Series Insights – enables you to graphically analyze and explore data points
  3. Microsoft IoT Central – a fully managed IoT SaaS offering
  4. Azure IoT Edge – a virtual representation of your IoT device that can download code to a physical device and have it execute locally
Microsoft also has a certification program, Azure IoT Security, to validate security on 3rd party products. Azure IoT Edge was recently announced and enables you to keep data close to the enterprise. IoT Edge pushes computing back out to the gateway device by giving you the ability to push IoT modules, called ‘module images’, from a repository in the Azure cloud.
In the oilfields of Alberta, IoT is being leveraged to monitor pumpjacks and determine whether they are working properly, based on data sent to an IoT hub on the Azure cloud. In the next version of the IoT solution, the customer will send a module image with custom code to the gateway device using IoT Edge.
In this model a gateway device is placed on the well site right next to the pumpjack. The Azure IoT Edge agent and runtime run on the gateway, using local processing to find problems. If a problem is found, the pumpjack speed can be adjusted quickly while the information is also logged to the cloud for maintenance.
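As a rough illustration of the kind of local logic such an edge module might run, here is a minimal Python sketch. All names, thresholds and units are hypothetical, not from the presentation; the point is only that the decision is made on the gateway, with telemetry logged to the cloud afterwards.

```python
# Illustrative sketch of local edge processing for a pumpjack module.
# Thresholds and function names are hypothetical, not from the presentation.

def check_stroke_rate(readings, low=6.0, high=12.0):
    """Flag a pumpjack whose average strokes-per-minute drifts out of range."""
    avg = sum(readings) / len(readings)
    if avg < low:
        return "increase_speed", avg
    if avg > high:
        return "decrease_speed", avg
    return "ok", avg

def process_locally(readings, adjust_pump, log_to_cloud):
    """Act locally first, then log telemetry to the cloud for maintenance."""
    action, avg = check_stroke_rate(readings)
    if action != "ok":
        adjust_pump(action)  # immediate local correction, no cloud round trip
    log_to_cloud({"avg_spm": avg, "action": action})
    return action
```

Because the check runs on the gateway, the speed adjustment does not depend on connectivity back to Azure; the cloud only receives the log entry.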
The code, or ‘module image’, is built in the repository within Azure. You also provision an IoT Edge device in Azure, which is a logical representation of your gateway, and you define in Azure which modules will run on it. IoT Edge takes the module image and pushes it out to the gateway, which has the runtime environment and agent on it. When you deploy the physical device, you install the Azure IoT Edge runtime, which pulls the modules down from the cloud. This is done without compromising security.
The IoT Edge agent on the device ensures the Edge module is always running and reports its health back to the cloud. The runtime also communicates with other IoT leaf devices, which are other physical devices with sensors. The IoT Edge runtime and agent can run on something as small as a physical sensor or as large as a full-blown gateway hardware device.
You can run your own custom code within a module image, or use several Azure modules including Stream Analytics, Azure Functions, and AI and machine learning. You can push down both machine learning and cognitive functions as well. The underlying runtime is container based, supporting the individual containers, or module images.
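The deployment flow described above can be sketched as a mapping from module images in a cloud registry to what is actually running on the gateway. This is a deliberately simplified, hypothetical sketch: the real Azure IoT Edge deployment manifest is a richer JSON document, and the registry paths and module names here are invented.

```python
# Hypothetical, simplified deployment definition for a gateway device.
# The real IoT Edge deployment manifest format is more detailed; all
# names and image URIs below are invented for illustration.

deployment = {
    "device": "wellsite-gateway-01",
    "modules": {
        "pumpjack-analyzer": "myregistry.azurecr.io/pumpjack-analyzer:1.2",
        "stream-analytics": "mcr.microsoft.com/azure-stream-analytics:latest",
    },
}

def modules_to_pull(deployment, running):
    """Return the modules the device agent still needs to pull and start."""
    return {name: image
            for name, image in deployment["modules"].items()
            if name not in running}
```

The agent's job, conceptually, is to keep closing the gap between the desired module set in the cloud and the containers actually running on the device.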

Microsoft Summit: AI Fear of Missing Out “FOMO”

Ozge Yeloglu @OzgeYeloglu, Data and AI Lead of Microsoft Canada has a core team of Data architects and Data Scientists located in central Canada. They have combined architects and scientists so that privacy and compliance is part of the AI implementation. Ozge was the first Data Scientist hired in Canada by Microsoft. Prior to Microsoft, Ozge was co-founder of a startup that analyzed logs to predict application failures.

What is artificial intelligence? The definition is “Intelligence exhibited by machines mimicking functions associated with human minds”. The three main pillars of human functions are reasoning (learning from data), understanding (interpreting meaning from data) and interacting (interacting with people in a human way). We are still very far away from natural human interaction with AI.
The reason AI is such a hot topic is because of advancements in the foundational components: Big Data, Cloud Computing, Analytics and powerful query algorithms. These are more universally available than at any other time in history. 

Digital transformation in AI can be looked at in four pillars: enable your customers through customer analytics and measuring customer experiences; enable your employees through business data differentiation and organizational knowledge; optimize your operations using intelligent predictions and deep insights (IoT); and finally, transform your products by making them more dynamic.

The four foundational components of an AI platform are infrastructure, IT services, digital services and cognitive data. The reality, based on Gartner's research, is that of the discussions happening on AI only 6% are at the implementing stage. The large majority of discussions are about knowledge gathering.

Ozge is doing a lot of lunch and learns to help people understand what AI is all about. Often once understood they realize that they need the foundational pieces in place before being ready for AI.
It is important to start with a single business problem, build the machine learning tooling and demonstrate the value. As you work through the use case you are educating your people; essentially this is a building-block approach. Ozge recommends planning for the near future because the tools and technologies are emerging so quickly. Starting with a three-year plan almost guarantees that the tools you select today will be obsolete by the time the project finishes.

It is important to know your data estate. If your data is not the right data, your solutions will not be the right solutions. If your data is not in the right place, it will take too long to run. Building the right data architecture is an enabler for AI: great AI needs great data. It is also important to find the right people. Many data scientists are generalists, so they may not have the right domain expertise for your particular business. For this reason it may be better to take existing people and train them on big data management.

A good AI solution is built on an AI platform, with comprehensive data, that resolves a business problem, surrounded by the right people.

Friday, September 29, 2017

Microsoft Ignite 2017: High Availability for your Azure VMs

The idea with cloud is that each layer is responsible for its own availability, and by combining these loosely coupled layers you get higher availability. This should be a consideration before you begin deployment. For example, starting by considering maintenance, and how and when you would take a planned outage, informs your design. You should predict the types of failures you can experience so you are mitigating them in your architecture.

There is an emerging concept on Hyper-scale Cloud platforms called grey failures. A grey failure is when your VM, workloads or applications are not down but are not getting the resources they need.

“Gray Failure: The Achilles' Heel of Cloud-Scale Systems”

Monitoring and Alerting should be on for any VMs running in IaaS. When you open a ticket in Azure any known issues are surfaced as part of the support request process. This is part of Azure’s automated analytics engine providing support before you input the ticket.

Backup and DR plans should be applied to your VMs. Azure allows you to create granular retention policies. When you recover VMs you can restore over the existing VM or create a new one. For DR you can leverage ASR to replicate VMs to another region. ASR is not multi-target, however, so you could not replicate VMs from the enterprise to an Azure region and then on to a different one. It would be two distinct steps: first replicate and fail over the VM to Azure, and then set up replication between two regions.

For maintenance, Microsoft now provides a local endpoint in each region with a simple REST API that provides information on upcoming maintenance events. These can be surfaced within the VM so you can trigger soft shutdowns of your virtual instance. For example, if you have a single VM (outside an availability set) and the host is being patched, the VM can complete a graceful shutdown.
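A minimal sketch of consuming that endpoint from inside a VM might look like the following. This assumes the Azure Instance Metadata Service "Scheduled Events" endpoint at the well-known metadata address with its required `Metadata: true` header; the `api-version` shown is one of the documented versions but may need updating, and the event-type handling is illustrative.

```python
# Hedged sketch: polling the Azure Instance Metadata Service Scheduled
# Events endpoint from inside a VM. The api-version may need updating;
# reboot_pending() reflects one reasonable policy, not an official one.
import json
import urllib.request

METADATA_URL = ("http://169.254.169.254/metadata/scheduledevents"
                "?api-version=2017-08-01")

def fetch_scheduled_events():
    """Query the local metadata endpoint (only reachable from inside a VM)."""
    req = urllib.request.Request(METADATA_URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.load(resp)

def reboot_pending(events_doc):
    """Return True if any upcoming event would interrupt this VM."""
    return any(e.get("EventType") in ("Reboot", "Redeploy", "Freeze")
               for e in events_doc.get("Events", []))
```

A VM agent could poll this periodically and, when `reboot_pending` returns True, flush caches or drain connections before the platform acts.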

Azure uses VM-preserving technology when it does underlying host maintenance. For updates that do not require a reboot of the host, the VM is frozen for a few seconds while the host files are updated. For most applications this is seamless; if it is impactful, however, you can use the REST API to react.

Microsoft collects all host reboot requirements so that they are applied at once rather than periodically throughout the year, which improves platform availability. You are preemptively notified 30 days out for these events. One notification is sent per subscription to the administrator, and the customer can add additional recipients.

An Availability Set is a logical grouping of VMs within a datacenter that allows Azure to understand how your application is built, in order to provide for redundancy and availability. Microsoft recommends that two or more VMs are created within an availability set. To get the 99.95% SLA you need to deploy your VMs in an Availability Set. Availability Sets provide fault isolation for your compute.

An Availability Set with Managed Disks is called a Managed Availability Set. With a Managed Availability Set you get fault isolation for both compute and storage; essentially it ensures that the managed VM disks are not placed on the same underlying storage.
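The fault-isolation idea behind availability sets can be pictured with a toy placement sketch. Azure's actual placement logic is internal and more sophisticated; this round-robin version only illustrates why VMs in a set survive a single hardware failure: no one fault domain holds all of them.

```python
# Toy illustration of fault-domain placement in an availability set.
# Azure's real placement algorithm is internal; this round-robin sketch
# only shows the idea that VMs in a set are spread across domains so one
# hardware failure cannot take them all down.

def place_vms(vm_names, fault_domains=3):
    """Spread VMs round-robin across the given number of fault domains."""
    placement = {fd: [] for fd in range(fault_domains)}
    for i, name in enumerate(vm_names):
        placement[i % fault_domains].append(name)
    return placement
```

With four web VMs across three fault domains, losing any single domain leaves at least two VMs running, which is what makes the 99.95% SLA achievable.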