How Three Lines of Code and Windows Machine Learning Empower .NET Developers to Run AI Locally on Windows 10 Devices

This post is authored by Rosane Maffei Vallim, Program Manager, and Wilson Lee, Senior Software Engineer at Microsoft.

Artificial Intelligence (AI), powered by deep learning and machine learning algorithms, is changing the way we solve a wide variety of problems, from manufacturing to the biomedical industry. The applications that can benefit from the power of AI are endless.

With the Windows Machine Learning (Windows ML) API, as .NET developers, we can now leverage the ONNX models that have been trained by data scientists and use them to develop intelligent applications that run AI locally. In this blog post, we will give an overview of what Windows ML can do for you; show you how to use ONNX in your UWP application; and introduce you to the Windows Machine Learning Explorer sample application that generically bootstraps ML models to allow users to dynamically select different models within the same application.

Channel 9's AI Show for this blog post can be found here.

Windows Machine Learning Explorer sample application code for this blog post can be found here.

Why is Windows ML + ONNX Great News for .NET Developers?

Earlier this month, we announced the AI Platform for Windows Developers.

Windows ML is an API for on-device evaluation of trained deep learning and machine learning models. It is built to help developers with scenarios where evaluating machine learning models locally is more advantageous: when a reliable internet connection is unavailable, when latency before getting prediction results matters (particularly important for real-time applications), or when data privacy considerations mean your customers wouldn't be willing to have their data leave the device.

But more than that, Windows ML makes it easy for you to leverage the infinite possibilities of AI by establishing a simple process to integrate models with your application. By supporting Open Neural Network Exchange (ONNX), an open source format for representing machine learning models, Windows ML lets you easily take models created in different training frameworks and evaluate them inside your application. In addition, Windows ML's automatic interface code generation takes care of processing your ONNX file and creating wrapper classes, allowing you to easily interact with your model within your application.

Windows ML can also hardware-accelerate your model evaluation on DirectX 12 capable GPUs. Developers can select their preferred evaluation device, whether CPU or GPU, and Windows ML handles communication with the hardware on their behalf.
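
As a rough sketch only (assuming the current preview API surface, where the device preference lives on InferencingOptionsPreview; the exact type and property names may change in later releases), selecting the GPU for evaluation looks roughly like this:

```csharp
// Sketch, assuming the Windows.AI.MachineLearning.Preview API of the current SDK preview;
// learningModel is a loaded LearningModelPreview (loading is shown later in this post).
learningModel.InferencingOptions.PreferredDeviceKind =
    LearningModelDeviceKindPreview.LearningDeviceGpu;   // or LearningDeviceCpu
```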

How Can Developers Use Windows ML + ONNX in a UWP Application?

Adding the capability to run AI locally to your new or existing UWP application is now easier than ever before. To get started, you add an ONNX file to your UWP project. Then you can either use the automatically generated wrapper classes or write a few lines of code that call the Windows ML APIs directly to evaluate your model.

Adding an ONNX File to Your UWP Project to Get Started

Windows ML's automatic interface code generation, natively integrated with VS UWP workloads, does most of the heavy lifting for you. Simply add an ONNX model file to your project, and Visual Studio will automatically extract the input and output features from the model and generate wrapper classes for your application to consume.


Figure 1 - Auto-generated wrapper classes file created from an ONNX model in Visual Studio

This functionality is fully available for the UWP workload with Windows 10 (version 1803), Windows SDK (Build 17110), and Visual Studio (version 15.7 - Preview 1) installed.

Using Auto Generated Wrapper Classes

The wrapper classes generated by the automatic code generator provide you with an interface to easily interact with your machine learning model through Windows ML APIs. There are three basic wrapper classes:

  • Input class – Represents the input data that will be bound to the model.
  • Output class – Represents the output data that will be bound to the model.
  • Model class – Represents the model object that is loaded and evaluated.


Figure 2 - This shows the skeleton of the generated wrapper classes that represent Input, Output, and Model.
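
As a rough illustration only (the real class and member names are generated from your ONNX file; the names below are hypothetical and assume an image classifier like the PCB model discussed later in this post), the generated file looks roughly like this:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Windows.AI.MachineLearning.Preview;
using Windows.Media;
using Windows.Storage;

// Input class: the data the model expects (here, a picture or video frame).
public sealed class PcbModelInput
{
    public VideoFrame data { get; set; }
}

// Output class: the data the model produces (here, labels and per-label probabilities).
public sealed class PcbModelOutput
{
    public IList<string> classLabel { get; set; } = new List<string>();
    public IDictionary<string, float> loss { get; set; } = new Dictionary<string, float>();
}

// Model class: loads and evaluates the model.
public sealed class PcbModelModel
{
    private LearningModelPreview learningModel;

    // Creates the model object from the ONNX model file.
    public static async Task<PcbModelModel> CreatePcbModelModel(StorageFile file)
    {
        var model = new PcbModelModel();
        model.learningModel = await LearningModelPreview.LoadModelFromStorageFileAsync(file);
        return model;
    }

    // Binds the input and output objects, evaluates the model, and returns the populated output.
    public async Task<PcbModelOutput> EvaluateAsync(PcbModelInput input)
    {
        var output = new PcbModelOutput();
        var binding = new LearningModelBindingPreview(learningModel);
        binding.Bind("data", input.data);
        binding.Bind("classLabel", output.classLabel);
        binding.Bind("loss", output.loss);
        await learningModel.EvaluateAsync(binding, string.Empty);
        return output;
    }
}
```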

To use the automatically generated wrapper classes, you simply need the following three lines of code:

  • Create the model – Create the model object from the ONNX model file.
  • Initialize the input – Initialize the input object with the application data to be bound to the model for evaluation.
  • Evaluate the model – Evaluate the model with the input data to obtain the resulting output data.


Figure 3 - This shows the three lines of code to create the model, initialize the input, and evaluate the model to obtain output data.
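
A minimal sketch of those three lines, using the hypothetical PcbModel* wrapper names from above (your class names will match your model file):

```csharp
// Prerequisite: get the ONNX model file from the app package (the path is illustrative),
// and have a VideoFrame 'frame' captured from the camera or loaded from a picture.
StorageFile modelFile = await StorageFile.GetFileFromApplicationUriAsync(
    new Uri("ms-appx:///Assets/PCB/Pcb.onnx"));

// 1. Create the model from the ONNX model file.
PcbModelModel model = await PcbModelModel.CreatePcbModelModel(modelFile);

// 2. Initialize the input object with application data.
PcbModelInput input = new PcbModelInput { data = frame };

// 3. Evaluate the model to obtain the output data.
PcbModelOutput output = await model.EvaluateAsync(input);
```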

Using Windows ML APIs Directly

To truly appreciate how simple it is to use the Windows ML APIs, we should look inside the Model wrapper class to understand the three lines of code that are required to evaluate your machine learning model locally. If your application's architecture requires dynamically loading different models, this will also help you understand how to build your own abstraction layer.

The first line of code is Load. This loads the ONNX model file from the file system and stores it as a LearningModelPreview object.
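
A minimal sketch (the file path is illustrative):

```csharp
StorageFile modelFile = await StorageFile.GetFileFromApplicationUriAsync(
    new Uri("ms-appx:///Assets/PCB/Pcb.onnx"));
LearningModelPreview learningModel =
    await LearningModelPreview.LoadModelFromStorageFileAsync(modelFile);
```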


The second line of code is Bind. This creates a model binding object that allows you to bind your input and output objects to the model to be evaluated. The data types within the input and output objects depend on the requirements of your model.
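
A minimal sketch (the feature names "data", "classLabel", and "loss" are typical of an image classifier and will differ for your model):

```csharp
LearningModelBindingPreview binding = new LearningModelBindingPreview(learningModel);
binding.Bind("data", input.data);               // input image feature
binding.Bind("classLabel", output.classLabel);  // output label list
binding.Bind("loss", output.loss);              // output label/probability map
```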


The final line of code is Evaluate. This is where Windows ML brings everything together: it uses the binding to evaluate the model locally and returns the results in the output object.
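
A minimal sketch (the correlation id is simply a string you can use to track the evaluation request):

```csharp
LearningModelEvaluationResultPreview evalResult =
    await learningModel.EvaluateAsync(binding, string.Empty);
// The bound output objects (classLabel, loss) are now populated with the predictions.
```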


And voila! You can either use the generated wrapper classes directly or call into the Windows ML APIs yourself. Either way, the above three lines of code will enable you to run AI locally within your application. In the next section, we will explore a generic sample UWP application that showcases a way to build an abstraction on top of the Windows ML APIs: it takes a picture or a video frame, evaluates it with any model that accepts that input type, and displays the results.

End-to-End Sample Application: Windows Machine Learning Explorer

Windows Machine Learning Explorer (Windows ML Explorer) is a data-driven, generic sample application that serves as a launch pad to bootstrap ML models to be evaluated by Windows ML. It currently includes a circuit board defect detection scenario. The model can detect defects in static pictures, such as in Figure 4, where the circuit board traces are broken between paths. It can also evaluate a real-time web camera feed, as with the perfectly normal printed circuit board shown in Figure 5.

You can find the code of the Windows Machine Learning Explorer sample application here.


Figure 4 - A static picture of a defective printed circuit board selected in Windows Machine Learning Explorer.


Figure 5 – A normal printed circuit board shown in front of a web camera in Windows Machine Learning Explorer.

The Printed Circuit Board (PCB) model was trained using Microsoft Custom Vision Service, with PCB data generated by the Circuit Board Generator. Once the CoreML model was trained and generated, it was converted to ONNX format using WinMLTools. To accomplish the conversion, you can work with your data scientist or follow the Convert existing ML models to ONNX guide. The converted ONNX model and sample PCB pictures are then added to the application's project.


Figure 6 – The converted ONNX model file and the generated circuit board pictures are added within the Assets/PCB folder of the project.

In Windows ML Explorer, there is an abstraction layer built on top of the Windows ML APIs. This enables us to generically add to the application any new ONNX model that takes a picture or a video frame as input, evaluate it, and display the results. It also allows the application to dynamically switch between models from the UI. This abstraction is represented by the WinMLModel abstract class.


Figure 7 - WinMLModel.cs file and abstract class can be found inside the MLModels folder.

The WinMLModel abstract class loads the model file as part of its initialization. It expects any new model that inherits from this class to override the following properties and methods (a rough sketch follows the list):

  • DisplayInputName – This allows the UI to display the type of input images for the model.
  • DisplayMinProbability – This restricts the UI to only show evaluation results with probability higher than this number.
  • DisplayName – Friendly display name of the model.
  • DisplayResultSettings – These settings direct how the UI will show probability percentages.
  • Filename – The filename of the ONNX model file.
  • Foldername – The folder within the Assets folder where the ONNX model and the input pictures are located.
  • EvaluateAsync(MLModelResult result, VideoFrame inputFrame) – This lets the inheriting model classes determine how to bind input and output, evaluate the model, and populate the MLModelResult object that the UI consumes to display results.
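
The member signatures below are a rough sketch inferred from these descriptions, not copied from the sample (DisplayResultSettings is omitted because its type isn't described here; MLModelResult is a type defined by the sample):

```csharp
using System;
using System.Threading.Tasks;
using Windows.AI.MachineLearning.Preview;
using Windows.Media;
using Windows.Storage;

public abstract class WinMLModel
{
    public abstract string DisplayName { get; }
    public abstract string DisplayInputName { get; }
    public abstract float DisplayMinProbability { get; }
    public abstract string Filename { get; }
    public abstract string Foldername { get; }

    protected LearningModelPreview LearningModel { get; private set; }

    // Loads the ONNX file from Assets/<Foldername>/<Filename> as part of initialization.
    public async Task InitializeAsync()
    {
        StorageFile file = await StorageFile.GetFileFromApplicationUriAsync(
            new Uri($"ms-appx:///Assets/{Foldername}/{Filename}"));
        LearningModel = await LearningModelPreview.LoadModelFromStorageFileAsync(file);
    }

    // Inheriting models bind input/output, evaluate, and populate the MLModelResult for the UI.
    public abstract Task EvaluateAsync(MLModelResult result, VideoFrame inputFrame);
}
```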

The provided example of the PCB model is represented as a class that inherits WinMLModel.


Figure 8 - The full skeleton of the PCBModel class which represents the PCB model that inherits the WinMLModel abstract class.

The EvaluateAsync(PcbModelInput input, string correlationId) method uses the same code that the generated wrapper class uses to bind inputs/outputs and evaluate the model.


Figure 9 - This shows how the PcbModel binds inputs / outputs and evaluates the model.

Adding a New Model to Windows Machine Learning Explorer

Once you have synced, built, and run the Windows ML Explorer sample application, it is very easy to add a new model that expects a picture or a video frame as input. The application also allows users to dynamically switch between multiple models in the user interface, as shown in Figure 12.

To add a new model to the Windows ML Explorer, you simply follow these five steps:

  1. Create a new model folder under Assets to represent this new model.
  2. Add the ONNX model file to the model folder and set the file's build action to Content.
  3. Create a new Images folder under the model folder and add your input images.
  4. In the automatically generated wrapper classes file, modify the Model class to inherit from the WinMLModel abstract class.
  5. Add an instance of the new model class within the Models list in the constructor of MainViewModel. This will enable the new model to be shown in the Select Machine Learning Model combobox dropdown in the main UI.


Figure 10 - Skeleton of the new model class to be added to the Windows Machine Learning Explorer.
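
An illustrative skeleton of such a model class (the class name and property values below are hypothetical; replace them with your own model's details):

```csharp
public class MyNewModel : WinMLModel   // hypothetical example class
{
    public override string DisplayName => "My New Model";
    public override string DisplayInputName => "circuit board pictures";
    public override float DisplayMinProbability => 0.5f;
    public override string Filename => "MyNewModel.onnx";
    public override string Foldername => "MyNewModel";

    public override Task EvaluateAsync(MLModelResult result, VideoFrame inputFrame)
    {
        // Bind inputFrame as the model's input, evaluate, and populate result for the UI;
        // the binding and evaluation code mirrors the PCB example shown earlier.
        return Task.CompletedTask; // placeholder
    }
}
```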


Figure 11 - This shows how to add a new model to the constructor of the MainViewModel.
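
A sketch of that registration (assuming the view model exposes its models through a simple list called Models, as described in step 5 above):

```csharp
public MainViewModel()
{
    // Every model added to this list shows up in the "Select Machine Learning Model" dropdown.
    Models = new List<WinMLModel>
    {
        new PcbModel(),    // the existing printed circuit board model
        new MyNewModel()   // the newly added model
    };
}
```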


Figure 12 – This shows the result of adding a second model in Windows Machine Learning Explorer, which allows the user to dynamically switch from one model to another.

So, What Are You Waiting For?

In this blog, we introduced how .NET developers can use Windows ML to create intelligent applications that run AI locally on Windows 10 devices. These intelligent applications leverage ONNX models, which can be easily used via the automatically generated wrapper classes or by directly invoking the Windows ML APIs. We have also presented the Windows Machine Learning Explorer, an end-to-end sample application that showcases how to create an abstraction layer on top of the Windows ML APIs to allow users to dynamically switch between ONNX models within the application. Thus, with just a few lines of Windows ML code, every developer can now develop powerful UWP applications that run on the intelligent edge.

There is no reason to wait - go ahead and give it a try!

Rosane & Wilson

 

Resources

  • Channel 9's AI Show for this blog post can be found here.
  • Windows Machine Learning Explorer sample application code for this blog post can be found here.
  • Official guide for Windows Machine Learning can be found here.

Acknowledgement

  • The authors wish to thank Carlos Pessoa, Chris Barker, Lucas Brodzinski, Seth Juarez, and Wee Hyong Tok from Microsoft for reviewing this post; and Louis-Philippe Bourret from Microsoft for reviewing the sample application code.

