
Minos Computing Library

Supporting Extremely Heterogeneous Computing in HPC, AI, and Data Analytics

ICS'22 -- June 27th, 2022

Programming extremely heterogeneous systems with MCL

Logistics: Monday June 27th, 2022 -- 8.00am-2.00pm EDT

Objectives: This tutorial provides an overview of the MCL programming environment and a step-by-step guide to writing, building, and testing an MCL program in a multi-device environment. By the end of this tutorial, attendees should be able to run their MCL code on their laptops and scale it out to more complex systems, both larger workstations and power-efficient embedded systems.

Abstract: Emerging applications in different domains, from scientific simulations and machine learning to data analytics and signal processing, pose new challenges and requirements to industry and research communities. Specialization has become a fundamental pillar of the design of future high-end systems: modern supercomputers feature several accelerators (e.g., GPUs); military systems employ domain-specific SoCs and ASICs; industry has introduced specialized hardware for machine and deep learning. This high level of specialization results in extremely heterogeneous systems that are complicated to design, test, and program. This tutorial introduces the Minos Computing Library (MCL), a new programming environment for efficient programming of extremely heterogeneous systems. MCL provides a task-based abstraction that simplifies programming and hides architectural details, while the runtime supports asynchronous execution of tasks from concurrent applications. The MCL scheduler manages computing resources, performs automatic load balancing, and utilizes locality-aware scheduling. MCL increases performance portability by transparently scaling applications developed on personal desktops to large workstations and supercomputers as well as power-efficient embedded systems. This tutorial will demonstrate how MCL can be used to program and drive multiple heterogeneous classes of devices, such as GPGPUs, FPGAs, and DL accelerators (NVDLA), and to manage multiple devices within a system (e.g., multi-GPU systems). We will also show how MCL can be used as a backend for a popular DNN library (oneDNN), enabling data scientists to seamlessly port their PyTorch code to MCL and take advantage of emerging heterogeneous architectures.
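To give a flavor of the task-based abstraction described above, the sketch below shows how a single OpenCL kernel might be offloaded as an asynchronous MCL task. The function and flag names follow MCL's public C API (e.g., mcl_task_create, mcl_exec), but the exact signatures may differ across releases; treat this as a C-style illustrative sketch and consult the headers in the tutorial Docker image for the authoritative interface.

```c
#include <minos.h>   /* MCL public header (name assumed) */
#include <stdlib.h>

#define N 1024

int main(void)
{
    size_t size = N * sizeof(float);
    float *a = malloc(size), *b = malloc(size), *c = malloc(size);
    uint64_t pes[3] = {N, 1, 1};            /* global work dimensions */

    mcl_init(2, 0x0);                       /* 2 workers, default flags */

    /* Create a task and bind it to the "VADD" kernel in vadd.cl */
    mcl_handle *hdl = mcl_task_create();
    mcl_task_set_kernel(hdl, "vadd.cl", "VADD", 3, "", 0x0);
    mcl_task_set_arg(hdl, 0, a, size, MCL_ARG_INPUT  | MCL_ARG_BUFFER);
    mcl_task_set_arg(hdl, 1, b, size, MCL_ARG_INPUT  | MCL_ARG_BUFFER);
    mcl_task_set_arg(hdl, 2, c, size, MCL_ARG_OUTPUT | MCL_ARG_BUFFER);

    mcl_exec(hdl, pes, NULL, MCL_TASK_GPU); /* asynchronous submission */
    mcl_wait(hdl);                          /* block until completion  */

    mcl_hdl_free(hdl);
    mcl_finit();
    return 0;
}
```

Note that the application only describes the task and its data; the runtime selects the device, so the same code scales from a laptop GPU to a multi-device workstation without changes.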

Tutorial Docker Image: MCL ICS'22 Docker Image

Time (PST)   Topic                                    Presenter              Material
8.00-8.30    Introduction                             R. Gioiosa
8.30-9.30    Hands-on Session                         R. Gioiosa
11.00-11.45  MCL on FPGA and NVDLA Accelerators       R. Gioiosa, R. Friese
11.45-12.30  mclDNN: Accelerate DNN workloads         L. Guo
1.00-1.30    A realistic example: MiniMD              A. Kamatar
1.30-2.15    Programming with Rust                    R. Friese
2.15-2.30    Q&A and Conclusions                      All