Technology for embedded GPU virtualization in the edge computing environment
Author
Abstract

With the rapid development of Internet of Things technology, the demand for data processing capability at edge nodes is increasing, and GPUs are being deployed in edge nodes ever more widely. Current research on GPU virtualization focuses mainly on cloud data centers; little work addresses embedded GPU virtualization in resource-constrained edge nodes. Unlike GPUs in cloud data centers, embedded GPUs in edge nodes typically do not expose per-container GPU utilization or video memory usage, so traditional GPU virtualization techniques cannot be applied directly to resource virtualization on embedded devices. This paper presents sGPU, a method for virtualizing embedded GPU resources in an edge computing environment. We integrate edge nodes equipped with embedded GPUs into Kubernetes through the device-plugin mechanism, so users can package GPU applications in containers and deploy them with Kubernetes on those nodes. sGPU allows containers to share embedded GPU computing resources by dividing a physical GPU into multiple virtual GPUs that are allocated to containers on demand. To partition GPU computing power, we propose a multi-container GPU-sharing algorithm and implement it in sGPU; the algorithm keeps the computing-power split as accurate as possible when multiple containers compete for the same GPU simultaneously. Experimental results show that the average overhead of sGPU is 3.28%, and the accuracy of computing-power partitioning reaches 92.7% when a single container uses the GPU.
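As a rough illustration of how a device-plugin-backed virtual GPU might be requested, the sketch below creates a pod whose container asks for one virtual GPU slice via a Kubernetes extended resource. The resource name example.com/sgpu, the container image, and the kubeconfig path are placeholders; the abstract does not state the actual resource key that sGPU advertises, and using client-go rather than a YAML manifest is simply one way to submit such a request.

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the local kubeconfig (path is illustrative).
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Pod whose container requests one virtual GPU slice through an
	// extended resource advertised by a device plugin. The resource
	// name "example.com/sgpu" is an assumption, not taken from the paper.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "gpu-app"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "inference",
				Image: "example.com/gpu-app:latest", // placeholder image
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						corev1.ResourceName("example.com/sgpu"): resource.MustParse("1"),
					},
				},
			}},
		},
	}

	created, err := clientset.CoreV1().Pods("default").Create(
		context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created pod:", created.Name)
}

The scheduler places such a pod only on nodes where the device plugin has advertised free virtual GPU capacity, which is how on-demand allocation of the partitioned GPU is expressed at the Kubernetes level.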

Year of Publication
2022
Date Published
December
Publisher
IEEE
Conference Location
Haikou, China
ISBN Number
9798350346558
URL
https://ieeexplore.ieee.org/document/10189560/
DOI
10.1109/SmartWorld-UIC-ATC-ScalCom-DigitalTwin-PriComp-Metaverse56740.2022.00238