Published In

2022 26th International Conference on Engineering of Complex Computer Systems (ICECCS)

Document Type

Post-Print

Publication Date

5-3-2022

Subjects

Software Architecture -- Applications

Abstract

Deep-learning accelerators are increasingly popular. Two accelerator architectures prevail: one built on general matrix multiplication units and the other on convolution cores. However, the Tensor Virtual Machine (TVM), a widely used deep-learning compiler stack, does not support the latter. This paper proposes a general framework for extending TVM to support deep-learning accelerators with convolution cores. We have successfully applied it to two well-known accelerators: Nvidia's NVDLA and Bitmain's BM1880. Deep-learning workloads can now be readily deployed to these accelerators through TVM and executed efficiently. The framework can extend TVM to other accelerators with minimal effort.

Rights

©2022 IEEE

Description

This is the author’s version of a work that was accepted for publication in 2022 26th International Conference on Engineering of Complex Computer Systems (ICECCS). Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in 2022 26th International Conference on Engineering of Complex Computer Systems (ICECCS).

DOI

10.1109/ICECCS54210.2022.00031

Persistent Identifier

https://archives.pdx.edu/ds/psu/37569

Publisher

IEEE
