Citation
H. Wöhrle, M. De Lucas Alvarez, F. Schlenke, A. Walsemann, M. Karagounis, and F. Kirchner, “Surrogate Model based Co-Optimization of Deep Neural Network Hardware Accelerators,” in 2021 IEEE International Midwest Symposium on Circuits and Systems (MWSCAS), 2021, pp. 40–45.
Abstract
In this paper, we present an ASIC based on 22FDX/FDSOI technology for the detection of atrial fibrillation in human electrocardiograms using neural networks. The ASIC consists of a RISC-V core for supporting software components and an application-specific machine learning IP core (ML-IP), which implements the computationally intensive inference. The ASIC was designed for maximum energy efficiency. A special feature of the ML-IP is its modular, generic, and scalable design, which allows the quantization of each computational operation, the degree of parallelization, and the architecture of the neural network (NN) to be specified. This in turn enables the use of ML-based optimization techniques to co-optimize the hardware design and the NN architecture. Here, a multi-objective optimization of the overall system is performed with respect to computational efficiency at a given classification accuracy and speed, carried out using a probabilistic surrogate model. This model aims to find the optimal neural network architecture with a minimum number of training, simulation, and evaluation steps.
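The abstract describes a surrogate-model-based search over a joint hardware/NN design space. The following is a minimal sketch of that general idea, not the authors' implementation: it assumes a Gaussian-process surrogate (scikit-learn) with an expected-improvement acquisition, and it collapses the two objectives (classification error, energy) with a weighted-sum scalarization rather than the paper's multi-objective formulation. The design parameters and the toy evaluation function are illustrative placeholders.

```python
# Sketch: surrogate-based search over a toy accelerator/NN design space.
# Assumptions: GP surrogate, expected-improvement acquisition, weighted-sum
# scalarization of (error, energy). Not the paper's actual optimizer.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

# Hypothetical design space: weight bit width, MAC parallelism, hidden units.
bits, par, hidden = np.meshgrid([4, 8, 16], [1, 2, 4, 8], [8, 16, 32, 64])
space = np.column_stack([bits.ravel(), par.ravel(), hidden.ravel()]).astype(float)

def evaluate(x):
    """Stand-in for training the NN and simulating the hardware configuration."""
    b, p, h = x
    error = 0.3 * np.exp(-h / 32) + 0.1 * np.exp(-b / 8) + 0.01 * rng.normal()
    energy = 1e-3 * b * h / p + 1e-4 * p  # toy energy-per-inference model
    return error, energy

def scalarize(error, energy, w=0.7, energy_scale=0.05):
    """Weighted-sum scalarization of the two objectives (to be minimized)."""
    return w * error + (1 - w) * energy / energy_scale

# Initial random evaluations of a few configurations.
idx = rng.choice(len(space), size=5, replace=False)
X = space[idx]
Y = np.array([scalarize(*evaluate(x)) for x in X])

gp = GaussianProcessRegressor(normalize_y=True)
for step in range(15):
    gp.fit(X, Y)
    mu, sigma = gp.predict(space, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    best = Y.min()
    # Expected improvement over the current best scalarized objective.
    z = (best - mu) / sigma
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    cand = int(np.argmax(ei))
    X = np.vstack([X, space[cand]])
    Y = np.append(Y, scalarize(*evaluate(space[cand])))

print("best configuration (bits, parallelism, hidden units):", X[np.argmin(Y)])
```

In practice, evaluate() would wrap the actual training, simulation, and power-estimation flow, and a true multi-objective variant would maintain a Pareto front instead of a single scalarized score.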
References
DOI 10.1109/MWSCAS47672.2021.9531708
Keywords
Computational modeling
Computer architecture
FDX/FDSOI
IP networks
Probabilistic logic
Quantization (signal)
Software
Training
Bayesian optimization
deep learning
hardware acceleration