Accelerating mobile inference through fine-grained CPU-GPU co-execution
Z. Li, M. Paolieri, L. Golubchik
Abstract: Deploying deep neural networks on mobile devices is increasingly important but remains challenging due to limited computing resources. At the same time, the unified memory architecture of these devices and the narrower gap between their CPU and GPU performance provide an opportunity to reduce inference latency by assigning tasks to both the CPU and the GPU. The main obstacles to such collaborative execution are the significant synchronization overhead required to combine partial results, and the difficulty of predicting execution times of tasks assigned to the CPU and GPU (due to the dynamic selection of implementations and parallelism level). To overcome these obstacles, we propose both a lightweight synchronization mechanism based on OpenCL fine-grained shared virtual memory (SVM) and machine learning models to accurately predict execution times. Notably, these models capture the performance characteristics of GPU kernels and account for their dispatch times. A comprehensive evaluation on four mobile platforms shows that our approach can quickly select CPU-GPU co-execution strategies achieving up to 1.89x speedup of linear layers and 1.75x speedup of convolutional layers (close to the achievable maximums of 2.01x and 1.87x, respectively, found by exhaustive grid search on a Pixel 5 smartphone).
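To illustrate the kind of mechanism the abstract refers to, the sketch below (in C, not taken from the paper) shows how OpenCL fine-grained SVM can let the CPU and GPU share a result buffer and signal completion through an atomic flag rather than a blocking queue flush. All function and variable names are illustrative, and device, context, and kernel setup are assumed.

```c
/* Illustrative sketch only: a fine-grained SVM buffer shared by CPU and GPU,
 * with an atomic flag as a lightweight completion signal. Assumes an
 * OpenCL 2.x context whose device supports fine-grained buffer SVM with
 * atomics; error handling and kernel setup are omitted. */
#include <CL/cl.h>
#include <stdatomic.h>
#include <stddef.h>

/* Allocate a buffer that both CPU and GPU can access without map/unmap calls. */
static float *alloc_shared_buffer(cl_context ctx, size_t n_elems) {
    return (float *)clSVMAlloc(ctx,
                               CL_MEM_READ_WRITE |
                               CL_MEM_SVM_FINE_GRAIN_BUFFER |
                               CL_MEM_SVM_ATOMICS,
                               n_elems * sizeof(float),
                               0 /* default alignment */);
}

/* Pass the SVM pointer to a kernel (SVM arguments use a dedicated API call). */
static void bind_shared_buffer(cl_kernel kernel, cl_uint arg_idx, void *svm_ptr) {
    clSetKernelArgSVMPointer(kernel, arg_idx, svm_ptr);
}

/* CPU side: spin on a flag in fine-grained SVM that the GPU kernel sets with
 * an atomic store once its partition of the layer is finished, avoiding a
 * blocking clFinish() on the command queue. */
static void wait_for_gpu_partition(volatile atomic_int *done_flag) {
    while (atomic_load_explicit(done_flag, memory_order_acquire) == 0)
        ; /* busy-wait; the CPU partition typically completes before this point */
}
```

In this hypothetical arrangement, the CPU computes its share of a layer directly in the shared buffer while the GPU kernel writes its own partition, so combining partial results requires no extra copies; only the atomic flag is needed to establish ordering.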
Selected papers of EPEW 2025, pp. -, Springer, 2026.
Tags: Performance Models, ML Systems, Edge AI, Applications