Partitioning Convolutional Neural Networks to Maximize the Inference Rate on Constrained IoT Devices

Billions of devices will compose the IoT system in the next few years, generating a huge amount of data. We can use fog computing to process these data, considering the possibility of overloading the network towards the cloud. In this context, deep learning can process these data, but the memory requirements of deep neural networks may prevent them from executing on a single resource-constrained device. Furthermore, their computational requirements may yield an unfeasible execution time. In this work, we propose Deep Neural Networks Partitioning for Constrained IoT Devices, a new algorithm to partition neural networks for efficient distributed execution.
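To make the memory pressure concrete, the sketch below estimates the parameter footprint of the classic LeNet-5 layer shapes with 4-byte floats. The layer list and the 64 KB device budget are illustrative assumptions, not figures from the paper.

```python
# Illustrative estimate of LeNet-5 parameter memory (classic layer
# shapes, 4-byte floats). The 64 KB device budget is hypothetical.

LAYERS = {
    # name: (weights, biases)
    "conv1": (6 * 1 * 5 * 5, 6),    # 6 filters, 5x5, 1 input channel
    "conv2": (16 * 6 * 5 * 5, 16),  # 16 filters, 5x5, 6 input channels
    "fc1":   (400 * 120, 120),      # 16 * 5 * 5 = 400 inputs
    "fc2":   (120 * 84, 84),
    "fc3":   (84 * 10, 10),
}

BYTES_PER_PARAM = 4
DEVICE_BUDGET = 64 * 1024  # hypothetical 64 KB of free RAM per device

total = 0
for name, (w, b) in LAYERS.items():
    size = (w + b) * BYTES_PER_PARAM
    total += size
    print(f"{name}: {w + b:>6} params, {size / 1024:6.1f} KB")

print(f"total: {total / 1024:.1f} KB "
      f"-> fits one device: {total <= DEVICE_BUDGET}")
```

Under these assumptions the parameters alone take roughly 241 KB, so the model cannot reside on a single 64 KB device and must be partitioned.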

Our algorithm can optimize the neural network inference rate or the number of communications among devices. Additionally, it properly accounts for the shared parameters and biases of Convolutional Neural Networks. We investigate inference rate maximization for the LeNet model in constrained setups. We show that the partitionings offered by popular machine learning frameworks such as TensorFlow, or by the general-purpose framework METIS, may be invalid for very constrained setups. The results show that our algorithm can partition LeNet for all the proposed setups, yielding up to 38% more inferences per second than METIS.
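The sketch below is not the paper's algorithm; it only illustrates the validity constraint any such partitioner must respect (per-device memory must not be exceeded) and why coarse, layer-granularity splits can become invalid under very tight budgets. The greedy strategy, helper name, and byte sizes are illustrative assumptions.

```python
# A minimal greedy sketch of memory-constrained, layer-granularity
# placement. NOT the paper's algorithm: it only demonstrates the
# per-device memory constraint and how layer-level splits fail on
# very constrained devices.

from typing import Dict, List

def greedy_place(layer_bytes: Dict[str, int], budget: int) -> List[List[str]]:
    """Assign layers to devices in order, opening a new device whenever
    the next layer would overflow the current device's budget."""
    devices: List[List[str]] = [[]]
    used = 0
    for name, size in layer_bytes.items():
        if size > budget:
            # A single layer exceeds the budget: no layer-granularity
            # partitioning is valid; a finer-grained split (e.g., per
            # neuron) would be required.
            raise ValueError(f"{name} ({size} B) exceeds {budget} B budget")
        if used + size > budget:
            devices.append([])  # open a new device
            used = 0
        devices[-1].append(name)
        used += size
    return devices

# Illustrative LeNet-5 parameter sizes in bytes (from the sketch above):
layer_bytes = {"conv1": 624, "conv2": 9664, "fc1": 192480,
               "fc2": 40656, "fc3": 3400}

print(greedy_place(layer_bytes, budget=256 * 1024))  # fits on one device
try:
    greedy_place(layer_bytes, budget=64 * 1024)
except ValueError as err:
    print("invalid at 64 KB:", err)                  # fc1 alone overflows
```

The failure at 64 KB mirrors the abstract's point: partitionings that look reasonable at layer granularity can simply be infeasible on very constrained setups, which is what motivates a partitioner designed for that regime.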
