Edge computing, a key component of the upcoming 5G mobile networks and future 6G technologies, promises to distribute cloud applications while providing more bandwidth and reducing latencies [1]. These promises are delivered by moving application-specific computations between the cloud, the data-producing devices, and the network infrastructure components at the edges of wireless and fixed networks. In stark contrast, current artificial intelligence (AI) and in particular machine learning (ML) methods assume that computations are conducted in a homogeneous cloud with ample computational and data-storage resources. This prevailing cloud-centric architectural model requires transmitting data from end-user devices to the cloud, consuming significant data transmission resources and introducing latencies.