Applications involving voluminous data but requiring low-latency computation and local feedback demand that computing be performed as close to the data source as possible. Communication constraints and the need for privacy-preserving approaches likewise dictate computing at the edge. Given the growth of such application scenarios and recent advances in algorithms and techniques, machine learning and inference at the edge are growing at a rapid pace. In support of these applications, a wide range of hardware (CPUs, GPUs, ASICs) is venturing farther from the center, enabling computation closer to, and often at the interface with, the physical world.
The resulting diversity of edge-computing hardware in capabilities, architectures, and programming models, together with the differing runtime requirements and resource constraints of edge applications, poses several new challenges. Some edge applications may need to run continuously, whereas others run only when particular events occur. Situations may also warrant running applications in sandboxes for privacy, security, and resource-allocation purposes. Because capacity at the edge is usually limited in computation, energy, and network bandwidth, these applications must be scheduled together and run concurrently. Consequently, a future in which heterogeneous edge hardware and multiple applications share the underlying resources is imminent.
Deploying and managing applications with such diverse properties concurrently at the edge requires support for multitenancy and presents a challenge that demands cooperation and coordination among the various components of the software stack. Mechanisms must be devised to exchange both data and control with the applications in order to fine-tune their behavior and change their operational parameters. Realizing the computing continuum by coupling these edge applications with centrally located cloud and HPC resources and applications also opens up many research areas.
As we push further toward edge-enabled networks of devices, we inherit a setting where resources are deployed away from the safety of secure indoor spaces, often in the midst of a bustling urban canyon, and exposed to both physical and cybersecurity threats. Deployed and interconnected predominantly over public networks, these systems must be designed with cybersecurity as a first-class design concern rather than introduced as an afterthought.
Another prominent development in this evolving landscape is the evolution of last-mile wireless connectivity. The emergence of 5G and Wi-Fi 6, and their likely convergence, will initially provide both bandwidth improvements and latency reduction. Together with advances in processor technology, this will enable the deployment of more advanced sensors, actuators, and services than is possible today. Next-generation wireless networks will also deploy radio base stations more densely and with substantially higher computing capability, used both for core operations and as services for end users. Allocating and orchestrating these computing resources among competing user needs and network functions will share many of the challenges currently encountered in edge computing. Perhaps the largest DevOps and management challenges in edge computing at the infrastructure level will be witnessed in this space.
The goal of this workshop is hence to gather the community working in three broad areas: Edge AI/ML and Data, Edge Architecture, and Practice. We welcome original work covering these topics; in particular, we welcome work and discussions on:
All papers must be original and not simultaneously submitted to another journal or conference. Papers submitted to the workshop will be peer reviewed by at least three reviewers.
The following paper categories are welcome:
Templates for MS Word and LaTeX provided by IEEE eXpress Conference Publishing are available for download. See the latest versions here.
Here is a link to the EasyChair CFP. Upload your submission to the EasyChair submission server in PDF format. Accepted manuscripts will be included in the IPDPS workshop proceedings.