Fifth International Workshop on Serverless Computing (WoSC) 2019

Part of the 20th ACM/IFIP International Middleware Conference, December 9-13, 2019, at UC Davis in Davis, CA, USA.

Please visit the previous workshops' websites to see what you can expect.

News

2019-10-30: Preliminary workshop program is available.
2019-10-30: List of accepted papers with abstracts is available.
2019-08-29: Extended submission deadline to September 15.
2019-08-29: Top-rated papers may receive a special invitation to submit a follow-up article, based on their WoSC5 submission, to an IEEE Software magazine issue with an emphasis on serverless, to be published in early 2020.
2019-08-28: Added clarification to the CFP: the 6-page limit includes all content (bibliography, appendices, etc.).
2019-06-06: CFP available

Welcome

Over the last four to five years, serverless computing (serverless) has gained an enthusiastic following in industry as a compelling paradigm for deploying cloud applications, enabled by the recent shift of enterprise application architectures to containers and microservices. Most of the major cloud vendors have released serverless platforms, including AWS Lambda, Google Cloud Functions, Microsoft Azure Functions, and IBM Cloud Functions. This workshop brings together researchers and practitioners to discuss their experiences and thoughts on future directions of serverless research.

Serverless architectures offer different tradeoffs in terms of control, cost, and flexibility compared to distributed applications built on an Infrastructure-as-a-Service (IaaS) substrate. For example, a serverless architecture requires developers to more carefully consider the resources used by their code (time to execute, memory used, etc.) when modularizing their applications. This is in contrast to concerns around latency, scalability, and elasticity, where significant development effort has traditionally been spent when building cloud services. In addition, existing tools and techniques to monitor and debug applications are not directly applicable in serverless architectures, so new approaches are needed; test and development pipelines may also need to be adapted. Another decision developers face is the appropriateness of the serverless ecosystem for their application requirements. A rich ecosystem of services built into the platform is typically easier to compose with and offers better performance. However, composing external services may be unavoidable, and in such cases many of the benefits of serverless disappear, including performance and availability guarantees. This presents an important research challenge: it is not clear how existing results and best practices, such as workflow composition research, can be applied to composition in a serverless environment.
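To make the programming model concrete, the following is a minimal sketch of a Python function in the style of an AWS Lambda handler. The event shape and field names used here are hypothetical; the point is that the developer writes only the function body and budgets per-invocation resources (memory, timeout) in the deployment configuration, rather than managing servers.

```python
import json

def handler(event, context):
    """Minimal function-as-a-service handler sketch.

    The platform invokes this function per request/event; the developer
    configures its resource limits (memory, timeout) at deployment time
    instead of provisioning infrastructure.
    """
    # Hypothetical event field; real event shapes depend on the trigger.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# The handler can be invoked locally to illustrate the model:
print(handler({"name": "WoSC"}, None))
```

Because the unit of deployment is a single function, concerns such as cold-start latency and per-invocation cost become first-class design considerations, as discussed above.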

Workshop program

Date: December 9 (Monday)

Workshop location: UC Davis, CA, USA

Preliminary schedule - subject to change depending on lunch and coffee break times

Morning

Invited speaker

Keynote: The Dawn of the Cloud Computer

Presentations (3x20min)

Real-time Serverless: Enabling Application Performance Guarantees
Selena: a Serverless Energy Management System
FnSched: An Efficient Scheduler for Serverless Functions

Lunch

Afternoon

Invited speaker

TBA

Presentations (3x20min)

Towards Serverless as Commodity: a case of Knative
FaaS Orchestration of Parallel Workloads
Extending storage support for unikernel containers

Presentations (3x20min)

Understanding Open Source Serverless Platforms: Design Considerations and Performance
Serverless Workflows for Indexing Large Scientific Data
Function-as-a-Service Application Service Composition: Implications for a Natural Language Processing Application

Panel: Serverless 2020 and Beyond.

TBA

Invited speakers

Keynote: The Dawn of the Cloud Computer

Speaker: Rodric Rabbah, CTO, Nimbella

Abstract: Today's applications are increasingly built on the cloud. They are highly distributed and reactive with massive scale, and consume many types of events. The prevalence of cloud platforms and cloud services allows for the intriguing possibility of realizing a new instruction set architecture (ISA) and accompanying programming models for an emerging computer architecture: the Cloud (Super) Computer. This journey was facilitated five years ago with the introduction of AWS Lambda, a new model of computing without servers, and hence "serverless". I will describe programming, compiler, systems, and architectural challenges in furthering this serverless movement, while also making the case that serverless is inevitable because it affords application developers extreme focus and agility while delivering unparalleled value and scale.

Presentation [PDF]

Paper abstracts

Real-time Serverless: Enabling Application Performance Guarantees

Presenter:

Authors: H. Nguyen, C. Zhang, Z. Xiao, A. Chien

Abstract: Today's serverless provides "function-as-a-service", enabling new cloud applications with dynamic scaling and fine-grained resource charging. Serverless is provided as a best-effort service. We propose an extension to the serverless interface, called real-time serverless, that provides an invocation rate guarantee, specified by the application, and requires the underlying implementation to deliver this SLO. This change enables real-time serverless to support novel bursty, real-time cloud and edge applications efficiently. We show how applications can guarantee real-time performance, and study real-time serverless behavior analytically and empirically to explore its benefits. Finally, we use a case study, traffic monitoring, to illustrate the use and benefits of real-time serverless, and demonstrate our prototype implementation.

Presentation [PDF] [PPTX]

Selena: a Serverless Energy Management System

Presenter:

Authors: F. Huber, N. Körber, M. Mock

Abstract: Reduction of CO2 emissions has become a significant challenge faced by humanity today. Energy management systems try to contribute to addressing this challenge by enabling an intelligent use and combination of different energy sources by capturing and visualizing energy usage and production data to enable energy efficiency improvement measures. In this paper, we present Selena, a prototypical energy management system that is implemented using the serverless computing paradigm. Essential design goals for Selena are both extensibility, so that many different data sources and providers (e.g., measurement systems) can be integrated easily, as well as efficient scalability, so that the system can be used from small (e.g., one building) to large installations (potentially entire neighborhoods) with deployment cost commensurate with the installation size. Our initial experiences with Selena indicate that the serverless paradigm is very well suited to capture and process energy-related data reliably and has excellent scaling properties due to the elastic compute platform that it is built upon.

Presentation [PDF] [PPTX]

Towards Serverless as Commodity: a case of Knative

Presenter:

Authors: N. Kaviani, D. Kalinin, M. Maximilien

Abstract: Serverless computing promises to evolve cloud computing architecture from VMs and containers-as-a-service (CaaS) to function-as-a-service (FaaS). This takes away the complexities of managing and scaling the underlying infrastructure and can result in simpler code, cheaper realization of services, and higher availability. Nonetheless, one of the primary drawbacks customers face when making the decision to move their software to a serverless platform is the potential for getting locked in with a particular provider. This used to be a concern with Platform-as-a-Service (PaaS) offerings too. However, with Kubernetes emerging as the industry-standard PaaS layer, PaaS is closer to becoming commodity, with the Kubernetes API as its common interface. The question is whether a similar unification of the API interface layer and runtime contracts can be achieved for serverless. If achieved, this would free serverless users from their fears of platform lock-in. Our goal in this paper is to extract a minimal common-denominator model of execution that can move us closer to a unified serverless platform. As contributors to Knative with in-depth understanding of its internal design, we use Knative as the baseline for this comparison and contrast its API interface and runtime contracts against other prominent serverless platforms to identify commonalities and differences. Influenced by the work in Knative, we also discuss challenges as well as the necessary evolution we expect to see as serverless platforms themselves reach commodity status.

Presentation [PDF] [PPTX]

FnSched: An Efficient Scheduler for Serverless Functions

Presenter:

Authors: A. Suresh, A. Gandhi

Abstract: An imminent challenge in the serverless computing landscape is the escalating cost of infrastructure needed to handle the growing traffic at scale. This work presents FnSched, a function-level scheduler designed to minimize provider resource costs while meeting customer performance requirements. FnSched works by carefully regulating the resource usage of colocated functions on each invoker, and autoscaling capacity by concentrating load on few invokers in response to varying traffic. We implement a prototype of FnSched and show that, compared to existing baselines, FnSched significantly improves resource efficiency, by as much as 36%--55%, while providing acceptable application latency.

Presentation [PDF] [PPTX]

FaaS Orchestration of Parallel Workloads

Presenter:

Authors: D. Barcelona-Pons, P. García-López, Á. Ruiz-Ollobarren, A. Gómez-Gómez, G. París, M. Sánchez-Artigas

Abstract: Function as a Service (FaaS) is based on a reactive programming model where functions are activated by triggers in response to cloud events (e.g., objects added to an object store). The inherent elasticity and the pay-per-use model of serverless functions make them very appropriate for embarrassingly parallel tasks like data preprocessing, or even the execution of MapReduce jobs in the cloud. But current serverless orchestration systems are not designed for managing parallel fork-join workflows in a scalable and efficient way. We demonstrate in this paper that existing services like AWS Step Functions or Azure Durable Functions incur considerable overheads, and only Composer at IBM Cloud provides suitable performance. Next, we analyze the architecture of OpenWhisk as an open-source FaaS system and its orchestration features (Composer). We outline its architectural problems and propose guidelines for orchestrating massively parallel workloads using serverless functions.

Presentation [PDF] [PPTX]

Extending storage support for unikernel containers

Presenter:

Authors: O. Lagkas Nikolos, K. Papazafeiropoulos, S. Psomadakis, A. Nanos, N. Koziris

Abstract: In recent years, the rapid adoption of the serverless computing paradigm has led to the proliferation of Function-as-a-Service computing frameworks. The majority of these frameworks utilize containers, a lightweight operating system virtualization technique, to ensure isolated function execution. Unikernels, which package applications within a single-address space library operating system, have been proposed as an alternative function isolation mechanism, which offers stronger isolation guarantees without suffering the performance penalties of full hardware virtualization. However, due to different storage semantics between containers and unikernels, the state-of-the-art approaches for using unikernels in place of containers result in decreased performance, inefficient resource utilization and limited functionality. In this paper we bridge the storage gap between containers and unikernels in the context of serverless computing. First, we examine and categorize the storage requirements for building and running functions based on unikernels. Based on these requirements, we design and prototype a framework, which extends the Docker storage layer to support unikernel images. Our framework enables the sharing of common read-only unikernel image layers between functions and moves the unikernel image building overhead away from the critical path of function execution. We show that our framework improves function instantiation times while reducing storage space overhead.

Presentation [PDF] [PPTX]

Understanding Open Source Serverless Platforms: Design Considerations and Performance

Presenter:

Authors: J. Li, S. Kulkarni, K. Ramakrishnan, D. Li

Abstract: Serverless computing is increasingly popular because of the promise of lower cost and the convenience it provides to users who do not need to focus on server management. This has resulted in the availability of a number of proprietary and open-source serverless solutions. We seek to understand how the performance of serverless computing depends on a number of design issues using several popular open-source serverless platforms. We identify the idiosyncrasies affecting performance (throughput and latency) for different open-source serverless platforms. Further, we observe that just having either resource-based (CPU and memory) or workload-based (requests per second (RPS) or concurrent requests) auto-scaling is inadequate to address the needs of the serverless platforms.

Presentation [PDF] [PPTX]

Serverless Workflows for Indexing Large Scientific Data

Presenter:

Authors: T. Skluzacek, R. Chard, R. Wong, Z. Li, Y. Babuji, L. Ward, B. Blaiszik, K. Chard, I. Foster

Abstract: The use and reuse of scientific data is ultimately dependent on the ability to understand what those data represent, how they were captured, and how they can be used. In many ways, data are only as useful as the metadata available to describe them. Unfortunately, due to growing data volumes, large and distributed collaborations, and a desire to store data for long periods of time, scientific “data lakes” quickly become disorganized and lack the metadata necessary to be useful to researchers. New automated approaches are needed to derive metadata from scientific files and to use these metadata for organization and discovery. Here we describe one such system, Xtract, a service capable of processing vast collections of scientific files and automatically extracting metadata from diverse file types. Xtract relies on function as a service models to enable scalable metadata extraction by orchestrating the execution of many, short-running extractor functions. To reduce data transfer costs, Xtract can be configured to deploy extractors centrally or near to the data (i.e., at the edge). We present a prototype implementation of Xtract and demonstrate that it can derive metadata from a 7 TB scientific data repository.

Presentation [PDF] [PPTX]

Function-as-a-Service Application Service Composition: Implications for a Natural Language Processing Application

Presenter:

Authors: M. Fotouhi, D. Chen, W. Lloyd

Abstract: Serverless computing platforms provide Function-as-a-Service (FaaS) to end users for hosting individual functions known as microservices. In this paper, we describe the deployment of a Natural Language Processing (NLP) application using AWS Lambda. We investigate and study the performance and memory implications of two alternate service compositions. First, we evaluate a switchboard architecture, where a single Lambda deployment package aggregates all of the NLP application functions together into a single package. Second, we consider a service isolation architecture where each NLP function is deployed as a separate FaaS function decomposing the application to run across separate runtime containers. We compared the average runtime and processing throughput of these compositions using different pre-trained network weights to initialize our neural networks to perform inference. Additionally, we varied the workload dataset sizes to evaluate implications of inferencing throughput for our NLP application deployed to a FaaS platform. We found our switchboard composition, that shares FaaS runtime containers for all application tasks, produced a 14.75% runtime performance improvement, and also a 17.3% improvement in NLP processing throughput (samples/second). These results demonstrate the potential for careful application service compositions to provide notable performance improvements and ultimately cost savings for application deployments to serverless FaaS platforms.

Presentation [PDF] [PPTX]

Workshop call for papers

Call For Papers (CFP)

Organization

Workshop co-chairs

Paul Castro, IBM Research
Vatche Ishakian, Bentley University
Vinod Muthusamy, IBM Research
Aleksander Slominski, IBM Research

Steering Committee

Geoffrey Fox, Indiana University
Dennis Gannon, Indiana University & Formerly Microsoft Research
Arno Jacobsen, MSRG (Middleware Systems Research Group)

Program Committee

Gul Agha, University of Illinois at Urbana-Champaign
Azer Bestavros, Boston University
Flavio Esposito, Saint Louis University
Rodrigo Fonseca, Brown University
Ian Foster, University of Chicago and Argonne National Laboratory
Geoffrey Fox, Indiana University
Dennis Gannon, Indiana University & Formerly Microsoft Research
Arno Jacobsen, MSRG (Middleware Systems Research Group)
Wes Lloyd, University of Washington Tacoma
Pedro Garcia Lopez, Universitat Rovira i Virgili (Spain)
Tyler Harter, GSL, Microsoft
Višnja Križanović, Josip Juraj Strossmayer University of Osijek
Maciej Malawski, AGH University of Science and Technology, Poland
Pietro Michiardi, Eurecom
Lucas Nussbaum, LORIA, France
Eric Rozner, University of Colorado Boulder
Josef Spillner, Zurich University of Applied Sciences
Rich Wolski, University of California, Santa Barbara

Previous workshop

Fourth International Workshop on Serverless Computing (WoSC) in Zurich, Switzerland on December 20, 2018. In conjunction with 11th IEEE/ACM UCC and 5th IEEE/ACM BDCAT.

Third International Workshop on Serverless Computing (WoSC) in San Francisco, CA, USA on July 2, 2018. In conjunction with IEEE CLOUD 2018, affiliated with the 2018 IEEE World Congress on Services (IEEE SERVICES 2018).

Second International Workshop on Serverless Computing (WoSC) 2017 in Las Vegas, NV, USA on December 12, 2017, part of Middleware 2017.

First International Workshop on Serverless Computing (WoSC) 2017 in Atlanta, GA, USA on June 5, 2017, part of ICDCS 2017.

Tweets about workshop

Please use hashtags #wosc5 #serverless