Sixth International Workshop on Serverless Computing (WoSC6) 2020

Part of the ACM/IFIP International Middleware Conference, Dec 7-11, 2020, at TU Delft, The Netherlands.

Please visit the websites of previous workshops to see what you can expect.

News

2020-11-20: Preliminary workshop program is available.
2020-10-14: Final camera-ready manuscript deadline extended to October 16, 2020
2019-10-07: Extended notification date
2019-10-05: Update notifications and camera ready dates
2019-09-08: Submission deadline extended
2019-06-29: CFP available

Welcome

Over the last four to five years, Serverless Computing (Serverless) has gained an enthusiastic following in industry as a compelling paradigm for deploying cloud applications, enabled by the recent shift of enterprise application architectures to containers and microservices. Many of the major cloud vendors have released serverless platforms, including Amazon Lambda, Google Cloud Functions, Microsoft Azure Functions, and IBM Cloud Functions. Open source projects that provide serverless computing as a service are also gaining popularity; in particular, Kubernetes has gained traction in both enterprise and academia. Several open source projects, such as OpenFaaS and Knative, aim to give developers a serverless experience on top of Kubernetes by hiding its low-level details and adding new capabilities such as support for event-driven serverless cloud-native applications. This workshop brings together researchers and practitioners to discuss their experiences and thoughts on future directions of serverless research.

Serverless architectures offer different tradeoffs in terms of control, cost, and flexibility compared to distributed applications built on an Infrastructure as a Service (IaaS) substrate. For example, a serverless architecture requires developers to consider more carefully the resources used by their code (time to execute, memory used, etc.) when modularizing their applications. This is in contrast to concerns around latency, scalability, and elasticity, where significant development effort has traditionally been spent when building cloud services. In addition, existing tools and techniques for monitoring and debugging applications are not directly applicable in serverless architectures, and new approaches are needed; likewise, test and development pipelines may need to be adapted. Another decision developers face is the appropriateness of the serverless ecosystem for their application requirements. A rich ecosystem of services built into the platform is typically easier to compose and would offer better performance. However, composing external services may be unavoidable, and in such cases many of the benefits of serverless disappear, including performance and availability guarantees. This presents an important research challenge, and it is not clear how existing results and best practices, such as workflow composition research, can be applied to composition in a serverless environment.

Workshop program

Date: December 8 (Tuesday) [confirmed]

Time: tentatively 10am ET (4pm in Europe), ending about 5pm ET (11pm in Europe)

Workshop location: virtual

Preliminary schedule

10am-noon ET (4pm-6pm in Europe): opening remarks, keynote, talks

noon-1pm ET (6pm-7pm in Europe): break (lunch, maybe demos)

1pm-3pm ET (7pm-9pm in Europe): talks

3pm-3:30pm ET (9pm-9:30pm in Europe): short break (maybe demos)

3:30pm-5pm ET (9:30pm-11pm in Europe): last talks and panel, final remarks

Invited speaker

TBD

Paper presentations

Each talk is 10 minutes, with 5 minutes for questions and answers (each talk should not take longer than 15 minutes in total).

We will be very strict about keeping talks on schedule, as people may want to join for a particular talk only.

Temporal Performance Modelling of Serverless Computing Platforms

Implications of Public Cloud Resource Heterogeneity for Inference Serving

Resource Management for Cloud Functions with Memory Tracing, Profiling and Autotuning

An Evaluation of Serverless Data Processing Frameworks

Evaluation of Network File System as a Shared Data Storage in Serverless Computing

Active-Standby for High-Availability in FaaS

ACE: Just-in-time Serverless Software Component Discovery Through Approximate Concrete Execution

Serverless Isn't Server-Less: Measuring and Exploiting Resource Variability on Cloud FaaS Platforms

Towards Federated Learning using FaaS Fabric

Bringing scaling transparency to Proteomics applications with serverless computing

Proactive Serverless Function Resource Management

The Serverless Application Analytics Framework: Enabling Design Trade-off Evaluation for Serverless Software

Panel

TBD

Demos

TBD

Posters

TBD

Papers abstracts

Temporal Performance Modelling of Serverless Computing Platforms

Presenter: N. Mahmoudi

Authors: N. Mahmoudi, H. Khazaei

Abstract: Analytical performance models have been shown to be very efficient in analyzing, predicting, and improving the performance of distributed computing systems. However, there is a lack of rigorous analytical models for analyzing the transient behaviour of serverless computing platforms, which is expected to be the dominant computing paradigm in cloud computing. Also, due to its unique characteristics and policies, performance models developed for other systems cannot be directly applied to modelling these systems. In this work, we propose an analytical performance model that is capable of predicting several key performance metrics for serverless workloads using only their average response time for warm and cold requests. The introduced model uses realistic assumptions, which makes it suitable for online analysis of real-world platforms. We validate the proposed model through extensive experimentation on AWS Lambda. Although we focus primarily on AWS Lambda due to its wide adoption in our experimentation, the proposed model can be leveraged for other public serverless computing platforms with similar auto-scaling policies, e.g., Google Cloud Functions, IBM Cloud Functions, and Azure Functions.

Presentation [pdf] [pptx]
Video recording [lightning] [talk]

Implications of Public Cloud Resource Heterogeneity for Inference Serving

Presenter:

Authors: J. Gunasekaran, C. Mishra, P. Thinakaran, M. Kandemir, C. Das

Abstract: We are witnessing an increasing trend towards using Machine Learning (ML) based prediction systems, spanning different application domains, including product recommendation systems, personal assistant devices, facial recognition, etc. These applications typically have diverse requirements in terms of accuracy and response latency, which have a direct impact on the cost of deploying them in a public cloud. Furthermore, the deployment cost also depends on the type of resources being procured, which are themselves heterogeneous in terms of provisioning latencies and billing complexity. Thus, it is strenuous for an inference serving system to choose from this confounding array of resource types and model types to provide low-latency and cost-effective inferences. In this work we quantitatively characterize the cost, accuracy and latency implications of hosting ML inferences on different public cloud resource offerings. Our evaluation shows that prior work does not solve the problem from both dimensions of model and resource heterogeneity. Hence, we argue that to address this problem, we need to holistically solve the issues that arise when trying to combine both model and resource heterogeneity towards optimizing for application constraints. Towards this, we discuss the design and implications of a self-managed inference serving system, which can optimize the application requirements based on public cloud resource characteristics.

Presentation [pdf] [pptx]
Video recording [talk]

Resource Management for Cloud Functions with Memory Tracing, Profiling and Autotuning

Presenter:

Authors: J. Spillner

Abstract: Application software provisioning evolved from monolithic designs towards differently designed abstractions including serverless applications. The promise of that abstraction is that developers are free from infrastructural concerns such as instance activation and autoscaling. Today's serverless architectures based on FaaS are however still exposing developers to explicit low-level decisions about the amount of memory to allocate for the respective cloud functions. In many cases, guesswork and ad-hoc decisions determine the values a developer will put into the configuration. We contribute tools to measure the memory consumption of a function in various Docker, OpenFaaS and GCF/GCR configurations over time and to create trace profiles that advanced FaaS engines can use to autotune memory dynamically. Moreover, we explain how pricing forecasts can be performed by connecting these traces with a FaaS characteristics knowledge base.

Presentation [pdf] [pptx]
Video recording [talk]

An Evaluation of Serverless Data Processing Frameworks

Presenter:

Authors: S. Werner, R. Girke, J. Kuhlenkamp

Abstract: Serverless computing is a promising cloud execution model that significantly simplifies cloud users’ operational concerns by offering features such as auto-scaling and a pay-as-you-go cost model. Consequently, serverless systems promise to provide an excellent fit for ad-hoc data processing. Unsurprisingly, numerous serverless systems/frameworks for data processing emerged recently from research and industry. However, systems researchers, decision-makers, and data analysts are unaware of how these serverless systems compare to each other. In this paper, we identify existing serverless frameworks for data processing. We present a qualitative assessment of different system architectures and an experiment-driven quantitative comparison, including performance, cost, and usability using the TPC-H benchmark. Our results show that the three publicly available serverless data processing frameworks outperform a comparatively sized Apache Spark cluster in terms of performance and cost for ad-hoc queries on cold data.

Presentation [pdf] [pptx]
Video recording [talk]

Evaluation of Network File System as a Shared Data Storage in Serverless Computing

Presenter:

Authors: J. Choi, K. Lee

Abstract: Fully-managed cloud and Function-as-a-Service (FaaS) services allow the wide adoption of serverless computing for various cloud-native applications. Despite the many advantages that serverless computing provides, no direct connection support exists between function run-times, which is a barrier for data-intensive applications. To overcome this limitation, the leading cloud computing vendor Amazon Web Services (AWS) has started to support mounting the network file system (NFS) across different function run-times. This paper quantitatively evaluates the performance of accessing NFS storage from multiple function run-times and compares the performance with other methods of sharing data among function run-times. Despite the great qualitative benefits of the approach, the limited I/O bandwidth of NFS storage can become a bottleneck, especially when the number of concurrent accesses from function run-times increases.

Presentation [pdf] [pptx]
Video recording [talk]

Active-Standby for High-Availability in FaaS

Presenter:

Authors: Y. Bouizem, D. Dib, N. Parlavantzas, C. Morin

Abstract: Serverless computing is becoming more and more attractive for cloud solution architects and developers. This new computing paradigm relies on Function-as-a-Service (FaaS) platforms that enable deploying functions without being concerned with the underlying infrastructure. An important challenge in designing FaaS platforms is ensuring the availability of deployed functions. Existing FaaS platforms address this challenge principally through retrying function executions. In this paper, we propose and implement an alternative fault-tolerance approach based on active-standby failover. Results from an experimental evaluation show that our approach increases availability and performance compared to the retry-based approach.

Presentation [pdf] [pptx]
Video recording [talk]

ACE: Just-in-time Serverless Software Component Discovery Through Approximate Concrete Execution

Presenter:

Authors: A. Byrne, S. Nadgowda, A. Coskun

Abstract: While much of the software running on today's serverless platforms is written in easily-analyzed high-level interpreted languages, many performance-conscious users choose to deploy their applications as container-encapsulated compiled binaries on serverless container platforms such as AWS Fargate or Google Cloud Run. Modern CI/CD workflows make this deployment process nearly-instantaneous, leaving little time for in-depth manual application security reviews. This combination of opaque binaries and rapid deployment prevents cloud developers and platform operators from knowing if their applications contain outdated, vulnerable, or legally-compromised code. This paper proposes Approximate Concrete Execution (ACE), a just-in-time binary analysis technique that enables automatic software component discovery for serverless binaries. Through classification and search engine experiments with common cloud software packages, we find that ACE scans binaries 5.2x faster than a state-of-the-art binary analysis tool, minimizing the impact on deployment and cold-start latency while maintaining comparable recall.

Presentation [pdf] [pptx]
Video recording [talk]

Serverless Isn't Server-Less: Measuring and Exploiting Resource Variability on Cloud FaaS Platforms

Presenter:

Authors: S. Ginzburg, M. Freedman

Abstract: Serverless computing in the cloud, or functions as a service (FaaS), poses new and unique systems design challenges. Serverless offers improved programmability for customers, yet at the cost of increased design complexity for cloud providers. One such challenge is effective and consistent resource management for serverless platforms, the implications of which we explore in this paper. In this paper, we conduct one of the first detailed in situ measurement studies of performance variability in AWS Lambda. We show that the observed variations in performance are not only significant, but stable enough to exploit. We then design and evaluate an end-to-end system that takes advantage of this resource variability to exploit the FaaS consumption-based pricing model, in which functions are charged based on their fine-grain execution time rather than actual low-level resource consumption. By using both light-weight resource probing and function execution times to identify attractive servers in serverless platforms, customers of FaaS services can cause their functions to execute on better performing servers and realize a cost savings of up to 13% in the same AWS region.

Presentation [pdf] [pptx]
Video recording [talk]

Towards Federated Learning using FaaS Fabric

Presenter:

Authors: M. Chadha, A. Jindal, M. Gerndt

Abstract: Federated learning (FL) enables resource-constrained edge devices to learn a shared Machine Learning (ML) or Deep Neural Network (DNN) model, while keeping the training data local and providing privacy, security, and economic benefits. However, building a shared model for heterogeneous devices such as resource-constrained edge and cloud makes the efficient management of FL-clients challenging. Furthermore, with the rapid growth of FL-clients, scaling the FL training process is also difficult. In this paper, we propose a possible solution to these challenges: federated learning over a combination of connected Function-as-a-Service platforms, i.e., FaaS fabric, offering a seamless way of extending FL to heterogeneous devices. Towards this, we present FedKeeper, a tool for efficiently managing FL over FaaS fabric. We demonstrate the functionality of FedKeeper by using three FaaS platforms through an image classification task with a varying number of devices/clients, different stochastic optimizers, and local computations (local epochs).

Presentation [pdf] [pptx]
Video recording [talk]

Bringing scaling transparency to Proteomics applications with serverless computing

Presenter:

Authors: M. Mirabelli, P. Lopez, G. Vernik

Abstract: Scaling transparency means that applications can expand in scale without changes to the system structure or the application algorithms. Serverless Computing's inherent auto-scaling support and fast function launching are ideally suited to support scaling transparency in different domains. In particular, Proteomic applications could considerably benefit from scaling transparency and serverless technologies due to their high concurrency requirements. Therefore, the auto-provisioning nature of serverless platforms makes this computing model an alternative to dynamically satisfy the resources required by protein folding simulation processes. However, the transition to these architectures must face challenges: they should show comparable performance and cost to code running in Virtual Machines (VMs). In this article, we demonstrate that Proteomics applications implemented with the Replica Exchange algorithm can be moved to serverless settings while guaranteeing scaling transparency. We also validate that we can reduce the total execution time by around forty percent with comparable cost to cluster technologies (Work Queue) over VMs.

Presentation [pdf] [pptx]
Video recording [talk]

Proactive Serverless Function Resource Management

Presenter:

Authors: E. Hunhoff, S. Irshad, V. Thurimella, A. Tariq, E. Rozner

Abstract: This paper introduces a new primitive to serverless language runtimes called freshen. With freshen, developers or providers specify functionality to perform before a given function executes. This proactive technique allows for overheads associated with serverless functions to be mitigated at execution time, which improves function responsiveness. We show various predictive opportunities exist to run freshen within reasonable time windows. A high-level design and implementation are described, along with preliminary results to show the potential benefits of our scheme.

Presentation [pdf] [pptx]
Video recording [talk]

The Serverless Application Analytics Framework: Enabling Design Trade-off Evaluation for Serverless Software

Presenter:

Authors: R. Cordingly, H. Yu, V. Hoang, Z. Sadeghi, D. Foster, D. Perez, R. Hatchett, W. Lloyd

Abstract: To help better understand factors that impact performance on Function-as-a-Service (FaaS) platforms we have developed the Serverless Application Analytics Framework (SAAF). SAAF provides a reusable framework supporting multiple programming languages that developers can integrate into a function’s package for deployment to multiple commercial and open source FaaS platforms. SAAF improves the observability of FaaS function deployments by collecting forty-eight distinct metrics to enable developers to profile CPU and memory utilization, monitor infrastructure state, and observe platform scalability. In this paper, we describe SAAF in detail and introduce supporting tools highlighting important features and how to use them. Our client application, FaaS Runner, provides a tool to orchestrate workloads and automate the process of conducting experiments across FaaS platforms. We provide a case study demonstrating the integration of SAAF into an existing open source image processing pipeline built for AWS Lambda. Using FaaS Runner, we automate experiments and acquire metrics from SAAF to profile each function of the pipeline to evaluate performance implications. Finally, we summarize contributions using our tools to evaluate implications of different programming languages for serverless data processing, and to build performance models to predict runtime for serverless workloads.

Presentation [pdf] [pptx]
Video recording [talk]

Workshop call for papers

Call For Papers (CFP)

Organization

Workshop co-chairs

Paul Castro, IBM Research
Pedro García López, University Rovira i Virgili
Vatche Ishakian, IBM Research
Vinod Muthusamy, IBM Research
Aleksander Slominski, IBM Research

Steering Committee

Geoffrey Fox, Indiana University
Dennis Gannon, Indiana University & Formerly Microsoft Research
Arno Jacobsen, MSRG (Middleware Systems Research Group)

Program Committee (tentative)

Gul Agha, University of Illinois at Urbana-Champaign
Azer Bestavros, Boston University
Flavio Esposito, Saint Louis University
Rodrigo Fonseca, Brown University
Ian Foster, University of Chicago and Argonne National Laboratory
Geoffrey Fox, Indiana University
Dennis Gannon, Indiana University & Formerly Microsoft Research
Pedro Garcia Lopez, Universitat Rovira i Virgili (Spain)
Arno Jacobsen, MSRG (Middleware Systems Research Group)
Ali Kanso, Microsoft
Wes Lloyd, University of Washington Tacoma
Maciej Malawski, AGH University of Science and Technology, Poland
Pietro Michiardi, Eurecom
Lucas Nussbaum, LORIA, France
Maciej Pawlik, Academic Computer Centre CYFRONET of the University of Science and Technology in Cracow
Per Persson, Ericsson Research
Peter Pietzuch, Imperial College
Rodric Rabbah, Nimbella and Apache OpenWhisk
Eric Rozner, University of Colorado Boulder
Josef Spillner, Zurich University of Applied Sciences
Rich Wolski, University of California, Santa Barbara

Previous workshop

Fifth International Workshop on Serverless Computing (WoSC) in UC Davis, CA, USA on December 9, 2019. In conjunction with the 20th ACM/IFIP International Middleware Conference.

Fourth International Workshop on Serverless Computing (WoSC) in Zurich, Switzerland on December 20, 2018. In conjunction with the 11th IEEE/ACM UCC and 5th IEEE/ACM BDCAT.

Third International Workshop on Serverless Computing (WoSC) in San Francisco, CA, USA on July 2, 2018. In conjunction with IEEE CLOUD 2018, affiliated with the 2018 IEEE World Congress on Services (IEEE SERVICES 2018).

Second International Workshop on Serverless Computing (WoSC) 2017 in Las Vegas, NV, USA on December 12, 2017, part of Middleware 2017.

First International Workshop on Serverless Computing (WoSC) 2017 in Atlanta, GA, USA on June 5, 2017, part of ICDCS 2017.

Tweets about workshop

Please use hashtags #wosc6 #serverless