
Fourth International Workshop on Serverless Computing (WoSC) 2018

Part of the 11th IEEE/ACM UCC and the 5th IEEE/ACM BDCAT.

Please visit the previous workshops' websites to see what you can expect. There you will also find presentations and notes from the panel discussions.

If you are coming to the workshop, please consider also attending the European Symposium on Serverless Computing and Applications (ESCCA 2018) on December 21 (the day after this workshop), to learn about and demonstrate the latest in technology and research on cloud functions, bringing together scientific progress with industrial requirements for future serverless development practices and application architectures.

News

2018-12-04: Final agenda posted
2018-11-08: Invited speaker from IBM Research Zurich
2018-10-19: Keynote by Google Cloud Functions added
2018-10-10: List of accepted papers with abstracts added
2018-09-12: Deadline extended
2018-06-01: CFP available

Welcome

Serverless Computing (Serverless) is emerging as a new and compelling paradigm for the deployment of cloud applications, enabled by the recent shift of enterprise application architectures to containers and microservices. Many of the major cloud vendors have released serverless platforms within the last two years, including Amazon Lambda, Google Cloud Functions, Microsoft Azure Functions, and IBM Cloud Functions. The research community, however, has paid comparatively little attention to serverless computing. This workshop brings together researchers and practitioners to discuss their experiences and thoughts on future directions.

Serverless architectures offer different tradeoffs in terms of control, cost, and flexibility. For example, developers must more carefully consider the resources used by their code (time to execute, memory used, etc.) when modularizing their applications. This contrasts with concerns around latency, scalability, and elasticity, where significant development effort has traditionally been spent when building cloud services. In addition, tools and techniques to monitor and debug applications are not directly applicable in serverless architectures, and new approaches are needed. Likewise, test and development pipelines may need to be adapted. Another decision that developers face is the appropriateness of the serverless ecosystem for their application requirements. A rich ecosystem of services built into the platform is typically easier to compose and would offer better performance. However, composing external services may be unavoidable, and in such cases many of the benefits of serverless disappear, including performance and availability guarantees. This presents an important research challenge, and it is not clear how existing results and best practices, such as workflow composition research, can be applied to composition in a serverless environment.
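To make the resource concern above concrete, here is a minimal sketch of a cloud-function handler in Python. The event shape and the workload are illustrative assumptions, not any platform's actual API; the point is simply that a function is billed by its execution time and reserved memory, so measuring both belongs in the development loop:

```python
import time

def handler(event, context=None):
    """A minimal cloud-function sketch. Providers bill per invocation
    by execution time and configured memory, so the handler records
    its own wall-clock duration for comparison with the billed one."""
    start = time.perf_counter()

    # Illustrative workload: sum the numbers passed in the event.
    numbers = event.get("numbers", [])
    total = sum(numbers)

    elapsed_ms = (time.perf_counter() - start) * 1000.0
    # On a real platform the billed duration would typically be
    # rounded up to some granularity (e.g. 1 ms or 100 ms).
    return {"result": total, "elapsed_ms": elapsed_ms}
```

Run locally, `handler({"numbers": [1, 2, 3]})` returns a result of 6 along with the measured duration; the same shape of measurement can be kept when the function is deployed.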

Workshop call for papers

Call For Papers (CFP)

Agenda

Date: December 20 (Thursday)

Workshop location: Zurich, Switzerland

09.00-10.00 Shared UCC keynote (Auditorium)

10.00-10.10 Workshop welcome (BASIC room)
Online slides

10.10-11.00 Invited speaker (BASIC room)
Efficient management of ephemeral data in serverless computing

11.00-11.30 Coffee break

11.30-12.50 WoSC papers (BASIC room)
EdgeBench: Benchmarking Edge Computing Platforms
A Review of Serverless Frameworks
Cold Start Influencing Factors in Function as a Service
Comparison of FaaS Orchestration Systems

13.00-14.00 Lunch

14.00-15.00 WoSC invited speakers from Google Cloud Functions (Auditorium):
WoSC Keynote: Challenges for Serverless Platform Providers

15.00-15.10 Short break

15.10-16.30 WoSC papers (BASIC room)
An Investigation of the Impact of Language Runtime on the Performance and Cost of Serverless Functions
Visual-textual framework for serverless computation: a Luna Language approach
Improving Application Migration to Serverless Computing Platforms: Latency Mitigation with Keep-Alive Workloads
Benchmarking FaaS Platforms: Call for Community Participation

16.30-17.00 Coffee break

17.00-17.30 Closing ceremony

Invited speakers

Challenges for Serverless Platform Providers

Keynote by Monika Nawrot and Marek Biskup, senior software engineers on Cloud Functions at Google in Warsaw

Abstract: Serverless platforms save customers from managing and maintaining servers; however, architecting and running such services is a real challenge for cloud providers. They need to ensure that their platform is reliable, scalable, and secure, that customers are isolated from one another, and that resources are automatically provisioned on demand. In addition, the platform needs to provide an excellent developer experience to enable easy and smooth software development for customers. Finally, everything must be cost-efficient so that customers save money by not paying for idle cycles, and cloud providers generate the revenue required for their business to exist. In this talk we will discuss challenges that cloud providers face when building serverless platforms, and research areas for future development.

Bios: Monika Nawrot is a senior software engineer on Cloud Functions at Google in Warsaw. She manages a team that works on reliability. With her experience in both software development and site reliability engineering, she leads multiple efforts to keep the whole system up and running. Before Google she worked at IBM and AOL, and has experience in developing distributed backend systems and large-scale data processing.
Marek Biskup is a senior software engineer on Cloud Functions at Google in Warsaw. Before Google, Marek worked on the backend of the enterprise dedupe storage appliance NEC HYDRAstor, and on data analysis tools for the LHC at CERN. He holds master's degrees in Computer Science and Physics (Vrije Universiteit Amsterdam and University of Warsaw) and a PhD in Computer Science (UW) for research on lossless data compression.

Presentation [PDF]

Efficient management of ephemeral data in serverless computing

Speaker: Patrick Stuedi, IBM Research, Zurich, Switzerland

Abstract: Serverless computing frameworks achieve high elasticity and scalability partially by requiring functions to be short running and stateless. In turn, these restrictions make sharing of intermediate data among serverless tasks difficult, a challenge for more complex serverless workloads consisting of multiple stages. Today, the common practice is to store intermediate data in a shared, remote storage service. In this talk, I argue that existing storage systems are not designed to meet the elasticity, performance, and granular cost requirements of serverless applications. I'll show several data points from deployments on AWS as well as from on-premise clusters that showcase the limitations and overheads of existing storage platforms in serverless workloads. I will then present two research projects that aim to overcome these limitations through efficient and elastic management of ephemeral data. In the first part of the talk, I will show how we turned Apache Spark into a serverless service that can efficiently serve multiple users by dynamically scaling its resources up and down in a fine-grained manner. One key aspect that made this possible is the use of Apache Crail as a fast way to store ephemeral data on remote DRAM and flash. In the second part, I'll show how to improve the efficiency and reduce the cost of serverless applications built on AWS Lambda by using Pocket, a new elastic storage platform designed from scratch for storing ephemeral data in serverless applications.

Bio: Patrick is a member of the research staff at IBM Research Zurich. His research interests are in distributed systems, networking and operating systems. Patrick graduated with a PhD from ETH Zurich in 2008 and spent two years (2008-2010) as a Postdoc at Microsoft Research Silicon Valley. The general theme of his work is to explore how modern networking and storage hardware can be exploited in distributed systems. Patrick is the creator of several open source projects such as DiSNI (RDMA for Java), DaRPC (Low latency RPC) and co-founder of Apache Crail (Incubating).

Presentation [PDF]

Paper abstracts

EdgeBench: Benchmarking Edge Computing Platforms

Presenter: Anirban Das

Anirban Das, Stacy Patterson
Rensselaer Polytechnic Institute, United States
Mike Wittie
Montana State University, United States

Abstract: The emerging trend of edge computing has led several cloud providers to release their own platforms for performing computation at the 'edge' of the network. We compare two such platforms, Amazon AWS Greengrass and Microsoft Azure IoT Edge, using a new benchmark comprising a suite of performance metrics. We also compare the performance of the edge frameworks to cloud-only implementations available in their respective cloud ecosystems. Amazon AWS Greengrass and Azure IoT Edge use different underlying technologies, edge Lambda functions vs. containers, and so we also elaborate on platform features available to developers. Our study shows that both of these edge platforms provide comparable performance, which nevertheless differs in important ways for key types of workloads used in edge applications. Finally, we discuss several current issues and challenges we faced in deploying these platforms.

Presentation [PDF]

A Review of Serverless Frameworks

Presenter: Pawel Skrzypek

Kyriakos Kritikos
Institute of Computer Science, FORTH
Pawel Skrzypek
7Bulls, Poland

Abstract: Serverless computing is a new computing paradigm that promises to revolutionize the way applications are built and provisioned. In this paradigm, small pieces of software called functions are deployed in the cloud with zero administration and minimal costs for the software developer. This paradigm also has various applications in areas like image processing and scientific computing. Due to the above advantages, serverless computing is now offered by traditional big cloud providers like Amazon, who supply platforms for serverless application deployment and provisioning. However, as in the case of cloud computing, such providers attempt to lock in their customers by supplying complementary services that provide added-value support to serverless applications. To resolve this issue, serverless frameworks have recently been developed. Such frameworks either abstract away from serverless platform specificities, or they enable the production of a mini serverless platform on top of existing clouds. However, these frameworks differ in various features that have an impact on the serverless application lifecycle. To assist developers in selecting the most suitable framework, this paper reviews these frameworks according to a set of criteria that directly map to the application lifecycle. Based on the review results, some remaining challenges are identified, which, when confronted, will make serverless frameworks highly usable and suitable for handling both serverless and mixed application kinds.

Presentation [PDF]

Cold Start Influencing Factors in Function as a Service

Presenter: Johannes Manner

Johannes Manner, Martin Endreß, Tobias Heckel and Guido Wirtz
University of Bamberg, Distributed Systems Group, Germany

Abstract: Function as a Service (FaaS) is a young and rapidly evolving cloud paradigm. Due to the virtualization dependency, inherent virtualization problems need an assessment from the FaaS point of view. In particular, the avoidance of idling and scaling on demand cause many container starts and, as a consequence, many cold starts for FaaS users. The aim of this paper is to address the cold start problem in a benchmark and investigate the factors influencing the duration of the perceived cold start. We conducted a benchmark on AWS Lambda and Microsoft Azure Functions with 49,500 cloud function executions. Formulated as hypotheses, the influence of the chosen programming language, platform, memory size for the cloud function, and size of the deployed artifact are the dimensions of our experiment. Cold starts on the platform as well as the cold starts for users were measured and compared to each other. Our results show that there is an enormous difference between the overhead the user perceives and the billed duration. In our experiment, the average cold start overheads on the user's side ranged from 300ms to 24s for the chosen configurations.

Presentation [PDF]

Comparison of FaaS Orchestration Systems

Presenter: Pedro García López

Pedro García López, Marc Sánchez-Artigas, Gerard París, Daniel Barcelona Pons, Álvaro Ruiz Ollobarren and David Arroyo Pinto
Universitat Rovira i Virgili, Spain

Abstract: Since the appearance of Amazon Lambda in 2014, all major cloud providers have embraced the Function as a Service (FaaS) model, because of its enormous potential for a wide variety of applications. As expected (and also desired), the competition is fierce in the serverless world, and includes aspects such as the run-time support for the orchestration of serverless functions. In this regard, the three major players are currently Amazon Step Functions (December 2016), Azure Durable Functions (June 2017), and IBM Composer (October 2017), still young and experimental projects with a long way ahead. In this article, we will compare and analyze these three serverless orchestration services under a common evaluation framework. We will study their architectures, programming and billing models, and their effective support for parallel execution, among others. Through a series of experiments, we will also evaluate the run-time overhead of the different infrastructures for different types of workflows.

Presentation [PDF]

An Investigation of the Impact of Language Runtime on the Performance and Cost of Serverless Functions

Presenter: David Jackson

David Jackson and Gary Clynch
Institute of Technology, Tallaght, Ireland

Abstract: Serverless, otherwise known as "Function-as-a-Service" (FaaS), is a compelling evolution of cloud computing that is highly scalable and event-driven. Serverless applications are composed of multiple independent functions, each of which can be implemented in a range of programming languages. This paper seeks to understand the impact of the choice of language runtime on the performance and subsequent cost of serverless function execution. It presents the design and implementation of a new serverless performance testing framework created to analyse performance and cost metrics for both AWS Lambda and Azure Functions. For optimum performance and cost management of serverless applications, Python is the clear choice on AWS Lambda. C# .NET is the top performer and most economical option for Azure Functions. NodeJS on Azure Functions and .NET Core 2 on AWS should be avoided or at the very least, used carefully in order to avoid their potentially slow and costly start-up times.

Presentation [PDF]

Visual-textual framework for serverless computation: a Luna Language approach

Presenter: Piotr Moczurad

Piotr Moczurad and Maciej Malawski
AGH University of Science and Technology, Poland

Abstract: As serverless technologies are emerging as a breakthrough in the cloud computing industry, the lack of proper tooling is becoming apparent. The model of computation that serverless computing imposes is as flexible as it is hard to manage and grasp. We present a novel approach to serverless computing that tightly integrates it with the visual-textual, functional programming language Luna. In this way we achieve the clarity and cognitive ease of visual solutions while retaining the flexibility and expressive power of textual programming languages. Moreover, we propose a more functional paradigm for serverless computations.

Presentation [PDF]

Improving Application Migration to Serverless Computing Platforms: Latency Mitigation with Keep-Alive Workloads

Presenter: Wes Lloyd

Minh Vu, Baojia Zhang and Wes Lloyd
University of Washington, United States
Olaf David and George Leavesley
Colorado State University, United States

Abstract: Serverless computing platforms provide Function(s)-as-a-Service (FaaS) to end users while promising reduced hosting costs, high availability, fault tolerance, and dynamic elasticity for hosting individual functions known as microservices. Serverless computing environments, unlike Infrastructure-as-a-Service (IaaS) cloud platforms, abstract infrastructure management, including the creation of virtual machines (VMs), containers, and load balancing, away from users. To conserve cloud server capacity and energy, cloud providers allow serverless computing infrastructure to go COLD, deprovisioning hosting infrastructure when demand falls and freeing capacity to be harnessed by others. In this paper, we present as a case study our results on the migration of the Precipitation Runoff Modeling System (PRMS), a Java-based environmental modeling application, to the AWS Lambda serverless platform. We investigate performance and cost implications of memory reservation size, and evaluate scaling performance for increasing concurrent workloads. We then investigate the use of Keep-Alive client workloads to preserve serverless infrastructure, minimizing infrastructure initialization latency to ensure consistent performance over long idle periods for many concurrent users. We show how Keep-Alive workloads can be generated using cloud-based scheduled event triggers, enabling minimization of costs, to provide VM-like performance for applications hosted on serverless platforms for a fraction of the cost.

Presentation [PDF]

Benchmarking FaaS Platforms: Call for Community Participation

Presenter: Jörn Kuhlenkamp

Jörn Kuhlenkamp and Sebastian Werner
Information Systems Engineering, TU Berlin, Germany

Abstract: The number of available FaaS platforms increases with the rising popularity of a "serverless" architecture and development paradigm. As a consequence, a high demand for benchmarking FaaS platforms exists. In response to this demand, new benchmarking approaches that focus on different objectives continuously emerge. In this paper, we call for community participation to conduct a collaborative structured literature review with the goal of establishing a community-driven knowledge base.

Presentation [PDF]

Organization

Workshop co-chairs

Paul Castro, IBM Research
Vatche Ishakian, Bentley University
Stefan Junker, Red Hat
Vinod Muthusamy, IBM Research
Aleksander Slominski, IBM Research

Steering Committee (tentative)

Geoffrey Fox, Indiana University
Dennis Gannon, Indiana University & Formerly Microsoft Research
Arno Jacobsen, MSRG (Middleware Systems Research Group)

Program Committee (tentative)

Gul Agha, University of Illinois at Urbana-Champaign
Azer Bestavros, Boston University
Flavio Esposito, Saint Louis University
Rodrigo Fonseca, Brown University
Ian Foster, University of Chicago and Argonne National Laboratory
Geoffrey Fox, Indiana University
Dennis Gannon, Indiana University & Formerly Microsoft Research
Arno Jacobsen, MSRG (Middleware Systems Research Group)
Tyler Harter, GSL, Microsoft
Pietro Michiardi, Eurecom
Peter Pietzuch, Imperial College
Rodric Rabbah, IBM Research
Rich Wolski, University of California, Santa Barbara
Claus Pahl, Free University of Bozen-Bolzano, Italy
Maciej Malawski, AGH University of Science and Technology, Poland
Martin Garriga, Politecnico di Milano, Italy
Theo Lynn, Dublin City University, Ireland
Višnja Križanović, Josip Juraj Strossmayer University of Osijek
Lucas Nussbaum, LORIA, France

Previous workshops

Third International Workshop on Serverless Computing (WoSC) 2018 in San Francisco, CA, USA on July 2nd, 2018, in conjunction with IEEE CLOUD 2018, affiliated with the 2018 IEEE World Congress on Services (IEEE SERVICES 2018).

Second International Workshop on Serverless Computing (WoSC) 2017 in Las Vegas, NV, USA on December 12th, 2017, part of Middleware 2017.

First International Workshop on Serverless Computing (WoSC) 2017 in Atlanta, GA, USA on June 5th, 2017, part of ICDCS 2017.

Follow-Up: ESCCA 2018

We are proud to announce that WoSC4 will be followed by the European Symposium on Serverless Computing and Applications (ESCCA 2018) on December 21, bringing together scientific progress with industrial requirements for future serverless development practices and application architectures.

Tweets about workshop

Please use the hashtags #wosc4 #serverless