Please visit the websites of previous workshops to see what you can expect.
2020-12-08: Join us today: register with Middleware (free), review our workshop program and use provided Zoom and Slack links.
2020-12-04: Added keynote.
2020-12-04: Our Middleware Slack channel #wosc-workshop is available.
2020-12-04: To participate in the workshop register with Middleware (free).
2020-11-20: Preliminary workshop program is available.
2020-10-14: Final Camera-Ready Manuscript extended to October 16, 2020
2019-10-07: Extended notification date
2019-10-05: Update notifications and camera ready dates
2019-09-08: Submission deadline extended
2019-06-29: CFP available
Over the last four to five years, Serverless Computing (Serverless) has gained an enthusiastic following in industry as a compelling paradigm for deploying cloud applications, enabled by the recent shift of enterprise application architectures to containers and microservices. Most major cloud vendors have released serverless platforms, including Amazon Lambda, Google Cloud Functions, Microsoft Azure Functions, and IBM Cloud Functions. Open source projects that provide serverless computing as a service are also gaining popularity; in particular, Kubernetes has gained traction in both enterprise and academia. Several open source projects, such as OpenFaaS and Knative, aim to give developers a serverless experience on top of Kubernetes by hiding its low-level details and adding new capabilities such as support for event-driven serverless cloud-native applications. This workshop brings together researchers and practitioners to discuss their experiences and thoughts on future directions of serverless research.
Serverless architectures offer different tradeoffs in terms of control, cost, and flexibility compared to distributed applications built on an Infrastructure as a Service (IaaS) substrate. For example, a serverless architecture requires developers to consider more carefully the resources used by their code (time to execute, memory used, etc.) when modularizing their applications. This is in contrast to concerns around latency, scalability, and elasticity, which are where significant development effort has traditionally been spent when building cloud services. In addition, existing tools and techniques to monitor and debug applications are not directly applicable in serverless architectures, and new approaches are needed; test and development pipelines may likewise need to be adapted. Another decision developers face is whether the serverless ecosystem is appropriate for their application requirements. A rich ecosystem of services built into the platform is typically easier to compose and will generally offer better performance. However, composing external services may be unavoidable, and in such cases many of the benefits of serverless disappear, including performance and availability guarantees. This presents an important research challenge: it is not clear how existing results and best practices, such as workflow composition research, can be applied to composition in a serverless environment.
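To make the cost tradeoff above concrete, here is a minimal sketch of the memory-times-duration billing model that most serverless platforms use in some form. The price constant and rounding rules are illustrative assumptions, not any specific provider's current pricing:

```python
def invocation_cost(memory_mb: float, duration_ms: float,
                    price_per_gb_second: float = 0.0000166667) -> float:
    """Estimate the cost of one function invocation.

    Serverless platforms typically bill each invocation as
    allocated memory * execution duration, so modularization
    decisions directly affect cost. The default price here is a
    hypothetical per-GB-second rate used only for illustration.
    """
    gb_seconds = (memory_mb / 1024.0) * (duration_ms / 1000.0)
    return gb_seconds * price_per_gb_second

# Example: a 256 MB function running for 120 ms
cost = invocation_cost(256, 120)
```

This is why, unlike on IaaS, shaving memory or milliseconds off a single function has a direct, per-invocation price effect.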
Date: December 8 (Tuesday) [confirmed]
Time: 10am ET (4pm in Europe), ending about 4pm ET (10pm in Europe)
Workshop location: virtual (Zoom); discussions and questions in the #wosc-workshop Slack channel
10am-10:10am ET (4pm-4:10pm in Europe): opening remarks by workshop chair Aleksander Slominski [slides]
10:10am-10:55am ET (4:10pm-4:55pm in Europe): Invited Keynote
11am-noon ET (5pm-6pm in Europe): session 1
session chair: Aleksander Slominski
noon-1pm ET (6pm-7pm in Europe): break (lunch, maybe demos)
12:30pm-1pm ET Demo: Serverless Application Analytics Framework
1pm-2pm ET (7pm-8pm in Europe): session 2
session chair: Aleksander Slominski
2pm-2:30pm ET: short break (maybe demos)
2:30pm-3:30pm ET (8:30pm-9:30pm in Europe): session 3
session chair: Vinod Muthusamy
3:30pm-4pm ET (9:30pm-10pm in Europe): Questions and Open Discussion, Closing remarks by Aleksander Slominski
Speaker: Satish Malireddi, Principal Architect, T-Mobile USA
Abstract: New technology adoption is often a challenge within large organizations, but when done right it can increase productivity and agility and help organizations make better decisions. Serverless brings the huge advantage of offloading infrastructure management to cloud providers so that enterprises can focus on delivering features to their customers faster. In this talk, you will hear how T-Mobile developers have embraced serverless: our journey, with examples of use cases at T-Mobile, the challenges encountered in adopting serverless, and how we have used custom-built tooling to simplify development workflows, abstract complexity away from developers, and facilitate adoption.
Bio: Satish is an experienced enterprise architect with a demonstrated history of working in the telecommunications and software industries. He is skilled in public cloud technologies (AWS), Agile and DevOps methodologies, serverless and containers, CI/CD tooling, and the software development life cycle (SDLC). Satish currently works in the T-Mobile Cloud CoE, where he focuses on building next-generation cloud development platforms using cutting-edge technologies like serverless and containers and facilitating their adoption within the enterprise.
Twitter - @satishmr
LinkedIn - https://www.linkedin.com/in/devsatishm/
Presenter: N. Mahmoudi
Authors: N. Mahmoudi, H. Khazaei
Abstract: Analytical performance models have been shown to be very efficient in analyzing, predicting, and improving the performance of distributed computing systems. However, there is a lack of rigorous analytical models for analyzing the transient behaviour of serverless computing platforms, which is expected to be the dominant computing paradigm in cloud computing. Also, due to its unique characteristics and policies, performance models developed for other systems cannot be directly applied to modelling these systems. In this work, we propose an analytical performance model that is capable of predicting several key performance metrics for serverless workloads using only their average response time for warm and cold requests. The introduced model uses realistic assumptions, which makes it suitable for online analysis of real-world platforms. We validate the proposed model through extensive experimentation on AWS Lambda. Although we focus primarily on AWS Lambda in our experimentation due to its wide adoption, the proposed model can be leveraged for other public serverless computing platforms with similar auto-scaling policies, e.g., Google Cloud Functions, IBM Cloud Functions, and Azure Functions.
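As a toy illustration of the warm/cold distinction the abstract builds on (this is not the authors' model, which predicts several metrics analytically), expected response time can be sketched as a simple mixture of cold and warm latencies:

```python
def expected_response_time(p_cold: float, t_cold: float, t_warm: float) -> float:
    """Toy mixture model: expected latency given the probability
    that a request hits a cold (newly provisioned) function instance.

    p_cold : probability a request incurs a cold start
    t_cold : average response time of a cold request
    t_warm : average response time of a warm request
    """
    return p_cold * t_cold + (1 - p_cold) * t_warm

# Example: 10% cold starts at 500 ms, warm requests at 20 ms
latency = expected_response_time(0.1, 500.0, 20.0)
```

Even this crude sketch shows why cold-start probability, which depends on the platform's auto-scaling and instance-expiration policies, dominates tail behaviour.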
Presenter: J. Gunasekaran
Authors: J. Gunasekaran, C. Mishra, P. Thinakaran, M. Kandemir, C. Das
Abstract: We are witnessing an increasing trend towards using Machine Learning (ML) based prediction systems, spanning different application domains, including product recommendation systems, personal assistant devices, facial recognition, etc. These applications typically have diverse requirements in terms of accuracy and response latency that have a direct impact on the cost of deploying them in a public cloud. Furthermore, the deployment cost also depends on the type of resources being procured, which are themselves heterogeneous in terms of provisioning latencies and billing complexity. Thus, it is strenuous for an inference serving system to choose from this confounding array of resource types and model types to provide low-latency and cost-effective inferences. In this work we quantitatively characterize the cost, accuracy and latency implications of hosting ML inferences on different public cloud resource offerings. Our evaluation shows that prior work does not solve the problem from both dimensions of model and resource heterogeneity. Hence, we argue that to address this problem, we need to holistically solve the issues that arise when trying to combine both model and resource heterogeneity towards optimizing for application constraints. Towards this, we discuss the design and implications of a self-managed inference serving system, which can optimize the application requirements based on public cloud resource characteristics.
Authors: J. Spillner
Abstract: Application software provisioning evolved from monolithic designs towards differently designed abstractions including serverless applications. The promise of that abstraction is that developers are free from infrastructural concerns such as instance activation and autoscaling. Today's serverless architectures based on FaaS are however still exposing developers to explicit low-level decisions about the amount of memory to allocate for the respective cloud functions. In many cases, guesswork and ad-hoc decisions determine the values a developer will put into the configuration. We contribute tools to measure the memory consumption of a function in various Docker, OpenFaaS and GCF/GCR configurations over time and to create trace profiles that advanced FaaS engines can use to autotune memory dynamically. Moreover, we explain how pricing forecasts can be performed by connecting these traces with a FaaS characteristics knowledge base.
Presenter: Sebastian Werner
Authors: S. Werner, R. Girke, J. Kuhlenkamp
Abstract: Serverless computing is a promising cloud execution model that significantly simplifies cloud users’ operational concerns by offering features such as auto-scaling and a pay-as-you-go cost model. Consequently, serverless systems promise to provide an excellent fit for ad-hoc data processing. Unsurprisingly, numerous serverless systems/frameworks for data processing emerged recently from research and industry. However, systems researchers, decision-makers, and data analysts are unaware of how these serverless systems compare to each other. In this paper, we identify existing serverless frameworks for data processing. We present a qualitative assessment of different system architectures and an experiment-driven quantitative comparison, including performance, cost, and usability using the TPC-H benchmark. Our results show that the three publicly available serverless data processing frameworks outperform a comparatively sized Apache Spark cluster in terms of performance and cost for ad-hoc queries on cold data.
Presenter: J. Choi
Authors: J. Choi, K. Lee
Abstract: Fully-managed cloud and Function-as-a-Service (FaaS) services allow the wide adoption of serverless computing for various cloud-native applications. Despite the many advantages that serverless computing provides, no direct connection support exists between function run-times, and it is a barrier for data-intensive applications. To overcome this limitation, the leading cloud computing vendor Amazon Web Services (AWS) has started to support mounting the network file system (NFS) across different function run-times. This paper quantitatively evaluates the performance of accessing NFS storage from multiple function run-times and compares the performance with other methods of sharing data among function run-times. Despite the great qualitative benefits of the approach, the limited I/O bandwidth of NFS storage can become a bottleneck, especially when the number of concurrent access from function run-times increases.
Presenter: Y. Bouizem
Authors: Y. Bouizem, D. Dib, N. Parlavantzas, C. Morin
Abstract: Serverless computing is becoming more and more attractive for cloud solution architects and developers. This new computing paradigm relies on Function-as-a-Service (FaaS) platforms that enable deploying functions without being concerned with the underlying infrastructure. An important challenge in designing FaaS platforms is ensuring the availability of deployed functions. Existing FaaS platforms address this challenge principally through retrying function executions. In this paper, we propose and implement an alternative fault-tolerance approach based on active-standby failover. Results from an experimental evaluation show that our approach increases availability and performance compared to the retry-based approach.
Presenter: A. Byrne
Authors: A. Byrne, S. Nadgowda, A. Coskun
Abstract: While much of the software running on today's serverless platforms is written in easily-analyzed high-level interpreted languages, many performance-conscious users choose to deploy their applications as container-encapsulated compiled binaries on serverless container platforms such as AWS Fargate or Google Cloud Run. Modern CI/CD workflows make this deployment process nearly-instantaneous, leaving little time for in-depth manual application security reviews. This combination of opaque binaries and rapid deployment prevents cloud developers and platform operators from knowing if their applications contain outdated, vulnerable, or legally-compromised code. This paper proposes Approximate Concrete Execution (ACE), a just-in-time binary analysis technique that enables automatic software component discovery for serverless binaries. Through classification and search engine experiments with common cloud software packages, we find that ACE scans binaries 5.2x faster than a state-of-the-art binary analysis tool, minimizing the impact on deployment and cold-start latency while maintaining comparable recall.
Presenter: S. Ginzburg
Authors: S. Ginzburg, M. Freedman
Abstract: Serverless computing in the cloud, or functions as a service (FaaS), poses new and unique systems design challenges. Serverless offers improved programmability for customers, yet at the cost of increased design complexity for cloud providers. One such challenge is effective and consistent resource management for serverless platforms, the implications of which we explore in this paper. In this paper, we conduct one of the first detailed in situ measurement studies of performance variability in AWS Lambda. We show that the observed variations in performance are not only significant, but stable enough to exploit. We then design and evaluate an end-to-end system that takes advantage of this resource variability to exploit the FaaS consumption-based pricing model, in which functions are charged based on their fine-grain execution time rather than actual low-level resource consumption. By using both light-weight resource probing and function execution times to identify attractive servers in serverless platforms, customers of FaaS services can cause their functions to execute on better performing servers and realize a cost savings of up to 13% in the same AWS region.
Presenter: Mohak Chadha
Authors: M. Chadha, A. Jindal, M. Gerndt
Abstract: Federated learning (FL) enables resource-constrained edge devices to learn a shared Machine Learning (ML) or Deep Neural Network (DNN) model, while keeping the training data local and providing privacy, security, and economic benefits. However, building a shared model for heterogeneous devices such as resource-constrained edge and cloud makes the efficient management of FL-clients challenging. Furthermore, with the rapid growth of FL-clients, the scaling of FL training process is also difficult. In this paper, we propose a possible solution to these challenges: federated learning over a combination of connected Function-as-a-Service platforms, i.e., FaaS fabric offering a seamless way of extending FL to heterogeneous devices. Towards this, we present FedKeeper, a tool for efficiently managing FL over FaaS fabric. We demonstrate the functionality of FedKeeper by using three FaaS platforms through an image classification task with a varying number of devices/clients, different stochastic optimizers, and local computations (local epochs).
Presenter: M. Mirabelli
Authors: M. Mirabelli, P. Lopez, G. Vernik
Abstract: Scaling transparency means that applications can expand in scale without changes to the system structure or the application algorithms. Serverless Computing's inherent auto-scaling support and fast function launching is ideally suited to support scaling transparency in different domains. In particular, Proteomic applications could considerably benefit from scaling transparency and serverless technologies due to their high concurrency requirements. Therefore, the auto-provisioning nature of serverless platforms makes this computing model an alternative to satisfy dynamically the resources required by protein folding simulation processes. However, the transition to these architectures must face challenges: they should show comparable performance and cost to code running in Virtual Machines (VMs). In this article, we demonstrate that Proteomics applications implemented with the Replica Exchange algorithm can be moved to serverless settings guaranteeing scaling transparency. We also validate that we can reduce the total execution time by around forty percent with comparable cost to cluster technologies (Work Queue) over VMs.
Presenter: E. Hunhoff
Authors: E. Hunhoff, S. Irshad, V. Thurimella, A. Tariq, E. Rozner
Abstract: This paper introduces a new primitive to serverless language runtimes called freshen. With freshen, developers or providers specify functionality to perform before a given function executes. This proactive technique allows for overheads associated with serverless functions to be mitigated at execution time, which improves function responsiveness. We show various predictive opportunities exist to run freshen within reasonable time windows. A high-level design and implementation are described, along with preliminary results to show the potential benefits of our scheme.
Presenter: Robert Cordingly
Authors: R. Cordingly, H. Yu, V. Hoang, Z. Sadeghi, D. Foster, D. Perez, R. Hatchett, W. Lloyd
Abstract: To help better understand factors that impact performance on Function-as-a-Service (FaaS) platforms we have developed the Serverless Application Analytics Framework (SAAF). SAAF provides a reusable framework supporting multiple programming languages that developers can integrate into a function’s package for deployment to multiple commercial and open source FaaS platforms. SAAF improves the observability of FaaS function deployments by collecting forty-eight distinct metrics to enable developers to profile CPU and memory utilization, monitor infrastructure state, and observe platform scalability. In this paper, we describe SAAF in detail and introduce supporting tools highlighting important features and how to use them. Our client application, FaaS Runner, provides a tool to orchestrate workloads and automate the process of conducting experiments across FaaS platforms. We provide a case study demonstrating the integration of SAAF into an existing open source image processing pipeline built for AWS Lambda. Using FaaS Runner, we automate experiments and acquire metrics from SAAF to profile each function of the pipeline to evaluate performance implications. Finally, we summarize contributions using our tools to evaluate implications of different programming languages for serverless data processing, and to build performance models to predict runtime for serverless workloads.
Presenter: Robert Cordingly, Wes Lloyd
Video recording [Demo]
Paul Castro, IBM Research
Pedro García López, University Rovira i Virgili
Vatche Ishakian, IBM Research
Vinod Muthusamy, IBM Research
Aleksander Slominski, IBM Research
Geoffrey Fox, Indiana University
Dennis Gannon, Indiana University & Formerly Microsoft Research
Arno Jacobsen, MSRG (Middleware Systems Research Group)
Gul Agha, University of Illinois at Urbana-Champaign
Azer Bestavros, Boston University
Flavio Esposito, Saint Louis University
Rodrigo Fonseca, Brown University
Ian Foster, University of Chicago and Argonne National Laboratory
Geoffrey Fox, Indiana University
Dennis Gannon, Indiana University & Formerly Microsoft Research
Pedro Garcia Lopez, Universitat Rovira i Virgili (Spain)
Arno Jacobsen, MSRG (Middleware Systems Research Group)
Ali Kanso, Microsoft
Wes Lloyd, University of Washington Tacoma
Maciej Malawski, AGH University of Science and Technology, Poland
Pietro Michiardi, Eurecom
Lucas Nussbaum, LORIA, France
Maciej Pawlik, Academic Computer Centre CYFRONET of the University of Science and Technology in Cracow
Per Persson, Ericsson Research
Peter Pietzuch, Imperial College
Rodric Rabbah, Nimbella and Apache OpenWhisk
Eric Rozner, University of Colorado Boulder
Josef Spillner, Zurich University of Applied Sciences
Rich Wolski, University of California, Santa Barbara
Fifth International Workshop on Serverless Computing (WoSC) at UC Davis, CA, USA on December 9, 2019. In conjunction with the 20th ACM/IFIP International Middleware Conference.
Fourth International Workshop on Serverless Computing (WoSC) in Zurich, Switzerland on December 20, 2018. In conjunction with 11th IEEE/ACM UCC and 5th IEEE/ACM BDCAT.
Third International Workshop on Serverless Computing (WoSC) in San Francisco, CA, USA on July 2nd, 2018. In conjunction with IEEE CLOUD 2018, affiliated with the 2018 IEEE World Congress on Services (IEEE SERVICES 2018).
Second International Workshop on Serverless Computing (WoSC) 2017 in Las Vegas, NV, USA on December 12th, 2017 part of Middleware 2017.
First International Workshop on Serverless Computing (WoSC) 2017 in Atlanta, GA, USA on June 5th, 2017 part of ICDCS 2017.
Please use hashtags #wosc6 #serverless