Talks

#1 - Serverless Computing in the Continuum -OR- When I'll Stop Worrying and Learn to Love Serverless

Wherever we turn, our society is digital. Online gaming and streaming, science and engineering, business-critical operations, and analytics depend, often transparently, on the inter-operation of diverse distributed computer systems, across diverse resources, for diverse functional and non-functional goals. In this talk, we agree with the idea that serverless computing promises to facilitate computation in such computer ecosystems, but aim to discuss how serverless fits in the computing continuum. What is serverless computing? Which part of the resource continuum, between cloud, edge, and end-devices, does serverless leverage? Which applications does and can serverless support? Which non-functional properties can and should serverless be concerned with? We present recent results that our team, often in collaboration with the SPEC RG Cloud Group, has obtained in trying to approach these questions.

Slides [PDF]

Dr.ir. Alexandru Iosup is a full professor at Vrije Universiteit Amsterdam (VU), a high-quality research university in the Netherlands. He is the tenured chair of the Massivizing Computer Systems research group at the VU and a visiting researcher at TU Delft. He is also the elected chair of the SPEC RG Cloud Group. His work in distributed systems and ecosystems includes over 150 peer-reviewed articles with high scientific impact, and has applications in cloud computing, big data, scientific and business-critical computing, and online gaming. His research has received prestigious recognition, including membership in the (Young) Royal Academy of Arts and Sciences of the Netherlands, the Netherlands ICT Researcher of the Year award, and a PhD from TU Delft. He has received a knighthood for cultural and scientific merits. Contact Alexandru at A.Iosup@vu.nl or @AIosup, or visit http://atlarge.science/aiosup


#2 - Elastic Data Streams in Pravega for Serverless Computing

Serverless frameworks enable applications to react to new input. As input, such as events in an event stream, becomes available, serverless frameworks trigger functions to process it. Functions can range from simple stateless transformations (e.g., filtering) to more complex stateful computations that are an integral part of a data pipeline. Combining streaming data and serverless computing can achieve low-latency, scalable, and cost-effective processing over unbounded data sets.
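
To make the triggering model concrete, here is a minimal sketch of a stateless filter function; the handler signature and event fields are hypothetical and not tied to any particular serverless framework.

```python
# Hypothetical per-event handler: a serverless framework would invoke
# handle() once for each new event arriving on a stream; only events that
# pass the filter produce downstream output.
import json
from typing import Optional

def handle(event: dict) -> Optional[dict]:
    """Stateless transformation: forward only readings above a threshold."""
    reading = event.get("temperature")
    if reading is not None and reading > 30.0:
        return {"sensor": event.get("sensor_id"), "temperature": reading}
    return None  # dropped events produce no output

if __name__ == "__main__":
    print(json.dumps(handle({"sensor_id": "s-42", "temperature": 31.5})))
```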

To enable streaming data with serverless, it is necessary to have streaming data sources in place. Those sources can be the elements generating the data (e.g., sensors) or systems used to ingest and persist streaming data. One such system is Pravega, a novel open-source storage system for data streams. Pravega provides client applications with write/read functionality and guarantees that stream data is durable and consistent. As a storage system, Pravega is able to store historical streaming data cost-effectively and with high performance, allowing serverless applications to perform historical analysis.

One of the most distinguishing features of Pravega compared to other systems (such as Apache Kafka or Apache Pulsar) is its ability to change the degree of parallelism of a stream dynamically, making event streams elastic. Pravega enables dynamic changes to the number of parallel stream segments (similar to "topic partitions" in Kafka) according to the load that the cluster handles, relying on a user-defined, per-stream auto-scaling policy to trigger scaling. In this talk, we provide a general overview of Pravega, focusing on the design of stream auto-scaling, and we discuss how this elasticity can be exploited by serverless and data analytics engines to process data from Pravega streams.
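
To illustrate the idea behind a per-stream, event-rate-based scaling policy, consider the small sketch below; the class and parameter names are illustrative only and do not reproduce the actual Pravega client API.

```python
# Illustrative model (not the Pravega API) of an event-rate auto-scaling
# policy: segments split when the observed per-segment rate exceeds the
# target, and merge when it falls well below it.
from dataclasses import dataclass

@dataclass
class EventRateScalingPolicy:
    target_events_per_sec: int   # desired load per segment
    min_num_segments: int        # lower bound on stream parallelism

    def desired_segments(self, observed_events_per_sec: float,
                         current_segments: int) -> int:
        per_segment = observed_events_per_sec / current_segments
        if per_segment > self.target_events_per_sec:
            return current_segments * 2                               # scale up (split)
        if per_segment < self.target_events_per_sec / 2:
            return max(self.min_num_segments, current_segments // 2)  # scale down (merge)
        return current_segments

policy = EventRateScalingPolicy(target_events_per_sec=1000, min_num_segments=2)
print(policy.desired_segments(observed_events_per_sec=8000, current_segments=4))  # -> 8
```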

Slides [PDF]

Raul Gracia Tinedo (principal software engineer at DellEMC): I hold an M.Sc. in Computer Engineering and Security (2011) and a Ph.D. in Computer Engineering (2015, outstanding thesis award), both from Universitat Rovira i Virgili (URV). During my Ph.D., I also worked as an intern at IBM Research (Haifa) and Tel-Aviv University.

I'm a highly motivated researcher and engineer interested in distributed systems, cloud storage, data analytics, and software engineering. I have published more than 20 scientific papers, including a Best Dataset Award at ACM IMC'15 and a Highlight Paper at ACM SYSTOR'15. I'm a committer in open-source projects and have actively participated in EU (H2020, FP7) and Spanish research projects.

I'm currently a principal software engineer at DellEMC working on the Pravega project (http://pravega.io): a novel distributed storage system for data streams. Pravega is at the core of the DellEMC Streaming Data Platform, a new DellEMC product launched in 2020.


#3 - Federated FaaS for Flexible Scientific Computing

Growing data volumes and velocities are driving exciting new methods across the sciences in which data analytics and machine learning are increasingly intertwined with research. These new methods require new systems to enable computation to be mobile, so that, for example, it can occur near data, be triggered by events (e.g., arrival of new data), or be offloaded to specialized accelerators. They also require new design approaches in which monolithic applications can be decomposed into smaller components that may in turn be executed separately and on the most suitable resources. The federated function-as-a-service (FaaS) model presents an attractive interface to address such needs, as it abstracts the underlying infrastructure almost entirely. In this talk, I describe how we have adopted the FaaS model and adapted it to enable computation to be dispatched to a federated ecosystem of remote computing endpoints. I will present our experiences applying the federated FaaS model to various scientific applications and highlight the benefits and limitations of the approach.
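
As a rough illustration of this dispatch pattern, the sketch below registers an ordinary Python function and submits it to different endpoints in a federation; RemoteComputeClient, the endpoint names, and the file paths are placeholders, not the API of any specific SDK.

```python
# Hypothetical federated-FaaS client: register a plain Python function once,
# then dispatch invocations to whichever endpoint is closest to the data or
# offers the right accelerator.
class RemoteComputeClient:
    """Placeholder client; not a specific FaaS toolkit."""
    def register(self, fn):
        return fn.__name__                      # pretend registration id
    def submit(self, fn_id, *args, endpoint_id):
        return {"fn": fn_id, "endpoint": endpoint_id, "args": args}  # pretend future

def count_variants(vcf_path: str) -> int:
    # Ordinary Python, executed remotely on whichever endpoint receives it.
    with open(vcf_path) as f:
        return sum(1 for line in f if not line.startswith("#"))

client = RemoteComputeClient()
fn_id = client.register(count_variants)
# Same function, two endpoints: run near the data and near the instrument.
task_hpc  = client.submit(fn_id, "/data/run1.vcf", endpoint_id="campus-cluster")
task_edge = client.submit(fn_id, "/data/run2.vcf", endpoint_id="instrument-edge")
```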

Slides [PDF]

Kyle Chard (Research Associate Professor, University of Chicago): Kyle Chard is a Research Associate Professor in the Department of Computer Science at the University of Chicago. He also holds a joint appointment at Argonne National Laboratory. He co-leads the Globus Labs research group focusing on research problems in data-intensive computing, distributed computing, and research data management.


#4 - Beyond Serverless Computing With Kalix

Kalix is the first and only developer platform to enable any back-end or full-stack developer to easily build large-scale, high-performance microservices and APIs with no operations required. Kalix removes the hurdles of distributed systems and enables event-driven architectures with fully managed underlying infrastructure.

Slides [PDF]

Alan Klikic: Alan is a Senior Solution Architect at Lightbend. He has 15+ years of experience in software development and is passionate about reactive and distributed systems.


#5 - Support Architecture for Serverless Computing: Model Considerations and Potential Trends

The increasing interest in serverless computing calls for a clearer view of the practical advantages one may expect from its applications and a better discussion of the pros and cons of specific deployment architectures. We sketch a model scheme to define potential perspectives of support and deployment of serverless computing, toward a discussion of the most suitable target application areas for serverless.

Slides [PDF]

Antonio Corradi: Antonio Corradi is a full professor at the University of Bologna, in Distributed Systems and Computer Networks, with special interests in middleware and infrastructure design and deployment, from multi-cloud solutions to IoT, Fog, and Edge, from manufacturing innovation design to pub/sub QoS, and from serverless computing and FaaS to crowdsourcing and smart cities. He has worked for the University of Bologna in many areas, such as governance (he also served as department chair), innovation, and promoting new connections and relationships with European and worldwide organizations. Over the last decade, he has been deeply involved in new innovation and public-engagement initiatives, becoming president of the CLUST-ER 'Innovation in Services'. Antonio is a member of IEEE and ACM and has published more than 350 contributions in journals, magazines, and international conferences.


#6 - Serverless Bomberman: RTMPG PoC based on Durable Functions

Serverless Bomberman is a fun game designed to let researchers explore low-latency characteristics of serverless platforms. The game has been built by two students using Azure Functions with durable entities, which permit building stateful applications with eventual consistency for reasonably sized volumes of data. Playing Serverless Bomberman makes variations in latency visually apparent. We reflect on the applicability to a wide range of medium-complexity games that could benefit from massive elastic scaling, for instance due to popularity bursts.
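
For readers unfamiliar with durable entities, here is a minimal sketch of one written with the Python SDK for Durable Functions; the actual game uses Azure Functions with a .NET client, and the entity state and operation names below are illustrative, not taken from the game.

```python
# Sketch of a durable entity holding per-player game state; the Durable
# Functions runtime persists the state between operations.
import azure.durable_functions as df

def entity_function(context: df.DurableEntityContext):
    state = context.get_state(lambda: {"x": 0, "y": 0, "bombs": 3})
    operation = context.operation_name

    if operation == "move":
        delta = context.get_input()            # e.g. {"dx": 1, "dy": 0}
        state["x"] += delta["dx"]
        state["y"] += delta["dy"]
    elif operation == "drop_bomb" and state["bombs"] > 0:
        state["bombs"] -= 1
    elif operation == "get":
        context.set_result(state)

    context.set_state(state)

main = df.Entity.create(entity_function)
```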

Slides and video: https://drive.switch.ch/index.php/s/GHDkhYxE6MOCqGb [PDF]

Evan Hirschi and Rico Nachbur are computer science students at Zurich University of Applied Sciences. They implemented the initial Serverless Bomberman prototype based on Azure Functions and a .NET client.

Jesse Donkervliet is a PhD student at the Massivizing Computer Systems group at Vrije Universiteit Amsterdam. His research focuses on scalability and consistency in Minecraft-like services and multimedia applications. He is the tech lead of the Opencraft research project, which aims to bring modifiable virtual environments to millions of players.

Josef Spillner is associate professor at Zurich University of Applied Sciences, with research and teaching activities in distributed application computing paradigms, such as cloud-native, serverless and big data computing.


#7 - Netherite: Efficient Execution of Serverless Workflows

Combining serverless functions with reliable workflows is an increasingly popular approach for developing scalable, cost-efficient cloud services. However, because workflows must continuously persist their progress, cloud storage can become a bottleneck. In this talk, we introduce Netherite, a new distributed architecture for executing serverless workflows. Our work (to appear in VLDB 2022) shows that Netherite provides significant performance improvements over our current production system for serverless workflows in Azure Durable Functions.

Slides [PDF]

Sebastian Burckhardt: Sebastian Burckhardt is a principal researcher at Microsoft Research (MSR) in Redmond. His general interests revolve around programming models for concurrent, parallel, and distributed systems. Sebastian was born in Basel, Switzerland, where he studied mathematics at the local university. After an exchange year at Brandeis University, he decided to switch fields, earned his PhD in Computer Science at the University of Pennsylvania, and joined MSR in 2007. His work ranges from theoretical foundations, such as specifications for consistency models and optimality results for replicated data types, to practical applications, such as language-integrated cloud types for replicated state, distributed transactions for Orleans virtual actors, and serverless workflows with Azure Durable Functions.


#8 - Serverless Computing: Challenges, Opportunities, and Beyond

The popularity of serverless computing frameworks has skyrocketed in the past years, as they deliver the original cloud computing promise of a "pay-as-you-go" cost model with near-unlimited elasticity while simplifying the programming model for cloud practitioners. Yet numerous challenges still hinder their widespread adoption as the de facto choice for building cloud services. In this talk, we will present the current state of affairs of serverless computing frameworks in Huawei Cloud, along with our ongoing research problems and our vision for making serverless computing a ubiquitous building block of any cloud computing service.

Slides [PDF]

Javier Picorel is an engineering manager and chief architect of computational cloud storage, leading the Intelligent Cloud Infrastructure Research Group in Huawei Cloud. He is broadly interested in TCO-efficient cloud infrastructures through the vertical integration of hardware and software layers, the minimization of data movement, and the decentralization and disaggregation of the cloud. Javier received a PhD in computer science from EPFL in 2017. He is the recipient of several awards at Huawei, including the individual Gold Medal, and technology developed by his group has been highlighted at Huawei Connect.