by Arpit Kumar
21 Dec, 2023
8 minute read
MicroVMs, Isolates, Wasm, gVisor: A New Era of Virtualization

Exploring the evolution and nuances of serverless architectures, focusing on the emergence of MicroVMs as a solution for isolation, security, and agility. We will discuss the differences between containers and MicroVMs, their use cases in serverless setups, and notable MicroVM-style implementations from various companies, focusing on Firecracker, V8 isolates, Wasm runtimes, and gVisor.


While exploring the world of serverless architecture, I came across MicroVMs. I had always focused on compute and scaling when thinking about serverless architectures. Recently, while reading an article about Cloudflare Workers and AWS Lambda, I realized it is just as much a problem of isolation and security: running workloads of unknown code on the same machine.

In a world of fast execution and cloud native technologies, containers immediately come to mind. But what I missed while thinking about workloads running on containers is that they still share the kernel with the host.

An unknown code workload can trigger a kernel panic on the host system and crash unrelated workloads belonging to other users on the same machine.

Running containers is very fast, with a very small boot time if the image is not too complex. While this helps launch and shut down workloads quickly, it poses the same security challenges described above.

So, in search of the right kind of security and resource isolation, people working on cloud native workloads started going back to VMs. The major caveat of a VM is that it is slow: cold start is a big problem when users want to start and shut down workloads within a second.

The AWS Lambda team then started working on a MicroVM called Firecracker. Firecracker is very light and has minimal device support. It can boot in as little as roughly 300 ms, serve the workload, and shut down. This kind of MicroVM provides isolation and security on top of near-container launch speed.

Dynamic Workload Requirements

Serverless or Function as a Service (FaaS) workload requirements can vary depending on the specific application or use case. However, there are several common considerations and requirements for managing dynamic serverless workloads:

  • Scalability: The ability to handle varying workloads and scale resources up or down dynamically in response to demand is crucial. Serverless platforms typically offer auto-scaling capabilities, ensuring that resources are allocated as needed to handle incoming requests without manual intervention.

  • Security: Implementing security best practices, such as proper authentication, authorization, and encryption of data, is critical. Since serverless architectures rely on third-party providers, ensuring secure communication and data protection is essential.

  • Pay for what you use: Understanding the cost implications of serverless architectures is crucial. Functions are billed based on execution time, resource consumption, and other factors, so scaling up handles increased traffic while costs drop to zero for users when there is no traffic at all.

  • Cold Start Mitigation: Addressing cold start issues, where functions take longer to respond because of initialization work, is essential for applications that require near-instantaneous responses. Strategies like keeping functions warm or optimizing code to reduce initialization can help, as sketched below.
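
Here is a minimal sketch of the warm-up idea for an AWS Lambda function on the Node.js runtime, written in TypeScript. The `warmup` field and the scheduled ping that would send it are assumptions for illustration, not an AWS API.

```typescript
// A minimal sketch of the "keep it warm" idea for an AWS Lambda handler.
// The `warmup` field is our own convention, sent by a hypothetical scheduled
// rule (e.g. EventBridge every few minutes), not something AWS defines.
let initialized = false;

function heavyInit(): void {
  // Expensive one-time setup (SDK clients, DB connections, config parsing)
  // that should only be paid on a cold start.
  initialized = true;
}

export const handler = async (event: { warmup?: boolean; name?: string }) => {
  if (!initialized) heavyInit(); // only runs on a cold start

  if (event.warmup) {
    // The scheduled ping keeps the execution environment alive so real
    // requests rarely hit a cold start.
    return { statusCode: 204, body: "" };
  }

  return { statusCode: 200, body: `Hello, ${event.name ?? "world"}!` };
};
```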

Adhering to these requirements and best practices helps you design, deploy, and manage dynamic serverless workloads efficiently, ensuring optimal performance, scalability, and cost-effectiveness.

Containers and MicroVMs: Choosing Wisely in Serverless Architectures

While evaluating workload orchestration for serverless architectures, we have two obvious choices: containers and MicroVMs.

Both containers and MicroVMs offer distinct advantages, catering to different requirements and scenarios within serverless setups.

Containers: Agile and Lightweight

Containers have been the backbone of modern application deployment, offering agility and efficiency. They encapsulate an application and its dependencies, ensuring consistency across various environments. In serverless architectures, containers provide:

  • Portability: Containers are highly portable, allowing seamless deployment across different cloud providers and on-premises environments.

  • Resource Efficiency: They share the host OS kernel, consuming fewer resources compared to traditional VMs, thereby enabling efficient resource utilization.

  • Faster Startup Times: Containers typically start up rapidly, enabling quick scaling and handling of varying workloads in serverless setups.

MicroVMs: Enhanced Isolation and Security

On the other hand, MicroVMs focus on providing stronger isolation and security. These lightweight virtual machines offer a higher level of isolation compared to containers, leading to:

  • Enhanced Security: MicroVMs ensure stronger isolation between workloads, reducing the attack surface and the risk of vulnerabilities affecting other instances.

  • Isolation Guarantees: They offer a more robust boundary between applications, ensuring that a compromise in one MicroVM doesn’t affect others.

  • Compatibility and Flexibility: MicroVMs are designed to run various workload types, making them versatile in diverse serverless architectures.

Strategies Followed by Different Companies

Several companies have developed their own MicroVM architectures, each with its own unique approach and features. Here are a few examples:

  • AWS Lambda (Firecracker) - AWS Lambda uses a custom MicroVM architecture called Firecracker. Firecracker is a lightweight virtual machine monitor (VMM) that provides secure and fast instantiation of MicroVMs, with a focus on cloud computing use cases. It uses the KVM (Kernel-based Virtual Machine) hypervisor to provide hardware virtualization, and each MicroVM boots a small, specialized Linux kernel and minimal userspace.

Firecracker does not support nested virtualization. It uses KVM and runs only on Linux. Firecracker MicroVMs do not include unnecessary devices or guest functionality, which reduces both the memory footprint and the attack surface. Firecracker virtualizes only what the workload needs rather than emulating a whole machine, and it requires some extra configuration steps, such as network post-configuration, and may fail if not configured properly.

This makes Firecracker very lightweight and well suited to ephemeral, Lambda-style workloads.

Firecracker does not automatically release guest memory back to the host, but it provides a balloon device that can be inflated and deflated at runtime to reclaim it. So Firecracker is a mix of trade-offs: some good, some not so good. A minimal sketch of driving its API follows the figure below.


Firecracker MicroVM
https://firecracker-microvm.github.io/
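
To make the "minimal device model plus REST API" point concrete, here is a rough sketch of booting a Firecracker MicroVM by talking to its API over its Unix socket from Node.js/TypeScript. The socket, kernel image, and rootfs paths are assumptions; in practice you would start `firecracker --api-sock /tmp/firecracker.socket` first and point these at real artifacts.

```typescript
// A minimal sketch: configure and boot a Firecracker MicroVM through the
// REST API exposed on its Unix socket.
import http from "node:http";

const SOCKET = "/tmp/firecracker.socket"; // assumed API socket path

function put(path: string, body: unknown): Promise<void> {
  const data = JSON.stringify(body);
  return new Promise((resolve, reject) => {
    const req = http.request(
      {
        socketPath: SOCKET,
        path,
        method: "PUT",
        headers: { "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data) },
      },
      (res) => {
        res.resume(); // drain the response body
        res.statusCode && res.statusCode < 300
          ? resolve()
          : reject(new Error(`${path} failed: HTTP ${res.statusCode}`));
      }
    );
    req.on("error", reject);
    req.end(data);
  });
}

async function boot(): Promise<void> {
  // Keep the guest tiny: 1 vCPU and 128 MiB of memory.
  await put("/machine-config", { vcpu_count: 1, mem_size_mib: 128 });
  // An uncompressed guest kernel and a rootfs image on the host (assumed paths).
  await put("/boot-source", {
    kernel_image_path: "./vmlinux",
    boot_args: "console=ttyS0 reboot=k panic=1",
  });
  await put("/drives/rootfs", {
    drive_id: "rootfs",
    path_on_host: "./rootfs.ext4",
    is_root_device: true,
    is_read_only: false,
  });
  // Boot the MicroVM.
  await put("/actions", { action_type: "InstanceStart" });
}

boot().catch(console.error);
```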

  • Cloudflare Workers - Cloudflare Workers takes a lighter-weight approach, built on V8 isolates from the V8 JavaScript engine rather than a full MicroVM. V8 isolates provide a way to run multiple independent JavaScript contexts within a single process. Each isolate has its own memory, global object, and event loop, and is isolated from the other isolates running in the same process. This allows for efficient use of resources and improved security, as each isolate is protected from the actions of the others.

Isolates are designed to start quickly because they are created within an already-running environment. This differs from the virtual machine model, where a new VM is spun up for each function, which can be slow; isolates largely eliminate that start-up cost. A minimal Worker sketch is shown below.


Cloudflare Worker Isolate V8
https://developers.cloudflare.com/workers/reference/how-workers-works/

There is a really good discussion on Hacker News about whether V8 isolates are actually secure. The summary is that it is a bit of a gray area: running isolates within the same process can be insecure, even though Cloudflare's engineers believe they can control security at this level of granularity.
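
For a sense of what actually runs inside one of these isolates, here is a minimal Cloudflare Worker in the modern module syntax, written in TypeScript. It is only a sketch of the handler shape, not a full project.

```typescript
// A minimal Cloudflare Worker: a single fetch handler that runs inside a V8
// isolate rather than its own process or VM.
export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    // Each tenant's code gets its own heap and global scope, but shares the
    // underlying process with other isolates on the same machine.
    return new Response(`Hello from an isolate! You asked for ${url.pathname}`, {
      headers: { "content-type": "text/plain" },
    });
  },
};
```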

  • Wasmer - Wasmer is an open-source WebAssembly (Wasm) runtime. Wasmer allows developers to run Wasm modules in a sandboxed environment, providing strong isolation and efficient execution.

A Wasm runtime lets software written in many languages run inside a lightweight sandbox at near-native speed, without a full virtual machine or container. It offers several advantages over traditional VMs, such as faster and safer execution, and is often used as an alternative to VMs and containerization.

This tweet from Solomon Hykes, the founder of Docker, says a lot: he wrote that if Wasm and WASI had existed in 2008, there would have been no need to create Docker.
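
As a small illustration of the runtime model, here is a sketch that loads and calls a Wasm module from Node.js/TypeScript using the standard WebAssembly API; Wasmer exposes the same idea through its own SDKs and CLI. The `add.wasm` module and its exported `add` function are assumed examples.

```typescript
// A minimal sketch: instantiate a Wasm module and call one of its exports.
import { readFile } from "node:fs/promises";

async function main(): Promise<void> {
  const bytes = await readFile("./add.wasm"); // assumed example module
  // Compile and instantiate; the empty object means the module needs no imports.
  const { instance } = await WebAssembly.instantiate(bytes, {});
  const add = instance.exports.add as (a: number, b: number) => number;
  // The call executes inside the Wasm sandbox at near-native speed.
  console.log(add(2, 3));
}

main().catch(console.error);
```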



  • Google gVisor - gVisor is an open-source, security-focused container runtime developed by Google. gVisor uses a unique architecture built around a small, lightweight application kernel written in Go, called the Sentry, which runs in user space, intercepts the system calls made by the application, and handles them itself. The Sentry provides a sandboxed environment for the application to run in and acts as a compatibility layer between the application and the host operating system.

gVisor integrates with a wide range of container tooling, including Docker and Kubernetes, and can be used as a drop-in replacement for traditional container runtimes; a small launch sketch follows the figure below. It is designed to be lightweight and efficient, with minimal overhead compared to traditional runtimes, although there have been concerns in the community about its performance. You can check the performance comparison with other runtimes at https://gvisor.dev/docs/architecture_guide/performance/


gVisor Alternate Kernel
https://gvisor.dev/
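
For a feel of the "drop-in replacement" claim, here is a small sketch that launches a container under gVisor from a Node.js/TypeScript script. It assumes runsc is installed and registered with Docker as a runtime named `runsc` (typically via /etc/docker/daemon.json), and uses an alpine image purely as an example.

```typescript
// A minimal sketch: run a container under gVisor instead of the default runc.
import { execFileSync } from "node:child_process";

const output = execFileSync(
  "docker",
  [
    "run", "--rm",
    "--runtime=runsc", // ask Docker to use gVisor's runsc runtime
    "alpine", "uname", "-a",
  ],
  { encoding: "utf8" }
);

// Inside the sandbox, system calls are served by gVisor's Sentry rather than
// the host kernel.
console.log(output);
```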

Hybrid approaches that leverage both containers and MicroVMs are increasingly gaining traction. Some serverless platforms combine the agility of containers with the stronger isolation of MicroVMs, offering a middle ground that addresses both performance and security concerns.

If you want to delve into this, look at open-source projects like firecracker-containerd and Kata Containers. New hypervisors are also coming up; Cloud Hypervisor is the one I feel is worth watching. The Kata Containers docs list which hypervisors they integrate with and how to use them with Kubernetes and other tooling:

Hypervisor         Written in   Architectures     Type
ACRN               C            x86_64            Type 1 (bare metal)
Cloud Hypervisor   Rust         aarch64, x86_64   Type 2 (KVM)
Firecracker        Rust         aarch64, x86_64   Type 2 (KVM)
QEMU               C            all               Type 2 (KVM)
Dragonball         Rust         aarch64, x86_64   Type 2 (KVM)

As serverless architectures continue to evolve, the integration of containers and MicroVMs will likely become more nuanced, catering to a broader spectrum of use cases. The future might witness innovations that bridge the gap between these technologies, providing the best of both worlds.

The decision between containers and MicroVMs in serverless architectures hinges on the specific needs of the application, balancing factors like performance, security, and flexibility. Understanding the strengths and limitations of each and aligning them with the project's requirements will pave the way for an optimal serverless infrastructure.
