Open Source Summit + Embedded Linux Conference North America...
May 18-20, 2026
Minneapolis, MN
Note: The schedule is subject to change.

The Sched app allows you to build your schedule but is not a substitute for your event registration. You must be registered for Open Source Summit North America 2026 to participate in the sessions. If you have not registered but would like to join us, please go to the event registration page to purchase a registration.

This schedule is automatically displayed in Central Daylight Time (UTC-5). To see the schedule in your preferred timezone, please select from the drop-down menu to the right, above "Filter by Date."

IMPORTANT NOTE: Timing of sessions and room locations are subject to change.


Venue: 200F (Level Two)
Tuesday, May 19
 

11:00am CDT

From Guidance To Guardrails: Cost & Carbon Policy-as-Code With OPA in CI - Machiko Shinozuka & Kouki Hama, NTT, Inc
Tuesday May 19, 2026 11:00am - 11:40am CDT
Several guidelines, such as the FinOps Framework and Green Software Patterns, provide principles for cloud optimization, but they mix abstract ideas with practical details and span multiple concerns, such as cost and sustainability. This makes human reviews inconsistent. In this talk, we show how such guidance can be evaluated consistently in CI using Open Policy Agent (OPA).

We present a two-layer policy design: evaluation logic stays small and readable in Rego, while policy rules such as thresholds and exceptions are defined in structured JSON. This separation makes policies easier to maintain by contributors without Rego expertise. CI checks consume an input schema derived from configuration or IaC artifacts and return review-ready decisions—allow, warn, or block—along with a rule identifier, rationale, and a suggested follow-up.

What you will learn:
・How to extract checkable criteria from abstract guidance
・How to design a stable input schema
・How to structure a rules catalog so that policy evaluation remains possible even when multiple concerns interact
・How to run a policy change process that does not depend on a small set of Rego experts
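The two-layer split the abstract describes can be sketched in plain Python (the talk itself uses Rego and OPA; the rule names, fields, and thresholds below are invented for illustration): the rules catalog lives in structured JSON, while a small, generic evaluator turns it into allow/warn/block decisions with a rule identifier, rationale, and follow-up.

```python
import json

# Hypothetical rules catalog: thresholds and exceptions live in data,
# not in the evaluation logic. All names here are illustrative.
RULES_JSON = """
[
  {"id": "COST-001", "field": "monthly_cost_usd", "max": 500,
   "action": "warn", "rationale": "Exceeds team cost budget",
   "follow_up": "Right-size the instance or request an exception"},
  {"id": "CO2-001", "field": "estimated_co2_kg", "max": 100,
   "action": "block", "rationale": "Exceeds carbon threshold",
   "follow_up": "Choose a lower-carbon region"}
]
"""

def evaluate(resource: dict, rules: list) -> list:
    """Small, generic evaluation logic: compare inputs against the catalog."""
    decisions = []
    for rule in rules:
        value = resource.get(rule["field"])
        if value is not None and value > rule["max"]:
            decisions.append({
                "decision": rule["action"],
                "rule": rule["id"],
                "rationale": rule["rationale"],
                "follow_up": rule["follow_up"],
            })
    return decisions or [{"decision": "allow"}]

# Input schema derived from configuration or IaC artifacts (illustrative).
resource = {"monthly_cost_usd": 750, "estimated_co2_kg": 20}
for d in evaluate(resource, json.loads(RULES_JSON)):
    print(d["decision"], d.get("rule", ""))  # → warn COST-001
```

Because thresholds and exceptions are plain JSON, contributors can change policy without touching the evaluation logic, which is the maintainability point the abstract makes.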
Speakers

Machiko Shinozuka

Research Engineer, NTT, Inc
Machiko Shinozuka is a researcher in the Computer and Data Science Laboratories at NTT, Inc. She is engaged in research and development of green software engineering. Her interests include calculating and reducing CO2 emissions of software, FinOps, and cloud cost optimization. With a background...
Kouki Hama

Senior Research Engineer, NTT, Inc
Kouki Hama is a Senior Research Engineer in software engineering at NTT, Inc., Computer & Data Science Laboratories. His research focuses on improving the efficiency, reliability, and governance of CI/CD, with emphasis on GreenOps, FinOps, reliability engineering, and software supply...
200F (Level Two)
  Cloud + Orchestration

11:55am CDT

Unified Database Provisioning and Management on Kubernetes - Kyle Avants, Percona
Tuesday May 19, 2026 11:55am - 12:35pm CDT
Running production-grade databases on Kubernetes is becoming increasingly common, but managing their lifecycle remains fragmented and complex for SRE and DevOps teams. Critical operations—scaling, RBAC, monitoring, backup, and restore—currently require navigating distinct, database-specific APIs and tools. This complexity prevents teams from fully realizing the operational efficiency and uniformity that Kubernetes provides.

This talk introduces OpenEverest, the open-source platform designed to address this operational gap. OpenEverest provides a single, unified UI and CLI to manage SRE functions for popular open-source databases such as PostgreSQL and MySQL deployed on Kubernetes. It abstracts away database-specific differences, offering standardized control for scaling, integrated observability, granular RBAC, and reliable data protection.

Join us to learn how OpenEverest simplifies the path to production readiness, reduces operational toil, and is building a pioneering open-source database management layer.
Speakers

Kyle Avants

Senior Solutions Engineer, Percona
200F (Level Two)
  Cloud + Orchestration

2:10pm CDT

Off-Grid Cloud Native: Building Trustworthy Sponsor-to-School Delivery With Kubernetes - Vuyo Mhlotshane, Loakit
Tuesday May 19, 2026 2:10pm - 2:50pm CDT
In many rural communities, the hardest part of funding education is not raising money. It is knowing with confidence that resources reached the right school and were used as intended.

In this session, I share a real-world, open-source reference architecture for a pay-on-proof delivery pipeline where sponsor funds are released only after delivery can be verified. The system is designed for low-bandwidth and intermittent connectivity environments and uses Kubernetes, event-driven workflows, cryptographic proofs, and auditable logs to close trust gaps between sponsors, vendors, and schools.

We will walk through key design decisions, how to think about offline-first systems, and where trust commonly breaks in real deployments, along with practical ways to address those gaps without heavy infrastructure.

Attendees will learn:

- How to model sponsor to vendor workflows using events and state
- Patterns for building offline-friendly, cloud native systems
- Practical digital trust controls including identity, auditability, and proof of delivery

This talk is for platform engineers, SREs, and open source practitioners building systems that must work in real-world conditions.
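The pay-on-proof idea described above can be sketched as an event-driven state machine: funds move to a released state only after a delivery proof verifies. This is a minimal illustration, not the talk's implementation; the event names, states, and the SHA-256 stand-in for the cryptographic proof are all assumptions.

```python
import hashlib

def apply_event(state: str, event: dict) -> str:
    """Deterministic state transitions driven by an append-only event log."""
    transitions = {
        ("PLEDGED", "order_placed"): "ORDERED",
        ("ORDERED", "proof_submitted"): "DELIVERED",
        ("DELIVERED", "proof_verified"): "RELEASED",
    }
    # Unknown (state, event) pairs leave state unchanged: safe under replay.
    return transitions.get((state, event["type"]), state)

def verify_proof(proof: bytes, expected_digest: str) -> bool:
    """Stand-in for a cryptographic delivery-proof check."""
    return hashlib.sha256(proof).hexdigest() == expected_digest

# A delivery proof captured offline (e.g. photo + signature + GPS), hashed
# up front so tampering is detectable later.
proof = b"photo+signature+gps"
digest = hashlib.sha256(proof).hexdigest()

state = "PLEDGED"
events = [{"type": "order_placed"}, {"type": "proof_submitted"}]
if verify_proof(proof, digest):
    events.append({"type": "proof_verified"})
for e in events:
    state = apply_event(state, e)
print(state)  # → RELEASED — funds released only after the proof verifies
```

Because the transition table is pure data and replay is deterministic, the same log can be re-audited later, which is how event sourcing closes the trust gap the abstract describes.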
Speakers

Constance (Vuyo) Mhlotshane

Cloud Native Engineer, Loakit
Vuyo Mhlotshane is a Cloud Native Engineer and open source practitioner focused on building resilient, trustworthy systems. She works hands-on with Kubernetes, infrastructure as code, and cloud security, and is the founder of Loakit — an initiative exploring how open source technology...
200F (Level Two)
  Cloud + Orchestration

3:05pm CDT

Lightning Talk: Reliability at the Edge: Fail-Safe Multi-Cluster Orchestration With KubeStellar - Munachimso (Muna) Nwaiwu, Cornell University
Tuesday May 19, 2026 3:05pm - 3:15pm CDT
Managing a single Kubernetes cluster is a solved problem. However, extending Kubernetes to the edge introduces a fundamental systems crisis. In remote environments, network partitions are guaranteed. When orchestrators demand real-time synchronization, routine network drops lead to configuration drift and control-plane breakdown.

This session analyzes how KubeStellar (a CNCF Sandbox project) attempts to solve this reliability crisis. Evaluated from a systems and network perspective, we dissect how KubeStellar abandons synchronous replication for an asynchronous, hub-and-spoke model. By decoupling its Workload Description Space (WDS) from the transport layer, it leverages eventual consistency to treat disconnected edge nodes as an expected state, not a fatal error.

To ground this theory in reality, we explore our ongoing research at Cornell University’s Smart Farms. In remote agriculture, long-term partitions are daily realities. We will outline our progress using KubeStellar to manage geographically dispersed clusters, presenting an architectural roadmap for how eventual consistency can ensure local workloads survive extended disconnects and deterministically reconcile upon reconnection.
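The reconciliation behavior described above can be illustrated with a toy desired-state loop (this is not KubeStellar code; the workload names and version strings are invented): the edge keeps serving its last-known state while partitioned, then deterministically converges on the hub's desired state upon reconnection.

```python
# Hypothetical sketch of hub-and-spoke eventual consistency. The edge runs its
# last-known workloads during a partition; on reconnect it diffs against the
# hub's desired state and applies changes in a deterministic order.

def reconcile(local: dict, desired: dict) -> tuple[dict, list]:
    """Return the converged state plus an ordered list of actions taken."""
    actions = []
    for name in sorted(set(local) | set(desired)):  # sorted => deterministic
        if name not in desired:
            actions.append(("delete", name))        # pruned by the hub
        elif local.get(name) != desired[name]:
            actions.append(("apply", name))         # new or updated workload
    return dict(desired), actions

# While disconnected, the edge keeps running its last-known workloads.
local_state = {"sensor-agent": "v1", "old-job": "v3"}
# Upon reconnection, the hub's desired state (illustrative) says:
hub_desired = {"sensor-agent": "v2", "telemetry": "v1"}

converged, actions = reconcile(local_state, hub_desired)
print(actions)   # → [('delete', 'old-job'), ('apply', 'sensor-agent'), ('apply', 'telemetry')]
print(converged == hub_desired)  # → True
```

The key property is that reconciliation is a pure function of (local, desired), so any number of replays after repeated disconnects converge to the same result.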
Speakers

Muna Nwaiwu

Researcher, Cornell University
Munachimso Victor Nwaiwu is a PhD student at Cornell University researching distributed systems and edge orchestration, building upon a highly accomplished career as a Network Automation Engineer. Before his academic research, he made significant contributions to next-generation network...
200F (Level Two)
  Cloud + Orchestration

3:25pm CDT

Lightning Talk: Taking a U-Turn for Caches: Moving Back From Remote To Local - Aditya Mohan, Amazon
Tuesday May 19, 2026 3:25pm - 3:35pm CDT
With the growth of CPU compute and larger memory heaps, many cloud-native workloads that traditionally relied on remote caches like Redis and Memcached can now benefit from in-process caching using open source libraries.

In this session, we focus on Java-based cloud-native services and show how local caches, such as Caffeine, can colocate cache with application logic, reducing network overhead, simplifying consistency management, and improving latency. Drawing on large-scale production experience, we’ll explore cache invalidation, freshness guarantees, near-cache patterns, and scalability trade-offs, along with practical lessons for handling staleness, TTLs, and other caching challenges while reducing operational complexity and cost.
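The colocated-cache pattern above can be sketched with a minimal TTL cache (Caffeine itself is a Java library; this Python version only illustrates the idea, and the key names and loader are invented): a hit is served in-process with no network round trip, and the TTL bounds staleness.

```python
import time

class LocalCache:
    """Minimal in-process cache with per-entry TTL (staleness control)."""

    def __init__(self, ttl_seconds: float, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock          # injectable for testing
        self._store = {}            # key -> (value, expires_at)

    def get(self, key, loader):
        """Return a fresh value, invoking the loader on miss or expiry."""
        entry = self._store.get(key)
        now = self.clock()
        if entry is not None and entry[1] > now:
            return entry[0]                      # fresh hit: no network call
        value = loader(key)                      # e.g. a DB or service call
        self._store[key] = (value, now + self.ttl)
        return value

calls = []
def load(key):
    calls.append(key)        # track how often the expensive path runs
    return key.upper()

cache = LocalCache(ttl_seconds=60)
print(cache.get("user:1", load))  # miss: invokes the loader
print(cache.get("user:1", load))  # hit: served locally
print(len(calls))                 # → 1 — the loader ran only once
```

Production libraries add size bounds, eviction policies, and refresh-ahead on top of this skeleton, which is where the trade-offs discussed in the session come in.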

Finally, we’ll discuss how emerging open source tools like Databricks’ Dicer apply these caching and orchestration principles at scale for real-time services, representing the next frontier. Attendees will learn methods to design low-latency, high-throughput, maintainable, and cost-efficient caching solutions for cloud-native architectures using open source tools.
Speakers

Aditya Mohan

Senior Machine Learning Engineer, Amazon Advertising Sponsored Products, Amazon
Aditya Mohan is a Senior Machine Learning Engineer at Amazon Advertising with 11+ years of experience and tech lead for agentic advertiser campaigns. He specializes in large-scale ML and semantic search, using LLMs and LangGraph to optimize campaigns and ensure observability, accountability...
200F (Level Two)
  Cloud + Orchestration

3:35pm CDT

Lightning Talk: Confidential Virtual Machines in KubeVirt With Hardware-Backed Trusted Environments - Basavaraju G & Rishika Kedia, IBM
Tuesday May 19, 2026 3:35pm - 3:45pm CDT
Multi-cloud deployments and shared infrastructure heighten data privacy and security concerns. With containerized workloads becoming mainstream on Kubernetes, there is a need to host containers securely in addition to virtual machines (VMs) and to safeguard workloads at the hardware level.
KubeVirt is a cloud native virtualization platform that offers Confidential Virtual Machines for the most sensitive use cases. These take advantage of Trusted Execution Environments such as AMD SEV, Intel TDX, and IBM Secure Execution to provide data-in-use encryption for workloads and to defend against subverted host administrators as well as system-level attacks.
In this session, we will cover KubeVirt's methodology for Confidential VMs, including the architecture design, implementation challenges, and deployments. We will examine how these VMs protect sensitive workloads using memory encryption and workload isolation while remaining integrated with Kubernetes orchestration and automation.
Speakers

Rishika Kedia

STSM, Chief Product Owner - OpenShift, IBM India Private Ltd
Rishika Kedia is the Product Owner for OpenShift and an Architect for Red Hat OCP on IBM Z at India Systems and Development Labs. With 18+ years of experience, she has led efforts to enable OpenShift and open-source technologies on IBM Z and LinuxONE systems. Rishika has designed...

Basavaraju G

Architect, IBM
Basava Raju G. is an Architect currently working at IBM, specializing in IBM Kubernetes Service and the OpenShift Container Platform. Basava has authored 3 IEEE publications and holds 2 patents in the machine learning and containers domains. Currently, Basavaraju is working at IBM Labs as the Product...
200F (Level Two)
  Cloud + Orchestration

4:20pm CDT

Hardening QEMU With Self-Correcting Fuzzing Pipelines - Navid Emamdoost, Google
Tuesday May 19, 2026 4:20pm - 5:00pm CDT
This session explores a dual-phase strategy for hardening the QEMU Virtual Machine Monitor (VMM) through advanced fuzzing and AI-driven automation. We begin by detailing a manual hardening effort that expanded QEMU’s testing surface from 18 to 60 active targets, increasing device line coverage by more than 30%. While effective, manual target creation is a resource-intensive process that struggles to scale across the hundreds of virtualized devices supported by QEMU.

To address these scaling challenges, we introduce an AI-driven agentic pipeline designed to automate the generation and validation of fuzzing targets. This system leverages Large Language Models (LLMs) to analyze device source code and memory regions, generating candidate C++ targets for the QEMU fuzzing engine.

We will discuss the implementation of a self-correcting feedback loop where the agent captures compilation and runtime errors to iteratively refine its output until a stable target is produced. Attendees will see how this approach aims to reach >80% device line coverage by automating the remaining hardware targets that currently lack dedicated fuzzing.
Speakers

Navid Emamdoost

Software Engineer, Google
Navid Emamdoost is a Software Engineer at Google focused on infrastructure security. He holds a PhD from the University of Minnesota, where his research uncovered over 200 Linux kernel bugs and 40 CVEs. His career includes maintaining OSS-Fuzz for open source projects and hardening...
200F (Level Two)
  Cloud + Orchestration
 