Open Source Summit + Embedded Linux Conference North America...
May 18-20, 2026
Minneapolis, MN
Note: The schedule is subject to change.

The Sched app allows you to build your schedule but is not a substitute for your event registration. You must be registered for Open Source Summit North America 2026 to participate in the sessions. If you have not registered but would like to join us, please go to the event registration page to purchase a registration.

This schedule is automatically displayed in Central Daylight Time (UTC-5). To see the schedule in your preferred timezone, please select from the drop-down menu to the right, above "Filter by Date."

IMPORTANT NOTE: Timing of sessions and room locations are subject to change.


Venue: 200A (Level Two)
Wednesday, May 20
 

11:00am CDT

When Similar Is Good Enough: Rethinking Caching for AI - Madelyn Olson, Valkey & Jacob Murphy, Google Cloud
Wednesday May 20, 2026 11:00am - 11:40am CDT
Caching has traditionally relied on exact matches: the same input produces the same cached output. AI systems challenge this assumption by introducing semantic similarity — requests that are different on the surface but equivalent in meaning. This talk explores how caching is evolving to support AI workloads, from classical key-value strategies to semantic caching using vector search. We'll walk through a practical architecture that layers exact and semantic caches in front of an expensive model and demonstrate how hybrid caching can reduce cost and latency. This talk will explore multiple open-source systems, such as OpenSearch and Valkey, and discuss the tradeoffs that they provide and when they matter.
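The layered exact-plus-semantic pattern the abstract describes can be sketched in a few lines of Python. This is an illustrative toy, not the speakers' code: a bag-of-words vector stands in for a real embedding model, and a linear scan stands in for a vector index such as the ones OpenSearch or Valkey provide.

```python
import math
from collections import Counter

def embed(text):
    # Toy embedding: bag-of-words counts (stand-in for a real embedding model).
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class HybridCache:
    """Exact-match cache layered in front of a semantic (similarity) cache."""
    def __init__(self, threshold=0.8):
        self.exact = {}       # prompt -> response
        self.semantic = []    # (embedding, response) pairs
        self.threshold = threshold

    def get(self, prompt):
        if prompt in self.exact:                 # layer 1: exact match
            return self.exact[prompt]
        q = embed(prompt)
        best, best_sim = None, 0.0
        for vec, resp in self.semantic:          # layer 2: similarity scan
            sim = cosine(q, vec)
            if sim > best_sim:
                best, best_sim = resp, sim
        return best if best_sim >= self.threshold else None

    def put(self, prompt, response):
        self.exact[prompt] = response
        self.semantic.append((embed(prompt), response))
```

Only on a miss in both layers would the request fall through to the expensive model, whose answer is then written back with `put`.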
Speakers
Madelyn Olson

Principal Engineer and Maintainer of the Open-Source Valkey Project, AWS
Madelyn Olson is a co-creator and maintainer of Valkey, a high-performance key-value datastore, and Principal Engineer at Amazon Web Services (AWS). She focuses on building secure and highly reliable features, with a passion for working with open-source communities.
Jacob Murphy

Staff Software Engineer, Google Cloud
Jacob is a member of the Valkey Technical Steering Committee and an engineer on Google Cloud's Memorystore team.
200A (Level Two)
  Open AI & Data

11:55am CDT

Zero Trust AI Agents: Securing MCP in Private Kubernetes Networks - Mithil Patel, Equinix
Wednesday May 20, 2026 11:55am - 12:35pm CDT
The transition from passive RAG to autonomous agentic workflows forces a dangerous trade-off: to be useful, agents need access; to be safe, they need restrictions. Giving a non-deterministic LLM standing permissions to your Kubernetes cluster is a security nightmare, yet agentic tool execution demands real-world access to be effective.

This session introduces a battle-tested architecture for Zero Trust Agents. We will demonstrate how to secure Model Context Protocol (MCP) servers within private networks, replacing risky static credentials with a dynamic control plane that enforces strict safety guardrails.

Attendees will learn:

Identity for Autonomy: How to integrate OpenBao (LF Edge) to issue Just-In-Time (JIT) credentials, ensuring agents only hold permissions during active tool use.

Bounding Agency: Implementing "Read/Write Separation" at the protocol level, preventing stochastic errors or misinterpretations from causing deterministic outages.

Secure Orchestration: A blueprint for deploying MCP servers as secure bridges between AI reasoning and internal infrastructure.

Stop building toys. Learn how to deploy autonomous systems that your security team will actually approve.
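The two guardrails the session names, Just-In-Time credentials and read/write separation, can be sketched together in plain Python. Everything here is illustrative: `CredentialBroker` is a hypothetical stand-in for a secrets engine like OpenBao (no real OpenBao API is used), and the tool names are invented.

```python
import time
import uuid

class CredentialBroker:
    """Hypothetical stand-in for a secrets engine such as OpenBao:
    issues short-lived leases instead of static credentials."""
    def __init__(self):
        self.active = {}  # lease_id -> expiry timestamp

    def issue(self, ttl_seconds):
        lease_id = str(uuid.uuid4())
        self.active[lease_id] = time.monotonic() + ttl_seconds
        return lease_id

    def is_valid(self, lease_id):
        expiry = self.active.get(lease_id)
        return expiry is not None and time.monotonic() < expiry

    def revoke(self, lease_id):
        self.active.pop(lease_id, None)

# Read/write separation at the tool layer: writes need explicit opt-in.
READ_ONLY_TOOLS = {"get_pods", "read_logs"}

def run_tool(broker, tool, lease_id, allow_writes=False):
    if not broker.is_valid(lease_id):
        raise PermissionError("lease expired or revoked")
    if tool not in READ_ONLY_TOOLS and not allow_writes:
        raise PermissionError(f"{tool} is write-scoped; not permitted")
    try:
        return f"executed {tool}"   # placeholder for the real MCP tool call
    finally:
        broker.revoke(lease_id)     # credentials live only for this one call
```

Because the lease is revoked in `finally`, the agent holds permissions only for the duration of a single tool invocation, matching the JIT property described above.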
Speakers
Mithil Patel

Principal Engineer, SRE, Equinix
Principal Engineer at Equinix driving DevOps/SRE strategy for Interconnection organization managing global infrastructure serving Fortune 500 companies. 10+ years building resilient distributed systems and Kubernetes platforms at scale. Deep expertise in cloud-native architectures...
200A (Level Two)
  Open AI & Data
  • Audience Experience Level Any

2:10pm CDT

Building a Shared, Persistent Virtual Filesystem for WebAssembly - Ayako Hayasaka, LY Corporation
Wednesday May 20, 2026 2:10pm - 2:50pm CDT
Server-side WebAssembly applications need filesystem access, but current options are limited. Host filesystem access breaks portability and sandboxing. wasi-vfs is read-only and targets Preview 1. wasi-virt supports Preview 2 but remains read-only and single-application only.
We present a virtual filesystem built on WASI Preview 2 and the Component Model that supports read/write, multi-app sharing, dynamic attachment via RPC, and optional S3 persistence. The stack uses open-source tooling from the Bytecode Alliance: wasmtime, wac, and wit-bindgen.
The talk walks through our architecture: an inode-based in-memory filesystem exposed through custom adapters implementing wasi:filesystem, composed at build time with wac plug. We then separate the filesystem into a standalone server, add RPC for runtime attachment without recompilation, and layer S3 persistence for durability. Each stage is demonstrated live.
We close with lessons learned and tradeoffs between build-time composition and runtime RPC. No deep Wasm expertise is assumed. This talk is for developers building Wasm platforms, those exploring the Component Model, and anyone curious about filesystem virtualization in WebAssembly.
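The core data structure the talk describes, an inode-based in-memory filesystem, can be sketched in Python. This is a toy model of the idea, not the speakers' Rust/Wasm implementation, and the class names are invented; a real version would sit behind adapters implementing the `wasi:filesystem` interfaces.

```python
class Inode:
    """Minimal inode: either a file (holding bytes) or a directory
    (mapping names to child inodes)."""
    def __init__(self, is_dir):
        self.is_dir = is_dir
        self.data = {} if is_dir else b""

class MemFS:
    """Toy inode-based in-memory filesystem, illustrating the structure a
    wasi:filesystem adapter might expose over a shared backing store."""
    def __init__(self):
        self.root = Inode(is_dir=True)

    def _walk(self, path, create_dirs=False):
        # Resolve all but the last path component, returning (parent, name).
        node = self.root
        parts = [p for p in path.split("/") if p]
        for part in parts[:-1]:
            if part not in node.data:
                if not create_dirs:
                    raise FileNotFoundError(path)
                node.data[part] = Inode(is_dir=True)
            node = node.data[part]
        return node, (parts[-1] if parts else "")

    def write(self, path, content):
        parent, name = self._walk(path, create_dirs=True)
        f = parent.data.setdefault(name, Inode(is_dir=False))
        f.data = content

    def read(self, path):
        parent, name = self._walk(path)
        return parent.data[name].data
```

Separating this tree into its own server process, as the talk's later stages do, is what allows multiple Wasm components to attach to the same state via RPC.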
Speakers
Ayako Hayasaka

Software Engineer, LY Corporation
Primarily responsible for providing company-wide technical support in the area of web backend development.
200A (Level Two)
  Cloud + Orchestration

3:05pm CDT

Cache Me If You Can: Decentralize Your Distributed Caches With Hollow - Viswanathan Ranganathan, Independent
Wednesday May 20, 2026 3:05pm - 3:45pm CDT
Distributed caches are often used for scenarios that don't actually require them. For massive datasets (100s of GBs or more), distributed caches make sense: the data simply won't fit in a single node's memory. However, distributed caches tend to be overkill for smaller datasets (100s of MBs to 10s of GBs) that do fit in memory. Additionally, traditional in-memory caching libraries create operational challenges of their own, such as cache stampedes during TTL expiration, memory spikes during reloads, and long cold-start times that directly affect deployment velocity.

This talk proposes an alternate, unconventional view: What if we could decentralize our cache while centralizing its preparation? We'll discuss how dataset distribution using Hollow (an open-source project by Netflix) enables applications to serve data from local memory with microsecond access latency while staying perfectly synchronized via delta-based updates.

We'll cover:
- Design trade-offs that make this pattern ideal for GB-scale, read-heavy workloads.
- Delta-based updates that optimize cache reloads/refreshes.
- Zero-downtime updates applied in milliseconds without memory spikes.
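The consumer side of this pattern can be sketched in Python. This is a deliberately simplified illustration of delta-based dataset distribution in the style of Hollow, not Hollow's actual (Java) API; the class and field names are invented.

```python
class DeltaConsumer:
    """Sketch of a decentralized cache consumer: each application instance
    holds the full dataset in local memory (microsecond reads) and stays
    synchronized by applying small deltas produced centrally."""
    def __init__(self, snapshot):
        self.data = dict(snapshot)   # full local copy from the initial snapshot
        self.version = 0

    def apply_delta(self, delta):
        # A delta carries only what changed between two dataset versions,
        # so refreshes avoid full reloads and the memory spikes they cause.
        for key, value in delta.get("upserts", {}).items():
            self.data[key] = value
        for key in delta.get("removals", []):
            self.data.pop(key, None)
        self.version += 1

    def get(self, key):
        return self.data.get(key)    # pure local memory access, no network hop
```

Preparation stays centralized: one producer publishes the snapshot and deltas, and every consumer converges on the same version without a network round trip per read.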
Speakers
Viswanathan Ranganathan

Independent Software Practitioner
Viswanathan Ranganathan is a Senior Engineer at Netflix, where he's part of the Delivery Engineering team that powers every service deployment across the platform. His current focus is on building deployment safety and confidence features for Netflix's infrastructure. Previously...
200A (Level Two)
  Cloud + Orchestration
 