Open Source Summit + Embedded Linux Conference North America...
May 18-20, 2026
Minneapolis, MN
Note: The schedule is subject to change.

The Sched app allows you to build your schedule but is not a substitute for your event registration. You must be registered for Open Source Summit North America 2026 to participate in the sessions. If you have not registered but would like to join us, please go to the event registration page to purchase a registration.

This schedule is automatically displayed in Central Daylight Time (UTC-5). To see the schedule in your preferred timezone, please select from the drop-down menu to the right, above "Filter by Date."

IMPORTANT NOTE: Timing of sessions and room locations are subject to change.


Venue: 211A+B (Level Two)
Monday, May 18
 

11:20am CDT

MOT: A Tool To Fight Open-washing in AI - Arnaud Le Hors, IBM
Monday May 18, 2026 11:20am - 12:00pm CDT
Many models referred to as "open source" are distributed under restrictive licenses and fail to include the information necessary to actually qualify as open source. Just because a model is on Hugging Face does not mean it is open source.

Several attempts have been made to provide a definition of what "open source AI" ought to be but we now have a tool that can help: the Model Openness Tool (MOT).

The MOT was developed by the Generative AI Commons as an implementation of the Model Openness Framework (MOF) to provide model producers and consumers with a practical way to assess how open a model really is. This session will introduce attendees to the MOT and include a demo showing how it can be used along with Hugging Face and GitHub to provide greater understanding of which models are really open.
Speakers

Arnaud Le Hors

Senior Technical Staff Member, IBM
Arnaud Le Hors is Senior Technical Staff Member of Open Technologies at IBM. He has been working on standards and open source for over 30 years. Arnaud was editor of several key web specifications including HTML and DOM and was a pioneer of open source with the release of libXpm in...
  Open AI & Data

1:30pm CDT

Crawl, Walk, Run With Your MCP Servers - Lin Sun, Solo.io
Monday May 18, 2026 1:30pm - 2:10pm CDT
You have built your first MCP server and tested it with the MCP inspector, but it only uses stdio or streamable HTTP without HTTPS. Do you rewrite your server to add authentication and authorization, or is there a smarter way? What if you have multiple MCP servers? Can you unify them under a single virtual server without touching any of the originals? How do you deploy all of this to Kubernetes securely and reliably?

In this demo-driven session, Lin starts by building a simple MCP server and securing it the hard way. Then she offloads authentication, authorization, and tool multiplexing to an MCP gateway. She will show how to deploy a virtual MCP server in Kubernetes and program an AI agent to call its tools, making complex setups feel effortless. By the end, you will have practical techniques to run, secure, and scale your MCP servers with confidence.
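The "single virtual server" idea can be sketched in plain Python. This is an illustration only: the `Backend` and `VirtualServer` classes and the namespacing scheme are hypothetical stand-ins, not part of any MCP SDK or gateway. The gateway merges tool lists from several backends under namespaced names and routes each call to the backend that owns the tool, so the original servers stay untouched.

```python
# Illustrative sketch of tool multiplexing behind a "virtual" MCP server.
# Backends, tool names, and the routing scheme are hypothetical, not a
# real MCP API.

class Backend:
    """A stand-in for one MCP server exposing a set of tools."""

    def __init__(self, name, tools):
        self.name = name
        self.tools = tools  # tool name -> callable

    def call(self, tool, **kwargs):
        return self.tools[tool](**kwargs)


class VirtualServer:
    """Merges tools from several backends under namespaced names."""

    def __init__(self, backends):
        # e.g. "weather/get_forecast" -> (weather backend, "get_forecast")
        self.routes = {
            f"{b.name}/{tool}": (b, tool)
            for b in backends
            for tool in b.tools
        }

    def list_tools(self):
        return sorted(self.routes)

    def call_tool(self, qualified_name, **kwargs):
        backend, tool = self.routes[qualified_name]
        return backend.call(tool, **kwargs)


weather = Backend("weather", {"get_forecast": lambda city: f"sunny in {city}"})
files = Backend("files", {"read": lambda path: f"contents of {path}"})

vs = VirtualServer([weather, files])
print(vs.list_tools())                                    # ['files/read', 'weather/get_forecast']
print(vs.call_tool("weather/get_forecast", city="Oslo"))  # sunny in Oslo
```

In a real deployment the gateway would also terminate TLS and enforce authentication and authorization in front of this routing table, which is exactly the offloading the session describes.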
Speakers

Lin Sun

Head of Open Source, Solo.io
Lin is the Head of Open Source at Solo.io, contributing full-time to the open-source community. She serves on the CNCF Technical Oversight Committee (TOC), is a CNCF Ambassador, and is a maintainer for Istio, kgateway, and kagent. An international speaker at tech conferences, Lin...
  Open AI & Data

2:25pm CDT

From Image To Itinerary: Multimodal Agentic Travel Planning With MCP, A2A, and BeeAI - Ezequiel Lanza, Intel
Monday May 18, 2026 2:25pm - 3:05pm CDT
Planning a trip is a deceptively complex problem for AI, especially when the journey starts from visual context rather than text. In this session, we present a multimodal-first, local-first agentic architecture where a user uploads an image (e.g. “where is this place?”), and the system builds a travel plan from that visual input using Model Context Protocol (MCP), A2A (Agent-to-Agent), and BeeAI — all running fully locally without cloud dependencies.

The system employs a router and specialist agent pattern, where dedicated agents handle image understanding, hotel search, and flight search, each backed by MCP servers. A multimodal model extracts meaning from the image, after which the router decomposes the task and delegates work through A2A to the appropriate specialists.
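The router-and-specialist pattern above can be sketched in a few lines of plain Python. Everything here is a hypothetical placeholder: the agent functions stand in for the multimodal model and the MCP-backed hotel and flight specialists, and the dictionary dispatch stands in for A2A delegation.

```python
# Sketch of the router-and-specialist agent pattern. The agents and the
# task decomposition are hypothetical stand-ins for the multimodal model,
# A2A delegation, and MCP-backed specialists described in the session.

def image_agent(upload):
    # Stand-in for multimodal image understanding.
    return {"landmark": "Eiffel Tower", "city": "Paris"}

def hotel_agent(context):
    return {"hotel": f"hotel near {context['landmark']}"}

def flight_agent(context):
    return {"flight": f"flight to {context['city']}"}

SPECIALISTS = {"image": image_agent, "hotel": hotel_agent, "flight": flight_agent}

def router(image_upload):
    # Step 1: the image specialist extracts meaning from the upload.
    context = SPECIALISTS["image"](image_upload)
    # Step 2: decompose the trip into subtasks and delegate each one.
    plan = dict(context)
    for subtask in ("hotel", "flight"):
        plan.update(SPECIALISTS[subtask](context))
    return plan

plan = router("photo.jpg")
print(plan["flight"])  # flight to Paris
```

The point of the sketch is the shape of the control flow: the router never talks to hotels or flights directly, it only dispatches to specialists through a stable interface, which is the role MCP and A2A play in the real system.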

We will walk through how BeeAI manages agent lifecycles, how A2A enables explicit agent collaboration, and how MCP acts as a stable contract layer between reasoning and real-world capabilities. The focus is on practical architecture, configuration, and lessons learned, showing how to build MCP-centric, multimodal systems that remain extensible, reproducible, and maintainable as new agents and tools are added.
Speakers

Ezequiel Lanza

AI Software Evangelist, Intel
Passionate about helping people discover the exciting world of artificial intelligence, Ezequiel is a frequent AI conference presenter and the creator of use cases, tutorials, and guides that help developers adopt open source AI tools.
  Open AI & Data

3:35pm CDT

Who You Gonna Call? Taming OpenClaw's Rogue AI Agents With OpenTelemetry and Tetragon - Henrik Rexed, Dynatrace
Monday May 18, 2026 3:35pm - 4:15pm CDT
There's something strange in your infrastructure. Who you gonna call?
OpenClaw, the open source AI agent formerly known as Clawdbot and then Moltbot, exploded past 150,000 GitHub stars in weeks. It connects LLMs to your messaging platforms, terminal, and file system, giving AI full autonomous control. But like a Ghostbusters ghost, it wreaks havoc: $20 in tokens burned overnight to check the time, a one-click RCE (CVE-2026-25253), 21,000 exposed instances, and 341 malicious skills in the marketplace.
I will strap on my proton pack to bust these ghosts with open source tools:
- First, the OpenClaw Observability Plugin: an OpenTelemetry-based plugin capturing full agent lifecycle traces (request → agent turn → tool calls) with per-tool timing, token breakdowns, and error tracking. Your PKE meter for rogue AI.
- Then, Tetragon: eBPF-powered kernel-level policies restricting file access, network connections, and process execution. The containment unit no prompt injection can escape.

A live demo ties it all together: OpenClaw + observability plugin + Tetragon, with traces and security events flowing into one dashboard.
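The request → agent turn → tool calls hierarchy maps naturally onto nested spans. A minimal stand-in in plain Python (this is not the plugin's real schema or the OpenTelemetry SDK; the `Span` class and the token numbers are invented for illustration) shows how per-tool timing and token counts roll up a trace tree:

```python
# Toy trace tree mirroring the request -> agent turn -> tool call
# hierarchy. A stand-in for OpenTelemetry spans, not the plugin's schema.

class Span:
    def __init__(self, name, duration_ms=0, tokens=0):
        self.name = name
        self.duration_ms = duration_ms
        self.tokens = tokens
        self.children = []

    def child(self, name, duration_ms=0, tokens=0):
        span = Span(name, duration_ms, tokens)
        self.children.append(span)
        return span

    def total_tokens(self):
        # Token breakdowns roll up from tool calls to the request.
        return self.tokens + sum(c.total_tokens() for c in self.children)


request = Span("request")
turn = request.child("agent_turn", tokens=120)           # prompt + reasoning
turn.child("tool:read_file", duration_ms=12, tokens=300)
turn.child("tool:send_message", duration_ms=45, tokens=80)

print(request.total_tokens())  # 500
```

Rolling token counts up the tree like this is what makes a "$20 burned overnight to check the time" pattern visible: the per-tool breakdown shows exactly which span consumed the budget.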
We came, we saw, we traced it.
Speakers

Henrik Rexed

Cloud Native advocate & CNCF Ambassador, Dynatrace
Henrik is a Cloud Native Advocate at Dynatrace and a CNCF Ambassador. Prior to Dynatrace, he worked for more than 15 years as a performance engineer. He is also one of the organizers of the WOPR and KCD Austria conferences and the owner of the YouTube channel Isit...
  Open AI & Data

4:30pm CDT

Scaling LLM Inference With Tiered Caching: Extending LMCache With Amazon SageMaker HyperPod - Yihua Cheng, Tensormesh, Inc. & Ziwen Ning
Monday May 18, 2026 4:30pm - 5:10pm CDT
LMCache supports tiered KV caching with CPU memory offloading, extending inference beyond GPU memory limits. But what happens when local CPU memory isn't enough? This session introduces the next tier: offloading KV cache to Amazon SageMaker HyperPod managed storage, expanding cache capacity for large-scale LLM inference.

We'll cover the technical design of the SageMaker HyperPod connector contribution to LMCache. Hot entries stay in GPU memory, warm entries spill to CPU memory, and cold entries persist to HyperPod's managed storage. This three-tier architecture lets organizations cache far more context than local resources allow, reducing redundant computation for repeated prompts and long-context scenarios.
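The hot/warm/cold tiering described above can be sketched with two LRU tiers that spill into a remote store. This is a plain-Python illustration only: the capacities, the LRU policy, and the dict-backed "storage" are invented for the sketch, and the real LMCache/HyperPod connector is far more involved (async prefetching, failure handling, serialization).

```python
# Sketch of hot (GPU) / warm (CPU) / cold (managed storage) KV-cache
# tiering. Capacities and eviction policy are illustrative assumptions.
from collections import OrderedDict

class TieredKVCache:
    def __init__(self, gpu_slots, cpu_slots):
        self.gpu = OrderedDict()   # hot tier, LRU order
        self.cpu = OrderedDict()   # warm tier, LRU order
        self.remote = {}           # cold tier (stands in for HyperPod storage)
        self.gpu_slots, self.cpu_slots = gpu_slots, cpu_slots

    def put(self, key, kv):
        self.gpu[key] = kv
        self.gpu.move_to_end(key)
        if len(self.gpu) > self.gpu_slots:          # spill GPU -> CPU
            k, v = self.gpu.popitem(last=False)
            self.cpu[k] = v
            if len(self.cpu) > self.cpu_slots:      # spill CPU -> remote
                k, v = self.cpu.popitem(last=False)
                self.remote[k] = v

    def get(self, key):
        for tier in (self.gpu, self.cpu, self.remote):
            if key in tier:
                kv = tier.pop(key)
                self.put(key, kv)   # promote back to the hot tier
                return kv
        return None                 # miss: the prefill must be recomputed


cache = TieredKVCache(gpu_slots=2, cpu_slots=2)
for i in range(5):
    cache.put(f"prompt-{i}", f"kv-{i}")
print(sorted(cache.remote))  # ['prompt-0']
```

The payoff of the third tier is in the `get` path: a "cold" hit still avoids recomputing the prefill, trading some retrieval latency for saved GPU compute, which is the trade-off the session quantifies.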

The session demonstrates the integration in action, showing cache hit rates, latency across tiers, and how the connector handles transitions between local and remote storage. We'll discuss key engineering decisions, including async prefetching and failure handling.

Attendees will leave with practical knowledge of how managed cloud storage can extend open source caching frameworks for LLM inference infrastructure.
Speakers

Yihua Cheng

CTO, Tensormesh, Inc.
Yihua Cheng is co-founder and CTO of Tensormesh. He has a deep background in large language models, high-performance computing, and open-source development.
Yihua created LMCache and the vLLM production stack, open-source projects that have collectively earned over 9,000 GitHub...

Ziwen Ning

Open Source Contributor
Ziwen Ning is an open-source contributor to LMCache. He was previously a Senior Software Development Engineer at AWS, working on Amazon SageMaker HyperPod with a focus on building scalable ML infrastructure. Before that at Annapurna Labs, he enhanced the AI/ML experience through the...
  Open AI & Data

5:25pm CDT

Beyond Vector Search: Building Knowledge Graphs for Autonomous Infrastructure - Torsten Boettjer, Rescile
Monday May 18, 2026 5:25pm - 6:05pm CDT
Modern platform engineering has a 'context' problem. As infrastructure scales across Kubernetes, hybrid clouds, and internal developer platforms (IDPs) like Backstage, traditional RAG systems struggle to answer multi-hop queries like 'Which services depend on this failing database?' or 'What is the blast radius of this IAM change?'
In this session, we explore how GraphRAG—a combination of Knowledge Graphs and LLMs—solves the reasoning gap that vector-only search leaves behind. We will demonstrate how to index infrastructure as a graph of entities and relationships, allowing AI agents to perform complex root-cause analysis and automate documentation. Attendees will leave with a blueprint for building an open-source GraphRAG pipeline to turn platform data into actionable intelligence.
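A multi-hop query like "what is the blast radius of this failing database?" reduces to a reverse traversal over the dependency graph. A minimal sketch in plain Python (the services and edges are hypothetical; a real pipeline would extract entities and relationships from Kubernetes, cloud APIs, or a Backstage catalog, and a graph database would replace the dict):

```python
# Toy dependency graph for multi-hop infrastructure queries. Entities and
# edges are hypothetical placeholders for an extracted knowledge graph.
from collections import deque

# edges: service -> the things it depends on
DEPENDS_ON = {
    "checkout": ["payments-db", "auth"],
    "payments": ["payments-db"],
    "auth": ["users-db"],
    "frontend": ["checkout"],
}

def blast_radius(failing):
    """All services transitively depending on `failing` (reverse multi-hop)."""
    reverse = {}
    for svc, deps in DEPENDS_ON.items():
        for dep in deps:
            reverse.setdefault(dep, []).append(svc)
    seen, queue = set(), deque([failing])
    while queue:
        node = queue.popleft()
        for dependent in reverse.get(node, []):
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return sorted(seen)

print(blast_radius("payments-db"))  # ['checkout', 'frontend', 'payments']
```

This is the step vector-only retrieval cannot do: no single document mentions that `frontend` is affected by `payments-db`, the answer only exists as a path through the graph.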
Speakers

Torsten Boettjer

Co-Founder, Rescile
Co-Founder at Rescile with 20 years of experience in platform engineering; formerly CCIO at Avaloq, CTO at Cisco, Head of Innovation at Swisscom, and Product Management at Oracle Cloud Infrastructure.
  Open AI & Data
 