Open Source Summit + Embedded Linux Conference North America
May 18-20, 2026
Minneapolis, MN
Note: Session times and room locations are subject to change.

The Sched app allows you to build your schedule but is not a substitute for your event registration. You must be registered for Open Source Summit North America 2026 to participate in the sessions. If you have not yet registered but would like to join us, please purchase a registration on the event registration page.

This schedule is automatically displayed in Central Daylight Time (UTC-5). To see the schedule in your preferred timezone, please select it from the drop-down menu to the right, above "Filter by Date."



Monday, May 18
 

11:20am CDT

QoD-Centric NaaS Strategy: Policy-Orchestrated Multi-Access Service - Daniel Kibler, EIS Visual & Niem Dang, NHD Consulting LLC
Monday May 18, 2026 11:20am - 12:00pm CDT
Delivering predictable, high‑quality network services in dense, multi‑access edge environments remains a central challenge for operators pursuing a Network‑as‑a‑Service (NaaS) strategy, where programmable APIs expose network capabilities as on‑demand services. Quality‑on‑Demand (QoD) APIs act as the intent interface in this model, enabling applications to request session‑level performance characteristics. QoD requires sophisticated new network control‑plane orchestration to address heterogeneous enforcement, multi‑access behavior, and session continuity issues across Wi‑Fi, private 5G, and public 5G domains.

QoD refers specifically to the CAMARA Project’s implementation, which provides standardized APIs for dynamic multi‑access orchestration, session‑level QoS enforcement, and integration with 3GPP control‑plane functions, including PCF, SMF, NEF, and NWDAF together with UE‑side ATSSS for traffic steering, switching, and splitting.

In this session, we will survey the strategic importance of QoD APIs and the global scale of the emerging NaaS domain, and detail the open source technologies that are foundational to the industry.
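To make the "intent interface" idea concrete, the sketch below shows the general shape of a session-level QoD request as defined by CAMARA-style APIs. The base URL, API version path, token, and profile name are placeholders, not values from this session or any real operator, and field names may differ between CAMARA API versions.

```python
import json
import urllib.request

# Placeholder endpoint and credentials (hypothetical operator):
BASE_URL = "https://api.example-operator.com/quality-on-demand/v1"
TOKEN = "REPLACE_WITH_OAUTH_TOKEN"

def build_qod_session_request(device_ip: str, app_server_ip: str,
                              qos_profile: str, duration_s: int) -> dict:
    """Assemble a session-creation payload in the general shape used by
    CAMARA QoD APIs: which device, which application server, what QoS
    profile, and for how long."""
    return {
        "device": {"ipv4Address": {"publicAddress": device_ip}},
        "applicationServer": {"ipv4Address": app_server_ip},
        "qosProfile": qos_profile,
        "duration": duration_s,
    }

def create_qod_session(payload: dict) -> bytes:
    """POST the session request to the operator's QoD endpoint.
    Behind this call, the operator's control plane (PCF/SMF/NEF, and
    ATSSS on the UE side) enforces the requested quality."""
    req = urllib.request.Request(
        f"{BASE_URL}/sessions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {TOKEN}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:  # network call; sketch only
        return resp.read()
```

The application expresses only intent (profile and duration); how that intent is enforced across Wi-Fi, private 5G, and public 5G is the orchestration problem the session describes.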
Speakers

Daniel Kibler

Senior Systems Engineer and Founder, EIS Visual
Principal-level engineer and founder of EIS Visual, Daniel designs and operates large-scale distributed platforms across communications, 5G, edge networks, and high-performance compute. He bridges architecture, execution, and operations to deliver measurable business impact. A former...

Niem Dang

Founder & Principal Consultant, NHD Consulting LLC
Industry-recognized technology and thought leader with 20+ years of ground-breaking patents and accomplishments in the cable industry. Passion for delivering challenging projects through mastery of planning, strategy, technology enablement, and innovation.
Monday May 18, 2026 11:20am - 12:00pm CDT
200F (Level Two)
  Cloud + Orchestration

5:25pm CDT

Verified Debian Packaging at Scale - Frederick Lawler, Cloudflare
Monday May 18, 2026 5:25pm - 6:05pm CDT
Cloudflare’s global network relies on Debian Linux machines across 330+ cities. To enhance production security, we wanted to ensure that our servers can only run authorized software. For this we leverage the Linux kernel's IMA measurement and appraisal to validate binary signatures before execution. Our system encompasses first-party software, Docker containers, and open source Debian packages.

This talk illustrates how we successfully injected digital signatures into every Debian package installed on our fleet. This involved deep dives into the Linux Kernel, modifying dpkg, and building a mirroring system that could sign upstream repositories. Learn about our journey enhancing software integrity on a massive scale. This session is ideal for those interested in Linux security, package management, and Internet-scale system administration.
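As background for the approach the talk describes, IMA appraisal works on per-file digests: each file a policy covers gets a digest that is signed and stored (in the security.ima extended attribute), and the kernel verifies it before execution. The sketch below computes such digests and builds a per-package manifest; it is illustrative only, and `manifest_for_package` is a hypothetical helper, not part of Cloudflare's actual pipeline (which signs with ima-evm-utils tooling during repository mirroring).

```python
import hashlib
from pathlib import Path

def ima_style_digest(path: Path, algo: str = "sha256") -> str:
    """Compute the kind of file digest an IMA policy measures and
    appraises. Streams the file in chunks so large binaries do not
    need to fit in memory."""
    h = hashlib.new(algo)
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return f"{algo}:{h.hexdigest()}"

def manifest_for_package(file_paths) -> dict:
    """Hypothetical helper: map each file shipped by a package to its
    digest - the data a repository-side signer would sign and inject
    into a rebuilt .deb."""
    return {str(p): ima_style_digest(Path(p)) for p in file_paths}
```

Signing these digests upstream, at mirror time, is what lets every machine in the fleet verify packages locally without trusting the transport.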
Speakers

Frederick Lawler

Systems Engineer, Cloudflare
Fred is a backend web developer turned kernel developer. He previously focused on the PCIe subsystem since 2018 as a hobbyist. Now he works for Cloudflare on the Linux team with a focus on securing systems and production reliability.
Monday May 18, 2026 5:25pm - 6:05pm CDT
200G (Level Two)
  Packages + Images + Containers
 
Tuesday, May 19
 

4:20pm CDT

The Hidden Cost of Sleep: How Scheduler Wakeup Latency Impacts High-Throughput AI Inference - Shubhang Kaushik, Ampere Computing
Tuesday May 19, 2026 4:20pm - 5:00pm CDT
As a Linux kernel developer at Ampere Computing, I focus on optimizing the scheduler for high-density ARM64 systems. My work culminated in a patch merged for the Linux 7.0 release that refines avg_idle tracking, a critical metric the scheduler uses to decide how long to search for an idle CPU before giving up. In my session "The Hidden Cost of Sleep", I will break down the try_to_wake_up() path to show how even minor inaccuracies in idle-time accounting lead to poor CPU selection and increased cache misses. I’ll explain how my Linux 7.0 optimizations [commit 36ae1c45b2cede] specifically reduce the 'search cost' during wakeups, directly improving the responsiveness of AI inference workloads. By sharing raw performance data and trace analysis, I’ll demonstrate why getting the wakeup path right is the only way to achieve the deterministic performance needed for autonomous AI agents and scalable trust infrastructure.
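The trade-off the abstract describes can be modeled very simply: scanning for an idle CPU only pays off when CPUs actually stay idle longer than the scan costs. The toy model below illustrates that heuristic and the exponentially weighted moving average scheme the scheduler uses for quantities like avg_idle; the function names, the weight, and the exact comparison are illustrative, not the kernel's code.

```python
def should_scan_for_idle_cpu(avg_idle_ns: float, avg_scan_cost_ns: float) -> bool:
    """Toy model of the wakeup-path decision: if the average idle
    period on this domain is shorter than the average cost of a full
    idle-CPU scan, the scan is likely wasted work and the scheduler
    should place the task without searching."""
    return avg_idle_ns > avg_scan_cost_ns

def ewma_update(old: float, sample: float, weight: float = 0.125) -> float:
    """Exponentially weighted moving average - the general scheme the
    scheduler uses to track running quantities such as avg_idle, so a
    single outlier sample does not swing the estimate."""
    return old + weight * (sample - old)
```

An inflated avg_idle makes `should_scan_for_idle_cpu` return True too often, burning wakeup latency on fruitless scans; an underestimate skips searches that would have found a genuinely idle CPU. That is the accounting inaccuracy the talk ties to cache misses and poor CPU selection.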
Speakers

Shubhang Kaushik

Software Engineer, Ampere Computing
Linux Kernel Developer
Tuesday May 19, 2026 4:20pm - 5:00pm CDT
205C+D (Level Two)
  Linux

4:20pm CDT

Headroom: A Context Optimization Layer for LLM Applications - Tejas Chopra, Netflix, Inc.
Tuesday May 19, 2026 4:20pm - 5:00pm CDT
LLM tokens are expensive. With context windows expanding to 200K+ tokens, a single API call can cost several dollars, and in production systems handling thousands of requests, these costs compound quickly.
Most optimization efforts focus on model selection or prompt engineering, but the context itself often contains massive redundancy.

Headroom is an open-source Python library (https://github.com/chopratejas/headroom) that sits between your application and your LLM provider, transparently optimizing context before it reaches the model.
The core insight is simple: LLM contexts—especially in agentic workflows—are filled with repetitive tool outputs, verbose JSON arrays, and boilerplate that consumes tokens without adding proportional value.

Headroom introduces novel concepts such as reversible compression, cache aligners, compression routers, and even persistent memory.

Real-world results:
- 50-90% token reduction on typical agentic workloads
- Drop-in integrations for LangChain, OpenAI, Anthropic, and any OpenAI-compatible provider
- Zero code changes required when using the proxy server
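To illustrate what "reversible compression" of redundant context can look like, here is a deliberately simple sketch: repeated message bodies are replaced with short reference tokens that can be expanded back losslessly. This is a toy in the spirit of the abstract, not Headroom's actual algorithm, and the `<ref:…>` token format is invented for this example.

```python
import hashlib

def compress_context(messages):
    """Replace repeated message bodies (e.g. identical tool outputs in
    an agentic loop) with short reference tokens pointing at the first
    occurrence. Returns the compressed list plus the lookup table
    needed to reverse the transform."""
    seen = {}            # body -> reference token
    out, table = [], {}
    for body in messages:
        if body in seen:
            out.append(seen[body])
        else:
            ref = f"<ref:{hashlib.sha1(body.encode()).hexdigest()[:8]}>"
            seen[body] = ref
            table[ref] = body
            out.append(body)
    return out, table

def decompress_context(out, table):
    """Inverse transform: expand reference tokens back to the original
    bodies, leaving first occurrences untouched."""
    return [table.get(item, item) for item in out]
```

Because the transform is reversible, the full context can always be reconstructed for logging or debugging; only the tokens actually sent to the model are reduced.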
Speakers

Tejas Chopra

Sr. Engineer, Netflix, Inc.
Tejas Chopra is a senior ML and AI infrastructure engineer at Netflix, where he builds large-scale systems for production AI and data platforms. He is the creator of Headroom, an open-source context optimization engine for LLMs, and a frequent speaker at global conferences on ML systems...
Tuesday May 19, 2026 4:20pm - 5:00pm CDT
211A+B (Level Two)
  Open AI & Data
 