R&D
PIXERA: End-to-End 2110



    
In February 2025, we had the opportunity to be part of the ST 2110 End-to-End Showcase, hosted by PIXERA at their Santa Monica studio. Over two days, we joined forces with some of the most forward-thinking minds in media production to demonstrate how SMPTE ST 2110 is reshaping the future of live and interactive workflows.

The event brought together cutting-edge technology, real-time graphics, and a fully networked media system—proving that we are no longer bound by traditional pipelines. Instead, we’re stepping into a future where media flows like nature itself—dynamic, interconnected, and constantly evolving.

We were invited to contribute both a keynote presentation and a live demonstration that would challenge the way we think about production. Working alongside PIXERA, Panasonic Connect, Megapixel VR, ROE Visual, Netgear AV, Matrox Video, Creative Technology, and FUSE Technical Group, we set out to create something truly special.



It all began at the 2024 NAB Show, where we ran into a longtime friend who had recently joined the PIXERA team. They told us about PIXERA’s new studio in Santa Monica, built as a hub for innovation in real-time media workflows.

In the middle of the show floor, we found ourselves in an impromptu conversation with Conor McGill, Director of PIXERA USA. The discussion quickly turned into something bigger—how could PIXERA’s tools and our expertise in live, interactive workflows push the limits of SMPTE ST 2110? How do we bridge technology and creativity in a way that feels organic and alive?

We shared our past projects that experimented with AI, real-time rendering, and generative media, and the conversation naturally evolved into a bigger idea: What if we built something together? Something that not only showcased technical excellence but also created a new kind of interactive world?

A few months later, PIXERA reached out with the idea for the ST 2110 End-to-End Showcase, and we knew we had to be part of it.




For this showcase, we wanted to break away from traditional production workflows and instead craft a living, generative world—a space where digital elements evolved together like an ecosystem. The idea became known as The Exquisite Sammish, inspired by the Exquisite Corpse—a surrealist art technique where multiple artists contribute to a single work without seeing the full picture.

We designed a real-time generative nature system, where multiple software engines and media servers worked together as if they were part of the same organic world. Each system had a unique role, but they communicated seamlessly over SMPTE ST 2110, ensuring that every element—whether fluid simulation, environmental rendering, or real-time compositing—felt like part of a unified whole.

This entire experience was powered by a fully networked infrastructure, with contributions from our partners, including:
  • PIXERA media servers handling compositing and output
  • Panasonic’s Kairos for live production switching
  • Megapixel VR and ROE Visual for LED processing and display
  • Netgear AV and Matrox Video for high-bandwidth networking and conversion





  
At the heart of the Exquisite Sammish was a fully networked SMPTE ST 2110 infrastructure, designed to facilitate seamless, uncompressed video and data transmission across multiple processing nodes. The system integrated PIXERA media servers, Unreal Engine, TouchDesigner, Unity, and various hardware components to generate and composite a dynamic, real-time nature simulation. The entire workflow relied on a high-speed Netgear AV network backbone, which linked different processing units through ST 2110 multicast streams.
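
To make the transport layer a bit more concrete, here is a minimal Python sketch of how one processing node might subscribe to ST 2110 essence streams on the multicast network. The group addresses, port, and interface are illustrative placeholders, not the values used at the showcase; in a real deployment, stream discovery and routing are typically driven by SDP files and NMOS registration rather than hard-coded addresses.

```python
# Minimal sketch: one processing node joining ST 2110 multicast groups.
# All addresses, the port, and the interface below are illustrative
# placeholders, not the values used at the showcase.
import socket
import struct

GROUPS = {
    "video_2110_20": "239.10.10.1",   # hypothetical uncompressed video essence
    "audio_2110_30": "239.10.10.2",   # hypothetical PCM audio essence
}
PORT = 5004          # conventional RTP port, assumed here
IFACE = "0.0.0.0"    # listen on all interfaces

def open_receiver(group: str, port: int = PORT) -> socket.socket:
    """Join a multicast group and return a socket that yields raw RTP datagrams."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind((IFACE, port))
    # IGMP join: tells the switch fabric to forward this essence stream to us.
    mreq = struct.pack("4s4s", socket.inet_aton(group), socket.inet_aton(IFACE))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

if __name__ == "__main__":
    video = open_receiver(GROUPS["video_2110_20"])
    packet, sender = video.recvfrom(2048)   # one RTP packet per datagram
    print(f"received {len(packet)} bytes from {sender}")
```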

Creatively, we envisioned a nature-inspired world, where different digital elements—like water, air, and organic motion—functioned as a unified ecosystem, despite being generated by separate technologies. Just as in nature, where diverse systems interact seamlessly, our workflow blended real-time physics simulations, machine learning depth mapping, and high-end rendering engines into a singular, cohesive experience. A Panasonic UE160 PTZ camera captured live video over SMPTE ST 2110, which was then processed through a neural network for real-time depth mapping. This depth estimation pipeline allowed us to generate point clouds and fluid simulations in TouchDesigner, forming the foundation of this living, breathing environment.
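
As a rough illustration of that depth step, the sketch below runs a single frame through an off-the-shelf monocular depth model (MiDaS, used here purely as a stand-in; the network used at the showcase is not specified) and back-projects the result into a camera-space point cloud. The focal lengths and principal point are assumed values rather than the UE160's actual calibration, and MiDaS returns relative rather than metric depth.

```python
# Illustrative sketch of the depth-mapping step: estimate depth from one
# camera frame with a monocular network, then back-project to a point cloud.
# MiDaS is a stand-in model; fx/fy and the principal point are assumed values.
import cv2
import numpy as np
import torch

midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

def depth_to_point_cloud(frame_bgr: np.ndarray,
                         fx: float = 1000.0, fy: float = 1000.0) -> np.ndarray:
    """Return an (N, 3) array of camera-space points from one BGR frame."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        pred = midas(transform(rgb))            # relative inverse depth, (1, H', W')
        depth = torch.nn.functional.interpolate(
            pred.unsqueeze(1), size=rgb.shape[:2],
            mode="bicubic", align_corners=False,
        ).squeeze().cpu().numpy()
    h, w = depth.shape
    cx, cy = w / 2.0, h / 2.0                   # assumed principal point
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = 1.0 / np.maximum(depth, 1e-6)           # invert to a pseudo-metric depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```

In the actual workflow, this kind of per-frame result is what fed the point clouds and fluid simulations in TouchDesigner.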

To handle the computational load, fluid simulations—such as a cascading waterfall—were processed on a dedicated PIXERA PX4 server, while other aspects of the scene, including real-time rendering in Unreal Engine and Unity, ran on separate PIXERA PX4 servers, ensuring smooth compositing and interactivity. A fourth PIXERA server (PX2) was dedicated to UI control and system management, allowing for flexible adjustments and live interaction with the generative environment.

One of the most visually striking elements was the waterfall, driven by a particle simulation that dynamically revealed the Unreal Engine-generated environment behind it. As particles moved and dispersed, the waterfall acted as a natural transition layer, showcasing the depth and richness of the Unreal world in a way that mimicked the behavior of real water refracting light and revealing hidden layers beneath. This seamless blending of physics-based simulation and real-time rendering was a testament to the power of networked media workflows in creating immersive experiences.
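
A heavily simplified sketch of that reveal mechanic is below: a field of particles falls under gravity, and local particle density becomes an alpha mask, so the rendered scene behind it shows through wherever the water thins out. The real simulation on the PX4 was far more sophisticated; the resolution, particle count, and constants here are purely illustrative.

```python
# Toy sketch of the waterfall-as-transition idea: a 2D particle system whose
# local density becomes an alpha mask, letting the background (rendered) layer
# show through where the "water" has dispersed. All parameters are illustrative.
import numpy as np

W, H = 320, 180            # mask resolution (illustrative)
N = 20_000                 # particle count (illustrative)
rng = np.random.default_rng(7)

pos = np.column_stack([rng.uniform(0, W, N), np.zeros(N)])   # spawn along the top edge
vel = np.column_stack([rng.normal(0, 0.3, N), rng.uniform(1.0, 3.0, N)])
GRAVITY = 0.15
SPREAD = 0.05

def step(pos: np.ndarray, vel: np.ndarray) -> None:
    """Advance particles one frame: fall, jitter sideways, respawn at the top."""
    vel[:, 1] += GRAVITY
    vel[:, 0] += rng.normal(0, SPREAD, len(vel))
    pos += vel
    fallen = pos[:, 1] >= H
    pos[fallen, 1] = 0.0
    vel[fallen, 1] = rng.uniform(1.0, 3.0, fallen.sum())

def reveal_mask(pos: np.ndarray) -> np.ndarray:
    """Mask in [0, 1]: dense water occludes, sparse water reveals the scene behind."""
    hist, _, _ = np.histogram2d(pos[:, 1], pos[:, 0],
                                bins=(H, W), range=[[0, H], [0, W]])
    density = hist / hist.max() if hist.max() > 0 else hist
    return 1.0 - np.clip(density * 4.0, 0.0, 1.0)   # 1 = fully revealed background

for _ in range(120):        # simulate a couple of seconds of motion
    step(pos, vel)
mask = reveal_mask(pos)
print(mask.shape, float(mask.mean()))
```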



The processed data was then sent to PIXERA’s compositing engine, where it was combined with real-time rendering from Unreal Engine to generate a fully immersive scene. This composited image was encoded as an ST 2110 stream and distributed to multiple outputs, including a ROE LED wall powered by a Megapixel VR processor and a projection system from AV Stumpfl. Meanwhile, Panasonic’s Kairos served as the event’s live production core, integrating camera feeds and additional media for real-time switching and output.
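
For a sense of what that distribution looks like at the packet level, the sketch below splits a composited frame into RTP datagrams addressed to a multicast group. It only illustrates the RTP header layout and multicast addressing: a compliant ST 2110-20 sender uses the RFC 4175 pixel-group payload format and tightly paced packet timing, and the address, port, and payload type shown are placeholders.

```python
# Hedged sketch of the distribution step: packetizing a composited frame into
# RTP datagrams sent to a multicast group. Address, port, and payload type are
# placeholders; real ST 2110-20 senders also implement RFC 4175 payloads and
# hardware-timed pacing, which this sketch omits.
import socket
import struct

GROUP, PORT = "239.20.20.1", 5004      # hypothetical output multicast group
PAYLOAD_TYPE = 96                      # dynamic payload type, assumed
SSRC = 0x53414D49                      # arbitrary stream identifier

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 16)

def send_frame(frame: bytes, seq: int, timestamp: int, mtu_payload: int = 1400) -> int:
    """Split one frame into RTP packets; the marker bit flags the last packet."""
    chunks = [frame[i:i + mtu_payload] for i in range(0, len(frame), mtu_payload)]
    for i, chunk in enumerate(chunks):
        marker = 1 if i == len(chunks) - 1 else 0
        header = struct.pack("!BBHII",
                             0x80,                           # V=2, no padding/extension
                             (marker << 7) | PAYLOAD_TYPE,
                             seq & 0xFFFF,
                             timestamp & 0xFFFFFFFF,
                             SSRC)
        sock.sendto(header + chunk, (GROUP, PORT))
        seq += 1
    return seq
```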

By leveraging ST 2110 for both video and data transport, the system enabled low-latency, high-fidelity media processing, allowing multiple tools and render engines to collaborate in real time without signal degradation. This demonstrated how networked workflows can seamlessly integrate AI-driven image processing, live video, and high-end visual effects in a unified, IP-based production environment—bridging the gap between digital and organic worlds.

We want to extend our deepest thanks to PIXERA and the entire AV Stumpfl team—especially Conor McGill—for inviting us to be part of this showcase and trusting us to bring something truly experimental to life.

To our partners—Panasonic Connect, Megapixel VR, ROE Visual, Netgear AV, Matrox Video, Creative Technology, and FUSE Technical Group—thank you for bringing your expertise, your technology, and your willingness to collaborate. And to everyone who attended, engaged, and explored The Exquisite Sammish with us—thank you. This is just the beginning of something bigger, and we can’t wait to explore what’s next.

Until next time.



