Throughout the project, technological components and software plug-ins have been developed and used for the different proof-of-principle and real-world scenarios. On this webpage, we make some of them publicly available so that others interested in developing similar works or research can benefit from already existing infrastructures.
SHARESPACE for Art
At the core of each artwork developed for the SHARESPACE project lie two basic plug-ins developed by the Ars Electronica Futurelab: the Deep Space Starter Kit and the pharus tracking system.
Deep Space Starter Kit
The Deep Space 8K is an immersive room at the Ars Electronica Center that allows for stereoscopic 8K projection on a 16 m × 9 m wall and floor. All artworks for the SHARESPACE project have been developed for this space. The Deep Space Starter Kit is an Unreal Engine template that includes all configurations needed to quickly start a new project for this specific space. This plug-in is compatible with Unreal Engine 5.7.
You can find a link to the starter kit here: https://github.com/ArsElectronicaFuturelab/UE-DeepSpace-Starter


pharus
The pharus tracking system was developed by researcher and artist Otto Naderer of the Ars Electronica Futurelab. It tracks the locations of objects, people, or groups on the Deep Space floor, making it possible to build interactive Deep Space 8K works driven by precise tracking information. This plug-in is compatible with Unreal Engine 5.7.
You can find the link to the plug-in here: https://github.com/ArsElectronicaFuturelab/UE-DeepSpace-PharusLasertracking
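As a minimal illustration of how such tracking data is typically consumed, the sketch below maps a tracked floor position in metres to normalized coordinates for the floor projection. The floor dimensions come from the Deep Space 8K description above; the origin convention and function names are assumptions for illustration, not pharus's actual output format:

```python
# Deep Space floor dimensions in metres (from the space description above).
FLOOR_W, FLOOR_H = 16.0, 9.0

def floor_to_uv(x: float, y: float) -> tuple[float, float]:
    """Map a tracked floor position (metres, origin assumed at one
    corner) to normalized [0, 1] UV coordinates, e.g. for placing a
    visual effect in the floor projection under a tracked person.
    Check the plug-in itself for the real coordinate convention."""
    return x / FLOOR_W, y / FLOOR_H

# A person standing in the centre of the floor maps to the centre
# of the projection:
print(floor_to_uv(8.0, 4.5))  # (0.5, 0.5)
```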
Deep Sync Infrastructure
The Deep Sync Infrastructure brings biodata to the co-immersive Deep Space 8K. It includes custom-made wearables and an Unreal Engine plug-in. The Deep Sync wearables are equipped with sensors (heart rate, IMU, oximeter), a button input, vibration output, LEDs, and a Wi-Fi connection. The participants’ data is continuously measured and transmitted to a server application that makes the data available in Unreal Engine. The two-way communication between the wearables and the Unreal Engine application allows for an exploration of the mutual influence between the participants’ active input, their physiological state, the application’s reactive environment, and its personalized feedback to the participants through their wearables.
You can find the link to the plug-in here: https://github.com/ArsElectronicaFuturelab/UE-DeepSpace-DeepSync
You can find the link to the wearable server here: https://github.com/ArsElectronicaFuturelab/DeepSync-Wearable-Server
You can find the link to the wearable firmware here: https://github.com/ArsElectronicaFuturelab/DeepSync-Wearable-Firmware
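To give an idea of the data flow, the sketch below encodes and decodes one sensor sample as a wearable-to-server message. The sensor set matches the description above, but the JSON field names and structure are illustrative assumptions; the actual wire format is defined in the firmware and server repositories linked here:

```python
import json

# Hypothetical wearable-to-server message. Field names are assumptions
# for illustration; see the Deep Sync repositories for the real format.
def encode_sample(device_id, heart_rate_bpm, spo2_percent, imu, button):
    """Serialize one sensor sample (heart rate, oximeter, IMU
    acceleration, button state) into a JSON payload."""
    ax, ay, az = imu
    return json.dumps({
        "device": device_id,
        "heart_rate_bpm": heart_rate_bpm,
        "spo2_percent": spo2_percent,
        "imu": {"ax": ax, "ay": ay, "az": az},
        "button": button,
    })

def decode_sample(payload):
    """Parse a payload back into a dict, as a server relaying the
    data to Unreal Engine would."""
    return json.loads(payload)

msg = encode_sample("wearable-01", 72, 98, (0.0, 0.0, 9.81), False)
print(decode_sample(msg)["heart_rate_bpm"])  # 72
```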


Cognitive Architectures
Within SHARESPACE, cognitive architectures were designed, developed, and validated by SHARESPACE partner CRdC to drive the movement of virtual characters with different levels of autonomy. Two main kinds of architectures were developed: those that increase synchronization in a group, and those that amplify kinematic information in human motion. These architectures were designed for different use cases and deployed both in the project’s proof-of-principle demonstrations and in the application scenarios.
You can find information about all the different developed cognitive architectures, and their plug-in links here: https://sharespace.eu/cognitive-architectures/
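As a loose illustration of the first kind of architecture, group synchronization is often abstracted with the classic Kuramoto model of coupled oscillators, where each agent's movement phase is pulled toward the others. The sketch below is a generic textbook example with assumed parameters, not the project's actual cognitive architecture:

```python
import math

def kuramoto_step(phases, omegas, coupling, dt):
    """One Euler step of the Kuramoto model: each agent's phase drifts
    at its own natural rate (omega) and is pulled toward the other
    agents' phases, so a positive coupling drives the group toward
    synchrony. Purely illustrative; not the SHARESPACE architecture."""
    n = len(phases)
    return [
        p + dt * (w + coupling / n * sum(math.sin(q - p) for q in phases))
        for p, w in zip(phases, omegas)
    ]

# Three agents starting out of phase, identical natural frequencies:
phases = [0.0, 1.0, 2.0]
for _ in range(300):
    phases = kuramoto_step(phases, [1.0, 1.0, 1.0], coupling=2.0, dt=0.05)
# With equal frequencies and positive coupling, the phase spread shrinks
# toward zero as the agents synchronize.
print(max(phases) - min(phases) < 0.05)  # True
```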
Audio Transmission for XR Experiences
Pixel Streaming Service
Audio latency emerged as one of the most persistent technical challenges in V2 development. The cumulative latency from multiple sources (Bluetooth microphone ADC: 5–10 ms; local processing: 20–50 ms; spatial audio: 100–200 ms; network transmission: 50–200 ms; remote decoding: 10–20 ms; HMD output: 5–15 ms) can easily exceed 300 ms, the recommended threshold for natural VR social interaction.
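The budget above can be checked with a few lines of arithmetic; the stage names and ranges are taken directly from the text, and summing them is illustrative only, not a measurement:

```python
# Per-stage audio latency ranges in milliseconds, as listed above.
STAGES = {
    "bluetooth_mic_adc": (5, 10),
    "local_processing": (20, 50),
    "spatial_audio": (100, 200),
    "network_transmission": (50, 200),
    "remote_decoding": (10, 20),
    "hmd_output": (5, 15),
}

def latency_range(stages):
    """Return (best_case, worst_case) end-to-end latency in ms,
    assuming the stages simply add up."""
    best = sum(lo for lo, _ in stages.values())
    worst = sum(hi for _, hi in stages.values())
    return best, worst

print(latency_range(STAGES))  # (190, 495): the worst case is well
                              # past the 300 ms threshold.
```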
UE Voice Chat, the initial integrated VoIP solution, showed significant latencies during testing, particularly in Meta Quest 3 scenarios. The team from Alcatel therefore developed a complementary WebRTC-based solution leveraging the Pixel Streaming infrastructure. WebRTC enables direct peer-to-peer audio connections that avoid the mixing delays inherent in centralized voice systems. A mesh network architecture allows multiple P2P connections between emitters and receivers, minimizing latency while maintaining audio quality.
The Pixel Streaming infrastructure is available here: https://github.com/ALE-Rainbow/sharespace-pixel-streaming-infrastructure
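The cost of the mesh approach is the number of direct connections it needs. The sketch below (not part of the actual infrastructure; the function name is illustrative) simply counts the pairwise links in a full mesh:

```python
def mesh_links(n_peers: int) -> int:
    """Direct P2P audio links in a full mesh of n_peers.

    Every pair of participants holds its own connection, so the count
    grows quadratically. This is the trade-off a mesh accepts in
    exchange for skipping a central mixing server and its added delay.
    """
    return n_peers * (n_peers - 1) // 2

print(mesh_links(4))  # 6
```

For the small groups involved in a shared XR session the quadratic growth stays manageable; much larger groups would typically route audio through a selective forwarding unit instead.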
Web Client-Server Bundle
The project delivered a web client-server bundle that can be hosted either locally or in the ALE cloud, with connectors for direct connection with XR applications. Although audio streams flow directly between devices, setting up the connection may feel slightly different with a locally hosted system than with cloud hosting, due to a longer signaling sequence.
The web client-server bundle is available here: https://github.com/ALE-Rainbow/sharespace-pixel-streaming-cpp-service
Avatar Replication in Shared Environments
A unified multi-user platform was developed in Unreal Engine 5 by the CYENS team, allowing distributed clients to connect and interact within shared virtual environments. The plugin supports a variety of humanoid virtual characters created with tools such as MetaHumans and Character Creator, providing flexibility across different character pipelines. It also integrates with multiple motion capture systems, including Xsens, Rokoko, and Noitom, and supports offline animation for secondary characters when live mocap is not required. The plugin is fully compatible with Unreal Engine 5.3.
You can find the link to the plug-in here: https://github.com/CYENS/virtual-share-space
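One recurring step when replicating avatars over a network is smoothing a remote character between two received pose updates. The sketch below shows that interpolation step in isolation; it is a generic illustration with assumed data shapes, not the plugin's actual replication scheme, which builds on Unreal Engine's own networking:

```python
def lerp_pose(pose_a, pose_b, t):
    """Linearly interpolate two pose snapshots, each a dict mapping
    joint name -> (x, y, z) position, with t in [0, 1]. This is the
    usual smoothing applied between two network updates so a remote
    avatar moves continuously instead of jumping. Illustrative only."""
    return {
        joint: tuple(a + (b - a) * t for a, b in zip(pose_a[joint], pose_b[joint]))
        for joint in pose_a
    }

# Halfway between two received hand positions:
prev = {"hand_r": (0.0, 0.0, 0.0)}
curr = {"hand_r": (2.0, 2.0, 2.0)}
print(lerp_pose(prev, curr, 0.5))  # {'hand_r': (1.0, 1.0, 1.0)}
```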
