The life of Brayns

One of the keys towards understanding how the brain works as a whole is visualisation of how the individual cells function. In particular, the more morphologically accurate the visualisation can be, the easier it is for experts in the biological field to validate cell structures; photo-realistic rendering is therefore important.

The Blue Brain Project has made major efforts to create morphologically accurate neurons to simulate sub-cellular and electrical activities, e.g. molecular simulations of neuron biochemistry or multi-scale simulations of neuronal function. Ray-tracing can help to highlight areas of the circuits where cells touch each other and where synapses are being created. In combination with ‘global illumination’, which uses light, shadow, and depth of field effects to simulate photo-realistic images, this technique makes it easier to visualise how the neurons function.

Brayns is a visualiser that can perform ray-traced rendering of scientific data. It provides an abstraction of the underlying rendering engines, so that the best possible acceleration libraries can be used for the relevant hardware.

https://github.com/BlueBrain/Brayns.git

Here is its story... (WORK IN PROGRESS...)

1995

North-East Wales Institute of Technology

 


2011

Strange encounter with GPU programming 

 


It all started suddenly with the "CUDA, Supercomputing for the Masses" series by Rob Farber

2012

DICOM attempt

 

 

Client/Server architecture

Thanks to my experience in Client/Server architecture, the engine becomes cloud ready. The client sends information such as mouse and keyboard events to the server, which takes care of the rendering and sends a stream of images back to the client. Transport is optimized using compression technologies. Each client can enjoy a different and fully customizable view of the 3D scene.

 

Source code for the client/server setup using ZeroC's ICE framework:

 

Acceleration structures

In order to produce optimal results in terms of rendering speed, I implement a naive bounding volume hierarchy. This technique allows the ray-tracing engine to work with thousands of objects. The GPU implementation offers the necessary computing power for real-time rendering.
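To make the idea concrete, here is a minimal sketch of a naive BVH build in the spirit described above (illustrative C++ only; the names and structure are hypothetical, not the original Sol-R code). Primitives are split at the median along the longest axis of their common bounding box, so that a ray only needs to visit the sub-trees whose bounds it intersects.

#include <algorithm>
#include <cstddef>
#include <memory>
#include <vector>

struct AABB
{
    float min[3]{1e30f, 1e30f, 1e30f}, max[3]{-1e30f, -1e30f, -1e30f};
    void expand(const AABB& o)
    {
        for (int i = 0; i < 3; ++i)
        {
            min[i] = std::min(min[i], o.min[i]);
            max[i] = std::max(max[i], o.max[i]);
        }
    }
    float centroid(int axis) const { return 0.5f * (min[axis] + max[axis]); }
    int longestAxis() const
    {
        const float d[3] = {max[0] - min[0], max[1] - min[1], max[2] - min[2]};
        return d[0] > d[1] ? (d[0] > d[2] ? 0 : 2) : (d[1] > d[2] ? 1 : 2);
    }
};

struct Node
{
    AABB bounds;
    std::unique_ptr<Node> left, right;
    std::vector<std::size_t> prims; // indices of the objects stored in this leaf
};

// Median split along the longest axis; recursion stops at small leaves.
std::unique_ptr<Node> build(std::vector<std::size_t> prims,
                            const std::vector<AABB>& boxes)
{
    auto node = std::make_unique<Node>();
    for (std::size_t p : prims)
        node->bounds.expand(boxes[p]);
    if (prims.size() <= 4)
    {
        node->prims = std::move(prims);
        return node;
    }
    const int axis = node->bounds.longestAxis();
    std::sort(prims.begin(), prims.end(), [&](std::size_t a, std::size_t b) {
        return boxes[a].centroid(axis) < boxes[b].centroid(axis);
    });
    const std::size_t half = prims.size() / 2;
    std::vector<std::size_t> lo(prims.begin(), prims.begin() + half);
    std::vector<std::size_t> hi(prims.begin() + half, prims.end());
    node->left = build(std::move(lo), boxes);
    node->right = build(std::move(hi), boxes);
    return node;
}

Even this naive median split typically reduces the cost of intersecting a ray with the scene from linear to roughly logarithmic in the number of objects, which is what makes thousands of objects tractable in real time.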

 

2013

Birth of the Sol-R Engine

Sol-R is an open-source CUDA/OpenCL-based real-time ray-tracer compatible with the Oculus Rift DK1, Kinect, Razer Hydra and Leap Motion devices. Sol-R was used by the Interactive Molecular Visualiser project (http://www.molecular-visualization.com).
A number of videos can be found on my channel: https://www.youtube.com/user/CyrilleOnDrums
Sol-R was written as a hobby project in order to understand and learn more about CUDA and OpenCL. Most of the code was written at night and during weekends, meaning that it's probably not the best quality ever ;-)
The idea was to produce a ray-tracer that has its own "personality". Most of the code does not rely on any literature about ray-tracing, but more on a naive approach of what rays could be used for. The idea was not to produce a physically based ray-tracer, but a simple engine that could produce cool images interactively.
Take it for what it is! Sol-R is a lot of fun to play with if you like coding computer generated images.

The Sol-R source code is available here: https://github.com/favreau/Sol-R
 

Interactive ray-tracing on an Android mobile device

First, an HTTP server is required to serve images generated by Sol-R:


And then a client application using the Android SDK:


 

Protein visualizer

 





 

Virtual reality with the Oculus Rift DK1 and multiple GPUs

 

Sol-R is now able to simultaneously produce a different view for each eye, making it ready for NVIDIA 3D Vision: simply capture the window and play it back in a 3D Vision-compatible player to enjoy an immersive experience.


Sol-R also provides a cheap way to visualize molecules in 3D, thanks to anaglyph technology. Get a pair of glasses for less than $2 and enjoy an immersive and unmatched experience.

 

From OpenGL to Ray-tracing in 2 lines of code

 

4-view visualizer

 

Fully raytraced viewports

 

 

2014

Joined the Blue Brain Project at EPFL


2015

Initial version of Brayns

First prototype of a hardware-agnostic, high-performance ray-tracer dedicated to scientific visualization.

2016

First iteration of the Web UI over HTTP and open-sourcing of Brayns

Demo with Intel at SC Frankfurt


2017

First iteration of the Python API over HTTP

https://github.com/favreau/pyBrayns

2019

Rise of Brayns

Motivation

One of the keys to understanding how the brain works is visualising how the individual cells function. In particular, the more morphologically accurate the visualisation, the easier it is for experts in the biological field to validate cell structures; photo-realistic rendering is therefore important. Brayns is a visualiser that can interactively perform high-quality and high-fidelity rendering of large neuroscience datasets. Thanks to its client/server architecture, Brayns can be run in the cloud as well as on a supercomputer, and stream the rendering to any browser, either in a web UI or a Jupyter notebook.
The challenges of neuroscience are numerous, but in the context of visualization at the Blue Brain Project, four objectives have to be reached: large data sets, large displays, rendering performance and image quality.
 

As an academic institution, EPFL also wants to provide free software that can run on virtually any type of architecture: CPUs, GPUs or virtual machines hosted in the cloud.
In order to reach a community as large as possible, the target audience for Brayns includes developers, scientists, digital artists, and media designers.

Design goals

Brayns is designed to address the challenges of visualizing large-scale neuroscientific data (from hundreds of thousands up to a few million highly detailed neurons, and terabytes of simulation data).
It has a research-oriented modular architecture that uses plug-ins, which makes it easy to experiment with novel rendering techniques, for instance trying to visualize neural electrical activity with signed distance fields. This architecture is well suited to address the new use cases that scientists request on a regular basis. Brayns has a client-server architecture that allows it to run on a desktop PC, in the cloud or on a supercomputer.
The core of Brayns currently provides two rendering engines, a CPU implementation built on top of Intel OSPRay, and a GPU one based on OptiX. Brayns provides an engine API that facilitates the integration of additional rendering engines.
Brayns has custom virtual cameras to support any type of display, for example cylindrical, omni-stereo panoramic and virtual reality setups. The rendered images can be streamed either to web browsers or large curved display walls.
Brayns aims to be a platform for scientific visualization that makes it easy to add new scientific use-cases without having to worry about the complexity of the large scale rendering challenges.
In the context of the Blue Brain Project:

  • A unified engine/platform, as separate tools/applications increase the maintenance complexity
  • Unified common features, like the loading of the data and the building of the 3D scene
  • A focus on the science, not on the engineering

As a general rule, engines do not need to have a similar set of functionalities, but implement what is necessary to serve the use-cases they are used for. Typically, the OSPRay implementation is used to render very large data sets, and the OptiX one runs the real-time immersive use-cases.

Software architecture

Modular design

Modular design is a methodology that subdivides a system into smaller parts called modules, which can be independently created, modified, replaced or exchanged between different systems. In the case of Brayns, the philosophy is “Write code that is easy to replace, not easy to extend”. In that context, modularity is at the component level. Brayns makes extensive use of the class factory pattern to create objects for the selected implementations. The design was initially inspired by the Sol-R rendering engine, which allows multiple engines (CUDA and OpenCL) to deliver interactive visualization of scientific data using the ray-tracing technique.
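As an illustration, here is a minimal sketch of the class factory idea (hypothetical names, not the actual Brayns classes): each implementation registers a constructor under a name, and the core instantiates components without knowing the concrete types, which keeps them easy to replace.

#include <functional>
#include <map>
#include <memory>
#include <string>
#include <utility>

class Camera { public: virtual ~Camera() = default; };
class OSPRayCamera : public Camera {};
class OptiXCamera : public Camera {};

template <typename T>
class Factory
{
public:
    using Creator = std::function<std::unique_ptr<T>()>;
    void registerType(const std::string& name, Creator creator)
    {
        _creators[name] = std::move(creator);
    }
    std::unique_ptr<T> create(const std::string& name) const
    {
        const auto it = _creators.find(name);
        return it == _creators.end() ? nullptr : it->second();
    }
private:
    std::map<std::string, Creator> _creators;
};

int main()
{
    Factory<Camera> cameras;
    cameras.registerType("ospray", [] { return std::make_unique<OSPRayCamera>(); });
    cameras.registerType("optix", [] { return std::make_unique<OptiXCamera>(); });
    auto camera = cameras.create("ospray"); // implementation selected by name
}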

Distributed architecture

In the context of large-scale rendering, computation is usually distributed over many nodes, while the visualization of, and interaction with, the system still has to be performed from a single machine. For that reason, Brayns is built upon a distributed architecture that allows all client components (Python scripts, UI widgets, etc.) to run on separate machines.

Abstraction

The abstraction layer defines the interface to every element that can be used by the various engines in the system.
The abstraction was put at the lowest possible level where the compromise between execution speed and code duplication was found acceptable.
Regarding the geometry, and to limit memory consumption, Brayns currently uses abstract data structures that are identical to the ones used by the underlying rendering engines (OSPRay and OptiX). This could change in the future as new engines are added, but since geometry can be massive in the context of the Blue Brain Project, the design decision was to force the engines to adapt to the definition of the abstract objects used by Brayns.

Properties

Brayns objects hold a list of properties, each mapped by name to a supported C++ type. This mechanism is used at every level of the software in order to facilitate the exposure of internal objects to the external API.
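A minimal sketch of what such a property mechanism can look like (illustrative only; the real Brayns API may differ), using a tagged union to map names to typed values:

#include <iostream>
#include <map>
#include <string>
#include <utility>
#include <variant>

// The set of supported C++ types is fixed by the variant.
using PropertyValue = std::variant<bool, int, double, std::string>;

class PropertyMap
{
public:
    void set(const std::string& name, PropertyValue value)
    {
        _properties[name] = std::move(value);
    }
    template <typename T>
    T get(const std::string& name) const
    {
        return std::get<T>(_properties.at(name));
    }
private:
    std::map<std::string, PropertyValue> _properties;
};

int main()
{
    PropertyMap renderer;
    renderer.set("samples_per_pixel", 16); // exposed by name to the external API
    renderer.set("shadows", true);
    std::cout << renderer.get<int>("samples_per_pixel") << '\n';
}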

Core components

Brayns

The initialization of the system involves command line parsing, engine creation, plug-in loading, data loading and setup of input devices.


Command line parameters provide options about the application itself, the geometry and the renderer. Brayns creates the scene using built-in and plug-in provided loaders.

Parameter manager

The parameter manager manages all parameters registered by the application. By default, an instance of application, rendering, geometry and volume parameters is registered. The parameter manager offers the necessary methods to register additional custom types of parameters.

Camera manipulators

Brayns provides two types of camera manipulators: Inspect and Fly. Inspect is the default, and allows the user to orbit around a target. The fly manipulator allows navigation in the manner of a flight simulator.

Engine factory

The engine factory is in charge of instantiating engines according to their name.

Plug-ins

A plug-in is a set of functionalities that are not provided by the core of the application, for example exposing a REST interface via HTTP, or streaming images to a distant display. Plug-ins are components external to the core that are dynamically loaded during the initialization process. Brayns accepts multiple occurrences of the --plug-in command line argument, each followed by the name of a plug-in.
The plug-in manager is in charge of loading and keeping track of the plug-ins for the lifetime of the application. At every iteration of the rendering loop, the preRender and postRender methods are invoked on every plug-in, respectively before and after the rendering of the current frame.
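The contract can be sketched as follows (a simplified illustration based on the preRender/postRender hooks mentioned above; the real interface has more context, such as access to the engine):

#include <memory>
#include <utility>
#include <vector>

class Plugin
{
public:
    virtual ~Plugin() = default;
    virtual void init() {}       // called once, after the plug-in is loaded
    virtual void preRender() {}  // called before each frame
    virtual void postRender() {} // called after each frame
};

class PluginManager
{
public:
    void add(std::unique_ptr<Plugin> plugin)
    {
        plugin->init();
        _plugins.push_back(std::move(plugin));
    }
    // Invoked by the rendering loop around every frame, for every plug-in.
    void preRender() { for (auto& p : _plugins) p->preRender(); }
    void postRender() { for (auto& p : _plugins) p->postRender(); }
private:
    std::vector<std::unique_ptr<Plugin>> _plugins;
};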

Data loaders

Brayns provides default loaders for meshes, proteins, volumes and point clouds.

Engine

The engine abstraction is a container for all components that make a rendering engine: A scene, a renderer, a set of lights, and a list of frame buffers. When adding a new engine to Brayns, those components have to be linked to the underlying corresponding ones provided by the 3rd party acceleration library, typically OSPRay or OptiX, that provides the ray-tracing implementation.
 
 
An engine can have several renderers, cameras and frame buffers but only has one single scene. The engine is responsible for the creation of the components that it contains. For this, the engine provides a set of methods that have to be called by the individual implementations. Typically, createCamera creates the camera for the current engine, createRenderer creates the renderer and so on for every other component. The engine also provides statistics about the rendering speed of a frame via getStatistics.
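An abridged sketch of this engine abstraction (placeholder types, for illustration only):

#include <memory>

class Scene {};
class Camera {};
class Renderer {};
class FrameBuffer {};
struct Statistics { double fps = 0.0; };

class Engine
{
public:
    virtual ~Engine() = default;
    // Each implementation (e.g. OSPRay, OptiX) returns its own subclasses,
    // linked to the underlying acceleration library.
    virtual std::unique_ptr<Camera> createCamera() const = 0;
    virtual std::unique_ptr<Renderer> createRenderer() const = 0;
    virtual std::unique_ptr<FrameBuffer> createFrameBuffer() const = 0;

    Scene& getScene() { return _scene; } // one single scene per engine
    const Statistics& getStatistics() const { return _statistics; }

protected:
    Scene _scene;
    Statistics _statistics;
};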

Scene

A scene contains collections of geometries, materials and light sources that are used to describe the 3D scene to be rendered.
 
 

Model descriptor

 
The model descriptor defines the metadata attached to a model. Enabling a model means that the model is part of the scene; if disabled, the model still exists in Brayns but is removed from the rendered scene. The visible attribute defines whether the model should be visible; if invisible, the model is removed from the BVH. If set to true, the bounding box attribute displays a bounding box for the current model.
Model descriptors are exposed via the HTTP/WS interface. The metadata structure is a simple map of strings that contains the name of a property and its value. This can be used to describe the model with any piece of information that is relevant to the end user.
The model descriptor manages instances of the model it contains via a list of transformations. The model descriptor provides functions to manage the metadata and the instances, to compute the bounds of the geometry, and access to the underlying model object.

Model


The model class holds the geometry attached to an asset of the scene (mesh, circuit, volume, etc.). The model handles the resources attached to the geometry, such as implementation-specific classes and acceleration structures. Models provide a simple API to manipulate geometries (spheres, cylinders, cones, signed distance fields, triangle meshes, streamlines, etc.), materials, a unique simulation handler, volumes and a unique transfer function.
An OSPRay model holds two internal sets of geometries, a primary and a secondary one.
The model is responsible for creating the materials attached to the geometry that it contains.

Application Programming Interface

Action interface

The action interface allows developers to extend the API exposed via the network interface. It can register notifications, which take an optional parameter and return no value, and requests, which return a value after processing. The encoding of the parameter and return value is restricted to JSON.
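A hedged sketch of what this registration can look like (illustrative signatures with JSON payloads carried as plain strings; not the exact Brayns API):

#include <functional>
#include <map>
#include <string>
#include <utility>

class ActionInterface
{
public:
    // Notification: JSON parameter in, no return value.
    void registerNotification(const std::string& name,
                              std::function<void(const std::string&)> action)
    {
        _notifications[name] = std::move(action);
    }
    // Request: JSON parameter in, JSON result out.
    void registerRequest(const std::string& name,
                         std::function<std::string(const std::string&)> action)
    {
        _requests[name] = std::move(action);
    }
private:
    std::map<std::string, std::function<void(const std::string&)>> _notifications;
    std::map<std::string, std::function<std::string(const std::string&)>> _requests;
};

// Usage (illustrative):
//   actions.registerRequest("schema", [](const std::string& params) {
//       return std::string("{ ... }");
//   });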

Plug-in

The plug-in interface defines the methods that need to be implemented by any new plug-in added to the list of components that Brayns can dynamically load. Using the --plug-in command line argument, the name of the plug-in can be specified, and Brayns will load the corresponding library at startup. A plug-in can access the engine, the action interface, the keyboard handler, and the camera manipulator provided by Brayns. Plug-ins can expose new external APIs and implement new data loaders, as well as shaders, cameras, geometries, materials, etc. Plug-ins are also where use-case-specific implementations belong: Brayns aims to remain agnostic to what it renders, and plug-ins are responsible for giving a meaning to what is rendered.

Loader

In a research environment, new datasets appear on a daily basis, and being able to visualize them in a fast and easy way is crucial. Brayns offers an interface to data loaders so that custom implementations can easily be added to the system. Loaders are in charge of reading the data from external sources (IO, Databases, etc) and build the corresponding 3D scene via the creation of models. Loaders are asynchronous and run in a dedicated thread.
Loaders can define custom attributes via the property registration mechanism. importFromBlob and importFromFile are the two methods that need to be implemented in order for Brayns to accept the new loader.
At runtime, the choice of the loader is automatically determined by the extensions that it supports. If two loaders register the same extension, the priority follows the loading order.
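An illustrative loader skeleton following this contract (the two method names are from the text; the surrounding types are hypothetical):

#include <string>
#include <vector>

struct Blob
{
    std::string type;       // e.g. the original file extension
    std::vector<char> data; // raw bytes, e.g. received over websockets
};
class Model {};

class PointCloudLoader
{
public:
    // Extensions used to pick this loader automatically at runtime.
    std::vector<std::string> getSupportedExtensions() const { return {"xyz"}; }

    Model importFromFile(const std::string& path) const
    {
        // Read the file, build the geometry on a model and return it.
        return Model{};
    }
    Model importFromBlob(const Blob& blob) const
    {
        // Same as above, but from an in-memory blob.
        return Model{};
    }
};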

Client software development kits

Introduction

Brayns client SDKs are built on a dynamic approach, meaning that they are constructed at runtime according to the API exposed by the server. Whenever a new end-point is added to Brayns, the client SDK does not need to be adapted: methods and data structures are automatically interpreted by the SDK and appear to the client application as native Python or JavaScript objects.
Client SDKs use the registry and schema end-points to list and define native and language-specific methods and data structures. As an example, the camera object appears in the registry /registry as follows:
 
{
  "camera": ["PUT", "GET"]
}

And the corresponding schema /camera/schema:

{
  "type": "object",
  "properties": {
    "current": {
      "type": "string"
    },
    "orientation": {
      "type": "array",
      "items": {
        "type": "number"
      },
      "minItems": 4,
      "maxItems": 4
    },
    "position": {
      "type": "array",
      "items": {
        "type": "number"
      },
      "minItems": 3,
      "maxItems": 3
    },
    "target": {
      "type": "array",
      "items": {
        "type": "number"
      },
      "minItems": 3,
      "maxItems": 3
    },
    "types": {
      "type": "array",
      "items": {
        "type": "string"
      }
    }
  },
  "additionalProperties": false,
  "title": "Camera"
}

Python

The Python SDK offers a simple and easy way to connect to Brayns, using the following API:

from brayns import Client
brayns = Client('host:port')
 
When this command is executed and the SDK is connected to a running instance of Brayns, the end-point registry is parsed, and Python classes are built using the python_jsonschema_objects library, an object wrapper for JSON schema definitions.
Methods of generated classes are JSON-RPC based. A Python call is no more than an invocation of a piece of code executed server-side. This architecture allows Python scripts to run on lightweight clients (mobile devices, laptop computers, etc.) regardless of the size of the 3D scene, which is handled by the server part of Brayns. This also allows the server to run in distributed mode.
The generated Client class has a getter/setter method for every end-point exposed by the Brayns server that has a GET/PUT method defined in the schema. For instance, the following code snippet illustrates how to manipulate a camera:
 
camera = brayns.get_camera()
brayns.set_camera(position=(0,0,0), orientation=camera['orientation'])

Core plug-ins

Circuit viewer and explorer

The circuit plug-ins allow visualization of Blue Brain microcircuits. Morphologies stored as SWC or H5 files can be loaded and transformed in different ways: simple spheres for somas only, simple rendering of full morphologies using spheres, cones and cylinders, or advanced rendering of full morphologies using the signed distance field technique.
Built upon Brion, those plug-ins allow fast and low-overhead access to circuit descriptions, H5 synapse data, binary meshes, morphologies, compartment reports and spike reports.

Microcircuits

Multi-scale models of the rat and mouse brain integrate models of ion channels, single cells, microcircuits, brain regions, and brain systems at different levels of granularity (molecular models, morphologically detailed cellular models, and abstracted point neuron models).
A neuronal microcircuit is the smallest functional ecosystem in any brain region that encompasses a diverse morphological and electrical assortment of neurons, and their synaptic interactions. Blue Brain has pioneered data-driven digital reconstructions and simulations of microcircuits to investigate how local neuronal structure gives rise to global network dynamics. These methods could be extended to digitally reconstruct microcircuits in any brain region.

Simulation

Using the NEURON simulation package, the circuit information is loaded from disk, instantiating the various cell models (morphologies with ion channel distribution) and synaptic connections. The experimenter selects a stimulus protocol which will inject electrical current into the network and increase the membrane voltages of cells. As cells approach a threshold current, they release an action potential (AP) which will then propagate additional current changes to other cells via the synapses' release mechanisms. Brayns loads the simulation reports generated by NEURON and maps the voltages to the corresponding segments of the morphologies. A transfer function defines the mapping between a color and a voltage value.
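As an illustration, such a transfer function can be as simple as a piecewise-linear interpolation between control points (a sketch with made-up values, not the actual Blue Brain palette):

#include <array>
#include <cstddef>
#include <vector>

struct ControlPoint
{
    float voltage;              // in mV
    std::array<float, 3> color; // RGB in [0, 1]
};

// Piecewise-linear mapping from a voltage to a color. The control points
// are assumed to be sorted by voltage and non-empty.
std::array<float, 3> voltageToColor(float v, const std::vector<ControlPoint>& tf)
{
    if (v <= tf.front().voltage)
        return tf.front().color;
    for (std::size_t i = 1; i < tf.size(); ++i)
        if (v < tf[i].voltage)
        {
            const float t = (v - tf[i - 1].voltage) /
                            (tf[i].voltage - tf[i - 1].voltage);
            std::array<float, 3> c;
            for (int k = 0; k < 3; ++k)
                c[k] = (1 - t) * tf[i - 1].color[k] + t * tf[i].color[k];
            return c;
        }
    return tf.back().color;
}

// Example: dark blue around the resting potential, bright red when spiking.
// const std::vector<ControlPoint> tf = {{-80.f, {0, 0, 0.3f}}, {-10.f, {1, 0, 0}}};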

Morphology synthesis

The goal of synthesis is to be able to generate an arbitrary number of neurons (and also other cells, such as glia) that can be subsequently used in various types of simulation. Part of this goal is to recreate in the synthesized cells as many morphological features as possible.
 

The synthesis scheme is based on the assumption that it is necessary to know the environment within which the cells are growing in order to recreate them accurately. Neuronal morphologies are influenced both by the embedding space and the presence of other cells. Their axons may target certain regions or the dendrites may mass in one region to collect input, such as the apical tuft of pyramidal cells. It is important therefore to synthesize the cells within biologically accurate volumes.

Proximity detection

In the context of brain simulation, detecting touches between neurons is an essential part of the process. The circuit explorer provides a renderer that computes the distance between the geometries in the 3D scene.




When a ray hits a geometry, a random secondary ray is sent in a direction belonging to a hemisphere defined by the normal to the surface. If that secondary ray hits another geometry, the distance between the initial hit and the new intersection is computed, and the corresponding color is assigned to the pixel. By default, red is used for short distances (including touches), and green for longer ones. The notion of short and long is defined in the settings of the renderer.
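The shading logic can be sketched as follows (an illustration of the idea only; the actual renderer is implemented as a shader inside the underlying engine):

#include <algorithm>
#include <cmath>
#include <random>

struct Vec3 { float x, y, z; };

// Uniform random direction in the hemisphere around a surface normal
// (rejection sampling, for brevity).
Vec3 randomHemisphereDirection(const Vec3& n, std::mt19937& rng)
{
    std::uniform_real_distribution<float> u(-1.f, 1.f);
    for (;;)
    {
        Vec3 d{u(rng), u(rng), u(rng)};
        const float len2 = d.x * d.x + d.y * d.y + d.z * d.z;
        if (len2 < 1e-4f || len2 > 1.f)
            continue; // outside the unit ball, try again
        if (d.x * n.x + d.y * n.y + d.z * n.z < 0.f)
            d = {-d.x, -d.y, -d.z}; // flip into the normal's hemisphere
        const float inv = 1.f / std::sqrt(len2);
        return {d.x * inv, d.y * inv, d.z * inv};
    }
}

// Map the distance between the primary hit and the secondary intersection
// to a color: red for short distances (touches), green for long ones.
// 'farDistance' stands for the renderer setting separating short from long.
Vec3 proximityColor(float hitDistance, float farDistance)
{
    const float t = std::min(hitDistance / farDistance, 1.f);
    return {1.f - t, t, 0.f};
}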

Synapses

In the nervous system, a synapse is a structure that permits a neuron (or nerve cell) to pass an electrical or chemical signal to another neuron. Synapses can be classified by the type of cellular structures serving as the pre- and post-synaptic components. The vast majority of synapses in the mammalian nervous system are classical axo-dendritic synapses (an axon connecting to a dendrite).


 
Brayns provides loaders for afferent and efferent synapses, represented as simple spheres.

Diffusion Tensor Imaging

The DTI plug-in implements the visualization of diffusion magnetic resonance imaging data.

Deflect

Based on the data-parallel rendering feature provided by the OSPRay engine, the Deflect plug-in extends the PixelOp implementation to stream tiles from multiple nodes to Tide. Individual tiles are computed by rendering nodes, and sent to Tide that is in charge of reconstructing the full frame and displaying it on the large screens.
The Deflect plug-in also processes input messages, such as the touches provided by Tide, and implements the corresponding actions, typically camera movements and the rendering options that are mapped to keyboard entries.

Rockets

Rockets is a library for easy HTTP and websockets messaging in C++ applications. It provides an HTTP server with integrated websockets support, an HTTP client for making simple asynchronous requests, a websocket client for sending, broadcasting and receiving text and binary messages, and support for JSON-RPC as a communication protocol over HTTP and websockets. Rockets extends the JSON-RPC 2.0 specification by providing support for cancellation and progress notifications of pending requests.
The Rockets plug-in allows Brayns to expose core and use-case-specific APIs via the HTTP or websocket protocols. The loading of the plug-in is initiated at startup with the http-server command line argument, where an optional host (default is localhost) and a mandatory port are specified. End-points are registered in the C++ code using the registerNotification or registerRequest method of the ActionInterface component. The list of registered end-points can be accessed via the registry end-point.
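For illustration, a JSON-RPC 2.0 request over the websocket could look as follows (the method name is hypothetical; the actual list is given by the registry end-point):

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "set-camera",
  "params": {
    "position": [0, 0, 1],
    "target": [0, 0, 0]
  }
}

And the matching response:

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": "OK"
}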

VRPN

The VRPN plug-in receives events from input devices and transforms them into information usable by Brayns: camera position and orientation, and flystick interactions (position, orientation, joystick and buttons). This plug-in is mainly used for immersive setups.
 

Applications

Service

The service is an off-screen application that is typically used when running Brayns as a service in the cloud or on a supercomputer. This application does not require any kind of graphics acceleration when run with the OSPRay engine.

Viewer

The viewer is an OpenGL-based application that is used to run Brayns as a heavy client on a consumer PC.

Use-cases

Visualization of Blue Brain datasets

The visualization of Blue Brain datasets requires a specific plug-in called the Circuit Explorer. This component allows the loading of neurons, glial cells, or vasculatures, with a placement and orientation defined in the microcircuit description. Cell morphologies can be represented in a simplified way using only a sphere for the somas, or meshed on the fly to offer a high-quality rendering of dendrites and axons.
The Circuit Explorer also provides visualization of simulations by mapping the voltage values to the morphology geometry.



MOOC

The Blue Brain Project provides a number of massive open online courses in which students want to visualize microcircuit structures and corresponding simulations. Since datasets can be large, running the visualization using client hardware and software resources is likely to give unacceptable results. Thanks to its client/server architecture, visualization can be processed server-side on a virtual machine running in the cloud. Computed images are then streamed to the end client in real time, allowing smooth navigation at a constant rate, regardless of the size of the dataset.
Brayns offers a CPU based renderer via its OSPRay engine that allows it to run efficiently on a GPU-free machine. Components used in this setup are the following:


The current version of Brayns provides Docker images for the user interface (bluebrain/brayns-ui) and the server-side components (bluebrain/brayns).

Virtual reality

The OpenDeck is the Blue Brain visualization display for presentations and immersive exploration of scientific data in 3D.
 

  • Native rendering
The system consists of a semi-cylindrical screen measuring 8 by 3 meters with a total pixel count of about 35 megapixels, 8 high-end Sony laser projectors (7 for the screen projection + 1 for the floor projection), 4 surround speakers, a microphone, 2 video conferencing cameras, 6 infrared LED tracking cameras to track the user's head motion, an infrared touch interface, a cooling system for the projectors, a Windows 10 PC that drives the projectors, speakers and microphone, and a display cluster of 8 nodes, each one connected to a projector.
 
 
That immersive setup requires images to be delivered at a high frame rate (60 frames per second) and at a very high resolution (7x4K). Naturally, the choice of the engine goes in favor of the GPU implementation. Recent hardware updates, such as NVIDIA's RTX technology, dramatically improve the rendering speed. The drawback of such a setup is the size of the data that can be rendered; advanced software solutions such as out-of-core rendering and geometry streaming have to be implemented.

 

Brayns uses the GPU engine to achieve the performance required to avoid motion sickness. Three plug-ins are used on the server side: VRPN for position tracking and flight stick management, OpenDeck for stereoscopic cylindrical projection, and CircuitExplorer for the loading and visualization of Blue Brain datasets.
  • Remote rendering
Brayns uses the CPU engine and runs in distributed mode to visualize large Blue Brain datasets. The same three server-side plug-ins are used as in the native rendering setup.
 

Remote rendering gives more priority to the size of the dataset, and allows a lower frame rate than the native rendering setup. Visualization still has to be interactive, but does not have to be immersive. Thanks to OSPRay's distributed mode, the CPU engine allows the rendering to be performed on multiple nodes. Each node is in charge of rendering a tile, 64x64 pixels by default, and sends its tiles to Tide using the Deflect library. Tide is in charge of recomposing the final frame buffer and ensures the synchronization of the display, which is usually driven by a cluster of several machines.
Tide offers two different surfaces, one for the main cylindrical screen and the other one for the floor projection. Brayns uses a specific camera that takes advantage of those surfaces to create a fully immersive visual experience.
 

The CPU engine uses the MPI distributed mode to render tiles on multiple nodes. Those tiles are sent to Tide via the Deflect client library. Tide is in charge of reconstructing the full frame and displaying the final image on the display.

Discussion and Conclusion

Conclusions

Brayns is currently the main platform used by the Blue Brain Project to visualize different types of data, including morphologies, surface meshes and volumes. Currently, Brayns has plug-ins for visualizing simulated electrophysiological activity of point-neuron and full compartmental models of large-scale circuits up to the size of a mouse isocortex, diffusion tensor imaging data, and large volumes.
New development will now focus on additional scientific use-cases, and new rendering engines (PBRT and OptiX 7.0).

Software availability

The code is available under the GNU General Public License at https://github.com/BlueBrain/Brayns.
The Python SDK is downloadable from the Python Package Index website at https://pypi.org/project/brayns
Brayns is also available as Docker images, for the server-side components (core modules and plug-ins) at https://hub.docker.com/r/bluebrain/brayns and for the user interface at https://hub.docker.com/r/bluebrain/brayns-ui.

Future work

Brayns is a modular platform that allows improvements in many directions: Size of datasets, types of displays (Domes, VR headsets, etc) and image quality (Shaders, materials, etc.).
The abstraction layer provides an API for additional engines, but the current implementation allows only one instance at a time. An improvement would be to have several engines running simultaneously, one for interactive navigation and another for off-line, high-quality, physically based rendering.
Sub-surface scattering, path-tracing and bi-directional path-tracing are also in the pipeline to improve the overall quality of the images produced by Brayns.