M.A.RL.I.E.

Multilayered Annotated ReLighting Images Explorer


M.A.RL.I.E. is a web application for the real-time exploration of multi-layered, image-based objects. Each layer can carry a geometric description and be dynamically relighted through an analytical material-light interaction model (Ward). The viewer also supports multiresolution annotations, a lens metaphor for inspection, BRDF picking, geometry enhancement, and achromatic rendering, and it is fully configurable. It accepts both mouse and touch input and adapts its layout to a wide range of devices (smartphones, tablets, desktops, etc.).

Technical details are provided in the paper "Web-based Multi-layered Exploration of Annotated Image-based Shape and Material Models" by Alberto Jaspe-Villanueva, Ruggero Pintus, Andrea Giachetti and Enrico Gobbetti, presented at the 16th Eurographics Workshop on Graphics and Cultural Heritage (GCH) in November 2019 in Sarajevo (Bosnia and Herzegovina).

For technical details or bug reports, contact Alberto Jaspe at ajaspe@crs4.it.


Installation & requirements

This software is written in JavaScript ES6 and HTML5 and is intended to run in a modern web browser. It uses Bootstrap for the interface, gl-matrix for part of the math, and WebGL2 for rendering. Its configuration is stored in JSON files, and the distribution includes a test dataset that illustrates its use. Running it requires a web server, a modern web browser (such as Chrome or Firefox), and a graphics card with OpenGL ES 3.0 capabilities.


Dataset specification & config

Layers specification

Every layer is defined by a set of images that encode its properties; all of them must have the same dimensions.

The shape of the object is defined by a normal map, encoded as an RGB image so that N = 2 * RGB - vec3(1). The resulting vector N must be unitary; otherwise that pixel is discarded.
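The decoding and discard rule above can be sketched as follows (a minimal illustration, not the shipped shader; the unit-length tolerance used here is an assumption, chosen to absorb 8-bit quantization):

```javascript
// Decode a normal from an 8-bit RGB triple following N = 2 * RGB - 1,
// rejecting clearly non-unit results as the viewer does for invalid pixels.
// NOTE: the 0.05 tolerance is an assumption, not a value taken from MARLIE.
function decodeNormal(r, g, b, tolerance = 0.05) {
  const n = [r, g, b].map(c => 2 * (c / 255) - 1);
  const len = Math.hypot(n[0], n[1], n[2]);
  if (Math.abs(len - 1) > tolerance) return null; // discarded pixel
  return n.map(c => c / len); // renormalize to compensate quantization error
}
```

For instance, the "flat" pixel (128, 128, 255) decodes to a normal pointing almost exactly along +Z, while (0, 0, 0) decodes to (-1, -1, -1), which is far from unit length and is therefore discarded.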

The appearance of the object is defined by three maps, which encode the following parameters of a Ward BRDF model: kd (diffuse coefficient), ks (specular coefficient), and gloss (glossiness of the surface).
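For reference, an isotropic Ward lobe built from such per-pixel parameters can be evaluated as in the sketch below (illustrative only; how MARLIE maps the gloss image to the roughness value alpha is an assumption here):

```javascript
// Isotropic Ward BRDF (Ward 1992): diffuse term plus a Gaussian specular
// lobe around the half vector. kd, ks are reflectances, alpha is roughness;
// n, wi, wo are unit vectors (normal, light direction, view direction).
function wardBRDF(kd, ks, alpha, n, wi, wo) {
  const dot = (a, b) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
  const cosI = dot(n, wi), cosO = dot(n, wo);
  if (cosI <= 0 || cosO <= 0) return 0; // below the horizon
  // Half vector between light and view directions
  let h = [wi[0] + wo[0], wi[1] + wo[1], wi[2] + wo[2]];
  const hl = Math.hypot(h[0], h[1], h[2]);
  h = h.map(c => c / hl);
  const cosH = dot(n, h);
  const tan2 = (1 - cosH * cosH) / (cosH * cosH);
  const spec = ks * Math.exp(-tan2 / (alpha * alpha)) /
               (4 * Math.PI * alpha * alpha * Math.sqrt(cosI * cosO));
  return kd / Math.PI + spec;
}
```

At normal incidence (light, view and normal aligned) the specular term reduces to ks / (4 * pi * alpha^2), which is a quick sanity check for an implementation.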

Annotations

For multiresolution annotations, we use an image pyramid. The base level has the same resolution as the other maps, while each upper level halves the width and height of the previous one. There is no need to provide the whole pyramid: you define how many levels are present. The image file for each level is expected to be named file_name = ${file_prefix}${level}${file_postfix}, where the prefix and postfix are defined in the configuration file and the level starts at zero. You must also define a set of ("title", "info") tuples with the text displayed when an annotation is rendered.

Config file

Each dataset must have a "config.json" file to define its configuration:

{
    "name":"Test",
    "info": "HTML text describing this dataset",
    "dimensions": [2048, 1280],
    "alphaLimits": [0.01, 0.5],
    "inputColorSpace": "linear",
    "layers": [
        {
            "name": "Layer 1",
            "maps": {
                "normals": "normals.jpg",
                "kd": "layer1/kd.jpg",
                "ks": "layer1/ks.jpg",
                "gloss": "layer1/gloss.jpg"
            },
            "annotations": {
                "file_prefix": "annot/annot_",
                "file_postfix": ".png",
                "n": 3,
                "infos": [
                    ["First level of annotations", "A description of the layer with HTML embed."],
                    ["Second level of annotations", "A description of the layer with HTML embed."],
                    ["Third level of annotations", "A description of the layer with HTML embed."]
                ]
            }
        },
        {
            "name": "Layer 2",
            "maps": {
                "normals": "normals.jpg",
                "kd": "layer2/kd.jpg",
                "ks": "layer2/ks.jpg",
                "gloss": "layer2/gloss.jpg"
            }
        }
    ]
}

Viewer Configuration

The "test" dataset included in the software distribution illustrates the usage of MARLIE.

Dataset database

In the root folder you can find a configuration file called "datasets_db.json" with tuples of datasets to be listed in the interface:

{
    "Dataset 1": "data/dataset1",
    "Dataset 2": "data/dataset2",
    "And this is dataset 3": "otherpath/dataset3"
}
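Once parsed, the entries of that object can be turned into a list for the interface; a minimal sketch (the function name is hypothetical, and the shape follows the datasets_db.json example above):

```javascript
// Convert the parsed datasets_db.json object into (label, path) entries,
// e.g. for populating the dataset selection menu.
function datasetEntries(db) {
  return Object.entries(db).map(([label, path]) => ({ label, path }));
}
```

Each entry's path is the folder where the dataset's config.json is expected to live.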

Presets per dataset

Each dataset also has a configuration file called "viewer_config.json" that defines the presets (or "options") to be shown, both for the base render and for the lens. Each option combines one layer with some render parameters; the layer is identified by its position in the config.json file, starting from zero. Moreover, a set of default parameters can be defined for all the options, and overwritten inside every preset.
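The override rule can be sketched as a simple object merge (an illustration of the described behavior, not the viewer's internal code):

```javascript
// defaultParams apply to every option; an option's own "params" object,
// when present, overwrites the matching keys.
function effectiveParams(defaultParams, option) {
  return { ...defaultParams, ...(option.params || {}) };
}
```

For example, with defaultParams of { gamma: 2.0, brightness: 2.0 } and an option carrying params of { gamma: 1.8 }, the effective parameters are gamma 1.8 and brightness 2.0.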

The render parameters currently supported are:

- renderMode: index of the rendering mode to use (e.g. full shading, achromatic, normal map, specular only, as in the examples below)
- normalEnhancementK: strength of the geometry enhancement
- normalEnhancementLOD: level of detail used by the geometry enhancement
- brightness: output brightness factor
- gamma: gamma correction exponent

This is an example "viewer_config.json" file for a dataset:

{
    "defaultParams": {
        "renderMode": 0,
        "normalEnhancementK": 0.0,
        "normalEnhancementLOD": 1.0,
        "brightness": 2.0,
        "gamma": 2.0
    },
    "baseOptions": [
        {
            "name": "First layer",
            "info": "This is just an example of a possible layer.",
            "layer": 0
        },
        {
            "name": "Second layer with only specular and low gamma",
            "info": "This is just another example of another possible layer, modifying a parameter",
            "layer": 1,
            "params": {
                "renderMode": 5,
                "gamma": 1.8
            }
        }
    ],
    "lensOptions": [
        {
            "name": "Second layer achromatic with enhanced geometry",
            "info": "Info of this lens layer with HTML embed",
            "layer": 1,
            "params": {
                "renderMode": 1,
                "normalEnhancementK": 2.27,
                "normalEnhancementLOD": 2.66
            }
        },
        {
            "name": "Brighter normal map",
            "info": "Info of this lens layer with HTML embed",
            "layer": 0,
            "params": {
                "renderMode": 3,
                "brightness": 3.5
            }
        },
        {
            "name": "Simple second layer",
            "info": "Info of this lens layer with HTML embed",
            "layer": 0
        }
    ]
}

Visual Computing :: CRS4