In VR, users can enter their data and position their viewpoint within the data itself, breaking through the limitations of viewing 3D/4D images on a 2D desktop screen, such as constantly having to turn the image with a mouse. With natural movements of the head and body, a user can move freely to inspect and interact with an image from any angle and position. Users are instantly able to comprehend and internalize information about important relationships between structures within an image. These new insights, many impossible to perceive on a desktop system, can be used to evaluate existing hypotheses, create new ones, and generate appropriate data analysis strategies.
Freed from being tethered to a mouse as on a desktop computer, and with depth perception equivalent to the real world, a person's hands are unencumbered to simply reach into the data to precisely and intuitively mark, measure, classify, edit, and segment. A cumbersome and frustrating process that on a desktop involves multiple turns of the dataset, changing tools repeatedly, guessing at which object is being selected, and iteratively repositioning from different angles is reduced to simply reaching out and pulling a trigger or pushing a button.
"InViewR makes impossible or enormously time-consuming segmentation tasks possible and reasonable to undertake," states Michael Wussow, Vice President Sales and Marketing, Imaging. "Users are able to take images 80% segmented with automatic algorithms to 100% completion by using sculpting tools to add to, remove from, delete, and join segments. Manual segmentation tasks that on a desktop took researchers weeks or months are completed in hours or days with increased accuracy using semi-automatic, point-and-shoot, and manual painting tools."
Demonstrations of this software will be available at arivis Booth 3015 and Carl Zeiss Booth 2923 during the SfN exhibition. More information, including movies, can be found at www.arivis.com/vr
SOURCE arivis AG