A renaissance in the development of virtual (VR), augmented (AR), and mixed reality (MR) technologies, with a focus on consumer and industrial applications, is underway. As data becomes ubiquitous in our lives, a need arises to revisit the role of our bodies, particularly in relation to data and information. Our observation is that VR/AR/MR technology development is framed in terms of promissory narratives: visions of the future that develop alongside the underlying enabling technologies and create new use contexts for virtual experiences. This vision is rooted in the combination of responsive, interactive, dynamic, sharable data streams with an augmentation of the physical senses that yields capabilities beyond those normally humanly possible. In parallel with the varied definitions of information and approaches to elucidating information behavior, a myriad of definitions and methods for measuring and understanding presence in virtual experiences exist. These and other ideas will be tested by designers, developers, and technology adopters as the broader ecology of head-worn devices for virtual experiences evolves to reap the full potential and benefits of these emerging technologies.
Scalable Metadata Environments (MDEs) are an artistic approach to designing immersive environments for large-scale data exploration in which users interact with data by forming multiscale patterns that they alternately disrupt and reform. Developed and prototyped as part of an art-science research collaboration, we define an MDE as a 4D virtual environment structured by quantitative and qualitative metadata describing multidimensional data collections. Entire data sets (e.g., tens of millions of records) can be visualized and sonified at multiple scales and at different levels of detail, so they can be explored interactively in real time within MDEs. They are designed to reflect similarities and differences in the underlying data or metadata such that patterns can be visually/aurally sorted in an exploratory fashion by an observer who is unfamiliar with the details of the mapping from data to visual, auditory, or dynamic attributes. While many approaches to visual and auditory data mining exist, MDEs are distinct in that they utilize qualitative and quantitative data and metadata to construct multiple interrelated conceptual coordinate systems. These "regions" function as conceptual lattices for scalable auditory and visual representations within virtual environments computationally driven by multi-GPU, CUDA-enabled fluid dynamics systems.
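To make the data-to-attribute mapping concrete, the following minimal Python sketch shows one hypothetical way a metadata record might drive visual and auditory parameters at a given level of detail. It is an illustration under assumed conventions, not the MDE implementation; the field names ("taxon", "depth_m", "sample_count") and the specific mapping functions are invented for the example.

```python
# Illustrative sketch (not the authors' implementation): mapping one
# metadata record to visual and sonification attributes at a given
# level of detail. All field names and formulas are hypothetical.
import zlib
from dataclasses import dataclass

@dataclass
class RenderAttributes:
    color_hue: float  # 0-1, derived from a qualitative field
    size: float       # world units, derived from a quantitative field
    pitch_hz: float   # sonification pitch
    gain: float       # sonification loudness, 0-1

def hue_from_category(category: str) -> float:
    """Hash a qualitative metadata value onto a stable 0-1 hue."""
    return (zlib.crc32(category.encode()) % 360) / 360.0

def map_record(record: dict, level_of_detail: int) -> RenderAttributes:
    """Map one record to attributes; coarser levels shrink per-record detail."""
    hue = hue_from_category(record["taxon"])
    # Quantitative fields drive continuous attributes; scaling by the level
    # of detail lets coarse views emphasize aggregate structure.
    size = record["sample_count"] ** 0.5 / (level_of_detail + 1)
    pitch = 110.0 * 2 ** (record["depth_m"] / 1000.0)  # deeper -> higher octave
    gain = min(1.0, record["sample_count"] / 1e4)
    return RenderAttributes(hue, size, pitch, gain)

print(map_record({"taxon": "SAR11", "depth_m": 200, "sample_count": 2500}, 1))
```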
This panel and dialog-paper explores the potential at the intersection of art, science, immersion, and high-dimensional "big" data to create new forms of engagement, insight, and cultural forms. We will address questions such as: "What kinds of research questions can be identified at the intersection of art + science + immersive environments that can't be expressed otherwise?" "How is art+science+immersion distinct from state-of-the-art visualization?" "What does working with immersive environments and visualization offer that other approaches don't or can't?" "Where does immersion fall short?" We will also explore current trends in the application of immersion for gaming, scientific data, entertainment, simulation, social media, and other new forms of big data. We ask what expressive, arts-based approaches can contribute to these forms in the broad cultural landscape of immersive technologies.
The ecological sciences face the challenge of detecting subtle changes, sometimes over large areas, across varied temporal scales. The challenge is thus to measure patterns of slow, subtle change occurring along multiple spatial and temporal scales, and then to visualize those changes in a way that makes important variations visceral to the observer. Imaging plays an important role in ecological measurement, but existing techniques are often limited with respect to their spatial resolution, view angle, and/or temporal resolution. Furthermore, integrating imagery acquired through different modalities is often difficult, if not impossible. This research envisions a community-based, participatory approach built around augmented rephotography of ecosystems. We present a case study on monitoring the urban tree canopy. The goal is to explore, for a set of urban locations, the integration of ground-level rephotography with available LiDAR data, and to create a dynamic view of the urban forest and its changes across various spatial and temporal scales. This case study provides the opportunity to explore various augments to improve the ground-level image capture process, protocols to support 3D inference from the contributed photography, and both in-situ and web-based visualizations of temporal change.
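One concrete step such a pipeline needs is registering a newly contributed photograph against a reference view so that repeat captures can be compared pixel-to-pixel over time. The Python sketch below, using OpenCV feature matching and a RANSAC homography, is an illustrative assumption about how that registration might be done, not the protocol developed in this work.

```python
# A minimal rephotography-alignment sketch: warp a newly contributed photo
# into the viewpoint of a reference image so residual differences reflect
# scene change. Illustrative only; not the authors' actual pipeline.
import cv2
import numpy as np

def align_to_reference(new_path: str, ref_path: str) -> np.ndarray:
    new = cv2.imread(new_path, cv2.IMREAD_GRAYSCALE)
    ref = cv2.imread(ref_path, cv2.IMREAD_GRAYSCALE)

    # Detect and describe local features in both images.
    orb = cv2.ORB_create(nfeatures=2000)
    kp_new, des_new = orb.detectAndCompute(new, None)
    kp_ref, des_ref = orb.detectAndCompute(ref, None)

    # Match descriptors and keep the strongest correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_new, des_ref),
                     key=lambda m: m.distance)[:200]

    # Estimate a homography with RANSAC, then warp the new photo onto
    # the reference viewpoint.
    src = np.float32([kp_new[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = ref.shape
    return cv2.warpPerspective(new, H, (w, h))
```

A homography is only strictly valid for planar scenes or pure camera rotation, which is one reason 3D inference from LiDAR and multi-view photography matters for scenes with significant depth, such as tree canopies.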
As cities world-wide adopt and implement reforestation initiatives to plant millions of trees in urban areas, they are engaging in what is essentially a massive ecological and social experiment. Existing air-borne, space-borne, and field-based imaging and inventory mechanisms fail to provide key information on urban tree ecology that is crucial to informing management and policy and to supporting citizen initiatives for the planting and stewardship of trees. The shortcomings of current approaches include limited spatial and temporal resolution, poor vantage points, cost constraints, and biological metric limitations. Collectively, these limit their effectiveness as real-time inventory and monitoring tools. Novel methods for imaging and monitoring the status of these emerging urban forests, and for encouraging their ongoing stewardship by the public, are required to ensure their success. This art-science collaboration proposes to re-envision citizens' relationship with urban spaces by foregrounding urban trees in relation to local architectural features while simultaneously creating new methods for urban forest monitoring. We explore creating a shift from overhead imaging or field-based tree survey data acquisition methods to continuous, ongoing monitoring by citizen scientists as part of a mobile augmented reality experience. We consider the possibilities of this experience as a medium for interacting with and visualizing urban forestry data and for creating cultural engagement with urban ecology.
ATLAS in silico is an interactive installation/virtual environment that provides an aesthetic encounter with metagenomics data (and contextual metadata) from the Global Ocean Survey (GOS). The installation creates a visceral experience of the abstraction of nature into vast data collections, a practice that connects the expeditionary science of the 19th Century with 21st Century expeditions like the GOS. Participants encounter a dream-like, highly abstract, and data-driven virtual world that combines the aesthetics of fine-lined copper engraving and the grid-like layouts of 19th Century scientific representation with 21st Century digital aesthetics, including wireframes and particle systems. It is resident at the Calit2 Immersive Visualization Laboratory on the campus of UC San Diego, where it continues in active development. The installation utilizes a combination of infrared motion tracking, custom computer vision, multi-channel (10.1) spatialized interactive audio, 3D graphics, data sonification, audio design, networking, and the Varrier™ 60-tile, 100-million-pixel barrier-strip auto-stereoscopic display. Here we describe the installation's physical and audio display systems, together with a hybrid strategy, developed in the context of this artwork, for multi-channel spatialized interactive audio rendering in immersive virtual reality that combines amplitude-, delay-, and physical-modeling-based real-time spatialization approaches for enhanced expressivity in the virtual sound environment. The desire to represent a combination of qualitative and quantitative multidimensional, multi-scale data informs the artistic process and overall system design. We discuss the resulting aesthetic experience in relation to the overall system.
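As a rough illustration of the first two ingredients of such a hybrid strategy, the Python sketch below combines inverse-distance amplitude panning with per-channel propagation delays for a virtual source among a ring of loudspeakers. The speaker layout, constants, and normalization are assumptions made for the example, and the physical-modeling layer is omitted entirely; this is not the installation's actual 10.1 configuration or renderer.

```python
# Hedged sketch: amplitude- and delay-based spatialization for one virtual
# sound source over a hypothetical ring of speakers (physical modeling omitted).
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, at roughly room temperature

def spatialize(source_xy, speaker_xy):
    """Return per-speaker gains (inverse-distance) and delays (seconds)."""
    source = np.asarray(source_xy, dtype=float)
    speakers = np.asarray(speaker_xy, dtype=float)
    dists = np.linalg.norm(speakers - source, axis=1)
    gains = 1.0 / np.maximum(dists, 0.1)   # clamp to avoid blow-up near a speaker
    gains /= np.sqrt(np.sum(gains ** 2))   # constant-power normalization
    delays = dists / SPEED_OF_SOUND        # propagation delay per channel
    return gains, delays

# Ten speakers on a 3 m ring, source 1 m to the listener's right.
ring = [(3 * np.cos(a), 3 * np.sin(a))
        for a in np.linspace(0, 2 * np.pi, 10, endpoint=False)]
gains, delays = spatialize((1.0, 0.0), ring)
print(np.round(gains, 3), np.round(delays * 1000, 2))  # gains; delays in ms
```

Applying each channel's gain and delay to the source signal yields intensity and time-of-arrival cues simultaneously; a physical-modeling layer, as described above, would then add cues such as reflections and source directivity.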