1 Year at YCR

A year ago we joined YCR, a non-profit research lab focused on long-term innovation and open sharing, and moved into our awesome new office, with our own dedicated space, an amazing office manager, and everything we needed to get to work!

Above is the video we made to commemorate the occasion. We sure have done a lot in the past year! If you’re interested in any of the projects shown, we’ve probably blogged about them, so search this site. Our best email address at the moment is our gmail address, “elevirtual”. Also, here’s the script of the video, with links for reference:

 

1 Year of Research

It’s eleVR’s 1-year anniversary of being hosted by Y Combinator Research and we wanted to commemorate the occasion by looking back on some projects we’ve worked on in the past year!

If you don’t know us, we’re a nonprofit research group looking at technologies like AR and VR, not for technology’s sake or for selling a product, but in preparation for a future where those technologies become an invisible part of everyday life, when instead of VR and AR being something we think or feel about, they’re a tool we think and feel with. We hope that by sharing our research, designers and technologists will be more conscious of the choices and affordances that can be made now while the technology is still visible.

There have definitely been some themes to our past year of work, so here’s #1:

Technologies for Thinking

We don’t think VR’s power is in simulating reality. We’re interested in using it to create wholly new kinds of experiences that give us new abilities of reasoning, communication, self-expression, and self-reflection that last through the rest of our lives. What the headset shows us isn’t reality, but the experience is real, and it changes how we feel and how we think.

For a basic example, a virtual object, once seen, continues to linger. Virtual objects can be referenced, shared, and pointed at, becoming a real part of our common experience. We started to see this effect in the prototype AR framework that Andrea made a couple of years ago, when we’d pass around the headset and reference the placement of objects to each other. In the past year we’ve put more work into that headset, and have also started working with the HoloLens, to get this effect at a larger scale.

Virtual experiences linger in our bodies too. It’s not uncommon for people new to VR to catch themselves trying to walk through people and objects after they come out of the headset for the first time. We’d like to understand these extraordinary powers and use them for good.

We’re inspired by on-paper visualizations such as graphs and Venn diagrams, icons and abstract art, as well as computer models, interactive diagrams, and games that give us new ways to think even when we’re away from the computer or the page.

We created dozens of virtual Venn diagram variations to get a feel for how different laws of collision in VR, ones that allow overlap, might give us a different way of thinking about containers and categories. According to Lakoff and Johnson, our very concept of categories comes from places like kitchens, where things are sorted into separate drawers and cabinets, so we created a Venn kitchen that obeys different rules, giving a hands-on experience of overlapping categories that we hope might inspire the player to think in more complex ways about their categorizations of things and people.

We used VR to create interactive museum exhibits that are unconstrained by physical laws, and as a side effect found that this thought process helped us design compelling real-life exhibits too.

Our work with tools for thought overlaps with our second theme:

Embodied Knowledge

The thing about VR and AR isn’t just that you see 3D graphics in the headset, but that the technology tracks your actual body’s real movement. This lets us take advantage of a huge set of human skills, things that often get called intuition, that technology has previously ignored but that now we can build rational models around and design for.

Take something as simple as my ability to roughly know how I’m moving my hand through the air, without looking at it. Evelyn’s work on Networked Gestures allows us to send these hand motions into shared spaces online, where we can add gestures to our communication, as well as push virtual objects together and know we’re pushing them, not because we see the hand-push graphic, but because we’re doing the pushing.
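To make the idea concrete, here is a minimal sketch of the kind of message a networked-gesture system might pass around: a timestamped hand position, serialized to JSON so any client in the shared space can replay the motion. The names and fields here are purely illustrative assumptions, not eleVR’s actual protocol.

```python
import json
import time
from dataclasses import dataclass, asdict
from typing import Tuple

@dataclass
class GestureSample:
    """One tracked moment of a hand gesture (hypothetical format)."""
    user: str                        # who made the gesture
    t: float                         # timestamp in seconds
    pos: Tuple[float, float, float]  # hand position (x, y, z) in meters

def encode(sample: GestureSample) -> str:
    """Serialize a sample for sending into the shared space."""
    return json.dumps(asdict(sample))

def decode(msg: str) -> GestureSample:
    """Rebuild the sample on the receiving side."""
    d = json.loads(msg)
    return GestureSample(d["user"], d["t"], tuple(d["pos"]))

# A stream of such samples, sent as the hand moves, is enough for the
# other side to redraw the gesture in a shared virtual space.
sample = GestureSample("evelyn", time.time(), (0.1, 1.2, -0.3))
wire = encode(sample)
```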

We wanted to create a prototype that helps people understand the abstraction of a graph, because graph literacy is one of the biggest predictors of success in grade-school physics. With something as simple as a graph of your hand’s Y position over time, you can see the abstraction and feel its relationship to your body’s motion. We don’t know how effective it is yet, but we’ve created a model that can be tested using tools accessible to any education research lab.
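The core of that graph is simple enough to sketch. Assuming a tracker that streams `(timestamp, (x, y, z))` hand samples (the data shape here is our assumption, not the prototype’s actual code), you extract the Y coordinate per timestamp and smooth it with a moving average to tame tracking jitter:

```python
from collections import deque
from typing import Iterable, List, Tuple

def y_over_time(samples: Iterable[Tuple[float, Tuple[float, float, float]]],
                window: int = 5) -> List[Tuple[float, float]]:
    """Turn tracked hand samples into a smoothed (time, height) series
    suitable for plotting as 'hand Y position over time'."""
    recent = deque(maxlen=window)   # sliding window of recent heights
    series = []
    for t, (_, y, _) in samples:
        recent.append(y)
        # moving average over the last `window` samples
        series.append((t, sum(recent) / len(recent)))
    return series
```

Plotting the returned series against time gives the graph the viewer can compare, in real time, against what their own arm is doing.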

Even our head’s motion through space comes with a lot of knowledge and expectations as to how our view of space should behave. With this in mind, we collaborated with mathematician Henry Segerman and physicist Sabetta Matsumoto to put two different hyperbolic spaces into virtual reality, allowing you to feel the way hyperbolic space behaves when you move through it, rather than merely seeing it. We also wrote a couple of papers this year on the math and tech behind the software, and the work was featured in Nature and a bunch of other places, so that’s cool.

Also pretty cool is our ability to know where things are in the space around us, to grab and arrange objects using our hands, and to group objects into collections that are organized non-linearly. So we’ve been prototyping a programming language that lets you stick programming elements together into chunks of code, arrange those chunks into large programs with a spatial texture that aids readability and understandability, use changes in scale so that larger pieces of code can contain their own sub-programs, and maybe find new and more expressive ways to think about programming altogether.
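One way to picture the idea, as a purely illustrative sketch and not eleVR’s actual prototype: each chunk of code carries a position and a scale, and larger chunks can contain their own sub-chunks, giving the program a spatial, nested structure.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Chunk:
    """A hypothetical spatial code chunk: a labeled piece of program
    placed in 3D space, whose scale lets it hold sub-chunks."""
    label: str
    pos: Tuple[float, float, float]        # where the chunk sits in space
    scale: float = 1.0                     # larger chunks can hold more
    children: List["Chunk"] = field(default_factory=list)

    def contains_code(self) -> bool:
        """A chunk 'contains its own programming' if it has sub-chunks."""
        return bool(self.children)

# A tiny program arranged spatially: a large 'main' chunk holding two
# smaller chunks placed to its left and right.
program = Chunk("main", (0.0, 0.0, 0.0), scale=2.0, children=[
    Chunk("load-data", (-0.5, 0.0, 0.0)),
    Chunk("draw-scene", (0.5, 0.0, 0.0)),
])
```

The nesting-by-scale is the interesting design choice: the spatial arrangement itself, not indentation or files, is what carries the program’s structure.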

All right, theme #3:

The Office of the Future!

Using tools like the ones we’ve mentioned, in the past year we’ve spent a lot more time working with VR and AR, rather than just on VR and AR. Through Andrea’s asymmetric multiplayer game designs, we experienced the effectiveness of elements like gravity, scale, and placement in communicating information, as well as some flaws in our intuition for space.

From there, we delved into social VR and started holding our weekly group meetings in various VR locations. And we’ve found that with any work we do in VR, we always end up on the floor at some point. So we embraced the fundamental truth that sitting at a desk or standing in one position is not what human bodies were made for.

And so M took a deep dive into figuring out how bodies want to work, including completing an entire yoga teacher training course. Undaunted by the ergonomic failures of aesthetic floor-based designs such as beanbags and furry rugs, they forged ahead and ended up with a design based on restorative yoga techniques, using foam floor mats, bolsters, and blocks.

And let me tell you, it is such a great way to work.

The main office is our central prototype and grounding presence, which is replicated both as a networked virtual space, using a 3D model of our office that Elijah made, and in different physical iterations. But the technology also lets us branch our offices out into the wider physical and virtual worlds.

We’ve been bringing our art studios into VR to share our spaces at a distance. During her residency at the Banff Centre, Evelyn shared her own studio and virtual works with us, as well as the studios of other artists, to see the extent to which we could get a sense of how different studio spaces feel in VR.

Which brings us to theme #4:

Art-Based Research

We’ve talked about the goal of finding new ways of thinking and methods of understanding, but what research practice gets you there? If we were merely looking for answers, the scientific method might be a good tool, but you can’t use it to find a hypothesis in the first place.

And that’s why in the past year we’ve been refining our practice of art-based research: the idea that artistic explorations push the boundaries of technology and human expression in ways that help you get to truly new ideas and new questions that aren’t along the standard path. With the help of Evelyn, who joined us last year, we’re borrowing methods from the art world and using them in self-aware ways to further our research.

M made 50 virtual bed sculptures for the piece “Making the Bed”, and in the process we learned about new sorts of spatial organization, new uses of teleportation, and new uses for scale. We came to better understand the way AR objects stick in your brain, and how groups perceive alternate realities together, through their work “Would You Like To See An Invisible Sculpture?”, shown unsolicited at SFMOMA. “Tossing and Turning” is a work that combines 3D elements from a variety of different VR technologies into a new context. Oh, and also in the past year M completed a project to make a spherical video every day for an entire year, and we’ve certainly learned a lot about VR video’s expressive and editing capabilities from that.

Evelyn made self-expressive works on the themes of weight, texture, and scale that showed us how previous technology ignored these elements and how future technology should keep them in mind. Her landscape interventions challenged my assumptions about where AR is done and the speed and scale at which it can be used, and her AR still lifes playfully toy with our reality-based expectations of how virtual objects should behave. She has also explored AR and VR as tools for artistic thought, using both as sketchbook tools for reconceptualizing and reframing mental imagery.

Our artist-selves admire the surrealness of the conflict between the virtual and the real, and then our researcher-selves ask why we have the expectations we do, how we can use them, what we would have to change in our sensibilities in order to change those expectations, and what else these new ways of thinking, seeing, and feeling might lead to.

Conclusion

Looking forward to the next year, we hope to better understand the body’s role in cognition and how we can design for it; to interface with more diverse fields; to find new, powerful representations of thoughts, feelings, and ideas; and, of course, to have enough funding to keep doing it all. So if you have a big pile of research funding looking for a home, we could use it.

A big thank you to everyone at HARC and YCR, to our funders, to Alan Kay and Sam Altman, and especially to M, Evelyn, and Andrea: you’re the best team ever. Here’s to another year.