Difflect is a project connecting print, spatial installation and software, making it possible to experience print media in a digitally enhanced, interactive way.

The setup consists of two cameras and a display which shows a mirror-like, abstract representation of the user. The system is fed with content by placing a printed object in front of it: it recognizes the medium the user is currently consuming and displays digital content related to the printed information. To introduce the system, the user can choose from four foldable printed objects describing the four classical elements. When put in front of the installation, each print object shows a digital representation of its element, allowing hands-on interaction to experience the element's characteristics and behaviour. The project features a highly reduced visual language, putting the focus on the interaction, as well as surround sound (meaning spatially positioned stereo sound).

The entire project was realized with openFrameworks, using OpenNI for Kinect integration, as well as the libraries ofxOpenNI, OpenCV and ofxFernCameraDemo.
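
To give an idea of the software structure without reproducing the whole codebase, here is a minimal, hypothetical skeleton of how the mode switching can be organized in openFrameworks. The names ElementScene and recognizeSheet are my illustration only, not the project's actual code; in the real setup the sheet is identified from the webcam image with the Fern-based tracker and the interaction input comes from the Kinect via ofxOpenNI.

    #include "ofMain.h"
    #include <array>
    #include <memory>

    // One scene per element: air, earth, fire, water (hypothetical interface).
    class ElementScene {
    public:
        virtual ~ElementScene() {}
        virtual void update() = 0;
        virtual void draw() = 0;
    };

    // Stand-in for the print recognition: in the real setup the frame is matched
    // against the four prints; this stub simply reports "no sheet found" (-1).
    int recognizeSheet(const ofPixels& camFrame) { return -1; }

    class ofApp : public ofBaseApp {
    public:
        void setup() {
            sheetCam.setup(640, 480);           // webcam watching the print level
            // scenes[0..3] would be filled with the four element scenes here
        }
        void update() {
            sheetCam.update();
            if (sheetCam.isFrameNew()) {
                currentSheet = recognizeSheet(sheetCam.getPixels());
            }
            if (currentSheet >= 0 && scenes[currentSheet]) scenes[currentSheet]->update();
        }
        void draw() {
            if (currentSheet >= 0 && scenes[currentSheet]) scenes[currentSheet]->draw();
        }
    private:
        ofVideoGrabber sheetCam;                 // the second camera, pointed at the table
        std::array<std::shared_ptr<ElementScene>, 4> scenes;
        int currentSheet = -1;                   // index of the recognized sheet, -1 = none
    };

    int main() {
        ofSetupOpenGL(1280, 720, OF_WINDOW);
        ofRunApp(new ofApp());
    }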

Difflect was the final project of my communication design studies at the University of Applied Sciences Würzburg. My professors were Prof. Erich Schöls and Prof. Uli Braun.

The name “Difflect” is a combination of “to diffuse” and “to reflect”, two words that describe the idea behind the setup quite well. The screen acts like a membrane through which information diffuses in both directions: the system perceives which content is currently being viewed on the print level, and in return sends back only information that is relevant to and connected with that print level. The print object is both a source of information for the user and the navigational element in digital space.

At the same time, the screen serves as a reflection of the real world: the user sees a highly reduced version of himself. Self-perception was a very important aspect for me, as it strengthens the connection between the user and the digital content he is interacting with.
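
The reduced mirror image itself can be sketched in a few lines, assuming the depth frame arrives as a single-channel 16-bit image (as ofxOpenNI and similar Kinect addons provide): every pixel whose distance lies within a band around the user becomes part of a flat, white silhouette, everything else stays black. The helper below is an illustration of the idea, not the project's actual rendering code.

    #include "ofMain.h"

    // Hypothetical helper: turns a single-channel Kinect depth frame into a flat
    // white silhouette. nearClip and farClip are millimetre thresholds that frame
    // the user standing in front of the installation.
    ofTexture makeSilhouette(const ofShortPixels& depth,
                             unsigned short nearClip, unsigned short farClip) {
        ofPixels mask;
        mask.allocate(depth.getWidth(), depth.getHeight(), OF_PIXELS_GRAY);
        for (std::size_t i = 0; i < depth.size(); i++) {
            unsigned short d = depth[i];
            mask[i] = (d > nearClip && d < farClip) ? 255 : 0;
        }
        ofTexture tex;
        tex.loadData(mask);      // upload the mask and draw it as the "mirror"
        return tex;
    }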

My initial plan for the project was to create a printed part that equals the digital one in complexity; for my bachelor's degree, I put the focus on the digital side. Each of the four elements has its own ways and possibilities to interact. The paper sheets give a short introduction to the respective element and invite the user to interact.

Content

Air lets you use the power of your hands to create your own wind phenomena and make them visible on screen. It is the simplest way of interacting and can be seen as an introduction to the system.
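
A minimal sketch of the idea, assuming the tracked hand arrives as a 2D point (here the mouse stands in for it): particles drift across the screen and are pushed away by the moving hand, so the faster you move, the stronger the gust. This is only an illustration, not the original air level.

    #include "ofMain.h"

    struct Particle {
        glm::vec2 pos, vel;
    };

    // Drop-in ofApp replacement: particles made visible as small dots get pushed
    // around by the motion of the hand (here simulated by the mouse).
    class AirSketch : public ofBaseApp {
    public:
        void setup() {
            ofBackground(0);
            lastHand = glm::vec2(ofGetMouseX(), ofGetMouseY());
            for (int i = 0; i < 2000; i++) {
                particles.push_back({ { ofRandomWidth(), ofRandomHeight() }, { 0, 0 } });
            }
        }
        void update() {
            glm::vec2 hand(ofGetMouseX(), ofGetMouseY());   // stand-in for the tracked hand
            glm::vec2 handVel = hand - lastHand;
            lastHand = hand;
            for (auto& p : particles) {
                float dist = glm::distance(p.pos, hand);
                if (dist < 150) {
                    // the closer the particle and the faster the hand, the stronger the gust
                    p.vel += handVel * ofMap(dist, 0, 150, 0.4f, 0.0f);
                }
                p.vel *= 0.95f;                             // drag, so gusts die down again
                p.pos += p.vel;
                p.pos.x = ofWrap(p.pos.x, 0, ofGetWidth()); // wrap around the screen edges
                p.pos.y = ofWrap(p.pos.y, 0, ofGetHeight());
            }
        }
        void draw() {
            ofSetColor(255);
            for (auto& p : particles) ofDrawCircle(p.pos.x, p.pos.y, 1.5f);
        }
    private:
        std::vector<Particle> particles;
        glm::vec2 lastHand;
    };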


Placing the earth sheet on the table lets you create and deform your own landscape: pull up mountains or press in canyons. By rotating the sheet, you can navigate your newly created landscape. Turning the paper over to the other side and back again resets the object and starts you off with a new, random landscape.
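
The core of such a landscape can be sketched as a simple height grid, assuming the "push" input (where and how strongly a hand presses) has already been extracted from the Kinect depth image; names and numbers below are purely illustrative.

    #include "ofMain.h"

    // Hypothetical landscape grid: heights can be pulled up or pressed in around a
    // point, reset to a fresh random terrain, and turned into a wireframe mesh.
    class Landscape {
    public:
        void setup(int cols_, int rows_, float cellSize_) {
            cols = cols_; rows = rows_; cellSize = cellSize_;
            heights.assign(cols * rows, 0.0f);
            reset();
        }
        // raise (amount > 0) or press in (amount < 0) the terrain around cell (cx, cy)
        void deform(int cx, int cy, float radius, float amount) {
            for (int y = 0; y < rows; y++) {
                for (int x = 0; x < cols; x++) {
                    float d = glm::distance(glm::vec2(x, y), glm::vec2(cx, cy));
                    if (d < radius) heights[y * cols + x] += amount * (1.0f - d / radius);
                }
            }
        }
        // flipping the sheet over and back triggers this: a new random landscape
        void reset() {
            float seed = ofRandom(1000.0f);
            for (int y = 0; y < rows; y++)
                for (int x = 0; x < cols; x++)
                    heights[y * cols + x] = ofNoise(x * 0.1f, y * 0.1f, seed) * 60.0f;
        }
        ofMesh buildMesh() const {
            ofMesh mesh;
            mesh.setMode(OF_PRIMITIVE_LINES);
            for (int y = 0; y < rows; y++) {
                for (int x = 0; x < cols; x++) {
                    glm::vec3 v = vertexAt(x, y);
                    if (x + 1 < cols) { mesh.addVertex(v); mesh.addVertex(vertexAt(x + 1, y)); }
                    if (y + 1 < rows) { mesh.addVertex(v); mesh.addVertex(vertexAt(x, y + 1)); }
                }
            }
            return mesh;
        }
    private:
        glm::vec3 vertexAt(int x, int y) const {
            return glm::vec3(x * cellSize, heights[y * cols + x], y * cellSize);
        }
        int cols = 0, rows = 0;
        float cellSize = 0;
        std::vector<float> heights;
    };

Rotating the sheet would then simply rotate the view around the mesh, and the flip gesture calls reset().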


The fire sheet has its own fireplace in store, allowing you to turn your hand into a virtual torch and set your surroundings on fire. Fear not: by rotating the sheet, you can switch to the extinguishing mode, which allows you to clean up the mess.
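
The look of that level can be approximated with the classic heat-grid fire trick, assuming the hand position and the sheet rotation (which selects torch or extinguishing mode) are already tracked: a grid of heat values that rises and cools, fed or cleared wherever the hand touches. Again an illustration, not the original code.

    #include "ofMain.h"

    class FireGrid {
    public:
        void setup(int w, int h) {
            width = w; height = h;
            heat.assign(w * h, 0.0f);
        }
        // the hand as torch (extinguishMode = false) or as extinguisher (true)
        void touch(int cx, int cy, bool extinguishMode) {
            if (cx < 2 || cx >= width - 2 || cy < 2 || cy >= height - 2) return;
            for (int y = -2; y <= 2; y++)
                for (int x = -2; x <= 2; x++)
                    heat[(cy + y) * width + (cx + x)] = extinguishMode ? 0.0f : 1.0f;
        }
        void update() {
            // each cell takes the average of the three cells below it and cools a bit,
            // so the flames rise and fade out
            for (int y = 0; y < height - 1; y++) {
                for (int x = 1; x < width - 1; x++) {
                    float below = (heat[(y + 1) * width + x - 1]
                                 + heat[(y + 1) * width + x]
                                 + heat[(y + 1) * width + x + 1]) / 3.0f;
                    heat[y * width + x] = ofClamp(below - 0.015f, 0.0f, 1.0f);
                }
            }
        }
        void draw(float cellSize) const {
            for (int y = 0; y < height; y++) {
                for (int x = 0; x < width; x++) {
                    float h = heat[y * width + x];
                    if (h <= 0.0f) continue;
                    ofSetColor(255 * h, 128 * h, 0);    // hotter cells glow brighter
                    ofDrawRectangle(x * cellSize, y * cellSize, cellSize, cellSize);
                }
            }
        }
    private:
        int width = 0, height = 0;
        std::vector<float> heat;
    };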


Water is the most complex level of the project and consists of two modes between which you can switch by rotating the sheet. The first mode presents a water surface you can play with and create waves on. You can let two waves run into each other and watch how they interfere and cancel each other out.
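
Under the hood, such a surface can be built with the well-known two-buffer ripple algorithm; the sketch below is a generic version of it, not the project's code. Every disturbance spreads outwards, and where two wave fronts meet, their heights simply add up, which produces exactly the cancelling effect described above.

    #include "ofMain.h"

    class WaterSurface {
    public:
        void setup(int w, int h) {
            width = w; height = h;
            current.assign(w * h, 0.0f);
            previous.assign(w * h, 0.0f);
        }
        // a hand touching the surface starts a wave at (x, y)
        void disturb(int x, int y, float strength) {
            if (x < 1 || x >= width - 1 || y < 1 || y >= height - 1) return;
            previous[y * width + x] = strength;
        }
        void update(float damping = 0.985f) {
            for (int y = 1; y < height - 1; y++) {
                for (int x = 1; x < width - 1; x++) {
                    int i = y * width + x;
                    // each cell is driven by its four neighbours in the previous frame
                    current[i] = (previous[i - 1] + previous[i + 1]
                                + previous[i - width] + previous[i + width]) / 2.0f
                                - current[i];
                    current[i] *= damping;      // waves slowly die down
                }
            }
            std::swap(current, previous);
        }
        float heightAt(int x, int y) const { return previous[y * width + x]; }
    private:
        int width = 0, height = 0;
        std::vector<float> current, previous;
    };

heightAt() would then feed whatever visualisation is drawn on top of the grid.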

The second mode lets you experience water molecules and how they react to temperature. By rotating the sheet, you set the temperature and, with it, the concentration of the molecules; you can go all the way from ice to boiling water. Rotating the sheet even further zooms out and takes you back to the surface view.
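
A rough sketch of the molecule mode, assuming the sheet rotation has already been mapped to a temperature value between 0 and 1: the temperature controls how far each particle may stray from its rest position in the lattice, which loosely stands in for the change from a rigid ice grid to freely moving molecules in boiling water. Names and numbers are again purely illustrative.

    #include "ofMain.h"

    struct Molecule {
        glm::vec2 rest;   // position in the ice lattice
        glm::vec2 pos;    // current, jittering position
    };

    class MoleculeField {
    public:
        void setup(int cols, int rows, float spacing) {
            for (int y = 0; y < rows; y++) {
                for (int x = 0; x < cols; x++) {
                    glm::vec2 p(x * spacing, y * spacing);
                    molecules.push_back({ p, p });
                }
            }
        }
        // temperature in [0, 1]: 0 = ice, 1 = boiling water
        void update(float temperature) {
            float range = 2.0f + temperature * 40.0f;   // how far a molecule may stray
            for (auto& m : molecules) {
                m.pos.x += ofRandom(-2.0f, 2.0f) * (0.2f + temperature);
                m.pos.y += ofRandom(-2.0f, 2.0f) * (0.2f + temperature);
                glm::vec2 offset = m.pos - m.rest;
                float len = glm::length(offset);
                if (len > range) m.pos = m.rest + offset / len * range;  // stay near the lattice
            }
        }
        void draw() const {
            ofSetColor(255);
            for (const auto& m : molecules) ofDrawCircle(m.pos.x, m.pos.y, 3.0f);
        }
    private:
        std::vector<Molecule> molecules;
    };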


The setup: the Kinect and a second webcam, held together by a custom construction, mounted on a tripod and placed behind the display.