XRDS: Crossroads, The ACM Magazine for Students

3-D printing interactive objects

Tags: Human computer interaction (HCI), User centered design


3-D printers have evolved from professional equipment for industrial design studios to staples in makerspaces—community hubs that offer open access to fabrication machines, electronics tools, and more. As 3-D printers make their way into clubs, schools, libraries, museums, and homes, we should consider what sorts of objects these machines can make and how users will design them.

A look at Thingiverse, an online 3-D printing community, reveals novices mostly create static and decorative objects—figurines, ornaments, and pen or tool holders—stuff that hardly belongs to a new "industrial revolution." In contrast, experienced designers create functional objects, often as assemblies. These include existing, non-printed electronic parts such as sensors and actuators, and mechanical parts such as fasteners and hinges. Designing electro-mechanical assemblies with functional components in CAD remains a complicated task for experts. Designers must specify not only where to place functional components, but how to mount them to allow assembly and ensure functionality.

At the Berkeley Institute of Design (BiD Lab), our research goal is to help hobbyists of all ages, often called "makers," create functional interactive objects on commodity digital-fabrication machines. We seek to enable makers to print working physical user interfaces with minimal additional instrumentation and assembly. Physical user interfaces are pervasive—think of game controllers and musical instruments—and their physicality has important benefits such as tactile feedback and high-performance manipulation [1]. For example, gamers prefer physical input for speed and performance, while musicians are interested in virtuosity and control. Building working devices that also exhibit interactive behavior requires adding electronic sensing components and circuitry to the mechanical design. We are exploring two directions to make this task easier. First, we are building tools that automatically fit electronic components into 3-D printed models. Second, we are investigating ways to replace standard wired electronics with alternative sensing strategies that require adding only a single sensor to a 3-D printed model to recognize a range of different interactions.

Hands-On Design

Toolkits such as Arduino or .NET Gadgeteer have lowered the threshold for experimenting with electronics for interaction, while at the same time creating communities and extensive documentation. This has allowed artists, students, and other non-professionals to leverage capabilities such as lights, sound, and sensing in their projects [2]. However, integrating electronics into 3-D printed objects is difficult. Mounting parts, such as buttons and joysticks, in exactly the right place may require significant changes to a 3-D model (e.g., to add fasteners and clearances, or to split an enclosure into two half-shells).

We ran a formative study that suggests novices can express their intent for the design of physical interfaces by combining physical sculpting of larger shapes and annotation of finer details, for instance, through additional drawings or marks. This study gave us the inspiration to consider tangible modeling as an alternative to CAD.

Inexpensive 3-D scanning technologies, based on smartphones [3] or webcams, are becoming increasingly available. Combined with 3-D printing, 3-D scanning opens up the possibility of a new workflow: A maker starts with a modeling task in physical space, then scans the resulting object to get a digital representation; that digital model is modified algorithmically and finally printed back out. This physical-to-digital-to-physical pipeline could combine some of the benefits of tangible modeling with the flexibility and precision of CAD.

We created a system, called "Makers' Marks," which allows users to physically design custom enclosures with precision mounting geometry for functional components. To design the overall shape, users first sculpt using clay or other physical materials. Then, they add annotations with physical stickers indicating placement for functional components. In our prototype, supported parts include components for user interaction (e.g., joysticks) as well as mechanical parts (e.g., hinges). These stickers are designed with two purposes in mind: They should be legible to people, as well as reliably detectable using computer vision algorithms.

Makers' Marks captures user-created geometry using a 3-D scanner and replaces annotations with precise 3-D geometry from a library. The generated geometry ensures the indicated components can be fastened in by the designer post-print, and, hence, will be accessible to the end-user. By employing physical authoring for rough shapes and digital tools for precise assembly geometry, we hope to enable easier, quicker creation of complex functional objects (e.g. a game controller, shark, box, and baby monitor), as shown in Figure 1.
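To make the substitution step concrete, here is a minimal, hypothetical sketch of how a detected sticker tag might map to mounting geometry. The part names, dimensions, and clearances below are illustrative placeholders, not Makers' Marks' actual library: each library part carries a bounding box plus clearance metadata, and the cavity subtracted from the scanned shell is that box grown by the clearance on every side.

```python
import math

# Hypothetical component library: tag -> (bounding box (w, d, h) in mm,
# clearance in mm). Real entries would come from part datasheets.
PART_LIBRARY = {
    "joystick": ((32.0, 32.0, 18.0), 0.4),
    "button":   ((12.0, 12.0, 7.5), 0.3),
    "hinge":    ((25.0, 8.0, 8.0), 0.5),
}

def cavity_for(tag):
    """Return the (w, d, h) of the cavity to subtract from the shell
    so the tagged component can snap in after printing: the part's
    bounding box grown by its clearance on every side."""
    (w, d, h), clearance = PART_LIBRARY[tag]
    return tuple(dim + 2 * clearance for dim in (w, d, h))
```

For the hypothetical button entry, `cavity_for("button")` grows the 12 × 12 × 7.5 mm box by 0.3 mm per side, leaving room for the part to be fastened in by hand after printing.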

Makers' Marks creates rigid, shelled objects that incorporate surface-mounted functional components. These components must have 3-D models and additional clearance metadata. In spite of these caveats, objects created with this technique can range from whimsical and decorative (a waving shark) to precise and functional (a game controller). The physical-to-digital-to-physical pipeline enables a wide variety of device designs, which is limited only by the maker's imagination.

Other projects have also offered ways to integrate existing parts into novel 3-D models. For example, the faBrickation technique accelerates 3-D printing processes by 3-D printing high-detail areas around assembled low-resolution LEGO blocks [4]. The Enclosed interface allows users to interactively design enclosures around pre-existing electronics [5].

With Makers' Marks and other related projects, a user assembles a number of different off-the-shelf electronic and mechanical parts into a 3-D printed shell. Is it possible to further reduce this complexity by leveraging the flexibility of 3-D printing? Could it be possible to instantaneously transform a 3-D print into an interactive device by adding only a single "super sensor"?

Generating 3-D Geometry for Interactive Sensing

3-D printed plastic has a number of properties that can be leveraged for designing useful sensing techniques. Through the printing process, it can be laid down in arbitrary geometries—mechanisms can be printed fully functional without needing assembly—and multiple colors or materials can be used. We have developed tools and strategies to utilize several of these properties. Sauron is a design tool that enables users to rapidly turn 3-D models of input devices into interactive 3-D printed prototypes by adding a camera pointed into the inside of a device. Our system automatically modifies the shape and color of mechanisms so interaction can be sensed with standard computer vision techniques. Lamello is a technique that senses interaction with a single microphone. It generates 3-D printed tine structures, which are struck by the user and then vibrate at predictable frequencies.

Sauron: The all-seeing camera. Sauron enables makers to 3-D print a complete interactive device in a single step. After printing, they add a miniature camera with an integrated ring light to the prototype. After an interactive registration step, Sauron can track the motion and position of buttons, sliders, joysticks, and other input devices through machine vision performed on an end-user's computer.

Sensing all input components on a device with a complex shape can be challenging, as components may lie outside the viewing frustum of a single camera or be blocked by the device's geometry. To address such challenges, we introduce automatic visibility analysis and model modification to translate human input into visible movement, which can be accurately tracked with standard computer vision algorithms. Our system first determines which components will be visible to the camera based on a maker's placement of a virtual camera in the CAD model during the design phase (see Figure 2b). For components that are not visible, Sauron can modify a component's internal geometry to extend its motion into the camera's viewing frustum using parameterized extrusions (see Figure 2c). Next, Sauron uses ray tracing to determine how optical mirrors may be placed to make motion visible in cases where geometry modification fails because of mechanical interference. We implement these techniques by extending commercial parametric CAD software. The models can be printed on any 3-D printer that has removable support material (see Figure 2d). While computer vision research traditionally strives to uncover information about an unknown environment, our approach modifies a known environment, in this case the digital model of the prototype object to be fabricated, in order to facilitate computer vision (see Figures 2e and 2f).
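The core of the visibility test can be sketched compactly. The function below is an illustrative stand-in for Sauron's frustum check, not the actual plugin code: it reports whether a component's tracked point lies inside the camera's viewing cone, and deliberately ignores occlusion by other geometry (which Sauron resolves with ray tracing and mirror placement).

```python
import math

def in_frustum(cam_pos, cam_dir, fov_deg, point):
    """Return True if `point` lies inside the camera's viewing cone.

    cam_dir must be a unit vector; fov_deg is the full cone angle.
    Occlusion by other geometry is ignored in this sketch.
    """
    # Vector from the camera to the component's tracked point
    v = [p - c for p, c in zip(point, cam_pos)]
    dist = math.sqrt(sum(x * x for x in v))
    if dist == 0.0:
        return False  # point coincides with the camera
    # Cosine of the angle between the camera axis and that vector
    cos_angle = sum(a * b for a, b in zip(cam_dir, v)) / dist
    return cos_angle >= math.cos(math.radians(fov_deg / 2))
```

A point five units straight ahead of a camera looking down +z passes the test; a point off to the side and behind the camera fails it and becomes a candidate for the extrusion or mirror treatments described above.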

To evaluate Sauron's expressivity, we created a series of functional prototypes—a game controller, an ergonomic mouse, and a DJ mixing board. We also asked three knowledgeable CAD users to design DJ mixing boards with our sensing approach in mind. In all cases the users were able to focus on the usability of their prototype interfaces without being impeded by the sensing technique. Additionally, we evaluated 10 pre-made models downloaded from the Internet and determined that even designers who did not have vision-based sensing in mind would have been able to use Sauron for their prototypes in most cases.

The Sauron approach was inspired by prior research on 3-D printing light pipes and integrating optical sensors into prints [6]. It has some important assumptions and limitations: First, our implementation of the CAD plugin can currently only process certain types of hollow models and is not guaranteed to succeed. Second, our current model modification techniques only work for a subset of input components, though they are extensible. Despite these limitations, Sauron enables construction of a useful variety of devices. However, we wanted to explore additional sensing techniques that could leverage other properties of plastic and different sensors.

Lamello: A tine-y technique. Lamello integrates algorithmically generated tine structures into movable components to create passive tangible inputs such as sliders, buttons, and dials (see Figure 3). Manipulating these inputs creates sounds that can be captured using an inexpensive contact microphone or an existing laptop microphone, then interpreted using real-time audio signal processing. Lamello predicts the fundamental frequency of each tine based only on its digital geometry; thus, recognition does not require training examples. The decoded high-level events can then be forwarded to interactive applications. The name "Lamello" is derived from the lamellophone family of instruments, which create sound through vibrating tongues of varying lengths.
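The geometry-only frequency prediction follows from standard beam mechanics. As a sketch (assuming each tine behaves as an ideal Euler-Bernoulli cantilever of rectangular cross-section, with illustrative material constants rather than Lamello's measured values), the first-mode frequency depends only on length, thickness, and the material; the width cancels out.

```python
import math

def tine_frequency(length_m, thickness_m, youngs_modulus_pa, density_kg_m3):
    """First-mode resonant frequency (Hz) of a rectangular cantilever tine.

    Euler-Bernoulli beam model, first mode (beta * L = 1.8751).
    For a rectangular cross-section, I = w*t^3/12 and A = w*t, so the
    width w cancels and only length and thickness remain.
    """
    beta_l = 1.8751
    return (beta_l ** 2 / (2 * math.pi)) * (thickness_m / length_m ** 2) \
        * math.sqrt(youngs_modulus_pa / (12 * density_kg_m3))
```

With rough ABS values (E ≈ 2 GPa, ρ ≈ 1040 kg/m³, both assumptions), a 15 mm long, 1.2 mm thick tine comes out near 1.2 kHz, comfortably audible; lengthening a tine lowers its pitch quadratically, which is what lets a row of tines of varying lengths encode distinct positions.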

There are two main challenges in developing a passive acoustic sensing technique that supports a variety of input controls: designing the physical mechanisms for generating sounds, and developing recognition algorithms that can interpret those sounds in the intended manner.




To generate sounds, we embed tine structures in input components. Our tines are rectangular beams attached at their base to the component and free to deflect at their tip. When an end user's interaction with a component plucks a tine, the tine vibrates the body of the component, and the microphone captures those vibrations as sound. Tines can be arranged in configurations supporting different interactions (e.g., sliding, rotating, pressing). The audio signal of a tine strike is characterized by an initial transient—a short, high-energy sound across a wide range of frequencies—followed by free vibration with a long-decay energy peak at the tine's resonant frequency. Conceptually, our recognizer detects a transient, finds the dominant resonant frequency after the transient passes, and compares it to the predicted tine frequencies.
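That matching step can be illustrated with a short sketch (not Lamello's actual recognizer; transient detection is assumed to have already happened): probe the signal's DFT magnitude only at the frequencies predicted for each tine, and report the strongest.

```python
import math

def classify_strike(samples, rate, predicted_freqs):
    """Return the index of the predicted tine frequency that best
    matches a recorded strike.

    Correlates the signal against a complex exponential at each
    candidate frequency (a Goertzel-style probe) instead of taking
    a full FFT, since only the predicted frequencies matter.
    """
    best_i, best_mag = 0, -1.0
    for i, f in enumerate(predicted_freqs):
        re = sum(s * math.cos(2 * math.pi * f * k / rate)
                 for k, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * f * k / rate)
                 for k, s in enumerate(samples))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_i, best_mag = i, mag
    return best_i
```

Feeding it a synthetic decaying 700 Hz sinusoid with candidates [500, 700, 900] returns index 1; real recordings would additionally need de-noising.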

Recognizing mechanically generated sound for input has important limitations. Movement generates sound—meaning steady state cannot be sensed. At the same time it also has appealing characteristics: compatible components can be fabricated from a single material (e.g., 3-D printed ABS plastic), and "wiring" just requires attaching a microphone. We also developed specific design and fabrication guidelines, and demonstrated several components that use the Lamello approach. Our evaluation showed training-free recognition is possible, though our accuracy could be improved with de-noising techniques. Other researchers are exploring related techniques that characterize the sound of surface textures when a user scratches an object [7]. Laput et al. use an active sensing approach in which a speaker continuously plays sound through a hollow, flute-like pipe that ends in a microphone. When end-users manipulate 3-D printed mechanisms along the pipe, they change its acoustic properties [8].

What's Next?

Our work has explored different methods to create functional and interactive objects through tangible modeling and through novel sensing techniques. Emerging fabrication technologies may soon open additional areas of exploration. We now have machines that can either spray conductive material on the outside of existing objects, or deposit conductive material layer-by-layer inside 3-D printed objects [9, 10]. With such machines, traditional printed circuit boards and wiring may become obsolete and the object itself may become the circuit. In addition, advances in material science, such as continuous liquid interface printing, promise to cut down printing time by an order of magnitude [11]. Cheap and fast fabrication could make it possible to explore larger design spaces of interactive objects by automatically generating alternatives and printing them side-by-side for comparative testing.

All this new hardware will require new design software. One intriguing opportunity lies in the automatic generation (or modification) of interfaces to suit particular people and tasks, e.g., for assistive applications. We believe there is significant territory to be explored in modeling users' individual capabilities, as well as in understanding how to create optimal input devices suited to specific tasks. Software is playing an increasingly important role in defining the physical objects that will surround us in the future. It is an exciting time to be a computer scientist.

References

[1] Klemmer, S. R., Hartmann, B., and Takayama, L. How bodies matter: Five themes for interaction design. In Proc. Conference on Designing Interactive Systems (DIS '06). ACM, New York, 2006, 140–149.

[2] Arduino. https://www.arduino.cc

[3] Autodesk 123D Catch. http://www.123dapp.com/catch

[4] Mueller, S., Mohr, T., Guenther, K., Frohnhofen, J., and Baudisch, P. faBrickation: Fast 3D printing of functional objects by integrating construction kit building blocks. In Proc. Conference on Human Factors in Computing Systems (CHI '14). ACM, New York, 2014, 3827–3834.

[5] Weichel, C., Lau, M., and Gellersen, H. Enclosed: A component-centric interface for designing prototype enclosures. In Proc. International Conference on Tangible, Embedded, and Embodied Interaction (TEI '13). ACM, New York, 2013, 215–218.

[6] Willis, K., Brockmeyer, E., Hudson, S., and Poupyrev, I. Printed optics: 3D printing of embedded optical elements for interactive devices. In Symposium on User Interface Software and Technology (UIST '12). ACM, New York, 2012, 589–598.

[7] Murray-Smith, R., Williamson, J., Hughes, S., and Quaade, T. Stane: Synthesized surfaces for tactile input. In Proc. Conference on Human Factors in Computing Systems (CHI '08). ACM, New York, 2008, 1299–1302.

[8] Laput, G., Brockmeyer, E., Hudson, S. E., and Harrison, C. Acoustruments: Passive, acoustically-driven, interactive controls for handheld devices. In Proc. Conference on Human Factors in Computing Systems (CHI '15). ACM, New York, 2015, 2161–2170.

[9] Optomec. http://www.optomec.com

[10] Voxel8. http://www.voxel8.co/

[11] Carbon. http://carbon3d.com

Author

Valkyrie Savage is a Ph.D. candidate at UC Berkeley. Her research focuses on design tools for 3-D printing, specifically for creating interactive objects like video game controllers. She is broadly interested in technologies to encourage interest and participation in STEAM.

Figures

Figure 1. With the Makers' Marks system, makers create and annotate physical designs of objects (left). Once these are 3-D scanned, the annotations are replaced with relevant geometry (center), and can be printed for components to snap in (right).

Figure 2. Sauron's processing steps start with a hollow 3-D model (a). When a virtual camera model is added, it can detect which components are visible and not visible (b), and modify invisible components as necessary (c). Once the object is printed (d), a single camera can view (e) and track (f) the user-facing components.

Figure 3. Components sensed by Lamello have tines printed at interaction points (e.g., under the stroking path of a slider), which vibrate at predictable frequencies.


Copyright held by Owner(s)/Author(s). Publication rights licensed to ACM. 1528-4972/16/03

The Digital Library is published by the Association for Computing Machinery. Copyright © 2016 ACM, Inc.