Projects

Many of the MIT Media Lab research projects described in the following pages are conducted under the auspices of sponsor-supported, interdisciplinary Media Lab centers, consortia, joint research programs, and initiatives. They are:

Autism & Communication Technology Initiative
The Autism & Communication Technology Initiative utilizes the unique features of the Media Lab to foster the development of innovative technologies that can enhance and accelerate the pace of autism research and therapy. Researchers are especially invested in creating technologies that promote communication and independent living by: enabling non-autistic people to understand the ways autistic people are trying to communicate; improving autistic people's ability to use receptive and expressive language along with other means of functional, non-verbal expression; and providing telemetric support that reduces reliance on caregivers' physical proximity, yet still enables enriching and natural connectivity as wanted and needed.

CE 2.0
Most of us are awash in consumer electronics (CE) devices: from cell phones, to TVs, to dishwashers. They provide us with information, entertainment, and communications, and assist us in accomplishing our daily tasks. Unfortunately, most are not as helpful as they could and should be; for the most part, they are dumb, unaware of us or our situations, and often difficult to use. In addition, most CE devices cannot communicate with our other devices, even when such communication and collaboration would be of great help. The Consumer Electronics 2.0 initiative (CE 2.0) is a collaboration between the Media Lab and its sponsor companies to formulate the principles for a new generation of consumer electronics that are highly connected, seamlessly interoperable, situation-aware, and radically simpler to use. Our goal is to show that as computing and communication capability seep into more of our everyday devices, these devices do not have to become more confusing and complex, but rather can become more intelligent in a cooperative and user-friendly way.

Center for Civic Media
Communities need information to make decisions and take action: to provide aid to neighbors in need, to purchase an environmentally sustainable product and shun a wasteful one, to choose leaders on local and global scales. Communities are also rich repositories of information and knowledge, and often develop their own innovative tools and practices for information sharing. Existing systems to inform communities are changing rapidly, and new ecosystems are emerging where old distinctions like writer/audience and journalist/amateur have collapsed. The Civic Media group is a partnership between the MIT Media Lab and Comparative Media Studies at MIT. Together, we work to understand these new ecosystems and to build tools and systems that help communities collect and share information and connect that information to action. We work closely with communities to understand their needs and strengths, and to develop useful tools together using collaborative design principles. We particularly focus on tools that can help amplify the voices of communities often excluded from the digital public sphere and connect them with new audiences, as well as on systems that help us understand media ecologies, augment civic participation, and foster digital inclusion.

Center for Future Storytelling
The Center for Future Storytelling at the Media Lab is rethinking storytelling for the 21st century. The Center takes a new and dynamic approach to how we tell our stories, creating new methods, technologies, and learning programs that recognize and respond to the changing communications landscape. The Center builds on the Media Lab's more than 25 years of experience in developing society-changing technologies for human expression and interactivity. By applying leading-edge technologies to make stories more interactive, improvisational, and social, researchers are working to transform audiences into active participants in the storytelling process, bridging the real and virtual worlds, and allowing everyone to make and share their own unique stories. Research also explores ways to revolutionize imaging and display technologies, including developing next-generation cameras and programmable studios, making movie production more versatile and economical.

Center for Mobile Learning
The Center for Mobile Learning invents and studies new mobile technologies to promote learning anywhere, anytime, for anyone. The Center focuses on mobile tools that empower learners to think creatively, collaborate broadly, and develop applications that are useful to themselves and others around them. The Center's work covers location-aware learning applications, mobile sensing and data collection, augmented reality gaming, and other educational uses of mobile technologies. The Center's first major activity will focus on App Inventor, a programming system that makes it easy for learners to create mobile apps by fitting together puzzle piece-shaped blocks in a web browser.

The most current information about our research is available on the MIT Media Lab Web site, at http://www.media.mit.edu/research/.

MIT Media Lab

April 2013

Page i

City Science
The world is experiencing a period of extreme urbanization. In China alone, 300 million rural inhabitants will move to urban areas over the next 15 years. This will require building an infrastructure equivalent to the one housing the entire population of the United States in a matter of a few decades. In the future, cities will account for nearly 90% of global population growth, 80% of wealth creation, and 60% of total energy consumption. Developing better strategies for the creation of new cities is therefore a global imperative. Our need to improve our understanding of cities, however, is pressed not only by the social relevance of urban environments, but also by the availability of new strategies for city-scale interventions that are enabled by emerging technologies. Leveraging advances in data analysis, sensor technologies, and urban experiments, City Science will provide new insights into creating a data-driven approach to urban design and planning. To build the cities that the world needs, we need a scientific understanding of cities that considers our built environments and the people who inhabit them. Our future cities will desperately need such understanding.

Communications Futures Program
The Communications Futures Program conducts research on industry dynamics, technology opportunities, and regulatory issues that form the basis for communications endeavors of all kinds, from telephony to RFID tags. The program operates through a series of working groups led jointly by MIT researchers and industry collaborators. It is highly participatory, and its agenda reflects the interests of member companies that include both traditional stakeholders and innovators. It is jointly directed by Dave Clark (CSAIL), Charles Fine (Sloan School of Management), and Andrew Lippman (Media Lab).

Connection Science and Engineering
Our lives have been transformed by networks that combine people and computers in new ways. They have revolutionized the nature of the economy, business, government, politics, and our day-to-day existence. But there is little understanding of the fundamental nature of these networks, precisely because the combination of human and technological elements poses a host of conceptual and empirical challenges. Our goal is to forge the foundations of an integrated framework for understanding the connected world we live in. This requires a multidisciplinary, interdepartmental effort that leverages and supports existing disciplinary network projects. The Center is jointly directed by Asu Ozdaglar (EECS) and Alex 'Sandy' Pentland.

Consumer Electronics Laboratory
The Consumer Electronics Laboratory provides a unique research environment to explore ideas, make things, and innovate in new directions for consumer products and services. Research projects, which span the entire Media Lab and beyond, focus on: innovative materials and design/fabrication methods for them; new power technologies; new sensors, actuators, and displays; self-managing, incrementally and limitlessly scalable ecosystems of smart devices; cooperative wireless communications; co-evolution of devices and content; and user experience. An overarching theme that runs through all the work is the co-evolution of design principles and technological discoveries, resulting in simple, ubiquitous, easy- and delightful-to-use devices that know a great deal about one another, the world, and the people in their proximity.

Digital Life
Digital Life consortium activities engage virtually the entire faculty of the Media Lab around the theme of "open innovation." Researchers divide the topic into three areas: open communications, open knowledge, and open everything. The first explores the design and scalability of agile, grassroots communications systems that incorporate a growing understanding of emergent social behaviors in a digital world; the second considers a cognitive architecture that can support many features of "human intelligent thinking" and its expressive and economic use; and the third extends the idea of inclusive design to immersive, affective, and biological interfaces and actions.

Things That Think
Things That Think is inventing the future of digitally augmented objects and environments. Toward this end, Things That Think researchers are developing sophisticated sensing and computational architectures for networks of everyday things; designing seamless interfaces that bridge the digital and physical worlds while meeting the human need for creative expression; and creating an understanding of context and affect that helps things "think" at a much deeper level. Things That Think projects under way at the Lab range from inventing the city car of the future to designing a prosthesis with the ability to help a person or machine read social-emotional cues: research that will create the technologies and tools to redefine the products and services of tomorrow.


V. Michael Bove Jr.: Object-Based Media .......... 1
  1. 3D Telepresence Chair .......... 1
  2. Calliope .......... 1
  3. Consumer Holo-Video .......... 1
  4. Direct Fringe Writing of Computer-Generated Holograms .......... 1
  5. Everything Tells a Story .......... 1
  6. Guided-Wave Light Modulator .......... 2
  7. Infinity-by-Nine .......... 2
  8. Living Observatory: Arboreal Telepresence .......... 2
  9. Narratarium .......... 2
  10. Pillow-Talk .......... 2
  11. ProtoTouch: Multitouch Interfaces to Everyday Objects .......... 2
  12. ShakeOnIt .......... 3
  13. Simple Spectral Sensing .......... 3
  14. Slam Force Net .......... 3
  15. SurroundVision .......... 3
  16. The "Bar of Soap": Grasp-Based Interfaces .......... 3
  17. Vision-Based Interfaces for Mobile Devices .......... 4

Ed Boyden: Synthetic Neurobiology .......... 4
  18. Direct Engineering and Testing of Novel Therapeutic Platforms for Treatment of Brain Disorders .......... 4
  19. Exploratory Technologies for Understanding Neural Circuits .......... 4
  20. Hardware and Systems for Control of Neural Circuits with Light .......... 4
  21. Molecular Reagents Enabling Control of Neurons and Biological Functions with Light .......... 5
  22. Recording and Data-Analysis Technologies for Observing and Analyzing Neural Circuit Dynamics .......... 5
  23. Understanding Neural Circuit Computations and Finding New Therapeutic Targets .......... 5

Cynthia Breazeal: Personal Robots .......... 6
  24. AIDA: Affective Intelligent Driving Agent .......... 6
  25. Animal-Robot Interaction .......... 6
  26. Cloud-HRI .......... 6
  27. Computationally Modeling Interpersonal Trust Using Nonverbal Behavior for Human-Robot Interactions .......... 7
  28. DragonBot: Android phone robots for long-term HRI .......... 7
  29. Huggable: A Robotic Companion for Long-Term Health Care, Education, and Communication .......... 7
  30. MDS: Crowdsourcing Human-Robot Interaction: Online Game to Study Collaborative Human Behavior .......... 7
  31. Mind-Theoretic Planning for Robots .......... 8
  32. Robotic Textiles .......... 8
  33. Socially Assistive Robotics: An NSF Expedition in Computing .......... 8
  34. Storytelling in the Preschool of Future .......... 8
  35. The Helping Hands .......... 8
  36. TinkRBook: Reinventing the Reading Primer .......... 9
  37. World Literacy Tablets .......... 9
  38. Zipperbot: Robotic Continuous Closure for Fabric Edge Joining .......... 9

Leah Buechley: High-Low Tech .......... 9
  39. aireForm: Refigured Shape-Changing Fashion .......... 9
  40. Circuit Sketchbook .......... 10
  41. Codeable Objects .......... 10
  42. Computational Textiles Curriculum .......... 10
  43. DIY Cellphone .......... 10
  44. DressCode .......... 11
  45. Exploring Artisanal Technology .......... 11
  46. LilyPad Arduino .......... 11
  47. LilyTiny .......... 11
  48. Microcontrollers As Material .......... 11
  49. Open Source Consumer Electronics .......... 11
  50. Programmable Paintings .......... 12


  51. Sticky Circuits .......... 12
  52. StoryClip .......... 12

Catherine Havasi: Digital Intuition .......... 12
  53. CharmMe .......... 12
  54. ConceptNet .......... 12
  55. Corona .......... 13
  56. Divisi: Reasoning Over Semantic Relationships .......... 13
  57. GI Mobile .......... 13
  58. MessageMe .......... 13
  59. Narratarium .......... 13
  60. Open Mind Common Sense .......... 13
  61. Red Fish, Blue Fish .......... 14
  62. Second-Language Learning Using Games with a Purpose .......... 14
  63. Story Space .......... 14
  64. The Glass Infrastructure .......... 14
  65. Understanding Dialogue .......... 14

Hugh Herr: Biomechatronics .......... 15
  66. Volitional Control of a Powered Ankle-Foot Prosthesis .......... 15
  67. A Variable-impedance Prosthetic (VIPr) Socket Design .......... 15
  68. Artificial Gastrocnemius .......... 15
  69. Biomimetic Active Prosthesis for Above-Knee Amputees .......... 15
  70. Control of Muscle-Actuated Systems via Electrical Stimulation .......... 16
  71. Effect of a Powered Ankle on Shock Absorption and Interfacial Pressure .......... 16
  72. FitSocket: A Better Way to Make Sockets .......... 16
  73. Human Walking Model Predicts Joint Mechanics, Electromyography, and Mechanical Economy .......... 16
  74. Load-Bearing Exoskeleton for Augmentation of Human Running .......... 17
  75. Neural Interface Technology for Advanced Prosthetic Limbs .......... 17
  76. Powered Ankle-Foot Prosthesis .......... 17
  77. Sensor-Fusions for an EMG Controlled Robotic Prosthesis .......... 17

Cesar Hidalgo: Macro Connections .......... 18
  78. Cultural Exports .......... 18
  79. Immersion .......... 18
  80. Place Pulse .......... 18
  81. The Economic Complexity Observatory .......... 18
  82. The Language Group Network .......... 19
  83. The Privacy Bounds of Human Mobility .......... 19

Henry Holtzman: Information Ecology .......... 19
  84. 8D Display .......... 19
  85. Air Mobs .......... 19
  86. aireForm: Refigured Shape-Changing Fashion .......... 19
  87. Brin.gy: What Brings Us Together .......... 20
  88. CoCam .......... 20
  89. ContextController .......... 20
  90. CoSync .......... 20
  91. Droplet .......... 20
  92. Encephalodome .......... 21
  93. Flow .......... 21
  94. MindRider .......... 21
  95. MobileP2P .......... 21
  96. NewsJack .......... 21
  97. NeXtream: Social Television .......... 22
  98. OpenIR: Crowd Map Plugin .......... 22
  99. OpenIR: Data Viewer .......... 22
  100. Proverbial Wallets .......... 22


  101. StackAR .......... 22
  102. SuperShoes .......... 23
  103. Tactile Allegory .......... 23
  104. The Glass Infrastructure .......... 23
  105. Truth Goggles .......... 23
  106. Twitter Weather .......... 23
  107. Where The Hel .......... 24

Hiroshi Ishii: Tangible Media .......... 24
  108. aireForm: Refigured Shape-Changing Fashion .......... 24
  109. Ambient Furniture .......... 24
  110. Beyond: A Collapsible Input Device for 3D Direct Manipulation .......... 24
  111. FocalSpace .......... 24
  112. GeoSense .......... 25
  113. IdeaGarden .......... 25
  114. Jamming User Interfaces .......... 25
  115. Kinected Conference .......... 25
  116. MirrorFugue II .......... 25
  117. MirrorFugue III .......... 26
  118. PingPongPlusPlus .......... 26
  119. Pneumatic Shape-Changing Interfaces .......... 26
  120. Radical Atoms .......... 26
  121. Recompose .......... 26
  122. Relief .......... 26
  123. RopeRevolution .......... 27
  124. SandScape .......... 27
  125. Second Surface: Multi-User Spatial Collaboration System Based on Augmented Reality .......... 27
  126. Sensetable .......... 27
  127. Sourcemap .......... 28
  128. T(ether) .......... 28
  129. Tangible Bits .......... 28
  130. Topobo .......... 28
  131. Video Play .......... 29

Joseph M. Jacobson: Molecular Machines ........ 29
132. GeneFab ........ 29
133. NanoFab ........ 29
134. Scaling Up DNA Logic and Structures ........ 29
135. Synthetic Photosynthesis ........ 29

Sepandar Kamvar: Social Computing ........ 30
136. The Dog Programming Language ........ 30

Kent Larson: Changing Places ........ 30
137. A Market Economy of Trips ........ 30
138. AEVITA ........ 31
139. Autonomous Facades for Zero-Energy Urban Housing ........ 31
140. BTNz! ........ 31
141. CityCar ........ 31
142. CityCar Folding Chassis ........ 31
143. CityCar Half-Scale Prototype ........ 32
144. CityCar Ingress-Egress Model ........ 32
145. CityCar Testing Platform ........ 32
146. CityHealth and Indoor Environment ........ 32
147. CityHome ........ 32
148. CityHome: RoboWall ........ 33
149. Distinguish: Home Activity Recognition ........ 33
150. FlickInk ........ 33

MIT Media Lab

April 2013

Page v

151. Hiriko CityCar Urban Feasibility Studies ........ 33
152. Hiriko CityCar with Denokinn ........ 33
153. Home Genome: Mass-Personalized Housing ........ 34
154. Human Health Monitoring in Vehicles ........ 34
155. Intelligent Autonomous Parking Environment ........ 34
156. Mass-Personalized Solutions for the Elderly ........ 34
157. Media Lab Energy and Charging Research Station ........ 35
158. MITes+: Portable Wireless Sensors for Studying Behavior in Natural Settings ........ 35
159. Mobility on Demand Systems ........ 35
160. Open-Source Furniture ........ 35
161. Operator ........ 35
162. Participatory Environmental Sensing for Communities ........ 36
163. PlaceLab and BoxLab ........ 36
164. PowerSuit: Micro-Energy Harvesting ........ 36
165. Shadow Chess ........ 36
166. Shortest Path Tree ........ 36
167. Smart Customization of Men's Dress Shirts: A Study on Environmental Impact ........ 37
168. Smart DC MicroGrid ........ 37
169. smartCharge ........ 37
170. Spike: Social Cycling ........ 37
171. SproutsIO: Microfarm ........ 37
172. Wheel Robots ........ 38
173. WorkLife ........ 38

Henry Lieberman: Software Agents ........ 38
174. AIGRE: A natural language interface that accommodates vague and ambiguous input ........ 38
175. Common-Sense Reasoning for Interactive Applications ........ 39
176. CommonConsensus: A Game for Collecting Commonsense Goals ........ 39
177. E-Commerce When Things Go Wrong ........ 39
178. Goal-Oriented Interfaces for Consumer Electronics ........ 39
179. Goal-Oriented Interfaces for Mobile Phones ........ 40
180. Graphical Interfaces for Software Visualization and Debugging ........ 40
181. Human Goal Network ........ 40
182. Justify ........ 40
183. Learning Common Sense in a Second Language ........ 41
184. Multi-Lingual ConceptNet ........ 41
185. Multilingual Common Sense ........ 41
186. Navigating in Very Large Display Spaces ........ 41
187. Open Interpreter ........ 42
188. ProcedureSpace: Managing Informality by Example ........ 42
189. Programming in Natural Language ........ 42
190. Raconteur: From Chat to Stories ........ 42
191. Relational Analogies in Semantic Networks ........ 42
192. Ruminati: Tackling Cyberbullying with Computational Empathy ........ 43
193. Storied Navigation ........ 43
194. Time Out: Reflective User Interface for Social Networks ........ 43

Andy Lippman: Viral Spaces ........ 43
195. Air Mobs ........ 43
196. AudioFile ........ 44
197. Barter: A Market-Incented Wisdom Exchange ........ 44
198. Brin.gy: What Brings Us Together ........ 44
199. BTNz! ........ 44
200. CoCam ........ 44
201. CoSync ........ 45
202. Electric Price Tags ........ 45
203. Encoded Reality ........ 45
204. Geo.gy: Location Shortener ........ 45
205. Graffiti Codes ........ 45


206. Line of Sound ........ 46
207. NewsFlash ........ 46
208. Point & Shoot Data ........ 46
209. Reach ........ 46
210. Recompose ........ 46
211. Social Transactions/Open Transactions ........ 47
212. SonicLink ........ 47
213. T(ether) ........ 47
214. T+1 ........ 47
215. The Glass Infrastructure ........ 48
216. VR Codes ........ 48

Tod Machover: Opera of the Future ........ 48
217. A Toronto Symphony: Massive Musical Collaboration ........ 48
218. Advanced Audio Systems for Live Performance ........ 48
219. Death and the Powers: Redefining Opera ........ 49
220. Designing Immersive Multi-Sensory Eating Experiences ........ 49
221. Disembodied Performance ........ 49
222. DrumTop ........ 49
223. Future of the Festival ........ 49
224. Gestural Media Framework ........ 50
225. Hyperinstruments ........ 50
226. Hyperscore ........ 50
227. Media Scores ........ 51
228. Personal Opera ........ 51
229. Remote Theatrical Immersion: Extending "Sleep No More" ........ 51
230. The Other Feast ........ 51
231. Vocal Vibrations: Expressive Performance for Body-Mind Wellbeing ........ 52

Pattie Maes: Fluid Interfaces ........ 52
232. Augmented Product Counter ........ 52
233. Blossom ........ 52
234. Brainstorming with Someone Else's Mind ........ 52
235. Community Data Portrait ........ 53
236. Cornucopia: Digital Gastronomy ........ 53
237. Defuse ........ 53
238. Display Blocks ........ 53
239. EyeRing: A Compact, Intelligent Vision System on a Ring ........ 53
240. FlexDisplays ........ 53
241. Flexpad ........ 54
242. Hyperego ........ 54
243. Inktuitive: An Intuitive Physical Design Workspace ........ 54
244. InReach ........ 54
245. InterPlay: Full-Body Interaction Platform ........ 54
246. ioMaterials ........ 55
247. Liberated Pixels ........ 55
248. Light.Bodies ........ 55
249. LuminAR ........ 55
250. MARS: Manufacturing Augmented Reality System ........ 55
251. MemTable ........ 56
252. Mouseless ........ 56
253. Moving Portraits ........ 56
254. MTM "Little John" ........ 56
255. Perifoveal Display ........ 56
256. PreCursor ........ 56
257. Pulp-Based Computing: A Framework for Building Computers Out of Paper ........ 57
258. Quickies: Intelligent Sticky Notes ........ 57
259. ReflectOns: Mental Prostheses for Self-Reflection ........ 57
260. Remnant: Handwriting Memory Card ........ 57


261. Second Surface: Multi-User Spatial Collaboration System Based on Augmented Reality ........ 58
262. Sensei: A Mobile Tool for Language Learning ........ 58
263. Shutters: A Permeable Surface for Environmental Control and Communication ........ 58
264. Siftables: Physical Interaction with Digital Media ........ 58
265. Six-Forty by Four-Eighty: An Interactive Lighting System ........ 59
266. SixthSense ........ 59
267. Smarter Objects: Using AR technology to Program Physical Objects and their Interactions ........ 59
268. SPARSH ........ 59
269. Spotlight ........ 60
270. Sprout I/O: A Texturally Rich Interface ........ 60
271. Surflex: A Shape-Changing Surface ........ 60
272. Swyp ........ 60
273. TaPuMa: Tangible Public Map ........ 61
274. TeleStudio ........ 61
275. Textura ........ 61
276. The Design of Artifacts for Augmenting Intellect ........ 61
277. The Relative Size of Things ........ 61
278. thirdEye ........ 61
279. Transitive Materials: Towards an Integrated Approach to Material Technology ........ 62
280. VisionPlay ........ 62
281. Watt Watcher ........ 62
282. Wear Someone Else's Habits ........ 62
283. Wearables for Emotion Capture ........ 62

Frank Moss: New Media Medicine ........ 63
284. CollaboRhythm ........ 63
285. Collective Discovery ........ 63
286. ForgetAboutIT? ........ 63
287. I'm Listening ........ 64
288. Oovit PT ........ 64

Neri Oxman: Mediated Matter ........ 64
289. 3D Printing of Functionally Graded Materials ........ 64
290. Beast ........ 64
291. Building-Scale 3D Printing ........ 65
292. Carpal Skin ........ 65
293. CNSILK Pavilion ........ 65
294. CNSILK: Computer Numerically Controlled Silk Cocoon Construction ........ 65
295. Digitally Reconfigurable Surface ........ 66
296. FABRICOLOGY: Variable-Property 3D Printing as a Case for Sustainable Fabrication ........ 66
297. FitSocket: A Better Way to Make Sockets ........ 66
298. Lichtenberg 3D Printing ........ 66
299. Monocoque ........ 66
300. Morphable Structures ........ 67
301. PCB Origami ........ 67
302. Rapid Craft ........ 67
303. Raycounting ........ 67
304. Responsive Glass ........ 67
305. SpiderBot ........ 68

Joseph Paradiso – Responsive Environments ............ 68
306. A Machine Learning Toolbox for Musician Computer Interaction ............ 68
307. Beyond the Light Switch: New Frontiers in Dynamic Lighting ............ 68
308. Chameleon Guitar: Physical Heart in a Virtual Body ............ 68
309. Customizable Sensate Surface for Music Control ............ 68
310. Data-Driven Elevator Music ............ 69
311. Dense, Low-Power Environmental Monitoring for Smart Energy Profiling ............ 69
312. DoppelLab: Experiencing Multimodal Sensor Data ............ 69
313. DoppelLab: Spatialized Sonification in a 3D Virtual Environment ............ 69

Page viii

April 2013

MIT Media Lab

314. Expressive Re-Performance ............ 70
315. Feedback Controlled Solid State Lighting ............ 70
316. FreeD ............ 70
317. Gesture Recognition Toolkit ............ 70
318. Grassroots Mobile Power ............ 71
319. Hackable, High-Bandwidth Sensory Augmentation ............ 71
320. Living Observatory Installation: A Transductive Encounter with Ecology ............ 71
321. Living Observatory: Arboreal Telepresence ............ 71
322. Living Observatory: Sensor Networks for Documenting and Experiencing Ecology ............ 72
323. PrintSense: A Versatile Sensing Technique to Support Flexible Surface Interaction ............ 72
324. Prosthetic Sensor Networks: Factoring Attention, Proprioception, and Sensory Coding ............ 72
325. Rapidnition: Rapid User-Customizable Gesture Recognition ............ 72
326. RElight: Exploring pointing and other gestures for appliance control ............ 73
327. Scalable and Versatile Surface for Ubiquitous Sensing ............ 73
328. Sensor Fusion for Gesture Analyses of Baseball Pitch ............ 73
329. Sticky Circuits ............ 73
330. TRUSS: Tracking Risk with Ubiquitous Smart Sensing ............ 74
331. Virtual Messenger ............ 74
332. Wearable, Wireless Sensor System for Sports Medicine and Interactive Media ............ 74
333. WristQue: A Personal Wristband for Sensing and Smart Infrastructure ............ 74

Alex 'Sandy' Pentland – Human Dynamics ............ 75
334. Belief Dynamics ............ 75
335. Bilateral Exchanges in Social Networks ............ 75
336. Economic Decision-Making in the Wild ............ 75
337. Funf: Open Sensing Framework ............ 75
338. Inducing Peer Pressure to Promote Cooperation ............ 76
339. Mobile Territorial Lab ............ 76
340. openPDS: A Privacy-Preserving Personal Data Store ............ 76
341. Predicting Individual Behavior Using Network Interaction Data ............ 76
342. Sensible Organizations ............ 76
343. The Privacy Bounds of Human Mobility ............ 77

Rosalind W. Picard – Affective Computing ............ 77
344. Analysis and Visualization of Longitudinal Physiological Data of Children with ASD ............ 77
345. Auditory Desensitization Games ............ 77
346. Automatic Stress Recognition in Real-Life Settings ............ 77
347. Cardiocam ............ 78
348. Exploring Temporal Patterns of Smile ............ 78
349. Facial Expression Analysis Over the Web ............ 78
350. FEEL: A Cloud System for Frequent Event and Biophysiological Signal Labeling ............ 78
351. Gesture Guitar ............ 78
352. IDA: Inexpensive Networked Digital Stethoscope ............ 78
353. Inside-Out: Reflecting on your Inner State ............ 79
354. Long-Term Physio and Behavioral Data Analysis ............ 79
355. Measuring Arousal During Therapy for Children with Autism and ADHD ............ 79
356. Measuring Customer Experiences with Arousal ............ 79
357. Mobile Health Interventions for Drug Addiction and PTSD ............ 79
358. Multimodal Computational Behavior Analysis ............ 80
359. Panoply ............ 80
360. Smart Phone Frequent EDA Event Logger ............ 80
361. Social + Sleep + Moods ............ 80
362. StoryScape ............ 81
363. The Frustration of Learning Monopoly ............ 81


Ramesh Raskar – Camera Culture ............ 81
364. 6D Display ............ 81
365. Bokode: Imperceptible Visual Tags for Camera-Based Interaction from a Distance ............ 81
366. CATRA: Mapping of Cataract Opacities Through an Interactive Approach ............ 82
367. Coded Computational Photography ............ 82
368. Coded Focal Stack Photography ............ 82
369. Compressive Light Field Camera: Next Generation in 3D Photography ............ 82
370. Layered 3D: Glasses-Free 3D Printing ............ 82
371. LensChat: Sharing Photos with Strangers ............ 83
372. Looking Around Corners ............ 83
373. NETRA: Smartphone Add-On for Eye Tests ............ 83
374. PhotoCloud: Personal to Shared Moments with Angled Graphs of Pictures ............ 83
375. Polarization Fields: Glasses-Free 3DTV ............ 84
376. Portable Retinal Imaging ............ 84
377. Reflectance Acquisition Using Ultrafast Imaging ............ 84
378. Second Skin: Motion Capture with Actuated Feedback for Motor Learning ............ 84
379. Shield Field Imaging ............ 84
380. Single Lens Off-Chip Cellphone Microscopy ............ 85
381. Slow Display ............ 85
382. SpeckleSense ............ 85
383. Tensor Displays: High-Quality Glasses-Free 3D TV ............ 85
384. Theory Unifying Ray and Wavefront Lightfield Propagation ............ 86
385. Trillion Frames Per Second Camera ............ 86
386. Vision on Tap ............ 86
387. VisionBlocks ............ 86
388. Visual Lifelogging ............ 86

Mitchel Resnick – Lifelong Kindergarten ............ 87
389. App Inventor ............ 87
390. Build-in-Progress ............ 87
391. Collab Camp ............ 87
392. Computer Clubhouse ............ 87
393. Computer Clubhouse Village ............ 88
394. Family Creativity Workshops ............ 88
395. Learning Creative Learning ............ 88
396. Learning with Data ............ 88
397. MaKey MaKey ............ 88
398. Map Scratch ............ 89
399. MelodyMorph ............ 89
400. Open Learning ............ 89
401. Replay ............ 89
402. Sanctuary ............ 89
403. Scratch ............ 89
404. Scratch Day ............ 90
405. ScratchJr ............ 90
406. Singing Fingers ............ 90

Deb Roy – Cognitive Machines ............ 90
407. BlitzScribe: Speech Analysis for the Human Speechome Project ............ 90
408. Crowdsourcing the Creation of Smart Role-Playing Agents ............ 91
409. HouseFly: Immersive Video Browsing and Data Visualization ............ 91
410. Human Speechome Project ............ 91
411. Speech Interaction Analysis for the Human Speechome Project ............ 91
412. Speechome Recorder for the Study of Child Development Disorders ............ 92


Chris Schmandt – Speech + Mobility ............ 92
413. Back Talk ............ 92
414. Flickr This ............ 92
415. frontdesk ............ 92
416. Going My Way ............ 93
417. Guiding Light ............ 93
418. Indoor Location Sensing Using Geo-Magnetism ............ 93
419. InterTwinkles ............ 93
420. LocoRadio ............ 93
421. Mime ............ 94
422. Musicpainter ............ 94
423. OnTheRun ............ 94
424. Pavlov ............ 94
425. Puzzlaef ............ 94
426. Radio-ish Media Player ............ 95
427. ROAR ............ 95
428. SeeIt-ShareIt ............ 95
429. Spellbound ............ 95
430. Spotz ............ 95
431. Tin Can ............ 96
432. Tin Can Classroom ............ 96

Kevin Slavin – Playful Systems ............ 96
433. Cordon Sanitaire ............ 96

Ethan Zuckerman – Civic Media ............ 97
434. Between the Bars ............ 97
435. Codesign Toolkit ............ 97
436. Controversy Mapper ............ 97
437. Data Therapy ............ 97
438. Digital Humanitarian Marketplace ............ 98
439. Erase the Border ............ 98
440. Gender in Memoriam ............ 98
441. Grassroots Mobile Power ............ 98
442. LazyTruth ............ 98
443. Mapping Banned Books ............ 98
444. Mapping the Globe ............ 98
445. Media Cloud ............ 99
446. Media Meter ............ 99
447. New Day New Standard: (646) 699-3989 ............ 99
448. NewsJack ............ 99
449. NGO 2.0 ............ 99
450. Open Gender Tracker ............ 100
451. PageOneX ............ 100
452. Social Mirror ............ 100
453. T.I.C.K.L.E. ............ 100
454. thanks.fm ............ 100
455. VoIP Drupal ............ 100
456. Vojo.co ............ 101
457. VozMob ............ 101
458. What's Up ............ 101
459. Whose Voices? Twitter Citation in the Media ............ 101


V. Michael Bove Jr. – Object-Based Media


How sensing, understanding, and new interface technologies can change everyday life, the ways in which we communicate with one another, storytelling, and entertainment.

1.

3D Telepresence Chair

V. Michael Bove Jr. and Daniel Novy
An autostereoscopic (no glasses) 3D display engine is combined with a "Pepper's Ghost" setup to create an office chair that appears to contain a remote meeting participant. The system geometry is also suitable for other applications such as tabletop displays or automotive heads-up displays.

2.

Calliope

Edwina Portocarrero
Calliope is the follow-up to the NeverEnding Drawing Machine. A portable, paper-based platform for interactive story making, it allows physical editing of shared digital media at a distance. The system is composed of a network of creation stations that seamlessly blend analog and digital media. Calliope documents and displays the creative process with no need to interact directly with a computer. By using human-readable tags and allowing any object to be used as material for creation, it offers opportunities for cross-cultural and cross-generational collaboration among peers with expertise in different media.

3.

Consumer Holo-Video

V. Michael Bove Jr., James D. Barabas, Sundeep Jolly and Daniel E. Smalley The goal of this project, building upon work begun by Stephen Benton and the Spatial Imaging group, is to create an inexpensive desktop monitor for a PC or game console that displays holographic video images in real time, suitable for entertainment, engineering, or medical imaging. To date, we have demonstrated the fast rendering of holo-video images (including stereographic images that unlike ordinary stereograms have focusing consistent with depth information) from OpenGL databases on off-the-shelf PC graphics cards; current research addresses new optoelectronic architectures to reduce the size and manufacturing cost of the display system. Alumni Contributor: Quinn Y J Smithwick
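To make the rendering step concrete, here is a minimal, illustrative sketch of how a computer-generated fringe pattern can be built by summing point-source contributions against a planar reference beam. This is a textbook bipolar-intensity construction, not the group's actual GPU pipeline; the function name and parameters are hypothetical.

```python
import numpy as np

def bipolar_fringe(points, n_samples=1024, pitch=1e-6,
                   wavelength=633e-9, ref_angle=0.0):
    """Sum point-source contributions into a 1-D bipolar intensity fringe.

    points: list of ((x, z), amplitude) pairs, emitter positions in meters
    with z > 0 behind the hologram plane. Returns values normalized to [0, 1]
    for display on a light modulator.
    """
    k = 2 * np.pi / wavelength
    # sample positions across the hologram line, centered at zero
    x = (np.arange(n_samples) - n_samples / 2) * pitch
    fringe = np.zeros(n_samples)
    for (px, pz), amp in points:
        r = np.sqrt((x - px) ** 2 + pz ** 2)  # emitter-to-sample distance
        # interference of the spherical wave with a tilted planar reference
        fringe += amp * np.cos(k * r - k * x * np.sin(ref_angle))
    fringe -= fringe.min()
    if fringe.max() > 0:
        fringe /= fringe.max()
    return fringe
```

Because each emitter's contribution depends only on its own geometry, the per-point loop parallelizes naturally, which is why this style of computation maps well onto off-the-shelf graphics hardware.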

4.

Direct Fringe Writing of Computer-Generated Holograms

V. Michael Bove Jr., Sundeep Jolly and University of Arizona College of Optical Sciences
Photorefractive polymer has many attractive properties for dynamic holographic displays; however, the current display systems based around its use involve generating holograms by optical interference methods that complicate the optical and computational architectures of the systems and limit the kinds of holograms that can be displayed. We are developing a system to write computer-generated diffraction fringes directly from spatial light modulators to photorefractive polymers, resulting in displays with reduced footprint and cost, and potentially higher perceptual quality.

5.

Everything Tells a Story

V. Michael Bove Jr., David Cranor and Edwina Portocarrero
Following upon work begun in the Graspables project, we are exploring what happens when a wide range of everyday consumer products can sense, interpret into human terms (using pattern recognition methods), and retain memories, such that users can construct a narrative with the aid of the recollections of the "diaries" of their sporting equipment, luggage, furniture, toys, and other items with which they interact.
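The sense-interpret-retain loop described for Everything Tells a Story can be sketched as a toy object "diary" that stores interpreted events and recalls them as a short narrative. The class, its methods, and the event labels are all hypothetical illustrations; the project's actual pattern-recognition pipeline is not described here.

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class ObjectDiary:
    """Toy diary for an everyday object: retains interpreted events and
    recalls them as a short narrative summary."""
    name: str
    events: list = field(default_factory=list)

    def remember(self, timestamp, label):
        # 'label' stands in for the human-terms interpretation of raw
        # sensor data, e.g. "carried upstairs" or "left in the rain"
        self.events.append((timestamp, label))

    def recollect(self, top=3):
        # summarize the most frequent remembered experiences
        counts = Counter(label for _, label in self.events)
        lines = [f"Your {self.name} was {label} {n} time(s)."
                 for label, n in counts.most_common(top)]
        return " ".join(lines)
```

A usage example: after `d = ObjectDiary("suitcase")` and a few `d.remember(...)` calls, `d.recollect()` yields sentences a user could weave into a narrative about the object's history.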


6.

Guided-Wave Light Modulator

V. Michael Bove Jr., Daniel Smalley and Quinn Smithwick
We are developing inexpensive, efficient, high-bandwidth light modulators based on lithium niobate guided-wave technology. These modulators are suitable for demanding, specialized applications such as holographic video displays, as well as other light-modulation uses such as compact video projectors.

7.

Infinity-by-Nine

V. Michael Bove Jr. and Daniel Novy
We expand the home-video viewing experience by generating imagery to extend the TV screen and give the impression that the scene wraps completely around the viewer. Optical flow, color analysis, and heuristics extrapolate the image beyond the screen edge, where projectors provide the viewer's peripheral vision with low-detail dynamic patterns that are perceptually consistent with the video imagery and increase the sense of immersive presence and participation. We perform this processing in real time using standard microprocessors and GPUs.
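The edge-extrapolation idea behind Infinity-by-Nine can be illustrated with a deliberately crude sketch: smear a frame's border colors outward into low-detail side bands. The real system uses optical flow and richer heuristics; this stand-in, with its hypothetical function name and parameters, only shows the color-analysis half of the idea.

```python
import numpy as np

def extend_frame(frame, margin=160, blur_passes=3):
    """Extrapolate a video frame horizontally by smearing its edge columns.

    frame: H x W x 3 uint8 array. Returns an H x (W + 2*margin) x 3 array
    whose side bands echo the average edge colors -- a simple stand-in for
    optical-flow-driven extrapolation beyond the screen edge.
    """
    h, w, _ = frame.shape
    # average the outermost columns to get representative edge colors
    left = frame[:, :8].mean(axis=1, keepdims=True)
    right = frame[:, -8:].mean(axis=1, keepdims=True)
    left_band = np.repeat(left, margin, axis=1)
    right_band = np.repeat(right, margin, axis=1)
    out = np.concatenate([left_band, frame.astype(float), right_band], axis=1)
    # crude vertical smoothing so the bands read as low-detail ambient light
    for _ in range(blur_passes):
        out[:, :margin] = (np.roll(out[:, :margin], 1, axis=0) + out[:, :margin]) / 2
        out[:, -margin:] = (np.roll(out[:, -margin:], 1, axis=0) + out[:, -margin:]) / 2
    return out.astype(np.uint8)
```

Keeping the extrapolated bands low-detail is deliberate: peripheral vision is sensitive to motion and color but not to fine structure, so coarse, perceptually consistent patterns suffice.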

8.

Living Observatory: Arboreal Telepresence


NEW LISTING

Joseph A. Paradiso, V. Michael Bove, Gershon Dublon, Edwina Portocarrero and Glorianna Davenport Extending the Living Observatory installation, we have instrumented the roots of several trees outside of E15 with vibratory transducers that excite the trees with live streaming sound from a forest near Plymouth, MA. Walking through the trees just outside the Lab, you won't notice anything, but press your ear up against one of them and you'll feel vibrations and hear sound from a tree 60 miles away. Visit at any time from dawn till dusk and again after midnight; if you're lucky you might just catch an April storm, a flock of birds, or an army of frogs. Alumni Contributors: Edwina Portocarrero and Gershon Dublon

9.

Narratarium

V. Michael Bove Jr., Catherine Havasi, Fransheska Colon, Katherine (Kasia) Hayden, Daniel Novy, Jie Qi and Robert H. Speer Remember telling scary stories in the dark with flashlights? Narratarium is an immersive storytelling environment that augments creative play using texture, color, and image. We are using natural language processing to listen to and understand stories being told, and thematically augment the environment with color and images. As a child tells stories about a jungle, the room fills with greens and browns and foliage comes into view. A traveling parent can tell a story to a child and fill the room with images, color, and presence.
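As a toy illustration of the step from heard words to ambient output, the sketch below uses a hand-built keyword table; the real Narratarium performs natural language processing over commonsense knowledge rather than a lookup, and the theme entries here are invented.

```python
THEMES = {  # hypothetical theme table, for illustration only
    "jungle": {"colors": ["green", "brown"], "images": ["foliage"]},
    "ocean":  {"colors": ["blue", "teal"],  "images": ["waves"]},
}

def augment(transcript):
    """Pick environmental colors and images for the words heard so far:
    a toy stand-in for Narratarium's language understanding."""
    colors, images = [], []
    for word in transcript.lower().split():
        theme = THEMES.get(word.strip(".,!?"))  # match words against themes
        if theme:
            colors += theme["colors"]
            images += theme["images"]
    return {"colors": colors, "images": images}
```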

10.

Pillow-Talk

V. Michael Bove Jr., Edwina Portocarrero, David Cranor Pillow-Talk is the first of a series of objects designed to aid creative endeavors through the unobtrusive acquisition of unconscious self-generated content to permit reflexive self-knowledge. Composed of a seamless recording device embedded in a pillow, and a playback and visualization system in a jar, Pillow-Talk crystallizes that which we normally forget. This allows users to capture their dreams in a less mediated way, aiding recollection by priming the experience and providing no distraction for recall and capture through embodied interaction.

11.

ProtoTouch: Multitouch Interfaces to Everyday Objects

V. Michael Bove Jr. and David Cranor An assortment of everyday objects is given the ability to understand multitouch gestures of the sort used in mobile-device user interfaces, enabling people to use such increasingly familiar gestures to control a variety of objects, and to "copy" and "paste" configurations and other information among them.


12.

ShakeOnIt

V. Michael Bove Jr. and David Cranor We are exploring ways to encode information exchange into preexisting natural interaction patterns, both between people and between a single user and objects with which he or she interacts on a regular basis. Two devices are presented to provoke thoughts regarding these information interchange modalities: a pair of gloves that requires two users to complete a "secret handshake" in order to gain shared access to restricted information, and a doorknob that recognizes the grasp of a user and becomes operational if the person attempting to use it is authorized to do so.

13.

Simple Spectral Sensing

Andrew Bardagjy The availability of cheap LEDs and diode lasers in a variety of wavelengths enables creation of simple and cheap spectroscopic sensors for specific tasks such as food shopping and preparation, healthcare sensing, material identification, and detection of contaminants or adulterants.

14.

Slam Force Net

V. Michael Bove Jr., Santiago Alfaro and Daniel Novy A basketball net incorporates segments of conductive fiber whose resistance changes with degree of stretch. By measuring this resistance over time, hardware associated with the net can calculate the force and speed of a basketball traveling through it. Applications include training, toys that indicate the force and speed on a display, dunk competitions, and augmented reality effects on television broadcasts. This net is far less expensive and more robust than other approaches to measuring data about the ball (e.g., photosensors or ultrasonic sensors) and doesn't require a physical change to the hoop or backboard other than providing electrical connections to the net. Another application of the material is a flat net that can measure the velocity of a ball hit or pitched into it (as in baseball or tennis), and can measure position as well (e.g., for determining whether a practice baseball pitch would have been a strike).
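The force-and-speed computation from the Slam Force Net's resistance trace might look like the following sketch. The contact threshold, the stretch-to-force proxy, and the effective net length are invented placeholders; a real system would calibrate them against the actual conductive fiber.

```python
def analyze_dunk(samples, sample_rate_hz, rest_ohms, net_length_m=0.4):
    """Estimate ball speed and a peak-stretch force proxy from a
    stretch-sensitive net's resistance trace.  Hypothetical model:
    resistance rises above its resting value while the ball stretches
    the net, so the pulse width gives transit time and the pulse peak
    gives peak stretch.  Constants are illustrative, not measured."""
    threshold = rest_ohms * 1.1                 # 10% above rest = ball contact
    active = [i for i, r in enumerate(samples) if r > threshold]
    if not active:
        return None                             # no ball passed through
    transit_s = (active[-1] - active[0] + 1) / sample_rate_hz
    speed_mps = net_length_m / transit_s        # ball travels the net's length
    peak_stretch = max(samples) / rest_ohms - 1.0
    return {"speed_mps": speed_mps, "peak_stretch": peak_stretch}
```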

15.

SurroundVision

V. Michael Bove Jr. and Santiago Alfaro Adding augmented reality to the living room TV, we are exploring the technical and creative implications of using a mobile phone or tablet (and possibly also dedicated devices like toys) as a controllable "second screen" for enhancing television viewing. Thus, a viewer could use the phone to look beyond the edges of the television to see the audience for a studio-based program, to pan around a sporting event, to take snapshots for a scavenger hunt, or to simulate binoculars to zoom in on a part of the scene. Recent developments include the creation of a mobile device app for Apple products and user studies involving several genres of broadcast television programming.

16.

The "Bar of Soap": Grasp-Based Interfaces

V. Michael Bove Jr. and Brandon Taylor We have built several handheld devices that combine grasp and orientation sensing with pattern recognition in order to provide highly intelligent user interfaces. The Bar of Soap is a handheld device that senses the pattern of touch and orientation when it is held, and reconfigures to become one of a variety of devices, such as phone, camera, remote control, PDA, or game machine. Pattern-recognition techniques allow the device to infer the user's intention based on grasp. Another example is a baseball that determines a user's pitching style as an input to a video game.


17.

Vision-Based Interfaces for Mobile Devices

V. Michael Bove Jr. and Santiago Alfaro Mobile devices with cameras have enough processing power to do simple machine-vision tasks, and we are exploring how this capability can enable new user interfaces to applications. Examples include dialing someone by pointing the camera at the person's photograph, or using the camera as an input to allow navigating virtual spaces larger than the device's screen.

Ed Boyden: Synthetic Neurobiology
How to engineer intelligent neurotechnologies to repair pathology, augment cognition, and reveal insights into the human condition.

18.

Direct Engineering and Testing of Novel Therapeutic Platforms for Treatment of Brain Disorders

Leah Acker, Nir Grossman, Mike Henninger, and Fumi Yoshida New technologies for controlling neural circuit dynamics, or entering information into the nervous system, may be capable of serving in therapeutic roles for improving the health of human patients: enabling the restoration of lost senses, the control of aberrant or pathological neural dynamics, and the augmentation of neural circuit computation, through prosthetic means. We are assessing the translational possibilities opened up by our technologies, exploring the safety and efficacy of optogenetic neuromodulation in multiple animal models, and also pursuing, both in our group and in collaborations with others, proofs-of-principle of new kinds of optical neural control prosthetics. By combining observation of brain activity with real-time analysis and responsive optical neurostimulation, new kinds of "brain co-processors" may be possible that can work efficaciously with the brain to augment its computational abilities, e.g., in the context of cognitive, emotional, sensory, or motor disability.

19.

Exploratory Technologies for Understanding Neural Circuits

Brian Allen, Ishan Gupta, Rachel Bandler, Steve Bates, Fei Chen, Jonathan Gootenberg, Suhasa Kodandaramaiah, Daniel Martin-Alarcon, Paul Tillberg, Aimei Yang We are continually exploring new strategies for understanding neural circuits, often in collaboration with other scientific, engineering, and biology research groups. If you would like to collaborate on such a project, please contact us.

20.

Hardware and Systems for Control of Neural Circuits with Light

Brian Allen, Jake Bernstein, Nir Grossman, Mike Henninger, Jorg Scholvin, Giovanni Talei Franzesi, Ash Turza, Christian Wentz, Anthony Zorzos The brain is a densely wired, heterogeneous circuit made out of thousands of different kinds of cells. Over the last several years, we have developed a set of fully genetically encoded "optogenetic" reagents that, when targeted to specific cells, enable their physiology to be controlled via light. To confront the 3D complexity of the living brain, enabling the analysis of the circuits that causally drive or support specific neural computations and behaviors, we have worked with our collaborators to develop hardware for delivery of light into the brain, enabling control of complexly shaped neural circuits, as well as the ability to combinatorially activate and silence neural activity in distributed neural circuits. We anticipate that these tools will enable the systematic analysis of the brain circuits that mechanistically and causally contribute to specific behaviors and pathologies.


21.

Molecular Reagents Enabling Control of Neurons and Biological Functions with Light

Fei Chen, Yongku Cho, Amy Chuong, Nathan Klapoetke, Daniel Martin-Alarcon, Daniel Schmidt, Aimei Yang Over the last several years our lab and our collaborators have pioneered a new area: the development of a number of fully genetically encoded reagents that, when targeted to specific cells, enable their physiology to be controlled via light. These reagents, known as optogenetic tools, enable temporally precise control of neural electrical activity, cellular signaling, and other high-speed natural as well as synthetic biology processes and pathways using light. Such tools are now in widespread use in neuroscience, for the study of the neuron types and activity patterns that mechanistically and causally contribute to processes ranging from cognition to emotion to movement, and to brain disorders. These tools are also being evaluated as components of prototype neural control devices for ultra-precise treatment of intractable brain disorders.

22.

Recording and Data-Analysis Technologies for Observing and Analyzing Neural Circuit Dynamics

Brian Allen, Jake Bernstein, Mike Henninger, Justin Kinney, Suhasa Kodandaramaiah, Caroline Moore-Kochlacs, Nikita Pak, Jorg Scholvin, Annabelle Singer, Al Strelzoff, Giovanni Talei Franzesi, Ash Turza, Christian Wentz, Ian Wickersham, Alex Wissner-Gross The brain is a 3D, densely wired circuit that computes via large sets of widely distributed neurons interacting at fast timescales. In order to understand the brain, ideally it would be possible to observe the activity of many neurons with as great a degree of precision as possible, so as to understand the neural codes and dynamics that are produced by the circuits of the brain. And, ideally, it would be possible to understand how those neural codes and dynamics emerge from the molecular, genetic, and structural properties of the cells making up the circuit. Along with our collaborators, we are developing a number of innovations to enable such analyses of neural circuit dynamics. These tools will hopefully enable pictures of how neurons work together to implement brain computations, and how these computations go awry in brain disorder states.

23.

Understanding Neural Circuit Computations and Finding New Therapeutic Targets

Leah Acker, Brian Allen, Steve Bates, Sean Batir, Jake Bernstein, Gary Brenner, Tim Buschman, Suhasa Kodandaramaiah, Carolina Lopez-Trevino, Patrick Monahan, Caroline Moore-Kochlacs, Sunanda Sharma, Annabelle Singer, Giovanni Talei Franzesi, Fumi Yoshida We are using our tools, such as optogenetic neural control and brain circuit dynamics measurement, both within our lab and in collaborations with others, to analyze how specific sets of circuit elements within neural circuits give rise to behaviors and functions such as cognition, emotion, movement, and sensation. We are also determining which neural circuit elements can initiate or sustain pathological brain states. Principles of controlling brain circuits may yield fundamental insights into how best to go about treating brain disorders. Finally, we are screening for neural circuit targets that, when altered, present potential therapeutic benefits, and which may serve as potential drug targets or electrical stimulation targets. In this way we hope to explore systematic, causal, temporally precise analyses of how neural circuits function, yielding both fundamental scientific insights and important clinically relevant principles.


Cynthia Breazeal: Personal Robots


How to build socially engaging robots and interactive technologies that provide people with long-term social and emotional support, helping them live healthier lives, connect with others, and learn better.

24.

AIDA: Affective Intelligent Driving Agent

Cynthia Breazeal and Kenton Williams Drivers spend a significant amount of time multi-tasking while they are behind the wheel. These dangerous behaviors, particularly texting while driving, can lead to distraction and, ultimately, accidents. Many in-car interfaces designed to address this issue still do not take a proactive role in assisting the driver, nor do they leverage aspects of the driver's daily life to make the driving experience more seamless. In collaboration with Volkswagen/Audi and the SENSEable City Lab, we are developing AIDA (Affective Intelligent Driving Agent), a robotic driver-vehicle interface that acts as a sociable partner. AIDA uses facial expressions and strong non-verbal cues for engaging social interaction with the driver. AIDA also leverages the driver's mobile device as its face, which promotes safety, offers proactive driver support, and fosters deeper personalization to the driver.

25.

Animal-Robot Interaction

NEW LISTING

Brad Knox, Patrick Mccabe and Cynthia Breazeal Like people, dogs and cats live among technology that affects their lives. Yet little of this technology has been designed with these pets in mind. We are developing systems that interact intelligently with animals to entertain, exercise, and empower them. Currently, we are developing a laser-chasing game, wherein dogs or cats are tracked by a ceiling-mounted webcam and a computer-controlled laser is moved with knowledge of the pet's position and movement. Machine learning will be applied to optimize the specific laser strategy. We envision enabling owners to initiate and view the interaction remotely through a web interface, providing stimulation and exercise to pets when the owners are at work or otherwise cannot be present.

26.

Cloud-HRI

Cynthia Breazeal, Nicholas DePalma, Adam Setapen and Sonia Chernova Imagine opening your eyes and being awake for only half an hour at a time. This is the life that robots traditionally live, due to factors such as battery life and wear on prototype joints. Roboticists have typically muddled through this challenge by crafting handmade models of the world or performing machine learning with synthetic data, and sometimes real-world data. While robotics researchers have traditionally used large distributed systems to do perception, planning, and learning, cloud-based robotics aims to link all of a robot's experiences. This movement aims to build large-scale machine learning algorithms that use experience from large groups of people, whether sourced from a large number of tabletop robots or a large number of experiences with virtual agents. Large-scale robotics aims to change embodied AI as it changed non-embodied AI.
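A minimal version of the Animal-Robot Interaction laser-moving strategy could look like the sketch below. The lead time, minimum gap, and clamping rules are invented for illustration, and the project plans to optimize the strategy with machine learning rather than hard-code it.

```python
def next_laser_target(pet_xy, pet_vel, bounds, lead_s=0.5, min_gap=80):
    """Pick the next laser-dot position from the pet's tracked position
    and velocity (pixels and pixels/sec from a ceiling camera).  A toy
    strategy: lead the pet along its direction of motion, keep a
    minimum gap so the dot stays just out of reach, and clamp to the
    playable area.  bounds = (xmin, ymin, xmax, ymax)."""
    x = pet_xy[0] + pet_vel[0] * lead_s
    y = pet_xy[1] + pet_vel[1] * lead_s
    # enforce a minimum horizontal gap from the pet so there is a chase
    if abs(x - pet_xy[0]) < min_gap:
        x = pet_xy[0] + (min_gap if pet_vel[0] >= 0 else -min_gap)
    # clamp to the camera's view
    x = max(bounds[0], min(bounds[2], x))
    y = max(bounds[1], min(bounds[3], y))
    return (x, y)
```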


27.

Computationally Modeling Interpersonal Trust Using Nonverbal Behavior for Human-Robot Interactions

Jin Joo Lee, Brad Knox, Cynthia Breazeal, David DeSteno (Northeastern University) and Fei Sha (University of Southern California) After meeting someone for the first time, we come away with an intuitive sense of how much we can trust that person. Nonverbal behaviors, such as gaze patterns, body language, and facial expressions, have been explored as honest or "leaky" signals that provide salient cues for gauging trust. Our research aims to create a computational model for recognizing interpersonal trust in social interactions. By observing the trust-related nonverbal cues expressed during a social interaction, we aim to design a machine learning algorithm capable of differentiating whether a person finds a socially assistive robot to be a trustworthy or untrustworthy partner. We aim to enable robots to understand our nonverbal signals. With so much of our communication carried in these nonverbal streams, we hope that by enabling robots to understand them, we can design robots that communicate with us more effectively.

28.

DragonBot: Android Phone Robots for Long-Term HRI

Adam Setapen, Natalie Freed, and Cynthia Breazeal DragonBot is a new platform built to support long-term interactions between children and robots. The robot runs entirely on an Android cell phone, which displays an animated virtual face. Additionally, the phone provides sensory input (camera and microphone) and fully controls the actuation of the robot (motors and speakers). Most importantly, the phone always has an Internet connection, so a robot can harness cloud-computing paradigms to learn from the collective interactions of multiple robots. To support long-term interactions, DragonBot is a "blended-reality" character: if you remove the phone from the robot, a virtual avatar appears on the screen and the user can still interact with the virtual character on the go. Costing less than $1,000, DragonBot was specifically designed to be a low-cost platform that can support longitudinal human-robot interactions "in the wild."

29.

Huggable: A Robotic Companion for Long-Term Health Care, Education, and Communication

Cynthia Breazeal, Walter Dan Stiehl, Robert Toscano, Jun Ki Lee, Heather Knight, Sigurdur Orn Adalgeirsson, Jeff Lieberman and Jesse Gray The Huggable is a new type of robotic companion for health care, education, and social communication applications. The Huggable is much more than a fun, interactive robotic companion; it functions as an essential team member of a triadic interaction. Therefore, the Huggable is not meant to replace any particular person in a social network, but rather to enhance it. The Huggable is being designed with a full-body sensitive skin with over 1,500 sensors, quiet back-drivable actuators, video cameras in the eyes, microphones in the ears, an inertial measurement unit, a speaker, and an embedded PC with 802.11g wireless networking. An important design goal for the Huggable is to make the technology invisible to the user. You should not think of the Huggable as a robot but rather as a richly interactive teddy bear. Alumni Contributors: Matthew Berlin, Daniel Bernhardt (Cambridge University) and Kuk-Hyun Han (Samsung)
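One simple way to separate trustworthy from untrustworthy judgments given nonverbal-cue features is a nearest-centroid model, sketched below as a stand-in; the interpersonal-trust project's actual learning algorithm and feature set are not specified here, and the feature values in the example are illustrative.

```python
def train_centroids(examples):
    """Average feature vectors per label: a minimal nearest-centroid
    model over nonverbal-cue features (e.g., rates of gaze aversion or
    fidgeting).  examples is a list of (feature_vector, label) pairs."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()}

def classify(centroids, features):
    """Label a new interaction by its nearest class centroid."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: dist2(centroids[lbl], features))
```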

30.

MDS: Crowdsourcing Human-Robot Interaction: Online Game to Study Collaborative Human Behavior

Cynthia Breazeal, Jason Alonso and Sonia Chernova Many new applications for robots require them to work alongside people as capable members of human-robot teams. We have developed Mars Escape, a two-player online game designed to study how humans engage in teamwork, coordination, and interaction. Data gathered from hundreds of online games is being used to develop computational models of human collaborative behavior in order to create an autonomous robot capable of acting as a reliable human teammate. In the summer of 2010, we will recreate the Mars Escape game in real life at the Boston Museum of Science and invite museum visitors to perform collaborative tasks together with the autonomous MDS robot Nexi.


31.

Mind-Theoretic Planning for Robots


NEW LISTING

Cynthia Breazeal and Sigurdur Orn Adalgeirsson Mind-Theoretic Planning (MTP) is a technique for robots to plan in social domains. The system takes into account probability distributions over the task-relevant initial beliefs and goals of people in the environment, and predicts how they will rationally act on their beliefs to achieve their goals. The MTP system then creates an action plan for the robot that takes advantage of the effects of the anticipated actions of others while avoiding interference with them.

32.

Robotic Textiles

Cynthia Breazeal and Adam Whiton We are investigating e-textiles and fiber-electronics to develop unique soft-architecture robotic components. We have been developing large-area force sensors utilizing quantum tunneling composites integrated into textiles, creating fabrics that can cover the body/surface of the robot and sense touch. By using e-textiles, we shift from the metaphor of a sensing skin, often used in robotics, to one of sensing clothing. We incorporated apparel design and construction techniques to develop modular e-textile surfaces that can be easily attached to a robot and integrated into a robotic system: adding new abilities to a robot can become as simple as changing its clothes. Our goal is to study social touch interaction and communication between people and robots while exploring the benefits of textiles and the textile aesthetic.
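The MTP idea of planning against predicted rational behavior reduces, schematically, to an expected-utility computation over the belief distribution. The callback names and the toy state space below are hypothetical, not the system's API.

```python
def mtp_choose(robot_actions, belief_dist, predict_human, utility):
    """Pick the robot action with the best expected utility over a
    probability distribution on the human's (belief, goal) state.
    predict_human maps a state to the human's rational action;
    utility scores a (robot_action, human_action) pair, low when
    the two actions interfere.  A schematic reduction of MTP."""
    def expected(a):
        return sum(p * utility(a, predict_human(s))
                   for s, p in belief_dist.items())
    return max(robot_actions, key=expected)
```

For example, if the human probably wants the left-hand object, a utility that penalizes reaching for the same object steers the robot right.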

33.

Socially Assistive Robotics: An NSF Expedition in Computing

Tufts University, University of Southern California, Kasia Hayden with Stanford University, Sooyeon Jeong, Brad Knox, Cynthia Breazeal, Jacqueline Marie Kory, Jin Joo Lee, David Robert, Edith Ackermann, Catherine Havasi, Willow Garage and Yale University Our mission is to develop the computational techniques that will enable the design, implementation, and evaluation of "relational" robots, to encourage the social, emotional, and cognitive growth in children, including those with social or cognitive deficits. Funding for the project comes from the NSF Expeditions in Computing program. This Expedition has the potential to substantially impact the effectiveness of education and healthcare, and to enhance the lives of children and other groups that require specialized support and intervention. In particular, the MIT effort is focusing on developing second language learning companions for pre-school aged children, ultimately for ESL (English as a Second Language).

34.

Storytelling in the Preschool of the Future

David Robert Using the Preschool of the Future environment, children can create stories that come to life in the real world. We are developing interfaces that enable children to author stories in the physical environment: stories where robots are the characters and children are not only the observers, but also the choreographers and actors in the stories. To do this, children author stories and robot behaviors using a simple digital painting interface. By combining the physical affordances of painting with digital media and robotic characters, stories can come to life in the real world. Programming in this environment becomes a group activity when multiple children use these tangible interfaces to program advanced robot behaviors.

35.

The Helping Hands

NEW LISTING

Cynthia Breazeal, David Nunez and Tod Machover Two robot arms are in constant motion and hard at work. From a distance, they can be seen considering their tasks, communicating with each other, and struggling to make sense of their abstract mission. Participants are encouraged to approach the hands and engage in a dialogue with the robots, offering assistance or advice. The project demonstrates a chatter system that simulates affective conversation and a parametric animation engine that provides dynamic, autonomous, character-driven movement in the robots. This project premiered at The Other Festival 2013.


36.

TinkRBook: Reinventing the Reading Primer

Cynthia Breazeal, Angela Chang and David Scott Nunez TinkRBook is a storytelling system that introduces a new concept of reading, called textual tinkerability. Textual tinkerability uses storytelling gestures to expose the text-concept relationships within a scene. Tinkerability prompts readers to become more physically active and expressive as they explore concepts in reading together. TinkRBooks are interactive storybooks that prompt interactivity in a subtle way, enhancing communication between parents and children during shared picture-book reading. TinkRBooks encourage positive reading behaviors in emergent literacy: parents act out the story to control the words on-screen, demonstrating print referencing and dialogic questioning techniques. Young children actively explore the abstract relationship between printed words and their meanings, even before this relationship is properly understood. By making story elements alterable within a narrative, readers can learn to read by playing with how word choices impact the storytelling experience. Recently, this research has been applied in developing countries.

37.

World Literacy Tablets

NEW LISTING

Cynthia Breazeal, David Nunez, Maryanne Wolf (Tufts) and Robin Morris (GSU) We are developing a system of early literacy apps, games, toys, and robots that will triage how children are learning, diagnose literacy deficits, and deploy dosages of content to encourage reading. Currently, over 200 Android-based tablets have been sent to children around the world; these devices are instrumented to provide a very detailed picture of how kids are using this technology. We are using this big data to discover usage and learning models that will inform future educational development.

38.

Zipperbot: Robotic Continuous Closure for Fabric Edge Joining

Cynthia Breazeal and Adam Whiton In robotics, the emerging field of electronic textiles and fiber-electronics represents a shift in morphology from hard and rigid mechatronic components toward a soft architecture, and more specifically, a flexible planar surface morphology. It is thus essential to determine how a robotic system might actuate flexible surfaces for donning and doffing actions. Zipperbot is a robotic continuous closure system for joining fabrics and textiles. By augmenting traditional apparel closure techniques and hardware with robotic attributes, we can incorporate these into robotic systems for surface manipulation. Through actuating closures, textiles could shape-shift or self-assemble into a variety of forms.

Leah Buechley: High-Low Tech


How to engage diverse audiences in creating their own technology by situating computation in new contexts and building tools to democratize engineering.

39.

aireForm: Refigured Shape-Changing Fashion


NEW LISTING

Henry Holtzman, Hiroshi Ishii, Leah Buechley, Jennifer Jacobs, Philippa Mothersill, Ryuma Niiyama and Xiao Xiao aireForm is a dress of many forms that fluidly morph from one to another, animated by air. Its forms evoke classic feminine silhouettes, from sleek to supple to striking. Garments are a medium through which we may alter our apparent forms to project different personas. As our personas shift from moment to moment, so too does aireForm, living and breathing with us.


40.

Circuit Sketchbook

Leah Buechley and Jie Qi The Circuit Sketchbook is a primer on creating expressive electronics using paper-based circuits. Inside are explanations of useful components with example circuits, as well as methods for crafting DIY switches and sensors from paper. There are also circuit templates for building functional electronics directly on the pages of the book.

41.

Codeable Objects

Jennifer Jacobs and Leah Buechley Codeable Objects is a library for Processing that allows people to design and build objects using geometry and programming. Geometric computation offers a host of powerful design techniques, but its use is limited to individuals with a significant amount of programming experience or access to complex design software. In contrast, Codeable Objects allows a range of people, including novice coders, designers, and artists, to rapidly design, customize, and construct an artifact using geometric computation and digital fabrication. The programming methods provided by the library allow the user to program a wide range of structures and designs with simple code and geometry. When the user compiles their code, the software outputs toolpaths based on their specifications, which can be used in conjunction with digital fabrication tools to build the object.
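The design-through-code workflow can be illustrated with a small geometric generator that emits a vector toolpath. This is a sketch of the idea in Python, not the Codeable Objects Processing API; the function names are invented.

```python
import math

def polygon_path(n_sides, radius, cx=0.0, cy=0.0):
    """Generate a closed regular-polygon toolpath as a list of (x, y)
    points, the kind of output a code-to-fabrication library might
    hand to a laser cutter.  The first point is repeated to close
    the outline."""
    pts = []
    for k in range(n_sides + 1):
        theta = 2 * math.pi * k / n_sides
        pts.append((cx + radius * math.cos(theta),
                    cy + radius * math.sin(theta)))
    return pts

def to_svg_path(pts):
    """Encode the toolpath as an SVG path string for cutters that
    accept vector files."""
    cmds = ["%s%.2f,%.2f" % ("M" if i == 0 else "L", x, y)
            for i, (x, y) in enumerate(pts)]
    return " ".join(cmds) + " Z"
```

Varying the parameters in code (sides, radius) regenerates the cut file, which is the essence of designing artifacts computationally.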

42.

Computational Textiles Curriculum

Leah Buechley and Kanjun Qiu The Computational Textiles Curriculum is a collection of projects that leverages the creativity and beauty inherent in e-textiles to create an introductory computer-science curriculum for middle- and high-school students. The curriculum is taught through a sequence of hands-on project explorations of increasing difficulty, with each new project introducing new concepts in computer science, ranging from basic control flow and abstraction to more complex ideas such as networking, data processing, and algorithms. Additionally, the curriculum introduces unique methods of working with the LilyPad Arduino, creating non-traditional projects such as a game controller, a networked fabric piano, an activity monitor, or a gesture-recognition glove. The projects are validated, calibrated, and evaluated through a series of workshops with middle- and high-school youth in the Boston area.

43.

DIY Cellphone

David A. Mellis and Leah Buechley An exploration into the possibilities for individual construction and customization of the most ubiquitous of electronic devices, the cellphone. By creating and sharing open-source designs for the phone's circuit board and case, we hope to encourage a proliferation of personalized and diverse mobile phones. Freed from the constraints of mass production, we plan to explore diverse materials, shapes, and functions. We hope that the project will help us explore and expand the limits of do-it-yourself (DIY) practice. How close can a homemade project come to the design of a cutting-edge device? What are the economics of building a high-tech device in small quantities? Which parts are even available to individual consumers? What's required for people to customize and build their own devices?

44.

DressCode
NEW LISTING

Leah Buechley and Jennifer Jacobs DressCode is design software that allows novice programmers to produce clothing and fashion accessories through computational design and digital fabrication. DressCode is an integrated visual fabrication environment featuring a two-panel display that simultaneously shows a designer's code and the resulting design. The environment supports real-time changes in the design based on changes made in the code, as well as a limited set of graphical selection tools designed to work in conjunction with the process of writing code. DressCode is designed to work with fabrication machines that function on an x-y axis (also known as two-axis devices), such as laser and vinyl cutters, computer-controlled embroidery machines, inkjet printers, and CNC milling machines, but not 3D printers. Two-axis machines are often cheaper to acquire and more widely available than 3D printers, and they correspond well with garment creation.

45.

Exploring Artisanal Technology

Leah Buechley, Sam Jacoby and David A. Mellis We are exploring the methods by which traditional artisans construct new electronic technologies using contextually novel materials and processes, incorporating wood, textiles, and reclaimed and recycled products, as well as conventional circuitry. Such artisanal technologies often address different needs, and are radically different in form and function from conventionally designed and produced products.

46.

LilyPad Arduino

Leah Buechley The LilyPad Arduino is a set of tools that empowers people to build soft, flexible, fabric-based computers. A set of sewable electronic modules enables users to blend textile craft, electrical engineering, and programming in surprising, beautiful, and novel ways. A series of workshops that employed the LilyPad have demonstrated that tools such as these, which introduce engineering from new perspectives, are capable of involving unusual and diverse groups in technology development. Ongoing research will explore how the LilyPad and similar devices can engage under-represented groups in engineering, change popular assumptions about the look and feel of technology, and spark hybrid communities that combine rich crafting traditions with high-tech materials and processes.

47.

LilyTiny

Leah Buechley and Emily Marie Lovell
The LilyTiny is a small sewable breakout board for ATtiny85 microcontrollers: devices that can be integrated into circuits to enable pre-determined interactions, such as lights that flash or areas that sense touch. The circuit board can be pre-loaded with a program, enabling students to incorporate dynamic behaviors into e-textile projects without having to know how to program microcontrollers.

48.

Microcontrollers As Material

Leah Buechley, Sam Jacoby, David A. Mellis, Hannah Perner-Wilson and Jie Qi
We've developed a set of tools and techniques that make it easy to use microcontrollers as an art or craft material, embedding them directly into drawings or other artifacts. We use the ATtiny45 from Atmel, a small and cheap (~$1) microcontroller that can be glued directly to paper or other objects. We then construct circuits using conductive silver ink, dispensed from squeeze bottles with needle tips. This makes it possible to draw a circuit, adding lights, speakers, and other electronic components.

49.

Open Source Consumer Electronics

David A. Mellis and Leah Buechley
We offer case studies in the ways that digital fabrication allows us to treat the designs of products as a kind of source code: files that can be freely shared, modified, and produced. In particular, the case studies combine traditional electronic circuit boards and components (a mature digital fabrication process) with laser-cut or 3D-printed materials. They demonstrate numerous possibilities for individual customization both pre- and post-fabrication, as well as a variety of potential production and distribution processes and scales.


50.

Programmable Paintings

Leah Buechley and Jie Qi
Programmable Paintings are a series of artworks that use electronic elements, such as LED lights and microphone sensors, as "pigments" in paintings. The goal is to blend traditional elements of painting (color, texture, composition) with these electronic components to create a new genre of time-based and interactive art.

51.

Sticky Circuits
NEW LISTING

Joseph A. Paradiso, Leah Buechley, Jie Qi and Nan-wei Gong
Sticky Circuits is a toolkit for creating electronics using circuit board stickers. Circuit stickers are created by printing traces on flexible substrates and adding conductive adhesive. These lightweight, flexible, and sticky circuit boards allow us to begin sticking interactivity onto new spaces and interfaces, such as clothing, instruments, buildings, and even our bodies.

52.

StoryClip

Leah Buechley and Sam Jacoby
Exploring conductive inks as an expressive medium for narrative storytelling, StoryClip synthesizes electrical functionality, aesthetics, and creativity to turn a drawing into a multimedia interface that promotes rich engagement with children.

Catherine Havasi: Digital Intuition

How to give computers human-like intuition so they can better understand us.

53.

CharmMe

Catherine Havasi, Brett Samuel Lazarus and Victor J Wang
CharmMe is a mobile social discovery application that helps people meet each other during events. The application blends physical and digital proximity to help you connect with other like-minded individuals. Armed with RFID sensors and a model of how the Lab works, CharmMe determines whom you should talk to using information such as check-ins to conference talks or projects "liked" via QR codes. In addition, possible opening topics of conversation are suggested based on users' expressed similar interests.

54.

ConceptNet

Catherine Havasi, Robert Speer, Henry Lieberman and Marvin Minsky
Imparting commonsense knowledge to computers enables a new class of intelligent applications better equipped to make sense of the everyday world and assist people with everyday tasks. Our approach to this problem is ConceptNet, a freely available commonsense knowledge base that possesses a great breadth of the general knowledge computers should already know, ready to be incorporated into applications. ConceptNet 5 is a semantic network with millions of nodes and edges, built from a variety of interlinked resources, both crowd-sourced and expert-created, including the Open Mind Common Sense corpus, WordNet, Wikipedia, and OpenCyc. It contains information in many languages, including English, Chinese, Japanese, Dutch, and Portuguese, resulting from a collaboration of research projects around the world. In this newest version of ConceptNet, we aim to automatically assess the reliability of its data as it is collected from sources and processes of varying reliability. Alumni Contributors: Jason Alonso, Kenneth C. Arnold, Ian Eslick, Xinyu H. Liu and Push Singh
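The "semantic network with nodes and edges" structure can be pictured with a minimal sketch. The relation names below mirror ConceptNet's style, but the toy graph, its contents, and the class itself are illustrative assumptions, not actual ConceptNet data or API:

```python
# Minimal sketch of a ConceptNet-style semantic network: nodes are concepts,
# edges are (relation, end concept, weight) assertions attached to a start
# concept. Weights stand in for the reliability scores mentioned above.
from collections import defaultdict

class SemanticNetwork:
    def __init__(self):
        # start concept -> list of (relation, end concept, weight)
        self.edges = defaultdict(list)

    def add(self, start, relation, end, weight=1.0):
        self.edges[start].append((relation, end, weight))

    def related(self, concept, relation=None):
        """Return edges leaving `concept`, optionally filtered by relation."""
        return [(rel, end, w) for rel, end, w in self.edges[concept]
                if relation is None or rel == relation]

net = SemanticNetwork()
net.add("dog", "IsA", "pet", 2.5)
net.add("dog", "CapableOf", "bark", 1.8)
net.add("pet", "AtLocation", "home", 1.2)

print(net.related("dog", "IsA"))  # [('IsA', 'pet', 2.5)]
```

An application would traverse such edges transitively (dog IsA pet, pet AtLocation home) to make everyday inferences.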


55.

Corona

Rob Speer and Catherine Havasi
How can a knowledge base learn from the Internet, when you shouldn't trust everything you read on the Internet? CORONA is a system for building a knowledge base from a combination of reliable and unreliable sources, including crowd-sourced contributions, expert knowledge, games with a purpose, automatic machine reading, and even knowledge imperfectly derived from other knowledge in the system. It marks knowledge as increasingly reliable as more sources confirm it, or as unreliable when sources disagree; by running the system in reverse, it can discover which knowledge sources are the most trustworthy.
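The mutual reinforcement the description relies on (statements become believable when trusted sources confirm them; sources become trusted when their statements are believable) can be sketched with a HITS-like truth-discovery loop. CORONA's actual model is not given here, so this is an illustrative analogue with invented data:

```python
# Hedged sketch of mutual reinforcement between sources and statements.
# A statement's belief is the sum of trust of sources asserting it; a
# source's trust is the sum of belief of statements it asserts; both are
# renormalized each round so the iteration converges to relative scores.
def estimate_trust(assertions, iterations=20):
    """assertions: list of (source, statement) pairs."""
    sources = {s for s, _ in assertions}
    statements = {st for _, st in assertions}
    trust = {s: 1.0 for s in sources}
    belief = {st: 1.0 for st in statements}
    for _ in range(iterations):
        for st in statements:
            belief[st] = sum(trust[s] for s, x in assertions if x == st)
        top = max(belief.values())
        for st in statements:
            belief[st] /= top
        for s in sources:
            trust[s] = sum(belief[st] for x, st in assertions if x == s)
        top = max(trust.values())
        for s in sources:
            trust[s] /= top
    return trust, belief

trust, belief = estimate_trust([
    ("expert", "water is wet"),
    ("crowd", "water is wet"),
    ("crowd", "the moon is cheese"),
])
print(belief["water is wet"] > belief["the moon is cheese"])  # True
```

Running the loop "in reverse" is visible in the second update step: sources that assert well-confirmed statements end up with higher trust.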

56.

Divisi: Reasoning Over Semantic Relationships

Robert Speer, Catherine Havasi, Kenneth Arnold, and Jason Alonso
We have developed technology that enables easy analysis of semantic data, blended in various ways with commonsense world knowledge. The results support reasoning by analogy and association. A packaged library of code is being made available to all sponsors.

57.

GI Mobile
NEW LISTING

Catherine Havasi and Brett Lazarus
GI Mobile is a mobile companion to the Media Lab Glass Infrastructure system. It incorporates the MessageMe messaging system to deliver a suite of location-aware features that complement the Glass Infrastructure. These include locating others in the Lab, browsing projects physically near you, and sending location-based messages. In addition, GI Mobile will alert you when you pass by projects you may be interested in, based on what projects you have "liked."

58.

MessageMe
NEW LISTING

Catherine Havasi and Brett Lazarus
MessageMe is a location-based messaging infrastructure. It consists of a messaging server that delivers messages to recipients as they enter a desired physical space in the Lab. MessageMe builds on the Glass Infrastructure system, utilizing the RFID readers at each screen to determine users' locations.

59.

Narratarium

V. Michael Bove Jr., Catherine Havasi, Fransheska Colon, Katherine (Kasia) Hayden, Daniel Novy, Jie Qi and Robert H. Speer
Remember telling scary stories in the dark with flashlights? Narratarium is an immersive storytelling environment that augments creative play using texture, color, and image. We are using natural language processing to listen to and understand stories being told, and to thematically augment the environment with color and images. As a child tells stories about a jungle, the room fills with greens and browns and foliage comes into view. A traveling parent can tell a story to a child and fill the room with images, color, and presence.

60.

Open Mind Common Sense

Michael Luis Puncel, Karen Anne Sittig and Robert H. Speer
The biggest problem facing artificial intelligence today is how to teach computers enough about the everyday world so that they can reason about it as we do, developing "common sense." We think this problem may be solved by harnessing the knowledge of people on the Internet, and we have built a website to make it easy and fun for people to work together to give computers the millions of little pieces of ordinary knowledge that constitute common sense. Teaching computers how to describe and reason about the world will give us exactly the technology we need to take the Internet to the next level: from a giant repository of web pages to a new state in which it can think about all the knowledge it contains; in essence, to make it a living entity. Alumni Contributors: Jason Alonso, Kenneth C. Arnold, Ian Eslick, Henry Lieberman, Xinyu H. Liu, Bo Morgan, Push Singh and Dustin Arthur Smith


61.

Red Fish, Blue Fish

Robert Speer and Catherine Havasi
With commonsense computing, we can discover trends in the topics that people are talking about right now. Red Fish, Blue Fish takes real-time input from many political blogs and creates a visualization of the topics being discussed by the left and the right.

62.

Second-Language Learning Using Games with a Purpose
NEW LISTING

Catherine Havasi and Kasia Hayden
An online language-learning tool and game with a purpose (GWAP) designed to simultaneously gather annotated speech and text data useful for improving natural language processing (NLP) applications and serve as an English-language learning resource.

63.

Story Space

Catherine Havasi and Michael Luis Puncel
Analogy Space, a previous project of the Digital Intuition group, developed a technique for plotting concepts in a many-dimensional semantic space in order to identify clusters of concepts that are similar to each other. Story Space applies this technique to human narrative in order to provide a measure of similarity between different stories. It has had preliminary success using datasets that are easily broken into discrete events, such as "how-to" articles from the Internet. The next steps involve using automatic event taggers to determine the progression of a story.
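The similarity measure underlying the Analogy Space technique can be pictured with a minimal sketch: each concept (or story event) is a vector of features, and similarity is the cosine of the angle between vectors. The feature matrix below is invented for illustration; the real Analogy Space additionally applies a truncated SVD so that similar-but-not-identical features reinforce each other:

```python
# Cosine similarity between concepts represented as binary feature vectors.
# "dog" and "cat" share features, so they are close; "car" shares none.
import math

features = ["is a pet", "has fur", "is a machine", "has wheels"]
concepts = {
    "dog": [1, 1, 0, 0],
    "cat": [1, 1, 0, 0],
    "car": [0, 0, 1, 1],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

print(cosine(concepts["dog"], concepts["cat"]))  # ~1.0 (identical toy vectors)
print(cosine(concepts["dog"], concepts["car"]))  # 0.0 (no shared features)
```

For stories, the rows would be stories and the columns discrete events, giving a story-to-story similarity measure in the same way.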

64.

The Glass Infrastructure

Henry Holtzman, Andy Lippman, Matthew Blackshaw, Jon Ferguson, Catherine Havasi, Julia Ma, Daniel Schultz and Polychronis Ypodimatopoulos
This project builds a social, place-based information window into the Media Lab, using 30 touch-sensitive screens strategically placed throughout the physical complex and at sponsor sites. The idea is to get people to talk among themselves about the work that they jointly explore in a public place. We present Lab projects as dynamically connected sets of "charms" that visitors can save, trade, and explore. The GI demonstrates a framework for an open, integrated IT system and shows new uses for it. Alumni Contributors: Rick Borovoy, Greg Elliott and Boris Grigory Kizelshteyn

65.

Understanding Dialogue

Catherine Havasi and Anjali Muralidhar
To extend the Digital Intuition group's ability to understand human language, we must develop a module that fills the gaps in current technology for understanding dialogue. This module will be based on data from the Switchboard human-human dialogue corpus, as well as a dataset of recorded dialogues between parents and children while reading an interactive e-book created by the Lab's Personal Robots group. The goal is for the module to identify the emotion and mood of the dialogue in order to make inferences about what parents and children generally talk about when reading the book, and to make suggestions about additional conversation topics. Conversations between an adult and a child while reading a book can greatly contribute to the learning and development of young children.


Hugh Herr: Biomechatronics

How technology can be used to enhance human physical capability.

66.

Volitional Control of a Powered Ankle-Foot Prosthesis
NEW LISTING

Hugh Herr and Oliver Kannape
This project focuses on granting transtibial amputees volitional control over their prostheses by combining electromyographic (EMG) activity from the amputees' residual limb muscles with intrinsic controllers on the prosthesis. The aim is to generalize biomimetic behavior of the prosthesis, making it independent of walking terrains and transitions.

67.

A Variable-Impedance Prosthetic (VIPr) Socket Design

Hugh Herr and David Sengeh
Today, 100 percent of amputees experience some form of prosthetic socket discomfort. This project involves the design and production of a comfortable, variable-impedance prosthetic (VIPr) socket for a transtibial amputee, using digital anatomical data and computer-aided design and manufacturing (CAD/CAM). The VIPr socket uses multiple materials to achieve compliance, thereby increasing socket comfort for amputees while maintaining structural integrity. The compliant features are seamlessly integrated into the 3D-printed socket to achieve lower interface peak pressures over bony protuberances and other anatomical points in comparison to a conventional socket. This lower peak pressure is achieved through a design that uses anthropomorphic data acquired through surface scanning and magnetic resonance imaging. A mathematical transformation spatially maps the quantitative measurements of the human residual limb to the corresponding socket shape and impedance characteristics.

68.

Artificial Gastrocnemius

Hugh Herr and Ken Endo
Human walking neuromechanical models show how each muscle works during normal, level-ground walking. They are mainly modeled with clutches and linear springs, and are able to capture dominant normal walking behavior. This suggests the use of a series-elastic clutch at the knee joint for below-knee amputees. We have developed a powered ankle prosthesis that generates enough force to enable a user to walk "normally." However, amputees still have problems at the knee joint due to the lack of the gastrocnemius, which works as both an ankle-knee flexor and a plantar flexor. We hypothesize that the metabolic cost and EMG patterns of an amputee with our powered ankle and a virtual gastrocnemius will dramatically improve.

69.

Biomimetic Active Prosthesis for Above-Knee Amputees

Hugh Herr, Elliott Rouse and Luke Mooney
Using biologically inspired design principles, we propose a biomimetic robotic knee prosthesis that uses a clutchable series-elastic actuator. In this design, a clutch is placed in parallel with a combined motor and spring. This architecture permits the mechanism to provide biomimetic walking dynamics while requiring minimal electromechanical energy from the prosthesis. The overarching goal of this project is to design a new generation of robotic knee prostheses capable of generating significant energy during level-ground walking that can be stored in a battery and used to power a robotic ankle prosthesis and other net-positive locomotion modes (e.g., stair ascent). Alumni Contributor: Ernesto C. Martinez-Villalpando


70.

Control of Muscle-Actuated Systems via Electrical Stimulation

Hugh Herr
Motivated by applications in rehabilitation and robotics, we are developing methodologies to control muscle-actuated systems via electrical stimulation. As a demonstration of this potential, we are developing centimeter-scale robotic systems that utilize muscle for actuation and glucose as a primary source of fuel. This is an interesting control problem because muscles: a) are mechanical-state-dependent actuators; b) exhibit strong nonlinearities; and c) have slow time-varying properties due to fatigue-recuperation, growth-atrophy, and damage-healing cycles. We are investigating a variety of adaptive and robust control techniques to achieve trajectory tracking, as well as mechanical power-output control under sustained oscillatory conditions. To implement and test our algorithms, we have developed an experimental capability that allows us to characterize and control muscle in real time while imposing a wide variety of dynamical boundary conditions. Alumni Contributor: Waleed A. Farahat

71.

Effect of a Powered Ankle on Shock Absorption and Interfacial Pressure

Hugh Herr and David Hill
Lower-extremity amputees face a series of potentially serious post-operative complications, among them an increased risk of further amputations, excessive stress on the unaffected and residual limbs, and discomfort at the human-prosthesis interface. Conventional passive prostheses have made strides toward alleviating these risks, but we believe the limit of "dumb" elastic prostheses has been reached; to make further strides, we must integrate smart technology, in the form of sensors and actuators, into lower-limb prostheses. This project compares shock absorption and socket pressure between passive and active ankle-foot prostheses, in an attempt to quantitatively evaluate the patient's comfort.

72.

FitSocket: A Better Way to Make Sockets

Hugh Herr, Neri Oxman, Elizabeth Tsai, Reza Safai-Naeeni, Zjenja Doubrovski, Arthur Petron and Roy Kornbluh (SRI)
Sockets, the cup-shaped devices that attach an amputated limb to a lower-limb prosthesis, are made through unscientific, artisanal methods that do not deliver repeatable quality and comfort from one individual with amputation to the next. The FitSocket project aims to identify the correlation between leg tissue properties and the design of a comfortable socket. We accomplish this with a robotic socket measurement device, the FitSocket, which can directly measure tissue properties. With this data, we can rapid-prototype test sockets and socket molds in order to make rigid, spatially variable stiffness, and spatially/temporally variable stiffness sockets.

73.

Human Walking Model Predicts Joint Mechanics, Electromyography, and Mechanical Economy

Hugh Herr and Ken Endo
We are studying the mechanical behavior of leg muscles and tendons during human walking in order to motivate the design of economical robotic legs. We hypothesize that quasi-passive, series-elastic clutch units spanning the knee joint in a musculoskeletal arrangement can capture the dominant mechanical behaviors of the human knee in level-ground walking. Biarticular elements necessarily transfer energy from the knee joint to the hip and/or ankle joints, and this mechanism would reduce the necessary muscle work and improve the mechanical economy of a human-like walking robot.


74.

Load-Bearing Exoskeleton for Augmentation of Human Running

Hugh Herr, Grant Elliott and Andrew Marecki
Augmentation of human locomotion has proved an elusive goal. Natural human walking is extremely efficient, and the complex articulation of the human leg poses significant engineering difficulties. We present a wearable exoskeleton designed to reduce the metabolic cost of jogging. The exoskeleton places a stiff fiberglass spring in parallel with the complete leg during stance phase, then removes it so that the knee may bend during leg swing. The result is a bouncing gait with reduced reliance on the musculature of the knee and ankle.

75.

Neural Interface Technology for Advanced Prosthetic Limbs
NEW LISTING

Edward Boyden, Hugh Herr and Ron Riso
Recent advances in artificial limbs have resulted in the provision of powered ankle and knee function for lower-extremity amputees and powered elbow, wrist, and finger joints for upper-extremity prostheses. Researchers still struggle, however, with how to provide prosthesis users with full volitional and simultaneous control of the powered joints. This project seeks to develop means to allow amputees to control their powered prostheses by activating the peripheral nerves present in their residual limbs. Such neural control can be more natural than currently used myoelectric control, since the same functions previously served by particular motor fascicles can be directed to the corresponding prosthesis actuators for simultaneous joint control, as in normal limbs. Future plans include the capability to electrically activate the sensory components of residual-limb nerves to provide amputees with tactile feedback and an awareness of joint position from their prostheses.

76.

Powered Ankle-Foot Prosthesis

Hugh Herr
The human ankle provides a significant amount of net positive work during the stance period of walking, especially at moderate to fast walking speeds. Conversely, conventional ankle-foot prostheses are completely passive during stance and consequently cannot provide net positive work. Clinical studies indicate that transtibial amputees using conventional prostheses experience many problems during locomotion, including a high gait metabolism, a low gait speed, and gait asymmetry. Researchers believe the main cause of these observed locomotion problems is the inability of conventional prostheses to provide net positive work during stance. The objective of this project is to develop a powered ankle-foot prosthesis capable of providing net positive work during the stance period of walking. To this end, we are investigating mechanical design and control system architectures for the prosthesis. We are also conducting a clinical evaluation of the proposed prosthesis with different amputee participants. Alumni Contributor: Samuel Au

77.

Sensor-Fusions for an EMG Controlled Robotic Prosthesis

Matthew Todd Farrell and Hugh Herr
Current unmotorized prostheses do not provide adequate energy return during late stance to improve level-ground locomotion. Robotic prostheses can provide power during late stance to improve metabolic economy in an amputee during level-ground walking. This project seeks to expand the types of terrain a robotic ankle can successfully navigate by using command signals taken from the intact and residual limbs of an amputee. By combining these command signals with sensors attached to the robotic ankle, it may be possible to further understand the role of physiological signals in the terrain adaptation of robotic ankles.


Cesar Hidalgo: Macro Connections

How to transform data into knowledge.

78.

Cultural Exports

Shahar Ronen, Amy (Zhao) Yu and César A. Hidalgo
Cultural Exports introduces a new approach for studying both connections between countries and the cultural impact of countries. Consider a native of a certain country who becomes famous in other countries: this person is, in a sense, a "cultural export" of his home country, "imported" by other countries. For example, the popularity of Dominican baseball player Manny Ramirez in the USA and Korea makes him a cultural export of the Dominican Republic. Using Wikipedia biographies and search-engine data, we measure the popularity of people across different countries and languages, and break it down by each person's native country, period, and occupation. This allows us to map international cultural trade and identify major exporters and importers in different fields and times, as well as hubs for cultural trade (e.g., Greece for philosophy in classical times, or the USA for baseball nowadays).

79.

Immersion

Deepak Jagdish, Daniel Smilkov and Cesar Hidalgo
Immersion is a visual data experiment that delivers a fresh perspective on your email inbox. Focusing on a people-centric approach rather than on the content of the emails, Immersion brings into view an important personal insight: the network of people you are connected to via email, and how it evolves over the course of many years. Given that this experiment deals with data that is extremely private, it is worth noting that when given secure access to your Gmail inbox (which you can revoke at any time), Immersion uses only data from email headers, and not a single word of any email's subject or body content.

80.

Place Pulse

Phil Salesses, Anthony DeVincenzi and César A. Hidalgo
Place Pulse is a website that allows anybody to quickly run a crowdsourced study and interactively visualize the results. It works by taking a complex question, such as "Which place in Boston looks the safest?", and breaking it down into easier-to-answer binary pairs. Internet participants are given two images and asked "Which place looks safer?" From the responses, directed graphs are generated that can be mined, allowing the experimenter to identify interesting patterns in the data and form new hypotheses based on their observations. It works with any city or question and is highly scalable. With an increased understanding of human perception, it should be possible for calculated policy decisions to have a disproportionate impact on public opinion.
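The step from binary pairs back to an overall answer can be sketched simply. Place Pulse's published scoring method is not given here, so this uses a plain win rate (wins divided by appearances) as an illustrative stand-in, with invented vote data:

```python
# Turn pairwise "Which place looks safer?" answers into per-image scores.
from collections import Counter

votes = [  # each tuple: (winner, loser) from one binary comparison
    ("img_a", "img_b"),
    ("img_a", "img_c"),
    ("img_b", "img_c"),
    ("img_a", "img_b"),
]

wins, appearances = Counter(), Counter()
for winner, loser in votes:
    wins[winner] += 1
    appearances[winner] += 1
    appearances[loser] += 1

scores = {img: wins[img] / appearances[img] for img in appearances}
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)  # ['img_a', 'img_b', 'img_c']
```

In the directed-graph view the description mentions, each vote is an edge from loser to winner, and a score like this is one simple way to mine that graph for a ranking.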

81.

The Economic Complexity Observatory

Alex Simoes, Dany Bahar, Ricardo Hausmann and César A. Hidalgo
With more than six billion people and 15 billion products, the world economy is anything but simple. The Economic Complexity Observatory is an online tool that helps people explore this complexity by providing tools that allow decision makers to understand the connections that exist between countries and the myriad products they produce and/or export. The Economic Complexity Observatory puts at everyone's fingertips the latest analytical tools developed to visualize and quantify the productive structure of countries and their evolution.

82.

The Language Group Network

Shahar Ronen, Kevin Hu, Michael Xu, and César A. Hidalgo
Most interactions between cultures require overcoming a language barrier, which is why multilingual speakers play an important role in facilitating such interactions. In addition, certain languages (not necessarily the most spoken ones) are more likely than others to serve as intermediary languages. We present the Language Group Network, a new approach for studying global networks using data generated by tens of millions of speakers from all over the world: a billion tweets, Wikipedia edits in all languages, and translations of two million printed books. Our network spans over eighty languages, and can be used to identify the most connected languages and the potential paths through which information diffuses from one culture to another. Applications include promotion of cultural interactions, prediction of trends, and marketing.

83.

The Privacy Bounds of Human Mobility


NEW LISTING

Cesar A. Hidalgo and Yves-Alexandre de Montjoye
We used 15 months of data from 1.5 million people to show that four spatio-temporal points (approximate places and times) are enough to uniquely identify 95% of individuals in a mobility database. Our work shows that human behavior places fundamental natural constraints on the privacy of individuals, and that these constraints hold even when the resolution of the dataset is low: even coarse datasets provide little anonymity. We further developed a formula to estimate the uniqueness of human mobility traces. These findings have important implications for the design of frameworks and institutions dedicated to protecting the privacy of individuals.
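The quantity being measured (the fraction of users whose trace is pinned down by a handful of spatio-temporal points) can be sketched directly. The toy traces and the sampling scheme below are illustrative; the actual study used antenna-level mobile phone records over 15 months:

```python
# Estimate "unicity": the share of sampled users uniquely identified by p
# of their own (place, time) points.
import random

def unicity(traces, p, trials=200, rng=random.Random(0)):
    """traces: {user: set of (place, time) points}. Returns the fraction of
    sampled users whose p sampled points match no one else's trace."""
    users = list(traces)
    unique = 0
    for _ in range(trials):
        user = rng.choice(users)
        pts = rng.sample(sorted(traces[user]), k=min(p, len(traces[user])))
        matching = [u for u in users if set(pts) <= traces[u]]
        if matching == [user]:
            unique += 1
    return unique / trials

traces = {
    "alice": {(1, 1), (2, 2), (3, 3)},   # (place, time) points
    "bob":   {(1, 1), (2, 2), (4, 4)},
    "carol": {(5, 5), (6, 6), (7, 7)},
}
print(unicity(traces, p=3))  # 1.0: three points single out every toy user
```

Coarsening the dataset corresponds to merging nearby places and times before running this check; the finding above is that unicity stays high even then.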

Henry Holtzman: Information Ecology

How to create seamless and pervasive connections between our physical environments and information resources.

84.

8D Display

Henry Holtzman, Matt Hirsch and Shahram Izadi
The 8D Display combines a glasses-free 3D display (4D light field output) with a relightable display (4D light field input). The ultimate effect of this extension of our earlier BiDi Screen project will be a display capable of showing physically realistic objects that respond to scene lighting as we would expect. Imagine a shiny virtual teapot in which you see your own reflection, a 3D model that can be lit with a real flashlight to expose small surface features, or a virtual flashlight that illuminates real objects in front of the display. Because the 8D Display captures light field input, gestural interaction as seen in the BiDi Screen project is also possible.

85.

Air Mobs

Andy Lippman, Henry Holtzman and Eyal Toledano
Air Mobs creates a local mobile community whose members can freely share Internet access among diverse carrier 3G and 4G data accounts. We created an app in which anyone can advertise that they have bits and battery to spare and are willing to let other Air Mobs members tether to them. They might do this if they are near their data cap and either need a little more data, or have some they are willing to let others use before it expires. A website tracks the evolution of the community and posts the biggest donors and users of the system. To date, this app works on Android devices. It is designed to be open and community-based. We may experiment with market credits for sharing airtime and with adding other devices and features.

86.

aireForm: Refigured Shape-Changing Fashion


NEW LISTING

Henry Holtzman, Hiroshi Ishii, Leah Buechley, Jennifer Jacobs, Philippa Mothersill, Ryuma Niiyama and Xiao Xiao
aireForm is a dress of many forms that fluidly morph from one to another, animated by air. Its forms evoke classic feminine silhouettes, from sleek to supple to striking. Garments are a medium through which we may alter our apparent forms to project different personas. As our personas shift from moment to moment, so too does aireForm, living and breathing with us.


87.

Brin.gy: What Brings Us Together

Henry Holtzman, Andy Lippman and Polychronis Ypodimatopoulos
We allow people to form dynamic groups focused on topics that emerge serendipitously during everyday life. These groups can be long-lived or flower for a short time. Examples include people interested in buying the same product, those with similar expertise, those in the same location, or any collection of such attributes. We call this the Human Discovery Protocol (HDP). Similar to how computers follow well-established protocols like DNS in order to find other computers that carry desired information, HDP presents an open protocol for people to announce bits of information about themselves and have them aggregated and returned in the form of a group of people that match the user's specified criteria. We are experimenting with a web-based implementation (brin.gy) that allows users to join and communicate with groups of people based on their location, profile information, and items they may want to buy or sell.

88.

CoCam

Henry Holtzman, Andy Lippman, Dan Sawada and Eyal Toledano
Collaboration and media creation are difficult tasks, both for people and for network architectures. CoCam is a self-organizing network for real-time camera image collaboration. As with all camera apps, users just point and shoot; CoCam then automatically joins other media creators into a network of collaborators. Network discovery, creation, grouping, joining, and leaving are done automatically in the background, letting users focus on participating in an event. We use local P2P middleware and a 3G negotiation service to create these networks for real-time media sharing. CoCam also provides multiple views that make the media experience more exciting, such as appearing to be in multiple places at the same time. The media is immediately distributed and replicated on multiple peers; thus, if a camera phone is confiscated, other users have copies of the images.

89.

ContextController

Robert Hemsley, Arlene Ducao, Eyal Toledano and Henry Holtzman
ContextController is a second-screen social TV application that augments linear broadcast content with related contextual information. By utilizing existing closed-captioning data, ContextController gathers related explanatory video content and displays it in real time, synchronized to the original content.

90.

CoSync

Henry Holtzman, Andy Lippman and Eyal Toledano
CoSync builds the ability to create and act jointly into mobile devices. This mirrors the way we as a society act both individually and in concert. The CoSync device ecology combines multiple stand-alone devices and controls them opportunistically as if they were one distributed, or diffuse, device at the user's fingertips. CoSync includes a programming interface that allows time-synchronized coordination at a granularity that permits watching a movie on one device while hearing the sound from another. The open API encourages an ever-growing set of such finely coordinated applications.
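Playing video on one device and audio on another requires the devices to agree on a common clock. A textbook NTP-style offset estimate illustrates the kind of calculation such a time-synchronized API needs; this is not CoSync's actual protocol, and the timestamps are invented:

```python
# Estimate a peer's clock offset from one request/reply timestamp exchange.
def clock_offset(t1, t2, t3, t4):
    """t1: request sent (local clock)    t2: request received (peer clock)
    t3: reply sent (peer clock)          t4: reply received (local clock)"""
    offset = ((t2 - t1) + (t3 - t4)) / 2   # peer clock minus local clock
    delay = (t4 - t1) - (t3 - t2)          # round-trip network delay
    return offset, delay

# Local clock at 0 ms, peer clock running 50 ms ahead, 10 ms each way.
offset, delay = clock_offset(t1=0, t2=60, t3=61, t4=21)
print(offset, delay)  # 50.0 20
```

With the offset known, each device can schedule "play frame N at shared time T" and stay in lockstep within the network delay's uncertainty.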

91.

Droplet

Robert Hemsley and Henry Holtzman
Droplet is a tangible interface that explores the movement of information between digital and physical representations. Through light-based communication, the project allows information to be easily extracted from its digital form behind glass and converted into mobile, tangible representations, altering its form and our perception of the information.

92.

Encephalodome
NEW LISTING

Arlene Ducao, Henry Holtzman, Rachel Mersky "Encephalodome" (working title) is an art+science game under development for the dome projection (planetarium) setting of the Lower Eastside Girls Club. Players will wear inexpensive electroencephalography (EEG) devices to both control and contribute to the game. They can expressively explore science through activities like concentrating, meditating, closing their eyes, and moving their bodies. By fusing many kinds of science data sets into a vast spatial experience, Encephalodome will engage players in natural beauty beyond the scales of human perception. "Encephalodome" gameplay focuses on ocean acidification: increased pollution is changing the pH of the oceans, affecting the growth of sea invertebrates and shellfish. "Encephalodome" will invite its users to interactively role-play prototypical sea organisms like coral, plankton, jellyfish, and lobster through decades of increased carbon emissions.

Page 20

April 2013

MIT Media Lab

93.

Flow

Robert Hemsley and Henry Holtzman Flow is an augmented interaction project that bridges the divide between our non-digital objects and our ecosystem of connected devices. Using computer vision, Flow augments our traditional interactions with digital meaning, allowing an event in one environment to flow into the next. Through this, physical actions such as tearing a document can have a mirrored effect and meaning in our digital environment, leading to actions such as deletion of the associated digital file. This project is part of an initial exploration that focuses on creating an augmented interaction overlay for our environment, enabling users to redefine their physical actions.

94.

MindRider

Arlene Ducao and Henry Holtzman MindRider is a helmet that translates electroencephalogram (EEG) feedback into an embedded LED display. For the wearer, green lights indicate a focused, active mental state, while red lights indicate drowsiness, anxiety, and other states not conducive to operating a bike or vehicle. Flashing red lights indicate extreme anxiety (panic). As many people return to cycling as a primary means of transportation, MindRider can support safety by adding visibility and increased awareness to the cyclist/motorist interaction process. In future versions, MindRider may be outfitted with an expanded set of EEG contacts, GPS radio, non-helmet wearable visualization, and other features to increase the cyclist's awareness of self and environment. These features may also allow for hands-free control of cycle function. A networked set of MindRiders may be useful for urban planning and emergency response situations.
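The mapping from EEG feedback to the LED display described above can be sketched as a simple classifier; the normalized inputs and the panic threshold below are illustrative stand-ins, not MindRider's actual calibration:

```python
def helmet_color(attention, anxiety, panic_threshold=0.9):
    """Map normalized EEG readings (0.0-1.0) to an LED display state.

    Hypothetical thresholds for illustration only.
    """
    if anxiety >= panic_threshold:
        return ("red", "flashing")   # extreme anxiety (panic): flash red
    if anxiety > attention:
        return ("red", "solid")      # drowsy/anxious: solid red
    return ("green", "solid")        # focused, active mental state

assert helmet_color(0.8, 0.2) == ("green", "solid")
```

A real helmet would smooth the raw EEG signal over a window before classifying, so the display doesn't flicker between states on every sample.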

95.

MobileP2P

Yosuke Bando, Konosuke Watanabe, Daniel Dubois, Eyal Toledano, Robert Hemsley and Henry Holtzman MobileP2P aims to magically populate mobile devices with popular video clips and app updates without using people's data plans, by opportunistically connecting nearby devices when they come into range of each other.
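The opportunistic exchange described above amounts to a set-difference sync whenever two devices meet. A purely illustrative sketch (a real system would rank content by popularity and respect storage limits):

```python
def opportunistic_sync(device_a, device_b):
    """Exchange the content each nearby device is missing.

    Each device is modeled as a dict mapping content IDs to payloads.
    Transfers happen over a local radio link, not the data plan.
    """
    only_a = {k: v for k, v in device_a.items() if k not in device_b}
    only_b = {k: v for k, v in device_b.items() if k not in device_a}
    device_b.update(only_a)
    device_a.update(only_b)

a = {"clip1": "...", "update2": "..."}
b = {"clip3": "..."}
opportunistic_sync(a, b)
# both devices now hold clip1, update2, and clip3
```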

96.

NewsJack

Sasha Costanza-Chock, Henry Holtzman, Ethan Zuckerman and Daniel E. Schultz NewsJack is a media remixing tool built from Mozilla's Hackasaurus. It allows users to modify the front pages of news sites, remixing language and headlines to turn the news into what they wish it could be.

97.

NeXtream: Social Television

Henry Holtzman, ReeD Martin and Mike Shafran Functionally, television content delivery has remained largely unchanged since the introduction of television networks. NeXtream explores an experience where the role of the corporate network is replaced by a social network. User interests, communities, and peers are leveraged to determine television content, combining sequences of short videos to create a set of channels customized to each user. This project creates an interface to explore television socially, connecting a user with a community through content, with varying levels of interactivity: from passively consuming a series, to actively crafting one's own television and social experience. Alumni Contributor: Ana Luisa Santos

98.

OpenIR: Crowd Map Plugin


NEW LISTING

Arlene Ducao, Henry Holtzman, Ilias Koen, Juhee Bae, Stephanie New, Barry Beagen When crowd maps track a natural disaster, social information may not be enough. The OpenIR Crowd Map plugin brings infrared satellite maps into the Ushahidi crowd map platform so that social and satellite data can be analyzed together. OpenIR's Jakarta Flood 2013 deployment has won awards for its work to show social data overlaid onto environmental features and a flood vulnerability layer.

99.

OpenIR: Data Viewer

Arlene Ducao, Henry Holtzman, Ilias Koen, Juhee Bae, Barry Beagen When an environmental crisis strikes, the most important element in saving lives is information. Information regarding water depths, spread of oil, fault lines, burn scars, and elevation is crucial in the face of disaster. Much of this information is publicly available as infrared satellite data. However, with today's technology, this data is difficult to obtain, and even more difficult to interpret. Open Infrared (OpenIR) is an ICT (information and communication technology) platform offering geo-located infrared satellite data as on-demand map layers and translating the data so that anyone can understand it easily. OpenIR will be pilot tested in Indonesia, where ecological and economic vulnerability is apparent from frequent seismic activity and limited supporting infrastructure. The OpenIR team will explore how increased accessibility to environmental information can help infrastructure-challenged regions deal with environmental crises of many kinds.

100. Proverbial Wallets

Henry Holtzman, John Kestner, Daniel Leithinger, Danny Bankman, Emily Tow and Jaekyung Jung We have trouble controlling our consumer impulses, and there's a gap between our decisions and the consequences. When we pull a product off the shelf, do we know our bank-account balance, or whether we're over budget for the month? Our existing senses are inadequate to warn us. The Proverbial Wallet fosters a financial sense at the point of purchase by embodying our electronically tracked assets. We provide tactile feedback reflecting account balances, spending goals, and transactions as a visceral aid to responsible decision-making.

101. StackAR

Robert Hemsley and Henry Holtzman StackAR explores the augmentation of physical objects within a digital environment by abstracting interfaces from physical to virtual implementations. StackAR is a LilyPad Arduino shield that enables capacitive touch and light-based communication with a tablet. When pressed against a screen, the functionality of StackAR extends into the digital environment, allowing the object to become augmented by the underlying display. This creates an augmented breadboard environment where virtual and physical components can be combined and prototyped in a more intuitive manner.

102. SuperShoes

Dhairya Dand and Henry Holtzman Our smartphones demand active attention as we use them to navigate streets, find restaurants, meet friends, and remind us of tasks. SuperShoes lets us access this information in a physical, ambient form through a foot interface. SuperShoes takes us to our destination; senses interesting people, places, and events in our proximity; and notifies us about tasks, all while we immerse ourselves in the environment. We explore a physical language of interaction afforded by the foot through various tactile senses. By weaving digital bits into the shoes, SuperShoes liberates information from the confines of screens and brings it onto the body.

103. Tactile Allegory

Henry Holtzman and Philippa Mothersill Messages are coded into all of the objects and environments around us. Some messages are obvious, some are subtle; some are understood through social or contextual associations, some are perceived on a more primal level. These messages can be shaped by the design of objects through their materials, forms, and textures. Tactile Allegory is an exploration into the use of form as a means of communicating information through physical objects: the objectified medium. Initial explorations into form-changing technologies combine the fabrication strategies of digital materials with the artistic narrative of quilting to create almost pixelated surfaces that can be reconfigured to represent different objectified information. Future work includes developing this into applications at several scales, from jewellery to clothing to furniture, and communicating personal information to users through this objectified medium.

104. The Glass Infrastructure

Henry Holtzman, Andy Lippman, Matthew Blackshaw, Jon Ferguson, Catherine Havasi, Julia Ma, Daniel Schultz and Polychronis Ypodimatopoulos This project builds a social, place-based information window into the Media Lab using 30 touch-sensitive screens strategically placed throughout the physical complex and at sponsor sites. The idea is to get people to talk among themselves about the work that they jointly explore in a public place. We present Lab projects as dynamically connected sets of "charms" that visitors can save, trade, and explore. The GI demonstrates a framework for an open, integrated IT system and shows new uses for it. Alumni Contributors: Rick Borovoy, Greg Elliott and Boris Grigory Kizelshteyn

105. Truth Goggles

Henry Holtzman and Daniel E. Schultz Truth Goggles attempts to decrease the polarizing effect of perceived media bias by prompting people to question all sources equally, invoking fact-checking services at the point of media consumption. Readers will approach even their most trusted sources with a more critical mentality by viewing content through various "lenses" of truth.

106. Twitter Weather

Henry Holtzman, John Kestner and Stephanie Bian The vast amounts of user-generated content on the Web produce information overload as frequently as they provide enlightenment. Twitter Weather reduces large quantities of text into meaningful data by gauging its emotional content. This Website visualizes the prevailing mood about top Twitter topics by rendering a weather-report-style display. Comment Weather is its counterpart for article comments, allowing you to gauge sentiment without leaving the page. Supporting Twitter Weather is a user-trained Web service that aggregates and visualizes attitudes on a topic.
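The reduction from a batch of posts to a weather-style mood display can be sketched as below; the tiny word lists and thresholds are stand-ins for illustration, not Twitter Weather's user-trained model:

```python
# Hypothetical sentiment lexicons, far smaller than a real model's.
POSITIVE = {"love", "great", "happy", "win"}
NEGATIVE = {"hate", "awful", "sad", "fail"}

def mood_score(posts):
    """Average per-post sentiment, clamped to [-1.0, 1.0]."""
    scores = []
    for post in posts:
        words = post.lower().split()
        hits = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
        scores.append(max(-1.0, min(1.0, hits / 3.0)))
    return sum(scores) / len(scores) if scores else 0.0

def weather_report(score):
    """Render the prevailing mood in the weather-report metaphor."""
    if score > 0.3:
        return "sunny"
    if score > -0.1:
        return "cloudy"
    return "stormy"
```

Usage: `weather_report(mood_score(posts_about_topic))` yields one of `"sunny"`, `"cloudy"`, or `"stormy"` per trending topic.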

107. Where The Hel

Arlene Ducao and Henry Holtzman "Where The Hel" is a pair of helmets: plain and funky. The funky helmet is 3D printed; the plain helmet visualizes proximity to the funky helmet, as a function of signal strength, via an LED light strip. The funky helmet contains an XBee and a GPS radio, and its position is tracked via a web app. The wearer of the plain helmet can track the funky one via the web app and the LED strip on their helmet. These helmets are potential iterations toward a more developed HADR (Humanitarian Assistance and Disaster Relief) helmet system.

Hiroshi Ishii: Tangible Media


How to design seamless interfaces between humans, digital information, and the physical environment.

108. aireForm: Refigured Shape-Changing Fashion


NEW LISTING

Henry Holtzman, Hiroshi Ishii, Leah Buechley, Jennifer Jacobs, Philippa Mothersill, Ryuma Niiyama and Xiao Xiao aireForm is a dress of many forms that fluidly morph from one to another, animated by air. Its forms evoke classic feminine silhouettes, from sleek to supple to striking. Garments are a medium through which we may alter our apparent forms to project different personas. As our personas shift from moment to moment, so too does aireForm, living and breathing with us.

109. Ambient Furniture

Hiroshi Ishii, David Rose, and Shaun Salzberg Furniture is the infrastructure for human activity. Every day we open cabinets and drawers, pull up to desks, recline in recliners, and fall into bed. How can technology augment these everyday rituals in elegant and useful ways? The Ambient Furniture project mixes apps with the IKEA catalog to make couches more relaxing, tables more conversational, desks more productive, lamps more enlightening, and beds more restful. With input from Vitra and Steelcase, we are prototyping a line of furniture to explore ideas about peripheral awareness (Google Latitude door bell), incidental gestures (Amazon restocking trash can and the Pandora lounge chair), pre-attentive processing (energy clock), and eavesdropping interfaces (Facebook photo coffee table).

110. Beyond: A Collapsible Input Device for 3D Direct Manipulation

Jinha Lee and Hiroshi Ishii Beyond is a collapsible input device for direct 3D manipulation. When pressed against a screen, Beyond collapses in the physical world and extends into the digital space of the screen, so that users have the illusion that they are inserting the tool into the virtual space. Beyond allows users to interact directly with 3D media without having to wear special glasses, avoiding inconsistencies of input and output. Users can select, draw, and sculpt in 3D virtual space, and seamlessly transition between 2D and 3D manipulation.

111. FocalSpace

Hiroshi Ishii, Anthony DeVincenzi and Lining Yao FocalSpace is a system for focused collaboration utilizing spatial depth and directional audio. We present a space where participants, tools, and other physical objects within the space are treated as interactive objects that can be detected, selected, and augmented with metadata. Further, we demonstrate several scenarios of interaction as concrete examples. By utilizing diminished reality to remove unwanted background surroundings through synthetic blur, the system aims to attract participant attention to foreground activity.

112. GeoSense

Hiroshi Ishii, Anthony DeVincenzi and Samuel Luescher An open publishing platform for visualization, social sharing, and data analysis of geospatial data.

113. IdeaGarden

Hiroshi Ishii, David Lakatos, and Lining Yao The IdeaGarden allows participants in creative activities to collectively capture, select, and share (CCSS) the stories, sketches, and ideas they produce in physical and digital spaces. The iGarden attempts to optimize the CCSS loop, bringing it from hours to seconds in order to turn asynchronous collaborative thought processes into synchronous real-time cognitive flows. The iGarden system is composed of a tangible capture system with recording devices always at hand; a selection workflow that allows the group to reflect on and reduce the complexity of captured data in real time; and a sharing module that connects socially selected information to the cloud. Alumni Contributor: Jean-Baptiste Labrune

114. Jamming User Interfaces

Hiroshi Ishii, Sean Follmer, Daniel Leithinger, Alex Olwal and Nadia Cheng Malleable user interfaces have the potential to enable radically new forms of interaction and expressiveness through flexible, free-form, and computationally controlled shapes and displays. This work specifically focuses on particle jamming as a simple, effective method for flexible, shape-changing user interfaces, where programmatic control of material stiffness enables haptic feedback, deformation, tunable affordances, and control gain. We introduce a compact, low-power pneumatic jamming system suitable for mobile devices, and a new hydraulic-based technique with fast, silent actuation and optical shape sensing. We enable jamming structures to sense input and function as interaction devices through two contributed methods for high-resolution shape sensing: 1) index-matched particles and fluids, and 2) capacitive and electric field sensing. We explore the design space of malleable and organic user interfaces enabled by jamming through four motivational prototypes that highlight jamming's potential in HCI, including applications for tabletops, tablets, and portable shape-changing mobile devices.

115. Kinected Conference

Anthony DeVincenzi, Lining Yao, Hiroshi Ishii and Ramesh Raskar How can we enhance the experience of video conferencing by utilizing an interactive display? With a Kinect camera and sound sensors, we explore how expanding a system's understanding of spatially calibrated depth and audio alongside a live video stream can generate semantically rich three-dimensional pixels containing information about their material properties and location. Four features have been implemented: Talking to Focus, Freezing Former Frames, Privacy Zone, and Spatial Augmenting Reality.

116. MirrorFugue II

Xiao Xiao and Hiroshi Ishii MirrorFugue is an interface for the piano that bridges the gap of location in music playing by connecting pianists in a virtual shared space reflected on the piano. Built on a previous design that only showed the hands, our new prototype displays both the hands and upper body of the pianist. MirrorFugue may be used for watching a remote or recorded performance, taking a remote lesson, and remote duet playing.

117. MirrorFugue III

Xiao Xiao and Hiroshi Ishii MirrorFugue is an installation for a player piano that evokes the impression that the "reflection" of a disembodied pianist is playing the physically moving keys. Live music emanates from a grand piano, whose keys move under the supple touch of a pianist's hands reflected on the lacquered surface of the instrument. On the music stand is visible the pianist's face, whose subtle expressions project the emotions of the music. MirrorFugue recreates the feeling of a live performance, but no one is actually there. The pianist is but an illusion of light and mirrors, a ghost at once present and absent. Viewing MirrorFugue evokes the sense of walking into a memory, where the pianist plays with no awareness of the viewer's presence. Or it is as if viewers were ghosts in another's dream, themselves incorporeal, able to sit down in place of the performing pianist and play along.

118. PingPongPlusPlus

Hiroshi Ishii, Xiao Xiao, Michael Bernstein, Lining Yao, David Lakatos, Kojo Acquah, Jeff Chan, Sean Follmer and Daniel Leithinger PingPong++ (PingPongPlusPlus) builds on PingPongPlus (1998), a ping pong table that could sense ball hits and use that data to drive visualizations projected on the table. We have redesigned the system using open-source hardware and software platforms so that anyone in the world can build their own reactive table. We are exploring ways that people can customize their ping pong game experience. This kiosk allows players to create their own visualizations based on a set of templates. For more control of custom visualizations, we have released a software API based on the popular Processing language to enable users to write their own visualizations. We are always looking for collaborators! Visit pppp.media.mit.edu to learn more.

119. Pneumatic Shape-Changing Interfaces


NEW LISTING

Hiroshi Ishii, Jifei Ou, Lining Yao, Ryuma Niiyama and Sean Follmer An enabling technology for building shape-changing interfaces from pneumatically driven soft composite materials. The composite materials integrate the capabilities of both input sensing and active shape output. We explore four applications: a multi-shape mobile device, table-top shape-changing tangibles, dynamically programmable texture for gaming, and a shape-shifting lighting apparatus.

120. Radical Atoms

Hiroshi Ishii, Leonardo Bonanni, Keywon Chung, Sean Follmer, Jinha Lee, Daniel Leithinger and Xiao Xiao Radical Atoms is our vision of interaction with future materials. Alumni Contributors: Keywon Chung, Adam Kumpf, Amanda Parkes, Hayes Raffle and Jamie B Zigelbaum

121. Recompose

Hiroshi Ishii, Matthew Blackshaw, Anthony DeVincenzi and David Lakatos Human beings have long shaped the physical environment to reflect designs of form and function. As an instrument of control, the human hand remains the most fundamental interface for affecting the material world. In the wake of the digital revolution, this is changing, bringing us to reexamine tangible interfaces. What if we could now dynamically reshape, redesign, and restructure our environment using the functional nature of digital tools? To address this, we present Recompose, a framework allowing direct and gestural manipulation of our physical environment. Recompose complements the highly precise, yet concentrated affordance of direct manipulation with a set of gestures, allowing functional manipulation of an actuated surface.

122. Relief

Hiroshi Ishii and Daniel Leithinger Relief is an actuated tabletop display, able to render and animate 3D shapes with a malleable surface. It allows users to experience and form digital models such as geographical terrain in an intuitive manner. The tabletop surface is actuated by an array of motorized pins, which can be addressed individually and sense user input like pulling and pushing. Our current research focuses on utilizing freehand gestures for interacting with content on Relief. Alumni Contributor: Adam Kumpf

123. RopeRevolution

Jason Spingarn-Koff (MIT), Hiroshi Ishii, Sayamindu Dasgupta, Lining Yao, Nadia Cheng (MIT Mechanical Engineering) and Ostap Rudakevych (Harvard University Graduate School of Design) Rope Revolution is a rope-based gaming system for collaborative play. After identifying popular rope games and activities from around the world, we developed a generalized tangible rope interface that includes a compact motion-sensing and force-feedback module that can be used for a variety of rope-based games, such as rope jumping, kite flying, and horseback riding. Rope Revolution is designed to foster both co-located and remote collaborative experiences by using actual rope to connect players in physical activities across virtual spaces.

124. SandScape

Carlo Ratti, Assaf Biderman and Hiroshi Ishii SandScape is a tangible interface for designing and understanding landscapes through a variety of computational simulations using sand. The simulations are projected on the surface of a sand model representing the terrain; users can choose from a variety of different simulations highlighting height, slope, contours, shadows, drainage, or aspect of the landscape model, and alter its form by manipulating sand while seeing the resulting effects of computational analysis generated and projected on the surface of sand in real time. SandScape demonstrates an alternative form of computer interface (tangible user interface) that takes advantage of our natural abilities to understand and manipulate physical forms while still harnessing the power of computational simulation to help in our understanding of a model representation. Alumni Contributors: Yao Wang, Jason Alonso and Ben Piper

125. Second Surface: Multi-User Spatial Collaboration System Based on Augmented Reality
NEW LISTING

Shunichi Kasahara, Hiroshi Ishii, Pattie Maes, Austin S. Lee and Valentin Heun An environment for creative collaboration is significant for enhancing human communication and expressive activities, and many researchers have explored different collaborative spatial interaction technologies. However, most of these systems require special equipment and cannot adapt to everyday environments. We introduce Second Surface, a novel multi-user augmented reality system that fosters real-time interactions for user-generated content on top of the physical environment. This interaction takes place in the physical surroundings of everyday objects such as trees or houses. Our system allows users to place 3D drawings, texts, and photos relative to such objects and to share this expression with any other person who uses the same software at the same spot. Second Surface explores a vision that integrates collaborative virtual spaces into the physical space. Our system can provide an alternate reality that generates playful and natural interaction in an everyday setup.

126. Sensetable

Hiroshi Ishii Sensetable is a system that wirelessly, quickly, and accurately tracks the positions of multiple objects on a flat display surface. The tracked objects have a digital state, which can be controlled by physically modifying them using dials or tokens. We have developed several new interaction techniques and applications on top of this platform. Our current work focuses on business supply-chain visualization using system-dynamics simulation. Alumni Contributors: Jason Alonso, Dan Chak, Gian Antonio Pangaro, James Patten and Matt Reynolds

127. Sourcemap

Hiroshi Ishii and Leonardo Amerigo Bonanni Sourcemap.com is the open directory of supply chains and environmental footprints. Consumers use the site to learn about where products come from, what they're made of, and how they impact people and the environment. Companies use Sourcemap to communicate transparently with consumers and tell the story of how products are made. Thousands of maps have already been created for food, furniture, clothing, electronics, and more. Behind the website is a revolutionary social network for supply-chain reporting. The real-time platform gathers information from every stakeholder so that, one day soon, you'll be able to scan a product on a store shelf and know exactly who made it.

128. T(ether)

Hiroshi Ishii, Andy Lippman, Matthew Blackshaw and David Lakatos T(ether) is a novel spatially aware display that supports intuitive interaction with volumetric data. The display acts as a window affording users a perspective view of three-dimensional data through tracking of head position and orientation. T(ether) creates a 1:1 mapping between real and virtual coordinate space, allowing immersive exploration of the joint domain. Our system creates a shared workspace in which co-located or remote users can collaborate in both the real and virtual worlds. The system allows input through capacitive touch on the display and a motion-tracked glove. When placed behind the display, the user's hand extends into the virtual world, enabling the user to interact with objects directly.

129. Tangible Bits

Hiroshi Ishii, Sean Follmer, Jinha Lee, Daniel Leithinger and Xiao Xiao People have developed sophisticated skills for sensing and manipulating our physical environments, but traditional GUIs (Graphical User Interfaces) do not employ most of them. Tangible Bits builds upon these skills by giving physical form to digital information, seamlessly coupling the worlds of bits and atoms. We are designing "tangible user interfaces" that employ physical objects, surfaces, and spaces as tangible embodiments of digital information. These include foreground interactions with graspable objects and augmented surfaces, exploiting the human senses of touch and kinesthesia. We also explore background information displays that use "ambient media" (light, sound, airflow, and water movement) to communicate digitally mediated senses of activity and presence at the periphery of human awareness. We aim to change the "painted bits" of GUIs to "tangible bits," taking advantage of the richness of multimodal human senses and skills developed through our lifetimes of interaction with the physical world. Alumni Contributors: Yao Wang, Mike Ananny, Scott Brave, Dan Chak, Angela Chang, Seung-Ho Choo, Keywon Chung, Andrew Dahley, Philipp Frei, Matthew G. Gorbet, Adam Kumpf, Jean-Baptiste Labrune, Vincent Leclerc, Jae-Chol Lee, Ali Mazalek, Gian Antonio Pangaro, Amanda Parkes, Ben Piper, Hayes Raffle, Sandia Ren, Kimiko Ryokai, Victor Su, Brygg Ullmer, Catherine Vaucelle, Craig Wisneski, Paul Yarin and Jamie B Zigelbaum

130. Topobo

Hayes Raffle, Amanda Parkes and Hiroshi Ishii Topobo is a 3-D constructive assembly system embedded with kinetic memory: the ability to record and play back physical motion. Unique among modeling systems is Topobo's coincident physical input and output behavior. By snapping together a combination of passive (static) and active (motorized) components, users can quickly assemble dynamic, biomorphic forms such as animals and skeletons, animate those forms by pushing, pulling, and twisting them, and observe the system repeatedly playing back those motions. For example, a dog can be constructed and then taught to gesture and walk by twisting its body and legs. The dog will then repeat those movements.

131. Video Play

Sean Follmer, Hayes Raffle and Hiroshi Ishii Long-distance families are increasingly staying connected with free video conferencing tools. However, the tools themselves are not designed to accommodate children's or families' needs. We explore how play can be a means for communication at a distance. Our Video Play prototypes are simple video-conferencing applications built with play in mind, creating opportunities for silliness and open-ended play between adults and young children. They include simple games, such as Find It, but also shared activities like book reading, where users' videos are displayed as characters in a story book. Alumni Contributor: Hayes Raffle

Joseph M. Jacobson: Molecular Machines


How to engineer at the limits of complexity with molecular-scale parts.

132. GeneFab

Bram Sterling, Kelly Chang, Joseph M. Jacobson, Peter Carr, Brian Chow, David Sun Kong, Michael Oh and Sam Hwang What would you like to "build with biology"? The goal of the GeneFab projects is to develop technology for the rapid fabrication of large DNA molecules, with composition specified directly by the user. Our intent is to facilitate the field of Synthetic Biology as it moves from a focus on single genes to designing complete biochemical pathways, genetic networks, and more complex systems. Sub-projects include: DNA error correction, microfluidics for high throughput gene synthesis, and genome-scale engineering (rE. coli). Alumni Contributor: Chris Emig

133. NanoFab

Kimin Jun, Jaebum Joo and Joseph M. Jacobson We are developing techniques that use a focused ion beam to program the fabrication of nanowire-based nanostructures and logic devices.

134. Scaling Up DNA Logic and Structures

Joseph M. Jacobson and Noah Jakimo Our goals include novel gene logic and data-logging systems, as well as DNA scaffolds that can be produced at commercial scales. The state of the art in the former is limited by finding analogous and orthogonal proteins to those used in current single-layer gates and two-layered circuits. The state of the art in the latter is constrained in size and efficiency by kinetic limits on self-assembly. We have designed, and plan to demonstrate, cascaded logic on chromosomes and DNA scaffolds that exhibit exponential growth.

135. Synthetic Photosynthesis

Joseph M. Jacobson and Kimin Jun We are using nanowires to build structures for synthetic photosynthesis for the solar generation of liquid fuels.

Sepandar Kamvar: Social Computing


How to meaningfully connect people with information.

136. The Dog Programming Language

Salman Ahmad, Zahan Malkani and Sepandar Kamvar Dog is a new programming language that makes it easy and intuitive to create social applications. Dog focuses on a unique and small set of features that allows it to achieve the power of a full-blown application development framework. One of Dog's key features is built-in support for interacting with people. Dog provides a natural framework in which both people and computers can be given instructions and return results. It can perform a long-running computation while also displaying messages, requesting information, or even sending operations to particular individuals or groups. By switching between machine and human computation, developers can create powerful workflows and model complex social processes without worrying about low-level technical details.
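Dog's own syntax is not shown in this catalog; as an illustrative analog only (in Python, with invented `ask`/`moderate` helpers, not Dog's actual API), a workflow that interleaves machine and human computation might look like:

```python
# Hypothetical sketch of a mixed human/machine workflow of the kind
# Dog makes first-class. The helpers below are invented for
# illustration; Dog's real syntax and runtime are not reproduced here.
def ask(person, question, answer):
    """Stand-in for routing a question to a person; the 'person'
    answers immediately so the sketch stays runnable."""
    return answer

def moderate(posts, moderators):
    """A machine step filters obvious spam; humans judge the rest."""
    kept = []
    for i, post in enumerate(posts):
        if "buy now" in post.lower():              # machine computation
            continue
        verdict = ask(moderators[i % len(moderators)],
                      f"Is this post appropriate? {post!r}",
                      answer="yes")                 # human computation
        if verdict == "yes":
            kept.append(post)
    return kept
```

The point of the sketch is the seam: the developer writes one workflow, and the runtime decides whether a step is executed by a machine or dispatched to a person or group.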

Kent Larson: Changing Places


How new strategies for architectural design, mobility systems, and networked intelligence can make possible dynamic, evolving places that respond to the complexities of life.

137. A Market Economy of Trips

Dimitris Papanikolaou and Kent Larson We are developing a new strategy for creating autonomous, self-organizing vehicle-sharing systems that uses incentive mechanisms (dynamic pricing) to smooth demand imbalances, and an interactive graphical user interface to effectively communicate location-based price information. Prices adjust dynamically to parking needs, incentivizing users to drive vehicles to stations with too few vehicles while discouraging arrivals at stations with excess vehicles. This research explains how users make decisions in dynamically priced mobility systems, under which circumstances their actions may constitute a self-regulating economy, and how this economy performs under different demand patterns. To address these issues, we are developing a computational framework, drawing on system dynamics, urban economics, and game theory, that models system behavior; it will be used to determine the optimal pricing policy, fleet size, and density of parking stations for a stable yet profitable system. Alumni Contributor: William J. Mitchell
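The incentive mechanism described above can be sketched as a price that depends on the destination station's fill level. The linear form, function names, and parameters below are illustrative assumptions, not the project's actual pricing policy:

```python
def drop_off_price(base_fare: float, vehicles: int, capacity: int,
                   sensitivity: float = 1.0) -> float:
    """Price for ending a trip at a station, as a function of its fill level.

    A fill level of 0.5 is treated as balanced: ending a trip there costs
    the base fare. Arrivals at over-full stations pay a surcharge, while
    arrivals at under-stocked stations get a discount (with a high enough
    sensitivity, a credit paid to the user for rebalancing the fleet).
    """
    fill = vehicles / capacity          # 0.0 = empty, 1.0 = full
    imbalance = fill - 0.5              # negative -> station needs vehicles
    return base_fare * (1.0 + sensitivity * 2.0 * imbalance)
```

With the default sensitivity, dropping off at a full station doubles the base fare and dropping off at an empty one is free; raising the sensitivity above 1.0 turns the empty-station discount into a payment to the user.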

138. AEVITA

Kent Larson, William Lark, Jr., Nicholas David Pennycooke and Praveen Subramani With various private, governmental, and academic institutions researching autonomous vehicle deployment strategies, the way we think about vehicles must adapt. But what happens when the driver, the main conduit of information transaction between the vehicle and its surroundings, is removed? The living EV system aims to fill this communication void by giving the autonomous vehicle the means to sense others around it and react to various stimuli as intuitively as possible, taking design cues from the living world. The system comprises various types of sensors (computer vision, UWB beacon tracking, sonar) and actuators (light, sound, mechanical) in order to express recognition of others, announce intentions, and portray the vehicle's general state. All systems are built on the second version of the half-scale CityCar concept vehicle, featuring advanced mixed materials (CFRP + aluminum) and a significantly more modularized architecture.

139. Autonomous Facades for Zero-Energy Urban Housing

Ronan Lonergan and Kent Larson We are developing self-powered responsive building envelope components that efficiently integrate solar shading and heating, ventilation, privacy control, and ambient lighting. Dynamic facade modules integrate sensing systems to respond to both environmental conditions and the activities of people.

140. BTNz!

Kent Larson, Andy Lippman, Shaun David Salzberg, Dan Sawada and Jonathan Speiser BTNz! is a lightweight, viral interface consisting of a button and a screen strategically positioned around the Media Lab complex to foster social interactions within the community. Users will be able to upload messages to be displayed on the screen when the button is pushed. The goal is to see if the action of pressing a tangible button makes people more aware of what is going on throughout the community. In some ways, BTNz! is a "Twitter of billboards." The idea is to get people together with almost no overhead, and in a fun way, with a single-dimension interface. The work includes building an application environment and collecting and analyzing data on the emergent social activities. Later work may involve tying identity to button-pushers and providing more context-aware messages to the users.

141. CityCar

Ryan C.C. Chin, William Lark, Jr., Nicholas Pennycooke, Praveen Subramani, and Kent Larson CityCar is a foldable, electric, sharable, two-passenger vehicle for crowded cities. Wheel Robots (fully modular in-wheel electric motors) integrate drive motors, suspension, braking, and steering inside the hub-space of the wheel. This drive-by-wire system requires only data, power, and mechanical connection to the chassis. With over 80 degrees of steering freedom, Wheel Robots enable a zero-turn radius, and without the gasoline-powered engine and drive-train the CityCar can fold. We are working with Denokinn on an integrated, modular system for assembly and distribution of the CityCar. Based in Spain's Basque region, the project is called "Hiriko," which stands for Urban Car. The Hiriko project aims to create a new, distributed manufacturing system for the CityCar, enabling automotive suppliers to provide "core" components made of integrated modules such as in-wheel motor units, battery systems, interiors, vehicle control systems, vehicle chassis/exoskeleton, and glazing. (Continuing the vision of William J. Mitchell.) Alumni Contributors: Patrik Kunzler, Philip Liang, William J. Mitchell and Raul-David Poblano

142. CityCar Folding Chassis

William Lark, Jr., Nicholas Pennycooke, Ryan C.C. Chin and Kent Larson The CityCar folding chassis is a half-scale working prototype that consists of four independently controlled in-wheel electric motors, four-bar linkage mechanism for folding, aluminum exoskeleton, operable front ingress/egress doors, lithium-nanophosphate battery packs, vehicle controls, and a storage compartment. The folding chassis can demonstrate compact folding (3:1 ratio compared to conventional vehicles), omni-directional driving, and wireless remote controls. The half-scale mock-up explores the material character and potential manufacturing strategies that will scale to a future full-scale build. (Continuing the vision of William J. Mitchell.) Alumni Contributors: William J. Mitchell and Raul-David Poblano


143. CityCar Half-Scale Prototype

Kent Larson, Nicholas David Pennycooke and Praveen Subramani The CityCar half-scale prototype has been redesigned from the ground up to incorporate the latest materials and manufacturing processes, sensing technologies, battery systems, and more. This new prototype demonstrates the functional features of the CityCar at half-scale, including the folding chassis. New sensing systems have been embedded to enable research into autonomous driving and parking, while lithium batteries will provide extended range. A new control system based on microprocessors allows for faster boot time and modularity of the control system architecture.

144. CityCar Ingress-Egress Model

Kent Larson, Nicholas David Pennycooke and Praveen Subramani The CityCar Ingress-Egress Model provides a full-scale platform for testing front ingress and egress for new vehicle types. The platform features three levels of actuation for controlling the movement of seats within a folding vehicle, and can store custom presets of seat positioning and folding process for different users.

145. CityCar Testing Platform

William Lark, Jr., Nicholas Pennycooke, Ryan C.C. Chin and Kent Larson The CityCar Testing Platform is a full-scale, modular vehicle that consists of four independently controlled Wheel Robots, an extruded aluminum frame, a battery pack, a driver's interface, and seating for two. Each Wheel Robot is capable of over 120 degrees of steering freedom, giving the CityCar chassis omni-directional driving abilities such as sideways parking, zero-radius turning, torque steering, and variable-velocity (in each wheel) steering. This four-wheeler is an experimental platform for by-wire (non-mechanically coupled) controls for the Wheel Robots, allowing the platform to be controlled by wireless joysticks. The four-wheeler also allows the CityCar design team to experiment with highly personalized body/cabin designs. (Continuing the vision of William J. Mitchell.) Alumni Contributor: William J. Mitchell

146. CityHealth and Indoor Environment

Rich Fletcher, Jason Nawyn, and Kent Larson The spaces in which we live and work have a strong effect on our physical and mental health. In addition to obvious effects on physical illness and healing, the quality of our air, the intensity of sound, and the color of our artificial lighting have also been shown to be important factors that affect cognitive skills, stress levels, motivation, and work productivity. As a research tool, we have developed small, wireless, wearable sensors that enable us to simultaneously monitor our environment and our physiology in real time. By better understanding these environmental factors, we can design architectural spaces that automatically adapt to the needs of specific human activities (work/concentration, social relaxation) and automatically provide for specific health requirements (physical illness, assisted living).

147. CityHome

Kent Larson, Daniel Smithwick and Hasier Larrea We demonstrate how the CityHome, which has a very small footprint (840 square feet), can function as an apartment two to three times that size. This is achieved through a transformable wall system that integrates furniture, storage, exercise equipment, lighting, office equipment, and entertainment systems. In one potential scenario for the CityHome, the bedroom transforms into a home gym, and the living room into a dinner party space for 14 people, a suite for four guests, two separate office spaces plus a meeting space, or an open loft space for a large party. Finally, the kitchen can either be open to the living space or closed off to be used as a catering kitchen. Each occupant engages in a process to personalize the precise design of the wall units according to his or her unique activities and requirements.


148. CityHome: RoboWall

Kent Larson, Hasier Larrea and Carlos Olabarri The RoboWall is the key module of the CityHome apartment, providing flexibility to the space by moving and transforming; it is the technology that enables home reconfiguration. It is a wall that not only moves but is also functional and smart. The completely modular design allows the infill of the wall to be customized to address each person's specific needs. Intended mainly for newly constructed buildings, the RoboWall can also be used to retrofit old apartments: its integrated system locates all the complexity on the wall, so there are no physical rails or need for extra electrical installation. In addition, pressure sensors create a seamless interface for operating the wall in a more natural way, also improving safety.

149. Distinguish: Home Activity Recognition

Kent Larson We propose a recognition system with a user-centric point of view, designed to make the activity detection process intelligible to the end-user of the home, and to permit these users to improve recognition and customize activity models based on their particular habits and behaviors. Our system, named Distinguish, relies on high-level, common-sense information to create the activity models used in recognition. These models are understandable by end-users and transferable between homes. Distinguish consists of a common-sense recognition engine that can be modified by end-users using a novel phone interface.

150. FlickInk

Sheng-Ying (Aithne) Pao and Kent Larson Have you ever been in a teleconference and found it difficult to share the ideas you've been developing in your notebook with a remote participant? FlickInk reinvents paper- and pen-based interaction. With a quick flick of the pen, analog ink on paper is instantly transferred to surrounding digital interfaces as well as to a remote destination. The flicking gesture is directional: when there are multiple screens with different remote collaborators, our system allows the directionality of the gesture to select the destination. In addition, with FlickInk's wireless gesture-sensing module, various digital pens can be turned into a FlickInk pen by attaching the wireless module, leveraging any writable surface to create an enhanced, personalized experience for collaborative work.

151. Hiriko CityCar Urban Feasibility Studies

Kent Larson, Chih-Chao Chuang and Ryan C.C. Chin We are engaging in research that may be incorporated by Denokinn into a feasibility study for Mobility-on-Demand (MoD) systems in a select number of cities, including Berlin, Barcelona, Malmo, and San Francisco. The goal of the project is to propose electric-mobility car-sharing pilot programs to collaborating cities, which will work with their existing public infrastructure and use the Hiriko CityCar as the primary electric vehicle, and to study how this system will work with the urbanscape and lifestyle of different cities.

152. Hiriko CityCar with Denokinn

Ryan C.C. Chin, Kent Larson, William Lark, Jr., Chih-Chao Chuang, Nicholas Pennycooke, and Praveen Subramani We are working with Denokinn to design and develop an integrated, modular system for assembly and distribution of the CityCar. This project, based in the Basque region of Spain, is called the "Hiriko" project, which stands for Urban Car (Hiri = urban, Ko = car in Basque). The goal of the Hiriko project is to create a new, distributed manufacturing system for the CityCar which will enable automotive suppliers to provide "core" components made of integrated modules such as in-wheel motor units, battery systems, interiors, vehicle control systems, vehicle chassis/exoskeleton, and glazing. A full-scale working prototype will be completed by the end of 2011, with an additional 20 prototypes to be built for testing in 2012. (Continuing the vision of William J. Mitchell.) Alumni Contributors: William J. Mitchell and Raul-David Poblano


153. Home Genome: Mass-Personalized Housing

Daniel Smithwick and Kent Larson The home is becoming a center for preventative health care, energy production, distributed work, and new forms of learning, entertainment, and communication. We are developing techniques for capturing and encoding concepts related to human needs, activities, values, and practices. We are investigating solutions built from an expanding set of building blocks, or genes, which can be combined and recombined in various ways to create a unique assembly of spaces and systems. We are developing algorithms to match individuals to design solutions in a process analogous to that used to match customer profiles to music, movies, and books, as well as new fabrication and supply-chain technologies for efficient production. We are exploring how to tap the collective intelligence of distributed groups of people and companies to create an expanding set of solutions.

154. Human Health Monitoring in Vehicles

Rich Fletcher and Kent Larson There is increasing interest in performing physiology monitoring in vehicles, motivated by healthcare trends, an aging population, accident prevention, insurance, and forensic interests. We have developed sensors that can be embedded in a car seat and wirelessly measure occupant heart-rate parameters and respiration. By developing algorithms that can detect driver stress, fatigue, or impairment, we can create better automotive safety systems, controls, and smart lighting for next-generation smart vehicles.

155. Intelligent Autonomous Parking Environment

Chris Post, Raul-David Poblano, Ryan C.C. Chin, and Kent Larson In an urban environment, space is a valuable commodity. Current parking structures require each driver to independently navigate the structure to find a space. As next-generation vehicles turn more and more to drive-by-wire systems, though, direct human interaction will not be necessary for vehicle movement. An intelligent parking environment can use drive-by-wire technology to take the burden of parking away from the driver, allowing for more efficient allocation of parking resources and making urban parking less expensive. With central vehicle control, cars can block each other while parked, since the parking environment can move other vehicles to enable a blocked vehicle to leave. The parking environment can also monitor vehicle charge, allowing intelligent and efficient utilization of charge stations by moving vehicles to and from them as necessary.

156. Mass-Personalized Solutions for the Elderly

Kent Larson, Ryan C.C. Chin, Daniel John Smithwick and Tyrone L. Yang The housing, mobility, and health needs of the elderly are diverse, but current products and services are generic, disconnected from context, difficult to access without specialized guidance, and do not anticipate changing life circumstances. We are creating a platform for delivering integrated, personalized solutions to help aging individuals remain healthy, autonomous, productive, and engaged. We are developing new ways to assess specific individual needs and create mass-customized solutions. We are also developing new systems and standards for construction that will enable the delivery of more responsive homes, products, and services; these standards will make possible cost-effective but sophisticated, interoperable building components and systems. For instance, daylighting controls will be coordinated with reconfigurable rooms and will accommodate glare sensitivity. These construction standards will enable industrial suppliers to easily upgrade and retrofit homes to better care for home occupants as their needs change over time.


157. Media Lab Energy and Charging Research Station

Praveen Subramani, Raul-David Poblano, Ryan C.C. Chin, Kent Larson and Schneider Electric We are collaborating with Schneider Electric to develop a rapid, high-power charging station in MIT's Stata Center for researching EV rapid charging and battery storage systems for the electric grid. The system is built on a 500 kW commercial uninterruptible power supply (UPS) designed by Schneider Electric and modified by Media Lab researchers to enable rapid power transfer from lead-acid batteries in the UPS to lithium-ion batteries onboard an electric vehicle. Research experiments include: exploration of DC battery banks for intermediate energy storage between the grid and vehicles; replacing the lead-acid batteries in UPS systems with lithium-ion cells; and exploration of Level III charging connectors, wireless charging, and user-interface design for connecting vehicles to physical infrastructure. The station is scheduled for completion by early 2012 and will be among the most advanced battery and EV charging research platforms at a university.

158. MITes+: Portable Wireless Sensors for Studying Behavior in Natural Settings

Kent Larson and Stephen Intille MITes (MIT environmental sensors) are low-cost, wireless devices for collecting data about human behavior and the state of the environment. Nine versions of MITes have now been developed, including MITes for people movement (3-axis accelerometers), object movement (2-axis accelerometers), temperature, light levels, indoor location, ultraviolet light exposure, heart rate, haptic output, and electrical current flow. MITes are being deployed to study human behavior in natural settings. We are also developing activity-recognition algorithms using MITes data for health and energy applications. (A House_n Research Consortium initiative funded by the National Science Foundation.) Alumni Contributors: Randy Rockinson and Emmanuel Munguia Tapia
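As a sketch of the kind of activity-recognition pipeline such accelerometer data supports, the windowed feature extraction and toy threshold classifier below are illustrative assumptions, not the algorithms actually used with MITes:

```python
import math

def window_features(samples):
    """Per-window features from 3-axis accelerometer samples.

    samples: list of (x, y, z) acceleration tuples in g's, covering one
    fixed-length window (window length and units are illustrative).
    """
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    mean = sum(mags) / len(mags)
    var = sum((m - mean) ** 2 for m in mags) / len(mags)
    return {"mean": mean, "variance": var}

def classify(features):
    """Toy decision rule: signal variance separates rest from movement."""
    if features["variance"] < 0.02:
        return "still"
    return "active"
```

Real systems extract many more features per window (frequency-domain energy, correlations between axes) and feed them to a trained classifier rather than a hand-set threshold.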

159. Mobility on Demand Systems

Kent Larson, Ryan C.C. Chin, Chih-Chao Chuang, William Lark, Jr., Brandon Phillip Martin-Anderson and SiZhi Zhou Mobility on Demand (MoD) systems are fleets of lightweight electric vehicles at strategically distributed electrical charging stations throughout a city. MoD systems solve the "first and last mile" problem of public transit, providing mobility between transit stations and home or workplace. Users swipe a membership card at a MoD station to access vehicles, which can be driven to any other station (one-way rental). The Vélib' system of 20,000+ shared bicycles in Paris is the largest and most popular one-way rental system in the world. MoD systems incorporate intelligent fleet management through sensor networks, pattern recognition, and dynamic pricing, as well as the benefits of smart-grid technologies, including intelligent electrical charging (including rapid charging), vehicle-to-grid (V2G), and surplus energy storage for renewable power generation and peak shaving for the local utility. We have designed three MoD vehicles: CityCar, RoboScooter, and GreenWheel bicycle. (Continuing the vision of William J. Mitchell.)

160. Open-Source Furniture

Kent Larson We are exploring the use of parametric design tools and CNC fabrication technology to enable laypeople to navigate a complex furniture and cabinetry design process for office and residential applications. We are also exploring the integration of sensors, lighting, and actuators into furniture to create objects that are responsive to human activity.

161. Operator

Kent Larson and Brandon Phillip Martin-Anderson Operator is an AI agent that keeps tabs on how things are running around town and tells you how to get where you want to go in the least effortful way.


162. Participatory Environmental Sensing for Communities

Rich Fletcher and Kent Larson Air and water pollution are well-known concerns in cities throughout the world. However, communities often lack practical tools to measure and record pollution levels, and thus are often powerless to motivate policy change or government action. Although some government-funded pollution monitors do exist, they are sparsely located, and many national and local governments fail to disclose this environmental data in areas where pollution is most prevalent. To address this public health need, we have been developing very low-cost, ultra-low-power environmental sensors for air, soil, and water that enable communities to easily sample their environment and upload data to their mobile phones and an online map. The ability to perform fine-resolution, large-scale environmental monitoring not only empowers communities to enact new policies, but also serves as a public resource for city health services, traffic control, and general urban design.

163. PlaceLab and BoxLab

Jason Nawyn, Stephen Intille and Kent Larson The PlaceLab was a highly instrumented, apartment-scale, shared research facility where new technologies and design concepts were tested and evaluated in the context of everyday living. It was used by researchers until 2008 to collect fine-grained human behavior and environmental data, and to systematically test and evaluate strategies and technologies for the home in a natural setting with volunteer occupants. BoxLab is a portable version with many of the data-collection capabilities of PlaceLab; it can be deployed in any home or workplace. (A House_n Research Consortium project funded by the National Science Foundation.) Alumni Contributors: Jennifer Suzanne Beaudin, Manu Gupta, Pallavi Kaushik, Aydin Oztoprak, Randy Rockinson and Emmanuel Munguia Tapia

164. PowerSuit: Micro-Energy Harvesting

Jennifer Broutin Farah, Kent Larson The PowerSuit is a micro-energy harvesting material that functions based on temperature differentials between a person's skin and the outside environment. The skin becomes an activated landscape that can be used for micro-power generation. The idea is to consider small increments of energy as useful for a specific purpose, such as lighting safety LEDs while running at night on cold days. This project is the beginning of an exploration of material structures that yield micro-power through temperature differentials. Fundamentally, this is a shift in how people consider energy: rather than constantly striving for tools and devices that are more powerful and less energy efficient, why not consider using small amounts of energy not typically utilized toward more efficient devices such as LED lighting?

165. Shadow Chess

NEW LISTING

Shaun Salzberg Shadow Chess is a pair of Internet-connected chess sets that allows remote users to play a physical game of chess together. The boards can sense and display where pieces are moved from and to, determine if a move is valid, and send the move via WiFi to the other board, which then replicates the move using magnets embedded in the pieces. This project explores how we can have more meaningful and tangible interactions with others over a distance than simply playing digital online games.

166. Shortest Path Tree

Kent Larson and Brandon Phillip Martin-Anderson Shortest Path Tree is an experimental way to interact with an algorithmic multimodal trip planner. It emphasizes how the shape of the city interacts with the planning process embedded in every mobility decision.
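The structure this project is named after, a shortest path tree rooted at the traveler's origin, can be computed with Dijkstra's algorithm. The toy graph encoding below is a minimal sketch, not the project's multimodal planner:

```python
import heapq

def shortest_path_tree(graph, origin):
    """Dijkstra's algorithm over a weighted digraph.

    graph: {node: [(neighbor, edge_cost), ...]}
    Returns (cost, parent): the cost to reach each reachable node and its
    predecessor on the shortest path, i.e. the shortest path tree.
    """
    cost = {origin: 0.0}
    parent = {origin: None}
    frontier = [(0.0, origin)]
    while frontier:
        c, node = heapq.heappop(frontier)
        if c > cost.get(node, float("inf")):
            continue  # stale queue entry; a cheaper path was already found
        for nbr, w in graph.get(node, []):
            nc = c + w
            if nc < cost.get(nbr, float("inf")):
                cost[nbr] = nc
                parent[nbr] = node
                heapq.heappush(frontier, (nc, nbr))
    return cost, parent
```

Tracing each node back through `parent` yields the shortest route from the origin; drawing every such route at once produces the tree-over-the-city visualization the project describes.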


167. Smart Customization of Men's Dress Shirts: A Study on Environmental Impact

Ryan C. C. Chin, Daniel Smithwick and Kent Larson Sanders Consulting's 2005 groundbreaking research, "Why Mass Customization is the Ultimate Lean Manufacturing System," showed that the best standard mass-production practices, when framed from the point of view of the entire product lifecycle (from raw material production to point of purchase), are actually very inefficient and indeed wasteful in terms of energy, material use, and time. Our research examines the environmental impacts of applying mass-customization methodologies to men's custom dress shirts. This study traces the production, distribution, sale, and customer use of the product in order to discover key areas of waste and opportunities for improvement. Our comparative study examines not only the energy and carbon emissions due to production and distribution, but also customer acquisition and use, using RFID tag technology to track shirt utilization by over 20 subjects over a three-month period.

168. Smart DC MicroGrid

Kent Larson and Christophe Yoh Charles Meyers Given the increasing development of renewable energy, its integration into the electric distribution grid needs to be addressed. In addition, the majority of household appliances operate on DC. The aim of this project is to develop a microgrid capable of addressing these issues while drawing on a smart control system.

169. smartCharge

Praveen Subramani, Sean Cockey, Guangyan Gao, Jean Martin and Kent Larson With the next generation of lightweight electric vehicles being deployed in vehicle-sharing systems across the world, there is a growing need for smarter charging infrastructure. smartCharge is the next generation of intelligent charging infrastructure for EVs in cities. Specifically optimized for EV sharing systems, the smartCharge platform integrates secure locking, high-current vehicle charging (up to 36A), and data transfer into a single connector. Its concentric connector design allows users to insert the plug from any angle, so they can quickly lock and charge the rented vehicle without wasting time and space on separate docking and charging systems. The system connects vehicles to a smart charging post that integrates ambient LED lighting to provide feedback to users on the vehicle's current state of charge, its availability status, and maintenance needs. The connection system is universally designed to function with electric bicycles, scooters, cars, and other lightweight EVs.

170. Spike: Social Cycling

Kent Larson and Sandra Richter Spike is a social cycling application developed for bike-sharing programs. The application persuades urban dwellers to bike together, increasing the perceived level of safety. Social deals and benefits which can only be redeemed together motivate the behavior change. Frequent Biker Miles sustain the behavior. An essential feature is real-time information on where the users of the social network are currently biking or when they are planning to bike, to facilitate bike dates.

171. SproutsIO: Microfarm

Jennifer Broutin Farah, Kent Larson SproutsIO is a microfarming system that assists everyday people in reliably producing healthy food in urban areas. SproutsIO has scalable, modular components augmented by technology such as monitoring sensors, network capability, and smart mobile applications to facilitate ease of use and a deeper understanding of the process through which aeroponic vegetables are grown. We believe that SproutsIO serves as a platform for closing the loop between people and food.


172. Wheel Robots

William Lark, Jr., Nicholas Pennycooke, Ryan C.C. Chin and Kent Larson The mechanical components that make driving a vehicle possible (acceleration, braking, steering, springing) are located inside the space of the wheel, forming independent wheel robots and freeing the vehicular space of these components. Simple mechanical, power, and data connections to the chassis allow the wheel robots to plug into a vehicle simply and quickly. A CPU in the vehicle provides the input necessary for driving according to the vehicle's dimensions or loading condition. The design of the wheel robots provides optimal contact-patch placement, lower unsprung and rotational mass, omnidirectional steering, great space savings, and modularity, as the wheel robots can function appropriately on vehicles of different dimensions and weights. (Continuing the vision of William J. Mitchell.) Alumni Contributors: Patrik Kunzler, Philip Liang and William J. Mitchell

173. WorkLife

Jarmo Suominen and Kent Larson The nature of work is rapidly changing, but designers have a poor understanding of how places of work affect interaction, creativity, and productivity. We are using mobile phones that ask context-triggered questions, together with sensors in workplaces, to collect information about how spaces are used and how space influences feelings such as productivity and creativity. Pilot studies took place at the Steelcase headquarters in 2007 and in the offices of EGO, Inc. in Helsinki, Finland, in 2009. (A House_n Research Consortium project funded by TEKES.) Alumni Contributor: Kenneth Cheung

Henry Lieberman: Software Agents


How software can act as an assistant to the user rather than a tool, by learning from interaction and by proactively anticipating the user's needs.

174. AIGRE: A Natural Language Interface That Accommodates Vague and Ambiguous Input

Henry Lieberman and Dustin Arthur Smith A major problem for natural language interfaces is their inability to handle text whose meaning depends in part on context. If a user asks his car radio to play "a fast song", or his calendar to schedule "a short meeting," the interpreter would have to accommodate vagueness and ambiguity to figure out what he meant based on what he said. For it to understand what songs or events the speaker intended, it must make decisions that depend on assumed common knowledge about the world and language. Our research presents two approaches for reducing uncertainty in natural language interfaces, by modeling interpretation as a plan recognition problem.
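One way to sketch interpretation under vagueness is to score candidate referents by graded membership in the vague category, then rank them. The piecewise-linear membership function and its breakpoints below are illustrative assumptions, not the AIGRE model itself:

```python
def membership_short(minutes, lo=15, hi=60):
    """Graded membership of a meeting length in the vague category "short".

    Fully "short" at or below `lo` minutes, not "short" at `hi` or more,
    linear in between. The breakpoints are hypothetical: a real system
    would derive them from context and common-sense knowledge.
    """
    if minutes <= lo:
        return 1.0
    if minutes >= hi:
        return 0.0
    return (hi - minutes) / (hi - lo)

def rank_interpretations(candidate_lengths):
    """Order candidate meeting lengths by how well they satisfy "short"."""
    return sorted(candidate_lengths, key=membership_short, reverse=True)
```

Under this sketch, asking to schedule "a short meeting" amounts to ranking the feasible calendar slots by membership and proposing the best-scoring one, with the remaining uncertainty resolved by the plan-recognition step the project describes.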


175. Common-Sense Reasoning for Interactive Applications

Henry Lieberman A long-standing dream of artificial intelligence has been to put common-sense knowledge into computers, enabling machines to reason about everyday life. Some projects, such as Cyc, have begun to amass large collections of such knowledge. However, it is widely assumed that the use of common sense in interactive applications will remain impractical for years, until these collections can be considered sufficiently complete and common-sense reasoning sufficiently robust. Recently, we have had some success in applying common-sense knowledge in a number of intelligent interface agents, despite the admittedly spotty coverage and unreliable inference of today's common-sense knowledge systems. Alumni Contributors: Xinyu H. Liu and Push Singh

176. CommonConsensus: A Game for Collecting Commonsense Goals

Henry Lieberman and Dustin Smith We have developed Common Consensus, a fun, self-sustaining web-based game that both collects and validates commonsense knowledge about everyday goals. Goals are a key element of commonsense knowledge; in many of our interface agents, we need to recognize goals from user actions (plan recognition) and generate sequences of actions that implement goals (planning). We also often need to answer more general questions about the situations in which goals occur, such as when and where a particular goal might be likely, or how long it is likely to take to achieve. Alumni Contributor: Push Singh

177. E-Commerce When Things Go Wrong

Henry Lieberman One of the biggest challenges for the digital economy is what to do when things go wrong. Orders get misplaced, numbers mistyped, requests misunderstood: then what? Consumers are frustrated by long waits on hold, misplaced receipts, and delays in problem resolution; companies are frustrated by the cost of high-quality customer service. Online companies want customers' trust, and how a company handles problems directly affects that trust. We explore how software agents and other technologies can help with this issue. Borrowing ideas from software debugging, we can have agents help automate record-keeping and retrieval, track dependencies, and provide visualization of processes. Diagnostic problem-solving can generate hypotheses about the causes of errors and seek information that allows those hypotheses to be tested. Agents act on behalf of both the consumer and the vendor to resolve problems more quickly and at lower cost.

178. Goal-Oriented Interfaces for Consumer Electronics

Henry Lieberman and Pei-Yu Chi Consumer electronics devices are becoming more complicated and are intimidating to users. These devices know nothing about everyday life or human goals, and they show irrelevant menus and options. Using common-sense reasoning, we are building a system, Roadie, with knowledge about the user's intentions; this knowledge helps the device display information relevant to reaching the user's goal. For example, an amplifier should suggest a play option when a new instrument is connected, and a DVD player should suggest a sound configuration based on the movie it is playing. This will lead to more human-like interactions with these devices. We have connected a Roadie interface to real consumer electronics devices: a television, a set-top box, and a smartphone. The devices communicate over Wi-Fi using the UPnP protocols. Alumni Contributor: Jose H. Espinosa
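As a hedged illustration of the goal-oriented idea behind Roadie, the sketch below maps an observed device event to likely user goals through a tiny hand-written common-sense table, then surfaces the options that serve those goals. The event names and knowledge entries are invented for this example, not Roadie's actual knowledge base.

```python
# Invented common-sense fragments: event -> goals, goal -> device options.
COMMONSENSE = {
    "instrument_connected": ["play music"],
    "movie_inserted": ["watch movie"],
}
GOAL_TO_OPTIONS = {
    "play music": ["select input channel", "set volume"],
    "watch movie": ["choose sound configuration", "dim lights"],
}

def suggest(event):
    """Collect the options relevant to every goal the event implies."""
    options = []
    for goal in COMMONSENSE.get(event, []):
        options.extend(GOAL_TO_OPTIONS.get(goal, []))
    return options

print(suggest("instrument_connected"))  # ['select input channel', 'set volume']
```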


179. Goal-Oriented Interfaces for Mobile Phones

Henry Lieberman, Karthik Dinakar, Christopher Fry, Dustin Arthur Smith, Hal Abelson and Venky Raju Contemporary mobile phones provide a vast array of capabilities in so-called "apps," but currently each app lives in its own little world, with its own interface. Apps are usually unable to communicate with each other and unable to cooperate to meet users' needs. This project aims to enable end users to "program" their phones using natural language and speech recognition to perform complex tasks. A user, for example, could say: "Send the song I play most often to Bill." The phone should realize that an MP3 player holds songs, and that the MP3 app has a function to order songs by play frequency. It should know how to send a file to another user, and how to look up the user's contact information. We use state-of-the-art natural language understanding, common-sense reasoning, and a partial-order planner.

180. Graphical Interfaces for Software Visualization and Debugging

Henry Lieberman This project explores how modern graphical interface techniques and explicit support for the user's problem-solving activities can make more productive interfaces for debugging, which accounts for half the cost of software development. Animated representations of code, a reversible control structure, and instant connections between code and graphical output are some of the techniques used.

181. Human Goal Network

Henry Lieberman and Dustin Smith What motivates people? What changes do people want in the world? We approach questions of this kind by mining goals and plans from text-based websites: wikiHow, eHow, 43things, to-do lists, and common-sense knowledge bases. 43things tells us about people's long-term ambitions. How-to instructions and to-do lists tell us about everyday activities. We have analyzed the corpus to find out which goals are most popular, controversial, and concealed. The resulting goal network can be used for plan recognition, natural language understanding, and building intelligent interfaces that understand why they are being used. Come by and learn how you can use this knowledge about actions and goals, their properties (cost, duration, location), and their relations in your own applications.

182. Justify

Henry Lieberman and Christopher Fry Making optimal decisions can improve a wide array of situations. Humans often perform well on small, focused choices, but performance degrades as complexity increases. Justify organizes fine-grained human reasoning into a hierarchy that automatically aggregates and summarizes at each level. This flexible organization makes understanding complex arguments more manageable. A Justify discussion comprises points; each point has a type that conveys its domain-independent meaning and determines its summarization strategy. There are points for questions, answers, arithmetic, pro and con rationale, voting, and grouping that help to crystallize an issue. These point types represent a language that facilitates reasoning for both humans and the Justify program itself.
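A minimal sketch of the point hierarchy described above, assuming (for illustration only) that a voting point summarizes its children by tallying and a grouping point by counting; Justify's actual point types and summarization strategies are richer than this.

```python
class Point:
    """A discussion point whose type determines how it is summarized."""

    def __init__(self, kind, text, children=None):
        self.kind = kind          # e.g. "question", "answer", "vote", "group"
        self.text = text
        self.children = children or []

    def summarize(self):
        if self.kind == "vote":
            # Tally yes/no answers among child points.
            tally = sum(1 if c.text == "yes" else -1 for c in self.children)
            return f"{self.text}: net votes {tally:+d}"
        if self.kind == "group":
            return f"{self.text} ({len(self.children)} points)"
        return self.text

q = Point("vote", "Adopt proposal?",
          [Point("answer", "yes"), Point("answer", "yes"), Point("answer", "no")])
print(q.summarize())  # Adopt proposal?: net votes +1
```

At each level of a real discussion, such summaries would themselves be aggregated by the parent point's strategy.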


183. Learning Common Sense in a Second Language

Henry Lieberman, Ned Burns and Li Bian It is well known that living in a foreign country dramatically improves the effectiveness of learning a second language over classroom study alone. This is likely because people make associations with the foreign language as they see and participate in everyday activities. We are designing language-teaching sequences for a sensor-equipped residence that can detect user interaction with household objects. We use our common-sense knowledge base and reasoning tools to construct teaching sequences, wholly in the target language, of sentences and question-answering interactions that gradually improve the learner's language competence. For example, the first time the user sits in a chair, the system responds with the foreign-language word for "chair," and later with statements and questions such as "You sit in the chair" (complete sentence), "You sat in the chair" (tenses), "What is the chair made of?" (question, materials), or "Why are you sitting in the chair?" (goals, plans).

184. Multi-Lingual ConceptNet

Hyemin Chung, Jaewoo Chung, Wonsik Kim, Sung Hyon Myaeng and Walter Bender A ConceptNet in English is already established and working well. We are now attempting to expand it to other languages and cultures. This project is an extended ConceptNet with Korean common-sense knowledge, which is fundamentally different from English. Through this project, we can learn how to expand ConceptNet into other languages and how to connect them. By connecting the English and Korean ConceptNets, we hope not only to see cultural and linguistic differences, but also to solve problems such as the ambiguity of multivocal words, which were difficult to solve with only one ConceptNet.

185. Multilingual Common Sense

Aparecido Fabiano Pinatti de Carvalho, Jesus Savage Carmona, Marie Tsutsumi, Junia Anacleto, Henry Lieberman, Jason Alonso, Kenneth Arnold, Robert Speer, Vania Paula de Almeida and Veronica Arreola Rios This project aims to collect and reason over common-sense knowledge in languages other than English. We have collected large bodies of common-sense knowledge in Portuguese and Korean, and we are expanding to other languages such as Spanish, Dutch, and Italian. We can use techniques based on AnalogySpace to discover correlations between languages, enabling our knowledge bases in different languages to learn from each other. Alumni Contributors: Hyemin Chung, Jose H. Espinosa, Wonsik Kim and Yu-Te Shen
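One way the cross-language learning described above can be pictured: a sparse concept in one language inherits candidate features from its translation in a richer knowledge base. The tiny knowledge bases and the translation link below are invented for illustration; the actual AnalogySpace technique works over dimensionality-reduced concept-feature matrices rather than raw set differences.

```python
# Invented miniature knowledge bases: concept -> set of asserted features.
en = {"dog": {"is_pet", "has_tail", "can_bark"},
      "cat": {"is_pet", "has_tail"}}
pt = {"cachorro": {"is_pet"}}            # sparse Portuguese entry
translations = {"cachorro": "dog"}       # assumed cross-language link

def infer_features(word, kb, other_kb, translations):
    """Suggest features missing from kb[word] via its translated twin."""
    twin = translations.get(word)
    if twin is None or twin not in other_kb:
        return set()
    return other_kb[twin] - kb[word]

print(sorted(infer_features("cachorro", pt, en, translations)))
# ['can_bark', 'has_tail']
```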

186. Navigating in Very Large Display Spaces

Henry Lieberman How would you browse a VERY large display space, such as a street map of the entire world? The traditional solution is zoom and pan, but these operations have drawbacks that have gone unchallenged for decades. Shifting attention loses the wider context, leading to that "lost in hyperspace" feeling. We are exploring alternative solutions, such as a new technique that allows zooming and panning in multiple translucent layers.


187. Open Interpreter

Henry Lieberman and Dustin Arthur Smith Language interpretation requires going beyond the words to derive what the speaker meant: cooperatively making 'leaps of faith' and putting forth assumptions that can later be revised or retracted. Current natural language interfaces are opaque; when interpretation goes wrong, as it inevitably does, the human is left without recourse. The Open Interpreter project brings the assumptions involved in interpreting English event descriptions into the user interface, so people can participate in teaching the computer to derive the same common-sense assumptions that they expected. We show immediate applications for an intelligent calendaring application.

188. ProcedureSpace: Managing Informality by Example

Henry Lieberman and Kenneth C. Arnold Computers usually require us to be precise about what we want them to do and how, but humans find it hard to be so formal. If we gave computers formal examples of our informal instructions, maybe they could learn to relate ordinary users' natural instructions to the specifications, code, and tests with which they are comfortable. Zones and ProcedureSpace are examples of this. Zones is a code search interface that connects code with comments about its purpose. Completed searches become annotations, so the system learns by example. The backend, ProcedureSpace, finds code for a purpose comment (or vice versa) by relating words and phrases to code characteristics and natural language background knowledge. Users of the system were able to describe what they wanted in their own words, and often found that the system gave them helpful code.

189. Programming in Natural Language

Henry Lieberman and Moin Ahmad We want to build programming systems that can converse with their users to build computer programs. Such systems will enable users without programming expertise to write programs using natural language. Text-based virtual-world environments called MOOs (multi-user dungeons, object-oriented) allow their users to build objects and give them simple, interactive, text-based behaviors. These behaviors allow other participants in the environment to interact with those objects by invoking actions and receiving text messages. Through our natural language dialogue system, the beginning programmer will be able to describe objects and messages in MOO environments.

190. Raconteur: From Chat to Stories

Henry Lieberman and Pei-Yu Chi Raconteur is a story-editing system for conversational storytelling that provides intelligent assistance in illustrating a story with photos and videos from an annotated media library. It performs natural language processing on a text chat between two or more participants, and recommends appropriate items from a personal media library to illustrate a story. A large common-sense knowledge base and a novel common-sense inference technique are used to find relevant media materials that match the story intent, in a way that goes beyond keyword matching or word co-occurrence techniques. Common-sense inference can identify larger-scale story patterns such as expectation violation or conflict and resolution, and helps a storyteller chat about and brainstorm his personal stories with a friend.

191. Relational Analogies in Semantic Networks

Henry Lieberman and Jayant Krishnamurthy Analogy is a powerful comparison mechanism, commonly thought to be central to human problem solving. Analogies like "an atom is like the solar system" enable people to effectively transfer knowledge to new domains. Can we enable computers to make similar comparisons? Prior work on analogy (structure mapping) provides guidance about the nature of analogies, but implementations of these theories are inefficient and brittle. We are working on a new analogy mechanism that uses instance learning to make robust, efficient comparisons.
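For illustration, a brute-force structure-mapping matcher over relation triples: an analogy is a substitution of entities that preserves predicates. This is exactly the kind of inefficient implementation the project aims to improve on with instance learning; the relations below are invented.

```python
from itertools import permutations

# Relations as (predicate, arg1, arg2) triples for the two domains.
solar = {("attracts", "sun", "planet"), ("revolves_around", "planet", "sun")}
atom = {("attracts", "nucleus", "electron"),
        ("revolves_around", "electron", "nucleus")}

def best_mapping(base, target):
    """Try every entity substitution; keep the one preserving most relations."""
    b_ents = sorted({e for _, a, b in base for e in (a, b)})
    t_ents = sorted({e for _, a, b in target for e in (a, b)})
    best, best_score = {}, -1
    for perm in permutations(t_ents, len(b_ents)):
        m = dict(zip(b_ents, perm))
        score = sum((p, m[a], m[b]) in target for p, a, b in base)
        if score > best_score:
            best, best_score = m, score
    return best

print(best_mapping(solar, atom))  # {'planet': 'electron', 'sun': 'nucleus'}
```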


192. Ruminati: Tackling Cyberbullying with Computational Empathy

Karthik Dinakar, Henry Lieberman, and Birago Jones The scourge of cyberbullying has assumed worrisome proportions, with an ever-increasing number of adolescents admitting to having dealt with it either as a victim or as a bystander. Anonymity and the lack of meaningful supervision in the electronic medium are two factors that have exacerbated this social menace. This project explores computational methods from natural language processing and reflective user interfaces to alleviate the problem.

193. Storied Navigation

Henry Lieberman Today, people can tell stories by composing, manipulating, and sequencing individual media artifacts using digital technologies. However, these tools offer little help in developing a story's plot. Specifically, when a user tries to construct a story from a collection of individual media elements (videos, audio samples), current tools do not provide helpful information about the possible narratives those pieces can form. Storied Navigation is a novel approach to this problem: media sequences are tagged with free-text annotations and stored as a collection. To tell a story, the user inputs a free-text sentence, and the system suggests possible segments for a storied succession. This process iterates progressively, helping the user explore the domain of possible stories. The system associates the input with the segments' annotations using reasoning techniques that exploit the WordNet semantic network and common-sense reasoning technology. Alumni Contributors: Barbara Barry, Glorianna Davenport and edshen
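A simplified sketch of the suggestion step: rank annotated segments against the user's free-text sentence. Plain word overlap stands in here for the system's WordNet and common-sense matching, and the segment library is invented.

```python
# Invented media library: segment name -> free-text annotation.
segments = {
    "clip1": "a child learns to ride a bicycle",
    "clip2": "grandmother cooking dinner in the kitchen",
    "clip3": "first day of school nervous child",
}

def suggest(query, segments):
    """Rank segments by shared words with the query; drop zero-overlap ones."""
    q = set(query.lower().split())
    scored = [(len(q & set(text.lower().split())), name)
              for name, text in segments.items()]
    return [name for score, name in sorted(scored, reverse=True) if score > 0]

print(suggest("the child was nervous", segments))  # ['clip3', 'clip2', 'clip1']
```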

194. Time Out: Reflective User Interface for Social Networks

Birago Jones, Henry Lieberman and Karthik Dinakar Time Out is an experimental user interface system for addressing cyberbullying on social networks. A Reflective User Interface (RUI) is a novel concept to help users consider the possible consequences of their online behavior, and to assist in intervening in or mitigating potentially negative or harmful actions.

Andy Lippman: Viral Spaces


How to make scalable systems that enhance how we learn from and experience real spaces.

195. Air Mobs

Andy Lippman, Henry Holtzman and Eyal Toledano Air Mobs creates a local mobile community whose members can freely share Internet access across diverse carriers' 3G and 4G data accounts. We created an app in which anyone can advertise that they have bits and battery to spare and are willing to let other Air Mobs members tether to them. Members might participate because they are near their data cap and need a little more data, or because they have spare data they are willing to let others use before it expires. A website tracks the evolution of the community and posts the biggest donors and users of the system. To date, the app works on Android devices. It is designed to be open and community-based. We may experiment with market credits for sharing airtime, and with adding other devices and features.

196. AudioFile

Andy Lippman, Travis Rich and Stephanie Su AudioFile overlays imperceptible tones on standard audio tracks to embed digital information that can be decoded by standard mobile devices. AudioFile lets users explore their media more deeply by granting them access to a new channel of communication. The project creates sound that is simultaneously meaningful to humans and machines. Movie tracks can be annotated with actor details, songs can be annotated with artist information, or public announcements can be infused with targeted, meaningful data.
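The general idea of embedding bits as faint near-ultrasonic tones can be sketched as below. The carrier frequencies, amplitude, and framing are assumptions for illustration; the description above does not specify AudioFile's actual encoding.

```python
import math

RATE = 44100                 # samples per second
F0, F1 = 18000, 19000        # assumed carrier frequencies for bit 0 and bit 1
BIT_SECS, AMP = 0.05, 0.01   # 50 ms per bit, quiet relative to the track

def encode_bits(bits):
    """Generate a tone sequence, one near-ultrasonic carrier per bit."""
    samples = []
    for b in bits:
        f = F1 if b else F0
        n = int(RATE * BIT_SECS)
        samples += [AMP * math.sin(2 * math.pi * f * i / RATE) for i in range(n)]
    return samples

tones = encode_bits([1, 0, 1])
print(len(tones))  # 3 bits * 2205 samples each = 6615
```

Mixed at low amplitude over a normal track, such tones stay inaudible to most listeners while a phone microphone can filter and decode them.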

197. Barter: A Market-Incented Wisdom Exchange

Dawei Shen, Marshall Van Alstyne and Andrew Lippman Creative and productive information interchange in organizations is often stymied by perverse incentives among members. We transform that competition into a positive exchange by using market principles. Specifically, we apply innovative market mechanisms to construct incentives while still encouraging pro-social behaviors. Barter includes means to enhance knowledge sharing, innovation creation, and productivity. Barter provides managerial capability by using economic tools to stimulate activities and modify behaviors. We will measure the results and test the effectiveness of an information market in addressing organizational challenges. We are learning that transactions in rich markets can become an organizing principle among people, potentially as strong as social networks.

198. Brin.gy: What Brings Us Together

Henry Holtzman, Andy Lippman and Polychronis Ypodimatopoulos We allow people to form dynamic groups focused on topics that emerge serendipitously during everyday life. Groups can be long-lived or flower for a short time. Examples include people interested in buying the same product, those with similar expertise, those in the same location, or any collection of such attributes. We call this the Human Discovery Protocol (HDP). Just as computers follow well-established protocols like DNS to find other computers that carry desired information, HDP presents an open protocol for people to announce bits of information about themselves and have them aggregated and returned in the form of a group of people that match the user's specified criteria. We are experimenting with a web-based implementation (brin.gy) that allows users to join and communicate with groups of people based on their location, profile information, and items they may want to buy or sell.

199. BTNz!

Kent Larson, Andy Lippman, Shaun David Salzberg, Dan Sawada and Jonathan Speiser BTNz! is a lightweight, viral interface consisting of a button and a screen strategically positioned around the Media Lab complex to foster social interactions within the community. Users can upload messages to be displayed on the screen when the button is pushed. The goal is to see whether the act of pressing a tangible button makes people more aware of what is going on throughout the community. In some ways, BTNz! is a "Twitter of billboards." The idea is to get people together with almost no overhead, and in a fun way, with a single-dimension interface. The work includes building an application environment and collecting and analyzing data on the emergent social activities. Later work may involve tying identity to button-pushers and providing more context-aware messages to users.

200. CoCam

Henry Holtzman, Andy Lippman, Dan Sawada and Eyal Toledano Collaboration and media creation are difficult tasks, both for people and for network architectures. CoCam is a self-organizing network for real-time camera image collaboration. Like all camera apps, just point and shoot; CoCam then automatically joins other media creators into a network of collaborators. Network discovery, creation, grouping, joining, and leaving are done automatically in the background, letting users focus on participation in an event. We use local P2P middleware and a 3G negotiation service to create these networks for real-time media sharing. CoCam also provides multiple views that make the media experience more exciting, such as appearing to be in multiple places at the same time. The media is immediately distributed and replicated across multiple peers, so if a camera phone is confiscated, other users have copies of the images.


201. CoSync

Henry Holtzman, Andy Lippman and Eyal Toledano CoSync builds the ability to create and act jointly into mobile devices. This mirrors the way we as a society act both individually and in concert. The CoSync device ecology combines multiple stand-alone devices and controls them opportunistically as if they were one distributed, or diffuse, device at the user's fingertips. CoSync includes a programming interface that allows time-synchronized coordination at a granularity that permits watching a movie on one device while hearing the sound from another. The open API encourages an ever-growing set of such finely coordinated applications.
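A hedged sketch of what time-synchronized coordination across devices might look like: devices share a clock offset and schedule actions for the same shared instant. The class and method names below are hypothetical, not CoSync's actual API.

```python
import time

class CoSyncDevice:
    """A device that schedules actions against a shared clock (sketch only)."""

    def __init__(self, name, clock_offset=0.0):
        self.name = name
        self.clock_offset = clock_offset   # local-to-shared clock offset, secs
        self.schedule = []

    def shared_now(self):
        return time.time() + self.clock_offset

    def schedule_at(self, shared_time, action):
        # Store (local firing time, action); a runtime loop would fire it.
        self.schedule.append((shared_time - self.clock_offset, action))

tv = CoSyncDevice("tv", clock_offset=0.02)
speaker = CoSyncDevice("speaker", clock_offset=-0.01)
start = tv.shared_now() + 2.0              # two seconds from now, shared time
tv.schedule_at(start, "play video")        # video on one device...
speaker.schedule_at(start, "play audio")   # ...sound from another, in sync
```

Because both devices convert the same shared instant to their own local clocks, the video and audio begin together despite independent timebases.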

202. Electric Price Tags

Andy Lippman, Matthew Blackshaw and Rick Borovoy Electric Price Tags are a realization of a mobile system that is linked to technology in physical space. The underlying theme is that being mobile can mean far more than focusing on a portable device; it can mean using that device to unlock data and technology embedded in the environment. In the current version, users can reconfigure the price tags on a store shelf to display a desired metric (e.g., price, unit price, or calories). While this information is present on the boxes of the items for sale, comparisons would require individual analysis of each box. The visualization provided by Electric Price Tags allows users to view and filter information in physical space in ways that were previously possible only online.

203. Encoded Reality

Andy Lippman and Travis Rich We explore techniques to integrate digital codes into physical objects. Spanning both the hard and the soft, this work entails incorporating texture patterns into the surfaces of objects in a coded manner. Leveraging advancements in rapid prototyping and manufacturing capabilities, we explore techniques for creating deterministic encoded surface textures. The goal of this work is to take steps toward a self-descriptive universe in which all objects contain within their physical structure hooks to information about how they can be used, how they can be fixed, what they are used for, who uses them, and so on. Our motivation is to transform opaque technologies into things that teach and expose information about themselves through sensing technologies we already carry, or foreseeably could carry, with us.

204. Geo.gy: Location Shortener

Andy Lippman and Polychronis Ypodimatopoulos Have you ever been in the middle of a conversation and needed to share your location with the other party? Geo.gy is a location-shortener service. It lets you easily share your location with your peers by encoding it in a short URL that we call a "geolink." It is platform-independent and based on HTML5, so you can use any device with a modern browser to generate a geolink, simply by visiting the project's page. There are no user accounts, so geolinks remain anonymous. You can use Geo.gy to add location context to a post, an SMS, or anything else you want decorated with location.

205. Graffiti Codes

NEW LISTING

Andrew Lippman and Jeremy Rubin Graffiti Codes allow users to encode a small piece of information into a physical space, much like a QR code. This work diverges from the camera-scanning model and uses accelerometer-based paths to unlock data. These paths can be drawn onto any surface with analog tools (such as markers), and scanned by tracing over them with a mobile phone. Where a QR code cannot be easily generated in the field, Graffiti Codes require only a marker and a surface.

206. Line of Sound

Grace Rusi Woo, Rick Borovoy and Andy Lippman We show how data can be used to deliver sound information only in the direction in which one looks. The demonstration uses two 55-inch screens transmitting both human- and machine-relevant information. Each screen shows a video that flashes a single-bit indicator to a camera mounted on headphones. This is used to distinguish between the two screens and to correlate an audio track with the corresponding video track.

207. NewsFlash

Andy Lippman and Grace Rusi Woo NewsFlash is a social way to experience the global and local range of current events. People see a tapestry of newspaper front pages. The headlines and main photos tell part of the story; NewsFlash tells you the rest. People point their phones at a headline or picture of interest to bring up a feed of the article text from that paper. The data emanates from the screen and is captured by a cell phone camera; any number of people can see it at once and discuss the panoply of ongoing events. NewsFlash creates a local space that is simultaneously interactive and provocative. We hope it gets people talking.

208. Point & Shoot Data

Andy Lippman and Travis Rich Point & Shoot Data explores the use of visible light as a wireless communication medium for mobile devices. A snap-on case allows users to send messages to other mobile devices based on directionality and proximity. No email address, phone number, or account login is needed, just point and shoot your messages! The project enables infrastructure-free, scalable, proximity-based communication between two mobile devices. Alumni Contributors: Samuel Luescher and Shaun David Salzberg

209. Reach

Andy Lippman, Boris G Kizelshteyn and Rick Borovoy Reach merges inherently local communications with user requests or offers of services. It is built atop data from services users already use, like Facebook and Google Latitude. Reach is intended to demonstrate a flexible, attractive mobile interface that allows one to discover "interesting" aspects of the environment and to call upon services as needed. These can range from a broadcast offer to serve as a triage medic, to a way to share a cab or get help for a technical service problem like plugging into a video projector.

210. Recompose

Hiroshi Ishii, Matthew Blackshaw, Anthony DeVincenzi and David Lakatos Human beings have long shaped the physical environment to reflect designs of form and function. As an instrument of control, the human hand remains the most fundamental interface for affecting the material world. In the wake of the digital revolution, this is changing, bringing us to reexamine tangible interfaces. What if we could now dynamically reshape, redesign, and restructure our environment using the functional nature of digital tools? To address this, we present Recompose, a framework allowing direct and gestural manipulation of our physical environment. Recompose complements the highly precise, yet concentrated affordance of direct manipulation with a set of gestures, allowing functional manipulation of an actuated surface.


211. Social Transactions/Open Transactions

Andy Lippman, Kwan Lee, Dawei Shen, Eric Shyu and Phumpong Watanaprakornkul Social Transactions is an application that allows communities of consumers to collaboratively sense the market from mobile devices, enabling more informed financial decisions in a geo-local and timely context. The mobile application allows users not only to perform transactions, but also to inform, share, and purchase in groups at desired times. It could, for example, help people connect opportunistically in a local area to make group purchases, pick up an item for a friend, or perform reverse auctions. Our framework is an Open Transaction Network that enables applications ranging from restaurant menu recommendations to electronics purchases. We tested this with MIT's TechCASH payment system to investigate whether shared social transactions could provide just-in-time influences to change behaviors.

212. SonicLink
NEW LISTING

Andrew Lippman and Dan Sawada SonicLink is a fully decentralized, proximal communication framework for personal devices to seamlessly discover, connect, and interact with arbitrary public installations (e.g., digital billboards). It establishes connections based on audio proximity when devices are not on the same network: being near in physical space does not mean you are near in "network space." SonicLink uses near-ultrasonic acoustic signals that permit devices and installations to discover each other. It also exploits peer-to-peer proximal wireless networking techniques to establish a high-bandwidth, low-latency link between the device and the installation. Possible use cases include borrowing a large-screen TV and a camera in a public space for personal video conferencing, presenting personal notifications on a public display, and taking over neon lights to visualize music from the phone.

213. T(ether)

Hiroshi Ishii, Andy Lippman, Matthew Blackshaw and David Lakatos T(ether) is a novel spatially aware display that supports intuitive interaction with volumetric data. The display acts as a window affording users a perspective view of three-dimensional data through tracking of head position and orientation. T(ether) creates a 1:1 mapping between real and virtual coordinate space, allowing immersive exploration of the joint domain. Our system creates a shared workspace in which co-located or remote users can collaborate in both the real and virtual worlds. The system allows input through capacitive touch on the display and through a motion-tracked glove. When placed behind the display, the user's hand extends into the virtual world, enabling the user to interact with objects directly.
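The 1:1 mapping between real and virtual coordinate space can be sketched as an identity transform up to a calibrated origin: moving a tracked hand one meter moves its virtual counterpart one unit. The origin-calibration detail below is an assumption for illustration.

```python
def room_to_virtual(p, origin=(0.0, 0.0, 0.0)):
    """Identity mapping up to a calibrated origin: 1 m real = 1 unit virtual."""
    return tuple(c - o for c, o in zip(p, origin))

hand = (1.5, 0.9, 2.0)  # tracked glove position in room coordinates, meters
print(room_to_virtual(hand, origin=(1.0, 0.0, 2.0)))  # (0.5, 0.9, 0.0)
```

Because head, display, and hand all pass through the same mapping, the view through the display stays perspective-correct as any of them moves.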

214. T+1

Dawei Shen, Rick Borovoy and Andrew Lippman T+1 is an application that creates an iterative structure to help groups organize their interests and schedules. Users of T+1 receive instructions and send their personal information through mobile devices at discretized time steps, orchestrated by a unique, adaptive scheduling engine. At each time-step t, T+1 takes as inputs several relevant factors of human interaction, such as participants' interests, opinions, locations, and partner-matching schedules. It then computes and optimizes the structure and format of group interactions for the next interval. T+1 facilitates consensus formation, better group dynamics, and more engaging user experiences by using a clearly visible and comprehensible process. We plan to deploy the platform in both academic and political discussion settings and to analyze how user opinions and interests evolve over time to understand its efficacy.
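One round of the scheduling idea can be sketched as a greedy interest-based pairing; this simple matcher is a stand-in for T+1's adaptive scheduling engine, whose actual algorithm is not described here, and the participants are invented.

```python
def pair_round(interests):
    """Greedily pair participants who share the most interests this round."""
    names = list(interests)
    pairs, used = [], set()
    for i, a in enumerate(names):
        if a in used:
            continue
        best, best_score = None, -1
        for b in names[i + 1:]:
            if b not in used:
                score = len(interests[a] & interests[b])
                if score > best_score:
                    best, best_score = b, score
        if best is not None:
            pairs.append((a, best))
            used.update((a, best))
    return pairs

people = {"ann": {"energy", "transit"}, "bob": {"transit", "parks"},
          "cai": {"energy"}, "dee": {"parks"}}
print(pair_round(people))  # [('ann', 'bob'), ('cai', 'dee')]
```

At the next time-step, the engine would re-pair based on how interests and opinions shifted during the previous interval.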

215. The Glass Infrastructure

Henry Holtzman, Andy Lippman, Matthew Blackshaw, Jon Ferguson, Catherine Havasi, Julia Ma, Daniel Schultz and Polychronis Ypodimatopoulos This project builds a social, place-based information window into the Media Lab using 30 touch-sensitive screens strategically placed throughout the physical complex and at sponsor sites. The idea is to get people to talk among themselves about the work that they jointly explore in a public place. We present Lab projects as dynamically connected sets of "charms" that visitors can save, trade, and explore. The GI demonstrates a framework for an open, integrated IT system and shows new uses for it. Alumni Contributors: Rick Borovoy, Greg Elliott and Boris Grigory Kizelshteyn

216. VR Codes

Andy Lippman and Grace Woo VR Codes are dynamic data invisibly hidden in television and graphic displays. They allow a display to simultaneously present visual information to viewers in an unimpeded way and real-time data to a camera. Our intention is to make social displays that many can use at once; using VR Codes, many people can draw data from a display and control its use on a mobile device. We think of VR Codes as analogous to QR codes for video, and envision a future where every display in the environment contains latent information embedded in VR Codes.

Tod Machover: Opera of the Future


How musical composition, performance, and instrumentation can lead to innovative forms of expression, learning, and health.

217. A Toronto Symphony: Massive Musical Collaboration

Tod Machover, Akito Van Troyer, Benjamin Bloomberg and Peter Alexander Torpey Thus far, the results of crowdsourced and interactive music have been limited, with the public playing only a small part in the final musical result, and often disconnected from the artist leading the project. We believe a new musical ecology is needed for true creative collaboration between experts and amateurs that benefits both. For this purpose, we have created a new work for symphony orchestra in collaboration with the entire city of Toronto. Called A Toronto Symphony, the work, commissioned by the Toronto Symphony Orchestra, premiered in March 2013. We designed the necessary infrastructure, a variety of web-based music composition applications, a social media framework, and real-world community-building activities to bring together an unprecedented number of people of diverse ages, experiences, and musical backgrounds to create this new work. This process establishes a new model for complex creative collaborations between experts and everyone else.

218. Advanced Audio Systems for Live Performance

Tod Machover and Ben Bloomberg This project explores the contribution of advanced audio systems to live performance, their design and construction, and their integration into the theatrical design process. We look specifically at innovative input and control systems for shaping the analysis and processing of live performance, and at large-scale output systems that provide a meaningful virtual abstraction of DSP in order to create flexible audio systems that can both adapt to many environments and achieve a consistent, precise sound field for large audiences.

219. Death and the Powers: Redefining Opera

Tod Machover, Ben Bloomberg, Peter Torpey, Elena Jessop, Bob Hsiung, Michael Miller, Akito van Troyer, and Eyal Shahar "Death and the Powers" is a groundbreaking opera that brings a variety of technological, conceptual, and aesthetic innovations to the theatrical world. Created by Tod Machover (composer), Diane Paulus (director), and Alex McDowell (production designer), the opera uses the techniques of tomorrow to address age-old human concerns of life and legacy. The unique performance environment, including autonomous robots, expressive scenery, new Hyperinstruments, and human actors, blurs the line between animate and inanimate. The opera premiered in Monte Carlo in fall 2010, with additional performances in Boston and Chicago in 2011 and continuing engagements worldwide, including upcoming performances in Dallas in February 2014.

220. Designing Immersive Multi-Sensory Eating Experiences

Tod Machover and Janice Wang

Food offers a rich multi-modal experience that can deeply affect emotion and memory. We're interested in exploring the artistic and expressive potential of food beyond mere nourishment, as a means of creating memorable experiences that involve multiple senses. For instance, music can change our eating experience by altering our emotions during the meal, or by evoking a specific time and place. Similarly, sight, smell, and temperature can all be manipulated to combine with food for expressive effect. In addition, by drawing upon people's physiology and upbringing, we seek to create individual, meaningful sensory experiences. Specifically, my master's thesis looks at the connection between music and flavor perception.

221. Disembodied Performance

Tod Machover, Peter Torpey and Elena Jessop

Early in the opera "Death and the Powers," the main character Simon Powers is subsumed into a technological environment of his own creation. The set comes alive through robotic, visual, and sonic elements that allow the actor to extend his range and influence across the stage in unique and dynamic ways. This environment must assume the behavior and expression of the absent Simon; to distill the essence of this character, we recover performance parameters in real time from physiological sensors, voice, and vision systems. Gesture and performance parameters are then mapped to a visual language that allows the off-stage actor to express emotion and interact with others on stage. To accomplish this, we developed a suite of innovative analysis, mapping, and rendering software systems. Our approach takes a new direction in augmented performance, employing a non-representational abstraction of a human presence that fully translates a character into an environment.

222. DrumTop

Tod Machover and Akito Oshiro van Troyer

This project aims to transform everyday objects into percussive musical instruments, encouraging people to rediscover their surroundings through musical interactions with the objects around them. DrumTop is a drum machine made up of eight transducers. Placing objects on top of the transducers triggers a "hit," causing sounds to come from the objects themselves. In addition, users can program drum patterns by pushing on a transducer, and the weight of an object can be measured to control the strength of a hit.
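The DrumTop interaction (eight transducers, push-to-program patterns, weight-scaled hit strength) can be pictured as a minimal step sequencer. This is an illustrative sketch, not the actual DrumTop firmware; the class, the eight-step pattern length, and the 500 g weight ceiling are all assumptions.

```python
NUM_TRANSDUCERS = 8
STEPS = 8  # hypothetical pattern length; the real device may differ

class DrumTop:
    """Toy model of DrumTop's transducer sequencer."""

    def __init__(self):
        # one on/off pattern row per transducer
        self.patterns = [[False] * STEPS for _ in range(NUM_TRANSDUCERS)]
        # measured weight (grams) of the object resting on each transducer
        self.weights = [0.0] * NUM_TRANSDUCERS

    def place_object(self, transducer, weight):
        self.weights[transducer] = weight

    def toggle_step(self, transducer, step):
        # "programming a drum pattern by pushing on a transducer"
        self.patterns[transducer][step] = not self.patterns[transducer][step]

    def hit_strength(self, transducer, max_weight=500.0):
        # scale drive amplitude by the object's weight, clipped to [0, 1]
        return min(self.weights[transducer] / max_weight, 1.0)

    def step_hits(self, step):
        """Return (transducer, strength) pairs to trigger at this step."""
        return [(t, self.hit_strength(t))
                for t in range(NUM_TRANSDUCERS)
                if self.patterns[t][step] and self.weights[t] > 0]
```

An audio loop would call `step_hits` once per sequencer tick and drive each listed transducer at the returned strength.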

223. Future of the Festival


NEW LISTING

Tod Machover, Ben Bloomberg, Elena Jessop, Rebecca Kleinberger, Simone Ovsey, Peter Torpey, Akito van Troyer and Janice Wang

The Opera of the Future group designed and ran the Future of the Festival class, which helped to define and shape The Other Festival. The group also designed and created a wide range of projects for the Festival, including Figments, a performance shaping multiple theatrical modalities using Media Scores; Crenulations and Excursions, an installation and performance space where expressive qualities of movement control a rich soundscape; a multisensory culinary experience for a large group; a multiplexed performance exploring the role of production techniques in live music; an interactive book exploring the musicality of the spoken word; a collaborative vocal improvisation experience for novices; objects with unexpected behaviors; and a mass anatine migration.


224. Gestural Media Framework

Tod Machover and Elena Jessop

We are all equipped with two extremely expressive instruments for performance: the body and the voice. By using computer systems to sense and analyze human movement and voices, artists can take advantage of technology to augment the body's communicative powers. However, the sophistication, emotional content, and variety of expression possible through the original physical channels is often not captured by the technologies used to analyze them, and thus cannot be transferred from body to digital media. To address these issues, we are developing systems that use machine learning to map continuous input data, whether of gesture, voice, or biological/physical states, to a space of expressive, qualitative parameters. We are also developing a new framework for expressive performance augmentation, allowing users to easily create clear, intuitive, and comprehensible mappings by using high-level qualitative movement descriptions rather than low-level descriptions of sensor data streams.

225. Hyperinstruments

Tod Machover

The Hyperinstrument project creates expanded musical instruments, using technology to give extra power and finesse to virtuosic performers. Hyperinstruments were designed to augment a wide range of traditional musical instruments and have been used by some of the world's foremost performers (Yo-Yo Ma, the Los Angeles Philharmonic, Peter Gabriel, and Penn & Teller). Research focuses on designing computer systems that measure and interpret human expression and feeling, exploring appropriate modalities and content of interactive art and entertainment environments, and building sophisticated interactive musical instruments for non-professional musicians, students, music lovers, and the general public. Recent projects involve both new hyperinstruments for children and amateurs, and high-end hyperinstruments capable of expanding and transforming a symphony orchestra or an entire opera stage. Alumni Contributors: Roberto M. Aimi, Mary Farbood, Ed Hammond, Tristan Jehan, Margaret Orth, Dan Overholt, Egon Pasztor, Joshua Strickon, Gili Weinberg and Diana Young
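One way to picture the Gestural Media Framework's mapping from continuous low-level input to qualitative expressive parameters is distance-weighted interpolation over labeled examples. This is only a stand-in for whatever learned models the project actually uses; the feature names (speed, jerkiness), the example values, and the single "fluid vs. agitated" axis are all invented for illustration.

```python
import math

# Hypothetical training pairs: low-level motion features (speed, jerkiness)
# labeled with a high-level expressive quality in [0, 1]
# (0 = "fluid/calm", 1 = "sharp/agitated"). Values are illustrative.
EXAMPLES = [
    ((0.1, 0.05), 0.0),
    ((0.4, 0.30), 0.4),
    ((0.9, 0.80), 1.0),
]

def expressive_quality(features, k=2):
    """Map a raw feature vector to a qualitative parameter by
    distance-weighted interpolation over the k nearest examples."""
    dists = []
    for feats, label in EXAMPLES:
        d = math.dist(features, feats)
        if d == 0:
            return label  # exact match: return its label directly
        dists.append((d, label))
    dists.sort()
    nearest = dists[:k]
    total = sum(1 / d for d, _ in nearest)
    return sum(label / d for d, label in nearest) / total
```

A real system would learn such a mapping from many performer-annotated examples rather than three hand-picked points, but the shape of the problem (continuous sensor features in, qualitative parameters out) is the same.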

226. Hyperscore

Tod Machover

Hyperscore is an application that introduces children and non-musicians to musical composition and creativity in an intuitive and dynamic way. The "narrative" of a composition is expressed as a line-gesture, and the texture and shape of this line are analyzed to derive a pattern of tension-release, simplicity-complexity, and variable harmonization. The child creates or selects individual musical fragments in the form of chords or melodic motives, and layers them onto the narrative line with expressive brushstrokes. The Hyperscore system automatically realizes a full composition from this graphical representation, allowing individuals with no musical training to create professional pieces. Currently, Hyperscore uses a mouse-based interface; the final version will support freehand drawing, plus integration with the Music Shapers and Beatbugs to provide a rich array of tactile tools for manipulation of the graphical score. Alumni Contributors: Mary Farbood, Ed Hammond, Tristan Jehan, Margaret Orth, Dan Overholt, Egon Pasztor, Joshua Strickon, Gili Weinberg and Diana Young
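In toy form, analyzing a narrative line for tension-release might read local curvature as tension: straight segments relax, sharp bends tense. Hyperscore's actual analysis is far richer (texture, simplicity-complexity, harmonization); this sketch only illustrates the idea of deriving a musical control curve from a drawn line.

```python
def tension_curve(line_y):
    """Derive a normalized tension profile from the sampled heights of
    a drawn narrative line, using the absolute second difference
    (a discrete curvature estimate) as the tension at each point."""
    if len(line_y) < 3:
        return [0.0] * len(line_y)
    curv = [0.0]  # endpoints carry no curvature estimate
    for i in range(1, len(line_y) - 1):
        curv.append(abs(line_y[i - 1] - 2 * line_y[i] + line_y[i + 1]))
    curv.append(0.0)
    peak = max(curv)
    return [c / peak if peak else 0.0 for c in curv]
```

A straight line yields a flat (zero-tension) profile; a kink in the line produces a tension peak at the bend.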

227. Media Scores

Tod Machover and Peter Torpey

Media Scores extends the concept of a musical score to other modalities, facilitating the process of authoring and performing multimedia compositions and providing a medium through which to realize a modern-day Gesamtkunstwerk. Through research into the representation and encoding of expressive intent, systems for composing with media scores are being developed. Using such a tool, the composer will be able to shape an artistic work that may be performed through human and technological means in a variety of media and modalities. Media scores offer the potential for authoring content that takes into account live performance data as well as audience participation and interaction. This paradigm bridges the extremes of the continuum from composition to performance, allowing for improvisatory compositional acts at performance time. The media score also provides a common point of reference in collaborative productions, as well as the infrastructure for real-time control of technologies used during live performance.

228. Personal Opera

Tod Machover and Peter Torpey

Personal Opera is a radically innovative creative environment that enables anyone to create musical masterpieces sharing one's deepest thoughts, feelings, and memories. Based on our design of, and experience with, such projects as Hyperscore and the Brain Opera, we are developing a totally new environment that allows the incorporation of personal stories, images, and both original and well-loved music and sounds. Personal Opera builds on our guiding principle that active music creation yields far more powerful benefits than passive listening. Using music as the through-line for assembling and conveying our own individual legacies, Personal Opera represents a new form of expressive archiving: easy to use and powerful to experience. In partnership with the Royal Opera House in London, we have begun conducting Personal Opera workshops specifically targeting seniors, helping them tell their own meaningful stories through music, text, visuals, and acting.

229. Remote Theatrical Immersion: Extending "Sleep No More"

Tod Machover, Punchdrunk, Akito Van Troyer, Ben Bloomberg, Gershon Dublon, Jason Haas, Elena Jessop, Brian Mayton, Eyal Shahar, Jie Qi, Nicholas Joliat, and Peter Torpey

We have collaborated with London-based theater group Punchdrunk to create an online platform connected to their NYC show, Sleep No More. In the live show, masked audience members explore and interact with a rich environment, discovering their own narrative pathways. We have developed an online companion world to this real-life experience, through which online participants partner with live audience members to explore the interactive, immersive show together. Pushing the current capabilities of web standards and wireless communications technologies, the system delivers personalized multimedia content, allowing each online participant to have a unique experience co-created in real time by his own actions and those of his onsite partner. This project explores original ways of fostering meaningful relationships between online and onsite audience members, enhancing the experiences of both through the affordances that exist only at the intersection of the real and the virtual worlds.

230. The Other Feast

NEW LISTING

Tod Machover, Janice Wang, Benjamin Bloomberg, Peter A. Torpey and Philippa Mothersill

The feast is an immersive, multi-sensory experience in four acts. Each act explores a different theme of eating, from mysterious to playful to communal. The feast is not a normal dinner where people passively eat what's in front of them. In the small hours of the night, we will explore the active, creative role of the diner and build community around food and companionship.

231. Vocal Vibrations: Expressive Performance for Body-Mind Wellbeing

Tod Machover, Elena Jessop, Rebecca Kleinberger, Le Laboratoire, and the Dalai Lama Center at MIT

Vocal Vibrations is exploring the relationships between human physiology and the resonant vibrations of the voice. The voice and body are instruments everyone possesses; they are incredibly individual, infinitely expressive, and intimately linked to one's own physical form. In collaboration with Le Laboratoire in Paris and the Dalai Lama Center at MIT, we are exploring the hypothesis that the singing voice can influence mental and physical health through physicochemical phenomena and in ways consistent with contemplative practices. We are developing a series of multimedia experiences, including individual "meditations," a group "singing circle," and an iPad application, all effecting mood modulation and spiritual enhancement in an enveloping context of stunningly immersive, responsive music. For Fall 2013, we are developing a vocal art installation in Paris where a private "grotto" environment allows individual visitors to meditate using vibrations generated by their own voice, augmented by visual, acoustic, and physical stimuli. Alumni Contributor: Eyal Shahar

Pattie Maes: Fluid Interfaces


How to integrate the world of information and services more naturally into our daily physical lives, enabling insight, inspiration, and interpersonal connections.

232. Augmented Product Counter

Natan Linder, Pattie Maes and Rony Kubat

We have created an augmented reality (AR)-based product display counter that transforms any surface or object into an interactive surface, blending digital media and information with physical space. This system enables shoppers to conduct research in the store, learn about product features, and talk to a virtual expert for advice via built-in video conferencing. The Augmented Product Counter is based on LuminAR technology, which can transform any standard product counter, enabling shoppers to get detailed information on products as well as web access to read unbiased reviews, compare pricing, and conduct research while they interact with real products. This system delivers an innovative in-store shopping experience, combining live product interactions in a physical environment with the vast amount of information available on the web in an engaging and interactive manner.

233. Blossom

Pattie Maes and Sajid Sadi

Blossom is a multi-person awareness system that uses ioMaterials-based techniques to connect distant friends and family. It provides an awareness medium that does not rely on the attention- and reciprocity-demanding interfaces of digital communication media such as mobile phones, SMS, and email. Combining touch-based input with visual, haptic, and motile feedback, Blossoms are created as pairs that communicate over the network, echoing each other's conditions and forming an implicit, always-there link that physically expresses awareness while retaining the instantaneous capabilities that define digital communication.

234. Brainstorming with Someone Else's Mind


NEW LISTING

Pattie Maes and Cassandra Xia

This project examines how a file system can be organized for brainstorming. We perform a keyword search against a filesystem of information written by a particular user, as well as external material that the user has deemed inspirational. When tasked with generating new ideas, we draw on the user's files to come up with relevant thoughts endorsed by that user. If multiple people organize their filesystems in this way, then it becomes possible to brainstorm with someone else's head by running the same search against the other person's filesystem.

235. Community Data Portrait

Pattie Maes and Doug Fritz

As research communities grow, it is becoming increasingly difficult to understand a community's dynamics: its history and the varying perspectives with which it is interpreted. As our information becomes more digital, the histories and artifacts of a community become increasingly hidden. The purpose here is to show a given researcher how they fit into the background of a larger community, hopefully strengthening weak ties and understanding. At a high level, this project is intended to have real impact by allowing the Media Lab community to reflect on what it has been working on over the past 25 years and where it should be heading next. On a more individual level, it is intended to help researchers within the community situate themselves by better understanding the research directions and interests of their collaborators.
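The filesystem-as-brainstorming-partner search described for project 234 could be sketched as a plain keyword ranking over a user's files. The description does not specify the retrieval method, so the term-overlap scoring below is an assumption, as are the function names; pointing `root` at someone else's notes directory gives the "brainstorm with someone else's head" variant.

```python
import os
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

def brainstorm(root, query, top_n=5):
    """Rank readable files under `root` (a user's own writing plus
    material they marked as inspirational) by simple term overlap
    with the brainstorming query; highest-scoring paths first."""
    query_terms = set(tokenize(query))
    scores = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8") as f:
                    counts = Counter(tokenize(f.read()))
            except (UnicodeDecodeError, OSError):
                continue  # skip binary or unreadable files
            score = sum(counts[t] for t in query_terms)
            if score:
                scores.append((score, path))
    return [path for _, path in sorted(scores, reverse=True)[:top_n]]
```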

236. Cornucopia: Digital Gastronomy

Marcelo Coelho

Cornucopia is a concept design for a personal food factory, bringing the versatility of the digital world to the realm of cooking. In essence, it is a 3D printer for food that works by storing, precisely mixing, depositing, and cooking layers of ingredients. Cornucopia's cooking process starts with an array of food canisters that refrigerate and store a user's favorite ingredients. These are piped into a mixer and extruder head that can accurately deposit elaborate combinations of food; while this takes place, the food is heated or cooled. This fabrication process not only allows for the creation of flavors and textures that would be completely unimaginable through other cooking techniques, but also gives the user ultimate control over the origin, quality, nutritional value, and taste of every meal. Alumni Contributors: William J. Mitchell and Amit Zoran

237. Defuse

Aaron Zinman, Judith Donath and Pattie Maes

Defuse is a commenting platform that rethinks the medium's basic interactions. In a world where a single article in The New York Times can draw 3,000 comments, the original design of public asynchronous text systems has reached its limit; it needs more than social convention. Defuse uses context to change the basics of navigation and message posting, through a combination of machine learning, visualization, and structural changes.

238. Display Blocks

Pattie Maes and Pol Pla i Conesa

Display Blocks is a novel approach to display technology that arranges six organic light-emitting diode screens in a cubic form factor. The aim of the project is to explore the possibilities this type of display holds for data visualization, manipulation, and exploration. The research focuses on how the physicality of the screen can be leveraged to better interpret its contents. To this end, the physical design is accompanied by a series of applications that demonstrate the advantages of this technology.

239. EyeRing: A Compact, Intelligent Vision System on a Ring

Suranga Nanayakkara and Roy Shilkrot

EyeRing is a wearable, intuitive interface that allows a person to point at an object to see or hear more information about it. The idea is a micro camera worn as a ring on the index finger, with a button on the side that can be pushed with the thumb to take a picture or video that is then sent wirelessly to a mobile phone to be analyzed. The user receives information about the object in either auditory or visual form. Future versions of the proposed system may include more sensors to allow non-visual data capture and analysis. This finger-worn configuration of sensors opens up a myriad of possible applications for the visually impaired as well as the sighted.

240. FlexDisplays

Pattie Maes, Juergen Steimle, and Simon Olberding

We believe that in the near future many portable devices will have resizable displays. This will allow for devices with a very compact form factor that can unfold into a large display when needed. In this project, we design and study novel interaction techniques for devices with flexible, rollable, and foldable displays. We explore a number of scenarios, including personal and collaborative uses.


241. Flexpad
NEW LISTING

Pattie Maes, Jürgen Steimle and Andreas Jordt

Flexpad is a highly flexible display interface. Using a Kinect camera and a projector, Flexpad transforms virtually any sheet of paper or foam into a flexible, highly deformable, and spatially aware handheld display. It uses a novel approach for tracking deformed surfaces from depth images in real time. This approach captures deformations in high detail, is very robust to occlusions created by the user's hands and fingers, and does not require any kind of markers or visible texture. As a result, the display is considerably more deformable than previous flexible handheld displays, enabling novel applications that leverage the high expressiveness of detailed deformation.

242. Hyperego

Pattie Maes and Aaron Zinman

When we meet new people in real life, we assess them using a multitude of signals shaped by our upbringing, society, and our experiences and disposition. When we encounter a new individual virtually, we are usually looking at a single communication instance in bodiless form. How can we gain a deeper understanding of this individual without the cues we have in real life? Hyperego aggregates information across various online services to provide a more uniform data portrait of the individual. These portraits are under the user's control, allowing specific data to be hidden, revealed, or grouped in aggregate using an innovative privacy model.

243. Inktuitive: An Intuitive Physical Design Workspace

Pranav Mistry and Kayato Sekiya

Despite the advances and advantages of computer-aided design tools, the traditional pencil and paper remain the most important tools in the early stages of design. Inktuitive is an intuitive physical design workspace that aims to bring together conventional design tools, such as paper and pencil, with the power and convenience of digital tools for design. Inktuitive also extends the natural work practice of using physical paper by giving the pen the ability to control the design in physical 3D space, freeing it from its tie to the paper. The intuition of pen and paper is still present, but lines are captured and translated into shapes in the digital world. The physical paper is augmented with overlaid digital strokes. Furthermore, the platform provides a novel interaction mechanism for drawing and designing using above-the-surface pen movements.

244. InReach

Anette von Kapri, Seth Hunter, and Pattie Maes

Remote collaboration systems are still far from offering the rich experience that collocated meetings provide. Collaborators can transmit their voice and face at a distance, but it is very hard to point at physical objects and interpret gestures. InReach explores how remote collaborators can "reach into" a shared digital workspace where they can manipulate virtual objects and data. The collaborators see their live 3D-reconstructed mesh in a shared virtual space and can point at data or 3D models. They can grab digital objects with their bare hands, and translate, scale, and rotate them.

245. InterPlay: Full-Body Interaction Platform

Pattie Maes, Seth Hunter and Pol Pla i Conesa

InterPlay is a platform for designers to create dynamic social simulations that transform public spaces into immersive environments where people become the central agents. It uses computer vision and projection to facilitate full-body interaction with digital content. The physical world is augmented to create shared experiences that encourage active play, negotiation, and creative composition.

246. ioMaterials

Pattie Maes, Sajid Sadi and Amir Mikhak

ioMaterials is a project encompassing a variety of collocated sensing-actuation platforms. The project explores various aspects of dense sensing for humane communication, memory, and remote awareness. Using dense collocated sensing and actuation, we can change common objects into interfaces capable of hiding unobtrusively in plain sight. Relational Pillow and Blossom are instantiations of this ideal.

247. Liberated Pixels

Susanne Seitinger

We are experimenting with systems that blur the boundary between urban lighting and digital displays in public spaces. These systems consist of liberated pixels, which are not confined to rigid frames as are typical urban screens. Liberated pixels can be applied to existing horizontal and vertical surfaces in any configuration, and communicate with each other to enable a different repertoire of lighting and display patterns. We have developed Urban Pixels, a wireless infrastructure for liberated pixels. Composed of autonomous units, the system presents a programmable and distributed interface that is flexible and easy to deploy. Each unit includes an on-board battery, RF transceiver unit, and microprocessor. The goal is to incorporate renewable energy sources in future versions. Alumni Contributor: William J. Mitchell

248. Light.Bodies

Susanne Seitinger, Alex S. Taylor and Microsoft Research

Light Bodies are mobile, portable, hand-held lights that respond to audio and vibration input. The motivation to build these devices is grounded in a historical reinterpretation of street lighting: before fixed infrastructure illuminated cities at night, people carried lanterns with them to make their presence known. Using this as our starting point, we asked how we might engage people in more actively shaping the lightscapes that surround them. A first iteration of responsive, LED-based colored lights was designed for use in three settings: a choreographed dance performance, an outdoor public installation, and an audio-visual event. Alumni Contributor: William J. Mitchell

249. LuminAR

Natan Linder, Pattie Maes and Rony Kubat

LuminAR reinvents the traditional incandescent bulb and desk lamp, evolving them into a new category of robotic, digital information devices. The LuminAR Bulb combines a pico-projector, camera, and wireless computer in a compact form factor. This self-contained system provides users with just-in-time projected information and a gestural user interface, and it can be screwed into standard light fixtures everywhere. The LuminAR Lamp is an articulated robotic arm designed to interface with the LuminAR Bulb. Both LuminAR form factors dynamically augment their environments with media and information, while seamlessly connecting with laptops, mobile phones, and other electronic devices. LuminAR transforms surfaces and objects into interactive spaces that blend digital media and information with the physical space. The project radically rethinks the design of traditional lighting objects, and explores how we can endow them with novel augmented-reality interfaces.

250. MARS: Manufacturing Augmented Reality System

Rony Daniel Kubat, Natan Linder, Niaja Farve, Yihui Saw and Pattie Maes

Projected augmented reality in the manufacturing plant can increase worker productivity, reduce errors, gamify the workspace to increase worker satisfaction, and collect detailed metrics. We have built new LuminAR hardware customized for the needs of the manufacturing plant, and software for a specific manufacturing use case.


251. MemTable

Pattie Maes, Seth Hunter, Alexandre Milouchev and Emily Zhao

MemTable is a table with a contextual memory. The goal of the system is to facilitate reflection on the long-term collaborative work practices of a small group through an interface that supports meeting annotation, process documentation, and visualization of group work patterns. The project introduces a tabletop designed both to remember how it is used and to provide an interface for contextual retrieval of information. MemTable examines how an interface that embodies the history of its use can be incorporated into our daily lives in more ergonomic and meaningful contexts.

252. Mouseless

Pranav Mistry and Pattie Maes

Mouseless is an invisible computer mouse that provides the familiar interaction of a physical mouse without requiring any real mouse hardware. Despite advances in computing hardware, the two-button computer mouse has remained the predominant means of interacting with a computer. Mouseless removes the requirement of having a physical mouse altogether, but still provides the intuitive interaction with which users are familiar.

253. Moving Portraits

Pattie Maes

A Moving Portrait is a framed portrait that is aware of and reacts to a viewer's presence and body movements. A portrait represents a part of our lives and reflects our feelings, but it is completely oblivious to the events that occur around it or to the people who view it. By making a portrait interactive, we create a different and more engaging relationship between it and the viewer.

254. MTM "Little John"

Natan Linder

MTM "Little John" is a multi-purpose, mid-size rapid prototyping machine with the goal of being a personal fabricator capable of performing a variety of tasks (3D printing, milling, scanning, vinyl cutting) at a price point in the hundreds rather than thousands of dollars. The machine was designed and built in collaboration with the MTM (Machines that Make) project at the MIT Center for Bits and Atoms.

255. Perifoveal Display

Valentin Heun, Anette von Kapri and Pattie Maes

Today's GUIs are designed for small screens that show only a little information at a time. Real-time data that goes beyond one small screen must be continuously scanned with the eyes in order to build an abstract model of it in the mind, so GUIs scale poorly to huge amounts of data. The Perifoveal Display takes over this abstraction and visualizes data so that the full range of vision can be used for monitoring. It does so by exploiting the different visual systems of the eye: our roughly 120-degree field of view is highly sensitive to motion, while the central six degrees are slow but detailed enough to read text.
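The foveal/peripheral split behind the Perifoveal Display can be expressed as a simple rendering policy: detail where the user is looking, motion cues in the periphery. The thresholds below are taken loosely from the 6-degree and 120-degree figures in the description and are assumptions, as is the function itself; the real system's behavior is surely more nuanced.

```python
def representation(eccentricity_deg, changed):
    """Decide how a data cell should be rendered given its angular
    distance from the current gaze point and whether its value
    just changed."""
    if eccentricity_deg <= 3.0:
        # inside the ~6-degree foveal field: slow but detailed reading
        return "text"
    if eccentricity_deg <= 60.0:
        # peripheral field (~120 degrees total): motion is what registers,
        # so animate cells whose values changed and keep the rest still
        return "motion" if changed else "static"
    # outside the field of view: no need to render at all
    return "hidden"
```

A monitoring wall built this way would re-evaluate every cell whenever an eye tracker reports a new gaze point.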

256. PreCursor

Pranav Mistry and Pattie Maes

PreCursor is an invisible layer that hovers in front of the screen and enables novel interaction beyond current touchscreens. A computer mouse provides two levels of depth when interacting with on-screen content: one can hover, or one can click. Hovering reveals short descriptions, while clicking selects or performs an action. PreCursor brings this missing sense of interaction to touchscreens. PreCursor technology also has the potential to expand beyond a basic computer screen: it can be applied to mobile touchscreens or to objects in the real world, or serve as the launching pad for creating a 3D space for interaction.


257. Pulp-Based Computing: A Framework for Building Computers Out of Paper

Marcelo Coelho, Pattie Maes, Joanna Berzowska and Lyndl Hall

Pulp-Based Computing is a series of explorations that combine smart materials, papermaking, and printing. By integrating electrically active inks and fibers during the papermaking process, it is possible to create sensors and actuators that behave, look, and feel like paper. These composite materials not only leverage the physical and tactile qualities of paper, but can also convey digital information, spawning new and unexpected application domains in ubiquitous and pervasive computing at extremely affordable costs.

258. Quickies: Intelligent Sticky Notes

Pranav Mistry and Pattie Maes

The goal of Quickies is to bring one of the most useful inventions of the twentieth century into the digital age: the ubiquitous sticky note. Quickies enriches the experience of using sticky notes by linking hand-written notes to mobile phones, digital calendars, task lists, email, and instant messaging clients. By augmenting the familiar, ubiquitous sticky note, Quickies leverages existing patterns of behavior, merging paper-based sticky note usage with the user's informational experience. The project explores how artificial intelligence (AI), natural language processing (NLP), RFID, and ink-recognition technologies can make it possible to create intelligent sticky notes that can be searched and located, can send reminders and messages, and, more broadly, can act as an I/O interface to the digital information world.

259. ReflectOns: Mental Prostheses for Self-Reflection

Pattie Maes and Sajid Sadi

ReflectOns are objects that help people think about their actions and change their behavior based on subtle, ambient nudges delivered at the moment of action. Certain tasks, such as figuring out the number of calories consumed or the amount of money spent eating out, are generally difficult for the human mind to grapple with. By using in-place sensing combined with gentle feedback and an understanding of users' goals, we can recognize behaviors and trends, and provide users with a reflection of their own actions, tailored to enable both a better understanding of the repercussions of those actions and changes in behavior that help them better meet their own goals.

260. Remnant: Handwriting Memory Card

Pattie Maes and Sajid Sadi

Remnant is a greeting card that merges the affordances of physical materials with the temporal malleability of digital systems to create, enshrine, and reinforce the very thing that makes a greeting personal: the hand of the sender. The card records both the timing and the form of the sender's handwriting when it is first used. At a later time, collocated output recreates the handwriting, allowing the invisible, memorized hand of the sender to write his or her message directly in front of the recipient.

261. Second Surface: Multi-User Spatial Collaboration System Based on Augmented Reality
NEW LISTING

Shunichi Kasahara, Hiroshi Ishii, Pattie Maes, Austin S. Lee and Valentin Heun An environment for creative collaboration is significant for enhancing human communication and expressive activities, and many researchers have explored different collaborative spatial interaction technologies. However, most of these systems require special equipment and cannot adapt to everyday environments. We introduce Second Surface, a novel multi-user augmented reality system that fosters real-time interactions for user-generated content on top of the physical environment. This interaction takes place in the physical surroundings of everyday objects such as trees or houses. Our system allows users to place 3D drawings, texts, and photos relative to such objects and to share these expressions with any other person who uses the same software at the same spot. Second Surface explores a vision that integrates collaborative virtual spaces into the physical space. Our system can provide an alternate reality that generates playful and natural interaction in an everyday setup.

262. Sensei: A Mobile Tool for Language Learning

Pattie Maes, Suranga Nanayakkara and Roy Shilkrot Sensei is a mobile interface for language learning (words, sentences, pronunciation). It combines techniques from computer vision, augmented reality, speech recognition, and commonsense knowledge. In the current prototype, the user points his cell phone at an object and then sees the word and hears it pronounced in the language of his choice. The system also shows more information pulled from a commonsense knowledge base. The interface is primarily designed to be used as an interactive and fun language-learning tool for children. Future versions will be applied to other contexts, such as real-time language translation for face-to-face communication and assistance to travelers reading information displays in foreign languages; in addition, future versions will provide feedback to users about whether they are pronouncing words correctly. The project is implemented on a Samsung Galaxy phone running Android, donated by Samsung Corporation.

263. Shutters: A Permeable Surface for Environmental Control and Communication

Marcelo Coelho and Pattie Maes Shutters is a permeable kinetic surface for environmental control and communication. It is composed of actuated louvers (or shutters) that can be individually addressed for precise control of ventilation, daylight incidence, and information display. By combining smart materials, textiles, and computation, Shutters builds upon other facade systems to create living environments and work spaces that are more energy efficient, while being aesthetically pleasing and considerate of their inhabitants' activities.

264. Siftables: Physical Interaction with Digital Media

Pattie Maes Siftables are compact electronic devices with motion sensing, graphical display, and wireless communication. One or more Siftables may be physically manipulated to interact with digital information and media. A group of Siftables can thus act in concert to form a physical, distributed, gesture-sensitive, human-computer interface. Each Siftable object is stand-alone (battery-powered and wireless); Siftables do not require installed infrastructure such as large displays, instrumented tables, or cameras in order to be used. Siftables' key innovation is to give direct physical embodiment to information items and digital media content, allowing people to use their hands and bodies to manipulate these data instead of relying on virtual cursors and windows. By leveraging people's ability to manipulate physical objects, Siftables radically simplify the way we interact with information and media. Alumni Contributors: Jeevan James Kalanithi and David Merrill

265. Six-Forty by Four-Eighty: An Interactive Lighting System

Marcelo Coelho and Jamie Zigelbaum Six-Forty by Four-Eighty is an interactive lighting system composed of an array of magnetic physical pixels. Individually, pixel-tiles change their color in response to touch and communicate their state to each other by using a person's body as the conduit for information. When grouped together, the pixel-tiles create patterns and animations that can serve as a tool for customizing our physical spaces. By transposing the pixel from the confines of the screen into the physical world, focus is drawn to the materiality of computation and new forms for design emerge.

266. SixthSense

Pranav Mistry Information is often confined to paper or computer screens. SixthSense frees data from these confines and seamlessly integrates information and reality. With the miniaturization of computing devices, we are always connected to the digital world, but there is no link between our interactions with these digital devices and our interactions with the physical world. SixthSense bridges this gap by augmenting the physical world with digital information, bringing intangible information into the tangible world. Using a projector and camera worn as a pendant around the neck, SixthSense sees what you see and visually augments surfaces or objects with which you interact. It projects information onto any surface or object, and allows users to interact with the information through natural hand gestures, arm movements, or with the object itself. SixthSense makes the entire world your computer.

267. Smarter Objects: Using AR technology to Program Physical Objects and their Interactions

Pattie Maes, Valentin Heun and Shunichi Kasahara The Smarter Objects system explores a new method for interaction with everyday objects. The system associates a virtual object with every physical object to support an easy means of modifying the interface and the behavior of that physical object, as well as its interactions with other "smarter objects." As a user points a smart phone or tablet at a physical object, an augmented reality (AR) application recognizes the object and offers an intuitive graphical interface to program the object's behavior and interactions with other objects. Once reprogrammed, the Smarter Object can then be operated with a simple tangible interface (such as knobs or buttons). Smarter Objects combine the adaptability of digital objects with the simple tangible interface of a physical object. We have implemented several Smarter Objects and usage scenarios demonstrating the potential of this approach.

268. SPARSH

Pranav Mistry, Suranga Nanayakkara, and Pattie Maes SPARSH explores a novel interaction method to seamlessly transfer data among multiple users and devices in a fun and intuitive way. A user touches a data item to be copied from a device, conceptually saving the item in his or her body. Next, the user touches the other device to which he or she wants to paste/pass the saved content. SPARSH uses touch-based interactions as indications for what to copy and where to pass it. Technically, the actual transfer of media happens via the information cloud.
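The copy-and-paste flow SPARSH describes can be sketched in a few lines. The `Cloud` and `Device` classes below are illustrative stand-ins for the project's actual components, not its real API: a copied item is bound to the user's identity in the cloud and retrieved on the next touch.

```python
class Cloud:
    """Stand-in for the information cloud that holds one copied item per user."""
    def __init__(self):
        self._store = {}

    def save(self, user_id, item):
        self._store[user_id] = item       # "copy": the item is bound to the user

    def fetch(self, user_id):
        return self._store.get(user_id)   # "paste": retrieved on the next touch


class Device:
    def __init__(self, name, cloud):
        self.name, self.cloud = name, cloud

    def touch_copy(self, user_id, item):
        # The user touches an item on this device: conceptually "saved in the body".
        self.cloud.save(user_id, item)

    def touch_paste(self, user_id):
        # The user touches the target device: the saved item reappears there.
        return self.cloud.fetch(user_id)


cloud = Cloud()
phone, tablet = Device("phone", cloud), Device("tablet", cloud)
phone.touch_copy("alice", "vacation.jpg")
print(tablet.touch_paste("alice"))   # vacation.jpg
```

The point of the sketch is that no device-to-device pairing is needed: the user's identity is the only shared key between the two touches.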

269. Spotlight

Pattie Maes and Sajid Sadi Spotlight is about an artist's ability to create new meaning using the combination of interactive portraits and diptych or polyptych layouts. The mere placement of two or more portraits near each other is a known technique for creating new meaning in the viewer's mind. Spotlight takes this concept into the interactive domain, creating interactive portraits that are aware of each other's state and gesture. So not only the visual layout, but also the interaction among portraits, creates new meaning for the viewer. Using a combination of interaction techniques, Spotlight engages the viewer at two levels. At the group level, the viewer influences the portraits' "social dynamics." At the individual level, a portrait's "temporal gestures" expose much about the subject's personality. Alumni Contributor: Orit Zuckerman

270. Sprout I/O: A Texturally Rich Interface

Marcelo Coelho and Pattie Maes Sprout I/O is a kinetic fur that can capture, mediate, and replay the physical impressions we leave in our environment. It combines embedded electronic actuators with a texturally rich substrate that is soft, fuzzy, and pliable to create a dynamic structure where every fur strand can sense physical touch and be individually moved. By developing a composite material that collocates kinetic I/O, while preserving the expectations we normally have from interacting with physical things, we can more seamlessly embed and harness the power of computation in our surrounding environments to create more meaningful interfaces for our personal and social activities.

271. Surflex: A Shape-Changing Surface

Marcelo Coelho and Pattie Maes Surflex is a programmable surface for the design and visualization of physical objects and spaces. It combines the different memory and elasticity states of its materials to deform and gain new shapes, providing a novel alternative for 3-D fabrication and the design of physically adaptable interfaces.

272. Swyp

Natan Linder and Alexander List With Swyp you can transfer any file from any app to any app on any device, simply with a swipe of a finger. Swyp is a framework facilitating cross-app, cross-device data exchange using physical "swipe" gestures. Our framework allows any number of touch-sensing, collocated devices to establish file exchange and communications with no pairing other than a physical gesture. With this inherently physical paradigm, users can immediately grasp the concepts behind device-to-device communications. Our prototype application, Postcards, explores touch-enabled mobile devices connected to the LuminAR augmented surface interface. Postcards allows users to collaborate and create digital postcards using Swyp interactions. We demonstrate how Swyp-enabled interfaces can support a new generation of interactive workspaces by allowing pair-free, gesture-based communications to and from collocated devices.
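The pairing step at the heart of Swyp, matching a swipe that exits one device with a swipe that enters another, can be illustrated with a small sketch. The time window and event format below are assumptions for illustration, not the project's actual protocol:

```python
PAIR_WINDOW = 0.5  # seconds; assumed matching threshold

def pair_swipes(outgoing, incoming, window=PAIR_WINDOW):
    """Match outgoing swipe events to incoming ones by timestamp proximity.

    outgoing / incoming: lists of (device_id, timestamp) tuples.
    Returns a list of (source_device, target_device) pairs.
    """
    pairs, used = [], set()
    for dev_out, t_out in outgoing:
        best = None
        for i, (dev_in, t_in) in enumerate(incoming):
            if i in used or dev_in == dev_out:
                continue                      # never pair a device with itself
            dt = abs(t_in - t_out)
            if dt <= window and (best is None or dt < best[0]):
                best = (dt, i, dev_in)        # keep the closest match in time
        if best:
            used.add(best[1])
            pairs.append((dev_out, best[2]))
    return pairs

print(pair_swipes([("phone", 10.00)], [("table", 10.12), ("laptop", 13.0)]))
# [('phone', 'table')]
```

A real implementation would also compare swipe direction and exit/entry positions, but the temporal-coincidence idea is the core of pairing-free exchange.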

273. TaPuMa: Tangible Public Map

Pranav Mistry and Tsuyoshi Kuroki TaPuMa is a digital, tangible, public map that allows people to use the everyday objects they carry to access relevant, just-in-time information and to find the locations of places or people. TaPuMa envisions that conventional maps can be augmented with the unique identities and affordances of objects. TaPuMa uses an environment where the map and dynamic content are projected on a tabletop. A camera mounted above the table identifies and tracks the objects placed on the surface, and a software program registers their locations. After identifying the objects, the software provides relevant information visualizations directly on the table. The projector augments both object and table with projected digital information. TaPuMa explores a novel interaction mechanism where physical objects are used as interfaces to digital information. It allows users to acquire information through tangible media: the things they carry.

274. TeleStudio

Seth Hunter TeleKinect is peer-to-peer software for creative tele-video interactions. The environment can be used to interact with others in the same digital window at a distance: presenting a PowerPoint together, broadcasting your own news, creating an animation, acting or dancing with any online video, overdubbing commentary, teaching, creating a puppet show, storytelling, social TV viewing, and exercising together. The system tracks gestures and objects in the local environment and maps them to virtual objects and characters. It allows users to creatively bridge physical and digital meeting spaces by defining their own mappings.

275. Textura

Pattie Maes, Marcelo Coelho and Pol Pla i Conesa Textura is an exploration of how to enhance white objects with textures. By projecting onto any white surface, we can simulate different textures and materials. We envision this technology to have great potential for customization and personalization, and to be applicable to areas such as industrial design, the game industry, and retail scenarios.

276. The Design of Artifacts for Augmenting Intellect


NEW LISTING

Pattie Maes and Cassandra Xia Fifty years ago, Doug Engelbart created a conceptual framework for augmenting human intellect in the context of problem-solving. We expand upon Engelbart's framework and use his concepts of process hierarchies and artifact augmentation for the design of personal intelligence augmentation (IA) systems within the domains of memory, decision making, motivation, and mood. We propose a systematic design methodology for personal IA devices, organize existing IA research within a logical framework, and uncover underexplored areas of IA that could benefit from the invention of new artifacts.

277. The Relative Size of Things

Marcelo Coelho and Pattie Maes The Relative Size of Things is a low-cost 3D scanner for the microscopic world. It combines a webcam, a three-axis computer-controlled plotter, and image processing to merge hundreds of photographs into a single three-dimensional scan of surface features that are invisible to the naked eye.

278. thirdEye

Pranav Mistry and Pattie Maes thirdEye is a new technique that enables multiple viewers to see different things on the same display screen at the same time. With thirdEye, a public sign board can show a Japanese tourist instructions in Japanese and an American instructions in English; games no longer need a split screen, since each player can see his or her personal view of the game; two people watching TV can each watch their favorite channel on a single screen; a public display can show secret messages or patterns; and in the same movie theater, people can see different endings of a suspense movie.

279. Transitive Materials: Towards an Integrated Approach to Material Technology

Pattie Maes, Marcelo Coelho, Neri Oxman, Sajid Sadi, Amit Zoran and Amir Mikhak Transitive Materials is an umbrella project encompassing novel materials, fabrication technologies, and traditional craft techniques that can operate in unison to create objects and spaces that realize truly omnipresent interactivity. We are developing interactive textiles, ubiquitous displays, and responsive spaces that seamlessly couple input, output, processing, communication, and power distribution, while preserving the uniqueness and emotional value of physical materials and traditional craft. Life in a Comic, Physical Heart in a Virtual Body, Augmented Pillows, Flexible Urban Display, Shutters, Sprout I/O, and Pulp-Based Computing are current instantiations of these technologies.

280. VisionPlay

Pattie Maes and Seth Hunter VisionPlay is a framework to support the development of augmented play experiences for children. We are interested in exploring mixed reality applications enabled by web cameras, computer vision techniques, and animation that are more socially oriented and physically engaging. These include using physical toys to control digital characters, augmenting physical play environments with projection, and merging representations of the physical world with virtual play spaces.

281. Watt Watcher

Pattie Maes, Sajid Sadi and Eben Kunz Energy is the backbone of our technological society, yet we have great difficulty understanding where and how much of it is used. Watt Watcher is a project that provides in-place feedback on aggregate energy use per device in a format that is easy to understand and intuitively compare. Energy is inherently invisible, and its use is often sporadic and difficult to gauge. How much energy does your laptop use compared to your lamp? Or perhaps your toaster? By giving users some intuition regarding these basic questions, this ReflectOn allows users both to understand their use patterns and to form new, more informed habits.
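The kind of aggregate comparison Watt Watcher surfaces can be illustrated with a toy calculation. The device figures below are assumed typical values, not measurements from the project:

```python
def daily_energy_wh(power_watts, hours_per_day):
    """Energy in watt-hours per day: power draw times hours of use."""
    return power_watts * hours_per_day

# Assumed figures for illustration: (power draw in watts, hours of use per day).
devices = {
    "laptop":  (50, 8),
    "lamp":    (60, 5),
    "toaster": (1000, 0.1),
}

# Rank devices by daily energy, the comparison a user actually cares about:
# the laptop's steady 50 W dwarfs the toaster's brief 1000 W burst.
for name, (watts, hours) in sorted(
        devices.items(), key=lambda kv: -daily_energy_wh(*kv[1])):
    print(f"{name}: {daily_energy_wh(watts, hours):.0f} Wh/day")
```

The intuition the project aims for falls out of the arithmetic: the answer depends on duty cycle as much as on power draw, which is exactly what is invisible without in-place feedback.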

282. Wear Someone Else's Habits


NEW LISTING

Pattie Maes and Cassandra Xia This project explores the idea that some "intelligence" is encoded in the habits people assume in daily life. Adopting someone else's habits might allow you to break out of a personal rut, glean success tactics from someone you admire, or empathize with someone you care about. This is a wearable system with a Google Calendar backend that actively alerts users to perform a habit based on the events in their calendar.

283. Wearables for Emotion Capture

NEW LISTING

Pattie Maes and Cassandra Xia We are exploring the use of wearable objects for capturing emotion. When the user experiences a particular emotion, she initiates the wearable object to generate unique haptic sensations that come to be associated with the emotion. We explore the use of these haptic emotion-capture devices triggered by natural gestures, such as the knee-slapping funny gesture and the congratulatory high-five gesture.

Frank Moss: New Media Medicine


How radical new collaborations will catalyze a revolution in health.

284. CollaboRhythm

Frank Moss, John Moore MD, Scott Gilroy, Joslin Diabetes Clinic, UMass Medical School, Department of Veterans Affairs, Children's Hospital Boston, Boston Medical Center CollaboRhythm is a platform that enables patients to be at the center of every interaction in their healthcare with the goal of empowering them to be involved, reflective, and proactive. Care can be coordinated securely through cell phones, tablets, televisions, and computers so that support can be provided in real-time in the real world instead of through inconvenient doctor's office visits. We are currently developing and demonstrating applications for diabetes and hypertension management. A number of third parties have also developed exciting applications using CollaboRhythm. Please visit http://newmed.media.mit.edu to learn about how you can build a project with us using CollaboRhythm.

285. Collective Discovery

Henry A. Lieberman, Frank Moss, Ian Eslick and Pete Szolovits The choices we make about diet, environment, medications, or alternative therapies constitute a massive collection of "everyday experiments." These data remain largely unrecorded and are underutilized by traditional research institutions. Collective Discovery leverages the intuition and insight of patient communities to generate datasets about everyday experiments. We support the patient's process by simplifying the tracking and assessment of lifestyle changes in their bodies and lives. This model is embodied in the free-for-the-public website Personal Experiments (http://personalexperiments.org) and is used to power a clinical "N-of-1" experiment platform called MyIBD at the Cincinnati Children's Hospital.
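A minimal sketch of the "N-of-1" comparison such a platform supports: average a self-reported outcome during intervention ("on") and baseline ("off") periods and report the difference. The field names and scores below are invented for illustration:

```python
from statistics import mean

# Hypothetical self-tracked observations: phase marks whether the lifestyle
# change was active, symptom_score is a self-reported severity (lower is better).
observations = [
    {"phase": "off", "symptom_score": 6}, {"phase": "off", "symptom_score": 7},
    {"phase": "on",  "symptom_score": 4}, {"phase": "on",  "symptom_score": 5},
]

def phase_mean(obs, phase):
    """Mean outcome over all observations in the given phase."""
    return mean(o["symptom_score"] for o in obs if o["phase"] == phase)

# A negative effect here means symptoms improved while the change was active.
effect = phase_mean(observations, "on") - phase_mean(observations, "off")
print(f"estimated effect: {effect:+.1f} points")   # estimated effect: -2.0 points
```

A real N-of-1 design would alternate phases several times and test significance, but the on/off contrast is the core of the method.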

286. ForgetAboutIT?

John Moore MD and Frank Moss ForgetAboutIT? has become an integrated part of CollaboRhythm. Currently, only 50% of patients with chronic diseases take their medications. The problem is not simple forgetfulness; it is a complex combination of lack of understanding, poor self-reflection, limited social support, and almost non-existent communication between provider and patient. ForgetAboutIT? is a system to support medication adherence that presupposes that patients engaged in tight, collaborative communication with their providers through interactive interfaces would think it preposterous not to take their medications. Technically, it is an awareness system that employs ubiquitous connectivity on the patient side, through cell phones, televisions, and other interactive devices, and a multi-modal collaborative workstation on the provider side.


287. I'm Listening

John Moore MD, Henry Lieberman and Frank Moss Increasing understanding of how to categorize patient symptoms for efficient diagnosis has led to structured patient interviews and diagnostic flowcharts that can provide diagnostic accuracy and save valuable physician time. But the rigidity of predefined questions and controlled vocabulary for answers can leave patients feeling over-constrained, as if the doctor (or computer system) is not really attending to them. I'm Listening is a system for automatically conducting patient pre-visit interviews. It does not replace a human doctor, but can be used before an office visit to prepare the patient, deliver educational materials or triage care, and preorder appropriate tests, making better use of both doctor and patient time. It uses an on-screen avatar and natural language processing to (partially) understand the patient's response. Key is a common-sense reasoning system that lets patients express themselves in unconstrained natural language, even using metaphor, and that maps the language to medically relevant categories.
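The mapping step, from unconstrained patient language to medically relevant categories, can be caricatured with a tiny lookup. The vocabulary below is invented for illustration; the real system uses common-sense reasoning rather than simple string matching:

```python
# Hypothetical phrase-to-category table, including a metaphorical phrasing.
CONCEPTS = {
    "chest pain": {"chest hurts", "pressure on my chest",
                   "elephant sitting on my chest"},
    "headache":   {"head is pounding", "splitting headache", "head hurts"},
    "fatigue":    {"no energy", "exhausted", "tired all the time"},
}

def categorize(utterance):
    """Return the sorted medical categories whose known phrasings appear."""
    text = utterance.lower()
    return sorted(cat for cat, phrases in CONCEPTS.items()
                  if any(p in text for p in phrases))

print(categorize("It feels like an elephant sitting on my chest and I'm exhausted."))
# ['chest pain', 'fatigue']
```

Even this toy version shows why free text matters: the metaphorical complaint maps to the same category as the clinical phrasing, which a rigid controlled vocabulary would miss.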

288. Oovit PT

Mar Gonzalez, John Moore, and Frank Moss Patient adherence to physical therapy regimens is poor, and there is a lack of quantitative data about patient performance, particularly at home. This project is an end-to-end virtual rehabilitation system for supporting patient adherence to home exercise that addresses the multi-factorial nature of the problem. The physical therapist and patient make shared decisions about appropriate exercises and goals, and patients use a sensor-enabled gaming interface at home to perform exercises. Quantitative data is then fed back to the therapist, who can properly adjust the regimen and give reinforcing feedback and support.

Neri Oxman: Mediated Matter


How digital and fabrication technologies mediate between matter and environment to radically transform the design and construction of objects, buildings, and systems.

289. 3D Printing of Functionally Graded Materials

Neri Oxman and Steven Keating Functionally graded materials (materials with spatially varying composition or microstructure) are omnipresent in nature. From palm trees with radial density gradients, to the spongy trabeculae structure of bone, to the hardness gradient found in many types of beaks, graded materials offer material and structural efficiency. But in man-made structures such as concrete pillars, materials are typically volumetrically homogenous. While using homogenous materials allows for ease of production, improvements in strength, weight, and material usage can be obtained by designing with functionally graded materials. To achieve graded material objects, we are working to construct a 3D printer capable of dynamically mixing composition materials. Starting with concrete and UV-curable polymers, we aim to create structures, such as a bone-inspired beam, that have functionally graded materials. This research was sponsored by the NSF EAGER award: Bio-Beams: FGM Digital Design & Fabrication.

290. Beast

Neri Oxman Beast is an organic-like entity created synthetically by the incorporation of physical parameters into digital form-generation protocols. A single continuous surface, acting both as structure and as skin, is locally modulated for both structural support and corporeal aid. Beast combines structural, environmental, and corporeal performance by adapting its thickness, pattern density, stiffness, flexibility, and translucency to load, curvature, and skin-pressured areas, respectively.

291. Building-Scale 3D Printing

Neri Oxman and Steven Keating How can additive fabrication technologies be scaled to building-sized construction? We introduce a novel method of mobile swarm printing that allows small robotic agents to construct large structures. The robotic agents extrude a fast-curing material that doubles as both a concrete mold for structural walls and a thermal insulation layer. This technique offers many benefits over traditional construction methods, such as speed, custom geometry, and cost. In addition, building utilities like wiring and plumbing can be integrated directly into the printing process. This research was sponsored by the NSF EAGER award: Bio-Beams: FGM Digital Design & Fabrication.

292. Carpal Skin

Neri Oxman Carpal Skin is a prototype for a protective glove against Carpal Tunnel Syndrome, a medical condition in which the median nerve is compressed at the wrist, leading to numbness, muscle atrophy, and weakness in the hand. Night-time wrist splinting is the recommended treatment for most patients before they go into carpal tunnel release surgery. Carpal Skin is a process by which to map the pain profile of a particular patient (its intensity and duration) and to distribute hard and soft materials to fit the patient's anatomical and physiological requirements, limiting movement in a customized fashion. The form-generation process is inspired by animal coating patterns in the control of stiffness variation.

293. CNSILK Pavilion

Neri Oxman, Carlos Gonzalez, Markus Kayser and Jared Laucks The CNSILK Pavilion extends current development of CNSILK research into large-scale inhabitable spaces. Rigorous study and analysis of micro-scale fibrous structures akin to silkworm cocoons and spiderwebs is underway in collaboration with Tufts University and the Wyss Institute. Through this research, the team will develop a process of analysis and feedback while experimenting with multi-scalar composite shell environments. Research and analysis at the micro-scale will aid in a greater understanding of fibrous systems, traditionally used in tension, across various scales to develop habitable space. The synthesis between biology, material science, and computation, coupled with large-scale, multi-axis fabrication, opens new avenues for embedded, performance-based design at a habitable scale. This approach will allow us to create an environmentally tailored pavilion for an event in the spring of 2013.

294. CNSILK: Computer Numerically Controlled Silk Cocoon Construction

Neri Oxman CNSILK explores the design and fabrication potential of silk fibers, inspired by silkworm cocoons, for the construction of woven habitats. It explores a novel approach to the design and fabrication of silk-based building skins by controlling the mechanical and physical properties of spatial structures inherent in their microstructures using multi-axis fabrication. The method offers construction without assemblies, such that material properties vary locally to accommodate structural and environmental requirements. This approach stands in contrast to functional assemblies and kinetically actuated facades, which require a great deal of energy to operate and are typically maintained by global control. Such material architectures could simultaneously bear structural load, change their transparency so as to control light levels within a spatial compartment (building or vehicle), and open and close embedded pores so as to ventilate a space.

295. Digitally Reconfigurable Surface

Neri Oxman and Benjamin Peters The digitally reconfigurable surface is a pin-matrix apparatus for directly creating rigid 3D surfaces from a computer-aided design (CAD) input. A digital design is uploaded into the device, and a grid of thousands of tiny pins, much like the popular pin-art toy, are actuated to form the desired surface. A rubber sheet is held by vacuum pressure onto the tops of the pins to smooth out the surface formed by them; this strong surface can then be used for industrial forming operations, simple resin casting, and many other applications. The novel phase-changing electronic clutch array allows the device to have independent position control over thousands of discrete pins with only a single motorized push plate, lowering the complexity and manufacturing cost of this type of device. Research is ongoing into new actuation techniques to further lower the cost and increase the surface resolution of this technology.
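The single-push-plate idea behind the digitally reconfigurable surface can be sketched as a simple simulation: the plate rises in discrete steps, carrying every unlocked pin with it, and each pin's clutch engages once the plate reaches that pin's target height. The step size and locking model here are assumptions, not the device's actual mechanics:

```python
def actuate(targets, step=1):
    """Simulate one push-plate pass.

    targets: dict mapping pin id -> desired height.
    Returns the final height of every pin.
    """
    heights = {pin: 0 for pin in targets}
    locked = set()
    plate = 0
    while len(locked) < len(targets):
        plate += step
        for pin, goal in targets.items():
            if pin in locked:
                continue
            heights[pin] = plate      # unlocked pins ride up with the plate
            if plate >= goal:
                locked.add(pin)       # clutch engages; the pin holds its height
    return heights

print(actuate({"a": 2, "b": 5, "c": 3}))  # {'a': 2, 'b': 5, 'c': 3}
```

The sketch shows why one motor suffices: position information lives in the per-pin clutch timing, not in per-pin actuators.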

296. FABRICOLOGY: Variable-Property 3D Printing as a Case for Sustainable Fabrication

Neri Oxman Rapid prototyping technologies speed product design by facilitating visualization and testing of prototypes. However, such machines are limited to using one material at a time; even high-end 3D printers, which accommodate the deposition of multiple materials, must do so discretely and not in mixtures. This project aims to build a proof-of-concept of a 3D printer able to dynamically mix and vary the ratios of different materials in order to produce a continuous gradient of material properties with real-time correspondence to structural and environmental constraints. Alumni Contributors: Mindy Eng, William J. Mitchell and Rachel Fong

297. FitSocket: A Better Way to Make Sockets

Hugh Herr, Neri Oxman, Elizabeth Tsai, Reza Safai-Naeeni, Zjenja Doubrovski, Arthur Petron and Roy Kornbluh (SRI) Sockets, the cup-shaped devices that attach an amputated limb to a lower-limb prosthesis, are made through unscientific, artisanal methods that do not produce repeatable quality and comfort from one individual with amputation to the next. The FitSocket project aims to identify the correlation between leg tissue properties and the design of a comfortable socket. We accomplish this by creating a robotic socket measurement device, called the FitSocket, which can directly measure tissue properties. With this data, we can rapid-prototype test sockets and socket molds in order to make rigid, spatially variable stiffness, and spatially/temporally variable stiffness sockets.

298. Lichtenberg 3D Printing


NEW LISTING

Neri Oxman and Steven Keating Using electricity to generate 3D Lichtenberg structures in sintered media (i.e., glass) offers a new approach to digital fabrication. By robotically controlling the electrodes, a digital form can be rapidly fabricated with the benefits of a fine fractal structure. Numerous applications exist, ranging from chemical catalysts, to fractal antennas, to product design.

299. Monocoque

Neri Oxman French for "single shell," Monocoque stands for a construction technique that supports structural load using an object's external skin. In contrast to the traditional design of building skins, which distinguishes between internal structural frameworks and non-bearing skin elements, this approach promotes heterogeneity and differentiation of material properties. The project demonstrates the notion of a structural skin using a Voronoi pattern, the density of which corresponds to multi-scalar loading conditions. The distribution of shear-stress lines and surface pressure is embodied in the allocation and relative thickness of the vein-like elements built into the skin. Its innovative 3D printing technology provides the ability to print parts and assemblies made of multiple materials within a single build, as well as to create composite materials that present preset combinations of mechanical properties.

300. Morphable Structures

Neri Oxman and Steven Keating Granular materials can be put into a jammed state through the application of pressure to achieve a pseudo-solid material with controllable rigidity and geometry. While jamming principles have long been known, large-scale applications of jammed structures have not been significantly explored. The possibilities for shape-changing machines and structures are vast, and jamming provides a plausible mechanism to achieve this effect. In this work, jamming prototypes are constructed to gain a better understanding of the effect, and potential applications are highlighted and demonstrated. Such applications range from a morphable chair, to a floor that dynamically softens in response to a user falling down to reduce injury, to artistic free-form sculpting.

301. PCB Origami

Neri Oxman and Yoav Sterman The PCB Origami project is an innovative concept for printing digital materials and creating 3D objects with rigid-flex PCBs and pick-and-place machines. These machines allow the printing of digital electronic materials while controlling the location and properties of each printed component. By combining this technology with rigid-flex PCBs and computational origami, it is possible to create from a single sheet of PCB almost any 3D shape that is already embedded with electronics, producing a finished product that is both structural and functional.

302. Rapid Craft

Neri Oxman The values endorsed by vernacular architecture have traditionally promoted designs constructed and informed by and for the environment, using local knowledge and indigenous materials. Under the imperatives and growing recognition of sustainable design, Rapid Craft seeks to integrate local construction techniques with globally available digital design technologies in order to preserve, revive, and reshape these cultural traditions.

303. Raycounting

Neri Oxman Raycounting is a method for generating customized light-shading constructions by registering the intensity and orientation of light rays within a given environment. 3D surfaces of double curvature are the result of assigning light parameters to flat planes. The algorithm calculates the intensity, position, and direction of one or multiple light sources placed in a given environment, and assigns local curvature values to each point in space corresponding to the reference plane and the light dimension. Light-performance analysis tools are reconstructed programmatically to allow for morphological synthesis based on the intensity, frequency, and polarization of light, as defined by the user.
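The core step, accumulating light intensity at a point and mapping it to a local curvature value, can be sketched as follows. The inverse-square falloff and the saturating intensity-to-curvature mapping are illustrative assumptions, not the project's actual light model:

```python
def light_intensity(point, sources):
    """Total intensity at `point` from point light sources, using a simple
    inverse-square falloff. Each source is a (position, power) pair.
    (Illustrative model only; the project's light-analysis tools are
    more elaborate.)"""
    total = 0.0
    for pos, power in sources:
        d2 = sum((p - s) ** 2 for p, s in zip(point, pos))
        total += power / max(d2, 1e-9)  # guard against zero distance
    return total

def curvature_from_intensity(intensity, k_max=1.0, i_ref=1.0):
    """Map local intensity to a curvature value in (0, k_max): brighter
    regions curve more strongly (a hypothetical mapping for illustration)."""
    return k_max * intensity / (intensity + i_ref)

# Evaluate curvature over a small grid of reference-plane points.
sources = [((0.0, 0.0, 2.0), 10.0)]
grid = [(x * 0.5, y * 0.5, 0.0) for x in range(4) for y in range(4)]
curvatures = [curvature_from_intensity(light_intensity(p, sources)) for p in grid]
```

Points nearer the source receive higher intensity and thus larger curvature values, which is the gradient a double-curvature surface would then follow.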

304. Responsive Glass

Neri Oxman, Elizabeth Tsai, and Michal Firstenberg Hydrogels are crosslinked polymers capable of absorbing great amounts of water. They have been studied for the last 50 years, largely due to their hydrophilic character at ambient temperatures, which makes them biocompatible and attractive for various biological applications. In our project, however, we are interested in their hydrophilic-hydrophobic phase transition, occurring slightly above room temperature. We investigate the mechanical and optical transformations at this phase transition, namely their changes in swelling, permeability, and optical transmission, as enabling responsive or passive dynamics for future product design.

305. SpiderBot

Neri Oxman and Benjamin Peters The SpiderBot is a suspended robotic gantry system that provides an easily deployable platform from which to print large structures. The body is composed of a deposition nozzle, a reservoir of material, and parallel linear actuators. The robot is connected to stable points high in the environment, such as large trees or buildings. This arrangement is capable of moving large distances without the need for more conventional linear guides, much like a spider does. The system is easy to set up for mobile projects, and will afford sufficient printing resolution and build volume. Expanding foam can be deposited to create a building-scale printed object rapidly.

Another material type of interest is the extrusion or spinning of tension elements, like rope or cable. With tension elements, unique structures such as bridges or webs can be wrapped, woven, or strung around environmental features or previously printed materials.

Joseph Paradiso: Responsive Environments


How sensor networks augment and mediate human experience, interaction, and perception.

306. A Machine Learning Toolbox for Musician-Computer Interaction

Joe Paradiso and Nick Gillian The SEC is an extension to the free open-source program EyesWeb that contains a large number of machine learning and signal processing algorithms specifically designed for real-time pattern and gesture recognition. All the algorithms within the SEC are encapsulated as individual blocks, allowing the user to connect the output of one block to the input of another to create a signal-flow chain. This lets a user quickly build and train a custom gesture recognition system without writing a single line of code or explicitly understanding how any of the machine learning algorithms within the recognition system work.

Matthew Aldrich Advances in building technology and sensor networks offer a chance to imagine new forms of personalized and efficient utility control. One such area is lighting control. With the aid of sensor networks, these new control systems not only offer lower energy consumption, but also enable new ways to specify and augment lighting. It is our belief that dynamic lighting controlled by a single user, or even an entire office floor, is the frontier of future intelligent and adaptive systems.

Joe Paradiso and Amit Zoran How can traditional values be embedded into a digital object? We explore this concept by implementing a special guitar that combines physical acoustic properties with virtual capabilities. The acoustical values are embodied by a wooden heart: a unique, replaceable piece of wood that gives the guitar its unique sound. The acoustic signal created by this wooden heart is digitally processed to create flexible sound design.

Joe Paradiso, Nan-Wei Gong and Nan Zhao We developed a music control surface that enables integration with any musical instrument via a versatile, customizable, and inexpensive user interface.
This sensate surface allows capacitive sensor electrodes and connections between electronic components to be printed onto a large roll of flexible substrate, unrestricted in length. The high-dynamic-range capacitive sensing electrodes can infer not only touch, but also near-range, non-contact gestural nuance in a music performance. With this sensate surface, users can cut out their desired shapes, paste on the desired number of inputs, and customize their controller interfaces, which can then send signals wirelessly to effects or software synthesizers. We seek a solution for integrating the form factor of traditional music controllers seamlessly on top of one's instrument, while adding expressiveness to performance by sensing and incorporating movements and gestures to manipulate the musical output.

307. Beyond the Light Switch: New Frontiers in Dynamic Lighting

308. Chameleon Guitar: Physical Heart in a Virtual Body

309. Customizable Sensate Surface for Music Control

310. Data-Driven Elevator Music

Joe Paradiso, Gershon Dublon and Brian Dean Mayton Our new building lets us see across spaces, extending our visual perception beyond the walls that enclose us. Yet, invisibly, networks of sensors, from HVAC and lighting systems to Twitter and RFID, control our environment and capture our social dynamics. This project proposes extending our senses into this world of information, imagining the building as glass in every sense. Sensor devices distributed throughout the Lab transmit privacy-protected audio streams and real-time measurements of motion, temperature, humidity, and light levels. The data are composed into an eight-channel audio installation in the glass elevator that turns these dynamic parameters into music, while microphone streams are spatialized to simulate their real locations in the building. A pressure sensor in the elevator provides us with fine-grained altitude to control the spatialization and sonification. As visitors move from floor to floor, they hear the activities taking place on each. Alumni Contributor: Nicholas Joliat
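Deriving fine-grained altitude from the elevator's pressure sensor is typically done with the international barometric formula; this is a minimal sketch under standard-atmosphere assumptions (the project description does not specify the actual sensor or calibration, and the 4 m floor height is a hypothetical value):

```python
def pressure_to_altitude(p_pa, p0_pa=101325.0):
    """Altitude in meters from static pressure via the international
    barometric formula, assuming a standard atmosphere with sea-level
    pressure p0_pa. Relative altitude between floors is what drives the
    spatialization, so the absolute reference matters little."""
    return 44330.0 * (1.0 - (p_pa / p0_pa) ** (1.0 / 5.255))

def floor_from_pressure(p_pa, p_ground_pa, floor_height_m=4.0):
    """Estimate a floor index from the pressure difference relative to a
    ground-floor reference reading (floor height is an assumption)."""
    dh = pressure_to_altitude(p_pa) - pressure_to_altitude(p_ground_pa)
    return round(dh / floor_height_m)
```

Near sea level, pressure drops roughly 12 Pa per meter, so one floor corresponds to a change of about 50 Pa, which is well within the resolution of a good barometric sensor.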

311. Dense, Low-Power Environmental Monitoring for Smart Energy Profiling

Nan-Wei Gong, Ashley Turza, David Way and Joe Paradiso with: Phil London, Gary Ware, Brett Leida and Tim Ren (Schneider Electric); Leon Glicksman and Steve Ray (MIT Building Technologies) We are working with sponsor Schneider Electric to deploy a dense, low-power wireless sensor network aimed at environmental monitoring for smart energy profiling. This distributed sensor system measures temperature, humidity, and 3D airflow, and transmits this information through the wireless ZigBee protocol. The sensing units are currently deployed in the lower atrium of E14. The data are being used to inform CFD models of airflow in buildings and to explore the efficiency of commercial building HVAC systems, the energy efficiency of different building materials, and lighting choices in novel architectural designs.

Joe Paradiso, Gershon Dublon and Brian Dean Mayton Homes and offices are being filled with sensor networks to answer specific queries and solve pre-determined problems, but no comprehensive visualization tools exist for fusing these disparate data to examine relationships across spaces and sensing modalities. DoppelLab is a cross-reality virtual environment that represents the multimodal sensor data produced by a building and its inhabitants. Our system encompasses a set of tools for parsing, databasing, visualizing, and sonifying these data; by organizing data by the space from which they originate, DoppelLab provides a platform to make both broad and specific queries about the activities, systems, and relationships in a complex, sensor-rich environment.

Joe Paradiso, Gershon Dublon, Nicholas David Joliat and Brian Dean Mayton In DoppelLab, we are developing tools that intuitively and scalably represent the rich, multimodal sensor data produced by a building and its inhabitants. Our aims transcend the traditional graphical display, in terms of the richness of data conveyed and the immersiveness of the user experience.
To this end, we have incorporated 3D spatialized data sonification into the DoppelLab application, as well as in standalone installations. Currently, we virtually spatialize streams of audio recorded by nodes throughout the physical space. By reversing and shuffling short audio segments, we distill the sound to its ambient essence while protecting occupant privacy. In addition to the sampled audio, our work includes abstract data sonification that conveys multimodal sensor data.
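The reverse-and-shuffle privacy step can be sketched as follows; the segment length and seeded generator are illustrative choices, not parameters taken from the project:

```python
import random

def obfuscate_audio(samples, segment_len=2048, seed=None):
    """Split an audio stream into short segments, reverse each, and
    shuffle their order. The ambient texture (levels, spectral character)
    survives, but intelligible speech does not: a minimal sketch of the
    privacy-protection step described above."""
    rng = random.Random(seed)
    segments = [samples[i:i + segment_len]
                for i in range(0, len(samples), segment_len)]
    segments = [seg[::-1] for seg in segments]  # reverse each segment
    rng.shuffle(segments)                       # shuffle segment order
    return [s for seg in segments for s in seg]
```

The output has exactly the same samples as the input, just locally reversed and reordered, so loudness and timbre statistics are preserved for sonification.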

312. DoppelLab: Experiencing Multimodal Sensor Data

313. DoppelLab: Spatialized Sonification in a 3D Virtual Environment

314. Expressive Re-Performance

Joe Paradiso, Nick Gillian and Laurel Smith Pardue Expressive musical re-performance is about enabling a person to experience the creative aspects of playing a favorite song regardless of technical expertise. This is done by providing users with computer-linked electronic instruments that distill the instrument's interface while still allowing expressive gesture. The next note in an audio source is triggered on the instrument, with the computer providing correctly pitched audio and mapping the expressive content onto it. Thus the physicality of the instrument remains, but far less technique is required. We are implementing an expressive re-performance system using commercially available, expressive electronic musical instruments and an actual recording as the basis for deriving audio. Performers will be able to select a voice within the recording and re-perform the song, with the targeted line subject to their own creative and expressive impulse.

Joe Paradiso, Matthew Henry Aldrich and Nan Zhao At present, luminous efficacy and cost remain the greatest barriers to broad adoption of LED lighting. However, it is anticipated that within several years these challenges will be overcome. While we may think our basic lighting needs have been met, this technology offers many more opportunities than just energy efficiency: this research attempts to alter our expectations for lighting and cast aside our assumptions about control and performance. We will introduce new, low-cost sensing modalities that are attuned to human factors such as user context, circadian rhythms, or productivity, and integrate these data with atypical environmental factors to move beyond traditional lux measurements. To research and study these themes, we are focusing on the development of superior color-rendering systems, new power topologies for LED control, and low-cost multimodal sensor networks to monitor the lighting network as well as the environment.
Joe Paradiso and Amit Zoran The FreeD is a hand-held, digitally controlled milling device that is guided and monitored by a computer while still preserving the craftsperson's freedom to sculpt and carve. The computer intervenes only when the milling bit approaches the planned model, either by slowing down the spindle speed or by drawing back the shaft; the rest of the time it allows complete freedom, letting the user manipulate and shape the work in any creative way.

315. Feedback Controlled Solid State Lighting

316. FreeD

317. Gesture Recognition Toolkit

Joe Paradiso and Nick Gillian The Gesture Recognition Toolkit (GRT) is a cross-platform, open-source C++ machine-learning library that has been specifically designed for real-time gesture recognition. The GRT has been created as a general-purpose tool that allows programmers with little or no machine-learning experience to develop their own recognition systems with just a few lines of code. Further, the GRT is designed to enable machine-learning experts to precisely customize their own recognition systems, and to easily incorporate their own algorithms within the GRT framework. In addition to enabling developers to quickly create their own gesture-recognition systems, the machine-learning algorithms at the core of the GRT have been designed to be rapidly trained with a limited number of training examples for each gesture. The GRT therefore allows a more diverse group of users to easily integrate gesture recognition into their own projects.
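The few-examples-per-gesture idea can be illustrated with a template matcher: a nearest-neighbor classifier under dynamic time warping. This is a generic Python sketch of the technique, not the GRT's actual C++ API:

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D gesture traces,
    allowing the traces to stretch in time against each other."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

class NearestTemplateRecognizer:
    """Classify a gesture by its nearest training template under DTW.
    Works with only a handful of examples per gesture, in the spirit of
    the GRT's rapid-training design."""
    def __init__(self):
        self.templates = []  # list of (label, trace) pairs

    def add_example(self, label, trace):
        self.templates.append((label, trace))

    def predict(self, trace):
        return min(self.templates,
                   key=lambda t: dtw_distance(t[1], trace))[0]
```

A single demonstrated example per gesture is already enough for the classifier to separate, say, a monotone "swipe" trace from an oscillating "shake" trace, even when the query is noisy or time-stretched.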

318. Grassroots Mobile Power

Joe Paradiso, Ethan Zuckerman, Pragun Goyal and Nathan Matias We want to help people in nations where electric power is scarce sell power to their neighbors. We're designing a piece of prototype hardware that plugs into a diesel generator or other power source, distributes the power to multiple outlets, monitors how much power is used, and uses mobile payments to charge the customer for the power consumed.

Joe Paradiso and Gershon Dublon The tongue is known to have an extremely dense sensing resolution, as well as an extraordinary degree of neuroplasticity: the ability to adapt to and internalize new input. Research has shown that electro-tactile tongue displays paired with cameras can be used as vision prosthetics for the blind or visually impaired; users quickly learn to read and navigate through natural environments, and many describe the signals as an innate sense. However, existing displays are expensive and difficult to adapt. Tongueduino is an inexpensive, vinyl-cut tongue display designed to interface with many types of sensors besides cameras. Connected to a magnetometer, for example, the system provides a user with an internal sense of direction, like a migratory bird. Plugged into weighted piezo whiskers, a user can sense orientation, wind, and the lightest touch. Through tongueduino, we hope to bring electro-tactile sensory substitution beyond vision replacement, towards open-ended sensory augmentation.

Joseph A. Paradiso, Brian Mayton and Gershon Dublon In the Living Observatory installation at the Other Festival, we invite participants into a transductive encounter with a wetland environment in flux. Our installation brings sights, smells, sounds, and a bit of mud from a peat bog undergoing restoration near Plymouth, MA to the MIT Media Lab. As part of the Living Observatory initiative, we are developing sensor networks that document ecological processes and allow people to experience the data at different spatial and temporal scales.
Small, distributed sensor devices capture climate and other environmental data, while others stream audio from high in the trees and underwater. Visit at any time from dawn till dusk and again after midnight, and check the weather report on our website (tidmarsh.media.mit.edu) for highlights; if you're lucky you might just catch an April storm.

Joseph A. Paradiso, V. Michael Bove, Gershon Dublon, Edwina Portocarrero and Glorianna Davenport Extending the Living Observatory installation, we have instrumented the roots of several trees outside of E15 with vibratory transducers that excite the trees with live streaming sound from a forest near Plymouth, MA. Walking through the trees just outside the Lab, you won't notice anything, but press your ear up against one of them and you'll feel vibrations and hear sound from a tree 60 miles away. Visit at any time from dawn till dusk and again after midnight; if you're lucky you might just catch an April storm, a flock of birds, or an army of frogs. Alumni Contributors: Edwina Portocarrero and Gershon Dublon
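The Tongueduino magnetometer mapping described above can be sketched as a simple heading-to-electrode lookup; the 16-electrode ring layout is a hypothetical choice for illustration:

```python
import math

def heading_to_electrode(mag_x, mag_y, n_electrodes=16):
    """Map a 2-D magnetometer reading to one of n electrodes arranged in
    a ring on the tongue display, so that magnetic north is always felt
    in a consistent position as the wearer turns. (Hypothetical layout;
    the actual display geometry is not specified in the description.)"""
    heading = math.atan2(mag_y, mag_x) % (2 * math.pi)  # 0 .. 2*pi
    return int(heading / (2 * math.pi) * n_electrodes) % n_electrodes
```

As the wearer rotates, the active electrode sweeps around the ring, which is the kind of continuous, low-dimensional signal the tongue's neuroplasticity can internalize as a direction sense.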

319. Hackable, High-Bandwidth Sensory Augmentation

320. Living Observatory Installation: A Transductive Encounter with Ecology


NEW LISTING

321. Living Observatory: Arboreal Telepresence


NEW LISTING

322. Living Observatory: Sensor Networks for Documenting and Experiencing Ecology

Glorianna Davenport, Joe Paradiso, Gershon Dublon, Pragun Goyal and Brian Dean Mayton Living Observatory is an initiative for documenting and interpreting ecological change that will allow people, individually and collectively, to better understand relationships between ecological processes, human lifestyle choices, and climate change adaptation. As part of this initiative, we are developing sensor networks that document ecological processes and allow people to experience the data at different spatial and temporal scales. Low-power sensor nodes capture climate and other data at a high spatiotemporal resolution, while others stream audio. Sensors on
trees measure transpiration and other cycles, while fiber-optic cables in streams capture high-resolution temperature data. At the same time, we are developing tools that allow people to explore this data, both remotely and onsite. The remote interface allows for immersive 3D exploration of the terrain, while visitors to the site will be able to access data from the network around them directly from wearable devices.

323. PrintSense: A Versatile Sensing Technique to Support Flexible Surface Interaction


NEW LISTING

Joseph A. Paradiso and Nan-wei Gong Touch sensing has become established for a range of devices and systems, both commercially and in academia. In particular, multi-touch scenarios based on flexible sensing substrates are popular for products and research projects. We leverage recent developments in single-layer, off-the-shelf, inkjet-printed conductors on flexible substrates as a practical way to prototype the necessary electrode patterns, and combine this with our custom-designed PrintSense hardware module, which uses the full range of sensing techniques. We support not only touch detection, but in many scenarios also pressure, flexing, and close-proximity gestures.

Joseph A. Paradiso and Gershon Dublon Sensor networks permeate our built and natural environments, but our means for interfacing to the resultant data streams have not evolved much beyond HCI and information visualization. Researchers have long experimented with wearable sensors and actuators on the body as assistive devices. A user's neuroplasticity can, under certain conditions, transcend sensory substitution to enable perceptual-level cognition of extrasensory stimuli delivered through existing sensory channels. But there remains a huge gap between data and human sensory experience. We are exploring the space between sensor networks and human augmentation, in which distributed sensors become sensory prostheses. In contrast, user interfaces are substantially unincorporated by the body; our relationship to them is never fully pre-attentive. Attention and proprioception are key, not only to moderate and direct stimuli, but also to enable users to move through the world naturally, attending to the sensory modalities relevant to their specific contexts.

Joe Paradiso and Nick Gillian Rapidnition is a new way of thinking about gesturally controlled interfaces.
Rather than forcing users to adapt their behavior to a predefined gestural interface, Rapidnition frees users to define their own gestures, which the system rapidly learns. The machine learning algorithms at the core of Rapidnition enable it to quickly infer a user's gestural vocabulary, using a small number of user-demonstrated examples of each gesture. Rapidnition is capable of recognizing not just static postures but also dynamic temporal gestures. In addition, Rapidnition allows the user to define complex, nonlinear, continuous mapping spaces. Rapidnition is currently being applied to the real-time recognition of musical gestures, to rigorously test both the discrete and continuous recognition abilities of the system.

324. Prosthetic Sensor Networks: Factoring Attention, Proprioception, and Sensory Coding
NEW LISTING

325. Rapidnition: Rapid User-Customizable Gesture Recognition

326. RElight: Exploring Pointing and Other Gestures for Appliance Control
NEW LISTING

Joseph A. Paradiso, Brian Mayton, Nan Zhao and Nicholas Gillian Increasing numbers of networked appliances are bringing about new opportunities for control and automation. At the same time, an increase in multifunctional appliances is creating a complex and often frustrating environment for the end user. Motivated by these opportunities and challenges, we are exploring the potential for sensor fusion to increase usability and improve user experience while retaining the user in the control loop. We have developed a novel, camera-less, multi-sensor solution for intuitive gesture-based indoor lighting control, called RElight. Using a wireless handheld device, the user simply points at a light fixture to select it and rotates his hand to continuously configure the dimming level. Pointing is a universal gesture that communicates one's interest in or attention to an object. Advanced machine learning algorithms allow rapid training of gestures and continuous control that supplements gesture classification.

Joe Paradiso, Nan-Wei Gong and Steve Hodges (Microsoft Research Cambridge) We demonstrate the design and implementation of a new versatile, scalable, and cost-effective sensate surface. The system is based on a new conductive inkjet technology, which allows capacitive sensor electrodes and different types of RF antennas to be cheaply printed onto a roll of flexible substrate that may be many meters long. By deploying this surface on (or under) a floor, it is possible to detect the presence and whereabouts of users through both passive and active capacitive coupling schemes. We have also incorporated GSM and NFC electromagnetic radiation sensing, and piezoelectric pressure and vibration detection. We believe this technology has the potential to change the way we think about covering large areas with sensors and associated electronic circuitry: not just floors, but potentially desktops, walls, and beyond.

Joseph A. Paradiso, Carolina Brum Medeiros and Michael Lapinski Current sports-medicine practices for understanding the motion of athletes while engaged in their sport of choice are limited to camera-based marker tracking systems that generally lack the fidelity and sampling rates necessary to make medically usable measurements; they also typically require a structured, stable "studio" environment, and need considerable time to set up and calibrate. The data from our system provides the ability to understand the forces and torques that an athlete's joints and body segments undergo during activity. It also allows for precise biomechanical modeling of an athlete's motion. The application of sensor fusion techniques is essential for optimal extraction of kinetic and kinematic information, and provides an alternative measurement method that can be used in out-of-lab scenarios.

Joseph A. Paradiso, Leah Buechley, Jie Qi and Nan-wei Gong A toolkit for creating electronics using circuit board stickers. Circuit stickers are created by printing traces on flexible substrates and adding conductive adhesive. These lightweight, flexible, and sticky circuit boards allow us to begin sticking interactivity onto new spaces and interfaces, such as clothing, instruments, buildings, and even our bodies.

Joe Paradiso, Gershon Dublon and Brian Dean Mayton We are developing a system for inferring safety context on construction sites by fusing data from wearable devices, distributed sensing infrastructure, and video. Wearable sensors stream real-time levels of dangerous gases, dust, noise, light quality, precise altitude, and motion to base stations that synchronize the mobile devices, monitor the environment, and capture video. Context mined from these data is used to highlight salient elements in the video stream for monitoring and decision support in a control room. We tested our system in an initial user study on a

327. Scalable and Versatile Surface for Ubiquitous Sensing

328. Sensor Fusion for Gesture Analyses of Baseball Pitch


NEW LISTING

329. Sticky Circuits


NEW LISTING

330. TRUSS: Tracking Risk with Ubiquitous Smart Sensing

construction site, instrumenting a small number of steel workers and collecting data. A recently completed hardware revision will be followed by further user testing and interface development.

331. Virtual Messenger

Joe Paradiso and Nick Gillian The virtual messenger system acts as a portal to subtly communicate messages and pass information between the digital, virtual, and physical worlds, using the Media Lab's Glass Infrastructure system. Users who opt into the system are tracked throughout the Media Lab by a multimodal sensor network. When a participating user approaches any of the Lab's Glass Infrastructure displays, they are met by their virtual personal assistant (VPA), who exists in DoppelLab's virtual representation of the current physical space. Each VPA acts as a mediator to pass on any messages or important information from the digital world to the user in the physical world. Participating users can interact with their VPA through a small subset of hand gestures, allowing the user to read any pending messages or notices, or to inform their virtual avatar not to bother them until later.

332. Wearable, Wireless Sensor System for Sports Medicine and Interactive Media

Joe Paradiso, Michael Thomas Lapinski, Dr. Eric Berkson and MGH Sports Medicine This project is a system of compact, wearable, wireless sensor nodes, equipped with full six-degree-of-freedom inertial measurement units and node-to-node capacitive proximity sensing. A high-bandwidth, channel-shared RF protocol has been developed to acquire data from many (e.g., 25) of these sensors at 100 Hz full-state update rates, and software is being developed to fuse this data into a compact set of descriptive parameters in real time. A base station and central computer clock the network and process received data. We aim to capture and analyze the physical movements of multiple people in real time, using unobtrusive sensors worn on the body. Applications abound in biomotion analysis, sports medicine, health monitoring, interactive exercise, immersive gaming, and interactive dance ensemble performance. Alumni Contributors: Ryan Aylward and Mathew Laibowitz
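A standard first step in fusing such inertial data is a complementary filter, which blends drift-free but noisy accelerometer tilt estimates with smooth but drifting gyroscope integration. This minimal single-axis sketch stands in for the richer six-degree-of-freedom fusion the project performs; the 100 Hz rate matches the update rate quoted above, while alpha is an illustrative tuning choice:

```python
def complementary_filter(accel_angles, gyro_rates, dt=0.01, alpha=0.98):
    """Fuse accelerometer-derived tilt angles (noisy, drift-free) with
    integrated gyroscope rates (smooth, but drifting) into one
    orientation estimate per sample, at dt=0.01 s (100 Hz)."""
    angle = accel_angles[0]  # initialize from the accelerometer
    estimates = []
    for acc, rate in zip(accel_angles, gyro_rates):
        # trust the integrated gyro short-term, the accelerometer long-term
        angle = alpha * (angle + rate * dt) + (1.0 - alpha) * acc
        estimates.append(angle)
    return estimates
```

The accelerometer term anchors the estimate: a constant gyro bias produces only a small bounded offset instead of unbounded drift, which is why even this simple fusion outperforms integrating the gyroscope alone.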

333. WristQue: A Personal Wristband for Sensing and Smart Infrastructure

Joe Paradiso and Brian Mayton While many wearable sensors have been developed, few are actually worn by people on a regular basis. WristQue is a wristband sensor that is comfortable and customizable to encourage widespread adoption. The hardware is 3D printable, giving users a choice of materials and colors. Internally, the wristband will include a main board with microprocessor, standard sensors, and localization/wireless communication, and an additional expansion board that can be replaced to customize functionality of the device for a wide variety of applications. Environmental sensors (temperature, humidity, light) combined with fine-grained indoor localization will enable smarter building infrastructure, allowing HVAC and lighting systems to optimize to the locations and ways that people are actually using the space. Users' preferences can be input through buttons on the wristband. Fine-grained localization also opens up possibilities for larger applications, such as visualizing building usage through DoppelLab and smart displays that react to users' presence.

Alex 'Sandy' Pentland: Human Dynamics


How social networks can influence our lives in business, health, and governance, as well as technology adoption and diffusion.

334. Belief Dynamics


NEW LISTING

Alex 'Sandy' Pentland, Peter Krafft and Ankur Mani The current political system in the United States is paralyzed by polarization. On numerous occasions in recent years, we have come to the edge of a fiscal cliff because of our Senate's inability to reach compromise. This polarization in Congress is a reflection of the polarization of the country as a whole. We are interested in understanding situations in which this type of polarization can occur or is likely to occur. In this work, we apply ideas from game theory and social network analysis to address this question.

Ankur Mani and Alex 'Sandy' Pentland How different are the characteristics of societies that are constrained to local interactions in networks, compared to societies where all interactions happen in organized markets? Among most species, and even in modern human societies, exchange, whether of food, information, or labor, naturally tends to occur locally, as encounters happen between nearby individuals in networks. We study how these local exchanges govern the large-scale properties of networked societies (stability, welfare, dynamics, and fairness) and how we can use peer pressure to improve social welfare. Today we have easy access to big data about social and economic interactions. To use this new resource for the betterment of society, we identify the properties of stable exchanges in networked societies, build tools for computing the structure of stable and fair networked societies, and predict how they may respond to policy changes.

Alex (Sandy) Pentland, Yaniv Altshuler, Katherine Krumme and Wei Pan We are using credit card transaction data and FOREX trading data to look at patterns of human behavior change over time and space, and how these patterns change with social influence and with macroeconomic features. To what extent do network features help to predict economic ones?
Alex (Sandy) Pentland, Nadav Aharony, Wei Pan, Cody Sumter and Alan Gardner The Funf open sensing framework is an Android-based extensible framework for phone-based mobile sensing. The core concept is to provide a reusable set of functionalities enabling collection, uploading, and configuration for a wide range of data types. Funf Journal is an Android application for researchers, self-trackers, and anyone interested in collecting and exploring information related to the mobile device, its environment, and its user's behavior. It is built using the Funf framework and makes use of many of its built-in features.

335. Bilateral Exchanges in Social Networks


NEW LISTING

336. Economic Decision-Making in the Wild

337. Funf: Open Sensing Framework

338. Inducing Peer Pressure to Promote Cooperation


NEW LISTING

Ankur Mani, Iyad Rahwan, and Alex 'Sandy' Pentland Cooperation in a large society of self-interested individuals is notoriously difficult to achieve when the externality of one individual's action is spread thin and wide over the whole society. This leads to the tragedy of the commons, in which rational action will ultimately make everyone worse off. Traditional policies to promote cooperation involve Pigouvian taxation or subsidies that make individuals internalize the externality they incur. We introduce a new approach to achieving global cooperation by localizing externalities to one's peers in a social network, thus
leveraging the power of peer pressure to regulate behavior. The mechanism relies on a joint model of externalities and peer pressure. Surprisingly, this mechanism can require a lower budget to operate than the Pigouvian mechanism, even when accounting for the social cost of peer pressure. Even when the available budget is very low, the social mechanism achieves a greater improvement in the outcome.

339. Mobile Territorial Lab

NEW LISTING

Alex 'Sandy' Pentland and Bruno Lepri The Mobile Territorial Lab (MTL) aims at creating a living laboratory, integrated into the real life of the Trento (Italy) territory, open to manifold kinds of experimentation. In particular, the Lab focuses on exploiting the sensing capabilities of mobile phones to track and understand human behaviors (e.g., families' spending behaviors, lifestyles, mood, and stress patterns), on designing and testing social strategies aimed at empowering individual and collective lifestyles through attitude and behavior change, and on investigating new paradigms in personal data management and sharing. MTL was created by the Human Dynamics group, Telecom Italia SKIL Lab, Foundation Bruno Kessler, and Telefonica I+D.

340. openPDS: A Privacy-Preserving Personal Data Store

Alex (Sandy) Pentland, Brian Sweatt, Henrik Sandell, Jeffrey Schmitz, John Clippinger and Yves-Alexandre de Montjoye With their built-in sensors, smart phones are at the forefront of personal data collection. However, personal data currently tends to be monopolized and siloed, preventing companies from building innovative data-driven services. While there is substantial work on privacy and fair use of personal data, a pragmatic technical solution has yet to be realized. openPDS is a privacy-preserving implementation of an information repository which allows users to collect, store, and give access to their data. Via an innovative framework for third-party applications, the system ensures that sensitive data processing takes place within the user's PDS, as opposed to on a third-party server. The framework allows PDSs to engage in privacy-preserving group computation, which is used as a replacement for centralized aggregation.

341. Predicting Individual Behavior Using Network Interaction Data

NEW LISTING

Alex 'Sandy' Pentland, Dhaval Adjodah, Erez Shmueli and Vivek Singh If we are to enact better policy, fight crime, and decrease poverty, we will need better computational models of how society works. To make computational social science a useful reality, we will need models and theories of how social influence sprouts at the individual level and how it leads to emergent social behavior. In this project, we take steps toward understanding the motivators and conduits of social influence by analyzing real-life data, and we use our findings to create a high-accuracy prediction model of individuals' future behavior. In addition to explaining and demonstrating the causes of social influence in unprecedented detail using network analysis and machine learning, this project also investigates the policy ramifications of providing the social and algorithmic capabilities to change behavior at the individual level. Alumni Contributors: Cody Sumter, Cory Ip, Nadav Aharony, Wei Pan and Wen Dong

342. Sensible Organizations

Alex (Sandy) Pentland, Benjamin Waber and Daniel Olguin Olguin Data mining of email has provided important insights into how organizations function and what management practices lead to greater productivity. But important communications are almost always face-to-face, so we are missing the greater part of the picture. Today, however, people carry cell phones and wear RFID badges. These body-worn sensor networks mean that we can potentially know who talks to whom, and even how they talk to each other. Sensible Organizations investigates how these new technologies for sensing human interaction can be used to reinvent organizations and management.


343. The Privacy Bounds of Human Mobility


NEW LISTING

Cesar A. Hidalgo and Yves-Alexandre de Montjoye We used 15 months of data from 1.5 million people to show that four points (approximate places and times) are enough to identify 95% of individuals in a mobility database. Our work shows that human behavior places fundamental natural constraints on the privacy of individuals, and that these constraints hold even when the resolution of the dataset is low; even coarse datasets provide little anonymity. We further developed a formula to estimate the uniqueness of human mobility traces. These findings have important implications for the design of frameworks and institutions dedicated to protecting the privacy of individuals.
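The uniqueness ("unicity") of mobility traces can be estimated empirically: draw a few spatiotemporal points from one user's trace and count how often those points match that user alone. Below is a minimal sketch, assuming a toy data format of (place, hour) tuples; the `unicity` function and dataset are hypothetical illustrations, not the project's published formula.

```python
import random

def unicity(traces, num_points=4, trials=200, seed=0):
    """Estimate the fraction of users uniquely identified by `num_points`
    spatiotemporal points drawn from their own trace. Hypothetical data
    format: each trace is a set of (place_id, hour) tuples."""
    rng = random.Random(seed)
    users = list(traces)
    unique = 0
    for _ in range(trials):
        user = rng.choice(users)
        pts = rng.sample(sorted(traces[user]), min(num_points, len(traces[user])))
        # Any user whose trace contains all the drawn points is a match.
        matches = [u for u in users if set(pts) <= traces[u]]
        if matches == [user]:
            unique += 1
    return unique / trials

# Toy dataset: three users with mostly overlapping but not identical traces.
traces = {
    "a": {(1, 9), (2, 10), (3, 11), (4, 12)},
    "b": {(1, 9), (2, 10), (3, 11), (5, 12)},
    "c": {(1, 9), (2, 10), (6, 11), (4, 13)},
}
print(unicity(traces, num_points=4))  # 1.0: four points single out every user
```

Even this toy example shows the effect the project measured at scale: a handful of points is often enough to single out an individual, while a single shared point (e.g., a popular antenna at rush hour) is not.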

Rosalind W. Picard: Affective Computing


How new technologies can help people better communicate, understand, and respond to affective information.

344. Analysis and Visualization of Longitudinal Physiological Data of Children with ASD

NEW LISTING

Rosalind W. Picard, Akane Sano, Javier Hernandez Rivera, Jean Deprey, Matthew Goodwin and Miriam Zisook Individuals diagnosed with Autism Spectrum Disorder (ASD) who have written about their experiences almost always describe immense stress and anxiety. Traditional methods of measuring these responses consist of monitoring the Autonomic Nervous System (ANS) of participants who behave compliantly in artificial laboratory settings. To the best of our knowledge, this study is the first to conduct long-term monitoring and analysis of ANS data in daily school activity settings with minimally verbal individuals on the autism spectrum. ANS data obtained under natural circumstances can be very useful for providing warning indications of stress-related and life-threatening events.

345. Auditory Desensitization Games

Rosalind W. Picard, Matthew Goodwin and Rob Morris Persons on the autism spectrum often report hypersensitivity to sound. Efforts have been made to manage this condition, but there is wide room for improvement. One approach, exposure therapy, has promise, and a recent study showed that it helped several individuals diagnosed with autism overcome their sound sensitivities. In this project, we borrow principles from exposure therapy and use fun, engaging games to help individuals gradually get used to sounds that they might ordinarily find frightening or painful.

346. Automatic Stress Recognition in Real-Life Settings

Rosalind W. Picard, Robert Randall Morris and Javier Hernandez Rivera Technologies that automatically recognize stress are extremely important for preventing chronic psychological stress and the pathophysiological risks associated with it. The introduction of comfortable, wearable biosensors has created new opportunities to measure stress in real-life environments, but there is often great variability in how people experience stress and how they express it physiologically. In this project, we modify the loss function of Support Vector Machines to encode a person's tendency to feel more or less stressed, and to give more importance to the training samples of the most similar subjects. These changes are validated in a case study in which skin conductance was monitored in nine call center employees during one week of their regular work. Employees in this type of setting usually handle high volumes of calls every day, and they frequently interact with angry and frustrated customers, which leads to high stress levels.
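The per-person weighting idea behind this project (emphasizing training samples from similar subjects in the SVM loss) can be sketched with a linear SVM whose hinge-loss terms carry per-sample weights. This is a simplified stand-in trained by subgradient descent, not the authors' exact formulation; the toy skin-conductance features and weights are hypothetical.

```python
# Minimal sketch (not the project's exact loss): a linear SVM where each
# hinge term is weighted per training sample, so data from subjects
# physiologically similar to the target person counts more.

def weighted_svm_train(xs, ys, weights, lam=0.01, lr=0.1, epochs=200):
    """Minimize lam*w^2 + sum_i weights[i] * max(0, 1 - y_i*(w*x_i + b))."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        gw, gb = 2 * lam * w, 0.0
        for x, y, c in zip(xs, ys, weights):
            if y * (w * x + b) < 1:      # margin violated: hinge is active
                gw -= c * y * x
                gb -= c * y
        w -= lr * gw
        b -= lr * gb
    return w, b

# Toy 1D features (skin conductance level); +1 = stressed, -1 = calm.
# Samples from "similar" subjects are given double weight.
xs = [0.2, 0.4, 1.6, 1.8]
ys = [-1, -1, 1, 1]
weights = [2.0, 1.0, 1.0, 2.0]
w, b = weighted_svm_train(xs, ys, weights)
print("stressed" if w * 1.7 + b > 0 else "calm")
```

With library support the same idea is usually expressed through a per-sample weight argument to an off-the-shelf SVM fit routine rather than a hand-rolled solver.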


347. Cardiocam

Ming-Zher Poh, Daniel McDuff and Rosalind W. Picard Cardiocam is a low-cost, non-contact technology for measurement of physiological signals such as heart rate and breathing rate using a basic digital imaging device such as a webcam. The ability to perform remote measurements of vital signs is promising for enhancing the delivery of primary health care.
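Remote measurement of this kind recovers pulse from tiny periodic color changes in skin pixels across video frames. Below is a minimal sketch that scans a mean green-channel trace for its dominant frequency in the plausible heart-rate band; the synthetic signal and the `dominant_bpm` helper are illustrative assumptions only, not Cardiocam's actual pipeline (which separates source signals across the color channels).

```python
import math

def dominant_bpm(samples, fps, lo=40, hi=180):
    """Return the dominant frequency (beats per minute) of a mean
    green-channel signal by scanning the plausible heart-rate band with a
    direct DFT. Hypothetical input: one brightness sample per video frame."""
    n = len(samples)
    mean = sum(samples) / n
    xs = [s - mean for s in samples]          # remove the DC component
    best_bpm, best_power = lo, 0.0
    for bpm in range(lo, hi + 1):
        f = bpm / 60.0                        # candidate frequency in Hz
        re = sum(x * math.cos(2 * math.pi * f * i / fps) for i, x in enumerate(xs))
        im = sum(x * math.sin(2 * math.pi * f * i / fps) for i, x in enumerate(xs))
        power = re * re + im * im
        if power > best_power:
            best_bpm, best_power = bpm, power
    return best_bpm

# Synthetic 10-second clip at 30 fps with a 72 bpm (1.2 Hz) pulse component.
fps, bpm_true = 30, 72
signal = [0.5 + 0.05 * math.sin(2 * math.pi * (bpm_true / 60.0) * i / fps)
          for i in range(10 * fps)]
print(dominant_bpm(signal, fps))
```

Restricting the search to the physiological band (here 40 to 180 bpm) is what keeps slow lighting drift and high-frequency sensor noise from masquerading as a pulse.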

348. Exploring Temporal Patterns of Smile

Rosalind W. Picard and Mohammed Ehsanul Hoque A smile is a multi-purpose expression. We smile to express rapport, polite disagreement, delight, sarcasm, and often even frustration. Is it possible to develop computational models to distinguish among smiling instances when delighted, frustrated, or just being polite? In our ongoing work, we demonstrate that it is useful to explore how the patterns of a smile evolve through time, and that while a smile may occur in both positive and negative situations, its dynamics may help to disambiguate the underlying state.

349. Facial Expression Analysis Over the Web

Rosalind W. Picard, Rana el Kaliouby, Daniel Jonathan McDuff, Affectiva and Forbes This work builds on our earlier work with FaceSense, created to help automate the understanding of facial expressions, both cognitive and affective. The FaceSense system has now been made available commercially by Media Lab spin-off Affectiva as Affdex. In this work we present the first project analyzing facial expressions at scale over the Internet. The interface analyzes participants' smile intensity as they watch popular commercials, and they can compare their responses to an aggregate from the larger population. The system also allows us to crowd-source data for training expression recognition systems and to gain a better understanding of facial expressions under natural at-home viewing conditions instead of in traditional lab settings.

350. FEEL: A Cloud System for Frequent Event and Biophysiological Signal Labeling

NEW LISTING

Yadid Ayzenberg and Rosalind Picard The wide availability of low-cost, wearable biophysiological sensors enables us to measure how the environment and our experiences impact our physiology. This creates a new challenge: in order to interpret the collected longitudinal data, we require the matching contextual information as well. Collecting weeks, months, and years of continuous biophysiological data makes it unfeasible to rely solely on our memory for providing the contextual information. Many view maintaining journals as burdensome, which may result in low compliance levels and unusable data. If we are to learn the effects of the environment and of our day-to-day actions and choices on our physiology, it would be invaluable to develop systems that label biophysiological sensor data with contextual information. We present an architecture and implementation of a system for the acquisition, processing, and visualization of biophysiological signals and contextual information.

351. Gesture Guitar

Rosalind W. Picard, Rob Morris and Tod Machover Emotions are often conveyed through gesture. Instruments that respond to gestures offer musicians new, exciting modes of musical expression. This project gives musicians wireless, gesture-based control over guitar effects parameters.

352. IDA: Inexpensive Networked Digital Stethoscope

Yadid Ayzenberg Complex and expensive medical devices are mainly used in medical facilities by health professionals. IDA is an attempt to disrupt this paradigm and introduce a new type of device: easy to use, low cost, and open source. It is a digital stethoscope that can be connected to the Internet for streaming the physiological data to remote clinicians. Designed to be fabricated anywhere in the world with minimal equipment, it can be operated by individuals without medical training.


353. Inside-Out: Reflecting on your Inner State

NEW LISTING

Richard R. Fletcher, Rosalind W. Picard, Daniel Jonathan McDuff and Javier Hernandez Rivera We present a novel sensor system and interface that enables individuals to capture and reflect on their daily activities. The wearable system gathers both physiological responses and visual context through the use of a wearable biosensor and a cell-phone camera, respectively. Collected information is locally stored and securely transmitted to a novel digital mirror. Through interactive visualizations, this interface allows users to reflect not only on their outer appearance but also on their inner physiological responses to daily activities. Finally, we illustrate how combining a time record of physiological data with visual contextual information can improve and enhance the experience of reflection in many real-life scenarios, and serve as a useful tool for behavioral science and therapy.

354. Long-Term Physio and Behavioral Data Analysis

Akane Sano and Rosalind Picard Can we recognize stress, mood, and health condition from wearable sensors or mobile phone usage data? We analyze long-term, multi-modal physiological and behavioral data (electrodermal activity, skin temperature, accelerometer readings, and how often you use your mobile phone or make calls and send texts) during day and night with wearable sensors and mobile phones to extract bio-markers related to health conditions, interpret inter-individual differences, and develop systems to keep people healthy.

355. Measuring Arousal During Therapy for Children with Autism and ADHD

Rosalind W. Picard and Elliott Hedman Physiological arousal is an important part of occupational therapy for children with autism and ADHD, but therapists do not have a way to objectively measure how therapy affects arousal. We hypothesize that when children participate in guided activities within an occupational therapy setting, informative changes in electrodermal activity (EDA) can be detected using iCalm, a small, wireless sensor that measures EDA and motion, worn on the wrist or above the ankle. Statistical analysis describing how equipment affects EDA was inconclusive, suggesting that many factors play a role in how a child's EDA changes. Case studies provided examples of how occupational therapy affected children's EDA. This is the first study of the effects of occupational therapy's in situ activities using continuous physiological measures. The results suggest that careful case-study analyses of the relation between therapeutic activities and physiological arousal may inform clinical practice.

356. Measuring Customer Experiences with Arousal

Rosalind W. Picard and Elliott Hedman How can we better understand people's emotional experiences with a product or service? Traditional interview methods require people to remember their emotional state, which is difficult. We use psychophysiological measurements such as heart rate and skin conductance to map people's emotional changes across time. We then interview people about the times when their emotions changed, in order to gain insight into the experiences that corresponded with those changes. This method has been used to generate hundreds of insights with a variety of products, including games, interfaces, therapeutic activities, and self-driving cars.

357. Mobile Health Interventions for Drug Addiction and PTSD

Rich Fletcher and Rosalind Picard We are developing a mobile phone-based platform to assist people with chronic diseases, panic-anxiety disorders, or addictions. Making use of wearable, wireless biosensors, the mobile phone uses pattern analysis and machine learning algorithms to detect specific physiological states and perform automatic interventions in the form of text/images plus sound files and social networking elements. We are currently working with the Veterans Administration drug rehabilitation program involving veterans with PTSD.


358. Multimodal Computational Behavior Analysis

David Forsyth (UIUC), Gregory Abowd (GA Tech), Jim Rehg (GA Tech), Shri Narayanan (USC), Rana el Kaliouby, Matthew Goodwin, Rosalind W. Picard, Javier Hernandez Rivera, Stan Sclaroff (BU) and Takeo Kanade (CMU) This project will define and explore a new research area we call Computational Behavior Science: integrated technologies for multimodal computational sensing and modeling to capture, measure, analyze, and understand human behaviors. Our motivating goal is to revolutionize the diagnosis and treatment of behavioral and developmental disorders. Our thesis is that emerging sensing and interpretation capabilities in vision, audition, and wearable computing technologies, when further developed and properly integrated, will transform this vision into reality. More specifically, we hope to: (1) enable widespread autism screening by allowing non-experts to easily collect high-quality behavioral data and perform an initial assessment of risk status; (2) improve behavioral therapy through increased availability and improved quality, by making it easier to track the progress of an intervention and follow guidelines for maximizing learning progress; and (3) enable longitudinal analysis of a child's development based on quantitative behavioral data, using new tools for visualization.

359. Panoply

Rosalind W. Picard and Robert Morris In the next year, roughly 26 million Americans will suffer from depression. Many more will meet the clinical diagnosis for an anxiety disorder. While psychotherapies like cognitive-behavioral therapy are known to be effective for these conditions, the demand for these treatments exceeds the resources available. There are simply not enough clinicians available. Access is also limited by cost, stigma, and the logistics of scheduling and traveling to appointments. What if we could crowdsource this problem? Panoply is a crowd-based platform for mental health and emotional well-being. In lieu of clinician oversight, Panoply coordinates therapeutic support from anonymous online workers who are trained on demand. The system utilizes advances in collective intelligence and crowdsourcing to ensure that feedback is timely and vetted for quality.

360. Smart Phone Frequent EDA Event Logger

Yadid Ayzenberg and Rosalind Picard Have you ever wondered which emails, phone calls, or meetings cause you the most stress or anxiety? Now you can find out. A wristband sensor measures electrodermal activity (EDA), which responds to stress, anxiety, and arousal. Each time you read an email, place a call, or hold a meeting, your phone measures your EDA levels by connecting to the sensor via Bluetooth. The goal is to design a tool that enables users to attribute levels of stress and anxiety to particular events. FEEL allows users to view all of their events and the EDA levels associated with them: users can see which event caused a higher level of anxiety and stress, view which part of an event caused the greatest reaction, and view EDA levels in real time.

361. Social + Sleep + Moods

Akane Sano and Rosalind Picard Sleep is critical to a wide range of biological functions; inadequate sleep results in impaired cognitive performance and mood, and adverse health outcomes including obesity, diabetes, and cardiovascular disease. Recent studies have shown that healthy and unhealthy sleep behaviors can be transmitted by social interactions between individuals within social networks. We investigate how social connectivity and light exposure influence sleep patterns, and how those patterns in turn affect health and performance. Using multimodal data collected from closely connected MIT undergraduates with wearable sensors and mobile phones, we will develop statistical and multi-scale mathematical models of sleep dynamics within social networks based on sleep and circadian physiology. These models will provide insights into the emergent dynamics of sleep behaviors within social networks, and allow us to test the effects of candidate strategies for intervening in populations with unhealthy sleep behaviors.


362. StoryScape

Rosalind W. Picard and Micah Eckhardt StoryScape is a social illustrated primer. The StoryScape platform is being developed to allow easy creation of highly interactive and customizable stories. In addition, the platform will allow a community of content creators to easily share, collaborate on, and remix each other's works. Experimental goals of StoryScape include its use with children diagnosed with autism who are minimally verbal or non-verbal. We seek to test our interaction paradigm and personalization features to determine whether multi-modal, interactive, and customizable stories influence language acquisition and expression.

363. The Frustration of Learning Monopoly

Rosalind W. Picard and Elliott Hedman We are looking at the emotional experience created when children learn games. Why do we start games with the most boring part, reading the directions? How can we create a product that does not create an abundance of work for parents? Key insights generated from field work, interviews, and measurement of electrodermal activity are: kids become bored listening to directions ("it's like going to school"); parents feel rushed reading directions as they sense their children's boredom; children and parents struggle for power in interpreting and enforcing rules; children learn games by mimicking their parents; and children enjoy the challenge of learning new games.

Ramesh Raskar: Camera Culture


How to create new ways to capture and share visual information.

364. 6D Display

Ramesh Raskar, Martin Fuchs, Hans-Peter Seidel, and Hendrik P. A. Lensch Is it possible to create passive displays that respond to changes in viewpoint and incident light conditions? Holograms and 4D displays respond to changes in viewpoint. 6D displays respond to changes in viewpoint as well as surrounding light. We encode the 6D reflectance field into an ordinary 2D film. These displays are completely passive and do not require any power. Applications include novel instruction manuals and mood lights.

365. Bokode: Imperceptible Visual Tags for Camera-Based Interaction from a Distance

Ramesh Raskar, Ankit Mohan, Grace Woo, Shinsaku Hiura and Quinn Smithwick With over a billion people carrying camera-phones worldwide, we have a new opportunity to upgrade the classic bar code to encourage a flexible interface between the machine world and the human world. Current bar codes must be read within a short range, and the codes occupy valuable space on products. We present a new, low-cost, passive optical design so that bar codes can be shrunk to fewer than 3 mm and read by unmodified ordinary cameras several meters away.

366. CATRA: Mapping of Cataract Opacities Through an Interactive Approach

Ramesh Raskar, Vitor Pamplona, Erick Passos, Jan Zizka, Jason Boggess, David Schafran, Manuel M. Oliveira, Everett Lawson and Esteban Clua We introduce a novel interactive method to assess cataracts in the human eye by crafting an optical solution that measures the perceptual impact of forward scattering on the foveal region. Current solutions rely on highly trained clinicians to check the back scattering in the crystalline lens and to test their predictions with visual acuity tests. Close-range parallax barriers create collimated beams of light that scan through sub-apertures, scattering light as it strikes a cataract. User feedback generates maps for opacity, attenuation, contrast, and local point-spread functions. The goal is to allow a general audience to operate a portable, high-contrast, light-field display to gain a meaningful understanding of their own visual conditions. The compiled maps are used to reconstruct the cataract-affected view of an individual, offering a unique approach for capturing information for screening, diagnostic, and clinical analysis.

367. Coded Computational Photography

Jaewon Kim, Ahmed Kirmani, Ankit Mohan and Ramesh Raskar Computational photography is an emerging multi-disciplinary field at the intersection of optics, signal processing, computer graphics and vision, electronics, art, and online sharing in social networks. The first phase of computational photography was about building a super-camera with enhanced performance in terms of the traditional parameters, such as dynamic range, field of view, or depth of field. We call this 'Epsilon Photography.' The next phase of computational photography is building tools that go beyond the capabilities of this super-camera. We call this 'Coded Photography.' We can code exposure, aperture, motion, wavelength, and illumination. By blocking light over time or space, we can preserve more details about the scene in a single recorded photograph.

368. Coded Focal Stack Photography

NEW LISTING

Ramesh Raskar, Gordon Wetzstein, Xing Lin and Tsinghua University We present coded focal stack photography as a computational photography paradigm that combines a focal sweep and a coded sensor readout with novel computational algorithms. We demonstrate various applications of coded focal stacks, including photography with programmable non-planar focal surfaces and multiplexed focal stack acquisition. By leveraging sparse coding techniques, coded focal stacks can also be used to recover a full-resolution depth and all-in-focus (AIF) image from a single photograph. Coded focal stack photography is a significant step towards a computational camera architecture that facilitates high-resolution post-capture refocusing, flexible depth of field, and 3D imaging.

369. Compressive Light Field Camera: Next Generation in 3D Photography

Kshitij Marwah, Gordon Wetzstein, Yosuke Bando and Ramesh Raskar Consumer photography is undergoing a paradigm shift with the development of light field cameras. Commercial products such as those by Lytro and Raytrix have begun to appear in the marketplace with features such as post-capture refocus, 3D capture, and viewpoint changes. These cameras suffer from two major drawbacks: a major drop in resolution (converting a 20 MP sensor to a 1 MP image) and a large form factor. We have developed a new light field camera that circumvents traditional resolution losses (a 20 MP sensor yields a full-sensor-resolution refocused image) in a thin form factor that can fit into traditional DSLRs and mobile phones.

370. Layered 3D: Glasses-Free 3D Printing

Gordon Wetzstein, Douglas Lanman, Matthew Hirsch, Wolfgang Heidrich and Ramesh Raskar We develop tomographic techniques for image synthesis on displays composed of compact volumes of light-attenuating material. Such volumetric attenuators recreate a 4D light field or high-contrast 2D image when illuminated by a uniform backlight. Since arbitrary views may be inconsistent with any single attenuator, iterative tomographic reconstruction minimizes the difference between the emitted and target light fields, subject to physical constraints on attenuation. For 3D displays, spatial resolution, depth of field, and brightness are increased compared to parallax barriers. We conclude by demonstrating the benefits and limitations of attenuation-based light field displays using an inexpensive fabrication method: separating multiple printed transparencies with acrylic sheets.


371. LensChat: Sharing Photos with Strangers

Ramesh Raskar, Rob Gens and Wei-Chao Chen With networked cameras in everyone's pockets, we are exploring the practical and creative possibilities of public imaging. LensChat allows cameras to communicate with each other using trusted optical communications, allowing users to share photos with a friend by taking pictures of each other, or to borrow the perspective and abilities of many cameras.

372. Looking Around Corners

Andreas Velten, Di Wu, Christopher Barsi, Ayush Bhandari, Achuta Kadambi, Nikhil Naik, Micha Feigin, Daniel Raviv, Thomas Willwacher, Otkrist Gupta, Ashok Veeraraghavan, Moungi G. Bawendi and Ramesh Raskar Using a femtosecond laser and a camera with a time resolution of about one trillion frames per second, we recover objects hidden out of sight. We measure speed-of-light timing information of light scattered by the hidden objects via diffuse surfaces in the scene. The object data are mixed up and difficult to decode using traditional cameras, so we combine this "time-resolved" information with novel reconstruction algorithms to untangle the image information and demonstrate the ability to look around corners. Alumni Contributors: Andreas Velten, Otkrist Gupta and Di Wu

373. NETRA: Smartphone Add-On for Eye Tests

Vitor Pamplona, Manuel Oliveira, Erick Passos, Ankit Mohan, David Schafran, Jason Boggess and Ramesh Raskar Can a person look at a portable display, click on a few buttons, and recover their refractive condition? Our optometry solution combines inexpensive optical elements and interactive software components to create a new optometry device suitable for developing countries. The technology allows for early, extremely low-cost, mobile, fast, and automated diagnosis of the most common refractive eye disorders: myopia (nearsightedness), hypermetropia (farsightedness), astigmatism, and presbyopia (age-related visual impairment). The patient overlaps lines in up to eight meridians, and the Android app computes the prescription. The average accuracy is comparable to the prior art, and in some cases even better. We propose the use of our technology as a self-evaluation tool for use in homes, schools, and health centers in developing countries, and in places where an optometrist is not available or is too expensive.

374. PhotoCloud: Personal to Shared Moments with Angled Graphs of Pictures

Ramesh Raskar, Aydin Arpa, Otkrist Gupta and Gabriel Taubin We present a near real-time system for interactively exploring a collectively captured moment without explicit 3D reconstruction. Our system favors immediacy and local coherency over global consistency. It is common to represent photos as vertices of a weighted graph. The weighted angled graphs of photos used in this work can be regarded as the result of discretizing the Riemannian geometry of the high-dimensional manifold of all possible photos. Ultimately, our system enables everyday people to take advantage of each other's perspectives in order to create on-the-spot spatiotemporal visual experiences similar to the popular bullet-time sequence. We believe that this type of application will greatly enhance shared human experiences, spanning from events as personal as parents watching their children's football game to highly publicized red-carpet galas.

375. Polarization Fields: Glasses-Free 3DTV

Douglas Lanman, Gordon Wetzstein, Matthew Hirsch, Wolfgang Heidrich and Ramesh Raskar We introduce polarization field displays as an optically efficient design for dynamic light field display using multi-layered LCDs. Such displays consist of a stacked set of liquid crystal panels with a single pair of crossed linear polarizers. Each layer is modeled as a spatially controllable polarization rotator, as opposed to a conventional spatial light modulator that directly attenuates light. We demonstrate that such displays can be controlled at interactive refresh rates by adopting the SART algorithm to tomographically solve for the optimal spatially varying polarization state rotations applied by each layer. We validate our design by constructing a prototype using modified off-the-shelf panels, and demonstrate interactive display using a GPU-based SART implementation supporting both polarization-based and attenuation-based architectures.
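SART (the Simultaneous Algebraic Reconstruction Technique) is a general iterative solver for the linear systems A x = b that arise in tomography: each update distributes normalized measurement residuals back onto the unknowns. Below is a minimal pure-Python sketch on a tiny system; the matrix and the `sart` helper are hypothetical illustrations, not the project's GPU implementation.

```python
# Minimal SART sketch: x_j += relax * sum_i a_ij * r_i / col_sum_j, where
# r_i = (b_i - A_i . x) / row_sum_i is the row-normalized residual of
# measurement i. This is the same family of update the project adapts to
# solve for per-layer polarization rotations.

def sart(A, b, iterations=500, relax=1.0):
    """Iteratively solve A x = b for a small dense system."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    row_sums = [sum(row) for row in A]
    col_sums = [sum(A[i][j] for i in range(m)) for j in range(n)]
    for _ in range(iterations):
        # Residual of each measurement, normalized by its row sum.
        corr = [(b[i] - sum(A[i][j] * x[j] for j in range(n))) / row_sums[i]
                for i in range(m)]
        # Distribute the corrections back onto each unknown.
        for j in range(n):
            x[j] += relax * sum(A[i][j] * corr[i] for i in range(m)) / col_sums[j]
    return x

# Tiny consistent system: two "rays" passing through two "layers".
A = [[1.0, 1.0],
     [1.0, 2.0]]
b = [3.0, 5.0]  # consistent with the true solution x = [1, 2]
print(sart(A, b))
```

For consistent systems and a relaxation factor in (0, 2), this update converges to a solution; the project's appeal to SART is that each iteration is a pair of matrix-vector products, which maps naturally onto a GPU at interactive rates.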

376. Portable Retinal Imaging

Everett Lawson, Jason Boggess, Alex Olwal, Gordon Wetzstein and Siddharth Khullar The major challenge in preventing blindness is identifying patients and bringing them to specialty care. Diseases that affect the retina, the image sensor in the human eye, are particularly challenging to address, because they require highly trained eye specialists (ophthalmologists) who use expensive equipment to visualize the inner parts of the eye. Diabetic retinopathy, HIV/AIDS-related retinitis, and age-related macular degeneration are three conditions that can be screened and diagnosed to prevent blindness caused by damage to the retina. We exploit a combination of two novel ideas that simplify the constraints of traditional devices, using simplified optics and clever illumination to capture and visualize images of the retina in a standalone device easily operated by the user. Prototypes are conveniently embedded in either a mobile hand-held retinal camera or wearable eyeglasses.

377. Reflectance Acquisition Using Ultrafast Imaging

Ramesh Raskar and Nikhil Naik We demonstrate a new technique that allows a camera to rapidly acquire reflectance properties of objects 'in the wild' from a single viewpoint, over relatively long distances and without encircling equipment. This project has a wide variety of applications in computer graphics including image relighting, material identification, and image editing. Alumni Contributor: Andreas Velten

378. Second Skin: Motion Capture with Actuated Feedback for Motor Learning

Ramesh Raskar, Kenichiro Fukushi, Christopher Schonauer and Jan Zizka We have created a 3D motion-tracking system with automatic, real-time vibrotactile feedback, built from an assembly of photo-sensor and infrared-projector pairs, vibration motors, and a wearable suit. This system allows us to enhance and quicken the motor learning process in a variety of fields such as healthcare (physiotherapy), entertainment (dance), and sports (martial arts). Alumni Contributor: Dennis Ryan Miaw

379. Shield Field Imaging

Jaewon Kim We present a new, single-shot, shadow-based method for scanning 3D objects. We decouple 3D occluders from 4D illumination using shield fields: the 4D attenuation function which acts on any light field incident on an occluder. We then analyze occluder reconstruction from cast shadows, leading to a single-shot light field camera for visual hull reconstruction.
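The final step, visual hull reconstruction, can be sketched with classic voxel carving from binary silhouettes. The three orthographic views and toy cube below are simplifying assumptions for illustration, not the shield-field formulation itself:

```python
import numpy as np

def carve(sil_xy, sil_xz, sil_yz):
    """Keep only voxels whose projections fall inside every silhouette."""
    n = sil_xy.shape[0]
    hull = np.ones((n, n, n), dtype=bool)
    for x in range(n):
        for y in range(n):
            for z in range(n):
                hull[x, y, z] = sil_xy[x, y] and sil_xz[x, z] and sil_yz[y, z]
    return hull

# Toy object: a 2x2x2 cube inside a 4x4x4 grid.
vol = np.zeros((4, 4, 4), dtype=bool)
vol[1:3, 1:3, 1:3] = True
sil_xy = vol.any(axis=2)   # silhouette seen along z
sil_xz = vol.any(axis=1)   # silhouette seen along y
sil_yz = vol.any(axis=0)   # silhouette seen along x
hull = carve(sil_xy, sil_xz, sil_yz)
```

For a convex, axis-aligned object like this cube, the carved hull recovers the volume exactly; for general shapes the visual hull is an outer bound.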

380. Single Lens Off-Chip Cellphone Microscopy

Ramesh Raskar and Aydin Arpa Within the last few years, cellphone subscriptions have spread widely and now cover even the remotest parts of the planet. Adequate access to healthcare, however, is not widely available, especially in developing countries. We propose a new approach to converting cellphones into low-cost scientific devices for microscopy. Cellphone microscopes have the potential to revolutionize health-related screening and analysis for a variety of applications, including blood and water tests. Our optical system is more flexible than previously proposed mobile microscopes, and allows for wide field of view panoramic imaging, the acquisition of parallax, and coded background illumination, which optically enhances the contrast of transparent and refractive specimens.

381. Slow Display

Daniel Saakes, Kevin Chiu, Tyler Hutchison, Biyeun Buczyk, Naoya Koizumi and Masahiko Inami How can we show the 16-megapixel photos from our latest trip on a digital display? How can we create screens that are visible in direct sunlight as well as complete darkness? How can we create large displays that consume less than 2 W of power? How can we create design tools for digital decal application and intuitive computer-aided modeling? We introduce a display that is high resolution but updates at a low frame rate: a slow display. We use lasers and monostable light-reactive materials to provide programmable space-time resolution. This refreshable, high-resolution display exploits the time decay of monostable materials, making it attractive in terms of cost and power requirements. Our effort to repurpose these materials involves solving underlying problems in color reproduction, day-night visibility, and optimal time sequences for updating content.

382. SpeckleSense

Alex Olwal, Andrew Bardagjy, Jan Zizka and Ramesh Raskar Motion sensing is of fundamental importance for user interfaces and input devices. In applications where optical sensing is preferred, traditional camera-based approaches can be prohibitive due to limited resolution, low frame rates, and the required computational power for image processing. We introduce a novel set of motion-sensing configurations based on laser speckle sensing that are particularly suitable for human-computer interaction. The underlying principles allow these configurations to be fast, precise, extremely compact, and low cost.
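At its core, speckle-based motion sensing reduces to estimating how far the speckle pattern has shifted between successive sensor readouts. A minimal sketch, assuming a 1D sensor and circular shifts (not the actual SpeckleSense hardware pipeline), uses FFT cross-correlation:

```python
import numpy as np

def estimate_shift(a, b):
    """Integer shift of b relative to a via circular cross-correlation."""
    a = a - a.mean()
    b = b - b.mean()
    # Cross-correlation theorem: corr[k] = sum_n a[n] * b[n + k]
    corr = np.fft.ifft(np.fft.fft(a).conj() * np.fft.fft(b)).real
    k = int(np.argmax(corr))
    return k if k <= len(a) // 2 else k - len(a)   # wrap to signed shift

rng = np.random.default_rng(0)
speckle = rng.random(256)        # random intensity pattern, stand-in for speckle
moved = np.roll(speckle, 7)      # the pattern after a 7-sample displacement
shift = estimate_shift(speckle, moved)
```

Because the speckle pattern is effectively random, the correlation peak is sharp, which is what makes such sensors fast and precise with very little computation.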

383. Tensor Displays: High-Quality Glasses-Free 3D TV

Gordon Wetzstein, Douglas Lanman, Matthew Hirsch and Ramesh Raskar We introduce tensor displays: a family of glasses-free 3D displays comprising all architectures employing (a stack of) time-multiplexed LCDs illuminated by uniform or directional backlighting. We introduce a unified optimization framework that encompasses all tensor display architectures and allows for optimal glasses-free 3D display. We demonstrate the benefits of tensor displays by constructing a reconfigurable prototype using modified LCD panels and a custom integral imaging backlight. Our efficient, GPU-based NTF implementation enables interactive applications. In our experiments we show that tensor displays yield practical architectures with greater depths of field, wider fields of view, and thinner form factors, compared to prior automultiscopic displays.

384. Theory Unifying Ray and Wavefront Lightfield Propagation

George Barbastathis, Ramesh Raskar, Belen Masia, Se Baek Oh and Tom Cuypers This work focuses on bringing powerful concepts from wave optics to the creation of new algorithms and applications for computer vision and graphics. Specifically, the ray-based, 4D lightfield representation, based on simple 3D geometric principles, has led to a range of new applications that include digital refocusing, depth estimation, synthetic aperture, and glare reduction within a camera or using an array of cameras. The lightfield representation, however, is inadequate to describe interactions with diffractive or phase-sensitive optical elements. Therefore we use Fourier optics principles to represent wavefronts with additional phase information. We introduce a key modification to the ray-based model to support modeling of wave phenomena. The two key ideas are "negative radiance" and a "virtual light projector." This involves exploiting a higher dimensional representation of light transport.

385. Trillion Frames Per Second Camera

Andreas Velten, Di Wu, Adrin Jarabo, Belen Masia, Christopher Barsi, Chinmaya Joshi, Everett Lawson, Moungi Bawendi, Diego Gutierrez, and Ramesh Raskar We have developed a camera system that captures movies at an effective rate of approximately one trillion frames per second. In one frame of our movie, light moves only about 0.6 mm. We can observe pulses of light as they propagate through a scene. We use this information to understand how light propagation affects image formation and to learn things about a scene that are invisible to a regular camera.
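The distance figure quoted above follows from simple arithmetic: at 10^12 frames per second, a single frame interval is one picosecond, and the roughly 0.6 mm corresponds to an effective exposure of about two picoseconds:

```python
# Speed of light in vacuum, m/s
C = 299_792_458.0

def light_travel_mm(exposure_s):
    """Distance light covers during one exposure, in millimetres."""
    return C * exposure_s * 1000.0

per_picosecond = light_travel_mm(1e-12)   # ~0.3 mm per 1 ps frame interval
per_two_ps = light_travel_mm(2e-12)       # ~0.6 mm for a ~2 ps effective exposure
```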

386. Vision on Tap

Ramesh Raskar Computer vision is a class of technologies that lets computers use cameras to automatically stitch together panoramas, reconstruct 3-D geometry from multiple photographs, and even tell you when the water's boiling. For decades, this technology has been advancing mostly within the confines of academic institutions and research labs. Vision on Tap is our attempt to bring computer vision to the masses. Alumni Contributor: Kevin Chiu

387. VisionBlocks

Chunglin Wen and Ramesh Raskar VisionBlocks is an on-demand, in-browser, customizable, computer-vision application-building platform for the masses. Even without any prior programming experience, users can create and share computer vision applications. End-users drag and drop computer vision processing blocks to create their apps. The input feed could be either from a user's webcam or a video from the Internet. VisionBlocks is a community effort where researchers obtain fast feedback, developers monetize their vision applications, and consumers can use state-of-the-art computer vision techniques. We envision a Vision-as-a-Service (VaaS) over-the-web model, with easy-to-use interfaces for application creation for everyone. Alumni Contributors: Abhijit Bendale, Kshitij Marwah, Jason Boggess, and Kevin Chiu

388. Visual Lifelogging


NEW LISTING

Hyowon Lee, Nikhil Naik, Lubos Omelina, Daniel Tokunaga, Tiago Lucena and Ramesh Raskar We are creating a novel visual lifelogging framework for applications in personal life and workplaces.

Mitchel Resnick: Lifelong Kindergarten


How to engage people in creative learning experiences.

389. App Inventor

Hal Abelson, Eric Klopfer, Mitchel Resnick, Leo Burd, Andrew McKinney, Shaileen Pokress, CSAIL and Scheller Teacher Education Program The Center for Mobile Learning is driven by a vision that people should be able to experience mobile technology as creators, not just consumers. One focus of our activity here is App Inventor, a Web-based program development tool that even beginners with no prior programming experience can use to create mobile applications for business, education, social good, entertainment and anything else they might dream of. Work on App Inventor was initiated in Google Research by Hal Abelson and is continuing at the MIT Media Lab as a collaboration with the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Scheller Teacher Education Program (STEP).

390. Build-in-Progress
NEW LISTING

Tiffany Tseng, Mitchel Resnick Build-in-Progress is a new platform for people to document and share DIY projects that are still works in progress. The website encourages designers to share their designs as they are under development, showcasing the trials and errors that naturally occur throughout one's design process. This is in contrast to existing platforms, which tend to present users with edited recipes for replicating existing projects. Build-in-Progress also has a companion mobile app that enables designers to easily share media associated with their projects.

391. Collab Camp

Ricarose Roque, Amos Blanton, Natalie Rusk and Mitchel Resnick To foster and better understand collaboration in the Scratch Online Community, we created Collab Camp, a month-long event in which Scratch community members form teams (collabs) to work together on Scratch projects. Our goals include: analyzing how different organizational structures support collaboration in different ways; examining how design decisions influence the diversity of participation in collaborative activities; and studying the role of constructive feedback in creative, collaborative processes.

392. Computer Clubhouse

Mitchel Resnick, Natalie Rusk, Chris Garrity, Claudia Urrea, and Robbie Berg At Computer Clubhouse after-school centers, young people (ages 10-18) from low-income communities learn to express themselves creatively with new technologies. Clubhouse members work on projects based on their own interests, with support from adult mentors. By creating their own animations, interactive stories, music videos, and robotic constructions, Clubhouse members become more capable, confident, and creative learners. The first Computer Clubhouse was established in 1993, as a collaboration between the Lifelong Kindergarten group and The Computer Museum (now part of the Boston Museum of Science). With financial support from Intel Corporation, the network has expanded to more than 20 countries, serving more than 20,000 young people. The Lifelong Kindergarten group continues to develop new technologies, introduce new educational approaches, and lead professional-development workshops for Clubhouses around the world. Alumni Contributors: Leo Burd, Robbin Chapman, Rachel Garber, Tim Gorton, Michelle Hlubinka and Elisabeth Sylvan

393. Computer Clubhouse Village

Chris Garrity, Natalie Rusk and Mitchel Resnick The Computer Clubhouse Village is an online community that connects people at Computer Clubhouse after-school centers around the world. Through the Village, Clubhouse members and staff (at more than 100 Clubhouses in 21 countries) can share ideas with one another, get feedback and advice on their projects, and work together on collaborative design activities. Alumni Contributors: Robbin Chapman, Rachel Garber and Elisabeth Sylvan

394. Family Creativity Workshops

Ricarose Roque and Mitchel Resnick In Family Creativity Workshops, we engage parents and their children in workshops to design and invent together with Scratch, a programming language with which people can create their own interactive animations, games, and stories. Just as children's literacy can be supported by parents reading with them, children's creativity can be supported by parents creating with them. Children who learn to create with technologies like Scratch often come from homes with strong support systems. In these workshops, we especially target families with limited access to resources and social support around technology. By promoting participation across generations, these creative workshops engage parents in supporting their children in becoming creators and full participants in today's digital society.

395. Learning Creative Learning

NEW LISTING

Mitchel Resnick, Philipp Schmidt, Natalie Rusk, Ricarose Roque, Sayamindu Dasgupta Learning Creative Learning (http://learn.media.mit.edu) is a new online course that introduces ideas and strategies for supporting creative learning. In the first semester (spring 2013), thousands of educators, designers, and technologists participated in the course and shared ideas with one another. We view the course as an experimental alternative to traditional Massive Open Online Courses (MOOCs), putting greater emphasis on peer-to-peer learning, hands-on projects, and sustainable communities.

396. Learning with Data

Sayamindu Dasgupta and Mitchel Resnick More and more computational activities revolve around collecting, accessing, and manipulating large sets of data, but introductory approaches for learning programming typically center on algorithmic concepts and flow of control, not on data. Computational exploration of data has usually been restricted to predefined operations in spreadsheet software like Microsoft Excel. This project builds on the Scratch programming language and environment to allow children to explore data and datasets. With the extensions provided by this project, children can build Scratch programs to not only manipulate and analyze data from online sources, but also to collect data through various means such as surveys and crowdsourcing. This toolkit will support many different types of projects, like online polls, turn-based multiplayer games, crowd-sourced stories, visualizations, information widgets, and quiz-type games.

397. MaKey MaKey

Eric Rosenbaum, Jay Silver, and Mitchel Resnick MaKey MaKey lets you transform everyday objects into computer interfaces. Make a game pad out of Play-Doh, a musical instrument out of bananas, or any other invention you can imagine. It's a little USB device you plug into your computer, and you use it to make your own switches that act like keys on the keyboard: Make + Key = MaKey MaKey! It's plug and play. No need for any electronics or programming skills. Since MaKey MaKey looks to your computer like a regular mouse and keyboard, it's automatically compatible with any piece of software you can think of. It's great for beginners tinkering and exploring, for experts prototyping and inventing, and for everybody who wants to playfully transform their world.

398. Map Scratch

Sayamindu Dasgupta, Brian Silverman, and Mitchel Resnick Map Scratch is an extension of Scratch that enables kids to program with maps within their Scratch projects. With Map Scratch, kids can create interactive tours, games, and data visualizations with real-world geographical data and maps.

399. MelodyMorph

Eric Rosenbaum and Mitchel Resnick MelodyMorph is an interface for constructing melodies and making improvised music. It removes a constraint of traditional musical instruments: a fixed mapping between space and pitch. What if you blew up the piano so you could put the keys anywhere you want? With MelodyMorph you can create a customized musical instrument, unique to the piece of music, the player, or the moment.

400. Open Learning


NEW LISTING

Philipp Schmidt and Mitchel Resnick Learning for everyone, by everyone. The Open Learning project builds online learning communities that work like the web: peer-to-peer, loosely joined, open. And it works with Media Lab faculty and students to open up the magic of the Lab through online learning. Our first experiment was Learning Creative Learning, a course taught at the Media Lab, which attracted 24,000 participants. We are currently developing ideas for massive citizen science projects, engineering competitions for kids, and new physical infrastructures for learning that reclaim the library.

401. Replay

Tiffany Tseng and Mitchel Resnick Replay is a self-documenting construction kit for children both to share their designs with others and reflect on their own design process. Replay consists of a set of angular construction pieces that can sense their connection and orientation. A virtual model is rendered in real time as a design is constructed, and an on-screen playback interface allows users to view models from multiple perspectives and watch how a design was assembled.

402. Sanctuary

Eric Klopfer, Jason Haas, Jordan Haines and Nick Benson Sanctuary is an educational game to be played in pairs. It addresses topics in high-school biology and mathematics, and encourages players to become collaborative scientists with asymmetric interfaces and tools.

403. Scratch

Mitchel Resnick, John Maloney, Natalie Rusk, Karen Brennan, Champika Fernanda, Ricarose Roque, Sayamindu Dasgupta, Amos Blanton, Michelle Chung, Abdulrahman idlbi, Eric Rosenbaum, Brian Silverman, Paula Bonta Scratch is a programming language and online community (http://scratch.mit.edu) that makes it easy to create your own interactive stories, games, animations, and simulations, and share your creations online. As young people create and share Scratch projects, they learn to think creatively, reason systematically, and work collaboratively, while also learning important mathematical and computational ideas. More than 3 million projects have been shared on the Scratch website. Alumni Contributors: Gaia Carini, Margarita Dekoli, Evelyn Eastmond, Amon Millner, Andres Monroy-Hernandez and Tamara Stern

404. Scratch Day

Ingeborg Endter, Ricarose Roque, Karen Brennan and Mitchel Resnick Scratch Day (day.scratch.mit.edu) is a network of face-to-face local gatherings, on the same day in all parts of the world, where people can meet, share, and learn more about Scratch, a programming environment that enables people to create their own interactive stories, games, animations, and simulations. We believe that these types of face-to-face interactions remain essential for ensuring the accessibility and sustainability of initiatives such as Scratch. In-person interactions enable richer forms of communication among individuals, more rapid iteration of ideas, and a deeper sense of belonging and participation in a community. The first Scratch Day took place on May 16, 2009, with 120 events in 44 different countries. In 2012, there were 186 events in 44 countries.

405. ScratchJr

Mitchel Resnick, Marina Bers, Paula Bonta, Brian Silverman, and Sayamindu Dasgupta The ScratchJr project aims to bring the ideas and spirit of Scratch programming activities to younger children, enabling children ages five to seven to program their own interactive stories, games, and animations. To make ScratchJr developmentally appropriate for younger children, we are revising the interface and providing new structures to help young children learn core math concepts and problem-solving strategies. We hope to make a version of ScratchJr publicly available in 2014.

406. Singing Fingers

Eric Rosenbaum, Jay Silver and Mitchel Resnick Singing Fingers allows children to fingerpaint with sound. Users paint by touching a screen with a finger, but color only emerges if a sound is made at the same time. By touching the painting again, users can play back the sound. This creates a new level of accessibility for recording, playback, and remixing of sound.

Deb Roy: Cognitive Machines


How to build machines that learn to use language in human-like ways, and develop tools and models to better understand how children learn to communicate and how adults behave.

407. BlitzScribe: Speech Analysis for the Human Speechome Project

Brandon Roy and Deb Roy BlitzScribe is a new approach to speech transcription driven by the demands of today's massive multimedia corpora. High-quality annotations are essential for indexing and analyzing many multimedia datasets; in particular, our study of language development for the Human Speechome Project depends on speech transcripts. Unfortunately, automatic speech transcription is inadequate for many natural speech recordings, and traditional approaches to manual transcription are extremely labor intensive and expensive. BlitzScribe uses a semi-automatic approach, combining human and machine effort to dramatically improve transcription speed. Automatic methods identify and segment speech in dense, multitrack audio recordings, allowing us to build streamlined user interfaces maximizing human productivity. The first version of BlitzScribe is already about 4-6 times faster than existing systems. We are exploring user-interface design, machine-learning and pattern-recognition techniques to build a human-machine collaborative system that will make massive transcription tasks feasible and affordable.
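The automatic find-and-segment step can be illustrated with short-time energy thresholding. The frame length, threshold, and synthetic signal below are illustrative assumptions, not BlitzScribe's actual detector:

```python
import numpy as np

def energy_segments(signal, frame_len=160, threshold=0.01):
    """Return (start_frame, end_frame) spans whose mean energy exceeds threshold."""
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    energy = (frames ** 2).mean(axis=1)
    active = energy > threshold
    segments, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i                      # segment opens
        elif not a and start is not None:
            segments.append((start, i))    # segment closes
            start = None
    if start is not None:
        segments.append((start, len(active)))
    return segments

# Toy signal: silence, a loud burst occupying frames 3..5, then silence.
sig = np.zeros(1600)
sig[480:960] = 0.5
segs = energy_segments(sig)
```

In a real pipeline such candidate segments would then be queued for a human transcriber, which is the human-machine division of labor the project describes.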

408. Crowdsourcing the Creation of Smart Role-Playing Agents

Jeff Orkin and Deb Roy We are crowdsourcing the creation of socially rich interactive characters by collecting data from thousands of people interacting and conversing in online multiplayer games, and mining recorded gameplay to extract patterns in language and behavior. The tools and algorithms we are developing allow non-experts to automate characters who can play roles by interacting and conversing with humans (via speech or typed text), and with each other. The Restaurant Game recorded over 16,000 people playing the roles of customers and waitresses in a virtual restaurant. Improviso is recording humans playing the roles of actors on the set of a sci-fi movie. This approach will enable new forms of interaction for games, training simulations, customer service, and HR job-applicant screening systems.

409. HouseFly: Immersive Video Browsing and Data Visualization

Philip DeCamp, Rony Kubat and Deb Roy HouseFly combines audio-video recordings from multiple cameras and microphones to generate an interactive, 3D reconstruction of recorded events. Developed for use with the longitudinal recordings collected by the Human Speechome Project, this software enables the user to move freely throughout a virtual model of a home and to play back events at any time or speed. In addition to audio and video, the project explores how different kinds of data may be visualized in a virtual space, including speech transcripts, person-tracking data, and retail transactions. Alumni Contributor: George Shaw

410. Human Speechome Project

Philip DeCamp, Brandon Roy, Soroush Vosoughi and Deb Roy The Human Speechome Project is an effort to observe and computationally model the longitudinal language development of a single child at an unprecedented scale. To achieve this, we are recording, storing, visualizing, and analyzing communication and behavior patterns in over 200,000 hours of home video and speech recordings. The tools that are being developed for mining and learning from hundreds of terabytes of multimedia data offer the potential for breaking open new business opportunities for a broad range of industries, from security to Internet commerce. Alumni Contributors: Michael Fleischman, Jethran Guinness, Alexia Salata and George Shaw

411. Speech Interaction Analysis for the Human Speechome Project

Brandon Roy and Deb Roy The Speechome Corpus is the largest corpus of a single child learning language in a naturalistic setting. We have now transcribed significant amounts of the speech to support new kinds of language analysis. We are currently focusing on the child's lexical development, pinpointing "word births" and relating them to caregiver language use. Our initial results show child vocabulary growth at an unprecedented temporal resolution, as well as a detailed picture of other measures of linguistic development. The results suggest individual caregivers "tune" their spoken interactions to the child's linguistic ability with far more precision than expected, helping to scaffold language development. To perform these analyses, new tools have been developed for interactive data annotation and exploration.

412. Speechome Recorder for the Study of Child Development Disorders

Soroush Vosoughi, Joe Wood, Matthew Goodwin and Deb Roy Collection and analysis of dense, longitudinal observational data of child behavior in natural, ecologically valid, non-laboratory settings holds significant benefits for advancing the understanding of autism and other developmental disorders. We have developed the Speechome Recorder, a portable version of the embedded audio/video recording technology originally developed for the Human Speechome Project, to facilitate swift, cost-effective deployment in special-needs clinics and homes. Recording child behavior daily in these settings will enable us to study developmental trajectories of autistic children from infancy through early childhood, as well as atypical dynamics of social interaction as they evolve on a day-to-day basis. Its portability makes possible potentially large-scale comparative study of developmental milestones in both neurotypical and autistic children. Data-analysis tools developed in this research aim to reveal new insights toward early detection, provide more accurate assessments of context-specific behaviors for individualized treatment, and shed light on the enduring mysteries of autism. Alumni Contributors: George Shaw and Philip DeCamp
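The "word birth" analysis in the Speech Interaction Analysis project above can be sketched as finding, for each word, the earliest dated transcript in which the child produces it. The tiny transcript set here is invented for illustration; the real analysis works over dense longitudinal transcripts:

```python
def word_births(dated_utterances):
    """Map each word to the earliest date on which it appears in child speech.

    dated_utterances: iterable of (date_string, utterance) pairs.
    """
    births = {}
    for date, utterance in sorted(dated_utterances):
        for word in utterance.split():
            births.setdefault(word, date)   # keep only the first sighting
    return births

transcripts = [
    ("2007-03-01", "ball"),
    ("2007-03-15", "ball dog"),
    ("2007-04-02", "dog water ball"),
]
births = word_births(transcripts)
```

A production version would additionally require sustained use after the first sighting to rule out one-off imitations, per the "sustained rate" intuition behind word births.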

Chris Schmandt: Speech + Mobility


How speech technologies and portable devices can enhance communication.

413. Back Talk

Chris Schmandt and Andrea Colaco The living room is the heart of social and communal interactions in a home. Often present in this space is a screen: the television. When in use, this communal gathering space brings together people and their interests, and their varying needs for company, devices, and content. This project focuses on using personal devices such as mobile phones with the television; the phone serves as a controller and social interface by offering a channel to convey engagement, laughter, and viewer comments, and to create remote co-presence.

414. Flickr This

Chris Schmandt and Dori Lin Inspired by the fact that people are communicating more and more through technology, Flickr This explores ways for people to have emotion-rich conversations through all kinds of media provided by people and technology: a way for technology to allow remote people to have conversations more like face-to-face experiences by grounding them in shared media. Flickr This lets viewable content provide structure for a conversation; grounded in the viewable content, a conversation can move between synchronous and asynchronous, and evolve into a richer collaborative mix of conversation and media.

415. frontdesk

Chris Schmandt and Andrea Colaco Calling a person versus calling a place has quite distinctive affordances. With the arrival of mobile phones, the concept of calling has moved from calling a place to calling a person. Frontdesk proposes a place-based communication tool that is accessed primarily through any mobile device and features voice calls and text chat. The application uses place loosely to define a physical space created by a group of people that have a shared context of that place. Examples of places could be different parts of a workspace in a physical building, such as the machine shop, café, or Speech + Mobility group area at the Media Lab. When a user calls any of these places, frontdesk routes their call to all people that are checked in to the place.

416. Going My Way

Chris Schmandt and Jaewoo Chung When friends give directions, they often don't describe the whole route, but instead provide landmarks along the way with which they think we'll be familiar. Friends can assume we have certain knowledge because they know our likes and dislikes. Going My Way attempts to mimic a friend by learning about where you travel, identifying the areas that are close to the desired destination from your frequent paths, and picking a set of landmarks to allow you to choose a familiar one. When you select one of the provided landmarks, Going My Way will provide directions from it to the destination. Alumni Contributors: Chaochi Chang and Paulina Lisa Modlitba
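One hedged reading of the landmark choice above: among places the user visits often, prefer those that are both familiar (high visit count) and close to the destination. The scoring rule, distance approximation, and data below are assumptions for illustration, not the deployed system's logic:

```python
import math

def pick_landmark(landmarks, dest, max_km=2.0):
    """landmarks: list of (name, lat, lon, visit_count); dest: (lat, lon).
    Returns the highest-scoring landmark name within max_km, or None."""
    def dist_km(lat1, lon1, lat2, lon2):
        # Equirectangular approximation, adequate over city-scale distances.
        kx = 111.32 * math.cos(math.radians(lat1))
        return math.hypot((lat2 - lat1) * 110.57, (lon2 - lon1) * kx)

    candidates = []
    for name, lat, lon, visits in landmarks:
        d = dist_km(lat, lon, dest[0], dest[1])
        if d <= max_km:
            candidates.append((visits / (1.0 + d), name))  # familiarity vs. distance
    return max(candidates)[1] if candidates else None

places = [
    ("grocery store", 42.360, -71.090, 50),
    ("old library",   42.362, -71.088, 3),
    ("airport",       42.365, -71.010, 40),
]
best = pick_landmark(places, dest=(42.361, -71.089))
```

Here the frequently visited grocery store wins over the rarely visited library nearby, and the airport is rejected for being too far from the destination.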

417. Guiding Light

Chris Schmandt and Jaewoo Chung Guiding Light is a navigation-based application that provides directions by projecting them onto physical spaces both indoors and outdoors. It enables a user to get relevant spatial information by using a mini projector in a cell phone. The core metaphor involved in this design is that of a flashlight which reveals objects in and information about the space it illuminates. For indoor navigation, Guiding Light uses a combination of e-compass, accelerometer, proximity sensors, and tags to place information appropriately. In contrast to existing heads-up displays that push information into the user's field of view, Guiding Light works on a pull principle, relying entirely on users' requests and control of information.

418. Indoor Location Sensing Using Geo-Magnetism

Chris Schmandt, Jaewoo Chung, Nan-Wei Gong, Wu-Hsi Li and Joe Paradiso We present an indoor positioning system that measures location using disturbances of the Earth's magnetic field caused by structural steel elements in a building. The presence of these large steel members warps the geomagnetic field such that lines of magnetic force are locally not parallel. We measure the divergence of the lines of the magnetic force field using e-compass parts with slight physical offsets; these measurements are used to create local position signatures for later comparison with values from the same sensors at a location to be measured. We demonstrate accuracy within one meter 88% of the time in experiments in two buildings and across multiple floors within the buildings.

419. InterTwinkles

Chris Schmandt and Charlie DeTar Bringing deliberative process and consensus decision-making to the 21st century! InterTwinkles is a practical set of tools for assisting in meeting structure, deliberative process, brainstorming, and negotiation, helping groups to democratically engage with each other across geographies and time zones.
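At matching time, the geomagnetic positioning project above (Indoor Location Sensing Using Geo-Magnetism) is a fingerprinting problem: compare a live multi-sensor magnetic reading against previously surveyed signatures and return the closest. The locations and signature values below are invented for illustration:

```python
import math

def nearest_location(signature_db, reading):
    """signature_db: {location: signature vector}.
    Returns the location whose stored signature is closest (L2) to reading."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(signature_db, key=lambda loc: dist(signature_db[loc], reading))

# Hypothetical survey: one 4-component magnetic signature per location.
db = {
    "elevator lobby": (41.2, -12.7, 55.1, 40.8),
    "corridor east":  (44.0, -10.1, 52.3, 43.5),
    "lab entrance":   (39.5, -15.2, 57.0, 38.9),
}
loc = nearest_location(db, (43.6, -10.4, 52.0, 43.1))
```

The paper's contribution is the sensing side (offset e-compasses capturing local field divergence); the matching step sketched here is standard nearest-neighbor fingerprinting.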

420. LocoRadio

Chris Schmandt and Wu-Hsi Li LocoRadio is a mobile, augmented-reality audio browsing system that immerses you within a soundscape as you move. To enhance the browsing experience in high-density spatialized audio environments, we introduce a UI feature, "auditory spatial scaling," which enables users to continuously adjust the spatial density of perceived sounds. The audio comes from a custom, geo-tagged audio database. The current demo uses iconic music to represent restaurants. As users move through the city, they encounter a series of musical clips, and this perception enhances their awareness of the number, styles, and locations of nearby restaurants.
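One simple reading of "auditory spatial scaling" is to scale each source's distance from the listener before computing its playback gain. The inverse-distance attenuation law and the scale factors below are illustrative assumptions, not LocoRadio's actual rendering model:

```python
def source_gain(distance_m, scale=1.0, ref=1.0):
    """Gain for a source after spatially scaling its distance.

    scale < 1 densifies the soundscape (sources seem closer and louder);
    scale > 1 sparsifies it. Distances are clamped at a reference distance.
    """
    d = max(distance_m * scale, ref)
    return ref / d          # simple 1/d attenuation

near = source_gain(10.0, scale=0.5)   # densified soundscape
far = source_gain(10.0, scale=2.0)    # sparsified soundscape
```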

421. Mime
NEW LISTING

Andrea Colaco Mime is a compact, low-power 3D sensor for short-range gestural control of small display devices. The sensor's performance is based on a novel signal processing pipeline that combines low-power time-of-flight (TOF) sensing for 3D hand-motion tracking with RGB image-based computer vision algorithms for finer gestural control. Mime is an addition to a growing number of input devices developed around the engineering design philosophy of sacrificing generality for battery-friendly and accurate performance to retain the portability advantages of our smart devices. We demonstrate the utility of Mime for head-mounted display control and smart phones with a variety of application scenarios, including 3D spatial input using close-range gestures, gaming, on-the-move interaction, and operation in cluttered environments and in broad daylight conditions.
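The TOF half of the pipeline turns a measured round-trip travel time of light into range directly; the numbers below are illustrative:

```python
# Speed of light in vacuum, m/s
C = 299_792_458.0

def tof_distance_m(round_trip_s):
    """Range to the target: half the round-trip light path."""
    return C * round_trip_s / 2.0

# A hand roughly 30 cm away returns the pulse after about 2 ns.
d = tof_distance_m(2.0e-9)
```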

422. Musicpainter

Chris Schmandt, Barry Vercoe and Wu-Hsi Li Musicpainter is a networked, graphical composing environment that encourages sharing and collaboration within the composing process, providing a social environment where users can gather and learn from each other. The approach is based on sharing and managing music creation at both small and large scales. At the small scale, users are encouraged to begin composing by conceiving small musical ideas, such as melodic or rhythmic fragments, all of which are collected and made available to all users as a shared composing resource; the collection provides a dynamic source of composing material that is inspiring and reusable. At the large scale, users can access full compositions that are shared as open projects. Users can listen to and change any piece, and the system generates an attribution list on the edited piece, allowing users to trace how it evolves in the environment.
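The attribution mechanism described above can be sketched as a parent chain: each edit of a shared piece records its author and the piece it was derived from, so the full lineage can be reconstructed. This is an illustrative model, not Musicpainter's actual code; the `Piece` class and its method names are hypothetical.

```python
class Piece:
    """Toy model of Musicpainter-style attribution: every edit of a
    shared piece records its author and parent, so the lineage of a
    composition can be traced back to the original."""

    def __init__(self, title, author, parent=None):
        self.title, self.author, self.parent = title, author, parent

    def edit(self, new_title, editor):
        # Editing a shared piece yields a new piece that points back here.
        return Piece(new_title, editor, parent=self)

    def attribution(self):
        # Walk parent links to produce the attribution list, oldest first.
        chain, node = [], self
        while node is not None:
            chain.append((node.title, node.author))
            node = node.parent
        return list(reversed(chain))
```

Under this model, a remixed piece automatically credits every earlier contributor, which is the property the catalog entry describes.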

423. OnTheRun

Chris Schmandt and Matthew Joseph Donahoe OnTheRun is a location-based exercise game designed for the iPhone. The player assumes the role of a fugitive trying to gather clues to clear his name. Played outdoors while running, the game creates missions tailored to the player's neighborhood and running ability. It is primarily an audio experience: gameplay involves following turn-by-turn directions, outrunning virtual enemies, and reaching destinations.

424. Pavlov
NEW LISTING

Chris Schmandt and Sujoy Kumar Chowdhury Pavlov is a virtual pet that encourages you to be physically active. He has an ambient presence on the screens with which you interact. Pavlov is happy and healthy when you have walked a certain number of steps each day; when you are sedentary for a while, he nags you to take him out for a walk. He also craves to be the leader of all Pavlovs in your area, which he can only become when you, as his owner, are the most physically active person among your friends. Pavlov pings you at a set time every day to announce that he is about to have a dog-fight with other Pavlovs. You can watch the dog-fight, or Pavlov will simply tell you whether he won, which indicates that you were more physically active than your friends today.

425. Puzzlaef

Chris Schmandt, Sinchan Banerjee, and Drew Harry How can one understand and visualize the lifestyle of a person on the other side of the world? Puzzlaef tackles this question with a mobile picture puzzle game in which users collaboratively solve puzzles built from pictures of each other's daily lives.

Page 94

April 2013

MIT Media Lab

426. Radio-ish Media Player

Chris Schmandt, Barry Vercoe and Wu-Hsi Li How many decisions does it take before you hear a desired piece of music on your iPod? First you pick a genre, then an artist, then an album, and finally a song; the more songs you own, the tougher the choices. To resolve this, we turn the modern music player into an old analog radio tuner, the Radio-ish Media Player. No LCD, no favorite channels: all you have is a knob that helps you surf from channel to channel, accompanied by synthesized noise. Radio-ish is our attempt to revive the lost art of channel surfing on the old analog radio tuner. Let music find you: your ears will tell you if the music is right. This project is not only a retrospective design but also our reflection on simplicity lost in the process of digitization. A mobile phone version is also available for demo.

427. ROAR

Chris Schmandt and Drew Harry The experience of being in a crowd is visceral. We feel a sense of connection and belonging through shared experiences like watching a sporting event, speech, or performance. In online environments, though, we are often part of a crowd without feeling it. ROAR is designed to let very large groups of distributed spectators have meaningful conversations with strangers or friends while creating a sense of the presence of thousands of other spectators. ROAR also creates opportunities for collective action among spectators and provides flexible ways to share content among very large groups. Together, these systems let you feel the roar of the crowd even if you're alone in your bedroom.

428. SeeIt-ShareIt

Chris Schmandt, Andrea Colaco Now that mobile phones are starting to have 3D display and capture capabilities, there are opportunities to enable new applications that enhance person-person communication or person-object interaction. This project explores one such application: acquiring 3D models of objects using cell phones with stereo cameras. Such models could serve as shared objects that ground communication in virtual environments and mirrored worlds or in mobile augmented reality applications.

429. Spellbound

Misha Sra and Chris Schmandt Turning screen time into activity time, Spellbound is a cooperatively competitive, real-time, real-world multiplayer mobile game. It uses a fantasy game context to bring people together around a shared experience, create serendipitous connections, and encourage new kinds of activities in existing physical spaces. Using mobile phones, the game is freed from the screen and interlaced with the real world. The game system detects activity via sensors on the phone and presence and location via GPS. Players communicate with the game through speech, and output is displayed on the phone screen as well as through a custom wristband interface. Spellbound explores the space between real-world physical activities and fantastical video-game worlds as a place to create new social experiences for both players and audience.

430. Spotz

Chris Schmandt and Misha Sra Exploring your city is a great way to make friends, discover new places, find new interests, and invent yourself. Spotz is an Android app where everyone collectively defines the places they visit, and the places in turn define them. Spotz lets you discover yourself by discovering places. You tag a spot and create some buzz for it; if everyone agrees the spot is 'fun,' that bolsters your 'fun' quotient, and if everyone agrees it is 'geeky,' that pushes up your geeky score. Thus emerges your personal tag cloud. Follow tags to chance upon new places, find people with tag clouds similar to your own, and experience new places together. Create buzz for your favorite spots, and track other buzz to find out who has the #bestchocolatecake in town!


431. Tin Can

Chris Schmandt, Matthew Donahoe and Drew Harry Distributed meetings present a set of interesting challenges to staying engaged and involved. Because one person speaks at a time, it is easy (particularly for remote participants) to disengage from the meeting undetected. However, non-speaking roles in a meeting can be just as important as speaking ones, and if we could give non-speaking participants ways to participate, we could help support better-run meetings of all kinds. Tin Can collects background tasks like taking notes, managing the agenda, sharing relevant content, and tracking to-dos in a distributed interface that uses meeting participants' phones and laptops as input devices, and represents current meeting activities on an iPad in the center of the table in each meeting location. By publicly representing these background processes, we provide meeting attendees with new ways to participate and be recognized for their non-verbal participation.

432. Tin Can Classroom

Chris Schmandt, Drew Harry and Eric Gordon (Emerson College) Classroom discussions may not seem like an environment that needs a new kind of supporting technology. But we've found that augmenting classroom discussions with an iPad-based environment, one that promotes discussion, keeps track of current and future discussion topics, and creates a shared record of class, keeps students engaged and involved with discussion topics and helps restart the discussion when conversation lags. Contrary to what you might expect, having another discussion venue doesn't seem to add to student distraction; rather, it tends to focus distracted students on this backchannel discussion. For the instructor, our system offers powerful insights into the engagement and interests of students who tend to speak less in class, which in turn can empower less-active students to contribute in a venue where they feel more comfortable.

Kevin Slavin: Playful Systems


How to design systems that become experiences by transcending mere utility and usability.

433. Cordon Sanitaire


NEW LISTING

Kevin Slavin Named for, and inspired by, the medieval practice of erecting barriers to prevent the spread of disease, Cordon Sanitaire is a collaborative, location-based mobile game in which players seek to isolate an infectious "patient zero" from the larger population. Every day, the game starts abruptly, synchronizing all players at once, and lasts for two minutes. In 60 seconds, players must choose either to help form the front line of a quarantine or to remain passive. Under pressure, the uninfected attempt to collaborate without communication, seeking the best solution for the group. When those 60 seconds end, a certain number of players are trapped inside with patient zero, and the score reflects the group's ability to cooperate under duress.


Ethan Zuckerman: Civic Media


How to create technical and social systems to allow communities to share, understand, and act on civic information.

434. Between the Bars

Charlie DeTar Between the Bars is a blogging platform for the one out of every 142 Americans who are in prison, making it easy to blog using standard postal mail. It consists of software tools for uploading PDF scans of letters and crowd-sourcing transcriptions of the scanned images. Between the Bars includes the usual full-featured blogging tools, including comments, tagging, RSS feeds, and notifications for friends and family when new posts are available.

435. Codesign Toolkit

Sasha Costanza-Chock and Becky Hurwitz Involving communities in the design process results in products more responsive to a community's needs, more suited to accessibility and usability concerns, and easier to adopt. Civic media tools, platforms, and research work best when practitioners involve target communities at all stages of the process: iterative ideation, prototyping, testing, and evaluation. In the codesign process, communities act as codesigners and participants, rather than mere consumers, end-users, test subjects, or objects of study. In the Codesign Studio, students practice these methods in a service-learning, project-based studio, focusing on collaborative design of civic media with local partners. The Toolkit will enable more designers and researchers to utilize the codesign process in their work by presenting current theory and practices in a comprehensive, accessible manner. Alumni Contributor: Molly Sauter

436. Controversy Mapper

Erhardt Graeff, Matt Stempeck, and Ethan Zuckerman How does a media controversy become the only thing any of us are talking about? Using the Media Cloud platform, we're reverse-engineering major news stories to visualize how ideas spread, how media frames change over time, and whose voices dominate a discussion. We've started with a case study of Trayvon Martin, a teenager who was shot and killed. His story became major national news... several weeks after his death. First, we looked at attention levels across multiple media sources talking about Trayvon: news and blog articles, broadcast news mentions, tweets, Google search trends, and signatures on petitions calling for his killer's arrest. Then, we dove into the networks of interlinked news articles and blog posts to trace changes in how Trayvon's story was framed and to identify the most influential sources according to network structure. Analyses of stories like Trayvon's provide revealing portraits of today's complicated media ecosystems.

437. Data Therapy

Ethan Zuckerman and Rahul Bhargava As part of our larger effort to build out a suite of tools for community organizers, we are helping organizers build their capacity to do their own creative data visualization and presentation. New computer-based tools are lowering the barriers to entry for making engaging and creative presentations of data. Rather than encouraging partnerships with epidemiologists, statisticians, or programmers, we see an opportunity to build capacity within small community organizations around these new tools, through workshops, webinars, and writing about how organizers can pick more creative ways to present their data stories.


438. Digital Humanitarian Marketplace


NEW LISTING

Matthew Stempeck The Internet has disrupted the aid sector as it has so many industries before it. In times of crisis, donors increasingly connect directly with affected populations to provide participatory aid. The Digital Humanitarian Marketplace aggregates these digital volunteering projects by crisis and by the skills required, helping to coordinate this promising new space.

439. Erase the Border
NEW LISTING

Catherine D'Ignazio Erase the Border is a web campaign and voice petition platform. It tells the story of the Tohono O'odham people, whose community has been divided along 75 miles of the US-Mexico border by a fence. The border fence divides the community, prevents tribe members from receiving critical health services, and subjects the O'odham to racism and discrimination. This platform is a pilot we are using to research the potential of voice and media petitions for civic discourse.

440. Gender in Memoriam

Sophie Diehl and Nathan Matias Obituaries reflect society's values for men's and women's achievements, aspirations, and families. Gender in Memoriam shows twenty years of language used by the US media to talk about society's heroes, leaders, and visionaries.

441. Grassroots Mobile Power

Joe Paradiso, Ethan Zuckerman, Pragun Goyal and Nathan Matias We want to help people in nations where electric power is scarce sell power to their neighbors. We're designing prototype hardware that plugs into a diesel generator or other power source, distributes the power to multiple outlets, monitors how much power is used, and uses mobile payments to charge the customer for the power consumed.

442. LazyTruth

Ethan Zuckerman, Matt Stempeck, David Kim, Evan Moore, Justin Nowell and Tess Wise Have you ever been forwarded an email that you just can't believe? Our inboxes are rife with misinformation. The truth is out there, just not when we actually need it. LazyTruth is a Gmail gadget that surfaces verified truths when you receive common chain emails. It all happens right in your inbox, without requiring you to search anywhere, making it far more convenient for citizens to combat misinformation rather than acquiesce to its volume. Whether it's political rumors, gift card scams, or phishing attempts, fact is now as convenient as fiction.

443. Mapping Banned Books

Ethan Zuckerman, American Library Association, Chris Peterson and National Coalition Against Censorship Books are challenged and banned in public schools and libraries across the country. But which books, where, by whom, and for what reasons? The Mapping Banned Books project is a partnership between the Center for Civic Media, the American Library Association, and the National Coalition Against Censorship to a) visualize existing data on book challenges, b) detect what the existing data doesn't capture, and c) devise new methods to surface suppressed speech.

444. Mapping the Globe

Catherine D'Ignazio and Ethan Zuckerman Mapping the Globe is an interactive tool and map that helps us understand where the Boston Globe directs its attention. Media attention matters in quantity and quality. It helps determine what we talk about as a public and how we talk about it. Mapping the Globe tracks where the paper's attention goes and what that attention looks like across different regional geographies in combination with diverse data sets like population and income. Produced in partnership with the Boston Globe.


445. Media Cloud

Hal Roberts, Ethan Zuckerman and David LaRochelle Media Cloud is a platform for studying media ecosystems: the relationships between professional and citizen media, and between online and offline sources. By tracking millions of stories published online or broadcast via television, the system allows researchers to track the spread of memes, media framings, and the tone of coverage of different stories. The platform is open source and open data, designed to be a substrate for a wide range of communications research efforts. Media Cloud is a collaboration between Civic Media and the Berkman Center for Internet and Society at Harvard Law School.
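The kind of attention tracking described above can be illustrated with a small sketch. This is not the Media Cloud API; `attention_series` and the story-dict format are hypothetical stand-ins showing how mention counts per source per day, the raw material for spread-of-coverage charts, might be computed over a corpus.

```python
from collections import Counter

def attention_series(stories, phrase):
    """Count, per (source, date) pair, how many stories mention a phrase.
    `stories` is an iterable of dicts with 'source', 'date' (YYYY-MM-DD),
    and 'text' keys: a toy stand-in for a Media Cloud-style corpus."""
    counts = Counter()
    needle = phrase.lower()
    for story in stories:
        if needle in story["text"].lower():
            counts[(story["source"], story["date"])] += 1
    return counts
```

Plotting these counts over time for each source is one way to visualize how a story like Trayvon Martin's moved between blogs, broadcast, and social media.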

446. Media Meter

Ethan Zuckerman, Nathan Matias, Matt Stempeck, Rahul Bhargava and Dan Schultz What have you seen in the news this week? And what did you miss? Are you getting the blend of local, international, political, and sports stories you desire? We're building a media-tracking platform to empower you, the individual, as well as news providers themselves, to see what you're getting and what you're missing in your daily consumption and production of media. The first round of modules developed for the platform allows you to compare the breakdown of news topics and byline gender across multiple news sources.
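The byline-gender module mentioned above can be sketched roughly as follows. This is an illustration, not the project's code: real systems rely on large name databases, while the hypothetical `byline_breakdown` below uses a small first-name lookup table and counts anything unmatched as unknown.

```python
def byline_breakdown(articles, name_gender):
    """Tally inferred byline gender per news source. `name_gender` maps
    lowercase first names to 'female' or 'male'; names not in the table
    are counted as 'unknown'. A toy stand-in for the name-database
    lookup a production system would use."""
    breakdown = {}
    for article in articles:
        source = article["source"]
        first_name = article["byline"].split()[0].lower()
        gender = name_gender.get(first_name, "unknown")
        per_source = breakdown.setdefault(
            source, {"female": 0, "male": 0, "unknown": 0})
        per_source[gender] += 1
    return breakdown
```

Reporting the unknown bucket explicitly matters: an honest breakdown shows how much of the byline data the classifier could not resolve.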

447. New Day New Standard: (646) 699-3989

Abdulai Bah, Anjum Asharia, Sasha Costanza-Chock, Rahul Bhargava, Leo Burd, Rebecca Hurwitz, Marisa Jahn and Rodrigo Davies New Day New Standard is an interactive hotline that informs nannies, housekeepers, eldercare-givers, and their employers about the landmark Domestic Workers' Bill of Rights, passed in New York State in November 2010. Operating in English and Spanish, it is a hybrid application that combines regular touchtone phones and Internet-based telephony within an open source framework. The Center for Civic Media and REV- (http://www.rev-it.org) are currently developing Call to Action, a generalized version of the platform and associated GUI that will allow other groups to create interactive hotlines for a wide range of use cases. NDNS was presented to the White House's Open Government Initiative.

448. NewsJack

Sasha Costanza-Chock, Henry Holtzman, Ethan Zuckerman and Daniel E. Schultz NewsJack is a media remixing tool built from Mozilla's Hackasaurus. It allows users to modify the front pages of news sites, changing language and headlines to turn the news into what they wish it could be.

449. NGO 2.0

Jing Wang, Rongting Zhou, Endy Xie, Shi Song NGO2.0 is a project that grew out of the work of MIT's New Media Action Lab. The project recognizes that digital media and Web 2.0 are vital to grassroots NGOs in China, which operate under enormous constraints because of their semi-legal status. Grassroots NGOs cannot compete with government-affiliated NGOs for the attention of mainstream media, which makes it difficult to acquire resources and raise awareness of the causes they promote. The NGO2.0 project serves grassroots NGOs in the underdeveloped regions of China, training them to enhance their digital and social media literacy through Web 2.0 workshops. The project has also rolled out a crowd map that enables the NGO sector and the corporate social responsibility sector to find out what each has accomplished in producing social good.


450. Open Gender Tracker


NEW LISTING

Irene Ros, Adam Hyland, J. Nathan Matias and Ethan Zuckerman Open Gender Tracker is a suite of open source tools and APIs that make it easy for newsrooms and media monitors to collect metrics and gain a better understanding of gender diversity in their publications and audiences. This project has been created in partnership with Irene Ros of Bocoup, with funding from the Knight Foundation.

451. PageOneX

Ethan Zuckerman, Edward Platt, Rahul Bhargava and Pablo Rey Mazon Newspaper front pages are a key source of data about our media ecology: newsrooms spend massive time and effort deciding which stories make it to the front page. Communication researchers have analyzed newspaper front pages for decades using slow, laborious methods. PageOneX simplifies, digitizes, and distributes the process across the net, making coding and visualizing front-page content much easier and democratizing access to newspaper attention data for researchers, citizens, and activists.

452. Social Mirror

Ethan Zuckerman, Nathan Matias, Gaia Marcus and Royal Society of Arts Social Mirror transforms social science research by making offline social network research cheaper, faster, and more reliable. Research on whole-life networks typically involves costly paper forms that take months to process. Social Mirror's digital process respects participant privacy while putting social network analysis within reach of community research and public service evaluation. By providing instant feedback to participants, Social Mirror can also invite people to consider and change their connection to their communities. Our pilot studies have already shown benefits for people facing social isolation.

453. T.I.C.K.L.E.

Ethan Zuckerman, Nathan Matias and Eric Rosenbaum The Toy Interface Construction Kit Learning Environment (T.I.C.K.L.E.) is a universal construction kit for the rest of us. It doesn't require 3D printers or CAD skills. Instead, it's a DIY social process for creating construction interoperability.

454. thanks.fm
NEW LISTING

J Nathan Matias and Mitchel Resnick Thanks.fm is a web platform for thanking and acknowledging your creative collaborators. Add a project, acknowledge individuals, and embed acknowledgments throughout the social web.

455. VoIP Drupal

Leo Burd VoIP Drupal is an innovative framework that brings the power of voice and Internet telephony to Drupal sites. It can be used to build hybrid applications that combine regular touchtone phones, web, SMS, Twitter, IM, and other communication tools in a variety of ways, facilitating community outreach and providing an online presence to those who are illiterate or do not have regular access to computers. VoIP Drupal will change the way you interact with Drupal, your phone, and the web.

456. Vojo.co

Ethan Zuckerman, Sasha Costanza-Chock, Rahul Bhargava, Ed Platt, Becky Hurwitz, Rodrigo Davies, Alex Goncalves, Denise Cheng and Rogelio Lopez Vojo.co is a hosted mobile blogging platform that makes it easy for people to share content to the web from mobile phones via voice calls, SMS, or MMS. Our goal is to make it easier for people in low-income communities to participate in the digital public sphere. You don't need a smart phone or an app to post blog entries or digital stories to Vojo; any phone will do. You don't even need Internet access: Vojo lets you create an account via SMS and start posting right away. Vojo is powered by the VozMob Drupal Distribution, a customized version of the popular free and open source content management system that is being developed through an ongoing codesign process by day laborers, household workers, and a diverse team from the Institute of Popular Education of Southern California (IDEPSCA).

457. VozMob

Sasha Costanza-Chock The VozMob Drupal Distribution is Drupal customized as a mobile blogging platform. VozMob makes it easy to post content to the web from mobile phones via voice calls, SMS, or MMS; you don't need a smart phone or an app to post blog entries, as any phone will do. VozMob allows civic journalists in low-income communities to participate in the digital public sphere. Features include groups, tags, geocoding and maps, MMS filters, and new user registration via SMS. Site editors can send multimedia content out to registered users' mobile phones. The VozMob Drupal Distribution is developed through an ongoing codesign process by day laborers, household workers, and students from the Institute of Popular Education of Southern California (IDEPSCA.org). The project received early support from the Annenberg School for Communication and Journalism at the University of Southern California, MacArthur/HASTAC, Nokia, and others.

458. What's Up

Leo Burd What's Up is a set of tools designed to allow people in a small geographic community to share information, plan events and make decisions, using media that's as broadly inclusive as possible. The platform incorporates low cost LED signs, online and paper event calendars and a simple, yet powerful, phone system that is usable with the lowest-end mobile and touch tone phones.

459. Whose Voices? Twitter Citation in the Media

Ethan Zuckerman, Nathan Matias, Diyang Tang Mainstream media increasingly quote social media sources for breaking news. "Whose Voices" tracks who's getting quoted across topics, showing just how citizen media sources are influencing international news reporting.

