Augur: Mining Human Behaviors from Fiction to Power Interactive Systems
· 2016
· Open Access
· DOI: https://doi.org/10.1145/2858036.2858528
· OA: W2281258985
From smart homes that prepare coffee when we wake, to phones that know not to interrupt us during important conversations, our collective visions of HCI imagine a future in which computers understand a broad range of human behaviors. Today our systems fall short of these visions, however, because this range of behaviors is too large for designers or programmers to capture manually. In this paper, we instead demonstrate it is possible to mine a broad knowledge base of human behavior by analyzing more than one billion words of modern fiction. Our resulting knowledge base, Augur, trains vector models that can predict many thousands of user activities from surrounding objects in modern contexts: for example, whether a user may be eating food, meeting with a friend, or taking a selfie. Augur uses these predictions to identify actions that people commonly take on objects in the world and estimate a user's future activities given their current situation. We demonstrate Augur-powered, activity-based systems such as a phone that silences itself when the odds of you answering it are low, and a dynamic music player that adjusts to your present activity. A field deployment of an Augur-powered wearable camera resulted in 96% recall and 71% precision on its unsupervised predictions of common daily activities. A second evaluation where human judges rated the system's predictions over a broad set of input images found that 94% were rated sensible.
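To make the core idea concrete, the sketch below is a minimal, hypothetical illustration of predicting activities from observed objects via object–activity co-occurrence in fiction-like sentences. It is not the paper's actual pipeline or data: the toy corpus, vocabularies, and function names are invented here, and Augur itself trains vector models over more than a billion words rather than the raw counts shown.

```python
# Toy illustration (not the paper's code): count how often object words and
# activity words co-occur in the same sentence, then rank activities for a
# set of observed objects. All data below is made up for demonstration.
from collections import defaultdict
from typing import Dict, Iterable, List, Tuple

# Tiny stand-in for a large modern-fiction corpus.
CORPUS = [
    "She poured coffee into her mug and opened the laptop to check email",
    "He grabbed his fork and knife and started eating the pasta",
    "They met at the cafe table and talked for an hour over coffee",
    "She raised her phone smiled at the camera and took a selfie",
]

# Hypothetical vocabularies; Augur learns these at far larger scale.
OBJECTS = {"coffee", "mug", "laptop", "fork", "knife", "pasta", "table", "phone", "camera"}
ACTIVITIES = {"eating", "talked", "selfie", "check"}


def cooccurrence_counts(corpus: Iterable[str]) -> Dict[str, Dict[str, int]]:
    """Count sentence-level co-occurrences of object words with activity words."""
    counts: Dict[str, Dict[str, int]] = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        tokens = set(sentence.lower().split())
        for obj in OBJECTS & tokens:
            for act in ACTIVITIES & tokens:
                counts[obj][act] += 1
    return counts


def predict_activity(observed: Iterable[str],
                     counts: Dict[str, Dict[str, int]]) -> List[Tuple[str, float]]:
    """Rank activities by per-object normalized co-occurrence with the observed objects."""
    scores: Dict[str, float] = defaultdict(float)
    for obj in observed:
        total = sum(counts[obj].values()) or 1
        for act, c in counts[obj].items():
            scores[act] += c / total
    return sorted(scores.items(), key=lambda kv: -kv[1])


if __name__ == "__main__":
    counts = cooccurrence_counts(CORPUS)
    # With a fork and pasta in view, "eating" should rank highest.
    print(predict_activity({"fork", "pasta"}, counts))
```

At the scale the abstract describes, raw tallies like these would be replaced by learned vector representations, which is what lets Augur generalize to thousands of activities and unseen object combinations.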