This page showcases our projects (published after August 2014).


Phones on Wheels: Exploring Interaction for Smartphones with Kinetic Capabilities
This work introduces novel interactions and applications for smartphones with kinetic capabilities. We develop an accessory module with robot wheels for a smartphone. With this module, the smartphone can move linearly or rotate with sufficient power. The module also includes rotary encoders, allowing the wheels to serve as an input modality. We demonstrate a series of novel interactions for mobile devices with kinetic capabilities through three applications.

T. Hiraki, K. Narumi, K. Yatani, and Y. Kawahara, “Phones on Wheels: Exploring Interaction for Smartphones with Kinetic Capabilities,” in Proceedings of the 29th Annual Symposium on User Interface Software and Technology (UIST 2016 Adjunct), 2016.
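Because the module reports wheel rotations through its rotary encoders, the same counts that drive input can also be used to dead-reckon the phone's motion. A minimal differential-drive odometry sketch (all parameter values here are hypothetical placeholders, not taken from the paper):

```python
import math

def odometry(left_ticks, right_ticks, ticks_per_rev=360,
             wheel_radius_mm=15.0, axle_width_mm=70.0):
    """Convert encoder tick counts from the two wheels into the phone's
    forward displacement (mm) and heading change (radians)."""
    circumference = 2 * math.pi * wheel_radius_mm
    d_left = left_ticks / ticks_per_rev * circumference
    d_right = right_ticks / ticks_per_rev * circumference
    forward = (d_left + d_right) / 2.0          # average of both wheels
    d_theta = (d_right - d_left) / axle_width_mm  # differential turn
    return forward, d_theta
```

Equal tick counts yield pure translation; opposite counts yield rotation in place, matching the linear-move and rotate behaviors described above.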
An Interactive System for Authoring Infographics
Infographics are increasingly used as a means of data visualization. In data visualization, storytelling is important for helping readers reach a better understanding. Storytelling involves three key processes: exploring the data, making a story, and telling the story. This paper proposes a system that supports the telling of the story in particular, by automatically generating images in styles commonly used in infographics. Images are generated by retrieving pictograms from the Internet and processing them.

LumiO: A Plaque-aware Toothbrush
Toothbrushing plays an important role in daily dental plaque removal for preventive dentistry. Prior work has investigated improvements to toothbrushing with sensing technologies, but existing toothbrushing support focuses mostly on estimating brushing coverage. Users thus have only indirect information about how well their toothbrushing removes dental plaque. We present LumiO, a toothbrush that offers users continuous feedback on the amount of plaque on their teeth. LumiO uses a well-known method for plaque detection called Quantitative Light-induced Fluorescence (QLF). QLF exploits the red fluorescence that bacteria in plaque exhibit under blue-violet light: the light excites this fluorescence, and a camera with an optical filter can capture plaque in pink. We incorporate this technology into an electric toothbrush to improve plaque removal in daily dental care. This paper first discusses related work on sensing oral activities and interaction as well as technology-supported dental care. We then describe the principles of QLF, the hardware design of LumiO, and our vision-based plaque detection method. Our evaluations show that the vision-based plaque detection method with three machine learning techniques achieves F-measures of 0.68–0.92 under user-dependent training. Qualitative evidence also suggests that study participants gained improved awareness of plaque and built confidence in their toothbrushing.

T. Yoshitani, M. Ogata, and K. Yatani, “LumiO: A Plaque-aware Toothbrush,” in Proceedings of UbiComp 2016, 2016.
[Paper] [Video]
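The paper's detector is a machine-learning pipeline; as a rough intuition for how QLF makes plaque separable by color, here is a toy threshold sketch that flags red-dominant (pink-looking) pixels. The threshold values are made up for illustration, not taken from the paper:

```python
def plaque_mask(pixels, red_min=140, ratio_min=1.6):
    """Flag pixels whose red channel dominates green, a crude proxy for
    the pink fluorescence plaque shows under blue-violet illumination.
    `pixels` is a list of (r, g, b) tuples; returns a list of booleans."""
    mask = []
    for r, g, b in pixels:
        is_pink = r >= red_min and r > ratio_min * g and r > b
        mask.append(is_pink)
    return mask
```

In the actual system, features like these feed user-dependent classifiers rather than a fixed threshold, which is what yields the reported F-measures.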

Autocomplete Hand-drawn Animations
Hand-drawn animation is a major art form and communication medium, but it can be challenging to produce. We present a system that helps people create frame-by-frame animations through hand-drawn sketches. We design our interface to be minimalistic; it contains only a canvas for sketches and a few controls. As users draw on the canvas, our system silently analyzes all past sketches and predicts what might be drawn in the future, across both spatial locations and temporal frames. The interface also offers suggestions to beautify existing drawings. Users can accept, ignore, or modify these predictions, visualized on the canvas, with simple gestures. Our method considers both high-level structures and low-level repetitions, significantly reducing the manual workload of creating multiple animation frames while helping produce better results. We evaluate our system through a preliminary user study and confirm that it enhances both users’ objective performance and subjective satisfaction.

J. Xing, L.-Y. Wei, T. Shiratori, and K. Yatani. “Autocomplete Hand-drawn Animations,” in Proceedings of SIGGRAPH Asia 2015, 2015.
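The simplest form of the temporal prediction described above is extrapolating a repeated stroke's motion from recent frames. A toy stand-in (the paper's method handles full stroke geometry and spatial repetitions, not just positions):

```python
def predict_next(frames):
    """Given per-frame stroke positions (x, y), predict the next frame's
    position by linearly extrapolating the motion between the last two
    frames. Returns None when there is no history to extrapolate from."""
    if not frames:
        return None
    if len(frames) < 2:
        return frames[-1]  # no motion observed yet; predict no movement
    (x0, y0), (x1, y1) = frames[-2], frames[-1]
    return (2 * x1 - x0, 2 * y1 - y0)
```

A prediction like this would be drawn faintly on the canvas for the user to accept, ignore, or modify.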

Mixed-Initiative Approaches to Global Editing in Slideware
Good alignment and repetition of objects across presentation slides can facilitate visual processing and contribute to audience understanding. However, creating and maintaining such consistency during slide design is difficult. To solve this problem, we present two complementary tools: (1) StyleSnap, which increases the alignment and repetition of objects by adaptively clustering object edge positions and allowing parallel editing of all objects snapped to the same spatial extent; and (2) FlashFormat, which infers the least-general generalization of editing examples and applies it throughout the selected range. In user studies of repetitive styling tasks, StyleSnap and FlashFormat were 4–5 times and 2–3 times faster, respectively, than conventional editing. Both use a mixed-initiative approach to improve the consistency of slide decks and generalize to any situation involving direct editing across disjoint visual spaces.

D. Edge, S. Gulwani, N. Milic-Frayling, M. Raza, R. A. Saputra, C. Wang, and K. Yatani, “Mixed-Initiative Approaches to Global Editing in Slideware,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI 2015), 2015.
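FlashFormat's core idea, inferring the least-general generalization of editing examples, can be sketched over flat attribute dictionaries: keep the attribute assignments shared by all examples, and leave a hole where examples disagree. This is a simplification for illustration; the actual tool operates on richer slide-object edits:

```python
def lgg(examples):
    """Least-general generalization of formatting-edit examples, each a
    dict of attribute -> value. Attributes present in every example are
    kept; a value of None marks an attribute the examples set to
    different values (a 'hole' to be filled per target object)."""
    shared_keys = set(examples[0])
    for example in examples[1:]:
        shared_keys &= set(example)
    general = {}
    for key in shared_keys:
        values = {example[key] for example in examples}
        general[key] = examples[0][key] if len(values) == 1 else None
    return general
```

Applying the result to every object in the selected range (skipping the None holes) reproduces the example edits globally, which is the "global editing" behavior the abstract describes.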

NUGU: A Group-based Intervention App for Improving Self-Regulation of Limiting Smartphone Use
Our preliminary study reveals that individuals use various management strategies for limiting smartphone use, ranging from keeping smartphones out of reach to removing apps. However, we also found that users often have difficulty maintaining their chosen strategies due to a lack of self-regulation. In this paper, we present NUGU, a group-based intervention app that improves self-regulation of smartphone use by leveraging social support: groups of people limit their use together by sharing their limiting information. NUGU is designed based on social cognitive theory and was developed iteratively through two pilot tests. Our three-week user study (n = 62) demonstrated that, compared with users of its non-social counterpart, NUGU users significantly decreased their usage and improved their perceived level of managing disturbances. Furthermore, our exit interviews confirmed that NUGU’s design elements are effective for achieving limiting goals.

M. Ko, S. Yang, J. Lee, C. Heizmann, J. Jeong, U. Lee, D. H. Shin, K. Yatani, J. Song, and K. Chung, “NUGU: A Group-based Intervention App for Improving Self-Regulation of Limiting Smartphone Use,” in Proceedings of the ACM conference on Computer-Supported Cooperative Work and Social Computing (CSCW 2015), 2015.

ReviewCollage: A Mobile Interface for Direct Comparison Using Online Reviews
Review comments posted on online websites can help users decide on a product to purchase or a place to visit. They can also be useful for closely comparing a couple of candidate entities. However, users may have to read different webpages back and forth to compare them, which is undesirable, particularly on a mobile device. We present ReviewCollage, a mobile interface that aggregates information about two reviewed entities in a one-page view. ReviewCollage uses attribute-value pairs, known to be effective for review text summarization, and highlights the similarities and differences between the entities. Our user study confirms that ReviewCollage helps users compare two entities and make a decision within a couple of minutes, at least as quickly as with existing summarization interfaces. It also reveals that ReviewCollage could be most useful when the two entities are very similar.

H. Jin, T. Sakai, and K. Yatani, “ReviewCollage: A Mobile Interface for Direct Comparison Using Online Reviews,” in Proceedings of the ACM SIGCHI International Conference on Human Computer Interaction with Mobile Devices and Services (MobileHCI 2014), 2014.
[Paper] [Video] Honorable Mention Award
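The one-page comparison reduces to partitioning the two entities' attribute-value pairs into similarities and differences. A minimal sketch of that partitioning step (the attribute names below are invented examples, and the real system extracts these pairs from review text):

```python
def collage(entity_a, entity_b):
    """Split two entities' attribute -> value dicts into shared pairs
    (similarities) and per-attribute (a, b) value pairs (differences),
    mirroring what a side-by-side comparison view would highlight."""
    same, diff = {}, {}
    for attr in set(entity_a) | set(entity_b):
        value_a = entity_a.get(attr)
        value_b = entity_b.get(attr)
        if value_a == value_b:
            same[attr] = value_a
        else:
            diff[attr] = (value_a, value_b)
    return same, diff
```

When two entities are very similar, almost everything lands in `same`, so the few entries in `diff` carry the decision, consistent with the study's finding.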

TalkZones: Section-based Time Support for Presentations
Managing time while presenting is challenging, but mobile devices offer both convenience and flexibility in their ability to support the end-to-end process of setting, refining, and following presentation time targets. From an initial HCI-Q study of 20 presenters, we identified the need to set such targets per “zone” of consecutive slides (rather than per slide or for the whole talk), as well as the need for feedback that accommodates two distinct attitudes towards presentation timing. These findings led to the design of TalkZones, a mobile application for timing support. When giving a 20-slide, 6m40s rehearsed but interrupted talk, 12 participants who used TalkZones registered a mean overrun of only 8s, compared with 1m49s for 12 participants who used a regular timer. We observed a similar 2% overrun in our final study of 8 speakers giving rehearsed 30-minute talks in 20 minutes. Overall, we show that TalkZones can encourage presenters to advance slides before it is too late to recover, even under the adverse timing conditions of short and shortened talks.

B. Saket, S. Yang, H. Z. Tan, K. Yatani, and D. Edge, “TalkZones: Section-based Time Support for Presentations,” in Proceedings of the ACM SIGCHI International Conference on Human Computer Interaction with Mobile Devices and Services (MobileHCI 2014), 2014.
[Paper] [Video] Honorable Mention Award
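The per-zone time targets described above amount to simple bookkeeping: allocate the talk's total time across zones of consecutive slides, then compare elapsed time against the cumulative budget. A minimal sketch (proportional allocation by slide count is an assumption for illustration; presenters could also set zone budgets by hand):

```python
def zone_budgets(zone_slide_counts, total_seconds):
    """Allocate a time budget (seconds) to each zone of consecutive
    slides, proportional to how many slides the zone contains."""
    total_slides = sum(zone_slide_counts)
    return [total_seconds * n / total_slides for n in zone_slide_counts]

def slack(budgets, elapsed_seconds, zone_index):
    """Seconds ahead (+) or behind (-) schedule at the end of the
    zone with index `zone_index`."""
    return sum(budgets[:zone_index + 1]) - elapsed_seconds
```

Negative slack is the signal to nudge the presenter to advance slides before it is too late to recover.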