[1] Besserer D, Bäurle J, Nikic A, et al. FitMirror: A smart mirror for positive affect in everyday user morning routines[C]//Proceedings of the Workshop on Multimodal Analyses Enabling Artificial Agents in Human-Machine Interaction. New York: Association for Computing Machinery, 2016: 48-55.
[2] Hippocrate A A E, Luhanga E T, Masashi T, et al. Smart gyms need smart mirrors: Design of a smart gym concept through contextual inquiry[C]//Proceedings of the 2017 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2017 ACM International Symposium on Wearable Computers. New York: Association for Computing Machinery, 2017: 658-661.
[3] Mohamed A S A, Wahab M N A, Suhaily S S, et al. Smart mirror design powered by Raspberry Pi[C]//Proceedings of the 2018 Artificial Intelligence and Cloud Computing Conference. New York: Association for Computing Machinery, 2018: 166-173.
[4] Salgian A, Vickerman D, Vassallo D. A smart mirror for music conducting exercises[C]//Proceedings of the Thematic Workshops of ACM Multimedia 2017. New York: Association for Computing Machinery, 2017: 544-549.
[5] Dang C T, Aslan I, Lingenfelser F, et al. Towards somaesthetic smarthome designs: Exploring potentials and limitations of an affective mirror[C]//Proceedings of the 9th International Conference on the Internet of Things. New York: Association for Computing Machinery, 2019: 1-8.
[6] Chu M, Dalal B, Walendowski A, et al. Countertop responsive mirror: Supporting physical retail shopping for sellers, buyers and companions[C]//Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. New York: Association for Computing Machinery, 2010: 2533-2542.
[7] Parlangeli O, Guidi S, Marchigiani E, et al. Shopping online and online design: The role of prospective memory in the use of online product configurators[C]//Proceedings of the 13th Biannual Conference of the Italian SIGCHI Chapter: Designing the Next Interaction. New York: Association for Computing Machinery, 2019: 1-7.
[8] Kim H, Huh B K, Im S H, et al. Finding satisfactory transparency: An empirical study on public transparent displays in a shop context[C]//Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems. New York: Association for Computing Machinery, 2015: 1151-1156.
[9] Weißker T, Berst A, Hartmann J, et al. The massive mobile multiuser framework: Enabling ad-hoc realtime interaction on public displays with mobile devices[C]//Proceedings of the 5th ACM International Symposium on Pervasive Displays. New York: Association for Computing Machinery, 2016: 168-174.
[10] Newn J, Velloso E, Carter M, et al. Multimodal segmentation on a large interactive tabletop: Extending interaction on horizontal surfaces with gaze[C]//Proceedings of the 2016 ACM International Conference on Interactive Surfaces and Spaces. New York: Association for Computing Machinery, 2016: 251-260.
[11] Hansen T R, Eriksson E, Lykke-Olesen A. Mixed interaction space: Designing for camera based interaction with mobile devices[C]//CHI’05 Extended Abstracts on Human Factors in Computing Systems. New York: Association for Computing Machinery, 2005: 1933-1936.
[12] Bohari U, Chen T J. To draw or not to draw: Recognizing stroke-hover intent in non-instrumented gesture-free mid-air sketching[C]//Proceedings of the 23rd International Conference on Intelligent User Interfaces. New York: Association for Computing Machinery, 2018: 177-188.
[13] Ishak E W, Feiner S K. Interacting with hidden content using content-aware free-space transparency[C]//Proceedings of the 17th Annual ACM Symposium on User Interface Software and Technology. New York: Association for Computing Machinery, 2004: 189-192.
[14] Kubo Y, Takada R, Shizuki B, et al. Exploring context-aware user interfaces for smartphone-smartwatch cross-device interaction[J]. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2017, 1(3): Article 69.
[15] Chen X, Marquardt N, Tang A, et al. Extending a mobile device’s interaction space through body-centric interaction[C]//Proceedings of the 14th International Conference on Human-computer Interaction with Mobile Devices and Services. New York: Association for Computing Machinery, 2012: 151-160.
[16] Loorak M H, Zhou W, Trinh H, et al. Hand-over-face input sensing for interaction with smartphones through the built-in camera[C]//Proceedings of the 21st International Conference on Human-Computer Interaction with Mobile Devices and Services. New York: Association for Computing Machinery, 2019: 1-12.
[17] Mistry P, Maes P. Mouseless: A computer mouse as small as invisible[C]//CHI’11 Extended Abstracts on Human Factors in Computing Systems. New York: Association for Computing Machinery, 2011: 1099-1104.
[18] Lee S S, Chae J, Kim H, et al. Towards more natural digital content manipulation via user freehand gestural interaction in a living room[C]//Proceedings of the 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing. New York: Association for Computing Machinery, 2013: 617-626.
[19] Yeo H S, Feng W, Huang M X. WATouCH: Enabling direct input on non-touchscreen using smartwatch’s photoplethysmogram and IMU sensor fusion[C]//Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. New York: Association for Computing Machinery, 2020: 1-10.
[20] Bostan I, Buruk O T, Canat M, et al. Hands as a controller: User preferences for hand specific on-skin gestures[C]//Proceedings of the 2017 Conference on Designing Interactive Systems. New York: Association for Computing Machinery, 2017: 1123-1134.
[21] Sharma R, Patterh M. Face recognition using face alignment and PCA techniques: A literature survey[J]. IOSR Journal of Computer Engineering (IOSR-JCE), 2015, 17(4): 17-30.
[22] Jin X, Tan X. Face alignment in-the-wild: A survey[J]. Computer Vision and Image Understanding, 2017, 162: 1-22.
[23] Wang N, Gao X, Tao D, et al. Facial feature point detection: A comprehensive survey[J]. Neurocomputing, 2018, 275: 50-65.
[24] Tan S, Chen D, Guo C, et al. A robust shape reconstruction method for facial feature point detection[J]. Computational Intelligence and Neuroscience, 2017, 2017: 1-11.
[25] Wu W, Qian C, Yang S, et al. Look at boundary: A boundary-aware face alignment algorithm[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE, 2018: 2129-2138.
[26] Zhang J, Hu H, Shen G. Joint stacked hourglass network and salient region attention refinement for robust face alignment[J]. ACM Transactions on Multimedia Computing, Communications, and Applications, 2020, 16(1): 1-18.
[27] Chang H, Lu J, Yu F, et al. PairedCycleGAN: Asymmetric style transfer for applying and removing makeup[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE, 2018: 40-48.
[28] Bao R, Yu H, Liu S, et al. Automatic makeup based on generative adversarial nets[C]//Proceedings of the 10th International Conference on Internet Multimedia Computing and Service. New York: Association for Computing Machinery, 2018: 1-5.
[29] Li Y, Huang H, Cao J, et al. Disentangled representation learning of makeup portraits in the wild[J]. International Journal of Computer Vision, 2020, 128(8-9): 2166-2184.
[30] Li T, Qian R, Dong C, et al. BeautyGAN: Instance-level facial makeup transfer with deep generative adversarial network[C]//Proceedings of the 26th ACM International Conference on Multimedia. New York: Association for Computing Machinery, 2018: 645-653.
[31] Park J, Kim H, Ji S, et al. An automatic virtual makeup scheme based on personal color analysis[C]//Proceedings of the 12th International Conference on Ubiquitous Information Management and Communication. New York: Association for Computing Machinery, 2018: 1-7.
[32] Evangelista B, Meshkin H, Kim H, et al. Realistic AR makeup over diverse skin tones on mobile[C]//SIGGRAPH Asia 2018 Posters. New York: Association for Computing Machinery, 2018: 1-2.
[33] Liu L, Xing J, Liu S, et al. Wow! You are so beautiful today![J]. ACM Transactions on Multimedia Computing, Communications, and Applications, 2014, 11(1s): 1-20.
[34] Ou X, Liu S, Cao X, et al. Beauty eMakeup: A deep makeup transfer system[C]//Proceedings of the 24th ACM International Conference on Multimedia. New York: Association for Computing Machinery, 2016: 701-702.
[35] Nguyen T V, Liu L. Smart mirror: Intelligent makeup recommendation and synthesis[C]//Proceedings of the 25th ACM International Conference on Multimedia. New York: Association for Computing Machinery, 2017: 1253-1254.
[36] Zhu M L, Yao Y, Jiang Y L. A survey of augmented reality[J]. Journal of Image and Graphics, 2004(7): 3-10.
[37] Bermano A H, Billeter M, Iwai D, et al. Makeup Lamps: Live augmentation of human faces via projection[J]. Computer Graphics Forum, 2017, 36(2): 311-323.
[38] Treepong B, Mitake H, Hasegawa S. Makeup creativity enhancement with an augmented reality face makeup system[J]. Computers in Entertainment, 2018, 16(4): 6-17.
[39] Nakagawa M, Tsukada K, Siio I. Smart makeup system: Supporting makeup using lifelog sharing[C]//Proceedings of the 13th International Conference on Ubiquitous Computing. New York: Association for Computing Machinery, 2011: 483-484.
[40] Truong A, Chi P, Salesin D, et al. Automatic generation of two-level hierarchical tutorials from instructional makeup videos[C]//Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. New York: Association for Computing Machinery, 2021: 1-16.
[41] Hung M H, Yang J, Hsieh C H. A new virtual makeup system based on golden sample search[C]//Proceedings of the 2020 4th International Conference on Electronic Information Technology and Computer Engineering. New York: Association for Computing Machinery, 2020: 350-354.
[42] Rahman A S M M, Tran T T, Hossain S A, et al. Augmented rendering of makeup features in a smart interactive mirror system for decision support in cosmetic products selection[C]//2010 IEEE/ACM 14th International Symposium on Distributed Simulation and Real Time Applications. New York: IEEE, 2010: 203-206.
[43] Nishimura A, Siio I. iMake: Eye makeup design generator[C]//Proceedings of the 11th Conference on Advances in Computer Entertainment Technology. New York: Association for Computing Machinery, 2014: 1-6.
[44] Beaudouin-Lafon M. Designing interaction, not interfaces[C]//Proceedings of the Working Conference on Advanced Visual Interfaces. New York: Association for Computing Machinery, 2004: 15-22.
[45] Edelberg J, Kilrain J. Design systems: Consistency, efficiency & collaboration in creating digital products[C]//Proceedings of the 38th ACM International Conference on Design of Communication. New York: Association for Computing Machinery, 2020: 1-3.
[46] Johnston V, Black M, Wallace J, et al. A framework for the development of a dynamic adaptive intelligent user interface to enhance the user experience[C]//Proceedings of the 31st European Conference on Cognitive Ergonomics. New York: Association for Computing Machinery, 2019: 32-35.
[47] Jacob R J K, Girouard A, Hirshfield L M, et al. Reality-based interaction: Unifying the new generation of interaction styles[C]//CHI’07 Extended Abstracts on Human Factors in Computing Systems. New York: Association for Computing Machinery, 2007: 2465-2470.
[48] O'hara K, Harper R, Mentis H, et al. On the naturalness of touchless: Putting the "interaction" back into NUI[J]. ACM Transactions on Computer-Human Interaction, 2013, 20(1): 5-25.
[49] Lenz E, Hassenzahl M, Diefenbach S. How performing an activity makes meaning[C]//Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems. New York: Association for Computing Machinery, 2019: 1-6.
[50] Catarci T, Amendola M, Bertacchini F, et al. Digital interaction: Where are we going?[C]//Proceedings of the 2018 International Conference on Advanced Visual Interfaces. New York: Association for Computing Machinery, 2018: 1-5.