1) The document discusses Darwin Phones, a mobile phone sensing system that applies distributed computing concepts to allow for the evolution, pooling, and collaborative inference of classification models on mobile devices.
2) Darwin Phones aims to address the limitations of traditional supervised mobile sensing approaches that require retraining models for different environments and do not scale well.
3) The key aspects of Darwin Phones include allowing classification models to evolve on devices without supervision, pooling existing models between devices, and enabling collaborative inference across multiple devices.
1. Darwin Phones: the Evolution of Sensing and Inference on Mobile Phones. Emiliano Miluzzo, Cory T. Cornelius, Ashwin Ramaswamy, Tanzeem Choudhury, Zhigang Liu, Andrew T. Campbell. MobiSys 2010. Presenter: Kazuto Shimizu, Sezaki Lab, Dept. of IST, Univ. of Tokyo
2-4. Introduction: Fortunately, the presentation the author used at MobiSys 2010 is available on the web: http://www.cs.dartmouth.edu/~miluzzo/publications.html So, experience top-conference quality from now!!
5. Darwin Phones: the Evolution of Sensing and Inference on Mobile Phones. Emiliano Miluzzo*, Cory T. Cornelius*, Ashwin Ramaswamy*, Tanzeem Choudhury*, Zhigang Liu**, Andrew T. Campbell*. *CS Department, Dartmouth College; **Nokia Research Center, Palo Alto
21-22. application distribution: deploy apps onto millions of phones at the blink of an eye; collect huge amounts of data for research purposes
25-26. cloud infrastructure: cloud backend support; we want to push intelligence to the phone and preserve the phone user experience (battery lifetime, ability to make calls, etc.)
43. societal scale sensing: reality mining using mobile phones will play a big role in the future global mobile sensor network
44-45. end of PR – now darwin: a small building block towards the big vision
46-47. from motes to mobile phones: the evolution of sensing and inference on mobile phones
50-55. why darwin? mobile phone sensing today: train classification model X in the lab, deploy classifier X; train classification model X' in the lab, deploy classifier X'... a fully supervised approach doesn't scale!
56-59. why darwin? the same classifier does not scale to multiple environments (e.g., quiet and noisy environments); darwin creates new classification models transparently to the user (classification model evolution)
60-61. why darwin? ability for an application to rapidly scale to many devices; darwin re-uses classification models when possible (classification model pooling)
62-63. why darwin? leverage the large ensemble of in-situ resources; darwin exploits spatial diversity and cooperation between phones to alleviate the "sensing context" problem (collaborative inference)
67-70. darwin phases: (1) supervised initial training (derive a model seed), (2) unsupervised classification model evolution, (3) classification model pooling, (4) collaborative inference
72-75. classification model training: a sensed event is filtered (silence suppression + voicing) and features are extracted (MFCC) on the phone; the MFCCs are sent to the backend, which trains the model (GMM) and sends the model plus a baseline back to the phone. The split keeps feature extraction (low computation) on the phone and model training (high computation) on the backend.
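Below is a rough Python sketch of that split, under stated assumptions: the choice of librosa and scikit-learn, the sample rate, the number of MFCC coefficients, and the number of GMM components are all illustrative rather than values from the paper, and voicing detection is reduced to simple silence splitting. The actual Darwin implementation is written in C/C++ on the phone and the backend.

```python
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def extract_features(wav_path, n_mfcc=19, silence_db=25):
    """Phone side: silence suppression followed by MFCC feature extraction."""
    audio, sr = librosa.load(wav_path, sr=8000)                   # illustrative sample rate
    intervals = librosa.effects.split(audio, top_db=silence_db)   # drop silent gaps
    voiced = np.concatenate([audio[s:e] for s, e in intervals]) if len(intervals) else audio
    mfcc = librosa.feature.mfcc(y=voiced, sr=sr, n_mfcc=n_mfcc)
    return mfcc.T                                                 # one feature vector per frame

def backend_train(features, n_components=20):
    """Backend side: fit a speaker GMM and compute a baseline likelihood."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="diag")
    gmm.fit(features)
    baseline = gmm.score(features)   # mean log-likelihood of the training data,
                                     # later used by the phone to decide when to evolve
    return gmm, baseline
```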
78-80. classification model evolution: the phone determines when to evolve by checking whether newly sensed data still matches the current model. Match? YES: do not evolve. Match? NO: evolve (train a new model using the backend, as before).
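A hedged sketch of that decision, reusing gmm and baseline from the training sketch above: the phone scores newly sensed feature frames against its current model and evolves when the likelihood drops noticeably below the baseline. The margin value is an illustrative parameter, not one taken from the paper.

```python
def should_evolve(gmm, baseline, new_features, margin=5.0):
    """Return True when the current model no longer matches the sensed data."""
    score = gmm.score(new_features)     # mean log-likelihood of the new frames
    # match? YES -> keep the model; match? NO -> ask the backend to train a new one
    return score < baseline - margin
```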
85-93. classification model pooling: phones A, B, and C each start out holding models for only some of the speakers around them. We have two options: (1) train a new classifier for each missing speaker (costly in power and inference delay), or (2) re-use the already available classifiers. Darwin pools: the phones exchange models until each phone holds speaker A's, B's, and C's models, and they are ready to run the collaborative inference algorithm, local inference first and final inference later.
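A toy sketch of pooling, with hypothetical names: before asking the backend to train a classifier for a co-located speaker, a phone first checks whether a neighbouring phone already holds that speaker's model and simply copies it.

```python
def pool_models(local_models, neighbour_stores, needed_speakers):
    """Re-use neighbours' classifiers instead of retraining them."""
    for speaker in needed_speakers:
        if speaker in local_models:
            continue                                     # already have this speaker's model
        for store in neighbour_stores:
            if speaker in store:
                local_models[speaker] = store[speaker]   # pool the existing model
                break
        # if no neighbour has it, fall back to training via the backend (not shown)
    return local_models

# example: phone A starts with only its own model; B and C share theirs
phone_a = {"A": "gmm_A"}
neighbours = [{"B": "gmm_B"}, {"C": "gmm_C"}]
print(pool_models(phone_a, neighbours, ["A", "B", "C"]))
# -> {'A': 'gmm_A', 'B': 'gmm_B', 'C': 'gmm_C'}
```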
95-96. collaborative inference: two phases. (1) local inference, running independently and in parallel on each mobile phone; (2) final inference, after collecting the local inference results, to get better confidence about the final classification result.
98-104. collaborative inference, local inference (LI): speaker A is speaking. A's LI results: Prob(A speaking) = 0.65, Prob(B speaking) = 0.25, Prob(C speaking) = 0.10. B's LI results: Prob(A speaking) = 0.79, Prob(B speaking) = 0.11, Prob(C speaking) = 0.10. C's LI results: Prob(A speaking) = 0.30, Prob(B speaking) = 0.67, Prob(C speaking) = 0.03. Individual classification can be misleading: phone C's top guess is speaker B even though A is speaking.
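A sketch of the local inference step, assuming each phone holds one GMM per pooled speaker as in the training sketch earlier. Turning the per-model log-likelihoods into probabilities with a softmax-style normalization is my reading of the Prob(X speaking) values on the slides, not a literal transcription of the paper's algorithm.

```python
import numpy as np

def local_inference(models, window_features):
    """Score one audio window against every pooled speaker model."""
    speakers = list(models)
    scores = np.array([models[s].score(window_features) for s in speakers])
    weights = np.exp(scores - scores.max())     # subtract max to avoid underflow
    probs = weights / weights.sum()             # normalize into a distribution
    return dict(zip(speakers, probs))           # e.g. {'A': 0.65, 'B': 0.25, 'C': 0.10}
```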
105-110. collaborative inference, final inference (FI): each phone gathers the LI results from all phones, multiplies the per-speaker probabilities across phones, and normalizes. FI results (normalized): Confidence(A speaking) = 1, Confidence(B speaking) = 0.12, Confidence(C speaking) = 0.002. Collaborative inference compensates for the inaccuracies of the individual inferences.
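A minimal, self-contained sketch of the final inference step as described on these slides, using the LI numbers shown above: each phone multiplies the per-speaker probabilities across all phones and normalizes by the largest product.

```python
# local inference results gathered by one phone (values from the slides)
li_results = {
    "phone A": {"A": 0.65, "B": 0.25, "C": 0.10},
    "phone B": {"A": 0.79, "B": 0.11, "C": 0.10},
    "phone C": {"A": 0.30, "B": 0.67, "C": 0.03},
}

speakers = ["A", "B", "C"]
products = {s: 1.0 for s in speakers}
for probs in li_results.values():
    for s in speakers:
        products[s] *= probs[s]                 # combine evidence from every phone

top = max(products.values())
confidence = {s: round(products[s] / top, 3) for s in speakers}
print(confidence)   # {'A': 1.0, 'B': 0.12, 'C': 0.002}
```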
112-115. evaluation: written in C/C++ and implemented on the Nokia N97 and iPhone in support of a speaker recognition app, with a Unix server as the backend; a lightweight reliable protocol transfers models from the server and between phones, and a UDP multicast protocol distributes local inference results between phones.
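A minimal UDP multicast sketch standing in for the LI-distribution protocol mentioned on the slide; the group address, port, and JSON payload are my assumptions, not the paper's wire format.

```python
import json
import socket
import struct

GROUP, PORT = "224.1.1.1", 5007                 # illustrative multicast group and port

def send_li(results):
    """Multicast this phone's local inference results to nearby phones."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    sock.sendto(json.dumps(results).encode(), (GROUP, PORT))

def receive_li():
    """Block until one neighbour's local inference results arrive."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    membership = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)
    data, _ = sock.recvfrom(4096)
    return json.loads(data)
```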
116. experimental scenarios: up to eight people in conversation in three different scenarios (quiet indoor, down the street, in a restaurant)
118-119. need for evolution: train indoor, evaluate outdoor; the plot shows the accuracy improvement after evolution.
120-126. indoor quiet scenario: 8 people talking around a table. Collaborative inference + classification model evolution boost the performance of a mobile sensing app.
127-131. impact of the number of mobile phones: the larger the number of mobile phones collaborating, the better the final inference result.
132-138. battery lifetime vs. inference responsiveness: high responsiveness comes at the cost of short battery life; smart duty-cycling techniques and machine learning algorithms with better energy usage on mobile phones need to be identified.
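As a toy illustration of that trade-off, the loop below alternates a short sensing burst with a sleep interval; a longer sleep stretches battery lifetime at the cost of inference responsiveness. The interval values and the sense/classify callables are placeholders, not part of the paper.

```python
import time

def sensing_loop(sense, classify, active_s=2.0, sleep_s=10.0, rounds=3):
    """Duty-cycled sensing: trade responsiveness for battery lifetime via sleep_s."""
    for _ in range(rounds):
        window = sense(active_s)    # sample the microphone for a short burst
        classify(window)            # run local / collaborative inference on it
        time.sleep(sleep_s)         # sleep to save energy (lower responsiveness)

# example with dummy callables
sensing_loop(lambda t: f"{t}s of audio", print, sleep_s=0.1)
```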
139-144. a quick recap: smartphones are everywhere – let's exploit their collective sensing and computation capabilities. Smartphone sensing opens up new frontiers: applications can be spread and big data collected at unprecedented scale, enabling endless research opportunities. Continuous sensing is still challenging; efficient mobile sensing requires preserving the phone user experience (need for energy-efficient ML algorithms and smart duty-cycling techniques). And please bear in mind: ML algorithms should perform reliably in the wild.
145. Mobile Phone Sensing is the Next Big Thing!
146. Thank you!! Mobile Sensing Group http://sensorlab.cs.dartmouth.edu miluzzo@cs.dartmouth.edu Emiliano Miluzzo
147. Personal Opinion. Contribution: implemented a modality for unsupervised labeling; built and implemented the concept of collaborative sensing. Merit: drastic improvement in accuracy; shortened learning time. Future work: energy management; machine resource usage.
148. Thank you. REFERENCE: Emiliano Miluzzo, Cory T. Cornelius, Ashwin Ramaswamy, Tanzeem Choudhury, Zhigang Liu, Andrew T. Campbell. "Darwin Phones: the Evolution of Sensing and Inference on Mobile Phones," MobiSys 2010. http://www.cs.dartmouth.edu/~miluzzo/publications.html (talk slides, PDF, video, and press coverage available)
150. Author Background: Emiliano Miluzzo (Ph.D.), Andrew T. Campbell (Professor), etc. Mobile Sensing Group, Dartmouth College, Hanover, NH, USA. http://sensorlab.cs.dartmouth.edu/index.html
155. Sample Application: Speaker Model Computation. Feature extraction: MFCC (Mel-Frequency Cepstral Coefficients), a leading approach for speech feature extraction [16,17,42] that emphasizes the parts of the signal humans actually use. Machine learning algorithm: GMM (Gaussian Mixture Model), commonly used for unsupervised machine learning.
156. Privacy & Security: store and share not the raw data but the model and features (protected, of course); the user can opt in and out at any time; Darwin meets the requirements of running on a trusted device, subscribing to a trusted system, and running as a trusted application (i.e., pre-installed or downloaded from a trusted third party).