2021 Google AI Boot Camp
MLSS 2021 TAIPEI (Machine Learning Summer School)
■ Event name: MLSS 2021 TAIPEI
■ Course schedule:
2021/8/2 – 2021/8/6 (Mon–Fri), 9:00 AM to 5:30 PM
2021/8/9 – 2021/8/20 (Mon–Fri), 8:00 PM to 10:30 PM
MLSS (http://mlss.cc) is an international machine learning summer school series founded in 2002, whose main purpose is to promote the latest techniques and methods in statistical machine learning and inference. Institutions from different countries apply to host the school each year; the invited lecturers are international experts in the relevant fields, and the topics covered range from foundational knowledge to the latest machine learning practice.
Through this event, the organizers hope to encourage Taiwan's outstanding AI researchers to engage with the international community and to shape Taiwan into a regional AI talent hub in Asia. Building on the domain resources and connections MLSS has accumulated over the years, renowned experts from around the world are invited to lecture, and outstanding students from many countries are recruited to participate.
■ Follow the official Twitter account for the latest news: https://twitter.com/2021Mlss
■ Main target participants:
– Students: Early-mid stage Ph.D., Graduate
– Academics: Post-doctoral Researcher, Professor
– Professionals: Corporate Specialist, Executives
■ Registration deadlines:
– General Program: 2021/6/30
– Standard Program: 2021/6/20
■ Fees:
– General Program: Student: free / Non-student: NTD$1500
– Standard Program: Student: NTD$1200 / Non-student: NTD$3600
■ Notes:
1. The detailed agenda and activities for the course are still being planned; please refer to the announcements on the official event website.
2. The course is taught entirely in English with no Chinese interpretation; participants should have upper-intermediate or better English listening proficiency to follow the lectures.
3. Please read the website carefully for the rights and obligations of each program; payment is only required after you receive the acceptance notification.
👉👉👉 MLSS 2021 TAIPEI event introduction (0518)
Organizers: National Taiwan University AI Research Center (AINTU), NTU Department of Computer Science and Information Engineering, NTU Department of Electrical Engineering, and the Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
Advising agency: Ministry of Science and Technology (MOST)
**This course may be updated at any time; please check the official website for the latest information. No separate notice will be given.**
**The organizers reserve the right to make changes.**
2021 MOST AI Project Cross-Domain Exchange Meeting
Date: January 5, 2021 (Tuesday), 09:00–16:30
Venue: International Conference Hall, 1F, Electronics and Information Research Center, National Chiao Tung University Guangfu Campus (No. 1001, Daxue Rd., East Dist., Hsinchu City 300)
Above: group photo
Above: speaker Prof. Hsin-Hsi Chen
More photos:
https://www.facebook.com/media/set/?vanity=AI.ntucenter&set=a.746691999275842
Agenda
Advising agencies: Ministry of Science and Technology, National Chiao Tung University
Organizer: MOST-funded Pervasive Artificial Intelligence Research (PAIR) Labs
Co-organizers: MOST-funded Joint Research Center for AI Technology and All Vista Healthcare (AINTU), MOST-funded AI for Intelligent Manufacturing Systems Research Center (AIMS), MOST-funded AI Biomedical Research Center (AIBMRC)
【Update】
To implement the "Digital Nation and Innovative Economic Development Program (DIGI+)" and the "Taiwan AI Action Plan," MOST has, since 2018, funded research centers established at four leading universities (NTU, NTHU, NCTU, and NCKU) that focus on AI technology R&D, applications, and talent cultivation. Since 2019, the four AI centers have taken turns hosting a quarterly exchange meeting to increase opportunities for the centers and their project teams to interact with and learn from one another, and to align research with industry needs.
On November 10, 2020, the "2020 MOST Cross-Domain Exchange and Achievement Presentation" was held at the NTU Hospital International Convention Center. ASUS Global Vice President 黃泰一 and Prof. Hsien-Hui Tang shared industry trends and applications of design innovation, and six leading AI project teams presented their latest results and applications through live demos and poster exhibits, with the aim of broadening and deepening Taiwan's AI research, strengthening cross-domain collaboration and the integration of research capacity, and enhancing international competitiveness.
This year's cross-domain exchange and achievement presentation offered the 74 project teams under the AI innovation research centers a platform for mutual exchange and encouragement and a venue to showcase their hard-earned research results. International experts from industry and academia were invited to analyze AI development trends, forward-looking perspectives, and application opportunities, and on-site technical sharing and hands-on demonstrations let ideas and expertise from different fields collide, strengthening links among industry, academia, and research institutes, raising and integrating Taiwan's AI R&D capacity, and accelerating the innovation and deployment of AI applications.
Photo caption, from left: Prof. Sheng-Fu Liang (Director, NCKU AI Center), Prof. Yu-Chee Tseng (Director, NCTU AI Center), Prof. Li-Chen Fu (Co-Director, NTU AI Center), Prof. Hsin-Hsi Chen (Director, NTU AI Center), Dr. 黃泰一 (ASUS Global Vice President), Deputy Minister Dar-Bin Shieh (MOST), Executive Vice President Ming-Syan Chen (NTU), Director-General 陳國樑 (Department of Foresight and Innovation Policies, MOST), Prof. Hsien-Hui Tang (Department of Design, NTUST), and Chairman 張榮貴 (人工智能股份有限公司)
More photos:
https://www.facebook.com/media/set/?vanity=AI.ntucenter&set=a.719762635302112
The MOST-funded Joint Research Center for AI Technology and All Vista Healthcare (AINTU) will hold the "2020 MOST Cross-Domain Exchange and Achievement Presentation" on Tuesday, November 10, 2020, on the 4th floor of the NTU Hospital International Convention Center. Registration opens on 2020/10/15; all are welcome.
To build an innovative AI ecosystem and world-class AI innovation research centers, and to develop cutting-edge technologies, cultivate leading talent, raise the commercialization potential of research results, and spin off AI startups, MOST has funded AI research centers at NTU, NTHU, NCTU, and NCKU since 2018. To accelerate exchange among the AI innovation research centers and their project teams, and to better align with industry needs, the four AI centers have taken turns hosting the cross-domain exchange and achievement presentation once a quarter since 2019.
This event presents the research results of the teams under the four AI centers. Combined with a forum format, international industry experts are invited to analyze AI development trends, forward-looking perspectives, and application opportunities, with the goal of broadening and deepening Taiwan's AI research and enhancing its international competitiveness.
Contacts:
Ms. Chiu (02-2732-8564 / tosyc@ntu.edu.tw)
Ms. Lai (02-3366-9558 / laichiling@ntu.edu.tw)
Advising agency: Ministry of Science and Technology
Organizer: Joint Research Center for AI Technology and All Vista Healthcare (AINTU)
Co-organizers: NCTU Pervasive Artificial Intelligence Research Labs (PAIR), NCKU AI Biomedical Research Center (AIBMRC), NTHU AI for Intelligent Manufacturing Systems Research Center (AIMS) (listed by stroke order of their Chinese names)
Sponsors: Macronix Education Foundation, ASUS AI Research Center (AICS), 維曙智能科技股份有限公司 (listed by stroke order of their Chinese names)
Dialogue between Technology and the Humanities Seminar Series: Guidelines, Regulations, and Practices for International AI Research and Development
With the rise of AI technologies and applications in recent years, related ethical, social, and legal issues have also emerged, and many countries have formulated national-level guidelines, principles, and regulations to respond to AI's social impact. The main purpose of this event is to use perspectives from different fields and angles to help participants understand the complexity of the problem, the possible impacts and challenges for future society, and the practices that researchers and enterprises might adopt in response. The discussion is framed by the "AI Technology R&D Guidelines" released by MOST in September 2019 and the "White Paper on Artificial Intelligence" published by the European Union in February 2020, focusing mainly on the transparency, traceability, and explainability of AI. The event will explore, from multiple angles, the challenges posed by AI technologies, how norms can be established, and what implementation might look like in practice.
MOST's "AI Technology R&D Guidelines" cover items including privacy, transparency, traceability, and explainability, and aim to guide the ethical practice of AI research in Taiwan. Because decisions generated by AI can significantly affect stakeholders, and in order to safeguard the fairness of such decisions, the development and application of AI systems, software, and algorithms should ensure that ordinary people can understand the factors behind the decisions an AI system generates. The EU's AI white paper holds that the use of AI brings both opportunities and risks, and it lists requirements for high-risk AI. To achieve accountability in the use of AI, build trust, and facilitate redress, proactively providing sufficient information is essential when deploying high-risk AI systems; clear information should be provided about the system's capabilities and limitations, in particular its purpose, the conditions under which it is expected to function, and the expected level of accuracy in achieving its intended objective.
Through this event, we hope to raise AI researchers' awareness and understanding of the possible social impacts of their technologies' applications; to help experts in the humanities and social sciences understand the difficulties and burdens researchers may face when implementing the guidelines; and to discuss whether the guidelines should be applied differently at different stages of research. After the seminar, the proceedings and outcomes will be compiled and provided to MOST as reference material.
After entering NTU's second gate at the intersection of Xinhai Road and Fuxing South Road, the first building on the right is the College of Social Sciences; the Xinhai Road underground parking garage is also the first facility on the right after entering the same gate.
Xindian (Green) Line: NTU is right next to MRT Gongguan Station. Exit 3 is about a 3-minute walk from the main gate (intersection of Roosevelt Road and Xinsheng South Road), and Exit 2 connects to the campus entrance on Zhoushan Road (intersection of Roosevelt Road and Zhoushan Road).
Wenhu (Brown) Line: To reach the northeast side of the NTU main campus, you can take the MRT Wenhu Line, get off at Technology Building Station, and walk south along Fuxing South Road to reach the university's second gate.
📢 Latest announcement (2020/8/10)
【Important announcement!!!】Will the 2020 AI Summer School be cancelled?
Because Typhoon Mekkhala is approaching Taiwan, whether the 2020 AI Summer School will be held tomorrow (8/11) will follow Taipei City's announcement on whether work is suspended.
The same rule applies to the remaining sessions (8/12–8/13).
————————————————————————————————
📢 Latest announcement (2020/8/7)
📢 Course adjustment:
In consideration of course content and learning outcomes, the 8/12 session "Hands-on RL / GAN" and the 8/13 session "Hands-on BERT / Question Answering Systems" have been swapped; we apologize for any inconvenience. (After the change, the 8/12 session is "Hands-on BERT / Question Answering Systems" and the 8/13 session is "Hands-on RL / GAN.")
————————————————————————————————
📢📢📢 Additional notes (2020/7/27)
※ After successful registration, the system will send an automatic confirmation email. Please do not reply to it directly; if you have questions, email aintu@ntu.edu.tw.
※ Pre-event notification emails will be sent 1–3 days before the event; please check your inbox.
※ For refunds due to personal reasons, please apply by email before 7/31 (Fri) for a full refund (postal giro refunds will deduct a NT$20 handling fee); the refund method follows the instructions below. Refund requests for personal reasons will not be accepted after 8/7.
※ Postal giro refunds take 3–5 days of accounting processing time; if you have questions, email aintu@ntu.edu.tw.
————————————————————————————————
👉 Dates: 2020/8/11 (Tue) – 8/13 (Thu)
👉 Venue: Room 101, Boya Teaching Building, National Taiwan University 🚗 Directions
👉 Target participants: college-level students and members of the public in Taiwan, with some or no prior AI background
👉 Registration period: 7/13 – 8/5, based on the payment date
👉 Registration link: https://conference.iis.sinica.edu.tw/servlet/Register?ConferenceID=367 (registration has closed)
👉 Registration fees:
– General participants: Students NT$3,500 / Non-students NT$5,000
– ACLCLP (Association for Computational Linguistics and Chinese Language Processing) members: Students NT$2,500 / Non-students NT$4,000
👉 Agenda:

2020/8/11 (Tue)
08:30–08:50 Registration
08:50–09:00 Opening
09:00–10:20 Basic Concepts of Deep Learning, Prof. Hung-yi Lee (NTU EE)
10:20–10:40 Break
10:40–12:00 One Model to Rule Them All – the Transformer, Prof. Hung-yi Lee
12:00–13:30 Lunch
13:30–14:50 Speech Processing in One Lecture, Prof. Hung-yi Lee
14:50–15:10 Break
15:10–16:30 Hands-on PyTorch, Prof. Lee's TA team: 吳元魁 / 楊舒涵

2020/8/12 (Wed)
08:30–08:50 Registration
09:00–10:20 Natural Language Processing in One Lecture, Prof. Hung-yi Lee
10:20–10:40 Break
10:40–12:00 Hands-on RL / GAN, Prof. Lee's TA team: 林義聖 / 劉俊緯
12:00–13:30 Lunch
13:30–14:50 Hands-on Speech Synthesis / Speech Separation, Prof. Lee's TA team: 簡仲明 / 黃冠博
14:50–15:10 Break
15:10–16:30 Computer Vision: Advanced Techniques and Recent Developments (1), Prof. Yu-Chiang Frank Wang (NTU EE)

2020/8/13 (Thu)
08:30–08:50 Registration
09:00–10:20 Computer Vision: Advanced Techniques and Recent Developments (2), Prof. Yu-Chiang Frank Wang
10:20–10:40 Break
10:40–12:00 Hands-on BERT / Question Answering Systems, Prof. Lee's TA team: 姜成翰 / 紀伯翰
12:00–13:30 Lunch
13:30–14:50 AI Applications, Prof. Yun-Nung Chen (NTU CSIE)
14:50–15:10 Break
15:10–16:30 Sharing from the 2nd Formosa Grand Challenge, moderated by Prof. Yun-Nung Chen

(Note: as announced above on 2020/8/7, the 8/12 "Hands-on RL / GAN" and 8/13 "Hands-on BERT / Question Answering Systems" sessions were later swapped.)
※ The hands-on tutorial sessions are taught by Prof. Hung-yi Lee's teaching assistant team.
👉 Instructors for this event:
👉 Welcome to like our center's Facebook page
Contact: aintu@ntu.edu.tw / (02) 3366-9558, Ms. Lai
﹍﹍﹍﹍﹍﹍﹍﹍﹍﹍﹍﹍﹍﹍﹍﹍﹍﹍﹍﹍﹍﹍﹍﹍﹍﹍﹍﹍﹍﹍﹍﹍﹍
Advising agency: Ministry of Science and Technology
Organizers: National Taiwan University AI Research Center (AINTU) and the Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
Co-organizer: Center of Intelligent Healthcare, National Taiwan University Hospital
The Authors of YOLOv4, the Strongest AI Object Detection Technology, Explain It in Person
Updated 2020/7/2
The event concluded successfully. Thanks to Director Hong-Yuan Mark Liao and Dr. Chien-Yao Wang for their excellent talks, and thanks to everyone for the enthusiastic support on site and online!
👉👉 The livestream recording is now public:
https://www.youtube.com/watch?v=HdQqAF-rMKc
👉👉 Welcome to like our center's Facebook page:
https://www.facebook.com/AI.ntucenter/
Photos from the event:
Photo: Director Hong-Yuan Mark Liao (Institute of Information Science, Academia Sinica)
Above: "Introduction – Our AI Project," Director Hong-Yuan Mark Liao
Above: "A Deep Dive into YOLOv4 and Future Directions," Dr. Chien-Yao Wang
✨ On site and livestreamed, 2020/7/1 ✨
YOLOv4, currently the world's strongest AI object detection technology, has arrived.
Our center has invited its two authors, Director Hong-Yuan Mark Liao of the Institute of Information Science, Academia Sinica, and Dr. Chien-Yao Wang, to give an in-depth analysis of this latest technology and its future development. One session only; don't miss it!
👉 Time: 2020/7/1 (Wed), 18:30–21:00
The event will be held on site and livestreamed simultaneously (see the registration link for details).
👉 Venue: Socrates Hall, GIS NTU Convention Center
(The livestream link will be sent two days before the event, on 6/29.)
GIS NTU Convention Center: B1, No. 85, Section 4, Roosevelt Road, Da'an District, Taipei City 106
👉👉👉 Registration link:
https://www.accupass.com/event/2006100241241521736134
Registration is welcome!
👉 Contact: aintu@ntu.edu.tw / (02) 3366-9558, Ms. Lai
Organizer: NTU AI Research Center (Joint Research Center for AI Technology and All Vista Healthcare)
Advising agency: Ministry of Science and Technology
2020 Google AI Bootcamp
📢 Google is committed to helping Taiwan cultivate talent in intelligent technology. This year it is again partnering with the MOST AI Innovation Research Centers and the MOST-funded NTU AI Research Center to hold the 2020 Google AI Bootcamp, and we cordially invite you to join online. By sharing Google's latest AI research cases, we look forward to in-depth industry-academia exchange with Taiwan's academic community and to jointly advancing AI research in Taiwan.
🔎 Event details
Dates: 2020/7/8 (Wed) – 7/9 (Thu)
Time: 9:00 – 18:00
Venue: YouTube livestream
📣 Program and registration information:
https://storage.googleapis.com/…/200609-aib…/ai_public.html…
💡 Intended audience: primarily academic and research teams; priority is given to principal investigators and co-principal investigators of AI research projects, advising professors, researchers with doctoral or master's degrees, and doctoral and master's students.
⏰ Registration deadline: 2020/7/1 (Wed)
【AI Meetup】AI-Based Speech Enhancement and Its Applications to Assistive Communication Devices
🌏 Topic: AI-based speech enhancement and its applications to assistive communication devices
🌏 Speaker: Yu Tsao
Associate Research Fellow and Deputy Director, Research Center for Information Technology Innovation, Academia Sinica
and concurrently Executive Director, Thematic Center for AI Innovative Applications
🌏 Time: 2020/5/29 (Fri), starting at 19:00 sharp
👉 Registration:
https://www.accupass.com/event/2005040141561805118260
👉 Center Facebook page:
https://www.facebook.com/AI.ntucenter/
✨ This event will be livestreamed online. After successful registration, a reminder email with the livestream link will be sent three days before the event; please make sure to provide a valid email address. ✨
Titouan Parcollet is an associate professor in computer science at the Laboratoire Informatique d'Avignon (LIA), Avignon University (France), and a visiting scholar at the Cambridge Machine Learning Systems Lab, University of Cambridge (UK). Previously, he was a senior research associate at the University of Oxford (UK) within the Oxford Machine Learning Systems group. He received his PhD in computer science from the University of Avignon, in partnership with Orkis, focusing on quaternion neural networks, automatic speech recognition, and representation learning. His current work involves efficient speech recognition, federated learning, and self-supervised learning. He is also currently collaborating with the Université de Montréal (Mila, QC, Canada) on the SpeechBrain project.
Mirco Ravanelli is currently a postdoctoral researcher at Mila (Université de Montréal), working under the supervision of Prof. Yoshua Bengio. His main research interests are deep learning, speech recognition, far-field speech recognition, cooperative learning, and self-supervised learning. He is the author or co-author of more than 50 papers on these topics. He received his PhD (with cum laude distinction) from the University of Trento in December 2017. Mirco is an active member of the speech and machine learning communities and is the founder and leader of the SpeechBrain project.
Shinji Watanabe is an Associate Professor at Carnegie Mellon University, Pittsburgh, PA. He received his B.S., M.S., and Ph.D. (Dr. Eng.) degrees from Waseda University, Tokyo, Japan. He was a research scientist at NTT Communication Science Laboratories, Kyoto, Japan, from 2001 to 2011, a visiting scholar at the Georgia Institute of Technology, Atlanta, GA, in 2009, and a senior principal research scientist at Mitsubishi Electric Research Laboratories (MERL), Cambridge, MA, USA, from 2012 to 2017. Prior to moving to Carnegie Mellon University, he was an associate research professor at Johns Hopkins University, Baltimore, MD, USA, from 2017 to 2020. His research interests include automatic speech recognition, speech enhancement, spoken language understanding, and machine learning for speech and language processing. He has published more than 200 papers in peer-reviewed journals and conferences and received several awards, including the best paper award from IEEE ASRU in 2019. He served as an Associate Editor of the IEEE Transactions on Audio, Speech, and Language Processing. He was/has been a member of several technical committees, including the APSIPA Speech, Language, and Audio Technical Committee (SLA), the IEEE Signal Processing Society Speech and Language Technical Committee (SLTC), and the Machine Learning for Signal Processing Technical Committee (MLSP).
Karteek Alahari is a senior researcher (known as chargé de recherche in France, equivalent to a tenured associate professor) at Inria. He is based in the Thoth research team at the Inria Grenoble – Rhône-Alpes center. He was previously a postdoctoral fellow in the Inria WILLOW team at the Department of Computer Science in ENS (École Normale Supérieure), after completing his PhD in 2010 in the UK. His current research focuses on addressing the visual understanding problem in the context of large-scale datasets. In particular, he works on learning robust and effective visual representations when only partially supervised data is available. This includes frameworks such as incremental learning, weakly supervised learning, and adversarial training. Dr. Alahari's research has been funded by a Google research award, the French national research agency, and industrial grants from Facebook, NaverLabs Europe, and Valeo.
Sijia Liu is currently an Assistant Professor at the Computer Science & Engineering Department of Michigan State University. He received the Ph.D. degree (with All-University Doctoral Prize) in Electrical and Computer Engineering from Syracuse University, NY, USA, in 2016. He was a Postdoctoral Research Fellow at the University of Michigan, Ann Arbor, in 2016-2017, and a Research Staff Member at the MIT-IBM Watson AI Lab in 2018-2020. His research spans the areas of machine learning, optimization, computer vision, signal processing and computational biology, with a focus on developing learning algorithms and theory for scalable and trustworthy artificial intelligence (AI). He received the Best Student Paper Award at the 42nd IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). His work has been published at top-tier AI conferences such as NeurIPS, ICML, ICLR, CVPR, ICCV, ECCV, AISTATS, and AAAI.
Soheil Feizi is an assistant professor in the Computer Science Department at the University of Maryland, College Park. Before joining UMD, he was a post-doctoral research scholar at Stanford University. He received his Ph.D. from the Massachusetts Institute of Technology (MIT). He received the NSF CAREER award in 2020 and the Simons-Berkeley Research Fellowship on deep learning foundations in 2019. He is the 2020 recipient of the AWS Machine Learning Research award, and the 2019 recipient of the IBM faculty award as well as the Qualcomm faculty award. He received teaching awards in Fall 2018 and Spring 2019 in the CS department at UMD. His work received the best paper award of IEEE Transactions on Network Science and Engineering over the three-year period 2017-2019. He received the Ernst Guillemin award for his M.Sc. thesis, as well as the Jacobs Presidential Fellowship and the EECS Great Educators Fellowship at MIT.
I am a Research Staff Member at IBM T. J. Watson Research Center.
Prior to this, I was a Postdoctoral Researcher at the Center for Theoretical Physics, MIT.
I received my Ph.D. in 2018 from Centrum Wiskunde & Informatica and QuSoft, Amsterdam, Netherlands, supervised by Ronald de Wolf. Before that, I finished my M.Math in Mathematics at the University of Waterloo and the Institute for Quantum Computing, Canada, in 2014, supervised by Michele Mosca.
Shang-Wen (Daniel) Li is an Engineering and Science Manager at Facebook AI. His research focuses on natural language and speech understanding, conversational AI, meta learning, and AutoML. He led a team at AWS AI building conversational AI technology for call center analytics and chatbot authoring, and he also worked at Amazon Alexa and Apple Siri on their conversational assistants. He earned his PhD from MIT CSAIL, working on natural language understanding and its application to online education. He co-organized the workshop on "Self-Supervised Learning for Speech and Audio Processing" at NeurIPS 2020 and the workshop on "Meta Learning and Its Applications to Natural Language Processing" at ACL 2021.
Thang Vu received his Diploma (2009) and PhD (2014) degrees in computer science from Karlsruhe Institute of Technology, Germany. From 2014 to 2015, he worked at Nuance Communications as a senior research scientist and at Ludwig-Maximilian University Munich as an acting professor in computational linguistics. In 2015, he was appointed assistant professor at University of Stuttgart, Germany. Since 2018, he has been a full professor at the Institute for Natural Language Processing in Stuttgart. His main research interests are natural language processing (esp. speech, natural language understanding and dialog systems) and machine learning (esp. deep learning) for low-resource settings.
Song Han is an assistant professor in MIT's EECS department. He received his PhD degree from Stanford University. His research focuses on efficient deep learning computing. He proposed the "deep compression" technique, which can reduce neural network size by an order of magnitude without losing accuracy, and the hardware implementation "efficient inference engine," which first exploited pruning and weight sparsity in deep learning accelerators. His team's work on hardware-aware neural architecture search that brings deep learning to IoT devices was highlighted by MIT News, Wired, Qualcomm News, VentureBeat, and IEEE Spectrum, integrated into PyTorch and AutoGluon, and received many low-power computer vision contest awards at flagship AI conferences (CVPR'19, ICCV'19, and NeurIPS'19). Song received Best Paper awards at ICLR'16 and FPGA'17, the Amazon Machine Learning Research Award, the SONY Faculty Award, the Facebook Faculty Award, and the NVIDIA Academic Partnership Award. Song was named to "35 Innovators Under 35" by MIT Technology Review for his contribution to the "deep compression" technique that "lets powerful artificial intelligence (AI) programs run more efficiently on low-power mobile devices." Song received the NSF CAREER Award for "efficient algorithms and hardware for accelerated machine learning" and the IEEE "AI's 10 to Watch: The Future of AI" award.
Hung-yi Lee received the M.S. and Ph.D. degrees from National Taiwan University (NTU), Taipei, Taiwan, in 2010 and 2012, respectively. From September 2012 to August 2013, he was a postdoctoral fellow at the Research Center for Information Technology Innovation, Academia Sinica. From September 2013 to July 2014, he was a visiting scientist at the Spoken Language Systems Group of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).
John Shawe-Taylor is Professor of Computational Statistics and Machine Learning at University College London. He has helped to drive a fundamental rebirth in the field of machine learning, with applications in novel domains including computer vision, document classification, and applications in biology and medicine focused on brain scan, immunity, and proteome analysis. He has published over 250 papers and two books that have together attracted over 80,000 citations.
He has also been instrumental in assembling a series of influential European Networks of Excellence. The scientific coordination of these projects has influenced a generation of researchers and promoted the widespread uptake of machine learning in both science and industry that we are currently witnessing.
He was appointed UNESCO Chair of Artificial Intelligence in November 2018 and is the leading trustee of the UK Charity, Knowledge 4 All Foundation, promoting open education and helping to establish a network of AI researchers and practitioners in sub-Saharan Africa. He is the Director of the International Research Center on Artificial Intelligence established under the Auspices of UNESCO in Ljubljana, Slovenia.
Been Kim is a staff research scientist at Google Brain. Her research focuses on improving interpretability in machine learning, either by building interpretability methods for already-trained models or by building inherently interpretable models. She gave a talk at the G20 meeting in Argentina in 2019. Her work TCAV received the UNESCO Netexplo award and was featured at Google I/O '19 and in Brian Christian's book "The Alignment Problem." Been gave a keynote at ECML 2020 and tutorials on interpretability at ICML, the University of Toronto, CVPR, and Lawrence Berkeley National Laboratory. She was a workshop co-chair for ICLR 2019 and has been an area chair or senior area chair at conferences including NeurIPS, ICML, ICLR, and AISTATS. She received her Ph.D. from MIT.
Philipp is an Assistant Professor in the Department of Computer Science at the University of Texas at Austin. He received his PhD in 2014 from the CS Department at Stanford University and then spent two wonderful years as a PostDoc at UC Berkeley. His research interests lie in Computer Vision, Machine learning and Computer Graphics. He is particularly interested in deep learning, image, video, and scene understanding.
Cho-Jui Hsieh is an assistant professor in UCLA Computer Science Department. He obtained his Ph.D. from the University of Texas at Austin in 2015 (advisor: Inderjit S. Dhillon). His work mainly focuses on improving the efficiency and robustness of machine learning systems and he has contributed to several widely used machine learning packages. He is the recipient of NSF Career Award, Samsung AI Researcher of the Year, and Google Research Scholar Award. His work has been recognized by several best/outstanding paper awards in ICLR, KDD, ICDM, ICPP and SC.
Ming-Wei Chang is currently a Research Scientist at Google Research, working on machine learning and natural language processing problems. He is interested in developing fundamental techniques that can bring new insights to the field and enable new applications. He has published many papers on representation learning, question answering, entity linking, and semantic parsing. Among them, BERT, a framework for pre-training deep bidirectional representations from unlabeled text, has probably received the most attention; it achieved state-of-the-art results on 11 NLP tasks at the time of publication. Recently he co-wrote the deep learning for NLP chapter in the fourth edition of Artificial Intelligence: A Modern Approach. His research has won many awards, including the 2019 NAACL best paper award and the 2015 ACL outstanding paper award, and it was a 2019 ACL best paper candidate.
I received my Ph.D. in 11/2005 for work carried out at the Ecole des Mines de Paris. Before that, I graduated from ENSAE, with a master's degree from ENS Cachan. I worked as a postdoctoral researcher at the Institute of Statistical Mathematics, Tokyo, between 11/2005 and 03/2007. Between 04/2007 and 09/2008 I worked in the financial industry. After working at the ORFE department of Princeton University between 02/2009 and 08/2010 as a lecturer, I was at the Graduate School of Informatics of Kyoto University between 09/2010 and 09/2016 as an associate professor (tenured in 11/2013). I joined ENSAE in 09/2016 and now work there part-time, since 10/2018, when I joined the Paris office of Google Brain as a research scientist.
Arthur Gretton is a Professor with the Gatsby Computational Neuroscience Unit, and director of the Centre for Computational Statistics and Machine Learning (CSML) at UCL. He received degrees in Physics and Systems Engineering from the Australian National University, and a PhD with Microsoft Research and the Signal Processing and Communications Laboratory at the University of Cambridge. He previously worked at the MPI for Biological Cybernetics, and at the Machine Learning Department, Carnegie Mellon University.
Arthur’s recent research interests in machine learning include the design and training of generative models, both implicit (e.g. GANs) and explicit (exponential family and energy-based models), nonparametric hypothesis testing, survival analysis, causality, and kernel methods.
He was an associate editor of the IEEE Transactions on Pattern Analysis and Machine Intelligence from 2009 to 2013, has been an Action Editor for JMLR since April 2013, was an Area Chair for NeurIPS in 2008 and 2009, a Senior Area Chair for NeurIPS in 2018 and 2021, an Area Chair for ICML in 2011 and 2012, and a member of the COLT Program Committee in 2013, and has been a member of the Royal Statistical Society Research Section Committee since January 2020. Arthur was program chair for AISTATS in 2016 (with Christian Robert), tutorials chair for ICML 2018 (with Ruslan Salakhutdinov), workshops chair for ICML 2019 (with Honglak Lee), program chair for the Dali workshop in 2019 (with Krikamol Muandet and Shakir Mohamed), and co-organiser of the Machine Learning Summer School 2019 in London (with Marc Deisenroth).
Prof. Hsuan-Tien Lin received a B.S. in Computer Science and Information Engineering from National Taiwan University in 2001, and an M.S. and a Ph.D. in Computer Science from the California Institute of Technology in 2005 and 2008, respectively. He joined the Department of Computer Science and Information Engineering at National Taiwan University as an assistant professor in 2008, was promoted to associate professor in 2012, and has been a professor since August 2017. Between 2016 and 2019, he worked as the Chief Data Scientist of Appier, a startup company that specializes in making AI easier in various domains, such as digital marketing and business intelligence. He continues to work with Appier as its Chief Data Science Consultant.
From the university, Prof. Lin received the Distinguished Teaching Award in 2011, the Outstanding Mentoring Award in 2013, and the Outstanding Teaching Award in 2016, 2017, and 2018. He co-authored the introductory machine learning textbook Learning from Data and offered two popular Mandarin-taught MOOCs, Machine Learning Foundations and Machine Learning Techniques, based on the textbook. His research interests include the mathematical foundations of machine learning, studies of new learning problems, and improvements to learning algorithms. He received the 2012 K.-T. Li Young Researcher Award from the ACM Taipei Chapter, the 2013 D.-Y. Wu Memorial Award from the National Science Council of Taiwan, and the 2017 Creative Young Scholar Award from the Foundation for the Advancement of Outstanding Scholarship in Taiwan. He co-led teams that won third place in the KDD Cup 2009 slow track, the championship of KDD Cup 2010, both tracks of KDD Cup 2011, track 2 of KDD Cup 2012, and both tracks of KDD Cup 2013. He served as the Secretary General of the Taiwanese Association for Artificial Intelligence between 2013 and 2014.
Lecture Title
Developing a World-Class AI Facial Recognition Solution – CyberLink FaceMe®
Lecture Abstract
CyberLink's FaceMe® is a world-leading AI facial recognition solution. In this session, Davie Lee (R&D Vice President of CyberLink) will cover the fundamentals of developing facial recognition solutions, such as the inference pipeline, and will share key industrial use cases and trends in AI facial recognition.
Lecture Title
Transform the Beauty Industry through AI + AR: Perfect Corp’s Innovative Vision into the Digital Era
Lecture Abstract
Perfect Corp. is the world’s leading beauty tech solutions provider transforming the industry by marrying the highest level of augmented reality (AR) and artificial intelligence (AI) technology for a re-imagined consumer shopping experience.
Johnny Tseng (CTO of Perfect Corp.) will share Perfect Corp.'s AI/AR beauty tech solutions and the roadmap of its advanced AI technology.
Dr. Shou-De Lin has been Appier's Chief Machine Learning (ML) Scientist since February 2020, with 20+ years of experience in AI, machine learning, data mining, and natural language processing. Prior to joining Appier, he served as a full-time professor in the Department of Computer Science and Information Engineering at National Taiwan University (NTU). Dr. Lin is the recipient of several prestigious research awards and brings a mix of both academic and industry expertise to Appier. He has advised more than 50 global companies in the research and application of AI, winning awards from Microsoft, Google, and IBM for his work. He led or co-led the NTU team to win 7 ACM KDD Cup championships. He has over 100 publications in top-tier journals and conferences and has won various dissertation awards. After joining Appier, Dr. Lin led the AiDeal team to win Best Overall AI-based Analytics Solution in the 2020 Artificial Intelligence Breakthrough Awards. Dr. Lin holds a B.S. in Electrical Engineering from NTU and an M.S. in EECS from the University of Michigan. He also holds an M.S. in Computational Linguistics and a Ph.D. in Computer Science, both from the University of Southern California.
Lecture Title
Machine Learning as a Service: Challenges and Opportunities
Lecture Abstract
Businesses today are dealing with huge amounts of data and the volume is growing faster than ever. At the same time, the competitive landscape is changing rapidly and it’s critical for commercial organizations to make decisions fast. Business success comes from making quick, accurate decisions using the best possible information.
Machine learning (ML) is a vital technology for companies seeking a competitive advantage, as it can process large volumes of data quickly, helping businesses more effectively make recommendations to customers, hone manufacturing processes, or anticipate changes in a market, for example.
Machine Learning as a Service (MLaaS) is defined in a business context as companies designing and implementing ML models that will provide a continuous and consistent service to customers. This is critical in areas where customer needs and behaviours change rapidly. For example, from 2020, people have changed how they shop, work and socialize as a direct result of the COVID-19 pandemic and businesses have had to shift how they service their customers to meet their needs.
This means that the technology they are using to gather and process data also needs to be flexible and adaptable to new data inputs, allowing businesses to move fast and make the best decisions.
One current challenge of taking ML models to MLaaS has to do with how we currently build ML models and how we teach future ML talent to do it. Most research and development of ML models focuses on building individual models that use a set of training data (with pre-assigned features and labels) to deliver the best performance in predicting the labels of another set of data (normally we call it testing data). However, if we’re looking at real-world businesses trying to meet the ever-evolving needs of real-life customers, the boundary between training and testing data becomes less clear. The testing or prediction data for today can be exploited as the training data to create a better model in the future.
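To make the blurred train/test boundary concrete, here is a minimal, hypothetical sketch (not taken from the lecture): a model serves predictions, the outcomes observed afterwards are logged, and that logged data is folded back in as new training data. The synthetic data, the scikit-learn model, and the daily schedule are all illustrative assumptions.

```python
# Hypothetical sketch: today's serving ("testing") data becomes tomorrow's
# training data once its outcomes are observed. Data, model, and schedule
# are illustrative assumptions, not the lecture's actual pipeline.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
TRUE_W = np.array([1.0, -2.0, 0.5, 0.0, 1.5])

def make_batch(n=200):
    """Simulate one day's incoming requests and their (later observed) outcomes."""
    X = rng.normal(size=(n, TRUE_W.size))
    y = (X @ TRUE_W + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

model = SGDClassifier(random_state=0)
X0, y0 = make_batch()
model.partial_fit(X0, y0, classes=np.array([0, 1]))  # initial training set

for day in range(1, 6):
    X_new, y_new = make_batch()                 # today's serving / prediction data
    acc = (model.predict(X_new) == y_new).mean()
    print(f"day {day}: serving accuracy = {acc:.3f}")
    model.partial_fit(X_new, y_new)             # fold observed outcomes back into the model
```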
Consequently, the data used for training a model will no doubt be imperfect for several reasons. Besides the fact that real-world data sources can be incomplete or unstructured (such as open-answer customer questionnaires), they can come from a biased collection process. For instance, the data used for training a recommendation model are normally collected from the feedback of another recommender system currently serving online. Thus, the data collected are biased by the online serving model.
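As a hypothetical illustration of this serving bias, and of one standard correction (inverse propensity weighting) that the abstract does not name, the sketch below logs feedback only for the items the current recommender chooses to show, so a naive average over the log overweights the items that model already favors; reweighting each logged example by the inverse of its exposure probability recovers an unbiased estimate. All numbers are made up.

```python
# Hypothetical illustration of exposure bias in logged recommender feedback and
# its correction with inverse propensity weighting (IPW); all numbers are made up.
import numpy as np

rng = np.random.default_rng(1)
n_items = 4
true_ctr = np.array([0.10, 0.20, 0.30, 0.40])        # ground-truth click rates
serving_probs = np.array([0.05, 0.10, 0.25, 0.60])   # how often the serving model shows each item

shown = rng.choice(n_items, size=50_000, p=serving_probs)   # logged impressions
clicks = rng.random(shown.size) < true_ctr[shown]           # logged outcomes

naive = clicks.mean()                                # biased toward over-exposed items
weights = 1.0 / (n_items * serving_probs[shown])     # inverse propensity weights
ipw = (weights * clicks).mean()                      # estimates the uniform-average CTR

print(f"uniform-average CTR (target): {true_ctr.mean():.3f}")
print(f"naive logged estimate:        {naive:.3f}")
print(f"IPW-corrected estimate:       {ipw:.3f}")
```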
Additionally, the true outcome we really care about is often the hardest to evaluate. Let's take digital marketing for ecommerce as an example. The most
As the Managing Director, Jason Ma oversees Google Taiwan's site growth, business management, and development, and leads multiple R&D projects across the board. Before taking this leadership role at Google Taiwan, Jason was a Platform Technology and Cloud Computing expert in the Platform & Ecosystem business group at Google in Mountain View, CA. In his 10 years with Google, Jason has successfully led strategic partnerships with global hardware and software manufacturers and major chip providers to drive various innovations in cloud technology. These efforts have not only contributed to a substantial increase in Chromebook's share in the global education, consumer, and enterprise markets, but have also attracted global talent to join Google and its partners in furthering the development of hardware and software technology solutions and services.
Prior to joining Google, Jason served in the Office group at Microsoft in Redmond, WA. He represented the company in a project involving Merck, Dell, Boeing, and the United States Department of Defense to deliver solutions in unified communications and integrated voice technology. In 2007, Jason was appointed Director of the Microsoft Technology Center in Taiwan. During that time, Jason led the Microsoft Taiwan technology team and worked with Intel and HP to establish a Solution Center in Taiwan to promote Microsoft public cloud, data center, and private cloud technologies, connecting Taiwan's cloud computing industry with the global market and supply chain.
Before joining Microsoft, Jason was Vice President and Chief Technology Officer at Soma.com. At Soma.com, Jason led the team in designing and launching e-commerce services, and partnered with Merck and WebMD on health consultation services and over-the-counter and prescription drug services. Soma.com was in turn acquired by CVS, the second largest pharmacy chain in the United States, forming CVS.com, where Jason served as Vice President and Chief Technology Officer and provided solutions for digital integration.
Jason graduated from the Department of Electrical Engineering at National Cheng Kung University, after which he moved to the United States to pursue graduate studies. In 1993, Jason obtained a Ph.D. in Electrical Engineering from the University of Washington, with a focus on the integration and innovation of power systems and AI expert systems. In 1997, Jason joined National Sun Yat-sen University as an Associate Professor of Electrical Engineering. To date, Jason has published 22 research papers and co-authored 2 books. For his outstanding performance, Jason was nominated for and listed in Who's Who in the World in 1998.
Kai-Wei Chang is an assistant professor in the Department of Computer Science at the University of California, Los Angeles (UCLA). His research interests include designing robust machine learning methods for large and complex data and building fair, reliable, and accountable language processing technologies for social good applications. Dr. Chang has published broadly in natural language processing, machine learning, and artificial intelligence. His research has been covered by news media such as Wired, NPR, and MIT Technology Review. His awards include the Sloan Research Fellowship (2021), the EMNLP Best Long Paper Award (2017), the KDD Best Paper Award (2010), and the Okawa Research Grant Award (2018). Dr. Chang obtained his Ph.D. from the University of Illinois at Urbana-Champaign in 2015 and was a postdoctoral researcher at Microsoft Research in 2016.
Additional information is available at http://kwchang.net
Short version:
Dr. Pin-Yu Chen is a research staff member at IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA. He is also the chief scientist of the RPI-IBM AI Research Collaboration and PI of ongoing MIT-IBM Watson AI Lab projects. Dr. Chen received his Ph.D. degree in electrical engineering and computer science from the University of Michigan, Ann Arbor, USA, in 2016. Dr. Chen's recent research focuses on adversarial machine learning and the robustness of neural networks. His long-term research vision is building trustworthy machine learning systems. At IBM Research, he received the honor of IBM Master Inventor and several research accomplishment awards. His research has contributed to IBM open-source libraries including the Adversarial Robustness Toolbox (ART 360) and AI Explainability 360 (AIX 360). He has published more than 40 papers related to trustworthy machine learning at major AI and machine learning conferences, given tutorials at IJCAI'21, CVPR('20,'21), ECCV'20, ICASSP'20, KDD'19, and Big Data'18, and organized several workshops on adversarial machine learning. He received a NeurIPS 2017 Best Reviewer Award and was also the recipient of the IEEE GLOBECOM 2010 GOLD Best Paper Award.
Full version:
Dr. Pin-Yu Chen is currently a research staff member at IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA. He is also the chief scientist of RPI-IBM AI Research Collaboration and PI of ongoing MIT-IBM Watson AI Lab projects. Dr. Chen received his Ph.D. degree in electrical engineering and computer science and M.A. degree in Statistics from the University of Michigan, Ann Arbor, USA, in 2016. He received his M.S. degree in communication engineering from National Taiwan University, Taiwan, in 2011 and B.S. degree in electrical engineering and computer science (undergraduate honors program) from National Chiao Tung University, Taiwan, in 2009.
Dr. Chen’s recent research focuses on adversarial machine learning and robustness of neural networks. His long-term research vision is building trustworthy machine learning systems. He has published more than 40 papers related to trustworthy machine learning at major AI and machine learning conferences, given tutorials at IJCAI’21, CVPR(’20,’21), ECCV’20, ICASSP’20, KDD’19, and Big Data’18, and organized several workshops for adversarial machine learning. His research interest also includes graph and network data analytics and their applications to data mining, machine learning, signal processing, and cyber security. He was the recipient of the Chia-Lun Lo Fellowship from the University of Michigan Ann Arbor. He received a NeurIPS 2017 Best Reviewer Award, and was also the recipient of the IEEE GLOBECOM 2010 GOLD Best Paper Award. Dr. Chen is currently on the editorial board of PLOS ONE.
At IBM Research, Dr. Chen has co-invented more than 30 U.S. patents and received the honor of IBM Master Inventor. In 2021, he received an IBM Corporate Technical Award. In 2020, he received an IBM Research special division award for research related to COVID-19. In 2019, he received two Outstanding Research Accomplishments on research in adversarial robustness and trusted AI, and one Research Accomplishment on research in graph learning and analysis.
Chun-Yi Lee is an Associate Professor of Computer Science at National Tsing Hua University (NTHU), Hsinchu, Taiwan, and the supervisor of Elsa Lab. He received B.S. and M.S. degrees from National Taiwan University, Taipei, Taiwan, in 2003 and 2005, respectively, and M.A. and Ph.D. degrees from Princeton University, Princeton, NJ, USA, in 2009 and 2013, respectively, all in Electrical Engineering. He joined NTHU as an Assistant Professor in the Department of Computer Science in 2015. Before joining NTHU, he was a senior engineer at Oracle America, Inc., Santa Clara, CA, USA, from 2012 to 2015.
Prof. Lee's research focuses on deep reinforcement learning (DRL), intelligent robotics, computer vision (CV), and parallel computing systems. He has contributed to the discovery and development of key deep learning methodologies for intelligent robotics, such as virtual-to-real training and transfer techniques for robotic policies, real-time acceleration techniques for semantic image segmentation, efficient and effective exploration approaches for DRL agents, and autonomous navigation strategies. He has published research papers at major artificial intelligence (AI) conferences including NeurIPS, CVPR, IJCAI, AAMAS, ICLR, ICML, ECCV, CoRL, ICRA, IROS, GTC, and more, as well as in IEEE Transactions on Very Large Scale Integration (VLSI) Systems (TVLSI) and at the Design Automation Conference (DAC). He founded Elsa Lab at National Tsing Hua University in 2015 and has led its members to win several prestigious awards in worldwide robotics and AI challenges, including first place in the NVIDIA Embedded Intelligent Robotics Challenge in 2016, first place worldwide in the NVIDIA Jetson Robotics Challenge in 2018, second place in the Person-In-Context (PIC) Challenge at the European Conference on Computer Vision (ECCV) in 2018, and second place worldwide in the NVIDIA AI at the Edge Challenge in 2020. Prof. Lee is the recipient of the 2020 Ta-You Wu Memorial Award from the Ministry of Science and Technology (MOST), the most prestigious award recognizing outstanding achievements in intelligent computing by young researchers.
He has also received several outstanding research awards, distinguished teaching awards, and contribution awards from multiple institutions, such as the NVIDIA Deep Learning Institute (DLI), the Foundation for the Advancement of Outstanding Scholarship (FAOS), the Chinese Institute of Electrical Engineering (CIEE), the Taiwan Semiconductor Industry Association (TSIA), and National Tsing Hua University (NTHU). In addition, he has served as a committee member and reviewer at many international and domestic conferences. His research is especially impactful for autonomous systems, decision-making systems, game engines, and vision-AI based robotic applications.
Prof. Lee is a member of IEEE and ACM. He has served several times as a session chair and technical program committee member at ASP-DAC, NoCs, and ISVLSI, and as a paper reviewer for NeurIPS, AAAI, IROS, ICCV, IEEE TPAMI, TVLSI, IEEE TCAD, IEEE ISSCC, and IEEE ASP-DAC. He was the main organizer of the 3rd, 4th, and 5th Augmented Intelligence and Interaction (AII) Workshops from 2019 to 2021, and was the co-director of the MOST Office for International AI Research Collaboration from 2018 to 2020.
Henry Kautz is serving as Division Director for Information & Intelligent Systems (IIS) at the National Science Foundation where he leads the National AI Research Institutes program. He is a Professor in the Department of Computer Science and was the founding director of the Goergen Institute for Data Science at the University of Rochester. He has been a researcher at AT&T Bell Labs in Murray Hill, NJ, and a full professor at the University of Washington, Seattle. In 2010, he was elected President of the Association for Advancement of Artificial Intelligence (AAAI), and in 2016 was elected Chair of the American Association for the Advancement of Science (AAAS) Section on Information, Computing, and Communication. His interdisciplinary research includes practical algorithms for solving worst-case intractable problems in logical and probabilistic reasoning; models for inferring human behavior from sensor data; pervasive healthcare applications of AI; and social media analytics. In 1989 he received the IJCAI Computers & Thought Award, which recognizes outstanding young scientists in artificial intelligence, and 30 years later received the 2018 ACM-AAAI Allen Newell Award for career contributions that have breadth within computer science and that bridge computer science and other disciplines. At the 2020 AAAI Conference he received both the Distinguished Service Award and the Robert S. Engelmore Memorial Lecture Award.
Lecture Abstract
Each AI summer, a time of enthusiasm for the potential of artificial intelligence, has led to enduring scientific insights. Today's third summer is different because it might not be followed by a winter, and it enables powerful applications for both good and bad. The next steps in AI research are tighter symbolic-neuro integration.
Lecture Outline
Jason Lee received his Ph.D. at Stanford University, advised by Trevor Hastie and Jonathan Taylor, in 2015. Before joining Princeton, he was a postdoctoral scholar at UC Berkeley with Michael I. Jordan. His research interests are in machine learning, optimization, and statistics. Lately, he has worked on the foundations of deep learning, non-convex optimization, and reinforcement learning.
Csaba Szepesvari is a Canada CIFAR AI Chair, the team lead for the "Foundations" team at DeepMind, and a Professor of Computing Science at the University of Alberta. He earned his PhD in 1999 from Jozsef Attila University in Szeged, Hungary. In addition to publishing in journals and at conferences, he has (co-)authored three books. Currently, he serves as an action editor of the Journal of Machine Learning Research and Machine Learning and as an associate editor of Mathematics of Operations Research, while also regularly serving in various senior positions on the program committees of machine learning and AI conferences. Dr. Szepesvari's main interest is developing new, principled, learning-based approaches to artificial intelligence (AI), as well as studying the limits of such approaches. He is the co-inventor of UCT, a Monte-Carlo tree search algorithm, which inspired much work in AI.
Dr. Yuh-Jye Lee received his PhD in Computer Science from the University of Wisconsin-Madison in 2001. He is now a professor in the Department of Applied Mathematics at National Chiao Tung University and also serves as a SIG Chair at the NTU IoX Center. His research is primarily rooted in optimization theory and spans a range of areas including network and information security, machine learning, data mining, big data, numerical optimization, and operations research. During the last decade, Dr. Lee has developed many learning algorithms for supervised, semi-supervised, and unsupervised learning, as well as for linear and nonlinear dimension reduction. His recent major research applies machine learning to information security problems such as network intrusion detection, anomaly detection, malicious URL detection, and legitimate user identification. Currently, he focuses on online learning algorithms for large-scale datasets, stream data mining, and behavior-based anomaly detection to address big data and IoT security problems.
Please fill in the information below; once submitted, the center will send the file link to your email address.
(* indicates a required field)