【Event Information】The 2021 Google AI Boot Camp (Google AI 創新研究營) has concluded successfully. Thank you to all participants!
👉 2021/7/9 post-event information
Google speaker Khiem Pham would like to follow up on one item: sharing the details of the “Open Images Extended” dataset with the participants, as below:
【Event Information】You are warmly invited to register for the Google AI Boot Camp (Google AI 創新研究營)
The camp targets experts and researchers in the AI field; priority for participation will be given to principal investigators, co-principal investigators, and advising professors of academic research projects. Graduate students are also welcome to register.
📌Registration deadline: 2021/7/4 (Sun)
【Event Information (Forwarded)】You are warmly invited to register for the Fifth International Workshop on Symbolic-Neural Learning (SNL-2021)!
Symbolic-neural learning involves deep learning methods in combination with symbolic structures. A “deep learning method” is taken to be a learning process based on gradient descent on real-valued model parameters. A “symbolic structure” is a data structure involving symbols drawn from a large vocabulary; for example, sentences of natural language, parse trees over such sentences, databases (with entities viewed as symbols), and the symbolic expressions of mathematical logic or computer programs.
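As a concrete illustration of this definition, here is a minimal sketch of gradient descent on real-valued parameters; the toy quadratic loss, learning rate, and step count are illustrative assumptions, not part of the workshop description.

```python
import numpy as np

def gradient_descent(grad_fn, theta, lr=0.1, steps=200):
    """Repeatedly step the parameters opposite the loss gradient."""
    for _ in range(steps):
        theta = theta - lr * grad_fn(theta)
    return theta

# Toy loss L(theta) = (theta - 3)^2, whose gradient is 2 * (theta - 3).
# (This loss is a made-up example, chosen only to show the update rule.)
theta = gradient_descent(lambda t: 2.0 * (t - 3.0), np.array([0.0]))
print(theta)  # converges toward [3.0]
```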
An innovative feature of symbolic-neural learning is that it allows modeling interactions between different modalities: speech, vision, and language. Such multimodal information processing is crucial for realizing research outcomes in the real world.
In response to the growing need for and attention to multimodal research, this year’s SNL workshop features research on “Beyond modality: Research across speech, vision, and language boundaries.”
Topics of interest include, but are not limited to, the following areas:
Speech, vision, and natural language interactions in robotics
Multimodal and grounded language processing
Multimodal QA and translation
Dialogue systems
Language as a mechanism to structure and reason about visual perception
Image caption generation and image generation from text
General knowledge question answering
Reading comprehension
Textual entailment
Deep learning systems across these areas share various architectural ideas, including word and phrase embeddings, self-attention neural networks, recurrent neural networks (LSTMs and GRUs), and various memory mechanisms. Certain linguistic and semantic resources may also be relevant across these applications, for example dictionaries, thesauri, WordNet, FrameNet, Freebase, DBpedia, parsers, named entity recognizers, coreference systems, knowledge graphs, and encyclopedias.
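As an illustration of one of these shared ideas, below is a minimal sketch of scaled dot-product self-attention in NumPy. Real systems add learned query/key/value projections, multiple heads, and masking, all of which are omitted here; the random embeddings are an illustrative assumption.

```python
import numpy as np

def self_attention(X):
    """Each row of X attends to every row of X (single head, no projections)."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                    # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ X                               # weighted mix of inputs

tokens = np.random.randn(5, 8)  # 5 toy "word embeddings" of dimension 8
print(self_attention(tokens).shape)  # (5, 8)
```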
Object detection in the computer vision area has been extensively studied and has made tremendous progress in recent years using deep learning methods. However, due to the heavy computation required by most deep-learning-based algorithms, it is hard to run these models on embedded systems, which have limited computing capability. In addition, the existing open datasets for object detection in ADAS applications usually cover pedestrians, vehicles, cyclists, and motorcycle riders in Western countries. Traffic conditions there differ considerably from those in crowded Asian countries such as Taiwan, where many motorcycles speed along city roads, so object detection models trained on the existing open datasets cannot be directly applied to detecting moving objects in such environments.
In this competition, we encourage participants to design object detection models suited to Taiwan’s traffic, where many fast-moving motorcycles share city roads with vehicles and pedestrians. The developed models should not only fit embedded systems but also achieve high accuracy at the same time.
Regular Awards
Based on each team’s points in the final evaluation, the top three teams receive the regular awards.
Champion: USD 1,500
1st Runner-up: USD 1,000
2nd Runner-up: USD 750
Special Awards
Best accuracy award – for the highest mAP in the final competition: USD 200
Best bicycle detection award – for the highest AP for bicycle recognition in the final competition: USD 200
Best scooter detection award – for the highest AP for scooter recognition in the final competition: USD 200 (a sketch of how AP and mAP are typically computed follows this list)
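For reference, here is a hedged sketch of how AP is commonly computed from ranked detections, in the all-point-interpolation style used by PASCAL VOC. The competition announcement does not specify the IoU matching rule or interpolation variant, so the function below and its toy inputs are illustrative assumptions only.

```python
import numpy as np

def average_precision(tp_flags, num_gt):
    """AP from detections sorted by descending confidence.

    tp_flags: 1 if the detection matched an unclaimed ground-truth box
    (by some IoU rule, assumed decided elsewhere), else 0.
    num_gt: number of ground-truth boxes for this class.
    """
    tp_flags = np.asarray(tp_flags, dtype=float)
    tp = np.cumsum(tp_flags)
    fp = np.cumsum(1.0 - tp_flags)
    recall = tp / num_gt
    precision = tp / (tp + fp)
    # Precision envelope: best precision achievable at recall >= r.
    mrec = np.concatenate(([0.0], recall, [1.0]))
    mpre = np.concatenate(([0.0], precision, [0.0]))
    for i in range(len(mpre) - 2, -1, -1):
        mpre[i] = max(mpre[i], mpre[i + 1])
    # Integrate precision over the recall steps.
    idx = np.where(mrec[1:] != mrec[:-1])[0]
    return float(np.sum((mrec[idx + 1] - mrec[idx]) * mpre[idx + 1]))

# Toy example: 5 ranked detections, 4 ground-truth boxes for one class.
print(average_precision([1, 1, 0, 1, 0], num_gt=4))  # 0.6875
# mAP is then the mean of per-class APs (e.g. pedestrian, vehicle, scooter, bicycle).
```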
All award winners must agree to submit a contest paper and to attend the ACM ICMR2021 Grand Challenge PAIR Competition Special Session to present their work.