Deep Learning for Video Understanding (Wireless Networks)


by: Zuxuan Wu (Author), Yu-Gang Jiang (Author)

Publisher: Springer

Edition: 2024

Publication Date: August 2, 2024

Language: English

Print Length: 197 pages

ISBN-10: 3031576780

ISBN-13: 9783031576782

Book Description

This book presents deep learning techniques for video understanding. For deep learning basics, the authors cover machine learning pipelines and notation, and 2D and 3D Convolutional Neural Networks for spatial and temporal feature learning. For action recognition, the authors introduce classical frameworks for image classification, then elaborate both image-based and clip-based 2D/3D CNN networks for action recognition. For action detection, the authors cover sliding windows, proposal-based detection methods, single-stage and two-stage approaches, and spatial and temporal action localization, followed by an introduction to datasets. For video captioning, the authors present language-based models and how to perform sequence-to-sequence learning for video captioning. For unsupervised feature learning, the authors discuss the necessity of shifting from supervised to unsupervised learning, then introduce how to design better surrogate training tasks to learn video representations. Finally, the book introduces recent self-supervised training pipelines such as contrastive learning and masked image/video modeling with transformers. The book provides promising directions, with an aim to promote future research outcomes in the field of video understanding with deep learning.
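The 3D convolutions the book covers extend ordinary 2D image filters along the time axis, so a single kernel responds to motion across frames as well as spatial structure. As an illustrative sketch only (not code from the book), here is a naive single-channel 3D convolution in NumPy with a hypothetical temporal-difference kernel that fires where brightness changes between consecutive frames:

```python
import numpy as np

def conv3d(clip, kernel):
    """Valid-mode 3D convolution of a single-channel video clip.

    clip:   (T, H, W) stack of grayscale frames.
    kernel: (t, h, w) filter spanning time as well as space, so its
            response depends on change across frames, i.e. motion.
    """
    T, H, W = clip.shape
    t, h, w = kernel.shape
    out = np.empty((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(clip[i:i + t, j:j + h, k:k + w] * kernel)
    return out

# Hypothetical temporal-difference kernel: subtracts the mean of a 3x3
# patch in one frame from the same patch in the next frame.
kernel = np.zeros((2, 3, 3))
kernel[0] = -1.0 / 9
kernel[1] = 1.0 / 9

# Toy clip: the scene brightens at frame 2.
clip = np.zeros((4, 8, 8))
clip[2:] = 1.0

response = conv3d(clip, kernel)
print(response.shape)     # (3, 6, 6)
print(response[1].max())  # 1.0 -- peak response at the frame-1 -> frame-2 change
```

Real 2D/3D CNNs for action recognition stack many such learned kernels with nonlinearities and pooling; this sketch only shows why adding the temporal dimension lets a filter detect change over time rather than appearance alone.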

