Linguistic Resources for Natural Language Processing: On the Necessity of Using Linguistic Methods to Develop NLP Software

Editor: Max Silberztein
Publisher: Springer
Edition: 2024
Publication Date: 2024-03-14
Language: English
Print Length: 239 pages
ISBN-10: 3031438108
ISBN-13: 9783031438103


Book Description

Empirical methods (data-driven, neural network-based, probabilistic, and statistical) seem to be the modern trend. Recently, OpenAI's ChatGPT, Google's Bard, and Microsoft's Sydney chatbots have garnered a great deal of attention for their detailed answers across many knowledge domains. As a consequence, most AI researchers are no longer interested in trying to understand what common intelligence is or how intelligent agents construct scenarios to solve various problems. Instead, they now develop systems that extract solutions from massive databases used as cheat sheets. In the same manner, Natural Language Processing (NLP) software that uses training corpora in combination with empirical methods is trendy: most researchers in NLP today use large training corpora, always to the detriment of the development of formalized dictionaries and grammars.

Without questioning the intrinsic value of the many software applications based on empirical methods, this volume aims to rehabilitate the linguistic approach to NLP. In the introduction, the editor uncovers several limitations and flaws in using training corpora to develop NLP applications, even the simplest ones, such as automatic taggers.

The first part of the volume is dedicated to showing how carefully handcrafted linguistic resources can be used to enhance current NLP software applications. The second part presents two representative cases where data-driven approaches cannot be implemented, simply because not enough data is available for low-resource languages. The third part addresses the problem of how to treat multiword units in NLP software, arguably the weakest point of NLP applications today, which nevertheless has a simple and elegant linguistic solution.

It is the editor's belief that readers interested in Natural Language Processing will appreciate the importance of this volume, both for its questioning of training-corpus-based approaches and for the intrinsic value of the linguistic formalization and the underlying methodology presented.




About the Author

Max Silberztein is a Professor of Linguistics, Computational Linguistics and Computer Science at the Université de Franche-Comté. He is the author of three NLP software platforms (INTEX, NooJ, and ATISHS), of two books (Dictionnaires électroniques et analyse automatique de textes: le système INTEX, Masson, 1993; Formalizing Natural Languages: The NooJ Approach, Wiley, 2016), and the editor of more than 15 volumes of selected proceedings in Springer's CCIS and LNCS series.


Amazon page
