Constructing large proposition databases

Author

Peter Exner, Pierre Nugues

Summary, in English

With the advent of massive online encyclopedic corpora such as Wikipedia, it has become possible to apply a systematic analysis to a wide range of documents covering a significant part of human knowledge. Using semantic parsers, this knowledge can be extracted in the form of propositions (predicate-argument structures) to build large proposition databases from these documents. This paper describes the creation of multilingual proposition databases using generic semantic dependency parsing. Using Wikipedia, we extracted, processed, clustered, and evaluated a large number of propositions. We built an architecture providing a complete pipeline that deals with the input of text, extraction of knowledge, storage, and presentation of the resulting propositions.
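As a rough illustration of what a stored proposition could look like, the sketch below models a predicate-argument structure and writes it to a relational table. The PropBank-style role labels (A0, A1, AM-TMP), the example sentence, and the table schema are illustrative assumptions, not the schema or parser output described in the paper.

```python
# Minimal sketch: modeling a proposition (predicate-argument structure)
# and storing it in a relational table. Names and schema are assumptions
# for illustration, not the paper's actual database design.
from dataclasses import dataclass, field
import sqlite3


@dataclass
class Proposition:
    predicate: str                                  # e.g. a verb sense label
    arguments: dict[str, str] = field(default_factory=dict)  # role -> filler


# For the sentence "Marie Curie discovered polonium in 1898", a semantic
# dependency parser could yield PropBank-style roles such as A0 (agent),
# A1 (patient), and AM-TMP (temporal adjunct):
prop = Proposition(
    predicate="discover.01",
    arguments={"A0": "Marie Curie", "A1": "polonium", "AM-TMP": "in 1898"},
)

# Store each (predicate, role, filler) triple so propositions can later
# be queried, clustered, and presented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE propositions (predicate TEXT, role TEXT, filler TEXT)")
conn.executemany(
    "INSERT INTO propositions VALUES (?, ?, ?)",
    [(prop.predicate, role, filler) for role, filler in prop.arguments.items()],
)
for row in conn.execute("SELECT * FROM propositions WHERE predicate = 'discover.01'"):
    print(row)
```

In the pipeline the paper describes, extraction runs over Wikipedia at scale; the sketch above only shows the shape one proposition record might take.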

Publishing year

2012

Language

English

Pages

3836-3839

Publication/Series

Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)

Document type

Conference paper

Publisher

European Language Resources Association

Topic

  • Computer Science

Keywords

  • Knowledge Discovery/Representation
  • Information Extraction
  • Information Retrieval
  • Semantics

Conference name

The Eighth International Conference on Language Resources and Evaluation (LREC 2012)

Conference date

2012-05-21 - 2012-05-27

Conference place

Istanbul, Turkey

Status

Published

ISBN/ISSN/Other

  • ISBN: 978-2-9517408-7-7