====== PARSEME 2.0 Multilingual Shared Task on Identification and Paraphrasing of Multiword Expressions ======
  * **Location**:
  * **Dates**: TBA
  * **Data**: to be provided by the ongoing [[:
  * **Shared task organizers**:
    * Manon Scholivet, Université Paris-Saclay,
|[[https://
===== Subtask 1: MWE identification =====
This subtask is an extension of the PARSEME shared tasks on automatic identification of verbal MWEs.
  * Task: Given a raw text, automatically underline the MWEs occurring in it (a toy scoring sketch follows this list)
  * Data: PARSEME 2.0 annotated corpora (not necessarily all the texts from release 1.3)
  * Minimum annotation effort: 2000 annotated MWEs
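
The snippet below is a minimal, unofficial sketch of how predictions for this subtask could be scored: it computes MWE-based precision, recall and F1 by exact matching of token-index sets. The function name, the toy sentence and the scoring details are illustrative assumptions; the official PARSEME evaluation tools are more elaborate (e.g. token-based and per-category scores).

<code python>
# Unofficial toy sketch of MWE-based scoring (not the official PARSEME
# evaluation script). Each MWE is represented as the frozenset of the
# 1-based indices of its tokens within a sentence.

def mwe_fscore(gold: set, pred: set):
    """Exact-match MWE-level precision, recall and F1."""
    tp = len(gold & pred)  # MWEs predicted with exactly the gold token set
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy sentence: "He let the cat out of the bag yesterday ."
gold = {frozenset({2, 3, 4, 5, 6, 7, 8})}                   # "let the cat out of the bag"
pred = {frozenset({2, 3, 4, 5, 6, 7, 8}), frozenset({9})}   # one correct, one spurious
print(mwe_fscore(gold, pred))                               # (0.5, 1.0, 0.666...)
</code>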
===== Subtask 2: MWE paraphrasing =====
  * Task: Given a sentence containing an MWE, rephrase the sentence so that it contains no MWEs but preserves the same meaning (an illustrative sketch follows this list)
  * Examples:
  * Ukrainian
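
As a purely illustrative sketch (the official data and submission formats have not been announced), one item for this subtask could pair a sentence, the MWE it contains and an MWE-free paraphrase. The field names and the example idiom below are assumptions, not taken from the shared task data.

<code python>
# Hypothetical item structure for the paraphrasing subtask; the actual
# data format will be defined by the organizers. The idiom is a generic
# English example, not taken from the shared task corpora.
item = {
    "sentence": "The deal fell through at the last minute.",
    "mwe": "fell through",
    "paraphrase": "The deal failed at the last minute.",
}

# A trivial sanity check a participant might run on system output:
# the paraphrase should no longer contain the annotated MWE, while
# (ideally) preserving the meaning of the original sentence.
assert item["mwe"].lower() not in item["paraphrase"].lower()
print(item["sentence"], "->", item["paraphrase"])
</code>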
===== Timeline =====
  * <
  * 19 May 2025: SemEval notification
  * 15 July 2025: Sample data ready
  * 1 September 2025: Training data ready
  * 1 December 2025: Evaluation data ready (internal deadline; not for public release)
  * 10 January 2026: Evaluation start
  * 31 January 2026: Evaluation end (latest date; task organizers may choose an earlier date)
  * February 2026: Paper submission
  * March 2026: Notification to authors
  * April 2026: Camera ready
  * Summer 2026: SemEval workshop (co-located with a major NLP conference)