<?xml version="1.0" encoding="utf-8" standalone="yes" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Projects on NTU-NLP</title>
    <link>https://ntunlpsg.github.io/project/</link>
    <description>Recent content in Projects on NTU-NLP</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en-us</language>
    <copyright>ntunlp &#169; 2020</copyright>
    <lastBuildDate>Sun, 15 May 2022 00:00:00 +0800</lastBuildDate>
    
	<atom:link href="https://ntunlpsg.github.io/project/index.xml" rel="self" type="application/rss+xml" />
    
    
    <item>
      <title>GlobalWoZ: Globalizing MultiWoZ to Develop Multilingual Task-Oriented Dialogue Systems</title>
      <link>https://ntunlpsg.github.io/project/globalwoz/</link>
      <pubDate>Sun, 15 May 2022 00:00:00 +0800</pubDate>
      
      <guid>https://ntunlpsg.github.io/project/globalwoz/</guid>
      <description>Multilingual ToD</description>
    </item>
    
    <item>
      <title>Straight to the Gradient: Learning to Use Novel Tokens for Neural Text Generation</title>
      <link>https://ntunlpsg.github.io/project/scalegrad/</link>
      <pubDate>Wed, 30 Mar 2022 00:00:00 +0000</pubDate>
      
      <guid>https://ntunlpsg.github.io/project/scalegrad/</guid>
      <description>A novel training objective for text generation</description>
    </item>
    
    <item>
      <title>LFPT5: A Unified Framework for Lifelong Few-shot Language Learning Based on Prompt Tuning of T5</title>
      <link>https://ntunlpsg.github.io/project/lfpt5/</link>
      <pubDate>Thu, 10 Mar 2022 00:00:00 +0000</pubDate>
      
      <guid>https://ntunlpsg.github.io/project/lfpt5/</guid>
      <description>A unified framework for lifelong few-shot language learning (LFLL) based on prompt tuning of T5.</description>
    </item>
    
    <item>
      <title>Rethinking Self-Supervision Objectives for Generalizable Coherence Modeling</title>
      <link>https://ntunlpsg.github.io/project/coherence-paradigm/</link>
      <pubDate>Thu, 10 Mar 2022 00:00:00 +0000</pubDate>
      
      <guid>https://ntunlpsg.github.io/project/coherence-paradigm/</guid>
      <description>We show empirically that increasing the density of negative samples improves the basic model, and using a global negative queue further improves and stabilizes the model while training with hard negative samples.</description>
    </item>
    
    <item>
      <title>A Unified Speaker Adaptation Approach for ASR</title>
      <link>https://ntunlpsg.github.io/project/asr/</link>
      <pubDate>Thu, 09 Sep 2021 00:00:00 +0000</pubDate>
      
      <guid>https://ntunlpsg.github.io/project/asr/</guid>
      <description>A unified speaker adaptation approach consisting of feature adaptation and model adaptation for ASR.</description>
    </item>
    
    <item>
      <title>MulDA: A Multilingual Data Augmentation Framework for Low-Resource Cross-Lingual NER</title>
      <link>https://ntunlpsg.github.io/project/mulda/</link>
      <pubDate>Wed, 04 Aug 2021 00:00:00 +0000</pubDate>
      
      <guid>https://ntunlpsg.github.io/project/mulda/</guid>
      <description>A data augmentation method for NER.</description>
    </item>
    
    <item>
      <title>RST Parsing from Scratch</title>
      <link>https://ntunlpsg.github.io/project/naacl21-rst-parsing-resource/naacl21-rst-parsing-resource/</link>
      <pubDate>Sat, 01 May 2021 00:00:00 +0000</pubDate>
      
      <guid>https://ntunlpsg.github.io/project/naacl21-rst-parsing-resource/naacl21-rst-parsing-resource/</guid>
      <description>A novel top-down end-to-end formulation of document level discourse parsing in the Rhetorical Structure Theory (RST) framework.</description>
    </item>
    
    <item>
      <title>UXLA: A Robust Unsupervised Data Augmentation Framework for Zero-Resource Cross-Lingual NLP</title>
      <link>https://ntunlpsg.github.io/project/uxla/</link>
      <pubDate>Wed, 28 Apr 2021 00:00:00 +0000</pubDate>
      
      <guid>https://ntunlpsg.github.io/project/uxla/</guid>
      <description>We propose UXLA, a novel data augmentation framework for self-supervised learning in zero-resource transfer learning scenarios.</description>
    </item>
    
    <item>
      <title>Rethinking Coherence Modeling: Synthetic vs. Downstream Tasks</title>
      <link>https://ntunlpsg.github.io/project/coherence/coh-eval/</link>
      <pubDate>Fri, 16 Apr 2021 00:00:00 +0000</pubDate>
      
      <guid>https://ntunlpsg.github.io/project/coherence/coh-eval/</guid>
      <description>Coherence models are typically evaluated only on synthetic tasks, which may not be representative of their performance in downstream applications. To investigate how representative the synthetic tasks are of downstream use cases, we conduct experiments on benchmarking well-known traditional and neural coherence models on synthetic sentence ordering tasks, and contrast this with their performance on three downstream applications: coherence evaluation for MT and summarization, and next utterance prediction in retrieval-based dialog.</description>
    </item>
    
    <item>
      <title>DAGA: Data Augmentation with a Generation Approach for Low-resource Tagging Tasks</title>
      <link>https://ntunlpsg.github.io/project/daga/</link>
      <pubDate>Wed, 24 Feb 2021 00:00:00 +0800</pubDate>
      
      <guid>https://ntunlpsg.github.io/project/daga/</guid>
      <description>Data augmentation for low resource tagging.</description>
    </item>
    
    <item>
      <title>Evaluating Pronominal Anaphora in Machine Translation: An Evaluation Measure and a Test-suite</title>
      <link>https://ntunlpsg.github.io/project/discomt/eval-anaphora/</link>
      <pubDate>Wed, 30 Oct 2019 00:00:00 +0000</pubDate>
      
      <guid>https://ntunlpsg.github.io/project/discomt/eval-anaphora/</guid>
      <description>An extensive, targeted dataset that can be used as a test suite for pronoun translation, covering multiple source languages and different pronoun errors drawn from real system translations, for English.</description>
    </item>
    
    <item>
      <title>A Unified Neural Coherence Model</title>
      <link>https://ntunlpsg.github.io/project/coherence/n-coh-emnlp19/</link>
      <pubDate>Mon, 28 Oct 2019 00:00:00 +0000</pubDate>
      
      <guid>https://ntunlpsg.github.io/project/coherence/n-coh-emnlp19/</guid>
      <description>A unified coherence model that incorporates sentence grammar, inter-sentence coherence relations, and global coherence patterns into a common neural framework.</description>
    </item>
    
    <item>
      <title>Hierarchical Pointer Net Parsing</title>
      <link>https://ntunlpsg.github.io/project/parser/ptrnet-depparser/</link>
      <pubDate>Tue, 24 Sep 2019 00:00:00 +0000</pubDate>
      
      <guid>https://ntunlpsg.github.io/project/parser/ptrnet-depparser/</guid>
      <description>A hierarchical pointer network parser applied to dependency and sentence-level discourse parsing tasks.</description>
    </item>
    
    <item>
      <title>Discourse Processing and Its Applications --- Tutorial at ACL-2019</title>
      <link>https://ntunlpsg.github.io/project/acl19tutorial/</link>
      <pubDate>Mon, 15 Jul 2019 00:00:00 +0800</pubDate>
      
      <guid>https://ntunlpsg.github.io/project/acl19tutorial/</guid>
      <description>Discourse processing is a suite of Natural Language Processing (NLP) tasks to uncover linguistic structures from texts at several levels, which can support many NLP applications.</description>
    </item>
    
    <item>
      <title>Unsupervised Word Translation</title>
      <link>https://ntunlpsg.github.io/project/unsup-word-translation/</link>
      <pubDate>Thu, 04 Apr 2019 00:00:00 +0000</pubDate>
      
      <guid>https://ntunlpsg.github.io/project/unsup-word-translation/</guid>
      <description>Adversarial Autoencoder with Cycle Consistency and Improved Training</description>
    </item>
    
    <item>
      <title>Malay-English Neural Machine Translation System</title>
      <link>https://ntunlpsg.github.io/project/malay-english-neural-machine-translator/</link>
      <pubDate>Fri, 07 Sep 2018 00:00:00 +0000</pubDate>
      
      <guid>https://ntunlpsg.github.io/project/malay-english-neural-machine-translator/</guid>
      <description>A demo of a Malay-English machine translation system.</description>
    </item>
    
    <item>
      <title>Discourse Processing and Its Applications in Text Mining --- Tutorial at ICDM-2018</title>
      <link>https://ntunlpsg.github.io/project/icdmtutorial/</link>
      <pubDate>Fri, 07 Sep 2018 00:00:00 +0800</pubDate>
      
      <guid>https://ntunlpsg.github.io/project/icdmtutorial/</guid>
      <description>Discourse processing is a suite of Natural Language Processing (NLP) tasks to uncover linguistic structures from texts at several levels, which can support many text mining applications.</description>
    </item>
    
    <item>
      <title>Coherence Modeling of Asynchronous Conversations</title>
      <link>https://ntunlpsg.github.io/project/coherence/n-coh-acl18/</link>
      <pubDate>Sat, 28 Apr 2018 00:00:00 +0000</pubDate>
      
      <guid>https://ntunlpsg.github.io/project/coherence/n-coh-acl18/</guid>
      <description>A neural approach for modeling coherence of asynchronous conversations.</description>
    </item>
    
    <item>
      <title>A Unified Linear-Time Framework for Sentence-Level Discourse Parsing</title>
      <link>https://ntunlpsg.github.io/project/parser/pointer-net-parser/</link>
      <pubDate>Thu, 28 Apr 2016 00:00:00 +0000</pubDate>
      
      <guid>https://ntunlpsg.github.io/project/parser/pointer-net-parser/</guid>
      <description>This repository contains the source code of our paper &amp;ldquo;A Unified Linear-Time Framework for Sentence-Level Discourse Parsing&amp;rdquo; in ACL 2019.
Getting Started: These instructions will help you run our unified discourse parser on the RST dataset.
Prerequisites: PyTorch 0.4 or higher, Python 3, AllenNLP. Dataset: We train and evaluate the model with the standard RST Discourse Treebank (RST-DT) corpus. Segmenter: we use all 7673 sentences for training and 991 sentences for testing.</description>
    </item>
    
    <item>
      <title>Community Question Answering System</title>
      <link>https://ntunlpsg.github.io/project/community-qa/</link>
      <pubDate>Thu, 28 Apr 2016 00:00:00 +0000</pubDate>
      
      <guid>https://ntunlpsg.github.io/project/community-qa/</guid>
      <description>This search tool helps you to find good answers to your question by searching through previously asked questions in the Qatarliving forum.</description>
    </item>
    
    <item>
      <title>Deep Learning for Crisis Computing</title>
      <link>https://ntunlpsg.github.io/project/crisis-computing/</link>
      <pubDate>Thu, 28 Apr 2016 00:00:00 +0000</pubDate>
      
      <guid>https://ntunlpsg.github.io/project/crisis-computing/</guid>
      <description>Python implementation of a number of deep neural networks classifiers for the classification of crisis-related data on Twitter.</description>
    </item>
    
    <item>
      <title>Discourse Parser for English</title>
      <link>https://ntunlpsg.github.io/project/parser/parser/</link>
      <pubDate>Thu, 28 Apr 2016 00:00:00 +0000</pubDate>
      
      <guid>https://ntunlpsg.github.io/project/parser/parser/</guid>
      <description>About: This package includes a discourse segmenter, a discourse parser, and evaluation metrics for discourse parsing.
Download: Document-level Discourse Parser for English
Demo Link
Installation: Required for the discourse segmenter: Charniak&amp;rsquo;s reranking parser (put it in Tools/CharniakParserRerank and install it); taggers from UIUC (download the POS tagger and shallow chunker [LBJPOS.jar, LBJChunk.jar, LBJ2.jar, LBJ2Library.jar] and put them in Tools/UIUC_TOOLs/); install scikit-learn and scipy (instructions); install Java if not installed (instructions for Ubuntu); make sure Tools/SPADE_UTILS/bin/edubreak is set to executable.</description>
    </item>
    
    <item>
      <title>Discourse-informed Sen2Vec</title>
      <link>https://ntunlpsg.github.io/project/discourse-info-sen2vec/</link>
      <pubDate>Thu, 28 Apr 2016 00:00:00 +0000</pubDate>
      
      <guid>https://ntunlpsg.github.io/project/discourse-info-sen2vec/</guid>
      <description>CON-S2V: A Generic Framework for Incorporating</description>
    </item>
    
    <item>
      <title>LNMap: Departures from Isomorphic Assumption in Bilingual Lexicon Induction Through Non-Linear Mapping in Latent Space</title>
      <link>https://ntunlpsg.github.io/project/lnmap/</link>
      <pubDate>Thu, 28 Apr 2016 00:00:00 +0000</pubDate>
      
      <guid>https://ntunlpsg.github.io/project/lnmap/</guid>
      <description>This paper shows, with a semi-supervised algorithm, that BLI is better suited to non-linear mapping, especially for low-resource languages.</description>
    </item>
    
    <item>
      <title>Neural Domain Adaptation Model for Machine Translation</title>
      <link>https://ntunlpsg.github.io/project/neural-domain-adaptation-model-for-machine-translation/</link>
      <pubDate>Thu, 28 Apr 2016 00:00:00 +0000</pubDate>
      
      <guid>https://ntunlpsg.github.io/project/neural-domain-adaptation-model-for-machine-translation/</guid>
      <description>About: This resource contains the source code of our paper on domain adaptation using a neural network joint model.
Source code: Link to GitHub
Publications: Shafiq Joty, Nadir Durrani, Hassan Sajjad, and Ahmed Abdelali. Domain Adaptation Using Neural Network Joint Model. In Computer Speech &amp;amp; Language (Special Issue on Deep Learning for Machine Translation), pages 161-179, 2017.
@article{joty-durrani-sajjad-abdelali-csl-17, title=&amp;quot;{Domain Adaptation Using Neural Network Joint Model}&amp;quot;, author={Shafiq Joty and Nadir Durrani and Hassan Sajjad and Ahmed Abdelali}, journal = {Computer Speech &amp;amp; Language}, volume={45}, publisher={Elsevier}, pages={161-179}, year={2017}, doi = {https://doi.</description>
    </item>
    
    <item>
      <title>Neural Local Coherence Model</title>
      <link>https://ntunlpsg.github.io/project/coherence/n-coh-acl17/</link>
      <pubDate>Thu, 28 Apr 2016 00:00:00 +0000</pubDate>
      
      <guid>https://ntunlpsg.github.io/project/coherence/n-coh-acl17/</guid>
      <description>Neural coherence for monologue</description>
    </item>
    
    <item>
      <title>Recurrent Neural Models for Fine-grained Opinion Analysis</title>
      <link>https://ntunlpsg.github.io/project/opinion-analysis/</link>
      <pubDate>Thu, 28 Apr 2016 00:00:00 +0000</pubDate>
      
      <guid>https://ntunlpsg.github.io/project/opinion-analysis/</guid>
      <description>Publications: Pengfei Liu, Shafiq Joty, Helen Meng. Fine-grained Opinion Mining with Recurrent Neural Networks and Word Embeddings. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP-2015), Lisbon, Portugal, 2015.
@InProceedings{liu-joty-meng-emnlp-15, author = {Liu, Pengfei and Joty, Shafiq and Meng, Helen}, title = {Fine-grained Opinion Mining with Recurrent Neural Networks and Word Embeddings}, booktitle = {Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing}, year = {2015}, address = {Lisbon, Portugal}, series = {EMNLP&#39;15}, pages = {1433--1443}, url = {http://aclweb.</description>
    </item>
    
    <item>
      <title>Speech act recognizer for synchronous and asynchronous conversations</title>
      <link>https://ntunlpsg.github.io/project/speech-act/</link>
      <pubDate>Thu, 28 Apr 2016 00:00:00 +0000</pubDate>
      
      <guid>https://ntunlpsg.github.io/project/speech-act/</guid>
      <description>This resource addresses the problem of speech act recognition in written asynchronous conversations</description>
    </item>
    
    <item>
      <title>Topic Segmenter &amp; Labeler for Asynchronous Conversations</title>
      <link>https://ntunlpsg.github.io/project/topic-segmenter/</link>
      <pubDate>Thu, 28 Apr 2016 00:00:00 +0000</pubDate>
      
      <guid>https://ntunlpsg.github.io/project/topic-segmenter/</guid>
      <description>This parser builds a discourse tree by applying an optimal parsing algorithm to probabilities inferred from two Conditional Random Fields: one for intra-sentential parsing and the other for multi-sentential parsing.</description>
    </item>
    
    <item>
      <title>SegBot: A Generic Neural Text Segmentation Model with Pointer Network</title>
      <link>https://ntunlpsg.github.io/project/segbot/</link>
      <pubDate>Mon, 18 Apr 2016 00:00:00 +0000</pubDate>
      
      <guid>https://ntunlpsg.github.io/project/segbot/</guid>
      <description>Online Demo. Figure 1 shows the model architecture of SegBot. For EDU segmentation, the units in the input $U_0$ to $U_8$ are words in a sentence. Formally, given an input sequence $U = (U_1, U_2, \ldots, U_N)$ of length $N$, we obtain its distributed representations $X = (x_1, x_2, \ldots, x_N)$ by looking up the corresponding embedding matrix, where $x_n \in \mathbb{R}^K$ is the representation for the unit $U_n$, with $K$ being the embedding dimension.</description>
    </item>
    
    <item>
      <title>Contrastive Clustering to Mine Pseudo Parallel Data for Unsupervised Translation</title>
      <link>https://ntunlpsg.github.io/project/swav/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>https://ntunlpsg.github.io/project/swav/</guid>
      <description>A fully unsupervised mining method that can build synthetic parallel data for unsupervised machine translation.</description>
    </item>
    
    <item>
      <title>Cross-model Back-translated Distillation for Unsupervised Machine Translation</title>
      <link>https://ntunlpsg.github.io/project/cross_model_back_translated/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>https://ntunlpsg.github.io/project/cross_model_back_translated/</guid>
      <description>A novel strategy to improve unsupervised MT by using back-translation with multiple models.</description>
    </item>
    
    <item>
      <title>Data Diversification: A Simple Strategy For Neural Machine Translation</title>
      <link>https://ntunlpsg.github.io/project/data_diverse/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>https://ntunlpsg.github.io/project/data_diverse/</guid>
      <description>A simple way to boost many NMT tasks by using multiple backward and forward models.</description>
    </item>
    
    <item>
      <title>Differentiable Window for Dynamic Local Attention</title>
      <link>https://ntunlpsg.github.io/project/dynamic-attention/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>https://ntunlpsg.github.io/project/dynamic-attention/</guid>
      <description>This resource contains the source code of our ACL-2020 paper entitled &lt;a href=&#34;https://arxiv.org/abs/2006.13561&#34; target=&#34;_blank&#34;&gt;Differentiable Window for Dynamic Local Attention&lt;/a&gt;</description>
    </item>
    
    <item>
      <title>Efficient Constituency Parsing by Pointing</title>
      <link>https://ntunlpsg.github.io/project/ptr-constituency-parser/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>https://ntunlpsg.github.io/project/ptr-constituency-parser/</guid>
      <description>This resource contains the source code of our ACL-2020 paper entitled &lt;a href=&#34;https://arxiv.org/abs/2006.13557&#34; target=&#34;_blank&#34;&gt;Efficient Constituency Parsing by Pointing&lt;/a&gt;</description>
    </item>
    
    <item>
      <title>Tree-Structured Attention with Hierarchical Accumulation</title>
      <link>https://ntunlpsg.github.io/project/tree_transformer/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>https://ntunlpsg.github.io/project/tree_transformer/</guid>
      <description>A novel attention mechanism that aggregates hierarchical structures to encode constituency trees for downstream tasks.</description>
    </item>
    
  </channel>
</rss>