ACADSTAFF UGM

CREATION
Title : Q-Learning for Shift-Reduce Parsing in Indonesian Tree-LSTM-Based Text Generation
Author :

Rochana Prih Hastuti (1), Dr. Yohanes Suyanto, M.I.Kom. (2), Anny Kartika Sari, S.Si., M.Sc., Ph.D. (3)

Date : 2022
Keyword : Natural language generation, Reinforcement learning, Neural networks, Shift-reduce parsing, Tree-structured neural network
Abstract : The Tree-LSTM algorithm accommodates tree-structured processing to extract information beyond linear sequence patterns. Using Tree-LSTM for text generation requires an external parser at each generation iteration. Developing a good parser demands complex feature representations and relies heavily on the grammar of the corpus. A limited corpus yields too small a vocabulary for a grammar-based parser, making it less natural to link to the text generation process. This research addresses the problem of a limited corpus by proposing a Reinforcement Learning algorithm for forming the constituency trees that link the sentence generation process, given a seed phrase as input to the Tree-LSTM model. The tree production process is modeled as a Markov decision process, where the set of states consists of word embedding vectors and the set of actions is {Shift, Reduce}. A Deep Q-Network, serving as the function approximator for the Q-Learning algorithm, is trained to obtain optimal weights representing the Q-value function. Perplexity-based evaluation shows that the proposed Tree-LSTM and Q-Learning combination model achieves values of 9.60 and 4.60 on two corpora of 205 and 1,000 sentences respectively, better than the Shift-All model. Human evaluation using the Friedman test and post-hoc analysis showed that all five respondents tended to give the same assessment to the Tree-LSTM and Q-Learning combination model, which on average outperforms the two other non-grammar models, i.e., Shift-All and Reduce-All.
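The decision process sketched in the abstract (states built from word embedding vectors, actions {Shift, Reduce}, a Q-network scoring each action) can be illustrated roughly as follows. This is a minimal sketch, not the thesis's implementation: the `LinearQ` class stands in for the Deep Q-Network, the embeddings are toy vectors, and the reward signal (which in the proposed model would come from the downstream Tree-LSTM) is left out; all names and shapes here are hypothetical.

```python
import numpy as np

SHIFT, REDUCE = 0, 1

class LinearQ:
    """Linear Q-value approximator: Q(s, a) = w[a] . s (a stand-in for a DQN)."""
    def __init__(self, dim, lr=0.01, gamma=0.9):
        self.w = np.zeros((2, dim))
        self.lr, self.gamma = lr, gamma

    def q(self, s):
        # One Q-value per action for state vector s.
        return self.w @ s

    def update(self, s, a, r, s_next, done):
        # Standard one-step Q-learning TD update on the linear weights.
        target = r if done else r + self.gamma * self.q(s_next).max()
        td = target - self.q(s)[a]
        self.w[a] += self.lr * td * s

def state_vec(stack, buffer, dim):
    """State = concatenation of top-of-stack and front-of-buffer embeddings."""
    top = stack[-1][0] if stack else np.zeros(dim)
    nxt = buffer[0][0] if buffer else np.zeros(dim)
    return np.concatenate([top, nxt])

def legal_actions(stack, buffer):
    # Shift needs a non-empty buffer; Reduce needs at least two stack items.
    acts = []
    if buffer:
        acts.append(SHIFT)
    if len(stack) >= 2:
        acts.append(REDUCE)
    return acts

def build_tree(tokens, emb, agent, eps=0.1, rng=None):
    """Run one shift-reduce episode; returns the binary tree and (state, action) trace."""
    rng = rng or np.random.default_rng()
    dim = emb[tokens[0]].shape[0]
    buffer = [(emb[t], t) for t in tokens]
    stack, trace = [], []
    while buffer or len(stack) > 1:
        s = state_vec(stack, buffer, dim)
        acts = legal_actions(stack, buffer)
        if rng.random() < eps:                      # epsilon-greedy exploration
            a = int(rng.choice(acts))
        else:
            q = agent.q(s)
            a = max(acts, key=lambda x: q[x])       # greedy among legal actions
        if a == SHIFT:
            stack.append(buffer.pop(0))
        else:
            r_vec, r_node = stack.pop()             # Reduce: merge top two items
            l_vec, l_node = stack.pop()
            stack.append(((l_vec + r_vec) / 2, (l_node, r_node)))
        trace.append((s, a))
    return stack[0][1], trace

# Toy usage: with zero weights and eps=0, ties resolve to Shift, so the episode
# shifts all tokens and then reduces, i.e. a right-branching constituency tree.
emb = {t: np.ones(4) * i for i, t in enumerate(["saya", "makan", "nasi"], 1)}
agent = LinearQ(dim=8)  # state is two concatenated 4-d embeddings
tree, trace = build_tree(["saya", "makan", "nasi"], emb, agent, eps=0.0)
print(tree)  # → ('saya', ('makan', 'nasi'))
```

In training, each `(s, a)` pair in the trace would be fed back through `agent.update` with a reward derived from how well the resulting tree serves the Tree-LSTM generator; the legality mask above is what keeps the episode from producing an ill-formed tree.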
Group of Knowledge : Computer Science
Original Language : English
Level : International
Status : Published
Document
1. TALLIP-20-0232_R1_Proof_fl.pdf (Document Type: [PAK] Full Document)
2. Q-Learning for Shift-Reduce Parsing in Indonesian%0ATree-LSTM-Based Text Generatio.pdf (Document Type: [PAK] Similarity Check)
3. jurnal_2102041_ac5544c37facbeb00c76c684d5b146bc.pdf (Document Type: [PAK] Author Correspondence Evidence)